
Running Head: VALIDITY OF TEACHERS' BEHAVIOR INVENTORY

Abstract

This paper explored the validity of the Student Evaluation of Teachers (SET) used
by the Colegio de San Lorenzo, the Teachers' Behavior Inventory (TBI). The study
examined the TBI according to the aspects of validity proposed by Samuel Messick's
Theory of Unified Validity. Professors were clustered into 28 groups (gender x course x
class size), and two professors were randomly sampled from each cluster. Two different
SETs were then administered to the students of these classes: first the TBI, and, after a
month, the Validated Cumulative SET. Moreover, this research extended the literature by
aligning the instrument with the constructivist approach to teaching through Jerome
Bruner's Theory of Instruction and Barak Rosenshine's Six Teaching Functions.
Exploratory Factor Analysis and Cronbach's Alpha were conducted in SPSS. Results
showed that the TBI lacks evidence in the structural and substantive aspects of validity.
Furthermore, it also failed to represent all aspects of Bruner's Theory of Instruction.

EXPLORING THE VALIDITY OF TEACHERS' BEHAVIOR INVENTORY


INTRODUCTION

Background of the Study


Education is a never-ending process and a quest for knowledge. Throughout this
journey, we meet and interact with a number of people from whom we can learn.
However, in order to survive this quest, a person must not rely solely on what other
people tell him or her; rather, he or she must also learn from what he or she already has.
An individual must know how to integrate these innate abilities with what he or she
acquires in order to develop new knowledge.
There are learning theories in education which address how people learn
(Learning-Theories, 2014). These learning theories fall into two main categories: Behaviorist
and Constructivist (Forrester & Jantzie, 2014; Weegar & Pacis, 2012). Behaviorist
theorists such as John B. Watson (1878-1958) and B. F. Skinner (1904-1990) posit that
learning is caused by external factors such as changes in the environment (Weegar & Pacis,
2012). When applied to teaching, the teacher becomes the provider of information; thus,
applied to education, it becomes teacher-centered learning. On the other hand,
constructivist theorists like Lev Vygotsky (1896-1934) and Jean Piaget (1896-1980) view
learning as an active construction of knowledge. Constructivism gives importance to the
involvement of the learner in making sense of his or her surroundings, or what most
people call "student-centered learning" (Woolfolk, 2013).
Jean Piaget (1964), one of the most prominent constructivists, remarked at a
conference at Cornell University that "[t]he principal goal of education is to
create men who are capable of doing new things, not simply of repeating what other
generations have done – men who are creative, inventive, and discoverers. The second
goal of education is to form minds which can be critical, can verify, and not accept
everything they are offered" (Duckworth, 1964).
In line with the second goal of education, which is to develop students into
critical thinkers, students first need to learn the concepts and principles that underlie
critical thinking and apply them before they can be skilled in it. One instrument
that generates evidence of students' critical thinking is the course evaluation form, for it
assesses whether teachers teach their students in a way that fosters critical thinking (Paul &
Elder, 2007).
Student Evaluation of Teachers (SET) is one way of involving students in
learning and allowing them to think critically. Student Evaluation of Teaching (Bonitz,
2011), Student Evaluation of Professors (Wilson, 1998), and Student Course Evaluation
(Gravestock & Gregor-Greenleaf, 2008) are some of the terminologies that have been used
to refer to it. Even though it is called by different names, its concept and use remain
the same. It is a widely accepted tool for obtaining students' assessments of their teachers.
Furthermore, it has both formative and summative functions: the former is used for
refining the quality of education, while the latter is for assessing the instructor's
performance for the whole semester (Johnson, 2002; Socha, 2009; Stark-Wroblewski,
Ahlering, & Brill, 2007). The latter serves as an indicator for personnel decisions such as
tenure, salary increase, and promotion (Murray, 2005).
The SET originated at the University of Washington in the 1920s, initiated by the
psychologist E. R. Guthrie. Since then, it has been adopted by most colleges and
universities worldwide (Murray, 2005). In the Philippines, it has been a practice of
most universities, including the University of the Philippines (University of the
Philippines, n.d.).
Academic excellence is one of the utmost priorities of every educational
institution. Aside from those mentioned above, other schools and colleges, including
Colegio de San Lorenzo (CDSL), also use SETs. CDSL is an institution dedicated to
providing quality education. As an illustration, the griffin on its seal stands for three
creatures that correspond to the following virtues: the lion for faith, the dragon for
service, and the eagle for academic excellence (Colegio De San Lorenzo, 2013).
In the CDSL handbook (under the Student Rights section), the first item states
that the students have the right "to receive, primarily through competent instruction,
relevant quality education in line with national goals which are conducive to their full
development as persons with human dignity." In line with this objective, the institution
ensures that the students are provided quality education by providing the best professors
(Colegio De San Lorenzo, 2013).
Like most educational institutions, CDSL has its own SET. While most
universities, like the University of the Philippines (University of the Philippines, n.d.),
call it the SET, CDSL's version is known as the Teachers' Behavior Inventory (TBI). Its
main purpose is for students to rate their teachers' performance and teaching skills
based on the questions listed. On par with most SETs worldwide (Reyes, 2015), the TBI
serves its summative purpose, since it is conducted near the end of the semester and
accounts for 15% of the teachers' overall evaluation. Its results serve three functions.
First, teachers can improve and adjust the way the course is managed and delivered to
lead to a more fruitful learning experience. Second, the teachers are ranked according to
the ratings they receive. Lastly, like the implied function of SETs in tenure decisions, the
result of the TBI is used in decision making as part of the whole evaluation process of the
teachers. Consequently, teachers who rank poorly in the TBI could be de-loaded, while
teachers who perform satisfactorily could be given more loads (Reyes, 2015).
In separate interviews conducted with Engr. Danilo Reyes and Ms. Jennette
Manlangit, it was learned that the TBI was proposed and created by Ms. Girly Duran
back in the year 2000. She is a psychology graduate who had then been working in
Human Resources. The Vice President for Academic Affairs during that time,
Maria Paz Manaligod, approved it to serve as the official instrument for
assessing faculty members' effectiveness. This version was used until 2011 (Manlangit,
2015; Reyes, 2015). The original TBI consisted of 58 items: 53 close-ended
questions and five open-ended questions. However, only the first 40 questions were
considered for interpretation, since not all students were taking it seriously (Manlangit,
2015). The close-ended questions were answerable on a Likert scale ranging from 1 to
5, corresponding to "Never or Very Rarely," "Sometimes," "Regularly," "Often," and
"Very Often or Always," respectively. These items correspond to the following areas:
Principles and Methods of Teaching, Non-Aversive Teacher Behavior,
Knowledge of the Subject Matter, Personal Teacher Characteristics, and Motivational
Teacher Behavior (Catubig, 2012; see Appendix A).
In 2011, the former Chairperson of the Department of Psychology and Head of the
Academics Committee, Mr. Glenn Catubig, released a newer version of the TBI. He,
together with two psychology students, Ms. Sherel Kate Simon and Mr. Rodrigo
Babiera, revised it (Manlangit, 2015). Using responses from a random sample of 180
college students, the items underwent Factor Analysis and Cronbach's Alpha to establish
construct validity and reliability. Based on the results, this version of the TBI, labeled
TBI-RV, measured three (3) factors: Course Preparation, Professional Teacher Behaviors,
and Teacher Motivational Behaviors. The revision was made because the questions were
"overwhelming" or too many for students. The instrument was reduced from 40 items to
19, composed of 17 close-ended and two (2) open-ended questions (Catubig,
2012). The TBI-RV was implemented upon its release on March 1, 2012 (Manlangit,
2015) (see Appendix B).
In 2013, a few modifications were made by the present Head of the Academic Affairs
Committee and College Registrar, Engr. Danilo Reyes. Questions 16 and 17 were
rephrased to make it easier for students to rate their professors. There were no major
changes made to any of the questions, aside from the replacement of the Professor's Code
with the Name of the Professor; hence the items remained the same as in Catubig's
version. Since then, this version (labeled version 1.1) has served as the current TBI for
the college (Reyes, 2015; Manlangit, 2015).
As a matter of policy, the TBI is administered to the college students of CDSL
two weeks before the final examination week of every semester. This is done after the
department heads and deans have observed the professors, to give the professors room for
improvement. The distribution of the TBI is handled by the Office of the Registrar and
the Human Resource Department. They use purposive sampling to ensure that professors
are rated on their subjects of mastery and not on subjects outside their specialization
(Manlangit, 2015).
Statement of the Problem
Since Colegio De San Lorenzo follows the constructivist approach towards
academic excellence (Fifteenth Congress of the Philippines, 2012), it uses the TBI as a
way of refining the quality of education. With this, the researcher was motivated to
conduct a study which aimed to explore the validity of CDSL's TBI in terms of measuring
effective teaching. In order to do this, the researcher examined its construct validity in
terms of measuring teacher effectiveness, as well as aligning it with a constructivist
approach to teaching.
This concept of validity was taken from Samuel Messick's Unified Theory of
Validity. For Messick, validity is "an integrated evaluative judgment of the degree to
which empirical evidence and theoretical rationales support the adequacy and
appropriateness of inferences and actions based on test scores or other modes of
assessment." According to him, the six aspects of validity are: content, substantive,
structural, generalizability, external, and consequential (Messick, 1994).
The present study focused on four of these aspects: substantive, which emphasized the
role of substantive theories or models used to assess the task; structural, which involved
the rational development of construct-based scoring criteria and rubrics; generalizability,
which examined the extent to which the score properties and interpretations were
consistent with the construct across different groups and settings; and external, which
examined how the items related to the construct. Since the researcher had no capability to
identify the consequences (positive or negative) of misinterpretation of the results, this
study did not focus on the consequential aspect, which involves the consequences of the
results of the test. Moreover, it did not discuss the content aspect, which pertains to
relevance to the construct, since the TBI itself was used as the instrument for this study.
These aspects were used to answer the following research questions:
1. How did the items of the TBI inter-correlate in measuring teachers'
effectiveness, based on the constructivist view of teaching taken from these
two theories: Rosenshine's Six Teaching Functions and Jerome Bruner's Theory of
Instruction? (structural aspect)
2. How did the items in the TBI converge with the construct of effective teaching?
(external aspect)
3. How did the gender of professors, the program/major of students, and class size
relate to the ratings of the professors in the TBI? (generalizability aspect)
4. How did the items in the TBI represent constructivist teaching, as described by
Rosenshine's Six Teaching Functions and Jerome Bruner's Theory of Instruction?
(substantive aspect)
Objectives
With the information at hand, the researcher wanted to explore whether the TBI
contained the aspects of Messick's (1994) Unified Validity: content, substantive,
structural, generalizability, and external. This led the researcher to the following four-fold
objectives:
First, the researcher aimed to find out whether the TBI of CDSL validly
measures teachers' effectiveness in terms of the structural aspect. This study defined
teachers' effectiveness based on the constructivist view of teaching, which was derived
from two theories: Rosenshine's Six Teaching Functions and Jerome Bruner's Theory of
Instruction. From here, the researcher analyzed whether the construct matched the
questions or content within the TBI. Using this information, the researcher examined
whether the TBI had internal consistency by scrutinizing the consistency of the questions.
All of these were examined through Cronbach's Alpha and Exploratory Factor Analysis.
Second, the researcher intended to discover whether the contents were relevant or
irrelevant. According to Messick, there are two major threats to construct validity:
construct underrepresentation and construct-irrelevant variance. The former refers to
how the assessment fails to include important dimensions or facets of the construct. The
latter pertains to assessments so broad that they contain excess reliable
variance associated with other, distinct constructs (Messick, 1995). With this, the
researcher conducted Exploratory Factor Analysis (EFA) to determine whether the TBI
contained these sources of invalidity.
Third, the researcher examined whether common factors such as the gender of
professors, the programs/majors of students, and class size affected the ratings the
professors received in the TBI. From here, the researcher examined whether the TBI had
validity across conditions of the said factors. Thus, it examined the generalizability aspect
of validity through Exploratory Factor Analysis.
Fourth, the researcher aligned the SET of CDSL with the role of teachers in
constructivism, which is that of a facilitator of knowledge. Constructivism is the
most widely used orientation in education, since the Commission on Higher Education
recommends that Higher Educational Institutions adopt a student-centered
approach (Fifteenth Congress of the Philippines, 2012; Commission on Higher
Education, 2012). Barak Rosenshine's (1982) Six Teaching Functions and Jerome
Bruner's (1966) Theory of Instruction were used to align the roles of teachers to this
approach.
Significance of the Study
With these objectives, the findings and results of this thesis would be beneficial to
the following communities in Colegio de San Lorenzo:
To the School Administration. This study can improve the evaluation process for
professors at Colegio de San Lorenzo. By presenting a validated instrument aligned with
the constructivist point of view, the institution can produce more effective teachers.
Moreover, this study would introduce an instrument that measures teachers' effectiveness
in line with the student-centered learning orientation in education. Furthermore, this
study would align the TBI with a constructivist's view of teacher effectiveness based on
Rosenshine's Six Teaching Functions and Bruner's Theory of Instruction.
To the Faculty. The use of a validated instrument whose contents are
relevant and in line with teaching effectiveness could increase professors' confidence in
the process of assessing their teaching effectiveness. With this, students
could have better access to academic excellence.
REVIEW OF RELATED LITERATURE
Validity relies on the interpretation of the test and not on the test itself (Cronbach
& Meehl, 1955) which makes the users examine not the test itself, but what inference
they make from it (Messick, 1994). According to Messick (1955), there are six aspects of
unified construct validity: content, substantive, structural, generalizability, external and
consequential. In his article Validation of Inferences From Persons Responses and
Performance as Scientific Inquiry Into Score Meaning (Messick, 1995), he discussed the
meaning of each aspect mentioned: content refers to whether the item is relevant or
representing of the assessed construct; substantive pertains to the theories or models the
test was derived from; structural involves the rational development of construct-based
scoring criteria and rubrics; generalizability examines the extent to which the score
properties and interpretations are consistent with the construct; external aspect refers to
both convergent (its correlation to the construct) and discriminant (its distinctiveness

EXPLORING THE VALIDITY OF TEACHERS BEHAVIOR INVENTORY


from other constructs) correlation, and consequential concerns the consequences of the
underrepresentation which can either be positive or negative it brings.

Validity of Student Evaluation of Teachers


Some of Messick's (1995) aspects of validity have been examined in several studies of
SETs. Marsh's (1992) longitudinal study extensively examined the SET's
validity against a variety of indicators of effective teaching. The analysis
considered: the generalizability of mean ratings of the same teachers over a
period of time; the generalizability of the structure across teaching at different levels and
in different disciplines; the factor structure; the covariance stability or change over time;
and the profile of the teachers. Even though the research does not explicitly use the
Unified Theory of Validity, the areas examined correspond to the following:
generalizability, content, structural, and external. The findings of this study showed that
SETs are valid when they are properly constructed (Marsh, 1992).
In a separate study, Cashin (1988) conducted a meta-analysis of SETs in
terms of their generalizability and validity. Generalizability of the SET refers to how the
data reflect teachers' effectiveness in general and not just in a particular area. The study
showed, by reviewing data from Marsh (1982), that it is not the course that determines
the student ratings; rather, it is the professor. Since teacher effectiveness has no single
criterion, the study examined data that either support or challenge student
ratings of effective teaching to establish SET validity in general. Thus, validity was
broken down into the following: validity in terms of student learning, validity in terms of
instructors' self-ratings, and validity in terms of the ratings of others. These aspects are
highly correlated with student ratings. Furthermore, the results of this study showed that
SETs are reliable and valid (Cashin, 1988).
In another study, by Kember and Leung (2008), SET validity was explored in
terms of content, criterion, and construct validity. The researchers built their own
construct of excellent teaching by interviewing eighteen teachers of the University of
Hong Kong who had been awarded the Vice Chancellor's Award for exemplary teaching.
Through this, the researchers were able to create their own criterion for excellent
teaching. They then conducted a validation study of the SET by running a confirmatory
factor analysis and Cronbach's Alpha on its items. The results showed that the SET has
consistency (convergent validity) with the construct.
Even though the SET has been shown to be valid by these studies, several studies have
also shown how other factors could introduce bias into its results, such as differing
profiles of teachers and students.
Programs of Students
In a study by Jenkins and Downs (2001), SET ratings were explored in
relation to the courses of the students. The participants in this study came from the
College of Education of South Eastern University; however, they were grouped according
to their level: (1) graduate courses and (2) undergraduate courses. The
researchers ran a Fisher's Zr transformation to examine the correlation between this
factor and the ratings the students gave their instructors. The results showed that students
in graduate courses gave higher ratings of their instructors' efficacy than students in
undergraduate courses.
In a separate study conducted by Feldman (1978), the ratings the students gave in
the SET were examined in relation to the following factors: class size,
course level, the electivity of the course, the particular subject matter of the course, and
the time of day at which the course is held. The researcher ran statistical analyses and
arrived at these results: (1) the ratings in the SET are inversely correlated with class
size; (2) ratings tend to be higher in upper-level and elective courses; (3) instructors
teaching arts, humanities, and languages have higher ratings than professors teaching
other subjects; and (4) the time of day at which the course is held does not correlate with
the ratings on the SET. However, these results were not statistically significant.
Gender of Teachers and Subject Being Taught
In a study by Bonitz (2011), possible biases that may affect the results of the
SET were explored: the gender of the teachers, the gender of the students,
and the course type. The researcher conducted an Analysis of Variance (ANOVA) in SPSS
to determine whether these factors have an effect on the ratings or results of the SET. The
findings showed that the gender of the student has an effect on the ratings teachers receive
on the SET, because female students tend to give higher ratings than male students.
However, the gender of the teachers as well as the type of course had no significant
relationship with the SET ratings. Likewise, the students did not perceive the
effectiveness of the teachers based on their teachers' gender.
Another study, by Cramer and Alexitch (2000), explored the relationship between
SET ratings and the following factors: student gender, discipline being taught, and
teacher gender. The researchers conducted an Analysis of Variance (ANOVA) in SPSS
on these factors and the teaching aspects in the SET, which include style,
sensitivity, and student needs. This was done to determine whether these factors correlate
with each other. The results showed that female students gave higher ratings to their
teachers on the SET and perceived their professors more favorably than male students
did. Moreover, the disciplines being taught were divided into Social Sciences, Sciences,
and Fine Arts/Humanities, and the study showed that professors in the Sciences tend to
have lower ratings than those in the Social Sciences and Fine Arts/Humanities. Lastly, the
results also showed that the ratings in the SET are sensitive to the gender of the
professors; that is, female professors tend to receive higher ratings than male professors.
Yet another study, conducted by Kohn and Hatfield (2006), examined the ratings
professors receive in the SET in relation to the following factors: students' expected
grades; the subject being taught, categorized as either (1) a minor/major/elective or (2) a
required/core course; the gender of the students; and the gender of the professors. The
researchers conducted different statistical analyses in SPSS, such as Coefficient Alpha
(Cronbach's Alpha), the Comparative Fit Index (CFI), Critical Ratios (CR), and
Chi-Square/df, to determine whether the said factors correlate with the ratings the teachers
receive in the SET. The results showed that students' expected grades are positively
correlated with the ratings they gave. In terms of the subject being taught, the interest of
students in the subject affects the ratings they give on the SET. Moreover, the results of
this study also showed that female students give higher ratings than male students.
Furthermore, in terms of the gender of instructors, male instructors receive higher ratings
from their female students than from their male students. However, for female instructors,
the ratings of effectiveness they receive from female and male students did not differ.
Class Size and Student Load of Professors
Bedard and Kuhn (2005) conducted a study of the economics classes at the
University of California, Santa Barbara from Fall 1997 to Spring 2004. Within this
period, 64 professors offered 655 economics courses, and these data were used in the
study. The students were categorized according to their levels: lower division, upper
division, Master's, or PhD. Class sizes were grouped as follows: 1-19,
20-39, 40-59, 60-79, 80-99, 100-149, 150-199, 200-299, and 300+. The researchers used
the question "Please rate the overall quality of the instructor's teaching," which was
answerable on a Likert scale with responses of (1) poor, (2) fair, (3) good, (4) very good,
and (5) excellent. The number of students evaluated differed from the number of students
enrolled for the following reasons: absenteeism on the day the instructors were evaluated,
late withdrawals from the course, students opting not to respond, and students auditing
courses. With these data, the researchers ran fixed-effects analyses and found that the
larger the class size, the lower the ratings received by the teacher.
Another study, by Monks and Schmidt (2010), examined the impact of class size
on students' assessments of courses and teachers. From a highly selective private
university on the East Coast of the United States, data were obtained from
administrative records and course evaluations over the period 1996-2008. The
sample was composed of 48 faculty members, 88 separate courses, and 1,928 course
sections. The student evaluation examined these questions: (1) overall instructor rating;
(2) amount learned; (3) overall course rating; and (4) the mean of the student's lowest
and highest expected grade in the course (calculated as expected course GPA). These
questions, except for the last, were answerable on a Likert scale ranging from 1 to 5 (with
5 being the best/most). Through data analysis, the researchers found that the larger the
class size, the lower the scores given for self-reported learning, the teacher rating, the
course rating, and the expected grade. Furthermore, professors with a higher
student load (number of students handled) received lower ratings on self-reported
learning, as well as on course and teacher ratings. However, these results were not
statistically significant.
Aside from these biases, another important facet of SET validity is the
definition of teacher effectiveness. Effective teaching is a hypothetical construct,
which means that there is no direct definition of it. However, it can be defined by other
measures that are correlated with it, such as students' academic improvement (Marsh,
2006). Since there are differing descriptions, researchers define an effective teacher
through the effectiveness of his or her instruction in promoting the academic improvement
of students, an outcome with which Constructivism, among other approaches, has proved
more successful (Khalid & Azeem, 2012).
Constructivism
Pagán (2006), in his paper, contrasted the constructivist approach to teaching with the
traditional approach to education. Bruner's theory of instruction was discussed and
contrasted with traditional models of teaching. The researcher referred to factual
examples wherein constructivist teaching was applied and yielded improvements in
students' learning. This paper suggested that educational institutions which applied
constructivist principles helped their students learn more efficiently and effectively.
King and Rosenshine (1993) conducted research on 34 fifth-grade students. The
researchers divided the students into two groups. The first group was trained to ask and
answer their partners' thought-provoking questions such as "Why is … important?" and
"What would happen if …?" and was guided to ask highly elaborated questions. On the
other hand, the second group was simply instructed to ask their partners questions that
were not further elaborated. The results showed that the students who were guided to ask
elaborated question stems outperformed those who were not guided in the following
areas: (a) explanations provided during discussion, (b) posttest comprehension, and (c)
knowledge mapping. Thus, this study indicated that, in cooperative discussion contexts,
structured guidance in asking thought-provoking questions prompts explanations that
help learning.
Synthesis
For the past decades, the SET has been studied in terms of the different aspects of
validity as well as the possible factors (i.e., gender, course, and subject) which may
influence its results. The majority of these studies proved that the Student Evaluation of
Teachers is valid. However, these studies examined SET validity in varied areas.
Marsh (1992) and Cashin (1988) both studied the SET in terms of its generalizability.
However, they differ in the conditions used. Marsh (1992) stated that the SET is
generalizable when the ratings of professors are compared to (1) their own ratings over
time and (2) their ratings from different levels of students. Similarly, Cashin's (1988)
study proved that the SET is generalizable when correlated with the professors'
self-ratings, ratings by others, and student learning. In addition, the instrument itself has
been shown to have the following aspects of validity: criterion, content, construct, and
convergent (Kember & Leung, 2008). All of these studies used different comprehensive
study designs, from longitudinal studies to meta-analyses and case studies.
Even though the SET has been proven valid by these studies, it has also been shown
that different factors may affect the ratings professors receive from it, such as the course
of the students, the gender of the students, the gender of the instructors, the discipline
being taught, and the class size and student load of the professors. The course of the
student raters has a relationship with the ratings they give their professors: students from
graduate and undergraduate courses (Jenkins & Downs, 2001) and from Arts, Social
Sciences, and Sciences courses (Feldman, 1978) rate their instructors differently.
However, in terms of the relationship of SET ratings with the gender of teachers and the
subject being taught, the studies are not consistent with each other. The studies of Cramer
and Alexitch (2000) and Kohn and Hatfield (2006) both showed that female students gave
higher ratings to their instructors, especially if the instructor was male (Kohn & Hatfield,
2006), although female instructors still tend to receive higher ratings than male
instructors (Cramer & Alexitch, 2000). Furthermore, professors in the Sciences tend to
have lower ratings than those in the Social Sciences and Fine Arts/Humanities (Cramer &
Alexitch, 2000). Students gave higher ratings to teachers who taught a subject of interest
to them (Kohn & Hatfield, 2006). On the other hand, Bonitz (2011) contradicted the
results of these studies and showed that the gender of the teachers as well as the type of
course has no significant relationship with the ratings of teachers' effectiveness in the
SET. Lastly, the ratings that teachers received from the SET showed a correlation with
class size and student load: the larger the class size and student load of professors, the
lower the ratings they receive from their students (Bedard & Kuhn, 2005; Monks &
Schmidt, 2010).
In addition, since there is no direct way of defining teachers' effectiveness (Marsh,
2006), researchers have studied approaches to instruction which can be effective in
promoting academic success. They found Constructivism to be such an approach
(Pagán, 2006; King & Rosenshine, 1993).
Research Gaps
These studies have proved that SETs are valid in terms of their content and
generalizability. However, none of them directly used the Unified Theory of Validity, and
the structure (in terms of content) of the SETs used in these studies differed from one
another. In addition, these studies focused on certain categorizations and did not
expand on the courses of the students. Furthermore, these studies did not align the SET
with the Constructivist's view of teacher effectiveness, particularly Rosenshine's Six
Teaching Functions and Bruner's Theory of Instruction. Despite the fact that there is a
multitude of studies on SET validity in the international sphere, there is still a dearth of
research on SETs in the Philippine setting. In light of these research gaps, this study will
test the validity of the SET of Colegio de San Lorenzo, the TBI. Using Samuel Messick's
Unified Theory of Validity, this study will also align the SET with the Constructivist's
view of teacher effectiveness based on Rosenshine's Six Teaching Functions and Bruner's
Theory of Instruction.
STUDY FRAMEWORKS
This chapter discusses the theoretical framework, conceptual framework and
operational framework of the study.
Theoretical Framework

Figure 1. A theoretical framework for Theory of Unified Validity


This section discusses the theoretical framework for Samuel Messick's Theory of
Unified Validity (see Figure 1). The concept of validity from Samuel Messick (1994)
is the foundation of this thesis.
Validity has been a part of psychometrics for the past decades. It basically aims to
determine whether a test measures what it is supposed to measure. Traditionally, validity
was viewed as a Trinitarian concept composed of content validity, criterion-related
validity, and construct validity, which means that each of these (which may not all be
present) contributes to evidence of validity (Guion, 1980). However, this view was
critiqued by Samuel Messick (1994). In his research, he stated that this view was
incomplete and fragmented, for it failed to take into account the evidence of the
implications of score meaning as a basis for action and of the social consequences of test
use. In his Theory of Unified Validity, he integrated the considerations of content,
criteria, and consequences into a construct framework for the empirical testing of rational
hypotheses about score meaning and theoretically relevant relationships. Moreover, he
stated that the unified concept of validity has six aspects: content, generalizability,
structural, substantive, consequential, and external. Because these six aspects are unified,
they function as general validity criteria or standards for all educational and
psychological measurement.
Content
The content aspect specifies the boundaries of the construct domain to be assessed
by determining the knowledge, skills, attitudes, motives, and other attributes to be
revealed by the assessment tasks. It refers to how well the test items measure the
construct. However, it is not sufficient to merely select tasks that are relevant to the
construct; the content must also be representative of it. Both content relevance and
representativeness of assessment tasks are traditionally appraised by expert professional
judgment, documentation of which serves to address the content aspect of construct
validity (Messick, 1995).
Substantive
The substantive aspect is the soundness of the theoretical foundation underlying
the construct. It emphasizes the role of substantive theories and process modeling in
identifying the domain processes and the construct. It adds to the content aspect empirical
evidence of response consistencies reflective of the domain processes. Therefore, the
substantive aspect measures the process representation of the construct and the degree to
which these processes are reflected in its measurement. Both the content and substantive
aspects of construct validity are concerned with representativeness (Messick, 1995).
Generalizability
Generalizability refers to the aspect which determines whether the test generalizes
across different groups and settings. This is meant to ensure that the score interpretation
is not limited to the sampled assessment tasks but is broadly generalizable to the construct
domain. In addition, generalizability is affected not just by generalizability across tasks
but also by the degree of generalizability across time or occasions and across observers or
raters of the task performance. Evidence of such depends on the correlation of the
assessed tasks with other tasks, occasions, or group settings (Messick, 1995).
Structural
The structural aspect refers to the interrelationships of the dimensions measured by
the test and whether these correlate with the construct of interest and the test scores. It is
concerned with the rational consistency of the scoring models with what is known about
the structural relations inherent in behavioral manifestations of the construct in question
(Messick, 1995).
External
The external aspect refers to the extent to which the relationships of assessment
scores with other measures and non-assessment behaviors reflect the expected high, low,
and interactive relations implicit in the theory of the construct being assessed. It includes
the convergent (relatedness) and discriminant (unrelatedness) relationships of the items to
the construct. Thus, the meaning of the scores is externally substantiated by appraising
the degree to which the empirical relationships with other measures (convergent), or the
lack thereof (discriminant), are consistent with that meaning (Messick, 1995).
Consequential
The consequential aspect refers to the potential risks of invalid or inappropriately
interpreted test scores. It includes the evidence and rationales for evaluating the intended
and unintended short-term and long-term consequences of score interpretation and use.
These consequences can either be positive, such as an improved system, or negative,
associated with bias and unfairness in test scoring and interpretation (Messick, 1995).
The items in the TBI will be explored through the five aspects of construct validity,
namely: content, structural, generalizability, substantive, and external. As previously
mentioned, the consequential aspect of validity will not be tackled by this research
because the researcher has no means of evaluating the decisions made from
misinterpreting the results of the TBI.
Conceptual Framework

Figure 2. A conceptual framework for integrated validation process
This section presents the conceptual framework for the integrated validation process
of the TBI (see Figure 2), which was developed based on the integration of the theories of
Samuel Messick, Barak Rosenshine, and Jerome Bruner. The conceptual framework
focuses on the validation process of the TBI based on the Unified Theory of Validity and
on aligning it with the Constructivist's view of the effective teacher from Rosenshine's
Six Teaching Functions and Jerome Bruner's Theory of Instruction.
Constructivist Teaching
Constructivist teaching is an active process which involves both the teacher and
the student. The learners must be the makers of knowledge, while the teachers must be
facilitators of learning (Gray, 1995).
Gray (1995) conducted a study of Pat, a teacher who was asked to change his or her
approach to teaching to constructivism. According to this study, a constructivist
classroom must have the following characteristics: the learners are actively involved; the
environment is democratic; the activities are interactive and student-centered; and the
teacher facilitates a process of learning in which students are encouraged to be
responsible and autonomous. Teachers in a constructivist classroom employ these
characteristics in order to achieve student-centered learning. The study suggests that
constructivist teaching is an effective way to facilitate learning: it promotes autonomy
and responsibility while also encouraging meaningful learning (Gray, 1995).
Rosenshine's Six Teaching Functions
Rosenshine (1982) conducted research to identify the characteristics of successful
and effective teachers who have the capacity to increase their students' academic
achievement. From the results of this study, he was able to identify six common teaching
functions: (1) reviewing daily and checking the previous day's work; (2) presenting new
content/skills in small steps (but, if necessary, at a rapid pace), with detailed or redundant
instructions and explanations, and phasing in new skills while old skills are being
mastered; (3) guiding initial student practice with a high frequency of questions and
prompts, and checking student responses through feedback; (4) giving feedback and
correctives, recycling instruction if necessary, and making corrections by simplifying
questions, giving clues, explaining, and reviewing; (5) giving students time for
independent practice and seatwork until they are sure of the material; and (6) conducting
weekly and monthly reviews and re-teaching, if necessary. These functions were shown to
improve the learning of the students.
In the constructivist view, instruction should be built from the experiences of the
learners, which facilitate their construction of knowledge. In this way, learners
themselves construct their knowledge, whether individually or socially, based on their
interpretation or sense of their environment and not on what is directly transmitted to
them by their teachers (Jonassen, 1999).
Jerome Bruner's Theory of Instruction
According to Bruner (1966), instruction does not commit learners to having results
in their minds; rather, it establishes knowledge through teaching the learners to be
involved in the process of instruction. Under this theory, he specified that effective
instruction has four aspects: (1) the predisposition of the students to learn; since learning
involves a relationship between the instructor and the student, teachers must help students
master their social skills so that they can be engaged in the instructional process; (2) the
structure and form of knowledge, which helps the students grasp the lessons taught more
easily and which can be characterized in three ways: mode of representation (enactive,
iconic, and symbolic representation), economy (the amount of information that needs to
be kept and processed in mind to achieve understanding), and effective power (which
refers to the generative value of the students' learned propositions); (3) sequence and its
uses, which refer to the ways of presenting the materials or knowledge based on the ease
or difficulty with which learners grasp them; and (4) the form and pacing of
reinforcement (rewards and punishments), which are needed to motivate the students to
learn.
An Integrative Model of Messick's Theory of Unified Validity, Rosenshine's Six
Teaching Functions, and Bruner's Theory of Instruction
Here, the items in the TBI will be explored through the aspects of Samuel Messick's
Unified Theory of Validity. First, the seventeen close-ended questions in the TBI will be
explored through four aspects of validity: content, structural, substantive, and external.
The researcher wants to know whether these questions correspond to the said aspects.
After examining these aspects, the researcher will align the items with the constructs of
Jerome Bruner's Theory of Instruction and Rosenshine's Six Teaching Functions.
At the same time, the researcher explored the generalizability of the TBI across these
factors: Gender, whether the professor is male or female; Program of Students, namely
Bachelor of Science in Psychology (BS Psy), Bachelor of Science in Hotel and Restaurant
Management (BSHRM), Bachelor of Science in Tourism (BST), Bachelor of Science in
Education (BSEd), Bachelor of Arts in Communication Arts, and Bachelor in Computer
Science (BSCS); and Class Size of the section being taught, whether small (0-30) or big
(31-60).
Operational Framework

Figure 3. Operational framework


This section presents the operational framework for the detailed validation process
of the TBI (see Figure 3). It is subdivided into three parts: Part One, which focuses on the
validation of the TBI; Part Two (optional), which focuses on the validation process of the
Cumulative SET made by the researcher; and Part Three, which focuses on the validation
and synthesis of the validated TBI and the validated Cumulative SET.
Part One of the methodology is composed of three steps which yield a validated
TBI. First, the TBI was administered to a sample of students. Then, Exploratory Factor
Analysis was conducted to determine whether the items within it measured the same
construct and to explore which characteristics of teachers' effectiveness were measured.
At the same time, Cronbach's Alpha was run in the Statistical Package for the Social
Sciences (SPSS) to explore the internal consistency of the items.
In case the validity of the TBI was low and it did not include Rosenshine's Six
Teaching Functions or reflect Bruner's Theory of Instruction, the researcher would
proceed to Part Two.
Part Two of the methodology focused on filling in the validated TBI with items from
the Cumulative SET made by the researcher. This part had three steps. First, the
Cumulative SET made by the researcher was administered to the department heads and
deans of Colegio de San Lorenzo. Bruner's Theory of Instruction and Rosenshine's Six
Teaching Functions were set as the construct of teachers' effectiveness. Next, the
researcher ran a Content Validity Ratio on it to validate its content. The instrument
yielded from this is called the Validated Cumulative SET.
Part Three focused on the process of making a constructivist-aligned TBI. The
Validated Cumulative SET was administered to the college students (the same set of
participants). Then, Exploratory Factor Analysis was run to confirm whether these items
adhered to the construct of teachers' effectiveness, and Cronbach's Alpha was also run to
determine whether the items had internal consistency. Finally, the results (the validated
items of the Cumulative SET) were merged with the validated items of the TBI to produce
a constructivist-aligned TBI, which would then be called the Recommended TBI.
Operational Definitions
Teachers' effectiveness – the qualities a teacher possesses in order to successfully
deliver the lessons to his or her students. This will be taken from Rosenshine's (1982) Six
Teaching Functions and Bruner's (1966) Theory of Instruction.
Constructivist Teaching – an approach to teaching wherein the learning is focused on
the student rather than the instructor. Here, the teachers do not simply feed the students
lessons and information.
Student Evaluation of Teachers – an instrument, answered by students, used to measure
teachers' effectiveness.
Teachers' Behavior Inventory (TBI) – the instrument used by Colegio de San Lorenzo
to measure teachers' effectiveness.
Validated Teachers' Behavior Inventory (Validated TBI) – the product of the TBI
validation process: Cronbach's Alpha and Exploratory Factor Analysis.
Cumulative Student Evaluation of Teachers (Cumulative SET) – an instrument devised
by the researcher which is an accumulation of questions from different SETs of other
schools that are not included in the TBI of CDSL.
Validated Cumulative Student Evaluation of Teachers (Validated Cumulative SET) –
the instrument produced from the experts' content validation of the Cumulative SET.
Recommended Teachers' Behavior Inventory (Recommended TBI) – the instrument
that will be produced from the three processes: validation, synthesis, and aligning. This
instrument will undergo a validation process according to Messick's (1994) aspects of
Unified Validity and will also be aligned with the Constructivist view of teaching of
Rosenshine (1982) and Bruner (1966).
METHODOLOGY
The purpose of this study was to examine the validity of Colegio de San Lorenzo's
Teachers' Behavior Inventory and to align it with Rosenshine's Six Teaching Functions
and Bruner's Theory of Instruction.
Research Design and Methods
A descriptive, quantitative research methodology was used for this study to
statistically quantify the validity of the TBI. Different instruments, namely the TBI, the
Cumulative SET, and the Synthesized SET, were administered to a selected sample from
CDSL's population to validate the TBI.
The research was carried out in three stages. Part One dealt with the validation of
the TBI and its alignment with the Constructivist's view of the effective teacher by
administering the TBI to the college students of Colegio de San Lorenzo. Part Two dealt
with filling in the items, using the Cumulative SET made by the researcher, to cover all of
Rosenshine's Six Teaching Functions and Bruner's Theory of Instruction, in case the
items in the TBI did not meet all of them. Lastly, Part Three dealt with the synthesis of
the validated items in the TBI and the Cumulative SET, producing a validated,
constructivist-aligned TBI.
PART ONE
Here, the researcher explored five aspects of construct validity, namely: content,
substantive, structural, generalizability, and external. In order to do this, Cronbach's
Alpha and Exploratory Factor Analysis were conducted in SPSS. As a rule of thumb, the
result of Cronbach's Alpha must be greater than 0.70 for the instrument to be considered
acceptable and reliable.
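To make the rule of thumb concrete, the following is a minimal illustrative sketch, in Python rather than SPSS, of how Cronbach's Alpha is computed from an item-response matrix. The cronbach_alpha helper and the simulated item_1 to item_17 responses are hypothetical and are not part of the actual TBI data or the analysis reported in this study.

```python
# Illustrative sketch only: the study ran Cronbach's Alpha in SPSS. This reproduces
# the same formula, alpha = k/(k-1) * (1 - sum(item variances) / variance of total score),
# on hypothetical Likert responses (17 items scored 1-5).
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's Alpha for a respondents-by-items matrix of Likert scores."""
    items = items.dropna()
    k = items.shape[1]                              # number of items
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: 1,319 respondents x 17 close-ended TBI items.
rng = np.random.default_rng(0)
responses = pd.DataFrame(rng.integers(1, 6, size=(1319, 17)),
                         columns=[f"item_{i}" for i in range(1, 18)])
print(f"alpha = {cronbach_alpha(responses):.2f}  (0.70 is the conventional cutoff)")
```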
Units of Analysis and Sampling
This study was conducted in the College Department of Colegio De San Lorenzo.
The researcher first classified the professors' classes according to the combination of
these variables: PROGRAM - BS Psychology (BSPsy), BS Education (BSEd), BS
Computer Science (BSCS), BS Tourism (BST), BS Hotel and Restaurant Management
(BSHRM), BS Business Administration (BSBA), and AB Communication Arts (ABCA);
GENDER - Female and Male; and CLASS SIZE - Big and Small. Thus, there were a total
of 28 clusters:
Table 2.1 Sampling

Table 2.2 Clusters of Professors

Cluster   Gender of Professors   Class Size      Program
1         Male                   Big (31-60)     BS TSM
2         Male                   Big (31-60)     BS HRM
3         Male                   Big (31-60)     BS BA
4         Male                   Big (31-60)     BS PSY
5         Male                   Big (31-60)     BS EDU
6         Male                   Big (31-60)     BS CS
7         Male                   Big (31-60)     BS CA
8         Male                   Small (1-30)    BS TSM
9         Male                   Small (1-30)    BS HRM
10        Male                   Small (1-30)    BS BA
11        Male                   Small (1-30)    BS PSY
12        Male                   Small (1-30)    BS EDU
13        Male                   Small (1-30)    BS CS
14        Male                   Small (1-30)    BS CA
15        Female                 Big (31-60)     BS TSM
16        Female                 Big (31-60)     BS HRM
17        Female                 Big (31-60)     BS BA
18        Female                 Big (31-60)     BS PSY
19        Female                 Big (31-60)     BS EDU
20        Female                 Big (31-60)     BS CS
21        Female                 Big (31-60)     BS CA
22        Female                 Small (1-30)    BS TSM
23        Female                 Small (1-30)    BS HRM
24        Female                 Small (1-30)    BS BA
25        Female                 Small (1-30)    BS PSY
26        Female                 Small (1-30)    BS EDU
27        Female                 Small (1-30)    BS CS
28        Female                 Small (1-30)    BS CA

The researcher did this to ensure the generalizability of the TBI across conditions
and to ensure that the said factors would not act as blocking factors for the test's validity.
From here, the researcher used simple random sampling, a technique wherein the samples,
or a portion of the population, are selected in an unbiased way (Myers & Hansen, 2014).
To do this, the researcher drew the names of two professors from each cluster by the
fishbowl method. The students from the chosen classes were the main raters. The same
set of students could be chosen again, but for a different class and professor.
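The cluster-then-draw procedure can be pictured with the short sketch below. It is purely illustrative: the roster dictionary and class codes are hypothetical placeholders, not the actual list of CDSL professors; only the 2 x 2 x 7 cluster structure and the draw of two classes per cluster mirror the sampling described above.

```python
# Illustrative sketch (hypothetical roster): 28 clusters of gender x class size x program,
# with two professors' classes drawn at random from each cluster, as in the fishbowl draw.
import itertools
import random

genders = ["Male", "Female"]
class_sizes = ["Big (31-60)", "Small (1-30)"]
programs = ["BSPsy", "BSEd", "BSCS", "BST", "BSHRM", "BSBA", "ABCA"]

# Hypothetical roster: each cluster maps to the classes available in it.
roster = {cluster: [f"class_{i}" for i in range(1, 11)]
          for cluster in itertools.product(genders, class_sizes, programs)}

random.seed(1)
sampled = {cluster: random.sample(classes, k=2) for cluster, classes in roster.items()}
print(len(sampled), "clusters x 2 classes =", 2 * len(sampled), "classes")  # 28 x 2 = 56
```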

The participants were composed of professors and the students from their classes.
The professors were the persons to be rated, and their students were the main raters. From
the 56 classes of 45 professors, a total of 1,319 TBI forms were collected across the seven
programs: 184 PSY, 180 HRM, 194 CS, 161 TSM, 210 EDU, 198 CA, and 192 BA. The
participants were given informed consent forms to ensure their rights.
Research Instrument
Teachers Behavior Inventory
The TBI was composed of 19 questions: 17 close-ended questions and two
open-ended questions. The 17 close-ended questions were answerable on a Likert scale
labeled: Never, Sometimes, Regularly, Often, and Always for items 1-15; Nothing,
Very Little, Some, Much, and Very Much for item 16; and Never, Seldom, Sometimes,
Frequently, and Always for item 17. The upper section of the TBI has four entries asking
for: Instructor's Name, Subject/Section, Day/Time, and Room. At the bottommost part is
a space provided for the Student's Number.
Procedure
Before the researcher administered the TBI, she briefed the professors and the
students about the study. Then, an informed consent form was given to each of the
students (see Appendix C), and an informed consent form (see Appendix D) with a study
information sheet (see Appendix E) was given to the professors. The aim of the study was
disclosed to the professors but was not fully disclosed to the students at first, to avoid
biasing their ratings. The aim was explained to the students during the debriefing process,
after Part Three of the methodology was conducted.
The researcher administered the Teachers Behavior Inventory to the students in
the chosen classes of the professors. The students were given ten (10) minutes to answer
all the questions contained in the TBI. Lastly, the researcher collected all the
accomplished TBI forms from the students.
Data Analysis
The researcher explored the internal consistency and validity of the TBI by running Cronbach's Alpha and Exploratory Factor Analysis. Cronbach's Alpha is the most commonly used measure of internal consistency, for example in determining whether the questions in an instrument are reliable (Laerd Statistics, n.d.). As a rule of thumb, an alpha (α) greater than 0.70 is already acceptable (Hof, 2012). Factor analysis, on the other hand, was used for the following purposes: 1) to determine the smallest number of common factors that best explain the correlations among the indicators; 2) to identify the most plausible factor solutions through factor rotations; 3) to estimate the pattern and structure loadings, communalities, and unique variances of the indicators; 4) to provide an interpretation for the common factors; and 5) to estimate factor scores, if necessary. There are four (4) steps in running an Exploratory Factor Analysis: first, check the sampling adequacy; second, determine the number of factors; third, determine the rotation to be used; and lastly, interpret the factors (Sharma, 1996). By doing these, four aspects of validity were explored, namely: structural, generalizability, external, and substantive.
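A minimal sketch of the reliability check described above, assuming the 17 close-ended TBI ratings are loaded as columns of a pandas DataFrame; the study itself ran these analyses in SPSS, and the data generated below are purely illustrative.

    import numpy as np
    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of the total score)."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical data: 1,319 student ratings on ITEM01..ITEM17, each on a 1-5 scale.
    rng = np.random.default_rng(0)
    demo = pd.DataFrame(rng.integers(1, 6, size=(1319, 17)),
                        columns=[f"ITEM{i:02d}" for i in range(1, 18)])
    print(f"alpha = {cronbach_alpha(demo):.3f}  (the rule of thumb used here is alpha >= .70)")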
PART TWO



This part of the research concerned aligning the TBI with the Constructivist view of the Six Effective Teaching Functions by Barak Rosenshine (1982) and the Theory of Instruction by Jerome Bruner (1966).
Units of Analysis and Sampling
The researcher used purposive sampling, covering all the deans and department heads of Colegio de San Lorenzo. Purposive sampling is a non-random selection of samples that reflects a specific purpose of the study (Myers & Hansen, 2014), in this case expert validity. The participants were all the department heads and deans of Colegio de San Lorenzo: two males and seven females. They were given informed consent forms to protect their rights and were provided with a token for their participation.
Research Instrument
Cumulative Student Evaluation of Teachers (SET)
The Cumulative SET was an instrument constructed by the researcher, composed of close-ended questions taken from the SETs of other schools that were not included in the TBI. Its upper part provided spaces for the Student's Number, Section and Year, Teacher's Code, and Subject and Subject Code.
The questionnaire was composed of three pages. The first page contained the instructions, which also included definitions of the constructs teacher effectiveness and constructivist approach to instruction. The second and third pages contained the SET created by the researcher, composed of 54 close-ended questions not found in the TBI but found in other schools' SETs. These questions were answerable on a Likert scale labeled Never, Sometimes, Regularly, Often, and Always (see Appendix G).
Procedure
The researcher first informed the participants about the study; then an invitation letter (see Appendix F), a study information sheet (see Appendix E), and an informed consent form (see Appendix D) were given to them.
To assess the validity of the items in the TBI, a quantitative study was conducted. To do this, the researcher first conducted the process of expert validation. The participants were given 40 minutes each to answer the questionnaire. Using Rosenshine's (1982) Six Teaching Functions and Bruner's (1966) Theory of Instruction as the paradigm for an effective and efficient teacher, they were asked to put E, U, or NE beside each item: E if they found the item Essential and related to the construct; U if they found the item Useful but not essential to the construct; and NE if they found the item Non-essential or not related to the construct. After two hours, the researcher collected the instrument.
Data Analysis
After this, the researcher computed the Content Validity Ratio (CVR) of each item to quantify the validity of the content. By doing this, the content aspect of validity was explored. The formula is CVR = (E - N/2)/(N/2), where E is the number of expert raters who rated the item as Essential and N is the total number of expert raters. CVR can range from -1.0 to +1.0: the more essential an item was considered to be, the closer its CVR was to +1.0; conversely, the more non-essential it was, the closer its CVR was to -1.0. Only the items labeled E, or Essential, were counted (Fletcher, 2010). In this case, where there were eight (8) raters, the minimum acceptable CVR value is 0.75 (Cohen, Swerdik, & Sturman, 2013).
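A small sketch of the Lawshe content validity ratio computation described above, using the eight-member expert panel of this study as the worked example (the item counts passed in are illustrative only).

    def content_validity_ratio(n_essential: int, n_raters: int) -> float:
        """CVR = (E - N/2) / (N/2); ranges from -1.0 to +1.0."""
        return (n_essential - n_raters / 2) / (n_raters / 2)

    print(content_validity_ratio(7, 8))   # 0.75 -> meets the 0.75 cut-off for eight raters
    print(content_validity_ratio(4, 8))   # 0.00 -> rejected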

PART THREE
After the researcher computed the Content Validity Ratio (CVR) of each question in the Cumulative SET, all the retained items were made into an instrument labeled the Validated SET. The Validated SET was then administered to the same clusters of participants. Once this validation was done, its items were merged with the validated TBI items to align the instrument with Barak Rosenshine's (1982) Six Teaching Functions and Jerome Bruner's (1966) Theory of Instruction.
Research Instrument
Validated SET
The new instrument (the Synthesized SET) was composed of the validated questions from the TBI and the Cumulative SET, a total of 14 close-ended questions. The questions were answerable on a Likert scale labeled Never, Sometimes, Regularly, Often, and Always. Aside from these, spaces were provided at the topmost part of the instrument for the following: Instructor's Code, Day and Time, Section, and Subject (see Appendix H).
Units of Analysis and Sampling
This study was conducted in the College Department of Colegio de San Lorenzo. The researcher first classified the professors' classes according to the combination of these variables: GENDER - Female and Male; CLASS SIZE - Big and Small; and COURSE - BS Psychology (BSPsy), BS Education (BSEd), BS Computer Science (BSCS), BS Tourism (BST), BS Hotel and Restaurant Management (BSHRM), BS Business Administration (BSBA), and AB Communication Arts (ABCA). Thus, there were a total of 28 clusters:

Table 2.3 Sampling

Table 2.4 Clusters of Professors

No.  GENDER OF PROFESSORS   CLASS SIZE     PROGRAM
1    Male                   Big (31-60)    BS TSM
2    Male                   Big (31-60)    BS HRM
3    Male                   Big (31-60)    BS BA
4    Male                   Big (31-60)    BS PSY
5    Male                   Big (31-60)    BS EDU
6    Male                   Big (31-60)    BS CS
7    Male                   Big (31-60)    BS CA
8    Male                   Small (1-30)   BS TSM
9    Male                   Small (1-30)   BS HRM
10   Male                   Small (1-30)   BS BA
11   Male                   Small (1-30)   BS PSY
12   Male                   Small (1-30)   BS EDU
13   Male                   Small (1-30)   BS CS
14   Male                   Small (1-30)   BS CA
15   Female                 Big (31-60)    BS TSM
16   Female                 Big (31-60)    BS HRM
17   Female                 Big (31-60)    BS BA
18   Female                 Big (31-60)    BS PSY
19   Female                 Big (31-60)    BS EDU
20   Female                 Big (31-60)    BS CS
21   Female                 Big (31-60)    BS CA
22   Female                 Small (1-30)   BS TSM
23   Female                 Small (1-30)   BS HRM
24   Female                 Small (1-30)   BS BA
25   Female                 Small (1-30)   BS PSY
26   Female                 Small (1-30)   BS EDU
27   Female                 Small (1-30)   BS CS
28   Female                 Small (1-30)   BS CA

The researcher did this to ensure the generalizability of the TBI across conditions and to make sure that these factors would not act as blocking factors for the test's validity. From here, the researcher used simple random sampling, a technique wherein the samples, or a portion of the population, are selected in an unbiased way (Myers & Hansen, 2014). To do this, the researcher drew two names from each cluster through the fishbowl method.
The same clusters of participants from the first part took part in this third part of the study; however, the total number of students decreased. The participants were composed of professors and the students from their classes. The professors were the persons to be observed, and their students served as the main observers or raters. From the 56 classes of 45 professors, a total of 1,230 forms were collected across the seven programs: 190 PSY, 167 HRM, 218 CS, 162 TSM, 129 EDU, 184 CA, and 180 BA. All participants were given informed consent forms to protect their rights.
Procedure
Before the researcher administered the instrument, she briefed the professors and the students about this study. Then, an informed consent form was given to each of the students (see Appendix C), and an informed consent form (see Appendix D) with a study information sheet (see Appendix E) was given to the professors. The aim of this study was disclosed to the professors but was not fully disclosed to the students at first, to avoid biasing their ratings. The aim was explained during the debriefing conducted after this part of the methodology.
The researcher administered the Validated Cumulative SET to the participants. The participants were given ten (10) minutes to answer all of its questions. Then, the researcher collected all the accomplished Validated Cumulative SET forms.
Data Analysis



The researcher explored the internal consistency and validity of the Validated Cumulative SET by running Cronbach's Alpha and Exploratory Factor Analysis on SPSS. By doing these, five aspects of validity were explored, namely: content, structural, generalizability, external, and substantive.

RESULTS AND ANALYSIS


PART ONE
This section presents the two statistical procedures conducted in SPSS to test the TBI's aspects of validity according to Messick's Theory of Unified Validity: structural, external, substantive, and generalizability. First, Cronbach's Alpha tested the items' degree of relatedness to one another, which pertains to the structural aspect of the instrument. Second, EFA tested the items of the TBI for the structural, external, substantive, and generalizability aspects.

Cronbachs Alpha
Table 4.1.1 Cronbach's Alpha of the TBI

Cronbach's Alpha    Cronbach's Alpha Based on Standardized Items    N of Items
.926                .926                                            17

The Cronbach's Alpha table shows that the 17 close-ended questions are highly reliable, with α = 0.926. As a rule of thumb, an alpha (α) greater than 0.70 is already acceptable. Hence, an alpha (α) of 0.926 suggests that the items on the TBI are interrelated and reliable (Hof, 2012).

Table 4.1.2 Item-Total Statistics
(Columns: Scale Mean if Item Deleted; Scale Variance if Item Deleted; Corrected Item-Total Correlation; Squared Multiple Correlation; Cronbach's Alpha if Item Deleted)

ITEM01   67.2608   109.707   .651   .538   .922
ITEM02   67.2585   110.081   .643   .552   .922
ITEM03   67.4776   108.749   .672   .522   .921
ITEM04   67.1880   112.010   .578   .419   .923
ITEM05   67.3548   111.205   .566   .424   .923
ITEM06   67.4238   109.823   .602   .447   .923
ITEM07   67.6513   107.485   .636   .489   .922
ITEM08   67.4867   108.394   .672   .536   .921
ITEM09   67.5307   107.076   .719   .553   .920
ITEM10   67.6520   106.846   .665   .511   .921
ITEM11   67.8772   106.194   .648   .512   .922
ITEM12   67.7301   107.269   .615   .450   .923
ITEM13   67.4397   107.825   .676   .510   .921
ITEM14   67.6217   106.909   .680   .536   .921
ITEM15   67.4754   107.746   .663   .502   .921
ITEM16   67.3904   112.950   .589   .376   .923
ITEM17   67.2320   115.605   .383   .180   .927

This table shows which items, if deleted, would increase Cronbach's alpha. For items 01-16, the alpha-if-item-deleted values are all lower than the initial α = 0.926; only item 17 yields a higher value, α = 0.927. Deleting item 17 would therefore slightly increase the internal consistency, and with it the structural aspect of validity, by about 0.001.
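A minimal, self-contained sketch of the "Cronbach's Alpha if Item Deleted" check reported in Table 4.1.2, assuming the 17 TBI ratings sit in a pandas DataFrame; the column names and simulated data below are hypothetical.

    import numpy as np
    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        # (k / (k - 1)) * (1 - sum of item variances / variance of total score)
        k = items.shape[1]
        return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

    rng = np.random.default_rng(1)
    tbi = pd.DataFrame(rng.integers(1, 6, size=(1319, 17)),
                       columns=[f"ITEM{i:02d}" for i in range(1, 18)])

    # Recompute alpha with each item left out; a value above the full-scale alpha
    # (as item 17 showed, .927 vs .926) flags that item as a candidate for deletion.
    full = cronbach_alpha(tbi)
    for col in tbi.columns:
        print(col, round(cronbach_alpha(tbi.drop(columns=col)), 3), "vs", round(full, 3))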

Exploratory Factor Analysis
Through Exploratory Factor Analysis, the number and nature of the common factors underlying the items were extracted, and the degree of relationship between the items and the construct was measured.
Table 4.1.3 Correlation Matrixa
[Pairwise correlations and one-tailed significance values among the 17 TBI items and the GENDER, COURSE, and CLASSSIZE variables, as produced by SPSS.]
a. Determinant = .000

This table gives the correlations between the variables. All values are below 0.90, which means that no item is redundant.

Table 4.1.4 KMO and Bartlett's Test

Kaiser-Meyer-Olkin Measure of Sampling Adequacy        .943
Bartlett's Test of Sphericity    Approx. Chi-Square    11079.455
                                 df                    190
                                 Sig.                  .000

The KMO and Bartlett's tests are measures of sampling adequacy (MSA); they indicate whether the sample size is sufficient for factor analysis. A KMO value of 0.90 or above is considered "marvelous." Hence, KMO = 0.943 (sig. = .000; df = 190) suggests that the sample is adequate for factor analysis.
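A minimal sketch (not the study's SPSS procedure) of the two sampling-adequacy checks reported in Table 4.1.4, computed from a correlation matrix R of p variables estimated on n respondents; the simulated data at the end are purely illustrative.

    import numpy as np
    from scipy.stats import chi2

    def bartlett_sphericity(R: np.ndarray, n: int):
        """Bartlett's test: chi2 = -(n - 1 - (2p + 5)/6) * ln|R|, df = p(p - 1)/2."""
        p = R.shape[0]
        stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
        df = p * (p - 1) / 2
        return stat, df, chi2.sf(stat, df)

    def kmo(R: np.ndarray) -> float:
        """KMO MSA: squared correlations over squared correlations plus squared partial correlations."""
        inv = np.linalg.inv(R)                       # requires R to be non-singular
        d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
        partial = -inv / d                           # partial correlations
        np.fill_diagonal(partial, 0.0)
        off = R - np.eye(R.shape[0])                 # zero the diagonal of R
        return (off ** 2).sum() / ((off ** 2).sum() + (partial ** 2).sum())

    # Tiny demonstration with simulated ratings that share variance.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 6))
    X[:, 1:] += 0.6 * X[:, [0]]
    R = np.corrcoef(X, rowvar=False)
    print(bartlett_sphericity(R, n=300))
    print(round(kmo(R), 3))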


Table 4.1.5 Total Variance Explained

Factor   Initial Eigenvalues                 Extraction Sums of Squared Loadings   Rotation Sums of Squared Loadings
         Total    % of Var.   Cum. %         Total    % of Var.   Cum. %           Total    % of Var.   Cum. %
1        7.917    39.586      39.586         7.464    37.319      37.319           4.355    21.776      21.776
2        1.271    6.356       45.942         .787     3.934       41.253           2.179    10.895      32.670
3        1.111    5.556       51.498         .513     2.567       43.820           2.112    10.558      43.228
4        1.098    5.490       56.989         .385     1.925       45.745           .503     2.517       45.745
5        .928     4.642       61.631
6        .880     4.398       66.028
7        .766     3.832       69.860
8        .702     3.509       73.369
9        .691     3.455       76.824
10       .612     3.060       79.884
11       .544     2.722       82.606
12       .512     2.558       85.164
13       .440     2.202       87.366
14       .423     2.116       89.482
15       .403     2.015       91.497
16       .381     1.903       93.400
17       .363     1.816       95.216
18       .341     1.706       96.922
19       .318     1.589       98.511
20       .298     1.489       100.000
Extraction Method: Principal Axis Factoring.

The extraction method used for this study was Principal Axis Factoring (PAF) because it is commonly used in the behavioral and social sciences (Sage Pub, 2015), such as psychology. It is used for the theoretical exploration of an underlying factor structure and to discover any latent variables that cause the manifest variables to covary. To reveal the underlying factor structure during factor extraction (PAF), the shared variance of a variable is partitioned from its unique variance and error variance; only shared variance appears in the solution.
In exploring the number of factors when there is no theory to serve as a guide, the rule of thumb is to retain factors with eigenvalues greater than one (Kline, 2010); the rationale is that the variance of a retained factor must be at least as large as that of a single standardized original variable (The University of Texas at Austin, 1995).



Using the rule of eigenvalues > 1, the Principal Axis Factoring (PAF) results show that there are four common factors for the 20 variables (the 17 items plus gender, course, and class size). The table also shows that the extracted factors 1, 2, 3, and 4 have eigenvalues of 7.917, 1.271, 1.111, and 1.098, respectively, with a cumulative variance explained of 56.989%.
Since there are only 6% non-redundant residuals, the factor structure adequately accounts for the correlations among the variables. This is merely a diagnostic check that the model is adequate for factor analysis.
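A small sketch of the eigenvalues-greater-than-one rule described above, applied to an observed correlation matrix R (assumed to be already computed, e.g. with numpy.corrcoef on the ratings matrix); the demonstration data are hypothetical.

    import numpy as np

    def kaiser_factor_count(R: np.ndarray) -> int:
        """Count factors whose eigenvalues exceed 1.0 (Kaiser criterion)."""
        eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]   # descending; also the scree-plot values
        return int((eigenvalues > 1.0).sum())

    if __name__ == "__main__":
        R = np.corrcoef(np.random.default_rng(1).normal(size=(200, 5)), rowvar=False)
        print(kaiser_factor_count(R))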
Table 4.1.6 Scree Plot

The scree plot shows that there are four (4) factors with eigenvalues above 1.



Rotation Used
Promax rotation, an oblique rotation, was used because the researcher suspected that the four extracted factors correlate with one another to form one overarching factor (Rennie, 1997), namely teacher effectiveness. In an oblique rotation such as Promax, the Pattern Matrix table shows the factor loadings of each item. These factor loadings determine which item falls under which factor and indicate which items should be retained.
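A sketch of the extraction and rotation choices described above, assuming the third-party factor_analyzer package (the study itself ran PAF with Promax rotation in SPSS); the DataFrame name and the 0.70 cut-off mirror the text, and the argument names follow that package's documented API.

    import pandas as pd
    from factor_analyzer import FactorAnalyzer

    def promax_pattern_matrix(ratings: pd.DataFrame, n_factors: int = 4) -> pd.DataFrame:
        # Principal axis factoring with an oblique Promax rotation.
        fa = FactorAnalyzer(n_factors=n_factors, method="principal", rotation="promax")
        fa.fit(ratings)
        return pd.DataFrame(fa.loadings_,
                            index=ratings.columns,
                            columns=[f"Factor {i + 1}" for i in range(n_factors)])

    # Items are then assigned to the factor on which they load at roughly 0.70 or
    # higher, mirroring the cut-off applied to Table 4.1.7 below.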

Table 4.1.7 Pattern Matrixa
(Loadings listed as Factor 1 / Factor 2 / Factor 3 / Factor 4)

01 Answer questions in an expert and knowledgeable manner: .003 / .775 / -.012 / .049
02 Explains the subject matter clearly: -.104 / .867 / -.020 / .095
03 Cites up-to-date information about the subject matter: .115 / .554 / .034 / .137
04 Maintains eye contact with the students while speaking: .005 / .447 / .332 / -.102
05 Maintains classroom discipline: -.073 / -.010 / 1.051 / -.314
06 Implements school rules and regulations: .173 / -.085 / .641 / -.015
07 Distributes and explains to the students the course syllabus two weeks from the start of classes: .296 / -.012 / .148 / .417
08 Takes up the subject matter as stated in the syllabus: .182 / .086 / .185 / .468
09 Adjusts methods of instruction to students' learning abilities: .391 / .188 / .170 / .098
10 Provides review of the subject matter if test results show poor performance of students: .750 / -.036 / .014 / -.011
11 Modifies or changes teaching procedures if test results show poor performance of students: .844 / -.054 / -.095 / -.002
12 Promptly (within the next session) provides students with the results of their written tests and assignments: .823 / .059 / -.104 / -.160
13 Clearly explains his/her grading system: .623 / .185 / .077 / -.180
14 Points out how to earn credits or academic rewards for acceptable academic performance: .764 / .015 / .011 / -.068
15 Praises or gives approval to desired academic behaviors such as doing assignments, participating in class discussions, etc.: .775 / -.063 / .114 / -.144
16 How much have you learned from this course?: .338 / .269 / .049 / .020
17 Does your teacher come to class regularly?: .260 / -.012 / .117 / .089
GENDER: .089 / -.071 / .134 / -.264
COURSE: -.304 / .219 / -.027 / -.071
CLASSSIZE: .046 / -.090 / .067 / -.049
Extraction Method: Principal Axis Factoring.
Rotation Method: Promax with Kaiser Normalization.
a. Rotation converged in 9 iterations.

Factor loadings above 0.70 determined which items in the TBI join together to form a factor. FACTOR 1 includes items 10-12 and 14-15. FACTOR 2 includes items 1-2. For FACTOR 3, since only one item has a factor loading greater than 0.70, loadings relatively close to 0.70 were also considered; thus items 5-6 form FACTOR 3. There are no loadings above .70 in FACTOR 4; however, items 7-8 may be taken into consideration (with loadings of .417 and .468, respectively), since the other items (3, 4, 9, 16, and 17) have low or negative factor loadings. GENDER, COURSE, and CLASS SIZE did not load on any of the factors.
Table 4.1.8 Extracted Factors from the TBI

FACTOR 1: sequence and its uses
10. Provides review of the subject matter if test results show poor performance of students
11. Modifies or changes teaching procedures if test results show poor performance of students
12. Promptly (within the next session) provides students with the results of their written tests and assignments
14. Points out how to earn credits or academic rewards for acceptable academic performance
15. Praises or gives approval to desired academic behaviors such as doing assignments, participating in class discussions, etc.

FACTOR 2: structure and form of knowledge
01. Answer questions in an expert and knowledgeable manner
02. Explains the subject matter clearly

FACTOR 3: form and pacing of reinforcement
05. Maintains Classroom discipline
06. Implements school rules and regulations

FACTOR 4
07. Distributes and explains to the students the course syllabus two weeks from the start of classes
08. Takes up the subject matter as stated in the syllabus

Items 10, 11, 12, 14, and 15 loaded on FACTOR 1, and all of them seem to address the "sequence" aspect of instruction in Bruner's theory, except items 14 and 15. Items 14 and 15 refer to rewards and praise, which are forms of reinforcement rather than sequence. This result suggests that these items correlate with the sequencing of teaching. On the other hand, items 1 and 2 on FACTOR 2 address the "structure and form" aspect. Lastly, items 5 and 6 on FACTOR 3 address the "form and pacing of reinforcement" aspect. FACTOR 4 did not seem to match any of Bruner's aspects.
PART TWO



This section focuses on filling in the unrepresented aspects of teacher effectiveness, using Bruner's Theory of Instruction and Rosenshine's Six Teaching Functions as the paradigm. It has one part, the content validation using the Content Validity Ratio, which determined the items to be retained to form the new instrument used in the last part.
Table 4.2.1 Content Validity Ratio

Item  Essential  CVR        Item  Essential  CVR
1     7          0.75       28    3          -0.25
2     6          0.50       29    3          -0.25
3     7          0.75       30    2          -0.50
4     5          0.25       31    3          -0.25
5     7          0.75       32    6          0.50
6     8          1.00       33    7          0.75
7     8          1.00       34    4          0.00
8     7          0.75       35    5          0.25
9     5          0.25       36    7          0.75
10    4          0.00       37    7          0.75
11    8          1.00       38    6          0.50
12    7          0.75       39    6          0.50
13    4          0.00       40    4          0.00
14    8          1.00       41    4          0.00
15    8          1.00       42    5          0.25
16    6          0.50       43    6          0.50
17    2          -0.50      44    6          0.50
18    1          -0.75      45    7          0.75
19    4          0.00       46    6          0.50
20    3          -0.25      47    2          -0.50
21    4          0.00       48    3          -0.25
22    4          0.00       49    5          0.25
23    6          0.50       50    4          0.00
24    4          0.00       51    5          0.25
25    3          -0.25      52    5          0.25
26    3          -0.25      53    5          0.25
27    3          -0.25      54    6          0.50

The table lists, for each of the 54 Cumulative SET items, the number of expert raters who marked the item Essential and the resulting Content Validity Ratio (CVR). On items 30, 33, 34, 35, 38, 39, 40, 41, 42, and 47, one or more raters did not answer.



Table 4.2.2 Retained Items from the Cumulative SET

Item  Question                                                                                                                                        CVR
1     The instructor presented the course syllabus during the first week of the semester                                                             0.75
3     The instructor explained the grading criteria to his/her students and applied them objectively                                                 0.75
5     The instructor demonstrates excellent knowledge of the course content                                                                          0.75
6     The instructor presented the course content and materials clearly and in a well-organized manner                                               1.00
7     The instructor stimulates critical thinking and analysis among his/her students                                                                1.00
8     The instructor encourages his/her students to conduct library search, to use on-line material and to read extra material related to the course content   0.75
11    The instructor provides timely feedback regarding his/her students' progress in the course                                                     1.00
12    The exams covered readings and lectures related to course content                                                                              0.75
14    The instructor uses variety of educational aids, material and activities that helped clarify the course content and lectures                   1.00
15    The instructor encouraged his/her students to ask questions and express their viewpoints on matters related to the lectures                    1.00
33    I find the format of this class (lecture, discussion, problem-solving) helpful to the way I learn                                              0.75
36    I learn better when the instructor summarizes key ideas from class session                                                                     0.75
37    I find the comments on exams or other written work helpful to my understanding of the class content                                            0.75
45    I find class discussions help me in understanding the readings                                                                                 0.75

The table shows that, out of the fifty-four (54) items in the Cumulative SET, only fourteen (14) were considered valid based on the minimum CVR value required for an eight (8) member panel. These are items 1, 3, 5, 6, 7, 8, 11, 12, 14, 15, 33, 36, 37, and 45.
PART THREE
Just like the first part, this section used two statistical procedures, run in SPSS, to test the instrument's aspects of validity according to Messick's Theory of Unified Validity: structural, external, substantive, and generalizability. First, Cronbach's Alpha tested the items' degree of relatedness to one another, which pertains to the structural aspect of the instrument. Second, EFA tested the items for the structural, external, substantive, and generalizability aspects.
Cronbach's Alpha

Table 4.3.1 Cronbach's Alpha

Cronbach's Alpha    Cronbach's Alpha Based on Standardized Items    N of Items
.931                .933                                            14

The Cronbach's Alpha table shows that the 14 close-ended questions are highly reliable, with α = 0.931. As a rule of thumb, an alpha (α) greater than 0.70 is already acceptable. Hence, an alpha (α) of 0.931 suggests that the items are interrelated and reliable (Hof, 2012).

Table 4.3.2 Item-Total Statistics
(Columns: Scale Mean if Item Deleted; Scale Variance if Item Deleted; Corrected Item-Total Correlation; Squared Multiple Correlation; Cronbach's Alpha if Item Deleted)

ITEM01   54.3341   90.169   .560   .390   .930
ITEM02   54.2943   90.227   .654   .485   .927
ITEM03   54.0886   90.149   .734   .569   .924
ITEM04   54.2033   90.045   .721   .552   .925
ITEM05   54.3024   89.171   .708   .518   .925
ITEM06   54.4618   89.907   .594   .401   .929
ITEM07   54.3967   89.374   .635   .422   .927
ITEM08   54.0862   90.907   .647   .445   .927
ITEM09   54.3984   89.027   .689   .488   .926
ITEM10   54.1593   89.395   .704   .513   .925
ITEM11   54.2537   88.705   .743   .581   .924
ITEM12   54.1862   90.140   .695   .533   .925
ITEM13   54.3577   89.738   .669   .484   .926
ITEM14   54.0894   89.352   .743   .604   .924

In the table above, the Cronbach's alpha if item deleted values for items 1-14 are all less than the initial Cronbach's alpha of 0.931. Therefore, deleting any of items 1-14 would not improve the questionnaire's internal consistency (structural aspect of validity).

Exploratory Factor Analysis


Table 4.3.3 Correlation Matrixa
[Pairwise correlations and one-tailed significance values among the 14 Cumulative SET items and the GENDER, COURSE, and CLASSSIZE variables, as produced by SPSS.]
a. Determinant = .000

This table shows the correlations between the variables. All values are below 0.90, which means that no two items are measuring the same thing.

Table 4.3.4 KMO and Bartlett's Test

Kaiser-Meyer-Olkin Measure of Sampling Adequacy        .962
Bartlett's Test of Sphericity    Approx. Chi-Square    9374.043
                                 df                    136
                                 Sig.                  .000

The KMO and Bartlett's tests are used to measure sampling adequacy. A KMO of 0.90 or above is considered "marvelous." Hence, KMO = 0.962 suggests that the sample is adequate for factor analysis.
Table 4.3.5 Total Variance Explained

Factor   Initial Eigenvalues                 Extraction Sums of Squared Loadings   Rotation Sums of Squared Loadings
         Total    % of Var.   Cum. %         Total    % of Var.   Cum. %           Total    % of Var.   Cum. %
1        7.519    44.228      44.228         7.086    41.684      41.684           7.086    41.683      41.683
2        1.114    6.552       50.779         .470     2.765       44.449           .460     2.706       44.389
3        1.012    5.951       56.730         .445     2.617       47.066           .455     2.676       47.066
4        .947     5.572       62.302
5        .864     5.085       67.387
6        .742     4.362       71.749
7        .604     3.553       75.303
8        .526     3.097       78.399
9        .514     3.026       81.425
10       .484     2.845       84.270
11       .432     2.543       86.813
12       .420     2.470       89.284
13       .407     2.397       91.681
14       .381     2.243       93.924
15       .358     2.107       96.031
16       .349     2.054       98.085
17       .326     1.915       100.000
Extraction Method: Principal Axis Factoring.

The same extraction method as in Part 1 was used here. The extraction method was Principal Axis Factoring (PAF) because it is commonly used in the behavioral and social sciences (Sage Pub, 2015), such as psychology. It is used for the theoretical exploration of an underlying factor structure and to discover any latent variables that cause the manifest variables to covary. To reveal the underlying factor structure during factor extraction (PAF), the shared variance of a variable is partitioned from its unique variance and error variance; only shared variance appears in the solution.
Using the rule of eigenvalues > 1, the Principal Axis Factoring (PAF) results show that there are three common factors for the 17 variables (the 14 items plus gender, course, and class size). The table also shows that the extracted factors 1, 2, and 3 have eigenvalues of 7.519, 1.114, and 1.012, respectively, with a cumulative variance explained of 56.730%.
Since there are only 1% non-redundant residuals, the factor structure adequately accounts for the correlations among the variables. This is merely a diagnostic check that the model is adequate for factor analysis.
Table 4.3.6 Scree Plot



Through the scree plot, it could be seen that there are three (3) factors with
Eigenvalue greater than 1.
Rotation Used
Promax rotation, an oblique rotation, was used because the researcher suspected that the three extracted factors correlate with one another to form one overarching factor (Rennie, 1997), namely teacher effectiveness. In an oblique rotation such as Promax, the Pattern Matrix table shows the factor loadings of each item. These factor loadings determine which item falls under which factor and indicate which items should be retained.
Table 4.3.7 Pattern Matrixa
(Loadings listed as Factor 1 / Factor 2 / Factor 3)

1) The course syllabus was presented at the first week of the semester: -.079 / .793 / -.146
2) The grading criteria were applied objectively: .090 / .696 / -.085
3) The course content was demonstrated with excellent knowledge: .384 / .434 / .091
4) The course content and materials were presented clearly and in an organized manner: .380 / .423 / .076
5) Stimulated the critical thinking of his or her students: .520 / .263 / -.080
6) Encouraged the use of different and additional resources related to the subject, such as: library search, online and other additional materials: .742 / -.027 / -.517
7) Provided feedback to his or her students' progress: .624 / .064 / -.099
8) The exams' content were all covered in the subject's lectures and readings: .446 / .255 / .101
9) The subject was clarified through use of different educational aids, materials and activities: .645 / .097 / -.039
10) The students were encouraged to ask questions and voice out their views which are related to the course: .665 / .082 / .040
11) I find the class format (discussion and lecture) helpful to my learning style: .697 / .084 / .159
12) I learn better when the key concept and ideas were summarized by the instructor: .781 / -.066 / .150
13) I better understand the class content by the comments on exams and/or other written works: .724 / -.025 / .073
14) I find the discussions in this class helpful in understanding the course content, ideas, and concepts: .825 / -.060 / .197
GENDER: -.044 / .062 / -.145
COURSE: -.001 / -.163 / .007
CLASSSIZE: -.064 / .021 / -.144
Extraction Method: Principal Axis Factoring.
Rotation Method: Promax with Kaiser Normalization.
a. Rotation converged in 5 iterations.

Factor loadings above 0.70 determined which items in the Cumulative SET join together to form a factor. FACTOR 1 includes items 6, 12, 13, and 14, with factor loadings of 0.742, 0.781, 0.724, and 0.825, respectively. For FACTOR 2, since only one item has a factor loading greater than 0.70, loadings relatively close to 0.70 were also considered; thus items 1 and 2 form FACTOR 2. FACTOR 3, however, has only very low or negative factor loadings and is therefore not meaningful. Items 3, 4, 5, 7, 8, 9, 10, and 11 were excluded since they have low factor loadings. Furthermore, GENDER, COURSE, and CLASS SIZE did not load on any of the factors, having very low or negative values.
Table 4.3.8 Extracted Factors from the Cumulative SET

Predisposition to learn
6. Encouraged the use of different and additional resources related to the subject, such as: library search, online and other additional materials.
12. I learn better when the key concept and ideas were summarized by the instructor.
13. I better understand the class content by the comments on exams and/or other written works.
14. I find the discussions in this class helpful in understanding the course content, ideas, and concepts

?
1. The course syllabus was presented at the first week of the semester
2. The grading criteria were applied objectively

Factor 1, which includes items 6, 12, 13, and 14, matched Bruner's aspect of "predisposition of students to learn," while FACTOR 2, with items 1-2, did not fit any of Bruner's aspects.

Synthesis of Validated SET and Validated TBI

RECOMMENDED TBI

PREDISPOSITION OF STUDENTS TO LEARN
6. Encouraged the use of different and additional resources related to the subject, such as: library search, online and other additional materials.
12. I learn better when the key concept and ideas were summarized by the instructor.
13. I better understand the class content by the comments on exams and/or other written works.
14. I find the discussions in this class helpful in understanding the course content, ideas, and concepts

STRUCTURE AND FORM OF KNOWLEDGE
01. Answer questions in an expert and knowledgeable manner
02. Explains the subject matter clearly

SEQUENCE AND ITS USES
10. Provides review of the subject matter if test results show poor performance of students
11. Modifies or changes teaching procedures if test results show poor performance of students
12. Promptly (within the next session) provides students with the results of their written tests and assignments
14. Points out how to earn credits or academic rewards for acceptable academic performance
15. Praises or gives approval to desired academic behaviors such as doing assignments, participating in class discussions, etc.

FORM AND PACING OF REINFORCEMENT
05. Maintains Classroom discipline
06. Implements school rules and regulations

DISCUSSION
The current study examined the TBI according to four of the six aspects of Messick's theory of Unified Construct Validity and attempted to align it with the Constructivist approach to teaching using Bruner's Theory of Instruction and Rosenshine's Six Teaching Functions.
The study found that the items in CDSL's TBI are highly correlated, making it valid in terms of the structural aspect of validity. Like previous research (Marsh, 1992; Kember & Leung, 2008) that established SETs' structural validity, the present study indicates that the TBI has high internal consistency and is thus structurally valid.
On the external aspect of validity, this study demonstrates that the TBI has items which converged well onto a parent factor. However, unlike previous literature (Cashin, 1988; Kember & Leung, 2008), the SET of CDSL seems to be poor in terms of the external aspect of validity: there were items which diverged or cross-loaded onto other factors. Such items would contribute to discriminant validity issues (Farrell & Rudd, 2009). Therefore, eliminating them would increase the validity of the TBI in terms of the external aspect.
Since the conditions examined (course, gender of professors, and class size) did not load on any of the factors, the CDSL SET is not affected by them, making it valid in terms of the generalizability aspect. Even though past research found that SET ratings correlated with these factors (Cramer & Alexitch, 2000; Feldman, 1978; Jenkins & Downs, 2001; Kohn & Hatfield, 2006), the current study agrees with Bonitz's (2011) finding that gender and course do not correlate with professors' ratings. Moreover, the current study also contradicts Feldman (1978) and Kohn & Hatfield (2006), which state that class size has a relationship with teachers' ratings.



On the substantive aspect of validity, some items were excluded because they had non-significant to negative factor loadings on the four extracted factors, making them invalid in terms of the substantive aspect: they failed to represent measures of the construct of effective teaching taken from Bruner's Theory of Instruction (Messick, 1994). Items which loaded together with substantial factor loadings were considered valid. Through EFA, four factors were extracted, three of which matched Bruner's aspects of teaching: FACTOR 1, "sequence and its uses"; FACTOR 2, "structure and form of knowledge"; and FACTOR 3, "form and pacing of reinforcement." However, two items under FACTOR 1 (items 14-15) do not seem to belong to the "sequence and its uses" aspect, for they appear to fall under the reinforcement aspect. Furthermore, the results showed that items 07-08, which pertain to presenting and following the course syllabus, formed a new factor and did not fall under FACTOR 1. This suggests that the "sequence and its uses" aspect, which pertains to pacing lessons according to how easily students grasp the material, diverges from implementing the syllabus.
The present study addresses the two threats to the validity of every instrument: construct underrepresentation and construct-irrelevant variance (Messick, 1994). Since the construct of effective teaching was taken from Bruner (1966) and Rosenshine (1982), the items which failed to represent the aspects of Bruner's Theory of Instruction were eliminated. Moreover, the study filled in the aspects underrepresented by the items in the TBI through Parts 2 and 3 of the methodology. Through this, Rosenshine's (1982) Six Teaching Functions and Bruner's (1966) Theory of Instruction were incorporated into the instrument, whose items loaded together and formed a factor that matched the "predisposition of students to learn" aspect of Bruner's Theory of Instruction.
This research was conducted not only to explore the aspects of validity of the TBI, but also to present an instrument which characterizes the qualities of a teacher under a Constructivist approach to instruction. By doing so, the unrepresented factor of "predisposition of students to learn" was added. Aside from increasing the substantive aspect of validity, including this factor is critical in shaping instructors' constructivist qualities, through which students' curiosity is ignited (Bruner, 1966; King & Rosenshine, 1993). According to Bruner (1966), learning starts with a student's predisposition, in which the curiosity ignited by the instructor (e.g., through open-ended questions) helps the student engage in the learning process itself.
Moreover, previous studies have shown great improvements in students' academic success in educational institutions where Constructivism was employed. In such environments, students act as builders of knowledge and teachers as facilitators (Gray, 1995; Pagán, 2006). Hence, this paper's findings could not only mold professors toward Constructivist qualities but also help increase students' academic success.
STUDY LIMITATIONS AND RECOMMENDATION FOR FUTURE
RESEARCHERS
This study should be considered in view of its methodological limitations, which also provide recommendations for future researchers. The present study provided strong sampling adequacy for the population used. Moreover, this paper intended to keep the data-gathering process as normal as possible to avoid extraneous variables, which is why the researcher did not gather the participants' complete profiles. However, since this paper does not provide complete demographics of the participants (e.g., gender and age of students), future researchers could re-examine the instrument according to these individual differences (Cramer & Alexitch, 2000). Of equal importance is the issue of absenteeism during the TBI and Cumulative SET administration. The present findings show that class size does not correlate with any of the items in the TBI or the Cumulative SET; however, absenteeism might have implications for the ratings themselves (the average rating of a class) as well as for how well the class size is represented (Bedard & Kuhn, 2005; Monk & Schmidt, 2010). One suggestion for future research, and for the institution itself, is to administer the TBI through a database before a major exam as part of students' clearance. In this way, all clusters (sections, gender, etc.) will be fully represented.
In addition, the present research was conducted among the college students and the Department Heads and Deans of Colegio de San Lorenzo; thus, the sample of this study is limited to the population of CDSL. Although this study showed that the TBI is generalizable across the conditions examined (gender of professors, course of student raters, and class size), future studies might benefit from including more factors (e.g., gender of students, subject being taught, expected grades of students, and workload) or from changing the population (e.g., other universities) to explore the generalizability aspect of validity in a wider setting (Cashin, 1988), as well as to discover other possible sources of invalidity (Cramer & Alexitch, 2000; Kohn & Hatfield, 2006).
The findings of the research suggest that the "sequence and its uses" aspect diverges from the syllabus items and converges with the reward items of the TBI. However, this study did not explore how these two aspects affect the teachers' way of presenting lessons to their students. In future research, an experimental design could further uncover the causes of these relationships. Furthermore, the institution might take these findings into account and consider conducting an aptitude test for each program before course syllabi are written, so that syllabi can be adjusted to the students' capabilities.
The study examined the validity of the TBI against four aspects of validity: structural, generalizability, substantive, and external. It found that the TBI has high validity on the first two aspects and low validity on the latter two. The methodology of this research suggests how these (substantive and external) aspects of the instrument's validity could be improved. To shed more light on this topic, adding more theories of constructivist teaching could benefit future researchers, as could adding more items to the recommended TBI so that each aspect of Bruner's Theory of Instruction is equally represented. After that, future researchers could re-run factor analysis, this time Confirmatory Factor Analysis, on the recommended TBI to confirm the fit of the items to Bruner's model.
As mentioned, this paper focused only on examining the TBI on four aspects of Messick's Theory of Unified Construct Validity; therefore, it did not include the content and consequential aspects of validity. The content of the instrument was not examined because the researcher believed it should be administered as it is. For future reference, however, students may be employed as content validators themselves to directly address their needs (Marsh, 1992). The consequential aspect might likewise be studied by other researchers.



CONCLUSION
SETs have been explored over the years across different conditions and aspects (Feldman, 1978; Cramer & Alexitch, 2000; Kohn & Hatfield, 2006; Murray, 2005) and have been found to be valid when well made (Cashin, 1988; Marsh, 1992; Bonitz, 2011). The present study extends the literature by using the facets of Messick's Theory of Unified Validity as a guideline for the validation of CDSL's SET, the TBI, and by aligning the instrument with the Constructivist approach to teaching through Jerome Bruner's Theory of Instruction and Rosenshine's Six Teaching Functions. The findings suggest that the TBI of CDSL performs poorly on some aspects of validity and lacks some aspects of a constructivist teacher; like other SETs, it should therefore be used cautiously (Marsh, 1992). In addition, the results suggest that the "sequence" of students' learning converges with "rewards" and diverges from implementation of the course syllabus. Furthermore, it is the hope of this research that the validated instrument presented here (and the characteristics under it), which is aligned with the constructivist view of teaching, will be adopted by educators and educational institutions to promote and lead students toward academic success.


References
Bedard, K., & Kuhn, P. (2005). Where Class Size Really Matters: Class Size and
Student Ratings of Instructor Effectiveness.
Bonitz, V. S. (2011). Student evaluation of teaching: Individual differences and
bias effects (Graduate theses and dissertations). Digital Repository @
Iowa State University, 1-134.
Bruner, J. (1966). Toward a Theory of Instruction. The Belknap Press of
Harvard University Press, 39-72.
Cashin, W. E. (1988). Student Ratings of Teaching: A Summary of the
Research. IDEA Paper.
Catubig, G. C. (2012, February 27). Quezon City, Philippines: Colegio de San
Lorenzo.
Cohen, R. J., Swerdik, M. E., & Sturman, E. D. (2013). Of Tests and Testing. In
R. J. Cohen, M. E. Swerdik, & E. D. Sturman, Psychological Testing and
Assessment (p. 129). McGraw-Hill.
Colegio De San Lorenzo. (2013). College Student Handbook. Quezon City: not
indicated.



Commission on Higher Education. (2012). CHED MEMORANDUM 46 - Policy
Standard for Quality Assurance (QA) in Philippine Higher Education
Through an Outcomes-based and Typology-based QA. CHED Central
Office Records. Quezon City, Republic of the Philippines: Commission on
Higher Education.
Cramer, K. M., & Alexitch, L. R. (2000). Student Evaluations of College
Professors: Identifying Sources of Bias* . The Canadian Journal of
Higher Education , 143-164.
Cronbach, L. J., & Meehl, P. E. (1955). Construct Validity in Psychological Tests.
Psychological Bulletin, 281-302.
Duckworth, E. (1964). Piaget Rediscovered. Journal of Research in Science
Teaching , 2(3), 172-175. doi:10.1002/tea.3660020305
Farrell, A. M., & Rudd, J. M. (2009). Factor Analysis and Discriminant Validity: A
Brief Review of Some Practical Issues. ANZMAC , 1-9.
Feldman, K. A. (1978). Course characteristics and college students' ratings of
their teachers: What we know and what we don't. Research in Higher
Education, 9(3), 199-242.
Fifteenth Congress of the Philippines. (2012). REPUBLIC ACT NO. 10533: AN
ACT ENHANCING THE PHILIPPINE BASIC EDUCATION SYSTEM
BYSTRENGTHENING ITS CURRICULUM AND INCREASING THE NUMBER
OF YEARS FOR BASIC EDUCATION, APPROPRIATING FUNDS THEREFOR
AND FOR OTHER PURPOSES. Metro Manila: Fifteenth Congress of the
Philippines.
Flaherty, C. (2014, May 20). Evaluating Evaluations. Retrieved from Inside
Higher Ed: https://www.insidehighered.com/news/2014/05/20/studysuggests-research-plays-bigger-role-faculty-evaluations-studentevaluations
Fletcher, T. D. (2010, August 7). Content Validity Ratio. Retrieved July 26,
2015, from Applied Psychometric Theory:
http://finzi.psych.upenn.edu/library/psychometric/html/CVratio.html
Forrester, D., & Jantzie, N. (2014). Learning Theories. Retrieved from
University of Calgary: http://www.acs.ucalgary.ca/%7Egnjantzi/learning_theories.htm
Gravestock, P., & Gregor-Greenleaf, E. (2008). Student Course Evaluations:
Research, Models and Trends. Toronto: Higher Education Quality
Council of Ontario.



Gray, A. (1995). 'The Road to Knowledge is Always Under Construction': A Life History Journey to Constructivist Teaching. SSTA Research Centre Report #97-07.
Guion, R. M. (1980). On Trinitarian doctrines of validity. Professional
Psychology, 385-398.
Hegina, A. J. (2014, September 17). UP, Ateneo up in world university rankings survey. Retrieved from Inquirer.net: http://globalnation.inquirer.net/111263/up-ateneo-up-in-world-university-rankings-survey/
Hof, M. (2012). Questionnaire Evaluation with Factor Analysis and Cronbach's Alpha: An Example. Retrieved from Americ.
Jenkins, S. J., & Downs, E. (2001). Relationship between faculty personality
and student evaluation of courses. College Student Journal, 35(5).
Johnson, V. E. (2002). Teacher Course Evaluations and Student Grades: An
Academic Tango. Chance, 15(3), 9-16.
Jonassen, D. (1999). Designing Constructivist Learning Environments. Pennsylvania State University, 215-239.
Kember, D., & Leung, D. Y. (2008, August). Establishing the validity and
reliability of course evaluation questionnaires. Assessment &
Evaluation in Higher Education, 33(4), 341-353.
Khalid, A., & Azeem, M. (2012). Constructivist Vs Traditional: Effective Instructional Approach in Teacher Education. International Journal of Humanities and Social Science, 2, 170-177.
King, A., & Rosenshine, B. (1993). Effects of Guided Cooperative Questioning
on Children's Knowledge Construction. The Journal of Experimental
Education, 127-148.
Kline, T. J. (2010). Assessing Validity Via Item Internal Structure. In T. J. Kline, Psychological Testing: A Practical Approach to Design and Evaluation (pp. 242-243). Sage Publications India Pvt Limited.
Kohn, J., & Hatfield, L. (2006, September). The role of gender in teaching
effectiveness ratings of faculty. Academy of Educational Leadership
Journal, 10(3).
Laerd Statistics. (n.d.). Cronbach's Alpha. Retrieved September 14, 2015, from Laerd Statistics: https://statistics.laerd.com/spss-tutorials/cronbachs-alpha-using-spss-statistics.php
Learning-Theories. (2014). Summaries of Learning Theories and Models. Retrieved from learning-theories.com: knowledge base and webliography: http://www.learning-theories.com/

Manlangit, J. (2015, March 11). Interview on Ms. Manlangit About the TBI. (A.
G. Retrita, Interviewer)
Marsh, H. W. (1992). A Longitudinal Perspective of Students' Evaluations of
University Teaching: Ratings of the Same Teachers over a 13-Year
Period. Annual Meeting of the American Educational Research
Association, 1-18.
Marsh, H. W. (2006). Students' Evaluations of University Teaching: Dimensionality, Reliability, Validity, Potential Biases and Usefulness. Oxford University and SELF Research Centre, University of Western Sydney.
Messick, S. (1994). Validity of Psychological Assessment: Validation of Inferences from Persons' Responses and Performances as Scientific Inquiry into Score Meaning. Research Report. Princeton, NJ: Educational Testing Service, 1-33.
Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons' responses and performances as scientific inquiry into score meaning. American Psychologist, 50, 741-749.
Monk, J., & Schmidt, R. (2010). The Impact of Class Size and Number of
Students on Outcomes in Higher Education. Cornell University ILR
School.

Murray, H. G. (2005, June). Student Evaluation of Teaching: Has It Made a Difference? Society for Teaching and Learning in Higher Education (pp. 1-15). Ontario: Society for Teaching and Learning in Higher Education.
Myers, A., & Hansen, C. (2014). Alternatives to Experimentation: Surveys and Interviews. In A. Myers, & C. Hansen, Experimental Psychology (pp. 94-122). Cengage Learning.
Pagán, B. (2006). Positive Contributions of Constructivism to Educational Design. PsychOpen.
Paul, R., & Elder, L. (2007). Consequential Validity: Using Assessment to Drive Instruction. The Foundation for Critical Thinking.
Rennie, K. M. (1997). Exploratory and Confirmatory Rotation Strategies in
Exploratory Factor Analysis. Southwest Educational Research
Association.
Reyes, D. (2015, March 10). Exploring TBI. (A. G. Retrita, Interviewer)
Reyes, E. D. (2014, December). Exploring TBI. (A. Alfaro, & L. Balena,
Interviewers)
Rosenshine, B. (1982). Teaching Functions in Instructional Programs. Research on Teaching: Implications for Practice. National Institute of Education, 1-40.
Sage Pub. (2015, October 12). Retrieved from http://www.sagepub.com: http://www.sagepub.com/sites/default/files/upm-binaries/19710_784.pdf
Sharma, S. (1996). Applied Multivariate Techniques. John Wiley and Sons, Inc.
Socha, A. B. (2009). Students' Assessment of Instruction: A Validity Study. Western Carolina: Alan Brian Socha.
Stark-Wroblewski, K., Ahlering, R. F., & Brill, F. M. (2007). Toward a more comprehensive approach to evaluating teaching effectiveness: supplementing student evaluations of teaching with pre-post learning measures. Assessment & Evaluation in Higher Education, 402-415.
The University of Texas at Austin. (1995, June 26). SAS Library: Factor Analysis Using SAS PROC FACTOR. Retrieved from Institute for Digital Research and Education: http://www.ats.ucla.edu/stat/sas/library/factor_ut.htm
University of the Philippines. (n.d.). Student Evaluation of Teachers. Retrieved
December 14, 2014, from Office of the Director of Instruction:
http://ovcaa.upd.edu.ph/odi/set.html