The International Journal of Educational and Psychological Assessment


September 2011, Vol. 8(2)

Establishing a Scale that Measures Responsibility for Learning


Carlo Magno

De La Salle University, Manila


Abstract
The present study advances the domain of responsibility for learning by establishing a
measurement tool. Items were generated and contextualized for college students. The tool
constructed is composed of 10 items for deportment, 10 items for learning process, and 10
items for motivation, anchored on Zimmerman and Kitsantas' (2005, 2007, 2008)
conceptualization of responsibility for learning. A total of 2054 college students responded
on a seven-point numeric scale. Higher scores attribute responsibility to the self while
lower scores attribute responsibility to the teacher. Five factor models were tested
using Confirmatory Factor Analysis (CFA), and the three-factor structure attained the best
fit as opposed to a one-factor and a series of two-factor structures. Convergent validity was
also established among the subscales (in the CFA and bivariate correlations). High internal
consistencies were attained for the items using both CTT and IRT approaches. A Graded
Response Model was used, and it was evidenced that the three subscales cover 90% of the
continuum in the distribution pertaining to measurement precision. Appropriate threshold
categories of the seven-point numeric scale were also found.
Keywords: Responsibility for learning, deportment, learning process, motivation, scale
development

Introduction
Students who are responsible are generally able to direct their learning toward
functional outcomes. This indicates that students who are more responsible in
school are better able to generate thoughts and feelings that allow them to achieve
their goals. Students who manifest this disposition are better able to perform and
achieve in school. Measurement techniques for the construct responsibility for
learning need to be established in order to advance studies in education and
psychology.
As early as 1951, Rockwell already recognized the need to develop students'
responsibility for their behavior in school. In order to develop responsibility in
students, teachers need to review classroom structure, the lesson plan for the day, class
activities, self-evaluation, standards set, and acceptance. This was supported by
Corno (1992), who clustered four learning environments that facilitate
responsibility for learning: opportunities to pursue interest, releasing the potential
for revision, peers as learning partners, and participant modeling instruction.
The earliest study on responsibility directed at learning was conducted by
Crandall, Katkovsky, and Crandall (1965), who conceptualized responsibility as
an internalized locus of control in which individuals reinforce themselves to be motivated.
They constructed the Intellectual Achievement Responsibility (IAR) Questionnaire
to deal with children's achievement development. The IAR contains 34 situations,
and a binary response is provided to detect external and internal locus of control for
each situation. The context in which the items were created at that time already
reflected responsibility for learning (e.g., "If a teacher passes you to the next
grade, would it probably be...", "When you do well on a test at school, it is more
likely to be...", "When you have trouble understanding something in school, is it
usually...").
Jackson (2002) described responsibility as the defining aspect of mature
character, and the development of responsible learning is seen to inhibit
impulsivity. Individuals who are highly responsible are said to "direct learning
experiences toward functional outcomes in both educational and professional
contexts" (p. 52).
In 1965, Crandall, Katkovsky, and Crandall studied responsibility for
learning. They constructed a scale that measures intellectual achievement
responsibility within a more social and developmental framework. The scale
focuses on external locus of control situations, and the point of view is children's
instrumentality over their own behaviors. The researchers were actually attempting
to explain characterizations of responsibility within a social cognitive framework.
That is, when they correlated the subscales they assumed that when one is
responsible, intellectual-academic success goes along with it. A more contemporary
study by O'Connor and Jackson (2009) constructed a scale that measures
responsibility as part of a measure of learning style. They constructed the scales in
a measurement model and found that learning responsibility has a negative path
estimate with impulsivity, but it increases with emotional independence and
practicality. Their study further described responsibility in the context of learning.
Magno (2010) also found that responsibility for learning is a subscale of the
academic self-regulation scale after conducting a principal factor analysis. The factor
structure of the scale uncovered the same factors as that of Zimmerman and
Martinez-Pons (1988), but a new factor was extracted called learning
responsibility. It is composed of items about rechecking homework to see if it is done
correctly, doing things as soon as the teacher gives the task, having concern for
deadlines, prioritizing school work, and finishing all homework first. Magno
(2010) further defines this scale as the learner's liability, accountability, and
conscientiousness with the learning task and learning experience. The studies of
O'Connor and Jackson (2009) and Magno (2010) were able to identify
responsibility for learning within the broader constructs of learning style and academic
self-regulated learning. Responsibility for learning can be distinguished from the
academic self-regulated learning conceptualization. As a subscale of academic self-regulated
learning, it is termed learning responsibility, described as the "learner's
liability, accountability and conscientiousness of the learning task and learning
experience" (Magno, 2010, p. 70). On the other hand, responsibility for learning
is a general emotive trait characterized by control and independence over one's
actions. Learning responsibility within self-regulation is more social-cognitive and
strategic, while responsibility for learning per se involves emotion, independence, and
control.
More directed studies on the outcomes of responsibility for learning were
conducted by Zimmerman and Kitsantas (2005, 2007). In their 2005 study they
looked at the effect of responsibility for learning and self-efficacy beliefs on the use
of learning strategies such as reading, note taking, test taking, writing, and
studying. In that study, they also constructed the perceived responsibility for
learning scale. The scale indicates whether the respondents perceived
the students themselves or the teacher to be more responsible for various learning
tasks or outcomes. In this instrument, the respondents are presented with 18 items
in which they indicate whether the student or the teacher is more responsible
for various learning tasks or outcomes, such as a student's motivation, deportment,
and learning process. The respondents answered each item using a seven-point
scale: 1 (mainly the teacher), 2 (definitely more the teacher), 3 (slightly more the
teacher), 4 (both equally), 5 (slightly more the student), 6 (definitely more the
student), and 7 (mainly the student). Responsibility for learning is attributed to
students' efforts; higher scores reflect greater attribution of responsibility to the student.
Factor analysis of the items was conducted, and the items were classified
under three factors (81% total variance). However, there were very few items in the
second and third extracted factors, so a single index of students'
responsibility for learning was considered by the authors. The item mean score was
5.21, with an SD of 1.21, and a Cronbach's alpha of .97 was obtained in the 2005 study.
In the 2007 study, the item mean score for males was 5.23 (SD=1.02) and 5.42
(SD=.92) for females, and a Cronbach's alpha of .90 was obtained (Kitsantas &
Zimmerman, 2008).
The present study tested whether responsibility for learning would support a
three-factor structure composed of motivation, deportment, and learning process as
opposed to the single factor that was the outcome of the 2005 study of Zimmerman and
Kitsantas. A scale was constructed following the content of the three factors
proposed by Zimmerman and Kitsantas, contextualized for Filipino college
students. Moreover, additional IRT analyses were provided to uncover the
measurement precision of the scale using a Graded Response Model (GRM).
In the GRM analysis, the Test Information Functions (TIF) and an analysis of
threshold categories were further conducted.
Method
Participants
The participants in the study were 2054 Filipino college students in the
National Capital Region (NCR) of the Philippines. These college students were
enrolled in different universities and colleges in the NCR. The majority of the
participants were in their first or second year of college, with an average
age of 18.2 years. They were conveniently sampled and all volunteered to participate in
the study.
Instrument
A self-report instrument that measures responsibility for learning was
constructed in the present study. The items and factors of the scale were adapted
from the original conceptualization of Zimmerman and Kitsantas (2005, 2007,
2008). There were 10 items constructed under each proposed factor (deportment,
learning process, and motivation). The participants are prompted with the
statement "Who is more responsible when..." in answering each of the items. Each
item is answered on a seven-point numeric scale where higher ratings attribute
responsibility to the self and lower ratings attribute responsibility to
the teacher: 1 (mainly the teacher), 2 (definitely more the teacher), 3 (slightly more
the teacher), 4 (both equally), 5 (slightly more the student), 6 (definitely more the
student), and 7 (mainly the student). The items were reviewed by one educational
psychologist and another specialist in scale development. The items were also
presented to a group of preservice teachers to review the relevance of the items for
college students' experience in the Philippines.
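As a minimal illustration of how subscale scores can be derived from this response format, the sketch below averages the 10 items of each subscale. The column names (dep1...dep10, lp1...lp10, mot1...mot10) and the data file are hypothetical placeholders and not part of the original instrument.

    import pandas as pd

    # Hypothetical item columns for the three subscales
    DEP = [f"dep{i}" for i in range(1, 11)]
    LP = [f"lp{i}" for i in range(1, 11)]
    MOT = [f"mot{i}" for i in range(1, 11)]

    responses = pd.read_csv("responses.csv")  # hypothetical file of 1-7 ratings per item

    # Higher subscale means indicate responsibility attributed to the self,
    # lower means indicate responsibility attributed to the teacher.
    scores = pd.DataFrame({
        "deportment": responses[DEP].mean(axis=1),
        "learning_process": responses[LP].mean(axis=1),
        "motivation": responses[MOT].mean(axis=1),
    })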
Procedure
The questionnaire was administered to 2054 college students from different
colleges and universities in the NCR. Before administering the questionnaire,
the students were asked if they were willing to participate in the study and were
requested to sign a consent form if they agreed. They were instructed to answer
the items in the questionnaire and reminded not to leave any item unanswered.
The participants answered at their own pace within 40 to 50 minutes. They were
also reminded that there are no right or wrong answers, so they needed to respond as
honestly as possible. Once completed, the questionnaire was returned and the
participants were debriefed about the purpose of the study.
Data Analysis
The factor structure of responsibility for learning was tested using
Confirmatory Factor Analysis (CFA). The procedure allowed the researcher to fit
five separate common factor models to the observed data under various types of
constraints. The five models tested were compared using the chi-square goodness
of fit index (χ²), Root Mean Square Standardized Residual (RMS), and Root Mean
Square Error of Approximation (RMSEA). Comparative fit indices were also used,
such as the Akaike Information Criterion (AIC), Schwarz-Bayesian Criterion
(SBC), and Browne-Cudeck Cross-Validation Index (BCCVI).
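A minimal sketch of this kind of model comparison in Python is shown below, using the semopy package with lavaan-style model syntax. The item names and data file are hypothetical, and semopy is an assumption here rather than the software used in the original analysis; its fit summary typically reports the chi-square, degrees of freedom, RMSEA, AIC, and BIC, with the Bayesian criterion labeled BIC rather than SBC and no BCCVI.

    import pandas as pd
    from semopy import Model, calc_stats

    # Hypothetical three-factor specification (10 items per latent factor)
    three_factor = """
    Deportment =~ dep1 + dep2 + dep3 + dep4 + dep5 + dep6 + dep7 + dep8 + dep9 + dep10
    LearningProcess =~ lp1 + lp2 + lp3 + lp4 + lp5 + lp6 + lp7 + lp8 + lp9 + lp10
    Motivation =~ mot1 + mot2 + mot3 + mot4 + mot5 + mot6 + mot7 + mot8 + mot9 + mot10
    """

    data = pd.read_csv("responses.csv")  # hypothetical 30-column item data
    model = Model(three_factor)
    model.fit(data)
    print(calc_stats(model).T)  # chi-square, df, RMSEA, AIC, BIC, and other indices

The competing one-factor and two-factor models can be specified in the same way and compared on the resulting fit statistics.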
The reliability of the scale was determined using Cronbach's alpha. Separate
estimates of person and item reliability were also obtained using a one-parameter
IRT model.
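Cronbach's alpha for each subscale follows the usual formula α = k/(k−1) × (1 − Σ item variances / total-score variance). A minimal sketch, assuming the responses for one subscale are held in a persons-by-items array, is given below.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: (n_persons x k_items) array of responses for one subscale."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of the subscale total score
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical usage with the scored data frame sketched earlier:
    # alpha_dep = cronbach_alpha(responses[DEP].to_numpy(dtype=float))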
The Graded Response Model (GRM) was used to determine the measurement
precision of the scale by interpreting the generated Test Information Function (TIF)
and threshold values.
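To make the GRM output concrete, the sketch below computes the information contributed by one polytomous item and, by summation over items, a Test Information Function. The discrimination and threshold values shown are hypothetical placeholders, not the estimates obtained in this study.

    import numpy as np

    def grm_item_information(theta, a, b):
        """Fisher information of one graded-response item.
        theta: array of latent trait values; a: discrimination; b: ordered thresholds."""
        # Cumulative probabilities P*(X >= k), padded with 1 and 0
        p_star = np.vstack([np.ones_like(theta)]
                           + [1.0 / (1.0 + np.exp(-a * (theta - bk))) for bk in b]
                           + [np.zeros_like(theta)])
        p_cat = p_star[:-1] - p_star[1:]            # category response probabilities
        w = p_star * (1.0 - p_star)
        dp = a * (w[:-1] - w[1:])                   # derivatives of category probabilities
        return np.sum(dp ** 2 / np.clip(p_cat, 1e-12, None), axis=0)

    theta = np.linspace(-3, 3, 121)
    # Hypothetical parameters for one seven-category item (six thresholds)
    item_info = grm_item_information(theta, a=1.2,
                                     b=np.array([-1.5, -0.9, -0.3, 0.3, 0.9, 1.5]))
    # The TIF of a subscale is the sum of item informations across its 10 items.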
Results
Descriptive statistics for the three subscales of the instrument were
determined. Table 1 reports the means, standard deviations, confidence intervals of
the mean, skewness, and kurtosis of the subscales.


Table 1

Descriptive Statistics for the Subscales of Responsibility for Learning

Subscale            N      M      95% CI          SD     Skewness   Kurtosis
Deportment          2054   4.66   [4.62, 4.71]    1.00   -0.41      0.59
Learning Process    2054   3.08   [3.06, 3.10]    0.55   -0.09      1.24
Motivation          2054   4.65   [4.61, 4.70]    0.99   -0.31      0.56

The mean scores for the subscales obtained in the present study are low
compared with the mean scores in the 2005 and 2008 studies of Zimmerman and
Kitsantas (M=5.37, SD=0.95). The range of means obtained in the present study
(M=3.08 to M=4.66) indicates that students have a tendency to perceive the teacher
as responsible for their learning.
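A minimal sketch of how the entries in Table 1 can be computed for one subscale is given below; it assumes the subscale scores from the earlier scoring sketch and uses a normal-approximation 95% confidence interval for the mean.

    import numpy as np
    from scipy import stats

    def describe_subscale(scores: np.ndarray) -> dict:
        """scores: one subscale score per respondent (e.g., the mean of its 10 items)."""
        n = scores.size
        m = scores.mean()
        sd = scores.std(ddof=1)
        half = 1.96 * sd / np.sqrt(n)  # 95% CI half-width for the mean
        return {
            "N": n,
            "M": round(m, 2),
            "95% CI": (round(m - half, 2), round(m + half, 2)),
            "SD": round(sd, 2),
            "Skewness": round(float(stats.skew(scores)), 2),
            "Kurtosis": round(float(stats.kurtosis(scores)), 2),  # excess kurtosis
        }

    # Hypothetical usage: describe_subscale(scores["deportment"].to_numpy())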
Table 2

Correlations and Internal Consistencies of the Subscales for Responsibility for Learning

                        (1)     (2)     (3)
(1) Deportment          --
(2) Learning Process    .10**   --
(3) Motivation          .65**   .11**   --
Cronbach's alpha        .83     .80     .84
Person reliability      .76     .75     .78
Item reliability        .97     .99     .99
Person RMSE             .32     .31     .32
Item RMSE               .02     .02     .02

Note. **p < .001.

All correlation coefficients were significant when the three subscales of
responsibility for learning were intercorrelated (p<.001). There was a strong
correlation between motivation and deportment (r=.65). However, the correlation
between learning process and deportment (r=.10, p<.001) and between learning
process and motivation (r=.11, p<.001) were weak. The significant correlations
among the three subscales of responsibility for learning indicate the convergence of
these subscales.
The items under each subscale also showed high internal consistencies
as indicated by the Cronbach's alpha values (.80 to .84). An IRT approach was also
used to estimate the internal consistencies of the scale. The IRT approach separates
the calibration of persons and items, which assumes that item measures are not
influenced by person measures. Both the item reliabilities (.97 to .99) and person
reliabilities (.75 to .78) obtained are very high. This indicates that when item
internal consistency is not influenced by sample characteristics, responses to the
items tend to be uniform. When the Root Mean Square Errors (RMSE) were
estimated for both persons and items, the error estimates for the items are small (.02 for
all) but the error estimates for persons are quite large (.31 to .32).

The appropriate factor structure of responsibility for learning was
determined by testing five measurement models. The best model was then
determined by assessing each model's goodness of fit. The first model is a one-factor
model where all 30 items of responsibility for learning were placed as
manifest variables under one latent factor. The second to fourth models are two-factor
models where the subscales were paired under one latent factor. The
first of these models combined deportment and motivation (20 items) under one
latent factor, which was correlated with learning process (10 items) as a separate
latent factor. The second model combined deportment and learning process (20
items) under one latent factor correlated with motivation (10 items) in another. The
third model combined learning process and motivation under one latent factor (20
items) correlated with deportment under a separate latent factor (10 items). The last
model tested is a three-factor model where each of the deportment, learning
process, and motivation subscales is structured with its own latent factor with 10
items each as manifest variables.
Table 3

Goodness of Fit Indices for the Five Models of Responsibility for Learning

Model                                            χ²        df    RMS    RMSEA   AIC    SBC    BCCVI
One-factor                                       5674.94   405   .062   .089    2.82   2.99   2.82
Deportment + Motivation and Learning Process     4991.95   376   .061   .086    2.49   2.65   2.49
Deportment + Learning Process and Motivation     5183.97   404   .059   .084    2.58   2.75   2.59
Learning Process + Motivation and Deportment     5020.65   404   .059   .083    2.51   2.67   2.51
Three-factor                                     4694.58   402   .057   .080    2.34   2.52   2.34

The results of the CFA showed that for all five models, all parameter
estimates of the items were significant, p<.001. All the correlations of the latent
factors in the two-factor and three-factor models were also significant (p<.001),
which indicates convergent validity of the scale.
When the five measurement models were compared on their goodness of
fit, all indices favor the three-factor model where deportment, learning process, and
motivation are each structured as a latent factor. The three-factor structure had the
lowest chi-square value (χ²=4694.58). The RMS and RMSEA were also adequate,
with the lowest values. The comparative fit indices were likewise consistent in showing
that the three-factor structure had the best fit; the AIC, SBC, and BCCVI had the
lowest values for the three-factor model.
The measurement model for the three-factor structure also showed
high parameter estimates for the items and very low standard errors (see Tables 4
to 6).


Table 4

Deportment Items, CFA Parameter Estimates, and Standard Errors

Item                                                         Parameter Estimate   SE
I frequently go out of the classroom?                        1.07                 0.03
I sneak food inside the classroom?                           0.89                 0.03
I'm caught cheating in an exam?                              0.97                 0.04
I get reprimanded for chatting with my seatmate?             1.03                 0.04
I always recite and share my ideas during class?             0.95                 0.03
I photocopy books or handouts about the lessons in class?    0.82                 0.04
I answer back at the teacher?                                0.81                 0.03
I goof around rather than finish the task given to us?       0.84                 0.03
I don't take down notes in class?                            0.87                 0.03
I'm caught using cell phone during class?                    0.93                 0.03

Table 5

Learning Process Items, CFA Parameter Estimates, and Standard Errors

Item                                                                           Parameter Estimate   SE
I go to the library to read further about the lessons in class?                0.93                 0.03
I get high grades for open-notes exams?                                        0.89                 0.03
I did the wrong thing in answering my seatwork?                                0.93                 0.04
The class is way behind the lesson?                                            0.89                 0.03
I make use of the Internet rather than books in doing my homework?             0.93                 0.03
I make my own reviewers for the lesson?                                        0.92                 0.03
I do advanced readings so that it can give me an upper hand on discussions?    0.81                 0.04
I ask help from someone who is known to be good at a certain subject?          0.80                 0.04
I look up foreign words that I don't understand, in the dictionary?            0.58                 0.03
I do not remember information from assigned readings?                          0.94                 0.03

Table 6

Motivation Items, CFA Parameter Estimates, and Standard Errors

Item                                                                Parameter Estimate   SE
I study harder after getting a high grade on a quiz?                0.97                 0.04
I'm lazy to go to class because I'm not happy with my course?       0.95                 0.03
I believe that finishing my studies will give me a good future?     0.98                 0.03
I am seriously considering shifting to another course?              0.77                 0.04
I'd rather finish my schoolwork than hanging out with friends?      0.93                 0.03
I am not motivated to take this course seriously?                   0.84                 0.03
I always try to finish my projects on time?                         0.94                 0.04
I once skipped class to finish my other schoolwork due that day?    0.83                 0.03
I get more challenged whenever I get reprimanded?                   0.88                 0.03
I am even more motivated to study when receiving good grades?       0.99                 0.03

The Test Information Function (TIF) was generated for each subscale of
responsibility for learning using a Graded Response Model (GRM). The TIF
indicates the amount of precision provided by the scales. Waterman et al. (2010)
explained that TIFs "provide information regarding the spread of a scale score
relative to the latent construct being assessed by that scale. Scales with highly
constrained test information curves are imprecise measures for much of the
continuum of the domain. On the other hand, scales with TIF curves that
encompass a large range (-2.00 SD units to 2.00 SD units includes about 95% of the
possible values of a normal distribution) can be said to provide precise scores along
much of the continuum of the domain of interest" (pp. 274-275).
The results of the TIF generated from the GRM showed that for all
three subscales (deportment, learning process, and motivation) the information spans
about 1.5 SD units below and above the mean of the latent trait (see Figure 1). This
indicates that the scales encompass about 90% of the possible values in a normal
distribution, and thus that the three subscales provide adequate measurement
precision along the continuum of responsibility for learning.
Table 7

Average Response Threshold Values for each Subscale of the Responsibility for Learning

                               Threshold Values
Subscale            1        2        3        4       5       6       7
Deportment          -0.671   -0.36    -0.094   0.139   0.372   0.655   1.385
Learning Process    -0.563   -0.367   -0.156   0.044   0.299   0.581   1.013
Motivation          -0.636   -0.394   -0.146   0.116   0.403   0.734   1.45

Figure 1
Test Information Function for the Three Subscales of Responsibility for Learning
(1.1 TIF for Deportment; 1.2 TIF for Learning Process; 1.3 TIF for Motivation)

Part of the results generated by the GRM is the analysis of the spread of
threshold values. Analyzing the spread of the thresholds within a seven-point
numeric scale indicates the extent to which responses might or might not approximate
interval scaling. Threshold values that are well separated from each other suggest
distinct, roughly equidistant categories, which is a good indicator of interval scaling.
The threshold values obtained for the subscales deportment (-0.671, -0.36,
-0.094, 0.139, 0.372, 0.655, and 1.385), learning process (-0.563, -0.367, -0.156,
0.044, 0.299, 0.581, and 1.013), and motivation (-0.636, -0.394, -0.146, 0.116,
0.403, 0.734, and 1.45) are varied, indicating that the scales cover different ranges of
levels. Motivation had the highest category threshold with a maximum average
calibration of 1.45, followed by deportment (1.385), and the lowest category threshold
was for learning process (1.013) (see Table 7).

Discussion
The results of the present study showed the psychometric properties of the
responsibility for learning scale in terms of its factor structure, convergent validity,
internal consistencies, measurement precision, and threshold levels.
First, in terms of the scale's factor structure, it was found that responsibility
for learning is best explained by the three factors of deportment, learning
process, and motivation, as originally conceptualized by Zimmerman and Kitsantas
(2005, 2007, 2008). The three factors are subsumed under responsibility for learning,
as indicated by their correlations in both the CFA and the bivariate analysis. This key
finding provides a workable factor structure for subsequent studies using the scale.
In the 2005 study of Zimmerman and Kitsantas, the resulting principal components
analyses led them to use a unidimensional measure of responsibility for learning,
although the initial steps revealed a three-factor structure. The present study
supports the three-factor structure as opposed to a single-factor or a two-factor
structure of responsibility for learning. The three factors best explain
responsibility for learning considering that they cover a wide array of characteristics.
That responsibility for learning is composed of several dimensions is evidenced in
previous literature, which coincides with the multiple ways of structuring the classroom
environment that make students responsible for their learning (Corno, 1992;
Jackson, 2002).
The items of the scale also showed evidence of reliability, with high
internal consistencies obtained using Cronbach's alpha. When the IRT approach
was used to estimate person and item reliabilities, the item reliabilities were even
higher than the Cronbach's alpha values. This indicates that very high internal
consistencies of the items can be obtained when person characteristics are
separated in the analysis. The internal consistencies within each subscale indicate
that students' responses to items that belong to the same domain are consistent,
which means that measurement is uniform across the items. This can also be an
indicator of validity in that the behavioral indicators of the domain refer to
measurement of the same construct.
The present study not only showed the factorial validity and internal
consistency of the responsibility for learning scale but also its measurement precision
and scale thresholds using a Graded Response Model. Analysis within Classical Test
Theory (CTT) is limited to showing the validity and reliability of scales anchored
on correlation coefficients. The present study provided more information about
the scale using an IRT Graded Response Model.
First, the measurement precision of the three subscales was determined by
generating Test Information Functions (TIF). The continuum of the scales covered
90% of a normal distribution, which means that the scale has adequate
measurement precision regarding responsibility for learning. In CTT, measurement
precision is addressed through construct validation, which would require correlating the
present scale with a previously established scale. However, measurement precision
based on the areas covered under a normal curve requires only the present instrument.
A standard deviation of 1.5 units above and below the average (ability estimate) is
enough to cover the continuum for the responsibility for learning subscales.
Second, the threshold values of the seven-point scale were analyzed to
further support measurement accuracy in the calibration of the scales. The CTT
approach has no known estimate that indicates how well the response format of a
scale functions. In the GRM approach, the threshold levels of the seven-point numeric
scale used in the instrument were deemed accurate in three ways: (1) scale categories
were monotonically increasing across the three subscales, (2) a variety of threshold
values were obtained, and (3) the difficulty of the items could be examined. The
monotonic increase of the scale categories indicates that the participants endorse
higher levels of response for high scale categories and lower levels of response for
low categories. This indicates that participants can discriminate well among the points
of the seven-point numeric scale; there is an increasing threshold value from a scale
of 1 to 7 in all three subscales, showing accuracy in how participants responded
using the seven-point scale. Moreover, a variety of threshold values were obtained for all
three subscales. This indicates that the scale categories are distinct rather than
equivalent as perceived by the respondents, showing that the respondents can
distinguish among the points of the seven-point numeric scale. Lastly, the difficulty
of the scales was analyzed. Lower threshold categories were found for learning
process. The findings on difficulty describe how difficult or easy attributing
responsibility for learning can be, and this varies across the subscales. For example,
there is some degree of ease for students in attributing responsibility for their
learning process to the teacher (mean value of 3.08, see Table 1). The results also
show that there is some degree of difficulty for students in attributing responsibility
for their motivation and deportment to themselves.
The findings on the threshold categories of the seven-point scale coincide
with the attributions made in responding to each item. Higher scores mean
attribution of responsibility to the self, and lower scores mean attribution of
responsibility to the teacher. It was found that students perceive it as easy to
attribute responsibility for the learning process to the teacher and difficult
to assume responsibility for motivation and deportment themselves. These key
findings need further exploration to provide a better picture of how the dynamics of
these three factors work.
Generally, the present study established the construction of a responsibility
for learning scale. The pattern of findings, especially in the scale categories, may depend
on the kind of sample used in the study. However, key results are provided
regarding the psychometric properties of the scale. The scale is recommended for
use by other researchers to further determine its characteristics, especially its pattern
of thresholds and the dynamics of the three factors (deportment, learning process, and
motivation). With responsibility for learning having sound psychometric properties,
future researchers can further establish a line of research using the
construct.


References
Corno, L. (1992). Encouraging students to take responsibility for learning and
performance. The Elementary School Journal, 93(1), 69-83.
Crandall, V. C., Katkovsky, W., & Crandall, V. J. (1965). Children's beliefs in their
own control of reinforcements in intellectual-academic achievement
situations. Child Development, 36, 91-109.
Jackson, C. J. (2002). Learning styles and its measurement: An applied
neuropsychological model of learning for business and education. Sydney,
Australia: Cymeon.
Kitsantas, A., & Zimmerman, B. J. (2008). College students' homework and
academic achievement: The mediating role of self-regulatory beliefs.
Metacognition and Learning, 4, 97-110.
Magno, C. (2010). Assessing academic self-regulated learning among Filipino
college students: The factor structure and item fit. The International Journal
of Educational and Psychological Assessment, 5(1), 61-76.
O'Connor, P. J., & Jackson, C. J. (2009). The factor structure and validity of the
Learning Styles Profiler (LSP). European Journal of Psychological
Assessment, 24(2), 117-123.
Rockwell, J. G. (1951). Pupil responsibility for behavior. The Elementary School
Journal, 51(5), 266-270.
Waterman, C., Victor, T. W., Jensen, M. P., Gould, E. M., Gammaitoni, A. R., &
Galer, B. S. (2010). The assessment of pain quality: An item response
theory analysis. The Journal of Pain, 11(3), 273-279.
Zimmerman, B. J., & Martinez-Pons, M. (1988). Construct validation of a strategy
model of student self-regulated learning. Journal of Educational Psychology,
80, 284-290.
Zimmerman, B. J., & Kitsantas, A. (2005). Homework practices and academic
achievement: The mediating role of self-efficacy and perceived
responsibility beliefs. Contemporary Educational Psychology, 30, 397-417.
Zimmerman, B. J., & Kitsantas, A. (2007). Reliability and validity of Self-Efficacy
for Learning Form (SELF) scores of college students. Journal of
Psychology, 215(3), 157-163.
About the Author
Dr. Carlo Magno is presently a faculty member of the Counseling and Educational
Psychology Department under the College of Education at De La Salle University
in Manila, Philippines. Most of his publications are in the areas of self-regulation,
metacognition, language learning, learner-centeredness, and teacher performance
assessment. Further correspondence can be addressed to him at
crlmgn@yahoo.com or to De La Salle University, 2401 Taft Ave., Manila,
Philippines.
This study was funded by the University Research and Coordination Office
(URCO) of De La Salle University, Manila.
