
  O’Byrne Comprehensive Exam 2nd Task ‐ 1 

Running head: SECOND TASK

Cognition and Instruction Comprehensive Examination:

Article Review & Study Design

W. Ian O’Byrne

University of Connecticut

PART I

Attached is a study that appeared in Computers and Education: “Technology uses and student
achievement: A longitudinal study” by Jing Lei and Yong Zhao. This is a fairly representative
study for any newly emerging area such as technology and media use in the classroom. That is,
there are typically many weaknesses in early research that only become apparent with time. In
five single-spaced pages, systematically organize and critique the major design and interpretive
weaknesses in this study. Relate both your design and interpretive concerns to other studies in
this area that have taken more rigorous approaches to issues of technology and media use in
classrooms and their relationship to learning.

PART II

Identify the most important research question(s) in the area of media construction and
communication in school classrooms. Then design a study to answer your question(s). In five
single-spaced pages, present the research question(s), the rationale, theoretical framework,
methodology, and analytic approach(es) you would use, including both independent and
dependent measures. You may include instruments that have yet to be developed. If so, please
describe, briefly, how each would be evaluated for both validity and reliability.

Part I – Article Review


In “Technology uses and student achievement: A longitudinal study” (Lei & Zhao, 2005), the
authors investigate how technology use impacts student achievement in a one-to-one laptop
classroom. The purpose of the study was to examine the specific technologies used by students,
and to identify those that were the most effective in increasing a student’s GPA. The authors
suggest that the quantity of technology use in a classroom is not the most important factor in
raising student achievement; rather, the “how,” or quality, of technology use is many times more
important. In addition, the authors argue that excessive use of technology can be detrimental to
student achievement if it is not accompanied by assurances of high quality. The authors further
suggest that the technology uses determined to have the most positive impact on GPA were not
those most frequently employed by students, while the most widely used technologies tended to
be the least beneficial to student GPA. At times, however, the authors are not explicit enough
about the details and scope of the study’s methodology and design to support testing through
replication.
Considering the investments and research occurring around the use of technology in learning and
literacy programs, this study is timely and significant. However, given the changes to literacy
and pedagogy brought about by the proliferation of Internet and Communication Technologies
(ICTs) (Leu, Kinzer, Coiro, & Cammack, 2004), a few deficiencies hinder researchers and
educators from appreciating the full implications of this study. I will
critique the design and interpretative issues associated with this study, and suggest examples
from other studies in the field that have taken a more rigorous approach.

Design
Three significant areas of this work need improvement. First, the theoretical framework of this
study is far too narrow to capture the complexity of interactions that occur as students use
technology and multimodal texts. Second, the choice of sample, data collection instruments, and
analysis techniques does not account for all the variance in the model. Finally, it appears
that the instruments used to collect data are incomplete, and not capable of demonstrating the
variety of attitudes, aptitudes and skills involved as teachers and students use technological tools
in the classroom.

Theoretical Framework
The authors adequately present the literature in support of their research question as to which
affects GPA more, quality or quantity. The authors cite literature that argues this point (Loveless,
1996; McFarlane, 1997; Burbules & Callister, 2000; Cuban, 2001) to justify testing whether or
not effective technology use depends on “how it was used” (Wenglinsky, 1998, p. 3). In order
to meet the stated goals of this study, the authors should expand the theoretical framework of the
study.

As literacy, learning and pedagogy are affected by ICTs, we must question the traditional
relationships between teacher, student and text and make the “classroom walls more porous”
(Jewitt, 2008, p. 245) in order to examine the technological, literacy and discourse practices
involved (Cope & Kalantzis, 2000). Because of the deictic changes (Leu, 2000) that are
occurring to literacy it is important to incorporate a New Literacies perspective (Coiro, Knobel,
Lankshear & Leu, 2008; Leu, O’Byrne, Zawilinski, McVerry, & Everett-Cacopardo, 2009) to fully
observe the interactions that occur as students and teachers use ICTs. The authors should also
incorporate multiliteracies theory (New London Group, 1996), which would account for the
diverse social and cultural texts that circulate around the world (Jewitt, 2008). Finally, the
authors should include a theoretical perspective that accounts for multimodality. Multimodal
analysis (Kress & van Leeuwen, 2001) studies the signs, symbols, and texts that students
recreate using the media afforded to them. In making decisions about the “quality” of tools or
work done by students, research shows that individuals draw upon “available modal resources”
(Jewitt, 2008, p. 246) when working with ICTs. Adding these three theoretical frameworks to
the model would allow the researchers to investigate more thoroughly the factors that affect
technology use in the classroom.

Methodology
Several limitations in the design and methodology of the study substantially restrict the
conclusions that can be reached. It is not clear whether or not the subjects of the study are a
convenience sample, from which limited conclusions can be drawn. This is a limitation of the
study, but of much more significance is that the authors fail to disclose this, and several other
aspects about the design and methodology of the study. The participants come from a population
where only 1% of the building receives free or reduced school lunch, and the teacher-student
ratio was 9:1. The site of the survey also is host to a wealth of technological tools, including:
computers and projectors in every classroom; wireless Internet connection; and one-to-one
laptops for all students. The participant sample, after removing special education students and
students who did not complete both the pretest and posttest, consisted of 130 students. Aside
from an apparent lack of power due to the small sample size, the authors offer no explanation of
why students were included in the sample other than that they had completed all requirements of
the surveys. Consequently, the conclusions drawn by this study are diminished by the design
and structure of the sample group.

The interviews were used to supplement the data gathered through the surveys, but the authors were
not explicit in how teachers and students were selected for the semi-structured interviews. The
questions used in the interviews were not shared in the paper, but ranged from favorite
technological tools, to perceived benefits, to student use of tools. The authors do not quantify the
level of expertise the students and teachers of the interview pool have in regard to using
technology; or the instructional routines used with students.

The analysis of the data from the surveys and interviews is described only as “descriptive” in
nature, and it is assumed that the taxonomy used to categorize the technology uses (Levin &
Bruce, 2001) was somehow applied in the analysis of the data. Since the authors themselves
characterize the initial analysis as “descriptive,” it is unclear what conclusions can be drawn
from the data. The authors compared the survey data against students’ self-reported GPA,
conducting t tests and an ANOVA to test for group differences, along with correlational
analyses. The interview data was coded according to an undisclosed coding scheme. The authors
also do not explain the procedure for coding the results of the surveys or the interviews, or any
inter-rater checks used to ensure reliability (Shrout & Fleiss, 1979). These defects in the design
and reporting of the study limit its replicability, and thus the ability to test its veracity.
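Such inter-rater checks need not be elaborate. As an illustration only (not part of the original study), two coders’ category assignments could be compared with a simple chance-corrected agreement statistic such as Cohen’s kappa, a lighter-weight alternative to the intraclass correlation of Shrout and Fleiss. The coders and categories below are hypothetical.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of items on which the raters agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical coders categorizing five interview excerpts.
coder1 = ["benefit", "benefit", "use", "use", "barrier"]
coder2 = ["benefit", "benefit", "use", "barrier", "barrier"]
print(round(cohens_kappa(coder1, coder2), 3))  # → 0.706
```

Reporting even a single statistic of this kind, together with the coding scheme itself, would let readers judge whether the qualitative categories were applied consistently.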

Measures
A third flaw in the study is the incomplete selection of measures used to collect data. The
researchers collected two forms of data throughout the study: surveys and interviews. The
surveys were self-report surveys administered to students at the beginning and end of the
academic year. These surveys consisted of four sections: demographic information; current
technology use; self-report of GPA; and student technology uses. The pretest only included the
first three sections, while the posttest included all four. The validity of this survey is called into
question because it is self-reported and because the dependent variable on which this study is
based is students’ self-reported GPA. The scope of the survey is also too broad, in that it refers
to technologies that students never come into contact with, such as overhead projectors,
telephones, and TV/VCRs. The survey also consists of Likert-scale items with a scale of 1 to 4;
this small range restricts the amount of variance that can be captured in the analysis of the
survey. Aside from the survey, nine students and ten teachers were given semi-structured
interviews to obtain further perspective on which technology they used, the overall benefit of
these tools, and which tools they believed students used. The authors do not provide any details
on the interview questions or process that would allow for replication of the study. Furthermore,
there is no explanation of why some individuals were interviewed while others were not.

Perhaps the authors could have considered implementing the following measures. First, there
should be a measure of overall information fluency (Bunz, 2004), or an individual’s ability to
“express oneself creatively, reformulate knowledge, and synthesize information regarding new
information technology” (Bunz, 2004, p. 2). Second, the researchers should have included a
measure of the competency of the students and teachers in the use of technology. The CEW
Scale (Bunz, 2004) is one such tool, but other instruments can also be used to measure ability
with technology. Third, measures should be used to determine the critical thinking and
problem-solving skills employed by students, in order to identify those uses that truly improve
GPA. Fourth, there should be measures to determine whether teachers are actually using and
modeling technology in classroom instruction. These data could come from videotapes of
lessons, observations, or journals, but there should be some means to quantify the amount of
technology used in teachers’ instructional routines.
Of the five factors identified as key to developing instruction using technology (Cope &
Kalantzis, 2000) the entire “model begins with immersion in an acquisition-rich environment”
(Jewitt, 2008, p. 248). The authors cite McFarlane (1997) in the introduction to justify computer
use and the importance of objectives in well-designed tasks. Perhaps they should have attended
to McFarlane’s comment that a “lack of resources – computers and software are undeniably
important. However, a shortage of appropriate training in the effective use of Instructional
Technology may be the critical missing link” (McFarlane, 1997, p. 3). None of the measures used
in this study attempt to account for skill level of students or teachers in relation to technological
use, or instructional methods that support and model usage of technology.

Interpretation
The study and its conclusions suffer from several flaws in logic, most of which stem from the
issues found in the design and methodology of the study. This critique is organized around the
questions used by the authors in the results and discussion section. Notably, the authors
changed their research questions from those posed initially to a different set of questions used
in the results section. Consequently, questions abound as to the significance of their resultant
conclusions. The italicized questions below, taken from the study, make these deficiencies more
obvious.

Result 1: Quantity of technology use: does the amount of time spent using technology have any
impact on student achievement?
The results shared by the authors describe a regression analysis that shows a negative correlation
between quantity of time using technological tools and change in students’ GPA. Aside from the
negative and very small (Cohen, 1988) effect size (-0.047), the more severe issue is that this
finding rests entirely on self-reported data. The authors identify a “no-gain-point” of 3 hours per
day of technology use as the cutoff point for affecting change in GPA, a finding that could be the
subject of future research. The authors then split the sample into two groups based on use of
more or less than 3 hours of technology per day. An independent-samples t test is used to show a
significant difference in change of GPA between the groups (t(128) = 4.122, p < .001). Although
this finding supports the earlier claim about the “no-gain-point” of 3 hours, no evidence is given
to show equivalency of the two groups. The large amount of variance shown in the group with
more than 3 hours of technology use per day (M = -0.78, SD = 0.271) also causes us to question
equivalency between the groups.
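For readers unfamiliar with the statistic, a result such as t(128) = 4.122 comes from the pooled independent-samples t test. The sketch below illustrates the calculation only; the study’s raw data are not available, so the GPA-change values here are invented.

```python
import math
from statistics import mean, variance

def pooled_t_test(group1, group2):
    """Independent-samples t statistic with a pooled variance estimate."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = mean(group1), mean(group2)
    # Pooled variance: weighted average of the two sample variances.
    sp2 = ((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    df = n1 + n2 - 2  # e.g., two groups totaling 130 students give df = 128
    return t, df

# Hypothetical GPA changes for a low-use group and a high-use group.
low_use = [0.2, 0.1, 0.3, 0.0]
high_use = [-0.1, -0.3, 0.0, -0.2]
t, df = pooled_t_test(low_use, high_use)
print(round(t, 3), df)  # → 3.286 6
```

Note that the pooled form assumes roughly equal variances in the two groups, which is precisely the equivalency the authors never demonstrate.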

Result 2: How are technologies being used in schools in general?


The findings in this section describe a multitude of technology uses by students, which the
authors say support earlier findings by Bruce & Levin (1997). It would be interesting to follow up
these findings with future research that analyzes results obtained from student work, and not just
a survey or interview.

Result 3: What are the most frequent and least frequent technology uses?
In the section on frequency of technology use by students, the authors separate the most popular
and least popular uses based on student results. The findings are largely anecdotal and show uses
that the students like and find to be “very cool.” The question remains how much of these results
measure technology use as a part of teacher instruction and modeling, and not simply what the
students find to be “very cool.”

Result 4: Quality of technology use: what technology uses have positive educational impacts?
This section of the results determines “quality” of use based on correlation with GPA and results
from interviews. Two central questions cloud the findings presented here. The first is the lack of
measures to inform the nature of teacher instruction in the building and the level of student
ability with the tools. Without either of these, and with holes in the theoretical framework, it is
difficult to judge quality of use. The second problem is that Microsoft Word was shown to be a
tool that students frequently used, yet one that apparently had the least positive impact on GPA.
The authors share that students frequently used Microsoft Word as a hiding mechanism during
off-task behavior: students were skilled at “switching programs” and covering up other activities
with a Microsoft Word document to distract the teacher. This leads to uncertainty about the
relationship between GPA and the “quality” of technology use.

Result 5: Quantity vs. quality of technology use: are technology uses that had positive impact
most frequently used?
In this section the authors suggest that students rarely use technology that affects GPA
positively, while negatively correlated uses are more frequent. The findings here are inconclusive
because they do not account for teacher instruction or students’ skill with technology. The
graph provided in this section seems more to be a display of the types of technology use that
teachers use with students, as compared to what uses students employ on their own. The
negatively correlated items (Microsoft Word, Internet, Emailing friends, PowerPoint) are shown
in contrast to the technology uses that are required by instructional technology modeling (Create
websites, Geometer’s Sketchpad, Desktop Publishing, Programming, Aleks, Science Probe).

Conclusion
In outlining the conclusions and implications for further research, three other studies should be
consulted that address the same questions about student technology use and working with media
in the classroom using a much more rigorous approach.

In the work undertaken by Kimber, Pillay & Richards (2007), a theoretical framework of
technoliteracy is developed to incorporate the theories of multiliteracies and New Literacies.
Multimodal design is also highlighted as a process for agency as students construct electronic
concept maps and websites. The work of the 41 teenage girls is analyzed using the Structure of
Observed Learning Outcomes (SOLO) taxonomy (Biggs & Collis, 1982) which is used to
differentiate levels of understanding shown in the products constructed by students. The
participants are divided between two classroom teachers, each of whom has a differing level
of expertise with technoliterate practices. The results suggest incremental and time-related
increases in learning gains through the use of the technoliteracy instructional model and using
multimodal texts.

In the study by O’Brien, Beach & Scharber (2007) the principles of motivation, new literacies,
and multimediating are used to frame the endeavors of students working with multimodal
sources. The seventh and eighth grade students (n = 15) worked with media-rich sources to
examine how their work with the projects would affect agency, self-efficacy and their status as
struggling readers and writers. Data analysis of observations, interviews, focus groups, think-
alouds, and change in standardized reading test scores showed greater engagement and increase
in agency and self-efficacy.

Finally, the research conducted by Watts & Lloyd (2004) examines the use of classroom
interventions using ICT and multimedia use daily to explore gains in student achievement. The
study consisted of 219 6th grade students in eight schools in the United Kingdom. The research
data consisted of pre- and post-test assessments, along with interviews with 48 teachers and
students. The pre- and post-tests took place in a teaching unit on journalism and consisted of two
writing tasks that were scored for quality and clarity of thought. Researchers compiled
descriptive data about school and teacher performance, along with student GPA and standardized
test scores. Results suggest use of ICTs and multimodal creation tools allow for growth in
motivation, quality, complexity of writing, and linguistic abilities. Teachers noted difficulty in
monitoring off-task students. Students described ICT use and working with multimodal texts to
be a more reliable source of information than the teacher in some instances.

Consequently, although this study is inexact, it leads to the understanding that improvement is
possible as long as the sample group is comprehensively evaluated and results are predicated on
sound scientific sampling.

References
Biggs, J., & Collis, K. (1982). Evaluating the quality of learning: The SOLO taxonomy
(Structure of the Observed Learning Outcomes). New York: Academic Press.

Bruce, B. C., & Levin, J. A. (1997). Educational technology: media for inquiry, communication,
construction, and expression. Journal of Educational Computing Research, 17(1), 79–
102.

Burbules, N., & Callister, T. (2000). Watch IT: The promises and risks of new information
technologies for education. Boulder, CO: Westview Press.

Bunz, U. (2004). The Computer-Email-Web (CEW) Fluency Scale: Development and validation.
International Journal of Human-Computer Interaction, 17, 479–506.

Cohen, J. (1988). Statistical power for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.

Coiro, J., Knobel, M., Lankshear, C. & Leu, D. (Eds.) (2008). Handbook of research on new
literacies. Mahwah, NJ: Lawrence Erlbaum Associates.

Cope, B., & Kalantzis, M. (Eds.). (2000). Multiliteracies. London: Routledge.

Cuban, L. (2001). Oversold and underused: Computers in schools 1980–2000. Cambridge, MA:
Harvard University Press.

Flewitt, R. (2006). Using video to investigate preschool classroom interaction: Education
research assumptions and methodological practices. Visual Communication, 5(1), 25–51.

Jewitt, C. (2008). Multimodality and literacy in school classrooms. Review of Research in
Education, 32(1), 241-267.

Kimber, K., Pillay, H., & Richards, C. (2007). Technoliteracy and learning: An analysis of the
quality of knowledge in electronic representations of understanding. Computers &
Education, 48(1), 59-79.

Kress, G., & van Leeuwen, T. (2001). Multimodal discourse: The modes and media of
contemporary communication. London: Arnold.

Lei, J., & Zhao, Y. (2005). Technology uses and student achievement: A longitudinal study.
Computers & Education, 49(2), 284-296.

Leu, D. J. (2000). Literacy and technology: Deictic consequences for literacy education in an
information age. In M.L. Kamil, P. Mosenthal, R. Barr, & P.D. Pearson (Eds.), Handbook
of reading research: Volume III (pp.743-770). Mahwah, NJ: Erlbaum.

Leu, D.J., Jr., Kinzer, C.K., Coiro, J., & Cammack, D. (2004). Toward a theory of new literacies
emerging from the Internet and other information and communication technologies. In
R.B. Ruddell & N. Unrau (Eds.), Theoretical models and processes of reading (5th ed.,
pp. 1568–1611). Newark, DE: International Reading Association.

Leu, D. J., O’Byrne, W. I., Zawilinski, L., McVerry, J. G., & Everett-Cacopardo, H. (2009).
Expanding the new literacies conversation. Educational Researcher, 38(4), 264-269.

Levin, J. & Bruce, B. (2001). Technology as media: the learner centered perspective. Paper
presented at the 2001 AERA meeting, Seattle, WA.

Loveless, T. (1996). Why aren’t computers used more in schools? Educational Policy, 10(4),
448–467.

McFarlane, A. (1997). What are we and how did we get here? In A. McFarlane (Ed.),
Information technology and authentic learning: realizing the potential of computers in
the primary classroom. London, England: Routledge.

The New London Group. (1996). A pedagogy of multiliteracies: Designing social futures.
Harvard Educational Review, 66, 60–92.

O’Brien, D., Beach, R., & Scharber, C. (2007). “Struggling” middle schoolers: Engagement and
literate competence in a reading writing intervention class. Reading Psychology, 28(1),
51-73.

Peyton, J., & Bruce, B. (1993). Understanding the multiple threads of network-based
classrooms. In B. C. Bruce, J. K. Peyton, & T. W. Batson (Eds.), Network-based
classrooms: Promises and realities (pp. 50–64). New York: Cambridge University Press.

Shrout, P. & Fleiss, J. (1979). Intraclass correlations: Uses in assessing rater reliability.
Psychological Bulletin, 86, 420-428.

Watts, M., & Lloyd, C. (2004). The use of innovative ICT in the active pursuit of literacy.
Journal of Computer Assisted Learning, 20(1), 50-58.

Wenglinsky, H. (1998). Does it compute: The relationship between educational technology and
student achievement in mathematics. Princeton, NJ: Educational Testing Service.

Zhao, Y. (2003). What should teachers know about technology: Perspectives and practices.
Greenwich, CT: Information Age Publishing.

Part Two – Study Design


Rationale
Literacy and the practices used to communicate have changed drastically as a result of Internet
and Communication Technologies (ICTs) and their interaction with society (Coiro, Knobel,
Lankshear & Leu, 2008). As these changes amalgamate into new modes and mediums of
expression, researchers and educators endeavor to prepare adolescents with the skills and
strategies they need. Because of the rapid change that is occurring, coupled with high-stakes
pedagogy and policy, classrooms have not witnessed a great deal of the change warranted by the
effects of ICTs on society (Leu, McVerry, O’Byrne, Zawilinski, Castek & Hartman,
2009). The skills and strategies demanded by these changes in literacy have begun to be understood
through research that surveys the landscapes of both in-school and out-of-school contexts (Coiro,
Knobel, Lankshear & Leu, 2008). This study extends the research on online reading
comprehension and the instructional method known as Internet Reciprocal Teaching (IRT) (Leu,
Coiro, Castek, Hartman, Henry, & Reinking, 2008).

This quasi-experimental, mixed methods study (Shadish, Cook & Campbell, 2002; Johnson &
Onwuegbuzie, 2004) will examine the communication practices of adolescents as they construct
media using ICT tools. The purpose of the study is to examine the effect of an instructional
model on the skills and strategies used in online content construction by adolescents. This study
will also examine how media construction and communication affect offline communication.
Results will address effects of working with media construction in four areas: (1) traditional
writing achievement; (2) online reading comprehension; (3) the instructional model; and (4)
dispositions of students skilled in online content creation. This study is timely and appropriate because it
helps illuminate effects of instruction and modeling on student use of ICT tools. This study will
also help researchers and educators examine future research needed to understand the effect on
the complexity of activities involved as students use ICT tools for literacy practices.

Theoretical Framework
To properly address the dynamic nature of literacy as it is affected by ICTs, this study needs to
be framed by multiple theoretical perspectives (Labbo & Reinking, 1999). The tenets of New
Literacies theory, Multiliteracies, and Multimodal Design will be used to support the work being
done in this study.

New Literacies
Changes occurring to literacy as a result of ICTs have caused researchers to examine the “higher
order thinking skills and flexibility” (Peterson & Walberg, 1979) that will be needed as
adolescents interact globally. The pedagogies entrusted with facilitating these skills need to be
reexamined as ICTs continue to “incorporate” (Bruce & Levin, 2003) themselves into literacy
practices. New Literacies maintains that as technology evolves, literacy rapidly changes (Leu,
Kinzer, Coiro, & Cammack, 2004), and this transactional relationship has grown with
unprecedented speed (Coiro, Knobel, Lankshear, & Leu, 2008).

Multiliteracies
In communication and content construction using ICT tools, the construction of literacy occurs
through the “legitimation and valuing of different kinds of texts and interactions” (Jewitt, 2008,
p. 248). In order for these interactions to occur in the classroom it is important to expand the
  O’Byrne Comprehensive Exam 2nd Task ‐ 11 

traditional texts and written languages used in order to incorporate the diverse communicative
mediums that happen around the world (New London Group, 1996).

Multimodal Design
As an extension of the work outlining multimodalities (Kress & van Leeuwen, 2001; Jewitt,
2008), design focuses on the subtle interplay of meaning in constructing multimodal texts.
Design is outlined by six elements: linguistic, visual, audio, gestural, spatial and multimodal
(New London Group, 2000). When incorporated with a new literacies framework, students act as
“designers” and are “licensed to apply critiqued knowledge of the subject/topic synthesized from
multimodal sources” (Kimber & Wyatt-Smith, 2006, p. 26). As a result the student constructs a
“representation of new knowledge” (Kimber & Wyatt-Smith, 2006, p. 26), communicates this
using illustrative capital and strives to engage their audience.

Previous Research
New Literacies of online reading comprehension
Prior research informing this study includes the examinations of online reading comprehension
(Leu, Kinzer, Coiro & Cammack, 2004; Leu et al., 2007). The research examined the skills and
strategies necessary for students to successfully search, sift and succeed in finding answers to
questions obtained in inquiry projects (Leu, Kinzer, Coiro, & Cammack, 2004). Work instructing
adolescents in online reading comprehension skills (question, locate, evaluate, synthesize,
communicate) included the implementation of IRT (Leu, Coiro, Castek, Hartman, Henry, &
Reinking, 2008; Leu et al., 2008), which has been shown to build these skills effectively in
adolescents. IRT was initially based on Reciprocal Teaching (Palincsar & Brown, 1984). The goal of IRT is to
facilitate the reading and communication practices used by individuals as they use ICT tools.

Communication
Research investigating computer mediated communication (CMC) as a pedagogical tool is still
evolving and the focus is split between in-school and out-of-school settings. Of the research that
examines in-school CMC use, there is some research that informs this study. Research shows
that children do use CMC tools successfully in classrooms (Burnett & Myers, 2006), but that
many times they seem more preoccupied with editing styles and fonts than with content or
grammar (Matthewman & Triggs, 2004). It is widely accepted that CMC tools are rapidly
changing communication in the classroom (Burnett, Dickinson, Myers & Merchant, 2006), but in
order to see gains in writing complexity and voice (Merchant, Dickson, Burnett & Myers, 2006)
extensive research into scaffolding techniques is required (Vincent, 2006). Not only has CMC
use been shown to raise engagement levels in the most recalcitrant writers (McGinnis, 2007;
Harushimana, 2008), it has also been able to build volume, creativity and complexity in student
writing (Vincent, 2001; Riley & Ahlberg, 2004; Kimber, Pillay & Richards, 2007). Even as we
learn more about CMC use by adolescents, they continue to reinvent ways to use the tools for
their own purposes (Merchant, 2001; Merchant, 2005; Mallan & Giardina, 2009).

Media Construction
Online content creation (OCC) by students in educational settings also suffers from a paucity of
research that strives to inform a highly dynamic field. Even as researchers begin to understand
the practices of OCC tool use by adolescents, the divisions between practices, mediums and
genres blur (Kervin, 2009), and the roles of students and educators merge as well (Matthewman,
  O’Byrne Comprehensive Exam 2nd Task ‐ 12 

Blight & Davies, 2004). Despite the vacillating ground upon which OCC is situated, students and
educators are able to successfully use media to construct and reconfigure new spaces and
mediums where previously none existed (Hull & Katz, 2006; Marsh, 2006; Ranker, 2008). OCC
has been shown to motivate students (Watts & Lloyd, 2004), and build student agency (Skinner
& Hagood, 2008); even for students with special needs (Faux, 2005). Furthermore, research
shows students able to complete complex tasks, producing high quality content, sometimes with
little or no instruction (Bruce, 2008; Courtland & Paddington, 2008; Nelson, Hull & Roche-
Smith, 2008).

Design
Research Questions
(1) What effects does an instructional model supporting media construction and communication
using ICT tools have on traditional writing performance, while controlling for previous writing
ability? (2) What effects does an instructional model supporting media construction and
communication using ICT tools have on online reading comprehension performance? (3) What
are the skills and strategies used by 8th grade students who effectively use online tools for media
construction and communication?

Participants
The students in this study will be 360 eighth grade students from an urban school in Connecticut,
a sample size justified by a power analysis (ES = 0.2, power = 0.95, α = 0.05) conducted with
G*Power (Faul, Erdfelder, Lang, & Buchner, 2007). This school was chosen because its
population is diverse in socioeconomic status, ethnicity, technical ability, and academic
achievement. The students will be recruited from the classes of three eighth grade English
language arts (ELA) teachers in one building. The teachers will be selected from volunteers and
randomly assigned to the treatment or control groups. The students that the teachers would
normally receive at the beginning of the school year will be recruited for this study. Each teacher
is normally assigned 120 students, which the team divides among four classes, so each class will
consist of around 30 students. Students documented with learning disabilities or as limited
English proficient will account for roughly 30% of the sample, or approximately 100 students
across all three groups.
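The power analysis itself can be sanity-checked in code. The sketch below is a rough Monte Carlo approximation written for illustration only; it assumes the reported effect size is Cohen's f = 0.2 for a one-way comparison of the three classroom conditions and uses the large-sample critical value for F(2, df2) at α = .05. It is not a substitute for the G*Power computation.

```python
import random

def anova_f(groups):
    """One-way ANOVA F statistic for a list of samples."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    means = [sum(g) / len(g) for g in groups]
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def simulated_power(n_per_group=120, f=0.2, sims=2000, f_crit=3.00, seed=1):
    """Estimate power by Monte Carlo: draw three normal groups whose means
    produce Cohen's f, then count rejections at roughly the alpha = .05
    critical value for F(2, large df2)."""
    rng = random.Random(seed)
    # Group means of -d, 0, +d give Cohen's f = d * sqrt(2/3).
    d = f / (2.0 / 3.0) ** 0.5
    hits = 0
    for _ in range(sims):
        groups = [[rng.gauss(mu, 1.0) for _ in range(n_per_group)]
                  for mu in (-d, 0.0, d)]
        if anova_f(groups) > f_crit:
            hits += 1
    return hits / sims
```

Run as-is, the estimate gives a quick check on whether 120 students per condition is in the right neighborhood for the stated parameters.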

Eighth grade students in this middle school have been selected because the Direct Assessment of
Writing (DAW) portion of the state diagnostic tests in seventh and eighth grades focuses on
persuasive writing. This sample was also selected because the school administers school-wide
writing prompts (SWP) that mimic the DAW three times during the year (September, January &
May). Although this sample is partially a convenience sample, it provides the researcher with the
opportunity to compare student results with progress on state-assessed writing (7th & 8th grade
DAW) and two measures of building-assessed writing (the September and January SWPs). These
measures provide objective data from traditional offline assessments scored at the state and local
levels, rather than self-reported data. Writing samples are scored by two trained raters, whose
scores are summed into a composite score. If the two scores disagree by more than one point, a
third rater with additional training scores the sample. Training includes holistic methods such as
anchor sets to ensure the reliability of scores.
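The two-rater scoring rule reduces to a small routine. This is an illustrative sketch only: the function name is invented, and the convention for combining a third rater's score with the original two is an assumption, since the procedure above does not specify it.

```python
from typing import Optional

def composite_score(rater1: int, rater2: int,
                    rater3: Optional[int] = None) -> int:
    """Sum two trained raters' scores into a composite; if they disagree
    by more than one point, require a third, more highly trained rater."""
    if abs(rater1 - rater2) > 1:
        if rater3 is None:
            raise ValueError("scores differ by more than one point; "
                             "a third rater must score this sample")
        # Assumed adjudication rule: pair the third rater's score with
        # whichever original score it sits closer to.
        closer = min((rater1, rater2), key=lambda s: abs(s - rater3))
        return rater3 + closer
    return rater1 + rater2
```

For example, `composite_score(4, 5)` yields a composite of 9, while `composite_score(2, 5)` raises until a third score is supplied.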

Independent & Dependent Measures



The independent variable is classroom condition, which has three levels: a treatment group using
IRT as the instructional model for content creation, with one-to-one computer access; a control
group using a traditional instructional model with one-to-one computer access; and a control
group using traditional methods with no computer access. The dependent variables are: 1) 7th &
8th grade scores on the CT State DAW test; 2) pretest and posttest scores on the SWP in
September & January; 3) pretest and posttest scores on the Online Reading Comprehension
Assessment (ORCA); and 4) coded and transcribed video screen captures of student interviews.
The ORCA has demonstrated high validity and reliability (McVerry, O’Byrne & Robbins, 2009).

Procedure
Students will use media construction tools to build websites during student inquiry projects.
These tools include, but are not limited to: iPhoto and Aviary (photo editing software); iWeb
(website creation software); iMovie (video editing software); Audible (audio editing software);
and Microsoft Word. Within the 8th grade ELA curriculum, students will select a topic of
interest, use the Internet to inquire about the topic, and construct a website to communicate their
findings. Students will research and work collaboratively, but each student will submit their own
work. The treatment classroom will be equipped with one laptop per student, wireless Internet
connectivity, and an LCD projector connected to the teacher’s laptop. One of the control
classrooms will be equipped identically.

Beginning in September, all students will take the pretest assessments (ORCA & SWP). The
teacher in the intervention group will receive basic training using the media construction tools
and IRT. Both control teachers will be expected to use normal classroom instruction to assist the
students in working through the 8th grade ELA curriculum; the control group with computer
access can use the machines at their discretion. Since the IRT model has been validated (Leu et
al., 2008), any modifications made to the IRT model as a result of focusing on media
construction and communication will be noted for future iterations of the model. The researcher
and classroom teacher will discuss and make changes to the model when necessary. Weekly
fidelity protocols (Leu et al., 2008) will document adherence to the model, and any changes
made.

The treatment will occur for 90 minutes, twice a week for 16 weeks. The researcher will serve as
a participant observer in the classroom. The intervention teacher and the researcher will
collaboratively plan lessons, and the teacher will deliver instruction with the researcher’s
support. Instruction will consist of three
phases, consistent with the IRT model. Phase 1 will be largely teacher directed and provide
modeling for students of online reading comprehension skills and media construction tools.
Phase 2 will build upon student strategy exchange skills, with some support from instructor
scaffolding. Students will plan the purpose, audience and design of the website and begin
construction. Phase 3 is largely student directed, with the instructor highlighting work by groups,
providing “just in time” strategies (Gee, 2003), or directing groups to representative websites.

The final products for this study will be a graphic organizer and a website. The graphic organizer
is a storyboard (Bailey & Blythe, 1998) that students complete using colored pencils, adding in
all details and functions of the proposed website. After being given a list of ten sample websites
by the instructors, each class in the intervention will discuss the characteristics they believe a
high quality website contains. Rubrics to assess the websites will be constructed by students,
with instructor support.
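A student-constructed rubric of this kind reduces to weighted criteria and per-criterion scores. The sketch below is purely illustrative; the criterion names and weights are invented, since the actual rubric will be negotiated in class.

```python
def rubric_total(scores, weights):
    """Total a website's rubric score as a weighted sum over the
    criteria the class agreed on. Raises KeyError if a criterion
    in the rubric was left unscored."""
    return sum(weights[c] * scores[c] for c in weights)

weights = {"design": 2, "content": 3, "navigation": 1}  # hypothetical criteria
scores = {"design": 4, "content": 5, "navigation": 3}   # 1-5 scale, illustrative
total = rubric_total(scores, weights)  # 2*4 + 3*5 + 1*3 = 26
```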

Analysis
Quantitative Analysis
RQ#1: What effects does an instructional model supporting media construction and
communication using ICT tools have on traditional writing performance, while controlling for
previous writing ability? An analysis of covariance (ANCOVA) will be run separately for the
DAW and SWP, using the pretest for each as the covariate. The rationale for running these
analyses separately is that there is likely a high correlation between the two tests, because the
SWP is meant to prepare students for the DAW. ANCOVA is preferred in this context because it
tests for posttest differences between the groups while holding pretest differences (e.g., prior
writing ability, prior instruction) constant.
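The adjustment that ANCOVA performs can be illustrated directly: each group's posttest mean is shifted along the pooled within-group regression slope to the grand pretest mean. The sketch below uses synthetic scores and invented names, and shows only the adjusted means, not the full significance test.

```python
def adjusted_means(pre_by_group, post_by_group):
    """ANCOVA-style adjusted posttest means: each group's posttest mean
    is corrected for its distance from the grand pretest mean, using the
    pooled within-group slope of posttest on pretest."""
    def mean(xs):
        return sum(xs) / len(xs)

    grand_pre = mean([x for g in pre_by_group for x in g])
    # Pooled within-group slope b = sum(Sxy) / sum(Sxx).
    sxy = sxx = 0.0
    for pre, post in zip(pre_by_group, post_by_group):
        mp, mq = mean(pre), mean(post)
        sxy += sum((x - mp) * (y - mq) for x, y in zip(pre, post))
        sxx += sum((x - mp) ** 2 for x in pre)
    b = sxy / sxx
    return [mean(post) - b * (mean(pre) - grand_pre)
            for pre, post in zip(pre_by_group, post_by_group)]

# Two groups whose raw posttest means are identical, but the treatment
# group started lower on the pretest.
treat_pre, treat_post = [1, 2, 3, 4, 5], [3, 4, 5, 6, 7]
ctrl_pre, ctrl_post = [3, 4, 5, 6, 7], [3, 4, 5, 6, 7]
adj = adjusted_means([treat_pre, ctrl_pre], [treat_post, ctrl_post])
```

Here the raw posttest means are identical (5 and 5), yet `adj` comes out to `[6.0, 4.0]`: once pretest differences are held constant, a two-point treatment effect emerges.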

RQ#2: What effects does an instructional model supporting media construction and
communication using ICT tools have on online reading comprehension performance? A mixed-
design analysis of variance (ANOVA) will be used to test change from pretest to posttest on the
ORCA, with the three levels of group membership as the between-subjects factor and pretest and
posttest ORCA scores as the repeated measure.

Although offline writing skills, online communication skills, and facility with multimodal
sources may differ, it is hypothesized that correlations will exist among these variables. Because
very high correlations could exist between the DAW, SWP, and ORCA, the decision has been
made to analyze them separately. Alternatively, a MANOVA could be used to analyze the
dependent variables jointly, accounting for the correlations among them.
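That decision could be checked empirically before committing to separate models: compute the pairwise correlations among the DAW, SWP, and ORCA scores and see whether they are high enough to make a joint model redundant. A minimal Pearson correlation sketch (the score lists are synthetic placeholders, not real data):

```python
def pearson_r(xs, ys):
    """Pearson product-moment correlation between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative only: if DAW and SWP scores track each other this closely,
# modeling them jointly would add little.
daw = [6, 8, 7, 9, 10, 5]
swp = [5, 8, 7, 10, 9, 6]
r = pearson_r(daw, swp)
```

With these synthetic lists `r` is roughly .89; correlations of that size between the real measures would support analyzing them separately.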

Qualitative Analysis
RQ#3: What are the skills and strategies used by 8th grade students who effectively use online
tools for media construction and communication? The twenty students whose websites score
highest on the rubric will be interviewed while their screens are recorded with iShowU, a screen
capture application. The students will be asked to share the decisions they made during the
content creation process. These interviews will be transcribed and coded using NVivo 8, a
qualitative data analysis program. Coding procedures will use rigorous content analysis
(Mayring, 2000) to inductively analyze (Patton, 2002) the documents. Qualitative data, such as
website content and student commentary, will be analyzed through constant comparative
analysis (Strauss & Corbin, 1990).
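Once codes are exported from NVivo, the inductive analysis can be supported by a simple frequency tally across transcript segments. The code labels below are hypothetical placeholders, not categories drawn from actual data:

```python
from collections import Counter

def tally_codes(coded_segments):
    """Count how often each inductively generated code is applied
    across all interview transcript segments."""
    return Counter(code for segment in coded_segments for code in segment)

# Each inner list holds the codes applied to one transcript segment.
segments = [
    ["planning", "audience-awareness"],
    ["revision", "planning"],
    ["planning", "multimodal-choice"],
]
counts = tally_codes(segments)
```

Here `counts["planning"]` is 3, flagging it as a candidate core category for the constant comparative pass.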

Potential Limitations
Limitations exist because the study uses a non-equivalent groups design: students are not
randomly assigned to group membership. The threat of selection could therefore affect internal
validity, since prior differences between groups could affect the outcomes of the study. Little is
known about the correlations between online content creation or communication and traditional
writing skills; other variables may affect these relationships, and this study may help uncover
some of them. Because many variables affect traditional writing ability, demonstrating a
correlation with skills in online spaces does not mean that one causes the other. Future studies
should create assessments of online media construction and communication and test how these
relate to instructional method, prior instruments, and student dispositions.

References
Bailey, G. & Blythe, M. (1998). Outlining, diagramming and storyboarding: Three
essential strategies for planning and creating effective educational websites. Learning &
Leading With Technology, 6-11.

Bruce, B., & Levin, J. (2003). Roles for new technologies in language arts: Inquiry,
communication, construction, and expression. In J. Flood, D. Lapp, J. Squire, & J. Jensen
(Eds.), The handbook for research on teaching the language arts, 2nd edition (649-657).
Mahwah, NJ: Lawrence Erlbaum Associates.

Bruce, D. (2008). Visualizing literacy: Building bridges with media. Reading & Writing
Quarterly, 24(3), 264-282.

Burnett, C., & Myers, J. (2006). Observing children writing on screen: Exploring the
process of multi-modal composition. Language and Literacy, 8(2), 1.

Burnett, C., Dickinson, P., Myers, J., & Merchant, G. (2006). Digital connections:
transforming literacy in the primary school. Cambridge Journal of Education, 36(1), 11-
29.

Coiro, J., Knobel, M., Lankshear, C. & Leu, D. (Eds.) (2008). Handbook of research on
new literacies. Mahwah, NJ: Lawrence Erlbaum Associates.

Courtland, M., Paddington, D., & Schools, L. (2008). Digital literacy in a grade 8
classroom: An e-zine webquest. Language and Literacy, 10(1), 1.

Faul, F., Erdfelder, E., Lang, A., & Buchner, A. (2007). G*Power 3: A flexible statistical
power analysis program for the social, behavioral, and biomedical sciences. Behavior
Research Methods, 39, 175-191.

Faux, F. (2005). Multimodality: how students with special educational needs create
multimedia stories. Education, Communication & Information, 5(2), 167-181.

Gee, J. (2003). What video games have to teach us about learning and literacy. New
York: Palgrave Macmillan.

Harushimana, I. (2008). Literacy through gaming: The influence of videogames on the
writings of high school freshman males. Journal of Literacy and Technology, 9(2), 1-22.

Hull, G., & Katz, M. (2006). Crafting an agentive self: Case studies of digital
storytelling. Research in the Teaching of English, 41(1), 43.

Jewitt, C. (2008). Multimodality and literacy in school classrooms. Review of Research
in Education, 32(1), 241-267.

Johnson, R.B. & Onwuegbuzie, A.J. (2004). Mixed methods research: A research
paradigm whose time has come. Educational Researcher, 33, 14-26.

Kervin, L. (2009). 'GetReel': Engaging year 6 students in planning, scripting, actualising
and evaluating media text. Literacy, 43(1), 29-35.

Kimber, K. & Wyatt-Smith, C. (2006). Using and creating knowledge with new
technologies: a case for students-as-designers. Learning, Media and Technology, 31(1),
19-34.

Kimber, K., Pillay, H., & Richards, C. (2007). Technoliteracy and learning: An analysis
of the quality of knowledge in electronic representations of understanding. Computers &
Education, 48(1), 59-79.

Kress, G. & van Leeuwen, T. (2001). Multimodal discourse: The modes and media of
contemporary communication. London: Arnold.

Labbo, L. & Reinking, D. (1999). Negotiating the multiple realities of technology in
literacy research and instruction. Reading Research Quarterly, 34, 478-492.

Leu, D.J., Jr., Kinzer, C.K., Coiro, J., Cammack, D. (2004). Toward a theory of new
literacies emerging from the Internet and other information and communication
technologies. In R.B. Ruddell & N. Unrau (Eds.), Theoretical Models and Processes of
Reading, Fifth Edition (1568-1611). Newark, DE: International Reading Association.

Leu, D. J., Zawilinski, L., Castek, J., Banerjee, M., Housand, B., Liu, Y., et al. (2007).
What is new about the new literacies of online reading comprehension? In L. Rush, J.
Eakle, & A. Berger, (Eds.). Secondary school literacy: What research reveals for
classroom practices. (pp. 37-68). Urbana, IL: National Council of Teachers of English.

Leu, D. J., Coiro, J., Castek, J., Hartman, D., Henry, L. A., & Reinking, D. (2008).
Research on instruction and assessment in the new literacies of online reading
comprehension. To appear in Cathy Collins Block, Sherri Parris, & Peter Afflerbach
(Eds.). Comprehension instruction: Research-based best practices. New York: Guilford
Press.

Leu, D. J., Reinking, D., Hutchinson, A., McVerry, J. G., Robbins, K., Rogers, A.,
Malloy, J., O’Byrne, W. I., Zawilinski, L. (2008). The TICA Project: Teaching the New
Literacies of Online Reading Comprehension to Adolescents. An alternative symposium
presented at the National Reading Conference, Orlando, FL.

Leu, D. J., McVerry, J. G., O’Byrne, W. I., Zawilinski, L., Castek, J., Hartman, D. K.
(2009). The new literacies of online reading comprehension and the irony of no child left
behind: Students who require our assistance the most, actually receive it the least. In L.
Mandel Morrow, R. Rueda, & D. Lapp (Eds). Handbook of research on literacy
instruction: Issues of diversity, policy, and equity. New York, NY: Guilford Press.

McGinnis, T. (2007). Khmer rap boys, X-Men, Asia's fruits, and Dragonball Z: Creating
multilingual and multimodal classroom contexts. Journal of Adolescent & Adult Literacy,
50(7), 570–579.

McVerry, J. G., O’Byrne, W. I. & Robbins, K. (2009). Validating instruments used in the
TICA Project. American Educational Research Association Annual Meeting, San Diego,
CA.

Mallan, K. & Giardina, N. (2009). Wikidentities: Young people collaborating on virtual
identities in social network sites. First Monday, 14(6). Retrieved June 15, 2009 from
http://www.uic.edu/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/2445/2213

Matthewman, S., Blight, A., & Davies, C. (2004). What does multimodality mean for
English? Creative tensions in teaching new texts and new literacies. Education,
Communication & Information, 4(1), 153-176.

Matthewman, S., & Triggs, P. (2004). ‘Obsessive compulsive font disorder’: the
challenge of supporting pupils writing with the computer. Computers & Education, 43(1-
2), 125-135.

Marsh, J. (2006). Emergent media literacy: Digital animation in early childhood.
Language and Education, 20(6), 493-506.

Mayring, P. (2000). Qualitative content analysis. Forum Qualitative Social
Research/Forum Qualitative Sozialforschung, 1(2). Retrieved September 24, 2008, from
http://www.qualitative-research.net/fqs-texte/2-00/2-00mayring-e.htm

Merchant, G. (2001). Teenagers in cyberspace: An investigation of language use and
language change in internet chatrooms. Journal of Research in Reading, 24(3), 293-306.

Merchant, G. (2005). Digikids: cool dudes and the new writing. E-Learning, 2(1), 50-60.

Merchant, G., Dickinson, P., Burnett, C., & Myers, J. (2006). Do you like dogs or
writing? Identity performance in children's digital message exchange. English in
Education, 40(3), 21-38.

Nelson, M., Hull, G., & Roche-Smith, J. (2008). Challenges of multimedia self-
presentation: taking, and mistaking, the show on the road. Written Communication, 25(4),
415-440.

The New London Group. (2000). A pedagogy of multiliteracies designing social futures.
In B. Cope & M. Kalantzis (Eds.), Multiliteracies: Literacy learning and the design of
social futures (pp. 9–37). London: Routledge.

Palincsar, A., & Brown, A. (1984). Reciprocal teaching of comprehension-fostering
and comprehension-monitoring activities. Cognition & Instruction, 1(2), 117-175.

Patton, M. (2002). Qualitative research and evaluation methods. Thousand Oaks, CA:
Sage.

Peterson, P. & Walberg, H. (Eds.). (1979). Research on teaching: Concepts, findings, and
implications. Berkeley, CA: McCutchan.

Ranker, J. (2008). Composing across multiple media: A case study of digital video
production in a fifth grade classroom. Written Communication, 25(2), 196-234.

Riley, N., & Ahlberg, M. (2004). Investigating the use of ICT-based concept mapping
techniques on creativity in literacy tasks. Journal of Computer Assisted Learning, 20(4),
244-256.

Rosenshine, B., & Meister, C. (1994). Reciprocal teaching: A review of the research.
Review of Educational Research, 64(4), 479.

Shadish, W., Cook, T. & Campbell, D. (2002). Experimental and quasi-experimental
designs for generalized causal inference. Boston, MA: Houghton Mifflin.

Skinner, E., & Hagood, M. (2008). Developing literate identities with English language
learners through digital storytelling. The Reading Matrix. 8(2). Retrieved June 1, 2009,
from http://www.readingmatrix.com/archives/archives_vol8_no2.html

Strauss, A. & Corbin, J. (1990). Basics of qualitative research: Grounded theory
procedures and techniques. Newbury Park, CA: Sage.

Vincent, J. (2001). The role of visually rich technology in facilitating children’s writing.
Journal of Computer Assisted Learning, 17, 242-250.

Vincent, J. (2006). Children writing: Multimodality and assessment in the writing
classroom. Literacy, 40(1), 51-57.

Watts, M., & Lloyd, C. (2004). The use of innovative ICT in the active pursuit of
literacy. Journal of Computer Assisted Learning, 20(1), 50-58.
