
Answer and do the following.


1.    Take the Digital Literacy Skills Self-Inventory on page 88. Then answer the question on page 89. Put your
answer here.
2.    Apply your digital literacy skills to information literacy by searching for the identified sample sites below.
Navigate around each site. In three to five sentences, share what information can be found in each.
a.   Professor Garfield with URL http://www.professorgarfield.org/pgf_home.html

b.   Common Sense Media with URL https://www.commonsense.org/education/digital-citizenship#digcit-program

c.    Lesson Plan Booster: Digital Literacy and Online Ethics with URL
https://www.educationworld.com/a_lesson/lesson-plan-booster/cyber-ethics.shtml

The site is legitimate: the author, Jason Tomaszewski, is an EducationWorld Associate Editor who studied at Robert Morris University, earning a BA in Communication. The article was published in 2011. The information draws on legitimate sources, with both the authors' names and the titles of the articles indicated.
Research Objective(s):
To examine the nature of technological pedagogical content knowledge (TPACK) through the use of a factor analysis.

Methodology:
A Web-based survey instrument was developed encompassing questions regarding 24 items concerning online teachers' technological pedagogical content knowledge (Archambault & Crippen, 2009). Responses were on a Likert-type scale, ranging from 1 = Poor, 2 = Fair, 3 = Good, 4 = Very Good, and 5 = Excellent. To establish construct validity, the instrument underwent expert review and two rounds of think-aloud piloting (Archambault & Crippen, 2009).

3.1. Procedure. The survey was deployed to 1795 online teachers employed at virtual schools from across the nation using Dillman's (2007) Tailored Design survey methodology. Email addresses for these teachers were gathered via virtual school websites. A total of 596 responses from 25 different states were gathered, which represented an overall response rate of 33%. While the response rate is modest, it is recognized as acceptable for a web-based survey (Manfreda, Bosnjak, Berzelak, Hass, & Vehovar, 2008; Shih & Fan, 2008). In addition, this survey represents a unique examination of practitioners' responses to the TPACK theoretical framework and should be regarded as an initial yet integral step in understanding if this theory is tenable.

3.2. Respondents. Participants were predominantly female, with 456 responses (77%) versus 139 (23%) male (consistent with the averages among educators), and were between the ages of 26 and 45 (63%). The majority of respondents (559, 92%) reported having a bachelor's degree, and 380 (62%) indicated that they had earned a master's degree, while 7 (2%) reported they were currently working toward their master's degrees. Of the 62% with master's degrees, 148 (48%) were education (M.Ed.) degrees, including those in curriculum and instruction, while 73 (19%) reported having a degree in a particular content area, such as mathematics, science, social studies, or English.

3.3. Analytic strategy. The responses to the survey were analyzed using the Statistical Program for Social Sciences version 16. A factor analysis using varimax rotation was performed on the total survey. The purpose of a factor analysis, according to Gorsuch (1983), is to "summarize the interrelationships among the variables in a concise but accurate manner as an aid in conceptualization" (p. 2). This method assists the researchers in establishing a level of construct validity (Bryman & Cramer, 1990). Coefficients of internal consistency were obtained for the total survey and by the seven expected constructs. Additionally, the relationship between each of the 24 items in the survey with the overall instrument was examined through a Corrected Item–Total Correlation analysis.

Result & Discussion:
To determine if the survey items were reliable, internal consistency (Cronbach's alpha) values were computed. The values are presented alongside the descriptive statistics in Table 1 and indicate that the subscales, which have alpha values from 0.89 to 0.70, are reasonable for internal reliability (Morgan, Leech, Gloeckner, & Barrett, 2004). Additionally, a factor analysis was performed on the 24-item survey. This analysis confirmed the existence of three separate factors within the survey, using the Kaiser Normalization as indicated by the components with eigenvalues greater than one. The amount of variance explained by the three factors was 58.21%. Tables 2–4 illustrate how the survey items loaded by factor, as indicated by the rotated component matrix, which converged in five iterations. The communalities for each item are also presented. These results indicate that the highly accepted seven mutually exclusive domains of the TPACK theory may not exist in practice. Specifically, the respondents reported the existence of three factors: pedagogical content knowledge, technological–curricular content knowledge, and technological knowledge. From the responses provided, practitioners indicated strong connections between content knowledge and pedagogical knowledge, noted by the interconnection of responses to the content, pedagogy, and pedagogical content questions. Respondents also reported a connection between technological content, technological pedagogy, and technological pedagogical content questions. However, respondents did not distinguish among these constructs. Instead, responses to these items loaded together with no clear separation. Finally, respondents did separate the technological knowledge items, where no reference to content or pedagogy was used. In an effort to test if these findings were a reflection of the items within the instrument or an accurate reflection of the perception of respondents, each of the 24 items was analyzed using a Pearson r correlation to conduct a Corrected Item–Total Correlation analysis. The overall internal consistency of the instrument was r = 0.94, and as indicated by Table 5, the removal of any one item does not improve the reliability of the survey, which is an indication that the survey is most reliable retaining all of the 24 items. This also gives credibility to the conclusion that the previously described factors accurately reflect the perceptions of the respondents rather than any issues with the instrument. Although the TPACK framework is helpful from
an organizational
standpoint, the data from
this study suggest that it
faces the same problems as
that of pedagogical content
knowledge in that it is
difficult to separate out each
of the domains, calling into
question their existence in
practice. The fact that three
major factors become
evident is noteworthy, but
rather than being comprised
of pedagogy, content, and
technology, the only clear
domain that distinguishes
itself is that of technology.
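To make the factor-retention step described above concrete, here is a minimal sketch in Python (numpy only). The data are simulated Likert-style responses invented for illustration, not the study's actual 596 x 24 dataset, and the varimax rotation the authors ran in SPSS is omitted, since how many components pass the Kaiser criterion and how much variance they explain do not depend on rotation.

```python
import numpy as np

# Hypothetical stand-in for the study's 596 x 24 Likert responses:
# three blocks of eight items, each driven by one latent factor.
rng = np.random.default_rng(0)
factors = rng.normal(size=(596, 3))
loadings = np.zeros((3, 24))
loadings[0, :8] = loadings[1, 8:16] = loadings[2, 16:] = 0.8
responses = np.clip(
    np.rint(3 + factors @ loadings + rng.normal(scale=0.8, size=(596, 24))),
    1, 5,
)

# Kaiser criterion: keep components of the correlation matrix whose
# eigenvalues exceed one, then report the variance they account for
# (the study reports three factors explaining 58.21%).
corr = np.corrcoef(responses, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # sorted, largest first
retained = eigenvalues[eigenvalues > 1.0]
print(f"factors retained: {retained.size}")
print(f"variance explained: {retained.sum() / eigenvalues.sum():.2%}")
```

Because the simulated items are built from three latent factors, the sketch typically retains three components, loosely mirroring the three-factor structure the study reports.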
Based on these results, it
seems that the technological
pedagogical content
knowledge (TPACK)
framework experiences the
same difficulty as Shulman’s
quarter-century-old
conception of pedagogical
content knowledge. It is
possible that when
experienced educators
consider teaching a
particular topic, the
methods of doing so are
considered as part and
parcel of the content, and
when considering an online
context, the domain of
technology is added to the
equation as a natural part of
the medium, making it
difficult to separate aspects
of content, pedagogy, and
technology. This was
illustrated by the second
phase of the think-aloud
pilot, for which the lead
researcher met with three
different teachers who all
taught various classes
online. After being provided
with definitions of each
TPACK construct, the online
teachers were asked to read
each item of the instrument
and decide under which
TPACK domain they thought
the item fits. In doing so,
they encountered difficulty
when trying to decipher
issues of pedagogy and
content. Three online
teachers were challenged
with separating out specific
issues of content and
pedagogy. For example,
Item d – “My ability to
decide on the scope of
concepts taught within my
class” was interpreted by
two teachers as belonging to
the domain of pedagogical
content rather than content
alone. The same
misinterpretation happened
with Item b – “My ability to
create materials that map to
specific district/state
standards.” The participants
saw this item as a part of
pedagogical content
(Archambault & Crippen,
2009). These examples,
coupled with the results
from the factor analysis,
support the notion that
TPACK creates additional
boundaries along
already ambiguous lines
drawn between pedagogy
and content knowledge.
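The reliability evidence cited above, Cronbach's alpha for internal consistency and the Corrected Item–Total Correlation analysis behind Table 5 (overall r = 0.94, with no item whose removal improves reliability), can be sketched in the same spirit. This is an illustrative reconstruction on invented stand-in data, not the authors' SPSS procedure, and the helper names (`cronbach_alpha`, `item_total_report`) are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency for an (n_respondents, n_items) array."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def item_total_report(items: np.ndarray) -> None:
    """Corrected item-total correlation and alpha-if-item-deleted per item."""
    for j in range(items.shape[1]):
        rest = np.delete(items, j, axis=1)
        # "Corrected": the item is correlated with the total of the OTHER items.
        r = np.corrcoef(items[:, j], rest.sum(axis=1))[0, 1]
        print(f"item {j + 1:2d}: r = {r:.2f}, "
              f"alpha if deleted = {cronbach_alpha(rest):.2f}")

# Hypothetical Likert responses standing in for the 596 x 24 survey data.
rng = np.random.default_rng(1)
ability = rng.normal(size=(596, 1))
data = np.clip(
    np.rint(3 + 0.9 * ability + rng.normal(scale=0.7, size=(596, 24))), 1, 5
)
print(f"overall alpha = {cronbach_alpha(data):.2f}")
item_total_report(data)
```

If deleting some item raised "alpha if deleted" above the overall alpha, that item would be a candidate for removal; the study found no such item among its 24, supporting retention of the full instrument.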
Confounding the
measurement of TPACK is
the difficulty in developing
an instrument or
methodology to assess for
each of the domains
described by the framework
that will apply in different
contexts. One of the major
problems surfaces when
attempting to measure
content knowledge, defined
by Mishra and Koehler
(2006) as knowledge of the
subject-matter to be taught
(e.g. earth science,
mathematics, language arts,
etc.). Items that were
developed to measure this
construct within the current
instrument were written
with the intent of being
generalizable so that
teachers could apply them
to their own subject-matter.
These included questions
regarding the ability to
create materials that map to
specific district/state
standards and the ability to
decide on the scope and
sequence of concepts taught
within a class. The
challenge becomes creating
and validating an instrument
that is applicable in a
multitude of contexts,
including different content
areas. If this is not possible,
then the conceptualization
of TPACK may need to be
different for every
imaginable content area,
including subject domains
within each of these areas.
This questions the value of
the framework itself as a
cohesive, overarching
model. Despite the issue
with content-related items,
the inability to differentiate
between and among the
constructs of the TPACK
framework is significant, and
it calls into question its
precision, namely whether
or not the domains
described by the model exist
independently (Gess-
Newsome & Lederman,
1999). Without the ability to
separate the components of
the framework, it suggests
that further refinement to
the framework may be
necessary. This is echoed by
Angeli and Valanides
(2009): “Koehler et al.’s
(2007) conceptualization of
TPACK needs further
theoretical clarity. It is
argued that if TPACK is to be
considered as an analytical
theoretical framework for
guiding and explaining
teachers’ thinking about
technology integration in
teaching and learning, then
TPACK’s degree of precision
needs to be put under
scrutiny. Furthermore, the
boundaries between some
components of TPACK, such
as, for example, what they
define as Technological
content knowledge and
Technological pedagogical
knowledge, are fuzzy,
indicating a weakness in
accurate knowledge
categorization or
discrimination, and,
consequently, a lack of
precision in the framework” (p. 157). Because the TPACK
domains do not statistically
distinguish themselves, this
also leads to the heuristic
value of the model being
diminished. Specifically, the
heuristic value describes the
extent to which the
framework helps
researchers predict
outcomes or reveal new
knowledge. This is a
weakness in the current
model, as effective models
can be judged on their
ability to explain and predict
various phenomena (Järvelin
& Wilson, 2003). In addition
to a model’s explanatory
power, Järvelin and Wilson
(2003) lay out the following
criteria for evaluating
effective conceptual models:

- Simplicity: simpler is better, other things being equal.
- Accuracy: accuracy and explicitness in concepts are desirable.
- Scope: a broader scope is better because it subsumes narrower ones.
- Systematic power: the ability to organize concepts, relationships, and data in meaningful, systematic ways is desirable.
- Reliability: the ability, within the range of the model, to provide valid representations across the full range of possible situations.
- Validity: the ability to provide valid representations and findings is desirable.
- Fruitfulness: the ability to suggest problems for solving and hypotheses for testing is desirable.

In
addition to weaknesses in
TPACK’s precision and
heuristic value, the
framework is also limited in
its ability to assist
researchers in predicting
outcomes or revealing new
knowledge. While it focuses
on three major areas of
teaching, namely content,
pedagogy, and technology, it
does not represent the
causative interaction or the
direction of the relationship
between and among these
domains. This makes it
difficult for TPACK to be a
fruitful model, as it does not
suggest problems for solving
or hypotheses for testing
within the field of
educational technology. It
would appear from this
study that there is room to
continue to build on TPACK
or even conceptualize other
models that provide a less
complex, more precise way
of representing the effective
integration of technology to
improve student learning.
Despite its fuzzy boundaries,
the TPACK framework has
theoretical appeal, providing
an analytical structure
highlighting the importance
of content knowledge when
incorporating the use of
technology. As Koehler and
Mishra (2008) recognize,
“Instead of applying
technological tools to every
content area uniformly,
teachers should come to
understand that the various
affordances and constraints
of technology differ by
curricular subject-matter
content or pedagogical
approach” (p. 22). This focus
on subject-matter content is
important when considering
the effective use of
technology. However, this
appeal is tempered by the
difficulty in measuring each
of the constructs described
by the framework. Further,
using this model, what
changes can colleges of
education enact to produce
more skilled teachers? As
Harris et al. (2009) point out:
“TPACK is a framework for
teacher knowledge, and as
such, it may be helpful to
those planning professional
development for teachers by
illuminating what teachers
need to know about
technology, pedagogy, and
content and their
interrelationships. The
TPACK framework does not
specify how this should be
accomplished, recognizing
that there are many possible
approaches to knowledge
development of this type” (p. 403). There is confusion within the field of
educational technology, not
only concerning the
definitions, but also the
specific activities and
methods to develop TPACK.
This makes it difficult to
implement knowledge from
a framework that is yet to be
fully defined, which limits its
practical application. This is
an important area for future
research, including detailed
examples of TPACK as it
pertains to teacher practice.

Insights Gained:
This research examines the validity of the TPACK model, which might be effective in the hallways of academia, but perhaps provides limited benefit to administrators, teachers, and, most importantly, students. From the practitioner data contained within this research, it seems that, from the onset, measuring each of these domains is complicated and convoluted, potentially due to the notion that they are not separate. The data emerging from the current study support such a conclusion. This leads the researchers to consider what type of model might more accurately describe teachers' content, pedagogical, and technological knowledge, and how this model might better inform colleges of education and teacher education programs in preparing future educators for the challenges of teaching in the 21st century.
