ARC-Backgrounder Checklist


Backgrounder role: Research the contextual background references that are mentioned by the author but not fully explained. Consider how this contextual reference information improves text comprehension.

For each task, mark yourself A/B/C (see the key below) and note what belongs on your handout.

Individual work (prep before the ARC)
+ Write the full bibliographic reference for the text by following the NTU Harvard Referencing Guide. (Handout: full Harvard reference)
+ Identify all contextual background references used by the author: highlight all people, places, events, and references to outside sources. [A]
+ Select up to three contextual references that are mentioned by the author but not fully explained. [A]
+ Research these contextual background references for relevant information about them. Wikipedia can be one source, but look for supporting information from other sources, too. Include source information as a full Harvard reference. [B]
+ In bullet points, include key information you found that is relevant to understanding this contextual background reference in the text. [A]
+ Briefly explain the significance of this contextual background reference to the meaning of the text. [B]
(Handout for these tasks: a clear heading for connections + a paragraph connecting to the text + a reference to the outside source using a full Harvard reference)

Group work (during the ARC)
+ As you reach one of your contextual background references in the text during your group work, explain the information you found about it.
+ Explain how that information improves your understanding of the text concept and its significance to the author's points.
+ Work with your group to determine why the author used the contextual background reference to support points in the text.
+ Work with your group to rank each contextual background reference from 1 (irrelevant) to 5 (very useful to help understand the text). (Handout: ranking scale, optional)

Individual reflection (after the ARC)
+ What was the biggest challenge of being the Backgrounder? Why?
+ What advice would you give the next Backgrounder of your group?

Key for reflecting on each element in the checklist above:
A = done without problem
B = done but some issues
C = perhaps not done / very difficult

Adapted from checklists created by Seburn, T., 2016. Academic reading circles. Createspace Independent Publishing Platform.
Rating scales
McNamara (1996): the scale used in assessing performance tasks represents, both implicitly and explicitly, the theoretical foundation upon which the test is built.
Weigle (2002, p. 109) reviewed rating scales used to score writing assessments and found two features that could be used to distinguish between different types. These apply equally well to scales used to assess speaking:
+ task-specific versus generic scales
+ holistic (primary trait) versus analytic scales (multiple traits; e.g. the five criteria Range, Accuracy, Fluency, Interaction and Coherence)
Behaviour or ability?
Bachman (1990) and Council of Europe (2001): scales can also be categorised according to whether they are more 'real world' and 'behavioural' in orientation, or more 'ability/interaction' and 'normative'.
Nested systems
Pollitt (1991): the grade of the examination contributes to the interpretation of each score.
Approaches to designing rating scales
North (2000) and Fulcher (2003) categorised recent methods of scale development into two general approaches: intuitive and empirical.
+ Intuitive: experts prepare rating scales according to their intuitions, established practice, a teaching syllabus, or a needs analysis; the scales are refined over time as new developments arise.
+ Empirical: methods more directly based on evidence. Upshur and Turner (1995), for example, described an alternative empirical approach: empirically derived, binary-choice, boundary-definition (EBB) scales.
Criticism of quantitative rating scales
Fulcher et al. (2011, p. 9) objected to such quantitative 'scaling descriptors' approaches, arguing that 'Measurement-driven scales suffer from descriptional inadequacy. They are not sensitive to the communicative context or the interactional complexities of language use.'
North (2000): quantitative rating scales are concerned with broad characterisations of functional language ability, while second language acquisition research tends to focus on discrete elements of language form.
Reporting to different audiences
Alderson (1991) raised the objection that the kinds of scales employed by trained assessors may be too technical or detailed to communicate effectively to non-specialists. He recommended the use of adapted, user-oriented scales for this purpose.
Feedback
Stobart (2008): the retrospective standpoint, delayed feedback, reductive scoring and limited sampling of external tests limit their diagnostic and formative potential.
Sinclair and Coulthard (1975) observed that much classroom interaction follows what they call an I-R-F (Initiation-Response-Feedback) pattern.
Long and Robinson (1998): error correction and a 'focus on form' can be beneficial.

Ways to encourage reflective thinking
Walsh (2011): suggested ways to encourage reflective thinking include allowing time for learners to self-correct and increasing wait time for learners (the time between putting a question and expecting a response).
Meddings and Thornbury (2009) recommend a five-point strategy for the treatment of the successful or unsuccessful use of linguistic forms in the classroom, reflecting current thinking: retrieval, repetition, recasting, reporting and recycling.

???
1. Fulcher (2003): assessment designers need to decide how best to take account of such choices when scoring performance (p. 132).
2. Weigle (2002, p. 109) reviewed rating scales used to score writing assessments and found two features that could be used to distinguish between different types; these apply equally well to scales used to assess speaking.
3. Stobart (2008): the retrospective standpoint, delayed feedback, reductive scoring and limited sampling of external tests limit their diagnostic and formative potential.
