
ANALYSIS OF UNIVERSAL ACADEMIC DATA

Course Instructor: Rameesha Mahtab


Universal Screening
“Universal screening” in education refers to a systematic assessment process designed to evaluate all students within a particular educational setting, typically a school or grade level.
 Comprehensive Coverage
 Early Identification
 Data-Driven Decision Making
 Multitiered Support System
 Focus of Screening:
Focus on Academic Performance:
 While students develop many important skills during their school years, the primary goal of academic instruction is to ensure students learn and master essential academic skills.
 Instruction should be designed for Meaningfully Accelerating Learning, helping students acquire a useful set of skills and knowledge that they can apply in various contexts.
 Key academic skills have been identified that should develop at specific stages to ensure continued growth toward functional skill competence.
 Evaluating Child Learning Outcomes:
Student learning outcomes can be assessed in two main ways:
 Static Performance relative to Expectations: Assessing if a second grader can read at the level expected for their grade.
 Learning Trajectory over Time: Monitoring a student's progress in math over the school year to ensure they are on track to reach proficiency by the end of the year.
 Critical Skills and Their Generative Nature
Critical skills are generative, meaning mastering them improves a child’s overall functioning in various contexts.
Examples:
 Phonemic awareness is a crucial early literacy skill that lays the foundation for decoding words and reading fluently.
 In mathematics, a sequence of computational skills (addition, subtraction, multiplication) reflects functional and generative learning outcomes.
Obtaining and Examining Universal Screening Data
Screening is conducted to gather information that helps in identifying areas where students need instruction. This data is used to guide teaching efforts within a multitiered intervention model to ensure that all students' educational needs are met.
 Multitiered Intervention Models:
These models categorize the levels of instructional support into three tiers based on students' needs and their response to instruction.
 Tier 1 (Core Instruction): Tier 1 involves assessment and instruction for all students. It is often called core instruction because it includes the general education curriculum that everyone experiences.
 Tier 2 (Supplemental Instruction): Tier 2 targets students who are not successfully responding to Tier 1 instruction. Instruction here is more intense, assessment is more frequent, and it usually occurs in small groups.
 Tier 3 (Intensive Intervention): Tier 3 is for students who do not respond to Tier 1 and Tier 2 interventions. These students receive individualized and intensive instruction.
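The tiered decision logic above can be sketched as a small function. This is a minimal illustration, not a procedure from the handbook: the response flags are hypothetical inputs a school team might track.

```python
# Illustrative sketch of the three-tier support logic described above.
# The "responds_to" flags are hypothetical team judgments, not a
# prescribed measurement from the source text.

def assign_tier(responds_to_tier1, responds_to_tier2=None):
    """Return the tier of instructional support (1, 2, or 3)."""
    if responds_to_tier1:
        return 1  # Tier 1: core instruction for all students is sufficient
    if responds_to_tier2 is None or responds_to_tier2:
        return 2  # Tier 2: supplemental small-group instruction
    return 3      # Tier 3: intensive, individualized intervention

print(assign_tier(True))                            # 1
print(assign_tier(False))                           # 2
print(assign_tier(False, responds_to_tier2=False))  # 3
```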
 Screening helps determine how students are learning and what instructional adjustments are necessary. It engages educators in understanding how to best support student learning.
 Screening and subsequent interventions focus on altering instructional variables that can improve student learning outcomes.
Steps of Screening
 Step 1: Select a Valid Screening Measure
 Step 2: Specify Comparison Criteria
 Step 3: Interpret Screening Data
 Step 4: Organize and Present Screening Data
 Step 5: Plan for Implementation
Step 1: Select a Valid Screening Measure
1. Alignment with Performance Expectations: The screening measure must be aligned with what students are expected to achieve in the classroom at a specific point in their instruction. This means the skills and knowledge assessed by the screening tool should match what is being taught.
2. Appropriate Difficulty Level: The screening tool must be appropriately challenging to accurately identify students who are at risk for learning difficulties compared to their peers.
3. Multiple Measures in Cases of Systemic Problems: When there are widespread learning issues within a system, a single screening measure may not be sufficient. Multiple measures may be necessary to get an accurate picture of student performance.
4. Avoiding the Temptation to Select Easier Tasks: While easier tasks may seem to better identify students at risk, they can mislead the team into thinking most students are on track when they are not. Leaders should ensure the team selects measures that reflect true performance expectations.
5. Critical Questions for Screening: Leaders should guide the team to focus on two key questions:
 "What is expected of students currently?" and
 "Are most students able to perform the skill that is currently expected?"
These questions help determine the appropriate level of intervention.
6. Key Estimates: The following estimates help evaluate the validity of screening tasks for making RTI decisions.
 Sensitivity: The ability of the test to correctly identify students who are at risk (true positives).
 Specificity: The ability of the test to correctly identify students who are not at risk (true negatives).
 Positive Predictive Power: The likelihood that students who fail the screening will actually fail the year-end accountability measure.
 Negative Predictive Power: The likelihood that students who pass the screening will actually pass the year-end accountability measure.
 Sensitivity and Specificity:
Sensitivity and specificity measure the test’s accuracy in identifying true positives and true negatives.
 True Positives: Students predicted to fail without intervention who actually fail.
 True Negatives: Students predicted to pass without intervention who actually pass.
Example: A math screening test might predict that certain students will fail without additional help. Sensitivity measures how many of these students actually fail, while specificity measures how many predicted to pass actually pass.
 Predictive Power Estimates:
Positive and negative predictive powers measure the accuracy of the screening test’s predictions about student outcomes.
 Positive Predictive Power: The probability that students who fail the screening will fail the year-end test.
 Negative Predictive Power: The probability that students who pass the screening will pass the year-end test.
Example: If a reading test has high positive predictive power, most students who fail the screening will likely struggle with reading at the end of the year. High negative predictive power means most students who pass the screening will do well in reading at the end of the year.
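All four estimates can be computed from a 2x2 table that crosses screening decisions with year-end outcomes. The sketch below uses invented counts purely for illustration; a real evaluation would use a school's own screening and accountability data.

```python
# Sketch: the four screening-accuracy estimates from a 2x2 table.
# The tp/fp/tn/fn counts below are made-up illustrative numbers.

def screening_accuracy(tp, fp, tn, fn):
    """tp: flagged and failed; fp: flagged but passed;
    tn: cleared and passed; fn: cleared but failed."""
    return {
        "sensitivity": tp / (tp + fn),                # at-risk students correctly flagged
        "specificity": tn / (tn + fp),                # not-at-risk students correctly cleared
        "positive_predictive_power": tp / (tp + fp),  # flagged students who truly fail
        "negative_predictive_power": tn / (tn + fn),  # cleared students who truly pass
    }

# 40 flagged students failed, 10 flagged students passed,
# 140 cleared students passed, 10 cleared students failed.
stats = screening_accuracy(tp=40, fp=10, tn=140, fn=10)
print(stats["sensitivity"])  # 0.8
```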
7. Cut Scores: A cut score is a threshold used to make decisions about who receives intervention. Scores above the cut score indicate no need for intervention, while scores below indicate the need for intervention.
8. Dichotomous Judgments: Screening tools result in a binary decision about whether a student is at risk or not.
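The cut-score rule and the resulting dichotomous judgment amount to a single comparison. In this sketch, the cut score, the tie-handling at exactly the cut score, and the student names are illustrative assumptions.

```python
# Minimal sketch of a cut-score decision. The threshold is an invented
# example value; scores at or above it are treated as "no intervention"
# (the source does not specify how ties are handled).

CUT_SCORE = 40  # e.g. words read correctly per minute on a screening probe

def needs_intervention(score, cut_score=CUT_SCORE):
    """Dichotomous judgment: below the cut score -> flagged for intervention."""
    return score < cut_score

scores = {"Ana": 55, "Ben": 32, "Cara": 40}
flagged = [name for name, s in scores.items() if needs_intervention(s)]
print(flagged)  # ['Ben']
```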
Step 2: Specify Comparison Criteria
Comparison of Class or Grade Performance to Desired Outcome:
 The team should compare the overall performance of the class or grade to a specific performance level that indicates the desired learning outcome.
 Example: If the desired outcome for fourth-grade reading is to pass a state proficiency test, the team compares students' reading fluency scores to a benchmark associated with passing that test.
 There are two main approaches to establishing these criteria: using local data or adopting established benchmarks from the literature.
Approach 1: Using Local Data to Establish Criteria:
 Schools or districts can use their own historical data to identify a performance criterion related to successful outcomes. This involves statistical analysis of local curriculum-based measurements (CBM) and year-end accountability scores.
Approach 2: Adopting Established Benchmarks from the Literature:
 Instead of using local data, schools can adopt performance criteria that have been established and validated in educational research.
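Approach 1 can be sketched as a simple regression of year-end scores on CBM scores, inverted to find the CBM criterion associated with a passing year-end score. Every number here (the paired data, the passing score) is invented for illustration; a real analysis would use the school's own historical records and more careful statistics.

```python
# Sketch of Approach 1: deriving a local screening criterion from
# historical (CBM, year-end) score pairs via least-squares regression.
# All data values and the passing score are illustrative assumptions.

cbm =     [60, 75, 90, 105, 120, 135]   # fall CBM words per minute
yearend = [300, 330, 360, 390, 420, 450]  # year-end accountability scores

n = len(cbm)
mx = sum(cbm) / n
my = sum(yearend) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(cbm, yearend))
         / sum((x - mx) ** 2 for x in cbm))
intercept = my - slope * mx

PASSING = 400  # hypothetical passing score on the year-end test
criterion = (PASSING - intercept) / slope  # CBM score that predicts a pass
print(criterion)  # 110.0
```

A student scoring at or above this CBM criterion in the fall would be predicted, on local data, to pass the year-end test.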
 Performance criteria (Deno and Mirkin, 1977):
These criteria categorize student performance levels to guide instructional decisions:
 Frustrational Level: The student struggles significantly with the material, e.g. a fourth grader reading below 80 words per minute might be at the frustrational level.
 Instructional Level: The student can learn with appropriate instruction, e.g. a fourth grader reading 80-100 words per minute is at the instructional level.
 Mastery Level: The student has fully mastered the material, e.g. a fourth grader reading above 100 words per minute is at the mastery level.
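The three levels reduce to a pair of boundary comparisons, using the 80 and 100 words-per-minute values given above for fourth-grade reading. Treating exactly 80 as instructional and exactly 100 as still instructional is an assumption; the source does not specify the boundary cases.

```python
# Sketch of the Deno and Mirkin (1977) performance levels for fourth-grade
# oral reading. Boundary handling (80 and 100 exactly) is an assumption.

def performance_level(wpm):
    """Classify a words-per-minute score into a performance level."""
    if wpm < 80:
        return "frustrational"   # student struggles significantly
    if wpm <= 100:
        return "instructional"   # student can learn with appropriate instruction
    return "mastery"             # student has fully mastered the material

print(performance_level(72))   # frustrational
print(performance_level(95))   # instructional
print(performance_level(120))  # mastery
```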
Step 3: Interpret Screening Data
 A normal distribution of scores helps educators identify the overall performance trends of a class or grade level, allowing them to make informed decisions about the need for interventions or instructional adjustments.
 Graphical Representation: Visualizing data through graphs makes it easier to interpret and understand performance trends. Common graphical tools include bar graphs, histograms, and line graphs.
 Graphs are analyzed to identify key insights:
 Below Threshold: Students scoring below a critical threshold are identified as needing immediate intervention.
 Class Performance: Assess whether the median score indicates a need for class-wide instructional adjustments.
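Both checks can be sketched in a few lines: flag individual students below the critical threshold, then compare the class median to a benchmark. The scores, threshold, and benchmark values are all illustrative assumptions.

```python
# Sketch of interpreting class-wide screening data: individual flags below
# a critical threshold, plus a class-median check against a benchmark.
# Scores and cutoff values are invented for illustration.
from statistics import median

scores = {"Ava": 88, "Ravi": 41, "Mia": 76, "Leo": 59, "Zoe": 93}
CRITICAL_THRESHOLD = 50   # below this: immediate individual intervention
BENCHMARK = 70            # class median below this: adjust core instruction

below_threshold = sorted(n for n, s in scores.items() if s < CRITICAL_THRESHOLD)
class_median = median(scores.values())
classwide_problem = class_median < BENCHMARK

print(below_threshold)     # ['Ravi']
print(class_median)        # 76
print(classwide_problem)   # False
```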
Step 4: Organize and Present Screening Data
1. Present data on typical growth rates in a given skill and compare this to the growth needed for students at the school to meet expected performance outcomes.
2. The consultant should be prepared to present and analyze student performance data broken down by demographic factors (e.g., gender, poverty status).
3. Data should be presented starting with grade-level performance, then class median scores, and finally individual student performance.
4. As data is presented, the consultant should emphasize areas where students are underperforming and lead discussions on how to address these issues efficiently.
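The growth-rate comparison in item 1 can be sketched as follows: compute the weekly growth a student needs to reach the target by year end and compare it to a typical rate. The current score, target, weeks remaining, and typical rate are all illustrative assumptions.

```python
# Sketch of the growth-rate comparison in item 1 above. All numbers
# (current score, target, weeks, typical rate) are illustrative.

def weekly_growth_needed(current, target, weeks_remaining):
    """Growth per week required to reach the target on time."""
    return (target - current) / weeks_remaining

TYPICAL_GROWTH = 1.0  # e.g. words per minute gained per week at this grade

needed = weekly_growth_needed(current=42, target=90, weeks_remaining=24)
print(needed)                    # 2.0
print(needed > TYPICAL_GROWTH)   # True -> student needs accelerated growth
```

A needed rate well above the typical rate signals that core instruction alone is unlikely to close the gap, which feeds directly into the intervention planning in Step 5.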
Step 5: Plan for Implementation
1. The principal should be the instructional leader and actively support the intervention.
2. The intervention plan should directly address the specific problem identified and align with the school’s priorities.
3. The plan should be designed to be effective if all components are implemented correctly.
4. A system for monitoring progress should be established to evaluate the effects of the intervention.
Source
Practical Handbook of School Psychology, Gretchen Gimpel Peacock, Ruth A. Ervin, Edward J. Daly III, Kenneth W. Merrell
