by Dr. Tim Grivois, Executive Director
I often work with schools to better understand and use data. (If the muscles in your neck just got tense, it's okay. It happens, and this article may help prevent future data-related muscle tension.)
One common problem I notice is that schools and districts often use different types of data for purposes they aren’t built to serve. The best way to prevent this is to understand the difference between screening, diagnostic, fidelity, and outcome data.
Screening data is used to identify someone for something. Some examples are:
- Reading fluency scores
- All students above a certain GPA
- Non-white students who qualify for but haven’t enrolled in honors or AP courses
- All students who enrolled after the start of the year
- Above or below a particular score on the Student Risk Screening Scale
- Above or below a certain number of disciplinary referrals
Schools typically use screening data to create entrance criteria for interventions and exit criteria to know when to discontinue an intervention. Also, when space in special offerings is scarce, screening data helps allocate resources equitably.
Screening data is essential but, on its own, incomplete. For example, a student may have low reading fluency scores for several reasons, each of which requires a unique type of support. In addition, students can be over- or under-identified for social, emotional, or behavioral support due to systemic bias or inequitable discipline systems.
The most common way schools misuse screening data is to 'fix' low screening scores by teaching to the test or manipulating the cut score without addressing the root cause. For example, we can 'fix' the number of students on an F-list by telling teachers not to assign Fs. That's probably not a bad idea, but on its own it is insufficient to address chronic underachievement meaningfully. Screening data can help us identify a student's needs, but it won't tell us what to do about them.
Diagnostic data is used to identify why something is happening. Some examples are:
- Functional Behavioral Assessments
- Task/Item Analysis
- Math / Reading Running Records
- Oral Assessments (when used to uncover how/why a student is thinking about a task)
Diagnostic assessments are either highly individualized or highly uniform. Teachers individualize diagnostic assessments when they need detailed information about a single student. Schools make diagnostic assessments uniform when they want to know how each student in a class or grade level is thinking about a specific task.
What makes diagnostic data exceptionally useful is that we can use what we discover to reteach concepts effectively and to find gaps in background knowledge efficiently.
Fidelity data is used to make sure we are implementing practices effectively and appropriately. Where other data types describe students' performance, fidelity data describes our own.
Because fidelity data describes us, we have to learn not to take the data too personally. Often, fidelity data lets us know that we’re working too hard, and that we need to implement a more realistic strategy. Making things easier is often the best way to increase fidelity.
That said, fidelity data also helps us decide if the issue is the intervention or if the problem is that we never really did the intervention in the first place. Some simple fidelity tools include:
- “Were we able to [insert strategy]?” Likert scale
- “Fist to five” at a staff meeting
- Reflective CICO
- Fidelity inventories
Outcome data describes what happened as a result of instruction or intervention. Some examples of outcome data are:
- Standards-based assessments/grading
- Scaled scores
- End-of-year disciplinary referrals
- End-of-year Positive Behavior Interventions and Support reports in SWIS
- Social, emotional, and academic growth measures
What makes outcome data challenging to use on its own is that we need screening, diagnostic, and fidelity data to understand why outcomes happened. For example, did we simply screen successful students into successful programs and achieve above-average success? Or did we meet students' instructional needs with a critical strategy that we implemented well?
Ultimately, we want to see outcome data trending upward. However, if we don't know anything about our students or how well we implemented a strategy, we can't say with confidence why our outcomes look the way they do; we're just guessing. Even worse, our guesswork can create false, unhelpful judgments about the quality of programming because our data lacks context. Outcome data is an important part of a larger story.
Understanding the difference between screening, diagnostic, fidelity, and outcome data helps us make better decisions for our students. By respecting the limits of each kind of data, we can build a more accurate picture of our students, schools, and districts, and ultimately, serve our students better.