March 12, 2012
By Sarah Van Duyn
Today, No Child Left Behind and increased accountability demands have spurred the widespread use of high-stakes, multiple-choice tests. When Frederick J. Kelly invented an early form of the multiple-choice test as an Industrial Revolution-era response to the need to efficiently assess America’s growing body of students, he likely had no idea that almost a century later his invention would still shape educational testing. While multiple-choice tests may be somewhat antiquated, when used in conjunction with other assessment types they can provide powerful, objective benchmarks to inform school planning.
Externally created multiple-choice assessments, the type often used by states for Adequate Yearly Progress indicators and high-stakes decisions, are not inherently problematic. These assessments are typically tested for reliability and validity, and they can be useful tools for measuring student achievement over time or for evaluating student performance more broadly (e.g., against district, state, or national benchmarks). Problems arise when these assessments are the sole source of information used in decision making – as they often are. Whether the test has high-stakes consequences for
- the student, by preventing graduation;
- the teacher, by determining compensation or employment; or
- the school, by mandating resource allocations or even closure,
the impact of high-stakes accountability tends to disproportionately disenfranchise certain students, teachers, and schools. In particular, disadvantaged communities with little access to affordable early childhood education, and with wide achievement gaps for minority and disadvantaged students, tend to suffer the most negative consequences. Too often, this type of educational testing measures the level of resources available to the school and community rather than student achievement, disempowering students, teachers, and the school itself.
To prevent these negative consequences, formative and summative educational assessments should be used in a non-punitive environment, with multiple measures employed to benchmark student progress throughout the year and to inform decisions about instructional strategies and resource allocation.
Student achievement data can be a wonderful tool to raise questions and inform planning to overcome obstacles to teaching and learning.
The Ontario Focused Intervention Partnership, for example, examines student test data to identify low-performing or stagnant schools, analyzes achievement data in partnership with these schools and their districts, and then uses this analysis to target effective instructional strategies and resource allocation. Schools within this partnership have improved at a much faster rate than the provincial average.
If educational testing is used to evaluate schools, it must include multiple measures and involve the whole school in the examination of data and the improvement planning that arises out of this analysis. Often, this type of comprehensive approach is not undertaken, turning educational testing from a tool that builds the collective capacity of a school to a punitive, accusatory tool that isolates and demoralizes teachers. Educational testing, when used as part of a system focused on building schools’ and teachers’ capacities and providing equal access to educational opportunities for all students, offers tremendous value.
For more information on how districts can use testing data to inform decisions regarding teaching and learning, please see Best Practices in Data Collection and Management, Measuring Student Learning Growth and Its Use for Evaluations, Review of Student Performance Assessments, and many other reports in Hanover’s Member Library.