
Multiple Choice Testing

By Sarah Van Duyn

Today, No Child Left Behind and increased accountability demands have spurred the widespread use of high-stakes, multiple choice tests. When Frederick J. Kelly invented an early form of the multiple choice test as an Industrial Revolution-era response to the need to efficiently assess America’s growing body of students, he probably had no idea that almost a century later, his invention would still shape educational testing. While multiple choice tests may be somewhat antiquated, when used in conjunction with other assessment types, they can provide powerful objective benchmarks to inform school planning.

Externally created multiple choice assessments, the type often used by states for Adequate Yearly Progress indicators and high-stakes decisions, are not inherently problematic. These assessments are often tested for reliability and validity, and they can provide useful tools for measuring student achievement over time or for evaluating student performance more broadly (e.g., in relation to overall district, state, or national benchmarks). Problems arise when these assessments are the sole source of information used in decision making – as they often are. Whether the test has high-stakes consequences for

        • the student, by preventing graduation;

        • the teacher, by determining compensation or employment; or

        • the school, by mandating resource allocations or even closure,

the impact of high-stakes accountability tends to disproportionately disenfranchise certain students, teachers, and schools. In particular, disadvantaged communities with little access to affordable early childhood education and crater-sized achievement gaps for minority and disadvantaged students tend to suffer the most negative consequences. Too often, this type of educational testing evaluates the level of resources available to the school and community rather than student achievement, disempowering students, teachers, and the school.

In order to prevent these negative consequences, formative and summative educational assessments should be used in a non-punitive environment, with multiple measures used to benchmark student progress throughout the year and inform decision-making regarding instructional strategies and resource allocation.

Student achievement data can be a wonderful tool to raise questions and inform planning to overcome obstacles to teaching and learning.

      • The Ontario Focused Intervention Partnership, for example, examines student test data to identify low- or stagnant-performance schools, analyzes achievement data in partnership with these schools and districts, and then uses this analysis to target effective instructional strategies and resource allocation. Schools within this partnership have improved at a much faster rate than the average for the province.

If educational testing is used to evaluate schools, it must include multiple measures and involve the whole school in the examination of data and the improvement planning that arises out of this analysis. Often, this type of comprehensive approach is not undertaken, turning educational testing from a tool that builds the collective capacity of a school to a punitive, accusatory tool that isolates and demoralizes teachers. Educational testing, when used as part of a system focused on building schools’ and teachers’ capacities and providing equal access to educational opportunities for all students, offers tremendous value.

For more information on how districts can use testing data to inform decisions regarding teaching and learning, please see Best Practices in Data Collection and Management, Measuring Student Learning Growth and Its Use for Evaluations, Review of Student Performance Assessments, and many other reports in Hanover’s Member Library.