
By Adam Tyner

May 1 is National College Decision Day, the deadline for many high school seniors to decide which college or university to attend. College ratings matter to students making their final decisions, and they matter to the institutions those students will attend as well. Here at Hanover Research, we have seen this firsthand, executing a range of custom research projects to help our higher education clients better understand their rankings and develop strategies to improve them. But what are the broader implications of higher education institutions responding to these ranking systems?

Today, the college ranking systems that most heavily influence stakeholder decision-making explicitly reward increased institutional spending and the exclusion of students based on their pre-enrollment characteristics (i.e., “input-based” measures). For example, 25 percent of the U.S. News & World Report Best Colleges ranking formula consists of factors such as spending per student (10 percent), faculty salary (7 percent), and class size reduction (8 percent). Much of the rest of the formula measures student characteristics before enrollment, ultimately appraising a school’s ability to attract already high-achieving students and exclude others. Rather than incentivizing improved student outcomes for all types of students, focusing on the factors emphasized by traditional ranking systems may lead institutions to increase costs and reduce access for underprivileged students.
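To make the weighting concrete, here is a minimal sketch of how an input-based composite score rewards spending. The 10/7/8 percent weights mirror the factors cited above; the 0–100 factor scores, the normalization, and the single “other inputs” bucket are hypothetical simplifications, not the actual U.S. News methodology.

```python
# Illustrative sketch of an input-based composite score.
# The three spending-related weights come from the article; the
# normalized 0-100 factor scores and the "other inputs" bucket are
# simplified placeholders, not U.S. News's actual methodology.

def composite_score(spending, salary, class_size, other):
    """Combine normalized factor scores (each 0-100) into one score."""
    return (
        0.10 * spending      # spending per student (10%)
        + 0.07 * salary      # faculty salary (7%)
        + 0.08 * class_size  # class size reduction (8%)
        + 0.75 * other       # remaining input-based measures (75%)
    )

# A school can raise its score simply by spending more per student.
print(composite_score(60, 70, 65, 80))  # before a spending increase
print(composite_score(90, 70, 65, 80))  # after: the score rises
```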

More recently, “value-added” measures have become increasingly important in some college ratings systems. Value-added measures assess the portion of student outcomes attributable to the institution itself (i.e., the value the institution adds to a student’s life), such as job placement and salary upon graduation. Of course, student populations vary by institution. Comparing unadjusted outcomes can therefore incentivize colleges and universities to exclude students whom they deem statistically less likely to obtain high-paying jobs upon graduation. In response, value-added rankings use data on students’ pre-enrollment characteristics to control for pre-existing advantages and disadvantages and isolate the institution’s actual impact.

The differences between value-added rankings and other rankings are best illustrated through a simple example. Under an input-based formula, an institution’s ranking can rise simply by enrolling more students with high SAT scores. By contrast, value-added rankings control for incoming SAT scores, so institutions are evaluated on their impact on students’ outcomes regardless of where those students start. This allows the impact of an Ivy League institution’s education to be evaluated objectively against that of a large public university, despite disparities in incoming student characteristics between the two.
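The statistical idea behind controlling for inputs can be sketched briefly: regress an outcome on a pre-enrollment characteristic, predict each school’s expected outcome, and treat the residual as its value added. The five fictional schools and the single-predictor linear model below are hypothetical simplifications; published value-added rankings control for many more characteristics.

```python
import numpy as np

# Hypothetical data: average incoming SAT score and average graduate
# salary (in $1,000s) for five fictional institutions.
schools = ["A", "B", "C", "D", "E"]
sat     = np.array([1050, 1150, 1250, 1400, 1500])
salary  = np.array([48, 55, 57, 68, 70])

# Fit a simple linear model: expected salary given incoming SAT.
slope, intercept = np.polyfit(sat, salary, deg=1)
expected = slope * sat + intercept

# Value added = actual outcome minus the outcome predicted from
# pre-enrollment characteristics alone.
value_added = salary - expected

# Rank by value added, not raw salary: a school whose graduates
# out-earn their SAT-based prediction ranks highest, regardless of
# how selective it is.
for name, va in sorted(zip(schools, value_added), key=lambda p: -p[1]):
    print(f"School {name}: value added = {va:+.1f}k")
```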

Ratings systems such as U.S. News and the Forbes rankings devote a very limited proportion of their formulas to value-added measures, but other ranking systems are adopting them more aggressively. For example, the 2015 Money college ranking devotes a full third of its formula to value-added measures, including graduation rate, graduate income, and loan default risk. More radically, this year the Brookings Institution and The Economist have released new ratings systems that rely exclusively on value-added measures of graduate income, mid-career income, and other outcomes. These ratings shift focus away from student selection and spending and toward how institutions are affecting their graduates’ skills, if at all.

Value-added college ratings systems do not solve the problems inherent in judging and comparing institutions of higher learning. The question of what makes for a quality education will never be answered by any single statistic, ranking, or rating. Still, value-added rankings can encourage better policies in higher education by turning attention toward institutional performance and away from metrics that tend to incentivize frivolous spending and the exclusion of “risky” or disadvantaged students.


Adam Tyner (@redandexpert) is an analyst with Hanover’s Higher Education practice. Prior to joining Hanover, Adam taught English as a second language at schools in China and California and political theory at the University of California, San Diego, where he earned a Ph.D. in political science.
