College Rankings: The Good, the Bad and the Ugly Behind Assessing Higher Education

Jan 25, 2010

College ranking systems are both wildly popular and deeply controversial. They have an enormous influence on student and peer assessment of schools, yet their methodology and data tend to be flawed. This article explores the pros and cons of college rankings and what institutions can do to make them better.



Gaming the System

In spite of great resistance from many colleges and universities, ranking systems have become the standard by which our educational institutions are measured. Peers rely on them to form judgments about quality and prestige, and students use them to guide their first and most important decision: where to go to school. The influence that ranking systems have - particularly the dominant U.S. News & World Report - has led many to worry that schools will change policies just to improve their rank, and that prospective students have become too reliant on an oversimplified measure of quality.

Proponents of ranking systems argue that they promote marketplace-style competition between schools. In theory, if two (or two hundred) institutions are competing by improving their offerings, then the consumer/student wins. That's 'good' competition, but many critics argue that rankings actually tend to promote 'bad' competition - schools simply adjusting policies to manipulate their data and increase their rank without necessarily improving the quality of the education they're offering.

Clemson University

The 'bad' competition problem was highlighted at the summer 2009 Association of Institutional Research forum. The focus was on college rankings, and during one session Clemson University laid bare the tactics they had used to climb from 38th to 22nd in the U.S. News & World Report rankings. The simplest change they made was to class size. U.S. News uses the number of classes with fewer than 20 students as a key indicator for the 'student experience' measure. So Clemson reduced classes with 20-25 students down to 18 or 19 and allowed larger classes to swell even further. Although the university hadn't actually added more course sections or improved the student-to-faculty ratio, they were still able to show 'improvement' on class size.
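
To see how this kind of rebalancing works, here is a minimal sketch in Python, using entirely hypothetical enrollment numbers, of how shifting a handful of seats between sections can raise the share of classes under the 20-student cutoff without adding a single section or changing total enrollment:

    # Hypothetical numbers: the same 200 students in the same five course
    # sections, rearranged so that more sections fall under the 20-student cutoff.
    SMALL_CLASS_CUTOFF = 20

    def share_of_small_classes(class_sizes):
        """Fraction of sections with fewer than SMALL_CLASS_CUTOFF students."""
        small = sum(1 for size in class_sizes if size < SMALL_CLASS_CUTOFF)
        return small / len(class_sizes)

    # Before: four sections of 22 students and one lecture of 112.
    before = [22, 22, 22, 22, 112]
    # After: trim the mid-sized sections to 19 and let the lecture absorb the rest.
    after = [19, 19, 19, 19, 124]

    assert sum(before) == sum(after)  # same students, same number of sections
    print(f"before: {share_of_small_classes(before):.0%} of classes under 20")  # 0%
    print(f"after:  {share_of_small_classes(after):.0%} of classes under 20")   # 80%

On paper the 'student experience' indicator improves dramatically, even though no student gained a smaller class overall.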

Clemson also tightened their admissions standards in order to raise the average GPA and standardized test scores across their student body. They stopped admitting full-time, first-time undergraduates who weren't in the top third of their graduating classes and began continuously reassessing their SAT average during the admissions process to determine whether they needed to raise the minimum SAT score in the next round. Although these measures may seem like an innocent way to encourage academic rigor, Clemson is a public university - their duty is to provide access to education for the entire community, not just those students who will improve the school's stats.

The university made other changes that even their representative admitted were 'questionable.' U.S. News rewards schools for high faculty salaries, so Clemson increased their reported salaries by $20,000 through a combination of data manipulation and actual salary bumps (funded primarily by a tuition increase). Before reporting other financial information to U.S. News, Clemson experimented with different definitions of spending categories to figure out how to present the data in the best light. They emphasized academic expenditures, minimized administrative costs and, like many other schools, campaigned for as many $5 alumni donations as possible in order to improve their alumni participation rate.

Finally, the college admitted to a practice that elicited actual gasps from the audience: When their president fills out the reputational survey form, he rates all other programs below average to make Clemson look better by comparison. However, as the Clemson representative pointed out, the only truly shocking thing about their data manipulation is that Clemson is being honest about it. These are all common practices among colleges and universities in an environment where 'good ranking' is conflated with 'good education.'


A Circular Assessment

Even if schools could be prevented from changing policy just to manipulate their rank, the ranking systems still suffer from methodological flaws. The 'objective data' on which most systems base their measurements is self-reported by the institutions they're measuring. As we just saw with Clemson's tactics, there's ample evidence to suggest that self-reported data is often manipulated.

Furthermore, the way different factors are weighted can change, producing a new rank that has nothing to do with any change at the institution. This is especially problematic in tiered systems, where either a change in a factor's weight or a change in the delineation of the tiers can not only shift a school's rank by a few points but move it into an entirely different tier. Should a second tier school become a top tier school - or vice versa - just because the ranking methodology has changed?
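
A small illustration makes the point concrete. The sketch below uses hypothetical schools and hypothetical weights (not U.S. News's actual formula) to compute a weighted composite score for two schools; changing only the weights reverses their order, even though neither school's underlying data has changed:

    # Hypothetical schools and weights, invented purely for illustration.
    def composite(scores, weights):
        """Weighted composite score; each indicator is on a 0-100 scale."""
        return sum(scores[name] * weight for name, weight in weights.items())

    schools = {
        "School A": {"reputation": 90, "selectivity": 70, "resources": 60},
        "School B": {"reputation": 75, "selectivity": 85, "resources": 80},
    }

    old_weights = {"reputation": 0.60, "selectivity": 0.25, "resources": 0.15}
    new_weights = {"reputation": 0.30, "selectivity": 0.35, "resources": 0.35}

    for label, weights in (("old weights", old_weights), ("new weights", new_weights)):
        ranked = sorted(schools, key=lambda s: composite(schools[s], weights), reverse=True)
        print(label, "->", ranked)
    # old weights -> ['School A', 'School B']
    # new weights -> ['School B', 'School A']

Neither institution did anything differently from one year to the next; only the formula moved.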

Perhaps it wouldn't matter if rankings weren't so influential. But recent studies confirm that the U.S. News rankings have become so powerful that they now dictate a school's reputation, even though their scoring system treats reputation as an independent measure. Reputation, as determined by the assessment of peers in higher education, is worth 25% of a school's total score, more than any other factor. The theory is that institutions that have earned good reputations among their peers deserve higher rankings. However, researchers Michael N. Bastedo (University of Michigan) and Nicholas A. Bowman (University of Notre Dame) found that the rankings have become so influential that peer assessments can be traced back to schools' existing rankings.

U.S. News & World Report

Drawing from the print editions of the 'America's Best Colleges' issues of U.S. News & World Report, Bastedo and Bowman broke down the interaction between rank and reputation in terms of both overall rank and tier placement. For the overall rank measurement they were able to get data on the top 25 national universities and top 25 liberal arts colleges from 1989, 1995, 2000 and 2006. However, U.S. News only ranked the top 25 institutions until 1992, when they implemented the tier system in order to rank a broader range of schools. Thus Bastedo and Bowman were only able to measure tier placement from 1992 to 2000, when U.S. News stopped including tier in the print edition.

The researchers sought to uncover whether peer assessments affect rank and tier, as the ranking's methodology indicates they should, or whether a school's early rank and tier placement is what actually influences peer assessments. They found that, for both liberal arts colleges and national universities, a school's overall rank in 1989 was a statistically significant predictor of its peer assessment rating in 2006. They also found that a tier level change between 1992 and 2000 had a sizable effect on 2000 peer assessment ratings for national universities, and a marginal effect on peer assessment in 2000 for liberal arts colleges. All of these effects persisted even after the researchers controlled for measures of instructional quality and school performance.

These results indicate a significant problem with the U.S. News methodology. As noted above, the reputation measure has the strongest influence on a school's rank because it's weighted more heavily than any other factor. Yet this study shows that the reputation score has simply become a reflection of a school's previous overall rank and tier placement. In effect, today's U.S. News rankings are circular - a good early rank means a good reputation assessment, which in turn assures a good rank, which in turn assures another good reputation assessment, and so on.


Student Access, Student Choice

The self-perpetuating nature of the U.S. News & World Report rankings has real-world consequences for both schools and prospective students. More and more, students are coming to rely on rank when making their school choices. Caroline Radaj, a student tour guide at the University of Wisconsin-Madison, chose UW-Madison based almost entirely on its high rank and notes that questions about the school's ranking are among the most common she hears from students and their parents. Linda Carlson, the parent of a freshman at Harvey Mudd, notes that she and her daughter relied on rank to help them choose a school that would offer an appropriately academic setting.

A 2005 study by Kevin Rask (Colgate University) and Amanda Griffith (Cornell University) indicates that countless others rely on rankings in the same way. Griffith and Rask found that high-ability students across all demographics are sensitive to the U.S. News rank when making their school choice. More specifically, they're sensitive to changes in rank, independent of all other measures of quality. Thus, if U.S. News changes their methodology and an institution's rank drops, the school becomes less likely to attract high-ability students regardless of whether it suffered any drop in actual performance. The higher a school's rank, the stronger this effect becomes, and the more schools are motivated to manipulate data in order to preserve their rank.


In 2007, the Institute for Higher Education Policy (IHEP) published a monograph of essays exploring ranking systems and their implications for higher education worldwide. In her essay 'The Impact of Higher Education Rankings on Student Access, Choice and Opportunity,' Marguerite Clark focuses on the effects of U.S. ranking systems on disadvantaged students. She found that schools' tendency to change admissions policies in order to bolster their rank effectively bars access for many low-income and disadvantaged students. Even schools that don't competitively adjust their SAT minimums often use tools like tuition discounting and merit aid to lure high-achieving students who may not actually have financial need. These practices, combined with soaring tuition costs, have made higher education less and less accessible to disadvantaged students.

Another essay in the IHEP monograph serves to remind us of the nature of most ranking systems. In 'The Development of the U.S. News Rankings,' Alvin Sanoff, a longtime managing editor of the U.S. News rankings project, explores the history and development of the rankings. He points out that they were initially intended as a marketing tool for the magazine. Although they've changed dramatically in response to consumer criticism, offering more granular rankings and detailed scores, the U.S. News & World Report rankings are still a private enterprise run by a media outlet. The IHEP editors note that this is true of almost all popular American ranking systems, from The Princeton Review's Annual College Rankings to Forbes Magazine's America's Best Colleges list. The academic community places so much weight on school rankings that we tend to forget their original source.


Evolving to Serve the Community

Keep digging and you will find hundreds of studies on college rankings from the last decade. The studies look at many different aspects of the system, but they all come to similar conclusions: Rankings have a real-world influence on higher education, and it is often negative.

Nevertheless, rankings look like they're here to stay. In a world overflowing with institutions of higher education - the National Center for Education Statistics (NCES) counts more than 4,200 colleges and universities in the U.S. alone - rankings provide one of the few quantitative sources of information. Even after students narrow their options by factors like location, size and private vs. public, they're still confronted with a staggering list of schools. Rankings offer an (ideally) objective guide for students trying to focus on the best schools.

Critics argue that this is a dramatically oversimplified version of 'best.' Historically, most rankings have provided a single score that fails to reflect important differences between types of schools, individual programs and the diverse needs and desires of America's many students. However, venerable institutions like U.S. News & World Report have responded to this criticism by offering more granular assessments. Not only have they added measurements like economic diversity and freshman retention rate, they also now rank specific academic programs so students can find the best school for the subject that interests them. There has also been a rash of new niche rankings that cater to different priorities. For example, Washington Monthly and Mother Jones Magazine rank schools for social justice, and the Sierra Club gives schools a 'green' rank based on environmental responsibility.

Acknowledging that ranking systems are both useful and problematic, an international coalition of higher education groups formed the International Rankings Expert Group (IREG). At a meeting in Berlin in the spring of 2006, IREG participants from the U.S., England, Germany, the Netherlands, China, Japan, Russia and other nations developed a framework for good ranking practices. The list, called the Berlin Principles, is intended to help countries build and refine ranking systems that positively affect higher education. If influential ranking systems like U.S. News & World Report adopted the Berlin Principles, they could become an invaluable tool for promoting quality higher education.

International Rankings Expert Group

The Berlin Principles

Paraphrased from the IHEP monograph, the Berlin Principles are as follows:

Purposes & goals of rankings:

  1. Each ranking system should be one of many diverse approaches to assessing higher education.
  2. Rankings should be clear about purpose and targeted groups.
  3. Rankings should recognize institutions' diversity, and take their different missions and goals into account.
  4. Rankings should be clear about the sources from which they receive their information.
  5. All rankings, but especially ones comparing schools from different countries, should be specific about the cultural, economic, linguistic and historical contexts of the educational systems they are ranking.

Design & weighting of indicators:

  1. Ranking methodology should be transparent.
  2. Indicators should be chosen according to relevance and validity.
  3. Outcomes should be preferred to inputs whenever possible.
  4. The weights assigned to different indicators need to be prominent and vary as little as possible.

Collection and processing of data:

  1. Due attention should be paid to ethical standards as well as to the good practices recommended here.
  2. Audited and verifiable data is to be used whenever possible.
  3. Rankings should adhere to proper scientific standards and procedures for data collection.
  4. Measures of quality assurance should be applied to the ranking processes.
  5. Rankings should apply organizational measures that will enhance their credibility.

Presentation of ranking results:

  1. Rankings should provide consumers with a clear understanding of all of the factors used to develop them, offering consumers a choice in how those rankings are displayed.
  2. Rankings should also be compiled in such a way as to eliminate or reduce data errors and should be organized in a way that will make it easy to correct faults.