There are a bunch of college ratings out there to help students decide what college to apply to (and give something for alumni to gloat about). The tough part is that there doesn’t seem to be any agreement on what makes a good college. Alex Richards and Ron Coddington describe the discrepancies.
Notice how few measures are shared by two or more raters. That indicates a lack of agreement among them on what defines quality. Much of the emphasis is on “input measures” such as student selectivity, faculty-student ratio, and retention of freshmen. Except for graduation rates, almost no “outcome measures,” such as whether a student comes out prepared to succeed in the work force, are used.
This, on top of spotty data across universities, makes it hard to know which ranking to follow, especially for schools rated close to each other. The same goes for other types of ratings. Any headline that starts with “Best states/countries/schools/programs/etc. to…” requires a grain of salt, since rankings can change dramatically depending on which measures are used.
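To see how sensitive rankings are to the choice of measures, here’s a quick sketch with two hypothetical schools and made-up numbers (the school names, measures, and weights are all invented for illustration). Shift the weight from an input measure to an outcome measure and the ranking flips.

```python
# Two hypothetical schools scored on two measures (numbers made up).
schools = {
    "School A": {"selectivity": 0.9, "graduation_rate": 0.7},
    "School B": {"selectivity": 0.6, "graduation_rate": 0.95},
}

def rank(weights):
    # Score each school as a weighted sum of its measures,
    # then sort from highest score to lowest.
    scores = {
        name: sum(weights[m] * v for m, v in measures.items())
        for name, measures in schools.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# Emphasize selectivity (an input measure): School A ranks first.
print(rank({"selectivity": 0.8, "graduation_rate": 0.2}))
# Emphasize graduation rate (an outcome measure): School B ranks first.
print(rank({"selectivity": 0.2, "graduation_rate": 0.8}))
```

Same data, different weights, opposite conclusions. Now imagine dozens of schools and a dozen raters who can’t agree on the measures, let alone the weights.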
But you already knew that, right?
One thing is for sure, though. The UCLA and Cal stat departments are the best programs to be in. That’s fact.