President Obama’s college-ratings plan has evoked strong reactions—mostly criticisms and reservations from college leaders and postsecondary associations about performance-based measurements. Few seem to support the notion that student outcomes can and should be assessed as quantifiable performance metrics.

In tension with these concerns are the results of a 2014 survey of college and university presidents by Gallup and Inside Higher Ed. The data reveals:

“[T]hree-quarters or more of presidents said their colleges should publish information on institution-level loan debt and job placement rates, program-level job placement rates for graduates, graduate school placement rates, and ways that students are living meaningful lives.”

This support for transparency, of course, raises the question that the Lumina Foundation’s Strategy Director Zakiya Smith asked: “If you think you should be sharing information about job placement, say, then why aren’t you?”

Let’s assume that college presidents mean what they say and are amenable to providing more information about their graduates. What information should institutions be responsible for publishing? What kinds of data points would be most meaningful and integral to Obama’s ratings plan?

My colleague Michael Horn has proposed a Quality-Value Index (written about here and here), where QV* = 90-Day Hire Rate + Change in Salary/Revenue per Conferral + Retrospective Student Satisfaction + Cohort Repayment Rate (*each factor is normalized and measured relative to the average).
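To make the arithmetic concrete, here is a minimal sketch of how such an index might be computed. The factor names, the sample values, the equal weighting, and the simple ratio-to-the-mean normalization are all illustrative assumptions on my part, not details of Horn’s proposal:

```python
from statistics import mean

# Illustrative sketch of the Quality-Value Index. Each factor is divided by
# its average across all institutions (a simple ratio-to-the-mean
# normalization, assumed here since the proposal does not specify one),
# and the normalized factors are summed with equal weight.

FACTORS = [
    "hire_rate_90_day",             # share of graduates hired within 90 days
    "salary_change_per_conferral",  # change in salary/revenue per conferral
    "retro_satisfaction",           # retrospective student satisfaction score
    "cohort_repayment_rate",        # share of the cohort repaying its loans
]

def quality_value_index(institutions):
    """Map each institution's name to its QV score, normalized to the cohort."""
    averages = {f: mean(inst[f] for inst in institutions) for f in FACTORS}
    return {
        inst["name"]: sum(inst[f] / averages[f] for f in FACTORS)
        for inst in institutions
    }

# Hypothetical sample data for two institutions.
sample = [
    {"name": "College A", "hire_rate_90_day": 0.82,
     "salary_change_per_conferral": 12_000,
     "retro_satisfaction": 4.1, "cohort_repayment_rate": 0.91},
    {"name": "College B", "hire_rate_90_day": 0.64,
     "salary_change_per_conferral": 8_500,
     "retro_satisfaction": 3.6, "cohort_repayment_rate": 0.78},
]

print(quality_value_index(sample))  # e.g. {'College A': 4.44..., 'College B': 3.56...}
```

Note that the sketch weights all four factors equally, which is exactly the kind of choice the questions below are meant to pressure-test.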

We at the Christensen Institute acknowledge that these factors may be imperfect, but they are important starting points and placeholders to pressure-test and refine through a potential pilot program. Important questions remain: How should we weight these different elements? Should we use one-year rather than 90-day hire rates? How do we avoid pitting selective liberal arts colleges against broad-access institutions that serve minorities or mostly working adults? Should we disaggregate the data by Carnegie classification, age of the student population, major field of study, ethnicity, income, or Pell Grant status? What are the right survey questions for measuring student satisfaction? These are all points worth exploring, but first, we need access to data.

Easier said than done. A much-needed database that could track students through their postsecondary education and into the workforce, offering transparency about the value of college, was proposed during the 2005 Spellings Commission. This student-unit record system, however, was heavily opposed by privacy advocates, private colleges, and Congressional Republicans, and the 2008 reauthorization of the Higher Education Act (HEA) included a provision forbidding the creation of a federal student-unit record system.

A recent report by the New America Foundation identifies the private nonprofit college lobby, the National Association of Independent Colleges and Universities (NAICU), as the most vocal and influential opponent of this more robust database, despite its members enrolling only a small fraction of college-going students in the U.S. While policymakers fight over the student-unit record system, it seems unlikely that institutions will provide this information until they are federally mandated to do so.

In the meantime, however, one could imagine a subset of organizations that would voluntarily display their QV in order to prove how well they’re performing on all of these fronts. Non-accredited providers, such as coding bootcamps or other upstart ventures, might use the QV as a way for their students to access Title IV funding without having to seek institutional accreditation from regional or national agencies.

Critics of the college-ratings plan from within academia throw up their hands at the lack of good data on student outcomes and reject the notion that college performance can be quantified at all. But it may be that the plan simply needs a different starting point, a sandbox in which to test whether these metrics work. More marginalized, non-accredited organizations might be the ideal site for a pilot program that ultimately demonstrates the power of a new postsecondary ratings system.

Author

  • Michelle R. Weise, PhD