Why NAEP Should Go to College

The general belief that K–12 education in the United States is, compared with other countries, in the middle of the pack and possibly falling behind has been confirmed by almost two decades of international assessments. Just as firm is the contrasting belief that our colleges and universities are the world’s best, yet no empirical data back that up. We have no common metric for comparing the learning outcomes of colleges and universities, no data showing whether students graduating from college can read better than when they finished high school, and no data on whether going to an Ivy League school results in higher levels of learning than going to a state-supported school.

In 2011, the Organization for Economic Co-operation and Development (OECD) compared the college graduation rates of the United States with those of other member countries and found that we had slipped from second to thirteenth place. U.S. graduation rates did not decline; the rates of other countries simply rose faster.

In 2012, another international survey, the Program for the International Assessment of Adult Competencies (PIAAC), was conducted in 23 countries to measure competencies in literacy, numeracy, and problem solving in technology-rich environments. Americans between the ages of 16 and 24 (roughly college age) scored below the international average in all three domains.

Ask those who run colleges whether their students are ready to find a good job upon graduating, and almost all will say yes. In a 2014 Gallup poll, 96% of chief academic officers felt their students were prepared for success in the workforce. This rosy view was at odds with the finding that only 11% of business leaders consider college graduates prepared for the world of work. Clearly, there is a disturbing disconnect between perceptions inside the ivory tower and perceptions in the working world its graduates enter. Something, it seems, may be seriously wrong with our preconceived notions about what and how much students learn in college.

There was an earlier attempt to get a handle on this. Two decades ago, the National Center for Education Statistics (NCES) tried to develop a collegiate assessment dubbed “NAEP Goes to College.” The plan was to consider the National Assessment of Educational Progress (NAEP) as a possible assessment vehicle. In 1991 and 1992, NCES sponsored two study design workshops with assessment experts, institutional researchers, university faculty and administrators, and education policymakers, and it commissioned position papers on critical thinking, problem solving, and communication skills. Four public hearings were held, and more than 600 expert participants were asked which skills should be assessed. The whole effort was documented in a 1995 NCES publication (National Assessment of College Student Learning: Identifying College Graduates’ Essential Skills in Writing, Speech and Listening, and Critical Thinking) before being scuttled in the face of opposition from the higher education community.

Recently, the National Assessment Governing Board, which oversees NAEP, has been working on relating NAEP to college preparedness and has determined the cut-scores on the assessment that best predict it. The board found that less than 40% of high school students are prepared to go to college.

An obvious next step would be to validate these cut-scores by following a cohort of students from high school into college to see how well NAEP predicts college preparedness. Over the longer term, extending the NAEP assessment into the college population makes sense: NAEP could, for example, assess students as they enter college, two years later, and again when they graduate. NAEP has already been expanded from a national assessment to include state assessments and many large-scale district assessments.

Why conduct an external standardized assessment like NAEP in the college population? Without one, we will have to keep relying on each institution to provide its own idiosyncratic measures of outcomes, with no way to compare those measures across institutions and no external validity check to keep institutions from simply making themselves look good. We had the same problem in K–12 education before state NAEP provided such a check. Think back to 1987, when John Jacob Cannell, a West Virginia physician, produced a report showing that, on norm-referenced tests, all 50 states were reporting scores above the national average.

If NAEP were to develop a collegiate assessment, many issues would have to be addressed. First, since NAEP is legislatively authorized, Congress would have to grant it the authority. Second, NAEP would need to determine what to assess; content such as reasoning, writing, communication, and problem-solving skills seems logical. Third, individual institutions would not be identified, just as individual schools are not currently identified in NAEP; instead, NAEP would report on types of institutions, such as four-year versus two-year and public versus private. Fourth, NAEP would want to set performance standards, just as it does in grades 4, 8, and 12. These would likely be internationally competitive levels of learning outcomes that we would expect institutions of higher education to strive for.

There is a prevailing view that U.S. institutions of higher education are the best in the world. Why don’t we find out if that is true?

Gary W. Phillips is a vice president and Institute Fellow at AIR.