
Higher education’s ‘biggest scandal’ may not be what you think

I’m a college professor, and I have little idea of how well I teach because universities have made few efforts to determine how well any of us are teaching — or how much our students are learning.

Eric Monday teaches a class at the University of Kentucky.

In the latest piece of evidence about the learning losses from the pandemic, a recent report from the National Assessment of Educational Progress found that fourth- and eighth-grade students showed significant dips in reading and math scores between 2019 and 2022. The declines in math were the worst since the exam was first given in 1969.

So how much learning did college students lose during the pandemic? As a professor at Penn, this is a question that’s often on my mind.

The answer: We don’t know — and, even more, we don’t want to know.

When COVID-19 arrived, most universities sent students home to take classes on their laptops. At Penn, we taught remotely from March 2020 through September of last year. Many of us suspected that our students weren’t learning as much as they did in actual classrooms. But we didn’t make any real effort to find out, which might be the biggest scandal of all.

This lack of concern for the outcomes of teaching in universities is not a new problem. In the early 20th century, when universities reconstituted themselves around research, professors got hired and promoted based on the research papers they had published. In most cases, their teaching didn’t matter.

Indeed, good teaching could count against you. “Many college professors are suspicious of a colleague who appears to be a particularly good teacher,” a dean at Ohio State University wrote in a letter to a colleague in 1910. “There is a rather wide spread notion in American Universities that a man who is an attractive teacher must in some way or other be superficial or unscientific.”

Not surprisingly, universities made few sustained efforts to determine how well faculty were teaching — or how much their students were learning. Today, thanks to an explosion in the learning sciences, we know much more about what helps college students succeed in the classroom. The best courses engage them in higher-level thinking and problem-solving, focused on the key questions that define whatever discipline they’re studying.

But many professors remain unaware of that research, which isn’t a standard part of their preparation. They study to become biologists or historians or statisticians, not educators. And we often assume — without evidence — that their degrees will qualify them to teach in these fields, too.

A great researcher does not necessarily make a good teacher.

Then we evaluate a professor’s instruction via student surveys, a notoriously imprecise tool. Students can tell us important things about instructors: whether they return work on time, make themselves available outside of class, and more. But students can’t tell us if we’re effective teachers.

That is — or should be — a professional question. And there’s no national test in higher education — like the National Assessment of Educational Progress for K-12 schools — to help us frame an answer.

If universities took teaching seriously, we would require professors to take coursework about the fundamentals of teaching well before they entered the classroom. We would judge their teaching via peer review, evaluating each other’s in-classroom performance just like we do with our research. And we would make much more substantial efforts to measure what our students learn.

You would think that the COVID crisis might have inspired us to do that. But you would be wrong. So far as I know, no major institution made a full-scale commitment to determining how much — or how little — its students learned during the pandemic, when so much instruction went online.

A nationwide survey of 2,000 students in June 2021 showed that over half of them believed they learned less in the previous year than they had before COVID. More students also reported lower levels of psychological well-being, which likely inhibited their learning as well.

But we really don’t know, because we haven’t invested in knowing it. Like teacher preparation and peer review of instruction, measuring student learning would be expensive. Things of value cost something. And if you’re not willing to sustain the costs, you probably don’t value them very much.

Many colleagues at Penn have told me that investigating instruction would be throwing good money after bad because you could never really evaluate the results of teaching scientifically: It’s too ineffable, too idiosyncratic, too “personal.”

Seriously? Institutions like my own have generated vast knowledge about hundreds of hugely complex human behaviors, ranging from stock market decisions to spousal choices. We could do the same for our teaching if we wanted. And we don’t.

My university has some terrific teachers, some terrible ones, and many in between. But we won’t tell you which is which, or how much our students are learning from them. That’s not because we’re hiding something in the darkness. It’s because we haven’t summoned the will to bring it to light.

Jonathan Zimmerman teaches education and history at the University of Pennsylvania. He is the author of “The Amateur Hour: A History of College Teaching in America,” which was published in 2020 by Johns Hopkins University Press.