Five years ago, the University of Southern California’s Pedro Noguera and I published A Search for Common Ground. In a time of intense polarization, the two of us—from different parts of the political spectrum—sought to find points of agreement and better understand our disagreements. As we wrote, I was repeatedly struck by the outsized role of simple statistical facts (on questions like spending, achievement, or staffing) in grounding our exchange and helping us talk to—rather than past—one another.
Taking trusted facts for granted is easy because we’re the fortunate heirs of institutions that do a remarkable job of producing them. Agreed-upon facts permit practical, pointed assessments of policy and practice. On questions like reading achievement or school spending, data points provide an essential discipline. Mississippi and Louisiana’s outperformance on the National Assessment of Educational Progress allows us to discuss what’s going on, rather than just argue wildly about which states we might think are faring well.
John Donvan, the moderator-in-chief at Open to Debate, a nonpartisan, nonprofit media group, recently observed: “Here’s what I’ve watched happen over the 20 years as a moderator: The space for genuine disagreement keeps shrinking.” But, he continued, “From the moderator’s chair, I get to see something most people don’t: What happens when smart people from opposite sides . . . engage evidence instead of just trading talking points. It doesn’t always change minds. But it changes the quality of the conversation.”
By the way, the ability of data to anchor fruitful discourse is one reason I launched the RHSU Edu-Scholar Rankings back in 2010 (for the new 2026 rankings, see here). The idea was to ground discussion of scholarly influence in something more systematic, in data, making it easier to discuss the views, agendas, and expertise that shape education research.
But useful evidence doesn’t just appear. It must be regularly and uniformly collected by a neutral party that has the requisite authority, resources, and capacity, and then the stature to issue results that most parties will deem credible. When it comes to education data, the only obvious candidate for this role is the federal government. No other entity, whether it’s a university consortium, a deep-pocketed nonprofit, or an organization like the National Governors Association or Council of Chief State School Officers, can meet these criteria.
Only the federal government can. Now, it just so happens that, in Washington and elsewhere, we’re in the midst of an overdue conversation about the future of the Institute of Education Sciences. Since its creation in 2002, IES has been tasked with leading federal efforts on data, research, and evidence. Last winter, as readers doubtless recall, Elon Musk’s DOGE took an axe to the U.S. Department of Education. IES was a big target, with DOGE slashing 90% of the institute’s 200-person staff and canceling $900 million worth of contracts.
What to do now? One camp, on the political right, would like Washington to get out of education data and research altogether. I reject that stance. I’ve argued that the federal government is uniquely suited to effectively and credibly collect national education data, a role it has played since the 1800s. Heck, Article I, Section 8 of the Constitution charges Congress with overseeing federal “weights and measures,” which would certainly seem to include gauging whether the nation’s students can read or whether taxpayer-subsidized college-goers complete their degrees.
I’m all for shrinking the federal bureaucracy and cutting federal spending. But successful self-government demands the transparency and honesty that allow for course correction. The agencies charged with producing that transparency need to be protected and bolstered, not undermined. That’s why, for instance, President Donald Trump’s misguided decision last fall to scapegoat and fire the head of the Bureau of Labor Statistics was so destructive, and why it was so important that Congress refused to confirm the partisan apparatchik that Trump tried to install.
Most of what IES does, however, is not the kind of invaluable data collection I’m talking about. Instead, much of what the agency does is pay university researchers and assorted contractors to produce an array of evaluations, studies, and guides of uncertain value. If IES’ unique role is providing useful, credible evidence to inform policy and practice, much of what results doesn’t meet that bar. Put simply, the field of education research is mostly not focused on producing reliable findings of general import; it isn’t tackling pressing concerns and evinces an unfortunate taste for ideological agendas. Even IES’ “What Works” products, intended to be useful, tend to be underutilized—perhaps because they feature more supposition and low-grade evidence than compelling substantiation of reliable practices.
Given Trump administration efforts to dismantle the Education Department, it’s unclear whether IES will continue to exist in its familiar form or be subsumed into another department. Either way, the federal government needs to prioritize the collection of reliable data on American education. Currently, after last year’s cuts, many data collections are held together by duct tape.
The goal should be not merely to ensure that NAEP, the Integrated Postsecondary Education Data System, and other vital data collections continue but that these efforts are enhanced. NAEP, for instance, may well be the best investment taxpayers make when it comes to K–12 education. It tells us how students are faring and serves as a check on leaders inclined to elide or cloud hard facts. Yes, let’s streamline processes, review contracts, and rethink practices, as former IES director Mark Schneider has suggested. But we should be investing more in NAEP, not less.
In the early days of IES, thanks to the single-minded efforts of inaugural director Russ Whitehurst, the institute enjoyed some success promoting scholarship that’s scientifically valid. Confronting a research community where dubious fads and quasi-advocacy reigned, Whitehurst was an unrelenting champion of empiricism and established small pockets of hard-edged scholarship at several prominent universities. When Whitehurst left, though, his project stalled out. Two decades later, those beachheads remain isolated outposts in a field that frequently seems more defined by fads and advocacy than by scientific inquiry.
The problem is less with IES than with the state of the field. Given that, there’s only so much that tinkering with the IES machinery can accomplish. Indeed, most criticism of the agency is about balky processes, burdensome applications, and a lack of timeliness. Those are real issues, but downsizing or reorganizing IES won’t address broader concerns about the state of education research. That would require the agency to once again embrace Whitehurst’s unflinching commitment to rewriting the field’s standards and norms.