On Wednesday, I'll be publishing the 2026 RHSU Edu-Scholar Public Influence Rankings, tracking the 200 education scholars who had the largest influence on policy and practice last year.
I want to take a few moments to explain the nature of the exercise. It's rooted in twin presumptions: First, ideas matter; second, people tend to devote more time and energy to those activities that are valued. Unfortunately, over many years, I've found that higher education doesn't sufficiently value many things that it really should (including, I fear, teaching and learning).
Today, though, I want to focus on a specific shortcoming when it comes to research: Higher ed's fixation on grants and unread academic journals has meant that it just doesn't pay much attention to whether scholars are contributing to the real world of policy and practice. While this may not much matter when it comes to the study of physics or Renaissance poetry, it does when researchers work in education, with its immense day-to-day implications for millions of students and educators.
Now, just because education researchers are influencing policy or practice doesn't mean their work is necessarily good or useful. Indeed, regular readers know that I'm skeptical about the value of much education research and certainly don't think policy or practice should be driven by the whims of researchers. Why? Well, researchers inevitably bring their own biases, education research tends to be plagued by methodological complications, and even valid findings may not translate into actionable advice. So, "influential" is intended here more as a descriptor than as a compliment.
Scholars are at their best not when they're handing down edicts from on high but when they're asking hard questions, challenging lazy conventions, and scrutinizing the real-world impacts of yesterday's reforms. On that count, it's enormously healthy for education scholars to interact with the policymakers and educators they seek to persuade.
That makes it a big problem that higher education tends to reserve its professional rewards for scholars who stay in their comfort zone, producing narrow, jargon-laden papers for unread academic journals notable mostly for their unreadable prose. Consequently, there can be little incentive for responsible scholars to wade into heated, oft-unpleasant debates about policy or practice.
That's where this exercise can help. Over the past decade-plus, dozens of deans and provosts have used these rankings to identify candidates for job openings or inform decisions about promotion and pay. I've heard from hundreds of scholars who've pointed to the results when seeking institutional support or to illustrate their impact when applying for positions, grants, fellowships, or tenure. And prominent institutions have bragged about the rankings, spotlighting activity that otherwise rarely garners much notice.
Some of this has been in the service of scholarship that I find problematic. But even when that's the case, the rankings have helped make possible a more robust debate about which scholars are influencing policy and practice and what we should make of their work. Now, no one should overstate the precision of this exercise. It's a data-informed conversation starter, analogous to similar rankings of ballplayers or mutual fund managers.
Finally, I want to reiterate that the rankings do not address teaching, mentoring, or service (even though I suspect that, much of the time, these are the most valuable things that professors do). For better or worse, this is an exercise in gauging public influence, not a summation of a scholar's worldly contributions.