Ricardo Torres, the CEO of the National Student Clearinghouse, is retiring next month after 17 years at the helm. His last few weeks on the job have not been quiet.
On Jan. 13, the clearinghouse’s research team announced they had found a significant error in their October enrollment report: Instead of freshman enrollment falling by 5 percent, it actually seemed to have increased; the clearinghouse is releasing its more complete enrollment report tomorrow. In the meantime, researchers, college officials and policymakers are re-evaluating their understanding of how 2024’s marquee events, like the bungled FAFSA rollout, influenced enrollment; some are questioning their reliance on clearinghouse research.
It’s come as a difficult setback at the end of Torres’s tenure. He established the research center in 2010, two years after becoming CEO, and helped guide it to prominence as one of the most widely used and trusted sources of postsecondary student data.
The clearinghouse only began releasing the preliminary enrollment report, called the “Stay Informed” report, in 2020 as a kind of “emergency measure” to gauge the pandemic’s impact on enrollment, Torres told Inside Higher Ed. The methodological error in October’s report, which the research team discovered this month, had been present in every iteration since. A spokesperson for the clearinghouse said that, upon review, the methodology for its “Transfer and Progress” report, which it has released every February since 2023, was also affected by the miscounting error; the 2025 report will be corrected, but the last two were skewed.
Torres said the clearinghouse is exploring discontinuing the “Stay Informed” report entirely.
Such a consequential snafu would put a damper on anyone’s retirement and threaten to tarnish their legacy. But Torres is used to a little turbulence: He oversaw the clearinghouse through a crucial period of transformation, from an arm of the student lending sector to a research powerhouse. He said the pressure on higher ed researchers is only going to get more intense in the years ahead, given the surging demand for enrollment and outcomes data from anxious college leaders and ambitious lawmakers. Transparency and integrity, he cautioned, will be paramount.
His conversation with Inside Higher Ed, edited for length and clarity, is below.
Q: You’ve led the clearinghouse since 2008, when higher ed was a very different sector. How does it feel to be leaving?
A: It’s a bit bittersweet, but I feel like we’ve accomplished something during my tenure that can be built upon. I came into the job not really knowing about higher ed; it was a small company, a $13 million operation serving the student lending industry. We were designed to support their fundamental need to understand who’s enrolled and who isn’t, for the purposes of monitoring student loans. As a matter of fact, the original name of the organization was the National Student Loan Clearinghouse. When you think about what happened when things began to evolve and opportunities began to present themselves, we’ve done a lot.
Q: Tell me more about how the organization has changed since the days of the Student Loan Clearinghouse.
A: Frankly, the role and purpose of the clearinghouse and its main activities have not changed in about 15 years. The need was to have a trusted, centralized location where schools could send their information that then could be used to validate loan status based on enrollments. The process, prior to the clearinghouse, was loaded with paperwork. Registrars who are out there now get an almost PTSD-like effect when they think back to the time before the clearinghouse. If a student was enrolled in School A, transferred to School B and had a loan, by the time everybody figured out that you were enrolled someplace else, you were in default on your loan. We were set up to fix that problem.
What made our database unique at that time was that when a school sent us enrollment data, they had to send all of the learners because they actually didn’t know who had a previous loan and who didn’t. That allowed us to build a holistic, comprehensive view of the whole lending environment. So we began experimenting with what else we could do with the data.
Our first observation was how great a need there was for this data. Policy formulation at almost every level—federal, state, regional—for improving learner outcomes lacked the real-time data to figure out what was going on. Still, democratizing the data alone was insufficient because you need to convert that insight into action of some kind that is meaningful. What I found as I was meeting schools and individuals was that the ability and the skill sets required to convert data to action were mostly available in the wealthiest institutions. They had all the analysts in the world to figure out what the hell was going on, and the small publics were just scraping by. That was the second observation, the inequity.
The third came around 2009 to 2012, when there was an extensive effort to make data an important part of decision-making across the country. The side effect of that, though, was that not all the data sets were created equal, which made answering questions about what works and what doesn’t that much more difficult.
The fourth observation, and I think it’s still very relevant today, is that the majority of our postsecondary constituencies are struggling to work with the increasing demands they’re getting from regulators: from the feds, from the states, from their accreditors, the demand for reports is increasing. The demand for feedback is increasing. Your big institutions, your flagships, might see this as a pain in the neck, but I would suggest that your smaller publics and smaller private schools are asking, “Oh my gosh, how are we even going to do this?” Our data helps.
Q: What was the clearinghouse doing differently in terms of data collection?
A: From the postsecondary standpoint, our first set of reports that we released in 2011 focused on two types of learners that had, at most, been referred to anecdotally: transfer students and part-time students. The fact that we included part-time students, which [the Integrated Postsecondary Education Data System] did not, was a huge change. And our first completion report, I believe, said that over 50 percent of baccalaureate recipients had some community college in their background. That was eye-popping for the country to see and really catalyzed a lot of thinking about transfer pathways.
We also helped spur the rise of these third-party academic-oriented organizations like Lumina and enabled them to help learners by using our data. One of our obligations as a data aggregator was to find ways to make this data useful for the field, and I think we accomplished that. Now, of course, demand is rising with artificial intelligence; people want to do more. We understand that, but we also think we have a huge responsibility as a data custodian to do that responsibly. People who work with us realize how seriously we take that custodial relationship with the data. That has been one of the hallmarks of our tenure as an organization.
Q: Speaking of custodial responsibility, people are questioning the clearinghouse’s research credibility after last week’s revelation of the data error in your preliminary enrollment report. Are you worried it will undo the years of trust building you just described? How do you take accountability?
A: No. 1: The data itself, which we receive from institutions, is reliable, current and accurate. We make best efforts to ensure that it accurately represents what the institutions have within their own systems before any data is merged into the clearinghouse data system.
When we first formed the Research Center, we had to show how you can get from the IPEDS number to the clearinghouse number and show people our data was something they could count on. We spent 15 years building this reputation. The key to any research-related error like this is, first, you have to take ownership of it and hold yourself accountable. As soon as I found out about this we were already making moves to [make it public]—we’re talking 48 hours. That’s the first step in maintaining trust.
That being said, there’s an element of risk built into this work. Part of what the clearinghouse brings to the table is the ability to responsibly advance the dialogue of what’s happening in education and student pathways. There are things that are happening out there, such as students stopping out and coming back many years later, that basically defy conventional wisdom. And so the risk in all of this is that you shy away from that work and decide to stick to your knitting. But your obligation is, if you’re going to report those things, to be very transparent. As long as we can thread that needle, I think the clearinghouse will play an important role in helping to advance the dialogue.
We’re taking this very seriously and understand the importance of the integrity of our reports considering how the field is dependent on the information we provide. Frankly, one of the things we’re going to take a look at is, what is the need for the preliminary report at the end of the day? Or do we need to pair it with more analysis—is it just enough to say that total enrollments are up X or down Y?
Q: Are you saying you may discontinue the preliminary report entirely?
A: That’s certainly an option. I think we need to assess the field’s need for an early report—what questions are we trying to answer and why is it important that those questions be answered by a certain time? I’ll be honest; this is the first time something like this has happened, where it’s been that dramatic. That’s where the introspection starts, saying, “Well, this was working before; what the heck happened?”
When we released the first [preliminary enrollment] report [in 2020], we thought it’d be a one-time thing. Now, we’ve issued other reports that we thought were going to be one-time and ended up being a really big deal, like “Some College, No Credential.” We’re going to continue to look for opportunities to provide those types of insights. But I think any research entity needs to take a look at what you’re producing to make sure there’s still a need or a demand, or maybe what you’re providing needs to pivot slightly. That’s a process that’s going to be undertaken over the next few months as we evaluate this report and other reports we do.
Q: How did this happen, exactly? Have you found the source of the imputation error?
A: The research team is looking into it. To make sure we don’t extrapolate this particular report’s error to a whole bunch of other things, you just need to be sure you’ve got your bases covered analytically.
There was an error in how we imputed a particular category of dual-enrolled students versus freshmen. But if you look at the report, the total number of learners wasn’t impacted by that. These preliminary reports were designed to meet a need after COVID, to understand what the impact was going to be. We basically designed a report on an emergency basis, and by default, when you don’t have complete information, there’s imputation. There’s been a lot of pressure on getting the preliminary fall report out. That being said, you learn your lesson—you gotta own it and then you keep going. This was very unfortunate, and you can imagine the amount of soul searching to ensure that this never happens again.
Q: Do you think demand for more postsecondary data is driving some irresponsible analytic practices?
A: I can tell you that new types of demands are going to be put out there on student success data, looking at nondegree credentials, looking at microcredentials. And there’s going to be a lot of spitballing. Just look at how people are trying to calculate ROI right now; I could talk for hours about the ins and outs of ROI methodology. For example, if a graduate makes $80,000 after graduating but transferred first from a community college, what kind of attribution does the community college get for that salary outcome versus the four-year school? Hell, it could be due to a third-party boot camp done after earning a degree. Research on these topics is going to be full of outstanding questions.
Q: What comes next for the clearinghouse’s research after you leave?
A: I’m excited about where it’s going. I’m very excited about how artificial intelligence can be appropriately leveraged, though I think we’re still trying to figure out how to do that. I can only hope that the clearinghouse will continue its journey of support. Because while we don’t directly impact learner trajectories, we can create the tools that help people who support learners every year impact those trajectories. Looking back on my time here, that’s what I’m most proud of.