It’s a busy week for education in the Bay Area of California. The American Educational Research Association (AERA) Annual Meeting has brought thousands of education researchers to San Francisco; the NewSchools Venture Fund Summit in Burlingame has drawn a who’s who of education leaders and entrepreneurs; and with the GreatSchools 2013 Summit in San Francisco, the Education Writers Association’s 66th National Seminar in Palo Alto, ImagineK12’s Demo Day in Palo Alto, and more, educators, investors, policymakers, entrepreneurs, and researchers have plenty of opportunities to meet.
One of the more critical conversations kicked off the week on Sunday. Ironically enough, the topic was a kind of meeting that rarely happens in education.
Bror Saxberg, chief learning officer at Kaplan, organized a panel discussion at the AERA meeting about why learning scientists and educational entrepreneurs so rarely connect. I, along with Dick Clark of USC; Kenneth Koedinger, co-director of the Pittsburgh Science of Learning Center; Michael Moe of GSV Capital; Stacey Childress of the Bill & Melinda Gates Foundation; and Nadya Dabby of the U.S. Department of Education, discussed not only that these conversations don’t happen, but the fundamental reasons why they don’t.
Saxberg and many others have noted that, all too often, products and services in the education market are not informed by what we know about learning. As a result, these new offerings tend to start from scratch and do not take advantage of what has become, over the past couple of decades in particular, a sizeable literature on how people learn and how to design optimal learning experiences.
Although learning scientists have far more to learn (and I believe some of the biggest advances will occur in the field rather than the lab, given the rise of adaptive learning products), building products that don’t start from what’s already known about learning is often a big miss for students. Yet we see it all the time.
To take a notable example, people from the biggest of the massive open online course platforms, Coursera, often talk about how exciting it is that they can do A/B testing to learn what works. With their massive user base and the big data they are able to collect, there is indeed huge potential for breakthroughs. What sort of A/B testing are they doing, though? One professor, for example, tested whether showing his face during a lesson improved learning. What’s sad is that the research answering these sorts of questions is already well established.
From a higher level, it often seems that the best business plans in education have the least interesting learning science behind them, and the worst business plans have the most interesting learning science behind them. On the panel, Koedinger, a co-founder of Carnegie Learning, confirmed the point when he described how, once he and his team had brought their research-informed product to market, most of the market incentives discouraged them from improving the product’s ability to help students learn.
This points to the first of the three ideas I offered in my opening remarks as to why educational entrepreneurs and learning scientists don’t talk all that much: in public education, the incentives don’t encourage educational entrepreneurs to seek out what’s known from learning science. The products that win in the marketplace aren’t necessarily those that are best for learning, because policies in public K–12 education in particular focus heavily on input-based metrics that encourage compliance rather than student learning growth. As a result, seeking out what’s known about how students learn and improving products accordingly isn’t necessarily rewarded. To change this, we need to fix the demand-side problem. Moving from a policy environment that rewards inputs like seat time to one that values student outcomes in a competency-based learning environment is critical to creating smarter demand.
Second, entrepreneurs sometimes suffer from the “We went to school, therefore we are experts” mentality, when in fact what we think we know about how learning works from our own experiences is often incorrect. Because of this notion, entrepreneurs believe either that they can extrapolate solutions to system-wide problems whose causality they don’t well understand, or that they can use a lean startup approach and figure it out on the ground. There is a lot to be said for a lean startup, or discovery-driven, approach. But in a discovery-driven process, the goal is to identify assumptions, test them, and gain knowledge as quickly and cheaply as possible. Leveraging good research that has already built a knowledge base does exactly that. Ignoring it is a mistake.
Finally, researchers have a long way to go to help solve the problem. The catalog of sessions at AERA was the weight of a phonebook; short of asking Saxberg which sessions would be useful, I had no hope of navigating it. We need more education research on things that actually matter in the field and are relevant for teachers and students. We need more translation of good research into the popular domain, so that people widely understand which research is sound and what it says. Today every company seems to bring districts a research study validating what it does; how do we clarify what’s good? And we need faster research that takes advantage of the massive amounts of data we can generate about education through digital learning.
In the panel conversation, the lack of good networks, underuse of the emerging edtech incubators, the structure of federal research funding, the lag time between learning and tangible results, and other factors surfaced as additional facets of the problem. In seeking to fix this, I’m curious: what else have you observed holding this back? Students await the answer.