Data points to problems in the K-12 e-market

Dec 1, 2011

The Keeping Pace 2011 report represents another tour de force by Evergreen Education Group in summarizing the busy, messy growth in K-12 online learning. The report brings much needed clarity and perspective about the state of the movement.

I particularly appreciated the “Quality, accountability, and research” section. The authors made the important point that “just because online learning can work does not mean online learning will work” (p. 40).

The most provocative graphic was Figure 9, which used bar graphs to show student enrollment levels among a set of Ohio eCommunity schools (public charter schools that operate entirely online and which students attend on a full-time basis). Juxtaposed on the same plane were scatter plots showing student performance levels among the programs. Ideally, the bar graphs and scatter plots would correlate tightly: enrollments would rise at the schools generating high student performance. But the figure showed no such pattern. In fact, analysis by Education Sector found no correlation at all between the schools’ performance index ratings and enrollment levels.

The implication was that families were not choosing schools that demonstrated better results.

In their blog series about Ohio e-schools, Bill Tucker, Erin Dillon, and Padmini Jambulapati from Education Sector said that the Ohio data raise questions about the e-school “market” in Ohio. They ask why families are making the choices they do. Perhaps families are choosing based on outcomes other than those Ohio’s performance index measures. Math and reading scores, for example, might matter less to families than the specific programs an e-school offers, recommendations from friends, and so forth.

Another possibility is that the test scores and relative performance rankings among schools were not readily accessible to families. Perhaps families lacked understandable data or time to act on it before registration periods closed, or faced some other transparency problem.

Either way, the study is helpful to anyone thinking about how to channel online learning toward higher quality. It suggests that better outcome data will not emerge from the market on its own: no commercial enterprise has sufficient incentive to devise and maintain the kind of comprehensive assessment system the K-12 e-market needs to measure the right things and report results broadly and transparently.

I heard Stacey Childress, Deputy Director for Education at the Bill and Melinda Gates Foundation, say recently that one of the most important places for the Gates Foundation to invest is in areas where market incentives for vendors are low but the shared need for students is high. This is the sweet spot for philanthropists and the government. Rather than distorting the market and prematurely picking the winners, institutions that focus in that sweet spot help build the foundation for a more effective self-directed market.

The effort to create comprehensive, relevant outcome metrics must begin with government and philanthropists. Fortunately America has many brilliant people in both sectors. I hope the people behind the SMARTER Balanced and PARCC assessment consortia work to make this possible for online learning. Meanwhile, foundations need to play a significant role as guides and accelerators. Online learning is our world’s most promising education opportunity in more than a century. But it is desperate for the infrastructure and institutions that will channel it to its most honorable potential.

Heather is an adjunct researcher for the Christensen Institute and president of Ready to Blend. She is the co-author of Blended: Using Disruptive Innovation to Improve Schools and co-founder of Brain Chase Productions, which produces online-learning challenges disguised as worldwide treasure hunts for students in grades 1-8.