As online learning gains share and transforms our education system, I have long argued that foundations and philanthropists would be wise to spend their dollars on moving public policy, creating proof points, and the like to create smarter demand, rather than investing on the supply side in the technology products and solutions themselves.
The market is plenty motivated to create disruptive products and services to serve the public education system, but today's policies and regulations don't incentivize and reward the products and services that best serve students. As a result, philanthropic dollars are critical to help create the right conditions, so that the products that are efficacious and serve the higher end of student learning are the ones that gain share.
As we've argued, public policy should reward the providers that best deliver student outcomes and penalize those that do not serve the public good.
There is one area, however, where I think philanthropic dollars probably should fund products and services: assessments. If we're going to have a system that pays providers based on how students perform on outcome measures, we need robust assessments that are authentic and that people trust. For a variety of reasons, the political incentives to create high-quality assessments aren't particularly strong, so having philanthropists invest dollars to create these assessments and continue to push innovation is critical.
This is why yesterday's announcement that The William and Flora Hewlett Foundation will award a $100,000 prize to the designers of software that can reliably automate essay grading for state tests, with the aim of driving assessment of deeper learning, is so important. Open Education Solutions and The Common Pool designed the competition and will manage it.
The Hewlett Foundation's leadership in creating better assessments to measure critical reasoning and writing is a big step forward, and its use of Kaggle, a platform for predictive modeling competitions, to host the contest is clever.
According to the press release, "The automated scoring competition intends to solve the longstanding problem of the high cost and slow turnaround of current tests of deeper learning, such as student essays. The goal is to shift testing away from standardized bubble tests to tests that evaluate critical thinking, problem solving, and other 21st-century skills."
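To make concrete what "software that can reliably automate essay grading" involves, here is a minimal sketch of essay scoring framed as the kind of predictive modeling problem Kaggle hosts: a model is trained on essays paired with human-assigned scores, then predicts scores for new essays. The data, features (TF-IDF), and model (ridge regression) below are illustrative assumptions on my part, not the competition's actual approach.

```python
# Illustrative baseline only: automated essay scoring as supervised
# prediction. Not the competition's method; data and model choices
# here are assumptions for the sake of the example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical training data: essays with human-assigned rubric scores.
essays = [
    "The author argues that censorship limits free expression...",
    "books shud be band if they are ofensive",
    "Censorship in libraries raises difficult trade-offs between...",
]
human_scores = [4, 1, 5]  # e.g., scores on a 1-6 rubric

# Baseline: bag-of-words TF-IDF features feeding a ridge regression
# that predicts the human score.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    Ridge(alpha=1.0),
)
model.fit(essays, human_scores)

# Score an unseen essay. A real system would round or clip to the
# rubric's scale and, crucially, be validated on how closely its
# scores agree with human raters on held-out essays.
new_essay = ["Banning books denies readers the chance to judge ideas."]
print(model.predict(new_essay))
```

A competition entry would be judged on agreement with human graders at scale, which is exactly the "reliably" part that makes the problem hard.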
In addition, the competition is being conducted with the support of the two state testing consortia that are currently designing the next-generation assessments for the Common Core. That buy-in and collaboration gives the competition serious credibility and the potential for real impact.