State officials from several states gathered in June to discuss Pearson’s Next-Generation Assessments Roadmap. Pearson created the interactive tool to help states that are trying to transition to online assessments by 2014. Both of the Race to the Top assessment consortia, SMARTER Balanced Assessment Consortium (SBAC) and Partnership for the Assessment of Readiness for College and Careers (PARCC), have set this 2014 target.
I left the conference with two impressions. The first was that online assessments pose an enormous opportunity. In the short term, they eliminate the sizable hassle of shipping, storing, securing, and grading reams of pencil-and-paper assessments. They also allow states to report results much faster and to adjust course more quickly based on those results.
In the long term, they enable countless other opportunities. Their convenience affords a much easier way to offer on-demand testing, rather than the typical once-a-year regime. This allows students to finish a course and demonstrate mastery at their own pace, unbounded by semester calendars. That same convenience also helps states test more often. Frequent, through-course assessment means that testing can become a teaching tool by offering a series of guideposts. It also means that states can measure student growth, not just achievement.
Their sexiest feature, if tests can be called sexy, is that they can be far more engaging for students. Even simple digital assessment items, such as dragging and dropping a sequence into the correct order, promise better student engagement than homogeneous multiple-choice questions do. Pendred Noyce at Education Week writes here about the potential of Virtual Performance Assessment (VPA) to allow deeper measures of mastery in science and other complex subjects: VPA requires students to apply knowledge in virtual labs and scenarios, and it allows graders to take more nuanced measurements.
My second impression from the conference was that many state leaders feel completely overwhelmed by the sheer scale of the task, both political and operational, of moving assessments online. Pearson's Roadmap offers a strong starting place, but these leaders need more than a roadmap. Most of all, they need examples. Researchers and foundations can play a key role in profiling states, districts, independent schools, and other countries that are exemplary in at least some aspect of online assessment. For example, the administrator from Illinois who said that Internet connectivity posed an insurmountable barrier in his state might benefit from reading this case study about how North Carolina improved school data networks. The administrator from Arizona who worried about reliability issues needs more information about how Virginia already delivers roughly 2 million online tests annually. All states would benefit from more examples of how and when to use adaptive assessment, VPA, and other next-generation testing techniques. This includes hearing from the Graduate Management Admission Council about its decades-old, computer-adaptive GMAT exam.
Michael Horn and Katherine Mackey made the case here for shifting to outcome-based policies, in which states reward continuous improvement against a set of overall goals. For this massive policy shift to work, state assessment systems must be rebuilt. The field needs more energy directed toward envisioning and operationalizing next-generation online assessments.