Competency-based assessment should follow sports analytics’ lead

By: Lee Weinstein and Sanjay Sarma

Jun 7, 2017

Competency-based education (CBE) is a trendy (but not new) idea in education that proponents hope will revolutionize the way students progress through school. In the CBE model, competencies are the knowledge, skills, and abilities that allow an individual to perform tasks successfully. Central to this premise is that competencies must be measurable: if a student can successfully perform the relevant task, then he has demonstrated that he possesses the associated competencies.

But how and when measurement occurs can prove challenging on the ground. In some models, competency assessment is a one-time event: if a student passes the assessment once, he is deemed certifiably competent for the rest of his education (or perhaps his life), despite everyday evidence that competence is not eternal. Additionally, if CBE assessments aren’t well aligned with their corresponding real-world competencies, then students may be developing competency in test-taking more than anything else.

How might CBE proponents tackle these shortcomings in measurement and assessment? There is another realm in which assessment has become exceptionally sophisticated and successful over the past few decades: professional sports. Statistics like batting average have been tracked in baseball since the inception of Major League Baseball (MLB). In recent years, however, sports statistics have become far more advanced. For example, an MLB player’s “wins above replacement” (WAR) estimates how many more wins a team would net over the course of a season with that player in the lineup than if he were replaced by a minor leaguer.
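To make the intuition concrete, here is a deliberately simplified sketch in Python. Real WAR calculations are far more involved and vary by source; the roughly ten-runs-per-win conversion is a common sabermetric rule of thumb, and the run totals below are invented for illustration.

```python
# Toy illustration of the idea behind "wins above replacement" (WAR).
# Real WAR formulas are much more involved; this sketch shows only the
# core comparison: a player's contribution versus a replacement-level
# baseline, converted into wins. All numbers are invented.

RUNS_PER_WIN = 10.0  # common sabermetric rule of thumb: ~10 runs per win

def toy_war(player_runs: float, replacement_runs: float) -> float:
    """Estimate wins added over a replacement-level player."""
    return (player_runs - replacement_runs) / RUNS_PER_WIN

# A hypothetical player who produced 85 runs in a season in which a
# freely available minor leaguer would have produced about 20:
print(toy_war(player_runs=85, replacement_runs=20))  # -> 6.5 wins
```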

Sports-analytics-style assessment offers many promising features. For example, WAR can tease out an individual’s contribution in a team setting where success or failure (winning or losing the game) might depend on dozens of other people. In fact, the 2016 MLB player with the highest WAR played for a team with a losing record.

These individual contribution statistics don’t rely on standardized tests that strip away a situation’s complexity to the point of losing authenticity. Additionally, these assessments never stop: when an athlete sets a record or wins a championship, the game doesn’t end there. Success does not mean a player has “passed a test” and gets to play professionally for as long as she wants. Rather, each game is another assessment that gives the player and coaching staff more information and can indicate growth or deterioration of a player’s competencies.

Lastly, sports assessment is well aligned with the authentic performance of interest, the game, because the assessment comes directly from observing games in real time. Efforts to improve assessment results, such as drills and training, thus inherently help the player win games as well. As such, energy spent to pass “the test” does not come at the cost of energy spent to win more games.

Of course, sports assessment suffers from drawbacks as well. The never-ending nature of the assessment makes it inherently high pressure: a player who consistently performs poorly may lose the opportunity to play in future games. This downside need not carry over to less competitive activities that don’t have a limited number of spots for participants, but it is still worth keeping in mind. Additionally, in certain high-stakes performances, it might be important for people who have lost competencies to lose privileges: we wouldn’t want a truck driver who has the misfortune of losing their sight to retain their driver’s license because they have thousands of hours behind the wheel.

Collecting all of the relevant data to perform advanced sports analytics can also be expensive. The camera system used to collect data for the NBA alone costs over $100,000 per arena per year, far too much for most college teams to collect data of the same quality.

How might CBE proponents learn from sports analytics? Assessment, in theory, should continue indefinitely, acting as an ongoing source of guidance rather than a hurdle to clear on the way to other, more important work. Assessment should be based on performance in the authentic environment of interest, not an artificially constructed one. And finally, assessment should be intelligent within that authentic context, accurately comparing different students without resorting to standardized tests.

What might this look like in practice? Some programming learning environments already use systems similar to professional sports: they offer recurring programming challenges and competitions among learners. The challenges are ongoing opportunities to learn and improve, and an interested learner never needs to stop participating in them. They also provide an authentic environment, as the problems being solved resemble the problems a programmer might need to solve in real-world work. And because collecting results and data is inherent to programming competitions, there is no significant added cost to collect data for assessment.
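As a concrete (and hypothetical) sketch of how such a system could keep assessing indefinitely, consider an Elo-style rating that updates after every head-to-head challenge. Competitive programming sites do maintain continuously updated ratings, though their actual formulas differ in the details; the function and sample ratings below are illustrative assumptions, not any site’s real system.

```python
# Minimal Elo-style rating update: a continuously revised skill estimate
# of the kind competitive-programming sites maintain (real site rating
# systems differ in their details).

K = 32  # update step size: how quickly ratings react to new results

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update(rating_a: float, rating_b: float, a_won: bool) -> tuple[float, float]:
    """Return both players' new ratings after one head-to-head challenge."""
    exp_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + K * (score_a - exp_a)
    new_b = rating_b + K * ((1.0 - score_a) - (1.0 - exp_a))
    return new_a, new_b

# Two hypothetical learners: every challenge updates the estimate, so
# the assessment never "ends". An upset win moves ratings much more
# than an expected result.
print(update(1500, 1700, a_won=True))   # underdog wins: large gain
print(update(1500, 1700, a_won=False))  # expected loss: small drop
```

Because the rating is recomputed after every challenge, it behaves like WAR in sports: a rolling estimate that can register both growth and decline, rather than a one-time pass/fail credential.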

These principles could be applied to more traditional curricula as well. For a writing class, students could start a blog or an account on a fiction-writing website. Assignments would be authentic, as students would craft their work with the same goals in mind as professional bloggers or novelists. The work would be continuous, as students could keep developing their portfolios even after the official end of the class. Data collection could be simple and inexpensive, as most of the relevant information would be available in the web traffic to a student’s blog or portfolio.

CBE has the potential to improve education radically, but only if the assessments used to certify competencies are effective and authentic. Professional sports have developed successful methods of assessment, and CBE could do worse than to take a few pages from the sports analytics playbook.

Lee Weinstein is a PhD student in mechanical engineering at the Massachusetts Institute of Technology and is pursuing a minor in education.

Sanjay Sarma is the Vice President for Open Learning at the Massachusetts Institute of Technology and leads the MIT Office of Digital Learning. He is also the Fred Fort Flowers (1941) and Daniel Fort Flowers (1941) Professor of Mechanical Engineering at MIT.