

December 16, 2010

More interaction in online courses isn’t always better

by Michael B. Horn

A fascinating study caught my eye a few months back. Titled “Interaction in Online Courses: More is NOT Always Better,” it was written by Christian J. Grandzol and John R. Grandzol (both of Bloomsburg University of Pennsylvania), who lead with a startling abstract that has significant policy implications:

“Cognitive theory suggests more interaction in learning environments leads to improved learning outcomes and increased student satisfaction… Using a sample of 359 lower-level online, undergraduate business courses, we investigated course enrollments, student and faculty time spent in interaction, and course completion rates… Our key findings indicate that increased levels of interaction, as measured by time spent, actually decrease course completion rates. This result is counter to prevailing curriculum design theory and suggests increased interaction may actually diminish desired program reputation and growth.”

Counter-intuitive indeed.

The authors find that learner-learner interaction was significantly, but negatively, associated with course completion rates. Learner-faculty interaction and enrollment size weren’t significantly related to course completion. The time that students and faculty spent in threaded discussions didn’t seem to matter (which, the authors point out, doesn’t necessarily mean that discussions are not important to the learning process). Nor was there a significant relationship between the amount of faculty participation and course completion rates.

The authors offer three explanations for why this could be.

First, it is consistent with other findings that the more discussions students have to pay attention to, the less satisfied they are with the learning environment.

Second, others have suggested that higher-level courses (at the MBA level, for example) may require more interaction, while introductory courses need little. The authors’ sample consisted of lower-level undergraduate courses, so perhaps these do not need high levels of interaction because the content may not require interpretation or further analysis.

Some friends of mine in the cognitive science world agreed this could make some sense: when one is a novice in a field, one has limited working memory for the topic, which means there is little capacity left over for hard, unfamiliar work. It’s quite possible that working with others, especially unfamiliar people, imposes its own working memory load, which would squeeze out one’s ability to focus on the skills one is trying to master.

The authors’ third reason is the following: “the factors that loaded on student participation may have contributed to this finding. The amount of time a student spends on a course home page may have little to do with course completion. We cannot be certain a student is actively engaged or whether they just had the page open. The gradebook and email interpretations are more interesting. Perhaps the students that spent the most time in gradebook happened to be in the most rigorous courses with many graded assignments. The rigor of these courses may have contributed to the lower course completion rates, not the time spent reading a gradebook. Courses where students spent much time interacting via email may have contributed to lower completion rates. Email is a time intensive way to communicate, and may have led to less rewarding class experiences.”

The authors have several takeaways from these findings (even as they admit that time alone is a problematic measure for any study because “what takes one student ten minutes to complete may take another student twenty”), three of which I have included here.

First, “requiring extensive faculty feedback as a performance metric may be inappropriate.” Second, “administrative decisions regarding section size must accommodate variations in types, levels, and content of courses; absolute, comprehensive standards may be counterproductive. Caps on section size may be more arbitrary than evidence-based (at least for section sizes up to 30 based on courses in our sample).” Third, “requiring student interaction just for the sake of interaction may lead to diminished completion rates. Again, standards for online teaching should not contain arbitrary thresholds for required interaction.”

Should any of this really be all that surprising? The answer to whether more interaction is good or bad is very likely “it depends.” Given that everyone has different learning needs—we learn at different paces and have different aptitudes, which vary with what we are studying—the findings should make sense.

So what are the implications for policy? This doesn’t mean we should discourage interaction, but it does mean we should not measure the quality of a program based on inputs like seat time. What might be an effective process for one student in one course may be a very ineffective process for another student in another course. What we should really be worried about is mastery. Did the student finish the course and understand the material?

This might seem simple, but our whole system is based on measuring inputs. As we construct the education system of the future, all too often we fall back on them to measure quality—which constrains innovation and hurts students.

For example, as the authors write, accrediting bodies like AACSB International, an accreditor of business schools, insist “that interactions among participants define quality, that passive learning is not the preferred mode of higher education, and that learning communities require opportunities for students to learn from one another (2003).”

More perniciously in my view, in May the NCAA adopted “what it considered more stringent standards for online schools… in part to prevent student athletes from skirting rigorous coursework for what it worried were more lenient online classes” (see “N.H. Virtual Charter School Gets NCAA Approval,” Education Week). Were those standards based on outcomes? Not really. In fact, teacher-student interaction was one of the pillars.

In light of this finding, the NCAA might want to rethink whether it really wants to hurt students in courses where more interaction is not better, and to further constrain innovation in learning that could benefit all children. We won’t reach a student-centric system with misguided policymaking focused on inputs like this.

Comments

  • This is a timely topic to discuss. I suspect there are two issues we need to explore.
    First is nothing new: faculty training and teaching-skill development. I work in content publishing and for years have been discussing teaching strategies with instructors. For some time now, teachers have been asked to create more interactivity with students regardless of modality, on-ground or online. While most teachers think about what would work best, few are offered training or asked to follow best practices. So most “interaction” added into teaching is likely ineffective.
    Second: as we move to the use of data analysis to measure teaching and learning, we need to determine what we want to accomplish. Currently most people talk about data points around time on task, number of interactions, etc. But how much do we know about best practices in these areas? More importantly, we need to figure out what more we want to measure. How can we show progress toward outcomes in terms of data points?

    dan

    by Dan Bartell on Jan 10, 2011 9:22 AM PST
