Tuesday, October 11, 2016

A Comparison of Online and Face-to-Face Approaches to Teaching Introduction to American Government
Bolsen, Toby; Evans, Michael; Fleming, Anna McCaghren
Journal of Political Science Education, v12 n3 p302-317 2016
This article reports results from a large study comparing four different approaches to teaching Introduction to American Government: (1) traditional, a paper textbook with 100% face-to-face lecture-style teaching; (2) breakout, a paper textbook with 50% face-to-face lecture-style teaching and 50% face-to-face small-group breakout discussion sections moderated by graduate students; (3) blended, an interactive online textbook with face-to-face full-class meetings taught with a blend of lecture, discussions, and in-class activities; and (4) online only, an interactive online textbook with (almost) no face-to-face class meetings. We find that the mode of course delivery is significantly related to student academic engagement and performance as well as civic educational outcomes. Although drop rates were higher in the online only condition, students who successfully completed the online course were significantly more likely to express interest in discussing and participating in politics. Furthermore, students in the online only and blended conditions demonstrated significantly higher levels of objective political knowledge relative to students taking the course in a more traditional format. Finally, students enrolled in sections that assigned the interactive online textbook rated their textbook as significantly more beneficial to their learning experiences than did students who used the traditional paper textbook.

A Comparison of Online and Face-to-Face Approaches to Teaching Introduction to American Government

Toby Bolsen is an Assistant Professor in the Department of Political Science at Georgia State University. He received a PhD in political science from Northwestern University in 2010. In addition to conducting research on teaching and learning, Professor Bolsen studies political behavior, preference formation, media and communications, experimental methods, and U.S. energy policy. His work has been published in peer-reviewed journals, including American Journal of Political Science, Public Opinion Quarterly, Political Behavior, Annals of the American Academy of Political and Social Science, Journal of Communication, American Politics Research, International Journal of Press/Politics, and Journal of Experimental Political Science.
Michael Evans is a Lecturer in the Department of Political Science at Georgia State University. He received a PhD in political science from the University of Maryland in 2009. In addition to conducting research on teaching and learning, Dr. Evans studies American constitutionalism, including founding-era political thought, the role of jurisprudence in Supreme Court decision making, and influences on the court's legitimacy. His work has been published in peer-reviewed journals, including Journal of Empirical Legal Studies, American Politics Research, and Justice System Journal.
Anna McCaghren Fleming is a PhD student in the Political Science Department at Georgia State University with a concentration in American politics and political theory. In addition to conducting research on teaching and learning, she studies political behavior and experimental methods.
Pages 302-317 | Received 18 May 2015, Accepted 21 Aug 2015, Published online: 26 Jan 2016


It is estimated that half of all higher education enrollees now take at least one course online (Means, Bakia, and Murphy 2014). This is an over fivefold increase since 2002 and a 56% increase since 2012 (Allen and Seaman 2013). This trend, coupled with the sudden rise of Massive Open Online Courses (MOOCs; Pappano 2012) and widespread publicity of the apparent success of Khan Academy's flipped-classroom model in primary and secondary education (Khan 2012), has sparked a recent explosion of scholarly interest in the relative effects of online, face-to-face, and hybrid/blended teaching approaches on learning outcomes in higher education courses (e.g., Bowen et al. 2014; Gabrielson and Watts 2014; Harmon, Alpert, and Lambrinos 2014; Joyce et al. 2014; Xu and Jaggars 2014). Although research has revealed many advantages to online modes of course delivery, college faculty continue to be skeptical about their pedagogical value (Allen and Seaman 2013), and scholarship on political science teaching and learning is only beginning to address the relative effects of different approaches to online and face-to-face teaching on academic and civic outcomes in introductory American Government courses (Craig 2014).
The vast majority of studies on online versus face-to-face courses have concluded that purely online and hybrid courses are at least as effective as traditional face-to-face courses at facilitating learning of course content and/or promoting student satisfaction. A widely cited U.S. Department of Education meta-analysis of empirical pedagogical studies published between 1996 and 2008 concluded that students in online, and especially hybrid, courses performed better than those in purely face-to-face courses (Means et al. 2010). Such positive results have been observed in college courses covering a variety of subjects, including general statistics (Bowen et al. 2014), economics (Birch and Williams 2013), business education (Goeman and Deschacht 2014; Harmon, Alpert, and Lambrinos 2014; McLaren 2004), biology (Johnson 2002), geography (Moore and Gilmartin 2010), and political science (Botsch and Botsch 2012; Dolan 2008; Pollock and Wilson 2002; Roscoe 2012; Wilson, Pollock, and Hamann 2006). And although two recent studies (Figlio, Rush, and Yin 2013; Joyce et al. 2014) have found a slight advantage to classroom experience in introductory economics courses, all students in both studies had equal access to online course materials. Thus, those studies are properly interpreted as showing a slight advantage for hybrid (i.e., online with some classroom time) over exclusively online courses.
Perhaps more surprisingly, two studies of Introduction to American Government courses have found that online courses generally do as well as or better than purely face-to-face courses at promoting positive civic knowledge, attitudes, and/or behaviors, such as commitment to routine newspaper reading and political knowledge, interest, trust, and efficacy (Botsch and Botsch 2012; Pollock and Wilson 2002). This challenges the common claim that online environments poorly facilitate the acquisition of social norms and other higher order learning outcomes. Since most agree that Introduction to American Government is intended, at least in part, to "improve the civic culture of citizens" (Botsch and Botsch 2012, 494), this finding is certainly reassuring. It also corroborates Huerta and Jozwiak's (2008) finding that civic educational goals can be achieved through relatively simple measures (e.g., adding a New York Times reading requirement) and not only through more elaborate methods (e.g., simulations, project-based learning assignments, and/or experiential-learning projects) that have been shown to be highly effective at promoting civic knowledge and engagement (Beaumont et al. 2006; Bernstein 2008; McBeth and Robison 2012). It should be noted, however, that civic attitudes and political knowledge among students across nearly all of the above studies remained, in absolute terms, rather negative and low, suggesting, as political scientists have long argued (Delli Carpini and Keeter 1996; Galston 2007), that civic education still has much room for improvement at all levels.
Overall, these positive findings suggest that student learning can be enhanced, or at least not harmed, by shifting from traditional to online modes of course instruction. This, to be sure, is welcome news to those responsible for allocating scarce classroom space or concerned about meeting rising student demand for online course offerings. However, amid these positive findings, many continue to be critical of online modes of learning. According to a 2012 survey, only 30% of college faculty are accepting of online courses, an approval rating only two percentage points higher than in 2002 (Allen and Seaman 2013). Although the factors influencing faculty support for online modes of course delivery are not fully understood, the literature on online learning has raised at least three major concerns.
First, there are concerns about lower levels of student persistence in online courses. Although this problem received considerable publicity recently due to a report showing an overall completion rate of only 4% among one million Coursera MOOC registrants (Perna et al. 2013), a number of studies had already documented low completion rates in ordinary (neither "massive" nor "open") online courses relative to face-to-face courses (e.g., Botsch and Botsch 2012; Jaggars 2012; Lee and Choi 2011; McLaren 2004). This has presented a double challenge to studies claiming to show equal or greater performance by students in online versus face-to-face courses. On the one hand, it calls into question the validity of such findings, since at least some of the observed differences may be due to the fact that academically weaker students disproportionately drop out of online groups, artificially driving up those groups' measured performances. On the other hand, it raises the normatively troubling possibility that increased reliance on online modes of course delivery could lead to increased time to degree, increased education costs, and/or decreased graduation rates.
A second, though related, concern about online course delivery is that the students disadvantaged by online courses are also those academically disadvantaged in other ways, which suggests that greater reliance upon online learning could exacerbate existing performance gaps. For example, based on analysis of performance by 40,000 students in almost 500,000 community and technical college courses in Washington State, Xu and Jaggars (2014) find that students typically perform worse in online than in face-to-face courses, but that the gap in performance between the two formats is greatest among males, African Americans, younger students, and those with lower incoming grade point averages (GPAs). Since the nature and extent of the performance gap observed among community and technical college students differ starkly from what most studies conducted at four-year colleges have found, Joyce et al. (2014) speculate, and provide evidence suggesting, that online courses present more of a challenge to students who commute and work. Thus, although a widely touted benefit of online learning is its ability to flexibly accommodate busy student schedules, it appears that students with substantial time commitments off campus are among those least likely to succeed in online courses.
Finally, a third concern raised in the literature on online learning is the belief that an online course requires more time to develop and administer than a face-to-face course of comparable quality. It is widely assumed that, to be effective, online courses require significant expenditures of time developing rich course materials (e.g., high-quality video lectures, quizzes, simulations, and sophisticated adaptive feedback systems), providing extensive individualized feedback on essays or projects, and/or facilitating deep, meaningful online discussions or other virtual social interactions (Ariely 2013; Botsch and Botsch 2012; Bowen et al. 2014). Thus, the argument goes, teachers who attempt to expand the proportion of their courses offered online must either expend more time or reduce their teaching effectiveness. For example, despite finding outcomes in their online courses to be equal to or better than those in face-to-face courses, Botsch and Botsch (2012) have opted not to increase their number of online offerings on the grounds that their own workloads are greater for online courses and that reducing those workloads by "relying more on standardized testing and less on written essays and … online discussions" with individualized instructor feedback would "sacrifice too much quality" (498). Behavioral economist Dan Ariely (2013) expressed a similar concern when reflecting on his experience teaching a Coursera MOOC. In his judgment, the extensive time he and his team spent producing video lectures (150 hours of production per hour of video lecture) was "worth it" because if they had spent less time "the quality would have suffered and the learning experience would have taken a toll" (Ariely 2013, para 20).
In ways explained below, our study seeks to directly address the first and third concerns about online learning while shedding light on the second. Furthermore, we seek to address a question of unique importance to political science education in general and to introductory American Government courses in particular. As discussed above, introductory American Government courses are widely understood to have a civic educational purpose (Beaumont et al. 2006; Bernstein 2008; Botsch and Botsch 2012; Huerta and Jozwiak 2008; McBeth and Robison 2012; Pollock and Wilson 2002). Thus, we also seek to shed light on the potential for improving civic education by comparing the effects of the factors discussed above not only on academic performance but also on political knowledge, attitudes, and behaviors.

Research design

To evaluate these research questions, we implemented a quasi-experimental study. The study involved concurrently offering POLS 1101: Introduction to American Government, a core course required of all students who graduate from a large public university in the Southeast, in different formats. On average, the department teaches nearly 4,000 students across 40 large sections of this course each year, with class sizes ranging from 75 to 200 students. We randomly selected 13 sections offered in the spring semester of 2014 for inclusion in our study; each employed one of four different approaches to teaching the subject matter. Table 1 lists the four learning approaches (i.e., "conditions") in our study, what constituted the "treatment" in each condition, and the number of sections, instructors, graduate assistants, and students in each condition. All instructors scheduled to teach the course in the spring of 2014 met prior to the beginning of the semester to agree upon an order for teaching the chapters, the primary textbook to use in each quasi-experimental condition, the number of tests to administer throughout the semester, and a set of "common core" multiple-choice objective knowledge items that would appear on each of three unit tests administered across all sections. In total, the study involved coordinating 10 faculty members and nine graduate students in structuring the curriculum so that teaching was as similar as possible within each condition listed in Table 1 and differed in key ways across conditions; 1,524 students were enrolled in the course at the beginning of the semester. We next describe the key features that constitute the treatment in each condition.

Table 1. Study design and conditions.

We label the first condition in Table 1 "traditional" (n = 298); it included three sections taught by two full-time faculty instructors. Students in these sections purchased a hard-copy textbook, We the People, 10th edition (Patterson 2012), and attended two 75-minute lectures per week. Note that the traditional condition we employed was completely devoid of online elements, which distinguishes it from recent studies demonstrating slightly greater learning gains from time spent in the classroom (Figlio, Rush, and Yin 2013; Joyce et al. 2014).
We label the second condition in Table 1 "breakout" (n = 438); it included four sections taught by three full-time faculty members and nine graduate teaching assistants. Students in these sections purchased the same textbook as those who signed up for the traditional sections. However, rather than attending two 75-minute lectures per week, these students attended one 75-minute lecture and one 75-minute breakout discussion session led by a graduate teaching assistant. As with our traditional condition, this one had no online elements.
We label the third condition listed in Table 1 "blended" (n = 507). This condition combined an online component, an interactive online textbook (Central Ideas in American Government, 4th edition; J. Evans and Michaud 2012), with two 75-minute full-class meetings involving a blend of lectures, discussions, and small-group activities. This condition consisted of four sections taught by four graduate students who had been trained to employ the blended approach in a graduate seminar the previous summer. The online component consisted solely of the online textbook, which offered the following features: accessibility from computers and mobile devices, study questions (with immediate scoring) at the bottom of every page, hyperlinked footnotes, glossary terms defined by hovering over words, very basic discussion boards on which students can offer brief responses to open-ended questions and view other students' responses, and a small number of embedded opinion polls per chapter through which students can see how their views compare to those of other students in the course.
We label the fourth condition in Table 1 "online" (n = 281); it consisted of two sections taught by one part-time faculty member using the same online textbook as in the blended condition. The online aspect of this condition differed from that of the blended condition in only one respect: students were required to complete assignments built into each chapter of the textbook. These assignments can involve reading original texts, watching videos, investigating external Web sites, or participating in other similar activities designed to provide deeper engagement with the concepts presented in the main body of the textbook. Assignments were automatically graded, and instructors did not provide feedback. Thus, the online elements of even this condition required no development time and minimal implementation time, which gives us additional leverage for testing the assumption that quality online teaching is necessarily a time-intensive enterprise. It should be noted that the online condition deviated from an exclusively online course in two ways. First, students were able to meet with the instructor for a live in-class review session before each exam. Second, as discussed below, students took proctored exams in a classroom.
An advantage of relying on the interactive online textbook for our blended and online conditions is that it allowed us to test the assumption that effective online teaching requires extensive time developing (e.g., Ariely 2013) and/or implementing (e.g., Botsch and Botsch 2012) online course materials. In our case, it is especially important to test this assumption because we rely heavily on graduate students to teach our introductory American Government courses, and their schedules and experience levels make it practically impossible for them to develop their own quality online learning materials.

Measures

We conducted a pretest that included 18 factual knowledge ("common core") questions about American Government (the complete battery of questions is reported in the appendix) in the second week of the semester for all conditions. We informed students in all participating sections that the pretest was completely anonymous and not for credit, and that they were encouraged to participate but could abstain. For students enrolled in the online condition, the pretest was administered via the Internet with a 20-minute time limit for completing all questions; all other sections completed the pretest in a face-to-face proctored environment. Instructors across all sections re-administered these same questions on three unit exams throughout the semester. We did not collect any information that would allow us to identify individual students on the pre- and posttest objective knowledge measures. We therefore focus exclusively on learning outcomes at the section/condition level for the objective knowledge results reported below.
We also administered an anonymous and voluntary exit survey in the last week of the semester. The exit survey allowed us to measure political interest, engagement, efficacy, and knowledge, as well as overall ratings of the textbook and learning experience across the four conditions. We included demographic questions on the exit survey that allow us to control for differences in age, gender, ethnicity, and other individual-level factors that might correlate with the outcome measures. The political attitude and surveillance knowledge questions on the exit survey allow us to build upon previous research on the effects of online versus face-to-face American Government courses on civic educational outcomes (Botsch and Botsch 2012; Pollock and Wilson 2002) by being the first to compare outcomes across multiple types of face-to-face and online conditions. We note that our study is larger than most similar previous studies (N = 1,524) and employs a broader range of outcome measures, including objective core knowledge about American Government, surveillance knowledge about key current political officeholders, measures of political interest and behavioral intentions to participate in politics, and students' evaluations of a hard-copy versus Web-based textbook.

Results

We begin by reporting the results from tracking performance across conditions on a battery of 18 "common core" questions. Over the course of the semester, the common core questions originally included on the pretest (see appendix) were placed on the exams of the unit to which each question corresponded for all instructors and all conditions. Table 2 reports the average percentage of correct responses across sections within each condition on the pre- and posttest, as well as the percentage-point increase when comparing performance on these same questions within each condition over time.
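To make the computations behind Table 2 concrete, the sketch below (ours, not the authors') aggregates section-level percent-correct scores into condition means and pre-to-post differences. The section scores are placeholders that merely mimic the pattern of results, not the study's data.

```python
# Minimal sketch of the Table 2 quantities: average percent correct on the
# 18 common core items per section, aggregated by condition, with the
# pre-to-post difference. All scores below are hypothetical placeholders.
import pandas as pd

sections = pd.DataFrame({
    "condition": ["traditional", "traditional", "breakout", "breakout",
                  "blended", "blended", "online", "online"],
    "pre_pct_correct":  [41, 42, 41, 43, 42, 40, 55, 55],
    "post_pct_correct": [66, 65, 63, 66, 73, 71, 79, 79],
})

table2 = (sections
          .groupby("condition")[["pre_pct_correct", "post_pct_correct"]]
          .mean())
table2["difference"] = table2["post_pct_correct"] - table2["pre_pct_correct"]
print(table2.round(1))
```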

Table 2. Common core knowledge test (% correct).

As shown in Table 2, students generally did not perform well on the pretest, based on the average percentage of correct responses across conditions. However, students in the online condition significantly outperformed students in all other conditions on the pretest, answering 55% of the questions correctly on average compared to 41–42% in all other conditions, F(1, 70) = 6.99, p < .01 (see Appendix Table A2 for additional significance tests from analysis of variance [ANOVA] estimations, which account for multiple comparisons across conditions). We attribute the significantly higher pretest scores among students enrolled in the online sections, at least in part, to differences in the context in which the questions were answered: Students in the online sections (despite the time limit) were able to look up answers to the objective knowledge questions. This led us to require all students enrolled in the online sections of the course to come to campus for in-person proctored examinations for all subsequent tests.
We also find, as Table 2 shows, that the online sections earned the highest marks on the posttest, answering 79% correctly on average. This is significantly better than performance in the other three conditions, F(1, 70) = 2.81, p ≤ .10 (see Table A2 in the appendix for the results of all statistical tests from ANOVA estimations that account for multiple comparisons across experimental conditions). Conditions that used the online textbook (online and blended) performed significantly better than conditions in which a traditional hard-copy textbook was used (traditional and breakout), F(1, 70) = 3.30, p ≤ .10. It is worth noting that, although the change in learning (i.e., the "Difference" column in Table 2) from the pre- to the posttest is smallest in the online sections (+24%), the gain in learning is likely depressed for the online condition because the pretest results were likely inflated (for the reasons discussed above). Nonetheless, pre- and posttest objective knowledge gains on the common core items are positive and highly significant across all conditions (see Table A2 in the appendix).
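For readers who want to reproduce this kind of test, the sketch below runs a one-group-versus-rest contrast with scipy. The F(1, 70) statistics above are consistent with treating the 18 common core items in each of the 4 conditions as 72 observations, but that unit of analysis is our inference from the degrees of freedom, and the scores below are simulated placeholders.

```python
# A hedged sketch of an online-versus-all-others ANOVA contrast on
# item-level percent-correct scores. With 18 + 54 = 72 observations and
# two groups, scipy's one-way ANOVA yields F with (1, 70) degrees of
# freedom, matching the tests reported in the text. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
online_items = rng.normal(79, 8, size=18)   # hypothetical online item scores
other_items = rng.normal(68, 8, size=54)    # 18 items x 3 other conditions

f_stat, p_value = stats.f_oneway(online_items, other_items)
df_error = len(online_items) + len(other_items) - 2
print(f"F(1, {df_error}) = {f_stat:.2f}, p = {p_value:.3f}")
```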

Exit survey

In addition to tracking aggregate performance on the common core questions, as previously mentioned, we also asked students across all conditions to complete an exit survey that included several measures allowing us to test for learning and engagement effects. The exit survey also included demographic questions that serve as control variables in the statistical analyses reported below. A key set of dependent variables on the exit survey comprised four items tapping knowledge of current political officeholders at both the federal and state levels. Students were asked four open-ended knowledge questions on the exit survey to assess whether they could name the current Speaker of the U.S. House of Representatives, the current U.S. Senate Majority Leader, the current Governor of Georgia, and the current Chief Justice of the U.S. Supreme Court. We created a single scaled measure of surveillance knowledge based on responses to these items (α = .72).
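To illustrate how such a scale and its reliability can be computed, here is a small sketch implementing Cronbach's alpha over four binary (correct/incorrect) officeholder items. The respondent data are simulated; this is our illustration, not the authors' code.

```python
# Sketch: build a scaled surveillance-knowledge measure from four 0/1
# items and compute Cronbach's alpha (the article reports alpha = .72).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of 0/1 scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
ability = rng.normal(size=500)                      # hypothetical respondents
noise = rng.normal(size=(500, 4))
items = (ability[:, None] + noise > 0).astype(int)  # four correlated 0/1 items

scale = items.mean(axis=1)  # each student's share of officeholders named
print(f"alpha = {cronbach_alpha(items):.2f}, scale mean = {scale.mean():.2f}")
```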
We assessed differences in levels of political engagement across conditions in our study with three primary dependent measures. First, we asked students the extent to which they agreed or disagreed with the statement "This course made me want to talk about politics outside of the classroom" on a 1- to 7-point fully labeled scale from "strongly disagree" to "strongly agree." We also asked students how important they believe it is to vote and engage in other forms of political action on a 1- to 7-point fully labeled scale ranging from "extremely unimportant" to "extremely important." Finally, we included a question asking for students' subjective evaluation of the textbook. Specifically, students were asked, "How beneficial do you think the textbook used in this course was for your learning experience?" on a 1- to 7-point fully labeled scale from "not beneficial at all" to "very beneficial." Recall that students in both the blended and online conditions used the online textbook with a variety of interactive features and that, additionally, students in the latter condition were required to complete assignments embedded within the online textbook's chapters. Students enrolled in both the traditional and breakout conditions read the aforementioned hard-copy textbook without using any online features.
We evaluate the effect of the different pedagogical conditions on surveillance knowledge, political discussion, importance of participation, and textbook evaluation by regressing each dependent variable on the experimental conditions, omitting the traditional condition as the baseline in each model and controlling for individual-level factors including age, sex, party identification, and ethnicity. We present the results in Table 3. (In Table A1 in the appendix, we report the mean, standard deviation, and n for each measure across conditions.)
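The sketch below shows one plausible implementation of this specification. The authors estimated their models in Stata; this Python analogue, its variable names, the coding of the controls, and the data are all our assumptions.

```python
# Sketch of the assumed Table 3 specification: regress an exit-survey
# outcome on condition dummies (traditional omitted as baseline) plus
# demographic controls. All data below are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 800
df = pd.DataFrame({
    "surveillance_knowledge": rng.integers(0, 5, n),  # 0-4 officeholders named
    "condition": rng.choice(["traditional", "breakout", "blended", "online"], n),
    "age": rng.integers(18, 30, n),
    "female": rng.integers(0, 2, n),
    "party_id": rng.integers(1, 8, n),                # 7-point party ID
    "white": rng.integers(0, 2, n),
})
dummies = pd.get_dummies(df["condition"]).astype(int)  # traditional = baseline
df[["breakout", "blended", "online"]] = dummies[["breakout", "blended", "online"]]

model = smf.ols("surveillance_knowledge ~ breakout + blended + online"
                " + age + female + party_id + white", data=df).fit()
print(model.params)
# Joint significance of the condition dummies (the F tests below Table 3):
print(model.f_test("breakout = 0, blended = 0, online = 0"))
```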

Table 3. Determinants of political knowledge, engagement, and textbook satisfaction.

We find a number of important differences in comparing online and face-to-face approaches to teaching Introduction to American Government on our key outcome measures. The first column in Table 3 reports the results from a regression estimating the determinants of surveillance knowledge on our condition variables, controlling for demographic factors. We find that students in the online and blended conditions were significantly more knowledgeable about current public officials at the national and state levels relative to students taking the course in the traditional sections (p < .01, Column 1, Table 3). This mirrors the results we observed in evaluating performance on the objective knowledge common core items (see Table 2). Moreover, students enrolled in the online sections were significantly more likely to discuss politics outside of the classroom (p < .01, Column 2, Table 3) and to express that it is important to vote and participate in politics (p < .01, Column 3, Table 3) relative to students enrolled in the traditional sections of the course. The fourth column in Table 3 shows that students in the online and blended conditions, both of which employed a Web-based text, reported significantly higher levels of satisfaction with the text compared to students in the traditional (baseline) sections, where a standard textbook was adopted (p < .01, Column 4, Table 3). We conducted joint significance tests for the impact of the experimental conditions (relative to the baseline condition) on each dependent variable reported in Table 3 and find that we can reject the null hypothesis that the condition variables are jointly insignificant in all models (see the F statistics for each model below Table 3).
We used Clarify in Stata to estimate predicted values for each dependent variable across all conditions, with the covariates included in Table 3 held at their mean values in each regression (Tomz, Wittenberg, and King 2003). To illustrate the substantive magnitude of the results presented above, Figure 1 shows the percent deviation of each variable's predicted value from that variable's predicted mean across all conditions. The predicted value for the online sections is higher than the predicted mean value for every variable; for example, in the case of surveillance knowledge, the predicted value for the online condition is 45% above the mean across all conditions. Notably, the traditional and breakout sections were below the predicted mean across all four dependent measures. The far right-hand column in Figure 1 shows sizable differences in evaluations of the textbook between the sections that used the Web-based text (online and blended) and the sections where a hard-copy textbook was assigned (breakout and traditional).
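Clarify is a Stata add-on; as a rough Python analogue (our assumption, reusing the hypothetical model and df from the regression sketch above), one can predict each outcome for every condition at the covariate means and express each prediction as a percent deviation from the mean prediction across conditions, as Figure 1 does.

```python
# Sketch of the Figure 1 quantities: predictions per condition at the
# covariate means, expressed as percent deviation from the mean predicted
# value across conditions. Continues the simulated example above.
conditions = ["traditional", "breakout", "blended", "online"]
at_means = df[["age", "female", "party_id", "white"]].mean().to_dict()
profiles = pd.DataFrame(
    [{**at_means,
      "breakout": int(c == "breakout"),
      "blended": int(c == "blended"),
      "online": int(c == "online")} for c in conditions],
    index=conditions,
)
pred = model.predict(profiles)
pct_deviation = 100 * (pred - pred.mean()) / pred.mean()
print(pct_deviation.round(1))
```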
Figure 1. Predicted values (percent deviation from variable mean). We used Clarify (Tomz, Wittenberg, and King 2003) to estimate predicted values for each dependent variable across all conditions with covariates included in Table 3 for each regression held at their mean value. This graph shows the percent deviation of each variable’s predicted value from that variable’s predicted mean value across all conditions.

Discussion and conclusion

With a few important caveats, we are confident that these results provide strong evidence for the effectiveness of the online and blended approaches to teaching Introduction to American Government. Within our study, the fully online approach was at least as effective as, and in some respects apparently more effective than, the other approaches at promoting student learning, political knowledge, and positive civic attitudes and behaviors. Furthermore, students in the blended condition exhibited the highest gains in course content knowledge and significantly outperformed students in the traditional and breakout conditions in surveillance knowledge. These findings are consistent with those of previous studies focused on the academic and civic educational effects of online Introduction to American Government courses (Botsch and Botsch 2012; Dolan 2008; Wilson, Pollock, and Hamann 2006).
There are at least two aspects of our blended and online conditions that render their high performance somewhat surprising. First and foremost, as discussed above, neither the blended nor the fully online condition required any time developing course materials, and both involved little time implementing and overseeing the online components of the course. Although much of the discussion of MOOCs and flipped classrooms centers on the purported benefits of video lectures and/or tutorials, our online conditions fared well without making use of either, which suggests that such materials are not a necessary element of an effective online learning environment. Given that other studies seem to demonstrate that they also are insufficient (H. K. Evans 2014; Figlio, Rush, and Yin 2013), it appears that the expenditure of time required to produce high-quality video lectures/tutorials (Ariely 2013) may not (always) be a cost worth incurring. Similarly, we find that positive outcomes are possible in online courses that do not involve instructors actively moderating discussions (Botsch and Botsch 2012; Trudeau 2005), providing individualized feedback, and/or personally grading written assignments or projects (Botsch and Botsch 2012; Hughes 2007). Thus, investing time in these activities also is clearly not necessary for effective online teaching, although it is certainly possible that effectiveness would be even higher with them.
Second, it is noteworthy that our two online conditions relied on less experienced and less expensive instructors than did our two face-to-face conditions. All of our blended sections were taught by graduate students, and the online sections were taught by a part-time faculty member who, although having a little experience teaching Introduction to American Government, had never before taught an online course. By contrast, all of the sections in our two face-to-face conditions were taught by full-time faculty with expertise in American Government, most of whom had extensive experience teaching Introduction to American Government. On the one hand, this gives us confidence that the superior performance of our online conditions was due to the modes of course delivery and not to systematic differences in instructor quality. On the other hand, it points to at least two potential practical advantages for departments. First, it suggests that offering online or blended courses using off-the-shelf online materials can be a cost-effective strategy for providing quality instruction to large numbers of students in introductory courses. Second, it appears that by designing courses modeled after our blended condition, departments can provide graduate students with meaningful classroom teaching experience without compromising quality.
This study also suggests that courses modeled after our breakout condition are neither the most cost-effective way to promote undergraduate student learning nor the best means of providing graduate students with classroom teaching experience. On every measured outcome except drop rates, we observed lower scores in the breakout condition than in the other three conditions. Although this is surprising in light of Pollock, Hamann, and Wilson's (2011) intuitive finding that small-group discussions promote student engagement, satisfaction, and learning, it is consistent with the results of a recent study conducted at another large public university (Blackstone and Oldmixon 2015). Instead, consistent with Trudeau's (2005) findings, our study suggests that the blended approach, employing (in this case) simple unmoderated online discussions followed by full-class discussions, is an effective way to enjoy the advantages of class discussions without the logistical difficulties and resource intensiveness associated with reliance on breakout discussion sessions.
Of course, not all "online" or "blended" courses are alike. Much more research is needed before we can know exactly what it was about our online and blended conditions that promoted student learning. We thus echo Bowen et al. (2014) in calling for more research into the specific kinds of online features that yield the best results. To be sure, both the subjective student evaluations and the objective data suggest that one or more features of the interactive online textbook are likely part of the explanation for the higher performance of our online conditions. Assuming this is the case, future research will be required to identify exactly what makes the interactive online textbook effective. Although this is only speculative, we think a plausible hypothesis is that the study questions (which are graded instantly and allow for retakes when missed) at the bottom of each page are the most influential feature. We base this conjecture on recent research showing that frequent testing/questioning significantly enhances learning in general (Karpicke and Blunt 2011; Pennebaker, Gosling, and Ferrell 2013; Schmidmaier et al. 2011) and online learning in particular (Szpunar, Khan, and Schacter 2013). But, again, we must leave an assessment of this and other possible hypotheses, including hypotheses about the possible effects of interactive textbooks on civic educational outcomes, for future studies.
We conclude by adding two important caveats to the above conclusions. First, despite the apparent advantages of the online condition in the aggregate, it is possible that some types of students were systematically disadvantaged by that approach (Joyce et al. 2014; Xu and Jaggars 2014). The drop/withdraw rate for students enrolled in sections of the online condition was 9.96%, compared to 5.37% in the traditional condition, 3.41% in the breakout condition, and 4.53% in the blended condition (see Table A3 in the appendix for the calculated drop/withdraw rate for each condition). This corroborates the finding in previous studies that persistence is lower in online courses than in blended and purely face-to-face courses (e.g., Botsch and Botsch 2012; Jaggars 2012; Lee and Choi 2011; McLaren 2004) and suggests that some students were more likely than others to find the purely online environment hospitable. We must leave open for future research the question of whether this gap can be reduced, perhaps through targeted interventions or other changes in how the course is taught online, or whether, for at least a certain percentage of students, there is no adequate substitute for face-to-face courses.
Finally, it is important to note that, although we have considered a broader set of dependent variables than most previous studies, there are still important learning outcomes that we did not measure or evaluate. When we speak above of the apparent ability to achieve outcomes of equal or greater "quality" without the time commitment or faculty experience typically assumed necessary for quality online courses, the potential limitations of our measures of quality should be kept in mind. Perhaps most importantly, we did not seek to measure students' development of higher order thinking skills or their ability to critically analyze political issues and ideas, so our study does not assess the impact of the mode of course delivery on such outcomes. By extension, this means we may be underestimating the opportunity cost of relying on courses that do not involve experienced faculty actively moderating online discussions, facilitating simulations, providing extensive individualized feedback, and/or personally grading written assignments or projects. It is possible that our study is limited to demonstrating the comparative effectiveness of online and face-to-face courses at achieving basic academic and civic educational outcomes. A future study focused on higher order learning outcomes could well reach different conclusions about the effectiveness of different modes of course delivery, and such a study might even find that the level of academic rigor required in a course matters more than whether it is offered online, face-to-face, or as a hybrid of the two.

Appendix

Table A1. Sample descriptive statistics from exit survey.

Table A2. Differences between pre- and posttest common core averages using ANOVA.

Table A3. Drop rates across conditions.

Supplemental material

upse_a_1090905_sm3278.docx


Notes


Students self-selected into sections of American Government that had been predetermined and randomly assigned to employ different pedagogies. However, the only students who were aware of this were those who signed up to take the course online. We report the demographic composition of the students in each condition in Appendix Table A1.

The time spent developing and administering the online element of this blended condition was considerably less than in most previous studies on the effects of online courses. For example, no time was spent filming or editing video lectures (Figlio, Rush, and Yin 2013), moderating or grading online discussions (Botsch and Botsch 2012; Trudeau 2005), or making, grading, or providing feedback on assignments (Botsch and Botsch 2012; Hughes 2007). Indeed, the only time spent on any kind of individualized online feedback or instruction was through answering student questions over e-mail.

We chose the second week of the semester because the first week is an “Add/Drop” period in which students often move in and out of specific sections. We wanted to hold as many students as possible “constant” in comparing the pre- and posttest performance measures on these questions and at the same time administer the pretest as early in the semester as possible before too much content from the course had been encountered.

It is possible the differences are partly due to actual higher levels of prior knowledge in the online conditions. However, we doubt this was a significant influence. The online students also performed better than students (not a part of our study) in a special honors section that took the pretest under proctored conditions. We are confident that the honors students would be more likely than those who took the online course to enter with disproportionately higher levels of prior knowledge.

The correct answers to these questions, when the exit survey was administered in May 2014, were John Boehner, Harry Reid, Nathan Deal, and John Roberts, respectively.

Of course, there are deeper issues creating systematic disadvantages for some students over others. Two important factors are the rising costs of higher education, which force lower income students to work full time while attempting to maintain full class loads, and systematic differences in preparedness for college emanating from vast disparities in the quality of primary and secondary educational opportunities. In light of systemic forces such as these, it seems unlikely that course design is, or could be, a significant cause of, or solution to, the problem.


