How Does Online Computer Programming Instruction Compare With Face-to-Face?

Listen and learn: Research paper
Lecture presentation


Tuesday, December 1, 12:45–1:30 pm PST (Pacific Standard Time)
Presentation 2 of 3
Other presentations:
Building Awareness of Teens' Lived Experience Through Game Design
The Impact of a GenCyber Camp on Inservice Teachers’ TPACK

Dr. Albert Ritzhaupt  
Ning Yang  
Zhen Xu  
Maya Israel  
Dr. Karthikeyan Umapathy  

The purpose of this study is to examine the existing literature comparing online and face-to-face computer programming learning experiences on student learning outcomes. Specifically, we employ meta-analysis to examine how online computer programming instruction compares with face-to-face instruction.

Audience: Curriculum/district specialists, Teacher education/higher ed faculty, Technology coordinators/facilitators
Attendee devices: Devices not needed
Topic: Computer science & computational thinking
Grade level: Community college/university
Subject area: Computer science, Higher education
ISTE Standards: For Educators:
Designer
  • Use technology to create, adapt and personalize learning experiences that foster independent learning and accommodate learner differences and needs.
  • Design authentic learning activities that align with content area standards and use digital tools and resources to maximize active, deep learning.
  • Explore and apply instructional design principles to create innovative digital learning environments that engage and support learning.
Additional detail: ISTE author presentation

Proposal summary

Framework

With the increasing demand for computer science (CS) education opportunities in both K-12 and higher education, there is a clear need to better understand the nuances associated with instructional modality. Computer science enrollment is increasing within the United States, with more students in K-12 schools completing Advanced Placement coursework (EdScoop, 2018) and more students majoring in computer science within institutions of higher education (NSF, 2018). Since former President Obama's announcement of the Computer Science for All initiative in 2016, many school districts have begun mandating that CS be a graduation requirement or, minimally, that schools offer CS coursework to students. Another major initiative was the creation and dissemination of the new AP CS Principles (CSP) course, which was designed to make CS coursework accessible to all learners, particularly groups historically marginalized in CS such as women and minorities. With more than 76,000 students already enrolled in the course this academic year, the AP CSP course is meeting its objective of broadening participation in CS coursework (EdScoop, 2018). Consequently, these initiatives have increased student interest in CS as a viable undergraduate degree option.

While the excitement surrounding CS education continues to grow, there are still major barriers to the completion of undergraduate degrees in CS, namely computer programming courses. Computer programming spans the entire CS curriculum and has thus been identified as a critical skill among CS students. Most of the students in these courses can be classified as novice computer programmers (i.e., students with limited experience in computer programming), and they often struggle with a wide range of computer programming topics, from memorizing complex syntax and commands, to learning and applying problem-solving strategies, to creating and using abstract data types (Lahtinen, Ala-Mutka, & Järvinen, 2005). Novice computer programmers experience a range of affective states when learning computer programming, including frustration, confusion, and boredom (Bosch & D’Mello, 2017). These affective states can adversely influence students' attitudes toward computer programming and, ultimately, impact student achievement and persistence. Retention rates in introductory programming courses in institutions of higher education are dismal at approximately 67%, a figure that has been stagnant for nearly a decade despite advancements in technology and pedagogical strategies (Bennedsen & Caspersen, 2007; Watson & Li, 2014).

Methods

Literature Search Procedure
In systematically searching for literature, we initially searched five databases known for publishing computer science education artifacts: ACM Digital Library, IEEE Xplore Digital Library, LearnTechLib, ProQuest, and ScienceDirect. Since online learning can be expressed with many terms, we used the search string ("online" OR "online learning" OR "online education" OR "distance" OR "virtual" OR "web based" OR "web-assisted" OR "Web-Based Instruction" OR "WBI" OR "web-based" OR "online instruction") AND ("face to face" OR "face-to-face" OR "traditional" OR "off-line education") AND ("learning" OR "student" OR "learner" OR "user" OR "participant") AND ("computer science" OR "computer programming" OR "CS" OR "coding"). Through this initial search, we retrieved 2,640 articles. In a later phase, we also searched the reference lists of included studies for referrals to other relevant studies, as recommended by Cooper (2007), and found an additional 30 articles.
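To illustrate the Boolean logic of this search string, the following Python sketch applies the same AND-of-ORs structure to a candidate record; the record fields ("title", "abstract") are illustrative assumptions and do not reflect any particular database's export format.

# Sketch of the Boolean search logic; the term lists mirror the search string above.
ONLINE_TERMS = ["online", "online learning", "online education", "distance",
                "virtual", "web based", "web-assisted", "web-based instruction",
                "wbi", "web-based", "online instruction"]
FACE_TO_FACE_TERMS = ["face to face", "face-to-face", "traditional", "off-line education"]
LEARNER_TERMS = ["learning", "student", "learner", "user", "participant"]
CS_TERMS = ["computer science", "computer programming", "cs", "coding"]

def matches_search_string(record: dict) -> bool:
    """Return True if a record's title/abstract satisfies all four OR-groups."""
    text = (record.get("title", "") + " " + record.get("abstract", "")).lower()
    groups = [ONLINE_TERMS, FACE_TO_FACE_TERMS, LEARNER_TERMS, CS_TERMS]
    return all(any(term in text for term in group) for group in groups)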

Inclusion and Exclusion Criteria
To be included in this meta-analysis, studies had to (1) be published between January 1, 2000 and 2019; (2) compare online instruction and face-to-face instruction as a between-subjects condition; (3) use a quasi-experimental or experimental research design; (4) focus on students' cognitive, affective, and behavioral learning outcomes; and (5) be reported in English. This first stage resulted in 38 articles. In the second stage, we filtered out articles that did not specify the programming component of the course, were not empirical studies, or did not report the statistical outcomes needed to calculate effect sizes. In total, 9 articles were retained in our final sample of publications.
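As a minimal sketch only (the screening was performed by reading each article, and the field names below are hypothetical), the five criteria can be expressed as a simple filter in Python:

# Hypothetical record fields used purely to illustrate the five inclusion criteria.
def meets_inclusion_criteria(record: dict) -> bool:
    return (2000 <= record["year"] <= 2019                                    # (1) publication window
            and record["comparison"] == "online_vs_face_to_face"              # (2) between-subjects comparison
            and record["design"] in {"experimental", "quasi-experimental"}    # (3) research design
            and record["outcome"] in {"cognitive", "affective", "behavioral"} # (4) learning outcomes
            and record["language"] == "English")                              # (5) reported in English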

Coding and Extraction Procedures
We carefully read the 9 articles, extracted information about each study, and extracted the outcome data (e.g., M, SD, N) needed to calculate effect sizes. To characterize the online computer programming courses and provide direction for future studies, we coded information extracted from the 9 manuscripts as moderators in this study. We collected information about author details, year of publication, research design, course modality (asynchronous vs. synchronous), educational level (elementary, middle school, high school, undergraduate, graduate), course duration, programming level (introductory, intermediate, upper level), learning environment (e.g., learning management system), and programming language.
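A per-study coding sheet along these lines could be represented as follows; the field names are our own shorthand for the moderators listed above, not the authors' actual coding instrument.

from dataclasses import dataclass
from typing import Optional

@dataclass
class CodedStudy:
    """Sketch of a coding sheet for one included study (illustrative field names)."""
    authors: str
    year: int
    design: str                           # experimental vs. quasi-experimental
    modality: str                         # "asynchronous" or "synchronous"
    educational_level: str                # elementary ... graduate
    course_duration_weeks: Optional[float]
    programming_level: str                # introductory, intermediate, upper level
    learning_environment: Optional[str]   # e.g., learning management system used
    programming_language: Optional[str]
    # Statistics extracted per condition to compute effect sizes
    mean_online: float
    sd_online: float
    n_online: int
    mean_f2f: float
    sd_f2f: float
    n_f2f: int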

Effect Size Extraction and Calculation
Comprehensive Meta-Analysis (CMA) version 3.0 was used to calculate the effect sizes for the publications identified through our systematic procedures, and SPSS version 25.0 was used to descriptively analyze our dataset. In calculating the effect sizes, we computed only one effect size per study for each of the learning, affect, and retention domains. As noted by Lipsey and Wilson (2001), when a study contributes more than one effect size, it can lead to statistical dependence, resulting in biased overall effect sizes. As with any meta-analysis, effect sizes must be standardized before running the analysis. We chose Hedges' g as the standardized measure of effect size for continuous variables because Hedges' g adjusts for small-sample bias better than Cohen's d (Borenstein, Hedges, Higgins, & Rothstein, 2011).
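For reference, in its standard form Hedges' g applies a small-sample correction factor J to the standardized mean difference between the two conditions (Borenstein et al., 2011):

g = J \cdot \frac{\bar{X}_{1} - \bar{X}_{2}}{S_{pooled}}, \qquad
S_{pooled} = \sqrt{\frac{(n_{1}-1)S_{1}^{2} + (n_{2}-1)S_{2}^{2}}{n_{1}+n_{2}-2}}, \qquad
J \approx 1 - \frac{3}{4(n_{1}+n_{2}-2)-1}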

All analyses were conducted under random-effects models with α = .05. Borenstein et al. (2011) suggest that random-effects models are more appropriate when the effect sizes of the studies included in the meta-analysis differ from each other. Since the outcome measures and environments differed dramatically from study to study, we chose the random-effects model for this study. An effect size of 0.2 is considered small, 0.5 medium, and 0.8 large (Cohen, 1992). To account for the possibility that the current meta-analysis overlooked non-significant results, we calculated the fail-safe N (Rosenthal, 1979), which is the number of unpublished studies needed to change the effect size estimate to non-significant. We also calculated Orwin's fail-safe N to determine the number of missing null studies required to bring the existing effect size to a trivial level (Orwin, 1983). Publication bias was evaluated with the fail-safe N procedure, Orwin's fail-safe N test, and visual inspection of the funnel plot.
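In their standard forms (Rosenthal, 1979; Orwin, 1983), the two fail-safe statistics are, for k included studies with study-level z-scores Z_i, mean observed effect size \bar{d}, and criterion ("trivial") effect size d_c:

N_{Rosenthal} = \frac{\left(\sum_{i=1}^{k} Z_{i}\right)^{2}}{z_{\alpha}^{2}} - k
\quad (z_{\alpha} = 1.645 \text{ for one-tailed } \alpha = .05), \qquad
N_{Orwin} = \frac{k(\bar{d} - d_{c})}{d_{c}}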

Results

Due to space constraints, we show only the forest plot for the cognitive domain in this proposal. The full presentation will include the results for each domain under consideration.

Learning Domain
The effects of online versus face-to-face computer programming learning environments on student learning outcomes were examined with five independent effect sizes. The total sample size for the model was N = 618 students, who either experienced online computer programming instruction as the treatment condition (n = 231) or face-to-face computer programming instruction as the control condition (n = 387). Effect sizes were computed using Hedges' g. The overall effect size under a random-effects model is g = 0.245, a small effect size (Cohen, 1992). This overall effect size was statistically significant (Z = 2.859, p = .004), with a 95% confidence interval of 0.077 to 0.413 that does not overlap zero.
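As a consistency check (not part of the original CMA output), the reported interval can be recovered from g and the Z-value, since the standard error implied by the test is SE = g / Z:

SE = \frac{g}{Z} = \frac{0.245}{2.859} \approx 0.086, \qquad
g \pm 1.96 \times SE = 0.245 \pm 1.96 \times 0.086 \approx [0.077,\ 0.413]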

Two of the five effect sizes were negative, meaning that the majority of effect sizes in the analysis favor online computer programming instruction on student learning outcomes. The observed effect size varies somewhat from study to study, but a certain amount of variation is expected due to sampling error. The Q-statistic provides a test of the null hypothesis that all studies in the analysis share a common effect size (Borenstein et al., 2011). The Q-value is 3.735 with four degrees of freedom and p = .443; thus, the studies do appear to share a common effect size. However, the data also need to be examined for publication bias. Visual inspection of the funnel plots generated from the meta-analyses should ideally show symmetrical distributions around the weighted mean effect sizes. The funnel plot is a scatter plot of the effect sizes estimated from individual studies against a measure of study precision, typically the standard error (Sterne & Egger, 2001). Generally speaking, a symmetric funnel plot suggests the absence of publication bias in the meta-analysis (Duval & Tweedie, 2000). However, the funnel plot should not be the only mechanism used to assess publication bias. In the full presentation, we will include the analyses from the fail-safe N and Orwin's fail-safe N tests.
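The reported p-value for the Q-statistic can be reproduced from a chi-square distribution with k − 1 = 4 degrees of freedom; a minimal sketch in Python, assuming SciPy is available:

from scipy.stats import chi2

Q = 3.735                 # Q-statistic reported for the learning domain
df = 5 - 1                # k - 1 degrees of freedom for five effect sizes
p_value = chi2.sf(Q, df)  # survival function = P(chi-square with df >= Q)
print(round(p_value, 3))  # 0.443, matching the reported p = .443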

Importance

As more emphasis is placed on CS education, both educational practitioners and researchers must carefully weigh the efficacy of online education for computer programming instruction. The results of this study have direct implications for practitioners and for future research. We will expand on these opportunities in our full presentation.

References

*Alonso, F., Manrique, D., Martínez, L., & Viñes, J. M. (2010). How blended learning reduces underachievement in higher education: An experience in teaching computer sciences. IEEE Transactions on Education, 54(3), 471-478.
Bennedsen, J., & Caspersen, M. E. (2007). Failure rates in introductory programming. ACM SIGCSE Bulletin, 39(2), 32-36.
Bernard, R. M., Abrami, P. C., Lou, Y., Borokhovski, E., Wade, A., Wozney, L., ... & Huang, B. (2004). How does distance education compare with classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research, 74(3), 379-439.
Borenstein, M., Hedges, L. V., Higgins, J. P., & Rothstein, H. R. (2011). Introduction to meta-analysis. John Wiley & Sons.
Bosch, N., & D’Mello, S. (2017). The affective experience of novice computer programmers. International Journal of Artificial Intelligence in Education, 27(1), 181-206.
*Boutell, M. (2017, March). Choosing face-to-face or video-based instruction in a mobile app development course. In Proceedings of the 2017 ACM SIGCSE Technical Symposium on Computer Science Education (pp. 75-80). ACM.
*Caldwell, E. R. (2006). A comparative study of three instructional modalities in a computer programming course: Traditional instruction, web-based instruction, and online instruction. The University of North Carolina at Greensboro.
*Chung, H., Long, S., Han, S. C., Sarker, S., Ellis, L., & Kang, B. H. (2018, January). A comparative study of online and face-to-face embedded systems learning course. In Proceedings of the 20th Australasian Computing Education Conference (pp. 63-72). ACM.
Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155-159. doi:10.1037/0033-2909.112.1.155
*Dutton, J., Dutton, M., & Perry, J. (2001). Do online students perform as well as lecture students? Journal of Engineering Education, 90(1), 131-136.
*Dutton, J., Dutton, M., & Perry, J. (2002). How do online students differ from lecture students? Journal of Asynchronous Learning Networks, 6(1), 1-20.
Duval, S., & Tweedie, R. (2000). Trim and fill: A simple funnel‐plot–based method of testing and adjusting for publication bias in meta‐analysis. Biometrics, 56(2), 455-463.
*Glenn, L. M., Jones, C. G., & Hoyt, J. E. (2003). The effect of interaction levels on student performance: A comparative analysis of web-mediated versus traditional delivery. Journal of Interactive Learning Research, 14(3), 285-299.
*He, W., & Yen, C. J. (2014). The role of delivery methods on the perceived learning performance and satisfaction of IT students in software programming courses. Journal of Information Systems Education, 25(1), 23-33.
*Kleinman, J., & Entin, E. B. (2002). Comparison of in-class and distance-learning students' performance and attitudes in an introductory computer science course. Journal of Computing Sciences in Colleges, 17(6), 206-219.
Lahtinen, E., Ala-Mutka, K., & Järvinen, H. M. (2005). A study of the difficulties of novice programmers. ACM SIGCSE Bulletin, 37(3), 14-18.
Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA: Sage.
Means, B., Toyama, Y., Murphy, R., & Baki, M. (2013). The effectiveness of online and blended learning: A meta-analysis of the empirical literature. Teachers College Record, 115(3), 1-47.
Olson, D. (2002). A comparison of online and lecture methods for delivering the CS 1 course. Journal of Computing Sciences in Colleges, 18(2), 57-63.
Orwin, R. G. (1983). A fail-safe N for effect size in meta-analysis. Journal of Educational Statistics, 8(2), 157-159.
Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638 - 641.
Sterne, J. A., & Egger, M. (2001). Funnel plots for detecting bias in meta-analysis: guidelines on choice of axis. Journal of Clinical Epidemiology, 54(10), 1046-1055.
Watson, C., & Li, F. W. (2014, June). Failure rates in introductory programming revisited. In Proceedings of the 2014 Conference on Innovation & Technology in Computer Science Education (pp. 39-44). New York, NY: ACM.

Presenters

Dr. Albert Ritzhaupt, University Of Florida
