Paper 3: Assessing the use of technology and Khan Academy to improve educational outcomes
Location: W179a, Table 3
Listen and learn: Research paper
Tuesday, June 26, 11:45 am–12:45 pm
The session will focus on a study on the use of technology and Khan Academy, an open educational platform, as an innovative approach to enhance students’ performance in mathematics in Guatemala. Attendees will learn how technology can play a critical role in complementing traditional teaching in challenging contexts.
Attendee devices: Devices not needed
Focus: Digital age teaching & learning
Topic: Distance, online, blended and flipped learning
ISTE Standards: For Educators:
The adoption of blended learning models, which combine technology with traditional teaching methods, as a strategy to enhance student academic performance is a relatively nascent approach. There are a variety of blended learning models and sub-models, each producing different outcomes for student learning. Benefits range from increasing student engagement to strengthening teacher support in integrating technology into in-class activities.

Because blended learning models are relatively new, there is limited documentation of their impact on academic performance. The available literature tends to be qualitative in nature and focused on developed countries, and it shows mixed findings. Some studies find blended models to have mostly non-academic benefits, such as increasing students’ comfort with working on computers, creating opportunities for collaboration, and increasing student engagement. Other studies find positive effects on academic achievement through self-organized and mediated learning in environments where children would otherwise be denied opportunities for good schooling. Most studies emphasize the importance of teacher training and adjustment to ensure an effective transition when incorporating digital content and technology into daily classroom activities.

This study aimed to explore various blended learning models in the context of public schools in a developing country. It provides insight into whether the introduction of blended learning models can help overcome common challenges in public education systems in developing countries (e.g., poor infrastructure, underqualified teachers) and improve overall student performance.
The study used a quasi-experimental design that combined quantitative and qualitative methods. The quantitative approach employed structured surveys and standardized math exams administered to students and teachers. The student survey collected information on students’ demographic and household characteristics, as well as on their experiences with technology inside and outside of school. The teacher survey gathered demographic information, such as age, gender, education level and years working as a teacher, as well as information on how teachers use the technology available at school. To gauge changes in students’ math performance, the study used standardized tests comprising 30 math questions. In addition, a standardized math exam for teachers, with a format and content similar to those of the student exams, served as a control for how teachers’ mastery of math concepts could affect student outcomes.
The qualitative approach entailed a series of focus group discussions with primary school teachers in intervention schools to better understand teachers’ perspectives and experiences integrating technology into their curricula. Specifically, focus groups aimed to gather teachers’ opinions on the advantages and challenges of integrating technology and the Khan Academy tools into traditional teaching methods.
The evaluation tested the various sub-interventions that made up the pilot: schools with 16-computer labs and Khan Academy (with and without Internet); schools with 30-computer labs and Khan Academy (with and without Internet); and schools with tablets and Khan Academy (without Internet). It used three evaluation groups comprising 30 schools in total: 1) the pilot intervention group, including schools with the sub-interventions described above (technology and Khan Academy, with or without Internet); 2) the traditional programming group, comprising schools that received the conventional computer intervention (computer labs and teacher training, but no Khan Academy); and 3) a comparison group of 10 schools with no access to technology or Khan Academy. To arrive at findings, the evaluation compared the scores of students in the pilot intervention group and in the traditional programming group, separately, against those of students in the comparison group.
The study employed a two-stage randomization strategy to draw the evaluation sample. In the first stage, evaluators randomly selected schools into each of the evaluation groups. In the second stage, evaluators selected students at random within the schools for data collection. The final sample included 2,356 students: 1,146 interviewed and tested at baseline and 1,210 interviewed and tested at endline. In the case of teachers, findings are based on a sample of 206 teachers: 99 at baseline and 107 at endline. Evaluators conducted descriptive and multivariate regression analyses using the statistical software Stata to estimate the difference-in-differences and arrive at findings. The analyses controlled for a rich set of variables, including gender, grade, class size, household socio-economic characteristics, availability of technology at home, frequency and time of technology use, and teacher’s math performance, among others.
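The two-stage draw and the difference-in-differences logic described above can be sketched in a few lines of code. The sketch below is purely illustrative: the school IDs, rosters, and scores are hypothetical, and the study's actual estimates came from multivariate regressions in Stata with a full set of controls, not from raw group means.

```python
import random

random.seed(0)

# Stage 1: randomly assign schools (hypothetical IDs) to the three
# evaluation groups described in the study design.
schools = list(range(30))
random.shuffle(schools)
pilot, traditional, comparison = schools[:10], schools[10:20], schools[20:30]

# Stage 2: within each selected school, draw students at random.
def sample_students(roster, n=5):
    return random.sample(roster, n)

roster = [f"student_{i}" for i in range(40)]   # hypothetical school roster
selected = sample_students(roster)

def mean(xs):
    return sum(xs) / len(xs)

def diff_in_diff(treat_base, treat_end, comp_base, comp_end):
    """Change in the treatment group minus change in the comparison group."""
    return (mean(treat_end) - mean(treat_base)) - (mean(comp_end) - mean(comp_base))

# Hypothetical math scores out of 100, at baseline and endline.
pilot_baseline = [40, 45, 50, 55]
pilot_endline  = [55, 58, 62, 65]   # pilot: technology + Khan Academy
comp_baseline  = [42, 44, 49, 53]
comp_endline   = [47, 49, 53, 59]   # no technology, no Khan Academy

effect = diff_in_diff(pilot_baseline, pilot_endline, comp_baseline, comp_endline)
print(f"Estimated effect: {effect:.1f} points")  # prints "Estimated effect: 7.5 points"
```

Netting out the comparison group's change is what lets the design attribute the remaining gain to the intervention rather than to trends affecting all schools.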
The study found that combining technology with Khan Academy produces a larger positive effect on student math performance. Relative to the comparison group, the pilot intervention (technology and Khan Academy) leads to an average increase of 10 points in math scores, out of a maximum possible score of 100 points, double the five-point increase produced by the traditional programming intervention (technology without Khan Academy). When comparing the different sub-interventions against the comparison group, the evaluation found that the provision of tablets and Khan Academy has a larger effect on student math performance than the other sub-interventions. On average, the combination of tablets with Khan Academy leads to a 10-point increase in math scores, while the use of computers with Khan Academy leads to an average increase of eight points. These findings are statistically significant and control for other factors that can influence student academic performance, such as gender, socioeconomic status, class size, teacher’s math score, whether the student had repeated a grade at least once, availability of computers or tablets at home, and frequency of technology use at school, among other factors.
These findings highlight that the availability of complementary math content programs can benefit student academic performance, even in the context of limited resources and poorly qualified teachers. For instance, 73% of participating students did not have access to computers or tablets at home. Among those who did have technology at home, 33% said their parents do not know how to use the technology. Teachers in the study generally scored low in the standardized math test, with an average score of 64 out of 100 points, underscoring the poor preparation of teachers to instruct students.
The findings are also important considering that students had limited exposure to Khan Academy. On average, students in the study used Khan Academy tools one to two times a week for less than one hour at a time. Findings showed that increased exposure to Khan Academy leads to additional gains in math performance. For instance, students who said they used the platform one to two times a week for a full hour at a time scored, on average, 28 points higher on the math test than students in the comparison group.
The novelty of this study lies in testing various blended learning sub-models in the context of a developing country, with challenging conditions both at the school level and at the student household level. Based on robust quantitative data, its findings shed light on the types of models that can be feasibly implemented in similar contexts to achieve enhanced academic outcomes. The study also provides insight into the most suitable model setup given the infrastructural, financial, and human resource limitations commonly found in these contexts, and it collected important lessons from teachers on how to better integrate these tools into traditional teaching methods. More broadly, the results of the study expand the available literature on the impact of this type of learning model and can serve as guidance for the design of future similar interventions.
Baker, E.L., Gearhart, M., & Herman, J.L. (1994). “Evaluating the Apple Classrooms of Tomorrow.” Los Angeles, CA: UCLA Center for the Study of Evaluation/Center for Technology Assessment.
Baker, F. B., & Kim, S. (2004). “Item response theory: parameter estimation techniques.” 2nd Ed. New York City, NY: Marcel Dekker, Inc.
Barrera-Osorio, F. & Linden, L. (2009). “The use and misuse of computers in education: Evidence from a randomized experiment in Colombia.” Policy Research Working Paper 4836. Washington, DC: World Bank.
Barrera-Osorio, F. (2006). “The Impact of Private Provision of Public Education: Empirical Evidence from Bogotá’s Concession Schools.” Policy Research Working Paper 4121. Washington, DC: World Bank.
Bedard, K. & Cho, I. (2010). “Early Gender Test Score Gaps across OECD countries,” Economics of Education Review, 29(3): 348-363.
Brophy, J. (2006). “Grade repetition.” Education Policy Series 6. UNESCO. International Academy of Education & International Institute for Educational Planning.
Center for Children and Technology (2001). “IBM Reinventing Education: Research Summary and Perspective.” New York, NY: Center for Children and Technology.
Cheung, A. & Slavin, R. (2011). “The effectiveness of educational technology applications for enhancing mathematics achievement in k-12 classrooms: A meta-analysis.” Best Evidence Encyclopedia.
DeBaz, T. (1994). “Meta-analysis of the relationship between students’ characteristics and achievement and attitudes toward science.” Columbus, OH: ERIC Clearinghouse for Science, Mathematics, and Environmental Education.
Flores, M. (2009). “Construcción de las Pruebas de Matemáticas usadas en la Evaluación Nacional de Primaria 2008.” Guatemala, Guatemala: DIGEDUCA.
Fortín, A. (2013). “Evaluación Educativa Estandarizada en Guatemala: Un camino recorrido, un camino por recorrer.” Guatemala, Guatemala: Ministerio de Educación.
Fortin, N., Oreopoulos, P. & Phipps, S. (2011). “Leaving Boys Behind: Gender Disparities in High Academic Achievement.” The Einaudi Institute for Economics and Finance.
Guiso, L., Monte, F., Sapienza, P., & Zingales, L. (2008). “Culture, Gender, and Math,” Science, 320(5880): 1164-1165.
Linden, L., Banerjee, A., and Duflo, E. (2003). “Computer-Assisted Learning: Evidence from a Randomized Experiment.” Poverty Action Lab Paper No. 5. Boston, MA: Poverty Action Lab.
LoGerfo, L., Nichols, A., & Chaplin, D. (2006). “Gender Gaps in Math and Reading Gains during Elementary and High School by Race and Ethnicity,” The Urban Institute.
Mann, D., Shakeshaft, C., Becker, J., Kottkamp, R. (1999). “West Virginia Story: Achievement gains from a statewide comprehensive instructional technology program.” Santa Monica, CA: Milken Family Foundation.
Meyer, J. P., & Hailey, E. (2012). “A study of Rasch partial credit, and rating scale model parameter recovery in WINSTEPS and jMetrik.” Journal of Applied Measurement, 13(3), 248-258.
Ministerio de Educación de Guatemala (2010). “Orientaciones Pedagógicas Curriculares – Nivel Medio Ciclo Básico.” Guatemala, Guatemala.
Ministerio de Educación de Guatemala (2010). “Orientaciones Pedagógicas Curriculares – Nivel Medio Ciclo Diversificado.” Guatemala, Guatemala.
Ministerio de Educación de Guatemala (2010). “Orientaciones Pedagógicas Curriculares – Nivel Primario.” Guatemala, Guatemala.
Murphy, R., Gallagher, L., Krumm, A., Mislevy, J., & Hafter, A. (2014). “Research on the Use of Khan Academy in Schools.” Menlo Park, CA: SRI Education.
Niederle, M. and Vesterlund, L. (2010). “Explaining the Gender Gap in Math Test Scores: The Role of Competition,” Journal of Economic Perspectives, 24: 129-144.
RAND. (2012). “Teachers Matter: Understanding Teachers’ Impact on Student Achievement.” RAND Education.
Raza, M. A. & Shah, A. F. (2011). “Impact of Favourite Subject towards the Scientific Aptitude of Students at Elementary Level.” Pakistan Journal of Social Sciences (PJSS) Vol. 31, No. 1, pp. 135-143.
Ringstaff, C., and Kelley, L. (2002). “The Learning Return On Our Educational Technology Investment.” San Francisco, CA: WestEd RTEC.
Schacter, John (1999). “Education Technology on Student Achievement: What the Most Current Research Has to Say.” Santa Monica, CA: Milken Exchange on Education Technology.
Schenkel, B. (2009). “The impact of an attitude toward mathematics on mathematics performance.” Marietta College.
Slavin, R. & Lake, C. (2007). “Effective Programs in Elementary Mathematics: A Best-Evidence Synthesis.” Best Evidence Encyclopedia.
Spielvogel, R., et al. (2001). “IBM’s Reinventing Education Grant Partnership Initiative – Individual Site Reports.” New York, NY: Center for Children and Technology.
Sun, J. & Metros, S. (2011). “The Digital Divide and Its Impact on Academic Performance.” US-China Education Review A 2 (2011) 153-161. Los Angeles, CA: David Publishing.
Trucano, M. (2014). “Translating and implementing the Khan Academy in Brazil.” Washington, DC: The World Bank Edutech Blog.
Vandenberg, K. C. (2012). “Class Size and Academic Achievement.” Electronic Theses & Dissertations, Paper 408.
Waxman, H. & Houston, R. (2012). “Evaluation of the 2010-2011 Reasoning Mind Program in Beaumont ISD.” Reasoning Mind, Inc.
Carlued Leon is the Global Research Manager at MANAUS Consulting. She is responsible for designing, implementing, and overseeing qualitative and quantitative research worldwide. Her research covers a wide range of international development topics, including education, gender, and public health. She has conducted research for organizations like UNFPA, UN Women, the Inter-American Development Bank, the International Organization for Migration, AIDS Healthcare Foundation, and Plan International. Her most recent articles have been presented at international forums like the 2017 International AIDS Society Conference and the Better Work Conference of the International Finance Corporation (IFC, the World Bank Group).