Makerspaces in education serve as platforms for constructing knowledge and uncovering the learning process through learner-led inquiry. These unique learning environments arose from the influence of leading pedagogical theorists: Jean Piaget, Seymour Papert, Lev Vygotsky, and John Dewey. Piaget's (1964) constructivism, a widely known developmental theory, holds that learning through experience is highly effective. Constructivism supports makerspace assessments because educators can evaluate students' knowledge as students construct their own learning experiences within the space. Papert's (1986) constructionism builds upon Piaget's constructivism and is often connected with learning in makerspaces. Papert (1986) suggests that learning "is most effective when part of an activity the learner experiences is constructing a meaningful product" (p. 2). Educators can therefore focus on the most significant parts of students' learning through this alternative assessment tool. Furthermore, collaboration with teachers and peers in makerspaces reflects Vygotsky's (1978) social constructivist theory. Finally, Dewey asserts that experience also forms outside the parameters of direct participation, and his emphasis on hands-on learning supports student-centered learning in makerspaces (Hatzigianni et al., 2020). Based on these foundations, educators can use the collaborative aspects of makerspaces to build knowledge that aligns with educational standards. We use this theoretical foundation to explore how to apply assessment in makerspaces and to inform further research.
Methods
We use the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework for our study design and analysis (PRISMA, 2021). The following research questions guide our study:
RQ1: How can educators demonstrate the effectiveness of makerspace assessments in PreK through 12th-grade education?
RQ2: How do educators evaluate standards using makerspaces?
RQ3: How do makerspace assessments demonstrate student outcomes?
RQ4: How do educators consider which parts of the makerspace process can represent a gradable assessment?
Eligibility Criteria
We selected our sample using the following criteria:
Literature samples are peer-reviewed journal articles that specifically address learner assessment in makerspaces; we excluded articles that evaluate the makerspace environment itself. We reviewed different study methods and evaluated sources based on their relevance to our topic and their impact factor.
Information Sources
Our search employed themes based on the keyword strings most applicable to the databases we viewed, with no language or geographical restrictions. Sources range from 2012 to 2022: older sources show the development of makerspaces over the last decade, and newer sources reveal assessment innovations in makerspaces.
Search Strategy
The research team recorded the number of search results for each query. The search strategy incorporates terms related to makerspace assessments and our expertise in this field. Terms include assessment of makerspace, makerspace assessments, makerspace as assessment, educator(s) + makerspace assessment, educator(s) + makerspace(s) + evaluate, makerspace(s) + literacies, makerspace(s) + standards, makerspace(s) + learning standards, makerspace(s) + educational standards, makerspace(s) + gradable assessment, and makerspace(s) + gradable assignment.
Data Selection and Collection Processes
We reviewed different kinds of makerspaces to understand their potential application in PK-12 settings. We reviewed a study's full text when we could not unquestionably exclude the article based on its title, abstract, and keywords. Once an article was selected, we read it in its entirety before including it in the systematic review. Two reviewers independently searched the databases using the search strings, with the first reviewer analyzing the odd-numbered studies and the second examining the even-numbered studies. We assessed studies against the inclusion criteria and determined whether each study met, did not meet, or might meet the criteria (van Tulder et al., 2003). The research team also excluded articles based on the following exclusion criteria: articles published before 2012; sources other than peer-reviewed journal articles, conference proceedings, and books (e.g., dissertations); and articles that discuss assessing the makerspace itself. The two reviewers met to discuss the articles against the inclusion criteria. Two additional reviewers examined the data for inter-rater reliability. We included a study when at least three reviewers agreed that it met the inclusion criteria.
Data Items
Extracted data includes literature samples that meet the inclusion criteria and address these themes:
Effectiveness of makerspace assessments.
Evaluating standards through the use of makerspaces.
Using makerspaces to demonstrate student outcomes.
Identifying which parts of the makerspace processes represent a gradable assessment.
Study Risk of Bias Assessment
All research team members will analyze the risk of bias in each study independently and use inter-rater reliability measures to resolve disagreements.
Effect Measures
We are evaluating these measures at this time.
Synthesis Methods
We are synthesizing our findings and will have the results ready by the conference. We used the PRISMA review process to develop our inclusion and exclusion criteria. We are preparing the data for presentation: cataloging and visually displaying the data and writing the rationale for study selection and subgroup analysis (PRISMA, 2021).
Reporting Bias Assessment
Our assessment includes potential study limitations and reviewer assumptions about ambiguous information. The research team also holds a positive bias toward makerspaces and multimodal learning methods. We selected PRISMA as our systematic approach, with explicit inclusion and exclusion criteria, to avoid selecting studies based on this positive bias.
Certainty Assessment
We are currently determining the factors of certainty assessment, such as the congruity of results across studies and the magnitude of effects that result from these congruities (PRISMA, 2021).
Initial Findings
We report our initial findings from the PRISMA systematic review, which revealed limited research addressing our four research questions on applying makerspaces as an assessment tool. We will present the statistical results of our review and hope they provide a clearer understanding of the studies analyzed in our systematic review.
RQ1:
The studies mainly focused on the effectiveness of makerspace assessment in K-12 education. Some studies introduce the significance of using makerspaces as a multimodal method for assessing students' problem-solving abilities (Davis et al., 2017; Freiman, 2020; Trust et al., 2017). Some researchers in this area believe makerspace assessment, as an activity-centered assessment method, promotes students' agency as they explore, create, and design artifacts that present their understanding of specific topics (Angello et al., 2016; Blikstein et al., 2017; Buxton et al., 2022; Chu et al., 2017; Cun & Abramovich, 2018; Jones, 2020; Okundaye et al., 2018; Trust et al., 2017; Wardrip et al., 2021).
RQ2:
Studies show how makerspaces help educators meet teaching standards. Researchers evaluate the application of makerspaces and how assessment embodies educational standards at both the federal and state levels. For most subjects in K-12 education, educators use makerspaces as an efficient way to meet and implement standards (Belair & Waskie-Laura, 2021; Blikstein et al., 2017; Cun & Abramovich, 2018; Cun et al., 2019; Lindstrom et al., 2017; Litts et al., 2019; Trust et al., 2017). Overall, educators can apply the makerspace environment to a variety of standards and multimodal literacies.
RQ3:
Initial findings indicate that educators can capture students' learning outcomes by assessing students in makerspaces. For instance, different categories of maker activities align with lesson plans and provide approaches for evaluating learning (Angello et al., 2016; Blikstein et al., 2017). Other initial findings emphasize how makerspace assessments can demonstrate student outcomes and provide evidence that written data, evaluated with a Likert scale, can reveal students' learning in makerspaces (Hadad et al., 2019; Oliver et al., 2020). Other studies, such as Buxton et al. (2022), highlight that assessment in makerspaces draws out students' different skill sets in a joyful environment. Kay et al. (2019) further support this argument by concluding that makerspace assessments can contribute to students displaying 21st-century skills.
RQ4:
There are still very few studies that consider which parts of the makerspace process represent a gradable assessment. However, Lawson et al. (2018) identified that maker activities aligned with educational standards and thus acted as the assessment piece; therefore, the activities could translate into a gradable assessment. Lock et al. (2021) also found that teachers can use several stages of the makerspace experience as formative and summative assessments when those stages are situated within an assessment framework.
Attendees can expect to learn about the pedagogical importance of our study, which explores the practicality of applying relevant theoretical frameworks, such as constructionism and social constructivism, by assessing educational standards through student outcomes in makerspaces (Halverson & Sheridan, 2014; Lock et al., 2021). Attendees can also learn about using makerspaces as an alternative assessment strategy and can expect practical information about applying makerspaces in educational settings. Overall, the multimodal format of makerspaces is valuable to ISTE attendees.
References
Angello, G., Chu, S. L., Okundaye, O., Zarei, N., & Quek, F. (2016). Making as the new colored pencil: Translating elementary curricula into maker activities. Proceedings of the 15th International Conference on Interaction Design and Children. https://doi.org/10.1145/2930674.2930723
Belair, J., & Waskie-Laura, N. (2021). Preparing students for a technology-driven future: How school librarians can integrate computer science standards into the curriculum. Knowledge Quest, 50(2). https://eric.ed.gov/?id=EJ1324362
Blikstein, P., Kabayadondo, Z., Martin, A., & Fields, D. (2017). An assessment instrument of technological literacies in makerspaces and FabLabs. Journal of Engineering Education, 106(1), 149–175. https://doi.org/10.1002/jee.20156
Buxton, A., Kay, L., & Nutbrown, B. (2022). Developing a makerspace learning and assessment framework. Proceedings of the 6th FabLearn Europe / MakeEd Conference 2022, Article 5, 1–7. https://doi.org/10.1145/3535227.3535232
Chu, S. L., Schlegel, R., Quek, F., Christy, A., & Chen, K. (2017). “I make, therefore I am”: The effects of curriculum-aligned making on children’s self-identity. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 109–120. https://doi.org/10.1145/3025453.3025458
Cun, A., & Abramovich, S. (2018). The challenge of assessment for library makerspaces. Proceedings of the Association for Information Science and Technology, 55(1), 781–782. https://doi.org/10.1002/pra2.2018.14505501114
Davis, R. L., Schneider, B., & Blikstein, P. (2017). Making the invisible visible: A new method for capturing student development in makerspaces. CSCL 2017 Proceedings, 175–182.
Freiman, V. (2020). Issues of teaching in a new technology-rich environment: Investigating the case of New Brunswick (Canada) school makerspaces. In Y. B.-D. Kolikant, D. Martinovic, & M. Milner-Bolotin (Eds.), STEM Teachers and Teaching in the Digital Era: Professional Expectations and Advancement in the 21st Century Schools (pp. 275–292). Springer. https://doi.org/10.1007/978-3-030-29396-3
Hadad, R., Thomas, K., Kachovska, M., & Yin, Y. (2019). Practicing formative assessment for computational thinking in making environments. Journal of Science Education and Technology, 29(1), 162–173. https://doi.org/10.1007/s10956-019-09796-6
Halverson, E. R., & Sheridan, K. (2014). The maker movement in education. Harvard Educational Review, 84(4), 495–504.
Hatzigianni, M., Stevenson, M., Bower, M., Falloon, G., & Forbes, A. (2020). Children’s views on making and designing. European Early Childhood Education Research Journal, 28(2), 286–300. https://doi.org/10.1080/1350293X.2020.1735747
Kay, L., Marsh, J., Hyatt, D., Chesworth, L., Nisha, B., Nutbrown, B., & Olney, B. (2019). Assessment of learning in STEAM-focused makerspaces. In A. Blum-Ross, K. Kumpulainen, & J. Marsh (Eds.), Enhancing Digital Literacy and Creativity: Makerspaces in the Early Years. Routledge. https://doi.org/10.4324/9780429243264
Knobloch, K., Yoon, U., & Vogt, P. M. (2011). Preferred reporting items for systematic reviews and meta-analyses (PRISMA) statement and publication bias. Journal of Cranio-Maxillofacial Surgery, 39(2), 91–92. https://doi.org/10.1016/j.jcms.2010.11.001
Lawson, C. A., Cook, M., Dorn, J., & Pariso, B. (2018). A STEAM-focused program to facilitate teacher engagement before, during, and after a fieldtrip visit to a children’s museum. Journal of Museum Education, 43(3), 236–244. https://doi.org/10.1080/10598650.2018.1474421
Lindstrom, D., Thompson, A. D., & Schmidt-Crawford, D. A. (2017). The maker movement: Democratizing STEM education and empowering learners to shape their world. Journal of Digital Learning in Teacher Education, 33(3), 89–90. https://doi.org/10.1080/21532974.2017.1316153
Litts, B. K., Lewis, W. E., & Mortensen, C. K. (2019). Engaging youth in computational thinking practices through designing place-based mobile games about local issues. Interactive Learning Environments, 28(2), 1–14. https://doi.org/10.1080/10494820.2019.1674883
Lock, J., Becker, S., & Redmond, P. (2021). Teachers conceptualizing and developing assessment for skill development: Trialing a maker assessment framework. Research Evaluation, 30(4). https://doi.org/10.1093/reseval/rvab029
Okundaye, O., Chu, S., Quek, F., Berman, A., Natarajarathinam, M., & Kuttolamadom, M. (2018). From making to micro-manufacture. Proceedings of the Conference on Creativity and Making in Education. https://doi.org/10.1145/3213818.3213822
Oliver, K. M., Houchins, J. K., Moore, R. L., & Wang, C. (2020). Informing makerspace outcomes through a linguistic analysis of written and video-recorded project assessments. International Journal of Science and Mathematics Education, 19(2), 333–354.
Papert, S. (1986). Constructionism: A new opportunity for elementary science education. Massachusetts Institute of Technology, Media Laboratory, Epistemology and Learning Group.
Piaget, J. (1964). Part I: Cognitive development in children: Piaget development and learning. Journal of Research in Science Teaching, 2(3), 176–186. https://doi.org/10.1002/tea.3660020306
PRISMA. (2021). PRISMA: Transparent reporting of systematic reviews and meta-analyses. https://www.prisma-statement.org/
Trust, T., Maloy, R. W., & Edwards, S. (2017). Learning through making: Emerging and expanding designs for college classes. TechTrends, 62(1), 19–28. https://doi.org/10.1007/s11528-017-0214-0
van Tulder, M., Furlan, A., Bombardier, C., & Bouter, L. (2003). Updated method guidelines for systematic reviews in the Cochrane collaboration back review group. Spine, 28(12), 1290–1299. https://doi.org/10.1097/01.brs.0000065484.95996.af
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes (M. Cole, V. John-Steiner, S. Scribner, & E. Souberman, Eds.). Harvard University Press.
Wardrip, P. S., Saplan, K., & Evancho, J. (2021). “Finding the right window into what they’re doing”: Assessment of maker-based learning experiences remotely. TechTrends, 65(6), 952–962. https://doi.org/10.1007/s11528-021-00664-y