The Technology Acceptance Model (TAM) is an established framework for understanding how individuals accept and use technologies (Davis, 1989). Two key factors in the TAM that influence the acceptance of learning technologies are Perceived Usefulness (PU) and Perceived Ease of Use (PEOU). While PU refers to individuals’ belief that a technology will enhance their productivity, PEOU refers to their perception of how difficult or simple the technology is to use. Previous findings indicate that PEOU and PU are crucial precursors to individuals’ acceptance of learning technologies (Granić & Marangunić, 2019). We adapt the TAM to explore participants’ attitudes toward accepting and using the GAI2N in the context of GenAI use and adoption in education: Are there patterns in perceptions of the GAI2N across participants? Do these perceptions reflect challenges and benefits that indicate acceptance of the GAI2N based on PU and PEOU?
This exploratory, IRB-approved, qualitative research study asks: How do higher education faculty perceive and engage with a tool designed to help them consider implementations of GenAI in their syllabi?
The four tool designers (also educational researchers) carefully structured a one-hour webinar with ISTE+ASCD to present the tool to a public, national audience in November 2025. Invitations to participate in the webinar will be shared with teacher education faculty from across the country through the ISTE+ASCD Alliance for Innovation in Teacher Education Pledge community as well as social media platforms. Because the webinar is designed to share the tool with maximum utility for the higher education participants, the designers center several key moments and data elements: (1) individual virtual polling responses related to participants’ greatest concerns about implementing GenAI at the course level; (2) following a presentation on how to use the GAI2N, virtual polling responses related to concrete ways participants plan to use the GAI2N and their perceptions of the challenges and benefits of using the tool; and (3) chat responses related to questions participants still have about including GenAI in their syllabi. Given the public nature of the webinar, at its conclusion participants will additionally have the opportunity to share anonymous feedback in a short survey. The survey will include Likert-scale questions adapted from the validated Basic Technology Acceptance Model (TAM) Questionnaire, as well as several open-ended questions for further feedback.
The quantitative data will be analyzed using descriptive statistics, and the qualitative data collected from these sources will be analyzed using thematic analysis and Saldaña’s approach to iterative coding. This initial research is necessary given the urgency faculty have expressed in requesting support for navigating the use of GenAI in their coursework.
Because this study is exploratory, with data collection scheduled for November 2025, we do not yet have results; however, we have a history of collaborating successfully and meeting scheduled deadlines across several projects. We anticipate a robust set of qualitative results based on the design and tools in place. Because some educators in the U.S. have already begun to implement the GAI2N and have provided positive feedback, we anticipate meaningful results from the webinar that we can analyze methodically.
This paper has educational significance in elevating the conversation around the need to equip preservice and inservice teachers with the skills, knowledge, and practices to integrate GenAI into their teaching contexts. The examined tool (the GAI2N) fills a significant gap in scaffolded support for teacher education faculty who want to thoughtfully bring generative AI into their teacher education coursework. Sharing this research provides an example of the role that a tool such as the GAI2N can play in bridging the planning and implementation of GenAI in preservice and inservice teacher education. Gaining insight into the challenges, reflections, and perceptions of the GAI2N’s perceived ease of use and usefulness may lead to a deeper understanding of the intricacies related to the acceptance and use of GenAI. Furthermore, it can inform the development of additional resources and scaffolds to advance the responsible, reflective integration of generative AI as a learning tool and as new TPACK knowledge for preservice and inservice teachers.
AI4K12 Initiative. (2020). Five big ideas in artificial intelligence [Poster]. https://ai4k12.org/resources/big-ideas-poster/
Benali, M., & Mak, J. (2024). Perception of preservice Moroccan teachers regarding the adoption of ChatGPT in their teaching practices. In Artificial Intelligence Applications in K-12 (pp. 112-137). Routledge.
Black, N. B., George, S., Eguchi, A., Dempsey, J. C., Langran, E., Fraga, L., Brunvand, S., & Howard, N. (2024). A Framework for Approaching AI Education in Educator Preparation Programs. Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23069-23077. https://doi.org/10.1609/aaai.v38i21.30351
Chen, B., Cheng, J., Wang, C., & Leung, V. (2025). Pedagogical Biases in AI-Powered Educational Tools: The Case of Lesson Plan Generators. Social Innovations Journal, 30.
Chiu, T. K. F., Ahmad, Z., Ismailov, M., & Temitayo Sanusi, I. (2024). What are artificial intelligence literacy and competency? A comprehensive framework to support them. Computers and Education Open, 6. https://doi.org/10.1016/j.caeo.2024.100171
Danaher, J. (2022). Techno-optimism: An analysis, an evaluation and a modest defence. Philosophy & Technology, 35(2), 54.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340.
Digital Promise. (n.d.). AI literacy. https://digitalpromise.org/initiative/artificial-intelligence-in-education/ai-literacy/
Dilek, M., Baran, E., & Aleman, E. (2025). AI Literacy in Teacher Education: Empowering Educators Through Critical Co-Discovery. Journal of Teacher Education, 76(3), 294-311.
EdSAFE AI Alliance. (n.d.). https://www.edsafeai.org/_files/ugd/5be6a9_0dffff673cd042578c25cc098b2929fc.pdf
Feuerriegel, S., Hartmann, J., Janiesch, C., et al. (2024). Generative AI. Business & Information Systems Engineering, 66, 111–126.
Fleming, E. C., Robert, J., Sparrow, J., Wee, J., Dudas, P., & Slattery, M. J. (2021). A digital fluency framework to support 21st-century skills. Change: The Magazine of Higher Learning, 53(2), 41–48. https://doi.org/10.1080/00091383.2021.1883977
Granić, A., & Marangunić, N. (2019). Technology acceptance model in educational context: A systematic literature review. British Journal of Educational Technology, 50(5), 2572-2593.
Grover, S. (2024, March). Teaching AI to K-12 learners: Lessons, issues, and guidance. In Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 1 (pp. 422-428).
Grover, S., Fields, D., Kafai, Y., White, S., & Strickland, C. (2024, March). Enduring Lessons from 'Computer Science for All' for AI Education in Schools. In Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 2 (pp. 1533-1534).
Hsu, H. P., Mak, J., Werner, J., White-Taylor, J., Geiselhofer, M., Gorman, A., & Capurro, C. T. (2024). Preliminary Study on pre-service teachers’ applications and perceptions of generative artificial intelligence for lesson planning. Journal of Technology and Teacher Education, 32(3), 409-437.
International Society for Technology in Education (ISTE). (2024a). ISTE Standards for Educators. https://iste.org/standards/educators
International Society for Technology in Education (ISTE). (2024b). ISTE Standards for Students. https://iste.org/standards/students
Kassorla, M., Georgieva, M., & Papini, A. (2024). AI literacy in teaching and learning: A durable framework for higher education. Educause. https://www.educause.edu/content/2024/ai-literacy-in-teaching-and-learning/defining-ai-literacy-for-higher-education
Kasun, S., Blackwood, A., Mak, J., & Black, N. B. (2025). The GAI2N: GenAI Integration Navigator - A reflective guide for educator preparation program faculty. Retrieved August 17, 2025, from https://drive.google.com/file/d/1n9dmWGuSsJwvJQAilEt9USSs4mYR9ztU/view
Kestin, G., Miller, K., Klales, A., et al. (2024, May 14). AI tutoring outperforms active learning (Version 1) [Preprint]. Research Square. https://doi.org/10.21203/rs.3.rs-4243877/v1
Oakley, B., Johnston, M., Chen, K., Jung, E., & Sejnowski, T. (2025). The memory paradox: Why our brains need knowledge in an age of AI. SSRN. https://doi.org/10.2139/ssrn.5250447
OpenAI. (n.d.). Creating a GPT. OpenAI Help Center. https://help.openai.com
Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., Zhang, C., Agarwal, S., et al. (2022). Training language models to follow instructions with human feedback
(arXiv:2203.02155). arXiv. https://doi.org/10.48550/arXiv.2203.02155
Quill Network. (n.d.). Introducing UDL & AI. https://quillnetwork.com
Pearl, M., Brock, J. & Kumar, A. (2024, May 28). Delving into the dangers of DeepSeek. Center for Strategic and International Studies. https://www.csis.org/analysis/delving-dangers-deepseek
Prem, E. (2023). From ethical AI frameworks to tools: a review of approaches. AI and Ethics, 3(3), 699-716.
Reutzel, D. R., & Cooter, R. B. (2023). Teaching children to read: The teacher makes the difference (9th ed.). Pearson.
Shelby, R., Rismani, S., Henne, K., Moon, A., Rostamzadeh, N., Nicholas, P., & Virk, G. (2022). Identifying sociotechnical harms of algorithmic systems: Scoping a taxonomy for harm reduction.
Shwartz, V. (2024, May 8). Artificial intelligence needs to be trained on culturally diverse datasets to avoid bias. The Conversation. http://theconversation.com/artificial-intelligence-needs-to-be-trained-on-culturally-diverse-datasets-to-avoid-bias-222811
Stanford Teaching Commons. (n.d.). AI and education. Teaching Commons, Stanford University. https://teachingcommons.stanford.edu/resources/teaching-guides/ai-and-education
Tacelosky, K., Kasun, G. S., Liao, Y. C., Shapiro, B., & Harris, K. (In press). Exploring critical AI literacy in language education: A case study. Foreign Language Annals.
Touretzky, D. (2020). The AI4K12 Initiative: Developing national guidelines for teaching AI in K-12. Global SW Education Conference, Seoul, Korea, November 3-4, 2020. https://raw.githubusercontent.com/touretzkyds/ai4k12/master/documents/GlobalSWEdu2020_Touretzky.pdf
UNESCO. (2024). AI competency framework for teachers. https://doi.org/10.54675/ZJTE2084
Van de Pol, J., Volman, M., & Beishuizen, J. (2010). Scaffolding in teacher–student interaction: A decade of research. Educational Psychology Review, 22(3), 271–296. https://doi.org/10.1007/s10648-010-9127-6
Wu, L., Chen, H., & Li, P. (2024). Reducing misinterpretation in large language models through task-oriented fine-tuning. Journal of Artificial Intelligence Research, 79, 1123–1140. https://doi.org/10.1613/jair.1.14123
Zhou, S., Zhang, X., Liu, J., & Wang, Y. (2023). Task-specific adaptation of GPT models for improved contextual understanding. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP), 4567–4578. https://aclanthology.org/2023.emnlp-main.456/