
Confronting Technosocial Blind Spots: Navigating AI's Impact on Education


Lecture presentation
ISTELive Content
This is presentation 2 of 2 in this group.

Session description

This session explores technosocial blind spots in AI-driven education, examining privacy concerns, algorithmic bias, and hidden human labor. Attendees will learn strategies to address these challenges, ensure equitable access, and balance AI capabilities with human wisdom to create more inclusive and transformative learning environments.

Framework

My research and the resulting paper and presentation embody several key perspectives and theoretical frameworks:
1. Critical Theory: My work embodies a critical perspective on technology in education, questioning assumptions and examining power dynamics inherent in the development and deployment of AI and algorithmic systems. This is evident in my analysis of the social, ethical, and political implications of these technologies.
2. Sociotechnical Systems Theory: My research adopts a sociotechnical perspective, emphasizing the interplay between technological systems and their social, cultural, and organizational contexts. This is clear in my concept of "technosocial blind spots" and my analysis of how these blind spots emerge from the complex interactions between technology and society.
3. Ethics of Technology: My paper is deeply grounded in the ethical considerations surrounding AI and big data in education. This framework is evident in my discussion of privacy concerns, algorithmic bias, and the need for ethical governance of educational technologies.
4. Social Justice and Equity Lens: Throughout my paper, I consistently apply a social justice and equity perspective, examining how AI and algorithmic systems can exacerbate existing inequalities or create new forms of discrimination.
5. Reflexivity in Research and Practice: I advocate for greater reflexivity in developing and implementing educational technologies, embodying a theoretical approach that emphasizes self-awareness and critical self-examination in research and practice.
6. Systems Thinking: My analysis of the various manifestations of technosocial blind spots and their interconnected nature reflects a systems thinking approach, considering the complex, interrelated factors at play in educational technology ecosystems.
7. Postcolonial and Decolonial Perspectives: My discussion of cultural oversight and bias, particularly in relation to the development of AI systems primarily reflecting Western values, incorporates elements of postcolonial and decolonial theory.
8. Democratic Theory: My emphasis on the importance of democratic governance and participation in the development and deployment of AI in education aligns with democratic theoretical frameworks.
9. Labor Theory: Including the hidden human labor behind AI systems brings in perspectives from labor theory, highlighting overlooked technological development aspects.
10. Technological Determinism Critique: My paper implicitly critiques technological determinism by emphasizing the social construction of technology and the importance of human agency in shaping technological outcomes.

These perspectives and frameworks are interwoven throughout my paper and presentation, creating a rich, multidisciplinary approach to examining the challenges and opportunities of AI and algorithmic systems in education. This interdisciplinary theoretical grounding allows me to provide a comprehensive and nuanced analysis of the complex issues surrounding educational technology in the algorithmic age.


Methods

My paper is a theoretical and analytical piece that draws on existing literature, case studies (particularly the inBloom case), and conceptual frameworks to explore the concept of "technosocial blind spots" in educational technology. My work is more accurately described as a critical analysis and synthesis of existing knowledge rather than an empirical study.
My paper:
1. Introduces and defines the concept of technosocial blind spots
2. Uses the inBloom case as a primary example to illustrate these blind spots
3. Examines various manifestations of technosocial blind spots in educational technology
4. Draws on a wide range of existing literature and theories to support my arguments
5. Provides a critical analysis of the challenges and opportunities in the algorithmic age of education
6. Offers recommendations for addressing these blind spots and navigating the future of educational technology
Given the nature of my paper, it would not be possible to "replicate" it in the traditional sense of a scientific study. However, other researchers could certainly repeat and extend my work by:
1. Applying my framework of technosocial blind spots to other case studies in educational technology
2. Conducting empirical studies to test some of the assertions and hypotheses implied in my analysis
3. Developing quantitative measures of technosocial blind spots in educational technology implementations
4. Exploring the effectiveness of my recommended strategies for addressing these blind spots in real-world contexts


Results

Based on my research and the resulting paper and presentation, I can summarize my key findings and expectations as follows:
1. Technosocial Blind Spots: I've identified and defined the concept of technosocial blind spots in educational technology, demonstrating how these oversights can lead to unintended consequences and erosion of public trust.
2. Case Study Analysis: Through the inBloom case, I've illustrated how technosocial blind spots can manifest in real-world educational technology initiatives, leading to their failure despite significant resources and ambitious goals.
3. Manifestations of Blind Spots: I've outlined several key areas where technosocial blind spots commonly occur: a) Overlooking social impacts b) Assuming more data leads to better outcomes c) Ethical blindness and privacy concerns d) Cultural oversight and bias e) Economic consequences and inequalities f) Political ramifications and democratic governance g) The hidden human labor behind AI
4. Ethical Implications: I've highlighted the critical importance of addressing privacy, consent, and data governance issues in the development and deployment of educational AI.
5. Equity Concerns: My analysis reveals how AI and algorithmic systems can exacerbate existing inequalities or create new forms of discrimination if not carefully designed and implemented.
6. Democratic Governance: I've emphasized the need for inclusive dialogue and collaboration among stakeholders to establish clear guidelines and safeguards for educational technology.
7. Reflexivity: I argue for greater reflexivity in the development and implementation of educational technologies, emphasizing the need for ongoing critical examination of assumptions and biases.
8. Future Directions: I've outlined key considerations for navigating the challenges and opportunities of education in the algorithmic age, including: a) Balancing AI power with human wisdom b) Protecting learner privacy and agency c) Striving for equitable and transformative learning for all

Expected outcomes of engaging with my research and the resulting paper and presentation include:
1. Increased awareness and attention to technosocial blind spots in the development and implementation of educational technologies.
2. More inclusive and collaborative approaches to educational technology governance.
3. Greater emphasis on ethical considerations and privacy protections in educational AI systems.
4. Enhanced efforts to address equity issues and mitigate potential biases in algorithmic educational tools.
5. A shift towards more reflexive and critically aware practices in educational technology development.
6. Continued research and dialogue on the long-term impacts of AI and algorithmic systems on education and society.

These findings and expectations provide a framework for understanding and addressing the complex challenges of integrating AI and algorithmic systems into education, with a focus on ethical, equitable, and socially responsible approaches.


Importance

Based on my research and the resulting paper and presentation, I can describe the educational and scientific importance of my study as follows:

1. Conceptual Framework: I have introduced and defined the concept of "technosocial blind spots" in educational technology. This provides a valuable framework for understanding and analyzing the complex interactions between technology and society in educational contexts. This framework can be applied by researchers, educators, and policymakers to identify potential issues before they become problematic.

2. Critical Analysis: My study offers a critical examination of the implementation of AI and algorithmic systems in education. This critical perspective is crucial for a balanced and responsible integration of these technologies in educational settings.

3. Ethical Considerations: I have highlighted important ethical issues surrounding the use of AI and big data in education, particularly regarding privacy, consent, and data governance. This focus on ethics is essential for developing responsible and trustworthy educational technology practices.

4. Equity and Inclusion: My analysis of how AI systems can perpetuate or exacerbate inequalities provides valuable insights for creating more inclusive and equitable educational technologies.

5. Policy Implications: By emphasizing the need for democratic governance and stakeholder collaboration, my study offers important considerations for policymakers and administrators in shaping the future of educational technology.

6. Interdisciplinary Approach: My research bridges technology, education, ethics, and social science, providing a holistic view of the challenges and opportunities in educational technology.

7. Future-Oriented: My study helps prepare the education sector for upcoming challenges and opportunities as AI and algorithmic systems become more prevalent.

The value of this study to ISTE and ASCD conference audiences lies in its:

1. Relevance: It addresses current and emerging issues in educational technology that are directly relevant to educators, administrators, and policymakers.

2. Practical Implications: It offers insights that can inform decision-making about the adoption and implementation of AI and algorithmic systems in schools.

3. Critical Thinking: It encourages educators to think critically about the technologies they use and their potential impacts.

4. Ethical Awareness: It raises awareness about important ethical considerations in educational technology, which aligns with ISTE and ASCD's commitment to responsible and ethical use of technology in education.

5. Equity Focus: Its emphasis on equity and inclusion resonates with ISTE and ASCD's goals of promoting inclusive and equitable educational practices.

6. Forward-Thinking: It helps conference attendees anticipate and prepare for future trends and challenges in educational technology.

7. Holistic Perspective: It provides a comprehensive view of the educational technology landscape, considering technological, social, ethical, and policy aspects.

8. Framework for Analysis: It offers a conceptual framework (technosocial blind spots) that educators and administrators can apply in their own contexts to assess and improve their use of educational technology.

By providing these insights and frameworks, my study equips ISTE and ASCD conference audiences with the knowledge and tools to navigate the complex landscape of AI and algorithmic systems in education more effectively and responsibly.


References

Adedoyin, O. B., & Soykan, E. (2023). Covid-19 pandemic and online learning: The challenges and opportunities. Interactive Learning Environments, 31(2), 863–875. https://doi.org/10.1080/10494820.2020.1813180
Anderson, R. (2019). The emergence of data privacy conversations and state responses. Institute for Higher Education Policy. http://files.eric.ed.gov/fulltext/ED595109.pdf
Azubuike, O. B., Adegboye, O., & Quadri, H. (2021). Who gets to learn in a pandemic? Exploring the digital divide in remote learning during the COVID-19 pandemic in Nigeria. International Journal of Educational Research Open, 2, 100022. https://doi.org/10.1016/j.ijedro.2020.100022
Barrett, L. (2020). Ban facial recognition technologies for children—And for everyone else. Boston University Journal of Science and Technology Law, 26. https://www.bu.edu/jostl/files/2020/08/1-Barrett.pdf
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922
Berghel, H. (2018). Malice domestic: The Cambridge Analytica dystopia. Computer, 51(05), 84–89.
Bulger, M., McCormick, P., & Pitcan, M. (2017). The legacy of InBloom. Data & Society. https://datasociety.net/pubs/ecl/InBloom_feb_2017.pdf
Cadwalladr, C., & Graham-Harrison, E. (2018). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian. https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election
Cave, S. (2020). The problem with intelligence: Its value-laden history and the future of AI. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 29–35. https://doi.org/10.1145/3375627.3375813
Chun, W. H. K. (2021). Discriminating data: Correlation, neighborhoods, and the new politics of recognition. MIT Press.
Cramer, J., & Krueger, A. B. (2016). Disruptive change in the taxi business: The case of Uber. American Economic Review, 106(5), 177–182. https://doi.org/10.1257/aer.p20161002
Dhawan, S. (2020). Online learning: A panacea in the time of COVID-19 crisis. Journal of Educational Technology Systems, 49(1), 5–22. https://doi.org/10.1177/0047239520934018
Dieterle, E., Dede, C., & Walker, M. (2022). The cyclical ethical effects of using artificial intelligence in education. AI & SOCIETY. https://doi.org/10.1007/s00146-022-01497-w
Dieterle, E., Holland, B., & Dede, C. (2021). The cyclical effects of ethical decisions involving Big Data and digital learning platforms. In E. B. Mandinach & E. S. Gummer (Eds.), The ethical use of data in education: Promoting responsible policies and practices (pp. 198–215). Teachers College Press.
Du, M., Yang, F., Zou, N., & Hu, X. (2021). Fairness in deep learning: A computational perspective. IEEE Intelligent Systems, 36(4), 25–34. https://doi.org/10.1109/MIS.2020.3000681
Eubanks, V. (2017). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
Faelens, L., Hoorelbeke, K., Cambier, R., Van Put, J., Van De Putte, E., De Raedt, R., & Koster, E. H. W. (2021). The relationship between Instagram use and indicators of mental health: A systematic review. Computers in Human Behavior Reports, 4, 100121. https://doi.org/10.1016/j.chbr.2021.100121
Finn, E. (2017). What algorithms want: Imagination in the age of computing. The MIT Press. https://doi.org/10.7551/mitpress/9780262035927.001.0001
Gallagher, M., & Breines, M. (2023). Unpacking the hidden curricula in educational automation: A methodology for ethical praxis. Postdigital Science and Education, 5(1), 56–76. https://doi.org/10.1007/s42438-022-00342-z
Gekara, V., & Snell, D. (2020). The growing disruptive impact of work automation: Where should future research focus? In A. Wilkinson & M. Barry (Eds.), The future of work and employment. Edward Elgar Publishing. https://doi.org/10.4337/9781786438256.00019
Georgopoulos, M., Oldfield, J., Nicolaou, M. A., Panagakis, Y., & Pantic, M. (2021). Mitigating demographic bias in facial datasets with style-based multi-attribute transfer. International Journal of Computer Vision, 129(7), 2288–2307. https://doi.org/10.1007/s11263-021-01448-w
Ginsberg, B. (2019). Reflections on analytics: Knowledge and power. In J. Bachner, B. Ginsberg, & K. W. Hill (Eds.), Analytics, policy, and governance (pp. 226–244). Yale University Press. https://doi.org/10.12987/9780300225174-011
Gonen, H., & Goldberg, Y. (2019). Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. Proceedings of the 2019 Conference of the North, 609–614. https://doi.org/10.18653/v1/N19-1061
Hao, K. (2020). The coming war on the hidden algorithms that trap people in poverty. MIT Technology Review. https://www.technologyreview.com/2020/12/04/1013068/algorithms-create-a-poverty-trap-lawyers-fight-back/
Harding, V. (2024). AI needs you: How we can change AI’s future and save our own. Princeton University Press.
Harwell, D. (2020). Mass school closures in the wake of the coronavirus are driving a new wave of student surveillance. Washington Post. https://www.washingtonpost.com/technology/2020/04/01/online-proctoring-college-exams-coronavirus/
Hinds, J., Williams, E. J., & Joinson, A. N. (2020). “It wouldn’t happen to me”: Privacy concerns and perspectives following the Cambridge Analytica scandal. International Journal of Human-Computer Studies, 143, 102498. https://doi.org/10.1016/j.ijhcs.2020.102498
Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching and learning. The Center for Curriculum Redesign.
Keyes, O., & Austin, J. (2022). Feeling fixes: Mess and emotion in algorithmic audits. Big Data & Society, 9(2), 205395172211137. https://doi.org/10.1177/20539517221113772
Khalaf, A. M., Alubied, A. A., Khalaf, A. M., & Rifaey, A. A. (2023). The impact of social media on the mental health of adolescents and young adults: A systematic review. Cureus. https://doi.org/10.7759/cureus.42990
Köchling, A., & Wehner, M. C. (2020). Discriminated by an algorithm: A systematic review of discrimination and fairness by algorithmic decision-making in the context of HR recruitment and HR development. Business Research, 13(3), 795–848. https://doi.org/10.1007/s40685-020-00134-w
Kolkman, D. (2020). The usefulness of algorithmic models in policy making. Government Information Quarterly, 37(3), 101488. https://doi.org/10.1016/j.giq.2020.101488
Li, C., & Lalani, F. (2020). The COVID-19 pandemic has changed education forever. This is how. World Economic Forum. https://www.weforum.org/agenda/2020/04/coronavirus-education-global-covid19-online-digital-learning/
Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2022). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1–35. https://doi.org/10.1145/3457607
Mitchell, S., Potash, E., Barocas, S., D’Amour, A., & Lum, K. (2021). Algorithmic fairness: Choices, assumptions, and definitions. Annual Review of Statistics and Its Application, 8(1), 141–163. https://doi.org/10.1146/annurev-statistics-042720-125902
Muhammed, S. T., & Mathew, S. K. (2022). The disaster of misinformation: A review of research in social media. International Journal of Data Science and Analytics, 13(4), 271–285. https://doi.org/10.1007/s41060-022-00311-6
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
Pagano, T. P., Loureiro, R. B., Lisboa, F. V. N., Peixoto, R. M., Guimarães, G. A. S., Cruz, G. O. R., Araujo, M. M., Santos, L. L., Cruz, M. A. S., Oliveira, E. L. S., Winkler, I., & Nascimento, E. G. S. (2023). Bias and unfairness in machine learning models: A systematic review on datasets, tools, fairness metrics, and identification and mitigation methods. Big Data and Cognitive Computing, 7(1), 15. https://doi.org/10.3390/bdcc7010015
Pathiranage, A., & Karunaratne, T. (2023). Teachers’ agency in technology for education in pre- and post-COVID-19 periods: A systematic literature review. Education Sciences, 13(9), 917. https://doi.org/10.3390/educsci13090917
Santori, D. (2024). The quantified school: Pedagogy, subjectivity, and metrics. Palgrave Macmillan UK. https://doi.org/10.1057/978-1-137-58385-7
Schneier, B. (2016). Data and Goliath: The hidden battles to collect your data and control your world. W.W. Norton & Company.
Signé, L. (2023). Africa’s fourth industrial revolution. Cambridge University Press.
Simon, S. (2013). K-12 student database jazzes tech startups, spooks parents. Reuters. https://www.reuters.com/article/idUSBRE92204W20130304/
Singer, N. (2013, October 6). Deciding who sees students’ data. The New York Times. https://www.nytimes.com/2013/10/06/business/deciding-who-sees-students-data.html
Stone, B. (2013). The everything store: Jeff Bezos and the age of Amazon. Little, Brown.
Sundararajan, A. (2016). The sharing economy: The end of employment and the rise of crowd-based capitalism. The MIT Press.
Turner Lee, N. (2024). Digitally invisible: How the Internet is creating the new underclass. Brookings Institution Press.
Véliz, C. (2020). Privacy is power: Why and how you should take back control of your data. Bantam Press.
Završnik, A. (2021). Algorithmic justice: Algorithms and big data in criminal justice settings. European Journal of Criminology, 18(5), 623–642. https://doi.org/10.1177/1477370819876762
Zeide, E. (2019a). Artificial intelligence in higher education: Applications, promise and perils, and ethical questions. Educause Review, 54(3). https://ssrn.com/abstract=4320049
Zeide, E. (2019b). Robot teaching, pedagogy, and policy. https://ssrn.com/abstract=3441300
Zuboff, S. (2020). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.


Presenters

Chief Learning Scientist
Kovexa Solutions

Session specifications

Topic:

Artificial Intelligence

TLP:

Yes

Audience:

Corporate, Curriculum Designer/Director

Attendee devices:

Devices not needed

Subject area:

Computer Science, Interdisciplinary (STEM/STEAM)

ISTE Standards:

For Education Leaders:
Equity and Citizenship Advocate
  • Ensure access to technology, connectivity, inclusive digital content and learning environments that meet the needs of all students.
For Educators:
Citizen
  • Foster digital literacy by encouraging curiosity, reflection, and the critical evaluation of digital resources.
Analyst
  • Use assessment data to guide progress, personalize learning, and communicate feedback to education stakeholders in support of students reaching their learning goals.

TLPs:

Ensure Equity, Develop Expertise