
Creating an AI Risk Framework for Education to Protect Students, Families, and Teachers

Blog · Healthy Schools · November 7, 2024

Artificial intelligence (AI) has the potential to fundamentally change education—from preschool through university. Indeed, many school districts (such as Palo Alto Unified School District and New York City Public Schools) have recently reversed previously cautious approaches and are now embracing AI. Yet, with broader use comes increased risk and a greater need to mitigate the potential dangers AI could pose in education if not handled well.


The following interview features Tyler Sherman from Anaheim Union High School District talking about the benefits and risks of AI use in schools and the classroom.

Video: The Use of AI in Education Research

Today, AI is ubiquitous and even built into the office software we use. In education, it supports tools such as intelligent tutoring systems, learning management systems, and dashboards. This technological influx accelerated with the public release of generative AI tools such as ChatGPT, Gemini, and Copilot. AI in education is used for a wide range of applications, including:

  • Curriculum development
  • Grading
  • Lesson planning and differentiation
  • Interactive tutoring
  • Research

Substantial literature exists on AI risk, but most existing research and policy focuses on preparing students for an AI-enabled workforce rather than on the ethical implications of AI within education. To fill this gap, we adapted the National Institute of Standards and Technology's AI Risk Management Framework and the AI principles identified by Dr. Anna Jobin and her team in their seminal work on the landscape of AI ethics to develop an education-specific AI risk framework. This framework can help educators and decision makers establish policies for how AI is used in education systems.

The framework outlines eight key areas of potential risk in using AI, highlighting specific applications for using AI in schools. In particular, the framework spotlights the need for transparency in the use of AI, as well as critical attention to ensure that AI is not used in ways that could risk student or family privacy, produce inaccurate conclusions, or further existing inequities.


Table 1: AI Risk Framework for Education

In applying this framework, educators must carefully consider the specific context of their schools. The table below provides examples of how each risk element might relate to schools' use of AI.


Table 2: Examples of AI Risk Framework elements in education contexts
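
To make the framework concrete, the short Python sketch below shows one way a district might record risk elements alongside review questions for staff evaluating an AI tool. Only the four elements named in this post (transparency, privacy, accuracy, and equity) are shown; Table 1 lists the full set of eight, and the review questions here are illustrative assumptions, not part of the framework itself.

```python
# Illustrative excerpt of a risk-review worksheet a district might build
# around the framework. Element names beyond those mentioned in this post,
# and all review questions, are examples only.

FRAMEWORK_EXCERPT = {
    "transparency": "Are students and families told when and how AI is used?",
    "privacy": "Does the tool collect or share student or family data beyond what is needed?",
    "accuracy": "How are AI-generated products checked for inaccurate conclusions?",
    "equity": "Could the tool further existing inequities for any student group?",
}

# Print a simple review worksheet for staff considering an AI tool.
for element, question in FRAMEWORK_EXCERPT.items():
    print(f"{element}: {question}")
```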

With these risks in mind, practitioners or decision makers should consider the following best practices that can inform appropriate, effective, and risk-aware choices when implementing AI systems.

  • Clearly articulate why AI is being used for a particular goal and establish key milestones to ensure that it is achieving its intended outcomes.
  • Use a risk management framework to help identify possible sources of AI risk under each element of risk.
  • Develop an evaluation plan based on the risk management framework that considers how to assess the accuracy of AI-generated products, how to explain or understand AI-based decisions, and how to quantify the impact of AI (one way to track such a plan is sketched after this list).
  • Engage the community (including students, teachers, and families) at each stage of AI implementation, from initial planning to launch.
  • Consider consulting outside experts to ensure that school districts fully understand AI tools or products that they plan to implement.
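
The following is a minimal sketch of how a district might track these best practices as a lightweight evaluation plan, assuming a simple in-house record rather than any particular product. All field names and the example entries for a hypothetical tutoring pilot are illustrative assumptions, not requirements of the framework.

```python
# A lightweight, illustrative record for tracking the best practices above.
from dataclasses import dataclass, field

@dataclass
class AIEvaluationPlan:
    tool_name: str
    stated_goal: str                                           # why AI is being used
    milestones: list[str] = field(default_factory=list)        # checkpoints for intended outcomes
    risks_by_element: dict[str, list[str]] = field(default_factory=dict)  # risks under each framework element
    accuracy_checks: list[str] = field(default_factory=list)   # how AI-generated products are assessed
    explanation_approach: str = ""                             # how AI-based decisions are explained
    impact_metrics: list[str] = field(default_factory=list)    # how impact is quantified
    engagement_steps: list[str] = field(default_factory=list)  # students, teachers, and families at each stage

# Hypothetical example for an AI tutoring pilot.
plan = AIEvaluationPlan(
    tool_name="Interactive tutoring pilot",
    stated_goal="Provide additional math practice with teacher oversight",
    milestones=["Staff training complete", "Mid-year accuracy review", "End-of-year impact report"],
    risks_by_element={
        "privacy": ["Student chat logs retained by vendor"],
        "equity": ["Unequal device access at home"],
    },
    accuracy_checks=["Teachers spot-check a sample of AI-generated hints each month"],
    explanation_approach="Vendor documentation reviewed with an outside expert",
    impact_metrics=["Change in unit test scores versus the prior year"],
    engagement_steps=["Family information night before launch", "Student feedback survey each quarter"],
)
```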

Suggested citation

Kelley, C., Holquist, S., & Aceves, L. (2024). Creating an AI risk framework for education to protect students, families, and teachers. Child Trends. DOI: 10.56417/4180t2183v
