Education researchers are already using artificial intelligence (AI) to predict student outcomes, analyze qualitative data, conduct literature reviews, and write and edit research findings. One reason AI tools are appealing is that they make these tasks faster, (comparatively) cheaper, and more automated. Despite this promise, though, there are valid concerns about accuracy, security, plagiarism, and ethics when AI is used to conduct research. This blog describes four ways researchers are using AI to support education research[1] and offers best practices.
This series from the Innovation to Evidence Project team at Child Trends is for practitioners, policymakers, and researchers who want to transform K-12 school systems to better meet the needs of students, families, and communities. This blog focuses on the promising applications of artificial intelligence (AI) in education research.
What is AI?

While the term AI is used to mean many different things, the most widely accepted working definition considers AI to be technology that performs tasks that would require intelligence if they were done by humans. Such tasks include understanding or producing images, speech, and text, or doing complex pattern recognition.
The following interview features Seth Van Doren, a doctoral student, and June Ahn, a professor, both at the University of California, Irvine, talking about how AI can facilitate the research process.
“Which students will be most in need of tutoring next week?” “Which classroom is showing the most signs of social and emotional stress?” To answer such questions, school district staff and education researchers turn to AI-powered learning analytics to help identify and understand patterns in student behavior. You’ll see terms like “predictive analytics” or “educational data mining” used to describe this work. These researchers use data to identify patterns in student behavior, predict student outcomes, and target supports.
Much of this work relies on longitudinal administrative data or more granular data from student engagement with learning management systems and online course software. Learning analytics empowers you to make more effective use of such “Big Data” sources, which can contain tens of thousands of data points for each student.
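To make this concrete, here is a toy sketch (in Python, with entirely hypothetical field names, weights, and thresholds, not any district’s or vendor’s actual model) of the kind of early-warning flagging that predictive analytics tools automate at scale across those thousands of data points:

```python
# Toy early-warning sketch: flag students whose learning management system
# engagement data suggests they may need extra support. All field names,
# weights, and thresholds are hypothetical illustrations, not a validated model.

students = [
    {"id": "S1", "logins_last_week": 9, "avg_quiz_score": 0.82, "missed_assignments": 0},
    {"id": "S2", "logins_last_week": 1, "avg_quiz_score": 0.45, "missed_assignments": 4},
    {"id": "S3", "logins_last_week": 5, "avg_quiz_score": 0.68, "missed_assignments": 1},
]

def risk_score(s):
    """Combine engagement signals into a single 0-1 risk score (toy weights)."""
    low_logins = max(0, (5 - s["logins_last_week"]) / 5)    # 0 if 5+ logins
    low_scores = max(0.0, 0.7 - s["avg_quiz_score"]) / 0.7  # 0 if 70%+ average
    missing = min(s["missed_assignments"], 5) / 5           # capped at 5
    return 0.4 * low_logins + 0.4 * low_scores + 0.2 * missing

flagged = [s["id"] for s in students if risk_score(s) >= 0.3]
print(flagged)  # → ['S2']
```

Real learning analytics tools replace these hand-picked rules with statistical or machine-learning models trained on longitudinal data, and, as discussed below, their predictions should inform rather than replace educators’ judgment.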
However, recent scholarship also highlights limitations to the use of predictive analytics technologies, including a lack of transparency in algorithmic predictions, ethical concerns with applying risk scores to students, and the potential that learning analytics may supplant instructors’ judgment and expertise.
The Isaac Elementary School District in Phoenix, AZ, is using an AI-based tool to flag students who are at risk, both academically and socio-emotionally. The tool generates predictions about which students might be falling behind academically or showing warning signs of social and emotional difficulties.
The Peninsula School District in Washington developed a statement on “Artificial Intelligence – Principles and Beliefs” that highlights their commitment to the ethical considerations for AI adoption.
“What themes emerge in open-ended questions about our school’s response to COVID?” “How many school district discipline policies define conditions under which restraint can be used?” To answer questions like these, you can turn to AI-assisted analysis of qualitative data. AI support for qualitative education research may include tasks such as identifying themes in open-ended responses and classifying large sets of documents.
Most current research does not use AI to replace human qualitative data analysis, but instead to support human researchers. And while AI may make qualitative research more efficient, there are risks in terms of data privacy, transparency, and research quality. This is of special concern for education researchers who need to consider student privacy from both an ethical and a legal perspective.
Yet another promising contribution AI has made to the field of education research is the development of tools to support literature review and meta-analysis. Generative AI tools can be used to help identify relevant literature, extract information from large databases of papers, and summarize findings.
A variety of publicly available tools facilitate AI-assisted literature review, including Elicit, Consensus, and SciSpace. This is a rapidly changing field, and researchers interested in AI-assisted literature review should conduct a market scan as they initiate a project to ensure they are using the technologies that best meet their needs.
Most generative AI tools that are not specifically marketed for literature review (such as ChatGPT, Gemini [formerly Bard], or Copilot) have known issues with “hallucinating” or fabricating citations, and as such should not be used for this purpose.
You can use AI in your research operations for help with tasks like brainstorming, extracting information from research papers, scientific writing and editing, conducting peer review, and formatting citations.
Researchers use both general purpose chatbots (such as ChatGPT, Gemini, or Copilot) and tools designed specifically for research and writing (such as Grammarly or Jenni). These uses of AI to help researchers plan, think about, and conduct research can improve research efficiency and reduce burden. However, as with any use of AI, you must carefully consider concerns about accuracy, privacy, and plagiarism when choosing to use these tools.
While it is clear that AI holds substantial promise in education research, you must also be aware of potential risks to data privacy, accuracy, and transparency. To make effective use of AI tools, we suggest three important best practices:
[1] Note that there are also many applications of AI to pedagogy, but we focus here on AI as a research tool.
Kelley, C., Holquist, S., Kelley, S. & Aceves, L. (2024). Promising applications of AI in education research. Child Trends. DOI: 10.56417/3848u69g