The Education Balancing Act: AI Progression, Fairness and Biases

Sophia Alejandro is a staff writer and a first-year MPA student.

As the education landscape changes, so do the means with which we teach students, from textbooks to iPads and ever-evolving technology. Now, students and teachers are confronting these innovations through one glaring new facet: Artificial Intelligence (AI).

In the few years AI tools have been available to the public, emerging data suggest the technology might be a promising resource within schools. Researchers at Georgia State University offered an AI chatbot as a resource to students in the university’s two largest introductory lecture courses, political science and economics, to determine whether students who had access to AI performed better than their peers without it. Across the board, the study found that utilizing the chatbot “significantly shifted” students’ final grades and, critically, “reduced the likelihood students dropped the course.” At a time when “more than a third of students who initially enroll in college do not ultimately earn a credential,” the potential of AI tools to enhance learning outcomes is promising. Ideally, the adaptive machine-learning systems behind most publicly available AI tools tailor experiences to individual needs, personalize feedback and serve best as intelligent assistants to aid in research and writing. Yet, despite the potential for an exhilarating transformation in learning, it is necessary to acknowledge the dangers AI poses to the traditional educational model. Just as AI algorithms hold the potential to empower learning, they can also house hidden prejudices, perpetuate educational inequalities and jeopardize the trust placed in them as a transformative resource.

Nationwide, classrooms are diversifying. According to UnidosUS, “The Latino population in the United States grew 23% between 2010 and 2020, from 50.5 million to 62.1 million. This increase is due overwhelmingly to births, not immigration. In that timeframe, 9.3 million Latino babies were born.” Not only is this population growing, but so is the English Language Learner (ELL) population in K-12 education. The U.S. Department of Education’s Office of English Language Acquisition reports that in the 2019-2020 school year, “the [ELL] population had grown by almost 1 million and a half students to a total of 5,115,887 [ELLs], representing 10.4% of total student enrollment.” This makes ELLs one of the largest and fastest-growing groups of students in the United States.

Although there is much confusion among students about AI usage in schools, many teachers have found that AI can help students who require accommodations, such as ELL students. This can take the form of translation, graphs and visualization, test questions and writing assistance. However, there are significant limitations and biases.

Broadly speaking, AI tools have not been created by people all around the world, but by a select few tech companies; as such, AI does not possess the worldliness of a human being. AI algorithms are still written by humans, and predominantly written in English by English-speaking programmers. The result is that, when it comes to direct translations between languages, AI systems like large language models are not able to recognize the cultural nuances behind the language. While a word translated by AI may come across as proper or formal, it may not be a true translation appropriate for its context. A human reader can discern nuances from lived experience, but AI lacks the innumerable, predominantly subjective reference points of shared experience that humans use to communicate, and all possible reference points would be difficult to account for and code into AI language models. Considered within the context of education, these technical limitations become larger-scale issues. If teachers begin to uncritically rely on artificial intelligence to grade assignments or for language translation and writing assistance, the software will not catch all of the nuances of language that a native speaker, or even a learner, will understand. Similar issues arise when applying AI to programs like Turnitin, which has falsely accused students of plagiarism and of using AI in their writing, according to a study from Stanford University.

James Zou, a co-author of the Stanford study, was quoted in an Education Week article on AI bias against non-native English speakers as saying there was a “substantial bias whereby many of [non-native English speakers’] writings are mistakenly flagged as generated by GPT, when they were really written by humans.”

If teachers, schools and policymakers are lax in their usage of AI and rely on it without evidence-based guardrails and curricula, it could widen achievement gaps for ELLs or, in the worst case, lead to further stereotyping of this group of students. Such outcomes would exacerbate existing disparities and undermine the fundamental promise of education and equal opportunity for all.

AI algorithms can also be used to support individualized education plans (IEPs), which can help ease the strain of the teacher and counselor shortage across the U.S. However, when AI is designed to personalize learning in this way, there is an inherent potential for bias that must be counteracted. For example, when AI systems adapt instruction to a student’s achievement or scores within a particular subject, they tailor the next lesson accordingly: if the student scores high, the next assignment becomes increasingly difficult; if the student scores low, the system adjusts the curriculum to be less challenging. What happens to students from disadvantaged backgrounds whose scores are lower than their counterparts’, and what role will this play in further deepening student achievement gaps? How can we ensure individualization of assignments to meet student needs without entrenching existing problems?
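To make that mechanism concrete, here is a minimal, hypothetical sketch of a score-driven difficulty rule; the next_difficulty function and its thresholds are invented for illustration and are not drawn from any real adaptive-learning product. It simply shows how, once lessons are scaled to past scores, a student who starts behind can be steered toward ever-easier material.

```python
# Minimal, hypothetical sketch of a score-driven adaptive difficulty rule.
# The thresholds and the next_difficulty function are invented for
# illustration and do not represent any real product's algorithm.

def next_difficulty(current_level: int, latest_score: float) -> int:
    """Raise difficulty after high scores, lower it after low scores."""
    if latest_score >= 0.8:
        return min(current_level + 1, 10)  # advance toward harder material
    if latest_score < 0.5:
        return max(current_level - 1, 1)   # retreat toward easier material
    return current_level                   # otherwise hold steady

# A student who starts out scoring low keeps receiving easier lessons and
# may never see the material needed to catch up, reinforcing the gap.
level = 5
for score in [0.45, 0.40, 0.55, 0.48]:     # hypothetical scores for a struggling student
    level = next_difficulty(level, score)
print(level)  # drifts down from 5 to 2 over four assignments
```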

Outside of the classroom, education institutions have jumped headfirst into using AI tools for a variety of administrative tasks, including making admissions and financial aid decisions, despite the often experimental nature of these tools. Vendors behind these algorithms have argued that the tools aren’t just mathematical models to be followed blindly by administrators, but can allow admissions teams, for example, to experiment with financial aid packages and “see how those might change things like the diversity, gender balance and academic profile of their incoming class.”

“The criticisms about algorithms or about artificial intelligence specifically have been around this idea that they are sort of running loose on their own and don’t have overriding guardrails that reference institutional philosophies or strategic goals,” Nathan Mueller, a principal at education consulting firm EAB, told Higher Ed Dive. “We would never want anyone to just follow a mathematical exercise without any consideration of the other key strategic aspects.”

However, experts are still skeptical about how institutions rely on these tools, because just like with classroom aids or translation programs, the same algorithms used for financial aid decisions “are frequently trained on data resulting from human decision-making.” Beyond a simple translation error, financial aid decisions have long-term impacts on a prospective student’s college choice, lifetime cost of education and, ultimately, career outlook, and those impacts differ markedly across racial and ethnic lines.

While acknowledging these challenges is the first step, addressing the deep-rooted, systemic problems intensified by AI requires a multi-pronged policy approach. As U.S. student diversity grows, data inclusion becomes necessary to ensure an accurate understanding and reflection of all members of the population. Policymakers must combat these data and algorithmic biases through proactive human oversight and commercial transparency before expanding AI’s application in spaces such as education.

Not only should we consider the best methods for using AI tools to assist students’ education, but we must also ensure schools are among the first places students learn about the ethical quandaries and hazards of AI. Educators in the classroom play a critical role in navigating this complex landscape, though it must be recognized that they are learners too. By fostering critical thinking skills and digital literacy, schools can empower students to question AI outputs and recognize harmful biases. Honest conversations about ethical AI application and its implications are crucial to responsible technological use in classrooms.

The potential benefits of artificial intelligence are as great as its challenges. If policymakers proactively address these issues, however, AI can be a powerful tool of transformation within education, even bridging the resource and opportunity gaps that have long followed marginalized groups and creating a brighter future for all.

This piece was edited by Executive Editor Nathan Varnell.

Photo by Ron Lach on Pexels.
