Assistant Professor Angela Stewart Gives Testimony on Civil Rights and the Use of AI in Education

May 2, 2024

On March 25, Assistant Professor Angela Stewart testified as a panelist when the Pennsylvania Advisory Committee to the U.S. Commission on Civil Rights discussed the use of artificial intelligence (AI) in the classroom. As the use of AI becomes more prevalent in education, measures must be taken to ensure AI technologies are safe and ethical. The Committee is conducting research and hearing from AI researchers and professionals on AI algorithms and their effects in the classroom, particularly how they can create barriers and reinforce biases against marginalized groups.

Stewart, one of four panelists, presented a 10-minute testimony, followed by a question-and-answer session with the Committee.

“I prepared a testimony on the topic of data and how it is used in AI systems, as well as how it can reinforce systemic oppressions or create new ones,” said Stewart. “In this particular hearing, the topics spanned from law surveillance to funding mechanisms for AI educational technologies to the inner workings of AI.”

Many contemporary AI techniques make decisions based on unexplained processes, leading to concerns about bias, discrimination, and prejudice against users. Users are particularly vulnerable if they are marginalized based on race, gender, sexuality, or class. Stewart emphasized that transparency is key to developing ethical AI algorithms and mitigating these harms.

“AI is often called a black box, meaning that we don’t know the inner workings of these systems or how they make predictions. However, we do know that AI is really good at finding patterns in data. Yet, historical data contains historical oppression and bias, and can be generally harmful to those most marginalized,” stated Stewart. “Since its inception, the United States education system has been oppressive to learners from marginalized identities. If we don't want to continue to perpetuate these inequities, then we need to appropriately design AI that is anti-oppressive.”

Along with developing transparent AI algorithms, it is also important for people to be mindful when implementing AI in educational settings. If the oppression that marginalized groups face is kept in mind, AI can be employed in a way that is constructive and helpful rather than harmful.

“Technology creators, policymakers, and school administrators should consider larger systems of oppression, such as racism and sexism, when creating and implementing AI in classrooms. This looks like educating oneself about the socio-historical background of technology and AI in schools, as well as how that intersects with race, gender, and other identity markers. This also means creating systems with systemically marginalized populations, rather than for them,” declared Stewart. “AI systems should be created alongside other social interventions. For example, not just providing schools with access to the newest adaptive learning system or AI-backed analytics, but also accompanying this with funding approaches that give money to support underfunded and under-supported schools.”

Learn more about the public briefings here.

--Alyssa Morales