Addressing AI Bias in Higher Education: The ‘Human Systems’ AI Solution

At the forefront of discussions on the intersection of artificial intelligence and higher education, Paul LeBlanc, the outgoing president of Southern New Hampshire University (SNHU), along with his colleague George Siemens, has underscored the importance of addressing data bias in AI technologies.

Their insights were shared during their development of "Human Systems," an AI solution aimed at enhancing various facets of university life. The project, scheduled for a summer 2024 launch, seeks to harness AI's benefits while mitigating the issues commonly associated with it.

During the IFE Conference hosted by Tec de Monterrey in Mexico, the topic of algorithmic bias, particularly its potential amplified impact on the Global South, was highlighted by Mexican participants. The concern raised centres around the inherent biases within data sets and their implications for AI applications. 

Siemens posed a critical question to attendees: "All data is biased. So the question is, how do you avoid harmful bias?" He went on to explain the inherent challenges of selecting and prioritising data within AI models, emphasising that the source data underpinning predictive language models is itself biased.

Michael Fung, Executive Director of Tecnologico de Monterrey’s Institute for the Future of Education, echoed these sentiments, noting that the rapid integration of AI in higher education brings the issue of bias between the Global North and Global South into sharp relief. Fung argued that the complete elimination of bias is unrealistic, advocating instead for transparency regarding the assumptions and limitations accompanying data, stating:

“I don’t think you remove it completely. Just like any kind of information out there, it’s always coloured by some kind of perspective of bias. It’s about being clear – what are the assumptions and limitations that come with the data?”

LeBlanc also commented on the general inadequacies within higher education institutions in handling data effectively, proposing the formation of a global data consortium. 

This initiative, supported initially by funding from the Bill and Melinda Gates Foundation, aims to construct more accurate AI models for educational purposes by tackling algorithmic bias and preventing cultural hegemony. Both LeBlanc and Fung stress the importance of clarity concerning data sources to understand and address biases within AI outputs.

Concluding his remarks, LeBlanc emphasised, "What we really have to do is be very clear about the sources of data that’s used to inform whatever AI outputs are in place – so when we see these biases, we understand why they’re biases," advocating for increased research to refine AI practices in education.

