AI deepfake explicit photo scandal sparks resignations and legal action at Pennsylvania school
A private school in Pennsylvania has faced significant backlash after a student used AI to create explicit fake images of nearly 50 female classmates.
The scandal has led to the resignation of Lancaster Country Day School’s head, Matt Micciche, and the school board’s president, Angela Ang-Alhadeff, following mounting pressure from parents and legal action.
The issue first came to light in November 2023 when an anonymous report about explicit AI-generated images was submitted through the “Safe2Say Something” portal, a platform operated by the Pennsylvania Attorney General’s Office.
However, parents allege that no action was taken until mid-2024, when police were alerted. The accused student was arrested in August, and their phone was seized for investigation.
Parents initiated legal proceedings last week, accusing the school of failing to comply with mandatory reporting laws; the lawsuit named the school, its head, and the board president. The resignations were announced late Friday, but parents said through their lawyer, Matthew Faranda-Diedrich, that they would still pursue legal action.
Classes were canceled on Monday as the school addressed what it described as a “difficult time” for the community. Students had previously staged a walkout, calling for leadership changes and improved safety measures.
In a statement before resigning, Micciche acknowledged the students’ frustration. “Our students rightfully exercised their voice today to express their concern and frustration with the school's response to the situation involving deepfake nudes. Many feel strongly that we haven't been as open and communicative as we could, adding to their pain,” he said.
Broader implications
The incident highlights the growing threat of AI-generated deepfakes and the challenges they pose to schools, parents, and communities.
In a LinkedIn post, Amanda Bickerstaff, an educator and founder of AI for Education, addressed the issue:
“Unfortunately, as we've commented before, the emergence of easy-to-use and increasingly sophisticated tools and platforms with little regulation and oversight will only make this a more common occurrence.”
The case also underscores gaps in current US laws addressing AI-generated harmful content. While there have been proposals to criminalize the creation and distribution of explicit AI images, these efforts have largely stalled at the federal level.
Globally, some countries have moved faster. South Korea, for example, has implemented stricter measures to combat AI-generated explicit content, including harsher penalties for producing and sharing nonconsensual deepfakes and increased monitoring of online platforms.
The UK is also grappling with how to regulate AI and deepfakes. Recent proposals include a legal framework governing the use of AI across industries, with a particular focus on transparency and accountability.
The UK Online Safety Act seeks to hold online platforms accountable for hosting harmful content, including nonconsensual explicit deepfakes. It is already illegal in the UK to create, possess, or share sexually explicit deepfakes of anyone under 18.
The UK government’s AI regulation white paper outlines plans to address ethical concerns surrounding AI use, though critics argue that enforcement mechanisms remain vague.
This case highlights the urgent need for schools, lawmakers, and communities to address the risks associated with AI-generated content. As Bickerstaff noted, “Deepfakes pose a serious challenge for educators, parents, and communities to grapple with.”