Charity Issues Warning Over AI-Generated Content

Introduction

Recent reports indicate a surge in individuals viewing or creating AI-generated child abuse imagery, posing a complex moral dilemma and challenging existing legal frameworks. Moreover, 70% of adults remain uninformed about the role AI plays in generating such abusive content. What is being done to raise awareness of these issues?

Rising Number of Offences

A trend highlighted by the Lucy Faithfull Foundation (LFF) shows an increasing number of individuals contacting its helpline and expressing confusion about the ethics of AI-generated child abuse imagery. Callers, including an arrested IT worker, reveal a mix of fascination with the technology and misunderstanding of its legal status. The charity emphasises that creating or viewing such images, even where the children depicted are not real, remains illegal.

Donald Findlater of the LFF has noted that deviant sexual fantasies can be a strong predictor of whether an individual convicted of a sexual crime will reoffend. The blurred boundary between real and AI-generated images, alongside the misconception that no harm is done because no real children are involved, contributes to the growing confusion among offenders and to the intense moral debate on the matter.

The Legislative Framework

Under UK law, the creation, viewing, or sharing of sexual images involving individuals under 18 is illegal. This includes content generated through AI.

Materials relating to child sexual abuse, whether depicting real or simulated children, fall under existing legislation, including the Protection of Children Act 1978 and the Coroners and Justice Act 2009. The Protection of Children Act 1978 criminalises the taking, distribution, and possession of "indecent photographs or pseudo-photographs" of a child, while the Coroners and Justice Act 2009 extends the framework to non-photographic images of a child, such as cartoons or drawings. Given the gravity of these offences, some argue that the current legal framework is a sufficient deterrent and tool for prosecution.

However, a study by the Lucy Faithfull Foundation shows that a significant number of adults remain unaware of the role AI plays in producing images of sexual abuse, underscoring the urgent need for comprehensive public education about this growing issue. The Internet Watch Foundation (IWF) has further highlighted the severity of the problem, identifying nearly 3,000 AI-generated abuse images that contravene UK law and emphasising the need for robust measures to counter such content.

Conclusion

The surge in AI-generated child sexual abuse material presents an ethical, legal and social challenge. With many adults still uninformed about the role AI plays in generating abusive imagery, and with such images increasingly indistinguishable from real ones, the work of organisations such as the Lucy Faithfull Foundation and the Internet Watch Foundation is crucial in raising awareness of the issue.

By Adham Shaker