AI-powered “nudifying” apps which can create non-consensual explicit images of people, including children, should be banned, an online safety charity has said.
Internet Matters has called on the Government to strengthen the Online Safety Act to ban tools which can create deepfake nudes after a study from the group estimated that as many as half a million children have encountered such images online.
It said its research had found a growing fear among young people over the issue, with 55% of teenagers saying it would be worse to have a deepfake nude of them created and shared than a real image.
Internet Matters said strengthening the new online safety laws and introducing legislation to ban nudifying tools are necessary because current law is not keeping pace: the AI models used to generate sexual images of children are not themselves illegal in the UK, even though possession of such an image is a criminal offence.
Earlier this month, online safety watchdog the Internet Watch Foundation (IWF) warned that AI-generated child sexual abuse content is now being increasingly found on the open, public web, rather than hidden away on dark web forums.
Internet Matters said it estimates that 99% of deepfake nudes feature women and girls, and warned the content is being used to facilitate child-on-child sexual abuse, adult perpetrated sexual abuse, and sextortion.
Internet Matters co-chief executive Carolyn Bunting said: “AI has made it possible to produce highly realistic deepfakes of children with the click of a few buttons.
“Nude deepfakes are a profound invasion of bodily autonomy and dignity, and their impact can be life-shattering.
“With nudifying tools largely focused on females, they are having a disproportionate impact on girls.
“Children have told us about the fear they have that this could happen to them without any knowledge and by people they don’t know. They see deepfake image abuse as a potentially greater violation because it is beyond their control.
“Deepfake image abuse can happen to anybody, at any time. Parents should not be left alone to deal with this concerning issue.
“It is time for Government and industry to take action to prevent it by cracking down on the companies that produce and promote these tools that are used to abuse children.”
The safety organisation’s study involved surveying 2,000 parents of children aged three to 17, and 1,000 children aged nine to 17, in the UK.
It found that teenage boys are twice as likely as girls to report an experience with a nude deepfake. However, boys are more likely to be the creators of deepfake nudes, while girls are more likely to be the victims.
The study also indicated support among both children and parents for more education around deepfakes, with 92% of teenagers and 88% of parents saying they believe children should be taught about the risks of the technology in school.
Minister for safeguarding and violence against women and girls Jess Phillips said: “This Government welcomes the work of Internet Matters, which has provided an important insight into how emerging technologies are being misused.
“The misuse of AI technologies to create child sexual abuse material is an increasingly concerning trend.
“Online deepfake abuse disproportionately impacts women and girls, which is why we will work with Internet Matters and other partners to address this as part of our mission to halve violence against women and girls over the next decade.
“Technology companies, including those developing nudifying apps, have a responsibility to ensure their products cannot be misused to create child sexual abuse content or non-consensual deepfake content.”