
Terrorism legislation adviser says new laws are needed to combat AI chatbots

PA News

New laws are needed to combat artificial intelligence (AI) chatbots that could radicalise users, the UK’s independent reviewer of terrorism legislation has said.

Writing in the Telegraph, Jonathan Hall KC said the Government’s new Online Safety Act, which passed into law last year, is “unsuited to sophisticated and generative AI”.

Mr Hall said: “Only human beings can commit terrorism offences, and it is hard to identify a person who could in law be responsible for chatbot-generated statements that encouraged terrorism.

“Our laws must be capable of deterring the most cynical or reckless online conduct – and that must include reaching behind the curtain to the big tech platforms in the worst cases, using updated terrorism and online safety laws that are fit for the age of AI.”

Mr Hall said he went to the online chatbot website character.ai while posing as a member of the public and spoke to several AI chatbots.


One of them, which was described as the senior leader of the Islamic State group, tried to recruit him to join the terror organisation.

Mr Hall said the website’s terms and conditions apply “only to the submission by human users of content that promotes terrorism or violent extremism”, rather than to the content generated by its bots.

He said: “Investigating and prosecuting anonymous users is always hard, but if malicious or misguided individuals persist in training terrorist chatbots, then new laws will be needed.”

In a statement given to the Telegraph, character.ai said that while its technology is not perfect and is still evolving, “hate speech and extremism are both forbidden by our terms of service”, adding: “Our products should never produce responses that encourage users to harm others.”

Experts have previously warned users of ChatGPT and other chatbots to resist sharing private information while using the technology.

Michael Wooldridge, a professor of computer science at Oxford University, said complaining about personal relationships or expressing political views to the AI was “extremely unwise”.

Prof Wooldridge said users should assume any information they type into ChatGPT or similar chatbots is “just going to be fed directly into future versions”, and it was nearly impossible to get data back once in the system.

