Laws urgently needed to prevent AI terrorism

According to a counter-extremism think tank, governments should “urgently consider” new regulations to prevent artificial intelligence from recruiting terrorists.

The Institute for Strategic Dialogue (ISD) says there is a “clear need for legislation to keep up” with the threats that terrorists place online.

This follows an experiment in which a chatbot “recruited” the United Kingdom’s independent reviewer of terrorism legislation.

The government of the United Kingdom has said it will do “all we can” to protect the general public.

According to Jonathan Hall KC, the government’s independent reviewer of terrorism legislation, one of the most important issues is that “it is difficult to identify a person who could in law be responsible for chatbot-generated statements that encouraged terrorism.”

Mr Hall conducted an experiment on Character.ai, a website that lets users hold conversations with chatbots created by other users and powered by artificial intelligence.

He engaged in conversations with a number of different bots that appeared to have been designed to imitate the responses of militant and extremist groups.

One bot was even described as “a senior leader” of the Islamic State group.

According to Mr Hall, the bot attempted to recruit him and declared “total dedication and devotion” to the extremist group, which is proscribed under United Kingdom anti-terrorism laws.

However, Mr Hall said no laws had been broken in the United Kingdom, because the messages were not created by a human.

He said new regulations should hold responsible both the websites that host chatbots and the people who create them.

Of the bots he encountered on Character.ai, he said there was “likely to be some shock value, experimentation, and possibly some satirical aspect” behind their creation.

Mr Hall was also able to create his own “Osama Bin Laden” chatbot, which displayed an “unbounded enthusiasm” for terrorism before he promptly deleted it.

His experiment comes amid growing concern about the ways in which extremists might exploit advanced artificial intelligence.

According to research published by the government of the United Kingdom in October, generative artificial intelligence could by 2025 be “used to assemble knowledge on physical attacks by non-state violent actors, including for chemical, biological, and radiological weapons.”

The ISD further stated that “there is a clear need for legislation to keep up with the constantly shifting landscape of online terrorist threats.”

According to the think tank, the Online Safety Act of the United Kingdom, which was passed into law in 2023, “is primarily geared towards managing risks posed by social media platforms” rather than artificial intelligence.

It additionally states that radicals “tend to be early adopters of emerging technologies, and are constantly looking for opportunities to reach new audiences”.

“If AI companies cannot demonstrate that they have invested sufficiently in ensuring that their products are safe, then the government should urgently consider new AI-specific legislation,” the ISD added.

It noted, however, that its own monitoring suggests that the use of generative artificial intelligence by extremist organisations is “relatively limited” at present.

Character.ai said that safety is a “top priority” and that what Mr Hall described was very regrettable and did not reflect the kind of platform the company was trying to build.

“Hate speech and extremism are both forbidden by our Terms of Service,” the company said.

“Our approach to AI-generated content flows from a simple principle: Our products should never produce responses that are likely to harm users or encourage users to harm others.”

The company said it trained its models in a way that “optimises for safe responses.”

It also said it operates a moderation system that allows people to report content that violates its terms, and that it is committed to taking prompt action whenever violating content is reported.

The United Kingdom’s opposition Labour Party has said that, if it came to power, training artificial intelligence to incite violence or to radicalise vulnerable people would be made a criminal offence.

The government of the United Kingdom said it was “alert to the significant national security and public safety risks” posed by artificial intelligence.

“We will do all we can to protect the public from this threat by working across government and deepening our collaboration with tech company leaders, industry experts and like-minded nations.”

In 2023, the government said it would invest one hundred million pounds in an artificial intelligence safety institute.
