The European Union's police agency, Europol, has published an urgent report warning that ChatGPT and other artificial intelligence systems can be used for online fraud and other cybercrimes.
Large language models that can serve multiple purposes, such as OpenAI's ChatGPT, can benefit businesses and individual users. However, the European police agency emphasized that they also pose a challenge for law enforcement, as attackers can exploit them for their own ends.
"
The publication is the result of a series of workshops organized by the Europol Innovation Lab to discuss the potential criminal use of ChatGPT, the most prominent example of a large language model, and how such models can be used to support investigative work.
The EU agency points out that ChatGPT's moderation rules can be circumvented through so-called prompt engineering - the practice of crafting the input to an AI model specifically to obtain a desired output.
Because ChatGPT is a new technology, loopholes keep emerging as it is updated: for example, the AI can be asked to pretend to be a fictional character or to provide the answer in the form of code. Other workarounds replace trigger words or change the context later in the interaction. The EU body emphasized that the most effective workarounds for freeing the model from its restrictions are constantly being refined and becoming more sophisticated.
Experts have identified a number of illegal use cases of ChatGPT that persist in OpenAI's most advanced model, GPT-4, where the potential for harmful responses is in some cases even higher.
Because ChatGPT can generate coherent, ready-to-use information, Europol warns that the new technology could significantly speed up an attacker's research into a potential crime area - such as breaking into a home, terrorism, cybercrime or child sexual abuse - even without prior knowledge of it.
"
Phishing, the practice of sending fake emails to get users to click on a malicious link, is a major area of application. In the past, such scams were easy to detect because of grammatical or language errors, whereas AI-generated text allows attackers to impersonate an organization or individual in a highly realistic manner.
Similarly, online scams can be made more believable by using ChatGPT to create fake social media activity.
In addition, the ability of AI to imitate the style and speech of specific people can lead to abuse in the form of propaganda, hate speech and disinformation.
Beyond plain text, ChatGPT can also produce code in a variety of programming languages, enabling attackers with little or no IT knowledge to turn natural language into malware.
" If the hints are broken down into separate steps, bypassing these security measures is as easy as shelling pears,"
ChatGPT is considered general-purpose AI - a model that can be adapted to perform a wide range of tasks.
As the European Parliament finalizes its position on the AI Act, MEPs are discussing whether to introduce stringent requirements for such foundation models, including risk management, reliability and quality control.
Another risk is that these large language models could become available on the dark web without any safeguards and be trained on particularly harmful data. What kind of data these systems will be fed and how they can be controlled are the key questions for the future.
Earlier, OpenAI CEO Sam Altman said that he was "a little bit scared" of the technology his company is developing.