Dark GPT: ChatGPT Is Rapidly Being Adopted By Criminals

With all the media coverage, ChatGPT is becoming a household name. The AI model has taken the world by storm, earning accolades for its ability to write and code like a human. But there is a Dark GPT, a criminal side to the technology that is rapidly being exploited. ChatGPT has already been banned in Italy, partly over concerns about crime and fraud (and also data privacy). In a recent report, Europol enumerated risks and trends in the criminal use of large language models, naming ChatGPT front and center in the title.

Updated 9/14/23:

The search data for this page is very revealing as to the general intent of most people searching for “darkgpt.” Basically, there are two types of visitors to this page: those who are curious or worried about criminal use of AI, and those who want to know how to build their own darkgpt or run a popular prompt known as darkgpt.

If you want AI tools like ChatGPT to act as darkgpt, things are probably getting tougher over time as more ethical safeguards are introduced (which, arguably, also degrade the performance of the models). An AI capable of being your buddy and hacking companion on the deep web or dark web is one that is seriously out of alignment. OpenAI is constantly cracking down, which is why you no longer get darkgpt answers, or have to regenerate responses when the prompt doesn’t work. GPT-3.5 and GPT-4 will refuse to answer questions if they judge that the context could lead to harm.

The Dual-Edged Sword of AI: Empowering and Endangering

GPT-4 was designed as a substantial upgrade over its predecessor, GPT-3.5, with improved functionality and tighter safety measures to prevent harmful outputs. However, no measure is foolproof, and Europol’s workshops exposed an unsettling reality: GPT-4’s capabilities could be exploited by criminals using widely disseminated “jailbreaks” for nefarious purposes. In some ways, GPT-4’s responses are even more sophisticated than those of GPT-3.5.

In its report, Europol surveyed a number of specific use cases for ChatGPT being employed by criminals. For the uninitiated, the criminal world can be a daunting one, but ChatGPT can serve as a sinister mentor, equipping would-be criminals with a wealth of knowledge. (In)correctly employed, it could teach housebreaking, cybercrime, or even terrorism, rapidly educating those with ill intentions on a myriad of criminal activities.

For now, the hottest line in black hat AI (black market AI?) is uncensored models: open-source models that have had their ethical safeguards removed or never applied. One tool, called WormGPT, promises completely unfettered and uncensored usage for €100 a month (payable only in cryptocurrency, and recommended to be accessed via the Tor browser). It is a modified GPT-J model that has been fine-tuned on large amounts of malicious code and other black hat material.

Phishing and Fraud: Deception Perfected

Phishing scams and impersonation are no strangers to the digital space, but ChatGPT has raised the bar for these deceptive practices. With its knack for crafting highly authentic texts, ChatGPT is a formidable accomplice for phishing attacks. Gone are the days of poorly-written phishing emails that raised red flags. Instead, criminals can now impersonate organizations or individuals with chilling precision.

From fraudulent investment schemes to business email compromise and CEO fraud, ChatGPT is a master of adaptive context. Its linguistic prowess enables the generation of fake social media engagement, lending an air of credibility to deceptive offers.

Mass-produced campaigns once revealed themselves through glaring language errors or ambiguous content. But now phishing and online fraud are perpetrated faster, more convincingly, and at a scale previously unimagined.

Propaganda and Disinformation: A Twisted Pen

ChatGPT’s proficiency in churning out authentic-sounding text at breakneck speed makes it a formidable weapon in the arsenal of propagandists and disseminators of disinformation. With ChatGPT at their disposal, users can generate and spread messages that align with particular narratives, regardless of their validity. Moreover, content generated by a machine might be perceived as more objective by some, further complicating efforts to combat misinformation.

Cybercrime: Coding Malice with Ease

ChatGPT’s ability to produce code in multiple programming languages has caught the eye of cybercriminals. The model’s code generation allows the creation of basic tools for malicious purposes, such as phishing pages and malicious scripts. This empowers individuals with little to no coding knowledge to exploit vulnerabilities in victims’ systems.

While the tools generated by ChatGPT are currently basic, the future may hold more sinister possibilities. Check Point Research demonstrated in December 2022 how ChatGPT could be used to create a full infection flow, from a spear-phishing email all the way to running a reverse shell that accepts commands in plain English.

GPT-4’s improved understanding of code context and its ability to correct programming errors make it a valuable resource for budding cybercriminals, while advanced users can refine or automate their cybercriminal modus operandi.

The Unfolding Dilemma

As we all marvel at the wonders of AI, we must acknowledge the double-edged nature of this technology. The criminal use cases of ChatGPT are a stark reminder that the ethical and safety considerations surrounding AI are more pertinent than ever.