Shadowy innovation: how cybercriminals experiment with AI on the dark web
The growing popularity of technologies such as large language models (LLMs) makes mundane tasks easier and information more accessible, but it also creates new risks for information security. It is not only software developers and AI enthusiasts who are actively discussing ways to utilize language models; attackers are too.
Disclaimer
This research aims to shed light on the activities of the dark web community associated with the nefarious use of artificial intelligence (AI) tools. The examples provided in the text are not meant to suggest that chatbots and other tools are inherently dangerous; rather, they illustrate how cybercriminals can exploit them for malicious purposes. Staying informed about trends and discussions on the dark web equips companies to establish more effective defenses against the ever-evolving threat landscape.
Statistics
The chart provides statistics on the number of posts on forums and in Telegram channels related to using ChatGPT for illegal purposes or to utilities that rely on AI technologies. As the chart shows, the number of such posts peaked in April and has been trending downward since. At the same time, quantity is turning into quality: the models under discussion are becoming ever more complex, and their integration ever more efficient.
Observations on the use of GPT on cybercrime forums
Between January and December 2023, we observed discussions in which members of cybercriminal forums shared
ways to use ChatGPT for illegal activities.
For example, one post suggested using GPT to
generate polymorphic malware that can modify its code while keeping its basic functionality intact. Such
programs are much more difficult to detect and analyze than regular malware. The author of the post
suggested using the OpenAI API to generate code with specific functionality. This means that by
accessing a legitimate domain (openai.com) from an infected device, an attacker can generate and run
malicious code, bypassing several standard security checks. We have not yet detected any malware operating
in this manner, but it may emerge in the future.
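To illustrate the mechanism described above, here is a minimal, deliberately benign sketch of what a code-generation request to the OpenAI API looks like from the client side. The model name and prompt are placeholders, and the exact client interface depends on the SDK version; the point is simply that such a request is indistinguishable at the network level from legitimate developer activity, since the only outbound connection goes to the official API domain.

```python
# Hypothetical sketch: an ordinary code-generation request to the OpenAI API.
# Nothing here is malicious; the example only shows that the request pattern
# (HTTPS traffic to api.openai.com) looks identical to legitimate developer use,
# which is why domain-based blocklists alone will not flag it.
from openai import OpenAI  # official OpenAI Python SDK (v1.x interface assumed)

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "user", "content": "Write a Python function that deduplicates lines in a text file."},
    ],
)

generated_code = response.choices[0].message.content
print(generated_code)  # the generated snippet would then be reviewed or executed by the caller
```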
In addition, it has become rather common for attackers to use ChatGPT when developing malicious products or pursuing other illegal purposes. In the screenshot provided, a cybercrime forum user describes how AI helped them
resolve a problem with processing user data dumps.
They asked a question on the forum
regarding the processing of strings of a certain format and were provided with a ChatGPT-generated answer.
So even tasks that previously required some expertise can now be solved with a single prompt. This
dramatically lowers the entry threshold into many fields, including criminal ones.
These trends may lead to more attacks: actions that previously required a team of experienced people can now be performed at a basic level even by rookies.
Here is an example of a user searching for a team for carding. They are interested in
processing malware log files, illegal cashing, and other criminal activities. In the post, the user
mentions that they actively use AI when writing code. Presumably, they are referring to parsers: programs that collect and organize information, in this case from files harvested from infected devices. Using such programs at a basic level does not require much expertise.
Using language models to perform standard tasks and obtain information has become so popular that some cybercriminal forums now have responses from ChatGPT or similar tools built into their default functionality. Below is an example of an automatically generated response to a post by a member of a criminal forum, in which the AI gives advice on flashing a phone.
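Mechanically, such an integration is nothing exotic: it is the same kind of auto-reply bot that many legitimate communities run. A hypothetical sketch of the pattern, with all function names and prompts invented for illustration, might look like this:

```python
# Hypothetical sketch of a forum auto-reply integration (all names are invented).
# A new post is forwarded to an LLM API and the model's answer is published as a reply.
from openai import OpenAI  # OpenAI Python SDK, v1.x interface assumed

client = OpenAI()

def generate_auto_reply(post_text: str) -> str:
    """Ask the model to draft a reply to a newly created forum post."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a forum assistant. Answer the user's post concisely."},
            {"role": "user", "content": post_text},
        ],
    )
    return response.choices[0].message.content

# The forum engine would call generate_auto_reply() for each new thread and post
# the result as an ordinary reply, which is what the automatically generated
# answers observed on such forums amount to.
```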
Jailbreaks
To make a model provide responses related to criminal activity, attackers devise special sets of prompts that unlock additional functionality. Such prompt sets are collectively called jailbreaks. They are quite common and are actively refined by users of various social platforms and members of shadow forums. In 2023, we found 249 offers to distribute or sell such prompt sets.
In addition to individual forum posts, some people also build collections of such prompt sets. Note that not all prompts in such collections are designed to perform illegal actions and remove restrictions; they may also be intended to get more precise results for specific legitimate tasks.
It is also important to mention that, while AI developers strive to restrict the output of potentially harmful content, they also try to ensure that these limitations do not get in the way of ordinary users. For example, when we asked ChatGPT for a list of 50 endpoints where Swagger specifications or API documentation could be leaked, the model declined to answer (as observed while collecting materials for this research). But by simply asking again, we instantly obtained the list we were interested in. Such information is not illegal in itself, but attackers can use it to carry out attacks.
Use in software
Another thing to note is that a large number of utilities and software solutions have emerged that use GPT to enhance their performance. We found a post that advertised new software for malware operators. Its features allegedly include the use of AI to protect the operator and automatically switch cover domains. According to the author, AI is used in a separate module to analyze and process information.
Prospects of using open-source tools and penetration testing software
Various ChatGPT-based solutions are starting to receive increased attention from open-source software developers, including those working on projects that create new tools for cybersecurity experts. Such projects can be misused by cybercriminals and are already being discussed on relevant forums as potential tools for malicious activity.
One example is open-source utilities hosted on GitHub that are designed to obfuscate PowerShell code. Tools of this kind are often used by both
cybersecurity experts and attackers when infiltrating a system and trying to gain a foothold in it.
Obfuscation can increase the chances of staying undetected by monitoring systems and antivirus solutions.
Using the Kaspersky Digital Footprint Intelligence service, we found a post on a cybercrime forum in which attackers share this utility and demonstrate how it can be used for malicious purposes and what the result looks like.
There are a number of other legitimate utilities that community experts share for research purposes. Of course, they are intended to be used by cybersecurity experts. While some of them can be a valuable asset for the development of the cybersecurity industry, it should be kept in mind that cybercriminals are constantly looking for ways to automate malicious activities, and the ease of access to such solutions can lower the threshold of entry into the world of cybercrime.
"Evil" ChatGPT analogs
Considerable attention is being drawn to projects like WormGPT, XXXGPT, and FraudGPT. These are language models advertised as analogs of ChatGPT, but without the original's limitations and with additional functionality.
The rising interest in such tools may itself become a problem for their developers. As mentioned earlier, most of the solutions discussed can be used both for legitimate purposes and for illegal activities. Nevertheless, discussions in the community focus on the threats posed by such products.
For example, the WormGPT project was shut down in August 2023 because of a community backlash. The developers say that their original goal was not focused solely on illegal activities, and claim that WormGPT was shut down primarily because of the flood of media reports portraying the project as unambiguously malicious and dangerous.
Interestingly, despite the end of the project, we found numerous sites and ads offering fee-based access
to the notorious WormGPT. Not only are attackers using this language model for their own purposes, but we believe they have also created a whole segment of fake ads for it.
The distinctive feature of these
sites is that they are designed as typical phishing pages. For example, they claim that a trial version of
WormGPT is available, but in reality, payment is required. Part of the interface may be in the local
language.
These sites differ both in design and in pricing structure. Payment methods also
vary from cryptocurrencies, as in the original WormGPT author's offer, to credit cards and bank transfers.
The prevalence of such offers compelled the WormGPT developers to start warning users about these scams even before the project was shut down.
What unregulated ChatGPT-like projects can do
WormGPT is currently the most widespread among these projects, but there are many other analogs, such as xxxGPT, WolfGPT, FraudGPT, and DarkBERT. With competition on the rise, developers are forced to come up with new ways to attract attention to their projects. We managed to find a demo version of xxxGPT with the ability to create custom prompts. Below is an example of a dialog with the demo version of the project, taken from the developers' advert.
As you can see from the example, it lacks the original model's restrictions and can provide pieces of code for malicious software, such as a keylogger capable of recording user keystrokes. The code samples are not complex and will be easily spotted by most antivirus solutions, but the very fact that they can be generated without any issues is alarming.
Sale of accounts
Another danger for both users and companies is the growing market for stolen accounts for the paid version of ChatGPT. In the example provided, a forum member is giving away accounts for free, presumably obtained from malware log files. The account data is collected from infected user devices.
Another interesting example is a post selling a hacked premium ChatGPT account. The seller warns buyers not to change the account details and to delete any new dialogs they create: this makes it more likely that the legitimate owner will not notice the intrusion, so the account can be used continuously.
In addition to hacked accounts, automatically created accounts with access to the free version of the model are widely sold. Attackers register on the platform using automated tools and fake or temporary details. Such accounts have a limit on the number of API requests and are sold in bundles. This saves time and enables users to immediately switch to a new account as soon as their previous one stops working, for example, following a ban for malicious or suspicious activity.
The figure provides statistics on the number of posts offering ChatGPT accounts for sale that appeared on various dark web channels in January-December 2023. These accounts are believed to have been obtained in various ways, including theft and bulk registration. Such posts are sent repeatedly to multiple communication channels, so they are mostly ads from the same sellers and stores. As we can see, the peak occurs in April, right after the peak of interest on the forums, which we discussed earlier.
To summarize, the increasing popularity of AI among attackers and its growing use are alarming. Information is becoming more accessible, and many problems can be solved with a single prompt. This can simplify people's lives, but at the same time, it lowers the entry threshold for malicious actors. While many of
the solutions discussed above may not pose a real threat today due to their simplicity, technology is
evolving at a rapid pace, and it is likely that the capabilities of language models will soon reach a
level at which sophisticated attacks will become possible.