A new report from cybersecurity firm SlashNext indicates that AI-generated cybercrime is on the rise. Following the discovery of WormGPT, more dark web tools have emerged, further exacerbating the situation.
The emergence of WormGPT and FraudGPT is just the beginning of a range of AI tools that cybercriminals plan to use. FraudGPT can generate phishing web pages, malicious code, hacking tools, and scam letters.
SlashNext researchers communicated over Telegram with a pseudonymous seller going by the handle CanadianKingpin12.
“During our investigation, we took on the role of a potential buyer to dig deeper into CanadianKingpin12 and their product, FraudGPT,” SlashNext said. “Our main objective was to assess whether FraudGPT outperformed WormGPT in terms of technological capabilities and effectiveness.”
While the seller was demonstrating FraudGPT, the team received unexpected information: CanadianKingpin12 revealed two upcoming AI chatbots, DarkBart and DarkBert, which would feature internet access and Google Lens integration, enabling them to work with both text and images.
It’s Not What You Think
SlashNext points out that DarkBert was originally created by S2W as a legitimate tool for fighting cybercrime, but criminals have since repurposed it for illicit purposes.
CanadianKingpin12 told the researchers that DarkBert can aid in sophisticated social engineering attacks, exploit vulnerabilities in computer systems, and distribute various types of malware, including ransomware.
“ChatGPT has guardrails in place to protect against unlawful or nefarious use cases,” David Schwed, COO at blockchain security firm Halborn, previously told Decrypt on Telegram. “WormGPT and FraudGPT don’t have those guardrails, so you can ask them to develop malware for you.”
SlashNext recommends that companies take a proactive approach to cybersecurity training and implement improved email verification measures to protect against AI-generated cybercrime tools.
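One concrete form such email verification can take is checking that sending domains publish a DMARC policy, which tells receiving servers how to handle messages that fail SPF or DKIM authentication. SlashNext's report does not prescribe specific tooling, so the following is only an illustrative sketch, assuming the dnspython package and a placeholder domain:

```python
import dns.resolver  # pip install dnspython

def get_dmarc_policy(domain: str):
    """Return the DMARC TXT record published for a domain, or None."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None  # the domain publishes no DMARC record at all
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        if record.startswith("v=DMARC1"):
            return record
    return None

# A missing record or a "p=none" policy leaves spoofed mail unchecked;
# "p=quarantine" or "p=reject" instructs receivers to flag or drop
# messages that fail authentication.
print(get_dmarc_policy("example.com"))
```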