Cybercrime on the internet continues to rise as cybercriminals find new ways to attack, moving from identity theft and malware attacks to new tools such as "FraudGPT."
FraudGPT is an AI tool used by hackers to conduct fraudulent activities. Like other AI-powered chatbots, it is a language model trained on large volumes of text data, which enables it to produce human-like responses to users' questions.
It has been reported that FraudGPT was advertised on the dark web, where it is sold on a subscription basis starting at $200 (R3,665.86) per month or $1,700 (R31,159.89) per year, helping hackers conduct their activities with the aid of AI. Subscribers can write malicious code, create undetectable malware, find Non-VBV (Non-Verified by Visa) bins, create phishing pages and hacking tools, and find groups, sites, and markets. They can also write scam pages and letters, find leaks and vulnerabilities, learn to code and hack, and find cardable sites.
FraudGPT is advertised as a bot with no restrictions or guidelines and is offered by a trusted vendor on various underground dark web marketplaces such as Empire, WHM, Torrez, World, AlphaBay, and Versus, as well as on the encrypted messaging app Telegram. To achieve this, its creators engage in LLM (large language model) "jailbreaking," using prompts that make the model bypass its built-in safeguards and filters.
Previously, cybercriminals needed advanced coding and hacking skills; now anyone can access these tools, making it easier for inexperienced attackers to get started. This significantly increases both the scale and the severity of the threat.
As hackers increasingly use AI for advanced attacks, cybersecurity experts need to adapt and strengthen their defence tactics.