For $50, cyberattackers can use GhostGPT to write malicious code
Malware writing is only one of several malicious activities for which the new, uncensored generative AI chatbot can be used.
A recently debuted AI chatbot dubbed GhostGPT has given aspiring and active cybercriminals a handy new tool for developing malware, carrying out business email compromise scams, and executing other illegal activities.
Like earlier chatbots such as WormGPT, GhostGPT is an uncensored AI model, meaning it is tuned to bypass the security measures and ethical constraints built into mainstream AI systems such as ChatGPT, Claude, Google Gemini, and Microsoft Copilot.
GenAI With No Guardrails: Uncensored Behavior
Bad actors can use GhostGPT to generate malicious code and to receive unfiltered responses to sensitive or harmful queries that traditional AI systems would typically block, Abnormal Security researchers said in a blog post this week.
"GhostGPT is marketed for a range of malicious activities, including coding, malware creation, and exploit development," according to Abnormal. "It can also be used to write convincing emails for business email compromise (BEC) scams, making it a convenient tool for committing cybercrime." A test that the security vendor conducted of GhostGPT's text generation capabilities showed the AI model producing a very convincing Docusign phishing email, for example.
The security vendor first spotted GhostGPT for sale on a Telegram channel in mid-November. Since then, the rogue chatbot appears to have gained considerable traction among cybercriminals, a researcher at Abnormal tells Dark Reading. The authors offer three pricing tiers for the large language model: $50 for one week, $150 for one month, and $300 for three months, says the researcher, who asked not to be named.
To read the complete article, visit Dark Reading.