Better phishing, easy malicious implants: How AI could change cyberattacks
Artificial intelligence and machine learning (AI/ML) models have already shown some promise in sharpening phishing lures, creating synthetic profiles, and producing rudimentary malware, but even more innovative applications by attackers will likely arrive in the near future.
Malware developers have already started toying with AI-driven code generation, and security researchers have demonstrated that a full attack chain can be built this way.
The Check Point Research team, for example, used current AI tools to create a complete attack campaign, starting with a phishing email generated by OpenAI’s ChatGPT that urges the victim to open an Excel document. The researchers then used the Codex AI programming assistant to create an Excel macro that executes code downloaded from a URL, as well as a Python script to infect the targeted system.
Each step required multiple iterations to produce acceptable code, but the eventual attack chain worked, says Sergey Shykevich, threat intelligence group manager at Check Point Research.
“It did require a lot of iteration,” he says. “At every step, the first output was not the optimal output — if we were a criminal, we would have been blocked by antivirus. It took us time until we were able to generate good code.”
Over the past six weeks, ChatGPT, a large language model (LLM) based on the third iteration of OpenAI’s generative pre-trained transformer (GPT-3), has spurred a variety of what-if scenarios, both optimistic and fearful, for the potential applications of artificial intelligence and machine learning. The dual-use nature of AI/ML models has left businesses scrambling to find ways to improve efficiency using the technology, while digital-rights advocates worry over the impact the technology will have on organizations and workers.
Cybersecurity is no different. Researchers and cybercriminal groups have already experimented with using GPT technology for a variety of tasks. Purportedly novice malware authors have used ChatGPT to write malware, although developers’ attempts to use the service to produce applications, while sometimes successful, often yield code with bugs and vulnerabilities.
Yet AI/ML is influencing other areas of security and privacy as well. Generative adversarial networks (GANs) have been used to create photos of synthetic humans, images that appear authentic but depict no real person, as a way to enhance profiles used for fraud and disinformation. The same class of models can create deepfake video and audio of specific people, and in one case allowed fraudsters to convince accountants and human resources departments to wire $35 million to the criminals’ bank account.
Such AI systems will only improve over time, raising the specter of enhanced threats that can fool existing defensive strategies.
Variations on a (Phishing) Theme
For now, cybercriminals often use the same or a similar template to create spear-phishing email messages or to construct landing pages for business email compromise (BEC) attacks, but reusing a single template across a campaign increases the chance that defensive software will detect the attack.
One early use of LLMs such as ChatGPT, then, will likely be to produce more convincing phishing lures: messages with greater variability, written in a variety of languages, and dynamically adjusted to the victim’s profile.
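To see why template reuse works against attackers, and why LLM-generated variability erodes that signal, here is a minimal sketch of the kind of near-duplicate check a mail filter might apply to incoming message bodies. The sample messages, identifiers, and similarity threshold are all hypothetical, and real products use far more sophisticated clustering; the point is only that nearly identical bodies are easy to flag.

```python
import difflib
from itertools import combinations

# Hypothetical corpus of suspicious email bodies seen by a mail gateway.
messages = {
    "msg_001": "Dear customer, your invoice #4821 is overdue. Open the attached sheet to review.",
    "msg_002": "Dear customer, your invoice #7710 is overdue. Open the attached sheet to review.",
    "msg_003": "Hi team, please see the quarterly budget workbook and enable editing.",
}

SIMILARITY_THRESHOLD = 0.85  # assumed cutoff for "likely the same template"

def similarity(a: str, b: str) -> float:
    """Return the ratio of matching characters between two bodies (0.0 to 1.0)."""
    return difflib.SequenceMatcher(None, a, b).ratio()

# Flag pairs of messages that are near-duplicates of one another,
# which is the trace a shared phishing template leaves behind.
for (id_a, text_a), (id_b, text_b) in combinations(messages.items(), 2):
    score = similarity(text_a, text_b)
    if score >= SIMILARITY_THRESHOLD:
        print(f"{id_a} and {id_b} look like the same template (similarity {score:.2f})")
```

Rewriting each lure with an LLM drives these pairwise scores down across a campaign, which is exactly what makes the technique attractive to attackers.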
To read the complete article, visit Dark Reading.