NSA sounds alarm on AI's cybersecurity risks
The rapid adoption of artificial intelligence tools may make them “highly valuable” targets for malicious cyber actors, the National Security Agency warned in a recent report.
Bad actors looking to steal sensitive data or intellectual property may seek to “co-opt” an organization’s AI systems to achieve their goals, according to the report. The NSA recommends that organizations adopt defensive measures such as promoting a “security-aware” culture to minimize the risk of human error, and hardening their AI systems to close security gaps and vulnerabilities.
“AI brings unprecedented opportunity, but also can present opportunities for malicious activity,” NSA Cybersecurity Director Dave Luber said in a press release.
The report comes amid growing concerns about potential abuses of AI technologies, particularly generative AI, including the Microsoft-backed OpenAI’s wildly popular ChatGPT model.
In February, OpenAI said in a blog post that it had terminated the accounts of five state-affiliated threat groups that were using the startup’s large language models to lay the groundwork for malicious hacking efforts. The company acted in collaboration with Microsoft threat researchers.
To read the complete article, visit Cybersecurity Dive.