Generative AI projects pose major cybersecurity risk to enterprises

Elizabeth Montalbano, Dark Reading

June 29, 2023

2 Min Read
Generative AI projects pose major cybersecurity risk to enterprises

Organizations’ rush to embrace generative AI may be ignoring the significant security threats that large language model (LLM)-based technologies like ChatGPT pose, particularly in the open source development space and across the software supply chain, new research has found.

A report by Rezilion released June 28 explored the current attitude of the open source landscape to LLMs, particularly in their popularity, maturity, and security posture. What researchers found is that despite the rapid adoption of this technology among the open source community (with a whopping 30,000 GPT-related projects on GitHub alone), the initial projects being developed are, overall, insecure — resulting in an increased threat with substantial security risk for organizations.

This risk only stands to increase in the short-term as generative AI continues its rapid adoption throughout the industry, demanding an immediate response to improve security standards and practices in the development and maintenance of the technology, says Yotam Perkal, director of vulnerability research at Rezilion.

“Without significant improvements in the security standards and practices surrounding LLMs, the likelihood of targeted attacks and the discovery of vulnerabilities in these systems will increase,” he tells Dark Reading.

As part of its research, the team investigated the security of 50 of the most popular GPT and LLM-based open source projects on GitHub, ranging in length of development time from two to six months. What they found is that while they were all extremely popular with developers, their relative immaturity was paired with a generally low security rating.

If developers rely on these projects to develop new generative-AI-based technology for the enterprise, then they could be creating even more potential vulnerabilities against which organizations are not prepared to defend, Perkal says.

“As these systems gain popularity and adoption, it is inevitable that they will become attractive targets for attackers, leading to the emergence of significant vulnerabilities,” he says.

Key Areas of Risk

The researchers identified four key areas of generative AI security risk that the adoption of generative AI in the open source community presents, with some overlap between the groups:

  • Trust boundary risk;

  • Data management risk;

  • Inherent model risk;

  • And basic security best practices.

To read the complete article, visit Dark Reading.

Subscribe to receive Urgent Communications Newsletters
Catch up on the latest tech, media, and telecoms news from across the critical communications community