Apple Intelligence could introduce device security risks

Robert Lemos, Dark Reading

June 15, 2024

Apple’s long-awaited announcement of its generative AI (GenAI) capabilities came with an in-depth discussion of the company’s security considerations for the platform. But the tech industry’s past focus on harvesting user data from nearly every product and service has left many concerned about the data security and privacy implications of Apple’s move. Fortunately, there are proactive steps companies can take to address the potential risks.

Apple’s approach to integrating GenAI — dubbed Apple Intelligence — includes context-sensitive searches, editing emails for tone, and the easy creation of graphics, with Apple promising that the powerful features require only local processing on mobile devices to protect user and business data. The company detailed a five-step approach to strengthen privacy and security for the platform, with much of the processing done on a user’s device using Apple Silicon. More complex queries, however, will be sent to the company’s private cloud and use the services of OpenAI and its large language model (LLM).
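Apple has not published developer APIs for this routing, but the division of labor it describes can be sketched conceptually. The Swift snippet below is a hypothetical illustration only: every type, function name, and threshold is invented for this article and does not reflect Apple's actual implementation.

```swift
// Hypothetical sketch only -- none of these types are real Apple APIs.
enum ProcessingTarget {
    case onDevice            // handled entirely by the local model on Apple Silicon
    case privateCloudCompute // escalated to Apple's hardened cloud service
    case thirdPartyLLM       // forwarded to an external model such as ChatGPT, with per-request consent
}

struct AIQuery {
    let text: String
    let estimatedComplexity: Int    // assumed scale: 1 (simple rewrite) ... 10 (open-ended generation)
    let userConsentedToThirdParty: Bool
}

/// Sketch of the routing behavior described in the announcement: keep processing
/// local whenever the on-device model can handle it, fall back to Private Cloud
/// Compute for heavier requests, and involve a third-party LLM only when the
/// user has explicitly agreed.
func route(_ query: AIQuery, onDeviceComplexityLimit: Int = 5) -> ProcessingTarget {
    if query.estimatedComplexity <= onDeviceComplexityLimit {
        return .onDevice
    }
    return query.userConsentedToThirdParty ? .thirdPartyLLM : .privateCloudCompute
}

// Example: rewriting an email for tone is simple enough to stay on the device.
let emailRewrite = AIQuery(text: "Make this email sound more formal.",
                           estimatedComplexity: 3,
                           userConsentedToThirdParty: false)
print(route(emailRewrite)) // prints "onDevice"
```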

While companies will have to wait to see how Apple’s commitment to security plays out, the company has put a lot of consideration into how GenAI services will be handled on devices and how the information will be protected, says Joseph Thacker, principal AI engineer and security researcher at AppOmni, a cloud-security firm.

“Apple’s focus on privacy and security in the design is definitely a good sign,” he says. “Features like not allowing privileged runtime access and preventing user targeting show they are thinking about potential abuse cases.”

Apple spent significant time during its announcement reinforcing the idea that it takes security seriously, and it published a paper online describing the five requirements for its Private Cloud Compute service, such as disallowing privileged runtime access and hardening the system against targeting of specific users.

Still, LLMs such as ChatGPT and other forms of GenAI are new enough that the threats remain poorly understood, and some will slip through Apple’s efforts, says Steve Wilson, chief product officer at cloud security and compliance provider Exabeam and lead of the Open Web Application Security Project’s Top 10 Security Risks for LLMs.

“I really worry that LLMs are a very, very different beast, and traditional security engineers, they just don’t have experience with these AI techniques yet,” he says. “There are very few people who do.”

Apple Makes Security a Centerpiece

Apple seems to be aware of the security risks that concern its customers, especially businesses. The implementation of Apple Intelligence across a user’s devices, dubbed the Personal Intelligence System, will connect data from applications in a way that has, perhaps, only been done previously through the company’s health-data services. Conceivably, every message and email sent from a device could be reviewed by the AI, with context added through on-device semantic indexes.

Yet the company pledged that, in most cases, the data never leaves the device and that the information is anonymized as well.

To read the complete article, visit Dark Reading.
