Big Tech Pushing On-Device AI as Privacy, Performance Booster

  • Apple, Microsoft among those releasing on-device AI offerings
  • How local AI will handle more complex tasks is uncertain

Artificial intelligence and other technology companies are pushing their large language models out of the cloud and onto users’ personal devices in a move they say will enhance privacy and security.

Apple Inc. unveiled its generative AI-powered assistant, “Apple Intelligence,” on June 10, advertising a “brand new standard for privacy” during its Worldwide Developers Conference. The new feature will be integrated directly into cellphones, tablets, and computers through “on-device processing,” the tech giant said. In May, Microsoft Corp. introduced Copilot+ PCs, its “intelligent” computers with built-in, locally running AI models.

“The center of gravity of AI processing is gradually shifting from the cloud to the edge, on devices,” said Durga Malladi, senior vice president and general manager of technology and edge solutions at Qualcomm Technologies Inc., which provides chips that can run models locally.

On-device LLMs are among the latest attempts at a privacy-enhancing approach to generative AI. Until now, companies have relied on cloud-based enterprise accounts with consumer-facing generative AI tools such as OpenAI’s ChatGPT Enterprise, or have built and customized their own internal solutions. By running generative AI tools directly on devices, the shift from the cloud “removes previous limitations on things like latency, cost and even privacy,” Microsoft wrote in a blog post.
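
To make “on-device” concrete, the sketch below shows what local inference can look like in practice. The library (the open-source Hugging Face transformers package), the small open model, and the parameters are illustrative assumptions, not the stack Apple or Microsoft have said they use; the point is that after the weights are downloaded once, prompts and generated text stay on the machine.

    # Minimal sketch of on-device text generation.
    # Assumptions: the Hugging Face transformers library and the small open
    # TinyLlama model, chosen for illustration only.
    from transformers import pipeline

    # Downloads the model weights once; later runs load from the local cache.
    generator = pipeline(
        "text-generation",
        model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
        device_map="auto",  # use a local GPU/NPU if available, otherwise the CPU
    )

    # The prompt is processed entirely on this machine; no API call leaves the device.
    prompt = "List three privacy benefits of running an AI assistant locally."
    output = generator(prompt, max_new_tokens=128, do_sample=False)
    print(output[0]["generated_text"])
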

The push toward on-device AI comes as some users have sought a more personalized generative AI experience that doesn’t require trading away their personal data. But the latest remedy is no panacea. Local models require the same training as their cloud-based counterparts and can still produce inaccurate outputs or inappropriately expose training data. Adoption will also raise new privacy and security concerns among users, their providers, and third parties.

“There’s no magic answer or perfect answer” to protecting privacy while using LLMs, said Brian Hengesbaugh, chair of Baker McKenzie’s global data privacy and security business unit.

On-device AI “looks like a good step in the right direction,” he said. “I would caution, though, not to let anybody think, ‘Oh, great! It’s on device. And now we don’t have anything to worry about.’”

Privacy Risks

Tech companies including Qualcomm, Nvidia Corp., Apple, and Microsoft say running AI models on individual devices, like an iPhone or a Windows laptop, should ensure that prompts fed into generative AI tools remain private. Big Tech’s privacy promise is attractive to any organization working with confidential information, whether employees’ personal data, trade secrets, or copyrighted material.

“You’ve completely eliminated the risk of sending it to the company to process it, to send it back,” said Mark McCreary, chair of Fox Rothschild’s artificial intelligence practice and co-chair of its privacy and data security practice. “You’ve eliminated the concern that, in transit, it’s going to have a problem because it never leaves the device.”

Still, Noah Johnson, co-founder and chief technology officer of data security and governance company Dasera, noted that many of the core perils associated with generative AI remain.

“How do you ensure that the model doesn’t allow someone to learn more than they should about the sensitive data?” he said. “That’s really the crux of the issue and those challenges, again, are pretty fundamental to just machine learning in general and not specific to where the model is running.”

On-device AI tools’ ability to reduce an organization’s risk will ultimately depend on its use case for the technology. If a company leverages generative AI in any automated decision-making capacity, such as for human resources or hiring purposes, “you can still have privacy implications, even if that use case sits on an app, on a phone, or on a device,” said Hengesbaugh.

If a large language model is running on a personal device, that is where its accompanying data will live. Hosting such consequential amounts of data, both from the original training set and any new information aggregated through inferences, will test companies’ existing cybersecurity practices.

“This is a totally new type of technology that we are going to be running on our own devices. And whenever we implement a new system, process, or technology on anything, it opens up a new attack vector as well,” said Alex Urbelis, general counsel and chief information security officer at ENS Labs Ltd., a nonprofit developer of blockchain routing technology. “So, these AI systems may be great because we’re localizing a lot of private data, but on the other hand, are they an attack vector for third parties?”

Those with mature AI governance policies built on the standards set by the National Institute of Standards and Technology or the International Organization for Standardization may be best situated to pivot to on-device AI, Hengesbaugh said.

These companies will now grapple with essential questions at the intersection of AI, privacy, and security, he said. “What’s the process of putting in the prompts? And how long is the data kept? And what are the controls around it? But it wouldn’t be so novel from a cyber perspective that you couldn’t address it with your security impact assessment.”

Still, for many entities, efforts to develop data governance are ongoing. For example, safeguarding company data, especially while navigating “Bring Your Own Device” policies, could become more complicated in the age of on-device AI.

“Every organization constantly fights people taking company data onto more personal devices, usually mobile devices. And maybe as those new, cool features become available, if you do the task on your mobile device,” McCreary said, “there’s an increased risk of that data walking over to the mobile device.”

Source: Bloomberg Law.