Samsung has tried and tried hard. But sometimes your best is not enough. It now looks like Samsung has fallen short with Galaxy AI. Apple has effected a brilliant privacy and security sleight of hand over its premium rival. If this works to plan, what Apple has done could be a reason for millions of Samsung users to switch to iPhone.
It has been clear for a year that the next generation of smartphone sales will be steered by innovative new AI capabilities, with the joy for marketing and sales teams being that the newer the hardware, the more toys there are to play with. This is fast becoming the biggest, broadest reason to upgrade a smartphone in years.
But there’s a huge privacy and security battle here as well—framed as on-device versus in the cloud in its simplest terms. At least until this week. Now Apple has changed the entire game—and no one saw this coming.
Google may have been fastest out of the traps with smartphone AI, speeding ahead of rivals with add-on after add-on to its apps and services. But Google’s challenge will always be its uneasy optics when it comes to security and privacy, especially at the premium end of the market where Samsung is the only real Android rival to Apple.
Samsung has played a clever hand this year. Galaxy AI brought us the concept of “hybrid AI,” where AI processing of sensitive data would be on-device only, reducing the privacy risk for users sharing data with cloud models. It’s all very nascent still, but it has seemed for most of the year that this could steal some Apple on-device thunder.
By the time of those first Apple leaks in the spring suggesting iPhone AI would be on-device only, it looked like a straight head-to-head between Apple and Samsung, despite Google’s increasing focus on its own on-device Gemini Nano model. Could Samsung’s looser approach beat Apple if the iPhone maker was limiting itself to on-device silicon? And what about Apple’s likely Faustian pacts with ChatGPT or Gemini, or both?
The issue for Apple—as many of us have commented—has been whether it can deliver a competitive AI offering while maintaining its privacy and security credentials. This has seemed a major challenge—especially if we are to believe it rushed to an answer as a response to Google’s and Samsung’s unexpected lead.
But what we were actually given at WWDC was brilliant on Apple’s part. If it works as billed, this could redefine smartphone AI and erect hurdles for its rivals that could be almost impossible to leap. What Apple unveiled is a closed ecosystem of device and cloud silicon, with an almost end-to-end encrypted philosophy applied to any AI queries or data leaving a user’s device: requests are quasi-anonymized, processed inside hardened enclaves, and assured to such an extent that external researchers can provide third-party accreditation.
Samsung has no answer to this—suddenly its hybrid AI approach seems crude and underwhelming. Apple is offering the best of both worlds, and is willing to promise its users that “your data is never stored or made accessible [even] to Apple” even in the cloud, while offering the best and brightest generative AI for tasks that cannot be completed on-device alone. Private Cloud Compute, in theory at least, redefines the space.
Ironically, as the DOJ and others target Apple with claims that its walled garden harms users, this new AI architecture relies on exactly such end-to-end control to work. Apple needs to be able to verify the software and hardware on-device and in the cloud, and it needs to provide a tight interface between both. This extends to the silicon itself, with its own custom processors at both ends designed with this in mind.
As Johns Hopkins cryptography expert Matthew Green explains, while Apple has always favored on-device, “the problem is that while modern phone ‘neural’ hardware is improving, it’s not improving fast enough… This fundamentally requires servers.”
“For the first time ever,” Apple says, “Private Cloud Compute extends the industry-leading security and privacy of Apple devices into the cloud, making sure that personal user data sent to PCC isn’t accessible to anyone other than the user — not even to Apple. Built with custom Apple silicon and a hardened operating system designed for privacy, we believe PCC is the most advanced security architecture ever deployed for cloud AI compute at scale.”
This approach, Green says, is not easy. “Building trustworthy computers is literally the hardest problem in computer security. Honestly it’s almost the only problem in computer security. But while it remains a challenging problem, we’ve made a lot of advances. Apple is using almost all of them.”
That’s how significant this is. Forget Image Playground and Genmoji, tinted app icons and more flexible home screens. The game-changing update at WWDC was architectural, and it is Apple’s biggest macro innovation in years: every link in a new chain is assured and subject to verification by the other links, hardware and software alike.
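To make that chain concrete, here is a minimal sketch, in Swift, of the kind of check Apple describes under “verifiable transparency”: a device refuses to release a request unless the target node presents a software measurement that both appears in a publicly auditable log and is validly signed by the node’s hardware-bound key. Every name and type here is hypothetical—Apple has not published a client API for this—but the shape of the check follows its own description.

```swift
import Foundation
import CryptoKit

// Hypothetical sketch of "verifiable transparency": the device only releases
// a request to a cloud node whose signed software measurement appears in a
// publicly auditable log. Names and structure are illustrative, not Apple's.

struct NodeAttestation {
    let softwareMeasurement: Data                     // hash of the node's OS image
    let signature: Data                               // produced by the node's hardware-bound key
    let nodeSigningKey: Curve25519.Signing.PublicKey  // public half of that key
}

// Stand-in for the public transparency log of approved PCC software images
// that external researchers could audit.
let auditedMeasurements: Set<Data> = []

func shouldReleaseRequest(to attestation: NodeAttestation) -> Bool {
    // 1. The node must be running software that has been publicly logged.
    guard auditedMeasurements.contains(attestation.softwareMeasurement) else {
        return false
    }
    // 2. The measurement must be genuinely signed by the node's hardware key,
    //    tying the claim to silicon rather than to an operator's word.
    return attestation.nodeSigningKey.isValidSignature(
        attestation.signature,
        for: attestation.softwareMeasurement
    )
}
```

The design point is that the check fails closed: if a node cannot prove it is running audited software, no user data ever reaches it.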
As MIT’s Technology Review explains, this new architecture “offers an implicit contrast with the likes of Alphabet, Amazon, or Meta, which collect and store enormous amounts of personal data. Apple says any personal data passed on to the cloud will be used only for the AI task at hand and will not be retained or accessible to the company, even for debugging or quality control, after the model completes the request… Apple is saying people can trust it to analyze incredibly sensitive data—photos, messages, and emails that contain intimate details of our lives—and deliver automated services based on what it finds there, without actually storing the data online or making any of it vulnerable.”
“Secure and private AI processing in the cloud poses a formidable new challenge,” Apple says. “Powerful AI hardware in the data center can fulfill a user’s request with large, complex machine learning models — but it requires unencrypted access to the user’s request and accompanying personal data. That precludes the use of end-to-end encryption, so cloud AI applications have to date employed traditional approaches to cloud security.” This opens up those cloud servers and the data therein to attack. And Apple is keen to stress that its entire approach has been designed like a response to a red-team exercise: assume a fully sophisticated adversary and engineer against how that adversary would attack.
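One way to square that circle, simplified and hedged: rather than classic end-to-end encryption between two users, the device encrypts each request to a key held only by the specific attested node, so nothing in between—load balancers, operators, Apple itself—can read it. A minimal sketch using CryptoKit’s HPKE support follows; the “pcc-request-v1” label and the key-handling details are assumptions for illustration, not Apple’s published wire format.

```swift
import Foundation
import CryptoKit

// Hedged sketch: encrypt a request so that only one attested compute node
// can decrypt it. The "pcc-request-v1" label is a hypothetical domain
// separator, not a documented Apple protocol string.

func sealRequest(
    _ plaintext: Data,
    toAttestedNode nodeKey: P256.KeyAgreement.PublicKey
) throws -> (encapsulatedKey: Data, ciphertext: Data) {
    var sender = try HPKE.Sender(
        recipientKey: nodeKey,
        ciphersuite: .P256_SHA256_AES_GCM_256,
        info: Data("pcc-request-v1".utf8)
    )
    let ciphertext = try sender.seal(plaintext)
    // The encapsulated key lets the node, and only the node, derive the
    // shared secret; intermediaries see only ciphertext they cannot open.
    return (sender.encapsulatedKey, ciphertext)
}
```

Binding that recipient key to the attestation check sketched earlier is what would turn ordinary transport security into the “not even Apple” guarantee the company is claiming.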
So not just a new approach to cloud privacy but to cloud security as well. “Our threat model for Private Cloud Compute includes an attacker with physical access to a compute node and a high level of sophistication — that is, an attacker who has the resources and expertise to subvert some of the hardware security properties of the system and potentially extract data that is being actively processed by a compute node.” That means an attacker at the top end of the private market, or at nation-state level.
With the awkward timing that comes from being outplayed, Samsung put out its latest hybrid AI PR just before Apple’s WWDC unveiling. “We believe our hybrid approach is the most practical and reliable solution to meet all these needs and puts Samsung ahead of the curve,” the company said. “We are providing users with a balance between the instant responsiveness and extra privacy assurance of on-device AI and the versatility of cloud-based AI through open collaborations with industry-leading partners in offering a variety of functions they need for daily life.”
The stark truth for Samsung, though, is that Apple—again in theory at least—has moved so far beyond this hybrid device/cloud balancing act that it could be as compelling as its early lead in end-to-end encryption. And for enterprises trusting staff to use generative AI for sensitive tasks, this presents a new paradigm.
We wait to see how Apple’s promised transparency will work when data or queries leave a device for the cloud, and what detail is provided as to which AI model is being used. But—again in theory at least—this shifts the debate away from the specific LLMs and away from concerns over Gemini’s versus ChatGPT’s security credentials. Apple is doing the heavy security and privacy lifting instead by creating this framework. Clearly, if a user actively chooses to use a different model for a different task from a specific app, then that can be specifically flagged.
The Apple Intelligence use cases shown at WWDC are just what we can envisage today—the reality is that AI should become so seamless on a device and with its cloud extension that it’s much less obvious when and where it’s being applied. “Private Cloud Compute continues Apple’s profound commitment to user privacy,” Apple says. “With sophisticated technologies to satisfy our requirements of stateless computation, enforceable guarantees, no privileged access, non-targetability, and verifiable transparency, we believe Private Cloud Compute is nothing short of the world-leading security architecture for cloud AI compute at scale.”
What this has also done is give Apple a new market lead over rivals, presenting them with a serious choice—especially Samsung. Does the leading Android OEM continue with a private on-device and mainly open cloud offering, or does it look at closing this new gap? Apple has taken the market by surprise, and if it executes as promised it could redefine this space. “Your phone might seem to be in your pocket,” Green says, “but a part of it lives 2,000 miles away in a data center.”
The question now is whose data center you trust—especially for those Samsung users considering $1,000-plus purchases of new AI-centric smartphones.