
How Leaked Exploit Chains Fueled the Age of AI-Accelerated iOS Attacks

Matthias Frielingsdorf and Mateusz Krzywicki

We laid the foundation of our argument in part one of this series: The History of iOS Exploits: Apple’s (Flawed) Security Paradigm. If you missed it, here’s the TL;DR before we begin part two. 

For years, the consensus surrounding Apple’s security was simple: iOS exploits are rare, expensive, and reserved for the most sophisticated threat actors. But in the wake of Coruna, DarkSword, and Mythos-like AI models, that consensus is quickly collapsing, and 2026 is the year it becomes undeniable. 

Building an iOS exploit chain has never been easy. Apple, for its part, has systematically raised the cost of exploitation with every release; kalloc_type, PAC, SPTM, MIE, and tighter sandboxing all compound to make even finding a viable attack surface a significant challenge. 

Exploiting iOS devices requires techniques that are closely guarded secrets: proprietary knowledge held by a handful of offensive security companies and government contractors. This is tradecraft, built over years of research and protected by the economics of scarcity.

But there's also the talent problem. 

The people with the expertise to build these exploit chains are either:

  • Employed by Apple itself

  • Contracted by offensive security firms or government agencies

  • Working for defensive security companies

The result? A full device compromise on iOS requires chaining multiple vulnerabilities across user and kernel modes, and the people who can do this are either few in number, very expensive, or both.

Because iOS exploits were rare and expensive to build, the industry, and Apple in particular, built its entire security posture on the assumption that this scarcity would persist as long as exploit development stayed costly. One consequence is that Apple does not provide an endpoint security framework for iOS, so neither the industry nor Apple itself has the visibility needed to determine whether that assumption still holds.

This led us to ask what could change these assumptions:

  • What happens if threat actors change their tactics or just publish infection links on X/Twitter?  

  • What happens if NSO gets hacked? 

  • What happens if a commercial spyware company gets into financial trouble or gets sanctioned and can no longer sell to Western countries?

  • Would any of them start to sell exploits to organized crime? Maybe not 0-Days, but perhaps N-Days, which are worth less because they are already patched.

  • What happens if the cost of finding bugs and developing exploits gets significantly reduced?

Interestingly, many of these things have already happened:

In 2023, an Intellexa customer publicly posted infection links for the Predator malware in response to tweets from targets in the European Commission and Parliament.

In 2024, Google Threat Intelligence published an analysis showing that exploits very similar or identical to those from both Intellexa and NSO Group had been used by a Russian threat actor in Mongolia. We still don’t know whether these groups were hacked or whether the exploits were resold. But we wouldn’t argue that hacking NSO Group or Intellexa is significantly more challenging than hacking the NSA, the CIA, or Hacking Team, all of which have publicly happened before.

And finally, in 2026, we have seen the Coruna exploit kit make its way to an organized crime group, which then deployed it in watering-hole campaigns to steal cryptocurrency.

But all of these required changes in how offensive actors behave. Let’s take a look at the cost of finding bugs and what AI means for it.  

AI Changed the Bug Equation, But Not the Exploit Equation (Yet)

The rise of AI-assisted vulnerability research has shifted part of the equation. Finding bugs on iOS is no longer the exclusive domain of elite researchers. A significant portion of the iOS codebase is open source — WebKit, the XNU kernel, dyld, various system daemons — and even closed-source components are accessible through decompilation. Programs like AIxCC have shown that AI models can analyze code at scale, identify patterns and find potential vulnerabilities faster than any human could.

But there’s one important distinction that should be made.

Bugs are not exploits. 

Even with AI making bug discovery cheaper and faster, getting from a vulnerability to a working exploit on iOS still requires techniques that are kept private. The public knowledge base doesn't cover the bypass methods, memory manipulation tricks, sandbox escape attack surfaces, or the kernel exploitation primitives that make a chain actually work.

So while the barrier to finding bugs has dropped, the barrier to exploiting them has not, at least not for anyone relying on publicly available information. For now.

2026: The Dam Breaks. Exploit Chains Go Mass Market

Then, in 2026, a cascade of developments followed. 

The exploit chains Coruna and DarkSword, first reported in March 2026, were used in mass attacks on iPhone users. This wasn't a nation-state deploying a carefully targeted tool against a terrorist or criminal. These were nation-states and organized crime groups deploying watering-hole attacks against a large set of people rather than targeted individuals.

Both Coruna and DarkSword leaked. The full exploit chains — including the private techniques that had been closely guarded for years — became available to the general public.

This is the moment that could change everything for AI-assisted exploitation. 

Those private techniques, the ones that kept the barrier to exploitation high even as bug discovery got cheaper, are now in the training data. They're in the public discourse. They're available for AI models to learn from, generalize, and apply to new targets.

GPT-5.4-Cyber, Mythos, and the New Frontier of AI

On top of all this, a new generation of frontier AI models arrived. 

OpenAI’s GPT-5.4-Cyber, Anthropic’s Mythos, and other frontier models represent a paradigm shift that goes far beyond simple bug finding. As demonstrated by Anthropic’s technical disclosure, these systems can identify previously unknown vulnerabilities on different platforms and build full, working exploit chains autonomously, requiring little to no human intervention after an initial prompt. Combined with the iOS exploitation techniques from the leaked chains, this could make AI-assisted exploit development for iOS feasible.

So what does this mean?

There are a couple of things that are influenced by this.

First: There will likely be a larger group of organizations able to find 0-Days for iOS. But at the same time, many more defensive organizations and Apple will also be able to find bugs faster, so the shelf life of bugs will likely decrease.

Second: Given the right set of bugs, these AI models might enable many different organizations to build exploits that compromise iOS. For organizations that already have the skills to build an iOS exploit in-house, these tools will speed up the process.

Third: Even though Apple might be able to patch some of these techniques, this still lowers the bar for many researchers and AI tools to enter iOS exploitation. So there will likely be opportunities to find similar techniques.

There are also interesting questions about what happens internally at Apple. 

If bugs are being found faster and in greater volume, Apple's teams may not be able to patch them quickly enough. Bug bounty payouts could also come under pressure: if too many submissions qualify, Apple may reduce rewards, which weakens the incentive for defensive researchers to participate.

The calculus for a researcher starts to look like: “How many tokens do I spend to find something that actually pays out?” And since everyone is running the same models, it becomes closer to a lottery than a skill-based competition. We have observed this before. Just recently, Apple significantly changed its macOS bounties, reducing the reward for a full TCC (Transparency, Consent, and Control) bypass from $50,000 to $5,000, clearly showing that its assumptions about bugs and exploitability were skewed.
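The researcher’s calculus above can be sketched as a toy expected-value model. Every number below is a hypothetical illustration, not a real payout, probability, or token price; the point is only how a payout cut combined with duplicate risk flips the economics:

```python
# Toy expected-value model for AI-assisted bounty hunting.
# All figures are hypothetical illustrations, not real bounty terms.

def expected_profit(payout, p_find, p_unique, token_cost):
    """Expected profit of one AI-driven hunting run.

    payout:     reward for a qualifying, non-duplicate report
    p_find:     probability a run yields a qualifying bug
    p_unique:   probability nobody else (running the same models)
                reports the same bug first
    token_cost: model/compute spend for the run
    """
    return payout * p_find * p_unique - token_cost

# A large bounty with little competition: positive expectation.
before = expected_profit(payout=50_000, p_find=0.02, p_unique=0.9, token_cost=300)

# A reduced bounty while everyone runs the same models, so
# duplicate submissions are far more likely: negative expectation.
after = expected_profit(payout=5_000, p_find=0.02, p_unique=0.3, token_cost=300)

print(f"before: ${before:,.0f} per run")
print(f"after:  ${after:,.0f} per run")
```

Under these made-up inputs the first scenario is worth about $600 per run and the second loses money, which is exactly the point where the lottery stops paying and researchers drop out.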

Now let’s look at a couple of scenarios that this could change:

If these AI models can reliably find bugs and build exploits for iOS at low cost and with minimal skill, we will see the same criminal threat actors that already target desktop systems with ransomware and advanced exploits turn to iOS as well. We saw the start of this with Coruna earlier this year, but that was still a resold kit. With the DarkSword leak and these models, this scenario could materialize.

Let’s assume this scenario is unlikely and that iOS exploitation still requires expert knowledge. These models could give experts the ability to compress the research and exploit development timeline from months to weeks, days, or even hours. If this produces a stronger supply of bugs and exploits, it could change behavior on the threat actor side, since operational security would matter less.

But there is an even more interesting and more dangerous scenario: 

Even though Apple has been disclosing when bugs are exploited in the wild since iOS 14, it has often taken weeks or months for public exploits to surface. We saw this with both Coruna and DarkSword: many of these vulnerabilities had no public exploits, even years after patching, until someone leaked them.

But now assume that these frontier models can actually speed up exploit development. This opens a new avenue for threat actors: instead of finding bugs themselves, they can wait for Apple to ship patches, look specifically for the bugs flagged as exploited in the wild, and start building an exploit. Because those bugs were exploited in the wild, someone has already proven they can be weaponized. That significantly narrows the search for any additional bugs a chain might need, and because many users don’t immediately patch their devices, this could be a very effective way to develop capabilities while benefiting from others’ work.

The same obviously applies to bugs that were not exploited in the wild but might still be exploitable. This could make mass N-Day attacks feasible. And because many organizations have reliability requirements, they may not be able to patch quickly and will remain vulnerable to those attacks.

The Defensive Industry's Problem

All of this leaves the defensive industry in a deeply uncomfortable position. 

Apple's assumptions that iOS exploitation is rare and accessible only to a few were reasonable when they were made. The attacks were rare, the talent was scarce, and the economics favored defense. But every one of those assumptions is now under pressure from AI and leaked exploitation techniques.

Exploit development is no longer constrained by scarcity in the same way. The inputs that once limited it (private techniques, elite talent, and time) are beginning to change. AI lowers the cost of discovery. Leaked exploit chains reduce the need for original research. And frontier models are starting to connect those pieces in ways that were previously out of reach.

This does not mean that every attacker can suddenly build a full iOS exploit chain overnight. But it does mean the trajectory is clear. The gap between vulnerability discovery and reliable exploitation is shrinking, and it is shrinking in a way that compounds over time.

That has two immediate implications for enterprises.

First, the idea that mobile compromise is rare enough to be deprioritized no longer holds. The risk model shifts from exceptional to plausible. Not constant, but frequent enough that it needs to be accounted for in the same way as endpoint or identity threats.

Second, the signal of compromise becomes harder to detect using traditional approaches. These exploit-based attacks do not rely on app-based malware, known indicators, jailbreaks, or user-visible behavior. They operate within legitimate processes, abuse trusted channels, and exploit the fact that many services and apps rely on the OS to keep their data secure. Once the OS is compromised, that security barrier no longer holds.

The result is a visibility problem, not just a prevention problem.

What comes next is not a single breakthrough moment where mobile exploitation suddenly becomes trivial. It is a gradual normalization of capability. More actors gaining access. More pathways into enterprise systems that originate from a phone that appears secure. So what does this mean for defenders and organizations that build defensive software? 

Defenders do not need to assume worst-case scenarios to justify action. They only need to recognize that the conditions that kept mobile risk contained are changing. And when those conditions change, the strategies built on top of them have to change too. The focus must shift from a patch-only strategy to one that incorporates continuous, independent OS-level detection. Just because Apple has not yet provided an endpoint security framework does not mean you cannot ask for one.

Defenders should also start building up mobile security knowledge and procedures to investigate devices and hunt for mobile threats. Mobile devices need to be treated the same way as desktops. A modern mobile device has full access to corporate email, messaging, credentials, and cloud infrastructure, so it deserves hunting teams, security telemetry, incident response processes, and forensic investigations.

And what about iVerify? What’s the impact?

For iVerify, the approach has always started with the actual threat rather than theoretical models. That meant developing mobile threat hunting and forensic investigation capabilities grounded in compromised devices, and using the behaviors and indicators from those investigations to identify similar activity across a much larger device population.

A meaningful part of that work has been reducing the friction involved in collecting relevant data. Getting a sysdiagnose off a device used to be a significant barrier. Making that process accessible to users directly changed what was possible at scale. It allowed us to detect advanced attacks where traditional approaches failed simply because we had access to data points that mattered.

The other side of that work is understanding iOS well enough to use its constraints in your favor. The same architecture that restricts what attackers and apps can do still produces observable side effects at the OS level. That is the surface we work from.

The assumption underlying all of it was that these attacks were more widespread than public disclosures suggested. Coruna and DarkSword confirmed that. We would be glad to be wrong about what comes next but based on what we are seeing, we do not expect to be.

