
The History of iOS Exploits: Apple’s Flawed Security Paradigm

Matthias Frielingsdorf and Mateusz Krzywicki

There is a lot of hype around Mythos and OpenAI’s GPT-5.4-Cyber and how they’re going to change the security industry and basically turn everyone into a world-class hacker. As is typical with new models and technologies, separating genuine impact from overblown hype can be challenging, especially regarding the security of iOS devices.

If you’re looking for more clarity, these blog posts are for you. In the first, we’ll explain how iOS exploits are developed, and in the second, we’ll explore why 2026 and frontier security LLMs are poised to change some paradigms, but perhaps not for the reason you might think.

TL;DR

The current iOS security model is built on assumptions that leave it vulnerable to the changing threat landscape, a problem potentially set to be compounded by frontier AI.

  • Complexity to Commodity: Modern iOS exploitation (e.g., Coruna, DarkSword) requires chaining multiple vulnerabilities across WebKit, the sandbox, and the kernel, bypassing sophisticated hardware and software mitigations. This complexity drove exploit development from the public jailbreak community to commercial offensive companies, who guarded techniques and kept them inaccessible to the public and large language models (LLMs).

  • The Flawed Paradigm: Apple's security strategy focused on making exploitation hard and enforcing fast patching, assuming that threats would remain rare and highly targeted. This led to a deliberate decision not to build an endpoint security framework (ESF) layer, preserving user privacy but creating a critical visibility gap.

  • Three Critical Flaws in the Strategy: Apple's defense against widespread threats is fundamentally undermined by its reliance on human behavior and a lack of detection capability:

    1. User Apathy: A significant portion of users (e.g., 25% were not running iOS 26 seven months after release) do not update promptly, and only a fraction use advanced features like Lockdown Mode.

    2. Unscalable Detection: Without ESF, the only way to detect advanced attacks like Pegasus is through slow, unscalable forensic investigation (like civil society's MVT tool), which finds residual artifacts but is easily bypassed by cleaner malware.

    3. Critical Blind Spot: The core flaw is that Apple does not provide a scalable technology to measure the true rarity of attacks. Should exploit costs drop, techniques leak, or threat actors change tactics, Apple lacks the necessary framework to detect the pivot to mass exploitation.

Head straight to part two to see how leaked exploit chains are fueling the age of AI-accelerated iOS attacks.

Understanding the Modern iOS Exploit Chain 

Before we can discuss the effects new frontier models like Mythos and GPT-5.4-Cyber will have on iOS exploitation and security, we need to examine what iOS exploitation looks like today.

Luckily, Coruna and DarkSword have given us plenty of real-world examples over the last couple of months that show how actual attackers approached it. Since both attacks used the browser (WebKit) as their entry point, their chains look largely the same.

| Exploit Step | Coruna (iOS 13 - 17) | DarkSword (iOS 18) |
| --- | --- | --- |
| Safari vulnerability turned into arbitrary memory read/write primitive | x | x |
| Safari PAC bypass - code execution | x | x |
| Safari sandbox escape - code execution outside of the Safari sandbox | x | x |
| Kernel arbitrary memory read/write primitive | x | x |
| Kernel PAC bypass - code execution | x | |
| Kernel SPTM/PPL bypass | x | |

In both the Coruna and DarkSword exploits, attackers had to find vulnerabilities within the WebKit engine powering the Safari web browser, then find bypasses for mitigations such as pointer authentication codes (PAC) to get arbitrary code execution in WebKit. Then they had to find another vulnerability in a different component to get unsandboxed code execution, and finally find a bug inside the kernel to achieve arbitrary kernel read/write access.

In the case of Coruna, the attackers also identified additional vulnerabilities to bypass kernel memory mitigations, allowing them to load unsigned code with arbitrary entitlements. DarkSword’s authors did not attempt this because they found an elegant way to get the job done without attacking the more advanced mitigations.
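As a purely illustrative aside, the chain structure described above can be modeled as data. The following Python sketch contains no exploit logic; the stage names mirror the table above, and the helper function is a hypothetical way of expressing the dependency between stages:

```python
# Purely illustrative model of the chain structure above; no exploit logic.
# Stage names mirror the table; everything else is hypothetical.

CHAIN_STAGES = [
    "safari_memory_rw",    # WebKit bug -> arbitrary read/write in the renderer
    "safari_pac_bypass",   # defeat pointer authentication -> code exec in WebKit
    "sandbox_escape",      # bug in another component -> run outside the sandbox
    "kernel_memory_rw",    # kernel bug -> arbitrary kernel read/write
    "kernel_pac_bypass",   # kernel code execution (Coruna only)
    "sptm_ppl_bypass",     # defeat kernel memory protections (Coruna only)
]

def reaches_kernel_rw(stages_achieved):
    """A chain only reaches kernel read/write if every earlier stage holds."""
    return all(s in stages_achieved for s in CHAIN_STAGES[:4])

# DarkSword stopped at kernel r/w; Coruna continued through SPTM/PPL.
darksword = CHAIN_STAGES[:4]
coruna = CHAIN_STAGES

print(reaches_kernel_rw(darksword))  # True
print(reaches_kernel_rw(coruna))     # True
print(reaches_kernel_rw(["safari_memory_rw"]))  # False
```

The point is structural: miss any single stage and the whole chain collapses, which is exactly why full chains are so expensive to build.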

In short, attackers had to find and exploit vulnerabilities across multiple different components to achieve a full system compromise or exfiltrate data. If we compare this against other platforms, then we could say:

“Yeah, looks pretty similar, just more bugs we need to find - how hard can that be?” 

The Unique Complexity of iOS Exploitation

iOS is unlike most other operating systems. Firstly, while some parts of the OS are open source, many important components are closed source and change frequently, which demands a great deal of reverse engineering to understand.

Secondly, and even more importantly, Apple started to build exploit mitigations into hardware. Hardware mitigations are harder to bypass, and Apple has a lot of them. On top of that, every year Apple ships additional exploit mitigations, analyzes exploit techniques, and tries to kill entire classes of bugs. While many of the bugs themselves might look similar to bugs you find on Linux, Windows, or Android, actually exploiting them to achieve your goals is far more complicated.

Jonathan Levin provides an excellent overview of many of the software and hardware mitigations in his talk at OBTS. The following table is derived from it.

| Year | iOS | Mitigation | Introduced in Hardware with iPhone | Jailbreak Bypass Method |
| --- | --- | --- | --- | --- |
| 2008 - 2011 | < 5 | None | | Get root, disable MACF sysctls |
| 2011 | 5.0 | MACF sysctls r/o in user mode | | Get kernel r/w, patch |
| 2012 - 2015 | 6.0 - 9.x | MACF sysctls ignored; KPP (WatchTower) debuts | | Patch in kernel (possibly racing KPP); CS bypass, amfidebilitate |
| 2016 | 10.0 | KTRR protects kernel code section | iPhone 7 (A10) | Patch kernel data, kernel code exec |
| 2018 | 12.0 | APRR/PPL protect (some) kernel data; PAC largely kills kernel code exec | iPhone XS (A12) | Patch kernel r/w data (zones); JB in user mode |
| 2021 | 15.0 | Read-only (= write-once) kernel zones; CoreTrust | | Physical (PTE) level attacks |
| 2022 | 16.0 | Lockdown Mode, Developer Mode | | CoreTrust, Triangulation (Dopamine) |
| 2023 | 17.0 | GXF, SPTM | iPhone 13 (A15) | Coruna / DarkSword for customization |
| 2024 | 18.0 | Exclaves | iPhone 14 Pro (A16) | DarkSword for customization |
| 2025 | 26.0 | MIE | iPhone 17 (A19) | |

We won’t go into much detail on the different mitigations in this blog, but many of them are designed to make attackers’ lives harder. 

In earlier years, people often criticized Apple for simply trying to make jailbreaking harder. That criticism makes sense: users used jailbreaks to remove Apple’s control over the system and to bypass the App Store, a significant source of income for Apple.

But later mitigations, starting with iOS 12 in 2018, had a significant impact on attackers as well; an impact we can see in current exploit chains, which include bypasses for many of these mitigations.

Many of these mitigations are also unique to iOS, which means finding those bypasses requires a good understanding of the mitigation and the system itself. 

In the early years of iOS (iOS 1 - 13), this work was mostly done by the jailbreak community, which had formed around the goal of freeing devices. For many years it was a constant battle between Apple and the community, often ending in Apple’s defeat as the community “popped” Apple’s latest mitigations. But Apple learned from these defeats and hardened the system further.

Much to the jailbreakers’ dismay, Apple had a huge advantage: every time the community bypassed iOS’s mitigations, Apple immediately had the code available to analyze the issue and develop a patch. And there was no good way around this; jailbreaks were meant for users, so they had to be published.

The Evolution of iOS Hacking: From Community to Commercial

Around 2018 - 2020, offensive companies and offensive research started to boom. Both offensive companies and Apple had an interest in skilled iOS attackers and began hiring them. Additionally, because bypassing all these mitigations and finding bugs had become much more challenging and time-consuming, a single researcher was unlikely to find an entire chain on their own. And even if they could, they wouldn’t publish it; they’d sell the bugs to Apple or an offensive company for millions of dollars.

The last person known to have found an entire chain on their own and published it was Linus Henze at OBTS v5.

His was also the last jailbreak based on bugs and exploits found and published by an individual researcher. Since then, all (partial) jailbreaks for later iOS versions, like TrollStore or Dopamine, have relied on exploits or powerful bugs used by offensive threat actors. TrollStore for iOS > 15.4.1 is based on the CoreTrust vulnerability used by Intellexa for Predator (patched in iOS 17.0 and 16.7). Dopamine up to iOS 16.6 is based on Operation Triangulation. The recent discovery of Coruna and DarkSword enables jailbreaking up to iOS 17.2.1 with Dopamine and partial jailbreaking/customization up to 18.7.

Even though many of the vulnerabilities/bugs used by these actors were patched before, it took public notice or a leak of an exploit for the community to start developing the jailbreak. This is an important change for iOS security and the jailbreak scene. It's not just about the availability of bugs, but also about how to turn a known bug into a working exploit to reach a primitive. 

Techniques became so important that no one published them anymore, and the barrier to entering iOS hacking without prior knowledge became practically insurmountable.

Another huge change from earlier times: many of the elite iOS hackers of that era essentially educated themselves in the community during their school years. Those days are long past, and only in rare cases do exceptionally skilled people (such as Alfie) still manage it. This will have an impact not just on the jailbreaking scene but also on Apple and offensive companies, but that is a topic for a different blog post.

The LLM Gap: Why Earlier AI Models Couldn’t Build Full iOS Exploits

This state also has significant implications for another technology that has gained traction in recent years: large language models (LLMs). 

LLMs like GPT-4.5 started to gain traction in many offensive and defensive companies as they demonstrated the ability to assist in bug finding and exploitation. But earlier LLMs have a significant weakness: they are only as good as their training data and the people instructing them. So in the case of iOS, where LLM developers lack access to the “private knowledge” required to build real exploits and no public training data exists, these tools won’t help anyone without that knowledge bridge the gap. They can, however, assist in finding bugs that still look the same as on other platforms.

There are good public examples of people, such as https://github.com/zeroxjf, who are very effective at finding bugs with LLM assistance. But bugs alone don’t cut it anymore; they make no difference if you can’t exploit them.

Now, before we dive into the changes of 2026 and what Mythos could bring, I’d like to spend some time talking about Apple’s defensive strategy. 

Apple’s Historical Security Paradigm and the Rise of Spyware

We know from other common platforms that malware needs to be able to execute arbitrary code, or control a dedicated process, to get its job done.

On iOS, this either means:

  1. You get code execution inside of WebKit (JavaScript)

  2. You manage to install an app

  3. You manage to exploit a 0-click attack surface

While (1) and (2) are trivial to get started — you just need to lure a user to your website or get them to install your app (sideloading or bypassing App Store review) — the damage you can do is fairly limited, so you still need an exploit to bypass iOS’s mitigations and access sensitive data.

This boils Apple’s defenses* down to:
- Limit sideloading
- Make App Store review solid
- Make exploitation hard
- Stop jailbreaking
- Fix bugs fast and get users to update

*Note: These represent the primary attack categories most relevant to this discussion. Other attack vectors exist, but are generally lower impact or less operationally relevant in the current threat landscape.

By making exploitation really hard, stopping jailbreaks, and fixing bugs fast, Apple thought it could contain the threats and make the iPhone secure. And it worked. Since iOS 9, almost all public jailbreaks have been limited to outdated iOS versions. And because of strict sandboxing and App Store review, “malware in the App Store” was never really a problem, resulting in the famous “What happens on your iPhone, stays on your iPhone” campaign in 2019.

So ultimately, for enterprises and users, the dominant strategy became: 

Just patch your devices, and you are good

It was almost perfect. 

But only almost.

The Flaws in Apple’s Assumption: A Critical Detection Gap

By 2016, another player had become an increasingly painful problem: NSO Group and its infamous Pegasus malware. Because iOS security had become so good, it posed a problem for many law enforcement agencies and governments that needed to inspect phones. Some of them eventually developed exploits on their own, while others relied on tools such as Cellebrite or Magnet Forensics for physical extractions, or on NSO Group, Intellexa, RCS Labs, or Paragon for remote infections.

For Apple itself, this was not necessarily a problem, because the economics were initially on its side. These capabilities were rare and hard to develop, so vendors could charge a premium, one that grew with every new iPhone and iOS version, and that kept the attacks from becoming widespread.

However, over time, commercial spyware proliferated. And, as with every powerful technology, at some point people start abusing it. As usual, civil society was among the first groups to get hit, so it developed the capabilities to detect these attacks.

But why did they have to develop these capabilities? Why couldn’t they just buy an anti-virus like on Windows?       

The answer lies in the iPhone’s strict security and privacy measures. Mass attacks were simply not happening on the iPhone. Even if an app passed App Store review, strict sandboxing made it hard for it to do any real damage without an exploit. Apart from jailbroken iPhones, there are almost no documented cases of mass malware actually harming the phone. (Exceptions are the 2019 attack against the Uyghur community discovered by Google’s Threat Intelligence Group / Project Zero, and now Coruna and DarkSword in 2026.)

So why should Apple develop a security framework for attacks with no public track record, particularly when deeper device monitoring cuts directly against its privacy stance? 

There was simply no incentive to do so. At one point, Apple even went so far as to ban apps from the App Store that had the word jailbreak in them. That left many security companies in a tough spot: with such limited capabilities, they had to focus on detecting jailbreaks (which is easier, as jailbreaks make device-wide changes that bypass Apple’s security mitigations for all apps, while malware has no interest in doing so).

With that in mind, it’s no surprise that civil society had to develop its own tooling to find the attacks being waged against it, especially given that it could not rely on the aid of legacy mobile threat defense (MTD) companies. And what started as a handful of cases turned out to be far more widespread.

In 2021, Amnesty published its Pegasus Report, stating that over 50,000 people were likely targeted with Pegasus (among them politicians such as President Macron, many journalists, human rights activists, businesspersons, and family and friends). In addition, they published their tool, MVT, and the indicators of compromise they used to determine that the phones were hacked with Pegasus. 

So what made their methodology special? They used encrypted iTunes backups, full file system extractions, and diagnostic files such as iPhone sysdiagnoses to find traces of infection.

While Pegasus used several advanced exploits, the implants themselves were not sophisticated at all: no obfuscation, no cleanup, just persistent artifacts left behind as the implant roamed the system as it pleased. They weren’t sophisticated because they didn’t have to be. The lack of visibility meant the operators did not have to worry about which process name to choose, which files were left behind, or whether they were leaving traces in databases. Access to all of this is guarded by the sandbox, so they could operate with impunity. That changed when people at Citizen Lab and Amnesty International began sifting through the forensics and found anomalous processes. NSO Group got the message, though, and step by step cleaned up more of its traces.
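As a rough sketch of how this kind of indicator matching works in forensic tools like MVT, the following Python fragment compares process names extracted from device data against a known-bad list. The indicator values and process list are invented for illustration; they are not real Pegasus IOCs:

```python
# Minimal sketch of indicator-of-compromise (IOC) matching against process
# names, in the spirit of forensic tools like MVT. All names are invented.

KNOWN_BAD_PROCESSES = {"badprocd", "stealthupdated"}  # hypothetical IOCs

def match_process_iocs(observed_processes):
    """Return the observed process names that match known indicators."""
    return sorted(set(observed_processes) & KNOWN_BAD_PROCESSES)

# Process names as they might be extracted from a sysdiagnose or backup.
observed = ["launchd", "SpringBoard", "badprocd", "backboardd"]
print(match_process_iocs(observed))  # ['badprocd']
```

Note how brittle this is by design: rename the process and the exact-match lookup silently finds nothing.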

The important takeaway here is that forensic data sources contained the traces needed to find these spyware attacks. And once you have indicators, some of them (where files are involved) can also be checked by apps on the phone, similar to how many apps check for jailbreaks. You might not be able to read a file, but if you get a different error depending on whether the file exists, that alone reveals important information.

There is one limitation, though: it requires knowing the exact name and path of the file, something a threat actor can, and did, easily change. So these IOCs have very limited value, and to this day, finding advanced attacks on iOS requires different data sources, a different approach, or an endpoint security framework.

The Remaining Flaws in Apple’s Security Paradigm

Commercial spyware was not limited to NSO Group; reports of its abuse surfaced from many more companies, and Apple had to react. 

In response, Apple introduced Threat Notifications in iOS 14 and began disclosing when vulnerabilities had been exploited in the wild. In iOS 16, it introduced Lockdown Mode, an added security mode for targeted persons that reduces the attack surface. But still no endpoint security framework.

And once again, why should they, if the attacks are limited to a small group of targeted persons?

So to sum this up, Apple's and the industry's security paradigm was: 

  • iOS exploits are hard to achieve and rare.

Thus: 

  • You don't need mobile endpoint security, with its attendant risk of privacy invasion

  • Just patch your devices and you are good

  • Use Lockdown Mode on targeted devices

But this approach has huge logical flaws.

The first is that Apple assumes users will do the right thing and update their iPhones because they want the shiny new iOS version and protection against the latest vulnerabilities. And while a majority of iPhone users update regularly, a significant portion does not. 

Seven months after the release of iOS 26, 25% of devices were not running the latest version, which amounts to a couple of hundred million devices. In fact, 5% of devices are still on iOS 17.3 or older.

The second flawed assumption is that people would accept Lockdown Mode’s reduced usability in exchange for its advanced security. A company like Apple, with its predominant focus on usability, should know that this will never be the case.

In our work with many people impacted by advanced spyware, it’s clear that only a fraction actually used Lockdown Mode, even though logic dictates they should. In reality, they still use 4-digit passcodes, run iOS 16.6, and have been confirmed infected with Pegasus three times.

You cannot build a security principle on the idea of humans doing the right thing.

And then there is another huge flaw, one that finally showed up in 2026.

How do you know these attacks are rare? And how do you know when that changes, if you never built the technology to detect it at scale?

And that raised a lot of other questions:

  • Are you just building your assumptions on public reports? Public reports that require forensic investigation, a process that is slow, demands expertise, and simply does not scale.

  • Are you just building your paradigm on the fact that exploits are expensive?

    • What happens if these exploits leak? 

    • What happens if threat actors change their tactics or just publish infection links on X/Twitter? 

    • What happens if NSO gets hacked? 

    • What happens if a commercial spyware company gets into financial trouble, or gets sanctioned and can no longer sell to Western countries?

    • Would any of them start to sell exploits to organized crime? Maybe not 0-days, but maybe N-days that have less value because they are already patched. 

  • What happens if the cost of finding bugs and developing exploits gets significantly reduced?

Any of these scenarios could flip iOS threats from targeted to mass exploitation. But how would you know, if you lack the tools and framework to detect the shift?

To understand how these critical techniques went mass-market and changed the mobile threat landscape, continue reading part two: How Leaked Exploit Chains Fueled the Age of AI-Accelerated iOS Attacks.


Get the independent detection layer required to defend your mobile endpoints against AI-accelerated exploits with iVerify Enterprise

Get Our Latest Blog Posts Delivered Straight to Your Inbox


Subscribe to our blog to receive the latest research and industry trends delivered straight to your inbox. Our blog content covers sophisticated mobile threats, unpatched vulnerabilities, smishing, and the latest industry news to keep you informed and secure.

Subscribe
