Why Apple’s walled garden is no match for Pegasus spyware


    You will, by now, have heard about Pegasus. It’s the brand name for a family of spyware tools developed by the NSO Group, an Israeli hacker-for-hire outfit that markets and licenses its wares to intelligence agencies, law enforcement and militaries around the world.


    An investigation by the Guardian and 16 other media organisations around the world into a massive data leak suggests widespread abuse of NSO Group’s hacking software by government customers. The company insists it is intended for use only against criminals and terrorists but the investigation has revealed that journalists, human rights activists and opposition politicians are also being targeted. Since our phones are increasingly external brains, storing our lives in digital form, a successful deployment of Pegasus can be devastating. Messages, emails, contact details, GPS location, calendar entries and more can be extracted from the device in a matter of minutes.

    On Sunday, the Guardian and its media partners began to publish the results of the investigation into the NSO Group, Pegasus, and the people whose numbers appear on the leaked list:

    The Guardian and its media partners will be revealing the identities of people whose number appeared on the list in the coming days. They include hundreds of business executives, religious figures, academics, NGO employees, union officials and government officials, including cabinet ministers, presidents and prime ministers.

    The list also contains the numbers of close family members of one country’s ruler, suggesting the ruler may have instructed their intelligence agencies to explore the possibility of monitoring their own relatives.

    The presence of a number in the data does not reveal whether there was an attempt to infect the phone with spyware such as Pegasus, the company’s signature surveillance tool, or whether any attempt succeeded. The list contains a small number of landlines and US numbers, which NSO says are “technically impossible” to access with its tools – a clue that some targets were selected by NSO clients even though they could not be infected with Pegasus.

    There’s a lot more to read on our site, including the fact that the numbers of almost 200 journalists were identified in the data; links to the killing of Jamal Khashoggi; and the discovery that a political rival of Narendra Modi, the autocratic leader of India, was among those whose number was found in the leaked documents.

    But this is a tech newsletter, and I want to focus on the tech side of the story. Chiefly: how the hell did this happen?

    The messages are coming from inside the house

    Pegasus affects the two largest mobile operating systems, Android and iOS, but I’m going to focus on iOS here for two reasons: one is a technical problem that I’ll get to in a bit, but the other is that, although Android is by far the most widely used mobile OS, iPhones have a disproportionately high market share among many of the demographics targeted by the customers of NSO Group.

    That’s partly because they exist predominantly in the upper tiers of the market, with price tags that keep them out of the reach of many of the world’s smartphone users but still within the reach of the politicians, activists and journalists potentially targeted by governments around the world.

    But it’s also because they have a reputation for security. Dating back to the earliest days of the mobile platform, Apple fought to ensure that hacking iOS was hard, that downloading software was easy and safe, and that installing patches to protect against newly discovered vulnerabilities was the norm.

    And yet Pegasus has worked, in one way or another, on iOS for at least five years. The latest version of the software is even capable of exploiting a brand-new iPhone 12 running iOS 14.6, the newest version of the operating system available to normal users. More than that: the version of Pegasus that infects those phones is a “zero-click” exploit. There is no dodgy link to click, or malicious attachment to open. Simply receiving the message is enough to become a victim of the malware.

    It’s worth pausing to note what is, and isn’t, worth criticising Apple for here. No software on a modern computing platform can ever be bug-free, and as a result no software can ever be fully hacker-proof. Governments will pay big money for working iPhone exploits, and that motivates a lot of unscrupulous security researchers to spend a lot of time trying to work out how to break Apple’s security.

    But security experts I’ve spoken to say that there is a deeper malaise at work here. “Apple’s self-assured hubris is just unparalleled,” Patrick Wardle, a former NSA employee and founder of the Mac security developer Objective-See, told me last week. “They basically believe that their way is the best way.”

    What that means in practice is that the only thing that can protect iOS users from an attack is Apple – and if Apple fails, there’s no other line of defence.

    Security for the 99%

    At the heart of the criticism, Wardle accepts, is a solid motivation. Apple’s security model is based on ensuring that, for the 99% – or more – for whom the biggest security threat they will ever face is downloading a malicious app while trying to find an illegal stream of a Hollywood movie, their data is safe. Apps can only be downloaded from the company’s own App Store, where they are supposed to be vetted before publication. When they are installed, they can only access their own data, or data a user explicitly decides to share with them. And no matter what permissions they are given, a whole host of the device’s capabilities are permanently blocked off from them.
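
    To make that model concrete, here is a deliberately toy sketch, in Python, of the policy logic the paragraph describes. The resource names and rules are invented for illustration – this is not Apple’s implementation, just the shape of the idea: an app can reach its own container, anything the user has explicitly granted, and nothing else.

```python
from dataclasses import dataclass, field

# Capabilities no app may ever touch, whatever permissions it holds.
# These names are invented for illustration.
ALWAYS_BLOCKED = {"kernel_memory", "baseband", "other_process_memory"}

@dataclass
class App:
    bundle_id: str
    user_granted: set = field(default_factory=set)

    def may_access(self, resource: str) -> bool:
        if resource in ALWAYS_BLOCKED:
            return False                          # permanently off-limits
        if resource.startswith(f"container/{self.bundle_id}/"):
            return True                           # the app's own sandbox
        return resource in self.user_granted      # explicit user grants only

app = App("com.example.notes", user_granted={"photos"})
assert app.may_access("container/com.example.notes/notes.db")    # own data
assert app.may_access("photos")                                  # user said yes
assert not app.may_access("container/com.other.app/secrets.db")  # another app's sandbox
assert not app.may_access("kernel_memory")                       # always blocked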

    But if an app works out how to escape that “sandbox”, then the security model is suddenly inverted. “I have no idea if my iPhone is hacked,” Wardle says. “My Mac computer on the other hand: yes, it’s an easier target. But I can look at a list of running processes; I have a firewall that I can ask to show me what programs are trying to talk to the internet. Once an iOS device is successfully penetrated, unless the attacker is very unlucky, that implant is going to remain undetected.”
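
    To see the asymmetry Wardle is pointing at, consider how little it takes to do that kind of self-inspection on a Mac. The Python sketch below simply shells out to the standard `ps` utility and flags anything not on a – here entirely made-up – allowlist; it illustrates the visibility a Mac owner has, not a real malware scanner. On iOS, third-party code has no supported way to enumerate other processes at all.

```python
import subprocess

# A stand-in allowlist, invented for illustration; a real tool would
# compare against signed baselines, not a hard-coded set of names.
KNOWN_GOOD = {"launchd", "kernel_task", "WindowServer", "Finder", "ps"}

def running_processes() -> list:
    """List the command names of all running processes via `ps`.

    `ps -axo comm=` prints one command per line with no header; this
    works on macOS (and most Unixes) but has no iOS equivalent.
    """
    out = subprocess.run(["ps", "-axo", "comm="],
                         capture_output=True, text=True, check=True)
    return sorted({line.strip() for line in out.stdout.splitlines() if line.strip()})

for proc in running_processes():
    name = proc.rsplit("/", 1)[-1]  # `comm` may include a full path on macOS
    if name not in KNOWN_GOOD:
        print(f"unrecognised process: {proc}")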

    A similar problem exists at the macro scale. An increasingly common way to ensure critical systems are protected is to use the fact that an endless number of highly talented professionals are constantly trying to break them – and to pay them money for the vulnerabilities they find. This model, known as a “bug bounty”, has become widespread in the industry, but Apple has been a laggard. The company does offer bug bounties, but for one of the world’s richest organisations, its rates are pitiful: an exploit of the sort that the NSO Group deployed would command a reward of about $250,000, which would barely cover the cost of the salaries of a team that was able to find it – let alone have a chance of out-bidding the competition, which wants the same vulnerability for darker purposes.

    And those security researchers who do decide to try to help fix iPhones are hampered by the very same security model that lets successful attackers hide their tracks. It’s hard to successfully research the weaknesses of a device that you can’t take apart physically or digitally.

    In a statement, Apple said:

    Apple unequivocally condemns cyberattacks against journalists, human rights activists, and others seeking to make the world a better place. For over a decade, Apple has led the industry in security innovation and, as a result, security researchers agree iPhone is the safest, most secure consumer mobile device on the market. Attacks like the ones described are highly sophisticated, cost millions of dollars to develop, often have a short shelf life, and are used to target specific individuals. While that means they are not a threat to the overwhelming majority of our users, we continue to work tirelessly to defend all our customers, and we are constantly adding new protections for their devices and data.

    There are ways round some of these problems. Digital forensics does still work on iPhones – despite, rather than because of, Apple’s stance. In fact, that’s the other reason why I’ve focused on iPhones rather than Android devices here. While the NSO Group was good at covering its tracks, it wasn’t perfect. On Android devices, the relative openness of the platform seems to have allowed the company to erase all its traces, meaning that we have very little idea which of the Android users targeted by Pegasus were successfully infected.

    But iPhones are, as ever, trickier. There is a file, DataUsage.sqlite, that records what software has run on an iPhone. It’s not accessible to the user of the device, but if you back up the iPhone to a computer and search through the backup, you can find the file. The records of Pegasus had been removed from that file, of course – but only once. What the NSO Group didn’t know, or perhaps didn’t spot, is that each piece of software that runs is recorded twice in that file. By comparing the two sets of records and looking for inconsistencies, Amnesty’s researchers were able to spot when the infection landed.
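
    Amnesty’s researchers published their methodology along with an open-source Mobile Verification Toolkit; the Python sketch below is a simplified reconstruction of one way to run the cross-check described above, not their actual code. It locates DataUsage.sqlite inside an unencrypted iTunes/Finder backup via the backup’s Manifest.db index, then looks for usage rows whose companion process record has been deleted. The table and column names (Files, ZPROCESS, ZLIVEUSAGE, ZHASPROCESS) and the WirelessDomain path follow public descriptions of these databases and should be treated as assumptions rather than a specification.

```python
import sqlite3
import sys
from pathlib import Path

def find_datausage(backup_dir: Path) -> Path:
    """Resolve DataUsage.sqlite's hashed file name via Manifest.db."""
    with sqlite3.connect(backup_dir / "Manifest.db") as manifest:
        row = manifest.execute(
            "SELECT fileID FROM Files WHERE domain = ? AND relativePath = ?",
            ("WirelessDomain", "Library/Databases/DataUsage.sqlite"),
        ).fetchone()
    if row is None:
        raise FileNotFoundError("DataUsage.sqlite not found in this backup")
    file_id = row[0]
    # Modern backups shard files into subdirectories named after the
    # first two characters of the hashed file name.
    return backup_dir / file_id[:2] / file_id

def orphaned_usage_rows(db_path: Path) -> list:
    """Find usage rows whose companion process record is gone."""
    with sqlite3.connect(db_path) as db:
        rows = db.execute(
            """
            SELECT lu.Z_PK
            FROM ZLIVEUSAGE AS lu
            LEFT JOIN ZPROCESS AS p ON lu.ZHASPROCESS = p.Z_PK
            WHERE p.Z_PK IS NULL
            """
        ).fetchall()
    return [pk for (pk,) in rows]

if __name__ == "__main__":
    backup = Path(sys.argv[1])  # e.g. .../MobileSync/Backup/<device id>
    for pk in orphaned_usage_rows(find_datausage(backup)):
        print(f"usage row {pk} points at a deleted process record")
```

    Any orphaned rows this prints are exactly the kind of inconsistency described above: one half of a double-entry record survived after the other half was scrubbed.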

    So there you go: the same opacity that makes Apple devices generally safe makes it harder to protect them when that safety is broken. But it also makes it hard for the attackers to clean up after themselves. Perhaps two wrongs do make a right?

    If you want to read more please subscribe to receive TechScape in your inbox every Wednesday.


