Intrepidus Group
Insight

Tag Archives: risk

Strengths and Weaknesses in Apple’s MDM System

Posted: August 5, 2011 – 9:21 pm | Author: | Filed under: bugs, iOS, Mobile Device Management, Tools

Yesterday, for the first time, I headlined a talk at a major security conference. It was quite the experience, and not nearly as nerve-wracking as I might’ve expected. Actually, it was pretty easy — I took the approach that “this is some cool stuff I found, let me tell you about it” and kept a conversational mindset. Don’t know if that’s what experienced presenters do, but it worked for me, and I think I pulled it off. Achievement: UNLOCKED.

For those who couldn’t make it to Black Hat, I wanted to post a short summary of my talk here, as well as links to the slides and white paper. So here we go!

When iOS 4.0 was released, Apple included some new features for remote management of iOS devices. This Mobile Device Management (MDM) is aimed at enterprises, and provides them with the ability to remotely configure and control all the devices within their organization. MDM includes three key features:

  • Configuration: Install and remove configuration and provisioning profiles
  • Query: Retrieve specific information from devices such as software versions, application lists, etc.
  • Control: Remotely lock, unlock, and wipe devices

When the MDM system was first announced, developers expected documentation to be released — but it never came. Certainly, the protocol was shared with the commercial vendors selling MDM implementations, but it was never publicly released, not even within the developer program.

It’s certainly Apple’s prerogative to keep these APIs private, but doing so makes it difficult for people in the security field to answer the simple question: “How secure is it?” And, as consultants supporting the deployment of iOS devices across large enterprises, we get asked that question. It’d be nice to be able to answer it, so we did some poking, prodding, and a whole lot of educated guessing, and now have a (reasonably) complete picture of the MDM protocol.

The attached white paper documents the protocol, as much as is possible. It describes how the server wakes up clients via Push Notification messages, what the device says to the server when it connects, how commands are sent to the devices, and finally, how the responses make their way back to the server. In short, enough information is presented that you can actually create your own MDM server. In fact, it even includes source code to a very simple MDM server you can use for your own research. (But don’t even think of using it as a substitute for a real MDM system — it’s just there to demonstrate the protocol).
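To give a flavor of what’s in the paper, here’s a minimal sketch (in Python) of what a server-to-device command looks like as we reconstructed it: an XML property list carrying a CommandUUID and a Command dictionary whose RequestType names the operation. The key names reflect our reverse-engineering, so treat them as illustrative rather than official.

```python
import plistlib
import uuid

# A minimal MDM command, as the server would deliver it when the device
# checks in: a plist with a CommandUUID and a Command dictionary whose
# RequestType selects the operation. Key names are as reconstructed in
# the white paper, not from official documentation.
command = {
    "CommandUUID": str(uuid.uuid4()),
    "Command": {
        "RequestType": "DeviceInformation",
        "Queries": ["UDID", "DeviceName", "OSVersion", "ModelName"],
    },
}

print(plistlib.dumps(command).decode("utf-8"))
```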

So, to answer the first question, “How secure is Apple’s MDM?”, I’d have to say, it’s not too bad. The protocol itself is fairly straightforward and, with one exception, doesn’t appear to have any real security flaws. Unfortunately, the implementation on the iOS devices I tested (4.2.x and 4.3.x on the iPhone 3GS and iPad 1 and 2) has some flaws and/or deficiencies that could lead to denial of service or disclosure of data. I’ll discuss both of these briefly here, but for full details see the attached slides.

The only real protocol problem that I believe exists is that the EraseDevice command (used to remotely wipe a device) doesn’t require any authentication from the server. As long as the device has received the EraseDevice command via the MDM connection, it will honor it. If an attacker is able to get a device to communicate with a rogue MDM server using traditional Man-in-the-Middle (MITM) techniques, then they could cause that device to erase itself the next time it checks in with MDM. It might be nice if this command required, for example, the UnlockToken used to clear the device’s passcode, as evidence that the MDM server issuing the wipe command is, in fact, authorized to do so. I’d actually been so convinced this would be a requirement that when I was able to wipe a device without additional authentication, it left me stunned for several minutes.
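For illustration, here’s what that wipe command looks like in our reconstruction. Note that, beyond the transport-level trust discussed below, nothing in it proves the issuing server is authorized:

```python
import plistlib
import uuid

# The remote-wipe command as reconstructed: no UnlockToken, no nonce,
# no signature; nothing here authenticates the issuing server. Any
# server the device is willing to talk to can send this.
erase = {
    "CommandUUID": str(uuid.uuid4()),
    "Command": {
        "RequestType": "EraseDevice",
    },
}

print(plistlib.dumps(erase).decode("utf-8"))
```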

On the other hand, it’s possible that Apple deliberately chose the less strict form of this command. When an organization determines that a device needs to be wiped, it’s likely because it’s been stolen or lost, and the erasure of the data on that device is of paramount importance. So the risk of accidental erasure, or malicious erasure by 3rd parties, might be acceptable given the requirement that in an emergency, nothing stand in the way of wiping the device. And besides, you’re supposed to be backing these things up regularly anyway.

The implementation flaws I discovered center mostly around weak authentication, both for communication with the MDM server itself, and with regards to authentication of server commands. In the former case, it appears that the client on the device will accept any valid certificate. That is, as long as the server it’s speaking with has a certificate which has been signed by an appropriate authority (or signed by a certificate authority which is recognized by the device), the connection will be permitted. This obviously opens the door to MITM attacks against MDM-managed devices.
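To make the gap concrete, here’s a small Python sketch of the kind of certificate pinning check that would close this hole. It is not what iOS does, and the hostname and fingerprint are placeholders:

```python
import hashlib
import ssl

# Sketch of the check the MDM client apparently doesn't perform: pin the
# one legitimate server certificate instead of accepting any certificate
# that chains to a trusted CA. The fingerprint below is a placeholder.
PINNED_SHA256 = "0" * 64  # hex SHA-256 of the real server's DER certificate

def server_matches_pin(host, port=443):
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest() == PINNED_SHA256
```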

The other key issue is that the device doesn’t appear to authenticate the commands themselves. A checkbox when creating an MDM enrollment profile offers to “sign messages,” but even when checked, the device happily accepted plain, unsigned commands. Forcing the server to sign commands, and adhering to a strict policy for validating those signatures, would make MITM attacks that get past the transport-layer protections significantly more difficult.
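As a sketch of what strict command authentication might look like: the server signs each command, and the device refuses anything that doesn’t verify against the certificate it enrolled with. The real protocol would presumably use CMS/PKCS#7 detached signatures over the plist; plain RSA-PSS is used here only to show the shape of the check.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Server side: sign the serialized command plist (placeholder bytes here).
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
command_plist = b"...serialized command plist..."
signature = server_key.sign(command_plist, PSS, hashes.SHA256())

# Device side: verify before executing; raises InvalidSignature if the
# command was forged or tampered with in transit.
server_key.public_key().verify(signature, command_plist, PSS, hashes.SHA256())
```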

A final interesting issue I uncovered, by chance, was an Evil Maid attack. Briefly, the “traditional” Evil Maid attack describes how an attacker with physical access to a device secured with full disk encryption (FDE) could boot that device from external media to install malware, such as a key logger, into the boot partition. This malware would let the attacker recover the user’s FDE password, which they could then use to unlock the device on a subsequent visit to the hardware.

Apparently, sending a copy of the original MDM enrollment profile to an enrolled device will cause that device to re-enroll in MDM. This re-enrollment also causes the device to create a new UnlockToken, and send that to the server. If the attacker has a rogue server set up using MITM, then that token would go to the attacker, and they’d then be able to clear the device’s passcode and access protected information on the device.

This attack is certainly not easy, and relies on some pretty specific boot exploits using DFU mode and tethered access to the locked filesystem. To date, I do not believe such an exploit is available for the iPad 2, so those devices should be protected (at least, from the specific variant of the attack that I’ve experimented with).

Fortunately, this attack can be eliminated with the stronger session- and command-level authentication steps suggested above, along with protection (or elimination) of some key data on the device. Full details, again, are in the slides.

What’s the bottom line, then? Do these weaknesses make MDM a significant risk for enterprises? In my opinion, certainly not. The benefits of MDM far exceed the risks introduced by these vulnerabilities. If, however, your organization stores sensitive information on iOS devices, then care should be taken whenever those devices are left unattended, whether in a hotel or even within the office environment. But then, the same care should be applied to any device with sensitive information — mobile, desktop, or laptop.

So in the end, I hope that our work can help you better understand how MDM works, and properly evaluate its risks and benefits to your organization. For those of you who made it to my talk, thanks for coming and I hope you enjoyed it! For those who didn’t, I hope the attached material will prove an acceptable substitute (and, no, I don’t believe that Black Hat makes videos available to the general public). If anyone has any questions about this research, please feel free to contact me.

david.

[Update -- slides, whitepaper, and code are now available on Github: github.com/intrepidusgroup/imdmtools.]


Quantifying the Unknown: Measuring a Theoretical SecurID Attack

Posted: March 22, 2011 – 11:03 am | Author: | Filed under: Cryptography, Passwords, Risk Analysis

It’s been a few days since the attack on RSA / SecurID was made public. Last Friday, I considered potential risks the compromise may pose to RSA’s customers. Since then, the security world has been buzzing with analysis of risks, worst-case scenarios, and second-guessing of the official RSA press releases.

Late yesterday, RSA released additional information via their SecurCare system. However, as this is only available to RSA customers, I haven’t been able to directly review it. Rich Mogull, at Securosis, posted his take in an update last night, including some very good, specific recommended actions. I’d like to take a moment to present some back-of-the-envelope numbers relating to a theoretical attack scenario, especially in light of what (little) was just revealed by RSA.

Briefly, we still don’t know what was compromised in the breach, nor do we have any real way to quantify the risk that may present to users of RSA tokens. The just-released note included the following:

“To compromise any RSA SecurID deployment, the attacker needs to possess multiple pieces of information about the token, the customer, the individual users and their PINs. Some of this information is never held by RSA and is controlled only by the customer. In order to mount a successful attack, someone would need to have possession of all this information.”

This still doesn’t tell us much. Which information is “controlled only by the customer”? The user’s password and PIN, certainly, but what else? The token’s serial number? Its seed value? Beyond those items, what else is left to have been compromised at RSA?

Again, my fear remains that seeds, in some form, were compromised. If that is the case, then is there any way to estimate how easy it could be to attack one or more accounts using compromised seed information? Though there’s quite a bit we don’t know, it is still possible to make a rough estimate of the effort required, with a few simple (and, I believe, easily defensible) assumptions.

A quick disclaimer: I do not have knowledge of the algorithms, aside from what I’ve been able to glean from others’ attempts at reverse-engineering them, and most of that dates to the older version of the system. So it’s quite possible that some of these assumptions will be wrong. But based on what I’ve been able to learn, I’m confident that I’m at least on the right track. It is my hope that this thought exercise will give RSA customers a more concrete understanding of the system, so they can accurately gauge their own levels of risk.

First, let’s remember that any attacker will need to have more than just a list of seeds. They’ll need to identify the specific seed in use by their target. And they’ll need the target’s PIN and possibly a password as well. A key reason for using this kind of authentication is the presumption that PINs and passwords can be collected through other means, such as phishing or man-in-the-middle attacks.

Assumption 1: The attacker already has all required information (userid, PIN, password, recent valid tokencodes) for one or many targets at your organization.


RSA supports many different kinds of tokens — and further research indicates that some of these have slightly different features. Tokens can create 6- or 8-digit tokencodes, for example. Soft tokens may be configured to incorporate the user’s PIN directly into the resultant tokencode, while hardware tokens keep it separate. It’s even possible that different models of hardware tokens use slightly different algorithms, with the differences all tracked in the authentication server. Of course, none of these algorithms have been publicly released by RSA. Determining which token types are in use at the target’s organization might be possible through network-based attacks, or even simple visual observation of token fobs hanging from lanyards.

Assumption 2: The attacker is well-funded and has already acquired or reverse-engineered the algorithm used by a large fraction of your organization’s tokens.


As has been described before, current tokens use a 128-bit seed value for their cryptographic algorithm, which is far too large to brute force. But a compromise that reveals, for example, all seeds which have ever been issued could reduce that search space significantly. However, at least some soft tokens can create seeds dynamically, in a secure exchange with private authentication servers. These seeds need never be sent to RSA (though it’s possible they are; again, that’s not entirely clear).

Assumption 3: The attacker has acquired seeds used by 40 million already-issued hardware tokens, which are in use by a large proportion of your user population.


Finally, because the algorithms are unpublished, we still do not know whether tokens use actual clock time, or some internal ticker that’s simply synchronized with the server. Because this can have a significant impact on the speed needed for the attack, we’ll consider both possibilities.

So, the attacker has the algorithm, a list of seeds, and one or many userids, PINs, and associated valid tokencodes. How do they identify the seed used by a given person, and thus, gain the ability to replicate their token and impersonate that user at will?

Let’s assume they have a single seed, and they’re 99% sure it’s for a given user. But they want to double-check. Take the tokencode they’ve already acquired, and look at the timestamp from when it was used. Put that timestamp, and the potential seed, into the algorithm, and see if the result matches the tokencode you have. If it does, bingo. If it doesn’t, maybe back the timestamp up 30 seconds, and also 30 seconds the other way, just to account for clock skew.

Total effort: 3 calls of the algorithm.
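In code, that check might look like the following sketch. The real algorithm is unpublished, so a keyed hash stands in for it here; only the shape of the three-call test matters.

```python
import hashlib
import hmac

def tokencode(seed, interval):
    # Stand-in for RSA's unpublished algorithm: any keyed function from
    # (128-bit seed, 30-second interval) to six digits has the same shape.
    digest = hmac.new(seed, interval.to_bytes(8, "big"), hashlib.sha256).digest()
    return "%06d" % (int.from_bytes(digest[:4], "big") % 1_000_000)

def seed_matches(seed, observed_code, observed_unixtime):
    # The three-call check: the interval the code was captured in, plus
    # one interval either side to allow for clock skew.
    i = observed_unixtime // 30
    return any(tokencode(seed, t) == observed_code for t in (i - 1, i, i + 1))
```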

How long would this take? It’s impossible to tell without understanding how the algorithm works. If the system performs many successive encryption operations, then it could be very slow. However, hardware tokens constantly compute new tokencodes from a small battery that lasts for years; an expensive algorithm would drain that battery far too quickly, so the computation is almost certainly cheap.

Assumption 4: The algorithm is very nearly equivalent to a simple AES-128 encryption of some small data set.


According to Wikipedia, a single AES-128 encryption of a 25-byte payload should take about 1 microsecond (for certain modes of AES). That’s probably on the high-end for the payload size, but since this is just a back-of-the-envelope calculation, we’ll go with it.

Assumption 4a: The algorithm can perform 1 million tokencode computations per second.


So, to verify a single seed, against a single observed tokencode, with a known absolute time for the tokencode, should take 3 microseconds.

Now, what if the token doesn’t use absolute clock time, but instead uses a counter unique to each token? A single year contains (approximately) 525,000 minutes. A token which changes every 30 seconds, therefore, goes through 1.05 million tokencodes in a year. If a token has a useful lifetime of 5 years, that’s just over 5 million values. In practice, tokens probably only last about 3 years or so, but this is a good upper limit.

To verify a tokencode using an internal counter, then, the attacker would need to have two consecutive tokencodes (not difficult, given assumption #1), and would need to run through all 5 million potential counter values. If the algorithm can compute a million tokencodes per second, that’s 5 seconds to test. (Actually, marginally more, as whenever a ‘hit’ is encountered, the second tokencode will need to be computed to verify the first wasn’t a coincidental match.)
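A sketch of that counter search, reusing the stand-in tokencode() from the earlier sketch; note that the second code is only computed when the first one hits:

```python
def find_counter(seed, code1, code2, max_intervals=5_000_000):
    # Scan ~5 million 30-second intervals (a 5-year token lifetime) for
    # the point where two consecutive observed tokencodes line up. The
    # second code is only checked on a hit, ruling out coincidences.
    for i in range(max_intervals):
        if tokencode(seed, i) == code1 and tokencode(seed, i + 1) == code2:
            return i
    return None
```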

This is all assuming the attacker has a pretty good idea of the target’s seed, which is pretty unlikely. How long, then, would it take to run the above tests against 40,000,000 compromised seeds? Simply take the two figures (3 microseconds and 5 seconds) and multiply by the size of the seed list.

# Seeds       Absolute time (clock-based)    Relative time (token-based timer)
1             3 microseconds                 5 seconds
40 million    2 minutes                      6.3 years


As you can see, if the tokens use a real, clock-based time, then an attack could complete in just a few minutes. If each token has its own clock (synchronized with the server at activation), then an attack across 40 million seeds takes much longer. However, this assumes a single CPU. In an 8-core system, it could conceivably take 1/8 as long, or 289 days. Put together a rack with a dozen such systems, and now the attack is only 24 days. Even if we double the guess, that’s well within the time defined by a typical password aging policy, and certainly worth the effort to a well-funded attacker.
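The arithmetic behind the table and the scaling above is easy to check; here is a back-of-the-envelope script using the assumptions numbered earlier:

```python
CALLS_PER_SECOND = 1_000_000   # Assumption 4a
SEEDS = 40_000_000             # Assumption 3

clock_secs = SEEDS * 3 / CALLS_PER_SECOND            # 3 calls per seed
counter_secs = SEEDS * 5_000_000 / CALLS_PER_SECOND  # ~5M intervals per seed

print(clock_secs / 60, "minutes")                    # 2.0 minutes
print(counter_secs / (365 * 86_400), "years")        # ~6.3 years
print(counter_secs / 8 / 86_400, "days on 8 cores")  # ~289 days
```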

Once again, the bottom line is that the lack of specific details from RSA, and the obscurity of their underlying algorithms, makes it nearly impossible to know what the true risk is. I’ve made a few assumptions here, but I don’t think they’re far off. If, as many are growing to suspect, some large number of token seeds have been compromised, then I believe the risk is real, and could be significant for some customers.

The Securosis analysis properly identifies high-value targets, such as corporate executives or defense contractors, as facing the highest risk from this sort of (hopefully still theoretical) attack. If it is the case that seeds have been compromised, then these organizations should probably be evaluating options. Moving high-profile users to soft tokens with newly-generated seeds would be a good short-term solution until new, uncompromised tokens can be acquired.

Finally, all organizations should review the first, key assumption: that an attacker already has (or can readily acquire) a valid userid, PIN, and tokencode combination. Reminding users of ways to verify secure communications (to reduce the chances of a man-in-the-middle attack) is a good first step. Additional training to recognize the inevitable phishing attacks should probably also be seriously considered. Changing passwords and/or PINs more frequently, at least until the compromise has been better described and risks better assessed, may also be wise.


The RSA/SecurID Compromise: What is my risk?

Posted: March 18, 2011 – 8:32 am | Author: | Filed under: Cryptography, Risk Analysis

So yesterday, RSA, a security division within EMC and the folks responsible for SecurID, one of the most popular forms of two-factor authentication, announced that they’d been hacked.

What does this mean? Well, we don’t have many details, but the most troubling bit is that apparently the attackers acquired information “specifically related to RSA’s SecurID two-factor authentication products.” In particular, that “this information could potentially be used to reduce the effectiveness of a current two-factor authentication implementation as part of a broader attack.”

This is quite troubling. SecurID is used by over 25,000 customers, with an estimated 40 million physical tokens in circulation (in addition to 250 million software-based tokens). Many of these are used for secure authentication to corporate websites and email, and they’ve seen increasing use in online banking. A “reduction in effectiveness” could have very serious, and wide-ranging, consequences.

So what exactly could the attackers have gotten away with? First, a quick review of how SecurID tokens work.

At its core, SecurID is a cryptographic algorithm that produces random numbers in a pre-determined sequence. This sequence is known to an authentication server, and used to validate that the person logging in has the token in their possession. To keep the tokens unique, each is pre-loaded with a seed that initializes the sequence for each token. The resulting 6-digit numbers, or “tokencodes,” are therefore produced in a sequence specific and unique to each token.

This seed is typically 128 bits in length, so there are approximately 500 gagillion (2^128, a really really big number) potential sequences that any individual token could produce. Far too many for an attacker to have any practical chance at a brute-force attack.
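Conceptually, the token looks something like the following sketch. To be clear, RSA’s real algorithm is unpublished; a keyed hash is used here purely to show the seed-plus-time structure described above.

```python
import hashlib
import hmac
import time

def tokencode(seed, interval):
    # Conceptual model only: any keyed function from (per-token seed,
    # time step) to six digits matches the description above.
    digest = hmac.new(seed, interval.to_bytes(8, "big"), hashlib.sha256).digest()
    return "%06d" % (int.from_bytes(digest[:4], "big") % 1_000_000)

# Token and server share the seed, so both can compute the current code.
seed = bytes(16)                                # 128-bit seed (placeholder)
print(tokencode(seed, int(time.time()) // 60))  # a new code each minute
```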

Where the attack gets scary is if the seeds have been revealed to a third party. While there are many good reasons why RSA should not keep copies of tokens’ initial seeds, there are also some reasons why they might. Ultimately, I believe we’re facing four attack scenarios:

  1. Attackers get a list of seeds and token serial numbers. Then if they are able to acquire the serial number from a target’s token, they can replicate the token in software and use that to impersonate the target.
  2. Attackers get a list of seeds, and the corporations to which they’ve been assigned. This makes the attack a little tougher, but having only several thousand seeds to test is enormously better than having a 128-bit seed to test.
  3. Attackers get a list of all seeds issued thus far. Instead of having several thousand potential seeds to test, they have a few hundred million. Still much better than searching a 128-bit keyspace.
  4. Attackers find some weakness in the method used to generate seeds in the first place. Perhaps it uses a weak random number algorithm. Or maybe there’s a “master seed” that generates new seeds in sequence, just like the tokens themselves.

(There’s actually a fifth scenario — internal documentation revealing a known weakness in the algorithm that allows an attacker to derive the key simply by observing multiple tokencodes. Without knowing how their tokencode algorithm works, we can’t know if this is even possible, but it seems exceedingly remote. At least we hope so.)

In all scenarios, the attacker will also need to observe at least one, probably two, tokencodes from the target in order to synchronize their sequence with the target’s token. They’ll need to observe a login anyway, just to get the target’s PIN (which is usually prepended to the tokencode at login).

So what’s the risk to your enterprise? Until we know more, there’s no way to say. If any of the first three scenarios come into play, then the risk for some high-value targets may be reasonably high. Any attacker who can monitor login attempts, perhaps through something as simple as a fake login page, will be able in short order to duplicate the target’s token and authenticate as them. The only way to mitigate that would be to replace all the tokens in circulation.

If the fourth (or worse, fifth) scenario is true, then there’s a much more significant risk to the RSA/SecurID system as a whole. It would compromise not only issued tokens, but every replacement token in stock. It breaks the system, until the seed-generation process, or even the token algorithm itself, can be changed, and new tokens produced.

Ideally, though, RSA won’t have any seeds stored, nor will there be any weakness in the methods used to generate those seeds. If that’s the case, then the worst that could happen is that the token algorithm itself may be leaked. Perhaps study of the algorithm could reveal weaknesses, but that’s a much longer-term concern.

There is, however, one last, very likely scenario: Just as with any big-news item, this compromise could open the doors for any of several phishing scenarios. Attackers could certainly capitalize on the uncertainty of what’s happened to trick users into revealing information that would enable a reset of their credentials, regardless of whether they’re even using the SecurID system in the first place. In the long run, this attack could affect far more than just RSA’s customers.


