Intrepidus Group
Insight

Category Archives: Reverse Engineering

More Fun with Apple TV Hacking (and Manual RSA Signature Validation)

Posted: February 21, 2014 – 8:59 am | Author: | Filed under: Cryptography, iOS, PKI, Reverse Engineering

In my last post, I showed how the latest Apple TV system checks for an Apple-signed certificate before allowing changes to certain device settings. In particular, this prevents easily enabling the “Add Site” application, detailed in my 2013 DerbyCon talk. However, as I mentioned in the last post, it’s possible to load the profile on an Apple TV running 5.2 or 5.3, and then upgrade to 6.0, and retain access to Add Site. The problem then is that the system won’t actually permit adding any sites. What gives?

When adding a site (or channel or application or whatever you want to call it), the system first asks for a URL which points to a “vendor bag,” a .plist file defining the new application. Then it prompts for a site name, and then finally exits with the error “The site could not be verified for this device. Please check logs and retry.” Pulling the AppleTV binary into IDA Pro, we eventually find where the series of Add Site prompts occurs, in the method “[MEInternetTextEntryDialog _showNextPrompt]”.

This method is basically a 4-element finite state machine. When in state 0, it calls a method which prompts the user to enter the URL, and then changes the state to 1. In state 1, it asks for the new site’s name, then goes to state 2. In state 2, it sets up a call to “_verifySiteInfo”, then in state 3, it checks the result of that verification. If the response is good, it adds the site. If not, it shows the error and the user goes back to the beginning.
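Paraphrased as a quick Python sketch (the function names are my own stand-ins, not symbols from the binary), the flow looks like this:

# Rough paraphrase of the _showNextPrompt state machine; the two callables
# stand in for the prompt dialogs and for the _verifySiteInfo call.
def run_add_site(prompt, verify_site_info):
    state, ctx = 0, {}
    while True:
        if state == 0:
            ctx['url'] = prompt('Enter URL')           # URL of the vendor bag
            state = 1
        elif state == 1:
            ctx['name'] = prompt('Enter site name')
            state = 2
        elif state == 2:
            ctx['ok'] = verify_site_info(ctx)          # kicks off the verification
            state = 3
        elif ctx['ok']:
            return ctx                                 # verified: the site gets added
        else:
            print 'The site could not be verified for this device.'
            state = 0                                  # show the error, start over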

So what’s in “_verifySiteInfo”? That calls “[ATVAddSiteEntry entryWithName: andURL:]”, which calls “sub_186700”, which then calls “[ATVVendorBag isTrusted]”. If the response to the isTrusted call is zero, then the next pass through “_showNextPrompt” (in state 3) will display the error message and return to step 0.

So the actual check happens in the “[ATVVendorBag isTrusted]” method. Here’s the bulk of that routine, as disassembled by IDA Pro (and re-written manually to make it easier to follow):

  result = 1;

//
// If /AppleInternal/Library/PreferenceBundles/Carrier Settings.bundle
//  exists, it's an internal build, and allow the site addition
//
  if ( [[ATVSettingsFacade sharedInstance] runningAnInternalBuild])
    return result;

//
// If the bag doesn't include icloud-auth-enabled, skip to next check
//
  if (! [[self valueForKey:"icloud-auth-enabled"] boolValue])
  {
    result = 1;
    goto LABEL_1;
  }

//
// The bag includes icloud-auth-enabled. Get and verify signature.
//
  sig = [self valueForKey:"iCloudAuthSignature"];
  text = [self merchantID];
  text = [text stringByAppendingString:"iCloudAuth"];
// text will now be "<merchant id>iCloudAuth"

  text_utf8 = [text UTF8String];
  text_len = strlen(text_utf8);

// put the text to be signed into a byte array
// and put the signature we pulled from the bag into an list of signatures
  text_bytes = [NSData dataWithBytes:text_utf8 length:text_len];
  sig_array = [NSArray arrayWithObjects:sig count:1];

// now we take the text, and the list of signatures, and see if a sig matches
  result = sub_43C5D0(text_bytes, sig_array);  // Here's where the fun happens 

  if ( result == 1 ) // It passed the test -- the signature is valid
  {
LABEL_1:

    if ( ! [[self valueForKey:"vendorBagLoadedByAddSite"] boolValue])
      return result;

      // return 1 if:
      //  * not added by Add Site AND
      //    * not icloud-auth-enabled or
      //    * is icloud-auth-enabled and signature matches

// If we got here, then vendor bag was loaded by addSite
// So we have to see if device is authorized

    text = [self merchantID];
    text = [text stringByAppendingString: [ATVDevice uniqueID]];
// Now text is "<merchant id><device udid>"

// And we do the same stuff as before with UTF8 strings, etc.
    text_utf8 = [text UTF8String];
    text_len = strlen(text_utf8);
    text_bytes = [NSData dataWithBytes:text_utf8 length:text_len];

// Only this time the signatures are stored in the
// com.apple.frontrow settings, likely loaded onto the device
// via the profile that enabled Add Site to begin with.
// And we may have more than one authorization (signature) to check.
    sig_array = [ATVSettingsFacade addSiteDeviceAuthorizations];

    result = sub_43C5D0(text_bytes, sig_array);  // test against all the signatures
    goto LABEL_2;
  }
// if we got here, then iCloudAuthSignature failed
  result = 0;

LABEL_2:
  if ( !result && _internalLogLevel >= 3 )
  {
    _ATVLog(3, [self merchantID], @"Trust failure for merchant %@: %@");
    result = 0;
  }
  return result;

This is all sort of complicated. Summarized, in pseudo-code:

If runningAnInternalBuild: Trusted

If icloud-auth-enabled:
  If iCloudAuthSignature invalid: Not Trusted

If vendorBagLoadedByAddSite: 
  If device is authorized: Trusted
  Else: Not Trusted
Else: Trusted

Basically, if the bag has an icloud-auth-signature, it better be valid, and if the bag was loaded by Add Site, then the device has to be authorized for this particular merchant.

So, what is this elusive signature? We can find examples of icloud-auth-signature in the StoreFront call, mentioned in the last post:

<key>merchant</key>
<string>iMovieNewAuth</string>

<key>icloud-auth-enabled</key>
<true/>

<key>icloud-auth-signature</key>
<data>YxH4I6zha8O331odzY3Zf+APR9gYi/Atorp84x3BTqVg5N4EqAwzyh72UpiF4mgCw5CLneC/I/VlNUntZB17y6yXLstZpbRvKnr/LoQtccmLo7ELWcmFWfU3gEb7u4ne/E1N92oCHrOIxsBbnEqkOp65M47k9x6GojqDsfT4Lrr0XIJ86LH+cl2UIgVQlR77Q8fnSnvChLqGjwIdvKEi2xcfm/v40bFN0JkRV1wrEsw8Zvu3m53GKEOsLbHVCd6Waqsisopbsk3Q4j+D50EnJ699n4UlNoat0bEc4Jz8TjEHoMnB5f23NV0KlFOoC0LPVdJecbAH0bGjfD9WjgMHhA==</data>

This is the same format as similar signatures for javascript-url-signature and root-url-signature, and it validates in exactly the same way, with the same key. Interestingly, though, it doesn’t look like javascript-url and root-url signatures are actually checked in version 6.0! (Though it’s possible I made a mistake on that — I could find the checks in 5.2, but not in 6.0). The validation happens in the code above, at sub_43C5D0. This routine, paraphrased again, looks like this:

hash = CC_SHA1(text, len(text))
key = SecKeyCreateRSAPublicKey(0, 13208544, 270, 1)
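# SecKeyRawVerify returns 0 (errSecSuccess) on a valid signature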
  
for sig in signatures:
    if  (! SecKeyRawVerify(key, 32770, hash, 20, sig, len(sig))):
        return 1

return 0

The 32770 above (in hex, 0x8002) is a constant that tells SecKeyRawVerify to expect a PKCS1SHA1 signature, which we can find in SecKey.h:

/* For SecKeyRawSign/SecKeyRawVerify only, data to be signed is a SHA1
   hash; standard ASN.1 padding will be done, as well as PKCS1 padding
   of the underlying RSA operation. */
kSecPaddingPKCS1SHA1 = 0x8002,

Going to address 13208544, or 0xc98be0 in hex, and grabbing the next 270 bytes, gives us the public key. We can do that in IDA, or even with a simple python script:

$ python
Python 2.7.5 (default, Aug 25 2013, 00:04:04) 
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> f=open("AppleTV", "r")
>>> d=f.read()
>>> o=0xc98be0 - 0x1000  # must subtract a memory offset
>>> k=''
>>> for i in range(0, 270):
...   k += d[o+i]
... 
>>> import binascii
>>> binascii.b2a_hex(k)
'3082010a02820101009cd356c93bcebeb15ebecf620314fe0e15cd26a5afdcd8bf8f1790e8de426a4b1fcbe6bbc47c865f610a28437b36a78ed80844961bade02d7da942b05019f1a34c13953278e288b10b45d44fa1264db895a4589776828ba0b2499ab0bdff65e902755ae406e517ccac6b2c237a26472decf4dfa219efb026f3020df8a80ffe4f7a2de03ece182d33d11dfbdb6df7261ce5c0dd4b3f4e02f3784ba6165ecfcde44b02e7e684e3b0649e5fe13d871681501ce5d50e8841832c0f3426387b6ede7e447d3fabe808c13af4ef556bffd480cd840b54a5c81f3db29682890c727571a8394c12101ef79f4f2d2e147dbf389b7b9fcabf22144d4da21b63f269e61ecaad0203010001'
>>> 

Write that out to a file (in binary form, not hexadecimal), and use asn1parse to get the raw key specifics:

$ openssl asn1parse -in rsakey.bin -inform DER
    0:d=0  hl=4 l= 266 cons: SEQUENCE          
    4:d=1  hl=4 l= 257 prim: INTEGER           :9CD356C93BCEBEB15EBECF620314FE0E15CD26A5AFDCD8BF8F1790E8DE426A4B1FCBE6BBC47C865F610A28437B36A78ED80844961BADE02D7DA942B05019F1A34C13953278E288B10B45D44FA1264DB895A4589776828BA0B2499AB0BDFF65E902755AE406E517CCAC6B2C237A26472DECF4DFA219EFB026F3020DF8A80FFE4F7A2DE03ECE182D33D11DFBDB6DF7261CE5C0DD4B3F4E02F3784BA6165ECFCDE44B02E7E684E3B0649E5FE13D871681501CE5D50E8841832C0F3426387B6EDE7E447D3FABE808C13AF4EF556BFFD480CD840B54A5C81F3DB29682890C727571A8394C12101EF79F4F2D2E147DBF389B7B9FCABF22144D4DA21B63F269E61ECAAD
  265:d=1  hl=2 l=   3 prim: INTEGER           :010001

The long number is the modulus, and the short number is the exponent (65537).
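
Or, to stay in the do-it-by-hand spirit of this post, the same two integers are easy to walk out of the DER manually. A minimal sketch, assuming the same rsakey.bin (and only handling the length encodings this particular key actually uses):

import binascii

def read_tlv(data, off):
    # parse one DER tag-length-value; handles the short and long length forms
    tag, length, off = ord(data[off]), ord(data[off + 1]), off + 2
    if length & 0x80:                      # long form: low bits give the byte count
        n = length & 0x7f
        length = int(binascii.b2a_hex(data[off:off + n]), 16)
        off += n
    return tag, data[off:off + length], off + length

der = open('rsakey.bin', 'rb').read()
tag, body, end = read_tlv(der, 0)          # outer SEQUENCE (tag 0x30)
tag, modulus, off = read_tlv(body, 0)      # first INTEGER: the modulus
tag, exponent, end = read_tlv(body, off)   # second INTEGER: the exponent
print int(binascii.b2a_hex(exponent), 16)  # 65537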

So now we can validate the signature. For that, we could simply use some functions in the python Crypto module, but where would be the fun in that? Let's just do it manually. In the following code, "message" is the string we want to verify, and "signature" is the signature (base-64 encoded) we pulled from StoreFront or a deviceAuthorizations setting.

from Crypto.Hash import SHA
import binascii, base64

key = '9CD356C93BCEBEB15EBECF620314FE0E15CD26A5AFDCD8BF8F1790E8DE426A4B1FCBE6BBC47C865F610A28437B36A78ED80844961BADE02D7DA942B05019F1A34C13953278E288B10B45D44FA1264DB895A4589776828BA0B2499AB0BDFF65E902755AE406E517CCAC6B2C237A26472DECF4DFA219EFB026F3020DF8A80FFE4F7A2DE03ECE182D33D11DFBDB6DF7261CE5C0DD4B3F4E02F3784BA6165ECFCDE44B02E7E684E3B0649E5FE13D871681501CE5D50E8841832C0F3426387B6EDE7E447D3FABE808C13AF4EF556BFFD480CD840B54A5C81F3DB29682890C727571A8394C12101EF79F4F2D2E147DBF389B7B9FCABF22144D4DA21B63F269E61ECAAD'

exponent = 65537

def manual_check(signature, message):
    sig = binascii.b2a_hex(base64.b64decode(signature))

    h = SHA.new(message).hexdigest()
    print "Hash: %s" % h
    m = int(key, 16)
    ct = int(sig, 16)
    pt = pow(ct, exponent, m)
    out = "%x" % pt
    print "PT: %s" % out
    check = out[-40:]
    print "Check: %s" % check
    if check == h:
        print "VERIFIED"
    else:
        print "        not verified"

signature = 'YxH4I6zha8O331odzY3Zf+APR9gYi/Atorp84x3BTqVg5N4EqAwzyh72UpiF4mgCw5CLneC/I/VlNUntZB17y6yXLstZpbRvKnr/LoQtccmLo7ELWcmFWfU3gEb7u4ne/E1N92oCHrOIxsBbnEqkOp65M47k9x6GojqDsfT4Lrr0XIJ86LH+cl2UIgVQlR77Q8fnSnvChLqGjwIdvKEi2xcfm/v40bFN0JkRV1wrEsw8Zvu3m53GKEOsLbHVCd6Waqsisopbsk3Q4j+D50EnJ699n4UlNoat0bEc4Jz8TjEHoMnB5f23NV0KlFOoC0LPVdJecbAH0bGjfD9WjgMHhA=='

message = 'iMovieNewAuthiCloudAuth'

manual_check(signature, message)

Running that code produces the following output:

Hash: 2dcd288c1ccc82c8ef7dcc17fdf3abd785c02050
PT: 1fffffffffffffffffffffffffffffffffffff
ffffffffffffffffffffffffffffffffffffffffff
ffffffffffffffffffffffffffffffffffffffffff
ffffffffffffffffffffffffffffffffffffffffff
ffffffffffffffffffffffffffffffffffffffffff
ffffffffffffffffffffffffffffffffffffffffff
ffffffffffffffffffffffffffffffffffffffffff
ffffffffffffffffffffffffffffffffffffffffff
ffffffffffffffffffffffffffffffffffffffffff
ffffffffffffffffffffffffffffffffffffffffff
fffffffffffffffffffff003021300906052b0e03021a0
50004142dcd288c1ccc82c8ef7dcc17fdf3abd785c02050
Check: 2dcd288c1ccc82c8ef7dcc17fdf3abd785c02050
VERIFIED

The plaintext ("PT" above) is a DER-format signature, matching the requirements for PKCS1-v1.5. Basically, it's:

0x00 (not seen)
0x01 (leading 0 not seen)
0xff (times "a lot") (pad the message to a pre-determined length)
0x00 (end of padding)
sig (actual signature in DER format)

The actual signature includes flags identifying it as SHA-1 based, and the actual message that was signed (the SHA1 hash). We can simply ignore everything except the 20 bytes at the end, which looks exactly like the hash we generated (2dcd288...c02050). Or if you like, we can use asn1parse again, or an online DER parser to break it all out:

SEQUENCE(2 elem)
  SEQUENCE(2 elem)
      OBJECT IDENTIFIER 1.3.14.3.2.26
      NULL
  OCTET STRING(20 byte) 2DCD288C1CCC82C8EF7DCC17FDF3ABD785C02050

(Where 1.3.14.3.2.26 corresponds to the OID for SHA-1).
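
If eyeballing the trailing 20 bytes feels too loose, we can rebuild the entire expected plaintext and compare it outright: the 00 01 ff…ff 00 framing, the fixed SHA-1 DigestInfo header (the constant bytes visible in the PT dump above), then the hash. A small addition to the script above, reusing its key, exponent, and imports (the constant name is mine):

# Fixed DER header for a SHA-1 DigestInfo -- the bytes sitting between the
# padding and the hash in the PT dump above.
DIGESTINFO_SHA1 = '3021300906052b0e03021a05000414'

def strict_check(signature, message):
    sig = binascii.b2a_hex(base64.b64decode(signature))
    pt = pow(int(sig, 16), exponent, int(key, 16))
    h = SHA.new(message).hexdigest()
    pad = 'ff' * (256 - 3 - len(DIGESTINFO_SHA1) / 2 - 20)  # 2048-bit key = 256 bytes
    return '%0512x' % pt == '0001' + pad + '00' + DIGESTINFO_SHA1 + h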

This same signature check is used for all the above-mentioned signatures:

  • javascript-url-signature
  • root-url-signature
  • icloud-auth-signature
  • addSiteDeviceAuthorizations

As I said earlier, I'm not sure the first two are being checked any longer. The third seems to be included on a few of the newer applications loaded by the StoreFront call, while the last is only checked if a vendor bag is loaded by Add Site.

The signatures for the last check are stored as an array in the com.apple.frontrow "addSiteDeviceAuthorizations" setting. And, as we saw last time, the only way to add a setting to that list is with a profile signed by Apple. So the only way to make Add Site work under Apple TV 6.x (ignoring any unfortunately-still-speculative jailbreaks) is to:

  1. Retrieve the target Apple TV's unique device identifier (udid)
  2. Using your app's Merchant ID, create the string "<merchant><udid>"
  3. Get Apple to sign that string with the appropriate private key
  4. Include that signature in a configuration profile that enables the Add Site application
  5. Get Apple to sign the profile
  6. Install the profile on the Apple TV from step 1

Then, and only then, will you be able to load your custom application on the Apple TV.

All this leaves me with a question: "Why did Apple add all these hoops to jump through?" It's basically a parallel to how Provisioning Profiles work for iOS developers. Was this extra level of security really necessary? As far as I know, the Add Site functionality wasn't widely known until my talk last fall, yet these changes appeared in early iOS 7-based Apple TV betas in mid-summer 2013. Perhaps they were always on the roadmap, and Apple just couldn't finish them in time for the previous version.

Or perhaps...is this a prelude to wider availability of Apple TV app development? If Apple were to open up Apple TV to general development (and they've certainly been fielding a lot of new applications lately), then they certainly couldn't have done that without some kind of control in place.

That's what I'm kind of hoping: That we'll see, in the near future, an official way for "everyday" Apple developers to build Apple TV apps, and to distribute them via a new "Channel Store." Maybe this will even be unveiled with the next major Apple TV update (currently rumored for April).

I'm keeping my fingers crossed.


Apple TV Hacking, Counterattacks, and Certificate Pinning

Posted: February 11, 2014 – 4:03 pm | Author: | Filed under: iOS, PKI, Reverse Engineering

A few months ago I presented a neat hack at DerbyCon that let you put your own apps on Apple TV. A few days afterwards, the hack stopped working. It’s time I had a follow-up to explain just what happened (and hopefully teach a little about certificate pinning in the process).

First, a quick review: The Apple TV OS has a feature called “Add Site,” through which a developer can add a pointer to a custom application, which will then appear on the Apple TV’s home screen. To enable this feature, a special configuration profile needs to be loaded. The fun part of my talk was showing how one could find exactly what’s needed to make this profile, through some simple disassembly of the Apple TV binary.

A few days before my talk, the “Add Site” button (which had been a blank icon) suddenly had a nice pretty picture on it. I wasn’t sure how Apple made that happen, but presumed there was a simple explanation. And there was: the Apple TV fetches an XML file called “StoreFront” which includes, among other things, descriptions of all the applications on the Apple TV home screen (I’m just going to call it ATV, okay?) To fetch your own copy of this file just use the following command (all one line, naturally):

curl -H "User-Agent: iTunes-AppleTV/5.3 (2; 8GB; dt:11)"
 https://itunes.apple.com/WebObjects/MZStore.woa/wa/storeFront 
| sed 's/&lt;/</g; s/&gt;/>/g'

(Thanks to Aman Gupta, @tmm1, for saving me the trouble of figuring out the User-Agent bit. The sed script at the end is needed because the embedded application list is encoded as an element of the larger file, and would be nearly unreadable with all the HTML entities.)

The pertinent part looks like this:

<dict>
    <key>merchant</key>
    <string>internet-add-site</string>

    <key>enabled</key>
    <string>YES</string>

    <key>menu-title</key>
    <string>Add Site</string>

    <key>menu-icon-url</key>
    <dict>
        <key>720</key>
        <string>http://a1.phobos.apple.com/us/r1000/000/Features/atv/AutumnResources/images/AddSite@720.png</string>

        <key>1080</key>
        <string>http://a1.phobos.apple.com/us/r1000/000/Features/atv/AutumnResources/images/AddSite@1080.png</string>
    </dict>

    <key>menu-icon-url-version</key>
    <string>2.1</string>

    <key>minimum-required-version</key>
    <string>6.0</string>

    <key>preferred-order</key>
    <integer>2147483647</integer>
</dict>

Apparently, the Add Site application is defined just like every other app, though it doesn’t appear unless the right local preference is set. The change they made was to add the menu-icon-url items.

A few days after the talk, the Add Site application disappeared from my ATVs. This was also because of the StoreFront file — in this case, they added a “minimum-required-version” flag, which kept anything that wasn’t ATV 6.0 from seeing the app, even when it’s already been enabled. (ATV versions are one lower than mobile iOS, so ATV 6.0 == iOS 7.0).

Okay, well, let’s upgrade to ATV 6.0 and reload the profile. Easy enough, right? Except that it doesn’t work.

Turns out, Apple also added a profile signature requirement under 6.0 (and thanks again to @tmm1 for coming up with the conclusive proof on this). Where did this happen? And can we work around it, for example, by creating a profile signed by a CA we trust?

Great question! Let’s find out.

Adding a configuration profile to ATV happens in the ManagedConfiguration framework, in particular, the profiled daemon:

/System/Library/PrivateFrameworks/ManagedConfiguration.framework/Support/profiled

This method:

[MCProfile evaluateTrustOfCertificateChain:(id) outIsAllowedToWriteDefaults:(char *)]

basically answers the question “Is the profile signed by X allowed to write to the com.apple.frontrow defaults?” The actual check happens in the method:

[MCProfileTrustEvaluator sanitizedProfileSignerCertificateChainIsAllowedToWriteDefaults:(id)]

I’ll paraphrase the code here:

  policy = SecPolicyCreateConfigurationProfileSigner()
  trust = SecTrustCreateWithCertificates(cert_chain, policy)
  result = SecTrustEvaluate(trust)
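  // result 4 == kSecTrustResultUnspecified, 1 == kSecTrustResultProceed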

  if (result == 4 || result == 1) 
    ALLOWED = TRUE
  else 
    ALLOWED = FALSE

From here out, it gets easy, because it’s documented! First, what does SecPolicyCreateConfigurationProfileSigner do? Google gets us this file: SecPolicyPriv.h.

/*!
 @function SecPolicyCreateConfigurationProfileSigner
 @abstract Check for key usage of digital signature, has a EKU OID of
     1.2.840.113635.100.4.16 and
    roots to Apple Application Integration 2 Certification Authority
*/
SecPolicyRef SecPolicyCreateConfigurationProfileSigner(void);

This tells us that a specific Extended Key Usage (1.2.840.113635.100.4.16) must be present in the certificate used to sign the profile. Check out the formal docs for more information.

Okay, we can easily spoof that, no big deal. What about the second part? How is the “roots to Apple” test checked? If it just says “The issuer has to be called ‘Apple’” then we should be able to fake it out. Let’s look at the actual code (SecPolicy.c) (I consolidated the relevant parts into one block):

static const UInt8 kAppleCASHA1[kSecPolicySHA1Size] = {
    0x61, 0x1E, 0x5B, 0x66, 0x2C, 0x59, 0x3A, 0x08, 0xFF, 0x58,
    0xD1, 0x4A, 0xE2, 0x24, 0x52, 0xD1, 0x98, 0xDF, 0x6C, 0x60
};

static bool SecPolicyAddAppleAnchorOptions(CFMutableDictionaryRef 
  options)
{
    return SecPolicyAddAnchorSHA1Options(options, kAppleCASHA1);
}

SecPolicyRef SecPolicyCreateConfigurationProfileSigner(void)
{
  SecPolicyRef result = NULL;
  CFMutableDictionaryRef options = NULL;
  require(options = CFDictionaryCreateMutable(kCFAllocatorDefault, 0,
                          &kCFTypeDictionaryKeyCallBacks,
                          &kCFTypeDictionaryValueCallBacks), errOut);

  SecPolicyAddBasicX509Options(options);
  SecPolicyAddAppleAnchorOptions(options);

  // Require the profile signing EKU
  add_eku(options, &oidAppleExtendedKeyUsageProfileSigning);

  require(result = SecPolicyCreate(kSecPolicyOIDAppleProfileSigner, 
    options), errOut);

errOut:
  CFReleaseSafe(options);
  return result;
}

This creates a SecPolicyRef object which requires basic X509 stuff, the aforementioned EKU flag, and an Apple anchor. The AddAppleAnchor part, then, is the real key here, and that adds… a check for a SHA1 hash. Damn! It’s pinned to a specific root CA certificate based on that certificate’s SHA1 hash.
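
The check itself is trivial to replicate. Here's a minimal sketch of the same idea in Python, using the kAppleCASHA1 bytes from the code above: hash the anchor certificate exactly as presented, and compare against a baked-in value.

import hashlib

# the kAppleCASHA1 constant from SecPolicy.c, as a hex string
PINNED_SHA1 = '611e5b662c593a08ff58d14ae22452d198df6c60'

def anchor_is_pinned(cert_der):
    # pin by hashing the certificate exactly as presented (raw DER bytes)
    return hashlib.sha1(cert_der).hexdigest() == PINNED_SHA1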

If your ATV is on 6.x, and it doesn’t already have the Add Site profile loaded, then you can’t add it, unless you get a profile signed by Apple. And since the Apple TV program isn’t open to the public, you can’t get that profile. So, we’re pretty much stuck.

Great for Security. Lousy for DIY Apple TV hackers. Right?

Well, there is one workaround: If your ATV is on 5.2 or 5.3, you can still add the profile (you just won’t see the button), and then upgrade to 6.0, and, boom!, the Add Site application is available, and fully functional.

So we still win in the end, right? Not so fast. Apple also introduced a signature test when using the Add Site feature. But since this post is getting a little long already, I’ll save that for next time.


Good fun with bad crypto

Posted: January 15, 2014 – 11:04 am | Author: | Filed under: Cryptography, Passwords, Reverse Engineering

A few months back, one of the consultants here at Intrepidus ran across a strange password hash format:

OLEOIECBPAFFKGADMDGGLBBEMIGNIPCKOAEFIPCKOLEO

He did some digging, and eventually found an application which would not only create the hashes, it would *decrypt* them. So it’s not even a hash at all, just a really lousy encryption system. Well, not even encryption. Technically, it’s an encoding. “Citrix CTX1 Encoding”, to be exact. “How does this work?” I wondered. Unfortunately, the person who created the app we downloaded specifically declined to explain the algorithm, so we just moved on.

Then a little while ago I thought it’d be fun to figure out. Using the application as an oracle, I set about encrypting various words to see what happened.

The first thing I noticed is that it seemed to be a binary-level encoding. If I entered gibberish for a password and then tried to decrypt it, I’d get all kinds of high-bit special characters. Then, I noticed that the ciphertext was four times as long as plaintext — that is, any single character encoded to four cipher characters, which I took to calling “quads.” Also, the encryption appeared to be position dependent:

a MEGB
aa MEGB KFAA
aaa MEGB KFAA MEGB

[I'll add spaces between cipher quads for readability]

There was also some kind of multiple-character interaction going on:

aa MEGB KFAA
ab MEGB KGAD
ba MHGC KGAD
bb MHGC KFAA

Note how the 2nd quad is the same for “ab” and “ba” and also for “aa” and “bb.”

And, finally, letter case matters:

a MEGB
A OEEB

So all that was very interesting. Also interesting is how many letters are in the cipher alphabet. I tried encrypting all letters, numbers, and a bunch of special characters [A-Za-z0-9!-)], and found a total of 16 letters in use, A through P. Also, the distribution of the letters was very uneven, with over 30 As and only 6 Ms. The distribution is even stranger when you look at the data positionally within each ciphertext quad.

Letter  Overall  Nonzero counts by position (1st-4th, in order)
A       33       18, 15
B       31        4, 16, 11
C        7        5, 1, 1
D       13        7, 6
E       30       11, 15, 4
F       15       15
G        8        2, 6
H       19       13, 1, 5
I        7        1, 6
J       11        7, 4
K       29       18, 3, 8
L       25       16, 9
M        6        2, 4
N       19       13, 6
O       24       15, 9
P       11        8, 3
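
(Generating counts like these is easy to script; a quick sketch, assuming ct holds the space-separated ciphertext quads:)

from collections import Counter

def distributions(ct):
    quads = ct.split()
    overall = Counter(''.join(quads))                          # all positions combined
    by_position = [Counter(q[i] for q in quads) for i in range(4)]
    return overall, by_position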

Again, very interesting. Still not totally clear, though a pattern should already be evident (were I paying attention). That data was generated with just a long string…what if we simply encrypt ‘A’, then encrypt ‘B’, then ‘C’, etc., and keep looking at the single quad output from a single character? I’ll throw in some special characters too, which will make sense very shortly.

@ OFEA    P PFFA
A OEEB    Q PEFB
B OHEC    R PHFC
C OGED    S PGFD
D OBEE    T PBFE
E OAEF    U PAFF
F ODEG    V PDFG
G OCEH    W PCFH
H ONEI    X PNFI
I OMEJ    Y PMFJ
J OPEK    Z PPFK
K OOEL    [ POFL
L OJEM    \ PJFM
M OIEN    ] PIFN
N OLEO    ^ PLFO
O OKEP    _ PKFP

There’s a very clear pattern happening here. Let’s look at it in binary. Since all the letters in the ciphertext fall in the 0x4x range, we’ll just look at the lower-most nibble of all the ciphertext values. That is, for each of the letters in the ciphertext quad (“OHEC”) we’ll look at the binary values for the lower-most four bits (so, for “C”, which is 0x43 in ASCII, the lowermost nibble would be “0011”):

A 0x41 OEEB 1111 0101 0101 0010
B 0x42 OHEC 1111 1000 0101 0011
C 0x43 OGED 1111 0111 0101 0100
D 0x44 OBEE 1111 0010 0101 0101

P 0x50 PFFA 0000 0110 0110 0001
Q 0x51 PEFB 0000 0101 0110 0010

Look at the last two columns (the 3rd and 4th letters) in each ciphertext quad. There’s a direct correlation to the nibbles of the actual hex value. A 4x in the plaintext causes the 3rd column to be a 5 (or letter E), and a 5x makes it F. The last column is simply one off — a x0 puts an A in the 4th column, an x1, a B, etc. That is, 0-F maps to A-P for both those columns.

There’s a similar system at play for the first two columns, only the input nibbles are first XORed with 0x0A and 0x05 (for high and low nibbles, respectively). That is, 0x41 (“A”) will be mapped to 4 XOR 0x0a, or 0100 ^ 1010, which gives hex 0x0e, or (once mapped to A-P), the ciphertext letter “O,” because 0x0e = 14, and O is the 14th letter (when A is 0). For the lower nibble, we have 1 XOR 5, or 0001 XOR 0101 in binary, which is simply a 4, which maps to the ciphertext letter E.

So we have, for 0x41 “A”, ciphertext of OE (using XORs) and EB (simply mapping 4 to E and 1 to B), or OEEB. Of course, we don’t even need the 1st two columns since the last two are so easy to decode. So not only is this a bad “cipher”, it’s inefficient.

All that only answers how to encrypt the 1st letter. What happens in the subsequent letters? Well, that’s pretty easy too. Remember that, for both AA and BB, the second quad that’s output was KFAA. That’s the case for CC, DD, EE, and in fact any time that the 2nd character is the same as the first. What happens if you XOR a letter with itself? You get 00. What is KFAA in this crazy system? 00. So the 2nd ciphertext quad is the 2nd plaintext letter XORed with the 1st plaintext letter.

Unfortunately, that doesn’t hold true for the 3rd letter. But a little more experimentation shows that it’s simply using a running “sum” of all previous characters XORed together.

Here’s a very simple script to encrypt, and another to decrypt, these really bad password strings:

import sys

def encrypt(pt):
    ct = ''
    last = 0
    for ch in pt:
        ct += enc_letter(ch, last)
        last ^= ord(ch)

    return ct

def enc_letter(ch, last=0):
    c = ord(ch) ^ last
    h = c / 16 # high nybble
    l = c % 16 # low nybble

    a = h ^ 0x0A 
    b = l ^ 0x05 
    c = h
    d = l

    ach = chr(a+0x41)
    bch = chr(b+0x41)
    cch = chr(c+0x41)
    dch = chr(d+0x41)

    return "%s%s%s%s " % (ach, bch, cch, dch)

x = sys.stdin.readline()
while x:
    print encrypt(x.strip())
    x = sys.stdin.readline()

And decryption:

import re, sys

def decrypt(ct):
    pt = ''
    last = 0
    for i in range(0, len(ct), 4):
        pc = dec_letter(ct[i:i+4], last) 
        pt += pc
        last ^= ord(pc)

    return pt

def dec_letter(ct, last=0):
    c = (ord(ct[2]) - 1) & 0x0f
    d = (ord(ct[3]) - 1) & 0x0f

    x = c*16+d

    pc = chr(x^last)

    return pc

x = sys.stdin.readline()
while x:
    x = re.sub('[^A-P]', '', x.upper())
    print decrypt(x)
    x = sys.stdin.readline()
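
As a sanity check, feeding the mystery string from the top of the post through the decoder (saved as decrypt.py; the file name is mine) spits its plaintext right out:

$ echo "OLEOIECBPAFFKGADMDGGLBBEMIGNIPCKOAEFIPCKOLEO" | python decrypt.py
NotVeryGood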

The bottom line here, predictably, is: Don’t make your own crypto. This vendor did, and they created something that was pretty much totally useless. On the other hand, it was fun to figure out, so I guess they provided some entertainment.


Raspberry Pi Media Center on AppleTV – No Jailbreak Required

Posted: September 29, 2013 – 4:04 pm | Author: | Filed under: iOS, Raspberry Pi, Reverse Engineering, SDK

A few months ago, I started looking into using a Raspberry Pi (I’m gonna call it rPI from now on) as an access point / media server for the car. It started off as a way to let my boys play Minecraft with each other during long car trips… and then kind of went a little over the top after that.

[Image: orig-rpi-in-car-small]

In June, an interesting AppleTV hack called PlexConnect got some press, and I started thinking about trying to get videos from the rPI onto an AppleTV.

While I’ve had AppleTVs for years (since shortly after the first, non-iOS units were introduced), I hadn’t really looked into how they worked until recently. One question I had was: How do developers test the applications they write for the AppleTV? Because there had to be developers…I didn’t really think Apple was writing the 3rd party channel apps…

About this time, Apple updated the channels available on AppleTV. One of the new channels added was SkyNews, and so I spent a little time looking through their videos. I happened upon a short segment discussing the AppleTV (how meta), and saw something interesting. At one point, the camera panned across a flat screen TV on a table in their studio. It was showing the AppleTV home screen, with the “normal” apps, plus the SkyNews app, plus the WWDC app. But none of the other apps which had just been introduced. So it seemed likely that this was filmed sometime in June (because that’s the only time the WWDC app was there) but before the other apps were installed. Which meant that this was some kind of developer unit.

And in the corner of the screen was a blank application tile labeled “Add Site.”

[Image: skynews-detail]

I spent a week digging through configuration files, property lists, application binaries, and tearing things apart in IDA pro. The result of all this was a method that can be used to easily put additional applications onto a stock, non-jailbroken, AppleTV. (I used a jailbroken AppleTV for most of my research, but this works on non-jailbroken devices as well).

But before I get into that, a quick note about how AppleTV applications work. They’re not really applications, per se. An application (or channel or feed or whatever you want to call it) isn’t a binary, compiled for the AppleTV and loaded onto the device. Instead, it’s all web-based, using a mix of XML and JavaScript files, which are then interpreted by a single AppleTV.app and converted into what you see on the screen.

This is one reason all the AppleTV channels have such a consistent look and feel — they’re all just calling stock library functions to build out menus, preview screens, etc.

While trying to write my first AppleTV app, I quickly decided that my workflow (write python to generate XML, turn around, pick up the AppleTV remote, see how it looks, turn back to the computer, etc.) was just too slow. So I wrote an AppleTV “simulator.” It’s just a simple Python script that acts as a web proxy. It fetches an AppleTV application page from a remote server, applies an XSL transform and appends a CSS file, then returns the result to the browser. Ideally, the result is a web page that looks a lot like the equivalent page in an AppleTV.

[Image: simulator-moviedetail]
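
The actual script is in the GitHub repo linked at the end of this post; the core idea, sketched (the file names here are placeholders):

# Core of the "simulator" idea (a sketch, not the real script): fetch the
# AppleTV XML, run it through an XSL transform, and bolt on a stylesheet.
import urllib2
from lxml import etree

def render(url, xsl_path='atv-to-html.xsl', css_href='simulator.css'):
    xml = etree.parse(urllib2.urlopen(url))
    html = str(etree.XSLT(etree.parse(xsl_path))(xml))
    return html.replace('</head>',
        '<link rel="stylesheet" href="%s"/></head>' % css_href)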

But I’m getting ahead of myself. Looking through the AppleTV binary, I found all kinds of interesting strings. Configuration settings to enable diagnostics, set the device name, and the intriguingly-named “EnableAddSite”. I generated a simple profile using Configurator.app (or maybe I found a template online somewhere — I don’t really remember) and tried to figure out how to test out all these settings.

The official way to load a profile on an AppleTV is slow and cumbersome, requiring disconnecting the device from the TV, connecting it to a computer using a USB cable, pushing a profile onto the device, then reconnecting it to the TV. This isn’t really feasible for high-volume fuzzing of a dozen or more configuration settings.

So I tried using Mobile Device Management, which also didn’t work — I could install the MDM profile, but couldn’t make the device enroll in MDM. Because I was working on a jailbroken unit, I was able to force polls of the MDM server, and so tweaked the server to send different profiles with each request. This simple approach let me test out all kinds of values for settings — with the theory that bad values would generate errors, and good values would be set aside so I could manually investigate what they did. Turns out, this didn’t work either, because the AppleTV didn’t really care if I set a ludicrous value for something.

And nothing I did would make the EnableAddSite key work.

In the end, I used IDA Pro to disassemble and decompile the AppleTV binary, and eventually found a boolean function “AddSiteIsEnabled” that checks to see whether the right profile had been loaded.

[Image: ida-addsite]

Basically, the function:

  1. Loads up the shared preferences file
  2. Looks for a key named “F2BE6C81-66C8-4763-BDC6-385D39088028”
  3. If that key exists, and has a value which is itself a list of keys, then it looks for a sub-key called “EnableAddSite”
  4. If that sub-key exists, and has a value “True,” then the function returns true

Put another way, you need a .mobileconfig profile that includes the following payload:


<key>PayloadContent</key>
<array>
    <dict>
        <key>DefaultsDomainName</key>
        <string>com.apple.frontrow</string>
        <key>DefaultsData</key>
        <dict>
            <key>F2BE6C81-66C8-4763-BDC6-385D39088028</key>
            <dict>
                <key>EnableAddSite</key>
                <true/>
                <key>AddSiteLoggingURL</key>
                <string>http://my.server.com/log</string>
            </dict>
        </dict>
    </dict>
</array>

To load that on the AppleTV, you can either do it the “official” way (the aforementioned method using a USB cable)…or you can do it the easy way. While searching for the AddSite test, I located the code which handles the settings menus, and found functions hidden behind secret sequences of key presses. The trick pertinent to loading a profile is this:

  1. Start up a simple web server somewhere and put the .mobileconfig file there
  2. Go to the AppleTV Settings app
  3. Select “General” then scroll the cursor down to highlight “Send Data To Apple”
  4. Press “Play” (not the normal “Select” button)
  5. Enter the URL for the .mobileconfig file (don’t forget to add http://)
  6. After loading, you may need to restart the AppleTV for the change to take effect

This is much easier than the USB method.
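
For step 1, anything that can serve a static file will do. For example, from the directory holding the .mobileconfig:

$ python -m SimpleHTTPServer 8000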

Once you’ve got the Add Site button / app added, then you can add any site you like. You either need to provide a URL to a “Vendor bag” (a configuration file called bag.plist), or a URL to the app’s main “home” screen (plus a name for the app). (Don’t forget to include the “http://” here, too — it will accept a straight hostname without complaint, but then won’t do anything.) A typical bag.plist might look a little like this:


<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>menu-title</key>
    <string>My App</string>
    <key>enabled</key>
    <string>YES</string>
    <key>merchant</key>
    <string>my-app-id</string>
    <key>root-url</key>
    <string>http://my.server.com/main.xml</string>
    <key>menu-icon-url</key>
    <dict>
        <key>720</key>
        <string>http://my.server.com/images/icon720.png</string>
        <key>1080</key>
        <string>http://my.server.com/images/icon1080.png</string>
    </dict>
</dict>
</plist>

[Image: new-addsite-icon-rpiDerby]

Several of the 3rd party apps added this year have easy-to-discover bag.plist files. You can grab one of those, change the name, point to a different icon, and load it up just to see what it does.

There are a few drawbacks to this trick, though. For example:

  • You can’t remove sites you add this way
  • It can get a little flaky, especially if there are a lot of sites added which no longer exist (because they were all various tests on different servers, for example)

Also, the logging facility is both a general facility (all apps use it, not just those you’ve manually added) and, in the case of some applications, fairly chatty. It’s possible that one of them might disclose credentials, so don’t point your logs at someone else’s server.

One interesting thing that I noticed just last week: The Add Site application no longer has a blank tile. It’s been replaced with a nice blue button with a big “+” in the middle. This appeared on both a version 5.3 and a jailbroken 5.2 AppleTV. It’s almost like the AppleTV always looked to Apple for an icon, but they never had one ready on the server until recently.

Which brings me to the current state of affairs for this hack: Under the newest version of the AppleTV software (6.0, based on iOS 7.0), this hack breaks. But the functionality is still there — the same keys are in the binary, but the profile no longer enables it. However: it’s not just the AddSite trick that broke, but also the other diagnostic settings I discovered. So I’m hopeful that they’ve just changed the application name or something, so instead of directing the profile to the “com.apple.frontrow” set of shared preferences, perhaps we just change the profile to point to a different address. I’m not sure — I haven’t had much time to play with it.

What I did notice in the 6.0 binary is what appears to be a greatly expanded “Add Site Manager.” Not only can you add new sites, but you can list currently-active sites, and delete sites. And there’s also a mechanism to verify devices for use with a site, perhaps to restrict availability in some way. This new functionality feels a lot like what I’d expect a public “Channel Store” to look like, and I’m really hoping that’s what it’ll be, eventually.

In the meantime, if you have an AppleTV running 5.2 or 5.3, and want to give this a try, it’s pretty easy to get going. I was fortunate enough to have the opportunity to present this hack at DerbyCon 3.0 this weekend, and my slides go into much greater detail as to how this works, and some of the other tricks I found. I’ve put the slides, as well as some really rough code (for the rPI-based server and the “simulator” proxy) on GitHub: Check them out at https://github.com/intrepidusgroup/rpi-atv. There’s also a short video demonstrating this technique with a brand-new AppleTV.

Enjoy, and if you write some cool apps (or better yet, find a way to enable this on AppleTV 6.0), drop me a note!


UPDATE – Oct 3 2013: Well, shoot. It looks like Apple’s disabled the Add Site feature, not just for new versions of the AppleTV software, but for 5.2 and 5.3 as well. I suppose I should have seen this coming — the same mechanism that allowed them to add an icon for the feature last week (probably the box reaching out to Apple) probably has a flag letting them mark the feature as disabled. I guess I should be glad it didn’t happen before my demo on Sunday.

What I didn’t mention in my talk (or maybe I did, I forget — I’d had it on a slide in an earlier draft) was that we did notify Apple of the talk about a month beforehand. There were theoretical (but undemonstrated) security risks associated with the hack. I also wanted Apple to have a chance to warn content providers, in the process of developing apps, that they might want to hide their development servers from public view until they were ready to go live. This seemed the responsible approach to disclosing this research, as my hope was to foster innovation, not to put people or providers at risk.

[As an aside, to those "authorized" developers working on AppleTV apps -- I really hope this doesn't cause too much of a hassle for you guys...and if it does...sorry. Find me at a con and I'll buy you a beer.]

I’d really hoped that the community could pull together some awesome apps, to demonstrate to Apple just how high the demand for an SDK is. Kind of like what happened when the first iPhone jailbreak happened. I figured the hack would be short-lived, just not so much so. On the other hand, before Sunday, very few people even knew this feature existed. Until now, we all thought the only way to get an app on the AppleTV was with a jailbreak. Now there’s another target for the jailbreaking community to go after, and I hope they do so. There’s gotta be a way to re-enable this.

So, anyway, sorry to have given everyone hope only to have it yanked away so quickly. But again…maybe there’s a fix. If this can be re-enabled more easily than a full-blown jailbreak, then perhaps there’s still hope for people to develop their own AppleTV applications. If anyone figures it out, you know I’ll be tweeting about it, and getting back to work on my own apps. But until then, thanks to everyone for the kind words about this post and my talk. It was definitely a fun ride!


UPDATE – Oct 4 2013: Shortly after this post originally went live, Aman Gupta (@tmm1) reached out to me to begin speculation on how this could be fixed for ATV 6.0. Then early yesterday, he broke the news to me that Apple had disabled Add Site for ATV 5.2 and 5.3. He’s not stopped poking at the problem, though, and after an all-nighter with a jailbroken device and CyCript, he pinpointed how Apple disabled the feature.

When the AppleTV application starts up, it reaches out to Apple to collect information about all the “standard” applications. One of the apps returned is “internet-add-site.” The information is returned in roughly the same format as the bag.plist files we’ve been working with, and this is where Apple recently added a link for the Add Site icon. Well, they added the following property/value:

<key>minimum-required-version</key>
<string>6.0</string>

The bad news is that, because of this change, it appears the AppleTV won’t even try to add the Add Site button. It won’t even check whether it’s enabled. I still have one last slim possibility to investigate, but it looks like they’ve effectively broken this for old devices.

The good news is that they didn’t add anything new or convoluted — so hopefully, if we can figure out how to re-enable the profile changes that enabled Add Site to begin with, it should be able to work with 6.0. But that’s still a ways off (and might need to wait until an ATV 6.0 jailbreak happens…)

Amusingly, though…we were able to get the Add Site back on a jailbroken device. The file at /User/Library/Application Support/Front Row/ExtraInternetCategories.plist contains a list of extra sites that were added. I suggested that Aman take the data he intercepted, put it into a new bag.plist on a local server (while stripping the minimum-required-version key), and put a pointer to that bag in the ExtraInternetCategories file. He did that, and bingo, the button came back.

Of course, if you can do this, then you don’t need the Add Site button anyway — just add sites directly to the ExtraInternetCategories file. Here’s a full file, accessing a single site:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<array>
        <dict>
                <key>Name</key>
                <string></string>
                <key>URL</key>
                <string>http://my.server.com/bag.plist</string>
        </dict>
</array>
</plist>

Just add additional <dict> blocks for each external bag.plist you want to include.

But really, that’s just a hack for people with jailbroken devices. The goal of making this available to everyone still depends on finding the trick for ATV 6.0, and that will probably still take a while…


APKTool, make me a logcat sandwich

Posted: March 8, 2013 – 2:57 pm | Author: | Filed under: android, Mobile Device Management, Mobile Security, Reverse Engineering, Tools

I recently turned a few friends on to Zed Shaw’s “learn python the hard way” course and it reminded me how bad of a programmer I can be. In fact, I’m that guy who litters his code with print statements. So it’s probably no shock that a lot of times, when I’m trying to figure out what’s going on in an Android app we’re reversing, I’ll want to drop in some print statements. I used to do this by adding a few lines of smali directly into a class file, but there were a few things I needed to deal with for that to work how I wanted. For example, here is what the default “debug” log call looks like in smali.
invoke-static {v0, v1}, Landroid/util/Log;->d(Ljava/lang/String;Ljava/lang/String;)I
If you were going to drop this line into the code somewhere, you would need to make sure both v0 and v1 are Strings. I would typically want “v1” to be the string I wanted logged out, and “v0” (in this example) to be the log “Tag” value so I knew where I was in the code when it was dumped to the log (I may have a dozen or so values getting logged out, so this helps to keep things straight when you see them in the logs). Setting up this Tag string and not stomping on things sometimes meant I needed to increase the local variable count and add some more lines for setting the string and then making sure I got the register/variables correct in that previous logging line. This worked alright if it wasn’t too late in the night or I had enough caffeine in me, but I typically would screw something up and would end up recompiling a bunch of times. I wanted an easier way and something that could deal with logging out things that weren’t already strings.

Thus I created this simple class file I can drop into the root of any application (yes, this is not as good as a real debugger using JDWP, but sometimes doing things quick and dirty gets the job done quicker for me). I wanted to stay with Android log utility syntax, but simplified a few things. I overloaded the logging object’s “d” method so that it could take just about any variable type I was dealing with. One handy example of this is byte arrays (which is often what we find decryption keys stored in). The wrapper in IGLogger will convert the byte array into a hex string and dump that to the logs. All you need to add is one statement to the code. If “v0” contained a byte array we wanted printed out, just drop this line of code.
invoke-static {v0}, Liglogger;->d([B)I
Since “iglogger.smali” is in the root of the recompiled APK, we can statically invoke it from any other class in the project. In this case, we need to tell the “d” method that v0 is a byte array (“[B”) and, sticking with the standard Android logging utility class, we’re returning an Integer (although I’ve thought about just making that a Void… I never check it). You may notice we’re not passing a log Tag variable with this statement. IGLogger supports that if you want, but we’ve added a trick to IGLogger that I find works pretty well. In IGLogger, we’ll create a new Throwable object, use its getStackTrace method to find out the last class and method we were in, and put that in our log Tag. If the APK is not obfuscated, this will even include a line number. This same trick allows for a very simple “hey, I got here and this is how” stack trace to be dumped by placing this one line of code anywhere.
invoke-static {}, Liglogger;->d()I
You might have heard a lot of us here are fans of Virtuous Ten Studio for working with smali. I have a bunch of these IGLogger print statements in Extras->Smali->CodeSnippets. Makes it really simple to just click and drop in a log statement.

But that wasn’t good enough for Niko here when we had a massively huge app that was obfuscated. He talked me into automating the process of logging out each class and method that was entered so we could watch the logs and know what code paths were being taken. I ended up rolling this into a Python script I had written to “fix strings” in decompiled Android apps. You are probably aware that proper Android apps will have their strings placed into XML files so that it’s easier to internationalize the application. While this might be nice for developers, it means when we’re reversing an application, we may end up with some strange hex value instead of a readable string. “FixStrings.py” would loop through the decompiled code and add these strings back in as a comment tag whenever they showed up in the smali code. Your mileage may vary with how well this works, but in some apps, it helped us find things easier.
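
The real thing is in the repo linked below, but the gist of the strings fix looks something like this (a loose sketch; the file layout and the regex are assumptions about apktool’s output, not the actual APKSmash code):

# A loose sketch of the "fix strings" idea (not the actual FixStrings.py):
# map string resource IDs to their values using apktool's res/values files,
# then append the readable string as a comment wherever an ID appears.
import os, re
from xml.etree import ElementTree

def build_string_map(res_dir):
    ids = {}    # resource id (int) -> resource name
    for el in ElementTree.parse(os.path.join(res_dir, 'public.xml')).getroot():
        if el.get('type') == 'string':
            ids[int(el.get('id'), 16)] = el.get('name')
    names = {}  # resource name -> the string itself
    for el in ElementTree.parse(os.path.join(res_dir, 'strings.xml')).getroot():
        names[el.get('name')] = el.text or ''
    return dict((rid, names.get(name, '')) for rid, name in ids.items())

def annotate(smali_path, strings):
    out = []
    for line in open(smali_path):
        m = re.search(r'const(?:/high16)? [vp]\d+, (0x7f[0-9a-f]+)', line)
        if m and int(m.group(1), 16) in strings:
            line = '%s  # string: "%s"\n' % (line.rstrip('\n'),
                                             strings[int(m.group(1), 16)])
        out.append(line)
    open(smali_path, 'w').writelines(out)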

Adding on to that code base, I started to include some code to automatically add IGLogger statements around things I thought could be interesting. This includes a log statement after the “prologue” of any method. Also, any time we see two strings being compared, we’ll log both strings (this is always fun for watching a password being checked or when the app pulls up device info to see if it’s running on the right hardware). We plan to add a few more things for dumping Intent messages and URLs, but this is a start for now.

This of course will make the app run hella slow, fill up logcat, and in some cases break the application. I’ve tried to avoid that last one as best I can for now, but it is possible this script will massacre an APK so badly it will be unrunnable. If you run into that issue, you can turn off the lines that will add these automatic logging statements to the code (i.e., JonestownThisAPK = False).

The last thing we added to the Python script was some searches to pull out info we may find interesting when assessing an APK file. We dump this into a file called “apk-ig-info.txt” and review it after decompiling the APK. Again, this is something we’re continuing to refine. You can find the code on the Intrepidus Group github repo:

https://github.com/intrepidusgroup/IGLogger

https://github.com/intrepidusgroup/APKSmash

 


Android’s BuildConfig.DEBUG

Posted: July 15, 2012 – 9:55 pm | Author: | Filed under: android, Mobile Security, Reverse Engineering, software security

Verbose logging in Android applications is both a problem we frequently see in production builds, as well as something we’ll try to enable if we’re pentesting an app. In revision 17 of Android’s SDK Tools and ADT, the release notes mentioned a feature which could help developers with this issue:

Added a feature that allows you to run some code only in debug mode. Builds now generate a class called BuildConfig containing a DEBUG constant that is automatically set according to your build type. You can check the (BuildConfig.DEBUG) constant in your code to run debug-only functions such as outputting debug logs.

There appeared to be a few bugs that kept this from working as expected in the original releases (Issue 297940), but those appear to have been worked out now. Running a few tests with revision 20, here’s an example of how it could be used and what it looks like in a built APK.

In our java code for our sample application, we included the following line:

if(BuildConfig.DEBUG){
 Log.e("HelloJB", "I am in the Debug");
}
if(!BuildConfig.DEBUG){
 Log.e("HelloJB", "I am NOT Debug");
}
Log.d("HelloJB", "Debug value is: " + BuildConfig.DEBUG);

We then cleaned, built, then exported our application as a signed package from Eclipse. As expected, logcat showed “I am NOT Debug” and “Debug value is: false”. However, decompiling the application using Dex2Jar showed that these values had been set at build time and not simply at execution. With the new update, the Dex2Jar output for those several lines was now just two:

Log.e("HelloJB", "I am NOT Debug");
Log.d("HelloJB", "Debug value is: false");

This looks like a pretty clean way to strip out any code you don’t want making it into your release builds (an issue we’ve seen many times in assessments).

Now for pentesters of Android applications, we often look to enable verbose application logging. I will typically look for a debug flag in the application and try setting it to TRUE at run-time or by recompiling the application. If a developer uses this method to remove debug logging though, this means setting the DEBUG boolean to true in the BuildConfig class on a released application will probably not make a difference. Instead, look for an empty logging class and add back in calls to Android’s log methods. There’s a few ways to skin that cat, but often some well placed smali code will do the trick. For example, if you see an empty d method taking two strings, try dropping in the following line to see the debug messages again.

invoke-static {p0, p1}, Landroid/util/Log;->d(Ljava/lang/String;Ljava/lang/String;)I


Java Reflection in Android…FTW

Posted: April 13, 2012 – 11:57 am | Author: and | Filed under: android, Conferences, Mobile Security, NFC, Reverse Engineering, RFID, Tools

I’ll be hitting a few smaller security conferences this spring (whatup BeaCon and BSidesROC) with a turbo talk on how Java reflection can be useful for accessing hidden APIs in Android. The team at Gibraltar had some great posts on this last year, but getting reflection to work for accessing the NfcAdapterExtras and NfcExecutionEnvironment classes was not as straightforward as things seemed. Here’s some tips on how to get it working (at least on a Gingerbread Nexus S).

First, you want to get familiar with the nfc_extras framework. This is not included in the standard Android SDK, but you can either pull the /system/framework/com.android.nfc_extras.jar file from a device or look at the Android source. You’ll see there are two classes: NfcAdapterExtras and NfcExecutionEnvironment. What I really wanted was in the embedded NfcExecutionEnvironment, but the proper way to get that object is from NfcAdapterExtras.getEmbeddedExecutionEnvironment(). So we’ll need to create that object and method first.

I decided to use reflection to access these classes in my own Android application. Since they’re not in the SDK, I couldn’t just “include android.nfc_extras.NFCAdapterExtras” in my code. Instead, we’ll just ask for that class by name at runtime.

String sReflectedClassName = "com.android.nfc_extras.NfcAdapterExtras";
Class cReflectedNFCExtras = Class.forName(sReflectedClassName);

Wow, that’s pretty easy. Except that we’re going to need to tell the Dalvik VM to load that additional nfc_extras framework so that it knows about that class. To do that, add the following after your application tag in the AndroidManifest.XML file of your application.

<uses-library android:name="com.google.android.nfc_extras" android:required="true" />

Now back to our “cReflectedNFCExtras” class. The first thing we’ll need to do is get the singleton NfcAdapterExtras. This is returned by the get(NfcAdapter paramNfcAdapter) method. Note that it takes an NfcAdapter as a parameter, so we have to specify the class for that when looking for this method. The following line should work for that.

Method mReflectedGet = cReflectedNFCExtras.getMethod("get", Class.forName("android.nfc.NfcAdapter"));

However, at first I had a mistake in my code that caused this method not to be found (thanks, Jeremy, for fixing this). So instead, I looped through all the methods using getDeclaredMethods() and stopped when the “get” method we wanted was found. Here’s the code for doing that, followed by invoking the method and passing it the default NfcAdapter, which would be the next thing we’d want to do.

Object oReturnedNFCExtras = null;
Method mReflectedMethods[] = cReflectedNFCExtras.getDeclaredMethods();
for (int i = 0; i < mReflectedMethods.length; i++){
   Log.d("NfcAdapterExtras METHOD:", mReflectedMethods[i].getName());
   if(mReflectedMethods[i].getName().contentEquals("get")){
      //Standard default NFCAdapter... need to pass this in to get back the singleton
      NfcAdapter defaultAdapter = NfcAdapter.getDefaultAdapter(this);
      oReturnedNFCExtras = mReflectedMethods[i].invoke(cReflectedNFCExtras, defaultAdapter);
   }
}

From here out, it’s smooth sailing as long as you take care of one more thing. Your application needs a special permission in order to use this framework, the one for “NFCEE_ADMIN”. The problem is this permission is declared with the protectionLevel of “signature” in the com.android.nfc3 package. There’s a few ways to get around this, but as far as I know, they’ll all require that you have root on the device. Thus, even though we’re using reflection to access these classes, Android’s permissions are still enforced. My way of dealing with this was to resign the com.android.nfc3 package with the same certificate I used to sign my newly created Android application, then adding the following line to my application’s AndroidManifest.xml

<uses-permission android:name="com.android.nfc.permission.NFCEE_ADMIN" />

So there you go. We can now be NFCEE_ADMINs as well (on our own rooted devices) using reflection. I’m curious to try this out with other /system/framework packages as well. In most cases, the process should be more straightforward (see the sketch after this list):

  1. Get the Class object using Class.forName("package.class")
  2. Find the method using that Class object's getMethod("method", paramTypes)
  3. Then invoke the method with the proper parameters
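
Here’s that pattern end to end, with hypothetical class, method, and argument names (nothing here is a real framework API):

Class cHidden = Class.forName("com.android.example.HiddenService");
Method mHidden = cHidden.getMethod("hiddenMethod", String.class);
// pass null as the receiver for a static method, or an instance of the class otherwise
Object oResult = mHidden.invoke(null, "some argument");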

 

1 comment

Android Backdoor Fail – The Kindle Fire Easter Egg

Posted: January 3, 2012 – 10:09 am | Author: | Filed under: android, bugs, jailbreak, Mobile Security, Reverse Engineering

Happy New Year! And for all you Kindle Fire owners, happy early Easter as well. TeamAndIRC released their code and write-up for BurritoRoot, which restores root-level ADB access on the Kindle Fire. There were other ways to root the Fire before the latest update from Amazon, but this one deserves attention because of how blatantly the developers left this back door wide open.

You can follow along even without a Fire by grabbing the 6.2.1 software update from Amazon’s site. Download the “bin” file, extract it, then find the “services.jar” framework file. This jar is in Android’s DEX format, so to view it in jd-gui you’ll want to convert it first (dex2jar works well).

Besides the standard com.android.server package you would expect to see in the services framework file, you’ll also notice there’s a “com.lab126.services” package (Lab126 appears to have done work for a number of Kindle releases). At that point, it’s pretty hard to ignore a class called “EasterEggReceiver”. There’s not much to this class, and nothing has been obfuscated to make it hard to follow. Any application which broadcasts an intent with the “com.amazon.internal.E_COMMAND” action and the correct extra data can cause the ADB daemon to restart as root. No permissions are needed to send that intent, and there are no checks in the framework on who sent it (such as limiting it to apps with a certain signature). Simply put, any Android app on the device can trigger this backdoor feature.
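
To make that concrete, here’s a rough sketch of what a trigger could look like from any unprivileged app. The action string is the one named above, but the extra key and value are placeholders; I’m not reproducing the actual data here:

// sketch only: the extra key/value below are placeholders, not the real data
Intent eggIntent = new Intent("com.amazon.internal.E_COMMAND");
eggIntent.putExtra("placeholderKey", "placeholderValue");
sendBroadcast(eggIntent); // nothing stops any installed app from doing this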

Dex2Jar view of Kindle Fire's Services framework file

The means of data passing and the severity of this “feature” differ from the HTCLoggers.apk issue from October of last year, but I think they are both signs of the same trend. Mobile developers writing any sort of inter-process communication call or service need to ensure they are communicating only with other trusted apps. Android already gives you a way to do this if your apps are signed with the same certificate: signature-level permissions. I’m a fan of Easter Eggs, but sometimes you want to make sure to limit who can walk away with your tasty burrito.
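
For example, here is a minimal sketch of that signature-permission approach (all names here are made up; the permission would be declared in the manifest with protectionLevel "signature"):

// only apps holding com.example.permission.EGG_COMMAND can broadcast to this
// receiver; with a "signature" protectionLevel, that means only apps signed
// with the same certificate as ours
IntentFilter eggFilter = new IntentFilter("com.example.E_COMMAND");
registerReceiver(eggReceiver, eggFilter, "com.example.permission.EGG_COMMAND", null);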

 

Comments disabled

The story of how qemu met MIPS and created netcat

Posted: October 20, 2011 – 10:57 am | Author: | Filed under: MIPS, Reverse Engineering

Earlier this week I found myself in a predicament when I was reversing a stripped-down MIPS embedded device. The device had minimal available memory, and the only real executables on it were an even more stripped-down busybox, tftp, and tcpdump. My goal was to obtain the tcpdump logs being captured on the device, but due to the lack of NFS support and the minimal available memory, I had very few options.

Initially I attempted to write the tcpdump logs to a file and tftp them up to my local box. This became an issue when the log file grew too large and all of the running processes started crashing. Trying to time the tftp command to execute before everything went haywire became a serious frustration. That is when I decided to go down a different route and try everyone’s friend netcat. Unfortunately, I could not find a working netcat binary for the device, so I had to compile my own…

First, I downloaded every MIPS netcat I could find and tried to run them on the device. None worked – the output when trying to run them looked like:

# ./netcat
./netcat: line 1: syntax error: "(" unexpected

This is extremely frustrating, because that output looks like the shell is trying to run the netcat binary as a shell script – which is exactly what happens when the kernel refuses to execute a binary built for the wrong architecture or endianness, and the shell falls back to interpreting the file as a script. Anyway, after a few hours of failing, I decided that I needed to compile my own netcat. This day-long endeavor took much longer than I anticipated. I am writing this up in the hope that I can save someone else the frustration.

I knew I was going to build a Debian-based MIPS system using qemu on Ubuntu 11.04, and I found a tutorial that helped greatly.

Mistake: I assumed which architecture I needed. Make SURE you know which architecture and endianness you are going to be building against. There is a very easy trick for this: pull a file off of the device and run the ‘file’ command against it. In my case it looked like this:

$ file some_elf_binary
some_elf_binary: ELF 32-bit LSB executable, MIPS, MIPS-II version 1 (SYSV), dynamically linked (uses shared libs), stripped

The “LSB” in that line stands for Least Significant Byte, and means you should be building a MIPSEL (little-endian) virtual machine. If it says “MSB,” or Most Significant Byte, you should build a MIPS (big-endian) virtual machine. I did not take the time to understand this at first, and now have both a MIPS and a MIPSEL virtual machine. (If you’re curious what ‘file’ is keying on, see the sketch after the recap below.)

To reiterate – run ‘file’ on an existing executable
LSB = Create MIPSEL virtual machine
MSB = Create MIPS virtual machine
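
That recap in code form: ‘file’ is reading byte 5 of the ELF header (e_ident[EI_DATA]), which encodes the byte order. Here is a quick sketch of the same check, purely my own illustration and not anything pulled from the device:

import java.io.FileInputStream;
import java.io.IOException;

public class ElfEndian {
    public static void main(String[] args) throws IOException {
        try (FileInputStream in = new FileInputStream(args[0])) {
            byte[] ident = new byte[6];
            // e_ident starts with the magic bytes 0x7f 'E' 'L' 'F'
            if (in.read(ident) != 6 || ident[0] != 0x7f || ident[1] != 'E'
                    || ident[2] != 'L' || ident[3] != 'F') {
                System.out.println("Not an ELF file");
                return;
            }
            // e_ident[EI_DATA]: 1 = little-endian (LSB), 2 = big-endian (MSB)
            System.out.println(ident[5] == 1 ? "LSB -> build MIPSEL"
                                             : "MSB -> build MIPS");
        }
    }
}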

Creating the virtual machine with qemu is pretty easy, but time consuming.

First you must install the correct packages on Ubuntu (the ‘extra’ package provides the MIPS architecture):

sudo aptitude install qemu qemu-kvm-extras

Now you need to pull the initrd and vmlinux files from Debian for your virtual machine type. The initrd is the initial ramdisk and is used to load a temporary filesystem in memory. The vmlinux is the kernel that will be loaded into memory.

At the time of writing this, these links work for downloading the files for the respective architecture:
http://ftp.de.debian.org/debian/dists/stable/main/installer-mipsel/current/images/malta/netboot/
http://ftp.de.debian.org/debian/dists/stable/main/installer-mips/current/images/malta/netboot/

The next step is creating a file that will be your virtual hard disk.

qemu-img create -f raw hda.img 1G

Simple enough. Now to boot the initrd/vmlinux combination, which will get you to the Debian installer.

NOTE: The ‘-M malta’ flag relates to the ‘malta’ Debian images I downloaded. The tutorial I mentioned above uses ‘-M mips’ – this was a hanging point for me. When booting with ‘-M mips’ it just hung: no output, nothing.

MIPSEL:

qemu-system-mipsel -M malta -kernel vmlinux-2.6.26-2-4kc-malta -initrd initrd.gz -hda hda.img -append "root=/dev/ram console=ttyS0" -nographic

MIPS:

qemu-system-mips -M malta -kernel vmlinux-2.6.26-2-4kc-malta -initrd initrd.gz -hda hda.img -append "root=/dev/ram console=ttyS0" -nographic

Follow the instructions in the installer; at the end it will tell you everything is completed and to remove the CD-ROM or boot media. At this point I waited for the VM to reboot into the installer again, and then had to kill the host’s ‘qemu-system’ process.

NOTE: There is a dialog that tells you there is no boot loader and that you will have to append some arguments to the kernel. Make sure you take note of which device it tells you (sometimes it will be /dev/hda1, other times /dev/sda1).

If all went well, you should now be able to boot into the MIPS or MIPSEL OS. As explained above, my two installations had different root partitions that I needed to boot into – this was another hanging point when creating my second MIPSEL VM.

MIPSEL:

qemu-system-mipsel -M malta -kernel vmlinux-2.6.32-5-4kc-malta -hda hda.img -append root=/dev/sda1 -nographic

MIPS:

qemu-system-mips -M malta -kernel vmlinux-2.6.32-5-4kc-malta -hda hda.img -append root=/dev/hda1 -nographic

Finally we have a MIPS VM shell. I will use netcat as an example of how I built a binary that worked on the target MIPS embedded system. The first step, obviously, is to download the source.

After using wget to pull it down on the MIPS VM, I realized I had to install a compiler.

apt-get install gcc

worked just fine.

Mistake: I did not think about how I had to compile these applications. The target device is very stripped down, and who knows what libraries it even has on it. So after some trial and error, I realized that I had to compile with static library linking. It had been a while since I had compiled something like this, so I found a page explaining that all you need to do is add the ‘-static’ flag to the compiler arguments. Prior to finding that link, I asked in the IG chat room and got a similar response:

[10/11/11 5:10:46 PM] wuntee: how do you force automake/autoconf/gcc whatever to do static librarys, vs dynamic?
[10/11/11 5:11:38 PM] Jeremy: -static
[10/11/11 5:11:39 PM] Sid: -static
[10/11/11 5:11:46 PM] Sid: ^ that
[10/11/11 5:11:46 PM] Jeremy: Zach, go on… say it
[10/11/11 5:11:49 PM] Zach twitches
[10/11/11 5:11:52 PM] Jeremy: -static
[10/11/11 5:11:57 PM] Jeremy: -static -static -static -static -static -static -static -static -static -static -static -static -static -static -static -static -static -static -static -static -static -static -static -static -static -static -static -static -static -static -static -static -static
[10/11/11 5:11:57 PM] Sid: -static
[10/11/11 5:12:36 PM] Zach: “./configure –help”

RTFM… Thanks guys.

Mistake: There are MANY options you can pass to the compiler when compiling something for MIPS. The output of the ‘file’ command on a working executable can be helpful here. In my case it says “ELF 32-bit LSB executable, MIPS, MIPS-II”, which meant I also wanted to pass ‘-mips2’ to the compiler.

Configuring and making netcat was easy enough after figuring out all of the options:

CPPFLAGS="-mips2 -static" CFLAGS="-mips2 -static" ./configure && make

Running file on this looks close enough to the working executable, and it is statically linked like I wanted:

some_elf_binary: ELF 32-bit LSB executable, MIPS, MIPS-II version 1 (SYSV), dynamically linked (uses shared libs), stripped
netcat: ELF 32-bit LSB executable, MIPS, MIPS-II version 1 (SYSV), statically linked, for GNU/Linux 2.6.18, with unknown capability 0xf41 = 0x756e6700, with unknown capability 0x70100 = 0x1040000, stripped

You can then put the executable on the device, and hopefully it will run. One last thing you can do to the binary to conserve space is strip its symbols, which is done with the strip command (e.g., ‘strip netcat’).

After about a day of getting this working, the resulting netcat binary was only 750k and allowed me to obtain the tcpdump output. Hope this helps someone.

wuntee

1 comment

A Brave New Wallet – First look at decompiling Google Wallet

Posted: September 21, 2011 – 10:12 am | Author: | Filed under: android, Humor, Mobile Security, NFC, Reverse Engineering, RFID, software security

For the record, I welcome our new contactless payment overlords. I truly see the value in being able to make a payment transaction with our mobile devices. This opens up an opportunity to make these transactions more secure, give customers a better user experience, and also give them more control over payment options. Sure, there are risks involved with this new technology, and everyone should do their own weighing of the risks versus the benefits, but I imagine a good number of you have already done this when deciding to use a current payment system over cash (or gold). However, a first (and rather quick, as I’m supposed to be on vacation) look at the new Google Wallet code makes me wonder if this first release might need a bit of polish.

If you would like to follow along even without a Nexus S 4G, you can grab the new over-the-air (OTA) update from Google here. You can find the main parts of the new Wallet application in the “/system/app” directory of the update, but it will need some deodexing.

I typically start going through an app with the AndroidManifest.xml file. One thing that jumped out at me was the six “debug” and five “fakes” activities listed in the manifest. As a general best practice, debugging code should be removed from production releases. However, you do have to appreciate the humor of the “BsBankManagerActivity”. Yup, sign up with “BS” bank by calling “6501111111” or visiting “http://bsbank.com” (BS Bank heard there was a BEAST breaking TLS this week, so they dropped it). Going through the BS code leads to some more fun “bsness” later on as well, such as the revelation that “something is seriously wrong with this image URL” (which they were working on back in January?).

Additionally, there’s a handful of test related phone numbers left in “DebugMenuHelper” and “DemoDataPopulator”. Here they are in the format found:

4155589991
(415) 626-9682
(510) 351-0108

You will notice there are a few obfuscated classes in the wallet application. These appear to be related to the OTA proxy parts of the application. While not extremely complex in its functionality, I do think it’s appropriate to obfuscate this. Unfortunately, it appears that a great deal of logging can take place here and the default level is set to “FULL_LOGGING” (although it appears this level can be dynamically changed).

We haven’t yet seen what data gets logged by this, but the obvious concern would be a malicious log-reading application, as described over a year ago by the Lookout team. There also appears to be code that will send some log messages to “gtec.skcc@gmail.com”.

Continuing with the testing-related code in the production application, let’s pull out a number of the test/demo/UAT URLs (which don’t seem totally bogus, but still could be). “CodeConfiguration” holds a number of these:


DEFAULT_CITI_SOAP_URL_CAT = "https://systemtest.citibankonline.citibank.com/MSMOTAPersonalization/Webservices/MSMPayPassOTAPersonalizationService-service1.serviceagent/MSMPayPassOTAPersonalizationServicePortTypeEndpoint1"

DEFAULT_CITI_SOAP_URL_DEMO = "https://systemtest.citibankonline.citibank.com/MSMOTAPersonalization/Webservices/MSMPayPassOTAPersonalizationService-service1.serviceagent/MSMPayPassOTAPersonalizationServicePortTypeEndpoint1"

DEFAULT_CITI_SOAP_URL_PROD = "https://test.mobileservices.accountonline.com/MSMOTAPersonalization_FUT/Webservices/MSMPayPassOTAPersonalizationService-service1.serviceagent/MSMPayPassOTAPersonalizationServicePortTypeEndpoint1"

DEFAULT_FDCML_PROD_URL = "https://www.fdmobileservices.com/mAccountsWeb/MbankingService"

DEFAULT_FDCML_TEST_URL = "https://cat.fdmobileservices.com/mAccountsWeb/MbankingService"

DEFAULT_TSM_URL_CAT = "https://uat.skcctsm.com:8443"

DEFAULT_TSM_URL_PROD = "https://pip.skcctsm.com:8443"

There are also environment-name/URL pairs in the smali:

DEVELOPMENT = "https://jmt0.google.com/cm"
SANDBOX = "https://cream.sandbox.google.com"
PROD = "https://clients5.google.com/cm"

Finally, with each point release of Gingerbread (2.3) we’ve seen the code around the NFC components change greatly, generally adding new functionality but at times deprecating older features. In the Wallet code, there appear to be over 50 classes with at least one deprecated method.

I’m sure many others are looking at this code as well and have some interesting finds. We are looking forward to making a payment soon with our Nexus S. Maybe we’ll use it to buy a pair of shoes.

Update 11/18/2011

It’s been a while now, and there’s been quite a bit of good work on Google Wallet done over at XDA Developers. To clear a few things up: the email address appears to be for Android Cloud to Device Messaging (C2DM), and a lot of the debug code was removed from the Wallet updates which have since been pushed. That said, you can flip on the “Debug” menu in the original code. If you want to get this to run on a device, though, you’ll need to resign a few other packages or fix permissions.

Built-in debug menu in Google Wallet

-b3nn

Comments disabled
