Intrepidus Group

Category Archives: software security

Sanitize your outputs: Apple ID Password Logfile Disclosure

Posted: March 10, 2014 – 3:23 pm | Author: | Filed under: bugs, iOS, Passwords, software security

In recent weeks, there have been quite a few security disclosures for Apple. Some of these have even been pretty significant. Not to pile on, but here’s some detail behind another security issue that I stumbled across last fall.

Apple had just restricted the availability of the Add Site feature on Apple TVs, and I was trying to determine whether alternate methods existed to enable the feature. If you couldn’t install the required configuration profile directly, maybe it’d work via MDM? Nope, that didn’t work. Wait, what about the Touch Setup feature? That’s obviously allowed to make changes to the Apple TV system.

Briefly, Touch Setup allows a user to wirelessly transfer network and Apple credentials from an iPhone to a new Apple TV. When booting the Apple TV for the first time, the user simply touches an unlocked iPhone to the device, it asks if they want to set up the device, and if so, the Wi-Fi password and Apple ID account and password are copied to the Apple TV. I was hoping that the data was transferred using some kind of special Configuration Profile that I could then modify and install through normal means to re-enable the Add Site feature.

Since a public jailbreak does not yet exist for Apple TV 6.0, I was kind of limited in what I could try. The first thing I checked was whether anything interesting was revealed in the log files. Collecting log data from an iPhone is pretty easy: connect the phone via USB and use an application like iPhone Configuration Utility (IPCU) or Xcode to read and save the console log. Can we do the same on an Apple TV?

Sure we can, but there’s a slight problem: the micro-USB port and HDMI port on the Apple TV are right next to each other. They’re so close that you usually can’t plug both in at the same time. However, this problem may be solved with a sharp utility knife — and a little bit of luck.

Be careful you don’t slice too deep or you’ll cut the wires!

Okay, now that I’ve got the hardware issues solved… Connect the laptop, go into the Apple TV, select factory reset, reboot, and… Damn. It launches iTunes and goes into “Device Restore” mode. You have to disconnect the USB, reboot the Apple TV, and then connect the USB sometime after it’s started booting. For the purposes of this test, there’s no need to connect until we actually start the Touch Setup process.

After collecting logs from both iPhone and Apple TV, I then went looking for useful log entries. Several items appeared interesting, including multiple “Data arrived” and “Sending back” lines related to the AppleTV binary and TouchRemote framework. One sequence in particular appeared intriguing (you may have to scroll sideways to see the fun bits):

Oct 10 19:47:12 Apple-TV AppleTV[24] <Warning>: Data arrived: <bcc254e0 27de2466 6bda2549 d2c46db7 9892a153 eba53073 dc0df4a7 0cbac8cf 811d0437 18d843a3 c822de94 1436a154 d0954f09 75c407ff 0e289a2d 46230016 259ed996 0f208470 a5cbdb39 94791034 9c> - sending to client
Oct 10 19:47:12 Apple-TV AppleTV[24] <Warning>: [TRTransferServer] Received a full packet: <bcc254e0 27de2466 6bda2549 d2c46db7 9892a153 eba53073 dc0df4a7 0cbac8cf 811d0437 18d843a3 c822de94 1436a154 d0954f09 75c407ff 0e289a2d 46230016 259ed996 0f208470 a5cbdb39 94791034 9c>, state: 3
Oct 10 19:47:12 Apple-TV AppleTV[24] <Warning>: [TRDeviceSetupServer] Server got data of length 81
Oct 10 19:47:12 Apple-TV AppleTV[24] <Warning>: [TRDeviceSetupServer] Compressed data: <1f8b0800 00000000 00034b2a c8c92c2e 3130b8c4 c8c4cc12 98185810 92585a92 7191952d 3025ce25 b12c3345 bd582133 20233f2f 9583975f 504c529a 010c1819 2134033b 032ad002 00102051 82510000 00>
Oct 10 19:47:12 Apple-TV AppleTV[24] <Warning>: [TRDeviceSetupServer] Decompressed data: <62706c69 73743030 d2010203 04516151 70546175 7468d105 0651645e 44617669 64277320 6950686f 6e65080d 0f111619 1b000000 00000001 01000000 00000000 07000000 00000000 00000000 00000000 2a>

The AppleTV process receives data over Bluetooth (starting here with “bcc254e0…”) and passes it to TRTransferServer, which logs the same packet. TRDeviceSetupServer then logs data of exactly the same length, annotated “Compressed data”. So the payload was sent compressed, and probably encrypted, and we’ve now seen both the encrypted and decrypted versions of it. Woohoo, all we need to do now is decompress it. But we don’t even have to: TRDeviceSetupServer is so helpful in its debug logging that the very next line, annotated “Decompressed data”, contains the actual, decrypted, decompressed data sent by the iPhone. It’s a binary property list, which, turned into JSON, looks like this:

  "a" : "auth",
  "p" : {
    "d" : "David's iPhone"

So far, so good, but really nothing special. It’s the result of the next transaction that’s interesting. The phone sends 1636 bytes of encrypted and compressed data, which the TRDeviceSetupServer conveniently decrypts, decompresses, and dumps to the log (as 3439 bytes of hex). This, again, is a property list, with considerably more content:

  "a" : "setup",
  "p" : {
    "au" : {
      "h" : {
        "x-apple-orig-url" : "https:\/\/\/WebObjects\/MZFinance.woa\/wa\/authenticate",
        "edge-control" : "no-store, cache-maxage=0",
        "x-set-apple-store-front" : "143441-1,19",
        "Expires" : "Thu, 10 Oct 2013 22:47:49 GMT",
        "apple-timing-app" : "402 ms",
        "pod" : "44",
        "Cache-Control" : "private, no-cache, no-store, no-transform, must-revalidate, max-age=0",
        "x-apple-lokamai-no-cache" : "true",
        "Content-Type" : "text\/xml; charset=UTF-8",
        "x-apple-translated-wo-url" : "\/WebObjects\/MZFinance.woa\/wa\/authenticate",
        "x-apple-jingle-correlation-key" : "--redacted--",
        "Content-Encoding" : "gzip",
        "x-apple-date-generated" : "Thu, 10 Oct 2013 22:47:48 GMT",
        "x-apple-application-site" : "ST13",
        "x-apple-application-instance" : "440051",
        "x-apple-asset-version" : "0",
        "Date" : "Thu, 10 Oct 2013 22:47:49 GMT",
        "Set-Cookie" : --really-large-cookie-jar-goes-here--;; httponly"
        "x-webobjects-loadaverage" : "23",
        "x-apple-request-store-front" : "143441-1,19 t:6",
        "Content-Length" : "522",
        "itspod" : "44"
      "b" : {
        "status" : 0,
        "password" : "--redacted--",
        "m-allowed" : true,
        "creditBalance" : "1311811",
        "freeSongBalance" : "1311811",
        "clearToken" : "--redacted--",
        "is-cloud-enabled" : "true",
        "passwordToken" : "--redacted--",
        "dsPersonId" : "--redacted--",
        "creditDisplay" : "",
        "accountInfo" : {
          "address" : {
            "firstName" : "David",
            "lastName" : "Schuetz"
          "accountKind" : "0",
          "appleId" : "--redacted--"
    "np" : "--redacted--",
    "c" : "US",
    "l" : "en",
    "ns" : "--redacted--",
    "ha" : "--redacted--",
    "rp" : true,
    "hg" : "--redacted--",
    "di" : true

(I’ve redacted anything that looked, you know, sensitive. And, no, I don’t have $1.3 million in iTunes credits; I have no idea what that number means.) The structure I’ll refer to here as p->b (the last big block, which starts with “status” and “password”) contained my Apple ID and password. The Apple ID was also repeated in the “ha” value a few lines later, and the Wi-Fi network name and password are passed in the “ns” and “np” values.
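If you want to repeat this decoding against your own (test) device’s logs, turning those hex dumps back into structured data takes only the Python standard library. Here’s a quick sketch, round-tripped against synthetic data shaped like the first Touch Setup packet, since I’m not reproducing my real payloads:

```python
import gzip
import plistlib

def decode_log_hex(hexdump: str) -> dict:
    """Parse a '<1f8b08 ...>'-style hex dump from the console log:
    strip the angle brackets and spaces, gunzip, then read the binary plist."""
    raw = bytes.fromhex(hexdump.strip("<>").replace(" ", ""))
    return plistlib.loads(gzip.decompress(raw))

# Round-trip demo: build a payload shaped like the first packet above,
# gzip it the way the phone does, and decode it from its logged hex form.
packet = plistlib.dumps({"a": "auth", "p": {"d": "David's iPhone"}},
                        fmt=plistlib.FMT_BINARY)
logged = "<" + gzip.compress(packet).hex() + ">"
print(decode_log_hex(logged))  # {'a': 'auth', 'p': {'d': "David's iPhone"}}
```

Feed the same function the “Compressed data” bytes from a real log and you get the plist back without ever touching the Bluetooth crypto.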

Naturally, we expect all this data to be present in the packet, as that’s the whole point of the Touch Setup: to transfer your Apple ID and Wi-Fi credentials to a new Apple TV so it’s immediately up and running on the network. The problem is that the credentials got saved to the log.

Here’s the affected code, in the original version (the two NSLog entries are where the data is written to the log):

// TRDeviceSetupServer - (id)server:(id) didReceiveData:(id)
  // v6 contains the decrypted data 
  if ( _TRLogEnabled == 1 )
    NSLog("Compressed data: %s", v6);   

  v18 = objc_msgSend(v6, "TR_decompressedGzipData");
  v19 = objc_retainAutoreleasedReturnValue(v18);
  if ( _TRLogEnabled == 1 )
    NSLog("Decompressed data: %s", v19); 

And the new version. The NSLog entry for compressed data has been removed, and the log entry for uncompressed data has been replaced with a log simply recording the length of the decompressed data:

  v16 = objc_msgSend(v6, "TR_decompressedGzipData");
  v17 = (void *)objc_retainAutoreleasedReturnValue(v16);
  v6 = v17;
  if ( _TRLogEnabled == 1 )
  {
    v18 = objc_msgSend(v17, "length");
    NSLog(CFSTR("[TRDeviceSetupServer] Decompressed data length: %li"), v18);
  }

Is this really a big problem? Well, one should never write any kind of credentials to a system log, even in the case of the Apple TV, where the log is only available by physically connecting to the device. As far as I know, these logs aren’t stored on the system, but since I don’t have a jailbroken 6.0 Apple TV I can’t say for certain that they aren’t on disk somewhere. And if they’re on disk, they’re at risk.
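The general defense is mechanical: scrub or summarize anything credential-shaped before it ever reaches a log handler. Here’s a minimal, generic sketch in Python (my own illustration of the principle, not Apple’s fix, which simply logs the payload length instead); the key names mirror the plist above:

```python
import logging
import re

# Mask the values of credential-looking keys before a record reaches any handler.
REDACT = re.compile(r'("(?:password|passwordToken|clearToken)"\s*:\s*")[^"]*(")')

class CredentialFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = REDACT.sub(r"\1--redacted--\2", str(record.msg))
        return True  # never drop the record, just sanitize it

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("TRDeviceSetupServer")
log.addFilter(CredentialFilter())
log.debug('Decompressed data: {"status" : 0, "password" : "hunter2"}')
# logged as: Decompressed data: {"status" : 0, "password" : "--redacted--"}
```

A filter like this belongs at the logger, not sprinkled through call sites, so one missed NSLog-equivalent can’t leak the whole payload.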

Armed with this information, I wrote up a nice formal security advisory (Full Disclosure, see also CVE-2014-1279), and sent it off to Apple. Well, it took a little while — I had to verify what I’d done, get it written up, send it through our internal disclosure process, and, well, find the time to do it all. So I didn’t get the bug reported until November 8. Then we waited. I exchanged emails with Apple a couple of times, but still didn’t know when the fix would be released. Finally, on February 19, Apple wrote asking how I’d like to be credited for the find, and we knew the fix would be released soon. So even though it took a while, the “Coordinated Disclosure” process seemed to work pretty well.

This isn’t the first time that passwords were written to a system log, and it probably won’t be the last. But hopefully Apple can learn from these mistakes and work on their QA and code review processes to minimize similar vulnerabilities in the future.

Comments disabled

Android’s BuildConfig.DEBUG

Posted: July 15, 2012 – 9:55 pm | Author: | Filed under: android, Mobile Security, Reverse Engineering, software security

Verbose logging in Android applications is both a problem we frequently see in production builds and something we’ll try to enable when we’re pentesting an app. In revision 17 of Android’s SDK Tools and ADT, the release notes mentioned a feature that could help developers with this issue:

Added a feature that allows you to run some code only in debug mode. Builds now generate a class called BuildConfig containing a DEBUG constant that is automatically set according to your build type. You can check the (BuildConfig.DEBUG) constant in your code to run debug-only functions such as outputting debug logs.

A few bugs kept this from working as expected in the original releases (Issue 297940), but those appear to have been worked out. Running a few tests with revision 20, here’s an example of how it can be used and what it looks like in a built APK.

In the Java code for our sample application, we included the following lines:

if (BuildConfig.DEBUG) {
    Log.e("HelloJB", "I am in the Debug");
} else {
    Log.e("HelloJB", "I am NOT Debug");
}
Log.d("HelloJB", "Debug value is: " + BuildConfig.DEBUG);

We then cleaned, built, and exported our application as a signed package from Eclipse. As expected, logcat showed “I am NOT Debug” and “Debug value is: false”. However, decompiling the application using Dex2Jar showed that these values had been resolved at build time, not simply at execution. With the new update, the Dex2Jar output for those several lines was now just two:

Log.e("HelloJB", "I am NOT Debug");
Log.d("HelloJB", "Debug value is: false");

This looks like a pretty clean way to strip out any code you don’t want making it into your release builds (an issue we’ve seen many times in assessments).

Now for pentesters of Android applications, we often look to enable verbose application logging. I will typically look for a debug flag in the application and try setting it to TRUE at run-time or by recompiling the application. If a developer uses this method to remove debug logging, though, setting the DEBUG boolean to true in the BuildConfig class of a released application will probably not make a difference. Instead, look for an empty logging class and add back in calls to Android’s log methods. There are a few ways to skin that cat, but often some well-placed smali code will do the trick. For example, if you see an empty d method taking two strings, try dropping in the following line to see the debug messages again.

invoke-static {p0, p1}, Landroid/util/Log;->d(Ljava/lang/String;Ljava/lang/String;)I

Comments disabled

NDK File Permissions Gotcha and Fix

Posted: May 11, 2012 – 1:17 pm | Author: | Filed under: android, software security

The Android NDK (Native Development Kit) is a complementary toolkit for the Android SDK which enables developers to create native binary code. With the NDK, developers can harness the speed and performance of C to augment their Java Android SDK application. Typical uses are performance-critical functions in audio/video playback or rendering graphics in video games.

The NDK compiles the C portion into a shared library (.so), and exported functions in the shared library are accessible via JNI calls from the Java portion of the application. During a call to a native function, the program is running native ARM machine instructions outside the Dalvik virtual machine, and developers should take note that the security model supplied by the Dalvik VM is not necessarily inherited by the native runtime environment. An example of this can be seen when writing to a file.

When the standard method of writing to a file in the Android SDK (Java executed in the Dalvik VM) is used, the output file is created with the following permissions:

-rw-rw-r-- 1 app_75 app_75 16 May 3 14:16 J.FLAG

The result is a file that is world readable but not writeable.

If one were to create a file via the NDK, using JNI, with just the following C library calls:

FILE * fp = fopen("/data/data/com.intrepidusgroup.scratch/C.FLAG", "ab");
fprintf(fp, "This is content.\n");
fclose(fp);

The new file would have the following permissions:

-rw-rw-rw- 1 app_75 app_75 22 May 3 14:17 C.FLAG

The file is world readable AND world writeable. This could potentially lead to security issues since the new file may be corrupted (intentionally or otherwise) by another application on the device if the file location is well known.

The difference in these default file permissions occurs because of the way process execution is handled on Android. Application processes are forked from the Zygote process, which has a umask of 000, and any process spawned from Zygote inherits this umask.
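The umask arithmetic is easy to sanity-check from any POSIX environment (file mode = requested mode & ~umask). The sketch below is Python, but the C calls shown in this post behave identically:

```python
import os
import stat
import tempfile

def mode_with_umask(mask: int) -> int:
    """Create a file requesting mode 0666 (what fopen(3) asks for)
    under the given umask, and return the mode bits it actually got."""
    old = os.umask(mask)
    try:
        path = os.path.join(tempfile.mkdtemp(), "C.FLAG")
        os.close(os.open(path, os.O_WRONLY | os.O_CREAT, 0o666))
        return stat.S_IMODE(os.stat(path).st_mode)
    finally:
        os.umask(old)  # restore the previous umask

print(oct(mode_with_umask(0o000)))  # 0o666: world-writable, the Zygote case
print(oct(mode_with_umask(0o002)))  # 0o664: the -rw-rw-r-- seen from the Dalvik VM
```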

One way to match the SDK’s permission settings in native code is to change the individual process umask with the umask(3) C library API call. This should be done before creating the file in the native code:

umask(S_IWOTH);  /* clear the world-write bit (umask 0002) before creating the file */
FILE * fp = fopen("/data/data/com.intrepidusgroup.scratch/C.FLAG", "ab");
fprintf(fp, "This is content.\n");
fclose(fp);

This results in a file which has permissions that conform to those set by the Android Java SDK:
-rw-rw-r-- 1 app_75 app_75 22 May 3 16:07 C.FLAG

Another way to do this is to use the open(2) system call itself to create the file, as this requires that permissions be explicitly specified when a new file is created.

const char * fn = "/data/data/com.intrepidusgroup.scratch/C.FLAG";
const char * content = "This is some content.\n";
int fd = open(fn, O_WRONLY | O_CREAT | O_APPEND,
              S_IRUSR | S_IWUSR | S_IRGRP | S_IWGRP | S_IROTH);  /* mode 0664 */
ssize_t err = write(fd, content, strlen(content));
close(fd);

This also creates a file which has permissions:
-rw-rw-r-- 1 app_75 app_75 22 May 3 17:15 C.FLAG

By modifying the third argument to open(2), it is possible to restrict the file permissions even further. For example, excluding the S_IROTH flag would remove world readability from the file.


Comments disabled

OWASP ATL: Mobile Application Assessment Presentation

Posted: November 29, 2011 – 4:04 pm | Author: | Filed under: iOS, Mallory, Mobile Security, OWASP, software security, ssl

I recently gave a presentation at OWASP ATL on the OWASP Mobile Top 10 and how to assess mobile applications. It was a lightweight discussion of the OWASP Mobile Top 10 and some topical and technical concerns related to securing mobile applications.

Download the presentation here: [download id="276"]


These videos show various testing techniques on real applications. The applications targeted didn’t have any serious problems. In the case of the game “WordFeud”, a Scrabble clone, the game maintained its state on the server, and tampering with client-side values did not yield any interesting results. The SoundCloud demonstration shows how the app uses the iOS data protection API and the Keychain to avoid storing OAuth tokens in the application’s file sandbox.

Video Demo Series Here:

iOS Application MiTM

Sound Cloud and Data Protection

Comments disabled

A Brave New Wallet – First look at decompiling Google Wallet

Posted: September 21, 2011 – 10:12 am | Author: | Filed under: android, Humor, Mobile Security, NFC, Reverse Engineering, RFID, software security

For the record, I welcome our new contactless payment overlords. I truly see the value in being able to make a payment transaction with our mobile devices. This opens up an opportunity to make these transactions more secure, give customers a better user experience, and also give them more control over payment options. Sure, there are risks involved with this new technology, and everyone should do their own weighing of the risks versus the benefits, but I imagine a good number of you have already done this when deciding to use a current payment system over cash (or gold). However, a first (and rather quick, as I’m supposed to be on vacation) look at the new Google Wallet code makes me wonder if this first release might need a bit of polish.

If you would like to follow along even without a Nexus S 4G, you can grab the new over-the-air (OTA) update from Google here. You can find the main parts of the new Wallet application in the “\system\app” directory of the update, but it will need some deodexing.

I typically start going through an app with the AndroidManifest.xml file. One thing that jumped out at me was the six “debug” and five “fakes” activities listed in the manifest. As a general best practice, debugging code should be removed from production releases. However, you do have to appreciate the humor of the “BsBankManagerActivity”. Yup, sign up with “BS” bank by calling “6501111111” or visiting “” (BS Bank heard there was a BEAST breaking TLS this week, so they dropped it). Going through the BS code leads to some more fun “bsness” later on as well, such as the revelation that “something is seriously wrong with this image URL” (which they were working on back in January?)

Additionally, there’s a handful of test related phone numbers left in “DebugMenuHelper” and “DemoDataPopulator”. Here they are in the format found:

(415) 626-9682
(510) 351-0108

You will notice there are a few obfuscated classes in the wallet application. These appear to be related to the OTA proxy parts of the application. While not extremely complex in its functionality, I do think it’s appropriate to obfuscate this. Unfortunately, it appears that a great deal of logging can take place here and the default level is set to “FULL_LOGGING” (although it appears this level can be dynamically changed).

We haven’t yet seen what data gets logged by this, but the obvious concern would be a malicious log-reading application, as described over a year ago by the Lookout team. There also appears to be code that will send some log messages to “”.

Continuing with the testing-related code in the production application, let’s pull out a number of the test/demo/uat URLs (which don’t seem totally bogus, but still could be). “CodeConfiguration” has a number of these:

private static final DEFAULT_CITI_SOAP_URL_CAT:Ljava/lang/String; = Personalization/Webservices/MSMPayPassOTAPersonalizationService-service1.serviceagent/MSMPayPassOTAPersonalizationServicePortTypeEndpoint1

private static final DEFAULT_CITI_SOAP_URL_DEMO:Ljava/lang/String; = Personalization/Webservices/MSMPayPassOTAPersonalizationService-service1.serviceagent/MSMPayPassOTAPersonalizationServicePortTypeEndpoint1

private static final DEFAULT_CITI_SOAP_URL_PROD:Ljava/lang/String; = FUT/Webservices/MSMPayPassOTAPersonalizationService-service1.serviceagent/MSMPayPassOTAPersonalizationServicePortTypeEndpoint1

private static final DEFAULT_FDCML_PROD_URL:Ljava/lang/String; = ""

private static final DEFAULT_FDCML_TEST_URL:Ljava/lang/String; = ""

private static final DEFAULT_TSM_URL_CAT:Ljava/lang/String; = ""

private static final DEFAULT_TSM_URL_PROD:Ljava/lang/String; = ""

const-string v1, "DEVELOPMENT"

const-string v2, ""

const-string v1, "SANDBOX"

const-string v2, ""

const-string v1, "PROD"

const-string v2, ""

Finally, with each point release of Gingerbread (2.3) we’ve seen the code around the NFC components change greatly, generally adding new functionality but at times deprecating older pieces. In the wallet code, there appear to be over 50 classes with at least one deprecated method.

I’m sure many others are looking at this code as well and have some interesting finds. We are looking forward to making a payment soon with our Nexus S. Maybe we’ll use it to buy a pair of shoes.

Update 11/18/2011

It’s been a while now, and there’s been quite a bit of good work on Google Wallet done on XDA Developers. To clear a few things up: the email address appears to be for Android Cloud to Device Messaging (C2DM), and a lot of the debug code was removed from the wallet updates which have since been pushed. That said, you can still flip on the “Debug” menu in the original code. If you want to get this to run on a device, though, you’ll need to re-sign a few other packages or fix permissions.

Built in debug menu in Google Wallet


Comments disabled

USRP 101: Unlocking Wireless PC Locks (and freeing dolphins)

Posted: July 18, 2011 – 5:39 pm | Author: and | Filed under: Cryptography, Mobile Security, RFID, software security, Tools, USRP, Wireless

Wireless PC Proximity Lock

Have you ever seen one of these “USB Proximity PC Locks” before and thought “There’s NO way that piece of junk is secure”… turns out, you were right.

We had a little office challenge recently to break this system, just for fun, and along the way document our Universal Software Radio Peripheral (USRP), which I’m still just starting to get to know. By now, I figure most of our readers are familiar with the OpenBTS Project, which uses a USRP to impersonate a GSM base station. While that is an impressive use of the hardware at a fraction of the cost of a commercial base station, the USRP can also be used to impersonate less functional and almost worthless priceless equipment… like that USB proximity lock.

First things first, we need to get one of these locks ourselves. Surprisingly, I got one of these as a gift from ThinkGeek years ago and you can still find them on eBay and a few other sites. I was missing the drivers for mine, but you can still find a copy online. I installed it in an XP virtual machine and paired the remote with the USB dongle. Now anytime the remote was powered down or more than 30 feet away, the lock screen with these pretty dolphins was displayed.

Wireless Lock "lock" screen on XP

What I needed to know next was the approximate frequency the remote used to send data to the dongle and unlock the computer. My goal was to capture this transmission with the USRP, then replay the signal when the remote was turned off or out of range. Unfortunately, the documentation that came with the wireless lock was pretty silent on this. Given the device is so old, I doubted it would use Bluetooth, so I started looking through the installed application files for clues. The application is sold to be re-branded by many companies, but the string “Copyright (C) 2003 Dritek System Inc” in HIDRead.dll seems to point to the actual manufacturer. The USB dongle installs as a HID device under Windows, but neither the driver, the documentation that ships with it, nor the PDFs I found online say anything about the frequency on which the dongle and the remote communicate. However, one eBay post did contain an image of the back of the packaging, which seems to show “FCC 434Mhz”. This matches the unlicensed spectrum commonly used for remote keyless car entry and garage doors.

This was also backed up when the remote was taken apart. There are two main chips on the remote I was interested in, one labeled “NDR 550” and the other “MDT10P55B1S”. Some surfing around leads to the NDR550 being from the “Najing Electronic Devices Institute”, which lists it as a one-port resonator operating at 433.92 MHz. Looking at the remote’s PCB near the battery, there are markings “315” and “434”. Mine had a blue pen mark next to the “434” text, which falls within the range of the WBX board in our USRP.

Wireless PC Lock Remote

Using the GNU Radio spectrum analyzer around the 433.92 MHz frequency with our USRP N210, we do in fact receive a signal when the unlock remote is powered up and transmitting. The “” script comes with the GNU Radio UHD package. While the GUI was a bit unstable on my system, the command line parameters worked well.

 -f 433.9M -A TX/RX


The next step was to capture the signal coming from the remote to the dongle. While far from stealthy, the Log Periodic antenna we had from WA5VJB works for the 400–1000 MHz range. So with a bit of gain tweaking and proper timing, we were able to snag a good complex capture of the signal out of the air. Again, GNU Radio makes this easy with the “” script.

 -f 433.9M -A TX/RX -g 35 outfile.dump

Then it was time to replay the signal. To do this, we wrote a GNU Radio Companion (GRC) file. I’d recommend looking at the OZ9AEC GRC examples if you’re new to GRC and have a UHD device like our N210. This replay script was so easy, though, that you can basically point and click to get it working. You need just one source (something that will generate a signal, in this case the file we had just captured) and one sink (something to transmit the signal, here the UHD: USRP Sink). Set the sample rate to match that of the capture (default 1M), set the center frequency (433.9 MHz in this case), and adjust the gain depending on your antenna and range. We set the file source to repeat so that running the script would continuously replay the unlock command to the dongle. From there, simply execute the script and watch the PC unlock. (Go free, my dolphin friends!)

GRC Transmit from File

We also looked at unlocking the system using a Teensy USB development board as a fake dongle (Sid, I want my Teensy back!). We plan to have a follow-up post on that, but if you start looking through the registry and configuration settings for this wireless lock, you’ll notice some data looks strange. The “SqrtyKey.Cfg” file and the HKLM\SOFTWARE\KeyMark\Wireless PC Lock\Password Answer registry setting are encoded with a transposition cipher. It shouldn’t take you long to figure out the pattern, and once you have, you can use the python script below to save yourself some decoding time. [download id="271" format="2"] (update: link should now work) (update 2: the maddman posted an awesome cleanup of the script for Python 3 here)
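I won’t spoil the actual pattern here, but the general shape of such a decoder is tiny. A hypothetical Python sketch (the permutation below is made up for illustration; the real one has to be read out of SqrtyKey.Cfg as described):

```python
def transpose_decode(ciphertext: str, key: list) -> str:
    """Undo a fixed-permutation transposition applied per block.
    key[i] gives the ciphertext position that output position i came from.
    (Illustrative only: the lock's real permutation differs.)"""
    n = len(key)
    out = []
    for i in range(0, len(ciphertext), n):
        block = ciphertext[i:i + n]
        # skip key entries past the end of a short final block
        out.append("".join(block[k] for k in key if k < len(block)))
    return "".join(out)

# Demo with a made-up pair-swapping permutation over 4-character blocks:
print(transpose_decode("ehllworodl", [1, 0, 3, 2]))  # helloworld
```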

So there you have it. Want to defeat a $20 wireless PC lock? All you have to do is spend $2500 on USRP hardware ;-)

~Corey and Max


The OWASP Mobile Top 10 Risks for iOS Developers

Posted: May 24, 2011 – 11:42 am | Author: | Filed under: Cryptography, iOS, Skype, software security

The OWASP Mobile Top 10 Risks is a generic list of the most common risks found in mobile applications. We see these risks every day, and they often show up as vulnerabilities in the applications we are assessing. No list such as this can adequately cover *every* issue an application will face, but it is a good starting point for a security team or development team looking to understand the most common mobile application security issues for iOS. This article focuses on these risks (and ways to mitigate them) for iOS. For a more generic look at these controls, and for further ideas on mitigations, check out the OWASP Mobile Top 10 Controls.

1 – Insecure or unnecessary client-side data storage

This risk addresses the obvious concern of sensitive data being stored on mobile devices. All developers must carefully consider whether storing a piece of data on a mobile device’s persistent storage is absolutely critical to the application’s correct functioning. If it is not, the data should simply not be stored. If the data is required, and it is sensitive, it must be protected. Data protection often means “encryption”. For iOS developers, this is not an onerous problem: Apple provides a very easy to use, and secure, data protection mechanism for iOS 4 and newer devices using 3GS or newer hardware. The mechanism is aptly referenced as “Data Protection”, and Apple provides two methods of using it that are very straightforward. Pass the correct option (NSDataWritingFileProtectionComplete) to NSData writeToFile:options:error: and the file is protected and encrypted when the device is locked. Alternatively, you can use NSFileManager setAttributes:ofItemAtPath:error: and pass the NSFileProtectionKey attribute with the NSFileProtectionComplete value to protect existing files.

For more information see: Implementing Common Application Behaviors.

Note: The user must have a passcode for this to work. iOS 3.x on 3G and older devices do not have this capability and require much higher effort and less secure/user friendly solutions.

2 – Lack of data protection in transit

After data has been secured on the device, the next high-concern area is protecting the communications between the mobile application and the server. By far the most common communication protocol is HTTP, which for iOS developers typically means using the NSURL or NSURLConnection classes. By default, NSURLConnection fails with an error in the event of an SSL issue. Development environments do not typically have a valid SSL certificate, which creates a problem, so NSURLConnection’s behavior is often changed to accept invalid certificates and let development continue without hassle.

With NSURLConnection, an application must implement the delegate methods canAuthenticateAgainstProtectionSpace and didReceiveAuthenticationChallenge to ignore SSL issues. Implementing these methods also gives you the opportunity to warn the user in the event of an invalid SSL certificate. The best behavior for production builds is to fail with an error the user can’t override. Conditional code that is never compiled into the production binaries (excluding the risky code entirely) is best. Make sure all production code has development-oriented SSL code pruned out of it.

The long and short of it is that you have to actively work at making this insecure on iOS.

3 – Personal Data Leakage

This is less of a technical issue and more of a purposeful choice that must be made by the developer. Developers must take care to guard their users’ private information, and applications must protect their users’ personal data. Use the same data protection mechanism described in the previous risk, “Insecure or unnecessary client-side data storage”, to protect personal user data. Beyond this, it is an application design decision how the application will handle the user’s personal data. Personal data privacy has become a hot topic, and users are becoming much more aware that their private data may be at risk in mobile applications.

4 – Failure to protect resources with strong authentication

This is both a server and a client issue. Very little authentication is typically performed on the mobile device itself; most authentication a mobile device encounters involves a server authenticating the mobile application. Applications on mobile devices rarely authenticate other services directly; when they do, it is usually because the application is being asked to share a resource it manages, such as a photograph, with a server.
The main concern is that iOS applications authenticate to servers properly, using strong authentication that uniquely identifies each mobile application user to the server. Above all, developers must never embed client-side secrets in their application and then rely on them as an integral part of an authentication method: anyone with a copy of the binary can extract them.

5 – Failure to implement least privilege authorization policy

On the local side, request only the permissions you absolutely need. Does your application *really* need access to the user's GPS? This is largely about protecting your users' privacy-sensitive data: be judicious about which resources you attempt to access.

The other part of this issue is server-side. Applications, especially thick-client applications, may contain a great deal of functionality that should not be available to lower-privileged users. The server is responsible for checking that a user is allowed to perform each requested action: even if the functionality is reachable in the client application, the server must not allow lower-privileged users to execute higher-privileged server-side functionality. Vertical privilege escalation is a constant risk to server-side applications.

Horizontal privilege flaws allow users of mobile applications to easily circumvent authorization controls and access the data of other users at the same privilege level. Take care to let a mobile application access only the server-side data that belongs to the currently authenticated user.
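As a sketch of that server-side ownership check (the record store, names, and exception type here are all hypothetical, not from any real framework):

```python
# Hypothetical server-side data store keyed by record id.
RECORDS = {
    101: {"owner": "alice", "body": "alice's statement"},
    102: {"owner": "bob", "body": "bob's statement"},
}

class AuthorizationError(Exception):
    pass

def fetch_record(record_id, authenticated_user):
    record = RECORDS[record_id]
    # Never trust the client-supplied record_id alone: verify that the
    # record belongs to the user the server itself authenticated.
    if record["owner"] != authenticated_user:
        raise AuthorizationError("record does not belong to requester")
    return record["body"]
```

A mobile client that simply increments record ids in its requests should hit the AuthorizationError, not another user's data.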

6 – Client-side injection

Client side injection is an interesting problem that can lead to a variety of issues depending on the application and how it operates. Many iOS applications utilize SQLite, which means that at some level those applications may be vulnerable to SQL injection. Often the consequences of SQL injection against a client side application are minimal.
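Whatever the client-side impact, parameterized queries cost nothing. A minimal Python sqlite3 sketch (the table and data are made up; on iOS the equivalent is binding parameters through the SQLite C API rather than building SQL strings by hand):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (owner TEXT, body TEXT)")
conn.execute("INSERT INTO notes VALUES ('alice', 'secret')")

def notes_for(owner):
    # Bound parameters keep attacker-supplied input out of the SQL text.
    cur = conn.execute("SELECT body FROM notes WHERE owner = ?", (owner,))
    return [row[0] for row in cur]

# A classic injection string is treated as a literal value, not as SQL.
assert notes_for("x' OR '1'='1") == []
assert notes_for("alice") == ["secret"]
```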

A recent vulnerability in the Skype client for the Mac (I know, it isn't an iOS app) illustrates what "rich user environments" make possible when an application implicitly trusts input: a simple cross-site scripting flaw led to remote code execution. Similar issues can surface in mobile applications that use UIWebView or other rich environments without carefully checking user (and other!) input.

The mitigation comes down to standard data validation; any more specific advice given here would be too specific to be useful in a general way. Make sure your data conforms to its expected length, range, type, and format. Length is obvious; range applies to data with expected numerical bounds (positive integers only, say); type covers integers and other data structures being read; and format covers the actual layout of the data (such as phone numbers).
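A minimal sketch of those four checks in Python (the field names, limits, and phone format are assumptions for illustration only):

```python
import re

# Illustrative format rule for a US-style phone number field.
PHONE_RE = re.compile(r"^\d{3}-\d{3}-\d{4}$")

def validate_quantity(raw):
    # Length check comes first, before any parsing.
    if len(raw) > 6:
        raise ValueError("too long")
    # Type check: must parse as an integer (raises ValueError otherwise).
    value = int(raw)
    # Range check: positive integers only.
    if value <= 0:
        raise ValueError("out of range")
    return value

def validate_phone(raw):
    # Format check against a strict pattern.
    if not PHONE_RE.fullmatch(raw):
        raise ValueError("bad format")
    return raw
```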

Data type is actually an interesting one. Serialized NS* objects are inherently unsafe to deserialize and should never be trusted from a remote source. Like NIB files (which are themselves serialized objects), they must only be used locally and only come from trusted sources. If a user can feed objects directly into the system and manipulate them, a variety of hard-to-catch security bugs can result.

Another interesting issue is format strings. From the comfortable confines of the NS* Objective-C environment, it barely feels like you are writing code that compiles down to native instructions; drop down to a library like SQLite and that illusion is quickly shattered. Passing attacker-influenced data as a format string, where formatters such as %@ are interpreted, opens the door to a variety of interesting attacks.
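The Objective-C details aside, the general rule is that the format string must never come from the attacker. Python's str.format has an analogous hazard when users supply the template, which makes for a compact illustration (the Account class and its _secret field are contrived for this example):

```python
class Account:
    def __init__(self, secret):
        self._secret = secret  # illustrative sensitive field

acct = Account("hunter2")

# Safe: the program owns the format string; user data is only a value.
safe = "hello, {0}".format("attacker input")

# Dangerous: the *user* supplies the format string. Attribute-access
# syntax lets it walk the object graph and read internal state.
user_template = "{0._secret}"
leaked = user_template.format(acct)
assert leaked == "hunter2"
```

In C-family code the same mistake (printf(user_input) instead of printf("%s", user_input)) is far worse, since it can corrupt memory rather than merely leak it.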

7 – Client-side DOS

This issue is fairly obvious: be a defensive programmer. Denial of service can result from development errors and bad logic. Modern mobile applications do a lot of parsing of formats such as XML and JSON, and defects in these parsers, or in the way they are used, can result in unexpected DoS.
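One defensive pattern is to bound and validate input before it ever reaches the parser. A Python sketch (the size cap and function name are illustrative, not from any particular application):

```python
import json

MAX_PAYLOAD = 64 * 1024  # illustrative cap; tune for your application

def parse_message(raw_bytes):
    # Reject oversized payloads before handing them to the parser,
    # rather than letting a hostile peer exhaust memory or CPU.
    if len(raw_bytes) > MAX_PAYLOAD:
        raise ValueError("payload too large")
    try:
        return json.loads(raw_bytes)
    except json.JSONDecodeError as exc:
        # Malformed input is an expected condition, not a crash.
        raise ValueError("malformed JSON") from exc
```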

8 – Malicious third-party code

Assume the device is jailbroken. Let me say that again: assume your user's device is jailbroken, and act accordingly. Even on legitimate, non-jailbroken devices, interesting problems can arise from the URL/protocol handling mechanism. iOS is not big on inter-process communication, and custom URL schemes are one of the few IPC mechanisms available to iOS developers. Use them with extreme caution: external (malicious) web sites can invoke them, the behavior when multiple applications register the same scheme is undefined, and you cannot stop another application from trying to usurp your scheme.
From iOS 4.2 onward there is a better IPC mechanism: the application:openURL:sourceApplication:annotation: delegate method. If you need to perform IPC, use it. You can even pass real objects instead of hacking them up into a URL-ready format (though remember: trusting serialized objects is a bad idea).

9 – Client-side buffer overflow

Buffer overflows are still quite possible. Since Objective-C is a strict superset of C, there is no limit to the depth of trouble an iOS application can get into through old C programming mistakes. I won't belabor them here. Pay careful attention to your string formatting (NSLog and friends), stick to the NS* class hierarchy whenever possible, keep your raw C code as small as you can, and be extremely cautious with any C string and memory operations.

10 – Failure to apply server-side controls

See: The OWASP Top 10.

Other Thoughts

For some other thoughts on iOS application security please visit our earlier blog post on the topic.




Comments disabled

Hijacking NFC Intents on Android

Posted: May 10, 2011 – 10:15 pm | Author: | Filed under: android, Conferences, Mobile Security, NFC, Phishing, RFID, software security

Google IO had a "How to NFC" session today that demoed and described using NFC on Android. One of the items they pointed out was the desire to use NFC for instant gratification and zero-click interactions. The only stock application on the Nexus S I've seen do this before today was Google Maps, but the hope is that other applications will incorporate this feature as well. In the future, we may see a banking app that launches when the phone is touched to a particular NFC tag carrying an NDEF message, without requiring the user to click anything.

To see how this could work right now on a Nexus S, take a Mifare tag and write to it an NDEF message containing a maps URL. When the device reads the tag, the standard NFC Tags application, which requires user interaction, will NOT be triggered; instead, Google Maps launches automatically. This is done with specialized intent-filters. O'Reilly has been on the NFC ball and has a great write-up and flow chart about how Android figures out what action to take when a new NFC tag/NDEF message is detected. It is well worth the read if you are planning on using NFC tags with your application.

To see how this works, pull the AndroidManifest.xml file out of the Google Maps application on the Nexus S and you'll see a number of URLs registered for the "android.nfc.action.NDEF_DISCOVERED" action. These are intent-filters, which require no special permission and present no prompt of any kind to the user at install time. So what if we wanted to create a competitor to Google's Maps application and register for these same intents? What if this were a banking app and the tags triggered the start of a transaction? Nothing currently stops our app from declaring the same intent-filters, so let's see what that could look like.

We created a quick “Angry Birds New Jersey” application with some special intent filters in the manifest for our presentation at B-Sides Rochester last weekend. When the user installs what appears to be a game application, it will also silently register to receive the same intents which would launch Google Maps. Here’s a sample of the intent-filters for that:

Intent-Filters for NDEF_DISCOVERED

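For illustration, a manifest entry along these lines (the activity name, label, icon, and URL filter are hypothetical reconstructions, not copied from the real Maps manifest) is enough to register for the same tags:

```xml
<activity android:name=".HijackActivity"
          android:label="Google Maps"
          android:icon="@drawable/maps_lookalike">
    <intent-filter>
        <action android:name="android.nfc.action.NDEF_DISCOVERED" />
        <category android:name="android.intent.category.DEFAULT" />
        <!-- Match the same URLs the real Maps application registers for -->
        <data android:scheme="http"
              android:host="maps.google.com" />
    </intent-filter>
</activity>
```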

Now when a user scans an NFC tag with a maps URL, a chooser pops up asking which application should handle the intent. The challenge becomes getting the user to send the information to our application instead of the official one. Intent-filters include two handy settings for this. First, you can customize the "label" that will appear in the popup list: instead of our installed application's normal name, "Angry Birds New Jersey," we can call it "Google Maps." We can also set the icon that will be displayed, so instead of the game icon we can use an image people already associate with Google. If you had to choose between these two apps, which one would you click on?

Google Maps or Maps... which is the real application?

I'm not sure most users would know that the first one on that list was from the bird game we installed and not the official Google Maps application. There may not be much risk in hijacking a maps URL, but it's something I would encourage developers to think about with their own data and tags.

UPDATE 11/18/2011

There is now a way to protect against this when writing your data to NFC tags, if your application is running on Android 4.0 (or, presumably, later). The protection is called the Android Application Record (AAR). See our full post on the feature.



Bug Bounties: Do they work?

Posted: March 9, 2011 – 10:42 am | Author: | Filed under: bugs, Conferences, Security Management, software security, Web Apps

Two years ago at CanSecWest Charlie Miller, Alex Sotirov and Dino Dai Zovi declared there would be no more free bugs. One of the leading philosophies for the “no more free bugs” statement is that an organization paying an individual security researcher legitimizes that research and dramatically changes the organization’s posture on reported bugs. The paying organization is saying, “this has monetary value to us and we will pay you, not attack you, for finding bugs”. The researcher is incentivized because they get money and have a known, legitimate, working relationship with the organization paying the bug bounty. Fast forward to two years later. A lot of discussion has happened regarding bug bounties in the public eye. And a lot of money has been paid for security bugs.

The concept of a bug bounty is not new, and many famous hackers have offered them over the years; Donald Knuth probably runs one of the oldest, and most prestigious, bug bounty programs in existence. Still, someone who writes software offering money, even $1, for a bug remains rare. Two years on from that statement at CanSecWest, Google has a web bug bounty program and a browser bug bounty program, Mozilla has a bug bounty program, and ZDI runs a prominent one of its own (they run Pwn2Own). The bug bounty experiment is running at full steam in the information security community.

Looking at the list of recent rewards on the Google Chrome Releases blog and seeing all the dollar signs next to security bugs makes me happy. I don't feel insulted when I get paid to report bugs; getting Google dollars for hard research work is gratifying. This leads me to conclude that these kinds of programs "work" at a fundamental level. How well they work is a discussion for another time, but if you had $100K to augment your security budget, every dollar of it spent on a bug bounty program would buy a great deal of research for the money.





Discussion: Application Security Debt

Posted: March 5, 2011 – 11:49 am | Author: | Filed under: Mobile Security, SDL, Security Management, software security, Tools

I am going to break a rule of good blogging and straight-away direct my readers to some background material with the promise of a quick summary in this post:

Here is that quick summary. In software development there is a concept of "technical debt": knowingly digging a hole for yourself that you will need to fill in at some point. The concept works well as a rough mental model; every developer knows where the ugly spots in their code are. If you don't refactor aggressively and manage your technical debt, it can spiral out of control, and your "interest rate" is a factor in the equation too. The goal of the linked posts is to hash out a metric for estimating the dollar impact of technical debt as it relates to security. I enjoyed reading them as a different take on relating the cost of software (in)security to project managers and those responsible for budgets. I think this is worthwhile: security often gets ignored because its cost is misunderstood.

The truth is it all comes down to a few simple questions: will my (in)security cost me more than real security will? Can I afford that cost? What can I do about it? I have long been a proponent of collecting metrics throughout the software development process to measure the effectiveness of software security efforts. Models that estimate cost are a great starting point, but without measurement and a real SDL you will never be able to quantify the value of software security and its impact on your organization. The answer is to integrate software security into your software development life cycle. There are compelling arguments that a good SDL program has good ROI; see Microsoft SDL: Return on Investment. From the Microsoft paper:

While tools should be part of the equation and can provide a force multiplier, no product can substitute for secure software development. An effective, structured approach to software security must include people—both experts and the larger development organization—a cultural shift toward security, tools where useful, the security processes to tie activities together, and metrics that allow for understanding and improvement.

That is very eloquently stated. I will offer a couple of closing thoughts. The expense of insecurity can exceed that of the project as a whole, and there are intangible costs to insecurity, such as harm to your reputation. Mobile applications are often small appendages to larger efforts, but the mobile application is also quite often the tip of the iceberg that users actually see and touch. Security vulnerabilities in mobile applications can really hurt.

edit: I wanted to offer one additional observation. Though tools can’t substitute for secure software development, they can open eyes and serve as a catalyst to full SDL adoption.




