Intrepidus Group
Insight

Category Archives: Mobile Security

RIT’s ISTS Mobile Challenge

Posted: March 13, 2014 – 2:55 pm | Author: | Filed under: android, Mobile Security

This year’s Information Security Talent Search (ISTS) proved to be another hilarious and fun experience (at least from the Red Team’s perspective).

Located at RIT in Rochester, New York, ISTS is a student-run competition that focuses as much on the offensive side as the defensive. Unlike many of the other CCDC-style competitions that make teams show up and just defend boxes, ISTS lets competitors gain points by breaking into another team's machines. The standard competition structure is similar to other competitions: the White Team is in charge of setup, scoring, and making the competition run smoothly; Blue Teams are the teams of competitors, which in this case consist of 5 or 6 people from schools all across the country. And then there was the Red Team…

Seeing Red

Jason Ross and I, along with a bunch of our local hacker friends, have been on the Red Team for at least 5 years. This group is dedicated to owning as many boxes as possible and laughing along the way. It's hard not to name-drop when describing Red Team members like Rob Fuller, Raphael Mudge, and Bruce Potter, but when you add them to the other dozen-or-so equally ninja-like members, the Red Team becomes a horrific killing squad. I don't have time to go into the details, but some of the highlights for me were:

  • Overwriting the Linux machines with a Nyan cat boot loader
  • A custom-written crypto locker malware that encrypted important machine files and held them for ransom
  • Making teams sing the theme song from “The Fresh Prince Of Bel-Air” in order to get their domain controller back … and other fun acts
  • Bruce Potter's 11-year-old son calling a team on their VOIP phones, saying "I just hacked you," and hanging up

This is one of the only competitions where the organizers repeatedly ask, "How can we make this more fun for you next year?" In a way, I feel like the competition is really just for the Red Team in the end. Unlike other competitions that let competitors wipe the supplied machines and install something like FreeBSD, ISTS has a rule that says you have to use the VMs that were supplied. There's another rule that says you can't unplug the network cables, which was a common tactic when a team realized it was completely owned and got desperate. This helped keep persistence… persistent.

Mobile Challenge

For a while, I've wanted RIT to have more of a mobile security program. When these kids graduate, you can expect that they'll need at least a basic understanding of mobile security in an enterprise environment: MDM, BYOD, and other mobile-centric buzzwords.

So I proposed last year that Intrepidus get involved to provide a supplemental mobile challenge. The organizers were down and I’m really happy they let me do it.

The scenario I gave competitors was this:

  • Your new CISO wants to move the enterprise into mobile
  • He’s decided to implement a password manager so that members of IT can provide tech support from wherever they are
  • He has tasked you to review four password managers and decide which one will be deployed in the IT group
  • After you’ve decided, one of the members of IT will subsequently lose their mobile device and it will end up in the hands of an attacker

For the competition, that meant competitors needed to reverse engineer four password applications, choose the most secure one, and then enter their team's actual SSH credentials into that application. Then they had to swap their device with another team, giving that team a chance to crack the application and extract the credentials. If you cracked the password, you got 500 points. If you answered my questions, you got another 500 points.

The LastPass … Before You Lose Everything

The password application was designed to be terrible. You could open the application and set up a PIN code that would protect the credentials you subsequently entered. Under the hood, that meant you had three main functions: one to register the PIN, another to collect the credentials, and another to encrypt/decrypt the information. You can check out an abridged version of the code that Roman and I wrote here.

As I explain each of the vulnerabilities, I want to point out that many of them are very easy, not because I thought the competitors wouldn't be able to figure them out, but because there is a lot going on during the competition. Between getting calls on their desk phones, trying to admin services, working on points, and doing push-ups, there's not really enough time to dedicate to reviewing source code.

App 1: A Dict Move

This version of the application was a simple trick. You could have discovered it by looking at the permissions that the application requested, or by reviewing the source code for the UserDict call. When a user first registered their PIN, this value was saved in the personal dictionary of the device.
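
Roughly speaking, the registration code boiled down to something like this (my sketch, not the actual challenge source, using the old UserDictionary.Words.addWord() call):

import android.content.Context;
import android.provider.UserDictionary;

public class PinStore {
    // App 1: the PIN ends up in the device-wide personal dictionary, visible in
    // the Settings UI, and the dictionary access is what shows up in the
    // application's requested permissions.
    public static void registerPin(Context context, String pin) {
        UserDictionary.Words.addWord(context, pin, 250, UserDictionary.Words.LOCALE_TYPE_CURRENT);
    }
}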

Exploit: Go to Settings>Language and Input>Personal Dictionary and you will see the PIN that was entered.

App 2: Missing Tooth

This app made a call to the Bluetooth manager on the device to check the state of the service. But, as competitors may have noticed, there was no Bluetooth radio on the supplied device, and a Logcat error appeared that pointed to a Bluetooth failure. If you looked at the code for that portion, you would notice a simple if statement checking whether Bluetooth exists. If it does exist, the app prints out the PIN code that it saved a copy of in its shared preferences. You could exploit either this fact or the underlying problem, which was that the PIN was stored using a static AES key in the data directory.
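
The broken check looked something like this (an illustrative sketch, not the actual challenge code):

import android.bluetooth.BluetoothAdapter;
import android.util.Log;

public class BluetoothLeak {
    // App 2's "debug" path: if a Bluetooth adapter exists, the stored PIN goes to logcat.
    // The supplied devices had no Bluetooth, so this branch never fired until you
    // patched the check or grabbed the preferences file directly.
    public static void maybeLeakPin(String storedPin) {
        BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter(); // null when the device has no Bluetooth
        if (adapter != null) {
            Log.d("PasswordManager", "PIN is: " + storedPin);
        }
    }
}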

Exploit1: Modify the APK to invert the if statement so it checks that Bluetooth doesn't exist, or just remove the if statement completely and keep the code that prints the PIN.

Exploit2: Access the shared preferences file by rooting the device or finding it in a shared location.

App 3: The 777 PIN

This version of the application merely stored the PIN in a world-readable path. The way you could find this is by looking at the code for references to "MODE_WORLD_READABLE". This was code that accessed a shared preferences file inside the application's data directory and saved it with world-readable permissions. Once you found that, you would notice an entry named "PIN".
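
The offending write looked roughly like this (a sketch built around the deprecated MODE_WORLD_READABLE flag; the preference file name comes from the path in Exploit1 below):

import android.content.Context;
import android.content.SharedPreferences;

public class WorldReadablePinStore {
    // App 3: the PIN lands in a shared_prefs XML file that any app on the device
    // can read -- no root required.
    public static void savePin(Context context, String pin) {
        SharedPreferences prefs = context.getSharedPreferences(
                "com.intrepidusgroup.passwordmanager.preferences", Context.MODE_WORLD_READABLE);
        prefs.edit().putString("PIN", pin).commit();
    }
}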

Exploit1:
cat /data/data/com.intrepidusgroup.passwordmanager6969/shared_prefs/com.intrepidusgroup.passwordmanager.preferences.xml

Exploit2: Root the device and extract the file.

App 4: PBKDF2 Is Hard?

Roman was nice enough to implement PBKDF2 based on the nelenkov blog, and that was usually what was used to store the credentials. If you look at the PBKDF2Helper.java class, you can see how it works, or at least the important part, which is that the PIN is required to encrypt the information. You would normally assume that the PIN is the value supplied by the user, and without the user there to enter it, you couldn't recover the original data.

But if you compared the different applications, you would notice that this app had a very different encryption function. Instead of using the PIN, it checked that the entered PIN matched the one stored in the preferences, and if it did, it used a static value of "Intrepidus" to encrypt and decrypt the credentials. That means the PIN merely gates opening the app; it doesn't secure the content. You can crack this in a few ways:
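
In rough terms, the flawed unlock path looked like this (a sketch, not the actual PBKDF2Helper code; the salt, IV, iteration count, and key size here are assumptions):

import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class StaticKeyUnlock {
    private static final String STATIC_SECRET = "Intrepidus";

    // The entered PIN only gates the UI check; the encryption key never depends on it.
    public static String unlock(String enteredPin, String storedPin,
                                byte[] salt, byte[] iv, byte[] ciphertext) throws Exception {
        if (!enteredPin.equals(storedPin)) {
            throw new SecurityException("wrong PIN");
        }
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        PBEKeySpec spec = new PBEKeySpec(STATIC_SECRET.toCharArray(), salt, 1000, 256);
        SecretKey key = new SecretKeySpec(factory.generateSecret(spec).getEncoded(), "AES");
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
        return new String(cipher.doFinal(ciphertext), "UTF-8");
    }
}
Anyone who can run the same derivation with "Intrepidus" never needs the PIN at all, which is exactly what the two exploits below do.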

Exploit1: Implement PBKDF2 in the same way that the helper class runs and supply it a password of “Intrepidus” to decrypt the contents found in LogCat

Exploit2: Just call the activity directly using an intent with the extra of “Intrepidus”
adb shell am start -n com.intrepidusgroup.passwordmanager9001/.MainPasswordSafeActivity -e EXTRA_SAFE_PASSWORD "Intrepidus"

Results

Out of the 12 teams, 10 thought they would be up for the challenge. From there, 7 teams turned in a solution, and 4 teams were able to exploit the system to get another team's credentials.


And one team broke a tablet. Come on, Team 6! :)

I'm hoping to be lucky enough to be involved with ISTS next year. It has been a great opportunity in the past and is what keeps me coming back every year. I'm already coming up with some new ideas in case I can do a mobile challenge next year. My end goal, though, is to get more RIT kids into mobile so they can write some crazy stuff themselves.


A look at Snapchat client-side controls

Posted: February 5, 2014 – 2:08 pm | Author: | Filed under: iOS, jailbroken, Mobile Security

Snapchat seems to be in the news lately for all the wrong reasons. For the uninitiated, Snapchat lets you send time-bound images to other users; once an image has been viewed for a short period of time, it self-destructs and becomes unavailable. The application also informs the sender if the receiver takes a screenshot of the picture, so you'd know if someone tried to save the picture you sent. In a recent internal training session on iOS application modification, someone suggested the Snapchat iOS app as a good target to play around with, giving rise to Snapstore, an iOS tweak for jailbroken iPhones.

tl;dr: An iOS tweak to save Snapchat images to persistent storage, disable screenshot notifications, and never expire images.

For this exercise, we used the Theos framework to write a MobileSubstrate based tweak. There are plenty of tutorials out there on how to write a tweak; I’d encourage readers to check them out.

We didn't look at the network traffic of the application, since the point was writing a tweak. A bit of investigation using class-dump-z, Andrea's excellent Snoop-it tool, and educated guesses revealed quite a bit of information about the application's internal workings. Snapchat downloads a user's images and stores them temporarily, along with associated metadata, in a class called SCMediaCache. The image is then encrypted using the CCCryptor API, with keys and IVs generated using SecRandomCopyBytes, and stored in <AppFolder>/Library/Caches/SCMediaCache. When a user opens an image, the image is decrypted and a countdown timer is started. Once the timer runs out, the application deletes the image and associated metadata, and informs the server.

Saving images

The first goal was to save unencrypted images to persistent storage. We can hook and override the application at various points; we need a handle to the image before encryption or after decryption, and flush that to disk. We chose to hook on to the encrypt function of the SCMediaCache class, which caches the image received over the network, and save it to the application’s Documents folder. An alternative would be to grab the image after decryption, or dump the encryption keys and IV to disk and decrypt the image offline. Bonus: the receiver doesn’t have to open the image to save it unencrypted to the Documents folder.

Disable image timeout

For every image object opened, the application creates an NSTimer instance and counts down to 0 after an image is opened. There were many ways to prevent the count down from hitting zero; we opted to “do nothing” on every tick of the timer. The counter never counts down, and the image never expires. Bonus: the sender is not notified that the image was opened. I believe the application notifies the server that an image has been seen after its viewing period expires; since that never happens, no notifications are sent.

Disable screenshot notifications

I assumed here that the application registers for the UIApplicationUserDidTakeScreenshotNotification notification, and implements a callback to notify the server when this event is raised. Looking through the class dump of the application, the likely implementation was userDidTakeScreenshot() – overriding this method to “do nothing” effectively disables the screenshot notification.

So.. Snapchat is broken?

Not really – or rather, it suffers from the same problems that plague all applications relying on client-side security. The snap exists unencrypted at some point in the application – if you can hook on to it, you can access it. On a jailbroken phone, there are no guarantees of security, and frameworks like Theos make it really easy to write runtime modifications for applications. Could Snapchat have protected against these attacks? Maybe. There are certain things they could have done to raise the technical bar for exploitation; for example, if the class and method names had been obfuscated, I would have had a very different reaction to the exercise. Unfortunately, there aren't many open-source obfuscators for Objective-C/iOS the way ProGuard exists for Android.

Credits to Andrew for the original idea, and Nitin for sending test images.


iOS 7 and Mavericks: New feature roundup from a security perspective

Posted: June 11, 2013 – 10:46 am | Author: | Filed under: iOS, Mobile Security, Security Management, Uncategorized

Yesterday Apple unveiled the latest versions of OS X (code-named Mavericks) and iOS 7 at the annual Worldwide Developers Conference (WWDC). The general focus was on end-user features and items of interest to developers, but several items appeared to have an impact on security in one way or another.

The beta versions of both operating systems were also released to developers yesterday, but I haven't seen them yet (and once I do, I'd probably be bound by NDA not to talk much about them). So before I go that route (hopefully later this week!), I thought it would be useful to quickly review some of the items I found potentially significant. I'll briefly describe the features, then summarize some of the security questions I have at the end. Also, whenever I talk about "Early Reports," I'm referring to information not specifically announced by Apple, but which has leaked through screenshots and other reports.

OS X “Mavericks”

Though my focus at Intrepidus has generally been on iOS, I do use OS X on a daily basis, and a few items here seemed worthy of mention (plus, they also pertain to iOS).

  • Passwords in the Cloud — a secure vault, stored on iCloud, for website logins, credit card numbers, wi-fi passwords, etc. This was cited as using AES-256 encryption, pushed to trusted devices. When used within Safari, it can even auto-suggest random, secure passwords as you create web-based accounts.
  • Notifications in the lock screen — when the computer is locked or asleep, notifications (including push notifications) can queue up, and will be displayed to the user the next time they wake up the computer, while the screen is still locked.
  • The map application can send directions to an iPhone, but how this works wasn’t explained. My speculation is it’s an iCloud document, just like you can send Passbook passes from Safari directly to your iOS devices.

iOS 7

This was the big change. So big, they repeatedly referred to it as “the biggest change to iOS since the introduction of the iPhone.” Clearly, there have been big changes in the interface design, but also several new features were introduced as well.

  • AirDrop — iOS devices can now share information directly with nearby friends over peer-to-peer Wi-Fi. This was introduced in OS X Lion, and doesn’t require actually being on the same Wi-Fi network.
  • Notification center on lock screen — similar to the new feature in Mavericks
  • Control Center — provides an easy way to toggle features like Wi-Fi, Airplane mode, and Do Not Disturb, by simply swiping up from the bottom of the screen. This also allows quick access to four applications: Flashlight, Timers, Calculator, and Camera.
  • Better multitasking — applications may now actually remain in the background, with the operating system using some careful monitoring and management to reduce the cycles they use to the bare minimum. This also provides a facility called “push trigger,” where an application in the background can actually immediately act on data received in a push notification.
  • Safari: iCloud keychain and parental controls — I don’t have any idea what the parental controls would do, but if it provides a way to blacklist and/or whitelist websites, this could be somewhat useful in corporate settings. And, of course, the iCloud keychain (described above for Mavericks) is a major new feature.
  • App store automatic updates — this is a good/bad thing, in my mind. People certainly want to stop having to do big updates of many apps every week or two…but sometimes a new version of an app may be buggy, and users might not want to upgrade immediately. Also, corporations may want to review apps before they’re updated, to ensure that new features don’t change the risk profile the app poses to their enterprise.
  • Activation Lock — this new feature allows a user to configure an iOS device such that if it’s been remotely wiped (because it was lost or stolen), then the device cannot be re-activated until the original iCloud credentials are entered. This should provide some additional deterrence against theft, at least, once the feature becomes widespread and well understood.

These keynotes always focus on only a few features, and there are always several dozen other features that don’t get described in detail. In this case, two screens full of features were shown during the keynote, including several that appear to have relevance to security or corporate users:

  • Enterprise single sign on — definitely interesting
  • Per-app VPNs — would be very interesting if each app could be assigned to an arbitrary VPN
  • Streamline MDM enrollment — no idea what this could mean, since (for the end user) it’s already pretty simple
  • App store volume purchase — this has been a complicated endeavor since it was first introduced, so changes here could be significant
  • Managed app configuration — this might be similar to application profiles in the OS X profile manager (which are an outgrowth of the old MCX system in pre-Lion OS X)
  • Scan to acquire passbook passes — probably built-in QR scanner
  • iBeacons — Low Energy Bluetooth location
  • Automatic configuration — possibly the aforementioned app configuration
  • Barcode scanning — may confirm the passbook assumption
  • Data protection by default — finally, all apps may have the additional “encrypted when device is locked” protection

Finally, some interesting bits have already been seen in screenshots on the web:

  • Integration of Vimeo and Flickr accounts for share sheets (similar to existing Twitter and Facebook integration)
  • Separate iCloud security panel, including integrated two-factor authentication, a separate passcode for the iCloud keychain, and a toggle for “Keychain Recovery” subtitled “Restore passwords if you lose all your devices.”

Outstanding Questions

  • How are passwords in the cloud stored, and does anyone else have access to the data (for example, if you forget your key)?
  • Can we control what notifications appear on the lock screen? For example, allow Twitter, but disallow mail, while allowing both Twitter and email when the device is unlocked?
  • Does AirDrop on iOS introduce any new problems? Can strangers try to push data to you while in public, even if you’re not logged into a public Wi-Fi? Could that lead to a phishing vector (for example, sharing a malicious configuration profile over AirDrop)?
  • Can you change the applications available for quick-launch in the Control Center? Early reports indicate that the Control Center may be enabled for use in the lock screen, and if so, how does that affect apps which encrypt their data?
  • How much can an application do when woken up by a push trigger? Could an attacker in control of a malicious app and its push server remotely enable the device microphone, for example? Can this be done while the device is locked?
  • Can automatic app updates be configured, for example, to wait a week after release prior to being applied? Can the feature be disabled altogether? Or better yet, can certain apps be flagged for manual updating only?
  • For activation lock, can the remote geolocation and messaging features of Find My iPhone remain intact even after the device was wiped? Currently, users are faced with a tough choice, whether to wipe the device and give up any chance of locating it again, or leave it trackable, and able to receive messages, but at risk of someone extracting sensitive information from it. It’d be nice if one could wipe the device, but still be able to try to track it down and send “If found, please call me for a reward” messages to the finder.

All in all, there appears to be a great deal of change coming in both OS X, and especially, iOS. This summer will keep us busy exploring all the new features and their security implications, and hopefully the final release will prove to be an improvement in many areas.


Tizen Security

Posted: June 10, 2013 – 10:54 am | Author: | Filed under: Mobile Security, Tizen

There is some intense competition between up-and-coming mobile platforms aiming to take away some of Android's market share. We have Tizen, Firefox OS, Ubuntu Touch, Sailfish OS, and others, but to me these are the big players. Which do you think is going to make a substantial splash? Any?

Firefox OS has the backing of the Mozilla Foundation and big players like Facebook, Ubuntu Touch has Canonical, Sailfish OS has…the Sailfish Alliance? But if we’re betting on which of these will most likely take off, Tizen has the support of Intel (with McAfee), Samsung, SK Telecom, Vodafone, Huawei, and many others. There are even promises that Samsung will release a high-end Tizen based device in August of this year. So the question is, what’s a “tizen,” and why should I care?


Tizen TL;DR

Tizen is a W3C standards-compliant, HTML5-based platform running on top of Linux; its applications (called "widgets" and "web apps") are developed in HTML5 and JavaScript. (Native applications are also supported, developed in C, and mostly aimed at game developers.) Applications make feature requests to privileged APIs using JavaScript, and those APIs enforce access controls over features like contacts, NFC, or the camera.

History

Before I go into security, I just want to point out where Tizen has come from, in case you've worked on one of its predecessors before. In 2010, Nokia and Intel announced MeeGo, a Linux-based, web-centric mobile platform. Nokia eventually dropped out of the project and decided to focus on Windows Phone. In 2011, Intel decided to kill MeeGo and, with the support of the Linux Foundation, make its own platform called "Tizen". The Tizen 1.0 SDK was released in 2012, along with a specific flavor designed for vehicle dashboards called Tizen IVI. Last month, Tizen released version 2.1 of its platform at its conference in San Francisco, along with the announcement of a lot of new supporters it has snowballed along the way.

Security Model

Tizen has a sandbox model similar to its competitors', wherein each application runs segregated from other applications. What's different in Tizen is how those apps are segregated and which component is responsible for enforcing that sandbox. Each application runs as an instance of the Tizen Web Runtime (WRT), and a Linux kernel security module ("Smack") controls processes and their interactions with the rest of the operating system based on a set of rules. WRT will be covered in more detail in a later post; for the sake of this post, the WRT functions similarly to Dalvik VM instances in Android.

Smack

Smack is where Tizen can potentially make its mark as an innovative, modern mobile OS, and this feature sets the platform apart from other mobile OSes right now. If you've never heard of Smack before (like me), you can think of it as a simplified competitor to SELinux, which I expect you actually have heard of (hint: it's enabled but not enforced on the Samsung Galaxy S4). Unlike SELinux, which can have insanely long rule sets that control how a process interacts with the system, Smack is designed for simplicity.

One of its design metaphors is "Smack Labels," which take an object like a process or a file system location and designate it with an identifier. At runtime, when an application needs to interact with other objects in the system, those labels are checked to see if Label A is allowed to interact with Label B; a rule is essentially just a subject label, an object label, and an access string (something like "App1 App2 rw"). Every app is given its own Smack Label (similar to an Android UID and GID), and Tizen uses these labels to control how apps, APIs, device functions, and just about every sandboxed application on the device interact with one another.

Content Security Framework

There's a lot of stuff to dive into in the Tizen OS, but I wanted to highlight Tizen's new Content Security Framework (CSF), introduced in 2.1. If you've worked with Android and understand why malware continues to be a threat, you'll know that one of the problems with anti-malware solutions for that platform is that they can't gain the privileges necessary to properly protect a device. On Tizen, Intel and McAfee have provided a solution to this type of problem: the Content Security Framework, a security engine built into the Tizen environment that actively looks for malicious activity. This engine gives developers the ability to hook into the CSF API and scan applications, device content, and presumably even lower-level device functions, letting them develop a more empowered malware protection application.

Besides scanning for malware or malicious content, applications which make use of the CSF can also scan URLs used by an application, categorize them, or report back on a domain’s reputation. You might see where this is going — all of this has the aim of letting device administrators set policies of what a device is or isn’t allowed to see, and enforce those policies at a low level.

Had Enough?

There's too much information to really go through in depth, so I'm just going to cheat and give you some hints for further reading (and an outline for future blog posts). Most of the research right now is based on theories of how an actual Tizen device will be launched and how secure it will be. Hope to talk more about this soon. :)

  • Tizen applications must be signed with 2 signatures – the author's and the distributor's. The distributor is the marketplace in which the developer is publishing their application.
  • Apps can be encrypted and are dynamically decrypted by the WRT instance of that application (as opposed to on boot).
  • The Tizen SDK is similar to Android's: device access uses something called "SDB" (the equivalent of ADB) to access the device filesystem and provide debugging functionality.
  • ASLR is fully implemented (at least in the emulator).
  • A widget has the ability to set which domains an application is allowed to access (in the form of a whitelist). Developers can even get as specific as setting sub-domains and what types of calls are allowed to be sent to each subdomain.
  • The Secure Logging function offers the same control as Android’s BuildConfig.DEBUG value. When an app is packaged for production, logging is automatically removed.
  • JavaScript renders inside of widgets and web-apps making XSS vectors very juicy.
  • Zypper is used as a built-in package management system, letting users install packages like SSH, telnet, and Apache.

 


Securing Mobile Hotspots, Part 1

Posted: May 7, 2013 – 1:19 pm | Author: and | Filed under: Mobile Device Management, Mobile Security, OWASP, Wireless

Mobile hotspots are awesome. They allow the user to connect any WiFi-enabled device to a high-speed 4G network. Anywhere. Maybe that’s one reason we see so many hitting the shelves. A significant number of these devices are equipped with advanced capabilities, such as media sharing or location-based services. But even without these capabilities, mobile hotspots are a tinkerer’s dream. It’s a WiFi radio, cellular (3G and 4G) radio, embedded OS, and web server, all rolled into one sweet package. So much to look at! Without pointing out particular vulnerabilities that we’ve found, we’re going to cover several weak points that we see across the board in these devices and provide some advice for testers and developers.

Weak admin controls
While a router's webapp isn't a fully internet-exposed attack surface, it isn't ideal if sharing your hotspot with someone you think you can trust leads to a total compromise of your data. Most hotspots have an admin password — we like to see a different password for the admin interface and for WPA. Otherwise, what's the point of an admin interface? <oprah>You get admin, and you get admin, and….</oprah> Once the passwords are different, it shouldn't be easy to bypass the password prompt altogether. Here is a great example: researcher Dustin Schultz found that an unprivileged user could access the WiFi password and administrative settings by adding a '/' to the end of any URL. We've seen this take many forms, allowing anything from faking the admin cookie to disclosure of the actual admin password.

Common web-app vulns
The router’s web interface is private, right? Right? Unfortunately, Cross-Site Request Forgery (CSRF) attacks can originate from outside the network, and ultimately send data on your behalf. If you are logged in as the administrative user, CSRF can be used to change access point security, administrative passwords, or execute denial-of-service (DoS) attacks. CSRF attacks have been well-documented, and the OWASP site has plenty of examples and remediations, the most basic of which is to include anti-CSRF tokens with every request. This allows the webserver to verify that the request is coming from the actual page, not from a pre-crafted static request which is embedded in a website.
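
As a generic illustration of that token dance (these hotspots obviously aren't running Java servlets; treat this as a sketch of the pattern, not any vendor's firmware code):

import java.security.SecureRandom;
import java.util.Base64;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class CsrfGuard {
    private static final SecureRandom RNG = new SecureRandom();

    // When rendering a settings form: mint a token, stash it in the session,
    // and embed it as a hidden field in the page.
    public static String issueToken(HttpSession session) {
        byte[] raw = new byte[32];
        RNG.nextBytes(raw);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
        session.setAttribute("csrf_token", token);
        return token;
    }

    // On every state-changing request: reject it unless the submitted token
    // matches the one issued with the form.
    public static boolean isValid(HttpServletRequest req) {
        Object expected = req.getSession().getAttribute("csrf_token");
        String submitted = req.getParameter("csrf_token");
        return expected != null && expected.equals(submitted);
    }
}
Because the token is unpredictable and tied to the session, a static request embedded in an attacker's page can't know it in advance, and the forged request fails the check.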

WiFi Protected Setup
Many modern routers STILL have WiFi Protected Setup (WPS) enabled by default. The purpose of WPS is to allow users to connect to the access point using an 8-digit WPS PIN instead of a WPA(2) network key. Unfortunately, a very public design flaw in the protocol allows the PIN to be brute forced, because it is sent and verified in two 4-digit halves, which cuts the search space down to roughly 11,000 guesses. A successful brute force results in the router giving out the WPA key for the network. Lucky for us, some routers do have an option to disable WPS. Unlucky for us, some of those routers respond to WPS protocol requests even after WPS has been disabled. To mitigate the problem, OEMs should make sure that users have an option to disable WPS and that the device does not respond to requests after it's disabled.

We’ll be publishing some more issues next week, including command injection, UPnP, and DNS rebinding!

Cheers,
Max and Rohan


iOS Configuration Profile Ransomware

Posted: April 11, 2013 – 11:40 am | Author: | Filed under: iOS, Mobile Security, Phishing

A couple of months ago, at ShmooCon 2013, Tim Medin gave a great short talk titled “Apple iOS Certificate Tomfoolery.” One of the most interesting ideas I took away from this talk was the idea of ransomware delivered through a configuration profile. Briefly, configuration profiles can be used to control many aspects of an iOS device’s configuration. They can enable features, disable features, and even hide applications from the user.

This is the tricky bit: Create a configuration profile that disables Safari, disables installation of applications, even disables iCloud backups, and adds a “READ ME” web page to the user’s home screen. Put a password on the profile, so the user has to enter the password in order to remove it. Now, you just need to convince the user to install the profile, and you can do that simply through email or SMS phishing. Once they install it, half their expected functionality suddenly goes away, and if they tap on the “READ ME” page, they’ll see the instructions as to how to pay ransom to receive the password to remove the profile. Win! (well, not for the user).

Now, fortunately, there are a couple of flags that (might) alert the user that something odd is happening. First, in the initial profile installation screen, is the list of contents, which includes “Profile Removal Password.” Similarly, tapping on “More Details” clarifies that this is a locked profile. Of course, if the email introducing the profile was written well enough, then the user might already expect and accept this. Hopefully we can train them not to. Also, if the user has a passcode on their device, then they have to enter their passcode as well, so it won’t simply install without the user noticing.


But what if they ignore all the warnings, and install the profile anyway? Well, all might not yet be lost. Turns out, the removal password is included in the profile, in plaintext. The attacker could choose to encrypt the profile, but to do that they need a public key from the target device, which might not be so easily acquired. So, assuming the profile is not encrypted, just pull down the .mobileconfig file from the original phishing email, open it up, and find the password.

[screenshot: the removal password ("SooperSekrit") shown in plaintext inside the .mobileconfig]
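
If memory serves (treat the exact keys as an assumption and check Apple's Configuration Profile Reference), the relevant chunk of the unencrypted .mobileconfig looks something like this, with the example password sitting in the clear:

<dict>
    <key>PayloadType</key>
    <string>com.apple.profileRemovalPassword</string>
    <key>PayloadVersion</key>
    <integer>1</integer>
    <key>RemovalPassword</key>
    <string>SooperSekrit</string>
</dict>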

Of course, the attacker could get really tricky, and serve up a file with a different password each time, placing some kind of key into the ransom notice (“Pay me $35 to remove this profile. Use the word ‘ostrich’ when you send me your bitcoins”) and then that key would be used to derive the actual removal password. If this is the case, then each time you hit the page you’d get something different, and so you wouldn’t be able to recover the correct password. In that case, the only real way to remove it is either to pay the ransom, or, if the device is jailbroken, get in and remove the profile directly from the filesystem.

In iOS 6.x, a new feature was introduced that can prevent the user from installing profiles. This feature is only available in Supervised Mode (via the Configurator application), however, and so isn’t of much use to the general population.

Want to hear more about configuration profiles and keeping your iOS devices secure? Come to my talk at SOURCE Boston next Thursday!


APKTool, make me a logcat sandwich

Posted: March 8, 2013 – 2:57 pm | Author: | Filed under: android, Mobile Device Management, Mobile Security, Reverse Engineering, Tools

I recently turned a few friends on to Zed Shaw's "Learn Python the Hard Way" course, and it reminded me how bad of a programmer I can be. In fact, I'm that guy who litters his code with print statements. So it's probably no shock that a lot of the time, when I'm trying to figure out what's going on in an Android app we're reversing, I'll want to drop in some print statements. I used to do this by adding a few lines of smali directly into a class file, but there were a few things I needed to deal with for that to work how I wanted. For example, here is what the default "debug" log call looks like in smali.
invoke-static {v0, v1}, Landroid/util/Log;->d(Ljava/lang/String;Ljava/lang/String;)I
If you were going to drop this line into the code somewhere, you would need to make sure both v0 and v1 are Strings. I would typically want "v1" to be the string I wanted logged out, and "v0" (in this example) to be the log "Tag" value so I knew where I was in the code when it was dumped to the log (I may have a dozen or so values getting logged out, so this helps to keep things straight when you see them in the logs). Setting up this Tag string and not stomping on things sometimes meant I needed to increase the local variable count, add some more lines for setting the string, and then make sure I got the registers/variables correct in that previous logging line. This worked alright if it wasn't too late in the night or I had enough caffeine in me, but I typically would screw something up and would end up recompiling a bunch of times. I wanted an easier way, and something that could deal with logging out things that weren't already strings.

Thus I created this simple class file I can drop into the root of any application (yes, this is not as good as a real debugger using JDWP, but sometimes doing things quick and dirty gets the job done quicker for me). I wanted to stay with the Android log utility syntax, but simplified a few things. I overloaded the logging object's "d" method so that it could take just about any variable type I was dealing with. One handy example of this is byte arrays (which is often what we find decryption keys stored in). The wrapper in IGLogger will convert the byte array into a hex string and dump that to the logs. All you need to add is one statement to the code. If "v0" contained a byte array we wanted printed out, just drop in this line of code.
invoke-static {v0}, Liglogger;->d([B)I
Since "iglogger.smali" is in the root of the recompiled APK, we can statically invoke it from any other class in the project. In this case, we need to tell the "d" method that v0 is a byte array ("[B"), and, sticking with the standard Android logging utility class, we're returning an Integer (although I've thought about just making that a Void… I never check it). You may notice we're not passing a log Tag variable with this statement. IGLogger supports that if you want, but we've added a trick to IGLogger that I find works pretty well. In IGLogger, we'll create a new Throwable object, call its getStackTrace method to find out the last class and method we were in, and put that in our log Tag. If the APK is not obfuscated, this will even include a line number. This same trick allows a very simple "hey, I got here and this is how" stack trace to be dumped by placing this one line of code anywhere.
invoke-static {}, Liglogger;->d()I
You might have heard that a lot of us here are fans of Virtuous Ten Studio for working with smali. I have a bunch of these IGLogger print statements in Extras->Smali->CodeSnippets. Makes it really simple to just click and drop in a log statement.

But that wasn't good enough for Niko here when we had a massively huge app that was obfuscated. He talked me into automating the process of logging out each class and method that was entered, so we could watch the logs and know which code paths were being taken. I ended up rolling this into a Python script I had written to "fix strings" in decompiled Android apps. You are probably aware that proper Android apps will have their strings placed into XML files so that it's easier to internationalize the application. While this might be nice for developers, it means that when we're reversing an application, we may end up with some strange hex value instead of a readable string. "FixStrings.py" would loop through the decompiled code and add these strings back in as a comment tag whenever they showed up in the smali code. Your mileage may vary with how well this works, but in some apps, it helped us find things easier.

Adding on to that code base, I started to include some code to automatically add IGLogger statements around things I thought could be interesting. This includes a log statement after the “prologue” of any method. Also, any time we see two strings being compared, we’ll log both strings (this is always fun for watching a password being checked or when the app pulls up device info to see if it’s running on the right hardware). We plan to add a few more things for dumping Intent messages and URLs, but this is a start for now.

This of course will make the app run hella slow, fill up logcat, and in some cases break the application. I've tried to avoid that last one as best I can for now, but it is possible this script will massacre an APK so badly it will be unrunnable. If you run into that issue, you can turn off the lines that add these automatic logging statements to the code (i.e., JonestownThisAPK = False).

The last thing we added to the Python script was some searches to pull out info we may find interesting when assessing an APK file. We dump this into a file called “apk-ig-info.txt” and review it after decompiling the APK. Again, this is something we’re continuing to refine. You can find the code on the Intrepidus Group github repo:

https://github.com/intrepidusgroup/IGLogger

https://github.com/intrepidusgroup/APKSmash

 


Armor for Your Android Apps – ShmooCon follow-up

Posted: February 27, 2013 – 1:26 pm | Author: | Filed under: android, Conferences, Fun and Games, Mobile Security, Tools

Hopefully, everyone’s already decompressed from all the Shmoocon partying by now. I wanted to follow up on the IG Learner app that I presented during my “Armor for your Android Apps” talk and give out a couple of tips on how to approach cracking the challenges (which aren’t all that hard, really).
Before I dive into the meat of the lessons, I just wanted to point out that if you didn’t attend the conference but still want the app, you can get it from the Play Store:

https://play.google.com/store/apps/details?id=com.intrepidusgroup.learner

[QR code linking to the Play Store page above]

So, you’ve got everything installed and running. At this point you have two options – take the easiest way and hit the walkthrough or try to dig through the lessons yourself. I intended for the walkthrough to serve as a helper thing, but if you’d like to just use it to run through the whole thing, sure, that’s an option, too. The link to the walkthrough is provided at the end of this post.

If you want to do it yourself but are not sure where to start, here are a few general tips:

1. You will end up using Android SDK / Android monitor (monitor.bat) very heavily. I am guessing that by now you have that installed on your system anyway.

2. Use dex2jar (http://code.google.com/p/dex2jar/) to convert APK’s Dalvik executables (*.dex) into their Java representation – since the code is not obfuscated, this will really help you understand the logic of the lessons.

3. Apktool (http://code.google.com/p/android-apktool/) – this command-line utility lets you decompile APKs and recompile them back. You'll definitely need this on a few occasions.

4. Jarsigner – comes with Java SDK, is necessary to install an app on an Android device. Read here about signing of APKs: http://developer.android.com/tools/publishing/app-signing.html

5. Virtuous Ten Studio (http://www.virtuous-ten-studio.com/) – Smali IDE, complete with syntax highlighting / automatic signing / APK upload. Awesomeness redefined. If you want to bypass 3) and 4) and not have to deal with them, go the VTS way. That said, I'd still recommend familiarizing yourself with the command-line versions of the tools – just so that you understand better what's happening behind the scenes.

6. Some knowledge of Java is definitely helpful for quick completion of challenges.

7. “adb shell pm list packages” gets you the list of packages installed on the phone. IG Learner is one of them.

Now, let’s go to some specific tips per lesson:

1. Lesson 1. This one is pretty self-explanatory. If you start the Android monitor and look at the log output,  you’ll see the answer to the challenge. Easy as that.

2. Lesson 2. Convert the APK into Java and try to figure out the filename that’s being created. Another hint: default directory for Android app file storage is /data/data/<packagename>/files.

3. You can figure out what the URI scheme is just by looking at the lesson screen and requesting a URI. Now try to look through decompiled code (Either Smali or the Java representation) to figure out what the lesson is expecting. Also, pay attention to extra activities in the app.

4. You should use a local proxy to intercept application traffic (Burp Suite, maybe?). Keep in mind that you can't man-in-the-middle SSL traffic unless the device can verify the SSL certificate presented to it, and for that (at least for Lesson 4) you need to update your trusted CA store with the signing certificate of your local proxy. Once you export that certificate (there are multiple ways to do it; using Internet Explorer's certificate export wizard is one of them), you should be able to import it into the trusted CA store by placing it in the root of /sdcard and importing it through Android's Trusted Credentials menu.

5. This lesson is a bit trickier. For one of the ways to solve this, I suggest looking through the Smali code and finding the pin for the https://www.intrepidusgroup.com SSL certificate, which you can get by running Moxie Marlinspike's pin.py script on our certificate. Then you can replace this with your own intercepting proxy certificate's pin, recompile the app, and push it back to the phone. You're good to go.

6. Hard-coded keys are awful. Seriously. When you're playing around with symmetric encryption and trying to find the correct value of the encrypted string, make sure you convert it to Base64 for readable output. The logging facilities are there to help you.
The encryption can be done in less than 10 lines of Java code (there's a rough sketch after this list). If you're struggling with that, check out our GitHub repository for a helper Java class.

7. Content providers are advertised in the Manifest. Mercury (http://labs.mwrinfosecurity.com/tools/2012/03/16/mercury/) is a great framework that lets you easily query those providers. This should be enough to successfully complete the challenge.

8. I'd recommend starting with decompilation of the app and looking at the Lesson8Activity. This may give you an idea of what the Intent handler is expecting. From there you can either download the Lesson8Aux app from the Play Store (https://play.google.com/store/apps/details?id=com.intrepidusgroup.lesson8auxapp), decompile it, and modify it to throw the correct Intent at the application, or just use the "am" command to do the same. Whichever is easier for you is fine, but I recommend going the auxiliary app route just to get some more practice decompiling and recompiling Smali code.
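
For Lesson 6, a rough sketch of the encryption step might look like this (the class name and hard-coded key parameter are placeholders; pull the real key and cipher mode out of the decompiled app):

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import android.util.Base64;

public class Lesson6Sketch {
    // Encrypt the challenge string with the app's hard-coded AES key and return a
    // Base64 version so it's readable in the Android monitor / logcat.
    public static String encrypt(String plaintext, byte[] hardcodedKey) throws Exception {
        SecretKeySpec key = new SecretKeySpec(hardcodedKey, "AES");
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding"); // match whatever mode the app really uses
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ciphertext = cipher.doFinal(plaintext.getBytes("UTF-8"));
        return Base64.encodeToString(ciphertext, Base64.NO_WRAP);
    }
}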

Oh, and yeah, the walkthrough (Huge thanks to our intern Nitin for putting it together!). Here it is:

walkthrough

 


Evading evasi0n: iOS 6 Jailbreak Prevention

Posted: February 5, 2013 – 4:06 pm | Author: | Filed under: iOS, jailbreak, Mobile Device Management, Mobile Security

The latest iOS jailbreak was released yesterday. Called “evasi0n,” it can be used to bypass most all protections in iOS 6.1 on any device that supports it. It’s quite cool, and was certainly something I was looking forward to (since much of my work is greatly aided by working on a jailbroken device).

However, another part of my work is ensuring that our customers’ devices are as secure as they can be. And having an available jailbreak kind of weakens those assurances. So it might be useful to find a way to prevent the jailbreak from working.

And, it turns out, there might be such a way. At least, until the jailbreak team finds a workaround for the workaround.

Last March, Apple released the Configurator application. Using this application, iOS devices can be put into a "Supervised" mode, which strongly locks down many features. One of these features is the ability to connect to iTunes and do backups/restores. On a supervised device, this functionality is possible only from the machine designated as the device's supervisor.

The evasi0n jailbreak just happens to depend on the iOS backup / restore channel. So much so, in fact, that this is what I got when I tried to jailbreak a supervised device:

Could the evasi0n authors work around this? Possibly. It depends on how deep the supervised mode controls are embedded within iOS. If the device requires a unique host key (from the supervising machine) in order to restore data to the iPad, then it could well be impossible to make evasi0n work on anything other than the actual supervising host.

Of course, putting a device in supervised mode isn’t for the faint of heart — it’s a major shift in how one configures and manages iOS devices. So this probably won’t be a “Jailbreak Stopper” for every major organization out there with large pools of iPads. But it might provide some additional comfort in small groups, like iPads checked out to executives, etc.

But couldn’t a user just remove the device from supervision? Yes, but that’s harder than it sounds. “Erase all Settings” won’t do it, and even “Erase all content and settings” (essentially, “wipe the device with extreme prejudice”) won’t kill the supervisory link. To make a device unsupervised, you need to connect it back to the supervising machine, and sever the link within Configurator. You should also be able to do it in iTunes by doing a full OS restore. In either case, however, all data on the device is wiped, so anything installed while in supervised mode would be lost prior to the jailbreak.

Bottom line: if your organization has iOS devices with sensitive information, and you’re concerned that this jailbreak might put data at risk, it might be worth checking out Configurator and putting some of your devices under supervised control.

[shameless plug: I'll be talking a little about this, and other ways to protect your data on iOS, at ShmooCon.]

UPDATE: Further thought should've made it obvious to me that forcing encrypted backups would have the same effect, and this is borne out in some simple testing. Of course, the user can't be permitted to remove the encryption, so it needs to be forced through a configuration profile, preferably one that can't be removed. If this setting is implemented through Mobile Device Management, then the user could remove the device from MDM, disable encryption, jailbreak the device, and then re-enroll in MDM. So not entirely foolproof, but perhaps a more practical approach than shifting everyone to supervised mode.


Unlocking NFC deadbolts with Androids

Posted: September 26, 2012 – 6:42 pm | Author: | Filed under: android, Conferences, Mobile Security, NFC, RFID

Program the EZon NFC lock to work with your Nexus

At Shmoocon this last year, there was a vendor who caught my eye with the Samsung SHS-3121 Digital Keypad Keyless Deadbolt "EZon" Lock. They endorsed the lock for its unique digital keypad, which randomly displays two extra digits that must be entered before pressing the actual unlock code. It's a fairly nice way to ensure extra smudge prints and even wear across the keypad. What got my attention, though, was the NFC cards which could also be used to unlock the deadbolt. At Shmoocon, I scanned the sample card with my Galaxy Nexus and realized it was a Mifare Classic card… with no protected sectors or data on it (not that it would have mattered too much, since the Mifare Classic encryption can be fairly easily broken at this point). We ordered one for the office to play with, even though there were some warnings that the RFID side of things might not have the best security.

I don't want to turn this into a full product review (or video overview), so I'll just focus on the NFC side of things. The lock ships with 5 branded "Access Cards," which are Mifare Classics. The lock appears to check only the 4-byte UID of the card, and if the UID has been previously registered with the lock, it allows access. The UID is like a unique serial number for each card and should be impossible to change after the card is manufactured. None of the cards shipped with the lock are pre-registered, so they must be manually added for access.

One avenue of attack would be something similar to the HID card enumeration attacks (where knowing the ID from one card makes it pretty easy to find other valid values). Scanning the cards that were sent with the lock, the UIDs are not within close numerical range, although some parts are similar (the UIDs were: 3e8700b1, be37feb0, eed2ffb0, fe2701b1, and… oops, I lost the last card). Additionally, the lock ships with brute-force detection enabled, which is referred to as "prank" detection in the manual. Scan five invalid NFC cards in a row and the lock sounds an alarm and requires an administrator to unlock the device (or the door to be unlocked from the inside). Thumbs up for shipping with brute-force protection turned on by default. Unfortunately, we also noticed there's a reset button hidden on the outside of the lock, so bring a paper clip and reset it after four attempts to avoid triggering the brute-force alarm and time-out.

That said, brute-forcing UIDs right now is a bit complicated. We haven’t seen a way to do this directly on a mobile device yet. A great whitepaper (PDF) on the current state of things was done by Michael Roland a few months ago. So while we can’t do this directly on our devices yet, we were able to purchase a knock-off “Mifare Classic” card from a contact in China which allows us to set the UID on a physical card using a non-standard command. At EUSecWest 2012, Max and I demonstrated using a Nexus S to read the UID off someone’s access card, then program onto this KIRF card in order to unlock the EZon.
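
Reading a card's UID on Android is the easy half of that demo. A minimal sketch (assuming an Activity already registered for NFC tag-discovery intents) looks something like this:

import android.app.Activity;
import android.content.Intent;
import android.nfc.NfcAdapter;
import android.nfc.Tag;
import android.util.Log;

public class UidReaderActivity extends Activity {
    // Assumes the manifest registers this activity for TAG_DISCOVERED/TECH_DISCOVERED intents.
    @Override
    protected void onNewIntent(Intent intent) {
        super.onNewIntent(intent);
        Tag tag = intent.getParcelableExtra(NfcAdapter.EXTRA_TAG);
        if (tag == null) return;
        byte[] uid = tag.getId();                       // the 4-byte value the EZon trusts
        StringBuilder hex = new StringBuilder();
        for (byte b : uid) hex.append(String.format("%02x", b));
        Log.d("EZon", "Card UID: " + hex);
    }
}
From there, cloning is just a matter of writing that value onto the knock-off card with its non-standard command.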

So if you use one of these locks, you might want to keep your card in a shield when it's not needed. However, you could also enroll your mobile phone as your access key. This would then allow you to control when your card is active and when it is not. If you have an Android device that supports Google Wallet, you're all set. The trick is to have Google Wallet installed with at least one "Loyalty Card" set up in the wallet, then make sure the card is enabled. Doing this enables NFC card emulation on your device, which will present a UID to the EZon when it is within range. This type of card emulation is separate from your payment information (so you don't have to worry about the lock charging your bank account each time you unlock it). You can then enroll your phone just like a physical access card with the EZon and use your phone to unlock the device. The added benefit is that when your phone's screen is turned off, card emulation is off as well, which makes things a lot harder to tap and then clone.

~benn


