Category Archives: Phishing
A couple of months ago, at ShmooCon 2013, Tim Medin gave a great short talk titled “Apple iOS Certificate Tomfoolery.” One of the most interesting ideas I took away from this talk was the idea of ransomware delivered through a configuration profile. Briefly, configuration profiles can be used to control many aspects of an iOS device’s configuration. They can enable features, disable features, and even hide applications from the user.
This is the tricky bit: Create a configuration profile that disables Safari, disables installation of applications, even disables iCloud backups, and adds a “READ ME” web page to the user’s home screen. Put a password on the profile, so the user has to enter the password in order to remove it. Now, you just need to convince the user to install the profile, and you can do that simply through email or SMS phishing. Once they install it, half their expected functionality suddenly goes away, and if they tap on the “READ ME” page, they’ll see the instructions as to how to pay ransom to receive the password to remove the profile. Win! (well, not for the user).
Now, fortunately, there are a couple of flags that might alert the user that something odd is happening. First, the initial profile installation screen lists the profile's contents, which include "Profile Removal Password." Similarly, tapping "More Details" clarifies that this is a locked profile. Of course, if the email introducing the profile was written well enough, the user might already expect and accept this. Hopefully we can train them not to. Also, if the user has a passcode on their device, they have to enter that passcode as well, so the profile won't simply install without the user noticing.
But what if they ignore all the warnings, and install the profile anyway? Well, all might not yet be lost. Turns out, the removal password is included in the profile, in plaintext. The attacker could choose to encrypt the profile, but to do that they need a public key from the target device, which might not be so easily acquired. So, assuming the profile is not encrypted, just pull down the .mobileconfig file from the original phishing email, open it up, and find the password.
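Pulling the password out of an unencrypted profile is easy to script, since a .mobileconfig is just an XML plist. Here's a minimal Python sketch; the payload-type and key names reflect our reading of the profile format, so verify them against the actual file:

```python
import plistlib

def find_removal_password(path):
    """Return the profile-removal password from an unencrypted
    .mobileconfig, or None if no removal-password payload exists."""
    with open(path, "rb") as f:
        profile = plistlib.load(f)
    # The removal password lives in its own payload inside PayloadContent;
    # the payload type and key name here are assumptions about the format.
    for payload in profile.get("PayloadContent", []):
        if payload.get("PayloadType") == "com.apple.profileRemovalPassword":
            return payload.get("RemovalPassword")
    return None
```

Point it at the .mobileconfig saved from the phishing email and the removal password falls right out.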
Of course, the attacker could get really tricky, and serve up a file with a different password each time, placing some kind of key into the ransom notice (“Pay me $35 to remove this profile. Use the word ‘ostrich’ when you send me your bitcoins”) and then that key would be used to derive the actual removal password. If this is the case, then each time you hit the page you’d get something different, and so you wouldn’t be able to recover the correct password. In that case, the only real way to remove it is either to pay the ransom, or, if the device is jailbroken, get in and remove the profile directly from the filesystem.
In iOS 6.x, a new feature was introduced that can prevent the user from installing profiles. This feature is only available in Supervised Mode (via the Configurator application), however, and so isn’t of much use to the general population.
If you’re a gamer like me, you’ve probably been waiting for the release of Star Wars: The Old Republic, currently being developed by Bioware. I’ve been looking for beta codes, and came across Penny Arcade’s beta code give-away some time ago (bless their souls).
As I was signing up for the beta, I noticed something interesting: the registration page immediately told you if the email you’d typed in matched an EA Origin account. This piqued my interest: was this exploitable, other than testing if email addresses were associated with Origin? I signed up with my Origin account, and strangely, it asked for a new password, without authenticating my current password. Digging further, I realized that you could reset the password to a new one of your choosing, with one caveat: the holder of the email account needs to access the verification link sent to the email address as part of signup. Here’s the email:
Thank you for joining the Star Wars™: The Old Republic™ community! To complete your registration and activate your account, simply verify your email address by clicking here. Remember, we will update or create an EA account using this email address upon your email validation.
Normally, this would be good enough to prevent unauthorized password resets, but I found it strange that nowhere on the registration page or in the email did it say that my password would be reset, and there was no indication after the reset either. Conceivably, you could sign up a large number of Origin accounts for the SWTOR beta, and if a target didn't log into his Origin account immediately, he'd be unaware his account had been compromised. If you're a gamer, you've probably signed up for a dozen betas, hoping to get lucky now and then – a "confirm you want to join the Star Wars beta" email from EA would raise no suspicions whatsoever.
I decided to report the issue to EA. I couldn’t find a security contact on their website, but a bit of e-stalking later with a colleague, found the email address of the EA CISO, and shot off an email describing what I’d found. To my surprise, instead of being ignored or receiving a letter from their legal department, I got a response within the hour – “Do you mind sharing your details so we can address the issue promptly? We take security very seriously and would like to get on it right away.” I couldn’t have asked for anything better.
Four days and some email exchanges later, I received an email from one of their Online Development Directors: thank you for reporting the issue, a patch has been deployed – oh and we’d like to send you some free SWTOR swag to show our appreciation.
This was a pretty fun experience for me. It wasn't a critical vulnerability, but it had the potential for mass abuse. EA was on the ball in fixing the bug, and kept me in the loop. You don't get that too often. Now if only I could use this to get EA to grant me permanent beta-tester status for all their games…
Google IO had a “How to NFC” session today where they demoed and described using NFC on Android. One of the items they pointed out was the desire to use NFC for instant gratification and zero-click interactions. The only default application on the Nexus S that I’ve seen this in before today was Google Maps, but the desire is that other applications will incorporate this feature as well. In the future, we may see a banking app that launches when the phone is touched to a particular NFC/NDEF message tag and not require the user to click anything.
To see how this could work right now on a Nexus S, take a Mifare tag and write to it an NDEF message with the URL "http://maps.google.com/". When the device reads the tag, the standard NFC Tags application requiring user interaction will NOT be triggered. Instead, it will automatically launch Google Maps on the phone. This is done with specialized intent-filters. O'Reilly has been on the NFC ball and has a great write-up and flow chart about how Android figures out what actions to take when a new NFC tag/NDEF message is detected. It is well worth the read if you are planning on using NFC tags with your application.
To see how this works, pull out the AndroidManifest.xml file from the Google Maps application on the Nexus S, and you'll see a number of URLs registered for the "android.nfc.action.NDEF_DISCOVERED" action. These are intent-filters, which don't require any special permission, nor present any type of prompt to the user when installed. So what if we wanted to create a competitor to Google's Maps application and register for these same intents? What if this was a banking app and the tags triggered the start of a transaction? Nothing currently stops our app from also creating these intent-filters, so let's see what that could look like.
We created a quick “Angry Birds New Jersey” application with some special intent filters in the manifest for our presentation at B-Sides Rochester last weekend. When the user installs what appears to be a game application, it will also silently register to receive the same intents which would launch Google Maps. Here’s a sample of the intent-filters for that:
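A hedged reconstruction of what those intent-filters look like in the manifest (the label and icon resource names are illustrative, not the exact ones from our app):

```xml
<!-- Claims NDEF-discovered tags carrying a maps.google.com URL,
     the same pattern Google Maps registers for on the Nexus S -->
<intent-filter android:label="@string/spoofed_label"
               android:icon="@drawable/spoofed_icon">
    <action android:name="android.nfc.action.NDEF_DISCOVERED" />
    <category android:name="android.intent.category.DEFAULT" />
    <data android:scheme="http" android:host="maps.google.com" />
</intent-filter>
```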
Now when a user scans an NFC tag with a maps URL, a menu will pop up asking the user to choose which application should handle the intent. The challenge becomes getting the user to send the information to our application instead of the official application. The intent-filters include two handy settings for this. First, you can customize the "label" that will appear in the popup list. So instead of our installed application's normal name, "Angry Birds New Jersey," showing up, we can call it "Google Maps". We can also set the icon that will be displayed. So again, instead of showing the game icon, we can use an image that people already associate with Google. If you had to choose between these two apps, which one would you click on?
I'm not sure most users would know the first one on that list was from the bird game we installed and not the official Google Maps application. There might not be too much risk here in hijacking a maps URL, but it's something I would encourage developers to think about with their data and tags.
There is now a way to protect against this when writing your data to NFC tags, if your application is running on Android 4.0 (or later). The protection is called an Android Application Record (AAR). Click here for our full post on the feature.
Can you tell if a host is remotely infected just by a single HTTP request? For some malware the answer is yes.
By now, I think our readers are pretty familiar with PhishMe. As you can imagine, we see a lot of hits to PhishMe from a variety of browsers. And even better, we see a lot of hits to PhishMe from a variety of browsers where the user is likely to click on things. Each time a user requests a website, the user's browser sends a "user-agent" string to the web server as part of the request. A simple user-agent string looks like:
Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)
Here's a quick breakdown of what this string tells us. The Mozilla/4.0 portion indicates a Mozilla-compatible browser. This user is running Internet Explorer 7.0 on Windows NT 5.1 (Windows XP). You can check your user-agent here.
Now for Internet Explorer, it’s pretty easy to append information to this user-agent string by editing the registry. You will typically see a number of .NET related items coming from a normal user-agent header on a Windows system.
Where it gets interesting is when we see user-agents like these next ones. It seems that some viruses and malware (or “potentially unwanted software”) insert their name or a token into the user-agent string. Here’s some examples we found:
Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; AntivirXP08; .NET CLR 1.1.4322)
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; PeoplePal 7.0; .NET CLR 2.0.50727)
Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; FunWebProducts; .NET CLR 1.1.4322; .NET CLR 2.0.50727)
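The detection itself is just substring matching over server logs. A minimal Python sketch (the token list here is only the three examples above; a real tally would use a much longer list and actual log data):

```python
from collections import Counter

# Tokens that known unwanted software appends to the user-agent string.
# Illustrative subset only.
SUSPICIOUS_TOKENS = ["AntivirXP08", "PeoplePal", "FunWebProducts"]

def tally_infections(user_agents):
    """Count how many user-agent strings carry each suspicious token."""
    counts = Counter()
    for ua in user_agents:
        for token in SUSPICIOUS_TOKENS:
            if token in ua:
                counts[token] += 1
    return counts
```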
If the malware instead appends a token to the user-agent string, this token could be used to track the user from site to site or to trigger certain behavior on malicious websites. We identified several pieces of potentially unwanted software and tallied the number of infected users using PhishMe. The graph below shows the most common pieces of malware found in user-agent strings:
We looked at IE 6, 7, and 8. Using this total number of "infected" users, we broke the infections down by browser version and divided by the total number of users running each version to get the percentage of each version's population that is infected. As it turns out, the portion of infections is pretty similar across all IE versions. Isn't IE 8 supposed to protect users much better than IE 6? This is a bit of a surprise, but it suggests something we've known about the current state of attacks: you can have strong software controls, but security still depends just as much on the user operating the software safely. Even given a browser that is relatively hardened against threats, users must know how to identify sites with malware and phishing schemes in order to stay safe. Patching and updates are important, but so is user awareness.
PhishMe clients can contact our support team for an analysis of your user base.
Sending attachments over email can sometimes be a game of getting around content-filtering rules, especially when you're in the security field and sending something that may look like a security threat. Recently we found ourselves needing to send out attachments with HTML code to a user who was checking their mail with Outlook Web Access (OWA). Since OWA is a web application that lets Exchange users read their email, it makes sense that OWA will try to block attachments it detects as malicious. Enter the "Safe HTML" filter.
The Safe HTML filter isn't meant to protect users from everything; it's just one of those nice extras that hopefully stops some low-hanging fruit. If it's your OWA server, you can disable this filter, but we're not about to recommend that to anyone. We just needed to get our HTML attachment through (I swear, officer, it's not malicious, just good clean HTML tags). It didn't matter what we named the file (foo.gif, bar.doc, baz.foo); if it had HTML in it, the file got truncated when the user attempted to download it. After digging into our bag of old tricks, it was nice to see one come through.
From my days of playing with browser caching options, I remembered an issue with some versions of Internet Explorer where they would only obey META tags regarding caching if the tags appeared within the first 64 KB of the page. Using that idea, it turns out that if you pad the start of your attachment with something like 1024 space characters, your HTML attachments download and open just fine in OWA. I imagine someone has done this before, and I would love to see a more thorough review of the Safe HTML filter out there, but for us it was just a reminder that some tricks don't die… they may just need a few more or fewer bytes.
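The workaround amounts to nothing more than this (a Python sketch; 1024 spaces was simply the padding amount that worked for us, not a documented threshold):

```python
def pad_attachment(html_bytes, pad_len=1024):
    """Prepend whitespace so the HTML markers sit past the region
    the filter appears to inspect. Browsers ignore the leading
    spaces when rendering the file."""
    return b" " * pad_len + html_bytes

# Example: write a padded copy of an attachment before sending it.
def pad_file(src, dst, pad_len=1024):
    with open(src, "rb") as f:
        data = f.read()
    with open(dst, "wb") as f:
        f.write(pad_attachment(data, pad_len))
```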
It is common knowledge that people get phished on non-SSL HTTP web sites. RSnake has blogged and presented about the weaknesses in today's web browsers that make this possible. These same weaknesses are presumably what Moxie Marlinspike exploited when he thwarted SSL site validation and encryption via man-in-the-middle (MITM) attacks against HTTP traffic on the Tor network, as discussed in his Black Hat DC talk.
While these weaknesses have been known, what makes Moxie’s presentation unique is that he launched this attack against a large sample set of real victims, and succeeded in capturing their login credentials. Further, Moxie has shown us that his tool SSLstrip, and others like it, can make these attacks easy and automatic – assuming you have a foothold as a MITM. Hopefully somewhere, upon reading Moxie’s slides, a browser UI designer has finally let out a “Doh!” and slapped his own forehead.
MITM attacks on SSL aside, the most interesting thing I took away from Moxie's talk is that he was able to identify user accounts for specific web sites on the Tor network. You can read about how Tor works on the Tor Project site, but the purpose of Tor is to provide reliable anonymity while surfing the Internet. Anonymity is key for folks who want to blog about their oppressive governments, as well as those who engage in less-than-ethical activities on the Internet.
Posting an anonymous blog on a free blog service is one thing. But what about anonymously logging into your bank’s web site? Or anonymously checking your PayPal account? Isn’t that kind of like anonymously presenting your drivers license to the bouncer at the bar? The person on the receiving end of the communication knows who you are claiming to be.
If I wanted to hide my identity, I would use the Tor network. But if I were hiding my identity, I would not then log in using my own personally identifiable information (PII). This really makes me wonder about the people Moxie man-in-the-middled. Were they naively using Tor, assuming that the network's anonymity gave them increased security for their online banking? Or were they bad guys (phishers) logging in to compromised accounts, using Tor to hide their identity and protect themselves from prosecution?
There are a lot of misconceptions about SSL and "online security" outside the security-geek world. People don't get it. The big question I have after Moxie's presentation is: do similar misconceptions apply to the use of Tor? I would be very interested to know more about the people compromised in Moxie's experiment.
Authenticating to a web application is a mutual process. Before a user enters credentials into the application, they validate the web application's credentials: its hostname, content, and SSL certificate (assuming it uses SSL).
Essentially, you validate the web site against what you know to be true (hostname and expected content). The browser validates that a trusted third party signed the web site's public key, and together they vouch for the site's identity by showing you a visual cue.
If the web site passes your personal validation and you decide to provide your credentials, the application validates them against what it knows to be true: a directory or other repository with user information. If your credentials check out, it lets you in.
Dan Kaminsky's DNS flaw makes it possible for attackers to spoof one of the three credentials web servers use to authenticate to users: the hostname. The look and feel of a particular web site is already easy to spoof: phishers have been doing this for years. The only remaining credential the web server has that can't easily be compromised is its SSL certificate, bearing the signature of a trusted third party (one of the commercial certificate authorities).
Now that two of the three credentials could be spoofed, I started wondering how hard it would be to spoof the third. If you can get a valid SSL certificate, you can completely steal the identity of a web site. Unfortunately, it is not too difficult, and it is through no technical fault of the SSL protocol.
For me, it required no social engineering, no illicit hacking, no ninja skills. In fact, it was kinda scary in its simplicity, and the real fault lies in the process of the certificate authority (a big one). Is it that bad? I attempted to get certs for three HUGE Internet sites, and I was successful with one. An interesting application-logic problem prevented me from getting another, and the certificate authority basically told me no (over the phone) for the third. The one I did get, however, is a biggie.
A few weeks ago I was looking into writing an application for my iPhone. At some point, I felt compelled to actually give it a shot, and I headed over to Apple’s web site to download XCode and whatever other tools I needed. Of course, I couldn’t remember my Apple developer center password, so I went through their “Forgot Your Password” routine on my Dell laptop.
A few seconds later, an email popped up on my Mac containing their magic link to pull up my change password form. I clicked it and went through the reset process, which ultimately asked me to authenticate with my new password.
Finally, I was redirected to the URL I originally requested . . . on my Dell. Hmm. How did my Mac get to where my Dell originally was?
Turns out Apple was maintaining a session for me on the server which retained my original URL. When you requested a URL that required authentication, Apple 302'd you to the login page with your desired URL contained in a query-string parameter. Once on the login page, you could tamper with the URL before it was stored in the session. You could also then enter your username (or, even better, someone else's) and initiate the change-password process.
When you chose to have Apple send you a link to change your password, the session you started with your original URL persisted via the data contained in the link. After you went through the process of changing your password and you finally authenticated, Apple sent down a small HTML file with a META-REFRESH tag that actually sent you where you originally wanted to go.
It is in this HTML where the badness happened. The original URL Apple stored in the session was written here without being HTML-encoded or properly validated. Apple did prevent you from specifying http://attackersite.com, but they did not validate against iphone.html"><SCRIPT>…</SCRIPT>.
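For illustration, HTML-encoding the stored URL before writing it into the refresh page is enough to break this particular injection. A Python sketch (the page template is hypothetical, not Apple's actual markup):

```python
import html

def safe_refresh_page(url):
    # Encode the attacker-controllable URL so quotes and angle brackets
    # cannot break out of the attribute and inject script.
    return ('<html><head><meta http-equiv="refresh" '
            'content="0; url=%s"></head></html>'
            % html.escape(url, quote=True))
```

Encoding alone is a stopgap; validating the URL against a whitelist of expected destinations would be the fuller fix.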
The attack would have been as follows:
1. Tamper with the original URL and inject an XSS attack.
2. Enter someone else's username in the logon form, and click "Forgot Your Password"
3. Have Apple send the victim the password reset email.
4. Here is the kinda far-fetched part: you need to hope/pray/socially engineer/somehow get the victim to go through the password-change process and authenticate.
5. Once they authenticate, you own their browser.
This attack is interesting to me for a number of reasons. First, it is a persistent XSS attack in a credential-management system (ouch!). Second, the injection point is pre-auth, while the payload executes in the victim's browser post-auth. Third, it is very easy to target individual users using legitimate emails from Apple: no spoofing required!
Apple was very quick to fix the problem, and even gave us credit here.
Good job Apple!
At this year's RSA conference, Ira Winkler told the audience about hacking into an energy company (via an authorized penetration test) using a targeted phishing email. Details are in this Network World article: http://www.networkworld.com/news/2008/040908-rsa-hack-power-grid.html
“The penetration team started by tapping into distribution lists for SCADA user groups, where they harvested the e-mail addresses of people who worked for the target power company. They sent the workers an e-mail about a plan to cut their benefits and included a link to a Web site where they could find out more.”
Are we surprised they were successful? Absolutely not. We've been using this technique and responding to real incidents that used spear phishing for quite some time now. But what if those same employees had already been "phished" through targeted awareness exercises and then presented with the appropriate training material? What if you ran this exercise against all your employees regularly?
Phishme.com already has pre-built scenarios to make this training quick and easy. It has many generic domain names to choose from, or you can register your own look-alike domain.
There is no sense in paying a pentest company high-dollar consulting fees to find out whether your employees are vulnerable to phishing. I'm about to save your company a boatload of money.
Dear Magic Eight ball, I don’t currently conduct phishing attacks against my own employees as a means to train them. Am I vulnerable to spear-phishing attacks?
The presentations were very hit or miss this year, with unfortunately a bit more of the latter. I felt a lot of presentations would have fit a shorter turbo-style time slot better than the hour-long slots. For example, the 'baffle' application for wireless AP fingerprinting looks like a very cool first-generation tool. Easy to use, easy to hack around with, well researched, and makes pretty graphs. Score. Unfortunately, they dragged out the presentation with the whole history of TCP fingerprinting and made us wonder what the students were IM'ing about as they sat on the stage trying not to look too embarrassed or bored.
Mad props go out to Brad Antoniewicz and Joshua Wright. Not only for releasing a cool tool for wireless PEAP/TLS client credential pwnage (FreeRADIUS – Wireless Pwnage Edition), but for fun presentation skillz and shmooball dodging. Find the video for this one. It was probably my favorite talk of the con (not sure if the camera man caught the start of the talk though).
The guys at Vigilar also rocked with a new and improved version of VoIP Hopper, complete with practical usage scenarios and some good demos with a standard VoIP phone. They showed how to get onto the corporate network by bypassing the VLANs set up for VoIP traffic. I can think of a number of locations I've been at where it would be handy to have this tool with me.
Our very own Jaime and Aaron got a lot of people thinking with their forced internet condom. They're moving web hosting providers, but there's some good data about what ports ISPs are blocking over at portscan.us (and you can help add to the project as well).
I unfortunately missed h1kari's (David Hulton) GSM talk due to train delays, but the word at the hotel bar was that it was one of the most technical and interesting talks of the con. His GSM rainbow tables may make things very interesting when the FPGA computations complete in three months (anyone have a link to where they will be posted?). Speaking of FPGAs, I'm proposing the FDA needs to start looking into these things, since they're basically giving every geek I know an erection that is lasting way longer than 4 hours.
And for more geek porn, let me suggest the Solid State Drives Data Recovery Comparison to Hard Drives presentation. Scott Moulton makes PowerPoint look like a Commodore 64 next to his smoothly timed 3D graphics. The guy also rocks for having them online for everyone to get jealous of… oh, and for teaching us that deleting or wiping flash-based drives is completely useless because of the wear-leveling process done by the controllers on these things. (And yes, I did sit there thinking of all the times I've futilely done PGP wipes of data on my flash drives.) The good news, though, is that recovery of that data sounds pretty damn hard at this time. Also in good news, we can now write off a few power tools from Home Depot as business expenses, since you'll want a hammer to "wipe" those drives.
A number of us caught the phishing talk by Syn Phishus. I think we'll have a full follow-up post on that (but just to clear up one rumor we heard: no, he does not work for or have anything to do with phishme.com). He obviously agrees with us that mock phishing exercises need to be done… but I'd say our approaches to this differ greatly.