Intrepidus Group
Insight

Author Archives: Mike Zusman

Front Range OWASP Conference

Posted: June 3, 2010 – 8:26 am | Author: | Filed under: Conferences, Mobile Security, Tools, Uncategorized

I'm just finishing up my third trip to Denver to speak at the Front Range OWASP Conference (FROC). This time I was corrected: FROC, I was told, is a local conference, not a regional one. It had me fooled. The number of attendees definitely threw me off! At any rate, David Campbell, Kathleen Thaxton, and others put on a great event, and this one is going to remain on my schedule for years to come.

This year Intrepidus had two speaking slots at FROC. In the application security track, Raj Umadas and Aaron Rhodes covered some advanced man-in-the-middle techniques used for software testing. They flexed some muscle with a few Intrepidus in-house developed tools, showing how easily some non-plaintext protocols (like SSH) can be intercepted and tampered with. Stay tuned for more on this topic, as Raj and fellow Intrepid-dude Jeremy Allen will be presenting additional content on this subject at BlackHat in July.

In the emerging technologies track, Zach Lanier and I discussed our current opinions and observations on the mobile application security space, as well as the variety of testing techniques we employ when testing apps written for various mobile platforms. For example, while it might be easy to extract and decompile a RIM application to perform some static byte code analysis, the same technique would not apply to a C/C++ binary client written for Windows Mobile. In that case, it can be faster (and easier) to understand and break the application by analyzing and tampering with its network traffic. We followed this up with a number of case studies based on mobile app assessments we've performed. We'll be continuing our research and presenting additional content at the New York State Cyber Security Conference later this month, and again in October at SecTor in Toronto.

Comments disabled

SSL Mystery Theater

Posted: April 6, 2010 – 7:09 am | Author: | Filed under: ssl

Some frightening chatter from the mozilla.dev.security.policy list. A root certificate that would appear to be owned by RSA has been included in the NSS root store for the better part of a decade, and expires in 2026. Unfortunately, RSA does not claim to own it, and its true origin is currently unknown.

“The lack of transparency in 2002 re: the source of added roots means we
have no idea whether e.g. some malicious actor slipped an extra one into
whatever list they were keeping internally to Netscape, and has been
MITMing people ever since.”

According to the thread, the same certificate is in the Apple root store, but not Microsoft’s. Another interesting bit of information comes from Florian Weimer, who states in the thread:

“For instance, the Equifax root isn’t controlled by Equifax anymore,
and there a couple of such examples.  There was a time when roots were
traded heavily.”

It had not occurred to me that roots (the chains of trust for the Internet) can be traded, bought, and sold. It is possible that the cert in question, named "RSA Security 1024 V3", was originally created by RSA but is now owned by someone else. Even if it is owned by a legitimate entity, this does not bode well for the concepts of transparency, identity, and trust.

If this story gets a following, and the integrity of EV comes back into question, I wonder what the big CAs will say.

1 comment

Trust Revisited

Posted: March 25, 2010 – 8:11 am | Author: | Filed under: ssl

A long, long time ago, on a not so distant blog, I questioned the manner in which we make trust decisions regarding HTTPS enabled web sites.

Yesterday, Sid Stamm and Christopher Soghoian published a very interesting paper that further explores problems with SSL PKI and the trusted CA model. Most recent SSL research has focused on exploiting technical, implementation specific flaws in various pieces of SSL PKI. Stamm and Soghoian instead discuss a much more esoteric threat: various government agencies strong arming trusted Certification Authorities into issuing valid certificates for nefarious purposes.

The authors describe a fictitious attack on Chinese dissidents in which the Chinese government coerces a Chinese CA into issuing a certificate for the US-based Google. By detecting a change in the country of origin of the CA that signed the Google certificate, the authors argue, an otherwise perfect SSL MITM attack can be detected.
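A minimal sketch of that detection idea, in my own rough terms rather than the authors' implementation: remember which country the issuing CA for a site lives in, and complain if a later connection chains to a CA somewhere else. The cache filename and the example host below are just for illustration.

import json
import socket
import ssl

CACHE_FILE = "issuer_countries.json"   # hypothetical local cache

def issuer_country(host, port=443):
    """Return the countryName of the CA that issued the host's certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    issuer = dict(rdn[0] for rdn in cert["issuer"])   # issuer is a tuple of RDN tuples
    return issuer.get("countryName", "unknown")

def check_host(host):
    try:
        with open(CACHE_FILE) as f:
            cache = json.load(f)
    except FileNotFoundError:
        cache = {}
    country = issuer_country(host)
    if host not in cache:
        cache[host] = country   # remember the issuing CA's country on first sight
    elif cache[host] != country:
        print(f"WARNING: issuing CA for {host} moved from {cache[host]} to {country}")
    with open(CACHE_FILE, "w") as f:
        json.dump(cache, f)

check_host("www.google.com")

It's crude (legitimate CA changes would also trip it), but it captures the paper's observation that a coerced foreign CA stands out if you bother to look.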

But with all this talk of the Google hack, APT, and various government and defense agencies being successfully attacked themselves, who is to say that the Certification Authorities are immune? Why strong arm a CA, when you can silently issue your own certificate?

1 comment

Moxie Marlinspike Un-masks Tor Users

Posted: February 19, 2009 – 12:17 pm | Author: | Filed under: Phishing, ssl, Web Apps

It is common knowledge that people get phished on non-SSL HTTP web sites. RSnake has blogged and presented about the weaknesses in today's web browsers that make this possible. These same weaknesses are presumably what Moxie Marlinspike exploited when he thwarted SSL site validation and encryption via man-in-the-middle (MITM) attacks against HTTP traffic on the Tor network, as discussed in his BlackHat DC talk.

While these weaknesses have been known, what makes Moxie’s presentation unique is that he launched this attack against a large sample set of real victims, and succeeded in capturing their login credentials. Further, Moxie has shown us that his tool SSLstrip, and others like it, can make these attacks easy and automatic – assuming you have a foothold as a MITM. Hopefully somewhere, upon reading Moxie’s slides, a browser UI designer has finally let out a “Doh!” and slapped his own forehead.
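To be clear, the sketch below is not Moxie's code; it's just a toy illustration of the central trick as I understand it: the man in the middle rewrites https:// links in pages it relays down to http://, keeps track of which links it downgraded, and transparently speaks HTTPS to the real server on the victim's behalf. The example URL is made up.

import re

downgraded = set()   # URLs the MITM must itself fetch over HTTPS later

def strip_https_links(html):
    """Rewrite https:// links to http:// and remember them -- the core idea."""
    def downgrade(match):
        secure_url = match.group(0)
        insecure_url = "http://" + secure_url[len("https://"):]
        downgraded.add(insecure_url)
        return insecure_url
    return re.sub(r'https://[^\s"\'<>]+', downgrade, html)

page = '<a href="https://bank.example.com/login">Log in</a>'
print(strip_https_links(page))   # the victim now sees a plain http:// link
print(downgraded)                # the proxy knows to upgrade this request itself

The victim's browser never sees an SSL error because it never attempts SSL, which is exactly why the browser UI cues matter so much.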

MITM attacks on SSL aside, the most interesting thing I've taken away from Moxie's talk is that he was able to identify user accounts for specific web sites on the Tor network. You can read about how Tor works on the Tor Project site, but the purpose of Tor is to provide reliable anonymity while surfing the Internet. Anonymity is key for folks who want to blog about their oppressive governments, as well as those who engage in less-than-ethical activities on the Internet.

Posting an anonymous blog on a free blog service is one thing. But what about anonymously logging into your bank’s web site? Or anonymously checking your PayPal account? Isn’t that kind of like anonymously presenting your drivers license to the bouncer at the bar? The person on the receiving end of the communication knows who you are claiming to be.

If I wanted to do something that would hide my identity, I would use the Tor network. However, if I were doing something to hide my identity, I would not do so using my own personally identifiable information (PII). This really makes me wonder about the people Moxie man-in-the-middled. Were they ignorantly using Tor, assuming that the anonymity of the network provided them increased security to perform their online banking? Or were they bad guys (phishers) logging in to compromised accounts, using Tor to hide their identities and protect themselves from prosecution?

There are a lot of misconceptions about SSL and "online security" in the non-security-geek world. People don't get it. The big question I have after Moxie's presentation is: do similar misconceptions apply to the use of Tor? I would be very interested to know more about the people compromised in Moxie's experiment.

-Schmoilito

1 comment

How do you trust?

Posted: January 15, 2009 – 11:16 am | Author: | Filed under: Tools, Web Apps

SSL PKI is designed to do two things: encrypt data on the wire, and allow web site validation through the use of trusted third party signatures. The former works pretty well, the Debian weak key debacle aside. Unfortunately, the latter seems about as robust and secure as Windows 98. Case in point, https://discovercard.com. As my colleague Mike Walker points out, DiscoverCard.com forces users to enter credentials on a page served over an insecure HTTP connection. In doing so, Discover leaves users with no real way to tell who they are giving their credentials to. This is a perfect example of an implementation specific design flaw that fails open and renders SSL site validation useless.
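This particular flaw is easy to spot programmatically. Here's a quick heuristic check of my own (the URL is a placeholder): if a page arrives over plain HTTP and contains a password field, the user is being asked to trust content whose origin they cannot verify, even if the form ultimately posts to an HTTPS URL.

import re
import urllib.request

def password_form_over_http(url):
    """Return True if a page fetched over plain HTTP contains a password field."""
    if not url.startswith("http://"):
        return False
    html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    return re.search(r'<input[^>]+type=["\']?password', html, re.IGNORECASE) is not None

# If this prints True, the login form itself was delivered over an unverified channel.
print(password_form_over_http("http://www.example.com/"))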

Unfortunately, Discover Card isn't the only organization breaking PKI. The pillars of Internet security, our trusted third party Certificate Authorities, have been having a rough time recently. A number of implementation specific flaws at multiple CAs have allowed outsiders to abuse their systems and obtain certificates they are not authorized to hold. Sure, these implementation specific flaws can be fixed, but the lasting damage to the trust we have in PKI can't be undone. Further, the way these situations have been handled seems to further undermine whatever trust remains.

Last summer, when I disclosed to Microsoft the details of how I got the live.com certificate, I told them I wasn't going to do anything bad with it, they said thanks, we shook hands, and that was pretty much the end of it. A few weeks ago, when Sotirov and crew disclosed that they had derived their very own key capable of signing certificates that would be trusted by all web browsers, the researchers told Microsoft, Mozilla, etc., that they wouldn't do anything bad with it. These companies again said thanks, hands were shaken, and that was pretty much the end of that.

We rely on WebTrust audits and other mechanisms to ensure that our commercial Certificate Authorities do their job well, and so we can be sure we’re sending our data to the web sites we trust. Unfortunately, when the audits are useless and the Certificate Authorities screw up like they did in the above two scenarios, companies like Microsoft and Mozilla are forced to make a tough call:

Do they
a) Revoke the root CA for which a duplicate signing key was derived by unknown individuals, thus breaking the Internet for many businesses and individual users
or
b) Do nothing, and trust that these guys really only have an expired certificate and didn't generate one valid for the next couple of years, even though they so very easily could have?

In the end, the trust that backs PKI is replaced with the trust of a few select individuals at the organizations who manage our root certificate programs (a.k.a. the browser vendors). The millions of dollars spent on WebTrust audits are meaningless. The CAs could have just paid all of the money earmarked for audits to Sotirov and Appelbaum in exchange for their silence, and PKI would have lived to fall another day.

Burn your SSL Certificates?

PKI, while good on paper, is hard to implement securely. It has taken almost two decades for us to have web browsers that actually support the one method PKI has to protect itself from rogue certificates: Certificate Revocation Lists. And it doesn't really matter, since not everyone is using IE7 or Firefox 3 yet. CRLs, which are essentially blacklists, are completely ineffective when you don't even know what rogue certificates actually exist.
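To see why that matters, reduce a CRL to its essence: it's just a signed list of serial numbers the CA knows it wants to take back (the serials below are hypothetical, and signature checking and distribution-point fetching are omitted). A rogue certificate the CA never learned about simply isn't on the list.

# A CRL is conceptually a CA-signed blacklist of revoked serial numbers.
revoked_serials = {"03:9F:2A:11", "1B:44:C7:08"}

def is_revoked(serial):
    return serial in revoked_serials

print(is_revoked("03:9F:2A:11"))   # True: the CA knew about this certificate
print(is_revoked("7E:00:51:2C"))   # False: a rogue cert the CA never learned about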

I don’t think trusted third parties are enough. We need technology that puts the ability to make trust decisions back in the hands of end users, rather than trying to make these decisions for them.

So what can we do differently? I’m of the mindset that client side certificate / public key caching, like that of SSH, can drastically improve our ability to make trust decisions when communicating on the Internet. SSH shows us that we can communicate securely without trusted third parties. The next question is how best to apply this to web browsers. Hashes of public keys are not easily consumed by casual Internet users. Another Intrepidus colleague, Aaron Rhodes, brought up the idea of vanity hashes that are actually easily recognizable patterns. This could help, but it would certainly complicate key management.
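To make the SSH analogy concrete, here is a minimal sketch of the caching idea. It's my own illustration, and it pins the whole certificate fingerprint rather than just the public key, which is a simplification; the cache filename and host are made up.

import hashlib
import json
import ssl

CACHE = "pinned_fingerprints.json"   # hypothetical local store

def fingerprint(host, port=443):
    """SHA-256 of the server certificate, standing in for a public key hash."""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

def check(host):
    try:
        with open(CACHE) as f:
            pinned = json.load(f)
    except FileNotFoundError:
        pinned = {}
    fp = fingerprint(host)
    if host not in pinned:
        pinned[host] = fp          # trust on first use, like SSH's known_hosts
        print(f"pinned {host}: {fp[:16]}...")
    elif pinned[host] != fp:
        print(f"WARNING: {host} presented a different key!")   # MITM, or a legitimate re-key
    with open(CACHE, "w") as f:
        json.dump(pinned, f)

check("www.example.com")

The hard part, of course, is that "legitimate re-key" case: telling a routine certificate rotation apart from an attack is exactly the usability problem the vanity-hash idea tries to chip away at.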

In an effort to actually try to help make things better, rather than just ranting about how bad PKI is on this blog, I've been working on a plug-in for Firefox that lets users whitelist SSL public keys, SSH-style, and alerts them when those keys change. It is actually a lot harder than it would seem. In my next post, I'll talk more about this plug-in and the challenges I've faced in getting it working.

-schmoilito

1 comment

Nobody is perfect

Posted: January 2, 2009 – 10:33 am | Author: | Filed under: Uncategorized

Just before Christmas, an admin from the StartCom certificate authority disclosed that he was able to procure an SSL certificate for Mozilla.com from a registered agent of the CA Comodo. He was not authorized to obtain this certificate, and the RA and CA clearly failed to properly vet his cert signing request. Shame on Comodo. You can read the entire saga on mozilla.dev.tech.crypto.

The discussion resulting from StartCom's blog post is quite interesting, and touches on many issues, from internal CA domain validation procedures to how to revoke a certificate in the Mozilla root cert program. One issue in particular is exactly what I talked about in my last post.

Frank Hecker, of the Mozilla Foundation, said “[right] now we have no real idea as to the extent of the problem (e.g., how many certs might have been issued without proper validation, how many of those were issued to malicious actors, etc.).”

When a flaw in a CA validation mechanism is uncovered, it can sometimes be trivial to fix. The hard part is determining whether any other certificates were obtained by taking advantage of the same flaw, and then revoking them. Although I can imagine a methodology for this process, I can't comment on how any given CA would actually tackle this problem. Based on my own application security experience, I will say that I'm sure many of the logs that would need to be parsed might not actually exist.
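To put that methodology in concrete terms: if (and only if) the CA recorded how each domain was validated, finding abuse of a specific flaw reduces to filtering the issuance logs. Everything below is hypothetical, including the log format; the point is that if a field like "validation_method" was never logged, there is nothing to filter.

import csv
import io

# A purely hypothetical issuance log.
log = io.StringIO("""account,domain,validation_method,issued
1001,example.com,email-to-admin,2008-12-01
1002,bigbank.example,flawed-path,2008-12-15
1003,smallcu.example,flawed-path,2008-12-18
""")

suspect = [row for row in csv.DictReader(log) if row["validation_method"] == "flawed-path"]
for row in suspect:
    print(f"revocation candidate: {row['domain']} (account {row['account']})")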

One commenter on the StartCom post that started all this critiqued it, saying it seemed dodgy that StartCom was blatantly pointing out flaws in a competing CA. The reader did, however, understand the severity of the problem that was found and thanked StartCom for publicly disclosing it. I agree with the reader, and I think StartCom did a good thing in disclosing this bug.

So in the interest of full disclosure, here is what happened on Friday, December 19 (three days before the StartCom disclosure). I found a flaw in StartCom's domain validation mechanism that easily allowed anyone to authorize themselves for ANY domain name, across various TLDs. While I only tested .COM, many other TLDs were available, including .GOV.

The screen shot above shows the domain names my StartCom account was allowed to create signed certificates for. These certificates would have been trusted by Firefox, but not Internet Explorer. The first one is a domain I control. Phishme.com and Intrepidusgroup.com are domains owned by my employer for which I am not an authorized contact, and for which I should not have been, but was, granted a signed certificate. Needless to say Paypal.com and Verisign.com are companies I’m also not authorized for.

Fortunately for Verisign and PayPal, a defense-in-depth strategy paid off for StartCom. While I bypassed StartCom's domain validation process, my attempt to create a signed certificate for Verisign.com was flagged by a blacklist and not permitted. This is good news for the prominent sites on the blacklist, but bad news for lesser known sites that rely on the trust gained by having a valid SSL certificate (small credit unions, for example).

Because they’re a good CA, the StartCom team was immediately aware of my attempt to get a certificate for Verisign. I disclosed the details of the flaw to them, and the simple problem was fixed within hours. But the question remains: did anyone else take advantage of the flaw?

PKI is not a perfect system, and there is no perfect CA. But, there are at least two types of CAs. One type treats SSL certificates as a cash cow, pushing signed certificates out the door, and counting the money. The second type is like StartCom. This second type understands that trust comes before money and that trusted CAs are a critical piece of Internet infrastructure.

-Schmoilito

(cross post on Schmoilito’s Way)

5 comments

More than one way to skin a CA

Posted: December 31, 2008 – 10:22 pm | Author: | Filed under: Uncategorized

Alex Sotirov, Jacob Appelbaum, and crew did some awesome work. They showed that it was possible to exploit RapidSSL's use of MD5 for signing certificates in order to create their own rogue CA signing certificate. This exploitation is many orders of magnitude more severe than when I used a loophole to get the login.live.com certificate from Thawte.

So what should happen when a CA screws up? Last summer, folks thought that the CA which issued the login.live.com certificate should have its status as a trusted CA revoked. I'm sure people feel the same way about RapidSSL. In my opinion, they are correct. However, it is clear that this could not happen, as it would affect the millions of businesses that rely on these CAs being trusted, which is what a VP at Verisign reaffirms in the comments of this post on the Breakingpoint blog.

A different question that Appelbaum asked during the presentation in Berlin, and one I’ve asked many times during my research of Certificate Authorities, is: if we were able to do this, how do we know if anybody else has done the same thing?

No one can ever give a straight answer. I've responsibly reported a number of flaws to CAs; flaws that can allow people to get certificates they shouldn't be allowed to get. The flaws get fixed, and that's great, but the damage that could have already been done is immeasurable.

It sucks when an online retailer gets hacked one or even multiple times. It’s bad for them, and it’s bad for their customers. When a trusted CA gets hacked, it sucks for the ENTIRE INTERNET. The CAs are supposed to help us secure the Internet. What does it mean if they are not secure themselves? To me, it means that we can’t rely on trusted third parties.

I know that abandoning PKI and trusted third parties is a bad idea, and probably won't happen. However, people need to be more involved in the process of making trust decisions when communicating online. And I don't mean little yellow locks and green address bars. I have some ideas on how to make better use of SSL in web browsers and other SSL clients. So far, I've gotten mixed responses to them from my peers :-) However, with what the Sotirov/Appelbaum team accomplished, maybe my ideas will make more sense. Stay tuned…

-Schmoilito

(cross post on Schmoilito’s Way)

2 comments

DNS vuln + SSL cert = FAIL

Posted: July 30, 2008 – 4:17 pm | Author: | Filed under: Articles, Conferences, Phishing, Security Management, Web Apps

Authenticating to a web application is a mutual process. Before a user enters credentials into the application, they validate the web application's credentials: its hostname, content, and SSL certificate (assuming it uses SSL).

Essentially, you validate the web site against what you know to be true (hostname and expected content). The browser validates that a trusted third party signed the web site's public key, and together they vouch for the site's identity by showing you a visual cue.

If the web site passes your personal validation and you decide to provide your credentials, the application validates them against what it knows to be true: a directory or other repository of user information. If your credentials check out, it lets you in.
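For the curious, here is a minimal sketch of the browser's half of that exchange using Python's ssl module (the hostname is a placeholder): the trusted-root check and the hostname check are the two things the little visual cue ultimately stands for.

import socket
import ssl

def validate_site(host, port=443):
    """Verify the chain against trusted roots and verify the certificate
    actually matches the hostname -- roughly what a browser does."""
    ctx = ssl.create_default_context()      # loads the trusted third-party roots
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["subject"]

print(validate_site("www.example.com"))     # raises an SSLError if validation fails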

Dan Kaminsky's DNS flaw makes it possible for attackers to spoof one of the three credentials web servers use to authenticate themselves to users: the host name. The look and feel of a particular web site is already easy to spoof: phishers have been doing this for years. The only remaining credential the web server has that can't easily be compromised is its SSL certificate, and the signature of a trusted third party (one of the commercial certificate authorities).

Now that two of the three credentials could be spoofed, I started wondering how hard it would be to spoof the third. If you can get a valid SSL certificate, you can completely steal the identity of a web site. Unfortunately, it is not too difficult, and it is through no technical fault of the SSL protocol.

For me, it required no social engineering, no illicit hacking or ninja skills. In fact, it was kinda scary in its simplicity, and the real fault is in the process of the certificate authority (a big one). Is it that bad? I attempted to get certs for three HUGE Internet sites, and I was successful with one. An interesting application logic problem prevented me from getting another, and the certificate authority basically told me no (over the phone) for the third. The one I did get, however, is a biggie.

I’ll drop the details at the beginning of my SSL VPN talk at BlackHat next week. I won’t divulge them sooner. Not even if Matasano kidnaps me, sends me overseas, and water boards me.

-Schmoilito

Comments disabled

Owning the Mobile Workforce @ BlackHat 2008

Posted: May 27, 2008 – 9:50 am | Author: | Filed under: Conferences, Mobile Security, Security Management, Techno

Those who have worked with me, or at least had a beer with me, know my feelings on web based SSL VPNs. They are very useful, very complicated, and can be very insecure. Useful because they allow a mobile work force to connect to the enterprise from any computer with a web browser; complicated because they need to do so with minimal inconvenience to both users and network managers; and insecure because this convenience is achieved through automation.

The automation starts with the browser-based installation of client side components (ActiveX/Java applets). Network teams, management, and help desk personnel alike love the fact that users can get the required client side software simply by visiting a web site. Once the components are installed, they can even maintain themselves by automatically downloading and installing updates and patches! Hallelujah!

Unfortunately, this type of behavior can be easily abused as Haroon from Sensepost has revealed. He disclosed a vulnerability in a Juniper SSL VPN ActiveX control that allows an attacker to execute code on a victim machine by getting them to view a malicious web site. The vulnerability is simple to exploit; the malicious web page invokes the ActiveX, calls one of its functions, and the ActiveX sends an HTTP request to the web server asking for commands to execute on the client machine. No stack smashing required!

Funnily enough, I reported an almost identical vulnerability to another large SSL VPN/firewall vendor. This other company makes it even easier. Instead of requesting a string of commands, their ActiveX will request, download, and execute an attacker supplied .EXE file. No signature checking or anything. Altogether, I have knowledge of these types of vulnerabilities in 4 of the leading SSL VPNs. Details will be discussed pending responsible disclosure.

We all know that SSL VPNs have similar features – you can spend days comparing vendor product descriptions. What I find interesting, and have spent much time researching, is that while SSL VPNs from different vendors share the same features, they also share the same vulnerabilities in their application logic. This research has provided most of the material for my upcoming talk at BlackHat 2008, “Leveraging the Edge: Abusing SSL VPNs”.

My talk is in the network track, but a lot of what I’ll be talking about is purely application security. This is funny to me, because during my time at Whale Communications (a Microsoft subsidiary) supporting Whale’s SSL VPN, the device was usually managed by network people who were not versed in application security at all. The “networking” (and security) in SSL VPNs terminates with the SSL connection. Beyond that, abusing gaps in access control, and other areas of application logic, can provide an attacker with all he needs to compromise clients and the networks they connect to.

-Schmoilito

2 comments

Apple.com XSS

Posted: May 23, 2008 – 7:42 am | Author: | Filed under: Articles, Phishing, Web Apps

A few weeks ago I was looking into writing an application for my iPhone. At some point, I felt compelled to actually give it a shot, and I headed over to Apple’s web site to download XCode and whatever other tools I needed. Of course, I couldn’t remember my Apple developer center password, so I went through their “Forgot Your Password” routine on my Dell laptop.

A few seconds later, an email popped up on my Mac containing their magic link to pull up my change password form. I clicked it and went through the reset process, which ultimately asked me to authenticate with my new password.

Finally, I was redirected to the URL I originally requested . . . on my Dell. Hmm. How did my Mac get to where my Dell originally was?

Turns out Apple was maintaining a session for me on the server which retained my original URL. When you requested a URL that required authentication, Apple 302'd you to the login page with your desired URL contained in a query-string parameter. Once on the login page, you could tamper with the URL before it was stored in the session. You could also then enter your username (or, even better, someone else's) and initiate the change password process.

When you chose to have Apple send you a link to change your password, the session you started with your original URL persisted via the data contained in the link. After you went through the process of changing your password and you finally authenticated, Apple sent down a small HTML file with a META-REFRESH tag that actually sent you where you originally wanted to go.

It is in this HTML where the badness happened. The original URL Apple stored in the session was being written here without being HTML encoded or properly validated. Apple did prevent you from specifying http://attackersite.com, but they did not validate against iphone.html"><SCRIPT>…</SCRIPT>.

The attack would have been as follows:
1. Tamper with the original URL and inject an XSS attack.
2. Enter someone else's username in the logon form, and click "Forgot Your Password".
3. Have Apple send the victim the password reset email.
4. Here is the kinda far fetched part: you need to hope/pray/socially engineer/somehow get the victim to go through the password change process, and authenticate.
5. Once they authenticate, you own their browser.

This attack is interesting to me for a number of reasons. First, it is a persistent XSS attack in a credential management system (ouch!). Second, the injection point is pre-auth, while the payload executes in the victim's browser post-auth. Third, it is very easy to target individual users using legitimate emails from Apple: no spoofing required!

Apple was very quick to fix the problem, and even gave us credit here.
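I don't know what Apple's actual fix looks like, but the general defense is straightforward: validate the stored URL against an allow-list of expected destinations and HTML-encode it before writing it into the refresh page. A minimal sketch of that idea (the allow-list and fallback URL are made up):

import html
from urllib.parse import urlparse

ALLOWED_HOSTS = {"developer.apple.com"}   # hypothetical allow-list

def safe_refresh_page(stored_url):
    """Validate and HTML-encode the session-stored URL before writing it out."""
    parsed = urlparse(stored_url)
    if parsed.scheme != "https" or parsed.netloc not in ALLOWED_HOSTS:
        stored_url = "https://developer.apple.com/"        # safe fallback
    encoded = html.escape(stored_url, quote=True)          # neutralizes "><SCRIPT>...
    return f'<meta http-equiv="refresh" content="0;url={encoded}">'

print(safe_refresh_page('iphone.html"><SCRIPT>alert(1)</SCRIPT>'))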

Good job Apple!

-Schmoilito

2 comments
