Category Archives: Security Management
Yesterday Apple unveiled the latest versions of OS X (code-named Mavericks) and iOS 7 at the annual Worldwide Developers Conference (WWDC). The general focus was on end-user features and items of interest to developers, but several items appeared to have an impact on security in one way or another.
The beta versions of both operating systems were also released to developers yesterday, but I haven’t seen them yet (and once I do, I’d probably be bound by NDA not to talk much about them). So before I go that route (hopefully later this week!), I thought it would be useful to quickly review some of the items I found potentially significant. I’ll briefly describe the features, then summarize some of the security questions I have at the end. Also, whenever I talk about “early reports,” I’m referring to information not specifically announced by Apple, but which has leaked through screenshots and other reports.
OS X “Mavericks”
Though my focus at Intrepidus has generally been on iOS, I do use OS X on a daily basis, and a few items here seemed worthy of mention (plus, they also pertain to iOS).
- Passwords in the Cloud — a secure vault, stored on iCloud, for website logins, credit card numbers, wi-fi passwords, etc. This was cited as using AES-256 encryption, pushed to trusted devices. When used within Safari, it can even auto-suggest random, secure passwords as you create web-based accounts.
- Notifications in the lock screen — when the computer is locked or asleep, notifications (including push notifications) can queue up, and will be displayed to the user the next time they wake up the computer, while the screen is still locked.
- The map application can send directions to an iPhone, but how this works wasn’t explained. My speculation is it’s an iCloud document, just like you can send Passbook passes from Safari directly to your iOS devices.
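Safari’s password auto-suggestion presumably boils down to drawing characters from a cryptographically secure random source. Here is a minimal sketch of that idea; the alphabet and default length are my own illustrative choices, not Apple’s actual implementation:

```python
import secrets
import string

# Hypothetical character set; Apple's real alphabet is unknown.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def suggest_password(length=16):
    """Draw each character independently from the OS CSPRNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(suggest_password())  # different every run
```

The key point is using a CSPRNG (`secrets`) rather than a general-purpose random generator, so suggested passwords are unpredictable.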
iOS 7
This was the big change. So big, they repeatedly referred to it as “the biggest change to iOS since the introduction of the iPhone.” Clearly, there have been big changes in the interface design, but several new features were introduced as well.
- AirDrop — iOS devices can now share information directly with nearby friends over peer-to-peer Wi-Fi. This was introduced in OS X Lion, and doesn’t require actually being on the same Wi-Fi network.
- Notification center on lock screen — similar to the new feature in Mavericks
- Control Center — provides an easy way to toggle features like Wi-Fi, Airplane mode, and Do Not Disturb, by simply swiping up from the bottom of the screen. This also allows quick access to four applications: Flashlight, Timers, Calculator, and Camera.
- Better multitasking — applications may now actually remain in the background, with the operating system using some careful monitoring and management to reduce the cycles they use to the bare minimum. This also provides a facility called “push trigger,” where an application in the background can actually immediately act on data received in a push notification.
- Safari: iCloud keychain and parental controls — I don’t have any idea what the parental controls would do, but if it provides a way to blacklist and/or whitelist websites, this could be somewhat useful in corporate settings. And, of course, the iCloud keychain (described above for Mavericks) is a major new feature.
- App store automatic updates — this is a good/bad thing, in my mind. People certainly want to stop having to do big updates of many apps every week or two…but sometimes a new version of an app may be buggy, and users might not want to upgrade immediately. Also, corporations may want to review apps before they’re updated, to ensure that new features don’t change the risk profile the app poses to their enterprise.
- Activation Lock — this new feature allows a user to configure an iOS device such that if it’s been remotely wiped (because it was lost or stolen), then the device cannot be re-activated until the original iCloud credentials are entered. This should provide some additional deterrence against theft, at least, once the feature becomes widespread and well understood.
These keynotes always focus on only a few features, and there are always several dozen other features that don’t get described in detail. In this case, two screens full of features were shown during the keynote, including several that appear to have relevance to security or corporate users:
- Enterprise single sign on — definitely interesting
- Per-app VPNs — would be very interesting if each app could be assigned to an arbitrary VPN
- Streamline MDM enrollment — no idea what this could mean, since (for the end user) it’s already pretty simple
- App store volume purchase — this has been a complicated endeavor since it was first introduced, so changes here could be significant
- Managed app configuration — this might be similar to application profiles in the OS X profile manager (which are an outgrowth of the old MCX system in pre-Lion OS X)
- Scan to acquire passbook passes — probably a built-in QR scanner
- iBeacons — Bluetooth Low Energy location
- Automatic configuration — possibly the aforementioned app configuration
- Barcode scanning — may confirm the passbook assumption
- Data protection by default — finally, all apps may have the additional “encrypted when device is locked” protection
Finally, some interesting bits have already been seen in screenshots on the web:
- Integration of Vimeo and Flickr accounts for share sheets (similar to existing Twitter and Facebook integration)
- Separate iCloud security panel, including integrated two-factor authentication, a separate passcode for the iCloud keychain, and a toggle for “Keychain Recovery” subtitled “Restore passwords if you lose all your devices.”
So, here are the security questions these announcements leave me with:
- How are passwords in the cloud stored, and does anyone else have access to the data (for example, if you forget your key)?
- Can we control what notifications appear on the lock screen? For example, allow Twitter, but disallow mail, while allowing both Twitter and email when the device is unlocked?
- Does AirDrop on iOS introduce any new problems? Can strangers try to push data to you while in public, even if you’re not logged into a public Wi-Fi? Could that lead to a phishing vector (for example, sharing a malicious configuration profile over AirDrop)?
- Can you change the applications available for quick-launch in the Control Center? Early reports indicate that the Control Center may be enabled for use in the lock screen, and if so, how does that affect apps which encrypt their data?
- How much can an application do when woken up by a push trigger? Could an attacker in control of a malicious app and its push server remotely enable the device microphone, for example? Can this be done while the device is locked?
- Can automatic app updates be configured, for example, to wait a week after release prior to being applied? Can the feature be disabled altogether? Or better yet, can certain apps be flagged for manual updating only?
- For activation lock, can the remote geolocation and messaging features of Find My iPhone remain intact even after the device has been wiped? Currently, users face a tough choice: wipe the device and give up any chance of locating it again, or leave it trackable and able to receive messages, but at risk of someone extracting sensitive information from it. It would be nice if one could wipe the device but still be able to track it down and send “If found, please call me for a reward” messages to the finder.
All in all, there appears to be a great deal of change coming in both OS X and, especially, iOS. This summer will keep us busy exploring all the new features and their security implications, and hopefully the final release will prove to be an improvement in many areas.
Two years ago at CanSecWest Charlie Miller, Alex Sotirov and Dino Dai Zovi declared there would be no more free bugs. One of the leading philosophies for the “no more free bugs” statement is that an organization paying an individual security researcher legitimizes that research and dramatically changes the organization’s posture on reported bugs. The paying organization is saying, “this has monetary value to us and we will pay you, not attack you, for finding bugs”. The researcher is incentivized because they get money and have a known, legitimate, working relationship with the organization paying the bug bounty. Fast forward to two years later. A lot of discussion has happened regarding bug bounties in the public eye. And a lot of money has been paid for security bugs.
The concept of a bug bounty is not new, and many famous hackers have offered them over the years. Donald Knuth probably has one of the oldest, and most prestigious, bug bounty programs in existence. Still, the idea of someone who writes software offering money, even $1, for a bug is rare. Two years on from that CanSecWest statement, Google has a web bug bounty program and a browser bug bounty program. Mozilla has a bug bounty program. ZDI also has a prominent bug bounty program (they run Pwn2Own). The experiment on bug bounties is running full steam at this point in the information security community.
Looking at the list of recent rewards from the Google Chrome Releases blog and seeing all of the $ signs next to security bugs makes me happy. I don’t feel insulted when I get paid to report bugs. I do think getting Google dollars for hard research work is gratifying. This leads me to the conclusion that these kinds of programs “work” at a fundamental level. How well they work is a discussion for another time. If you had $100K to augment your security budget, every dollar spent in a bug bounty program would buy a lot of research for the money involved.
I am going to break a rule of good blogging and straight-away direct my readers to some background material with the promise of a quick summary in this post:
- Application Security Debt and Interest Rates – Chris Wysopal
- A Financial Model for Application Security Debt – Chris Wysopal
- Fix to Wysopal’s Application Security Debt Metric – New School of Information Security
I will now offer a quick summary here. In software development there is a concept of “technical debt”: knowingly digging a hole for yourself, as a developer, that you will need to fill in at some point. The concept works well as a rough mental model. Every developer knows where the ugly spots are in their code. If you don’t refactor aggressively and manage your technical debt, it can spiral out of control. Your “interest rates” are also a factor in this equation. The goal of the remaining posts is to hash out a metric that estimates the dollar impact of technical debt as it relates to security. I enjoyed reading these posts as a different take on how to relate the cost of software (in)security to project managers and those responsible for budgets. I think this is worthwhile, since security often gets ignored because its cost is misunderstood.
The truth is it all comes down to a very simple question: will my (in)security cost me more than real security will cost me? Can I afford the cost? What can I do about it? I have long been a proponent of metrics collection throughout the software development process to measure the effectiveness of software security efforts. Models to estimate cost are a great starting point, but without measurement and a real SDL you will never have a way to quantify the value of software security and its impact on your organization. The answer is to integrate software security into your software development life cycle. There are compelling arguments that a good SDL program has good ROI: Microsoft SDL: Return on Investment. From the Microsoft paper:
While tools should be part of the equation and can provide a force multiplier, no product can substitute for secure software development. An effective, structured approach to software security must include people—both experts and the larger development organization—a cultural shift toward security, tools where useful, the security processes to tie activities together, and metrics that allow for understanding and improvement.
That is very eloquently stated. I will offer a couple of closing thoughts now. The expense of insecurity can be greater than that of the project as a whole. And, there are intangible costs to insecurity, such as harm to your reputation. Mobile applications are often small appendages to larger efforts, but the mobile application is also quite often the “tip” of the iceberg that users experience and work with. Security vulnerabilities in mobile applications can really hurt.
edit: I wanted to offer one additional observation. Though tools can’t substitute for secure software development, they can open eyes and serve as a catalyst to full SDL adoption.
When I heard about Gawker getting compromised, I knew it was not going to be pretty, particularly with regard to their password database. Once again, the ugly warts of shared-secret authentication systems are brought to the headlines. We got our hands on a copy of the password database. For reasons only Gawker’s administrators know at this point, the database contains only traditional DES crypt-style hashes (yes, that DES). Ideally, every password for a web application user is stored using a random salt per password (at least 4 bytes for good measure) and a safe hash algorithm, like SHA1. That’s it. (See the end of this post for a better recommendation that should be considered the end goal.) Storing passwords securely is not difficult; you just have to know what to do and believe it is important enough to do it. What you are about to read would not be possible if this simple guideline had been followed.
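The bare-minimum scheme just described (a random per-password salt plus a safe hash) can be sketched in a few lines. The 8-byte salt and the salt-then-digest storage layout here are illustrative choices, not a standard:

```python
import hashlib
import os

SALT_LEN = 8  # comfortably above the 4-byte minimum mentioned above

def hash_password(password: str) -> bytes:
    """Return salt || SHA1(salt + password); store the whole blob."""
    salt = os.urandom(SALT_LEN)
    return salt + hashlib.sha1(salt + password.encode("utf-8")).digest()

def verify_password(password: str, stored: bytes) -> bool:
    """Recompute the digest using the stored salt and compare."""
    salt, digest = stored[:SALT_LEN], stored[SALT_LEN:]
    return hashlib.sha1(salt + password.encode("utf-8")).digest() == digest
```

Because each user gets a fresh salt, identical passwords no longer produce identical stored values, and precomputed attacks against the whole database at once stop working.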
There are a few lessons we can learn from this that I think are instructive for anyone that has to store user passwords and authenticate users:
- The top 100 passwords you NEVER want to allow your users to select.
- How to properly store passwords and how to construct a password policy.
First, the technical details of how we proceeded: I have about 14-16 cores I can dedicate to password cracking, spread across about six machines. They are all fairly beefy, but not crazy. The challenge this presents is parallelizing the whole process across the network. This is where John the Ripper with the MPI patch comes in. MPI allows John to distribute the work to a large number of disparate systems, which has the advantage that any old hardware can be plugged in for the task. Each cracking node runs a daemon or agent that communicates with one central node that coordinates the cracking efforts.
Setting all of this up requires a bit of effort, but is not the most difficult task. Once a john --test kicks off properly (as shown below), we are off to the races with our 14 cracking nodes.
mpiexec -machinefile machines -n 14 ./john --test
Benchmarking: Traditional DES [128/128 BS SSE2-16]... DONE
Many salts: 33099K c/s real, 35460K c/s virtual
Only one salt: 28179K c/s real, 30031K c/s virtual
Password cracking of this nature is an embarrassingly parallel problem. My cracking nodes were sending a few hundred bytes per second to each other to handle the overhead of coordinating the effort. In a couple of hours we were able to crack about 160,000 passwords:
166166 password hashes cracked, 581986 left
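Sanity-checking the arithmetic on that status line:

```python
# Figures straight from John's status line above.
cracked, remaining = 166166, 581986
total = cracked + remaining
print(f"{cracked}/{total} = {cracked / total:.1%} recovered")
# 166166/748152 = 22.2% recovered
```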
This means approximately 22% of the passwords were recovered in short order. All of the users that reuse passwords are now at severe risk. Additionally, time and time again, the #1 worst password is “123456” by a wide margin. Taking out my copy of the book “Perfect Passwords” and pulling up its top 500 worst passwords of all time shows the following passwords as the 11 worst:
Perfect Passwords Top 11
123456
password
12345678
1234
pussy
12345
dragon
qwerty
696969
mustang
letmein
OK, so I cheated: that is actually 11 passwords. Now, here are the top 11 from the Gawker dump (the number on the left is the number of occurrences):
Gawker Top 11
4162 123456
3332 password
1444 12345678
861 lifehack
765 qwerty
529 abc123
503 12345
471 monkey
439 111111
410 consumer
391 letmein
Notice that number 11 is exactly the same? And that numbers 1, 2, and 3 are exactly the same? This keeps happening… over and over. The old saying “creatures of habit” comes to mind. We are letting our users down by letting them pick these passwords. We are also letting our users down by not protecting their secrets better.
I could dig into the statistics and interesting features of these passwords (and there is a lot to dig into here), but the real lesson is that you should never store passwords in a format this easily cracked. If you have a pile of passwords stored like this, you need to run, not walk, to the design board and find a way to fix it ASAP. Also, consider disallowing users from picking any password in the Gawker password database. Every last one can be recovered; don’t let your users use a single one of them!
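A blacklist check like that is trivial to bolt onto account creation. The function names here are hypothetical; the idea is just set membership against a cracked dump:

```python
def load_banned(lines):
    """Build a banned-password set from a cracked dump, one per line."""
    return {line.strip() for line in lines if line.strip()}

def password_allowed(candidate: str, banned: set) -> bool:
    """Reject any candidate that appears in the dump."""
    return candidate not in banned

banned = load_banned(["123456\n", "password\n", "letmein\n"])
print(password_allowed("letmein", banned))      # False
print(password_allowed("tr0ub4dor&3", banned))  # True
```

In production you would feed `load_banned` the full recovered list (and likely a top-N worst-passwords list too) rather than the three-entry sample here.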
A few updates: WSJ has good coverage of this topic as well.
Also, our suggestion at the top for password storage is a bare-minimum recommendation. You should be using something like bcrypt, which is several orders of magnitude more difficult to crack. Another option is to roll your own, but you really should not. The key idea is you need something that takes a lot of time (for a computer): 100-300 milliseconds on your hardware, depending on the time vs. performance trade-offs you can tolerate. Even 10 ms is better than a few microseconds. There are several schemes to do this; they generally repeat multiple rounds of their core algorithm (something that really mixes the data up). The key idea is to make password recovery very slow by forcing an attacker to perform some expensive operation a variable number of times for just one candidate password attempt. Read this article on Unix crypt and then read this article to become even better informed.
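bcrypt itself is a third-party dependency in most languages, but Python’s standard library can illustrate the same slow-by-design idea with PBKDF2, which repeats an HMAC core a configurable number of times. The iteration count below is a placeholder you would tune until one hash lands in your 100-300 ms budget:

```python
import hashlib
import os
import time

def slow_hash(password: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Repeat HMAC-SHA256 many times so each guess costs real CPU time."""
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                               salt, iterations)

# Measure one hash; raise or lower `iterations` to hit your latency target.
salt = os.urandom(16)
start = time.perf_counter()
slow_hash("hunter2", salt)
print(f"one hash took {time.perf_counter() - start:.3f}s")
```

The same tuning logic applies to bcrypt’s cost factor or scrypt’s parameters: pick the largest value your login latency budget tolerates.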
That is it for now, thanks to fellow Intrepidus consultant Jason Ross for doing some of this footwork!
Answer: …at the end of this post.
There has been a great deal of buzz about “contactless shopping” being enabled in the next generation of cell phones here in the United States. Google will be including APIs for this in Android 2.3 “Gingerbread,” and rumor has it that it will be in the iPhone 5. The technology used is called “Near Field Communication” (NFC), which is an extension of ISO/IEC 14443 (proximity cards… like the badge you probably have to use to get into your office). On the techie side, these devices operate at 13.56 MHz and communicate via magnetic field induction, which should give a range of up to 10 or 20 centimeters… more on that later.
The main way we will probably see NFC used is to enable phones to interact with physical tags (passive) or readers (active) when the phone comes within that few-centimeter range. These passive tags could ask your phone to perform a task like launching a URL, sending an SMS message, storing a contact, or anything else you can communicate in a few kilobytes of data. If you tap on an active reader, it may try to use peer-to-peer mode and create a bidirectional communication channel. It might then have you interact with a custom application on your device, or even ask your device to send data back. The way NFC has been implemented in previous mobile phones, the phone’s NFC reader is always active unless the phone is in a standby or airplane-type mode.
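Those passive-tag payloads are NDEF records, and a “launch this URL” tag really does fit in a handful of bytes. As a sketch (based on my reading of the NFC Forum URI record type definition, so treat the details as an assumption), here is a single short-format well-known URI record built by hand:

```python
# Prefix code 0x01 abbreviates "http://www." per the NFC Forum URI RTD.
def ndef_uri_record(uri_tail: bytes, prefix_code: int = 0x01) -> bytes:
    """Build one short-format NDEF well-known URI record."""
    payload = bytes([prefix_code]) + uri_tail
    header = bytes([
        0xD1,          # MB, ME, SR flags set; TNF=0x01 (well-known type)
        0x01,          # type length: one byte
        len(payload),  # payload length (short-record form)
    ])
    return header + b"U" + payload  # record type "U" = URI

record = ndef_uri_record(b"example.com")  # encodes http://www.example.com
print(len(record), record.hex())
```

Sixteen bytes for a full URL is why even tiny, cheap tags can redirect a phone’s browser, which is exactly what makes rogue tags interesting.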
The wireless protocol itself is not encrypted, so the communication is susceptible to eavesdropping and replay attacks by other nearby devices. There has been discussion about how to add encryption, but this is not currently part of the standard. You can also introduce rogue tags and readers; however, there is an NFC Signature specification for NFC Data Exchange Format (NDEF) records which tries to address this issue. Unfortunately, the specification does not address the public key infrastructure (PKI) behind this or the certificate verification and revocation process. You may be interested in some real-world fun Collin Mulliner has had with passive tag spoofing and NFC device fuzzing.
So a large part of NFC security will be the range at which the device can be used. Immediately I wondered how much of the previous RFID extended-range research from people like Chris Paget would apply here. One of the key things to keep in mind is that the NFC spec operates at 13.56 MHz and in a slightly different way than the 900 MHz RFID protocol. The 900 MHz type of RFID communicates information using backscatter (and from tag to reader only). The NFC spec uses induction to modulate a signal, thereby communicating data back to the host. Because the NFC circuit needs to be powered, the read range is greatly reduced. The RF power that reaches the tag drops off by approximately the distance squared. The read range of the NFC spec is up to 10-20 cm, whereas the read range of 900 MHz spectrum RFID tags has been pushed to hundreds of meters. However, it is possible to eavesdrop on NFC communication at a greater distance. The distance depends on several factors (including the power transmitted by the NFC reader, the characteristics of the eavesdropping antenna, and the material between the eavesdropper and the legitimate transaction) but is on the order of 1 meter for passive tags.
Are we going to need faraday caged cell phone holsters to stop people from pulling out our credit card data when we’re packed tightly on a subway ride? Hopefully not, but that’s going to depend on how mobile applications and operating systems are written to handle NFC.
History Answer: George Santayana and the saying: “Those who cannot remember the past are condemned to repeat it.”
- benn, higb, and mxs
How trustworthy are mobile platforms and devices?
For the maintainers of corporate networks and those charged with protecting sensitive data on those networks, this is a very serious question. Corporate users are increasingly utilizing smartphones, tablets and other devices that facilitate a more mobile and connected workforce. Often these devices are the personal property of the user, yet users routinely use them for legitimate business purposes. Users want the convenience of owning one personal mobile device that can connect them to a wide variety of data sources and applications: corporate email, personal email, personal data, personal applications and business applications. Reconciling device ownership and trustworthiness is a very difficult task.
Ultimately, a business has a right to protect its proprietary and sensitive information, even if that information resides on a personal device belonging to the end user. If a user desires to consume business resources and store sensitive information using their phone, then the business has a right to implement reasonable safeguards to protect the data and resources. Though the user owns the device and has the right to do as they please with it, they do not own all of the data on the device if they allow sensitive corporate data onto it.
A reasonable analogy, which approximates this situation, is a personally owned briefcase. For the sake of example, let us pretend the user has access to physical keys and sensitive printed documents, and stores both inside their briefcase. The user owns the briefcase, but the business still owns what’s inside it. The business would expect the owner of the briefcase to take reasonable measures to protect the briefcase, and thus its contents. The same applies to virtually anything a person owns that can “contain” business assets. The problem is that this gets much murkier and more difficult to deal with in the digital environment, but the need exists and is clear.
Business policy is usually clear and requires that sensitive data must be protected at all times using reasonable measures that provide adequate security. And the subtext for that policy is to use a risk assessment to intelligently decide exactly what “adequate security” really is. Another consideration that goes into “adequate security” is what happens when an employee leaves an organization. What recourse does a business have to ensure their data is protected when the user is no longer an employee and the business cannot legitimately control the user’s device? One option is that the user’s personal device is wiped, no matter what, before it is put into an “unmanaged” mode where the business entity can no longer enforce policy on the device. Another option is to simply ask that the user be responsible and delete all of the data from their phone.
Now comes the fun part – the three major mobile platforms that users will typically wish to use. The three major platforms up for consideration in this article are: iOS (iPhone, iPad and iPod Touch), Android (tablets, smartphones) and BlackBerry (mostly phones). Some of these devices, depending on the version and configuration can meet what I consider “adequate” security. Others cannot. Now that the stage is set, the remainder of this article will explore the features of the three platforms.
First up for consideration are iOS devices (iOS does not stand for anything; it is a standalone name now). One of the major caveats to understand about iOS is that every jailbreak out there is exploiting vulnerabilities in the operating system or a critical application on the device. Most observers would call these “security holes”. That said, Apple continually closes the holes, and the security of the platform has improved steadily since its inception. Apple provides a robust set of security features for most iOS 4 devices. There are generally two approaches when allowing corporate data onto an iPhone or iPad: force the device to become “managed”, or only allow that data to be consumed inside of a trusted application, such as Good for iOS. Inside of a trusted application, ironclad control can be had over the data, as it never leaves the confines of that application’s sandbox, assuming your user has not jailbroken their device. In managed mode it is possible to control many aspects of the device, such as password policy, user accounts and even restrictions on how the device can be used. It is also possible to issue commands to the device, such as remote wipe. Mobile device management servers can also query the device for a variety of information. More on the available device management servers and their features will be covered in an upcoming blog post. The features on iOS 4 rival the features available on the BlackBerry platform. iOS 4 takes the platform well beyond the limited policies available in Exchange ActiveSync. Network administrators have all of the tools they need to enforce policy on a personal user’s device.
BlackBerry has built a strong reputation on the security and corporate friendliness of its devices. Administrators have long relied on the features in the BlackBerry to set up, and enforce, effective information security policy. There are minimal gaps, if any, in the features provided by the BlackBerry platform. The situation is thus very similar to that of iOS 4 (though BlackBerry has had these features for a much longer period). BlackBerry devices using a BlackBerry Enterprise Server can be remotely wiped, have password policy enforced, and have various other security settings configured remotely on a user’s personal device, which should allow most corporate information security policies to be enforced.
Android platform security, from a “managed security” standpoint, is not nearly as mature in terms of features provided by the OS. Only as recently as Android 2.2, the latest version of the Android OS, have features like remote wipe via Exchange ActiveSync (EAS) become available. There are some third-party solutions out there. The more open nature of the Android platform means that third parties can more readily support it and provide features such as remote wipe and password policy enforcement (these features are also available through EAS). There are also options such as Good for Android, which again keeps corporate data contained within a single trusted application.
The elephant in the room for iOS and Android devices is the words “jailbroken” and “rooted”. When jailbreaking or rooting a device, the goal is to circumvent or disable the pieces of the OS and platform that keep applications in a sandbox and running with limited privileges. It could be difficult, or even impossible, to enforce security policy on these devices, as the user can trivially circumvent the policy enforcement without the management servers being aware of it. The solution for this is much less clear and hinges on users being aware of the risks associated with jailbreaking and rooting, risks which they often do not understand when they jailbreak or root their devices.
Overall, mobile platforms are reaching a point where it is impossible to ignore that businesses, and users by extension, demand the ability to manage personal user devices (and corporate-owned devices). iOS 4 and BlackBerry provide a compelling and rich feature set for device management, and Android has turned a keen eye towards platform security. There are also many third-party options and applications that aim to solve the problem of device management. Ultimately, the features provided by the vendor and operating system determine what is possible and make it feasible to build solutions to this problem that businesses can rely upon.
Hopefully this blog post sets the stage for future discussions on this topic. We closely follow this space and welcome opinions, links to other resources, and personal experiences of dealing with this new era of Personal yet Managed mobile devices.
I had the opportunity to take a very interesting Android Forensics course last week offered by ViaForensics. They’ve compiled great research and have developed some excellent tools for Android devices which can be a huge time saver for forensics analysis. However, I had not realized the degree to which the tools and analysis in that space right now are dependent on being able to obtain root access on the device. This reminded me of a side discussion that went on at DefCon 18.
Many of the ways to obtain root on an Android device require physical access and, one can argue, pose a more limited threat to the end user. Some devices, like the Nexus One, even seem designed to be “developer” phones and allow users to flash the device with the firmware of their choosing. However, one of the first ways of obtaining root-level access on the EVO 4G was an APK downloaded from unrEVOked’s website. Just by installing the APK, the application was able to root the device. Clearly, most users would want the vulnerability this exploits to be patched before malicious APKs started to bundle it into their downloads (and it was patched). But at the same time, a number of users also want root access to their devices in order to customize them, investigate applications for privacy concerns, or test for other security issues. In the case of forensic analysis, root-level access is needed to do the job.
The security industry has normally been fairly open about working with vendors to fix major security issues. What seems to be happening here is a growing trend, even among legitimate researchers, to hold on to root-level exploits rather than reveal them. To some degree, maybe this is nothing new. However, I feel that if these attacks were found on standard desktop or server operating systems, the community would largely support alerting the developer and getting a patch out to end users. These vulnerabilities would be seen as privilege escalation attacks and would need to be locked down. I don’t think it’s the same when it comes to closed or restricted devices. This could be an interesting discussion as more locked-down devices are released.
And now, with the Library of Congress weighing in on jailbreaking through the DMCA exemptions, could the price-to-earnings ratio of a smartphone jailbreak skyrocket? Example: http://www.jailbreakme.com/faq.html If we are in the era of no more free bugs, what will be more lucrative for exploit developers and the budding entrepreneur? Giving away a free tool? Charging for a jailbreak app that they hope no one else reverses and puts out as a cheaper tool? Or will the best dollar offer come from private exploit packs or organizations that intend to weaponize the vulnerability? Will we see the day when a smartphone exploit can buy you a gold grill faster than an IIS/IE8 exploit?
Can you tell if a host is remotely infected just by a single HTTP request? For some malware the answer is yes.
By now, I think our readers are pretty familiar with PhishMe. As you can imagine, we see a lot of hits to PhishMe from a variety of browsers. And even better, we see a lot of hits to PhishMe from browsers whose users are likely to click on things. Each time a user requests a website, the user's browser sends a "user-agent" string to the web server as part of the request. A simple user-agent string looks like:
Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)
Here's a quick breakdown of what this string tells us. The Mozilla/4.0 portion indicates a Mozilla-compatible browser. This user is running Internet Explorer 7.0 on Windows NT 5.1 (Windows XP). You can check your user-agent here.
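As a rough illustration (not a production parser), the browser and OS tokens can be pulled out of a classic MSIE-style string with a couple of regular expressions:

```python
import re

def parse_ua(ua):
    """Naively extract the MSIE version and Windows NT version
    from a classic MSIE-style user-agent string."""
    msie = re.search(r"MSIE (\d+\.\d+)", ua)
    winnt = re.search(r"Windows NT (\d+\.\d+)", ua)
    return {
        "msie": msie.group(1) if msie else None,
        "windows_nt": winnt.group(1) if winnt else None,
    }

ua = "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)"
print(parse_ua(ua))  # {'msie': '7.0', 'windows_nt': '5.1'}
```

Real user-agent strings are far messier than this, so anything beyond a quick tally should use a dedicated parsing library.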
Now for Internet Explorer, it’s pretty easy to append information to this user-agent string by editing the registry. You will typically see a number of .NET related items coming from a normal user-agent header on a Windows system.
Where it gets interesting is when we see user-agents like these next ones. It seems that some viruses and malware (or "potentially unwanted software") insert their name or a token into the user-agent string. Here are some examples we found:
Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; AntivirXP08; .NET CLR 1.1.4322)
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; PeoplePal 7.0; .NET CLR 2.0.50727)
Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; FunWebProducts; .NET CLR 1.1.4322; .NET CLR 2.0.50727)
If the malware appends a token to the user-agent string, that token could be used to track the user from site to site, or to trigger certain behavior on malicious websites. We identified several pieces of potentially unwanted software and tallied the number of infected users using PhishMe. The graph below shows the most common pieces of malware found in user-agent strings:
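The tallying itself is simple string matching. A minimal sketch, using a hypothetical token list drawn from the example strings above (our real list was longer):

```python
from collections import Counter

# Hypothetical watch list based on the example strings above
MALWARE_TOKENS = ["AntivirXP08", "PeoplePal", "FunWebProducts"]

def infected_tokens(ua):
    """Return the known malware tokens present in a user-agent string."""
    return [t for t in MALWARE_TOKENS if t in ua]

# Sample log lines standing in for real PhishMe traffic
logs = [
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; AntivirXP08; .NET CLR 1.1.4322)",
    "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; PeoplePal 7.0)",
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)",
]

tally = Counter(t for ua in logs for t in infected_tokens(ua))
print(tally)  # Counter({'AntivirXP08': 1, 'PeoplePal': 1})
```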
We looked at IE 6, 7, and 8. Using this total number of "infected" users, we broke the infections down by browser version and divided by the total number of users running each version to get the percentage of each version's population that is infected. As it turns out, the proportion of infections is pretty similar across all IE versions. Isn't IE 8 supposed to protect users much better than IE 6? This is a bit of a surprise, but it reinforces something we've known about the current state of attacks: you can have strong software controls, but security still depends just as much on the user operating the software safely. Even with a browser that is relatively hardened against threats, users must know how to identify sites with malware and phishing schemes in order to stay safe. Patching and updates are important, but so is user awareness.
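The per-version rate is a straightforward ratio. With made-up counts (the real numbers come from our logs), the calculation looks like this:

```python
# Hypothetical user counts per IE version; real figures come from PhishMe logs
totals   = {"MSIE 6.0": 400, "MSIE 7.0": 1200, "MSIE 8.0": 900}
infected = {"MSIE 6.0": 12,  "MSIE 7.0": 35,   "MSIE 8.0": 27}

# Percentage of each version's population that carries a malware token
rates = {v: 100.0 * infected[v] / totals[v] for v in totals}
for version, pct in sorted(rates.items()):
    print(f"{version}: {pct:.1f}% infected")
```

With these illustrative numbers, each version lands near 3%, mirroring the similar infection rates we observed across IE 6, 7, and 8.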
PhishMe clients can contact our support team for an analysis of your user base.
- BES-managed BlackBerry application that pushes data over the carrier IP network
- BES-managed BlackBerry application that can use the WiFi radio in the device
- BIS BlackBerry where the end user gets to grant security permissions, with data over the carrier IP network
- BIS BlackBerry where the application can use the carrier network
- BIS/BES BlackBerry that can do its authentication via the carrier's LDAP/RADIUS using a reverse IP lookup
All of these can dramatically change the scope and type of testing we do.
The application security rights management is, in a word, awful. Most applications request rights to portions of the device they don't need, most request cross-application-communication rights they don't need, and quite a few want location data when they don't really need it. I can see why the enterprise IT manager is concerned about letting employee-managed BIS RIM devices into their environment. It's a mess, and it WILL lead to compromise of sensitive data if RIM doesn't do something to fix it. The user needs a better way to make informed judgment calls on application rights management, and RIM needs to audit and remove applications from App World that request egregious permissions.
More about this here:
From Blackberry’s blog: IT Managers: Embracing Personal Employee Smartphones in the Enterprise
So the real problem is all the unmanaged applications... more about that in Part 2.
RIM Security: Application Rights, what a mess – Part 2
Or rather, experiencing the consequences... that can inspire change. A perfect example: most people I know who are serious and disciplined about regular system backups do it because they've been burned in the past. (I've been very good about it ever since I paid Ontrack $1,400 to recover an IBM Deathstar hard drive.)
How was your weekend? Mine was okay, except that I spent a good part of my Sunday helping a teenage family member re-image her laptop after it was infected by some variant of the classic "pay us money to clean the virus off your computer" scam (see the fake Security Essentials post here: http://blogs.technet.com/mmpc/archive/2010/02/24/if-it-calls-itself-security-essentials-2010-then-it-s-possibly-fake-innit.aspx ). This is nothing we are not all familiar with.
The fallen laptop:
Vista Home 32bit, running as Administrator, expired Norton suite.
The Ah-Ha moment for me:
She wasn't too upset about this. She needed a Word doc for homework, but she could hardly take a break from texting while I was trying to find out what other important things she needed from the laptop.
Pictures? Picasa and Facebook. Email? Gmail. Music? Already on her iPod. Docs? Maybe she will use google docs from now on. SSH and PGP keys? (yeah right!) For her, a laptop is just a bridge to the Internet. Who cares about what is on the laptop? It’s just a thing that gets you to the <cringe> cloud </cringe> Is recovering your computer from the system disc every six months just the new norm?
She will be entering the workforce and on your corporate network in 2014.