Wednesday, May 30, 2012

Zulu: The Warrior is Even Stronger Now

We have analyzed close to 100,000 URLs since we launched Zulu in January 2012. Zulu provides real-time analysis of web content to determine whether it is safe or malicious. Our goal is and always has been to make this free service easy to use and understand.

We have made significant improvements to Zulu at all levels - a more user-friendly web UI, deeper inspection, a faster back-end, improved checks, etc. Here's the low-down on all of the improvements that we've made to date.

Faster

The average time to analyze a URL is now under 5 seconds! That's right, it takes less than 5 seconds to look up the domain, download the content, run more than 20 checks on the URL, DNS records and page content, and then rank the submission as benign, suspicious or malicious.

Clear Reporting

In order to make Zulu accessible to everyone, we've simplified the reports and made them more intuitive. Each check now includes a clear explanation detailing exactly what is being analyzed. Hover your mouse over the name of each check to see the additional detail.

Each rule has a clear description detailing exactly what it looks for


Individual checks have different weightings and will of course deliver different results. The significance of each check on the overall score is now indicated by one of the following graphics: a green downward arrow indicates decreased risk, a blue dash shows no impact on overall risk, and shaded upward arrows, ranging from green to red and growing in size, represent the degree to which overall risk has been increased.

risk: decreased, none, low, medium, high


External elements

In addition to scanning the URL that is sent to Zulu, we now scan up to 10 external elements that may be found in an HTML page including JavaScript, CSS, IFRAMEs, EMBED tags, links, etc. We try to choose the external elements that are the most likely to contain malicious content.

External elements found on a page

These 10 elements are scanned in parallel with the main URL, ensuring only a minimal impact on overall processing time. The final score reflects the risk associated with the main URL and each external link. You can click on each link to get the full report for the external element. This is a critical change that ensures we catch even a single malicious link injected into an otherwise legitimate page.
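The idea of scanning external elements alongside the main URL can be sketched in a few lines. This is only an illustration, not Zulu's actual code: `scan_url()` is a hypothetical stand-in for the real per-URL analysis.

```python
# A minimal sketch of scanning external elements in parallel, assuming a
# hypothetical scan_url() that runs the checks and returns a risk score.
from concurrent.futures import ThreadPoolExecutor

def scan_url(url):
    # Placeholder for the real per-URL analysis (DNS, content, 20+ checks).
    return {"url": url, "score": 0}

def scan_page(main_url, external_urls, limit=10):
    """Scan the main URL and up to `limit` external elements concurrently."""
    targets = [main_url] + external_urls[:limit]
    with ThreadPoolExecutor(max_workers=len(targets)) as pool:
        reports = list(pool.map(scan_url, targets))
    # The overall score reflects the riskiest element found on the page.
    overall = max(r["score"] for r in reports)
    return overall, reports
```

Because the external fetches run concurrently, the slowest element, not the sum of all of them, bounds the added processing time.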

Feedback

A number of these improvements came from the feedback we received. Please don't hesitate to send an e-mail to the Zulu team, or send feedback, positive or negative, for individual Zulu submissions.
"Send us feedback" link

Zulu is free and easy to use, so give it a try.

Friday, May 25, 2012

Spotting malicious JavaScript in a page

While large blobs of obfuscated JavaScript at the top of a page are easy to spot, malicious JavaScript on hijacked sites is often well hidden. Some attackers go to great lengths to make their malicious code invisible to webmasters and security tools alike. In this post, I'll illustrate some of the places more commonly used to hide code, and how to spot them.

Source

Most malicious JavaScript is pulled from a domain other than that of the hijacked page. One of the first things I look for is the list of JavaScript files pulled from external domains. For example, hxxp://kelly-monaco.org/ contained a script pulled from inforaf.vot.pl that turned out to be malicious.

Malicious external JavaScript
A page may contain many external JavaScript sources, including frameworks (jQuery, Prototype, etc.), CDNs, statistics (Google Analytics, counters, etc.), widgets, and more.
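Listing the external script sources is the kind of check that fits in a short script. Here is a minimal sketch using only the Python standard library (again, an illustration, not the tooling we actually use):

```python
# Sketch: list <script src=...> values pulled from a host other than the
# page's own domain, using only the standard library.
from html.parser import HTMLParser
from urllib.parse import urlparse

class ScriptSources(HTMLParser):
    """Collect the src attribute of every SCRIPT tag on the page."""
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)

def external_scripts(page_url, html):
    """Return script sources hosted on a different domain than page_url."""
    page_host = urlparse(page_url).netloc
    parser = ScriptSources()
    parser.feed(html)
    return [s for s in parser.sources
            if urlparse(s).netloc and urlparse(s).netloc != page_host]
```

Each domain in the resulting list is a candidate for a closer look: is it a well-known framework CDN or statistics service, or something you have never heard of?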

Location

Another good clue is the location of the script tag on the page. Attackers are sometimes lazy and put the script tag at the very top or very bottom of the page. Always look at scripts placed before the opening HTML tag, or after the closing BODY tag.

http://www.china-crb.cn/
There are other places where a SCRIPT tag should not be found, for example, inside a TITLE tag.
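The position check is easy to automate. A rough sketch, assuming you already have the raw HTML as a string:

```python
# Sketch: flag SCRIPT tags that appear before the opening <html> tag or
# after the closing </body> tag, two spots where lazily injected code lands.
import re

def misplaced_scripts(html):
    """Return (reason, offset) pairs for scripts in suspicious locations."""
    lower = html.lower()
    html_open = lower.find("<html")
    body_close = lower.rfind("</body>")
    findings = []
    for m in re.finditer(r"<script\b", lower):
        if html_open != -1 and m.start() < html_open:
            findings.append(("before <html>", m.start()))
        elif body_close != -1 and m.start() > body_close:
            findings.append(("after </body>", m.start()))
    return findings
```

A hit here is not proof of infection, since some sites simply have sloppy markup, but it narrows down where to look first.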

Coding Style

When I analyze a page, I also look for inconsistent coding styles. For example, if a webmaster uses double quotes around tag attributes, I look for a SCRIPT tag with single quotes, or no quotes at all. Similarly, the webmaster might consistently use the type and language attributes. Any SCRIPT tag that deviates from the page's style raises a red flag.
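The quoting-style heuristic can also be sketched in code. This toy version only compares attribute quoting, one of several style signals mentioned above:

```python
# Sketch: compare the attribute quoting style of each SCRIPT tag against
# the dominant style on the page; a lone outlier deserves a closer look.
import re
from collections import Counter

def odd_quote_scripts(html):
    """Return the opening SCRIPT tags whose quoting differs from the majority."""
    tags = re.findall(r"<script\b[^>]*>", html, re.IGNORECASE)

    def style(tag):
        if '="' in tag:
            return "double"
        if "='" in tag:
            return "single"
        return "none"

    styles = [style(t) for t in tags]
    if not styles:
        return []
    common = Counter(styles).most_common(1)[0][0]
    return [t for t, s in zip(tags, styles) if s != common]
```

The same pattern extends to other style signals, such as whether the type and language attributes are present, or whether tag names are upper- or lowercase.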

Well hidden

Here are some examples of very well hidden pieces of JavaScript that I've encountered. In the first example, the website is using vBulletin, an open-source forum application. All vBulletin pages contain inline JavaScript to call vBulletin_init(). The attacker inserted his JavaScript between the original JavaScript command and the function call:
http://theexerciseblog.com/
A malicious piece of code can also be inserted inside an existing legitimate JavaScript file on a hijacked site:

Malicious code appended to AC_RunActiveContent.js

An even trickier spot to identify, though one more rarely used, is the insertion of malicious JavaScript inside a CSS style using expression():
Malicious JavaScript in a style-sheet

These techniques are often combined with other tricks to deliver the code only to specific users: IP blacklisting to block security scanners, cookies to prevent viewing the page twice, checking the Referer header to show the malicious code only to users arriving from specific sites, etc. Security scanners often have to access the same page in many different ways to ensure that it is safe.
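To work around such cloaking, a scanner replays the same URL under several request profiles and compares the responses. A minimal sketch of building those profiles (the User-Agent string and referer list are just example values):

```python
# Sketch: build request profiles to defeat simple cloaking. A scanner would
# fetch the URL once per profile and diff the responses for injected code.
def cloaking_profiles(url, referers=("https://www.google.com/",)):
    """Return a list of {url, headers} dicts, one per fetch to perform."""
    profiles = [{"url": url, "headers": {}}]  # plain direct fetch
    for ref in referers:
        # Pretend the visitor arrived from a search engine or social site.
        profiles.append({"url": url, "headers": {"Referer": ref}})
    # Pretend to be a typical victim browser rather than a scanner.
    ie_agent = "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)"
    profiles.append({"url": url, "headers": {"User-Agent": ie_agent}})
    return profiles
```

A real scanner would also vary cookies and source IP addresses, since attackers frequently serve clean content to repeat visitors and to known scanner networks.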

Wednesday, May 23, 2012

Why I joined the Zscaler ThreatLabZ team

As a new member of ThreatLabZ, I wanted to take this opportunity to describe why I joined the Zscaler team.

Like many of you, I attended the RSA conference in San Francisco last February. At RSA I had the good fortune to run into Michael Sutton, the head of the Zscaler ThreatLabZ, and was introduced to a few members of the research team. As Principal Security Researcher at the company I worked for at the time, I was familiar with Zscaler's products. I regularly followed their blog and made use of their free tools and APIs. After discussing Zscaler’s unique approach with Michael and the ThreatLabZ team, I was eager to learn more about Zscaler technology and its research team to see if it was a potential match for a career move into something exciting and different.

I focused on several key factors to discover more about Zscaler:

  1. The undeniable benefits of a security solution reliant on Software/Security as a Service (SaaS);
  2. Verification that the underlying infrastructure supporting the product was built to handle the performance implications of inbound and outbound content analysis in the cloud;
  3. Research accessibility to Big Data and the ability to turn Big Data into useful contextual information that could be leveraged to automatically feed back into the product to constantly learn and defend against new threats;
  4. The management’s respect for the research team and support for the team to get their job done, research new threats, and help introduce new features into the product to combat those threats.

SaaS has always fascinated me, and it has been a desire of mine for some time to join a company taking a pure SaaS approach. One big reason is that pushing security intelligence to the cloud is far more seamless than pushing it to hundreds or even thousands of deployed on-premise appliances or endpoint desktop clients. Since attackers are constantly changing their methods and techniques, it's refreshing to be able to adapt just as quickly to combat the ever-changing threat landscape.

As a researcher and a coder implementing security detection capabilities, one thing I know is that everything has to work fast; inspection has to be transparent to the end user. The only way that will happen is if the performance of the underlying infrastructure is finely tuned and the code is solid. Thus I spent the next few weeks discussing Zscaler’s technology with the developers, product managers, Vice President of Cloud Operations, CTO, and CEO to understand what they had built. I left satisfied that the foundation could handle the complexities of content inspection the right way.

After a few more discussions and a glimpse into the amount of data available to me as a researcher to cross-correlate and link attacks, I was hooked. Management respected and understood the benefits of the research team, and I was excited about the number of potential research opportunities that lay ahead at Zscaler.

I was so impressed by Zscaler that I decided their vision was one I wanted to be a part of and I wanted to help shape the future of their product and research. The numerous free public tools released by the ThreatLabZ team, the expertise found within the team itself, and the fundamental visionary technology that supported the pure SaaS solution are the main reasons I joined the Zscaler ThreatLabZ team.

I hope to become a regular poster to the Zscaler blog, offering even more insight into the analysis of Internet threats.

If you want to reach me directly, you can follow my personal Twitter feed at @StephanChenette or catch me at one of the numerous lunch and learn sessions that Zscaler presents around the country and around the world.

Thanks,

Stephan Chenette
Senior Security Researcher
Zscaler ThreatLabZ

Friday, May 18, 2012

Follow up on the top blacklisted sites

Earlier this week, I researched the top websites blacklisted by Google. I've looked at more of these websites over the last three days to better understand the most common attacks.

The findings are quite disappointing. First, most infected websites have still not been cleaned up after three days. Webmasters should see a huge drop in their traffic: Firefox, Safari and Chrome all use the Google Safe Browsing blacklist, so only Internet Explorer and Opera users reach these sites without a warning. This also means that the owners of these very popular websites have not invested in keeping their sites safe, or at least in solutions to detect the blacklisting of their pages, traffic anomalies, or malicious content.

Second, the injected IFRAMEs or JavaScript redirect to the same types of malicious pages that we've seen for years now, such as fake AV scareware, fake Flash updates, survey scams, etc. That means that users are still not educated enough to recognize fake software updates and still fall for the same old tricks.

These users won't get much help from their antivirus either. The detection rate of new malicious executables is very low, usually below 25%.

Here are some of the very recognizable malicious landing pages.

Fake Flash Updates

This is exactly the same attack we described in October 2011 (Naked Emma Watson video). A website that looks a lot like YouTube claims that Flash must be upgraded to watch the sex video of some celebrity.

Fake Youtube page


Warning about Flash upgrade


Only 9 AV vendors out of 42 detect the fake Flash upgrade executable as malicious

Fake AV

This one looks different than the usual fake AV pages, as it is just an image with no animation.

Fake AV page
Detected by 12 AV engines out of 42.

Survey scam

A common way for spammers to profit from users is to get them to do "free" trials in order to earn a gift (or so they claim). This type of scam is very, very common. It's amazing that it still works.

In this example, the spammer uses a fake Youtube page to make the scam appear more legitimate.

Survey scam


I also found out that while Google Safe Browsing might block the infected site, it often does not block the actual malicious domain injected into the page in the form of a malicious IFRAME or JavaScript redirect. This means that other websites infected with the same piece of malware could be missed by Google Safe Browsing and still impact other users.

For webmasters

There are many ways to know when your website is blacklisted. For example, you can register a free account with Google Webmaster Tools. Then look under Health > Malware for any indication of blacklisting. You can also check the Google Safe Browsing diagnostic page for your domain at http://www.google.com/safebrowsing/diagnostic?site=mysite.com. This will tell you not only if your domain is blocked, but also if a portion of your site is compromised before you actually get blacklisted. Finally, you can do some automated checks with the Google Safe Browsing Lookup API. We have released libraries to interact with the API using Perl, Python and Ruby.
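The automated check boils down to one HTTP GET per URL. Here is a sketch of building the request against the Safe Browsing Lookup API; the endpoint and parameters follow Google's documentation for the Lookup API at the time of writing, and the client name and API key are placeholders you must replace with your own:

```python
# Sketch: build a Google Safe Browsing Lookup API request URL for one site.
# Fetching this URL returns the threat verdict (or 204 if the URL is clean).
from urllib.parse import urlencode

def lookup_request(url, api_key, client="myapp", appver="1.0", pver="3.0"):
    """Return the full Lookup API GET URL for a single URL to check."""
    params = urlencode({"client": client, "apikey": api_key,
                        "appver": appver, "pver": pver, "url": url})
    return "https://sb-ssl.google.com/safebrowsing/api/lookup?" + params
```

Our released Perl, Python and Ruby libraries wrap this (and the higher-volume Safe Browsing API v2) so that you do not have to handle the request and response formats yourself.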



Monday, May 14, 2012

A look at the top websites blacklisted

Google Safe Browsing is the most popular security blacklist in use. It is leveraged by Firefox, Safari and Google Chrome. As such, being blacklisted by Google is a big deal - users of these three browsers are warned not to visit the sites, and Google puts warnings in its search results.

I've run Google Safe Browsing against the top 1 million (based on number of visits) websites according to Alexa. 621 of them are blacklisted by Google Safe Browsing. I've looked at the most popular to understand why they are considered malicious. Here is what I found for the most popular blacklisted sites:


Rank     Domain                 Threat                Comment
6,239    subtitleseeker.com     Malicious JavaScript  Hijacked
18,784   financereports.co      Scam                  Work from home scam
35,610   tryteens.com           PDF malware           Porn
41,560   iranact.co             Malicious JavaScript  Hijacked
47,016   creativebookmark.com   Fake AV               Hijacked
52,409   ffupdate.org           Adware download
52,431   vegweb.com             Malicious JavaScript  Hijacked
53,902   delgets.com            Malicious JavaScript  Hijacked
78,202   totalpad.com           Fake AV               Hijacked
81,403   kvfan.net              Malicious JavaScript  Hijacked
82,344   hgk.biz                Malicious JavaScript  Hijacked
83,858   youngthroats.com       Malicious IFRAME      Porn
125,305  metro-ads.co.in        Malicious JavaScript  Hijacked
133,455  salescript.info        Malicious JavaScript  Hijacked

http://financereports.co
creativebookmark.com
Most of the top-ranked websites that have been blacklisted are not malicious by nature, but they have been hijacked. Malicious JavaScript, similar to the code we found on a French government website, or a malicious IFRAME is generally the culprit. It is interesting to notice that Google decided to blacklist the infected site, rather than just blocking the external domain hosting the malicious content.

I have also checked to see which country the blacklisted domain is hosted in. Here is the breakdown:


Most of the blacklisted sites are hosted in the US. Western Europe (especially Germany, France and the Netherlands) is number two, followed by China (8%).

There is a government website in this list: mdjjj.gov.cn. It loads malicious JavaScript from a third-party domain. The code is much more sophisticated than that on the other sites in this list. The JavaScript is obfuscated and broken into several files with a .jpeg extension. There is also a Flash exploit with a heap spray targeting Mac OS X, not unlike a Flash exploit we found on another Chinese site a few years ago. Windows users running Internet Explorer 6 or 7 get the old "iepeers.dll" exploit (a different version for each browser).


No site is safe from hijacking. Personal websites and top-10,000 sites are all likely to be infected at some point.