Wednesday, October 31, 2012

Enterprise Traffic Decrease In Wake of Hurricane Sandy

We have received several requests inquiring about changes in enterprise Internet activity in the regions impacted by Hurricane Sandy.  There is no question that the hurricane impacted the US in a variety of ways.  Zscaler provides security and policy enforcement for the web and email traffic of enterprise users, whether in the office or working remotely.  Given that role, we are able to compare regional enterprise traffic from last Monday/Tuesday ("normal") with this Monday/Tuesday (impacted by the hurricane).  Enterprise web traffic can be seen as one measure of economic productivity.

Region: Washington D.C.
Zscaler has regional nodes in and around the Washington D.C. area to service regional customers and road-warriors.  The following chart details the transactions per hour from last week (in blue) compared with those from this week (in red):


The two blue spikes on the chart correspond to the workdays last Monday and Tuesday (beginning around 6AM and ending around 7PM local time).  The two small red humps correspond to the workday traffic observed this week during Hurricane Sandy.  It is immediately apparent that there was a significant decline in transactions from enterprise customers during the hurricane.  More specifically, we observed an average decline of 60.93% in transactions compared to the previous week.  Another view of this data can be seen in the following pie chart, showing the percentage breakdown of transactions across the dates being compared.

Region: New York City / New York
As with Washington D.C., Zscaler has multiple nodes in the New York area to support regional customers and road-warriors.  The following charts provide a comparison similar to the one shown for the Washington D.C. region.  The blue lines represent transactions serviced last Monday/Tuesday and the red lines those from this week, impacted by Sandy.  To provide a slightly different illustration, we have selected two New York nodes for a more granular comparison of traffic across the two weeks.



New York node 1 saw an average decrease of 58.14% from the previous week, and node 2 an average decrease of 52.31%.

In the wake of Hurricane Sandy, we observed specific regional declines in traffic from our enterprise customers.  Across the regions / nodes analyzed here, there was an average decline of 57.13% in web transactions (the mean of the 60.93%, 58.14% and 52.31% declines observed above) from last week (normal customer activity) to this week (hurricane impact).  Enterprise web traffic can be seen as one measure of economic productivity.

Tuesday, October 30, 2012

.gov redirections leading to spam: how did we get there?

You may have seen reports of an increase in .gov websites redirecting to spam or scam sites. This issue is the same as one we reported almost two years ago. Many .gov websites have open redirections that can be exploited by spammers. The example we detailed in February 2011 is actually still online, and so is the cross-site scripting (XSS) vulnerability that we found on the same page.

How did we get there?

Many websites want to warn users when they are leaving their domains. In the case of a .gov website, they want to make clear that an external domain does not carry the guarantee of being an official government website. This warning is supposed to add security for end users. Facebook also uses a warning page because links can be added by anybody and Facebook cannot guarantee the safety of those URLs.

Why are these warnings being exploited?

Spammers want clean URLs to bypass URL filters and spam filters. They therefore use legitimate .gov websites to bypass security features. The warning may actually make the final URL look safer to the user by implying that it has been linked and vetted by the .gov website.

In the case of www.fmcsa.dot.gov (see our previous post), the user has only 10 seconds to read the warning before being redirected automatically. That's great for the spammer as no additional click is required to redirect the user to the spam/scam site.

These website owners think they are adding security, but this decision actually puts their users at risk. The assumption that leads to the abuse of open redirections is that the user seeing the warning has clicked on an external link on a page from the same .gov domain. Since the content on the .gov website is controlled, the external link must be safe. The website creators did not anticipate spammers sending users directly to the warning page.

The fix

The fix is actually easy - make sure the assumption holds. This means the warning page should verify that the redirection is legitimate - that the URL is one that has been approved.

There are a couple of ways of ensuring that the redirection URL has been vetted:

Referrer

One way, though not the most robust option, is to check the Referer header to ensure the user comes from the same domain (the header can be absent or spoofed, which is why this check is weak). This would stop users who clicked on a link in an e-mail, as well as spammers who use URL shorteners like bit.ly.
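
As a minimal sketch of this check (assuming a Flask application; the domain, route and parameter names are hypothetical), the warning page could refuse requests that do not originate from its own pages:

    # Hypothetical sketch of a Referer check on the warning page.
    # The domain, route and parameter names are invented.
    from urllib.parse import urlparse
    from flask import Flask, abort, request

    app = Flask(__name__)
    OUR_DOMAIN = "www.example.gov"  # hypothetical .gov domain

    @app.route("/redirect")
    def warning_page():
        referer = request.headers.get("Referer", "")
        # Only serve the warning if the click came from our own pages.
        # The Referer header can be absent or spoofed, which is why
        # this option is not the most robust.
        if urlparse(referer).hostname != OUR_DOMAIN:
            abort(403)
        target = request.args.get("page", "/")
        return "You are now leaving this site for: " + target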

Whitelist

Another option is to store all third-party URLs in a whitelist and check against the list before performing any redirection. This would prevent anybody from using the redirection with arbitrary URLs. The main downside to this approach is the overhead of maintaining the list every time an external URL is added to or removed from the website.
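
A minimal sketch of the whitelist lookup (the approved URLs below are invented for illustration):

    # Hypothetical sketch of a whitelist check performed before any
    # redirection; the approved URLs are invented for illustration.
    APPROVED_EXTERNAL_URLS = {
        "http://www.zscaler.com/",
        "http://research.zscaler.com/",
    }

    def is_approved(url: str) -> bool:
        # Exact-match lookup: anything not explicitly approved is
        # refused, so spammers cannot supply arbitrary URLs.
        return url in APPROVED_EXTERNAL_URLS

The lookup itself is trivial; the cost, as noted above, is keeping the list in sync with the site's content.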

Hash

The most common solution is to sign the URL. For example, replace http://www.fmcsa.dot.gov/redirect.asp?page=http://www.zscaler.com/ with http://www.fmcsa.dot.gov/redirect.asp?page=http://www.zscaler.com/&hash=THE_HASH where THE_HASH = hash("http://www.zscaler.com" + secret). The URL is hashed with a secret key to prevent its replacement by an arbitrary address. This method is used by Facebook, for example, to guarantee the authenticity of requests sent by Facebook to Facebook applications.
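
As a sketch of this scheme using Python's standard hmac module (the secret is a placeholder, and HMAC-SHA256 is used here instead of the bare hash(url + secret) above because HMAC resists length-extension attacks):

    # Hypothetical sketch of signed redirect URLs; the secret below is
    # a placeholder and must be kept server-side.
    import hashlib
    import hmac

    SECRET = b"server-side-secret"

    def sign(url: str) -> str:
        return hmac.new(SECRET, url.encode(), hashlib.sha256).hexdigest()

    def make_link(url: str) -> str:
        # Produces .../redirect.asp?page=<url>&hash=<signature>
        return ("http://www.fmcsa.dot.gov/redirect.asp?page=%s&hash=%s"
                % (url, sign(url)))

    def is_valid(url: str, presented_hash: str) -> bool:
        # compare_digest avoids leaking the signature through timing.
        return hmac.compare_digest(sign(url), presented_hash)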

The important takeaway for all webmasters is to understand that spammers will take advantage of any flaw in web applications. Be very careful with the assumptions you make.

Friday, October 19, 2012

Blackhole exploit kit v2 on the rise


It should come as no surprise that attackers are upgrading their Blackhole exploit kits to a new and more powerful version. With the launch of Blackhole Exploit Kit v2, an update is now available, and we are starting to see adoption of this latest version.

v2 differs from v1 in many ways. Numerous enhancements have been made to ensure that the exploit kit remains undetected in the wild.

Some of the key enhancements include:
  • The URL format is dynamic in nature. It does not follow a particular pattern, as the version 1.0 URLs did.
  • Executables delivered with malicious content are now also protected from multiple downloads, hindering repeated retrieval for analysis.

Heavy obfuscation of the code continues, as in the prior version. Like Blackhole Exploit Kit v1, v2 continues to target known vulnerabilities in Internet Explorer (IE), Adobe products and Java. A sample of raw Blackhole exploit kit v2 can be seen in the following recent infection:

Exploit Kit URL : hxxp://anygutterking.com/links/assure_numb_engineers.php

 

In the above screenshots, note the highly obfuscated JavaScript loaded in the browser.

Deobfuscating the code allows for easier analysis. While we leverage a proprietary tool built in-house for this purpose, you can also use tools like Malzilla and Jsunpack, or attempt to deobfuscate the code manually.
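
As a purely hypothetical illustration of the manual route: exploit kit landing pages often hide the real script as an array of character codes transformed by a simple key, so re-implementing the decoder in a few lines of Python avoids running the page in a browser. The data and key below are invented, not taken from the sample above:

    # Hypothetical illustration of manual deobfuscation; the encoded
    # array and key are invented, not taken from the sample above.
    encoded = [101, 112, 100, 118, 110, 102, 111, 117]
    key = 1
    decoded = "".join(chr(c - key) for c in encoded)
    print(decoded)  # -> "document"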

The deobfuscated code shown below is divided into two parts for better understanding:
  • Browser plugin/component detection
    The code shown in the screenshot is used to detect the different plug-ins and ActiveX components by scanning the browser's DOM. By identifying the versions of installed plugins/components, the exploit kit can target known vulnerabilities.
  • Attacking the vulnerabilities
    In this case, a well-known vulnerability (CVE-2006-4704) in the WMI Object Broker in Microsoft Visual Studio 2005 was targeted. For more information about this vulnerability, read our detailed blog post here. This vulnerability was also targeted by Blackhole v1 and by other exploit kits, such as the Incognito exploit kit.


At the end of the code, we see a redirect request to “hxxp://o.casasferiasacores.net/adobe/update_flash_player.exe”. This is a new addition to the exploit code released in this version. If the victim’s browser is patched and none of the vulnerabilities were exploited, this redirection gives the attacker one last chance to compromise the victim’s machine. The variable “end_redirect” highlighted in the above screenshot is invoked via setTimeout. After 60 seconds, the page is redirected to the aforementioned link, a fake Adobe Flash update page. This is a typical example of a drive-by download attack. Once redirected to this page, the user is prompted to download an .exe file labeled “update_flash_player.exe”.

Screenshot of the fake Adobe page hosted on the malicious domain:


The VirusTotal report shows that currently only 3 of 43 AV vendors flag this file as malicious. The ThreatExpert report for the same file came back as expected: it identified the file with the highest severity level, and the analysis indicators show the binary belongs to the ‘keylogger’ family. The file also connects to the Internet and downloads additional .exe files.

Screenshot of the ThreatExpert report:


Exploit kits are becoming smarter day by day, keeping security vendors on their toes in the effort to combat new attacks. Fortunately, the cloud infrastructure at Zscaler gives us the flexibility to quickly add new detections as needed.


With Blackhole Exploit Kit v1, we saw an increase in malicious domains hosting exploit kit URLs as the kit matured over time. With the latest version being more sophisticated, we expect to see even more rapid growth of Blackhole Exploit Kit v2.

Pradeep

Thursday, October 18, 2012

Hijacked websites by the numbers

Two weeks ago, I had the chance to give a presentation about the danger of hijacked websites (you can download the presentation in French here). I used the talk to highlight various points which illustrate the extent of the problem.

Some numbers

70% of malicious links were found on hijacked websites in 2011 (Sophos, page 39). In 2012, Google is finding 9,500 new malicious websites every day, mostly hijacked sites (Google, 2012). The Blackhole exploit kit alone is estimated to be present on several million websites per year (AVG, weekly count).

Nikjju, a web-based worm that used SQL injection to spread, infected about 200,000 websites. Lizamoon, which started propagating in 2010, infected about 1.5 million websites, also via SQL injection.


High profile victims

Since the hijacking of websites is mostly automated, websites of all types are getting compromised. The list of hijacked victims includes many high-profile sites and governmental websites from all over the world, including the United States.

Vulnerable software

The attack surface of a website is quite large: an attacker can target the CMS, its plugins, administration tools (PHPMyAdmin, Plesk, cPanel - tools which should not be publicly accessible), the web server, the FTP server, the DNS server, etc.

I looked at the number of CVEs issued in 2012 for the most popular software platforms in these areas:
  • WordPress: 14 CVEs for the core, 42 for extensions, including security extensions that are supposed to make WordPress safer.
  • Joomla: 7 in core
  • Drupal: 20+ in core
  • PHPMyAdmin: 5 - GemNet, a security company, was compromised through a vulnerability in PHPMyAdmin
  • cPanel: 50,000 compromised through one attack
  • Plesk: 1 CVE, but 50,000 websites compromised through it
  • Apache: 30+ CVEs (core and modules)
  • BIND: 6 CVEs
  • etc.
Some hosting companies were compromised as well: DreamHost (January 2012), ServerPro (February 2012), and WHMCS (May 2012; WHMCS provides billing and technical support software to smaller hosting companies).

In your mail box

If you want some examples of hijacked websites redirecting users to malicious pages, you can take a look at your inbox. Just this morning, I received four similar messages about a fictitious payment sent to me through Intuit:
Malicious e-mail
All the links and buttons in a given message point to the same URL on a hijacked site:
  • www.eyslerimaging.com/blog/wp-content/plugins/flickr-widget/iprprocsd.html (photography blog)
  • www.odiseya.net/wp-content/themes/twentyten/bewpinfr.html
  • user3.inet.vn/wp-content/plugins/iprprocsd.html
These pages redirect to the same malicious page, hxxp://navisiteseparation.net/detects/processing-details_requested.php, which runs a malicious Java applet. Unfortunately, I could not retrieve the content a second time for further analysis.

This is just one of the many spam campaigns that lead visitors to a malicious site.

Monday, October 8, 2012

Introducing ZAP

View a video walkthrough for ZAP

ZAP Home Page
After many months of hard work by the ThreatLabZ team, I'm very pleased to unveil ZAP (Zscaler Application Profiler). ZAP makes it easy for anyone to determine the risk posed by a given mobile application, either by looking up past results or, more importantly, by proactively scanning any iOS/Android app.

Why did we build ZAP? As an inline security solution inspecting web traffic, it's imperative that we're able to analyze not only traditional web traffic, but also web traffic sent by mobile applications. While we think of mobile apps as native software, in many ways they behave like custom web browsers, leveraging HTTP(S) for communication. We therefore started building ZAP as an internal resource to analyze mobile app traffic, but quickly realized that people are all too trusting of mobile apps downloaded from an 'official' app store. Leveraging ZAP, our research has shown that apps commonly expose privacy and security risks, from sending passwords in clear text to sharing personally identifiable information (PII) with third parties. That's why we're also releasing a public version of ZAP - to empower people to see this for themselves.

Search

The easiest way to leverage ZAP is to search through the historical results. Simply by typing the name of a mobile application in the search field, you can see if it has previously been analyzed. To further refine your search, you can additionally include the OS name (iOS or Android).

Sample ZAP Search Results
In the sample results above, you'll note that the application receives an overall score out of 100, with higher numbers representing a greater security/privacy risk. Details are also provided on the following four categories, which influence the overall score (a hypothetical sketch of how such scores might be combined follows the list):

  • Authentication - Username/password information sent in clear text or using weak encoding methods
  • Device Metadata Leakage - Transmission of device information such as the UDID (Unique Device Identifier), MAC address or details about the operating system
  • PII Leakage - Sharing personally identifiable information such as phone numbers, email addresses, mailing addresses, etc.
  • Exposed content - Communication with third parties such as advertisers and analytics firms
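
ZAP's actual scoring formula is not documented here, but as a hypothetical illustration, the overall score could be a weighted combination of the four category scores (the weights below are invented):

    # Hypothetical illustration of combining category scores (each on
    # a 0-100 scale) into one overall risk score out of 100. These
    # weights are invented and are not ZAP's actual formula.
    WEIGHTS = {
        "authentication": 0.4,
        "device_metadata": 0.2,
        "pii": 0.3,
        "exposed_content": 0.1,
    }

    def overall_score(scores):
        return sum(WEIGHTS[name] * value for name, value in scores.items())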

Scan

The true power of ZAP comes from its ability to empower anyone to capture and analyze the web traffic from a mobile application. To accomplish this, we leveraged an excellent web proxy known as mitmproxy, built a front end to interface with it, and created engines to automatically analyze the captured traffic and highlight security/privacy issues.
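
To give a flavor of what such an engine might look for, here is a hypothetical sketch written as an addon for a recent version of mitmproxy - not ZAP's actual implementation, and the field names it flags are illustrative:

    # Hypothetical mitmproxy addon: flag form fields that look like
    # credentials travelling over unencrypted HTTP. Field names are
    # illustrative; this is not ZAP's actual engine.
    from mitmproxy import http

    SUSPICIOUS_KEYS = {"password", "passwd", "pwd", "secret"}

    def request(flow: http.HTTPFlow) -> None:
        if flow.request.scheme != "http":
            return  # only unencrypted traffic is flagged here
        for key in flow.request.urlencoded_form:
            if key.lower() in SUSPICIOUS_KEYS:
                print("clear-text credential field '%s' sent to %s"
                      % (key, flow.request.host))

Running mitmproxy -s with a script like this, and pointing the phone's proxy settings at the machine running it, is enough to watch an app's traffic flow through the checks.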

Scanning an application is as simple as pointing your phone at ZAP and using the application that you want to analyze - that's it. View the video below for a detailed walkthrough of the scanning functionality; overall, it's a simple six-step process, as noted in the image below.

Video

The video below provides a detailed walkthrough of all ZAP functionality.



We look forward to hearing your feedback on how we can continue to improve ZAP, so please take it for a test drive and let us know what you think. There are many mobile apps that expose users to security/privacy risks, and to date, the app store gatekeepers aren't doing an adequate job of protecting end users from these threats. Using ZAP, you can help analyze the ever-growing list of mobile apps and reveal those that are putting users at risk.

Enjoy!

- michael