Friday, October 29, 2010

Halloween Likejacking Campaign

I've already described (Facebook) "likejacking" in a past blog post, and we mentioned a likejacking campaign in early October here. The latest one going around has the title:

"OMFG!! The 10 Most WEIRD Facts About HALLOWEEN! [SCARY!]!"


Currently the likejacked URLs are:
hxxp://www.thefberas.info/halloween2010
hxxp://www.weliketolike.net/5things/


The likejacking sites are both served from 174.137.168.4 (Webair).

What's interesting is a comment at the top of the HTML source of the likejack pages on both sites, which advertises:

"If you want to sell your pages contact <removed>@hotmail.co.uk"
(I removed the email address)

Presumably, the operators of this likejacking campaign are offering to promote (likejack) your page for you. Google searches reveal the same email address used in Facebook
application development forums ... I think it is time for a thorough code review of this individual's Facebook applications, especially in light of the recent Facebook application privacy breaches.

Exploitation using publicly available Base64 encode/decode code

Earlier, I blogged about malicious hidden iframes using publicly available Base64 encode/decode scripts. Recently, we have seen additional malicious JavaScript hosted on one website, using another publicly available Base64 encode/decode scheme. Here is the initial screenshot of the malicious JavaScript code:

If you look at the malicious code above, you will find many malicious JAR files loaded through applets, followed by a large chunk of random text inside a hidden ‘div’ tag. Anyone visiting this webpage will only see text labeled “Loading….”. Meanwhile, the malicious code downloads the various JAR files and may additionally download other malicious files. An interesting aspect of this code is the random text inside the ‘div’ tag. Initially, the purpose of the random text was unclear. I later identified another example of code using exactly the same ‘div’ tag, and at that point assumed that it wasn’t entirely random after all. Let’s open the source code of the “js.php” file and take a look:

The above code has been manually formatted for the purposes of this blog. Looking at it, it is clear that eval() is called on the output of a function named decodeBase64(), with HTML content passed as the function’s parameter. The ‘document.getElementById(‘page’).innerHTML’ expression retrieves the text that occurs between that element’s opening and closing tags. If we look at the first image of the actual exploit, we find that the ‘div’ tag has ‘page’ as its ID. This means the random text inside the ‘div’ tag is passed to decodeBase64(), and the decoded result to eval(). Let’s decode it ourselves by passing the random text as a parameter to the decodeBase64() function:
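As an aside, the decoding step itself is easy to reproduce outside the browser. Below is a minimal, defanged sketch in Node.js: the DOM is stubbed, the Base64 payload is benign, and the final eval() is deliberately omitted. The names mirror the pattern described above, but the code is my illustration, not the attacker's script.

```javascript
// Stand-in for the publicly available decodeBase64() routine the attacker reused.
function decodeBase64(input) {
  return Buffer.from(input, 'base64').toString('utf8');
}

// Stub of the DOM: the hidden <div id="page"> holding the "random" text.
// A real page would hold the encoded exploit here; this payload is harmless.
const document = {
  getElementById: function (id) {
    return { innerHTML: 'YWxlcnQoJ2RlY29kZWQnKTs=' }; // "alert('decoded');"
  }
};

// The pattern seen in js.php, minus the final eval():
const decoded = decodeBase64(document.getElementById('page').innerHTML);
console.log(decoded); // the script the page would have passed to eval()
```

Running this prints the hidden script as plain text, which is exactly what you want when analyzing such a page: decode, inspect, but never execute.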


Let’s pass the above script code to Malzilla for further decoding. It turns out that the decoded malicious code targets a few different vulnerabilities. Here is a short screenshot of the code:



I have also located this exact same piece of Base64 code elsewhere on the Internet. In fact, this encoding technique can be found on Google Code, as part of the hotot project. Here is a screenshot of the same piece of code:

This is another case where an attacker has taken advantage of publicly available code to encode a malicious payload. It also shows how easy it is to find various encoding techniques on the Internet and leverage them for malicious purposes. For the purposes of this post, I won’t go into the details of the malicious files downloaded.

That’s it for now.

Umesh

Thursday, October 28, 2010

Another 1.5 million Twitter links scanned

In March 2010, I analyzed about 1 million links taken from public tweets on Twitter. I showed that the number of malicious links was less than 1%.

I have scanned another 1.5 million links in the past 3 months from the Twitter public timeline (1,587,824 exactly). I analyzed these URLs and the server content to find how many of them lead to malicious pages by running them through the Zscaler cloud.

Before I go into the details, I'd like to make a few points:
  • links were taken over several months, but they were analyzed immediately
  • I gathered links from the public timeline; results for direct messages might be different
  • the links may not be intentionally malicious; the destination page could have been compromised

The state of the Twitter links

Top-10 domains in Twitter URLs

Bit.ly is still the leader in number of URLs on Twitter at 33% of all URLs, compared to only 5% for the number 2 spot (twitpic.com)! However, its market share has decreased, mainly because of the arrival of new URL shortening services from big names. Google, for example, arrives in the top-10 domains with goo.gl, a service only available since December 2009. If we add youtube.com and youtu.be, Google represents 5% of all URLs.
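For illustration, a domain breakdown like this boils down to tallying hostnames across the collected links. The sketch below shows the idea; the sample URLs are made up, not taken from the dataset.

```javascript
// Count how often each hostname appears in a list of tweeted links,
// then rank the hostnames by descending count.
function topDomains(urls) {
  const counts = new Map();
  for (const u of urls) {
    const host = new URL(u).hostname;
    counts.set(host, (counts.get(host) || 0) + 1);
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}

// Illustrative sample only:
const sample = [
  'http://bit.ly/abc123',
  'http://bit.ly/def456',
  'http://goo.gl/xyz',
  'http://twitpic.com/1a2b3c',
];
const ranking = topDomains(sample);
console.log(ranking); // e.g. [ [ 'bit.ly', 2 ], [ 'goo.gl', 1 ], ... ]
```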

Other social services are becoming more and more popular links in tweets: 4square (4sq.com) is #10, and Facebook (fb.me) is #13.

But the hierarchy of domains stays pretty much the same as in March, overall.

How many malicious links?

As in the previous analysis, I looked for phishing sites, malware, browser exploits, etc., but not spam.

The results are the same as before: 0.07% (1,149 links) of all links are dangerous.

Distribution of threats by type
The distribution of malicious sites per domain largely mirrors the distribution of total links per domain:

Distribution of malicious sites per domain
twitthis.com has a high percentage of malicious links, mostly due to hijacked Wordpress installations serving malware. As reported last time, mediafire.com is known to host malicious content. Some URL shorteners, like youtu.be and 4sq.com, create short links for one domain only (youtube.com and foursquare.com, respectively), so they are never malicious.


This shows once again that the number of malicious links in public tweets is very low. Users should pay more attention to direct messages (private tweets), but overall they should feel safe using Twitter.

-- Julien

Tuesday, October 26, 2010

Perl library for Google Safe Browsing v2

Google offers URL blacklists to identify malicious websites and phishing sites through the Google Safe Browsing API. It is used in pretty much all browsers (Firefox, Safari, etc.), except Internet Explorer. Version 2 has been available for a few months, but there are only 2 implementations thus far: Python (from Google) and PHP.

Versions 1 and 2 are supposed to provide the same coverage, but I've found that the Google Safe Browsing v2 lists are updated much more frequently than v1. Since I'm using the API for several of my Perl projects, I wanted to get up to date information from Google. Until now, there was a Perl implementation for v1 only.

Net::Google::SafeBrowsing2, which I've developed, is the first implementation of the Google Safe Browsing v2 API for Perl and I've made it available in CPAN.

Here is a quick example on how to use it:

  use Net::Google::SafeBrowsing2;
  use Net::Google::SafeBrowsing2::Sqlite;
  
  my $storage = Net::Google::SafeBrowsing2::Sqlite->new(file => 'google-v2.db');
  my $gsb = Net::Google::SafeBrowsing2->new(
    key     => "my key", 
    storage => $storage,
  );
  
  $gsb->update();
  my $match = $gsb->lookup(url => 'http://www.gumblar.cn/');
  
  if ($match eq MALWARE) {
    print "http://www.gumblar.cn/ is flagged as a dangerous site\n";
  }

Version 0.1

0.1 is the first version available on CPAN. It does not include Message Authentication Code (MAC) support, but otherwise it is fully functional. I've been using the library successfully for a couple of weeks now. There may be bugs left, as more unit tests are required. I encourage you to help me further develop the code. You can report bugs by posting comments here, or send me an e-mail.

Despite the low version number, the library works well!

Multiple backends

Net::Google::SafeBrowsing (API version 1) only uses Sqlite to store the database locally. My new library works with multiple backends: Sqlite, MySQL, Memcached, etc. I have uploaded a storage module which uses Sqlite to CPAN. I hope others will develop new backends. Check the documentation of Net::Google::SafeBrowsing2::Storage to create your own module.

Documentation and examples are available on CPAN.

-- Julien

Thursday, October 21, 2010

Analysis of multiple exploits

The most common type of malware seen in Blackhat spam SEO is the fake antivirus. But I also see other types of exploits from time to time. This week, the same malicious page came up on different domains: 4rukel.cz.cc, 4lofs.tk, 1polidsf.co.cc, 1barede.co.cc, 3timesto.tk, 4greaix.cz.cc, 4krudi.cz.cc, etc.

This page is interesting because it uses exploits rather than social engineering to install the malicious code. Below are the details of the exploits / malicious code.


Heavy obfuscation

The JavaScript code is heavily obfuscated. It cannot be de-obfuscated by a simple copy-paste of the code into Malzilla; some of the decoding has to be done by hand.

Original malicious code

One common technique used to break JavaScript de-obfuscation tools is to make references to the DOM. On this page, part of the JavaScript code is included in a textarea HTML tag. It is retrieved and executed later with code like this:

eval(document.getElementsByTagName('textarea')[0].value);

While executing the obfuscated JavaScript code, new HTML elements are added to the page and used to store values or JavaScript code that is retrieved later.
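The textarea trick can be reproduced in a defanged form. The sketch below stubs the DOM so it runs outside a browser, and the hidden "script" is a harmless expression; it is my illustration of the pattern, not the page's actual code.

```javascript
// Part of the "script" lives in a <textarea>, invisible to tools that only
// look at <script> blocks. The DOM is stubbed here so the sketch runs in
// Node.js; a real attack page would hide malicious code in the textarea.
const document = {
  getElementsByTagName: function (tag) {
    // Stand-in for <textarea>1 + 41</textarea> hidden somewhere in the page.
    return [{ value: '1 + 41' }];
  }
};

// The pattern used by the malicious page (with a harmless payload):
const result = eval(document.getElementsByTagName('textarea')[0].value);
console.log(result); // 42
```

An automated de-obfuscator that never builds the DOM has nothing to feed that eval(), which is exactly why this page had to be partially decoded by hand.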

First de-obfuscation pass generated new obfuscated JavaScript code!

Fortunately, all the JavaScript code is inline. There are no external files, which would have made the de-obfuscation harder.


Multiple exploits

Like many malicious pages, several exploits are included on this page:
  1. 2 malicious Java applets, using different techniques for Internet Explorer and Firefox
  2. PDF exploit
  3. Quicktime '_Marshaled_pUnk' Remote Code Execution Vulnerability
  4. Heap spray attack
  5. Internet Explorer MDAC exploit
  6. Internet Explorer "iepeers.dll" exploit
  7. 3 Flash exploits
Part of the code for the Java exploit
I believe these exploits come from different sources because the coding style of the various functions varies greatly.


This malicious page tries the different exploits until one is successful. Users need to make sure they keep both their browser and their plugins up to date.

-- Julien

Tuesday, October 19, 2010

Who else is benefiting from the spam SEO?

Blackhat SEO spam is used mainly to redirect users to pages serving malware, often disguised as an antivirus. However, other players are increasingly using the same Blackhat SEO techniques (via hijacked sites) to drive traffic to their own sites.


Fake search engines

These sites look like a search engine. But all links are actually paid advertising. To trick the advertising networks, the user is redirected to different IP addresses when he clicks on these links, so that the advertising networks see a small amount of traffic coming from multiple addresses instead of massive amounts of traffic coming from one location.

I've described how these fake search engines work in detail in the post about the Mother's Day scam.

p3p0.com fake search engine
These fake search engines include xaras.net, p3p0.com, yeasbear.com, xsearcher.net, smartbuzz.biz (currently blocked by Google Safe Browsing), etc.


Download sites

In the past few months, I've seen more and more hijacked pages redirecting users to (illegal) download sites. These sites make money by selling subscriptions to users.

Site claims to have deadliest catch.rar available for download...
They redirect users to a page that claims the file they are looking for is available for download. The file name is obtained by appending .rar to the search term. In the screenshot above, a search for "deadliest catch" on Google led to a spam page redirecting to http://express-downloads.com/download.php?file=deadliest%20catch.rar, where the file deadliest catch.rar is supposedly available for download.


but the user must sign up first ...
But the user needs "to be logged in to download" the file. After entering his e-mail address and a password, he must also make a payment to access the file.

and pay to access a file which does not exist!
Of course, the file does not exist. You can change the file name to any string; the site always claims the corresponding file is available: http://express-downloads.com/download.php?file=[string].rar
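The "any file name works" behavior suggests the page simply templates the search term into a fixed URL. A sketch of that pattern (the function name is mine; the site name comes from the URL observed above):

```javascript
// Rebuild the observed URL pattern: URL-encode the search term and
// append ".rar" to fabricate a "file" the site will claim to have.
function fakeDownloadUrl(searchTerm) {
  return 'http://express-downloads.com/download.php?file='
    + encodeURIComponent(searchTerm) + '.rar';
}

console.log(fakeDownloadUrl('deadliest catch'));
// http://express-downloads.com/download.php?file=deadliest%20catch.rar
```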

Most of these sites have very similar domain names: fast-downloads.biz, turbo-speed-downloads.com, express-downloads.com, thedownloadfiles.com, etc.



Conclusion

There are more and more shady sites using Blackhat SEO to make money, and there is no shortage of vulnerable Wordpress installations to take advantage of. This type of spam will very likely exist for a long time.

Botnets are already available for rent; I think it won't take long before hijacked sites are up for rent to drive traffic to shady websites.

-- Julien

Thursday, October 14, 2010

"Hot Video" pages: analysis of a hijacked site (Part IV)

In Parts I and II, I analyzed files on a hijacked website that was part of the malicious "Hot Video" campaign. While doing the analysis, I looked at other hijacked domains. All of these domains had one of the malicious files described in Part II, which allows anyone to execute commands or upload files. Some of these files are variations of the "Hot Video" pages; others are unrelated to the attack.

Here are some of the more interesting scripts I found.

PHP Shell

Using PHP or shell commands through HTTP requests might still be too much work for some people! More than a few hijacked sites had a PHP shell script installed, which makes it easier to connect to the database, see the source code of any script, upload or modify files, etc.

Powerful PHP shell
Because the file name of this PHP shell is always the same, it is even easier to find such hijacked sites with a single Google query.


proxy.php

As its name suggests, this script acts as a proxy. It might be used by spam sites to access Google Hot Trends without being detected and blacklisted. This script can only be used by people who know a special "key" which cannot be directly derived from the source code, as in the file .sys.php analyzed earlier.


PHP code of proxy.php
sitemap.php

All the hijacked sites contain a Sitemap, a file which shows the list of spam pages on the same domain and other domains. They are usually created by the same script which shows the "Hot Video" page (see Part III). But on a few sites, there is a separate sitemap.php file to create this list.

The script uses the list of keywords from key.txt, and the list of sites from sites.txt, to generate a list of spam pages.
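A sitemap.php-style generator amounts to a cross product of those two input files. The sketch below shows the idea; the URL pattern, file contents, and function name are illustrative assumptions, not the actual script.

```javascript
// Cross every keyword (from key.txt) with every hijacked domain (from
// sites.txt) to produce the list of spam page URLs for the sitemap.
function buildSitemap(keywords, sites) {
  const urls = [];
  for (const site of sites) {
    for (const kw of keywords) {
      urls.push('http://' + site + '/?p=' + encodeURIComponent(kw));
    }
  }
  return urls;
}

// Illustrative inputs standing in for key.txt and sites.txt:
const urls = buildSitemap(['hot video', 'free movie'], ['example-hijacked.com']);
console.log(urls.length); // 2
```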

PHP code of sitemap.php


Script kiddies?

There shouldn't be too much pride in hacking a site which has already been hacked. But some groups claim to have "owned" a few sites which they did not hack in the first place.

Hacker group claims to have "owned" a site that was already hacked

Conclusion

The scariest part of this analysis is that these websites remain vulnerable after months of being hijacked. Maybe their owners don't care about these sites, but they can be used to infect users at any time. Anybody who knows the right Google query can control hundreds of websites within a few minutes.


-- Julien

Monday, October 11, 2010

Halloween tricks: spammers are ready

Halloween is less than a month away and vendors have already set up their stores to sell plenty of candy, pumpkins, decorations, and costumes. Halloween represents big business for U.S. retailers.

Spammers are clearly not going to miss this opportunity to make some extra money. Several university websites, including byu.edu and bowdoin.edu, have been used to host spam about Halloween costumes. If accessed from Google, the spam pages redirect to buycostumes.com. The URL to buycostumes.com contains an affiliate ID which allows the spammer to get a commission (10% to 30% of the total purchase) from the store should a redirected user ultimately make a purchase.

Spam blog about Halloween costumes
A Wordpress blog with thousands of spam pages built around variations of "Halloween costumes" has been installed on each site. On October 2nd, spammers managed to get their pages into the top results for a search on the term "Halloween express". The first link in Google pointed to bowdoin.edu, while the 10th link pointed to byu.edu. I've contacted both universities and the spam should be shut down soon.

This is very similar to the Mother's Day spam that I reported in May 2010.

-- Julien

Wednesday, October 6, 2010

Update to the Search Engine Security plugin

I won't make a new blog post every time I update the Search Engine Security add-on for Firefox, but there have been several changes since the last post.

Install Search Engine Security add-on for Firefox 3.x

Notification

This feature was introduced in version 1.0.4. A notification is shown on Bing, Yahoo, and Google to let users know whether the SES protection is enabled for this search engine. The notification is shown under the search input.

Search Engine Security notification in Google search
Search Engine Security notification in Bing search

User-Agent modification

Most spam pages look at the Referrer value to decide whether or not to redirect users to a malicious page. However, in some cases, like the Hot Video pages, only the User-Agent value is used. One common check is to look for "slurp" in the user-agent string to flag the request as coming from the Yahoo crawler. If you check the "Modify User-Agent" checkbox in the options, the string "slurp" is added to the User-Agent header when you leave Google/Bing/Yahoo, in addition to overriding the Referrer header.

This option provides additional protection against malicious spam SEO.
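Both sides of this trick can be sketched in a few lines: the spam page's crawler check, and the counter-measure the option applies. The function names and logic below are my illustration, not the add-on's actual code.

```javascript
// A spam page's typical check: flag the request as the Yahoo crawler
// when "slurp" appears anywhere in the User-Agent string.
function looksLikeYahooCrawler(userAgent) {
  return userAgent.toLowerCase().indexOf('slurp') !== -1;
}

// What the "Modify User-Agent" option effectively does when you leave
// Google/Bing/Yahoo: append "slurp" so the page mistakes you for a crawler.
function modifyUserAgent(userAgent) {
  return userAgent + ' slurp';
}

const ua = 'Mozilla/5.0 (Windows NT 6.1) Firefox/3.6';
console.log(looksLikeYahooCrawler(ua));                  // false
console.log(looksLikeYahooCrawler(modifyUserAgent(ua))); // true
```

Since spam pages avoid redirecting crawlers (to keep their Google ranking), being mistaken for one means you see the harmless page instead of the malicious redirect.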

New option to modify the User-Agent header

Fix for Google Instant

With Google Instant, Google now runs new searches as you type, without reloading the page. Since the plug-in was listening for new page loads to add the notification under the search input, the notification was shown only for the first search. Version 1.0.8 ensures the notification is always shown.

Search Engine Security notification with Google Instant

French translation

The extension has been translated into French.


Install Search Engine Security add-on for Firefox 3.x

If you have an older version of the plug-in installed, go to Tools > Add-Ons > Find Updates to get the latest version.

-- Julien

Monday, October 4, 2010

PHP deobfuscation

When I analyzed the PHP scripts used for the "Hot Video" pages, I had to find a way to deobfuscate the PHP code. There are many tools and online services to obfuscate PHP, but very few to deobfuscate it.

Obfuscated PHP code: one big eval()
PHP obfuscation is generally done with:
  • Base64 encoding
  • Encoding strings into hexadecimal and octal values
  • Assigning functions to variables
  • Multiple calls to eval()
Evalhook

This code can be deobfuscated by hand, but it takes multiple iterations and can be time-consuming. Fortunately, Stefan Esser wrote evalhook to make deobfuscation easier. His article about the tool describes how it works. Basically, it is a library loaded into PHP that intercepts and displays the code executed by the eval() function.

There are no instructions for compiling the source code, so it took me a little bit of time to work out all the steps necessary to use the code on CentOS5. Here are basic instructions to compile the code.

First, you need:
  • PHP >= v5.2
  • php-devel
  • PHP Zend Optimizer
CentOS5 comes with PHP 5.1, so you will need to obtain a newer version from another YUM repository. I used AtomicCorp to install the new PHP version and Zend.

Then, run these commands:
tar xvfz evalhook-0.1.tar.gz
cd evalhook
phpize 
./configure
make
sudo make install

Now, you can use the evalhook library to deobfuscate PHP files, as described in the original article:

$ php -d extension=evalhook.so encoded_script.php

The only inconvenience I ran into when using evalhook was that the resulting code still contains strings encoded as hexadecimal and octal escapes. So I wrote a small (and ugly!) Perl script to parse these strings into ASCII. Download strings.pl if you're interested in using it.

perl strings.pl source.php target.php
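For illustration, here is a rough JavaScript equivalent of the idea behind strings.pl (my sketch of the technique, not the actual Perl script): replace each "\xNN" hex escape and "\NNN" octal escape with the character it encodes.

```javascript
// Turn PHP-style "\x68" hex and "\150" octal escapes back into readable ASCII.
function decodeEscapes(s) {
  return s
    // Hex escapes first, so "\x68" is not half-consumed by the octal rule:
    .replace(/\\x([0-9a-fA-F]{2})/g, (m, h) => String.fromCharCode(parseInt(h, 16)))
    .replace(/\\([0-7]{1,3})/g, (m, o) => String.fromCharCode(parseInt(o, 8)));
}

console.log(decodeEscapes('\\x68\\x69')); // hi
console.log(decodeEscapes('\\150\\151')); // hi
```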

-- Julien