Friday, May 22, 2009

Take my (plugin) data, please

Following up on my previous post about the various ways third parties can leverage features of your web browser to track you and learn your surfing habits, I decided to take a look at our production logs to get a feel for how certain types of user data are going out the door.

For today, I thought I would focus on who is collecting browser information--namely, browser plugin information that is sent back to a server within URL query parameters. This typically involves using JavaScript to enumerate the contents of the navigator.plugins array, which contains the list of installed plugins in non-Internet-Explorer browsers. The enumeration loop composes a string of all found plugin names, which is then included as a parameter in a subsequent request URL.
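
A minimal JavaScript sketch of that technique follows; the beacon host and parameter name here are hypothetical:

// Enumerate installed plugins; navigator.plugins is populated in
// non-IE browsers (Firefox, Opera, Safari, etc.).
var names = [];
for (var i = 0; i < navigator.plugins.length; i++) {
    names.push(navigator.plugins[i].name);
}
// Ship the list to a collection server as a URL query parameter.
// Spaces encode to %20, which is why strings like
// "Mozilla%20Default%20Plug-in" show up in request logs.
var beacon = new Image();
beacon.src = "http://metrics.example.com/b?plugins=" +
    encodeURIComponent(names.join(";"));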

First I searched for requests containing the string "Mozilla%20Default%20Plug-in", the name of the default plugin that ships with many Firefox-based browsers; it has very little chance of producing false positives. In logs stretching over the first half of May, I found over 56,000 requests containing my target plugin string. I then grouped and ordered the requests per host. Overall, those 56,000 requests belonged to only 1,056 hosts, of which nearly half (502) were sub-domained off Omniture's .2o7.net domain. Some interesting hostnames in the overall list include z.digg.com, ostats.mozilla.com, mtrics.cdc.gov, a.consumerreports.org, metrics.npr.org, stumbleupon.stumble-upon.com, a.ncbi.nlm.nih.gov, www.ac.vic.gov.au, metrics.aarp.org, and stateofgeorgia.122.2o7.net.

Next I wanted to see what other plugins were popular. Since Omniture uses a consistent URL request format and composes the bulk of the requests, I processed the logs, pulling plugin data out of the requests bound for Omniture and tabulating plugin usage on a per-client-IP basis. The most popular plugins, in order, were:

Mozilla Default Plug-in
Shockwave Flash
Adobe Acrobat
Windows Media Player Plug-in Dynamic Link Library
Microsoft Office 2003
iTunes Application Detector
Microsoft DRM
Shockwave for Director
Java TM Platform SE 6 U13
Citrix ICA Client
Windows Presentation Foundation
2007 Microsoft Office system
QuickTime Plug-in 7.6
RealPlayer tm G2 LiveConnect-Enabled Plug-In 32-bit
RealPlayer Version Plugin
Silverlight Plug-In
RealJukebox NS Plugin
Google Update
QuickTime Plug-in 7.5.5
Move Media Player
Java TM Platform SE 6 U7
DivX Web Player
Java TM Platform SE 6 U11
MetaStream 3 Plugin
ActiveTouch General Plugin Container
Google Updater
DivX Player Netscape Plugin
Microsoft Windows Media Player Firefox Plugin
Picasa
Java TM Platform SE 6 U5
QuickTime Plug-in 7.2
Java TM Platform SE 6 U2
Turner Media Plugin 1.0.0.10
Microsoft Office Live Plug-in for Firefox
getPlus for Adobe 15235
VMware Remote Console Plug-in
Java TM Platform SE 6 U3
getPlus for Adobe 15229
Avocent DSView Session Launcher Plugin

It's no surprise that Flash, Acrobat, etc. are at the top of the list. Keep in mind that attackers are actively targeting web browser plugins; the above list should get the point across that there is no shortage of targets out there. It should also illustrate that things are sometimes plugged into browsers that you wouldn't otherwise expect. For example, did you know that Microsoft Office installs a browser plugin? It's always worthwhile to understand the attack surface you are exposing to the Internet.

Until next time,
- Jeff

Monday, May 18, 2009

ActiveX vulnerabilities – Threat to Web Security

As I mentioned in my earlier blog, phishing is a never-ending threat to web security. Another threat to web users is the rise in ActiveX vulnerabilities. These are very easy to exploit, as a great deal of information--vulnerability details, proof-of-concept exploits, etc.--is freely available on the Internet. ActiveX controls can be automatically downloaded and executed by Internet Explorer (with user acceptance) when viewing a web page, or installed as part of a larger application. Basically, ActiveX controls expose various properties and methods, which can lead to exploitation if they have not been properly coded. If someone were to find a vulnerable property or method in an ActiveX control, it would not be difficult to create a working exploit and host it on a web server. If the vulnerable control is marked 'safe for scripting', it can then be remotely instantiated and exploited by a malicious web site.

Numerous buffer overflow and file overwrite vulnerabilities have been found in ActiveX controls over the past few years, and working exploits are available for many of them. Using heap spraying, a popular technique for reliably placing shellcode in memory, an attacker can successfully exploit a vulnerable control. He simply needs to host the working exploit on a web server under his control and divert a victim to the site. If the vulnerable ActiveX control is present on the victim's machine, the victim will be compromised silently in the background; depending on the vulnerability, this can allow remote code execution or arbitrary file overwrite. The attacker can then download and install additional malware or other malicious programs.
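
For illustration, here is a minimal sketch of the classic JavaScript heap spray pattern; the filler and payload bytes below are placeholders, not working shellcode:

// Each sprayed block is mostly NOP-like filler ending in the payload,
// so a corrupted pointer landing anywhere in the filler "slides"
// forward into the payload.
var filler = unescape("%u9090%u9090");   // 0x90 = x86 NOP (placeholder)
var payload = unescape("%u4141%u4141");  // placeholder payload bytes
while (filler.length < 0x80000) {        // grow filler to ~512K characters
    filler += filler;
}
var blocks = [];
for (var i = 0; i < 100; i++) {
    // each concatenation forces a fresh large heap allocation
    blocks[i] = filler.substring(0, 0x80000 - payload.length) + payload;
}
// The exploit would then invoke the vulnerable ActiveX method to corrupt
// a pointer, which now very likely points into the sprayed region.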

A few months back, a vulnerability was found in Snapshot Viewer for Microsoft Access (MS08-041) that could be leveraged to download and save files to arbitrary locations on an affected system. As noted in a Symantec blog post, because this particular ActiveX control was signed by Microsoft, attackers were able to force installation/exploitation of the control without any user interaction. Web attack toolkits found in the wild, such as MPack and NeoSploit, exploit a number of such critical ActiveX vulnerabilities.

Due to the ease of exploitation, attackers are widely using vulnerable ActiveX controls as an attack vector. Heap spray code is very easy to work with and can be seen in many attacks involving ActiveX exploits; it gives the attacker the power to easily swap in whatever shellcode is required. He can then host the exploit code on a web page and convince victims to visit the site, typically by getting them to click a link in an HTML e-mail. New ActiveX vulnerabilities surface every day, and exploit code is regularly posted to public web sites; a quick search on SecurityFocus will reveal the latest ones.

Gone are the days when attackers targeted server-side vulnerabilities to compromise systems; now they are focused on targeting end users via the web browser. The combination of heap spraying and obfuscation used in these exploits makes the attacks difficult to detect with automated means. Certainly, the rise in ActiveX vulnerabilities and the exploits posted on the web makes them a real threat to web security. If a vulnerability is not patched and you do not have updated antivirus signatures, setting the kill bit is your only option. The kill bit is a per-CLSID registry flag that marks the COM object / ActiveX control referenced by that CLSID as non-loadable in the browser and other scriptable environments; you set it by modifying the data of the Compatibility Flags DWORD value for the CLSID of the ActiveX control. Here is the link from Microsoft on how to set the kill bit for a given control:

http://support.microsoft.com/kb/240797
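
For reference, here is a minimal sketch of that registry change as a .reg file; the CLSID below is a placeholder for the vulnerable control's CLSID, and 0x00000400 is the kill bit flag:

Windows Registry Editor Version 5.00

; Placeholder CLSID -- substitute the CLSID of the control to be killed.
; 0x00000400 is the kill bit; OR it into any existing Compatibility Flags value.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\ActiveX Compatibility\{XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX}]
"Compatibility Flags"=dword:00000400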

That’s it for now.

Umesh

Sunday, May 17, 2009

Design Flaws in Deployment

Let's suppose you work in IT security at some large company. From one perspective, your job is to make sure that nobody's security expectations are violated. Your users probably think of this as you taking care of security for them, or possibly as you trying to prevent them from getting their jobs done, but really you're balancing two variables: stakeholder expectations and security in reality.

This could be fairly straightforward if it were all about implementation problems. You could just run new software through your favorite testing method, and keep up with the patches. Add in as much explicit security technology (anti-virus, a firewall, NIDS) as your stakeholders are willing to pay for, and then keep telling stakeholders you need more staff or money to do X, or the company is at risk. (Cynical? Maybe. Typical 10 years ago? Yes. Typical now? Less so, but not entirely gone just yet.)

Enter design & configuration flaws, by which I mean all security issues in which every system is acting exactly as it is designed to act, yet an attacker is able to realize a threat. You could scan for some configuration flaws, and you could run new software through a design review, but these strategies won't give you much insight into how the new individual system will affect your overall security unless you take the systems' environment into account. That is, there are now 3 variables to tweak: stakeholder expectations, individual systems, and the environment of the systems. These variables are so interrelated that they can't be tweaked independently.

Here's how 2 common (and sometimes resisted) IT security techniques affect these variables, assuming the possibility of design & configuration flaws:

Configuration Rules

If you're like most IT security folks, you have some rules about how systems can be configured. Usually these rules are network-specific, e.g. if the machine lives on network X it can't be dual-homed and it must use Kerberos authentication. The ideal is, if the system doesn't meet these rules, it doesn't get on the network.

Rules like this affect both individual systems and the system environment. They have 2 purposes:
1. Fit an individual system into the environment without enabling any possible design & configuration flaws in this system.
2. Fit a new individual system into the environment without changing the environment in a way that enables a design or configuration flaw in existing systems.

For every rule on your security list, you should be able to say whether it is trying to do #1, #2 or both. If you can't, why is the rule on the list at all? As you can imagine, it will be a lot easier to determine the purpose of a rule and see whether a given set of security rules is effective and consistent for these purposes if someone has documented the intended properties of the environment. Whether you are writing a new set of rules or reviewing some existing rules, figuring out what you want in the environment is a great place to start.

One key environmental property is that individual systems affect each other in a known, limited set of ways. This means that for each environment you need a list of the ways systems are allowed to interact. Can they mount each other's disks? Ssh in?

Other desirable properties are simplicity and consistency. For example, there should be some core services (usually including routing, DNS, and if applicable, DHCP) which are provided by the same source for every individual system in the environment.

I'm sure there are other useful properties for an environment; you get the idea. And yes, you could argue that these rules also affect stakeholder expectations, by presenting a picture of the facilities & properties of the environment.

Sign Off on Exceptions

Network X will sometimes be the most appropriate place for a system which simply cannot meet the rules. The standard way most IT security departments respond is to have an exception process which requires review and sign off, frequently by some high-level political figure at the company. This leads many people, including some security people, to believe that the primary purpose of such a process is to make it harder to get an exception than it is to comply with the set of rules to start with. To you I say: Danger! You are missing the point.

This review and sign off process affects individual systems, the system environment and stakeholder expectations. The purposes should be:
1. Determine whether the proposed exceptions will enable any design & configuration flaws in this system, and document any additional properties of the environment (beyond the initial list of intended properties) which are protecting this system.
2. Determine whether the proposed exceptions will enable any design & configuration flaws in other systems in the environment, and document any changed or additional properties of the environment (beyond the initial list of intended properties) adding this system will cause.
3. Ensure that stakeholders understand and approve any flaws that have been enabled, and any threats an attacker could now realize.

If this is going to work, you need to compare each proposed exception with the intended and actual properties of the environment. Answering a bunch of questions is not sufficient unless those questions reveal the information from #1 and #2. Also, it has to be cumulative. Previously authorized exceptions must change the documented properties of the environment, or you won't be able to tell whether this exception clashes with previous exceptions.

Whether to allow an exception is usually, in the end, a business decision, but the decision makers need to understand the security implications of what they're deciding, as in purpose #3. Doing this right will be hard enough that you won't need to add hoops just for their deterrent value.

I hope these ideas help you develop or re-design a plan for minimizing design flaws in deployment. Next time: some thoughts on design flaws and perimeter security.

--Brenda

Friday, May 15, 2009

Those who know where and when you surf

Nobody likes the idea of a big brother watching over them as they surf. Yet as browser technology evolves, we are starting to see some privacy-violating "byproducts" of otherwise innocuous features. Now, we already generally accept that our immediate/upstream Internet providers can possibly snoop on our traffic: they can see what sites we try to access (whether by IP destination or by monitoring DNS queries) and can peek into plaintext content. But an ISP's ability to monitor traffic is limited to only the traffic that passes through it--namely, its customers'. What if there were central organizations that could monitor user traffic (to a certain degree) across any and all ISPs?

I'm going to overlook the obvious companies/services where you install a toolbar into your browser so they can watch and tabulate everywhere you go (like Alexa); that is an opt-in situation where you willingly give all your info to a third party. I'm also going to overlook any web proxy services you might be using, because those are opt-in situations with the same ramifications. And of course, any internal monitoring within an organization is also exempt. I want to focus on who, outside of your local network, your organization, and your ISP, is receiving your surfing data without you explicitly opting in to give it to them.

So let's start with HTTPS, as implemented in common web browsers. Most browsers include certificate revocation checking to ensure an HTTPS site's certificate hasn't been compromised. When you access an HTTPS site, the browser receives the server certificate and runs off to the designated CRL and OCSP servers to query whether the certificate has been revoked. Those CRL and OCSP servers are operated by the certificate issuers, such as Thawte, Verisign, etc. The key here is that many SSL certificates have a one-to-one relationship with a specific web site (i.e. traditional certificates rather than wildcard certificates), so a query to an OCSP responder for a web site's certificate essentially tells the certificate issuer that you are in the process of accessing that site. Now, certificate issuers can only track accesses to sites they issue certificates for--but in 2007, Verisign and its subsidiaries were deemed to have issued over 57% of the SSL certificates in use, giving them the theoretical capability to track access to over half of the HTTPS sites on the Internet.
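
To make the exposure concrete, here is roughly what such a revocation check looks like on the wire (the responder host is illustrative; the request body, elided here, is a DER-encoded OCSPRequest that identifies the exact certificate by issuer and serial number):

POST / HTTP/1.1
Host: ocsp.verisign.com
Content-Type: application/ocsp-request
...

[DER-encoded OCSPRequest naming the issuer and serial number of the
certificate for the HTTPS site being visited]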

Fortunately, none of the OCSP responders I briefly reviewed set HTTP browser cookies, which means they are not tracking unique individuals (at least, not through simple browser means). That is a good thing; the best tracking granularity they can achieve is generally per source IP address (ignoring browser header/request and TCP protocol fingerprinting to differentiate browsers/systems behind the same IP). This can be problematic if you're a home user with a one-to-one relationship to your IP address, but tolerable if you're behind a NAT or proxy that serves many users (giving it a many-to-one relationship with the IP address). [Aside/favor: if you happen to see a public OCSP responder setting browser cookies, let me know!]

Default browser start pages are another privacy-leaking avenue. Every time you start your web browser, Microsoft, Apple, Opera, or Mozilla hears about it through the various default browser start page request(s):

http://en-us.start2.mozilla.com/firefox?client=firefox-a&rls=org.mozilla:en-US:official
http://www.apple.com/startpage/
http://portal.opera.com
http://go.microsoft.com/fwlink/?LinkId=74005
http://runonce.msn.com/runonce3.aspx


The start pages themselves may seem innocuous, but they can chain to more exposure. For example, Apple's start page includes a request to metrics.apple.com, which is really a pointer to Omniture (resolving metrics.apple.com returns a DNS CNAME to appleglobal.112.2o7.net); the request collects and sends a lot of identifying browser information, including screen resolution and all installed browser plugins. I've seen occasions where responses from metrics.apple.com forward to Doubleclick.net too. Now, sure, it's common for web pages to include metrics/analytics and advertising links--but such elements on browser start pages mean these syndication/service providers also have access to realtime information about when you start your browser. In addition to Apple's use of Omniture and Doubleclick, Opera's portal page includes resource links to Google Analytics, Mozilla redirects to Google, and Microsoft's portal page uses Webtrends.com for metrics.

But more importantly, can any of these providers uniquely identify you? Microsoft relies on m.webtrends.com, and that site sets a unique cookie for you--so Webtrends has the ability to track you no matter where you go. Ditto for Apple, Omniture, and Doubleclick. The fact that these metric and advertising elements are hosted on browser start pages means the services have the opportunity to plant their unique cookies right at the beginning of your surfing session. Plus, look at where and how the start pages are hosted. Microsoft redirects from Microsoft.com to MSN.com, where the start page actually lives. By placing the start page on MSN, all of the MSN cookies become available--and if you are logged in, that means Microsoft knows who you are and can individually identify you if they so choose. Since Mozilla redirects to Google, the same generally applies to Google using its own cookies to identify and track you.

But fortunately this is only a minor information exposure; knowing you started your web browser is not exactly an earth-shattering privacy violation, and it only occurs at the very beginning of your surfing session (not when you open a new tab, etc.). Plus, this immediate information doesn’t expose where you actually wind up surfing to...just that you opened a browser. The previously mentioned OCSP issue reveals far more information regarding where you are surfing; but it is not the only feature that does so...

Many browsers have recently added anti-phishing and safe-surfing features which essentially look up the host/URL you are accessing in a database of known offenders to see whether it should be blocked. Think about that for a second: for every URL/host you visit while surfing the Internet, a lookup is done to ensure the URL is safe. So how is that lookup being done?

Some of the browser vendors have designed their features so that databases of the known offenders are downloaded and stored on the user's local system, so they can be queried locally. This is ideal, as it means the lookups are fast (just look inside the downloaded file) and it doesn't expose the lookups that are actually occurring. However, note that I began this paragraph with the term "SOME of the browser vendors"...
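
A rough JavaScript sketch of that local-lookup design, with an entirely hypothetical update URL and list format:

var blocklist = {};  // host -> true for known-bad hosts

// Refresh the local copy of the offender list periodically
// (the update URL here is made up for illustration).
function updateBlocklist() {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "http://updates.example.com/blocklist.txt", true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            var hosts = xhr.responseText.split("\n");
            for (var i = 0; i < hosts.length; i++) {
                blocklist[hosts[i]] = true;
            }
        }
    };
    xhr.send(null);
}

// The per-navigation check happens entirely in memory; unlike a
// remote lookup, no request describing the visited host ever
// leaves the machine.
function isHostBlocked(host) {
    return blocklist[host] === true;
}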

It turns out the biggest questionable privacy offender is Opera. Their SiteCheck function, enabled by default and meant to warn you when you try to access a malware site, sends a real-time request to sitecheck2.opera.com for every new host you surf to. The transaction looks something like:

GET /?host=www.google.com&hdn=naLWSHPy7ud1pACYor32hg== HTTP/1.1
User-Agent: Opera/9.64 (Windows NT 5.2; U; en) Presto/2.1.1
Host: sitecheck2.opera.com
...

HTTP/1.1 200 OK
Date: Fri, 15 May 2009 15:17:11 GMT
Content-Type: text/xml
...
<?xml version="1.0" encoding="utf-8"?>
<operatrust version="1.1">
<action type="searchresponse">
<host></host>
<ce>14400</ce>
<w>1</w>
</action>
</operatrust>


That means Opera hears about every web site host you access and can keep a historical record if they really want to. Now, the SiteCheck queries do not use cookies to track individual users, so tracking granularity is limited to per-source-IP resolution--but it's still a notable privacy exposure. Other browser features, like IE8's Suggested Sites, likely expose the same level of information, though I have not confirmed that personally (their privacy warnings and opt-in requirements seem to suggest as much).

So now that we've gone through all of that, what can you do about it? The good news is that you can generally turn off all of the behaviors mentioned above (you'll have to dig around in the advanced configuration areas of your browser)...but you might not want to. Sure, you can change your browser start page to something else without ramifications (in fact, if you change it to a blank page, you'll find your browser starts faster because it doesn't have to load an external page). But OCSP checking and SiteCheck are security protections that actually help keep you safe--turning them off opens you up to more risk. So you need to decide what's more important to you: your security, or your privacy.

Happy deciding,
- Jeff

Friday, May 8, 2009

Cookie Based Persistent XSS

This is something I've been meaning to post for a while now, as it was included in my Black Hat DC 2009 presentation entitled 'A Wolf in Sheep's Clothing: The Dangers of Persistent Web Browser Storage'. I've noticed an increasing number of sites where cross-site scripting vulnerabilities exist not in echoed (reflected XSS) or database-stored (persistent XSS) user-supplied content, but rather within the cookie itself. This is certainly a form of persistent XSS, but not in the traditional sense, as the injected script is stored on the client machine as opposed to the server and is only ever exposed to a single victim.

I typically notice this on sites which display a search history. When you use such a site's search functionality, your past search queries are displayed; rather than storing this content on the server, perhaps in a relational database, the site embeds it within a cookie. Let's take a look at the search functionality at Sony.com.

In the image to the left, you will note that my past search queries ('Michael' and 'Sutton') are displayed on the right-hand side beneath the 'Your Recent Searches' heading. This data has obviously come from past requests and could naturally be stored in a variety of places: a relational database connected to the web application, hidden form fields, the URI, etc. In this particular case, however, past searches are actually stored within a cookie--the sonysearch_recent_searches cookie. Now, there's certainly nothing wrong with that; after all, we're dealing with a relatively small amount of non-critical data which doesn't really need to be stored beyond the current session. However, as you will also note in a later image, the user-supplied content accepted in search queries isn't appropriately filtered, and thus the site is vulnerable to a rather unique form of persistent XSS. Note: this was reported to Sony in February.
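
To illustrate the mechanics, here is a hypothetical JavaScript sketch of the vulnerable pattern (the cookie and element names are made up; this is not Sony's actual code):

// On each search, the raw query is stored into a cookie with no
// output encoding applied -- this is the flaw.
function rememberSearch(query) {
    document.cookie = "recent_searches=" + escape(query) + "; path=/";
}

// On each page load, the cookie value is written straight into the DOM.
function showRecentSearches() {
    var match = document.cookie.match(/recent_searches=([^;]*)/);
    if (match) {
        // unescape() undoes the cookie encoding, so a query such as
        // <img src=x onerror=alert(1)> executes on every page view:
        // persistent XSS stored in the victim's own cookie.
        document.getElementById("recent").innerHTML = unescape(match[1]);
    }
}

The fix is output encoding at render time (or rejecting markup on input), regardless of where the data happens to be stored.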

So why is this interesting? It's just another XSS vulnerability - not exactly an elite club to get into. I find it interesting because it has some unique characteristics. In some ways, it's almost a hybrid of reflected and persistent XSS. Let's take a look at what I mean by that.

Storage - This is certainly an example of persistent XSS, as the script is stored. However, it is stored on the client side, within a cookie, not within a relational database on the server side as is typically the case.

Victim - Unlike most persistent XSS vulnerabilities, injected script will not affect everyone who visits a vulnerable page. It will only affect a single user (like reflected XSS), as it is transmitted within a cookie when an individual request is made.

As you can see, while we have persistent XSS, it smells like reflected XSS. This could come in handy if, for example, an attacker were looking to target a single individual (reflected) but wanted the attack to repeat (persistent) every time the user accessed a certain resource. Consider, for example, a situation where the attacker wanted to harvest sensitive data every time the victim viewed it. This alleviates the timing challenges associated with a successful reflected XSS attack while remaining stealthier than typical persistent XSS.

Food for thought.

- michael

Security is point-in-time

I've been having discussions with some peers about the idea that security is a point-in-time snapshot. You don't "achieve" security and then walk away from it; it is a continuous cycle. In other words, security is a state of being that is subject to surrounding context. You can be "secure" today (by traditional measurements) and, without changing a thing, be "insecure" tomorrow.

Let's entertain some examples, starting with the realm of cryptography. Once upon a time, DES, MD5, and SHA-1 were considered very secure. They were (actually, they still are even now) widespread standards in many applications. And yet, all three of those mathematical critters are now considered insecure. DES has been dead for over a decade (the EFF built Deep Crack in 1998), MD5 fell in 2004, and the final nail was put into SHA-1's coffin last week. What happened? In some cases, advances in and the availability of technology diminished the security value the algorithm provided. In other cases, mathematical wizards discovered properties that reduced the algorithms' effectiveness.

It's not the first time that advances in technology and information have rendered a previously effective security control obsolete. In the 80s and 90s, many organizations deployed proximity access control cards from HID et al. to control access to buildings. These cards were essentially passive read-only RFID transponders operating at low frequencies (125 kHz). The security of this technology was contingent on the notion that it was too difficult for attackers to create or clone proximity cards. Fast forward a few decades, however, and there is now detailed information available on the Internet on how to make a proximity card cloner using off-the-shelf components; the cloner can trivially be used to defeat these legacy proximity access control systems. Fortunately, vendors have moved to newer technologies (iClass, MIFARE, etc., even though these have their own problems) for RFID/proximity access control, but many 125 kHz systems are still in use today.

But just because you've updated to a newer technology that fixes the insecurity of an older one doesn't mean you're done. The well-accepted security patch lifecycle shows this is an ongoing battle. When you are insecure, you apply a patch...which makes you secure. That is, until a new vulnerability is found in the patch (or in the application areas that were not patched), and you're back to being insecure. What was fully patched and secure yesterday can be unpatched and insecure today; the "secure" disposition was only valid at a particular point in time. Today's secure version is tomorrow's insecure version.

What does this all mean at the end of the day? Security is not a purchasable asset that you buy once and then have; it takes continuous investment to keep it. So keep that in mind when you are laying out your capex plans.

Until next time,
- Jeff

Friday, May 1, 2009

Thoughts on Microsoft SIR v6

Microsoft has released volume 6 of their Security Intelligence Report. At 184 pages, it's a behemoth; fortunately there is a separate 15 page Key Findings Summary document for those with less time and patience to read through the entire report. The full report is absolutely fantastic and full of raw data, if you get into that sort of thing; but for now, I thought I'd make some observations based on the summary document.

My first thought concerns figure 5 of the summary (page 5), which indicates that 59.1% of browser-based attacks on Windows XP targeted third-party components, versus 94.5% on Vista. The positive message is that IE (and the native ActiveX controls that come along with it) running on Vista seems to be more secure; the more interesting message is that IE running on XP is still being directly targeted and exploited. When you look at the breakout of the top ten threats against XP, you see that the Microsoft ones are largely old--MS06-01, MS06-057, MS05-014, MS06-071, and MS06-055. For those unfamiliar with Microsoft security bulletin naming conventions, the first two digits denote the year; these vulnerabilities were patched in 2005 and 2006, and yet were still actively (and successfully!) exploited in very widespread fashion in late 2008.

Later in the summary document there is a mention that the majority of exploited Microsoft Office applications were Release-To-Manufacturing (RTM) versions. The most successfully exploited vulnerability was CVE-2006-2492. Also interesting was the growing trend in PDF vulnerabilities, shown in Figure 10: CVE-2007-5659 and CVE-2008-2992 account for the vast majority of PDF-based attacks. While the PDF vulnerabilities are slightly newer, the bulk of the Microsoft-specific exploitation used old bugs. Moral of the story: people are not getting p0wned by 0day...they are getting p0wned by 730day. Thorough and timely patching is still a very important process to wrangle, because apparently a lot of people are still failing at it.

Of course, the summary doesn't say much about how many of those exploits hit home users versus enterprises. Figure 15, however, does split out home vs. enterprise metrics for encountered threats/malware: home users encountered a significantly higher number of trojan downloaders and droppers, while enterprise users encountered a higher number of worms. Home and enterprise users were roughly equal in nearly all other categories. Miscellaneous trojans (whatever that means in the Microsoft classification schema) showed the greatest growth of all encountered threat categories.

The document also contains some great geo-locational data on the sources of the encountered threats, broken out per country globally and per state within the US. Globally, it isn't much of a surprise to see the US, China, and Russia as the biggest offenders. More interesting was the US per-state breakout, which labeled California, Texas, and Florida as the largest offenders; that is likely due to the sheer number of hosting data centers located there. Figure 21 indicates that the US is also one of the top hosts for phishing sites; more surprising is that China and Russia rank below the US. California and Texas are still the top offending states for phishing, but Florida drops out, replaced by Virginia and Illinois.

Overall, the SIR is packed full of raw data and visuals that do a great job of helping you understand the state of security in the world. As I wade through the full report, I'll be sure to post any interesting findings to this blog.

Until next time,
- Jeff

Enterprise Browser Security is so 2001

CNet News ran a story this morning citing Forrester statistics on the web browsers typically deployed within corporate enterprises. One statistic in particular made my jaw hit the floor:

"remarkably, 60 percent of enterprises are still on IE 6..."

Internet Explorer 6! Now keep in mind that there are no doubt hidden details behind that statistic, as it seems a bit high; but regardless, to hear that more than half of enterprises still have IE 6 deployed at some level is truly concerning.

Vulnerabilities

Let's start off by looking at some statistics courtesy of Secunia:



The 120 vulnerabilities in the chart cover only the last seven years. The full scope, since IE 6 came to market, is actually 136 advisories covering a total of 147 individual vulnerabilities.



Even more concerning is the fact that, as of today, 22 of those advisories relate to vulnerabilities that have not yet been patched (or never will be). Combine that with the fact that a company which deploys an eight-year-old browser is unlikely to be lightning fast at deploying patches, and it's not difficult to understand why malicious code continues to have phenomenal success simply by targeting known vulnerabilities.

Functionality

Personally, I am less concerned about the vulnerabilities IE 6 has had in the past (what is broken can be fixed) and more concerned about what the browser lacks. Microsoft has done an admirable job over the years of adding security functionality to their browser; I have publicly applauded them for being the first major browser vendor to deploy protections against reflected cross-site scripting (XSS) attacks. Let's look at the major security protections added after IE 6:

Internet Explorer 7
  • Phishing filter
  • ActiveX opt-in
  • Extended Validation SSL certificate support

Internet Explorer 8
  • Malicious URL filter
  • XSS filter
  • Data Execution Prevention
The aforementioned security controls are essential, and some, such as the phishing and malicious URL filters, have now become standard controls across all major desktop browsers. Not upgrading end-user systems to newer versions of IE, even when they are routinely patched in a timely fashion, leaves enterprises open to significant risk. Attackers have shifted their focus away from the server side to target end users via the web browser, and it's no wonder when you consider the security gap left by holding on to a browser as dated and broken as IE 6. I agree with others that it's high time Microsoft force a migration by declaring an official end of life for IE 6.

Why?

Why would any professional responsible for IT security ever deem it acceptable to have a browser in the enterprise that leaves such a gaping security hole? They wouldn't. When I see statistics like these, it is clear to me that security teams lack the control to make decisions about the applications which exist within the enterprise. IE 6 is running in 60% of enterprises for compatibility reasons--because some legacy intranet site requires it to stay alive. There can be a whole host of reasons why changes haven't been made to alleviate the compatibility issue that is preventing a browser upgrade, but in the meantime, enterprise security suffers.

Should the org chart be changed to give the security team a greater voice? Perhaps. That can, however, be a long, drawn-out political battle with no guarantee of success. What will produce immediate results? I don't think I've ever heard a better argument for deploying security controls in-line, to stop threats before they reach the browser. My bias is clear, and I don't care whether you deploy an appliance, an MSSP, or a cloud-based solution--such controls are an important component of a defense-in-depth approach, especially when you can't control what resides on the desktop.

- michael