Friday, May 22, 2009
First I wound up searching for requests containing the string "Mozilla%20Default%20Plug-in", which is the string name of the plugin that ships with many Firefox-based browsers. It also has very little chance of being a false positive. Through logs stretching over the first half of May I found over 56,000 requests containing my target plugin string. I then grouped and ordered the requests per host. Overall, those 56,000 requests belonged to only 1056 hosts, of which nearly half (502) were sub-domained off Omniture's .2o7.net domain. Some interesting hostnames in the overall list include z.digg.com, ostats.mozilla.com, mtrics.cdc.gov, a.consumerreports.org, metrics.npr.org, stumbleupon.stumble-upon.com, a.ncbi.nlm.nih.gov, www.ac.vic.gov.au, metrics.aarp.org, and stateofgeorgia.122.2o7.net.
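The group-and-count pass is easy to script. Here's a minimal Python sketch of the idea; the Squid-style log format (full request URL as a whitespace-separated field) and the sample lines are assumptions for illustration, not my actual log format:

```python
from collections import Counter
from urllib.parse import urlparse

NEEDLE = "Mozilla%20Default%20Plug-in"

def hosts_matching(log_lines):
    """Count requests containing the target plugin string, grouped by
    destination host. Assumes each log line carries the full request
    URL as one whitespace-separated field (as in many proxy logs)."""
    counts = Counter()
    for line in log_lines:
        if NEEDLE not in line:
            continue
        for field in line.split():
            if field.startswith("http"):
                counts[urlparse(field).hostname] += 1
                break
    return counts

sample = [  # hypothetical log lines
    "1242950000.123 GET http://metrics.npr.org/b/ss?p=Mozilla%20Default%20Plug-in 200",
    "1242950001.456 GET http://z.digg.com/t?p=Mozilla%20Default%20Plug-in 200",
    "1242950002.789 GET http://example.com/clean 200",
]
print(hosts_matching(sample).most_common())
```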
Next I thought I would see what other plugins were popular. Since Omniture uses a consistent URL request format, and composes the bulk of the requests, I decided to process the logs, pulling plugin usage data out of requests to Omniture and tabulating plugin usage on a per-client-IP basis. The most popular plugins, in order, were:
Mozilla Default Plug-in
Windows Media Player Plug-in Dynamic Link Library
Microsoft Office 2003
iTunes Application Detector
Shockwave for Director
Java TM Platform SE 6 U13
Citrix ICA Client
Windows Presentation Foundation
2007 Microsoft Office system
QuickTime Plug-in 7.6
RealPlayer tm G2 LiveConnect-Enabled Plug-In 32-bit
RealPlayer Version Plugin
RealJukebox NS Plugin
QuickTime Plug-in 7.5.5
Move Media Player
Java TM Platform SE 6 U7
DivX Web Player
Java TM Platform SE 6 U11
MetaStream 3 Plugin
ActiveTouch General Plugin Container
DivX Player Netscape Plugin
Microsoft Windows Media Player Firefox Plugin
Java TM Platform SE 6 U5
QuickTime Plug-in 7.2
Java TM Platform SE 6 U2
Turner Media Plugin
Microsoft Office Live Plug-in for Firefox
getPlus for Adobe 15235
VMware Remote Console Plug-in
Java TM Platform SE 6 U3
getPlus for Adobe 15229
Avocent DSView Session Launcher Plugin
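The per-client-IP tabulation behind a list like this can be sketched in a few lines of Python. This assumes the Omniture beacon carries the plugin list semicolon-separated in a single query parameter; the parameter name `p` and the sample URLs below are hypothetical, not Omniture's documented format:

```python
from collections import defaultdict
from urllib.parse import urlparse, parse_qs

def tally_plugins(requests):
    """Tabulate plugin usage on a per-client-IP basis.

    `requests` is an iterable of (client_ip, url) pairs for the metrics
    beacons. Assumes the plugin list travels semicolon-separated in a
    query parameter named "p" (the parameter name is an assumption)."""
    seen = defaultdict(set)  # plugin name -> set of client IPs using it
    for ip, url in requests:
        params = parse_qs(urlparse(url).query)
        for plugin in params.get("p", [""])[0].split(";"):
            if plugin:
                seen[plugin].add(ip)
    # each plugin counts once per client IP, most popular first
    return sorted(((len(ips), name) for name, ips in seen.items()),
                  reverse=True)

reqs = [  # hypothetical beacon requests
    ("10.0.0.1", "http://x.112.2o7.net/b/ss?p=Mozilla%20Default%20Plug-in;DivX%20Web%20Player"),
    ("10.0.0.2", "http://x.112.2o7.net/b/ss?p=Mozilla%20Default%20Plug-in"),
    ("10.0.0.1", "http://x.112.2o7.net/b/ss?p=Mozilla%20Default%20Plug-in"),
]
print(tally_plugins(reqs))
```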
It's no surprise that Flash, Acrobat, etc. would be at the top of the list. Keep in mind that attackers are actively targeting web browser plugins; the list above should help get the point across that there is no shortage of targets out there. It should also illustrate that things are sometimes plugged into browsers that you wouldn't otherwise expect. For example, did you know that Microsoft Office installs a browser plugin? It's always worthwhile to understand the attack surface you are exposing to the Internet.
Until next time,
Monday, May 18, 2009
As I mentioned in my earlier blog, phishing is a never-ending threat for web security. Another threat to web users is the rise in ActiveX vulnerabilities. These are very easy to exploit, as a great deal of information (vulnerability details, proof-of-concept exploits, etc.) is freely available on the Internet. ActiveX controls can be automatically downloaded and executed by Internet Explorer (with user acceptance) when viewing a web page, or installed as part of a larger application. Basically, ActiveX controls expose various properties and methods, which can lead to exploitation if they have not been properly coded. If someone were to find a vulnerable property or method in an ActiveX control, it would not be difficult to create a working exploit and host it on a web server. If the vulnerable control is marked 'safe for scripting', it can then be remotely called and exploited by a malicious web site.
Numerous buffer overflow and file overwrite vulnerabilities have been found in ActiveX controls over the past few years, and working exploits are available for many of them. Using a heap spray technique, a popular method for reliably injecting shellcode into memory, an attacker can successfully exploit the vulnerable control. He simply needs to host the working exploit somewhere on a web server under his control and divert a victim to the site. If the vulnerable ActiveX control is present on the victim's machine, it will be exploited silently in the background, allowing remote code execution or arbitrary file overwrite. The attacker can then download and install additional malware or malicious programs.
A few months back, a vulnerability was found in Snapshot Viewer for Microsoft Access (MS08-041) that can be leveraged to download and save files to arbitrary locations on an affected system. As noted in a Symantec blog post, because this particular ActiveX control was signed by Microsoft, attackers were able to force installation/exploitation of the control without any user interaction. Web attack toolkits found in the wild, such as MPack and NeoSploit, are exploiting a number of such critical ActiveX vulnerabilities.
Because vulnerable ActiveX controls are so easy to exploit, they have become a popular attack vector. Heap spray code is very easy to work with and shows up in many attacks involving ActiveX exploits; it lets the attacker easily swap in whatever shellcode is required. He can then host the exploit code on a web page and convince victims to visit the site, typically by getting them to click a link in an HTML e-mail. New ActiveX vulnerabilities surface every day, and people are posting exploit code on public web sites. A quick search on SecurityFocus will reveal the latest ActiveX vulnerabilities.
Gone are the days when attackers targeted server-side vulnerabilities to compromise systems. Now they are focused on targeting end users via the web browser. The combination of heap spray and obfuscation used in these exploits makes the attacks difficult to detect by automated means. Certainly, the rise in ActiveX vulnerabilities and the exploit code posted on the web make them a real threat to web security. Setting a kill bit is the only option if the vulnerability is not patched and you do not have updated antivirus signatures. The kill bit is a registry entry for a particular CLSID that marks the COM object / ActiveX control referenced by that CLSID as non-loadable in the browser and other scriptable environments; setting it marks that particular control as forbidden to instantiate. You do this by modifying the data value of the Compatibility Flags DWORD for the CLSID of the ActiveX control. Microsoft publishes step-by-step guidance on how to set a kill bit for a given control.
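For illustration, here's a small Python helper that emits a .reg script setting the kill bit (the 0x400 Compatibility Flags value) for a given CLSID. The CLSID in the example is a placeholder; substitute the CLSID named in the vendor's advisory:

```python
KILL_BIT = 0x00000400  # Compatibility Flags value: "do not instantiate"

def killbit_reg_script(clsid):
    """Emit a .reg script that sets the kill bit for one CLSID.

    Import the resulting file with regedit on the affected machine.
    The CLSID used below is a placeholder, not a real control."""
    key = (r"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer"
           r"\ActiveX Compatibility" + "\\" + clsid)
    return (f"Windows Registry Editor Version 5.00\n\n"
            f"[{key}]\n"
            f'"Compatibility Flags"=dword:{KILL_BIT:08x}\n')

print(killbit_reg_script("{00000000-0000-0000-0000-000000000000}"))
```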
That’s it for now.
Sunday, May 17, 2009
This could be fairly straightforward if it were all about implementation problems. You could just run new software through your favorite testing method, and keep up with the patches. Add in as much explicit security technology (anti-virus, a firewall, NIDS) as your stakeholders are willing to pay for, and then keep telling stakeholders you need more staff or money to do X, or the company is at risk. (Cynical? Maybe. Typical 10 years ago? Yes. Typical now? Less so, but not entirely gone just yet.)
Enter design & configuration flaws, by which I mean all security issues in which every system is acting exactly as it is designed to act, yet an attacker is able to realize a threat. You could scan for some configuration flaws, and you could run new software through a design review, but these strategies won't give you much insight into how the new individual system will affect your overall security unless you take the systems' environment into account. That is, there are now 3 variables to tweak: stakeholder expectations, individual systems, and the environment of the systems. These variables are so interrelated that they can't be tweaked independently.
Here's how 2 common (and sometimes resisted) IT security techniques affect these variables, assuming the possibility of design & configuration flaws:
If you're like most IT security folks, you have some rules about how systems can be configured. Usually these rules are network-specific, e.g. if the machine lives on network X it can't be dual-homed and it must use Kerberos authentication. The ideal is, if the system doesn't meet these rules, it doesn't get on the network.
Rules like this affect both individual systems and the system environment. They have 2 purposes:
1. Fit an individual system into the environment without enabling any possible design & configuration flaws in this system.
2. Fit a new individual system into the environment without changing the environment in a way that enables a design or configuration flaw in existing systems.
For every rule on your security list, you should be able to say whether it is trying to do #1, #2 or both. If you can't, why is the rule on the list at all? As you can imagine, it will be a lot easier to determine the purpose of a rule and see whether a given set of security rules is effective and consistent for these purposes if someone has documented the intended properties of the environment. Whether you are writing a new set of rules or reviewing some existing rules, figuring out what you want in the environment is a great place to start.
One key environmental property is that individual systems affect each other in a known, limited set of ways. This means that for each environment you need a list of ways systems are allowed to interact. Can they mount each others' disks? Ssh in?
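One way to make that list concrete is to encode it as an explicit allow-table that tooling can check proposed systems against. A minimal Python sketch, with hypothetical system classes and interaction types:

```python
# Hypothetical system classes and interaction types; a real environment
# would derive these from its documented intended properties.
ALLOWED = {
    ("workstation", "fileserver"): {"nfs-mount", "ssh"},
    ("workstation", "webserver"):  {"https"},
    ("admin-host",  "webserver"):  {"https", "ssh"},
}

def interaction_allowed(src, dst, kind):
    """True only if `kind` is on the explicit allow-list for src -> dst;
    anything not listed is denied by default."""
    return kind in ALLOWED.get((src, dst), set())

print(interaction_allowed("workstation", "webserver", "https"))
print(interaction_allowed("webserver", "workstation", "ssh"))
```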
Other desirable properties are simplicity and consistency. For example, there should be some core services (usually including routing, DNS, and if applicable, DHCP) which are provided by the same source for every individual system in the environment.
I'm sure there are other useful properties for an environment; you get the idea. And yes, you could argue that these rules also affect stakeholder expectations, by presenting a picture of the facilities & properties of the environment.
Sign Off on Exceptions
Network X will sometimes be the most appropriate place for a system which simply cannot meet the rules. The standard way most IT security departments respond is to have an exception process which requires review and sign off, frequently by some high-level political figure at the company. This leads many people, including some security people, to believe that the primary purpose of such a process is to make it harder to get an exception than it is to comply with the set of rules to start with. To you I say: Danger! You are missing the point.
This review and sign off process affects individual systems, the system environment and stakeholder expectations. The purposes should be:
1. Determine whether the proposed exceptions will enable any design & configuration flaws in this system, and document any additional properties of the environment (beyond the initial list of intended properties) which are protecting this system.
2. Determine whether the proposed exceptions will enable any design & configuration flaws in other systems in the environment, and document any changed or additional properties of the environment (beyond the initial list of intended properties) adding this system will cause.
3. Ensure that stakeholders understand and approve any flaws that have been enabled, and any threats an attacker could now realize.
If this is going to work, you need to compare each proposed exception with the intended and actual properties of the environment. Answering a bunch of questions is not sufficient unless those questions reveal the information from #1 and #2. Also, it has to be cumulative. Previously authorized exceptions must change the documented properties of the environment, or you won't be able to tell whether this exception clashes with previous exceptions.
Whether to allow an exception is usually, in the end, a business decision, but the decision makers need to understand the security implications of what they're deciding, as in purpose #3. Doing this right will be hard enough that you won't need to add hoops just for their deterrent value.
I hope these ideas help you develop or re-design a plan for minimizing design flaws in deployment. Next time: some thoughts on design flaws and perimeter security.
Friday, May 15, 2009
I'm going to go ahead and overlook the obvious companies/services where you install a toolbar into your browser so they can watch and tabulate everywhere you go (like Alexa). That is an opt-in situation where you willingly give all your info to a third party. I'm also going to overlook any web proxy services that you might be using, because those are also opt-in situations with the same ramifications. And of course, any internal monitoring within an organization is also exempt; I want to focus on who, outside of your local network, organization, and your ISP, is receiving your surfing data without you explicitly opting-in to give it to them.
So let's start with HTTPS, as implemented in common web browsers. Most common browsers include certificate revocation checking capabilities for ensuring an HTTPS site certificate hasn't been compromised. When you access an HTTPS site, the browser will receive the server certificate and run off to the designated CRL and OCSP servers to query whether the certificate has been revoked. Those CRL and OCSP servers are operated by the certificate issuers, such as Thawte, Verisign, etc. The key here is that many SSL certificates have a one-to-one relationship with a specific web site (i.e. traditional certificates, not wildcard certificates); so a query to an OCSP responder for a web site certificate essentially tells the certificate issuer that you are in the process of accessing that site. Now, certificate issuers can only track accesses to sites they issue certificates for; but in 2007, Verisign and its subsidiaries were deemed to have issued over 57% of the SSL certificates in use, giving them the theoretical capability to track site access to over half of the HTTPS sites on the Internet.
Fortunately none of the OCSP responders I briefly reviewed set HTTP browser cookies, which means they are not tracking unique individuals (at least, not through simple browser means). That is a good thing; the best tracking granularity they can achieve is generally per source IP address (ignoring browser header/request and TCP protocol fingerprinting to differentiate different browsers/systems from the same IP). This can be problematic if you're a home user and have a one-to-one relationship with your IP address, but tolerable if you're behind a NAT or proxy that serves many users (giving it a many-to-one relationship with the IP address). [Aside/favor: If you happen to see a public OCSP responder setting browser cookies, let me know!]
Default browser start pages are another privacy-leaking avenue. Every time you start your web browser, Microsoft, Apple, Opera, or Mozilla hears about it through the browser's default start page request(s).
The start pages themselves may seem innocuous, but they can chain to more exposure. For example, Apple's start page includes a request to metrics.apple.com, which is really a pointer to Omniture (resolving metrics.apple.com returns a DNS CNAME to appleglobal.112.2o7.net); the request collects and sends a lot of identifying browser information including screen resolution and all installed browser plugins. I've seen occasions where responses to metrics.apple.com forward to Doubleclick.net too. Now, sure, it's common for web pages to include metrics/analytics and advertising links—but such elements on browser start pages means these syndication/service providers also have access to realtime information of when you start your browser. In addition to Apple's use of Omniture and Doubleclick, Opera's portal page includes resource links to Google Analytics, and Mozilla redirects to Google. Microsoft's portal page used Webtrends.com for metrics.
But more importantly, can any of these providers uniquely identify you? Microsoft relies on m.webtrends.com, and that site sets a unique cookie for you--so Webtrends has the ability to track you no matter where you go. Ditto for Apple, Omniture, and Doubleclick. The fact that these metric and advertising elements are hosted on browser start pages means the services have the opportunity to plant their unique cookie right at the beginning of your surfing session. Plus, look at where and how the start pages are hosted. Microsoft redirects from Microsoft.com to MSN.com, where the start page actually lives. By placing the start page on MSN, all of the MSN cookies are now available--and if you are logged in, that means Microsoft knows who you are and can individually identify you if they so choose. Since Mozilla redirects to Google, the same case generally applies to Google using its own cookies to identify/track you.
But fortunately this is only a minor information exposure; knowing you started your web browser is not exactly an earth-shattering privacy violation, and it only occurs at the very beginning of your surfing session (not when you open a new tab, etc.). Plus, this immediate information doesn’t expose where you actually wind up surfing to...just that you opened a browser. The previously mentioned OCSP issue reveals far more information regarding where you are surfing; but it is not the only feature that does so...
Many browsers have recently added various anti-phishing and safe-surfing features, which essentially look up the host/URL you are accessing in a database of known offenders to see if it should be blocked. Think about that for a second: for every URL/host you visit while surfing the Internet, a lookup is done to ensure the URL is safe. So how is that lookup being done?
Some of the browser vendors have designed their features so that databases of the known offenders are downloaded and stored on the user's local system, so they can be queried locally. This is ideal, as it means the lookups are fast (just look inside the downloaded file) and it doesn't expose the lookups that are actually occurring. However, note that I began this paragraph with the term "SOME of the browser vendors"...
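Here's a rough Python sketch of that local-lookup design: the vendor ships hashed bad-host entries, the browser queries its local copy, and no lookup ever leaves the machine. The hashing scheme and hostnames are illustrative only, not any vendor's actual format:

```python
import hashlib

def host_hash(host):
    # Illustrative: store only hashes so the raw list isn't exposed
    return hashlib.sha256(host.lower().encode()).hexdigest()

class LocalBlocklist:
    """Local-lookup design: queries are resolved against a downloaded
    database, so the vendor never sees which hosts you check."""

    def __init__(self, bad_hosts):
        self._hashes = {host_hash(h) for h in bad_hosts}

    def is_blocked(self, host):
        return host_hash(host) in self._hashes

bl = LocalBlocklist(["malware.example", "phish.example"])
print(bl.is_blocked("malware.example"))
print(bl.is_blocked("www.example.org"))
```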
It turns out the biggest questionable privacy offender is Opera. Their SiteCheck function, enabled by default and meant to warn you when you try to access a malware site, sends a real-time request to sitecheck2.opera.com for every new host you surf to. The transaction looks something like:
GET /?host=www.google.com&hdn=naLWSHPy7ud1pACYor32hg== HTTP/1.1
User-Agent: Opera/9.64 (Windows NT 5.2; U; en) Presto/2.1.1
HTTP/1.1 200 OK
Date: Fri, 15 May 2009 15:17:11 GMT
<?xml version="1.0" encoding="utf-8"?>
That means Opera hears about every web site host you access, and can keep a historical record if they really wanted to. Now, the SiteCheck queries are not utilizing cookies to track individual users, so tracking granularity is limited to per-source-IP resolution. But it's still a notable privacy exposure. Other browser features like IE8's Suggested Sites likely expose the same level of information, but I have not confirmed it personally (their level of privacy warnings and opt-in requirements seems to suggest so).
So now that we've gone through all of that, what can you do about it? Well, the good news is that you can generally turn off all of the above mentioned behaviors (you'll have to dig around in the Advanced Configuration areas of your browser)...but you might not want to. Sure, you can change your browser start page to something else without ramifications (actually, if you change it to a blank page, you'll find your browser starts faster because it doesn't have to initially load an external page). But OCSP checking and SiteCheck features are security protections that actually help keep you safe--so by turning that stuff off, you are opening yourself to more risk. So you need to decide what’s more important to you: your security, or your privacy.
Friday, May 8, 2009
Let's entertain some examples. First we'll start off with the realm of cryptography. Once upon a time, DES, MD5, and SHA1 were considered very secure. They were (actually, they still are even now) widespread standards in many applications. And yet, all three of those mathematical critters are now considered insecure. DES has been dead for over a decade (when the EFF built DeepCrack in 1998), MD5 fell in 2004, and the final nail was put into SHA1's coffin last week. What happened? In some cases, advances in and availability of technology diminished the provided security value of the algorithm. In other cases, mathematical wizards discovered properties that reduced the algorithms' effectiveness.
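Part of why these broken primitives linger is how effortlessly available they remain; every standard library still serves them up alongside their successors. A quick Python illustration (the message is arbitrary):

```python
import hashlib

msg = b"what was secure yesterday"
# MD5 and SHA-1 are still one call away in every standard library,
# which helps them persist in deployments long after practical attacks
# were published. New designs should prefer the SHA-2 family or newer.
for algo in ("md5", "sha1", "sha256"):
    print(algo, hashlib.new(algo, msg).hexdigest())
```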
It's not the first time that advances in technology and information have rendered a previous security control ineffective. In the 80s and 90s, many organizations deployed proximity access control cards from HID et al. to control access to buildings. These cards were essentially passive read-only RFID transponders operating at low frequencies (125kHz). The security of this technology was contingent on the assumption that it was too difficult for attackers to create or clone proximity cards. Fast forward a few decades, however, and there is now detailed information available on the Internet on how to build a proximity card cloner from off-the-shelf components; the cloner can be trivially used to defeat these legacy proximity access control systems. Fortunately vendors have moved to newer technologies (iClass, Mifare, etc., even though those have their own problems too) for RFID/proximity access control, but many 125kHz systems are still in use today.
But just because you updated to a newer technology that fixes the insecurity of an older one doesn't mean you're done. The well-accepted security patch lifecycle shows this is an on-going battle. When you are insecure, you apply a patch...which makes you secure. That is, until a new vulnerability is found in the patch (or the application areas that were not patched), and you're back to being insecure. What was fully patched and secure yesterday can be considered unpatched and insecure today; the "secure" disposition was only valid during a particular point in time. Today's secure version is tomorrow's insecure version.
What does this all mean at the end of the day? Security is not a purchasable asset that you buy once and then have; it takes continuous investment to keep it. So keep that in mind when you are laying out your capex plans.
Until next time,
Friday, May 1, 2009
The first thought I had is regarding figure 5 of the summary (page 5), which indicates that 59.1% of browser-based attacks on Windows XP targeted 3rd-party components, versus 94.5% on Vista. The positive message is that IE (and the native ActiveX controls that come along with it) running on Vista seems to be more secure; but the more interesting message is that IE running on XP is still being directly targeted and exploited. When you look at the breakout of the top ten threats against XP, you see that the Microsoft ones are largely old—MS06-01, MS06-057, MS05-014, MS06-071, and MS06-055. For those of you unfamiliar with Microsoft security bulletin naming conventions, the first two numbers denote the year; so these vulnerabilities were essentially patched in 2005 and 2006, and yet still actively (and successfully!) exploited in very widespread fashion in late 2008.
Later in the summary document there is a mention that the majority of exploited Microsoft Office applications were Release-To-Manufacturing (RTM) versions. The most successful exploit was CVE-2006-2492. Also interesting was the growing trend of PDF vulnerabilities, shown in Figure 10: CVE-2007-5659 and CVE-2008-2992 account for the vast majority of PDF-based attacks. While the PDF vulnerabilities are slightly newer, the bulk of the Microsoft-specific exploitation used old bugs. Moral of the story: people are not getting p0wned by 0day...they are getting p0wned by 730day. That means thorough and timely patching is still a very important process to wrangle, because apparently a lot of people are still failing at it.
Of course, the summary doesn't say much about how many of those exploits hit home users vs. enterprises. Figure 15, however, does split out home vs. enterprise metrics for encountered threats/malware. Home users encountered a significantly higher number of Trojan downloaders and droppers, while enterprise users encountered a higher number of worms. Home and enterprise users were roughly equal in nearly all other categories. Miscellaneous Trojans (whatever that means in the Microsoft classification schema) showed the greatest growth of all encountered threat categories.
The document also contains some great geo-locational data on the sources of the encountered threats, broken out per country and per US state. Globally, it isn't much of a surprise to see the US, China, and Russia as the biggest offenders. More interesting was the US per-state breakout, which showed California, Texas, and Florida as the largest offenders. That is likely due to the overall number of hosting data centers located there. Figure 21 indicates that the US is also one of the top hosts for phishing sites; more surprising is that China and Russia host fewer than the US. California and Texas are still the top offending states for phishing, but Florida drops out, replaced by Virginia and Illinois.
Overall, the SIR is packed full of raw data and visuals that do a great job at helping understand the state of security in the world. As I wade through the full report, I'll be sure to post any interesting findings to this blog.
Until next time,
- Phishing filter
- ActiveX opt-in
- Extended Validation SSL certificate support
- Malicious URL filter
- XSS filter
- Data Execution Prevention