Friday, February 27, 2009

Big, white, puffy clouds can still evaporate

If you haven't heard by now, Google's Gmail had a 2.5-hour outage on Tuesday. Acacio Cruz, Gmail's Site Reliability Manager, explained the outage in a little more detail. My interpretation of his explanation is that they had a bug in how data is shifted around in the cloud, and during a shift one of the data centers couldn't handle the new burden, failed, and the effect eventually cascaded.

I have a few comments on this. First, Gmail still carries the 'beta' moniker...and aren't service disruptions like this kind of expected/tolerated with beta software? But beta or not, real people use the service--including Google themselves (according to Acacio Cruz's post). So maybe it's time to promote Gmail out of beta and designate it as production, since that's what it is to most people anyway.


Second, running a resilient cloud is hard, particularly when it comes to outages and the resulting automatic rebalancing to cover the gap and maintain availability. It's like a tire on a car: everything rotates routinely and drives smoothly when the tire is balanced. But if the center of balance of the tire shifts dramatically at a random moment, things quickly deteriorate and the high-speed rotation can cause significant instability and damage to the rest of the vehicle. Computing clouds are conceptually similar. What happens when an outage takes out a piece of the cloud and the finely orchestrated balance is randomly skewed...is the capacity of the rest of the cloud sufficient to absorb the load? Can the cloud rebalance itself gracefully? These are the same types of questions one will hear when investigating fault tolerance, failover, and redundancy capabilities of any normal technology component (servers, firewalls, WAN links, etc.). But the questions take on a whole new level of scale when you apply them to a cloud.
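
To put rough numbers on that capacity question (the figures here are made up, purely for illustration): if the surviving portion of the cloud can't absorb a failed member's load, the failure cascades.

    // N data centers, each running at 'util' fraction of capacity; one fails
    // and its load is spread evenly across the survivors. Numbers are made up.
    function loadAfterFailure(n, util) {
      return (n * util) / (n - 1);
    }

    loadAfterFailure(4, 0.80);   // ~1.07: survivors pushed past 100% capacity,
                                 // so the next one fails and the failure cascades
    loadAfterFailure(10, 0.80);  // ~0.89: enough headroom to absorb the loss

That arithmetic is exactly why the rebalancing questions above take on a whole new character at cloud scale.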

Plus let's not overlook the fact that we have different fault tolerance expectations for services in the cloud. The typical perception of clouds seems to imply internal failover capabilities. After all, that's part of the appeal of clouds: all that fault tolerance, redundancy, etc. is magically taken care of within the cloud. The cloud is infallible. The cloud cannot fail. Those are the dominant marketing messages. No one talks about having a contingency plan for when the cloud is unavailable (what would that be exactly? A second hot-spare cloud?).

So when choosing a cloud-based vendor, you should try to investigate their cloud resiliency. Vendors who take non-cloud-specific software, run copies of it in two or three data centers, and call it a cloud service are likely not delivering the experience customers expect. Software needs to be purpose-built to run in a cloud, including multi-tenancy design and strong internal failover capability. That's because most people wind up putting all of their eggs into the cloud basket. If you are going to be one of those people, you should try to make sure the basket offers the implied safety that you expect.

Until next time,
- Jeff

Thursday, February 19, 2009

EV-SSL, SSL, and who’s not using it

It seems I've been picking on EV-SSL lately. It's not intentional; it's just that I've encountered a lot of questionable marketing fluff lately and wanted to talk about it. Today is no different. Tim Callan's recent blog post got me thinking. Basically he referenced some SSL attacks that were disclosed at Black Hat, and suggested EV-SSL certificates as the cure to SSL MitM attacks and phishing in general. On the one hand, I believe his conclusion is conveniently aligned with his employer's sales agenda. On the other hand, I acknowledge that EV-SSL could raise the bar...if it's adopted and deployed in widespread fashion. And therein lies the crux.

I've previously talked about the EV-SSL adoption rate (or lack thereof) in this blog. Netcraft has reported finding one million valid SSL sites in use (note: sites with invalid, self-signed, or expired SSL certs are not included in that count). And while I don't have access to their commercial SSL Survey report for January 2009, we can infer from some third-party mentions that the report found around 10,000 EV-SSL sites. That's 3,000 more than Verisign mentioned last month, but perhaps Verisign was only reporting what they alone issued. Given their market share among EV-SSL cert vendors (~75%, albeit a year-old number), that seems to make sense.

Only 10,000 EV-SSL sites within two years seems very low to me (that's roughly 1% of the one million valid SSL sites Netcraft counted), especially for something which is essentially an enhanced derivative of an already understood and accepted technology. People already know what SSL is and what function SSL certs perform, so that is definitely not the hold-up. There just doesn't seem to be much adoption momentum, and maybe that's because the additional value these premium EV-SSL certs provide is questionable in the eyes of site owners.

But maybe I'm looking at this wrong. Only 10,000 EV-SSL sites seems a bit low, but what if they were 10,000 sites that matter? I suppose that would paint a different picture. If 10,000 big sites that account for a notable portion of the web's traffic used EV-SSL, that actually might not be so bad. So I entertained that theory. I manually surfed to each of the top 50 of Alexa's Global Top 500 Sites list, and recorded who was using EV-SSL vs. normal SSL vs. no SSL. The idea is that these sites represent the most visited sites in the world (according to Alexa), and as such presumably have a vested interest in assuring their voluminous user populations that their accounts and transactions are secure. If most of these sites were using EV-SSL, that would provide notable EV-SSL permeation into overall web traffic.

The process I used was simple: visit each of the Alexa-listed sites (the top 50), surf around to find a 'signin' or 'signup' form or other equally sensitive use of HTTPS, and then watch whether the URL address bar in my Internet Explorer 7 browser turned green when SSL was utilized. A green bar meant EV-SSL; a valid lock icon without the green bar meant standard SSL. I found that only two sites (ebay.com and login.live.com) were using EV-SSL. That's it. Just two. All of the other sites were using standard SSL, with the exception of ten sites that don't use SSL at all with their login forms (youtube.com, hi5.com, mail.ru, photobucket.com, vkontakte.ru, imageshack.us, friendster.com, skyrock.com, odnoklassniki.ru, and dailymotion.com) and two more that don't really receive sensitive info from the user and thus don't necessitate SSL (bbc.co.uk and cnn.com).

We might label that a bleak picture. Ten of the fifty top sites (20%) don't even use SSL for their logins. Only two use EV-SSL. I was a bit surprised to see that the online retail giant, Amazon.com, didn't use EV-SSL. If EV-SSL were a sure thing to lead to better customer confidence and more online retail conversions, I would have figured they would have been all over it. They would not be one to overlook a technological advantage if the ROI helped their bottom line...that is, after all, how they got started. And given that they store credit card data for quick purchasing convenience, there is a certain financial value in compromising someone's Amazon account; I'm surprised it hasn't been the target of more phishing attacks to date.

Also surprising was that Yahoo (login.yahoo.com), AOL (my.screenname.aol.com), and Google (www.google.com) all use standard SSL for their common backend authentication sites. These vendors offer multiple different web services, but tie them all back to a single authentication mechanism, much like a Single Sign-On (SSO) platform. And aren't some of these vendors OpenID providers? That means a successful phish of a single account could be leveraged across many sites/services, even outside the realm of just these vendors. On Microsoft's live.com site, I found that the login form (from login.live.com, which was presented over HTTP by default but did include a link to switch to HTTPS) did go to an EV-SSL site once submitted, but the signup page (signup.live.com) was handled with a standard SSL cert. Maybe they just want to discourage new signups while assuring existing accounts it's safe to login.

But here's another thought: maybe these sites invested in standard SSL certificates before EV-SSL was popular, and are just waiting for their current certs to expire before renewing with EV-SSL variants. That seems plausible, but the results are a bit mixed. Google renewed their current standard SSL cert in May 2008, and signup.live.com's was renewed in November 2008. That was fairly recent, and definitely after EV-SSL was well established. On the other hand, AOL's my.screenname.aol.com certificate was issued in March 2007 and Yahoo's is from January 2006; it's arguable they were issued before EV-SSL really caught on. So it's hard to say; the thought might hold true for some cases, but certainly not others.
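
Incidentally, these issue/expiry dates are easy to check for yourself. Here's a rough sketch, assuming a Node.js-style JavaScript runtime (the hostname is just an example; you could feed it the whole Alexa list):

    // Connect over TLS and print the peer certificate's validity window.
    var tls = require('tls');

    function printCertDates(host) {
      var socket = tls.connect(443, host, { servername: host }, function () {
        var cert = socket.getPeerCertificate();
        console.log(host + ': issued ' + cert.valid_from + ', expires ' +
                    cert.valid_to + ' (issuer: ' + cert.issuer.O + ')');
        socket.end();
      });
    }

    printCertDates('www.wellsfargo.com');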

But again, maybe I'm still looking at it all wrong. EV-SSL might only make the most sense for certain verticals, like online banking sites. The Alexa list doesn't really have any bank sites in its top 50. So I did a spot-check of some big bank site URLs off the top of my head. Of all the ones I tried, only BankOfAmerica's login site (which is separate from their normal site) used EV-SSL:


  • www.bankofamerica.com: standard SSL (renewed December 2008)
  • sitekey.bankofamerica.com: EV-SSL
  • www.wellsfargo.com: standard SSL (renewed June 2008)
  • online.wellsfargo.com: standard SSL (renewed July 2008)
  • www.chase.com: standard SSL (renewed August 2008)
  • chaseonline.chase.com: standard SSL (renewed April 2008)
  • online.citibank.com: standard SSL (renewed December 2007)
  • www.us.hsbc.com: standard SSL (renewed July 2008)
  • myonlineaccounts2.abbeynational.co.uk: standard SSL (renewed February 2009**)

That last one is very interesting. Abbey National Bank has been a notable target of phishing attempts going back a year. And yet, when they renewed their SSL certificate last week (February 12, 2009), they went with a standard certificate instead of EV-SSL. Whether they didn't know or didn't care that EV-SSL could help assure users against phishing, I don't know. But if a bank that's been phished for over a year doesn't buy into EV-SSL when it has the convenient opportunity...who will? Marketing fluff aside, that is probably the absolute best-case application for EV-SSL: user accounts with access to monetary instruments, a site that has been an active phishing target for over a year, and a need for new SSL certs anyway. Any one of those reasons by itself is supposedly grounds for EV-SSL, and they had all three. Oh well, maybe they'll reconsider EV-SSL in 2010 when their new cert expires.

Until then,
- Jeff

**Postscript: it seems Abbey National Bank's site uses a cluster of SSL web servers with different SSL certs installed. In some cases, I connect to a server that gives me an SSL cert issued on Feb 12, 2009. On other occasions, I get connected to a server that gives me an SSL cert that expires on Feb 24, 2009 (i.e. next week). So apparently they have upgraded the SSL certs on some of their servers, but not all. I just wanted to mention this in case you were poking around yourself and noticed the discrepancy. Here is a screenshot of the newer SSL cert, to prove I'm not making stuff up:


Practical Example of csSQLi Using (Google) Gears Via XSS

Comment: I would like to go out of my way to thank Paymo.biz for the professionalism that they displayed in promptly responding to vulnerabilities brought to their attention to ensure that their users were protected. Within 24 hours, they had responded to my initial communication and shortly thereafter were sharing proposed protections which were then quickly implemented on production systems. Web application vendors can learn from their example.

Yesterday, at the Black Hat DC security conference, I spoke about the dangers of persistent web browser storage. Part of the talk focused on how emerging web browser storage solutions, such as Gears (formerly Google Gears) and the Database Storage functionality included in the emerging HTML 5 specification, could be attacked on sites with existing cross-site scripting (XSS) vulnerabilities. The overall message is that while such technologies have built-in controls to protect against attacks such as SQL injection (SQLi), when secure technologies are implemented on insecure sites, those protections become meaningless.

Both Gears and HTML 5 Database Storage permit web applications to store content in local relational databases, which reside on the local file system in the SQLite database format. This provides powerful functionality, as web applications can now be taken offline, as was recently done with Gmail. At the same time, it adds a new attack vector, as persistent data can now potentially be attacked on the desktop, not just the server. Given that we're dealing with a relational database, is client-side SQL injection (csSQLi) possible? Unfortunately, the answer is yes, and it's not simply a theoretical attack; it's very practical thanks to the significant prevalence of XSS vulnerabilities.
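
Concretely, interacting with these local databases looks something like the following sketch (the database, table, and variable names here are my own, purely for illustration). Note the '?' placeholders: values are bound by the driver rather than spliced into the SQL string, which is the built-in SQLi control mentioned above.

    var userInput = "o'reilly";  // even quote characters are harmless when bound

    // Gears:
    var db = google.gears.factory.create('beta.database');
    db.open('notes');
    db.execute('CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)');
    db.execute('INSERT INTO notes (body) VALUES (?)', [userInput]);  // bound parameter

    // HTML 5 Database Storage (as implemented in WebKit):
    var db2 = openDatabase('notes', '1.0', 'demo', 1024 * 1024);
    db2.transaction(function (tx) {
      tx.executeSql('INSERT INTO notes (body) VALUES (?)', [userInput]);
    });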

Both Gears and HTML 5 Database Storage leverage client-side JavaScript to create and interact with local databases. Therefore, if an XSS vulnerability is present, it's all too easy for an attacker to compromise the confidentiality and integrity of locally stored data by reading from or writing to the local database. Web applications with local databases via HTML 5 are presently rare, because only WebKit-based browsers (i.e. Safari) have implemented the functionality. This seems poised to change, however, with recent announcements that offline Gmail access is being developed for the iPhone and Android-based phones via HTML 5 Database Storage. Gears-enabled applications, on the other hand, have already started to spring up, and csSQLi has therefore become a reality.

Paymo.biz, a time tracking application, was previously vulnerable to csSQLi due to the existence of XSS vulnerabilities on the site. The associated image details how JavaScript could be injected to read from the local database and, in this proof of concept, simply write the data to an alert box. Step #1 is specific to this situation, as the XSS injection point occurs within paragraph tags, which need to be closed. Gears is a JavaScript API, so in step #2 the API is included in order to expose the necessary functions. In step #3, the local database is opened. In traditional (server-side) SQLi, the database structure must first be determined through error messages or other brute-force means. With csSQLi this is not a challenge, since the database structure is exposed: to determine the names of tables, columns, etc., an attacker simply needs to access the application himself first to establish a local copy of the database on his own machine. The SQL statement is executed in step #4, and once again csSQLi is far less challenging than a server-side SQLi attack. Because XSS facilitates the attack, the SQL command can be composed from scratch; there is no need to inject commands into the middle of a preexisting SQL statement. The results of this particular injection can be seen in the associated screenshot, which simply displays default data from a newly configured database.
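
As a rough reconstruction of steps #2 through #4, the injected script might look like the following. This assumes the standard gears_init.js bootstrap has already been included per step #2; the database and table names here are hypothetical stand-ins, not Paymo's actual schema.

    var db = google.gears.factory.create('beta.database');
    db.open('paymo');                            // step #3: open the app's local database (name is a guess)
    var rs = db.execute('SELECT * FROM tasks');  // step #4: SQL composed from scratch ('tasks' is hypothetical)
    var loot = [];
    while (rs.isValidRow()) {                    // walk the result set row by row
      loot.push(rs.field(0) + ': ' + rs.field(1));
      rs.next();
    }
    rs.close();
    alert(loot.join('\n'));                      // proof of concept only; a real attack would exfiltrate the data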

While Gears has not yet been widely adopted, I expect this to change in the coming months, especially with the exposure that the technology will receive thanks to recent integration with Gmail. As users recognize the power of being able to take web applications offline, developers are sure to investigate adding Gears to their own applications. It's important to note that this attack has nothing to do with insecurities within the Gears technology itself. As mentioned, the attack is made possible when Gears is implemented on a site with existing XSS vulnerabilities. Unfortunately, XSS is a vulnerability which is far too prevalent on the web today. As such, it is inevitable that we'll see sites vulnerable to csSQLi. I hope that this early example illustrates the risks associated with offline browser storage and the importance of ensuring overall application security before adding this powerful functionality. Don't avoid Gears...embrace it...but do so securely.

A full copy of the slides from my talk entitled A Wolf in Sheep's Clothing - The Dangers of Persistent Web Browser Storage is available here.

- michael

Monday, February 16, 2009

Brainstorming != Security Analysis

I mentioned a community meeting I recently attended to some friends, and they were astonished to learn that not only did I think our brainstorming session was productive, I actually suggested it. Apparently, I have ranted so much about brainstorming that my friends think I disapprove of brainstorming as a concept. Whoops. :) Let me explain.

Usually groups use brainstorming to generate a bunch of ideas about some topic. Brainstorming (especially the round-robin technique we were using at the community meeting I described) is a great way to ensure that everybody's point of view is represented. It's all about inclusion, suspending judgment, listening, and other touchy-feely stuff that is very important in a community, especially one that is just forming. But as Wikipedia tactfully puts it, "researchers have not found evidence of its effectiveness for enhancing either quantity or quality of ideas generated. … [B]rainstorming groups are little more effective than other types of groups, and they are actually less effective than individuals working independently." In my opinion, the main reason to use it is to improve the team's soft skills, understanding of each other, morale, or similar, which is exactly what we needed at the community meeting.

And it should make a good security analysis technique, right? Hah. Having a happy, well-adjusted analysis team that works well together is desirable, but hardly ensures decent results. Likewise, even if brainstorming were good for generating lots of ideas, having a lot of ideas (on any topic) is not a major contributing factor for good security analysis results. Yet many current security analysis techniques have a step (e.g. "now we all get together & think up everything that could possibly go wrong") that looks a lot like brainstorming. What is going on here?

For starters, most consumers of security analysis results (e.g. non-security developers or management folks) can count just fine, but don't have the skills or knowledge to tell the difference between high and low quality analysis. Therefore, they like quantity. Analysis consumers will feel more satisfied and comfortable when analysis investigates and finds more issues. This is not entirely unreasonable; my experience indicates that even for a small system, there are usually quite a few things to investigate. If analysis only investigates 3 potential issues, I expect low quality analysis. In the frequent situation in which the analyst doesn't have a lot of security analysis experience (e.g. the do-it-yourself model many security development lifecycles suggest) but knows that investigating 3 issues is not enough, brainstorming starts to look like a good technique for identifying potential issues: its stated goal is to increase quantity of ideas, and quantity of ideas is what seems to be missing. Brainstorming is so well-accepted as an idea generation technique that it rarely occurs to anyone to investigate its effectiveness. (For the record, what is actually missing is coverage. Quantity of potential issues is a secondary indicator of coverage; quantity of found issues is a secondary indicator of a combination of coverage, analysis quality, granularity of analysis or reporting, target quality and many other interrelated factors.)

Next, most experienced security analysts can't accurately explain what they are doing when they come up with a list of potential issues to investigate. When an inexperienced security analyst asks, the more experienced guy will frequently say something like "I start by making a big list of possible stuff to look into". He might even use the word brainstorming. When the inexperienced analyst looks at the output, it will look like brainstorming output. The trick is that brainstorming is not what he's doing. What the experienced analyst is actually doing is very similar to what Benjamin Kovitz (Practical Software Requirements) explains engineers do instead of the functional decomposition they appear to do: matching what he already knows to the target at hand, and writing a list of potential issues he already knows about for this or similar targets. If you get a group of experienced analysts together, they will do what looks like brainstorming and come up with a great list, each of them unconsciously using her own experience.

What happens when an inexperienced analyst tries brainstorming for security analysis purposes? Let's say he gets a group together, and they come up with some ideas. Frequently these days, the ideas are couched as a threat model. One such threat model I vividly recall receiving (as a guide to the code review and penetration testing I was supposed to do for a client) had one (1) threat. When I asked the security-inexperienced client about it, I found the development team had brainstormed a list of threats, but the application was so small that this was the only idea they had come up with. It's true, it was a small application with limited security requirements; if I recall correctly, I came up with only 2 dozen or so threats (maybe 50ish variations for my colleague and me to investigate) when I did a brief, more systematic analysis. This is an extreme example of intelligent but inexperienced analysts applying brainstorming in good faith and getting lousy analysis results for their trouble. More typical results in my experience are that inexperienced analysts think of around 5-15% of what experienced analysts investigate, with most threat models by inexperienced authors containing 5-12 potential issues to investigate. (These are approximations based on experience, not hard statistics.)

Now imagine that the inexperienced analyst looks at the 1 potential issue he got out of the initial brainstorming session, and thinks to himself, "hmm, something went wrong; better get some help". What does he do? He calls his more experienced buddy and has that buddy facilitate a brainstorming session. I've been that buddy, and having talked to a lot of other more experienced buddies, I can say the brainstorming session facilitated by the more experienced analyst is usually pretty successful. Why? It's not brainstorming either. The experienced analyst is asking questions that will guide participants to realize the potential security issues for this target, and helping them fill in the blanks (again based on her own experience) when participants miss something.

After a brainstorming meeting, most participants will express great satisfaction with the meeting, the process, and the results. This is great, but when interpreting their happiness, remember that suspending judgment is part of the brainstorming process and a morale boost is one of the effects. Also keep in mind that people who suggest that brainstorming is not an appropriate technique to use are frequently accused of having a bad attitude, not being team players, and so on, and may therefore be keeping their mouths shut. Either participants are experienced and/or knowledgeable, in which case they unconsciously used their experience and knowledge to get good results, or they are not, in which case they usually can't tell how good their results are (yet; some participants will notice the low quality as analysis proceeds). My experience has been that by the time a brainstorming-based security analysis completes, most participants have started to question the quality of results, the ROI, or both. Unfortunately, what's done next time is often based on familiarity and the emotions people expressed during the process ("Last time I led a group through this they were very happy with the results."), rather than costs and benefits calculated at the end.

Brainstorming is NOT a security analysis technique. Any security analysis technique that looks like brainstorming is suspect. Either it doesn't work but nobody can (or will) tell, or something else is going on behind the scenes. Find out which is happening for you, then skip the brainstorming and head straight for the secret sauce.

--Brenda

Friday, February 13, 2009

Rough Week for Security Companies

It's been a rough week for security companies, especially for their webmasters. A Romanian hacking group decided to embarrass as many as possible by identifying SQL injection (SQLi) flaws in public-facing websites, and Kaspersky, F-Secure, and BitDefender (actually a reseller) all fell victim to the attacks. Now, SQLi is far from a new issue, and it's widely accepted that pre-auth SQLi vulnerabilities are critical flaws requiring immediate attention. Despite this, we see no shortage of SQLi, and it is increasingly becoming a favorite tool of botnet authors, who use such vulnerabilities to inject content into otherwise legitimate web pages in order to redirect browser traffic. I recently commented on the volume of such attacks that we're seeing and stepped through a specific example.

I spend most of my time worrying about emerging threats - the risks that we'll face in the months and years ahead. However, from a business perspective, greater damage comes from current, widespread attacks, and companies, rightly so, dedicate the majority of their security resources to combating the 'clear and present danger' they face. The events of this past week caused me to ponder how we're doing as an industry in combating what is now a mature and well-understood vulnerability - SQLi.

Unfortunately, web application security statistics are somewhat hard to come by. The best come from WhiteHat Security, which publishes an overview of what they're seeing from scanning client websites. They publish these reports a couple of times a year, so I dug up all of their statistics dating back to November 2006. Sadly, as can be seen in the WhiteHat chart, progress appears to be relatively flat. The WhiteHat statistics shed light on the likelihood that a particular vulnerability will exist on a given website they review. We can see that over the past 2+ years, on average, there is a 17 1/3% likelihood of discovering a SQLi vulnerability on any given site. That's truly frightening, and while it helps to explain why such vulnerabilities are so prevalent, it doesn't explain why they aren't going away.

I also pulled statistics from a 2007 survey conducted by the Web Application Security Consortium. These statistics provide a consolidated view from eight separate security services vendors. This time around, we see that approximately 1/4 of sites scanned revealed SQLi vulnerabilities - even worse.

The question we need to ask ourselves is: why are the statistics not improving? In my opinion, while the security industry is getting better at detecting and patching such issues, and businesses better understand the risks associated with mature vulnerabilities such as SQLi, the population of new developers and web applications is growing at an even faster pace. Who is a web application developer today? Thanks to point-and-click tools, just about anyone can be a web application developer - but that doesn't mean they're a good developer, or one who understands and implements secure coding practices. We've struggled for a long time to educate developers about security, and while such initiatives are important, they will never be enough. Education alone will never drop the statistics included in this blog, because the vast majority of 'developers' will never receive such training. The majority of developers don't have a Computer Science degree, and they may not even get paid to develop. As such, vulnerability statistics will only begin to drop once protections are built into the architectures of the point-and-click development tools, implementing security by default (a sketch of what that means follows below). Fortunately, we have started to see vendors take such steps. Only when the 'everyman' developer can be protected from attack without security knowledge will we truly see a drop in vulnerability statistics.
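
To make the 'security by default' point concrete, here is a sketch of the difference a development tool can enforce. The db and request objects are hypothetical stand-ins for whatever API a given framework exposes:

    var name = request.params.user;  // attacker-controlled input

    // What too many generated or hand-rolled apps do: splice input into the
    // SQL string. Input of  x' OR '1'='1  rewrites the query's logic entirely.
    db.query("SELECT * FROM users WHERE name = '" + name + "'");

    // What a secure-by-default tool should emit: a bound parameter. The value
    // is handed to the driver separately and is never parsed as SQL.
    db.query("SELECT * FROM users WHERE name = ?", [name]);

A tool that only ever emits the second form protects the 'everyman' developer without requiring any security knowledge at all.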

- michael

I am a virus, don't click me unless you're a twit

Yesterday some spam-esque messages hit a portion of the Twitter population. Twitter spam is nothing new, but there is a twist to this case. The Twitter messages read:

don't click: http://tinyurl.com/amgzs6

If you did click the URL anyway despite the warning, you were brought to another page that had another "don't click" button on it. If you clicked that button (again, ignoring the warning), you became the victim of a clickjacking (twit-jacking?) attack on your Twitter account. The author of this clickjacking demonstration actually documents the whole brouhaha in his blog.

TinyURL removed the offending URL, and Twitter added some frame-busting JavaScript code to their pages in order to make them less "clickjackable." But what is probably the most concerning outcome of this whole situation is that users' curiosity trumped security education. They were warned not to click something, twice, and they did it anyway.
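
For reference, the classic frame-busting snippet of that era looks something like the following; this is a generic sketch, not necessarily Twitter's exact code. If a page finds itself loaded inside someone else's frame, it breaks out:

    // If this page is framed by another page, point the outermost window
    // at our own URL, busting out of the attacker's frame.
    if (window.top !== window.self) {
      window.top.location = window.self.location;
    }

It's a best-effort defense; variations of this check can be circumvented, which is why it only makes pages less "clickjackable" rather than clickjack-proof.

Twitter admins had to actually send a message to people stating the obvious: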




But then again, if the vast majority of people will tell you their password for a candy bar, I suppose we shouldn't be that surprised.

Happy Friday the 13th!

- Jeff

Friday, February 6, 2009

Botnet Use of Unregistered Domain Names

A growing number of botnets are using unregistered domain names as a means of establishing a command-and-control network. The basic idea involves an algorithm that produces a series of seemingly random domain names, which can be registered at a later date as needed. This approach helps ensure that individual hard-coded domain names/IP addresses can't be blocked or taken offline in order to kill the botnet.
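
To illustrate the concept (this toy sketch is mine; it is not any real botnet's algorithm), bot and botmaster share a deterministic generator seeded by something like the current date, so both sides can compute the same candidate rendezvous domains independently:

    // Toy domain-generation sketch. The alphabet mirrors the letters seen in
    // the Srizbi-style names listed below, but the algorithm itself is made up.
    function dgaDomains(seed, count) {
      var alphabet = 'defghistuw';
      var domains = [];
      for (var i = 0; i < count; i++) {
        var name = '';
        for (var j = 0; j < 8; j++) {
          seed = (seed * 16807) % 2147483647;   // Park-Miller LCG step
          name += alphabet.charAt(seed % alphabet.length);
        }
        domains.push(name + '.com');
      }
      return domains;
    }

    // Seed from today's date so every infected host derives the same list:
    var d = new Date();
    var seed = d.getUTCFullYear() * 10000 + (d.getUTCMonth() + 1) * 100 + d.getUTCDate();
    var candidates = dgaDomains(seed, 5);

The botmaster only needs to register a handful of upcoming names just before they are needed; defenders must predict and block (or buy) all of them.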

The Srizbi botnet employed this tactic, and FireEye, a security firm that focuses on botnet protection, attempted to get a step ahead by registering the generated domain names before they could be used. This was a noble effort; however, as might be expected, costs quickly escalated, and the effort was abandoned once it was determined that it could cost the firm $4,000/week just to secure the domain names.

I was poring over some log files this week and was struck by the volume of DNS requests from infected machines for these seemingly random, as-yet-unregistered domain names. Despite the coordinated effort to take down Srizbi in November 2008, it would appear that there remains no shortage of infected zombies associated with Srizbi. A sample of the domains being requested:

dfswuhet.com
dgudhdde.com
dihhushd.com
dudteigi.com
eastgage.com
edihhsfd.com
eefiwusg.com
euuetweg.com
fdsdeitu.com
ffwhutgi.com
fhhshddh.com
fiituhew.com
fthdedut.com
gewwhisd.com
gfssguhu.com
gstweude.com
gueswifu.com
guffesuf.com
gwuishts.com
heffiehs.com
hfiesfsu.com
hstuwhhe.com
sdghtife.com
sgtgewiw.com
shtfewwd.com
spoyahoo.com
sthdsstd.com
sthhdist.com
sugteegt.com
teisudgs.com
tewsshsg.com
twhtsdsf.com
ugifsfed.com
uidesgih.com
uiwegwth.com
uwfieuwd.com
wdttsewt.com
whdufuss.com
whwudshg.com

Thanks to the reverse engineering efforts of the team at FireEye, we have insight into the algorithm used to generate the domain names. It would appear, however, that we're dealing with a different variant than the ones inspected by FireEye, as the domains we're seeing do not line up with the future domain names predicted by the Srizbi Domain Calculator derived from the FireEye research.

More important than the domains listed above: we are beginning to see a very high volume of requests for unregistered, pseudo-random domain names that do not appear to be related to Srizbi. Conficker/Downadup has also employed this approach, and F-Secure has done a solid job of posting predicted command-and-control domain names for Conficker, based on their own reverse engineering work. However, based on the traffic that we're seeing, there appear to be various other botnets employing this approach as well.

Registering domain names isn't like digging ditches; the marginal cost to registrars is next to nothing, and it's high time they stepped up to assist in this matter. Entities like FireEye shouldn't need to expend real dollars to register domain names in an effort to stop the spread of a botnet. A coordinated effort between researchers and registrars needs to be established to ensure that future botnets cannot employ this tactic. It won't be easy, as there are many registrars responsible for the many TLDs now available, but it's a worthwhile initiative, especially as this trend continues.

- michael

Monday, February 2, 2009

Keeping security relative

I really liked today's XKCD web comic:



There have been multiple times in my career when I've encountered someone who was caught up in the 'sex appeal' of a security technology without looking at the practicality/relativity of what they were trying to achieve as a whole. One encounter I remember was an on-site discussion with a security staffer at a large financial institution. I was going over some security audit findings, and he remarked that he was disappointed that their password policy didn't fail the audit. I responded with a discussion of how their password policy was actually very good and well above industry best practice at the time, particularly in password length. His response was that it could always be made stronger by requiring more characters in passwords. That is true, but given the numerous other critical areas that failed the security audit and therefore allowed arbitrary attackers to compromise practically any desktop or server on their network...what was the real value of requiring longer passwords? He failed to see that there were easier ways for an attacker to get at the same data without dealing with passwords or their strength; thus an increase in password length for its own sake would not provide any additional security value.
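
To put rough numbers on the diminishing returns (illustrative arithmetic only): each added character multiplies the brute-force search space, so a long password is already astronomically expensive to attack head-on, and further length buys nothing against an attacker who simply goes around it.

    // Entropy of a random password: length * log2(alphabet size).
    function bitsOfEntropy(alphabetSize, length) {
      return length * Math.log(alphabetSize) / Math.LN2;
    }

    bitsOfEntropy(62, 8);   // ~47.6 bits for 8 mixed-case alphanumeric characters
    bitsOfEntropy(62, 12);  // ~71.5 bits: millions of times harder to brute force,
                            // yet worth nothing when the desktops holding the data
                            // are already compromised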

Sure, it's important to 'raise the bar' for security to lofty heights. But there comes a point where additional effort to further raise the bar of a single control doesn't return much additional security benefit (i.e. poor security ROI). And there will inevitably be a point when trying to further improve a specific security control becomes pointless, because an alternate avenue of attack presents a comparably much lower bar. In other words, you are wasting your valuable security dollars by overdoing one security control if another control protecting the same resource provides poor security benefit.

So every time you think about adding or increasing the level of security of a particular control, you should always take a moment, step back, and consider the level of security provided by controls encompassing alternate avenues of attack. If those other controls are under-performing, then you should consider raising the security level of the other controls instead. Otherwise you will find all of your great security effort is negated by a clever-but-predictable use of a $5 wrench.

Until next time,
- Jeff