In theory, it's pretty simple: for a set of controls to give you the security benefit you are looking for, there needs to be at least one control blocking every possible path from what the attacker can do before the attack (say, control Internet web sites your employees visit) to the thing you really don't want him to do (say, deny your employees access to your corporate information). If there is a path on which nothing stops the attacker, you have a security vulnerability, which means you need to change either your goals (maybe they're too aggressive for current technology to support), the system architecture, or the set of controls you're using. If there are paths on which multiple controls stop the attacker, you have defense in depth; how much depth is good depends on your level of paranoia and on the performance, administrative, and financial costs you are willing to put up with.
In practice, when you're trying to use security controls to meet a minimum bar, the hard part is knowing what paths are available to an attacker. Each path is made up of individual steps, and each step has starting privileges (S) and ending privileges (E). There are three kinds of steps to consider:
- By Design: the system is deliberately designed to give someone with privileges S the privileges E.
- Design Side-Effects: it didn't have to be this way, but the system is designed such that someone with privileges S can automatically get privileges E.
- Implementation Flaws: someone with privileges S can get privileges E even though the design of the system does not allow this step.
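The three step categories above can be captured in a tiny data model. As a sketch (the class names and privilege strings here are illustrative, not from any particular tool), each step is an edge from a starting privilege S to an ending privilege E, tagged with its kind:

```python
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    BY_DESIGN = "by design"
    SIDE_EFFECT = "design side-effect"
    IMPL_FLAW = "implementation flaw"

@dataclass(frozen=True)
class Step:
    start: str   # privileges the attacker needs before taking this step
    end: str     # privileges the attacker holds after taking it
    kind: Kind

# A few steps drawn from the examples later in this article:
steps = [
    Step("controls a web site the employee visits",
         "runs code in the employee's browser", Kind.IMPL_FLAW),
    Step("runs code in the employee's browser",
         "runs code on the employee's desktop", Kind.IMPL_FLAW),
    Step("runs code on the employee's desktop",
         "deletes the employee's files", Kind.BY_DESIGN),
]
```

Once steps are written down this way, a path is just a chain of steps whose ending privileges match the next step's starting privileges, which is what the enumeration below is building toward.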
I find it quickest and most enlightening to start at both ends (the attacker's starting privileges and your anti-goals for the attacker), and enumerate the endpoints of steps in each category.
Let's start at your anti-destination for the attacker: denying employees access to corporate information. You may think that because this is a denial-of-service threat, you didn't design your system to enable it at all, but chances are good you can get a pretty good list from your departing-employee process.
My system is designed to prevent [ex-]employees from accessing corporate information when:
- The user's account is deleted from LDAP.
- The user's account is removed from the 'employee' group.
- The user's password is changed.
- Ownership of a file formerly owned by the user is changed.
- Files owned by the user are deleted.
Because these things have to work for your sysadmin when an employee leaves, they would also work for an attacker, if an attacker could do them.
To get a list of design side-effects an attacker could use to prevent an employee from accessing corporate information, try inverting the table of contents of your disaster recovery plan (or, if you don't have one, the categories in your IT help desk ticketing system). You know if any of these conditions are true, your IT crew is going to scramble because employees won't be able to get the information they need. This is true whether it happened by accident or an attacker did it to you on purpose.
This leaves us with implementation flaws. You can't really list these unless you are running known-vulnerable software. Presumably if you knew you were running vulnerable software, you would patch it, so let's not try to create this list directly. Instead, list the components which have the permissions to do things in the first two lists. If these components were vulnerable (in the right ways), an attacker who got far enough to reach the vulnerable interface could exploit them and presumably accomplish the original threat.
An attacker may be able to prevent access to corporate information when:
- A vulnerability in the LDAP server allows an attacker to delete user accounts, change group membership, ...
- A vulnerability in the file server allows an attacker to delete files, change file permissions, overwrite files, ...
- A vulnerability in the user's desktop allows an attacker to prevent the machine from booting, delete applications, delete files, change file permissions, overwrite files, change the user's password, ...
Now, start from the other side. If an attacker can control an Internet site your employees visit, what can he do by design?
A web site my employee visits can:
- Set or delete cookies for that site on my employee's desktop.
- Redirect the employee's browser to another Web site.
- Run Java applications, signed ActiveX controls, …
What can he do as a design side-effect?
A web site my employee visits can also:
- Respond to a single request with more than one response.
- Persuade my employee to run damaging commands or executables.
What could he do as a result of implementation flaws?
A web site my employee visits can:
- Take advantage of any security flaw in the employee's web browser or plugins.
In the middle, for any given potential connection you can have one of three things: a working path for the attacker, a control that blocks the attacker's path to your anti-goal, or something fuzzier (usually including the word "may"). If any working path for the attacker traverses only by-design steps, you have a requirements conflict: no technological control can meet the security objective you have in mind unless you change your requirements. For any other working path, you need to find a control that will break a link. Or, going the other way: to be complete, the set of controls you are considering must break at least one link in each otherwise-working path. To get this effect, you will probably need to combine controls from multiple sources. If a set of controls doesn't break any links in any attack path you care about, don't buy it.
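The "break at least one link in each otherwise-working path" test can be made concrete as a reachability check over the step graph. This is only a sketch under my own naming (the privilege strings and the helper function are illustrative, not from any real product): a control set is sufficient for a given objective only if removing the steps it blocks leaves no path from the attacker's starting privileges to the anti-goal.

```python
from collections import deque

# Steps as (starting privilege, ending privilege) edges, drawn from the
# partial lists above; the names are illustrative.
steps = {
    ("controls a web site the employee visits", "exploits a browser flaw"),
    ("exploits a browser flaw", "runs code on the employee's desktop"),
    ("controls a web site the employee visits", "employee runs malware"),
    ("employee runs malware", "runs code on the employee's desktop"),
    ("runs code on the employee's desktop", "employee denied corporate information"),
}

def path_exists(steps, start, anti_goal, blocked=frozenset()):
    """Breadth-first search over unblocked steps: True if the attacker
    can still reach anti_goal from start."""
    live = steps - blocked
    seen, queue = {start}, deque([start])
    while queue:
        here = queue.popleft()
        if here == anti_goal:
            return True
        for s, e in live:
            if s == here and e not in seen:
                seen.add(e)
                queue.append(e)
    return False

start = "controls a web site the employee visits"
goal = "employee denied corporate information"

# With no controls, the attacker has a working path.
assert path_exists(steps, start, goal)

# Blocking only the browser-flaw link is not enough: the social-engineering
# path through malware still works.
browser_patching = {("controls a web site the employee visits", "exploits a browser flaw")}
assert path_exists(steps, start, goal, blocked=frozenset(browser_patching))

# Blocking both links (e.g. patching plus malware scanning) closes every path.
controls = browser_patching | {("employee runs malware", "runs code on the employee's desktop")}
assert not path_exists(steps, start, goal, blocked=frozenset(controls))
```

Counting how many blocked links each path crosses, rather than just whether any path survives, gives you the defense-in-depth measure from the opening paragraph.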
From the above partial lists, there is a pretty clear connection between an attacker taking advantage of a security flaw in the employee's web browser and the consequences of a vulnerability in the user's desktop: the web browser runs on the user's desktop, so a security flaw in the browser means an attacker who controls a web site the employee visits can deny the employee access to corporate information, violating this security objective. There is an equally clear connection between social-engineering the employee into opening malware and violating the same objective. The latter connection is more serious because it doesn't involve any implementation flaws, and it would be prudent to mitigate it. You could, say, compare the effectiveness of user-education programs, desktop anti-virus, and an HTTP malware scanner (my money, hopefully obviously, is on combining the desktop AV and the HTTP scanner, because people will click on anything).
Because the fuzzy paths say "may", you get some leeway in deciding whether you think there is a working path for the attacker to follow from his starting privileges to the things you don't want him to do. If you have no known working attack paths, you might consider reducing risk in these fuzzy areas (e.g. by instituting an aggressive monitoring and patching policy, or by buying a product that attempts to defend against relevant 0-day attacks) before adding depth to your defenses in other areas.