In short, everything.
The more I do it, the more I realize that it is flawed. Why? Because the first thing we do is try to assign value to the unknown, and then it is all downhill from there. The entire process keeps leading me to believe it is subjective and capricious. I am finding that the systems that should be getting a higher value aren't, and vice versa, mainly because this is all based on components and information types. I just want to walk through this out loud, really, so my thoughts are prone to evolve.
1. What type of information is it?
Financial, Health Care, Legal, Proprietary, etc.
So far, so good; it is important to know what information is to be processed. But then you get two types of information, and that makes the risk higher. And then what about sub-types? But we are still okay.
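The "two types of information makes risk higher" logic is essentially a high-water-mark rating: the system gets the highest impact level of any information type it handles. A minimal sketch of that idea, with made-up type names and a three-level scale:

```python
# Toy high-water-mark categorization: a system's overall impact level
# is the highest level among the information types it processes.
# The type names and levels below are illustrative, not from any
# particular catalog.
IMPACT_ORDER = {"low": 1, "moderate": 2, "high": 3}

def system_impact(info_types):
    """info_types maps an information type name to its impact level."""
    return max(info_types.values(), key=lambda level: IMPACT_ORDER[level])

# Two information types: the system is rated at the higher of the two.
rating = system_impact({"financial": "moderate", "health care": "high"})
print(rating)  # -> high
```

Note what this sketch does not do: it never lowers a rating. Adding any information type can only push the system's level up, which is part of why "everything ends up high" in practice.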
2. What are the controls that we have to employ?
800-53, ISO 27001, PCI, etc.
Still kinda good, but we basically know that ISO is relatively voluntary and that NIST supplies a control catalog, not policies. So here we have to take the control catalog and mash our policies into it.
3. System Inventory
Assorted processes
Now comes the slippery slope. Someone wants me to tell them which boxes are more critical than others, mainly for budgetary or operational reasons. To which I usually say, "All of them, it is a system after all." The word system implies that if one part were absent, the whole thing would go face first into a pile of poo. But there it is. I say these are obviously important and worth a "1", those are "2"s, and that one over there is a "4.5" on a scale of one to five.
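The "all of them" answer can be written down. This is a sketch of the argument, not a real criticality model: if every box is a single point of failure (a series dependency), each component simply inherits the criticality of the whole system, and the per-box 1-to-5 scores are theater. The component names are invented.

```python
# Sketch: in a system with no redundancy, losing any one component
# takes the whole system down, so every component inherits the
# system's criticality. Scale assumed here: 1 = most critical, 5 = least.
SYSTEM_CRITICALITY = 1

def component_criticality(components, all_single_points_of_failure=True):
    if all_single_points_of_failure:
        # Series dependency: every box matters as much as the system does.
        return {name: SYSTEM_CRITICALITY for name in components}
    # With redundancy you could argue for lower per-box scores, but that
    # requires an actual dependency/availability model, not gut feel.
    raise NotImplementedError("needs a real dependency model")

print(component_criticality(["web01", "db01", "san01"]))
# -> {'web01': 1, 'db01': 1, 'san01': 1}
```

Anything fancier than this (a "2" here, a "4.5" there) is implicitly claiming a redundancy model that usually doesn't exist.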
4. Assess
800-53A, SCAP, Best Practices, MBSA, etc.
So we ran a scan and now we have a report: a snapshot in time from which to make all decisions. Where did these vulnerability ratings come from? Do I even know if my system is at risk? What if I spend my time on vulnerabilities that have no threat? This is what I am dealing with now: a crap load of findings, because that is what the tool told me. But what is my risk? In my current situation, it is that my patch management process sucks, not that 150 patches are out of date. That is an important distinction, since I can "crash" the program, as a PMP once told me, but in six months I'll be in the same spot.
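The distinction between 150 findings and one broken process can be made mechanical: roll the scanner output up by root cause instead of counting line items. A minimal sketch with invented findings:

```python
# Sketch: report risk by root cause instead of raw finding count.
# 150 missing patches is one process failure (patch management),
# not 150 separate risks. All finding data below is made up.
from collections import Counter

findings = [
    {"id": "patch KB-001 missing", "root_cause": "patch management"},
    {"id": "patch KB-002 missing", "root_cause": "patch management"},
    {"id": "patch KB-003 missing", "root_cause": "patch management"},
    {"id": "default SNMP community string", "root_cause": "hardening baseline"},
    {"id": "stale admin account", "root_cause": "account review"},
]

by_cause = Counter(f["root_cause"] for f in findings)
for cause, count in by_cause.most_common():
    print(f"{cause}: {count} finding(s)")
```

The per-cause view is what actually tells you where to spend effort; the per-finding view is what lets you "crash" the program and land back in the same spot in six months.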
Other commonalities are: I can't convince anyone to do a real contingency test. I get to test my incident response plan every couple weeks (it doesn't work). My backups ... they're somewhere. Hopefully, they'll be good when I need them.
5. Decision Time / Operational Readiness Review
After all this, I come to a point where a decision (yes or no) needs to be made. Which, of course, is always yes, because someone has spent a shit load (technical term) of money to bring this system into being. Accept the residual risks, hire some consultants, and then deliver my favorite line from the movie Stripes: "Hey! We're movin'!" But we don't know where to.
6. Continuous Management
I am basically stuck in a continuous development life-cycle rather than ever reaching some sort of steady state. Between constantly fighting fires and politics, who's got time for change control board meetings and proactive policy enforcement?
---------
I think you see where this is going, since this is the umpteenth time I have seen this in my career; Scott Adams could probably make a Dilbert out of it and the entire East Coast would be laughing.
Where to go from here: a fundamental revamp of how we deal with risk, where risk professionals focus on treating the sickness and not the symptoms, and come up with some new, actionable success metrics.