Tuesday, February 24, 2009
In summary: Do everything we've been telling you to do. Identify -> Assess -> Secure -> Monitor -> Repeat.
See I just saved you two hours of reading.
But seriously, I suppose my main objection (from a very cursory review) is that the technical controls, which have generally been assessed using automated tools, will now be weighted more heavily than other controls which could conceivably be just as important.
Which is why we have (drum roll please ...) Risk Assessments. So that intelligent humans can decide for themselves which controls are more important than others in their environment.
Taking a step back, FIPS 199 (pdf) asks you to look to the 800-60. In the 800-60 (pdf), there is a lot of discussion around deciding what type of data you are processing, how sensitive it is, who's allowed to see it, who isn't, etc. From that you are supposed to be able to extract a mystical level of concern for your data (Low, Moderate and High). In my career, I have only personally ever seen one (1) High system, and that was for availability. The rest, if you must know, were/are Moderates.
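For the curious, the mechanics of that categorization step are simple enough to sketch: rate each security objective (confidentiality, integrity, availability) as Low, Moderate or High, then take the high-water mark as the overall system impact level. The function and variable names below are my own illustration, not anything out of a NIST document:

```python
# Rough sketch of FIPS 199-style categorization: rate each security
# objective Low/Moderate/High, then take the high-water mark as the
# overall system impact level. Names here are illustrative only.

LEVELS = ["Low", "Moderate", "High"]  # ordered lowest to highest

def overall_impact(ratings):
    """ratings: dict mapping objective -> 'Low'|'Moderate'|'High'."""
    return max(ratings.values(), key=LEVELS.index)

# The one High system I've seen was High purely for availability:
system = {"confidentiality": "Moderate",
          "integrity": "Moderate",
          "availability": "High"}

print(overall_impact(system))  # High
```

Which is exactly why a single High rating on one objective drags the whole system up with it, and why risk-based decisions (like the one below) get made to soften the blow.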
As the auditor for the High system, we heavily weighted the Incident Response and Contingency Planning controls. We even had the authorizing authority and certification authority say they didn't want to *really* impose the High controls for most of the Technical family (gasp!). We call this a risk based decision. The risks for them were purely from a "keep the damn thing up for the love of all that is holy and good" perspective. But this isn't what I wanted to talk about, I think...
These boiled down "critical controls" are a dangerous thing, in my opinion. Anyone who has been put in a position to use or implement the guidelines is, I imagine, having similar reactions. Because (again!) there is nothing new here. Now with more confusion, because these are guidelines, just like all the 800-series documents are guidelines.
Perhaps I will do a more detailed analysis and comment. For now, I find myself agitated and tired.
And they used the word "cyber" too much; MS-Word and OpenOffice don't even think it is a word.
Wednesday, February 18, 2009
First - Did you know that the word "privacy" is used 7 times in the FISMA law? Once to define confidentiality, once to reference the Privacy Act of 1974, and the other five times to talk about the Information Security and Privacy Advisory Board. This is a group that gets together 3-4 times a year and does what I would describe as "things". It's true. See for yourself.
With that in mind, I will start off by saying that Facebook decided that they would update their Terms of Service. The new terms said that they could do whatever they wanted, whenever they wanted, with information that they collected. After some outrage from the intertubes, along with some people who abandoned their accounts, they have reversed their decision for now.
In other news, the Federal Trade Commission issued a report about targeted online advertising that basically said that companies should do better than they are now. Blah Blah Blah. That led to other people suddenly caring, and a congressman saying more legislation is on the way.
Lastly, we get a gem from the Department of Homeland Security, who issued a report, complete with recommendations, about keeping information private on their own systems. This is where I felt I needed to say something.
DHS? The Department of Homeland Security is worried about keeping privacy information private? Also, there isn't anything NEW in this report! The recommendations are already in OMB memos and NIST docs. Don't they have a policy or what? Damnit.
Friday, February 13, 2009
And my comments are on the site, and pasted here for your convenience:
I want to be clear here that the community is in desperate need of more materials like this. There are a ton of people who do this every day who would watch this, and it would be news to them.
I found the slides to be very good, I especially liked the scenarios.
I will be making changes to some of the semantics. Where it says that a certifier is finding risks, they in fact don’t. They discover findings. Those findings could be policy violations, evidence of policy violations or general system architecture weaknesses.
For instance, when I was a certification agent I did not list out all the patches they did not have installed. Missing patches are evidence that a patch management program is ineffective (depending on when each patch was released and whether the SSP says patch management is an implemented control).
The assignment of risk would be left up to the system owner, the certifier (a role that is disappearing in 800-37 Rev 1) or the AO. They would do this by going through an 800-30 exercise. They would start with the security assessment findings and then assign likelihood and impact ratings. This is also presuming that there is even a threat vector.
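The 800-30 exercise mentioned above boils down to crossing a likelihood rating with an impact rating to get a risk level. Here is a minimal sketch of that lookup; the matrix values are my own simplification for illustration, not the official 800-30 table:

```python
# Simplified NIST SP 800-30-style risk determination: cross a likelihood
# rating with an impact rating to get a risk level. This matrix is an
# illustrative simplification, not the official 800-30 table.

RISK_MATRIX = {
    ("Low", "Low"): "Low",           ("Low", "Moderate"): "Low",
    ("Low", "High"): "Low",          ("Moderate", "Low"): "Low",
    ("Moderate", "Moderate"): "Moderate",
    ("Moderate", "High"): "Moderate",
    ("High", "Low"): "Low",          ("High", "Moderate"): "Moderate",
    ("High", "High"): "High",
}

def risk_level(likelihood, impact):
    """A finding only becomes a risk once someone assigns these ratings
    -- and only if a threat vector exists in the first place."""
    return RISK_MATRIX[(likelihood, impact)]

print(risk_level("Moderate", "High"))  # Moderate
```

The point being: the certifier hands over findings, and it takes a separate, human judgment call (this lookup, roughly) to turn a finding into a risk.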
Let me know if you had a different interpretation or if I missed something.
Also, it is better if you listen to AC/DC's Hells Bells or any Metallica song while you read the slides.
Monday, February 9, 2009
I am a regular reader of Christofer Hoff over at Rational Survivability and have been convinced that Cloud Computing is almost as evil as Dick Cheney.
Which is why this article on Obama's pick for E-Gov chief has me more than a little worried. The article spends a few paragraphs near the end talking about moving information "into the Cloud".
Cloud Computing is not a good idea, unless the government can build its own Cloud. This would involve the entire government knowing who owns and operates the infrastructure (probably GSA). In a perfect world, it would be like the ultimate General Support System (GSS, see the 800-37 [pdf]). All the agencies would sign MOUs and SLAs. They would use common APIs and there would be coding standards. Regular security checks and web application assessments. Oh the glory of it all!
But this isn't what will happen; one or many agencies will get pissed, take their ball, and go home. They'll stand up their own solutions with the help of a prime and 500 subcontractors.
A single infrastructure would help some of the initiatives that are underway. Like Trusted Internet Connections, enforcement of policies on end point systems, encrypted off-site backups, IPv6, among others.
So that's something, but the risk is not worth the reward. A single infrastructure means that the whole government could be out when a targeted attack is underway. Or that a simple misconfiguration could lead to what Google faced with its badware miscategorization. How do you design it to be redundant and available? Would there need to be one cloud for classified and one for unclassified? Who's going to support incidents? All the usual questions that go along with a shared infrastructure.
So I don't know, I would love to put applications onto a common supportable infrastructure and have the government save a crap load of money. On the other hand, doing it correctly will take years or decades to implement and there is no guarantee that everyone will be on board.
But to even get started, the current government guidance and regulations aren't clear on the best ways to execute a cloud implementation. The new 800-37 was supposed to address this, but there doesn't seem to be any clarity there. If data is shared between two agencies on the common platform (and they both make edits), who will own the data? Lastly, there are some agencies out there still trying to secure an HTML page with an email address on it, let alone put all OUR data across the Internet over a VPN.
They are going to do whatever they want though, because the appearance of competent financial management outweighs competent security practices. Until there's an incident.
There once was a man named Steve,
Who was notified he was subject to audit.
It just about made him heave,
But he knew he could simply discredit.
The auditor sent over their test plan,
To which he responded with documentation.
And then they started to scan,
Yet he feared not for his occupation.
The scanners left to perform their magic,
Steve awaited the results package.
He had confidence that it wouldn't be tragic,
For the auditors were at a disadvantage.
One day the deliverable arrived,
Upon that was convened a meeting.
A plan that Steve had contrived,
Involved supplying the auditor a beating.
Steve began by questioning tool sets,
And continued by criticizing results.
The contractor began to fret,
But didn't consider it an insult.
The "auditor" launched into his shtick,
Complete with tons of excuses.
It was speckled with buzzword shit,
But his logic only confuses.
Now that management's confidence is shaken,
Steve goes in for the kill.
He announces the auditor is mistaken,
Then defines their lack of skill.
His argument lies in their false positive rate,
And their inability to ask questions.
The documentation review was a sorry state,
He finished by making some suggestions.
Remove these morons from my sight,
They are the reason auditor is a dirty word.
These reports are only meant to cause a fright,
This entire exercise has been absurd.
After which, I launched into my usual rants about why Federal auditing needs to change.
Please note, I am not saying that auditing is dead. I am only saying that useful auditing died some years ago and needs to be resurrected.
Wednesday, February 4, 2009
The point of the exercise has been lost due to the media scrutiny.
It has moved on to "you caused other components so much work", "you didn't coordinate", etc., etc. My takeaway from this entire story is that it is a story at all.
The training program has apparently suffered an epic fail. While I admit it was probably a bonehead move to not let the supposed target of the scam know, everything else is sending a few clear messages:
- The users didn't recognize the scam; they bought it hook, line and sinker. So much so that they forwarded it to their friends and colleagues in other agencies, who then also fell for it;
- Some users did realize what was happening and began to take corrective actions - specifically identified in this story;
- Something that has been suspiciously omitted is the statistics.
What is clear is that there is more work to be done. In the initial story I linked to, there are some words at the end about things improving and fewer people falling for it. The fact that there weren't even some vague generalities like "we sent it to 50,000 people and only 12 went to the site" tells me that it was obviously more than 12. More likely it was something embarrassing, like 25% of the targets. Then add on the people in other agencies who weren't even targeted but went to the site anyway. I think that is more than a few.
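The statistic I'm asking for is trivial to compute and report. A quick sketch, using my made-up numbers from above (the 50,000 and 12 are hypotheticals; Justice never published the real figures):

```python
# Hypothetical phishing-exercise stats -- these numbers are invented
# for illustration; the real figures were never published.
targeted = 50_000   # users who received the test phishing email
clicked = 12        # users who followed the link

click_rate = clicked / targeted * 100
print(f"{clicked} of {targeted} targets clicked ({click_rate:.3f}%)")
```

A one-line percentage like that in the press release would have told us everything. Its absence tells us something too.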
Justice will never get 100% of the people to not fall for a phishing scam, but I do hope that they can get it down to 12. I applaud the efforts of Justice, and in the future I would like to see more of this. It looks bad from a PR perspective, I know. But as a security professional, it gives me confidence that more than a PowerPoint is being emailed out as security training.