Friday, April 10, 2009

Embedded Compliance

I was recently using the twitter machine when someone asked me how I would develop requirements and the subsequent test cases for embedded devices. Beyond the fact that I needed more than 140 characters to answer, I found the question simultaneously amusing and befuddling. So this post is the result of that initial query.

We know that embedded devices will not have all the controls that even something like Windows is capable of meeting, and Windows has a difficult time meeting a FIPS 199 categorization of moderate. These devices therefore put us (me) into a quandary. Security podcasts routinely talk about pen testing exploits that used an embedded device as a launch point for more sinister attacks, yet these devices will never have the security controls that full-blown operating systems and applications are capable of implementing.

We also know that these types of devices run stripped-down versions of things we already know and love, like ... TCP stacks and ... file systems. But the tools that assessors or testers use with servers, web sites and routers either do not work at all or work unreliably at best.

So here we have devices that are inside the system boundary and processing data. Prominent security researchers have already demonstrated the issues with them.

Obviously, they need to be tested; they need controls and protections. But how do you test while collecting this mysterious assurance evidence? The answer is the dreaded manual test case. Sit down with your refrigerator or microwave and your requirements (let's say it's an agency-tailored 800-53). Sit with the vendor or the poor sap who has been tagged to "be in charge" and walk through the system as you develop the test steps. You are not recording results or collecting evidence yet. This is merely to work out a repeatable process that others can use to re-test later.
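That walkthrough has to produce something repeatable, so it helps to capture each step in a fixed shape. A minimal sketch of what I mean follows; the field names are entirely illustrative, not any standard's schema:

```python
# Hypothetical structure for a manual test case, so the walkthrough with
# the vendor produces something others can re-run later. Field names are
# illustrative assumptions, not from 800-53 or any official template.
from dataclasses import dataclass, field

@dataclass
class TestStep:
    action: str            # what the tester does
    expected: str          # what should happen
    actual: str = ""       # filled in during the later evidence run
    passed: bool = False   # set during the later evidence run

@dataclass
class TestCase:
    control: str                        # e.g. "AC-7" from the tailored 800-53
    steps: list = field(default_factory=list)

case = TestCase("AC-7")
case.steps.append(TestStep(
    action="Log in using the normal interface with a valid account and password",
    expected="Log in successful"))
print(len(case.steps))  # 1
```

The point of the structure is the split between `expected` (written during the walkthrough) and `actual` (written during the evidence run), which keeps the two visits honest.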

Now you want to ask me: "What about requirements that I can't develop test steps for?" So a control is not in place no matter what. It is still a requirement; it just means you don't have to test for it because it has already failed. But you will need to leave a spot in the security assessment procedures that says "I interviewed the vendor, and the vendor/system could not provide evidence that this control could be satisfied," or "Review of manuals and system documentation revealed that the system does not implement the control." Fail. That does not make it Not Applicable, because it is still a requirement.

What about gathering proof that the control is actually in place? This, I think, is the real question, and the answer is that it depends. If you are going through a terminal, you can capture the session to a text file. If the device can be remote-controlled through something like VNC or RDP, you could record a screen movie. I found some software today whose makers claim the recordings can be embedded into Word or PDF.
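For the terminal case, session capture can be as simple as a wrapper that timestamps every command and its output into an evidence file. A minimal sketch, where the local shell stands in for whatever serial or telnet terminal actually reaches the device:

```python
# Minimal sketch of terminal-session capture for assurance evidence.
# The local shell is a stand-in for the device's real terminal; in
# practice you would wrap your serial or telnet session the same way.
import datetime
import subprocess

def run_and_log(command, logfile):
    """Run one command, append a timestamped transcript, return stdout."""
    result = subprocess.run(command, shell=True,
                            capture_output=True, text=True)
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    with open(logfile, "a") as f:
        f.write(f"[{stamp}] $ {command}\n")
        f.write(result.stdout)
        if result.stderr:
            f.write(result.stderr)
    return result.stdout

out = run_and_log("echo hello-device", "evidence.txt")
print(out.strip())  # hello-device
```

The transcript file then becomes the artifact you attach to the test procedure, the same role the screen movie plays for VNC or RDP.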

But then there are devices with no remote screen or remote terminal; all you have is a generic interface on the device itself. Well, I don't know what to tell you there except: camera. Oh yes, the dreaded video camera on a tripod. You will need waivers and exemptions and all kinds of paperwork, but it is really the only way to capture the test procedure if that is the level of assurance required. That's why I left it for last: it is the most unpleasant. This would also fall into the category of "evidence available upon request".

Hopefully, a detailed procedure is all you need. Here is a sample of what I would envision a test of account lockout (AC-7) to look like (but it is lacking my usual pretty formatting):

Step 1: Log in using normal interface with a valid user account and password combination.
Expected Result: Log in successful

Step 2: Log out and attempt to log in using a valid user account and invalid password combination.
Expected Result: Log in unsuccessful

Step 3: Re-attempt Step 2 until [the maximum number of failed attempts] is reached.
Expected Result: Log in unsuccessful

Step 4: Re-attempt Step 1
Expected Result: Log in unsuccessful

Step 5: Wait for [the lockout duration] minutes (only if the lockout is not unlimited) and repeat Step 1
Expected Result: Log in successful
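The procedure above can also be sketched as data plus a tiny simulator, which is handy for sanity-checking the expected results before you ever sit in front of the device. The `Device` class, the threshold, and the lockout duration below are all illustrative assumptions, not any real product's interface:

```python
# Hypothetical sketch of the AC-7 procedure as executable steps.
# "Device" simulates a lockout policy; all names and values here are
# illustrative assumptions, not a real device's interface.
MAX_ATTEMPTS = 3       # assumed agency-tailored threshold
VALID_PASSWORD = "s3cret"

class Device:
    def __init__(self):
        self.failures = 0
        self.locked = False

    def login(self, password):
        if self.locked:
            return False
        if password == VALID_PASSWORD:
            self.failures = 0
            return True
        self.failures += 1
        if self.failures >= MAX_ATTEMPTS:
            self.locked = True
        return False

    def wait_lockout(self):
        # Stand-in for "wait [the lockout duration] minutes" in Step 5.
        self.locked = False
        self.failures = 0

dev = Device()
assert dev.login(VALID_PASSWORD)        # Step 1: valid credentials succeed
for _ in range(MAX_ATTEMPTS):
    assert not dev.login("wrong")       # Steps 2-3: invalid attempts fail
assert not dev.login(VALID_PASSWORD)    # Step 4: locked out even with valid password
dev.wait_lockout()
assert dev.login(VALID_PASSWORD)        # Step 5: succeeds after lockout expires
```

Step 4 is the one that actually proves the lockout: a valid password must fail once the threshold has been crossed.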

My typical reaction is to stop a procedure once a step has failed, or to build dependencies into the test steps to limit the number of procedures I have to manage.

So while I don't know if I answered the original question, I feel better for putting at least something out there.
