The reason I started down this line of thinking is the question of what is required to fight through a computer attack and then recover from it. I'm thinking that one way would be to have each layer, from hardware to the highest software layer, detect when a higher layer has been compromised, take that layer offline, and then restart it from a known good point.
Interesting idea. IIRC, there is a (very) high-security operating system called Rings that is conceptually similar to what you have in mind.
I'm having trouble googling for it, could you provide some more information on it, or a link? Who knew ring was such a common term?
A further refinement of this would be to take only the compromised portion of a layer offline rather than the whole layer, such as a single process rather than all user-level software. Once this is put into place, the next step would be to secure each layer as well as you can.
Aside from marketing, I'm not sure whether the "layers" concept actually provides any additional functionality, though I'm not sure it doesn't, either.
Layers aren't really a new concept. By layers I was meaning hardware, firmware, ring 0, ring 1, etc.
It seems like you should start with a finer-grained program/process model, then resort to a coarser 'layers' model if you must. Let's suppose that app z depends on libraries x and y; x in turn depends on system calls r and s, while y depends on s and t. App z also makes a system call directly to q. Clearly, we can construct a directed graph for all apps, libs, and syscalls. However, I'm not completely sure that the graph will be acyclic, due to callbacks. Anyways, it seems like you should be able to use a dependency graph to shut down the compromised portions of the system, but how do you determine if a process is compromised?
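A minimal sketch of that dependency graph, using the app z / libs x, y / syscalls q, r, s, t example above. The graph code itself is a hypothetical illustration (not a real OS mechanism); it walks the reversed edges to find everything that transitively depends on a compromised component, and the visited set keeps the walk terminating even if callbacks introduce cycles:

```python
from collections import defaultdict

# edges point from a component to the things it uses
deps = {
    "z": {"x", "y", "q"},  # app z uses libs x, y and syscall q directly
    "x": {"r", "s"},       # lib x depends on syscalls r and s
    "y": {"s", "t"},       # lib y depends on syscalls s and t
}

def dependents(graph, target):
    """Everything that (transitively) depends on `target` -- the set
    you would take offline if `target` were compromised."""
    # invert the edges: syscall -> libs that call it, lib -> apps, etc.
    rev = defaultdict(set)
    for node, uses in graph.items():
        for used in uses:
            rev[used].add(node)
    # breadth isn't important here; the `seen` set handles any cycles
    seen, stack = set(), [target]
    while stack:
        node = stack.pop()
        for parent in rev[node]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

print(dependents(deps, "s"))  # libs x and y, plus app z above them
```

So if syscall s were the compromised component, both libraries and the app that sits on them would be candidates for shutdown.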
I don't think it's important which system calls each process depends on, but rather which resources (shared memory, files, possibly messages, etc.) each process accesses. These are different ways to look at the same thing (what each program affects), but I think it's an important distinction, because I think it's easier to reason in terms of the end effect of each system call. Based on this, you'd pretty much consider any program that has been written to by a compromised program to also be compromised.
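The "written to by a compromised program" rule is essentially taint propagation over a resource-access graph. A hedged sketch, with invented process and resource names: taint flows from a compromised process through every resource it writes into every process that reads that resource.

```python
# which resources each process writes and reads (hypothetical data)
writes = {"p1": {"shm_a", "file_b"}, "p2": {"file_c"}}
reads  = {"p2": {"shm_a"}, "p3": {"file_c"}, "p4": {"file_b"}}

def taint(start):
    """Return every process considered compromised once `start` is."""
    compromised, frontier = {start}, [start]
    while frontier:
        proc = frontier.pop()
        for resource in writes.get(proc, ()):
            # any reader of a resource a compromised process wrote
            # is itself considered compromised
            for reader, read_set in reads.items():
                if resource in read_set and reader not in compromised:
                    compromised.add(reader)
                    frontier.append(reader)
    return compromised

# if p1 is compromised: p2 and p4 read what it wrote, and p3 reads
# what p2 wrote, so all four processes end up in the compromised set
```

This is deliberately pessimistic: one bad write condemns every downstream reader, which is the trade-off the rule above implies.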
One way to determine whether a process, or any layer, is compromised is behavioral analysis. It may be good enough to ensure the computer is in a safe state when it's turned on, like the Trusted Computing Model attempts to do, and then work to ensure it doesn't get to a bad state.
I know that the DOD was funding research into an operating system that would display information at the application level depending on the user's security clearances. For example, imagine that a B-1 bomber was on a mission. A secret clearance might show the aircraft's location, but not the speed or fuel consumption, if that information wasn't appropriate for someone w/ a secret clearance. That is another interesting idea, but I think it would require substantial changes to many aspects of our current software development.
Is this different than SELinux?
Are you thinking of Clifford Neuman? Are you thinking of the Trusted Computing Model?
Yes and yes. I attended a dinner for PhD and MS students to become more familiar with faculty research. Most of the faculty gave a ten- to fifteen-minute presentation on their research, sometimes an overview, other times a single subject in depth, followed by questions and answers from those in attendance. He spoke about some of the advantages of the Trusted Computing Model, but I don't remember much in the way of details.
You went to USC? Cool. How did you like it there?