Kerckhoffs’ principle, formulated by the 19th-century Dutch cryptographer Auguste Kerckhoffs, is the accepted basis for strong security. It was originally formulated for cryptosystems, where it states that the security of an encryption system must not depend on the secrecy of the algorithm, only on the secrecy of the key. Applied to computer security, it implies that the security of a system must not depend on the secrecy of its source code. The reasoning behind it is that if there is a security-critical bug in the code, someone will find it sooner or later, so secrecy buys you at best some delay. If there is no security-critical bug, then nothing is gained (from the security point of view) by keeping the code secret.
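To make the principle concrete, here is a minimal sketch in Python, using the third-party cryptography library purely for illustration: the cipher it implements (AES with HMAC authentication) is publicly documented, open-source code, yet the security of the message rests entirely on the secrecy of the key.

```python
# Kerckhoffs' principle in miniature: the algorithm is public,
# open-source code; the key is the only secret in the system.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the ONLY thing that must stay secret
cipher = Fernet(key)          # the algorithm itself is open to scrutiny

token = cipher.encrypt(b"attack at dawn")
assert cipher.decrypt(token) == b"attack at dawn"
```

An attacker who obtains the library’s complete source code is no better off; only compromise of the key breaks the system.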
Given that this principle has been known for more than 100 years, and is widely accepted as a requirement in the security community, it is amazing to see it ignored, especially by those who are supposedly focussed on security. Too many think that the security of a system requires that its code be kept secret or obfuscated: the infamous “security by obscurity” approach. I’ll give you a few recent examples.
The first involves the Federal Communications Commission (FCC). In a recent ruling it stated: “A system that is wholly dependent on open source elements will have a high burden to demonstrate that it is sufficiently secure to warrant authorization as software defined radio.” So more must be done to certify open-source code than closed-source code. Why? Presumably because the code is full of holes and, being publicly available, open to scrutiny, while code that is kept secret is deemed safe. This is classic security by obscurity.
The second one is more alarming. Dan O’Dowd, CEO and founder of Green Hills Software, a leading provider of operating systems for the defense sector, makes this intriguing statement in a white paper published on the company’s website: “Publishing the source code for the operating systems used in our most critical defense systems is analogous to publishing the wiring diagrams for our military base security systems. [...] Unless an operating system has no vulnerabilities, publishing its source code is sure to reduce security.” In other words, O’Dowd believes in security by obscurity.
The third example is more humorous than scary. Trango Virtual Processors, a provider of virtualization software for embedded systems that uses the tagline “The Secure Virtual Processor”, has an FAQ on its website. Under the heading “How secure is TRANGO’s hypervisor?” it states: “[The] hypervisor is small, and written in assembly language. [...] As an assembly coded product it is also much more difficult for hackers to decipher than C-coded products.” Note that Trango keeps the source code (and pretty much all documentation) secret, so a hacker would presumably only have access to the binary. What is the difference between binary code generated by a C compiler and binary code assembled from hand-written assembler? The compiled C code has a lot of regular structure; the hand-written assembler may or may not, depending on how it is written. So if there is a significant difference, it can only be because the assembler code is spaghetti-structured, and that is hardly a reason to trust it.
I believe it is time we got serious about security, and advocating security by obscurity isn’t being serious; it is admitting failure. If you’re serious about the security of your code, you don’t hide it: you open source it. But we need to go further, in two ways.
Firstly, we need to move to a fundamentally security-oriented OS design, with flexible mechanisms that support a wide range of security policies and enable formal reasoning about them. The operating-systems research community has long accepted capabilities as the best basis for security in operating systems, mostly because capabilities allow a highly controlled distribution of authority, and because there are important security properties which can provably be achieved in capability-based systems but not in alternative models.
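To illustrate what makes capabilities attractive, here is a toy sketch in Python; it mimics no real kernel’s API, and all names are invented for illustration. A capability is an unforgeable pairing of an object with rights over it; authority is delegated only by handing over a capability, and a derived capability can only ever carry fewer rights than its parent.

```python
# Toy capability model: authority is an unforgeable (object, rights) pair.
READ, WRITE = 0x1, 0x2

class Capability:
    def __init__(self, obj, rights):
        self._obj = obj
        self._rights = rights

    def diminish(self, rights):
        # Derive a weaker capability: rights can only shrink, never grow,
        # so delegated authority is bounded by what the delegator held.
        return Capability(self._obj, self._rights & rights)

    def invoke(self, required, op):
        # Every operation must present a capability with the needed rights.
        if self._rights & required != required:
            raise PermissionError("capability lacks required rights")
        return op(self._obj)

log = []
cap_rw = Capability(log, READ | WRITE)   # full authority over 'log'
cap_ro = cap_rw.diminish(READ)           # delegate read-only authority

cap_rw.invoke(WRITE, lambda o: o.append("entry"))   # allowed
print(cap_ro.invoke(READ, lambda o: list(o)))       # allowed: ['entry']
# cap_ro.invoke(WRITE, lambda o: o.append("x"))     # raises PermissionError
```

Because a holder can never manufacture rights it was not given, questions like “can subject A ever gain write access to object B?” become answerable by tracing capability flows, which is precisely the kind of reasoning that alternative models make hard.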
Secondly, we must move beyond testing, a known-to-be-deficient means of establishing the functional correctness of security-critical system components. We need complete formal verification: mathematical proof that the implementation (the C and assembler code) actually has the security properties claimed for the system. To date, no general-purpose operating-system platform has such a proof, but there is a research project that is close. This will deliver real security.
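To see what such a proof buys over testing, consider the shape of the statement being proved, written here in Hoare logic purely as an illustration (the notation is standard and not taken from any particular project):

$$\{P\}\; C \;\{Q\}$$

This asserts that whenever the precondition P holds before the code C executes, the postcondition Q holds after it terminates, for every input and every execution path. A test suite can only ever sample finitely many runs; a proof covers all of them, which is why verification delivers a guarantee where testing can merely fail to find counterexamples.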
If security is important, obscurity is not an acceptable substitute.
By Gernot Heiser, Open Kernel Labs
About the Author
Gernot Heiser, co-founder of Open Kernel Labs, is the firm’s technical leader. Prior to co-founding OK, Dr. Heiser created and led the Embedded, Real-Time and Operating Systems (ERTOS) research program at NICTA. He can be contacted at firstname.lastname@example.org.