Security through obscurity
In cryptography and computer security, security through obscurity (sometimes security by obscurity) is a controversial principle in security engineering, which attempts to use secrecy (of design, implementation, etc.) to ensure security. A system relying on security through obscurity may have theoretical or actual security vulnerabilities, but its owners or designers believe that the flaws are not known, and that attackers are unlikely to find them.
Arguments against security by obscurity
In cryptography, the argument against security by obscurity dates at least to Kerckhoffs' principle of 1883, which, fairly literally translated, states that the design of a cryptographic system should not require secrecy and should cause no inconvenience if it falls into the hands of the enemy. This principle has been paraphrased in several ways:
- System designers should assume that the entire design of a security system is known to all attackers, with the exception of the cryptographic key.
- The security of a cipher resides entirely in the cryptographic key.
- Claude Shannon rephrased it as "the enemy knows the system".
Since every additional secret piece of information is another potential point of compromise, a system with fewer secrets is easier to keep secure. Systems that rely on secret details apart from the cryptographic key are therefore less secure: vulnerabilities resident in those secret details render the choice of key, simple or complex, largely irrelevant.
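As a minimal illustration of this principle (a sketch added here, not part of the original article), the following Python fragment authenticates a message with HMAC-SHA256 from the standard library. The construction is entirely public; the only secret in the system is the key, so disclosure of the design costs the defender nothing.

    import hashlib
    import hmac
    import secrets

    def make_tag(key: bytes, message: bytes) -> bytes:
        # The algorithm (HMAC-SHA256) is public knowledge; only the key is secret.
        return hmac.new(key, message, hashlib.sha256).digest()

    def verify_tag(key: bytes, message: bytes, tag: bytes) -> bool:
        # Constant-time comparison, so verification leaks nothing about the key.
        return hmac.compare_digest(make_tag(key, message), tag)

    key = secrets.token_bytes(32)          # the only secret in the system
    msg = b"transfer 100 credits to alice"
    tag = make_tag(key, msg)

    assert verify_tag(key, msg, tag)
    assert not verify_tag(key, b"transfer 100 credits to mallory", tag)

An attacker who knows every line of this code still cannot forge a valid tag without the key, which is exactly the property Kerckhoffs' principle asks for.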
The related full disclosure philosophy holds that security flaws should be disclosed as soon as possible, because once a flaw exists the protection provided by keeping the cryptographic key secret is weakened: there is now effectively more than one key that provides access, the legitimate cryptographic key and the knowledge of the newly discovered flaw.
For example, somebody who stores a spare key under the doormat in case they are locked out of the house is relying on security through obscurity. The theoretical vulnerability is that anybody could break into the house by unlocking the door with the spare key. Furthermore, since burglars often know the likely hiding places, the owner actually increases the risk of burglary by hiding the key this way. In effect the owner has added another key to the system: the knowledge that the spare key lies under the doormat. Access no longer depends solely on possession of the physical key that opens the door; it now also depends on knowledge of where that key is hidden.
In the past, several algorithms and pieces of software whose internal details were kept secret have seen those details become public. Furthermore, vulnerabilities have been discovered and exploited in software even while the internal details remained secret. Taken together, the following examples suggest that it is difficult or ineffective to keep the details of systems and algorithms secret:
- The A5/1 cipher for mobile telephones became public knowledge partly through reverse engineering.
- Details of RSA Data Security, Inc.'s (RSADSI) RC4 cipher were revealed through the probably deliberate publication of alleged RC4 source code on Usenet.
- Vulnerabilities in various versions of Microsoft Windows, its default web browser Internet Explorer, and its mail applications Outlook and Outlook Express have caused worldwide problems when computer viruses, Trojan horses, or computer worms have exploited them.
- Details of Diebold Election Systems voting machine software were published on an official Web site, apparently inadvertently.
- Portions of the source code of Microsoft Windows were revealed after apparently deliberate penetration of a corporate development network.
- Cisco router operating system software was accidentally exposed on a corporate network.
Linus's law, that many eyes make all bugs shallow, also suggests improved security for algorithms and protocols whose details are published. More people can review the details of such algorithms, identify flaws, and fix them sooner. One would thus expect the frequency and severity of security compromises to be lower for open software than for proprietary or secret software.
Finally, operators and developers/vendors of systems that rely on security by obscurity may keep secret the fact that their system is broken, to avoid destroying confidence in their service or product and thus its marketability; this may amount to fraudulent misrepresentation of the security of their products. Application of the law in this respect has been less than vigorous, in part because vendors impose terms of use as part of licensing contracts in order to disclaim their apparent obligations under statutes and common law that require fitness for use or similar quality standards.
Arguments in favor of security by obscurity
Perfect or "unbroken" solutions provide security, but absolutes may be difficult to obtain. Although relying solely on security through obscurity is a very poor design decision, keeping secret some of the details of an otherwise well-engineered system may be a reasonable tactic as part of a defense in depth strategy. For example, security through obscurity may (but cannot be guaranteed to) act as a temporary "speed bump" for attackers while a known resolution to a security issue is implemented. Here, the goal is simply to reduce the short-run risk of exploitation of a vulnerability in the main components of the system.
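To make the "speed bump" idea concrete, here is a hedged sketch (the names OBSCURE_PATH and API_KEY are illustrative and not from the article) in which an unadvertised endpoint path is layered on top of a keyed check: the obscure path slows casual scanning, while the HMAC comparison remains the actual security control.

    import hashlib
    import hmac

    OBSCURE_PATH = "/admin-7f3a9c"           # obscurity: a mild speed bump, never the main defense
    API_KEY = b"replace-with-a-real-secret"  # the real control: a secret key

    def expected_token(body: bytes) -> str:
        return hmac.new(API_KEY, body, hashlib.sha256).hexdigest()

    def authorize(path: str, body: bytes, token: str) -> bool:
        # Reject early if the caller has not even found the unadvertised path ...
        if path != OBSCURE_PATH:
            return False
        # ... but rely on the keyed comparison for actual security.
        return hmac.compare_digest(expected_token(body), token)

    # A request to the obscure path still fails without a valid token.
    assert not authorize(OBSCURE_PATH, b"reboot", "bogus")
    assert authorize(OBSCURE_PATH, b"reboot", expected_token(b"reboot"))

Even if the obscure path is discovered, the system's security is unchanged, which is what distinguishes a speed bump from a primary defense.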
Software that is deliberately released as open source cannot, in theory or in practice, be said to rely on security through obscurity, since its design is publicly available; it can nevertheless experience security debacles (e.g., the Morris worm of 1988 spread through vulnerabilities that were obscure, though plainly visible to anyone who bothered to look).
Security through obscurity can also be used to create a risk that detects or deters potential attackers. Consider, for example, a computer network that appears to exhibit a known vulnerability. Lacking knowledge of the target's security layout, the attacker must decide whether or not to attempt to exploit the vulnerability. If the system is set up to detect attempts against this vulnerability, it will recognize that it is under attack and can respond, either by locking itself down until administrators have a chance to react, by monitoring the attack and tracing the assailant, or by disconnecting the attacker. The essence of this principle is that, by raising the time or risk involved, the attacker is denied the information required to make a solid risk-reward decision about whether to attack in the first place.
A variant of this defense is to use a double layer of detection for the exploit, both layers kept secret but one deliberately allowed to "leak". The idea is to give the attacker a false sense of confidence that the obscurity has been uncovered and defeated. An example of where this would be used is as part of a honeypot. In neither of these cases is there any actual reliance on obscurity for security; these are perhaps better termed obscurity bait in an active security defense.
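A minimal sketch of such a decoy (the service banner, port, and handler names below are invented for illustration, not taken from the article): a listener advertises an apparently vulnerable legacy service, treats any client that connects as a probable attacker, logs the contact so operators can respond, and then drops the connection.

    import logging
    import socketserver

    logging.basicConfig(level=logging.INFO)

    FAKE_BANNER = b"220 legacy-ftpd 1.0 ready\r\n"   # advertises an apparently vulnerable service

    class DecoyHandler(socketserver.BaseRequestHandler):
        def handle(self):
            attacker_ip, attacker_port = self.client_address
            logging.warning("decoy touched by %s:%d; alerting operators", attacker_ip, attacker_port)
            self.request.sendall(FAKE_BANNER)
            # Record whatever the client sends, then drop the connection.
            data = self.request.recv(1024)
            logging.info("attacker sent: %r", data)

    if __name__ == "__main__":
        with socketserver.TCPServer(("0.0.0.0", 2121), DecoyHandler) as server:
            server.serve_forever()

No legitimate user has any reason to touch the decoy, so every connection is a high-confidence alert rather than a security boundary in itself.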
However, it can be argued that a sufficiently well-implemented system based on security through obscurity simply becomes another variant on a key-based scheme, with the obscure details of the system acting as the secret key value.
There is a general consensus, even among those who argue in favor of security through obscurity, that it should never be used as a primary security measure. It is, at best, a secondary measure, and disclosure of the obscurity should not result in a compromise.
Historical note
There are conflicting stories about the origin of this term. It has been claimed that it was first used in the Usenet newsgroup comp.sys.apollo during a campaign to get HP/Apollo to fix security problems in its Unix-clone AEGIS/Domain/OS (they did not change a thing). Fans of ITS, on the other hand, say it was coined years earlier in opposition to Multics users down the hall, for whom security was far more of an issue than on ITS. Within the ITS culture, the term referred to (1) the fact that by the time a tourist figured out how to make trouble he had generally got over the urge to make it, because he felt part of the community; and (2), self-mockingly, the poor coverage of the documentation and the obscurity of many commands. One instance of deliberate security through obscurity on ITS has been noted: the command to allow patching the running ITS system (altmode altmode control-R) echoed as $$^D. Typing alt alt ^D set a flag that would prevent patching the system even if the user later got it right.
External links
- A Model for when Disclosure Helps Security: What Is Different About Computer and Network Security? by Peter P. Swire
- Eric Raymond on Cisco's IOS source code 'release' v Open Source
- Computer Security Publications: Information Economics, Shifting Liability and the First Amendment by Ethan M. Preston and John Lofton
- "Security Through Obscurity" Ain't What They Think It Is by Jay Beale
- Secrecy, Security and Obscurity by Bruce Schneier