Open Source: How Secure?

Date: 14 Nov 99 | Writer: Simson Garfinkel | Location: Martha's Vineyard

Fiascos

Upshot:

Open source boasts distinct security advantages over proprietary software. But that doesn't mean it's bulletproof. Second of five parts.
 
Although open source software should, in principle, be more secure than proprietary software, there have been a number of high-profile cases in which devastating security flaws were found in software distributed in source code form. What made these flaws all the more embarrassing for the open source movement is that they were discovered years after the source was distributed and put into wide use.

Probably the single most important piece of open source security software that's ever been written is the Kerberos network authentication system. Under development at MIT for more than a decade, Kerberos is still regarded as state-of-the-art technology. The system provides single sign-on, authentication of users to services, authentication of services to users, and distribution of one-time keys for bulk data encryption. Properly implemented, Kerberos eliminates password sniffing, one of the most common security threats today. Kerberos is so good that it has even been adopted by Microsoft, and will be deployed (in a somewhat bastardized form) in Windows 2000.

Kerberos has always been distributed by MIT in source code form. The source code was examined by thousands of programmers. And yet, in February 1996, researchers at Purdue University discovered a devastating bug in the Kerberos Version 4 random number generator. "Basically, we can forge any key in a matter of seconds," professor Gene Spafford told The Chronicle of Higher Education. A patch was quickly distributed, but the fact remains that, for more than a decade, anybody who knew about the security flaw could penetrate any Kerberos-protected system on the Internet. (In the interest of full disclosure, Gene Spafford is my coauthor on several books on the subject of computer security.)
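
Published accounts of the Purdue finding describe the problem as a predictably seeded random number generator. The fragment below is a minimal sketch of that class of flaw, not the actual MIT code; the seeding recipe and function names are hypothetical.

    /*
     * A minimal sketch, NOT the actual Kerberos v4 code: a session
     * key derived from a pseudorandom generator seeded with a few
     * guessable values.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    /* Hypothetical key generator: the seed mixes only the clock and
       the process ID, so it carries far too little entropy. */
    static void weak_session_key(unsigned char key[8])
    {
        srand((unsigned) time(NULL) ^ (unsigned) getpid());
        for (int i = 0; i < 8; i++)
            key[i] = (unsigned char) (rand() & 0xff);
    }

    int main(void)
    {
        unsigned char key[8];
        weak_session_key(key);

        /* An attacker who can guess the approximate time of key
           generation and the range of likely process IDs has only a
           few million seeds to try -- seconds of work, even on
           1996-era hardware. */
        for (int i = 0; i < 8; i++)
            printf("%02x", key[i]);
        putchar('\n');
        return 0;
    }

Because the seed space is so small, an attacker can simply re-run the generator over every plausible seed until the forged key matches -- which is why the attack takes seconds rather than centuries.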

Security has also been a persistent problem for other "open source" programs. One of the best examples is sendmail, which for years was a perennial source of Unix system vulnerabilities. Indeed, there were so many security problems with sendmail that Marcus J. Ranum wrote "smap," the sendmail wrapper, designed to prevent people on the Internet from communicating directly with the sendmail program.
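
The real smap is more elaborate, but its design can be sketched briefly: a tiny, easily audited program accepts the connection, copies the message into a spool file under strict length limits, and lets sendmail pick the file up later, away from the network. The sketch below assumes it runs under inetd (so standard input is the network connection); the spool path and all details are hypothetical.

    /* Illustrative sketch of the smap idea -- not Ranum's actual
       code.  Run under inetd, standard input is the network
       connection.  Every line is length-limited, stored verbatim,
       and interpreted as little as possible. */
    #include <stdio.h>
    #include <string.h>

    #define MAX_LINE  1024                       /* bounds-checked reads */
    #define SPOOL_TMP "/var/spool/smap/msg.tmp"  /* hypothetical path */

    int main(void)
    {
        char line[MAX_LINE];
        FILE *spool = fopen(SPOOL_TMP, "w");
        if (spool == NULL)
            return 1;

        /* Copy lines until the SMTP end-of-data marker.  A separate
           trusted process later hands the spool file to sendmail. */
        while (fgets(line, sizeof line, stdin) != NULL) {
            if (strcmp(line, ".\r\n") == 0 || strcmp(line, ".\n") == 0)
                break;
            fputs(line, spool);
        }
        fclose(spool);
        return 0;
    }

The point of the design is that the program facing hostile input is small enough to audit line by line, while the large, complex sendmail never parses raw network data.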

Sendmail's security has gotten somewhat better in recent years, thanks in part to the creation of a company that's watching over the source code. But if incorporation is what was required to finally fix sendmail's holes, doesn't that represent a failure of the open source model?

The most dramatic case of a catastrophic security failure in open source software is almost certainly the case of the Internet Worm, which infected somewhere between 2 and 10 percent of the computers on the Internet in November 1988. The worm's primary means of infection was a programming error in the fingerd program -- instead of using the C function fgets() to read data from the network, the author of fingerd used the gets() function, which doesn't check the length of its input. The worm attacked fingerd by transmitting more information than was expected, causing a buffer overflow.
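
The difference between the two calls is easy to see in code. What follows is a sketch of the bug class, not the actual 4.3BSD fingerd source; the buffer size and function names are illustrative. (gets() was so unsafe that it was eventually removed from the C standard altogether.)

    /* Sketch of the fingerd bug class -- not the actual 4.3BSD
       source.  Under inetd, standard input is the network socket. */
    #include <stdio.h>

    #define LINE_LEN 512            /* illustrative buffer size */

    /* Vulnerable: gets() keeps writing past the end of `line` when a
       client sends more than LINE_LEN - 1 bytes, smashing the stack. */
    void read_request_unsafe(void)
    {
        char line[LINE_LEN];
        gets(line);
    }

    /* Fixed: fgets() stores at most LINE_LEN - 1 bytes plus a
       terminating NUL, so no input can overflow the buffer. */
    void read_request_safe(void)
    {
        char line[LINE_LEN];
        if (fgets(line, sizeof line, stdin) == NULL)
            return;                 /* connection closed */
    }

    int main(void)
    {
        read_request_safe();        /* the unsafe version is shown
                                       only for comparison */
        return 0;
    }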

How is it possible that such an obvious vulnerability could have existed in an open source program? Even in 1988, programmers understood the problems inherent in programs and functions that didn't check their arguments. Furthermore, the fingerd server wasn't an obscure program: distributed with Berkeley Unix 4.3, it had been available in source code form to hundreds of institutions. Dozens of those sites had modified the fingerd source code to make it read data from alternative databases; before the advent of the World Wide Web, finger was the primary tool for learning about other users on the Internet. But not a single programmer raised the alarm.

These cases, and many more like them, show that simply releasing the source code of a program does not guarantee that the program will somehow become secure. Even when hundreds of programmers examine the source code, even when the program performs a security-critical function, even when errors cannot be tolerated, security problems persist.

Does source code really breed security? It's true that, in each of these cases, the existence of the source code allowed people to fix their own operating systems. But the source code also allowed attackers to craft their devastating exploits. Source code was a tool for both good and evil.

Part III: The Danger of Trojan Horses
 
© 1999 Wide Open / Red Hat, Inc. All rights reserved.