Softpanorama

May the source be with you, but remember the KISS principle ;-)
(slightly skeptical) Educational society promoting "Back to basics" movement against IT overcomplexity and  bastardization of classic Unix

OSS Security


There is no panacea. Security is hard and very expensive.


Old News

[Oct 28, 2004] eBCVG - Is Open Source Really More Secure?

Is Open Source Really More Secure?
Author: Deb Shinder, WindowSecurity
Published: Thursday, 28 October 2004 15:36 GMT

The debate surrounding which is best, open source (often free) software or closed source commercial software, continues to rage. Proponents of open source claim that it not only saves money, but is also inherently more secure. The first claim might seem to be a given (although once you factor in learning curve, administrative overhead and support – or lack thereof – "free" software doesn't always have as much of a TCO advantage as it would seem). The second claim is what we'll discuss in this article. Is open source really inherently more secure than closed source commercial software? If so, why? And if not, why do so many have that perception?

What is Open Source, Anyway?

Before we can intelligently discuss the differences between open source and proprietary software, we need to clarify what the term really means. Many people equate "open source" with "free of charge," but that's not necessarily the case. Open source code can be – and is – the basis for products such as RedHat and dozens of other commercial distributions of Linux that range in cost from a few dollars to a few thousand (RedHat Enterprise Linux premium edition lists at $2499 for Intel x86, up to $18,000 for IBM S/390).

"Open source" also does not mean "unlicensed." In fact, there are a whole slew of licenses under which open source software is distributed. Some of the most popular include the GPL (GNU General Public License), the BSD license, and the Mozilla Public License. The Open Source Initiative (OSI), a non-profit corporation, has developed a certification process for licenses. You can see a list of open source licenses approved by OSI at http://opensource.org/licenses/.

The name itself tells the story: open source software means the source code (the programming often written in C, C++ or even assembler language) is available to anyone who wants it, and can be examined, changed and used to write additional programming. This is in contrast to "closed" or proprietary software such as Microsoft Windows, for which the source code is a closely guarded trade secret (except when it's leaked to the public).

When Closed Source Comes Open

Which brings us to recent events: in early February, it was reported that part of the source code for Windows NT 4.0 and Windows 2000 had been leaked to the Internet. Files containing the code were posted to a number of P2P sites and were being eagerly downloaded. The available code comprised only a small portion of the entire code base for the operating systems, but the incident caused a great deal of consternation, both at Redmond and within the IT community.

Microsoft was understandably concerned about its intellectual property rights, but IT pundits played up the security angle. Many unnamed (and some named) "security experts" were quoted as saying the leaks of the source code present a serious security issue, and that hackers could use the information to launch new and improved attacks against the Windows operating systems.

Does This Mean Open Source is Less Secure?

These claims must seem confusing to those who have been listening to open source proponents, who for years have told us that their software is more secure precisely because the source code is readily available to everyone. If having the code "out there" makes Linux more secure, why would the same thing make Windows less secure?

Of course, Microsoft has always taken the opposite stance. During the anti-trust trials, they argued vehemently against the court's proposed remedy of disclosing their source code based on the security risks of doing so.

Who's right, then? All other issues aside, what are the security advantages and disadvantages of open source vs. proprietary software? Let's take a look.

Security Through Obscurity

Vendors of proprietary software say keeping the source code closed makes their product more secure. The reasoning seems sensible on its face; certainly you don't want to advertise to the neighborhood burglars what goodies you have in your house and where they're located.

Open source advocates counter that this is merely a form of "security through obscurity," a concept that's generally dismissed as ineffective in the IT community. And certainly, by itself it won't protect you, as a homeowner or as a software vendor. Merely keeping quiet about your possessions might make it less likely that thieves will target you, but you'd be foolish to leave your doors unlocked at night just because you haven't distributed information about what you own.

Keeping the source code closed might deter some hackers, but the large number of successful attacks against Windows and other proprietary software proves that it certainly doesn't provide any kind of high level of security.

Speaking of the high rate of attacks against Windows, open sourcers often point to that as "proof" that their software is more secure. However, number of attacks doesn't prove anything except that Windows is a more popular target. If 90% of the people in the neighborhood put their valuables in a particular brand of safe, the smart burglar is going to spend his time learning to crack that type of safe. The other 10% might use a brand of equal or inferior quality, but they might be successfully attacked less often simply because the product they use is not as ubiquitous.

If you were a hacker, and the majority of systems you encountered ran Windows while a smaller number ran a different OS, which one would you prefer to develop attacks and viruses for? Open source proponents are fond of "facts" that show more Windows machines are compromised, more Windows based Web sites are defaced, etc. But in fact, a lower attack rate that's due to a smaller installed base is just one more form of security through obscurity.

Security Advantages – and Disadvantages – of Open Source

Those in favor of open source say that because everyone has access to the code, bugs and vulnerabilities are found more quickly and thus are fixed more quickly, closing up security holes faster. They also point out that any and everyone is free to create a better, more secure version of the software.

Those on the other side maintain that a closed system in which only trusted insiders debug the code makes it less likely that discovered vulnerabilities will be exploited before they can be patched.

They also point out that there are many reasons (in addition to market share) that are unrelated to the technical security of the software but that can account for a larger number of attacks against proprietary software. One is the nature of the "OS wars" – because open source software has traditionally been more difficult to use, those who gravitate toward it tend to be more technically savvy. The larger number of self-proclaimed hackers who are pro-open source and anti-Microsoft means there are more people out there with the motive and the means to write malicious code targeting Windows systems.

Of course, the open source people can respond that the very fact that Microsoft has more "enemies" makes their software inherently less secure because so many are trying to bring it down.

What's the Answer?

It's obvious that you can use both statistics and logic to support either side of the argument. Our discussion started off by asking whether open source software is inherently more secure than proprietary software. That is, does opening the source code in itself make it more secure?

Consideration of the facts makes it obvious that having the code available has both advantages and disadvantages in terms of security. Vulnerabilities may be found – and exploited, if they're found by the wrong people – more easily, but they may also be fixed – if they're found by the right people – more quickly. There are many factors that affect the security of an operating system or application, from the code level to the user level. Whether or not the source code is open is probably one of the least important factors.

Is Open Source Insecure? - Roaring Penguin Software

To investigate further, I asked Kenneth Brown (author of the M Study) and Gregory Fossedal (Chairman of the AdTI) eight questions, all of which they declined to answer. The eight questions are as follows:
  1. How much did Microsoft pay you?
  2. Do you actually believe anything in the white paper, given the overwhelming consensus in the computer security field that security by obscurity is useless?
  3. Why do you run the Alexis de Tocqueville Institution Web site on the open-source Apache server?
  4. Please check my malware graphs and tell me why that open-source server is bombarded by attacks from closed-source Windows machines.
  5. Please respond to the Mitre Report.
  6. Please explain why NSA distributes security-enhanced Linux but not any closed-source system.
  7. Explain this statement by noted security expert Bruce Schneier:

    We pick on [Microsoft] because they've done more to harm Internet security than anyone else, because they repeatedly lie to the public about their products' security, and because they do everything they can to convince people that the problems lie anywhere but inside Microsoft. Microsoft treats security vulnerabilities as public relations problems.

  8. Please explain the web-site defacement statistics which show that closed-source software has a history of defacement totally out of proportion to its market share.

Anti-open source 'whitepaper' devastated - The Register

Section II is where Microsoft vents its anger. Take a look at this gem:

The GPL is one of the most uniquely restrictive product agreements in the technology industry.

Why does Microsoft... excuse me, the AdTI... say that? They say that because:

The GPL requires that if its source code is used in any type of software product (commercial or non-commercial) for any reason, then the entire new product (also known as the derivative) becomes subject to terms of the GPL open source agreement.

This is not quite true; if you do not distribute your derived product, then you do not need to distribute the source code. But for the most part, the statement is accurate.

But so what? Suppose you derive a product from Microsoft Windows or some other proprietary code. Then you are breaking all kinds of license agreements. Furthermore, proprietary vendors would demand and get the rights to your derived product, leaving you with nothing.

The GPL is no more restrictive than the most liberal of proprietary licenses, and a good deal less restrictive than most. So Microsoft's... excuse me, the AdTI's... complaints are groundless.

Another quote: "David Wheeler, publisher and expert in Washington on open source and proprietary source comments, without licensing the source code in a multi-license format, (referring to other more permissive licenses), it is impossible for GPL to work for a proprietary business model."

Perhaps the AdTI misses the point. GPL advocates do not care if GPL'd software can be made to work in a proprietary business model. It's not our problem. There's no God-given right for proprietary software vendors to make money; they have to compete. And if the rules of the marketplace suddenly change and make it difficult for them, well -- tough. Adapt or die. Don't moan.

III. The Myth of a Public Software Community

Section III attempts to debunk the "myth" of a public software community. The AdTI hints that open-source advocates abandon their principles when they smell money:

Widespread support for GPL open source lies in the IT community's frustration with competitive, closed proprietary software. But in fact, it is quite common that programmers experiment with open source until they see an opportunity to capitalize on an idea, then embrace proprietary standards. One could joke that open source has been a bridesmaid but never a bride. The story of the web browser is an example of this reality.

AdTI uses the story of Netscape "killing" the open-source Mosaic. Well, Mosaic was never GPL'd. If it had been, Netscape would have been unable to kill it. Furthermore, AdTI says of Mosaic:

Through a commercial partner, Spyglass, NCSA began widely licensing Mosaic to computer companies including IBM, DEC, AT&T, and NEC.

Conspicuously absent from AdTI's list is another licensee: Microsoft. Yes, Spyglass's browser formed the basis for Internet Explorer. And revealed here is Microsoft's reason to fear the GPL: It cannot make use of the work of thousands of dedicated programmers for free, locking the work up in a proprietary product. It did that with early versions of its TCP/IP stack, derived from the Berkeley stack. But as more free software is GPL'd, Microsoft's cherry-picking opportunities diminish. Isn't it sad?

The AdTI never quite gets around to saying why the open-source community is a "myth". Apparently, the hundreds of collaborators who gave the world the Linux kernel are mythical. Perhaps the outstanding KDE desktop environment was written by unicorns. And one supposes that GNOME, another outstanding desktop environment, was produced by, well, gnomes. Apache -- it's a myth. PHP -- doesn't exist. Mozilla -- pshaw.

Even in my own modest software development, I've had contributions from dozens of people around the world to my software packages. I've had suggestions, fixes, enhancements and pieces of wisdom donated to me which would never have happened in a proprietary development environment.

IV. The Government and the GPL

This is where politicking gets into high gear.

However, the use of the GPL has the potential to radically alter a very successful model for partnership, particularly when most large commercial entities do not readily embrace the GPL.

Once again, the white paper is worried about "large commercial entities." Well, some large commercial entities like HP/Compaq, IBM, Dell and Sun are quite willing to use, produce and/or distribute GPL'd software. To those large commercial entities who wish to stop GPL'd software, I say: "Tough. Adapt or die."

Needless to say, the government could not depend on patches for software glitches to wander in from the public. Likewise, the government could only use open source code that it could independently service in case of an emergency. Agencies without extensive staff to maintain its internal operations cannot afford to use hapless and untested software without accountability, warranties or liability.

This is a complete red herring. Patches don't "wander in" from the public for open-source products. Rather, they come straight from the authors, or sometimes from distributors such as Red Hat. Furthermore, they tend to come in with a lot more alacrity than fixes from commercial vendors.

With open-source, the government at least has an option to be able to "independently service" the software in case of emergency. With proprietary software, the government does not even have this choice. Therefore, the AdTI's objections on this ground are spurious.

Another consideration for the U.S. government is that all source code developed under the GPL could have mirrored availability to the public. This poses unlimited security issues.

AdTI loves this refrain, but has yet to prove it. In my other article, I debunked the myth that source code availability necessarily introduces security issues, and demonstrated that in fact, it can often enhance security. I was interviewed by AdTI for my opinions on the matter; they neglected to include my comments in the paper.

For example, if the Federal Aviation Agency were to develop an application (derived from open source) which controlled 747 flight patterns, a number of issues easily become national security questions such as: Would it be prudent for the FAA to use software that thousands of unknown programmers have intimate knowledge of for something this critical? Could the FAA take the chance that these unknown programmers have not shared the source code accidentally with the wrong parties? Would the FAA's decision to use software in the public domain invite computer hackers more readily than proprietary products?

Again, a ludicrous example. No-one simply sits down and "develops" such an application by starting with free software. Even if the FAA did develop an open-source flight-control application, AdTI has not demonstrated at all that it would have significantly different security issues than a closed-source one. Sure, AdTI asks a bunch of rhetorical questions. But that's not how one conducts a logical argument. So let's answer the rhetorical questions with some of our own:

Would it be prudent for the FAA to use software that thousands of unknown programmers have intimate knowledge of for something this critical?

Is it prudent for any federal agency to use Microsoft software, given that it is a matter of public record that Russian hackers illegally broke into Microsoft's network and had access to source code? Is it prudent for any federal agency to use software which is not freely-available for peer review? Is it prudent for any federal agency to take the word of a proprietary vendor that its software is secure, given that the vendor is attempting to make a sale?

Could the FAA take the chance that these unknown programmers have not shared the source code accidentally with the wrong parties?

Will the FAA ban the use of Microsoft software, given that it is a certainty that Microsoft source code has been shared "accidentally with the wrong parties"?

Would the FAA's decision to use software in the public domain invite computer hackers more readily than proprietary products?

Will the AdTI comment on why proprietary Web servers seem to be cracked far more often than open-source ones, even though they have smaller market share?

Reverse Engineering

Experts differ on whether the primary focus for security should be source code or binary code. Andrew Sibre, a programmer with over twenty years of experience, insists, "Having a license for binaries only gives you a black box: you don't know what it's doing, or how, unless you want to go insane trying to reverse-engineer it with a debugger (illegal under the terms of most licenses)." Having the source lets you see what it's doing, how it does it, and permits you to modify it to meet your particular requirements (including security-related ones). To this extent, government officials should be concerned that the threat may not just be an adversary cracking their system, but inadvertently educating adversaries about their security systems. Sibre continues, "Depending on code without the source is quite similar to depending on a complex mechanical or electronic system without the benefit of shop and parts manuals."

Naturally, having access to source code eases reverse-engineering. However, the vast majority of security exploits are found without access to the source code. As I wrote to Ken Brown, the author of the report:

The entire premise of computer security and encryption is as follows:

A security system must be resistant to attack *even if* the attacker has all the details about how it works. I refer you to: "Applied Cryptography", Bruce Schneier, John Wiley and Sons, Inc, page 3:

"All of the security in these algorithms is based on the key (or keys); none is based in the details of the algorithm. This means that the algorithm can be published and analyzed. Products using the algorithm can be mass-produced. It doesn't matter if an eavesdropper knows your algorithm; if she doesn't know your particular key, she can't read your messages."

I refer you also to: "Practical UNIX and Internet Security", Simson Garfinkel and Gene Spafford, O'Reilly and Associates, pages 40-45:

"... This is especially true if you should find yourself basing your security on the fact that something technical is unknown to your attackers. This concept can even hurt your security."

I refer you to an Internet draft on security through obscurity: http://www.ietf.org/internet-drafts/draft-ymbk-obscurity-00.txt

A few more links on why security through obscurity does not work:
http://www.treachery.net/~jdyson/toorcon2001/
http://www.counterpane.com/crypto-gram-0205.html
http://online.securityfocus.com/columnists/80
http://www.vnunet.com/Analysis/1126488
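The key-based principle that Schneier and the other references describe can be sketched in a few lines of Python. This is a hypothetical illustration, not drawn from any of the cited sources: the HMAC-SHA256 construction is completely public, yet an attacker who knows the algorithm and the message, but not the key, cannot produce a valid authentication tag.

```python
import hashlib
import hmac
import secrets

def make_tag(key: bytes, message: bytes) -> str:
    """Compute an authentication tag using a fully public algorithm (HMAC-SHA256)."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, tag: str) -> bool:
    """Constant-time comparison; security rests entirely on the secrecy of the key."""
    return hmac.compare_digest(make_tag(key, message), tag)

# The defender's 256-bit key is the only secret in the system.
key = secrets.token_bytes(32)
message = b"routine status report"
tag = make_tag(key, message)

# An attacker has full knowledge of the algorithm and the message,
# but without the key, any tag computed with a guessed key will not verify.
attacker_key = secrets.token_bytes(32)
forged_tag = make_tag(attacker_key, message)

assert verify(key, message, tag)
assert not verify(key, message, forged_tag)
```

Publishing `make_tag` costs the defender nothing; only the key must stay secret, which is exactly the property the quoted cryptography texts attribute to well-designed algorithms.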

Is Reverse-Engineering That Hard?
Before I started Roaring Penguin Software Inc., I worked at Chipworks, a company which does reverse-engineering for a living. From first-hand experience, I know that hardware and software security can be broken more easily than most vendors believe, and much more cheaply, too.

Back-Doors

Ken Brown raises the old back-door bogeyman:

Another security concern is that the primary distribution channel for GPL open source is the Internet. As opposed to proprietary vendors, open source is freely downloaded. However, software in the public domain could contain a critical problem, a backdoor or worse, a dangerous virus.

The following material is taken straight from my other article, where I already covered the back-door issue:

In fact, there have been some trojans placed in open-source software. They have usually been discovered and neutralized very quickly. By contrast, closed-source products have a sad history of spyware, "Easter eggs", and questionable material, placed by people who have (presumably) been "screened." In fact, one of Microsoft's own security updates was infected with a virus, something which (to my knowledge) has never happened in the open-source world.

An interesting back-door was one in Borland's closed-source Interbase product. This back-door lay undetected for years, but was revealed within weeks of the product being open-sourced.

And another interesting little "easter egg" is on the AdTI's very own Web site.

Questionable material in Microsoft software may have helped spur a Peruvian bill to promote free software in government. The author of the bill says that open-source software provides a better guarantee of "security of the State and citizens" than proprietary software, an analysis which is 180 degrees out of phase with the AdTI Study.

The real "victims" of the GPL

The government's productive alliance with private enterprise is also relevant particularly when its decision to use GPL source code would inherently turn away many of its traditional partners. Security, as well as other impracticalities make GPL open source very unattractive to companies concerned about intellectual property rights. In effect, the government's use of GPL source code could inevitably shut out the intellectual property based sector.

The Government must choose software to maximize national security and minimize government expenditure. It owes absolutely nothing to the "IP-based sector" or any other corporation. What was it I said before? Oh, yes: "Tough. Adapt or die."

This has a number of ramifications. Immediately, it would limit the number of qualified vendors to choose from to deliver products.

Tough. Adapt or die.

The GPL's wording also prevents the equal use of software by members of the IP community and the GPL open source community.

This is a lie. If the "IP community" (whatever that is) respects the terms and conditions of the GPL, it's as free as anyone else to use and distribute GPL'd software. If it doesn't like the terms of the GPL, that's the "IP community's" problem, not the GPL's problem.

A worse consideration is that use of GPL could inadvertently create legal problems. IP community members could argue that the government's choice of open source is restrictive and excludes taxpaying firms from taxpayer-funded projects. Adverse impact would include a discontinued flow of technology transfer from government-funded research to the technology sector. Without value, it becomes highly likely that government funding for research would slow as well.

Here, AdTI is delivering a veiled threat on behalf of Microsoft. First of all, if "IP community members" could argue that, they already would have. They have not made the argument because they know it is specious. In fact, there's a very good argument for requiring the fruits of government-funded research to be GPL'd so that all citizens can benefit.

Furthermore, the "IP community members" have benefited from government research as much as (or more than) government has benefited from private research. So to pull out of government partnerships out of pique over software licensing would only hurt proprietary vendors and no-one else.

V. Intellectual Property Left

This is a rewording of "Free Software is Communism" and merits about the same amount of serious attention.

U.S. intellectual property (IP) statutes have been a beacon for inventors around the world. The U.S. model for motivating, compensating and protecting innovators has been successful for almost 200 years. GPL source code directly competes with the intellectual property restrictions, thus it is vital to analyze its impact.

The GPL does not in any way "compete" with U.S. copyright law. It uses U.S. copyright law in a perfectly legitimate and reasonable way.

There are two groups of programmers that contribute to the open source community. The first group consists of professionally hired programmers by day, who freely contribute code. The second group consists of original equipment manufacturers (OEMs) that are hiring open source programmers for their products. However, open source principally perpetuates itself because there is an avid pool of experts and enthusiasts willing to spend their spare time to provide fixes and modifications to open-source software. This volunteer system works well as an academic model, but not as a business one.

Who cares about business models? We have Linux, Apache, Mozilla, Gnome, KDE, Perl, Python, PHP, FreeBSD, OpenBSD, NetBSD, and so on in spite of the supposed lack of a business model. What we see here is more whining from proprietary vendors about how free software is hurting their business model. Let's hear the refrain: "Tough. Adapt or die."

As mentioned earlier, open source code is not guaranteed nor does it come with a warranty.

Neither does most proprietary software, so this is a red herring. If you want a warranty, most open-source vendors will be happy to provide one if you pay for it.

Open source products are often distributed without manuals, instructions or technical information. While a commercial developer is obligated to produce manuals, diagrams and information detailing the functionality of their products, open source programmers are not. In addition, open source developers cannot be expected to create software manuals with the vigor of private firms that are obligated to produce them. Producing technical specifications (in soft or hard copy format) is time-intensive and expensive. But this is not just a customer service issue.

Some open-source software comes with poor documentation, just like some proprietary software. Other free software comes with excellent documentation. It's a matter of customer choice: Choose software that has what you need.

All of my free software products come with complete manual pages. Most serious developers do not consider software finished until the manuals are finished.

Innumerable questions surround the distribution of technical information in the copyleft environment, particularly because the Free Software Foundation has a copyleft license for its documentation as well. Issues include: Who should have the right to alter software manuals? Who is the final editor or is there one? How should changes be regulated? Are manuals copyright protected documents? What is the process for making changes? What body regulates these changes? How can organizations guarantee that information in manuals is always accurate?

More rhetorical questions. With proprietary software, if the manuals are inaccurate, you're out of luck. With free software, you at least have a chance to correct them.

Again, we see the unease of the proprietary vendors who want bodies to "regulate" changes. They are unable to wrap their minds around the new reality of free software. Rather than changing their ways, they dig in their heels. They may need another reminder: Tough. Adapt or die.

Today, software impacts a firm's financial health in an intimate fashion. It becomes unrealistic for a firm to depend too much on the trust of an anonymous community that does not have anything at stake financially to keep important technical documents current.

On the contrary, it is imperative that businesses rely solely on free software for access to critical information. Only in this way can they guarantee access to their data, and not be held hostage by proprietary file formats and proprietary vendors. To quote Dr. Edgar David Villanueva Nunez, a Peruvian legislator:

To guarantee the free access of citizens to public information, it is indispensable that the encoding of data is not tied to a single provider. The use of standard and open formats gives a guarantee of this free access, if necessary through the creation of compatible free software.

To guarantee the permanence of public data, it is necessary that the usability and maintenance of the software does not depend on the goodwill of the suppliers, or on the monopoly conditions imposed by them. For this reason the State needs systems the development of which can be guaranteed due to the availability of the source code.

To guarantee national security or the security of the State, it is indispensable to be able to rely on systems without elements which allow control from a distance or the undesired transmission of information to third parties. Systems with source code freely accessible to the public are required to allow their inspection by the State itself, by the citizens, and by a large number of independent experts throughout the world. Our proposal brings further security, since the knowledge of the source code will eliminate the growing number of programs with *spy code*.

In the same way, our proposal strengthens the security of the citizens, both in their role as legitimate owners of information managed by the state, and in their role as consumers. In this second case, by allowing the growth of a widespread availability of free software not containing *spy code* able to put at risk privacy and individual freedoms.

In fact, Villanueva's eloquent and well-written letter handily demolishes most of AdTI's premises and conclusions; it's well worth a read.

More on Reverse Engineering

The reliance on reverse engineering is probably one of the biggest conflicts between the IP and the GPL open source community. To keep GPL products relevant and up to date, GPL enthusiasts must perpetually reverse engineer intellectual property.

Reverse-engineering is required only if hardware manufacturers keep the details of their software/hardware interfaces secret. The vast majority of hardware manufacturers do not keep them secret. Some which do keep them secret provide (binary-only) drivers for free software systems. Reverse-engineering is necessary only for the small minority of hardware devices which are secret.

Reverse engineering has a number of implications. It harbors very close to IP infringement because and has staggering economic implications.

Reverse-engineering is perfectly legal. In fact, the European Union has a law guaranteeing the legality of reverse-engineering for the purpose of creating compatible software or devices. AdTI implies that reverse-engineering is "close to IP infringement", but they never say why (and their sentence doesn't even parse.)

If software is freely re-engineered, it will inevitably impact the value of software on the market. If the price of software is adversely impacted, salaries and inevitably employment of software programmers would be negatively affected as well.

This is correct. If software is freely re-engineered, it destroys monopolies and brings back a sense of free market to the industry. Yes, software prices go down. And yes, consumers benefit.

The whole paragraph is simply a thinly-hidden Microsoft lament about the success of products like Samba which enable companies to run Microsoft-compatible file sharing without exorbitant Microsoft licensing fees.

VI. Is the GPL Cost-Beneficial?
This is a restatement of the tired old "TCO" straw-man.

Discussing the economic implications of open source, Andre Carter, President of Irimi Corporation, a technology consulting firm in Washington, comments, "The question of open source code is about whether the software developer wants to make available to the world the blueprint of what they built or simply the benefits of what they built. The notion of open source software has nothing to do with free software. The purchase price of computer software is only a fraction of the total cost of ownership. So even if the price tag reads 'free,' it can end up being more expensive than software you buy. This is especially true for the typical consumer. If it requires technical know-how to operate, doesn't offer built-in support, and demands constant attention, it won't feel free for very long."

Lots of "ifs" and weasel words in there. If it requires technical know-how to operate, etc., etc. Nowhere does Carter say that free software does in fact require any more technical know-how than proprietary software. Furthermore, proprietary software often has hidden costs which can come back later to haunt you.

The success of an A-Z open source environment would expectedly impact the software sector as a viable entity. If software is freely available, but PCs, servers and hardware maintain their value, we can only predict that the value of software companies will plummet. Hardware will come with more and more free software. Second, we can only expect that the revenues and value of the software sector will transfer to the hardware sector. Although the software sector has seen growth almost every year, it is questionable whether the GPL model will enable the software industry to continue its exceptional growth particularly when the growth in the software sector is tied to proprietary products, something the GPL is anxious to eliminate.

In the 1800s, blacksmithing was a pretty good profession. In the 1960s and 1970s, 8-track tapes did a pretty good business. The fact is that the blacksmith industry and the 8-track tape industry failed to heed the iron rule of the market: Adapt or die. If free software means the death of proprietary software vendors, it will be on the heads of those vendors who fail to adapt.

Businesses must be concerned about the perception of the GPL. For example, experts assess the value of intellectual property when completing valuations of firms. Because GPL open source literally erases the proprietary and trade secret value of software, it can be expected that firms concerned about valuations will be very concerned about using GPL open source.

This is only of concern to firms producing software. The vast majority of firms consume software, and for them, in-house software production is a cost, not a revenue source. For the vast majority of firms, free software will save them lots of money. For those few firms planning on building a business model around proprietary software, I offer my old refrain: Adapt or die. What's good for proprietary software vendors is not necessarily good for the citizen.

There are all types of consumers with ranges of needs and abilities. The guys in the lab at MIT don't need install wizards, plug and play drivers, voice based technical support and big picture manuals as part of their software. However, the elderly couple e-mailing their grandkids or the mother of two managing accounts on a PC in the kitchen does.

Carter clearly has a stereotyped view of consumers. My elderly parents, who enjoy e-mailing their grandkids, use only free software. They are quite happy to use Linux and Netscape. Furthermore, the choice of free software eases my support burden: If my parents need help, I can SSH into their machine and fix it remotely. With all of Microsoft's "wizards" and other gimmicks, they still do not provide a convenient means for remote administration on their consumer-level systems.

People believe free software is hard to use because they've never used it. Just as the AdTI showed that people who've actually worked with MCSEs have a higher opinion of them than people who haven't, people who've actually bothered to use free software have a higher opinion of it than people who haven't.

VII. GPL Open Source and the Courts

Once GPL code is combined with another type of source code, the entire product is GPL. Subsequently, this change could occur deliberately, but it could also occur accidentally. There are unlimited scenarios for accidents to occur, the license could be lost in the source code's distribution, or maybe unreadable due to a glitch in its electronic distribution. Another potentially litigious issue is whether the use of GPL tools used to manipulate code subject software to the GPL. Theoretically, a GPL tool could subject new software to GPL restrictions. This too will have to be interpreted by a judge. Regardless, unknowing users of GPL might have one intention for use of the license and find out later that it inadvertently infringed upon copyright protected work. Legal questions relevant to such an event intersect the legal arenas of intellectual property rights, contract law and liability.

AdTI is very good at offering up red herrings. Let's suppose you "accidentally" included part of Microsoft Windows in a product. Do you suppose Microsoft would be easier on you than copyright holders of a GPL'd product?

The fact is that any software license has terms and conditions which must be obeyed. The GPL is no different; if you do not like its terms, don't use GPL'd software. Microsoft's agenda is transparent here.

The proprietary software industry entreats you to diligently track licenses, and offers harsh retribution against those who violate their licenses. Most GPL violations are settled amicably, and those which result from an accident are usually settled merely by removing the offending code from distribution.

The rest of Section VII is simply speculation and not even worth commenting on.

VIII. Conclusion

Open source as a development model is helpful to the software industry. For example, software distributed under the BSD license is very popular. The BSD license (see Appendix 9) enables companies, independent developers and the academic community to fluidly exchange software source code.

English translation: The BSD license is good because it allows corporations to benefit from other people's work without offering them any compensation, and without having to allow third parties to benefit from derived work.

The GPL's resistance to commonplace exchange of open source and proprietary has the potential to negatively impact the research and development budgets of companies.

English translation: The GPL doesn't let corporations benefit for free from others' work.

The GPL has many risks, but the greatest is its threat to the cooperation between different parties who collaborate and create new technologies. Today, government, commercial enterprise, academicians, etc. have a system to converge. Conversely, the GPL represents divergence; proposing to remove the current infrastructure of intellectual property rights, enforceable protection and economic incentive.

English translation: The GPL threatens Microsoft's business model. You know my response by now: Tough. Adapt or die.

While GPL advocates are quite active in their promotion of copyleft, few would disagree that its widespread adoption would present a radical change to an industry sector responsible for almost 350 billion dollars in sales annually worldwide (see Appendix 10).

Few would disagree that the automobile all but wiped out blacksmithing as a profession. Few could argue that cassettes didn't decimate the 8-track market. Few would be surprised at my response: Tough. Adapt or die.

AdTI's Numbered Points and my Counterpoints

1- Engineering software has become considerably complicated and rigorous. It is not unusual for software to include millions of lines of source code. If the incentive to develop software is changed, we can subsequently expect the quality and efficiency of software to change.

Yes, with luck, we'd expect the quality to improve. The security records of systems like OpenBSD, Linux, and FreeBSD are vastly superior to that of Windows. While there is no real cause-and-effect relationship, empirical evidence suggests that open-source software is more reliable and of higher quality than most commercial-grade proprietary software.

2- There remains considerable differences within the GPL open source community. It is questionable whether these groups will continue to be proponents of the GPL in its current form or opt for changes in the immediate future.

Even if true, this point is irrelevant. Once software has been licensed under the GPL, the license cannot be retracted. Your rights cannot be withdrawn retroactively (unless you violate the license), unlike some proprietary software licenses.

3- Open source has successfully been used in proprietary software. In addition, academic and government projects have been successful particularly because of commercial interest. Private enterprise offers unique efficiencies for the success of government funded research.

Simply another attack on the GPL. Nothing worth reading; let's move on.

4- Open source GPL use by government agencies could easily become a national security concern. Government use of software in the public domain is exceptionally risky.

A bold assertion, and totally unproven. This assertion is contradicted by empirical evidence. Also, the NSA seems quite comfortable with the security of GPL'd software.

5- Reverse engineering, perpetuated by GPL proponents, threatens not only the owners of intellectual property, but also the software industry itself.

This is an out-and-out lie. Reverse-engineering is critical for the continuation of a healthy software industry. Without legitimate reverse-engineering, there would be no market forces to oppose the development and maintenance of monopolies, and the software market would become even more unfair than it is today.

Attempts to ban reverse-engineering are simply money-grabs by greedy monopolies who wish to hang on to their power.

6- Use of GPL open source creates a number of economic concerns for firms. For example, the valuation of a software company could be significantly effected if it uses source code licensed under the GPL for the development of its products.

If that is of concern (and it is not for the vast majority of corporations), then the corporation is perfectly free not to use GPL'd software.

Using proprietary software for development of products can also significantly lower a company's valuation, especially if the owner of the original proprietary software demands royalties or part-ownership of the resulting IP.

7- The courts have yet to weigh in on the General Public License. Without legal interpretation, the use of the GPL could be perilous to users in a number of scenarios.

If corporations have concerns about legal interpretations of the GPL, they should consult qualified lawyers. IBM, for example, has a massive and top-notch legal team, and they seem to have no qualms about using, creating and distributing GPL'd software. If the AdTI would give us concrete examples of legal concerns, we could discuss them, but as it is, all we are given is conjecture, hand-waving and supposition.

Roaring Penguin's Conclusions

The AdTI claims that the GPL is "acquisitive," yet fails to note that even the most liberal of proprietary licenses is far more restrictive and places far more encumbrances on derived products than the GPL (if, in fact, it even permits derived products in the first place).

The AdTI says that the free software community is a "myth", but fails to explain the tens of millions of lines of high-quality code produced by this mythical community.

The AdTI promised to show how using GPL'd software could threaten security, but failed to deliver. Rather, Microsoft's own Jim Allchin admitted under oath that flaws in Microsoft software, if disclosed, could endanger national security.

The AdTI claims that free software damages members of the "IP community" (by which it means proprietary software vendors), but then fails to show how such damage occurs. Even if free software does damage proprietary software vendors, AdTI fails to show why that is a bad thing for citizens in general.

AdTI raises the hoary old "Total Cost of Ownership" issue, but does not demonstrate that proprietary software is more cost effective. AdTI ignores studies like the one from CyberSource or even Roaring Penguin's own case studies in Free Software in the Real World.

The entire AdTI study is a commercial funded by Microsoft, whose sole aim is to counter the growing adoption of GPL'd software. The report contains nothing constructive or useful. It is a sham.

Other press, commentary and related links:
MS-funded think tank propagates open-source lies, The Register.
Analysis: Microsoft vs. open source battle gets political, InfoWorld.

Copyright © 2002 David F. Skoll

What Sla$hdot DOESNT want you know (Score:-1, Offtopic)
by Anonymous Coward on Sunday April 18, @11:18AM (#8897126)
An analysis of hacker attacks on online servers in January by security consultancy mi2g found that Linux servers were the most frequently violated, accounting for 13,654 successful attacks, or 80 per cent of the survey total. Windows ran a distant second with 2,005 attacks. A more specific analysis of government servers also found Linux more susceptible, accounting for 57 per cent of all breaches [zdnet.com.au]

[Nov 20, 2003] Open source no panacea for security

Conventional wisdom says viruses, bugs and other security problems could be more rapidly cleaned up if only the world would move to an open-source model. Security experts speaking at Comdex disagreed.

"I think open-source software is slightly less secure," said Gary McGraw, chief technology officer of Cigital, who sat on a panel on security problems at the five-day conference in Las Vegas.

"Open-source developers say that there are a million eyes looking at it, but I would rather have one set of good eyes," McGraw said. "There is this boring job that no one wants to do."

Still, despite the problem of having no one specifically in charge of security, Linux so far has avoided many of the virus and worm problems. "There is this nice, big fascist control" imposed by Linus Torvalds over the addition of new features, McGraw said.

McGraw also pointed out that the complexity of software code, especially Windows code, is creating security problems. In 1998, the CERT Coordination Center--the security experts at Carnegie Mellon University--reported 262 vulnerabilities. The number grew to 417 in 1999, to 2,437 in 2001 and to 4,129 last year. At the same time, Windows has added millions of lines of code.

"More lines, more bugs," he made the audience of 300 chant.

[Mar 20, 2002] Building trust into open source CNET News.com By Robert Lemos

In the past three months, the open-source community has been given a wake-up call.

While Microsoft has concentrated on reviewing its flagship Windows source code as part of a new focus on security, Internet watchdogs have released the details of three widespread flaws in open-source applications usually shipped with the Linux operating system.

The flaws could compromise the security of computers on which the applications are installed, prompting some developers to urge the open-source community to take another look at popular code. But most fear the majority of members won't bother.

"No one is doing auditing," said Crispin Cowan, chief scientist at Linux maker WireX Communications, one of several companies selling a version of the OS with additional security options. Cowan is the founder of Sardonix, a Web site aimed at organizing groups of people who want to review major open-source software.

"Reviewing old code is tedious and boring and no one wants to do it," Cowan said.

With Microsoft launching a major security initiative in response to recent criticism, some fear that Linux and open-source developers have become complacent in the commonly held belief that open-source programs are more secure.

This year offered several reasons to question that belief.

In February, a flaw found in the popular scripting language PHP left as many as 9 million Web sites vulnerable to attack. Though the number of vulnerable sites could be as low as 100,000 and the flaw is hard to exploit, the software bug resembles the Web software slipup that left Microsoft servers vulnerable to the Code Red worm.

Gartner analysts Richard Stiennon and John Pescatore say that despite new attention given to software security, no software--whether proprietary or open source--will ever be 100 percent secure.

see commentary

In March, another flaw, in the omnipresent Zlib compression library, left Linux systems potentially vulnerable to attack, though no program exploiting the hole has surfaced.

And in mid-March, a bug in the OpenSSH communications encryption program, commonly used to secure communications to and from Linux computers, left many of those machines open to attack.

The spate of flaws has not gone unnoticed by the open-source community's more vocal members.

"I see a lot of bad software being done," said Theo de Raadt, founder and project leader for the open-source Unix variant OpenBSD. "There is a lot of politics and inaction causing people not to make changes that makes their software better."

The "many eyes" theory
Open-source software's main claim to security is that because anyone can view the source code, developers can constantly look for bugs and fix them. And with a broad cross-section of expertise in the developer community, programmers with specific strengths can look for hard-to-understand "deep" bugs and fix what others might miss.

In his essay on the open-source movement, "The Cathedral and the Bazaar," developer Eric Raymond wrote, "Given enough eyeballs, all bugs are shallow."

De Raadt led a team of OpenBSD developers on just such a review, cleaning up the source code for the Unix-like operating system and replacing functions that were known to be insecure with more robust substitutes.
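The substitutions made in such an audit are easy to illustrate. Below is a hypothetical C sketch, modeled loosely on OpenBSD's strlcpy() (the function name and exact semantics here are invented for illustration, not taken from OpenBSD's tree): the bounded copy always NUL-terminates and returns the source length so callers can detect truncation.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of the kind of substitution the OpenBSD audit
 * made: an unbounded strcpy() replaced by a bounded copy that always
 * NUL-terminates and reports truncation. Modeled loosely on OpenBSD's
 * strlcpy(); the name bounded_copy is invented for this example. */
size_t bounded_copy(char *dst, const char *src, size_t dstsize)
{
    size_t srclen = strlen(src);

    if (dstsize > 0) {
        size_t n = (srclen < dstsize - 1) ? srclen : dstsize - 1;
        memcpy(dst, src, n);
        dst[n] = '\0';   /* always terminated, unlike strncpy() */
    }
    return srclen;       /* srclen >= dstsize means the copy was truncated */
}
```

A caller compares the return value against the buffer size to detect truncation, instead of silently writing past the end of the buffer, which is exactly the failure mode strcpy() invites.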

Yet, the "many eyes" theory, as it is known in the open-source world, doesn't work so well in reality, said WireX's Cowan.

"It does not assure that many eyes are actually looking at the code," Cowan said. "In fact, it is likely that 'rock star' code that is hip to work on gets adequate attention, while the other 90 percent languishes, in many cases never even seen by anyone but the code's authors." And much of this unsexy code forms the foundation of Linux.

Cowan hopes his Sardonix site will become a central registration point for auditing efforts, but for now, Linux and open-source software must rely on developers feeling obligated enough to commit to the drudgery of vetting source code for software bugs.

After security researchers found the flaw in the Web scripting language PHP, a group of programmers decided to start auditing that popular project's code.

"It's difficult to introduce new features and review the existing code at the same time," said Frank Denis, a part-time systems administrator for a French Internet service provider and leader of the PHP code-auditing project. "It's why we are trying to give a hand on that point. We won't introduce any new features, but we will fix potentially dangerous code."

A central process to find bugs, such as Microsoft's Trustworthy Computing initiative, will never catch all flaws in open-source software, Denis added.

"To break into your server, script kiddies will try totally unconventional tricks that have no chance of being part of any initial validation procedure," Denis said, referring to the class of online vandals who are not as technically adept. "The result of (our) different approaches will give a more extensive audit than any strict guidelines. That is the strength of free software: Everyone can put his own brick in the wall."

Polishing the software
Despite the security problem with his own project's code, Jean-loup Gailly, chief software architect for Vision IQ and the co-creator of the Zlib compression library, stressed that Linux's development process still creates more secure code.

"Open-source programs are subject to much more scrutiny and, in case of problems, fixed much more quickly than closed-source programs," Gailly said. "Apache is not more popular than Microsoft IIS (Internet Information Server) by accident; one of the reasons is that it is more secure."

An additional layer of polish is put on the source code by companies and organizations such as Red Hat and Debian, which package their own Linux distributions, Gailly added.

Linus Torvalds, a senior engineer at chipmaker Transmeta and the creator of the Linux kernel, thinks the open-source development style works well.

"Most (code review) by far is simply people looking at code, often for some other reason that had nothing to do with formal auditing," Torvalds said. "I personally like it that way, and it's proven to work pretty well in practice."

Moreover, rather than seeing a threat from Microsoft's new focus on security, Torvalds believes the move shows how weak the company's security used to be. Microsoft wouldn't provide an executive for comment on the issue.

"Let's face it," Torvalds said. "Microsoft did their initiative because they've been so bad at security in general. They fix bugs when somebody drives a truck through them and they get embarrassed enough. Get embarrassed (often) enough, and you start creating 'initiatives'--whether in politics or in commercial software."

He continued: "In the open-source community, the community has so far been pretty good at policing itself without the embarrassment. Do bugs happen? Yes, of course. But do they get found and fixed without a new virus of the week that costs a few billion dollars of user time? You bet."

[May 30, 2002] BW Open Source Software May Offer Target for Terrorists, According to Study by Alexis de Tocqueville Institution's Committee for the Common Defense

WASHINGTON--(BUSINESS WIRE)--May 30, 2002--Terrorists trying to hack or disrupt U.S. computer networks might find it easier if the federal government attempts to switch to "open source" as some groups propose.

"Opening the Open Source Debate," a soon-to-be-released white paper by the Alexis de Tocqueville Institution, details the complex issues surrounding open source, particularly if federal agencies such as the Department of Defense or the Federal Aviation Administration use software that inherently requires that its blueprints, source code and architecture be made widely available to any interested person, without discretion.

In a paper to be released next week, the Alexis de Tocqueville Institution outlines how open source might facilitate efforts to disrupt or sabotage electronic commerce, air traffic control or even sensitive surveillance systems.

Unlike proprietary software, open source software does not keep the underlying code of the software confidential.

"Computer systems are the backbone to U.S. national security," says Fossedal, chairman of the Alexis de Tocqueville Institution and its Committee for the Common Defense, which will release the study. "Before the Pentagon and other federal agencies make uninformed decisions to alter the very foundation of computer security, they should study the potential consequences carefully."

ZDNet Tech Update

March 20, 2002 Flaws turn security spotlight on open source software
With security flaws in widely used open-source applications coming to light, even open source insiders are beginning to question the community's ability to cope. "I see a lot of bad software being done," says Theo de Raadt, founder of the OpenBSD open-source effort.

Wide Open News -- Security Through Obscurity

News: Too much trust in open source?
zdnet.com.com/2100-1104-864256.html


[Mar 10, 2001] Information Security Magazine, March 2001: "Open-Source Security: Open Source Under the Hood," by Pete Loshin. Columnist Pete Loshin ([email protected]) is a senior editor-at-large for Information Security. He produces the Internet-Standard.com Web site and has authored more than 20 books on Internet protocols and security.

Vendors are increasingly including open-source components in their commercial products. What impact does this trend have on product security?

The days are long gone when all you needed to start your own software company were a compiler and a computer. Creating commercial off-the-shelf (COTS) products from scratch in today's market is a daunting task for any but the biggest software companies. Smaller vendors have to compete with the likes of Microsoft, Sun and Cisco, with only a fraction of the resources.

Almost no one can afford to build their own new products from scratch anymore, and the problem is magnified for vendors of network appliances: They've got to deliver a functional, competitively priced server, including software and hardware, while still turning a profit. Vendors of other products, from operating systems to software suites to end-user workstations, are feeling the pinch as well.

Considering this environment, it's not surprising to find vendors increasingly turning to open-source code when creating new products. Yet buyers may not always be aware that inside their shiny new firewall lurks an open-source OS, such as Linux or FreeBSD. Network security appliances designed to do firewalling, intrusion detection and other security functions often rely extensively on open-source OSes and utilities. But many other products include open-source components as well. Apple's new Macintosh OS X, for instance, is based on FreeBSD 3.2 and the Mach 3.0 project from Carnegie Mellon University. Apache, BIND, Sendmail and Perl are all widely used in both commercial and non-commercial products.

Among the obvious reasons developers turn to open source are cost and security. Clearly, vendors can keep their costs down when they don't have to build their own components or buy licenses for commercial components. Why build a Web server when you can use the best one around, Apache, for nothing? Why build your own OS when you can use FreeBSD? Why not include open-source security utilities with a commercial security product?

While some people automatically assume that open-source OSes are more secure than proprietary OSes, it entirely depends on how the code is used and supported. When done right, open-source components add real value to commercial products-and are likely to be at least as secure as closed-source components.

So what exactly is open-source code, and what impact does it have on product security? How can it affect your systems and networks? How can you tell if the product you're using incorporates open source? And how can you become an intelligent consumer of products that use open source?

... ... ...

Risks and Rewards of Open Source in COTS Products
The common assumption among developers and engineers is that the "many eyes" approach to open-source code projects makes it inherently more reliable, robust and secure than closed-source code. However, recent revelations of glaring holes and vulnerabilities in sometimes quite old and widely used open-source code have led some to question the validity of this assumption. Steve Lipner, manager of the Microsoft Security Response Center, points to the recent discovery of vulnerabilities in MIT's Kerberos network authentication software, "where buffer overruns went undiscovered for nearly a decade" in the widely distributed and implemented open-source code. (Microsoft released a closed-source--and slightly non-standard, and thus non-interoperable--version of Kerberos with its Windows 2000 suite.)
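The shape of such a bug is worth seeing. The following is an invented C sketch of the general pattern behind many long-lived overruns, not the actual Kerberos code; all names here are hypothetical: a length field read from untrusted input is used to copy into a fixed-size buffer, and the fix is the bounds check the original author omitted.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define NAME_MAX_LEN 64

/* Hypothetical sketch (not actual Kerberos code) of the pattern behind
 * many long-lived overruns: a length byte read from untrusted input is
 * used to copy into a fixed-size buffer. The vulnerable version omitted
 * the bounds check and called memcpy() with the attacker's length.
 * Returns 0 on success, -1 if the claimed length would overrun dst. */
int parse_principal(char dst[NAME_MAX_LEN], const uint8_t *pkt, size_t pktlen)
{
    if (pktlen < 1)
        return -1;

    size_t claimed = pkt[0];  /* attacker-controlled length field */

    /* The fix: refuse lengths that exceed either the destination
     * buffer or the bytes actually present in the packet. */
    if (claimed >= NAME_MAX_LEN || claimed > pktlen - 1)
        return -1;

    memcpy(dst, pkt + 1, claimed);
    dst[claimed] = '\0';
    return 0;
}
```

Because a missing check like this produces no misbehavior on well-formed input, ordinary testing never trips it, which is how such a bug can sit unnoticed in widely deployed code for years.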

While few open-source advocates still claim absolute security superiority over closed source, most experts seem to agree that open source has the potential to be at least as secure, if not more secure, than closed source. For one thing, open-source code by itself should have no real adverse effect on system security. "The negative ramification that people cite is that 'anyone can look at the code to find holes,'" says Bastille's Jon Lasser. But that works both ways, since open-source vulnerabilities are (in theory anyway) vetted by a much larger community of developers and engineers.

"I'm not convinced that there are any significant real advantages to going with closed source unless there is something about the security mechanism itself that intrinsically can't stand up to examination," says Paul Robichaux, a senior solutions architect for EntireNet and author of several books on Microsoft products.

Robichaux stops short of unqualified endorsement of the open-source security model, cautioning that having "more eyeballs looking at [open-source code] is no guarantee of quality." Moreover, for some systems, such as national security programs, close public scrutiny is unwarranted. "If you look at the authentication system…the [U.S.] National Command Authority uses when they want to tell somebody to launch a nuclear missile, you probably would not gain any security from having many more eyeballs looking at that."

Microsoft's Lipner acknowledges that "no software is free from flaw," while suggesting that "the difference between products lies in how actively vendors seek out the flaws and then fix them." According to Lipner, Microsoft "pays top dollar to ensure that its software is scrutinized by the best minds in the industry rather than taking the open-source approach of relying on hobbyists and 'someone else' to scan code in their spare time."

Microsoft's official position on the relative security of closed- and open-source software, according to Lipner, is that "the difference lies in how we 'do' security. The most fundamental question to ask when examining the security of any software is whether or not the design and development process results in a sound and secure design and a solid implementation." Microsoft isn't opposed to open reviews of cryptographic protocols and algorithms; Lipner says that they "can definitely improve security. These are generally simple enough that academic or external review can find issues and add value."

However, Lipner points out that securing large software systems calls for "a substantial and often costly level of resources applied by a full-time team"--which, he says, will only work if the costs can eventually be recovered by product revenue.

According to Bastille's Lasser, "Anyone using open source can fix the code, or pay someone else to fix it. And anyone can examine it." One result of this all-hands-on-deck model is the potential for discovering backdoors. For instance, Lasser points to Borland's InterBase, initially a proprietary product that, after seven years, was released as open-source code. The proprietary version contained a backdoor that wasn't discovered until six months after the code was opened.

Kurt Seifried, senior analyst for SecurityPortal.com and project head for the Linux Security Knowledge Base, explains that attackers don't need access to the source code to find and exploit problems. For instance, Microsoft issued more than 100 security advisories in 2000, and new bugs related to IIS or IE are publicized on Bugtraq and NTBugtraq all the time.

That's not to say that open-source code has no downside, particularly when implemented in commercial products. It all depends on who is doing the implementation and how support is being provided. Lasser suggests that in order to keep customers' code current, vendors should offer opt-in e-mail lists for up-to-date news about the product, be up-front about any security problems and provide patches online that have been cryptographically signed. While vendors such as Red Hat, SuSE and Mandrake offer these services, "few vendors do all of this," he says.

Seifried, singling out OpenBSD, says some open-source projects are particularly well suited to secure deployment in commercial products. "They sat down and spent a large number of man-years auditing it heavily and now have a pretty solid and secure codebase to work from." Many commercial vendors use OpenBSD--as well as FreeBSD and NetBSD--for firewalls.

However, inappropriately using a secure and open program is dangerous. "It's almost always a question of how the products are used, rather than what they are," Lasser notes. "One of the defining characteristics of the security problem is that these evaluations are fluid, depending largely on new exploits and classes of exploits that are discovered." Just because OpenBSD is notably secure doesn't mean it's not still vulnerable to common exploits of programs like FTP, DHCP and Sendmail. If someone used OpenBSD as part of a "secure FTP solution," but used an insecure FTP implementation, "they're toast," says Lasser.

Buying Open Source Under the Covers, Intelligently
Vendors incorporate open-source code in their products differently, so simply scrutinizing brochures or Web sites isn't enough. Some vendors make open source an important part of their marketing strategy, pointing to it as a source of strength. Examples include C2Net and Linux-based firewall vendor Cybernet Systems Corp. (www.cybernet.com).

Network security appliance vendors sometimes include lists of software installed on their hardware. Read the fine print in datasheets for Sun's Cobalt RaQ, Qube and other network appliances (www.sun.com), and you'll see that those products use the Linux 2.2 kernel. Axent's (now Symantec's) Raptor firewall appliance (www.symantec.com) is also based on the Cobalt RaQ. Other vendors are less forthcoming, releasing appliances based on "proprietary" OSes that are, in fact, open-source based. For example, the FireBox network appliance from NetWolves (www.netwolves.com) is based on FreeBSD, but the company's Web site refers to it as a "Unix-based FoxOS" operating system.

The greatest benefit of buying products that incorporate open source can also be part of the greatest drawback--that is, the fact that vulnerabilities and exploits for leading open-source products are widely published. This means fixes are usually made available quickly, but it also means that if you take too long to update your systems, they will be vulnerable to script kiddies and other attackers.

Bastille's Lasser suggests that vendors should provide proactive support, notifying customers of vulnerabilities and fixes. However, in his opinion, knowing where a particular piece of a system originated, whether open source or not, is not always very useful. "Sure, you could ask whose TCP/IP stack they used, but you won't know which version, and the optimal solution varies by week, application and phase of the moon," Lasser opines. But he also acknowledges that "there's nothing especially specific to open source about any of this."

In the final analysis, it's up to consumers to keep track of what open-source code is running on their systems--if only to keep them up to date. SecurityPortal's Seifried suggests asking vendors for a list of their product's security patches. "If they don't have any patches, I wouldn't buy it. Nothing is perfect.

"Open source is like any technology," he adds. "The implementation can be good or bad. Vendors that use open source and issue timely updates, proactively audit code and so on are good; vendors that don't should be avoided if possible."

Could You Do It Yourself?
Vendors use open-source code to build their products, so why can't anyone else? Well, nothing's stopping them, but the question is whether they can afford to (see box, below). A vendor can afford to put significant resources into putting together a package from open sources as long as they anticipate revenues. Seifried says it's a matter of convenience. "I can easily download the Linux kernel source and all the source code for software I need. Turning that into a working e-mail server, on the other hand, is a completely different matter."

According to Lasser, there are three other good reasons not to "roll your own." First, commercial versions of open-source programs usually incorporate proprietary extensions that add significant value. For example, C2Net's Stronghold Web server adds strong encryption and other features to Apache. Also, with proprietary products you get the benefit of quality assurance. And finally, you get support. "You're not paying for the software so much as you are to have someone to complain to when things break," Lasser says.

Buying commercial products based on open-source components may give users the best of both worlds. Microsoft's Lipner suggests that "proprietary systems are better reviewed, better tested and have a more robust process for dealing with security vulnerabilities when they are found," though he was thinking more of entirely proprietary systems like those available from Microsoft. Everyone seems to agree that a proprietary product provides greater ease of use, better support, more convenience and more features, whether or not the proprietary product incorporates open-source code.



BACKDOORS: OPEN OR CLOSED?

Backdoors are the security manager's nightmare. About the worst thing that could happen from a security standpoint would be the deployment of a system that includes an unknown backdoor.

The fear of backdoors "has probably been the biggest drag on the adoption of open source in the commercial world," says security expert and author Paul Robichaux. He recommends taking great care in reviewing any code brought in-house. "If you're using open-source code and you're not already reviewing it very carefully, you're being stupid and you deserve what you get," he says.

In the open-source world, it's likely that any externally injected malware (such as a Trojan) will be caught before it can be incorporated into production systems. But Robichaux warns that when you are buying compiled products (such as security appliances)--whether open source or not--"you never know what's going to be in those."

Rumors of backdoors inserted into commercial products have persisted for years. According to Robichaux, it's plausible (though never confirmed) that government agencies such as the National Security Agency (NSA) could "go to Microsoft or Sun or Oracle or whoever and wave their magic national security wand." As a result, "The product you're using will have a hole in it, but you won't necessarily find out."

Those using open-source code--whether it's the native code itself or a COTS product based on it--face a Catch-22 when it comes to backdoors. On the one hand, attackers are more likely to try to insert a backdoor into open-source code because, unlike closed source, it's out in the public domain for everyone to play with. On the other hand, they will be less likely to succeed because there are so many other people, with different goals, looking at the same code, which increases the possibility that it will be noticed.

BUILDING YOUR OWN FIREWALL

Time is money, and it's well worth spending a few thousand dollars to save a few weeks of a security manager's time.

When the Internet was still a research network, it was built on BSD/Unix systems. BSD derivatives, like all Unix flavors, are designed from the ground up to run on networked devices. So it shouldn't surprise anyone that so many firewalls and Internet servers are based on BSD-related distributions. Linux, with its relatively easy-to-use firewalling and Network Address Translation (NAT) functions, is also a popular platform for security applications.

If you have Linux, BSD or Unix expertise--or at least plenty of time--BSD- and Linux-based firewalls can be cheap and effective security solutions. But doing it yourself can be an invitation to disaster unless you're sure you've done everything right.

Once you decide to build your own firewall, you must install the operating system as securely as possible, and then create firewall rules to keep out all unauthorized traffic. That means building security policies first--a prerequisite for any firewall.

First, you must choose the most appropriate OS. Some prefer Linux for ease of use and widespread support; others find one of the BSD flavors (OpenBSD, NetBSD, FreeBSD, etc.) stronger (though perhaps less user friendly). If you choose Linux, you now have the option of using the 2.2 kernel, which provides firewall support with packet filtering by the ipchains program; or the 2.4 kernel, which uses the iptables program to create stateful inspection firewalls.
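As an illustrative sketch only (the interface and the single permitted service here are assumptions, not recommendations for your policy), a minimal default-deny rule set using the 2.4 kernel's iptables might look like this:

```shell
# Hypothetical minimal stateful firewall for a 2.4-series kernel.
# The SSH-only policy is an illustrative assumption.
iptables -P INPUT DROP          # default-deny all inbound traffic
iptables -P FORWARD DROP        # this box forwards nothing by default
iptables -A INPUT -i lo -j ACCEPT                        # allow loopback
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT  # inbound SSH
```

An equivalent ipchains configuration on a 2.2 kernel would need separate rules for reply packets, since ipchains is a stateless packet filter.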

The next step is to get a trustworthy distribution: that means downloading from a trusted Web site or buying it on CD-ROM, and checking the distribution's digital signature. You'll want to review the source code before compiling it, and you should compile the kernel with only the drivers that are absolutely necessary.
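The verification step can be sketched as follows. The filenames are hypothetical stand-ins for a real distribution image and its published checksum file; the demo fabricates a small "ISO" so the steps can be shown end to end.

```shell
# Demo of checksum verification. In a real download you would fetch the
# image, the vendor's checksum file, and a GPG signature over that file.
printf 'pretend ISO contents\n' > linux-dist.iso
sha256sum linux-dist.iso > linux-dist.iso.sha256   # vendor publishes this

# First verify the vendor's signature on the checksum file, e.g.:
#   gpg --verify linux-dist.iso.sha256.asc linux-dist.iso.sha256
# Then confirm the image matches the signed checksum:
sha256sum -c linux-dist.iso.sha256                 # prints "linux-dist.iso: OK"
```

Without the signature check, an attacker who can tamper with the download site can simply replace the checksum file along with the image.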

Once installed, you'll need to turn off all extraneous services and harden the operating system in other ways (see Resources). You can do it by hand, or use a Linux hardening tool (for example, the Bastille hardening scripts). Then, you've got to develop your firewall rules: what kind of packets should be filtered--both inbound and outbound--what applications are permitted, and so on.
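On a Red Hat-style system of that era, the service cleanup might be sketched like this; the service names chosen for removal are examples, not a complete hardening checklist:

```shell
# List everything scheduled to start at boot, then disable what a
# dedicated firewall box does not need. Service names are illustrative.
chkconfig --list | grep ':on'
for svc in portmap nfs lpd sendmail telnet; do
    chkconfig "$svc" off 2>/dev/null || true
done
# Also comment out unused entries in /etc/inetd.conf (or remove inetd
# entirely on a dedicated firewall), then stop the running services.
```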

Building and configuring the box, of course, is only the first step. Once the firewall is in operation, you must constantly monitor logs for suspicious activities, watch out for security alerts and install security patches as soon as they are available.

Some of these tasks (setting firewall rules and staying on top of security alerts, for example) are necessary with any firewall. But if you roll your own, you don't have the option of outsourcing any of them to a commercial firewall vendor.


RESOURCES



[Jun 01, 2017] CVE-2017-1000367 Bug in sudos get_process_ttyname. Most linux distributions are affected

Jun 01, 2017 | www.cyberciti.biz

There is a serious vulnerability in the sudo command (CVE-2017-1000367). It affects SELinux-enabled systems such as CentOS/RHEL as well as others. A local user with privileges to execute commands via sudo could use this flaw to escalate their privileges to root. Patch your system as soon as possible.

It was discovered that Sudo did not properly parse the contents of /proc/[pid]/stat when attempting to determine its controlling tty. A local attacker in some configurations could possibly use this to overwrite any file on the filesystem, bypassing intended permissions, or to gain a root shell.
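A quick way to check exposure is to compare the installed sudo against the fixed release (1.8.20p1, per the upstream advisory). The helper below is a sketch that uses sort -V for the version comparison:

```shell
# at_least VER FIXED: succeeds when VER sorts at or above FIXED.
at_least() {
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

installed=$(sudo -V 2>/dev/null | awk '/^Sudo version/ {print $3; exit}')
if [ -z "$installed" ]; then
    echo "sudo not installed"
elif at_least "$installed" 1.8.20p1; then
    echo "sudo $installed: at or above the fixed release"
else
    echo "sudo $installed: likely vulnerable to CVE-2017-1000367; update"
fi
```

On distribution-patched builds the version string alone can be misleading (vendors backport fixes), so treat this as a first pass and trust your distribution's advisory over the raw number.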

... ... ...

A list of affected Linux distributions:
  1. Red Hat Enterprise Linux 6 (sudo)
  2. Red Hat Enterprise Linux 7 (sudo)
  3. Red Hat Enterprise Linux Server (v. 5 ELS) (sudo)
  4. Oracle Enterprise Linux 6
  5. Oracle Enterprise Linux 7
  6. Oracle Enterprise Linux Server 5
  7. CentOS Linux 6 (sudo)
  8. CentOS Linux 7 (sudo)
  9. Debian wheezy
  10. Debian jessie
  11. Debian stretch
  12. Debian sid
  13. Ubuntu 17.04
  14. Ubuntu 16.10
  15. Ubuntu 16.04 LTS
  16. Ubuntu 14.04 LTS
  17. SUSE Linux Enterprise Software Development Kit 12-SP2
  18. SUSE Linux Enterprise Server for Raspberry Pi 12-SP2
  19. SUSE Linux Enterprise Server 12-SP2
  20. SUSE Linux Enterprise Desktop 12-SP2
  21. OpenSuse, Slackware, and Gentoo Linux

Recommended Links


Sites

T.REX Open Source Firewall

Ken Brown's whitepaper on open source

BW Online (December 11, 2001): Is Open-Source Security Software Safe? See also the discussion at SecurityFocus: Is Open-Source Security Software Safe?

Will the average bank care if the hacking underground can examine the basic source code of the security software protecting its networks? That's what information-security company Guardent is about to find out.

On Dec. 11, the Waltham (Mass.)-based company rolled out a hardware security appliance that relies solely on open-source programs to protect customers. Guardent will use these appliances, priced at $1,500 a pop, to monitor and guard corporate networks. That's a fraction of the cost of most integrated security appliances.

One small step for Guardent, one giant leap for open-source security. Corporations are loath to take a chance on a piece of security software they don't completely trust. But Guardent doesn't seem to be worried. Open-source proponents have long argued that their software is more secure due to the exposure of the raw code to thousands of eyeballs, and the ability of anyone using the software to incorporate code changes to quickly patch vulnerabilities. What's more, Guardent will emphasize top-quality service first, good software second. "The thing that has the value is the service, rather than the software itself," says Guardent co-founder Daniel R. McCall.

Musings on open source security models

Common sense would seem to indicate open source software is insecure because, for many, secure means hidden, secret. Recent discussions on several security-related mailing lists have revolved around keeping the names of servers secret, as if hiding information makes networks secure.

Others see open source as the means to secure operating systems. Some of the more secure systems available are based on the open source model. Many don't trust closed proprietary systems that can't be examined and verified for secure coding. To them, the widely held belief that open systems are somehow inherently insecure is a myth.

In cryptography circles, they have a saying: The security of an algorithm should not depend on its secrecy. Now, this maxim is equally applicable to security software in general. Algorithms can be reverse-engineered. Protocols can be cracked through analysis. That which is hidden and secret will eventually be revealed. A secret, once lost, is gone forever and cannot be regained. Security through secrecy is largely a myth.

The argument then goes on, from the closed source camp, that secrecy or obscurity, when applied to an otherwise secure system, improves the security. It slows up intruders and, even when the secrecy is broken, the security remains as it would have been with open source.

So, all other things being equal, a secure system that isn't open source should be more secure than a secure system that is open source. Sounds reasonable.

Is it reasonable, though? Can all other things be equal? Are there ways in which secrecy and closed source code can actually compromise security?

The nature of common sense
There are many opinions and definitions as to what comprises "common sense." By one definition, common sense is the "future application of past experience." This definition allows for the possibility that what some refer to as common sense may, in fact, be wrong.

People unfamiliar with the open source model are accustomed to keeping their source secret. When their source does become public, it's almost always related to a security breach or the threat of a security breach.

Revelation of code developed and maintained in secret also often results in the discovery of previously undetected flaws and security holes. It's no wonder, therefore, that those accustomed to the closed source model of development view open source as insecure. Their past experience with security breaches colors their conviction that security goes down the tubes when source code becomes public.

It's in their "future application" of that "past experience" that common sense fails. Their past experience really no longer applies, since the conditions have changed.

The nature of secure software
Secure systems shouldn't depend on the secrecy of the source. What is it, then, that makes a system secure?

I offer this as a guideline to security: Secure systems require quality software utilizing secure coding techniques, implemented and installed in a manner consistent with security guidelines and policy.

Myths regarding security and open source software
The following are some of the myths that contribute to the belief that open source is insecure:

Myth 1. There is no source control in open source software.

Those of us who develop open source software tend to howl with laughter over this one, but it's a criticism I hear quite a lot. Some folks actually believe open source software is developed with total disregard for tracking, accountability, or control. Nothing could be farther from the truth. With large and diverse development teams being the norm in the open source world, source control is a necessity, not an option.

Recently, a researcher at a national laboratory made such a remark to me. My reaction was to wait until I saw him again, weeks later, then innocently regale him with a number of stories and incidents arising out of "source control incidents" in open source. Many of these were taken from CVS (Concurrent Versions Systems) log announcements. He was utterly amazed at the level of source control in use and hasn't remarked on the lack of source control since.

Myth 2. No one really looks at the source.

The open source camp proclaims the source is available for anyone to examine. The closed source camp counters that people either don't have the time or the skill or won't invest the effort to examine it. Since the source isn't examined by everyone or examined by them personally, the argument goes, it's as good as if it were examined by nobody, and any errors in the code are unlikely to be made public.

After the release of PGP 2.6, someone examining the code noticed an error in a random-number generator. The mistake was very minor: a statement that should have been an XOR-assignment operation (^=) was, in fact, a plain assignment (=). The result was that the random number seed was somewhat less random than expected. This didn't seriously compromise the security of PGP, but it did reduce the strength of the random keys.

This error was quickly corrected, and the incident does illustrate some important points. It shows that the code is examined by others and that coding errors (intentional or unintentional) do get spotted. Due to the minor, obscure nature of this buglette, it also indicates that the probability of a more serious bug or backdoor going undetected is rather low.
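To see why the slip mattered, here is a toy sketch (in shell arithmetic, not PGP's actual C code) of mixing bytes into an entropy pool with XOR-assignment versus plain assignment:

```shell
# Correct mixing: every byte perturbs the accumulated pool (a ^= b in C).
pool=0
for byte in 23 105 7 200; do
    pool=$(( pool ^ byte ))
done
echo "xor-mixed pool: $pool"           # 177: depends on every input byte

# The buggy form (a = b in C): earlier bytes are silently discarded.
bad_pool=0
for byte in 23 105 7 200; do
    bad_pool=$byte
done
echo "assignment-only pool: $bad_pool" # 200: only the last byte survives
```

With plain assignment, the pool's effective entropy collapses to that of the final input alone, which is why the seed came out "somewhat less random than expected."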

Myth 3. Anyone could put a backdoor or trapdoor in open source software.

The simplest response to this is: How? Open source uses source control, it uses code examination and analysis by others, and it puts the personal reputations of the authors on the line. Who would personally risk his or her reputation by putting a backdoor in source that is openly available in public forums?

This can be contrasted with closed source programs, which have an amazing array of "Easter eggs," those cute little surprises programmers leave in their code. What does the existence of such surprises say about the state of code review and source control in closed source circles?

This raises the question: Are there backdoors and serious surprises we can't see? Easter eggs are cute and backdoors aren't. Outside of that, there isn't much difference between them. Placing such secret surprises in open source would certainly seem much more difficult to do, if not a paradox in itself.

Myth 4. Hackers are going to find all the security holes in open source software.

Well, this really isn't a myth. The real myth is that they won't find the security holes in closed source software. One only has to look at the security warnings and advisories attached to any of the closed source systems to realize this. It has become almost a joke in the industry that hackers (good and bad) probably have better debugging, analysis, and reverse-engineering tools than developers have. Unfortunately, this joke often ends up with a decidedly unfunny punch line for network administrators.

The closed source camp likes to point out every open source security advisory as evidence that open source is insecure. In doing so, they conveniently ignore the counter examples in their own advisories. They also conveniently overlook the fact that open source problems are often found and fixed before they're widely exploited, while some closed source problems go unaddressed for months, or longer.

Recently, Alan Cox announced a Solaris security hole on the Bugtraq mailing list, after waiting over a year for Sun to fix the problem. Sun's response: that Alan had failed to notify the "correct people."

In contrast, the "Ping 'O Death" bug was fixed in Linux only a few hours after it was announced. The same bug remained unsolved in some closed source systems for weeks or months. The author of the "teardrop" exploit only released its source after seeing David Miller commit the fix to the Linux source tree.

King & Spalding Law

Open Source Code And Information Security: A Legal Perspective

Reprinted from E-Commerce Law Journal, Volume 1, Number 7, pp. 20-21 (July, 2001) and Volume 1, Number 8, pp. 17-19 (August, 2001).

By: Brad Slutsky

There are many reasons why companies need to protect their information assets. These reasons include maintaining competitive advantages, ensuring the integrity, authenticity, and availability of information, adhering to confidentiality commitments or other legal requirements, complying with duties to shareholders, and complying with duties to third parties. Typically, companies' obligations to secure their information assets are measured against a standard of "reasonable care". This article addresses the question of whether companies exercise "reasonable care" when they use open source code software to implement business functions.

What Is "Open Source" Software?

Open source software typically is viewed as software that meets the following criteria: free redistribution (licensees may resell or give away the software without paying royalties to the licensor); source code availability (the source code must accompany the software or be available on request); permission for derived works (licensees must be permitted to modify the licensed program); notice and attribution (licensors may require licensees to identify derived works as different from the licensor's original work); nondiscrimination (licensors may not discriminate against persons or fields of endeavor in licensing the software); license distribution (open source licenses apply to all users to whom the software is redistributed, without the need for executing additional licenses); and the absence of tying (licensors may not require that open source software be distributed with other software, nor may licensors place restrictions on other software distributed with open source software). See http://www.opensource.org/docs/definition_plain.html. One example of a widely recognized open source software system is the Linux operating system.

Open Source And Trade Secrets

Trade secret laws require owners of trade secrets to take reasonable measures to keep their trade secret information secret. See, e.g., 18 U.S.C. § 1839(3)(A). Indeed, to qualify as a trade secret, information typically must "derive[] economic value ... from not being generally known to, and not being readily ascertainable by proper means by, other persons who can obtain economic value from its disclosure or use." See, e.g., O.C.G.A. § 10-1-761(4)(A). The courts have held that software -- even commercially distributed software -- can contain trade secrets if the trade secrets are properly protected from disclosure through license agreements that require confidentiality and that prohibit disassembly or reverse engineering. See, e.g., CMAX/CLEVELAND, INC. d/b/a Computermax v. UCR, INC., 804 F.Supp. 337, 357-358 (M.D. GA 1992). If software contains trade secrets, freely disclosing the source code and permitting others to redistribute the software and source code is antithetical to the steps necessary to maintain a trade secret. Thus, not only can the use of open source software threaten to destroy a company's trade secrets, but, due to the operation of copyright laws and the licensing schemes imposed on open source software, third parties may have rights in software that a company derives from open source software. See Potter, Opening Up To Open Source, 6 Rich. J. L. & Tech. 24, 60 (2000). Most companies are interested in recouping the costs associated with developing information assets and making a profit from those assets. The open source licensing scheme makes that a difficult proposition. See Hill, Fragmenting The Copyleft Movement: The Public Will Not Prevail, 1999 Utah L. Rev. 797.

Open Source And Duties To Third Parties

There is a school of thought in the information technology industry according to which exposing source code to the general public helps identify vulnerabilities in software and, in the long run, makes software more secure. One frequently-cited authority for this proposition is Eric Raymond. In a paper entitled The Cathedral And The Bazaar (available on the Internet at http://www.tuxedo.org/~esr/writings/cathedral-bazaar/cathedral-bazaar/), Mr. Raymond argues that the open source method of software development has many advantages over typical commercial/"closed source" development methodologies. Among these advantages is "Linus' Law" that "[g]iven enough eyeballs, all bugs are shallow" -- i.e., if enough people examine open source code, problems are likely to be identified and fixed.

As Linus Torvalds (the creator of the Linux operating system) has indicated, however:

People think just because it is open-source, the result is going to be automatically better. Not true. You have to lead it in the right directions to succeed. Open source is not the answer to world hunger.

Bezroukov, A Second Look At The Cathedral And The Bazaar, First Monday, Volume 4, Number 12, pg. 11 (December, 1999). Thus, there are a number of conditions under which, even if "Linus' Law" is correct, open source software still may not be good for security and may not satisfy a company's duties to third parties to take reasonable care to protect information.

"Bad Guys" May Examine Your Open Source Code

"Open-sourcing your software makes it more likely that any security problems in it will be found, but whether the problems will be found by good guys or bad guys is another matter." Viega, Open source software: Will it make me secure, http://www-106.ibm.com/developerworks/library/oss-security.html (September, 1999). Indeed, there is a sense in which providing the world with one's source code is like providing a bank robber with the technical specifications of the bank's vault as well as its alarm system. While it is possible that security experts around the world will provide input with regard to better securing the bank, there is no guarantee that the experts will close any security holes before the robber can exploit them. This is one reason why some companies have tended to shy away from open source code for applications where security is important. See, e.g., QNX Opens Platform to e-device Builders, Canada Newswire (April 24, 2000) ("While e-device builders see the productivity benefits of open source, most have serious concerns about using open-source OS code in their products, citing threats to security, reliability, and potential loss of intellectual property due to GPL licensing."); Linux Watch: Red Hat Makes Second Post-IPO Buy, Client Server News (January 10, 2000) (noting that Red Hat, a Linux distributor, bought e-commerce payment processing software and that "[f]or security reasons the heart of the stuff is not open source, which is the way Red Hat intends to keep it."); Berinato, Novell moves cautiously toward opening code base, PC Week, pg. 1 (October 25, 1999) ("A key reason behind Novell's hesitancy in committing to a sweeping open-source initiative is concern over protecting the security infrastructure in the core NetWare and NDS products").

In Some Cases, There May Be More Qualified "Eyeballs" Reviewing Closed Source Code Than Open Source Code

"Any situation where many talented developers actually work on the same segment of code is more of an exception than a rule. With the increasing complexity of a given project, this pooling of talent will occur very rarely and only for the most critical or politically important bugs. For any sufficiently complex project there will never be sufficient 'eyeballs' to locate and eradicate all bugs." Bezroukov, A Second Look At The Cathedral And The Bazaar, First Monday, Volume 4, Number 12, pg. 6 (December, 1999). Given these software development dynamics, open source software development may fail to eradicate security vulnerabilities in "low profile" code. By way of contrast, many commercial companies can and do hire programmers and independent laboratories to examine and test their entire programs -- not just their "high profile" code.

Delays In Installing Security Patches Result In Increased Vulnerability

"[E]ven when good guys find a problem, bad guys can still exploit the problem for a long time to come, since informing users and getting them to upgrade is a much slower process than the distribution of exploits in the hacker community." Viega, Open source software: Will it make me secure, http://www-106.ibm.com/developerworks/library/oss-security.html (September, 1999). For large companies, the planning and implementation of system-wide security patches can be a major undertaking that, as a practical matter, will not be undertaken frequently. One of the theories behind open source software development is that "[e]arly and frequent releases" are critical to identify bugs. The Cathedral And The Bazaar (available on the Internet at http://www.tuxedo.org/~esr/writings/cathedral-bazaar/cathedral-bazaar/). This feature of open source software tends to increase vulnerability as users' efforts to close vulnerabilities lag behind hackers' efforts to exploit them.

The "Long Run" May Never Arrive

While in theory public examination of source code may reduce security issues in the "long run", the "long run" may never arrive. Frequently security patches cannot be tested with the same degree of thoroughness with which the original base code was tested. Further, patches can and frequently do introduce additional program errors. Bezroukov, A Second Look At The Cathedral And The Bazaar, First Monday, Volume 4, Number 12, pg. 6 (December, 1999). With frequent releases and patches for open source software, users may constantly lag behind hackers in fixing security vulnerabilities. In short, the "long run" may never arrive.

It Is More Difficult To Conceal "Tripwires" In Open Source Code

One technique for securing software is to insert code that will detect and perhaps remedy unauthorized system access. If the "bad guys" have access to the source code and can detect where such "tripwires" have been placed, they will know how to avoid them and their actions may be harder to detect.

Software That Started As Closed Source May Not Be Suitable For Release As Open Source

There are at least two barriers that can prevent closed source software from achieving the potential benefits of the "many eyeballs" effect of open source software. First, after closed source software reaches a certain stage in its development, it may be so complex that it will be difficult to attract sufficient qualified programmers to review, comprehend, and improve the code. Second, closed source software may rely on the secrecy of certain procedures to secure certain processes. This so-called "security through obscurity" may be rendered ineffective if the source code becomes public, and may need to be redesigned. As one author has noted, "Netscape's level of complexity, introduced by opening a pre-existing project rather late in its development cycle, is probably the main reason for difficulties in attracting new developers. ... [I]t is clear that complexity of Netscape's code represents a formidable barrier of entry that is not easily overcome by even highly motivated and qualified developers. ... After the project reaches a certain level of maturity, it essentially closes itself due to the 'binarization' of code." Bezroukov, A Second Look At The Cathedral And The Bazaar, First Monday, Volume 4, Number 12, pg. 14 (December, 1999). Similarly, the release of the "Quake" game source code in late 1999 and the programs that were then developed to "cheat" (by exploiting source code vulnerabilities) drew commentary from Eric Raymond, among others, observing that a closed source program might be the remedy to counteract such cheating. See http://slashdot.org/articles/99/12/27/1127253.shtml. While Mr. Raymond's suggested solution is to develop mission critical software with the effects of open source code in mind, the fact remains that software originally designed for a closed source environment may not fare well if subsequently released in open source format.

Who Will Be Responsible For Security Breaches?

Various licenses have been approved for licensing open source software. Perhaps the most popular and most widely recognized open source license is the GNU General Public License (the "GPL"). Under this license, there typically will be no warranties from the developer(s). If GPL software is designed with "back doors" or other malicious code that allows unauthorized access to information systems, who will be responsible? As one author has noted, "[i]f nothing else, our tests showed us that the security of open-source systems is not a cut-and-dried issue and that having someone to yell at usually is a good thing." Chowdhry, Open source meets the 'Baywatch' factor, PC Week, pg. 83 (October 18, 1999). While commercial licenses also typically limit their warranties, the warranties they do provide are generally more extensive than those offered with open source software. In addition, for liability that cannot be disclaimed (such as for intentional misconduct), there are advantages to having a large commercial company to "yell at", rather than a dispersed collection of individual developers whose collective work may be the cause of a problem.

Conclusion

There are a number of drawbacks associated with the use of open source software in a commercial environment. If a company's trade secrets are embedded in software, the use of open source software may not be a good choice. In other situations, where it may be necessary for companies to exercise "reasonable care" to protect their information assets, there are a number of potential drawbacks to the use of open source software. In order to determine whether the use of open source software will have an overall positive or negative effect on security in any given situation, one would need to assess whether, in that particular situation, there will be more qualified eyeballs reviewing closed source code or open source code, whether there will be more "good guys" or "bad guys" reviewing the source code, how often security patches will be necessary and how frequently they will be installed, what the consequences of system intrusions may be during the "short run" (while the bugs are being worked out of open source software), etc. In the absence of answers to these questions, companies may risk being "second guessed" by experts and jurors with respect to whether the use of open source code in a company's information systems constitutes "reasonable care" to protect information assets.

Information Security Magazine: Open-Source Security - Open Source Under The Hood(Mar 25, 2001)
SFGate: The Spy Who Hacked Me: Will Open Source Be The Hero Of International Security?(Mar 15, 2001)
VNU Net: US security agency (NSA) eyes open source(Feb 02, 2001)
LinuxWorld: Open source closes backdoors - Security through code obscurity provides false confidence(Nov 12, 2000)
SecurityFocus.com: Falling Apart at the Seams [Security and Open Source](Sep 05, 2000)
Open Source IT: The Myth of Open Source Security(May 26, 2000)
Security Portal: Open Source - Why it's Good for Security(Apr 18, 2000)
SecurityFocus.com: Wide Open Source - Is Open Source really more secure than closed?(Apr 17, 2000)

[Mar 31, 2001] developerWorks Linux The security implications of open source software

Theory vs. practice
Still, there are skeptics. "Simply being open source is no guarantee of security," says Craig Willis, computer consultant and a state government systems analyst. "Just as the good guys are looking for vulnerabilities, the same thing goes for the bad guys. 'Security through obscurity' may not be something you should depend on, but it can work to your advantage if the attacker can find an easier target."

Lee Badger, principal computer scientist at Network Associates, agrees, noting that the many-eyes theory "assumes people are motivated to examine even the mundane code -- and I'm not sure that's the case."

Even a few open source advocates admit the dilemma. The author of the GNU mailing list manager Mailman, John Viega, revealed that for three years Mailman had a handful of glaring security code problems, yet no one caught or reported the errors. The version of Mailman that contains these security holes was downloaded thousands of times and even included in Red Hat Professional Linux version 6.2 until mid-2000. Apparently, says Viega, everyone using Mailman assumed that someone else had done the proper security auditing, when in fact, no one had.

Says Viega, "The benefits open source provides in terms of security are vastly overrated, because there isn't as much high-quality auditing as people believe, and because many security problems are much more difficult to find than people realize."

Viega points out that even after they were identified, the security problems in Mailman took many months to fix, partly because it was written in Python (rather than the more popular C) and partly because security was not the core development team's immediate concern.

Viega lists several things that can discourage people from reviewing source code, including code that looks "like a tangled mess," or programs written in a lesser-used language. Another deterrent is human nature, as most programmers look at source code for their own benefit, not for altruistic motivations.

One issue some detractors cite is that open source code is only as good as the skill of those who review it. "Many (developers) don't understand enough to avoid problems beyond the handful of dangerous calls they know," says Viega. According to his article on open source myths (see Resources later in this article), it is common for developers to use cryptography, but misapply it in ways that destroy the security of the system, or to use encryption that is too weak and can easily be broken. Another mistake is for developers to try to hand roll their own protocols using common cryptographic primitives, not fully understanding that cryptographic protocols are generally more complex than expected and easy to get wrong.
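Viega's point about hand-rolled protocols can be made concrete. The sketch below is purely hypothetical code (not from Mailman or any real project); it shows the classic keystream-reuse mistake that a casual reviewer easily misses: XOR-"encrypting" two messages with the same key lets an eavesdropper cancel the key out entirely.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

key = (b"secretkey" * 2)[:14]  # a naive fixed "keystream", reused below

c1 = xor_bytes(b"attack at dawn", key)
c2 = xor_bytes(b"defend at dusk", key)

# The eavesdropper never learns the key, but XORing the two ciphertexts
# cancels it, leaving plaintext1 XOR plaintext2 -- enough to recover both
# messages with standard crib-dragging.
leak = xor_bytes(c1, c2)
assert leak == xor_bytes(b"attack at dawn", b"defend at dusk")
```

The code looks superficially reasonable, which is exactly why "many eyeballs" alone does not reliably catch cryptographic misuse.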

Theo de Raadt, project leader for the open source operating system OpenBSD, is another who doubts the safety net of peer review. "These open source eyes that people are talking about -- who are they?" de Raadt asks. "Most reviewers of open source code are amateurs. Most of them, if you asked them to send you some code they had written, the most they could do is 300 lines long," he says. "They're not programmers."

Another complaint with open source software is that most bugs are found after the program has been compiled, tested, and distributed -- not because someone sat down in advance and looked at the code for holes, but because something goes wrong in its use. With a few exceptions, open source programs do generally rely on user reports and public forums to find vulnerabilities.

Resources

Linux Today - ABC News Linux Sux Redux; The Open-Source Platform Is Open to a Slew of Vulnerabilities

But now comes news from BugTraq that gives the lie to the widely held belief that Linux is any less vulnerable than its competitors. Linux's known weaknesses turn out to be proliferating faster than its market share. BugTraq publishes "Vulnerability Database Statistics" (a list of bugs, essentially, that are discovered each year in various software products) that demonstrate rather dramatically how determined Linux is to join the Big Leagues -- if not necessarily in market share, then in what might be called "vulnerability share."

SRO: And The Loser Is ... [Bugtraq Record on MS Security](May 16, 2000)
SecurityFocus.com: Wide Open Source - Is Open Source really more secure than closed?(Apr 17, 2000)
dot-lies.com Counters Microsoft's dot-truth.com FUD(Mar 10, 2000)
Eric Lee Green: FUD101 At One Year Old(Dec 02, 1999)
Linux Today Counter-FUD Work Reveals 'World Domination' in Progress(Dec 02, 1999)
ABC News: Charge of the Linux Brigade(Nov 23, 1998)

PC Week: Experts debate merits of open source for security (Mar 24, 2000)
Security Portal: An overview of OS security features - part I (Mar 22, 2000)
Security Portal: DDOS attacks' ultimate lesson: Secure that infrastructure (Mar 20, 2000)
Silicon.com: Linux is a security risk, experts claim (Mar 20, 2000)
Network Computing: Best Practices in Network Security (Mar 18, 2000)
TechWeb: Task Force: Internet Security Holes Rampant (Mar 16, 2000)
SecurityFocus.com: The Coming Linux Plague (Mar 13, 2000)
LinuxSecurity.com: Intrusion Detection Primer (Mar 13, 2000)
PC Week: PentaSafe aims to plug OS security holes (Mar 09, 2000)
SJ Mercury/Reuters: Software industry blasted for security lapses (Mar 09, 2000)
SRO: Microsoft's Not The Only Security Foul-Up (Mar 09, 2000)
Security Portal: UNIX (and Linux especially) viruses - the real story (Mar 08, 2000)
Network Magazine: Building a Robust Linux Security Solution (Mar 07, 2000)
security focus: Security Whitepaper: Seeds may already be sown for worse attacks (Mar 01, 2000)

anon - Subject: Moody is absolutely correct! ( Aug 2, 2000, 17:11:50 )
Face it folks, the prime platforms for attacks by script kiddies are linux boxes, and that is only because there are more linux boxes in the hands of home users who run unnecessary networking services than other unixes. Since any linux box can be a server (most Windows boxes can't) any linux box makes a juicy target for those seeking to host illicit IRC channels or ddos attack bots, or simply to use as intermediate nodes to obfuscate the cracker's identity while sniffing more lucrative commercial unix servers and networks.

Frankly, unix security sucks. Its real purpose is to provide job security for unix sysadmins, who can *sometimes* secure unix networks after they have learned the ropes and arcane, non-documented secrets passed on in the boys club that is uniquely unix like nothing else.

Specifically, most linux distros start far too many networking services by default and provide the user with no useful or sensible documentation on how to reduce security risks. Since crackers can't log onto most windows boxes remotely, they attack Windows by playing on the vulnerabilities of vb scripting, js, and active x, but ONLY if the user does stupid things or allows these scripting hosts to run while web browsing in unknown territory or reading email.

I think that linux needs to re-examine the unix security model. Not only does it suck, but it stinks. It can be improved, and even simplified. Alternate methods of establishing and verifying users should be considered. Unix has failed miserably in providing the needed level of security even with highly paid sysadmins on board in major corporations. I'm not saying that any other system has better security, on balance, but the main impediment to creative development of better security for unix is the old boy network of grossly overpaid and mostly incompetent sysadmins who trade on arcane knowledge passed on to insiders rather than real skills, as are required by those who develop rather than just administer. Let the developers establish a security model that can be easily handled by home users running simple local networks or none at all, and is flexible enough for use with large networks as well. Get the sysadmins out of the business of establishing security with arcane scripts.


Ganesh Prasad - Subject: Feedback to ABC News ( Aug 3, 2000, 00:25:55 )
I sent this feedback to ABC News http://www.abcnews.go.com/service/Help/abc_contactus.html
regarding Fred Moody's column "Linux Sux Redux" http://www.abcnews.go.com/sections/tech/FredMoody/moody.html

Dear Sirs,

I have a lot of respect for ABC News, but it should concern you that your credibility is being undermined by journalists who write biased pieces under your banner.

The journalist I refer to is Fred Moody, who has recently written a piece called Linux Sux Redux. I eagerly read the article to understand the security vulnerabilities that Moody claims are far more serious than in comparable operating systems. Being a Linux user for a few years now, I needed to know those vulnerabilities. To my disappointment, I found that the statistics quoted by Moody suffer from double-counting. He has aggregated bug statistics of various Linux distributions, counting many bugs twice, and making it look like Linux is less secure than Windows NT, rather than the other way around.

I was surprised at why he did this, until I read on LinuxToday that he is a former Microsoft employee.

I think in the interests of maintaining journalistic integrity as well as the normally high level of credibility associated with ABC News, you should prominently disclose all items such as Moody's past employment that could potentially be conflicts of interest. You should also have articles vetted by independent persons to ensure that blatantly false statistics do not get published. It's very embarrassing to the publication and the journalist when they are discovered and pointed out.

Thanking you,

Yours sincerely,

Ganesh Prasad

Linux Today - SecurityFocus.com Wide Open Source - Is Open Source really more secure than closed

Is Open Source really more secure than closed? Elias Levy says there's a little security in obscurity.
One of the great rallying cries from the Open Source community is the assertion that Open Source Software (OSS) is, by its very nature, less likely to contain security vulnerabilities, including back doors, than closed source software. The reality is far more complex and nuanced.

Advocates derive their dogmatic faith in the implicit security of Open Source code from the concept of "peer review," a cornerstone of the scientific process in which published papers and theories are scrutinized by experts other than the authors. The more peers that review the work, the less likely it is that it will contain errors, and the more likely it is to become accepted.

Open Source apostles believe that releasing the source code for a piece of software subjects it to the same kind of peer review as a quantum physics theory published in a scientific journal. Other programmers, the theory goes, will review the code for security vulnerabilities, reveal and fix them, and thus the number of new vulnerabilities introduced and discovered in the software will decrease over time when compared to similar closed source software.

It's a nice theory, and in the ideal Open Source world, it would even be true. But in the real world, there are a variety of factors that affect how secure Open Source Software really is.

Sure, the source code is available. But is anyone reading it?

If Open Source were the panacea some think it is, then every security hole described, fixed and announced to the public would come from people analyzing the source code for security vulnerabilities, such as the folks at OpenBSD, the Linux Auditing Project, or the developers or users of the application.

But there have been plenty of security vulnerabilities in Open Source Software that were discovered, not by peer review, but by black hats. Some security holes aren't discovered by the good guys until an attacker's tools are found on a compromised site, network traffic captured during an intrusion turns up signs of the exploit, or knowledge of the bug finally bubbles up from the underground.

Why is this? When the security company Trusted Information Systems (TIS) began making the source code of their Gauntlet firewall available to their customers many years ago, they believed that their clients would check for themselves how secure the product was. What they found instead was that very few people outside of TIS ever sent in feedback, bug reports or vulnerabilities. Nobody, it seems, is reading the source.

The fact is, most open source users run the software, but don't personally read the code. They just assume that someone else will do the auditing for them, and too often, it's the bad guys.

Even if people are reviewing the code, that doesn't mean they're qualified to do so.

In the scientific world, peer review works because the people doing the reviewing possess technical caliber and authority on the subject matter comparable to, or greater than, the author's.

It is generally true that the more people reviewing a piece of code, the less likely it is the code will have a security flaw. But a single well-trained reviewer who understands security and what the code is trying to accomplish will be more effective than a hundred people who just recently learned how to program.

It is easy to hide vulnerabilities in complex, little understood and undocumented source code.

Old versions of the Sendmail mail transport agent implemented a DEBUG SMTP command that allowed the connecting user to specify a set of commands instead of an email address to receive the message. This was one of the vulnerabilities exploited by the notorious Morris Internet worm.

Sendmail is one of the oldest examples of open source software, yet this vulnerability, and many others, lay unfixed for a long time. For years Sendmail was plagued by security problems, because this monolithic program was very large, complicated, and understood by only a few.

Vulnerabilities can be a lot more subtle than the Sendmail DEBUG command. How many people really understand the ins and outs of a kernel based NFS server? Are we sure it's not leaking file handles in some instances? Ssh 1.2.27 is over seventy-one thousand lines of code (client and server). Are we sure a subtle flaw does not weaken its key strength to only 40 bits?

There is no strong guarantee that source code and binaries of an application have any real relationship.

All the benefits of source code peer review are irrelevant if you cannot be certain that a given binary application is the result of the reviewed source code.

Ken Thompson made this very clear during his 1983 Turing Award lecture to the ACM, in which he revealed a shocking, and subtle, software subversion technique that's still illustrative seventeen years later.

Thompson modified the UNIX C compiler to recognize when the login program was being compiled, and to insert a back door in the resulting binary code such that it would allow him to login as any user using a "magic" password.

Anyone reviewing the compiler source code could have found the back door, except that Thompson then modified the compiler so that whenever it compiled itself, it would insert both the code that inserts the login back door, as well as code that modifies the compiler. With this new binary he removed the modifications he had made and recompiled again.

He now had a trojaned compiler and clean source code. Anyone using his compiler to compile either the login program or the compiler would propagate his back doors.

The reason his attack worked is because the compiler has a bootstrapping problem. You need a compiler to compile the compiler. You must obtain a binary copy of the compiler before you can use it to translate the compiler source code into a binary. There was no guarantee that the binary compiler you were using was really related to the source code of the same.
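Thompson's two-stage trick can be sketched in a few lines. The toy "compiler" below is purely illustrative (every name is hypothetical; the real attack targeted the UNIX C compiler and the login program). It simply returns the text it would emit, which is enough to show how the back door survives even when both sources are clean.

```python
# Toy sketch of the "trusting trust" attack: a compiler whose *binary*
# carries a trojan that its *source code* no longer contains.

BACKDOOR = '    if password == "magic": return True  # hidden back door\n'
MARKER = "# trojan payload present\n"

def trojaned_compile(source: str) -> str:
    output = source
    if "def check_password" in source:
        # Compiling the login program: silently insert the back door.
        output = output.replace(
            "def check_password(user, password):\n",
            "def check_password(user, password):\n" + BACKDOOR)
    if "def compile_program" in source:
        # Compiling the compiler itself from clean source: re-insert the
        # trojan, so the new compiler binary stays dirty even though its
        # source code would pass any review.
        output = MARKER + output
    return output

clean_login = ("def check_password(user, password):\n"
               "    return lookup(user) == password\n")
clean_compiler = "def compile_program(src):\n    return src\n"

assert BACKDOOR in trojaned_compile(clean_login)   # login gets the back door
assert MARKER in trojaned_compile(clean_compiler)  # trojan self-propagates
```

Reviewing `clean_login` or `clean_compiler` reveals nothing; the subversion lives only in the compiler you must already trust to build them.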

Most applications do not have this bootstrapping problem. But how many users of open source software compile all of their applications from source?

A great number of open source users install precompiled software distributions such as those from RedHat or Debian from CD-ROMs or FTP sites without thinking twice whether the binary applications have any real relationship to their source code.

While some of the binaries are cryptographically signed to verify the identity of the packager, they make no other guarantees. Until the day comes when a trusted distributor of binary open source software can issue a strong cryptographic guarantee that a particular binary is the result of a given source, any security expectations one may have about the source can't be transferred to the binary.
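The "strong cryptographic guarantee" the author calls for is, in modern terms, a reproducible build: if rebuilding the audited source yields a bit-identical binary, a simple hash comparison ties the distributed binary to that source. A minimal sketch, with placeholder bytes standing in for real artifacts:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Fingerprint an artifact with SHA-256."""
    return hashlib.sha256(data).hexdigest()

# Placeholder bytes standing in for a package fetched from a mirror...
distributed_binary = b"\x7fELF...imaginary package bytes..."
# ...and for the artifact produced by rebuilding the audited source locally.
locally_built_binary = b"\x7fELF...imaginary package bytes..."

# If the build is reproducible, the fingerprints match, and the security
# review of the source transfers to the binary actually being installed.
assert sha256_hex(distributed_binary) == sha256_hex(locally_built_binary)
```

A packager's signature only identifies who built the binary; the hash comparison is what connects the binary to the reviewed source.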

Open Source makes it easy for the bad guys to find vulnerabilities.

Whatever potential Open Source has to make it easy for the good guys to proactively find security vulnerabilities, also goes to the bad guys.

It is true that a black hat can find vulnerabilities in a binary-only application, and that they can attempt to steal the source code to the application from its vendor. But in the same amount of time they can do that, they can audit ten different open source applications for vulnerabilities. A bad guy that can operate a hex editor can probably manage to grep source code for 'strcpy'.
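The "grep for strcpy" pass Levy describes is trivial to automate. Real auditing tools such as ITS4 (mentioned later on this page) are far more sophisticated, but a hypothetical minimal version looks like this:

```python
import re

# Flag calls to C library functions that are classic buffer-overflow risks
# (a minimal illustrative pass; real auditing tools do far more analysis).
RISKY_CALLS = re.compile(r"\b(strcpy|strcat|sprintf|gets)\s*\(")

def flag_risky_lines(c_source: str):
    """Return (line number, line text) for each risky-looking call."""
    return [(n, line.strip())
            for n, line in enumerate(c_source.splitlines(), start=1)
            if RISKY_CALLS.search(line)]

sample = """#include <string.h>
void greet(char *name) {
    char buf[16];
    strcpy(buf, name);   /* no bounds check */
}
"""

hits = flag_risky_lines(sample)
assert hits[0][0] == 4 and "strcpy" in hits[0][1]
```

Tools of this kind only flag candidates; as the wu-ftpd story later on this page shows, deciding whether a flagged call is actually exploitable is the hard part.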

Security through obscurity is not something you should depend on, but it can be an effective deterrent if the attacker can find an easier target.

So does all this mean Open Source Software is no better than closed source software when it comes to security vulnerabilities? No. Open Source Software certainly does have the potential to be more secure than its closed source counterpart.

But make no mistake, simply being open source is no guarantee of security.

Linux Today - Open Source IT The Myth of Open Source Security

"An author of the open source Mailman program explains why open source is not as secure as you might think -- using security holes in his own code as an example."

"Open source software projects can be more secure than closed source projects. However, the very things that can make open source programs secure -- the availability of the source code, and the fact that large numbers of users are available to look for and fix security holes -- can also lull people into a false sense of security."

"Eyes that look do not always see
With people motivated to look at the source code for any number of reasons, it's easy to assume that open source software is likely to have been carefully scrutinized, and that it's secure as a result. Unfortunately, that's not necessarily true. "

Slashdot Can Open Source Be Trusted

More than closed source (Score:3, Insightful)
by hattig (SpinningNucleon FATBAT yahoo.com) on Friday June 23, @07:29AM EDT (#13)
(User Info) http://evil.cones.org.uk/

Open source software can be trusted more than closed source software when it comes to security, for all the reasons that you all know (quicker bugfixes, code open to scrutiny, etc). Closed source software can have hidden APIs, bad implementations and bugs, and the release cycle is slow.

OpenBSD is interesting, as they do audits on software to get rid of the security holes. They can only do this because the source code is available.

Of course, software like Sendmail, various ftpds, POP3 daemons etc, all mess up the security aspect of an OS. The OS can be as secure as it can possible be whilst still being usable and useful, but if the software being run on it is vulnerable, then backdoors into the system will be found. Having the source code available allows the cracker to find better access methods than having to guess and feel their way into a system.

You just have to remember that there will never be perfect security, and plan accordingly.

Amazing - (Score:1, Insightful)
by Chris Worth ([email protected]) on Friday June 23, @07:29AM EDT (#14)
(User Info) http://www.chrisworth.com
I find this amazing - so open source doesn't submit itself well to a closed-doors 'expert review' where the experts are usually appointed through a political process rather than a skills one?

Open source's strength is that it's Darwinian. Yes, a program can start off full of holes, but the whole point is that these holes become evident through the development process, and get plugged.

Hell, even I get this, and I'm not even a developer/techie. (Read "The Microsoft Matrix" to see what I've learned.)

It's all in the people... (Score:5, Insightful)
by Spoing on Friday June 23, @09:00AM EDT (#119)
(User Info)
The following is dry, and opinionated, from the POV of an old-timer VV&T/QT/Tester.

I'm big on specifications, and will argue both sides of a contract when a spec is violated. I've even been in a couple shouting matches over them, fighting for the correct implementation, not supposed "flexibility" though they do need to be bent at times.

Fortunately, the shouting matches are rare and as a Contractor Scum(tm), I never take them personally...only as a bargaining point and to help stiffen the backs of those who are easily swayed. It's a shame when good projects go bad, but that's other people's money!

Good specifications are invaluable in eliminating all sorts of conflicts and allow projects to actually end without different groups wanting to kill each other.

Unfortunately, specifications are by necessity limited in scope. If it's not in the spec, it can't easily be added. If it's in the spec, it can't be modified easily.

On a formal contract, adding in goals like "The system shall be fast" don't work well, so more detail is usually specified; "The system shall retrieve a query on the client stations within 4 seconds at all times".

There's always a few details that slip by, and if the people on the project aren't reasonable the details will cause quite a few social and technical problems.

Even relying on an outside specification is a problem...since APIs/protocols/... are usually vague on some level.

The people who implement it and the environment have a much greater impact on the results; there will be good and bad free software / open source projects...as there are good and bad commercial projects.

From what I've seen, I'll trust open source as much or more in most cases...but I'll test it first.

Specifications are not to be trusted (Score:1)
by Diabolical on Friday June 23, @07:31AM EDT (#18)
(User Info)
If you can actually PROVE that you adhered to the specs that were outlined everything is okay.. but as with a lot of things people cannot be trusted enough to follow the specs. Just look at construction. If a wiring scheme inside a building can be much cheaper because of cheaper wires they will put those in.. if a company can save money by NOT adhering to the specs they sure as hell will... if you create some specs and afterwards your product seems to adhere to them that does not mean that internally everything is correct. At least with Open Source you can check that. Of course he is correct at the point of Linux and most other projects being chaos like. That's inherent to the bazaar model of things.. But strongly run Open Source projects with clearly outlined goals and specifications can be as secure as anything else...

Just my thoughts about the subject

Here is an opinion of John Viega, a Senior Research Associate in the Software Security Group at Reliable Software Technologies, an Adjunct Professor of Computer Science at the Virginia Polytechnic Institute, the author of Mailman, the open source GNU Mailing List Manager, and ITS4, a tool for finding security vulnerabilities in C and C++ code. He has authored over 30 technical publications in the areas of software security and testing, and is responsible for finding several well-publicized security vulnerabilities in major network and e-commerce products, including a recent break in Netscape's security. In his recent paper at Open Source IT, The Myth of Open Source Security, he wrote:

...Even if you get the right kind of people doing the right kinds of things, you may have problems that you never hear about. Security problems are often incredibly subtle, and may span large parts of a source tree. It is not uncommon to have two or three features spread throughout a program, none of which constitutes a security problem alone, but which can be used together to perform a security breach. For example, two buffer overflows recently found in Kerberos version 5 could only be exploited when used in conjunction with each other.

As a result, doing security reviews of source code tends to be complex and boring, since you generally have to look at a lot of code, and understand it pretty well. Even many experts don't like to do these kinds of reviews.

And even the experts can miss things. Consider the case of the popular open source FTP server wu-ftpd. In the past two years, several very subtle buffer overflow problems have been found in the code. Almost all of these problems had been in the code for years, despite the fact that the program had been examined many times by both hackers and security auditors. If any of them had discovered the problems, they didn't announce it publicly. In fact, the wu-ftpd has been used as a case study for vulnerability detection techniques that never identified these problems as definite flaws. One tool was able to identify one of the problems as potentially exploitable, but researchers examined the code thoroughly for a couple of days, and came to the conclusion that there was no way that the problem identified by their tool could actually be exploited. Over a year later, they learned that they were wrong, when an expert audit finally did turn up the problem.

In code with any reasonable complexity, it can be very difficult to find bugs. The wu-ftpd is less than 8000 lines of code long, but it was easy for several bugs to remain hidden in that small space over long periods of time.

To compound the problem, even when people know about security holes, they may not get fixed, at least not right away. Even when identified, the security problems in Mailman took many months to fix, because security was not the core development team's most immediate concern. In fact, the team believes one problem still persists in the code, but only in a configuration that we suspect doesn't get used.

An army in my belly

The single most pernicious problem in computer security today is the buffer overflow. While the availability of source code has clearly reduced the number of buffer overflow problems in open source programs, according to several sources, including CERT, buffer overflows still account for at least a quarter of all security advisories, year after year.

Open source proponents sometimes claim that the "many eyeballs" phenomenon prevents Trojan horses from being introduced in open source software. The speed with which the TCP wrappers Trojan was discovered in early 1999 is sometimes cited as supporting evidence. This too can lull the open source movement into a false sense of security, however, since the TCP wrappers Trojan is not a good example of a truly stealthy Trojan horse: the code was glaringly out of place and obviously put there for malicious purposes only. It was as if the original Trojan horse had been wheeled into Troy with a sign attached that said, "I've got an army in my belly!"
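For contrast, here is a hedged sketch of why that backdoor stood out; this is not the actual Trojan code, and the port constant and function name are illustrative. Legitimate access-control code consults an administrator-edited rule set, whereas the inserted logic compared the client's source port against a hard-coded magic constant, a check with no plausible legitimate purpose:

```c
#include <stdbool.h>

#define MAGIC_PORT 421  /* hard-coded trigger; illustrative value only */

/* Nothing in an access-control library legitimately special-cases a
 * literal client port like this, which is why the inserted code was
 * glaringly out of place and was spotted almost immediately. */
bool is_backdoor_trigger(unsigned short client_port) {
    return client_port == MAGIC_PORT;
}
```

A genuinely stealthy Trojan would instead masquerade as an ordinary-looking bug, indistinguishable from the subtle overflows discussed above.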

...Currently, however, the benefits open source provides in terms of security are vastly overrated, because there isn't as much high-quality auditing as people believe, and because many security problems are much more difficult to find than people realize. Open source programs that appeal to a limited audience are particularly at risk, because fewer eyeballs are looking at the code. But all open source software is vulnerable, and the open source movement can only benefit by paying more attention to security.





Copyright © 1996-2021 by Softpanorama Society. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Copyright of original materials belongs to their respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links, as it develops like a living tree...

You can use PayPal to buy a cup of coffee for the authors of this site.

Disclaimer:

The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the Softpanorama society. We do not warrant the correctness of the information provided or its fitness for any purpose. The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable JavaScript for this site. This site is perfectly usable without JavaScript.

Created: May 16, 1997; Last modified: December 26, 2017