
Mindcraft Fiasco

[Jan 24, 2000] c't 13-99, page 186 - Linux and NT as Web Server -- a more realistic experiment comparing Linux/Apache and NT/IIS [http://www.heise.de/ct/english/99/13/186-1/]

First of all, we must say that additional CPUs are a waste for plain web server operation with static HTML pages. Even with two Fast Ethernet lines there is only a moderate increase of less than twenty percent. It seems that CPU performance is not the decisive factor for these tasks. The graphs indicate that the Primergy server itself didn't have to work at full capacity at all.

Linux's comparatively bad results when tested with two network boards show that Mindcraft's results are quite realistic. NT and IIS are clearly superior to their free competitors if you stick to their rules.

To clarify it once again: these results correspond to a load beyond 1000 requests per second. For comparison: at peak times, the Heise server deals with about 100 requests/s. Also, we are talking about purely static pages which, on top of that, are already present in the system's main memory. Our tests showed that Mindcraft's result can't be transferred to situations with mainly dynamic content - the common case on nearly every sophisticated web site.

In SMP mode, Linux still exhibited clear weaknesses. Kernel developers, too, freely admit that scalability problems still exist in SMP mode if the major part of the load comes through in kernel mode. However, if user mode tasks are involved as well, as is the case with CGI scripts, Linux can benefit from additional processors, too. These SMP problems are currently the target of massive development efforts.

In the web server areas most relevant for practical use, Linux and Apache are already ahead by at least a nose. If the pages don't come directly from the system's main memory, the situation is even reversed in favour of Linux and Apache: here, the Open Source movement's prime products leave their commercial competitors from Redmond way behind.

Also relevant for practical use is Mindcraft's criticism that tuning tips for Linux and Apache are difficult to come by. True enough, professional Linux support structures are still in development. However, Apache and Linux developers have been extremely helpful. While Microsoft needed more than a week to come up with the ListenBackLog hint, our questions regarding Linux and Apache problems were answered with helpful hints and tips within hours.

Emails to the respective mailing lists even resulted in special kernel patches which significantly increased performance. We have, on the other hand, never heard of an NT support contract supplying NT kernels specially designed for customer problems. (ju)

[July 30, 1999] Welcome to the TPC Main Page!

Benchmarks by the Transaction Processing Performance Council (TPC). The TPC is a non-profit corporation founded to define transaction processing and database benchmarks and to disseminate objective, verifiable TPC performance data to the industry.

Q: Who are the members of the TPC?

A: While the majority of TPC members are computer system vendors, the TPC also has several database software vendors. In addition, the TPC membership also includes market research firms, system integrators, and end-user organizations. There are approximately 40 members worldwide.

Q: What are the benefits of being a member?

A: Timely access to detailed competitive data. TPC membership is a who's who of computing. With a TPC membership, you have access to all benchmark test data (full disclosure reports). These reports provide all the performance and detailed pricing information on your competitors' new systems. If you want to stay ahead of your competition, you better know what they're up to.

Marketing leverage. TPC benchmarks create a level playing field where your company can compete with all the major players in the industry. The TPC encourages everyone to publicize TPC results, and a good TPC result can dramatically improve your competitive stance. But it's very difficult to get the best TPC result if you haven't read the latest full disclosure reports from your competitors and are not aware of the latest TPC technical rulings. While most of the TPC's information is available to the public, your company can hardly be an effective competitor standing on the sidelines.

[July 15, 1999] PC Week 8 Web App Servers That Deliver -- a slightly different test by Doculabs that sheds some light on the Mindcraft test results.

The tests were designed and run by industry analysis company Doculabs. Doculabs' @Bench benchmark measures how well application servers can handle the demands of a full-size electronic commerce application (in this case, an online bookstore with a 12.5-million-row back-end database).

To carry out the tests, Doculabs used a mix of Sun and Compaq Computer Corp. servers, along with 120 client PCs, and used Client/Server Solutions Inc.'s Benchmark Factory 97 benchmark control software (available at www.benchmarkfactory.com).

(For more details on the @Bench benchmark and test hardware, see PC Week's April 5 report on the first half of the tests.) Doculabs is also producing an exhaustive report describing its results, which will be available from the Chicago company by the end of this month.

The eight tested products were Apple Computer Inc.'s WebObjects 4.0, Bluestone Software Inc.'s Sapphire/Web 5.1, Haht Software Inc.'s Hahtsite 4.0, Microsoft's Windows NT Enterprise Server 4.0, Progress Software's Apptivity 3.0, Sybase's Sybase EAS (Enterprise Application Server) 3.0, and the Sun/Netscape alliance's NetDynamics 5.0 and Netscape Application Server 2.1 application servers. IBM was also going to participate with its WebSphere application server but later pulled out, saying a new version of WebSphere was in the works.

The tests were done at PC Week's Foster City, Calif., lab in exchange for first access to the results. Doculabs charged each vendor a flat fee of $35,000 to defray server hardware costs; PC Week received no payment and didn't take part in this test in any way except to provide the facility.

...Overall, we were surprised by how fast all the application servers were. On the Sun testbed, speeds ranged from about 400 to about 1,400 pages per second (with Apptivity, Sybase EAS, NetDynamics and Netscape Application Server the top performers), and on the Compaq testbed (which no one else used), Microsoft's performance pushed almost 3,500 pages per second--with 93 percent of these pages dynamically generated.

These performance figures certainly indicate big differences among the products, but even a throughput of 400 pages per second is enough to saturate the Internet connections of most businesses hosting their own e-commerce applications. As a draft of the Doculabs report states, "Clearly, most production environments do not have the infrastructure to support even this 'modest' performance."

The average Web page in @Bench was between 2.5KB and 3KB, so a 400-page-per-second throughput rate requires at least a 7.8M-bps connection (or about six T-1 lines) before the application server will be a bottleneck...
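As a sanity check, here is a minimal sketch of that arithmetic in Python (our construction, not Doculabs'). The 1 KB = 1024 bytes convention, the 2.75 KB midpoint page size, and the nominal 1.544 Mbit/s T-1 rate are assumptions, and protocol overhead is ignored, so the results only approximate the article's rounded figures:

    # Back-of-the-envelope check of the @Bench bandwidth claim (assumptions:
    # 1 KB = 1024 bytes, a T-1 carries 1.544 Mbit/s, no TCP/IP or HTTP overhead).

    T1_BPS = 1.544e6  # nominal T-1 capacity, bits per second

    def required_bps(pages_per_sec, avg_page_kb):
        """Bits per second needed to sustain the given page rate."""
        return pages_per_sec * avg_page_kb * 1024 * 8

    for rate in (400, 1400, 3500):  # slowest, fastest-on-Sun, and Compaq figures
        bps = required_bps(rate, 2.75)  # midpoint of the 2.5-3 KB average page
        print(f"{rate:>5} pages/s -> {bps / 1e6:5.1f} Mbit/s (~{bps / T1_BPS:.0f} T-1 lines)")

At 400 pages/s this comes out to roughly 9 Mbit/s, or about six T-1 lines, in line with the article's estimate.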

All products showed virtually linear scalability, but there were differences in fault tolerance (see table, below). Hahtsite and WebObjects each lost some users' shopping cart information during the network failure tests. In addition, only Hahtsite, NetDynamics, Netscape Application Server and Sapphire/Web were able to bring failed servers automatically back online. Doculabs officials said Microsoft's server could also have done this had the company run its application server on a separate system from the Web server.

[July 3, 1999] What the Tests Prove

"Disraeli was pretty close: actually, there are Lies, Damn lies, Statistics, Benchmarks, and Delivery dates." (from fortune)

The recent tests done at ZDLabs turned up some interesting results. They were of course presented under the banner of "Windows NT is faster than Linux," which was, strictly speaking, true. Now, it doesn't really matter what the testing software was, or what the testing hardware was. I don't really care, for the moment at least, how honest the test was. I expect that it was at least somewhat honest since some Red Hat people were on the scene. I'm interested in how we interpret the results.

Now, there is the face value of the results, that is, that Windows NT is faster than Linux, thus better, and hence that in any given situation, NT is better to use than Linux. There's also the option, of course, of actually looking at what the tests found. What are some of the actual facts that the tests came up with? Here are some important ones that I found (pretty color graphs aside):

Linux looks pretty slow, doesn't it? Who would use it for any real application? Well, let's examine this situation a bit more than just comparatively. First off, let's just look at an approximation of the situation that this represents:

So Linux/Apache should be able to handle your site on a 4 CPU 1 Gig RAM box if you get 159 million hits per day or less. If you get only a measly 113 million hits/day, then a single CPU box with 256 meg of RAM should be able to host your site. Of course, this only works if your access is 100% even which is extremely unrealistic. Let's assume that your busy times get ten times more hits per second than your average hits/second. That means that a single CPU Linux box with 256 meg of RAM should work for you if you get about 11 million hits every day. Heck, let's be more conservative. Let's say that your busy times get 100 times more hits/second than your average hits/second. That means that if you get 1.1 million hits per day or less, that same box will serve your site just fine.
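A quick sketch of that arithmetic follows (a minimal illustration, not part of the original article). The per-second rates of roughly 1,840 hits/s for the 4-CPU box and 1,314 hits/s for the single-CPU box are inferred by dividing the daily totals above by 86,400 seconds; they are not stated in the paragraph itself:

    # Hits-per-day capacity under the crude peak-factor model described above.
    # Benchmark rates are inferred from the daily totals, not quoted directly.

    SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

    def daily_capacity(bench_hits_per_sec, peak_factor=1):
        """Hits/day a box absorbs if peak traffic is peak_factor times the average."""
        return bench_hits_per_sec * SECONDS_PER_DAY / peak_factor

    print(f"{daily_capacity(1840) / 1e6:.1f}M hits/day")       # ~159M: 4-CPU box, even load
    print(f"{daily_capacity(1314) / 1e6:.1f}M hits/day")       # ~113M: 1-CPU box, even load
    print(f"{daily_capacity(1314, 10) / 1e6:.1f}M hits/day")   # ~11M with 10x peaks
    print(f"{daily_capacity(1314, 100) / 1e6:.1f}M hits/day")  # ~1.1M with 100x peaks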

OK, there's that way of looking at it, but it's not really a good way. It's a very coarse approximation of access patterns and what a site needs. Let's try another way of looking at this.

Let's do some simple calculations to see what sort of bandwidth these numbers imply. Bandwidth will be a better and more constant method of determining who these numbers apply to than guessed-at hit ratios.

The ZDNet page said that the files served were of "varying sizes", so we'll have to make some assumptions about the average size of the files being served. Since over 1000 files were served per second in all of the tests, it's pretty safe to work by averages. Some numbers:

Just as a reference, a T1 line is worth approximately 1.5 MBits/sec, these numbers don't include TCP/IP & HTTP overhead, and this document is approximately 12k.

Now, what does this tell us? Well, that if you are serving up 1,314 pages per second where the average page is only 1 kilobyte, you'll be needing 10 T1 lines or the equivalent before the computer becomes the limiting factor. What site on earth is going to be getting a sustained >1000 hits per second for 1 kilobyte files? Certainly not one with any graphics in it. Let's assume that you're running a site with graphics in it and that your average file is 5 kilobytes - not too conservative or too liberal. This means that if you're serving up 1,314 of them a second, you'll need 53 MBits of bandwidth. And there are no peak issues here: you can't peak out beyond your bandwidth.
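Here is the same conversion sketched in Python, using the 1,314 pages/s single-CPU figure and a few assumed average file sizes (again ignoring overhead and taking 1 KB = 1024 bytes; the 12 KB row corresponds to a document about the size of this one):

    # Bandwidth consumed by 1,314 pages/s at assumed average file sizes.
    # For scale: a T1 is ~1.5 Mbit/s and a T3 ~45 Mbit/s (nominal rates).

    PAGES_PER_SEC = 1314

    for kb in (1, 5, 12):  # tiny text page, page with graphics, this document
        mbps = PAGES_PER_SEC * kb * 1024 * 8 / 1e6
        print(f"avg {kb:>2} KB file: {mbps:6.1f} Mbit/s")

The 5 KB row lands at roughly 54 Mbit/s, matching the 53 MBit figure above to within rounding.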

Let's go at it another way, this time starting with our available bandwidth:

note: these numbers don't include TCP/IP or HTTP overhead.

I am assuming that the tests that ZD made were meant to mean something, so I won't entertain the idea that they used an average file size of less than 1K. Given that, it is clear that the numbers that ZD's tests produced are only significant when you have the equivalent bandwidth of over 6 T1 lines. Let's be clear about this: if you have only 5 T1 lines or less, a single CPU Linux box with 256 MB RAM will wait on your internet connection and not be able to serve up to its full potential. Let me reemphasize this: ZD's tests prove that a single CPU Linux box with 256 MB RAM running Apache will run faster than your internet connection! Put another way, if your site runs on 5 T1 lines or less, a single CPU Linux box with 256 MB RAM will more than fulfill your needs with CPU cycles left over.

That was just if the ZD numbers were valid for files of only 1K in size. Let's make the assumption that you either (a) have pages with more than about a screen of text or (b) have black-and-white pictures that bring your average file size to 5K, and that ZD's tests accurately reflect this condition. Given this, ZD's tests would indicate that a single CPU Linux box with only 256 MB RAM running Apache would be constantly waiting on your T3 line. In other words, a single CPU Linux box with 256 MB RAM will serve your needs with room to grow if your site is served by a T3 line or less.
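The reverse calculation that the two preceding paragraphs rely on can be sketched the same way; line rates are the nominal 1.544 Mbit/s (T1) and 44.736 Mbit/s (T3), and real TCP/IP and HTTP overhead would only lower these ceilings:

    # Page rate a given pipe can sustain -- the ceiling the server sits behind
    # no matter how fast its CPU is. Nominal line rates, overhead ignored.

    LINKS_BPS = {
        "1 x T1": 1.544e6,
        "5 x T1": 5 * 1.544e6,
        "1 x T3": 44.736e6,
    }

    def max_pages_per_sec(link_bps, avg_file_kb):
        """How many files of the given average size fit through the link per second."""
        return link_bps / (avg_file_kb * 1024 * 8)

    for name, bps in LINKS_BPS.items():
        print(f"{name}: {max_pages_per_sec(bps, 1):6.0f} pages/s at 1 KB, "
              f"{max_pages_per_sec(bps, 5):5.0f} at 5 KB")

Five T1 lines top out near 940 requests/s for 1 KB files, and a T3 near 1,100 requests/s for 5 KB files - both below the 1,314 pages/s the single-CPU Linux box delivered, which is exactly the point being made.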

[July 3, 1999] ZDNN Linux to face off against NT again

More recent criticism points out that the benchmark has little to do with any sort of real-world situation. Doug Ledford of Red Hat was quoted widely as saying "The tests do not accurately represent how and what our customers are using Red Hat for." Penguin Computing put out a strongly worded press release arguing the irrelevance of the benchmark. See also Chris Lansdown's article on the sort of network connectivity it would take to actually sustain the number of hits per second tested in these benchmarks. A separate set of tests documented in this c't article shows that, under more "realistic" conditions, Linux performs much better.

All that is true - the connection between the benchmarks and reality is weak at best. But complaints along those lines just sound like sour grapes at this point. They make Linux look bad, and are not worth the trouble.

A few problems with Linux have been found as a result of these benchmarks. There is a bottleneck in the networking code that appears to be the cause of the plateau in Apache's performance, for example. Work is already well underway to fix those problems. See Dan Kegel's page for a detailed discussion of what is happening in this area.

And that, really, is the best result out of these benchmarks. There is no deep design problem within Linux that causes performance problems in these conditions. There are, instead, specific implementation problems that have been found, and will soon be fixed. It may not be long before Linux starts winning these benchmarks. The end result will be to show how quickly Linux can adapt and deal with problems. In the long run, these benchmarks will probably look like a good thing for Linux, from both the technical and public relations point of view.

Linux Today: At last, a Mindcraft 'Open Benchmark'

Alan Cox -- Bruce Weiner -- Lies, Damned Lies, Statistics -- a must-read

Linux Today: Bruce Weiner -- Setting the Record Straight: Where Linux Today Got It Right and Wrong

In an April 27, 1999 article entitled "Will Mindcraft II Be Better?" Linux Today presented a one-sided report clearly designed to destroy Mindcraft's credibility and to falsely make our reports look wrong. I want to set the record straight with this rebuttal, so I'll point out what's right and wrong with the Linux Today article. Unfortunately, it takes more words to right a wrong than it does to make someone look wrong, so please bear with me.

What's Right

Dave Whitinger and Dwight Johnson had several points right in their article:

What was not mentioned in the article was the excellent support Red Hat provided for our second test. Doug Ledford, from Red Hat, answered my questions on the phone, always called back when I left messages, and participated in the email correspondence with the above named Linux experts.

What's Wrong

Unfortunately, Mr. Whitinger and Mr. Johnson erred by not even attempting to contact Mindcraft to get information from us. It seems as though they wanted to write a one-sided story from the beginning. The following points will give you the other side of their story.

The Crux of The Matter

The whole controversy over Mindcraft's benchmark report is about three things: we showed that Windows NT Server was faster than Linux on an enterprise-class server, Apache did not outperform IIS, and we didn't get the same performance measurements for Samba that Jeremy got in the PC Week article or his lab. Let's look at these issues.

Comparing the performance of a resource-constrained desktop PC with an enterprise-class server is like saying a go-kart beat a grand prix race car on a go-kart race course.

Some Background on Mindcraft

Mindcraft has been in business for over 14 years doing various kinds of testing. For example, from May 1, 1991 through September 30, 1998 Mindcraft was accredited as a POSIX Testing Laboratory by the National Voluntary Laboratory Accreditation Program (NVLAP), part of the National Institute of Standards and Technology (NIST). During that time, Mindcraft did more POSIX FIPS certifications than all other POSIX labs combined. All of those tests were paid for by the client seeking certification. NIST saw no conflict of interest in our being paid by the company seeking certification, and NIST reviewed and validated each test result we submitted. We apply the same honesty to our performance testing that we do to our conformance testing. To do otherwise would be foolish and would put us out of business quickly.

Some may ask why we decided not to renew our NVLAP accreditation. The reason is simple: NIST stopped its POSIX FIPS certification program on December 31, 1997. That program was picked up by the IEEE, and on November 7, 1997 the IEEE announced that it recognized Mindcraft as an Accredited POSIX Testing Laboratory. We are still IEEE accredited and are still certifying systems for POSIX FIPS conformance.

We've received many emails and there have been many postings in newsgroups accusing us of lying in our report about Linux and Windows NT Server because Microsoft paid for the tests. Nothing could be further from the truth. No Mindcraft client, including Microsoft, has ever asked us to deliver a report that lied or misrepresented the results of a test. On the contrary, all of our clients ask us to get the best performance for their product and for their competitor's products. They want to know where they really stand. If a client ever asked us to rig a test, to lie about test results, or to misrepresent test results, we would decline to do the work.

A few of the emails we've received asked us why the company that sponsored a comparative benchmark always came out on top. The answer is simple. When that was not the case our client exercised a clause in the contract that allowed them to refuse us the right to publish the results. We've had several such cases.

Mindcraft works much like a CPA hired by a company to audit its books. We give an independent, impartial assessment based on our testing. Like a CPA we're paid by our client. NVLAP approved test labs that measure everything from asbestos to the accuracy of scales are paid by their clients. It is a common practice for test labs to be paid by their clients.

What's Fair

Considering the defamatory misrepresentations and bias in the Linux Today article written by Mr. Whitinger and Mr. Johnson, we believe that Linux Today should take the following actions in fairness to Mindcraft and its readers:

  1. Remove the article from its Web site and put an apology in its place. If you do not do that, at least provide a link to this rebuttal at the top of the article so that your readers can get both sides of the story.

  2. Disclose who Mr. Whitinger and Mr. Johnson work for. Were they paid by someone with a vested interest in seeing Linux outperform Windows NT Server?

  3. Disclose who owns Linux Today and whether it gets advertising revenue from companies who have a vested interest in seeing Linux outperform Windows NT Server.

  4. Provide fair coverage from an unbiased reporter of Mindcraft's Open Benchmark of Windows NT Server and Linux. For this benchmark, we have invited Linus Torvalds, Jeremy Allison, Red Hat, and all of the other Linux experts we were in contact with to tune Linux, Apache, and Samba and to witness all tests. We have also invited Microsoft to tune Windows NT and to witness the tests. Mindcraft will participate in this benchmark at its own expense.

References to NetBench Documentation

The NetBench document entitled Understanding and Using NetBench 5.01 states on page 24, " You can only compare results if you used the same testbed each time you ran that test suite [emphasis added]."

Understanding and Using NetBench 5.01 clearly gives another reason why the performance measurements Mindcraft reported are so different than the ones Jeremy and PC Week found. Look what's stated on page 236, "Client-side caching occurs when the client is able to place some or all of the test workspace into its local RAM, which it then uses as a file cache. When the client caches these test files, the client can satisfy locally requests that normally require a network access. Because a client's RAM can handle a request many times faster than it takes that same request to traverse the LAN, the client's throughput scores show a definite rise over scores when no client-side caching occurs. In fact, the client's throughput numbers with client-side caching can increase to levels that are two to three times faster than is possible given the physical speed of the particular network [emphasis added]."
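To see why client-side caching can push measured throughput past the wire speed, consider a toy model (our construction, not NetBench's): if some fraction of the workload's bytes is served from local RAM at an assumed high rate and the rest crosses the LAN, the time-weighted blend can easily land at two to three times the physical network speed:

    # Toy model of the client-side caching effect quoted above. A request served
    # from local RAM never touches the wire, so the blended rate is the
    # time-weighted (harmonic) mix of the two service rates. Both rates are
    # illustrative assumptions, not NetBench measurements.

    LAN_MBPS = 100.0  # Fast Ethernet
    RAM_MBPS = 800.0  # assumed effective rate for cache hits

    def apparent_throughput(hit_ratio):
        """Overall Mbit/s when hit_ratio of the bytes come from the local cache."""
        return 1.0 / (hit_ratio / RAM_MBPS + (1.0 - hit_ratio) / LAN_MBPS)

    for hit in (0.0, 0.5, 0.8):
        print(f"cache hit ratio {hit:.0%}: ~{apparent_throughput(hit):.0f} Mbit/s apparent")

At an 80 percent hit ratio the apparent throughput under these assumed rates is already more than three times the LAN's physical speed, which is precisely the distortion the NetBench manual warns about.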

Mindcraft saga. Stage 1

The first round proved that it is unrealistic to expect that Linux will outperform any combination of other OSes. Linux cannot win in every sector of the market...

