
Midrange Servers


Computers should obey a square law -- when the price doubles,
 you should get at least four times as much speed

 -- Seymour Cray

Often organizations concentrate too much on initial costs. Things like reliability, scalability and manageability never even enter the equation for them.

But for a large enterprise environment, the initial cost of a midrange server is probably the smallest part of its total cost of ownership (TCO), assuming a five-year lifespan. During those five years the total cost of software licensing and of hardware and software maintenance often comes close to, and sometimes exceeds, the initial cost of the hardware. The cost of electricity over five years is also close to the cost of the hardware.

Manpower costs are even greater. Just the cost of administration, OS licenses and hardware maintenance contracts can exceed $4K per year, or $20K over five years. For example, if a sysadmin administers 40 servers and costs (with benefits) $100K per year, human labor alone comes to $2.5K per server per year, or $12.5K over five years. Add $3.5K for Red Hat support and, say, $5K for hardware maintenance, and we are talking about $21K.
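A back-of-the-envelope sketch of this arithmetic (the dollar figures are the illustrative ones from the paragraph above, not measured data):

```python
# Per-server support cost over a five-year lifespan, using the illustrative
# figures from the text above (all numbers are assumptions, not measurements).

LIFESPAN_YEARS = 5

admin_cost_per_year = 100_000   # one sysadmin, salary plus benefits
servers_per_admin   = 40        # servers that one sysadmin handles
os_support          = 3_500     # Red Hat support over the whole lifespan
hw_maintenance      = 5_000     # hardware maintenance over the whole lifespan

labor = admin_cost_per_year / servers_per_admin * LIFESPAN_YEARS   # $12,500
total = labor + os_support + hw_maintenance                        # $21,000

print(f"labor per server over {LIFESPAN_YEARS} years: ${labor:,.0f}")
print(f"total per server over {LIFESPAN_YEARS} years: ${total:,.0f}")
print(f"per server per year:                          ${total / LIFESPAN_YEARS:,.0f}")
```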

By midrange server we understand a two-socket server which can scale to 32 cores (2 x 16) or more. Such servers also typically have a better disk controller with, say, 1 or 2 GB of cache.

They also have sophisticated remote control (DRAC or iLO installed by default, not as an option, although these might be the entry-level rather than the enterprise-level variants).

They are also more reliable because they usually have more subsystems duplicated in hardware. It is difficult to buy a decent configuration of a two-socket server for less than $8K, almost double the average price of an entry-level server with a single CPU. So in a way those servers do not obey Seymour Cray's law that when the price doubles, you should get at least four times as much speed.

Also the question of how many servers one midrange server can replace (the most common question during the current consolidation mania) is not that easy to answer. Intel continues to push the envelope, but there are natural physical limits to shrinking the die. Ivy Bridge chips are produced using a 22-nanometer process. That might open the last chapter of the "die shrinking saga". You can shrink the die from 22 nanometers in half, but not much more than that, and whether you can shrink it below 10 nanometers remains to be seen.

It is important to understand that the total number of cores is not that well correlated with performance. Assuming that each core operates at the same speed, a server with, say, 16 cores cannot even get close to the summed throughput of four servers with 4 cores each. The overhead per additional core is substantial, and that should cool the heads of overly adamant virtualization enthusiasts, but they just don't care ;-). At the same time, wasting company money on the current techno fad is a noble tradition of large enterprise IT (see the discussion of conspicuous consumption below).


The number of supported CPUs is important, but with Intel now producing 8- and 10-core CPUs, a two-socket server is probably as efficient as a typical four-socket server, since the latter has much higher overhead and few applications can efficiently utilize that number of cores. OS overhead and bottlenecks are also an important factor. Moreover, a four-socket server will usually operate at a lower memory speed (as of June 2011, a four-socket Intel server could run memory only at 1066 MHz), while a two-socket server can run at 1333 MHz with up to 96GB of memory.

In an earlier version of this paper (2006) I assumed the limit was 64GB for a midrange server and wrote: "In ten years from now that probably will be 128G." It happened in less than five years. Currently you can buy a two-socket server that can carry 256GB of memory. So it is the amount of memory, not so much the number of cores, that has become the dominant distinguishing feature of midrange servers.

I would like to stress that for midrange servers, beyond the lure of low hardware and OS costs, an organization needs to consider the TCO, which includes operating system and hardware support costs ($1K-$2K a year), database or ERP licensing costs and maintenance fees (often per CPU/core), the cost of the monitoring system (OpenView and Tivoli are pretty expensive and add to the TCO), the cost of system administrators, and the network infrastructure. Stability is one of the primary concerns in this range, as substantial downtime often represents what is called a "career limiting move" for the person responsible for the selection of the particular hardware. Stability that is adequate for a development server or a desktop/laptop is completely inadequate for a midrange server. In many cases even one hour of downtime a month is unacceptable. The best midrange servers from Dell and HP provide zero hardware-related downtime for all of their useful life (five to seven years).


Saving on administration costs is also a very important consideration. If you can save one Unix administrator position (for example, by not adding a new one due to increased workload after a large acquisition), that means that in the USA you saved approximately $60K-$100K per year. It also means that all midrange servers should have built-in subsystems for remote administration, like DRAC in Dell or iLO in HP servers.

There are two major areas of deployment of midrange servers in large enterprises: database servers and ERP systems.

Midrange server deployments in large enterprises are dominated by database servers. Databases are big business -- just look at what some companies spend on licensing Oracle. Software licensing costs also have a great influence on the selection of hardware. Oracle charges per physical socket, so if you can get a two-socket server with 8 cores per CPU, you will pay half as much for the software license (if you need to buy one) as for a server with four sockets and four cores per CPU. And I know several large corporations that fell into this trap and paid the price just due to misunderstanding this simple "Oracle license cost optimization" trick.
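A minimal sketch of that trick, assuming a purely per-socket license (the list price below is a made-up figure for illustration; real pricing depends on edition and contract):

```python
# Hypothetical per-socket license cost for the same total core count,
# illustrating the per-socket licensing argument above.
# The list price is an assumption for illustration only.

LICENSE_PER_SOCKET = 17_500   # hypothetical per-socket price

def license_cost(sockets: int, cores_per_socket: int) -> tuple[int, int]:
    total_cores = sockets * cores_per_socket
    return total_cores, sockets * LICENSE_PER_SOCKET

for sockets, cores in [(2, 8), (4, 4)]:
    total_cores, cost = license_cost(sockets, cores)
    print(f"{sockets} sockets x {cores} cores = {total_cores} cores -> ${cost:,}")

# Both configurations give 16 cores, but the 4-socket box costs twice as much to license.
```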

In many large companies database servers are used very inefficiently: the level of the database group in a typical large company is usually rather basic, and they rely more on "brute force" than on sophistication in designing and managing the database. As a result many database servers are considerably over-specified. Due to the feudal structure of IT, database administrators and the Unix group are often separate groups. And while the sysadmin group has no substantial knowledge of the particular application that is running on, say, an Oracle database, the database group is usually too far from understanding hardware and OS issues. Also, the structure of many databases is simply beyond their control. For example, the database is often part of an ERP system or enterprise application developed outside the company and as such must be taken "as is".

Enterprise Resource Planning (ERP) systems, which have become popular in large corporations and which support critical business functions such as supply chain, inventory, and customer relationship management, are very complex, often badly written systems. And they support enterprise resource planning equally badly ;-). The choice of the right hardware platform for ERP systems is extremely important for large enterprises, as it greatly affects the total cost, efficiency and future extendibility. Any unplanned downtime can lead to reduced productivity and lost revenue. The ERP platform must also be able to handle growing amounts of data and large numbers of concurrent users with reasonable response times. The hardware should have some room for growth and allow for, say, at least 30% growth of the user population with just memory and CPU upgrades, without total replacement of the current hardware. Therefore, if the system currently requires 32GB of memory, the server should be able to carry 64GB (16 x 4GB or 8 x 8GB modules). It goes without saying that all such systems should have redundant power supplies and built-in remote console hardware (DRAC in Dell servers, iLO in HP servers). All in all, ERP servers represent one of the most important applications running on midrange servers in a typical large company.

In case the database server is used in an ERP system like SAP R/3, the structure of the database is dictated by the ERP vendor, and a typical benchmark of ERP applications on the given server often means more than cost. Again, the number of cores per CPU is important as it permits cutting the number of physical CPUs, and software is often licensed "per socket" (aka physical CPU). That's why Intel 10-core and 12-core CPUs are so valuable in this domain. There are greedy perverts like IBM, who charge per core, and even with their excellent ability to lure top brass into their spider net, this is a desperate, almost suicidal move on their part, as there is nothing in IBM software that does not exist (often in a better incarnation) among the competition. That's how they killed Tivoli.

Again, minimizing the number of physical sockets (physical CPUs) is the first rule of thumb. Generally, the fewer CPUs the server has, the better it is for the company wallet. If you can do the job on a two-socket server, then unless Oracle or SAP employs your close relatives, you should not order a four-socket server ;-). That's why IBM had reasonable success with its POWER5 line in the ERP systems area: in comparison with Sun SPARC, an IBM POWER server could use far fewer CPUs to get the same throughput. But against modern Intel CPUs IBM came under tremendous pressure and has no real answer to the Intel assault. Below are the (old) Second Quarter 2011 SPEC CPU2006 results (after that the situation only got worse for IBM); results for Dell PowerEdge servers are shown. Other vendors' results are highly correlated, as they all use the same CPUs.

Test Sponsor  System Name                                      Enabled  Enabled  Cores/  Threads/  Base  Peak
                                                               Cores    Chips    Chip    Core
Dell Inc.     PowerEdge R910 (Intel Xeon E7-4870, 2.40 GHz)    40       4        10      1         33.5  36.6
Dell Inc.     PowerEdge R910 (Intel Xeon E7-8837, 2.67 GHz)    32       4        8       1         33.9  36.3
Dell Inc.     PowerEdge R910 (Intel Xeon E7-8867L, 2.13 GHz)   40       4        10      1         30.3  33.0
Dell Inc.     PowerEdge T710 (Intel Xeon X5687, 3.60 GHz)      8        2        4       1         45.1  47.7

There are actually no non-Intel results in the table for this quarter, so the decimation of the RISC vendors in 2011 is complete. Also note that the clock speed of the CPU is important: the Intel Xeon X5687 in the entry-level T710 server, running at 3.6GHz, beats the R910 with the Intel Xeon E7-8837 by a wide margin. As car enthusiasts used to say: there is no replacement for displacement.

The importance of memory size and speed

Both the size and the speed of memory are of paramount importance, as database applications are very unfriendly toward caching. Memory in a modern computer is a very complex subsystem. See Ulrich Drepper's excellent article What Every Programmer Should Know About Memory for important details that need to be understood. Figure 2.13 from the article is reproduced below and shows the names of the DDR2 modules in use today.

Array Freq.  Bus Freq.  Data Rate    Name (Rate)  Name (FSB)
133MHz       266MHz     4,256MB/s    PC2-4200     DDR2-533
166MHz       333MHz     5,312MB/s    PC2-5300     DDR2-667
200MHz       400MHz     6,400MB/s    PC2-6400     DDR2-800
250MHz       500MHz     8,000MB/s    PC2-8000     DDR2-1000
266MHz       533MHz     8,512MB/s    PC2-8500     DDR2-1066

The second table shows the names of the expected DDR3 modules.

Array Freq.  Bus Freq.  Data Rate    Name (Rate)  Name (FSB)
100MHz       400MHz     6,400MB/s    PC3-6400     DDR3-800
133MHz       533MHz     8,512MB/s    PC3-8500     DDR3-1066
166MHz       667MHz     10,667MB/s   PC3-10667    DDR3-1333
200MHz       800MHz     12,800MB/s   PC3-12800    DDR3-1600
233MHz       933MHz     14,933MB/s   PC3-14900    DDR3-1866
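The names in both tables encode the same two numbers: the transfer rate in MT/s (DDRx-NNN) and the approximate peak bandwidth in MB/s (PCx-NNNN), which for a standard 64-bit module is simply the transfer rate times 8 bytes. A minimal sketch of this relationship (my own illustration, not code from Drepper's article):

```python
# DDR module naming: a 64-bit (8-byte) module moves 8 bytes per transfer,
# so peak bandwidth in MB/s is roughly the transfer rate in MT/s times 8.
# Marketing names round this value, hence PC3-10667 rather than PC3-10664.

def ddr_bandwidth_mb_s(transfer_rate_mt_s: int, bus_width_bytes: int = 8) -> int:
    return transfer_rate_mt_s * bus_width_bytes

for name, rate in [("DDR2-800", 800), ("DDR3-1066", 1066), ("DDR3-1333", 1333)]:
    print(f"{name}: {rate} MT/s x 8 bytes = {ddr_bandwidth_mb_s(rate):,} MB/s")
# DDR2-800:  6,400 MB/s  (PC2-6400)
# DDR3-1066: 8,528 MB/s  (marketed as PC3-8500)
# DDR3-1333: 10,664 MB/s (marketed as PC3-10667)
```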

Most computer textbooks teach that a memory access takes roughly the time of one CPU instruction (especially on RISC architectures). With current technology the reality is that a memory operation, for example in the case of a cache miss, may cost you 100 CPU instructions or more. Also, the top speed of a CPU is currently approximately two times higher than the top speed of memory (5GHz and 2GHz respectively). For a more typical 1333 MHz memory, a quite acceptable CPU speed is 2.66GHz, at which the CPU cache is approximately twice as fast as memory.

The gap between memory speed and disk latency is even worse. So, other things being equal, if you can install enough memory to cache 80% of the database in RAM, you can achieve spectacular improvements in performance. Keep in mind that a 32GB database is a pretty large database, yet it can easily be held in memory on any modern two-socket server. Even a 256GB database can now be held in memory quite easily. SSD disks can also improve read speed dramatically.
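A minimal illustration of why caching most of a database in RAM pays off so dramatically, using a simple two-level average-access-time model (the latency figures are rough ballpark assumptions, not measurements):

```python
# Average random-read latency under a simple two-level model:
# a fraction of reads hits RAM, the rest goes to a rotating disk.
# Latencies below are rough ballpark assumptions for illustration.

RAM_LATENCY_MS  = 0.0001   # ~100 ns
DISK_LATENCY_MS = 5.0      # seek + rotational delay on a 15K RPM drive

def avg_latency_ms(ram_hit_rate: float) -> float:
    return ram_hit_rate * RAM_LATENCY_MS + (1 - ram_hit_rate) * DISK_LATENCY_MS

for hit_rate in (0.0, 0.80, 0.99):
    print(f"{hit_rate:4.0%} of reads served from RAM -> {avg_latency_ms(hit_rate):.3f} ms avg")
# 0% -> 5.000 ms, 80% -> 1.000 ms, 99% -> 0.050 ms:
# the average is dominated by the misses, so pushing the hit rate up matters a lot.
```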

When the cost of RAM decreased so that 4GB of memory became common, Sun initially had some advantage with its UltraSPARC architecture, as it was able to handle larger amounts of memory than the Intel competition due to the Alpha-style organization of the memory bus. Old, 32-bit Intel servers scaled very poorly beyond 4GB of memory, while in 2006 UltraSPARC scaled to 64GB without major problems. Interesting benchmarks for the T1 can be found at http://www.rz.rwth-aachen.de/computing/hpc/hw/niagara.php and in the related slides http://www.rz.rwth-aachen.de/computing/hpc/hw/niagara/Niagara_ZKI_2006-03-31_anMey.pdf

This ability to access huge amounts of memory was eventually matched by Intel around 2010 and then exceeded in 2012, so it is no longer an advantage. Memory is also much cheaper in Intel servers: you can now have a two-socket Intel server with 256GB of memory for around $10K and a four-socket Intel server with 512GB of memory for around $30K (Dell PE 820).

Still, memory is a very important consideration, and for a mission-critical database server with a terabyte database you probably need 256GB or more of RAM. If we are talking about memory-intensive applications like databases, doubling the amount of RAM always beats doubling the number of CPUs. Most organizations overpay for CPUs and underpay for memory when ordering such servers. There are just too many enterprise servers that are completely unbalanced CPU/memory-wise and/or over-specified to the extent that they run at less than 10% CPU utilization. (Actually corporate idiotism in buying SAN storage is even greater, but that's another story.)

Fighting the SAN Fashion on Midrange

One important but often overlooked advantage of using Intel/AMD hardware and Linux or Solaris is the ability to cut SAN expenses. While a SAN has legitimate uses in a large enterprise environment for large databases and is often indispensable for clusters and for multiple virtual machines running on the same box, it is a very expensive solution, and one should not try to shoot birds with a cannon. Small, especially catalog-type databases (mostly read operations) can perform very well with local storage, especially with solid state drives and PCI cards like the Fusion-io ioDrive. SAS RAID controllers with SSD drives can scale up to 6GB/sec. Even with 15K RPM SAS drives and 2GB of cache on the controller, if you use six SAS drives per controller you will smoke a SAN pretty easily. And the cost of six 15K RPM hard drives is less than the cost of two SAN fibre channel controllers, to say nothing of the cost of the SAN storage itself and the Brocade switch it is connected to. Solid state drives and especially PCI-connected drives like the ioDrive are also an interesting alternative: such a drive has almost zero read latency and sustained sequential read and write speeds of up to 770 MB/sec (ioDrive).
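A rough sketch of the throughput argument (the per-drive and per-link figures are my ballpark assumptions, not benchmark results):

```python
# Rough aggregate streaming throughput of local 15K RPM SAS drives
# versus a single 8 Gbit/s Fibre Channel link to a SAN.
# All figures are ballpark assumptions for illustration, not benchmarks.

MB_PER_SAS_DRIVE = 180          # assumed sustained sequential read of one 15K RPM SAS drive
FC_8G_LINK_MB    = 8000 / 10    # 8 Gbit/s link, ~10 bits/byte after encoding -> ~800 MB/s

drives = 6
local_throughput = drives * MB_PER_SAS_DRIVE

print(f"{drives} local SAS drives: ~{local_throughput} MB/s aggregate")
print(f"one 8 Gb FC link:   ~{FC_8G_LINK_MB:.0f} MB/s")
# Six local drives already exceed a single FC link, which is the
# "you will smoke a SAN pretty easily" point made above.
```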

At the same time, when dealing with really large databases, accessing data is not the only issue that should be addressed. Other operations such as importing/exporting data and, especially, backup and restore operations are equally important (although I know one huge corporation whose management is stupid enough to use a SAN and HP Data Protector for backups, as the classic quote "the bigger you are, the stupider you behave" implies ;-). Actually the stupidity of large corporate IT in configuring midrange servers is simply staggering and can be compared with the description of military bureaucracy in The Good Soldier Švejk. For sure it produces the same mixed feelings:

“All along the line,' said the volunteer, pulling the blanket over him, 'everything in the army stinks of rottenness. Up till now the wide-eyed masses haven't woken up to it. With goggling eyes they let themselves be made into mincemeat and then when they're struck by a bullet they just whisper, "Mummy!" Heroes don't exist, only cattle for the slaughter and the butchers in the general staffs. But in the end every body will mutiny and there will be a fine shambles. Long live the army! Goodnight!”
Jaroslav Hašek, The Good Soldier Švejk

See another typical episode when blades are used with VMware. Often that leads to bizarre configurations in which it is difficult to achieve cost savings in comparison with existing solutions (see the section devoted to blades below). In the case of blades SAN storage is necessary, as they do not have enough local storage, but the question arises whether they provide a better cost/performance ratio than can be achieved with traditional midrange servers and local drives.

Outside of multi-terabyte databases, which do require a dedicated SAN unit, the cost/benefit ratio of a SAN unit needs to be investigated with a spreadsheet in hand. The ability to use the SAN as a backup target can be valuable, but again the question of cost/performance is an interesting one, as one high-end QLogic fibre card (QLE2562) costs approximately $1200, about the same as six 300GB 15K RPM SAS-2 Seagate drives (ST3300657SS).

One needs to understand that transmitting data via fiber or copper at speeds close to those achievable on the local bus (say close to 1GB per second) is expensive. Network latency and, especially, the possibility of oversubscription of the available network channels are issues that should not be overlooked. Additional complexity is another factor: you add more expensive components like cards and switches, each of which can fail. You can duplicate them at additional cost, but sometimes a simple question arises: "Can local storage be adequate for the task at hand if you switch to SSD drives?" And the answer in many cases is a definite yes, as few databases exceed the 4 terabyte range that is now easily achievable with a single solid state drive. Also, the cost of the SAN connection is substantial, and for an equal comparison the money spent on the controller, switch and SAN storage box should instead be used for improving I/O (by increasing the number of controllers or switching entirely to solid state drives), increasing memory size to the maximum, and using the latest Intel CPUs.

The rule "use a SAN only as the heavy artillery" is often violated. The flexibility achievable with SANs is often overrated. In many cases SAN storage increases rather than decreases downtime, because if something goes south, it goes south for all servers connected to the particular SAN unit, which can be a dozen or more. This is when the IT honchos who promoted SAN storage realize that any centralization has its drawbacks :-).

Using drive images and appropriate software can provide 80% of the flexibility at lower cost. In many cases SANs are used just because of fashion, aggressive marketing, or inertia linked to long-gone limitations on the size of individual hard drives (a thing of the past at least since 2005). In any case, all modern Intel/AMD servers have no such limitations and can use a dozen or more large drives. For SAS "large" now means 4TB per drive and, amazingly, for SATA it means 8 or even 10TB per drive. That means that factors other than scalability should be considered before making a decision.

Using such trivial improvements as a high-end I/O controller, SSD or 15K RPM drives, more RAM and a larger number of physical drives (one pair per critical filesystem), one can save money that otherwise lands in EMC's coffers ;-). It is not uncommon for a large enterprise to pay EMC almost the same sum annually as it pays two major Unix vendors for server maintenance. This amount can (and probably should) be cut by using other vendors, but the same question, "Can local storage be adequate for the task at hand?", should still be asked before making the final decision.

In this area ZFS leads the pack due to its unique advantages, but Linux is not far behind and with proper tuning is very competitive.

Losses due to overinvestment in SANs can be substantial. Benchmarks of equal-cost solutions are a must (usually you can increase the number of cores and/or the amount of memory on a server in a solution that uses local storage instead of a SAN: just the two required cards cost over $2K, or 10% of the total cost of a $20K server).

The proof of the pudding is in the eating

But all these theoretical considerations are not worth the paper they are printed on. The configuration issues tend to be complex and strongly depend on the application that will be running on the server. Without real benchmarks on actual hardware it is difficult to tell what improvement each platform or each component can bring. The money involved is considerable, and without a pilot and laboratory testing of actual applications and actual OSes on real hardware, enterprises are bound to make expensive mistakes in the selection of midrange servers, either because of vendor hype or their own incompetence or both. I just want to suggest that Solaris 10 on x86 might be a part of such a pilot. Not because I love Oracle (I do not), but because it does provide value and stability on the most cost-effective platform in existence. While AIX and HP-UX are bound to RISC servers, Solaris is free of such a limitation.


For example, in late 2006 Tweakers.net performed an interesting test using an X4200 (4 cores). While these results can in no way be taken for granted and need to be reproduced in your particular environment (tuning is an important issue here, and the more you know about a particular OS, the better it will perform in your environment), they found that "concurrencies of 15 and above made Solaris between 6% and 14% quicker than Linux, running PostgreSQL and MySQL, respectively. Quite remarkably, the latter does show a higher performance peak running Linux." As the margin of error in such experiments is probably around 10%, this suggests that any performance advantage of Linux over Solaris on the Intel platform is application dependent and only real testing can reveal which platform is better. For example, for WebSphere Solaris is typically extremely competitive and outperforms Linux under large loads by a significant margin, while for the LAMP stack Linux usually holds its own.

Sun T2000 review - Solaris vs. Linux

Reliability issues

Reliability is a huge issue for midrange servers. Typically such servers should last five years without major downtime, and most vendors achieve this quite easily. There are many cases where Dell or HP servers have worked 10 years without a single hour of downtime due to hardware problems.

The main source of downtime now is software and the OS. The availability of certified (or in general more or less qualified) SUSE and RHEL sysadmins is low, and such people generally need to be real "top guns" because of the tremendous variety and complexity of the Intel server market. Just look at the considerations above. Linux as an OS is also more complex (the "Christmas tree" syndrome). This makes it more difficult to provide internal support for key applications.

This situation is made worse by the instability of the system under a frequent patching schedule (once a quarter is typical for Linux servers) and by the challenge of finding out who is responsible for a particular crash: the hardware vendor or the Linux distribution provider. For example, SUSE 10 before SP1 was known to be rather unstable and only with SP1 reached the "post-beta" level. The situation normalized for a while, but the problem returned with SUSE 11 SP2, which generally has the quality of an early beta.

Conspicuous Consumption Problem

It is interesting to note that large enterprise IT departments, which in comparison with small startups are flush with money for hardware acquisitions, often slip into what Thorstein Veblen called "conspicuous consumption" mode: they clamor for the latest and greatest CPUs and often under-purchase other subsystems, first of all memory and hard drives.

Summing up

Intel servers wiped the floor with RISC, and on the midrange they are the only game in town (as on the low end). Dell and HP can provide servers with a price/performance ratio that simply can't be matched by Oracle, IBM, or HP's own Itanium line (aptly renamed "Itanic"). The fact that HP won the contractual battle with Oracle over support of the Oracle database on Itanium does not help.

Configuration of midrange servers in large corporations is often mismanaged. Sometimes I think that the Good Soldier Švejk would find a perfect civil occupation as the person responsible for ordering midrange servers in a large corporation.


