Softpanorama


Small HPC Cluster Architecture


Introduction

We will assume that a small HPC cluster is a cluster with 32 or fewer computational nodes (two enclosures with 16 blades each, connected by a 40 Gb link), a single headnode, and NFS used as the filesystem for sharing files between the nodes and the headnode, with the NFS server residing on the headnode. The magic number of 32 is connected mainly with the capabilities of NFS. But the other advantages, and the simplicity of the design, also tend to dissipate abruptly when the number of computational nodes exceeds 32. For example, a commercial cluster manager becomes desirable to reduce maintenance costs. A faster interconnect is often needed, as 10 Gb Ethernet is typically not sufficient if all nodes can produce considerable I/O to the shared filesystem. There are also other complications.

While blades are the preferable solution due to the additional management capabilities of the enclosure, small 1U servers can also be used. If the number of cores is not critical, desktops can serve as nodes too, and can be much cheaper.

Nodes are connected to one or two specialized network switches (with the second based on InfiniBand, if it is needed) and have libraries and programs installed which allow processing to be shared among them. The result is a high-performance parallel computing cluster built from inexpensive commodity hardware.

Structurally, a small HPC cluster consists of one headnode and a dozen or more computational nodes. The interconnect can be a 10 Gbit network or InfiniBand (IB). But if MPI is not used across blades (i.e., tasks are limited to a single server/blade), even a 100 Mbit network might be adequate. MPI works well over Ethernet with up to two blades; after that, InfiniBand is preferable.
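A quick way to check MPI behavior across nodes is to run a test binary over each transport and compare. The sketch below assumes Open MPI; the hostfile contents, slot counts, and the binary name are all hypothetical:

```
# hostfile: one line per node with its slot (core) count, e.g.:
#   node01 slots=28
#   node02 slots=28

# Run across both blades using whatever fast transport is available (IB if present):
mpirun --hostfile hostfile -np 56 ./hello_mpi

# With Open MPI, the TCP (Ethernet) transport can be forced for comparison:
mpirun --hostfile hostfile -np 56 --mca btl tcp,self ./hello_mpi
```

Comparing the two runs on a communication-heavy benchmark gives a rough idea of how much the Ethernet fabric is costing you.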

Another important advantage is higher flexibility, as a small cluster can be "customized" to a particular set of applications, both hardware-wise and operating-system-wise. It can also be used for offloading jobs "non-suitable" for the supercluster, as well as for experimentation with new software before it gets to the supercluster (if this is justified by the number of researchers who need it).

A small cluster can have "non-uniform" nodes with different flavors of Linux, as long as the scheduler used is supported on each flavor. For example, Debian is becoming the de-facto standard for many open source bioinformatics applications, and some nodes can have Debian installed. That improves performance and encourages researchers to find and try applications that better suit their needs, instead of relying on a set of centrally supplied applications.

Often the installation of a new application can be performed immediately after a researcher requests it, making it available within the next day or two. Unlike with a supercluster, it is possible to use different flavors of Linux on different nodes, if this is absolutely necessary. Some scripts and applications developed on small satellite clusters can be ported to the supercluster, increasing the efficiency of its usage. And vice versa: popular open source applications available on the supercluster can be compiled on satellite clusters, reusing the makefiles and environment module files developed for the supercluster.

 

Hardware costs

Small HPC clusters have a lower hardware cost per node and per core than larger clusters, as the amount of additional hardware needed for cluster operation, and the number of non-computational nodes, is minimal. A typical cost of a 448-core (16×28), 16-blade cluster with 128 GB of RAM per blade (including a rack server as the headnode and 50 TB of directly attached storage) is around $150K. That comes to roughly $334 per core and about $9K per node (a blade with 28 cores on two 14-core CPUs, 128 GB of RAM, and a 400 GB SSD). The same is true for software: often such a cluster can use a "no-cost" version of Linux such as CentOS or Debian instead of RHEL.
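The arithmetic behind these figures is easy to check (prices and counts as quoted above; integer division, so the results are approximate):

```shell
# Sanity-check the per-core and per-node figures for the example cluster
total=150000                            # cluster price in dollars
blades=16
cores_per_blade=28
cores=$((blades * cores_per_blade))     # 448 cores total
per_core=$((total / cores))             # ~$334 per core
per_node=$((total / blades))            # ~$9375, i.e. about $9K per blade
echo "cores=$cores per_core=\$$per_core per_node=\$$per_node"
```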

Such clusters typically use off-the-shelf commodity hardware, for example from Dell. If funds are scarce, you can buy a used enclosure too. In cases where the number of cores does not matter much but the speed of the CPU does, nodes can be workstations or even regular desktops.

Small HPC clusters are typically implemented as a single blade enclosure (up to 16 blades) or as two blade enclosures interconnected by a 40 Gb Ethernet link, switch to switch (up to 32 blades). The headnode is typically a rack server with a direct-attached storage unit (connected to the PCI bus). It hosts both the NFS server and the scheduler. Using a blade enclosure provides an excellent central management capability (which exceeds the capabilities typically found in superclusters), and 5% to 10% better energy efficiency in comparison with rack servers, due to intelligent management of the power supply units.

They can also be implemented using rack servers, or, if money is tight and the number of cores on each node is not critical (but the speed of the CPU is), using regular desktops with i7 CPUs (possibly overclocked). The latter are 5-10 times cheaper than a blade and have superior performance on single-threaded applications. I/O is also competitive: with SSD disks, a complex RAID controller with onboard cache is no longer needed for fast I/O. For example, replacing ancient DNAL servers with workstations would improve the speed of most bioinformatics group applications. But typically these two solutions are suboptimal and increase the cost of management to the extent that the savings on hardware do not outweigh the increased labor and (in the case of rack servers) additional energy costs. The latter, with two 130-watt CPUs, can be estimated at around $2K per blade over a five-year period, assuming $0.15 per kWh.
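The energy estimate can be reproduced directly. This counts CPU TDP only; the rest of the blade (RAM, disks, fans, power-conversion losses) adds to it, which is what pushes the total toward the quoted $2K:

```shell
# Five-year energy cost of two 130 W CPUs running flat out at $0.15 per kWh
cost=$(awk 'BEGIN {
    watts = 2 * 130                       # two 130 W CPUs
    kwh = watts * 24 * 365 * 5 / 1000     # kWh over five years
    printf "%.0f", kwh * 0.15             # dollars at $0.15/kWh
}')
echo "five-year CPU energy cost: \$$cost"
```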

As nodes are disposable, it makes sense to use used computers for them if one needs to cut costs. The Google approach (where the server does not have an enclosure and just consists of a motherboard, power supply, and a hard drive mounted on some wooden or plastic (polypropylene) base) is also possible.

Software costs

At a minimum, you can have zero software costs and software maintenance costs if you use open source software exclusively. For example, a cluster of this type can run CentOS, Academic OS, or Debian.

The free version of SGE can be used as the scheduler (SGE is preferable, as development stopped long ago and the code is extremely stable).
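A minimal SGE job script looks along these lines. This is a sketch only: the parallel environment name, resource limits, and the application name are assumptions that vary per site:

```
#!/bin/bash
#$ -N my_job              # job name
#$ -cwd                   # run from the submission directory
#$ -pe smp 4              # request 4 slots (the PE name "smp" is site-specific)
#$ -l h_rt=01:00:00       # one hour wall-clock limit
./my_app input.dat
```

It would be submitted with `qsub my_job.sh` and monitored with `qstat`.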

A small cluster provides an unparalleled level of flexibility, as you can install software (typically on the headnode) the same day it was requested, without the weeks of red tape typical for superclusters.

Total ownership costs

If you limit yourself to replacement of failed hardware instead of repairs, you can avoid hardware maintenance costs too. In this case the total cost of maintaining the cluster is only your own labor plus the initial cost of the hardware.

Cluster managers are typically not free, and for a small cluster this layer is redundant, so this cost can be avoided as well. Replication of nodes can be achieved via utilities such as Relax-and-Recover or some free rsync-based tool.

The headnode

In small HPC clusters the headnode controls the whole cluster and also serves files to the client nodes, in the simplest case via NFS. There are three main components running on the headnode: the NFS server, the scheduler, and the node provisioning services (e.g., DHCP/PXE network boot).

In case you use InfiniBand, the IB subnet manager can also be run from the headnode.

The headnode can also serve as a login node for users. This presupposes that several network interfaces exist; in this case the headnode is typically a separate cheap rack server. You need at least one interface for the private cluster network (typically eth1) and another to the outside world (typically eth0).
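On RHEL-family systems the private-network interface might be configured along these lines (the addresses and netmask are illustrative assumptions, not a prescription):

```
# /etc/sysconfig/network-scripts/ifcfg-eth1  -- private cluster network
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.1.1        # headnode's address on the private network
NETMASK=255.255.255.0
ONBOOT=yes
```

The outward-facing eth0 gets its address per site policy (static or DHCP), and the compute nodes use the headnode's private address as their NFS server and default route if they need outside access.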

The headnode can also run other cluster functions, such as monitoring and the DHCP/PXE network boot services.

Nodes are configured and controlled by the headnode, and do only what they are told to do. Tasks are executed via a scheduler such as Sun Grid Engine. If a network boot with DHCP is used, only a single image for the nodes needs to exist. With 16 nodes you can achieve the same using rsync with one "etalon" node. One can build a Beowulf-class machine using a standard Linux distribution without any additional software.

Compute Nodes

The compute nodes do really one thing: compute. So what computer should serve as a compute node for your small cluster totally depends on the type of applications you are using. Desktops can and should be used as compute nodes if money is tight and the number of cores on each node is not that critical (so you can avoid the additional expense of two-socket servers/workstations).

The key problem with multiple compute nodes is how to keep all of them in sync with respect to configuration files and patches.

Blades are typically used for computational nodes if the cluster has 16 or 32 nodes. They are especially useful in situations when:

  1. High density is critical
  2. Power and cooling is absolutely critical (blades typically have better power and cooling than rack-mount nodes)
  3. The applications don't require much local storage capacity (two drives is the maximum that most blades can carry, although with SSDs this number is raised to four)
  4. PCI-e expansion cards are not required (typically blades have built-in IB or GbE)
  5. Larger clusters (blades can be cheaper than 1U nodes for larger systems, starting with, say, 12 nodes). Otherwise the enclosure carries a substantial additional expense which is not justified by the ease of maintenance.

Among the major hardware components to consider are the network interfaces. Blades can have two: 10 Gbit Ethernet and IB. Each needs to be connected to its own network.

Networking architecture issues

There are several networking architecture issues with a small HPC cluster based on the M1000e.

Major software components

We also need several software applications, including a scheduler (e.g., SGE), an MPI implementation, a parallel remote shell such as pdsh, and environment modules.

A small cluster on the base of Dell blade enclosure

The Dell PowerEdge M1000e enclosure is an attractive option for building small clusters: 16 blades can be installed into the 10U enclosure. With 16 cores per blade and 16 blades per enclosure, you get a 256-core cluster. The enclosure supports up to 6 network and storage I/O interconnect modules.

A high-speed passive midplane connects the server modules in the front to the power, I/O, and management infrastructure in the rear of the enclosure.

The enclosure provides thorough power management capabilities, including delivery of shared power to ensure that the full capacity of the power supplies is available to all server modules.

To understand the PowerEdge M1000e architecture, it is necessary to first define the term fabric. A fabric here is a set of redundant I/O paths through the midplane: the M1000e provides three (A, B, and C), with fabric A dedicated to Ethernet and fabrics B and C customizable (e.g., 10 GbE, InfiniBand, or Fibre Channel).

History

A Beowulf cluster is a computer cluster of what are normally identical, commodity-grade computers networked into a small local area network with libraries and programs installed which allow processing to be shared among them. The result is a high-performance parallel computing cluster from inexpensive personal computer hardware.

The name Beowulf originally referred to a specific computer built in 1994 by Thomas Sterling and Donald Becker at NASA.[1] The name "Beowulf" comes from the main character in the Old English epic poem Beowulf, which was bestowed by Sterling because the eponymous hero is described as having "thirty men's heft of grasp in the gripe of his hand".

There is no particular piece of software that defines a cluster as a Beowulf. Beowulf clusters normally run a Unix-like operating system, such as BSD, Linux, or Solaris, normally built from free and open source software. Commonly used parallel processing libraries include Message Passing Interface (MPI) and Parallel Virtual Machine (PVM). Both of these permit the programmer to divide a task among a group of networked computers, and collect the results of processing. Examples of MPI software include OpenMPI or MPICH.

Here is a description of the Beowulf cluster from the original "how-to", which was published by Jacek Radajewski and Douglas Eadline under the Linux Documentation Project in 1998:

Beowulf is a multi-computer architecture which can be used for parallel computations. It is a system which usually consists of one server node, and one or more client nodes connected via Ethernet or some other network. It is a system built using commodity hardware components, like any PC capable of running a Unix-like operating system, with standard Ethernet adapters, and switches. It does not contain any custom hardware components and is trivially reproducible. Beowulf also uses commodity software like the FreeBSD, Linux or Solaris operating system, Parallel Virtual Machine (PVM) and Message Passing Interface (MPI). The server node controls the whole cluster and serves files to the client nodes. It is also the cluster's console and gateway to the outside world. Large Beowulf machines might have more than one server node, and possibly other nodes dedicated to particular tasks, for example consoles or monitoring stations. In most cases client nodes in a Beowulf system are dumb, the dumber the better. Nodes are configured and controlled by the server node, and do only what they are told to do. In a disk-less client configuration, a client node doesn't even know its IP address or name until the server tells it.

One of the main differences between Beowulf and a Cluster of Workstations (COW) is that Beowulf behaves more like a single machine rather than many workstations. In most cases client nodes do not have keyboards or monitors, and are accessed only via remote login or possibly serial terminal. Beowulf nodes can be thought of as a CPU + memory package which can be plugged into the cluster, just like a CPU or memory module can be plugged into a motherboard.

Beowulf is not a special software package, new network topology, or the latest kernel hack. Beowulf is a technology of clustering computers to form a parallel, virtual supercomputer. Although there are many software packages such as kernel modifications, PVM and MPI libraries, and configuration tools which make the Beowulf architecture faster, easier to configure, and much more usable, one can build a Beowulf class machine using a standard Linux distribution without any additional software. If you have two networked computers which share at least the /home file system via NFS, and trust each other to execute remote shells (rsh), then it could be argued that you have a simple, two node Beowulf machine.
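The minimal two-node setup described above, sharing /home via NFS, can be sketched as follows (the hostname and network range are illustrative assumptions):

```
# /etc/exports on the server node: share /home with the private cluster network
/home 192.168.1.0/24(rw,sync,no_root_squash)

# Then, on the server, re-export:
#   exportfs -ra
# And on each client node, mount it (e.g. via /etc/fstab):
#   headnode:/home  /home  nfs  defaults  0 0
```

With that in place, plus mutual trust for remote shells, the two machines already form the "simple, two node Beowulf machine" of the quote.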



NEWS CONTENTS

Old News ;-)

[Apr 01, 2019] The Seven Computational Cluster Truths

Inspired by "The Twelve Networking Truths" by R. Callon, April 1, 1996
Feb 26, 2019 | www.softpanorama.org

Adapted for HPC clusters by Nikolai Bezroukov on Feb 25, 2019

Status of this Memo
This memo provides information for the HPC community. This memo does not specify a standard of any kind, except in the sense that all standards must implicitly follow the fundamental truths. Distribution of this memo is unlimited.
Abstract
This memo documents seven fundamental truths about computational clusters.
Acknowledgements
The truths described in this memo result from extensive study over an extended period of time by many people, some of whom did not intend to contribute to this work. The editor would like to thank the HPC community for helping to refine these truths.
1. Introduction
These truths apply to HPC clusters, and are not limited to TCP/IP, GPFS, scheduler, or any particular component of HPC cluster.
2. The Fundamental Truths
(1) Some things in life can never be fully appreciated nor understood unless experienced firsthand. Most problems in a large computational cluster can never be fully understood by someone who has never run a cluster with more than 16, 32 or 64 nodes.

(2) Every problem or upgrade on a large cluster always takes at least twice as long to solve as it seems like it should.

(3) One size never fits all, but complexity increases non-linearly with the size of the cluster. In some areas (storage, networking) the problem grows exponentially with the size of the cluster.
(3a) A supercluster is an attempt to solve multiple separate problems via a single complex solution. But its size creates another set of problems, which might outweigh the set of problems it intends to solve.

(3b) With sufficient thrust, pigs fly just fine. However, this is not necessarily a good idea.

(3c) Large, Fast, Cheap: you can't have all three.

(4) On a large cluster, issues are more interconnected with each other; a typical failure often affects a larger number of nodes or components and takes more effort to resolve.
(4a) Superclusters prove that it is always possible to add another level of complexity into each cluster layer, especially at the networking layer, until only applications that use a single node run well.

(4b) On a supercluster it is easier to move a networking problem around than it is to solve it.

(4c) You never understand how bad and buggy your favorite scheduler is until you deploy it on a supercluster.

(4d) If the solution that was put in place for a particular cluster does not work, it will always be proposed later for a new cluster under a different name...

(5) The functioning of a large computational cluster is indistinguishable from magic.
(5a) The user superstition that "the more cores, the better" is incurable, but the user desire to run their mostly useless models on as many cores as possible can and should be resisted.

(5b) If you do not know what to do with a problem on the supercluster, you can always "wave a dead chicken", i.e. perform a ritual operation on the crashed software or hardware that most probably will be futile but is nevertheless useful to convince "important others" and frustrated users that an appropriate degree of effort has been expended.

(5c) Downtime of a large computational cluster has some mysterious religious-ritual quality to it: in modest doses it increases the respect of the users toward the HPC support team. But only to a certain limit.

(6) "The more cores the better" is a religious truth similar to the belief in Flat Earth during Middle Ages and any attempt to challenge it might lead to burning of the heretic at the stake.

(6a) The number of cores in the cluster has a religious quality and, in the eyes of users and management, power almost equal to the Divine Spirit. At the stage of hardware acquisition it outweighs all other considerations, driving toward the cluster with the maximum possible number of cores within the allocated budget. Attempts to resist buying faster CPUs with fewer cores for the computational nodes are futile.

(6b) The best way to change your preferred hardware supplier is to buy a large computational cluster.

(6c) Users will always routinely abuse the facility by specifying more cores than they actually need for their runs.

(7) For all resources, whatever the size of your cluster, you always need more.

(7a) Overhead increases exponentially with the size of the cluster until all resources of the support team are consumed by maintaining the cluster and none can be spent on helping the users.

(7b) Users will always try to run more applications and use more languages than the cluster team can meaningfully support.

(7c) The most pressure on the support team is exerted by the users with the applications that are least useful for the company and/or most questionable from the scientific standpoint.

(7d) The level of ignorance in computer architecture of 99% of users of large computational clusters can't be overestimated.

Security Considerations

This memo raises no security issues. However, security protocols used in the HPC cluster are subject to those truths.

References

The references have been deleted in order to protect the guilty and avoid enriching the lawyers.

Raijin User Guide - NCI Help - Opus - NCI Confluence

The systems have a simple queue structure with two main levels of priority; the queue names reflect their priority. There is no longer a separate queue for the lowest priority "bonus jobs" as these are to be submitted to the other queues, and PBS lowers their priority within the queues.

Intel Xeon Sandy Bridge

express:

normal:

copyq:

Note: always use -l other=mdss when using mdss commands in copyq. This is so that jobs only run when the mdss system is available.

Intel Xeon Broadwell

expressbw:

normalbw:

Specialised Nodes

hugemem:

gpu:

gpupascal:

More information on the gpu specification, how to use GPUs on NCI, and GPU-enabled software is available on the following page: NCI GPU

knl:

More information on the Xeon Phi (Knights Landing) queue is available at Intel Knights Landing KNL

Hyades

Architecturally, Hyades is a cluster comprised of the following components:
Component QTY Description
Master Node 1 Dell PowerEdge R820, 4x 8-core Intel Xeon E5-4620 (2.2 GHz), 128GB memory, 8x 1TB HDDs
Analysis Node 1 Dell PowerEdge R820, 4x 8-core Intel Xeon E5-4640 (2.4 GHz), 512GB memory, 2x 600GB SSDs
Type I Compute Nodes 180 Dell PowerEdge R620, 2x 8-core Intel Xeon E5-2650 (2.0 GHz), 64GB memory, 1TB HDD
Type IIa Compute Nodes 8 Dell PowerEdge C8220x, 2x 8-core Intel Xeon E5-2650 (2.0 GHz), 64GB memory, 2x 500GB HDDs, 1x Nvidia K20
Type IIb Compute Nodes 1 Dell PowerEdge R720, 2x 6-core Intel Xeon E5-2630L (2.0 GHz), 64GB memory, 500GB HDD, 2x Xeon Phi 5110P
Lustre Storage 1 146TB of usable storage served from a Terascala/Dell storage cluster
ZFS Server 1 SuperMicro Server, 2x 4-core Intel Xeon E5-2609V2 (2.5 GHz), 64GB memory, 2x 120GB SSDs, 36x 4TB HDDs
Cloud Storage 1 1PB of raw storage served from a Huawei UDS system
InfiniBand 17 17x Mellanox IS5024 QDR (40Gb/s) InfiniBand switches, configured in a 1:1 non-blocking Fat Tree topology
Gigabit Ethernet 7 7x Dell 6248 GbE switches, stacked in a Ring topology
10-gigabit Ethernet 1 1x Dell 8132F 10GbE switch

Master Node

The Master/Login Node is the entry point to the Hyades cluster. It is a Dell PowerEdge R820 server that contains four (4x) 8-core Intel Sandy Bridge Xeon E5-4620 processors at 2.2 GHz, 128 GB memory and eight (8x) 1TB hard drives in a RAID-6 array. Primary tasks to be performed on the Master Node are:

OpenHPC - MVAPICH User Group (MUG)

StarCluster with Scientific Linux 6

UPDATE: Please try the updated SL6 image ami-d60185bf to fix SSH key issues.

If you haven't heard of StarCluster from MIT, it is a toolkit for launching clusters of virtual compute nodes within the Amazon Elastic Compute Cloud (EC2). StarCluster provides a simple way to utilize the cloud for research, scientific, high-performance and high-throughput computing.

StarCluster defaults to using Ubuntu Linux (deb) images for its base, but I have prepared a Scientific Linux (rp

Fabcluster: An Amazon EC2 Script for HPC

I've been experimenting with Amazon EC2 for prototyping HPC clusters. Spinning up a cluster of micro instances is very convenient for testing software and systems configurations. In order to facilitate training and discussion, my Python script is now available on GitHub. This piece of code utilizes the Fabric and Boto Python modules.

An Amazon Web Services account is required to use this script.

[Feb 23, 2014] High Performance Computing Facility

Resource intensive jobs (long running, high memory demand, etc) should be run on the compute nodes of the cluster. You cannot execute jobs directly on the compute nodes yourself; you must request the cluster's batch system do it on your behalf. To use the batch system, you will submit a special script which contains instructions to execute your job on the compute nodes. When submitting your job, you specify a partition (group of nodes: for testing vs. production) and a QOS (a classification that determines what kind of resources your job will need). Your job will wait in the queue until it is "next in line", and free processors on the compute nodes become available. Which job is "next in line" is determined by the scheduling policy of the cluster. Once a job is started, it continues until it either completes or reaches its time limit, in which case it is terminated by the system.

The batch system used on maya is called SLURM, which is short for Simple Linux Utility for Resource Management. Users transitioning from the cluster hpc should be aware that SLURM behaves a bit differently than PBS, and the scripting is a little different too. Unfortunately, this means you will need to rewrite your batch scripts. However many of the confusing points of PBS, such as requesting the number of nodes and tasks per node, are simplified in SLURM.
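As a rough illustration of the PBS-to-SLURM transition described above, a minimal SLURM batch script might look like this. It is a sketch only: the partition and QOS names, the time limit, and the application name are hypothetical and differ per site:

```
#!/bin/bash
#SBATCH --job-name=my_job
#SBATCH --partition=batch        # partition (group of nodes) -- site-specific name
#SBATCH --qos=normal             # QOS classification -- site-specific name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8      # the simpler replacement for PBS's "nodes=1:ppn=8"
#SBATCH --time=01:00:00          # wall-clock limit; the job is killed past this
srun ./my_app input.dat
```

It would be submitted with `sbatch my_job.sh` and monitored with `squeue -u $USER`.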




Copyright © 1996-2018 by Dr. Nikolai Bezroukov. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) in the author free time and without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.


Last modified: April 10, 2019