
PDSH -- a parallel remote shell


Introduction

Pdsh is a parallel remote shell client. Basic usage is via the familiar syntax

pdsh [OPTION]... COMMAND

where COMMAND is the remote command to run.

Pdsh is a threaded application that uses a sliding window (or fanout) of threads to conserve resources on the initiating host and allow some connections to time out while all other connections continue.
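For example, a minimal sketch of limiting the fanout (the node names here are hypothetical):

# run uptime on 64 nodes, but keep at most 16 connections open at a time
pdsh -f 16 -w "node[01-64]" uptime

A smaller fanout trades total run time for a lighter load on the initiating host.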

Output from each host is displayed as it is received and is prefixed with the name of the host and a ':' character, unless the -N option is used.

pdsh -av 'grep . /proc/sys/kernel/ostype'
ehype78: Linux
ehype79: Linux
ehype76: Linux
ehype85: Linux
ehype77: Linux
ehype84: Linux
...

When pdsh receives a Ctrl-C (SIGINT), it will list the status of all current threads

[Ctrl-C]
pdsh@hype137: interrupt (one more within 1 sec to abort)
pdsh@hype137:  (^Z within 1 sec to cancel pending threads)
pdsh@hype137: hype0: connecting
pdsh@hype137: hype1: command in progress
pdsh@hype137: hype2: command in progress
pdsh@hype137: hype3: connecting
pdsh@hype137: hype4: connecting
...

Another Ctrl-C within one second will cause pdsh to abort immediately, while a Ctrl-Z within a second will cancel all pending threads, allowing threads that are connecting and "in progress" to complete normally.

[Ctrl-C Ctrl-Z]
pdsh@hype137: interrupt (one more within 1 sec to abort)
pdsh@hype137:  (^Z within 1 sec to cancel pending threads)
pdsh@hype137: hype0: connecting
pdsh@hype137: hype1: command in progress
pdsh@hype137: hype2: command in progress
pdsh@hype137: hype3: connecting
pdsh@hype137: hype4: connecting
pdsh@hype137: hype5: connecting
pdsh@hype137: hype6: connecting
pdsh@hype137: Canceled 8 pending threads.

Examples

At a minimum pdsh requires a list of remote hosts to target and a remote command. The standard options used to set the target hosts in pdsh are -w and -x, which set and exclude hosts respectively.

As noted in the sections above, pdsh accepts lists of hosts in the general form prefix[n-m,l-k,...], where n < m and l < k, etc., as an alternative to explicit lists of hosts. This form should not be confused with regular expression character classes (also denoted by "[]"). For example, foo[19] does not represent an expression matching foo1 or foo9, but rather the degenerate hostlist foo19.

The hostlist syntax is meant only as a convenience on clusters with a "prefixNNN" naming convention, and specification of ranges should not be considered necessary -- the hosts foo1,foo9 could be specified as such, or by the hostlist foo[1,9].

Some examples of usage follow:

Run command on foo01,foo02,...,foo05
pdsh -w foo[01-05] command

Run command on foo7,foo9,foo10
pdsh -w foo[7,9-10] command

Run command on foo0,foo4,foo5
pdsh -w foo[0-5] -x foo[1-3] command

A suffix on the hostname is also supported:

Run command on foo0-eth0,foo1-eth0,foo2-eth0,foo3-eth0
pdsh -w foo[0-3]-eth0 command

Note: Most shells will interpret brackets ('[' and ']') for pattern matching, so it is necessary to enclose ranged lists within quotes:

pdsh -w "foo[01-05]" command

Creating the target host list with -w

The -w option is used to set and/or filter the list of target hosts, and is used as

-w TARGETS...

where TARGETS is a comma-separated list of one or more of the following:

 - a hostname or HOSTLIST expression (see the HOSTLIST expressions page for details on the HOSTLIST format)
 - a filename preceded by '^', naming a file from which to read additional targets
 - a regular expression between '/' characters, used to filter the current list of targets
 - any of the above preceded by '-', which excludes the matching hosts instead of adding them

Additionally, a list of hosts preceded by "user@" specifies a remote username other than the default for those hosts, and a list of hosts preceded by "rcmd_type:" specifies an alternate rcmd connect type for the following hosts. If used together, the rcmd type must be specified first, e.g. ssh:user1@host0 would use ssh to connect to host0 as user1.

Run with user `foo' on hosts h0,h1,h2, and user `bar' on hosts h3,h5:
 pdsh -w foo@h[0-2],bar@h[3,5] ...

Use ssh and user "u1" for hosts h[0-2]:
 pdsh -w ssh:u1@h[0-2] ...
Note: If using the genders module, the rcmd_type for groups of hosts can be encoded in the genders file using the special pdsh_rcmd_type attribute.
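A minimal sketch of what such genders entries might look like (the hostnames and the "compute" attribute are hypothetical; pdsh_rcmd_type is the attribute named above):

# hypothetical /etc/genders entries
host[0-2]   compute,pdsh_rcmd_type=ssh
host[3-5]   compute,pdsh_rcmd_type=mrsh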

Excluding target hosts with -x

The -x option is used to exclude specific hosts from the target node list and is used simply as

-x TARGETS...

This option may be used with other node selection options such as -a and -g (when available). Arguments to -x may also be preceded by the filename ('^') and regex ('/') characters as described above. As with -w, the -x option also operates on HOSTLISTs.

Exclude hosts ending in 0:
 pdsh -a -x /0$/ ...

Exclude hosts in file /tmp/hosts:
 pdsh -a -x ^/tmp/hosts ...

Run on hosts node1-node100, excluding node50:
  pdsh -w node[1-100] -x node50 ...

The WCOLL environment variable

As an alternative to "-w ^file", and for backwards compatibility with DSH, a file containing a list of hosts to target may also be specified in the WCOLL environment variable, in which case pdsh behaves just as if it were called as

pdsh -w ^$WCOLL ...
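For example, a minimal sketch (the file path and hostnames are hypothetical):

# create a hosts file and point WCOLL at it
cat > ~/pdsh-hosts <<EOF
node1
node2
node3
EOF
export WCOLL=~/pdsh-hosts

# pdsh now targets those hosts without any -w option
pdsh uptime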

Other methods to create the target host list

Additionally, there are many other pdsh modules that provide options for targeting remote hosts. These are documented in the Miscellaneous Modules page, but examples include -a to target all hosts in a machines, genders, or dshgroups file, -g to target groups of hosts with genders, dshgroups, and netgroups, and -j to target hosts in SLURM or Torque/PBS jobs.
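A few sketches of these options (they assume the corresponding misc modules are loaded; the group name and job id are hypothetical):

# all hosts known to the machines, genders, or dshgroup module
pdsh -a uptime

# hosts in the group "compute"
pdsh -g compute uptime

# hosts allocated to SLURM job 1234
pdsh -j 1234 uptime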

PDSH Modules

As described earlier, pdsh uses modules to implement and extend its core functionality. There are two basic kinds of modules used in pdsh -- "rcmd" modules, which implement the remote connection method pdsh uses to run commands, and "misc" modules, which implement various other pdsh functionality, such as node list generation and filtering.

The current list of loaded modules is printed with the pdsh -V output

   pdsh -V
   pdsh-2.23 (+debug)
   rcmd modules: ssh,rsh,mrsh,exec (default: mrsh)
   misc modules: slurm,dshgroup,nodeupdown (*conflicting: genders)
   [* To force-load a conflicting module, use the -M <name> option]

Note that some modules may be listed as conflicting with others. This is because these modules may provide additional command line options to pdsh, and if those options conflict, only one of the conflicting modules can be active at a time.

Detailed information about available modules may be viewed via the -L option:

> pdsh -L
8 modules loaded:

Module: misc/dshgroup
Author: Mark Grondona <mgrondona@llnl.gov>
Descr:  Read list of targets from dsh-style "group" files
Active: yes
Options:
-g groupname      target hosts in dsh group "groupname"
-X groupname      exclude hosts in dsh group "groupname"

Module: rcmd/exec
Author: Mark Grondona <mgrondona@llnl.gov>
Descr:  arbitrary command rcmd connect method
Active: yes

Module: misc/genders
Author: Jim Garlick <garlick@llnl.gov>
Descr:  target nodes using libgenders and genders attributes
Active: no
Options:
-g query,...      target nodes using genders query
-X query,...      exclude nodes using genders query
-F file           use alternate genders file `file'
-i                request alternate or canonical hostnames if applicable
-a                target all nodes except those with "pdsh_all_skip" attribute
-A                target all nodes listed in genders database

...

This output shows the module name, author, a description, any options provided by the module, and whether the module is currently "active" or not.

The -M option may be used to force-load a list of modules before all others, ensuring that they will be active if there is a module conflict. In this way, for example, the genders module could be made active and the dshgroup module deactivated for one run of pdsh. This option may also be set via the PDSH_MISC_MODULES environment variable.
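For instance, a sketch assuming the genders module is installed but conflicts with dshgroup, as in the -V output above:

# force-load genders so that its -g/-a options are the ones in effect
pdsh -M genders -g compute uname -r

# the same thing via the environment
export PDSH_MISC_MODULES=genders
pdsh -g compute uname -r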

Other Standard PDSH options

-h
Output a usage message and quit. A list of available rcmd modules will also be printed at the end of the usage message. The available options for pdsh may change based on which modules are loaded or passed to the -M option.
-S
Return the largest of the remote command return values
-b
Batch mode. Disables the ctrl-C status feature so that a single ctrl-c kills pdsh.
-l USER
Run remote commands as user USER. The remote username may also be specified using the USER@TARGETS syntax with the -w option
-t SECONDS
Set a connect timeout in seconds. Default is 10.
-u SECONDS
Set a remote command timeout. The default is unlimited.
-f FANOUT
Set the maximum number of simultaneous remote commands to FANOUT. The default is 32, but can be overridden at build time.
-N
Disable the hostname: prefix on lines of pdsh output.
-V
Output pdsh version information, along with a list of currently loaded modules, and exit.
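As an illustration, a sketch combining several of these options (the hostnames and the check command are hypothetical):

# check 100 nodes, 16 at a time, with a 5 second connect timeout and a
# 30 second command timeout; -S makes pdsh exit with the largest remote
# return value, so a non-zero status means at least one node failed the check
pdsh -S -f 16 -t 5 -u 30 -w "node[001-100]" 'test -d /scratch'
echo "exit status: $?"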


Old News ;-)

[Nov 08, 2018] Parallel command execution with PDSH

Notable quotes:
"... (did I mention that Rittman Mead laptops are Macs, so I can do all of this straight from my work machine... :-) ) ..."
"... open an editor and paste the following lines into it and save the file as /foo/bar ..."
Nov 08, 2018 | www.rittmanmead.com

In this series of blog posts I'm taking a look at a few very useful tools that can make your life as the sysadmin of a cluster of Linux machines easier. This may be a Hadoop cluster, or just a plain simple set of 'normal' machines on which you want to run the same commands and monitoring.

Previously we looked at using SSH keys for intra-machine authorisation, which is a prerequisite for what we'll look at here -- executing the same command across multiple machines using PDSH. In the next post of the series we'll see how we can monitor OS metrics across a cluster with colmux.

PDSH is a very smart little tool that enables you to issue the same command on multiple hosts at once, and see the output. You need to have set up ssh key authentication from the client to host on all of them, so if you followed the steps in the first section of this article you'll be good to go.

The syntax for using it is nice and simple: pdsh -w <target hosts> <command to run>.

For example, running against a small cluster of four machines that I have:

robin@RNMMBP $ pdsh -w root@rnmcluster02-node0[1-4] date

rnmcluster02-node01: Fri Nov 28 17:26:17 GMT 2014  
rnmcluster02-node02: Fri Nov 28 17:26:18 GMT 2014  
rnmcluster02-node03: Fri Nov 28 17:26:18 GMT 2014  
rnmcluster02-node04: Fri Nov 28 17:26:18 GMT 2014

PDSH can be installed on the Mac under Homebrew (did I mention that Rittman Mead laptops are Macs, so I can do all of this straight from my work machine... :-) )

brew install pdsh

And if you want to run it on Linux from the EPEL yum repository (RHEL-compatible, but packages for other distros are available):

yum install pdsh

You can run it from a cluster node, or from your client machine (assuming your client machine is mac/linux).

Example - install and start collectl on all nodes

I started looking into pdsh when it came to setting up a cluster of machines from scratch. One of the must-have tools I like to have on any machine that I work with is the excellent collectl. This is an OS resource monitoring tool that I initially learnt of through Kevin Closson and Greg Rahn, and provides the kind of information you'd get from top etc – and then some! It can run interactively, log to disk, run as a service – and it also happens to integrate very nicely with graphite, making it a no-brainer choice for any server.

So, instead of logging into each box individually I could instead run this:

pdsh -w root@rnmcluster02-node0[1-4] yum install -y collectl  
pdsh -w root@rnmcluster02-node0[1-4] service collectl start  
pdsh -w root@rnmcluster02-node0[1-4] chkconfig collectl on

Yes, I know there are tools out there like puppet and chef that are designed for doing this kind of templated build of multiple servers, but the point I want to illustrate here is that pdsh enables you to do ad-hoc changes to a set of servers at once. Sure, once I have my cluster built and want to create an image/template for future builds, then it would be daft if I were building the whole lot through pdsh-distributed yum commands.

Example - setting up the date/timezone/NTPD

Often the accuracy of the clock on each server in a cluster is crucial, and we can easily do this with pdsh:

Install packages

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] yum install -y ntp ntpdate

Set the timezone:

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] ln -sf /usr/share/zoneinfo/Europe/London /etc/localtime

Force a time refresh:

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] ntpdate pool.ntp.org  
rnmcluster02-node03: 30 Nov 20:46:22 ntpdate[27610]: step time server 176.58.109.199 offset -2.928585 sec  
rnmcluster02-node02: 30 Nov 20:46:22 ntpdate[28527]: step time server 176.58.109.199 offset -2.946021 sec  
rnmcluster02-node04: 30 Nov 20:46:22 ntpdate[27615]: step time server 129.250.35.250 offset -2.915713 sec  
rnmcluster02-node01: 30 Nov 20:46:25 ntpdate[29316]: 178.79.160.57 rate limit response from server.  
rnmcluster02-node01: 30 Nov 20:46:22 ntpdate[29316]: step time server 176.58.109.199 offset -2.925016 sec

Set NTPD to start automatically at boot:

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] chkconfig ntpd on

Start NTPD:

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] service ntpd start
Example - using a HEREDOC (here-document) and sending quotation marks in a command with PDSH

Here documents (heredocs) are a nice way to embed multi-line content in a single command, enabling the scripting of a file creation rather than the clumsy instruction to "open an editor and paste the following lines into it and save the file as /foo/bar".

Fortunately heredocs work just fine with pdsh, so long as you remember to enclose the whole command in quotation marks. And speaking of which, if you need to include quotation marks in your actual command, you need to escape them with a backslash. Here's an example of both, setting up the configuration file for my ever-favourite gnu screen on all the nodes of the cluster:

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] "cat > ~/.screenrc <<EOF  
hardstatus alwayslastline \"%{= RY}%H %{kG}%{G} Screen(s): %{c}%w %=%{kG}%c  %D, %M %d %Y  LD:%l\"  
startup_message off  
msgwait 1  
defscrollback 100000  
nethack on  
EOF  
"

Now when I log in to each individual node and run screen, I get a nice toolbar at the bottom.

Combining commands

To combine commands together that you send to each host you can use the standard bash operator semicolon ;

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] "date;sleep 5;date"  
rnmcluster02-node01: Sun Nov 30 20:57:06 GMT 2014  
rnmcluster02-node03: Sun Nov 30 20:57:06 GMT 2014  
rnmcluster02-node04: Sun Nov 30 20:57:06 GMT 2014  
rnmcluster02-node02: Sun Nov 30 20:57:06 GMT 2014  
rnmcluster02-node01: Sun Nov 30 20:57:11 GMT 2014  
rnmcluster02-node03: Sun Nov 30 20:57:11 GMT 2014  
rnmcluster02-node04: Sun Nov 30 20:57:11 GMT 2014  
rnmcluster02-node02: Sun Nov 30 20:57:11 GMT 2014

Note the use of the quotation marks to enclose the entire command string. Without them the bash interpreter will take the ; as the delimiter of the local commands, and try to run the subsequent commands locally:

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] date;sleep 5;date  
rnmcluster02-node03: Sun Nov 30 20:57:53 GMT 2014  
rnmcluster02-node04: Sun Nov 30 20:57:53 GMT 2014  
rnmcluster02-node02: Sun Nov 30 20:57:53 GMT 2014  
rnmcluster02-node01: Sun Nov 30 20:57:53 GMT 2014  
Sun 30 Nov 2014 20:58:00 GMT

You can also use && and || to run subsequent commands conditionally if the previous one succeeds or fails respectively:

robin@RNMMBP $ pdsh -w root@rnmcluster02-node[01-4] "chkconfig collectl on && service collectl start"

rnmcluster02-node03: Starting collectl: [  OK  ]  
rnmcluster02-node02: Starting collectl: [  OK  ]  
rnmcluster02-node04: Starting collectl: [  OK  ]  
rnmcluster02-node01: Starting collectl: [  OK  ]
Piping and file redirects

Similar to combining commands above, you can pipe the output of commands, and you need to use quotation marks to enclose the whole command string.

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] "chkconfig|grep collectl"  
rnmcluster02-node03: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off  
rnmcluster02-node01: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off  
rnmcluster02-node04: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off  
rnmcluster02-node02: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off

However, you can pipe the output from pdsh to a local process if you want:

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] chkconfig|grep collectl  
rnmcluster02-node02: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off  
rnmcluster02-node04: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off  
rnmcluster02-node03: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off  
rnmcluster02-node01: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off

The difference is that you'll be shifting the whole of the pipe across the network in order to process it locally, so if you're just grepping etc this doesn't make any sense. For use of utilities held locally and not on the remote server though, this might make sense.

File redirects work the same way – within quotation marks and the redirect will be to a file on the remote server, outside of them it'll be local:

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] "chkconfig>/tmp/pdsh.out"  
robin@RNMMBP ~ $ ls -l /tmp/pdsh.out  
ls: /tmp/pdsh.out: No such file or directory

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] chkconfig>/tmp/pdsh.out  
robin@RNMMBP ~ $ ls -l /tmp/pdsh.out  
-rw-r--r--  1 robin  wheel  7608 30 Nov 19:23 /tmp/pdsh.out
Cancelling PDSH operations

As you can see from above, the precise syntax of pdsh calls can be hugely important. If you run a command and it appears 'stuck', or if you have that heartstopping realisation that the shutdown -h now you meant to run locally you ran across the cluster, you can press Ctrl-C once to see the status of your commands:

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] sleep 30  
^Cpdsh@RNMMBP: interrupt (one more within 1 sec to abort)
pdsh@RNMMBP:  (^Z within 1 sec to cancel pending threads)  
pdsh@RNMMBP: rnmcluster02-node01: command in progress  
pdsh@RNMMBP: rnmcluster02-node02: command in progress  
pdsh@RNMMBP: rnmcluster02-node03: command in progress  
pdsh@RNMMBP: rnmcluster02-node04: command in progress

and press it twice (or within a second of the first) to cancel:

robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] sleep 30  
^Cpdsh@RNMMBP: interrupt (one more within 1 sec to abort)
pdsh@RNMMBP:  (^Z within 1 sec to cancel pending threads)  
pdsh@RNMMBP: rnmcluster02-node01: command in progress  
pdsh@RNMMBP: rnmcluster02-node02: command in progress  
pdsh@RNMMBP: rnmcluster02-node03: command in progress  
pdsh@RNMMBP: rnmcluster02-node04: command in progress  
^Csending SIGTERM to ssh rnmcluster02-node01
sending signal 15 to rnmcluster02-node01 [ssh] pid 26534  
sending SIGTERM to ssh rnmcluster02-node02  
sending signal 15 to rnmcluster02-node02 [ssh] pid 26535  
sending SIGTERM to ssh rnmcluster02-node03  
sending signal 15 to rnmcluster02-node03 [ssh] pid 26533  
sending SIGTERM to ssh rnmcluster02-node04  
sending signal 15 to rnmcluster02-node04 [ssh] pid 26532  
pdsh@RNMMBP: interrupt, aborting.

If you've got threads yet to run on the remote hosts, but want to keep running whatever has already started, you can use Ctrl-C, Ctrl-Z:

robin@RNMMBP ~ $ pdsh -f 2 -w root@rnmcluster02-node[01-4] "sleep 5;date"  
^Cpdsh@RNMMBP: interrupt (one more within 1 sec to abort)
pdsh@RNMMBP:  (^Z within 1 sec to cancel pending threads)  
pdsh@RNMMBP: rnmcluster02-node01: command in progress  
pdsh@RNMMBP: rnmcluster02-node02: command in progress  
^Zpdsh@RNMMBP: Canceled 2 pending threads.
rnmcluster02-node01: Mon Dec  1 21:46:35 GMT 2014  
rnmcluster02-node02: Mon Dec  1 21:46:35 GMT 2014

NB the above example illustrates the use of the -f argument to limit how many threads are run against remote hosts at once. We can see the command is left running on the first two nodes and returns the date, whilst the Ctrl-C - Ctrl-Z stops it from being executed on the remaining nodes.

PDSH_SSH_ARGS_APPEND

By default, when you ssh to new host for the first time you'll be prompted to validate the remote host's SSH key fingerprint.

The authenticity of host 'rnmcluster02-node02 (172.28.128.9)' can't be established.  
RSA key fingerprint is 00:c0:75:a8:bc:30:cb:8e:b3:8e:e4:29:42:6a:27:1c.  
Are you sure you want to continue connecting (yes/no)?

This is one of those prompts that the majority of us just hit Enter at and ignore; if that includes you then you will want to make sure that your PDSH call doesn't fall in a heap because you're connecting to a bunch of new servers all at once. PDSH is not an interactive tool, so if it requires input from the hosts it's connecting to it'll just fail. To avoid this SSH prompt, you can set up the environment variable PDSH_SSH_ARGS_APPEND as follows:

export PDSH_SSH_ARGS_APPEND="-q -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"

The -q makes failures less verbose, and the -o passes in a couple of options, StrictHostKeyChecking to disable the above check, and UserKnownHostsFile to stop SSH keeping a list of host IP/hostnames and corresponding SSH fingerprints (by pointing it at /dev/null ). You'll want this if you're working with VMs that are sharing a pool of IPs and get re-used, otherwise you get this scary failure:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@  
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!  
Someone could be eavesdropping on you right now (man-in-the-middle attack)!  
It is also possible that a host key has just been changed.  
The fingerprint for the RSA key sent by the remote host is  
00:c0:75:a8:bc:30:cb:8e:b3:8e:e4:29:42:6a:27:1c.  
Please contact your system administrator.

For both of these above options, make sure you're aware of the security implications that you're opening yourself up to. For a sandbox environment I just ignore them; for anything where security is of importance, make sure you are aware of exactly which server you are connecting to by SSH, and protect yourself from MitM attacks.

PDSH Reference

You can find out more about PDSH at https://code.google.com/p/pdsh/wiki/UsingPDSH

Summary

When working with multiple Linux machines I would first and foremost make sure SSH keys are set up in order to ease management through password-less logins.

After SSH keys, I would recommend pdsh for parallel execution of the same SSH command across the cluster. It's a big time saver particularly when initially setting up the cluster given the installation and configuration changes that are inevitably needed.

In the next article of this series we'll see how the tool colmux is a powerful way to monitor OS metrics across a cluster.

So now your turn – what particular tools or tips do you have for working with a cluster of Linux machines? Leave your answers in the comments below, or tweet them to me at @rmoff .

IBM Install and setup pdsh on IBM Platform Cluster Manager

Technote (FAQ)

Question

How to enable pdsh in IBM Platform Cluster Manager 3.2

Cause

pdsh is a parallel shell cluster tool which was included in earlier versions of IBM Platform Cluster Manager, but is not distributed with version 3.2.

Answer

Follow the steps below to install pdsh in your IBM Platform PCM/HPC 3.2 cluster. If you encounter a problem with the steps below, you can open a service request with IBM Support. For pdsh usage issues, please refer to the pdsh man page or online documentation.

To install and setup pdsh, follow these steps:

0. Prerequisites

  1. Setup EPEL yum repository on your RHEL 6.2 Installer node

    1.1 Grab a URL for epel-release package

    You should confirm that your PCM installer is running RHEL 6.2 by checking the redhat-release file.

    # cat /etc/redhat-release

Red Hat Enterprise Linux Server release 6.2 (Santiago)

Once confirmed, get the URL from this page:

http://fedora-epel.mirror.lstn.net/6/i386/repoview/epel-release.html

1.2 Download the epel-release package on the PCM installer node

# wget http://fedora-epel.mirror.lstn.net/6/i386/epel-release-6-8.noarch.rpm

1.3 Install the epel-release package

# rpm -ivh epel-release-6-8.noarch.rpm

1.4 Enable the base EPEL repository

Modify the /etc/yum.repos.d/epel.repo file and make sure to set enabled=1 for the "epel" repository. Do not enable the "epel-debuginfo" and "epel-source" repositories.

1.5 Confirm that the EPEL repository is available via yum

# yum repolist

  2. Install PDSH

    2.1 Use yum to install pdsh package

    # yum -y install pdsh

    # yum install pdsh-rcmd-rsh.x86_64

    2.2 Confirm that pdsh is installed

    # which pdsh

  3. Configure PDSH

    3.1 Create machines file for pdsh

    # mkdir /etc/pdsh

    # touch /etc/pdsh/machines

    # genconfig hostspdsh > /etc/pdsh/machines

    3.2 Configure user environment for PDSH

    Open /etc/bashrc file and add following lines at the end

    # setup pdsh for cluster users
    export PDSH_RCMD_TYPE='ssh'
    export WCOLL='/etc/pdsh/machines'

  4. Use PDSH

    Now, pdsh is set up for cluster users, similar to previous versions of IBM Platform HPC. To use pdsh, simply run the 'pdsh' command:

    [user@master ~]$ pdsh

pdsh> uptime
compute000-eth0: 14:45:21 up 24 min, 1 user, load average: 0.00, 0.08, 0.27

Installing pdsh to issue commands to a group of nodes in parallel in CentOS

Aug 4, 2013 | linuxtoolkit.blogspot.in

4. Configure user environment for PDSH

# vim /etc/profile.d
Edit the following:
# setup pdsh for cluster users
export PDSH_RCMD_TYPE='ssh'
export WCOLL='/etc/pdsh/machines'

5. Put the host name of the Compute Nodes
# vim /etc/pdsh/machines/

node1
node2
node3
.......
.......

6. Make sure the nodes have their SSH key exchange set up. For more information, see Auto SSH Login without Password; a minimal sketch of the key exchange follows this list.
7. Do Install Step 1 to Step 3 on ALL the client nodes.
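A minimal sketch of such a key exchange from the head node (hostnames are hypothetical; this is the standard ssh-keygen/ssh-copy-id flow rather than a recipe from the article above):

# generate a key pair on the head node (accept the defaults)
ssh-keygen -t rsa

# copy the public key to each compute node
for h in node1 node2 node3; do
    ssh-copy-id root@$h
done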


B. USING PDSH Run the command ( pdsh [options]... command )

1. To target all the nodes listed in /etc/pdsh/machines (assuming the RPM file has already been transferred to the nodes). Do note that a parallel copy utility, pdcp, comes with the pdsh utilities -- see the sketch after this list.

# pdsh -a "rpm -Uvh /root/htop-1.0.2-1.el6.rf.x86_64.rpm"

2. To target specific nodes, you may want to consider using the -w option
# pdsh -w host1,host2 "rpm -Uvh /root/htop-1.0.2-1.el6.rf.x86_64.rpm"
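A sketch of pushing the package out first with pdcp, the parallel copy utility bundled with pdsh (the hostnames and RPM path follow the example above and are assumptions):

# copy the RPM to /root on both hosts, then install it in parallel
pdcp -w host1,host2 /root/htop-1.0.2-1.el6.rf.x86_64.rpm /root/
pdsh -w host1,host2 "rpm -Uvh /root/htop-1.0.2-1.el6.rf.x86_64.rpm"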
References
  1. Install and setup pdsh on IBM Platform Cluster Manager
  2. PDSH Project Site
  3. PDSH Download Site (Sourceforge)

[Sep 02, 2014] Parallel Shell

cs.boisestate.edu
The cluster comes with a simple parallel shell named pdsh. The pdsh shell is handy for running commands across the cluster. See the man page, which describes the capabilities of pdsh in detail. One of the useful features is the capability of specifying all or a subset of the cluster.
For example:

pdsh -a <command> targets the <command> to all nodes of the cluster, including the master.

pdsh -a -x node00 <command> targets the <command> to all nodes of the cluster except the master.

pdsh -w node[01-08] <command> targets the <command> to the 8 nodes of the cluster named node01, node02, ..., node08.

Another utility that is useful for formatting the output of pdsh is dshbak. Here we will show some handy uses of pdsh.
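For example, a minimal sketch (node names follow the convention above): dshbak groups pdsh output host by host, and its -c option collapses hosts that returned identical output, which makes odd nodes easy to spot:

# group output per host, collapsing identical results
pdsh -w "node[01-08]" uname -r | dshbak -c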

Parallel remote "shelling" via pdsh by Jonathan Rad

May 24, 2012 | radfest.wordpress.com
Ever have a multitude of hosts you need to run a command (or series of commands) on? We all know that for-loop outputs are super fun to parse through when you need to do this, but why not do it better with a tool like pdsh?

A respected ex-colleague of mine made a great suggestion to start using pdsh instead of for loops and other creative makeshift parallel shell processing. The majority of my notes in this blog post are from him. If he'll allow me to, I'll give him a shout out and cite his Google+ profile for anyone interested.

Pdsh is a parallel remote shell client available from sourceforge. If you are using rpmforge CentOS repos you can pick it up there as well, but it may not be the most bleeding edge package available.

Pdsh lives on SourceForge, but the code is on Google Code. Some quick tips on how to get started using pdsh:
  1. Set up your environment: export PDSH_SSH_ARGS_APPEND="-o ConnectTimeout=5 -o CheckHostIP=no -o StrictHostKeyChecking=no" (Add this to your .bashrc to save time.)
  2. Create your target list in a text file, one hostname per line (in the examples below, this file is called "host-list").
  3. It would probably be a good idea to use "tee" to capture output.
    • "man tee" if you need more information on tee.
  4. Run a test first to make sure your pdsh command works the way you think it will before potentially doing anything destructive:
    • sudo pdsh -R ssh -w ^host-list "hostname" | tee output-test-1
  5. Change your test run to do what you really want it to after a successful test. e.g.:
    • sudo pdsh -R ssh -w ^host-list "/usr/bin/mycmd args" | tee output-mycmd-run-1
Obviously if you have Puppet Enterprise fully integrated within your environment, you can take advantage of powerful tools such as mcollective. If you do not, pdsh is a great alternative.

Shell Games (Linux Magazine) by Jeff Layton

Linux PRO Issue 166/2014

Building and Installing pdsh

Building and installing pdsh is really simple if you've built code using GNU's autoconfigure before. The steps are quite easy:

./configure --with-ssh --without-rsh
make
make install

This puts the binaries into /usr/local/, which is fine for testing purposes. For production work, I would put it in /opt or something like that – just be sure it's in your path.

You might notice that I used the --without-rsh option in the configure command. By default, pdsh uses rsh, which is not really secure, so I chose to exclude it from the configuration. In the output in Listing 1, you can see the pdsh rcmd modules (rcmd is the remote command used by pdsh). Notice that the "available rcmd modules" at the end of the output lists only ssh and exec. If I didn't exclude rsh, it would be listed here, too, and it would be the default. To override rsh and make ssh the default, you just add the following line to your .bashrc file:

Listing 1

rcmd Modules

[laytonjb@home4 ~]$ pdsh -v
pdsh: invalid option -- 'v'
Usage: pdsh [-options] command ...
-S                return largest of remote command return values
-h                output usage menu and quit
-V                output version information and quit
-q                list the option settings and quit
-b                disable ^C status feature (batch mode)
-d                enable extra debug information from ^C status
-l user           execute remote commands as user
-t seconds        set connect timeout (default is 10 sec)
-u seconds        set command timeout (no default)
-f n              use fanout of n nodes
-w host,host,...  set target node list on command line
-x host,host,...  set node exclusion list on command line
-R name           set rcmd module to name
-M name,...       select one or more misc modules to initialize first
-N                disable hostname: labels on output lines
-L                list info on all loaded modules and exit
available rcmd modules: ssh,exec (default: ssh)
export PDSH_RCMD_TYPE=ssh

Be sure to "source" your .bashrc file (i.e., source .bashrc) to set the environment variable. You can also log out and log back in. If, for some reason, you see the following when you try running pdsh,

$ pdsh -w 192.168.1.250 ls -s
pdsh@home4: 192.168.1.250: rcmd: socket: Permission denied

then you have built it with rsh. You can either rebuild pdsh without rsh, or you can use the environment variable in your .bashrc file, or you can do both.

First pdsh Commands

To begin, I'll try to get the kernel version of a node by using its IP address:

$ pdsh -w 192.168.1.250 uname -r
192.168.1.250: 2.6.32-431.11.2.el6.x86_64

The -w option means I am specifying the node(s) that will run the command. In this case, I specified the IP address of the node (192.168.1.250). After the list of nodes, I add the command I want to run, which is uname -r in this case. Notice that pdsh starts the output line by identifying the node name.

If you need to mix rcmd modules in a single command, you can specify which module to use in the command line,

$ pdsh -w ssh:laytonjb@192.168.1.250 uname -r
192.168.1.250: 2.6.32-431.11.2.el6.x86_64

by putting the rcmd module before the node name. In this case, I used ssh and typical ssh syntax.

A very common way of using pdsh is to set the environment variable WCOLL to point to the file that contains the list of hosts you want to use in the pdsh command. For example, I created a subdirectory PDSH where I create a file hosts that lists the hosts I want to use:

[laytonjb@home4 ~]$ mkdir PDSH
[laytonjb@home4 ~]$ cd PDSH
[laytonjb@home4 PDSH]$ vi hosts
[laytonjb@home4 PDSH]$ more hosts
192.168.1.4
192.168.1.250

I'm only using two nodes: 192.168.1.4 and 192.168.1.250. The first is my test system (like a cluster head node), and the second is my test compute node. You can put hosts in the file as you would on the command line separated by commas. Be sure not to put a blank line at the end of the file because pdsh will try to connect to it. You can put the environment variable WCOLL in your .bashrc file:

export WCOLL=/home/laytonjb/PDSH/hosts

As before, you can source your .bashrc file, or you can log out and log back in.

Specifying Hosts

I won't list all the several other ways to specify a list of nodes, because the pdsh website [9] discusses virtually all of them; however, some of the methods are pretty handy. The simplest way is to specify the nodes on the command line is to use the -w option:

$ pdsh -w 192.168.1.4,192.168.1.250 uname -r
192.168.1.4: 2.6.32-431.17.1.el6.x86_64
192.168.1.250: 2.6.32-431.11.2.el6.x86_64

In this case, I specified the node names separated by commas. You can also use a range of hosts as follows:

pdsh -w host[1-11]
pdsh -w host[1-4,8-11]

In the first case, pdsh expands the host range to host1, host2, host3, …, host11. In the second case, it expands the hosts similarly (host1, host2, host3, host4, host8, host9, host10, host11). You can go to the pdsh website for more information on hostlist expressions [10].

Another option is to have pdsh read the hosts from a file other than the one to which WCOLL points. The command shown in Listing 2 tells pdsh to take the hostnames from the file /tmp/hosts, which is listed after -w ^ (with no space between the "^" and the filename). You can also use several host files,

Listing 2

Read Hosts from File

[laytonjb@home4 ~]$ pdsh -w ^/tmp/hosts uptime
192.168.1.4:  15:51:39 up  8:35, 12 users,  load average: 0.64, 0.38, 0.20
192.168.1.250:  15:47:53 up 2 min,  0 users,  load average: 0.10, 0.10, 0.04
[laytonjb@home4 ~]$ more /tmp/hosts
192.168.1.4
192.168.1.250
$ more /tmp/hosts
192.168.1.4
$ more /tmp/hosts2
192.168.1.250
$ pdsh -w ^/tmp/hosts,^/tmp/hosts2 uname -r
192.168.1.4: 2.6.32-431.17.1.el6.x86_64
192.168.1.250: 2.6.32-431.11.2.el6.x86_64

or you can exclude hosts from a list:

$ pdsh -w -192.168.1.250 uname -r
192.168.1.4: 2.6.32-431.17.1.el6.x86_64

The option -w -192.168.1.250 excluded node 192.168.1.250 from the list and only output the information for 192.168.1.4. You can also exclude nodes using a node file:

$ pdsh -w -^/tmp/hosts2 uname -r
192.168.1.4: 2.6.32-431.17.1.el6.x86_64

In this case, /tmp/hosts2 contains 192.168.1.250, which isn't included in the output. Using the -x option with a hostname,

$ pdsh -x 192.168.1.4 uname -r
192.168.1.250: 2.6.32-431.11.2.el6.x86_64
$ pdsh -x ^/tmp/hosts uname -r
192.168.1.250: 2.6.32-431.11.2.el6.x86_64
$ more /tmp/hosts
192.168.1.4

or a list of hostnames to be excluded from the command to run also works.

More Useful pdsh Commands

Now I can shift into second gear and try some fancier pdsh tricks. First, I want to run a more complicated command on all of the nodes (Listing 3). Notice that I put the entire command in quotes. This means the entire command is run on each node, including the first (cat /proc/cpuinfo) and second (grep bogomips) parts.

Listing 3

Quotation Marks 1

[laytonjb@home4 ~]$ pdsh 'cat /proc/cpuinfo | grep bogomips'
192.168.1.4: bogomips   : 6997.39
192.168.1.4: bogomips   : 6997.39
192.168.1.4: bogomips   : 6997.39
192.168.1.4: bogomips   : 6997.39
192.168.1.4: bogomips   : 6997.39
192.168.1.4: bogomips   : 6997.39
192.168.1.4: bogomips   : 6997.39
192.168.1.4: bogomips   : 6997.39
192.168.1.250: bogomips : 5624.23
192.168.1.250: bogomips : 5624.23
192.168.1.250: bogomips : 5624.23
192.168.1.250: bogomips : 5624.23

In the output, the node precedes the command results, so you can tell what output is associated with which node. Notice that the BogoMips values are different on the two nodes, which is perfectly understandable because the systems are different. The first node has eight cores (four cores and four Hyper-Thread cores), and the second node has four cores.

You can use this command across a homogeneous cluster to make sure all the nodes are reporting back the same BogoMips value. If the cluster is truly homogeneous, this value should be the same. If it's not, then I would take the offending node out of production and check it.

A slightly different command shown in Listing 4 runs the first part contained in quotes, cat /proc/cpuinfo, on each node and the second part of the command, grep bogomips, on the node on which you issue the pdsh command.

Listing 4

Quotation Marks 2

[laytonjb@home4 ~]$ pdsh 'cat /proc/cpuinfo' | grep bogomips
192.168.1.4: bogomips   : 6997.39
192.168.1.4: bogomips   : 6997.39
192.168.1.4: bogomips   : 6997.39
192.168.1.4: bogomips   : 6997.39
192.168.1.4: bogomips   : 6997.39
192.168.1.4: bogomips   : 6997.39
192.168.1.4: bogomips   : 6997.39
192.168.1.4: bogomips   : 6997.39
192.168.1.250: bogomips : 5624.23
192.168.1.250: bogomips : 5624.23
192.168.1.250: bogomips : 5624.23
192.168.1.250: bogomips : 5624.23

The point here is that you need to be careful on the command line. In this example, the differences are trivial, but other commands could have differences that might be difficult to notice.

One very important thing to note is that pdsh does not guarantee a return of output in any particular order. If you have a list of 20 nodes, the output does not necessarily start with node 1 and increase incrementally to node 20. For example, in Listing 5, I run vmstat on each node and get three lines of output from each node.

Listing 5

Order of Output

laytonjb@home4 ~]$ pdsh vmstat 1 2
192.168.1.4: procs  ------------memory------------   ---swap-- -----io---- --system--  -----cpu-----
192.168.1.4:  r  b     swpd   free    buff   cache     si   so    bi    bo   in    cs  us sy id wa st
192.168.1.4:  1  0        0 30198704  286340  751652    0    0     2     3   48    66   1  0 98  0  0
192.168.1.250: procs -----------memory------------   ---swap-- -----io---- --system-- ------cpu------
192.168.1.250:  r  b   swpd   free    buff   cache     si   so    bi    bo   in    cs us sy  id wa st
192.168.1.250:  0  0      0 7248836   25632  79268      0    0    14     2   22    21  0  0  99  0  0
192.168.1.4:    1  0      0 30198100  286340 751668     0    0     0     0  412   735  1  0  99  0  0
192.168.1.250:  0  0      0 7249076   25632  79284      0    0     0     0   90    39  0  0 100  0  0

At first, it looks like the results from the first node are output first, but then the second node creeps in with its results. You need to expect that the output from a command that returns more than one line per node could be mixed. My best advice is to grab the output, put it into an editor, and rearrange the lines, remembering that the lines for any specific node are in the correct order.
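One way to avoid the manual rearranging is dshbak, the small formatter that ships with pdsh: it buffers the output and prints it grouped host by host. A sketch based on Listing 5:

# group the multi-line vmstat output by host instead of interleaving it
pdsh vmstat 1 2 | dshbak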

... ... ...

The Author

Jeff Layton has been in the HPC business for almost 25 years (starting when he was 4 years old). He can be found lounging around at a nearby Frys enjoying the coffee and waiting for sales.


Recommended Links



IBM Install and setup pdsh on IBM Platform Cluster Manager - United States

pdsh(1) - Linux man page

Ubuntu Manpage pdsh - issue commands to groups of hosts in parallel

Shell Games - Page 1.6 (Linux Magazine) by Jeff Layton

  1. DSH: http://www.netfort.gr.jp/~dancer/software/dsh.html.en
  2. PyDSH: http://pydsh.sourceforge.net/
  3. PPSS: http://code.google.com/p/ppss/
  4. PSSH: http://code.google.com/p/parallel-ssh/
  5. pdsh: http://sourceforge.net/projects/pdsh
  6. PuSSH: http://pussh.sourceforge.net/
  7. sshpt: http://code.google.com/p/sshpt/
  8. mqsh: https://github.com/aia/mqsh
  9. Using pdsh: https://code.google.com/p/pdsh/wiki/UsingPDSH
  10. Hostlist expressions: https://code.google.com/p/pdsh/wiki/HostListExpressions
  11. Processor and memory metrics: http://www.admin-magazine.com/HPC/Articles/Processor-and-Memory-Metrics
  12. Process, network, and disk-metrics: http://www.admin-magazine.com/HPC/Articles/Process-Network-and-Disk-Metrics
  13. pdsh modules: https://code.google.com/p/pdsh/wiki/MiscModules

Parallel Shell http://cs.boisestate.edu

pdsh - Parallel Distributed Shell - Google Project Hosting

Linux at Livermore Pdsh

Parallel Distributed Shell SourceForge.net

Novell openSUSE 10.3 pdsh

Linux Toolkits Installing pdsh to issue commands to a group of nodes in parallel in CentOS

Using the Parallel Shell Utility (pdsh) - Building Hadoop Clusters


