May the source be with you, but remember the KISS principle ;-)

Software Distribution


There are many tools for delivering software to multiple servers, some good, some not so good. Several common tools can be used (sometimes in combination, such as ssh + rsync):

  1. ssh (or rsh) and scp (or rcp).  In this case you operate either sequentially or, if the number of servers is over 100, in parallel.
  2. rsync, for synchronizing directories, which is essentially a transfer of newer files from the "mothership" to the satellites.
  3. ftp, telnet and other less commonly used protocols (netcat, etc.).
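The ssh/scp case can be sketched as a simple serial push loop. This is a minimal sketch, not a hardened tool: host names, the config file, and the service name are illustrative, and SSH/SCP default to a dry-run echo so the sketch is safe to execute as-is.

```shell
# Serial push: copy a config file to each host, then restart a service.
# For a real run override the dry-run defaults, e.g.: SSH=ssh SCP=scp sh push.sh
SSH=${SSH:-echo ssh}    # dry run by default: prints what would be executed
SCP=${SCP:-echo scp}
hosts='web1 web2 db1'   # in practice: hosts=$(cat hosts.txt)
for host in $hosts; do
    echo "== $host =="
    $SCP ./app.conf "root@$host:/etc/app.conf"
    $SSH "root@$host" 'service app restart'
done
```

Sequential loops like this are fine for a handful of machines; the parallel tools discussed below become worthwhile as the host count grows.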

There are several forms of packaging for software delivery:

  1. None -- regular files are used.
  2. Tar and other archives can be used to make "pseudo packages" that contain multiple files from multiple directories, saving you from writing cp/mv commands for each file.
  3. RPM and other package managers can also be used in case the volume justifies additional effort.
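The "pseudo package" idea in item 2 can be sketched as follows. All paths are illustrative, and /tmp/pkg-target stands in for / so the demo does not touch real system directories.

```shell
# Stage files from multiple directories into one tree, then tar it up.
mkdir -p stage/etc/app stage/usr/local/bin
echo 'key=value' > stage/etc/app/app.conf
printf '#!/bin/sh\necho hello\n' > stage/usr/local/bin/app
chmod +x stage/usr/local/bin/app
tar -czf app-pkg.tar.gz -C stage .

# On each satellite, unpack relative to / (here /tmp/pkg-target for safety):
mkdir -p /tmp/pkg-target
tar -xzf app-pkg.tar.gz -C /tmp/pkg-target
```

Because the archive preserves paths and permissions, one scp plus one tar invocation replaces a series of cp/mv commands on every target.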

I have tried to provide an overview of the general ideas behind software distribution in the previous lecture. The key here is to think not in terms of individual servers but in terms of a complex hierarchy of functionally different servers: groups of servers based on the OS used, the main application installed, and so on can be operated more or less as a single server, providing a substantial increase in productivity, more uniform configuration, and fewer errors.

Unless special efforts are made to group servers and make them more or less uniform within groups, software distribution is a risky undertaking that can lead to substantial downtime: overwriting an important configuration file on the wrong server is one of the worst situations a sysadmin can find himself in.

Generally you can distinguish environments with multiple small groups from environments with a few large groups (as in ISP datacenters). Different tools are needed for each. For multiple small groups, custom tools are a better deal, as gains from parallelization of updates are non-existent.

See the Slashdot discussion Automating Unix and Linux Administration (Dec 17, 2003) for some practical aspects of software distribution.

Learn to script (Score:4, Interesting)
by holden_t (444907) <> on Thursday October 09, @03:09PM (#7175570)

Certainly I haven't read the book but it looks as if Kirk is offering examples of how to write scripts to handle everyday gruntwork. Good idea.

But I say to those that call themselves sys.admins, Learn how to script!!!

I work at a large bankrupt telcom :) and it's amazing the number of admins who don't have the slightest idea how to write the simplest loop. Or use ksh, bash, or csh's command history. Or vi.

Maybe this is just a corporate thing. They were raised, in a sense, in a setting where all they had to do was add users and replace disks. Maybe they never learned how to do anything else.

Back in '83 I took manuals home and pored over every page, every weekend for months. That didn't make me a good admin but it gave me a good foundation. From there I had to just halfway use my head (imagination?) and start writing scripts. Ugly? Sure. Did they get better? Of course!

Now I play admin on 110+ machines, and I stay bored. Why? Because I've written a response engine in Expect that handles most of my everyday problems. I call it AGE, Automated Gruntwork Eliminator.

There's no way I could have done this if I had just sat back and floated, not put in a bit of effort to learn new things.

Multiple Machines (Score:5, Interesting)

by BrookHarty (9119) on Thursday October 09, @01:48PM (#7175005)

One of the problems we have, is when you have clusters with 100+ machines, and need to push configs, or gather stats off each box.

On solaris, we run a script called "shout" that does a for/next loop that ssh's into each box and runs a command for us. We also have one called "Scream" which does some root privilege ssh enabled commands.

Nortel has a nice program called CLIManager (used to be called CLImax) that allows you to telnet into multiple passports and run commands. Same idea, but the program formats the data it displays. Say you wanted to display "ipconfig" on 50 machines: it would format it so you have columns of data, easy to read and put in reports.

Also, has a "Watch" command that will repeat a command, and format the data. Say you want to display counters.

I have not seen an opensource program that does the same as "CliManager", but it has to be one of the best ideas that should be implemented in opensource. Basically, it logs into multiple machines, parses and displays data, and outputs all errors in another window to keep your main screen clean.

Think of logging into 10 machines, and doing a tail -f on an active log file. Then the program would parse the data, display it in a table, and all updates would be highlighted.

I haven't spoken to the author of CliManager, but I guess he also hated logging into multiple machines and running the same command. This program has been updated over the years, and is now the standard interface to the nodes. It just uses telnet and a command line, but you can log into 100's of nodes at once.

Wish I could post pics and the tgz file, maybe someone from Nortel can comment. (Runs on Solaris, NT and linux)

Re:Multiple Machines (Score:2)
by Xzzy (111297) <sether@ t r u 7> on Thursday October 09, @04:21PM (#7176481)

> Nortel has a nice program called CLIManager (use to be called CLImax), that allows you telnet into multiple passports and run commands.

Fermilab has available a tool called rgang that does (minus the output formatting) something like this.

We use it regularly on a cluster of 176 machines. Its biggest flaw is that it tends to hang when one of the machines it encounters is down.

But it is free so I won't complain. :)

Multiple Machines in Parallel (Score:1)
by cquark (246669) on Thursday October 09, @04:29PM (#7176572)

One of the problems we have, is when you have clusters with 100+ machines, and need to push configs, or gather stats off each box. On solaris, we run a script called "shout" that does a for/next loop that ssh's into each box and runs a command for us. We also have one called "Scream" which does some root privilege ssh enabled commands.
While the serial approach of looping through machines is a huge improvement over making changes by hand, for large scale environments, you need to use a parallel approach, with 16 processes or so contacting machines in parallel.

I wrote my own script, but these days the Parallel::ForkManager module for perl does the process management part for you.
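The parallel fan-out described above can also be sketched in plain shell with xargs -P, which caps the number of concurrent ssh processes. Host names are illustrative and SSH defaults to a dry-run echo so the sketch runs without a cluster.

```shell
# Run a command on many hosts, at most 16 at a time.
SSH=${SSH:-echo ssh}    # replace with SSH=ssh for a real run
printf '%s\n' node01 node02 node03 node04 |
    xargs -P 16 -I{} sh -c "$SSH root@{} uptime"
```

With -P 16, xargs keeps up to 16 ssh children running at once, which is essentially the same process-pool idea Parallel::ForkManager gives you in Perl.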

Re:Multiple Machines (Score:2)
by Sevn (12012) on Thursday October 09, @04:57PM (#7176807)
( | Last Journal: Tuesday April 01, @07:18PM)

I do pretty much the same thing this way:

Generate ssh key file.
Put pub key file in $HOME/.ssh/authorized_keys2 on the remote machines.

Have a text file with a list of all the names the machines resolve to.

for i in `cat machinelist.txt`; do echo "running blah on $i"; ssh user@$i 'some command I want to run on all machines'; echo " "; done

It comes in handy for stuff like checking the mail queues or doing a tail -50 on a log file. Mundane stuff like that. Every once in a while I'll do basically the same thing with scp instead. It can get as complicated as you want. I used a for loop like this to remount 150 /tmp dirs noexec and make the edits to fstab.

Re:Multiple Machines (Score:2)
by drinkypoo (153816) < minus distro> on Thursday October 09, @10:00PM (#7179637)
( | Last Journal: Friday November 21, @04:31PM)

IBM also owns Tivoli Systems, which made something called TME10, the current name of which escapes me at the moment. TME10 uses CORBA (their ORB is now Java, but it used to be basically ANSI C plus classes, compiled with the Microsoft compiler on windows and gcc on most other platforms). Lots of it was perl, some of it was shell, plenty of it was C. Methods called Perl scripts pretty damn frequently. The interface was completely configurable, and not only could you customize it without purchasing any additional products (if you felt froggy) but they also sold products to make this easier to do.

Last I checked this package ran with varying degrees of ability (but most operating systems were very well supported) on all major Commercial Unices, BSDi, Linux, OS/2, NT, Novell, and a bunch of random Unices that most people have never heard of, and never had to. It was sometimes problematic but the fact is that it was incredibly cross-platform.

It was a neat way to do system monitoring. It would be nice to develop something open source like that. I think that today it would not be all that difficult a task. I'd like to see all communications be encrypted, with arbitrary shapes allowed in the network in terms of who talks to who, and who has control over who, to reflect the realities of organizations.

Re:Multiple Machines (Score:0)
by Anonymous Coward on Thursday October 09, @04:14PM (#7176396)

IBM has two solutions depending on the environment.

PSSP under AIX will allow you to run distributed commands across nodes with either a correct RSH config or SSH keys with no passphrase. PSSP also allows for parallel copy.

Under Linux (and AIX actually) there is CSM, which also allows for DSH with the same config requirements. You can do parallel copy under CSM, but you have to be tricky with something like "dsh headnode:/file /file".

Re:Learn to script (Score:2)
by Wolfrider (856) < minus city> on Friday October 10, @08:10PM (#7187085)

O'Reilly's book helped me quite a bit.

In addition, Debian has a new package called abs-guide that I haven't checked out yet.

--I've written a bunch of helpful bash scripts to help me with everyday stuff, as well as aliases and functions. If you want, email me - kingneutron at yahoo NOSPAM dot com and put "Request for bash scripts" in the subject line, and I'll send you a tarball.

Might be useful... (Score:2)
by Vrallis (33290) on Friday October 10, @12:22AM (#7180451)

This might very well be a book I'll pick up sometime. I'm always looking for more ideas.

I maintain about ~170 remote Linux boxes (in our company's retail stores and warehouses), as well as our ~30 or so inhouse servers.

I went through a lot of work to enable our rollout and conversion to go more smoothly. The network and methodology for users, printers, etc. is extremely simplified and patterned.

For each of the 3 'models' of PCs we use, I have a master system that I produced. I used Mondo Rescue to produce CD backups of these systems. These systems act as serial terminal controllers, print spoolers, routers, desktop systems (OpenOffice, Mozilla, Kmail under KDE), and other functions as needed.

When we need to replace a system, or rollout a new location, we grab a system, pop in the Mondo CD, and do a nuke restore. When done, we have a standard configuration user that we log in as. It runs a quick implementation script where you answer anywhere from 3-8 questions (depending on the system type and options), and it configures everything. All networking, users, sets up Kmail, configures all printers and terminals (we use Comtrol Rocketport serial boards), and so on.

If the system is physically ready, we can have it ready software-wise in about 20 minutes (2 CDs to restore).

Updates are done via a couple different methods. I use SSH (over our internal VPN, using key authentication) in scripts to do most updates. If I need to do anything major, such as recently updating Mozilla, we do a CD distribution. The users have a simple menu to take care of running the update for them, even with autorun under KDE. Just pop in the CD, and it automatically takes them into the menu they need.

All logs are duplicated across the network to a central server, but intrusion is less likely as these systems sit on a private frame network. They do, however, have fully secured network setups, as we use cheap dial-up internet access as a backup in case the frame circuit goes down.

I can't help but feel every day like this is just one big hack/kludge, but it works, works damned well, and was about half the cost of any other solution (i.e. higher end Cisco routers to handle various functions, and using Equinox ELS-IIs or the like...those pieces of crap never would work right, we finally pulled only 2 we had in use, and they are currently collecting dust in a storage cabinet).

Needless to say, I am *always* looking for ideas to improve upon this.

Dr. Nikolai Bezroukov


Old News ;-)

[Nov 03, 2011] MUltihost SSH Wrapper 1.0

Unix Shell
Mussh is a shell script that allows you to execute a command or script over SSH on multiple hosts with one command. When possible, it will use ssh-agent and RSA/DSA keys to minimize the need to enter your password more than once.

[Jan 23, 2011] GNU Parallel

GNU parallel is a Perl script for executing jobs in parallel locally or using remote computers. A job is typically a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. If you use xargs today you will find GNU parallel very easy to use, as GNU parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. If you use ppss or pexec you will find GNU parallel will often make the command easier to read. GNU parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU parallel as input for other programs.
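A minimal sketch of the xargs-like usage described above. File names are illustrative; --dry-run prints the commands GNU parallel would execute, and the block skips itself if GNU parallel is not installed.

```shell
if command -v parallel >/dev/null 2>&1; then
    # Compress two log files, several jobs in parallel (dry run):
    parallel --dry-run gzip {} ::: access.log error.log
    # The remote form runs one command on each host in a list, e.g.:
    #   parallel --nonall -S web1,web2 uptime
else
    echo 'GNU parallel not installed; skipping'
fi
```

Note how the ::: separator feeds arguments directly on the command line, one job per argument, instead of reading them from stdin as xargs does.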

[Aug 27, 2010] UNIX System Administration Tools


Runs commands on multiple remote hosts simultaneously. (Perl)
View the README
Download version 11.0 - gzipped tarball, 9 KB
Last update: November 2005


Copies files to remote hosts based on a configuration file. (Perl)
View the README
Download version 1.4 - gzipped tarball, 5 KB
Last update: April 2007

[Aug 06, 2010] Kickstart, APT and RGANG usage note for farm administration Mirko Corosu INFN Genova, Alex Barchiesi, Marco Serra INFN Roma

Introduction to RGANG

Nearly every system administrator tasked with operating a cluster of Unix machines will eventually find or write a tool which will execute the same command on all of the nodes.

At Fermilab a tool called "rgang" was created, written by Marc Mengel, Kurt Ruthmansdorfer, Jon Bakken (who added "copy mode") and Ron Rechenmacher (who included the parallel mode and "tree structure").
The tool was repackaged in an rpm and is available here:

It relies on files in /etc/rgang.d/farmlets/ which define sets of nodes in the cluster.
For example, "all" (/etc/rgang.d/farmlets/all) lists all farm nodes, "t2_wn" lists all your t2_wn nodes, and so forth.
The administrator issues a command to a group of nodes using this syntax:

rgang farmlet_name command arg1 arg2 ... argn

On each node in the file farmlet_name, rgang executes the given command via ssh, displaying the result delimited by a node-specific header.
"rgang" is implemented in Python and works forking separate ssh children which execute in parallel. After successfully waiting on returns from each child or after timing out it displays the output as the OR of all exit status values of the commands executed on each node.
To allow scaling to kiloclusters it can utilize a tree-structure, via an "nway" switch. When so invoked, rgang uses ssh to spawn copies of itself on multiple nodes. These copies in turn spawn additional copies.

4.1 Required Hardware and Software

Users will need to have python (tested on Python 1.5.2 and 2.3.4) installed. A "frozen" version of rgang is also supplied that does not need any additional packages; it can be found in /usr/lib/rgang/bin/.

4.2 Product Installation

Install the rpm and that's it.

rpm -iv rgang.rpm

A "pre-script" (/usr/bin/rgang) has been created that sets the appropriate environment variables and then execs the python script or the "frozen" version. You have to change the name of the executable depending on the one you are planning to use. In the python case:

rgOpts="--rsh=ssh --rcp=scp"
# this has to be uncommented if you have a Python version over 2.3
#pyOpts="-W ignore::FutureWarning" 
exec python $pathToRgang/ $rgOpts "$@"

if you need to use the frozen version modify the pre-script as follows: 
rgOpts="--rsh=ssh --rcp=scp"
# this has to be uncommented if you have a Python version over 2.3
#pyOpts="-W ignore::FutureWarning" 
exec $pathToRgang/rgang $rgOpts "$@"

4.3 Running the Software

The following lines show by example the typical usage of 'rgang'; refer to the documentation or to the usage/help from 'rgang -h' for the full set of options.

5 Troubleshooting

6 Appendix

6.1 Setting the RSA keys

It could be useful to distribute the public key from your mother-node to your target-nodes so that you can use ssh-agent for authentication.
To create a key on your mother-node:

ssh-keygen -t dsa

then to copy the public key to the target-nodes in interactive mode ("--pty"):

rgang --pty -c <nodes-spec> /root/.ssh/ /root/.ssh/authorized_keys

then on your mother-node:

ssh-agent <your shell>

and type the pass-phrase you chose when you created the key, then use 'rgang' as usual (no interactive option).
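The key-generation step can be condensed into a runnable sketch. The file name is illustrative, and the empty passphrase is only to keep the demo non-interactive; the text above assumes a real passphrase plus ssh-agent. ssh-copy-id is the standard OpenSSH helper, mentioned as an alternative to the rgang copy step.

```shell
# Generate a key pair on the mother-node (demo file name, no passphrase).
ssh-keygen -t rsa -b 2048 -f ./demo_key -N '' -q
# Push the public key to each target node, e.g. with rgang as shown above, or:
#   ssh-copy-id -i ./demo_key.pub root@<node>
ls demo_key demo_key.pub
```

After the public key is appended to authorized_keys on the targets, ssh-add loads the private key into the agent and rgang runs without further prompts.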

[Aug 03, 2010] Fabric - Fabric v0.9.1 documentation

Fabric is a Python library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks.

It provides a basic suite of operations for executing local or remote shell commands (normally or via sudo) and uploading/downloading files, as well as auxiliary functionality such as prompting the running user for input, or aborting execution.

Typical use involves creating a Python module containing one or more functions, then executing them via the fab command-line tool. Below is a small but complete "fabfile" containing a single task:

[Jul 06, 2010] What is a good modern parallel SSH tool - Server Fault

Q. I have heard that pssh and clusterssh are two popular ones, but I thought I would open it to discussion here and see what the community's experiences with these tools were? What are the gotchas? Any decent hacks or use cases?

A: I have used pssh and it's easy and works quite well. It's really great for quick queries.

If you find yourself managing servers I'd suggest something more robust and in a slightly different realm (configuration management) such as Puppet or CFEngine.

There is also dsh for parallel ssh runs.

Mussh is a good alternative, it is already included in many Linux distros.

Mussh is a shell script that allows you to execute a command or script over ssh on multiple hosts with one command. When possible mussh will use ssh-agent and RSA/DSA keys to minimize the need to enter your password more than once.

The SSH Power Tool (sshpt) was designed for parallel SSH without requiring that the user set up pre-shared SSH keys. It is superior to pssh and clusterssh in that it supports execution via sudo and can also copy files and execute them afterwards (optionally, via sudo as well). By default it outputs results in CSV format but doubles as an importable Python module so you can use it in your own programs (I used to use it as a back-end behind a custom-built web-based reporting tool at my former employer).

[Jul 05, 2010] parallel-ssh - Project Hosting on Google Code

Python-based. More sophisticated than cluster ssh, which is Perl-based...

PSSH provides parallel versions of OpenSSH and related tools. Included are pssh, pscp, prsync, pnuke, and pslurp. The project includes psshlib which can be used within custom applications. The source code is written in Python and can be cloned from:

git clone git://

PSSH was originally written and maintained by Brent N. Chun. Due to his busy schedule, Brent handed over maintenance to Andrew McNabb in October 2009.

[Apr 06, 2010] Tentakel to execute commands on multiple Linux or UNIX Servers by nixcraft

With the help of a tool called tentakel, you can run distributed command execution. It is a program for executing the same command on many hosts in parallel using ssh (it supports other methods too). The main advantage is that you can create several sets of servers according to requirements: for example a webserver group, a mail server group, a home servers group, etc. The command is executed in parallel on all servers in a group (time saving). By default, every result is printed to stdout (screen). The output format can be defined for each group.

Consider the following sample setup:

admin workstation   Group                  Hosts
|----------------> www-servers        host1, host2,host3
|----------------> homeservers,

You need to install tentakel on the admin workstation. We have two server groups: the first is a group of web servers with three hosts, and the second is homeservers with two hosts.

The remote hosts (groups) need a running sshd server on the remote side. You need to set up ssh-key based login between the admin workstation and all group servers/hosts to take full advantage of this tentakel distributed command execution method.

Tentakel requires a working Python installation. It is known to work with Python 2.3; Python 2.2 and Python 2.1 are not supported. If you are using an old version of python then please upgrade it.

Let us see howto install and configure tentakel.

Visit the sourceforge home page to download tentakel, or download RPM files from the tentakel home page.

Untar the source code, enter:

# tar -zxvf tentakel-2.2.tgz

You should be the root user for the install step. To install it, type:

# make
# make install

For demonstration purposes we will use the following setup:

   admin pc                    Group           hosts
Running Debian Linux       homeservers
User: jadmin

Copy the sample tentakel configuration file tentakel.conf.example to the /etc directory:

# cp tentakel.conf.example /etc/tentakel.conf

Modify /etc/tentakel.conf according to the above setup; at the end your file should look as follows:

# first section: global parameters
set ssh_path="/usr/bin/ssh"
set method="ssh"  # ssh method
set user="jadmin"   # ssh username for remote servers
#set format="%d %o\n" # output format see man page
#set maxparallel="3"  # run at most 3 commands in parallel

# our home servers with two hosts
group homeservers ()
+ +

# localhost
group local ()

Save the file and exit to shell prompt. Where,
group homeservers () : Group name
+ + : Host inclusion. Each name is included and can be an ip address or a hostname.

Configure ssh-key based login to avoid password prompts between the admin workstation and group servers for the jadmin user.

Login as jadmin and type the following command:

$ tentakel -g homeservers

interactive mode

-g groupname : Select the group groupname. The group must be defined in the configuration file (here it is homeservers). If not specified, tentakel implicitly assumes the default group.

At the tentakel(homeservers)> prompt, type the uname and uptime commands as follows:

exec "uname -mrs"
exec "uptime"

A few more examples.
Find out who is logged on to all homeservers and what they are doing (type at the shell prompt):

$ tentakel -g homeservers "w"

Executes the uptime command on all hosts defined in group homeservers:

$ tentakel -g homeservers uptime

As you can see, tentakel is a very powerful and easy to use tool. It also supports the concept of plugins. A plugin is a single Python module and must appear in the $HOME/.tentakel/plugins/ directory. The main advantage of plugins is customization according to your needs. For example, an entire web server or mysql server farm can be controlled according to our requirements.
However, tentakel is not the only utility for this kind of work. There are programs that do similar things or have to do with tentakel in some way. The complete list can be found online here. tentakel should work on almost all variants of UNIX/BSD or Linux distributions.

Time is a precious commodity, especially if you're a system administrator. No other job pulls people in so many directions at once. Users interrupt you constantly with requests, preventing you from getting anything done and putting lots of pressure on you. What do you do? The answer is time management. Read our book review of Time Management for System Administrators. Continue reading Execute commands on multiple hosts using expect tool Part III of this series.

Stoyan 12.27.05 at 2:10 pm
Or if you like ruby, use SwitchTower. It has a native ruby ssh client, so it will work on windows too.
4 Anonymous 12.28.05 at 10:28 am
Fermilab has a much more feature rich tool named rgang.

More info: rgang abstract and download

5 nixcraft 12.28.05 at 11:26 am
RGANG looks good too. It incorporates an algorithm to build a tree-like structure (or "worm" structure) to allow the distribution processing time to scale very well to 1000 or more nodes. Looks rock solid.

Thanks for pointing it out, I appreciate your post :)

6 Damon 12.28.05 at 12:31 pm
I can confirm that it is possible to run Tentakel on Windows, albeit with a bit of modification to the source (about 7 lines total).

I posted the details over at my blog: Running Tentakel on Windows

8 Anonymous 12.28.05 at 9:25 pm
It seems a nice tool, and using ssh it will be secure (as long as no-one knows the private key of course).

For a more simple variant I use a sh-script to execute on all machines in my (linux-)network:

ping -c 2 -w 10 -b 2>/dev/null | sed -n -e 's#^.*bytes from \([^:][^:]*\).*#\1#p' | while read ip
do
    name=`host ${ip} | sed -e 's#.* \([^ ][^ ]*\).$#\1#'`
    echo "- ${name} : ${*}"
    rsh ${name} ${EXTRA} "${*}"
done

9 Anonymous 12.28.05 at 10:42 pm
If you're somewhat traditional, just use Expect. It does most of the same, has tons of examples around, and a cool book (Exploring Expect).
And you can handle stuff that NEEDS a terminal, like ssh password prompts or the passwd program to change passwords. And it works on windows.
10 nixcraft 12.28.05 at 11:17 pm
Expect is very nice; back in my Solaris days I had a complete monitoring system written in rsh and the expect tool. One advantage of ssh is that it provides an API for C/C++ programs, so I get performance.

Anonymous user, thanks for sharing your script with us. I appreciate your post.

11 Sebastian 12.29.05 at 12:52 am
Thanks for mentioning tentakel in your blog. You also mentioned rgang, which looks nice indeed. However, there are two reasons why I don't like rgang: 1) the license is not as free as tentakel's (at least it does not look like it, as far as I can tell without being a lawyer) 2) it looks much more unmaintained than tentakel :)


12 alutii 12.29.05 at 1:25 am
Another possibility is fanout. I quite like fanterm, where the output is collected in xterm-like windows. It helps keep things organized for me.

Groups in tentakel looks handy. Thanks for the article nixcraft.

13 Anonymous 12.31.05 at 3:40 pm
if the number of machines are
14 Anonymous 12.31.05 at 8:20 pm
I use Shocto -
written in Ruby.
Link here
15 mariuz 01.17.08 at 4:08 pm
if you have python2.5 like on my ubuntu system
you must install from svn
"I checked in a fix. You can try it out by checking out the development
version of tentakel using the following command:"

svn co
cd trunk/tentakel
make ; make install

16 Moise Ndala 07.20.09 at 2:26 pm
I would just like to mention that for python 2.5 and later versions, tentakel works fine with the patch specified at:

Thanks for your help!

[May 16, 2009] makeself

makeself is a small shell script that generates a self-extractable compressed TAR archive from a directory. The resulting file appears as a shell script, and can be launched as is. The archive will then uncompress itself to a temporary directory and an arbitrary command will be executed (for example, an installation script).

This is pretty similar to archives generated with WinZip Self-Extractor in the Windows world.
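A minimal sketch of building such an archive. The payload directory, file names, and label are illustrative; the block skips itself when makeself is not installed.

```shell
if command -v makeself.sh >/dev/null 2>&1; then
    # Stage a payload directory with an install script inside it.
    mkdir -p payload
    printf '#!/bin/sh\necho installed\n' > payload/setup.sh
    chmod +x payload/setup.sh
    # Usage: makeself.sh <archive_dir> <output_file> <label> <startup_script>
    makeself.sh payload app-installer.run "App installer" ./setup.sh
    ls -l app-installer.run
else
    echo 'makeself.sh not found; skipping'
fi
```

The resulting app-installer.run is a plain shell script; running it unpacks the payload to a temporary directory and executes setup.sh there.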

[Apr 2, 2009] Spacewalk

Spacewalk is a Linux and Solaris systems management solution. It allows you to inventory your systems (hardware and software information), install and update software on your systems, collect and distribute your custom software packages into manageable groups, provision (Kickstart) your systems, manage and deploy configuration files to your systems, monitor your systems, provision virtual guests, and start/stop/configure virtual guests.

[Mar 10, 2009] Cluster SSH


Cluster SSH opens terminal windows with connections to specified hosts and an administration console. Any text typed into the administration console is replicated to all other connected and active windows. This tool is intended for, but not limited to, cluster administration where the same configuration or commands must be run on each node within the cluster. Performing these commands all at once via this tool ensures all nodes are kept in sync.

[Dec 14, 2008] Cool Solutions Using ClusterSSH to Perform Tasks on Multiple Servers Simultaneously By Martijn Pepping


As an administrator of SLES/OES Linux clusters or multiple SUSE Linux servers you are probably familiar with the fact that you have to make identical changes on more than one server. Those can be things like editing files, executing commands, collecting data or some other administrative task.

There are a couple of ways to do this. You can write a script that performs the change for you, or you can SSH into each server, make the change, and repeat that task manually for every server.

Both ways can cost an extended amount of time. Writing and testing a shell script takes some time, and performing the task by hand on, let's say, five or more servers also costs time.

Now, wouldn't it be a real timesaver if you had only one console in which you could perform tasks on multiple servers simultaneously? This solution can be found in ClusterSSH.


With ClusterSSH it is possible to make an SSH connection to multiple servers and perform tasks from one single command window, without any scripting. The 'cssh' command lets you connect to any server specified as a command line argument, or to groups of servers (or cluster nodes) defined in a configuration file.

The 'cssh' command opens a terminal window to every server, which can be used to review the output sent from the cssh-console, or to edit a single host directly. Commands given in the cssh-console are executed on every connected host. When you start typing in the cssh-console you'll see that the same command also shows up on the command line of the connected systems.

The state of connected systems can be toggled from the cssh-console. So if you want to exclude certain hosts temporarily from a specific command, you can do this with a single mouseclick. Also, hosts can be added on the fly, and open terminal windows can automatically be rearranged.

One caveat to be aware of is when editing files. Never assume that a file is identical on all systems. For example, lines in a file you are editing may be in a different order. Don't just go to a certain line in a file and start editing. Instead, search for the text you want to edit, just to be sure the correct text is edited on all connected systems.


Configuration files section from the man-page:


This file contains a list of tag-to-server-name mappings. When any name is used on the command line, it is checked to see if it is a tag in /etc/clusters (or the .csshrc file, or any additional cluster file specified by -c). If it is a tag, it is replaced with the list of servers from the file. The file is formatted as follows:

<tag> [user@]<server> [user@]<server> [...]


# List of servers in live

live admin1@server1 admin2@server2 server3 server4
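With the 'live' tag from the example above, the whole group can then be opened at once; per the cssh man page, tags and plain hostnames can also be mixed on the command line (a sketch, host names illustrative):

```shell
# Opens one terminal window per host plus the shared cssh console
cssh live

# A cluster tag mixed with an explicit host
cssh live admin3@server5
```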

Clusters may also be specified within the user's .csshrc file, as documented below.

/etc/csshrc & $HOME/.csshrc

This file contains configuration overrides - the defaults are as marked. Default options are overwritten first by the global file, and then by the user file.


ClusterSSH can be used with any system running an SSH daemon.

[Aug 25, 2008] pssh 1.4.0 by Brent N. Chun

About: pssh provides parallel versions of the OpenSSH tools that are useful for controlling large numbers of machines simultaneously. It includes parallel versions of ssh, scp, and rsync, as well as a parallel kill command.

Changes: A 64-bit bug was fixed: select now uses None when there is no timeout rather than sys.maxint. EINTR is caught on select, read, and write calls. Longopts were fixed for pnuke, prsync, pscp, pslurp, and pssh. Missing environment variables options support was added.

[Dec 8, 2006] rsnapshot A Perl-based filesystem snapshot utility.

rsnapshot is a filesystem snapshot utility based on rsync. It makes it easy to take periodic snapshots of local machines, and of remote machines over ssh. It uses hard links whenever possible to greatly reduce the disk space required.

[Dec 8, 2006] Warsync A Perl-based server replication program based on rsync.

Warsync (Wrapper Around Rsync) is a server replication system mainly used to sync servers in LVS clusters. It is based on rsync over ssh and has native support for Debian package synchronization.

[Nov 5, 2006] Perl Scripts for execution on multiple servers

[Nov 5, 2006] Shell Scripts for execution on multiple servers

[Nov 5, 2006] Python Scripts for execution on multiple servers

autosync Python Package Manager Index (PyPM) ActiveState Code

A very efficient tool for maintaining one-way synchronization from a local file system to a remote location. It will (eventually) work with S3, Rackspace, rsync, and generic HTTP-based targets.

Automating ssh and scp across multiple hosts

Once installed, the pssh package provides a number of new commands:

- parallel-slurp: copies files from multiple remote hosts to the local system. We'll demonstrate the usage shortly.
- parallel-ssh: runs commands on a number of systems in parallel. We'll also demonstrate this command shortly.
- parallel-nuke: lets you kill processes on multiple remote systems.
- parallel-scp: the opposite of parallel-slurp; copies a file, or files, to multiple remote systems.

General Usage

Each of the new commands installed by the pssh package expects to read a list of hostnames from a text file. This makes automated usage more straightforward and simplifies command-line parsing.
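The pattern these tools implement can be sketched in plain shell: read hosts from a file, start one background job per host, then wait for all of them. In this self-contained sketch a local 'echo' stands in for the real 'ssh "$host" date', and the hostnames are illustrative:

```shell
set -e
# A throwaway hosts file, one hostname per line
hostfile=$(mktemp)
printf 'hades\nodin\n' > "$hostfile"

# Dispatch one background job per host, collect output, wait for all
outfile=$(mktemp)
while read -r host; do
    # real usage would be: ssh "$host" date >> "$outfile" &
    echo "$host: ok" >> "$outfile" &
done < "$hostfile"
wait
cat "$outfile"
```

The pssh tools add what this sketch lacks: per-host output labeling, timeouts, and a cap on the number of concurrent connections.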

Running Commands On Multiple Hosts

The most basic usage is to simply run a command upon each host without reporting the output. For example, given the file hosts.txt containing a number of hostnames, we can run:
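A plausible invocation looks like this (a sketch: on Debian the pssh binaries are installed under parallel-* names, and -i prints each host's output inline as it completes):

```shell
parallel-ssh -h hosts.txt -i uptime
```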

Save time managing multiple systems with Parallel SSH Linux and Open Source

OpenSSH is perhaps one of the most powerful and versatile tools available to any Linux user. It allows you to securely connect to a remote system via a shell or encrypted FTP and also allows you to copy files securely to and from remote systems.

For a user caring for multiple systems, OpenSSH is extremely useful, but being able to execute OpenSSH commands in parallel is even more so. This is where Parallel SSH, or pssh, comes in. Pssh provides parallel versions of the OpenSSH tools, meaning you can execute commands on various hosts in parallel, copy files in parallel, and so forth. Pssh is essentially a frontend to OpenSSH written in Python. It includes pssh, pscp, and prsync, as well as pslurp (the opposite of pscp in that it downloads rather than uploads) and pnuke (a frontend to the kill command).

Using pssh is extremely easy. There are no manpages, but calling the command with no arguments will bring up the help, which describes each option.

Every command reads a plain-text "hosts" file containing the hosts to act upon, one per line. As an example, assume you wanted to make sure that the time on each server was identical. This could be done using the date command, but to do it with regular SSH, you would have to execute the command at the same time on each host using screen or multiple terminals. With pssh, this is one simple command:

$ pssh -h hosts -P date
hades: Wed Nov 12 10:21:11 MST 2008
hades: [1] 10:21:11 [SUCCESS] hades 22
odin: Wed Nov 12 10:21:11 MST 2008
odin: [2] 10:21:11 [SUCCESS] odin 22
$ cat hosts
hades
odin

Contrast that to using ssh directly:

$ for host in hades odin; do ssh ${host} "date"; done
Wed Nov 12 10:24:02 MST 2008
Wed Nov 12 10:24:02 MST 2008

Remote System Management Tool Overview


Remote Server Management Tool is an Eclipse plug-in that provides an integrated graphical user interface (GUI) environment and enables testers to manage multiple remote servers simultaneously. It is designed for those who would otherwise telnet to more than one server to manage them, consulting different docs and man pages to find platform-specific commands for creating and managing users and groups and for initiating and monitoring processes. The tool handles these operations on remote servers through a user-friendly GUI; in addition, it displays the configuration of the test server (number of processors, RAM, etc.). The activities that can be managed by this tool on the remote and local server are divided as follows:

How does it work?

This Eclipse plug-in was written with the Standard Widget Toolkit (SWT). The tool provides a perspective named Remote System Management, which consists of a Test Servers view and a console view. The remote test servers are mounted in the Test Servers view for management of their resources (processes, file systems, and users or groups).

At the back end, the plug-in uses the Software Testing Automation Framework (STAF). STAF is an open-source framework that masks operating system-specific details and provides common services and APIs for managing system resources. APIs are provided for most major languages. Along with the built-in services, STAF also supports external services. The Remote Server Management Tool comes with two STAF external services: one for user management and another for providing system details.


radmind is a suite of Unix command-line tools and a server designed to remotely administer the file systems of multiple Unix machines. At its core, radmind operates as a tripwire. It is able to detect changes to any managed filesystem object, e.g. files, directories, links, etc. However, radmind goes further than just integrity checking: once a change is detected, radmind can optionally reverse the change. Each managed machine may have its own loadset composed of multiple, layered overloads. This allows, for example, the operating system to be described separately from applications. Loadsets are stored on a remote server. By updating a loadset on the server, changes can be pushed to managed machines.
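On a managed client, the detect-and-reverse cycle described above is driven by radmind's command-line tools. The sketch below shows the usual order of operations; the server name, auth level, and checksum options are illustrative:

```shell
# Fetch the latest command file and transcripts from the server
ktcheck -h radmind.example.com -w0

# Compare the local filesystem against the loadset, recording changes
fsdiff -A -c sha1 -o diff.T /

# Apply the difference transcript, reversing any detected changes
lapply -h radmind.example.com -w0 diff.T
```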


spill manages symbolic links under one tree which point to matching filenames in another. When individual projects are configured with project/version-specific --prefix= settings to keep their installations segregated, spill can make them appear to be installed in a common place, e.g. under /usr/local. It can also delete the links associated with a particular program. It is similar in concept to various other programs such as stow, depot, and relink. However, it's written in C, so it isn't reliant on an interpreter being available. It also doesn't assume complete control of the directory tree where the symbolic links are created. It can create either absolute or relative symbolic links, the latter being more convenient in some setups.
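The symlink-farm idea behind spill (and stow) can be sketched with plain ln -s; all paths here are illustrative and local:

```shell
set -e
# A package installed under its own version-specific prefix...
root=$(mktemp -d)
mkdir -p "$root/pkgs/hello-1.0/bin" "$root/local/bin"
printf '#!/bin/sh\necho hello\n' > "$root/pkgs/hello-1.0/bin/hello"
chmod +x "$root/pkgs/hello-1.0/bin/hello"

# ...made visible in a common tree via a symbolic link. spill
# automates creating and deleting such links for whole packages.
ln -s "$root/pkgs/hello-1.0/bin/hello" "$root/local/bin/hello"
"$root/local/bin/hello"
```

Uninstalling a package then amounts to removing its links, leaving the shared tree and other packages untouched.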


Rdist is a program to maintain identical copies of files over multiple hosts. It preserves the owner, group, mode, and mtime of files if possible and can update programs that are executing.

relink

relink is a package management tool for the organization and management of software packages. It should run on any Unix platform that runs Perl. It is similar to tools such as RPM (Red Hat/Mandrake), pkgadd (Slackware/Sun), stow (GNU), and depot (CMU).

Recommended Links


Projects tagged Software Distribution Tools

Use ssh on multiple servers at one time

Automating ssh and scp across multiple hosts

If, like me, you run Debian GNU/Linux on a number of hosts, at times you'd like to run a command or two on all of those hosts. There are several ways you can accomplish this, ranging from manually connecting to each host in turn to more complex solutions such as CFEngine or Puppet. Midway between the two, you can use pssh to run commands upon multiple hosts.

The pssh package is one of a number of tools which allows you to perform SSH connections in parallel across a number of machines.

Execute commands simultaneously on multiple servers Using PSSH-Cluster SSH-Multixterm Ubuntu Geek

Execute commands on multiple hosts using expect tool

ssh on multiple servers Using cluster ssh

Updates of Multiple Machines Using SSH

Save time managing multiple systems with Parallel SSH | Linux and ...

using rsync with ssh to distribute to multiple hosts -- Execute commands simultaneously on multiple servers


FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in our efforts to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a 'fair use' of any such copyrighted material as provided for in section 107 of the US Copyright Law. In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit exclusively for research and educational purposes. If you wish to use copyrighted material from this site for purposes of your own that go beyond 'fair use', you must obtain permission from the copyright owner.


Copyright © 1996-2016 by Dr. Nikolai Bezroukov. This site was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.


Copyrights in the original materials belong to their respective owners. Quotes are made for educational purposes only, in compliance with the fair use doctrine.


Last updated: September 12, 2017