May the source be with you, but remember the KISS principle ;-)

Parallel command execution

News | Enterprise Unix System Administration | Recommended Links | Unix Configuration Management Tools | pssh | Slurping | C3 Tools | parallel | Pdsh (a multithreaded remote shell) | pdcp | rsync | Cluster SSH | Mussh | rdist | Group Shell (also called gsh) | clustershell | SSH Power Tool | Tentakel | Multi Remote Tools | Perl Admin Tools and Scripts | Grid Engine | Unix System Monitoring | SSH Usage in Pipes | Password-less SSH login | scp | sftp | Tips | History | Humor | Etc

Parallel command execution means executing the same command or script on multiple servers. Such tools can be command line or interactive. Most are based on SSH; some use rdist when transferring files and ssh when executing them. You should probably use one written in a scripting language you know.

Typical usage is, for example, distribution of SSH keys to multiple hosts. See Passwordless SSH login

Such tools are really necessary in cluster and grid computing environments. See Cluster management tools

Among popular choices are:

  1. Shell
  2. Perl
  3. Python
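Whatever language you choose, the core pattern is the same: fan the command out as background processes and wait for the slowest one. Here is a minimal POSIX shell sketch of that pattern; the hosts file, the RUNNER override, and the run_on_all name are illustrative, not taken from any of the tools below:

```shell
#!/bin/sh
# Run the same command on every host listed in a file, in parallel.
# RUNNER defaults to ssh; override it (e.g. RUNNER=echo) for a dry run.
RUNNER=${RUNNER:-ssh}

run_on_all() {
    hostfile=$1
    cmd=$2
    while read -r host; do
        # /dev/null stdin so ssh cannot drain the host list
        "$RUNNER" "$host" "$cmd" < /dev/null &
    done < "$hostfile"
    wait    # total time ~ slowest host, not the sum of all hosts
}
```

For example, run_on_all hosts uptime runs uptime on everything listed in ./hosts, one background ssh per host.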



Old News ;-)

[Feb 12, 2017] Linux Server Hacks, Volume Two

Feb 12, 2017
Execute Commands Simultaneously on Multiple Servers

Run the same command at the same time on multiple systems, simplifying administrative tasks and reducing synchronization problems .

If you have multiple servers with similar or identical configurations (such as nodes in a cluster), it's often difficult to make sure the contents and configuration of those servers are identical. It's even more difficult when you need to make configuration modifications from the command line, knowing you'll have to execute the exact same command on a large number of systems (better get coffee first). You could try writing a script to perform the task automatically, but sometimes scripting is overkill for the work to be done. Fortunately, there's another way to execute commands on multiple hosts simultaneously.

A great solution for this problem is an excellent tool called multixterm, which enables you to simultaneously open xterms to any number of systems, type your commands in a single central window, and have the commands executed in each of the xterm windows you've started. Sound appealing? Type once, execute many: it sounds like a new pipelining instruction set.

multixterm is available from , and it requires expect and tk . The most common way to run multixterm is with a command like the following:

multixterm -xc "ssh %n" host1 host2

This command will open ssh connections to host1 and host2 ( Figure 4-1 ). Anything typed in the area labeled "stdin window" (which is usually gray or green, depending on your color scheme) will be sent to both windows, as shown in the figure.

As you can see from the sample command, the –xc option stands for execute command, and it must be followed by the command that you want to execute on each host, enclosed in double quotation marks. If the specified command includes a wildcard such as %n , each hostname that follows the command will be substituted into the command in turn when it is executed. Thus, in our example, the commands ssh host1 and ssh host2 were both executed by multixterm , each within its own xterm window.
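The %n substitution described above can be illustrated with a tiny shell sketch; expand_template is hypothetical (multixterm does this internally via expect):

```shell
# Expand a command template once per hostname, replacing the %n wildcard.
expand_template() {
    template=$1; shift
    for host in "$@"; do
        printf '%s\n' "$template" | sed "s/%n/$host/"
    done
}
```

Running expand_template "ssh %n" host1 host2 prints the two commands multixterm would run, one per xterm window.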

See Also

[Dec 18, 2014] Parallel SSH execution and a single shell to control them all

This isn't good enough?

Posted by: Anonymous [ip:] on November 01, 2008 01:46 AM

for x in `cat hosts`; do ssh -f $x "do stuff" & done; wait


Re: This isn't good enough?

Posted by: Anonymous [ip:] on November 01, 2008 09:22 PM

No, it's not. Let's take a sample size of 32 hosts and run a quick command on each:

$ time for f in `cat hosts`; do ssh $f 'ls / > /dev/null'; done
real 2m45.195s

That's roughly 5.15 seconds per host. If this were a 5000-node network, we're looking at about 7.1 hours to complete this command. Let's do the same test with pssh and a max parallel of 10:

$ time pssh -p 10 -h hosts "ls > /dev/null"
real 0m17.220s

That's some considerable savings. Let's try each one in parallel and set the max to 32:

$ time pssh -p 32 -h hosts "ls > /dev/null"
real 0m7.436s

If one run took about 5 seconds, doing them all at the same time also took about 5 seconds, just with a bit of overhead. I don't have a 5000-node network (anymore), but you can see there are considerable savings in doing some things in parallel. You probably wouldn't ever run 5000 commands in parallel, but really that's a limit of your hardware and network. If you had a beefy enough host machine you could probably run 50, 100, or even 200 in parallel if the machine could handle it.


Re(1): This isn't good enough?

Posted by: Anonymous [ip:] on November 02, 2008 07:54 PM

It's absolutely not good enough. Four or so years ago a coworker and I wrote a suite of parallel ssh tools to help perform security-related duties on the very large network in our global corp. With our tools on a MOSIX cluster, using load-balanced ssh-agents across multiple nodes, we could run up to 1000 outbound sessions concurrently. This made tasks such as looking for users' processes or cron jobs on 10,000+ hosts worldwide something that could be done in a reasonable amount of time, as opposed to taking more than a day.


Parallel SSH execution and a single shell to control them all

Posted by: Anonymous [ip:] on November 02, 2008 12:43 PM

I use the parallel option to xargs for this. I tried shmux and some other tools, but xargs seems to work best for me. Just use a more recent GNU version. Some older GNU versions, some AIX versions, etc. have issues. The only real gotcha I've run into is that it will stop the whole run if a command exits non-zero. Just write a little wrapper that exits 0 and you're good to go.

I've used this in two ~1000-server environments to push code (piping tar over ssh for better compatibility) and to remotely execute commands.
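A hedged sketch of the xargs approach this comment describes, assuming an xargs with -P/-I support (GNU or BSD); the fanout name, the hosts file, and the SSH override are illustrative, and the "|| true" wrapper is the exit-0 trick mentioned above:

```shell
#!/bin/sh
# Bounded-parallelism fan-out with xargs.
# SSH defaults to ssh; set SSH=echo to dry-run without real hosts.
SSH=${SSH:-ssh}

fanout() {
    hostfile=$1
    remote_cmd=$2
    # -P 10: at most 10 concurrent connections
    # -I{}:  substitute each hostname into the command line
    # "|| true": one failing host does not abort the whole run
    xargs -P 10 -I{} sh -c "$SSH {} '$remote_cmd' || true" < "$hostfile"
}
```

For example, fanout hosts 'ls / > /dev/null' reproduces the pssh timing test above with at most 10 connections in flight.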


Parallel SSH execution and a single shell to control them all

Posted by: Anonymous [ip:] on November 04, 2008 04:09 PM

Ticketmaster wrote a really good tool to do parallel systems administration tasks like this called "onall".

It is released under the gplv3 and can be downloaded from:

I do something like this to execute a command on all hosts and set the timeout to 10 seconds:

host -l | awk '{print $1}' | onall -t10 "uptime"


Parallel SSH execution and a single shell to control them all

Posted by: Anonymous [ip:] on November 06, 2008 10:08 AM

What I'm curious about is this:
if you want to interactively edit the same file on multiple machines, it might be quicker to use a parallel SSH utility and edit the file on all nodes with vi rather than concoct a script to do the same edit.

I would have found a short note on which of these three is capable of doing so very helpful. Cluster SSH's description sounds as though it would be the tool that could do it, but I just don't have the time to test it yet.
Has anyone tried that? Or does anyone know which tool this statement refers to?


Re: Parallel SSH execution and a single shell to control them all

Posted by: Anonymous [ip:] on November 14, 2008 05:33 AM

I don't see ssh keys mentioned anywhere...I'm not getting how this allows authentication to happen. Is this thing secure?

[Aug 31, 2014] Run the same command on many Linux servers at once with the command gsh

Group Shell (also called gsh) is a remote shell multiplexor. It lets you control many remote shells at once from a single shell. Unlike other command dispatchers, it is interactive, so shells spawned on the remote hosts are persistent.

It requires only an SSH server on the remote hosts, or some other way to open a remote shell.

gsh allows you to run commands on multiple hosts by adding tags to the gsh command.

gsh tag "remote command"

Important things to remember:


List uptime on all servers in the linux group:

gsh linux "uptime"

Check to see if an IP address was blocked with CSF by checking the csf and csfcluster groups/tags:

gsh csf+csfcluster "/usr/sbin/csf -g" 

Unblock an IP and remove it from /etc/csf.deny on all csf and csfcluster machines:

 gsh csf+csfcluster "/usr/sbin/csf -dr"

Check the linux kernel version on all VPS machines running centos 5

gsh centos5-baremetal "uname -r"

Check cpanel version on all cpanel machines

gsh cpanel "/usr/local/cpanel/cpanel -V"

The full readme is located here:

Here's an example /etc/ghosts file:

# Machines
# hostname   OS-Version  Hardware   OS     cp      security
             debian6     baremetal  linux  plesk   iptables
             centos5     vps        linux  cpanel  csfcluster
             debian7     baremetal  linux  plesk   iptables
             centos6     vps        linux  cpanel  csfcluster
             centos6     vps        linux  cpanel  csfcluster
             centos6     vps        linux  nocp    denyhosts
             debian6     baremetal  linux  plesk   iptables
             centos6     baremetal  linux  cpanel  csf
             centos5     vps        linux  cpanel  csf

[Mar 27, 2014] C3 Tools


[Jan 26, 2012] pssh 2.3

pssh provides parallel versions of the OpenSSH tools that are useful for controlling large numbers of machines simultaneously. It includes parallel versions of ssh, scp, and rsync, as well as a parallel kill command.

[Jan 23, 2011] dsh

dsh (the distributed shell) is a program which executes a single command on multiple remote machines. It can execute this command in parallel (i.e., on any number of machines at a time) or in serial (by specifying parallel execution of the command on 1 node at a time). It was originally designed to work with rsh, but it has full support for ssh, and with a little tweaking of the top part of the dsh executable it should work with any program that allows remote execution of a command without an interactive login.

[Aug 24, 2010] Massh

Bash, GPL

Massh is a mass ssh tool that allows for parallel execution of commands on remote systems. This makes it possible to update and manage hundreds or even thousands of systems. It can also push files in parallel and run scripts.

It also includes Pingz, a mass pinger that can do DNS lookups, and Ambit, a string expander that allows for both pre-defined groups of hosts and arbitrary strings that represent host groupings.

The combination of Massh and Ambit creates a powerful way to manage groups of systems as configurable units. This allows a focus on managing an environment of services, not servers. Clean, organized output sets it apart from other mass ssh tools.

[Apr 06, 2010] Tentakel to execute commands on multiple Linux or UNIX Servers by nixcraft

With the help of a tool called tentakel, you can run distributed command execution. It is a program for executing the same command on many hosts in parallel using ssh (it supports other methods too). The main advantage is that you can create several sets of servers according to your requirements: for example a web server group, a mail server group, a home servers group, etc. The command is executed in parallel on all servers in a group (saving time). By default, every result is printed to stdout (the screen). The output format can be defined for each group.

Consider the following sample setup:

admin workstation    Group             Hosts
 |----------------> www-servers        host1, host2, host3
 |----------------> homeservers

You need to install tentakel on the admin workstation. We have two server groups: the first is a group of web servers with three hosts, and the other is homeservers with two hosts.

The only requirement on the remote hosts (groups) is a running sshd server on the remote side. You need to set up ssh-key based login between the admin workstation and all group servers/hosts to take full advantage of this tentakel distributed command execution method.
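The ssh-key based login mentioned above can be scripted; here is a sketch assuming OpenSSH's ssh-keygen and ssh-copy-id are available. The key path, the jadmin user, and the host names are placeholders, and setup_keys is not part of tentakel:

```shell
#!/bin/sh
# Generate a passphrase-less key pair on the admin workstation (if one
# does not already exist) and push the public half to each group host.
setup_keys() {
    keyfile=$1; shift
    [ -f "$keyfile" ] || ssh-keygen -q -t rsa -N "" -f "$keyfile"
    for host in "$@"; do
        ssh-copy-id -i "$keyfile.pub" "$host"
    done
}

# typical use (hosts are placeholders):
# setup_keys ~/.ssh/id_rsa_tentakel jadmin@host1 jadmin@host2
```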

Tentakel requires a working Python installation. It is known to work with Python 2.3; Python 2.2 and Python 2.1 are not supported. If you are using an old version of Python, please upgrade it.

Let us see how to install and configure tentakel.

Visit sourceforge home page to download tentakel or download RPM files from tentakel home page.

Untar source code, enter:

# tar -zxvf tentakel-2.2.tgz

You should be root user for the install step. To install it type

# make
# make install

For demonstration purpose we will use following setup:

   admin pc                    Group           hosts
Running Debian Linux       homeservers
User: jadmin

Copy sample tentakel configuration file tentakel.conf.example to /etc directory

# cp tentakel.conf.example /etc/tentakel.conf

Modify /etc/tentakel.conf according to above setup, at the end your file should look like as follows:

# first section: global parameters
set ssh_path="/usr/bin/ssh"
set method="ssh"  # ssh method
set user="jadmin"   # ssh username for remote servers
#set format="%d %o\n" # output format see man page
#set maxparallel="3"  # run at most 3 commands in parallel

# our home servers with two hosts
group homeservers ()
+ +

# localhost
group local ()

Save the file and exit to the shell prompt. Where:
group homeservers () : group name
+name : host inclusion; name is included and can be an IP address or a hostname

Configure ssh-key based login to avoid password prompt between admin workstation and group servers for jadmin user.

Login as jadmin and type the following command:

$ tentakel -g homeservers

interactive mode

-g groupname : select the group groupname. The group must be defined in the configuration file (here it is homeservers). If not specified, tentakel implicitly assumes the default group.

At the tentakel(homeservers)> prompt, type the uname and uptime commands as follows:

exec "uname -mrs"
exec "uptime"

A few more examples:
Find who is logged on to all homeservers and what they are doing (type at the shell prompt):

$ tentakel -g homeservers "w"

Executes the uptime command on all hosts defined in group homeservers:

$ tentakel -g homeservers uptime

As you can see, tentakel is a very powerful and easy to use tool. It also supports the concept of plugins. A plugin is a single Python module and must appear in the $HOME/.tentakel/plugins/ directory. The main advantage of plugins is customization according to your needs. For example, an entire web server or MySQL server farm can be controlled according to your requirements.
However, tentakel is not the only utility for this kind of work. There are programs that do similar things or have to do with tentakel in some way. The complete list can be found online here. tentakel should work on almost all variants of UNIX/BSD or Linux distributions.


Stoyan 12.27.05 at 2:10 pm
Or if you like Ruby, use SwitchTower. It has a native Ruby ssh client, so it will work on Windows too.
4 Anonymous 12.28.05 at 10:28 am
Fermilab has a much more feature-rich tool named rgang.

More info: rgang abstract and download

5 nixcraft 12.28.05 at 11:26 am
RGANG looks good too. It incorporates an algorithm to build a tree-like structure (or "worm" structure) to allow the distribution processing time to scale very well to 1000 or more nodes. Looks rock solid.

Thanks for pointing it out, I appreciate your post :)

6 Damon 12.28.05 at 12:31 pm
I can confirm that it is possible to run Tentakel on Windows, albeit with a bit of modification to the source (about 7 lines total).

I posted the details over my blog: Running Tentakel on Windows

8 Anonymous 12.28.05 at 9:25 pm
It seems a nice tool, and using ssh it will be secure (as long as no one knows the private key, of course).

For a more simple variant I use a sh-script to execute on all machines in my (linux-)network:

ping -c 2 -w 10 -b 2>/dev/null | sed -n -e 's#^.*bytes from \([^:][^:]*\).*#\1#p' | while read ip
do
  name=`host ${ip} | sed -e 's#.* \([^ ][^ ]*\)\.$#\1#'`
  echo "- ${name} : ${*}"
  rsh ${name} ${EXTRA} "${*}"
done

9 Anonymous 12.28.05 at 10:42 pm
If you're somewhat traditional, just use Expect. It does most of the same, has tons of examples around, and a cool book (Exploring Expect).
And you can handle stuff that NEEDS a terminal, like ssh password prompts or the passwd program to change passwords. And it works on Windows.
10 nixcraft 12.28.05 at 11:17 pm
Expect is very nice; back in my Solaris days I had a complete monitoring system written in rsh and expect. One advantage of ssh is that it provides an API for C/C++ programs, so I get performance.

Anonymous user, thanks for sharing your script with us, I appreciate your post.

11 Sebastian 12.29.05 at 12:52 am
Thanks for mentioning tentakel in your blog. You also mentioned rgang, which looks nice indeed. However, there are two reasons why I don't like rgang: 1) the license is not as free as tentakel's (at least it does not look like it, as far as I can tell without being a lawyer); 2) it looks much more unmaintained than tentakel :)


12 alutii 12.29.05 at 1:25 am
Another possibility is fanout. I quite like fanterm, where the output is collected in xterm-like windows. Helps keep things organized for me.

Groups in tentakel looks handy. Thanks for the article nixcraft.

13 Anonymous 12.31.05 at 3:40 pm
if the number of machines are
14 Anonymous 12.31.05 at 8:20 pm
I use Shocto -
written in Ruby.
Link here
15 mariuz 01.17.08 at 4:08 pm
if you have python2.5, like on my Ubuntu system, you must install from svn:
"I checked in a fix. You can try it out by checking out the development
version of tentakel using the following command:"

svn co
cd trunk/tentakel
make ; make install

16 Moise Ndala 07.20.09 at 2:26 pm
I would just like to mention that for Python 2.5 and later versions, tentakel works fine with the patch specified at:

Thanks for your help!

[May 16, 2009] makeself

makeself is a small shell script that generates a self-extractable compressed TAR archive from a directory. The resulting file appears as a shell script, and can be launched as is. The archive will then uncompress itself to a temporary directory and an arbitrary command will be executed (for example, an installation script).

This is pretty similar to archives generated with WinZip Self-Extractor in the Windows world.
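The technique makeself automates can be sketched in a few lines: a shell stub that finds an appended tar.gz payload in its own file and unpacks it. build_sfx and the __PAYLOAD__ marker are illustrative, not makeself's actual layout:

```shell
#!/bin/sh
# Build a self-extracting archive: shell stub followed by a tar.gz payload.
build_sfx() {
    dir=$1    # directory to pack
    out=$2    # resulting self-extracting script
    cat > "$out" <<'EOF'
#!/bin/sh
# locate the first line after the marker, untar the rest of this file
line=$(awk '/^__PAYLOAD__$/ { print NR + 1; exit }' "$0")
tmp=$(mktemp -d)
tail -n +"$line" "$0" | tar -xzf - -C "$tmp"
echo "$tmp"   # a real stub would run an installer here instead
exit 0
__PAYLOAD__
EOF
    tar -czf - -C "$dir" . >> "$out"
    chmod +x "$out"
}
```

makeself's real stub is more careful (checksums, options passed through to the embedded installer), but the locate-marker-and-untar idea is the same.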

[Apr 2, 2009] Spacewalk

Spacewalk is a Linux and Solaris systems management solution. It allows you to inventory your systems (hardware and software information), install and update software on your systems, collect and distribute your custom software packages into manageable groups, provision (Kickstart) your systems, manage and deploy configuration files to your systems, monitor your systems, provision virtual guests, and start/stop/configure virtual guests.

[Mar 10, 2009] Cluster SSH


Cluster SSH opens terminal windows with connections to specified hosts and an administration console. Any text typed into the administration console is replicated to all other connected and active windows. This tool is intended for, but not limited to, cluster administration where the same configuration or commands must be run on each node within the cluster. Performing these commands all at once via this tool ensures all nodes are kept in sync.

[Dec 14, 2008] Using ClusterSSH to Perform Tasks on Multiple Servers Simultaneously By Martijn Pepping

Cool Solutions


As an administrator of SLES/OES Linux clusters or multiple SUSE Linux servers, you are probably familiar with the fact that you often have to make an identical change on more than one server. Those can be things like editing files, executing commands, collecting data, or other administrative tasks.

There are a couple of ways to do this. You can write a script that performs the change for you, or you can SSH into each server, make the change, and repeat that task manually for every server.

Both ways can cost a fair amount of time: writing and testing a shell script takes time, and performing the task by hand on, say, five or more servers also costs time.

Now, wouldn't it be a real timesaver if you had only one console in which you could perform tasks on multiple servers simultaneously? This solution can be found in ClusterSSH.


With ClusterSSH it is possible to make an SSH connection to multiple servers and perform tasks from one single command window, without any scripting. The 'cssh' command lets you connect to any server specified as a command line argument, or to groups of servers (or cluster nodes) defined in a configuration file.

The 'cssh' command opens a terminal window to every server, which can be used to review the output sent from the cssh console, or to edit a single host directly. Commands entered in the cssh console are executed on every connected host. When you start typing in the cssh console you'll see that the same command also shows up on the command line of the connected systems.

The state of connected systems can be toggled from the cssh console. So if you want to exclude certain hosts temporarily from a specific command, you can do this with a single mouse click. Also, hosts can be added on the fly, and open terminal windows can automatically be rearranged.

One caveat to be aware of is when editing files. Never assume that a file is identical on all systems. For example, lines in a file you are editing may be in a different order. Don't just go to a certain line in a file and start editing. Instead, search for the text you want to edit, just to be sure the correct text is edited on all connected systems.


Configuration files section from the man-page:


This file contains a list of tag to server name mappings. When any name is used on the command line, it is checked to see if it is a tag in /etc/clusters (or the .csshrc file, or any additional cluster file specified by -c). If it is a tag, the tag is replaced with the list of servers from the file. The file is formatted as follows:

<tag> [user@]<server> [user@]<server> [...]


# List of servers in live

live admin1@server1 admin2@server2 server3 server4

Clusters may also be specified within the users .csshrc file, as documented below.

/etc/csshrc & $HOME/.csshrc

This file contains configuration overrides - the defaults are as marked. Default options are overwritten first by the global file, and then by the user file.


ClusterSSH can be used with any system running the SSH daemon.

[Aug 25, 2008] pssh 1.4.0 by Brent N. Chun -

About: pssh provides parallel versions of the OpenSSH tools that are useful for controlling large numbers of machines simultaneously. It includes parallel versions of ssh, scp, and rsync, as well as a parallel kill command.

Changes: A 64-bit bug was fixed: select now uses None when there is no timeout rather than sys.maxint. EINTR is caught on select, read, and write calls. Longopts were fixed for pnuke, prsync, pscp, pslurp, and pssh. Missing environment variables options support was added.

[Apr 22, 2008] Project details for Multi Remote Tools

Apr 18, 2008 |

MrTools is a suite of tools for managing large, distributed environments. It can be used to execute scripts on multiple remote hosts without prior installation, copy of a file or directory to multiple hosts as efficiently as possible in a relatively secure way, and collect a copy of a file or directory from multiple hosts.

Release focus: Initial freshmeat announcement

Hash tree cleanup in the thread tracking code was improved in all tools in the suite. MrTools has now adopted version 3 of the GPL. A shell quoting issue was fixed; this removed several known limitations, including with Perl scripts and awk if statements. This fix alone has redefined the tool's capabilities, making an already powerful tool even more powerful.

Run Command Parallel on Multiple Hosts using PDSH Tool

In this case, I specified the node names separated by commas. You can also read the list of hosts from a file as follows:

[shaha@oc8535558703 PDSH]$ pdsh -w ^hosts uptime
ubuntu@ec2-52-58-254-227: 10:00:52 up 2 days, 16:48, 0 users, load average: 0.05, 0.04, 0.05
ec2-user@ec2-52-59-121-138: 10:00:50 up 2 days, 16:51, 0 users, load average: 0.00, 0.01, 0.05
[shaha@oc8535558703 PDSH]$

4) More Useful pdsh Commands

Now I can shift into second gear and try some fancier pdsh tricks. First, I want to run a more complicated command on all of the nodes. Notice that I put the remote command (cat /proc/cpuinfo) in quotes, so it runs on each node; the merged output is then filtered locally with egrep for bogomips, model, and cpu.

[shaha@oc8535558703 PDSH]$ pdsh 'cat /proc/cpuinfo' | egrep 'bogomips|model|cpu'
ubuntu@ec2-52-58-254-227: cpu family : 6
ubuntu@ec2-52-58-254-227: model : 63
ubuntu@ec2-52-58-254-227: model name : Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz
ubuntu@ec2-52-58-254-227: cpu MHz : 2400.070
ubuntu@ec2-52-58-254-227: cpu cores : 1
ubuntu@ec2-52-58-254-227: cpuid level : 13
ubuntu@ec2-52-58-254-227: bogomips : 4800.14
ec2-user@ec2-52-59-121-138: cpu family : 6
ec2-user@ec2-52-59-121-138: model : 62
ec2-user@ec2-52-59-121-138: model name : Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz
ec2-user@ec2-52-59-121-138: cpu MHz : 2500.036
ec2-user@ec2-52-59-121-138: cpu cores : 1
ec2-user@ec2-52-59-121-138: cpuid level : 13
ec2-user@ec2-52-59-121-138: bogomips : 5000.07
[shaha@oc8535558703 PDSH]$

What is the best distributed shell for medium-sized Linux clusters? Why? - Quora

12 Dec 2013

What's the next best thing after ssh in a for loop? Only needs to scale to a hundred or so hosts. A client-side only implementation is preferable. Some options I've dug up:

* pdsh:
* parallel ssh:
* dsh:
* pydsh:
* dsh:
* gxp:
* cap:
* omnitty:
* dcli:


Christian Nygaard, DevOps, 16 years of administering Linux. Anything from postfix to the cloud.

Written 12 Dec 2013

Radically simple IT automation

Install Ansible; it works from your admin machine.

Edit /etc/ansible/hosts and add the machines you want to manage.

Copy your public ssh key to the servers you want to manage.

On the admin node add the ssh key to the ssh-agent
eval `ssh-agent`

Run the uptime command on all servers in the cluster
ansible all -a 'uptime'
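For the /etc/ansible/hosts step above, the inventory is a plain INI-style file; here is a minimal sketch (all group and host names are made up):

```ini
# /etc/ansible/hosts -- hypothetical inventory
[webservers]
web1.example.com
web2.example.com

[homeservers]
home1.example.com
```

With groups defined, ansible webservers -a 'uptime' targets just that group instead of all hosts.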

Distributed Command Execution -CodeIdol

Use tentakel for parallel, distributed command execution.

Often you want to execute a command not only on one computer, but on several at once. For example, you might want to report the current statistics on a group of managed servers or update all of your web servers at once.

1 The Obvious Approach

You could simply do this on the command line with a shell script like the following:

for host in hostA hostB hostC ; do 
   ssh $host do_something
done

However, this has several disadvantages:

2 How tentakel Can Help

While you could write a shell script to address some of these disadvantages, you might want to consider tentakel, which is available in the ports collection. When executed, it starts multiple threads that run independently of each other. The maximum waiting time depends on the longest-running remote connection, not on the sum of all of them. After the last remote command has returned, tentakel displays the results of all remote command executions. You can also configure how the output should look, combining or differentiating the results from individual hosts.

tentakel operates on groups of hosts. A group can have two types of members: hosts or references to other groups. A group can also have parameters to control various aspects of the connection, including username and access method (rsh or ssh, for example).

3 Installing and Configuring tentakel

Install tentakel from the ports collection:

cd /usr/ports/sysutils/tentakel
make install clean

You can instead install tentakel by hand; consult the INSTALL file in the distribution. A make install should work in most cases, provided that you have a working Python environment installed.

After the installation, create the configuration file tentakel.conf in the directory $HOME/.tentakel/. See the example file in /usr/local/share/doc/tentakel/tentakel.conf.example for a quick overview of the format.

Alternatively, copy the file into /usr/local/etc/ or /etc/, depending on your system's policy, in order to have a site-wide tentakel.conf that will be used when there is no user-specific configuration. As an administrator, you may predefine groups for your users this way.

Assuming that you have a farm of three servers, mosel, aare, and spree, of which the first two are web servers, your configuration might resemble this:

set format="%d\n%o\n"
group webservers(user="webmaster")

  +mosel +aare

group servers(user="root")

  @webservers +spree

With this definition, you can use the group name servers to execute a command on all your servers as root and the group name webservers to execute it only on your web servers as user webmaster.

The first line defines the output format, as explained in the table below.

tentakel output format characters

%d      The hostname
%o      The output of the remotely executed commands
\n      A newline character

This commands tentakel to print the hostname, followed by the lines of the remote output for each server sequentially. You can enrich the format string with additional directives, such as %s for the exit status from commands. See the manpage for more information.

As you can see from the servers definition, there is no need to list all servers in each group; include servers from other groups using the @groupname notation.

On the remote machines, the only required configuration is to ensure that you can log into them from the tentakel machine without entering a password. Usually that will mean using ssh and public keys, which is also tentakel's default. tentakel provides the parameter method for using different mechanisms, so refer to the manpage for details.

4 Using tentakel

To update the web pages on all web servers from a CVS repository:

% tentakel -g webservers "cd /var/www/htdocs && cvs update"

### mosel(0):

cvs update: Updating .

U index.html

U main.css

### aare(1):

C main.css

cvs update: Updating .


Note the use of quotes around the command to be executed. This prevents the local shell from interpreting special characters such as & or ;.

If no command is specified, tentakel invokes interactive mode:

% tentakel 

interactive mode

tentakel(default)> use webservers

tentakel(webservers)> exec du -sh /var/www/htdocs

### mosel(0):

364k    /var/www/htdocs

### aare(0):

364k    /var/www/htdocs

tentakel(webservers)> quit


While in interactive mode, the command help prints further information.

5 See Also

NetSarang - Xshell Features

UNIX System Administration Tools

All of these tools are open source, released under a BSD-style license, with the notable exception of readinfo, which is released under the GNU GPL.

Demonstrates the use of Mac OS X Authorization Services. (C)
View the README
Download version 1.1 - gzipped tarball, 3 KB
Last update: July 2003

Copies files to remote hosts based on a configuration file. (Perl)
View the README
Download version 1.4 - gzipped tarball, 5 KB
Last update: April 2007

Records system changes to flat text log files. (Bourne shell)
View the README
Download version 2.2 - gzipped tarball, 5 KB
Last update: June 2007

Manages inetd or xinetd daemon and services. Includes support for SRC on AIX, SMF on Solaris, and launchd on Mac OS X. (Perl)
View the README
Download version 6.3 - gzipped tarball, 11 KB
Last update: March 2007

Changes file ownership on systems with restricted chown. (Bourne shell)
View the README
Download version 1.1 - gzipped tarball, 3 KB
Last update: March 2008

Generates memorable passwords that are tough to crack. In addition to the command-line tool, the package includes an AppleScript for use on Mac OS X. (Perl, AppleScript)
View the README
Download version 5.8 - gzipped tarball, 5 KB
Last update: April 2007

Prints the most recent versions of installed Solaris patches. (Bourne shell)
View the README
Download version 1.1 - gzipped tarball, 3 KB
Last update: March 2006

Prints out serial number of Mac OS X system. (Bourne shell)
View the README
Download version 1.0 - gzipped tarball, 2 KB
Last update: July 2006

Generates private keys for use with Solaris 8 IPsec and with FreeS/WAN (Linux IPsec). (Perl)
View the README
Download version 1.3 - gzipped tarball, 4 KB
Last update: July 2003

Login shell for SSH jump box that permits access to internal SSH servers. (Perl)
View the README
Download version 3.1 - gzipped tarball, 4 KB
Last update: March 2008

lookx (hostx, userx, groupx, servx)
Looks up hostnames and IP addresses, usernames and UIDs, groups and GIDs, services and port numbers, using a system's native mechanisms. (Perl)
View the README
Download version 5.0 - gzipped tarball, 5 KB
Last update: October 2008

Runs a command on an infinite loop, sending output to syslog. (Bourne shell)
View the README
Download version 1.4 - gzipped tarball, 3 KB
Last update: August 2003

Displays network driver parameters. (Perl)
View the README
Download version 1.12 - gzipped tarball, 7 KB
Last update: May 2008

Enables and disables SysV-style init scripts. (Bourne shell)
View the README
Download version 1.7 - gzipped tarball, 4 KB
Last update: April 2007

Reads fields from a formatted text file. Used by rshall. (Perl)
View the README
Download version 2.3 - gzipped tarball, 14 KB
Last update: November 2005

Runs commands on multiple remote hosts simultaneously. (Perl)
View the README
Download version 12.2 - gzipped tarball, 11 KB
Last update: November 2008

Renames users' Mac OS X ByHost preferences files, either from the CLI or as a login hook. (Perl)
View the README
Download version 4.1 - gzipped tarball, 4 KB
Last update: August 2003

Manages NIS maps under revision control. (Bourne shell)
View the README
Download version 3.6 - gzipped tarball, 7 KB
Last update: November 2008

Manages Samba winbind user/SID mappings. (Perl)
View the README
Procedures for Enabling Active Directory Authentication on UNIX
Download version 1.1 - gzipped tarball, 8 KB
Last update: January 2006

Bgsh - Global Shell run commands in parallel to multiple machines

The idea behind this tool originally came from wanting to do something on each machine in our network. Existing scripts would serially go to each machine, run the command, wait for it to finish, and continue to the next machine. There was no reason why this couldn't be done in parallel. The problems, however, were many. First of all, the output from finishing parallel jobs needs to be buffered in such a way that different machines don't print their results on top of each other. A final bit was added because it was nicer to have output alphabetical rather than first-done, first-seen. The result is a parallel job spawner that displays output from the machines alphabetically, as soon as it is available. If ``alpha'' takes longer than ``zebra'', there will be no output past ``alpha'' until it is finished. As soon as ``alpha'' is finished, though, everyone's output is printed.
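The buffering scheme described above can be sketched in a few lines of portable shell: each parallel job writes to its own file, and the files are printed in sorted order once every job has finished. The host names are invented and ssh is replaced by a local echo so the sketch runs anywhere:

```shell
#!/bin/sh
# One buffer file per host prevents interleaved output.
tmp=$(mktemp -d)
for host in zebra alpha mike; do
    # Real use would be: ssh "$host" "$cmd" > "$tmp/$host" 2>&1 &
    ( echo "output from $host" > "$tmp/$host" ) &
done
wait    # let every job finish

# Shell globs expand in sorted order, giving the alphabetical report.
report=$(for f in "$tmp"/*; do echo "### ${f##*/}:"; cat "$f"; done)
echo "$report"
rm -rf "$tmp"
```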

Sending a SIGUSR1 to gsh(1) will cause it to report which machines are still pending. (Effectively turns on --debug for one cycle.)

Go here to download.

Latest version is 1.0.2.
$Id: index.html,v 1.11 2006/05/25 23:16:28 nemesis Exp $

This tool was developed totally separately from another tool which has the same name. It looks like that author and I had the same idea. The significant differences between the versions appear to be that I cleaned up the macro language and added a lot of options for behavior.

The README file says:
Well, this is seriously undocumented code.  The short version is:

	perl Makefile.PL
	make install

And then create a file called /etc/ghosts which lists all the machines you
want to contact.  It would look something like this:

	# Macros
	# Machines
	# Name		Group		Hardware	OS
	bilbo		prod		intel		linux
	baggins		prod		e4500		solaris
	tolkien		devel		e450		solaris
Machine groups are run together with "+"s and "-"s as you see fit:

	ghosts intel+e450

	ghosts prod-intel

The "ghosts" command just shows the resulting list.  "gsh" a group to run
a command:

	gsh devel+intel "cat /etc/motd"
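The group arithmetic can be sketched against the sample /etc/ghosts table with awk: a machine belongs to a group when the group name appears anywhere on its line, and "-" subtracts a second group's matches. For example, "prod-intel" (production machines not on intel hardware) reduces to:

```shell
# Recreate the sample /etc/ghosts from the README above.
cat > ghosts.sample <<'EOF'
# Name    Group    Hardware    OS
bilbo     prod     intel       linux
baggins   prod     e4500       solaris
tolkien   devel    e450        solaris
EOF

# prod-intel: keep lines tagged prod, drop those also tagged intel.
members=$(awk '!/^#/ && /(^|[ \t])prod([ \t]|$)/ && !/(^|[ \t])intel([ \t]|$)/ { print $1 }' ghosts.sample)
echo "$members"   # prints: baggins
rm -f ghosts.sample
```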

You'll need to have ssh set up with trusted RSA keys, though.  I should
cover that in here too, but it's REALLY late tonight, and I just want to
get this posted so my buddy will quit bugging me about downloading the
"latest" version.  :P

See the TODO file for the huge list of things I need to do.  Mostly
documentation.  :)

Credit where credit is due: this is very very loosely based on the "gsh" tool
that came (comes?) with the Perl distribution, and on extra work by 
Mike Murphy.  My version will do things in parallel, and does proper macro
expansions.  It is released under the GNU General Public License.

Kees Cook

Run commands on multiple servers by Milan Babuskov

Software Woes

I just discovered this very cool feature of Konsole. You can log into multiple servers (via ssh) and run the same command in each Konsole tab at once. It's great when you have many computers with same configuration. Just log in, and select one of Konsole's tabs to be the one to "broadcast" input to all others. It works for all tabs in a single Konsole window.

It's also useful when you have several users on the same computer and you wish to make sure all of them have the same rights, and that they can perform some operations without stepping on each other's toes.

One of the problems is monitoring the effects of commands. Well, you can detach the tabs (Detach Session menu item) after you set up the broadcasting. If you have large enough screen, you can set up 8 or 9 windows nicely, and watch what's happening. Really useful stuff.

One warning though: don't forget to turn it off once you're done. It's easy to forget and start some clean-up job (rm -rf /) that is only meant for one of the machines.

Fanout and fanterm - run commands on multiple remote machines at once

Fanout and fanterm are two utilities that allow you to run commands on multiple machines. The difference is that fanout only runs non-interactive commands (like dd, cat, adduser, uname -a, etc.) and pipelines built of these. The output is collected into a single display that can be viewed by less or redirected to a file.

Fanterm, on the other hand, allows you to run interactive text-mode commands on multiple machines at the same time. Your keystrokes are sent to a shell or application running on each of the target systems. The output from each system is shown in a separate xterm.

See below for examples and sample output.


Fanout allows you to run non-interactive commands on remote machines simultaneously, collecting the output in an organized fashion. The syntax is:

fanout [--noping] "{space separated list of systems}" "{commands to run}"

By default, fanout pings each of the remote machines to make sure they're up before trying to ssh to them. If they're not pingable (because of a firewall), put --noping as the first parameter.

A "System" is a bit of a misnomer; it could be a fully qualified domain name, an entry in /etc/hosts, an IP address, an entry in ~/.ssh/config, or any of those preceded with user_account@ . In short, if you can type ssh something and get a command prompt, it can be used as a "system" above.

You can run as many commands as you'd like on the remote systems. These need to be separated by semicolons. You can also run pipelines of commands, such as

cat /etc/passwd | grep '^gparker:' | awk -F: '{print $3}'
to see what uid gparker has on each of the remote systems.


Sample run

[wstearns@sparrow fanout]$ fanout "localhost wstearns@localhost aaa.bbb.ccc" "uptime" | less
aaa.bbb.ccc unavailable
Starting localhost
Starting wstearns@localhost
Fanout executing "uptime"
Start time Fri Apr 7 00:13:07 EDT 2000 , End time Fri Apr 7 00:13:20 EDT 2000
==== On aaa.bbb.ccc ====
==== Machine unreachable by ping

==== On localhost ====
   12:13am  up 3 days, 10:44,  0 users,  load average: 0.17, 0.17, 0.22

==== As wstearns on localhost ====
   12:13am  up 3 days, 10:44,  0 users,  load average: 0.15, 0.16, 0.22

The command(s) you execute run concurrently on each remote machine. Output does not show up until all are done.


From an xterm, type:

fansetup onemachine anothermachine user@yetathirdmachine

and you'll get 3 additional xterms. Type your commands in the original terminal; each command will be sent to each machine and you'll see the output from each machine in the other xterms. This even works for interactive commands like editors.

Here's an example:

capistrano -- executes commands in parallel on multiple servers - Debian Bug report logs

There isn't a Capistrano package in the Debian system, and it's a very good utility.

Here is a description of what Capistrano is:

Capistrano is a utility that can execute commands in parallel on multiple
servers. It allows you to define tasks, which can include commands that
are executed on the servers. You can also define roles for your servers,
and then specify that certain tasks apply only to certain roles.

More information:

-- System Information:
Debian Release: lenny/sid
APT prefers unstable
APT policy: (500, 'unstable')
Architecture: i386 (i686)

Kernel: Linux 2.6.18-3-686 (SMP w/2 CPU cores)
Locale: LANG=pt_BR.UTF-8, LC_CTYPE=pt_BR.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/bash

AdminCoPy A wrapper for SSH/SCP to run one command on many hosts.

ACP is basically a wrapper for SSH and SCP that allows a user to select, or manually enter, a group of hosts to connect to. The user can run a command or copy some files/directories to multiple hosts by issuing a single command on the "admin" host. It requires the fping program, as it checks the specified hosts for connectivity, and will only try to run the command on/copy files to hosts that are reachable.

cssh Central Secure Shell

A tool to run multiple commands to multiple hosts from a central shell over SSH.

A Perl program that uses Net::SSH::Perl that allows systems administrators to run multiple commands to multiple hosts from one central host.

I work in a shop with approximately 100 servers running Linux and Solaris. Every
Tuesday we do a publish of our production content that requires me to log in to
several machines and run the same commands on each. I thought it was ridiculous
to open 8 terminal windows to run the same dumb commands over and over again.
Thus the birth of cssh. Cssh is a Perl script that I wrote to allow me to admin
my servers by performing any number of commands on any number of servers from
one centrally located shell login.

As I was writing cssh I started to think of other things I could put into cssh to
allow administration to be that much easier and more flexible. Thus came the
current project called cssh. The program is small all by itself. It's sort of
large, however, when you consider all the modules that are needed to make the
ssh portion of cssh work.

Cssh is under fairly heavy development. I schedule as much time as I can to work
on it around my personal life issues :). So, I have a fairly large todo list. I
am excited for others to test it out and give me feedback on what they feel should
be done on cssh.

You can click below for the current documentation. This project is very
new so please don't expect too much, yet.

Here is the file itself. Please read the documentation to learn how to use
cssh. It might be a little confusing at first, but as you read through the
documentation, its use should become more understandable. Feel free to contact
me for support. Right now, since the project is so new, I should be able to
assist you. I suspect that if this project matures, personal support will
become increasingly more difficult. We will cross that bridge when we get there.

READ the documentation here!!

Added: Small install script. See documentation for installation *not* using install
script. To install using the install script simply extract the tarball:

tar xvfz cssh_beta-0.03.2.tar.gz
or depending on which compressed format you download
tar xvfj cssh_beta-0.03.2.tar.bz2

cd cssh_beta-0.03.2

./ < prefix >

Where prefix is the path that you want to install cssh.
If you run the install script as root then the cssh_configs directory will be installed
in /etc by default. The man page will be placed in /usr/share/man/man1 by default
as well, and if you don't specify a prefix as root then cssh will be installed in
/usr/bin (it should actually be /usr/local/bin) otherwise it will be installed in
< prefix >/bin. Currently, there is no way, short of changing the script a
little, to change this behavior. I whipped this up in a matter of minutes and tested
it for a few hours making little changes here and there. It's meant to make installation
a little easier. It will install Net::SSH::Perl for you. NOTE! You really should
run the installation script as root given the nature of Net::SSH::Perl.

First alpha release: cssh alpha .03
Last alpha release (gzip): cssh_alpha-

Latest beta release (gzip): cssh_beta-
Latest beta release (bzip2): cssh_beta-

cssh now has a maillist provided by sourceforge. You can sign-up here.

The Distribulator

A cluster-aware, SSH-based command execution and file transfer utility.


MUltihost SSH Wrapper


File Integrity Command & Control



A remote administration tool for managing multiple servers.

SSH Enchanter

A small library for scripting SSH sessions. -- Java, Python, Ruby


Adds multiple ssh keys to an agent with a minimum number of passphrase requests.

Distributed Internet Archiving Program
Fast, low-cost way to make systems more robust by backing up in multiple places.

28. dsh

dsh (the distributed shell) is a program which executes a single command on multiple remote machines. It can execute this command in parallel (i.e., on any number of machines at a time) or in serial (by specifying parallel execution of the command on 1 node at a time). It was originally designed to work with rsh, but has full support for ssh and with a little tweaking of the top part of the dsh executable, should work with any program that allows remote execution of a command without an interactive login.
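dsh's parallel-versus-serial switch boils down to a batching loop: launch up to $fanout jobs at a time, where fanout=1 reproduces the serial behavior. A runnable sketch (host names invented, ssh stubbed with a local echo):

```shell
#!/bin/sh
fanout=2          # concurrency limit; set to 1 for serial execution
: > dsh.out       # shared result file
i=0
for host in node1 node2 node3 node4; do
    # Real use would be: ssh "$host" "$cmd" >> dsh.out &
    ( echo "ran on $host" >> dsh.out ) &
    i=$((i + 1))
    if [ $((i % fanout)) -eq 0 ]; then
        wait      # block until the current batch of $fanout jobs finishes
    fi
done
wait
result=$(cat dsh.out)
echo "$result"
rm -f dsh.out
```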

Automating ssh and scp across multiple hosts

Once installed the pssh package installs a number of new commands:

parallel-slurp: copies files from multiple remote hosts to the local system. We'll demonstrate the usage shortly.
parallel-ssh: runs commands on a number of systems in parallel. We'll also demonstrate this command shortly.
parallel-nuke: lets you kill processes on multiple remote systems.
parallel-scp: the opposite of parallel-slurp; it copies a file, or files, to multiple remote systems.

General Usage

Each of the new commands installed by the pssh package will expect to read a list of hostnames from a text file. This makes automated usage a little bit more straightforward, and simplifies the command-line parsing.

Running Commands On Multiple Hosts

The most basic usage is to simply run a command upon each host and not report the output. For example, given a file hosts.txt containing a number of hostnames, we can run:
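A typical invocation (hostnames here are made up; the command name is the one the Debian pssh package installs) would be parallel-ssh -h hosts.txt -i uptime, where -i asks for inline output. Since that needs reachable hosts, here is a self-contained stand-in for the read-the-host-file-and-fan-out loop at its core, with ssh replaced by a local echo:

```shell
# Hypothetical host list, one hostname per line as pssh expects.
printf '%s\n' web1.example.com web2.example.com > hosts.txt

# What parallel-ssh does in spirit: one job per listed host, in parallel.
: > results.txt
while read -r host; do
    # Real use would be: ssh "$host" uptime >> results.txt &
    ( echo "$host: up 42 days, load average: 0.10" >> results.txt ) &
done < hosts.txt
wait
result=$(cat results.txt)
echo "$result"
rm -f hosts.txt results.txt
```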


FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in our efforts to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a 'fair use' of any such copyrighted material as provided for in section 107 of the US Copyright Law. In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit exclusively for research and educational purposes. If you wish to use copyrighted material from this site for purposes of your own that go beyond 'fair use', you must obtain permission from the copyright owner.



Copyright © 1996-2016 by Dr. Nikolai Bezroukov. The site was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.

The site uses AdSense so you need to be aware of Google privacy policy. If you do not want to be tracked by Google, please disable Javascript for this site. This site is perfectly usable without Javascript.

Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops, like a living tree...

You can use PayPal to make a contribution, supporting development of this site and speeding up access.


The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author's present and former employers, SDNP or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.

Last modified: September, 18, 2017