May the source be with you, but remember the KISS principle ;-)


Unix Shells

Ksh93 and Bash Shells for Unix System Administrators


Unix shell history

Best Shell Books

Recommended Links

Papers, ebooks, tutorials

Unix Tools

Unix Utilities


Bourne Shell and portability




Bash Built-in Variables

Readline and inputrc 

IFS variable

Command completion


 Bash history and bang commands

Comparison operators

 Arithmetic Expressions in BASH

String Operations in Shell

Bash Control Structures

if statements

Loops in Bash

Usage of pipes with loops in shell

Case statement

Bash select statement

Input and Output Redirection

Advanced filesystem navigation

Shell Prompts Customization

Subshells

Brace Expansion

Examples of .bashrc files

History substitution

Vi editing mode

Pushd and popd

Pretty Printing

Functions

Unix shell vi mode

The Unix Hater's Handbook

Sysadmin Horror Stories

bash Tips and Tricks

Tips

Humor

Etc
"The lyfe so short, the craft so long to lerne,''

Chaucer c. 1340–1400
(borrowed from the bash FAQ)

This collection of links is oriented toward students (initially it was provided as reference material for my shell programming university course) and is designed to emphasize the usage of advanced shell constructs and pipes in shell programming (mainly in the context of ksh93 and bash 3.2+, which have good support for those constructs). An introductory paper, Slightly Skeptical View on Shell, discusses the shell as a scripting language and as one of the earliest examples of very high level languages. The page might also be useful for system administrators, who constitute a considerable percentage of shell users and the lion's share of shell programmers.

This page is the main page of a set of sub-pages devoted to the shell that are collectively known as Shellorama. The most important of them are listed in the contents above.

I strongly recommend getting a so-called orthodox file manager (OFM). This tool can immensely simplify Unix filesystem navigation and file operations (Midnight Commander, while defective in handling the command line, can be tried first, as it is an active project and it provides ftp and sftp virtual filesystems for remote hosts).

Actually, filesystem navigation in the shell is an area of great concern, as there are several serious problems with the current tools for Unix filesystem navigation. I would say that usage of the cd command (the most common method) is conceptually broken and deprives people of a full understanding of the Unix filesystem; I doubt that it can be fixed within the shell paradigm (C-shell made an attempt to compensate for this deficiency by introducing history and the popd/pushd/dirs troika, but this proved to be neither necessary nor sufficient to compensate for the problems with in-depth understanding of the classical Unix hierarchical filesystem inherent in purely command line navigation ;-). Paradoxically, sysadmins who use OFMs usually have a much better understanding of the power and flexibility of the Unix filesystem than people who use the command line. All in all, usage of an OFM in system administration represents the Eastern European school of administration, and it might be a better way to administer systems than the typical "North American way".

The second indispensable tool for a shell programmer is Expect. This is a very flexible application that can be used for automating interactive sessions as well as for automating the testing of applications.

People who know shell and awk and/or Perl well are usually considered to be advanced Unix system administrators (this is another way of saying that system administrators who do not know the shell/awk/Perl troika well are essentially various flavors of entry-level system administrators, no matter how many years of experience they have). I would argue that no system administrator can consider himself a senior Unix system administrator without in-depth knowledge of both one of the OFMs and Expect.


An OFM tends to educate the user about the Unix filesystem in a subtle, but definitely psychologically superior, way. The widespread use of OFMs in Europe, especially in Germany and Eastern Europe, tends to produce specialists with substantially greater skills at handling Unix (and Windows) filesystems than users who only have experience with more primitive command-line-based navigation tools.

And yes, cd navigation is conceptually broken. This is not a bizarre opinion of the author, this is a fact: when you do not even suspect that a particular part of the tree exists, something is conceptually broken. People using the command line know only fragments of the filesystem structure, like the blind men who each know only part of the elephant. A current Unix filesystem, with, say, 13K directories for a regular Solaris installation, is simply unsuitable for the "cd way of navigation"; 1K directories was probably OK, but when there are over 10K directories you need something else. Here quantity turns into quality. That's my point.

The page provides rather long quotes from web pages, as web pages are a notoriously unreliable medium and can disappear without a trace. That makes this page somewhat difficult to browse, but it's not designed for browsing; it's designed as supplementary material for a university shell course and for self-education.

Note: A highly recommended shell site is SHELLdorado by Heiner Steven.
This is a really excellent site with a good coding practice section,
some interesting example scripts, and tips and tricks.

A complementary page with Best Shell Books Reviews is also available. Although the selection of the best book is to a certain extent individual, the selection of a bad book is not, so this page might at least help you avoid the most common bad books (often the books recommended by a particular university are weak, boring, or both; Unix Shell by Example is one such example ;-). Still, the shell literature is substantial (over a hundred books), and that means you can find a suitable textbook. Please be aware of the fact that few authors of shell programming books have the broad understanding of Unix necessary for writing a comprehensive shell book.

IMHO the first edition of O'Reilly's Learning the Korn Shell is probably one of the best and contains a nice set of examples (the second edition is more up to date but generally weaker). The first edition also has the advantage of being available in HTML form (O'Reilly Unix CD). It does not cover ksh93, but it presents ksh in a unique way that no other book does. Some useful examples can also be found in the UNIX Power Tools book (see the archive of all shell scripts (684 KB); the book is available in HTML in one of the O'Reilly CD Bookshelf collections).

Still, one needs to understand that Unix shells are pretty archaic languages which were designed with compatibility with dinosaur shells in mind (and the Bourne shell is a dinosaur shell by any definition). Designers, even such strong designers as David Korn, were hampered by compatibility problems from the very beginning (in a way it is amazing how much ingenuity they demonstrated in enhancing the Bourne shell; I am really amazed how David Korn managed to extend the Bourne shell into something much more usable and much closer to a "normal" scripting language. In this sense ksh93 stands as a real pinnacle of shell compatibility and a testament to the art of shell language extension).

That means that outside of interactive usage and small one-page scripts they have generally outlived their usefulness. That's why for more or less complex tasks Perl is usually used (and should be used) instead of shells. While shells have continued to improve since the original C-shell and Korn shell, the shell syntax is frozen in space and time and now looks completely archaic. There are a large number of problems with this syntax, as it does not cleanly separate lexical analysis from syntax analysis. Bash 3.2 actually made some progress in overcoming the most archaic features of old shells, but it still has its own share of warts (for example, the last stage of a pipe does not run at the same level as the script that contains the pipe, so variable assignments made there are lost).
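A minimal sketch of that particular wart (the lastpipe workaround is bash 4.2+ only, and only applies when job control is off):

count=0
printf 'a\nb\nc\n' | while read -r line; do
    count=$((count + 1))
done
echo "count=$count"     # prints 0, not 3: the while loop ran in a subshell

# bash 4.2+ offers a way out in non-interactive scripts:
# shopt -s lastpipe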

Some syntax features of the shell are idiosyncratic, as Steve Bourne played with Algol 68 before starting work on the shell. In a way, he proved to be the most influential bad language designer, the one with the most lasting influence on the Unix environment (that does not exonerate the subsequent designers, who probably could have taken a more aggressive stance on eliminating the initial shell design blunders by marking them as "legacy").

For example, there is very little logic in how different types of blocks are delimited in shell scripts. Conditional statements end with the (broken) classic Algol 68 reversed-keyword syntax: 'if condition; then echo yes; else echo no; fi', but loops are structured like a perverted version of PL/1 (do ... done), individual case branches end with ';;', and functions have C-style bracketing "{", "}". M. D. McIlroy, as Steve Bourne's manager, should be ashamed. After all, by that time the level of compiler construction knowledge was quite sufficient to avoid such blunders (David Gries' book was published in 1971), and the Bell Labs staff were not a bunch of enthusiasts ;-).
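A short sketch that puts all four conventions side by side:

if [ -r /etc/passwd ]; then echo readable; fi       # if ... fi
for f in /etc/*.conf; do echo "$f"; done            # do ... done
case "$1" in
    start) echo starting ;;                         # each case branch ends with ;;
    stop)  echo stopping ;;
esac
greet() { echo "hello $1"; }                        # functions use C-style braces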

Also, the original Bourne shell was an almost pure macro language. It performed variable substitution, tokenization, and other operations one line at a time without understanding the underlying syntax. This results in many unexpected side effects. Consider a simple command:
rm $file
If the variable $file accidentally contains a space, it will be treated as two separate arguments to the rm command, with possibly nasty side effects. To fix this, the user has to make sure every use of a variable is enclosed in quotes, as in rm "$file".
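A quick illustration of the word splitting (printf prints one bracketed line per argument it receives, so the split is easy to see):

file="old report.txt"        # note the embedded space
printf '[%s]\n' $file        # two arguments: [old] and [report.txt]
printf '[%s]\n' "$file"      # one argument:  [old report.txt]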

Variable assignments in the Bourne shell are whitespace sensitive. 'foo=bar' is an assignment, but 'foo = bar' is not: it is an invocation of the command foo with "=" and "bar" as two arguments. This is another strange idiosyncrasy.
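In other words (assuming no command named foo exists on the system):

foo=bar      # assignment: the variable foo gets the value bar
foo = bar    # no assignment: bash tries to run the command foo with arguments '=' and 'bar'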

There is also an overlap between aliases and functions. Aliases are positional macros that are recognized only as the first word of a command, as in the classic alias ll='ls -l'. Because of this, aliases have several limitations.

Functions are not positional and can in most cases emulate alias functionality:
ll() { ls -l $*; }
The curly brackets are some sort of pseudo-commands, so skipping the semicolon in the example above results in a syntax error. As there is no clean separation between lexical analysis and syntax analysis, removing the whitespace between the opening bracket and 'ls' will also result in a syntax error.
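Both failure modes described above are easy to reproduce:

ll() { ls -l "$@"; }     # correct
ll() { ls -l "$@" }      # error: '}' is taken as an argument to ls, so the body never closes
ll() {ls -l "$@"; }      # error: '{ls' is read as a single word, so there is no brace group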

Since the use of variables as commands is allowed, it is impossible to reliably check the syntax of a script, as substitution can accidentally produce a keyword, as in this example that I found in the paper about fish (not that I like or recommend fish):

if true; then if [ $RANDOM -lt 1024 ]; then END=fi; else END=true; fi; $END
Both bash and zsh try to determine if the command in the current buffer is finished when the user presses the return key, but because of issues like this, they will sometimes fail.

Dr. Nikolai Bezroukov


Old News ;-)

2009 2008 2007 2006 2005 2004 2003 2002

[Sep 12, 2016] CLI Magic Bash complete

Notable quotes:
"... Shashank Sharma is studying for a degree in computer science. He specializes in writing about free and open source software for new users. He is the co-author of Beginning Fedora , published by Apress. ..."
Sep 12, 2016 |

CLI Magic: Bash complete

By Shashank Sharma on May 08, 2006 (8:00:00 AM)


The auto complete feature of the Bourne Again SHell makes bash one of the most loved and newbie-friendly Linux shells. Just by pressing the Tab key you can complete commands and filenames. Press the Tab key twice and all files in the directory get displayed. But you can do more with autocomplete -- such as associating file types with applications, and automatically designating whether you're looking for directories, text, or MP3 files. With simple commands such as complete and the use of Escape sequences, you can save time and have fun on the command line. You can use the dollar sign ($), tilde (~), and at (@) characters along with the Tab key to get quick results in autocomplete.

For instance, if you want to switch to the testing subdirectory of your home directory, you can either type cd /ho[Tab]/tes[Tab] to get there, or use the tilde -- cd ~tes[Tab] . If the partial text -- that is, the portion before you press Tab -- begins with a dollar sign, bash looks for a matching environment variable. The tilde tells bash to look for a matching user name, and the at-sign tells it to look for a matching hostname.

Escaping is good

The Tab key can complete the names of commands, files, directories, users, and hosts. Sometimes, it is overkill to use the Tab key. If you know that you are looking for a file, or only user names, then use the Escape key instead for completion, as it limits bash's completion field.

You can use several Escape key combinations to tell bash what you are looking for. Invoke Escape key combinations by pressing a key while keeping the Escape key pressed. When looking for a file, you can use the Esc-/ (press / along with Escape) key combination. This will attempt filename completion only. If you have one file and one directory beginning with the letter 'i,' you will have to press the Tab key twice to see all the files:

$ less i <tab><tab>
ideas im articles/

When you type less i and press '/' while keeping the Escape key pressed, bash completes the filename to 'ideas.'

While Control key combinations work no matter how long you keep the Ctrl key pressed before pressing the second key, this is not the case with Escape key sequences. The Esc-/ sequence will print out a slash if you delay in pressing the / key after you press the Escape key.

You can also use Escape along with the previously discussed $ , ~ , and @ keys. Esc-$ , for example, completes only variable names. You can use Esc-! when you wish to complete command names. Of course you need to press the Shift key in order to use any of the "upper order" characters.

Wildcard expansion
The asterisk (*), caret (^), and question mark (?) are all wildcard characters. All *nix shells can perform wildcard expansion. Use the asterisk wildcard if you don't wish to view any hidden files. ls * would display all files and directories in the current directory except for those beginning with a dot (hidden).

If you wish to view only files with five-letter names beginning with a given letter, use the question mark wildcard. ls p???? will display only files with five-letter names starting with 'p.' You can also use square brackets for filename completion. ls [a-d]* displays all files and directories that begin with any letter between 'a' and 'd.' The caret wildcard excludes files. mv *[^.php] ../ would move all files to the parent directory, excluding those with a .php extension.

Even smarter completion

By default, Tab completion is quite dim-witted. This is because when you have already typed cd down before pressing Tab, you'd expect bash to complete only directory names. But bash goes ahead and displays all possible files and directories that begin with 'down.'

You can, however, convert bash into a brilliant command-reading whiz. As root, edit the /etc/bash.bashrc file. Scroll down to the end of the file till you see the section:

# enable bash completion in interactive shells
#if [ -f /etc/bash_completion ]; then
#  . /etc/bash_completion
#fi

Uncomment this section and voilà, you have given bash powers far beyond your imagination! Not only is bash now smart enough to know when to complete only directory names, it can also complete man pages and even some command arguments.

Don't despair if you don't have root privileges. Just edit the last section of your ~/.bashrc file.

Associating application with file types

The complete command in bash lets you associate file types with certain applications. If after associating a file type to an application you were to write the name of the application and press Tab, only files with associated file types would be displayed.

complete -G "*.txt" gedit would associate .txt files with gedit. The downfall of using complete is that it overwrites bash's regular completion. That is, if you have two files named invoice.txt and ideas.txt, gedit [Tab][Tab] displays both the files, but gedit inv[Tab] , which should complete to invoice.txt, no longer works.

complete associations last only for the current bash session. If you exit and open a console, gedit will no longer be associated with .txt files. You need to associate file types to applications each time you start a new console session.

For permanent associations, you need to add the command to one of the bash startup scripts, such as ~/.bashrc. Then, whenever you are at the console, gedit will be associated with .txt files.
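For example, a one-liner sketch that appends the association used above to your startup file:

echo "complete -G '*.txt' gedit" >> ~/.bashrc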

Shashank Sharma is studying for a degree in computer science. He specializes in writing about free and open source software for new users. He is the co-author of Beginning Fedora, published by Apress.

[Dec 06, 2015] Bash For Loop Examples

A very nice tutorial by Vivek Gite (created October 31, 2008; last updated June 24, 2015). His mistake is putting the new for loop syntax too far into the tutorial. It should be emphasized, not hidden.
June 24, 2015 |

... ... ...

Bash v4.0+ has inbuilt support for setting up a step value using {START..END..INCREMENT} syntax:

echo "Bash version ${BASH_VERSION}..."
for i in {0..10..2}
do
     echo "Welcome $i times"
done

Sample outputs:

Bash version 4.0.33(0)-release...
Welcome 0 times
Welcome 2 times
Welcome 4 times
Welcome 6 times
Welcome 8 times
Welcome 10 times

... ... ...

Three-expression bash for loops syntax

This type of for loop shares a common heritage with the C programming language. It is characterized by a three-parameter loop control expression, consisting of an initializer (EXP1), a loop test or condition (EXP2), and a counting expression (EXP3).

for (( EXP1; EXP2; EXP3 ))

A representative three-expression example in bash as follows:

for (( c=1; c<=5; c++ ))
   echo "Welcome $c times"
... ... ...

Jadu Saikia, November 2, 2008, 3:37 pm

Nice one. All the examples are explained well, thanks Vivek.

seq 1 2 20
output can also be produced using jot

jot - 1 20 2

The infinite loop, as everyone knows, has the following alternatives:

while :


Andi Reinbrech, November 18, 2010, 7:42 pm
I know this is an ancient thread, but thought this trick might be helpful to someone:

For the above example with all the cuts, simply do

set `echo $line`

This will split line into positional parameters and you can after the set simply say

F1=$1; F2=$2; F3=$3

I used this a lot many years ago on solaris with "set `date`", it neatly splits the whole date string into variables and saves lots of messy cutting :-)

… no, you can't change the FS, if it's not space, you can't use this method

Peko, July 16, 2009, 6:11 pm
Hi Vivek,
Thanks for this a useful topic.

IMNSHO, there may be something to modify here
Latest bash version 3.0+ has inbuilt support for setting up a step value:

for i in {1..5}
1) The increment feature seems to belong to the version 4 of bash.
Accordingly, my bash v3.2 does not include this feature.

BTW, where did you read that it was 3.0+ ?
(I ask because you may know some good website of interest on the subject).

2) The syntax is {from..to..step} where from, to, and step are 3 integers.
Your code is missing the increment.

Note that GNU Bash documentation may be bugged at this time,
because in the GNU Bash manual, you will find the syntax {x..y[incr]}
which may be a typo. (missing the second ".." between y and increment).


The Bash Hackers page
again, see
seems to be more accurate,
but who knows ? Anyway, at least one of them may be right… ;-)

Keep on the good work of your own,
Thanks a million.

- Peko

Michal Kaut July 22, 2009, 6:12 am

is there a simple way to control the number formatting? I use several computers, some of which have non-US settings with comma as a decimal point. This means that
for x in $(seq 0 0.1 1) gives 0 0.1 0.2 … 1 on some machines and 0 0,1 0,2 … 1 on others.
Is there a way to force the first variant, regardless of the language settings? Can I, for example, set the keyboard to US inside the script? Or perhaps some alternative to $x that would convert commas to points?
(I am sending these as parameters to another code and it won't accept numbers with commas…)

The best thing I could think of is adding x=`echo $x | sed s/,/./` as a first line inside the loop, but there should be a better solution? (Interestingly, the sed command does not seem to be upset by me rewriting its variable.)


Peko July 22, 2009, 7:27 am

To Michal Kaut:

Hi Michal,

Such output format is configured through LOCALE settings.

I tried :

export LC_CTYPE="en_EN.UTF-8"; seq 0 0.1 1

and it works as desired.

You just have to find the exact value for LC_CTYPE that fits to your systems and your needs.


Peko July 22, 2009, 2:29 pm

To Michal Kaus [2]

Ooops – ;-)
Instead of LC_CTYPE,
LC_NUMERIC should be more appropriate
(Although LC_CTYPE is actually yielding to the same result – I tested both)

By the way, Vivek has already documented the matter :

Philippe Petrinko October 30, 2009, 8:35 am

To Vivek:
Regarding your last example, that is, running a loop through the arguments given to the script on the command line, there is a simpler way of doing this:
# instead of:
# FILES="$@"
# for f in $FILES

# use the following syntax
for arg
do
    # whatever you need here - try: echo "$arg"
    echo "$arg"
done

Of course, you can use any variable name, not only "arg".

Philippe Petrinko November 11, 2009, 11:25 am

To tdurden:

Why wouldn't you use

1) either a [for] loop
for old in * ; do mv ${old} ${old}.new; done

2) Or the [rename] command?
Excerpt from "man rename":

RENAME(1) Perl Programmers Reference Guide RENAME(1)

rename – renames multiple files

rename [ -v ] [ -n ] [ -f ] perlexpr [ files ]

"rename" renames the filenames supplied according to the rule specified
as the first argument. The perlexpr argument is a Perl expression
which is expected to modify the $_ string in Perl for at least some of
the filenames specified. If a given filename is not modified by the
expression, it will not be renamed. If no filenames are given on the
command line, filenames will be read via standard input.

For example, to rename all files matching "*.bak" to strip the
extension, you might say

rename 's/\.bak$//' *.bak

To translate uppercase names to lower, you'd use

rename 'y/A-Z/a-z/' *

- Philippe

Philippe Petrinko November 11, 2009, 9:27 pm

If you set the shell option extglob, Bash understands some more powerful patterns. Here, a pattern-list is one or more patterns, separated by the pipe symbol (|); see the short sketch after the list below.

?(pattern-list) Matches zero or one occurrence of the given patterns
*(pattern-list) Matches zero or more occurrences of the given patterns
+(pattern-list) Matches one or more occurrences of the given patterns
@(pattern-list) Matches one of the given patterns
!(pattern-list) Matches anything except one of the given patterns
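A short sketch of extglob in action (the file names are of course made up):

shopt -s extglob
ls !(*.txt)               # list everything except files ending in .txt
ls *.@(jpg|jpeg|png)      # list files ending in .jpg, .jpeg or .png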


Philippe Petrinko November 12, 2009, 3:44 pm

To Sean:
Right, the sharper a knife is, the easier it can cut your fingers…

I mean: there are side effects to the use of file globbing (as in [ for f in * ]) when the globbing expression matches nothing: the globbing expression is not substituted.

Then you might want to consider using the [nullglob] shell option
to prevent this; see the small sketch below.
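A small sketch of the problem and the nullglob fix (the path is hypothetical):

for f in /tmp/no-such-prefix-*; do
    echo "got: $f"        # without nullglob the literal pattern comes through once
done

shopt -s nullglob
for f in /tmp/no-such-prefix-*; do
    echo "got: $f"        # with nullglob the loop body is never entered
done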

Devil hides in detail ;-)

Dominic January 14, 2010, 10:04 am

There is an interesting difference between the exit value for two different for looping structures (hope this comes out right):
for (( c=1; c<=2; c++ )) do echo -n "inside (( )) loop c is $c, "; done; echo "done (( )) loop c is $c"
for c in {1..2}; do echo -n "inside { } loop c is $c, "; done; echo "done { } loop c is $c"

You see that the first structure does a final increment of c, the second does not. The first is more useful IMO because if you have a conditional break in the for loop, then you can subsequently test the value of $c to see if the for loop was broken or not; with the second structure you can't know whether the loop was broken on the last iteration or continued to completion.

Dominic January 14, 2010, 10:09 am

sorry, my previous post would have been clearer if I had shown the output of my code snippet, which is:
inside (( )) loop c is 1, inside (( )) loop c is 2, done (( )) loop c is 3
inside { } loop c is 1, inside { } loop c is 2, done { } loop c is 2

Philippe Petrinko March 9, 2010, 2:34 pm


And, again, as stated many times above, using [seq] is counterproductive, because it requires a call to an external program, when you should Keep It Short and Simple, using only bash internal functions:

for ((c=1; c<21; c+=2)); do echo "Welcome $c times" ; done

(and I wonder why Vivek is sticking to that old solution, which should be presented only for historical reasons, from when there was no way of using bash internals.
By the way, this historical recall should be placed only at the end of the topic, and not at the top, which makes newbies stick to the out-of-date technique ;-) )

Sean March 9, 2010, 11:15 pm

I have a comment to add about using the builtin for (( … )) syntax. I would agree the builtin method is cleaner, but from what I've noticed with other builtin functionality, I had to check the speed advantage for myself. I wrote the following files:

for ((i=1;i<=1000000;i++))
echo "Output $i"

for i in $(seq 1 1000000)
echo "Output $i"

And here were the results that I got:
time ./
real 0m22.122s
user 0m18.329s
sys 0m3.166s

time ./
real 0m19.590s
user 0m15.326s
sys 0m2.503s

The performance increase isn't too significant, especially when you are probably going to be doing something a little more interesting inside of the for loop, but it does show that builtin commands are not necessarily faster.

Andi Reinbrech November 18, 2010, 8:35 pm

The reason why the external seq is faster, is because it is executed only once, and returns a huge splurb of space separated integers which need no further processing, apart from the for loop advancing to the next one for the variable substitution.

The internal loop is a nice and clean/readable construct, but it has a lot of overhead. The check expression is re-evaluated on every iteration, and a variable on the interpreter's heap gets incremented, possibly checked for overflow etc. etc.

Note that the check expression cannot be simplified or internally optimised by the interpreter because the value may change inside the loop's body (yes, there are cases where you'd want to do this, however rare and stupid they may seem), hence the variables are volatile and get re-evaluated.

I.e., bottom line, the internal one has more overhead; the "seq" version is equivalent to either having 1000000 integers inside the script (hard coded), or reading them once from a text file with 1000000 integers with a cat. Point being that it gets executed only once and becomes static.

OK, blah blah fishpaste, past my bed time :-)


Anthony Thyssen June 4, 2010, 6:53 am

The {1..10} syntax is pretty useful as you can use a variable with it!

limit=10; echo {1..${limit}}     # prints the literal {1..10} instead of expanding

You need to eval it to get it to work!

eval "echo {1..${limit}}"
1 2 3 4 5 6 7 8 9 10

'seq' is not available on ALL systems (MacOSX for example)
and BASH is not available on all systems either.

You are better off either using the old while-expr method for computer compatibility!

   limit=10; n=1;
   while [ $n -le 10 ]; do
     echo $n;
     n=`expr $n + 1`;
   done

Alternatively, use a seq() function replacement…

 # seq_count 10
seq_count() {
  i=1; while [ $i -le $1 ]; do echo $i; i=`expr $i + 1`; done
}

# simple_seq 1 2 10
simple_seq() {
  i=$1; while [ $i -le $3 ]; do echo $i; i=`expr $i + $2`; done
}

# seq_integer [-f format] [start [increment]] end
seq_integer() {
    if [ "X$1" = "X-f" ]
    then format="$2"; shift; shift
    else format="%d"
    fi
    case $# in
    1) i=1 inc=1 end=$1 ;;
    2) i=$1 inc=1 end=$2 ;;
    *) i=$1 inc=$2 end=$3 ;;
    esac
    while [ $i -le $end ]; do
      printf "$format\n" $i;
      i=`expr $i + $inc`;
    done
}

Edited: by Admin – added code tags.

TheBonsai June 4, 2010, 9:57 am

The Bash C-style for loop was taken from KSH93, thus I guess it's at least portable towards Korn and Z.

The seq-function above could use i=$((i + inc)), if only POSIX matters. expr is obsolete for those things, even in POSIX.

Philippe Petrinko June 4, 2010, 10:15 am

Right Bonsai,
( )

But FOR C-style does not seem to be POSIXLY-correct…

Read on-line reference issue 6/2004,
Top is here,

and the Shell and Utilities volume (XCU) T.OC. is here
doc is:

and FOR command:

Anthony Thyssen June 6, 2010, 7:18 am

TheBonsai wrote…. "The seq-function above could use i=$((i + inc)), if only POSIX matters. expr is obsolete for those things, even in POSIX."

I am not certain it is in Posix. It was NOT part of the original Bourne Shell, and on some machines, I deal with Bourne Shell. Not Ksh, Bash, or anything else.

Bourne Shell syntax works everywhere! But as 'expr' is a builtin in more modern shells, it is not a big loss or slowdown.

This is especially important if writing a replacement command, such as for "seq" where you want your "just-paste-it-in" function to work as widely as possible.

I have been shell programming pretty well all the time since 1988, so I know what I am talking about! Believe me.

MacOSX has in this regard been the worst, and a very big backward step in UNIX compatibility. Two years after it came out, its shell still did not even understand most of the normal 'test' functions. A major pain to write shell scripts that need to also work on this system.

TheBonsai June 6, 2010, 12:35 pm

Yea, the question was if it's POSIX, not if it's 100% portable (which is a difference). The POSIX base more or less is a subset of the Korn features (88, 93), pure Bourne is something "else", I know. Real portability, which means a program can go wherever UNIX went, only in C ;)

Philippe Petrinko November 22, 2010, 8:23 am

And if you want to get rid of double-quotes, use:

one-liner code:
while read; do record=${REPLY}; echo ${record}|while read -d ","; do field="${REPLY#\"}"; field="${field%\"}"; echo ${field}; done; done<data

script code, with some text added to better see the record and field breakdown:

while read; do
  echo "New record"
  record=${REPLY}
  echo ${record} | while read -d ","; do
    field="${REPLY#\"}"; field="${field%\"}"
    echo "Field is :${field}:"
  done
done < data

Does it work with your data?

- PP

Philippe Petrinko November 22, 2010, 9:01 am

Of course, all the above code was assuming that your CSV file is named "data".

If you want to use any file name with the script, replace the final line

done < data

with a plain

done
And then use your script file (named for instance "myScript") with standard input redirection:

myScript < anyFileNameYouWant


Philippe Petrinko November 22, 2010, 11:28 am

Well, no, there is a bug: the last field of each record is not read. It needs a workaround, and maybe IFS modification! After all, that's what it was built for… :O)

Anthony Thyssen November 22, 2010, 11:31 pm

Another bug is that the inner loop is a pipeline, so you can't assign variables for use later in the script. But you can use '<<<' to break the pipeline and avoid the echo.

But this does not help when you have commas within the quotes! Which is why you needed quotes in the first place.

In any case it is a little off topic. Perhaps a new thread for reading CSV files in shell should be created.

Philippe Petrinko November 24, 2010, 6:29 pm

Would you try this one-liner script on your CSV file?

This one-liner assumes that CSV file named [data] has __every__ field double-quoted.

while read; do r="${REPLY#\"}";echo "${r//\",\"/\"}"|while read -d \";do echo "Field is :${REPLY}:";done;done<data

Here is the same code, but for a script file, not a one-liner tweak.

# script
# 1) Usage
# This script reads from standard input
# any CSV with double-quoted data fields
# and breaks down each field on standard output
# 2) Within each record (line), _every_ field MUST:
# - Be surrounded by double quotes,
# - and be separated from the preceding field by a comma
# (not the first field of course, no comma before the first field)
while read
do
  echo "New record" # this is not mandatory - just for explanation
  # store REPLY and remove opening double quote
  record="${REPLY#\"}"
  # replace every "," by a single double quote
  record="${record//\",\"/\"}"
  echo ${record} | while read -d \"
  do
    # store REPLY into variable "field"
    field="${REPLY}"
    echo "Field is :${field}:" # just for explanation
  done
done

This script, named here [], must be used like so: [] < my-csv-file-with-doublequotes

Philippe Petrinko November 24, 2010, 6:35 pm


By the way, using [REPLY] in the outer loop _and_ the inner loop is not a bug.
As long as you know what you do, this is not problem, you just have to store [REPLY] value conveniently, as this script shows.

TheBonsai March 8, 2011, 6:26 am
for ((i=1; i<=20; i++)); do printf "%02d\n" "$i"; done

nixCraft March 8, 2011, 6:37 am

+1 for printf due to portability, but you can use bashy .. syntax too

for i in {01..20}; do echo "$i"; done

TheBonsai March 8, 2011, 6:48 am

Well, it isn't portable per se, it makes it portable to pre-4 Bash versions.

I think a more or less "portable" (in terms of POSIX, at least) code would be

while [ "$((i >= 20))" -eq 0 ]; do
  printf "%02d\n" "$i"

Philip Ratzsch April 20, 2011, 5:53 am

I didn't see this in the article or any of the comments so I thought I'd share. While this is a contrived example, I find that nesting two groups can help squeeze a two-liner (once for each range) into a one-liner:

for num in {{1..10},{15..20}};do echo $num;done

Great reference article!

Philippe Petrinko April 20, 2011, 8:23 am

Nice thing to think of, using brace nesting, thanks for sharing.

Philippe Petrinko May 6, 2011, 10:13 am

Hello Sanya,

That would be because brace expansion does not support variables. I have to check this.
Anyway, Keep It Short and Simple: (KISS) here is a simple solution I already gave above:

for (( x = $xstart; x <= $xend; x += $xstep)); do echo $x;done

Actually, POSIX compliance allows you to omit the $ inside for (( )); as said before, you could also write:

for (( x = xstart; x <= xend; x += xstep)); do echo $x;done

Philippe Petrinko May 6, 2011, 10:48 am


Actually, brace expansion happens __before__ $ parameter expansion, so you cannot use it this way.

Nevertheless, you could overcome this this way:

max=10; for i in $(eval echo {1..$max}); do echo $i; done

Sanya May 6, 2011, 11:42 am

Hello, Philippe

Thanks for your suggestions
You basically confirmed my findings, that bash constructions are not as simple as zsh ones.
But since I don't care about POSIX compliance, and want to keep my scripts "readable" for less experienced people, I would prefer to stick to zsh where my simple for-loop works

Cheers, Sanya

Philippe Petrinko May 6, 2011, 12:07 pm


First, you got it wrong: solutions I gave are not related to POSIX, I just pointed out that POSIX allows not to use $ in for (( )), which is just a little bit more readable – sort of.

Second, why do you see this less readable than your [zsh] [for loop]?

for (( x = start; x <= end; x += step)) do
echo "Loop number ${x}"

It is clear that it is a loop, loop increments and limits are clear.

IMNSHO, if anyone cannot read this right, he should not be allowed to code. :-D


Anthony Thyssen May 8, 2011, 11:30 pm

If you are going to do… $(eval echo {1..$max});
You may as well use "seq" or one of the many other forms.
See all the other comments on doing for loops.

Tom P May 19, 2011, 12:16 pm

I am trying to use the variable I set in the for line to set another variable with a different extension. Couldn't get this to work and couldn't find it anywhere on the web… Can someone help?


FILE_TOKEN=`cat /tmp/All_Tokens.txt`
for token in $FILE_TOKEN
A1_$token=`grep $A1_token /file/path/file.txt | cut -d ":" -f2`

My goal is to take the values from the All_Tokens file and set a new variable with A1_ in front of it… This tells me that A1_ is not a command…

[Nov 08, 2015] 2013 Keynote: Dan Quinlan: C++ Use in High Performance Computing Within DOE: Past and Future

At 31 min there is an interesting slide that gives some information about the scale of systems in DOE. The current system has 18,700 nodes. The new system will have 50K to 500K nodes, 32 cores per node (power consumption is ~15 MW, equal to the power consumption of a small city). The cost is around $200M.
Jun 09, 2013 | YouTube


[Nov 08, 2015] The Anti-Java Professor and the Jobless Programmers

Nick Geoghegan

James Maguire's article raises some interesting questions as to why teaching Java to first year CS / IT students is a bad idea. The article mentions both Ada and Pascal – neither of which really "took off" outside of the States, with the former being used mainly by contractors of the US Dept. of Defense.

This is my own, personal, extension to the article – which I agree with – and why first year students should be taught C in first year. I'm biased though, I learned C as my first language and extensively use C or C++ in projects.

Java is a very high level language that has interesting features that make things easier for programmers. The two main points that I like about Java are libraries (although libraries exist for C / C++) and memory management.


Libraries are fantastic. They offer an API and abstract a metric fuck tonne of work that a programmer doesn't care about. I don't care how the library works inside, just that I have a way of putting in input and getting expected output (see my post on abstraction). I've extensively used libraries, even this week, for audio codec decoding. Libraries mean not reinventing the wheel and reusing code (something students are discouraged from doing, as it's plagiarism, yet in the real world you are rewarded). Again, starting with C means that you appreciate the libraries more.

Memory Management

Managing your program's memory manually is a pain in the hole. We all know this after spending countless hours finding memory leaks in our programs. Java's inbuilt memory management is great – it saves me from having to do it. However, if I had learned Java first, I would assume (for a short amount of time) that all languages managed memory for you, or that all languages were shite compared to Java because they don't manage memory for you. Going from a "lesser" language like C to Java makes you appreciate the memory manager.

What's so great about C?

In the context of a first language to teach students, C is perfect. C is

Java is a complex language that will spoil a first year student. However, as noted, CS / IT courses need to keep student retention rates high. As an example, my first year class was about 60 people, final year was 8. There are ways to keep students, possibly with other, easier, languages in the second semester of first year – so that students don't hate the subject when choosing the next years subject post exams.

Conversely, I could say that you should teach Java in first year and expand on more difficult languages like C or assembler (which should be taught side by side, in my mind) later down the line – keeping retention high in the initial years, and drilling down with each successive semester to more systems level programming.

There's a time and place for Java, which I believe is third year or final year. This will keep Java fresh in the students mind while they are going job hunting after leaving the bosom of academia. This will give them a good head start, as most companies are Java houses in Ireland.

[Nov 08, 2015] Abstraction


A few things can confuse programming students, or new people to programming. One of these is abstraction.

Wikipedia says:

In computer science, abstraction is the process by which data and programs are defined with a representation similar to its meaning (semantics), while hiding away the implementation details. Abstraction tries to reduce and factor out details so that the programmer can focus on a few concepts at a time. A system can have several abstraction layers whereby different meanings and amounts of detail are exposed to the programmer. For example, low-level abstraction layers expose details of the hardware where the program is run, while high-level layers deal with the business logic of the program.

That might be a bit too wordy for some people, and not at all clear. Here's my analogy of abstraction.

Abstraction is like a car

A car has a few features that make it unique.

If someone can drive a manual transmission car, they can drive any manual transmission car. Automatic drivers, sadly, cannot drive a manual transmission car without "relearning" the car. That is an aside; we'll assume that all cars are manual transmission cars – as is the case in Ireland for most cars.

Since I can drive my car, which is a Mitsubishi Pajero, that means that I can drive your car – a Honda Civic, Toyota Yaris, Volkswagen Passat.

All I need to know, in order to drive a car – any car – is how to use the brakes, accelerator, steering wheel, clutch and transmission. Since I already know this in my car, I can abstract away your car and its controls.

I do not need to know the inner workings of your car in order to drive it, just the controls. I don't need to know exactly how the brakes work in your car, only that they work. I don't need to know that your car has a turbocharger, only that when I push the accelerator, the car moves. I also don't need to know the exact revs at which I should gear up or gear down (although knowing that would be better for the engine!)

Virtually all controls are the same. Standardization means that the clutch, brake and accelerator are all in the same place, regardless of the car. This means that I do not need to relearn how a car works. To me, a car is just a car, and is interchangeable with any other car.

Abstraction means not caring

As a programmer, or someone using a third-party API (for example), abstraction means not caring how the inner workings of some function operate – the linked list data structure, the variable names inside the function, the sorting algorithm used, etc. – just that I have a standard (preferably unchanging) interface to do whatever I need to do.

Abstraction can be thought of as a black box. For input, you get output. That shouldn't always be the case, but often is. We need abstraction so that, as programmers, we can concentrate on other aspects of the program – this is the cornerstone of large-scale, multi-developer software projects.

[Nov 08, 2015] Get timestamps on Bash's History
One of the annoyances of Bash is that searching through your history has no context. When did I last run that command? What commands were run at 3am, while on the lock?

The following single line, run in the shell, will provide date and time stamps for your Bash history the next time you log in or run bash.

echo  'export HISTTIMEFORMAT="%h/%d - %H:%M:%S "' >>  ~/.bashrc
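A quick check after re-reading the file (the command numbers and timestamps below are of course made up):

source ~/.bashrc
history | tail -2
#  501  Nov/08 - 14:02:15 cd /var/log
#  502  Nov/08 - 14:02:20 history | tail -2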

[Nov 08, 2015] Single command shell accounts

Notable quotes:
"... /etc/ssh/sshd_config ..."
"... /home/restricteduser/.ssh/authorized_keys ..."

There are times when you will want a single-purpose user account – an account that cannot get a shell, nor can it do anything but run a single command. This can come in useful for a few reasons – for me, I use it to force an svn update on machines that can't use user-generated crontabs. Others have used this setup to allow multiple users to run some arbitrary command without giving them shell access.

Add the user

Add the user as you'd add any user. You'll need a home directory, as I want to use ssh keys so I don't need a password and it can be scripted from the master server.

 root@slave1# adduser restricteduser
Set the user's password

Select a nice strong password. I like using $pwgen 32

 root@slave1# passwd restricteduser
Copy your ssh-key to the server

Some Linux distros don't have the following command, in this case, contact your distro mailing list or Google.

 root@master# ssh-copy-id restricteduser@slave1
Lock out the user

Password lock out the user. This contradicts the above step, but it ensures that restricteduser can't update their password.

 root@slave1# passwd -l restricteduser
Edit the sshd config

Depending on your system, this can be in a number of places. On Debian, it's in /etc/ssh/sshd_config. Put it down the bottom.

 Match User restricteduser
    AllowTCPForwarding no
    X11Forwarding no
    ForceCommand /bin/foobar_command
Restart ssh
 root@slave1# service ssh restart
Add more ssh keys

Add any additional ssh key to /home/restricteduser/.ssh/authorized_keys
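A quick sanity check from the master (host names and the forced command are the ones assumed above):

 root@master# ssh restricteduser@slave1                     # runs /bin/foobar_command, then exits
 root@master# ssh restricteduser@slave1 'cat /etc/passwd'   # ForceCommand still wins: /bin/foobar_command runs instead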

[Nov 08, 2015] A Practical Guide to Linux Commands, Editors, and Shell Programming (3rd Edition)

Average content, but huge page-wise volume (1200 pages). It's unclear what you are paying your money for. Like all of Sobell's books it is mainly a reference, not so much a textbook. It tries to cover too much, with (weak and superficial) chapters on Perl, Python and MySQL. He also covers both the apt-get and yum utilities, which have little or nothing to do with shell programming. There are even some OS X notes as well. The older edition (Prentice Hall, July 11, 2005, ISBN-10: 0131478230) is much cheaper and contains mostly the same material (see my review).
Notable quotes:
"... Keep in mind that this is a reference book ..."
"... It's a very dense reference manual that needs the company of a few lighter books to fill in some gaps and provide some more in-depth info on the more interesting (for you) sections. ..."
September 24, 2012 |
ayates83 on December 6, 2012
Excellent reference guide

Upon first getting this book I wasn't sure what to expect. I've read a number of books and online how-to guides dealing with BASH scripting. This book is well organized and presents the material in a way that is easily understood. It was surprising as well to see that it covered OSX which I also use extensively in my day-to-day work. Most scripting guides that I have read just cover a specific Linux distribution.

I think this book would be great for those just beginning to work with Linux as well. I would definitely recommend this book as a supplement to my students as they are learning how to tackle the sometimes daunting task of learning how to write effective scripts for the Linux classes that I teach. The fact that a number of languages were briefly covered makes this book an excellent tool. BASH and python are the most common languages that I see used and that are taught in my class. It's an added bonus to have a section covering perl scripts as well.

The portion of the book I found especially helpful is the command reference section and the appendix on regular expressions. Not being particularly strong in regexes myself, this has become a very valuable quick reference guide when I need to brush up on my syntax.

The only criticism that I have of the book is the section on MySQL. It just seems out of place to me in a system scripting/programming book and should rather be included in a book geared more specifically towards systems administration.

Overall, I give this book 4/5 stars.

Adam Yates

Mehon October 23, 2015

4.0 out of 5 stars The good, the bad, and the slightly ugly

I'll keep it short. A full review would take too much time and space.

The good:

This is a very large reference manual, with lots of information in it on a wide variety of subjects. The fact that there is an entire chapter on MySQL is a big plus, as anyone who deals with software professionally NEEDS to know about databases. The more, the better. The indices in the book are some of the best I've seen. Redundancies are not always a bad thing!

The bad:

Probably just a personal gripe, but the author doesn't use strict/warnings in his Perl code. It doesn't seem to be a "Perl for people who already know Perl" chapter. Any intro to Perl should use strict/warnings. Perl lets you do all kinds of crazy things, which is both its greatest strength and its greatest weakness. Using strict/warnings in ALL of your Perl helps you learn to write better code. When you KNOW how to write code and know when to add "no strict" to a subroutine (or just leave it out entirely) to work some Perl magic, then you can take advantage of the added freedoms. Nearly all of the scripts in this chapter need to be partially or completely rewritten if you use strict, but I suppose that's just another way of learning Perl.

The slightly ugly:

Keep in mind that this is a reference book, not Lord of the freakin Rings (unless you spend way too much time with computers, I suppose). There are multiple instances on just about every page of "refer to xyz on page 1###" when you're only on page 3##. There's a lot of flipping around to various sections of the book, like one of those old "Choose your own story" books. Granted, it's tough to pull off a book this thick and dense without doing that, but there was just a LOT of it, and it gets a bit old after a while.

The short version:

It's a very dense reference manual that needs the company of a few "lighter" books to fill in some gaps and provide some more in-depth info on the more interesting (for you) sections.

[Nov 08, 2015] Linux Command Line and Shell Scripting Bible by Richard Blum & Christine Bresnahan

***+  This is an average or slightly above average introductory book, oriented toward newcomers to shell scripting. It looks like the authors just added the (( )) type of arithmetic expressions as an afterthought, while it should be taught as a primary construct. A similar situation exists with the C-style loop, available now in bash as well. One of the few books on shell scripting published after 2005. Unix Shell Programming, Third Edition by Stephen Kochan & Patrick Wood is a much better book. For everybody but absolute beginners, Classic Shell Scripting (May 1, 2005) might also be a better book, although it belongs to the intermediate level. But this is definitely a good alternative to Sobell's A Practical Guide to Linux Commands, Editors, and Shell Programming (3rd Edition), published in 2012.
BTW, if somebody pretends to write a scripting bible, he/she should be a scripting god. Neither author qualifies ;-). They are pretty average, run-of-the-mill instructors.
Notable quotes:
"... It has one of the best explanations for sed and gawk I have ever seen. ..."
"... Great examples and clearly described concepts. ..."
January 20, 2015 |
Michael, on May 17, 2015
By the time I bought this book, I had already read a lot of online resources about bash scripting, and I had already been using Linux for two years. I had even read most of the A-plus certification book on Linux. Despite that, I was constantly struggling to write bash scripts that worked; this is because so much of the free online documentation on bash scripting is confusing and incomplete. Even when consulting co-workers, they too could not explain why so many things I tried to code in a bash script did not work. That's when I decided to buy this book.

The "Linux Command Line and Shell Scripting Bible" cleared up a lot of problems that have been plaguing me for a long time now. I wish that I had started to learn bash scripting with this book, it could have saved me a lot of time. I would highly recommend this for anybody who will use linux.

Let me list some things this book explained to me that I struggled with for years prior:
- When is a subshell made, what are the implications of that, how does variable scoping come into play.
- how can you create, manipulate, and pass around arrays in bash
- how does the "return" statement behave in functions, how to use that in an if statement
- how can you do math in bash
- the differences between [ ] and [[ ]]

Here are some other things I love about this book:
- it has an excellent explanation of how you could parse a command line that follows a complicated pattern like "mycommand --longopt -a -bcf input.txt -- foo bar zop". Before I picked up this book I thought that would be too difficult to do in a bash script.
- It explains how to easily create GUI interfaces for your script.
- It has one of the best explanations for sed and gawk I have ever seen.

Throughout the entire book, everything said is clear and easy to understand, and the authors give you ample examples to demonstrate the point. While the thickness of the book is a bit intimidating, you will find that you can read it pretty fast because a lot of those pages are full of clear examples that you can read quickly.

Metro CA on September 8, 2015

Four Stars

Great examples and clearly described concepts.

[Aug 02, 2015] The Linux Tips HOWTO Detailed Tips

Converting all files in a directory to lowercase. Justin Dossey,

I noticed a few overly difficult or unnecessary procedures recommended in the 2c tips section of Issue 12. Since there is more than one, I'm sending them to you:

         #!/bin/sh
         # lowerit
         # convert all file names in the current directory to lower case
         # only operates on plain files--does not change the name of directories
         # will ask for verification before overwriting an existing file
         for x in `ls`
           do
           if [ ! -f $x ]; then
             continue
             fi
           lc=`echo $x  | tr '[A-Z]' '[a-z]'`
           if [ $lc != $x ]; then
             mv -i $x $lc
             fi
           done

Wow. That's a long script. I wouldn't write a script to do that; instead, I would use this command:
for i in * ; do [ -f $i ] && mv -i $i `echo $i | tr '[A-Z]' '[a-z]'`; done
on the command line.

The contributor says he wrote the script how he did for understandability (see below).

On the next tip, this one about adding and removing users, Geoff is doing fine until that last step. Reboot? Boy, I hope he doesn't reboot every time he removes a user. All you have to do is the first two steps. What sort of processes would that user have going, anyway? An irc bot? Killing the processes with a simple

kill -9 `ps -aux |grep ^<username> |tr -s " " |cut -d " " -f2`
Example, username is foo
kill -9 `ps -aux |grep ^foo |tr -s " " |cut -d " " -f2`
That taken care of, let us move to the forgotten root password.

The solution given in the Gazette is the most universal one, but not the easiest one. With both LILO and loadlin, one may provide the boot parameter "single" to boot directly into the default shell with no login or password prompt. From there, one may change or remove any passwords before typing "init 3" to start multiuser mode. This way: number of reboots: 1. The other way: number of reboots: 2.
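For instance, at the LILO boot prompt (the kernel label "linux" is an assumption; use whatever label your lilo.conf defines):

boot: linux single
# then, from the resulting root shell:
passwd root     # set a new root password
init 3          # continue into multiuser mode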

Justin Dossey

[Aug 27, 2011] Turn Vim into a bash IDE By Joe 'Zonker' Brockmeier

June 11, 2007 |

By itself, Vim is one of the best editors for shell scripting. With a little tweaking, however, you can turn Vim into a full-fledged IDE for writing scripts. You could do it yourself, or you can just install Fritz Mehner's Bash Support plugin.

To install Bash Support, download the zip archive, copy it to your ~/.vim directory, and unzip the archive. You'll also want to edit your ~/.vimrc to include a few personal details; open the file and add these three lines:

let g:BASH_AuthorName   = 'Your Name'
let g:BASH_Email        = ''
let g:BASH_Company      = 'Company Name'

These variables will be used to fill in some headers for your projects, as we'll see below.

The Bash Support plugin works in the Vim GUI (gVim) and text mode Vim. It's a little easier to use in the GUI, and Bash Support doesn't implement most of its menu functions in Vim's text mode, so you might want to stick with gVim when scripting.

When Bash Support is installed, gVim will include a new menu, appropriately titled Bash. This puts all of the Bash Support functions right at your fingertips (or mouse button, if you prefer). Let's walk through some of the features, and see how Bash Support can make Bash scripting a breeze.

Header and comments

If you believe in using extensive comments in your scripts, and I hope you are, you'll really enjoy using Bash Support. Bash Support provides a number of functions that make it easy to add comments to your bash scripts and programs automatically or with just a mouse click or a few keystrokes.

When you start a non-trivial script that will be used and maintained by others, it's a good idea to include a header with basic information -- the name of the script, usage, description, notes, author information, copyright, and any other info that might be useful to the next person who has to maintain the script. Bash Support makes it a breeze to provide this information. Go to Bash -> Comments -> File Header, and gVim will insert a header like this in your script:

#          FILE:
#         USAGE:  ./
#       OPTIONS:  ---
#          BUGS:  ---
#         NOTES:  ---
#        AUTHOR:  Joe Brockmeier,
#       COMPANY:  Dissociated Press
#       VERSION:  1.0
#       CREATED:  05/25/2007 10:31:01 PM MDT
#      REVISION:  ---

You'll need to fill in some of the information, but Bash Support grabs the author, company name, and email address from your ~/.vimrc, and fills in the file name and created date automatically. To make life even easier, if you start Vim or gVim with a new file that ends with an .sh extension, it will insert the header automatically.

As you're writing your script, you might want to add comment blocks for your functions as well. To do this, go to Bash -> Comment -> Function Description to insert a block of text like this:

#===  FUNCTION  ================================================================
#          NAME:
#       RETURNS:

Just fill in the relevant information and carry on coding.

The Comment menu allows you to insert other types of comments, insert the current date and time, and turn selected code into a comment, and vice versa.

Statements and snippets

Let's say you want to add an if-else statement to your script. You could type out the statement, or you could just use Bash Support's handy selection of pre-made statements. Go to Bash -> Statements and you'll see a long list of pre-made statements that you can just plug in and fill in the blanks. For instance, if you want to add a while statement, you can go to Bash -> Statements -> while, and you'll get the following:

while _; do
done

The cursor will be positioned where the underscore (_) is above. All you need to do is add the test statement and the actual code you want to run in the while statement. Sure, it'd be nice if Bash Support could do all that too, but there's only so far an IDE can help you.

However, you can help yourself. When you do a lot of bash scripting, you might have functions or code snippets that you reuse in new scripts. Bash Support allows you to add your snippets and functions by highlighting the code you want to save, then going to Bash -> Statements -> write code snippet. When you want to grab a piece of prewritten code, go to Bash -> Statements -> read code snippet. Bash Support ships with a few included code fragments.

Another way to add snippets to the statement collection is to just place a text file with the snippet under the ~/.vim/bash-support/codesnippets directory.

Running and debugging scripts

Once you have a script ready to go, it's testing and debugging time. You could exit Vim, make the script executable, run it and see if it has any bugs, and then go back to Vim to edit it, but that's tedious. Bash Support lets you stay in Vim while doing your testing.

When you're ready to make the script executable, just choose Bash -> Run -> make script executable. To save and run the script, press Ctrl-F9, or go to Bash -> Run -> save + run script.

Bash Support also lets you call the bash debugger (bashdb) directly from within Vim. On Ubuntu, it's not installed by default, but that's easily remedied with apt-get install bashdb. Once it's installed, you can debug the script you're working on with F9 or Bash -> Run -> start debugger.

If you want a "hard copy" -- a PostScript printout -- of your script, you can generate one by going to Bash -> Run -> hardcopy to .... This is where Bash Support comes in handy for any type of file, not just bash scripts: you can use this function within any file to generate a PostScript printout.

Bash Support has several other functions to help run and test scripts from within Vim. One useful feature is syntax checking, which you can access with Alt-F9. If you have no syntax errors, you'll get a quick OK. If there are problems, you'll see a small window at the bottom of the Vim screen with a list of syntax errors. From that window you can highlight the error and press Enter, and you'll be taken to the line with the error.

Put away the reference book...

Don't you hate it when you need to include a regular expression or a test in a script, but can't quite remember the syntax? That's no problem when you're using Bash Support, because you have Regex and Tests menus with all you'll need. For example, if you need to verify that a file exists and is owned by the correct user ID (UID), go to Bash -> Tests -> file exists and is owned by the effective UID. Bash Support will insert the appropriate test ([ -O _]) with your cursor in the spot where you have to fill in the file name.

To build regular expressions quickly, go to the Bash menu, select Regex, then pick the appropriate expression from the list. It's fairly useful when you can't remember exactly how to express "zero or one" or other regular expressions.

Bash Support also includes menus for environment variables, bash builtins, shell options, and a lot more.

Hotkey support

Vim users can access many of Bash Support's features using hotkeys. While not as simple as clicking the menu, the hotkeys do follow a logical scheme that makes them easy to remember. For example, all of the comment functions are accessed with \c, so if you want to insert a file header, you use \ch; if you want a date inserted, type \cd; and for a line end comment, use \cl.

Statements can be accessed with \a. Use \ac for a case statement, \aie for an "if then else" statement, \af for a "for in..." statement, and so on. Note that the online docs are incorrect here, and indicate that statements begin with \s, but Bash Support ships with a PDF reference card (under .vim/bash-support/doc/bash-hot-keys.pdf) that gets it right.

Run commands are accessed with \r. For example, to save the file and run a script, use \rr; to make a script executable, use \re; and to start the debugger, type \rd. I won't try to detail all of the shortcuts, but you can pull up a reference using :help bashsupport-usage-vim when in Vim, or use the PDF. The full Bash Support reference is available within Vim by running :help bashsupport, or you can read it online.

Of course, we've covered only a small part of Bash Support's functionality. The next time you need to whip up a shell script, try it using Vim with Bash Support. This plugin makes scripting in bash a lot easier.

[Jul 30, 2011] Advanced Techniques

Shell scripts can be powerful tools for writing software. Graphical interfaces notwithstanding, they are capable of performing nearly any task that could be performed with a more traditional language. This chapter describes several techniques that will help you write more complex software using shell scripts.

[Jul 30, 2011] Tuesday's Tips for Unix Shell Scripts

Welcome to Tuesday's Tips for shell scripting.

These tips come from my own scripting as well as answers I have provided to queries in various usenet newsgroups (e.g., comp.os.linux.misc).

The series ran from April to September, 2004, at which time I began work on a book of shell scripts. Due to the demands that project made on my time, I was unable to continue the series.

  1. The Easy PATH (28 Sep 2004)
  2. flocate — locate a file (21 Sep 2004)
  3. Toggling a variable (31 Aug 2004)
  4. Redirecting stdout and stderr (24 Aug 2004)
  5. A list of directories (17 Aug 2004)
  6. Learning to read (10 Aug 2004)
  7. New Bash Parameter expansion ( 3 Aug 2004)
  8. Heads up (27 Jul 2004)
  9. Random thoughts (20 Jul 2004)
  10. Useful variables (13 Jul 2004)
  11. Centering text on a line ( 6 July 2004)
  12. Setting multiple variables with one command (29 June 2004)
  13. Adding numbers from a file (22 June 2004)
  14. How many days in the month? (15 June 2004)
  15. Is this a leap year? ( 8 June 2004)
  16. Searching man pages with sman() ( 1 June 2004)
  17. Removing non-consecutive duplicate lines from a file (25 May 2004)
  18. Printing to entire width of screen (18 May 2004)
  19. Extracting multiple values from a string ( 4 May 2004)
  20. Automatic array indexing (27 April 2004)

[Sep 10, 2010] bash iterator trick

The UNIX Blog

A neat little feature I never knew existed in bash is being able to iterate over a sequence of numbers in a more or less C-esque manner. Coming from a Bourne/Korn shell background, creating an elegant iterator is always a slight nuisance, since you would come up with something like this to iterate over a sequence of numbers:

i=0
while [ $i -lt 10 ]; do
    i=`expr $i + 1`
    # ... loop body ...
done

Well, not exactly the most elegant solution. With bash on the other hand it can be done as simple as:

for ((i=1; i<10; i++)); do
    # ... loop body ...
done

Simple and to the point.
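Editor's note: for a small fixed range, brace expansion (available in bash 3.x as well) is a third option. A minimal sketch:

# {1..9} expands to the whole list of numbers before the loop starts
for i in {1..9}; do
    echo "$i"
done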

[Jul 28, 2010] Bash Co-Processes

Linux Journal

One of the new features in bash 4.0 is the coproc statement. The coproc statement allows you to create a co-process that is connected to the invoking shell via two pipes: one to send input to the co-process and one to get output from the co-process.
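Before the logging example below, here is a minimal self-contained coproc sketch (my illustration, not from the article), using bc as the co-process; the names BC and BC_PID follow bash's naming convention for named coprocesses:

#!/bin/bash
# Requires bash 4.0+ and bc
coproc BC { bc -l; }
echo '2^10' >&"${BC[1]}"        # write to the co-process's stdin
read -r answer <&"${BC[0]}"     # read its reply from its stdout
echo "2^10 = $answer"
eval "exec ${BC[1]}>&-"         # close its stdin so bc exits
wait "$BC_PID"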

The first use I found for this came up while trying to do logging with exec redirections. The goal was to allow you to optionally start writing all of a script's output to a log file once the script had already begun (e.g. due to a --log command line option).

The main problem with logging output after the script has already started is that the script may have been invoked with the output already redirected (to a file or to a pipe). If we change where the output goes when the output has already been redirected then we will not be executing the command as intended by the user.

The previous attempt ended up using named pipes:


#!/bin/bash

echo hello

if test -t 1; then
    # Stdout is a terminal.
    exec >log
else
    # Stdout is not a terminal.
    npipe=/tmp/$$.tmp        # location of the named pipe (any writable temporary path will do)
    trap "rm -f $npipe" EXIT
    mknod $npipe p
    tee <$npipe log &
    exec 1>&-
    exec 1>$npipe
fi

echo goodbye

From the previous article:

Here, if the script's stdout is not connected to the terminal, we create a named pipe (a pipe that exists in the file-system) using mknod and set up a trap to delete it on exit. Then we start tee in the background reading from the named pipe and writing to the log file. Remember that tee is also writing anything that it reads on its stdin to its stdout. Also remember that tee's stdout is also the same as the script's stdout (our main script, the one that invokes tee) so the output from tee's stdout is going to go wherever our stdout is currently going (i.e. to the user's redirection or pipeline that was specified on the command line). So at this point we have tee's output going where it needs to go: into the redirection/pipeline specified by the user.

We can do the same thing using a co-process:

#!/bin/bash

echo hello

if test -t 1; then
    # Stdout is a terminal.
    exec >log
else
    # Stdout is not a terminal.
    exec 7>&1
    coproc tee log 1>&7
    #echo Stdout of coproc: ${COPROC[0]} >&2
    #echo Stdin of coproc: ${COPROC[1]} >&2
    #ls -la /proc/$$/fd
    exec 7>&-
    exec 7>&${COPROC[1]}-
    exec 1>&7-
    eval "exec ${COPROC[0]}>&-"
    #ls -la /proc/$$/fd
fi

echo goodbye
echo error >&2

In the case that our standard output is going to the terminal then we just use exec to redirect our output to the desired log file, as before. If our output is not going to the terminal then we use coproc to run tee as a co-process and redirect our output to tee's input and redirect tee's output to where our output was originally going.

Running tee using the coproc statement is essentially the same as running tee in the background (e.g. tee log &); the main difference is that bash runs tee with both its input and output connected to pipes. Bash puts the file descriptors for those pipes into an array named COPROC by default.

Note that these pipes are created before any redirections are done in the command.

Focusing on the part where the original script's output is not connected to the terminal: the following line duplicates our standard output on file descriptor 7.

exec 7>&1

Then we start tee with its output redirected to file descriptor 7.

coproc tee log 1>&7

So tee will now write whatever it reads on its standard input to the file named log and to file descriptor 7, which is our original standard out.

Now we close file descriptor 7 (remember that tee still has the "file" that's open on 7 opened as its standard output) with:

exec 7>&-

Since we've closed 7 we can reuse it, so we move the pipe that's connected to tee's input to 7 with:

exec 7>&${COPROC[1]}-

Then we move our standard output to the pipe that's connected to tee's standard input (our file descriptor 7) via:

exec 1>&7-

And finally, we close the pipe connected to tee's output, since we don't have any need for it, with:

eval "exec ${COPROC[0]}>&-"

The eval is required here because otherwise bash thinks the value of ${COPROC[0]} is a command name. It's not required in the statement above (exec 7>&${COPROC[1]}-), because there bash can recognize that "7" is the start of a file descriptor action and not a command.

Also note the commented command:

#ls -la /proc/$$/fd

This is useful for seeing the files that are open by the current process.

We now have achieved the desired effect: our standard output is going into tee. Tee is "logging" it to our log file and writing it to the pipe or file that our output was originally going to.

As of yet I haven't come up with any other uses for co-processes, at least ones that aren't contrived. See the bash man page for more about co-processes.

[Jun 12, 2010] Writing Robust Bash Shell Scripts

Many people hack together shell scripts quickly to do simple tasks, but these soon take on a life of their own. Unfortunately shell scripts are full of subtle effects which result in scripts failing in unusual ways. It's possible to write scripts which minimise these problems. In this article, I explain several techniques for writing robust bash scripts.

Use set -u

How often have you written a script that broke because a variable wasn't set? I know I have, many times.

#!/bin/bash

chroot=$1
rm -rf $chroot/usr/share/doc

If you ran the script above and accidentally forgot to give a parameter, you would have just deleted all of your system documentation rather than making a smaller chroot. So what can you do about it? Fortunately bash provides you with set -u, which will exit your script if you try to use an uninitialised variable. You can also use the slightly more readable set -o nounset.

david% bash /tmp/
david% bash -u /tmp/
/tmp/ line 3: $1: unbound variable
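A related defensive idiom (my addition, not from the article) is the ${parameter:?word} expansion, which aborts with a message when the parameter is unset or empty even without set -u:

#!/bin/sh
# abort with a usage message if no argument was given
chroot=${1:?"usage: $0 chroot-directory"}
rm -rf "$chroot/usr/share/doc"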

Use set -e

Every script you write should include set -e at the top. This tells bash that it should exit the script if any statement returns a non-true return value. The benefit of using -e is that it prevents errors snowballing into serious issues when they could have been caught earlier. Again, for readability you may want to use set -o errexit.

Using -e gives you error checking for free. If you forget to check something, bash will do it for you. Unfortunately it means you can't check $?, as bash will never get to the checking code if it isn't zero. There are other constructs you could use:

if [ "$?"-ne 0]; then echo "command failed"; exit 1; fi 

could be replaced with

command || { echo "command failed"; exit 1; } 


or

if ! command; then echo "command failed"; exit 1; fi

What if you have a command that returns non-zero or you are not interested in its return value? You can use command || true, or if you have a longer section of code, you can turn off the error checking, but I recommend you use this sparingly.

set +e
# ... commands whose non-zero exit status should be tolerated ...
set -e

On a slightly related note, by default bash takes the error status of the last item in a pipeline, which may not be what you want. For example, false | true will be considered to have succeeded. If you would like this to fail, then you can use set -o pipefail to make it fail.
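A small demonstration of the difference (my sketch, not from the article):

#!/bin/bash
set -e
set -o pipefail
# grep fails here (no such file) but sort still exits 0. Without pipefail the
# pipeline would count as a success; with pipefail, set -e stops the script.
grep pattern /no/such/file | sort
echo "never reached"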

Program defensively - expect the unexpected

Your script should take into account of the unexpected, like files missing or directories not being created. There are several things you can do to prevent errors in these situations. For example, when you create a directory, if the parent directory doesn't exist, mkdir will return an error. If you add a -p option then mkdir will create all the parent directories before creating the requested directory. Another example is rm. If you ask rm to delete a non-existent file, it will complain and your script will terminate. (You are using -e, right?) You can fix this by using -f, which will silently continue if the file didn't exist.
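For example (a sketch; $output_dir and $tmp_file are placeholder names):

mkdir -p "$output_dir"   # creates missing parent directories; succeeds if the directory already exists
rm -f "$tmp_file"        # silently does nothing if the file is already gone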

Be prepared for spaces in filenames

Someone will always use spaces in filenames or command line arguments and you should keep this in mind when writing shell scripts. In particular you should use quotes around variables.

if [ $filename = "foo" ]; 

will fail if $filename contains a space. This can be fixed by using:

if [ "$filename" = "foo" ]; 

When using $@ variable, you should always quote it or any arguments containing a space will be expanded in to separate words.

david% foo() { for i in $@; do echo $i; done }; foo bar "baz quux"
bar
baz
quux
david% foo() { for i in "$@"; do echo $i; done }; foo bar "baz quux"
bar
baz quux

I cannot think of a single place where you shouldn't use "$@" over $@, so when in doubt, use quotes.

If you use find and xargs together, you should use -print0 to separate filenames with a null character rather than new lines. You then need to use -0 with xargs.

david% touch "foo bar"
david% find | xargs ls
ls: ./foo: No such file or directory
ls: bar: No such file or directory
david% find -print0 | xargs -0 ls
./foo bar 
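If xargs is not actually needed, find's own -exec ... {} + form (POSIX) sidesteps the whole quoting problem, since the filenames never pass through a pipe at all:

find . -type f -exec ls -l {} +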

Setting traps

Often you write scripts which fail and leave the filesystem in an inconsistent state; things like lock files, temporary files or you've updated one file and there is an error updating the next file. It would be nice if you could fix these problems, either by deleting the lock files or by rolling back to a known good state when your script suffers a problem. Fortunately bash provides a way to run a command or function when it receives a unix signal using the trap command.

trap command signal [signal ...]

There are many signals you can trap (you can get a list of them by running kill -l), but for cleaning up after problems there are only 3 we are interested in: INT, TERM and EXIT. You can also reset traps back to their default by using - as the command.

Signal  Description
INT     Interrupt -- sent when someone kills the script by pressing Ctrl-C.
TERM    Terminate -- sent when someone sends the TERM signal using the kill command.
EXIT    Exit -- a pseudo-signal, triggered when your script exits, whether by reaching the end of the script, by an exit command, or by a command failing when using set -e.
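Before the lock-file example, here is the most common cleanup pattern built on these signals (my sketch, using the same signal trio the article uses): create a temporary file and guarantee its removal.

#!/bin/bash
set -e
tmpfile=$(mktemp)
trap 'rm -f "$tmpfile"; exit' INT TERM EXIT
# ... do work that writes to "$tmpfile" ...
trap - INT TERM EXIT
rm -f "$tmpfile"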

Usually, when you write something using a lock file you would use something like:

if [ ! -e $lockfile ]; then
   touch $lockfile
   critical-section
   rm $lockfile
else
   echo "critical-section is already running"
fi
What happens if someone kills your script while critical-section is running? The lockfile will be left there and your script won't run again until it's been deleted. The fix is to use:

if [ ! -e $lockfile ]; then
   trap "rm -f $lockfile; exit" INT TERM EXIT
   touch $lockfile
   critical-section
   rm $lockfile
   trap - INT TERM EXIT
else
   echo "critical-section is already running"
fi

Now when you kill the script it will delete the lock file too. Notice that we explicitly exit from the script at the end of trap command, otherwise the script will resume from the point that the signal was received.

Race conditions

It's worth pointing out that there is a slight race condition in the above lock example between the time we test for the lockfile and the time we create it. A possible solution to this is to use IO redirection and bash's noclobber mode, which won't redirect to an existing file. We can use something similar to:

if ( set -o noclobber; echo "$$" > "$lockfile") 2> /dev/null; then
   trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT
   critical-section
   rm -f "$lockfile"
   trap - INT TERM EXIT
else
   echo "Failed to acquire lockfile: $lockfile."
   echo "Held by $(cat $lockfile)"
fi

A slightly more complicated problem is where you need to update a bunch of files and need the script to fail gracefully if there is a problem in the middle of the update. You want to be certain that something either happened correctly or that it appears as though it didn't happen at all. Say you had a script to add users.

add_to_passwd $user
cp -a /etc/skel /home/$user
chown $user /home/$user -R

There could be problems if you ran out of diskspace or someone killed the process. In this case you'd want the user to not exist and all their files to be removed.

rollback() {
   del_from_passwd $user
   if [ -e /home/$user ]; then
      rm -rf /home/$user
   fi
   exit
}

trap rollback INT TERM EXIT
add_to_passwd $user
cp -a /etc/skel /home/$user
chown $user /home/$user -R
trap - INT TERM EXIT

We needed to remove the trap at the end or the rollback function would have been called as we exited, undoing all the script's hard work.

Be atomic

Sometimes you need to update a bunch of files in a directory at once, say you need to rewrite urls form one host to another on your website. You might write:

for file in $(find /var/www -type f -name "*.html"); do
   perl -pi -e 's/OLDHOST/NEWHOST/' $file   # the original substitution pattern was lost; OLDHOST/NEWHOST are placeholders
done

Now if there is a problem with the script you could have half the site referring to the old host and the rest referring to the new one. You could fix this using a backup and a trap, but you also have the problem that the site will be inconsistent during the upgrade itself.

The solution to this is to make the changes an (almost) atomic operation. To do this make a copy of the data, make the changes in the copy, move the original out of the way and then move the copy back into place. You need to make sure that both the old and the new directories are moved to locations that are on the same partition so you can take advantage of the property of most unix filesystems that moving directories is very fast, as they only have to update the inode for that directory.

cp -a /var/www /var/www-tmp
for file in $(find /var/www-tmp -type f -name "*.html"); do
   perl -pi -e 's/OLDHOST/NEWHOST/' $file   # placeholder substitution, as above
done
mv /var/www /var/www-old
mv /var/www-tmp /var/www

This means that if there is a problem with the update, the live system is not affected. Also the time where it is affected is reduced to the time between the two mvs, which should be very minimal, as the filesystem just has to change two entries in the inodes rather than copying all the data around.

The disadvantage of this technique is that you need twice as much disk space, and that any process that keeps files open for a long time will still have the old files open and not the new ones, so you would have to restart those processes if that is the case. In our example this isn't a problem, as Apache opens the files on every request. You can check for processes that still hold old files open by using lsof. An advantage is that you now have a backup from before you made your changes, in case you need to revert.

[May 20, 2010] SFR Fresh bash-4.1.tar.gz (Download & Source Code Browsing)

[Nov 26, 2009] Bash 4.0

Interesting features are the -l and -u options to the declare statement (automatic conversion of assigned values to lower case/upper case).
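For example (bash 4.0+; my sketch):

declare -l name            # every value assigned to name is converted to lowercase
name="MixedCase"
echo "$name"               # prints: mixedcase

declare -u shout
shout="quiet please"
echo "$shout"              # prints: QUIET PLEASE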

n.  The -p option to `declare' now displays all variable values and attributes
    (or function values and attributes if used with -f).

o.  There is a new `compopt' builtin that allows completion functions to modify
    completion options for existing completions or the completion currently
    being executed.

p.  The `read' builtin has a new -i option which inserts text into the reply
    buffer when using readline.

s.  Changed format of internal help documentation for all builtins to roughly
    follow man page format.

t.  The `help' builtin now has a new -d option, to display a short description,
    and a -m option, to print help information in a man page-like format.

u.  There is a new `mapfile' builtin to populate an array with lines from a
    given file.  The name `readarray' is a synonym.

w.  There is a new shell option: `globstar'.  When enabled, the globbing code
    treats `**' specially -- it matches all directories (and files within
    them, when appropriate) recursively.

x.  There is a new shell option: `dirspell'.  When enabled, the filename
    completion code performs spelling correction on directory names during
    completion.

dd. The parser now understands `|&' as a synonym for `2>&1 |', which redirects
    the standard error for a command through a pipe.

ee. The new `;&' case statement action list terminator causes execution to
    continue with the action associated with the next pattern in the
    statement rather than terminating the command.

hh. There are new case-modifying word expansions: uppercase (^[^]) and
    lowercase (,[,]).  They can work on either the first character or
    array element, or globally.  They accept an optional shell pattern
    that determines which characters to modify.  There is an optionally-
    configured feature to include capitalization operators.

ii. The shell provides associative array variables, with the appropriate
    support to create, delete, assign values to, and expand them.

jj. The `declare' builtin now has new -l (convert value to lowercase upon
    assignment) and -u (convert value to uppercase upon assignment) options.
    There is an optionally-configurable -c option to capitalize a value at
    assignment.

kk. There is a new `coproc' reserved word that specifies a coprocess: an
    asynchronous command run with two pipes connected to the creating shell.
    Coprocs can be named.  The input and output file descriptors and the
    PID of the coprocess are available to the calling shell in variables
    with coproc-specific names.
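A small sketch (mine, not from the release notes) exercising a few of these 4.0 additions:

#!/bin/bash
# Requires bash 4.0+

# mapfile/readarray: read lines into an array
mapfile -t lines < <(printf 'alpha\nbeta\ngamma\n')
echo "read ${#lines[@]} lines; first is ${lines[0]}"

# globstar: ** matches directories recursively
shopt -s globstar
for f in **/*.sh; do echo "script: $f"; done   # prints the pattern itself if nothing matches

# |& is shorthand for 2>&1 |
ls /no/such/dir |& grep -i 'no such'

# case-modifying expansion
word=hello
echo "${word^^}"   # HELLO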

[Apr 22, 2009] Advanced Bash Scripting Guide

Changes: Fairly extensive coverage of the version 4.0 Bash release. A great deal of other new material and bugfixes. This is a very important update.

[Apr 22, 2009] bashstyle-ng 7.7

BashStyle-NG is a graphical tool for changing Bash's behavior and look and feel. It can also style Readline, Nano, and Vim.

It ships with a set of scripts, which are used by the styles shipped with BS-NG, but can also be used separately. Since v6.3 you have the opportunity to create your own prompts. For important notes on how to do so, refer back to the documentation.

If you don't understand an option or something does not work as expected, read the documentation. It's installed in /usr/share/doc/bashstyle-ng/index.html (/usr is the default but may vary if you passed another prefix to configure).

Currently BS-NG ships with 15 pre-defined styles for your prompt. Most of them can be modified via the Custom Prompt Builder. If you want to save your current configuration and re-import it later (or use it on a different user/machine), you can do so via the Profiler (bs-ng-profiler --help).

A standalone (GConf-free) configuration (for faster Bash startup) can be created via the RCGenerator (rcgenerator --help).

[Apr 7, 2009] Mastering Unix Shell Scripting: Bash, Bourne, and Korn Shell Scripting for Programmers, System Administrators, and UNIX Gurus (Paperback)

This is one of the few books with some AIX bias. For example, Chapter 11 is about the AIX Logical Volume Manager. The author also wrote a great book on AIX administration.

by Randal K. Michael (Author)

From foreword:

We urge everyone to study this entire book. Every chapter hits a different topic

using a different approach. The book is written this way to emphasize that there is

never only one technique to solve a challenge in UNIX. All the shell scripts in this book

are real-world examples of how to solve a problem. Thumb through the chapters, and

you can see that we tried to hit most of the common (and some uncommon!) tasks

in UNIX. All the shell scripts have a good explanation of the thinking process, and

we always start out with the correct command syntax for the shell script targeting a

specific goal. I hope you enjoy this book as much as I enjoyed writing it. Let’s get


4.0 out of 5 stars  A must-have for all levels of *nix users, August 1, 2004
By Bindlestiff
This review is from: Mastering UNIX Shell Scripting (Paperback)

The breadth of real-world examples makes the difference between this book and most reference texts. It's true that it's written for Korn, but I've had little trouble adapting for Bash; many of the scripts run almost unchanged, and the ones that don't provide a useful opportunity for exercise in adaptation. The author's prose is clear. His attitude is a bit challenging; he says early on that his intention is to teach you how to -solve problems- by shell scripting, NOT to present a ream of canned solutions. This is NOT a reference text for any particular shell; you'll still need plenty of O'Reilly books, a web browser, etc.

This book has enabled me to write a major project using scripting as the glue to hold together a hefty mass of file-moving daemons, fax/paging engines, python UI code, PostgreSQL database engine, networking/email, SSH, and Expect scripts on a GNU/Linux platform. I absolutely could not have done it without this book and I'm very grateful to Mr Michael for his work. If a later edition could more closely serve the needs of the masses by presenting more Bash examples and maybe throwing in a CD it would be a 5-star text.


Command readability and step-by-step comments are just the very basics of a well-written script. Using a lot of comments will make our life much easier when we have to come back to the code after not looking at it for six months, and believe me, we will look at the code again. Comment everything! This includes, but is not limited to, describing what our variables and files are used for, describing what loops are doing, describing each test, maybe including expected results and how we are manipulating the data and the many data fields. A hash mark, #, precedes each line of a comment.

The script stub that follows is on this book’s companion web site at go/michael2e. The name is script.stub. It has all the comments ready to get started writing a shell script. The script.stub file can be copied to a new filename. Edit the new filename, and start writing code. The script.stub file is shown in Listing 1-1.






# REV: 1.1.A (Valid are A, B, D, T and P)
#      (For Alpha, Beta, Dev, Test and Production)
#
# PLATFORM: (SPECIFY: AIX, HP-UX, Linux, OpenBSD, Solaris
#            or Not platform dependent)
#
# PURPOSE: Give a clear, and if necessary, long, description of the
#          purpose of the shell script. This will also help you stay
#          focused on the task at hand.
#
# MODIFICATION: Describe what was modified, new features, etc.
#
# set -n # Uncomment to check script syntax, without execution.
#        # NOTE: Do not forget to put the comment back in or
#        # the shell script will not execute!
# set -x # Uncomment to debug this shell script

Listing 1-1 script.stub shell script starter listing








# End of script

Listing 1-1 (continued)

The shell script starter shown in Listing 1-1 gives you the framework to start writing the shell script, with sections to declare variables and files, create functions, and write the final section, BEGINNING OF MAIN, where the main body of the shell script is written.

[Oct 27, 2008] Bash Debugger 4.0-0.1

About: The Bash Debugger (bashdb) is a debugger for Bash scripts. The debugger command interface is modeled on the gdb command interface. Front-ends supporting bashdb include GNU-Emacs and ddd. In the past, the project has been used as a springboard for other experimental features such as a timestamped history file (now in Bash versions after 3.0).

Changes: This major rewrite and reorganization of code has numerous bugfixes and has been tested on bash 3.1 and 4.0 alpha. With the introduction of a simple debugger command alias mechanism, there are some incompatibilities. Command aliasing of short commands is no longer hard-wired. New commands of note are "set autoeval" and "step+", taken from ruby-debug. Emacs support has been greatly improved. Long option command-processing is guaranteed.

[Oct 27, 2008] Teach an old shell new tricks with BashDiff

BashDiff is a patch for the bash shell that can do an amazing number of things. It extends existing bash features, brings a few of awk's tricks into the shell itself, exposes some common C functions to bash shell programming, adds an exception mechanism, provides features of functional programming such as list comprehension and the map function, lets you talk with GTK+2 and databases, and even adds a Web server right into the standard bash shell.

There are no packages of BashDiff in the openSUSE, Fedora, or Ubuntu repositories. I'll build from source using BashDiff 1.45 on an x86 Fedora 9 machine running bash 3.0. While versions 3.1 and 3.2 of bash are available, the 1.45 BashDiff patch does not apply cleanly to either.

[Jul 21, 2008] Project details for Advanced Bash Scripting Guide 5.3

Recreational scripts were added, such as an almost full-featured Perquacky clone script, a script that does the "Petals Around the Rose" puzzle, and a crossword puzzle solver script. There is also the usual batch of bugfixes and other new material.

[Apr 3, 2008] -- Excerpt from Linux Cookbook, Part 1

[Mar 31, 2008] Bash Shell Programming in Linux by P. Lutus

Simple but nice intro with good examples

[Mar 30, 2008] Linux, Unix, -etc Unix Shell Scripts

These are simple stand-alone scripts. You are unlikely to find anything very impressive here if you already hack your own scripts, but a novice might find some of the ideas new.


Recommended Links

Softpanorama Top Visited

Softpanorama Recommended

Please visit Heiner Steven's SHELLdorado, the best shell scripting site on the Internet

***** SHELLdorado -- An excellent site by Heiner Steven. Very cute name that captures the fact that the shell is a crown jewel of Unix ;-) IMHO this is the best shell-related site on the Internet. Actively maintained. Highly recommended! Note: This site now has a bulletin that you can subscribe to [Feb 02, 2002]

***** The official Korn Shell Web Site by David Korn, the author of ksh88 and ksh93. See also his son's Home Page  (actually I think that the Tksh implementation is the best free shell available, but ksh93 is better supported).

***+ Open Directory - Computers Operating Systems Unix Shell -- a decent collection of links.  Could be better...

**** Unix shell scripts by Paul Dunne; contains links and several useful scripts.

scripting -- this is a member site and can be down

***+ kshweb.html--  a good site by Dana French

*** home -- several dot files of very uneven (often low) quality. Still better than nothing...

Other Links

MAN pages

KornShell 93 Manual Page man pages section 1 User Commands - ksh

Man pages from Neosoft

Bash(1) manual page

pdksh man page

FAQs -- the only Usenet group about shells

Note: see first the Unix FAQ shell index -- it's probably more up-to-date than this document


See also additional material at

Collections of reference materials, cards, etc

Reference cards:

Reference manuals

Desktop Kornshell


Legacy stuff

Public domain Korn Shell (pdksh)

Public domain Korn Shell is closer to ksh88 than ksh93. It has most of the ksh88 features, not much of the ksh93 features, and a number of its own features.

Old Korn shell (Kornshell 88)



Another open source shell that is actively developed. Probably the most powerful free Unix shell, but it definitely suffers from feature creep. Not 100% compatible with ksh93 or bash. Many of the useful features of bash, ksh93, and tcsh were incorporated into zsh; many original features were added (see An Introduction to the Z Shell). Zsh was originally written by Paul Falstad while he was a student at Princeton. Zsh is now maintained by the members of the zsh-workers mailing list; development is currently coordinated by Zoltan Hidvegi. Z shell is very popular in Europe, less so in the USA.

Advantages of Zsh include:

Main WEB sites include:

Experimental Scripting Language based shells

Most scripting languages can be used as a shell. Experimental shells exist for Perl and Scheme, but only Tcl managed to get into a commercial-grade product (Tksh). JavaScript can be used as a shell for NT.


Scsh (a Unix Scheme shell) FAQ

Perl shell 0.009       

Perl Shell  Gregor N. Purdy - November 23rd 1999, 18:03 EST

The Perl Shell (psh) is an interactive command-line Unix shell that aims to bring the benefits of Perl scripting to a shell context and the shell's interactive execution model to Perl.

Changes: This version of the Perl Shell adds significant functionality. However, it is still an early development release. New features include rudimentary background jobs handling and job management, signal handling, filename completion, updates to history handling, flexible %built_ins mechanism for adding built-in functions and smart mode is now on by default.

C-shell and tcsh

Tcsh and C shell

C-shell based tutorials

Random Findings


A modular Perl shell written, configured, and operated entirely in Perl. It aspires to be a fully operational login shell with all the features one normally expects. But it also gives direct access to Perl objects and data structures from the command line, and allows you to run Perl code within the scope of your command line. And it's named after one of the greatest characters on Futurama, so it must be good...

RC shell

A very interesting shell with advanced piping support.

Lost Links





Copyright © 1996-2015 by Dr. Nikolai Bezroukov. Was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.

The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable Javascript for this site. This site is perfectly usable without Javascript.

Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...

You can use PayPal to make a contribution, supporting development of this site and speeding up access. In case the site is down, there are currently two functional mirrors: (the fastest) and


The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author present and former employers, SDNP or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.

Last modified: September 12, 2016