Usage of pipes with loops in shell


Korn shell and its derivatives, such as ksh93, bash and zsh, have a useful feature: the ability to pipe a file into a loop. This is effectively an implementation of simple coroutines. Please note that ksh also has a more general coroutine facility (see Learning the Korn Shell by Bill Rosenblatt and Arnold Robbins).

In bash this capability is limited, as bash does not run the last stage of the pipe in the current process, so variables set inside the loop are lost when it ends. Bash developers probably thought they could reinvent the wheel, but created a square. Googling for "bash pipe subprocesses order" shows the extent of the problem, but I couldn't find the bash developers' official stand on it. Recent versions of bash (4.2 and later) provide the lastpipe option, which forces ksh-like behavior.
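Here is a minimal illustration of the difference (a sketch; the lastpipe option requires bash 4.2+ and takes effect only when job control is off, i.e. in scripts):

n=0
echo hello | while read line; do n=1; done
echo $n               # prints 0 in bash: the loop ran in a subshell

shopt -s lastpipe     # bash 4.2+; effective in scripts (job control off)
n=0
echo hello | while read line; do n=1; done
echo $n               # now prints 1, as in ksh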

Let's assume that we need to find all files that contain the string "19%", which is typical of printf-style format strings like "19%2d":

cd /usr/bin
ls | while read file
do
    echo "$file"
    strings "$file" | grep '19%'
done

Here we use the ls command to generate the list of file names, and this list is piped into the loop. Inside the loop we echo each file name and then run strings piped to grep, looking for suspicious format strings.
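Note that parsing the output of ls breaks on unusual file names; a slightly safer variant of the same idea (a sketch) uses the shell's own globbing and quotes the variable:

cd /usr/bin
for file in *
do
    echo "$file"
    strings "$file" | grep '19%'
done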

Another example comes from O'Reilly's Learning the Korn Shell (first edition); here we pipe awk output into the loop. This is a function that, given a pathname as argument, prints its equivalent in tilde notation if possible:

function tildize {
    if [[ $1 = $HOME* ]]; then
        print "\~/${1#$HOME}"
        return 0
    fi
    awk '{FS=":"; print $1, $6}' /etc/passwd | 
        while read user homedir; do
            if [[ $homedir != / && $1 = ${homedir}?(/*) ]]; then
                print "\~$user/${1#$homedir}"
                return 0
            fi
        done
    print "$1"
    return 1
}
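A hypothetical session (assuming /etc/passwd lists a user alice with home directory /home/alice and that the function is run by a different user; the backslash in the output comes from the escaped tilde in the function):

$ tildize /home/alice/projects/notes.txt
\~alice/projects/notes.txt
$ tildize /var/tmp/scratch
/var/tmp/scratch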

A loop can also serve as a source of input for a pipe. For example:

{ while read line'?adc> '; do
      print "$(alg2rpn $line)"
  done 
} | dc
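The alg2rpn function used above (it converts algebraic notation to RPN) comes from the book and is not shown here. A self-contained sketch of the same pattern feeds dc, which expects RPN, directly from a loop:

{ for n in 1 2 3 4 5
  do
      print "$n $n * p"    # push n twice, multiply, print the square
  done
} | dc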

As an example, assume that you want to go through all files in a directory and, if they are readable to you, convert their filenames to lowercase letters only. This can be done in slightly different ways.

There are two major ways to accomplish this (plus a related scheduler example):

  1. The first, more traditional, variant calls tr inside the for loop:
    #!/bin/ksh
    for x in * 
    do
      [ -r "$x" ] && echo "$x" | tr 'A-Z' 'a-z'
    done
    
  2. The second, more elegant variant uses a pipe to feed tr from the loop output, so tr is invoked only once:
    #!/bin/ksh
    for x in * 
    do
      [ -r "$x" ] && echo "$x" 
    done | tr 'A-Z' 'a-z'
  3. Usage in submission scripts for SGE and other HPC schedulers. Here is one example in which we generate a ./machines file for MPI from the SGE variable $PE_HOSTFILE:
    # get machines from $PE_HOSTFILE
    
    cat /dev/null > ./machines
    
    cat $PE_HOSTFILE | while read line; do
        host=`echo $line | cut -d" " -f1`
        cores=`echo $line | cut -d" " -f2`
    
        while (( $cores > 0 )) ; do
            echo $host >> machines
            let cores--
        done
    done
    
    ## done with $PE_HOSTFILE
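As an aside, the nested loop in the last example can be replaced by a single awk invocation; a sketch assuming each line of $PE_HOSTFILE starts with a hostname followed by a core count:

    awk '{ for (i = 0; i < $2; i++) print $1 }' $PE_HOSTFILE > ./machines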

Monitoring the progress of data through a pipeline

There is also a useful terminal-based tool for monitoring the progress of data through a pipeline, called pv (pipe viewer). It can be inserted into any normal pipeline between two processes to give a visual indication of how quickly data is passing through, how long it has taken, how near to completion it is, and an estimate of how long it will be until completion. It is available for all major Linux distributions, and a precompiled Solaris binary also exists.
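For example, to watch a large file being compressed, pv can stand in for cat at the head of the pipeline (the file name here is, of course, just an illustration):

pv /var/log/big.log | gzip > big.log.gz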



Old News ;-)

[Jun 23, 2021] How to make a pipe loop in bash

Jun 23, 2021 | stackoverflow.com



mweerden ,


Assume that I have programs P0 , P1 , ... P(n-1) for some n > 0 . How can I easily redirect the output of program Pi to program P(i+1 mod n) for all i ( 0 <= i < n )?

For example, let's say I have a program square , which repeatedly reads a number and then prints the square of that number, and a program calc , which sometimes prints a number after which it expects to be able to read the square of it. How do I connect these programs such that whenever calc prints a number, square squares it and returns it to calc ?

Edit: I should probably clarify what I mean with "easily". The named pipe/fifo solution is one that indeed works (and I have used in the past), but it actually requires quite a bit of work to do properly if you compare it with using a bash pipe. (You need to get a not yet existing filename, make a pipe with that name, run the "pipe loop", clean up the named pipe.) Imagine you could no longer write prog1 | prog2 and would always have to use named pipes to connect programs.

I'm looking for something that is almost as easy as writing a "normal" pipe. For instance something like { prog1 | prog2 } >&0 would be great.


mweerden ,


After spending quite some time yesterday trying to redirect stdout to stdin , I ended up with the following method. It isn't really nice, but I think I prefer it over the named pipe/fifo solution.

read | { P0 | ... | P(n-1); } >/dev/fd/0

The { ... } >/dev/fd/0 is to redirect stdout to stdin for the pipe sequence as a whole (i.e. it redirects the output of P(n-1) to the input of P0). Using >&0 or something similar does not work; this is probably because bash assumes 0 is read-only while it doesn't mind writing to /dev/fd/0 .

The initial read -pipe is necessary because without it both the input and output file descriptor are the same pts device (at least on my system) and the redirect has no effect. (The pts device doesn't work as a pipe; writing to it puts things on your screen.) By making the input of the { ... } a normal pipe, the redirect has the desired effect.

To illustrate with my calc / square example:

function calc() {
  # calculate sum of squares of numbers 0,..,10

  sum=0
  for ((i=0; i<10; i++)); do
    echo $i                   # "request" the square of i

    read ii                   # read the square of i
    echo "got $ii" >&2          # debug message

    let sum=$sum+$ii
  done

  echo "sum $sum" >&2           # output result to stderr
}

function square() {
  # square numbers

  read j                         # receive first "request"
  while [ "$j" != "" ]; do
    let jj=$j*$j
    echo "square($j) = $jj" >&2  # debug message

    echo $jj                     # send square

    read j                       # receive next "request"
  done
}

read | { calc | square; } >/dev/fd/0

Running the above code gives the following output:

square(0) = 0
got 0
square(1) = 1
got 1
square(2) = 4
got 4
square(3) = 9
got 9
square(4) = 16
got 16
square(5) = 25
got 25
square(6) = 36
got 36
square(7) = 49
got 49
square(8) = 64
got 64
square(9) = 81
got 81
sum 285

Of course, this method is quite a bit of a hack. Especially the read part has an undesired side effect: termination of the "real" pipe loop does not lead to termination of the whole. I couldn't think of anything better than read , as it seems that you can only determine that the pipe loop has terminated by trying to write something to it.

regnarg ,

Nice solution. I had to do something similar using netcat inside a loop and worked around the 'read' side effect by 'closing' its input with an 'echo'. In the end it was something like this : echo | read | { P0 | ... | P(n-1); } >/dev/fd/0 – Thiago de Arruda Nov 30 '11 at 16:29

Douglas Leeder , 2008-09-02 20:57:53


A named pipe might do it:

$ mkfifo outside
$ <outside calc | square >outside &
$ echo "1" >outside ## Trigger the loop to start

Douglas Leeder ,

Could you explain the line "<outside calc | square >outside &"? I am unsure about <outside and >outside. – Léo Léopold Hertz 준영 May 7 '09 at 18:35

Mark Witczak ,


This is a very interesting question. I (vaguely) remember an assignment very similar in college 17 years ago. We had to create an array of pipes, where our code would get filehandles for the input/output of each pipe. Then the code would fork and close the unused filehandles.

I'm thinking you could do something similar with named pipes in bash. Use mknod or mkfifo to create a set of pipes with unique names you can reference, then fork your program.


Andreas Florath , 2015-03-14 20:30:14


My solution uses pipexec (most of the function implementation comes from your answer):

square.sh

function square() {
  # square numbers

  read j                         # receive first "request"
  while [ "$j" != "" ]; do
    let jj=$j*$j
    echo "square($j) = $jj" >&2  # debug message

    echo $jj                     # send square

    read j                       # receive next "request"
  done
}

square $@

calc.sh

function calc() {
  # calculate sum of squares of numbers 0,..,10

  sum=0
  for ((i=0; i<10; i++)); do
    echo $i                   # "request" the square of i

    read ii                   # read the square of i
    echo "got $ii" >&2          # debug message

    let sum=$sum+$ii
 done

 echo "sum $sum" >&2           # output result to stderr
}

calc $@

The command

pipexec [ CALC /bin/bash calc.sh ] [ SQUARE /bin/bash square.sh ] \
    "{CALC:1>SQUARE:0}" "{SQUARE:1>CALC:0}"

The output (same as in your answer)

square(0) = 0
got 0
square(1) = 1
got 1
square(2) = 4
got 4
square(3) = 9
got 9
square(4) = 16
got 16
square(5) = 25
got 25
square(6) = 36
got 36
square(7) = 49
got 49
square(8) = 64
got 64
square(9) = 81
got 81
sum 285

Comment: pipexec was designed to start processes and build arbitrary pipes in between. Because bash functions cannot be handled as processes, there is the need to have the functions in separate files and use a separate bash.


1729 ,


Named pipes.

Create a series of fifos, using mkfifo

i.e. fifo0, fifo1

Then attach each process in turn to the pipes you want:

processn < fifo(n-1) > fifon
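For instance, a three-stage ring with hypothetical programs P0, P1 and P2 could be wired up like this (a sketch):

mkfifo fifo0 fifo1 fifo2
P0 < fifo2 > fifo0 &
P1 < fifo0 > fifo1 &
P2 < fifo1 > fifo2 &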


Penz ,


I doubt sh/bash can do it. ZSH would be a better bet, with its MULTIOS and coproc features.

Léo Léopold Hertz 준영 ,

Could you give an example about Zsh? I am interested in it. – Léo Léopold Hertz 준영 May 7 '09 at 18:36

Fritz G. Mehner ,


A command stack can be composed as a string from an array of arbitrary commands and evaluated with eval. The following example gives the result 65536.

function square ()
{
  read n
  echo $((n*n))
}    # ----------  end of function square  ----------

declare -a  commands=( 'echo 4' 'square' 'square' 'square' )

#-------------------------------------------------------------------------------
#   build the command stack using pipes
#-------------------------------------------------------------------------------
declare     stack=${commands[0]}

for (( COUNTER=1; COUNTER<${#commands[@]}; COUNTER++ )); do
  stack="${stack} | ${commands[${COUNTER}]}"
done

#-------------------------------------------------------------------------------
#   run the command stack
#-------------------------------------------------------------------------------
eval "$stack"

reinierpost ,

I don't think you're answering the question. – reinierpost Jan 29 '10 at 15:04

[Jun 23, 2021] bash - How can I use a pipe in a while condition- - Ask Ubuntu

Notable quotes:
"... This is not at all what you are looking for. ..."
Jun 23, 2021 | askubuntu.com

John1024 , 2016-09-17 06:14:32


To get the logic right, just minor changes are required. Use:

while ! df | grep '/toBeMounted'
do
  sleep 2
done
echo -e '\a'Hey, I think you wanted to know that /toBeMounted is available finally.
Discussion

The corresponding code in the question was:

while df | grep -v '/toBeMounted'

The exit code of a pipeline is the exit code of the last command in the pipeline. grep -v '/toBeMounted' will return true (code=0) if at least one line of input does not match /toBeMounted . Thus, this tests whether there are other things mounted besides /toBeMounted . This is not at all what you are looking for.

To use df and grep to test whether /toBeMounted is mounted, we need

df | grep '/toBeMounted'

This returns true if /toBeMounted is mounted. What you actually need is the negation of this: you need a condition that is true if /toBeMounted is not mounted. To do that, we just need to use negation, denoted by ! :

! df | grep '/toBeMounted'

And, this is what we use in the code above.

Documentation

From the Bash manual :

The return status of a pipeline is the exit status of the last command, unless the pipefail option is enabled. If pipefail is enabled, the pipeline's return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully. If the reserved word ! precedes a pipeline, the exit status of that pipeline is the logical negation of the exit status as described above. The shell waits for all commands in the pipeline to terminate before returning a value.
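A quick demonstration of these rules (a bash session sketch):

$ true | false; echo $?
1                        # status of the last command
$ false | true; echo $?
0                        # failure of the first stage is masked
$ set -o pipefail
$ false | true; echo $?
1                        # pipefail reports the rightmost non-zero status
$ ! false | true; echo $?
0                        # ! negates the pipeline's status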


John1024 ,

Yeah it looks like my real problem wasn't the pipe, but not clearly thinking about the -v on a line by line basis. – dlamblin Sep 17 '16 at 6:47

Sergiy Kolodyazhnyy ,


The fact that you're using df with grep tells me that you're filtering the output of df until some device mounts to a specific directory, i.e. checking whether or not it's on the list.

Instead of filtering the list, focus on the directory that you want. Luckily for us, the utility mountpoint allows us to do exactly that, and lets us deal with the exit status of that command. Consider this:

$ mountpoint  /mnt/HDD/                                                        
/mnt/HDD/ is a mountpoint
$ echo $?
0
$ mountpoint  ~                                                                
/home/xieerqi is not a mountpoint
$ echo $?
1

Your script thus can be rewritten as

while ! mountpoint /toBeMounted > /dev/null
do
   sleep 3
done
echo "Yup, /toBeMounted got mounted!"

Sample run with my own disk:

$ while ! mountpoint /mnt/HDD > /dev/null
> do 
>     echo "Waiting"
>     sleep 1
> done && echo "/mnt/HDD is mounted"
Waiting
Waiting
Waiting
Waiting
Waiting
/mnt/HDD is mounted

On a side note, you can fairly easily implement your own version of the mountpoint command, for instance in Python, like I did:

#!/usr/bin/env python3
from os import path
import sys

def main():

    if len(sys.argv) < 2:
       print('Missing a path')
       sys.exit(1)

    full_path = path.realpath(sys.argv[1])
    with open('/proc/self/mounts') as mounts:
       for line in mounts:
           if full_path in line:
              print(full_path,' is mountpoint')
              sys.exit(0)
    print(full_path,' is not a mountpoint')
    sys.exit(1)

if __name__ == '__main__':
    main()

Sample run:

$ python3 ./is_mountpoint.py /mnt/HDD                                          
/mnt/HDD  is mountpoint
$ python3 ./is_mountpoint.py ~                                                 
/home/xieerqi  is not a mountpoint

Sergiy Kolodyazhnyy ,

I was generally unclear on using a pipe in a conditional statement. But the specific case of checking for a mounted device, mountpoint sounds perfect, thanks. Though conceptually in this case I could have also just done: while [ ! -d /toBeMounted ]; do sleep 2; done; echo -e \\aDing the directory is available now.dlamblin Sep 20 '16 at 0:52

[Jun 23, 2021] bash - multiple pipes in loop, saving pipeline-result to array - Unix Linux Stack Exchange

Jun 23, 2021 | unix.stackexchange.com



gugy , 2018-07-25 09:56:33


I am trying to do the following (using bash): search for files that always have the same name and extract data from these files. I want to store the extracted data in new arrays. I am almost there, I think; see the code below.

The files I am searching for all have this format:

 #!/bin/bash
  echo "the concentration of NDPH is 2 mM, which corresponds to 2 molecules in a box of size 12 nm (12 x 12 x 12 nm^3)" > README_test

#find all the README* files and save the paths into an array called files
  files=()
  data1=()
  data2=()
  data3=()

  while IFS=  read -r -d $'\0'; do
files+=("$REPLY")
  #open all the files and extract data from them
  while read -r line
  do
name="$line"
echo "$name" | tr ' ' '\n'|  awk 'f{print;f=0;exit} /of/{f=1}' 
echo "$name" 
echo "$name" | tr ' ' '\n'|  awk 'f{print;f=0;exit} /of/{f=1}'
data1+=( "$echo "$name" | tr ' ' '\n'|  awk 'f{print;f=0;exit} /of/{f=1}' )" )    

# variables are not preserved...
# data2+= echo "$name"  | tr ' ' '\n'|  awk 'f{print;f=0;exit} /is/{f=1}'
echo "$name"  | tr ' ' '\n'|  awk 'f{print;f=0;exit} /size/{f=1}'
# variables are not preserved... 
# data3+= echo "$name"  | tr ' ' '\n'|  awk 'f{print;f=0;exit} /size/{f=1}'
  done < "$REPLY"
  done < <(find . -name "README*" -print0)
  echo ${data1[0]}

The issue is that the pipe giving me the exact output I want from the files is "not working" (variables are not preserved) in the loops. I have no idea how/if I can use process substitution to get what I want: an array (data1, data2, data3) filled with the output of the pipes.

UPDATE: So I was not assigning things to the array correctly (see data1, which is properly assigning something now). But why are

echo ${data1[0]}

and

echo "$name" | tr ' ' '\n'|  awk 'f{print;f=0;exit} /of/{f=1}'

not the same?

SOLUTION (as per ilkkachu' s accepted answer):

  #!/bin/bash
  echo "the concentration of NDPH is 2 mM, which corresponds to 2 molecules in a box of size 12 nm (12 x 12 x 12 nm^3)" > README_test
  files=()
  data1=()
  data2=()
  data3=()

  get_some_field() {    
 echo "$1" | tr ' ' '\n'|  awk -vkey="$2" 'f{print;f=0;exit} $0 ~ key {f=1}' 
  }

  #find all the README* files and save the paths into an array called files
  while IFS=  read -r -d $'\0'; do
files+=("$REPLY")
  #open all the files and extract data from them
  while read -r line
  do
name="$line"
echo "$name" 
echo "$name" | tr ' ' '\n'|  awk 'f{print;f=0;exit} /of/{f=1}'
data1+=( "$(get_some_field "$name" of)" )
data2+=( "$(get_some_field "$name" is)" )
data3+=( "$(get_some_field "$name" size)" )

  done < "$REPLY"
 done < <(find . -name "README*" -print0)

  echo ${data1[0]}
  echo ${data2[0]}
  echo ${data3[0]}

steeldriver ,

data1+= echo... doesn't really do anything to the data1 variable. Do you mean to use data1+=( "$(echo ... | awk)" ) ? – ilkkachu Jul 25 '18 at 10:20


I'm assuming you want the output of the echo ... | awk stored in a variable, and in particular, appended to one of the arrays.

First, to capture the output of a command, use "$( cmd... )" (command substitution). As a trivial example, this prints your hostname:

var=$(uname -n)
echo $var

Second, to append to an array, you need to use the array assignment syntax, with parentheses around the right-hand side. This would append the value of var to the array:

array+=( $var )

And third, the expansion of $var and the command substitution $(...) are subject to word splitting, so you want to use double quotes around them. Again a trivial example, this puts the full output of uname -a as a single element in the array:

array+=( "$(uname -a)" )

Or, in your case, in full:

data1+=( "$(echo "$1" | tr ' ' '\n'|  awk 'f{print;f=0;exit} /of/{f=1}')" )

(Note that the quotes inside the command substitution are distinct from the quotes outside it. The quote before $1 doesn't stop the quoting started outside $() , unlike what the syntax highlighting on SE seems to imply.)

You could make that slightly simpler to read by putting the pipeline in a function:

get_data1() {
    echo "$name" | tr ' ' '\n'|  awk 'f{print;f=0;exit} /of/{f=1}'
}
...
data1+=( "$(get_data1)" )

Or, as the pipelines seem similar, use the function to avoid repeating the code:

get_some_field() {
    echo "$1" | tr ' ' '\n'|  awk -vkey="$2" 'f{print;f=0;exit} $0 ~ key {f=1}'
}

and then

data1+=( "$(get_some_field "$name" of)" )
data2+=( "$(get_some_field "$name" is)" )
data3+=( "$(get_some_field "$name" size)" )

(If I read your pipeline right, that is, I didn't test the above.)

[Dec 06, 2015] Bash For Loop Examples

A very nice tutorial by Vivek Gite (created October 31, 2008; last updated June 24, 2015). His mistake is putting the new for loop syntax too deep inside the tutorial. It should be emphasized, not hidden.
The comments contain several "pipes in the loop" examples.
June 24, 2015 | cyberciti.biz

... ... ...

Bash v4.0+ has inbuilt support for setting up a step value using {START..END..INCREMENT} syntax:

#!/bin/bash
echo "Bash version ${BASH_VERSION}..."
for i in {0..10..2}
  do
     echo "Welcome $i times"
 done

Sample outputs:

Bash version 4.0.33(0)-release...
Welcome 0 times
Welcome 2 times
Welcome 4 times
Welcome 6 times
Welcome 8 times
Welcome 10 times

... ... ...

Three-expression bash for loops syntax

This type of for loop shares a common heritage with the C programming language. It is characterized by a three-parameter loop control expression, consisting of an initializer (EXP1), a loop-test or condition (EXP2), and a counting expression (EXP3).

for (( EXP1; EXP2; EXP3 ))
do
	command1
	command2
	command3
done

A representative three-expression example in bash as follows:

#!/bin/bash
for (( c=1; c<=5; c++ ))
do
   echo "Welcome $c times"
done
... ... ...

Jadu Saikia, November 2, 2008, 3:37 pm

Nice one. All the examples are explained well, thanks Vivek.

seq 1 2 20
output can also be produced using jot

jot - 1 20 2

The infinite loops as everyone knows have the following alternatives.

while(true)
or
while :

//Jadu

Andi Reinbrech, November 18, 2010, 7:42 pm
I know this is an ancient thread, but thought this trick might be helpful to someone:

For the above example with all the cuts, simply do

set `echo $line`

This will split line into positional parameters and you can after the set simply say

F1=$1; F2=$2; F3=$3

I used this a lot many years ago on solaris with "set `date`", it neatly splits the whole date string into variables and saves lots of messy cutting :-)

… no, you can't change the FS, if it's not space, you can't use this method

Peko, July 16, 2009, 6:11 pm
Hi Vivek,
Thanks for this a useful topic.

IMNSHO, there may be something to modify here
=======================
Latest bash version 3.0+ has inbuilt support for setting up a step value:

#!/bin/bash
for i in {1..5}
=======================
1) The increment feature seems to belong to version 4 of bash.
Reference: http://bash-hackers.org/wiki/doku.php/syntax/expansion/brace
Accordingly, my bash v3.2 does not include this feature.

BTW, where did you read that it was 3.0+ ?
(I ask because you may know some good website of interest on the subject).

2) The syntax is {from..to..step} where from, to, step are 3 integers.
You code is missing the increment.

Note that GNU Bash documentation may be bugged at this time,
because on GNU Bash manual, you will find the syntax {x..y[incr]}
which may be a typo. (missing the second ".." between y and increment).

see http://www.gnu.org/software/bash/manual/bashref.html#Brace-Expansion

The Bash Hackers page
again, see http://bash-hackers.org/wiki/doku.php/syntax/expansion/brace
seems to be more accurate,
but who knows ? Anyway, at least one of them may be right… ;-)

Keep on the good work of your own,
Thanks a million.

- Peko

Michal Kaut July 22, 2009, 6:12 am
Hello,

is there a simple way to control the number formatting? I use several computers, some of which have non-US settings with comma as a decimal point. This means that
for x in $(seq 0 0.1 1) gives 0 0.1 0.2 … 1 on some machines and 0 0,1 0,2 … 1 on others.
Is there a way to force the first variant, regardless of the language settings? Can I, for example, set the keyboard to US inside the script? Or perhaps some alternative to $x that would convert commas to points?
(I am sending these as parameters to another code and it won't accept numbers with commas…)

The best thing I could think of is adding x=`echo $x | sed s/,/./` as a first line inside the loop, but there should be a better solution? (Interestingly, the sed command does not seem to be upset by me rewriting its variable.)

Thanks,
Michal

Peko July 22, 2009, 7:27 am

To Michal Kaut:

Hi Michal,

Such output format is configured through LOCALE settings.

I tried :

export LC_CTYPE="en_EN.UTF-8″; seq 0 0.1 1

and it works as desired.

You just have to find the exact value for LC_CTYPE that fits to your systems and your needs.

Peko

Peko July 22, 2009, 2:29 pm

To Michal Kaus [2]

Ooops – ;-)
Instead of LC_CTYPE,
LC_NUMERIC should be more appropriate
(Although LC_CTYPE is actually yielding to the same result – I tested both)

By the way, Vivek has already documented the matter : http://www.cyberciti.biz/tips/linux-find-supportable-character-sets.html

Philippe Petrinko October 30, 2009, 8:35 am

To Vivek:
Regarding your last example, that is: running a loop through arguments given to the script on the command line, there is a simpler way of doing this:
# instead of:
# FILES="$@"
# for f in $FILES

# use the following syntax
for arg
do
# whatever you need here – try : echo "$arg"
done

Of course, you can use any variable name, not only "arg".

Philippe Petrinko November 11, 2009, 11:25 am

To tdurden:

Why wouldn't you use

1) either a [for] loop
for old in * ; do mv ${old} ${old}.new; done

2) Either the [rename] command ?
excerpt form "man rename" :

RENAME(1) Perl Programmers Reference Guide RENAME(1)

NAME
rename – renames multiple files

SYNOPSIS
rename [ -v ] [ -n ] [ -f ] perlexpr [ files ]

DESCRIPTION
"rename" renames the filenames supplied according to the rule specified
as the first argument. The perlexpr argument is a Perl expression
which is expected to modify the $_ string in Perl for at least some of
the filenames specified. If a given filename is not modified by the
expression, it will not be renamed. If no filenames are given on the
command line, filenames will be read via standard input.

For example, to rename all files matching "*.bak" to strip the
extension, you might say

rename 's/\.bak$//' *.bak

To translate uppercase names to lower, you'd use

rename 'y/A-Z/a-z/' *

- Philippe

Philippe Petrinko November 11, 2009, 9:27 pm

If you set the shell option extglob, Bash understands some more powerful patterns. Here, a pattern-list is one or more patterns, separated by the pipe symbol (|).

?() Matches zero or one occurrence of the given patterns
*() Matches zero or more occurrences of the given patterns
+() Matches one or more occurrences of the given patterns
@() Matches one of the given patterns
!() Matches anything except one of the given patterns

source: http://www.bash-hackers.org/wiki/doku.php/syntax/pattern
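For example (bash; a small sketch):

shopt -s extglob        # enable extended globbing
ls !(*.bak)             # everything except *.bak files
ls *.@(jpg|jpeg)        # files ending in .jpg or .jpeg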

Philippe Petrinko November 12, 2009, 3:44 pm

To Sean:
Right, the more sharp a knife is, the easier it can cut your fingers…

I mean: there are side-effects to the use of file globbing (like in [ for f in * ] ) when the globbing expression matches nothing: the globbing expression is not substituted.

Then you might want to consider using [ nullglob ] shell extension,
to prevent this.
see: http://www.bash-hackers.org/wiki/doku.php/syntax/expansion/globs#customization

Devil hides in detail ;-)

Dominic January 14, 2010, 10:04 am

There is an interesting difference between the exit value for two different for looping structures (hope this comes out right):
for (( c=1; c<=2; c++ )) do echo -n "inside (( )) loop c is $c, "; done; echo "done (( )) loop c is $c"
for c in {1..2}; do echo -n "inside { } loop c is $c, "; done; echo "done { } loop c is $c"

You see that the first structure does a final increment of c, the second does not. The first is more useful IMO because if you have a conditional break in the for loop, then you can subsequently test the value of $c to see if the for loop was broken or not; with the second structure you can't know whether the loop was broken on the last iteration or continued to completion.

Dominic January 14, 2010, 10:09 am

sorry, my previous post would have been clearer if I had shown the output of my code snippet, which is:
inside (( )) loop c is 1, inside (( )) loop c is 2, done (( )) loop c is 3
inside { } loop c is 1, inside { } loop c is 2, done { } loop c is 2

Philippe Petrinko March 9, 2010, 2:34 pm

@Dmitry

And, again, as stated many times up there, using [seq] is counterproductive, because it requires a call to an external program, when you should Keep It Short and Simple, using only bash internal functions:


for ((c=1; c<21; c+=2)); do echo "Welcome $c times" ; done

(And I wonder why Vivek is sticking to that old solution, which should be presented only for historical reasons, from when there was no way of using bash internals.
By the way, this historical recall should be placed only at the end of the topic, not on top of it, which makes newbies stick to the outdated technique ;-) )

Sean March 9, 2010, 11:15 pm

I have a comment to add about using the builtin for (( … )) syntax. I would agree the builtin method is cleaner, but from what I've noticed with other builtin functionality, I had to check the speed advantage for myself. I wrote the following files:

builtin_count.sh:

#!/bin/bash
for ((i=1;i<=1000000;i++))
do
echo "Output $i"
done

seq_count.sh:

#!/bin/bash
for i in $(seq 1 1000000)
do
echo "Output $i"
done

And here were the results that I got:
time ./builtin_count.sh
real 0m22.122s
user 0m18.329s
sys 0m3.166s

time ./seq_count.sh
real 0m19.590s
user 0m15.326s
sys 0m2.503s

The performance increase isn't too significant, especially when you are probably going to be doing something a little more interesting inside of the for loop, but it does show that builtin commands are not necessarily faster.

Andi Reinbrech November 18, 2010, 8:35 pm

The reason why the external seq is faster, is because it is executed only once, and returns a huge splurb of space separated integers which need no further processing, apart from the for loop advancing to the next one for the variable substitution.

The internal loop is a nice and clean/readable construct, but it has a lot of overhead. The check expression is re-evaluated on every iteration, and a variable on the interpreter's heap gets incremented, possibly checked for overflow etc. etc.

Note that the check expression cannot be simplified or internally optimised by the interpreter because the value may change inside the loop's body (yes, there are cases where you'd want to do this, however rare and stupid they may seem), hence the variables are volatile and get re-evaluated.

I.e., bottom line: the internal one has more overhead; the "seq" version is equivalent to either having 1000000 integers inside the script (hard coded), or reading them once from a text file with a cat. Point being that it gets executed only once and becomes static.

OK, blah blah fishpaste, past my bed time :-)

Cheers,
Andi

Anthony Thyssen June 4, 2010, 6:53 am

The {1..10} syntax is pretty useful as you can use a variable with it!

limit=10
echo {1..${limit}}
{1..10}

You need to eval it to get it to work!

limit=10
eval "echo {1..${limit}}"
1 2 3 4 5 6 7 8 9 10

'seq' is not available on ALL systems (MacOSX for example)
and BASH is not available on all systems either.

You are better off using the old while-expr method for maximum compatibility!

   limit=10; n=1;
   while [ $n -le $limit ]; do
     echo $n;
     n=`expr $n + 1`;
   done

Alternatively use a seq() function replacement…

 # seq_count 10
seq_count() {
  i=1; while [ $i -le $1 ]; do echo $i; i=`expr $i + 1`; done
}
# simple_seq 1 2 10
simple_seq() {
  i=$1; while [ $i -le $3 ]; do echo $i; i=`expr $i + $2`; done
}
seq_integer() {
    if [ "X$1" = "X-f" ]
    then format="$2"; shift; shift
    else format="%d"
    fi
    case $# in
    1) i=1 inc=1 end=$1 ;;
    2) i=$1 inc=1 end=$2 ;;
    *) i=$1 inc=$2 end=$3 ;;
    esac
    while [ $i -le $end ]; do
      printf "$format\n" $i;
      i=`expr $i + $inc`;
    done
  }

Edited: by Admin – added code tags.

TheBonsai June 4, 2010, 9:57 am

The Bash C-style for loop was taken from KSH93, thus I guess it's at least portable towards Korn and Z.

The seq-function above could use i=$((i + inc)), if only POSIX matters. expr is obsolete for those things, even in POSIX.

Philippe Petrinko June 4, 2010, 10:15 am

Right Bonsai,
( http://www.opengroup.org/onlinepubs/009695399/utilities/xcu_chap02.html#tag_02_06_04 )

But FOR C-style does not seem to be POSIXLY-correct…

Read on-line reference issue 6/2004,
Top is here, http://www.opengroup.org/onlinepubs/009695399/mindex.html

and the Shell and Utilities volume (XCU) T.OC. is here
http://www.opengroup.org/onlinepubs/009695399/utilities/toc.html
doc is:
http://www.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap01.html

and FOR command:
http://www.opengroup.org/onlinepubs/009695399/utilities/xcu_chap02.html#tag_02_09_04_03

Anthony Thyssen June 6, 2010, 7:18 am

TheBonsai wrote…. "The seq-function above could use i=$((i + inc)), if only POSIX matters. expr is obsolete for those things, even in POSIX."

I am not certain it is in Posix. It was NOT part of the original Bourne Shell, and on some machines, I deal with Bourne Shell. Not Ksh, Bash, or anything else.

Bourne Shell syntax works everywhere! But as 'expr' is a builtin in more modern shells, then it is not a big loss or slow down.

This is especially important if writing a replacement command, such as for "seq" where you want your "just-paste-it-in" function to work as widely as possible.

I have been shell programming pretty well all the time since 1988, so I know what I am talking about! Believe me.

MacOSX has in this regard been the worst, and a very big backward step in UNIX compatibility. Two years after it came out, its shell still did not even understand most of the normal 'test' functions. A major pain to write shell scripts that need to also work on this system.

TheBonsai June 6, 2010, 12:35 pm

Yea, the question was if it's POSIX, not if it's 100% portable (which is a difference). The POSIX base more or less is a subset of the Korn features (88, 93), pure Bourne is something "else", I know. Real portability, which means a program can go wherever UNIX went, only in C ;)

Philippe Petrinko November 22, 2010, 8:23 am

And if you want to get rid of double-quotes, use:

one-liner code:
while read; do record=${REPLY}; echo ${record}|while read -d ","; do field="${REPLY#\"}"; field="${field%\"}"; echo ${field}; done; done<data

script code, added of some text to better see record and field breakdown:

#!/bin/bash
while read
do
echo "New record"
record=${REPLY}
echo ${record}|while read -d ,
do
field="${REPLY#\"}"
field="${field%\"}"
echo "Field is :${field}:"
done
done<data

Does it work with your data?

- PP

Philippe Petrinko November 22, 2010, 9:01 am

Of course, all the above code was assuming that your CSV file is named "data".

If you want to use anyname with the script, replace:

done<data

With:

done

And then use your script file (named for instance "myScript") with standard input redirection:

myScript < anyFileNameYouWant

Enjoy!

Philippe Petrinko November 22, 2010, 11:28 am

Well, no: there is a bug; the last field of each record is not read. It needs a workaround, maybe an IFS modification! After all, that's what it was built for… :O)

Anthony Thyssen November 22, 2010, 11:31 pm

Another bug is that the inner loop is a pipeline, so you can't assign variables for use later in the script. But you can use '<<<' to break the pipeline and avoid the echo.

But this does not help when you have commas within the quotes! Which is why you needed quotes in the first place.

In any case it is a little off topic. Perhaps a new thread for reading CSV files in shell should be created.

Philippe Petrinko November 24, 2010, 6:29 pm

Anthony,
Would you try this one-liner script on your CSV file?

This one-liner assumes that CSV file named [data] has __every__ field double-quoted.

while read; do r="${REPLY#\"}";echo "${r//\",\"/\"}" | while read -d \";do echo "Field is :${REPLY}:";done;done < data

Here is the same code, but for a script file, not a one-liner tweak.


#!/bin/bash
# script csv01.sh
#
# 1) Usage
# This script reads from standard input
# any CSV with double-quoted data fields
# and breaks down each field on standard output
#
# 2) Within each record (line), _every_ field MUST:
# - Be surrounded by double quotes,
# - and be separated from preceeding field by a comma
# (not the first field of course, no comma before the first field)
#
while read
do
echo "New record" # this is not mandatory-just for explanation
#
#
# store REPLY and remove opening double quote
record="${REPLY#\"}"
#
#
# replace every "," by a single double quote
record=${record//\",\"/\"}
#
#
echo ${record}|while read -d \"
do
# store REPLY into variable "field"
field="${REPLY}"
#
#
echo "Field is :${field}:" # just for explanation
done
done

This script, named here [csv01.sh], must be used so:

csv01.sh < my-csv-file-with-doublequotes

Philippe Petrinko November 24, 2010, 6:35 pm

@Anthony,

By the way, using [REPLY] in the outer loop _and_ the inner loop is not a bug.
As long as you know what you do, this is not a problem; you just have to store the [REPLY] value conveniently, as this script shows.

TheBonsai March 8, 2011, 6:26 am
for ((i=1; i<=20; i++)); do printf "%02d\n" "$i"; done

nixCraft March 8, 2011, 6:37 am

+1 for printf due to portability, but you can use bashy .. syntax too

for i in {01..20}; do echo "$i"; done

TheBonsai March 8, 2011, 6:48 am

Well, it isn't portable per se, it makes it portable to pre-4 Bash versions.

I think a more or less "portable" (in terms of POSIX, at least) code would be

i=0
while [ "$((i >= 20))" -eq 0 ]; do
  printf "%02d\n" "$i"
  i=$((i+1))
done

Philip Ratzsch April 20, 2011, 5:53 am

I didn't see this in the article or any of the comments so I thought I'd share. While this is a contrived example, I find that nesting two groups can help squeeze a two-liner (once for each range) into a one-liner:

for num in {{1..10},{15..20}};do echo $num;done

Great reference article!

Philippe Petrinko April 20, 2011, 8:23 am

@Philip
Nice thing to think of, using brace nesting, thanks for sharing.

Philippe Petrinko May 6, 2011, 10:13 am

Hello Sanya,

That would be because brace expansion does not support variables. I have to check this.
Anyway, Keep It Short and Simple: (KISS) here is a simple solution I already gave above:

xstart=1;xend=10;xstep=1
for (( x = $xstart; x <= $xend; x += $xstep)); do echo $x;done

Actually, arithmetic evaluation allows you to omit the $ inside for (( )), so, as said before, you could also write:

xstart=1;xend=10;xstep=1
for (( x = xstart; x <= xend; x += xstep)); do echo $x;done

Philippe Petrinko May 6, 2011, 10:48 am

Sanya,

Actually brace expansion happens __before__ $ parameter expansion, so you cannot use it this way.

Nevertheless, you could overcome this the following way:

max=10; for i in $(eval echo {1..$max}); do echo $i; done

Sanya May 6, 2011, 11:42 am

Hello, Philippe

Thanks for your suggestions
You basically confirmed my findings, that bash constructions are not as simple as zsh ones.
But since I don't care about POSIX compliance, and want to keep my scripts "readable" for less experienced people, I would prefer to stick to zsh where my simple for-loop works

Cheers, Sanya

Philippe Petrinko May 6, 2011, 12:07 pm

Sanya,

First, you got it wrong: the solutions I gave are not related to POSIX. I just pointed out that arithmetic evaluation allows omitting the $ in for (( )), which is just a little bit more readable – sort of.

Second, why do you see this less readable than your [zsh] [for loop]?

for (( x = start; x <= end; x += step)) do
echo "Loop number ${x}"
done

It is clear that it is a loop, loop increments and limits are clear.

IMNSHO, if anyone cannot read this right, he should not be allowed to code. :-D

BFN

Anthony Thyssen May 8, 2011, 11:30 pm

If you are going to do… $(eval echo {1..$max});
You may as well use "seq" or one of the many other forms.
See all the other comments on doing for loops.

Tom P May 19, 2011, 12:16 pm

I am trying to use the variable I set in the for line to set another variable with a different extension. Couldn't get this to work and couldn't find it anywhere on the web… Can someone help?

Example:

FILE_TOKEN=`cat /tmp/All_Tokens.txt`
for token in $FILE_TOKEN
do
A1_$token=`grep $A1_token /file/path/file.txt | cut -d ":" -f2`

my goal is to take the values from the All_Tokens file and set a new variable with A1_ in front of it… This tells me that A1_ is not a command…


Using Bash To Feed Command Output To A While Loop Without Using Pipes!

The Linux and Unix Menagerie

But here's a really neat trick for getting this to work in bash 2.x. If you change your program to be structured like so:

while read line
do
    echo $line
done < <(ls -1d *)

Your outcome will result in success!! You've got the command output and you didn't have to use a pipe to feed it to the while loop!

NOTE: The two most important things to remember about doing this are that:

1. The space between the first < and second < is mandatory! Although, it should be noted that, between the two <'s, you can have as many spaces as you want. You can even use a tab between the two <'s, they just can't be directly connected.

2. The command, from which you want to use output as fodder for the while loop, needs to be run in a subshell (generally placed between parentheses, just like the ones surrounding this sentence) and the left parenthesis must immediately follow the second <, with "no" space in between!

We've already looked at what happens if you ignore rule number 1 and use << instead of < <. If you ignore rule number 2, you'll get:

./program: line 4: syntax error near unexpected token `<'
./program: line 4: `done < < (ls -1d *)'

And here's the "even better part" - In bash 3.x, you don't have to worry about all that spacing anymore, as they've added a new feature which does the same thing (or is it really just an old feature dressed up to make it seem fabulous? ;) In bash 3.x, you can use the triple-< operator. Actually, I believe the <<< syntax is referred to as a "here string," but that's purely academic. They could call it "fudge," as long as it works ;)

So, in bash 3.x, you could write a while loop that takes input from a command without using a pipe like so:

while read line
do
 echo hi $line
done <<< `ls -1d *`

NOTE: The space between the <<< and your backticked (or otherwise extrapolated) command output is not necessary and you can have as much space as the shell can stand between those two parts of the "here string." Of course, the three <'s need to be all clumped together with no space in between them.


bash Cookbook

pipeline of commands

We discuss in the book (in the chapter on common mistakes) the fact that a pipeline of commands runs those commands in subshells. The result (or dilemma) is that what happens in those subshells (e.g. counting something) is lost to the parent shell script unless the parent captures output from the pipeline, but that isn't always easy or desirable.

The bash man page describes a feature of bash called "Process Substitution" that lets you substitute the output of a pipeline of commands (actually a list of commands) using <(list) as the syntax.

But notice how the feature is described:

The process list is run with its input or output connected to a FIFO or some file in /dev/fd. The name of this file is passed as an argument to the current command as the result of the expansion.

The <(...) is going to be replaced with the name of a fifo. So if you wrote:

wc <(some commands)
the result would be:
wc fifo

that is, the fifo filename is passed to the command. That's fine for commands like wc that can accept a filename. But what about a builtin like while?

It turns out that you can add the redirect from the fifo, but the space between the two less-than signs is crucial to distinguish it from "<<", the "here document" syntax.

So you can write:

  while read a b c
  do
    ...
  done < <(pipeline of commands)
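To make the skeleton concrete, here is a small sketch that sums the sizes reported by du without losing the total to a subshell:

  total=0
  while read size name
  do
      total=$((total + size))    # survives the loop: no subshell involved
  done < <(du -s *)
  echo "total: $total blocks"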

Internal Commands and Builtins

Piping output to a read, using echo to set variables will fail.

Yet, piping the output of cat seems to work.

cat file1 file2 |
   while read line; do
     echo $line
   done

However, as Bjön Eriksson shows:

Example 14-8. Problems reading from a pipe
#!/bin/sh
# readpipe.sh
# This example contributed by Bjon Eriksson.

last="(null)"
cat $0 |
while read line
do
    echo "{$line}"
    last=$line
done
printf "\nAll done, last:$last\n"

exit 0  # End of code.
        # (Partial) output of script follows.
        # The 'echo' supplies extra brackets.

#############################################

./readpipe.sh 

{#!/bin/sh}
{last="(null)"}
{cat $0 |}
{while read line}
{do}
{echo "{$line}"}
{last=$line}
{done}
{printf "nAll done, last:$lastn"}


All done, last:(null)

The variable (last) is set within the subshell but unset outside.

The gendiff script, usually found in /usr/bin on many Linux distros, pipes the output of find to a while read construct.

find $1 \( -name "*$2" -o -name ".*$2" \) -print |
while read f; do
. . .
It is possible to paste text into the input field of a read. See Example A-39.

ksh vs bash setting variable in piped loops are lost

Ubuntu Forums
I'm a long time unix (not linux) programmer, mainly shell scripts.
Unix does not know about bash, but rather ksh or sh.

So I often build stuff like this:

Code:

n=0
du | sort -n | while read size dir
do
  if [ "$size" -gt 100000 ]
  then
    n=$((n+1))
  fi
done
echo "Found $n too big files"
The question is not how to reformulate this differently using awk or perl.

The question is: in ksh this script returns the correct value; in bash it always returns 0. This is because in ksh the last command in the pipe runs in the current process, whereas in bash every stage of the pipe runs in a subshell. Result: the modified variables are lost.

I'm still puzzled that this is the case and that nobody really seems to care. Maybe I'm missing the point and there is some simple environment variable or other setting that enables ksh-compatible behavior.
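In fact, recent versions of bash do provide such a setting: the lastpipe shell option (bash 4.2+, effective only when job control is off, i.e. in scripts) makes the script above work as in ksh. Alternatively, process substitution keeps the loop in the current shell and works in older bash versions too. Both variants are sketched below:

#!/bin/bash
shopt -s lastpipe    # bash 4.2+: run the last stage of a pipe in the current shell
n=0
du | sort -n | while read size dir
do
  [ "$size" -gt 100000 ] && n=$((n+1))
done
echo "Found $n too big files"

# Or, portable to older bash: feed the loop via process substitution
n=0
while read size dir
do
  [ "$size" -gt 100000 ] && n=$((n+1))
done < <(du | sort -n)
echo "Found $n too big files"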

