
Unix Pipes -- powerful and elegant programming paradigm

There are many people who use UNIX or Linux but who IMHO do not understand UNIX. UNIX is not just an operating system, it is a way of doing things, and the shell plays a key role by providing the glue that makes it work. The UNIX methodology relies heavily on reuse of a set of tools rather than on building monolithic applications. Even perl programmers often miss the point, writing the heart and soul of the application as perl script without making use of the UNIX toolkit.

David Korn (bold italic is mine -- BNN)

Pipeline programming applies a special style of componentization that allows one to break a problem into a number of small steps, each of which can then be performed by a simple program. We will call this type of componentization pipethink: wherever possible, the programmer relies on a preexisting collection of useful "stages" implemented by what are called Unix filters. The David Korn quote above captures the essence of pipethink -- "reuse of a set of tools rather than on building monolithic applications".

Traditionally, Unix utilities used as stages in pipelines are small and perform a single well-defined function. At the same time they are generalized enough to allow reuse in different situations. Because they are so small and well-defined, they can be made very reliable. In other words, Unix filters are "little gems". Over the years Unix has accumulated a rich collection of such gems, and shell programmers often find that a required text transformation can be written using only existing filters, sometimes with one or two custom stages, typically written in shell, awk, or Perl.

The concept of pipes is one of the most important Unix innovations (the other two are probably the hierarchical filesystem and regular expressions), and it has found its way into all other operating systems. Like many other Unix concepts it originated in Multics, but was not available in the Multics shell. Pipes were not present in the original Unix either; they were added in 1972, well after the PDP-11 version of the system was in operation, almost simultaneously with the rewriting of the Unix kernel in C.

Let me state it again: pipes are the most elegant of the three most innovative features of UNIX (the hierarchical filesystem, pipes, and regular expressions).  In The Creation of the UNIX Operating System / Connecting streams like a garden hose the authors wrote:

Another innovation of UNIX was the development of pipes, which gave programmers the ability to string together a number of processes for a specific output.

Doug McIlroy, then a department head in the Computing Science Research Center, is credited for the concept of pipes at Bell Labs, and Thompson gets the credit for actually doing it.

McIlroy had been working on macros in the later 1950s, and was always theorizing to anyone who would listen about linking macros together to eliminate the need to make a series of discrete commands to obtain an end result.

"If you think about macros," McIlroy explained, "they mainly involve switching data streams. I mean, you're taking input and you suddenly come to a macro call, and that says, 'Stop taking input from here and go take it from there.'

"Somewhere at that time I talked of a macro as a 'switchyard for data streams,' and there's a paper hanging in Brian Kernighan's office, which he dredged up from somewhere, where I talked about screwing together streams like a garden hose. So this idea had been banging around in my head for a long time."

 ... ... ... ...

While Thompson and Ritchie were at the chalkboard sketching out a file system, McIlroy was at his own chalkboard trying to sketch out how to connect processes together and to work out a prefix notation language to do it.

It wasn't easy. "It's very easy to say 'cat into grep into...,' or 'who into cat into grep,'" McIlroy explained. "But there are all these side parameters that these commands have; they just don't have input and output arguments, but they have all these options."

"Syntactically, it was not clear how to stick the options into this chain of things written in prefix notation, cat of grep of who [i.e. cat(grep(who))]," he said. "Syntactic blinders: I didn't see how to do it." 

Although stymied, McIlroy didn't drop the idea. "And over a period from 1970 to 1972, I'd from time to time say, 'How about making something like this?', and I'd put up another proposal, another proposal, another proposal. And one day I came up with a syntax for the shell that went along with the piping, and Ken said, 'I'm going to do it!'"

"He was tired of hearing this stuff," McIlroy explained. "He didn't do exactly what I had proposed for the pipe system call. He invented a slightly better one that finally got changed once more to what we have today. He did use my clumsy syntax."

"Thompson saw that file arguments weren't going to fit with this scheme of things and he went in and changed all those programs in the same night. I don't know how...and the next morning we had this orgy of one-liners."

"He put pipes into UNIX, he put this notation into shell, all in one night," McIlroy said in wonder.


Here is how Dennis M. Ritchie, in his paper Early Unix history and evolution, describes how pipes were introduced in Unix:

One of the most widely admired contributions of Unix to the culture of operating systems and command languages is the pipe, as used in a pipeline of commands. Of course, the fundamental idea was by no means new; the pipeline is merely a specific form of coroutine. Even the implementation was not unprecedented, although we didn't know it at the time; the `communication files' of the Dartmouth Time-Sharing System [10] did very nearly what Unix pipes do, though they seem not to have been exploited so fully.

Pipes appeared in Unix in 1972, well after the PDP-11 version of the system was in operation, at the suggestion (or perhaps insistence) of M. D. McIlroy, a long-time advocate of the non-hierarchical control flow that characterizes coroutines. Some years before pipes were implemented, he suggested that commands should be thought of as binary operators, whose left and right operand specified the input and output files. Thus a `copy' utility would be commanded by

 inputfile copy outputfile

To make a pipeline, command operators could be stacked up. Thus, to sort input, paginate it neatly, and print the result off-line, one would write

 input sort paginate offprint

In today's system, this would correspond to

 sort input | pr | opr

The idea, explained one afternoon on a blackboard, intrigued us but failed to ignite any immediate action. There were several objections to the idea as put: the infix notation seemed too radical (we were too accustomed to typing `cp x y' to copy x to y); and we were unable to see how to distinguish command parameters from the input or output files. Also, the one-input one-output model of command execution seemed too confining. What a failure of imagination!

Some time later, thanks to McIlroy's persistence, pipes were finally installed in the operating system (a relatively simple job), and a new notation was introduced. It used the same characters as for I/O redirection. For example, the pipeline above might have been written

 sort input >pr>opr>

The idea is that following a `>' may be either a file, to specify redirection of output to that file, or a command into which the output of the preceding command is directed as input. The trailing `>' was needed in the example to specify that the (nonexistent) output of opr should be directed to the console; otherwise the command opr would not have been executed at all; instead a file opr would have been created.

The new facility was enthusiastically received, and the term `filter' was soon coined. Many commands were changed to make them usable in pipelines. For example, no one had imagined that anyone would want the sort or pr utility to sort or print its standard input if given no explicit arguments.

Soon some problems with the notation became evident. Most annoying was a silly lexical problem: the string after `>' was delimited by blanks, so, to give a parameter to pr in the example, one had to quote:

 sort input >"pr -2">opr>

Second, in attempt to give generality, the pipe notation accepted `<' as an input redirection in a way corresponding to `>'; this meant that the notation was not unique. One could also write, for example,

 opr <pr<"sort input"<

or even

 pr <"sort input"< >opr>

The pipe notation using `<' and `>' survived only a couple of months; it was replaced by the present one that uses a unique operator to separate components of a pipeline. Although the old notation had a certain charm and inner consistency, the new one is certainly superior. Of course, it too has limitations. It is unabashedly linear, though there are situations in which multiple redirected inputs and outputs are called for. For example, what is the best way to compare the outputs of two programs? What is the appropriate notation for invoking a program with two parallel output streams?

I mentioned above in the section on IO redirection that Multics provided a mechanism by which IO streams could be directed through processing modules on the way to (or from) the device or file serving as source or sink. Thus it might seem that stream-splicing in Multics was the direct precursor of Unix pipes, as Multics IO redirection certainly was for its Unix version. In fact I do not think this is true, or is true only in a weak sense. Not only were coroutines well-known already, but their embodiment as Multics spliceable IO modules required that the modules be specially coded in such a way that they could be used for no other purpose. The genius of the Unix pipeline is precisely that it is constructed from the very same commands used constantly in simplex fashion. The mental leap needed to see this possibility and to invent the notation is large indeed.

If you are not familiar with pipes, you should study this feature. Pipes are an elegant implementation of coroutines at the OS shell level: they allow the output of one program to be fed as input to another program. Doug McIlroy, the inventor of pipes, is said to have pointed out that both pipes and lazy lists behave exactly like coroutines. Some other operating systems, like MS-DOS, "fake" pipes by writing all the output of the first program to a temporary file, and then using that temporary file as input to the second program. That's not the real thing (suppose the first program produces an enormous amount of output, or does not terminate at all), but this is not a page where we discuss operating systems.

The simplest way to get your feet wet with this construct is to use a shell language. Among the various Unix shells, ksh93 contains the best facilities for using pipes. Bash is weaker and more buggy, but still pretty decent. Perl is rather weak in this respect (for example, it is impossible for an internal subroutine in Perl to produce a stream that is read by another subroutine via a pipe), but one can compensate for this weakness by using sockets to implement complex pipe-style processing.

Tools like netcat can connect pipes to TCP/IP sockets, creating computer-to-computer pipes and thus extending the Unix philosophy of "everything is a file" to multiple networked computers.  Pipes can also be used with tools like rsh and ssh for inter-computer communication.

Historically, Modula-2 was the first widely used language that supported coroutines as a programming construct. Among scripting languages, both modern Python and Ruby support pipe-like constructs. BTW Perl is pretty disappointing in providing pipe-related functionality. It's really unfortunate, but the problem will probably be partially solved in Perl 6. See advanced languages for some languages that support coroutines and piping as a programming concept. To be fair, there is a special module, IPC::Run by Barrie Slaymaker: after a user is spun up on bash/ksh, it provides useful piping constructs, subprocesses, and either expect-like or event-loop-oriented I/O capabilities.

Probably everybody knows how to use simple pipes: to send the output of one program to another, place the | symbol (known as the pipe symbol) between the two commands, as follows:

command1 | command2

For example, if you want to look at a list of your files, but you have too many files to see at once, you can prevent them from scrolling too quickly by piping the ls command into the more command as shown below:

ls | more

Another example would be to pipe the lpc stat command into the more command as follows:

lpc stat | more
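Even at this level, chains of filters quickly become expressive. A classic illustration (the sample sentence here is arbitrary) is a word-frequency pipeline in the spirit of McIlroy, built from nothing but standard filters:

```shell
# Rank words by frequency using only standard filters:
# split into one word per line, lowercase, sort, count, rank by count.
printf 'The quick fox and the lazy dog and the cat\n' \
  | tr -cs '[:alpha:]' '\n' \
  | tr '[:upper:]' '[:lower:]' \
  | sort \
  | uniq -c \
  | sort -rn \
  | head -3
# the most frequent word ("the", 3 occurrences) comes out on the first line
```

Each stage is a small reusable tool; the whole transformation needs no custom code at all.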

But this is the "pipes for dummies" level; pipes are much more than that. Essentially, pipes are a very powerful programming paradigm: an implementation of coroutines in the shell.  The term "coroutine" was coined by Melvin Conway in his seminal 1963 paper.  IMHO it is an extremely elegant concept that can be considered one of the most important programming paradigms; I think that as a paradigm it is more important than the idea of object-oriented programming.  In many cases, structuring a program as a sequence of coroutines makes it much simpler than an object-oriented approach (or, more correctly, a primitive object-oriented approach, because a large part of OO blah-blah-blah is just common sense and useful hierarchical structuring of the namespace).

Paradoxically, the most widely used programming language for coroutine programming is ksh. Learning the Korn Shell contains probably the best explanation of the ksh88 coroutine mechanism (ksh93 has better capabilities).

Again, I would like to stress that pipes are an implementation of coroutines, and several languages (Modula-2, Oberon, Icon) contain mechanisms for using coroutines.  Coroutines are a much underrated feature: they buy you 80% of what threads give you with none of the hassle. But threads are better than nothing, and it is possible to use threads as a substitute for coroutines, for example in Java.

There are also named pipes. A named pipe (also called a FIFO) is a special file that acts as a buffer to connect processes on the same machine. Ordinary pipes also allow processes to communicate, but those processes must have inherited the filehandles from their parents. To use a named pipe, a process need know only the named pipe's filename. In most cases, processes don't even need to be aware that they're reading from a pipe. To use named pipes, you first need to create one with the mkfifo command:

% mkfifo /path/to/named.pipe

After that you can write to it using one process and read from it using another:


open(SYSFIFO, "> /mypath/my_named.pipe")  or die $!;   # writer process
print SYSFIFO "some data\n";
   ... ... ...

open(SYSFIFO, "< /mypath/my_named.pipe") or die $!;    # reader process
while (<SYSFIFO>) {
   ... ... ...
}

The writer to the pipe can be a daemon, for example syslogd.  This makes it possible to process syslog messages dynamically.

Unfortunately, using a named pipe as the source of input to another program won't always work, because some programs check the size of the file before trying to read it. Because named pipes appear on the filesystem as special files of zero size, such clients and servers will not try to open or read from our named pipe, and the trick will fail.
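The same writer/reader pairing can be demonstrated entirely in the shell (the path is illustrative). Note that opening a fifo blocks until both ends are open, which is why the writer runs in the background:

```shell
fifo=/tmp/demo.fifo          # illustrative path
mkfifo "$fifo"

# Writer runs in the background: opening a fifo for writing blocks
# until some process opens it for reading.
printf 'hello via fifo\n' > "$fifo" &

# Reader sees an ordinary stream of lines.
cat "$fifo"                  # prints: hello via fifo

wait                         # reap the background writer
rm "$fifo"
```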

Nikolai Bezroukov



Old News ;-)

[Nov 22, 2012] Nooface TermKit Fuses UNIX Command Line Pipes With Visual Output

TermKit is a visual front-end for the UNIX command line. A key attribute of the UNIX command line environment is the ability to chain multiple programs with pipes, in which the output of one program is fed through a pipe to become the input for the next program, and the last program in the chain displays the output of the entire sequence - traditionally as ASCII characters on a terminal (or terminal window). The piping approach is key to UNIX modularity, as it encourages the development of simple, well-defined programs that work together to solve a more complex problem.

TermKit maintains this modularity, but adds the ability to display the output in a way that fully exploits the more powerful graphics of modern interfaces. It accomplishes this by separating the output of programs into two types: raw data output, which can still be piped on to the next program, and view output, which the front-end renders graphically.

The result is that programs can display anything representable in a browser, including HTML5 media. The output is built out of generic widgets (lists, tables, images, files, progress bars, etc.) (see screen shot). The goal is to offer a rich enough set for the common data types of Unix, extensible with plug-ins. This YouTube video shows the interface in action with a mix of commands that produce both simple text-based output and richer visual displays. The TermKit code is based on Node.js, Socket.IO, jQuery and WebKit. It currently runs only on Mac and Windows, but 90% of the prototype functions work in any WebKit-based browser.

[Nov 21, 2012] Monadic i/o and UNIX shell programming

This is an essay inspired by Philip Wadler's paper "How to Declare an Imperative" [Wadler97]. We will show uncanny similarities between monadic i/o in Haskell, and UNIX filter compositions based on pipes and redirections. UNIX pipes (treated semantically as writing to temporary files) are quite similar to monads. Furthermore, at the level of UNIX programming, all i/o can be regarded monadic.

[Jul 26, 2011] Pipe Viewer Online Man Page

pv allows a user to see the progress of data through a pipeline, by giving information such as time elapsed, percentage completed (with progress bar), current throughput rate, total data transferred, and ETA.

To use it, insert it in a pipeline between two processes, with the appropriate options. Its standard input will be passed through to its standard output and progress will be shown on standard error.

pv will copy each supplied FILE in turn to standard output (- means standard input), or if no FILEs are specified just standard input is copied. This is the same behaviour as cat(1).

A simple example to watch how quickly a file is transferred using nc(1):

pv file | nc -w 1 somewhere.com 3000

A similar example, transferring a file from another process and passing the expected size to pv:

cat file | pv -s 12345 | nc -w 1 somewhere.com 3000

A more complicated example using numeric output to feed into the dialog(1) program for a full-screen progress display:

(tar cf - . \
| pv -n -s $(du -sb . | awk '{print $1}') \
| gzip -9 > out.tgz) 2>&1 \
| dialog --gauge 'Progress' 7 70

Frequent use of this third form is not recommended as it may cause the programmer to overheat.

[Dec 15, 2010] Pipe Viewer 1.2.0

pv (Pipe Viewer) is a terminal-based tool for monitoring the progress of data through a pipeline. It can be inserted into any normal pipeline between two processes to give a visual indication of how quickly data is passing through, how long it has taken, how near to completion it is, and an estimate of how long it will be until completion.

[Jul 28, 2010] Bash Co-Processes Linux Journal

One of the new features in bash 4.0 is the coproc statement. The coproc statement allows you to create a co-process that is connected to the invoking shell via two pipes: one to send input to the co-process and one to get output from the co-process.

The first use that I found for this I discovered while trying to do logging and using exec redirections. The goal was to allow you to optionally start writing all of a script's output to a log file once the script had already begun (e.g. due to a --log command line option).

The main problem with logging output after the script has already started is that the script may have been invoked with the output already redirected (to a file or to a pipe). If we change where the output goes when the output has already been redirected then we will not be executing the command as intended by the user.

The previous attempt ended up using named pipes:


echo hello

if test -t 1; then
    # Stdout is a terminal.
    exec >log
else
    # Stdout is not a terminal.
    npipe=/tmp/$$.tmp        # path for the named pipe (illustrative)
    trap "rm -f $npipe" EXIT
    mknod $npipe p
    tee <$npipe log &
    exec 1>&-
    exec 1>$npipe
fi

echo goodbye

From the previous article:

Here, if the script's stdout is not connected to the terminal, we create a named pipe (a pipe that exists in the file-system) using mknod and setup a trap to delete it on exit. Then we start tee in the background reading from the named pipe and writing to the log file. Remember that tee is also writing anything that it reads on its stdin to its stdout. Also remember that tee's stdout is also the same as the script's stdout (our main script, the one that invokes tee) so the output from tee's stdout is going to go wherever our stdout is currently going (i.e. to the user's redirection or pipeline that was specified on the command line). So at this point we have tee's output going where it needs to go: into the redirection/pipeline specified by the user.

We can do the same thing using a co-process:

echo hello

if test -t 1; then
    # Stdout is a terminal.
    exec >log
else
    # Stdout is not a terminal.
    exec 7>&1
    coproc tee log 1>&7
    #echo Stdout of coproc: ${COPROC[0]} >&2
    #echo Stdin of coproc: ${COPROC[1]} >&2
    #ls -la /proc/$$/fd
    exec 7>&-
    exec 7>&${COPROC[1]}-
    exec 1>&7-
    eval "exec ${COPROC[0]}>&-"
    #ls -la /proc/$$/fd
fi

echo goodbye
echo error >&2

In the case that our standard output is going to the terminal then we just use exec to redirect our output to the desired log file, as before. If our output is not going to the terminal then we use coproc to run tee as a co-process and redirect our output to tee's input and redirect tee's output to where our output was originally going.

Running tee using the coproc statement is essentially the same as running tee in the background (e.g. tee log &); the main difference is that bash runs tee with both its input and output connected to pipes. Bash puts the file descriptors for those pipes into an array named COPROC (by default): ${COPROC[0]} is open for reading the co-process's output, and ${COPROC[1]} is open for writing to its input.

Note that these pipes are created before any redirections are done in the command.

Focusing on the case where the original script's output is not connected to the terminal: the following line duplicates our standard output on file descriptor 7.

exec 7>&1

Then we start tee with its output redirected to file descriptor 7.

coproc tee log 1>&7

So tee will now write whatever it reads on its standard input to the file named log and to file descriptor 7, which is our original standard out.

Now we close file descriptor 7 (remember that tee still has the "file" that was open on 7 as its standard output) with:

exec 7>&-

Since we've closed 7 we can reuse it, so we move the pipe that's connected to tee's input to 7 with:

exec 7>&${COPROC[1]}-

Then we move our standard output to the pipe that's connected to tee's standard input (our file descriptor 7) via:

exec 1>&7-

And finally, we close the pipe connected to tee's output, since we don't have any need for it, with:

eval "exec ${COPROC[0]}>&-"

The eval is required here because otherwise bash thinks the value of ${COPROC[0]} is a command name. On the other hand, it's not required in the statement above (exec 7>&${COPROC[1]}-), because in that one bash can recognize that "7" is the start of a file descriptor action and not a command.

Also note the commented command:

#ls -la /proc/$$/fd

This is useful for seeing the files that are open by the current process.

We now have achieved the desired effect: our standard output is going into tee. Tee is "logging" it to our log file and writing it to the pipe or file that our output was originally going to.

As of yet I haven't come up with any other uses for co-processes, at least ones that aren't contrived. See the bash man page for more about co-processes.
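The walkthrough above uses coproc purely for redirection plumbing. As a minimal stand-alone illustration of the statement itself (bash 4 or later; the name UPPER and the upper-casing loop are just an example), here is a round-trip through a named co-process, closed with the same eval idiom the article uses:

```shell
#!/bin/bash
# Requires bash 4+. Start a co-process that upper-cases each line;
# its stdin/stdout pipe descriptors land in the array UPPER.
coproc UPPER { while read -r line; do printf '%s\n' "${line^^}"; done; }
pid=$UPPER_PID               # save the pid before the coproc can exit

printf 'hello\n' >&"${UPPER[1]}"   # write into the co-process
read -r reply <&"${UPPER[0]}"      # read its answer back
echo "$reply"                      # prints: HELLO

# Close our write end (eval is needed, for the reason explained above)
# so the co-process sees EOF and terminates.
eval "exec ${UPPER[1]}>&-"
wait "$pid"
```

A bash while/read loop is used as the co-process body deliberately: external filters such as tr fully buffer their output when writing to a pipe, which would deadlock this line-by-line exchange.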

[Apr 6, 2009] Bash Process Substitution Linux Journal by Mitch Frazier

May 22, 2008

In addition to the fairly common forms of input/output redirection the shell recognizes something called process substitution. Although not documented as a form of input/output redirection, its syntax and its effects are similar.

The syntax for process substitution is:

  <(list)
or
  >(list)

where each list is a command or a pipeline of commands. The effect of process substitution is to make each list act like a file. This is done by giving the list a name in the file system and then substituting that name in the command line. The list is given a name either by connecting the list to a named pipe or by using a file in /dev/fd (if supported by the O/S). By doing this, the command simply sees a file name and is unaware that it is reading from or writing to a command pipeline.
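The mechanism can be seen directly in bash: echo merely prints the substituted name (typically an entry under /dev/fd), while cat reads through it like an ordinary file:

```shell
# echo shows the name the shell substituted (typically /dev/fd/63):
echo <(true)
# cat is unaware that it is reading from a pipeline:
cat <(printf 'a\nb\n')
```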

To substitute a command pipeline for an input file the syntax is:

  command ... <(list) ...
To substitute a command pipeline for an output file the syntax is:
  command ... >(list) ...

At first process substitution may seem rather pointless, for example you might imagine something simple like:

  uniq <(sort a)
to sort a file and then find the unique lines in it, but this is more commonly (and more conveniently) written as:
  sort a | uniq
The power of process substitution comes when you have multiple command pipelines that you want to connect to a single command.

For example, given the two files:

  # cat a
  # cat b
To view the lines unique to each of these two unsorted files you might do something like this:
  # sort a | uniq >tmp1
  # sort b | uniq >tmp2
  # comm -3 tmp1 tmp2
  # rm tmp1 tmp2
With process substitution we can do all this with one line:
  # comm -3 <(sort a | uniq) <(sort b | uniq)

Depending on your shell settings you may get an error message similar to:

  syntax error near unexpected token `('
when you try to use process substitution, particularly if you try to use it within a shell script. Process substitution is not a POSIX compliant feature and so it may have to be enabled via:
  set +o posix
Be careful not to try something like:
  if [[ $use_process_substitution -eq 1 ]]; then
    set +o posix
    comm -3 <(sort a | uniq) <(sort b | uniq)
  fi
The command set +o posix enables not only the execution of process substitution but also the recognition of its syntax. So, in the example above, the shell tries to parse the process substitution syntax before the "set" command is executed, and therefore still sees the process substitution syntax as illegal.

Of course, note that not all shells support process substitution; these examples will work with bash.
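Process substitution also composes with commands that take several file arguments. For example, paste can read two generated streams side by side (the output columns are tab-separated):

```shell
# paste reads two "files" that are really live pipelines:
paste <(seq 1 3) <(seq 4 6)
# output (tab-separated):
# 1    4
# 2    5
# 3    6
```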

[Apr 5, 2009] Using Named Pipes (FIFOs) with Bash

[Dec 9, 2008] Slashdot What Programming Language For Linux Development

by dkf (304284) on Saturday December 06, @07:08PM (#26016101)

C/C++ are the languages you'd want to go for. They can do *everything*, have great support, are fast etc.

Let's be honest here. C and C++ are very fast indeed if you use them well (very little can touch them; most other languages are actually implemented in terms of them) but they're also very easy to use really badly. They're genuine professional power tools: they'll do what you ask them to really quickly, even if that is just to spin on the spot chopping peoples' legs off. Care required!

If you use a higher-level language (I prefer Tcl, but you might prefer Python, Perl, Ruby, Lua, Rexx, awk, bash, etc. - the list is huge) then you probably won't go as fast. But unless you're very good at C/C++ you'll go acceptably fast at a much earlier calendar date. It's just easier for most people to be productive in higher-level languages. Well, unless you're doing something where you have to be incredibly close to the metal like a device driver, but even then it's best to keep the amount of low-level code small and to try to get to use high-level things as soon as you can.

One technique that is used quite a bit, especially by really experienced developers, is to split the program up into components that are then glued together. You can then write the components in a low-level language if necessary, but use the far superior gluing capabilities of a high-level language effectively. I know many people are very productive doing this.

[Jan 7, 2008] Project details for pmr by Heikki Orsila

pmr is a command line filter that displays the data bandwidth and total number of bytes passing through a pipe. It can also limit the rate of data going through the pipe and compute an MD5 checksum of the stream for verifying data integrity on unreliable networks.

It has the following features:

Measure data rate on the command line. pmr reads data from standard input and copies it to standard output.

Limit data rate to a specified speed (e.g. 100 KiB/s useful for slow internet connections)
Example: copy files to another host with at most 100 KiB/s speed

tar cv *files* |pmr -l 100KiB |nc -q0 host port

Compute an md5sum of the stream (useful for verifying integrity of network transfers)
Example: copy files to another host and verify checksums on both sides

Sender: tar cv *files* | pmr -m | nc -q0 host port

Receiver: nc -l -p port | pmr -m | tar xv

Calculate time estimate of the copied data when the stream size is known
Example: copy files to another host and calculate an estimated time of completion

tar cv *files* |pmr -s 1GiB |nc -q0 host port

Changes: The man page was missing in release 1.00, and now it is back.

[June 3, 2002] PipeMore 1.0 by Terry Gliedt

About: PipeMore is a utility to be used as the last of a series of piped commands (like 'more'). It displays STDIN data in a scrolled window where it can be searched or saved, and thereby avoids filling your xterm with temporary output.


[Mar 10, 2002] Unix pipes

Examples of scripts that use pipes. Most examples are pretty trivial.

[Mar 04, 2002] TR 93-045: Sather Iters: Object-Oriented Iteration Abstraction

Sather iters are a powerful new way to encapsulate iteration. We argue that such iteration abstractions belong in a class' interface on an equal footing with its routines. Sather iters were derived from CLU iterators but are much more flexible and better suited for object-oriented programming. We motivate and describe the construct along with several simple examples. We compare it with iteration based on CLU iterators, cursors, riders, streams, series, generators, coroutines, blocks, closures, and lambda expressions. Finally, we describe how to implement them in terms of coroutines and then show how to transform this implementation into efficient code.

[May 15 2001] coroutines for Ruby

[Aug 7, 2000] "Tom Scola"

A proposal for pipes in Perl 6. First I would like to state that this is an excellent and timely proposal. Of course, with Perl, the syntactic sugar is one of the most difficult parts. Here it might be that we need to adopt a trigram symbol like <|> for pipes :-). But the main thing in the proposal was done right. I agree that "Inside a coroutine, the meanings of "<>" and the default file descriptor for print, printf, etc. are overloaded." That's a fundamentally right approach. But one of the most important things is to preserve the symmetry between I/O and pipes for as long as possible. That means that you should be able to open a coroutine as a file:

open (SYSCO, >coroutine);

print SYSCO $record1;

co coroutine {
... ...

Syntactically, a coroutine name is a bareword, so it should be OK in an open statement.

The second thing is the ability to feed a coroutine in a simplified manner. One of
the most important special cases is feeding it from a loop:

for .... {

} <|> stage1 <|> stage2 # here the value of $_ should be piped on each iteration

The third important special case is feeding lists into a pipe. That can be achieved by a special built-in function pipefeed:

pipefeed(@array) <|> co1 <|> co2;


pipefeed ('mn','ts','wn',...) <|> co1 <|> co2;

The possibility of splitting a pipe into two subpipes is also very important (see
VM/CMS pipes). Streams A and B should be defined in co1. For example:

co1 <|>(:::A,:::B)
A::: co2 <|>
B::: co3 <|> ...

As for selling it, IMHO the key question is probably competition. The introduction of
pipes can help to differentiate the language from PHP, at least temporarily :-).

It also might simplify the language in several respects. For example, coroutines can be a natural base for exception handling, as in PL/1. Currently Perl is weak in this respect.

Actually, one flavor of Python already has these capabilities.

The lost art of named pipes by Tony Mancill

04.20.2004

A "named pipe" -- also known as a FIFO (First In, First Out) or just fifo -- is an inter-process communication mechanism that makes use of the filesystem to allow two processes to communicate with each other. In particular, it allows one of these to open one end of the pipe as a reader, and the other to open it as a writer. Let's take a look at the FIFO and how you can use it.

First, here's a real-life example of an ordinary pipe at work. In this instance, you run a shell command like "ls -al | grep myfile". In that example, the "ls" program is writing to the pipe and "grep" is reading from it. Well, a named pipe is exactly that, except that the processes don't have to be running under the same shell, nor are they restricted to writing to STDOUT and reading from STDIN. Instead, they reference the named pipe via the filesystem.

In the filesystem, it looks like a file of length 0 with a "p" designation for the file type. Example:

tony@hesse:/tmp$ ls -l pdffifo

prw-r--r--    1 tony     tony            0 2004-01-11 17:32 pdffifo

Note that the file never grows; it's not actually a file but simply an abstraction for one type of IPC (Inter-Process Communication) provided by the kernel. As long as the named pipe is accessible, in terms of permissions, to the processes that would like to make use of it, they can read and write from it without ever using physical disk space or paying any I/O subsystem overhead.
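To make the analogy concrete, here is a minimal sh sketch of the same idea; the temporary directory and fifo name are arbitrary stand-ins, not anything the article prescribes:

```shell
# The pipeline "ls -al | grep myfile" rebuilt around a named pipe.
# The directory and fifo name here are arbitrary stand-ins.
dir=$(mktemp -d)
mkfifo "$dir/fifo"
ls -l "$dir" > "$dir/fifo" &   # writer: blocks on open until a reader appears
grep '^p' < "$dir/fifo"        # reader: prints the fifo's own "p"-type entry
wait
rm -r "$dir"
```

The writer and reader could just as well be two unrelated processes in different terminals; the fifo in the filesystem is the only thing they share.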

At first glance, the utility of a named pipe is perhaps not immediately obvious, so I've come armed with more examples.

Here's a sample dilemma about dealing with the receipts generated by Web transactions. My browser of choice, Mozilla Firebird, gives me the option of either printing a page to a printer, or writing it as a Postscript file to a directory. I don't want a hardcopy, as it wastes paper, and I'll just have to file it somewhere. (If I wanted to kill a tree, I wouldn't be doing my transactions on the Web!) Also, I don't like the PostScript files, because they're large and because operating systems that don't have a copy of Ghostscript installed can't view them very easily.

Instead, I want to store the page as a PDF file. This is easily accomplished with the command-line utility "ps2pdf", but I'm too lazy to write the file out as PostScript, open a shell, convert the file and then delete the PostScript file. That's no problem because the browser knows how to open a file and write to it. And, ps2pdf knows how to read from STDIN to produce PDF output.

So, in its simplest incarnation:

mkfifo /tmp/pdffifo

ps2pdf - ~/receipts/webprint.pdf </tmp/pdffifo

When I tell my browser to print the PS output to /tmp/pdffifo, the result is PDF output in ~/receipts/webprint.pdf. This is fine, but it's a "one-shot," meaning that you have to set it up each time you want to print. That's because ps2pdf exits after processing one file. For a slightly more general solution, see listing 1 at the end of this tip.

Admittedly, there are other ways to solve that Web print problem. For example, I could let the browser write the PS file and then have a daemon sweep through the directory once a day and convert everything. Unfortunately, then I'd have to wait for my PDF files, and you wouldn't have seen an example of named pipes in action.

A very pragmatic use for these rascals is made in mp3burn, a utility written in Perl that is used to burn MP3 tracks onto audio CDs. It takes advantage of named pipes by using them as the conduit between the MP3 decoder and cdrecord, which expects the WAV audio files it writes to the CD. Those WAV files, which can be 700MB or so, never have to be written anywhere in the filesystem. Your system has to be fast enough to decode the MP3 to WAV and run your CD burner at the same time, but you're not paying the overhead of writing to and reading from the hard drive, and you don't have to worry about having almost a GB of free space available to do the burn.

Finally, there is an entire class of applications for system administrators who need to write compressed logfiles in real-time from programs that don't have compression support built-in. Even when you do have the source, do you want to spend all day re-inventing the wheel?
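As a sketch of that last use, real-time compression through a fifo can look like the following; the paths are hypothetical, and a printf stands in for a daemon that only knows how to write a plain "log file":

```shell
# Real-time log compression through a fifo. The directory, fifo name,
# and the printf "daemon" stand-in are all hypothetical.
dir=$(mktemp -d)
mkfifo "$dir/log.fifo"
gzip -c < "$dir/log.fifo" > "$dir/app.log.gz" &   # compressor reads the fifo
printf 'started\nrequest ok\n' > "$dir/log.fifo"  # stand-in for the daemon's log writes
wait                                              # let gzip see EOF and finish
gunzip -c "$dir/app.log.gz"                       # the two log lines come back intact
rm -r "$dir"
```

A real daemon would simply be pointed at the fifo as its log file; it never knows the bytes are being compressed on the way to disk.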

Listing 1

while (1) {
    open(FIFO, "</tmp/pdffifo") || die("unable to open /tmp/pdffifo");
    open(PDF, "|ps2pdf - /tmp/outfile.$$") || die("unable to start ps2pdf");
    while (<FIFO>) {
        print PDF $_;
    }
    close(FIFO);
    close(PDF);    # waits for ps2pdf to finish writing its output
    rename("/tmp/outfile.$$", "/tmp/webprint_" . localtime() . ".pdf");
}
Tony Mancill is the author of "Linux Routers: A Primer for Network Administrators" from Prentice Hall PTR. He can be reached at

Stuck in the Shell: The Limitations of Unix Pipes by David Glasser

Taking away a Unix guru's "|" key is as crippling as taking away a Windows user's mouse. At the Unix shell, piping is the fundamental form of program combination: pipes connect the standard output and standard input of many small tools together in a virtual "pipeline" to solve problems much more sophisticated than any of the individual programs can deal with. In theory, stringing together command-line programs is only one use of the underlying Unix pipe system call, which simply creates one file descriptor to write data to and another to read it back; these descriptors can be shared with subprocesses spawned with fork. One might think that this very generic system call, which was essentially the only form of inter-process communication in early Unix, could be used in many ways, of which the original "connect processes linearly" is just one example. Unfortunately, pipes have several limitations, such as unidirectionality and a common-ancestor requirement, which prevent pipes from being more generally useful. In practice, the limitations of a "pipe" system call designed for command-line pipelines restrict its use as a general-purpose tool.

Unix pipes are inherently unidirectional channels from one process to another process, and cannot be easily turned into bidirectional or multicast channels. This restriction is exactly what is needed for a shell pipeline, but it makes pipes useless for more complex inter-process communication. The pipe system call creates a single "read end" and a single "write end". Bidirectional communication can be simulated by creating a pair of pipes, but inconsistent buffering between the pair of pipes can often lead to deadlock, especially if the programmer only has control of the program on one end of the pipe. Programmers can attempt to use pipes as a multicast channel by sharing one read end between many child processes, but because all of the processes share a single descriptor, an extra buffering layer is needed in order for the children to all independently read the message. A manually maintained collection of many pipes is required for pipe-based multicast, and that takes much more programming effort.

Pipes can only be shared between processes with a common ancestor which anticipated the need for the pipe. This is no problem for a shell, which sees a list of programs and can set up all of their pipes at once. But this restriction prevents many useful forms of inter-process communication from being layered on top of pipes. Essentially, pipes are a form of combination, but not of abstraction - there is no way for a process to name a pipe that it (or an ancestor) did not directly create via pipe. Pipes cannot be used for clients to connect to long-running services. Processes cannot even open additional pipes to other processes that they already have a pipe to.

These limitations are not merely theoretical - they can be seen in practice by the fact that no major form of inter-process communication later developed in Unix is layered on top of pipe. After all, the usual way to respond to the concern that a feature of a system is too simple is to add a higher-level layer on top; for example, the fact that Unix pipes send raw, uninterpreted binary data and not high-level data structures can be fixed by wrapping pipes with functions which marshal your structures before putting them through the pipe. But the restriction of pipes to premeditated unidirectional communication between two processes cannot be fixed in this way. Several forms of inter-process communication, such as sockets, named pipes, and shared memory, have been created for Unix to overcome the drawbacks of pipes. None of them have been implemented as layers over pipes; all of them have required the creation of new primitive operations. In fact, the reverse is true - pipes could theoretically be implemented as a layer around sockets, which have grown up to be the backbone of the internet. But poor old pipes are still limited to solving the same problems in 2006 that they were in 1978.

NETPIPES 1 October 28, 1997

netpipes – a package to manipulate BSD TCP/IP stream sockets

version 4.2


faucet port (--in|--out|--err|--fd n)+ [--once] [--verbose] [--quiet] [--unix] [--foreignhost addr] [--foreignport port] [--localhost addr] [--serial] [--daemon] [--shutdown (r|w) ] [--pidfile filename] [--noreuseaddr] [--backlog n] [-[i][o][e][#3[,4[,5...]]][v][1][q][u][d][s]] [-p foreign-port] [-h foreign-host] [-H local-host] command args

hose hostname port (--in|--out|--err|--fd n|--slave) [--verbose] [--unix] [--localport port] [--localhost addr] [--retry n] [--delay n] [--shutdown [r|w][a] ] [--noreuseaddr] [-[i][o][e][#3[,4[,5...]]][s][v][u]] [-p local-port] [-h local-host] command args

encapsulate --fd n [ --verbose ] [ --subproc [ --infd n[=sid] ] [ --outfd n[=sid] ] [ --duplex n[=sid] ] [ --Duplex n[=sid] ] [ --DUPLEX n[=sid] ] [ --prefer-local ] [ --prefer-remote ] [ --local-only ] [ --remote-only ] ] [ --client ] [ --server ] -[#n][v][s[in][on][dn][ion][oin][l][r][L][R]] command args ...

ssl-auth --fd n ( --server | --client ) [ --cert file ] [ --key file ] [ --verbose ] [ --verify n ] [ --CApath path/ ] [ --CAfile file ] [ --cipher cipher-list ] [ --criteria criteria-expr ] [ --subproc [ --infd n ] [ --outfd n ] ] [ -[#n][v][s[in][on]] ]

sockdown [fd [how] ]

getpeername [ -verbose ] [ -sock ] [ fd ]

getsockname [ -verbose ] [ -peer ] [ fd ]

timelimit [ -v ] [ -nokill ] time command args

Korn Shell Script Course Notes

Writing agents in sh: conversing through a pipe by Oleg Kiselyov

exec_with_piped is a tool that turns any UNIX-interactive application into a server, which runs as a single background process accepting sequences of commands from a number of clients (applications or scripts). One example of a UNIX-interactive application is telnet: this makes it possible to script remote daemons.

Executing a shell command feeding from a named FIFO pipe is trivial, except for one pitfall, as an article "Scripting daemons through pipes; e.g.: newsreader in sh? (yes!)" explains. The article also shows off a few sh-agents talking (and talking back) to daemons and other UNIX-interactive programs.
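The flavor of the technique can be sketched in a few lines of sh. This is a toy with hypothetical paths, not Kiselyov's actual exec_with_piped: a background "server" loop reads commands from a fifo that any client can write to. Note that the loop reopens the fifo for each command, since a reader sees EOF whenever a writing client closes its end (one well-known fifo pitfall):

```shell
# A long-running "server" loop fed through a fifo; paths are hypothetical.
dir=$(mktemp -d)
mkfifo "$dir/cmd"
while read -r line < "$dir/cmd"; do   # reopens the fifo for every command
    [ "$line" = quit ] && break
    eval "$line"
done &
# Any process that can write to the fifo is a client:
echo 'echo hello from the server' > "$dir/cmd"
echo 'quit' > "$dir/cmd"
wait
rm -r "$dir"
```

Each client's open blocks until the server is back at its read, so commands are serialized through the fifo without any extra locking.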


The current version is 1.4, Nov 14, 1997.


pipe_scripting.shar [10K], which contains:

"Scripting daemons through pipes; e.g.: newsreader in sh? (yes!)" [plain text file]
a USENET article explaining the technique, posted on comp.unix.programmer, comp.unix.admin, comp.unix.internals, newsgroups on Jan 24, 1996.

"The most primitive and nearly universal database interface",
exec_with_piped as a database "bridge" that lets applications or scripts access an SQL server without ODBC drivers, Embedded SQL, etc.

[Feb. 24, 2001] HPUX-DEVTOOLS Named Pipes Vs Sockets

I found the answer to my question in one paper:
Implementation and measurements of efficient communication facilities for distributed database systems
Bhargava, B.; Mafla, E.; Riedl, J.; Sauder, B. Data Engineering, 1989. Proceedings. Fifth International Conference on ,1989 Page(s): 200 -207
Here are the sample times reported in this paper for different message sizes, in msec:

                  10 bytes    1000 bytes
  Sockets            4.3         9.6
  Named pipes        2.3         3.9
  Message queues     2.0         2.9

So message queues are the best choice for implementing inter-process communication within a single system.


[Oct. 07, 2000] A special module, IPC::Run, by Barrie Slaymaker

For a user who's spun up on bash/ksh, it provides useful piping constructs, subprocesses, and either expect-like or event-loop-oriented I/O capabilities.

Patch-free User-level Link-time Intercepting of System Calls and Interposing on Library Functions


2. Statement of the Problem
   1. Linux 2.x and GNU ld
   2. HP-UX 10.x and a native ld
   3. Solaris 2.6 and a native ld
   4. FreeBSD 3.2 and a GNU ld
4. Application: Extended File Names and Virtual File Systems

SunWorld: Introduction to pipes, filters, and redirection, Part 1 by Mo Budlong

Other articles by Mo Budlong are still available at

"Redirection allows a user to redirect output that would normally go to the screen and instead send it to a file or another process. Input that normally comes from the keyboard can be redirected to come from a file or another process."

Pyxie

PYX is based on a concept from the SGML world known as ESIS. ESIS was popularized by James Clark's SGML parsers. (Clark's first parser was sgmls, a C-based parser built on top of the arcsgml parser developed by Dr. Charles Goldfarb, the inventor of SGML. Then came the hugely popular nsgmls, a completely new SGML parsing application implemented in C++.)

The PYX notation facilitates a useful XML processing paradigm that presents an alternative to SAX or DOM based API programming of XML documents. PYX is particularly useful for pipeline processing, in which the output of one application becomes the input to another application. We will see an example of this later on.
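Since PYX is line-oriented, ordinary Unix filters can process it directly. Here is a tiny illustration with a made-up PYX fragment inlined via printf standing in for a real PYX-emitting parser (the element and attribute names are invented for the example):

```shell
# Select only the attribute lines ("A" prefix) from PYX-notation input.
# The PYX fragment itself is hypothetical and inlined for illustration.
printf '%s\n' '(book' 'Aid b1' '-Unix Pipes' ')book' | grep '^A'
# prints: Aid b1
```

The same one-line-per-event property lets grep, sed, awk, and sort serve as XML processing stages without any parser bindings.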

Linux Today Overflow 0.1 Released

similar to National Instruments' "G" language used in their LabView product?

We are proud to announce the first release of the Overflow project, version 0.1. Overflow is a free (GPL) "data flow oriented" development environment which allows users to build programs by visually connecting simple building blocks. Though it is primarily designed as a research tool, it can also be used for real-time data processing. This first release includes 5 toolboxes: signal processing, image processing, speech recognition, vector quantization and neural networks.

The visual interface is written for GNOME, but there is also a command-line tool that doesn't use GNOME. Because of the modular design, we hope to have a KDE version in the future.

Recommended Links

Softpanorama hot topic of the month

Softpanorama Recommended

Top articles


Shell level Introductory materials

Unix Pipes -- small introduction (for dummies level)


Input-Output Redirection and Pipes from A Quick Guide for UNIX and the Department Computing Facilities

Advanced UNIX

pipe - PC Webopaedia Definition and Links

Computer Guide for MTS

Advanced Piping

Network Access with GAWK

UNIX File System by Michael Lemmon University of Notre Dame

Unix Programming Frequently Asked Questions - Table of Contents

2. General File handling (including pipes and sockets)

Linux Interprocess Communications

NMRPipe.html -- NMRPipe: a multidimensional spectral processing system based on UNIX pipes

Advanced languages

Simula 67 was the first language to implement coroutines as a language construct. Algol 68 and Modula-2 followed suit. Actually, Modula-2 is a really impressive and underappreciated language for system programming. Here is an explanation of this concept from OOSC 2, 28.9 EXAMPLES:

Coroutines emulate concurrency on a sequential computer. They provide a form of functional program unit ("functional" as opposed to "object-oriented") that, although similar to the traditional notion of routine, provides a more symmetric form of communication. With a routine call, there is a master and a slave: the caller starts a routine, waits for its termination, and picks up where it left; the routine, however, always starts from the beginning. The caller calls; the routine returns. With coroutines, the relationship is between peers: coroutine a gets stuck in its work and calls coroutine b for help; b restarts where it last left, and continues until it is its turn to get stuck or it has proceeded as far as needed for the moment; then a picks up its computation. Instead of separate call and return mechanisms, there is a single operation, resume c, meaning: restart coroutine c where it was last interrupted; I will wait until someone e

This is all strictly sequential and meant to be executed on a single process (task) of a single computer. But the ideas are clearly drawn from concurrent computation; in fact, an operating system that provides such schemes as time-sharing, multitasking (as in Unix) and multithreading, mentioned at the beginning of this chapter as providing the appearance of concurrency on a single computer, will internally implement them through a coroutine-like mechanism.

Coroutines may be viewed as a boundary case of concurrency: the poor man's substitute to concurrent computation when only one thread of control is available. It is always a good idea to check that a general-purpose mechanism degrades gracefully to boundary cases; so let us see how we can represent coroutines. The following two classes will achieve this goal.



Coroutines in Modula-2

Famous Dotzel paper:

Coroutines in BETA

Ice 9 - Coroutines Using Runqs

Coroutines and stack overflow testing

The MT Icon Interpreter

The Aesop System A Tutorial

Java Pipes -- not that impressive

TaskMaster -- see description and pointers to Fabrik

Continuations And Stackless Python

3.1 Coroutines in Display PostScript

An Introduction to Scheme and its Implementation - call-with-current-continuation

CPS 206 Advanced Programming Languages Fall, 1999 Text: Finkel: Advanced Programming Language Design


(#3 2 lectures, skip 4)

1. Exception Handling

2. Coroutines

Coroutines in Simula

Coroutines in CLU

Embedding CLU Iterators in C

Coroutines in Icon

3. Continuations: Io

4. Power Loops

5. Final Comments

C Coroutines

CORO(2)                    C Coroutines                   CORO(2)

       co_create, co_call, co_resume, co_delete, co_exit_to,
       co_exit - C coroutine management

       #include <coro.h>

       extern struct coroutine *co_current;
       extern struct coroutine co_main[];

       struct coroutine *co_create(void *func, void *stack, int stacksize);
       void co_delete(struct coroutine *co);
       void *co_call(struct coroutine *co, void *data);
       void *co_resume(void *data);
       void *co_exit_to(struct coroutine *co, void *data);
       void *co_exit(void *data);
       The coro library implements the low level functionality
       for coroutines.  For a definition of the term coroutine
       see The Art of Computer Programming by Donald E. Knuth.
       In short, you may think of coroutines as a very simple
       cooperative multitasking environment where the switch from
       one task to another is done explicitly by a function call.
       And, coroutines are fast.  Switching from one coroutine to
       another takes only a couple of assembler instructions more
       than a normal function call.

       This document defines an API for the low level handling of
       coroutines i.e. creating and deleting coroutines and
       switching between them.  Higher level functionality
       (scheduler, etc.) is not covered here.

Pipe Tutorial Intro by Faith Fishman

Advanced UNIX Programming -- lecture notes by Michael D. Lemmon

Interprocess Communications in UNIX -- book

Beej's Guide to Unix IPC

Named Pipes


Unix Communication Facilities, by Gerhard Müller (muellerg@informatik.tu-muenchen.de). Supervisor: Dr. N.A. Speirs. 2nd April 1996

CTC Tutorial on Pipes

6.2 Half-duplex UNIX Pipes


Random Findings

CMS-TSO Pipelines Runtime Library Distribution

This Web page serves as a distribution point for files pertaining to CMS/TSO Pipelines.

The files marked as "packed" should be downloaded in binary mode, reblocked to 1024-byte, fixed-length records (e.g., using an "fblock 1024" stage), and then unpacked using an "unpack" stage. The BOOK files should be downloaded in binary mode and reblocked using an "fblock 4096" stage.

The files in VMARC format should be downloaded in binary mode, reblocked using an "fblock 80" stage, and then unpacked using the VMARC command.

The files in LISTING format have ASA carriage control ("FORTRAN carriage control"). On CMS they should be printed with the "CC" option; on most unix systems they can be printed with "lpr -f".

The GNU C Library - Signal Handling

An illustrated explanation of pipes

REXX and CMS pipes CMS-TSO Pipelines Runtime Library Distribution -- interesting product from IBM, similar to Unix, but not exactly ;-)

IBM CMS Pipelines on VM

CMS Pipelines is a programmer productivity tool for simple creation of powerful, reusable REXX and Assembler programs and Common Gateway Interface (CGI) scripts for Web servers. [More...]


FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in our efforts to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a 'fair use' of any such copyrighted material as provided for in section 107 of the US Copyright Law. In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit exclusively for research and educational purposes. If you wish to use copyrighted material from this site for purposes of your own that go beyond 'fair use', you must obtain permission from the copyright owner.


Copyright © 1996-2016 by Dr. Nikolai Bezroukov. This site was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.

The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable JavaScript for this site. This site is perfectly usable without JavaScript.

Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.


This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links, as it develops like a living tree...

You can use PayPal to make a contribution, supporting development of this site and speeding up access. In case the site is down, you can use the mirror at


The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author present and former employers, SDNP or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.

Last modified: September 12, 2017