Softpanorama

May the source be with you, but remember the KISS principle ;-)
(slightly skeptical) Educational society promoting "Back to basics" movement against IT overcomplexity and bastardization of classic Unix

pv command


Monitor pipe progress using Pipe Viewer

pv  - Pipe Viewer - is a terminal-based tool for monitoring the progress of data through a pipeline. It can be inserted into any normal pipeline between two processes to give a visual indication of how quickly data is passing through, how long it has taken, how near to completion it is, and an estimate of how long it will be until completion.

Additional support is available for multiple instances working in tandem, to give a visual indication of relative throughput in a complex pipeline:

[Pipe Viewer in action with a complex pipeline]

Source for all systems, and RPMs for RPM-based i386 systems, are available in the download area.

Comments, bug reports, and patches can be sent using the Contact Form.

Syntax:

pv [OPTION] [FILE]...
pv [-h|-V]

pv allows a user to see the progress of data through a pipeline, by giving information such as time elapsed, percentage completed (with progress bar), current throughput rate, total data transferred, and ETA.

To use it, insert it in a pipeline between two processes, with the appropriate options. Its standard input will be passed through to its standard output and progress will be shown on standard error.

pv will copy each supplied FILE in turn to standard output (- means standard input), or if no FILEs are specified just standard input is copied. This is the same behaviour as cat(1).
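A minimal sketch of this cat(1)-like behaviour (the temporary file names are arbitrary, and the snippet falls back to cat if pv is not installed, since output on stdout is identical either way):

```shell
# Create two small input files (arbitrary names).
printf 'hello\n' > /tmp/pv_demo_a
printf 'world\n' > /tmp/pv_demo_b
# Use pv if available, otherwise cat; on stdout the two behave the same,
# pv merely adds a progress display on stderr (discarded here).
PV=$(command -v pv || echo cat)
result=$("$PV" /tmp/pv_demo_a /tmp/pv_demo_b 2>/dev/null)
echo "$result"
```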

A simple example to watch how quickly a file is transferred using nc(1):

pv file | nc -w 1 somewhere.com 3000

A similar example, transferring a file from another process and passing the expected size to pv:

cat file | pv -s 12345 | nc -w 1 somewhere.com 3000

A more complicated example using numeric output to feed into the dialog(1) program for a full-screen progress display:

(tar cf - . \
| pv -n -s $(du -sb . | awk '{print $1}') \
| gzip -9 > out.tgz) 2>&1 \
| dialog --gauge 'Progress' 7 70

Frequent use of this third form is not recommended as it may cause the programmer to overheat.

Options

pv takes many options, which are divided into display switches, output modifiers, and general options.

DISPLAY SWITCHES

If no display switches are specified, pv behaves as if -p, -t, -e, -r, and -b had been given (i.e. everything except average rate is switched on). Otherwise, only those display types that are explicitly switched on will be shown.

-p, --progress
Turn the progress bar on. If standard input is not a file and no size was given (with the -s modifier), the progress bar cannot indicate how close to completion the transfer is, so it will just move left and right to indicate that data is moving.
-t, --timer
Turn the timer on. This will display the total elapsed time that pv has been running for.
-e, --eta
Turn the ETA timer on. This will attempt to guess, based on previous transfer rates and the total data size, how long it will be before completion. This option will have no effect if the total data size cannot be determined.
-r, --rate
Turn the rate counter on. This will display the current rate of data transfer.
-a, --average-rate
Turn the average rate counter on. This will display the average rate of data transfer so far.
-b, --bytes
Turn the total byte counter on. This will display the total amount of data transferred so far.
-n, --numeric
Numeric output. Instead of giving a visual indication of progress, pv will give an integer percentage, one per line, on standard error, suitable for piping (via convoluted redirection) into dialog(1). Note that -f is not required if -n is being used.
-q, --quiet
No output. Useful if the -L option is being used on its own to just limit the transfer rate of a pipe.

OUTPUT MODIFIERS

-W, --wait
Wait until the first byte has been transferred before showing any progress information or calculating any ETAs. Useful if the program you are piping to or from requires extra information before it starts, e.g. piping data into gpg(1) or mcrypt(1), which require a passphrase before data can be processed.
-s SIZE, --size SIZE
Assume the total amount of data to be transferred is SIZE bytes when calculating percentages and ETAs. The same suffixes ("k", "m", etc.) can be used as with -L.
-l, --line-mode
Instead of counting bytes, count lines (newline characters). The progress bar will only move when a new line is found, and the value passed to the -s option will be interpreted as a line count.
-i SEC, --interval SEC
Wait SEC seconds between updates. The default is to update every second. Note that this can be a decimal such as 0.1.
-w WIDTH, --width WIDTH
Assume the terminal is WIDTH characters wide, instead of trying to work it out (or assuming 80 if it cannot be guessed).
-H HEIGHT, --height HEIGHT
Assume the terminal is HEIGHT rows high, instead of trying to work it out (or assuming 25 if it cannot be guessed).
-N NAME, --name NAME
Prefix the output information with NAME. Useful in conjunction with -c if you have a complicated pipeline and you want to be able to tell different parts of it apart.
-f, --force
Force output. Normally, pv will not output any visual display if standard error is not a terminal. This option forces it to do so.
-c, --cursor
Use cursor positioning escape sequences instead of just using carriage returns. This is useful in conjunction with -N (name) if you are using multiple pv invocations in a single, long, pipeline.

DATA TRANSFER MODIFIERS

-L RATE, --rate-limit RATE
Limit the transfer to a maximum of RATE bytes per second. A suffix of "k", "m", "g", or "t" can be added to denote kilobytes (*1024), megabytes, and so on.
NOTE: This must be more than 10 bytes (assuming an -i of 1 second), or pv will block.
-B BYTES, --buffer-size BYTES
Use a transfer buffer size of BYTES bytes. A suffix of "k", "m", "g", or "t" can be added to denote kilobytes (*1024), megabytes, and so on. The default buffer size is the block size of the input file's filesystem multiplied by 32 (512kb max), or 400kb if the block size cannot be determined.
-R PID, --remote PID
If PID is an instance of pv that is already running, -R PID will cause that instance to act as though it had been given this instance's command line instead. For example, if pv -L 123k is running with process ID 9876, then running pv -R 9876 -L 321k will cause it to start using a rate limit of 321k instead of 123k. Note that some options cannot be changed while running, such as -c, -l, and -f.

GENERAL OPTIONS

-h, --help
Print a usage message on standard output and exit successfully.
-V, --version
Print version information on standard output and exit successfully.

EXIT STATUS

An exit status of 1 indicates a problem with the -R option.

Any other exit status is a bitmask of the following:

2
One or more files could not be accessed, stat(2)ed, or opened.
4
An input file was the same as the output file.
8
Internal error with closing a file or moving to the next file.
16
There was an error while transferring data from one or more input files.
32
A signal was caught that caused an early exit.
64
Memory allocation failed.

A zero exit status indicates no problems.
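The bitmask can be decoded with ordinary shell arithmetic. A sketch (the function name and the wording of the messages are ours, not part of pv):

```shell
# Decode pv's bitmask exit status into a list of causes, one per line.
decode_pv_status() {
  s=$1
  if [ "$s" -eq 0 ]; then echo "no problems"; return 0; fi
  if [ "$s" -eq 1 ]; then echo "problem with the -R option"; return 0; fi
  [ $((s & 2))  -ne 0 ] && echo "one or more files could not be accessed"
  [ $((s & 4))  -ne 0 ] && echo "an input file was the same as the output file"
  [ $((s & 8))  -ne 0 ] && echo "internal error closing or advancing files"
  [ $((s & 16)) -ne 0 ] && echo "error while transferring data"
  [ $((s & 32)) -ne 0 ] && echo "a signal caused an early exit"
  [ $((s & 64)) -ne 0 ] && echo "memory allocation failed"
  return 0
}
# 18 = 16 + 2: a transfer error on a file that could not be accessed.
decode_pv_status 18
```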

Examples

9 Really Useful Tricks With pv - Pipe Viewer

Tricks | blog.urfix.com

pv allows a user to see the progress of data through a pipeline, by giving information such as time elapsed, percentage completed (with progress bar), current throughput rate, total data transferred, and ETA.

Here's a nice list of cool ways you can use pv

1) Simulate typing
echo "You can simulate on-screen typing just like in the movies" | pv -qL 10

This will output the characters at 10 per second.

2) Monitor progress of a command
pv access.log | gzip > access.log.gz

Pipe viewer is a terminal-based tool for monitoring the progress of data through a pipeline. It can be inserted into any normal pipeline between two processes to give a visual indication of how quickly data is passing through, how long it has taken, how near to completion it is, and an estimate of how long it will be until completion.

3) live ssh network throughput test
yes | pv | ssh $host "cat > /dev/null"

connects to host via ssh and displays the live transfer speed, directing all transferred data to /dev/null

4) copy working directory and compress it on-the-fly while showing progress
tar -cf - . | pv -s $(du -sb . | awk '{print $1}') | gzip > out.tgz

What happens here is we tell tar to create ("-c") an archive of all files in the current dir ("." recursively) and output the data to stdout ("-f -"). Next we pass the total size to pv with "-s". The "du -sb . | awk '{print $1}'" command returns the number of bytes in the current dir, and that value is fed to pv as the "-s" parameter. Finally we gzip the whole stream and write the result to the out.tgz file. This way pv knows how much data is still left to be processed and can show, for example, that it will take yet another 4 mins 49 secs to finish.
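The size-calculation half of the trick can be tried on its own. Note that du's -b (bytes) switch is a GNU extension, and the reported total includes directory overhead, so it is an upper bound rather than an exact payload size (the directory name below is arbitrary):

```shell
# Compute the byte count that would be passed to pv -s.
mkdir -p /tmp/pv_size_demo
printf '12345' > /tmp/pv_size_demo/f.txt      # 5 bytes of payload
SIZE=$(du -sb /tmp/pv_size_demo | awk '{print $1}')
echo "size in bytes: $SIZE"
```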

5) Copy a file using pv and watch its progress
pv sourcefile > destfile

pv allows a user to see the progress of data through a pipeline, by giving information such as time elapsed, percentage completed (with progress bar), current throughput rate, total data transferred, and ETA. (man pv)

6) Another live ssh network throughput test
pv /dev/zero|ssh $host 'cat > /dev/null'

connects to host via ssh and displays the live transfer speed, directing all transferred data to /dev/null

7) dd with progress bar and statistics
sudo dd if=/dev/sdc bs=4096 | pv -s 2G | sudo dd bs=4096 of=~/USB_BLACK_BACKUP.IMG

This command utilizes 'pv' to show dd's progress.

Notes on use with dd:

– dd block size (bs=…) is a widely debated command-line switch and should usually be between 1024 and 4096. You won't see much performance improvement beyond 4096, but regardless of the block size, dd will transfer every bit of data.

– pv's switch, '-s' should be as close to the size of the data source as possible.

– dd's out file, 'of=…' can be anything as the data within that file are the same regardless of the filename / extension.

8) [re]verify a disc with very friendly output
dd if=/dev/cdrom | pv -s 700m | md5sum | tee test.md5

[re]verify those burned CD's early and often – better safe than sorry -

at a bare minimum you need the good old `dd` and `md5sum` commands,

but why not throw in a super "user-friendly" progress gauge with the `pv` command -

adjust the "-s" "size" argument to your needs – 700 MB in this case,

and capture that checksum in a "test.md5" file with `tee` – just in case, for near-future reference.

*uber-bonus* ability – positively identify those unlabeled mystery discs -

for extra credit, what disc was used for this sample output?

9) time how fast the computer reads from /dev/zero
pv /dev/zero > /dev/null

my stats 217GB 0:00:38 [4,36GB/s]



NEWS CONTENTS

Old News ;-)

[Dec 15, 2010] Pipe Viewer 1.2.0

pv (Pipe Viewer) is a terminal-based tool for monitoring the progress of data through a pipeline. It can be inserted into any normal pipeline between two processes to give a visual indication of how quickly data is passing through, how long it has taken, how near to completion it is, and an estimate of how long it will be until completion.

[Oct 15, 2010] Manual Backups in Linux, DD

Eleven is Louder

So, a simple backup?

#dd if=/dev/sdX | pv | dd of=/dev/sdY
The above assumes that you have access to another hard disc of the exact same make, and that you wish to mirror the drive in use onto the other drive (pv is just the progress indicator). Not your cup of tea? OK. Now, this is the command I use for backup. It's faster than a bit-by-bit copy, though not quite as safe, and it will not quit when it encounters a disc error. It compresses your output and, instead of writing to a disc, writes to a file. The restore for this one is pretty much the same.
#dd if=/dev/sdX bs=64k conv=noerror,sync | pv | gzip -c -9 > sdX.img.gz
#gunzip -c sdX.img.gz | pv | dd of=/dev/sdX conv=sync,noerror bs=64K
Alright, so now you want to store your backup on your server? No problem, dd can handle networks too... with the help of SSH.
#dd if=/dev/sdX bs=64k conv=noerror,sync | pv | gzip -c -9 | ssh user@remote_server dd of=sdX.img.gz
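Before trusting a recipe like this with a real disc, it is cheap to confirm locally that the gzip round-trip is lossless. A sketch using an ordinary file in place of a device (the file names are arbitrary):

```shell
# Compress, decompress, and compare checksums of the two copies.
printf 'pretend this is a disk image' > /tmp/disk.src
gzip -c -9 /tmp/disk.src > /tmp/disk.img.gz
gunzip -c /tmp/disk.img.gz > /tmp/disk.restored
before=$(cksum < /tmp/disk.src)
after=$(cksum < /tmp/disk.restored)
[ "$before" = "$after" ] && echo "round-trip OK"
```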
Other gems of the Disk Destroyer:

Floppy copy: dd if=/dev/fd0 of=floppy.img bs=2x80x18b conv=notrunc
CD ISO copy: dd if=/dev/sr0 of=mycd.iso bs=2048 conv=notrunc
MBR Copy: dd if=/dev/sda of=mbr.img bs=512 count=1
MBR Wipe: dd if=/dev/zero of=/dev/sda bs=512 count=1

Disk Wipe: dd if=/dev/zero of=/dev/sda bs=64k
(you could follow with a pass from /dev/urandom and then another zero pass, if you are feeling paranoid)

... ... ...

Really nerdy stuff follows:

view filesystems

dd if=/proc/filesystems | hexdump -C | less

all loaded modules

dd if=/proc/kallsyms | hexdump -C | less

interrupt table

dd if=/proc/interrupts | hexdump -C | less

system uptime (in seconds)

dd if=/proc/uptime | hexdump -C | less

partitions and sizes in kb

dd if=/proc/partitions | hexdump -C | less

mem stats

dd if=/proc/meminfo | hexdump -C | less

[Jul 28, 2010] Bash Co-Processes Linux Journal

One of the new features in bash 4.0 is the coproc statement. The coproc statement allows you to create a co-process that is connected to the invoking shell via two pipes: one to send input to the co-process and one to get output from the co-process.

The first use I found for this was while trying to do logging using exec redirections. The goal was to allow you to optionally start writing all of a script's output to a log file once the script had already begun (e.g. due to a --log command line option).

The main problem with logging output after the script has already started is that the script may have been invoked with the output already redirected (to a file or to a pipe). If we change where the output goes when the output has already been redirected then we will not be executing the command as intended by the user.

The previous attempt ended up using named pipes:

#!/bin/bash

echo hello

if test -t 1; then
    # Stdout is a terminal.
    exec >log
else
    # Stdout is not a terminal.
    npipe=/tmp/$$.tmp
    trap "rm -f $npipe" EXIT
    mknod $npipe p
    tee <$npipe log &
    exec 1>&-
    exec 1>$npipe
fi

echo goodbye

From the previous article:

Here, if the script's stdout is not connected to the terminal, we create a named pipe (a pipe that exists in the file-system) using mknod and setup a trap to delete it on exit. Then we start tee in the background reading from the named pipe and writing to the log file. Remember that tee is also writing anything that it reads on its stdin to its stdout. Also remember that tee's stdout is also the same as the script's stdout (our main script, the one that invokes tee) so the output from tee's stdout is going to go wherever our stdout is currently going (i.e. to the user's redirection or pipeline that was specified on the command line). So at this point we have tee's output going where it needs to go: into the redirection/pipeline specified by the user.
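The named-pipe trick can be exercised in isolation with a few lines of shell; mkfifo(1) is the portable equivalent of the mknod ... p call used above (the file names here are arbitrary):

```shell
# tee reads from a FIFO in the background; whatever is written into the
# FIFO lands both in the log file and in tee's stdout (a file here).
np=/tmp/pv_fifo_demo.pipe
rm -f "$np" /tmp/pv_fifo_demo.log /tmp/pv_fifo_demo.out
mkfifo "$np"
tee /tmp/pv_fifo_demo.log < "$np" > /tmp/pv_fifo_demo.out &
printf 'captured\n' > "$np"   # opening the FIFO for write unblocks tee
wait                          # tee exits when the writer closes the FIFO
rm -f "$np"
```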

We can do the same thing using a co-process:

echo hello

if test -t 1; then
    # Stdout is a terminal.
    exec >log
else
    # Stdout is not a terminal.
    exec 7>&1
    coproc tee log 1>&7
    #echo Stdout of coproc: ${COPROC[0]} >&2
    #echo Stdin of coproc: ${COPROC[1]} >&2
    #ls -la /proc/$$/fd
    exec 7>&-
    exec 7>&${COPROC[1]}-
    exec 1>&7-
    eval "exec ${COPROC[0]}>&-"
    #ls -la /proc/$$/fd
fi
echo goodbye
echo error >&2

In the case that our standard output is going to the terminal then we just use exec to redirect our output to the desired log file, as before. If our output is not going to the terminal then we use coproc to run tee as a co-process and redirect our output to tee's input and redirect tee's output to where our output was originally going.

Running tee using the coproc statement is essentially the same as running tee in the background (e.g. tee log &); the main difference is that bash runs tee with both its input and output connected to pipes. Bash puts the file descriptors for those pipes into an array named COPROC (by default).

Note that these pipes are created before any redirections are done in the command.
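A stripped-down coproc round trip shows the two pipes in action (coproc requires bash 4+, so this sketch invokes bash explicitly; tr stands in for any filter):

```shell
# Feed a line to a tr coprocess and read back its transformed output.
reply=$(bash -c '
  coproc tr "a-z" "A-Z"
  in_fd=${COPROC[1]} out_fd=${COPROC[0]}   # save fds before the coproc exits
  printf "hello\n" >&"$in_fd"
  eval "exec ${in_fd}>&-"                  # close write end so tr sees EOF
  read -r line <&"$out_fd"
  printf "%s\n" "$line"
')
echo "$reply"
```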

Focus on the part where the original script's output is not connected to the terminal. The following line duplicates our standard output on file descriptor 7.

exec 7>&1

Then we start tee with its output redirected to file descriptor 7.

coproc tee log 1>&7

So tee will now write whatever it reads on its standard input to the file named log and to file descriptor 7, which is our original standard out.

Now we close file descriptor 7 (remember that tee still has the "file" that was open on 7 as its standard output) with:

exec 7>&-

Since we've closed 7 we can reuse it, so we move the pipe that's connected to tee's input to 7 with:

exec 7>&${COPROC[1]}-

Then we move our standard output to the pipe that's connected to tee's standard input (our file descriptor 7) via:

exec 1>&7-

And finally, we close the pipe connected to tee's output, since we don't have any need for it, with:

eval "exec ${COPROC[0]}>&-"

The eval is required here because otherwise bash thinks the value of ${COPROC[0]} is a command name. It is not required in the statement above (exec 7>&${COPROC[1]}-), because there bash can recognize that "7" is the start of a file descriptor action and not a command.

Also note the commented command:

#ls -la /proc/$$/fd

This is useful for seeing the files that are open by the current process.

We now have achieved the desired effect: our standard output is going into tee. Tee is "logging" it to our log file and writing it to the pipe or file that our output was originally going to.

As of yet I haven't come up with any other uses for co-processes, at least ones that aren't contrived. See the bash man page for more about co-processes.

[Jan 7, 2008] freshmeat.net Project details for pmr by Heikki Orsila

About: pmr is a command line filter that displays the data bandwidth and total number of bytes passing through a pipe. It can also limit the rate of data going through the pipe and compute an MD5 checksum of the stream for verifying data integrity on unreliable networks.

It has the following features:

Measure data rate on the command line. pmr reads data from standard input and copies it to standard output.

Limit data rate to a specified speed (e.g. 100 KiB/s useful for slow internet connections)
Example: copy files to another host with at most 100 KiB/s speed

tar cv *files* | pmr -l 100KiB | nc -q0 host port

Compute an md5sum of the stream (useful for verifying integrity of network transfers)
Example: copy files to another host and verify checksums on both sides

Sender: tar cv *files* | pmr -m | nc -q0 host port

Receiver: nc -l -p port | pmr -m | tar xv

Calculate time estimate of the copied data when the stream size is known
Example: copy files to another host and calculate an estimated time of completion

tar cv *files* | pmr -s 1GiB | nc -q0 host port

Changes: The man page was missing in release 1.00, and now it is back.

Monitor Progress With Pipe Viewer - by Joe Brockmeier

July 25, 2011 | ServerWatch.com

One of the most frustrating things when doing system administration is having no idea how long a process will take to finish or how much progress it's made. To get a better look at what's going on, try the Pipe Viewer utility.

Pipe Viewer, or just pv when you're invoking it at the command line or in scripts, is a utility for monitoring data flowing through a pipeline. It gives you an idea of how fast the data is moving through the pipeline, how long it's taken so far, and when it will be finished. It's the digital answer to the administrative question, "are we there yet?"

Pipe Viewer is not installed by default on most distros I've seen, so you'll need to look for packages on some systems. It's packaged for Debian and Ubuntu, so you can grab pv with the standard apt-get install pv dance.

We'll start with a really simple example of pv in action. Let's say you want to see how long it's going to take to compress a logfile with gzip. Run pv logfile | gzip > logfile.gz. A progress bar will demonstrate the amount of progress made and how much further it must go. If you're doing this with a small file, pv is going to add little to the process -- it will be over before you can blink.

Let me be the first to say (at least in this post ...) that pv can be "some assembly required." That is, it can really report accurately only if it knows what it's reporting -- and it relies on the user to tell it, in some cases. This can be easily overcome, however, with a little extra help from du and cut. Hat tip to the Super User Stack Exchange for this one.

If you want to see the progress of, say, creating a compressed tarball from a directory, you can grab the size and then pass it to pv. First, get the size of the directory using SIZE=`du -sk directory | cut -f 1`. This will use du to grab the size of the directory and then pass it to cut to grab the proper field, saving it as SIZE.

Next, run the job: tar cf - directory | pv -p -s ${SIZE}k | bzip2 -c > directory.tar.bz2. This will tar up the directory while passing it to pv, which will then pass the output to bzip2. Since it knows the size, pv can report the progress (-p) while it's going on. Note that the Stack Exchange example also shows using the v (verbose) option with tar. You don't want to do that, as it will interfere with the output you want to see from pv.
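A runnable version of the two-step recipe (gzip stands in for bzip2 to keep dependencies minimal, the pv stage is included only when pv is installed, and the directory and file names are arbitrary):

```shell
# Step 1: compute the directory size in kilobytes with du and cut.
mkdir -p /tmp/pv_dir_demo
printf 'some data\n' > /tmp/pv_dir_demo/file.txt
SIZE=$(du -sk /tmp/pv_dir_demo | cut -f 1)
# Step 2: tar the directory, show progress if pv is present, compress.
if command -v pv >/dev/null 2>&1; then
  tar cf - -C /tmp pv_dir_demo | pv -p -s "${SIZE}k" | gzip -c > /tmp/pv_dir_demo.tgz
else
  tar cf - -C /tmp pv_dir_demo | gzip -c > /tmp/pv_dir_demo.tgz
fi
echo "archived ${SIZE}k"
```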

Really, pv should be part of any admin's toolbox. While some utilities are advanced enough that they have their own built-in progress reporting, many (like tar) leave that as an exercise to the user. And when it comes to jobs that require using several utilities and passing data through pipes, pv is just what the admin ordered.


Recommended Links


Sites

ivarch.com Pipe Viewer





Copyright © 1996-2021 by Softpanorama Society. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.


Last modified: March, 12, 2019