Process Handling

Adapted from  Learning the Korn Shell

by Bill Rosenblatt (O'Reilly, January 1, 1993)

UNIX gives all processes numbers, called process IDs, when they are created. You will notice that, when you run a command in the background by appending & to it, the shell responds with a line that looks like this:

$ fred & 
[1]     2349

In this example, 2349 is the process ID for the fred  process. The [1]  is a job number assigned by the shell (not the operating system). What's the difference? Job numbers refer to background processes that are currently running under your shell, while process IDs refer to all processes currently running on the entire system, for all users. The term job basically refers to a command line that was invoked from your login shell.

If you start up additional background jobs while the first one is still running, the shell will number them 2, 3, etc. For example:

$ bob & 
[2]     2367
$ dave & 
[3]     2382

Clearly, 1, 2, and 3 are easier to remember than 2349, 2367, and 2382!

The shell includes job numbers in messages it prints when a background job completes, like this:

[1] +  Done                     fred &

We'll explain what the plus sign means soon. If the job exits with non-zero status, the shell will include the exit status in parentheses:

[1] +  Done(1)                  fred &

The shell prints other types of messages when certain abnormal things happen to background jobs; we'll see these later in this chapter.

Job Control

Why should you care about process IDs or job numbers? Actually, you could probably get along fine through your UNIX life without ever referring to process IDs (unless you use a windowing workstation-as we'll see soon). Job numbers are more important, however: you can use them with the shell commands for job control.

You already know the most obvious way of controlling a job: you can create one in the background with &. Once a job is running in the background, you can let it run to completion, bring it into the foreground, or send it a message called a signal.

 Foreground and Background

If you just want the PID of a process, you can use pgrep(1) if it is available: pgrep command prints the PID of the named command (or a list of PIDs if more than one instance of the command is running).
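For example (the process ID here is hypothetical):

$ pgrep fred
2349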

The built-in command fg  brings a background job into the foreground. Normally this means that the job will have control of your terminal or window and therefore will be able to accept your input. In other words, the job will begin to act as if you typed its command without the &.

If you have only one background job running, you can use fg without arguments, and the shell will bring that job into the foreground. But if you have several running in the background, the shell will pick the one that you put into the background most recently. If you want some other job put into the foreground, you can use the job's command name preceded by a percent sign (%), its job number (also preceded by %), or its process ID without a percent sign. If you don't remember which jobs are running, you can use the command jobs to list them.

A few examples should make this clearer. Let's say you created three background jobs as above. Then if you type jobs, you will see this:

[1]   Running                  fred &
[2] - Running                  bob &
[3] + Running                  dave &

jobs  has a few interesting options. jobs -l  also lists process IDs:

[1]   2349      Running                  fred &
[2] - 2367      Running                  bob &
[3] + 2382      Running                  dave &

The -p  option tells jobs  to list only process IDs:

2349
2367
2382

This could be useful with command substitution, as shown below. Finally, the -n option lists only those jobs whose status has changed since the shell last reported it, whether with a jobs command or otherwise.
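For example, here is a sketch that uses command substitution with jobs -p to suspend every background job at once (the STOP signal stops a process unconditionally):

kill -STOP $(jobs -p)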

If you type fg  without an argument, the shell will put dave  in the foreground, because it was put in the background most recently. But if you type fg %bob  (or fg %2), bob  will go in the foreground.

You can also refer to the job most recently put in the background by %+. Similarly, %- refers to the background job invoked next-most-recently (bob in this case). That explains the plus and minus signs in the listing above: the plus sign marks the most recently invoked job; the minus sign marks the next-most-recently invoked one. [3]

[3] This is analogous to ~+ and ~- as references to the current and previous directory; see the footnote in Chapter 7, Input/Output and Command-line Processing. Also: %% is a synonym for %+.

If more than one background job has the same command, then %command will disambiguate by choosing the most recently invoked job (as you'd expect). If this isn't what you want, you need to use the job number instead of the command name. However, if the commands have different arguments, you can use %?string instead of %command. %?string refers to the job whose command contains the string. For example, assume you started these background jobs:

$ bob pete & 
[1]     189
$ bob ralph & 
[2]     190
$

Then you can use %?pete  and %?ralph  to refer to each of them, although actually %?pe  and %?ra  are sufficient to disambiguate.

Table below lists all of the ways to refer to background jobs. We have found that, given how infrequently people use job control commands, job numbers or command names are sufficient, and the other ways are superfluous.

Reference    Background job
%N           Job number N
%string      Job whose command begins with string
%?string     Job whose command contains string
%+           Most recently invoked background job
%%           Same as above
%-           Second-most recently invoked background job

Suspending a Job

Just as you can put background jobs into the foreground with fg, you can also put a foreground job into the background. This involves suspending a job, so that the shell regains control of your terminal.

To suspend a job, type [CTRL-Z] while it is running. This is analogous to typing [CTRL-C] (or whatever your interrupt key is), except that you can resume the job after you have stopped it. When you type [CTRL-Z], the shell responds with a message like this:

[1] + Stopped                   command 

Then it gives you your prompt back.

To resume a suspended job so that it continues to run in the foreground, just type fg. If, for some reason, you put other jobs in the background after you typed [CTRL-Z], use fg  with a job name or number. For example:

fred is running... 
CTRL-Z 
[1] + Stopped                   fred
$ bob & 
[2]     2384
$ fg %fred 
fred resumes in the foreground... 

The ability to suspend jobs and resume them in the foreground comes in very handy when you have a conventional terminal (as opposed to a windowing workstation) and you are using a text editor like vi on a file that needs to be processed. For example, if you are editing a file for the troff text processor, you can do the following:

$ vi myfile 
edit the file...  CTRL-Z 
[1] + Stopped                   vi myfile
$ troff myfile 
troff reports an error 
$ fg 
vi comes back up in the same place in your file 

Programmers often use the same technique when debugging source code.

You will probably also find it useful to suspend a job and resume it in the background instead of the foreground. You may start a command in the foreground (i.e., normally) and find that it takes much longer than you expected-for example, a grep, sort, or database query. You need the command to finish, but you would also like control of your terminal back so that you can do other work. If you type [CTRL-Z]  followed by bg, you will move the job to the background.
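For example (a sketch; the exact messages vary slightly among shell versions):

$ sort bigfile > sortedfile
CTRL-Z
[1] + Stopped                   sort bigfile > sortedfile
$ bg
[1]     sort bigfile > sortedfile &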

NOTE: Be warned, however, that not all commands are "well-behaved" when you do this. Be especially careful with commands that run over a network on a remote machine; you may end up "confusing" the remote program.

 

Signals

We mentioned earlier that typing CTRL-Z to suspend a job is similar to typing CTRL-C to stop a job, except that you can resume the job later. They are actually similar in a deeper way: both are particular cases of the act of sending a signal to a process.

A signal is a message that one process sends to another when some abnormal event takes place or when it wants the other process to do something. Most of the time, a process sends a signal to a subprocess it created. You're undoubtedly already comfortable with the idea that one process can communicate with another through an I/O pipeline; think of a signal as another way for processes to communicate with each other. (In fact, any textbook on operating systems will tell you that both are examples of the general concept of interprocess communication, or IPC.) [6]

[6] Pipes and signals were the only IPC mechanisms in early versions of UNIX. More modern versions like System V and 4.x BSD have additional mechanisms, such as sockets, named pipes, and shared memory. Named pipes are accessible to shell programmers through the mknod(1) command, which is beyond the scope of this book.

Depending on the version of UNIX, there are two or three dozen types of signals, including a few that can be used for whatever purpose a programmer wishes. Signals have numbers (from 1 to the number of signals the system supports) and names; we'll use the latter. You can get a list of all the signals on your system, by name and number, by typing kill -l. Bear in mind, when you write shell code involving signals, that signal names are more portable to other versions of UNIX than signal numbers.

Control-key Signals

When you type CTRL-C, you tell the shell to send the INT (for "interrupt") signal to the current job; [CTRL-Z] sends TSTP (on most systems, for "terminal stop"). You can also send the current job a QUIT signal by typing CTRL-\ (control-backslash); this is sort of like a "stronger" version of [CTRL-C]. [7] You would normally use [CTRL-\] when (and only when) [CTRL-C] doesn't work.

[7] [CTRL-\] can also cause the shell to leave a file called core in your current directory. This file contains an image of the process to which you sent the signal; a programmer could use it to help debug the program that was running.

As we'll see soon, there is also a "panic" signal called KILL that you can send to a process when even [CTRL-\] doesn't work. But it isn't attached to any control key, which means that you can't use it to stop the currently running process. INT, TSTP, and QUIT are the only signals you can use with control keys.

You can customize the control keys used to send signals with options of the stty(1) command. These vary from system to system-consult your man page for the command-but the usual syntax is stty  signame char. signame is a name for the signal that, unfortunately, is often not the same as the names we use here. Table 1.7 in Chapter 1, Korn Shell Basics lists stty names for signals found on all versions of UNIX. char is the control character, which you can give in the same notation we use. For example, to set your INT key to [CTRL-X] on most systems, use:

stty intr ^X

Now that we've told you how to do this, we should add that we don't recommend it. Changing your signal keys could lead to trouble if someone else has to stop a runaway process on your machine.

Most of the other signals are used by the operating system to advise processes of error conditions, like a bad machine code instruction, bad memory address, or division by zero, or "interesting" events such as a user logging out or a timer ("alarm") going off. The remaining signals are used for esoteric error conditions that are of interest only to low-level systems programmers; newer versions of UNIX have more and more arcane signal types.

kill

You can use the built-in shell command kill  to send a signal to any process you created-not just the currently running job. kill  takes as argument the process ID, job number, or command name of the process to which you want to send the signal. By default, kill  sends the TERM ("terminate") signal, which usually has the same effect as the INT signal that you send with [CTRL-C]. But you can specify a different signal by using the signal name (or number) as an option, preceded by a dash.

kill  is so-named because of the nature of the default TERM signal, but there is another reason, which has to do with the way UNIX handles signals in general. The full details are too complex to go into here, but the following explanation should suffice.

Most signals cause a process that receives them to roll over and die; therefore if you send any one of these signals, you "kill" the process that receives it. However, programs can be set up to "trap" specific signals and take some other action. For example, a text editor would do well to save the file being edited before terminating when it receives a signal such as INT, TERM, or QUIT. Determining what to do when various signals come in is part of the fun of UNIX systems programming.

Here is an example of kill. Say you have a fred  process in the background, with process ID 480 and job number 1, that needs to be stopped. You would start with this command:

$ kill %1

If you were successful, you would see a message like this:

[1] + Terminated                fred &

If you don't see this, then the TERM signal failed to terminate the job. The next step would be to try QUIT:

$ kill -QUIT %1

If that worked, you would see these messages:

fred[1]: 480 Quit(coredump)
[1] +  Done(131)                fred &

The 131 is the exit status returned by fred. [9] But if even QUIT doesn't work, the "last-ditch" method would be to use KILL:

[9] When a shell script is sent a signal, it exits with status 128+N, where N is the number of the signal it received (128 changes to 256 in future releases). In this case, fred  is a shell script, and QUIT happens to be signal number 3.

$ kill -KILL %1

(Notice how this has the flavor of "yelling" at the runaway process.) This produces the message:

[1] + Killed                    fred &

It is impossible for a process to "trap" a KILL signal-the operating system should terminate the process immediately and unconditionally. If it doesn't, then either your process is in one of the "funny states" we'll see later in this chapter, or (far less likely) there's a bug in your version of UNIX.

Here's another example.

Task: Write a script called killalljobs, equivalent to the Linux killall command, that kills all background jobs.

The key idea is to rely on the output of jobs -p:

kill "$@" $(jobs -p)    # any options (e.g., a signal name) are passed through to kill

You may be tempted to use the KILL signal immediately, instead of trying TERM (the default) and QUIT first. Don't do this. TERM and QUIT are designed to give a process the chance to "clean up" before exiting, whereas KILL will stop the process, wherever it may be in its computation. Use KILL only as a last resort!

You can use the kill  command with any process you create, not just jobs in the background of your current shell. For example, if you use a windowing system, then you may have several terminal windows, each of which runs its own shell. If one shell is running a process that you want to stop, you can kill  it from another window-but you can't refer to it with a job number because it's running under a different shell. You must instead use its process ID.

ps

This is probably the only situation in which a casual user would need to know the ID of a process. The command ps(1) gives you this information; however, it can give you lots of extra information that you must wade through as well.

ps is a complex command. It takes several options, some of which differ from one version of UNIX to another. To add to the confusion, you may need different options on different UNIX versions to get the same information! We will use options available on the two major types of UNIX systems, those derived from System V (such as most of the versions for Intel 386/486 PCs, as well as IBM's AIX and Hewlett-Packard's HP-UX) and BSD (DEC's Ultrix, SunOS). If you aren't sure which kind of UNIX version you have, try the System V options first.

You can invoke ps in its simplest form without any options. In this case, it will print a line of information about the current login shell and any processes running under it (i.e., background jobs). For example, if you invoked three background jobs, as we saw earlier in the chapter, ps on System V-derived versions of UNIX would produce output that looks something like this:

   PID TTY      TIME COMD
   146 pts/10   0:03 ksh
  2349 pts/10   0:03 fred
  2367 pts/10   0:17 bob
  2389 pts/10   0:09 dave
  2390 pts/10   0:00 ps

The output on BSD-derived systems looks like this:

   PID TT STAT  TIME COMMAND
   146 10 S     0:03 /bin/ksh -i
  2349 10 R     0:03 fred
  2367 10 D     0:17 bob -f /dev/rmt0
  2389 10 R     0:09 dave
  2390 10 R     0:00 ps

(You can ignore the STAT column.) This is a bit like the jobs  command. PID is the process ID; TTY (or TT) is the terminal (or pseudo-terminal, if you are using a windowing system) the process was invoked from; TIME is the amount of processor time (not real or "wall clock" time) the process has used so far; COMD (or COMMAND) is the command. Notice that the BSD version includes the command's arguments, if any; also notice that the first line reports on the parent shell process, and in the last line, ps reports on itself.

ps without arguments lists all processes started from the current terminal or pseudo-terminal. But since ps is not a shell command, it doesn't correlate process IDs with the shell's job numbers. It also doesn't help you find the ID of the runaway process in another shell window.

To get this information, use ps -a  (for "all"); this lists information on a different set of processes, depending on your UNIX version.

System V

Instead of listing only the processes started under a specific terminal, ps -a on System V-derived systems lists all processes associated with any terminal, except those that are group leaders. For our purposes, a "group leader" is the parent shell of a terminal or window. Therefore, if you are using a windowing system, ps -a lists all jobs started in all windows (by all users), but not their parent shells.

Assume that, in the above example, you have only one terminal or window. Then ps -a  will print the same output as plain ps except for the first line, since that's the parent shell. This doesn't seem to be very useful.

But consider what happens when you have multiple windows open. Let's say you have three windows, all running terminal emulators like xterm for the X Window System. You start background jobs fred, bob, and dave in windows with pseudo-terminal numbers 1, 2, and 3, respectively. This situation is shown in Figure 8.1.

Assume you are in the uppermost window. If you type ps, you will see something like this:

   PID TTY      TIME COMD
   146 pts/1    0:03 ksh
  2349 pts/1    0:03 fred
  2390 pts/1    0:00 ps

But if you type ps -a, you will see this:

   PID TTY      TIME COMD
  2349 pts/1    0:03 fred
  2367 pts/2    0:17 bob
  2389 pts/3    0:09 dave
  2390 pts/1    0:00 ps

Now you should see how ps -a  can help you track down a runaway process. If it's dave, you can type kill 2389. If that doesn't work, try kill -QUIT 2389, or in the worst case, kill -KILL 2389.

BSD

On BSD-derived systems, ps -a lists all jobs that were started on any terminal; in other words, it's a bit like concatenating the results of plain ps for every user on the system. Given the above scenario, ps -a will show you all processes that the System V version shows, plus the group leaders (parent shells).

Unfortunately, ps -a  (on any version of UNIX) will not report processes that are in certain pathological conditions where they "forget" things like what shell invoked them and what terminal they belong to. Such processes have colorful names ("zombies," "orphans") that are actually used in UNIX technical literature, not just informally by systems hackers. If you have a serious runaway process problem, it's possible that the process has entered one of these states.

Let's not worry about why or how a process gets this way. All you need to understand is that the process doesn't show up when you type ps -a. You need another option to ps to see it: on System V, it's ps -e  ("everything"), whereas on BSD, it's ps -ax.

These options tell ps to list processes that either weren't started from terminals or "forgot" what terminal they were started from. The former category includes lots of processes that you probably didn't even know existed: these include basic processes that run the system and so-called daemons (pronounced "demons") that handle system services like mail, printing, network file systems, etc.

In fact, the output of ps -e  or ps -ax  is an excellent source of education about UNIX system internals, if you're curious about them. Run the command on your system and, for each line of the listing that looks interesting, invoke man on the process name or look it up in the Unix Programmer's Manual for your system.

User shells and processes are listed at the very bottom of ps -e  or ps -ax  output; this is where you should look for runaway processes. Notice that many processes in the listing have ? instead of a terminal. Either these aren't supposed to have one (such as the basic daemons) or they're runaways. Therefore it's likely that if ps -a  doesn't find a process you're trying to kill, ps -e  (or ps -ax) will list it with ? in the TTY (or TT) column. You can determine which process you want by looking at the COMD (or COMMAND) column.


trap

We've been discussing how signals affect the casual user; now let's talk a bit about how shell programmers can use them. We won't go into too much depth about this, because it's really the domain of systems programmers.

We mentioned above that programs in general can be set up to "trap" specific signals and process them in their own way. The trap  built-in command lets you do this from within a shell script. trap  is most important for "bullet-proofing" large shell programs so that they react appropriately to abnormal events-just as programs in any language should guard against invalid input. It's also important for certain systems programming tasks, as we'll see in the next chapter.

The syntax of trap  is:

trap  cmd sig1 sig2 ...

That is, when any of sig1, sig2, etc., are received, the shell runs cmd; after cmd finishes, the script resumes execution just after the command that was interrupted. [10]

[10] This is what usually happens. Sometimes the command currently running will abort (sleep acts like this, as we'll see soon); other times it will finish running. Further details are beyond the scope of this book.

Of course, cmd can be a script or function. The sigs can be specified by name or by number. You can also invoke trap  without arguments, in which case the shell will print a list of any traps that have been set, using symbolic names for the signals.

Here's a simple example that shows how trap  works. Suppose we have a shell script called loop  with this code:

while true; do
    sleep 60
done

This will just pause for 60 seconds (the sleep(1) command) and repeat indefinitely. true  is a "do-nothing" command whose exit status is always 0. [11] Try typing in this script. Invoke it, let it run for a little while, then type [CTRL-C] (assuming that is your interrupt key). It should stop, and you should get your shell prompt back.

[11] Actually, it's a built-in alias for :, the real shell "no-op."

Now insert the following line at the beginning of the script:

trap 'print "You hit control-C!"' INT
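The whole script now looks like this:

trap 'print "You hit control-C!"' INT

while true; do
    sleep 60
done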

Invoke the script again. Now hit CTRL-C. The odds are overwhelming that you are interrupting the sleep command (as opposed to true). You should see the message "You hit control-C!", and the script will not stop running; instead, the sleep command will abort, and it will loop around and start another sleep. Hit CTRL-\ to get it to stop. Type rm core  to get rid of the resulting core dump file.

Next, run the script in the background by typing loop &. Type kill %loop  (i.e., send it the TERM signal); the script will terminate. Add TERM to the trap  command, so that it looks like this:

trap 'print "You hit control-C!"' INT TERM

Now repeat the process: run it in the background and type kill %loop. As before, you will see the message and the process will keep on running. Type kill -KILL %loop  to stop it.

Notice that the message isn't really appropriate when you use kill. We'll change the script so it prints a better message in the kill  case:

trap 'print "You hit control-C!"' INT
trap 'print "You tried to kill me!"' TERM

while true; do
    sleep 60
done

Now try it both ways: in the foreground with [CTRL-C] and in the background with kill. You'll see different messages.

Traps and Functions

The relationship between traps and shell functions is straightforward, but it has certain nuances that are worth discussing. The most important thing to understand is that functions can have their own local traps; these aren't known outside of the function. In particular, the surrounding script doesn't know about them. Consider this code:

function settrap {
    trap 'print "You hit control-C!"' INT
}

settrap
while true; do
    sleep 60
done

If you invoke this script and hit your interrupt key, it will just exit. The trap on INT in the function is known only inside that function. On the other hand:

function loop {
    trap 'print "How dare you!"' INT
    while true; do
        sleep 60
    done
}

trap 'print "You hit control-C!"' INT
loop

When you run this script and hit your interrupt key, it will print "How dare you!". But how about this:

function loop {
    while true; do
        sleep 60
    done
}

trap 'print "You hit control-C!"' INT
loop
print 'exiting...'

This time the looping code is within a function, and the trap is set in the surrounding script. If you hit your interrupt key, it will print the message and then print "exiting...". It will not repeat the loop as above.

Why? Remember that when the signal comes in, the shell aborts the current command, which in this case is a call to a function. The entire function aborts, and execution resumes at the next statement after the function call.

The advantage of traps that are local to functions is that they allow you to control a function's behavior separately from the surrounding code.

Yet you may want to define global traps inside functions. There is a rather kludgy way to do this; it depends on a feature that we introduce in the next chapter, which we call a "fake signal." Here is a way to set trapcode as a global trap for signal SIG inside a function:

trap "trap trapcode SIG" EXIT

This sets up the command trap  trapcode SIG to run right after the function exits, at which time the surrounding shell script is in scope (i.e., is "in charge"). When that command runs, trapcode is set up to handle the SIG signal.

For example, you may want to reset the trap on the signal you just received, like this:

function trap_handler {
    trap "trap second_handler INT" EXIT
    print 'Interrupt: one more to abort.'
}

function second_handler {
    print 'Aborted.'
    exit 
}

trap trap_handler INT

This code acts like the UNIX mail utility: when you are typing in a message, you must press your interrupt key twice to abort the process.

Speaking of mail, now we'll show a more practical example of traps.

Task: As part of an electronic mail system, write the shell code that lets a user compose a message.

The basic idea is to use cat to create the message in a temporary file and then hand the file's name off to a program that actually sends the message to its destination. The code to create the file is very simple:

msgfile=/tmp/msg$$
cat > $msgfile

Since cat without an argument reads from the standard input, this will just wait for the user to type a message and end it with the end-of-text character [CTRL-D].

Process ID Variables and Temporary Files

The only thing new about this is $$  in the filename expression. This is a special shell variable whose value is the process ID of the current shell.

To see how $$  works, type ps  and note the process ID of your shell process (ksh). Then type print  "$$"; the shell will respond with that same number. Now type ksh  to start a subshell, and when you get a prompt, repeat the process. You should see a different number, probably slightly higher than the last one.
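For example (the process IDs here are hypothetical):

$ print "$$"
146
$ ksh
$ print "$$"
2455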

A related built-in shell variable is !  (i.e., its value is $!), which contains the process ID of the most recently invoked background job. To see how this works, invoke any job in the background and note the process ID printed by the shell next to [1]. Then type print  "$!"; you should see the same number.
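For example (again with a hypothetical process ID):

$ fred & 
[1]     2349
$ print "$!"
2349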

The !  variable is useful in shell programs that involve multiple communicating processes, as we'll see later.

To return to our mail example: since all processes on the system must have unique process IDs, $$  is excellent for constructing names of temporary files. We saw an example of this back in Chapter 2, Command-line Editing: we used the expression .hist$$  as a way of generating unique names for command history files so that several can be open at once, allowing multiple shell windows on a workstation to have their own history files. This expression generates names like .hist234. There are also examples of $$  in Chapter 7 and Chapter 9, Debugging Shell Programs.

The directory /tmp is conventionally used for temporary files. Many systems also have another directory, /usr/tmp, for the same purpose. All files in these directories are usually erased whenever the computer is rebooted.

Nevertheless, a program should clean up such files before it exits, to avoid taking up unnecessary disk space. We could do this in our code very easily by adding the line rm $msgfile  after the code that actually sends the message. But what if the program receives a signal during execution? For example, what if a user changes his or her mind about sending the message and hits CTRL-C to stop the process? We would need to clean up before exiting. We'll emulate the actual UNIX mail system by saving the message being written in a file called dead.letter in the current directory. We can do this by using trap  with a command string that includes an exit  command:

trap 'mv $msgfile dead.letter; exit' INT TERM
msgfile=/tmp/msg$$
cat > $msgfile
# send the contents of $msgfile to the specified mail address...
rm $msgfile

When the script receives an INT or TERM signal, it will save the message in dead.letter and then exit. Note that the command string isn't evaluated until it needs to be run, so $msgfile will contain the correct value; that's why we surround the string in single quotes.
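To see why, compare these two versions (a sketch):

trap "mv $msgfile dead.letter; exit" INT TERM    # wrong: $msgfile expands now, while it is still empty
trap 'mv $msgfile dead.letter; exit' INT TERM    # right: $msgfile expands when the signal arrives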

But what if the script receives a signal before msgfile is created-unlikely though that may be? Then mv will try to rename a file that doesn't exist. To fix this, we need to test for the existence of the file $msgfile before trying to move it. The code for this is a bit unwieldy to put in a single command string, so we'll use a function instead:

function cleanup {
    if [[ -a $msgfile ]]; then
	  mv $msgfile dead.letter
    fi
    exit
}

trap cleanup INT TERM

msgfile=/tmp/msg$$
cat > $msgfile
# send the contents of $msgfile to the specified mail address...
rm $msgfile

 Ignoring Signals

Sometimes a signal comes in that you don't want to do anything about. If you give the null string ("" or '') as the command argument to trap, then the shell will effectively ignore that signal. The classic example of a signal you may want to ignore is HUP (hangup), the signal the shell sends to all of your background processes when you log out.

HUP has the usual default behavior: it will kill the process that receives it. But there are bound to be times when you don't want a background job to terminate when you log out. For example, you may start a long compile or word processing job; you want to log out and come back later when you expect the job to be finished. Under normal circumstances, your background job will terminate when you log out. But if you run it in a shell environment where the HUP signal is ignored, the job will finish.

To do this, you could write a simple function that looks like this:

function ignorehup {
    trap "" HUP
    eval "$@"
}

We write this as a function instead of a script for reasons that will become clearer when we look in detail at subshells at the end of this chapter.
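For example, to keep a long troff job running after you log out (a sketch):

$ ignorehup troff myfile > myfile.out &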

Actually, there is a UNIX command called nohup that does precisely this. The start  script from the last chapter could include nohup:

eval nohup "$@" > logfile 2>&1 &

This prevents HUP from terminating your command and saves its standard and error output in a file. Actually, the following is just as good:

nohup "$@" > logfile 2>&1 &

If you understand why eval  is essentially redundant when you use nohup  in this case, then you have a firm grasp on the material in the previous chapter.

Resetting Traps

Another "special case" of the trap  command occurs when you give a dash (-) as the command argument. This resets the action taken when the signal is received to the default, which usually is termination of the process.

As an example of this, let's return to Task 8-2, our mail program. After the user has finished sending the message, the temporary file is erased. At that point, since there is no longer any need to "clean up," we can reset the signal trap to its default state. The code for this, apart from function definitions, is:

trap abortmsg INT
trap cleanup TERM

msgfile=/tmp/msg$$
cat > $msgfile
# send the contents of $msgfile to the specified mail address...
rm $msgfile

trap - INT TERM

The last line of this code resets the handlers for the INT and TERM signals.

At this point you may be thinking that one could get seriously carried away with signal handling in a shell script. It is true that "industrial strength" programs devote considerable amounts of code to dealing with signals. But these programs are almost always large enough so that the signal-handling code is a tiny fraction of the whole thing. For example, you can bet that the real UNIX mail system is pretty darn bullet-proof.

However, you will probably never write a shell script that is complex enough, and that needs to be robust enough, to merit lots of signal handling. You may write a prototype for a program as large as mail in shell code, but prototypes by definition do not need to be bullet-proofed.

Therefore, you shouldn't worry about putting signal-handling code in every 20-line shell script you write. Our advice is to determine if there are any situations in which a signal could cause your program to do something seriously bad and add code to deal with those contingencies. What is "seriously bad"? Well, with respect to the above examples, we'd say that the case where HUP causes your job to terminate on logout is seriously bad, while the temporary file situation in our mail program is not.

The Korn shell has several new options to trap  (with respect to the same command in most Bourne shells) that make it useful as an aid for debugging shell scripts. We'll cover these in the next chapter.


Coroutines

We've spent the last several pages on almost microscopic details of process behavior. Rather than continue our descent into the murky depths, we'll revert to a higher-level view of processes.

Earlier in this chapter, we covered ways of controlling multiple simultaneous jobs within an interactive login session; now we'll consider multiple process control within shell programs. When two (or more) processes are explicitly programmed to run simultaneously and possibly communicate with each other, we call them coroutines.

This is actually nothing new: a pipeline is an example of coroutines. The shell's pipeline construct encapsulates a fairly sophisticated set of rules about how processes interact with each other. If we take a closer look at these rules, we'll be better able to understand other ways of handling coroutines-most of which turn out to be simpler than pipelines.

When you invoke a simple pipeline, say ls | more, the shell invokes a series of UNIX primitive operations, a.k.a. system calls. In effect, the shell tells UNIX to do the following things; in case you're interested, we include in parentheses the actual system call used at each step:

  1. Create two subprocesses, which we'll call P1 and P2 (the fork system call).

  2. Set up I/O between the processes so that P1's standard output feeds into P2's standard input (pipe).

  3. Start /bin/ls in process P1 (exec).

  4. Start /bin/more in process P2 (exec).

  5. Wait for both processes to finish (wait).

You can probably imagine how the above steps change when the pipeline involves more than two processes.

Now let's make things simpler. We'll see how to get multiple processes to run at the same time if the processes do not need to communicate. For example, we want the processes dave  and bob  to run as coroutines, without communication, in a shell script. Our initial solution would be this:

dave &
bob

Assume for the moment that bob  is the last command in the script. The above will work-but only if dave  finishes first. If dave  is still running when the script finishes, then it becomes an orphan, i.e., it enters one of the "funny states" we mentioned earlier in this chapter. Never mind the details of orphanhood; just believe that you don't want this to happen, and if it does, you may need to use the "runaway process" method of stopping it, discussed earlier in this chapter.

wait

There is a way of making sure the script doesn't finish before dave  does: the built-in command wait. Without arguments, wait  simply waits until all background jobs have finished. So to make sure the above code behaves properly, we would add wait, like this:

dave &
bob
wait

Here, if bob  finishes first, the parent shell will wait for dave  to finish before finishing itself.

If your script has more than one background job and you need to wait for specific ones to finish, you can give wait  the same type of job argument (with a percent sign) as you would use with kill, fg, or bg.
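For example, a sketch:

dave &
bob &
wait %dave        # continue as soon as dave finishes; bob may still be running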

However, you will probably find that wait  without arguments suffices for all coroutines you will ever program. Situations in which you would need to wait for specific background jobs are quite complex and beyond the scope of this book.

Advantages and Disadvantages of Coroutines

In fact, you may be wondering why you would ever need to program coroutines that don't communicate with each other. For example, why not just run bob  after dave  in the usual way? What advantage is there in running the two jobs simultaneously?

If you are running on a computer with one processor (CPU), then there is a performance advantage-but only if you have the bgnice  option turned off (see Chapter 3, Customizing Your Environment), and even then only in certain situations.

Roughly speaking, you can characterize a process in terms of how it uses system resources in three ways: whether it is CPU intensive (e.g., does lots of number crunching), I/O intensive (does a lot of reading or writing to the disk), or interactive (requires user intervention).

We already know from Chapter 1 that it makes no sense to run an interactive job in the background. But apart from that, the more two or more processes differ with respect to these three criteria, the more advantage there is in running them simultaneously. For example, a number-crunching statistical calculation would do well when running at the same time as a long, I/O-intensive database query.

On the other hand, if two processes use resources in similar ways, it may even be less efficient to run them at the same time than to run them sequentially. Why? Basically, because under such circumstances, the operating system often has to "time-slice" the resource(s) in contention.

For example, if both processes are "disk hogs," the operating system may enter a mode where it constantly switches control of the disk back and forth between the two competing processes; the system ends up spending at least as much time doing the switching as it does on the processes themselves. This phenomenon is known as thrashing; at its most severe, it can cause a system to come to a virtual standstill. Thrashing is a common problem; system administrators and operating system designers both spend lots of time trying to minimize it.

Parallelization

But if you have a computer with multiple CPUs (such as a Pyramid, Sequent, or Sun MP), you should be less concerned about thrashing. Furthermore, coroutines can provide dramatic increases in speed on this type of machine, which is often called a parallel computer; analogously, breaking up a process into coroutines is sometimes called parallelizing the job.

Normally, when you start a background job on a multiple-CPU machine, the computer will assign it to the next available processor. This means that the two jobs are actually-not just metaphorically-running at the same time.

In this case, the running time of the coroutines is essentially equal to that of the longest-running job plus a bit of overhead, instead of the sum of the run times of all processes (although if the CPUs all share a common disk drive, the possibility of I/O-related thrashing still exists). In the best case-all jobs having the same run time and no I/O contention-you get a speedup factor equal to the number of jobs.

Parallelizing a program is often not easy; there are several subtle issues involved and there's plenty of room for error. Nevertheless, it's worthwhile to know how to parallelize a shell script whether or not you have a parallel machine, especially since such machines are becoming more and more common.

We'll show how to do this-and give you an idea of some of the problems involved-by means of a simple task whose solution is amenable to parallelization.

Task: Write a utility that copies a file to a list of different destinations in parallel.

We'll call this script mcp. The command mcp  filename dest1 dest2 ... should copy filename to all of the destinations given. The code for this should be fairly obvious:

file=$1
shift
for dest in "$@"; do
    cp $file $dest
done

Now let's say we have a parallel computer and we want this command to run as fast as possible. To parallelize this script, it's a simple matter of firing off the cp commands in the background and adding a wait  at the end:

file=$1
shift
for dest in "$@"; do
    cp $file $dest &
done
wait

Simple, right? Well, there is one little problem: what happens if the user specifies duplicate destinations? If you're lucky, the file just gets copied to the same place twice. Otherwise, the identical cp commands will interfere with each other, possibly resulting in a file that contains two interspersed copies of the original file. In contrast, if you give the regular cp command two arguments that point to the same file, it will print an error message and do nothing.

To fix this problem, we would have to write code that checks the argument list for duplicates. Although this isn't too hard to do (see the exercises at the end of this chapter), the time it takes that code to run might offset any gain in speed from parallelization; furthermore, the code that does the checking detracts from the simple elegance of the script.

As you can see, even a seemingly trivial parallelization task has problems resulting from multiple processes having concurrent access to a given system resource (a file in this case). Such problems, known as concurrency control issues, become much more difficult as the complexity of the application increases. Complex concurrent programs often have much more code for handling the special cases than for the actual job the program is supposed to do!

Therefore it shouldn't surprise you that much research has been and is being done on parallelization, the ultimate goal being to devise a tool that parallelizes code automatically. (Such tools do exist; they usually work in the confines of some narrow subset of the problem.) Even if you don't have access to a multiple-CPU machine, parallelizing a shell script is an interesting exercise that should acquaint you with some of the issues that surround coroutines.

Coroutines with Two-way Pipes

Now that we have seen how to program coroutines that don't communicate with each other, we'll build on that foundation and discuss how to get them to communicate-in a more sophisticated way than with a pipeline. The Korn shell has a set of features that allow programmers to set up two-way communication between coroutines. These features aren't included in most Bourne shells.

If you start a background process by appending |&  to a command instead of &, the Korn shell will set up a special two-way pipeline between the parent shell and the new background process. read -p  in the parent shell reads a line of the background process' standard output; similarly, print -p  in the parent shell feeds into the standard input of the background process. Figure 8.2 shows how this works.

This scheme has some intriguing possibilities. Notice the following things: first, the parent shell communicates with the background process independently of its own standard input and output. Second, the background process need not have any idea that a shell script is communicating with it in this manner. This means that the background process can be any pre-existing program that uses its standard input and output in normal ways.
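As a minimal sketch of the mechanism, consider cat (which simply copies its standard input to its standard output) as the background process:

cat |&                # start cat as a coroutine
print -p hello        # write a line to cat's standard input
read -p reply         # read a line of cat's standard output
print "$reply"        # prints "hello"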

Here's a task that shows a simple example:

Task: You would like to have an online calculator, but the existing UNIX utility dc(1) uses Reverse Polish Notation (RPN), a la Hewlett-Packard calculators. You'd rather have one that works like the $3.95 model you got with that magazine subscription. Write a calculator program that accepts standard algebraic notation.

The objective here is to write the program without re-implementing the calculation engine that dc already has-in other words, to write a program that translates algebraic notation to RPN and passes the translated line to dc to do the actual calculation. [12]

[12] The utility bc(1) actually provides similar functionality.

We'll assume that the function alg2rpn, which does the translation, already exists: given a line of algebraic notation as argument, it prints the RPN equivalent on the standard output. If we have this, then the calculator program (which we'll call adc) is very simple:

dc |&

while read line'?adc> '; do
    print -p "$(alg2rpn $line)"
    read -p answer
    print "    = $answer"
done

The first line of this code starts dc as a coroutine with a two-way pipe. Then the while  loop prompts the user for a line and reads it until the user types [CTRL-D] for end-of-input. The loop body converts the line to RPN, passes it to dc through the pipe, reads dc's answer, and prints it after an equal sign. For example:

$ adc
adc> 2 + 3
    = 5
adc> (7 * 8) + 54
    = 110
adc> ^D
$

Actually-as you may have noticed-it's not entirely necessary to have a two-way pipe with dc. You could do it with a standard pipe and let dc do its own output, like this:

{ while read line'?adc> '; do
      print "$(alg2rpn $line)"
  done 
} | dc

The only difference from the above is the lack of an equal sign before each answer is printed.

But: what if you wanted to make a fancy graphical user interface (GUI), like the xcalc program that comes with many X Window System installations? Then, clearly, dc's own output would not be satisfactory, and you would need full control of your own standard output in the parent process. The user interface would have to capture dc's output and display it in the window properly. The two-way pipe is an excellent solution to this problem: just imagine that, instead of print "    = $answer", there is a call to a routine that displays the answer in the "readout" section of the calculator window.

All of this suggests that the two-way pipe scheme is great for writing shell scripts that interpose a software layer between the user (or some other program) and an existing program that uses standard input and output. In particular, it's great for writing new interfaces to old, standard UNIX programs that expect line-at-a-time, character-based user input and output. The new interfaces could be GUIs, or they could be network interface programs that talk to users over links to remote machines. In other words, the Korn shell's two-way pipe construct is designed to help develop very up-to-date software!

Two-way Pipes Versus Standard Pipes

Before we leave the subject of coroutines, we'll complete the circle by showing how the two-way pipe construct compares to regular pipelines. As you may have been able to figure out by now, it is possible to program a standard pipeline by using |&  with print -p.

This has the advantage of reserving the parent shell's standard output for other use. The disadvantage is that the child process' standard output is directed to the two-way pipe: if the parent process doesn't read it with read -p, then it's effectively lost.


Subshells

Coroutines clearly represent the most complex relationship between processes that the Korn shell defines. To conclude this chapter, we will look at a much simpler type of interprocess relationship: that of a subshell with its parent shell. We saw in Chapter 3 that whenever you run a shell script, you actually invoke another copy of the shell that is a subprocess of the main, or parent, shell process. Now let's look at subshells in more detail.

Subshell Inheritance

The most important things you need to know about subshells are what characteristics they get, or inherit, from their parents. These are as follows:

  1. The current directory

  2. Environment variables

  3. Standard input, output, and error, plus any other open file descriptors

  4. Signals that are ignored

The first three of these are inherited by all subprocesses, while the last is unique to subshells. Just as important are the things that a subshell does not inherit from its parent:

  1. Shell variables, except environment (exported) variables and those defined in the environment file

  2. Handled signals, i.e., any signal traps are reset to their default actions

We covered some of this earlier (in Chapter 3), but these points are common sources of confusion, so they bear repeating.

Nested Subshells

Subshells need not be in separate scripts; you can also start a subshell within the same script (or function) as the parent. You do this in a manner very similar to the code blocks we saw in the last chapter. Just surround some shell code with parentheses (instead of curly brackets), and that code will run in a subshell. We'll call this a nested subshell.

For example, here is the calculator program, from above, with a subshell instead of a code block:

( while read line'?adc> '; do
      print "$(alg2rpn $line)"
  done 
) | dc

The code inside the parentheses will run as a separate process. This is usually less efficient than a code block. The differences in functionality between subshells and code blocks are very few; they primarily pertain to issues of scope, i.e., the domains in which definitions of things like shell variables and signal traps are known. First, code inside a nested subshell obeys the above rules of subshell inheritance, except that it knows about variables defined in the surrounding shell; in contrast, think of blocks as code units that inherit everything from the outer shell. Second, variables and traps defined inside a code block are known to the shell code after the block, whereas those defined in a subshell are not.

For example, consider this code:

{
    fred=bob
    trap 'print "You hit CTRL-C!"' INT
}
while true; do
    print "\$fred is $fred"
    sleep 60
done

If you run this code, you will see the message $fred is bob every 60 seconds, and if you type CTRL-C, you will see the message You hit CTRL-C!. You will need to type CTRL-\ to stop it (don't forget to remove the core file). Now let's change it to a nested subshell:

(
    fred=bob
    trap 'print "You hit CTRL-C!"' INT
)
while true; do
    print "\$fred is $fred"
    sleep 60
done

If you run this, you will see the message $fred is; the outer shell doesn't know about the subshell's definition of fred  and therefore thinks it's null. Furthermore, the outer shell doesn't know about the subshell's trap of the INT signal, so if you hit CTRL-C, the script will terminate.

If a language supports code nesting, then it's considered desirable that definitions inside a nested unit have a scope limited to that nested unit. In other words, nested subshells give you better control than code blocks over the scope of variables and signal traps. Therefore we feel that you should use subshells instead of code blocks if they are to contain variable definitions or signal traps-unless efficiency is a concern.

Exercises

Here are some exercises that should help you make sure you have a firm grasp on the material. The last exercise is especially difficult for those without backgrounds in compilers, parsing theory, or formal language theory.

  1. Write a shell script called pinfo that combines the jobs  and ps commands by printing a list of jobs with their job numbers, corresponding process IDs, running times, and full commands.

  2. Take the latest version of our C compiler shell script-or some other non-trivial shell script-and "bullet-proof" it with signal traps.

  3. Take the non-pipeline version of our C compiler-or some other non-trivial shell script-and parallelize it as much as possible.

  4. Write the code that checks for duplicate arguments to the mcp script. Bear in mind that different pathnames can point to the same file. (Hint: if $i is 1, then eval "print \${$i}" prints the first command-line argument. Make sure you understand why.)

  5. Redo the findterms program in the last chapter using a nested subshell instead of a code block.

  6. (The following doesn't have that much to do with the material in this chapter per se, but it is a classic programming exercise:)

    1. Write the function alg2rpn used in adc. Here's how to do this: Arithmetic expressions in algebraic notation have the form expr op expr, where each expr is either a number or another expression (perhaps in parentheses), and op is +, -, ×, /, or %  (remainder). In RPN, expressions have the form expr expr op. For example: the algebraic expression 2+3  is 2 3 +  in RPN; the RPN equivalent of (2+3) × (9-5)  is 2 3 +  9 5 - ×. The main advantage of RPN is that it obviates the need for parentheses and operator precedence rules (e.g., × is evaluated before +). The dc program accepts standard RPN, but each expression should have "p" appended to it: this tells dc to print its result, e.g., the first example above should be given to dc as 2 3 + p.

    2. You need to write a routine that converts algebraic notation to RPN. This should be (or include) a function that calls itself (known as a recursive function) whenever it encounters a subexpression. It is especially important that this function keep track of where it is in the input string and how much of the string it "eats up" during its processing.

    3. To make your life easier, don't worry about operator precedence for now; just convert to RPN from left to right. e.g., treat 3+4×5  as (3+4)×5  and 3×4+5  as (3×4)+5. This makes it possible for you to convert the input string on the fly, i.e., without having to read in the whole thing before doing any processing.

    4. Enhance your solution to the previous exercise so that it supports operator precedence in the "usual" order: ×, /, % (remainder), then +, -. E.g., treat 3+4×5  as 3+(4×5)  and 3×4+5  as (3×4)+5.

