Softpanorama
May the source be with you, but remember the KISS principle ;-)


Input and Output Redirection


Introduction

Before the shell executes a command, it scans the command line for redirection characters. These special symbols instruct the shell to redirect input and output. Redirection characters can appear anywhere in a simple command, or can precede or follow a command; they are not passed on to the invoked command.

The shell performs command and parameter substitution on the redirection target before using it. File name substitution occurs only if the pattern matches a single file, and blank interpretation is not performed. There are several classic types of redirection:

< Sourcefile   

Uses the file specified by the Sourcefile  as standard input (file descriptor 0).

>Targetfile

Uses the file specified by the target as standard output (file descriptor 1). If the file does not exist, the shell creates it. If the file exists and the noclobber  option is on, an error results; otherwise, the file is truncated to zero length.

Note:  When multiple shells have the noclobber option set and they redirect output to the same file, there could be a race condition, which might result in more than one of these shell processes writing to the file. The shell does not detect or prevent such race conditions.

>| Targetfile

Same as the > command, except that this redirection statement overrides the noclobber option.
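
A short bash sketch of the difference (the temporary file is illustrative):

```shell
#!/bin/bash
set -o noclobber            # make > refuse to overwrite existing files
tmp=$(mktemp)               # mktemp creates the file, so it already exists
if ! echo first > "$tmp" 2>/dev/null; then
    echo "blocked by noclobber"
fi
echo first >| "$tmp"        # >| overrides noclobber and truncates the file
cat "$tmp"
rm -f "$tmp"
```

In an interactive session the failed > also prints an error along the lines of "cannot overwrite existing file".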

2> /dev/null  -- redirects stderr (file descriptor 2) to /dev/null

>> Targetfile

Uses the file specified by target as standard output. If the file currently exists, the shell appends the output to it (by first seeking to the end of the file). If the file does not exist, the shell creates it.

<> Stream

Opens the file specified by the parameter for reading and writing as standard input.
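
A minimal bash sketch of the <> operator on an explicit descriptor (the file name is illustrative); note that reads and writes through the same descriptor share one file offset:

```shell
#!/bin/bash
printf 'one\ntwo\n' > /tmp/rw_demo
exec 3<> /tmp/rw_demo      # open /tmp/rw_demo for reading AND writing on fd 3
read -r line <&3           # consumes "one"; the offset now sits after it
echo "got: $line"
printf 'TWO' >&3           # overwrites "two" in place at the shared offset
exec 3>&-                  # close the descriptor
cat /tmp/rw_demo           # the file now contains: one / TWO
rm -f /tmp/rw_demo
```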

<<[-] EndMarker (Here documents)

Here documents are often used with the cat command. In this case the shell reads each line of input until it locates a line containing only the value of the parameter (which serves as the end marker) or an end-of-file character. This part of the script is called a here document. For example:

tr a-z A-Z << EOF
jan feb mar
apr may jun
jul aug sep
oct nov dec
EOF

The string EOF was used as the delimiting identifier. It specified the start and end of the here document. The redirect and the delimiting identifier do not need to be separated by a space: <<EOF and  << EOF both work.

The part of the script delimited by the end marker is converted into a file that becomes the standard input. If all or part of the end marker is quoted, no interpretation is performed on the body of the here document: no expansion of variables, no arithmetic expressions, no backtick output substitution, nothing.
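
For example, quoting the end marker switches off all expansion in the body:

```shell
#!/bin/bash
name=World

# Unquoted marker: variables and command substitutions are expanded
cat <<EOF
Hello, $name
EOF

# Quoted marker: the body is passed through literally
cat <<'EOF'
Hello, $name
EOF
```

The first here document prints "Hello, World"; the second prints "Hello, $name" verbatim.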

In other words, the here document is treated as a file that begins after the next newline character and continues until there is a line containing only the end_marker, with no trailing blank characters. Then the next here document, if any, starts (there can be several). The format of a here document is as follows:

[n]<<end_marker
   here document
end_marker

There are two forms of the operator. With plain <<, the body and the end marker are taken exactly as written. If a hyphen (-) is appended (<<-), the shell strips all leading tabs from each line of the document and from the line containing the end marker, so the here document can be indented to match the surrounding script.
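
For example, <<- lets the here document follow the indentation of the surrounding code. The body lines below are indented with tab characters, which <<- removes (spaces would not be stripped):

```shell
#!/bin/bash
if true; then
	cat <<-EOF
		indented in the script,
		flush left in the output
	EOF
fi
```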

<<< -- Here string

A here string (available in bash, ksh, and zsh) consists of <<< followed by a word or a string literal, and performs input redirection from that text.

In this case you do not need an end marker: the end of the string signifies the end of the here string. Here is an explanation of the concept from Wikipedia:

A single word need not be quoted:

tr a-z A-Z <<< one

yields:

ONE

In case of a string with spaces, it must be quoted:

tr a-z A-Z <<< 'one two three'

yields:

ONE TWO THREE

This could also be written as:

 FOO='one two three'
 tr a-z A-Z <<< $FOO

Multiline strings are acceptable:

 tr a-z A-Z <<< 'one
 two three'

yields:

ONE
TWO THREE

Note that leading and trailing newlines, if present, are included:

 tr a-z A-Z <<< '
 one
 two three'

yields (the leading newline is preserved as a blank first line):

ONE
TWO THREE

The key difference from here documents is that in here documents, the delimiters are on separate lines (the leading and trailing newlines are stripped), and the terminating delimiter can be specified.

Echo, cat and cut commands as less effective substitutes for redirection

Note that here string behavior can also be accomplished (reversing the order) via piping and the echo command, as in:

 echo 'one two three' | tr a-z A-Z

Echo is less effective and less elegant than the here string, but is still widely used.

The Unix command cat is actually short for "concatenate," i.e., link together. It accepts multiple filename arguments and copies them to the standard output. But let's pretend, for the moment, that cat and other utilities accept only standard input and no filename arguments. The Unix shell lets you redirect standard input so that it comes from a file: the notation command < filename achieves the same result with less overhead.

For example, if you have a file called fred that contains some text, then cat < fred  will print fred's contents out onto your terminal. sort < fred  will sort the lines in the fred file and print the result on your terminal (remember: we're pretending that utilities don't take filename arguments).

Combination of input and output redirectors

Input and output redirectors can be combined. For example: the cp command is normally used to copy files; if for some reason it didn't exist or was broken, you could use cat in this way:

cat  < file1 >  file2

This would be similar to cp file1 file2.

It is also possible to redirect the output of a command into the standard input of another command instead of a file. The construct that does this is called the pipe, notated as |. A command line that includes two or more commands connected with pipes is called a pipeline.

Pipes also can be combined with redirectors.  To see a sorted listing of the file fred a screen at a time, type sort < fred | more. To print it instead of viewing it on your terminal, type sort < fred | lp.

Here's a more complicated example. The file /etc/passwd stores information about users' accounts on a UNIX system. Each line in the file contains a user's login name, user ID number, encrypted password, home directory, login shell, and other info. The first field of each line is the login name; fields are separated by colons (:).

To get a sorted listing of all users on the system, type:

cut -d: -f1 < /etc/passwd | sort 

(Actually, you can omit the <, since cut accepts input filename arguments.) The cut command extracts the first field (-f1), where fields are separated by colons (-d:), from the input.

If you want to send the list to the file /root/users in addition to your screen, you can extend the pipeline like this:

 cut -d: -f1 < /etc/passwd | sort | tee /root/users

Using cat for creating small documents and adding lines to system files

Cat can also be used for creating small documents or adding lines to system files:

cat > message <<EOF
Hello Jim, 
I will come to work later today as I have doctor appointment
EOF
You can also add lines to system files such as /etc/fstab (see fstab - Wikipedia)
cat >> /etc/fstab <<EOF
# Removable media
/dev/cdrom      /mnt/cdrom      udf,iso9660  noauto,owner,ro                                     0 0
 
# NTFS Windows 7 partition
/dev/sda1       /mnt/Windows    ntfs-3g      quiet,defaults,locale=en_US.utf8,umask=0,noexec     0 0
 
# Partition shared by Windows and Linux
/dev/sda7       /mnt/shared     vfat         umask=000                                           0 0
 
# mounting tmpfs
tmpfs           /mnt/tmpfschk   tmpfs        size=100m                                           0 0
 
# mounting cifs
//pingu/ashare  /store/pingu    cifs         credentials=/root/smbpass.txt                       0 0
 
# mounting NFS
pingu:/store    /store          nfs          rw
EOF   

Summary

Now you should see how I/O directors and pipelines support the UNIX building block philosophy. The notation is extremely terse and powerful. Just as important, the pipe concept eliminates the need for messy temporary files to store output of commands before it is fed into other commands.

After sufficient practice, you will find yourself routinely typing in powerful command pipelines that do in one line what it would take several commands (and temporary files) in other operating systems to accomplish.

The shell also provides operators for duplicating, closing, and moving file descriptors:

<&Digit  Duplicates standard input from the file descriptor specified by the Digit parameter
>&Digit  Duplicates standard output in the file descriptor specified by the Digit parameter
<&-      Closes standard input
>&-      Closes standard output
<&p      Makes standard input come from the co-process
>&p      Makes standard output go to the co-process

If one of these redirection options is preceded by a digit, then the file descriptor number referred to is specified by the digit (instead of the default 0 or 1). In the following example, the shell opens file descriptor 2 for writing as a duplicate of file descriptor 1:

... 2>&1

The order in which redirections are specified is significant. The shell evaluates each redirection in terms of the (FileDescriptor, File) association at the time of evaluation. For example, in the statement:

... 1>File 2>&1

the file descriptor 1 is associated with the file specified by the File parameter. The shell associates file descriptor 2 with the file associated with file descriptor 1 (File). If the order of redirections were reversed, file descriptor 2 would be associated with the terminal (assuming file descriptor 1 had previously been) and file descriptor 1 would be associated with the file specified by the File parameter.
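
A small sketch of the difference (the msg helper and log file names are illustrative):

```shell
#!/bin/bash
# Hypothetical helper that writes one line to each stream
msg() { echo "to stdout"; echo "to stderr" >&2; }

msg > both.log 2>&1        # 2>&1 is evaluated AFTER > both.log,
cat both.log               # so both lines land in both.log

msg 2>&1 > only_out.log    # 2>&1 duplicated the OLD stdout (the terminal),
                           # so "to stderr" appears on screen and only
                           # "to stdout" lands in only_out.log
cat only_out.log
rm -f both.log only_out.log
```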

If a command is followed by an ampersand (&) and job control is not active, the default standard input for the command is the empty file /dev/null. Otherwise, the environment for the execution of a command contains the file descriptors of the invoking shell as modified by input and output specifications.




NEWS CONTENTS

Old News ;-)

[Apr 09, 2010] Improve Your Unix Logging with Advanced I/O Redirection By Charlie Schluting

March 16, 2010 |  http://www.enterprisenetworkingplanet.com/

Beyond the basic shell I/O redirection tools, there are a few interesting tricks to learn. One trick, swapping stderr and stdout, is highly useful in many situations. Command lengths tend to grow out of control and become difficult to understand, but taking the time to break them apart into each component reveals a simple explanation.

Briefly, let's make sure everyone is on the same page. In Unix I/O, there are three file handles that always exist: stdin (0), stdout (1), and stderr (2). Standard error is the troublesome one, as we often want all command output sent to a file. Running ./command > logfile will send only stdout to the file, and stderr output will be sent to the terminal. To combat this, we can redirect the stderr stream into stdout so that both stream into the file: ./command > logfile 2>&1 (note the order: the file redirection must come first). This works wonderfully, but often severely limits our options.

Take the following, which is all one line (backslashes mean continue on the next line):

(((/root/backup.sh | tee -a /var/log/backuplog) \ 
   3>&1 1>&2 2>&3 | tee -a /var/log/backuplog) \ 
   3>&1 1>&2 2>&3) >/dev/null

This trick involves three nested subshells, and all kinds of weird file handle manipulation. The above command will log both stderr and stdout into the /var/log/backuplog file, and also emit any errors to the terminal as stderr should. This is the only way to accomplish those requirements, aside from creating your own FIFO pipes with mkfifo, or perhaps using bash's process substitution.

The reason this command gets so crazy is because 'tee' doesn't understand stderr. The tee command will take whatever it receives on stdin and write it to stdout and the specified file. If you approach this the normal way, that is to say combine stderr into stdout (2>&1), before sending it to tee, you've just lost the ability to write only stderr to the console. Both streams are combined, with no way to break them apart again.

Why is this useful? When the above command is run from cron, it will log everything to the file and if anything goes wrong, stderr will go to the console, which gets e-mailed to an administrator. If desired, you could also log stdout and stderr to two distinct files.

Starting with the basic command, we're simply writing the stdout output to a log file, and back to stdout again.

backup.sh | tee -a /var/log/backuplog

Any errors going to stderr are preserved, because we haven't redirected anything yet. Unfortunately, we've lost control of any stderr output that came from backup.sh. After running this command, stderr simply gets written to the console.

To test this, simply write a script that outputs to both stderr and stdout:

#!/bin/sh
echo "test stdout" >&1
echo "test stderr" >&2

Now, to resolve the issue with stderr being written to the console before we've had a chance to play with it, we use a subshell:

(test.sh | tee -a ./log)

We now have our two distinct file descriptors to work with again. A subshell will capture stdout and stderr, and those file handles will both be available for the next process. If we just stopped there, we could now write those two items to distinct log files, like so:

(test.sh | tee -a ./log) 1>out.log 2> err.log

We're already writing stdout to the backuplog file, so this probably isn't useful in this situation. The next step in this process is to swap stdout and stderr. Remember, 'tee' can only operate on stdout, so in order to have it write stderr to the log, we have to swap the two. When done carefully, we don't permanently combine the two streams with no way to get them back apart.

(test.sh | tee -a ./log) 3>&1 1>&2 2>&3

This works by creating a new handle, 3, and mapping it to stdout. Then stdout gets mapped to stderr, and stderr becomes stdout in the last portion. The third file handle essentially "saves" stdout for use later, so 2>&3 maps stderr to the original stdout.

At this point, we have test.sh writing "test stdout" into the log file, and then the stdout and stderr file handles are swapped.

Now, we can write stderr into the log file:

((test.sh | tee -a ./log) 3>&1 1>&2 2>&3 | tee -a ./log)

This is the same thing we did before, including the subshell. This is where the 2nd level of subshell comes from; we want to retain both stdout and stderr so we can swap them back to normal again.

At this point, you could also modify the above to write to two different log files, one for standard output and one for standard error. Now let's return our file descriptors to the proper state by using that swap trick again:

(((test.sh | tee -a ./log) 3>&1 1>&2 2>&3 | tee -a ./log) \
3>&1 1>&2 2>&3)


We're now back to where we started, except both stdout and stderr have been written to the log file! You can proceed as normal, and redirect the I/O however you wish. In the original command above, we added >/dev/null to the end, so that standard output is discarded and only standard error reaches the terminal. This is great for cron scripts where you only want e-mail if something goes wrong, but you also wish to have all output logged for debugging purposes. There is one problem, however, with this approach. If both types of output are being written quickly, lines can arrive in the log file out of order. There is no solution to this when using this technique, so be aware that it may happen.

[Aug 11, 2009] Input-Output redirection made simple in Linux

January 07, 2006 | All about Linux
Linux follows the philosophy that everything is a file. Monitor, mouse, printer ... you name it, and it is classified as a file in Linux. Each of these pieces of hardware has a unique device file associated with it. This nomenclature has its own advantages, the main one being that you can use all the common command line tools you have in Linux to send, receive or manipulate data with these devices.

For example, my mouse has the device file '/dev/input/mice' associated with it (yours may be different).

So if I want to see the output of the mouse on my screen, I just enter the command :
cat /dev/input/mice
... and then move the mouse to get characters on the terminal. Try it out yourselves.

Note: In some cases, running the above command will scramble your terminal display. In such an outcome, you can type the command :

reset
... to get it corrected.

Linux provides each program that is run on it access to three important files. They are standard input, standard output and standard error. And each of these special files (standard input, output and error) have got the file descriptors 0, 1 and 2 respectively. In the previous example, the utility 'cat' uses standard output which by default is the screen or the console to display the output.

Redirecting output to other files

You can easily redirect input / output to any file other than the default one. This is achieved in Linux using input and output redirection symbols. These symbols are as follows:

> - Output redirection
< - Input redirection 
Using a combination of these symbols and the standard file descriptors you can achieve complex redirection tasks quite easily.

Output Redirection

Suppose, I want to redirect the output of 'ls' to a text file instead of the console. This I achieve using the output redirection symbol as follows:
$ ls -l myfile.txt > test.txt
When you execute the above command, the output is redirected to a file by name test.txt. If the file 'test.txt' does not exist, then it is automatically created and the output of the command 'ls -l' is written to it. This is assuming that there is a file called myfile.txt existing in my current directory.

Now lets see what happens when we execute the same command after deleting the file myfile.txt.

$ rm myfile.txt
$ ls -l myfile.txt > test.txt
ls: myfile.txt: No such file or directory -- ERROR
What happens is that 'ls' does not find the file named myfile.txt and displays an error on the console or terminal. Now here is the fun part. You can also redirect the error generated above to another file instead of displaying on the console by using a combination of error file descriptor and output file redirection symbol as follows:
$ ls -l myfile.txt 2> test.txt
The thing to note in the above command is '2>' which can be read as - redirect the error (2) to the file test.txt.
 
Two open xterms can be used to practice output redirection.

I can give one practical purpose for this error redirection which I use on a regular basis. When I am searching for a file in the whole hard disk as a normal user, I get a lot of errors such as :

find: /file/path: Permission denied
In such situations I use the error redirection to weed out these error messages as follows:
$ find / -iname \* 2> /dev/null
Now all the error messages are redirected to /dev/null device and I get only the actual find results on the screen.

Note: /dev/null is a special file whose size is always zero; whatever you write to it simply disappears. Its opposite is /dev/zero, which acts as an infinite source of zero bytes. You can use /dev/zero, for example, to create a file of any given size, such as when creating a swap file.
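
For instance, a fixed-size file of zero bytes (the usual first step when making a swap file) can be created with dd; the file name and size here are illustrative:

```shell
# Create a 1 MB file filled with zero bytes read from /dev/zero
dd if=/dev/zero of=/tmp/zerofile bs=1024 count=1024 2>/dev/null
wc -c < /tmp/zerofile    # 1048576
rm -f /tmp/zerofile
```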

If you have a line printer connected to your Linux machine, and let's say its device file is /dev/lp0, then you can send any output to the printer using output redirection. For example, to print the contents of a text file, I do the following:

$ cat testfile.txt > /dev/lp0
Input Redirection

You use input redirection using the less-than symbol and it is usually used with a program which accepts user input from the keyboard. A legendary use of input redirection that I have come across is mailing the contents of a text file to another user.

$ mail ravi < mail_contents.txt

I say legendary because now with the advances in GUI, and also availability of good email clients, this method is seldom used.

Suppose you want to find the exact number of lines, number of words and characters respectively in a text file and at the same time you want to write it to another file. This is achieved using a combination of input and output redirection symbols as follows:

$ wc < my_text_file.txt > output_file.txt
What happens above is the contents of the file my_text_file.txt are passed to the command 'wc' whose output is in turn redirected to the file output_file.txt .

Appending data to a file

You can also use the >> symbol instead of output redirection to append data to a file. For example,

$ cat - >> test.txt
... will append whatever you write to the file test.txt.

20 comments:

Anonymous said...
A very good article. Enjoyed it.
I may also add that you can do something like :
ls -l xxx.txt 2>1& another_file
which will redirect both output and any errors to another_file.
Henry
1/07/2006 10:40:00 AM  
Anonymous said...
Excellent. The first time I have ever really understood redirection!
1/07/2006 10:42:00 PM  
bogomipz said...
You mean:
ls -l xxx.txt 2>&1 another_file

What you said will redirect stderr to a file named 1 and try to run another_file as a command. I prefer this simpler syntax to redirect stdout and stderr to a file:

ls -l xxx.txt &> another_file

1/08/2006 02:26:00 AM  
Anonymous said...
Maybe the first example:

$ cat /dev/input/mice

should be changed by:

# cat /dev/input/mice

if for example I had a vulnerability in a web application and someone could just read one of those files from www-data user, and with them, the attacker could read all what I do with my mouse or my keyboard, I would be really scared.

Fortunately, just root can do that

1/08/2006 05:57:00 AM  
Anonymous said...
Aren't you glad that GNU provides all of these core utilities? Welcome to the GNU userland.
1/08/2006 08:50:00 AM  
Anonymous said...
@ anonymous 5:57 AM

$ cat /dev/input/mice

should be changed by:

# cat /dev/input/mice

I think the author has written $ cat...  instead of # cat ...  for a reason. He is just conveying that it is better to play it safe and run those commands as a normal user and not as root.

A nice article. Cleared a lot of my doubts on this topic.

1/08/2006 09:48:00 AM  
Anonymous said...
>I think the author has written $ cat... instead of # cat ... for a reason. He is just conveying that it is better to play it safe and run those commands as a normal user and not as root.

In a sane environment you can't do that. Normal users only will have permissions to write to this device, not to read from it.

1/08/2006 03:23:00 PM  
Anonymous said...
If "everything is a file", then where are the semantics of file operations defined? I'm especially interested in the semantics of 'ioctl'.
1/08/2006 03:26:00 PM  
Anonymous said...
great
this shows, how powerful gnu/linux/unix is
1/08/2006 03:34:00 PM  
Anonymous said...
@ anonymous (3:26 PM)

Semantics is defined at some appropriate driver level.

For example, do a 'ls -l' on the files in the /dev directory, these are special devices with major/minor number pairs which maps to a device driver in kernel space, hence the semantics is left to that driver.

The same can be applied to ordinary files and directories, The VFS layer performs the mapping to the appropriate file system driver for those.

The character far left of an 'ls -l' on a file tells the file type (e.g. '-', 'd', 'b' or 'c' to mention a few)

1/08/2006 03:48:00 PM  
Anonymous said...
I think the author has written $ cat... instead of # cat ... for a reason. He is just conveying that it is better to play it safe and run those commands as a normal user and not as root.

Yeah, run everything with the less privileges you can... but reading /dev/input/mice is something that just root can do it! run those commands as a normal user and you will get a "Permission denied".

In a sane environment you can't do that. Normal users only will have permissions to write to this device, not to read from it.

In a sane environment you also can't write to the device.

Those files represent devices.

Would you imagine that *any user* of the system would have permissions over input devices? I'm thinking in "postfix" user, "www-data" user, and so on.

You would be in almost the same situation as if you were just in front of the computer.

1/10/2006 04:11:00 AM  
Anonymous said...
This should have been labeled 'Simple Input/Output Redirection at the command-line.' A more useful article would have been how to redirect stdin, stdout, stderr within a shell script using 'exec cat'. With a little info about tee thown in for good measure.

/djs

1/10/2006 05:23:00 AM  
Anonymous said...
wc < my_text_file.txt > output_file.txt

can also be written as

wc my_text_file.txt | tee output_file.txt

1/12/2006 03:40:00 PM  
Jason Thompson said...
Great article. I've always wanted to know how to redirect error messages. I'm kicking myself now since it is so easy.
7/11/2006 10:53:00 PM  
Anonymous said...
I appreciate the examples given. They are very clear and easy to understand. It's a great article indeed.
9/15/2006 10:00:00 AM  
Anonymous said...
Nice article, but let's dig deeper.
For example, you woud like to change your password.
You type:

$ passwd

That will ask for passwords.
Now how do you realize a redirection in this case, if you want to input these passwords from a file?

10/13/2006 01:58:00 PM  
Anonymous said...
Very good,
Thank you!
4/13/2007 12:35:00 PM  
Cabellos JL said...
good ... but I would like to
redirect the output of command "dsh"
dsh -a ps -u $USER > file
it does not work !!!!
how can i do this?
Thank you!
Cabellos JL
9/21/2007 09:59:00 AM  
Anonymous said...
dsh -a ps -u $USER > file

probably because 'dsh' is a shell that's accepting a command as an argument. in this case it thinks you're passing it

ps -u $USER > file

in order to redirect the output of that command to the file you'd need to do:

dsh -a "ps -u $USER" > file

4/29/2008 02:00:00 AM  
sudharsh said...
well, to ensure that the terminal is not scrambled, you could use hexdump

$ hexdump /dev/input/mice

prints a viewable dump..od would work as well.

KSH - Redirection and Pipes

An uncommon program to use for this example is the "fuser" program under solaris. it gives you a long listing of what processes are using a particular file. For example:

$ fuser /bin/sh
/bin/sh:    13067tm   21262tm
If you wanted to see just the processes using that file, you might initially groan and wonder how best to parse it with awk or something. However, fuser actually splits up the data for you already. It puts the stuff you may not care about on stderr, and the meaty 'data' on stdout. So if you throw away stderr, with the '2>' special redirect, you get
$ fuser /bin/sh  2>/dev/null
    13067   21262
which is then trivially usable.

Unfortunately, not all programs are that straightforward :-) However, it is good to be aware of these things, and also of status returns. The 'grep' command actually returns a status based on whether it found a line. The status of the last command is stored in the '$?' variable. So if all you care about is, "is 'biggles' in /etc/hosts?" you can do the following:

grep biggles /etc/hosts >/dev/null
if [[ $? -eq 0 ]] ; then
	echo YES
else
	echo NO
fi
As usual, there are lots of other ways to accomplish this task, even using the same 'grep' command. However, this method has the advantage that it does not waste OS cycles with a temp file, nor does it waste memory with a potentially very long variable.
(If you were looking for something that could potentially match hundreds of lines, then var=`grep something /file/name` could get very long)
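
The same test is often written more compactly by letting 'if' consume grep's exit status directly; the POSIX -q flag suppresses output, so the /dev/null redirect becomes unnecessary:

```shell
#!/bin/sh
# Equivalent, more idiomatic form of the status-return test
if grep -q biggles /etc/hosts; then
    echo YES
else
    echo NO
fi
```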

Inline redirection

You have seen redirection TO a file. But you can also redirect input FROM a file. For programs that can take data on stdin, this is useful. The 'wc' command can take a filename as an argument, or use stdin. So all of the following are roughly equivalent in result, although internally different things happen:
wc -l /etc/hosts
wc -l < /etc/hosts
cat /etc/hosts | wc -l

Additionally, if there are some fixed lines you want to use, and you do not want to bother making a temporary file, you can pretend part of your script is a separate file. This is done with the special '<<' redirect operator.

command << EOF
means, "run 'command', but make its stdin come from this file right here, until you see the string 'EOF'"

EOF is the traditional string. But you can actually use any unique string you want. Additionally, you can use variable expansion in this section!

DATE=`date`
HOST=`uname -n`
mailx -s 'long warning' root << EOF
Something went horribly wrong with system $HOST
at $DATE
EOF

Pipes

In case you missed it before, pipes take the output of one command, and put it on the input of another command. You can actually string these together, as seen here;
grep hostspec /etc/hosts | awk '{print $1}' | grep '^10\.1\.' | wc -l

This is a fairly easy way to count the entries in /etc/hosts that both match a particular pattern in their name AND fall in a particular IP address range.

The "disadvantage" to this, is that it is very wasteful. Whenever you use more than one pipe at a time, you should wonder if there is a better way to do it. And indeed for this case, there most certainly IS a better way:

grep '^10\.1\..*hostspec' /etc/hosts | wc -l

There is actually a way to do this with a single awk command. But this is not a lesson on how to use AWK!

Combining pipes and redirection

An interesting example of pipes with stdin/err and redirection is the "tar" command. If you use "tar cvf file.tar dirname", it will create a tar file, and print out all the names of the files in dirname it is putting in the tarfile. It is also possible to take the same 'tar' data, and dump it to stdout. This is useful if you want to compress at the same time you are archiving:
tar cf - dirname | compress > file.tar.Z
But it is important to note that pipes by default only take the data on stdout! So it is possible to get an interactive view of the process, by using
tar cvf - dirname | compress > file.tar.Z
stdout has been redirected to the pipe, but stderr is still being displayed to your terminal, so you will get a file-by-file progress report. Or of course, you could redirect it somewhere else, with
tar cvf - dirname 2>/tmp/tarfile.list | compress > file.tar.Z 

Indirect redirection

Additionally, there is a special type of pipes+redirection, known as process substitution. It only works on systems with /dev/fd/X support. You can automatically generate a "fake" file as the result of a command that does not normally generate a file. The names of the fake files will be /dev/fd/{some number here}.

Here's an example that doesn't do anything useful:

wc -l <(echo one line) <(echo another line)
wc will report that it saw two files, "/dev/fd/4", and "/dev/fd/5", and each "file" had 1 line each. From its own perspective, wc was called simply as
wc -l /dev/fd/4 /dev/fd/5

There are two useful components to this:

  1. You can handle MULTIPLE commands' output at once
  2. It's a quick-n-dirty way to create a pipeline out of a command that "requires" a filename (as long as it only processes its input in a single continuous stream).
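
As a more practical sketch of point 1: diff insists on two file arguments, but process substitution lets it compare the output of two commands directly (bash, on a system with /dev/fd support):

```shell
#!/bin/bash
# Compare two command outputs without creating temporary files
diff <(printf 'b\na\n' | sort) <(printf 'a\nb\n') \
    && echo "outputs are identical"
```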

 

Solaris Advanced System Administrator's Guide, Second Edition: Writing Shell Scripts

Standard In, Standard Out, and Standard Error

When writing shell scripts, you can control input/output redirection. Input redirection is the ability to force a command to read any necessary input from a file instead of from the keyboard. Output redirection is the ability to send the output from a command into a file or pipe instead of to the screen.

Each process created by a shell script begins with three file descriptors associated with it, as shown in Figure 16-1.

These file descriptors—standard input, standard output, and standard error—determine where input to the process comes from, and where the output and error messages are sent.

Standard input (STDIN) is always file descriptor 0. Standard input is the place where the shell looks for its input data. Usually data for standard input comes from the keyboard. You can specify standard input to come from another source using input/output redirection.

Standard output (STDOUT) is always file descriptor 1. Standard output (default) is the place where the results of the execution of the program are sent. Usually, the results of program execution are displayed on the terminal screen. You can redirect standard output to a file, or suppress it completely by redirecting it to /dev/null.

Standard error (STDERR) is always file descriptor 2. Standard error is the place where error messages are sent as they are generated during command processing. Usually, error messages are displayed on the terminal screen. You can redirect standard error to a file, or suppress it completely by redirecting it to /dev/null.

You can use the file descriptor numbers 0 (standard input), 1 (standard output), and 2 (standard error) together with the redirection metacharacters to control input and output in the Bourne and Korn shells. Table 16-7 shows the common ways you can redirect file descriptors.

Table 16-7  Bourne and Korn Shell Redirection

Description                                    Command
Take STDIN from file                           < file, or 0< file
Redirect STDOUT to file                        > file, or 1> file
Redirect STDERR to file                        2> file
Append STDOUT to end of file                   >> file
Redirect STDERR to STDOUT                      2>&1
Pipe standard output of cmd1 as standard       cmd1 | cmd2
input to cmd2
Use file as both STDIN and STDOUT              <> file
Close STDIN                                    <&-
Close STDOUT                                   >&-
Close STDERR                                   2>&-

When redirecting STDIN and STDOUT in the Bourne and Korn shells, you can omit the file descriptors 0 and 1 from the redirection symbols. You must, however, always write the file descriptor 2 explicitly when redirecting STDERR.
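The fd-number forms above can be exercised with a short Bourne-compatible sketch (the /tmp file names are arbitrary examples):

```shell
# 'echo ... >&2' sends a line to fd 2 (stderr) instead of fd 1 (stdout).
# Merge both streams into one file with '2>&1'; order matters:
# stdout must be redirected first, then stderr pointed at it.
{ echo "to stdout"; echo "to stderr" >&2; } > /tmp/both.txt 2>&1

# Split the streams into separate files instead.
{ echo "to stdout"; echo "to stderr" >&2; } > /tmp/out.txt 2> /tmp/err.txt

wc -l < /tmp/both.txt    # 2: both lines landed in the same file
cat /tmp/err.txt         # only the stderr line
```

Reversing the order to `2>&1 > /tmp/both.txt` would point stderr at the terminal (where stdout was at that moment) and only then move stdout, which is a classic mistake.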

The 0 and 1 file descriptors are implied and not used explicitly in the C shell, as shown in Table 16-8. The C shell represents standard error (2) with an ampersand (&) appended to the redirection operator, and STDERR can be redirected only together with STDOUT.

Table 16-8  C Shell Redirection Metacharacters

Description                                    Command
Redirect STDOUT to file                        > file
Take input from file                           < file
Append STDOUT to end of file                   >> file
Redirect STDOUT and STDERR to file             >& file
Append STDOUT and STDERR to file               >>& file

 

Tips For Linux - Input-Output Redirection in Unix

Redirection is one of Unix's strongest points. Ramnick explains the concept in this article: he covers input and output redirection and cites many simple, useful ways to put redirection to good use.


Introduction

For those of you who have no idea what redirection means, let me explain it in a few words. Whenever you run a program, you get some output at the shell prompt. If you don't want that output to appear in the shell window, you can redirect it elsewhere: you can make the output go into a file, send it directly to the printer, or make it disappear altogether :)

This is known as redirection. Not only can the output of programs be redirected; you can also redirect the input of programs. I shall explain all this in detail in this article. Let's begin.


File Descriptors

One important thing you have to know to understand redirection is file descriptors. In Unix, every open file has a number associated with it, called a file descriptor. And in Unix everything is a file: from the devices connected to your machine to the ordinary text files storing information, all of these are treated as files by the operating system.
 

Similarly, the screen on which your programs display their output is a file to Unix, and it too has a file descriptor associated with it. So when a program executes, it sends its output to a file descriptor; since that particular descriptor happens to point to the screen, the output is displayed on the screen. Had it been the descriptor of the printer, the output would have been printed instead. (There are of course other factors at play, but the idea is that everything is a file, and you send whatever you want to particular file descriptors.)

Whenever a program is executed (i.e. when the user types a command), it has three important files to work with: standard input, standard output, and standard error. These three files are always open while the program runs. You can consider them inherently present for every program (for the technically inclined: when a child process is forked from a parent process, it inherits these three open file descriptors). For the rest, just remember that these three files are available whenever you type a command at the prompt. As explained before, a file descriptor is associated with each of them:

File Descriptor   Points to
0                 Standard input (generally the keyboard)
1                 Standard output (generally the display/screen)
2                 Standard error output (generally the display/screen)

You can redirect any of these files to other files. In short, if you redirect 1 (standard output) to the printer, your program's output would be printed instead of being displayed on the screen.

What is the standard input? That would be your keyboard. Most of the time, since you enter commands with your keyboard, you can consider 0 to be the keyboard. Since you get the output of your command on the screen, 1 would be the screen (display), and since errors are also shown on the screen, 2 would be the screen as well.

For those of you who like to think ahead of what is being discussed: you must already have realized that you can now avoid all those irritating, irrelevant error messages you often get while executing some programs. Just redirect the standard error (2) to some file and avoid seeing the error messages on the screen!


Output Redirection

The most common use of redirection is to send the output that normally goes to the terminal to a file instead. This is known as output redirection. It is generally used when a command produces a lot of output that scrolls past rapidly; you can capture all of it in a file and then transfer that file elsewhere or mail it to someone.

The way to redirect the output is by using the ' > ' operator in the shell command you enter. This is shown below. The ' > ' symbol is known as the output redirection operator. Any command that prints its results to the screen can have its output sent to a file instead.

$ ls > listing

The ' ls ' command would normally give you a directory listing. Since you have the ' >  ' operator after the ' ls ' command, redirection would take place. What follows the ' >  ' tells Unix where to redirect the output. In our case it would create a file named ' listing ' and write the directory listing in that file. You could view this file using any text editor or by using the cat command.

Note:  If the file mentioned already exists, it is overwritten. So care should be taken to enter a proper name. In case you want to append to an existing file, then instead of the ' > ' operator you should use the ' >>  ' operator. This would append to the file if it already exists, else it would create a new file by that name and then add the output to that newly created file.



Input Redirection

Input redirection is not as popular as output redirection, since most of the time you expect input to be typed at the keyboard. But when used effectively, input redirection can be of great use. It is generally employed when you already have a file ready and would like to run some command on it.

You can use input redirection by typing the ' < ' operator. An example of input redirection is shown below.

$ mail cousin < my_typed_letter

The above command would start the mail program with contents of the file named ' my_typed_letter ' as the input since the Input Redirection operator was used.

Note:  You can't use input redirection with every program/command. Only commands that read their input from the keyboard (standard input) can be redirected to read from a file instead. Similarly, output redirection is useful only when the program sends its output to the terminal; redirecting the output of a program that runs under X would be of no use.
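A quick sketch with a pure filter: tr reads only from standard input, so input redirection is the natural way to feed it a file (the file name here is made up for illustration).

```shell
# Create a sample file to act as the input.
printf 'hello\n' > /tmp/letter.txt

# tr has no way to open a file by name; '<' supplies the file as stdin.
tr 'a-z' 'A-Z' < /tmp/letter.txt    # prints HELLO
```

This is exactly the situation the paragraph above describes: the command expects its data on stdin, and redirection substitutes a file for the keyboard.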

Error Redirection

This is a very popular feature that many Unix users are happy to learn. If you have worked with Unix for some time, you must have realised that many commands produce a lot of error messages that you are not really bothered about. For example, whenever I search for a file, I get a lot of 'permission denied' error messages. There may be ways to fix those, but the simplest is to redirect the error messages elsewhere so that they don't bother me, since the errors I get while searching for files are of no use to me.

Here is a way to redirect the error messages:

$ myprogram 2>errorsfile

The above command executes a program named ' myprogram '; whatever errors are generated while it runs are written to a file named ' errorsfile ' rather than displayed on the screen. Remember that 2 is the file descriptor of standard error, so ' 2> ' means redirect the error output.

$ myprogram 2>>all_errors_till_now

The above command is useful if you have been saving all the error messages for some later use: this time the error messages are appended to the file rather than creating a new one.

In the above case I wasn't interested in the error messages, yet redirecting them to a file means I would have to delete that file every time I run the command; otherwise I would have such files created all over the place. An excellent way around this is shown below.

$ find / -name s*.jpg 2>/dev/null

What's /dev/null? It's something like a black hole: whatever is sent to ' /dev/null ' never returns, and nobody knows where it goes. It simply disappears. Isn't that fantastic? So remember: whenever you want to get rid of something you don't want, just send it to /dev/null.

Isn't Unix wonderful!

 

Different ways to use Redirection Operators

Suppose you want to create a text file quickly

$ cat > filename
This is some text that I want in this file
^D

That's it! Type the ' cat ' command, use the redirection operator, and add a name for the file. Then start typing your lines, and finally press Ctrl+D. You will have a file named ' filename ' in the current directory.

Suppose you want to add a single line to an existing file.

$ echo "this is a new line" >> existing_file

That would add the new line to the file named ' existing_file ' . Remember to use ' >> ' instead of ' >  ' else you would overwrite the file.

Suppose you wanted to join 2 files

$ cat file2 >> file1

That's a much neater way than opening a text editor and copy-pasting. The contents of ' file2 ' would be added to the end of ' file1 '.

Suppose you want to join a couple of files

$ cat file1 file2 > file3

This would concatenate the contents of ' file1 ' and ' file2 ' and write them into a new file named ' file3 '.


Redirection works with many commands besides common ones such as ' cat ' or ' ls '. For example, when programming in any language you could redirect the compiler's messages to a file so that you can view them later. There are lots of commands where you can use redirection; the more you use Unix, the more you will come across.

About the Author - Ramnick G currently works for Realtech Systems based in Brazil. He has been passionate about Linux since early 90s and has been developing on Linux machines for the last couple of years. When he finds some free time, he prefers to spend it listening to Yanni.

 

Tuesday Tiny Techie Tips

Tuesday Tiny Techie Tip -- 15 April 1997 Written by Jeff Youngstrom

Redirect stderr to a file 
$ ls 2> file
This redirects just the stderr output (associated with fd 2) to the file; stdout is unchanged.
 
Redirect both stdout and stderr to a file 
$ ls > file 2>&1
First, "> file" indicates that stdout should be sent to the file; then "2>&1" indicates that stderr (fd 2) should be sent to the same place as stdout (fd 1).

To append to the file, only the stdout redirection must change since stderr is just hitching a lift on whatever stdout is doing.

$ ls >> file 2>&1
Redirect stdout to one file and stderr to another 
$ ls > file 2> file2
Pipe one process' stdout and stderr to another's stdin 
$ ls 2>&1 | wc
Here we combine stderr onto the stdout stream, then use "|" to pipe the result to the next process.
 
Combinations 
$ sed 's/^#//' < file 2> sederr | \
	wc -l 2> wcerr | \
	awk '{print $NF}' > final 2> awkerr
Here I'm saving the error output from each command in the pipeline to a separate file ("sederr", "wcerr", "awkerr"), while letting stdout go straight through the pipe into the file "final". Input to sed(1) at the beginning of the pipe is redirected from the file "file".

Up to the TTTT index

Tuesday Tiny Techie Tips are all © Copyright 1996-1997 by Jeff Youngstrom.

Recommended Links


Linux I-O Redirection

Bourne Shell Scripting-Redirection - Wikibooks, collection of open-content textbooks

Input and output redirection in the Korn shell or POSIX shell

tee (Unix) - Wikipedia, the free encyclopedia




Etc

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in our efforts to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a 'fair use' of any such copyrighted material as provided for in section 107 of the US Copyright Law. In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit exclusively for research and educational purposes. If you wish to use copyrighted material from this site for purposes of your own that go beyond 'fair use', you must obtain permission from the copyright owner.

Copyright © 1996-2014 by Dr. Nikolai Bezroukov. www.softpanorama.org was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belongs to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.




Last modified: October 04, 2014