Softpanorama

May the source be with you, but remember the KISS principle ;-)
(slightly skeptical) Educational society promoting "Back to basics" movement against IT overcomplexity and bastardization of classic Unix

Shellorama 2007




Old News ;-)

[Oct 31, 2007] freshmeat.net: Project details for Bash Debugger

Version 3.1-0.09
The Bash Debugger (bashdb) is a debugger for Bash scripts. The debugger command interface is modeled on the gdb command interface. Front-ends supporting bashdb include GNU-Emacs and ddd. In the past, the project has been used as a springboard for other experimental features such as a timestamped history file (now in Bash versions after 3.0).
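Invocation is modeled on gdb as well. A minimal illustrative session (the script name, breakpoint line, and variable are hypothetical, and prompt details vary between bashdb versions):

$ bashdb ./myscript.sh arg1
bashdb<0> break 15        # set a breakpoint at line 15
bashdb<1> continue        # run until the breakpoint is hit
bashdb<2> step            # single-step, gdb style
bashdb<3> print $myvar    # inspect a variable
bashdb<4> quit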

Release focus: Minor bugfixes

Changes:
This release contains bugfixes accumulated over the year and works on bash version 3.2 as well as version 3.1.

UNIX System Administration Tools

chrec
Records system changes to flat text log files. (Bourne shell)
View the README
Download version 2.2 - gzipped tarball, 5 KB
Last update: June 2007

[Oct 27, 2007] System Administration Toolkit: Standardizing your UNIX command-line tools

Freelance Writer, Consultant

22 Aug 2006

Examine methods for standardizing your interface to simplify moving between different UNIX® systems. If you manage multiple UNIX systems, particularly in a heterogeneous environment, the hardest task can be switching between environments and performing tasks while keeping all of the differences between the systems in mind. This article does not cover the specific differences; instead, it looks at ways of providing compatible layers, or wrappers, to support a consistent environment.

[Oct 3, 2007] freshmeat.net: Project details for rr

rr is a basic command-line utility designed to retain and recall file and directory paths. It does this by treating the filename itself as a unique key to be referenced in future rr invocations. The purpose is to save the user typing and spare them from remembering arbitrary full paths. So, for example, "/etc/httpd/conf/httpd.conf" can be referenced as "//httpd.conf" in daily operations.

Release focus: Initial freshmeat announcement

[Sep 27, 2007] freshmeat.net: Project details for Closebracket

This is essentially a reinvention and re-implementation of the OFM context-sensitive linkage of file extensions to commands (via the ext file), with the additional twist that if there is no extension, the file type (for example, as discovered by the file command) is used instead.

Closebracket lets you define multiple shell actions in a single command to speed up the typing of the most repetitive shell commands. It includes ']' and '][' commands, which are located near the "Enter" key and are easy to type quickly. They invoke primary and secondary actions respectively.

Linux tip: Controlling the duration of scheduled jobs

Say you need to debug a pesky problem by running some traces for 30 minutes at midnight, or you would just like to use your Linux system as an alarm clock. This tip helps you stop jobs, such as those started with the cron and at capabilities, after the jobs have run for a certain time, or when some other criteria are met.

... ... ...

Listing 2 shows an enhanced runclock2.sh script that captures some information about the process ids of the shell and the xclock processes, along with the output of the script and the output of the ps command, showing the process status for xclock, after the shell completes.


Listing 2. Gathering diagnostic information (runclock2.sh)

[ian@attic4 ~]$ cat runclock2.sh
#!/bin/bash
runtime=${1:-10m}
mypid=$$
# Run xclock in background
xclock&
clockpid=$!
echo "My PID=$mypid. Clock's PID=$clockpid"
ps -f $clockpid
#Sleep for the specified time.
sleep $runtime
echo "All done"
[ian@attic4 ~]$ ./runclock2.sh 10s
My PID=8619. Clock's PID=8620
UID        PID  PPID  C STIME TTY      STAT   TIME CMD
ian       8620  8619  0 19:57 pts/1    S+     0:00 xclock
All done
[ian@attic4 ~]$ ps -f 8620
UID        PID  PPID  C STIME TTY      STAT   TIME CMD
ian       8620     1  0 19:57 pts/1    S      0:00 xclock

Notice that the parent process id (PPID) in the first output of ps is 8619, which is the process id (PID) of the script. Once the script terminates, the clock process becomes an orphan and is reassigned as a child of the init process (process 1). The child does not terminate immediately when its parent terminates, although it will terminate when you log out of the system.

Terminating a child process

The solution to the problem of non-terminating child processes is to explicitly terminate them using the kill command. This sends a signal to a process, and that usually terminates the process. Later in this tip you see how a process can trap signals and not terminate, but here you use the interrupt signal (SIGINT) to terminate the clock.
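Applied to the clock example, it is enough to send the signal before the script exits. A minimal sketch of a modified script body (based on runclock2.sh above, with the diagnostic output trimmed):

#!/bin/bash
runtime=${1:-10m}
# Run xclock in background and remember its PID
xclock&
clockpid=$!
# Sleep for the specified time, then explicitly terminate the child
# so that it does not outlive the script as an orphan
sleep $runtime
kill -s SIGINT $clockpid
echo "All done"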

To see a list of signals available on your system, use the kill command with the -l option, as shown in Listing 3. Note that some signals are common to all Linux systems, but some may be specific to the particular machine architecture. Some, such as floating point exception (SIGFPE) or segmentation violation (SIGSEGV), are generated by the system, while others, such as interrupt (SIGINT), user signals (SIGUSR1 or SIGUSR2), or unconditional terminate (SIGKILL), can be sent by applications.
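As a preview of the trapping behavior mentioned above, a script can catch SIGINT and clean up instead of being terminated outright. A minimal sketch (the cleanup action is illustrative):

#!/bin/bash
# Catch SIGINT and exit gracefully instead of dying mid-task
cleanup() {
    echo "Caught SIGINT; cleaning up"
    exit 1
}
trap cleanup INT
sleep 60    # press Ctrl-C, or run: kill -s SIGINT <pid of this script>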

[Aug 12, 2007] BigAdmin Submitted Tech Tip: Automatically Saving Script Output Using the nohup Utility, by István Bátori

Can simplify logging for shell scripts

... Scripts can have their own logging functions, but the output also contains important information. Saving the output is typically solved by output redirection, for example:

nohup mynohupscript.sh >/var/log/myscript.log 2>&1

How do you feel about typing such a long command? Isn't it tedious? It's a lot to type, and it's confusing to read such long, uninteresting ballast in an admin document, especially if the script takes many arguments and options and the log file name contains the start date.

Copy the nohup.txt file, changing the ".txt" suffix to ".sh", and save it into the /opt/bin directory. Then give it execute rights:

cp nohup.sh /opt/bin
chmod 755 /opt/bin/nohup.sh
chown root:admin /opt/bin/nohup.sh

Usage

There are two different ways to use the wrapper script:

1. Start the wrapper script directly from the command line:

/opt/bin/nohup.sh <command> [<args...>]

For example:

/opt/bin/nohup.sh foo.sh

2. Put the following into a script as the first line:

#!/usr/bin/ksh /opt/bin/nohup.sh

For example:

cat >/tmp/nohup_test.sh <<'EOF'    # quote EOF so the $ expansions below are written literally
#!/usr/bin/ksh /opt/bin/nohup.sh
# example script to demonstrate nohup.sh usage
default_cycles=3
default_sleeping=1
cycles=${1:-$default_cycles}
sleeping=${2:-$default_sleeping}
i=0
while [[ $i -lt $cycles ]]
do
  echo "$(date): $i"
  sleep $sleeping
  i=$((i+1))
done
EOF
chmod +x /tmp/nohup_test.sh    # make the test script executable

Executing nohup_test.sh automatically calls nohup.sh, which starts the script with the nohup utility and redirects the script output into /var/tmp/nohup_test_<date>.log.
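The nohup.sh wrapper itself is not reproduced in this excerpt, but the behavior described above can be sketched roughly as follows (a hypothetical reconstruction, not the author's actual script):

#!/usr/bin/ksh
# Rough sketch of a nohup wrapper. The target is run via ksh explicitly,
# so a she-bang line pointing back at this wrapper does not recurse.
script="$1"; shift
name="$(basename "$script" .sh)"
logfile="/var/tmp/${name}_$(date +%Y%m%d%H%M%S).log"
nohup /usr/bin/ksh "$script" "$@" >"$logfile" 2>&1 &
print "Started $script; output goes to $logfile"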

Configuration

Both usage methods have the same functionality, which can be configured through shell variables, as follows:

... ... ...

[Aug 10, 2007] Speaking UNIX, Part 6: Automate, automate, automate!

Level: Intermediate

Martin Streicher ([email protected]), Editor-in-Chief, Linux Magazine

03 Jan 2007

Discover how shell scripts can mechanize virtually any personal or system task. Scripts can monitor, archive, update, report, upload, and download. Indeed, no job is too small or too great for a script. Here's an introduction.

If you peer over a longtime UNIX® user's shoulder while he or she works, you might be mesmerized by the strange incantations being murmured at the command line. If you've read any of the previous articles in the Speaking UNIX series (see Resources), at least some of the mystical runes being typed -- such as tilde (~), pipe (|), variables, and redirection (< and >) -- will look familiar. You might also recognize certain UNIX command names and combinations, or realize when an alias is being used as a sorcerer's shorthand.

Still, other command-line conjurations might elude you, because it's typical for an experienced UNIX user to amass a large arsenal of small, highly specialized spells in the form of shell scripts to simplify or automate oft-repeated tasks. Rather than type and re-type a (potentially) complex series of commands to accomplish a chore, a shell script mechanizes the work.

[Jun 27, 2007] BigAdmin Submitted Article: A Script Template and Useful Techniques for ksh Scripts, by Bernd Schemmer

This article discusses a script template for ksh scripts. I use this script template for nearly all the scripts I write for doing day-to-day work. I'm pretty sure that every system administrator who is responsible for more than a few machines running the Solaris Operating System has her own bag of scripts for maintaining the machines. Nevertheless, the script template and the programming techniques discussed in this article might be useful for them also.

The script template is released under the Common Development and Distribution License, Version 1.0; a link to download the script is at the end of this article.

[Jun 25, 2007] Korn shell exec, read and miscellaneous commands

exec may be used to execute a script, but the user is logged out when exec is invoked in the parent (login) shell, because the exec'ed script replaces that shell.

The read command reads input from the terminal into one or more variables; it can also read piped input or a file redirected with exec, as summarized in the following example.

> cat kread
#!/bin/ksh
#-----------kread: read data with Korn shell---------------------
#
echo Proc $0: read command in Korn shell
echo
print "type a name> \c"                              # read var at promp
read name
print "typed name is: " $name
print "\npiped read example:"
print "apr may jun" | read a b c                     # pipe args
print arg1 from pipe is $a
print arg2 from pipe is $b
print arg3 from pipe is $c
print "\nread/write lines with exec redirection\n"
exec 0<$1                                            # redirect i/o
while read LINE
do
   print $LINE
done
#
#----------end script------------------

> kread
Proc kread: read command in Korn shell

type a name> any
typed name is:  any

piped read example:
arg1 from pipe is apr
arg2 from pipe is may
arg3 from pipe is jun

read/write lines with exec redirection

line 1
line 1
<ctrl>C
>
Korn Shell Read Options

-p     read a line from the co-process
-r     do not treat \ as a line continuation
-s     save input in the history file
-un    read from file descriptor n
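The -p option pairs with a co-process started using |&. For example (the date output is illustrative):

> date |&              # start date as a co-process
> read -p today        # read one line from the co-process
> print $today
Mon Oct 29 10:15:00 EDT 2007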

fortune data for Korn Shell 93 - Mt Xia Technical Consulting Group

fortunes data file for Korn Shell 93
Version: 1.1
Provided by:

Dana French
Mt Xia Technical Consulting Group
113 East Rich
Norman, OK 73069
405.329.6578
[email protected]

Specializing in Business Continuity, Disaster Recovery, AIX and HACMP

%
Korn Shell 93 - Command Line Arguments

- Ends option processing

%
Korn Shell 93 - Command Line Arguments

-c cmd execute cmd (default reads command from file
named in first entry of args via path search)

%
Korn Shell 93 - Command Line Arguments

-D print all double-quoted strings that are preceded
by a $. Such strings are subject to translation
in the current locale

%
Korn Shell 93 - Command Line Arguments

-i set interactive mode (default)

... ... ...

[Jun 25, 2007] Useful Shell Scripting Variables - Part III - IFS (Internal Field Separator)

October 13, 2003

... The shell uses the value stored in IFS, which is the space, tab, and newline characters by default, to delimit words for the read and set commands, when parsing output from command substitution, and when performing variable substitution.

IFS can be redefined to parse one or more lines of data whose fields are not delimited by the default white-space characters. Consider this sequence of variable assignments and for loops:

$ line=learn:unix:at:livefire:labs
... ... ...
$ OIFS=$IFS
$ IFS=:
$ for i in $line; do echo $i; done
learn
unix
at
livefire
labs
$

The first command assigns the string "learn:unix:at:livefire:labs" to the variable named line. You can see from the first for loop that the shell treats the entire string as a single field. This is because the string does not contain a space, tab, or newline character.

After redefining IFS, the second for loop treats the string as five separate fields, each delimited by a colon. Using a colon for IFS would be appropriate when parsing the fields in a record from /etc/passwd, the user account information file:

livefire:x:100:1::/export/home/livefire:/bin/ksh
Notice that the original value of IFS was stored in OIFS ("O" for original) prior to changing its value. After you are finished using the new definition, it would be wise to return it to its original value to avoid unexpected side effects that may surface later on in your script.
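The same splitting can also be done without touching the global IFS at all, by scoping the assignment to the read command. A minimal sketch (the field names are illustrative):

$ echo "livefire:x:100:1::/export/home/livefire:/bin/ksh" |
>   while IFS=: read user pw uid gid gecos home shell
>   do
>     echo "user=$user home=$home shell=$shell"
>   done
user=livefire home=/export/home/livefire shell=/bin/ksh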

TIP – The current value of IFS may be viewed using the following pipeline:

			$ echo "$IFS" | od -b
0000000 040 011 012 012
0000004
$

The output of the echo command is piped into the octal dump command, giving you its octal equivalent. You can then use an ASCII table to determine what characters are stored in the variable. Hint: Ignore the first set of zeros and the second newline character (012), which was generated by echo.

System Administration Toolkit: Backing up key information

A couple of ideas on the best way to preserve key config files.

Most UNIX(R) administrators have processes in place to back up the data and information on their UNIX machines, but what about the configuration files and other elements that provide the configuration data your machines need to operate? This article provides detailed information on techniques for achieving an effective and efficient backup system for these key files.

[Jun 10, 2007] System Administration Toolkit: Standardizing your UNIX command-line tools

Several interesting and simple functions for your .kshrc or .bashrc.

Examine methods for standardizing your interface to simplify moving between different UNIX(R) systems. If you manage multiple UNIX systems, particularly in a heterogeneous environment, the hardest task can be switching between environments and performing tasks while keeping all of the differences between the systems in mind. This article does not cover the specific differences; instead, it looks at ways of providing compatible layers, or wrappers, to support a consistent environment.

[Jun 10, 2007] System Administration Toolkit: Get the most out of bash

Good paper!

Ease your system administration tasks by taking advantage of key parts of the Bourne-again shell (bash) and its features. Bash is a popular alternative to the original Bourne and Korn shells. It provides an impressive range of additional functionality that includes improvements to the scripting environment, extensive aliasing techniques, and improved methods for automatically completing different commands, files, and paths.

!!! [Jun 10, 2007] Linux tip: Bash parameters and parameter expansions by Ian Shields

Definitely a gifted author!

Do you sometimes wonder how to use parameters with your scripts, and how to pass them to internal functions or other scripts? Do you need to do simple validity tests on parameters or options, or perform simple extraction and replacement operations on the parameter strings? This tip helps you with parameter use and the various parameter expansions available in the bash shell.
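As a taste of what the tip covers, here are a few illustrative expansions (the path is an arbitrary example):

[ian@attic4 ~]$ file=/var/log/messages.1
[ian@attic4 ~]$ echo ${file##*/}       # strip longest */ match from the front
messages.1
[ian@attic4 ~]$ echo ${file%.*}        # strip shortest .* match from the end
/var/log/messages
[ian@attic4 ~]$ echo ${file/log/tmp}   # replace the first occurrence of log
/var/tmp/messages.1
[ian@attic4 ~]$ echo ${#file}          # length of the value
19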

[Mar 24, 2007] LWN Fish - The friendly interactive shell

Default Settings

zsh provides command-specific tab completions, a history file, tab completion of strings with wild cards and many, many other advanced functions. But none of them are turned on by default. In fact, a user who starts zsh for the first time would think it was a small improvement over the original Bourne shell. bash does better here, but features like command specific tab completions are turned off by default in bash as well, and the default history settings are not very useful.

It is quite possible that a few of my complaints against bash and zsh could be configured away; any such omission is not deliberate on my part. But even though I have been an avid shell user for nearly a decade, I keep discovering useful features that are poorly documented, turned off by default, or implemented in a less useful way.

Fish does not hide its functionality. The design philosophy of fish is to focus more on making things work right and less on making things configurable. As a result of this, there are very few settings in fish.

Context Sensitive, User Friendly Help Pages

While the man pages give you a decent amount of information on how to use specific commands, the documentation for the shell and its built-in commands is often hard to use. The bash man page is nearly legendary for how hard it is to get the information you want. fish tries to provide context sensitive documentation in an easy to use form.

To access the fish help, one should use the 'help' command. Simply writing 'help' and pressing return will start the user's preferred web browser with the local copy of the fish manual. There are a large number of topics that can be specified with the help command, like 'help syntax', 'help editor', etc. These open up a chapter of the documentation on the specified topic. In order to make the help system easy to find, a message describing how to access it is printed every time fish starts up. Finding a specific help section is easy since the section names can be tab completed.

Built-in commands in fish support the -h and --help options, which result in a detailed explanation of how that command works. The only exception to this are the commands that start a new block of code, such as 'for', 'if', 'while', 'function', etc. To get help on one of these commands, type the command without any options.

Error reporting is an often-overlooked form of help. On syntax errors, fish tries to give a detailed report on what went wrong, and if possible, also prints a help message.

Desktop Integration

Since most users access the shell from inside a virtual terminal in a graphical desktop, the shell should attempt to integrate with the desktop. fish uses the X clipboard for copy and paste, so you can use Control-Y to paste from the clipboard to fish, and Control-K to move the rest of the line to the clipboard.

Opening Files

In a graphical file manager it is usually easy to open a document or an image. You simply double-click it and it is launched with a default application. From the shell, this is much more difficult. You need to know which program can handle a file of the given type, and also how to launch it. Launching an HTML file from the command line is no easy task, since most browsers expect a URL, possibly with an absolute path, not a filename. fish features a program called open that uses the same mime-type database and .desktop files used by Gnome and KDE to find the default application for a file, and opens it using the syntax specified by the .desktop file.

A Better Shell Syntax

While shells have gained some features since the seventies, the shell syntax of modern POSIX shells like bash and zsh is very similar to the original Bourne shell, which is about 30 years old. There are a large number of problems with this syntax which I feel should be changed. Unfortunately, this means that the fish syntax is incompatible with other shells. While it should not be difficult to adapt to the new syntax, old scripts would have to be converted to run in fish.

Blocks

There are many cases in shell programming where you specify a list of multiple commands. This includes conditional blocks, loop blocks and function definitions. In regular shells there is very little logic in how these different types of blocks are ended. Conditional statements end with the reverse of the opening keyword, like 'if true; then echo yes; fi', but loops end with the 'done' command, like 'while true; do echo hello; done'; individual case conditions end with ';;' and functions end with '}'. Arbitrary reserved words like 'then' and 'do' are also sprinkled over the code. fish uses a single, consistent method of closing blocks: the 'end' command. For a few examples of block syntax in POSIX shell and in fish, see the table below.

POSIX command                          fish command
if true; then echo hello; fi           if true; echo hello; end
for i in a b c; do echo $i; done       for i in a b c; echo $i; end
case $you in *) echo hi;; esac         switch $you; case '*'; echo hi; end
hi () { echo hello; }                  function hi; echo hello; end

Quoting

The original Bourne shell was a macro language. It performed variable substitution, tokenization and other operations on one line at a time without understanding the underlying syntax. This results in many unexpected side effects. Consider the following two lines of code:

smurf=blue;
smurf=evil; echo Smurfs are $smurf

On the Bourne shell, this results in the output 'Smurfs are blue', because the entire second line is expanded before any of it is executed, so $smurf still holds the value assigned on the first line. Macro languages like M4 and Bourne are not intuitive, but once you understand how they function, they are at least predictable and somewhat logical. bash is implemented as a standard language using a bison grammar, but still chooses to emulate some of the quirks of the original Bourne shell.

The above example would result in bash printing 'Smurfs are evil'. On the other hand, variable values are still tokenized on spaces, meaning you can't write 'rm $file', since if the variable file contains spaces, rm will try to remove the wrong files. To fix this, the user has to make sure every use of a variable is enclosed in quotes, like 'rm "$file"'. This is a very common source of bugs in shell scripts, simply because the default behavior is unexpected and very rarely what is wanted.

In summary, by making bash a non-macro language that sometimes behaves like one, it becomes unpredictable and very hard to learn.

fish is not a macro language and does not pretend to be one. Variables with spaces are still just one token. Because of this, there is no need for the double quotes to mean something different from single quotes, so both types of quotes mean the same thing, and quotes can be nested.

Variable Assignment

Variable assignments in Bourne shell are whitespace sensitive. 'foo=bar' is an assignment, but 'foo = bar' is not. This is just a bad idea. fish does something somewhat unexpected while fixing this. It borrows syntax from csh, and uses a command called 'set' to assign variable values. The reason for doing this is that in fish everything is a command. Loops, conditionals and every other kind of higher level language construct is implemented as yet another built-in command, following the same syntax rules. This makes the language easier to learn and understand, as well as easier to implement.

To set the variable smurf to the value blue, use the command:

set smurf blue
By default, variables are local to the current block and disappear when the block goes out of scope. To make a variable global, you need to use the -g switch.
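For example (fish syntax):

set smurf blue       # local: disappears when the current block ends
set -g smurf blue    # global: survives after the block ends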

Two Methods of Creating Functions, and Both are Bad

bash, zsh and other regular shells allow you to create stored functions in two ways, either as aliases or as functions.

Aliases are defined using commands like 'alias ll="ls -l"'. Aliases are simply string substitutions in the command line. Because of this, aliases have the following limitations:

... ... ...

Because of these limitations, bash uses a second method to specify functions, using a syntax like:
ll() { ls $*;}
While this solves the issues with aliases, I think this is just as bad a syntax. It looks like C code, but anyone expecting it to work anything like C will discover it really does not. You cannot specify argument names in the parentheses; they are just there to make it look like C code. The curly brackets are some sort of pseudo-commands, so skipping the semicolon in the example above results in a syntax error. And perhaps the strangest quirk of all is that removing the whitespace between the opening bracket and 'ls' will also result in a syntax error. Clearly this is not a very well thought out syntax. fish uses a single syntax for defining functions, and the definition is just another regular command:
function ll; ls $argv; end
This is slightly wordier than the above examples, but it solves all the issues with either of the above syntaxes in a way that is consistent with the rest of the fish syntax.

Impossible to Validate the Syntax

Since the use of variables as commands in regular shells is allowed, it is impossible to reliably check the syntax of a script.

For example, this snippet of bash/zsh code may or may not be legal, depending on your luck:

if true; then if [ $RANDOM -lt 1024 ]; then END=fi; else END=true; fi; $END
Both bash and zsh try to determine if the command in the current buffer is finished when the user presses the return key, but because of issues like this, they will sometimes fail.

fish solves this by disallowing variables as commands. Anything you can do with variables as commands can be done in a much cleaner way using either the eval command or by using functions.

Minor Problems

The strings '$foo', "$foo" and `$foo` all look rather similar, while doing three completely different things. fish solves this by making '$foo' and "$foo" mean the same thing, as described above, but also by making the syntax for sub-shells use parentheses instead of back-ticks.

A large number of standard UNIX commands, like printf, echo, kill and test, are implemented as shell built-ins in bash and zsh. As near as I can tell, there is only one advantage to this: a minimal performance increase. But the drawbacks are many:

... ... ...

For those reasons, fish implements as few built-in commands as possible. Including block commands such as 'for' and 'end', fish implements 24 built-ins, whereas bash implements between 60 and 70 of them.

[Mar 11, 2007] Sys Admin v16, i03: Miscellaneous Unix Tips: Answering Novice Shell Questions

Using the exec & eval Commands

One novice asked when it was suitable to exec and eval Unix commands:

exec mycommand
eval mycommand
The exec command works differently depending on the shell; in the Bourne and Korn shells, the exec command replaces the current shell with the command being exec'ed. Consider this stub script:
exec echo "Hello John"
echo "Hello Ed"
# end stub
When the above stub executes, the current shell will be replaced when exec'ing the echo "Hello John" command. The echo "Hello Ed" command never gets the chance to execute. Obviously, this capability has limited uses.

You might design a shell menu where the requirement is to execute an option that never returns to the menu. Another use would be preventing the user from ever reaching the command line. The following last line in the user's .profile file logs the user out as soon as my_executable terminates:

exec my_executable
However, most systems administrators probably use the exit command instead:
my_executable
exit
Don't confuse exec'ing a Unix command with using exec to assign a file to a file descriptor. Remember that the default descriptors are standard input, output, and error or 0, 1, and 2, respectively.

Let's consider an example where you need to read a file with a while loop and ask the user for input in the same loop. Assign the file to an unused descriptor -- 3 in this case -- and obtain the user input from standard input:

exec 3< file.txt
while read line <&3
do
    echo "$line"
    echo "what is your input? "
    read answer
    .
    .
done
exec 3<&-   # close the file descriptor when done
The eval command is more interesting. A common eval use is to build a dynamic string containing valid Unix commands and then use eval to execute the string. Why do we need eval? Often, you can build a command that doesn't require eval:
evalstr="myexecutable"
$evalstr   # execute the command string
However, chances are the above command won't work if "myexecutable" requires command-line arguments. That's where eval comes in.

Our man page says that the arguments to the eval command are "read as input to the shell and the resulting commands executed". What does that mean? Think of it as the eval command forcing a second pass so the string's arguments become the arguments of the spawned child shell.

In a previous column, we built a dynamic sed command that skipped 3 header lines, printed 5 lines, and skipped 3 more lines until the end of the file:

evalstr="sed -n '4,\${p;n;p;n;p;n;p;n;p;n;n;n;}' data.file"
eval $evalstr  # execute the command string
This command fails without eval. When the sed command executes in the child shell, eval forces the remainder of the string to become arguments to the child.

Possibly the coolest eval use is building dynamic Unix shell variables. The following stub script dynamically creates shell variables user1 and user2 setting them equal to the strings John and Ed, respectively:

COUNT=1
eval user${COUNT}=John
echo $user1

COUNT=2
eval user${COUNT}=Ed
echo $user2
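eval also works for reading such dynamically named variables back; the backslash delays the expansion of the inner variable until eval's second pass (a minimal sketch):

COUNT=1
eval echo "user$COUNT is \$user${COUNT}"    # prints: user1 is John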
Pasting Files with paste
Another novice asked how to line up three files line by line, sending the output to another file. Given the following:
file1:

1
2
3

file2:

a
b
c

file3:

7
8
9
the output file should look like this:
1a7
2b8
3c9
The paste command is a ready-made solution:
paste file1 file2 file3
By default, the delimiter character between the columns is a tab. The paste command provides a -d delimiter option. Everything after -d is treated as a list. For example, this paste rendition uses the pipe symbol and ampersand characters as a list:
paste -d"|&" file1 file2 file3
The command produces this output:
1|a&7
2|b&8
3|c&9
The pipe symbol character, |, is used between columns 1 and 2, while the ampersand, &, separates columns 2 and 3. If the list is completely used up and the paste command contains more file arguments, paste starts again at the beginning of the list.

To satisfy our original requirement, paste provides a null character, \0, signifying no character. To prevent the shell from interpreting the character, it must also be quoted:

paste -d"\0" file1 file2 file3

Eric Blake - Updated bash-3.2.9-11

A new release of bash, 3.2.9-11, has been uploaded, replacing 3.2.9-10 as
current.

NEWS:
=====
This is a minor patch release. It swaps over to the cygport build
framework (although building it requires several patches on top of cygport
0.2.8). It fixes SHELLOPTS parsing so that if you export SHELLOPTS in an
interactive shell, then invoke a non-interactive script, the interactive
shell options such as 'history' or 'emacs' are not inherited into the
script. It also fixes shell builtins that were mistakenly treating pipes
in text mode if stdout had previously been a text mode file prior to
command substitution or a pipe.

There are a few things you should be aware of before using this version:
1. When using binary mounts, cygwin programs try to emulate Linux. Bash
on Linux does not understand \r\n line endings, but interprets the \r
literally, which leads to syntax errors or odd variable assignments.
Therefore, you will get the same behavior on Cygwin binary mounts by default.
2. d2u is your friend. You can use it to convert any problematic script
into binary line endings.
3. Cygwin text mounts automatically work with either line ending style,
because the \r is stripped before bash reads the file. If you absolutely
must use files with \r\n line endings, consider mounting the directory
where those files live as a text mount. However, text mounts are not as
well tested or supported on the cygwin mailing list, so you may encounter
other problems with other cygwin tools in those directories.
4. This version of bash has a cygwin-specific shell option, named "igncr",
to force bash to ignore \r, independently of cygwin's mount style. As of
bash-3.2.3-5, it controls regular scripts, command substitution, and
sourced files. I hope to convince the upstream bash maintainer to accept
this patch into the future bash 4.0 even on Linux, rather than keeping it
a cygwin-specific patch, but only time will tell. There are several ways
to activate this option:
4a. For a single affected script, add this line just after the she-bang:
(set -o igncr) 2>/dev/null && set -o igncr; # comment is needed
4b. For a single script, invoke bash explicitly with the shopt, as in
'bash -o igncr ./myscript' rather than the simpler './myscript'.
4c. To affect all scripts, export the environment variable BASH_ENV,
pointing to a file that sets the shell option as desired. Bash will
source this file on startup for every script.
4d. Added in the bash-3.2-2 release: export the environment variable
SHELLOPTS with igncr included in it. It is read-only from within bash,
but you can set it before invoking bash; once in bash, it auto-tracks the
current state of 'set -o igncr'. If exported, then all bash child
processes inherit the same option settings; with the exception added in
3.2.9-11 that certain interactive options are not inherited in
non-interactive use.
5. You can also experiment with the IFS variable for controlling how bash
will treat \r during variable expansion.
6. Normally, cygwin treats DOS-style paths as binary only. This release
of bash includes a hack to check the underlying mount point of files, even
when passed as DOS style paths, but other cygwin tools do not. You are
better off learning how to use POSIX-style paths.
7. There are varying levels of speed at which bash operates. The fastest
is on a binary mount with igncr disabled (the default behavior). Next
would be text mounts with igncr disabled and no \r in the underlying file.
Next would be binary mounts with igncr enabled. And the slowest that bash
will operate is on text mounts with igncr enabled.
8. If you don't like how bash behaves, then propose a patch, rather than
proposing idle ideas. This turn of events has already been talked to
death on the mailing lists by people with many ideas, but few patches.
9. If you forget to read this release announcement, the best you can
expect when you complain to the list is a link back to this email.

Remember, you must not have any bash or /bin/sh instances running when you
upgrade the bash package. This release requires cygwin-1.5.23-1 or
later; and it requires libreadline6-5.2.1-6.

[Feb 14, 2007] pSeries and AIX Information Center

Enhanced Korn shell (ksh93)

In addition to the default system Korn shell (/usr/bin/ksh), AIX® provides an enhanced version available as Korn shell /usr/bin/ksh93. This enhanced version is mostly upwardly compatible with the current default version, and includes a few additional features that are not available in Korn shell /usr/bin/ksh.

Some scripts might perform differently under Korn shell ksh93 than under the default shell because variable handling is somewhat different under the two shells.

Note: There is also a restricted version of the enhanced Korn shell available, called rksh93.

The following features are not available in Korn shell /usr/bin/ksh, but are available in Korn shell /usr/bin/ksh93:

Arithmetic enhancements: You can use libm functions (math functions typically found in the C programming language) within arithmetic expressions, such as $ value=$((sqrt(9))). More arithmetic operators are available, including the unary +, ++, --, and the ?: construct (for example, "x ? y : z"), as well as the , (comma) operator. Arithmetic bases are supported up to base 64. Floating-point arithmetic is also supported. "typeset -E" (exponential) can be used to specify the number of significant digits and "typeset -F" (float) can be used to specify the number of decimal places for an arithmetic variable. The SECONDS variable now displays to the nearest hundredth of a second, rather than to the nearest second.
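For instance, floating-point arithmetic with a fixed number of decimal places (a minimal illustration; the values are arbitrary):

$ typeset -F2 x
$ x=$((22.0 / 7))
$ print $x
3.14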
Compound variables: Compound variables are supported. A compound variable allows a user to specify multiple values within a single variable name. The values are each assigned with a subscript variable, separated from the parent variable with a period (.). For example:
$ myvar=( x=1 y=2 ) 
$ print "${myvar.x}" 
1
Compound assignments: Compound assignments are supported when initializing arrays, both for indexed arrays and associative arrays. The assignment values are placed in parentheses, as shown in the following example:
$ numbers=( zero one two three ) 
$ print ${numbers[0]} ${numbers[3]} 
zero three   
Associative arrays: An associative array is an array with a string as an index.

The typeset command used with the -A flag allows you to specify associative arrays within ksh93. For example:

$ typeset -A teammates 
$ teammates=( [john]=smith [mary]=jones ) 
$ print ${teammates[mary]} 
jones
Variable name references: The typeset command used with the -n flag allows you to assign one variable name as a reference to another. In this way, modifying the value of a variable will in turn modify the value of the variable that is referenced. For example:
$ greeting="hello"
$ typeset -n welcome=greeting     # establishes the reference 	
$ welcome="hi there"              # overrides previous value 	
$ print $greeting 	
hi there   
Parameter expansions: The following parameter-expansion constructs are available (a short example follows this list):
  • ${!varname} is the name of the variable itself.
  • ${!varname[@]} names the indexes for the varname array.
  • ${param:offset} is a substring of param, starting at offset.
  • ${param:offset:num} is a substring of param, starting at offset, for num number of characters.
  • ${@:offset} indicates all positional parameters starting at offset.
  • ${@:offset:num} indicates num positional parameters starting at offset.
  • ${param/pattern/repl} evaluates to param, with the first occurrence of pattern replaced by repl.
  • ${param//pattern/repl} evaluates to param, with every occurrence of pattern replaced by repl.
  • ${param/#pattern/repl} if param begins with pattern, then param is replaced by repl.
  • ${param/%pattern/repl} if param ends with pattern, then param is replaced by repl.
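A quick illustration of a few of these in ksh93 (the string is arbitrary):

$ param=banana
$ print ${param:2:3}
nan
$ print ${param/na/NA}
baNAna
$ print ${param//na/NA}
baNANA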
Discipline functions: A discipline function is a function that is associated with a specific variable. This allows you to define and call a function every time that variable is referenced, set, or unset. These functions take the form of varname.function, where varname is the name of the variable and function is the discipline function. The predefined discipline functions are get, set, and unset.
  • The varname.get function is invoked every time varname is referenced. If the special variable .sh.value is set within this function, then the value of varname is changed to this value. A simple example is the time of day:
    $ function time.get 
    > { 
    >     .sh.value=$(date +%r) 
    > } 
    $ print $time 
    09:15:58 AM 
    $ print $time    # it will change in a few seconds 
    09:16:04 AM
  • The varname.set function is invoked every time varname is set. The .sh.value variable is given the value that was assigned. The value assigned to varname is the value of .sh.value when the function completes. For example:
    $ function adder.set
    > {
    >   let .sh.value="${.sh.value} + 1"
    > }
    $ adder=0
    $ echo $adder
    1
    $ adder=$adder
    $ echo $adder
    2
  • The varname.unset function is executed every time varname is unset. The variable is not actually unset unless it is unset within the function itself; otherwise it retains its value.

Within all discipline functions, the special variable .sh.name is set to the name of the variable, while .sh.subscript is set to the value of the variable's subscript, if applicable.

Function environments: Functions declared with the function myfunc format are run in a separate function environment and support local variables. Functions declared as myfunc() run with the same environment as the parent shell.
Variables: Variables beginning with .sh. are reserved by the shell and have special meaning. See the description of Discipline functions in this table for an explanation of .sh.name, .sh.value, and .sh.subscript. Also available is .sh.version, which represents the version of the shell.
Command return values: Return values of commands are as follows:
  • If the command to be executed is not found, the return value is set to 127.
  • If the command to be executed is found, but not executable, the return value is 126.
  • If the command is executed, but is terminated by a signal, the return value is 256 plus the signal number.
PATH search rules: Special built-in commands are searched for first, followed by all functions (including those in FPATH directories), followed by other built-ins.
Shell history: The hist command allows you to display and edit the shell's command history. In the ksh shell, the fc command was used; fc is now an alias to hist. Related variables are HISTCMD, which increments once for each command executed in the shell's current history, and HISTEDIT, which specifies which editor to use with the hist command.
Built-in commands: The enhanced Korn shell contains the following built-in commands:
  • The builtin command lists all available built-in commands.
  • The printf command works in a similar manner as the printf() C library routine. See the printf command.
  • The disown command blocks the shell from sending a SIGHUP to the specified command.
  • The getconf command works in the same way as the stand-alone command /usr/bin/getconf. See the getconf command.
  • The read built-in command has the following flags:
    • read -d {char} allows you to specify a character delimiter instead of the default newline.
    • read -t {seconds} allows you to specify a time limit, in seconds, after which the read command will time out. If read times out, it will return FALSE.
  • The exec built-in command has the following flags:
    • exec -a {name} {cmd} specifies that argument 0 of cmd be replaced with name.
    • exec -c {cmd} tells exec to clear the environment before executing cmd.
  • The kill built-in command has the following flags:
    • kill -n {signum} is used for specifying a signal number to send to a process, while kill -s {signame} is used to specify a signal name.
    • kill -l, with no arguments, lists all signal names but not their numbers.
  • The whence built-in command has the following flags:
    • The -a flag displays all matches, not only the first one found.
    • The -f flag tells whence not to search for any functions.
  • An additional escape character sequence is available for use by the print and echo commands: the Esc (Escape) character can be represented by the sequence \E.
  • All regular built-in commands recognize the -? flag, which shows the syntax for the specified command.
Other miscellaneous differences between Korn shell ksh and Korn shell ksh93:
  • With Korn shell ksh93, you cannot export functions using the typeset -fx built-in command.
  • With Korn shell ksh93, you cannot export an alias using the alias -x built-in command.
  • With Korn shell ksh93, a dollar sign followed by a single quote ($') is interpreted as an ANSI C string. You must quote the dollar sign ("$"') to get the old (ksh) behavior.
  • Argument parsing logic for Korn shell ksh93 built-in commands has been changed. The undocumented combinations of argument parsing to Korn shell ksh built-in commands do not work in Korn shell ksh93. For example, typeset -4i works similar to typeset -i4 in Korn shell ksh, but does not work in Korn shell ksh93.
  • With Korn shell ksh93, command substitution and arithmetic expansion is performed on special environment variables PS1, PS3, and ENV while expanding. Therefore, you must escape the grave symbol (`) and the dollar sign and opening parenthesis symbols ($() using a backslash (\) to retain the old behavior. For example, Korn shell ksh literally assigns x=$'name\toperator' as $name\toperator; Korn shell ksh93 expands \t and assigns it as name<\t expanded>operator. To preserve the Korn shell ksh behavior, you must quote $. For example, x="$"'name\toperator'.
  • The ERRNO variable has been removed in Korn shell ksh93.
  • In Korn shell ksh93, file names are not expanded for non-interactive shells after the redirection symbol.
  • With Korn shell ksh93, you must use the -t option of the alias command to display tracked aliases.
  • With Korn shell ksh93, in emacs mode, Ctrl+T swaps the current and previous character. With ksh, Ctrl+T swaps the current and next character.
  • Korn shell ksh93 does not allow unbalanced parentheses within ${name operator value}. For example, ${name-(} needs an escape such as ${name-\(} to work in both versions.
  • With Korn shell ksh93, the kill -l command lists only the signal names, not their numerical values.

[Feb 9, 2007] http://opensolaris.org/os/project/ksh93-integration/ OpenSolaris /bin/ksh gets updated to ksh93!!




