Softpanorama

May the source be with you, but remember the KISS principle ;-)
Home Switchboard Unix Administration Red Hat TCP/IP Networks Neoliberalism Toxic Managers
(slightly skeptical) Educational society promoting "Back to basics" movement against IT overcomplexity and  bastardization of classic Unix

Shellorama, 2006




NEWS CONTENTS

Old News ;-)

[Dec 28, 2006] Bash Navigator File System Browsing Utility 1.0

BashNavigator (BN) allows you to bookmark directories on your local file system by typing a single, user-modifiable key (the default is g, for "go") in place of the usual cd command. Once you have placed a set of directories on the navigation list, you can move quickly to any listed directory simply by choosing its number (the default syntax for this "choose" operation is c <directory number>, typically three keystrokes). The forward/back and next/previous movement commands familiar to most users of web browsers are also supported as single-keystroke commands (f, b, n, and p by default). Additional commands include:

  d <number>        delete the numbered directory from the navigation list
  s <filename>      save the current list to a named file in a centralized location
  m <filename>      merge a saved navigation list into the current list
  t n_1 [n_2 ...]   extract ("take") directories as newline-delimited lists

See the help (at the bottom of the bn script, accessible via the h command) for a complete list of user commands. Since all user commands are bash shell aliases for script-defined functions, it is trivial to modify the default BN syntax to suit your needs.
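BN's own implementation aside, the core idea -- a numbered list of directories plus one-letter commands -- can be sketched in a few lines of bash. The function names g and c mirror the defaults described above; this is an illustration, not BN's actual code:

```shell
# Minimal sketch of a navigation list (illustrative, not BN's source).
NAVLIST=()

g() {                              # bookmark the current directory
  NAVLIST+=("$PWD")
  echo "[${#NAVLIST[@]}] $PWD"
}

c() {                              # jump to list entry number $1
  cd "${NAVLIST[$(($1 - 1))]}" || return 1
}
```

Sourced into an interactive shell, g records $PWD on the list and c 2 jumps to the second recorded directory.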

[Dec 28, 2006] Todd Allen's Tools marks

This is a reinvention of aliases:

marks is a set of ksh or bash utilities that provides a mechanism for remembering long pathnames as shorter mark names. Marks are persistent across multiple shell invocations, and may be shared from user to user.

The two basic commands that become available are:

# mark foo       Create a mark named foo that denotes the current directory.
# unmark foo     Remove the mark named foo.

After issuing mark foo, the following command can be issued:

# foo            Change the current working directory to the directory denoted by the mark foo.

Additionally, after executing mark foo, the $foo variable is defined containing the marked directory's pathname, allowing commands like this to be issued:

# cd /some/really/long/path/name
# mark foo
# cd /some/place/else
# cp $foo/*.c .
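The actual marks utilities are more elaborate (persistence across shells, sharing between users), but the core trick can be reduced to a variable plus a same-named command. This is a hypothetical sketch, not the tool's real source:

```shell
# Sketch: mark NAME remembers $PWD in $NAME and defines NAME as a
# command that cd's there (illustrative; real marks also persists to disk).
mark() {
  eval "$1=\$PWD"                      # e.g. foo=$PWD
  eval "$1() { cd \"\$$1\"; }"         # e.g. foo() { cd "$foo"; }
}

unmark() {
  unset -f "$1"                        # remove the command
  unset "$1"                           # remove the variable
}
```

After mark foo, both foo (the command) and $foo (the variable) work, mirroring the examples above.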

[Dec 22, 2006] Advanced Bash Scripting Guide version 4.2

About:
The Advanced Bash Scripting Guide is both a reference and a tutorial on shell scripting. This comprehensive book (the equivalent of about 700 print pages) covers almost every aspect of shell scripting. It contains over 300 profusely commented illustrative examples, and a number of tables. Not just a shell scripting tutorial, this book also provides an introduction to basic programming techniques, such as sorting and recursion. It is well suited for either individual study or classroom use. It covers Bash version 3+.

Release focus: Minor feature enhancements

Changes:
This is a fairly important update. The too-wide pages have finally been reformatted for printing and better display on the PDF version. Quite a bit of new material has been added, including a new example script. There are some bugfixes.

Author:
M. Leo Cooper

[Dec 21, 2006] Easier Shell Script Debugging by Bátori István

December 2006 | BigAdmin

Write your shell scripts this way to make them easier to test and debug.

Shell script debugging is not easy. You have to put set -x or echo commands into the script; you may want to test the script in a test environment; and, at the latest before publishing the script, you have to delete the debug lines again. The following tip shows how to solve this problem without introducing errors.

Put the following lines at the beginning of the script:

if [[ -z $DEBUG ]];then
  alias DEBUG='# ' 
else
  alias DEBUG=''
fi

Everywhere you put a line that is only for testing, write it in the following way:

DEBUG set -x

Or echo a parameter:

DEBUG echo $PATH

Or set a parameter that is valid only during the test:

DEBUG export LOGFILE=$HOME/tmp

Before executing the script, set the DEBUG variable in the shell:

# export DEBUG=yes

With DEBUG set, the DEBUG lines are executed during the run. When you publish the script, you do not need to delete the debug lines; they will not disturb its functionality.
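One caveat worth noting: the trick relies on alias expansion, which ksh performs in scripts but bash disables in non-interactive shells. A bash version needs shopt -s expand_aliases at the top. The demo below writes the tip as a bash script and runs it both ways (the file name is just for illustration):

```shell
#!/usr/bin/env bash
# Write the DEBUG-alias tip as a bash script and run it both ways.
cat > /tmp/debug_demo.sh <<'EOF'
#!/usr/bin/env bash
shopt -s expand_aliases        # bash only: aliases are off in scripts by default
if [[ -z $DEBUG ]]; then
  alias DEBUG='# '             # DEBUG lines become comments
else
  alias DEBUG=''               # DEBUG lines are executed
fi
DEBUG echo "debug line"
echo "normal line"
EOF
chmod +x /tmp/debug_demo.sh

/tmp/debug_demo.sh              # with DEBUG unset, prints only: normal line
DEBUG=yes /tmp/debug_demo.sh    # prints: debug line, then normal line
```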

Sample Script
#!/usr/bin/ksh

# test script to show the DEBUG alias

if [[ -z $DEBUG ]];then
  alias DEBUG='# '
else
  alias DEBUG=''
fi

LOG_FILE=/var/script.out
DEBUG LOG_FILE=$HOME/tmp/script.out

function add {
    DEBUG set -x

    a=$1
    b=$2

    return $((a + b))   # NB: the exit status can only hold values 0-255
}


# MAIN

DEBUG echo "test execution"

echo "$(date) script execution" >>$LOG_FILE
echo "if you do not know it:"

add 2 2
echo "  2 + 2 = $?"

[Nov 25, 2006] appctl

freshmeat.net

Appctl is a framework for virtually any server software. It provides a central script called "ctl" which allows you to start, stop, restart, maintain, or query the current status of an application. It is meant as a completely generic replacement for application-specific startup/stop scripts. The project also supplies generic monitoring scripts for clusters, which can dramatically decrease clustering costs.

Release focus: Minor feature enhancements

Changes:
This update adds some new features. It allows you to use pre-/post-start/stop/status scripts. Additionally, the function names for the "startup", "shutdown", and "status" shell functions are now fully configurable. This update also adds the ability to define a PIDFILE variable: if defined in a module environment file, it determines where the PID file is written to or read from. This is useful if an application has a hard-coded PID path and cannot be reconfigured to use $PID_BASE/[module].pid.
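appctl itself is configurable per application, but the basic start/stop/status-around-a-PID-file pattern it generalizes looks roughly like this generic sketch (the ctl function, PIDFILE default, and the sleep stand-in daemon are illustrative, not appctl's real code):

```shell
# Generic start/stop/status wrapper around a PID file (illustrative).
PIDFILE=${PIDFILE:-/tmp/app.pid}

ctl() {
  case $1 in
    start)
      sleep 300 &                       # stand-in for the real daemon
      echo $! > "$PIDFILE"
      ;;
    stop)
      kill "$(cat "$PIDFILE")" 2>/dev/null
      rm -f "$PIDFILE"
      ;;
    status)
      if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
        echo running
      else
        echo stopped
      fi
      ;;
  esac
}
```

A real framework adds the per-application hooks described above (pre-/post-start scripts, configurable function names) around this skeleton.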

[May 22, 2006] An ETF Gold Mine Launches

The Market Vectors-Gold Miners (GDX:AMEX - commentary - research - Cramer's Take) exchange-traded fund began trading Monday on the Amex. The new ETF, designed to track the performance of the Amex Gold Miners Index, is the first exchange-traded fund in the U.S. offering investors exposure to the gold-mining equity market, as opposed to just the metal itself.

There currently are two ETFs on the market providing direct exposure to gold bullion, streetTRACKS Gold (GLD:Amex - commentary - research - Cramer's Take) and the iShares Comex Gold Trust (IAU:Amex - commentary - research - Cramer's Take). Holders of those funds own shares that each represent a fraction of the price of an ounce of gold.

The Market Vectors-Gold Miners ETF, however, provides exposure to the metal through gold-mining companies such as Newmont Mining (NEM:NYSE - commentary - research - Cramer's Take) and Goldcorp (GG:NYSE - commentary - research - Cramer's Take). Investors receive additional leverage by owning shares of the miners instead of the metal itself.

Like other ETFs, the Market Vectors-Gold Miners ETF trades like a stock and may also be sold short, thereby acting as a speculative hedging investment.

Van Eck Global has long been a player in precious metals and other commodities. For example, in 1968, it introduced the nation's first gold mutual fund.

"We at Van Eck believe that the vulnerability of the U.S. dollar, the twin deficits and other financial imbalances could lead to economic stress that supports a continued positive view on gold-related investments," said Joseph Foster, a fund manager for Van Eck gold since 1996, in a statement. "The Market Vectors-Gold Miners ETF is designed for investors looking for the traditional diversification benefits of gold-related investments, as well as the liquidity that intraday-trading access provides."

The GDX recently was down 53 cents, or 1.4%, to $35.99 amid a sharp selloff in the overall commodities market. Gold is down about 1% for the day to $654 an ounce.

[May 22, 2006] Advanced Bash Scripting Guide 3.9 (Stable)

[May 3, 2006] BigAdmin Submitted Article Converting a ksh Function to a ksh Script by William R. Seppeler, April 2006

Here is a simple way to create a script that will behave both as an executable script and as a ksh function. Being an executable script means the script can be run from any shell. Being a ksh function means the script can be optimized to run faster if launched from a ksh shell. This is an attempt to get the best of both worlds.

Procedure

Start by writing a ksh function. A ksh function is just like a ksh script, except that the script code is enclosed within a "function name { script }" construct.

Take the following example:

# Example script

function fun {
  print "pid=$$ cmd=$0 args=$*" opts="$-"
}

Save the text in a file. You'll notice nothing happens if you try to execute the code as a script:

ksh ./example

In order to use a function, the file must first be sourced. Sourcing the file will create the function definition in the current shell. After the function has been sourced, it can then be executed when you call it by name:

. ./example
fun

To make the function execute as a script, the function must be called within the file. Add a line that calls the function, as in the last line of the following version:

# Example script

function fun {
  print "pid=$$ cmd=$0 args=$*" opts="$-"
}

fun $*

Now you have a file that executes like a ksh script and sources like a ksh function. One caveat is that the file now executes while it is being sourced.

There are advantages and disadvantages to how the code is executed. If the file was executed as a script, the system spawns a child ksh process, loads the function definition, and then executes the function. If the file was sourced, no child process is created, the function definition is loaded into the current shell process, and the function is then executed.

Sourcing the file makes it run faster because no extra process is created; however, the loaded function occupies environment memory. Functions can also manipulate environment variables, whereas a script only gets copies to work with. In programming terms, a function can use call-by-reference parameters via shell variables; a shell script is always call-by-value via arguments.
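The call-by-reference point is easy to demonstrate, with a subshell standing in for the separate script process (a generic illustration, not from the article):

```shell
# A function runs in the current shell and can change its variables;
# a separate process (modelled by a subshell here) only gets copies.
counter=1

bump() { counter=$((counter + 1)); }

bump                    # call by reference: modifies the caller's counter
echo "$counter"         # 2

( counter=99 )          # child process: the change is invisible to the parent
echo "$counter"         # still 2
```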

Advanced Information

When working with functions, it's advantageous to use ksh autoloading. Autoloading eliminates the need to source a file before executing the function. This is accomplished by saving the file with the same name as the function. In the above example, save the example as the file name "fun". Then set the FPATH environment variable to the directory where the file fun is. Now, all that needs to be done is type "fun" on the command line to execute the function.
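Concretely, the autoload setup amounts to the following (the directory name is illustrative; the file still contains the trailing call at this point in the article):

```shell
# Save the function in a file named after it and point FPATH there.
mkdir -p /tmp/funcs
cat > /tmp/funcs/fun <<'EOF'
function fun {
  print "pid=$$ cmd=$0 args=$*" opts="$-"
}
fun $*
EOF
export FPATH=/tmp/funcs
# In a ksh session, typing `fun hello` now autoloads and runs the function.
```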

Notice the double output the first time fun is called. The first call must source the file, and sourcing the file executes the trailing function call. What we want is to call the function only when the file is executed as a script, and to skip the call when the file is sourced. To accomplish this, compare the output when the file is executed with the output when it is sourced: when the file is sourced, $0 is always -ksh (note also the difference in opts). Testing $0 therefore tells us whether to call the function. While we are at it, make the file a self-executing script; after all, no one likes having to type "ksh" before running every ksh script.

#!/bin/ksh
# Example script

function fun {
  print "pid=$$ cmd=$0 args=$*" opts="$-"
}

 [[ "${0##*/}" == "fun" ]] && fun $*

Now the file is a self-executing script as well as a self-sourcing function (when used with ksh autoloading). What becomes more interesting is that since the file can be an autoload function as well as a stand-alone script, it could be placed in a single directory and have both PATH and FPATH point to it.

# ${HOME}/.profile

FPATH=${HOME}/bin
PATH=${FPATH}:${PATH}

In this setup, fun will always be called as a function unless it's explicitly called as ${HOME}/bin/fun.

Considerations

Even though the file can be executed as a function or a script, there are minor differences in behavior between the two. When the file is sourced as a function, all local environment variables will be visible to the script. If the file is executed as a script, only exported environment variables will be visible. Also, when sourced, a function can modify all environment variables. When the file is executed, all visible environment variables are only copies. We may want to make special allowances depending on how the file is called. Take the following example.

#!/bin/ksh

# Add arg2 to the contents of arg1

function addTo {
  eval $1=$(($1 + $2))
}

if [[ "${0##*/}" == "addTo" ]]; then
  addTo $*
  eval print \$$1
fi

The script is called by naming an environment variable and a quantity to add to that variable. When sourced, the script will directly modify the environment variable with the new value. However, when executed as a script, the environment variable cannot be modified, so the result must be output instead. Here is a sample run of both situations.

# called as a function
var=5
addTo var 3
print $var

# called as a script
var=5
export var
var=$(./addTo var 3)
print $var

Note the extra steps needed when executing this example as a script. The var must be exported prior to running the script or else it won't be visible. Also, because a script can't manipulate the current environment, you must capture the new result.

Extra function-ality

It's possible to package several functions into a single file. This is nice for distribution as you only need to maintain a single file. In order to maintain autoloading functionality, all that needs to be done is create a link for each function named in the file.

#!/bin/ksh

function addTo {
  eval $1=$(($1 + $2))
}

function multiplyBy {
  eval $1=$(($1 * $2))
}

if [[ "${0##*/}" == "addTo" ]] \
|| [[ "${0##*/}" == "multiplyBy" ]]; then
  ${0##*/} $*
  eval print \$$1
fi

if [[ ! -f "${0%/*}/addTo" ]] \
|| [[ ! -f "${0%/*}/multiplyBy" ]]; then
  ln "${0}" "${0%/*}/addTo"
  ln "${0}" "${0%/*}/multiplyBy"
  chmod u+rx "${0}"
fi

Notice the extra code at the bottom. This text could be saved in a file named myDist. The first time the file is sourced or executed, the appropriate links and file permissions will be put in place, thus creating a single distribution for multiple functions. Couple that with making the file a script executable and you end up with a single distribution of multiple scripts. It's like a shar file, but nothing actually gets unpacked.

The only downside to this distribution tactic is that BigAdmin will credit you once per file submitted, not per executable program it contains...

Time to Run

Try some of the sample code in this document. Get comfortable with the usage of each snippet to understand the differences and limitations. In general, it's safest to always distribute a script, but it's nice to have a function when speed is a consideration. Do some timing tests.

export var=8
time ./addTo var 5
time addTo var 5

If this code were part of an inner-loop calculation of a larger script, that speed difference could be significant.

This document aims to provide the best of both worlds. You can have a script and retain function speed for when it's needed. I hope you have enjoyed this document and its content. Thanks to Sun and BigAdmin for the hosting and support to make contributions like this possible.

[Apr 10, 2006] freshmeat.net Project details for log4sh by Kate Ward

About: log4sh is a logging framework for shell scripts that works similarly to the other wonderful logging products available from the Apache Software Foundation (e.g., log4j, log4perl). Although not as powerful as the others, it makes the task of adding advanced logging to shell scripts easier, and offers much more power than sprinkling simple "echo" commands throughout. In addition, it can be configured from a properties file, so scripts in a production environment do not need to be altered to change the amount of logging they produce.

Changes: The LOG4SH_CONFIG_PREFIX variable was added, which allows log4sh to be configured to read native log4j properties files. More conversion character support was added to allow for better compatibility with log4j properties files. The test scripts were reworked so that log4sh can now be tested using a properties file for configuration or via a runtime configuration.
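log4sh's actual configuration is richer (appenders, log4j-style properties files); the underlying idea -- logging functions gated by a level threshold -- reduces to something like this hypothetical sketch (function and variable names are ours, not log4sh's API):

```shell
# Minimal leveled logging, gated by LOG_LEVEL (not log4sh's real API).
LOG_LEVEL=${LOG_LEVEL:-INFO}

_level_num() {
  case $1 in
    DEBUG) echo 0 ;; INFO) echo 1 ;; WARN) echo 2 ;; ERROR) echo 3 ;;
  esac
}

log() {   # usage: log LEVEL message...
  local lvl=$1; shift
  if [ "$(_level_num "$lvl")" -ge "$(_level_num "$LOG_LEVEL")" ]; then
    echo "$(date '+%H:%M:%S') [$lvl] $*"
  fi
}
```

With LOG_LEVEL=INFO, "log DEBUG ..." is silent while "log WARN ..." prints; changing LOG_LEVEL -- possibly from a sourced configuration file -- changes the output without touching the script, which is the property the description above highlights.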







Copyright © 1996-2021 by Softpanorama Society. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...

You can use PayPal to buy a cup of coffee for authors of this site

Disclaimer:

The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the Softpanorama society. We do not warrant the correctness of the information provided or its fitness for any purpose. The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable JavaScript for this site. This site is perfectly usable without JavaScript.

Last modified: March 12, 2019