Softpanorama

May the source be with you, but remember the KISS principle ;-)

Slightly Skeptical View on Enterprise Unix Administration


The KISS rule can be expanded as: Keep It Simple, Sysadmin ;-)

Additional useful material on the topic can also be found in an older article Solaris vs Linux:

Abstract

Introduction

Nine factors framework for comparison of two flavors of Unix in a large enterprise environment

Four major areas of Linux and Solaris deployment

Comparison of internal architecture and key subsystems

Security

Hardware: SPARC vs. X86

Development environment

Solaris as a cultural phenomenon

Using Solaris-Linux enterprise mix as the least toxic Unix mix available

Conclusions

Acknowledgements

Webliography

Here are my notes/reflections on sysadmin problems in the strange (and typically pretty toxic) IT departments of large corporations:



NEWS CONTENTS

Old News ;-)


"I appreciate Woody Allen's humor because one of my safety valves is an appreciation for life's absurdities. His message is that life isn't a funeral march to the grave. It's a polka."

-- Dennis Kucinich

[Sep 17, 2017] The last 25 years (or so) were years of tremendous progress in computers and networking that changed human civilization

Notable quotes:
"... To emulate those capabilities on computers will probably require another 100 years or more. Selective functions can be imitated even now (manipulator that deals with blocks in a pyramid was created in 70th or early 80th I think, but capabilities of human "eye controlled arm" is still far, far beyond even wildest dreams of AI. ..."
"... Similarly human intellect is completely different from AI. At the current level the difference is probably 1000 times larger then the difference between a child with Down syndrome and a normal person. ..."
"... Human brain is actually a machine that creates languages for specific domain (or acquire them via learning) and then is able to operate in terms of those languages. Human child forced to grow up with animals, including wild animals, learns and is able to use "animal language." At least to a certain extent. Some of such children managed to survive in this environment. ..."
"... If you are bilingual, try Google translate on this post. You might be impressed by their recent progress in this field. It did improved considerably and now does not cause instant laugh. ..."
"... One interesting observation that I have is that automation is not always improve functioning of the organization. It can be quite opposite :-). Only the costs are cut, and even that is not always true. ..."
"... Of course the last 25 years (or so) were years of tremendous progress in computers and networking that changed the human civilization. And it is unclear whether we reached the limit of current capabilities or not in certain areas (in CPU speeds and die shrinking we probably did; I do not expect anything significant below 7 nanometers: https://en.wikipedia.org/wiki/7_nanometer ). ..."
May 28, 2017 | economistsview.typepad.com

libezkova , May 27, 2017 at 10:53 PM

"When combined with our brains, human fingers are amazingly fine manipulation devices."

Not only fingers. The whole human arm is an amazing device. Pure magic, if you ask me.

Emulating those capabilities on computers will probably require another 100 years or more. Selective functions can be imitated even now (a manipulator that deals with blocks in a pyramid was created in the '70s or early '80s, I think), but the capabilities of the human "eye-controlled arm" are still far, far beyond even the wildest dreams of AI.

Similarly, human intellect is completely different from AI. At the current level the difference is probably 1000 times larger than the difference between a child with Down syndrome and a normal person.

The human brain is actually a machine that creates languages for specific domains (or acquires them via learning) and then is able to operate in terms of those languages. A human child forced to grow up with animals, including wild animals, learns and is able to use the "animal language," at least to a certain extent. Some such children managed to survive in this environment.

Such cruel natural experiments have shown that the level of flexibility of the human brain is something really incredible, and IMHO cannot be achieved by computers (although never say never).

Here we are talking about tasks that are a million times more complex than playing Go or chess, or driving a car on the street.

My impression is that most of the recent AI successes (especially IBM's win in Jeopardy ( http://www.techrepublic.com/article/ibm-watson-the-inside-story-of-how-the-jeopardy-winning-supercomputer-was-born-and-what-it-wants-to-do-next/ ), which probably was partially staged) are by and large due to the growth of storage and the number of cores of computers, not so much the sophistication of the algorithms used.

The limits of AI are clearly visible when we see the quality of translation from one language to another. For more or less complex technical text it remains medium to low. As in "requires human editing".

If you are bilingual, try Google Translate on this post. You might be impressed by their recent progress in this field. It has improved considerably and no longer causes instant laughter.

Same thing with speech recognition. The progress is tremendous, especially over the last three to five years. But it is still far from perfect. Now, with some training, programs like Dragon are quite usable as a dictation device on, say, a PC with a 4-core 3 GHz CPU and 16 GB of memory (especially if you are a native English speaker), but if you deal with specialized text or have a strong accent, they still leave much to be desired (although your level of knowledge of the program, experience, and persistence can improve the results considerably).

One interesting observation I have is that automation does not always improve the functioning of the organization. It can be quite the opposite :-). Only the costs are cut, and even that is not always true.

Of course, the last 25 years (or so) were years of tremendous progress in computers and networking that changed human civilization. And it is unclear whether we have reached the limit of current capabilities in certain areas (in CPU speeds and die shrinking we probably have; I do not expect anything significant below 7 nanometers: https://en.wikipedia.org/wiki/7_nanometer ).

[Sep 17, 2017] Colleagues Addicted to Tech

Notable quotes:
"... dwelling on the negative can backfire. ..."
"... It's fine to acknowledge a misstep. But spin the answer to focus on why this new situation is such an ideal match of your abilities to the employer's needs. ..."
Apr 20, 2015 | NYTimes.com

Discussing Bad Work Situations

I have been in my present position for over 25 years. Five years ago, I was assigned a new boss, who has a reputation in my industry for harassing people in positions such as mine until they quit. I have managed to survive, but it's clear that it's time for me to move along. How should I answer the inevitable interview question: Why would I want to leave after so long? I've heard that speaking badly of a boss is an interview no-no, but it really is the only reason I'm looking to find something new. BROOKLYN

I am unemployed and interviewing for a new job. I have read that when answering interview questions, it's best to keep everything you say about previous work experiences or managers positive.

But what if you've made one or two bad choices in the past: taking jobs because you needed them, figuring you could make it work - then realizing the culture was a bad fit, or you had an arrogant, narcissistic boss?

Nearly everyone has had a bad work situation or boss. I find it refreshing when I read stories about successful people who mention that they were fired at some point, or didn't get along with a past manager. So why is it verboten to discuss this in an interview? How can the subject be addressed without sounding like a complainer, or a bad employee? CHICAGO

As these queries illustrate, the temptation to discuss a negative work situation can be strong among job applicants. But in both of these situations, and in general, criticizing a current or past employer is a risky move. You don't have to paint a fictitiously rosy picture of the past, but dwelling on the negative can backfire. Really, you don't want to get into a detailed explanation of why you have or might quit at all. Instead, you want to talk about why you're such a perfect fit for the gig you're applying for.

So, for instance, a question about leaving a long-held job could be answered by suggesting that the new position offers a chance to contribute more and learn new skills by working with a stronger team. This principle applies in responding to curiosity about jobs that you held for only a short time.

It's fine to acknowledge a misstep. But spin the answer to focus on why this new situation is such an ideal match of your abilities to the employer's needs.

The truth is, even if you're completely right about the past, a prospective employer doesn't really want to hear about the workplace injustices you've suffered, or the failings of your previous employer. A manager may even become concerned that you will one day add his or her name to the list of people who treated you badly. Save your cathartic outpourings for your spouse, your therapist, or, perhaps, the future adoring profile writer canonizing your indisputable success.

Send your workplace conundrums to workologist@nytimes.com, including your name and contact information (even if you want it withheld for publication). The Workologist is a guy with well-intentioned opinions, not a professional career adviser. Letters may be edited.

[Sep 16, 2017] Google Drive Faces Outage, Users Report

Sep 16, 2017 | tech.slashdot.org

(google.com)

Posted by msmash on Thursday September 07, 2017

Numerous Slashdot readers are reporting that they are facing issues accessing Google Drive, the productivity suite from the Mountain View-based company. Google's dashboard confirms that Drive is facing an outage.

Third-party web monitoring tool DownDetector also reports thousands of similar complaints from users. The company said, "Google Drive service has already been restored for some users, and we expect a resolution for all users in the near future.

Please note this time frame is an estimate and may change. Google Drive is not loading files and results in failures for a subset of users."

[Sep 16, 2017] Will Millennials Be Forced Out of Tech Jobs When They Turn 40?

Notable quotes:
"... Karen Panetta, the dean of graduate engineering education at Tufts University and the vice president of communications and public relations at the IEEE-USA, believes the outcome for tech will be Logan's Run -like, where age sets a career limit... ..."
"... It's great to get the new hot shot who just graduated from college, but it's also important to have somebody with 40 years of experience who has seen all of the changes in the industry and can offer a different perspective." ..."
Sep 16, 2017 | it.slashdot.org

(ieeeusa.org)

Posted by EditorDavid on Sunday September 03, 2017 @07:30AM

dcblogs shared an interesting article from IEEE-USA's "Insight" newsletter: Millennials, who date from the 1980s to the mid-2000s, are the largest generation. But what will happen to this generation's tech workers as they settle into middle age?

Will the median age of tech firms rise as the Millennial generation grows older...? The median age range at Google, Facebook, SpaceX, LinkedIn, Amazon, Salesforce, Apple and Adobe, is 29 to 31, according to a study last year by PayScale, which analyzes self-reported data...

Karen Panetta, the dean of graduate engineering education at Tufts University and the vice president of communications and public relations at the IEEE-USA, believes the outcome for tech will be Logan's Run -like, where age sets a career limit...

Tech firms want people with current skill sets, and those "without those skills will be pressured to leave or see minimal career progression," said Panetta... The idea that the tech industry may have an age bias is not scaring the new college grads away. "They see retirement so far off, so they are more interested in how to move up or onto new startup ventures or even business school," said Panetta.

"The reality sets in when they have families and companies downsize and it's not so easy to just pick up and go on to another company," she said. None of this may be a foregone conclusion.

Millennials may see the experience of today's older workers as a cautionary tale, and usher in cultural changes... David Kurtz, a labor relations partner at Constangy, Brooks, Smith & Prophete, suggests tech firms should be sharing age-related data about their workforce, adding "The more of a focus you place on an issue the more attention it gets and the more likely that change can happen.

It's great to get the new hot shot who just graduated from college, but it's also important to have somebody with 40 years of experience who has seen all of the changes in the industry and can offer a different perspective."

[Sep 01, 2017] linux - Looping through the content of a file in Bash - Stack Overflow

Notable quotes:
"... done <<< "$(...)" ..."
Sep 01, 2017 | stackoverflow.com

Peter Mortensen , asked Oct 5 '09 at 17:52

How do I iterate through each line of a text file with Bash?

With this script

echo "Start!"
for p in (peptides.txt)
do
    echo "${p}"
done

I get this output on the screen:

Start!
./runPep.sh: line 3: syntax error near unexpected token `('
./runPep.sh: line 3: `for p in (peptides.txt)'

(Later I want to do something more complicated with $p than just output to the screen.)


The environment variable SHELL is (from env):

SHELL=/bin/bash

/bin/bash --version output:

GNU bash, version 3.1.17(1)-release (x86_64-suse-linux-gnu)
Copyright (C) 2005 Free Software Foundation, Inc.

cat /proc/version output:

Linux version 2.6.18.2-34-default (geeko@buildhost) (gcc version 4.1.2 20061115 (prerelease) (SUSE Linux)) #1 SMP Mon Nov 27 11:46:27 UTC 2006

The file peptides.txt contains:

RKEKNVQ
IPKKLLQK
QYFHQLEKMNVK
IPKKLLQK
GDLSTALEVAIDCYEK
QYFHQLEKMNVKIPENIYR
RKEKNVQ
VLAKHGKLQDAIN
ILGFMK
LEDVALQILL

Bruno De Fraine , answered Oct 5 '09 at 18:00

One way to do it is:
while read p; do
  echo $p
done <peptides.txt

Exceptionally, if the loop body may read from standard input, you can open the file using a different file descriptor:

while read -u 10 p; do
  ...
done 10<peptides.txt

Here, 10 is just an arbitrary number (different from 0, 1, 2).
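
A hypothetical illustration of why the separate descriptor matters: the sketch below reads peptides.txt on descriptor 10, so the loop body is free to prompt the user on standard input (the prompt and the [y/N] handling are made up for the example):

while read -u 10 p; do
  # 'read -p' below reads from the terminal (fd 0), not from the file on fd 10
  read -r -p "Process ${p}? [y/N] " answer
  [[ $answer == [Yy]* ]] && echo "processing ${p}"
done 10<peptides.txt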

Warren Young , answered Oct 5 '09 at 17:54

cat peptides.txt | while read line
do
   # do something with $line here
done

Stan Graves , answered Oct 5 '09 at 18:18

Option 1a: While loop: Single line at a time: Input redirection
#!/bin/bash
filename='peptides.txt'
echo Start
while read p; do 
    echo $p
done < $filename

Option 1b: While loop: Single line at a time:
Open the file, read from a file descriptor (in this case file descriptor #4).

#!/bin/bash
filename='peptides.txt'
exec 4<$filename
echo Start
while read -u4 p ; do
    echo $p
done

Option 2: For loop: Read file into single variable and parse.
This syntax will parse "lines" based on any white space between the tokens. This still works here because the given input file's lines are single-word tokens. If there were more than one token per line, then this method would not work as well. Also, reading the full file into a single variable is not a good strategy for large files.

#!/bin/bash
filename='peptides.txt'
filelines=`cat $filename`
echo Start
for line in $filelines ; do
    echo $line
done

mightypile , answered Oct 4 '13 at 13:30

This is no better than other answers, but is one more way to get the job done in a file without spaces (see comments). I find that I often need one-liners to dig through lists in text files without the extra step of using separate script files.
for word in $(cat peptides.txt); do echo $word; done

This format allows me to put it all in one command-line. Change the "echo $word" portion to whatever you want and you can issue multiple commands separated by semicolons. The following example uses the file's contents as arguments into two other scripts you may have written.

for word in $(cat peptides.txt); do cmd_a.sh $word; cmd_b.py $word; done

Or if you intend to use this like a stream editor (learn sed) you can dump the output to another file as follows.

for word in $(cat peptides.txt); do cmd_a.sh $word; cmd_b.py $word; done > outfile.txt

I've used these as written above because the text files I work with have one word per line. (See comments.) If you have spaces that you don't want splitting your words/lines, it gets a little uglier, but the same command still works as follows:

OLDIFS=$IFS; IFS=$'\n'; for line in $(cat peptides.txt); do cmd_a.sh $line; cmd_b.py $line; done > outfile.txt; IFS=$OLDIFS

This just tells the shell to split on newlines only, not spaces, then returns the environment back to what it was previously. At this point, you may want to consider putting it all into a shell script rather than squeezing it all into a single line, though.
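
As the answer suggests, the same logic is easier to maintain as a small script; here is a minimal sketch under the same assumptions (peptides.txt as input, cmd_a.sh and cmd_b.py as the hypothetical helper commands from the examples above):

#!/bin/bash
# Split input on newlines only, run both helper commands on each line,
# and collect all output in outfile.txt -- same behavior as the one-liner above.
OLDIFS=$IFS
IFS=$'\n'
for line in $(cat peptides.txt); do
    cmd_a.sh "$line"
    cmd_b.py "$line"
done > outfile.txt
IFS=$OLDIFS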

Best of luck!

Jahid , answered Jun 9 '15 at 15:09

Use a while loop, like this:
while IFS= read -r line; do
   echo "$line"
done <file

Notes:

  1. If you don't set the IFS properly, you will lose indentation.
  2. You should almost always use the -r option with read.
  3. Don't read lines with for
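
A hypothetical illustration of notes 1 and 2: without IFS= and -r, leading whitespace is stripped and backslashes are eaten by read:

printf '    indented \\t line\n' > /tmp/demo.txt

while read line; do              # default IFS, no -r
    printf '[%s]\n' "$line"      # prints [indented t line] -- indentation lost, backslash removed
done < /tmp/demo.txt

while IFS= read -r line; do      # the recommended form
    printf '[%s]\n' "$line"      # prints [    indented \t line] -- preserved intact
done < /tmp/demo.txt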

codeforester , answered Jan 14 at 3:30

A few more things not covered by other answers:

Reading from a delimited file
# ':' is the delimiter here, and there are three fields on each line in the file
# IFS set below is restricted to the context of `read`, it doesn't affect any other code
while IFS=: read -r field1 field2 field3; do
  # process the fields
  # if the line has less than three fields, the missing fields will be set to an empty string
  # if the line has more than three fields, `field3` will get all the values, including the third field plus the delimiter(s)
done < input.txt
Reading from more than one file at a time
while read -u 3 -r line1 && read -u 4 -r line2; do
  # process the lines
  # note that the loop will end when we reach EOF on either of the files, because of the `&&`
done 3< input1.txt 4< input2.txt
Reading a whole file into an array (Bash version 4+)
readarray -t my_array < my_file

or

mapfile -t my_array < my_file

And then

for line in "${my_array[@]}"; do
  # process the lines
done

Anjul Sharma , answered Mar 8 '16 at 16:10

If you want read to preserve whitespace and also handle a final line that has no trailing newline, use:
#!/bin/bash
while IFS='' read -r line || [[ -n "$line" ]]; do
    echo "$line"
done < "$1"

Then run the script with the file name as a parameter.

Sine , answered Nov 14 '13 at 14:23

#!/bin/bash
#
# Change the file name from "test" to desired input file 
# (The comments in bash are prefixed with #'s)
for x in $(cat test.txt)
do
    echo $x
done

dawg , answered Feb 3 '16 at 19:15

Suppose you have this file:
$ cat /tmp/test.txt
Line 1
    Line 2 has leading space
Line 3 followed by blank line

Line 5 (follows a blank line) and has trailing space    
Line 6 has no ending CR

There are four elements that will alter the meaning of the file output read by many Bash solutions:

  1. The blank line 4;
  2. Leading or trailing spaces on two lines;
  3. Maintaining the meaning of individual lines (i.e., each line is a record);
  4. Line 6, which is not terminated with a CR (newline).

If you want to read the text file line by line, including blank lines and a final line not terminated with a CR, you must use a while loop and you must have an alternate test for the final line.

Here are the methods that may change the file (in comparison to what cat returns):

1) Lose the last line and leading and trailing spaces:

$ while read -r p; do printf "%s\n" "'$p'"; done </tmp/test.txt
'Line 1'
'Line 2 has leading space'
'Line 3 followed by blank line'
''
'Line 5 (follows a blank line) and has trailing space'

(If you do while IFS= read -r p; do printf "%s\n" "'$p'"; done </tmp/test.txt instead, you preserve the leading and trailing spaces but still lose the last line if it is not terminated with CR)

2) Using command substitution with cat reads the entire file in one gulp and loses the meaning of individual lines:

$ for p in "$(cat /tmp/test.txt)"; do printf "%s\n" "'$p'"; done
'Line 1
    Line 2 has leading space
Line 3 followed by blank line

Line 5 (follows a blank line) and has trailing space    
Line 6 has no ending CR'

(If you remove the " from $(cat /tmp/test.txt) you read the file word by word rather than one gulp. Also probably not what is intended...)


The most robust and simplest way to read a file line-by-line and preserve all spacing is:

$ while IFS= read -r line || [[ -n $line ]]; do printf "'%s'\n" "$line"; done </tmp/test.txt
'Line 1'
'    Line 2 has leading space'
'Line 3 followed by blank line'
''
'Line 5 (follows a blank line) and has trailing space    '
'Line 6 has no ending CR'

If you want to strip leading and trailing spaces, remove the IFS= part:

$ while read -r line || [[ -n $line ]]; do printf "'%s'\n" "$line"; done </tmp/test.txt
'Line 1'
'Line 2 has leading space'
'Line 3 followed by blank line'
''
'Line 5 (follows a blank line) and has trailing space'
'Line 6 has no ending CR'

(A text file without a terminating \n , while fairly common, is considered broken under POSIX. If you can count on the trailing \n you do not need || [[ -n $line ]] in the while loop.)

More at the BASH FAQ


Here is my real-life example of how to loop over the lines of another program's output, check for substrings, drop double quotes from a variable, and use that variable outside of the loop. I guess quite a few people ask these questions sooner or later.
##Parse FPS from first video stream, drop quotes from fps variable
## streams.stream.0.codec_type="video"
## streams.stream.0.r_frame_rate="24000/1001"
## streams.stream.0.avg_frame_rate="24000/1001"
FPS=unknown
while read -r line; do
  if [[ $FPS == "unknown" ]] && [[ $line == *".codec_type=\"video\""* ]]; then
    echo ParseFPS $line
    FPS=parse
  fi
  if [[ $FPS == "parse" ]] && [[ $line == *".r_frame_rate="* ]]; then
    echo ParseFPS $line
    FPS=${line##*=}
    FPS="${FPS%\"}"
    FPS="${FPS#\"}"
  fi
done <<< "$(ffprobe -v quiet -print_format flat -show_format -show_streams -i "$input")"
if [ "$FPS" == "unknown" ] || [ "$FPS" == "parse" ]; then 
  echo ParseFPS Unknown frame rate
fi
echo Found $FPS

Declaring the variable outside of the loop, setting its value inside the loop, and using it after the loop requires the done <<< "$(...)" syntax. The application needs to run within the context of the current shell (not in a pipeline subshell), which is why the output is fed in with a here-string rather than a pipe. The quotes around the command substitution preserve the newlines of the output stream.

The loop matches for substrings, then reads the name=value pair, splits off the part to the right of the last = character, drops the first quote, drops the last quote, and we have a clean value to be used elsewhere.
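
A minimal sketch of just the quote-stripping step in isolation, using a hypothetical line in the same name="value" format as the ffprobe output above:

line='streams.stream.0.r_frame_rate="24000/1001"'
value=${line##*=}      # keep everything after the last '=' : "24000/1001" (still quoted)
value="${value%\"}"    # drop the trailing double quote
value="${value#\"}"    # drop the leading double quote
echo "$value"          # prints: 24000/1001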

[Aug 29, 2017] The booklet for common tasks on a Linux system.

Aug 29, 2017 | bumble.sourceforge.net

This booklet is designed to help with common tasks on a Linux system. It is presented as a series of "recipes" for accomplishing those tasks. Each recipe consists of a plain-English one-line description, followed by the Linux command which carries out the task.

The document is focused on performing tasks in Linux using the 'command line' or 'console'.

The format of the booklet was largely inspired by the "Linux Cookbook" www.dsl.org/cookbook

[Aug 29, 2017] backup-etc.sh -- A script to backup the /etc directory

This is a simple script that generates a "dot" progress line while the backup runs. The backup name includes a timestamp. No rotation is implemented (a minimal rotation sketch follows the script).
Aug 29, 2017 | wpollock.com
#!/bin/bash
# Script to backup the /etc heirarchy
#
# Written 4/2002 by Wayne Pollock, Tampa Florida USA
#
#  $Id: backup-etc,v 1.6 2004/08/25 01:42:26 wpollock Exp $
#
# $Log: backup-etc,v $
# Revision 1.6  2004/08/25 01:42:26  wpollock
# Changed backup name to include the hostname and 4 digit years.
#
# Revision 1.5  2004/01/07 18:07:33  wpollock
# Fixed dots routine to count files first, then calculate files per dot.
#
# Revision 1.4  2003/04/03 08:10:12  wpollock
# Changed how the version number is obtained, so the file
# can be checked out normally.
#
# Revision 1.3  2003/04/03 08:01:25  wpollock
# Added ultra-fancy dots function for verbose mode.
#
# Revision 1.2  2003/04/01 15:03:33  wpollock
# Eliminated the use of find, and discovered that tar was working
# as intended all along!  (Each directory that find found was
# recursively backed-up, so for example /etc, then /etc/mail,
# caused /etc/mail/sendmail.mc to be backuped three times.)
#
# Revision 1.1  2003/03/23 18:57:29  wpollock
# Modified by Wayne Pollock:
#
# Discovered not all files were being backed up, so
# added "-print0 --force-local" to find and "--null -T -"
# to tar (eliminating xargs), to fix the problem when filenames
# contain metacharacters such as whitespace.
# Although this now seems to work, the current version of tar
# seems to have a bug causing it to backup every file two or
# three times when using these options!  This is still better
# than not backing up some files at all.)
#
# Changed the logger level from "warning" to "error".
#
# Added '-v, --verbose' options to display dots every 60 files,
# just to give feedback to a user.
#
# Added '-V, --version' and '-h, --help' options.
#
# Removed the lock file mechanism and backup file renaming
# (from foo to foo.1), in favor of just including a time-stamp
# of the form "yymmdd-hhmm" to the filename.
#
#

PATH=/bin:/usr/bin

# The backups should probably be stored in /var somplace:
REPOSITORY=/root
TIMESTAMP=$(date '+%Y%m%d-%H%M')
HOSTNAME=$(hostname)
FILE="$REPOSITORY/$HOSTNAME-etc-full-backup-$TIMESTAMP.tgz"

ERRMSGS=/tmp/backup-etc.$$
PROG=${0##*/}
VERSION=$(echo $Revision: 1.6 $ |awk '{print$2}')
VERBOSE=off

usage()
{  echo "This script creates a full backup of /etc via tar in $REPOSITORY."
   echo "Usage: $PROG [OPTIONS]"
   echo '  Options:'
   echo '    -v, --verbose   displays some feedback (dots) during backup'
   echo '    -h, --help      displays this message'
   echo '    -V, --version   display program version and author info'
   echo
}

dots()
{  MAX_DOTS=50
   NUM_FILES=`find /etc|wc -l`
   let 'FILES_PER_DOT = NUM_FILES / MAX_DOTS'
   bold=`tput smso`
   norm=`tput rmso`
   tput sc
   tput civis
   echo -n "$bold(00%)$norm"
   while read; do
      let "cnt = (cnt + 1) % FILES_PER_DOT"
      if [ "$cnt" -eq 0 ]
      then
         let '++num_dots'
         let 'percent = (100 * num_dots) / MAX_DOTS'
         [ "$percent" -gt "100" ] && percent=100
         tput rc
         printf "$bold(%02d%%)$norm" "$percent"
         tput smir
         echo -n "."
         tput rmir
      fi
   done
   tput cnorm
   echo
}

# Command line argument processing:
while [ $# -gt 0 ]
do
   case "$1" in
      -v|--verbose)  VERBOSE=on; ;;
      -h|--help)     usage; exit 0; ;;
      -V|--version)  echo -n "$PROG version $VERSION "
                     echo 'Written by Wayne Pollock '
                     exit 0; ;;
      *)             usage; exit 1; ;;
   esac
   shift
done

trap "rm -f $ERRMSGS" EXIT

cd /etc

# create backup, saving any error messages:
if [ "$VERBOSE" != "on" ]
then
    tar -cz --force-local -f $FILE . 2> $ERRMSGS 
else
    tar -czv --force-local -f $FILE . 2> $ERRMSGS | dots
fi

# Log any error messages produced:
if [ -s "$ERRMSGS" ]
then logger -p user.error -t $PROG "$(cat $ERRMSGS)"
else logger -t $PROG "Completed full backup of /etc"
fi

exit 0
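
As noted above, the script does no rotation. Here is a minimal sketch of one way to add it, assuming the same naming scheme used by the script ($REPOSITORY, $HOSTNAME-etc-full-backup-<timestamp>.tgz); the retention count of 7 is arbitrary:

#!/bin/bash
# Keep only the 7 newest /etc backups created by the script above.
# Relies on the naming scheme <hostname>-etc-full-backup-<timestamp>.tgz
REPOSITORY=/root
HOSTNAME=$(hostname)
KEEP=7

ls -1t "$REPOSITORY/$HOSTNAME"-etc-full-backup-*.tgz 2>/dev/null |
   tail -n +$((KEEP + 1)) |
   while read -r old_backup; do
      rm -f -- "$old_backup"
   done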

[Aug 29, 2017] How to view the `.bash_history` file via command line

Aug 29, 2017 | askubuntu.com

If you actually need the output of the .bash_history file, replace history with cat ~/.bash_history in all of the commands below.

If you actually want the commands without numbers in front, use this command instead of history:

history | cut -d' ' -f 4-

[Aug 29, 2017] The quickie guide to continuous delivery in DevOps

This is pretty idiotic: "But wait -- Isn't speed the key to all software development? These days, companies routinely require their developers to update or add features once per day, week, or month. This was unheard of back in the day, even in the era of agile software development ."
And now an example of buzzword-infused nonsense: "DevOps is a concept, an idea, a life philosophy," says Gottfried Sehringer, chief marketing officer at XebiaLabs, a software delivery automation company. "It's not really a process or a toolset, or a technology." And another one: "In an ideal world, you would push a button to release every few seconds," Sehringer says. "But this is not an ideal world, and so people plug up the process along the way."
I want to see a sizable software product with a release every few seconds. Even for a small and rapidly evolving web site, scripts should be released no more frequently than daily.
Notable quotes:
"... Even if you're a deity of software bug-squashing, how can you -- or any developer or operations specialist -- deliver high-quality, "don't break anything" code when you have to build and release that fast? Everyone has their own magic bullet. "Agile -- " cries one crowd. " Continuous build -- " yells another. " Continuous integration -- " cheers a third. ..."
"... Automation has obvious returns on investment. "You can make sure it's good in pre-production and push it immediately to production without breaking anything, and then just repeat, repeat, repeat, over and over again," says Sehringer. ..."
"... In other words, you move delivery through all the steps in a structured, repeatable, automated way to reduce risk and increase the speed of releases and updates. ..."
Aug 29, 2017 | insights.hpe.com
The quickie guide to continuous delivery in DevOps

In today's world, you have to develop and deliver almost in the same breath. Here's a quick guide to help you figure out which continuous delivery concepts will help you breathe easy, and which are only hot air. Developers are always under pressure to produce more and release software faster, which encourages the adoption of new concepts and tools. But confusing buzzwords obfuscate real technology and business benefits, particularly when a vendor has something to sell. That makes it hard to determine what works best -- for real, not just as a marketing phrase -- in the continuous flow of build and deliver processes. This article gives you the basics of continuous delivery to help you sort it all out.

To start with, the terms apply to different parts of the same production arc, each of which is automated to a different degree:

With continuous deployment, "a developer's job typically ends at reviewing a pull request from a teammate and merging it to the master branch," explains Marko Anastasov in a blog post . "A continuous integration/continuous deployment service takes over from there by running all tests and deploying the code to production, while keeping the team informed about [the] outcome of every important event."
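
For readers who want something concrete behind the terminology, here is a minimal sketch, in plain shell, of what such a pipeline stage boils down to; the build, test, and deploy commands (make build, make test, deploy.sh) are placeholders, not a reference to any real toolchain:

#!/bin/bash
# Hypothetical, bare-bones delivery stage: stop at the first failure,
# deploy only if the build and tests succeeded.
set -euo pipefail

git pull --ff-only            # fetch the merged master branch
make build                    # placeholder build step
make test                     # placeholder test suite; a non-zero exit aborts the script
./deploy.sh production        # placeholder deployment script
echo "Deployed $(git rev-parse --short HEAD) to production"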

However, knowing the terms and their definitions isn't enough to help you determine when and where it is best to use each. Because, of course, every shop is different.

It would be great if the market clearly distinguished between concepts and tools and their uses, as they do with terms like DevOps. Oh, wait.

"DevOps is a concept, an idea, a life philosophy," says Gottfried Sehringer, chief marketing officer at XebiaLabs , a software delivery automation company. "It's not really a process or a toolset, or a technology."

But, alas, industry terms are rarely spelled out that succinctly. Nor are they followed with hints and tips on how and when to use them. Hence this guide, which aims to help you learn when to use what.

Choose your accelerator according to your need for speed

But wait -- Isn't speed the key to all software development? These days, companies routinely require their developers to update or add features once per day, week, or month. This was unheard of back in the day, even in the era of agile software development .

That's not the end of it; some businesses push for software updates to be faster still. "If you work for Amazon, it might be every few seconds," says Sehringer.

Even if you're a deity of software bug-squashing, how can you -- or any developer or operations specialist -- deliver high-quality, "don't break anything" code when you have to build and release that fast? Everyone has their own magic bullet. "Agile -- " cries one crowd. " Continuous build -- " yells another. " Continuous integration -- " cheers a third.

Let's just cut to the chase on all that, shall we?

"Just think of continuous as 'automated,'" says Nate Berent-Spillson, senior delivery director at Nexient , a software services provider. "Automation is driving down cost and the time to develop and deploy."

Well, frack, why don't people just say automation?

Add to the idea of automation the concepts of continuous build, continuous delivery, continuous everything, which are central to DevOps, and we find ourselves talking in circles. So, let's get right to sorting all that out.

... ... ...

Rinse. Repeat, repeat, repeat, repeat (the point of automation in DevOps)

Automation has obvious returns on investment. "You can make sure it's good in pre-production and push it immediately to production without breaking anything, and then just repeat, repeat, repeat, over and over again," says Sehringer.

In other words, you move delivery through all the steps in a structured, repeatable, automated way to reduce risk and increase the speed of releases and updates.

"In an ideal world, you would push a button to release every few seconds," Sehringer says. But this is not an ideal world, and so people plug up the process along the way.

A company may need approval for an application change from its legal department. "Some companies are heavily regulated and may need additional gates to ensure compliance," notes Sehringer. "It's important to understand where these bottlenecks are." The ARA software should improve efficiencies and ensure the application is released or updated on schedule.

"Developers are more familiar with continuous integration," he says. "Application release automation is more recent and thus less understood."

... ... ...

Pam Baker has written hundreds of articles published in leading technology, business and finance publications including InformationWeek, Institutional Investor magazine, CIO.com, NetworkWorld, ComputerWorld, IT World, Linux World, and more. She has also authored several analytical studies on technology, eight books -- the latest of which is Data Divination: Big Data Strategies -- and an award-winning documentary on paper-making. She is a member of the National Press Club, Society of Professional Journalists and the Internet Press Guild.

[Aug 28, 2017] Rsync over ssh with root access on both sides

Aug 28, 2017 | serverfault.com

I have one older ubuntu server, and one newer debian server and I am migrating data from the old one to the new one. I want to use rsync to transfer data across to make final migration easier and quicker than the equivalent tar/scp/untar process.

As an example, I want to sync the home folders one at a time to the new server. This requires root access at both ends as not all files at the source side are world readable and the destination has to be written with correct permissions into /home. I can't figure out how to give rsync root access on both sides.

I've seen a few related questions, but none quite match what I'm trying to do.

I have sudo set up and working on both servers.

Tim Abell , asked Apr 28 '10 at 9:18
Accepted answer: Actually you do NOT need to allow root authentication via SSH to run rsync as Antoine suggests. The transport and system authentication can be done entirely over user accounts as long as you can run rsync with sudo on both ends for reading and writing the files.

As a user on your destination server you can suck the data from your source server like this:

sudo rsync -aPe ssh --rsync-path='sudo rsync' boron:/home/fred /home/

The user you run as on both servers will need passwordless* sudo access to the rsync binary, but you do NOT need to enable ssh login as root anywhere. If the user you are using doesn't match on the other end, you can add user@boron: to specify a different remote user.
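
A minimal sketch of the kind of sudoers entry this implies, assuming the account is called backupuser and rsync lives at /usr/bin/rsync; edit it with visudo, and note the caution in the comment further down about what passwordless rsync effectively grants:

# /etc/sudoers.d/rsync-backup (hypothetical)
# Allow backupuser to run only rsync as root without a password,
# which is what the --rsync-path='sudo rsync' trick above requires.
backupuser ALL=(root) NOPASSWD: /usr/bin/rsync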

Good luck.

*or you will need to have entered the password manually inside the timeout window.

Caleb , answered Apr 28 '10 at 22:06
Although this is an old question I'd like to add word of CAUTION to this accepted answer. From my understanding allowing passwordless "sudo rsync" is equivalent to open the root account to remote login. This is because with this it is very easy to gain full root access, e.g. because all system files can be downloaded, modified and replaced without a password. – Ascurion Jan 8 '16 at 16:30
If your data is not highly sensitive, you could use tar and socat. In my experience this is often faster than rsync over ssh.

You need socat or netcat on both sides.

On the target host, go to the directory where you would like to put your data, after that run: socat TCP-LISTEN:4444 - | tar xzf -

If the target host is listening, start it on the source like: tar czf - /home/fred /home/ | socat - TCP:ip-of-remote-server:4444

For this setup you'll need a reliably connection between the 2 servers.

Jeroen Moors , answered Apr 28 '10 at 21:20
Good point. In a trusted environment, you'll pick up a lot of speed by not encrypting. It might not matter on small files, but with GBs of data it will. – pboin May 18 '10 at 10:53
OK, I've pieced together all the clues to get something that works for me.

Let's call the servers "src" and "dest".

Set up a key pair for root on the destination server, and copy the public key to the source server:

dest $ sudo -i
dest # ssh-keygen
dest # scp /root/.ssh/id_rsa.pub tim@src:
dest # exit

Add the public key to root's authorized keys on the source server

src $ sudo -i
src # cp /home/tim/id_rsa.pub .ssh/authorized_keys

Back on the destination server, pull the data across with rsync:

dest $ sudo -i
dest # rsync -aP src:/home/fred /home/

[Aug 28, 2017] Unix Rsync Copy Hidden Dot Files and Directories Only by Vivek Gite

Feb 06, 2014 | www.cyberciti.biz

How do I use the rsync tool to copy only the hidden files and directory (such as ~/.ssh/, ~/.foo, and so on) from /home/jobs directory to the /mnt/usb directory under Unix like operating system?

The rsync program is used for synchronizing files over a network or local disks. To view or display only hidden files with ls command:

ls -ld ~/.??*

OR

ls -ld ~/.[^.]*

Sample outputs:

Fig.01: ls command to view only hidden files

rsync not synchronizing all hidden .dot files?

In this example, you used the pattern .[^.]* or .??* to select and display only hidden files using the ls command. You can use the same pattern with any Unix command, including the rsync command. The syntax to copy hidden files with rsync is as follows:

rsync -av /path/to/dir/.??* /path/to/dest
rsync -avzP /path/to/dir/.??* /mnt/usb
rsync -avzP $HOME/.??* user1@server1.cyberciti.biz:/path/to/backup/users/u/user1
rsync -avzP ~/.[^.]* user1@server1.cyberciti.biz:/path/to/backup/users/u/user1


In this example, copy all hidden files from my home directory to /mnt/test:

rsync -avzP ~/.[^.]* /mnt/test


Sample outputs:

Fig.02: Rsync example to copy only hidden files

Vivek Gite is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on Twitter , Facebook , Google+ .

[Aug 28, 2017] Could AI Transform Continuous Delivery Development

Notable quotes:
"... It's basically a bullshit bingo post where someone repeats a buzzword without any knowledge of the material behind it ..."
"... continuous delivery == constant change ..."
"... This might be good for developers, but it's a nightmare for the poor, bloody, customers. ..."
"... However, I come at it from the other side, the developers just push new development out and production support is responsible for addressing the mess, it is horrible, there is too much disconnect between developers and their resulting output creating consistent outages. The most successful teams follow the mantra "Eat your own dog food" , developers who support the crap they push ..."
"... But do you know who likes Continuous Delivery? Not the users. The users hate stuff changing for the sake of change, but trying to convince management seems an impossible task. ..."
"... some of us terrified what 'continious delivery' means in the context of software in the microcontroller of a health care device, ..."
"... It is a natural consequence of a continuous delivery, emphasis on always evolving and changing and that the developer is king and no one can question developer opinion. Developer decides it should move, it moves. No pesky human testers to stand up and say 'you confused the piss out of us' to make them rethink it. No automatic test is going to capture 'confuse the piss out of the poor non-developer users'. ..."
"... It's amazing how common this attitude has become. It's aggressively anti-customer, and a big part of the reason for the acceleration of the decline of software quality over the past several years. ..."
"... All I know is that, as a user, rapid-release or continuous delivery has been nothing but an enormous pain in the ass to me and I wish it would die the horrible death it deserves already. ..."
Aug 28, 2017 | developers.slashdot.org

Anonymous Coward writes:

Re: (Score: Insightful)

Yeah, this is an incredibly low quality article. It doesn't specify what it means by what AI should do, doesn't specify which type of AI, doesn't specify why AI should be used, etc. Junk article.

It's basically a bullshit bingo post where someone repeats a buzzword without any knowledge of the material behind it.

xbytor ( 215790 ) , Sunday August 27, 2017 @04:00PM ( #55093989 ) Homepage
buzzwords (Score: Funny)

> a new paradigm shift.

I stopped reading after this.

cyber-vandal ( 148830 ) writes:
Re: buzzwords

Not enough leveraging core competencies through blue sky thinking and synergistic best of breed cloud machine learning for you?

sycodon ( 149926 ) , Sunday August 27, 2017 @04:10PM ( #55094039 )
Same Old Thing (Score: Insightful)

Holy Fuck.

Continuous integration
Prototyping
Incremental development
Rapid application development
Agile development
Waterfall development
Spiral development

Now, introducing, "Continuous Delivery"...or something.

Here is the actual model, a model that will exist for the next 1,000 years.

1. Someone (or something) gathers requirements.
2. They get it wrong.
3. They develop the wrong thing that doesn't even work the way they thought it should.
4. The project leader is canned.
5. The software is implemented by an outside vendor, with all the flaws.
6. The software operates finally after 5 years of modifications to both the software and the workflows (to match the flaws in the software).
7. As soon as it's all running properly and everyone is trained, a new project is launched to redo it, "the right way".
8. Goto 1

AmazingRuss ( 555076 ) writes:
Re:

If everyone is stupid, no one is.

ColdWetDog ( 752185 ) writes:
Re:

No no. We got rid of line numbers a long time ago.

Graydyn Young ( 2835695 ) writes:
Re:

+1 Depressing

Tablizer ( 95088 ) writes:
AI meets Hunger Games

It's a genetic algorithm where YOU are the population being flushed out each cycle.

TheStickBoy ( 246518 ) writes:
Re:
Here is the actual model, a model that will exist for the next 1,000 years.

1. Someone (or something) gathers requirements.
2. They get it wrong.
3. They develop the wrong thing that doesn't even work the way they thought it should.
4. The project leader is canned.
5. The software is implemented by an outside vendor, with all the flaws.
6. The software operates finally after 5 years of modifications to both the software and the workflows (to match the flaws in the software).
7. As soon as it's all running properly and everyone is trained, a new project is launched to redo it, "the right way".
8. Goto 1

You just accurately described a 6-year project within our organization... and it made me cry. Does this model have a name? An Urban Dictionary name? If not, it needs one.

alvinrod ( 889928 ) , Sunday August 27, 2017 @04:15PM ( #55094063 )
Re:buzzwords (Score: Insightful)

Yeah, maybe there's something useful in TFA, but I'm not really inclined to go looking based on what was in the summary. At no point, did the person being quoted actually say anything of substance.

It's just buzzword soup with a dash of new technologies thrown in.

Five years ago they would have said practically the same words, but just talked about utilizing the cloud instead of AI.

I'm also a little skeptical of any study published by a company looking to sell you what the study has just claimed to be great. That doesn't mean it's a complete sham, but how hard did they look for other explanations for why some companies are more successful than others?

phantomfive ( 622387 ) writes:
Re:

At first I was skeptical, but I read some online reviews of it, and it looks pretty good [slashdot.org]. All you need is some AI and everything is better.

Anonymous Coward writes:
I smell Bullshit Bingo...

that's all, folks...

93 Escort Wagon ( 326346 ) writes:
Meeting goals

I notice the targets are all set from the company's point of view... including customer satisfaction. However it's quite easy to meet any goal, as long as you set it low enough.

Companies like Comcast or Qwest objectively have abysmal customer satisfaction ratings; but they likely meet their internal goal for that metric. I notice, in their public communications, they always use phrasing along the lines of "giving you an even better customer service experience" - again, the trick is to set the target low and

petes_PoV ( 912422 ) , Sunday August 27, 2017 @05:56PM ( #55094339 )
continuous delivery == constant change (Score: Insightful)

This might be good for developers, but it's a nightmare for the poor, bloody, customers.

Any professional outfit will test a new release (in-house or commercial product) thoroughly before letting it get anywhere close to an environment where their business is at stake.

This process can take anywhere from a day or two to several months, depending on the complexity of the operation, the scope of the changes, HOW MANY (developers note: not if any ) bugs are found and whether any alterations to working practices have to be introduced.

So to have developers lob a new "release" over the wall at frequent intervals is not useful, it isn't clever, nor does it save (the users) any money or speed up their acceptance. It just costs more in integration testing, floods the change control process with "issues" and means that when you report (again, developers: not if ) problems, it is virtually impossible to describe exactly which release you are referring to and even more impossible for whoever fixes the bugs to produce the same version to fix and then incorporate those fixes into whatever happens to be the latest version - that hour. Even more so when dozens of major corporate customers are ALL reporting bugs with each new version they test.

SethJohnson ( 112166 ) writes:
Re:

Any professional outfit will test a new release (in-house or commercial product) thoroughly before letting it get anywhere close to an environment where their business is at stake. This process can take anywhere from a day or two to several months, depending on the complexity of the operation, the scope of the changes, HOW MANY (developers note: not if any) bugs are found and whether any alterations to working practices have to be introduced.

I wanted to chime in with a tangible anecdote to support your

Herkum01 ( 592704 ) writes:
Re:

I can sympathize with that view, of it appearing to have too many developers focused upon deployment/testing rather than actual development.

However, I come at it from the other side: the developers just push new development out and production support is responsible for addressing the mess. It is horrible; there is too much disconnect between developers and their resulting output, creating consistent outages. The most successful teams follow the mantra "Eat your own dog food": developers who support the crap they push

JohnFen ( 1641097 ) writes:
Re:
This might be good for developers

It's not even good for developers.

AmazingRuss ( 555076 ) writes:
"a new paradigm shift."

Another one?

sethstorm ( 512897 ) writes:
Let's hope not.

AI is enough of a problem, why make it worse?

bobm ( 53783 ) writes:
According to one study

One study, well then I'm sold.

But do you know who likes Continuous Delivery? Not the users. The users hate stuff changing for the sake of change, but trying to convince management seems an impossible task.

angel'o'sphere ( 80593 ) writes:
Re:

Why should users not like it? If you shop on amazon you don't know if a specific feature you notice today came there via continuous delivery or a more traditional process.

Junta ( 36770 ) writes:
Re:

The crux of the problem is that we (in these discussions and the analysts) describe *all* manner of 'software development' as the same thing. Whether it's a desktop application, an embedded microcontroller in industrial equipment, a web application for people to get work done, or a webapp to let people see the latest funny cat video.

Then we start talking past each other, some of us terrified of what 'continuous delivery' means in the context of software in the microcontroller of a health care device, others t

angel'o'sphere ( 80593 ) writes:
Re:

Well, 'continuous delivery' is a term with a defined meaning. And releasing phone apps with unwanted UI/functionality in rapid succession is not part of that definition. Continuous delivery is basically only the next logical step after continuous integration. You deploy the new functionality automatically (or with a click of a button) when certain test criteria are met. Usually on a subset of your nodes so only a subset of your customers sees it. If you have crashes on those nodes or customer complaints you

JohnFen ( 1641097 ) writes:
Re:
You deploy the new functionality automatically (or with a click of a button) when certain test criteria are met. Usually on a subset of your nodes so only a subset of your customers sees it. If you have crashes on those nodes or customer complaints you roll back.

Why do you consider this to be a good thing? It's certainly not for those poor customers who were chosen to be involuntary beta testers, and it's also not for the rest of the customers who have to deal with software that is constantly changing underneath them.

Junta ( 36770 ) writes:
Re:
'continuous delivery' is a term with a defined meaning. And releasing phone apps with unwanted UI/functionality in rapid succession is not part of that definition.

It is a natural consequence of a continuous delivery, emphasis on always evolving and changing and that the developer is king and no one can question developer opinion. Developer decides it should move, it moves. No pesky human testers to stand up and say 'you confused the piss out of us' to make them rethink it. No automatic test is going to capture 'confuse the piss out of the poor non-developer users'.

If you have crashes on those nodes or customer complaints you roll back.

Note that a customer with a choice is likely to just go somewhere else rather than use your software.

manu0601 ( 2221348 ) writes:
AI written paper

I suspect that article was actually written by an AI. That would explain why it makes so little sense to human mind.

4wdloop ( 1031398 ) writes:
IT what?

IT in my company does network, Windows, Office and Virus etc. type of work. Is this what they talk about? Anyway, it's been long outsourced to IT (as in "Indian" technology)...

Comrade Ogilvy ( 1719488 ) writes:
For some businesses maybe but...

I recently interviewed at a couple of the new fangled big data marketing startups that correlate piles of stuff to help target ads better, and they were continuously deploying up the wazoo. In fact, they had something like zero people doing traditional QA.

It was not totally insane at all. But they did have a blasé attitude about deployments -- if stuff doesn't work in production they just roll back, and not worry about customer input data being dropped on the floor. Heck, they did not worry much about da

JohnFen ( 1641097 ) writes:
Re:
But they did have a blasé attitude about deployments -- if stuff doesn't work in production they just roll back, and not worry about customer input data being dropped on the floor.

It's amazing how common this attitude has become. It's aggressively anti-customer, and a big part of the reason for the acceleration of the decline of software quality over the past several years.

Njovich ( 553857 ) writes:
No

You want your deployment system to be predictable, and as my old AI professor used to say, intelligent means hard to predict. You don't want AI for systems that just have to do the exact same thing reliably over and over again.

angel'o'sphere ( 80593 ) writes:
Summary sounds retarded

A continuous delivery pipeline has as much AI as a nematode has natural intelligence ... probably even less.

Junta ( 36770 ) writes:
In other words...

Analyst who understands neither software development nor AI proceeds to try to sound insightful about both.

JohnFen ( 1641097 ) writes:
All I know is

All I know is that, as a user, rapid-release or continuous delivery has been nothing but an enormous pain in the ass to me and I wish it would die the horrible death it deserves already.

jmcwork ( 564008 ) writes:
Every morning: git update; make install

As long as customers are comfortable with doing this, I do not see a problem. Now, that will require that developers keep making continuous,

[Aug 28, 2017] rsync doesn't copy files with restrictive permissions

Aug 28, 2017 | superuser.com
When I try to copy files with rsync, it complains:
rsync: send_files failed to open "VirtualBox/Machines/Lubuntu/Lubuntu.vdi" \
(in media): Permission denied (13)

That file is not copied. Indeed the file permissions of that file are very restrictive on the server side:

-rw-------    1 1000     1000     3133181952 Nov  1  2011 Lubuntu.vdi

I call rsync with

sudo rsync -av --fake-super root@sheldon::media /mnt/media

The rsync daemon runs as root on the server. root can copy that file (of course). rsyncd has "fake super = yes" set in /etc/rsyncd.conf.

What can I do so that the file is copied without changing the permissions of the file on the server?

Torsten Bronger , asked Dec 29 '12 at 10:15
If you use RSync as daemon on destination, please post grep rsync /var/log/daemon to improve your question – F. Hauri Dec 29 '12 at 13:23
As you appear to have root access to both servers, have you tried --force?

Alternatively you could bypass the rsync daemon and try a direct sync e.g.

rsync -optg --rsh=/usr/bin/ssh --rsync-path=/usr/bin/rsync --verbose --recursive --delete-after --force  root@sheldon::media /mnt/media
arober11, answered Dec 29 '12 at 13:21
Using ssh means encryption, which makes things slower. --force does only affect directories, if I read the man page correctly. – Torsten Bronger Jan 1 '13 at 23:08
Unless you're using ancient kit, the CPU overhead of encrypting / decrypting the traffic shouldn't be noticeable, but you will lose 10-20% of your bandwidth through the encapsulation process. Then again 80% of a working link is better than 100% of a non-working one :) – arober11 Jan 2 '13 at 10:52
I do have an "ancient kit". ;-) (Slow ARM CPU on a NAS.) But I now mount the NAS with NFS and use rsync (with "sudo") locally. This solves the problem (and is even faster). However, I still think that my original problem must be solvable using the rsync protocol (remote, no ssh). – Torsten Bronger Jan 4 '13 at 7:55

[Aug 28, 2017] Using rsync under target user to copy home directories

Aug 28, 2017 | unix.stackexchange.com


nixnotwin , asked Sep 21 '12 at 5:11

On my Ubuntu server there are about 150 shell accounts. All usernames begin with the prefix u12.. I have root access and I am trying to copy a directory named "somefiles" to all the home directories. After copying the directory the user and group ownership of the directory should be changed to user's. Username, group and home-dir name are same. How can this be done?

Gilles , answered Sep 21 '12 at 23:44

Do the copying as the target user. This will automatically make the target files owned by that user. Make sure that the original files are world-readable (or at least readable by all the target users). Run chmod afterwards if you don't want the copied files to be world-readable.
getent passwd |
awk -F : '$1 ~ /^u12/ {print $1}' |
while IFS= read -r user; do
  su "$user" -c 'cp -Rp /original/location/somefiles ~/'
done
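Following on from the chmod suggestion above, a sketch of tightening the copies afterwards, reusing the same loop (the u12 prefix and the somefiles directory are the ones from the question):

getent passwd |
awk -F : '$1 ~ /^u12/ {print $1}' |
while IFS= read -r user; do
  su "$user" -c 'chmod -R go-rwx ~/somefiles'
done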

[Aug 28, 2017] rsync over SSH preserve ownership only for www-data owned files

Aug 28, 2017 | stackoverflow.com

jeffery_the_wind , asked Mar 6 '12 at 15:36

I am using rsync to replicate a web folder structure from a local server to a remote server. Both servers are ubuntu linux. I use the following command, and it works well:
rsync -az /var/www/ user@10.1.1.1:/var/www/

The usernames for the local system and the remote system are different. From what I have read it may not be possible to preserve all file and folder owners and groups. That is OK, but I would like to preserve owners and groups just for the www-data user, which does exist on both servers.

Is this possible? If so, how would I go about doing that?

Thanks!

** EDIT **

There is some mention of rsync being able to preserve ownership and groups on remote file syncs here: http://lists.samba.org/archive/rsync/2005-August/013203.html

** EDIT 2 **

I ended up getting the desired effect thanks to many of the helpful comments and answers here. Assuming the IP of the source machine is 10.1.1.2 and the IP of the destination machine is 10.1.1.1, I can use this line from the destination machine:

sudo rsync -az user@10.1.1.2:/var/www/ /var/www/

This preserves the ownership and groups of the files that have a common user name, like www-data. Note that using rsync without sudo does not preserve these permissions.

ghoti , answered Mar 6 '12 at 19:01

You can also sudo the rsync on the target host by using the --rsync-path option:
# rsync -av --rsync-path="sudo rsync" /path/to/files user@targethost:/path

This lets you authenticate as user on targethost, but still get privileged write permission through sudo . You'll have to modify your sudoers file on the target host to avoid sudo's request for your password. man sudoers or run sudo visudo for instructions and samples.
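As an illustration of such a sudoers entry (a sketch only; the username is a placeholder, and the same idea appears again in the forum thread further down this page):

# added on the target host via "sudo visudo"
user ALL=NOPASSWD: /usr/bin/rsync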

You mention that you'd like to retain the ownership of files owned by www-data, but not other files. If this is really true, then you may be out of luck unless you implement chown or a second run of rsync to update permissions. There is no way to tell rsync to preserve ownership for just one user .

That said, you should read about rsync's --files-from option.

rsync -av /path/to/files user@targethost:/path
find /path/to/files -user www-data -print | \
  rsync -av --files-from=- --rsync-path="sudo rsync" /path/to/files user@targethost:/path

I haven't tested this, so I'm not sure exactly how piping find's output into --files-from=- will work. You'll undoubtedly need to experiment.
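One hedged way that experiment might look: --files-from expects paths relative to the source directory, so generating the list with find . from inside that directory sidesteps the absolute-path issue (host and paths are the question's placeholders):

cd /path/to/files &&
  find . -user www-data -print |
  rsync -av --files-from=- --rsync-path="sudo rsync" . user@targethost:/path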

xato , answered Mar 6 '12 at 15:39

As far as I know, you cannot chown files to somebody other than yourself if you are not root. So you would have to rsync using the www-data account, as all files will be created with the specified user as owner, or you need to chown the files afterwards.

user2485267 , answered Jun 14 '13 at 8:22

I had a similar problem and cheated the rsync command,

rsync -avz --delete root@x.x.x.x:/home//domains/site/public_html/ /home/domains2/public_html && chown -R wwwusr:wwwgrp /home/domains2/public_html/

the && runs the chown against the folder when the rsync completes successfully (1x '&' would run the chown regardless of the rsync completion status)

Graham , answered Mar 6 '12 at 15:51

The root users for the local system and the remote system are different.

What does this mean? The root user is uid 0. How are they different?

Any user with read permission to the directories you want to copy can determine what usernames own what files. Only root can change the ownership of files being written .

You're currently running the command on the source machine, which restricts your writes to the permissions associated with user@10.1.1.1. Instead, you can try to run the command as root on the target machine. Your read access on the source machine isn't an issue.

So on the target machine (10.1.1.1), assuming the source is 10.1.1.2:

# rsync -az user@10.1.1.2:/var/www/ /var/www/

Make sure your groups match on both machines.

Also, set up access to user@10.1.1.2 using a DSA or RSA key, so that you can avoid having passwords floating around. For example, as root on your target machine, run:

# ssh-keygen -d

Then take the contents of the file /root/.ssh/id_dsa.pub and add it to ~user/.ssh/authorized_keys on the source machine. You can ssh user@10.1.1.2 as root from the target machine to see if it works. If you get a password prompt, check your error log to see why the key isn't working.
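Note that ssh-keygen -d creates a DSA key, which newer OpenSSH releases reject by default; a hedged modern equivalent of the same key setup would be:

# on the target machine, as root
ssh-keygen -t ed25519 -f /root/.ssh/id_ed25519
ssh-copy-id -i /root/.ssh/id_ed25519.pub user@10.1.1.2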

ghoti , answered Mar 6 '12 at 18:54

Well, you could skip the challenges of rsync altogether, and just do this through a tar tunnel.
sudo tar zcf - /path/to/files | \
  ssh user@remotehost "cd /some/path; sudo tar zxf -"

You'll need to set up your SSH keys as Graham described.

Note that this handles full directory copies, not incremental updates like rsync.

The idea here is that:

[Aug 28, 2017] rsync and file permissions

Aug 28, 2017 | superuser.com
I'm trying to use rsync to copy a set of files from one system to another. I'm running the command as a normal user (not root). On the remote system, the files are owned by apache and when copied they are obviously owned by the local account (fred).

My problem is that every time I run the rsync command, all files are re-synched even though they haven't changed. I think the issue is that rsync sees the file owners are different and my local user doesn't have the ability to change ownership to apache, but I'm not including the -a or -o options so I thought this would not be checked. If I run the command as root, the files come over owned by apache and do not come a second time if I run the command again. However I can't run this as root for other reasons. Here is the command:

/usr/bin/rsync --recursive --rsh=/usr/bin/ssh --rsync-path=/usr/bin/rsync --verbose root@server.example.com:/src/dir/ /local/dir
Fred Snertz, asked May 2 '11 at 23:43
Why can't you run rsync as root? On the remote system, does fred have read access to the apache-owned files? – chrishiestand May 3 '11 at 0:32
Ah, I left out the fact that there are ssh keys set up so that local fred can become remote root, so yes fred/root can read them. I know this is a bit convoluted but its real. – Fred Snertz May 3 '11 at 14:50
Always be careful when root can ssh into the machine. But if you have password and challenge response authentication disabled it's not as bad. – chrishiestand May 3 '11 at 17:32
Here's the answer to your problem:
-c, --checksum
      This changes the way rsync checks if the files have been changed and are in need of a  transfer.   Without  this  option,
      rsync  uses  a "quick check" that (by default) checks if each file's size and time of last modification match between the
      sender and receiver.  This option changes this to compare a 128-bit checksum for each file  that  has  a  matching  size.
      Generating  the  checksums  means  that both sides will expend a lot of disk I/O reading all the data in the files in the
      transfer (and this is prior to any reading that will be done to transfer changed files), so this  can  slow  things  down
      significantly.

      The  sending  side  generates  its checksums while it is doing the file-system scan that builds the list of the available
      files.  The receiver generates its checksums when it is scanning for changed files, and will checksum any file  that  has
      the  same  size  as the corresponding sender's file:  files with either a changed size or a changed checksum are selected
      for transfer.

      Note that rsync always verifies that each transferred file was correctly reconstructed on the receiving side by  checking
      a  whole-file  checksum  that is generated as the file is transferred, but that automatic after-the-transfer verification
      has nothing to do with this option's before-the-transfer "Does this file need to be updated?" check.

      For protocol 30 and beyond (first supported in 3.0.0), the checksum used is MD5.  For older protocols, the checksum  used
      is MD4.

So run:

/usr/bin/rsync -c --recursive --rsh=/usr/bin/ssh --rsync-path=/usr/bin/rsync --verbose root@server.example.com:/src/dir/ /local/dir

Note there may be a time+disk churn tradeoff by using this option. Personally, I'd probably just sync the file's mtimes too:

/usr/bin/rsync -t --recursive --rsh=/usr/bin/ssh --rsync-path=/usr/bin/rsync --verbose root@server.example.com:/src/dir/ /local/dir
chrishiestand, answered May 3 '11 at 17:48
Awesome. Thank you. Looks like the second option is going to work for me and I found the first very interesting. – Fred Snertz May 3 '11 at 18:40
psst, hit the green checkbox to give my answer credit ;-) Thx. – chrishiestand May 12 '11 at 1:56

[Aug 28, 2017] Why does rsync fail to copy files from /sys in Linux?

Notable quotes:
"... pseudo file system ..."
"... pseudo filesystems ..."
Aug 28, 2017 | unix.stackexchange.com


Eugene Yarmash , asked Apr 24 '13 at 16:35

I have a bash script which uses rsync to backup files in Archlinux. I noticed that rsync failed to copy a file from /sys , while cp worked just fine:
# rsync /sys/class/net/enp3s1/address /tmp    
rsync: read errors mapping "/sys/class/net/enp3s1/address": No data available (61)
rsync: read errors mapping "/sys/class/net/enp3s1/address": No data available (61)
ERROR: address failed verification -- update discarded.
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1052) [sender=3.0.9]

# cp  /sys/class/net/enp3s1/address /tmp   ## this works

I wonder why does rsync fail, and is it possible to copy the file with it?

mattdm , answered Apr 24 '13 at 18:20

Rsync has code which specifically checks if a file is truncated during read and gives this error: ENODATA . I don't know why the files in /sys have this behavior, but since they're not real files, I guess it's not too surprising. There doesn't seem to be a way to tell rsync to skip this particular check.

I think you're probably better off not rsyncing /sys and using specific scripts to cherry-pick out the particular information you want (like the network card address).
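A minimal sketch of such a cherry-picking script, assuming the handful of /sys entries shown here is what you actually care about (the paths and the destination directory are illustrative, and some entries are readable by root only):

#!/bin/sh
dest=/backup/sysinfo
mkdir -p "$dest"
for f in /sys/class/net/*/address /sys/class/dmi/id/product_uuid; do
    [ -r "$f" ] || continue
    # flatten the sysfs path into a plain file name
    cp "$f" "$dest/$(echo "$f" | tr / _)"
done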

Runium , answered Apr 25 '13 at 0:23

First off /sys is a pseudo file system . If you look at /proc/filesystems you will find a list of registered file systems where quite a few have nodev in front. This indicates they are pseudo filesystems . This means they exist on a running kernel as a RAM-based filesystem. Further they do not require a block device.
$ cat /proc/filesystems
nodev   sysfs
nodev   rootfs
nodev   bdev
...

At boot the kernel mounts this system and updates entries when suited, e.g. when new hardware is found during boot or by udev .

In /etc/mtab you typically find the mount by:

sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0

For a nice paper on the subject read Patrick Mochel's – The sysfs Filesystem .


stat of /sys files

If you go into a directory under /sys and do an ls -l you will notice that all files have one size, typically 4096 bytes. This is reported by sysfs .

:/sys/devices/pci0000:00/0000:00:19.0/net/eth2$ ls -l
-r--r--r-- 1 root root 4096 Apr 24 20:09 addr_assign_type
-r--r--r-- 1 root root 4096 Apr 24 20:09 address
-r--r--r-- 1 root root 4096 Apr 24 20:09 addr_len
...

Further you can do a stat on a file and notice another distinct feature: it occupies 0 blocks. Also the inode of the root (stat /sys) is 1, while /stat/fs typically has inode 2, etc.
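A quick way to see both oddities at once (the interface name and the 18-byte content length below are just illustrative values):

$ stat -c '%s bytes, %b blocks' /sys/class/net/eth2/address
4096 bytes, 0 blocks
$ wc -c < /sys/class/net/eth2/address
18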

rsync vs. cp

The easiest explanation for rsync's failure to synchronize pseudo files is perhaps by example.

Say we have a file named address that is 18 bytes. An ls or stat of the file reports 4096 bytes.


rsync
  1. Opens file descriptor, fd.
  2. Uses fstat(fd) to get information such as size.
  3. Set out to read size bytes, i.e. 4096. That would be line 253 of the code linked by @mattdm . read_size == 4096
    1. Ask; read: 4096 bytes.
    2. A short string is read i.e. 18 bytes. nread == 18
    3. read_size = read_size - nread (4096 - 18 = 4078)
    4. Ask; read: 4078 bytes
    5. 0 bytes read (as first read consumed all bytes in file).
    6. nread == 0 , line 255
    7. Unable to read 4096 bytes. Zero out buffer.
    8. Set error ENODATA .
    9. Return.
  4. Report error.
  5. Retry. (Above loop).
  6. Fail.
  7. Report error.
  8. FINE.

During this process it actually reads the entire file. But with no size available it cannot validate the result – thus failure is the only option.

cp
  1. Opens file descriptor, fd.
  2. Uses fstat(fd) to get information such as st_size (also uses lstat and stat).
  3. Check if file is likely to be sparse. That is the file has holes etc.
    copy.c:1010
    /* Use a heuristic to determine whether SRC_NAME contains any sparse
     * blocks.  If the file has fewer blocks than would normally be
     * needed for a file of its size, then at least one of the blocks in
     * the file is a hole.  */
    sparse_src = is_probably_sparse (&src_open_sb);
    

    As stat reports file to have zero blocks it is categorized as sparse.

  4. Tries to read file by extent-copy (a more efficient way to copy normal sparse files), and fails.
  5. Copy by sparse-copy.
    1. Starts out with max read size of MAXINT.
      Typically 18446744073709551615 bytes on a 32 bit system.
    2. Ask; read 4096 bytes. (Buffer size allocated in memory from stat information.)
    3. A short string is read i.e. 18 bytes.
    4. Check if a hole is needed, nope.
    5. Write buffer to target.
    6. Subtract 18 from max read size.
    7. Ask; read 4096 bytes.
    8. 0 bytes as all got consumed in first read.
    9. Return success.
  6. All OK. Update flags for file.
  7. FINE.


Might be related, but extended attribute calls will fail on sysfs:

[root@hypervisor eth0]# lsattr address

lsattr: Inappropriate ioctl for device While reading flags on address

[root@hypervisor eth0]#

Looking at my strace it looks like rsync tries to pull in extended attributes by default:

22964 <... getxattr resumed> , 0x7fff42845110, 132) = -1 ENODATA (No data available)

I tried finding a flag to give rsync to see if skipping extended attributes resolves the issue but wasn't able to find anything ( --xattrs turns them on at the destination).

[Aug 28, 2017] Rsync doesn't copy everything

Aug 28, 2017 | ubuntuforums.org

View Full Version : [ubuntu] Rsync doesn't copy everything



Scormen May 31st, 2009, 10:09 AM Hi all,

I'm having some trouble with rsync. I'm trying to sync my local /etc directory to a remote server, but this won't work.

The problem is that it seems it doesn't copy all the files.
The local /etc dir contains 15MB of data; after an rsync, the remote backup contains only 4.6MB of data.

Rsync is running by root. I'm using this command:

rsync --rsync-path="sudo rsync" -e "ssh -i /root/.ssh/backup" -avz --delete --delete-excluded -h --stats /etc kris@192.168.1.3:/home/kris/backup/laptopkris

I hope someone can help.
Thanks!

Kris


Scormen May 31st, 2009, 11:05 AM I found that if I do a local sync, everything goes fine.
But if I do a remote sync, it copies only 4.6MB.

Any idea?


LoneWolfJack May 31st, 2009, 05:14 PM never used rsync on a remote machine, but "sudo rsync" looks wrong. you probably can't call sudo like that so the ssh connection needs to have the proper privileges for executing rsync.

just an educated guess, though.


Scormen May 31st, 2009, 05:24 PM Thanks for your answer.

In /etc/sudoers I have added next line, so "sudo rsync" will work.

kris ALL=NOPASSWD: /usr/bin/rsync

I also tried without --rsync-path="sudo rsync", but without success.

I have also tried on the server to pull the files from the laptop, but that doesn't work either.


LoneWolfJack May 31st, 2009, 05:30 PM in the rsync help file it says that --rsync-path is for the path to rsync on the remote machine, so my guess is that you can't use sudo there as it will be interpreted as a path.

so you will have to do --rsync-path="/path/to/rsync" and make sure the ssh login has root privileges if you need them to access the files you want to sync.

--rsync-path="sudo rsync" probably fails because
a) sudo is interpreted as a path
b) the space isn't escaped
c) sudo probably won't allow itself to be called remotely

again, this is not more than an educated guess.


Scormen May 31st, 2009, 05:45 PM I understand what you mean, so I tried also:

rsync -Cavuhzb --rsync-path="/usr/bin/rsync" -e "ssh -i /root/.ssh/backup" /etc kris@192.168.1.3:/home/kris/backup/laptopkris

Then I get this error:

sending incremental file list
rsync: recv_generator: failed to stat "/home/kris/backup/laptopkris/etc/chatscripts/pap": Permission denied (13)
rsync: recv_generator: failed to stat "/home/kris/backup/laptopkris/etc/chatscripts/provider": Permission denied (13)
rsync: symlink "/home/kris/backup/laptopkris/etc/cups/ssl/server.crt" -> "/etc/ssl/certs/ssl-cert-snakeoil.pem" failed: Permission denied (13)
rsync: symlink "/home/kris/backup/laptopkris/etc/cups/ssl/server.key" -> "/etc/ssl/private/ssl-cert-snakeoil.key" failed: Permission denied (13)
rsync: recv_generator: failed to stat "/home/kris/backup/laptopkris/etc/ppp/peers/provider": Permission denied (13)
rsync: recv_generator: failed to stat "/home/kris/backup/laptopkris/etc/ssl/private/ssl-cert-snakeoil.key": Permission denied (13)

sent 86.85K bytes received 306 bytes 174.31K bytes/sec
total size is 8.71M speedup is 99.97
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1058) [sender=3.0.5]

And the same command with "root" instead of "kris".
Then, I get no errors, but I still don't have all the files synced.


Scormen June 1st, 2009, 09:00 AM Sorry for this bump.
I'm still having the same problem.

Any idea?

Thanks.


binary10 June 1st, 2009, 10:36 AM I understand what you mean, so I tried also:

rsync -Cavuhzb --rsync-path="/usr/bin/rsync" -e "ssh -i /root/.ssh/backup" /etc kris@192.168.1.3:/home/kris/backup/laptopkris

Then I get this error:

And the same command with "root" instead of "kris".
Then, I get no errors, but I still don't have all the files synced.

Maybe there's a nicer way, but you could place /usr/bin/rsync into a private protected area, set the owner to root, set the setuid bit on it, and change your rsync-path argument like so:

# on the remote side, aka kris@192.168.1.3
mkdir priv-area
# protect it from normal users running a priv version of rsync
chmod 700 priv-area
cd priv-area
cp -p /usr/local/bin/rsync ./rsync-priv
sudo chown 0:0 ./rsync-priv
sudo chmod +s ./rsync-priv
ls -ltra # rsync-priv should now be 'bold-red' in bash

Looking at your flags, you've specified a cvs ignore factor, ignore files that are updated on the target, and you're specifying a backup of removed files.

rsync -Cavuhzb --rsync-path="/home/kris/priv-area/rsync-priv" -e "ssh -i /root/.ssh/backup" /etc kris@192.168.1.3:/home/kris/backup/laptopkris

From those qualifiers you're not going to be getting everything sync'd. It's doing what you're telling it to do.

If you really wanted to perform a like-for-like backup (not keeping stuff that's been changed/deleted from the source), I'd go for something like the following.

rsync --archive --delete --hard-links --one-file-system --acls --xattrs --dry-run -i --rsync-path="/home/kris/priv-area/rsync-priv" --rsh="ssh -i /root/.ssh/backup" /etc/ kris@192.168.1.3:/home/kris/backup/laptopkris/etc/

Remove the --dry-run and -i when you're happy with the output, and it should do what you want. A word of warning, I get a bit nervous when not seeing trailing (/) on directories as it could lead to all sorts of funnies if you end up using rsync on softlinks.


Scormen June 1st, 2009, 12:19 PM Thanks for your help, binary10.

I've tried what you have said, but still, I only receive 4.6MB on the remote server.
Thanks for the warning, I'll note that!

Did someone already tried to rsync their own /etc to a remote system? Just to know if this strange thing only happens to me...

Thanks.


binary10 June 1st, 2009, 01:22 PM Thanks for your help, binary10.

I've tried what you have said, but still, I only receive 4.6MB on the remote server.
Thanks for the warning, I'll note that!

Did someone already tried to rsync their own /etc to a remote system? Just to know if this strange thing only happens to me...

Thanks.

Ok so I've gone back and looked at your original post, how are you calculating 15MB of data under etc - via a du -hsx /etc/ ??

I do daily drive to drive backup copies via rsync and drive to network copies.. and have used them recently for restoring.

Sure my du -hsx /etc/ reports 17MB of data of which 10MB gets transferred via an rsync. My backup drives still operate.

rsync 3.0.6 has some fixes to do with ACLs and special devices rsyncing between solaris. but I think 3.0.5 is still ok with ubuntu to ubuntu systems.

Here is my test doing exactly what you you're probably trying to do. I even check the remote end..

binary10@jsecx25:~/bin-priv$ ./rsync --archive --delete --hard-links --one-file-system --stats --acls --xattrs --human-readable --rsync-path="~/bin/rsync-priv-os-specific" --rsh="ssh" /etc/ rsyncbck@10.0.0.21:/home/kris/backup/laptopkris/etc/

Number of files: 3121
Number of files transferred: 1812
Total file size: 10.04M bytes
Total transferred file size: 10.00M bytes
Literal data: 10.00M bytes
Matched data: 0 bytes
File list size: 109.26K
File list generation time: 0.002 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 10.20M
Total bytes received: 38.70K

sent 10.20M bytes received 38.70K bytes 4.09M bytes/sec
total size is 10.04M speedup is 0.98

binary10@jsecx25:~/bin-priv$ sudo du -hsx /etc/
17M /etc/
binary10@jsecx25:~/bin-priv$

And then on the remote system I do the du -hsx

binary10@lenovo-n200:/home/kris/backup/laptopkris/etc$ cd ..
binary10@lenovo-n200:/home/kris/backup/laptopkris$ sudo du -hsx etc
17M etc
binary10@lenovo-n200:/home/kris/backup/laptopkris$


Scormen June 1st, 2009, 01:35 PM ow are you calculating 15MB of data under etc - via a du -hsx /etc/ ??
Indeed, on my laptop I see:

root@laptopkris:/home/kris# du -sh /etc/
15M /etc/

If I do the same thing after a fresh sync to the server, I see:

root@server:/home/kris# du -sh /home/kris/backup/laptopkris/etc/
4.6M /home/kris/backup/laptopkris/etc/

On both sides, I have installed Ubuntu 9.04, with version 3.0.5 of rsync.
So strange...


binary10 June 1st, 2009, 01:45 PM it does seem a bit odd.

I'd start doing a few diffs from the outputs find etc/ -printf "%f %s %p %Y\n" | sort

And see what type of files are missing.

- edit - Added the %Y file type.
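One possible way to compare the two listings once they have been generated on each machine (the file names are the ones posted below; the process substitution requires bash):

diff <(sort laptop.files) <(sort server.files) | less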


Scormen June 1st, 2009, 01:58 PM Hmm, it's going stranger.
Now I see that I have all my files on the server, but they don't have their full size (bytes).

I have uploaded the files, so you can look into them.

Laptop: http://www.linuxontdekt.be/files/laptop.files
Server: http://www.linuxontdekt.be/files/server.files


binary10 June 1st, 2009, 02:16 PM If you look at the files that are different aka the ssl's they are links to local files else where aka linked to /usr and not within /etc/

aka they are different on your laptop and the server


Scormen June 1st, 2009, 02:25 PM I understand that soft links are just copied, and not the "full file".

But, you have run the same command to test, a few posts ago.
How is it possible that you can see the full 15MB?


binary10 June 1st, 2009, 02:34 PM I was starting to think that this was a bug with du.

The de-referencing is a bit topsy.

If you rsync copy the remote backup back to a new location on the laptop and do the du command, I wonder if you'll end up with 15MB again.


Scormen June 1st, 2009, 03:20 PM Good tip.

On the server side, the backup of the /etc was still 4.6MB.
I have rsynced it back to the laptop, to a new directory.

If I go on the laptop to that new directory and do a du, it says 15MB.


binary10 June 1st, 2009, 03:34 PM Good tip.

On the server side, the backup of the /etc was still 4.6MB.
I have rsynced it back to the laptop, to a new directory.

If I go on the laptop to that new directory and do a du, it says 15MB.

I think you've now confirmed that RSYNC DOES copy everything.. just that du confused what you had expected by counting the sizes of the link targets.

It might also be worth thinking about what you're copying; maybe you need more than just /etc. Of course it depends on what you are trying to do with the backup :)

enjoy.


Scormen June 1st, 2009, 03:37 PM Yeah, it seems to work well.
So, the "problem" where just the soft links, that couldn't be counted on the server side?
binary10 June 1st, 2009, 04:23 PM Yeah, it seems to work well.
So, the "problem" where just the soft links, that couldn't be counted on the server side?

The links were copied as links as per the design of the --archive in rsync.

The contents the links point to were different between your two systems, these being files that reside outside of /etc/, in /usr, and so du reports them differently.


Scormen June 1st, 2009, 05:36 PM Okay, I got it.
Many thanks for the support, binary10!
Scormen June 1st, 2009, 05:59 PM Just to know, is it possible to copy the data from these links as real, hard data?
Thanks.
binary10 June 2nd, 2009, 09:54 AM Just to know, is it possible to copy the data from these links as real, hard data?
Thanks.

Yep absolutely

You should then look at other possibilities of:

-L, --copy-links transform symlink into referent file/dir
--copy-unsafe-links only "unsafe" symlinks are transformed
--safe-links ignore symlinks that point outside the source tree
-k, --copy-dirlinks transform symlink to a dir into referent dir
-K, --keep-dirlinks treat symlinked dir on receiver as dir

but then you'll have to start questioning why you are backing them up like that, especially stuff under /etc/. If you ever wanted to restore it you'd be restoring full files and not symlinks; the restore result could be a nightmare, as well as create future issues (upgrades etc), let alone that your backup will be significantly larger, could be 150MB instead of 4MB.
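For completeness, a dereferencing copy along the lines of the -L option listed above might look like this (host and destination path are the ones from this thread; a sketch, not a recommendation):

rsync --archive --copy-links /etc/ kris@192.168.1.3:/home/kris/backup/laptopkris/etc/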


Scormen June 2nd, 2009, 10:04 AM Okay, now I'm sure what it's doing :)
Is it also possible to show on a system the "real disk usage" of e.g. that /etc directory? So, without the links, such that we get an output of 4.6MB.

Thank you very much for your help!


binary10 June 2nd, 2009, 10:22 AM What does the following respond with.

sudo du --apparent-size -hsx /etc

If you want the real answer then your result from a dry-run rsync will only be enough for you.

sudo rsync --dry-run --stats -h --archive /etc/ /tmp/etc/

[Aug 21, 2017] As the crisis unfolds there will be talk about giving the UN some role in resolving international problems.

Aug 21, 2017 | www.lettinggobreath.com

psychohistorian | Aug 21, 2017 12:01:32 AM | 27

My understanding of the UN is that it is the High Court of the World where fealty is paid to empire that funds most of the political circus anyway...and speaking of funding or not, read the following link and let's see what PavewayIV adds to the potential sickness we are sleepwalking into.

As the UN delays talks, more industry leaders back ban on weaponized AI

[Jul 29, 2017] linux - Directory bookmarking for bash - Stack Overflow

Notable quotes:
"... May you wan't to change this alias to something which fits your needs ..."
Jul 29, 2017 | stackoverflow.com

getmizanur , asked Sep 10 '11 at 20:35

Is there any directory bookmarking utility for bash to allow move around faster on the command line?

UPDATE

Thanks guys for the feedback however I created my own simple shell script (feel free to modify/expand it)

function cdb() {
    USAGE="Usage: cdb [-c|-g|-d|-l] [bookmark]" ;
    if  [ ! -e ~/.cd_bookmarks ] ; then
        mkdir ~/.cd_bookmarks
    fi

    case $1 in
        # create bookmark
        -c) shift
            if [ ! -f ~/.cd_bookmarks/$1 ] ; then
                echo "cd `pwd`" > ~/.cd_bookmarks/"$1" ;
            else
                echo "Try again! Looks like there is already a bookmark '$1'"
            fi
            ;;
        # goto bookmark
        -g) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then 
                source ~/.cd_bookmarks/"$1"
            else
                echo "Mmm...looks like your bookmark has spontaneously combusted. What I mean to say is that your bookmark does not exist." ;
            fi
            ;;
        # delete bookmark
        -d) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then 
                rm ~/.cd_bookmarks/"$1" ;
            else
                echo "Oops, forgot to specify the bookmark" ;
            fi    
            ;;
        # list bookmarks
        -l) shift
            ls -l ~/.cd_bookmarks/ ;
            ;;
         *) echo "$USAGE" ;
            ;;
    esac
}

INSTALL

1./ create a file ~/.cdb and copy the above script into it.

2./ in your ~/.bashrc add the following

if [ -f ~/.cdb ]; then
    source ~/.cdb
fi

3./ restart your bash session

USAGE

1./ to create a bookmark

$cd my_project
$cdb -c project1

2./ to goto a bookmark

$cdb -g project1

3./ to list bookmarks

$cdb -l

4./ to delete a bookmark

$cdb -d project1

5./ where are all my bookmarks stored?

$cd ~/.cd_bookmarks

Fredrik Pihl , answered Sep 10 '11 at 20:47

Also, have a look at CDPATH

A colon-separated list of search paths available to the cd command, similar in function to the $PATH variable for binaries. The $CDPATH variable may be set in the local ~/.bashrc file.

bash$ cd bash-doc
bash: cd: bash-doc: No such file or directory

bash$ CDPATH=/usr/share/doc
bash$ cd bash-doc
/usr/share/doc/bash-doc

bash$ echo $PWD
/usr/share/doc/bash-doc

and

cd -

It's the command-line equivalent of the back button (takes you to the previous directory you were in).
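For example (directories chosen to match the CDPATH example above; cd - also prints the directory it switches back to):

$ cd /usr/share/doc
$ cd /tmp
$ cd -
/usr/share/doc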

ajreal , answered Sep 10 '11 at 20:41

In bash script/command,
you can use pushd and popd

pushd

Save and then change the current directory. With no arguments, pushd exchanges the top two directories.

Usage

cd /abc
pushd /xxx    <-- save /abc to environment variables and cd to /xxx
pushd /zzz
pushd +1      <-- cd /xxx

popd removes the top directory from the stack and changes to the directory that is then on top (the reverse operation).
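A short worked example of the stack behaviour (directory names follow the usage block above; dirs prints the current stack):

$ cd /abc
$ pushd /xxx     # stack: /xxx /abc, now in /xxx
$ pushd /zzz     # stack: /zzz /xxx /abc, now in /zzz
$ popd           # drops /zzz, back in /xxx
$ dirs
/xxx /abc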

fgm , answered Sep 11 '11 at 8:28

bookmarks.sh provides a bookmark management system for Bash version 4.0+. It can also use a Midnight Commander hotlist.

Dmitry Frank , answered Jun 16 '15 at 10:22

Thanks for sharing your solution, and I'd like to share mine as well, which I find more useful than anything else I've came across before.

The engine is a great, universal tool: fzf, the command-line fuzzy finder by Junegunn.

It primarily allows you to "fuzzy-find" files in a number of ways, but it also allows to feed arbitrary text data to it and filter this data. So, the shortcuts idea is simple: all we need is to maintain a file with paths (which are shortcuts), and fuzzy-filter this file. Here's how it looks: we type cdg command (from "cd global", if you like), get a list of our bookmarks, pick the needed one in just a few keystrokes, and press Enter. Working directory is changed to the picked item:

It is extremely fast and convenient: usually I just type 3-4 letters of the needed item, and all others are already filtered out. Additionally, of course we can move through list with arrow keys or with vim-like keybindings Ctrl+j / Ctrl+k .
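A minimal sketch of the idea, assuming fzf is installed and the bookmarks are kept one absolute path per line in ~/.cdg_paths (both the function body and the file name are assumptions, not the author's exact implementation):

# pick a bookmarked directory with fzf and cd into it
cdg() {
    local dest
    dest=$(fzf < ~/.cdg_paths) && cd "$dest"
}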

Article with details: Fuzzy shortcuts for your shell .

It is possible to use it for GUI applications as well (via xterm): I use that for my GUI file manager Double Commander . I have plans to write an article about this use case, too.

return42 , answered Feb 6 '15 at 11:56

Inspired by the question and answers here, I added the lines below to my ~/.bashrc file.

With this you have a favdir command (function) to manage your favorites and a autocompletion function to select an item from these favorites.

# ---------
# Favorites
# ---------

__favdirs_storage=~/.favdirs
__favdirs=( "$HOME" )

containsElement () {
    local e
    for e in "${@:2}"; do [[ "$e" == "$1" ]] && return 0; done
    return 1
}

function favdirs() {

    local cur
    local IFS
    local GLOBIGNORE

    case $1 in
        list)
            echo "favorite folders ..."
            printf -- ' - %s\n' "${__favdirs[@]}"
            ;;
        load)
            if [[ ! -e $__favdirs_storage ]] ; then
                favdirs save
            fi
            # mapfile requires bash 4 / my OS-X bash vers. is 3.2.53 (from 2007 !!?!).
            # mapfile -t __favdirs < $__favdirs_storage
            IFS=$'\r\n' GLOBIGNORE='*' __favdirs=($(< $__favdirs_storage))
            ;;
        save)
            printf -- '%s\n' "${__favdirs[@]}" > $__favdirs_storage
            ;;
        add)
            cur=${2-$(pwd)}
            favdirs load
            if containsElement "$cur" "${__favdirs[@]}" ; then
                echo "'$cur' allready exists in favorites"
            else
                __favdirs+=( "$cur" )
                favdirs save
                echo "'$cur' added to favorites"
            fi
            ;;
        del)
            cur=${2-$(pwd)}
            favdirs load
            local i=0
            for fav in ${__favdirs[@]}; do
                if [ "$fav" = "$cur" ]; then
                    echo "delete '$cur' from favorites"
                    unset __favdirs[$i]
                    favdirs save
                    break
                fi
                let i++
            done
            ;;
        *)
            echo "Manage favorite folders."
            echo ""
            echo "usage: favdirs [ list | load | save | add | del ]"
            echo ""
            echo "  list : list favorite folders"
            echo "  load : load favorite folders from $__favdirs_storage"
            echo "  save : save favorite directories to $__favdirs_storage"
            echo "  add  : add directory to favorites [default pwd $(pwd)]."
            echo "  del  : delete directory from favorites [default pwd $(pwd)]."
    esac
} && favdirs load

function __favdirs_compl_command() {
    COMPREPLY=( $( compgen -W "list load save add del" -- ${COMP_WORDS[COMP_CWORD]}))
} && complete -o default -F __favdirs_compl_command favdirs

function __favdirs_compl() {
    local IFS=$'\n'
    COMPREPLY=( $( compgen -W "${__favdirs[*]}" -- ${COMP_WORDS[COMP_CWORD]}))
}

alias _cd='cd'
complete -F __favdirs_compl _cd

Within the last two lines, an alias to change the current directory (with autocompletion) is created. With this alias ( _cd ) you are able to change to one of your favorite directories. You may want to change this alias to something which fits your needs.

With the function favdirs you can manage your favorites (see usage).

$ favdirs 
Manage favorite folders.

usage: favdirs [ list | load | save | add | del ]

  list : list favorite folders
  load : load favorite folders from ~/.favdirs
  save : save favorite directories to ~/.favdirs
  add  : add directory to favorites [default pwd /tmp ].
  del  : delete directory from favorites [default pwd /tmp ].

Zied , answered Mar 12 '14 at 9:53

Yes there is DirB: Directory Bookmarks for Bash well explained in this Linux Journal article

An example from the article:

% cd ~/Desktop
% s d       # save(bookmark) ~/Desktop as d
% cd /tmp   # go somewhere
% pwd
/tmp
% g d       # go to the desktop
% pwd
/home/Desktop

Al Conrad , answered Sep 4 '15 at 16:10

@getmizanur I used your cdb script. I enhanced it slightly by adding bookmarks tab completion. Here's my version of your cdb script.
_cdb()
{
    local _script_commands=$(ls -1 ~/.cd_bookmarks/)
    local cur=${COMP_WORDS[COMP_CWORD]}

    COMPREPLY=( $(compgen -W "${_script_commands}" -- $cur) )
}
complete -F _cdb cdb


function cdb() {

    local USAGE="Usage: cdb [-h|-c|-d|-g|-l|-s] [bookmark]\n
    \t[-h or no args] - prints usage help\n
    \t[-c bookmark] - create bookmark\n
    \t[-d bookmark] - delete bookmark\n
    \t[-g bookmark] - goto bookmark\n
    \t[-l] - list bookmarks\n
    \t[-s bookmark] - show bookmark location\n
    \t[bookmark] - same as [-g bookmark]\n
    Press tab for bookmark completion.\n"        

    if  [ ! -e ~/.cd_bookmarks ] ; then
        mkdir ~/.cd_bookmarks
    fi

    case $1 in
        # create bookmark
        -c) shift
            if [ ! -f ~/.cd_bookmarks/$1 ] ; then
                echo "cd `pwd`" > ~/.cd_bookmarks/"$1"
                complete -F _cdb cdb
            else
                echo "Try again! Looks like there is already a bookmark '$1'"
            fi
            ;;
        # goto bookmark
        -g) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then
                source ~/.cd_bookmarks/"$1"
            else
                echo "Mmm...looks like your bookmark has spontaneously combusted. What I mean to say is that your bookmark does not exist." ;
            fi
            ;;
        # show bookmark
        -s) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then
                cat ~/.cd_bookmarks/"$1"
            else
                echo "Mmm...looks like your bookmark has spontaneously combusted. What I mean to say is that your bookmark does not exist." ;
            fi
            ;;
        # delete bookmark
        -d) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then
                rm ~/.cd_bookmarks/"$1" ;
            else
                echo "Oops, forgot to specify the bookmark" ;
            fi
            ;;
        # list bookmarks
        -l) shift
            ls -1 ~/.cd_bookmarks/ ;
            ;;
        -h) echo -e $USAGE ;
            ;;
        # goto bookmark by default
        *)
            if [ -z "$1" ] ; then
                echo -e $USAGE
            elif [ -f ~/.cd_bookmarks/$1 ] ; then
                source ~/.cd_bookmarks/"$1"
            else
                echo "Mmm...looks like your bookmark has spontaneously combusted. What I mean to say is that your bookmark does not exist." ;
            fi
            ;;
    esac
}

tobimensch , answered Jun 5 '16 at 21:31

Yes, one that I have written, that is called anc.

https://github.com/tobimensch/anc

Anc stands for anchor, but anc's anchors are really just bookmarks.

It's designed for ease of use and there're multiple ways of navigating, either by giving a text pattern, using numbers, interactively, by going back, or using [TAB] completion.

I'm actively working on it and open to input on how to make it better.

Allow me to paste the examples from anc's github page here:

# make the current directory the default anchor:
$ anc s

# go to /etc, then /, then /usr/local and then back to the default anchor:
$ cd /etc; cd ..; cd usr/local; anc

# go back to /usr/local :
$ anc b

# add another anchor:
$ anc a $HOME/test

# view the list of anchors (the default one has the asterisk):
$ anc l
(0) /path/to/first/anchor *
(1) /home/usr/test

# jump to the anchor we just added:
# by using its anchor number
$ anc 1
# or by jumping to the last anchor in the list
$ anc -1

# add multiple anchors:
$ anc a $HOME/projects/first $HOME/projects/second $HOME/documents/first

# use text matching to jump to $HOME/projects/first
$ anc pro fir

# use text matching to jump to $HOME/documents/first
$ anc doc fir

# add anchor and jump to it using an absolute path
$ anc /etc
# is the same as
$ anc a /etc; anc -1

# add anchor and jump to it using a relative path
$ anc ./X11 #note that "./" is required for relative paths
# is the same as
$ anc a X11; anc -1

# using wildcards you can add many anchors at once
$ anc a $HOME/projects/*

# use shell completion to see a list of matching anchors
# and select the one you want to jump to directly
$ anc pro[TAB]

Cảnh Toàn Nguyễn , answered Feb 20 at 5:41

Bashmarks is an amazingly simple and intuitive utility. In short, after installation, the usage is:
s <bookmark_name> - Saves the current directory as "bookmark_name"
g <bookmark_name> - Goes (cd) to the directory associated with "bookmark_name"
p <bookmark_name> - Prints the directory associated with "bookmark_name"
d <bookmark_name> - Deletes the bookmark
l                 - Lists all available bookmarks
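A short illustrative session using the commands listed above (the directory name is a placeholder):

$ cd ~/projects/webapp
$ s webapp        # save the current directory under the name "webapp"
$ cd /tmp
$ g webapp        # jump back to ~/projects/webapp
$ p webapp        # print the stored path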


For short-term shortcuts, I have the following in my respective init script (sorry, I can't find the source right now and didn't bother then):
function b() {
    alias $1="cd `pwd -P`"
}

Usage:

In any directory that you want to bookmark type

b THEDIR # <THEDIR> being the name of your 'bookmark'

It will create an alias to cd (back) to here.

To return to a 'bookmarked' dir type

THEDIR

It will run the stored alias and cd back there.

Caution: Use only if you understand that this might override existing shell aliases and what that means.

[Jul 29, 2017] If processes inherit the parents environment, why do we need export?

Notable quotes:
"... "Processes inherit their environment from their parent (the process which started them)." ..."
"... in the environment ..."
Jul 29, 2017 | unix.stackexchange.com
Amelio Vazquez-Reina asked May 19 '14

I read here that the purpose of export in a shell is to make the variable available to sub-processes started from the shell.

However, I have also read here and here that "Processes inherit their environment from their parent (the process which started them)."

If this is the case, why do we need export ? What am I missing?

Are shell variables not part of the environment by default? What is the difference?

Your assumption is that all shell variables are in the environment . This is incorrect. The export command is what defines a name to be in the environment at all. Thus:

a=1
b=2
export b

results in the current shell knowing that $a expands to 1 and $b to 2, but subprocesses will not know anything about a because it is not part of the environment (even in the current shell).
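A quick demonstration of exactly that (the child here is a bash -c one-liner standing in for any subprocess):

$ a=1
$ b=2
$ export b
$ bash -c 'echo "a=$a b=$b"'
a= b=2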

Some useful tools:

Alternatives to export :

  1. name=val command # Assignment before command exports that name to the command.
  2. declare/local -x name # Exports name, particularly useful in shell functions when you want to avoid exposing the name to outside scope.
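Hedged examples of both alternatives (the file name, proxy URL and function name are placeholders, not part of the original answer):

# 1. assignment prefix: LC_ALL is exported to sort only, not kept in the current shell
LC_ALL=C sort file.txt

# 2. export inside a function without leaking the name into the caller's scope
fetch() {
    local -x http_proxy=http://proxy.example:3128
    curl -s "$1"
}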
====

There's a difference between shell variables and environment variables. If you define a shell variable without exporting it, it is not added to the process's environment and thus not inherited by its children.

Using export you tell the shell to add the shell variable to the environment. You can test this using printenv (which just prints its environment to stdout; since it's a child process you see the effect of exporting variables):

#!/bin/sh
MYVAR="my cool variable"
echo "Without export:"
printenv | grep MYVAR
echo "With export:"
export MYVAR 
printenv | grep MYVAR
A variable, once exported, is part of the environment. PATH is exported in the shell itself, while custom variables can be exported as needed.

... ... ..

[Jul 29, 2017] Why does subshell not inherit exported variable (PS1)?

Jul 29, 2017 | superuser.com
I am using startx to start the graphical environment. I have a very simple .xinitrc which I will add things to as I set up the environment, but for now it is as follows:

catwm &   # Just a basic window manager, for testing.
xterm

The reason I background the WM and foreground the terminal, and not the other way around as is often done, is that I would like to be able to come back to the virtual text console after typing exit in xterm . This appears to work as described.

The problem is that the PS1 variable that currently is set to my preference in /etc/profile.d/user.sh (which is sourced from /etc/profile supplied by distro), does not appear to propagate to the environment of the xterm mentioned above. The relevant process tree is as follows:


\_ bash
    \_ xinit /home/user/.xinitrc -- /etc/X11/xinit/xserverrc -auth /tmp/serverauth.ggJna3I0vx
        \_ /usr/bin/X -nolisten tcp -auth /tmp/serverauth.ggJna3I0vx vt1
        \_ sh /home/user/.xinitrc
            \_ /home/user/catwm
            \_ xterm
                \_ bash

The shell started by xterm appears to be interactive, the shell executing .xinitrc however is not. I am ok with both, the assumptions about interactivity seem to be perfectly valid, but now I have a non-interactive shell that spawns an interactive shell indirectly, and the interactive shell has no chance to automatically inherit the prompt, because the prompt was unset or otherwise made unavailable higher up the process tree.

How do I go about getting my prompt back?

amn, asked Oct 21 '13 at 9:51

Commands env and export list only variables which are exported. $PS1 is usually not exported. Try echo $PS1 in your shell to see actual value of $PS1 .

Non-interactive shells usually do not have $PS1 . Non-interactive bash explicitly unsets $PS1 . You can check if bash is interactive by echo $- . If the output contains i then it is interactive. You can explicitly start an interactive shell by using the option on the command line: bash -i . A shell started with -c is not interactive.

The /etc/profile script is read for a login shell. You can start the shell as a login shell by: bash -l .

With bash shell the scripts /etc/bash.bashrc and ~/.bashrc are usually used to set $PS1 . Those scripts are sourced when interactive non-login shell is started. It is your case in the xterm .
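As a sketch of what that usually looks like, e.g. in ~/.bashrc (the prompt string is only an illustration; the case guard keeps the assignment to interactive shells):

# ~/.bashrc
case $- in
    *i*) PS1='\u@\h:\w\$ ' ;;
esac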

See Setting the PS? Strings Permanently

Possible solutions
pabouk, answered Oct 21 '13 at 11:19
I am specifically avoiding to set PS1 in .bashrc or /etc/bash.bashrc (which is executed as well), to retain POSIX shell compatibility. These do not set or unset PS1 . PS1 is set in /etc/profile.d/user.sh , which is sourced by /etc/profile . Indeed, this file is only executed for login shells, however I do export PS1 from /etc/profile.d/user.sh exactly because I want propagation of my preferred value down the process tree. So it shouldn't matter which subshells are login and/or interactive ones then, should it? – amn Oct 21 '13 at 11:32
It seems that bash removes the PS1 variable. What exactly do you want to achieve by "POSIX shell compatibility"? Do you want to be able to replace bash by a different POSIX-compliant shell and retain the same functionality? Based on my tests bash removes PS1 when it is started as non-interactive. I think of two simple solutions: 1. start the shell as a login shell with the -l option (attention for actions in the startup scripts which should be started only at login) 2. start the intermediate shells as interactive with the -i option. – pabouk Oct 21 '13 at 12:00
I try to follow interfaces and specifications, not implementations - hence POSIX compatibility. That's important (to me). I already have one login shell - the one started by /usr/bin/login . I understand that a non-interactive shell doesn't need prompt, but unsetting a variable is too much - I need the prompt in an interactive shell (spawned and used by xterm ) later on. What am I doing wrong? I guess most people set their prompt in .bashrc which is sourced by bash anyway, and so the prompt survives. I try to avoid .bashrc however. – amn Oct 22 '13 at 12:12
@amn: I have added various possible solutions to the reply. – pabouk Oct 22 '13 at 16:46

[Jul 29, 2017] Bash subshell mystery

Notable quotes:
"... The subshell created using parentheses does not ..."
Jul 29, 2017 | stackoverflow.com

user3718463 , asked Sep 27 '14 at 21:41

The Learning Bash book mentions that a subshell will inherit only environment variables and file descriptors, etc., and that it will not inherit variables that are not exported:
$ var=15
$ (echo $var)
15
$ ./file # this file includes the same command echo $var

$

As I know, the shell will create subshells for both the () case and the ./file case, but why does the subshell in the () case see the var variable although it is not exported, while in the ./file case it does not?

...

I tried to use strace to figure out how this happens and, surprisingly, I found that bash uses the same arguments for the clone system call in both cases, so both forked processes, for () and for ./file, should start from the same process address space as the parent. So why is the variable visible to the subshell in the () case, while the same does not happen in the ./file case, although the same arguments are passed to the clone system call?

Alfe , answered Sep 27 '14 at 23:16

The subshell created using parentheses does not use an execve() call for the new process, the calling of the script does. At this point the variables from the parent shell are handled differently: The execve() passes a deliberate set of variables (the script-calling case) while not calling execve() (the parentheses case) leaves the complete set of variables intact.

Your probing using strace should have shown exactly that difference; if you did not see it, I can only assume that you made one of several possible mistakes. I will just strip down what I did to show the difference, then you can decide for yourself where your error was.

... ... ...
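A rough sketch of that kind of probing (not the original author's transcript; ./file is the script from the question):

# the parenthesised subshell is only forked, no execve for it
strace -f -e trace=execve bash -c 'var=15; (echo $var)'

# the script is forked and then re-executed, so only exported names survive
strace -f -e trace=execve bash -c 'var=15; ./file'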

Nicolas Albert , answered Sep 27 '14 at 21:43

You have to export your var for child process:

export var=15

Once exported, the variable is used for all child processes at launch time (not export time).

var=15
export var

is same as

export var
var=15

is same as

export var=15

Export can be cancelled using unset . Sample: unset var .

user3718463 , answered Sep 27 '14 at 23:11

The solution to this mystery is that subshells inherit everything from the parent shell, including all shell variables, because they are simply created with fork or clone and so start with a copy of the parent shell's memory. That's why this will work:
$ var=15
$ (echo $var)
15

But in the ./file case, the fork will later be followed by an exec or execve system call, which clears all the previous parent variables, though we still have the environment variables. You can check this out using strace with -f to monitor the child subshell and you will find that there is a call to execve.

[Jul 29, 2017] How To Read and Set Environmental and Shell Variables on a Linux VPS

Mar 03, 2014 | www.digitalocean.com
Introduction

When interacting with your server through a shell session, there are many pieces of information that your shell compiles to determine its behavior and access to resources. Some of these settings are contained within configuration settings and others are determined by user input.

One way that the shell keeps track of all of these settings and details is through an area it maintains called the environment . The environment is an area that the shell builds every time that it starts a session that contains variables that define system properties.

In this guide, we will discuss how to interact with the environment and read or set environmental and shell variables interactively and through configuration files. We will be using an Ubuntu 12.04 VPS as an example, but these details should be relevant on any Linux system.

How the Environment and Environmental Variables Work

Every time a shell session spawns, a process takes place to gather and compile information that should be available to the shell process and its child processes. It obtains the data for these settings from a variety of different files and settings on the system.

Basically the environment provides a medium through which the shell process can get or set settings and, in turn, pass these on to its child processes.

The environment is implemented as strings that represent key-value pairs. If multiple values are passed, they are typically separated by colon (:) characters. Each pair will generally look something like this:

KEY=value1:value2:...

If the value contains significant white-space, quotations are used:

KEY="value with spaces"

The keys in these scenarios are variables. They can be one of two types, environmental variables or shell variables.

Environmental variables are variables that are defined for the current shell and are inherited by any child shells or processes. Environmental variables are used to pass information into processes that are spawned from the shell.

Shell variables are variables that are contained exclusively within the shell in which they were set or defined. They are often used to keep track of ephemeral data, like the current working directory.

By convention, these types of variables are usually defined using all capital letters. This helps users distinguish environmental variables within other contexts.

Printing Shell and Environmental Variables

Each shell session keeps track of its own shell and environmental variables. We can access these in a few different ways.

We can see a list of all of our environmental variables by using the env or printenv commands. In their default state, they should function exactly the same:

printenv


SHELL=/bin/bash
TERM=xterm
USER=demouser
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca:...
MAIL=/var/mail/demouser
PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
PWD=/home/demouser
LANG=en_US.UTF-8
SHLVL=1
HOME=/home/demouser
LOGNAME=demouser
LESSOPEN=| /usr/bin/lesspipe %s
LESSCLOSE=/usr/bin/lesspipe %s %s
_=/usr/bin/printenv

This is fairly typical of the output of both printenv and env . The difference between the two commands is only apparent in their more specific functionality. For instance, with printenv , you can request the values of individual variables:

printenv SHELL


/bin/bash

On the other hand, env lets you modify the environment that programs run in by passing a set of variable definitions into a command like this:

env VAR1="blahblah" command_to_run command_options

Since, as we learned above, child processes typically inherit the environmental variables of the parent process, this gives you the opportunity to override values or add additional variables for the child.
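For instance (TZ and LANG are just convenient variables to demonstrate with; -u is the GNU env option for removing a variable):

$ env TZ=UTC date        # run one command with an added/overridden variable
$ env -u LANG locale     # run one command with a variable removed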

As you can see from the output of our printenv command, there are quite a few environmental variables set up through our system files and processes without our input.

These show the environmental variables, but how do we see shell variables?

The set command can be used for this. If we type set without any additional parameters, we will get a list of all shell variables, environmental variables, local variables, and shell functions:

set


BASH=/bin/bash
BASHOPTS=checkwinsize:cmdhist:expand_aliases:extglob:extquote:force_fignore:histappend:interactive_comments:login_shell:progcomp:promptvars:sourcepath
BASH_ALIASES=()
BASH_ARGC=()
BASH_ARGV=()
BASH_CMDS=()
. . .

This is usually a huge list. You probably want to pipe it into a pager program to deal with the amount of output easily:

set | less

The amount of additional information that we receive back is a bit overwhelming. We probably do not need to know all of the bash functions that are defined, for instance.

We can clean up the output by specifying that set should operate in POSIX mode, which won't print the shell functions. We can execute this in a sub-shell so that it does not change our current environment:

(set -o posix; set)

This will list all of the environmental and shell variables that are defined.

We can attempt to compare this output with the output of the env or printenv commands to try to get a list of only shell variables, but this will be imperfect due to the different ways that these commands output information:

comm -23 <(set -o posix; set | sort) <(env | sort)

This will likely still include a few environmental variables, due to the fact that the set command outputs quoted values, while the printenv and env commands do not quote the values of strings.

This should still give you a good idea of the environmental and shell variables that are set in your session.

These variables are used for all sorts of things. They provide an alternative way of setting persistent values for the session between processes, without writing changes to a file.

Common Environmental and Shell Variables

Some environmental and shell variables are very useful and are referenced fairly often.

Here are some common environmental variables that you will come across (most of them are visible in the printenv output above): SHELL, TERM, USER, PWD, MAIL, PATH, LANG, HOME, and LOGNAME.

In addition to these environmental variables, some shell variables that you'll often see are: BASHOPTS, BASH_VERSION, HISTFILESIZE, HISTSIZE, HOSTNAME, IFS, PS1, and UID.

Setting Shell and Environmental Variables

To better understand the difference between shell and environmental variables, and to introduce the syntax for setting these variables, we will do a small demonstration.

Creating Shell Variables

We will begin by defining a shell variable within our current session. This is easy to accomplish; we only need to specify a name and a value. We'll adhere to the convention of keeping all caps for the variable name, and set it to a simple string.

TEST_VAR='Hello World!'

Here, we've used quotations since the value of our variable contains a space. Furthermore, we've used single quotes because the exclamation point is a special character in the bash shell that normally expands to the bash history if it is not escaped or put into single quotes.

We now have a shell variable. This variable is available in our current session, but will not be passed down to child processes.

We can see this by grepping for our new variable within the set output:

set | grep TEST_VAR


TEST_VAR='Hello World!'

We can verify that this is not an environmental variable by trying the same thing with printenv :

printenv | grep TEST_VAR

No output should be returned.

Let's take this as an opportunity to demonstrate a way of accessing the value of any shell or environmental variable.

echo $TEST_VAR


Hello World!

As you can see, you reference the value of a variable by preceding its name with a $ sign. The shell takes this to mean that it should substitute the value of the variable when it comes across this.

So now we have a shell variable. It shouldn't be passed on to any child processes. We can spawn a new bash shell from within our current one to demonstrate:

bash
echo $TEST_VAR

If we type bash to spawn a child shell, and then try to access the contents of the variable, nothing will be returned. This is what we expected.

Get back to our original shell by typing exit :

exit

Creating Environmental Variables

Now, let's turn our shell variable into an environmental variable. We can do this by exporting the variable. The command to do so is appropriately named:

export TEST_VAR

This will change our variable into an environmental variable. We can check this by checking our environmental listing again:

printenv | grep TEST_VAR


TEST_VAR=Hello World!

This time, our variable shows up. Let's try our experiment with our child shell again:

bash
echo $TEST_VAR


Hello World!

Great! Our child shell has received the variable set by its parent. Before we exit this child shell, let's try to export another variable. We can set environmental variables in a single step like this:

export NEW_VAR="Testing export"

Test that it's exported as an environmental variable:

printenv | grep NEW_VAR


NEW_VAR=Testing export

Now, let's exit back into our original shell:

exit

Let's see if our new variable is available:

echo $NEW_VAR

Nothing is returned.

This is because environmental variables are only passed to child processes. There isn't a built-in way of setting environmental variables of the parent shell. This is good in most cases and prevents programs from affecting the operating environment from which they were called.

The NEW_VAR variable was set as an environmental variable in our child shell. This variable would be available to itself and any of its child shells and processes. When we exited back into our main shell, that environment was destroyed.
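A small demonstration of this behavior, and of the usual workaround of sourcing a file in the current shell instead of executing it in a child process (the DEMO_VAR name and /tmp/set_var.sh path are made up for the example):

printf 'export DEMO_VAR=from_script\n' > /tmp/set_var.sh
bash /tmp/set_var.sh          # runs in a child process; the parent shell is unchanged
echo "${DEMO_VAR:-unset}"     # prints "unset"
source /tmp/set_var.sh        # runs in the current shell
echo "${DEMO_VAR:-unset}"     # prints "from_script"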

Demoting and Unsetting Variables

We still have our TEST_VAR variable defined as an environmental variable. We can change it back into a shell variable by typing:

export -n TEST_VAR

It is no longer an environmental variable:

printenv | grep TEST_VAR

However, it is still a shell variable:

set | grep TEST_VAR


TEST_VAR='Hello World!'

If we want to completely unset a variable, either shell or environmental, we can do so with the unset command:

unset TEST_VAR

We can verify that it is no longer set:

echo $TEST_VAR

Nothing is returned because the variable has been unset.

Setting Environmental Variables at Login

We've already mentioned that many programs use environmental variables to decide the specifics of how to operate. We do not want to have to set important variables up every time we start a new shell session, and we have already seen how many variables are already set upon login, so how do we make and define variables automatically?

This is actually a more complex problem than it initially seems, due to the numerous configuration files that the bash shell reads depending on how it is started.

The Difference between Login, Non-Login, Interactive, and Non-Interactive Shell Sessions

The bash shell reads different configuration files depending on how the session is started.

One distinction between different sessions is whether the shell is being spawned as a "login" or "non-login" session.

A login shell is a shell session that begins by authenticating the user. If you are signing into a terminal session or through SSH and authenticate, your shell session will be set as a "login" shell.

If you start a new shell session from within your authenticated session, like we did by calling the bash command from the terminal, a non-login shell session is started. You were not asked for your authentication details when you started your child shell.

Another distinction that can be made is whether a shell session is interactive, or non-interactive.

An interactive shell session is a shell session that is attached to a terminal. A non-interactive shell session is one that is not attached to a terminal.

So each shell session is classified as either login or non-login and interactive or non-interactive.

A normal session that begins with SSH is usually an interactive login shell. A script run from the command line is usually run in a non-interactive, non-login shell. A terminal session can be any combination of these two properties.

Whether a shell session is classified as a login or non-login shell has implications on which files are read to initialize the shell session.

A session started as a login session will read configuration details from the /etc/profile file first. It will then look for the first login shell configuration file in the user's home directory to get user-specific configuration details.

It reads the first file that it can find out of ~/.bash_profile , ~/.bash_login , and ~/.profile and does not read any further files.

In contrast, a session defined as a non-login shell will read /etc/bash.bashrc and then the user-specific ~/.bashrc file to build its environment.

Non-interactive shells read the environmental variable called BASH_ENV and, if it is set, read the file it names to define the new environment.
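If you want to check which category your current session falls into, the following sketch uses standard bash facilities (the shopt builtin and the $- special parameter):

shopt -q login_shell && echo "login shell" || echo "non-login shell"
[[ $- == *i* ]] && echo "interactive" || echo "non-interactive"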

Implementing Environmental Variables

As you can see, there are a variety of different files that we would usually need to look at for placing our settings.

This provides a lot of flexibility that can help in specific situations where we want certain settings in a login shell, and other settings in a non-login shell. However, most of the time we will want the same settings in both situations.

Fortunately, most Linux distributions configure the login configuration files to source the non-login configuration files. This means that you can define the environmental variables that you want in both cases inside the non-login configuration files. They will then be read in both scenarios.

We will usually be setting user-specific environmental variables, and we usually will want our settings to be available in both login and non-login shells. This means that the place to define these variables is in the ~/.bashrc file.

Open this file now:

nano ~/.bashrc

This will most likely contain quite a bit of data already. Most of the definitions here are for setting bash options, which are unrelated to environmental variables. You can set environmental variables just like you would from the command line:

export VARNAME=value

We can then save and close the file. The next time you start a shell session, your environmental variable declaration will be read and passed on to the shell environment. You can force your current session to read the file now by typing:

source ~/.bashrc

If you need to set system-wide variables, you may want to think about adding them to /etc/profile , /etc/bash.bashrc , or /etc/environment .
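Note that on many Linux systems /etc/environment is read by PAM (pam_env) rather than by a shell, so it expects plain KEY=value lines with no export keyword and no shell expansion. A one-line sketch (the path is only illustrative):

JAVA_HOME=/usr/lib/jvm/default-java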

Conclusion

Environmental and shell variables are always present in your shell sessions and can be very useful. They are an interesting way for a parent process to set configuration details for its children, and are a way of setting options outside of files.

This has many advantages in specific situations. For instance, some deployment mechanisms rely on environmental variables to configure authentication information. This is useful because it does not require keeping these in files that may be seen by outside parties.

There are plenty of other, more mundane, but more common scenarios where you will need to read or alter the environment of your system. These tools and techniques should give you a good foundation for making these changes and using them correctly.

By Justin Ellingwood

[Jul 29, 2017] shell - Whats the difference between .bashrc, .bash_profile, and .environment - Stack Overflow

Notable quotes:
"... "The following paragraphs describe how bash executes its startup files." ..."
Jul 29, 2017 | stackoverflow.com


Adam Rosenfield , asked Jan 6 '09 at 3:58

I've used a number of different *nix-based systems over the years, and it seems like every flavor of Bash I use has a different algorithm for deciding which startup scripts to run. For the purposes of tasks like setting up environment variables and aliases and printing startup messages (e.g. MOTDs), which startup script is the appropriate place to do these?

What's the difference between putting things in .bashrc , .bash_profile , and .environment ? I've also seen other files such as .login , .bash_login , and .profile ; are these ever relevant? What are the differences in which ones get run when logging in physically, logging in remotely via ssh, and opening a new terminal window? Are there any significant differences across platforms (including Mac OS X (and its Terminal.app) and Cygwin Bash)?

Cos , answered Jan 6 '09 at 4:18

The main difference with shell config files is that some are only read by "login" shells (eg. when you login from another host, or login at the text console of a local unix machine). these are the ones called, say, .login or .profile or .zlogin (depending on which shell you're using).

Then you have config files that are read by "interactive" shells (as in, ones connected to a terminal (or pseudo-terminal in the case of, say, a terminal emulator running under a windowing system). these are the ones with names like .bashrc , .tcshrc , .zshrc , etc.

bash complicates this in that .bashrc is only read by a shell that's both interactive and non-login , so you'll find most people end up telling their .bash_profile to also read .bashrc with something like

[[ -r ~/.bashrc ]] && . ~/.bashrc

Other shells behave differently - eg with zsh , .zshrc is always read for an interactive shell, whether it's a login one or not.

The manual page for bash explains the circumstances under which each file is read. Yes, behaviour is generally consistent between machines.

.profile is simply the login script filename originally used by /bin/sh . bash , being generally backwards-compatible with /bin/sh , will read .profile if one exists.

Johannes Schaub - litb , answered Jan 6 '09 at 15:21

That's simple. It's explained in man bash :
... ... ... 

Login shells are the ones that are the one you login (so, they are not executed when merely starting up xterm, for example). There are other ways to login. For example using an X display manager. Those have other ways to read and export environment variables at login time.

Also read the INVOCATION chapter in the manual. It says "The following paragraphs describe how bash executes its startup files." , i think that's a spot-on :) It explains what an "interactive" shell is too.

Bash does not know about .environment . I suspect that's a file of your distribution, to set environment variables independent of the shell that you drive.

Jonathan Leffler , answered Jan 6 '09 at 4:13

Classically, ~/.profile is used by Bourne Shell, and is probably supported by Bash as a legacy measure. Again, ~/.login and ~/.cshrc were used by C Shell - I'm not sure that Bash uses them at all.

The ~/.bash_profile would be used once, at login. The ~/.bashrc script is read every time a shell is started. This is analogous to ~/.cshrc for C Shell.

One consequence is that stuff in ~/.bashrc should be as lightweight (minimal) as possible to reduce the overhead when starting a non-login shell.

I believe the ~/.environment file is a compatibility file for Korn Shell.

Filip Ekberg , answered Jan 6 '09 at 4:03

I found information about .bashrc and .bash_profile here to sum it up:

.bash_profile is executed when you login. Stuff you put in there might be your PATH and other important environment variables.

.bashrc is used for non-login shells. I'm not sure what that means. I know that RedHat executes it every time you start another shell (su to this user or simply calling bash again). You might want to put aliases in there, but again I am not sure what that means. I simply ignore it myself.

.profile is the equivalent of .bash_profile for the root. I think the name is changed to let other shells (csh, sh, tcsh) use it as well. (you don't need one as a user)

There is also .bash_logout which executes at, yeah good guess... logout. You might want to stop daemons or even do a little housekeeping. You can also add "clear" there if you want to clear the screen when you log out.

Also there is a complete follow up on each of the configurations files here

These are probably even distro-dependent; not all distros choose to have each configuration file, and some have even more. But when they have the same name, they usually include the same content.

Rose Perrone , answered Feb 27 '12 at 0:22

According to Josh Staiger , Mac OS X's Terminal.app actually runs a login shell rather than a non-login shell by default for each new terminal window, calling .bash_profile instead of .bashrc.

He recommends:

Most of the time you don't want to maintain two separate config files for login and non-login shells. When you set a PATH, you want it to apply to both. You can fix this by sourcing .bashrc from your .bash_profile file, then putting PATH and common settings in .bashrc.

To do this, add the following lines to .bash_profile:


if [ -f ~/.bashrc ]; then
    source ~/.bashrc
fi

Now when you login to your machine from a console .bashrc will be called.

PolyThinker , answered Jan 6 '09 at 4:06

A good place to look at is the man page of bash. Here 's an online version. Look for "INVOCATION" section.

seismick , answered May 21 '12 at 10:42

I have used Debian-family distros which appear to execute .profile , but not .bash_profile , whereas RHEL derivatives execute .bash_profile before .profile .

It seems to be a mess when you have to set up environment variables to work in any Linux OS.

[Jul 29, 2017] Preserve bash history in multiple terminal windows - Unix Linux Stack Exchange

Jul 29, 2017 | unix.stackexchange.com

Oli , asked Aug 26 '10 at 13:04

I consistently have more than one terminal open. Anywhere from two to ten, doing various bits and bobs. Now let's say I restart and open up another set of terminals. Some remember certain things, some forget.

I want a history that:

Anything I can do to make bash work more like that?

Pablo R. , answered Aug 26 '10 at 14:37

# Avoid duplicates
export HISTCONTROL=ignoredups:erasedups  
# When the shell exits, append to the history file instead of overwriting it
shopt -s histappend

# After each command, append to the history file and reread it
export PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND$'\n'}history -a; history -c; history -r"

kch , answered Sep 19 '08 at 17:49

So, this is all my history-related .bashrc thing:
export HISTCONTROL=ignoredups:erasedups  # no duplicate entries
export HISTSIZE=100000                   # big big history
export HISTFILESIZE=100000               # big big history
shopt -s histappend                      # append to history, don't overwrite it

# Save and reload the history after each command finishes
export PROMPT_COMMAND="history -a; history -c; history -r; $PROMPT_COMMAND"

Tested with bash 3.2.17 on Mac OS X 10.5, bash 4.1.7 on 10.6.

lesmana , answered Jun 16 '10 at 16:11

Here is my attempt at Bash session history sharing. This will enable history sharing between bash sessions in a way that the history counter does not get mixed up and history expansion like !number will work (with some constraints).

Using Bash version 4.1.5 under Ubuntu 10.04 LTS (Lucid Lynx).

HISTSIZE=9000
HISTFILESIZE=$HISTSIZE
HISTCONTROL=ignorespace:ignoredups

_bash_history_sync() {
  builtin history -a         #1
  HISTFILESIZE=$HISTSIZE     #2
  builtin history -c         #3
  builtin history -r         #4
}

history() {                  #5
  _bash_history_sync
  builtin history "$@"
}

PROMPT_COMMAND=_bash_history_sync
Explanation:
  1. Append the just entered line to the $HISTFILE (default is .bash_history ). This will cause $HISTFILE to grow by one line.
  2. Setting the special variable $HISTFILESIZE to some value will cause Bash to truncate $HISTFILE to be no longer than $HISTFILESIZE lines by removing the oldest entries.
  3. Clear the history of the running session. This will reduce the history counter by the amount of $HISTSIZE .
  4. Read the contents of $HISTFILE and insert them in to the current running session history. this will raise the history counter by the amount of lines in $HISTFILE . Note that the line count of $HISTFILE is not necessarily $HISTFILESIZE .
  5. The history() function overrides the builtin history to make sure that the history is synchronised before it is displayed. This is necessary for the history expansion by number (more about this later).
More explanation about the constraints of the history expansion:

When using history expansion by number, you should always look up the number immediately before using it. That means no bash prompt display between looking up the number and using it. That usually means no enter and no ctrl+c.

Generally, once you have more than one Bash session, there is no guarantee whatsoever that a history expansion by number will retain its value between two Bash prompt displays. Because when PROMPT_COMMAND is executed the history from all other Bash sessions are integrated in the history of the current session. If any other bash session has a new command then the history numbers of the current session will be different.

I find this constraint reasonable. I have to look the number up every time anyway because I can't remember arbitrary history numbers.

Usually I use the history expansion by number like this

$ history | grep something #note number
$ !number

I recommend using the following Bash options.

## reedit a history substitution line if it failed
shopt -s histreedit
## edit a recalled history line before executing
shopt -s histverify
Strange bugs:

Running the history command piped to anything will result that command to be listed in the history twice. For example:

$ history | head
$ history | tail
$ history | grep foo
$ history | true
$ history | false

All will be listed in the history twice. I have no idea why.

Ideas for improvements:

Maciej Piechotka , answered Aug 26 '10 at 13:20

I'm not aware of any way using bash . But it's one of the most popular features of zsh .
Personally I prefer zsh over bash so I recommend trying it.

Here's the part of my .zshrc that deals with history:

SAVEHIST=10000 # Number of entries
HISTSIZE=10000
HISTFILE=~/.zsh/history # File
setopt APPEND_HISTORY # Don't erase history
setopt EXTENDED_HISTORY # Add additional data to history like timestamp
setopt INC_APPEND_HISTORY # Add immediately
setopt HIST_FIND_NO_DUPS # Don't show duplicates in search
setopt HIST_IGNORE_SPACE # Don't preserve spaces. You may want to turn it off
setopt NO_HIST_BEEP # Don't beep
setopt SHARE_HISTORY # Share history between session/terminals

Chris Down , answered Nov 25 '11 at 15:46

To do this, you'll need to add two lines to your ~/.bashrc :
shopt -s histappend
PROMPT_COMMAND="history -a;history -c;history -r;"
$PROMPT_COMMAND

From man bash :

If the histappend shell option is enabled (see the description of shopt under SHELL BUILTIN COMMANDS below), the lines are appended to the history file, otherwise the history file is over-written.

Schof , answered Sep 19 '08 at 19:38

You can edit your BASH prompt to run the "history -a" and "history -r" that Muerr suggested:
savePS1=$PS1

(in case you mess something up, which is almost guaranteed)

PS1=$savePS1`history -a;history -r`

(Note that these are back-ticks; they'll run history -a and history -r on every prompt. Since they don't output any text, your prompt will be unchanged.)

Once you've got your PS1 variable set up the way you want, set it permanently it in your ~/.bashrc file.

If you want to go back to your original prompt while testing, do:

PS1=$savePS1

I've done basic testing on this to ensure that it sort of works, but can't speak to any side-effects from running history -a;history -r on every prompt.

pts , answered Mar 25 '11 at 17:40

If you need a bash or zsh history synchronizing solution which also solves the problem below, then see it at http://ptspts.blogspot.com/2011/03/how-to-automatically-synchronize-shell.html

The problem is the following: I have two shell windows A and B. In shell window A, I run sleep 9999 , and (without waiting for the sleep to finish) in shell window B, I want to be able to see sleep 9999 in the bash history.

The reason why most other solutions here won't solve this problem is that they are writing their history changes to the history file using PROMPT_COMMAND or PS1 , both of which are executing too late, only after the sleep 9999 command has finished.

jtimberman , answered Sep 19 '08 at 17:38

You can use history -a to append the current session's history to the histfile, then use history -r on the other terminals to read the histfile.
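As a minimal sketch of that workflow:

# in terminal A, after running some commands:
history -a      # append this session's new lines to $HISTFILE (~/.bash_history by default)
# in terminal B:
history -r      # read $HISTFILE back into this session's in-memory history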

jmanning2k , answered Aug 26 '10 at 13:59

I can offer a fix for that last one: make sure the env variable HISTCONTROL does not specify "ignorespace" (or "ignoreboth").

But I feel your pain with multiple concurrent sessions. It simply isn't handled well in bash.

Toby , answered Nov 20 '14 at 14:53

Here's an alternative that I use. It's cumbersome but it addresses the issue that @axel_c mentioned where sometimes you may want to have a separate history instance in each terminal (one for make, one for monitoring, one for vim, etc).

I keep a separate appended history file that I constantly update. I have the following mapped to a hotkey:

history | grep -v history >> ~/master_history.txt

This appends all history from the current terminal to a file called master_history.txt in your home dir.

I also have a separate hotkey to search through the master history file:

cat /home/toby/master_history.txt | grep -i

I use cat | grep because it leaves the cursor at the end to enter my regex. A less ugly way to do this would be to add a couple of scripts to your path to accomplish these tasks, but hotkeys work for my purposes. I also periodically will pull history down from other hosts I've worked on and append that history to my master_history.txt file.

It's always nice to be able to quickly search and find that tricky regex you used or that weird perl one-liner you came up with 7 months ago.

Yarek T , answered Jul 23 '15 at 9:05

Right, So finally this annoyed me to find a decent solution:
# Write history after each command
_bash_history_append() {
    builtin history -a
}
PROMPT_COMMAND="_bash_history_append; $PROMPT_COMMAND"

What this does is sort of amalgamation of what was said in this thread, except that I don't understand why would you reload the global history after every command. I very rarely care about what happens in other terminals, but I always run series of commands, say in one terminal:

make
ls -lh target/*.foo
scp target/artifact.foo vm:~/

(Simplified example)

And in another:

pv ~/test.data | nc vm:5000 >> output
less output
mv output output.backup1

No way I'd want the command to be shared

rouble , answered Apr 15 at 17:43

Here is my enhancement to @lesmana's answer . The main difference is that concurrent windows don't share history. This means you can keep working in your windows, without having context from other windows getting loaded into your current windows.

If you explicitly type 'history', OR if you open a new window then you get the history from all previous windows.

Also, I use this strategy to archive every command ever typed on my machine.

# Consistent and forever bash history
HISTSIZE=100000
HISTFILESIZE=$HISTSIZE
HISTCONTROL=ignorespace:ignoredups

_bash_history_sync() {
  builtin history -a         #1
  HISTFILESIZE=$HISTSIZE     #2
}

_bash_history_sync_and_reload() {
  builtin history -a         #1
  HISTFILESIZE=$HISTSIZE     #2
  builtin history -c         #3
  builtin history -r         #4
}

history() {                  #5
  _bash_history_sync_and_reload
  builtin history "$@"
}

export HISTTIMEFORMAT="%y/%m/%d %H:%M:%S   "
PROMPT_COMMAND='history 1 >> ${HOME}/.bash_eternal_history'
PROMPT_COMMAND=_bash_history_sync;$PROMPT_COMMAND

simotek , answered Jun 1 '14 at 6:02

I have written a script for setting a history file per session or task; it's based off the following.
        # write existing history to the old file
        history -a

        # set new historyfile
        export HISTFILE="$1"
        export HISET=$1

        # touch the new file to make sure it exists
        touch $HISTFILE
        # load new history file
        history -r $HISTFILE

It doesn't necessarily save every history command, but it saves the ones that I care about, and it's easier to retrieve them than going through every command. My version also lists all history files and provides the ability to search through them all.

Full source: https://github.com/simotek/scripts-config/blob/master/hiset.sh

Litch , answered Aug 11 '15 at 0:15

I chose to put history in a file-per-tty, as multiple people can be working on the same server - separating each session's commands makes it easier to audit.
# Convert /dev/nnn/X or /dev/nnnX to "nnnX"
HISTSUFFIX=`tty | sed 's/\///g;s/^dev//g'`
# History file is now .bash_history_pts0
HISTFILE=".bash_history_$HISTSUFFIX"
HISTTIMEFORMAT="%y-%m-%d %H:%M:%S "
HISTCONTROL=ignoredups:ignorespace
shopt -s histappend
HISTSIZE=1000
HISTFILESIZE=5000

History now looks like:

user@host:~# test 123
user@host:~# test 5451
user@host:~# history
1  15-08-11 10:09:58 test 123
2  15-08-11 10:10:00 test 5451
3  15-08-11 10:10:02 history

With the files looking like:

user@host:~# ls -la .bash*
-rw------- 1 root root  4275 Aug 11 09:42 .bash_history_pts0
-rw------- 1 root root    75 Aug 11 09:49 .bash_history_pts1
-rw-r--r-- 1 root root  3120 Aug 11 10:09 .bashrc

fstang , answered Sep 10 '16 at 19:30

Here I will point out one problem with
export PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND$'\n'}history -a; history -c; history -r"

and

PROMPT_COMMAND="$PROMPT_COMMAND;history -a; history -n"

If you run source ~/.bashrc, the $PROMPT_COMMAND will be like

"history -a; history -c; history -r history -a; history -c; history -r"

and

"history -a; history -n history -a; history -n"

This repetition occurs each time you run 'source ~/.bashrc'. You can check PROMPT_COMMAND after each time you run 'source ~/.bashrc' by running 'echo $PROMPT_COMMAND'.

You could see some commands are apparently broken: "history -n history -a". But the good news is that it still works, because other parts still form a valid command sequence (Just involving some extra cost due to executing some commands repetitively. And not so clean.)

Personally I use the following simple version:

shopt -s histappend
PROMPT_COMMAND="history -a; history -c; history -r"

which has most of the functionalities while no such issue as mentioned above.

Another point to make is: there is really nothing magic . PROMPT_COMMAND is just a plain bash environment variable. The commands in it get executed before you get bash prompt (the $ sign). For example, your PROMPT_COMMAND is "echo 123", and you run "ls" in your terminal. The effect is like running "ls; echo 123".

$ PROMPT_COMMAND="echo 123"

output (Just like running 'PROMPT_COMMAND="echo 123"; $PROMPT_COMMAND'):

123

Run the following:

$ echo 3

output:

3
123

"history -a" is used to write the history commands in memory to ~/.bash_history

"history -c" is used to clear the history commands in memory

"history -r" is used to read history commands from ~/.bash_history to memory

See history command explanation here: http://ss64.com/bash/history.html

PS: As other users have pointed out, export is unnecessary. See: using export in .bashrc

Hopping Bunny , answered May 13 '15 at 4:48

Here is the snippet from my .bashrc and short explanations wherever needed:
# The following line ensures that history logs screen commands as well
shopt -s histappend

# This line makes the history file to be rewritten and reread at each bash prompt
PROMPT_COMMAND="$PROMPT_COMMAND;history -a; history -n"
# Have lots of history
HISTSIZE=100000         # remember the last 100000 commands
HISTFILESIZE=100000     # start truncating commands after 100000 lines
HISTCONTROL=ignoreboth  # ignoreboth is shorthand for ignorespace and     ignoredups

The HISTFILESIZE and HISTSIZE are personal preferences and you can change them as per your tastes.

Mulki , answered Jul 24 at 20:49

This works for ZSH
##############################################################################
# History Configuration for ZSH
##############################################################################
HISTSIZE=10000               #How many lines of history to keep in memory
HISTFILE=~/.zsh_history     #Where to save history to disk
SAVEHIST=10000               #Number of history entries to save to disk
#HISTDUP=erase               #Erase duplicates in the history file
setopt    appendhistory     #Append history to the history file (no overwriting)
setopt    sharehistory      #Share history across terminals
setopt    incappendhistory  #Immediately append to the history file, not just when a term is killed

[Jul 29, 2017] shell - How does this bash code detect an interactive session - Stack Overflow

Notable quotes:
"... ', the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with '@' or ' ..."
Jul 29, 2017 | stackoverflow.com

user1284631 , asked Jun 5 '13 at 8:44

Following some issues with scp (it did not like the presence of the bash bind command in my .bashrc file, apparently), I followed the advice of a clever guy on the Internet (I just cannot find that post right now) that put at the top of its .bashrc file this:
[[ ${-#*i} != ${-} ]] || return

in order to make sure that the bash initialization is NOT executed unless in interactive session.

Now, that works. However, I am not able to figure how it works. Could you enlighten me?

According to this answer , the $- is the current options set for the shell and I know that the ${} is the so-called "substring" syntax for expanding variables.

However, I do not understand the ${-#*i} part. And why $-#*i is not the same as ${-#*i} .

blue , answered Jun 5 '13 at 8:49

${parameter#word}

${parameter##word}

The word is expanded to produce a pattern just as in filename expansion. If the pattern matches the beginning of the expanded value of parameter, then the result of the expansion is the expanded value of parameter with the shortest matching pattern (the '#' case) or the longest matching pattern (the '##' case) deleted.

If parameter is '@' or '*', the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with '@' or '*', the pattern removal operation is applied to each member of the array in turn, and the expansion is the resultant list.

Source: http://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html

So basically what happens in ${-#*i} is that *i is expanded, and if it matches the beginning of the value of $- , then the result of the whole expansion is $- with the shortest matching pattern between *i and $- deleted.

Example

VAR "baioasd" 
echo ${VAR#*i};

outputs oasd .

In your case

If the shell is interactive, $- will contain the letter 'i', so when you strip the variable $- of the pattern *i you will get a string that is different from the original $- ( [[ ${-#*i} != ${-} ]] yields true). If the shell is not interactive, $- does not contain the letter 'i', so the pattern *i does not match anything in $- and [[ ${-#*i} != $- ]] yields false, and the return statement is executed.

perreal , answered Jun 5 '13 at 8:53

See this :

To determine within a startup script whether or not Bash is running interactively, test the value of the '-' special parameter. It contains i when the shell is interactive

Your substitution removes the string up to, and including the i and tests if the substituted version is equal to the original string. They will be different if there is i in the ${-} .
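For comparison, two more conventional ways to express the same interactivity test near the top of ~/.bashrc (these are common idioms, not something taken from the answers above):

case $- in
    *i*) ;;        # interactive: keep going
    *)   return ;; # non-interactive: stop sourcing the file here
esac

# or, using bash pattern matching:
[[ $- == *i* ]] || return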

[Jul 28, 2017] bash - About .bash_profile, .bashrc, and where should alias be written in - Stack Overflow

Jul 28, 2017 | stackoverflow.com

Community May 23 at 12:17

Possible Duplicate: What's the difference between .bashrc, .bash_profile, and .environment?

It seems that if I use

alias ls='ls -F'

inside of .bashrc on Mac OS X, then the newly created shell will not have that alias. I need to type bash again and that alias will be in effect.

And if I log into Linux on the hosting company, the .bashrc file has a comment line that says:

For non-login shell

and the .bash_profile file has a comment that says

for login shell

So where should aliases be written in? How come we separate the login shell and non-login shell?

Some webpage say use .bash_aliases , but it doesn't work on Mac OS X, it seems.

Maggyero edited Apr 25 '16 at 16:24

The reason you separate the login and non-login shell is because the .bashrc file is reloaded every time you start a new copy of Bash.

The .profile file is loaded only when you either log in or use the appropriate flag to tell Bash to act as a login shell.

Personally,

Oh, and the reason you need to type bash again to get the new alias is that Bash loads your .bashrc file when it starts but it doesn't reload it unless you tell it to. You can reload the .bashrc file (and not need a second shell) by typing


source ~/.bashrc

which loads the .bashrc file as if you had typed the commands directly to Bash.

lhunath answered May 24 '09 at 6:22

Check out http://mywiki.wooledge.org/DotFiles for an excellent resource on the topic aside from man bash .

Summary:

Adam Rosenfield May 24 '09 at 2:46
From the bash manpage:

When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile , if that file exists. After reading that file, it looks for ~/.bash_profile , ~/.bash_login , and ~/.profile , in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior.

When a login shell exits, bash reads and executes commands from the file ~/.bash_logout , if it exists.

When an interactive shell that is not a login shell is started, bash reads and executes commands from ~/.bashrc , if that file exists. This may be inhibited by using the --norc option. The --rcfile file option will force bash to read and execute commands from file instead of ~/.bashrc .

Thus, if you want to get the same behavior for both login shells and interactive non-login shells, you should put all of your commands in either .bashrc or .bash_profile , and then have the other file source the first one.


.bash_profile is loaded for a "login shell". I am not sure what that would be on OS X, but on Linux that is either X11 or a virtual terminal.

.bashrc is loaded every time you run Bash. That is where you should put stuff you want loaded whenever you open a new Terminal.app window.

I personally put everything in .bashrc so that I don't have to restart the application for changes to take effect.

[Jul 26, 2017] I feel stupid declare not found in bash scripting

A single space can make a huge difference in bash :-)
www.linuxquestions.org

Mohtek

I feel stupid: declare not found in bash scripting? I was anxious to get my feet wet, and I'm only up to my toes before I'm stuck...this seems very very easy but I'm not sure what I've done wrong. Below is the script and its output. What the heck am I missing?

______________________________________________________
#!/bin/bash
declare -a PROD[0]="computers" PROD[1]="HomeAutomation"
printf "${ PROD[*]}"
_______________________________________________________

products.sh: 6: declare: not found
products.sh: 8: Syntax error: Bad substitution

wjevans_7d1@yahoo.co

I ran what you posted (but at the command line, not in a script, though that should make no significant difference), and got this:

Code:

-bash: ${ PROD[*]}: bad substitution

In other words, I couldn't reproduce your first problem, the "declare: not found" error. Try the declare command by itself, on the command line.

And I got rid of the "bad substitution" problem when I removed the space which is between the ${ and the PROD on the printf line.

Hope this helps.

blackhole54

The previous poster identified your second problem.

As far as your first problem goes ... I am not a bash guru although I have written a number of bash scripts. So far I have found no need for declare statements. I suspect that you might not need it either. But if you do want to use it, the following does work:

Code:
#!/bin/bash

declare -a PROD
PROD[0]="computers"
PROD[1]="HomeAutomation"
printf "${PROD[*]}\n"

EDIT: My original post was based on an older version of bash. When I tried the declare statement you posted I got an error message, but one that was different from yours. I just tried it on a newer version of bash, and your declare statement worked fine. So it might depend on the version of bash you are running. What I posted above runs fine on both versions.

[Jul 26, 2017] Associative array declaration gotcha

Jul 26, 2017 | unix.stackexchange.com

bash silently does function return on (re-)declare of global associative read-only array - Unix & Linux Stack Exchange

Ron Burk :

Obviously cut out of a much more complex script that was more meaningful:

#!/bin/bash

function InitializeConfig(){
    declare -r -g -A SHCFG_INIT=( [a]=b )
    declare -r -g -A SHCFG_INIT=( [c]=d )
    echo "This statement never gets executed"
}

set -o xtrace

InitializeConfig
echo "Back from function"
The output looks like this:
ronburk@ubuntu:~/ubucfg$ bash bug.sh
+ InitializeConfig
+ SHCFG_INIT=([a]=b)
+ declare -r -g -A SHCFG_INIT
+ SHCFG_INIT=([c]=d)
+ echo 'Back from function'
Back from function
Bash seems to silently execute a function return upon the second declare statement. Starting to think this really is a new bug, but happy to learn otherwise.

Other details:

Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64' -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gn$
uname output: Linux ubuntu 3.16.0-38-generic #52~14.04.1-Ubuntu SMP Fri May 8 09:43:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Lin$
Machine Type: x86_64-pc-linux-gnu

Bash Version: 4.3
Patch Level: 11
Release Status: release
bash array readonly

By gum, you're right! Then I get readonly warning on second declare, which is reasonable, and the function completes. The xtrace output is also interesting; implies declare without single quotes is really treated as two steps. Ready to become superstitious about always single-quoting the argument to declare . Hard to see how popping the function stack can be anything but a bug, though. – Ron Burk Jun 14 '15 at 23:58

Weird. Doesn't happen in bash 4.2.53(1). – choroba Jun 14 '15 at 7:22
I can reproduce this problem with bash version 4.3.11 (Ubuntu 14.04.1 LTS). It works fine with bash 4.2.8 (Ubuntu 11.04). – Cyrus Jun 14 '15 at 7:34
Maybe related: unix.stackexchange.com/q/56815/116972 I can get expected result with declare -r -g -A 'SHCFG_INIT=( [a]=b )' . – yaegashi Jun 14 '15 at 23:22

I found this thread in bug-bash@gnu.org related to test -v on an assoc array. In short, bash implicitly did test -v SHCFG_INIT[0] in your script. I'm not sure this behavior got introduced in 4.3.

You might want to use declare -p to workaround this...

if ! declare -p SHCFG_INIT >/dev/null 2>&1; then
    echo "looks like SHCFG_INIT not defined"
fi
====
Well, rats. I think your answer is correct, but also reveals I'm really asking two separate questions when I thought they were probably the same issue. Since the title better reflects what turns out to be the "other" question, I'll leave this up for a while and see if anybody knows what's up with the mysterious implicit function return... Thanks! – Ron Burk Jun 14 '15 at 17:01
Edited question to focus on the remaining issue. Thanks again for the answer on the "-v" issue with associative arrays. – Ron Burk Jun 14 '15 at 17:55
Accepting this answer. Complete answer is here plus your comments above plus (IMHO) there's a bug in this version of bash (can't see how there can be any excuse for popping the function stack without warning). Thanks for your excellent research on this! – Ron Burk Jun 21 '15 at 19:31

[Jul 26, 2017] Typing variables: declare or typeset

Jul 26, 2017 | www.tldp.org

The declare or typeset builtins , which are exact synonyms, permit modifying the properties of variables. This is a very weak form of the typing [1] available in certain programming languages. The declare command is specific to version 2 or later of Bash. The typeset command also works in ksh scripts.

declare/typeset options
-r readonly
( declare -r var1 works the same as readonly var1 )

This is the rough equivalent of the C const type qualifier. An attempt to change the value of a readonly variable fails with an error message.

declare -r var1=1
echo "var1 = $var1"   # var1 = 1

(( var1++ ))          # x.sh: line 4: var1: readonly variable
-i integer
declare -i number
# The script will treat subsequent occurrences of "number" as an integer.             

number=3
echo "Number = $number"     # Number = 3

number=three
echo "Number = $number"     # Number = 0
# Tries to evaluate the string "three" as an integer.

Certain arithmetic operations are permitted for declared integer variables without the need for expr or let .

n=6/3
echo "n = $n"       # n = 6/3

declare -i n
n=6/3
echo "n = $n"       # n = 2
-a array
declare -a indices

The variable indices will be treated as an array .

-f function(s)
declare -f

A declare -f line with no arguments in a script causes a listing of all the functions previously defined in that script.

declare -f function_name

A declare -f function_name in a script lists just the function named.

-x export
declare -x var3

This declares a variable as available for exporting outside the environment of the script itself.

-x var=$value
declare -x var3=373

The declare command permits assigning a value to a variable in the same statement as setting its properties.

Example 9-10. Using declare to type variables
#!/bin/bash

func1 ()
{
  echo This is a function.
}

declare -f        # Lists the function above.

echo

declare -i var1   # var1 is an integer.
var1=2367
echo "var1 declared as $var1"
var1=var1+1       # Integer declaration eliminates the need for 'let'.
echo "var1 incremented by 1 is $var1."
# Attempt to change variable declared as integer.
echo "Attempting to change var1 to floating point value, 2367.1."
var1=2367.1       # Results in error message, with no change to variable.
echo "var1 is still $var1"

echo

declare -r var2=13.36         # 'declare' permits setting a variable property
                              #+ and simultaneously assigning it a value.
echo "var2 declared as $var2" # Attempt to change readonly variable.
var2=13.37                    # Generates error message, and exit from script.

echo "var2 is still $var2"    # This line will not execute.

exit 0                        # Script will not exit here.
Caution Using the declare builtin restricts the scope of a variable.
foo ()
{
FOO="bar"
}

bar ()
{
foo
echo $FOO
}

bar   # Prints bar.

However . . .

foo (){
declare FOO="bar"
}

bar ()
{
foo
echo $FOO
}

bar  # Prints nothing.


# Thank you, Michael Iatrou, for pointing this out.
9.2.1. Another use for declare

The declare command can be helpful in identifying variables, environmental or otherwise. This can be especially useful with arrays .

bash$ declare | grep HOME
HOME=/home/bozo

bash$ zzy=68
bash$ declare | grep zzy
zzy=68

bash$ Colors=([0]="purple" [1]="reddish-orange" [2]="light green")
bash$ echo ${Colors[@]}
purple reddish-orange light green

bash$ declare | grep Colors
Colors=([0]="purple" [1]="reddish-orange" [2]="light green")

Notes
[1] In this context, typing a variable means to classify it and restrict its properties. For example, a variable declared or typed as an integer is no longer available for string operations .
declare -i intvar

intvar=23
echo "$intvar"   # 23
intvar=stringval
echo "$intvar"   # 0

[Jul 25, 2017] Beginner Mistakes

Jul 25, 2017 | wiki.bash-hackers.org

Script execution

Your perfect Bash script executes with syntax errors

If you write Bash scripts with Bash-specific syntax and features, run them with Bash, and run them with Bash in native mode.

Wrong

See also:

Your script named "test" doesn't execute Give it another name. The executable test already exists.

In Bash it's a builtin. With other shells, it might be an executable file. Either way, it's bad name choice!

Workaround: You can call it using the pathname:

/home/user/bin/test

Globbing

Brace expansion is not globbing

The following command line is not related to globbing (filename expansion):

# YOU EXPECT
# -i1.vob -i2.vob -i3.vob ....

echo -i{*.vob,}

# YOU GET
# -i*.vob -i
Why? The brace expansion is simple text substitution. All possible text formed by the prefix, the postfix and the braces themselves are generated. In the example, these are only two: -i*.vob and -i . The filename expansion happens after that, so there is a chance that -i*.vob is expanded to a filename - if you have files like -ihello.vob . But it definitely doesn't do what you expected.

Please see:

Test-command

Please see:

Variables

Setting variables

The Dollar-Sign

There is no $ (dollar-sign) when you reference the name of a variable! Bash is not PHP!
# THIS IS WRONG!
$myvar="Hello world!"

A variable name preceded with a dollar-sign always means that the variable gets expanded . In the example above, it might expand to nothing (because it wasn't set), effectively resulting in

="Hello world!"
which definitely is wrong !

When you need the name of a variable, you write only the name , for example: picture=/usr/share/images/foo.png

When you need the content of a variable, you prefix its name with a dollar-sign , like: echo "The used picture is: $picture"

Whitespace

Putting spaces on either or both sides of the equal-sign ( = ) when assigning a value to a variable will fail.
# INCORRECT 1
example = Hello

# INCORRECT 2
example= Hello

# INCORRECT 3
example =Hello

The only valid form is no spaces between the variable name and assigned value

# CORRECT 1
example=Hello

# CORRECT 2
example=" Hello"

Expanding (using) variables

A typical beginner's trap is quoting.

As noted above, when you want to expand a variable i.e. "get the content", the variable name needs to be prefixed with a dollar-sign. But, since Bash knows various ways to quote and does word-splitting, the result isn't always the same.

Let's define an example variable containing text with spaces:

example="Hello world"
Used form result number of words
$example Hello world 2
"$example" Hello world 1
\$example $example 1
'$example' $example 1
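A quick way to verify the word counts in the table above (printf '%s\n' prints one argument per line, so wc -l counts the words the shell produced):

example="Hello world"
printf '%s\n' $example   | wc -l    # 2 - unquoted expansion is split into two words
printf '%s\n' "$example" | wc -l    # 1 - quoted expansion stays one word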

If you use parameter expansion, you must use the name ( PATH ) of the referenced variables/parameters. i.e. not ( $PATH ):

# WRONG!
echo "The first character of PATH is ${$PATH:0:1}"

# CORRECT
echo "The first character of PATH is ${PATH:0:1}"

Note that if you are using variables in arithmetic expressions , then the bare name is allowed:

((a=$a+7))         # Add 7 to a
((a = a + 7))      # Add 7 to a.  Identical to the previous command.
((a += 7))         # Add 7 to a.  Identical to the previous command.

a=$((a+7))         # POSIX-compatible version of previous code.

Please see:

Exporting

Exporting a variable means giving newly created (child-)processes a copy of that variable. It does not copy a variable created in a child process back to the parent process. The following example does not work, since the variable hello is set in a child process (the process you execute to start that script ./script.sh ):
$ cat script.sh
export hello=world

$ ./script.sh
$ echo $hello
$

Exporting is one-way. The direction is parent process to child process, not the reverse. The above example will work, when you don't execute the script, but include ("source") it:

$ source ./script.sh
$ echo $hello
world
$
In this case, the export command is of no use.

Please see:

Exit codes

Reacting to exit codes

If you just want to react to an exit code, regardless of its specific value, you don't need to use $? in a test command like this:
grep ^root: /etc/passwd >/dev/null 2>&1

if [ $? -ne 0 ]; then
    echo "root was not found - check the pub at the corner"
fi

This can be simplified to:

if ! grep ^root: /etc/passwd >/dev/null 2>&1; then
    echo "root was not found - check the pub at the corner"
fi

Or, simpler yet:

grep ^root: /etc/passwd >/dev/null 2>&1 || echo "root was not found - check the pub at the corner"

If you need the specific value of $? , there's no other choice. But if you need only a "true/false" exit indication, there's no need for $? .
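When you do need the specific value, capture it immediately after the command, before anything else runs (a small sketch; the status name is arbitrary):

grep ^root: /etc/passwd >/dev/null 2>&1
status=$?                       # save it right away; the next command overwrites $?
echo "grep exited with status $status"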

See also:

Output vs. Return Value

It's important to remember the different ways to run a child command, and whether you want the output, the return value, or neither.

When you want to run a command (or a pipeline) and save (or print) the output , whether as a string or an array, you use Bash's $(command) syntax:

output=$(ls -l /tmp)
newvariable=$(printf "foo")

When you want to use the return value of a command, just use the command, or add ( ) to run a command or pipeline in a subshell:

if grep someuser /etc/passwd ; then
    # do something
fi

if ( w | grep someuser | grep sqlplus ) ; then
    # someuser is logged in and running sqlplus
fi

Make sure you're using the form you intended:

# WRONG!
if $(grep ERROR /var/log/messages) ; then
    # send alerts
fi

[Jul 25, 2017] Arrays in bash 4.x

Jul 25, 2017 | wiki.bash-hackers.org

Purpose

An array is a parameter that holds mappings from keys to values. Arrays are used to store a collection of parameters into a parameter. Arrays (in any programming language) are a useful and common composite data structure, and one of the most important scripting features in Bash and other shells.

Here is an abstract representation of an array named NAMES . The indexes go from 0 to 3.

NAMES
 0: Peter
 1: Anna
 2: Greg
 3: Jan
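As a concrete sketch of this picture in bash syntax:

NAMES=(Peter Anna Greg Jan)
echo "${NAMES[1]}"    # prints "Anna" - the second name, at index 1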

Instead of using 4 separate variables, multiple related variables are grouped together into elements of the array, accessible by their key . If you want the second name, ask for index 1 of the array NAMES .

Indexing

Bash supports two different types of ksh-like one-dimensional arrays. Multidimensional arrays are not implemented .

Syntax

Referencing

To accommodate referring to array variables and their individual elements, Bash extends the parameter naming scheme with a subscript suffix. Any valid ordinary scalar parameter name is also a valid array name: [[:alpha:]_][[:alnum:]_]* . The parameter name may be followed by an optional subscript enclosed in square brackets to refer to a member of the array.

The overall syntax is arrname[subscript] - where for indexed arrays, subscript is any valid arithmetic expression, and for associative arrays, any nonempty string. Subscripts are first processed for parameter and arithmetic expansions, and command and process substitutions. When used within parameter expansions or as an argument to the unset builtin, the special subscripts * and @ are also accepted which act upon arrays analogously to the way the @ and * special parameters act upon the positional parameters. In parsing the subscript, bash ignores any text that follows the closing bracket up to the end of the parameter name.

With few exceptions, names of this form may be used anywhere ordinary parameter names are valid, such as within arithmetic expressions , parameter expansions , and as arguments to builtins that accept parameter names. An array is a Bash parameter that has been given the -a (for indexed) or -A (for associative) attributes . However, any regular (non-special or positional) parameter may be validly referenced using a subscript, because in most contexts, referring to the zeroth element of an array is synonymous with referring to the array name without a subscript.

# "x" is an ordinary non-array parameter.
$ x=hi; printf '%s ' "$x" "${x[0]}"; echo "${_[0]}"
hi hi hi

The only exceptions to this rule are in a few cases where the array variable's name refers to the array as a whole. This is the case for the unset builtin (see destruction ) and when declaring an array without assigning any values (see declaration ).

Declaration

The following explicitly give variables array attributes, making them arrays:

Syntax Description
ARRAY=() Declares an indexed array ARRAY and initializes it to be empty. This can also be used to empty an existing array.
ARRAY[0]= Generally sets the first element of an indexed array. If no array ARRAY existed before, it is created.
declare -a ARRAY Declares an indexed array ARRAY . An existing array is not initialized.
declare -A ARRAY Declares an associative array ARRAY . This is the one and only way to create associative arrays.
Storing values

Storing values in arrays is quite as simple as storing values in normal variables.
Syntax Description
ARRAY[N]=VALUE Sets the element N of the indexed array ARRAY to VALUE . N can be any valid arithmetic expression
ARRAY[STRING]=VALUE Sets the element indexed by STRING of the associative array ARRAY .
ARRAY=VALUE As above. If no index is given, as a default the zeroth element is set to VALUE . Careful, this is even true of associative arrays - there is no error if no key is specified, and the value is assigned to string index "0".
ARRAY=(E1 E2 ) Compound array assignment - sets the whole array ARRAY to the given list of elements indexed sequentially starting at zero. The array is unset before assignment unless the += operator is used. When the list is empty ( ARRAY=() ), the array will be set to an empty array. This method obviously does not use explicit indexes. An associative array can not be set like that! Clearing an associative array using ARRAY=() works.
ARRAY=([X]=E1 [Y]=E2 ) Compound assignment for indexed arrays with index-value pairs declared individually (here for example X and Y ). X and Y are arithmetic expressions. This syntax can be combined with the above - elements declared without an explicitly specified index are assigned sequentially starting at either the last element with an explicit index, or zero.
ARRAY=([S1]=E1 [S2]=E2 ) Individual mass-setting for associative arrays . The named indexes (here: S1 and S2 ) are strings.
ARRAY+=(E1 E2 ) Append to ARRAY.
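A short sketch of a few of these assignment forms (the names files and color are made up):

# indexed array: compound assignment, explicit index, append
files=(alpha.txt beta.txt)   # elements 0 and 1
files[5]=gamma.txt           # sparse: index 5 is set, indexes 2-4 stay unset
files+=(delta.txt)           # appended after the highest index (here: 6)

# associative array: string keys
declare -A color
color[apple]=red
color+=([plum]=purple)       # += appends new key-value pairs here, too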

As of now, arrays can't be exported.

Getting values

For completeness and details on several parameter expansion variants, see the article about parameter expansion and check the notes about arrays.

Syntax Description
${ARRAY[N]} Expands to the value of the index N in the indexed array ARRAY . If N is a negative number, it's treated as the offset from the maximum assigned index (can't be used for assignment) - 1
${ARRAY[S]} Expands to the value of the index S in the associative array ARRAY .
"${ARRAY[@]}"
${ARRAY[@]}
"${ARRAY[*]}"
${ARRAY[*]}
Similar to mass-expanding positional parameters , this expands to all elements. If unquoted, both subscripts * and @ expand to the same result, if quoted, @ expands to all elements individually quoted, * expands to all elements quoted as a whole.
"${ARRAY[@]:N:M}"
${ARRAY[@]:N:M}
"${ARRAY[*]:N:M}"
${ARRAY[*]:N:M}
Similar to what this syntax does for the characters of a single string when doing substring expansion , this expands to M elements starting with element N . This way you can mass-expand individual indexes. The rules for quoting and the subscripts * and @ are the same as above for the other mass-expansions.
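For example (letters is a made-up name):

letters=(a b c d e f)
echo "${letters[@]:1:3}"   # b c d - three elements, starting at element 1
echo "${letters[@]:4}"     # e f   - everything from element 4 onwards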

For clarification: When you use the subscripts @ or * for mass-expanding, then the behaviour is exactly what it is for $@ and $* when mass-expanding the positional parameters . You should read this article to understand what's going on.

Metadata

Syntax Description
${#ARRAY[N]} Expands to the length of an individual array member at index N ( stringlength )
${#ARRAY[STRING]} Expands to the length of an individual associative array member at index STRING ( stringlength )
${#ARRAY[@]}
${#ARRAY[*]}
Expands to the number of elements in ARRAY
${!ARRAY[@]}
${!ARRAY[*]}
Expands to the indexes in ARRAY since BASH 3.0
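For example, with a small associative array (ports is a made-up name):

declare -A ports=([http]=80 [https]=443)
echo "${#ports[@]}"      # 2 - number of elements
echo "${!ports[@]}"      # the keys: http https (order is not guaranteed)
echo "${#ports[http]}"   # 2 - string length of the value "80"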
Destruction

The unset builtin command is used to destroy (unset) arrays or individual elements of arrays.
Syntax Description
unset -v ARRAY
unset -v ARRAY[@]
unset -v ARRAY[*]
Destroys a complete array
unset -v ARRAY[N] Destroys the array element at index N
unset -v ARRAY[STRING] Destroys the array element of the associative array at index STRING

It is best to explicitly specify -v when unsetting variables with unset.

Be careful: unquoted subscripts can cause pathname expansion to occur due to the presence of glob characters.

Example: You are in a directory with a file named x1 , and you want to destroy an array element x[1] , with

unset x[1]
then pathname expansion will expand to the filename x1 and break your processing!

Even worse, if nullglob is set, your array/index will disappear.

To avoid this, always quote the array name and index:

unset -v 'x[1]'

This applies generally to all commands which take variable names as arguments. Single quotes preferred.

Usage

Numerical Index

Numerical indexed arrays are easy to understand and easy to use. The Purpose and Indexing chapters above more or less explain all the needed background theory.

Now, some examples and comments for you.

Let's say we have an array sentence which is initialized as follows:

sentence=(Be liberal in what you accept, and conservative in what you send)

Since no special code is there to prevent word splitting (no quotes), every word there will be assigned to an individual array element. When you count the words you see, you should get 12. Now let's see if Bash has the same opinion:

$ echo ${#sentence[@]}
12

Yes, 12. Fine. You can take this number to walk through the array. Just subtract 1 from the number of elements, and start your walk at 0 (zero)

((n_elements=${#sentence[@]}, max_index=n_elements - 1))

for ((i = 0; i <= max_index; i++)); do
  echo "Element $i: '${sentence[i]}'"
done

You always have to remember this; newbies seem to have problems with it sometimes. Please understand that numerical array indexing begins at 0 (zero).

The method above, walking through an array by just knowing its number of elements, only works for arrays where all elements are set, of course. If one element in the middle is removed, then the calculation is nonsense, because the number of elements doesn't correspond to the highest used index anymore (we call them " sparse arrays ").

Associative (Bash 4)

Associative arrays (or hash tables ) are not much more complicated than numerical indexed arrays. The numerical index value (in Bash a number starting at zero) just is replaced with an arbitrary string:

# declare -A, introduced with Bash 4 to declare an associative array
declare -A sentence

sentence[Begin]='Be liberal in what'
sentence[Middle]='you accept, and conservative'
sentence[End]='in what you send'
sentence['Very end']=...

Beware: don't rely on the fact that the elements are ordered in memory like they were declared, it could look like this:

# output from 'set' command
sentence=([End]="in what you send" [Middle]="you accept, and conservative " [Begin]="Be liberal in what " ["Very end"]="...")
This effectively means, you can get the data back with "${sentence[@]}" , of course (just like with numerical indexing), but you can't rely on a specific order. If you want to store ordered data, or re-order data, go with numerical indexes. For associative arrays, you usually query known index values:
for element in Begin Middle End "Very end"; do
    printf "%s" "${sentence[$element]}"
done
printf "\n"

A nice code example: Checking for duplicate files using an associative array indexed with the SHA sum of the files:

# Thanks to Tramp in #bash for the idea and the code

unset flist; declare -A flist;
while read -r sum fname; do 
    if [[ ${flist[$sum]} ]]; then
        printf 'rm -- "%s" # Same as >%s<\n' "$fname" "${flist[$sum]}" 
    else
        flist[$sum]="$fname"
    fi
done <  <(find . -type f -exec sha256sum {} +)  >rmdups

Integer arrays Any type attributes applied to an array apply to all elements of the array. If the integer attribute is set for either indexed or associative arrays, then values are considered as arithmetic for both compound and ordinary assignment, and the += operator is modified in the same way as for ordinary integer variables.

 ~ $ ( declare -ia 'a=(2+4 [2]=2+2 [a[2]]="a[2]")' 'a+=(42 [a[4]]+=3)'; declare -p a )
declare -ai a='([0]="6" [2]="4" [4]="7" [5]="42")'

a[0] is assigned the result of 2+4 . a[2] gets the result of 2+2 . The last index in the first assignment is the result of a[2] , which has already been assigned as 4 , and the value assigned there is also taken from a[2] , so a[4] starts out as 4 .

This shows that even though any existing arrays named a in the current scope have already been unset by using = instead of += to the compound assignment, arithmetic variables within keys can self-reference any elements already assigned within the same compound-assignment. With integer arrays this also applies to expressions to the right of the = . (See evaluation order , the right side of an arithmetic assignment is typically evaluated first in Bash.)

The second compound assignment argument to declare uses += , so it appends after the last element of the existing array rather than deleting it and creating a new array, so a[5] gets 42 .

Lastly, the element whose index is the value of a[4] ( 4 ), gets 3 added to its existing value, making a[4] == 7 . Note that having the integer attribute set this time causes += to add, rather than append a string, as it would for a non-integer array.

The single quotes force the assignments to be evaluated in the environment of declare . This is important because attributes are only applied to the assignment after assignment arguments are processed. Without them the += compound assignment would have been invalid, and strings would have been inserted into the integer array without evaluating the arithmetic. A special-case of this is shown in the next section.

eval , but there are differences.) 'Todo: ' Discuss this in detail.

Indirection Arrays can be expanded indirectly using the indirect parameter expansion syntax. Parameters whose values are of the form: name[index] , name[@] , or name[*] when expanded indirectly produce the expected results. This is mainly useful for passing arrays (especially multiple arrays) by name to a function.

This example is an "isSubset"-like predicate which returns true if all key-value pairs of the array given as the first argument to isSubset correspond to a key-value of the array given as the second argument. It demonstrates both indirect array expansion and indirect key-passing without eval using the aforementioned special compound assignment expansion.

isSubset() {
    local -a 'xkeys=("${!'"$1"'[@]}")' 'ykeys=("${!'"$2"'[@]}")'
    set -- "${@/%/[key]}"

    (( ${#xkeys[@]} <= ${#ykeys[@]} )) || return 1

    local key
    for key in "${xkeys[@]}"; do
        [[ ${!2+_} && ${!1} == ${!2} ]] || return 1
    done
}

main() {
    # "a" is a subset of "b"
    local -a 'a=({0..5})' 'b=({0..10})'
    isSubset a b
    echo $? # true

    # "a" contains a key not in "b"
    local -a 'a=([5]=5 {6..11})' 'b=({0..10})'
    isSubset a b
    echo $? # false

    # "a" contains an element whose value != the corresponding member of "b"
    local -a 'a=([5]=5 6 8 9 10)' 'b=({0..10})'
    isSubset a b
    echo $? # false
}

main

This script is one way of implementing a crude multidimensional associative array by storing array definitions in an array and referencing them through indirection. The script takes two keys and dynamically calls a function whose name is resolved from the array.

callFuncs() {
    # Set up indirect references as positional parameters to minimize local name collisions.
    set -- "${@:1:3}" ${2+'a["$1"]' "$1"'["$2"]'}

    # The only way to test for set but null parameters is unfortunately to test each individually.
    local x
    for x; do
        [[ $x ]] || return 0
    done

    local -A a=(
        [foo]='([r]=f [s]=g [t]=h)'
        [bar]='([u]=i [v]=j [w]=k)'
        [baz]='([x]=l [y]=m [z]=n)'
        ) ${4+${a["$1"]+"${1}=${!3}"}} # For example, if "$1" is "bar" then define a new array: bar=([u]=i [v]=j [w]=k)

    ${4+${a["$1"]+"${!4-:}"}} # Now just lookup the new array. for inputs: "bar" "v", the function named "j" will be called, which prints "j" to stdout.
}

main() {
    # Define functions named {f..n} which just print their own names.
    local fun='() { echo "$FUNCNAME"; }' x

    for x in {f..n}; do
        eval "${x}${fun}"
    done

    callFuncs "$@"
}

main "$@"

Bugs and Portability Considerations

Bugs Evaluation order Here are some of the nasty details of array assignment evaluation order. You can use this testcase code to generate these results.
Each testcase prints evaluation order for indexed array assignment
contexts. Each context is tested for expansions (represented by digits) and
arithmetic (letters), ordered from left to right within the expression. The
output corresponds to the way evaluation is re-ordered for each shell:

a[ $1 a ]=${b[ $2 b ]:=${c[ $3 c ]}}               No attributes
a[ $1 a ]=${b[ $2 b ]:=c[ $3 c ]}                  typeset -ia a
a[ $1 a ]=${b[ $2 b ]:=c[ $3 c ]}                  typeset -ia b
a[ $1 a ]=${b[ $2 b ]:=c[ $3 c ]}                  typeset -ia a b
(( a[ $1 a ] = b[ $2 b ] ${c[ $3 c ]} ))           No attributes
(( a[ $1 a ] = ${b[ $2 b ]:=c[ $3 c ]} ))          typeset -ia b
a+=( [ $1 a ]=${b[ $2 b ]:=${c[ $3 c ]}} [ $4 d ]=$(( $5 e )) ) typeset -a a
a+=( [ $1 a ]=${b[ $2 b ]:=c[ $3 c ]} [ $4 d ]=${5}e ) typeset -ia a

bash: 4.2.42(1)-release
2 b 3 c 2 b 1 a
2 b 3 2 b 1 a c
2 b 3 2 b c 1 a
2 b 3 2 b c 1 a c
1 2 3 c b a
1 2 b 3 2 b c c a
1 2 b 3 c 2 b 4 5 e a d
1 2 b 3 2 b 4 5 a c d e

ksh93: Version AJM 93v- 2013-02-22
1 2 b b a
1 2 b b a
1 2 b b a
1 2 b b a
1 2 3 c b a
1 2 b b a
1 2 b b a 4 5 e d
1 2 b b a 4 5 d e

mksh: @(#)MIRBSD KSH R44 2013/02/24
2 b 3 c 1 a
2 b 3 1 a c
2 b 3 c 1 a
2 b 3 c 1 a
1 2 3 c a b
1 2 b 3 c a
1 2 b 3 c 4 5 e a d
1 2 b 3 4 5 a c d e

zsh: 5.0.2
2 b 3 c 2 b 1 a
2 b 3 2 b 1 a c
2 b 1 a
2 b 1 a
1 2 3 c b a
1 2 b a
1 2 b 3 c 2 b 4 5 e
1 2 b 3 2 b 4 5

See also

[Jul 25, 2017] Handling positional parameters

Notable quotes:
"... under construction ..."
"... under construction ..."
Jul 25, 2017 | wiki.bash-hackers.org

Intro The day will come when you want to give arguments to your scripts. These arguments are known as positional parameters . Some relevant special parameters are described below:

Parameter(s) Description
$0 the first positional parameter, equivalent to argv[0] in C, see the first argument
$FUNCNAME the function name ( attention : inside a function, $0 is still the $0 of the shell, not the function name)
$1 ... $9 the argument list elements from 1 to 9
${10} ... ${N} the argument list elements beyond 9 (note the parameter expansion syntax!)
$* all positional parameters except $0 , see mass usage
$@ all positional parameters except $0 , see mass usage
$# the number of arguments, not counting $0

These positional parameters reflect exactly what was given to the script when it was called.

Option-switch parsing (e.g. -h for displaying help) is not performed at this point.

See also the dictionary entry for "parameter" . The first argument The very first argument you can access is referenced as $0 . It is usually set to the script's name exactly as called, and it's set on shell initialization:

Testscript - it just echos $0 :


#!/bin/bash

echo "$0"

You see, $0 is always set to the name the script is called with ( $ is the prompt ):

> ./testscript 

./testscript


> /usr/bin/testscript

/usr/bin/testscript

However, this isn't true for login shells:


> echo "$0"

-bash

In other terms, $0 is not a positional parameter, it's a special parameter independent from the positional parameter list. It can be set to anything. In the ideal case it's the pathname of the script, but since this gets set on invocation, the invoking program can easily influence it (the login program does that for login shells, by prefixing a dash, for example).

Inside a function, $0 still behaves as described above. To get the function name, use $FUNCNAME .

Shifting

The builtin command shift is used to change the positional parameter values:

The command can take a number as argument: the number of positions to shift, e.g. shift 4 shifts $5 to $1 .

Using them

Enough theory, you want to access your script-arguments. Well, here we go.

One by one

One way is to access specific parameters:


#!/bin/bash

echo "Total number of arguments: $#"

echo "Argument 1: $1"

echo "Argument 2: $2"

echo "Argument 3: $3"

echo "Argument 4: $4"

echo "Argument 5: $5"

While useful in another situation, this way lacks flexibility. The maximum number of arguments is a fixed value - which is a bad idea if you write a script that takes many filenames as arguments.

⇒ forget that one Loops There are several ways to loop through the positional parameters.


You can code a C-style for-loop using $# as the end value. On every iteration, the shift -command is used to shift the argument list:


numargs=$#

for ((i=1 ; i <= numargs ; i++))

do

    echo "$1"

    shift

done

Not very stylish, but usable. The numargs variable is used to store the initial value of $# because the shift command will change it as the script runs.


Another way to iterate one argument at a time is the for loop without a given wordlist. The loop uses the positional parameters as a wordlist:


for arg

do

    echo "$arg"

done

Advantage: The positional parameters will be preserved

The next method is similar to the first example (the for loop), but it doesn't test for reaching $# . It shifts and checks if $1 still expands to something, using the test command :


while [ "$1" ]

do

    echo "$1"

    shift

done

Looks nice, but has the disadvantage of stopping when $1 is empty (null-string). Let's modify it to run as long as $1 is defined (but may be null), using parameter expansion for an alternate value :


while [ "${1+defined}" ]; do

  echo "$1"

  shift

done

Getopts

There is a small tutorial dedicated to ''getopts'' ( under construction ).

Mass usage

All Positional Parameters

Sometimes it's necessary to just "relay" or "pass" given arguments to another program. It's very inefficient to do that in one of these loops, as you will destroy integrity, most likely (spaces!).

The shell developers created $* and $@ for this purpose.

As overview:

Syntax Effective result
$* $1 $2 $3 ... ${N}
$@ $1 $2 $3 ... ${N}
"$*" "$1c$2c$3c ... c${N}"
"$@" "$1" "$2" "$3" ... "${N}"

Without being quoted (double quotes), both have the same effect: All positional parameters from $1 to the last one used are expanded without any special handling.

When the $* special parameter is double quoted, it expands to the equivalent of: "$1c$2c$3c$4c ..$N" , where 'c' is the first character of IFS .

But when the $@ special parameter is used inside double quotes, it expands to the equivalent of

"$1" "$2" "$3" "$4" .. "$N"

which reflects all positional parameters as they were set initially and passed to the script or function. If you want to re-use your positional parameters to call another program (for example in a wrapper-script), then this is the choice for you, use double quoted "$@" .
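A small, hypothetical demo script (call it quotedemo and run it as ./quotedemo "two words" three ) makes the difference visible:

#!/bin/bash
printf '<%s> ' "$@"; echo   # <two words> <three>   - each parameter preserved
printf '<%s> ' "$*"; echo   # <two words three>     - one single word, joined with the first IFS character
printf '<%s> ' $@;   echo   # <two> <words> <three> - unquoted: word splitting breaks them apart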

Well, let's just say: You almost always want a quoted "$@" !

Range Of Positional Parameters

Another way to mass expand the positional parameters is similar to what is possible for a range of characters using substring expansion on normal parameters and the mass expansion range of arrays .

${@:START:COUNT}

${*:START:COUNT}

"${@:START:COUNT}"

"${*:START:COUNT}"

The rules for using @ or * and quoting are the same as above. This will expand COUNT number of positional parameters beginning at START . COUNT can be omitted ( ${@:START} ), in which case, all positional parameters beginning at START are expanded.

If START is negative, the positional parameters are numbered in reverse starting with the last one.

COUNT may not be negative, i.e. the element count may not be decremented.

Example: START at the last positional parameter:


echo "${@: -1}"

Attention : As of Bash 4, a START of 0 includes the special parameter $0 , i.e. the shell name or whatever $0 is set to, when the positional parameters are in use. A START of 1 begins at $1 . In Bash 3 and older, both 0 and 1 began at $1 .

Setting Positional Parameters

Setting positional parameters with command line arguments is not the only way to set them. The builtin command set may be used to "artificially" change the positional parameters from inside the script or function:


set "This is" my new "set of" positional parameters



# RESULTS IN

# $1: This is

# $2: my

# $3: new

# $4: set of

# $5: positional

# $6: parameters

It's wise to signal "end of options" when setting positional parameters this way. If not, the dashes might be interpreted as an option switch by set itself:


# both ways work, but behave differently. See the article about the set command!

set -- ...

set - ...

Alternately this will also preserve any verbose (-v) or tracing (-x) flags, which may otherwise be reset by set


set -$- ...

Production examples Using a while loop To make your program accept options as standard command syntax:

COMMAND [options] <params> # Like 'cat -A file.txt'

See simple option parsing code below. It's not that flexible. It doesn't auto-interpret combined options (-fu USER) but it works and is a good rudimentary way to parse your arguments.


#!/bin/sh

# Keeping options in alphabetical order makes it easy to add more.



while :

do

    case "$1" in

      -f | --file)

          file="$2"   # You may want to check validity of $2

          shift 2

          ;;

      -h | --help)

          display_help  # Call your function

          # no shifting needed here, we're done.

          exit 0

          ;;

      -u | --user)

          username="$2" # You may want to check validity of $2

          shift 2

          ;;

      -v | --verbose)

          #  It's better to assign a string, than a number like "verbose=1"

          #  because if you're debugging the script with "bash -x" code like this:

          #

          #    if [ "$verbose" ] ...

          #

          #  You will see:

          #

          #    if [ "verbose" ] ...

          #

          #  Instead of cryptic

          #

          #    if [ "1" ] ...

          #

          verbose="verbose"

          shift

          ;;

      --) # End of all options

          shift

          break
          ;;

      -*)

          echo "Error: Unknown option: $1" >&2

          exit 1

          ;;

      *)  # No more options

          break

          ;;

    esac

done



# End of file

Filter unwanted options with a wrapper script

This simple wrapper enables filtering unwanted options (here: -a and --all for ls ) out of the command line. It reads the positional parameters and builds a filtered array consisting of them, then calls ls with the new option set. It also respects -- as "end of options" for ls and doesn't change anything after it:


#!/bin/bash



# simple ls(1) wrapper that doesn't allow the -a option



options=()  # the buffer array for the parameters

eoo=0       # end of options reached



while [[ $1 ]]

do

    if ! ((eoo)); then

        case "$1" in

          -a)

              shift

              ;;

          --all)

              shift

              ;;

          -[^-]*a*|-a?*)

              options+=("${1//a}")

              shift

              ;;

          --)

              eoo=1

              options+=("$1")

              shift

              ;;

          *)

              options+=("$1")

              shift

              ;;

        esac

    else

        options+=("$1")



        # Another (worse) way of doing the same thing:

        # options=("${options[@]}" "$1")

        shift

    fi

done



/bin/ls "${options[@]}"

Using getopts There is a small tutorial dedicated to ''getopts'' ( under construction ). See also
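Since the getopts tutorial referenced above is still under construction, here is a minimal, hedged sketch of typical getopts usage (the options -f, -u and -v are only examples):

#!/bin/bash
while getopts ":f:u:v" opt; do
    case $opt in
        f)  file=$OPTARG ;;
        u)  username=$OPTARG ;;
        v)  verbose="verbose" ;;
        \?) echo "Unknown option: -$OPTARG" >&2; exit 1 ;;
        :)  echo "Option -$OPTARG requires an argument" >&2; exit 1 ;;
    esac
done
shift $((OPTIND - 1))   # what is left in "$@" are the non-option arguments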

Discussion 2010/04/14 14:20
The shell-developers invented $* and $@ for this purpose.
Without being quoted (double-quoted), both have the same effect: All positional parameters from $1 to the last used one >are expanded, separated by the first character of IFS (represented by "c" here, but usually a space):
$1c$2c$3c$4c........$N

Without double quotes, $* and $@ are expanding the positional parameters separated by only space, not by IFS.


#!/bin/bash



export IFS='-'



echo -e $*

echo -e $@


$./test "This is" 2 3

This is 2 3

This is 2 3

2011/02/18 16:11 #!/bin/bash

OLDIFS="$IFS" IFS='-' #export IFS='-'

#echo -e $* #echo -e $@ #should be echo -e "$*" echo -e "$@" IFS="$OLDIFS"

2011/02/18 16:14 #should be echo -e "$*"

2012/04/20 10:32 Here's yet another non-getopts way.

http://bsdpants.blogspot.de/2007/02/option-ize-your-shell-scripts.html

2012/07/16 14:48 Hi there!

What if I use "$@" in subsequent function calls, but arguments are strings?

I mean, having:


#!/bin/bash

echo "$@"

echo n: $#

If you use it


mypc$ script arg1 arg2 "asd asd" arg4

arg1 arg2 asd asd arg4

n: 4

But having


#!/bin/bash

myfunc()

{

  echo "$@"

  echo n: $#

}

echo "$@"

echo n: $#

myfunc "$@"

you get:


mypc$ myscrpt arg1 arg2 "asd asd" arg4

arg1 arg2 asd asd arg4

4

arg1 arg2 asd asd arg4

5

As you can see, there is no way to make know the function that a parameter is a string and not a space separated list of arguments.

Any idea of how to solve it? I've test calling functions and doing expansion in almost all ways with no results.

2012/08/12 09:11 I don't know why it fails for you. It should work if you use "$@" , of course.

See the example I used your second script with:


$ ./args1 a b c "d e" f

a b c d e f

n: 5

a b c d e f

n: 5

[Jul 25, 2017] Bash function for 'cd' aliases

Jul 25, 2017 | artofsoftware.org

Sep 2, 2011

Posted by craig in Tools

Leave a comment

Tags

bash , CDPATH

Once upon a time I was playing with Windows Power Shell (WPSH) and discovered a very useful function for changing to commonly visited directories. The function, called "go", written by Peter Provost , grew on me as I used WPSH, so much so that I decided to implement it in bash after my WPSH experiments ended.

The problem is simple. Users of command line interfaces tend to visit the same directories repeatedly over the course of their work, and having a way to get to these oft-visited places without a lot of typing is nice.

The solution entails maintaining a map of key-value pairs, where each key is an alias to a value, which is itself a commonly visited directory. The "go" function will, when given a string input, look that string up in the map, and if the key is found, move to the directory indicated by the value.

The map itself is just a specially formatted text file with one key-value entry per line, while each entry is separated into key-value components by the first encountered colon, with the left side being interpreted as the entry's key and the right side as its value.

Keys are typically short easily typed strings, while values can be arbitrary path names, and even contain references to environment variables. The effect of this is that "go" can respond dynamically to the environment.

Finally, the "go" function finds the map file by referring to an environment variable called "GO_FILE", which should have as its value the full path to the map.

Before I ran into this idea I had maintained a number of shell aliases, (i.e. alias dwork='cd $WORK_DIR'), to achieve a similar end, but every time I wanted to add a new location I was forced to edit my .bashrc file. Then I would subsequently have to resource it or enter the alias again on the command line. Since I typically keep multiple shells open this is just a pain, and so I didn't add new aliases very often. With this method, a new entry in the "go file" is immediately available to all open shells without any extra finagling.

This functionality is related to CDPATH, but they are not replacements for one another. Indeed CDPATH is the more appropriate solution when you want to be able to "cd" to all or most of the sub-directories of some parent. On the other hand, "go" works very well for getting to a single directory easily. For example you might not want "/usr/local" in your CDPATH and still want an abbreviated way of getting to "/usr/local/share".

The code for the go function, as well as some brief documentation follows.

##############################################
# GO
#
# Inspired by some Windows Power Shell code
# from Peter Provost (peterprovost.org)
#
# Here are some examples entries:
# work:${WORK_DIR}
# source:${SOURCE_DIR}
# dev:/c/dev
# object:${USER_OBJECT_DIR}
# debug:${USER_OBJECT_DIR}/debug
###############################################
export GO_FILE=~/.go_locations
function go
{
   if [ -z "$GO_FILE" ]
   then
      echo "The variable GO_FILE is not set."
      return
   fi

   if [ ! -e "$GO_FILE" ]
   then
      echo "The 'go file': '$GO_FILE' does not exist."
      return
   fi

   dest=""
   oldIFS=${IFS}
   IFS=$'\n'
   for entry in `cat ${GO_FILE}`
   do
      if [ "$1" = ${entry%%:*} ]
      then
         #echo $entry
         dest=${entry##*:}
         break
      fi
   done

   if [ -n "$dest" ]
   then
      # Expand variables in the go file.
      #echo $dest
      cd `eval echo $dest`
   else
      echo "Invalid location, valid locations are:"
      cat $GO_FILE
   fi
   export IFS=${oldIFS}
}
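A hypothetical session, with made-up entries in the go file, might look like this:

$ cat ~/.go_locations
work:${WORK_DIR}
logs:/var/log

$ go logs       # changes the current directory to /var/log
$ go nowhere    # prints "Invalid location, ..." and lists the go file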

[Jul 25, 2017] Local variables

Notable quotes:
"... completely local and separate ..."
Jul 25, 2017 | wiki.bash-hackers.org

A variable can be declared local to a function:

myfunc() {
  local var=VALUE

  # alternative, only when used INSIDE a function
  declare var=VALUE

  ...
}

The local keyword (or declaring a variable using the declare command) tags a variable to be treated completely local and separate inside the function where it was declared:

foo=external

printvalue() {
  local foo=internal

  echo $foo
}

# this will print "external"
echo $foo

# this will print "internal"
printvalue

# this will print - again - "external"
echo $foo

[Jul 25, 2017] Environment variables

Notable quotes:
"... environment variables ..."
"... including the environment variables ..."
Jul 25, 2017 | wiki.bash-hackers.org

The environment space is not directly related to the topic about scope, but it's worth mentioning.

Every UNIX® process has a so-called environment . Other items, in addition to variables, are saved there, the so-called environment variables . When a child process is created (in Bash e.g. by simply executing another program, say ls to list files), the whole environment including the environment variables is copied to the new process. Reading that from the other side means: Only variables that are part of the environment are available in the child process.

A variable can be tagged to be part of the environment using the export command:

# create a new variable and set it:
# -> This is a normal shell variable, not an environment variable!
myvariable="Hello world."

# make the variable visible to all child processes:
# -> Make it an environment variable: "export" it
export myvariable
Remember that the exported variable is a copy . There is no provision to "copy it back to the parent." See the article about Bash in the process tree !
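A small demonstration of that "copy" behaviour (the variable name is taken from the example above):

myvariable="Hello world."
export myvariable

# the child gets a copy; changes made there do not come back to the parent shell
bash -c 'myvariable="changed in the child"; echo "child : $myvariable"'
echo "parent: $myvariable"   # still prints "Hello world."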


1) under specific circumstances, also by the shell itself

[Jul 25, 2017] Block commenting

Jul 25, 2017 | wiki.bash-hackers.org

: (colon) and input redirection. The : does nothing, it's a pseudo command, so it does not care about standard input. In the following code example, you want to test mail and logging, but not dump the database, or execute a shutdown:

#!/bin/bash
# Write info mails, do some tasks and bring down the system in a safe way
echo "System halt requested"
mail -s "System halt" netadmin@example.com
logger -t SYSHALT "System halt requested"

##### The following "code block" is effectively ignored
: <<"SOMEWORD"
/etc/init.d/mydatabase clean_stop
mydatabase_dump /var/db/db1 /mnt/fsrv0/backups/db1
logger -t SYSHALT "System halt: pre-shutdown actions done, now shutting down the system"
shutdown -h NOW
SOMEWORD
##### The ignored codeblock ends here
What happened? The : pseudo command was given some input by redirection (a here-document) - the pseudo command didn't care about it, effectively, the entire block was ignored.

The here-document-tag was quoted here to avoid substitutions in the "commented" text! Check redirection with here-documents for more

[Jul 25, 2017] Doing specific tasks: concepts, methods, ideas

Notable quotes:
"... under construction! ..."
Jul 25, 2017 | wiki.bash-hackers.org

[Jul 25, 2017] Bash 4 - a rough overview

Jul 25, 2017 | wiki.bash-hackers.org

See the Bash changes page for new stuff introduced.

Besides many bugfixes since Bash 3.2, Bash 4 will bring some interesting new features for shell users and scripters. See also Bash changes for a small general overview with more details.

Not all of the changes and news are included here, just the biggest or most interesting ones. The changes to completion, and the readline component are not covered. Though, if you're familiar with these parts of Bash (and Bash 4), feel free to write a chapter here.

The complete list of fixes and changes is in the CHANGES or NEWS file of your Bash 4 distribution.

The currently available stable version is the 4.2 release (February 13, 2011).

New or changed commands and keywords

The new "coproc" keyword

Bash 4 introduces the concept of coprocesses, a well known feature of other shells. The basic concept is simple: It will start any command in the background and set up an array that is populated with accessible files that represent the filedescriptors of the started process.

In other words: It lets you start a process in background and communicate with its input and output data streams.
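A minimal sketch, using bc as the coprocess (the name BC is arbitrary):

# start bc as a coprocess; its stdin/stdout are exposed through the BC array
coproc BC { bc -l; }

# write a request to the coprocess and read the answer back
echo "2^10" >&"${BC[1]}"
read -r answer <&"${BC[0]}"
echo "$answer"   # prints 1024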

See The coproc keyword The new "mapfile" builtin The mapfile builtin is able to map the lines of a file directly into an array. This avoids having to fill an array yourself using a loop. It enables you to define the range of lines to read, and optionally call a callback, for example to display a progress bar.
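For example (assuming /etc/passwd exists on the system):

# read the file into an array, one line per element; -t strips the trailing newlines
mapfile -t lines < /etc/passwd
echo "Read ${#lines[@]} lines; the first one is: ${lines[0]}"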

See: The mapfile builtin command Changes to the "case" keyword The case construct understands two new action list terminators:

The ;& terminator causes execution to continue with the next action list (rather than terminate the case construct).

The ;;& terminator causes the case construct to test the next given pattern instead of terminating the whole execution.
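A small sketch of the new terminators:

x=apple
case $x in
    a*) echo "starts with a" ;;&   # ;;& - keep testing the remaining patterns
    *e) echo "ends with e"   ;;&
    *)  echo "matched the catch-all" ;;
esac
# for x=apple, all three lines are printed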

See The case statement Changes to the "declare" builtin The -p option now prints all attributes and values of declared variables (or functions, when used with -f ). The output is fully re-usable as input.

The new option -l declares a variable in a way that the content is converted to lowercase on assignment. For uppercase, the same applies to -u . The option -c causes the content to be capitalized before assignment.

declare -A declares associative arrays (see below). Changes to the "read" builtin The read builtin command has some interesting new features.

The -t option to specify a timeout value has been slightly tuned. It now accepts fractional values and the special value 0 (zero). When -t 0 is specified, read immediately returns with an exit status indicating if there's data waiting or not. However, when a timeout is given, and the read builtin times out, any partial data received up to the timeout is stored in the given variable, rather than lost. When a timeout is hit, read exits with a code greater than 128.

A new option, -i , was introduced to be able to preload the input buffer with some text (when Readline is used, with -e ). The user is able to change the text, or press return to accept it.

See The read builtin command Changes to the "help" builtin The builtin itself didn't change much, but the data displayed is more structured now. The help texts are in a better format, much easier to read.

There are two new options: -d displays the summary of a help text, -m displays a manpage-like format. Changes to the "ulimit" builtin Besides the use of the 512 bytes blocksize everywhere in POSIX mode, ulimit supports two new limits: -b for max socket buffer size and -T for max number of threads. Expansions Brace Expansion The brace expansion was tuned to provide expansion results with leading zeros when requesting a row of numbers.

See Brace expansion Parameter Expansion Methods to modify the case on expansion time have been added.

On expansion time you can modify the syntax by adding operators to the parameter name.

See Case modification on parameter expansion Substring expansion When using substring expansion on the positional parameters, a starting index of 0 now causes $0 to be prepended to the list (if the positional parameters are used). Before, this expansion started with $1:

# this should display $0 on Bash v4, $1 on Bash v3
echo ${@:0:1}

Globbing There's a new shell option globstar . When enabled, Bash will perform recursive globbing on ** – this means it matches all directories and files from the current position in the filesystem, rather than only the current level.
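For example:

shopt -s globstar
ls **/*.sh   # every *.sh file in the current directory and all subdirectories
echo **/     # all directories, recursively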

The new shell option dirspell enables spelling corrections on directory names during globbing.

See Pathname expansion (globbing) Associative Arrays Besides the classic method of integer indexed arrays, Bash 4 supports associative arrays.

An associative array is an array indexed by an arbitrary string, something like

declare -A ASSOC

ASSOC[First]="first element"
ASSOC[Hello]="second element"
ASSOC[Peter Pan]="A weird guy"

See Arrays Redirection There is a new &>> redirection operator, which appends the standard output and standard error to the named file. This is the same as the good old >>FILE 2>&1 notation.

The parser now understands |& as a synonym for 2>&1 | , which redirects the standard error for a command through a pipe.

See Redirection Interesting new shell variables

Variable Description
BASHPID contains the PID of the current shell (this is different than what $$ does!)
PROMPT_DIRTRIM specifies the max. level of unshortened pathname elements in the prompt
FUNCNEST control the maximum number of shell function recursions

See Special parameters and shell variables Interesting new Shell Options The mentioned shell options are off by default unless otherwise mentioned.

Option Description
checkjobs check for and report any running jobs at shell exit
compat* set compatibility modes for older shell versions (influences regular expression matching in [[ ... ]] )
dirspell enables spelling corrections on directory names during globbing
globstar enables recursive globbing with **
lastpipe (4.2) to execute the last command in a pipeline in the current environment

See List of shell options Misc

[Jul 25, 2017] Keeping persistent history in bash

Jul 25, 2017 | eli.thegreenplace.net

June 11, 2013 at 19:27 Tags Linux , Software & Tools

Update (Jan 26, 2016): I posted a short update about my usage of persistent history.

For someone spending most of his time in front of a Linux terminal, history is very important. But traditional bash history has a number of limitations, especially when multiple terminals are involved (I sometimes have dozens open). Also it's not very good at preserving just the history you're interested in across reboots.

There are many approaches to improve the situation; here I want to discuss one I've been using very successfully in the past few months - a simple "persistent history" that keeps track of history across terminal instances, saving it into a dot-file in my home directory ( ~/.persistent_history ). All commands, from all terminal instances, are saved there, forever. I found this tremendously useful in my work - it saves me time almost every day.

Why does it go into a separate history and not the main one which is accessible by all the existing history manipulation tools? Because IMHO the latter is still worthwhile to be kept separate for the simple need of bringing up recent commands in a single terminal, without mixing up commands from other terminals. While the terminal is open, I want the press "Up" and get the previous command, even if I've executed a 1000 other commands in other terminal instances in the meantime.

Persistent history is very easy to set up. Here's the relevant portion of my ~/.bashrc :

log_bash_persistent_history()
{
  [[
    $(history 1) =~ ^\ *[0-9]+\ +([^\ ]+\ [^\ ]+)\ +(.*)$
  ]]
  local date_part="${BASH_REMATCH[1]}"
  local command_part="${BASH_REMATCH[2]}"
  if [ "$command_part" != "$PERSISTENT_HISTORY_LAST" ]
  then
    echo $date_part "|" "$command_part" >> ~/.persistent_history
    export PERSISTENT_HISTORY_LAST="$command_part"
  fi
}

# Stuff to do on PROMPT_COMMAND
run_on_prompt_command()
{
    log_bash_persistent_history
}

PROMPT_COMMAND="run_on_prompt_command"

The format of the history file created by this is:

2013-06-09 17:48:11 | cat ~/.persistent_history
2013-06-09 17:49:17 | vi /home/eliben/.bashrc
2013-06-09 17:49:23 | ls

Note that an environment variable is used to avoid useless duplication (i.e. if I run ls twenty times in a row, it will only be recorded once).

OK, so we have ~/.persistent_history , how do we use it? First, I should say that it's not used very often, which kind of connects to the point I made earlier about separating it from the much higher-use regular command history. Sometimes I just look into the file with vi or tail , but mostly this alias does the trick for me:

alias phgrep='cat ~/.persistent_history|grep --color'

The alias name mirrors another alias I've been using for ages:

alias hgrep='history|grep --color'

Another tool for managing persistent history is a trimmer. I said earlier this file keeps the history "forever", which is a scary word - what if it grows too large? Well, first of all - worry not. At work my history file grew to about 2 MB after 3 months of heavy usage, and 2 MB is pretty small these days. Appending to the end of a file is very, very quick (I'm pretty sure it's a constant-time operation) so the size doesn't matter much. But trimming is easy:

tail -20000 ~/.persistent_history | tee ~/.persistent_history

Trims to the last 20000 lines. This should be sufficient for at least a couple of months of history, and your workflow should not really rely on more than that :-)

Finally, what's the use of having a tool like this without employing it to collect some useless statistics. Here's a histogram of the 15 most common commands I've used on my home machine's terminal over the past 3 months:

ls        : 865
vi        : 863
hg        : 741
cd        : 512
ll        : 289
pss       : 245
hst       : 200
python    : 168
make      : 167
git       : 148
time      : 94
python3   : 88
./python  : 88
hpu       : 82
cat       : 80

Some explanation: hst is an alias for hg st . hpu is an alias for hg pull -u . pss is my awesome pss tool , and is the reason why you don't see any calls to grep and find in the list. The proportion of Mercurial vs. git commands is likely to change in the very near future.

[Jul 24, 2017] Bash history handling with multiple terminals

Add history -a to your PROMPT_COMMAND to preserve history from multiple terminals. This is a very neat trick !!!

Bash history handling with multiple terminals

The bash session that is saved is the one for the terminal that is closed the latest. If you want to save the commands for every session, you could use the trick explained here.

export PROMPT_COMMAND='history -a'

To quote the manpage: "If set, the value is executed as a command prior to issuing each primary prompt."

So every time my command has finished, it appends the unwritten history item to ~/.bash

ATTENTION: If you use multiple shell sessions and do not use this trick, you need to write the history manually to preserve it using the command history -a
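A common combination for ~/.bashrc (histappend is optional, but it keeps parallel shells from overwriting each other's history file on exit):

# append to, rather than overwrite, ~/.bash_history on shell exit
shopt -s histappend
# flush each command to the history file as soon as the prompt returns
export PROMPT_COMMAND='history -a'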

See also:

[Jul 20, 2017] These Guys Didnt Back Up Their Files, Now Look What Happened

Notable quotes:
"... Unfortunately, even today, people have not learned that lesson. Whether it's at work, at home, or talking with friends, I keep hearing stories of people losing hundreds to thousands of files, sometimes they lose data worth actual dollars in time and resources that were used to develop the information. ..."
"... "I lost all my files from my hard drive? help please? I did a project that took me 3 days and now i lost it, its powerpoint presentation, where can i look for it? its not there where i save it, thank you" ..."
"... Please someone help me I last week brought a Toshiba Satellite laptop running windows 7, to replace my blue screening Dell vista laptop. On plugged in my sumo external hard drive to copy over some much treasured photos and some of my (work – music/writing.) it said installing driver. it said completed I clicked on the hard drive and found a copy of my documents from the new laptop and nothing else. ..."
Jul 20, 2017 | www.makeuseof.com
Back in college, I used to work just about every day as a computer cluster consultant. I remember a month after getting promoted to a supervisor, I was in the process of training a new consultant in the library computer cluster. Suddenly, someone tapped me on the shoulder, and when I turned around I was confronted with a frantic graduate student – a 30-something year old man who I believe was Eastern European based on his accent – who was nearly in tears.

"Please need help – my document is all gone and disk stuck!" he said as he frantically pointed to his PC.

Now, right off the bat I could have told you three facts about the guy. One glance at the blue screen of the archaic DOS-based version of Wordperfect told me that – like most of the other graduate students at the time – he had not yet decided to upgrade to the newer, point-and-click style word processing software. For some reason, graduate students had become so accustomed to all of the keyboard hot-keys associated with typing in a DOS-like environment that they all refused to evolve into point-and-click users.

The second fact, gathered from a quick glance at his blank document screen and the sweat on his brow told me that he had not saved his document as he worked. The last fact, based on his thick accent, was that communicating the gravity of his situation wouldn't be easy. In fact, it was made even worse by his answer to my question when I asked him when he last saved.

"I wrote 30 pages."

Calculated out at about 600 words a page, that's 18000 words. Ouch.

Then he pointed at the disk drive. The floppy disk was stuck, and from the marks on the drive he had clearly tried to get it out with something like a paper clip. By the time I had carefully fished the torn and destroyed disk out of the drive, it was clear he'd never recover anything off of it. I asked him what was on it.

"My thesis."

I gulped. I asked him if he was serious. He was. I asked him if he'd made any backups. He hadn't.

Making Backups of Backups

If there is anything I learned during those early years of working with computers (and the people that use them), it was how critical it is to not only save important stuff, but also to save it in different places. I would back up floppy drives to those cool new zip drives as well as the local PC hard drive. Never, ever had a single copy of anything.

Unfortunately, even today, people have not learned that lesson. Whether it's at work, at home, or talking with friends, I keep hearing stories of people losing hundreds to thousands of files, sometimes they lose data worth actual dollars in time and resources that were used to develop the information.

To drive that lesson home, I wanted to share a collection of stories that I found around the Internet about some recent cases were people suffered that horrible fate – from thousands of files to entire drives worth of data completely lost. These are people where the only remaining option is to start running recovery software and praying, or in other cases paying thousands of dollars to a data recovery firm and hoping there's something to find.

Not Backing Up Projects

The first example comes from Yahoo Answers , where a user that only provided a "?" for a user name (out of embarrassment probably), posted:

"I lost all my files from my hard drive? help please? I did a project that took me 3 days and now i lost it, its powerpoint presentation, where can i look for it? its not there where i save it, thank you"

The folks answering immediately dove into suggesting that the person run recovery software, and one person suggested that the person run a search on the computer for *.ppt.

... ... ...

Doing Backups Wrong

Then, there's the scenario of actually trying to do a backup and doing it wrong, losing all of the files on the original drive. That was the case for the person who posted on Tech Support Forum who, after purchasing a brand new Toshiba laptop and attempting to transfer old files from an external hard drive, inadvertently wiped the files on that drive.

Please someone help me I last week brought a Toshiba Satellite laptop running windows 7, to replace my blue screening Dell vista laptop. On plugged in my sumo external hard drive to copy over some much treasured photos and some of my (work – music/writing.) it said installing driver. it said completed I clicked on the hard drive and found a copy of my documents from the new laptop and nothing else.

While the description of the problem is a little broken, from the sound of it, the person thought they were backing up from one direction, while they were actually backing up in the other direction. At least in this case not all of the original files were deleted, but a majority were.

[Jul 20, 2017] Server Backup Procedures

Jul 20, 2017 | www.tldp.org
Backing up with ``tar'':

If you decide to use ``tar'' as your backup solution, you should probably take the time to get to know the various command-line options that are available; type " man tar " for a comprehensive list. You will also need to know how to access the appropriate backup media; although all devices are treated like files in the Unix world, if you are writing to a character device such as a tape, the name of the "file" is the device name itself (eg. `` /dev/nst0 '' for a SCSI-based tape drive).

The following command will perform a backup of your entire Linux system onto the `` /archive/ '' file system, with the exception of the `` /proc/ '' pseudo-filesystem, any mounted file systems in `` /mnt/ '', the `` /archive/ '' file system (no sense backing up our backup sets!), as well as Squid's rather large cache files (which are, in my opinion, a waste of backup media and unnecessary to back up):


tar -zcvpf /archive/full-backup-`date '+%d-%B-%Y'`.tar.gz \
    --directory / --exclude=mnt --exclude=proc --exclude=var/spool/squid .


Don't be intimidated by the length of the command above! As we break it down into its components, you will see the beauty of this powerful utility.

The above command specifies the options `` z '' (compress; the backup data will be compressed with ``gzip''), `` c '' (create; an archive file is begin created), `` v '' (verbose; display a list of files as they get backed up), `` p '' (preserve permissions; file protection information will be "remembered" so they can be restored). The `` f '' (file) option states that the very next argument will be the name of the archive file (or device) being written. Notice how a filename which contains the current date is derived, simply by enclosing the ``date'' command between two back-quote characters. A common naming convention is to add a `` tar '' suffix for non-compressed archives, and a `` tar.gz '' suffix for compressed ones.

The `` --directory '' option tells tar to first switch to the following directory path (the `` / '' directory in this example) prior to starting the backup. The `` --exclude '' options tell tar not to bother backing up the specified directories or files. Finally, the `` . '' character tells tar that it should back up everything in the current directory.

Note: It is important to realize that the options to tar are cAsE-sEnSiTiVe! In addition, most of the options can be specified as either single mnemonic characters (eg. ``f''), or by their easier-to-memorize full option names (eg. ``file''). The mnemonic representations are identified by prefixing them with a ``-'' character, while the full names are prefixed with two such characters. Again, see the "man" pages for information on using tar.

Another example, this time writing only the specified file systems (as opposed to writing them all with exceptions as demonstrated in the example above) onto a SCSI tape drive follows:


tar -cvpf /dev/nst0 --label="Backup set created on `date '+%d-%B-%Y'`." \
    --directory / --exclude=var/spool/ etc home usr/local var/spool


In the above command, notice that the `` z '' (compress) option is not used. I strongly recommend against writing compressed data to tape, because if data on a portion of the tape becomes corrupted, you will lose your entire backup set! However, archive files stored without compression have a very high recoverability for non-affected files, even if portions of the tape archive are corrupted.

Because the tape drive is a character device, it is not possible to specify an actual file name. Therefore, the file name used as an argument to tar is simply the name of the device, `` /dev/nst0 '', the first tape device on the SCSI bus.

Note: The `` /dev/nst0 '' device does not rewind after the backup set is written; therefore it is possible to write multiple sets on one tape. (You may also refer to the device as `` /dev/st0 '', in which case the tape is automatically rewound after the backup set is written.)

Since we aren't able to specify a filename for the backup set, the `` --label '' option can be used to write some information about the backup set into the archive file itself.

Finally, only the files contained in the `` /etc/ '', `` /home/ '', `` /usr/local '', and `` /var/spool/ '' (with the exception of Squid's cache data files) are written to the tape.

When working with tapes, you can use the following commands to rewind, and then eject your tape:


mt -f /dev/nst0 rewind



mt -f /dev/nst0 offline


Tip: You will notice that leading `` / '' (slash) characters are stripped by tar when an archive file is created. This is tar's default mode of operation, and it is intended to protect you from overwriting critical files with older versions of those files, should you mistakenly recover the wrong file(s) in a restore operation. If you really dislike this behavior (remember, it's a feature !) you can specify the `` --absolute-paths '' option to tar, which will preserve the leading slashes. However, I don't recommend doing so, as it is Dangerous !
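For completeness, a hedged sketch of restoring such an archive onto the root filesystem (the archive name here is just an example; because the leading slashes were stripped, all paths are extracted relative to the directory given with -C):

tar -zxvpf /archive/full-backup-01-January-2000.tar.gz -C /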

[Jul 18, 2017] Can I copy my Ubuntu OS off my hard drive to a USB stick and boot from that stick with all my programs

user323419
Yes, this is completely possible. First and foremost, you will need at least 2 USB ports available, or 1 USB port and 1 CD-Drive.

You start by booting into a Live-CD version of Ubuntu with your hard-drive where it is and the target device plugged into USB. Mount your internal drive and target USB to any paths you like.

Open up a terminal and enter the following commands:

tar cpf - --xattrs -C /path/to/internal . | tar xpf - --xattrs -C /path/to/target/usb

You can also look into doing this through a live installation and a utility called CloneZilla, but I am unsure of exactly how to use CloneZilla. The above method is what I used to copy my 128GB hard-drive's installation of Ubuntu to a 64GB flash drive.

2) Clone again the internal or external drive in its entirety to another drive:

Use the "Clonezilla" utility, mentioned in the very last paragraph of my original answer, to clone the original internal drive to another external drive to make two such external bootable drives to keep track of. v>

[Jul 16, 2017] Bash prompt tips and tricks

Jul 07, 2017 | opensource.com

Anyone who has started a terminal in Linux is familiar with the default Bash prompt:


[user@$host ~]$

But did you know is that this is completely customizable and can contain some very useful information? Here are a few hidden treasures you can use to customize your Bash prompt.

How is the Bash prompt set?

The Bash prompt is set by the environment variable PS1 (Prompt String 1), which is used for interactive shell prompts. There is also a PS2 variable, which is used when more input is required to complete a Bash command.

[dneary@dhcp-41-137 ~]$ export PS1="[Linux Rulez]$ "
[Linux Rulez] export PS2="... "
[Linux Rulez] if true; then
... echo "Success!"
... fi
Success!

Where is the value of PS1 set?

PS1 is a regular environment variable.

The system default value is set in /etc/bashrc . On my system, the default prompt is set with this line:


[ "$PS1" = "\\s-\\v\\\$ " ] && PS1="[\u@\h \W]\\\$ "

This tests whether the value of PS1 is \s-\v$ (the system default value), and if it is, it sets PS1 to the value [\u@\h \W]\\$ .

If you want to see a custom prompt, however, you should not be editing /etc/bashrc . You should instead add it to .bashrc in your Home directory.

What do \u, \h, \W, \s, and \v mean?

In the PROMPTING section of man bash , you can find a description of all the special characters in PS1 and PS2 . The following are the default options:

\u: the username of the current user
\h: the hostname up to the first dot
\W: the basename of the current working directory ( $HOME is abbreviated as ~ )
\s: the name of the shell (the basename of $0 )
\v: the version of Bash (e.g., 4.4)

What other special strings can I use in the prompts?

There are a number of special strings that can be useful.

There are many other special characters! You can see the full list in the PROMPTING section of the Bash man page .

Multi-line prompts

If you use longer prompts (say if you include \H or \w or a full date-time ), you may want to break things over two lines. Here is an example of a multi-line prompt, with the date, time, and current working directory on one line, and username @hostname on the second line:


PS1="\D{%c} \w\n[\u@\H]$ "

Are there any other interesting things I can do?

One thing people occasionally do is create colorful prompts. While I find them annoying and distracting, you may like them. For example, to change the date-time above to display in red text, the directory in cyan, and your username on a yellow background, you could try this:

PS1 = "\[\e[31m\]\D{%c}\[\e[0m\]
\[\e[36m\]\w\[\e[0m\] \n [\[\e[1;43m\]\u\[\e[0m\]@\H]$ "

To dissect this:

\[\e[31m\] switches to red text for the date-time ( \D{%c} ), and \[\e[0m\] resets the color afterwards
\[\e[36m\] prints the working directory ( \w ) in cyan
\[\e[1;43m\] puts the username ( \u ) on a yellow background, reset again with \[\e[0m\] before @\H (the full hostname)

You can find more colors and tips in the Bash prompt HOWTO . You can even make text inverted or blinking! Why on earth anyone would want to do this, I don't know. But you can!

What are your favorite Bash prompt customizations? And which ones have you seen that drive you crazy? Let me know in the comments.

Ben Cotton on 07 Jul 2017: I really like the Bash-Beautify setup by Chris Albrecht:
https://github.com/KeyboardCowboy/Bash-Beautify/blob/master/.bash_beautify

When you're in a version-controlled directory, it includes the VCS information (e.g. the git branch and status), which is really handy if you do development.

Victorhck on 07 Jul 2017: An easy drag and drop interface to build your own .bashrc/PS1 configuration

http://bashrcgenerator.com/

've phun!

How Docker Is Growing Its Container Business (Apr 21, 2017, 07:00)
VIDEO: Ben Golub, CEO of Docker Inc., discusses the business of containers and where Docker is headed.

Understanding Shell Initialization Files and User Profiles in Linux (Apr 22, 2017, 10:00)
tecmint: Learn about shell initialization files in relation to user profiles for local user management in Linux.

Cockpit – An Easy Way to Administer Multiple Remote Linux Servers via a Web Browser (Apr 23, 2017, 18:00)
Cockpit is a free and open source web-based system management tool where users can easily monitor and manage multiple remote Linux servers.

The Story of Getting SSH Port 22 (Apr 24, 2017, 13:00)
It's no coincidence that the SSH protocol got assigned to port 22.

How To Suspend A Process And Resume It Later In Linux (Apr 24, 2017, 11:00)
This brief tutorial describes how to suspend or pause a running process and resume it later in Unix-like operating systems.

ShellCheck – A Tool That Shows Warnings and Suggestions for Shell Scripts (Apr 25, 2017, 06:00)
tecmint: ShellCheck is a static analysis tool that shows warnings and suggestions concerning bad code in bash/sh shell scripts.

Quick guide for Linux check disk space (Apr 26, 2017, 14:00)
Do you know how much space is left on your Linux system?

[Jul 16, 2017] A Collection Of Useful BASH Scripts For Heavy Commandline Users - OSTechNix

Notable quotes:
"... Provides cheat-sheets for various Linux commands ..."
Jul 16, 2017 | www.ostechnix.com
Today, I have stumbled upon a collection of useful BASH scripts for heavy commandline users. These scripts, known as Bash-Snippets, might be quite helpful for those who live in the terminal all day. Want to check the weather of the place where you live? This script will do that for you. Wondering what the stock prices are? You can run the script that displays the current details of a stock. Feeling bored? You can watch some YouTube videos. All from the command line. You don't need to install any memory-hungry GUI applications.

Bash-Snippets provides the following 12 useful tools:

  1. currency – Currency converter.
  2. stocks – Provides certain Stock details.
  3. weather – Displays weather details of your place.
  4. crypt – Encrypt and decrypt files.
  5. movies – Search and display details of a movie.
  6. taste – Recommendation engine that provides three items similar to the supplied item (the items can be books, music, artists, movies, games, etc.).
  7. short – URL shortener
  8. geo – Provides the details of wan, lan, router, dns, mac, and ip.
  9. cheat – Provides cheat-sheets for various Linux commands .
  10. ytview – Watch YouTube from Terminal.
  11. cloudup – A tool to backup your GitHub repositories to bitbucket.
  12. qrify – Turns the given string into a qr code.
Bash-Snippets – A Collection Of Useful BASH Scripts For Heavy Commandline Users

Installation

You can install these scripts on any OS that supports BASH.

First, clone the GIT repository using command:

git clone https://github.com/alexanderepstein/Bash-Snippets

Sample output would be:

Cloning into 'Bash-Snippets'...
remote: Counting objects: 1103, done.
remote: Compressing objects: 100% (45/45), done.
remote: Total 1103 (delta 40), reused 55 (delta 23), pack-reused 1029
Receiving objects: 100% (1103/1103), 1.92 MiB | 564.00 KiB/s, done.
Resolving deltas: 100% (722/722), done.

Go to the cloned directory:

cd Bash-Snippets/

Git checkout to the latest stable release:

git checkout v1.11.0

Finally, install the Bash-Snippets using command:

sudo ./install.sh

This will ask you which scripts to install. Just type Y and press the ENTER key to install the respective script. If you don't want to install a particular script, type N and hit ENTER.
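Once installed, each selected tool is run by name from the terminal. The exact options vary from script to script and may change between versions, so treat the following as assumed examples of typical usage rather than a definitive reference:

# Show the weather for your current location (detected from your IP address)
weather

# Show a cheat-sheet for the tar command
cheat tar

# Start the interactive currency converter
currency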

[Jul 16, 2017] Classifier: organize files by classifying them into folders of Xls, Docs, .png, .jpeg, video, music, pdfs, images, ISO, etc.

Jul 16, 2017 | github.com
If I'm not wrong, most of our download folders are pretty sloppy compared with other directories, because downloaded files keep piling up there and we can't delete them blindly without risking the loss of important files. It's also impractical to create a bunch of folders based on file types and move the appropriate files into them manually.

So, what can we do to avoid this? It's better to organize files with the help of Classifier; afterwards we can easily delete the unnecessary ones. The Classifier app is written in Python.

How do you organize a directory? Simply navigate to the directory you want to organize/classify and run the classifier command. It will take a few minutes or more depending on how many files the directory contains.

Make a note: there is no undo option if you want to go back, so finalize your decision before running classifier in a directory. Also, it won't move folders.

Install Classifier in Linux through pip

pip is the recommended tool for installing Python packages on Linux. Use the pip command instead of your package manager to get the latest build.

For Debian based systems.

$ sudo apt-get install python-pip

For RHEL/CentOS based systems.

$ sudo yum install python-pip

For Fedora

$ sudo dnf install python-pip

For openSUSE

$ sudo zypper install python-pip

For Arch Linux based systems

$ sudo pacman -S python-pip

Finally run the pip tool to install Classifier on Linux.

$ sudo pip install classifier
Organize pattern files into specific folders

First i will go with default option which will organize pattern files into specific folders. This will create bunch of directories based on the file types and move them into specific folders.

Here is how my directory looks now (before running the classifier command).

$ pwd
/home/magi/classifier

$ ls -lh
total 139M
-rw-r--r-- 1 magi magi 4.5M Mar 21 21:21 Aaluma_Doluma.mp3
-rw-r--r-- 1 magi magi  26K Mar 21 21:12 battery-monitor_0.4-xenial_all.deb
-rw-r--r-- 1 magi magi  24K Mar 21 21:12 buku-command-line-bookmark-manager-linux.png
-rw-r--r-- 1 magi magi    0 Mar 21 21:43 config.php
-rw-r--r-- 1 magi magi   25 Mar 21 21:13 core.py
-rw-r--r-- 1 magi magi 101K Mar 21 21:12 drawing.svg
-rw-r--r-- 1 magi magi  86M Mar 21 21:12 go1.8.linux-amd64.tar.gz
-rw-r--r-- 1 magi magi   28 Mar 21 21:13 index.html
-rw-r--r-- 1 magi magi   27 Mar 21 21:13 index.php
-rw-r--r-- 1 magi magi  48M Apr 30  2016 Kabali Tamil Movie _ Official Teaser _ Rajinikanth _ Radhika Apte _ Pa Ranjith-9mdJV5-eias.webm
-rw-r--r-- 1 magi magi   28 Mar 21 21:12 magi1.txt
-rw-r--r-- 1 magi magi   66 Mar 21 21:12 ppa.py
-rw-r--r-- 1 magi magi 1.1K Mar 21 21:12 Release.html
-rw-r--r-- 1 magi magi  45K Mar 21 21:12 v0.4.zip

Navigate to the directory whose files you want to organize, then run the classifier command without any options.

$ classifier
Scanning Files
Done!

Here is how the directory looks after running the classifier command:

$ ls -lh
total 44K
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Archives
-rw-r--r-- 1 magi magi    0 Mar 21 21:43 config.php
-rw-r--r-- 1 magi magi   25 Mar 21 21:13 core.py
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 DEBPackages
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Documents
-rw-r--r-- 1 magi magi   28 Mar 21 21:13 index.html
-rw-r--r-- 1 magi magi   27 Mar 21 21:13 index.php
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Music
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Pictures
-rw-r--r-- 1 magi magi   66 Mar 21 21:12 ppa.py
-rw-r--r-- 1 magi magi 1.1K Mar 21 21:12 Release.html
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Videos

Make a note: this will organize only general-category files such as docs, audio, video, pictures, archives, etc., and won't organize .py, .html, .php files, and so on.

Classify specific file types into specific folder

To classify specific file types into a specific folder, just add the -st (file types) and -sf (folder name) options to the classifier command.

For a clearer picture, I'm going to move .py, .html, and .php files into a Development folder. Here is the exact command to achieve it.

$ classifier -st .py .html .php -sf "Development" 
Scanning Files
Done!

If the folder doesn't exist, it will create a new one and organize the files into it. See the following output: it created the Development directory and moved all the matching files into it.

$ ls -lh
total 28K
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Archives
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 DEBPackages
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:51 Development
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Documents
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Music
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Pictures
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Videos

For clarity, here is a listing of the Development folder's files.

$ ls -lh Development/
total 12K
-rw-r--r-- 1 magi magi  0 Mar 21 21:43 config.php
-rw-r--r-- 1 magi magi 25 Mar 21 21:13 core.py
-rw-r--r-- 1 magi magi 28 Mar 21 21:13 index.html
-rw-r--r-- 1 magi magi 27 Mar 21 21:13 index.php
-rw-r--r-- 1 magi magi  0 Mar 21 21:43 ppa.py
-rw-r--r-- 1 magi magi  0 Mar 21 21:43 Release.html

To organize files by date, use the -dt option. It will organize the current directory's files based on their dates.

$ classifier -dt


To save the organized files in a different location, add the -d (source directory) and -o (destination directory) options to the classifier command.

$  classifier -d /home/magi/organizer -o /home/magi/2g

[Jul 10, 2017] Crowdsourcing, Open Data and Precarious Labour by Allana Mayer Model View Culture

Notable quotes:
"... Photo CC-BY Mace Ojala. ..."
"... Photo CC-BY Samantha Marx. ..."
Jul 10, 2017 | modelviewculture.com
Crowdsourcing and microtransactions are two halves of the same coin: they both mark new stages in the continuing devaluation of labour. By Allana Mayer, February 24th, 2016.

The cultural heritage industries (libraries, archives, museums, and galleries, often collectively called GLAMs) like to consider themselves the tech industry's little siblings. We're working to develop things like Linked Open Data, a decentralized network of collaboratively-improved descriptive metadata; we're building our own open-source tech to make our catalogues and collections more useful; we're pushing scholarly publishing out from behind paywalls and into open-access platforms; we're driving innovations in accessible tech.

We're only different in a few ways. One, we're a distinctly feminized set of professions , which comes with a large set of internally- and externally-imposed assumptions. Two, we rely very heavily on volunteer labour, and not just in the internship-and-exposure vein : often retirees and non-primary wage-earners are the people we "couldn't do without." Three, the underlying narrative of a "helping" profession – essentially a social service – can push us to ignore the first two distinctions, while driving ourselves to perform more and expect less.

I suppose the major way we're different is that tech doesn't acknowledge us, treat us with respect, build things for us, or partner with us, unless they need a philanthropic opportunity. Although, when some ingenue autodidact bootstraps himself up to a billion-dollar IPO, there's a good chance he's been educating himself using our free resources. Regardless, I imagine a few of the issues true in GLAMs are also true in tech culture, especially in regards to labour and how it's compensated.

Crowdsourcing

Notecards in a filing drawer: old-fashioned means of recording metadata.

Photo CC-BY Mace Ojala.

Here's an example. One of the latest trends is crowdsourcing: admitting we don't have all the answers, and letting users suggest some metadata for our records. (Not to be confused with crowdfunding.) The biggest example of this is Flickr Commons: the Library of Congress partnered with Yahoo! to publish thousands of images that had somehow ended up in the LOC's collection without identifying information. Flickr users were invited to tag pictures with their own keywords or suggest descriptions using comments.

Many orphaned works (content whose copyright status is unclear) found their way conclusively out into the public domain (or back into copyright) this way. Other popular crowdsourcing models include gamification , transcription of handwritten documents (which can't be done with Optical Character Recognition), or proofreading OCR output on digitized texts. The most-discussed side benefits of such projects include the PR campaign that raises general awareness about the organization, and a "lifting of the curtain" on our descriptive mechanisms.

The problem with crowdsourcing is that it's been conclusively proven not to function in the way we imagine it does: a handful of users end up contributing massive amounts of labour, while the majority of those signed up might do a few tasks and then disappear. Seven users in the "Transcribe Bentham" project contributed to 70% of the manuscripts completed; 10 "power-taggers" did the lion's share of the Flickr Commons' image-identification work. The function of the distributed digital model of volunteerism is that those users won't be compensated, even though many came to regard their accomplishments as full-time jobs .

It's not what you're thinking: many of these contributors already had full-time jobs , likely ones that allowed them time to mess around on the Internet during working hours. Many were subject-matter experts, such as the vintage-machinery hobbyist who created entire datasets of machine-specific terminology in the form of image tags. (By the way, we have a cute name for this: "folksonomy," a user-built taxonomy. Nothing like reducing unpaid labour to a deeply colonial ascription of communalism.) In this way, we don't have precisely the free-labour-for-exposure/project-experience problem the tech industry has ; it's not our internships that are the problem. We've moved past that, treating even our volunteer labour as a series of microtransactions. Nobody's getting even the dubious benefit of job-shadowing, first-hand looks at business practices, or networking. We've completely obfuscated our own means of production. People who submit metadata or transcriptions don't even have a means of seeing how the institution reviews and ingests their work, and often, to see how their work ultimately benefits the public.

All this really says to me is: we could've hired subject experts to consult, and given them a living wage to do so, instead of building platforms to dehumanize labour. It also means our systems rely on privilege , and will undoubtedly contain and promote content with a privileged bias, as Wikipedia does. (And hey, even Wikipedia contributions can sometimes result in paid Wikipedian-in-Residence jobs.)

For example, the Library of Congress's classification and subject headings have long collected books about the genocide of First Nations peoples during the colonization of North America under terms such as "first contact," "discovery and exploration," "race relations," and "government relations." No "subjugation," "cultural genocide," "extermination," "abuse," or even "racism" in sight. Also, the term "homosexuality" redirected people to "sexual perversion" up until the 1970s. Our patrons are disrespected and marginalized in the very organization of our knowledge.

If libraries continue on with their veneer of passive and objective authorities that offer free access to all knowledge, this underlying bias will continue to propagate subconsciously. As in Mechanical Turk , being "slightly more diverse than we used to be" doesn't get us any points, nor does it assure anyone that our labour isn't coming from countries with long-exploited workers.

Labor and Compensation

Rows and rows of books in a library, on vast curving shelves.

Photo CC-BY Samantha Marx.

I also want to draw parallels between the free labour of crowdsourcing and the free labour offered in civic hackathons or open-data contests. Specifically, I'd argue that open-data projects are less ( but still definitely ) abusive to their volunteers, because at least those volunteers have a portfolio object or other deliverable to show for their work. They often work in groups and get to network, whereas heritage crowdsourcers work in isolation.

There's also the potential for converting open-data projects to something monetizable: for example, a Toronto-specific bike-route app can easily be reconfigured for other cities and sold; while the Toronto version stays free under the terms of the civic initiative, freemium options can be added. The volunteers who supply thousands of transcriptions or tags can't usually download their own datasets and convert them into something portfolio-worthy, let alone sellable. Those data are useless without their digital objects, and those digital objects still belong to the museum or library.

Crowdsourcing and microtransactions are two halves of the same coin: they both mark new stages in the continuing devaluation of labour, and they both enable misuse and abuse of people who increasingly find themselves with few alternatives. If we're not offering these people jobs, reference letters, training, performance reviews, a "foot in the door" (cronyist as that is), or even acknowledgement by name, what impetus do they have to contribute? As with Wikipedia, I think the intrinsic motivation for many people to supply us with free labour is one of two things: either they love being right, or they've been convinced by the feel-good rhetoric that they're adding to the net good of the world. Of course, trained librarians, archivists, and museum workers have fallen sway to the conflation of labour and identity , too, but we expect to be paid for it.

As in tech, stereotypes and PR obfuscate labour in cultural heritage. For tech, an entrepreneurial spirit and a tendency to buck traditional thinking; for GLAMs, a passion for public service and opening up access to treasures ancient and modern. Of course, tech celebrates the autodidactic dropout; in GLAMs, you need a masters. Period. Maybe two. And entry-level jobs in GLAMs require one or more years of experience, across the board.

When library and archives students go into massive student debt, they're rarely apprised of the constant shortfall of funding for government-agency positions, nor do they get told how much work is done by volunteers (and, consequently, how much of the job is monitoring and babysitting said volunteers). And they're not trained with enough technological competency to sysadmin anything , let alone build a platform that pulls crowdsourced data into an authoritative record. The costs of commissioning these platforms aren't yet being made public, but I bet paying subject experts for their hourly labour would be cheaper.

Solutions

I've tried my hand at many of the crowdsourcing and gamifying interfaces I'm here to critique. I've never been caught up in the "passion" ascribed to those super-volunteers who deliver huge amounts of work. But I can tally up other ways I contribute to this problem: I volunteer for scholarly tasks such as peer-reviewing, committee work, and travelling on my own dime to present. I did an unpaid internship without receiving class credit. I've put my research behind a paywall. I'm complicit in the established practices of the industry, which sits uneasily between academic and social work: neither of those spheres have ever been profit-generators, and have always used their codified altruism as ways to finagle more labour for less money.

It's easy to suggest that we outlaw crowdsourced volunteer work, and outlaw microtransactions on Fiverr and MTurk, just as the easy answer would be to outlaw Uber and Lyft for divorcing administration from labour standards. Ideally, we'd make it illegal for technology to wade between workers and fair compensation.

But that's not going to happen, so we need alternatives. Just as unpaid internships are being eliminated ad-hoc through corporate pledges, rather than being prohibited region-by-region, we need pledges from cultural-heritage institutions that they will pay for labour where possible, and offer concrete incentives to volunteer or intern otherwise. Budgets may be shrinking, but that's no reason not to compensate people at least through resume and portfolio entries. The best template we've got so far is the Society of American Archivists' volunteer best practices , which includes "adequate training and supervision" provisions, which I interpret to mean outlawing microtransactions entirely. The Citizen Science Alliance , similarly, insists on "concrete outcomes" for its crowdsourcing projects, to " never waste the time of volunteers ." It's vague, but it's something.

We can boycott and publicly shame those organizations that promote these projects as fun ways to volunteer, and lobby them to instead seek out subject experts for more significant collaboration. We've seen a few efforts to shame job-posters for unicorn requirements and pathetic salaries, but they've flagged without productive alternatives to blind rage.

There are plenty more band-aid solutions. Groups like Shatter The Ceiling offer cash to women of colour who take unpaid internships. GLAM-specific internship awards are relatively common , but could: be bigger, focus on diverse applicants who need extra support, and have eligibility requirements that don't exclude people who most need them (such as part-time students, who are often working full-time to put themselves through school). Better yet, we can build a tech platform that enables paid work, or at least meaningful volunteer projects. We need nationalized or non-profit recruiting systems (a digital "volunteer bureau") that matches subject experts with the institutions that need their help. One that doesn't take a cut from every transaction, or reinforce power imbalances, the way Uber does. GLAMs might even find ways to combine projects, so that one person's work can benefit multiple institutions.

GLAMs could use plenty of other help, too: feedback from UX designers on our catalogue interfaces, helpful tools , customization of our vendor platforms, even turning libraries into Tor relays or exits . The open-source community seems to be looking for ways to contribute meaningful volunteer labour to grateful non-profits; this would be a good start.

What's most important is that cultural heritage preserves the ostensible benefits of crowdsourcing – opening our collections and processes up for scrutiny, and admitting the limits of our knowledge – without the exploitative labour practices. Just like in tech, a few more glimpses behind the curtain wouldn't go astray. But it would require deeper cultural shifts, not least in the self-perceptions of GLAM workers: away from overprotective stewards of information, constantly threatened by dwindling budgets and unfamiliar technologies, and towards facilitators, participants in the communities whose histories we hold.


[Jul 06, 2017] Linux tip Bash test and comparison functions

Jul 06, 2017 | www.ibm.com

Demystify test, [, [[, ((, and if-then-else

Ian Shields
Published on February 20, 2007
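The article demystifies these constructs; as a quick refresher, here is a minimal sketch of my own (not taken from the article) showing the same checks written with test/[ , [[ , and (( :

#!/bin/bash
count=7
name="linux"

# POSIX test / [ ... ] -- word splitting applies, so quote variables
if [ "$count" -gt 5 ] && [ "$name" = "linux" ]; then
    echo "test/[ ]: both conditions hold"
fi

# Bash [[ ... ]] -- safer parsing, pattern matching with == is available
if [[ $count -gt 5 && $name == lin* ]]; then
    echo "[[ ]]: both conditions hold"
fi

# Arithmetic evaluation (( ... )) -- C-like expressions, no $ needed
if (( count > 5 )); then
    echo "(( )): count is greater than 5"
else
    echo "(( )): count is 5 or less"
fi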

[Jul 05, 2017] Linux tip: Bash parameters and parameter expansions by Ian Shields

Definitely gifted author!

Do you sometimes wonder how to use parameters with your scripts, and how to pass them to internal functions or other scripts? Do you need to do simple validity tests on parameters or options, or perform simple extraction and replacement operations on the parameter strings? This tip helps you with parameter use and the various parameter expansions available in the bash shell.
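As a small sketch of the kind of parameter handling the tip covers (the script and function names are made up for illustration), positional parameters can be validated and forwarded to a function like this:

#!/bin/bash
# greet.sh -- hypothetical example of passing and validating parameters

greet() {
    # $1 and $2 are the function's own positional parameters
    local name=$1
    local greeting=${2:-Hello}   # default value via parameter expansion
    echo "$greeting, $name!"
}

if [ $# -lt 1 ]; then
    echo "Usage: $0 NAME [GREETING]" >&2
    exit 1
fi

# "$@" forwards all of the script's arguments, preserving word boundaries
greet "$@"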

[Jul 02, 2017] The Details About the CIAs Deal With Amazon by Frank Konkel

Jul 17, 2014 | www.theatlantic.com

The intelligence community is about to get the equivalent of an adrenaline shot to the chest. This summer, a $600 million computing cloud developed by Amazon Web Services for the Central Intelligence Agency over the past year will begin servicing all 17 agencies that make up the intelligence community. If the technology plays out as officials envision, it will usher in a new era of cooperation and coordination, allowing agencies to share information and services much more easily and avoid the kind of intelligence gaps that preceded the Sept. 11, 2001, terrorist attacks.

For the first time, agencies within the intelligence community will be able to order a variety of on-demand computing and analytic services from the CIA and National Security Agency. What's more, they'll only pay for what they use.

The vision was first outlined in the Intelligence Community Information Technology Enterprise plan championed by Director of National Intelligence James Clapper and IC Chief Information Officer Al Tarasiuk almost three years ago. Cloud computing is one of the core components of the strategy to help the IC discover, access and share critical information in an era of seemingly infinite data.

For the risk-averse intelligence community, the decision to go with a commercial cloud vendor is a radical departure from business as usual.

In 2011, while private companies were consolidating data centers in favor of the cloud and some civilian agencies began flirting with cloud variants like email as a service, a sometimes contentious debate among the intelligence community's leadership took place.

... ... ...

The government was spending more money on information technology within the IC than ever before. IT spending reached $8 billion in 2013, according to budget documents leaked by former NSA contractor Edward Snowden. The CIA and other agencies feasibly could have spent billions of dollars standing up their own cloud infrastructure without raising many eyebrows in Congress, but the decision to purchase a single commercial solution came down primarily to two factors.

"What we were really looking at was time to mission and innovation," the former intelligence official said. "The goal was, 'Can we act like a large enterprise in the corporate world and buy the thing that we don't have, can we catch up to the commercial cycle? Anybody can build a data center, but could we purchase something more?

"We decided we needed to buy innovation," the former intelligence official said.

A Groundbreaking Deal

... ... ...

The Amazon-built cloud will operate behind the IC's firewall, or more simply: It's a public cloud built on private premises.

Intelligence agencies will be able to host applications or order a variety of on-demand services like storage, computing and analytics. True to the National Institute of Standards and Technology definition of cloud computing, the IC cloud scales up or down to meet the need.

In that regard, customers will pay only for services they actually use, which is expected to generate massive savings for the IC.

"We see this as a tremendous opportunity to sharpen our focus and to be very efficient," Wolfe told an audience at AWS' annual nonprofit and government symposium in Washington. "We hope to get speed and scale out of the cloud, and a tremendous amount of efficiency in terms of folks traditionally using IT now using it in a cost-recovery way."

... ... ...

For several years there hasn't been even a close challenger to AWS. Gartner's 2014 quadrant shows that AWS captures 83 percent of the cloud computing infrastructure market.

In the combined cloud markets for infrastructure and platform services, hybrid and private clouds-worth a collective $131 billion at the end of 2013-Amazon's revenue grew 67 percent in the first quarter of 2014, according to Gartner.

While the public sector hasn't been as quick to capitalize on cloud computing as the private sector, government spending on cloud technologies is beginning to jump.

Researchers at IDC estimate federal private cloud spending will reach $1.7 billion in 2014, and $7.7 billion by 2017. In other industries, software services are considered the leading cloud technology, but in the government that honor goes to infrastructure services, which IDC expects to reach $5.4 billion in 2017.

In addition to its $600 million deal with the CIA, Amazon Web Services also does business with NASA, the Food and Drug Administration and the Centers for Disease Control and Prevention. Most recently, the Obama Administration tapped AWS to host portions of HealthCare.gov.

[Jun 28, 2017] PBS Pro Tutorial by Krishna Arutwar

What is PBS Pro?

Portable Batch System (PBS) is software used in cluster computing to schedule jobs on multiple nodes. PBS was started as a contract project by NASA. PBS is available in three different versions, as below:

1) Torque: Terascale Open-source Resource and QUEue Manager (Torque) is derived from OpenPBS. It is developed and maintained by Adaptive Computing Enterprises. It is used as a distributed resource manager and performs well when integrated with the Maui cluster scheduler.

2) PBS Professional (PBS Pro): the commercial version of PBS, offered by Altair Engineering.

3) OpenPBS: the open source version released in 1998, developed by NASA. It is no longer actively developed.

In this article we are going to concentrate on a tutorial of PBS Pro; it is similar to Torque to some extent.

PBS contains three basic units: the server, the MoM (execution host), and the scheduler.

  1. Server: It is the heart of PBS, with an executable named "pbs_server". It uses the IP network to communicate with the MoMs. The PBS server creates batch jobs and modifies jobs requested from the different MoMs. It keeps track of all resources available and assigned in the PBS complex across the different MoMs. It also monitors the PBS license for jobs; if your license expires, it will throw an error.
  2. Scheduler: The PBS scheduler uses various algorithms to decide when a job should be executed, and on which node or vnode, using the details of available resources provided by the server. Its executable is "pbs_sched".
  3. MoM: MoM is the mother of all executing jobs, with the executable "pbs_mom". When a MoM gets a job from the server, it actually executes that job on the host. Each node must have a MoM running in order to participate in execution.

Installation and Setting up of environment (cluster with multiple nodes)

Extract the compressed PBS Pro software and go to the path of the extracted folder; it contains an "INSTALL" file. Make that file executable; you may use a command like "chmod +x ./INSTALL". As shown in the image below, run this executable. It will ask for the "execution directory", where you want to store the executables (such as qsub, pbsnodes, qdel, etc.) used for different PBS operations, and the "home directory", which contains the different configuration files. Keep both as default for simplicity. Three kinds of installation are available, as shown in the figure:

1) Server node: the PBS server, scheduler, MoM, and commands are installed on this node. The PBS server keeps track of all execution MoMs present in the cluster and schedules jobs on these execution nodes. Since the MoM and commands are also installed on the server node, it can be used to submit and execute jobs as well.

2) Execution node: this type installs the MoM and commands. These nodes are added as available execution nodes in a cluster. They are also allowed to submit jobs to the server, with specific permission granted by the server, as we are going to see below. They are not involved in scheduling. This kind of installation asks for the PBS server that is used to submit jobs, get the status of jobs, and so on.

3) Client node: these nodes are only allowed to submit PBS jobs to the server, with specific permission granted by the server, and to see the status of jobs. They are not involved in execution or scheduling.

Creating vnode in PBS Pro:

We can create multiple vnodes within a single node, each containing some portion of the node's resources. We can execute jobs on these vnodes with the specified allocated resources. We can create vnodes using the qmgr command, which is the command-line interface to the PBS server. We can use the command given below to create vnodes with qmgr:

Qmgr:
create node Vnode1,Vnode2 resources_available.ncpus=8, resources_available.mem=10gb,
resources_available.ngpus=1, sharing=default_excl

The command above will create two vnodes named Vnode1 and Vnode2, each with 8 CPU cores, 10gb of memory, and 1 GPU, with the sharing mode default_excl, which means each vnode can execute only one job at a time, regardless of how many resources are still free. The sharing mode can instead be default_shared, which means any number of jobs can run on that vnode until all its resources are busy. All the attributes that can be used with vnode creation are described in the PBS Pro reference guide.

You can also create a file in the "/var/spool/PBS/mom_priv/config.d/" folder, with any name you want (I prefer hostname-vnode), containing content like the sample given below. PBS reads all files in this folder, even temporary files ending with (~), and later files replace the configuration for the same vnode, so delete unnecessary files to get the proper vnode configuration.

e.g.

$configversion 2
hostname:resources_available.ncpus=0
hostname:resources_available.mem=0
hostname:resources_available.ngpus=0
hostname[0]:resources_available.ncpus=8
hostname[0]:resources_available.mem=16gb
hostname[0]:resources_available.ngpus=1
hostname[0]:sharing=default_excl
hostname[1]:resources_available.ncpus=8
hostname[1]:resources_available.mem=16gb
hostname[1]:resources_available.ngpus=1
hostname[1]:sharing=default_excl
hostname[2]:resources_available.ncpus=8
hostname[2]:resources_available.mem=16gb
hostname[2]:resources_available.ngpus=1
hostname[2]:sharing=default_excl
hostname[3]:resources_available.ncpus=8
hostname[3]:resources_available.mem=16gb
hostname[3]:resources_available.ngpus=1
hostname[3]:sharing=default_excl
Here in this example we assigned 0 to the resources available on the default node, because by default PBS detects and allocates all available resources to the default node, with the sharing attribute set to default_shared.

This causes a problem: all jobs will by default get scheduled on that default vnode, because its sharing type is default_shared. If you want to schedule jobs on your customized vnodes, you should set the resources available on the default vnode to 0. Every time you restart the PBS server ...

Getting PBS status:

Get the status of jobs:

qstat will give details about jobs, their states, etc.

Useful options:

To print details about all jobs that are running or in the hold state: qstat -a

To print details about subjobs in a job array that are running or in the hold state: qstat -ta

Get the status of PBS nodes and vnodes:

The "pbsnodes -a" command will provide a list of all nodes present in the PBS complex, with their available resources, assigned resources, status, etc.

To get details of all nodes and vnodes you created, use the "pbsnodes -av" command.

You can also specify a node or vnode name to get detailed information about that specific node or vnode.

e.g.

pbsnodes wolverine (here wolverine is the hostname of a node in the PBS complex, mapped to its IP address in the /etc/hosts file)

Job submission (qsub):

PBS MoM hosts submit jobs to the PBS server. The server maintains queues of jobs; by default all jobs are submitted to the default queue named "workq". You may create multiple queues using the "qmgr" command, the administrator interface mainly used to create, delete, and modify queues and vnodes. The PBS server decides which job is to be scheduled on which node or vnode based on the scheduling policy and the privileges set by the user. To schedule jobs, the server continuously pings all MoMs in the PBS complex to get details of the resources available and assigned. PBS assigns a unique job identifier, called the JobID, to each and every job. For job submission PBS uses the "qsub" command, with the syntax shown below:

qsub script

Here script may be a shell (sh, csh, tcsh, ksh, bash) script; PBS by default uses /bin/sh. You may refer to the simple script given below:

#!/bin/sh
echo "This is PBS job"

When PBS completes execution of a job, it stores the job's errors in a file named JobName.e{JobID}, e.g. Job1.e1492.

The output goes to a file named JobName.o{JobID}, e.g. Job1.o1492.

By default it stores these files in the current working directory (which can be seen with the pwd command). You can change this location by giving a path with the -o option.

You may specify the job name with the -N option while submitting the job:

qsub -N firstJob ./test.sh

If you don't specify a job name, it will use the script name in place of JobName. For example, qsub ./test.sh will store the results in the files test.sh.e1493 and test.sh.o1493 in the current working directory.

OR

qsub -N firstJob -o /home/user1/ ./test.sh

This command names the job firstJob and places the output file under the /home/user1/ directory.

If a submitted job terminates abnormally (errors in a job are not abnormal; those errors get stored in the JobName.e{JobID} file), its error and output files are stored in the "/var/spool/PBS/undelivered/" folder.
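Instead of passing every option on the qsub command line, the common options can also be embedded in the script itself as #PBS directives. A minimal sketch (the job name and resource values are illustrative, not taken from this tutorial):

#!/bin/sh
#PBS -N firstJob
#PBS -l select=1:ncpus=4:mem=8gb
#PBS -l walltime=01:00:00
#PBS -q workq
#PBS -j oe

# PBS starts the job in the user's home directory; move to the directory
# the job was submitted from
cd "$PBS_O_WORKDIR"

echo "This is PBS job running on $(hostname)"

The -j oe directive merges the error stream into the output file, so only one result file is produced; the script is then submitted with a plain qsub ./job.sh.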

Useful Options:

Select resources:

qsub -l select="chunks":ncpus=3:ngpus=1:mem=2gb script 

e.g.

This Job will selects 2 copies with 3 cpus, 1 gpu and 2gb memory which mean it will select 6 cpus, 2 gpus and 4 gb ram.

qsub -l nodes=megamind:ncpus=3 /home/titan/PBS/input/in.sh

This job will select the single node specified by hostname.

To select multiple nodes, you may use the command given below:

qsub -l nodes=megamind+titan:ncpus=3 /home/titan/PBS/input/in.sh
Submit multiple jobs with the same script (JobArray):

qsub -J 1-20 script

Submit dependent jobs:

In some cases you may require a job that should run only after the successful or unsuccessful completion of some specified jobs; for that, PBS provides options such as the following:

qsub -W depend=afterok:316.megamind /home/titan/PBS/input/in.sh

This job will start only after the successful completion of the job with job ID "316.megamind". Besides afterok, PBS has other options such as beforeok, beforenotok, and afternotok. You may find all these details in the man page of qsub.
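Because qsub prints the identifier of the job it just submitted, a dependency chain can be scripted without hard-coding job IDs; a small sketch (the script names are made up):

# Submit the first job and capture its JobID from qsub's output
PRE_ID=$(qsub ./preprocess.sh)

# Submit the second job so it starts only if the first one finishes successfully
qsub -W depend=afterok:$PRE_ID ./analyze.sh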

Submit a job with priority:

There are two ways in which we can set the priority of jobs that are going to execute.

1) Using a single queue with different priorities for different jobs:

To change the sequence of jobs queued in an execution queue, open the "$PBS_HOME/sched_priv/sched_config" file; normally $PBS_HOME is the "/var/spool/PBS/" folder. Open this file and uncomment the line below if it is present, otherwise add it:

job_sort_key : "job_priority HIGH"

After saving this file, you will need to restart the pbs_sched daemon on the head node; you may use the command below:

service pbs restart

After completing this task, submit jobs with the -p option to specify the priority of a job within the queue. This value may range from -1024 to 1023, where -1024 is the lowest priority and 1023 is the highest priority in the queue.

e.g.

qsub -p 100 ./X.sh

qsub -p 101 ./Y.sh


qsub -p 102 ./Z.sh 
In this case PBS will execute the jobs as explained in the diagram below (figure: multiple jobs in one queue).

2) Using different queues with specified priorities: we are going to discuss this point in the PBS Queue section below.

(figure: jobs distributed across three queues with different priorities)

In this example all jobs in queue 2 will complete first, then queue 3, then queue 1, since the priority of queue 2 > queue 3 > queue 1. Because of this, the job execution flow is as shown below:

J4 => J5 => J6 => J7 => J8 => J9 => J1 => J2 => J3

PBS Queue:

PBS Pro can manage multiple queues as per the users' requirements. By default every job is queued in "workq" for execution. There are two types of queues available: execution and routing queues. Jobs in an execution queue are used by the PBS server for execution. Jobs in a routing queue cannot be executed; they can be redirected to an execution queue or another routing queue using the qmove command. By default, the queue "workq" is an execution queue. The sequence of jobs in a queue may be changed by using the priority defined at job submission, as specified above in the job submission section.

Useful qmgr commands:

First type qmgr, which starts the manager interface of PBS Pro.

To create queue:


Qmgr:
 create queue test2

To set type of queue you created:


Qmgr:
 set queue test2 queue_type=execution

OR


Qmgr:
 set queue test2 queue_type=route

To enable queue:


Qmgr:
 set queue test2 enabled=True

To set priority of queue:


Qmgr:
 set queue test2 priority=50

Jobs in a queue with higher priority get preference. After completion of all jobs in the higher-priority queue, jobs in the lower-priority queue are scheduled. There is a high probability of job starvation in queues with lower priority.

To start queue:


Qmgr:
 set queue test2 started = True

To activate all queues (present on a particular node):


Qmgr:
 active queue @default

To restrict a queue to specified users: you need to set the acl_user_enable attribute to true, which tells PBS to only allow users present in the acl_users list to submit jobs.


 Qmgr:
 set queue test2 acl_user_enable=True

To set the users permitted to submit jobs to a queue:


Qmgr:
 set queue test2 acl_users="user1@
..
,user2@
..
,user3@
..
"

(In place of "..", specify the hostname of a compute node in the PBS complex. A user name without a hostname will allow users with that name to submit jobs from all nodes permitted to submit jobs in the PBS complex.)

To delete queues we created:


Qmgr:
 delete queue test2

To see the status of all queues:

qstat -Q

You may specify a specific queue name: qstat -Q test2

To see full details of all queues: qstat -Q -f

You may specify a specific queue name: qstat -Q -f test2

[Jun 18, 2017] An introduction to parameter expansion in Bash by James Pannacciulli

Jun 18, 2017 | opensource.com
About conditional, substring, and substitution parameter expansion operators

Conditional parameter expansion

Conditional parameter expansion allows branching on whether the parameter is unset, empty, or has content. Based on these conditions, the parameter can be expanded to its value, a default value, or an alternate value; throw a customizable error; or reassign the parameter to a default value. The following table shows the conditional parameter expansions; each row shows a parameter expansion using an operator to potentially modify the expansion, with the columns showing the result of that expansion given the parameter's status as indicated in the column headers. Operators with the ':' prefix treat parameters with empty values as if they were unset.

parameter expansion   unset var   var=""      var="gnu"
${var-default}        default     -           gnu
${var:-default}       default     default     gnu
${var+alternate}      -           alternate   alternate
${var:+alternate}     -           -           alternate
${var?error}          error       -           gnu
${var:?error}         error       error       gnu

The = and := operators in the table function identically to - and :- , respectively, except that the = variants rebind the variable to the result of the expansion.

As an example, let's try opening a user's editor on a file specified by the OUT_FILE variable. If either the EDITOR environment variable or our OUT_FILE variable is not specified, we will have a problem. Using a conditional expansion, we can ensure that when the EDITOR variable is expanded, we get the specified value or at least a sane default:

$ echo ${EDITOR}
/usr/bin/vi
$ echo ${EDITOR:-$(which nano)}
/usr/bin/vi
$ unset EDITOR
$ echo ${EDITOR:-$(which nano)}
/usr/bin/nano

Building on the above, we can run the editor command and abort with a helpful error at runtime if there's no filename specified:

$ ${EDITOR:-$(which nano)} ${OUT_FILE:?Missing filename}
bash: OUT_FILE: Missing filename
Substring parameter expansion

Parameters can be expanded to just part of their contents, either by offset or by removing content matching a pattern. When specifying a substring offset, a length may optionally be specified. If running Bash version 4.2 or greater, negative numbers may be used as offsets from the end of the string. Note the parentheses used around the negative offset, which ensure that Bash does not parse the expansion as having the conditional default expansion operator from above:

$ location="CA 90095"
$ echo "Zip Code: ${location:3}"
Zip Code: 90095
$ echo "Zip Code: ${location:(-5)}"
Zip Code: 90095
$ echo "State: ${location:0:2}"
State: CA

Another way to take a substring is to remove characters from the string matching a pattern, either from the left edge with the # and ## operators or from the right edge with the % and %% operators. A useful mnemonic is that # appears left of a comment and % appears right of a number. When the operator is doubled, it matches greedily, as opposed to the single version, which removes the most minimal set of characters matching the pattern.

var="open source"
parameter expansion offset of 5
length of 4
${var:offset} source
${var:offset:length} sour
pattern of *o?
${var#pattern} en source
${var##pattern} rce
pattern of ?e*
${var%pattern} open sour
${var%%pattern} o

The pattern-matching used is the same as with filename globbing: * matches zero or more of any character, ? matches exactly one of any character, [...] brackets introduce a character class match against a single character, supporting negation ( ^ ), as well as the POSIX character classes, e.g. [[:alnum:]] or [[:space:]]. By excising characters from our string in this manner, we can take a substring without first knowing the offset of the data we need:

$ echo $PATH
/usr/local/bin:/usr/bin:/bin
$ echo "Lowest priority in PATH: ${PATH##*:}"
Lowest priority in PATH: /bin
$ echo "Everything except lowest priority: ${PATH%:*}"
Everything except lowest priority: /usr/local/bin:/usr/bin
$ echo "Highest priority in PATH: ${PATH%%:*}"
Highest priority in PATH: /usr/local/bin
Substitution in parameter expansion

The same types of patterns are used for substitution in parameter expansion. Substitution is introduced with the / or // operators, followed by two arguments separated by another / representing the pattern and the string to substitute. The pattern matching is always greedy, so the doubled version of the operator, in this case, causes all matches of the pattern to be replaced in the variable's expansion, while the singleton version replaces only the leftmost.

var="free and open"
parameter expansion pattern of
string of _
${var/pattern/string} free_and open
${var//pattern/string} free_and_open

The wealth of parameter expansion modifiers transforms Bash variables and other parameters into powerful tools beyond simple value stores. At the very least, it is important to understand how parameter expansion works when reading Bash scripts, but I suspect that not unlike myself, many of you will enjoy the conciseness and expressiveness that these expansion modifiers bring to your scripts as well as your interactive sessions.
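As a small worked example of how these operators combine in practice (my own sketch, not from the article), a path can be split into directory, name, and extensions using nothing but parameter expansion:

file="/home/user/archive/report.final.tar.gz"

echo "Directory:      ${file%/*}"      # /home/user/archive
echo "Filename:       ${file##*/}"     # report.final.tar.gz

name=${file##*/}
echo "Base name:      ${name%%.*}"     # report
echo "Full extension: ${name#*.}"      # final.tar.gz
echo "Last extension: ${name##*.}"     # gz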

[Jun 17, 2017] How containers and DevOps transformed Duke Universitys IT department by Chris Collins

Jun 16, 2017 | opensource.com

...At Duke University's Office of Information Technology (OIT), we began looking at containers as a way to achieve higher density from the virtualized infrastructure used to host websites. Virtual machine (VM) sprawl had started to become a problem. We favored separating each client's website onto its own VM for both segregation and organization, but steady growth meant we were managing more servers than we could handle. As we looked for ways to lower management overhead and make better use of resources, Docker hit the news, and we began to experiment with containerization for our web applications.

For us, the initial investigation of containers mirrors a shift toward a DevOps culture.

Where we started

When we first looked into container technology, OIT was highly process driven and composed of monolithic applications and a monolithic organizational structure. Some early forays into automation were beginning to lead the shift toward a new cultural organization inside the department, but even so, the vast majority of our infrastructure consisted of "pet" servers (to use the pets vs. cattle analogy). Developers created their applications on staging servers designed to match production hosting environments and deployed by migrating code from the former to the latter. Operations still approached hosting as it always had: creating dedicated VMs for individual services and filing manual tickets for monitoring and backups. A service's lifecycle was marked by change requests, review boards, standard maintenance windows, and lots of personal attention.

A shift in culture

As we began to embrace containers, some of these longstanding attitudes toward development and hosting began to shift a bit. Two of the larger container success stories came from our investigation into cloud infrastructure. The first project was created to host hundreds of R-Studio containers for student classes on Microsoft Azure hosts, breaking from our existing model of individually managed servers and moving toward "cattle"-style infrastructure designed for hosting containerized applications.

The other was a rapid containerization and deployment of the Duke website to Amazon Web Services while in the midst of a denial-of-service attack, dynamically creating infrastructure and rapidly deploying services.

The success of these two wildly nonstandard projects helped to legitimize containers within the department, and more time and effort was put into looking further into their benefits and those of on-demand and disposable cloud infrastructure, both on-premises and through public cloud providers.

It became apparent early on that containers lived within a different timescale from traditional infrastructure. We started to notice cases where short-lived, single-purpose services were created, deployed, lived their entire lifecycle, and were decommissioned before we completed the tickets created to enter them into inventory, monitoring, or backups. Our policies and procedures were not able to keep up with the timescales that accompanied container development and deployment.

In addition, humans couldn't keep up with the automation that went into creating and managing the containers on our hosts. In response, we began to develop more automation to accomplish usually human-gated processes. For example, the dynamic migration of containers from one host to another required a change in our approach to monitoring. It is no longer enough to tie host and service monitoring together or to submit a ticket manually, as containers are automatically destroyed and recreated on other hosts in response to events.

Some of this was in the works for us already-automation and container adoption seem to parallel one another. At some point, they become inextricably intertwined.

As containers continued to grow in popularity and OIT began to develop tools for container orchestration, we tried to further reinforce the "cattle not pets" approach to infrastructure. We limited login of the hosts to operations staff only (breaking with tradition) and gave all hosts destined for container hosting a generic name. Similar to being coached to avoid naming a stray animal in an effort to prevent attachment, servers with generic names became literally forgettable. Management of the infrastructure itself became the responsibility of automation, not humans, and humans focused their efforts on the services inside the containers.

Containers also helped to usher continuous integration into our everyday workflows. OIT's Identity Management team members were early adopters and began to build Kerberos key distribution centers (KDCs) inside containers using Jenkins, building regularly to incorporate patches and test the resulting images. This allowed the team to catch breaking builds before they were pushed out onto production servers. Prior to that, the complexity of the environment and the widespread impact of an outage made patching the systems a difficult task.

Embracing continuous deployment

Since that initial use case, we've also embraced continuous deployment. There is a solid pattern for every project that gets involved with our continuous integration/continuous deployment (CI/CD) system. Many teams initially have a lot of hesitation about automatically deploying when tests pass, and they tend to build checkpoints requiring human intervention. However, as they become more comfortable with the system and learn how to write good tests, they almost always remove these checkpoints.

Within our container orchestration automation, we use Jenkins to patch base images on a regular basis and rebuild all the child images when the parent changes. We made the decision early that the images could be rebuilt and redeployed at any time by automated processes. This meant that any code included in the branch of the git repository used in the build job would be included in the image and potentially deployed without any humans involved. While some developers initially were uncomfortable with this, it ultimately led to better development practices: Developers merge into the production branch only code that is truly ready to be deployed.

This practice facilitated rebuilding container images immediately when code is merged into the production branch and allows us to automatically deploy the new image once it's built. At this point, almost every project using the automatic rebuild has also enabled automated deployment.

Looking ahead

Today the adoption of both containers and DevOps is still a work in progress for OIT.

Internally we still have to fight the entropy of history even as we adopt new tools and culture. Our biggest challenge will be convincing people to break away from the repetitive break-fix mentality that currently dominates their jobs and to focus more on automation. While time is always short, and the first step always daunting, in the long run adopting automation for day-to-day tasks will free them to work on more interesting and complex projects.

Thankfully, people within the organization are starting to embrace working in organized or ad hoc groups of cross-discipline members and developing automation together. This will definitely become necessary as we embrace automated orchestration and complex systems. A group of talented individuals who possess complementary skills will be required to fully manage the new environments.

[Jun 09, 2017] Amazon's S3 web-based storage service is experiencing widespread issues on Feb 28 2017

Jun 09, 2017 | techcrunch.com

Amazon's S3 web-based storage service is experiencing widespread issues, leading to service that's either partially or fully broken on websites, apps and devices upon which it relies. The AWS offering provides hosting for images for a lot of sites, and also hosts entire websites, and app backends including Nest.

The S3 outage is due to "high error rates with S3 in US-EAST-1," according to Amazon's AWS service health dashboard , which is where the company also says it's working on "remediating the issue," without initially revealing any further details.

Affected websites and services include Quora, newsletter provider Sailthru, Business Insider, Giphy, image hosting at a number of publisher websites, filesharing in Slack, and many more. Connected lightbulbs, thermostats and other IoT hardware is also being impacted, with many unable to control these devices as a result of the outage.

Amazon S3 is used by around 148,213 websites, and 121,761 unique domains, according to data tracked by SimilarTech , and its popularity as a content host concentrates specifically in the U.S. It's used by 0.8 percent of the top 1 million websites, which is actually quite a bit smaller than CloudFlare, which is used by 6.2 percent of the top 1 million websites globally – and yet it's still having this much of an effect.

Amazingly, even the status indicators on the AWS service status page rely on S3 for storage of its health marker graphics, hence why the site is still showing all services green despite obvious evidence to the contrary. Update (11:40 AM PT): AWS has fixed the issues with its own dashboard at least – it'll now accurately reflect service status as it continues to attempt to fix the problem .

[May 29, 2017] Release of Wine 2.8

May 29, 2017 | news.softpedia.com
What's new in this release (see below for details):
- Direct3D command stream runs asynchronously.
- Better serial and parallel ports autodetection.
- Still more fixes for high DPI settings.
- System tray notifications on macOS.
- Various bug fixes.

... alongside improved support for Warhammer 40,000: Dawn of War III, which will be ported to Linux and SteamOS by Feral Interactive on June 8, the follow-up development release, Wine 2.9, introduces support for tessellation shaders in Direct3D, binary mode support in WebServices, RegEdit UI improvements, and clipboard change detection through Xfixes.

...

The Wine 2.9 source tarball can be downloaded right now from our website if you fancy compiling it on your favorite GNU/Linux distribution, but please try to keep in mind that this is a pre-release version not suitable for production use. We recommend installing the stable Wine branch if you want to have a reliable and bug-free experience.

Wine 2.9 will also be installable from the software repos of your operating system in the coming days.

[May 27, 2017] An introduction to EXT4 filesystem

Notable quotes:
"... In EXT4, data allocation was changed from fixed blocks to extents. ..."
"... EXT4 reduces fragmentation by scattering newly created files across the disk so that they are not bunched up in one location at the beginning of the disk, ..."
"... Aside from the actual location of the data on the disk, EXT4 uses functional strategies, such as delayed allocation, to allow the filesystem to collect all the data being written to the disk before allocating space to it. This can improve the likelihood that the data space will be contiguous. ..."
May 27, 2017 | opensource.com
EXT4

The EXT4 filesystem primarily improves performance, reliability, and capacity. To improve reliability, metadata and journal checksums were added. To meet various mission-critical requirements, the filesystem timestamps were improved with the addition of intervals down to nanoseconds. The addition of two high-order bits in the timestamp field defers the Year 2038 problem until 2446, for EXT4 filesystems at least.

In EXT4, data allocation was changed from fixed blocks to extents. An extent is described by its starting and ending place on the hard drive. This makes it possible to describe very long, physically contiguous files in a single inode pointer entry, which can significantly reduce the number of pointers required to describe the location of all the data in larger files. Other allocation strategies have been implemented in EXT4 to further reduce fragmentation.

EXT4 reduces fragmentation by scattering newly created files across the disk so that they are not bunched up in one location at the beginning of the disk, as many early PC filesystems did. The file-allocation algorithms attempt to spread the files as evenly as possible among the cylinder groups and, when fragmentation is necessary, to keep the discontinuous file extents as close as possible to others in the same file to minimize head seek and rotational latency as much as possible. Additional strategies are used to pre-allocate extra disk space when a new file is created or when an existing file is extended. This helps to ensure that extending the file will not automatically result in its becoming fragmented. New files are never allocated immediately after existing files, which also prevents fragmentation of the existing files.

Aside from the actual location of the data on the disk, EXT4 uses functional strategies, such as delayed allocation, to allow the filesystem to collect all the data being written to the disk before allocating space to it. This can improve the likelihood that the data space will be contiguous.
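
The extent layout described above can be inspected directly with tools shipped in e2fsprogs. A quick sketch, assuming an EXT4 filesystem and using /var/log/syslog and /home purely as example paths:

```bash
# Show how a file is laid out in extents on an EXT4 filesystem.
# A well-allocated file reports few extents, each covering a long run
# of physically contiguous blocks.
sudo filefrag -v /var/log/syslog

# Report (without changing anything) how fragmented a mounted
# EXT4 filesystem currently is.
sudo e4defrag -c /home
```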

Older EXT filesystems, such as EXT2 and EXT3, can be mounted as EXT4 to make some minor performance gains. Unfortunately, this requires turning off some of the important new features of EXT4, so I recommend against this.

EXT4 has been the default filesystem for Fedora since Fedora 14.

An EXT3 filesystem can be upgraded to EXT4 using the procedure described in the Fedora documentation; however, its performance will still suffer due to residual EXT3 metadata structures.

The best method for upgrading to EXT4 from EXT3 is to back up all the data on the target filesystem partition, use the mkfs command to write an empty EXT4 filesystem to the partition, and then restore all the data from the backup.
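
A minimal sketch of that backup, reformat, and restore path, assuming a hypothetical data partition /dev/sdb1 mounted at /data and a backup location on a different filesystem; verify the backup before running mkfs:

```bash
# Back up, reformat as EXT4, and restore. Device and paths are placeholders.
BACKUP=/srv/backup/data.tar.gz   # must live on a different filesystem
DEV=/dev/sdb1
MNT=/data

# 1. Back up everything on the target filesystem (preserving permissions).
tar -czpf "$BACKUP" -C "$MNT" .

# 2. Unmount and write a fresh EXT4 filesystem over the partition.
umount "$MNT"
mkfs.ext4 "$DEV"

# 3. Remount (update /etc/fstab to use ext4) and restore the data.
mount -t ext4 "$DEV" "$MNT"
tar -xzpf "$BACKUP" -C "$MNT"
```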

[May 20, 2017] Outsourcing higher wage work is more profitable than outsourcing lower wage work

Notable quotes:
"... Baker correctly diagnoses the impact of boomers aging, but there is another effect - "knowledge work" and "high skill manufacturing" is more easily outsourced/offshored than work requiring a physical presence. ..."
"... That's what happened with American IT. ..."
May 20, 2017 | economistsview.typepad.com
cm, May 20, 2017 at 04:51 PM
Baker correctly diagnoses the impact of boomers aging, but there is another effect - "knowledge work" and "high skill manufacturing" is more easily outsourced/offshored than work requiring a physical presence.

Also, outsourcing "higher wage" work is more profitable than outsourcing "lower wage" work - with lower wages, labor cost as a proportion of total cost also tends to be lower (though not always).

And outsourcing and geographically relocating work creates other overhead costs that are not much related to the wages of the local work replaced - and those overheads are larger in relation to lower wages than in relation to higher wages.

libezkova -> cm... May 20, 2017 at 08:34 PM

"Also outsourcing "higher wage" work is more profitable than outsourcing "lower wage" work"

That's what happened with American IT.

[May 19, 2017] IT ops doesn't matter. Really by Dale Vile

Notable quotes:
"... All of the hype around software and developers, which tends to significantly skew even the DevOps discussion, runs the risk of creating the perception that IT ops is just a necessary evil. Indeed, some go so far as to make the case for a 'NoOps' world in which the public cloud magically takes care of everything downstream once developers have 'innovated' and 'created'. ..."
"... This kind of view comes about from people looking through the wrong end of the telescope. Turn the thing around and look up close at what goes on in the world of ops, and you get a much better sense of perspective. Teams operating in this space are not just there to deploy the next custom software release and make sure it runs quickly and robustly - in fact that's often a relatively small part of what they do. ..."
"... And coming back to operations, you are sadly mistaken if you think that the public cloud makes all challenges and requirements go away. If anything, the piecemeal adoption of cloud services has made things more complex and unpredictable from an integration and management perspective. ..."
"... There are all kinds of valid reasons to keep an application sitting on your own infrastructure anyway - regulation, performance, proximity to dependent solutions and data, etc. Then let's not forget the simple fact that running things in the cloud is often more expensive over the longer term. ..."
Dec 19, 2016 | theregister.co.uk

Get real – it's not all about developers and DevOps

Listen to some DevOps evangelists talk, and you would get the impression that IT operations teams exist only to serve the needs of developers. Don't get me wrong, software development is a good competence to have in-house if your organisation depends on custom applications and services to differentiate its business.

As an ex-developer, I appreciate the value of being able to deliver something tailored to a specific need, even if it does pain me to see the shortcuts too often taken nowadays due to ignorance of some of the old disciplines, or an obsession with time-to-market above all else.

But before this degenerates into an 'old guy' rant about 'youngsters today', let's get back to the point that I really want to make.

All of the hype around software and developers, which tends to significantly skew even the DevOps discussion, runs the risk of creating the perception that IT ops is just a necessary evil. Indeed, some go so far as to make the case for a 'NoOps' world in which the public cloud magically takes care of everything downstream once developers have 'innovated' and 'created'.

This kind of view comes about from people looking through the wrong end of the telescope. Turn the thing around and look up close at what goes on in the world of ops, and you get a much better sense of perspective. Teams operating in this space are not just there to deploy the next custom software release and make sure it runs quickly and robustly - in fact that's often a relatively small part of what they do.

This becomes obvious when you recognize how much stuff runs in an Enterprise IT landscape - software packages enabling core business processes, messaging, collaboration and workflow platforms keeping information flowing, analytics environments generating critical business insights, and desktop and mobile estates serving end user access needs - to name but a few.

Vital operations

There's then everything required to deal with security, data protection, compliance and other aspects of risk. Apart from the odd bit of integration and tailoring work - the need for which is diminishing with modern 'soft-coded', connector-driven solutions - very little of all this has anything to do with development and developers.

A big part of the rationale for modernising your application landscape and migrating to the latest flexible and open software packages and platforms is to eradicate the need for coding wherever you can. Code is expensive to build and maintain, and the same can often be achieved today through software switches, policy-driven workflow, drag-and-drop interface design, and so on. Sensible IT teams only code when they absolutely have to.

And coming back to operations, you are sadly mistaken if you think that the public cloud makes all challenges and requirements go away. If anything, the piecemeal adoption of cloud services has made things more complex and unpredictable from an integration and management perspective.

There are all kinds of valid reasons to keep an application sitting on your own infrastructure anyway - regulation, performance, proximity to dependent solutions and data, etc. Then let's not forget the simple fact that running things in the cloud is often more expensive over the longer term.

Against this background, an 'appropriate' level of custom development and the selective use of cloud services will be the way forward for most organisations, all underpinned by a well-run data centre environment acting as the hub for hybrid delivery. This is the approach that tends to be taken by the most successful enterprise IT teams, and the element that makes particularly high achievers stand out is agile and effective IT operations.

This isn't just to support any DevOps agenda you might have; it is demonstrably a key enabler across the board. Of course, if you work in operations, you will already know all this intuitively. But if you want some ammunition to spell it out to others who need enlightenment, take a look at our research report entitled IT Ops and a Digital Business Enabler; more than just keeping the lights on. This is based on input from 400 senior European IT professionals. ®

Paul Smith
I think this is one fad that has run its course. If nothing else, the one thing that cloud has brought to the software world is the separation of software from the environment it runs in, and since the Ops side of DevOps is all about the integration of the platform and software, what you end up with in a cloudy world is a lot of people looking for a new job.
Anonymous Coward

For decades developers have been ignored by infrastructure vendors because the decision makers buying infrastructure sit in the infrastructure teams. Now with the cloud etc vendors realize they will lose supporters within these teams.

So instead - infrastructure vendors target developers to become their next fanboys.

E.g. Dear developer, you won't need to speak to your infrastructure admins anymore to setup a development environment. Now you can automate, orchestrate the provisioning of your containerized development environment at the push of a button. Blah blah blah, but you have to buy our storage.

I remember the days when every DBA wanted RAID10 just because thats what the whitepaper recommended. By that time storage technology had long moved on, but the DBA still talked about Full Stripe Writes.

Now with DevOps you'll have Developers influencing infrastructure decisions, because they just learned about snapshots. And yes - it has to be all flash - and designed from the ground up by millennials that eat avocado.

John 104
Re: DevOps was never supposed to replace Operations

Yes, DevOps isn't about replacing Ops. But try telling that to the powers that be. It is sold and seen as a cost cutting measure.

As for devs learning Ops and vice versa, there are very few on both sides who really understand what it takes to do the other's job. I have a very high regard for Devs, but when it comes to infra, they are, as a whole, very incompetent. Just like I'm incompetent in Dev. Can't have one without the other. I feel that in time, the pendulum will swing away from cloud as execs and accountants realize how it isn't really saving any money.

The real question is: Will there be any qualified operations engineers available, or will they all have retired or found work elsewhere? It isn't easy to be an ops engineer; it takes a lot of experience to get there, and qualified candidates are hard to come by. Let's face it, in today's world, it's a dying breed.

John 104
Very Nice

Nice of you to point out what us in Ops have known all along. I'm afraid it will fall on deaf ears, though. Until the executives who constantly fall for the new shiny are made to actually examine business needs and processes and make business decisions based on said.

Our laughable move to cloud here involved migrating off of on prem Exchange to O365. The idea was to free up our operations team to allow us to do more in house projects. Funny thing is, it takes more management of the service than we ever did on premises. True, we aren't maintaining the Exchange infra, but now we have SQL servers, DCs, ADFS, etc, to maintain in the MS cloud to allow authentication just to use the product. And because mail and messaging is business critical, we have to have geographically disparate instances of both. And the cost isn't pretty. Yay cloud.

[May 17, 2017] Talk of tech innovation is bullsh*t. Shut up and get the work done – says Linus Torvalds

May 17, 2017 | theregister.co.uk

Linus Torvalds believes the technology industry's celebration of innovation is smug, self-congratulatory, and self-serving. The term of art he used was more blunt: "The innovation the industry talks about so much is bullshit," he said. "Anybody can innovate. Don't do this big 'think different'... screw that. It's meaningless."

In a deferential interview at the Open Source Leadership Summit in California on Wednesday, conducted by Jim Zemlin, executive director of the Linux Foundation, Torvalds discussed how he has managed the development of the Linux kernel and his attitude toward work.

"All that hype is not where the real work is," said Torvalds. "The real work is in the details."

Torvalds said he subscribes to the view that successful projects are 99 per cent perspiration, and one per cent innovation.

As the creator and benevolent dictator of the open-source Linux kernel , not to mention the inventor of the Git distributed version control system, Torvalds has demonstrated that his approach produces results. It's difficult to overstate the impact that Linux has had on the technology industry. Linux is the dominant operating system for servers. Almost all high-performance computing runs on Linux. And the majority of mobile devices and embedded devices rely on Linux under the hood.

The Linux kernel is perhaps the most successful collaborative technology project of the PC era. Kernel contributors, totaling more than 13,500 since 2005, are adding about 10,000 lines of code, removing 8,000, and modifying between 1,500 and 1,800 daily, according to Zemlin. And this has been going on – though not at the current pace – for more than two and a half decades.

"We've been doing this for 25 years and one of the constant issues we've had is people stepping on each other's toes," said Torvalds. "So for all of that history what we've done is organize the code, organize the flow of code, [and] organize our maintainership so the pain point – which is people disagreeing about a piece of code – basically goes away."

The project is structured so people can work independently, Torvalds explained. "We've been able to really modularize the code and development model so we can do a lot in parallel," he said.

Technology plays an obvious role but process is at least as important, according to Torvalds.

"It's a social project," said Torvalds. "It's about technology and the technology is what makes people able to agree on issues, because ... there's usually a fairly clear right and wrong."

But now that Torvalds isn't personally reviewing every change as he did 20 years ago, he relies on a social network of contributors. "It's the social network and the trust," he said. "...and we have a very strong network. That's why we can have a thousand people involved in every release."

The emphasis on trust explains the difficulty of becoming involved in kernel development, because people can't sign on, submit code, and disappear. "You shoot off a lot of small patches until the point where the maintainers trust you, and at that point you become more than just a guy who sends patches, you become part of the network of trust," said Torvalds.

Ten years ago, Torvalds said he told other kernel contributors that he wanted to have an eight-week release schedule, instead of a release cycle that could drag on for years. The kernel developers managed to reduce their release cycle to around two and a half months. And since then, development has continued without much fuss.

"It's almost boring how well our process works," Torvalds said. "All the really stressful times for me have been about process. They haven't been about code. When code doesn't work, that can actually be exciting ... Process problems are a pain in the ass. You never, ever want to have process problems ... That's when people start getting really angry at each other." ฎ

[May 17, 2017] So your client's under-spent on IT for decades and lives in fear of an audit

Notable quotes:
"... Most of us use some form of desired state solution already. Desired state solutions basically involve an OS agent that gets a config from a centralized location and applies the relevant configuration to the operating system and/or applications. ..."
May 17, 2017 | theregister.co.uk
12 May 2017 at 14:56, Trevor Pott

Infrastructure as code is a buzzword frequently thrown out alongside DevOps and continuous integration as being the modern way of doing things. Proponents cite benefits ranging from an amorphous "agility" to reducing the time to deploy new workloads. I have an argument for infrastructure as code that boils down to "cover your ass", and have discovered it's not quite so difficult as we might think.

... ... ...

None of this is particularly surprising. When you have an environment where each workload is a pet, change is slow, difficult, and requires a lot of testing. Reverting changes is equally tedious, and so a lot of planning goes into making sure that any given change won't cascade and cause knock-on effects elsewhere.

In the real world this is really the result of two unfortunate aspects of human nature. First: everyone hates doing documentation, so it's highly unlikely that in an unstructured environment every change from the last refresh was documented. The second driver of chaos and problems is that there are few things more permanent than a temporary fix.

When you don't have the budget for the right hardware, software or services you make do. When something doesn't work you "innovate" a solution. When that breaks something, you patch it. You move from one problem to the next, and if you're not careful, you end up with something so fragile that if you breathe on it, it falls over. At this point, you burn it all down and restart from scratch.

This approach to IT is fine - if you have 5, 10 or even 50 workloads. A single techie can reasonably be expected to keep that all in their head, know their network and solve any problems they encounter. Unfortunately, 50 workloads is today restricted to only the smallest of shops. Everyone else is juggling too many workloads to be playing the pets game any more.

Most of us use some form of desired state solution already. Desired state solutions basically involve an OS agent that gets a config from a centralized location and applies the relevant configuration to the operating system and/or applications. Microsoft's group policy can be considered a really primitive version of this, with System Center being a more powerful but miserable to use example. The modern friendly tools being Puppet, Chef, Saltstack, Ansible and the like.

Once you have desired state configs in place we're no longer beating individual workloads into shape, or checking them manually for deviation from design. If all does what it says on the tin, configurations are applied and errors thrown if they can't be. Usually there is some form of analysis software to determine how many of what is out of compliance. This is a big step forward.
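
As a toy illustration of the desired state idea (not a real Puppet, Chef, Salt or Ansible agent), the sketch below pulls a config from a hypothetical central URL and applies it only when the system has drifted, which is what keeps repeated runs idempotent; the URL, file path and service name are all assumptions:

```bash
# Toy desired-state check for a single file; a real tool also handles
# packages, services, templating, reporting and much more.
DESIRED_URL="https://config.example.com/ntp.conf"   # hypothetical central source
TARGET=/etc/ntp.conf

tmp=$(mktemp)
curl -fsS "$DESIRED_URL" -o "$tmp"

# Apply the config only if the system has drifted from the desired state,
# so running this every 30 minutes is harmless.
if ! cmp -s "$tmp" "$TARGET"; then
    install -m 0644 "$tmp" "$TARGET"
    systemctl restart ntpd    # assumption: ntpd is the affected service
    echo "ntp.conf was out of compliance and has been corrected"
fi
rm -f "$tmp"
```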

... ... ...

This article is sponsored by HPE.

[May 16, 2017] The Technocult Soleil Wiki Fandom powered by Wikia

May 16, 2017 | soleil.wikia.com
The Technocult, also known as the Machine cult, is the semi-official name given by The Church of the Crossed Heart to followers of the Mechanicum faith who supply and maintain virtually all of the church's technology, engineering and industry.

Although they serve with the Church of the Crossed Heart, they have their own version of worship that differs substantially in theology and ritualistic forms from that of The Twelve Angels. Instead the Technocult worships a deity they call the Machine god or Omnissiah. The Technocult believes that knowledge is divine and comes only from the Omnissiah, thus making any objects that demonstrate the application of knowledge (i.e. machinery) or contain it (books) holy in the eyes/optical implants of the Technocult. The Technocult regards organic flesh as weak and imperfect, with the Rot being viewed as a divine message from the Omnissiah demonstrating its weakness, thus making its removal and replacement by mechanical, bionic parts a sacred process that brings them closer to their god, with many of its older members having very little of their original bodies remaining.

The date of the cult's formation is unknown, or a closely guarded secret...

[May 16, 2017] 10 Things I Hate About Agile Development!

May 16, 2017 | www.allaboutagile.com

1. Saying you're doing Agile just cos you're doing daily stand-ups. You're not doing agile. There is so much more to agile practices than this! Yet I'm surprised how often I've heard that story. It really is remarkable.

... ... ....

3. Thinking that agile is a silver bullet and will solve all your problems. That's so naive, of course it won't! Humans and software are a complex mix with any methodology, let alone with an added dose of organisational complexity. Agile development will probably help with many things, but it still requires a great deal of skill and there is no magic button.

... ... ...

8. People who use agile as an excuse for having no process or producing no documentation. If documents are required or useful, there's no reason why an agile development team shouldn't produce them. Just not all up-front; do it as required to support each feature or iteration. JFDI (Just F'ing Do It) is not agile!

David, 23 February 2010 at 1:21 am

So agree on number 1. Following "Certified" Scrum Master training (prior to the exam requirement), a manager I know now calls every regular status meeting a "scrum", regardless of project or methodology. Somehow the team is more agile as a result.

Ironically he pulled up another staff member for "incorrectly" using the term retrospective.

Andy Till, 23 February 2010 at 9:28 am

I can think of far worse, how about pairing with the guy in the office who is incapable of compromise?

Steve Watson, 13 May 2010 at 10:06 am

Kelly

Good list!

I like number 9, as I find with testing people think that they no longer need to write proper test cases and scripts – a list of confirmations on a user story will do. Well, if it's a simple change I guess you can dispense with test scripts, but if it's something more complex then there is no reason NOT to write scripts. If you have a reasonably large team of people who could execute the tests, they can follow the test steps and validate against the expected results. It also means that you can sensibly lump together test cases and cover them with one test.

If you don't think about how you will execute them and just tackle them one by one off the confirmations list, you miss the opportunity to run one test and cover many separate cases, saving time.

I always find test scripts useful if someone different re-runs a test, as they then follow the same process as before. This is why we automate regression so the tests are executed the same each time.

John Quincy, 24 October 2011 at 12:02 am

I am not a fan of agile. Unless you have a small group of developers who are in perfect sync with each other at all times, this "one size fits all" methodology is destructive and downright dangerous. I have personally witnessed a very good company go out of business this year because they transformed their development shop from a home-grown iterative methodology to SCRUM. The team was required to abide by the SCRUM rules 100%. They could not keep up with customer requirements and produced bug filled releases that were always late. These developers went from fun, friendly, happy people (pre-SCRUM) [who NEVER missed a date] to bitter, sarcastic, hard to be around 'employees'. When the writing was on the wall a couple of months back, the good ones got the hell out of there, and the company could not recover.

Some day, I'm convinced that Beck through Thomas will proclaim that the Agile Manifesto was all a big practical joke that got out of control.

This video pretty much lays out the one and only reason why management wants to implement Agile:

http://www.youtube.com/watch?v=nvks70PD0Rs

grumpasaurus, 9 February 2014 at 4:30 pm

It's a cycle of violence when a project claims to be Agile just because of standups and iterations and doesn't think about resolving the core challenges it has had to begin with. People are left still battling said challenges and then say that Agile sucks.

[May 15, 2017] Wall Street Journal: Enterprises Are Not Ready for DevOps, but May Not Survive Without It by Abel Avram

Notable quotes:
"... while DevOps is appealing to startups, there are important stumbling blocks in the path of DevOps adoption within the enterprise. ..."
"... The tools needed to implement a DevOps culture are lacking. While some of the tools can be provided by vendors and others can be created within the enterprise, a process which takes a long period of time, "there is a marathon of organizational change and restructuring that must occur before such tools could ever be bought or built." ..."
Jun 06, 2014 | www.infoq.com
Rachel Shannon-Solomon suggests that most enterprises are not ready for DevOps, while Gene Kim says that they must make themselves ready if they want to survive.

Rachel Shannon-Solomon, a venture associate at Work-Bench, has recently written a blog post for The Wall Street Journal entitled DevOps Is Great for Startups, but for Enterprises It Won't Work-Yet, arguing that while DevOps is appealing to startups, there are important stumbling blocks in the path of DevOps adoption within the enterprise.

While acknowledging that large companies such as Google and Facebook benefit from implementing DevOps, and that "there is no lack of appetite to experiment with DevOps practices" within "Fortune 500s and specifically financial services firms", Shannon-Solomon remarks that "there are few true change agents within enterprise IT willing to affect DevOps implementations."

She has come to this conclusion based on "conversations with startup founders, technology incumbents offering DevOps solutions, and technologists within large enterprises."

Shannon-Solomon brings four arguments to support her position:

Shannon-Solomon ends her post wondering "how long will it be until enterprises are forced to accept that they must accelerate their experiments with DevOps" and hoping that "more individual change agents within large organizations may emerge" in the future.

[May 15, 2017] Why Your Users Hate Agile

No methodology can substitute for good engineers who actually talk to and work with each other. Good engineers can benefit from a better software development methodology, but even the best software development methodology is powerless to convert mediocre developers into stars.
Notable quotes:
"... disorganized and never-ending ..."
"... Agile is to proper software engineering what Red Bull Flugtag is to proper aeronautic engineering.... ..."
"... As TFA points out, that always works fine when your requirements are *all* known an are completely static. That rarely happens in most fields. ..."
"... The problem with Agile is that it gives too much freedom to the customer to change their mind late in the project and make the developers do it all over again. ..."
"... If you are delivering to customer requests you will always be a follower and never succeed. You need to anticipate what the customers need. ..."
"... It frequently is. It doesn't matter what methodology you use -- if you change major features/priorities at the last minute it will cost multiple times as much. Yet frequently customers expect it to be cheap because "we're agile". And by accepting that change will happen you don't push the customers to make important decisions early, ensuring that major changes will happen, instead of just being possible. ..."
"... The problem with all methodologies, or processes, or whatever today's buzzword is, is that too many people want to practice them in their purest form. Excessive zeal in using any one approach is the enemy of getting things done. ..."
"... On a sufficiently large project, some kind of upfront design is necessary. ..."
"... If you insist on spinning back every little change to a monstrously detailed Master Design Document, you'll move at a snail's pace. As much as I hate the buzzword "design patterns", some pattern is highly desirable. ..."
"... there is no substitute for good engineers who actually talk to and work with each other. ..."
"... If you don't trust those people to make intelligent decisions (including about when things do have to be passed up) then you've either got the wrong people or a micromanagement fetish ..."
"... The problem the article refers to about an upfront design being ironclad promises is tough. Some customers will work with you, and others will get their lawyers and "systems" people to waste your time complaining about every discrepancy, ..."
"... In defense everything has to meet spec, but it doesn't have to work. ..."
"... There is absolutely no willingness to make tradeoffs as the design progresses and you find out what's practical and necessary and what's not. ..."
"... I'll also admit that there is a tendency to get sloppy in software specs because it is easier to make changes. Hardware, with the need to order materials, have things fabbed, tape out a chip, whatever, imposes a certain discipline that's lacking when you know you can change the source code at anytime. Being both, I'm not saying this is because hardware engineers are virtuous and software engineers are sloppy, but because engineers are human (at least some of them). ..."
"... Impressive stuff, and not unique to the space shuttle. Fly-by-wire systems are the same way. You're talking DO-178B [wikipedia.org] Level A stuff. It works, and it's very very expensive. If it was only 10x the cost of normal software development I'd be amazed. I agree that way too much software is poorly planned and implemented crap, and part of the reason is that nobody wants realistic cost estimates or to make the difficult decisions about what it's supposed to do up-front. But what you're talking about is aerospace quality. You couldn't afford a car or even a dishwasher made to those standards. ..."
Jun 05, 2013 | Slashdot

"What developers see as iterative and flexible, users see as disorganized and never-ending.

This article discusses how some experienced developers have changed that perception. '... She's been frustrated by her Agile experiences - and so have her clients.

"There is no process. Things fly all directions, and despite SVN [version control] developers overwrite each other and then have to have meetings to discuss why things were changed. Too many people are involved, and, again, I repeat, there is no process.' The premise here is not that Agile sucks - quite to the contrary - but that developers have to understand how Agile processes can make users anxious, and learn to respond to those fears. Not all those answers are foolproof.

For example: 'Detailed designs and planning done prior to a project seems to provide a "safety net" to business sponsors, says Semeniuk. "By providing a Big Design Up Front you are pacifying this request by giving them a best guess based on what you know at that time - which is at best partial or incorrect in the first place." The danger, he cautions, is when Big Design becomes Big Commitment - as sometimes business sponsors see this plan as something that needs to be tracked against.

"The big concern with doing a Big Design up front is when it sets a rigid expectation that must be met, regardless of the changes and knowledge discovered along the way," says Semeniuk.' How do you respond to user anxiety from Agile processes?"

Shinobi

Agile summed up (Score:5, Funny)

Agile is to proper software engineering what Red Bull Flugtag is to proper aeronautic engineering....

Nerdfest

Re: doesn't work

As TFA points out, that always works fine when your requirements are *all* known and are completely static. That rarely happens in most fields.

Even in the ones where it does it's usually just management having the balls to say "No, you can give us the next bunch of additions and changes when this is delivered, we agreed on that". It frequently ends up delivering something less than useful.

MichaelSmith

Re: doesn't work (Score:5, Insightful)

The problem with Agile is that it gives too much freedom to the customer to change their mind late in the project and make the developers do it all over again.

ArsonSmith

Re: doesn't work (Score:4, Insightful)

...but they can be trusted to say what is most important to them at the time.

No they can't. If you are delivering to customer requests you will always be a follower and never succeed. You need to anticipate what the customers need. As with the I guess made up quote attributed to Henry Ford, "If I listened to my customers I'd have been trying to make faster horses." Whether he said it or not, the statement is true. Customers know what they have and just want it to be faster/better/etc you need to find out what they really need.

AuMatar

Re: doesn't work (Score:5, Insightful)

It frequently is. It doesn't matter what methodology you use -- if you change major features/priorities at the last minute it will cost multiple times as much. Yet frequently customers expect it to be cheap because "we're agile". And by accepting that change will happen you don't push the customers to make important decisions early, ensuring that major changes will happen, instead of just being possible.

ebno-10db

Re: doesn't work (Score:5, Interesting)

"Proper software engineering" doesn't work.

You're right, but you're going to the other extreme. The problem with all methodologies, or processes, or whatever today's buzzword is, is that too many people want to practice them in their purest form. Excessive zeal in using any one approach is the enemy of getting things done.

On a sufficiently large project, some kind of upfront design is necessary. Spending too much time on it or going into too much detail is a waste though. Once you start to implement things, you'll see what was overlooked or why some things won't work as planned. If you insist on spinning back every little change to a monstrously detailed Master Design Document, you'll move at a snail's pace. As much as I hate the buzzword "design patterns", some pattern is highly desirable. Don't get bent out of shape though when someone has a good reason for occasionally breaking that pattern or, as you say, you'll wind up with 500 SLOC's to add 2+2 in the approved manner.

Lastly, I agree that there is no substitute for good engineers who actually talk to and work with each other. Also don't require that every 2 bit decision they make amongst themselves has to be cleared, or even communicated, to the highest levels. If you don't trust those people to make intelligent decisions (including about when things do have to be passed up) then you've either got the wrong people or a micromanagement fetish. Without good people you'll never get anything decent done, but with good people you still need some kind of organization.

The problem the article refers to about an upfront design being ironclad promises is tough. Some customers will work with you, and others will get their lawyers and "systems" people to waste your time complaining about every discrepancy, without regard to how important it is. Admittedly bad vendors will try and screw their customers with "that doesn't matter" to excuse every screw-up and bit of laziness. For that reason I much prefer working on in-house projects, where "sure we could do exactly what we planned" gets balanced with the cost and other tradeoffs.

The worst example of those problems is defense projects. As someone I used to work with said: In defense everything has to meet spec, but it doesn't have to work. In the commercial world specs are flexible, but it has to work.

If you've ever worked in that atmosphere you'll understand why every defense project costs a trillion dollars. There is absolutely no willingness to make tradeoffs as the design progresses and you find out what's practical and necessary and what's not. I'm not talking about meeting difficult requirements if they serve a purpose (that's what you're paid for) but being unwilling to compromise on any spec that somebody at the beginning of the project pulled out of their posterior and obviously doesn't need to be so stringent. An elephant is a mouse built to government specifications.

Ok, you can get such things changed, but it requires 10 hours from program managers for every hour of engineering. Conversely, don't even think about offering a feature or capability that will be useful and easy to implement, but is not in the spec. They'll just start writing additional specs to define it and screw you by insisting you meet those.

As you might imagine, I'm very happy to be back in the commercial world.

Anonymous Coward

Re: doesn't work (Score:2, Interesting)

You've fallen into the trap of using their terminology. As soon as 'the problem' is defined in terms of 'upfront design', you've already lost half the ideological battle.

'The problem' (with methodology) is that people want to avoid the difficult work of thinking hard about the business/customer's problem and coming up with solutions that meet all their needs. But there isn't a substitute for thinking hard about the problem and almost certainly never will be.

The earlier you do that hard thinking about the customer's problems that you are trying to solve, the cheaper, faster and better quality the result will be. Cheaper? Yes, because bugfixing that is done later in the project is a lot more expensive (as numerous software engineering studies have shown). Faster? Yes, because there's less rework. (Also, since there is usually a time = money equivalency, you can't have it done cheap unless it is also done fast.) Higher quality? Yes, because you don't just randomly stumble across quality. Good design trumps bad design every single time.

... ... ...

ebno-10db

Re: doesn't work (Score:4, Interesting)

Until the thing is built or the software is shipped there are many options and care should be taken that artificial administrative constraints don't remove too many of them.

Exactly, and as someone who does both hardware and software I can tell you that that's better understood by Whoever Controls The Great Spec in hardware than in software. Hardware is understood to have physical constraints, so not every change is seen as the result of a screw-up. It's a mentality.

I'll also admit that there is a tendency to get sloppy in software specs because it is easier to make changes. Hardware, with the need to order materials, have things fabbed, tape out a chip, whatever, imposes a certain discipline that's lacking when you know you can change the source code at anytime. Being both, I'm not saying this is because hardware engineers are virtuous and software engineers are sloppy, but because engineers are human (at least some of them).

ebno-10db

Re: doesn't work (Score:2)

http://www.fastcompany.com/28121/they-write-right-stuff

This is my evidence that "proper software engineering" *can* work. The fact that most businesses (and their customers) are willing to save money by accepting less from their software is not the fault of software engineering. We could and did build buildings much faster than we do today, if you are willing to make more mistakes and pay more in human lives. If established industries and their customers began demanding software at that higher standard and were willing to pay for it like it was real engineering, then maybe it would happen more often.

Impressive stuff, and not unique to the space shuttle. Fly-by-wire systems are the same way. You're talking DO-178B [wikipedia.org] Level A stuff. It works, and it's very very expensive. If it was only 10x the cost of normal software development I'd be amazed. I agree that way too much software is poorly planned and implemented crap, and part of the reason is that nobody wants realistic cost estimates or to make the difficult decisions about what it's supposed to do up-front. But what you're talking about is aerospace quality. You couldn't afford a car or even a dishwasher made to those standards.

donscarletti

Re: doesn't work (Score:3)

260 people maintaining 420,000 lines of code, written to precise externally provided specifications that change once every few years.

This is fine for NASA, but if you want something that does roughly what you need before your competitors come up with something better, you'd better find some better programmers.

[May 15, 2017] DevOps Fact or Fiction

May 15, 2017 | blog.appdynamics.com

In light of all the hype, we have created a DevOps parody Series – DevOps: Fact or Fiction .

For those of you who did not see, in October we created an entirely separate blog (inspired by this) – however we decided that it is relevant enough to transform into a series on the AppDynamics Blog. The series will point out the good, the bad, and the funny about IT and DevOps. Don't take anything too seriously – it's nearly 100% stereotypes : ).

Stay tuned for more DevOps: Fact or Fiction to come. Here we go

[May 15, 2017] How DevOps is Killing the Developer by Jeff Knupp

Notable quotes:
"... Start-ups taught us this. Good developers can be passable DBAs, if need be. They make decent testers, "deployment engineers", and whatever other ridiculous term you'd like to use. Their job requires them to know much of the domain of "lower" roles. There's one big problem with this, and hopefully by now you see it: It doesn't work in the opposite direction. ..."
"... An example will make this more clear. My dad is a dentist running his own practice. He employs a secretary, hygienist, and dental assistant. Under some sort of "DentOps" movement, my dad would be making appointments and cleaning people's teeth while trying to find time to drill cavities, perform root canals, etc. My dad can do all of the other jobs in his office, because he has all the specialized knowledge required to do so. But no one, not even all of his employees combined, can do his job. ..."
"... Such a movement does a disservice to everyone involved, except (of course) employers. What began as an experiment aimed at increasing software quality has become a farce, where the most talented employees are overworked (while doing less, less useful work) and lower-level positions simply don't exist. ..."
"... you're right. Pure DevOps is no more efficient or sensible than pure Agile (or the pure Extreme Programming or the pure Structured Programming that preceeded it). The problem is purists and ideological zealotry not the particular brand of religion in question. Insistence on adherence to dogma is the problem as it prevents adoption of flexible, 'fit for purpose' solutions. Exposure to all the alternatives is good. Insisting that one hammer is ideal for every sort of nail we have and ever will have is not. ..."
"... There are developers who have a decent set of skills outside of development in QA, Operations, DB Admin, Networking, etc. Equally so, there are operations engineers who have a decent set of skills outside of operations in QA, Development, DB Admin, networking, etc. Extend this to QA and other disciplines. What I have never seen is one person who can perform all those jobs outside of their main discipline with the same level of professionalism, experience and acumen that each of those roles require to do it well at an Enterprise/World Class level. ..."
"... I prefer to think of DevOps as more of a full-stack team concept. Applying the full-stack principle at the individual levels is not sustainable, as you point out. ..."
"... DevOps roles are strictly automation focused, at least according to all job specifications I see on the internet. They don't need any development skills at all. To me it looks like a new term for what we used to call IT Operations, but more scripting/automation focused. DevOps engineer will need to know Puppet, Chef, Ansible, OS management, public cloud management, know how to set up monitoring, logging and all that stuff usual sysadmin used to do but in the modern world. In fact I used to apply for DevOps roles but quickly changed my mind as it turned out no companies need a person wearing many hats, it has absolutely nothing to do with creating software. Am I wrong? ..."
Apr 15, 2014 | jeffknupp.com
How 'DevOps' is Killing the Developer

There are two recent trends I really hate: DevOps and the notion of the "full-stack" developer. The DevOps movement is so popular that I may as well say I hate the x86 architecture or monolithic kernels. But it's true: I can't stand it. The underlying cause of my pain? This fact: not every company is a start-up, though it appears that every company must act as though they were. DevOps

"DevOps" is meant to denote a close collaboration and cross-pollination between what were previously purely development roles, purely operations roles, and purely QA roles. Because software needs to be released at an ever-increasing rate, the old "waterfall" develop-test-release cycle is seen as broken. Developers must also take responsibility for the quality of the testing and release environments.

The increasing scope of responsibility of the "developer" (whether or not that term is even appropriate anymore is debatable) has given rise to a chimera-like job candidate: the "full-stack" developer. Such a developer is capable of doing the job of developer, QA team member, operations analyst, sysadmin, and DBA. Before you accuse me of hyperbole, go back and read that list again. Is there any role in the list whose duties you wouldn't expect a "full-stack" developer to be well versed in?

Where did these concepts come from? Start-ups, of course (and the Agile methodology). Start-ups are a peculiar beast and need to function in a very lean way to survive their first few years. I don't deny this . Unfortunately, we've taken the multiple technical roles that engineers at start-ups were forced to play due to lack of resources into a set of minimum qualifications for the role of "developer".

Many Hats

Imagine you're at a start-up with a development team of seven. You're one year into development of a web application that X's all the Y's and things are going well, though it's always a frantic scramble to keep everything going. If there's a particularly nasty issue that seems to require deep database knowledge, you don't have the liberty of saying "that's not my specialty," and handing it off to a DBA team to investigate. Due to constrained resources, you're forced to take on the role of DBA and fix the issue yourself.

Now expand that scenario across all the roles listed earlier. At any one time, a developer at a start-up may be acting as a developer, QA tester, deployment/operations analyst, sysadmin, or DBA. That's just the nature of the business, and some people thrive in that type of environment. Somewhere along the way, however, we tricked ourselves into thinking that because, at any one time, a start-up developer had to take on different roles he or she should actually be all those things at once.

If such people even existed, "full-stack" developers still wouldn't be used as they should. Rather than temporarily taking on a single role for a short period of time, then transitioning into the next role, they are meant to be performing all the roles, all the time. And here's what really sucks: most good developers can almost pull this off.

The Totem Pole

Good developers are smart people. I know I'm going to get a ton of hate mail, but there is a hierarchy of usefulness of technology roles in an organization. Developer is at the top, followed by sysadmin and DBA. QA teams, "operations" people, release coordinators and the like are at the bottom of the totem pole. Why is it arranged like this?

Because each role can do the job of all roles below it if necessary.

Start-ups taught us this. Good developers can be passable DBAs, if need be. They make decent testers, "deployment engineers", and whatever other ridiculous term you'd like to use. Their job requires them to know much of the domain of "lower" roles. There's one big problem with this, and hopefully by now you see it: It doesn't work in the opposite direction.

A QA person can't just do the job of a developer in a pinch, nor can a build-engineer do the job of a DBA. They never acquired the specialized knowledge required to perform the role. And that's fine. Like it or not, there are hierarchies in every organization, and people have different skill sets and levels of ability. However, when you make developers take on other roles, you don't have anyone to take on the role of development!

An example will make this more clear. My dad is a dentist running his own practice. He employs a secretary, hygienist, and dental assistant. Under some sort of "DentOps" movement, my dad would be making appointments and cleaning people's teeth while trying to find time to drill cavities, perform root canals, etc. My dad can do all of the other jobs in his office, because he has all the specialized knowledge required to do so. But no one, not even all of his employees combined, can do his job.

Such a movement does a disservice to everyone involved, except (of course) employers. What began as an experiment aimed at increasing software quality has become a farce, where the most talented employees are overworked (while doing less, less useful work) and lower-level positions simply don't exist.

And this is the crux of the issue. All of the positions previously held by people of various levels of ability are made redundant by the "full-stack" engineer. Large companies love this, as it means they can hire far fewer people to do the same amount of work. In the process, though, actual development becomes a vanishingly small part of a developer's job. This is why we see so many developers that can't pass FizzBuzz: they never really had to write any code. All too common a question now, can you imagine interviewing a chef and asking him what portion of the day he actually devotes to cooking?
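
For reference, FizzBuzz is the classic screening exercise: print the numbers 1 to 100, replacing multiples of 3 with "Fizz", multiples of 5 with "Buzz", and multiples of both with "FizzBuzz". A minimal shell version:

```bash
# FizzBuzz: the trivial coding test referenced above.
for i in $(seq 1 100); do
    out=""
    (( i % 3 == 0 )) && out="Fizz"
    (( i % 5 == 0 )) && out="${out}Buzz"
    echo "${out:-$i}"   # fall back to the number if neither divisor matched
done
```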

Jack of All Trades, Master of None

If you are a developer of moderately sized software, you need a deployment system in place. Quick, what are the benefits and drawbacks of the following such systems: Puppet, Chef, Salt, Ansible, Vagrant, Docker. Now implement your deployment solution! Did you even realize which systems had no business being in that list?

We specialize for a reason: human beings are only capable of retaining so much knowledge. Task-switching is cognitively expensive. Forcing developers to take on additional roles traditionally performed by specialists means that they:

What's more, by forcing developers to take on "full-stack" responsibilities, they are paying their employees far more than the market average for most of those tasks. If a developer makes 100K a year, you can pay four developers 100K per year to do 50% development and 50% release management on a single, two-person task. Or, simply hire a release manager at, say, 75K and two developers who develop full-time. And notice the time wasted by developers who are part time release-managers but don't always have releases to manage.

Don't Kill the Developer

The effect of all of this is to destroy the role of "developer" and replace it with a sort of "technology utility-player". Every developer I know got into programming because they actually enjoyed doing it (at one point). You do a disservice to everyone involved when you force your brightest people to take on additional roles.

Not every company is a start-up. Start-ups don't make developers wear multiple hats by choice, they do so out of necessity. Your company likely has enough resource constraints without you inventing some. Please, don't confuse "being lean" with "running with the fewest possible employees". And for God's sake, let developers write code!


Enno • 2 years ago
Some background... I started life as a dev (30 years ago), have mostly been doing sysadmin and project tech lead sorts of work for the last 15. I've always assumed the DevOps movement was resulting in sub-par development and sub-par sysadmin/ops precisely because people were timesharing their concerns.

But what it does bring to the party is a greater level of awareness of the other guy's problems. There's nothing quite like being rung out of bed at 3am to motivate a developer to improve his product's logging to make supporting it easier. Similarly the admin exposed to the vagaries of promoting things into production in a supportable, repeatable, deterministic manner quickly learns to appreciate the issues there. So DevOps has served a purpose and has offered benefits to the organisations that signed on for it.

But, you're right. Pure DevOps is no more efficient or sensible than pure Agile (or the pure Extreme Programming or the pure Structured Programming that preceeded it). The problem is purists and ideological zealotry not the particular brand of religion in question. Insistence on adherence to dogma is the problem as it prevents adoption of flexible, 'fit for purpose' solutions. Exposure to all the alternatives is good. Insisting that one hammer is ideal for every sort of nail we have and ever will have is not.

Zakaria ANBARI -> Enno • 2 years ago
totally agree with you !
DevOps Reaper • 2 years ago
I'm very disappointed to see this kind of rubbish. It's this type of egocentric thinking and generalization that the developer is an omniscient deity requiring worshiping and pampering that prevents DevOps from being successful. Based on the tone and your perspective it sounds like you've been doing DevOps wrong.

A developer role alone is not the linchpin that keeps DevOps humming - instead it's the respect that each team member holds for each discipline and each team member's area of expertise, the willingness of the entire team to own the product, feature delivery and operational stability end to end, to leverage each others skills and abilities, to not blame Dev or Ops or QA for failure, and to share knowledge.

There are developers who have a decent set of skills outside of development in QA, Operations, DB Admin, Networking, etc. Equally so, there are operations engineers who have a decent set of skills outside of operations in QA, Development, DB Admin, networking, etc. Extend this to QA and other disciplines. What I have never seen is one person who can perform all those jobs outside of their main discipline with the same level of professionalism, experience and acumen that each of those roles require to do it well at an Enterprise/World Class level.

If you're a developer doing QA and operations, you're doing it because you have to, but there should be no illusion that you're as good in alternate roles as someone trained and experienced in those disciplines. To pretend otherwise is a disservice to yourself and to the organization that signs your paycheck. If you're in this situation and you'd prefer making a difference rather than spewing complaints, I would recommend talking to your manager and above about changing their skewed vision of DevOps. If they aren't open to communication, collaboration, experimentation and continual improvement, then their DevOps vision is dysfunctional and they're not supporting DevOps from the top down. Saying you're DevOps and not doing it is *almost* more egregious than saying the developer is at the top of a Totem Pole of existence.

spunky brewster -> DevOps Reaper • a month ago
He prefaced it with 'crybabies please ignore' - it's his opinion, one that everyone but the lower-totem-pole people agrees with, so agree to disagree. I also don't think being at the bottom of the totem pole is a big f'in deal. If you're getting paid, embrace it! So many other ways to enjoy life! The top-dog people have all the pressure and die young! 99% of the people on earth don't know the difference between one nerd and another. And other nerds are always going to be egomaniacs who will find some way to justify their own superiority no matter what your achievements. So this kind of posturing is a waste of time.
Pramod • 2 years ago
Amen to that!!
carlivar • 2 years ago
I think there's a problem with your definition of DevOps. It doesn't mean developers have to be "full-stack" or do ops stuff. And it doesn't mean "act like a startup." It simply means, at its basis, that Developers and Operations work well together and do not have any communication barriers. This is why I hate DevOps as a title or department, because DevOps is a culture.

Let's take your DentOps example. The dentist has 3 support staff. What if they rarely spoke to the dentist? What if they were on different floors of the building? What if the dentist wrote an email about how teeth should be cleaned and wasn't available to answer questions or willing to consider feedback? What if once in a while the dentist needed to understand enough about the basics of appointment scheduling to point out problems with the system? Maybe appointments are being scheduled too close together. Would the patients get backed up throughout the day because that's the secretary's problem? Of course not. Now we'd be getting into a more accurate analogy to DevOps. If anything a dentist's office is ALREADY "DentOps" and the whole point of DevOps is to make the dev/ops interaction work in a logical culture that other industries (like dentists) already use!

StillMan -> carlivar • 2 years ago
I would tend to agree with some of that. Being able to troubleshoot network issues using monitoring tools like Fiddler is a good thing to be aware of. I can also see a lot of companies using it as a way to make one person do everything. Moreover, there are probably folks out there who perpetuate that behavior by taking on the machismo argument.

The argument goes: if I can do it, you should be able to do it too, or else you're not as good a developer as I am. I have never heard anyone outright claim this, but I've seen this attitude time and time again from ambitious analysts looking to get a leg up, a pay raise, and a way to impose their values on the rest of the team. One of the first things you're taught as a dev is that you can't hope to know it all.

Your responsibility first and foremost as a developer is the stability and reliability of your code and the services that you provide. In some industries this is literally a matter of life and death (computers in your car, mission-critical medical systems). It doesn't work everywhere.

spunky brewster -> carlivar • a month ago

I wouldn't want to pay a receptionist 200k a year like a dentist, though. Learn to hire better receptionists. Even a moderately charming woman can create more customer loyalty, and more cheaply, than the best dentist in the world. I want my dentist to keep quiet and have a steady hand. I want my receptionist to engage me and acknowledge my existence.

I want my secretary to be a multitasking master. I want my dentist not to multitask at all - OUCH!

Ole Hauris Sørensen • 2 years ago
Good points, I tend to agree. I prefer to think of DevOps as more of a full-stack team concept. Applying the full-stack principle at the individual levels is not sustainable, as you point out.

The full-stack DevOps team will have team members with primary skills in one of the traditional specialties, and will, over time, develop decent secondary skills. But the value is not in people constantly context switching - that actually kills efficiency. The value is in developers understanding and developing an open relationship with testing and operations - and vice versa. And this cooperation is inhibited by putting people in separate teams with conflicting goals. DevOps in practice is not a despecialization. It's bringing the specialists together.

ceposta Ole Hauris Sørensen • 2 years ago +1.

The more isolated or silo'd developers become, the less they realize what constitutes delivering software, and the more problems get pushed into the IT process of test/build/release/scale/monitor, etc. Writing code is a small fraction of that delivery process. I've written about the success of devops and microservices that touches on this stuff because they're highly related. The future success of devops/microservices/cloud/etc. isn't about technology so much as culture: http://blog.christianposta....

Thanks for the post!

Julio • 2 years ago

Interesting points were raised. Solid arguments. I identified with this text and with my current difficulties as a developer.

Cody • 2 years ago
Great article, and you're definitely describing one form of dysfunctional organisation where DevOps, Agile, Full Stack, and every other $2 word has been corrupted into a cost-cutting justification: cramming more work onto people who aren't skilled for it, and who end up not having any time to do what they were hired as experts for!

But I'd also agree with other posters that it's a little developer centric. I'm a terrible programmer and a great DBA. I can tell you most programmers who try to be DBAs are equally terrible. It's definitely not "doing the job of the receptionist" 😄

And we shouldn't forget what DevOps is meant to be about; teams making sure nobody gets called at night to fix each other's messes. That means neither developers with shitty deployments straight to production nor operations letting the disks silently fill because "if it ain't C: it ain't our problem."

Zac Smith • 6 months ago
I know of 0 developers that can manage a network of any appreciable scale.

In cloud and large enterprise networks, if there were a totem (which there isn't), using your methodology would place the dev under the network engineer. Their software implements the protocol and configuration intent of the NE. Good thing the whole concept is a pile of rubbish. I think you fell into the trap you called out, which is thinking at limited scale.

spunky brewster Zac Smith • a month ago
It's true. We can all create LANs at home, but I wouldn't dare f with a corporate network and risk shutting down Amazon for a day. Which seems to happen quite a bit... maybe they're DEVOPPING a bit too much.
David Rawk Zac Smith • 6 months ago
I tend to agree.. they are not under, but beside.. Both require a heap of skill and that includes coding.. but vastly different code.
Wilmer • 9 months ago
Jeff Knupp is to one side of the spectrum. DevOps Reaper is to the other side.

Enno is more attuned to what is really going on. So I won't repeat any of those arguments.

However I will ask you to put me in a box. What am I?

I graduated as a Computer Engineer (a hybrid between Electrical Engineering and Computer Science). I don't say that anymore, as companies have no idea what that means. So I called myself a Digital Electronics and Software Engineer for a while. The question was all too often: "So what are you, software or hardware?"
I spent my first few years working down from board design, writing VHDL and Verilog, to embedded software in C and C++, then algorithms in optimization with the CUDA framework in C, with C++ wrappers and C# for the logic tier. Then I worked another few years in particle physics with C++ compute engines, x86 assembly declarations for speed, and C# for WPF UIs.

After that I went to work for a wind turbine company as a system architect, where it was mostly embedded work: programming ARM Cortex microprocessors, high-power electronics controls, and custom service and diagnostics tools in C#. Real-time web-based dashboards with Angular, Bootstrap, and the like for a good-looking web app.
Nowadays I'm working with mobile-first web applications that have a massive backend to power them. It is mostly a .NET stack, from Entity Framework, to .NET WebAPI, to Angular-powered front ends. This company is not a start-up, but it is a small company, so I wear the many hats. I introduced the new software life cycle, which includes continuous integration and continuous deployment. Yes, I manage build servers and build tools, I develop, I'm QA, I'm a tester, I'm a DBA, I'm the deployment and configuration manager.

If you are wondering, I have resorted to calling myself a full-stack developer. It has that edgy sound that companies like to hear. I'm still a young developer; I've only been developing for 10 years.

In my team we are all "Jacks of all Trades" and "Masters of Many". We switch tasks and hats because it is fun and keeps everyone from getting bored/stuck. Our process is called "Best practices that work for this team".

So, I think of myself as a software engineer. I think I'm a developer. I think I'm DevOps, I think I'm QA.

ישראל פרוכטר Wilmer • 8 months ago
I join you in the lack of a title; we don't need those titles (only when HR people are involved, and then we need to kind of fake our persona anyhow...)
Matt King • a year ago
Let's start with the fact that DevOps didn't come from startups. It came mainly from Boeing and a few other major blue-chip IT shops investing heavily in systems management technology around the turn of the century. The goal at the time was simply to change the ratio of servers to IT support personnel, and to rethink and reorganize development and operations into one organization with one set of common goals. The 'wearing many hats' thing you discuss is a feature of startups, but that feature is independent of siloed or integrated organizations.

I prefer the 'sportzing' analogy of basketball and football. Football has specialist teams that are largely functionally independent because they focus on distinct goals. Basketball has specialist positions, but the whole team is focused on the same goals. I'm not saying one sport is better than the other. I am saying the basketball mentality works better in the IT environment. Delivering the product or service to the customer is the common goal that everyone should be thinking about, along with how the details of their job fit into that overall picture. It sounds to me like you are really saying "Hey, it's my job and only my job to think about how it all fits together and works".

Secondly, while it is pretty clear that the phrase 'full stack engineer' is about as useful as "Cloud Computing", your perspective that somehow developers are at the 'top' of the tree, able to do any job, is very mistaken. There are key contributors from every specialty who have that ability, and more useful names for them are things like "10x" or "T-shaped". Again, you are describing a real situation, but correlating it with unrelated associations. It is just as likely, and just as valuable, to find an information architect who can also code, or a systems admin who can also diagnose database performance, or an electrician who can also hang sheetrock. Those people do fit your analogy of 'being on top', because they are not siloed and stovepiped into just their specialty.

The DevOps mindset fosters this way of thinking, instead of the old and outdated specialist way of thinking you are defending. Is it possible your emotional reaction is fear based against the possibility that your relative value will decrease if others start thinking outside their boxes?

Interesting to note that Agile also started at Boeing, but 10 years earlier. I live in the startup world of Seattle, but I know my history and realize that much of what appears new is actually just 'new to you' (or me), and that most cutting-edge technology and thinking is just combining ideas from other industries in new ways.

BosnianDolphin • 2 years ago

Agree on most points, but nobody needs a DBA - unless it is some massive project. DBA people should pick up new skills fast.

Safespace Scooter • 2 years ago
The problem is that developers are trained to crank out code and hope that QA teams will find the problems, often without even knowing how to catch the holes themselves. DevOps trains people to think critically and do both. It isn't killing developers; it is making them look like noobs while phasing them out.
strangedays Safespace Scooter • a year ago
Yeah, good luck with that attitude. Your company's gonna have a good ole time looking for and keeping new developer talent. Because as we all know, smart people love working with dummies. I'd love to see 'your QA team' work on our spatial collision algorithm and make our devs "look like noobs". You sound like most middle-management schmucks.
Manachi • 2 years ago
Fantastic article! I was going to start honing in on the points I particularly agree with but all of it is just spot on. Great post.
Ralli Soph • 9 days ago
Funniest article so far on full stack. It's a harsh reality for devs, because we're asked to do everything and know everything - so how can you really believe QA or a DBA can do the job of someone like that? There is a crazy amount of hours a full-stack dev invests in acquiring that kind of knowledge, not to mention some people are also simply talented at their job. Imagine trying to tell QA to do that? Maybe for a few hours someone can be a backup just in case something happens, but really it's like replacing the head surgeon.
spunky brewster • a month ago
The best skill you can learn in your coding career is your next career. No one wants a 45-year-old coder.

I see so much time wasted learning every new thing when you should just be plugging away to get the job done, bank the $$, and move on. All your accumulated skills will be worthless in a decade or so, and your entire knowledge useless in two decades. My ability to turn a wrench is what's keeping me from the poor house. And I have an engineering degree from UIUC! I also don't mind. Think about a 100-hour week as a plumber with OT in a reasonably priced neighborhood, versus a coder. Who do you think is making more? Now I'm not saying you can't survive into your 50's programming, but typically they get retired forcefully, and permanently... by a heart attack!

But rambling aside, the author makes a good point, and I think this is the future of big companies in tech. The current model is driven by temporary factors. Ideally you'd have a specialized workforce. But I think that as a programmer you are in constant fear of becoming obsolete, so you don't want to be pigeonholed. It's just not mathematically possible to have that 10,000-hour mastery in 50 different areas... unless you are Bill Murray in Groundhog Day.

hrmilo • 3 months ago
A developer who sees himself at the top of a pyramid. Not surprising, given your myopic and egotistical view. I laugh at people who code a few SELECT statements and think they can fill the DBA role. HA HA HA. God, the arrogance. "Well, it worked on my machine." - How many sysadmins have heard this out of a developer's mouth? Unfortunately, projects get stuck supporting such issues because that very ego has led the developer too far down the road to turn back. They had no common sense or modesty to call on the knowledge of their Sys Ops team to help design the application. I interview job candidates all the time who call themselves full stack simply because they complement their programming language of choice with a mere smattering of knowledge in client-side technologies and can write a few SQL queries. Most developers have NO PERCEPTION of the myriad intricacies it takes to get an application from their unconstrained desktop with its FULL ADMIN perms and "unlimited resources", through a staging/QA environment, and eventually to a securely locked-down production system with limited and perhaps shared or hosted resources. Respect for your support teams, communication and coordination, and the knowledge that you do not know it all - THAT'S being Full Stack and DevOps, sir.
spunky brewster hrmilo • a month ago
There's always that one query that no one can do in a way that takes less than 2 hours until you pass it off to a real DBA... it's the 80/20 rule, basically. I truly don't believe 'full stack' exists. It's an illusion. There's always something that suffers.

The real problem is that smart people are in such demand that we're forced to adapt to this tribal, pre-civilization hodgepodge. Once the industry matures, it'll disappear. Until then they will think they re-invented the wheel.

Ivan Gavrilyuk • 6 months ago

I'm confused here. DevOps roles are strictly automation-focused, at least according to all the job specifications I see on the internet. They don't need any development skills at all. To me it looks like a new term for what we used to call IT Operations, but more scripting/automation focused. A DevOps engineer will need to know Puppet, Chef, Ansible, OS management, public cloud management, and how to set up monitoring, logging and all that stuff the usual sysadmin used to do, but in the modern world. In fact I used to apply for DevOps roles but quickly changed my mind, as it turned out no companies need a person wearing many hats; it has absolutely nothing to do with creating software. Am I wrong?

Mario Bisignani Ivan Gavrilyuk • 4 months ago

It depends on what you mean by development skills. Have you ever tried to automate the deployment of a large web application? In fact, the scripts that automate the deployment of large, scalable web applications are pretty complex software which requires in-depth thinking and should follow all the important principles a good developer should know: component isolation, scalability, maintainability, extensibility, etc.
Valentin Georgiev • 10 months ago
1k% agree with this article!
TechZilla • a year ago
Successful DevOps doesn't mean a full-stack developer does it all; that's only true for a broken company that succeeded despite bad organization. For example, Twitter's Dev-only culture is downright sick, and ONLY works because they are in the tech field. Mind you, I still believe personally that it works for them DESPITE its unbalanced structure. In other words, bad DevOps means the Dev has no extra resources and just more requirements - yeah, that sucks!

BUT, on the flip,

Infrastructure works with QA/Build to define supportable deployment standards; they've got to learn all the automation bits and practice using them. Now Devs have to package all their applications properly, in the formats supported by QA/Build's CI and repositories (that 'working just fine' install script definitely doesn't count). BUT the Devs get pre-made, CI-ready examples and, if needed, code-migration assistance from the QA/Build team. Pretty soon they learn how to package that type of app, like a J2EE Maven EAR or a Webdeploy to IIS... and the rest should be handled for them, as automatically as possible, by the proactive operations teams.

Make sense? This is how it's supposed to work. It sounds like you're left alone in a terrible Dev-only/Dev-heavy world. The key to DevOps that is great, and that everybody likes, versus just more work, is having a very balanced workflow between the teams and making sure the pass-off points are VERY well defined. Essentially it requires management that cuts up the responsibility properly, so the teams have a shared interest in collaborating. In a Dev-heavy organization, the Devs can just throw garbage over the wall, and operations has to react to constant problems... they start to hate each other, and Dev managers get the idea that they can cut out Ops if they do "DevOps", so then they throw it all at you like right now.

Adi Chiru • a year ago

I see in this post so much rubbish and narrow-mindedness, so much of the exact stuff that is killing companies of every type. In the last 10 years I have had many roles that required me, as a systems engineer, to come in and straighten out all kinds of really bad compromises developers made just to make stuff work.

The role never shows the level of intelligence or capability. I've seen so many situations in the last 10 years where smart people with the wrong attitude and awareness are too smart for anyone's good, and where limited people still provide more value than a very smart one acting as if he is too smart to even have a conversation about anything.

This post is embarrassing for you Jeff, I am sorry for you man.... you just don't get it!

Max Börebäck • a year ago
A developer does not have to do full stack; the developer can continue with development, but has to adapt to some things around packaging, testing, and how the software is operated.
Operations can continue with operations, but has to know how things are built and packaged.
Developers and operations need to share things - use the same application server, for example. The developer needs to understand how the software is operated, to make sure the code is written in a proper way. Operations needs to adapt to the need for fast delivery and be able to support a controlled way of deploying into production daily.
Here is a complementary post I have on the topic:
http://bit.ly/1r3iVff

Peperud • a year ago
Very much agree with you, Jeff. I've been thinking along these lines for a while now...
Lenny Joseph • a year ago
I will share my experience. I started off my career teaching programming, which included database programming (Oracle PL/SQL, SQL Server Transact-SQL); that gave me good insights into database internals, which landed me in the DBA world for the last 10 years. During these 10 years, working at technology companies regarded as top-notch, I have seen very smart developers writing excellent application code but missing out on writing optimized pieces to interact with the database. Hence, I think each job has a scale, and professionals of any group cannot do what the top professionals of another group can do. I have seen developers with fairly good database internals knowledge, and I have seen DBAs writing automation code that compares well with features of some commercial database products like TOAD. So a generalization like this does not hold.
ceposta • a year ago
BTW.. "efficiency" is not the goal... "

DevOps and the Myth of Efficiency, Part I

http://blog.christianposta....

DG • a year ago
The idea that there is a hierarchy of usefulness is bunk. Most developers are horrible at operations because they dislike it. Most sysadmins and DBAs are horrible at coding because they dislike it. People gravitate to what interests them, and a disinterested person does a much poorer job than an interested one. DevOps aims to combine roles by removing barriers, but there are costs to quality that no one likes to talk about. Using your hierarchy example, most doctors could obtain their RN but they would not make good nurses.
Lana Boltneva • 2 years ago
So true! I also suggest you check these 6 best practices in DevOps: http://intersog.com/blog/ag...
Jonathan McAllister • 2 years ago
This is an excellent article on the general concepts of DevOps and the DevOps movement. It helps to identify the cultural shifts required to facilitate proper DevOps implementations. I also write about DevOps: I authored a book on implementing CI, CD and DevOps-related functions within an organization, and it was recently published. The book is aptly titled Mastering Jenkins ( http://www.masteringjenkins... ) and aims to codify not only the architectural implementations and requirements of DevOps but also the cultural shift needed to properly advocate for the adoption of DevOps practices. Let me know what you think.
Chris Kavanagh • 2 years ago
I agree. Although I'm not in the business (yet), I will be soon. What I've noticed just playing around with Vagrant and Chef, Puppet, and Ansible is the great amount of time it takes to try to master just one of these provisioners. I can't imagine being responsible for all the roles you spoke of in the article. How can one possibly master all of them, and be good at any of them?
Sarika Mehta • 2 years ago
Hmmm... users and the business see it as one application... for them, how it was developed or deployed does not matter... IT is an enabler by definition... so DevOps is mostly about that: giving one view to the customer; quick changes, stable changes, a stable application...

Frankly, DevOps is not about developers or testers... it is about the right architecture, the right framework... developers/testers anyway do what is there in the script... DevOps is just a new script to them.

For the right DevOps, you need the right framework and architecture for the whole of the program; you need architecture which is built end to end and not in silos...

Hanut Singh • 2 years ago
Quite the interesting read. Having worked as a "Full Stack" Developer , I totally agree with you. Well done sir. My hat is tipped.
Masood • 2 years ago
Software Developers write code that the business/customer uses.
Test Developers write test code to test the SUT.
Release Developers write code to automate the release process.
Infrastructure Developers write code to create infrastructure automatically.
Performance Developers write code to performance-test the SUT.
Security Developers write code to scan the SUT for security issues.
Database Developers write code for the DB.

So which developer are you thinking DevOps going to kill?

In today's TDD world, a developer (it could be any of the above) needs to get out of their comfort zone to make sure they write testable, releasable, deployable, performant, security-compliant and maintainable code.

DevOps brings all these roles together to collaborate and deliver.

Daniel • 2 years ago
You deserve an award.....
Mash -> Dick • 2 years ago
Why wouldn't they be? What are the basic responsibilities that make for a passable DBA and which of those responsibilities cannot be done by a good developer? Say a good developer has just average experience writing stored procs, analyzing query performance, creating (or choosing not to create, for performance reasons) indexes, constraints and triggers, configuring database access rights, setting up regular backups, regular maintenance (ex. rebuilding indexes to avoid fragmentation)... just to name a few.

I'm sure there are several responsibilities that DBAs have that developers would have very little to no experience in, but we're talking about making a passable DBA. Developers may not be as good at the job as someone who specializes in it for a living, but the author's wording seems to have been chosen very carefully.

SuperQ Mash • a year ago

Yup, I see lots of people trying to defend the DBA as a thing, just like people keep trying to defend the traditional sysadmin as a thing. I started my career as a sysadmin in the 90s, but times have changed and I don't call myself a sysadmin anymore, because that's not what I do.

Now I'm a Systems Engineer/SRE. My mode of working isn't slamming software together, but engineering automation to do it for me.

But I also do QA, Data storage performance analysis, networking, and [have a] deep knowledge of the applications I support.

[May 15, 2017] 10 Things I Hate About DevOps

Notable quotes:
"... The Emergence of the "DevOps' DevOp", a pseudo intellectual loudly spewing theories about distantly unrelated fields that are entirely irrelevant and are designed to make them feel more intelligent and myself more inadequate ..."
"... "The Copenhagen interpretation certainly applies to DevOps" ..."
"... "I'm modeling the relationship between Dev and Ops using quantum entanglement, with a focus on relative quantum superposition - it's the only way to look at it. Why aren't you?" ..."
"... Enterprise Architects. They used to talk about the "Enterprise Continuum". Now they talk about "The Delivery Continuum" or the "DevOps Continuum". ..."
May 15, 2017 | www.upguard.com
DevOps and I sort of have a love/hate relationship. DevOps is near and dear to our heart here at UpGuard and there are plenty of things that I love about it. Love it or hate it, there is little doubt that it is here to stay. I've enjoyed a great deal of success thanks to agile software development and DevOps methods, but here are 10 things I hate about DevOps!

#1 Everyone thinks it's about Automation.

#2 "True" DevOps apparently have no processes - because DevOps takes care of that.

#3 The Emergence of the "DevOps' DevOp", a pseudo intellectual loudly spewing theories about distantly unrelated fields that are entirely irrelevant and are designed to make them feel more intelligent and myself more inadequate:

"The Copenhagen interpretation certainly applies to DevOps"

"I'm modeling the relationship between Dev and Ops using quantum entanglement, with a focus on relative quantum superposition - it's the only way to look at it. Why aren't you?"

#4 Enterprise Architects. They used to talk about the "Enterprise Continuum". Now they talk about "The Delivery Continuum" or the "DevOps Continuum". How about talking about the business guys?

#5 Heroes abound with tragic statements like "It took 3 days to automate everything.. it's great now!" - Clearly these people have never worked in a serious enterprise.

#6 No one talks about automation failure... it's everywhere. Listen for the words "Pockets of Automation". Adoption of technology, education and adaptation of process are rarely mentioned (or measured).

#7 People constantly pointing to Etsy, Facebook & Netflix as DevOps. Let's promote the stories of companies that better represent the market at large.

#8 Tech hipsters discounting, or underestimating, Windows sysadmins. There are a lot of them and they better represent the Enterprise than many of the higher profile blowhards.

#9 The same hipsters saying their threads have filled up with DevOps tweets where there were none before.

#10 I've never heard of a Project Manager taking on DevOps. I intend on finding one.

What do you think - did I miss anything? Rants encouraged ;-) Please add your comments.

[May 15, 2017] Why I hate DevOps

Notable quotes:
"... DevOps. The latest software development fad. ..."
"... Continuous Delivery (CD), the act of small, frequent, releases was defined in detail by Jez Humble and Dave Farley in their book – Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation. The approach makes a lot of sense and encourages a number of healthy behaviors in a team. ..."
"... The problem is we now have teams saying they're doing DevOps. By that they mean is they make small, frequent, releases to production AND the developers are working closely with the Ops team to get things out to production and to keep them running. ..."
"... Well, the problem is the name. We now have a term "DevOps" to describe the entire build, test, release approach. The problem is when you call something DevOps anyone who doesn't identify themselves as a dev or as Ops automatically assumes they're not part of the process. ..."
May 15, 2017 | testingthemind.wordpress.com
DevOps. The latest software development fad. Now you can be Agile, use Continuous Delivery, and believe in DevOps.

Continuous Delivery (CD), the act of small, frequent, releases was defined in detail by Jez Humble and Dave Farley in their book – Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation. The approach makes a lot of sense and encourages a number of healthy behaviors in a team. For example, frequent releases more or less have to be small. Small releases are easier to understand, which in turn increases our chances of building good features, but also our chances of testing for the right risks. If you do run into problems during testing then it's pretty easy to work out the change that caused them, reducing the time to debug and fix issues.

Unfortunately, along with all the good parts of CD we have a slight problem. The book focused on the areas which were considered to be the most broken, and unfortunately that led to the original CD description implying "Done" meant the code was shipped to production. As anyone who has ever worked on software will know, running code in production also requires a fair bit of work.

So, teams started adopting CD but no one was talking about how the Ops team fitted into the release cycle. Everything from knowing when production systems were in trouble, to reliable release systems was just assumed to be fully functional, and unnecessary for explanation.

To try to plug the gap DevOps rose up.

Now, just to make things even more confusing: Dave Farley later said that not talking about Ops was an omission and that CD does include the entire development and release cycle, including running in production. So DevOps and CD have some overlap there.

DevOps does take a slightly different angle on the approach than CD. The emphasis for DevOps is on the collaboration rather than the process. Silos should be actively broken down to help developers understand systems well enough to be able to write good, robust and scalable code.

So far so good.

The problem is we now have teams saying they're doing DevOps. By that they mean they make small, frequent releases to production AND the developers are working closely with the Ops team to get things out to production and to keep them running.

Sounds good. So what's the problem?

Well, the problem is the name. We now have a term "DevOps" to describe the entire build, test, release approach. The problem is when you call something DevOps anyone who doesn't identify themselves as a dev or as Ops automatically assumes they're not part of the process.

Seriously, go and ask your designers what they think of DevOps. Or how about your testers. Or Product Managers. Or Customer Support.

And that's a problem.

We've managed to take something that is completely dependent on collaboration, and trust, and name it in a way that excludes a significant number of people. All of the name suggestions that arise when you mention this are just ridiculous. DevTestOps? BusinessDevTestOps? DesignDevOps? Aside from just being stupid names, these continue to exclude anyone who doesn't have these words in their title.

So do I hate DevOps? Well no, not the practice. I think we should always be thinking about how things will actually work in production. We need an Ops team to help us do that so it makes total sense to have them involved in the process. Just take care with that name.

Is there a solution? Well, in my mind we're still talking about collaboration above all else. Thinking about CD as "delivery on demand" also makes more sense to me. We, the whole team, should be ready to deliver working software to the customer when they want it. By being aware of the confusion and exclusion that some of these names create, we can hopefully bring everyone into the project before it's too late.

[May 15, 2017] Hype Cycle for DevOps, 2016

May 15, 2017 | www.gartner.com
Hype Cycle for DevOps, 2016

DevOps initiatives include a range of technologies and methodologies spanning the software delivery process. IT leaders and DevOps practitioners should proactively understand the readiness and capabilities of technology to identify the most appropriate choices for their specific DevOps initiative.


[May 15, 2017] The Phoenix Project (novel)

May 15, 2017 | en.wikipedia.org

The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win (2013) is the third book by Gene Kim. The business novel tells the story of an IT manager who has ninety days to rescue an over-budget and late IT initiative, code-named The Phoenix Project. The book was co-authored by Kevin Behr and George Spafford and published by IT Revolution Press in January 2013.[1][2]

Background

The novel is thought of as the modern day version of The Goal by Eliyahu M. Goldratt.[3] The novel describes the problems that almost every IT organization faces, and then shows the practices of how to solve the problems, improve the lives of those who work in IT and be recognized for helping the business win.[1] The goal of the book is to show that a truly collaborative approach between IT and business is possible.[4]

Synopsis

The novel tells the story of Bill, the IT manager at Parts Unlimited.[4][5][6] The company's new IT initiative, code named Phoenix Project, is critical to the future of Parts Unlimited, but the project is massively over budget and very late. The CEO wants Bill to report directly to him and fix the mess in ninety days or else Bill's entire department will be outsourced. With the help of a prospective board member and his mysterious philosophy of The Three Ways, Bill starts to see that IT work has more in common with manufacturing plant work than he ever imagined. With the clock ticking, Bill must organize work flow, streamline interdepartmental communications, and effectively serve the other business functions at Parts Unlimited.[7][8]

Reception

The book has been called a "must read" for IT professionals and quickly reached #1 in its Amazon.com categories.[9][10] The Phoenix Project was featured on 800 CEO Reads Top 25: What Corporate America Is Reading for June, 2013.[11] InfoQ stated, "This book will resonate at one point or another with anyone who's ever worked in IT."[4] Jeremiah Shirk, Integration & Infrastructure Manager at Kansas State University, said of the book: "Some books you give to friends, for the joy of sharing a great novel. Some books you recommend to your colleagues and employees, to create common ground. Some books you share with your boss, to plant the seeds of a big idea. The Phoenix Project is all three."[4] Other reviewers were more skeptical, including the IT Skeptic "Fictionalising allows you to paint an idealised picture, and yet make it seem real, plausible... Sorry but it is all too good to be true... none of the answers are about people or culture or behaviour. They're about tools and techniques and processes." [12] Jez Humble (author of Continuous Delivery) said "unlike real life, there aren't many experiments in the book that end up making things worse..."

[May 15, 2017] 8 DevOps Myths Debunked - DZone DevOps

May 15, 2017 | dzone.com

In a recent webinar, XebiaLabs VP of DevOps Strategy Andrew Phillips sat down with Atos Global Thought Leader in DevOps Dick van der Sar to separate the facts from the fiction. Their findings: most myths come attached with a small piece of fact and vice versa.

1. DevOps Is Developers Doing Operations: Myth

An integral part of DevOps' automation component involves a significant amount of code. This causes people to believe developers do most of the heavy lifting in the equation. In reality, because so much infrastructure is expressed as code, Ops begins to look a lot like Dev.

2. Projects Are Dead: Myth

Projects are an ongoing process of evolving systems and failures. To think they can just be handed off to maintenance forever after completion is simply incorrect. This is only true for tightly scoped software needs, including systems built for specific events. When you adopt DevOps and Agile, you are replacing traditional project-based approaches with a focus on product lifecycles.

3. DevOps Doesn't Work in Complex Environments: Myth

DevOps is actually made to thrive in complex environments. The only instance in which it doesn't work is when unrealistic and/or inappropriate goals are set for the enterprise. Complex environments typically suffer due to lack of communication about the state of, and changes to, the interconnected systems. DevOps, on the other hand, encourages communication and collaboration that prevent these issues from arising.

4. It's Hard to Sell DevOps to the Business: Myth

The benefits of DevOps are closely tied to benefiting the business. However, that's hard to believe when you pitch adopting DevOps as a plan to "stop working on features and sink a lot of your money into playing with shiny new IT tech." The truth is, DevOps is going to impact the entire enterprise. This may be the source of resistance, but as long as you find the balance between adoption and disruption, you will experience a successful transition.

5. Agile Is for Lazy Engineers: Myth

DevOps prides itself on eliminating unnecessary overhead. Through automation, your enterprise can see a reduction in documentation, meetings, and even manual tasks, giving team members more time to focus on more important priorities. You know your team is running successfully if their productivity increases.

Nonetheless, DevOps does not come without its own share of "boring" processes, including test plans and code audits. Agile may eliminate waste, but that doesn't extend to the tedious yet necessary aspects.

6. If You Can't Code, You Have No Chance in DevOps: Fact

This is only a fact because the automation side of DevOps is all Infrastructure as Code (IaC), which typically requires software development skills such as modularization, automated testing, and Continuous Integration (CI). Regardless of scale, automating anything will require, at the very least, software development skills.
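
As a rough illustration of what "development skills applied to infrastructure code" can mean in practice, here is a minimal pre-merge check script. It is only a sketch: the repository layout and the playbook name site.yml are hypothetical, and the tools shown (shellcheck, ansible-lint, ansible-playbook --syntax-check) are just common examples of linting and testing IaC the same way application code is linted and tested.

#!/bin/bash
# ci-check.sh - hypothetical pre-merge check for an infrastructure-as-code repo.
# Treat infrastructure code like any other software: lint it and test it before release.
set -euo pipefail

# Static analysis for every shell script in the repository.
find . -name '*.sh' -print0 | xargs -0 shellcheck

# Lint the Ansible playbook for common mistakes and style problems.
ansible-lint site.yml

# Verify that the playbook at least parses before it ever touches a server.
ansible-playbook --syntax-check site.yml

echo "All infrastructure code checks passed."

Wired into a CI job that runs on every commit, a script like this gives infrastructure changes the same automated gate that application code normally gets.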

7. Managers Disappear: Myth

Rather than disappear, managers take a different role with DevOps. In fact, they are still a necessity to the team. Managers are tasked with the responsibility of keeping the entire DevOps team on track. Classic management tasks may seem to disappear but only because the role is changing to be more focused on empowerment.

8. DevOps or Die: Fact!

Many of today's market leaders already have some sort of advanced DevOps structure in place. As industries incorporate IT further into their business, we will begin to see DevOps as a basic necessity to the modern business and those that can't adapt will simply fall behind.

That being said, you shouldn't think of DevOps as the magic invincibility potion that will keep your enterprise failure free. Rather, DevOps can prevent many types of failure, but there will always be environment specific threats unique to every organization that DevOps can't rescue you from.

[May 15, 2017] DevOps Fact vs Fiction

May 15, 2017 | vassit.co.uk
Out of these misunderstandings several common myths have been created. Acceptance of these myths misleads business further.

Here are some of the most common myths and the facts that debunk them.

Myth 1: DevOps needs agile.

Although DevOps and agile are terms frequently used together, they are a long way away from being synonymous with one another. Agile development refers to a method of software delivery that builds software incrementally, whereas DevOps refers not only to a method of delivery but to a culture, which when adopted, results in many business benefits , including faster software delivery.

DevOps processes can help to complement agile development, but DevOps is not reliant on agile and can support a range of operating models.

For optimum results, full adoption of the DevOps philosophy is necessary.

Myth 2: DevOps can't work with legacy.

DevOps is often regarded as a modern concept that helps forward-thinking businesses innovate. Although this is true, it can also help those organisations with long-established, standard IT practices. In fact, with legacy applications there are usually big advantages to DevOps adoption.

Managing the care of legacy systems while bringing new software to market quickly - blending stability and agility - is a frequently encountered problem in this new era of digital transformation. Bi-modal IT is an approach where Mode 1 refers to legacy systems focussed on stability, and Mode 2 refers to agile IT focussed on rapid application delivery. DevOps principles are often included exclusively within Mode 2, but automation and collaboration can also be used with success within Mode 1 to increase delivery speed whilst ensuring stability.

Myth 3: DevOps is only for continuous delivery.

DevOps doesn't (necessarily) imply continuous delivery. The aim of a DevOps culture is to increase the delivery frequency of an organisation, often from quarterly/monthly to daily releases or more, and improve their ability to respond to changes in the market.

While continuous delivery relies heavily on automation and is aimed at agile and lean thinking organisations, unlike DevOps it is not reliant on a shared culture which enhances collaboration. Gartner summed up the distinction with a report that stated that: "DevOps is not a market, but a tool-centric philosophy that supports a continuous delivery value chain."

Myth 4: DevOps requires new tools.

As with the implementation of any new concept or idea, a common misconception about DevOps adoption is that new toolsets, and skills are required. Though the provision of appropriate and relevant tools can aid adoption, organisations are by no means required to replace tools and processes they use to produce software.

DevOps enables organisations to deliver new capabilities more easily, and bring new software into production more rapidly in order to respond to market changes. It is not strictly reliant on new tools to get this job done.

Myth 5: DevOps is a skill.

The rapid growth of the DevOps movement has resulted in huge demand for professionals who are skilled within the methodology. However, this fact is often misconstrued to suggest that DevOps is itself a skill – this is not the case.

DevOps is a culture – one that needs to be fully adopted throughout an entire organisation for optimum results, and one that is best supported with appropriate and relevant tools.

Myth 6: DevOps is software.

Understanding that DevOps adoption can be better facilitated with software is important, however, maybe more so is understanding that they are not one and the same. Although it is true that there is a significant amount of DevOps software available on the market today, purchasing a specific ad-hoc DevOps product, or even suite of products, will not make your business 'DevOps'.

The DevOps methodology is the communication, collaboration and automation of your development and operations functions, and as described above, is required to be adopted by an entire organisation to achieve optimum results. The software and tools available will undoubtedly reduce the strain of adoption on your business but conscious adoption is required for your business to fully reach the potential that DevOps offers.

Conclusion

Like any new and popular term, people have somewhat confused and sometimes contradictory or partial impressions of what DevOps is and how it works.

DevOps is a philosophy which enables businesses to automate their processes and work more collaboratively to achieve a common goal and deliver software more rapidly.


[May 08, 2017] Is the Silicon Valley Dynasty Coming to an End Vanity Fair

Notable quotes:
"... In just the past month, the Valley has seemed like it's happily living in some sort of sadomasochistic bubble worthy of a bad Hollywood satire. ..."
Apr 27, 2017 | www.vanityfair.com

It has been said that Silicon Valley, or the 50 or so square-mile area extending from San Francisco to the base of the peninsula, has overseen the creation of more wealth than any place in the history of mankind. It's made people richer than the oil industry; it has created more money than the Gold Rush. Silicon chips, lines of code, and rectangular screens have even minted more wealth than religious wars.

Wealthy societies, indeed, have their own complicated incentive structures and mores. But they do often tend, as any technological entrepreneur will be quick to remind you, to distribute value across numerous income levels, in a scaled capacity. The Ford line, for instance, may have eventually minted some serious millionaires in Detroit, but it also made transportation cheaper, helped drive down prices on countless consumer goods, and facilitated new trade routes and commercial opportunities. Smartphones, or any number of inventive modern apps or other software products, are no different. Sure, they throw off a lot of money to the geniuses who came up with them, and the people who got in at the ground floor. But they also make possible innumerable other opportunities, financial and otherwise, for their millions of consumers.

Silicon Valley is, in its own right, a dynasty. Instead of warriors or military heroes, it has nerds and people in half-zip sweaters. But it is becoming increasingly likely that the Valley might go down in history not only for its wealth, but also for creating more tone deaf people than any other ecosystem in the history of the world.

In just the past month, the Valley has seemed like it's happily living in some sort of sadomasochistic bubble worthy of a bad Hollywood satire. Uber has endured a slate of scandals that would have seriously wounded a less culturally popular company (or a public one, for that matter). There was one former employee's allegation of sexual harassment (which the company reportedly investigated); a report of driver manipulation; an unpleasant video depicting C.E.O. Travis Kalanick furiously berating an Uber driver; a story about secret software that could subvert regulators; a report of cocaine use and groping at holiday parties (an offending manager was fired within hours of the scandal); a lawsuit for potentially buying stolen software from a competitor; more groping; a slew of corporate exits; and a driverless car crash. (The shit will really hit the fan if it turns out that Uber's self-driving technology was misappropriated from Alphabet's Waymo; Uber has called the lawsuit "baseless.")

Then there was Facebook, which held its developer conference while the Facebook Killer was on the loose. As Mat Honan of BuzzFeed put it so eloquently: "People used to talk about Steve Jobs and Apple's reality distortion field. But Facebook, it sometimes feels, exists in a reality hole. The company doesn't distort reality - but it often seems to lack the ability to recognize it."

And we ended the week with the ultimate tone-deaf statement from the C.E.O. of Juicero, the maker of a $700 (soon reduced to $400) juicer that has $120 million in venture backing. After Bloomberg News discovered that you didn't even need the juicer to make juice (there are, apparently, these things called hands), the company's chief executive, Jeff Dunn, offered a response on Medium insinuating that he gets up every day to make the world a better place.

Of course, not everyone who makes the pilgrimage out West is, or becomes, a jerk. Some people arrive in the Valley with a philosophy of how to act as an adult. But here's the problem with that group: most of them don't vociferously articulate how unsettled they are by the bad actors. Even when journalists manage to cover these atrocious activities, the powers of Silicon Valley try to ridicule them, often in public. Take, for example, the 2015 TechCrunch Disrupt conference, when a reporter asked billionaire investor Vinod Khosla - who evidently believes that public beaches should belong to rich people - about some of the ethical controversy surrounding the mayonnaise-disruption startup Hampton Creek (I can't believe I just wrote the words "mayonnaise-disruption"). Khosla responded with a trite and rude retort that the company was fine. When the reporter pressed Khosla, he shut him down by saying, "I know a lot more about how they're doing, excuse me, than you do." A year later, the Justice Department opened a criminal investigation into whether the company defrauded investors when employees secretly purchased the company's own mayonnaise from grocery stores. (The Justice Department has since dropped its investigation.)

When you zoom out of that 50-square-mile area of Silicon Valley, it becomes obvious that big businesses can get shamed into doing the right thing. When it was discovered that Volkswagen lied about emissions outputs, the company's C.E.O. was forced to resign. The same was true for the chief of Wells Fargo, who was embroiled in a financial scandal. In the wake of its recent public scandal, United knocked its C.E.O. down a peg. Even Fox News, one of the most bizarrely unrepentant media outlets in America, pushed out two of the most important people at the network over allegations of sexual harassment. (Bill O'Reilly has said that claims against him are "unfounded"; Roger Ailes has vociferously denied allegations of sexual harassment.) Even Wall Street can (sometimes) be forced to be more ethical. Yet Elizabeth Holmes is still C.E.O. of Theranos. Travis Kalanick is still going to make billions of dollars as the chief of Uber when the company eventually goes public. The list goes on and on.

In many respects, this is simply the D.N.A. of Silicon Valley. The tech bubble of the mid-90s was inflated by lies that sent the NASDAQ on a vertiginous downward spike that eviscerated the life savings of thousands of retirees and Americans who believed in the hype. This time around, it seems that some of these businesses may be real, but the people running them are still just as tone deaf about how their actions affect other people. Silicon Valley has indeed created some amazing things. One can only hope these people don't erase it with their hubris.

E-commerce start-up Fab was once valued at $900 million, a near unicorn in Silicon Valley terms. But after allegedly burning through $200 million of its $336 million in venture capital, C.E.O. Jason Goldberg was forced to shutter its European arm and lay off two-thirds of its staff.

Fired in 2014 from his ad-tech firm RadiumOne following a domestic-violence conviction, Gurbaksh Chahal founded a new company to compete with the one he was kicked out of. But Gravity4, his new firm, was sued for gender discrimination in 2015, though that case is still pending, and former employees have contemplated legal action against him.

[May 07, 2017] centos - Do not play those dangerous games with resizing of partitions unless absolutely necessary

Copying everything to an additional drive (it can be USB), repartitioning, and then copying everything back is a safer bet
www.softpanorama.org

In theory, you could reduce the size of sda1, increase the size of the extended partition, shift the contents of the extended partition down, then increase the size of the PV on the extended partition and you'd have the extra room. However, the number of possible things that can go wrong there is just astronomical, so I'd recommend either buying a second hard drive (and possibly transferring everything onto it in a more sensible layout, then repartitioning your current drive better) or just making some bind mounts of various bits and pieces out of /home into / to free up a bit more space.

--womble
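
One way to read that advice, as a minimal sketch: pick a large directory that is filling the root filesystem, relocate it to the filesystem that has room, and bind-mount it back in place. The path /var/cache below is only an example; use whatever is actually eating the space, and stop any services using that directory before moving it.

# Move a big directory off the full root filesystem onto /home,
# then bind-mount it back so paths do not change for anything else.
mkdir -p /home/relocated
mv /var/cache /home/relocated/cache
mkdir /var/cache
mount --bind /home/relocated/cache /var/cache

# To make the bind mount survive reboots, add a line like this to /etc/fstab:
# /home/relocated/cache  /var/cache  none  bind  0  0

This avoids touching the partition table at all, which is exactly the point of the warning above.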

[May 05, 2017] As Unix does not have a rename command, using mv for renaming can lead to a SNAFU

www.softpanorama.org

If the destination does not exist, mv behaves like a rename command; but if the destination exists and is a directory, mv moves the source inside that directory.

For example, if you have directories /home and /home2, want to move all subdirectories from /home2 to /home, and the directory /home already exists (even if empty), you can't simply use

mv home2 home

If you forget to remove the directory /home first, mv will silently create the directory /home/home2, and you have a problem if these are user home directories.
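
A quick demonstration of both behaviours, using throwaway directories under /tmp (the names are hypothetical) so nothing real is at risk:

# Destination does not exist: mv acts as a rename.
mkdir /tmp/home2
mv /tmp/home2 /tmp/home        # /tmp/home2 is now /tmp/home

# Destination exists and is a directory: mv moves the source inside it.
mkdir /tmp/home2
mv /tmp/home2 /tmp/home        # creates /tmp/home/home2 -- usually not what you wanted

# To merge the contents instead, move the children rather than the directory itself:
# mv /tmp/home2/* /tmp/home/

Note that the last form skips dot-files unless the shell is told to match them, so check what is left in the source directory afterwards.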

[May 05, 2017] The key problem with the cp utility is that by default it does not preserve file timestamps.

Windows users expect a copy command to preserve file attributes, but this is not true for the Unix cp command.
Using the -r option without the -p option destroys all timestamps.

-p -- Preserve the characteristics of the source_file. Copy the contents, modification times, and permission modes of the source_file to the destination files.

You might wish to create an alias

alias cp='cp -p'

as I cannot imagine a case where the regular Unix behaviour is desirable.
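A minimal illustration of the difference (file and directory names here are made up):

cp file.conf /backup/            # default: the copy gets a fresh modification time
cp -p file.conf /backup/         # -p keeps mode, ownership (when permitted) and timestamps
cp -a /etc/httpd /backup/httpd   # -a (archive) does the same recursively and preserves symlinks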

[May 05, 2017] William Binney - The Government is Profiling You (The NSA is Spying on You)

A very interesting discussion of how the project of mass surveillance of internet traffic started and what the major challenges were. That is probably where the idea of collecting "envelopes" and correlating them to create a social network came from, similar to what was done with letters during the Civil War.
The idea of applying the same approach to the medical establishment in order to prevent Medicare fraud is also very interesting.
Notable quotes:
"... I suspect that it's hopelessly unlikely for honest people to complete the Police Academy; somewhere early on the good cops are weeded out and cannot complete training unless they compromise their integrity. ..."
"... 500 Years of History Shows that Mass Spying Is Always Aimed at Crushing Dissent It's Never to Protect Us From Bad Guys No matter which government conducts mass surveillance, they also do it to crush dissent, and then give a false rationale for why they're doing it. ..."
"... People are so worried about NSA don't be fooled that private companies are doing the same thing. ..."
"... In communism the people learned quick they were being watched. The reaction was not to go to protest. ..."
"... Just not be productive and work the system and not listen to their crap. this is all that was required to bring them down. watching people, arresting does not do shit for their cause ..."
Apr 20, 2017 | www.youtube.com
Chad 2 years ago

"People who believe in these rights very much are forced into compromising their integrity"

I suspect that it's hopelessly unlikely for honest people to complete the Police Academy; somewhere early on the good cops are weeded out and cannot complete training unless they compromise their integrity.

Agent76 1 year ago (edited)
January 9, 2014

500 Years of History Shows that Mass Spying Is Always Aimed at Crushing Dissent It's Never to Protect Us From Bad Guys No matter which government conducts mass surveillance, they also do it to crush dissent, and then give a false rationale for why they're doing it.

http://www.washingtonsblog.com/2014/01/government-spying-citizens-always-focuses-crushing-dissent-keeping-us-safe.html

Homa Monfared 7 months ago

I am wondering how much damage your spying did to the Foreign Countries, I am wondering how you changed regimes around the world, how many refugees you helped to create around the world.

Don Kantner, 2 weeks ago

People are so worried about NSA don't be fooled that private companies are doing the same thing. Plus, the truth is if the NSA wasn't watching any fool with a computer could potentially cause an worldwide economic crisis.

Bettor in Vegas 1 year ago

In communism the people learned quick they were being watched. The reaction was not to go to protest.

Just not be productive and work the system and not listen to their crap. this is all that was required to bring them down. watching people, arresting does not do shit for their cause......

[Apr 26, 2017] ShellCheck - A Tool That Shows Warnings and Suggestions for Shell Scripts

Apr 26, 2017 | www.tecmint.com

by Aaron Kili | Published: April 24, 2017

ShellCheck is a static analysis tool that shows warnings and suggestions concerning bad code in bash/sh shell scripts. It can be used in several ways. From the web, you can paste your shell script into the online editor (Ace, a standalone code editor written in JavaScript) at https://www.shellcheck.net and get instant feedback; the site is always synchronized to the latest git commit and is the simplest way to give ShellCheck a go.

Alternatively, you can install it on your machine and run it from the terminal, integrate it with your text editor as well as in your build or test suites.

There are three things ShellCheck does primarily:

  • It points out and explains typical beginner's syntax issues that cause a shell to give cryptic error messages.
  • It points out and explains typical intermediate level semantic problems that cause a shell to behave strangely and counter-intuitively.
  • It also points out subtle caveats, corner cases and pitfalls that may cause an advanced user's otherwise working script to fail under future circumstances.

In this article, we will show how to install and use ShellCheck in the various ways to find bugs or bad code in your shell scripts in Linux.

How to Install and Use ShellCheck in Linux

ShellCheck can be easily installed locally through your package manager as shown.

On Debian/Ubuntu
# apt-get install shellcheck
On RHEL/CentOS
# yum -y install epel-release
# yum install ShellCheck
On Fedora
# dnf install ShellCheck
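After installation, a quick sanity check from the terminal confirms the package is on your path (the reported version will of course differ from system to system):

$ shellcheck --version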

Once ShellCheck is installed, let's take a look at how to use it in the various ways we mentioned before.

Using ShellCheck From the Web

Go to https://www.shellcheck.net and paste your script into the Ace editor provided; you will see the output at the bottom of the editor, as shown in the screenshot below.

In the following example, the test shell script consists of the following lines:

#!/bin/bash
#declare variables
MINARGS=2
E_NOTROOT=50
E_MINARGS=100
#echo values of variables 
echo $MINARGS
echo $E_NONROOT
exit 0;
ShellCheck – Online Shell Script Analysis Tool (screenshot)

From the screenshot above, two of the variables, E_NOTROOT and E_MINARGS, have been declared but are unused; ShellCheck reports these as "suggestive errors":

SC2034: E_NOTROOT appears unused. Verify it or export it.
SC2034: E_MINARGS appears unused. Verify it or export it. 

Secondly, the wrong name was used to echo the variable E_NOTROOT (the statement says echo $E_NONROOT), which is why ShellCheck shows the error:

SC2153: Possible misspelling: E_NONROOT may not be assigned, but E_NOTROOT is

Finally, when you look at the echo commands, the variables have not been double quoted (quoting helps to prevent globbing and word splitting), so ShellCheck shows the warning:

SC2086: Double quote to prevent globbing and word splitting.
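For comparison, here is a cleaned-up version of the same test script that should silence all three diagnostics (a minimal sketch that keeps the original variable values):

#!/bin/bash
# declare variables
MINARGS=2
E_NOTROOT=50
E_MINARGS=100
# echo the values of the variables: correct names, double quoted to prevent
# globbing and word splitting, so every declared variable is now actually used
echo "$MINARGS"
echo "$E_NOTROOT"
echo "$E_MINARGS"
exit 0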
Using ShellCheck From the Terminal

You can also run ShellCheck from the command line; we'll use the same shell script as above:

$ shellcheck test.sh
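ShellCheck also has command-line options that are handy for scripting and editor integration; the flags below are taken from the versions I have used, so verify them against shellcheck --help on your system:

$ shellcheck -f gcc test.sh      # terse file:line:col output that editors and CI tools can parse
$ shellcheck -e SC2034 test.sh   # exclude a particular check by its SC code
$ shellcheck -s bash test.sh     # force the shell dialect if the shebang is missing or ambiguous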
ShellCheck – Checks Bad Code in Shell Scripts (screenshot)

Using ShellCheck From the Text Editor

You can also view ShellCheck suggestions and warnings directly in a variety of editors; this is probably a more efficient way of using ShellCheck, since once you save a file, it shows you any errors in the code.

In Vim, use ALE or Syntastic (we will use the latter):

Start by installing Pathogen so that it's easy to install syntastic. Run the commands below to get the pathogen.vim file and the directories it needs:

# mkdir -p ~/.vim/autoload ~/.vim/bundle && curl -LSso ~/.vim/autoload/pathogen.vim https://tpo.pe/pathogen.vim

Then add this to your ~/.vimrc file:

execute pathogen#infect()

Once you have installed Pathogen, you can put Syntastic into ~/.vim/bundle as follows:

# cd ~/.vim/bundle && git clone --depth=1 https://github.com/vim-syntastic/syntastic.git

Next, close vim and start it back up to reload it, then type the command below:

:Helptags

If all goes well, you should have ShellCheck integrated with Vim; the following screenshots show how it works using the same script as above.
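Depending on your Vim configuration you may also need to tell Syntastic explicitly which checker to use for shell scripts; the option names below are assumptions on my part and should be double-checked against the Syntastic documentation:

" ~/.vimrc (assumed Syntastic settings)
let g:syntastic_sh_checkers = ['shellcheck']
let g:syntastic_check_on_open = 1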

Check Bad Shell Script Code in Vim (screenshot)

In case you get an error after following the steps above, you possibly didn't install Pathogen correctly. Redo the steps, but this time ensure that you did the following:

  • Created both the ~/.vim/autoload and ~/.vim/bundle directories.
  • Added the execute pathogen#infect() line to your ~/.vimrc file.
  • Did the git clone of syntastic inside ~/.vim/bundle .
  • Use appropriate permissions to access all of the above directories.

You can also use other editors to check bad code in shell scripts like:

  • In Emacs , use Flycheck .
  • In Sublime , employ SublimeLinter.
  • In Atom , make use of Linter.
  • In most other editors, use GCC error compatibility.

Note : Use the gallery of bad code to carry out more ShellChecking.

ShellCheck Github Repository: https://github.com/koalaman/shellcheck

That's it! In this article, we showed how to install and use ShellCheck to find bugs or bad code in your shell scripts in Linux. Share your thoughts with us via the comment section below.

Do you know of any other similar tools out there? If yes, then share info about them in the comments as well.



[Apr 19, 2017] Paul Krugman Gets Retail Wrong: They are Not Very Good Jobs

Apr 19, 2017 | economistsview.typepad.com
anne , April 17, 2017 at 05:55 AM
http://cepr.net/blogs/beat-the-press/paul-krugman-gets-retail-wrong-they-are-not-very-good-jobs

April 17, 2017

Paul Krugman Gets Retail Wrong: They are Not Very Good Jobs

Paul Krugman used his column * this morning to ask why we don't pay as much attention to the loss of jobs in retail as we do to jobs lost in mining and manufacturing. His answer is that in large part the former jobs tend to be more white and male than the latter. While this is true, although African Americans have historically been over-represented in manufacturing, there is another simpler explanation: retail jobs tend to not be very good jobs.

The basic story is that jobs in mining and manufacturing tend to offer higher pay and are far more likely to come with health care and pension benefits than retail jobs. A worker who loses a job in these sectors is unlikely to find a comparable job elsewhere. In retail, the odds are that a person who loses a job will be able to find one with similar pay and benefits.

A quick look at average weekly wages ** can make this point. In mining the average weekly wage is $1,450, in manufacturing it is $1,070, by comparison in retail it is just $555. It is worth mentioning that much of this difference is in hours worked, not the hourly pay. There is nothing wrong with working shorter workweeks (in fact, I think it is a very good idea), but for those who need a 40 hour plus workweek to make ends meet, a 30-hour a week job will not fit the bill.

This difference in job quality is apparent in the difference in separation rates by industry. (This is the percentage of workers who lose or leave their job every month.) It was 2.4 percent for the most recent month in manufacturing. By comparison, it was 4.7 percent in retail, almost twice as high. (It was 5.2 percent in mining and logging. My guess is that this is driven by logging, but I will leave that one for folks who know the industry better.)

Anyhow, it shouldn't be a mystery that we tend to be more concerned about the loss of good jobs than the loss of jobs that are not very good. If we want to ask a deeper question, as to why retail jobs are not very good, then the demographics almost certainly play a big role.

Since only a small segment of the workforce is going to be employed in manufacturing regardless of what we do on trade (even the Baker dream policy will add at most 2 million jobs), we should be focused on making retail and other service sector jobs good jobs. The full agenda for making this transformation is a long one (higher minimum wages and unions would be a big part of the picture, along with universal health care insurance and a national pension system), but there is one immediate item on the agenda.

All right minded people should be yelling about the Federal Reserve Board's interest rate hikes. The point of these hikes is to slow the economy and reduce the rate of job creation. The Fed's concern is that the labor market is getting too tight. In a tighter labor market workers, especially those at the bottom of the pecking order, are able to get larger wage increases. The Fed is ostensibly worried that this can lead to higher inflation, which can get us to a wage price spiral like we saw in the 70s.

As I and others have argued, *** there is little basis for thinking that we are anywhere close to a 1970s type inflation, with inflation consistently running below the Fed's 2.0 percent target, (which many of us think is too low anyhow). I'd love to see Krugman pushing the cause of full employment here. We should call out racism and sexism where we see it, but this is a case where there is a concrete policy that can do something to address it. Come on Paul, we need your voice.

* https://www.nytimes.com/2017/04/17/opinion/why-dont-all-jobs-matter.html

** https://www.bls.gov/news.release/empsit.t19.htm

*** http://cepr.net/blogs/beat-the-press/overall-and-core-cpi-fall-in-march

-- Dean Baker

Fred C. Dobbs -> anne... , April 17, 2017 at 06:17 AM
PK: Consider what has happened to department stores. Even as Mr. Trump was boasting about saving a few hundred jobs in manufacturing here and there, Macy's announced plans to close 68 stores and lay off 10,000 workers. Sears, another iconic institution, has expressed "substantial doubt" about its ability to stay in business.

Overall, department stores employ a third fewer people now than they did in 2001. That's half a million traditional jobs gone - about eighteen times as many jobs as were lost in coal mining over the same period.

And retailing isn't the only service industry that has been hit hard by changing technology. Another prime example is newspaper publishing, where employment has declined by 270,000, almost two-thirds of the work force, since 2000. ...

(To those that had them, they were probably
pretty decent jobs, albeit much less 'gritty'
than mining or manufacturing.)

BenIsNotYoda -> anne... , April 17, 2017 at 06:42 AM
Dean is correct. Krugman just wants to play the racism card or tell people those who wish their communities were gutted that they are stupid.
JohnH -> BenIsNotYoda... , April 17, 2017 at 06:48 AM
Elite experts are totally flummoxed...how can they pontificate solutions when they are clueless?

Roger Cohen had a very long piece about France and it discontents in the Times Sunday Review yesterday. He could not make heads or tails of the problem. Not worth the read.
https://www.nytimes.com/2017/04/14/opinion/sunday/france-in-the-end-of-days.html?rref=collection%2Fcolumn%2Froger-cohen&action=click&contentCollection=opinion&region=stream&module=stream_unit&version=latest&contentPlacement=2&pgtype=collection&_r=0

And experts wonder why nobody listens to them any more? Priceless!!!

BenIsNotYoda -> JohnH... , April 17, 2017 at 07:34 AM
clueless experts/academics. well said.
paine -> anne... , April 17, 2017 at 08:27 AM
Exactly dean
Tom aka Rusty -> anne... , April 17, 2017 at 07:39 AM
Krugman is an arrogant elitist who thinks people who disagree with him tend to be ignorant yahoos.

Sort of a Larry Summers with a little better manners.

anne -> Tom aka Rusty... , April 17, 2017 at 08:18 AM
Krugman is an arrogant elitist who thinks people who disagree with him tend to be ignorant yahoos.

[ This is a harsh but fair criticism, and even the apology of Paul Krugman was conditional and showed no thought to the other workers insulted. ]

cm -> Tom aka Rusty... , April 17, 2017 at 08:11 AM
There is a lot of elitism to go around. People will be much more reluctant to express publicly the same as in private (or pseudonymously on the internet?). But looking down on other people and their work is pretty widespread (and in either case there is a lot of assumption about the nature of the work and the personal attributes of the people doing it - usually of a derogatory type in both cases).

I find it plausible that Krugman was referring those widespread stereotypes about job categories that (traditionally?) have not required a college degree, or have been relatively at the low end of the esteem scale in a given industry (e.g. in "tech" and manufacturing, QA/testing related work).

It must be possible to comment on such stereotypes, but there is of course always the risk of being thought to hold them oneself, or indeed being complicit in perpetuating them.

As a thought experiment, I suggest reviewing what you yourself think about occupations not held by yourself, good friends, and family members and acquaintainces you like/respect (these qualifications are deliberate). For example, you seem to think not very highly of maids.

Of course, being an RN requires significantly more training than being a maid, and not just once when you start in your career. But at some level of abstraction, anybody who does work where their autonomy is quite limited (i.e. they are not setting objectives at any level of the organization) is "just a worker". That's the very stereotype we are discussing, isn't it?

anne -> cm... , April 17, 2017 at 08:26 AM
Nicely explained.
paine -> anne... , April 17, 2017 at 08:40 AM
Yes
anne -> Tom aka Rusty... , April 17, 2017 at 08:24 AM
Krugman thinks nurses are the equivalent of maids...

[ The problem is that Paul Krugman dismissed the work of nurses and maids and gardeners as "menial." I find no evidence that Krugman understands that even after conditionally apologizing to nurses. ]

paine -> anne... , April 17, 2017 at 08:42 AM
Even if there are millions of mcjobs
out there
none are filled by mcpeople

[Apr 18, 2017] Learning to Love Intelligent Machines

Notable quotes:
"... Learning to Love Intelligent Machines ..."
Apr 18, 2017 | www.nakedcapitalism.com
MoiAussie , April 17, 2017 at 9:04 am

If anyone is struggling to access Learning to Love Intelligent Machines (WSJ), you can get to it by clicking though this post . YMMV.

MyLessThanPrimeBeef , April 17, 2017 at 11:26 am

Also, don't forget to Learn from your Love Machines.

Artificial Love + Artificial Intelligence = Artificial Utopia.

[Apr 17, 2017] How many articles have I read that state as fact that the problem is REALLY automation?

Notable quotes:
"... It isn't. It's the world's biggest, most advanced cloud-computing company with an online retail storefront stuck between you and it. In 2005-2006 it was already selling supercomputing capability for cents on the dollar - way ahead of Google and Microsoft and IBM. ..."
"... Do you really think the internet created Amazon, Snapchat, Facebook, etc? No, the internet was just a tool to be used. The people who created those businesses would have used any tool they had access to at the time because their original goal was not automation or innovation, it was only to get rich. ..."
"... "Disruptive parasitic intermediation" is superb, thanks. The entire phrase should appear automatically whenever "disruption"/"disruptive" or "innovation"/"innovative" is used in a laudatory sense. ..."
"... >that people have a much bigger aversion to loss than gain. ..."
"... As the rich became uber rich, they hid the money in tax havens. As for globalization, this has less to do these days with technological innovation and more to do with economic exploitation. ..."
Apr 17, 2017 | www.nakedcapitalism.com
Carla , April 17, 2017 at 9:25 am

"how many articles have I read that state as fact that the problem is REALLY automation?

NO, the real problem is that the plutocrats control the policies "

+1

justanotherprogressive , April 17, 2017 at 11:45 am

+100 to your comment. There is a decided attempt by the plutocrats to get us to focus our anger on automation and not the people, like they themselves, who control the automation ..

MoiAussie , April 17, 2017 at 12:10 pm

Plutocrats control much automation, but so do thousands of wannabe plutocrats whose expertise lets them come from nowhere to billionairehood in a few short years by using it to create some novel, disruptive parasitic intermediation that makes their fortune. The "sharing economy" relies on automation. As does Amazon, Snapchat, Facebook, Dropbox, Pinterest,

It's not a stretch to say that automation creates new plutocrats . So blame the individuals, or blame the phenomenon, or both, whatever works for you.

Carolinian , April 17, 2017 at 12:23 pm

So John D. Rockefeller and Andrew Carnegie weren't plutocrats–or were somehow better plutocrats?

Blame not individuals or phenomena but society and the public and elites who shape it. Our social structure is also a kind of machine and perhaps the most imperfectly designed of all of them. My own view is that the people who fear machines are the people who don't like or understand machines. Tools, and the use of them, are an essential part of being human.

MoiAussie , April 17, 2017 at 9:21 pm

Huh? If I wrote "careless campers create forest fires", would you actually think I meant "careless campers create all forest fires"?

Carolinian , April 17, 2017 at 10:23 pm

I'm replying to your upthread comment which seems to say today's careless campers and the technology they rely on are somehow different from those other figures we know so well from history. In fact all technology is tremendously disruptive but somehow things have a way of sorting themselves out. So–just to repeat–the thing is not to "blame" the individuals or the automation but to get to work on the sorting. People like Jeff Bezos with his very flaky business model could be little more than a blip.

a different chris , April 17, 2017 at 12:24 pm

>Amazon, Snapchat, Facebook, Dropbox, Pinterest

Automation? Those companies? I guess Amazon automates ordering not exactly R. Daneel Olivaw for sure. If some poor Asian girl doesn't make the boots or some Agri giant doesn't make the flour Amazon isn't sending you nothin', and the other companies are even more useless.

Mark P. , April 17, 2017 at 2:45 pm

'Automation? Those companies? I guess Amazon automates ordering not exactly R. Daneel Olivaw for sure.'

Um. Amazon is highly deceptive, in that most people think it's a giant online retail store.

It isn't. It's the world's biggest, most advanced cloud-computing company with an online retail storefront stuck between you and it. In 2005-2006 it was already selling supercomputing capability for cents on the dollar - way ahead of Google and Microsoft and IBM.

justanotherprogressive , April 17, 2017 at 12:32 pm

Do you really think the internet created Amazon, Snapchat, Facebook, etc? No, the internet was just a tool to be used. The people who created those businesses would have used any tool they had access to at the time because their original goal was not automation or innovation, it was only to get rich.

Let me remind you of Thomas Edison. If he would have lived 100 years later, he would have used computers instead of electricity to make his fortune. (In contrast, Nikolai Tesla/George Westinghouse used electricity to be innovative, NOT to get rich ). It isn't the tool that is used, it is the mindset of the people who use the tool

clinical wasteman , April 17, 2017 at 2:30 pm

"Disruptive parasitic intermediation" is superb, thanks. The entire phrase should appear automatically whenever "disruption"/"disruptive" or "innovation"/"innovative" is used in a laudatory sense.

100% agreement with your first point in this thread, too. That short comment should stand as a sort of epigraph/reference for all future discussion of these things.

No disagreement on the point about actual and wannabe plutocrats either, but perhaps it's worth emphasising that it's not just a matter of a few successful (and many failed) personal get-rich-quick schemes, real as those are: the potential of 'universal machines' tends to be released in the form of parasitic intermediation because, for the time being at least, it's released into a world subject to the 'demands' of capital, and at a (decades-long) moment of crisis for the traditional model of capital accumulation. 'Universal' potential is set free to seek rents and maybe to do a bit of police work on the side, if the two can even be separated.

The writer of this article from 2010 [ http://www.metamute.org/editorial/articles/artificial-scarcity-world-overproduction-escape-isnt ] surely wouldn't want it to be taken as conclusive, but it's a good example of one marginal train of serious thought about all of the above. See also 'On Africa and Self-Reproducing Automata' written by George Caffentzis 20 years or so earlier [https://libcom.org/library/george-caffentzis-letters-blood-fire]; apologies for link to entire (free, downloadable) book, but my crumbling print copy of the single essay stubbornly resists uploading.

DH , April 17, 2017 at 9:48 am

Unfortunately, the healthcare insurance debate has been simply a battle between competing ideologies. I don't think Americans understand the key role that universal healthcare coverage plays in creating resilient economies.

Before penicillin, heart surgeries, cancer cures, modern obstetrics etc. that it didn't matter if you are rich or poor if you got sick. There was a good chance you would die in either case which was a key reason that the average life span was short.

In the mid-20th century that began to change so now lifespan is as much about income as anything else. It is well known that people have a much bigger aversion to loss than gain. So if you currently have healthcare insurance through a job, then you don't want to lose it by taking a risk to do something where you are no longer covered.

People are moving less to find work – why would you uproot your family to work for a company that is just as likely to lay you off in two years in a place you have no roots? People are less likely to day to quit jobs to start a new business – that is a big gamble today because you not only have to keep the roof over your head and put food on the table, but you also have to cover an even bigger cost of healthcare insurance in the individual market or you have a much greater risk of not making it to your 65th birthday.

In countries like Canada, healthcare coverage is barely a discussion point if somebody is looking to move, change jobs, or start a small business.

If I had a choice today between universal basic income vs universal healthcare coverage, I would choose the healthcare coverage form a societal standpoint. That is simply insuring a risk and can allow people much greater freedom during the working lives. Similarly, Social Security is of similar importance because it provides basic protection against disability and not starving in the cold in your old age. These are vastly different incentive systems than paying people money to live on even if they are not working.

Our ideological debates should be factoring these types of ideas in the discussion instead of just being a food fight.

a different chris , April 17, 2017 at 12:28 pm

>that people have a much bigger aversion to loss than gain.

Yeah well if the downside is that you're dead this starts to make sense.

>instead of just being a food fight.

The thing is that the Powers-That-Be want it to be a food fight, as that is a great stalling at worst and complete diversion at best tactic. Good post, btw.

Altandmain , April 17, 2017 at 12:36 pm

As the rich became uber rich, they hid the money in tax havens. As for globalization, this has less to do these days with technological innovation and more to do with economic exploitation.

I will note that Germany, Japan, South Korea, and a few other nations have not bought into this madness and have retained a good chunk of their manufacturing sectors.

Mark P. , April 17, 2017 at 3:26 pm

'As for globalization, this has less to do these days with technological innovation and more to do with economic exploitation.'

Economic exploiters are always with us. You're underrating the role of a specific technological innovation. Globalization as we now know it really became feasible in the late 1980s with the spread of instant global electronic networks, mostly via the fiberoptic cables through which everything - telephony, Internet, etc - travels Internet packet mode.

That's the point at which capital could really start moving instantly around the world, and companies could really begin to run global supply chains and workforces. That's the point when shifts of workers in facilities in Bangalore or Beijing could start their workdays as shifts of workers in the U.S. were ending theirs, and companies could outsource and offshore their whole operations.

[Apr 16, 2017] The most common characteristic of people running their own business was that theyd been fired twice

Notable quotes:
"... things might have worked out with better luck on timing), you need your head examined to start a small business ..."
"... If you can tolerate the BS, it is vastly better to be on a payroll. 90% of all new businesses fail and running one is no picnic. ..."
"... And new business formation has dived in the US, due mainly IMHO to less than robust demand in many sectors of the economy. ..."
"... You're so right. It used to be that there were set asides for small businesses but nowadays Federal and State Governments are only interested in contracts with large businesses. The SBA classification for small business is based on NAICS code (used to be SIC code) is usually $1-2 million or up to 500 employees. I wonder how they can be small businesses! ..."
"... To survive, small businesses need to sell their goods/services to large businesses. Most of the decision makers who purchase these items are unreachable or already have their favorites. Unless your small business has invented a better mousetrap you're SOL! ..."
Apr 16, 2017 | www.nakedcapitalism.com
Yves Smith, April 16, 2017 at 5:00 pm

As someone who has started three businesses, two of them successful (I went to Australia right before the Gulf War started, which led to new business in Sydney coming to a complete halt for six months; things might have worked out with better luck on timing), you need your head examined to start a small business. The most common characteristic of people running their own business was that they'd been fired twice.

If you can tolerate the BS, it is vastly better to be on a payroll. 90% of all new businesses fail and running one is no picnic.

And new business formation has dived in the US, due mainly IMHO to less than robust demand in many sectors of the economy.

steelhead , April 16, 2017 at 5:41 pm

Unless your family fully bankrolls you until BK kicks in (snark). I would have loved to write as a career. Unfortunately, at the time, promises that had been made were broken and I had to go to work for a F500 just to survive right after my undergraduate degree was completed. Fate and Karma.

oh , April 16, 2017 at 5:56 pm

You're so right. It used to be that there were set asides for small businesses but nowadays Federal and State Governments are only interested in contracts with large businesses. The SBA classification for small business is based on NAICS code (used to be SIC code) is usually $1-2 million or up to 500 employees. I wonder how they can be small businesses!

To survive, small businesses need to sell their goods/services to large businesses. Most of the decision makers who purchase these items are unreachable or already have their favorites. Unless your small business has invented a better mousetrap you're SOL!

[Apr 15, 2017] The Trump phenomenon shows that we urgently need an alternative to the obsolete capitalism

Apr 15, 2017 | failedevolution.blogspot.gr

The Trump phenomenon shows that we urgently need an alternative to the obsolete capitalism globinfo freexchange

It's not only the rapid technological progress, especially in the field of hyper-automation and Artificial Intelligence, that makes capitalism unable to deliver a viable future to the societies. It's also the fact that the dead-end it creates, produces false alternatives like Donald Trump.

As already pointed :

With Trump administration taken over by Goldman Sachs , nothing can surprise us, anymore. The fairy tale of the 'anti-establishment' Trump who would supposedly fight for the interests of the forgotten - by the system - Americans, was collapsed even before Trump election.

What's quite surprising, is how fast the new US president - buddy of the plutocrats, is offering 'earth and water' to the top 1% of the American society, as if they had not already enough at the expense of the 99%. His recent 'achievement', was to sign for more deregulation in favor of the banking mafia that ruined the economy in 2008, destroyed millions of working class Americans and sent waves of financial destruction all over the world. Europe is still on its knees because of the neoliberal destruction and cruel austerity.

Richard Wolff explains:

If you don't want the Trumps of this world to periodically show up and scare everybody, you've got to do something about the basic system that produces the conditions that allow a Trump to get to the position he now occupies.

We need a better politics than having two parties compete for the big corporations to love them, two parties to proudly celebrate capitalism. Real politics needs an opposition, people who think we can do better than capitalism, we ought to try, we ought to discuss it, and the people should have a choice about that. Because if you don't give them that, they are gonna go from one extreme to another, trying to find a way out of the status quo that is no longer acceptable.

I'm amazed that after half a century in which any politician had accepted the name 'Socialist' attached to him or her, thereby committing, effectively, political suicide, Mr. Sanders has shown us that the world has really changed. He could have that label, he could accept the label, he could say he is proud of the label, and millions and millions of Americans said 'that's fine with us', he gets our vote. We will not be the same nation going forward, because of that. It is now openly possible to raise questions about capitalism, to talk about its shortcomings, to explore how we can do better.

Indeed, as the blog pointed before the latest US elections:

Bernie has the background and the ability to change the course of the US politics. He speaks straightly about things buried by the establishment, as if they were absent. Wall Street corruption, growing inequality, corporate funding of politicians by lobbies. He says that he will break the big banks. He will provide free health and education for all the American people. Because of Sanders, Hillary is forced to speak about these issues too. And subsequently, this starts to shape again a fundamental ideological difference between Democrats and Republicans, which was nearly absent for decades.

But none of this would have come to surface if Bernie didn't have the support of the American people. Despite that he came from nowhere, especially the young people mobilized and started to spread his message using the alternative media. Despite that he speaks about Socialism, his popularity grows. The establishment starts to sense the first cracks in its solid structure. But Bernie is only the appropriate tool. It's the American people who make the difference.

No matter who will be elected eventually, the final countdown for the demolition of this brutal system has already started and it's irreversible. The question now is not if, but when it will collapse, and what this collapse will bring the day after. In any case, if people are truly united, they have nothing to fear.

So, what kind of system do we need to replace the obsolete capitalism? Do we need a kind of Democratic Socialism that would be certainly more compatible to the rapid technological progress? Write your thoughts and ideas in the comments below.

[Apr 15, 2017] IMF claims that technology and global integration explain close to 75 percent of the decline in labor shares in Germany and Italy, and close to 50 percent in the United States.

Anything that the IMF claims should be taken with a grain of salt. The IMF is a quintessential neoliberal institution that will support neoliberalism to the bitter end.
Apr 15, 2017 | economistsview.typepad.com

point, April 14, 2017 at 05:06 AM

https://blogs.imf.org/2017/04/12/drivers-of-declining-labor-share-of-income/

"In advanced economies, about half of the decline in labor shares can be traced to the impact of technology."

Searching, searching for the policy variable in the regression.

anne -> point... , April 14, 2017 at 08:09 AM
https://blogs.imf.org/2017/04/12/drivers-of-declining-labor-share-of-income/

April 12, 2017

Drivers of Declining Labor Share of Income
By Mai Chi Dao, Mitali Das, Zsoka Koczan, and Weicheng Lian

Technology: a key driver in advanced economies

In advanced economies, about half of the decline in labor shares can be traced to the impact of technology. The decline was driven by a combination of rapid progress in information and telecommunication technology, and a high share of occupations that could be easily be automated.

Global integration-as captured by trends in final goods trade, participation in global value chains, and foreign direct investment-also played a role. Its contribution is estimated at about half that of technology. Because participation in global value chains typically implies offshoring of labor-intensive tasks, the effect of integration is to lower labor shares in tradable sectors.

Admittedly, it is difficult to cleanly separate the impact of technology from global integration, or from policies and reforms. Yet the results for advanced economies is compelling. Taken together, technology and global integration explain close to 75 percent of the decline in labor shares in Germany and Italy, and close to 50 percent in the United States.

paine -> anne... , April 14, 2017 at 08:49 AM
Again this is about changing the wage structure

Total hours is macro management. Mobilizing potential job hours to the max is undaunted by technical progress

Recall industrial jobs required unions to become well paid

We need a CIO for services logistics and commerce

[Apr 14, 2017] Automation as a way to depress wages

Apr 14, 2017 | economistsview.typepad.com
point , April 14, 2017 at 04:59 AM
http://www.bradford-delong.com/2017/04/notes-working-earning-and-learning-in-the-age-of-intelligent-machines.html

Brad said: Few things can turn a perceived threat into a graspable opportunity like a high-pressure economy with a tight job market and rising wages. Few things can turn a real opportunity into a phantom threat like a low-pressure economy, where jobs are scarce and wage stagnant because of the failure of macro economic policy.

What is it that prevents a statement like this from succeeding at the level of policy?

Peter K. -> point... , April 14, 2017 at 06:41 AM
class war

center-left economists like DeLong and Krugman going with neoliberal Hillary rather than Sanders.

Sanders supports that statement, Hillary did not. Obama did not.

PGL spent the primary unfairly attacking Sanders and the "Bernie Bros" on behalf of the center-left.

[Apr 12, 2017] The Despair of Learning That Experience No Longer Matters

Apr 12, 2017 | economistsview.typepad.com
RGC , April 12, 2017 at 06:41 AM
The Despair of Learning That Experience No Longer Matters

By Benjamin Wallace-Wells April 10, 2017

.....................

The arguments about Case and Deaton's work have been an echo of the one that consumed so much of the primary campaign, and then the general election, and which is still unresolved: whether the fury of Donald Trump's supporters came from cultural and racial grievance or from economic plight. Case and Deaton's scholarship does not settle the question. As they write, more than once, "more work is needed."

But part of what Case and Deaton offer in their new paper is an emotional logic to an economic argument. If returns to experience are in decline, if wisdom no longer pays off, then that might help suggest why a group of mostly older people who are not, as a group, disadvantaged might become convinced that the country has taken a turn for the worse. It suggests why their grievances should so idealize the past, and why all the talk about coal miners and factories, jobs in which unions have codified returns to experience into the salary structure, might become such a fixation. Whatever comes from the deliberations over Case and Deaton's statistics, there is within their numbers an especially interesting story.

http://www.newyorker.com/news/benjamin-wallace-wells/the-despair-of-learning-that-experience-no-longer-matters

[Apr 12, 2017] Why losing your job leads to a very long-lasting decline in your lifetime wages

Apr 12, 2017 | lse.ac.uk
Gregor Jarosch (2015, Chicago, Stanford): Jarosch writes a model to explain why losing your job leads to a very long-lasting decline in your lifetime wages. His hypothesis is that this is due to people climbing a ladder of jobs that are increasingly secure, so that when one has the misfortune of losing a job, this leads to a fall down the ladder and a higher likelihood of having further spells of unemployment in the future. He uses administrative social security data to find some evidence for this hypothesis.

[Apr 11, 2017] Legacy systems written in COBOL depend on a shrinking pool of aging programmers to baby them

Notable quotes:
"... Of course after legacy systems [people] were retrenched or shown the door in making government more efficient MBA style, some did hit the jack pot as consultants and made more that on the public dime . but the Gov balance sheet got a nice one time blip. ..."
"... In the government, projects "helped" by Siemens, especially at the Home and Passport Offices, cost billions and were abandoned. At my former employer, an eagle's nest, it was Deloittes. At my current employer, which has lost its passion to perform, it's KPMG and EY helping. ..."
"... My personal favourite is Accenture / British Gas . But then you've also got the masterclass in cockups Raytheon / U.K. Border Agency . Or for sheer breadth of failure, there's the IT Programme That Helped Kill a Whole Bank Stone Dead ( Infosys / Co-op ). ..."
"... I am an assembler expert. I have never seen a job advertised, but a I did not look very hard. Send me your work!!! IBM mainframe assembler ..."
"... What about Computer Associates? For quite a while they proudly maintained the worst reputation amongst all of those consultancy/outsourcing firms. ..."
"... My old boss used to say – a good programmer can learn a new language and be productive in it in in space of weeks (and this was at the time when Object Oriented was the new huge paradigm change). A bad programmer will write bad code in any language. ..."
"... The huge shortcoming of COBOL is that there are no equivalent of editing programs. ..."
"... Original programmers rarely wrote handbooks ..."
"... That is not to say that it is impossible to move off legacy platforms ..."
"... Wherefore are ye startup godz ..."
Apr 11, 2017 | www.nakedcapitalism.com
After we've been writing about the problem of the ticking time bomb of bank legacy systems written in COBOL that depends on a shrinking pool of aging programmers to baby them for now nearly two years, Reuters reports on the issue. Chuck L flagged a Reuters story, Banks scramble to fix old systems as IT 'cowboys' ride into sunset, which made some of the points we've been making but frustratingly missed other key elements.

Here's what Reuters confirmed:

Banks and the Federal government are running mission-critical core systems on COBOL, and only a small number of older software engineers have the expertise to keep the systems running . From the article:

In the United States, the financial sector, major corporations and parts of the federal government still largely rely on it because it underpins powerful systems that were built in the 70s or 80s and never fully replaced

Experienced COBOL programmers can earn more than $100 an hour when they get called in to patch up glitches, rewrite coding manuals or make new systems work with old.

For their customers such expenses pale in comparison with what it would cost to replace the old systems altogether, not to mention the risks involved.

Here's what Reuters missed:

Why young coders are not learning COBOL. Why, in an era when IT grads find it hard to get entry-level jobs in the US, are young programmers not learning COBOL as a guaranteed meal ticket? Basically, it's completely uncool and extremely tedious to work with by modern standards. Given how narrow-minded employers are, if you get good at COBOL, I would bet it's assumed you are only capable of doing grunt coding and would never get into the circles to work on the fantasy of getting rich by developing a hip app.

I'm sure expert readers will flag other issues, but the huge shortcoming of COBOL is that there is no equivalent of editing programs. Every line of code in a routine must be inspected and changed line by line.

How banks got in this mess in the first place. The original sin of software development is failure to document the code. In fairness, the Reuters story does allude to the issue:

But COBOL veterans say it takes more than just knowing the language itself. COBOL-based systems vary widely and original programmers rarely wrote handbooks, making trouble-shooting difficult for others.

What this does not make quite clear is that given the lack of documentation, it will always be cheaper and lower risk to have someone who is familiar with the code baby it, best of all the guy who originally wrote it. And that means any time you bring someone in, they are going to have to sort out not just the code that might be causing fits and starts, but the considerable interdependencies that have developed over time. As the article notes:

"It is immensely complex," said [former chief executive of Barclays PLC Anthony] Jenkins, who now heads startup 10x Future Technologies, which sells new IT infrastructure to banks. "Legacy systems from different generations are layered and often heavily intertwined."

I had the derivatives trading firm O'Connor & Associates as a client in the early 1990s. It was widely recognized as being one of the two best IT shops in all of Wall Street at the time. O'Connor was running the biggest private sector Unix network in the world back then. And IT was seen as critical to the firm's success; half of O'Connor's expenses went to it.

Even with IT being a huge expense, and my client, the CIO, repeatedly telling his partners that documenting the code would save 20% over the life of the software, his pleas fell on deaf ears. Even with the big commitment to building software, the trading desk heads felt it was already taking too long to get their apps into production. Speed of deployment was more important to them than cost or long-term considerations.1 And if you saw this sort of behavior at a firm where software development was a huge expense for partners who were spending their own money, it's not hard to see how managers in a firm where the developers were much less important, and management was fixated on short-term earnings targets, would blow off a tradeoff like this entirely.

Picking up sales patter from vendors, Reuters is over-stating banks' ability to address this issue . Here is what Reuters would have you believe:

The industry appears to be reaching an inflection point, though. In the United States, banks are slowly shifting toward newer languages taking cue from overseas rivals who have already made the switch-over.

Commonwealth Bank of Australia, for instance, replaced its core banking platform in 2012 with the help of Accenture and software company SAP SE. The job ultimately took five years and cost more than 1 billion Australian dollars ($749.9 million).

Accenture is also working with software vendor Temenos Group AG to help Swedish bank Nordea make a similar transition by 2020. IBM is also setting itself up to profit from the changes, despite its defense of COBOL's relevance. It recently acquired EzSource, a company that helps programmers figure out how old COBOL programs work.

The conundrum is that the more new routines banks pile on top of legacy systems, the more difficult a transition becomes. So delay only makes matters worse. Yet the incentive of everyone outside the IT areas is to hope they can ride it out and make the legacy system time bomb their successor's problem.

If you read carefully, Commonwealth is the only success story so far. And its situation is vastly less complex than that of many US players. First, it has roughly A$990 billion or $740 billion in assets now. While that makes it #46 in the world (and Nordea is of similar size at #44 as of June 30, 2016), JP Morgan and Bank of America are three times larger. Second, and perhaps more important, the big US banks are the product of more bank mergers; Commonwealth has acquired only four banks since the computer era. Third, many of the larger banks are major capital markets players, meaning their transaction volume relative to their asset base and product complexity is also vastly greater than for a Commonwealth. Finally, it is not impossible that, as a bank that was government owned prior to 1990 and thus not profit driven, Commonwealth had software jockeys who documented some of the COBOL, making a transition less fraught.

Add to that that the Commonwealth project was clearly a "big IT project". Anything over $500 million comfortably falls into that category. The failure rate on big IT projects is over 50%; some experts estimate it at 80% (costly failures are disguised as well as possible; some big IT projects going off the rails are terminated early).

Mind you, that is not to say that it is impossible to move off legacy platforms. The issue is the time and cost (as well as risk). One reader, I believe Brooklyn Bridge, recounted a prototypical conversation with management in which it became clear that the cost of a migration would be three times a behemoth bank's total profit for three years. That immediately shut down the manager's interest.

Estimates like that don't factor in the high odds of overruns. And even if it is too high for some banks by a factor of five, that's still too big for most to stomach until they are forced to. So the question then becomes: can they whack off enough increments of the problem to make it digestible from a cost and risk perspective? But the flip side is that the easier parts to isolate and migrate are likely not to be the most urgent to address.

____
1 The CIO had been the head index trader and had also help build O'Connor's FX derivatives trading business, so he was well aware of the tradeoff between trading a new instrument sooner versus software life cycle costs. He was convinced his partners were being short-sighted even over the near term and had some analyses to bolster that view. So this was the not empire-building or special pleading. This was an effort at prudent management.

Clive , April 11, 2017 at 5:51 am

I got to the bit which said:

Accenture is also working with software vendor Temenos Group AG to help

and promptly splurted my coffee over my desk. "Help" is the last thing either of these two ne'redowells will be doing.

Apart from the problems ably explained in the above piece, I'm tempted to think industry PR and management gullibility to it are the two biggest risks.

Marina Bart , April 11, 2017 at 6:06 am

As someone who used to do PR for that industry (worked with Accenture, among others), I concur that those are real risks.

skippy , April 11, 2017 at 6:07 am

Heaps of IT upgrades have gone a bit wonky over here of late, Health care payroll, ATO, Centerlink, Census, all assisted by private software vendors and consultants – after – drum roll .. PR management did a "efficiency" drive [by].

Of course after legacy systems [people] were retrenched or shown the door in making government more efficient MBA style, some did hit the jack pot as consultants and made more that on the public dime . but the Gov balance sheet got a nice one time blip.

disheveled . nice self licking icecream cone thingy and its still all gov fault . two'fer

Colonel Smithers , April 11, 2017 at 7:40 am

Thank you, Skippy.

It's the same in the UK as Clive knows and can add.

In the government, projects "helped" by Siemens, especially at the Home and Passport Offices, cost billions and were abandoned. At my former employer, an eagle's nest, it was Deloittes. At my current employer, which has lost its passion to perform, it's KPMG and EY helping.

What I have read / heard is that the external consultants often cost more and will take longer to do the work than internal bidders. The banks and government(s) run an internal market and invite bids.

Clive , April 11, 2017 at 9:33 am

Oh, where to start!

My personal favourite is Accenture / British Gas . But then you've also got the masterclass in cockups Raytheon / U.K. Border Agency . Or for sheer breadth of failure, there's the IT Programme That Helped Kill a Whole Bank Stone Dead ( Infosys / Co-op ).

They keep writing books on how to avoid this sort of thing. Strangely enough, none of them ever tell CEOs or CIOs to pay people decent wages, not treat them like crap and to train up new recruits now and again. And also fail to highlight that though you might like to believe you can go into the streets in Mumbai, Manila or Shenzhen waving a dollar bill and have dozens of experienced, skilled and loyal developers run to you like a cat smelling catnip, that may only be your wishful thinking.

Just wait 'til we get started trying to implement Brexit

Raj , April 11, 2017 at 12:10 pm

Oh man, if you only had a look at the kind of graduates Infosys hires en masse and the state of graduate programmers coming out of universities here in India you'd be amazed how we still haven't had massive hacks. And now the government, so confident in the Indian IT industry's ability to make big IT systems is pushing for the universal ID system(aadhar) to be made mandatory for even booking flight tickets!

So would you recommend graduates do learn COBOL to get good jobs there in the USA?

Clive , April 11, 2017 at 12:22 pm

I'd pick something really obscure, like maybe MUMPS - yes, incredibly niche but that's the point, you can corner a market. You might not get oodles of work but what you do get you can charge the earth for. Getting real-world experience is tricky though.

Another alternative, a little more mainstream is assembler. But that is hideous. You deserve every penny if you can learn that and be productive in it.

visitor , April 11, 2017 at 1:36 pm

Is anybody still using Pick? Or RPG?

Regarding assembler: tricky, as the knowledge is tied to specific processors - and Intel, AMD and ARM keep churning new products.

Synoia , April 11, 2017 at 3:40 pm

I am an assembler expert. I have never seen a job advertised, but a I did not look very hard. Send me your work!!! IBM mainframe assembler

visitor , April 11, 2017 at 10:02 am

What about Computer Associates? For quite a while they proudly maintained the worst reputation amongst all of those consultancy/outsourcing firms.

How does Temenos compare with Oracle, anyway?

Clive , April 11, 2017 at 10:05 am

How does Temenos compare with Oracle, anyway?

Way worse. Yes, I didn't believe it was possible, either.

MoiAussie , April 11, 2017 at 6:13 am

For a bit more on why Cobol is hard to use, see Why We Hate Cobol. To summarise, Cobol is barely removed from programming in assembler, i.e. at the lowest level of abstraction, with endless details needing to be taken care of. It dates back to the punched card era.

It is particularly hard for IT grads who have learned to code in Java or C# or any modern language to come to grips with, due to the lack of features that are usually taken for granted. Those who try to are probably on their own due to a shortage of teachers/courses. It's a language that's best mastered on the job as a junior in a company that still uses it, so it's hard to get it on your CV before landing such a job.

There are potentially two types of career opportunities for those who invest the time to get up-to-speed on Cobol. The first is maintenance and minor extension of legacy Cobol applications. The second and potentially more lucrative one is developing an ability to understand exactly what a Cobol program does in order to craft a suitable replacement in a modern enterprise grade language.

MartyH , April 11, 2017 at 12:53 pm

Well, COBOL's shortcomings are part technical and part "religious". After almost fifty years in software, and with experience in many of the "modern enterprise grade languages", I would argue that the technical and business merits are poorly understood. There is an enormous pressure in the industry to be on the "latest and greatest" language/platform/framework, etc. And under such pressure to sell novelty, the strengths of older technologies are generally overlooked.

@Yves, I would be glad to share my viewpoint (biases, warts and all) at your convenience. I live nearby.

vlade , April 11, 2017 at 7:52 am

"It is particularly hard for IT grads who have learned to code in Java or C# or any modern language to come to grips with"

which tells you something about the quality of IT education these days, where "mastering" a language is often more important than actually understanding what goes on and how.

My old boss used to say – a good programmer can learn a new language and be productive in it in the space of weeks (and this was at the time when Object Oriented was the new huge paradigm change). A bad programmer will write bad code in any language.

craazyboy , April 11, 2017 at 9:32 am

IMHO, your old boss is wrong about that. Precisely because OO languages are a huge paradigm change and require a programmer to nearly abandon everything he/she knows about programming. Then get his brain around OOP patterns when designing a complex system. Not so easy.

As proof, I put forth the 30% success rate for new large projects in the late 90s done with OOP tech. Like they say, if it was easy, everyone would be doing it.

More generally, on the subject of Cobol vs Java or C++/C#, in the heyday of OOP's rollout in the early 90s, corporate IT spent record amounts on developing new systems. As news of the Y2K problem spread, they very badly wanted to replace old Cobol/mainframe legacy systems. As things went along, many of those plans got rolled back due to perceived problems with viability, cost and trained personnel.

Part of the reason was existing Cobol IT staff took a look at OOP, then at their huge pile of Cobol legacy code, and their brains melted down. I was around lots of them and they had all the symptoms of Snow Crash. [Neal Stephenson] I hope they got better.

Marco , April 11, 2017 at 12:21 pm

It never occurred to me that the OOP-lite character of the newer "hipster" languages (Golang / Go or even plain old javascript) is a response to OOP run amok.

Arizona Slim , April 11, 2017 at 9:35 am

A close friend is a retired programmer. In her mind, knowing how to solve the problem comes first.

MartyH , April 11, 2017 at 12:54 pm

@Arizona_Slim: I agree with her. And COBOL lets you write business logic with a minimum of distractions.

Mel , April 11, 2017 at 11:36 am

In the university course I took, we were taught Algol-60. Then it turned out that the univ. had no budget for Algol compiles for us. So we wrote our programs in Algol-60 for 'publication' and grading, and rewrote them in FORTRAN IV to run in a cheap bulk FORTRAN execution system for results. Splendid way to push home Turing's point that all computing is the same. So when the job needed COBOL, "Sure, bring it on."

rfdawn , April 11, 2017 at 1:30 pm

My old boss used to say – a good programmer can learn a new language and be productive in it in the space of weeks (and this was at the time when Object Oriented was the new huge paradigm change). A bad programmer will write bad code in any language.

Yes. Learning a new programming language is fairly easy but understanding existing patchwork code can be very hard indeed. It just gets harder if you want to make reliable changes.

HR thinking, however, demands "credentials" and languages get chosen as such based on their simple labels. They are searchable on L**kedIn!

A related limitation is the corporate aversion to spending any time or money on employee learning of either language or code. There may not be anyone out there with all the skills needed but that will not stop managers from trying to hire them or, better still, just outsourcing the whole mess.

Either choice invites fraud.

reslez , April 11, 2017 at 2:02 pm

Your boss was correct in my opinion - but also atypical. Most firms look for multi-years of experience in a language. They'll toss your resume if you don't show you've used it extensively.

Even if a new coder spent the time to learn COBOL, if he wasn't using it on the job or in pretty significant projects he would not be considered. And there aren't exactly many open source projects out there written in COBOL to prove one's competence. The limiting factor is not whether you "know" COBOL, or whether you know how to learn it. The limiting factor is the actual knowledge of the system, how it was implemented, and all the little details that never get written down no matter how good your documentation. If your system is 30+ years old it has complexity hidden in every nook and cranny.

As for the language itself, COBOL is an ancient language from a much older paradigm than what students learn in school today. Most students skip right past C; they don't learn structured programming. They expect to have extensive libraries of pre-written routines available for reuse. And they expect to work in a modern IDE (development environment), a software package that makes it much easier to write and debug code. COBOL doesn't have tools of this level.

When I was in the Air Force I was trained as a programmer. COBOL was one of the languages they "taught". I never used it, ever, and wouldn't dream of trying it today. It's simply too niche. I would never recommend anyone learn COBOL in the hopes of getting a job. Get the job first, and if it happens to include some COBOL get the expertise that way.

d , April 11, 2017 at 4:04 pm

Having seen the 'high level code' in C++, I'm not sure what makes it 'modern'. It's really an outgrowth of C, which is basically the assembler language of Unix, and which itself is no spring chicken. Mostly what is called 'modern' is just the latest fad with the highest push from vendors. And sadly what we see in IT is that the IT trade magazines are more into what they sell than what companies need (maybe because of advertising?).

As to why schools tend to teach these languages rather than others? Mainly because it's hip. It's also cheaper for the schools, as they don't need much in the way of infrastructure to teach them (kids bring their own computers). Of course teachers are as likely to be influenced by the latest 'shiny' thing as anyone else.

craazyboy , April 11, 2017 at 4:34 pm

C++ shares most of the core C spec [variables and scope, datatypes, functions (sort of), arithmetic and logic operators, control statements], but that's it. The reason you can read high-level C++ is that it uses objects that hide the internal code and are given names that describe their use, which, if done right, makes the code somewhat readable and self-documenting, along with a short comment header.

Then at the high level most code is procedural and/or event-driven, which makes it appear to function like C or any other procedural language, but without the goto statements and subroutines, because that functionality is now encapsulated within the C++ objects (which are a datatype combining data structures with the related functions that act on that data).

ChrisPacific , April 11, 2017 at 5:31 pm

Well put. I was going to make this point. Note that today's IT grads struggle with Cobol for the same reason that modern airline pilots would struggle to build their own airplane. The industry has evolved and become much more specialized, and standard 'solved' problems have migrated into the core toolsets and become invisible to developers, who now work at a much higher level of abstraction. So for example a programmer who learned using BASIC on a Commodore 64 probably knows all about graphics coding by direct addressing of screen memory, which modern programmers would consider unnecessary at best and dangerous at worst. Not to mention it's exhausting drudgery compared to working with modern toolsets.

The other reason more grads don't learn COBOL is because it's a sunset technology. This is true even if systems written in COBOL are mission critical and not being replaced. As more and more COBOL programmers retire or die, banks will eventually reach the point where they don't have enough skilled staff available to keep their existing systems running. If they are in a position where they have to fix things anyway, for example due to a critical failure, they will be forced to resort to cross-training other developers, at great expense and pain for all concerned, and with no guarantee of success. One or two of these experiences will be enough to convince them that migration is necessary, whatever the cost (if their business survives them, which isn't a given when it comes to critical failures involving out of date and poorly-understood technology). And while developers with COBOL skills will be able to name their own price during those events, it's not likely to be a sustainable working environment in the longer term.

It would take a significant critical mass of younger programmers deciding to learn COBOL to change this dynamic. One person on their own isn't going to make any difference, and it's not career advice I would ever give to a young graduate looking to enter IT.

I am an experienced developer who has worked with a lot of different languages, including some quite low level ones in my early days. I don't know COBOL, but I am confident that I could learn it well enough to perform code archaeology on it given enough time (although probably nowhere near as efficiently as someone who built a career on it). Whether I could be convinced to do so is another question. If you paid me never-need-to-work-again money, then maybe. But nobody is ever going to do that unless it's a crisis, and I'm not likely to sign up for a death march situation with my current family commitments.

Steve , April 11, 2017 at 6:47 am

"Experienced COBOL programmers can earn more than $100 an hour"

Then the people hiring are getting them dirt cheap. This is a lot closer to consulting than contracting–a very specialized skill set and only a small set of people available. The rate should be $200-300/hour.

reslez , April 11, 2017 at 2:46 pm

I wonder if it has something to do with the IRS rules that made that guy fly a plane into an IRS office? Because of the rules, programmers aren't allowed to work as independent consultants. Since their employer/middleman takes a huge cut the pay they receive is a lot lower. Coders with a security clearance make quite a bit but that requires an "in", getting the clearance in the first place which most employers won't pay for.

d , April 11, 2017 at 4:05 pm

Not any place I know of, maybe in an extreme crunch, because today most COBOL jobs have been offshored. And maybe that's why kids don't learn COBOL.

ChrisPacific , April 11, 2017 at 5:31 pm

I had the same thought. Around here if you want a good one, you would probably need to add another zero to that.

shinola , April 11, 2017 at 6:52 am

Cobol? Are they running it on refrigerator sized machines with reel-to-reel tapes?

ejf , April 11, 2017 at 8:45 am

You're right. I've seen it on clunky databases in a clothing firm in NY State, a seed and grain distribution facility in Minnesota and a bank in Minneapolis. They're horrible and Yves is right – documentation is completely ABSENT.

d , April 11, 2017 at 4:06 pm

In small business, where every penny counts, they don't see the value in documentation - not even when they get big.

Disturbed Voter , April 11, 2017 at 7:05 am

No different than the failure of the public sector to maintain dams, bridges and highways. Basic civil engineering but our business model never included maintenance nor replacement costs. That is because our business model is accounting fraud.

I grew up on Fortran, and Cobol isn't too different, just limited to 2 points past the decimal to the right. I feel so sorry for these code jockeys who can't handle a bit of drudgery, who can't do squat without a gigabyte routine library to invoke. Those languages served as scripting languages or report writers back in the old days.

Please hire another million Indian programmers; they don't mind being poorly paid or the drudgery. Americans and Europeans are so over-rated. Business always complains they can't hire the right people: some job requires 2 PhDs and we can't pay more than $30k, am I right? Business needs slaves, not employees.

d , April 11, 2017 at 4:08 pm

COBOL hasn't been restricted to two digits to the right of the decimal place for decades.
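
For what it's worth, a small self-contained sketch (GnuCOBOL, with invented names and numbers) of a field carrying six decimal places; the precision is whatever the PIC clause declares:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. RATE-PRECISION.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
      *> Six digits after the implied decimal point (V).
       01 WS-DAILY-RATE  PIC S9(3)V9(6) COMP-3 VALUE 0.000137.
       01 WS-SHOW-RATE   PIC 9(3).9(6).
       PROCEDURE DIVISION.
           MOVE WS-DAILY-RATE TO WS-SHOW-RATE
           DISPLAY "DAILY RATE: " WS-SHOW-RATE
           STOP RUN.

The old two-decimal image comes from the PIC 9(n)V99 money fields that dominate business code, not from any limit in the language itself.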

clarky90 , April 11, 2017 at 7:06 am

The 'Novopay debacle'

This was a "new payroll" system for school teachers in NZ. It was an ongoing disaster. If something as simple (?) as paying NZ teachers could turn into such a train-wreck, imagine what updating the software of the crooked banks could entail. I bet that there are secret frauds hidden in the ancient software, like the rat mummies and cat skeletons that one finds when lifting the floor of old houses.

https://en.wikipedia.org/wiki/Novopay

"Novopay is a web-based payroll system for state and state integrated schools in New Zealand, processing the pay of 110,000 teaching and support staff at 2,457 schools .. From the outset, the system led to widespread problems with over 8,000 teachers receiving the wrong pay and in some cases no pay at all; within a few months, 90% of schools were affected .."

"Many of the errors were described as 'bizarre'. One teacher was paid for 39 days, instead of 39 hours getting thousands of dollars more than he should have. Another teacher was overpaid by $39,000. She returned the money immediately, but two months later, had not been paid since. A relief teacher was paid for working at two different schools on the same day – one in Upper Hutt and the other in Auckland. Ashburton College principal, Grant McMillan, said the 'most ludicrous' problem was when "Novopay took $40,000 directly out of the school bank account to pay a number of teachers who had never worked at the college".

Can you imagine this, times 10,000,000????

d , April 11, 2017 at 4:12 pm

This wasn't COBOL, or even a technology problem; more like a management one. Big failures tend to be that way.

vlade , April 11, 2017 at 7:48 am

"but the huge shortcoming of COBOL is that there are no equivalent of editing programs. Every line of code in a routine must be inspected and changed line by line"
I'm not sure what you mean by this.

If you mean that COBOL doesn't have the new flash IDEs that can do smart things with "syntactic sugar", then it really depends on the demand. Smart IDEs can be written for pretty much any language (smart IDEs work by operating on ASTs, which are part and parcel of any compiler; the problem is more what to do if you have externalised functions etc., which is for example why it took so long for those smart IDEs to work with C++ and its linking model). The question is whether it pays – and a lot of old COBOL hands eschew anything except vi (or equivalent) because coding should be done by REAL MEN.

On the general IT problem. There are three problems, which are sort of related but not.

The first problem is the interconnectedness of the systems. Especially for a large bank, it's not often clear where one system ends and the other begins, what are the side-effects of running something (or not running), who exactly produces what outputs and when etc. The complexity is more often at this level than cobol (or any other) line-by-line code.

The second problem is the IT personnel you get. If you're unlucky, you get coding monkeys, who barely understand _any_ programming language (there was a time when I didn't think people like that got hired; I now know better), and have no idea what analytical and algorithmic thinking is. If you're lucky, you get a bunch of IT geeks, who can discuss the latest technology till the cows come home, know the intricate details of what a sequence point in C++ is and how it affects execution, but don't really care that much about the business. Then you get some possibly even brilliant code, but often also get unnecessary technological artifacts and new technologies just because they are fun – even though a much simpler solution would work just as well if not better. TBH, you can get this from the other side too, someone who understands the business but doesn't know even basic language techniques, which generally means their code works very well for the business, but is a nightmare to maintain (a typical population of this group are front-office quants).

If you are incredibly lucky, you get someone who understands the business and happens to know how to code well too. Unfortunately, this is almost a mythical beast, especially since neither IT nor the business encourages people to understand each other.

Which is what gets me to the third point – the politics of it. And that, TBH, is why most projects fail. Because it's easier to staff a project with 100 developers and then say all that could have been done was done, than to get 10 smart people working on it but risk that, if it fails, you get told you haven't spent enough resources. "We are not spending enough money" is paradoxically one of the "problems" I often see here, when the problem really is "we're not spending money smartly enough". Because in an organization budget=power. I have yet to see an IT project that would have 100+ developers that would _really_ succeed (as opposed to succeeding by redefining what it was to deliver to what was actually delivered).

Oh, and a last point, on documentation. TBH, documentation of the code is superfluous if a) it's clear what business problem is being solved, b) it has a good set of test cases, and c) the code is reasonably cleanly written (which tends to be the real problem). Documenting code by anything else but example is in my experience just a costly exercise. Mind you, this is entirely different from documenting how systems hang together and how their interfaces work.

Yves Smith Post author , April 11, 2017 at 7:52 am

On the last point, I have to tell you I in short succession happened to work not just with O'Connor, but about a year later, with Bankers Trust, then regarded as the other top IT shop on Wall Street. Both CIOs would disagree with you vehemently on your claim re documentation.

vlade , April 11, 2017 at 8:25 am

Yes, in the 90s there was a great deal of emphasis on code documentation. The problem with that is that requirements in the real world change really quickly. Development techniques that worked for sending a man to the moon don't really work well on short-cycle user-driven developments.

The 90s were mostly the good old waterfall method (which was really based on the NASA techniques), but even as early as the 2000s it started to change a lot. Part of it came from the realization that the "building" metaphor that was the working approach for a lot of that didn't really work for code.

When you're building a bridge, it's expensive, so you have to spend a lot of time with blueprints etc. When you're doing code, documenting it in "normal" human world just adds a superfluous step. It's much more efficient to make sure your code is clean and readable than writing extra documents that tell you what the code does _and_ have to be kept in sync all the time.

Moreover, bits like pretty pictures showing the code interaction, dependencies and sometimes even more can now be generated automatically from the code, so again, it's more efficient to do that than to keep two different versions of what should be the same truth.

Yves Smith Post author , April 11, 2017 at 8:31 am

With all due respect, O'Connor and Bankers Trust were recognized at top IT shops then PRECISELY because they were the best, bar none, at "short cycle user driven developments." They were both cutting edge in derivatives because you had to knock out the coding to put new complex derivatives into production.

Don't insinuate my clients didn't know what they were talking about. They were running more difficult coding environments than you've ever dealt with even now. The pace of derivative innovation was torrid then and there hasn't been anything like it since in finance. Ten O'Connor partners made $1 billion on the sale of their firm, and it was entirely based on the IT capabilities. That was an unheard of number back then, 1993, particularly given the scale of the firm (one office in Chicago, about 250 employees).

vlade , April 11, 2017 at 9:23 am

Yves,

I can't talk about how good/bad your clients were except for generic statements – and the above were generic statements that in the 90s MOST companies used waterfall.

At the same time please do not talk about what programming environments I was in, because you don't know. That's assuming it's even possible to compare coding environments – because quant libraries that first and foremost concentrate on processing data (and I don't even know whether that was the majority of your clients' code) are a very, very different beast from an extremely UI-complex but computationally trivial project, or something that has both trivial UI and computation but is very database-heavy, etc.

I don't know what specific techniques your clients used. But the fact they WANTED to have more documentation doesn't mean that having more documentation would ACTUALLY be useful.

With all due respect, I've spent the first half of 00s talking to some of the top IT development methodologists of the time, from the Gang Of Four people to Agile Manifesto chaps, and practicing/leading/implementing SW development methodology across a number of different industries (anything from "pure" waterfall to variants of it to XP).

The general agreement across the industry was (and I believe still is) that documenting _THE CODE_ (outside of the code) was waste of time (actually it was ranging from any design doc to various levels of design doc, depending on who you were talking to).

Again, I put emphasis on the code – that is not the same as say having a good whitepaper telling you how the model you're implementing works, or what the hell the users actually want – i.e. capturing the requirements.

As an aside – implementation of new derivative payoffs can actually be done in a fairly trivial way, depending on how exactly you model them in the code. I wrote an extensive library that did it, whose whole purpose was to deal with new products and allow them to be incubated quickly and effectively – and that most likely involved doing things that no-one at BT/O'Connor even looked at in the early 1990s (because XVA wasn't even a gleam in anyone's eye at that time).

Clive , April 11, 2017 at 9:54 am

Well at my TBTF, where incomprehensible chaos rules, the only thing - and I do mean the only thing - that keeps major disasters averted (perhaps "ameliorated" is putting it better) is where some of the key systems are documented. Most of the core back end is copiously and reasonably well documented and as such can survive a lot of mistreatment at the hands of the current outsourcer du jour.

But some "lower priority" applications are either poorly documented or not documented at all. And a "low priority" application is only "low priority" until it happens to sit on the critical path. Even now I have half of Bangalore (it seems so, at any rate) sitting there trying to reverse engineer some sparsely documented application - although I suspect there was documentation, it just got "lost" in a succession of handovers - desperate in their attempts to figure out what the application does and how it does it. You can hear the fear in their voices, it is scary stuff, given how crappy-little-VB6-pile-of-rubbish is now the only way to manage a key business process where there are no useable comments in the code and no other application documentation, you are totally, totally screwed.

visitor , April 11, 2017 at 3:51 pm

Your TBTF corporation is ISO 9000-3,9001/CMM/TickIt/ITIL certified, of course?

Skip Intro , April 11, 2017 at 11:48 am

It seems like you guys are talking past each other to some degree. I get the sense that vlade is talking about commenting code, and dismissing the idea of code comments that don't live with the code. Yves' former colleagues are probably referring to higher level specifications that describe the functionality, requirements, inputs, and outputs of the various software modules in the system.
If this is the case, then you're both right. Even comments in the code tend to get out of date due to the application of bug fixes, and other reasons for 'drift' in the code, unless the comments are rigorously maintained along with the code. Were the code-level descriptions maintained somewhere else, that would be much more difficult and less useful. On the other hand the higher-level specifications are pretty essential for using, testing, and maintaining the software, and would sure be useful for someone trying to replace all or parts of the system.

Clive , April 11, 2017 at 12:30 pm

In my experience you need a combination of both. There is simply no substitute for a brief line in some ghastly nested if/then procedure that says "this section catches host offline exceptions if the transaction times out and calls the last incremental earmarked funds as a fallback" or what-have-you.

That sort of thing can save weeks of analysis. It can stop an outage from escalating from a few minutes to hours or even days.
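
For readers who have never seen this in practice, here is a hedged sketch of the kind of annotation Clive describes: a complete toy program in GnuCOBOL, with every name and value invented rather than drawn from any real banking system.

       IDENTIFICATION DIVISION.
       PROGRAM-ID. FUNDS-CHECK.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01 WS-HOST-STATUS          PIC X(8)     VALUE "TIMEOUT".
       01 WS-HOST-FUNDS           PIC S9(9)V99 VALUE 0.
       01 WS-LAST-EARMARKED-FUNDS PIC S9(9)V99 VALUE 1500.00.
       01 WS-AVAILABLE-FUNDS      PIC S9(9)V99 VALUE 0.
       01 WS-SHOW-FUNDS           PIC ZZZ,ZZZ,ZZ9.99.
       PROCEDURE DIVISION.
       CHECK-HOST-RESPONSE.
           IF WS-HOST-STATUS = "TIMEOUT"
      *>       Host offline: the transaction timed out, so fall back
      *>       to the last incremental earmarked funds rather than
      *>       fail the payment outright.
               MOVE WS-LAST-EARMARKED-FUNDS TO WS-AVAILABLE-FUNDS
               DISPLAY "FALLBACK: USING EARMARKED FUNDS"
           ELSE
               MOVE WS-HOST-FUNDS TO WS-AVAILABLE-FUNDS
           END-IF
           MOVE WS-AVAILABLE-FUNDS TO WS-SHOW-FUNDS
           DISPLAY "AVAILABLE FUNDS: " WS-SHOW-FUNDS
           STOP RUN.

A single comment line beside the fallback MOVE tells whoever picks this up during a 3 a.m. incident why the branch exists, which is exactly the weeks-of-analysis saving being described.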

Mathiasalexander , April 11, 2017 at 8:22 am

They could try building the new system from scratch as a stand alone and then entering all the data manually.

Ivy , April 11, 2017 at 10:39 am

There is some problem-solving/catastrophe-avoiding discussion about setting up a new bank with a clean, updated (i.e., this millennium) IT approach and then merging the old bank into that and decommissioning that old one. Many questions arise about applicable software both in-house and at all those vendor shops that would need some inter-connectivity.

Legacy systems lurk all over the economy, from banks to utilities to government and education. The O'Connor CIO advice relating to life-cycle costing was probably unheard in many places besides The Street.

d , April 11, 2017 at 4:24 pm

Building them from scratch is usually the most likely to be a failure, as too many in both IT and business only know parts of the needs. And if a company can't implement a vendor-supplied package to do the work, what makes us think they can do it from scratch?

visitor , April 11, 2017 at 9:44 am

I did learn COBOL when I was at the University more than three decades ago, and at that time it was already decidedly "uncool". The course, given by an old-timer, was great though. I programmed in COBOL in the beginnings of my professional life (MIS applications, not banking), so I can provide a slightly different take on some of those issues.

As far as the language itself is concerned, disregard those comments about it being like "assembly". COBOL already showed its age in the 1980s, but though superannuated it is a high-level language geared at dealing with database records, money amounts (calculations with controlled accuracy), and reports. For that kind of job, it was not that bad.

The huge shortcoming of COBOL is that there are no equivalent of editing programs.

While in the old times a simple text editor was the main tool for programming in that language, modern integrated, interactive development environments for COBOL have been available for quite a while - just as there are for Java, C++ or C#.

And that is a bit of an issue. For, already in my times, a lot, possibly most COBOL was not programmed manually, but generated automatically - typically from pseudo-COBOL annotations or functional extensions inside the code. Want to access a database (say Oracle, DB2, Ingres) from COBOL, or generate a user interface (for 3270 or VT220 terminals in those days), or perform some networking? There were extensions and code generators for that. Nowadays you will also find coding utilities to manipulate XML or interface with routines in other programming languages. All introduce deviations and extensions from the COBOL norm.
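
To make the "pseudo-COBOL annotation" point concrete, here is a hedged, deliberately incomplete fragment of embedded SQL; the ACCOUNTS table and its columns are invented. The EXEC SQL block and the colon-prefixed host variables are not COBOL at all: a vendor precompiler (IBM's DB2 precompiler, Oracle's Pro*COBOL and the like) rewrites them into plain COBOL CALL statements before the real compiler ever sees the source.

       WORKING-STORAGE SECTION.
           EXEC SQL BEGIN DECLARE SECTION END-EXEC.
       01 WS-ACCT-ID  PIC X(10).
       01 WS-BALANCE  PIC S9(11)V99 COMP-3.
           EXEC SQL END DECLARE SECTION END-EXEC.
       PROCEDURE DIVISION.
       GET-BALANCE.
           EXEC SQL
               SELECT BALANCE
                 INTO :WS-BALANCE
                 FROM ACCOUNTS
                WHERE ACCT_ID = :WS-ACCT-ID
           END-EXEC.

The fragment will not compile on its own; it only illustrates the annotation style that the precompilers and code generators consume.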

If, tomorrow, I wanted to apply for a job at one of those financial institutions battling with legacy software, my rusty COBOL programming skills would not be the main problem, but my lack of knowledge of the entire development environment. That would mean knowing those additional code generators, development environments, extra COBOL-geared database/UI/networking/reporting modules. In an IBM mainframe environment, this would probably mean knowing things like REXX, IMS or DB2, CICS, etc (my background is DEC VMS and related software, not IBM stuff).

So those firms are not just holding onto COBOL programmers for dear life - they are desperately hoarding people who know their way around mainframe programming environments for which training (in universities) basically stopped in the early 1990s.

Furthermore, I suspect that some of those code generators/interfaces might themselves be decaying legacy systems whose original developers went out of business or have been slowly withdrawing from their maintenance. Correcting or adjusting manually the COBOL code generated by such tools in the absence of vendor support is lots of fun (I had to do something like that once, but it actually went smoothly).

Original programmers rarely wrote handbooks

My experience is that proper documentation has a good chance to be rigorously enforced when the software being developed is itself a commercial product to be delivered to outside parties. Then, handbooks, reference manuals and even code documentation become actual deliverables that are part of the product sold, and whose production is planned and budgeted for in software development programmes.

I presume it is difficult to ensure that effort and resources be devoted to document internal software because these are purely cost centers - not profit centers (or at least, do not appear as such directly).

That is not to say that it is impossible to move off legacy platforms

So, we knew that banks were too big to fail, too big to jail, and are still too big to bail. Are their software problems too big to nail?

d , April 11, 2017 at 4:27 pm

Actually I suspect banks, like the rest of business, don't really care about their systems till they are down, as they will just find the latest offshore company to do it cheaper.

Yves Smith Post author , April 11, 2017 at 5:58 pm

Why then have I been told that reviewing code for Y2K had to be done line by line?

I said documentation, not handbooks. And you are assuming banks hired third parties to do their development. Buying software packages and customizing them, as well as greater use of third party vendors, became a common practice only as of the 1990s.

JTMcPhee , April 11, 2017 at 10:33 am

I'm in favor of the "Samson Option" in this area.

I know it will screw me and people I care about, and "throw the world economy into chaos," but who effing cares (hint: not me) if the code pile reaches past the limits of its angle of repose, and slumps into some chaotic non-form?

Maybe a sentiment that gets me some abuse, but hey, is it not the gravamen of the story here that dysfunction and then collapse are very possible, maybe even likely?

And where are the tools to re-build this Tower of Babel, symbol of arrogant pride? Maybe G_D has once again, per the Biblical story, confounded the tongues of men (and women) to collapse their edifices and reduce them to working the dirt (what's left of it after centuries of agricultural looting and the current motions toward temperature-driven uninhabitability.)

Especially interesting that people here are seemingly proud of having taken part successfully in the construction of the whole derivatives thing. Maybe I'm misreading that. But what echoes in my mind in this context is the pride that the people of Pantex, https://en.wikipedia.org/wiki/Pantex_Plant , have in their role in keeping the world right on the ragged edge of nuclear "Game Over." On the way to Rapture, because they did G_D's work in preparing Armageddon. http://articles.chicagotribune.com/1986-09-05/features/8603060693_1_pantex-plant-nuclear-weapons-amarillo

"What a wondrous beast this human is "

lyman alpha blob , April 11, 2017 at 10:57 am

So is it time to go long on duct tape and twine?

ChrisAtRU , April 11, 2017 at 11:19 am

#Memories

My first job out of uni, I was trained as a MVS/COBOL programmer. After successfully completing the 11-week pass/fire course, I showed up to my 1st work assignment where my boss said to me, "Here's your UNIX terminal."

;-) – COBOL didn't strike me as difficult, just arcane and verbose. Converting to SAP is a costly nightmare. That caused me to leave a job once; I had no desire to deal with SAP/ABAP. I'm surprised no one has come up with an acceptable next-gen thing. I remember years ago seeing an ad for Object-Oriented COBOL in an IT magazine and I almost pissed myself laughing. On the serious side, if it's still that powerful and well represented in banking, perhaps someone should look into an upgraded version of the language/concepts and build something easy to lift and shift to: COBOL++?

Wherefore are ye startup godz

#OnlyHalfKidding

#MaybeNot

Peewee , April 11, 2017 at 12:22 pm

This sounds like an opportunity for a worker's coop, to train their workers in COBOL and to get back at these banks by REALLY exploiting them good and hard.

MartyH , April 11, 2017 at 12:57 pm

@Peewee couldn't agree more! @Diptherio?

susan the other , April 11, 2017 at 1:02 pm

So is this why no one is willing to advocate regulating derivatives in an accountable way? I almost can't believe this stuff. I can't believe that we are functioning at all, financially. 80% of IT projects fail? And if legacy platforms are replaced at great time and expense, years and trillions, what guarantee is there that the new platform will not spin out just as incomprehensibly as the COBOL-based software evolved, with simplistic patches of other software lost in translation? And maybe many times faster. Did Tuttle do this? I think we need new sophisticated hardware, something even Tuttle can't mess with.

Skip Intro , April 11, 2017 at 3:40 pm

I think it is only 80% of 'large' IT projects that fail. It says more about the lack of scalability of large software projects, or our (in)ability to deal with exponential complexity growth.

JimTan , April 11, 2017 at 2:34 pm

Looks like there are more than a few current NYC jobs at Accenture, Morgan Stanley, JPMorgan Chase, and Bank of America for programmers who code in COBOL.

https://www.indeed.com/jobs?q=mainframe+Cobol+&l=New+York%2C+NY

[Apr 07, 2017] No it was policy driven by politics. They increased profits at the expense of workers and the middle class. The New Democrats played along with Wall Street.

Apr 07, 2017 | economistsview.typepad.com
ken melvin -> DrDick ... , April 06, 2017 at 08:45 AM
Probably automated 200. In every case, displacing 3/4 of the workers and increasing production 40% while greatly improving quality. The exact same can be said for larger-scale operations such as automobile mfg, ...

The convergence of offshoring and automation in such a short time frame meant that instead of a gradual transformation that might have allowed for more evolutionary economic thinking, American workers got gobsmacked. The aftermath includes the wage disparity, opiate epidemic, Trump, ...

This transition is of the scale of the industrial revolution with climate change thrown in. This is just the beginning of great social and economic turmoil. None of the stuff that evolved specific to the industrial revolution applies.

Peter K. -> ken melvin... , April 06, 2017 at 09:01 AM

No it was policy driven by politics. They increased profits at the expense of workers and the middle class. The New Democrats played along with Wall Street.
libezkova -> ken melvin... , April 06, 2017 at 05:43 PM
"while greatly improving quality" -- that's not given.

[Apr 06, 2017] Germany and Japan have retained a larger share of workers in manufacturing, despite more automation

Apr 06, 2017 | economistsview.typepad.com
Peter K. -> EMichael... , April 06, 2017 at 09:18 AM
What do you make of the DeLong link? Why do you avoid discussing it?

"...
The lesson from history is not that the robots should be stopped; it is that we will need to confront the social-engineering and political problem of maintaining a fair balance of relative incomes across society. Toward that end, our task becomes threefold.

First, we need to make sure that governments carry out their proper macroeconomic role, by maintaining a stable, low-unemployment economy so that markets can function properly. Second, we need to redistribute wealth to maintain a proper distribution of income. Our market economy should promote, rather than undermine, societal goals that correspond to our values and morals. Finally, workers must be educated and trained to use increasingly high-tech tools (especially in labor-intensive industries), so that they can make useful things for which there is still demand.

Sounding the alarm about "artificial intelligence taking American jobs" does nothing to bring such policies about. Mnuchin is right: the rise of the robots should not be on a treasury secretary's radar."

DrDick -> EMichael... , April 06, 2017 at 08:43 AM
Except that Germany and Japan have retained a larger share of workers in manufacturing, despite more automation. Germany has also retained much more of its manufacturing base than the US has. The evidence really does point to the role of outsourcing in the US compared with others.

http://www.economist.com/node/21552567

http://www.economist.com/node/2571689

pgl -> DrDick ... , April 06, 2017 at 08:54 AM
I got an email of some tale that Adidas would start manufacturing in Germany as opposed to China. Not with German workers but with robots. The author claimed the robots would cost only $5.50 per hour as opposed to $11 an hour for the Chinese workers. Of course Chinese apparel workers do not get anywhere close to $11 an hour and the author was not exactly a credible source.
pgl -> pgl... , April 06, 2017 at 08:57 AM
Reuters is a more credible source:

http://www.reuters.com/article/us-adidas-manufacturing-idUSKBN0TS0ZM20151209

Pilot program initially making 500 pairs of shoes in the first year. No claims as to the wage rate of Chinese workers.

libezkova said in reply to pgl... , April 06, 2017 at 05:41 PM
"The new "Speedfactory" in the southern town of Ansbach near its Bavarian headquarters will start production in the first half of 2016 of a robot-made running shoe that combines a machine-knitted upper and springy "Boost" sole made from a bubble-filled polyurethane foam developed by BASF."

Interesting. I thought that "keds" production was already fully automated. Bright colors are probably the main attraction. But Adidas commands a premium price...

A machine-knitted upper is the key -- robots, even sophisticated ones, put additional demands on the precision of the parts to be assembled. That's also probably why a monolithic molded sole is chosen. Kind of like 3-D printing of shoes.

Robots do not "feel" the nuances of the technological process like humans do.

kurt -> pgl... , April 06, 2017 at 09:40 AM
While I agree that Chinese workers don't get $11 - frequently employee costs are accounted at a loaded rate (including all benefits - in China this would include the capital cost of dormitories, food, security staff, benefits and taxes). I am guessing that a $2-3 an hour wage would result in an $11 fully loaded rate under those circumstances. Those other costs are not required with robots.
Peter K. -> DrDick ... , April 06, 2017 at 08:59 AM
I agree with you. The center-left want to exculpate globalization and outsourcing, or free them from blame, by providing another explanation: technology and robots. They're not just arguing with Trump.

Brad Setser:

"I suspect the politics around trade would be a bit different in the U.S. if the goods-exporting sector had grown in parallel with imports.

That is one key difference between the U.S. and Germany. Manufacturing jobs fell during reunification-and Germany went through a difficult adjustment in the early 2000s. But over the last ten years the number of jobs in Germany's export sector grew, keeping the number of people employed in manufacturing roughly constant over the last ten years even with rising productivity. Part of the "trade" adjustment was a shift from import-competing to exporting sectors, not just a shift out of the goods producing tradables sector. Of course, not everyone can run a German sized surplus in manufactures-but it seems likely the low U.S. share of manufacturing employment (relative to Germany and Japan) is in part a function of the size and persistence of the U.S. trade deficit in manufactures. (It is also in part a function of the fact that the U.S. no longer needs to trade manufactures for imported energy on any significant scale; the U.S. has more jobs in oil and gas production, for example, than Germany or Japan)."

http://blogs.cfr.org/setser/2017/02/06/offshore-profits-and-exports/

anne -> DrDick ... , April 06, 2017 at 10:01 AM
https://fred.stlouisfed.org/graph/?g=dgSQ

January 15, 2017

Percent of Employment in Manufacturing for United States, Germany and Japan, 1970-2012


https://fred.stlouisfed.org/graph/?g=dgT0

January 15, 2017

Percent of Employment in Manufacturing for United States, Germany and Japan, 1970-2012

(Indexed to 1970)


[Apr 06, 2017] The impact of information technology on employment is undoubtedly a major issue, but it is also not in society's interest to discourage investment in high-tech companies.

Apr 06, 2017 | economistsview.typepad.com
Peter K. , April 05, 2017 at 01:55 PM
Interesting, thought-provoking discussion by DeLong:

https://www.project-syndicate.org/commentary/mnuchin-automation-low-skill-workers-by-j--bradford-delong-2017-04

APR 3, 2017
Artificial Intelligence and Artificial Problems
by J. Bradford DeLong

BERKELEY – Former US Treasury Secretary Larry Summers recently took exception to current US Treasury Secretary Steve Mnuchin's views on "artificial intelligence" (AI) and related topics. The difference between the two seems to be, more than anything else, a matter of priorities and emphasis.

Mnuchin takes a narrow approach. He thinks that the problem of particular technologies called "artificial intelligence taking over American jobs" lies "far in the future." And he seems to question the high stock-market valuations for "unicorns" – companies valued at or above $1 billion that have no record of producing revenues that would justify their supposed worth and no clear plan to do so.

Summers takes a broader view. He looks at the "impact of technology on jobs" generally, and considers the stock-market valuation for highly profitable technology companies such as Google and Apple to be more than fair.

I think that Summers is right about the optics of Mnuchin's statements. A US treasury secretary should not answer questions narrowly, because people will extrapolate broader conclusions even from limited answers. The impact of information technology on employment is undoubtedly a major issue, but it is also not in society's interest to discourage investment in high-tech companies.

On the other hand, I sympathize with Mnuchin's effort to warn non-experts against routinely investing in castles in the sky. Although great technologies are worth the investment from a societal point of view, it is not so easy for a company to achieve sustained profitability. Presumably, a treasury secretary already has enough on his plate to have to worry about the rise of the machines.

In fact, it is profoundly unhelpful to stoke fears about robots, and to frame the issue as "artificial intelligence taking American jobs." There are far more constructive areas for policymakers to direct their focus. If the government is properly fulfilling its duty to prevent a demand-shortfall depression, technological progress in a market economy need not impoverish unskilled workers.

This is especially true when value is derived from the work of human hands, or the work of things that human hands have made, rather than from scarce natural resources, as in the Middle Ages. Karl Marx was one of the smartest and most dedicated theorists on this topic, and even he could not consistently show that technological progress necessarily impoverishes unskilled workers.

Technological innovations make whatever is produced primarily by machines more useful, albeit with relatively fewer contributions from unskilled labor. But that by itself does not impoverish anyone. To do that, technological advances also have to make whatever is produced primarily by unskilled workers less useful. But this is rarely the case, because there is nothing keeping the relatively cheap machines used by unskilled workers in labor-intensive occupations from becoming more powerful. With more advanced tools, these workers can then produce more useful things.

Historically, there are relatively few cases in which technological progress, occurring within the context of a market economy, has directly impoverished unskilled workers. In these instances, machines caused the value of a good that was produced in a labor-intensive sector to fall sharply, by increasing the production of that good so much as to satisfy all potential consumers.

The canonical example of this phenomenon is textiles in eighteenth- and nineteenth-century India and Britain. New machines made the exact same products that handloom weavers had been making, but they did so on a massive scale. Owing to limited demand, consumers were no longer willing to pay for what handloom weavers were producing. The value of wares produced by this form of unskilled labor plummeted, but the prices of commodities that unskilled laborers bought did not.

The lesson from history is not that the robots should be stopped; it is that we will need to confront the social-engineering and political problem of maintaining a fair balance of relative incomes across society. Toward that end, our task becomes threefold.

First, we need to make sure that governments carry out their proper macroeconomic role, by maintaining a stable, low-unemployment economy so that markets can function properly. Second, we need to redistribute wealth to maintain a proper distribution of income. Our market economy should promote, rather than undermine, societal goals that correspond to our values and morals. Finally, workers must be educated and trained to use increasingly high-tech tools (especially in labor-intensive industries), so that they can make useful things for which there is still demand.

Sounding the alarm about "artificial intelligence taking American jobs" does nothing to bring such policies about. Mnuchin is right: the rise of the robots should not be on a treasury secretary's radar.

anne , April 05, 2017 at 03:14 PM
https://minneapolisfed.org/research/wp/wp736.pdf

January, 2017

The Global Rise of Corporate Saving
By Peter Chen, Loukas Karabarbounis, and Brent Neiman

Abstract

The sectoral composition of global saving changed dramatically during the last three decades. Whereas in the early 1980s most of global investment was funded by household saving, nowadays nearly two-thirds of global investment is funded by corporate saving. This shift in the sectoral composition of saving was not accompanied by changes in the sectoral composition of investment, implying an improvement in the corporate net lending position. We characterize the behavior of corporate saving using both national income accounts and firm-level data and clarify its relationship with the global decline in labor share, the accumulation of corporate cash stocks, and the greater propensity for equity buybacks. We develop a general equilibrium model with product and capital market imperfections to explore quantitatively the determination of the flow of funds across sectors. Changes including declines in the real interest rate, the price of investment, and corporate income taxes generate increases in corporate profits and shifts in the supply of sectoral saving that are of similar magnitude to those observed in the data.

anne -> anne... , April 05, 2017 at 03:17 PM
http://www.nytimes.com/2010/07/06/opinion/06smith.html

July 6, 2010

Are Profits Hurting Capitalism?
By YVES SMITH and ROB PARENTEAU

A STREAM of disheartening economic news last week, including flagging consumer confidence and meager private-sector job growth, is leading experts to worry that the recession is coming back. At the same time, many policymakers, particularly in Europe, are slashing government budgets in an effort to lower debt levels and thereby restore investor confidence, reduce interest rates and promote growth.

There is an unrecognized problem with this approach: Reductions in deficits have implications for the private sector. Higher taxes draw cash from households and businesses, while lower government expenditures withhold money from the economy. Making matters worse, businesses are already plowing fewer profits back into their own enterprises.

Over the past decade and a half, corporations have been saving more and investing less in their own businesses. A 2005 report from JPMorgan Research noted with concern that, since 2002, American corporations on average ran a net financial surplus of 1.7 percent of the gross domestic product - a drastic change from the previous 40 years, when they had maintained an average deficit of 1.2 percent of G.D.P. More recent studies have indicated that companies in Europe, Japan and China are also running unprecedented surpluses.

The reason for all this saving in the United States is that public companies have become obsessed with quarterly earnings. To show short-term profits, they avoid investing in future growth. To develop new products, buy new equipment or expand geographically, an enterprise has to spend money - on marketing research, product design, prototype development, legal expenses associated with patents, lining up contractors and so on.

Rather than incur such expenses, companies increasingly prefer to pay their executives exorbitant bonuses, or issue special dividends to shareholders, or engage in purely financial speculation. But this means they also short-circuit a major driver of economic growth.

Some may argue that businesses aren't investing in growth because the prospects for success are so poor, but American corporate profits are nearly all the way back to their peak, right before the global financial crisis took hold.

Another problem for the economy is that, once the crisis began, families and individuals started tightening their belts, bolstering their bank accounts or trying to pay down borrowings (another form of saving).

If households and corporations are trying to save more of their income and spend less, then it is up to the other two sectors of the economy - the government and the import-export sector - to spend more and save less to keep the economy humming. In other words, there needs to be a large trade surplus, a large government deficit or some combination of the two. This isn't a matter of economic theory; it's based in simple accounting.

What if a government instead embarks on an austerity program? Income growth will stall, and household wages and business profits may fall....

anne -> anne... , April 05, 2017 at 03:21 PM
http://www.nakedcapitalism.com/2017/04/global-corporate-saving-glut.html

April 5, 2017

The Global Corporate Saving Glut
By Yves Smith

On the one hand, the VoxEU article does a fine job of assembling long-term data on a global basis. It demonstrates that the corporate savings glut is long standing and that it has been accompanied by a decline in personal savings.

However, it fails to depict what an unnatural state of affairs this is. The corporate sector as a whole in non-recessionary times ought to be net spending, as in borrowing and investing in growth. As a market-savvy buddy put it, "If a company isn't investing in the business of its business, why should I?" I attributed the corporate savings trend in the US to the fixation on quarterly earnings, which sources such as McKinsey partners with a broad view of the firms' projects were telling me was killing investment (any investment will have an income statement impact too, such as planning, marketing, design, and start up expenses). This post, by contrast, treats this development as lacking in any agency. Labor share of GDP dropped and savings rose. They attribute that to lower interest rates over time. They again fail to see that as the result of power dynamics and political choices....

[Apr 01, 2017] Amazon Web Services outage causes widespread internet problems

Apr 01, 2017 | www.cbsnews.com
Feb 28, 2017 6:03 PM EST NEW YORK -- Amazon's cloud-computing service, Amazon Web Services, experienced an outage in its eastern U.S. region Tuesday afternoon, causing unprecedented and widespread problems for thousands of websites and apps.

Amazon is the largest provider of cloud computing services in the U.S. Beginning around midday Tuesday on the East Coast, one region of its "S3" service based in Virginia began to experience what Amazon, on its service site, called "increased error rates."

In a statement, Amazon said as of 4 p.m. E.T. it was still experiencing "high error rates" that were "impacting various AWS services."

"We are working hard at repairing S3, believe we understand root cause, and are working on implementing what we believe will remediate the issue," the company said.

But less than an hour later, an update offered good news: "As of 1:49 PM PST, we are fully recovered for operations for adding new objects in S3, which was our last operation showing a high error rate. The Amazon S3 service is operating normally," the company said.

Amazon's Simple Storage Service, or S3, stores files and data for companies on remote servers. It's used for everything from building websites and apps to storing images, customer data and customer transactions.

"Anything you can think about storing in the most cost-effective way possible," is how Rich Mogull, CEO of data security firm Securosis, puts it.

Amazon has a strong track record of stability with its cloud computing service, CNET senior editor Dan Ackerman told CBS News.

"AWS... is known for having really good 'up time,'" he said, using industry language.

Over time, cloud computing has become a major part of Amazon's empire.

"Very few people host their own web servers anymore, it's all been outsourced to these big providers , and Amazon is one of the major ones," Ackerman said.

The problem Tuesday affected both "front-end" operations -- meaning the websites and apps that users see -- and back-end data processing that takes place out of sight. Some smaller online services, such as Trello, Scribd and IFTTT, appeared to be down for a while, although all have since recovered.

Some affected websites had fun with the crash, treating it like a snow day.

[Apr 01, 2017] After Amazon outage, HealthExpense worries about cloud lock-in by Maria Korolov

Notable quotes:
"... "From a sustainability and availability standpoint, we definitely need to look at our strategy to not be vendor specific, including with Amazon," said Lee. "That's something that we're aware of and are working towards." ..."
"... "Elastic load balances and other services make it easy to set up. However, it's a double-edged sword, because these types of services will also make it harder to be vendor-agnostic. When other cloud platform don't offer the same services, how do you wean yourself off of them?" ..."
"... Multi-year commitments are another trap, he said. And sometimes there's an extra unpleasant twist -- minimum usage requirements that go up in the later years, like balloon payments on a mortgage. ..."
Apr 01, 2017 | www.networkworld.com

The Amazon outage reminds companies that having all their eggs in one cloud basket might be a risky strategy

"That is the elephant in the room these days," said Lee. "More and more companies are starting to move their services to the cloud providers. I see attackers trying to compromise the cloud provider to get to the information."

If attackers can get into the cloud systems, that's a lot of data they could have access to. But attackers can also go after availability.

"The DDoS attacks are getting larger in scale, and with more IoT systems coming online and being very hackable, a lot of attackers can utilize that as a way to do additional attacks," he said.

And, of course, there's always the possibility of a cloud service outage for other reasons.

The 11-hour outage that Amazon suffered in late February was due to a typo, and affected Netflix, Reddit, Adobe and Imgur, among other sites.

"From a sustainability and availability standpoint, we definitely need to look at our strategy to not be vendor specific, including with Amazon," said Lee. "That's something that we're aware of and are working towards."

The problem is that Amazon offers some very appealing features.

"Amazon has been very good at providing a lot of services that reduce the investment that needs to be made to build the infrastructure," he said. "Elastic load balances and other services make it easy to set up. However, it's a double-edged sword, because these types of services will also make it harder to be vendor-agnostic. When other cloud platform don't offer the same services, how do you wean yourself off of them?"

... ... ...

"If you have a containerized approach, you can run in Amazon's container services, or on Azure," said Tim Beerman, CTO at Ensono , a managed services provider that runs its own cloud data center, manages on-premises environments for customers, and also helps clients run in the public cloud.

"That gives you more portability, you can pick something up and move it," he said.

But that, too, requires advance planning.

"It starts with the application," he said. "And you have to write it a certain way."

But the biggest contributing factor to cloud lock-in is data, he said.

"They make it really easy to put the data in, and they're not as friendly about taking that data out," he said.

The lack of friendliness often shows up in the pricing details.

"Usually the price is lower for data transfers coming into a cloud service provider versus the price to move data out," said Thales' Radford.

Multi-year commitments are another trap, he said. And sometimes there's an extra unpleasant twist -- minimum usage requirements that go up in the later years, like balloon payments on a mortgage.

[Mar 29, 2017] Job Loss in Manufacturing: More Robot Blaming

Mar 29, 2017 | economistsview.typepad.com
anne , March 29, 2017 at 06:11 AM
http://cepr.net/blogs/beat-the-press/job-loss-in-manufacturing-more-robot-blaming

March 29, 2017

It is striking how the media feel such an extraordinary need to blame robots and productivity growth, rather than trade, for the recent job loss in manufacturing. We got yet another example of this exercise in a New York Times piece * by Claire Cain Miller, with the title "Evidence That Robots Are Winning the Race for American Jobs." The piece highlights a new paper ** by Daron Acemoglu and Pascual Restrepo which finds that robots have a large negative impact on wages and employment.

While the paper has interesting evidence on the link between the use of robots and employment and wages, some of the claims in the piece do not follow. For example, the article asserts:

"The paper also helps explain a mystery that has been puzzling economists: why, if machines are replacing human workers, productivity hasn't been increasing. In manufacturing, productivity has been increasing more than elsewhere - and now we see evidence of it in the employment data, too."

Actually, the paper doesn't provide any help whatsoever in solving this mystery. Productivity growth in manufacturing has almost always been more rapid than productivity growth elsewhere. Furthermore, it has been markedly slower even in manufacturing in recent years than in prior decades. According to the Bureau of Labor Statistics, productivity growth in manufacturing has averaged less than 1.2 percent annually over the last decade and less than 0.5 percent over the last five years. By comparison, productivity growth averaged 2.9 percent a year in the half century from 1950 to 2000.

The article is also misleading in asserting:

"The paper adds to the evidence that automation, more than other factors like trade and offshoring that President Trump campaigned on, has been the bigger long-term threat to blue-collar jobs (emphasis added)."

In terms of recent job loss in manufacturing, and in particular the loss of 3.4 million manufacturing jobs between December of 2000 and December of 2007, the rise of the trade deficit has almost certainly been the more important factor. We had substantial productivity growth in manufacturing between 1970 and 2000, with very little loss of jobs. The growth in manufacturing output offset the gains in productivity. The new part of the story in the period from 2000 to 2007 was the explosion of the trade deficit to a peak of nearly 6.0 percent of GDP in 2005 and 2006.

It is also worth noting that we could in fact expect substantial job gains in manufacturing if the trade deficit were reduced. If the trade deficit fell by 2.0 percentage points of GDP ($380 billion a year) this would imply an increase in manufacturing output of more than 22 percent. If the productivity of the manufacturing workers producing this additional output was the same as the rest of the manufacturing workforce it would imply an additional 2.7 million jobs in manufacturing. That is more jobs than would be eliminated by productivity at the recent 0.5 percent growth rate over the next forty years, even assuming no increase in demand over this period.

While the piece focuses on the displacement of less-educated workers by robots and equivalent technology, it is likely that the areas where displacement occurs will be determined in large part by the political power of different groups. For example, it is likely that in the not-distant future improvements in diagnostic technology will allow a trained professional to make more accurate diagnoses than the best doctor. Robots are likely to be better at surgery than the best surgeon. The extent to which these technologies will be allowed to displace doctors is likely to depend more on the political power of the American Medical Association than on the technology itself.

Finally, the question of whether the spread of robots will lead to a transfer of income from workers to the people who "own" the robots will depend to a large extent on our patent laws. In the last four decades we have made patents longer and stronger. If we instead made them shorter and weaker, or, better, relied on open-source research, the price of robots would plummet and workers would be better positioned to capture the gains of productivity growth, as they had in prior decades. In this story it is not robots who are taking workers' wages, it is politicians who make strong patent laws.

* https://www.nytimes.com/2017/03/28/upshot/evidence-that-robots-are-winning-the-race-for-american-jobs.html

** http://economics.mit.edu/files/12154

-- Dean Baker

anne -> anne... , March 29, 2017 at 06:14 AM
https://fred.stlouisfed.org/graph/?g=d6j3

November 1, 2014

Total Factor Productivity at Constant National Prices for United States, 1950-2014


https://fred.stlouisfed.org/graph/?g=d6j7

November 1, 2014

Total Factor Productivity at Constant National Prices for United States, 1950-2014

(Indexed to 1950)

anne -> anne... , March 29, 2017 at 09:31 AM
https://fred.stlouisfed.org/graph/?g=dbjg

January 4, 2016

Manufacturing Multifactor Productivity, 1988-2014

(Indexed to 1988)


https://fred.stlouisfed.org/graph/?g=dbke

January 4, 2016

Manufacturing Multifactor Productivity, 2000-2014

(Indexed to 2000)

[Mar 29, 2017] I fear Summers at least as much as I fear robots

Mar 29, 2017 | economistsview.typepad.com
anne -> RC AKA Darryl, Ron... , March 29, 2017 at 06:17 AM
https://www.washingtonpost.com/news/wonk/wp/2017/03/27/larry-summers-mnuchins-take-on-artificial-intelligence-is-not-defensible/

March 27, 2017

The robots are coming, whether Trump's Treasury secretary admits it or not
By Lawrence H. Summers - Washington Post

As I learned (sometimes painfully) during my time at the Treasury Department, words spoken by Treasury secretaries can over time have enormous consequences, and therefore should be carefully considered. In this regard, I am very surprised by two comments made by Secretary Steven Mnuchin in his first public interview last week.

In reference to a question about artificial intelligence displacing American workers, Mnuchin responded that "I think that is so far in the future - in terms of artificial intelligence taking over American jobs - I think we're, like, so far away from that [50 to 100 years], that it is not even on my radar screen." He also remarked that he did not understand tech company valuations in a way that implied that he regarded them as excessive. I suppose there is a certain internal logic. If you think AI is not going to have any meaningful economic effects for a half a century, then I guess you should think that tech companies are overvalued. But neither statement is defensible.

Mnuchin's comment about the lack of impact of technology on jobs is to economics approximately what global climate change denial is to atmospheric science or what creationism is to biology. Yes, you can debate whether technological change is in net good. I certainly believe it is. And you can debate what the job creation effects will be relative to the job destruction effects. I think this is much less clear, given the downward trends in adult employment, especially for men over the past generation.

But I do not understand how anyone could reach the conclusion that all the action with technology is half a century away. Artificial intelligence is behind autonomous vehicles that will affect millions of jobs driving and dealing with cars within the next 15 years, even on conservative projections. Artificial intelligence is transforming everything from retailing to banking to the provision of medical care. Almost every economist who has studied the question believes that technology has had a greater impact on the wage structure and on employment than international trade and certainly a far greater impact than whatever increment to trade is the result of much debated trade agreements....

DrDick -> anne... , March 29, 2017 at 10:45 AM
Oddly, the robots are always coming in articles like Summers', but they never seem to get here. Automation has certainly played a role, but outsourcing has been a much bigger issue.
Peter K. -> DrDick ... , March 29, 2017 at 01:09 PM
I'm becoming increasing skeptical about the robots argument.
jonny bakho -> DrDick ... , March 29, 2017 at 05:13 PM
They are all over our manufacturing plants.
They just don't look like C3PO
JohnH -> RC AKA Darryl, Ron... , March 29, 2017 at 06:21 AM
I fear Summers at least as much as I fear robots...
Peter K. -> JohnH... , March 29, 2017 at 07:04 AM
He's just a big bully, like our PGL.

He has gotten a lot better and was supposedly pretty good when advising Obama, but he's sort of reverted to form with the election of Trump and the prominence of the debate on trade policy.

RC AKA Darryl, Ron -> JohnH... , March 29, 2017 at 07:15 AM
Ditto.

Technology rearranges and changes human roles, but it makes entries on both sides of the ledger. On net as long as wages grow then so will the economy and jobs. Trade deficits only help financial markets and the capital owning class.

Paine -> RC AKA Darryl, Ron... , March 29, 2017 at 09:59 AM
There is no limit to jobs
Macro policy and hours regulation
can create

We can both ration job hours And subsidize job wage rates
and at the same time
generate
As many jobs as wanted

All economic rents could be converted into wage subsidies
To boost the per hour income from jobs as well as incentivize diligence skill and creativity

RC AKA Darryl, Ron -> Paine... , March 29, 2017 at 12:27 PM
Works for me.
yuan -> Paine... , March 29, 2017 at 03:50 PM
jobs, jobs, jobs.

some day we will discard feudal concepts, such as working for the "man". a right to liberty and the pursuit of happiness is a right to income.

tax those bots!

yuan -> yuan... , March 29, 2017 at 03:51 PM
or better yet...collectivize the bots.
RGC -> RC AKA Darryl, Ron... , March 29, 2017 at 08:47 AM
Summers is a good example of those economists that never seem to pay a price for their errors.

Imo, he should never be listened to. His economics is faulty. His performance in the Clinton administration and his part in the Russian debacle should be enough to consign him to anonymity. People would do well to ignore him.

Peter K. -> RGC... , March 29, 2017 at 09:36 AM
Yeah he's one of those expert economists and technocrats who never admit fault. You don't become Harvard President or Secretary of the Treasury by doing that.

One time that Krugman admitted error was about productivity gains in the 1990s. He said he didn't see the gains from computers in the numbers, and they weren't there at first, but later the productivity numbers did increase.

It was sort of like what Summers and Munchkin are discussing, but there's all sorts of debate about measuring productivity and what it means.

RC AKA Darryl, Ron -> RGC... , March 29, 2017 at 12:29 PM
Yeah. I am not a fan of Summers's, but I do like summers as long as it does not rain too much or too little and I have time to fish.

[Mar 29, 2017] SDB: Midnight Commander tips

openSUSE
Using the mouse

Although Midnight Commander is a text-mode application, it can make use of the mouse. The mc delivered with openSUSE will use the mouse when run in a GUI terminal, without any further configuration needed.

The text-mode terminal that we get when booting into runlevel 2 or 3 is a bit of a different story. You have to install the package gpm ("general purpose mouse"), also called the mouse server; gpm is what Linux uses on the console to receive mouse movements and clicks. Start gpm and then start Midnight Commander.
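
A minimal sketch of those steps on a recent openSUSE; the service commands are assumptions (systemd-based releases use systemctl, older releases used the rcgpm init script):

zypper install gpm      # install the mouse server (as root)
systemctl start gpm     # start it (systemd releases; on older systems: rcgpm start)
mc                      # mc can now use the mouse on the text console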

If you switch to the text terminal using Ctrl + Alt + F1, gpm will not work, because the driver that belongs to the GUI (X server) still claims control over the mouse.

... ... ...

FTP browsing

This is file browsing on a remote FTP server, just as you would browse files on your own computer.

  1. Press F9 to open the drop-down menus at the top of the screen.
  2. Press Alt + L if you want to use the left panel, or Alt + R for the right panel.
  3. Press Alt + P for the input box where you enter the server name. Enter, for instance,
ftp.gwdg.de/pub

and press Enter.

Now mc will try an anonymous connection to the remote machine. If the machine responds, you'll get a directory listing of /pub on the remote server.

It is possible to do the same from the mc command line by typing:

cd /#ftp:ftp.gwdg.de/pub 
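
A couple of variations on the same idea; the exact VFS syntax differs between mc versions, and the user name and host in the first line are placeholders, so treat this as a sketch:

cd /#ftp:username@ftp.example.com/pub    # log in as a named user; mc prompts for the password
cd ftp://ftp.gwdg.de/pub                 # newer mc versions also accept URL-style paths
cd /tmp                                  # changing to a local directory leaves the FTP listing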

Archive browsing

An archive, in the classic meaning, is a compressed file. In Linux you can recognize them by suffixes like:

tgz, tar.gz, tbz, tar.bz2

and many more, but the few above are the most commonly used. To look inside an archive:

  1. Highlight the file
  2. Press Enter

That's it. Midnight Commander will decompress the file for you and present its internal structure like any other directory. If you want to extract one or all files from the archive, mark what you want to extract and use F5 to copy it to the other panel. Done.
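
The same thing can be done from the shell outside mc; a minimal sketch for a gzip-compressed tarball, with archive.tar.gz and path/to/file standing in for real names:

tar tzf archive.tar.gz                 # list the contents
tar xzf archive.tar.gz path/to/file    # extract a single file
tar xzf archive.tar.gz                 # extract everything into the current directory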

RPM browsing

The package installation files for any SUSE release are RPMs, and mc will let you browse them.

  1. Highlight the file
  2. Press Enter

You'll see a few entries:

/INFO
CONTENTS.cpio
HEADER
*INSTALL
*UPGRADE

Browse to see details of your RPM.

CONTENTS.cpio is the actual archive containing the files; if you want to see what is inside:

  1. Highlight the file
  2. Press Enter

(You know the drill)

The *INSTALL and *UPGRADE entries will do what their names suggest, but if you only want to extract one or more files from CONTENTS.cpio, then use F5 to copy them to the directory in the other panel.
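
If you prefer the shell, the equivalent inspection looks roughly like this (package.rpm is a placeholder for a real file name):

rpm -qlp package.rpm                   # list the files the package would install
rpm2cpio package.rpm | cpio -idmv      # unpack the payload into the current directory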

PuTTY and line drawing

PuTTY is a terminal application used to access remote computers running Linux via SSH (see SSH tunnels from Microsoft Windows for details). Line drawing in Midnight Commander, YaST, and other applications that draw lines using special characters can come out garbled, displayed as something else entirely. The solution is to change these settings:

  • menu: Window > Translation:
    • Received data assumed to be in which character set: UTF-8
    • Handling of line drawing characters: Use Unicode for line drawing

If that doesn't help, you may set this too:

  • menu: Connection > Connection-type string: linux
  • menu: Terminal > Keyboard > The Function keys and keypad: Linux

Found on webmilhouse.com.
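
If the lines are still broken after those PuTTY changes, two workarounds on the remote side are worth trying; they are assumptions about a typical setup, not part of the original tip:

export LANG=en_US.UTF-8    # make the remote locale match PuTTY's UTF-8 translation setting
mc -a                      # --stickchars: fall back to plain ASCII characters for line drawing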

User menu (F2 key) add-on

Diffs in color

Tip by James Ogley:

+ t r & ! t t
d       Diff against file of same name in other directory
        if [ "%d" = "%D" ]; then
          echo "The two directores must be different"
          exit 1
        fi
        if [ -f %D/%f ]; then        # if two of them, then
          diff -up %f %D/%f | sed -e 's/\(^-.*\)/\x1b[1;31m\1\x1b[0m/g' \
                                  -e 's/\(^\+.*\)/\x1b[1;32m\1\x1b[0m/g' \
                                  -e 's/\(^@.*\)/\x1b[36m\1\x1b[0m/g' | less -R
        else
          echo %f: No copy in %D/%f
        fi

D       Diff current directory against other directory
        if [ "%d" = "%D" ]; then
          echo "The two directores must be different"
          exit 1
        fi
        diff -up %d %D | sed -e 's/\(^-.*\)/\x1b[1;31m\1\x1b[0m/g' \
                             -e 's/\(^\+.*\)/\x1b[1;32m\1\x1b[0m/g' \
                             -e 's/\(^@.*\)/\x1b[36m\1\x1b[0m/g' | less -R
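
To use the add-on, append the snippet above to your personal menu file and call it with F2. A rough sketch; the paths differ between mc versions, and diff-color.menu is just a hypothetical file holding the snippet:

# From inside mc: F9 -> Command -> Edit menu file opens the user menu directly.
cat diff-color.menu >> ~/.config/mc/menu    # newer mc releases
cat diff-color.menu >> ~/.mc/menu           # older mc releases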

[Mar 24, 2017] There is no such thing as an automated factory. Manufacturing is done by people, *assisted* by automation. Or only part of the production pipeline is automated, but people are still needed to fill in the not-automated pieces

Notable quotes:
"... And it is not only automation vs. in-house labor. There is environmental/compliance cost (or lack thereof) and the fully loaded business services and administration overhead, taxes, etc. ..."
"... When automation increased productivity in agriculture, the government guaranteed free high school education as a right. ..."
"... Now Democrats like you would say it's too expensive. So what's your solution? You have none. You say "sucks to be them." ..."
"... And then they give you the finger and elect Trump. ..."
"... It wasn't only "low-skilled" workers but "anybody whose job could be offshored" workers. Not quite the same thing. ..."
"... It also happened in "knowledge work" occupations - for those functions that could be separated and outsourced without impacting the workflow at more expense than the "savings". And even if so, if enough of the competition did the same ... ..."
"... And not all outsourcing was offshore - also to "lowest bidders" domestically, or replacing "full time" "permanent" staff with contingent workers or outsourced "consultants" hired on a project basis. ..."
"... "People sure do like to attribute the cause to trade policy." Because it coincided with people watching their well-paying jobs being shipped overseas. The Democrats have denied this ever since Clinton and the Republicans passed NAFTA, but finally with Trump the voters had had enough. ..."
"... Why do you think Clinton lost Wisconsin, Michigan, Pennysylvania and Ohio? ..."
Feb 20, 2017 |