Softpanorama

May the source be with you, but remember the KISS principle ;-)

Slightly Skeptical View on Enterprise Unix Administration



The KISS rule can be expanded as: Keep It Simple, Sysadmin ;-)

This page is written as a protest against the overcomplexity and the bizarre data center atmosphere dominant in "semi-outsourced" or fully outsourced datacenters ;-). Unix/Linux sysadmins are being killed by the overcomplexity of the environment. As Charlie Schluting noted in 2010 (Enterprise Networking Planet, April 7, 2010):

What happened to the old "sysadmin" of just a few years ago? We've split what used to be the sysadmin into application teams, server teams, storage teams, and network teams. There were often at least a few people, the holders of knowledge, who knew how everything worked, and I mean everything. Every application, every piece of network gear, and how every server was configured -- these people could save a business in times of disaster.

Now look at what we've done. Knowledge is so decentralized we must invent new roles to act as liaisons between all the IT groups.

Architects now hold much of the high-level "how it works" knowledge, but without knowing how any one piece actually does work.

In organizations with more than a few hundred IT staff and developers, it becomes nearly impossible for one person to do and know everything. This movement toward specializing in individual areas seems almost natural. That, however, does not provide a free ticket for people to turn a blind eye.

Specialization

You know the story: Company installs new application, nobody understands it yet, so an expert is hired. Often, the person with a certification in using the new application only really knows how to run that application. Perhaps they aren't interested in learning anything else, because their skill is in high demand right now. And besides, everything else in the infrastructure is run by people who specialize in those elements. Everything is taken care of.

Except, how do these teams communicate when changes need to take place? Are the storage administrators teaching the Windows administrators about storage multipathing; or worse, logging in and setting it up because it's faster for the storage gurus to do it themselves? A fundamental level of knowledge is often lacking, which makes it very difficult for teams to brainstorm about new ways to evolve IT services. The business environment has made it OK for IT staffers to specialize and only learn one thing.

If you hire someone certified in the application, operating system, or network vendor you use, that is precisely what you get. Certifications may be a nice filter to quickly identify who has direct knowledge in the area you're hiring for, but often they indicate specialization or compensation for lack of experience.

Resource Competition

Does your IT department function as a unit? Even 20-person IT shops have turf wars, so the answer is very likely, "no." As teams are split into more and more distinct operating units, grouping occurs. One IT budget gets split between all these groups. Often each group will have a manager who pitches his needs to upper management in hopes they will realize how important the team is.

The "us vs. them" mentality manifests itself at all levels, and it's reinforced by management having to define each team's worth in the form of a budget. One strategy is to illustrate a doomsday scenario. If you paint a bleak enough picture, you may get more funding, but only if you are careful to show that the failings are due to lack of capital resources, not management or people. A manager of another group may explain that they are not receiving the correct level of service, so they need to duplicate the efforts of another group and just implement something themselves. On and on, the arguments continue.

Most often, I've seen competition between server groups result in horribly inefficient uses of hardware. For example, what happens in your organization when one team needs more server hardware? Assume that another team has five unused servers sitting in a blade chassis. Does the answer change? No, it does not. Even in test environments, sharing doesn't often happen between IT groups.

With virtualization, some aspects of resource competition get better and some remain the same. When first implemented, most groups will be running their own type of virtualization for their platform. The next step, I've most often seen, is for test servers to get virtualized. If a new group is formed to manage the virtualization infrastructure, virtual machines can be allocated to various application and server teams from a central pool and everyone is now sharing. Or, they begin sharing and then demand their own physical hardware to be isolated from others' resource hungry utilization. This is nonetheless a step in the right direction. Auto migration and guaranteed resource policies can go a long way toward making shared infrastructure, even between competing groups, a viable option.

Blamestorming

The most damaging side effect of splitting into too many distinct IT groups is the reinforcement of an "us versus them" mentality. Aside from the notion that specialization creates a lack of knowledge, blamestorming is what this article is really about. When a project is delayed, it is all too easy to blame another group. The SAN people didn't allocate storage on time, so another team was delayed. That is the timeline of the project, so all work halted until that hiccup was resolved. Having someone else to blame when things get delayed makes it all too easy to simply stop working for a while.

More related to the initial points at the beginning of this article, perhaps, is the blamestorm that happens after a system outage.

Say an ERP system becomes unresponsive a few times throughout the day. The application team says it's just slowing down, and they don't know why. The network team says everything is fine. The server team says the application is "blocking on IO," which means it's a SAN issue. The SAN team says there is nothing wrong, and other applications on the same devices are fine. You've run through nearly every team, but still have no answer. The SAN people don't have access to the application servers to help diagnose the problem. The server team doesn't even know how the application runs.

See the problem? Specialized teams are distinct and by nature adversarial. Specialized staffers often relegate themselves into a niche knowing that as long as they continue working at large enough companies, "someone else" will take care of all the other pieces.

I unfortunately don't have an answer to this problem. Maybe rotating employees between departments would help: they gain knowledge and also get to know other people, which should lessen the propensity to view them as outsiders.

Additional useful material on the topic can also be found in my older article Solaris vs Linux:

Abstract

Introduction

Nine factors framework for comparison of two flavors of Unix in a large enterprise environment

Four major areas of Linux and Solaris deployment

Comparison of internal architecture and key subsystems

Security

Hardware: SPARC vs. X86

Development environment

Solaris as a cultural phenomenon

Using Solaris-Linux enterprise mix as the least toxic Unix mix available

Conclusions

Acknowledgements

Webliography

Here are my notes/reflections on the sysadmin's plight in the strange (and typically pretty toxic) IT departments of large corporations:

 



NEWS CONTENTS

Old News ;-)


"I appreciate Woody Allen's humor because one of my safety valves is an appreciation for life's absurdities. His message is that life isn't a funeral march to the grave. It's a polka."

-- Dennis Kucinich

[Jun 18, 2018] Copy and paste text in midnight commander (MC) via putty in Linux

Notable quotes:
"... IF you're using putty in either Xorg or Windows (i.e terminal within a gui) , it's possible to use the "conventional" right-click copy/paste behavior while in mc. Hold the shift key while you mark/copy. ..."
"... Putty has ability to copy-paste. In mcedit, hold Shift and select by mouse ..."
Jun 18, 2018 | superuser.com

Den ,Mar 1, 2015 at 22:50

I use Midnight Commander (MC) editor over putty to edit files

I want to know how to copy text from one file, close it then open another file and paste it?

If it is not possible with Midnight Commander, is there another easy way to copy and paste specific text from different files?

szkj ,Mar 12, 2015 at 22:40

I would do it like this:
  1. switch to block selection mode by pressing F3
  2. select a block
  3. switch off block selection mode with F3
  4. press Ctrl+F which will open Save block dialog
  5. press Enter to save it to the default location
  6. open the other file in the editor, and navigate to the target location
  7. press Shift+F5 to open Insert file dialog
  8. press Enter to paste from the default file location (which is same as the one in Save block dialog)

NOTE: There are other, environment-related methods that may be more convenient nowadays, but the one above does not depend on any desktop-environment clipboard or terminal emulator feature (PuTTY, Xorg, etc.). It is a pure mcedit feature which works everywhere.

Andrejs ,Apr 28, 2016 at 8:13

To copy: (hold) Shift + Select with mouse (copies to clipboard)

To paste in windows: Ctrl+V

To paste in another file in PuTTY/MC: Shift + Ins

Piotr Dobrogost ,Mar 30, 2017 at 17:32

If you get unwanted indents in what was pasted, then while editing a file in Midnight Commander press F9 to show the top menu and, in the Options > General menu, uncheck the "Return does autoindent" option. Yes, I was happy when I found it too :) – Piotr Dobrogost Mar 30 '17 at 17:32

mcii-1962 ,May 26, 2015 at 13:17

If you're using PuTTY in either Xorg or Windows (i.e., a terminal within a GUI), it's possible to use the "conventional" right-click copy/paste behavior while in mc. Hold the Shift key while you mark/copy.

Eden ,Feb 15, 2017 at 4:09

  1. Hold down the Shift key, and drag the mouse through the text you want to copy. The text's background will become dark orange.
  2. Release the Shift key and press Shift + Ctrl + c . The text will be copied.
  3. Now you can paste the text to anywhere you want by pressing Shift + Ctrl + v , even to the new page in MC.

xoid ,Jun 6, 2016 at 6:37

Putty has ability to copy-paste. In mcedit, hold Shift and select by mouse

mcii-1962 ,Jun 20, 2016 at 23:01

LOL - did you actually read the other answers? And your answer is incomplete, you should include what to do with the mouse in order to "select by mouse".
According to help in MC:

Ctrl + Insert copies to the mcedit.clip, and Shift + Insert pastes from mcedit.clip.

It doesn't work for me for some reason, but pressing F9 brings up a menu: Edit > Copy to clipfile - that worked fine.

[Jun 18, 2018] My Favorite Tool - Midnight Commander by Colin Sauze

Notable quotes:
"... "what did I just press and what did it do?" ..."
"... Underneath it's got lots of powerful features like syntax highlighting, bracket matching, regular expression search and replace, and spell checking. ..."
"... I use Mcedit for most of my day-to-day text editing, although I do switch to heavier weight GUI-based editors when I need to edit lots of files at once. ..."
Jun 18, 2018 | software-carpentry.org

I've always hated the Vi vs Emacs holy war that many Unix users like to wage, and I find that both editors have serious shortcomings and definitely aren't something I'd recommend a beginner use. Pico and Nano are certainly easier to use, but they always feel a bit lacking in features and clunky to me.

Mcedit runs from the command line but has a colourful, GUI-like interface. You can use the mouse if you want, but I generally don't.

If you're old enough to have used DOS, then it's very reminiscent of the "edit" text editor that was built into MS-DOS 5 and 6, except it's full of powerful features that still make it a good choice in 2018. It has a nice intuitive interface based around the F keys on the keyboard and a pull-down menu which can be accessed by pressing F9 .

It's really easy to use and you're told about all the most important key combinations on screen and the rest can all be discovered from the menus. I find this far nicer than Vi or Emacs where I have to constantly look up key combinations or press a key by mistake and then have the dreaded "what did I just press and what did it do?" thought.

Underneath it's got lots of powerful features like syntax highlighting, bracket matching, regular expression search and replace, and spell checking.

I use Mcedit for most of my day-to-day text editing, although I do switch to heavier weight GUI-based editors when I need to edit lots of files at once. I just wish more people knew about it and then it might be installed by default on more of the shared systems and HPCs that I have to use!

[Jun 17, 2018] Midnight Commander Guide

Jun 17, 2018 | www.nawaz.org

The guide's editing chapter covers: Selecting Text, Navigation, Replacing Text, Saving, Syntax Highlighting, More Options, and Some Comments about Editing (the screenshots from the original page are not reproduced here).

[Jun 14, 2018] Changing shortcuts in midnight commander by rride

Feb 04, 2018 | www.queryxchange.com

I haven't found anything on the topic on the Internet. The only line from .mc/ini that looks related to the question is keymap=mc.keymap, but I have no idea what to do with it.

Tags : linux keyboard-shortcuts midnight-commander

Okiedokie... let's see
$ man-section mc | head -n20
mc (1)
--
 Name
 Usage
 Description
 Options
 Overview
 Mouse support
 Keys
 Redefine hotkey bindings

8th section... is that possible? Let's look

man mc (scroll,scroll,scroll)

Redefine hotkey bindings
    Hotkey bindings may be read from external file (keymap-file).  A keymap-
    file is searched on the following algorithm  (to the first one found):

     1) command line option -K <keymap> or --keymap=<keymap>
     2) Environment variable MC_KEYMAP
     3) Parameter keymap in section [Midnight-Commander] of config file.
     4) File ~/.config/mc/mc.keymap
     5) File /etc/mc/mc.keymap
     6) File /usr/share/mc/mc.keymap

Bingo!

cp /etc/mc/mc.keymap ~/.config/mc/

Now edit the key mappings as you like and save ~/.config/mc/mc.keymap when done

For more info, read the Keys ( man mc ) section and the three sections following that.
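For orientation, here is a hedged sketch of what a personal tweak can look like once you have your own copy. Entries are grouped into sections (such as [editor] or [main]) with one Action = key[; key] pair per line; the action name below follows the stock keymap shipped with recent mc releases, but your version may differ, so copy the real names from the file you just copied rather than trusting this sketch:

# ~/.config/mc/mc.keymap (illustrative excerpt)
[editor]
# keep the stock F2 binding for saving and add Ctrl-s as a second key
Save = f2; ctrl-s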


$ cat /home/jaroslav/bin/man-sections 
#!/bin/sh
MANPAGER=cat man $@ | grep -E '^^[[1m[A-Z]{3,}'

[Jun 13, 2018] How the mc ini file is stored

Jun 13, 2018 | superuser.com

The configuration is stored in

$HOME/.config/mc/

In your case edit the file $HOME/.config/mc/ini . You can check which files are actually read in by midnight-commander using strace :

strace -e trace=open -o mclog mc

[Jun 13, 2018] Temporary Do Something Else while editing/viewing a file

Jun 13, 2018 | www.nawaz.org

[Jun 13, 2018] My Screen is Garbled Up

Jun 13, 2018 | www.nawaz.org

[Jun 13, 2018] Find file shows no results

Jun 13, 2018 | wiki.archlinux.org

If the Find file dialog (accessible with Alt+? ) shows no results, check the current directory for symbolic links. Find file does not follow symbolic links, so use bind mounts (see mount(8) ) instead, or the External panelize command.
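A hedged illustration of the bind-mount workaround (the paths are placeholders; run as root or via sudo):

# make the directory a symlink points to visible to Find file,
# which does not follow symbolic links
mkdir -p ./project-src-mounted
mount --bind /srv/real/project-src ./project-src-mounted
# search with Alt+? as usual, then clean up:
umount ./project-src-mounted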

[Jun 13, 2018] Draft of documentation for Midnight Commander

Jun 13, 2018 | midnight-commander.org

Table of content

  1. Introduction
  2. Getting sources
  3. Making and installing
  4. Ini-options setup
  5. Usage
  6. Migration to keybindings in 4.8.x series
  7. How to report about bugs
  8. Frequently asked questions

[Jun 13, 2018] Trash support

Jun 13, 2018 | wiki.archlinux.org

Midnight Commander does not support a trash can by default.

Using libtrash

Install the libtrash AUR package, and create an mc alias in the initialization file of your shell (e.g., ~/.bashrc or ~/.zshrc ):

alias mc='LD_PRELOAD=/usr/lib/libtrash.so.3.3 mc'

To apply the changes, reopen your shell session or source the shell initialization file.

Default settings are defined in /etc/libtrash.conf.sys . You can overwrite these settings per-user in ~/.libtrash , for example:

TRASH_CAN = .Trash
INTERCEPT_RENAME = NO
IGNORE_EXTENSIONS= o;exe;com
UNCOVER_DIRS=/dev

Now files deleted by Midnight Commander (launched with mc ) will be moved to the ~/.Trash directory.


[Jun 13, 2018] Mcedit is actually a multiwindow editor

Opening another file in the editor will create a second window. You can list the windows using F9 / Window / List.
That allows you to copy and paste selections between different files while staying in the editor.
Jun 13, 2018 | www.unix.com

Many people don't know that mc has a multi-window text-editor built-in (eerily disabled by default) with macro capability and all sorts of goodies. run

mc -e my.txt

to edit directly.

[Jun 13, 2018] Make both panels display the same directory

Jun 13, 2018 | www.fredshack.com

ALT+i. If that doesn't work, try ESC+i

[Jun 13, 2018] Opening editor in another screen or tmux window

Jun 13, 2018 | www.queryxchange.com

by user2252728 Last Updated May 15, 2015 11:14 AM


The problem

I'm using tmux and I want MC to open files for editing in another tmux window, so that I can keep browsing files while editing.

What I've tried

MC checks if the EDITOR variable is set and then interprets it as the program for editing, so if I do export EDITOR=vim then MC will use vim to open files.

I've tried to build on that:

function foo () { tmux new-window "vim $1"; }
export EDITOR=foo

If I do $EDITOR some_file then I get the file open in vim in another tmux windows - exactly what I wanted.

Sadly, when I try to edit in MC it goes blank for a second and then returns to normal MC window. MC doesn't seem to keep any logs and I don't get any error message.

The question(s)

Tags : midnight-commander

Answers 1
You are defining a shell function, which is unknown to mc when it tries to start the editor.

The correct way is to create a bash script, not a function. Then set EDITOR value to it, for example:

$ cat ~/myEditor.sh
#!/bin/sh
tmux new-window "vim $1"

export EDITOR=~/myEditor.sh
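To round that out (a hedged addition, not part of the original answer): make the script executable and put the EDITOR export somewhere persistent, such as ~/.bashrc, so mc picks it up in future sessions:

chmod +x ~/myEditor.sh
echo 'export EDITOR=~/myEditor.sh' >> ~/.bashrc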


[Jun 13, 2018] How to exclude some pattern when doing a search in MC

Mar 25, 2018 | www.queryxchange.com

In Midnight Commander, is it possible to exclude some directories/patterns/... when doing a search ( M-? )? I'm specifically interested in skipping the .hg subdirectory.


Answers 1
In the "[Misc]" section of your ~/.mc/ini file, you can specify the directories you wish to skip in the "find_ignore_dirs" setting.

To specify multiple directories, use a colon (":") as the delimiter.
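A minimal sketch of such an entry (the .hg value is the asker's case; the other directory names are just illustrative additions):

[Misc]
find_ignore_dirs=.hg:.git:node_modules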

[Jun 13, 2018] Midnight Commander tab completion

Sep 17, 2011 | superuser.com
You can get tab-completion by pressing ESC then TAB . You can also get the currently highlighted file/subdir name onto the command line with ESC-ENTER.

[Jun 13, 2018] mc-wrapper does not exit to MC_PWD directory

Jun 13, 2018 | www.queryxchange.com

I recently installed openSUSE 13.1 and set up mc in the typical way, by aliasing mc with mc-wrapper.sh so that it exits into the last working directory of the mc instance. However, this does not seem to be working. I tried to debug the mc-wrapper.sh script with the echo commands shown below.

MC_USER=`id | sed 's/[^(]*(//;s/).*//'`
MC_PWD_FILE="${TMPDIR-/tmp}/mc-$MC_USER/mc.pwd.$$"
/usr/bin/mc -P "$MC_PWD_FILE" "$@"

if test -r "$MC_PWD_FILE"; then
        MC_PWD="`cat "$MC_PWD_FILE"`"
        if test -n "$MC_PWD" && test -d "$MC_PWD"; then
                echo "will cd in : $MC_PWD"
                cd $MC_PWD
                echo $(pwd)
        fi
        unset MC_PWD
fi

rm -f "$MC_PWD_FILE"
unset MC_PWD_FILE
echo $(pwd)

To my surprise, mc-wrapper.sh does change the directory and is in that directory before exiting, but back at the bash prompt the working directory is the one from which the script was invoked.

Can it be that some bash settings is required for this to work?

Tags : linux bash shell midnight-commander

Answers 1
The wrapper changes the directory inside a child process, so the parent shell never sees the change; it only works when it is sourced into the current shell. With that in mind, a working solution for the bash shell is:
alias mc='source /usr/lib/mc/mc-wrapper.sh'

OR

alias mc='. /usr/lib/mc/mc-wrapper.sh'

[Jun 13, 2018] How to enable find-as-you-type behavior

Jun 13, 2018 | www.queryxchange.com

Alt + S will show the "quick search" in Midnight Commander.

[Jun 13, 2018] How to expand the command line to the whole screen in MC

Jun 13, 2018 | www.queryxchange.com

You can hide the Midnight Commander Window by pressing Ctrl + O . Press Ctrl + O again to return back to Midnight Commander.

[Jun 13, 2018] MC Tips Tricks

Jun 13, 2018 | www.fredshack.com

If MC displays funny characters, make sure the terminal emulator uses UTF8 encoding.

Smooth scrolling

vi ~/.mc/ini (per user) or /etc/mc/mc.ini (system-wide):

panel_scroll_pages=0

Make both panels display the same directory

ALT+i. If that doesn't work, try ESC+i

Navigate through history

ESC+y to go back to the previous directory, ESC+u to go to the next

Options > Configuration > Lynx-like motion doesn't go through the navigation history; rather, it jumps in/out of a directory so the user doesn't have to hit PageUp followed by Enter

Loop through all items starting with the same letter

CTRL+s followed by the letter to jump to the first occurrence, then keep hitting CTRL+s to loop through the list

Customize keyboard shortcuts

Check mc.keymap

[Jun 13, 2018] MC_HOME allows you to run mc with an alternative ini file

Notable quotes:
"... MC_HOME variable can be set to alternative path prior to starting mc. Man pages are not something you can find the answer right away =) ..."
"... A small drawback of this solution: if you set MC_HOME to a directory different from your usual HOME, mc will ignore the content of your usual ~/.bashrc so, for example, your custom aliases defined in that file won't work anymore. Workaround: add a symlink to your ~/.bashrc into the new MC_HOME directory ..."
"... at the same time ..."
Jun 13, 2018 | unix.stackexchange.com

Tagwint ,Dec 19, 2014 at 16:41

That turned out to be simpler than one might think. The MC_HOME variable can be set to an alternative path prior to starting mc. Man pages are not somewhere you find the answer right away =)

Here's how it works. The usual way:

[jsmith@wstation5 ~]$ mc -F
Root directory: /home/jsmith

[System data]
<skipped>

[User data]
    Config directory: /home/jsmith/.config/mc/
    Data directory:   /home/jsmith/.local/share/mc/
        skins:          /home/jsmith/.local/share/mc/skins/
        extfs.d:        /home/jsmith/.local/share/mc/extfs.d/
        fish:           /home/jsmith/.local/share/mc/fish/
        mcedit macros:  /home/jsmith/.local/share/mc/mc.macros
        mcedit external macros: /home/jsmith/.local/share/mc/mcedit/macros.d/macro.*
    Cache directory:  /home/jsmith/.cache/mc/

and the alternative way:

[jsmith@wstation5 ~]$ MC_HOME=/tmp/MCHOME mc -F
Root directory: /tmp/MCHOME

[System data]
<skipped>    

[User data]
    Config directory: /tmp/MCHOME/.config/mc/
    Data directory:   /tmp/MCHOME/.local/share/mc/
        skins:          /tmp/MCHOME/.local/share/mc/skins/
        extfs.d:        /tmp/MCHOME/.local/share/mc/extfs.d/
        fish:           /tmp/MCHOME/.local/share/mc/fish/
        mcedit macros:  /tmp/MCHOME/.local/share/mc/mc.macros
        mcedit external macros: /tmp/MCHOME/.local/share/mc/mcedit/macros.d/macro.*
    Cache directory:  /tmp/MCHOME/.cache/mc/

Use case of this feature:

You have to share the same user name on a remote server (access can be distinguished by RSA keys) and want to use your favorite mc configuration without overwriting it. Concurrent sessions do not interfere with each other.

This works well as a part of sshrc-approach described in https://github.com/Russell91/sshrc

Cri ,Sep 5, 2016 at 10:26

A small drawback of this solution: if you set MC_HOME to a directory different from your usual HOME, mc will ignore the content of your usual ~/.bashrc so, for example, your custom aliases defined in that file won't work anymore. Workaround: add a symlink to your ~/.bashrc into the new MC_HOME directory – Cri Sep 5 '16 at 10:26

goldilocks ,Dec 18, 2014 at 16:03

If you mean, you want to be able to run two instances of mc as the same user at the same time with different config directories, as far as I can tell you can't. The path is hardcoded.

However, if you mean you want to be able to switch which config directory is being used, here's an idea (tested, works); a sketch along these lines follows below. You probably want to do it without mc running:

Hopefully it's clear what's happening there: the config directory path is set up as a symlink. Whatever configuration changes you now make and save will be in the one directory. You can then exit and switch_mc two, reverting to the old config, then start mc again, make changes and save them, etc.
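A hedged sketch of what such a switch_mc helper could look like (the name switch_mc and the idea of symlinking the config directory come from the answer above; every other detail is an assumption):

#!/bin/sh
# switch_mc: point ~/.config/mc at one of several saved profile directories
# so differently configured mc setups can be swapped between runs.
set -e
profile="$1"
[ -n "$profile" ] || { echo "usage: switch_mc <profile>" >&2; exit 1; }
killall mc 2>/dev/null || true              # don't swap the config under a running mc
mkdir -p "$HOME/.config/mc-$profile"        # per-profile config directory
if [ -e "$HOME/.config/mc" ] && [ ! -L "$HOME/.config/mc" ]; then
    echo "~/.config/mc is a real directory; move it aside first" >&2
    exit 1
fi
rm -f "$HOME/.config/mc"                    # drop the old symlink, if any
ln -s "$HOME/.config/mc-$profile" "$HOME/.config/mc"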

You could get away with removing the killall mc and playing around; the configuration stuff is in the ini file, which is read at start-up (so you can't switch on the fly this way). It's then not touched until exit unless you "Save setup", but at exit it may be overwritten, so the danger here is that you erase something you did earlier or outside of the running instance.

Tagwint ,Dec 18, 2014 at 16:52

That works indeed; your idea is pretty clear, thank you for your time. However, my idea was to be able to run differently configured mc's under the same account without them interfering with each other. I should have specified that in my question. The path to the config dir is in fact hardcoded, but it is hardcoded RELATIVE to the user's home dir, that is, the value of $HOME, thus changing it before mc starts DOES change the config dir location - I've checked that. The drawback is that $HOME stays changed as long as mc runs, which could be resolved if mc had a kind of startup hook to restore the original HOME – Tagwint Dec 18 '14 at 16:52

Tagwint ,Dec 18, 2014 at 17:17

I've extended my original q with 'same time' condition - it did not fit in my prev comment size limitation – Tagwint Dec 18 '14 at 17:17

[Jun 13, 2018] Editing mc.ini

Jun 07, 2014 | superuser.com
mc / mcedit has a config option called auto_save_setup which is enabled by default. This option automatically saves your current setup upon exiting. The problem occurs when you try to edit ~/.config/mc/ini using mcedit . It will overwrite whatever changes you made upon exiting, so you must edit the ~/.config/mc/ini using a different editor such as nano .

Source: https://linux.die.net/man/1/mc (search for "Auto Save Setup")
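An alternative, sketched here with the caveat that the exact syntax is an assumption (the option name comes from the text above; the boolean form mirrors other mc ini options): disable the auto-save behaviour itself by adding the line below with an external editor, so later hand edits to the ini survive an mc exit:

[Midnight-Commander]
auto_save_setup=false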

[Jun 13, 2018] Running mc with your own skin

Jun 13, 2018 | help.ubuntu.com

put

export TERM="xterm-256color"

at the bottom (top, if ineffective) of your ~/.bashrc file. Thus you can load skins as in

mc -S sand256.ini

In

/home/you/.config/mc/ini

have the lines:

[Midnight-Commander]
skin=sand256

for preset skin. Newer mc version offer to choose a preset skin from within the menu and save it in the above ini file, relieving you of the above manual step.

Many people don't know that mc has a multi-window text-editor built-in (eerily disabled by default) with macro capability and all sorts of goodies. run

mc -e my.txt

to edit directly.

Be aware that many skins break the special characters for sorting filenames reverse up/down unless one works hard with locale parameters and whatnot. Few people in the world know how to do that properly. In the original page's screenshot (not reproduced here) you would see an "arrow down n" over the filename list to indicate sort order. In many xterms you will get ??? instead, so you might resort to unskinning and going back to the "default skin" setting with ugly colours.

The CTRL-O hotkey starts what mc calls a subshell. If you run mc a second time in a "subshell", mc will not remind you of the CTRL-O hotkey (as if the world only knows 3 hotkeys) but will start mc with no deeper "subshell" iteration possible, unless one modifies the sources.

[Jun 13, 2018] mcdiff - Internal diff viewer of GNU Midnight Commander

Jun 13, 2018 | www.systutorials.com

mcdiff: Internal diff viewer of GNU Midnight Commander. Read the mcdiff man page on Linux: $ man 1 mcdiff

NAME
    mcdiff - Internal diff viewer of GNU Midnight Commander.

USAGE
    mcdiff [-bcCdfhstVx?] file1 file2

DESCRIPTION

mcdiff is a link to mc , the main GNU Midnight Commander executable. Executing GNU Midnight Commander under this name requests starting the internal diff viewer which compares file1 and file2 specified on the command line.

OPTIONS
-b
Force black and white display.
-c
Force color mode on terminals where mcdiff defaults to black and white.
-C <keyword>=<fgcolor>,<bgcolor>,<attributes>:<keyword>= ...
Specify a different color set. See the Colors section in mc (1) for more information.
-d
Disable mouse support.
-f
Display the compiled-in search paths for Midnight Commander files.
-t
Used only if the code was compiled with S-Lang and terminfo: it makes the Midnight Commander use the value of the TERMCAP variable for the terminal information instead of the information on the system wide terminal database
-V
Displays the version of the program.
-x
Forces xterm mode. Used when running on xterm-capable terminals (two screen modes, and able to send mouse escape sequences).
COLORS
    The default colors may be changed by appending to the MC_COLOR_TABLE environment variable. Foreground and background color pairs may be specified, for example, with:
MC_COLOR_TABLE="$MC_COLOR_TABLE:\
normal=lightgray,black:\
selected=black,green"
FILES
    /usr/share/mc/mc.hlp
The help file for the program.

/usr/share/mc/mc.ini

The default system-wide setup for GNU Midnight Commander, used only if the user's own ~/.config/mc/ini file is missing.

/usr/share/mc/mc.lib

Global settings for the Midnight Commander. Settings in this file affect all users, whether they have ~/.config/mc/ini or not.

~/.config/mc/ini

User's own setup. If this file is present, the setup is loaded from here instead of the system-wide startup file.

[Jun 13, 2018] MC (Midnight Commmander) mc/ini settings file location

Jun 13, 2018 | unix.stackexchange.com

UVV ,Oct 13, 2014 at 7:51

It's in the following file: ~/.config/mc/ini .

obohovyk ,Oct 13, 2014 at 7:53

Unfortunately not... – obohovyk Oct 13 '14 at 7:53

UVV ,Oct 13, 2014 at 8:02

@alexkowalski then it's ~/.config/mc/ini – UVV Oct 13 '14 at 8:02

obohovyk ,Oct 13, 2014 at 8:41

Yeah, thanks!!! – obohovyk Oct 13 '14 at 8:41


If you have not made any changes, the config file does not yet exist.

The easy way to change from the default skin:

  1. Start Midnight Commander
    sudo mc
    
  2. F9 , O for Options, or cursor to "Options" and press Enter
  3. A for Appearance, or cursor to Appearance and press Enter

    You will see that default is the current skin.

  4. Press Enter to see the other skin choices
  5. Cursor to the skin you want and select it by pressing Enter
  6. Click OK

After you do this, the ini file will exist and can be edited, but it is easier to change skins using the method I described.

[Jun 13, 2018] Hide/view of hidden files

Sep 17, 2011 | superuser.com

Something I discovered which I REALLY appreciated was the hide/view of hidden files can be toggled by pressing ALT-. (ALT-PERIOD). Be aware that often the RIGHT ALT key is NOT seen as an ALT key by the system, so you usually need to use Left-ALT-. to toggle this. I forgot about the Right-ALT weirdness and thought I'd broken mc one day. {sigh} Such a blonde...

Just checked (xev!), I guess the ALT-. toggle is mapped to ALT_L-., and the right ALT key gives an ALT_R keycode... which doesn't match the mc mapping, causing it to not work... now I know why! Hooray!

[Jun 13, 2018] Loss of output problem

Sep 17, 2011 | superuser.com
1) If the panels are active and I issue a command that has a lot of output, it appears to be lost forever.

i.e., if the panels are visible and I cat something (i.e., cat /proc/cpuinfo), that info is gone forever once the panels get redrawn.

If you use Cygwin's mintty terminal, you can use its Flip Screen context menu command (or Alt+F12 shortcut) to switch between the so-called alternate screen, where fullscreen applications like mc normally run, and the primary screen where output from commands such as cat appears.

[Jun 13, 2018] I Can't Select Text With My Mouse

Jun 13, 2018 | www.nawaz.org


[Jun 13, 2018] parsync - a parallel rsync wrapper for large data transfers

Notable quotes:
"... kdirstat-cache-writer ..."
"... only PUSH data to remote targets ..."
Jun 13, 2018 | nac.uci.edu

parsync - a parallel rsync wrapper for large data transfers by Harry Mangalam
< harry.mangalam@uci.edu >
v1.67 (Mac Beta), Jan 22, 2017

Table of Contents: 1. Download 2. Dependencies 3. Overview 4. parsync help

1. Download

If you already know you want it, get it here: parsync+utils.tar.gz (contains parsync plus the kdirstat-cache-writer, stats, and scut utilities below). Extract it into a dir on your $PATH and, after verifying the other dependencies below, give it a shot.

While parsync is developed for and tested on Linux, the latest version of parsync has been modified to (mostly) work on the Mac (tested on OSX 10.9.5). A number of the Linux-specific dependencies have been removed and there are a number of Mac-specific workarounds. Thanks to Phil Reese < preese@stanford.edu > for the code mods needed to get it started. It's the same package and instructions for both platforms.

2. Dependencies

parsync requires the following utilities to work:

non-default Perl utility: URI::Escape qw(uri_escape)
sudo yum install perl-URI  # CentOS-like

sudo apt-get install liburi-perl  # Debian-like
parsync needs to be installed only on the SOURCE end of the transfer and uses whatever rsync is available on the TARGET. It uses a number of Linux-specific utilities, so if you're transferring between Linux and a FreeBSD host, install parsync on the Linux side. In fact, as currently written, it will only PUSH data to remote targets; it will not pull data as rsync itself can do. This will probably change in the near future.

3. Overview

rsync is a fabulous data mover. Possibly more bytes have been moved (or have been prevented from being moved) by rsync than by any other application. So what's not to love? For transferring large, deep file trees, rsync will pause while it generates lists of files to process. Since version 3 it does this pretty fast, but on sluggish filesystems it can take hours or even days before it will start to actually exchange rsync data. Second, due to various bottlenecks, rsync will tend to use less than the available bandwidth on high-speed networks. Starting multiple instances of rsync can improve this significantly. However, on such transfers it is also easy to overload the available bandwidth, so it would be nice to both limit the bandwidth used if necessary and also to limit the load on the system. parsync tries to satisfy all these conditions and more.
Important: only use parsync for LARGE data transfers. The main use case for parsync is really only very large data transfers through fairly fast network connections (>1Gb/s). Below this speed, a single rsync can saturate the connection, so there's little reason to use parsync; in fact, the overhead of testing the existence of and starting more rsyncs tends to worsen its performance on small transfers to slightly less than rsync alone.
Beyond this introduction, parsync's internal help is about all you'll need to figure out how to use it; below is what you'll see when you type parsync -h . There are still edge cases where parsync will fail or behave oddly, especially with small data transfers, so I'd be happy to hear of such misbehavior or suggestions to improve it. Download the complete tarball of parsync, plus the required utilities here: parsync+utils.tar.gz Unpack it, move the contents to a dir on your $PATH , chmod it executable, and try it out.
parsync --help
or just
parsync
Below is what you should see: 4. parsync help
parsync version 1.67 (Mac compatibility beta) Jan 22, 2017
by Harry Mangalam <hjmangalam@gmail.com> || <harry.mangalam@uci.edu>

parsync is a Perl script that wraps Andrew Tridgell's miraculous 'rsync' to
provide some load balancing and parallel operation across network connections
to increase the amount of bandwidth it can use.

parsync is primarily tested on Linux, but (mostly) works on MaccOSX
as well.

parsync needs to be installed only on the SOURCE end of the
transfer and only works in local SOURCE -> remote TARGET mode
(it won't allow remote local SOURCE <- remote TARGET, emitting an
error and exiting if attempted).

It uses whatever rsync is available on the TARGET.  It uses a number
of Linux-specific utilities so if you're transferring between Linux
and a FreeBSD host, install parsync on the Linux side.

The only native rsync option that parsync uses is '-a' (archive) &
'-s' (respect bizarro characters in filenames).
If you need more, then it's up to you to provide them via
'--rsyncopts'. parsync checks to see if the current system load is
too heavy and tries to throttle the rsyncs during the run by
monitoring and suspending / continuing them as needed.

It uses the very efficient (also Perl-based) kdirstat-cache-writer
from kdirstat to generate lists of files which are summed and then
crudely divided into NP jobs by size.

It appropriates rsync's bandwidth throttle mechanism, using '--maxbw'
as a passthru to rsync's 'bwlimit' option, but divides it by NP so
as to keep the total bw the same as the stated limit.  It monitors and
shows network bandwidth, but can't change the bw allocation mid-job.
It can only suspend rsyncs until the load decreases below the cutoff.
If you suspend parsync (^Z), all rsync children will suspend as well,
regardless of current state.

Unless changed by '--interface', it tried to figure out how to set the
interface to monitor.  The transfer will use whatever interface routing
provides, normally set by the name of the target.  It can also be used for
non-host-based transfers (between mounted filesystems) but the network
bandwidth continues to be (usually pointlessly) shown.

[[NB: Between mounted filesystems, parsync sometimes works very poorly for
reasons still mysterious.  In such cases (monitor with 'ifstat'), use 'cp'
or 'tnc' (https://goo.gl/5FiSxR) for the initial data movement and a single
rsync to finalize.  I believe the multiple rsync chatter is interfering with
the transfer.]]

It only works on dirs and files that originate from the current dir (or
specified via "--rootdir").  You cannot include dirs and files from
discontinuous or higher-level dirs.

** the ~/.parsync files **
The ~/.parsync dir contains the cache (*.gz), the chunk files (kds*), and the
time-stamped log files. The cache files can be re-used with '--reusecache'
(which will re-use ALL the cache and chunk files.  The log files are
datestamped and are NOT overwritten.

** Odd characters in names **
parsync will sometimes refuse to transfer some oddly named files, altho
recent versions of rsync allow the '-s' flag (now a parsync default)
which tries to respect names with spaces and properly escaped shell
characters.  Filenames with embedded newlines, DOS EOLs, and other
odd chars will be recorded in the log files in the ~/.parsync dir.

** Because of the crude way that files are chunked, NP may be
adjusted slightly to match the file chunks. ie '--NP 8' -> '--NP 7'.
If so, a warning will be issued and the rest of the transfer will be
automatically adjusted.

OPTIONS
=======
[i] = integer number
[f] = floating point number
[s] = "quoted string"
( ) = the default if any

--NP [i] (sqrt(#CPUs)) ...............  number of rsync processes to start
      optimal NP depends on many vars.  Try the default and incr as needed
--startdir [s] (`pwd`)  .. the directory it works relative to. If you omit
                           it, the default is the CURRENT dir. You DO have
                           to specify target dirs.  See the examples below.
--maxbw [i] (unlimited) ..........  in KB/s max bandwidth to use (--bwlimit
       passthru to rsync).  maxbw is the total BW to be used, NOT per rsync.
--maxload [f] (NP+2)  ........ max total system load - if sysload > maxload,
                                               sleeps an rsync proc for 10s
--checkperiod [i] (5) .......... sets the period in seconds between updates
--rsyncopts [s]  ...  options passed to rsync as a quoted string (CAREFUL!)
           this opt triggers a pause before executing to verify the command.
--interface [s]  .............  network interface to /monitor/, not nec use.
      default: `/sbin/route -n | grep "^0.0.0.0" | rev | cut -d' ' -f1 | rev`
      above works on most simple hosts, but complex routes will confuse it.
--reusecache  ..........  don't re-read the dirs; re-use the existing caches
--email [s]  .....................  email address to send completion message
                                      (requires working mail system on host)
--barefiles   .....  set to allow rsync of individual files, as oppo to dirs
--nowait  ................  for scripting, sleep for a few s instead of wait
--version  .................................  dumps version string and exits
--help  .........................................................  this help

Examples
========
-- Good example 1 --
% parsync  --maxload=5.5 --NP=4 --startdir='/home/hjm' dir1 dir2 dir3
hjm@remotehost:~/backups

where
  = "--startdir='/home/hjm'" sets the working dir of this operation to
      '/home/hjm' and dir1 dir2 dir3 are subdirs from '/home/hjm'
  = the target "hjm@remotehost:~/backups" is the same target rsync would use
  = "--NP=4" forks 4 instances of rsync
  = -"-maxload=5.5" will start suspending rsync instances when the 5m system
      load gets to 5.5 and then unsuspending them when it goes below it.

  It uses 4 instances to rsync dir1 dir2 dir3 to hjm@remotehost:~/backups

-- Good example 2 --
% parsync --rsyncopts="--ignore-existing" --reusecache  --NP=3
  --barefiles  *.txt   /mount/backups/txt

where
  =  "--rsyncopts='--ignore-existing'" is an option passed thru to rsync
     telling it not to disturb any existing files in the target directory.
  = "--reusecache" indicates that the filecache shouldn't be re-generated,
    uses the previous filecache in ~/.parsync
  = "--NP=3" for 3 copies of rsync (with no "--maxload", the default is 4)
  = "--barefiles" indicates that it's OK to transfer barefiles instead of
    recursing thru dirs.
  = "/mount/backups/txt" is the target - a local disk mount instead of a network host.

  It uses 3 instances to rsync *.txt from the current dir to "/mount/backups/txt".

-- Error Example 1 --
% pwd
/home/hjm  # executing parsync from here

% parsync --NP4 --compress /usr/local  /media/backupdisk

why this is an error:
  = '--NP4' is not an option (parsync will say "Unknown option: np4")
    It should be '--NP=4'
  = if you were trying to rsync '/usr/local' to '/media/backupdisk',
    it will fail since there is no /home/hjm/usr/local dir to use as
    a source. This will be shown in the log files in
    ~/.parsync/rsync-logfile-<datestamp>_#
    as a spew of "No such file or directory (2)" errors
  = the '--compress' is a native rsync option, not a native parsync option.
    You have to pass it to rsync with "--rsyncopts='--compress'"

The correct version of the above command is:

% parsync --NP=4  --rsyncopts='--compress' --startdir=/usr  local
/media/backupdisk

-- Error Example 2 --
% parsync --start-dir /home/hjm  mooslocal  hjm@moo.boo.yoo.com:/usr/local

why this is an error:
  = this command is trying to PULL data from a remote SOURCE to a
    local TARGET.  parsync doesn't support that kind of operation yet.

The correct version of the above command is:

# ssh to hjm@moo, install parsync, then:
% parsync  --startdir=/usr  local  hjm@remote:/home/hjm/mooslocal

[Jun 09, 2018] How to use the history command in Linux Opensource.com

Jun 09, 2018 | opensource.com

Changing an executed command

history also allows you to rerun a command with different syntax. For example, if I wanted to change my previous command history | grep dnf to history | grep ssh , I can execute the following at the prompt:

$ ^dnf^ssh^

history will rerun the command, but replace dnf with ssh , and execute it.

Removing history

There may come a time that you want to remove some or all the commands in your history file. If you want to delete a particular command, enter history -d <line number> . To clear the entire contents of the history file, execute history -c .

The history is also stored in a file that you can modify. Bash shell users will find it in their home directory as .bash_history .
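Putting those options together, a short bash session might look like this (the entry number 1012 is only a placeholder for whatever number history shows you):

$ history | tail -n 3      # list the most recent entries with their numbers
$ history -d 1012          # delete entry number 1012 from the in-memory history
$ history -c               # clear the in-memory history entirely
$ history -w               # write the cleared history back to ~/.bash_history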

Next steps

There are a number of other things that you can do with history as well.

For more information about the history command and other interesting things you can do with it, take a look at the GNU Bash Manual .

[Jun 09, 2018] 5 Useful Tools to Remember Linux Commands Forever

Jun 09, 2018 | www.tecmint.com

Cheat Program

Cheat is a simple, interactive command-line cheat-sheet program which shows use cases of a Linux command with a number of options and their short understandable function. It is useful for Linux newbies and sysadmins.

To install and use it, check out our complete article about Cheat program and its usage with examples:

  1. Cheat – An Ultimate Command Line 'Cheat-Sheet' for Linux Beginners
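As a minimal illustration (assuming the cheat program is installed and ships a sheet for the command you ask about), day-to-day use is just:

$ cheat tar     # print the cheatsheet for tar
$ cheat -l      # list the cheatsheets available locally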

That's all! In this article, we have shared 5 command-line tools for remembering Linux commands. If you know any other tools for the same purpose that are missing in the list above, let us know via the feedback form below.

[Jun 03, 2018] What is the best way to transfer a single large file over a high-speed, high-latency WAN link

Notable quotes:
"... I've been dealing with a similar situation, with ~200GB of SQL .bak, except the only way I've been able to get the WAN link to saturate is with FTP. I ended up using 7-zip with zero compression to break it into 512MB chunks. ..."
Jun 03, 2018 | serverfault.com

This looks related to this one , but it's somewhat different.

There is this WAN link between two company sites, and we need to transfer a single very large file (Oracle dump, ~160 GB).

We've got full 100 Mbps bandwidth (tested), but it looks like a single TCP connection just can't max it out due to how TCP works (ACKs, etc.). We tested the link with iperf , and results change dramatically when increasing the TCP window size: with base settings we get ~5 Mbps throughput, with a bigger WS we can get up to ~45 Mbps, but not any more than that. The network latency is around 10 ms.

Out of curiosity, we ran iperf using more than a single connections, and we found that, when running four of them, they would indeed achieve a speed of ~25 Mbps each, filling up all the available bandwidth; so the key looks to be in running multiple simultaneous transfers.
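For reference, the kind of iperf runs being described look roughly like this (classic iperf2 flags; the host name and sizes are placeholders):

# single TCP stream with an enlarged window, run for 30 seconds
iperf -c remote.example.com -w 4M -t 30
# four parallel streams, which in the poster's tests filled the 100 Mbps link
iperf -c remote.example.com -P 4 -w 4M -t 30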

With FTP, things get worse: even with optimized TCP settings (high window size, max MTU, etc.) we can't get more than 20 Mbps on a single transfer. We tried FTPing some big files at the same time, and indeed things got a lot better than when transferring a single one; but then the culprit became disk I/O, because reading and writing four big files from the same disk bottlenecks very soon. Also, we don't seem to be able to split that single large file into smaller ones and then merge it back, at least not in acceptable times (obviously we can't spend a time comparable to the transfer itself on splitting and merging the file).

The ideal solution here would be a multithreaded tool that could transfer various chunks of the file at the same time; sort of like peer-to-peer programs like eMule or BitTorrent already do, but from a single source to a single destination. Ideally, the tool would allow us to choose how many parallel connections to use, and of course optimize disk I/O to not jump (too) madly between various sections of the file.

Does anyone know of such a tool?

Or, can anyone suggest a better solution and/or something we already didn't try?

P.S. We already thought of backing that up to tape/disk and physically sending it to the destination; that would be our extreme measure if the WAN just doesn't cut it, but, as A.S. Tanenbaum said, "Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway." – Massimo, Feb 11 '10



Searching for "high latency file transfer" brings up a lot of interesting hits. Clearly, this is a problem that both the CompSci community and the commercial community has put thougth into.

A few commercial offerings that appear to fit the bill:

In the open-source world, the uftp project looks promising. You don't particularly need its multicast capabilities, but the basic idea of blasting out a file to receivers, receiving NAKs for missed blocks at the end of the transfer, and then blasting out the NAK'd blocks (lather, rinse, repeat) sounds like it would do what you need, since there's no ACK'ing (or NAK'ing) from the receiver until after the file transfer has completed once. Assuming the network is just latent, and not lossy, this might do what you need, too.

– Evan Anderson

Really odd suggestion, this one: set up a simple web server to host the file on your network (I suggest nginx, incidentally), then set up a PC with Firefox on the other end, and install the DownThemAll extension.

It's a download accelerator that supports chunking and re-assembly.
You can break each download into 10 chunks for re-assembly, and it does actually make things quicker!

(caveat: I've never tried it on anything as big as 160GB, but it does work well with 20GB iso files) – Tom O'Connor

The UDT transport is probably the most popular transport for high-latency communications. This leads on to their other software called Sector/Sphere, a "High Performance Distributed File System and Parallel Data Processing Engine", which might be worthwhile to have a look at. – Steve-o

My answer is a bit late, but I just found this question while looking for fasp. During that search I also found this: http://tsunami-udp.sourceforge.net/ , the "Tsunami UDP Protocol".

From their website :

A fast user-space file transfer protocol that uses TCP control and UDP data for transfer over very high speed long distance networks (≥ 1 Gbps and even 10 GE), designed to provide more throughput than possible with TCP over the same networks.

As far as speed goes, the page mentions this result (using a link between Helsinki, Finland to Bonn, Germany over a 1GBit link:

Figure 1 - international transfer over the Internet, averaging 800 Mbit/second

If you want to use a download accelerator, have a look at lftp ; it is the only download accelerator that can do a recursive mirror, as far as I know. – Jan van Haarst
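For the lftp route, segmented (multi-connection) downloads are done with its pget command; a hedged one-liner, with server, path and segment count as placeholders:

lftp -e 'pget -n 8 -c /dumps/oracle-dump.dmp; quit' ftp://user@server.example.com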

add a comment | up vote 3 down vote The bbcp utility from the very relevant page 'How to transfer large amounts of data via network' seems to be the simplest solution. share answered Jun 27 '12 at 12:23 Robert Polson 31 1 add a comment | protected by Community ♦ Aug 21 '13 at 9:14


[Jun 02, 2018] Low-latency continuous rsync

Notable quotes:
"... Low-latency continuous rsync ..."
Jun 02, 2018 | www.danplanet.com

Right Angles

Okay, so "lowish-latency" would be more appropriate.

I regularly work on systems that are fairly distant, over relatively high-latency links. That means that I don't want to run my editor there because 300ms between pressing a key and seeing it show up is maddening. Further, with something as large as the Linux kernel, editor integration with cscope is a huge time saver and pushing enough configuration to do that on each box I work on is annoying. Lately, the speed of the notebook I'm working from often outpaces that of the supposedly-fast machine I'm working on. For many tasks, a four-core, two threads per core, 10GB RAM laptop with an Intel SSD will smoke a 4GHz PowerPC LPAR with 2GB RAM.

I don't really want to go to the trouble of cross-compiling the kernels on my laptop, so that's the only piece I want to do remotely. Thus, I want to have high-speed access to the tree I'm working on from my local disk for editing, grep'ing, and cscope'ing. But, I want the changes to be synchronized (without introducing any user-perceived delay) to the distant machine in the background for when I'm ready to compile. Ideally, this would be some sort of rsync-like tool that uses inotify to notice changes and keep them synchronized to the remote machine over a persistent connection. However, I know of no such tool and haven't been sufficiently annoyed to sit down and write one.

One can, however, achieve a reasonable approximation of this by gluing existing components together. The inotifywait tool from the inotify-tools provides a way to watch a directory and spit out a live list of changed files without much effort. Of course, rsync can handle the syncing for you, but not with a persistent connection. This script mostly does what I want:

#!/bin/bash

DEST="$1"

if [ -z "$DEST" ]; then exit 1; fi

# watch the tree for completed writes and push each changed file as it closes
inotifywait -r -m -e close_write --format '%w%f' . |\
while read -r file
do
        echo "$file"
        rsync -azvq "$file" "${DEST}/${file}"
        echo -n 'Completed at '
        date
done

That will monitor the local directory and synchronize it to the remote host every time a file changes. I run it like this:

sync.sh dan@myhost.domain.com:my-kernel-tree/

It's horribly inefficient of course, but it does the job. The latency for edits to show up on the other end, although not intolerable, is higher than I'd like. The boxes I'm working on these days are in Minnesota, and I have to access them over a VPN which terminates in New York. That means packets leave Portland for Seattle, jump over to Denver, Chicago, Washington DC, then up to New York before they bounce back to Minnesota. Initiating an SSH connection every time the script synchronizes a file requires some chatting back and forth over that link, and thus is fairly slow.

Looking at how I might reduce the setup time for the SSH links, I stumbled across an incredibly cool feature available in recent versions of OpenSSH: connection multiplexing. With this enabled, you pay the high setup cost only the first time you connect to a host. Subsequent connections re-use the same tunnel as the first one, making the process nearly instant. To get this enabled for just the host I'm using, I added this to my ~/.ssh/config file:

Host myhost.domain.com
    ControlMaster auto
    ControlPath /tmp/%h%p%r

Now, all I do is ssh to the box each time I boot it (which I would do anyway) and the sync.sh script from above re-uses that connection for file synchronization. It's still not the same as a shared filesystem, but it's pretty dang close, especially for a few lines of config and shell scripting. Kernel development on these distant boxes is now much less painful.
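If you would rather not have to keep that initial ssh session open yourself, newer OpenSSH releases also support ControlPersist, which keeps the master connection alive in the background after the first login; a sketch (option names per ssh_config(5), worth verifying on your version):

Host myhost.domain.com
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m

You can then check whether a master connection is alive with "ssh -O check myhost.domain.com" and tear it down with "ssh -O exit myhost.domain.com".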
4 Responses to Low-latency continuous rsync

  1. Christof Schmitt says: June 25, 2012 at 22:06 This is a great approach. I had the same problem when editing code locally and testing the changes on a remote system. Thanks for sharing, I will give it a try.
  2. Callum says: May 12, 2013 at 15:02 Are you familiar with lsyncd? I think it might do exactly what you want but potentially more easily. It uses inotify or libnotify or something or other to watch a local directory, and then pushes changes every X seconds to a remote host. It's pretty powerful and can even be setup to sync mv commands with a remote ssh mv instead of rsync which can be expensive. It's fairly neat in theory, although I've never used it in practice myself.
  3. johan says: March 29, 2015 at 16:44 Have you tried mosh? It's a different protocol from ssh, more suited to your use-case.

    https://mosh.mit.edu/

    Since it's a different approach to solving your problem, it has different pros and cons. E.g. jumping and instant searching would still be slow. It's effectively trying to hide the problem by being a bit more intelligent. (It does this by using UDP, previewing keystrokes, reconnecting robustly, and only updating the visible screen so as to avoid freezes due to 'cat my_humongous_log.txt'.)

    (copy-pasted from the mosh site:)

    Mosh
    (mobile shell)

    Remote terminal application that allows roaming, supports intermittent connectivity, and provides intelligent local echo and line editing of user keystrokes.

    Mosh is a replacement for SSH. It's more robust and responsive, especially over Wi-Fi, cellular, and long-distance links.

[Jun 02, 2018] Parallelise rsync using GNU Parallel

Jun 02, 2018 | unix.stackexchange.com

Mandar Shinde ,Mar 13, 2015 at 6:51

I have been using a rsync script to synchronize data at one host with the data at another host. The data has numerous small-sized files that contribute to almost 1.2TB.

In order to sync those files, I have been using rsync command as follows:

rsync -avzm --stats --human-readable --include-from proj.lst /data/projects REMOTEHOST:/data/

The contents of proj.lst are as follows:

+ proj1
+ proj1/*
+ proj1/*/*
+ proj1/*/*/*.tar
+ proj1/*/*/*.pdf
+ proj2
+ proj2/*
+ proj2/*/*
+ proj2/*/*/*.tar
+ proj2/*/*/*.pdf
...
...
...
- *

As a test, I picked up two of those projects (8.5GB of data) and I executed the command above. Being a sequential process, it took 14 minutes 58 seconds to complete. So, for 1.2TB of data it would take several hours.

If I could run multiple rsync processes in parallel (using & , xargs or parallel ), it would save time.

I tried with below command with parallel (after cd ing to source directory) and it took 12 minutes 37 seconds to execute:

parallel --will-cite -j 5 rsync -avzm --stats --human-readable {} REMOTEHOST:/data/ ::: .

This should have taken about a fifth of the time, but it didn't. I think I'm going wrong somewhere.

How can I run multiple rsync processes in order to reduce the execution time?

Ole Tange ,Mar 13, 2015 at 7:25

Are you limited by network bandwidth? Disk iops? Disk bandwidth? – Ole Tange Mar 13 '15 at 7:25

Mandar Shinde ,Mar 13, 2015 at 7:32

If possible, we would want to use 50% of total bandwidth. But, parallelising multiple rsync s is our first priority. – Mandar Shinde Mar 13 '15 at 7:32

Ole Tange ,Mar 13, 2015 at 7:41

Can you let us know your: Network bandwidth, disk iops, disk bandwidth, and the bandwidth actually used? – Ole Tange Mar 13 '15 at 7:41

Mandar Shinde ,Mar 13, 2015 at 7:47

In fact, I do not know about above parameters. For the time being, we can neglect the optimization part. Multiple rsync s in parallel is the primary focus now. – Mandar Shinde Mar 13 '15 at 7:47

Mandar Shinde ,Apr 11, 2015 at 13:53

Following steps did the job for me:
  1. Run the rsync --dry-run first in order to get the list of files that would be affected.

rsync -avzm --stats --safe-links --ignore-existing --dry-run --human-readable /data/projects REMOTE-HOST:/data/ > /tmp/transfer.log

  2. I fed the output of cat transfer.log to parallel in order to run 5 rsyncs in parallel, as follows:

cat /tmp/transfer.log | parallel --will-cite -j 5 rsync -avzm --relative --stats --safe-links --ignore-existing --human-readable {} REMOTE-HOST:/data/ > result.log

Here, the --relative option ensured that the directory structure for the affected files, at the source and destination, remains the same (inside the /data/ directory), so the command must be run in the source folder (in this example, /data/projects ).

Sandip Bhattacharya ,Nov 17, 2016 at 21:22

That would do an rsync per file. It would probably be more efficient to split up the whole file list using split and feed those filenames to parallel. Then use rsync's --files-from to get the filenames out of each file and sync them. rm backups.* split -l 3000 backup.list backups. ls backups.* | parallel --line-buffer --verbose -j 5 rsync --progress -av --files-from {} /LOCAL/PARENT/PATH/ REMOTE_HOST:REMOTE_PATH/ – Sandip Bhattacharya Nov 17 '16 at 21:22
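Laid out on separate lines, that suggestion amounts to roughly this (paths kept as the placeholders from the comment):

rm backups.*
split -l 3000 backup.list backups.
ls backups.* | parallel --line-buffer --verbose -j 5 \
    rsync --progress -av --files-from {} /LOCAL/PARENT/PATH/ REMOTE_HOST:REMOTE_PATH/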

Mike D ,Sep 19, 2017 at 16:42

How does the second rsync command handle the lines in result.log that are not files? i.e. receiving file list ... done created directory /data/ . – Mike D Sep 19 '17 at 16:42

Cheetah ,Oct 12, 2017 at 5:31

On newer versions of rsync (3.1.0+), you can use --info=name in place of -v , and you'll get just the names of the files and directories. You may want to use --protect-args to the 'inner' transferring rsync too if any files might have spaces or shell metacharacters in them. – Cheetah Oct 12 '17 at 5:31
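Applied to the commands above, that suggestion would look something like this (a sketch; --info=name requires rsync 3.1.0+ and --protect-args guards against spaces and shell metacharacters in file names):

rsync -azm --stats --safe-links --ignore-existing --dry-run \
    --info=name /data/projects REMOTE-HOST:/data/ > /tmp/transfer.log

cat /tmp/transfer.log | parallel --will-cite -j 5 \
    rsync -azm --relative --stats --safe-links --ignore-existing \
    --protect-args {} REMOTE-HOST:/data/ > result.log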

Mikhail ,Apr 10, 2017 at 3:28

I would strongly discourage anybody from using the accepted answer; a better solution is to crawl the top-level directory and launch a proportional number of rsync operations.

I have a large zfs volume and my source was a cifs mount. Both are linked with 10G, and in some benchmarks can saturate the link. Performance was evaluated using zpool iostat 1 .

The source drive was mounted like:

mount -t cifs -o username=,password= //static_ip/70tb /mnt/Datahoarder_Mount/ -o vers=3.0

Using a single rsync process:

rsync -h -v -r -P -t /mnt/Datahoarder_Mount/ /StoragePod

the io meter reads:

StoragePod  30.0T   144T      0  1.61K      0   130M
StoragePod  30.0T   144T      0  1.61K      0   130M
StoragePod  30.0T   144T      0  1.62K      0   130M

In synthetic benchmarks (crystal disk), sequential write performance approaches 900 MB/s, which means the link can be saturated. 130 MB/s is not very good, and it is the difference between waiting a weekend and waiting two weeks.

So, I built the file list and tried to run the sync again (I have a 64 core machine):

cat /home/misha/Desktop/rsync_logs_syncs/Datahoarder_Mount.log | parallel --will-cite -j 16 rsync -avzm --relative --stats --safe-links --size-only --human-readable {} /StoragePod/ > /home/misha/Desktop/rsync_logs_syncs/Datahoarder_Mount_result.log

and it had the same performance!

StoragePod  29.9T   144T      0  1.63K      0   130M
StoragePod  29.9T   144T      0  1.62K      0   130M
StoragePod  29.9T   144T      0  1.56K      0   129M

As an alternative I simply ran rsync on the root folders:

rsync -h -v -r -P -t /mnt/Datahoarder_Mount/Mikhail/Marcello_zinc_bone /StoragePod/Marcello_zinc_bone
rsync -h -v -r -P -t /mnt/Datahoarder_Mount/Mikhail/fibroblast_growth /StoragePod/fibroblast_growth
rsync -h -v -r -P -t /mnt/Datahoarder_Mount/Mikhail/QDIC /StoragePod/QDIC
rsync -h -v -r -P -t /mnt/Datahoarder_Mount/Mikhail/sexy_dps_cell /StoragePod/sexy_dps_cell

This actually boosted performance:

StoragePod  30.1T   144T     13  3.66K   112K   343M
StoragePod  30.1T   144T     24  5.11K   184K   469M
StoragePod  30.1T   144T     25  4.30K   196K   373M

In conclusion, as @Sandip Bhattacharya brought up, write a small script to get the directories and parallel that. Alternatively, pass a file list to rsync. But don't create new instances for each file.
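A minimal sketch of that per-directory approach, assuming GNU parallel's {/} (basename) placeholder and a job count sized to taste:

find /mnt/Datahoarder_Mount/Mikhail -mindepth 1 -maxdepth 1 -type d | \
    parallel -j 8 rsync -h -r -P -t {}/ /StoragePod/{/}/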

Julien Palard ,May 25, 2016 at 14:15

I personally use this simple one:
ls -1 | parallel rsync -a {} /destination/directory/

This is only useful when you have more than a few non-nearly-empty directories; otherwise almost every rsync will terminate quickly and the last one will do all the job alone.

Ole Tange ,Mar 13, 2015 at 7:25

A tested way to do the parallelized rsync is: http://www.gnu.org/software/parallel/man.html#EXAMPLE:-Parallelizing-rsync

rsync is a great tool, but sometimes it will not fill up the available bandwidth. This is often a problem when copying several big files over high speed connections.

The following will start one rsync per big file in src-dir to dest-dir on the server fooserver:

cd src-dir; find . -type f -size +100000 | \
parallel -v ssh fooserver mkdir -p /dest-dir/{//}\; \
  rsync -s -Havessh {} fooserver:/dest-dir/{}

The directories created may end up with wrong permissions and smaller files are not being transferred. To fix those run rsync a final time:

rsync -Havessh src-dir/ fooserver:/dest-dir/

If you are unable to push data, but need to pull them and the files are called digits.png (e.g. 000000.png) you might be able to do:

seq -w 0 99 | parallel rsync -Havessh fooserver:src/*{}.png destdir/

Mandar Shinde ,Mar 13, 2015 at 7:34

Any other alternative in order to avoid find ? – Mandar Shinde Mar 13 '15 at 7:34

Ole Tange ,Mar 17, 2015 at 9:20

Limit the -maxdepth of find. – Ole Tange Mar 17 '15 at 9:20

Mandar Shinde ,Apr 10, 2015 at 3:47

If I use --dry-run option in rsync , I would have a list of files that would be transferred. Can I provide that file list to parallel in order to parallelise the process? – Mandar Shinde Apr 10 '15 at 3:47

Ole Tange ,Apr 10, 2015 at 5:51

cat files | parallel -v ssh fooserver mkdir -p /dest-dir/{//}\; rsync -s -Havessh {} fooserver:/dest-dir/{} – Ole Tange Apr 10 '15 at 5:51

Mandar Shinde ,Apr 10, 2015 at 9:49

Can you please explain the mkdir -p /dest-dir/{//}\; part? Especially the {//} thing is a bit confusing. – Mandar Shinde Apr 10 '15 at 9:49
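For reference, {//} is GNU parallel's replacement string for the directory part of the input line and {/} is the basename, so mkdir -p /dest-dir/{//} recreates each file's parent directory on the remote side before rsync copies the file into it. A quick way to see what they expand to:

echo a/b/c.txt | parallel echo {//} {/}
# prints: a/b c.txt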


For multi destination syncs, I am using
parallel rsync -avi /path/to/source ::: host1: host2: host3:

Hint: All ssh connections are established with public keys in ~/.ssh/authorized_keys

[Jun 02, 2018] Parallelizing rsync

Jun 02, 2018 | www.gnu.org

rsync is a great tool, but sometimes it will not fill up the available bandwidth. This is often a problem when copying several big files over high speed connections.

The following will start one rsync per big file in src-dir to dest-dir on the server fooserver :

  cd src-dir; find . -type f -size +100000 | \
    parallel -v ssh fooserver mkdir -p /dest-dir/{//}\; \
      rsync -s -Havessh {} fooserver:/dest-dir/{}

The dirs created may end up with wrong permissions and smaller files are not being transferred. To fix those run rsync a final time:

  rsync -Havessh src-dir/ fooserver:/dest-dir/

If you are unable to push data, but need to pull them and the files are called digits.png (e.g. 000000.png) you might be able to do:

  seq -w 0 99 | parallel rsync -Havessh fooserver:src/*{}.png destdir/

[Jun 01, 2018] Introduction to Bash arrays by Robert Aboukhalil

Jun 01, 2018 | opensource.com

... ... ...

Looping through arrays

Although in the examples above we used integer indices in our arrays, let's consider two occasions when that won't be the case: First, if we wanted the $i -th element of the array, where $i is a variable containing the index of interest, we can retrieve that element using: echo ${allThreads[$i]} . Second, to output all the elements of an array, we replace the numeric index with the @ symbol (you can think of @ as standing for all ): echo ${allThreads[@]} .

Looping through array elements

With that in mind, let's loop through $allThreads and launch the pipeline for each value of --threads :

for t in ${allThreads[@]}; do
    ./pipeline --threads $t
done

Looping through array indices

Next, let's consider a slightly different approach. Rather than looping over array elements , we can loop over array indices :

for i in ${!allThreads[@]}; do
    ./pipeline --threads ${allThreads[$i]}
done

Let's break that down: As we saw above, ${allThreads[@]} represents all the elements in our array. Adding an exclamation mark to make it ${!allThreads[@]} will return the list of all array indices (in our case 0 to 7). In other words, the for loop is looping through all indices $i and reading the $i -th element from $allThreads to set the value of the --threads parameter.

This is much harsher on the eyes, so you may be wondering why I bother introducing it in the first place. That's because there are times where you need to know both the index and the value within a loop, e.g., if you want to ignore the first element of an array, using indices saves you from creating an additional variable that you then increment inside the loop.
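For example, a small sketch that skips the first element by looping over indices:

for i in ${!allThreads[@]}; do
    (( i == 0 )) && continue            # skip the first element
    ./pipeline --threads ${allThreads[$i]}
done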

Populating arrays

So far, we've been able to launch the pipeline for each --threads of interest. Now, let's assume the output to our pipeline is the runtime in seconds. We would like to capture that output at each iteration and save it in another array so we can do various manipulations with it at the end.

Some useful syntax

But before diving into the code, we need to introduce some more syntax. First, we need to be able to retrieve the output of a Bash command. To do so, use the following syntax: output=$( ./my_script.sh ) , which will store the output of our commands into the variable $output .

The second bit of syntax we need is how to append the value we just retrieved to an array. The syntax to do that will look familiar:

myArray+=( "newElement1" "newElement2" )
The parameter sweep

Putting everything together, here is our script for launching our parameter sweep:

allThreads=(1 2 4 8 16 32 64 128)
allRuntimes=()
for t in ${allThreads[@]}; do
    runtime=$(./pipeline --threads $t)
    allRuntimes+=( $runtime )
done

And voilà!

What else you got?

In this article, we covered the scenario of using arrays for parameter sweeps. But I promise there are more reasons to use Bash arrays -- here are two more examples.

Log alerting

In this scenario, your app is divided into modules, each with its own log file. We can write a cron job script to email the right person when there are signs of trouble in certain modules:

# List of logs and who should be notified of issues
logPaths=("api.log" "auth.log" "jenkins.log" "data.log")
logEmails=("jay@email" "emma@email" "jon@email" "sophia@email")

# Look for signs of trouble in each log
for i in ${!logPaths[@]}; do
    log=${logPaths[$i]}
    stakeholder=${logEmails[$i]}
    numErrors=$(tail -n 100 "$log" | grep "ERROR" | wc -l)

    # Warn stakeholders if we recently saw > 5 errors
    if [[ "$numErrors" -gt 5 ]]; then
        emailRecipient="$stakeholder"
        emailSubject="WARNING: ${log} showing unusual levels of errors"
        emailBody="${numErrors} errors found in log ${log}"
        echo "$emailBody" | mailx -s "$emailSubject" "$emailRecipient"
    fi
done

API queries

Say you want to generate some analytics about which users comment the most on your Medium posts. Since we don't have direct database access, SQL is out of the question, but we can use APIs!

To avoid getting into a long discussion about API authentication and tokens, we'll instead use JSONPlaceholder , a public-facing API testing service, as our endpoint. Once we query each post and retrieve the emails of everyone who commented, we can append those emails to our results array:

endpoint="https://jsonplaceholder.typicode.com/comments"
allEmails=()

# Query first 10 posts
for postId in {1..10}; do
    # Make API call to fetch emails of this post's commenters
    response=$(curl "${endpoint}?postId=${postId}")

    # Use jq to parse the JSON response into an array
    allEmails+=( $(jq '.[].email' <<< "$response") )
done

Note here that I'm using the jq tool to parse JSON from the command line. The syntax of jq is beyond the scope of this article, but I highly recommend you look into it.

As you might imagine, there are countless other scenarios in which using Bash arrays can help, and I hope the examples outlined in this article have given you some food for thought. If you have other examples to share from your own work, please leave a comment below.

But wait, there's more!

Since we covered quite a bit of array syntax in this article, here's a summary of what we covered, along with some more advanced tricks we did not cover:

Syntax Result
arr=() Create an empty array
arr=(1 2 3) Initialize array
${arr[2]} Retrieve third element
${arr[@]} Retrieve all elements
${!arr[@]} Retrieve array indices
${#arr[@]} Calculate array size
arr[0]=3 Overwrite 1st element
arr+=(4) Append value(s)
str=$(ls) Save ls output as a string
arr=( $(ls) ) Save ls output as an array of files
${arr[@]:s:n} Retrieve n elements starting at index s
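To make the last few rows concrete, a quick interactive example:

arr=(a b c d e)
echo ${#arr[@]}       # 5            (array size)
echo ${!arr[@]}       # 0 1 2 3 4    (indices)
echo ${arr[@]:1:3}    # b c d        (3 elements starting at index 1)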
One last thought

As we've discovered, Bash arrays sure have strange syntax, but I hope this article convinced you that they are extremely powerful. Once you get the hang of the syntax, you'll find yourself using Bash arrays quite often.

... ... ...

Robert Aboukhalil is a Bioinformatics Software Engineer. In his work, he develops cloud applications for the analysis and interactive visualization of genomics data. Robert holds a Ph.D. in Bioinformatics from Cold Spring Harbor Laboratory and a B.Eng. in Computer Engineering from McGill.

[May 28, 2018] TIP: 7-zip's XZ compression on a multiprocessor system is often faster and compresses better than gzip (linuxadmin)

May 28, 2018 | www.reddit.com

TIP: 7-zip's XZ compression on a multiprocessor system is often faster and compresses better than gzip ( self.linuxadmin )


kristopolous 4 years ago (4 children)

I did this a while back also. Here's a graph: http://i.imgur.com/gPOQBfG.png

X axis is compression level (min to max), Y is the size of the file that was compressed.

I forget what the file was.

TyIzaeL 4 years ago (3 children)
That is a great start (probably better than what I am doing). Do you have time comparisons as well?
kristopolous 4 years ago (1 child)
http://www.reddit.com/r/linuxquestions/comments/1gdvnc/best_file_compression_format/caje4hm there's the post
TyIzaeL 4 years ago (0 children)
Very nice. I might work on something similar to this soon next time I'm bored.
kristopolous 4 years ago (0 children)
nope.
TyIzaeL 4 years ago (0 children)
That's a great point to consider among all of this. Compression is always a tradeoff between how much CPU and memory you want to throw at something and how much space you would like to save. In my case, hammering the server for 3 minutes in order to take a backup is necessary because the uncompressed data would bottleneck at the LAN speed.
randomfrequency 4 years ago (0 children)
You might want to play with 'pigz' - it's gzip, multi-threaded. You can 'pv' to restrict the rate of the output, and it accepts signals to control the rate limiting.
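A sketch of that combination (flag spellings per the pigz and pv man pages; adjust the thread count and rate limit to taste):

# compress with 8 threads, cap the output stream at 10 MiB/s
tar -cf - /path/to/data | pigz -p 8 | pv -L 10M > data.tar.gz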
rrohbeck 4 years ago (1 child)
Also pbzip2 -1 to -9 and pigz -1 to -9.

With -9 you can surely make backup CPU bound. I've given up on compression though: rsync is much faster than straight backup and I use btrfs compression/deduplication/snapshotting on the backup server.

TyIzaeL 4 years ago (0 children)
pigz -9 is already on the chart as pigz --best. I'm working on adding the others though.
TyIzaeL 4 years ago (0 children)
I'm running gzip, bzip2, and pbzip2 now (not at the same time, of course) and will add results soon. But in my case the compression keeps my db dumps from being IO bound by the 100mbit LAN connection. For example, lzop in the results above puts out 6041.632 megabits in 53.82 seconds for a total compressed data rate of 112 megabits per second, which would make the transfer IO bound. Whereas the pigz example puts out 3339.872 megabits in 81.892 seconds, for an output data rate of 40.8 megabits per second. This is just on my dual-core box with a static file, on the 8-core server I see the transfer takes a total of about three minutes. It's probably being limited more by the rate at which the MySQL server can dump text from the database, but if there was no compression it'd be limited by the LAN speed. If we were dumping 2.7GB over the LAN directly, we would need 122mbit/s of real throughput to complete it in three minutes.
Shammyhealz 4 years ago (2 children)
I thought the best compression was supposed to be LZMA? Which is what the .7z archives are. I have no idea of the relative speed of LZMA and gzip
TyIzaeL 4 years ago (1 child)
xz archives use the LZMA2 format (which is also used in 7z archives). LZMA2 speed seems to range from a little slower than gzip to much slower than bzip2, but results in better compression all around.
primitive_screwhead 4 years ago (0 children)
However LZMA2 decompression speed is generally much faster than bzip2, in my experience, though not as fast as gzip. This is why we use it, as we decompress our data much more often than we compress it, and the space saving/decompression speed tradeoff is much more favorable for us than either gzip of bzip2.
crustang 4 years ago (2 children)
I mentioned how 7zip was superior to all other zip programs in /r/osx a few days ago and my comment was buried in favor of the osx circlejerk ... it feels good seeing this data.

I love 7zip

RTFMorGTFO 4 years ago (1 child)
Why... Tar supports xz, lzma, lzop, lzip, and any other kernel based compression algorithms. Its also much more likely to be preinstalled on your given distro.
crustang 4 years ago (0 children)
I've used 7zip at my old job for a backup of our business software's database. We needed speed, high level of compression, and encryption. Portability wasn't high on the list since only a handful of machines needed access to the data. All machines were multi-processor and 7zip gave us the best of everything given the requirements. I haven't really looked at anything deeply - including tar, which my old boss didn't care for.
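For what it's worth, a 7za invocation along those lines might look like this (a sketch; double-check the -m switches against your p7zip version):

# multi-threaded LZMA2, maximum compression, encrypted headers, prompt for a password
7za a -t7z -m0=lzma2 -mmt=on -mx=9 -mhe=on -p backup.7z /path/to/db_dump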

[May 28, 2018] RPM RedHat EL 6 p7zip 9.20.1 x86_64 rpm

May 28, 2018 | rpm.pbone.net
p7zip rpm build for: RedHat EL 6.
Name : p7zip
Version : 9.20.1 Vendor : Dag Apt Repository, http://dag_wieers_com/apt/
Release : 1.el6.rf Date : 2011-04-20 15:23:34
Group : Applications/Archiving Source RPM : p7zip-9.20.1-1.el6.rf.src.rpm
Size : 14.84 MB
Packager : Dag Wieers < dag_wieers_com>
Summary : Very high compression ratio file archiver
Description :
p7zip is a port of 7za.exe for Unix. 7-Zip is a file archiver with a very high
compression ratio. The original version can be found at http://www.7-zip.org/.

RPM found in directory: /mirror/apt.sw.be/redhat/el6/en/x86_64/rpmforge/RPMS

Download:
ftp.univie.ac.at p7zip-9.20.1-1.el6.rf.x86_64.rpm
ftp.rediris.es p7zip-9.20.1-1.el6.rf.x86_64.rpm
ftp.icm.edu.pl p7zip-9.20.1-1.el6.rf.x86_64.rpm
ftp.pbone.net p7zip-9.20.1-1.el6.rf.x86_64.rpm
ftp.is.co.za p7zip-9.20.1-1.el6.rf.x86_64.rpm

[May 28, 2018] TIL pigz exists: "A parallel implementation of gzip for modern multi-processor, multi-core machines" (linux)

May 28, 2018 | www.reddit.com

TIL pigz exists "A parallel implementation of gzip for modern multi-processor, multi-core machines" ( self.linux )

submitted 3 years ago by msiekkinen

tangre 3 years ago (74 children)

Why wouldn't gzip be updated with this functionality instead? Is there a point in keeping it separate?
ilikerackmounts 3 years ago (59 children)
There are certain file sizes where pigz makes no difference; in general you need at least 2 cores to feel the benefits. There are quite a few reasons. That being said, pigz and its bzip2 counterpart pbzip2 can be symlinked in place when emerged with Gentoo using the "symlink" USE flag.
adam@eggsbenedict ~ $ eix pigz
[I] app-arch/pigz
   Available versions:  2.2.5 2.3 2.3.1 (~)2.3.1-r1 {static symlink |test}
   Installed versions:  2.3.1-r1(02:06:01 01/25/14)(symlink -static -|test)
   Homepage:            http://www.zlib.net/pigz/
   Description:         A parallel implementation of gzip
msiekkinen 3 years ago (38 children)

in general you need at least 2 cores to feel the benefits

Is it even possible to buy any single core cpus outside of some kind of specialized embedded system these days?

exdirrk 3 years ago (5 children)
Virtualization.
tw4 3 years ago (2 children)
Yes, but nevertheless it's possible to allocate only one.
too_many_secrets 3 years ago (0 children)

Giving a VM more than one CPU is quite a rare circumstance.

Depends on your circumstances. It's rare that we have any VMs with a single CPU, but we have thousands of servers and a lot of things going on.

FakingItEveryDay 3 years ago (0 children)
You can, but often shouldn't. I can only speak for vmware here, other hypervisors may work differently. Generally you want to size your VMware VMs so that they are around 80% CPU utilization. When any VM with multiple cores needs compute power the hypervisor will make it wait until it can free that number of CPUs, even if the task in the VM only needs one core. This makes the multi-core VM slower by having to wait longer to do its work, as well as makes other VMs on the hypervisor slower as they must all wait for it to finish before they can get a core allocated.

[May 28, 2018] Solaris: Parallel Compression/Decompression

Notable quotes:
"... the following prstat, vmstat outputs show that gzip is compressing the ..."
"... tar file using a single thread – hence low CPU utilization. ..."
"... wall clock time is 25s compared to gzip's 3m 27s ..."
"... the following prstat, vmstat outputs show that pigz is compressing the ..."
"... tar file using many threads – hence busy system with high CPU utilization. ..."
"... shows that the pigz compressed file is ..."
"... compatible with gzip/gunzip ..."
"... compare gzip's 52s decompression time with pigz's 18s ..."
May 28, 2018 | hadafq8.wordpress.com

Posted on January 26, 2015 by Sandeep Shenoy

This topic is not Solaris specific, but certainly helps Solaris users who are frustrated with the single threaded implementation of all officially supported compression tools such as compress, gzip, zip. pigz (pig-zee) is a parallel implementation of gzip that suits well for the latest multi-processor, multi-core machines. By default, pigz breaks up the input into multiple chunks of size 128 KB, and compresses each chunk in parallel with the help of light-weight threads. The number of compression threads is set by default to the number of online processors. The chunk size and the number of threads are configurable. Compressed files can be restored to their original form using the -d option of pigz or gzip. As per the man page, decompression is not parallelized out of the box, but may show some improvement compared to the existing old tools. The following example demonstrates the advantage of using pigz over gzip in compressing and decompressing a large file. eg.,
Original file, and the target hardware.

$ ls -lh PT8.53.04.tar
-rw-r--r-- 1 psft dba 4.8G Feb 28 14:03 PT8.53.04.tar

$ psrinfo -pv
The physical processor has 8 cores and 64 virtual processors (0-63)
  The core has 8 virtual processors (0-7)
  The core has 8 virtual processors (56-63)
    SPARC-T5 (chipid 0, clock 3600 MHz)

gzip compression.

$ time gzip --fast PT8.53.04.tar

real    3m40.125s
user    3m27.105s
sys     0m13.008s

$ ls -lh PT8.53*
-rw-r--r-- 1 psft dba 3.1G Feb 28 14:03 PT8.53.04.tar.gz

/* the following prstat, vmstat outputs show that gzip is compressing the
   tar file using a single thread – hence low CPU utilization. */

$ prstat -p 42510
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
 42510 psft     2616K 2200K cpu16   10    0   0:01:00 1.5% gzip/1

$ prstat -m -p 42510
   PID USERNAME USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG PROCESS/NLWP
 42510 psft      95 4.6 0.0 0.0 0.0 0.0 0.0 0.0   0  35  7K   0 gzip/1

$ vmstat 2
 r b w   swap      free     re mf pi po fr de sr s0 s1 s2 s3   in   sy   cs us sy id
 0 0 0 776242104 917016008   0  7  0  0  0  0  0  0  0 52 52 3286 2606 2178  2  0 98
 1 0 0 776242104 916987888   0 14  0  0  0  0  0  0  0  0  0 3851 3359 2978  2  1 97
 0 0 0 776242104 916962440   0  0  0  0  0  0  0  0  0  0  0 3184 1687 2023  1  0 98
 0 0 0 775971768 916930720   0  0  0  0  0  0  0  0  0 39 37 3392 1819 2210  2  0 98
 0 0 0 775971768 916898016   0  0  0  0  0  0  0  0  0  0  0 3452 1861 2106  2  0 98

pigz compression.

$ time ./pigz PT8.53.04.tar

real    0m25.111s    <== wall clock time is 25s compared to gzip's 3m 27s
user    17m18.398s
sys     0m37.718s

/* the following prstat, vmstat outputs show that pigz is compressing the
   tar file using many threads – hence busy system with high CPU utilization. */

$ prstat -p 49734
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
 49734 psft       59M   58M sleep   11    0   0:12:58  38% pigz/66

$ vmstat 2
 kthr      memory            page            disk          faults      cpu
 r b w   swap      free     re  mf pi po fr de sr s0 s1 s2 s3    in    sy    cs us sy id
 0 0 0 778097840 919076008   6 113  0  0  0  0  0  0  0 40 36 39330 45797 74148 61  4 35
 0 0 0 777956280 918841720   0   1  0  0  0  0  0  0  0  0  0 38752 43292 71411 64  4 32
 0 0 0 777490336 918334176   0   3  0  0  0  0  0  0  0 17 15 46553 53350 86840 60  4 35
 1 0 0 777274072 918141936   0   1  0  0  0  0  0  0  0 39 34 16122 20202 28319 88  4  9
 1 0 0 777138800 917917376   0   0  0  0  0  0  0  0  0  3  3 46597 51005 86673 56  5 39

$ ls -lh PT8.53.04.tar.gz
-rw-r--r-- 1 psft dba 3.0G Feb 28 14:03 PT8.53.04.tar.gz

$ gunzip PT8.53.04.tar.gz    <== shows that the pigz compressed file is compatible with gzip/gunzip

$ ls -lh PT8.53*
-rw-r--r-- 1 psft dba 4.8G Feb 28 14:03 PT8.53.04.tar

Decompression.

$ time ./pigz -d PT8.53.04.tar.gz

real    0m18.068s
user    0m22.437s
sys     0m12.857s

$ time gzip -d PT8.53.04.tar.gz

real    0m52.806s    <== compare gzip's 52s decompression time with pigz's 18s
user    0m42.068s
sys     0m10.736s

$ ls -lh PT8.53.04.tar
-rw-r--r-- 1 psft dba 4.8G Feb 28 14:03 PT8.53.04.tar
Of course, other tools such as Parallel BZIP2 (PBZIP2), a parallel implementation of the bzip2 tool, are worth a try too. The idea here is to highlight the fact that there are better tools out there to get the job done in a quick manner compared to the existing/old tools that are bundled with the operating system distribution.
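As noted above, the chunk size and thread count are tunable; for example (a sketch based on the pigz man page, worth verifying on your build):

pigz -p 32 -b 512 -9 PT8.53.04.tar      # 32 threads, 512 KiB blocks, best compression
pigz -d PT8.53.04.tar.gz                # decompression (largely single-threaded)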

[May 28, 2018] Useful Linux Command Line Bash Shortcuts You Should Know

May 28, 2018 | www.tecmint.com

In this article, we will share a number of Bash command-line shortcuts useful for any Linux user. These shortcuts allow you to easily and in a fast manner, perform certain activities such as accessing and running previously executed commands, opening an editor, editing/deleting/changing text on the command line, moving the cursor, controlling processes etc. on the command line.

Although this article will mostly benefit Linux beginners getting their way around with command line basics, those with intermediate skills and advanced users might also find it practically helpful. We will group the bash keyboard shortcuts according to categories as follows.

Launch an Editor

Open a terminal and press Ctrl+X and Ctrl+E to open an editor ( nano editor ) with an empty buffer. Bash will try to launch the editor defined by the $EDITOR environment variable.
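For example, to have Ctrl+X Ctrl+E open vim instead of nano, you could set this in your ~/.bashrc:

export EDITOR=vim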

Nano Editor

Controlling The Screen

These shortcuts are used to control terminal screen output:

Move Cursor on The Command Line

The next shortcuts are used for moving the cursor within the command-line:

Search Through Bash History

The following shortcuts are used for searching for commands in the bash history:

Delete Text on the Command Line

The following shortcuts are used for deleting text on the command line:

Transpose Text or Change Case on the Command Line

These shortcuts will transpose or change the case of letters or words on the command line:

Working With Processes in Linux

The following shortcuts help you to control running Linux processes.

Learn more about: All You Need To Know About Processes in Linux [Comprehensive Guide]

Bash Bang (!) Commands

In the final part of this article, we will explain some useful ! (bang) operations:
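A few common examples (standard bash history expansions, not an exhaustive list):

!!          # re-run the previous command
sudo !!     # re-run it with sudo
!$          # last argument of the previous command
!grep       # most recent command starting with "grep"
!42         # command number 42 from the history list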

For more information, see the bash man page:

$ man bash

That's all for now! In this article, we shared some common and useful Bash command-line shortcuts and operations. Use the comment form below to make any additions or ask questions.

[May 20, 2018] Midnight Commander (mc): convenient hard links creation from user menu

Notable quotes:
"... Future Releases ..."
May 20, 2018 | bogdan.org.ua

3rd December 2015

Midnight Commander is a convenient two-panel file manager with tons of features.

You can create hard links and symbolic links using C-x l and C-x s keyboard shortcuts. However, these two shortcuts invoke two completely different dialogs.

While for C-x s you get 2 pre-populated fields (the path to the existing file, and the path to the link – which is pre-populated with your opposite file panel path plus the name of the file under cursor; simply try it to see what I mean), for C-x l you only get 1 empty field: the path of the hard link to create for the file under cursor. The symlink behaviour would be much more convenient here.

Fortunately, a good man called Wiseman1024 created a feature request in the MC's bug tracker 6 years ago. Not only had he done so, but he had also uploaded a sample mc user menu script ( local copy ), which works wonderfully! You can select multiple files, then F2 l (lower-case L), and hard-links to your selected files (or a file under cursor) will be created in the opposite file panel. Great, thank you Wiseman1024 !

Word of warning: you must know what hard links are and what their limitations are before using this menu script. You also must check and understand the user menu code before adding it to your mc (by F9 C m u , and then pasting the script from the file).
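For the general flavour only (this is not Wiseman1024's script), a minimal user menu entry might look roughly like the following, assuming the usual mc macros where %s expands to the tagged files (or the file under the cursor), %d to the current panel's directory and %D to the other panel's directory; double-check these macros against your mc documentation before use:

L       Hard-link tagged files into the other panel
        for f in %s; do
                ln "%d/$f" "%D/$f"
        done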

Word of hope: 4 years ago Wiseman's feature request was assigned to the Future Releases version, so a more convenient C-x l will (sooner or later) become part of mc. Hopefully.

[May 20, 2018] Midnight Commander: panelize or select all files newer than specified date

May 20, 2018 | bogdan.org.ua

3rd February 2017

If you ever need to select lots (hundreds, thousands) of files by their modification date, and your directory contains many more files (thousands, tens of thousands), then angel_il has the answer for you:
  1. touch -d "Jun 01 00:00 2011" /tmp/.date1
  2. enter into your BIG dir
  3. press C-x ! (External panelize)
  4. add new command like a "find . -type f \( -newer /tmp/.date1 \) -print"

I've used a slightly different approach, specifying desired date right in the command line of External Panelize:

  1. enter your directory with many files
  2. press C-x ! (External Panelize)
  3. add a command like find . -type f -newermt "2017-02-01 23:55:00" -print ( man find for more details)

In both cases, the created panel will only have files matching your search condition.

[Apr 30, 2018] New Book Describes Bluffing Programmers in Silicon Valley

Notable quotes:
"... Live Work Work Work Die: A Journey into the Savage Heart of Silicon Valley ..."
"... Older generations called this kind of fraud "fake it 'til you make it." ..."
"... Nowadays I work 9:30-4:30 for a very good, consistent paycheck and let some other "smart person" put in 75 hours a week dealing with hiring ..."
"... It's not a "kids these days" sort of issue, it's *always* been the case that shameless, baseless self-promotion wins out over sincere skill without the self-promotion, because the people who control the money generally understand boasting more than they understand the technology. ..."
"... In the bad old days we had a hell of a lot of ridiculous restriction We must somehow made our programs to run successfully inside a RAM that was 48KB in size (yes, 48KB, not 48MB or 48GB), on a CPU with a clock speed of 1.023 MHz ..."
"... So what are the uses for that? I am curious what things people have put these to use for. ..."
"... Also, Oracle, SAP, IBM... I would never buy from them, nor use their products. I have used plenty of IBM products and they suck big time. They make software development 100 times harder than it could be. ..."
"... I have a theory that 10% of people are good at what they do. It doesn't really matter what they do, they will still be good at it, because of their nature. These are the people who invent new things, who fix things that others didn't even see as broken and who automate routine tasks or simply question and erase tasks that are not necessary. ..."
"... 10% are just causing damage. I'm not talking about terrorists and criminals. ..."
"... Programming is statistically a dead-end job. Why should anyone hone a dead-end skill that you won't be able to use for long? For whatever reason, the industry doesn't want old programmers. ..."
Apr 30, 2018 | news.slashdot.org

Long-time Slashdot reader Martin S. pointed us to this an excerpt from the new book Live Work Work Work Die: A Journey into the Savage Heart of Silicon Valley by Portland-based investigator reporter Corey Pein.

The author shares what he realized at a job recruitment fair seeking Java Legends, Python Badasses, Hadoop Heroes, "and other gratingly childish classifications describing various programming specialities.

" I wasn't the only one bluffing my way through the tech scene. Everyone was doing it, even the much-sought-after engineering talent.

I was struck by how many developers were, like myself, not really programmers , but rather this, that and the other. A great number of tech ninjas were not exactly black belts when it came to the actual onerous work of computer programming. So many of the complex, discrete tasks involved in the creation of a website or an app had been automated that it was no longer necessary to possess knowledge of software mechanics. The coder's work was rarely a craft. The apps ran on an assembly line, built with "open-source", off-the-shelf components. The most important computer commands for the ninja to master were copy and paste...

[M]any programmers who had "made it" in Silicon Valley were scrambling to promote themselves from coder to "founder". There wasn't necessarily more money to be had running a startup, and the increase in status was marginal unless one's startup attracted major investment and the right kind of press coverage. It's because the programmers knew that their own ladder to prosperity was on fire and disintegrating fast. They knew that well-paid programming jobs would also soon turn to smoke and ash, as the proliferation of learn-to-code courses around the world lowered the market value of their skills, and as advances in artificial intelligence allowed for computers to take over more of the mundane work of producing software. The programmers also knew that the fastest way to win that promotion to founder was to find some new domain that hadn't yet been automated. Every tech industry campaign designed to spur investment in the Next Big Thing -- at that time, it was the "sharing economy" -- concealed a larger programme for the transformation of society, always in a direction that favoured the investor and executive classes.

"I wasn't just changing careers and jumping on the 'learn to code' bandwagon," he writes at one point. "I was being steadily indoctrinated in a specious ideology."


Anonymous Coward , Saturday April 28, 2018 @11:40PM ( #56522045 )

older generations already had a term for this ( Score: 5 , Interesting)

Older generations called this kind of fraud "fake it 'til you make it."

raymorris ( 2726007 ) , Sunday April 29, 2018 @02:05AM ( #56522343 ) Journal
The people who are smarter won't ( Score: 5 , Informative)

> The people can do both are smart enough to build their own company and compete with you.

Been there, done that. Learned a few lessons. Nowadays I work 9:30-4:30 for a very good, consistent paycheck and let some other "smart person" put in 75 hours a week dealing with hiring, managing people, corporate strategy, staying up on the competition, figuring out tax changes each year and getting taxes filed six times each year, the various state and local requirements, legal changes, contract hassles, etc, while hoping the company makes money this month so they can take a paycheck and pay their rent.

I learned that I'm good at creating software systems and I enjoy it. I don't enjoy all-nighters, partners being dickheads trying to pull out of a contract, or any of a thousand other things related to running a start-up business. I really enjoy a consistent, six-figure compensation package too.

brian.stinar ( 1104135 ) writes:
Re: ( Score: 2 )

* getting taxes filled eighteen times a year.

I pay monthly gross receipts tax (12), quarterly withholdings (4) and a corporate (1) and individual (1) returns. The gross receipts can vary based on the state, so I can see how six times a year would be the minimum.

Cederic ( 9623 ) writes:
Re: ( Score: 2 )

Fuck no. Cost of full automation: $4m. Cost of manual entry: $0. Opportunity cost of manual entry: $800/year.

At worst, pay for an accountant, if you can get one that cheaply. Bear in mind talking to them incurs most of that opportunity cost anyway.

serviscope_minor ( 664417 ) writes:
Re: ( Score: 2 )

Nowadays I work 9:30-4:30 for a very good, consistent paycheck and let some other "smart person" put in 75 hours a week dealing with hiring

There's nothing wrong with not wanting to run your own business, it's not for most people, and even if it was, the numbers don't add up. But putting the scare quotes in like that makes it sound like you have a huge chip on your shoulder. Those things are just as essential to the business as your work and without them you wouldn't have the steady 9:30-4:30 with a good paycheck.

raymorris ( 2726007 ) writes:
Important, and dumb. ( Score: 3 , Informative)

Of course they are important. I wouldn't have done those things if they weren't important!

I frequently have friends say things like "I love baking. I can't get enough of baking. I'm going to open a bakery.". I ask them "do you love dealing with taxes, every month? Do you love contract law? Employment law? Marketing? Accounting?" If you LOVE baking, the smart thing to do is to spend your time baking. Running a start-up business, you're not going to do much baking.

If you love marketing, employment law, taxes

raymorris ( 2726007 ) writes:
Four tips for a better job. Who has more? ( Score: 3 )

I can tell you a few things that have worked for me. I'll go in chronological order rather than priority order.

Make friends in the industry you want to be in. Referrals are a major way people get jobs.

Look at the job listings for jobs you'd like to have and see which skills a lot of companies want, but you're missing. For me that's Java. A lot companies list Java skills and I'm not particularly good with Java. Then consider learning the skills you lack, the ones a lot of job postings are looking for.

Certifi

goose-incarnated ( 1145029 ) , Sunday April 29, 2018 @02:34PM ( #56524475 ) Journal
Re: older generations already had a term for this ( Score: 5 , Insightful)
You don't understand the point of an ORM do you? I'd suggest reading why they exist

They exist because programmers value code design more than data design. ORMs are the poster-child for square-peg-round-hole solutions, which is why all ORMs choose one of three different ways of squashing hierarchical data into a relational form, all of which are crappy.

If the devs of the system (the ones choosing to use an ORM) had any competence at all they'd design their database first because in any application that uses a database the database is the most important bit, not the OO-ness or Functional-ness of the design.

Over the last few decades I've seen programs in a system come and go; a component here gets rewritten, a component there gets rewritten, but you know what? They all have to work with the same damn data.

You can more easily switch out your code for new code with new design in a new language, than you can switch change the database structure. So explain to me why it is that you think the database should be mangled to fit your OO code rather than mangling your OO code to fit the database?

cheekyboy ( 598084 ) writes:
im sick of reinventors and new frameworks ( Score: 3 )

Stick to the one thing for 10-15years. Often all this new shit doesn't do jack different to the old shit, its not faster, its not better. Every dick wants to be famous so make another damn library/tool with his own fancy name and feature, instead of enhancing an existing product.

gbjbaanb ( 229885 ) writes:
Re: ( Score: 2 )

amen to that.

Or kids who can't hack the main stuff, suddenly discover the cool new, and then they can pretend they're "learning" it, and when the going gets tough (as it always does) they can declare the tech to be pants and move to another.

hence we had so many people on the bandwagon for functional programming, then dumped it for ruby on rails, then dumped that for Node.js, not sure what they're on at currently, probably back to asp.net.

Greyfox ( 87712 ) writes:
Re: ( Score: 2 )

How much code do you have to reuse before you're not really programming anymore? When I started in this business, it was reasonably possible that you could end up on a project that didn't particularly have much (or any) of an operating system. They taught you assembly language and the process by which the system boots up, but I think if I were to ask most of the programmers where I work, they wouldn't be able to explain how all that works...

djinn6 ( 1868030 ) writes:
Re: ( Score: 2 )
It really feels like if you know what you're doing it should be possible to build a team of actually good programmers and put everyone else out of business by actually meeting your deliverables, but no one has yet. I wonder why that is.

You mean Amazon, Google, Facebook and the like? People may not always like what they do, but they manage to get things done and make plenty of money in the process. The problem for a lot of other businesses is not having a way to identify and promote actually good programmers. In your example, you could've spent 10 minutes fixing their query and saved them days of headache, but how much recognition will you actually get? Where is your motivation to help them?

Junta ( 36770 ) writes:
Re: ( Score: 2 )

It's not a "kids these days" sort of issue, it's *always* been the case that shameless, baseless self-promotion wins out over sincere skill without the self-promotion, because the people who control the money generally understand boasting more than they understand the technology. Yes it can happen that baseless boasts can be called out over time by a large enough mass of feedback from competent peers, but it takes a *lot* to overcome the tendency for them to have faith in the boasts.

It does correlate stron

cheekyboy ( 598084 ) writes:
Re: ( Score: 2 )

And all these modern coders forget old lessons, and make shit stuff, just look at instagram windows app, what a load of garbage shit, that us old fuckers could code in 2-3 weeks.

Instagram - your app sucks, cookie cutter coders suck, no refinement, coolness. Just cheap ass shit, with limited usefulness.

Just like most of commercial software that's new - quick shit.

Oh and its obvious if your an Indian faking it, you haven't worked in 100 companies at the age of 29.

Junta ( 36770 ) writes:
Re: ( Score: 2 )

Here's another problem, if faced with a skilled team that says "this will take 6 months to do right" and a more naive team that says "oh, we can slap that together in a month", management goes with the latter. Then the security compromises occur, then the application fails due to pulling in an unvetted dependency update live into production. When the project grows to handling thousands instead of dozens of users and it starts mysteriously folding over and the dev team is at a loss, well the choice has be

molarmass192 ( 608071 ) , Sunday April 29, 2018 @02:15AM ( #56522359 ) Homepage Journal
Re:older generations already had a term for this ( Score: 5 , Interesting)

These restrictions are a large part of what makes Arduino programming "fun". If you don't plan out your memory usage, you're gonna run out of it. I cringe when I see 8MB web pages of bloated "throw in everything including the kitchen sink and the neighbor's car". Unfortunately, the careful and cautious way is dying in favor of throwing 3rd party code at it until it does something. Of course, I don't have time to review it but I'm sure everybody else has peer reviewed it for flaws and exploits line by line.

AmiMoJo ( 196126 ) writes: < mojo@@@world3...net > on Sunday April 29, 2018 @05:15AM ( #56522597 ) Homepage Journal
Re:older generations already had a term for this ( Score: 4 , Informative)
Unfortunately, the careful and cautious way is dying in favor of throwing 3rd party code at it until it does something.

Of course. What is the business case for making it efficient? Those massive frameworks are cached by the browser and run on the client's system, so cost you nothing and save you time to market. Efficient costs money with no real benefit to the business.

If we want to fix this, we need to make bloat have an associated cost somehow.

locketine ( 1101453 ) writes:
Re: older generations already had a term for this ( Score: 2 )

My company is dealing with the result of this mentality right now. We released the web app to the customer without performance testing and doing several majorly inefficient things to meet deadlines. Once real load was put on the application by users with non-ideal hardware and browsers, the app was infuriatingly slow. Suddenly our standard sub-40 hour workweek became a 50+ hour workweek for months while we fixed all the inefficient code and design issues.

So, while you're right that getting to market and opt

serviscope_minor ( 664417 ) writes:
Re: ( Score: 2 )

In the bad old days we had a hell of a lot of ridiculous restriction We must somehow made our programs to run successfully inside a RAM that was 48KB in size (yes, 48KB, not 48MB or 48GB), on a CPU with a clock speed of 1.023 MHz

We still have them. In fact some of the systems I've programmed have been more resource limited than the gloriously spacious 32KiB memory of the BBC model B. Take the PIC12F or 10F series. A glorious 64 bytes of RAM, a max clock speed of 16MHz, but it's not unusual to run it at 32kHz.

serviscope_minor ( 664417 ) writes:
Re: ( Score: 2 )

So what are the uses for that? I am curious what things people have put these to use for.

It's hard to determine because people don't advertise use of them at all. However, I know that my electric toothbrush uses an Epson 4-bit MCU of some description. It's got a status LED, a basic NiMH battery charger and a PWM controller for an H-bridge. Braun sell a *lot* of electric toothbrushes. Any gadget that's smarter than a simple switch will probably have some sort of basic MCU in it. Alarm system components, sensor

tlhIngan ( 30335 ) writes:
Re: ( Score: 3 , Insightful)
b) No computer ever ran at 1.023 MHz. It was either a nice multiple of 1Mhz or maybe a multiple of 3.579545Mhz (ie. using the TV output circuit's color clock crystal to drive the CPU).

Well, it could be used to drive the TV output circuit, OR, it was used because it's a stupidly cheap high speed crystal. You have to remember except for a few frequencies, most crystals would have to be specially cut for the desired frequency. This occurs even today, where most oscillators are either 32.768kHz (real time clock

Anonymous Coward writes:
Re: ( Score: 2 , Interesting)

Yeah, nice talk. You could have stopped after the first sentence. The other AC is referring to the Commodore C64 [wikipedia.org]. The frequency has nothing to do with crystal availability but with the simple fact that everything in the C64 is synced to the TV. One clock cycle equals 8 pixels. The graphics chip and the CPU take turns accessing the RAM. The different frequencies dictated by the TV standards are the reason why the CPU in the NTSC version of the C64 runs at 1.023MHz and the PAL version at 0.985MHz.

Wraithlyn ( 133796 ) writes:
Re: ( Score: 2 )

LOL what exactly is so special about 16K RAM? https://yourlogicalfallacyis.c... [yourlogicalfallacyis.com]

I cut my teeth on a VIC20 (5K RAM), then later a C64 (which ran at 1.023MHz...)

Anonymous Coward writes:
Re: ( Score: 2 , Interesting)

Commodore 64 for the win. I worked for a company that made detection devices for the railroad, things like monitoring axle temperatures, reading the rail car ID tags. The original devices were made using Commodore 64 boards using software written by an employee at the one rail road company working with them.

The company then hired some electrical engineers to design custom boards using the 68000 chips and I was hired as the only programmer. Had to rewrite all of the code which was fine...

wierd_w ( 1375923 ) , Saturday April 28, 2018 @11:58PM ( #56522075 )
... A job fair can easily test this competency. ( Score: 4 , Interesting)

Many of these languages have an interactive interpreter. I know for a fact that Python does.

So, since job-fairs are an all day thing, and setup is already a thing for them -- set up a booth with like 4 computers at it, and an admin station. The 4 terminals have an interactive session with the interpreter of choice. Every 20min or so, have a challenge for "Solve this problem" (needs to be easy and already solved in general. Programmers hate being pimped without pay. They don't mind tests of skill, but hate being pimped. Something like "sort this array, while picking out all the prime numbers" or something.) and see who steps up. The ones that step up have confidence they can solve the problem, and you can quickly see who can do the work and who can't.

The ones that solve it, and solve it to your satisfaction, you offer a nice gig to.
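For concreteness, a toy bash sketch of that sort of warm-up task (sort an array and pick out the primes) could look like this:

#!/bin/bash
# Toy example: sort an array of integers and print the primes among them.

is_prime() {
    local n=$1 i
    (( n < 2 )) && return 1
    for (( i = 2; i * i <= n; i++ )); do
        (( n % i == 0 )) && return 1
    done
    return 0
}

arr=(17 4 9 2 23 15 8 5)
sorted=( $(printf '%s\n' "${arr[@]}" | sort -n) )
echo "sorted: ${sorted[*]}"

primes=()
for n in "${sorted[@]}"; do
    is_prime "$n" && primes+=("$n")
done
echo "primes: ${primes[*]}"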

ShanghaiBill ( 739463 ) , Sunday April 29, 2018 @01:50AM ( #56522321 )
Re:... A job fair can easily test this competency. ( Score: 5 , Informative)
Then you get someone good at sorting arrays while picking out prime numbers, but potentially not much else.

The point of the test is not to identify the perfect candidate, but to filter out the clearly incompetent. If you can't sort an array and write a function to identify a prime number, I certainly would not hire you. Passing the test doesn't get you a job, but it may get you an interview ... where there will be other tests.

wierd_w ( 1375923 ) writes:
Re: ( Score: 2 )

BINGO!

(I am not even a professional programmer, but I can totally perform such a trivially easy task. The example tests basic understanding of loop construction, function construction, variable use, efficient sorting, and error correction-- especially with mixed type arrays. All of these are things any programmer SHOULD know how to do, without being overly complicated, or clearly a disguised occupational problem trying to get a free solution. Like I said, programmers hate being pimped, and will be turned off

wierd_w ( 1375923 ) , Sunday April 29, 2018 @04:02AM ( #56522443 )
Re: ... A job fair can easily test this competency ( Score: 5 , Insightful)

Again, the quality applicant and the code monkey both have something the fakers do not-- Actual comprehension of what a program is, and how to create one.

As Bill points out, this is not the final exam. This is the "Oh, I see you do actually know how to program-- show me more" portion of the process. This is the part that HR drones are not capable of performing, due to Dunning-Krueger. Those that are actually, REALLY competent will do more than just satisfy the requirements of the challenge, they will provide actually working solutions to the challenge that properly validate their input, and return proper error states if the input is invalid, etc-- You can learn a LOT about a potential hire by observing their work. *THAT* is what this is really about. The triviality of the problem is a necessity, because you ***DON'T*** try to get free solutions out of people.

I realize that may be difficult for you to comprehend, but you *DON'T* do that. The job fair is to let people know that you have a position available, and try to curry interest in people to apply. A successful pre-screening is confidence building, and helps the potential hire to feel that your company is actually interested in actually hiring somebody, and not just fucking off in the booth, to cover for "failing to find somebody" and then "Getting yet another H1B". It gives them a chance to show you what they can do. That is what it is for, and what it does. It also excludes the fakers that this article is about-- The ones that can talk a good talk, but could not program a simple boolean check condition if their life depended on it.

If it were not for the time constraints of a job fair (usually only 2 days, and in that time you need to try and pre-screen as many as possible), I would suggest a tiered challenge, with progressively harder challenges, where you hand out resumes to the ones that make it to the top 3 brackets, but that is not the way the world works.

luis_a_espinal ( 1810296 ) writes:
Re: ( Score: 2 )
This in my opinion is really a waste of time. Challenges like this have to be so simple that they can be done walking up to a booth, and so are not likely to filter the "all talk" types any better than a few in-person interview questions could (in person so the candidate can't just google it).

Tougher, more involved stuff isn't good either: it gives a huge advantage to the full-time job hunter, while the guy or gal that already has a 9-5 and a family that wants to see them has not got time for games. We have been struggling with hiring where I work (I do a lot of the interviews) and these are the conclusions we have reached.

You would be surprised at the number of people with impeccable-looking resumes failing at something as simple as the FizzBuzz test [codinghorror.com]
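(For scale, the whole FizzBuzz exercise fits in a few lines of bash; this sketch is not from the thread, it is only here to show how small the bar being discussed actually is:)

for i in $(seq 1 100); do
    if   (( i % 15 == 0 )); then echo "FizzBuzz"
    elif (( i % 3  == 0 )); then echo "Fizz"
    elif (( i % 5  == 0 )); then echo "Buzz"
    else echo "$i"
    fi
done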

PaulRivers10 ( 4110595 ) writes:
Re: ... A job fair can easily test this competenc ( Score: 2 )

The only thing fizzbuzz tests is "have you done fizzbuzz before?" It's a short question filled with every petty trick the author could think to throw in there. If you haven't seen the tricks, they trip you up for no reason related to your actual coding skills. Once you have seen them they're trivial and again unrelated to real work. Fizzbuzz is best passed by someone aiming to game the interview system. It passes people gaming it and trips up people who spent their time doing real work on the job.

Hognoxious ( 631665 ) writes:
Re: ( Score: 2 )
they trip you up for no reason related to your actual coding skills.

Bullshit!

luis_a_espinal ( 1810296 ) , Sunday April 29, 2018 @07:49AM ( #56522861 ) Homepage
filter the lame code monkeys ( Score: 4 , Informative)
Lame monkey tests select for lame monkeys.

A good programmer first and foremost has a clean mind. Experience suggests puzzle geeks, who excel at contrived tests, are usually sloppy thinkers.

No. Good programmers can trivially knock out any of these so-called lame monkey tests. It's lame code monkeys who can't do it. And I've seen their work. Many night shifts and weekends I've burned trying to fix their shit because they couldn't actually do any of the things behind what you call "lame monkey tests", like:

  • pulling expensive invariant calculations out of loops
  • using for loops to scan a fucking table to pull rows or calculate an aggregate when they could let the database do what it does best with a simple SQL statement
  • systems crashing under actual load because their shitty code was never stress tested (but it worked on my dev box!)
  • again with databases, having to redo their schemas because they were fattened up so much with columns like VALUE1, VALUE2, ... VALUE20 (normalize you assholes!)
  • chatty remote APIs - because these code monkeys cannot think about the need for bulk operations in increasingly distributed systems
  • storing dates in unsortable strings because the idiots do not know most modern programming languages have a date data type

Oh, and the most important: off-by-one looping errors. I see this all the time, the type of thing a good programmer can spot quickly because he or she can do the so-called "lame monkey tests" that involve arrays and sorting.

I've seen the type: "I don't need to do this shit because I have business knowledge and I code for business and IT not google", and then they go and code and fuck it up... and then the rest of us have to go clean up their shit at 1AM or on weekends.

If you work as an hourly paid contractor cleaning that crap, it can be quite lucrative. But sooner or later it truly sucks the energy out of your soul.

So yeah, we need more lame monkey tests ... to filter the lame code monkeys.
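As a hedged shell illustration (not from the thread) of the first item in the list above, the invariant-in-the-loop mistake, with a made-up log-directory example:

# Anti-pattern: the expensive du runs on every pass although its result never changes.
for f in /var/log/*.log; do
    total=$(du -sh /var/log | cut -f1)
    echo "$f (log dir total: $total)"
done

# Fixed: the invariant calculation is hoisted out of the loop and computed once.
total=$(du -sh /var/log | cut -f1)
for f in /var/log/*.log; do
    echo "$f (log dir total: $total)"
done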

ShanghaiBill ( 739463 ) writes:
Re: ( Score: 3 )
Someone could Google the problem with the phone then step up and solve the challenge.

If, given a spec, someone can consistently cobble together working code by Googling, then I would love to hire them. That is the most productive way to get things done.

There is nothing wrong with using external references. When I am coding, I have three windows open: an editor, a testing window, and a browser with a Stackoverflow tab open.

Junta ( 36770 ) writes:
Re: ( Score: 2 )

Yeah, when we do tech interviews, we ask questions that we are certain they won't be able to answer, but we want to see how they would think about the problem, what questions they ask to get more data, and that they don't just fold up and say "well, that's not the sort of problem I'd be thinking of." The examples aren't made up or anything; they are generally a selection of real problems that were incredibly difficult, that our company had faced before, that one may not think at first glance such a position would

bobstreo ( 1320787 ) writes:
Nothing worse ( Score: 2 )

than spending weeks interviewing "good" candidates for an opening, selecting a couple and hiring them as contractors, then finding out they are less than unqualified to do the job they were hired for.

I've seen it a few times, Java "experts", Microsoft "experts" with years of experience on their resumes, but completely useless in coding, deployment or anything other than buying stuff from the break room vending machines.

That being said, I've also seen projects costing hundreds of thousands of dollars, with y

Anonymous Coward , Sunday April 29, 2018 @12:34AM ( #56522157 )
Re:Nothing worse ( Score: 4 , Insightful)

The moment you said "contractors", you lost any sane developer. Keep swimming, it's not a fish.

Anonymous Coward writes:
Re: ( Score: 2 , Informative)

I agree with this. I consider myself to be a good programmer and I would never go into the contractor game. I also wonder, how does it take you weeks to interview someone and still not figure out whether the person can code? I could probably see that in 15 minutes in a pair coding session.

Also, Oracle, SAP, IBM... I would never buy from them, nor use their products. I have used plenty of IBM products and they suck big time. They make software development 100 times harder than it could be. Their technical supp

Lanthanide ( 4982283 ) writes:
Re: ( Score: 2 )

It's weeks to interview multiple different candidates before deciding on 1 or 2 of them. Not weeks per person.

Anonymous Coward writes:
Re: ( Score: 3 , Insightful)
That being said, I've also seen projects costing hundreds of thousands of dollars, with years of delays from companies like Oracle, Sun, SAP, and many other "vendors"

Software development is a hard thing to do well, despite the general thinking of technology becoming cheaper over time, and like health care the quality of the goods and services received can sometimes be difficult to ascertain. However, people who don't respect developers and the problems we solve are very often the same ones who continually frustrate themselves by trying to cheap out, hiring outsourced contractors, and then tearing their hair out when sub par results are delivered, if anything is even del

pauljlucas ( 529435 ) writes:
Re: ( Score: 2 )

As part of your interview process, don't you have candidates code a solution to a problem on a whiteboard? I've interviewed lots of "good" candidates (on paper) too, but they crashed and burned when challenged with a coding exercise. As a result, we didn't make them job offers.

VeryFluffyBunny ( 5037285 ) writes:
I do the opposite ( Score: 2 )

I'm not a great coder but good enough to get done what clients want done. If I'm not sure or don't think I can do it, I tell them. I think they appreciate the honesty. I don't work in a tech-hub, startups or anything like that so I'm not under the same expectations and pressures that others may be.

Tony Isaac ( 1301187 ) writes:
Bigger building blocks ( Score: 2 )

OK, so yes, I know plenty of programmers who do fake it. But stitching together components isn't "fake" programming.

Back in the day, we had to write our own code to loop through an XML file, looking for nuggets. Now, we just use an XML serializer. Back then, we had to write our own routines to send TCP/IP messages back and forth. Now we just use a library.

I love it! I hated having to make my own bricks before I could build a house. Now, I can get down to the business of writing the functionality I want, ins

Anonymous Coward writes:
Re: ( Score: 2 , Insightful)

But, I suspect you could write the component if you had to. That makes you a very different user of that component than someone who just knows it as a magic black box.

Because of this, you understand the component better and have real knowledge of its strengths and limitations. People blindly using components with only a cursory idea of their internal operation often cause major performance problems. They rarely recognize when it is time to write their own to overcome a limitation (or even that it is possibl

Tony Isaac ( 1301187 ) writes:
Re: ( Score: 2 )

You're right on all counts. A person who knows how the innards work, is better than someone who doesn't, all else being equal. Still, today's world is so specialized that no one can possibly learn it all. I've never built a processor, as you have, but I still have been able to build a DNA matching algorithm for a major DNA lab.

I would argue that anyone who can skillfully use off-the-shelf components can also learn how to build components, if they are required to.

thesupraman ( 179040 ) writes:
Ummm. ( Score: 2 )

1. 'Back in the Day' there was no XML; XML was not very long ago.
2. It's a parser; a serialiser is pretty much the opposite (unless this week's fashion has redefined that... anything is possible).
3. 'Back then' we didn't have TCP stacks...

But, actually I agree with you. I can only assume the author thinks there are lots of fake plumbers because they don't cast their own toilet bowls from raw clay, and use pre-built fittings and pipes! That car mechanics start from raw steel scrap and a file... And that you need

Tony Isaac ( 1301187 ) writes:
Re: ( Score: 2 )

For the record, XML was invented in 1997, you know, in the last century! https://en.wikipedia.org/wiki/... [wikipedia.org]
And we had a WinSock library in 1992. https://en.wikipedia.org/wiki/... [wikipedia.org]

Yes, I agree with you on the "middle ground." My reaction was to the author's point that "not knowing how to build the components" was the same as being a "fake programmer."

Tony Isaac ( 1301187 ) , Sunday April 29, 2018 @01:46AM ( #56522313 ) Homepage
Re:Bigger building blocks ( Score: 5 , Interesting)

If I'm a plumber, and I don't know anything about the engineering behind the construction of PVC pipe, I can still be a good plumber. If I'm an electrician, and I don't understand the role of a blast furnace in the making of the metal components, I can still be a good electrician.

The analogy fits. If I'm a programmer, and I don't know how to make an LZW compression library, I can still be a good programmer. It's a matter of layers. These days, we specialize. You've got your low-level programmers that make the components, the high level programmers that put together the components, the graphics guys who do HTML/CSS, and the SQL programmers that just know about databases. Every person has their specialty. It's no longer necessary to be a low-level programmer, or jack-of-all-trades, to be "good."

If I don't know the layout of the IP header, I can still write quality networking software, and if I know XSLT, I can still do cool stuff with XML, even if I don't know how to write a good parser.

frank_adrian314159 ( 469671 ) writes:
Re: ( Score: 3 )

I was with you until you said " I can still do cool stuff with XML".

Tony Isaac ( 1301187 ) writes:
Re: ( Score: 2 )

LOL yeah I know it's all JSON now. I've been around long enough to see these fads come and go. Frankly, I don't see a whole lot of advantage of JSON over XML. It's not even that much more compact, about 10% or so. But the point is that the author laments the "bad old days" when you had to create all your own building blocks, and you didn't have a team of specialists. I for one don't want to go back to those days!

careysub ( 976506 ) writes:
Re: ( Score: 3 )

The main advantage of JSON is that it is consistent. XML has attributes and embedded optional stuff within tags. That was derived from its SGML ancestor, where it was thought to be a convenience for the human authors who were supposed to be making the mark-up manually. Programmatically it is a PITA.

Cederic ( 9623 ) writes:
Re: ( Score: 3 )

I got shit for decrying XML back when it was the trendy thing. I've had people apologise to me months later because they've realized I was right, even though at the time they did their best to fuck over my career because XML was the new big thing and I wasn't fully on board.

XML has its strengths and its place, but fuck me it taught me how little some people really fucking understand shit.

Anonymous Coward writes:
Silicon Valley is Only Part of the Tech Business ( Score: 2 , Informative)

And a rather small part at that, albeit a very visible and vocal one, full of the proverbial prima donnas. However, much of the rest of the tech business, or at least the people working in it, are not like that. It's small groups of developers working in other industries that would not typically be considered technology. There are software developers working for insurance companies, banks, hedge funds, oil and gas exploration or extraction firms, national defense and many hundreds and thousands of other small

phantomfive ( 622387 ) writes:
bonfire of fakers ( Score: 2 )

This is the reason I wish programming didn't pay so much....the field is better when it's mostly populated by people who enjoy programming.

Njovich ( 553857 ) , Sunday April 29, 2018 @05:35AM ( #56522641 )
Learn to code courses ( Score: 5 , Insightful)
They knew that well-paid programming jobs would also soon turn to smoke and ash, as the proliferation of learn-to-code courses around the world lowered the market value of their skills, and as advances in artificial intelligence allowed for computers to take over more of the mundane work of producing software.

Kind of hard to take this article seriously after it says gibberish like this. I would say most good programmers know that neither learn-to-code courses nor AI are going to make a dent in their income any time soon.

AndyKron ( 937105 ) writes:
Me? No ( Score: 2 )

As a non-programmer Arduino and libraries are my friends

Escogido ( 884359 ) , Sunday April 29, 2018 @06:59AM ( #56522777 )
in the silly cone valley ( Score: 5 , Interesting)

There is a huge shortage of decent programmers. I have personally witnessed more than one phone "interview" that went like "have you done this? what about this? do you know what this is? um, can you start Monday?" (120K-ish salary range)

Partly because there are way more people who got their stupid ideas funded than good coders willing to stain their resume with that. Partly because, if you are funded and cannot do all the required coding solo, here's your conundrum:

  • top level hackers can afford to be really picky, so on one hand it's hard to get them interested, and if you could get that, they often want some ownership of the project. the plus side is that they are happy to work for lots of equity if they have faith in the idea, but that can be a huge "if".
  • "good but not exceptional" senior engineers aren't usually going to be super happy, as they often have spouses and children and mortgages, so they'd favor job security over exciting ideas and startup lottery.
  • that leaves you with fresh-out-of-college folks, which are really really a mixed bunch. some are actually already senior level of understanding without the experience, some are absolutely useless, with varying degrees in between, and there's no easy way to tell which is which early.

So the not-so-scrupulous folks realized what's going on, and launched multiple coding boot camp programmes, to essentially trick both the students into believing they can become a coder in a month or two, and the prospective employers into believing that said students are useful. So far it's been working, to a degree, in part because in such companies the coding skill evaluation process is broken. But one can only hide their lack of value add for so long, even if they do manage to bluff their way into a job.

quonset ( 4839537 ) , Sunday April 29, 2018 @07:20AM ( #56522817 )
Duh! ( Score: 4 , Insightful)

All one had to do was look at the lousy state of software and web sites today to see this is true. It's quite obvious little to no thought is given on how to make something work such that one doesn't have to jump through hoops.

I have many times said the most perfect word processing program ever developed was WordPerfect 5.1 for DOS. One's productivity was astonishing. It just worked.

Now we have the bloated behemoth Word which does its utmost to get in the way of you doing your work. The only way to get it to function is to turn large portions of its "features" off, and even then it still insists on doing something other than what you told it to do.

Then we have the abomination of Windows 10, which is nothing but Clippy on 10X steroids. It is patently obvious the people who program this steaming pile have never heard of simplicity. Who in their right mind would think having to "search" for something is more efficient than going directly to it? I would ask whether these people wander around stores "searching" for what they're looking for, but then I realize that's how their entire life is run. They search for everything online rather than going directly to the source. It's no wonder they complain about not having time to do things. They're always searching.

Web sites are another area where these people have no clue what they're doing. Anything that might be useful is hidden behind dropdown menus, flyouts, popup bubbles and intricately designed mazes of clicks needed to get to where you want to go. When someone clicks on a line of products, they shouldn't be harassed about what part of the product line they want to look at. Give them the information and let the user go where they want.

This rant could go on, but this article explains clearly why we have regressed when it comes to software and web design. Instead of making things simple and easy to use, using the one or two brain cells they have, programmers and web designers let the software do what it wants without considering, should it be done like this?

swb ( 14022 ) , Sunday April 29, 2018 @07:48AM ( #56522857 )
Tech industry churn ( Score: 3 )

The tech industry has a ton of churn -- there's some technological advancement, but there's an awful lot of new products turned out simply to keep customers buying new licenses and paying for upgrades.

This relentless and mostly phony newness means a lot of people have little experience with current products. People fake because they have no choice. The good ones understand the general technologies and problems they're meant to solve and can generally get up to speed quickly, while the bad ones are good at faking it but don't really know what they're doing. Telling the difference from the outside is impossible.

Sales people make it worse, promoting people as "experts" in specific products or implementations because the people have experience with a related product and "they're all the same". This burns out the people with good adaption skills.

DaMattster ( 977781 ) , Sunday April 29, 2018 @08:39AM ( #56522979 )
Interesting ( Score: 3 )

From the summary, it sounds like a lot of programmers and software engineers are trying to develop the next big thing so that they can literally beg for money from the elite class and one day, hopefully, become a member of the aforementioned. It's sad how the middle class has been utterly decimated in the United States that some of us are willing to beg for scraps from the wealthy. I used to work in IT but I've aged out and am now back in school to learn automotive technology so that I can do something other than being a security guard. Currently, the only work I have been able to find has been in the unglamorous security field.

I am learning some really good new skills in the automotive program that I am in but I hate this one class called "Professionalism in the Shop." I can summarize the entire class in one succinct phrase, "Learn how to appeal to, and communicate with, Mr. Doctor, Mr. Lawyer, or Mr. Wealthy-man." Basically, the class says that we are supposed to kiss their ass so they keep coming back to the Audi, BMW, Mercedes, Volvo, or Cadillac dealership. It feels a lot like begging for money on behalf of my employer (of which very little of it I will see) and nothing like professionalism. Professionalism is doing the job right the first time, not jerking the customer off. Professionalism is not begging for a 5 star review for a few measly extra bucks but doing absolute top quality work. I guess the upshot is that this class will be the easiest 4.0 that I've ever seen.

There is something fundamentally wrong when the wealthy elite have basically demanded that we beg them for every little scrap. I can understand the importance of polite and professional interaction but this prevalent expectation that we bend over backwards for them crosses a line with me. I still suck it up because I have to but it chafes my ass to basically validate the wealthy man.

ElitistWhiner ( 79961 ) writes:
Natural talent... ( Score: 2 )

In the 70's I worked with two people who had a natural talent for computer science algorithms vs. coding syntax. In the 90's, while at COLUMBIA, I worked with only a couple of true computer scientists out of 30 students. I've met 1 genius who programmed, spoke 13 languages, ex-CIA, wrote SWIFT and spoke fluent assembly complete with animated characters.

According to the Bluff Book, everyone else without natural talent fakes it. In the undiluted definition of computer science, genetics roulette and intellectual d

fahrbot-bot ( 874524 ) writes:
Other book sells better and is more interesting ( Score: 2 )
New Book Describes 'Bluffing' Programmers in Silicon Valley

It's not as interesting as the one about "fluffing" [urbandictionary.com] programmers.

Anonymous Coward writes:
Re: ( Score: 3 , Funny)

Ah yes, the good old 80:20 rule, except it's recursive for programmers.

80% are shit, so you fire them. Soon you realize that 80% of the remaining 20% are also shit, so you fire them too. Eventually you realize that 80% of the 4% remaining after sacking the 80% of the 20% are also shit, so you fire them!

...

The cycle repeats until there's just one programmer left: the person telling the joke.

---

tl;dr: All programmers suck. Just ask them to review their own code from more than 3 years ago: they'll tell you that

luis_a_espinal ( 1810296 ) writes:
Re: ( Score: 3 )
Who gives a fuck about lines? If someone gave me JavaScript, and someone gave me minified JavaScript, which one would I want to maintain?

I don't care about your line savings, less isn't always better.

Because the world of programming is not centered on JavaScript, and reduction of lines is not the same as minification. If the first thing that came to your mind was minified JavaScript when you saw this conversation, you are certainly not the type of programmer I would want to inherit code from.

See, there's a lot of shit out there that is overtly redundant and unnecessarily complex. This is especially true when copy-n-paste code monkeys are left to their own devices, for whom code formatting seems

Anonymous Coward , Sunday April 29, 2018 @01:17AM ( #56522241 )
Re:Most "Professional programmers" are useless. ( Score: 4 , Interesting)

I have a theory that 10% of people are good at what they do. It doesn't really matter what they do, they will still be good at it, because of their nature. These are the people who invent new things, who fix things that others didn't even see as broken and who automate routine tasks or simply question and erase tasks that are not necessary. If you have a software team that contains 5 of these, you can easily beat a team of 100 average people, not only in cost but also in schedule, quality and features. In theory they are worth 20 times more than average employees, but in practice they are usually paid the same amount of money, with few exceptions.

80% of people are the average. They can follow instructions and they can get the work done, but they don't see that something is broken and needs fixing if it works the way it has always worked. While it might seem so, these people are not worthless. There are a lot of tasks that these people are happily doing which the 10% don't want to do. E.g. simple maintenance work, implementing simple features, automating test cases etc. But if you let the top 10% lead the project, you most likely won't need that many of these people. Most of the work done by these people is caused by themselves, by writing bad software due to the lack of a good leader.

10% are just causing damage. I'm not talking about terrorists and criminals. I have seen software developers who have tried (their best?), but still end up causing just damage to the code that someone else needs to fix, costing much more than their own wasted time. You really must use code reviews if you don't know your team members, to find these people early.

Anonymous Coward , Sunday April 29, 2018 @01:40AM ( #56522299 )
Re:Most "Professional programmers" are useless. ( Score: 5 , Funny)
to find these people early

and promote them to management where they belong.

raymorris ( 2726007 ) , Sunday April 29, 2018 @01:51AM ( #56522329 ) Journal
Seems about right. Constantly learning, studying ( Score: 5 , Insightful)

That seems about right to me.

I have a lot of weaknesses. My people skills suck, I'm scrawny, I'm arrogant. I'm also generally known as a really good programmer and people ask me how/why I'm so much better at my job than everyone else in the room. (There are a lot of things I'm not good at, but I'm good at my job, so say everyone I've worked with.)

I think one major difference is that I'm always studying, intentionally working to improve, every day. I've been doing that for twenty years.

I've worked with people who have "20 years of experience"; they've done the same job, in the same way, for 20 years. Their first month on the job they read the first half of "Databases for Dummies" and that's what they've been doing for 20 years. They never read the second half, and use Oracle database 18.0 exactly the same way they used Oracle Database 2.0 - and it was wrong 20 years ago too. So it's not just experience, it's 20 years of learning, getting better, every day. That's 7,305 days of improvement.

gbjbaanb ( 229885 ) writes:
Re: ( Score: 2 )

I think I can guarantee that they are a lot better at their jobs than you think, and that you are a lot worse at your job than you think too.

m00sh ( 2538182 ) writes:
Re: ( Score: 2 )
That seems about right to me.

I have a lot of weaknesses. My people skills suck, I'm scrawny, I'm arrogant. I'm also generally known as a really good programmer and people ask me how/why I'm so much better at my job than everyone else in the room. (There are a lot of things I'm not good at, but I'm good at my job, so say everyone I've worked with.)

I think one major difference is that I'm always studying, intentionally working to improve, every day. I've been doing that for twenty years.

I've worked with people who have "20 years of experience"; they've done the same job, in the same way, for 20 years. Their first month on the job they read the first half of "Databases for Dummies" and that's what they've been doing for 20 years. They never read the second half, and use Oracle database 18.0 exactly the same way they used Oracle Database 2.0 - and it was wrong 20 years ago too. So it's not just experience, it's 20 years of learning, getting better, every day. That's 7,305 days of improvement.

If you take this attitude towards other people, people will not ask you for help. At the same time, you will also not be able to ask for their help.

You're not interviewing your peers. They are already in your team. You should be working together.

I've seen superstar programmers suck the life out of project by over-complicating things and not working together with others.

raymorris ( 2726007 ) writes:
Which part? Learning makes you better? ( Score: 2 )

You quoted a lot. Is there one part exactly that you have in mind? The thesis of my post is of course "constant learning, on purpose, makes you better".

> If you take this attitude towards other people, people will not ask you for help. At the same time, you will also not be able to ask for their help.

Are you saying that trying to learn means you can't ask for help, or was there something more specific? For me, trying to learn means asking.

Trying to learn, I've had the opportunity to ask for help from peop

phantomfive ( 622387 ) writes:
Re: ( Score: 2 )

The difference between a smart programmer who succeeds and a stupid programmer who drops out is that the smart programmer doesn't give up.

complete loony ( 663508 ) writes:
Re: ( Score: 2 )

In other words;

What is often mistaken for 20 years' experience, is just 1 year's experience repeated 20 times.
serviscope_minor ( 664417 ) writes:
Re: ( Score: 2 )

10% are just causing damage. I'm not talking about terrorists and criminals.

Terrorists and criminals have nothing on those guys. I know a guy who is one of those. Worse, he's both motivated and enthusiastic. He also likes to offer help and advice to other people who don't know the systems well.

asifyoucare ( 302582 ) , Sunday April 29, 2018 @08:49AM ( #56522999 )
Re:Most "Professional programmers" are useless. ( Score: 5 , Insightful)

Good point. To quote Kurt von Hammerstein-Equord:

"I divide my officers into four groups. There are clever, diligent, stupid, and lazy officers. Usually two characteristics are combined. Some are clever and diligent -- their place is the General Staff. The next lot are stupid and lazy -- they make up 90 percent of every army and are suited to routine duties. Anyone who is both clever and lazy is qualified for the highest leadership duties, because he possesses the intellectual clarity and the composure necessary for difficult decisions. One must beware of anyone who is stupid and diligent -- he must not be entrusted with any responsibility because he will always cause only mischief."

gweihir ( 88907 ) writes:
Re: ( Score: 2 )

Oops. Good thing I never did anything military. I am definitely in the "clever and lazy" class.

apoc.famine ( 621563 ) writes:
Re: ( Score: 2 )

I was just thinking the same thing. One of my passions in life is coming up with clever ways to do less work while getting more accomplished.

Software_Dev_GL ( 5377065 ) writes:
Re: ( Score: 2 )

It's called the Pareto Distribution [wikipedia.org]. The number of competent people (people doing most of the work) in any given organization goes like the square root of the number of employees.

gweihir ( 88907 ) writes:
Re: ( Score: 2 )

Matches my observations. 10-15% are smart, can think independently, can verify claims by others and can identify and use rules in whatever they do. They are not fooled by things "everybody knows" and see standard approaches as first approximations that, of course, need to be verified to work. They do not trust anything blindly, but can identify whether something actually works well and build up a toolbox of such things.

The problem is that in coding, you do not have a "(mass) production step", and that is the

geoskd ( 321194 ) writes:
Re: ( Score: 2 )

In basic concept I agree with your theory, it fits my own anecdotal experience well, but I find that your numbers are off. The top bracket is actually closer to 20%. The reason it seems so low is that a large portion of the highly competent people are running one programmer shows, so they have no co-workers to appreciate their knowledge and skill. The places they work do a very good job of keeping them well paid and happy (assuming they don't own the company outright), so they rarely if ever switch jobs.

The

Tablizer ( 95088 ) , Sunday April 29, 2018 @01:54AM ( #56522331 ) Journal
Re:Most "Professional programmers" are useless. ( Score: 4 , Interesting)
at least 70, probably 80, maybe even 90 percent of professional programmers should just fuck off and do something else as they are useless at programming.

Programming is statistically a dead-end job. Why should anyone hone a dead-end skill that you won't be able to use for long? For whatever reason, the industry doesn't want old programmers.

Otherwise, I'd suggest longer training and education before they enter the industry. But that just narrows an already narrow window of use.

Cesare Ferrari ( 667973 ) writes:
Re: ( Score: 2 )

Well, it does rather depend on which industry you work in - I've managed to find interesting programming jobs for 25 years, and there's no end in sight for interesting projects and new avenues to explore. However, this isn't for everyone, and if you have good personal skills then moving from programming into some technical management role is a very worthwhile route, and I know plenty of people who have found very interesting work in that direction.

gweihir ( 88907 ) writes:
Re: ( Score: 3 , Insightful)

I think that is a misinterpretation of the facts. Old(er) coders that are incompetent are just much more obvious, and usually are also limited to technologies that have gotten old as well. Hence the 90% of old coders that cannot actually hack it, and never really could, get sacked at some point and cannot find a new job with their limited and outdated skills. The 10% that are good at it do not need to worry though. Who worries there is their employers, when these people approach retirement age.

gweihir ( 88907 ) writes:
Re: ( Score: 2 )

My experience as an IT Security Consultant (I also do some coding, but only at full rates) confirms that. Most are basically helpless and many have negative productivity, because people with a clue need to clean up after them. "Learn to code"? We have far too many coders already.

tomhath ( 637240 ) writes:
Re: ( Score: 2 )

You can't bluff you way through writing software, but many, many people have bluffed their way into a job and then tried to learn it from the people who are already there. In a marginally functional organization those incompetents are let go pretty quickly, but sometimes they stick around for months or years.

Apparently the author of this book is one of those, probably hired and fired several times before deciding to go back to his liberal arts roots and write a book.

DaMattster ( 977781 ) writes:
Re: ( Score: 2 )

There are some mechanics that bluff their way through an automotive repair. It's the same damn thing

gweihir ( 88907 ) writes:
Re: ( Score: 2 )

I think you can and this is by far not the first piece describing that. Here is a classic: https://blog.codinghorror.com/... [codinghorror.com]
Yet these people somehow manage to actually have "experience" because they worked in a role they are completely unqualified to fill.

phantomfive ( 622387 ) writes:
Re: ( Score: 2 )
Fiddling with JavaScript libraries to get a fancy dancy interface that makes PHB's happy is a sought-after skill, for good or bad. Now that we rely more on half-ass libraries, much of "programming" is fiddling with dark-grey boxes until they work good enough.

This drives me crazy, but I'm consoled somewhat by the fact that it will all be thrown out in five years anyway.

[Apr 29, 2018] How not to do system bare-metal backup with tar

He excluded /dev. This is a mistake if we are talking about bare metal recovery.
Apr 29, 2018 | serverfault.com

The backup is made with Tar. I backup the whole system into the Tar file.

If the HDD on my webserver dies, I got all my backups in a safe place.

But what would be the best way to do a Bare Metal Restore on a new HDD with a differential backup make the previous day? Can I boot with a boot cd, and then format a new HDD and untar the backup file into it? How do I do that exactly?

EDIT:

This is my backup script:

#!/bin/sh
# Backup script

BACKUPDIR="/backups"
BACKUPFILE=$BACKUPDIR/backup_$(date +%y-%m-%d).tgz

if [ ! -d $BACKUPDIR ]; then
        mkdir $BACKUPDIR
fi

if [ -f $BACKUPFILE ]; then
        echo "Backup file already exists and will be replaced."
        rm $BACKUPFILE
fi

apt-get clean

tar czpf $BACKUPFILE --same-owner \
--exclude=$BACKUPDIR \
--exclude=/boot/grub/menu.lst* \
--exclude=/home/error.log \
--exclude=/proc \
--exclude=/media \
--exclude=/dev/* \
--exclude=/mnt \
--exclude=/sys/* \
--exclude=/cdrom \
--exclude=/lost+found \
--exclude=/var/cache/* \
--exclude=/tmp / 2>/home/error.log
Simply restoring the HDD will not be enough; you'll probably want your boot record too, which I hardly believe exists in your backup (am I wrong? It's better for you if I am!)...

Let's assume you got the server to the point where it can boot (I personally prefer creating an additional partition mounted at /boot which holds the kernel and an initrd with busybox or something similar, to allow basic maintenance tasks). You can also use a live CD of your Linux distribution.

Mount your future root partition somewhere and restore your backup.

tar was created for tapes, so it supports appending to archive files with the same name. If you used this method, just run tar -xvpf backup.tar -C /mnt; if not, you'll need to restore the "last Sunday" full backup and apply the differential parts up to the needed day.

You should keep in mind that there is a lot of stuff that you should not back up, things like: /proc, /dev, /sys, /media, /mnt (and probably some more, which depend on your needs). You'll need to take care of it before creating the backup, or it may become a severe pain during the restore process!
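A rough sketch (not from this answer) of what such a restore can look like from a live CD; the device names, filesystem type and the grub commands are assumptions that must be adapted to the actual server:

mkfs.ext4 /dev/sda1                    # recreate the root filesystem on the new disk
mount /dev/sda1 /mnt
tar -xzpf /path/to/fullbackup.tgz -C /mnt --numeric-owner
mkdir -p /mnt/proc /mnt/sys /mnt/dev /mnt/mnt /mnt/media /mnt/tmp   # recreate excluded mount points
chmod 1777 /mnt/tmp
# the MBR/boot loader is not inside the tar archive, so reinstall it from a chroot
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt grub-install /dev/sda
chroot /mnt update-grub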

There are many points that you can easily miss with that backup method for a whole server:

Some good points on that exact method can be found on Ubuntu Wiki:BackupYourSystem/TAR . Look for Restoring.

BTW:

P.P.S

I recommend reading a couple of Jeff Atwood posts about backups: http://www.codinghorror.com/blog/2008/01/whats-your-backup-strategy.html and http://www.codinghorror.com/blog/2009/12/international-backup-awareness-day.html

[Apr 29, 2018] Bare-metal server restore using tar by Keith Winston

The idea of restoring only selected directories after creating a "skeleton" Linux OS from the Red Hat DVD is viable. But this is not an optimal bare-metal restore method with tar.
Apr 29, 2018 | www.linux.com

... ... ...

The backup tape from the previous night was still on site (our off-site rotations happen once a week). Once I restored the filelist.txt file, I browsed through the list to determine the order that the directories were written to the tape. Then, I placed that list in this restore script:

#!/bin/sh

# Restore everything
# This script restores all system files from tape.
#
# Initialize the tape drive
if /bin/mt -f "/dev/nst0" tell > /dev/null 2>&1
then
    # Rewind before restore
    /bin/mt -f "/dev/nst0" rewind > /dev/null 2>&1
else
    echo "Restore aborted: No tape loaded"
    exit 1
fi

# Do restore
# The directory order must match the order on the tape.
#
/bin/tar --extract --verbose --preserve --file=/dev/nst0 var etc root usr lib boot bin home sbin backup

# note: in many cases, these directories don't need to be restored:
# initrd opt misc tmp mnt

# Rewind tape when done
/bin/mt -f "/dev/nst0" rewind

In the script, the list of directories to restore is passed as parameters to tar. Just as in the backup script, it is important to use the
--preserve switch so that file permissions are restored to the way they were before the backup. I could have just restored the / directory, but
there were a couple of directories I wanted to exclude, so I decided to be explicit about what to restore. If you want to use this script for your own restores, be sure the list of directories matches the order they were backed up on your system.
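The filelist.txt mentioned earlier is not shown in the article; as a hedged sketch, one way to produce such a list is to rewind and list the tape right after the backup completes, which records the entries in archive order:

/bin/mt -f /dev/nst0 rewind
/bin/tar --list --file=/dev/nst0 > /backup/filelist.txt
/bin/mt -f /dev/nst0 rewind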

Although it is listed in the restore script, I removed the /boot directory from my restore, because I suspected my file system problem was related to a kernel upgrade I had done three days earlier. By not restoring the /boot directory, the system would continue to use the stock kernel that shipped on the CDs until I upgraded it. I also wanted to exclude the /tmp directory and a few other directories that I knew were not important.

The restore ran for a long time, but uneventfully. Finally, I rebooted the system, reloaded the MySQL databases from the dumps, and the system was fully restored and working perfectly. Just over four hours elapsed from total meltdown to complete restore. I probably could trim at least an hour off that time if I had to do it a second time.

Postmortem

I filed a bug report with Red Hat Bugzilla , but I could only provide log files from the day before the crash. All core files and logs from the day of the crash were lost when I tried to repair the file system. I exchanged posts with a Red Hat engineer, but we were not able to nail down the cause. I suspect the problem was either in the RAID driver code or ext3 code. I should note that the server is a relatively new HP ProLiant server with an Intel hyperthreaded Pentium 4 processor. Because the Linux kernel sees a hyperthreaded processor as a dual processor, I was using an SMP kernel when the problem arose. I reasoned that I might squeeze a few percentage points of performance out of the SMP kernel. This bug may only manifest when running on a hyperthreaded processor in SMP mode. I don't have a spare server to try to recreate it.

After the restore, I went back to the uniprocessor kernel and have not yet patched it back up to the level it had been. Happily, the ext3 error has not returned. I scan the logs every day, but it has been well over a month since the restore and there are still no signs of trouble. I am looking forward to my next full restore -- hopefully not until sometime in 2013.

[Apr 29, 2018] Clear unused space with zeros (ext3, ext4)

Notable quotes:
"... Purpose: I'd like to compress partition images, so filling unused space with zeros is highly recommended. ..."
"... Such an utility is zerofree . ..."
"... Be careful - I lost ext4 filesystem using zerofree on Astralinux (Debian based) ..."
"... If the "disk" your filesystem is on is thin provisioned (e.g. a modern SSD supporting TRIM, a VM file whose format supports sparseness etc.) and your kernel says the block device understands it, you can use e2fsck -E discard src_fs to discard unused space (requires e2fsprogs 1.42.2 or higher). ..."
"... If you have e2fsprogs 1.42.9, then you can use e2image to create the partition image without the free space in the first place, so you can skip the zeroing step. ..."
Apr 29, 2018 | unix.stackexchange.com

Grzegorz Wierzowiecki, Jul 29, 2012 at 10:02

How to clear unused space with zeros ? (ext3, ext4)

I'm looking for something smarter than

cat /dev/zero > /mnt/X/big_zero ; sync; rm /mnt/X/big_zero

Something like FSArchiver, which looks for "used space" and ignores unused space, but the opposite way around.

Purpose: I'd like to compress partition images, so filling unused space with zeros is highly recommended.

Btw. For btrfs : Clear unused space with zeros (btrfs)

Mat, Jul 29, 2012 at 10:18

Check this out: superuser.com/questions/19326/ – Mat Jul 29 '12 at 10:18

Totor, Jan 5, 2014 at 2:57

Two different kinds of answer are possible. What are you trying to achieve? Either 1) security, by forbidding someone to read that data, or 2) optimizing compression of the whole partition, or SSD performance ( en.wikipedia.org/wiki/Trim_(computing) )? – Totor Jan 5 '14 at 2:57

enzotib, Jul 29, 2012 at 11:45

Such a utility is zerofree.

From its description:

Zerofree finds the unallocated, non-zeroed blocks in an ext2 or ext3 file-system and fills them with zeroes. This is useful if the device on which this file-system resides is a disk image. In this case, depending on the type of disk image, a secondary utility may be able to reduce the size of the disk image after zerofree has been run. Zerofree requires the file-system to be unmounted or mounted read-only.

The usual way to achieve the same result (zeroing the unused blocks) is to run "dd" to create a file full of zeroes that takes up the entire free space on the drive, and then delete this file. This has many disadvantages, which zerofree alleviates:

  • it is slow
  • it makes the disk image (temporarily) grow to its maximal extent
  • it (temporarily) uses all free space on the disk, so other concurrent write actions may fail.

Zerofree has been written to be run from GNU/Linux systems installed as guest OSes inside a virtual machine. If this is not your case, you almost certainly don't need this package.

UPDATE #1

The description of the .deb package contains the following paragraph now which would imply this will work fine with ext4 too.

Description: zero free blocks from ext2, ext3 and ext4 file-systems Zerofree finds the unallocated blocks with non-zero value content in an ext2, ext3 or ext4 file-system and fills them with zeroes...
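A hedged usage sketch (the device name is an example); as the description says, the filesystem has to be unmounted or remounted read-only first:

umount /dev/sda1            # or: mount -o remount,ro /mountpoint
zerofree -v /dev/sda1       # -v prints progress; -n would do a dry run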

Grzegorz Wierzowiecki, Jul 29, 2012 at 14:08

Is it official page of the tool intgat.tigress.co.uk/rmy/uml/index.html ? Do you think it's safe to use with ext4 ? – Grzegorz Wierzowiecki Jul 29 '12 at 14:08

enzotib, Jul 29, 2012 at 14:12

@GrzegorzWierzowiecki: yes, that is the page, but for debian and friends it is already in the repos. I used on a ext4 partition on a virtual disk to successively shrink the disk file image, and had no problem. – enzotib Jul 29 '12 at 14:12

jlh, Mar 4, 2016 at 10:10

This isn't equivalent to the crude dd method in the original question, since it doesn't work on mounted file systems. – jlh Mar 4 '16 at 10:10

endolith, Oct 14, 2016 at 16:33

zerofree page talks about a patch that lets you do "filesystem is mounted with the zerofree option" so that it always zeros out deleted files continuously. does this require recompiling the kernel then? is there an easier way to accomplish the same thing? – endolith Oct 14 '16 at 16:33

Hubbitus, Nov 23, 2016 at 22:20

Be careful - I lost ext4 filesystem using zerofree on Astralinux (Debian based) – Hubbitus Nov 23 '16 at 22:20

Anon, Dec 27, 2015 at 17:53

Summary of the methods (as mentioned in this question and elsewhere) to clear unused space on ext2/ext3/ext4 -- zeroing unused space, with the file system either not mounted or mounted:

Having the filesystem unmounted will give better results than having it mounted. Discarding tends to be the fastest method when a lot of previously used space needs to be zeroed but using zerofree after the discard process can sometimes zero a little bit extra (depending on how discard is implemented on the "disk").

Making the image file smaller, when the image is in a dedicated VM format:

You will need to use an appropriate disk image tool (such as qemu-img convert src_image dst_image ) to enable the zeroed space to be reclaimed and to allow the file representing the image to become smaller.

Image is a raw file

One of the following techniques can be used to make the file sparse (so runs of zero stop taking up space):
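As hedged examples of such techniques (the file names are placeholders):

cp --sparse=always image.raw image-sparse.raw   # rewrite the file, skipping runs of zeros
fallocate --dig-holes image.raw                 # or punch holes in place where the file is all zeros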

These days it might be easier to use a tool like virt-sparsify to do these steps and more in one go.

Sources

cas, Jul 29, 2012 at 11:45

sfill from secure-delete can do this and several other related jobs.

e.g.

sfill -l -l -z /mnt/X
UPDATE #1

There is a source tree that appears to be used by the ArchLinux project on github that contains the source for sfill which is a tool included in the package Secure-Delete.

Also a copy of sfill 's man page is here:

cas, Jul 29, 2012 at 12:04

that URL is obsolete. no idea where its home page is now (or even if it still has one), but it's packaged for debian and ubuntu. probably other distros too. if you need source code, that can be found in the debian archives if you can't find it anywhere else. – cas Jul 29 '12 at 12:04

mwfearnley, Jul 31, 2017 at 13:04

The obsolete manpage URL is fixed now. Looks like "Digipedia" is no longer a thing. – mwfearnley Jul 31 '17 at 13:04

psusi, Apr 2, 2014 at 15:27

If you have e2fsprogs 1.42.9, then you can use e2image to create the partition image without the free space in the first place, so you can skip the zeroing step.
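A hedged sketch of that usage; the exact option spelling is an assumption based on the e2fsprogs 1.42.9 documentation and should be checked against the local man page:

# run against an unmounted (or read-only) filesystem; free blocks are simply not
# written, so the resulting raw image is sparse
e2image -r -a -p /dev/sda1 sda1.img    # -r raw image, -a include data blocks, -p progress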

mwfearnley, Mar 3, 2017 at 13:36

I couldn't (easily) find any info online about these parameters, but they are indeed given in the 1.42.9 release notes: e2fsprogs.sf.net/e2fsprogs-release.html#1.42.9 – mwfearnley Mar 3 '17 at 13:36

user64219, Apr 2, 2014 at 14:39

You can use sfill . It's a better solution for thin volumes.

Anthon, Apr 2, 2014 at 15:01

If you want to comment on cas answer, wait until you have enough reputation to do so. – Anthon Apr 2 '14 at 15:01

derobert, Apr 2, 2014 at 17:01

I think the answer is referring to manpages.ubuntu.com/manpages/lucid/man1/sfill.1.html ... which is at least an attempt at answering. ("online" in this case meaning "with the filesystem mounted", not "on the web"). – derobert Apr 2 '14 at 17:01

[Apr 28, 2018] GitHub - ch1x0r-LinuxRespin Fork of remastersys - updates

Apr 28, 2018 | github.com

Fork of remastersys - updates

This tool is used to backup your image, create distributions, create live cd/dvds. install respin

If you are using Ubuntu - Consider switching to Debian. This is NOT officially for Ubuntu. Debian.

We have an Ubuntu GUI version now. Thank you to the members of the Ubuntu Community for working with us!!! We were also featured in LinuxJournal! http://www.linuxjournal.com/content/5-minute-foss-spinning-custom-linux-distribution

Respin

For more information, please visit http://www.linuxrespin.org

See also: 5 Minute FOSS Spinning a custom Linux distribution Linux Journal by Petros Koutoupis on March 23, 2018

[Apr 28, 2018] Linux server backup using tar

Apr 28, 2018 | serverfault.com

I'm new to Linux backup.
I'm thinking of a full system backup of my Linux server using tar. I came up with the following code:

tar -zcvpf /archive/fullbackup.tar.gz \
--exclude=/archive \
--exclude=/mnt \
--exclude=/proc \
--exclude=/lost+found \
--exclude=/dev \
--exclude=/sys \
--exclude=/tmp \
/

and if in need of any hardware problem, restore it with

cd /
tar -zxpvf fullbackup.tar.gz

But does my above code back up the MBR and filesystem? Will the above code be enough to bring the same server back?

But does my above code back up MBR and filesystem?

Hennes

No. It backs up the contents of the filesystem.

Not the MBR, which is not a file but is contained in a sector outside the file systems. And not the filesystem itself with its potentially tweaked settings and/or errors, just the contents of the file system (granted, that is a minor difference).

and if in need of any hardware problem, restore it with

cd /
tar -zxpvf fullbackup.tar.gz

Will the above code be enough to bring the same server back?

Probably, as long as you use the same setup. The tarball will just contain the files, not the partition scheme used for the disks. So you will have to partition the disk in the same way. (Or copy the old partition scheme, e.g. with dd if=/dev/sda of=myMBRbackup bs=512 count=1 ).
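As a hedged companion to that note (not from the answer), the partition layout can also be saved and restored in text form with sfdisk, instead of or alongside the raw 512-byte MBR copy; device names are examples:

sfdisk -d /dev/sda > sda.partitions            # dump the partition table to a text file
sfdisk /dev/sda < sda.partitions               # recreate it on the replacement disk
# alternatively, write back the raw MBR copy made with dd above:
dd if=myMBRbackup of=/dev/sda bs=512 count=1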

Note that there are better ways to create backups, some of which already have been answered in other posts. Personally I would just backup the configuration and the data. Everything else is merely a matter of reinstalling. Possibly even with the latest version.

Also note that tar will back up all files. The first time, that is a good thing.

But if you run that weekly or daily you will get a lot of large backups. In that case look at rsync (which does incremental changes) or one of the many other options.

Rouben

Using tar to backup/restore a system is pretty rudimentary, and by that I mean that there are probably more elegant ways out there to back up your system... If you really want to stick to tar, here's a very good guide I found (it includes instructions on backing up the MBR; grub specifically): https://help.ubuntu.com/community/BackupYourSystem/TAR While it's on the Ubuntu wiki website, there's no reason why it wouldn't work on any UNIX/Linux machine.

You may also wish to check out this: https://help.ubuntu.com/community/BackupYourSystem

If you'd like something with a nice web GUI that's relatively straightforward to set up and use: http://backuppc.sourceforge.net/

Floyd Feb 14 '13 at 6:40

Using remastersys :

You can create live iso of your existing system. so install all the required packages on your ubuntu and then take a iso using remastersys. Then using startup disk, you can create bootable usb from this iso.

edit your /etc/apt/sources.list file. Add the following line in the end of the file.

deb http://www.remastersys.com/ubuntu precise main

Then run the following command:

sudo apt-get update

sudo apt-get install remastersys

sudo apt-get install remastersys-gui

sudo apt-get install remastersys-gtk

To run the remastersys in gui mode, type the following command:

sudo remastersys-gui

[Apr 28, 2018] How to properly backup your system using TAR

This is mostly incorrect if we are talking about bare metal restore ;-). Mostly correct for your data. The thinking is very primitive here, which is the trademark of Ubuntu.
Only some tips are useful: you are warned
Notable quotes:
"... Don't forget to empty your Wastebasket, remove any unwanted files in your /home ..."
"... Depending on why you're backing up, you might ..."
"... This will not create a mountable DVD. ..."
Apr 28, 2018 | www.lylebackenroth.com
Source: help.ubuntu.com

Preparing for backup

Just a quick note. You are about to back up your entire system. Don't forget to empty your Wastebasket, remove any unwanted files in your /home directory, and cleanup your desktop.

.... ... ...

[Apr 28, 2018] tar exclude single files/directories, not patterns

The important detail about this is that the excluded file name must match exactly the notation reported by the tar listing.
Apr 28, 2018 | stackoverflow.com

Udo G ,May 9, 2012 at 7:13

I'm using tar to make daily backups of a server and want to avoid backup of /proc and /sys system directories, but without excluding any directories named "proc" or "sys" somewhere else in the file tree.

For example, having the following directory tree ("bla" being normal files):

# find
.
./sys
./sys/bla
./foo
./foo/sys
./foo/sys/bla

I would like to exclude ./sys but not ./foo/sys .

I can't seem to find an --exclude pattern that does that...

# tar cvf /dev/null * --exclude=sys
foo/

or...

# tar cvf /dev/null * --exclude=/sys
foo/
foo/sys/
foo/sys/bla
sys/
sys/bla

Any ideas? (Linux Debian 6)

drinchev ,May 9, 2012 at 7:19

Are you sure there is no exclude? If you are using MAC OS it is a different story! Look heredrinchev May 9 '12 at 7:19

Udo G ,May 9, 2012 at 7:21

Not sure I understand your question. There is a --exclude option, but I don't know how to match it for single, absolute file names (not any file by that name) - see my examples above. – Udo G May 9 '12 at 7:21

paulsm4 ,May 9, 2012 at 7:22

Look here: stackoverflow.com/questions/984204/ – paulsm4 May 9 '12 at 7:22

CharlesB ,May 9, 2012 at 7:29

You can specify absolute paths to the exclude pattern, this way other sys or proc directories will be archived:
tar --exclude=/sys --exclude=/proc /

Udo G ,May 9, 2012 at 7:34

True, but the important detail about this is that the excluded file name must match exactly the notation reported by the tar listing. For my example that would be ./sys - as I just found out now. – Udo G May 9 '12 at 7:34
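A minimal sketch of that detail, reusing the layout from the question: when the archive is built from "." the member names carry a leading "./", so spelling the exclude the same way hits only the top-level sys, while ./foo/sys is still archived:

cd /the/directory/shown/above        # hypothetical path holding the tree from the question
tar cvf /dev/null --exclude=./sys --exclude=./proc .
# ./sys and ./proc are skipped, but any nested .../sys still shows up in the listing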

pjv ,Apr 9, 2013 at 18:14

In this case you might want to use:
--anchored --exclude=sys/\*

because in case your tar does not show the leading "/" you have a problem with the filter.

Savvas Radevic ,May 9, 2013 at 10:44

This did the trick for me, thank you! I wanted to exclude a specific directory, not all directories/subdirectories matching the pattern. bsdtar does not have "--anchored" option though, and with bsdtar we can use full paths to exclude specific folders. – Savvas Radevic May 9 '13 at 10:44

Savvas Radevic ,May 9, 2013 at 10:58

ah found it! in bsdtar the anchor is "^": bsdtar cvjf test.tar.bz2 --exclude myfile.avi --exclude "^myexcludedfolder" * – Savvas Radevic May 9 '13 at 10:58

Stephen Donecker ,Nov 8, 2012 at 19:12

Using tar you can exclude directories by placing a tag file in any directory that should be skipped.

Create tag files,

touch /sys/.exclude_from_backup
touch /proc/.exclude_from_backup

Then,

tar -czf backup.tar.gz --exclude-tag-all=.exclude_from_backup *

pjv ,Apr 9, 2013 at 17:58

Good idea in theory but often /sys and /proc cannot be written to. – pjv Apr 9 '13 at 17:58

[Apr 27, 2018] Shell command to tar directory excluding certain files-folders

Highly recommended!
Notable quotes:
"... Trailing slashes at the end of excluded folders will cause tar to not exclude those folders at all ..."
"... I had to remove the single quotation marks in order to exclude sucessfully the directories ..."
"... Exclude files using tags by placing a tag file in any directory that should be skipped ..."
"... Nice and clear thank you. For me the issue was that other answers include absolute or relative paths. But all you have to do is add the name of the folder you want to exclude. ..."
"... Adding a wildcard after the excluded directory will exclude the files but preserve the directories: ..."
"... You can use cpio(1) to create tar files. cpio takes the files to archive on stdin, so if you've already figured out the find command you want to use to select the files the archive, pipe it into cpio to create the tar file: ..."
Apr 27, 2018 | stackoverflow.com

deepwell ,Jun 11, 2009 at 22:57

Is there a simple shell command/script that supports excluding certain files/folders from being archived?

I have a directory that need to be archived with a sub directory that has a number of very large files I do not need to backup.

Not quite solutions:

The tar --exclude=PATTERN command matches the given pattern and excludes those files, but I need specific files & folders to be ignored (full file path), otherwise valid files might be excluded.

I could also use the find command to create a list of files, exclude the ones I don't want to archive, and pass the list to tar, but that only works for a small number of files. I have tens of thousands.

I'm beginning to think the only solution is to create a file with a list of files/folders to be excluded, then use rsync with --exclude-from=file to copy all the files to a tmp directory, and then use tar to archive that directory.

Can anybody think of a better/more efficient solution?

EDIT: cma 's solution works well. The big gotcha is that the --exclude='./folder' MUST be at the beginning of the tar command. Full command (cd first, so backup is relative to that directory):

cd /folder_to_backup
tar --exclude='./folder' --exclude='./upload/folder2' -zcvf /backup/filename.tgz .

Rekhyt ,May 1, 2012 at 12:55

Another thing caught me out on that, might be worth a note:

Trailing slashes at the end of excluded folders will cause tar to not exclude those folders at all. – Rekhyt May 1 '12 at 12:55

Brice ,Jun 24, 2014 at 16:06

I had to remove the single quotation marks in order to successfully exclude the directories. ( tar -zcvf gatling-charts-highcharts-1.4.6.tar.gz /opt/gatling-charts-highcharts-1.4.6 --exclude=results --exclude=target ) – Brice Jun 24 '14 at 16:06

Charles Ma ,Jun 11, 2009 at 23:11

You can have multiple exclude options for tar so
$ tar --exclude='./folder' --exclude='./upload/folder2' -zcvf /backup/filename.tgz .

etc will work. Make sure to put --exclude before the source and destination items.

shasi kanth ,Feb 27, 2015 at 10:49

As an example, if you are trying to backup your wordpress project folder, excluding the uploads folder, you can use this command:

tar -cvf wordpress_backup.tar wordpress --exclude=wp-content/uploads

Alfred Bez ,Jul 16, 2015 at 7:28

I came up with the following command: tar -zcv --exclude='file1' --exclude='pattern*' --exclude='file2' -f /backup/filename.tgz . note that the -f flag needs to precede the tar file see:

flickerfly ,Aug 21, 2015 at 16:22

A "/" on the end of the exclude directory will cause it to fail. I guess tar thinks an ending / is part of the directory name to exclude. BAD: --exclude=mydir/ GOOD: --exclude=mydir – flickerfly Aug 21 '15 at 16:22

NightKnight on Cloudinsidr.com ,Nov 24, 2016 at 9:55

> Make sure to put --exclude before the source and destination items. OR use an absolute path for the exclude: tar -cvpzf backups/target.tar.gz --exclude='/home/username/backups' /home/username – NightKnight on Cloudinsidr.com Nov 24 '16 at 9:55

Johan Soderberg ,Jun 11, 2009 at 23:10

To clarify, you can use full path for --exclude. – Johan Soderberg Jun 11 '09 at 23:10

Stephen Donecker ,Nov 8, 2012 at 0:22

Possible options to exclude files/directories from backup using tar:

Exclude files using multiple patterns

tar -czf backup.tar.gz --exclude=PATTERN1 --exclude=PATTERN2 ... /path/to/backup

Exclude files using an exclude file filled with a list of patterns

tar -czf backup.tar.gz -X /path/to/exclude.txt /path/to/backup

Exclude files using tags by placing a tag file in any directory that should be skipped

tar -czf backup.tar.gz --exclude-tag-all=exclude.tag /path/to/backup

Anish Ramaswamy ,May 16, 2015 at 0:11

This answer definitely helped me! The gotcha for me was that my command looked something like tar -czvf mysite.tar.gz mysite --exclude='./mysite/file3' --exclude='./mysite/folder3' , and this didn't exclude anything. – Anish Ramaswamy May 16 '15 at 0:11

Hubert ,Feb 22, 2017 at 7:38

Nice and clear, thank you. For me the issue was that other answers include absolute or relative paths. But all you have to do is add the name of the folder you want to exclude. – Hubert Feb 22 '17 at 7:38

GeertVc ,Dec 31, 2013 at 13:35

Just want to add to the above, that it is important that the directory to be excluded should NOT contain a trailing slash. So, --exclude='/path/to/exclude/dir' is CORRECT , --exclude='/path/to/exclude/dir/' is WRONG . – GeertVc Dec 31 '13 at 13:35

Eric Manley ,May 14, 2015 at 14:10

You can use standard "ant notation" to exclude directories relative.
This works for me and excludes any .git or node_module directories.
tar -cvf myFile.tar --exclude=**/.git/* --exclude=**/node_modules/*  -T /data/txt/myInputFile.txt 2> /data/txt/myTarLogFile.txt

myInputFile.txt Contains:

/dev2/java
/dev2/javascript

not2qubit ,Apr 4 at 3:24

I believe this requires that the Bash shell option variable globstar be enabled. Check with shopt -s globstar . I think it is off by default on most Unix-based OSes. From the Bash manual: " globstar: If set, the pattern ** used in a filename expansion context will match all files and zero or more directories and subdirectories. If the pattern is followed by a '/', only directories and subdirectories match. " – not2qubit Apr 4 at 3:24

Benoit Duffez ,Jun 19, 2016 at 21:14

Don't forget COPYFILE_DISABLE=1 when using tar, otherwise you may get ._ files in your tarball – Benoit Duffez Jun 19 '16 at 21:14

Scott Stensland ,Feb 12, 2015 at 20:55

This exclude pattern handles filename suffix like png or mp3 as well as directory names like .git and node_modules
tar --exclude={*.png,*.mp3,*.wav,.git,node_modules} -Jcf ${target_tarball}  ${source_dirname}

Alex B ,Jun 11, 2009 at 23:03

Use the find command in conjunction with the tar append (-r) option. This way you can add files to an existing tar in a single step, instead of a two pass solution (create list of files, create tar).
find /dir/dir -prune ... -o etc etc.... -exec tar rvf ~/tarfile.tar {} \;

carlo ,Mar 4, 2012 at 15:18

To avoid possible 'xargs: Argument list too long' errors due to the use of find ... | xargs ... when processing tens of thousands of files, you can pipe the output of find directly to tar using find ... -print0 | tar --null ... .
# archive a given directory, but exclude various files & directories 
# specified by their full file paths
find "$(pwd -P)" -type d \( -path '/path/to/dir1' -or -path '/path/to/dir2' \) -prune \
   -or -not \( -path '/path/to/file1' -or -path '/path/to/file2' \) -print0 | 
   gnutar --null --no-recursion -czf archive.tar.gz --files-from -
   #bsdtar --null -n -czf archive.tar.gz -T -

Znik ,Mar 4, 2014 at 12:20

you can quote the 'exclude' string, like this: 'somedir/filesdir/*' ; then the shell isn't going to expand the asterisks and other special characters.

Tuxdude ,Nov 15, 2014 at 5:12

xargs -n 1 is another option to avoid xargs: Argument list too long error ;) – Tuxdude Nov 15 '14 at 5:12

Aaron Votre ,Jul 15, 2016 at 15:56

I agree the --exclude flag is the right approach.
$ tar --exclude='./folder_or_file' --exclude='file_pattern' --exclude='fileA'

A word of warning for a side effect that I did not find immediately obvious: The exclusion of 'fileA' in this example will search for 'fileA' RECURSIVELY!

Example: a directory with a single subdirectory containing a file of the same name (data.txt)

data.txt
config.txt
--+dirA
  |  data.txt
  |  config.docx
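If the intent is to exclude only the top-level data.txt while keeping dirA/data.txt, anchoring the pattern to the directory being archived usually does the trick (a sketch for GNU tar; the archive name is made up):

tar -czf backup.tar.gz --exclude=./data.txt .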

Mike ,May 9, 2014 at 21:26

After reading this thread, I did a little testing on RHEL 5 and here are my results for tarring up the abc directory:

This will exclude the directories error and logs and all files under the directories:

tar cvpzf abc.tgz abc/ --exclude='abc/error' --exclude='abc/logs'

Adding a wildcard after the excluded directory will exclude the files but preserve the directories:

tar cvpzf abc.tgz --exclude='abc/error/*' --exclude='abc/logs/*' abc/

camh ,Jun 12, 2009 at 5:53

You can use cpio(1) to create tar files. cpio takes the files to archive on stdin, so if you've already figured out the find command you want to use to select the files to archive, pipe it into cpio to create the tar file:
find ... | cpio -o -H ustar | gzip -c > archive.tar.gz

frommelmak ,Sep 10, 2012 at 14:08

You can also use one of the "--exclude-tag" options depending on your needs:

The folder hosting the specified FILE will be excluded.

Joe ,Jun 11, 2009 at 23:04

Your best bet is to use find with tar, via xargs (to handle the large number of arguments). For example:
find / -print0 | xargs -0 tar cjf tarfile.tar.bz2

jørgensen ,Mar 4, 2012 at 15:23

That can cause tar to be invoked multiple times - and will also pack files repeatedly. Correct is: find / -print0 | tar -T- --null --no-recursion -cjf tarfile.tar.bz2 – jørgensen Mar 4 '12 at 15:23

Stphane ,Dec 19, 2015 at 11:10

I read somewhere that when using xargs , one should use the tar r option instead of c , because when find actually finds loads of results, xargs will split those results (based on the local command-line argument limit) into chunks and invoke tar on each part. This will result in an archive containing only the last chunk returned by xargs and not all results found by the find command. – Stphane Dec 19 '15 at 11:10

Andrew ,Apr 14, 2014 at 16:21

With GNU tar v1.26, the --exclude needs to come after the archive file and backup directory arguments, should have no leading or trailing slashes, and prefers no quotes (single or double). So relative to the PARENT directory to be backed up, it's:

tar cvfz /path_to/mytar.tgz ./dir_to_backup --exclude=some_path/to_exclude

Ashwini Gupta ,Jan 12 at 10:30

tar -cvzf destination_folder source_folder -X /home/folder/excludes.txt

-X indicates a file which contains a list of filenames that must be excluded from the backup. For instance, you can specify *~ in this file to exclude any filenames ending with ~ from the backup.
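For illustration, a hypothetical /home/folder/excludes.txt could simply list one pattern per line:

*~
*.log
wp-content/uploads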

Georgios ,Sep 4, 2013 at 22:35

Possible redundant answer but since I found it useful, here it is:

While root on FreeBSD (i.e., using csh) I wanted to copy my whole root filesystem to /mnt but without /usr and (obviously) /mnt. This is what worked (I am at /):

tar --exclude ./usr --exclude ./mnt --create --file - . | (cd /mnt && tar xvf -)

My whole point is that it was necessary (by putting the ./ ) to specify to tar that the excluded directories were part of the greater directory being copied.

My €0.02

user2792605 ,Sep 30, 2013 at 20:07

I had no luck getting tar to exclude a 5 Gigabyte subdirectory a few levels deep. In the end, I just used the Unix zip command. It was a lot easier for me.

So for this particular example from the original post
(tar --exclude='./folder' --exclude='./upload/folder2' -zcvf /backup/filename.tgz . )

The equivalent would be:

zip -r /backup/filename.zip . -x upload/folder/**\* upload/folder2/**\*

(NOTE: Here is the post I originally used that helped me https://superuser.com/questions/312301/unix-zip-directory-but-excluded-specific-subdirectories-and-everything-within-t )

t0r0X ,Sep 29, 2014 at 20:25

Beware: zip does not pack empty directories, but tar does! – t0r0X Sep 29 '14 at 20:25

RohitPorwal ,Jul 21, 2016 at 9:56

Check it out
tar cvpzf zip_folder.tgz . --exclude=./public --exclude=./tmp --exclude=./log --exclude=fileName

James ,Oct 28, 2016 at 14:01

The following bash script should do the trick. It uses the answer given here by Marcus Sundman.
#!/bin/bash

echo -n "Please enter the name of the tar file you wish to create without extension "
read nam

echo -n "Please enter the path to the directories to tar "
read pathin

echo tar -czvf $nam.tar.gz
excludes=`find $pathin -iname "*.CC" -exec echo "--exclude \'{}\'" \;|xargs`
echo $pathin

echo tar -czvf $nam.tar.gz $excludes $pathin

This will print out the command you need and you can just copy and paste it back in. There is probably a more elegant way to provide it directly to the command line.

Just change *.CC for any other common extension, file name or regex you want to exclude and this should still work.

EDIT

Just to add a little explanation: find generates a list of files matching the chosen pattern (in this case *.CC). This list is passed via xargs to the echo command. This prints --exclude 'one entry from the list'. The backslashes (\) are escape characters for the ' marks.

tripleee ,Sep 14, 2017 at 4:27

Requiring interactive input is a poor design choice for most shell scripts. Make it read command-line parameters instead and you get the benefit of the shell's tab completion, history completion, history editing, etc. – tripleee Sep 14 '17 at 4:27

tripleee ,Sep 14, 2017 at 4:38

Additionally, your script does not work for paths which contain whitespace or shell metacharacters. You should basically always put variables in double quotes unless you specifically require the shell to perform whitespace tokenization and wildcard expansion. For details, please see stackoverflow.com/questions/10067266/ – tripleee Sep 14 '17 at 4:38

> ,Apr 18 at 0:31

For those who have issues with it, some versions of tar only work properly without the './' in the exclude value.
$ tar --version

tar (GNU tar) 1.27.1

Command syntax that work:

tar -czvf ../allfiles-butsome.tar.gz * --exclude=acme/foo

These will not work:

$ tar -czvf ../allfiles-butsome.tar.gz * --exclude=./acme/foo
$ tar -czvf ../allfiles-butsome.tar.gz * --exclude='./acme/foo'
$ tar --exclude=./acme/foo -czvf ../allfiles-butsome.tar.gz *
$ tar --exclude='./acme/foo' -czvf ../allfiles-butsome.tar.gz *
$ tar -czvf ../allfiles-butsome.tar.gz * --exclude=/full/path/acme/foo
$ tar -czvf ../allfiles-butsome.tar.gz * --exclude='/full/path/acme/foo'
$ tar --exclude=/full/path/acme/foo -czvf ../allfiles-butsome.tar.gz *
$ tar --exclude='/full/path/acme/foo' -czvf ../allfiles-butsome.tar.gz *

[Apr 26, 2018] How to create a Bash completion script

Notable quotes:
"... now, tomorrow, never ..."
Apr 26, 2018 | opensource.com

Bash completion is a functionality through which Bash helps users type their commands more quickly and easily. It does this by presenting possible options when users press the Tab key while typing a command.

$ git <tab><tab>
git git-receive-pack git-upload-archive
gitk git-shell git-upload-pack
$ git-s <tab>
$ git-shell

How it works


The completion script is code that uses the builtin Bash command complete to define which completion suggestions can be displayed for a given executable. The nature of the completion options varies, from simple and static to highly sophisticated. Why bother?

This functionality helps users by:

Hands-on

Here's what we will do in this tutorial:

We will first create a dummy executable script called dothis . All it does is execute the command that resides on the number that was passed as an argument in the user's history. For example, the following command will simply execute the ls -a command, given that it exists in history with number 235 :

dothis 235

Then we will create a Bash completion script that will display commands along with their number from the user's history, and we will "bind" it to the dothis executable.

$ dothis <tab><tab>
215 ls
216 ls -la
217 cd ~
218 man history
219 git status
220 history | cut -c 8-

You can see a gif demonstrating the functionality at this tutorial's code repository on GitHub .

Let the show begin.

Creating the executable script

Create a file named dothis in your working directory and add the following code:

if [ -z "$1" ]; then
  echo "No command number passed"
  exit 2
fi

exists=$(fc -l -1000 | grep "^$1" -- 2>/dev/null)

if [ -n "$exists" ]; then
  fc -s -- "$1"
else
  echo "Command with number $1 was not found in recent history"
  exit 2
fi

Notes:

Make the script executable with:

chmod +x ./dothis

We will execute this script many times in this tutorial, so I suggest you place it in a folder that is included in your path so that we can access it from anywhere by typing dothis .

I installed it in my home bin folder using:

install ./dothis ~/bin/dothis

You can do the same given that you have a ~/bin folder and it is included in your PATH variable.
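If ~/bin is not already on your PATH, a line like the following in your ~/.bashrc takes care of it (standard Bash; nothing specific to this tutorial):

export PATH="$HOME/bin:$PATH"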

Check to see if it's working:

dothis

You should see this:

$ dothis
No command number passed

Done.

Creating the completion script

Create a file named dothis-completion.bash . From now on, we will refer to this file with the term completion script .

Once we add some code to it, we will source it to allow the completion to take effect. We must source this file every single time we change something in it .

Later in this tutorial, we will discuss our options for registering this script whenever a Bash shell opens.

Static completion

Suppose that the dothis program supported a list of commands, for example:

Let's use the complete command to register this list for completion. To use the proper terminology, we say we use the complete command to define a completion specification ( compspec ) for our program.

Add this to the completion script.

#!/usr/bin/env bash
complete -W "now tomorrow never" dothis

Here's what we specified with the complete command above:

Source the file:

source ./dothis-completion.bash

Now try pressing Tab twice in the command line, as shown below:

$ dothis <tab><tab>
never now tomorrow

Try again after typing the n :

$ dothis n <tab><tab>
never now

Magic! The completion options are automatically filtered to match only those starting with n .

Note: The options are not displayed in the order that we defined them in the word list; they are automatically sorted.

There are many other options to be used instead of the -W that we used in this section. Most produce completions in a fixed manner, meaning that we don't intervene dynamically to filter their output.

For example, if we want to have directory names as completion words for the dothis program, we would change the complete command to the following:

complete -A directory dothis

Pressing Tab after the dothis program would get us a list of the directories in the current directory from which we execute the script:

$ dothis <tab><tab>
dir1/ dir2/ dir3/

Find the complete list of the available flags in the Bash Reference Manual .

Dynamic completion

We will be producing the completions of the dothis executable with the following logic:

Let's start by defining a function that will execute each time the user requests completion on a dothis command. Change the completion script to this:

#!/usr/bin/env bash
_dothis_completions()
{
  COMPREPLY+=("now")
  COMPREPLY+=("tomorrow")
  COMPREPLY+=("never")
}

complete -F _dothis_completions dothis

Note the following:

Now source the script and go for completion:

$ dothis <tab><tab>
never now tomorrow

Perfect. We produce the same completions as in the previous section with the word list. Or not? Try this:

$ dothis nev <tab><tab>
never now tomorrow

As you can see, even though we type nev and then request for completion, the available options are always the same and nothing gets completed automatically. Why is this happening?

Enter compgen : a builtin command that generates completions supporting most of the options of the complete command (ex. -W for word list, -d for directories) and filtering them based on what the user has already typed.

Don't worry if you feel confused; everything will become clear later.

Type the following in the console to better understand what compgen does:

$ compgen -W "now tomorrow never"
now
tomorrow
never
$ compgen -W "now tomorrow never" n
now
never
$ compgen -W "now tomorrow never" t
tomorrow

So now we can use it, but we need a way to know what has been typed after the dothis command. We already have one: the Bash completion facilities provide Bash variables related to the completion taking place. Here are the more important ones: COMP_WORDS, an array of all the words on the current command line; COMP_CWORD, the index in COMP_WORDS of the word the cursor is on; and COMP_LINE, the current command line.

To access the word just after the dothis word, we can use the value of COMP_WORDS[1].
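If you want to inspect these variables yourself, one quick trick is to register a throwaway completion function that appends them to a file (echoing from a completion function would garble the prompt). The function name, log path, and the dbg command it is bound to are all made up for this sketch:

_debug_completions()
{
  # dump the completion context so it can be examined in another terminal
  printf 'COMP_WORDS: %s | COMP_CWORD: %s | COMP_LINE: %s\n' \
    "${COMP_WORDS[*]}" "$COMP_CWORD" "$COMP_LINE" >> /tmp/completion-debug.log
}

complete -F _debug_completions dbg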

Change the completion script again:

#!/usr/bin/env bash
_dothis_completions()
{
  COMPREPLY=($(compgen -W "now tomorrow never" "${COMP_WORDS[1]}"))
}

complete -F _dothis_completions dothis

Source, and there you are:

$ dothis <tab><tab>
never now tomorrow
$ dothis n <tab><tab>
never now

Now, instead of the words now, tomorrow, never , we would like to see actual numbers from the command history.

The fc -l command followed by a negative number -n displays the last n commands. So we will use:

fc -l -50

which lists the last 50 executed commands along with their numbers. The only manipulation we need to do is replace tabs with spaces to display them properly from the completion mechanism. sed to the rescue.

Change the completion script as follows:

#!/usr/bin/env bash
_dothis_completions()
{
  COMPREPLY=($(compgen -W "$(fc -l -50 | sed 's/\t/ /')" -- "${COMP_WORDS[1]}"))
}

complete -F _dothis_completions dothis

Source and test in the console:

$ dothis <tab><tab>
632 source dothis-completion.bash 649 source dothis-completion.bash 666 cat ~/.bash_profile
633 clear 650 clear 667 cat ~/.bashrc
634 source dothis-completion.bash 651 source dothis-completion.bash 668 clear
635 source dothis-completion.bash 652 source dothis-completion.bash 669 install ./dothis ~/bin/dothis
636 clear 653 source dothis-completion.bash 670 dothis
637 source dothis-completion.bash 654 clear 671 dothis 6546545646
638 clear 655 dothis 654 672 clear
639 source dothis-completion.bash 656 dothis 631 673 dothis
640 source dothis-completion.bash 657 dothis 150 674 dothis 651
641 source dothis-completion.bash 658 dothis 675 source dothis-completion.bash
642 clear 659 clear 676 dothis 651
643 dothis 623 ls -la 660 dothis 677 dothis 659
644 clear 661 install ./dothis ~/bin/dothis 678 clear
645 source dothis-completion.bash 662 dothis 679 dothis 665
646 clear 663 install ./dothis ~/bin/dothis 680 clear
647 source dothis-completion.bash 664 dothis 681 clear
648 clear 665 cat ~/.bashrc

Not bad.

We do have a problem, though. Try typing a number as you see it in your completion list and then press the Tab key again.

$ dothis 623 <tab>
$ dothis 623 ls 623 ls -la
...
$ dothis 623 ls 623 ls 623 ls 623 ls 623 ls -la

This is happening because in our completion script, we used the ${COMP_WORDS[1]} to always check the first typed word after the dothis command (the number 623 in the above snippet). Hence the completion continues to suggest the same completion again and again when the Tab key is pressed.

To fix this, we will not allow any kind of completion to take place if at least one argument has already been typed. We will add a condition in our function that checks the size of the aforementioned COMP_WORDS array.

#!/usr/bin/env bash
_dothis_completions()
{
  if [ "${#COMP_WORDS[@]}" != "2" ]; then
    return
  fi

  COMPREPLY=($(compgen -W "$(fc -l -50 | sed 's/\t/ /')" -- "${COMP_WORDS[1]}"))
}

complete -F _dothis_completions dothis

Source and retry.

$ dothis 623 <tab>
$ dothis 623 ls -la <tab> # SUCCESS: nothing happens here

There is another thing we don't like, though. We do want to display the numbers along with the corresponding commands to help users decide which one is desired, but when there is only one completion suggestion and it gets automatically picked by the completion mechanism, we shouldn't append the command literal too .

In other words, our dothis executable accepts only a number, and we haven't added any functionality to check or expect other arguments. When our completion function gives only one result, we should trim the command literal and respond only with the command number.

To accomplish this, we will keep the response of the compgen command in an array variable, and if its size is 1 , we will trim the one and only element to keep just the number. Otherwise, we'll leave the array as is.

Change the completion script to this:

#!/usr/bin/env bash
_dothis_completions()
{
  if [ "${#COMP_WORDS[@]}" != "2" ]; then
    return
  fi

  # keep the suggestions in a local variable
  local suggestions=($(compgen -W "$(fc -l -50 | sed 's/\t/ /')" -- "${COMP_WORDS[1]}"))

  if [ "${#suggestions[@]}" == "1" ]; then
    # if there's only one match, we remove the command literal
    # to proceed with the automatic completion of the number
    local number=$(echo ${suggestions[0]/%\ */})
    COMPREPLY=("$number")
  else
    # more than one suggestion resolved,
    # respond with the suggestions intact
    COMPREPLY=("${suggestions[@]}")
  fi
}

complete -F _dothis_completions dothis

Registering the completion script

If you want to enable the completion just for you on your machine, all you have to do is add a line in your .bashrc file sourcing the script:

source <path-to-your-script>/dothis-completion.bash

If you want to enable the completion for all users, you can just copy the script under /etc/bash_completion.d/ and it will automatically be loaded by Bash.
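For example, a system-wide installation could look like this (the target file name is your choice; the directory is the one mentioned above):

sudo install -m 0644 dothis-completion.bash /etc/bash_completion.d/dothis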

Fine-tuning the completion script

Here are some extra steps for better results:

Displaying each entry in a new line

In the Bash completion script I was working on, I too had to present suggestions consisting of two parts. I wanted to display the first part in the default color and the second part in gray to distinguish it as help text. In this tutorial's example, it would be nice to present the numbers in the default color and the command literal in a less fancy one.

Unfortunately, this is not possible, at least for now, because the completions are displayed as plain text and color directives are not processed (for example: \e[34mBlue ).

What we can do to improve the user experience (or not) is to display each entry in a new line. This solution is not that obvious since we can't just append a new line character in each COMPREPLY entry. We will follow a rather hackish method and pad suggestion literals to a width that fills the terminal.

Enter printf. If you want to display each suggestion on its own line, change the completion script to the following:

#!/usr/bin/env bash
_dothis_completions()
{
  if [ "${#COMP_WORDS[@]}" != "2" ]; then
    return
  fi

  local IFS=$'\n'
  local suggestions=($(compgen -W "$(fc -l -50 | sed 's/\t/ /')" -- "${COMP_WORDS[1]}"))

  if [ "${#suggestions[@]}" == "1" ]; then
    local number="${suggestions[0]/%\ */}"
    COMPREPLY=("$number")
  else
    for i in "${!suggestions[@]}"; do
      suggestions[$i]="$(printf '%*s' "-$COLUMNS" "${suggestions[$i]}")"
    done

    COMPREPLY=("${suggestions[@]}")
  fi
}

complete -F _dothis_completions dothis

Source and test:

dothis <tab><tab>
...
499 source dothis-completion.bash
500 clear
...
503 dothis 500

Customizable behavior

In our case, we hard-coded the completion to display the last 50 commands. That is not good practice: we should first respect what each user might prefer, and default to 50 only when no preference has been set.

To accomplish that, we will check if an environment variable DOTHIS_COMPLETION_COMMANDS_NUMBER has been set.

Change the completion script one last time:

#!/usr/bin/env bash
_dothis_completions()
{
  if [ "${#COMP_WORDS[@]}" != "2" ]; then
    return
  fi

  local commands_number=${DOTHIS_COMPLETION_COMMANDS_NUMBER:-50}
  local IFS=$'\n'
  local suggestions=($(compgen -W "$(fc -l -$commands_number | sed 's/\t/ /')" -- "${COMP_WORDS[1]}"))

  if [ "${#suggestions[@]}" == "1" ]; then
    local number="${suggestions[0]/%\ */}"
    COMPREPLY=("$number")
  else
    for i in "${!suggestions[@]}"; do
      suggestions[$i]="$(printf '%*s' "-$COLUMNS" "${suggestions[$i]}")"
    done

    COMPREPLY=("${suggestions[@]}")
  fi
}

complete -F _dothis_completions dothis

Source and test:

export DOTHIS_COMPLETION_COMMANDS_NUMBER=5
$ dothis <tab><tab>
505 clear
506 source ./dothis-completion.bash
507 dothis clear
508 clear
509 export DOTHIS_COMPLETION_COMMANDS_NUMBER=5

Code and comments

You can find the code of this tutorial on GitHub .

For feedback, comments, typos, etc., please open an issue in the repository.

Lazarus Lazaridis - I am an open source enthusiast and I like helping developers with tutorials and tools . I usually code in Ruby especially when it's on Rails but I also speak Java, Go, bash & C#. I have studied CS at Athens University of Economics and Business and I live in Athens, Greece. My nickname is iridakos and I publish tech related posts on my personal blog iridakos.com .

[Apr 26, 2018] Bash Range How to iterate over sequences generated on the shell Linux Hint by Fahmida Yesmin

Notable quotes:
"... When only upper limit is used then the number will start from 1 and increment by one in each step. ..."
Apr 26, 2018 | linuxhint.com

Bash Range: How to iterate over sequences generated on the shell

You can iterate over a sequence of numbers in bash in two ways: by using the seq command, or by specifying a range in a for loop. With the seq command, the sequence starts from one by default, the number increments by one in each step, and each number is printed on its own line, up to the upper limit. If the sequence starts from the upper limit, it decrements by one in each step. Normally all numbers are interpreted as floating point, but if the sequence starts from an integer, a list of decimal integers is printed. If the seq command executes successfully it returns 0; otherwise it returns a non-zero number. You can also iterate over a sequence of numbers using a for loop with a range. Both the seq command and the for loop with a range are shown in this tutorial with examples.

The options of the seq command:

You can use the seq command with the following options.

Examples of seq command:

You can apply the seq command in three ways: with only an upper limit; with an upper and a lower limit; or with an upper and lower limit plus an increment or decrement value for each step. Different uses of the seq command with options are shown in the following examples.

Example-1: seq command without option

When only an upper limit is used, the sequence starts from 1 and increments by one in each step. The following command will print the numbers from 1 to 4.

$ seq 4

When two values are used with the seq command, the first value is used as the starting number and the second as the ending number. The following command will print the numbers from 7 to 15.

$ seq 7 15

When you use three values with the seq command, the second value is used as the increment or decrement for each step. For the following command, the starting number is 10, the ending number is 1, and each step decrements by 2.

$ seq 10 -2 1
Example-2: seq with -w option

The following command will print the output with a leading zero added to the numbers from 1 to 9.

$ seq -w 0110
Example-3: seq with -s option

The following command uses "-" as the separator for the sequence numbers; the sequence will be printed with "-" between each number.

$ seq -s - 8

Example-4: seq with -f option

The following command will print 10 date-like values starting from 1. Here, the "%g" format specifier is used to combine the sequence number with another string value.

$ seq -f "%g/04/2018" 10
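With GNU coreutils seq, that produces output along these lines:

1/04/2018
2/04/2018
3/04/2018
...
10/04/2018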

The following command generates a sequence of floating-point numbers using "%f". Here, the numbers start from 3, increment by 0.8 in each step, and the last number is less than or equal to 6.

$ seq -f "%f" 3 0.8 6

Example-5: Write the sequence in a file

If you want to save the sequence of numbers to a file without printing to the console, you can use the following commands. The first command writes the numbers to a file named "seq.txt". The numbers are generated from 5 to 20, incrementing by 10 in each step. The second command views the content of the "seq.txt" file.

seq 5 10 20 | cat > seq.txt
cat seq.txt

Example-6: Using seq in for loop

Suppose you want to create files named fn1 to fn10 using a for loop with seq. Create a file named "sq1.bash" and add the following code. The for loop will iterate 10 times using the seq command and create 10 files named fn1, fn2, fn3, ..., fn10.

#!/bin/bash
for i in `seq 10`; do touch fn$i; done

Run the following commands to execute the code in the bash file and check whether the files were created.

bash sq1.bash
ls

Examples of for loop with range:

Example-7: For loop with range

An alternative to the seq command is a range. You can use a range in a for loop to generate a sequence of numbers just like seq. Write the following code in a bash file named "sq2.bash". The loop will iterate 5 times and print the square of each number in each step.

#!/bin/bash
for n in {1..5}; do
  ((result=n*n))
  echo $n square = $result
done

Run the following command to execute the script:

bash sq2.bash

Example-8: For loop with range and increment value

By default, the number increments by one in each step of a range, just like seq. You can also change the increment value in a range. Write the following code in a bash file named "sq3.bash". The for loop in the script will iterate 5 times, with each step incremented by 2, and print all odd numbers between 1 and 10.

#!/bin/bash
echo "all odd numbers from 1 to 10 are"
for i in {1..10..2}; do echo $i; done

Run the following command to execute the script:

bash sq3.bash

If you want to work with sequences of numbers, you can use any of the options shown in this tutorial. After completing it, you will be able to use the seq command and for loops with ranges more efficiently in your bash scripts.

[Apr 22, 2018] Happy Sysadmin Appreciation Day 2016 Opensource.com

Apr 22, 2018 | opensource.com

Necessity is frequently the mother of invention. I knew very little about Bash scripting, but that was about to change rapidly. Working with the existing script and using online help forums, search engines, and some printed documentation, I set up a Linux network-attached storage computer running Fedora Core. I learned how to create an SSH keypair and configure it, along with rsync, to move the backup file from the email server to the storage server. That worked well for a few days, until I noticed that the storage server's disk space was rapidly disappearing. What was I going to do?

That's when I learned more about Bash scripting. I modified my rsync command to delete backed-up files older than ten days. In both cases I learned that a little knowledge can be a dangerous thing, but each time my experience and confidence as a Linux user and system administrator grew, and because of that I became a resource for others. On the plus side, we soon realized that the disk-to-disk backup system was superior to tape when it came to restoring email files. In the long run it was a win, but there was a lot of uncertainty and anxiety along the way.
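The setup described above amounts to a few lines of shell; a minimal sketch (host names, paths, and the retention period are all hypothetical) might look like this:

# push the latest backup to the storage server over SSH (key-based auth assumed)
rsync -az -e ssh /var/backups/mail-backup.tar.gz backup@storage:/srv/backups/
# prune anything older than ten days on the storage server
ssh backup@storage 'find /srv/backups -name "mail-backup*.tar.gz" -mtime +10 -delete'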

[Apr 22, 2018] Unix Horror story script question Unix Linux Forums Shell Programming and Scripting

Apr 22, 2018 | www.unix.com

scottsiddharth Registered User

Unix Horror story script question


This text and script is borrowed from the "Unix Horror Stories" document.

It states as follows

"""""Management told us to email a security notice to every user on the our system (at that time, around 3000 users). A certain novice administrator on our system wanted to do it, so I instructed them to extract a list of users from /etc/passwd, write a simple shell loop to do the job, and throw it in the background.
Here's what they wrote (bourne shell)...

for USER in `cat user.list`;
do mail $USER <message.text &
done

Have you ever seen a load average of over 300 ??? """" END

My question is this: what is wrong with the script above? Why did it find a place in the Horror Stories? It worked well when I tried it.

Maybe he intended to throw the whole script into the background and not just the mail part. But even so, it works just as well... So?

Thunderbolt

RE:Unix Horror story script question
I think it well deserves its place in the Horror Stories.

Whether or not the given server has an SMTP service role, this script tries to run 3000 mail commands in parallel to send the text to its 3000 respective recipients.

If you ever try it with 3000 valid e-mail IDs, you can feel the heat of the CPU (sar 1 100).

P.S.: I did not test it, but theoretically it holds.

Best Regards.

Thunderbolt, View Public Profile 3 11-24-2008 - Original Discussion by scottsiddharth

Quote:

Originally Posted by scottsiddharth

Thank you for the reply. But isn't that exactly what the real admin asked the novice admin to do.

Is there a better script or solution ?

Well, let me try to make it sequential to reduce the CPU load, but it will take (number of users) * SLP_INT (default=1) seconds to execute...

# Interval between consecutive mail command executions in seconds, minimum 1 second.

      SLP_INT=1
      for USER in `cat user.list`; do
         mail $USER < message.text
         [ -z "${SLP_INT}" ] && sleep 1 || sleep "${SLP_INT}"
      done
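Another sketch that bounds the load instead of fully serializing it is to cap the number of parallel mail processes with GNU xargs (assuming the same user.list and message.text files):

# send at most 10 mails at a time
xargs -P 10 -I {} sh -c 'mail "{}" < message.text' < user.list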

[Apr 22, 2018] THE classic Unix horror story programming

Looks like not much has changed since 1986. It is amazing how little Unix has changed over the years. rm remains a danger, although zsh and the -I option of GNU rm are improvements. I think every sysadmin has wiped out important data with rm at least once in his career. So more work on this problem is needed.
Notable quotes:
"... Because we are creatures of habit. If you ALWAYS have to type 'yes' for every single deletion, it will become habitual, and you will start doing it without conscious thought. ..."
"... Amazing what kind of damage you can recover from given enough motivation. ..."
"... " in "rm -rf ~/ ..."
Apr 22, 2008 | www.reddit.com

probablycorey 10 years ago (35 children)

A little trick I use to ensure I never delete the root or home dir... Put a file called -i in / and ~

If you ever call rm -rf *, -i (the request confirmation option) will be the first path expanded. So your command becomes...

rm -rf -i

Catastrophe Averted!

mshade 10 years ago (0 children)
That's a pretty good trick! Unfortunately it doesn't work if you specify the path of course, but will keep you from doing it with a PWD of ~ or /.

Thanks!

aythun 10 years ago (2 children)
Or just use zsh. It's awesome in every possible way.
brian@xenon:~/tmp/test% rm -rf *
zsh: sure you want to delete all the files in /home/brian/tmp/test [yn]?
rex5249 10 years ago (1 child)
I keep an daily clone of my laptop and I usually do some backups in the middle of the day, so if I lose a disk it isn't a big deal other than the time wasted copying files.
MyrddinE 10 years ago (1 child)
Because we are creatures of habit. If you ALWAYS have to type 'yes' for every single deletion, it will become habitual, and you will start doing it without conscious thought.

Warnings must only pop up when there is actual danger, or you will become acclimated to, and cease to see, the warning.

This is exactly the problem with Windows Vista, and why so many people harbor such ill-will towards its 'security' system.

zakk 10 years ago (3 children)
and if I want to delete that file?!? ;-)
alanpost 10 years ago (0 children)
I use the same trick, so either of:

$ rm -- -i

or

$ rm ./-i

will work.

emag 10 years ago (0 children)
rm /-i ~/-i
nasorenga 10 years ago * (2 children)
The part that made me the most nostalgic was his email address: mcvax!ukc!man.cs.ux!miw

Gee whiz, those were the days... (Edit: typo)

floweryleatherboy 10 years ago (6 children)
One of my engineering managers wiped out an important server with rm -rf. Later it turned out he had a giant stock of kiddy porn on company servers.
monstermunch 10 years ago (16 children)
Whenever I use rm -rf, I always make sure to type the full path name in (never just use *) and put the -rf at the end, not after the rm. This means you don't have to worry about hitting "enter" in the middle of typing the path name (it won't delete the directory because the -rf is at the end) and you don't have to worry as much about data deletion from accidentally copy/pasting the command somewhere with middle click or if you redo the command while looking in your bash history.

Hmm, couldn't you alias "rm -rf" to mv the directory/files to a temp directory to be on the safe side?

branston 10 years ago (8 children)
Aliasing 'rm' is fairly common practice in some circles. It can have its own set of problems however (filling up partitions, running out of inodes...)
amnezia 10 years ago (5 children)
you could alias it with a script that prevents rm -rf * being run in certain directories.
jemminger 10 years ago (4 children)
you could also alias it to 'ls' :)
derefr 10 years ago * (1 child)
One could write a daemon that lets the oldest files in that directory be "garbage collected" when those conditions are approaching. I think this is, in a roundabout way, how Windows' shadow copy works.
branston 10 years ago (0 children)
Could do. Think we might be walking into the over-complexity trap however. The only time I've ever had an rm related disaster was when accidentally repeating an rm that was already in my command buffer. I looked at trying to exclude patterns from the command history but csh doesn't seem to support doing that so I gave up.

A decent solution just occurred to me when the underlying file system supports snapshots (UFS2 for example). Just snap the fs on which the to-be-deleted items are on prior to the delete. That needs barely any IO to do and you can set the snapshots to expire after 10 minutes.

Hmm... Might look at implementing that..

mbm 10 years ago (0 children)
Most of the original UNIX tools took the arguments in strict order, requiring that the options came first; you can even see this on some modern *BSD systems.
shadowsurge 10 years ago (1 child)
I just always format the command with ls first just to make sure everything is in working order. Then my neurosis kicks in and I do it again... and a couple more times just to make sure nothing bad happens.
Jonathan_the_Nerd 10 years ago (0 children)
If you're unsure about your wildcards, you can use echo to see exactly how the shell will expand your arguments.
splidge 10 years ago (0 children)
A better trick IMO is to use ls on the directory first.. then when you are sure that's what you meant type rm -rf !$ to delete it.
earthboundkid 10 years ago * (0 children)
Ever since I got burned by letting my pinky slip on the enter key years ago, I've been typing echo path first, then going back and adding the rm after the fact.
zerokey 10 years ago * (2 children)
Great story. Halfway through reading, I had a major wtf moment. I wasn't surprised by the use of a VAX, as my old department just retired their last VAX a year ago. The whole time, I'm thinking, "hello..mount the tape hardware on another system and, worst case scenario, boot from a live cd!"

Then I got to, "The next idea was to write a program to make a device descriptor for the tape deck" and looked back at the title and realized that it was from 1986 and realized, "oh..oh yeah...that's pretty fucked."

iluvatar 10 years ago (0 children)

Great story

Yeah, but really, he had way too much of a working system to qualify for true geek godhood. That title belongs to Al Viro . Even though I've read it several times, I'm still in awe every time I see that story...

cdesignproponentsist 10 years ago (0 children)
FreeBSD has backup statically-linked copies of essential system recovery tools in /rescue, just in case you toast /bin, /sbin, /lib, ld-elf.so.1, etc.

It won't protect against a rm -rf / though (and is not intended to), although you could chflags -R schg /rescue to make them immune to rm -rf.

clytle374 10 years ago * (9 children)
It happens, I tried a few months back to rm -rf bin to delete a directory and did a rm -rf /bin instead.

First thought: That took a long time.

Second thought: What do you mean ls not found.

I was amazed that the desktop survived for nearly an hour before crashing.

earthboundkid 10 years ago (8 children)
This really is a situation where GUIs are better than CLIs. There's nothing like the visual confirmation of seeing what you're obliterating to set your heart into the pit of your stomach.
jib 10 years ago (0 children)
If you're using a GUI, you probably already have that. If you're using a command line, use mv instead of rm.

In general, if you want the computer to do something, tell it what you want it to do, rather than telling it to do something you don't want and then complaining when it does what you say.

earthboundkid 10 years ago (3 children)
Yes, but trash cans aren't manly enough for vi and emacs users to take seriously. If it made sense and kept you from shooting yourself in the foot, it wouldn't be in the Unix tradition.
earthboundkid 10 years ago (1 child)
  1. Are you so low on disk space that it's important for your trash can to be empty at all times?
  2. Why should we humans have to adapt our directory names to route around the busted-ass-ness of our tools? The tools should be made to work with capital letters and spaces. Or better, use a GUI for deleting so that you don't have to worry about OMG, I forgot to put a slash in front of my space!

Seriously, I use the command line multiple times every day, but there are some tasks for which it is just not well suited compared to a GUI, and (bizarrely considering it's one thing the CLI is most used for) one of them is moving around and deleting files.

easytiger 10 years ago (0 children)
That's a very simple bash/ksh/python/etc script (see the sketch after this list):
  1. script a move op to a hidden dir on the /current/ partition.
  2. alias this to rm
  3. wrap rm as an alias to delete the contents of the hidden folder with confirmation
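A minimal sketch of that idea (the function names and trash location are made up; this is not a drop-in replacement for rm):

# move targets into a timestamped trash directory instead of deleting them
trash() {
    local dest=~/.trash/$(date +%Y%m%d-%H%M%S)
    mkdir -p "$dest" && mv -- "$@" "$dest"/
}

# empty the trash only after explicit confirmation
empty_trash() {
    read -p "Really delete everything in ~/.trash? [y/N] " ans
    [ "$ans" = "y" ] && command rm -rf ~/.trash
}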
mattucf 10 years ago (3 children)
I'd like to think that most systems these days don't have / set as root's home directory, but I've seen a few that do. :/
dsfox 10 years ago (0 children)
This is a good approach in 1986. Today I would just pop in a bootable CDROM.
fjhqjv 10 years ago * (5 children)
That's why I always keep stringent file permissions and never act as the root user.

I'd have to try to rm -rf, get a permission denied error, then retype sudo rm -rf and then type in my password to ever have a mistake like that happen.

But I'm not a systems administrator, so maybe it's not the same thing.

toast_and_oj 10 years ago (2 children)
I aliased "rm -rf" to "omnomnom" and got myself into the habit of using that. I can barely type "omnomnom" when I really want to, let alone when I'm not really paying attention. It's saved one of my projects once already.
shen 10 years ago (0 children)
I've aliased "rm -rf" to "rmrf". Maybe I'm just a sucker for punishment.

I haven't been bit by it yet, the defining word being yet.

robreim 10 years ago (0 children)
I would have thought tab completion would have made omnomnom potentially easier to type than rm -rf (since the -rf part needs to be typed explicitly)
immure 10 years ago (0 children)
It's not.
lespea 10 years ago (0 children)
before I ever do something like that I make sure I don't have permissions so I get an error, then I press up, home, and type sudo <space> <enter> and it works as expected :)
kirun 10 years ago (0 children)
And I was pleased the other day how easy it was to fix the system after I accidentally removed kdm, konqueror and kdesktop... but these guys are hardcore.
austin_k 10 years ago (0 children)
I actually started to feel sick reading that. I've been in a IT disaster before where we almost lost a huge database. Ugh.. I still have nightmares.
umilmi81 10 years ago (4 children)
Task number 1 with a UNIX system. Alias rm to rm -i. Call the explicit path when you want to avoid the -i (ie: /bin/rm -f). Nobody is too cool to skip this basic protection.
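In practice that is one line in ~/.bashrc plus an explicit path when you really mean it (the file name below is made up):

alias rm='rm -i'
/bin/rm -f old-scratch.log    # bypasses the alias when you are sure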
flinchn 10 years ago (0 children)
i did an application install at an LE agency last fall - stupid me mv ./etc ./etcbk <> mv /etc /etcbk

ahh that damned period

DrunkenAsshole 10 years ago (0 children)
Were the "*"s really needed for a story that has plagued, at one point or another, all OS users?
xelfer 10 years ago (0 children)
Is the home directory for root / for some unix systems? i thought 'cd' then 'rm -rf *' would have deleted whatever's in his home directory (or whatever $HOME points to)
srparish 10 years ago (0 children)
Couldn't he just have used the editor to create the etc files he wanted, and used cpio as root to copy that over as an /etc?

sRp

stox 10 years ago (1 child)
Been there, done that. Have the soiled underwear to prove it. Amazing what kind of damage you can recover from given enough motivation.
sheepskin 10 years ago * (0 children)
I had a customer do this, he killed it about the same time. I told him he was screwed and I'd charge him a bunch of money to take down his server, rebuild it from a working one and put it back up. But the customer happened to have a root ftp session up, and was able to upload what he needed to bring the system back. by the time he was done I rebooted it to make sure it was cool and it booted all the way back up.

Of course I've also had a lot of customer that have done it, and they where screwed, and I got to charge them a bunch of money.

jemminger 10 years ago (0 children)
pfft. that's why lusers don't get root access.
supersan 10 years ago (2 children)
i had the same thing happened to me once.. my c:\ drive was running ntfs and i accidently deleted the "ntldr" system file in the c:\ root (because the name didn't figure much).. then later, i couldn't even boot in the safe mode! and my bootable disk didn't recognize the c:\ drive because it was ntfs!! so sadly, i had to reinstall everything :( wasted a whole day over it..
b100dian 10 years ago (0 children)
Yes, but that's a single file. I suppose anyone can write hex into mbr to copy ntldr from a samba share!
bobcat 10 years ago (0 children)
http://en.wikipedia.org/wiki/Emergency_Repair_Disk
boredzo 10 years ago (0 children)
Neither one is the original source. The original source is Usenet, and I can't find it with Google Groups. So either of these webpages is as good as the other.
docgnome 10 years ago (0 children)
In 1986? On a VAX?
MarlonBain 10 years ago (0 children)

This classic article from Mario Wolczko first appeared on Usenet in 1986 .

amoore 10 years ago (0 children)
I got sidetracked trying to figure out why the fictional antagonist would type the extra "/ " in "rm -rf ~/ ".
Zombine 10 years ago (2 children)

...it's amazing how much of the system you can delete without it falling apart completely. Apart from the fact that nobody could login (/bin/login?), and most of the useful commands had gone, everything else seemed normal.

Yeah. So apart from the fact that no one could get any work done or really do anything, things were working great!

I think a more rational reaction would be "Why on Earth is this big, important system on which many people rely designed in such a way that a simple easy-to-make human error can screw it up so comprehensively?" or perhaps "Why on Earth don't we have a proper backup system?"

daniels220 10 years ago (1 child)
The problem wasn't the backup system, it was the restore system, which relied on the machine having a "copy" command. Perfectly reasonable assumption that happened not to be true.
Zombine 10 years ago * (0 children)
Neither backup nor restoration serves any purpose in isolation. Most people would group those operations together under the heading "backup;" certainly you win only a semantic victory by doing otherwise. Their fail-safe data-protection system, call it what you will, turned out not to work, and had to be re-engineered on-the-fly.

I generally figure that the assumptions I make that turn out to be entirely wrong were not "perfectly reasonable" assumptions in the first place. Call me a traditionalist.

[Apr 22, 2018] rm and Its Dangers (Unix Power Tools, 3rd Edition)

Apr 22, 2018 | docstore.mik.ua
14.3. rm and Its Dangers

Under Unix, you use the rm command to delete files. The command is simple enough; you just type rm followed by a list of files. If anything, rm is too simple. It's easy to delete more than you want, and once something is gone, it's permanently gone. There are a few hacks that make rm somewhat safer, and we'll get to those momentarily. But first, here's a quick look at some of the dangers.

To understand why it's impossible to reclaim deleted files, you need to know a bit about how the Unix filesystem works. The system contains a "free list," which is a list of disk blocks that aren't used. When you delete a file, its directory entry (which gives it its name) is removed. If there are no more links ( Section 10.3 ) to the file (i.e., if the file only had one name), its inode ( Section 14.2 ) is added to the list of free inodes, and its datablocks are added to the free list.

Well, why can't you get the file back from the free list? After all, there are DOS utilities that can reclaim deleted files by doing something similar. Remember, though, Unix is a multitasking operating system. Even if you think your system is a single-user system, there are a lot of things going on "behind your back": daemons are writing to log files, handling network connections, processing electronic mail, and so on. You could theoretically reclaim a file if you could "freeze" the filesystem the instant your file was deleted -- but that's not possible. With Unix, everything is always active. By the time you realize you made a mistake, your file's data blocks may well have been reused for something else.

When you're deleting files, it's important to use wildcards carefully. Simple typing errors can have disastrous consequences. Let's say you want to delete all your object ( .o ) files. You want to type:

% rm *.o

But because of a nervous twitch, you add an extra space and type:

% rm * .o

It looks right, and you might not even notice the error. But before you know it, all the files in the current directory will be gone, irretrievably.
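One cheap habit that catches this class of typo is to preview the expansion with echo before committing to rm (the file names below are made up):

% echo rm * .o
rm chap1.doc chap2.doc notes.txt .o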

If you don't think this can happen to you, here's something that actually did happen to me. At one point, when I was a relatively new Unix user, I was working on my company's business plan. The executives thought, so as to be "secure," that they'd set a business plan's permissions so you had to be root ( Section 1.18 ) to modify it. (A mistake in its own right, but that's another story.) I was using a terminal I wasn't familiar with and accidentally created a bunch of files with four control characters at the beginning of their name. To get rid of these, I typed (as root ):

# rm ????*

This command took a long time to execute. When about two-thirds of the directory was gone, I realized (with horror) what was happening: I was deleting all files with four or more characters in the filename.

The story got worse. They hadn't made a backup in about five months. (By the way, this article should give you plenty of reasons for making regular backups ( Section 38.3 ).) By the time I had restored the files I had deleted (a several-hour process in itself; this was on an ancient version of Unix with a horrible backup utility) and checked (by hand) all the files against our printed copy of the business plan, I had resolved to be very careful with my rm commands.

[Some shells have safeguards that work against Mike's first disastrous example -- but not the second one. Automatic safeguards like these can become a crutch, though . . . when you use another shell temporarily and don't have them, or when you type an expression like Mike's very destructive second example. I agree with his simple advice: check your rm commands carefully! -- JP ]

-- ML

[Apr 22, 2018] How to prevent a mistaken rm -rf for specific folders?

Notable quotes:
"... There's nothing more on a traditional Linux, but you can set Apparmor/SELinux/ rules that prevent rm from accessing certain directories. ..."
"... Probably your best bet with it would be to alias rm -ri into something memorable like kill_it_with_fire . This way whenever you feel like removing something, go ahead and kill it with fire. ..."
Jan 20, 2013 | unix.stackexchange.com


amyassin, Jan 20, 2013 at 17:26

I think pretty much everyone here has mistakenly 'rm -rf'ed the wrong directory, and hopefully it did not cause huge damage. Is there any way to prevent users from creating a similar unix horror story ? Someone mentioned (in the comments section of the previous link ) that

... I am pretty sure now every unix course or company using unix sets rm -fr to disable accounts of people trying to run it or stop them from running it ...

Is there any implementation of that in any current Unix or Linux distro? And what is the common practice to prevent that error even from a sysadmin (with root access)?

It seems that there was some protection for the root directory ( / ) in Solaris (since 2005) and GNU (since 2006). Is there any way to implement the same protection for some other folders as well?

To give it more clarity, I was not asking about general advice about rm usage (and I've updated the title to indicate that more). I want something more like the root folder protection: in order to rm -rf / you have to pass a specific parameter: rm -rf --no-preserve-root /. Are there similar implementations for a customized set of directories? Or can I specify files in addition to / to be protected by the preserve-root option?

mattdm, Jan 20, 2013 at 17:33

1) Change management 2) Backups. – mattdm Jan 20 '13 at 17:33

Keith, Jan 20, 2013 at 17:40

probably the only way would be to replace the rm command with one that doesn't have that feature. – Keith Jan 20 '13 at 17:40

sr_, Jan 20, 2013 at 18:28

safe-rm maybe – sr_ Jan 20 '13 at 18:28

Bananguin, Jan 20, 2013 at 21:07

Most distros do alias rm='rm -i' , which makes rm ask you if you are sure.

Besides that: know what you are doing. Only become root if necessary. For any user with root privileges, security of any kind must be implemented in and by the user. Hire somebody if you can't do it yourself. Over time any countermeasure becomes equivalent to the alias line above if you can't wrap your own head around the problem. – Bananguin Jan 20 '13 at 21:07

midnightsteel, Jan 22, 2013 at 14:21

@amyassin using rm -rf can be a resume generating event. Check and triple check before executing it – midnightsteel Jan 22 '13 at 14:21

Gilles, Jan 22, 2013 at 0:18

To avoid a mistaken rm -rf, do not type rm -rf .

If you need to delete a directory tree, I recommend the following workflow:

Never call rm -rf with an argument other than DELETE . Doing the deletion in several stages gives you an opportunity to verify that you aren't deleting the wrong thing, either because of a typo (as in rm -rf /foo /bar instead of rm -rf /foo/bar ) or because of a braino (oops, no, I meant to delete foo.old and keep foo.new ).
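A minimal sketch of that staged approach, with a hypothetical path (the only name ever passed to rm -rf is DELETE):

mv /srv/data/old-build DELETE    # stage 1: rename the tree you intend to remove
ls DELETE                        # stage 2: confirm it contains what you expect
rm -rf DELETE                    # stage 3: rm -rf only ever sees the literal name DELETE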

If your problem is that you can't trust others not to type rm -rf, consider removing their admin privileges. There's a lot more that can go wrong than rm .


Always make backups .

Periodically verify that your backups are working and up-to-date.

Keep everything that can't be easily downloaded from somewhere under version control.


With a basic unix system, if you really want to make some directories undeletable by rm, replace (or better shadow) rm by a custom script that rejects certain arguments. Or by hg rm .
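As a rough illustration of such a shadowing wrapper (a sketch only, not the actual safe-rm implementation; the list of protected paths is just an example), a small script named rm placed in a directory that comes earlier in PATH than /bin could look like this:

#!/bin/sh
# rm wrapper: refuse to act on a few protected paths, otherwise call the real rm
PROTECTED="/ /boot /etc /home /usr /var"
for arg in "$@"; do
    for p in $PROTECTED; do
        if [ "$arg" = "$p" ] || [ "$arg" = "$p/" ]; then
            echo "rm: refusing to operate on protected directory '$arg'" >&2
            exit 1
        fi
    done
done
exec /bin/rm "$@"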

Some unix variants offer more possibilities.

amyassin, Jan 22, 2013 at 9:41

Yeah backing up is the most amazing solution, but I was thinking of something like the --no-preserve-root option, for other important folder.. And that apparently does not exist even as a practice... – amyassin Jan 22 '13 at 9:41

Gilles, Jan 22, 2013 at 20:32

@amyassin I'm afraid there's nothing more (at least not on Linux). rm -rf already means "delete this, yes I'm sure I know what I'm doing". If you want more, replace rm by a script that refuses to delete certain directories. – Gilles Jan 22 '13 at 20:32

Gilles, Jan 22, 2013 at 22:17

@amyassin Actually, I take this back. There's nothing more on a traditional Linux, but you can set Apparmor/SELinux/ rules that prevent rm from accessing certain directories. Also, since your question isn't only about Linux, I should have mentioned OSX, which has something a bit like what you want. – Gilles Jan 22 '13 at 22:17

qbi, Jan 22, 2013 at 21:29

If you are using rm * under zsh, you can set the option rmstarwait :
setopt rmstarwait

Now the shell warns when you're using the * :

> zsh -f
> setopt rmstarwait
> touch a b c
> rm *
zsh: sure you want to delete all the files in /home/unixuser [yn]? _

When you reject it ( n ), nothing happens. Otherwise all files will be deleted.

Drake Clarris, Jan 22, 2013 at 14:11

EDIT as suggested by comment:

You can change the attribute of the file or directory to immutable, and then it cannot be deleted even by root until the attribute is removed.

chattr +i /some/important/file

This also means that the file cannot be written to or changed in any way, even by root . Another attribute apparently available that I haven't used myself is the append attribute ( chattr +a /some/important/file ). Then the file can only be opened in append mode, meaning no deletion as well, but you can add to it (say a log file). This means you won't be able to edit it in vim for example, but you can do echo 'this adds a line' >> /some/important/file . Using > instead of >> will fail.

These attributes can be unset using a minus sign, i.e. chattr -i file
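Put together, a quick session might look like this (the file name is made up and the output is typical, not captured from a real run):

# chattr +i /etc/important.conf
# rm /etc/important.conf
rm: cannot remove '/etc/important.conf': Operation not permitted
# chattr -i /etc/important.conf
# rm /etc/important.conf      # now succeeds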

Otherwise, if this is not suitable, one thing I practice is to always ls /some/dir first, and then instead of retyping the command, press up arrow CTL-A, then delete the ls and type in my rm -rf if I need it. Not perfect, but by looking at the results of ls, you know before hand if it is what you wanted.

NlightNFotis, Jan 22, 2013 at 8:27

One possible choice is to stop using rm -rf and start using rm -ri . The extra i parameter there is to make sure that it asks if you are sure you want to delete the file.

Probably your best bet with it would be to alias rm -ri into something memorable like kill_it_with_fire . This way whenever you feel like removing something, go ahead and kill it with fire.

amyassin, Jan 22, 2013 at 14:24

I like the name, but isn't f the exact opposite of the i option?? I tried it and it worked though... – amyassin Jan 22 '13 at 14:24

NlightNFotis, Jan 22, 2013 at 16:09

@amyassin Yes it is. For some strange kind of fashion, I thought I only had r in there. Just fixed it. – NlightNFotis Jan 22 '13 at 16:09

Silverrocker, Jan 22, 2013 at 14:46

To protect against an accidental rm -rf * in a directory, create a file called "-i" (you can do this with emacs or some other program) in that directory. When the shell expands * , rm receives -i as one of its arguments, treats it as an option, and goes into interactive mode.

For example: You have a directory called rmtest with the file named -i inside. If you try to rm everything inside the directory, rm will first get -i passed to it and will go into interactive mode. If you put such a file inside the directories you would like to have some protection on, it might help.

Note that this is ineffective against rm -rf rmtest .
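For what it's worth, you don't need emacs to create that file; the shell can do it as long as the name isn't parsed as an option (directory and file names here are hypothetical):

$ cd rmtest
$ touch ./-i        # or: touch -- -i
$ touch a b c
$ rm *
rm: remove regular empty file 'a'?

Answer n to each prompt and nothing is removed. And as noted above, rm -rf rmtest run from the parent directory still bypasses the trick.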

ValeriRangelov, Dec 21, 2014 at 3:03

If you understand the C programming language, I think it is possible to rewrite the rm source code and make a little patch. I saw this on one server: it was impossible to delete some important directories, and when you typed 'rm -rf /directory' it sent an email to the sysadmin.

[Apr 22, 2018] Unix-Linux Horror Stories Unix Horror Stories The good thing about Unix, is when it screws up, it does so very quickly

Notable quotes:
"... And then I realized I had thrashed the server. Completely. ..."
"... There must be a way to fix this , I thought. HP-UX has a package installer like any modern Linux/Unix distribution, that is swinstall . That utility has a repair command, swrepair . ..."
"... you probably don't want that user owning /bin/nologin. ..."
Aug 04, 2011 | unixhorrorstories.blogspot.com

Unix Horror Stories: The good thing about Unix, is when it screws up, it does so very quickly

The project to deploy a new, multi-million-dollar commercial system on two big, brand-new HP-UX servers at a brewing company that shall not be named, had been running on time and within budget for several months. Just a few steps remained, among them the migration of users from the old servers to the new ones.

The task was going to be simple: just copy the home directories of each user from the old server to the new ones, and a simple script to change the owner so as to make sure that each home directory was owned by the correct user. The script went something like this:

#!/bin/bash

cat /etc/passwd|while read line
      do
         USER=$(echo $line|cut -d: -f1)
         HOME=$(echo $line|cut -d: -f6)
         chown -R $USER $HOME
      done

[NOTE: the script does not filter out system IDs from user IDs, and that's a grave mistake. Also it was run before it was tested ;-) -- NNB]

As you see, this script is pretty simple: obtain the user and the home directory from the password file, and then execute the chown command recursively on the home directory. I copied the files, executed the script, and thought, great, just 10 minutes and all is done.

That's when the calls started.

It turns out that while I was executing those seemingly harmless commands, the server was under acceptance test. You see, we were just one week away from going live and the final touches were everything that was required. So the users in the brewing company started testing if everything they needed was working like in the old servers. And suddenly, the users noticed that their system was malfunctioning and started making furious phone calls to my boss and then my boss started to call me.

And then I realized I had thrashed the server. Completely. My console was still open and I could see that the processes started failing, one by one, reporting very strange messages to the console, that didn't look any good. I started to panic. My workmate Ayelen and I (who just copied my script and executed it in the mirror server) realized only too late that the home directory of the root user was / -the root filesystem- so we changed the owner of every single file in the filesystem to root!!! That's what I love about Unix: when it screws up, it does so very quickly, and thoroughly.

There must be a way to fix this , I thought. HP-UX has a package installer like any modern Linux/Unix distribution, that is swinstall . That utility has a repair command, swrepair . So the following command put the system files back in order, needing a few permission changes on the application directories that weren't installed with the package manager:

swrepair -F

But the story doesn't end here. The next week, we were going live, and I knew that the migration of the users would be for real this time, not just a test. My boss and I were going to the brewing company, and he receives a phone call. Then he turns to me and asks me, "What was the command that you used last week?". I told him and I noticed that he was dictating it very carefully. When we arrived, we saw why: before the final deployment, a Unix administrator from the company did the same mistake I did, but this time, people from the whole country were connecting to the system, and he received phone calls from a lot of angry users. Luckily, the mistake could be fixed, and we all, young and old, went back to reading the HP-UX manual. Those things can come handy sometimes!

Moral of this story: before doing something to users' directories, take the time to check the user IDs of actual users (these usually start at 500, but it's configuration-dependent), because system users' IDs are lower than that.
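A hedged sketch of what a more careful version of that script could look like, following the advice above (the UID cutoff of 500 matches the story; many modern distributions start regular users at 1000):

#!/bin/bash
# chown each real user's home directory, skipping system accounts
MIN_UID=500
while IFS=: read -r user _ uid _ _ home _; do
    # skip system accounts, empty home fields, and anything that points at /
    if [ "$uid" -ge "$MIN_UID" ] && [ -n "$home" ] && [ "$home" != "/" ]; then
        chown -R "$user" "$home"
    fi
done < /etc/passwd

Note that it also avoids the original script's use of the variable name HOME, which overwrites the shell's own $HOME while the script runs.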

Send in your Unix horror story, and it will be featured here in the blog!

Greetings,
Agustin

Colin McD, March 16, 2017, 15:02

This script is so dangerous. You are giving home directories to say the apache user and you probably don't want that user owning /bin/nologin.

[Apr 21, 2018] Any alias of rm is a very stupid idea

Option -I is more modern and more useful than the old option -i. It is highly recommended, and it makes sense to use an alias with it, contrary to what this author states (he probably does not realize that aliases do not work for non-interactive sessions).
The point the author makes is that when you automatically expect rm to be aliased to rm -i, you get into trouble on machines where this is not the case. And that's completely true.
But it does not solve the problem, because, as respondents soon pointed out, reliance on the alias itself becomes automatic. Writing your own wrapper is a better deal. One such wrapper -- safe-rm -- already exists and, while not perfect, is useful.
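If you do go the alias route with -I, a minimal sketch for ~/.bashrc would be (keeping in mind, as noted above, that aliases apply only to interactive shells):

alias rm='rm -I --preserve-root'

GNU rm -I prompts once before removing more than three files or before removing recursively, which is far less intrusive than the per-file prompt of -i, and --preserve-root (the default in current coreutils) refuses to operate recursively on /.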
Notable quotes:
"... A co-worker had such an alias. Imagine the disaster when, visiting a customer site, he did "rm *" in the customer's work directory and all he got was the prompt for the next command after rm had done what it was told to do. ..."
"... It you want a safety net, do "alias del='rm -I –preserve_root'", ..."
Feb 14, 2017 | www.cyberciti.biz
Art Protin June 12, 2012, 9:53 pm

Any alias of rm is a very stupid idea (except maybe alias rm=echo fool).

A co-worker had such an alias. Imagine the disaster when, visiting a customer site, he did "rm *" in the customer's work directory and all he got was the prompt for the next command after rm had done what it was told to do.

If you want a safety net, do "alias del='rm -I --preserve-root'".

Drew Hammond March 26, 2014, 7:41 pm
^ This x10000.

I've made the same mistake before and it's horrible.

[Apr 20, 2018] GitHub - teejee2008-timeshift System restore tool for Linux. Creates filesystem snapshots using rsync+hardlinks, or BTRFS snap

Notable quotes:
"... System Restore ..."
Apr 20, 2018 | github.com

Timeshift

Timeshift for Linux is an application that provides functionality similar to the System Restore feature in Windows and the Time Machine tool in Mac OS. Timeshift protects your system by taking incremental snapshots of the file system at regular intervals. These snapshots can be restored at a later date to undo all changes to the system.

In RSYNC mode, snapshots are taken using rsync and hard-links . Common files are shared between snapshots which saves disk space. Each snapshot is a full system backup that can be browsed with a file manager.

In BTRFS mode, snapshots are taken using the in-built features of the BTRFS filesystem. BTRFS snapshots are supported only on BTRFS systems having an Ubuntu-type subvolume layout (with @ and @home subvolumes).

Timeshift is similar to applications like rsnapshot , BackInTime and TimeVault but with different goals. It is designed to protect only system files and settings. User files such as documents, pictures and music are excluded. This ensures that your files remain unchanged when you restore your system to an earlier date. If you need a tool to back up your documents and files, please take a look at the excellent BackInTime application, which is more configurable and provides options for saving user files.

[Apr 04, 2018] The gzip Recovery Toolkit

Apr 04, 2018 | www.urbanophile.com

So you thought you had your files backed up - until it came time to restore. Then you found out that you had bad sectors and you've lost almost everything because gzip craps out 10% of the way through your archive. The gzip Recovery Toolkit has a program - gzrecover - that attempts to skip over bad data in a gzip archive. This saved me from exactly the above situation. Hopefully it will help you as well.

I'm very eager for feedback on this program. If you download and try it, I'd appreciate an email letting me know what your results were. My email is arenn@urbanophile.com . Thanks.

ATTENTION

99% of "corrupted" gzip archives are caused by transferring the file via FTP in ASCII mode instead of binary mode. Please re-transfer the file in the correct mode first before attempting to recover from a file you believe is corrupted.

Disclaimer and Warning

This program is provided AS IS with absolutely NO WARRANTY. It is not guaranteed to recover anything from your file, nor is what it does recover guaranteed to be good data. The bigger your file, the more likely that something will be extracted from it. Also keep in mind that this program gets faked out and is likely to "recover" some bad data. Everything should be manually verified.

Downloading and Installing

Note that version 0.8 contains major bug fixes and improvements. See the ChangeLog for details. Upgrading is recommended. The old version is provided in the event you run into troubles with the new release.

You need the following packages: the gzrt sources and the zlib compression library (with its development headers).

First, build and install zlib if necessary. Next, unpack the gzrt sources. Then cd to the gzrt directory and build the gzrecover program by typing make . Install manually by copying to the directory of your choice.

Usage

Run gzrecover on a corrupted .gz file. If you leave the filename blank, gzrecover will read from the standard input. Anything that can be read from the file will be written to a file with the same name, but with a .recovered appended (any .gz is stripped). You can override this with the -o option. The default filename when reading from the standard input is "stdin.recovered". To write recovered data to the standard output, use the -p option. (Note that -p and -o cannot be used together).

To get a verbose readout of exactly where gzrecover is finding bad bytes, use the -v option to enable verbose mode. This will probably overflow your screen with text so best to redirect the stderr stream to a file. Once gzrecover has finished, you will need to manually verify any data recovered as it is quite likely that our output file is corrupt and has some garbage data in it. Note that gzrecover will take longer than regular gunzip. The more corrupt your data the longer it takes. If your archive is a tarball, read on.
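For example, a verbose run with the diagnostics redirected to a file might look like this (the file name is hypothetical):

$ gzrecover -v my-corrupted-backup.tar.gz 2> gzrecover-errors.log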

For tarballs, the tar program will choke because GNU tar cannot handle errors in the file format. Fortunately, GNU cpio (tested at version 2.6 or higher) handles corrupted files out of the box.

Here's an example:

$ ls *.gz
my-corrupted-backup.tar.gz
$ gzrecover my-corrupted-backup.tar.gz
$ ls *.recovered
my-corrupted-backup.tar.recovered
$ cpio -F my-corrupted-backup.tar.recovered -i -v

Note that newer versions of cpio can spew voluminous error messages to your terminal. You may want to redirect the stderr stream to /dev/null. Also, cpio might take quite a long while to run.

Copyright

The gzip Recovery Toolkit v0.8
Copyright (c) 2002-2013 Aaron M. Renn ( arenn@urbanophile.com )

[Apr 02, 2018] How Many Opioid Overdoses Are Suicides

Notable quotes:
"... By Martha Bebinger of WBUR. Originally published at Kaiser Health News ..."
"... The National Suicide Prevention Lifeline is 800-273-8255. ..."
"... This story is part of a partnership that includes WBUR , NPR and Kaiser Health News. ..."
Apr 02, 2018 | www.nakedcapitalism.com

Posted on March 30, 2018 by Yves Smith

Yves here. See also this related Kaiser Health News story: Omissions On Death Certificates Lead To Undercounting Of Opioid Overdoses .

It takes a lot of courage for an addict to recover and stay clean. And it is sadly not news that drug addiction and high levels of prescription drug use are signs that something is deeply broken in our society. There are always some people afflicted with deep personal pain but our system is doing a very good job of generating unnecessary pain and desperation.

By Martha Bebinger of WBUR. Originally published at Kaiser Health News

Mady Ohlman was 22 on the evening some years ago when she stood in a friend's bathroom looking down at the sink.

"I had set up a bunch of needles filled with heroin because I wanted to just do them back-to-back-to-back," Ohlman recalled. She doesn't remember how many she injected before collapsing, or how long she lay drugged-out on the floor.

"But I remember being pissed because I could still get up, you know?"

She wanted to be dead, she said, glancing down, a wisp of straight brown hair slipping from behind an ear across her thin face.

At that point, said Ohlman, she'd been addicted to opioids -- controlled by the drugs -- for more than three years.

"And doing all these things you don't want to do that are horrible -- you know, selling my body, stealing from my mom, sleeping in my car," Ohlman said. "How could I not be suicidal?"

For this young woman, whose weight had dropped to about 90 pounds, who was shooting heroin just to avoid feeling violently ill, suicide seemed a painless way out.

"You realize getting clean would be a lot of work," Ohlman said, her voice rising. "And you realize dying would be a lot less painful. You also feel like you'll be doing everyone else a favor if you die."

Ohlman, who has now been sober for more than four years, said many drug users hit the same point, when the disease and the pursuit of illegal drugs crushes their will to live. Ohlman is among at least 40 percent of active drug users who wrestle with depression, anxiety or another mental health issue that increases the risk of suicide.

Measuring Suicide Among Patients Addicted To Opioids

Massachusetts, where Ohlman lives, began formally recognizing in May 2017 that some opioid overdose deaths are suicides. The state confirmed only about 2 percent of all overdose deaths as suicides, but Dr. Monica Bharel, head of the Massachusetts Department of Public Health, said it's difficult to determine a person's true intent.

"For one thing, medical examiners use different criteria for whether suicide was involved or not," Bharel said, and the "tremendous amount of stigma surrounding both overdose deaths and suicide sometimes makes it extremely challenging to piece everything together and figure out unintentional and intentional."

Research on drug addiction and suicide suggests much higher numbers.

"[Based on the literature that's available], it looks like it's anywhere between 25 and 45 percent of deaths by overdose that may be actual suicides," said Dr. Maria Oquendo , immediate past president of the American Psychiatric Association.

Oquendo pointed to one study of overdoses from prescription opioids that found nearly 54 percent were unintentional. The rest were either suicide attempts or undetermined.

Several large studies show an increased risk of suicide among drug users addicted to opioids, especially women. In a study of about 5 million veterans, women were eight times as likely as others to be at risk for suicide, while men faced a twofold risk.

The opioid epidemic is occurring at the same time suicides have hit a 30-year high , but Oquendo said few doctors look for a connection.

"They are not monitoring it," said Oquendo, who chairs the department of psychiatry at the University of Pennsylvania. "They are probably not assessing it in the kinds of depths they would need to prevent some of the deaths."

That's starting to change. A few hospitals in Boston, for example, aim to ask every patient admitted about substance use, as well as about whether they've considered hurting themselves.

"No one has answered the chicken and egg [problem]," said Dr. Kiame Mahaniah , a family physician who runs the Lynn Community Health Center in Lynn, Mass. Is it that patients "have mental health issues that lead to addiction, or did a life of addiction then trigger mental health problems?"

With so little data to go on, "it's so important to provide treatment that covers all those bases," Mahaniah said.

'Deaths Of Despair'

When doctors do look deeper into the reasons patients addicted to opioids become suicidal, some economists predict they'll find deep reservoirs of depression and pain.

In a seminal paper published in 2015, Princeton economists Angus Deaton and Anne Case tracked falling marriage rates, the loss of stable middle-class jobs and rising rates of self-reported pain. The authors say opioid overdoses, suicides and diseases related to alcoholism are all often "deaths of despair."

"We think of opioids as something that's thrown petrol on the flames and made things infinitely worse," Deaton said, "but the underlying deep malaise would be there even without the opioids."

Many economists agree on remedies for that deep malaise. Harvard economics professor David Cutler said solutions include a good education, a steady job that pays a decent wage, secure housing, food and health care.

"And also thinking about a sense of purpose in life," Cutler said. "That is, even if one is doing well financially, is there a sense that one is contributing in a meaningful way?"

Tackling Despair In The Addiction Community

"I know firsthand the sense of hopelessness that people can feel in the throes of addiction," said Michael Botticelli , executive director of the Grayken Center for Addiction at Boston Medical Center; he is in recovery for an addiction to alcohol.

Botticelli said recovery programs must help patients come out of isolation and create or recreate bonds with family and friends.

"The vast majority of people I know who are in recovery often talk about this profound sense of re-establishing -- and sometimes establishing for the first time -- a connection to a much larger community," Botticelli said.

Ohlman said she isn't sure why her attempted suicide, with multiple injections of heroin, didn't work.

"I just got really lucky," Ohlman said. "I don't know how."

A big part of her recovery strategy involves building a supportive community, she said.

"Meetings; 12-step; sponsorship and networking; being involved with people doing what I'm doing," said Ohlman, ticking through a list of her priorities.

There's a fatal overdose at least once a week within her Cape Cod community, she said. Some are accidental, others not. Ohlman said she's convinced that telling her story, of losing and then finding hope, will help bring those numbers down.

The National Suicide Prevention Lifeline is 800-273-8255.

This story is part of a partnership that includes WBUR , NPR and Kaiser Health News.

[Mar 28, 2018] Sysadmin wiped two servers, left the country to escape the shame by Simon Sharwood

Mar 26, 2018 | theregister.co.uk
"This revolutionary product allowed you to basically 'mirror' two file servers," Graham told The Register . "It was clever stuff back then with a high speed 100mb FDDI link doing the mirroring and the 10Mb LAN doing business as usual."

Graham was called upon to install said software at a British insurance company, which involved a 300km trip on Britain's famously brilliant motorways with a pair of servers in the back of a company car.

Maybe that drive was why Graham made a mistake after the first part of the job: getting the servers set up and talking.

"Sadly the software didn't make identifying the location of each disk easy," Graham told us. "And – ummm - I mirrored it the wrong way."

"The net result was two empty but beautifully-mirrored servers."

Oops.

Graham tried to find someone to blame, but as he was the only one on the job that wouldn't work.

His next instinct was to run, but as the site had a stack of Quarter Inch Cartridge backup tapes, he quickly learned that "incremental back-ups are the work of the devil."

Happily, all was well in the end.

[Mar 27, 2018] Cutting 'Old Heads' at IBM

Mar 27, 2018 | news.slashdot.org

(propublica.org) As the world's dominant technology firm, payrolls at International Business Machines swelled to nearly a quarter-million U.S. white-collar workers in the 1980s. Its profits helped underwrite a broad agenda of racial equality, equal pay for women and an unbeatable offer of great wages and something close to lifetime employment, all in return for unswerving loyalty. But when high tech suddenly started shifting and companies went global, IBM faced the changing landscape with a distinction most of its fiercest competitors didn't have: a large number of experienced and aging U.S. employees .

The company reacted with a strategy that, in the words of one confidential planning document, would "correct seniority mix." It slashed IBM's U.S. workforce by as much as three-quarters from its 1980s peak, replacing a substantial share with younger, less-experienced and lower-paid workers and sending many positions overseas. ProPublica estimates that in the past five years alone, IBM has eliminated more than 20,000 American employees ages 40 and over, about 60 percent of its estimated total U.S. job cuts during those years. In making these cuts, IBM has flouted or outflanked U.S. laws and regulations intended to protect later-career workers from age discrimination, according to a ProPublica review of internal company documents, legal filings and public records, as well as information provided via interviews and questionnaires filled out by more than 1,000 former IBM employees.

[Mar 19, 2018] Gogo - Create Shortcuts to Long and Complicated Paths in Linux

I do not see any bright ideas here. Using aliases is pretty much equivalent to this.
Mar 19, 2018 | www.tecmint.com

For example, if you have a directory ~/Documents/Phone-Backup/Linux-Docs/Ubuntu/ , using gogo you can create an alias (a shortcut name), for instance Ubuntu , to access it without typing the whole path anymore. No matter what your current working directory is, you can move into ~/Documents/Phone-Backup/Linux-Docs/Ubuntu/ by simply using the alias Ubuntu .

Read Also : bd – Quickly Go Back to a Parent Directory Instead of Typing "cd ../../.." Redundantly

In addition, it also allows you to create aliases for connecting directly into directories on remote Linux servers.

How to Install Gogo in Linux Systems

To install Gogo , first clone the gogo repository from Github and then copy the gogo.py to any directory in your PATH environmental variable (if you already have the ~/bin/ directory, you can place it here, otherwise create it).

$ git clone https://github.com/mgoral/gogo.git
$ cd gogo/
$ mkdir -p ~/bin        #run this if you do not have ~/bin directory
$ cp gogo.py ~/bin/

... ... ...

To start using gogo , you need to logout and login back to use it. Gogo stores its configuration in ~/.config/gogo/gogo.conf file (which should be auto created if it doesn't exist) and has the following syntax.
# Comments are lines that start from '#' character.
default = ~/something
alias = /desired/path
alias2 = /desired/path with space
alias3 = "/this/also/works"
zażółć = "unicode/is/also/supported/zażółć gęślą jaźń"

If you run gogo without any arguments, it will go to the directory specified in default ; this alias is always available, even if it's not in the configuration file, and points to the $HOME directory.

To display the current aliases, use the -l switch:
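Assuming the aliases above are in ~/.config/gogo/gogo.conf, usage would look roughly like this (a sketch based on the article's description, not checked against the current gogo release):

$ gogo -l          # list the configured aliases
$ gogo alias2      # jump to "/desired/path with space"
$ gogo             # with no argument, go to the 'default' entry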

[Mar 13, 2018] GitHub - intoli-exodus Painless relocation of Linux binaries and all of their dependencies without containers.

Mar 13, 2018 | github.com

Painless relocation of Linux binaries–and all of their dependencies–without containers.

The Problem Being Solved

If you simply copy an executable file from one system to another, then you're very likely going to run into problems. Most binaries available on Linux are dynamically linked and depend on a number of external library files. You'll get an error like this when running a relocated binary when it has a missing dependency.

aria2c: error while loading shared libraries: libgnutls.so.30: cannot open shared object file: No such file or directory

You can try to install these libraries manually, or to relocate them and set LD_LIBRARY_PATH to wherever you put them, but it turns out that the locations of the ld-linux linker and the glibc libraries are hardcoded. Things can very quickly turn into a mess of relocation errors,

aria2c: relocation error: /lib/libpthread.so.0: symbol __getrlimit, version
GLIBC_PRIVATE not defined in file libc.so.6 with link time reference

segmentation faults,

Segmentation fault (core dumped)

or, if you're really unlucky, this very confusing symptom of a missing linker.

$ ./aria2c
bash: ./aria2c: No such file or directory
$ ls -lha ./aria2c
-rwxr-xr-x 1 sangaline sangaline 2.8M Jan 30 21:18 ./aria2c

Exodus works around these issues by compiling a small statically linked launcher binary that invokes the relocated linker directly with any hardcoded RPATH library paths overridden. The relocated binary will run with the exact same linker and libraries that it ran with on its origin machine.
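To make that mechanism concrete, "invoking the relocated linker directly" means roughly the following when written out by hand (paths are hypothetical; exodus generates the equivalent launcher for you):

$ ./bundle/lib/ld-linux-x86-64.so.2 --library-path ./bundle/lib ./bundle/bin/aria2c --version

Because the bundled ld-linux and libraries are the ones copied from the origin machine, the binary never touches the host's linker or glibc.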

[Mar 13, 2018] How To's

Mar 13, 2018 | linuxtechlab.com

Tips & Tricks How to restore deleted files in Linux with Foremost

by Shusain · March 2, 2018

It might have happened to you at one point or another that you deleted a file or an image by mistake & then regretted it immediately. So can we restore such a deleted file/image on a Linux machine? In this tutorial, we are going to discuss just that, i.e. how to restore a deleted file on a Linux machine.

To restore a deleted file on a Linux machine, we will be using an application called 'Foremost'. Foremost is a Linux-based data recovery program for restoring deleted files. The program uses a configuration file to specify headers and footers to search for. Intended to be run on disk images, foremost can search through almost any kind of data without worrying about the format.

Note:- We can only restore deleted files in Linux as long as those sectors have not been overwritten on the hard disk.

We will now discuss how to recover the data with foremost. Let's start tutorial by installation of Foremost on CentOS & Ubuntu systems.

( Recommended Read: Complete guide for creating Vagrant boxes with VirtualBox )

(Also Read: Checking website statistics using Webalizer )

Install Foremost

To install Foremost on CentOS, we will download & install the foremost rpm from the official webpage. Open a terminal & execute the following command,

$ sudo yum install https://forensics.cert.org/centos/cert/7/x86_64//foremost-1.5.7-13.1.el7.x86_64.rpm -y

With Ubuntu, the foremost package is available with default repository. To install foremost on Ubuntu, run the following command from terminal,

$ sudo apt-get install foremost

Restore deleted files in Linux

For this scenario, we have kept an image named 'dan.jpg' on our system. We will now delete it from the system with the following command,

$ sudo rm -rf dan.jpg

Now we will use the foremost utility to restore the image. Run the following command to restore the file,

$ foremost -t jpeg -i /dev/sda1

Here, with the option '-t' we have defined the type of file that needs to be restored,

-i tells foremost to look for the file in the partition '/dev/sda1'. We can check the partition with the 'mount' command.

Upon successful execution of the command, the file will be restored in the current folder. We can also restore the file to a particular folder with the option '-o',

$ foremost -t jpeg -i /dev/sda1 -o /root/test_folder

Note:- The restored file will not have the same name as the original file, since the filename is not stored with the file itself. So the file name will be different, but the data should all be there.

With this we now end our tutorial on how to restore deleted files in Linux machine using Foremost. Please feel free to send in any questions or suggestion using the comment box below.

[Jan 29, 2018] How Much Swap Should You Use in Linux by Abhishek Prakash

Red Hat recommends a swap size of 20% of RAM for modern systems (i.e. 4GB or higher RAM).
Notable quotes:
"... So many people (including this article) are misinformed about the Linux swap algorithm. It doesn't just check if your RAM reaches a certain usage point. It's incredibly complicated. Linux will swap even if you are using only 20-50% of your RAM. Inactive processes are often swapped and swapping inactive processes makes more room for buffer and cache. Even if you have 16GB of RAM, having a swap partition can be beneficial ..."
Jan 25, 2018 | itsfoss.com

27 Comments

How much should the swap size be? Should the swap be double the RAM size, or should it be half of the RAM size? Do I need swap at all if my system has several GBs of RAM? Perhaps these are the most commonly asked questions about choosing swap size while installing Linux. It's nothing new. There has always been a lot of confusion around swap size.

For a long time, the recommended swap size was double the RAM size, but that golden rule is not applicable to modern computers anymore. We have systems with RAM sizes up to 128 GB; many old computers don't even have that much hard disk space.

... ... ...

Swap acts as a breather to your system when the RAM is exhausted. What happens here is that when the RAM is exhausted, your Linux system uses part of the hard disk memory and allocates it to the running application.

That sounds cool. This means if you allocate like 50GB of swap size, your system can run hundreds or perhaps thousands of applications at the same time? WRONG!

You see, the speed matters here. RAM accesses data in the order of nanoseconds. An SSD accesses data in microseconds, while a normal hard disk accesses data in milliseconds. This means that RAM is 1000 times faster than an SSD and 100,000 times faster than the usual HDD.

If an application relies too much on the swap, its performance will degrade as it cannot access the data at the same speed as it would have in RAM. So instead of taking 1 second for a task, it may take several minutes to complete the same task. It will leave the application almost useless. This is known as thrashing in computing terms.

In other words, a little swap is helpful. A lot of it will be of no good use.

Why is swap needed?

There are several reasons why you would need swap.

... ... ...

Can you use Linux without swap?

Yes, you can, especially if your system has plenty of RAM. But as explained in the previous section, a little bit of swap is always advisable.
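Before deciding, it helps to look at what the system is actually doing. These standard commands (a general illustration, not taken from the article) show the active swap devices and current usage:

$ swapon --show     # list active swap partitions/files and their sizes
$ free -h           # RAM and swap usage in human-readable units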

How much should be the swap size?

... ... ...

If you go by Red Hat's suggestion , they recommend a swap size of 20% of RAM for modern systems (i.e. 4GB or higher RAM).

CentOS has a different recommendation for the swap partition size . It suggests swap size to be:

Ubuntu has an entirely different perspective on the swap size as it takes hibernation into consideration. If you need hibernation, a swap of the size of RAM becomes necessary for Ubuntu. Otherwise, it recommends:

... ... ...

Jaden

So many people (including this article) are misinformed about the Linux swap algorithm. It doesn't just check if your RAM reaches a certain usage point. It's incredibly complicated. Linux will swap even if you are using only 20-50% of your RAM. Inactive processes are often swapped and swapping inactive processes makes more room for buffer and cache. Even if you have 16GB of RAM, having a swap partition can be beneficial (especially if hibernating)

kaylee

I have 4 gigs of RAM on an old laptop running Cinnamon. Going by this, it is set at 60 (what does 60 mean, and would 10 be better?). I do a little work with Blender (VSE and just starting to mess with 3D text). Should I change it to 10?
Thanks

a. First check your current swappiness value. Type in the terminal (use copy/paste):

cat /proc/sys/vm/swappiness

Press Enter.

The result will probably be 60.

b. To change the swappiness into a more sensible setting, type in the terminal (use copy/paste to avoid typo's):

gksudo xed /etc/sysctl.conf

Press Enter.

Now a text file opens. Scroll to the bottom of that text file and add your swappiness parameter to override the default. Copy/paste the following two lines:

# Decrease swap usage to a more reasonable level
vm.swappiness=10

c. Save and close the text file. Then reboot your computer.

DannyB

I have 32 GB of memory. Since I use SSD and no actual hard drive, having Swap would add wear to my SSD. For more than two years now I have used Linux Mint with NO SWAP.

Rationale: a small 2 GB of extra "cushion" doesn't really matter. If programs that misbehave use up 32 GB, then they'll use up 34 GB. If I had 32 GB of SWAP for a LOT of cushion, then a misbehaving program is likely to use it all up anyway.

In practice I have NEVER had a problem with 32 GB with no swap at all. At install time I made the decision to try this (which has been great in hindsight) knowing that if I really did need swap later, I could always configure a swap FILE instead of a swap PARTITION.

But I've never needed to and have never looked back at that decision to use no swap. I would recommend it.

John_Betong

No swap on Ubuntu 17.04 I am pleased to say http://www.omgubuntu.co.uk/2016/12/ubuntu-17-04-drops-swaps-swap-partitions-swap-files

Yerry Sherry

A very good explanation why you SHOULD use swap: https://chrisdown.name/2018/01/02/in-defence-of-swap.htm

[Jan 14, 2018] How to remount filesystem in read write mode under Linux

Jan 14, 2018 | kerneltalks.com

Most of the time, on newly created file systems or NFS filesystems, we see an error like the one below:

root@kerneltalks # touch file1
touch: cannot touch 'file1': Read-only file system

This is because the file system is mounted as read only. In such a scenario you have to mount it in read-write mode. Before that, we will see how to check if the file system is mounted in read-only mode, and then we will get to how to remount it as a read-write filesystem.


How to check if file system is read only

To confirm that the file system is mounted in read-only mode, use the command below:

# cat /proc/mounts | grep datastore
/dev/xvdf /datastore ext3 ro,seclabel,relatime,data=ordered 0 0

Grep for your mount point in the output of cat /proc/mounts and observe the third column, which shows all options used for the mounted file system. Here ro denotes the file system is mounted read-only.

You can also get these details using the mount -v command:

root@kerneltalks # mount -v | grep datastore
/dev/xvdf on /datastore type ext3 (ro,relatime,seclabel,data=ordered)

In this output, the file system options are listed in parentheses in the last column.


Re-mount file system in read-write mode

To remount the file system in read-write mode, use the command below:

root@kerneltalks # mount -o remount,rw /datastore
root@kerneltalks # mount -v | grep datastore
/dev/xvdf on /datastore type ext3 (rw,relatime,seclabel,data=ordered)

Observe that after remounting, the option ro changed to rw . The file system is now mounted read-write and you can write files to it.

Note: It is recommended to fsck the file system before remounting it.

You can check the file system by running fsck on its volume.

root@kerneltalks # df -h /datastore
Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/xvda2      10G   881M  9.2G   9%    /
root@kerneltalks # fsck /dev/xvdf
fsck from util-linux 2.23.2
e2fsck 1.42.9 (28-Dec-2013)
/dev/xvdf: clean, 12/655360 files, 79696/2621440 blocks

Sometimes corrections need to be made to the file system, which requires a reboot to make sure no processes are accessing the file system.

[Jan 14, 2018] How to Install Snipe-IT Asset Management Software on Debian 9

Jan 14, 2018 | www.howtoforge.com

Snipe-IT is a free and open source IT assets management web application that can be used for tracking licenses, accessories, consumables, and components. It is written in PHP language and uses MySQL to store its data. In this tutorial, we will learn how to install Snipe-IT on Debian 9 server.

[Jan 14, 2018] Linux yes Command Tutorial for Beginners (with Examples)

Jan 14, 2018 | www.howtoforge.com

You can see that the user has to type 'y' for each query. It's in situations like these that yes can help. For the above scenario specifically, you can use yes in the following way:

yes | rm -ri test

Q3. Is there any use of yes when it's used alone?

Yes, there's at least one use: to test how well a computer system handles high load, because the tool utilizes 100% of the processor on systems that have a single processor. If you want to apply this test on a system with multiple processors, you need to run one yes process for each processor, as sketched below.
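A rough way to do that from the shell (a sketch; it will pin every core until you stop the processes):

$ for i in $(seq "$(nproc)"); do yes > /dev/null & done
$ killall yes       # stop the test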

[Jan 14, 2018] Working with Vim Editor Advanced concepts

Jan 14, 2018 | linuxtechlab.com

Opening multiple files with VI/VIM editor

To open multiple files, the command is the same as for a single file; we just add the other file names as well.

$ vi file1 file2 file3

Now to browse to the next file, we can use

$ :n

or we can also use

$ :e filename

Run external commands inside the editor

We can run external Linux/Unix commands from inside the vi editor, i.e. without exiting the editor. To issue a command from the editor, go back to Command Mode if in Insert mode & use the BANG, i.e. '!', followed by the command to run. The syntax for running a command is,

$ :! command

An example for this would be

$ :! df -H

Searching for a pattern

To search for a word or pattern in the text file, we use the following two commands in command mode: '/' to search forward and '?' to search backward.

Both of these commands are used for the same purpose, the only difference being the direction they search in. An example would be,

$ /search_pattern (searches forward, towards the end of the file)

$ ?search_pattern (searches backward, towards the beginning of the file)

Searching & replacing a pattern

We might be required to search for & replace a word or a pattern in our text files. Rather than finding each occurrence of the word in the whole text file & replacing it by hand, we can issue a command from command mode to replace the word automatically. The syntax for search & replace across the whole file is,

$ :%s/pattern_to_be_found/New_pattern/g

Suppose we want to find the word "alpha" & replace it with the word "beta" everywhere in the file; the command would be

$ :%s/alpha/beta/g

If we want to replace only the first occurrence of the word "alpha" on each line, we drop the trailing g:

$ :%s/alpha/beta/

(Without the leading %, the substitution applies only to the current line.)

Using Set commands

We can also customize the behaviour, the and feel of the vi/vim editor by using the set command. Here is a list of some options that can be use set command to modify the behaviour of vi/vim editor,

$ :set ic ignores case while searching

$ :set smartcase enforces case-sensitive search when the pattern contains uppercase letters (used together with ic)

$ :set nu displays line numbers at the beginning of each line

$ :set hlsearch highlights the matching words

$ :set ro makes the buffer read-only

$ :set term prints the terminal type

$ :set ai sets auto-indent

$ :set noai unsets auto-indent

Some other commands to modify the vi editor are,

$ :colorscheme is used to change the color scheme for the editor. (for VIM editor only)

$ :syntax on will turn on syntax coloring for .xml, .html files, etc. (for VIM editor only)
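These set commands last only for the current session. To make them permanent, you can put the same lines (without the leading colon) into ~/.vimrc, for example:

set ignorecase
set smartcase
set number
set hlsearch
syntax on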

This completes our tutorial; do mention your queries/questions or suggestions in the comment box below.

[Jan 14, 2018] Learn to use Wget command with 12 examples

Jan 14, 2018 | linuxtechlab.com

Downloading file & storing with a different name

If we want to save the downloaded file with a different name than its default name, we can use the '-O' parameter with the wget command to do so,

$ wget -O nagios_latest https://downloads.sourceforge.net/project/nagios/nagios-4.x/nagios-4.3.1/nagios-4.3.1.tar.gz?r=&ts=1489637334&use_mirror=excellmedia

Replicate whole website

If you need to download all contents of a website, you can do so by using '--mirror' parameter,

$ wget --mirror -p --convert-links -P /home/dan xyz.com

Here, wget --mirror is the command to download the website,

-p, will download all files necessary to display HTML files properly,

--convert-links, will convert links in documents for viewing,

-P /home/dan, will save the file in /home/dan directory.

Download only a certain type of files

To download only files of a certain format type, use the '-r -A' parameters,

$ wget -r -A.txt Website_url

Exclude a certain file type

While downloading a website, if you don't want to download a certain file type you can do so by using the '--reject' parameter,

$ wget --reject=png Website_url
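The options combine as you would expect. For instance, mirroring a site while skipping images might look like this (the URL and path are placeholders):

$ wget --mirror -p --convert-links --reject=png,jpg,gif -P /home/dan xyz.com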

[Jan 14, 2018] Sysadmin Tips on Preparing for Vacation by Kyle Rankin

Notable quotes:
"... If you can, freeze changes in the weeks leading up to your vacation. Try to encourage other teams to push off any major changes until after you get back. ..."
"... Check for any systems about to hit a disk warning threshold and clear out space. ..."
"... Make sure all of your backup scripts are working and all of your backups are up to date. ..."
Jan 11, 2018 | www.linuxjournal.com

... ... ...

If you do need to take your computer, I highly recommend making a full backup before the trip. Your computer is more likely to be lost, stolen or broken while traveling than when sitting safely at the office, so I always take a backup of my work machine before a trip. Even better than taking a backup, leave your expensive work computer behind and use a cheaper more disposable machine for travel and just restore your important files and settings for work on it before you leave and wipe it when you return. If you decide to go the disposable computer route, I recommend working one or two full work days on this computer before the vacation to make sure all of your files and settings are in place.

Documentation

Good documentation is the best way to reduce or eliminate how much you have to step in when you aren't on call, whether you're on vacation or not. Everything from routine procedures to emergency response should be documented and kept up to date. Honestly, this falls under standard best practices as a sysadmin, so it's something you should have whether or not you are about to go on vacation.

One saying about documentation is that if something is documented in two places, one of them will be out of date. Even if you document something only in one place, there's a good chance it is out of date unless you perform routine maintenance. It's a good practice to review your documentation from time to time and update it where necessary and before a vacation is a particularly good time to do it. If you are the only person that knows about the new way to perform a procedure, you should make sure your documentation covers it.

Finally, have your team maintain a page to capture anything that happens while you are gone that they want to tell you about when you get back. If you are the main maintainer of a particular system, but they had to perform some emergency maintenance of it while you were gone, that's the kind of thing you'd like to know about when you get back. If there's a central place for the team to capture these notes, they will be more likely to write things down as they happen and less likely to forget about things when you get back.

Stable State

The more stable your infrastructure is before you leave and the more stable it stays while you are gone, the less likely you'll be disturbed on your vacation. Right before a vacation is a terrible time to make a major change to critical systems. If you can, freeze changes in the weeks leading up to your vacation. Try to encourage other teams to push off any major changes until after you get back.

Before a vacation is also a great time to perform any preventative maintenance on your systems. Check for any systems about to hit a disk warning threshold and clear out space. In general, if you collect trending data, skim through it for any resources that are trending upward that might go past thresholds while you are gone. If you have any tasks that might add extra load to your systems while you are gone, pause or postpone them if you can. Make sure all of your backup scripts are working and all of your backups are up to date.

Emergency Contact Methods

Although it would be great to unplug completely while on vacation, there's a chance that someone from work might want to reach you in an emergency. Depending on where you plan to travel, some contact options may work better than others. For instance, some cell-phone plans that work while traveling might charge high rates for calls, but text messages and data bill at the same rates as at home.

... ... ... Kyle Rankin is senior security and infrastructure architect, the author of many books including Linux Hardening in Hostile Networks, DevOps Troubleshooting and The Official Ubuntu Server Book, and a columnist for Linux Journal. Follow him @kylerankin

[Jan 14, 2018] Linux Filesystem Events with inotify by Charles Fisher

Notable quotes:
"... Lukas Jelinek is the author of the incron package that allows users to specify tables of inotify events that are executed by the master incrond process. Despite the reference to "cron", the package does not schedule events at regular intervals -- it is a tool for filesystem events, and the cron reference is slightly misleading. ..."
"... The incron package is available from EPEL ..."
Jan 08, 2018 | www.linuxjournal.com

Triggering scripts with incron and systemd.

It is, at times, important to know when things change in the Linux OS. The uses to which systems are placed often include high-priority data that must be processed as soon as it is seen. The conventional method of finding and processing new file data is to poll for it, usually with cron. This is inefficient, and it can tax performance unreasonably if too many polling events are forked too often.

Linux has an efficient method for alerting user-space processes to changes impacting files of interest. The inotify Linux system calls were first discussed here in Linux Journal in a 2005 article by Robert Love who primarily addressed the behavior of the new features from the perspective of C.

However, there also are stable shell-level utilities and new classes of monitoring dæmons for registering filesystem watches and reporting events. Linux installations using systemd also can access basic inotify functionality with path units. The inotify interface does have limitations -- it can't monitor remote, network-mounted filesystems (that is, NFS); it does not report the userid involved in the event; it does not work with /proc or other pseudo-filesystems; and mmap() operations do not trigger it, among other concerns. Even with these limitations, it is a tremendously useful feature.

This article completes the work begun by Love and gives everyone who can write a Bourne shell script or set a crontab the ability to react to filesystem changes.

The inotifywait Utility

Working under Oracle Linux 7 (or similar versions of Red Hat/CentOS/Scientific Linux), the inotify shell tools are not installed by default, but you can load them with yum:

 # yum install inotify-tools
Loaded plugins: langpacks, ulninfo
ol7_UEKR4                                      | 1.2 kB   00:00
ol7_latest                                     | 1.4 kB   00:00
Resolving Dependencies
--> Running transaction check
---> Package inotify-tools.x86_64 0:3.14-8.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==============================================================
Package         Arch       Version        Repository     Size
==============================================================
Installing:
inotify-tools   x86_64     3.14-8.el7     ol7_latest     50 k

Transaction Summary
==============================================================
Install  1 Package

Total download size: 50 k
Installed size: 111 k
Is this ok [y/d/N]: y
Downloading packages:
inotify-tools-3.14-8.el7.x86_64.rpm               |  50 kB   00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Installing : inotify-tools-3.14-8.el7.x86_64                 1/1
  Verifying  : inotify-tools-3.14-8.el7.x86_64                 1/1

Installed:
  inotify-tools.x86_64 0:3.14-8.el7

Complete!

The package will include two utilities (inotifywait and inotifywatch), documentation and a number of libraries. The inotifywait program is of primary interest.

Some derivatives of Red Hat 7 may not include inotify-tools in their base repositories. If you find it missing, you can obtain it from Fedora's EPEL repository, either by downloading the inotify-tools RPM for manual installation or by adding the EPEL repository to yum.
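
For example, on CentOS 7 the epel-release package is carried in the default "extras" repository, so enabling EPEL and installing the tools can be as simple as the commands below (a sketch; on Red Hat or Oracle Linux the EPEL release RPM may need to be installed from its download URL instead):

# yum install epel-release
# yum install inotify-tools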

Any user on the system who can launch a shell may register watches -- no special privileges are required to use the interface. This example watches the /tmp directory:

$ inotifywait -m /tmp
Setting up watches.
Watches established.

If another session on the system performs a few operations on the files in /tmp:

$ touch /tmp/hello
$ cp /etc/passwd /tmp
$ rm /tmp/passwd
$ touch /tmp/goodbye
$ rm /tmp/hello /tmp/goodbye

those changes are immediately visible to the user running inotifywait:

/tmp/ CREATE hello
/tmp/ OPEN hello
/tmp/ ATTRIB hello
/tmp/ CLOSE_WRITE,CLOSE hello
/tmp/ CREATE passwd
/tmp/ OPEN passwd
/tmp/ MODIFY passwd
/tmp/ CLOSE_WRITE,CLOSE passwd
/tmp/ DELETE passwd
/tmp/ CREATE goodbye
/tmp/ OPEN goodbye
/tmp/ ATTRIB goodbye
/tmp/ CLOSE_WRITE,CLOSE goodbye
/tmp/ DELETE hello
/tmp/ DELETE goodbye

A few relevant sections of the manual page explain what is happening:

$ man inotifywait | col -b | sed -n '/diagnostic/,/helpful/p'
  inotifywait will output diagnostic information on standard error and
  event information on standard output. The event output can be config-
  ured, but by default it consists of lines of the following form:

  watched_filename EVENT_NAMES event_filename


  watched_filename
    is the name of the file on which the event occurred. If the
    file is a directory, a trailing slash is output.

  EVENT_NAMES
    are the names of the inotify events which occurred, separated by
    commas.

  event_filename
    is output only when the event occurred on a directory, and in
    this case the name of the file within the directory which caused
    this event is output.

    By default, any special characters in filenames are not escaped
    in any way. This can make the output of inotifywait difficult
    to parse in awk scripts or similar. The --csv and --format
    options will be helpful in this case.

It also is possible to filter the output by registering particular events of interest with the -e option, the list of which is shown here:

access create move_self
attrib delete moved_to
close_write delete_self moved_from
close_nowrite modify open
close move unmount
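
For example, this invocation (the watched paths are arbitrary) restricts the watch to create and close_write events and uses the --format option mentioned in the manual excerpt above to produce output that is easier to parse:

$ inotifywait -m -e create,close_write --format '%w|%e|%f' /tmp /var/tmp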

A common application is testing for the arrival of new files. Since inotify must be given the name of an existing filesystem object to watch, the directory containing the new files is provided. A trigger of interest is also easy to provide -- new files should be complete and ready for processing when the close_write trigger fires. Below is an example script to watch for these events:

#!/bin/sh
unset IFS                                 # default of space, tab and nl
                                          # Wait for filesystem events
inotifywait -m -e close_write \
   /tmp /var/tmp /home/oracle/arch-orcl/ |
while read dir op file
do [[ "${dir}" == '/tmp/' && "${file}" == *.txt ]] &&
      echo "Import job should start on $file ($dir $op)."

   [[ "${dir}" == '/var/tmp/' && "${file}" == CLOSE_WEEK*.txt ]] &&
      echo Weekly backup is ready.

   [[ "${dir}" == '/home/oracle/arch-orcl/' && "${file}" == *.ARC ]]
&&
      su - oracle -c 'ORACLE_SID=orcl ~oracle/bin/log_shipper' &

   [[ "${dir}" == '/tmp/' && "${file}" == SHUT ]] && break

   ((step+=1))
done

echo We processed $step events.

There are a few problems with the script as presented -- of all the available shells on Linux, only ksh93 (that is, the AT&T Korn shell) will report the "step" variable correctly at the end of the script. All the other shells will report this variable as null.

The reason for this behavior can be found in a brief explanation on the manual page for Bash: "Each command in a pipeline is executed as a separate process (i.e., in a subshell)." The MirBSD clone of the Korn shell has a slightly longer explanation:

# man mksh | col -b | sed -n '/The parts/,/do so/p'
  The parts of a pipeline, like below, are executed in subshells. Thus,
  variable assignments inside them fail. Use co-processes instead.

  foo | bar | read baz          # will not change $baz
  foo | bar |& read -p baz      # will, however, do so

And, the pdksh documentation in Oracle Linux 5 (from which MirBSD mksh emerged) has several more mentions of the subject:

General features of at&t ksh88 that are not (yet) in pdksh:
  - the last command of a pipeline is not run in the parent shell
  - `echo foo | read bar; echo $bar' prints foo in at&t ksh, nothing
    in pdksh (ie, the read is done in a separate process in pdksh).
  - in pdksh, if the last command of a pipeline is a shell builtin, it
    is not executed in the parent shell, so "echo a b | read foo bar"
    does not set foo and bar in the parent shell (at&t ksh will).
    This may get fixed in the future, but it may take a while.

$ man pdksh | col -b | sed -n '/BTW, the/,/aware/p'
  BTW, the most frequently reported bug is
    echo hi | read a; echo $a   # Does not print hi
  I'm aware of this and there is no need to report it.

This behavior is easy enough to demonstrate -- running the script above with the default bash shell and providing a sequence of example events:

$ cp /etc/passwd /tmp/newdata.txt
$ cp /etc/group /var/tmp/CLOSE_WEEK20170407.txt
$ cp /etc/passwd /tmp/SHUT

gives the following script output:

# ./inotify.sh
Setting up watches.
Watches established.
Import job should start on newdata.txt (/tmp/ CLOSE_WRITE,CLOSE).
Weekly backup is ready.
We processed events.

Examining the process list while the script is running, you'll also see two shells, one forked for the control structure:

$ function pps { typeset a IFS=\| ; ps ax | while read a
do case $a in *$1*|+([!0-9])) echo $a;; esac; done }


$ pps inot
  PID TTY      STAT   TIME COMMAND
 3394 pts/1    S+     0:00 /bin/sh ./inotify.sh
 3395 pts/1    S+     0:00 inotifywait -m -e close_write /tmp /var/tmp
 3396 pts/1    S+     0:00 /bin/sh ./inotify.sh

As it was manipulated in a subshell, the "step" variable above was null when control flow reached the echo. Switching the script from #!/bin/sh to #!/bin/ksh93 will correct the problem, and only one shell process will be seen:

# ./inotify.ksh93
Setting up watches.
Watches established.
Import job should start on newdata.txt (/tmp/ CLOSE_WRITE,CLOSE).
Weekly backup is ready.
We processed 2 events.


$ pps inot
  PID TTY      STAT   TIME COMMAND
 3583 pts/1    S+     0:00 /bin/ksh93 ./inotify.sh
 3584 pts/1    S+     0:00 inotifywait -m -e close_write /tmp /var/tmp

Although ksh93 behaves properly and in general handles scripts far more gracefully than all of the other Linux shells, it is rather large:

$ ll /bin/[bkm]+([aksh93]) /etc/alternatives/ksh
-rwxr-xr-x. 1 root root  960456 Dec  6 11:11 /bin/bash
lrwxrwxrwx. 1 root root      21 Apr  3 21:01 /bin/ksh ->
                                               /etc/alternatives/ksh
-rwxr-xr-x. 1 root root 1518944 Aug 31  2016 /bin/ksh93
-rwxr-xr-x. 1 root root  296208 May  3  2014 /bin/mksh
lrwxrwxrwx. 1 root root      10 Apr  3 21:01 /etc/alternatives/ksh ->
                                                    /bin/ksh93

The mksh binary is the smallest of the Bourne implementations above (some of these shells may be missing on your system, but you can install them with yum). For a long-term monitoring process, mksh is likely the best choice for reducing both processing and memory footprint, and it does not launch multiple copies of itself while idle, assuming that a coprocess is used. Converting the script to use a Korn coprocess that is friendly to mksh is not difficult:

#!/bin/mksh
unset IFS                              # default of space, tab and nl
                                       # Wait for filesystem events
inotifywait -m -e close_write \
   /tmp/ /var/tmp/ /home/oracle/arch-orcl/ \
   2>/dev/null |&                      # Discard diagnostics; launch as Korn coprocess

while read -p dir op file              # Read from Korn coprocess
do [[ "${dir}" == '/tmp/' && "${file}" == *.txt ]] &&
      print "Import job should start on $file ($dir $op)."

   [[ "${dir}" == '/var/tmp/' && "${file}" == CLOSE_WEEK*.txt ]] &&
      print Weekly backup is ready.

   [[ "${dir}" == '/home/oracle/arch-orcl/' && "${file}" == *.ARC ]]
&&
      su - oracle -c 'ORACLE_SID=orcl ~oracle/bin/log_shipper' &

   [[ "${dir}" == '/tmp/' && "${file}" == SHUT ]] && break

   ((step+=1))
done

echo We processed $step events.

Note that the Korn and Bolsky reference on the Korn shell outlines the following requirements for a program operating as a coprocess:

Caution: The co-process must send each output message to standard output, terminate each message with a newline, and flush its standard output whenever it writes a message.

An fflush(NULL) is found in the main processing loop of the inotifywait source, and these requirements appear to be met.

The mksh version of the script is the most reasonable compromise between efficient use and correct behavior, and I have explained it at some length here to save readers trouble and frustration -- in most of the Bourne family, it is important to avoid control structures that execute in subshells. Hopefully, all of these ersatz shells will someday fix this basic flaw and implement the Korn behavior correctly.
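
As an aside (this sketch is not from the original article): if switching shells is not an option, bash can avoid the subshell by feeding the loop through process substitution instead of a pipeline, so the loop and its counter run in the parent shell:

#!/bin/bash
# Minimal sketch: the same close_write watch, but the while loop is fed
# by process substitution, so "step" is still set after the loop in bash.
unset IFS
step=0
while read dir op file
do [[ "$dir" == '/tmp/' && "$file" == SHUT ]] && break
   ((step+=1))
done < <(inotifywait -m -e close_write /tmp /var/tmp 2>/dev/null)
echo "We processed $step events."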

A Practical Application -- Oracle Log Shipping

Oracle databases that are configured for hot backups produce a stream of "archived redo log files" that are used for database recovery. These are the most critical backup files that are produced in an Oracle database.

These files are numbered sequentially and are written to a log directory configured by the DBA. An inotifywatch can trigger activities to compress, encrypt and/or distribute the archived logs to backup and disaster recovery servers for safekeeping. You can configure Oracle RMAN to do most of these functions, but the OS tools are more capable, flexible and simpler to use.

There are a number of important design parameters for a script handling archived logs: only one copy of the handler should run at a time (so a lock is required), the current log should not be processed until the next one has appeared (ensuring the ARCH process has finished writing it), logs should be compressed before transfer, the network copy must be retried until it succeeds, and the number of the last processed log must be remembered between runs.

Given these design parameters, this is an implementation:

# cat ~oracle/archutils/process_logs

#!/bin/ksh93

set -euo pipefail
IFS=$'\n\t'  # http://redsymbol.net/articles/unofficial-bash-strict-mode/

(
 flock -n 9 || exit 1          # Critical section-allow only one process.

 ARCHDIR=~oracle/arch-${ORACLE_SID}

 APREFIX=${ORACLE_SID}_1_

 ASUFFIX=.ARC

 CURLOG=$(<~oracle/.curlog-$ORACLE_SID)

 File="${ARCHDIR}/${APREFIX}${CURLOG}${ASUFFIX}"

 [[ ! -f "$File" ]] && exit

 while [[ -f "$File" ]]
 do ((NEXTCURLOG=CURLOG+1))

    NextFile="${ARCHDIR}/${APREFIX}${NEXTCURLOG}${ASUFFIX}"

    [[ ! -f "$NextFile" ]] && sleep 60  # Ensure ARCH has finished

    nice /usr/local/bin/lzip -9q "$File"

    until scp "${File}.lz" "yourcompany.com:~oracle/arch-$ORACLE_SID"
    do sleep 5
    done

    CURLOG=$NEXTCURLOG

    File="$NextFile"
 done

 echo $CURLOG > ~oracle/.curlog-$ORACLE_SID

) 9>~oracle/.processing_logs-$ORACLE_SID

The above script can be executed manually for testing even while the inotify handler is running, as the flock protects it.
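
For example, to exercise it by hand as root (using the ORACLE_SID from these examples):

# su - oracle -c 'ORACLE_SID=orcl ~oracle/archutils/process_logs'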

A standby server, or a DataGuard server in primitive standby mode, can apply the archived logs at regular intervals. The script below forces a 12-hour delay in log application for the recovery of dropped or damaged objects, so inotify cannot be easily used in this case -- cron is a more reasonable approach for delayed file processing, and a run every 20 minutes will keep the standby at the desired recovery point:

# cat ~oracle/archutils/delay-lock.sh

#!/bin/ksh93

(
 flock -n 9 || exit 1              # Critical section-only one process.

 WINDOW=43200                      # 12 hours

 LOG_DEST=~oracle/arch-$ORACLE_SID

 OLDLOG_DEST=$LOG_DEST-applied

 function fage { print $(( $(date +%s) - $(stat -c %Y "$1") ))
  } # File age in seconds - Requires GNU extended date & stat

 cd $LOG_DEST

 of=$(ls -t | tail -1)             # Oldest file in directory

 [[ -z "$of" || $(fage "$of") -lt $WINDOW ]] && exit

 for x in $(ls -rt)                    # Order by ascending file mtime
 do if [[ $(fage "$x") -ge $WINDOW ]]
    then y=$(basename $x .lz)          # lzip compression is optional

         [[ "$y" != "$x" ]] && /usr/local/bin/lzip -dkq "$x"

         $ORACLE_HOME/bin/sqlplus '/ as sysdba' > /dev/null 2>&1 <<-EOF
                recover standby database;
                $LOG_DEST/$y
                cancel
                quit
                EOF

         [[ "$y" != "$x" ]] && rm "$y"

         mv "$x" $OLDLOG_DEST
    fi
              

 done
) 9> ~oracle/.recovering-$ORACLE_SID
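
To schedule the handler every 20 minutes as described above, a crontab entry for the oracle user along these lines will do (the script path follows the layout used in these examples):

*/20 * * * * /home/oracle/archutils/delay-lock.sh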

I've covered these specific examples here because they introduce tools to control concurrency, which is a common issue when using inotify, and they advance a few features that increase reliability and minimize storage requirements. Hopefully enthusiastic readers will introduce many improvements to these approaches.

The incron System

Lukas Jelinek is the author of the incron package that allows users to specify tables of inotify events that are executed by the master incrond process. Despite the reference to "cron", the package does not schedule events at regular intervals -- it is a tool for filesystem events, and the cron reference is slightly misleading.

The incron package is available from EPEL. If you have installed the repository, you can load it with yum:

# yum install incron
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package incron.x86_64 0:0.5.10-8.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=================================================================
 Package       Arch       Version           Repository    Size
=================================================================
Installing:
 incron        x86_64     0.5.10-8.el7      epel          92 k

Transaction Summary
==================================================================
Install  1 Package

Total download size: 92 k
Installed size: 249 k
Is this ok [y/d/N]: y
Downloading packages:
incron-0.5.10-8.el7.x86_64.rpm                      |  92 kB   00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : incron-0.5.10-8.el7.x86_64                          1/1
  Verifying  : incron-0.5.10-8.el7.x86_64                          1/1

Installed:
  incron.x86_64 0:0.5.10-8.el7

Complete!

On a systemd distribution with the appropriate service units, you can start and enable incron at boot with the following commands:

# systemctl start incrond
# systemctl enable incrond
Created symlink from
   /etc/systemd/system/multi-user.target.wants/incrond.service
to /usr/lib/systemd/system/incrond.service.

In the default configuration, any user can establish incron schedules. The incrontab format uses three fields:

<path> <mask> <command>

Below is an example entry that was set with the -e option:

$ incrontab -e        #vi session follows

$ incrontab -l
/tmp/ IN_ALL_EVENTS /home/luser/myincron.sh $@ $% $#

You can write a simple script and mark it executable. In the table entry above, incron expands $@ to the watched path, $% to the event name(s) and $# to the name of the file that triggered the event, so they arrive in the script as $1, $2 and $3:

$ cat myincron.sh
#!/bin/sh

# $1 = watched path ($@), $2 = event name(s) ($%), $3 = triggering file ($#)
echo -e "path: $1 op: $2 \t file: $3" >> ~/op

$ chmod 755 myincron.sh

Then, if you repeat the /tmp file manipulations shown earlier in this article, the script will record the following output:

$ cat ~/op

path: /tmp/ op: IN_ATTRIB        file: hello
path: /tmp/ op: IN_CREATE        file: hello
path: /tmp/ op: IN_OPEN          file: hello
path: /tmp/ op: IN_CLOSE_WRITE   file: hello
path: /tmp/ op: IN_OPEN          file: passwd
path: /tmp/ op: IN_CLOSE_WRITE   file: passwd
path: /tmp/ op: IN_MODIFY        file: passwd
path: /tmp/ op: IN_CREATE        file: passwd
path: /tmp/ op: IN_DELETE        file: passwd
path: /tmp/ op: IN_CREATE        file: goodbye
path: /tmp/ op: IN_ATTRIB        file: goodbye
path: /tmp/ op: IN_OPEN          file: goodbye
path: /tmp/ op: IN_CLOSE_WRITE   file: goodbye
path: /tmp/ op: IN_DELETE        file: hello
path: /tmp/ op: IN_DELETE        file: goodbye

While the IN_CLOSE_WRITE event on a directory object is usually of greatest interest, most of the standard inotify events are available within incron, which also offers several unique amalgams:

$ man 5 incrontab | col -b | sed -n '/EVENT SYMBOLS/,/child process/p'

EVENT SYMBOLS

These basic event mask symbols are defined:

IN_ACCESS          File was accessed (read) (*)
IN_ATTRIB          Metadata changed (permissions, timestamps, extended
                   attributes, etc.) (*)
IN_CLOSE_WRITE     File opened for writing was closed (*)
IN_CLOSE_NOWRITE   File not opened for writing was closed (*)
IN_CREATE          File/directory created in watched directory (*)
IN_DELETE          File/directory deleted from watched directory (*)
IN_DELETE_SELF     Watched file/directory was itself deleted
IN_MODIFY          File was modified (*)
IN_MOVE_SELF       Watched file/directory was itself moved
IN_MOVED_FROM      File moved out of watched directory (*)
IN_MOVED_TO        File moved into watched directory (*)
IN_OPEN            File was opened (*)

When monitoring a directory, the events marked with an asterisk (*)
above can occur for files in the directory, in which case the name
field in the returned event data identifies the name of the file within
the directory.

The IN_ALL_EVENTS symbol is defined as a bit mask of all of the above
events. Two additional convenience symbols are IN_MOVE, which is a com-
bination of IN_MOVED_FROM and IN_MOVED_TO, and IN_CLOSE, which combines
IN_CLOSE_WRITE and IN_CLOSE_NOWRITE.

The following further symbols can be specified in the mask:

IN_DONT_FOLLOW     Don't dereference pathname if it is a symbolic link
IN_ONESHOT         Monitor pathname for only one event
IN_ONLYDIR         Only watch pathname if it is a directory

Additionally, there is a symbol which doesn't appear in the inotify sym-
bol set. It is IN_NO_LOOP. This symbol disables monitoring events until
the current one is completely handled (until its child process exits).

The incron system likely presents the most comprehensive interface to inotify of all the tools researched and listed here. Additional configuration options can be set in /etc/incron.conf to tweak incron's behavior for those that require a non-standard configuration.
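
Tying this back to the Oracle example, the archive directory could be watched with an entry such as the following in the oracle user's incrontab (a sketch reusing the log_shipper handler invoked earlier; note that, unlike the su invocation above, incron will not set ORACLE_SID, so the handler must establish its own environment). The IN_NO_LOOP flag holds off further events until the handler exits:

/home/oracle/arch-orcl/ IN_CLOSE_WRITE,IN_NO_LOOP /home/oracle/bin/log_shipper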

Path Units under systemd

When your Linux installation is running systemd as PID 1, limited inotify functionality is available through "path units", as discussed in a lighthearted article by Paul Brown at OCS-Mag.

The relevant manual page has useful information on the subject:

$ man systemd.path | col -b | sed -n '/Internally,/,/systems./p'

Internally, path units use the inotify(7) API to monitor file systems.
Due to that, it suffers by the same limitations as inotify, and for
example cannot be used to monitor files or directories changed by other
machines on remote NFS file systems.

Note that when a systemd path unit spawns a shell script, $HOME and the bare tilde (~) operator for the owner's home directory may not be defined. Using the tilde operator with an explicit user name (for example, ~nobody/) does work, even when it names the very user running the script. The Oracle script above is explicit and never references ~ without naming the target user, which is why it serves as the example here.

Using inotify triggers with systemd path units requires two files. The first file specifies the filesystem location of interest:

$ cat /etc/systemd/system/oralog.path

[Unit]
Description=Oracle Archivelog Monitoring
Documentation=http://docs.yourserver.com

[Path]
PathChanged=/home/oracle/arch-orcl/

[Install]
WantedBy=multi-user.target

The PathChanged parameter above roughly corresponds to the close-write event used in my previous direct inotify calls. The full collection of inotify events is not (currently) supported by systemd -- it is limited to PathExists, PathChanged and PathModified, which are described in man systemd.path.

The second file is a service unit describing a program to be executed. It must have the same name, but a different extension, as the path unit:

$ cat /etc/systemd/system/oralog.service

[Unit]
Description=Oracle Archivelog Monitoring
Documentation=http://docs.yourserver.com

[Service]
Type=oneshot
Environment=ORACLE_SID=orcl
ExecStart=/bin/sh -c '/root/process_logs >> /tmp/plog.txt 2>&1'

The oneshot parameter above tells systemd that the program it forks is expected to exit and should not be respawned automatically -- restarts come only from triggers delivered by the path unit. The ExecStart line above also captures the handler's output in a log file -- redirect it to /dev/null instead if logging is not needed.

Use systemctl start on the path unit to begin monitoring -- a common error is using it on the service unit, which will directly run the handler only once. Enable the path unit if the monitoring should survive a reboot.
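
With the two unit files above in place, that amounts to:

# systemctl daemon-reload
# systemctl start oralog.path
# systemctl enable oralog.path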

Although this limited functionality may be enough for some casual uses of inotify, it is a shame that the full functionality of inotifywait and incron are not represented here. Perhaps it will come in time.

Conclusion

Although the inotify tools are powerful, they do have limitations. To repeat them, inotify cannot monitor remote (NFS) filesystems; it cannot report the userid involved in a triggering event; it does not work with /proc or other pseudo-filesystems; mmap() operations do not trigger it; and the inotify queue can overflow resulting in lost events, among other concerns.

Even with these weaknesses, the efficiency of inotify is superior to most other approaches for immediate notifications of filesystem activity. It also is quite flexible, and although the close-write directory trigger should suffice for most usage, it has ample tools for covering special use cases.

In any event, it is productive to replace polling activity with inotify watches, and system administrators should be liberal in educating the user community that the classic crontab is not an appropriate place to check for new files. Recalcitrant users should be confined to Ultrix on a VAX until they develop sufficient appreciation for modern tools and approaches, which should result in more efficient Linux systems and happier administrators.

Sidenote: Archiving /etc/passwd

Tracking changes to the password file involves many different types of inotify triggering events. The vipw utility commonly will make changes to a temporary file, then clobber the original with it. This can be seen when the inode number changes:

# ll -i /etc/passwd
199720973 -rw-r--r-- 1 root root 3928 Jul  7 12:24 /etc/passwd

# vipw
[ make changes ]
You are using shadow passwords on this system.
Would you like to edit /etc/shadow now [y/n]? n

# ll -i /etc/passwd
203784208 -rw-r--r-- 1 root root 3956 Jul  7 12:24 /etc/passwd

The destruction and replacement of /etc/passwd even occurs with setuid binaries called by unprivileged users:

$ ll -i /etc/passwd
203784196 -rw-r--r-- 1 root root 3928 Jun 29 14:55 /etc/passwd

$ chsh
Changing shell for fishecj.
Password:
New shell [/bin/bash]: /bin/csh
Shell changed.

$ ll -i /etc/passwd
199720970 -rw-r--r-- 1 root root 3927 Jul  7 12:23 /etc/passwd

For this reason, all inotify triggering events should be considered when tracking this file. If there is concern with an inotify queue overflow (in which events are lost), then the OPEN, ACCESS and CLOSE_NOWRITE,CLOSE triggers likely can be immediately ignored.

All other inotify events on /etc/passwd might run the following script to version the changes into an RCS archive and mail them to an administrator:

#!/bin/sh

# This script tracks changes to the /etc/passwd file from inotify.
# Uses RCS for archiving. Watch for UID zero.

PWMAILS=Charlie.Root@openbsd.org

TPDIR=~/track_passwd

cd $TPDIR

if diff -q /etc/passwd $TPDIR/passwd
then exit                                         # they are the same
else sleep 5                                      # let passwd settle
     diff /etc/passwd $TPDIR/passwd 2>&1 |        # they are DIFFERENT
     mail -s "/etc/passwd changes $(hostname -s)" "$PWMAILS"
     cp -f /etc/passwd $TPDIR                     # copy for checkin

#    "SCCS, the source motel! Programs check in and never check out!"
#     -- Ken Thompson

     rcs -q -l passwd                            # lock the archive
     ci -q -m_ passwd                            # check in new ver
     co -q passwd                                # drop the new copy
fi > /dev/null 2>&1
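
One way to drive this script from inotify (an illustration, not part of the original article; the script name and location are assumptions) is a small mksh coprocess watching the /etc directory itself -- watching the directory rather than the file keeps the watch alive when vipw or chsh replace the passwd inode:

#!/bin/mksh
# Sketch: run the tracker whenever an event in /etc names the passwd file.
inotifywait -m /etc 2>/dev/null |&       # Korn coprocess

while read -p dir op file
do [[ "$file" == passwd ]] && /root/track_passwd.sh
done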

Here is an example email from the script for the above chsh operation:

-----Original Message-----
From: root [mailto:root@myhost.com]
Sent: Thursday, July 06, 2017 2:35 PM
To: Fisher, Charles J. <Charles.Fisher@myhost.com>;
Subject: /etc/passwd changes myhost

57c57
< fishecj:x:123:456:Fisher, Charles J.:/home/fishecj:/bin/bash
---
> fishecj:x:123:456:Fisher, Charles J.:/home/fishecj:/bin/csh

Further processing on the third column of /etc/passwd might detect UID zero (a root user) or other important user classes for emergency action. This might include a rollback of the file from RCS to /etc and/or SMS messages to security contacts.

Charles Fisher has an electrical engineering degree from the University of Iowa and works as a systems and database administrator for a Fortune 500 mining and manufacturing corporation.

[Dec 25, 2017] American Carnage by Brad Griffin

Notable quotes:
"... It tells me that the bottom line is that Christmas has become a harder season for White families. We are worse off because of BOTH social and economic liberalism which has only benefited an elite few. The bottom half of the White population is now in total disarray – drug addiction, demoralization, divorce, suicide, abortion, atomization, stagnant wages, declining household income and investments – and this dysfunction is creeping up the social ladder. The worst thing we can do is step on the accelerator. ..."
Dec 24, 2017 | www.unz.com

As we move into 2018, I am swinging away from the Republicans. I don't support the Paul Ryan "Better Way" agenda. I don't support neoliberal economics. I think we have been going in the wrong direction since the 1970s and don't want to continue going down this road.

  1. Opioid Deaths: As we all know, the opioid epidemic has become a national crisis and the White working class has been hit the hardest by it. It is a "sea of despair" out there.
  2. White Mortality: As the family crumbles, religion recedes in his life, and his job prospects dwindle, the middle-aged White working class man is turning to drugs, alcohol and suicide. The White suicide rate has soared since 2000.
  3. Median Household Income: The average household in the United States is poorer in 2017 than it was in 1997.
  4. Real GDP: Since the late 1990s, real GDP and real median household income have parted ways.
  5. Productivity and Real Wages: Since the 1970s, the minimum wage has parted ways with productivity gains in the US economy.
  6. Stock Market: Since 2000, the stock market has soared, but 10% of Americans own 80% of stocks. The top 1% owns 38% of stocks. In 2007, 3/4th of middle class households were invested in the stock market, but now only 50% are investors. Overall, 52% of Americans now own stocks, which is down from 65%. The average American has less than $1,000 in their combined checking and savings accounts.

Do you know what this tells me?

It tells me that the bottom line is that Christmas has become a harder season for White families. We are worse off because of BOTH social and economic liberalism which has only benefited an elite few. The bottom half of the White population is now in total disarray – drug addiction, demoralization, divorce, suicide, abortion, atomization, stagnant wages, declining household income and investments – and this dysfunction is creeping up the social ladder. The worst thing we can do is step on the accelerator.

Paul Ryan and his fellow conservatives look at this and conclude we need MORE freedom. We need lower taxes, more free trade, more deregulation, weaker unions, more immigration and less social safety net spending. He wants to follow up tax reform with entitlement reform in 2018. I can't help but see how this is going to make an already bad situation for the White working class even worse.

I'm not rightwing in the sense that these people are. I think their policies are harmful to the nation. I don't think they feel any sense of duty and obligation to the working class like we do. They believe in liberal abstractions and make an Ayn Rand fetish out of freedom whereas we feel a sense of solidarity with them grounded in race, ethnicity and culture which tempers class division. We recoil at the evisceration of the social fabric whereas conservatives celebrate this blind march toward plutocracy.

Do the wealthy need to own a greater share of the stock market? Do they need to own a greater share of our national wealth? Do we need to loosen up morals and the labor market? Do we need more White children growing up in financially stressed, broken homes on Christmas? Is the greatest problem facing the nation spending on anti-poverty programs? Paul Ryan and the True Cons think so.

Yeah, I don't think so. I also think it is a good thing right now that we aren't associated with the mainstream Right. In the long run, I bet this will pay off for us. I predict this platform they have been standing on for decades now, which they call the conservative base, is going to implode on them. Donald Trump was only the first sign that Atlas is about to shrug.

(Republished from Occidental Dissent by permission of author or representative)

[Dec 15, 2017] The Crisis Ahead The U.S. Is No Country for Older Men and Women

Notable quotes:
"... The U.S. has a retirement crisis on its hands, and with the far right controlling the executive branch and both houses of Congress, as well as dozens of state governments, things promise to grow immeasurably worse. ..."
"... It wasn't supposed to be this way. Past progressive presidents, notably Franklin D. Roosevelt and Lyndon B. Johnson, took important steps to make life more comfortable for aging Americans. FDR signed the Social Security Act of 1935 into law as part of his New Deal, and when LBJ passed Medicare in 1965, he established a universal health care program for those 65 and older. But the country has embraced a neoliberal economic model since the election of Ronald Reagan, and all too often, older Americans have been quick to vote for far-right Republicans antagonistic to the social safety net. ..."
"... Since then, Ryan has doubled down on his delusion that the banking sector can manage Social Security and Medicare more effectively than the federal government. Republican attacks on Medicare have become a growing concern: according to EBRI, only 38 percent of workers are confident the program will continue to provide the level of benefits it currently does. ..."
"... As 2017 winds down, Americans with health problems are still in the GOP's crosshairs -- this time because of so-called tax reform. The Tax Cuts and Jobs Act (both the House and Senate versions) includes provisions that would undermine Obamacare and cause higher health insurance premiums for older Americans. According to AARP, "Older adults ages 50-64 would be at particularly high risk under the proposal, facing average premium increases of up to $1,500 in 2019 as a result of the bill." ..."
"... Countless Americans who are unable to afford those steep premiums would lose their insurance. The CBO estimates that the Tax Cuts and Jobs Act would cause the number of uninsured under 65 to increase 4 million by 2019 and 13 million by 2027. The bill would also imperil Americans 65 and over by cutting $25 billion from Medicare . ..."
"... Analyzing W2 tax records in 2012, U.S. Census Bureau researchers Michael Gideon and Joshua Mitchell found that only 14 percent of private-sector employers in the U.S. were offering a 401(k) or similar retirement packages to their workers. That figure was thought to be closer to 40 percent, but Gideon and Mitchell discovered the actual number was considerably lower when smaller businesses were carefully analyzed, and that larger companies were more likely to offer 401(k) plans than smaller ones. ..."
"... Today, millions of Americans work in the gig economy who don't have full-time jobs or receive W2s, but instead receive 1099s for freelance work. ..."
"... The combination of stagnant wages and an increasingly high cost of living have been especially hellish for Americans who are trying to save for retirement. The United States' national minimum wage, a mere $7.25 per hour, doesn't begin to cover the cost of housing at a time when rents have soared nationwide. Never mind the astronomical prices in New York City, San Francisco or Washington, D.C. Median rents for one-bedroom apartments are as high as $1,010 per month in Atlanta, $960 per month in Baltimore, $860 per month in Jacksonville and $750 per month in Omaha, according to ApartmentList.com. ..."
"... yeah, Canada has a neoliberal infestation that is somewhere between the US and the UK. France has got one too, but it is less advanced. I'll enjoy my great healthcare, public transportation, and generous paid time off while I can. ..."
"... Europeans may scratch their heads, but they should recall their own histories and the long struggle to the universal benefits now enjoyed. Americans are far too complacent. This mildness is viewed by predators as weakness and the attacks will continue. ..."
"... Not sure if many of the readers here watch non-cable national broadcast news, but Pete Peterson and his foundation are as everpresent an advertiser as the pharma industry. Peterson is the strongest, best organized advocate for gutting social services, social security, and sending every last penny out of the tax-mule consumer's pocket toward wall street. The guy needs an equivalent counterpoint enemy. ..."
"... The social advantages that we still enjoy were fought in the streets, and on the "bricks" flowing with the participants blood. 8 hr. day; women's right to vote; ability and right for groups of laborers to organize; worker safety laws ..and so many others. There is no historical memory on how those rights were achieved. We are slowly slipping into an oligarchy greased by the idea that the physical possession of material things is all that matters. Sheeple, yes. ..."
"... Mmm, I think American voters get what they want in the end. They want their politicians because they believe the lies. 19% of Americans believe they are in the top 1% of wealth. A huge percentage of poor people believe they or their kids will (not can, but will) become wealthy. Most Americans can't find France on a map. ..."
"... I may have been gone for about thirty years, but that has only sharpened my insights into America. It's very hard to see just how flawed America is from the inside but when you step outside and have some perspective, it's frightening. ..."
"... Our government, beginning with Reagan, turned its back on promoting the general welfare. The wealthy soon learned that their best return on investment was the "purchase" of politicians willing to pass the legislation they put in their hands. Much of their investment included creating the right wing media apparatus. ..."
"... The Class War is real. It has been going on for 40 years, with the Conservative army facing virtually no resistance. Conservatives welcome Russia's help. Conservatives welcome barriers to people voting. Conservatives welcome a populace that believes lies that benefit them. Conservatives welcome the social and financial decline of the entire middle class and poor as long as it profits the rich financially, and by extension enhances their power politically. ..."
"... "Single acts of tyranny may be ascribed to the accidental opinion of the day, but a series of oppressions, begun at a distinguished period and pursued unalterably through every change of ministers, too plainly prove a deliberate, systematic plan of reducing [a people] to slavery" Thomas Jefferson. Rights of British America, 1774 ME 1:193, Papers 1:125 ..."
"... yes, my problem with the post as well, completely ignores democrat complicity the part where someone with a 26k salary will pay 16k in insurance? No they won't, the system would collapse in that case which will be fine with me. ..."
"... As your quote appears to imply, it's not a problem that can be solved by voting which, let's not forget, is nothing more than expressing an opinion. I am not sticking around just to find out if economically-crushed, opiod-, entertainment-, social media-addled Americans are actually capable of rolling out tumbrils for trips to the guillotines in the city squares. I strongly suspect not. ..."
"... This is the country where, after the banks crushed the economy in 2008, caused tens of thousands to lose their jobs, and then got huge bailouts, the people couldn't even be bothered to take their money out of the big banks and put it elsewhere. Because, you know, convenience! Expressing an opinion, or mobilizing others to express an opinion, or educating or proselytizing others about what opinion to have, is about the limit of what they are willing, or know how to do. ..."
Dec 14, 2017 | www.nakedcapitalism.com

Yves here. I imagine many readers are acutely aware of the problems outlined in this article, if not beset by them already. By any rational standard, I should move now to a much cheaper country that will have me. I know individuals who live most of the year in third-world and near-third world countries, but they have very cheap ways of still having a toehold in the US and not (yet or maybe ever) getting a long-term residence visa. Ecuador is very accommodating regarding retirement visas, and a Social Security level income goes far there, but yours truly isn't retiring any time soon. And another barrier to an international move (which recall I did once, so I have some appreciation for what it takes), is that one ought to check out possible destinations but if you are already time and money and energy stressed, how do you muster the resources to do that at all, let alone properly?

Aside from the potential to greatly reduce fixed costs, a second impetus for me is Medicare. I know for most people, getting on Medicare is a big plus. I have a very rare good, very old insurance policy. When you include the cost of drug plans, Medicare is no cheaper than what I have now, and considerably narrows my network. Moreover, I expect it to be thoroughly crapified by ten years from now (when I am 70), which argues for getting out of Dodge sooner rather than later.

And that's before you get to another wee problem Lambert points out: that I would probably not be happy in a third world or high end second world country. But the only bargain "world city" I know of is Montreal. I'm not sure it would represent enough of an all-in cost saving to justify the hassle of an international move and the attendant tax compliance burdens ... and that charitably assumes I could even find a way to get permanent residence. Ugh.

By Alex Henderson, who has written for the L.A. Weekly, Billboard, Spin, Creem, the Pasadena Weekly and many other publications. Follow him on Twitter @alexvhenderson. Originally published at Alternet

Millions can no longer afford to retire, and may never be able when the GOP passes its tax bill.

The news is not good for millions of aging Baby Boomers and Gen Xers in the United States who are moving closer to retirement age. According to the Employee Benefit Research Institute's annual report on retirement preparedness for 2017, only 18 percent of U.S.-based workers feel "very confident" about their ability to retire comfortably; Craig Copeland, senior research associate for EBRI and the report's co-author, cited "debt, lack of a retirement plan at work, and low savings" as "key factors" in workers' retirement-related anxiety. The Insured Retirement Institute finds a mere 23 percent of Baby Boomers and 24 percent of Gen Xers are confident that their savings will last in retirement. To make matters worse, more than 40 percent of Boomers and over 30 percent of Gen Xers report having no retirement savings whatsoever.

The U.S. has a retirement crisis on its hands, and with the far right controlling the executive branch and both houses of Congress, as well as dozens of state governments, things promise to grow immeasurably worse.

It wasn't supposed to be this way. Past progressive presidents, notably Franklin D. Roosevelt and Lyndon B. Johnson, took important steps to make life more comfortable for aging Americans. FDR signed the Social Security Act of 1935 into law as part of his New Deal, and when LBJ passed Medicare in 1965, he established a universal health care program for those 65 and older. But the country has embraced a neoliberal economic model since the election of Ronald Reagan, and all too often, older Americans have been quick to vote for far-right Republicans antagonistic to the social safety net.

In the 2016 presidential election, 55 percent of voters 50 and older cast their ballots for Donald Trump against just 44 percent for Hillary Clinton. (This was especially true of older white voters; 90 percent of black voters 45 and older, as well as 67 percent of Latino voters in the same age range voted Democratic.)

Sen. Bernie Sanders' (I-VT) economic proposals may have been wildly popular with millennials, but no demographic has a greater incentive to vote progressive than Americans facing retirement. According to research conducted by the American Association of Retired Persons, the three greatest concerns of Americans 50 and older are Social Security, health care costs and caregiving for loved ones -- all areas that have been targeted by Republicans.

House of Representatives Speaker Paul Ryan, a devotee of social Darwinist Ayn Rand, has made no secret of his desire to privatize Social Security and replace traditional Medicare with a voucher program. Had George W. Bush had his way and turned Social Security over to Wall Street, the economic crash of September 2008 might have left millions of senior citizens homeless.

Since then, Ryan has doubled down on his delusion that the banking sector can manage Social Security and Medicare more effectively than the federal government. Republican attacks on Medicare have become a growing concern: according to EBRI, only 38 percent of workers are confident the program will continue to provide the level of benefits it currently does.

The GOP's obsession with abolishing the Affordable Care Act is the most glaring example of its disdain for aging Americans. Yet Obamacare has been a blessing for Boomers and Gen Xers who have preexisting conditions. The ACA's guaranteed issue plans make no distinction between a 52-year-old American with diabetes, heart disease or asthma and a 52-year-old who has never had any of those illnesses. And AARP notes that under the ACA, the uninsured rate for Americans 50 and older decreased from 15 percent in 2013 to 9 percent in 2016.

According to the Congressional Budget Office, the replacement bills Donald Trump hoped to ram through Congress this year would have resulted in staggering premium hikes for Americans over 50. The CBO's analysis of the American Health Care Act, one of the earlier versions of Trumpcare, showed that a 64-year-old American making $26,500 per year could have gone from paying $1,700 annually in premiums to just over $16,000. The CBO also estimated that the GOP's American Health Care Act would have deprived 23 million Americans of health insurance by 2026.

As 2017 winds down, Americans with health problems are still in the GOP's crosshairs -- this time because of so-called tax reform. The Tax Cuts and Jobs Act (both the House and Senate versions) includes provisions that would undermine Obamacare and cause higher health insurance premiums for older Americans. According to AARP, "Older adults ages 50-64 would be at particularly high risk under the proposal, facing average premium increases of up to $1,500 in 2019 as a result of the bill."

The CBO estimates that the bill will cause premiums to spike an average of 10 percent overall, with average premiums increasing $890 per year for a 50-year-old, $1,100 per year for a 55-year-old, $1,350 per year for a 60-year-old and $1,490 per year for a 64-year-old. Premium increases, according to the CBO, would vary from state to state; in Maine, average premiums for a 64-year-old would rise as much as $1,750 per year.

Countless Americans who are unable to afford those steep premiums would lose their insurance. The CBO estimates that the Tax Cuts and Jobs Act would cause the number of uninsured under 65 to increase by 4 million by 2019 and by 13 million by 2027. The bill would also imperil Americans 65 and over by cutting $25 billion from Medicare.

As morally reprehensible as the GOP's tax legislation may be, it is merely an acceleration of the redistribution of wealth from the bottom to the top that America has undergone since the mid-1970s. (President Richard Nixon may have been a paranoid right-winger with authoritarian tendencies, but he expanded Medicare and supported universal health care.) Between the decline of labor unions, age discrimination, stagnant wages, an ever-rising cost of living, low interest rates, and a shortage of retirement accounts, millions of Gen Xers and Baby Boomers may never be able to retire.

Traditional defined-benefit pensions were once a mainstay of American labor, especially among unionized workers. But according to Pew Charitable Trusts, only 13 percent of Baby Boomers still have them (among millennials, the number falls to 6 percent). In recent decades, 401(k) plans have become much more prominent, yet a majority of American workers don't have them either.

Analyzing W2 tax records in 2012, U.S. Census Bureau researchers Michael Gideon and Joshua Mitchell found that only 14 percent of private-sector employers in the U.S. were offering a 401(k) or similar retirement packages to their workers. That figure was thought to be closer to 40 percent, but Gideon and Mitchell discovered the actual number was considerably lower when smaller businesses were carefully analyzed, and that larger companies were more likely to offer 401(k) plans than smaller ones.

Today, millions of Americans work in the gig economy; they don't have full-time jobs or receive W2s, but instead receive 1099s for freelance work. Tax-deferred SEP-IRAs were once a great, low-risk way for freelancers to save for retirement without relying exclusively on Social Security, but times have changed since the 1980s and '90s, when interest rates were considerably higher for certificates of deposit and savings accounts. According to Bankrate.com, average rates for one-year CDs dropped from 11.27 percent in 1984 to 8.1 percent in 1990 to 5.22 percent in 1995 to under 1 percent in 2010, where they currently remain.

The combination of stagnant wages and an increasingly high cost of living have been especially hellish for Americans who are trying to save for retirement. The United States' national minimum wage, a mere $7.25 per hour, doesn't begin to cover the cost of housing at a time when rents have soared nationwide. Never mind the astronomical prices in New York City, San Francisco or Washington, D.C. Median rents for one-bedroom apartments are as high as $1,010 per month in Atlanta, $960 per month in Baltimore, $860 per month in Jacksonville and $750 per month in Omaha, according to ApartmentList.com.

That so many older Americans are renting at all is ominous in its own right. FDR made home ownership a primary goal of the New Deal, considering it a key component of a thriving middle class. But last year, the Urban Institute found that 19 million Americans who previously owned a home are now renting, 31 percent between the ages of 36 and 45. Laurie Goodman, one of the study's authors, contends the Great Recession has "permanently raised the number of renters," and that the explosion of foreclosures has hit Gen Xers especially hard.

The severity of the U.S. retirement crisis is further addressed in journalist Jessica Bruder's new book "Nomadland: Surviving America in the 21st Century," which follows Americans in their 50s, 60s and even 70s living in RVs or vans, barely eking out a living doing physically demanding, seasonal temp work from harvesting sugar beets to cleaning toilets at campgrounds. Several had high-paying jobs before their lives were blown apart by the layoffs, foreclosures and corporate downsizing of the Great Recession. Bruder speaks with former college professors and software professionals who now find themselves destitute, teetering on the brink of homelessness and forced to do backbreaking work for next to nothing. Unlike the big banks, they never received a bailout.

These neo-nomads recall the transients of the 1930s, themselves victims of Wall Street's recklessness. But whereas FDR won in a landslide in 1932 and aggressively pursued a program of progressive economic reforms, Republicans in Congress have set out to shred what little remains of the social safety net, giving huge tax breaks to millionaires and billionaires. The older voters who swept Trump into office may have signed their own death warrants.

If aging Americans are going to be saved from this dystopian future, the U.S. will have to forge a new Great Society. Programs like Social Security, Medicare and Medicaid will need to be strengthened, universal health care must become a reality and age discrimination in the workplace will have to be punished as a civil rights violation like racial and gender-based discrimination. If not, millions of Gen Xers and Boomers will spend their golden years scraping for pennies.

Expat , , December 14, 2017 at 6:29 am

I certainly will never go back to the States for these and other reasons. I have a friend, also an American citizen, who travels frequently back to California to visit his son. He is truly worried about getting sick or having an accident when he is there since he knows it might bankrupt him. As he jokes, he would be happy to have another heart attack here in France since it's free!

For those of you who have traveled the world and talked to people, you probably know that most foreigners are perplexed by America's attitude to health care and social services. The richest nation in the world thinks that health and social security (in the larger sense of not being forced into the street) are not rights at all. Europeans scratch their heads at this.

The only solution is education and information, but they are appalling in America. America remains the most ignorant and worst educated of the developed nations and is probably beaten by many developing nations. It is this ignorance and stupidity that gets Americans to vote for the likes of Trump or any of the other rapacious millionaires they send to office every year.

A first step would be for Americans to insist that Congress eliminate its incredibly generous and life-long healthcare plans for elected officials. They should have to do what the rest of Americans do. Of course, since about 95% of Congress are millionaires, it might not be effective. But it's a start.

vidimi , , December 14, 2017 at 6:40 am

France has its share of problems, but boy do they pale next to the problems in America or even Canada. Life here is overall quite pleasant and I have no desire to go back to N.A.

Marco , , December 14, 2017 at 6:46 am

Canada has problems?

WobblyTelomeres , , December 14, 2017 at 7:47 am

Was in Yellowknife a couple of years ago. The First Nations people have a rough life. From what I've read, such extends across the country.

vidimi , , December 14, 2017 at 8:03 am

yeah, Canada has a neoliberal infestation that is somewhere between the US and the UK. France has got one too, but it is less advanced. I'll enjoy my great healthcare, public transportation, and generous paid time off while I can.

JEHR , , December 14, 2017 at 1:46 pm

The newest neoliberal effort in Canada was put forward by our Minister of Finance (a millionaire), who is touting a bill that would replace the defined benefit pension plans given to public employees with so-called target benefit pension plans. The risk for target plans is taken by the recipient. Morneau's former firm promotes target benefit pension plans, and the change could benefit Morneau himself as he did not put his assets from his firm in a blind trust. At the very least, he has a conflict of interest and should probably resign.

There is always an insidious group of wealthy people here who would like to re-make the world in their own image. I fear for the future.

JEHR , , December 14, 2017 at 1:55 pm

Yes, I agree. There is an effort to "simplify" the financial system of the EU to take into account the business cycle and the financial cycle.

Dita , , December 14, 2017 at 8:25 am

Europeans may scratch their heads, but they should recall their own histories and the long struggle to the universal benefits now enjoyed. Americans are far too complacent. This mildness is viewed by predators as weakness and the attacks will continue.

jefemt , , December 14, 2017 at 10:02 am

We really should be able to turn this around, and have an obligation to ourselves and our 'nation state', IF there were a group of folks running on a fairness, one-for-all, all-for-one platform. That sure isn't the present two-sides-of-the-same-coin Democraps and Republicrunts.

Not sure if many of the readers here watch non-cable national broadcast news, but Pete Peterson and his foundation are as everpresent an advertiser as the pharma industry. Peterson is the strongest, best organized advocate for gutting social services, social security, and sending every last penny out of the tax-mule consumer's pocket toward wall street. The guy needs an equivalent counterpoint enemy.

Check it out, and be vigilant in dispelling his message and mission. Thanks for running this article.

Running away: the almost-haves run to another nation state, the uber-wealthy want to leave the earth, or live in their private Idaho in the Rockies or on the Ocean. What's left for the least among us? Whatever we create?
https://www.pgpf.org/

Scramjett , , December 14, 2017 at 1:43 pm

I think pathologically optimistic is a better term than complacent. Every time someone dumps on them, their response is usually along the lines of "Don't worry, it'll get better," "Everything works itself out in the end," "maybe we'll win the lottery," my personal favorite "things will get better, just give it time" (honestly it's been 40 years of this neoliberal bullcrap, how much more time are we supposed to give it?), "this is just a phase" or "we can always bring it back later and better than ever." The last one is most troubling because after 20 years of witnessing things in the public sphere disappearing, I've yet to see a single thing return in any form at all.

I'm not sure where this annoying optimism came from but I sure wish it would go away.

sierra7 , , December 14, 2017 at 8:45 pm

The "optimism" comes from having a lack of historical memory. So many social protections that we have/had is seen as somehow coming out of the ether benevolently given without any social struggles. The lack of historical education on this subject in particular is appalling. Now, most would probably look for an "APP" on their "dumbphones" to solve the problem.

The social advantages that we still enjoy were fought for in the streets, and on the "bricks" flowing with the participants' blood: the 8 hr. day; women's right to vote; the ability and right for groups of laborers to organize; worker safety laws ... and so many others. There is no historical memory on how those rights were achieved. We are slowly slipping into an oligarchy greased by the idea that the physical possession of material things is all that matters. Sheeple, yes.

Jeremy Grimm , , December 14, 2017 at 4:44 pm

WOW! You must have been outside the U.S. for a long time. Your comment seems to suggest we still have some kind of democracy here. We don't get to pick which rapacious millionaires we get to vote for and it doesn't matter any way since whichever one we pick from the sad offerings ends up with policies dictated from elsewhere.

Expat , December 14, 2017 at 6:10 pm

Mmm, I think American voters get what they want in the end. They want their politicians because they believe the lies. 19% of Americans believe they are in the top 1% of wealth. A huge percentage of poor people believe they or their kids will (not can, but will) become wealthy. Most Americans can't find France on a map.

So, yes, you DO get to pick your rapacious millionaire. You send the same scumbags back to Washington every year because it's not him, it's the other guys who are the problem. One third of Americans support Trump! Really, really support him. They think he is Jesus, MacArthur and Adam Smith all rolled up into one.

I may have been gone for about thirty years, but that has only sharpened my insights into America. It's very hard to see just how flawed America is from the inside but when you step outside and have some perspective, it's frightening.

Disturbed Voter , December 14, 2017 at 6:29 am

The Democrat party isn't a reform party. Thinking it is so, is because of the "No Other Choice" meme. Not saying that the Republican party works in my favor. They don't. Political reform goes deeper than reforming either main party. It means going to a European plurality system (with its own downside). That way growing Third parties will be viable, if they have popular, as opposed to millionaire, support. I don't see this happening, because of Citizens United, but if all you have is hope, then you have to go with that.

Carolinian , December 14, 2017 at 8:05 am

Had George W. Bush had his way and turned Social Security over to Wall Street, the economic crash of September 2008 might have left millions of senior citizens homeless.

Substitute Bill Clinton for George Bush in that sentence and it works just as well. Neoliberalism is a bipartisan project.

And many of the potential and actual horrors described above arise from the price distortions of the US medical system with Democratic acquiescence in said system making things worse. The above article reads like a DNC press release.

And finally while Washington politicians of both parties have been threatening Social Security for years that doesn't mean its third rail status has been repealed. The populist tremors of the last election -- which have caused our elites to lose their collective mind -- could be a mere prelude to what will happen in the event of a full scale assault on the safety net.

KYrocky , December 14, 2017 at 12:05 pm

Substitute Obama's quest for a Grand Bargain as well.

Our government, beginning with Reagan, turned its back on promoting the general welfare. The wealthy soon learned that their best return on investment was the "purchase" of politicians willing to pass the legislation they put in their hands. Much of their investment included creating the right wing media apparatus.

The Class War is real. It has been going on for 40 years, with the Conservative army facing virtually no resistance. Conservatives welcome Russia's help. Conservatives welcome barriers to people voting. Conservatives welcome a populace that believes lies that benefit them. Conservatives welcome the social and financial decline of the entire middle class and poor as long as it profits the rich financially, and by extension enhances their power politically.

If retirees flee our country that will certainly please the Conservatives as that will be fewer critics (enemies). Also less need or demand for social programs.

rps , December 14, 2017 at 5:01 pm

"Single acts of tyranny may be ascribed to the accidental opinion of the day, but a series of oppressions, begun at a distinguished period and pursued unalterably through every change of ministers, too plainly prove a deliberate, systematic plan of reducing [a people] to slavery" Thomas Jefferson. Rights of British America, 1774 ME 1:193, Papers 1:125

tegnost , December 14, 2017 at 8:59 am

yes, my problem with the post as well; it completely ignores Democrat complicity. The part where someone with a 26k salary will pay 16k in insurance? No they won't; the system would collapse in that case, which will be fine with me.

Marco , December 14, 2017 at 6:55 am

"President Richard Nixon may have been a paranoid right-winger with authoritarian tendencies, but he expanded Medicare and supported universal health care."

"Gimme that old time Republican!"

One of the reasons I love NC is that most political economic analysis is often more harsh on the Democrats than the Repubs so I am a bit dismayed how this article is way too easy on Team D. How many little (and not so little) knives in the back from Clinton and Obama? Is a knife in the chest that much worse?

OpenThePodBayDoorsHAL , December 14, 2017 at 3:57 pm

This entire thread is simply heartbreaking. Americans have had their money, their freedom, their privacy, their health, and sometimes their very lives taken away from them by the State. But the heartbreaking part is that they feel they are powerless to do anything at all about it, so they are just trying to leave.

But "People should not fear the government; the government should fear the people"

tagio , December 14, 2017 at 4:39 pm

It's more than a feeling, HAL. https://www.newyorker.com/news/john-cassidy/is-america-an-oligarchy Link to the academic paper embedded in article.

As your quote appears to imply, it's not a problem that can be solved by voting which, let's not forget, is nothing more than expressing an opinion. I am not sticking around just to find out if economically-crushed, opioid-, entertainment-, and social media-addled Americans are actually capable of rolling out tumbrils for trips to the guillotines in the city squares. I strongly suspect not.

This is the country where, after the banks crushed the economy in 2008, caused tens of thousands to lose their jobs, and then got huge bailouts, the people couldn't even be bothered to take their money out of the big banks and put it elsewhere. Because, you know, convenience! Expressing an opinion, or mobilizing others to express an opinion, or educating or proselytizing others about what opinion to have, is about the limit of what they are willing, or know how to do.

[Dec 13, 2017] A stunning 33% of job seekers ages 55 and older are long-term unemployed, according to the AARP Public Policy Institute

Notable quotes:
"... And, recent studies have shown, the longer you're out of work - especially if you're older and out of work - the harder it becomes to get a job offer. ..."
Dec 13, 2017 | www.nakedcapitalism.com

Livius Drusus , December 13, 2017 at 2:44 pm

I thought this was an interesting article. Apologies if this has been posted on NC already.

A stunning 33% of job seekers ages 55 and older are long-term unemployed, according to the AARP Public Policy Institute. The average length of unemployment for the roughly 1.2 million people 55+ who are out of work: seven to nine months. "It's emotionally devastating for them," said Carl Van Horn, director of Rutgers University's John J. Heldrich Center for Workforce Development, at a Town Hall his center and the nonprofit WorkingNation held earlier this year in New Brunswick, N.J.

... ... ...

The fight faced by the long-term unemployed

And, recent studies have shown, the longer you're out of work - especially if you're older and out of work - the harder it becomes to get a job offer.

The job-finding rate declines by roughly 50% within eight months of unemployment, according to a 2016 paper by economists Gregor Jarosch of Stanford University and Laura Pilossoph of the Federal Reserve Bank of New York. "Unemployment duration has a strongly negative effect on the likelihood of subsequent employment," wrote researchers from the University of Maryland and the U.S. Census Bureau in another 2016 paper.

"Once upon a time, you could take that first job and it would lead to the next job and the job after that," said Town Hall panelist John Colborn, chief operating officer at the nonprofit JEVS Human Services, of Philadelphia. "The notion of a career ladder offered some hope of getting back into the labor market. The rungs of the ladder are getting harder and harder to find and some of them are broken."

In inner cities, said Kimberly McClain, CEO of The Newark Alliance, "there's an extra layer beyond being older and out of work. There are issues of race and poverty and being defined by your ZIP Code. There's an incredible sense of urgency."

... ... ...

Filling a work gap

If you are over 50, unemployed and have a work gap right now, the Town Hall speakers said, fill it by volunteering, getting an internship, doing project work, job-shadowing someone in a field you want to be in or taking a class to re-skill. These kinds of things "make a candidate a lot more attractive," said Colborn. Be sure to note them in your cover letter and résumé.

Town Hall panelist Amanda Mullan, senior vice president and chief human resources officer of the New Jersey Resources Corp. (a utility company based in Wall, N.J.), said that when her company is interviewing someone who has been out of work lately, "we will ask: 'What have you done during that time frame?' If we get 'Nuthin,' that shows something about the individual, from a motivational perspective."

... ... ...

The relief of working again

Finally finding work when you're over 50 and unemployed for a stretch can be a relief for far more than financial reasons.

"Once I landed my job, the thing I most looked forward to was the weekend," said Konopka. "Not to relax, but because I didn't have to think about finding a job anymore. That's 24/7 in your head. You're always thinking on a Saturday: 'If I'm not doing something to find a job, will there be a posting out there?'"

Full article: https://www.marketwatch.com/story/jobs-are-everywhere-just-not-for-people-over-55-2017-12-08

[Dec 13, 2017] Stress of long-term unemployment takes a toll on thousands of Jerseyans who are out of work by Leslie Kwoh

Notable quotes:
"... Leslie Kwoh may be reached at lkwoh@starledger.com or (973) 392-4147. ..."
Jun 13, 2010 | www.nj.com

At 5:30 every morning, Tony Gwiazdowski rolls out of bed, brews a pot of coffee and carefully arranges his laptop, cell phone and notepad like silverware across the kitchen table.

And then he waits.

Gwiazdowski, 57, has been waiting for 16 months. Since losing his job as a transportation sales manager in February 2009, he wakes each morning to the sobering reminder that, yes, he is still unemployed. So he pushes aside the fatigue, throws on some clothes and sends out another flurry of resumes and cheery cover letters.

But most days go by without a single phone call. And around sundown, when he hears his neighbors returning home from work, Gwiazdowski -- the former mayor of Hillsborough -- can't help but allow himself one tiny sigh of resignation.

"You sit there and you wonder, 'What am I doing wrong?'" said Gwiazdowski, who finds companionship in his 2-year-old golden retriever, Charlie, until his wife returns from work.

"The worst moment is at the end of the day when it's 4:30 and you did everything you could, and the phone hasn't rung, the e-mails haven't come through."

Gwiazdowski is one of a growing number of chronically unemployed workers in New Jersey and across the country who are struggling to get through what is becoming one long, jobless nightmare -- even as the rest of the economy has begun to show signs of recovery.

Nationwide, 46 percent of the unemployed -- 6.7 million Americans -- have been without work for at least half a year, by far the highest percentage recorded since the U.S. Labor Department began tracking the data in 1948.

In New Jersey, nearly 40 percent of the 416,000 unemployed workers last year fit that profile, up from about 20 percent in previous years, according to the department, which provides only annual breakdowns for individual states. Most of them were unemployed for more than a year.

But the repercussions of chronic unemployment go beyond the loss of a paycheck or the realization that one might never find the same kind of job again. For many, the sinking feeling of joblessness -- with no end in sight -- can take a psychological toll, experts say.

Across the state, mental health crisis units saw a 20 percent increase in demand last year as more residents reported suffering from unemployment-related stress, according to the New Jersey Association of Mental Health Agencies.

"The longer the unemployment continues, the more impact it will have on their personal lives and mental health," said Shauna Moses, the association's associate executive director. "There's stress in the marriage, with the kids, other family members, with friends."

And while a few continue to cling to optimism, even the toughest admit there are moments of despair: fear of never finding work, envy of employed friends and embarrassment at having to tell acquaintances that, nope, still no luck.

"When they say, 'Hi Mayor,' I don't tell a lot of people I'm out of work -- I say I'm semi-retired," said Gwiazdowski, who maxed out on unemployment benefits several months ago.

"They might think, 'Gee, what's wrong with him? Why can't he get a job?' It's a long story and maybe people really don't care and now they want to get away from you."


SECOND TIME AROUND

Lynn Kafalas has been there before, too. After losing her computer training job in 2000, the East Hanover resident took four agonizing years to find new work -- by then, she had refashioned herself into a web designer.

That not-too-distant experience is why Kafalas, 52, who was laid off again eight months ago, grows uneasier with each passing day. Already, some of her old demons have returned, like loneliness, self-doubt and, worst of all, insomnia. At night, her mind races to dissect the latest interview: What went wrong? What else should she be doing? And why won't even Barnes & Noble hire her?

"It's like putting a stopper on my life -- I can't move on," said Kafalas, who has given up karate lessons, vacations and regular outings with friends. "Everything is about the interviews."

And while most of her friends have been supportive, a few have hinted to her that she is doing something wrong, or not doing enough. The remarks always hit Kafalas with a pang.

In a recent study, researchers at Rutgers University found that the chronically unemployed are prone to high levels of stress, anxiety, depression, loneliness and even substance abuse, which take a toll on their self-esteem and personal relationships.

"They're the forgotten group," said Carl Van Horn, director of the John J. Heldrich Center for Workforce Development at Rutgers, and a co-author of the report. "And the longer you are unemployed, the less likely you are to get a job."

Of the 900 unemployed workers first interviewed last August for the study, only one in 10 landed full-time work by March of this year, and only half of those lucky few expressed satisfaction with their new jobs. Another one in 10 simply gave up searching.

Among those who were still unemployed, many struggled to make ends meet by borrowing from friends or family, turning to government food stamps and forgoing health care, according to the study.

More than half said they avoided all social contact, while slightly less than half said they had lost touch with close friends. Six in 10 said they had problems sleeping.

Kafalas says she deals with her chronic insomnia by hitting the gym for two hours almost every evening, lifting weights and pounding the treadmill until she feels tired enough to fall asleep.

"Sometimes I forget what day it is. Is it Tuesday? And then I'll think of what TV show ran the night before," she said. "Waiting is the toughest part."


AGE A FACTOR

Generally, the likelihood of long-term unemployment increases with age, experts say. A report by the National Employment Law Project this month found that nearly half of those who were unemployed for six months or longer were at least 45 years old. Those between 16 and 24 made up just 14 percent.

Tell that to Adam Blank, 24, who has been living with his girlfriend and her parents at their Martinsville home since losing his sales job at Best Buy a year and a half ago.

Blank, who graduated from Rutgers with a major in communications, says he feels like a burden sometimes, especially since his girlfriend, Tracy Rosen, 24, works full-time at a local nonprofit. He shows her family gratitude with small chores, like taking out the garbage, washing dishes, sweeping floors and doing laundry.

Still, he often feels inadequate.

"All I'm doing on an almost daily basis is sitting around the house trying to keep myself from going stir-crazy," said Blank, who dreams of starting a social media company.

When he is feeling particularly low, Blank said he turns to a tactic employed by prisoners of war in Vietnam: "They used to build dream houses in their head to help keep their sanity. It's really just imagining a place I can call my own."


LESSONS LEARNED

Meanwhile, Gwiazdowski, ever the optimist, says unemployment has taught him a few things.

He has learned, for example, how to quickly assess an interviewer's age and play up or down his work experience accordingly -- he doesn't want to appear "threatening" to a potential employer who is younger. He has learned that by occasionally deleting and reuploading his resume to job sites, his entry appears fresh.

"It's almost like a game," he said, laughing. "You are desperate, but you can't show it."

But there are days when he just can't find any humor in his predicament -- like when he finishes a great interview but receives no offer, or when he hears a fellow job seeker finally found work and feels a slight twinge of jealousy.

"That's what I'm missing -- putting on that shirt and tie in the morning and going to work," he said.

The memory of getting dressed for work is still so vivid, Gwiazdowski says, that he has to believe another job is just around the corner.

"You always have to hope that that morning when you get up, it's going to be the day," he said.

"Today is going to be the day that something is going to happen."

Leslie Kwoh may be reached at lkwoh@starledger.com or (973) 392-4147.

DrBuzzard Jun 13, 2010

I collect from the state of Iowa, was on tier I, and when the gov't recessed without passing the extension, Iowa stopped paying tier I claims that were already open. I was scheduled to be on tier I until July 15th, and it's gone now, as a surprise; when I tried to claim my week this week I was notified. SURPRISE, talk about stress.

berganliz Jun 13, 2010

This is terrible....just wait until RIF'd teachers hit the unemployment offices....but then, this is what NJ wanted...fired teachers who are to blame for the worst recession our country has seen in 150 years...thanks GWB.....thanks Donald Rumsfeld......thanks Dick Cheney....thanks Karl "Miss Piggy" Rove...and thank you Mr. Big Boy himself...Gov Krispy Kreame!

rp121 Jun 13, 2010

For readers who care about this nation's unemployed: call your Senators to pass HR 4213, the "Extenders" bill. Unfortunately, it does not add UI benefit weeks; however, it DOES continue the emergency federal tiers of UI. If it does not pass this week many of us are cut off at 26 wks. No tier 1, 2, nothing.

[Dec 13, 2017] Unemployment health hazard and stress

The longer you are unemployed, the more you are affected by those factors.
Notable quotes:
"... The good news is that only a relatively small number of people are seriously affected by the stress of unemployment to the extent they need medical assistance. Most people don't get to the serious levels of stress, and much as they loathe being unemployed, they suffer few, and minor, ill effects. ..."
"... Worries about income, domestic problems, whatever, the list is as long as humanity. The result of stress is a strain on the nervous system, and these create the physical effects of the situation over time. The chemistry of stress is complex, but it can be rough on the hormonal system. ..."
"... Not at all surprisingly, people under stress experience strong emotions. It's a perfectly natural response to what can be quite intolerable emotional strains. It's fair to say that even normal situations are felt much more severely by people already under stress. Things that wouldn't normally even be issues become problems, and problems become serious problems. Relationships can suffer badly in these circumstances, and that, inevitably, produces further crises. Unfortunately for those affected, these are by now, at this stage, real crises. ..."
"... Some people are stubborn enough and tough enough mentally to control their emotions ruthlessly, and they do better under these conditions. Even that comes at a cost, and although under control, the stress remains a problem. ..."
"... One of the reasons anger management is now a growth industry is because of the growing need for assistance with severe stress over the last decade. This is a common situation, and help is available. ..."
"... Depression is universally hated by anyone who's ever had it. ..."
"... Very important: Do not, under any circumstances, try to use drugs or alcohol as a quick fix. They make it worse, over time, because they actually add stress. Some drugs can make things a lot worse, instantly, too, particularly the modern made-in-a-bathtub variety. They'll also destroy your liver, which doesn't help much, either. ..."
"... You don't have to live in a gym to get enough exercise for basic fitness. A few laps of the pool, a good walk, some basic aerobic exercises, you're talking about 30-45 minutes a day. It's not hard. ..."
Dec 13, 2017 | www.cvtips.com

It's almost impossible to describe the various psychological impacts, because there are so many. There are sometimes serious consequences, including suicide, and, some would say worse, chronic depression.

There's not really a single cause and effect. It's a compound effect, and unemployment, by adding stress, affects people, often badly.

The world doesn't need any more untrained psychologists, and we're not pretending to give medical advice. That's for professionals. Everybody is different, and their problems are different. What we can do is give you an outline of the common problems, and what you can do about them.

The good news is that only a relatively small number of people are seriously affected by the stress of unemployment to the extent they need medical assistance. Most people don't get to the serious levels of stress, and much as they loathe being unemployed, they suffer few, and minor, ill effects.

For others, there are a series of issues, and the big three are:

Stress

Stress is Stage One. It's a natural result of the situation. Worries about income, domestic problems, whatever, the list is as long as humanity. The result of stress is a strain on the nervous system, and these create the physical effects of the situation over time. The chemistry of stress is complex, but it can be rough on the hormonal system.

Over an extended period, the body's natural hormonal balances are affected, and this can lead to problems. These are actually physical issues, but the effects are mental, and the first obvious effects are, naturally, emotional.

Anger, and other negative emotions

Not at all surprisingly, people under stress experience strong emotions. It's a perfectly natural response to what can be quite intolerable emotional strains. It's fair to say that even normal situations are felt much more severely by people already under stress. Things that wouldn't normally even be issues become problems, and problems become serious problems. Relationships can suffer badly in these circumstances, and that, inevitably, produces further crises. Unfortunately for those affected, these are by now, at this stage, real crises.

If the actual situation was already bad, this mental state makes it a lot worse. Constant aggravation doesn't help people to keep a sense of perspective. Clear thinking isn't easy when under constant stress.

Some people are stubborn enough and tough enough mentally to control their emotions ruthlessly, and they do better under these conditions. Even that comes at a cost, and although under control, the stress remains a problem.

One of the reasons anger management is now a growth industry is because of the growing need for assistance with severe stress over the last decade. This is a common situation, and help is available.

If you have reservations about seeking help, bear in mind it can't possibly be any worse than the problem.

Depression

Depression is universally hated by anyone who's ever had it. This is the next stage, and it's caused by hormonal imbalances which affect serotonin. It's actually a physical problem, but it has mental effects which are sometimes devastating, and potentially life threatening.

The common symptoms are:

It's a disgusting experience. No level of obscenity could possibly describe it. Depression is misery on a level people wouldn't conceive in a nightmare. At this stage the patient needs help, and getting it is actually relatively easy. It's convincing the person they need to do something about it that's difficult. Again, the mental state is working against the person. Even admitting there's a problem is hard for many people in this condition.

Generally speaking, a person who is trusted is the best person to tell anyone experiencing the onset of depression to seek help. Important: If you're experiencing any of those symptoms:

Very important: Do not, under any circumstances, try to use drugs or alcohol as a quick fix. They make it worse, over time, because they actually add stress. Some drugs can make things a lot worse, instantly, too, particularly the modern made-in-a-bathtub variety. They'll also destroy your liver, which doesn't help much, either.

Alcohol, in particular, makes depression much worse. Alcohol is a depressant, itself, and it's also a nasty chemical mix with all those stress hormones.

If you've ever had alcohol problems, or seen someone with alcohol wrecking their lives, depression makes things about a million times worse.

Just don't do it. Steer clear of any so-called stimulants, because they don't mix with antidepressants, either.

Unemployment and staying healthy

The above is what you need to know about the risks of unemployment to your health and mental well being.

These situations are avoidable.

Your best defense against the mental stresses and strains of unemployment, and their related problems is staying healthy.

We can promise you that is nothing less than the truth. The healthier you are, the better your defenses against stress, and the more strength you have to cope with situations.

Basic health is actually pretty easy to achieve:

Diet

Eat real food, not junk, and make sure you're getting enough food. Your body can't work with resources it doesn't have. Good food is a real asset, and you'll find you don't get tired as easily. You need the energy reserves.

Give yourself a good selection of food that you like, that's also worth eating.

The good news is that plain food is also reasonably cheap, and you can eat as much as you need. Basic meals are easy enough to prepare, and as long as you're getting all the protein, veg and minerals you need, you're pretty much covered.

You can also use a multivitamin cap, or broad spectrum supplements, to make sure you're getting all your trace elements. Also make sure you're getting the benefits of your food by taking acidophilus or eating yogurt regularly.

Exercise

You don't have to live in a gym to get enough exercise for basic fitness. A few laps of the pool, a good walk, some basic aerobic exercises, you're talking about 30-45 minutes a day. It's not hard.

Don't just sit and suffer

If anything's wrong, check it out when it starts, not six months later. Most medical conditions become serious when they're allowed to get worse.

For unemployed people the added risk is also that they may prevent you getting that job, or going for interviews. If something's causing you problems, get rid of it.

Nobody who's been through the blender of unemployment thinks it's fun.

Anyone who's really done it tough will tell you one thing:

Don't be a victim. Beat the problem, and you'll really appreciate the feeling.

[Dec 13, 2017] Being homeless is better than working for Amazon by Nichole Gracely

Notable quotes:
"... According to Amazon's metrics, I was one of their most productive order pickers -- I was a machine, and my pace would accelerate throughout the course of a shift. What they didn't know was that I stayed fast because if I slowed down for even a minute, I'd collapse from boredom and exhaustion ..."
"... toiling in some remote corner of the warehouse, alone for 10 hours, with my every move being monitored by management on a computer screen. ..."
"... ISS could simply deactivate a worker's badge and they would suddenly be out of work. They treated us like beggars because we needed their jobs. Even worse, more than two years later, all I see is: Jeff Bezos is hiring. ..."
"... I have never felt more alone than when I was working there. I worked in isolation and lived under constant surveillance ..."
"... That was 2012 and Amazon's labor and business practices were only beginning to fall under scrutiny. ..."
"... I received $200 a week for the following six months and I haven't had any source of regular income since those benefits lapsed. I sold everything in my apartment and left Pennsylvania as fast as I could. I didn't know how to ask for help. I didn't even know that I qualified for food stamps. ..."
Nov 28, 2014 | theguardian.com

wa8dzp:

Nichole Gracely has a master's degree and was one of Amazon's best order pickers. Now, after protesting the company, she's homeless.

I am homeless. My worst days now are better than my best days working at Amazon.

According to Amazon's metrics, I was one of their most productive order pickers -- I was a machine, and my pace would accelerate throughout the course of a shift. What they didn't know was that I stayed fast because if I slowed down for even a minute, I'd collapse from boredom and exhaustion.

During peak season, I trained incoming temps regularly. When that was over, I'd be an ordinary order picker once again, toiling in some remote corner of the warehouse, alone for 10 hours, with my every move being monitored by management on a computer screen.

Superb performance did not guarantee job security. ISS is the temp agency that provides warehouse labor for Amazon and they are at the center of the SCOTUS case Integrity Staffing Solutions vs. Busk. ISS could simply deactivate a worker's badge and they would suddenly be out of work. They treated us like beggars because we needed their jobs. Even worse, more than two years later, all I see is: Jeff Bezos is hiring.

I have never felt more alone than when I was working there. I worked in isolation and lived under constant surveillance. Amazon could mandate overtime and I would have to comply with any schedule change they deemed necessary, and if there was not any work, they would send us home early without pay. I started to fall behind on my bills.

At some point, I lost all fear. I had already been through hell. I protested Amazon. The gag order was lifted and I was free to speak. I spent my last days in a lovely apartment constructing arguments on discussion boards, writing articles and talking to reporters. That was 2012 and Amazon's labor and business practices were only beginning to fall under scrutiny. I walked away from Amazon's warehouse and didn't have any other source of income lined up.

I cashed in on my excellent credit, took out cards, and used them to pay rent and buy food because it would be six months before I could receive my first unemployment compensation check.

I received $200 a week for the following six months and I haven't had any source of regular income since those benefits lapsed. I sold everything in my apartment and left Pennsylvania as fast as I could. I didn't know how to ask for help. I didn't even know that I qualified for food stamps.

I furthered my Amazon protest while homeless in Seattle. When the Hachette dispute flared up I "flew a sign," street parlance for panhandling with a piece of cardboard: "I was an order picker at amazon.com. Earned degrees. Been published. Now, I'm homeless, writing and doing this. Anything helps."

I have made more money per word with my signs than I will probably ever earn writing, and I make more money per hour than I will probably ever be paid for my work. People give me money and offer well wishes and I walk away with a restored faith in humanity.

I flew my protest sign outside Whole Foods while Amazon corporate employees were on lunch break, and they gawked. I went to my usual flying spots around Seattle and made more money per hour protesting Amazon with my sign than I did while I worked with them. And that was in Seattle. One woman asked, "What are you writing?" I told her about the descent from working poor to homeless, income inequality, my personal experience. She mentioned Thomas Piketty's book, we chatted a little, she handed me $10 and wished me luck. Another guy said, "Damn, that's a great story! I'd read it," and handed me a few bucks.

[snip]

[Dec 13, 2017] Business Staff brand colleagues as 'lazy'

While lazy people do exist, this compulsive quest for "high performance" is one of the most disgusting features of neoliberalism. It is cemented by annual "performance reviews", which are a scam.
Aug 19, 2005 | BBC NEWS
An overwhelming majority of bosses and employees think that some of their colleagues consistently underperform.

An Investors in People survey found 75% of bosses and 80% of staff thought some colleagues were "dead wood" - and the main reason was thought to be laziness. Nearly half of employees added they worked closely with someone who they thought was lazy and not up to the job. However, four out of ten workers said that their managers did nothing about colleagues not pulling their weight.

According to Investors in People, the problem of employees not doing their jobs properly seemed to be more prevalent in larger organizations. The survey found that 84% of workers in organizations with more than 1,000 employees thought they had an underperforming colleague, compared with 50% in firms with fewer than 50 staff.

Tell tale signs

The survey identified the tell-tale signs of people not pulling their weight, according to both employers and employees, including:

Both employers and employees agreed that the major reason for someone failing in their job was sheer laziness. "Dead wood" employees can have a stark effect on their colleagues' physical and mental well-being, the survey found. Employees reported that they had to work longer hours to cover for shirking colleagues and felt undervalued as a result. Ultimately, working alongside a lazy colleague could prompt workers to look for a new job the survey found.

But according to Nick Parfitt, spokesman for human resources firm Cubiks, an unproductive worker isn't necessarily lazy.

"It can be too easy to brand a colleague lazy," he said. "They may have genuine personal problems or are being asked to do a job that they have not been given the training to do. "The employer must look out for the warning signs of a worker becoming de-motivated - hold regular conversations and appraisals with staff."

However, Mr Parfitt added that ultimately lazy employees may have to be shown the door. "The cost of sacking someone can be colossal and damaging to team morale but sometimes it may be the only choice."

[Dec 12, 2017] Can Uber Ever Deliver Part Eleven Annual Uber Losses Now Approaching $5 Billion

Notable quotes:
"... Total 2015 gross passenger payments were 200% higher than 2014, but Uber corporate revenue improved 300% because Uber cut the driver share of passenger revenue from 83% to 77%. This was an effective $500 million wealth transfer from drivers to Uber's investors. ..."
"... Uber's P&L gains were wiped out by higher non-EBIDTAR expense. Thus the 300% Uber revenue growth did not result in any improvement in Uber profit margins. ..."
"... In 2016, Uber unilaterally imposed much larger cuts in driver compensation, costing drivers an additional $3 billion. [6] Prior to Uber's market entry, the take home pay of big-city cab drivers in the US was in the $12-17/hour range, and these earnings were possible only if drivers worked 65-75 hours a week. ..."
"... An independent study of the net earnings of Uber drivers (after accounting for the costs of the vehicles they had to provide) in Denver, Houston and Detroit in late 2015 (prior to Uber's big 2016 cuts) found that driver earnings had fallen to the $10-13/hour range. [7] Multiple recent news reports have documented how Uber drivers are increasing unable to support themselves from their reduced share of passenger payments. [8] ..."
"... Since mass driver defections would cause passenger volume growth to collapse completely, Uber was forced to reverse these cuts in 2017 and increased the driver share from 68% to 80%. This meant that Uber's corporate revenue, which had grown over 300% in 2015 and over 200% in 2016 will probably only grow by about 15% in 2017. ..."
"... Socialize the losses, privatize the gains, VC-ize the subsidies. ..."
"... The cold hard truth is that Uber is backed into a corner with severely limited abilities to tweak the numbers on either the supply or the demand side: cut driver compensation and they trigger driver churn (as has already been demonstrated), increase fare prices for riders and riders defect to cheaper alternatives. ..."
"... "Growth and Efficiency" are the sine qua non of Neoliberalism. Kalanick's "hype brilliance" was to con the market with "revenue growth" and signs ..."
Dec 12, 2017 | www.nakedcapitalism.com

Uber lost $2.5 billion in 2015, probably lost $4 billion in 2016, and is on track to lose $5 billion in 2017.

The top line of the table below shows total passenger payments, which must be split between Uber corporate and its drivers. Driver gross earnings are substantially higher than actual take-home pay, as gross earnings must cover all the expenses drivers bear, including fuel, vehicle ownership, insurance and maintenance.

Most of the "profit" data released by Uber over time and discussed in the press is not true GAAP (generally accepted accounting principles) profit comparable to the net income numbers public companies publish but is EBIDTAR contribution. Companies have significant leeway as to how they calculate EBIDTAR (although it would exclude interest, taxes, depreciation, amortization) and the percentage of total costs excluded from EBIDTAR can vary significantly from quarter to quarter, given the impact of one-time expenses such as legal settlements and stock compensation. We only have true GAAP net profit results for 2014, 2015 and the 2nd/3rd quarters of 2017, but have EBIDTAR contribution numbers for all other periods. [5]

Uber had GAAP net income of negative $2.6 billion in 2015, and a negative profit margin of 132%. This is consistent with the negative $2.0 billion loss and (143%) margin for the year ending September 2015 presented in part one of the NC Uber series over a year ago.

No GAAP profit results for 2016 have been disclosed, but actual losses likely exceed $4 billion given the EBIDTAR contribution of negative $3.2 billion. Uber's GAAP losses for the 2nd and 3rd quarters of 2017 were over $2.5 billion, suggesting annual losses of roughly $5 billion.

While many Silicon Valley funded startups suffered large initial losses, none of them lost anything remotely close to $2.6 billion in their sixth year of operation and then doubled their losses to $5 billion in year eight. Reversing losses of this magnitude would require the greatest corporate financial turnaround in history.

No evidence of significant efficiency/scale gains; 2015 and 2016 margin improvements entirely explained by unilateral cuts in driver compensation, but losses soared when Uber had to reverse these cuts in 2017.

Total 2015 gross passenger payments were 200% higher than 2014, but Uber corporate revenue improved 300% because Uber cut the driver share of passenger revenue from 83% to 77%. This was an effective $500 million wealth transfer from drivers to Uber's investors. These driver compensation cuts improved Uber's EBIDTAR margin, but Uber's P&L gains were wiped out by higher non-EBIDTAR expense. Thus the 300% Uber revenue growth did not result in any improvement in Uber profit margins.
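
To see how a cut in the driver share turns 200% gross growth into roughly 300% corporate revenue growth, here is an illustrative back-of-the-envelope calculation using only the shares quoted above (G is a placeholder for 2014 gross passenger payments, not a reported figure):

    2014: gross payments G, driver share 83%, so Uber revenue = 0.17 x G
    2015: gross payments 3G (200% higher), driver share 77%, so Uber revenue = 0.23 x 3G = 0.69 x G
    Ratio: 0.69G / 0.17G is about 4.1, i.e. roughly 300% revenue growth on 200% gross growth.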

In 2016, Uber unilaterally imposed much larger cuts in driver compensation, costing drivers an additional $3 billion. [6] Prior to Uber's market entry, the take home pay of big-city cab drivers in the US was in the $12-17/hour range, and these earnings were possible only if drivers worked 65-75 hours a week.

An independent study of the net earnings of Uber drivers (after accounting for the costs of the vehicles they had to provide) in Denver, Houston and Detroit in late 2015 (prior to Uber's big 2016 cuts) found that driver earnings had fallen to the $10-13/hour range. [7] Multiple recent news reports have documented how Uber drivers are increasing unable to support themselves from their reduced share of passenger payments. [8]

A business model where profit improvement is hugely dependent on wage cuts is unsustainable, especially when take home wages fall to (or below) minimum wage levels. Uber's primary focus has always been the rate of growth in gross passenger revenue, as this has been a major justification for its $68 billion valuation. This growth rate came under enormous pressure in 2017 given Uber efforts to raise fares, major increases in driver turnover as wages fell, [9] and the avalanche of adverse publicity it was facing.

Since mass driver defections would cause passenger volume growth to collapse completely, Uber was forced to reverse these cuts in 2017 and increased the driver share from 68% to 80%. This meant that Uber's corporate revenue, which had grown over 300% in 2015 and over 200% in 2016 will probably only grow by about 15% in 2017.
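
As a rough, purely illustrative check on that 15% figure (assuming, for simplicity, that the new split applies across the full year): restoring the driver share from 68% to 80% cuts Uber's slice of each passenger dollar from 32% to 20%, i.e. to 20/32, or about 62.5% of what it was. Corporate revenue then grows about 15% only if gross passenger payments grow by roughly 1.15 x 32/20, i.e. about 1.84, on the order of 80-85%; far below the 200%+ gross growth of earlier years.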

MKS , December 12, 2017 at 6:19 am

"Uber's business model can never produce sustainable profits"

Two words not in my vocabulary are "Never" and "Always"; that is a pretty absolute statement in a non-absolute environment. The same environment has produced the "Silicon Valley Growth Model", with 15x earnings companies like NVIDIA, FB and Tesla (the average earnings/stock price ratio in the dot com bubble was 10x). Will people pay ridiculous amounts of money for a company with no underlying fundamentals? You damn right they will! Please stop with the "I know all, nobody knows anything," especially about the psychology and irrationality of markets, which are made up of irrational people/investors/traders.

JohnnySacks , December 12, 2017 at 7:34 am

My thoughts exactly. Seems the only possible recovery for the investors is a perfectly engineered legendary pump and dump IPO scheme. Risky, but there's a lot of fools out there and many who would also like to get on board early in the ride in fear of missing out on all the money to be hoovered up from the greater fools. Count me out.

SoCal Rhino , December 12, 2017 at 8:30 am

The author clearly distinguishes between GAAP profitability and valuations, which is after all rather the point of the series. And he makes a more nuanced point than the half sentence you have quoted without context or any indication that you omitted a portion. Did you miss the part about how Uber would have a strong incentive to share the evidence of a network effect or other financial story that pointed the way to eventual profit? Otherwise (my words) it is the classic sell at a loss, make it up with volume path to liquidation.

tegnost , December 12, 2017 at 9:52 am

apples and oranges comparison. nvidia has lots and lots of patented tech that produces revenue; facebook has a kajillion admittedly irrational users, but those users drive massive ad sales (as just one example of how that company capitalizes itself); and tesla makes an actual car, using technology that inspires its buyers (the put-your-money-where-your-mouth-is crowd). It can't be denied that, whatever tesla's faults are, battery tech is not one of them, that intellectual property is worth a lot, and tesla's investors are in on that real business, profitable or otherwise.

Uber is an iphone app. They lose money and have no path to profitability (unless it's the theory you espouse, that people are unintelligent, so even unintelligent ideas work to fleece them). This article touches on one of the great things about the time we now inhabit: uber drivers could bail en masse. There are two sides to low-attachment employment you can end easily; the drivers can delete the uber app as soon as another iphone app comes along that gets them a better return.

allan , December 12, 2017 at 6:52 am

Yet another source (unintended) of subsidies for Uber, Lyft, etc., which might or might not have been mentioned earlier in the series:

Airports Are Losing Money as Ride-Hailing Services Grow [NYT]

For many air travelers, getting to and from the airport has long been part of the whole miserable experience. Do they drive and park in some distant lot? Take mass transit or a taxi? Deal with a rental car?

Ride-hailing services like Uber and Lyft are quickly changing those calculations. That has meant a bit less angst for travelers.

But that's not the case for airports. Travelers' changing habits, in fact, have begun to shake the airports' financial underpinnings. The money they currently collect from ride-hailing services does not compensate for the lower revenues from the other sources.

At the same time, some airports have had to add staff to oversee the operations of the ride-hailing companies, the report said. And with more ride-hailing vehicles on the roads outside terminals, there's more congestion.

Socialize the losses, privatize the gains, VC-ize the subsidies.

Thuto , December 12, 2017 at 6:55 am

The cold hard truth is that Uber is backed into a corner with severely limited abilities to tweak the numbers on either the supply or the demand side: cut driver compensation and they trigger driver churn (as has already been demonstrated), increase fare prices for riders and riders defect to cheaper alternatives. The only question is how long can they keep the show going before the lights go out, slick marketing and propaganda can only take you so far, and one assumes the dumb money has a finite supply of patience and will at some point begin asking the tough questions.

Louis Fyne , December 12, 2017 at 8:35 am

The irony is that Uber would have been a perfectly fine, very profitable mid-sized company if Uber stuck with its initial model -- sticking to dense cities with limited parking, limiting driver supply, and charging a premium price for door-to-door delivery, whether by livery or a regular sedan. And then perhaps branching into robo-cars.

But somehow Uber/board/Travis got suckered into the siren call of self-driving cars, triple-digit user growth, and being in the top 100 US cities and on every continent.

Thuto , December 12, 2017 at 11:30 am

I've shared a similar sentiment in one of the previous posts about Uber. But operating profitably in a decent-sized niche doesn't fit well with ambitions of global domination. For Uber to be "right-sized", an admission of folly would have to be made, its managers and investors would have to transcend the sunk cost fallacy in their strategic decision making, and said investors would have to accept massive hits on their invested capital. The cold, hard reality of being blindsided and kicked to the curb in the smartphone business forced RIM/Blackberry to right-size, and they may yet have a profitable future as an enterprise-facing software and services company. Uber would benefit from that form of sober-mindedness, but I wouldn't hold my breath.

David Carl Grimes , December 12, 2017 at 6:57 am

The question is: Why did Softbank invest in Uber?

Michael Fiorillo , December 12, 2017 at 9:33 am

I know nothing about Softbank or its management, but I do know that the Japanese were the dumb money rubes in the late '80's, overpaying for trophy real estate they lost billions on.

Until informed otherwise, that's my default assumption

JimTan , December 12, 2017 at 10:50 am

Softbank possibly looking to buy more Uber shares at a 30% discount is very odd. Uber had a Series G funding round in June 2016 where a $3.5 billion investment from Saudi Arabia's Public Investment Fund resulted in its current $68 billion valuation. Now apparently Softbank wants to lead a new $6 billion funding round to buy the shares of Uber employees and early investors at a 30% discount from this last "valuation". It's odd because Saudi Arabia's Public Investment Fund has pledged $45 billion to SoftBank's Vision Fund, an amount which was supposed to come from the proceeds of its pending Aramco IPO. If the Uber bid is linked to SoftBank's Vision Fund, or KSA money, then it's not clear why this investor might be looking to literally 'double down' from $3.5 billion to $6 billion on a declining investment.

Yves Smith Post author , December 12, 2017 at 11:38 am

SoftBank has not yet invested. Its tender is still open. If it does not get enough shares at a price it likes, it won't invest.

As to why, I have no idea.

Robert McGregor , December 12, 2017 at 7:04 am

"Growth and Efficiency" are the sine qua non of Neoliberalism. Kalanick's "hype brilliance" was to con the market with "revenue growth" and signs of efficiency, and hopes of greater efficiency, and make most people just overlook the essential fact that Uber is the most unprofitable company of all time!

divadab , December 12, 2017 at 7:19 am

What comprises "Uber Expenses"? 2014 – $1.06 billion; 2015 $3.33 billion; 2016 $9.65 billion; forecast 2017 $11.418 billion!!!!!! To me this is the big question – what are they spending $10 billion per year on?

Also – why did driver share go from 68% in 2016 to 80% in 2017? If you use 68% as in 2016, 2017 Uber revenue is $11.808 billion, which means a bit better than break-even EBITDA, assuming Uber expenses are as stated $11.428 billion.

Perhaps not so bleak as the article presents, although I would not invest in this thing.

Phil in Kansas City , December 12, 2017 at 7:55 am

I have the same question: What comprises over 11 billion dollars in expenses in 2017? Could it be they are paying out dividends to the early investors? Which would mean they are cannibalizing their own company for the sake of the VC! How long can this go on before they'll need a new infusion of cash?

lyman alpha blob , December 12, 2017 at 2:37 pm

The Saudis have thrown a few billion Uber's way and they aren't necessarily known as the smart money.

Maybe the pole dancers have started chipping in too, as they are for bitcoin.

Vedant Desai , December 12, 2017 at 10:37 am

Oh, the article does answer your 2nd question. Read this paragraph:-

Since mass driver defections would cause passenger volume growth to collapse completely , Uber was forced to reverse these cuts in 2017 and increased the driver share from 68% to 80%. This meant that Uber's corporate revenue, which had grown over 300% in 2015 and over 200% in 2016 will probably only grow by about 15% in 2017.

As for the 1st, read this line in the article:-

There are undoubtedly a number of things Uber could do to reduce losses at the margin, but it is difficult to imagine it could suddenly find the $4-5 billion in profit improvement needed merely to reach breakeven.

Louis Fyne , December 12, 2017 at 8:44 am

in addition to all the points listed in the article/comments, the absolute biggest flaw with Uber is that Uber HQ conditioned its customers on (a) cheap fares and (b) that a car is available within minutes (1-5 if in a big city).

Those two are not mutually compatible in the long-term.

Alfred , December 12, 2017 at 9:49 am

Thus (a) "We cost less" and (b) "We're more convenient" -- aren't those also the advantages that Walmart claims and feeds as a steady diet to its ever hungry consumers? Often if not always, disruption may repose upon delusion.

Martin Finnucane , December 12, 2017 at 11:06 am

Uber's business model could never produce sustainable profits unless it was able to exploit significant anti-competitive market power.

Upon that dependent clause hangs the future of capitalism, and – dare I say it? – its inevitable demise.

Altandmain , December 12, 2017 at 11:09 am

When this Uber madness blows up, I wonder if people will finally begin to discuss the brutal reality of Silicon Valley's so called "disruption".

It is heavily built around the idea of economic exploitation. Uber drivers, especially when the true costs of operating an Uber (including vehicle depreciation) are factored in, often make very little per hour driven, especially if they don't get the surge money.

Instacart is another example. They are paying the delivery operators very little.

Jim A. , December 12, 2017 at 12:21 pm

At a fundamental level, I think that the Silicon Valley "disruption" model only works for markets (like software) where the marginal cost of production is de minimis and the products can be protected by IP laws. Volume and market power really work in those cases. But out here in meat-space, where actual material and labor are big inputs to each item sold, you can never just sit back on your laurels and rake in the money. Somebody else will always be able to come in and make an equivalent product. If they can do it more cheaply, you are in trouble.

Altandmain , December 12, 2017 at 5:40 pm

There aren't that many areas in goods and services where the marginal costs are very low.

Software is actually quite unique in that regard, costing merely the bandwidth and permanent storage space to store.

Let's see:

1. From the article, they cannot go public and have limited ways to raise more money. An IPO with its more stringent disclosure requirements would expose them.

2. They tried lowering driver compensation and found that model unsustainable.

3. There are no benefits to expanding in terms of economies of scale.

From where I am standing, it looks like a lot of industries have similar barriers. Silicon Valley is not going to be able to disrupt those.

Tesla, another Silicon Valley company, seems to be struggling to mass-produce its Model 3 and deliver an electric car that breaks even and is reliable, while disrupting the industry in the ways that Elon Musk attempted to hype up.

So that basically leaves services and manufacturing out for Silicon Valley disruption.

Joe Bentzel , December 12, 2017 at 2:19 pm

UBER has become a "too big to fail" startup because of all the different tentacles of capital from various Tier 1 VCs and investment bankers.

VCs have admitted openly that UBER is a subsidized business, meaning its product is sold below market value, and the losses reflect that subsidization. The whole "2 sided platform" argument is just marketecture to hustle more investors. It's a form of service "dumping" that puts legacy businesses into bankruptcy. Back during the dotcom bubble one popular investment banker (Paul Deninger) characterized this model as "Terrorist Competition", i.e. coffers full of invested cash to commoditize the market and drive out competition.

UBER is an absolute disaster that has forked the startup model in Silicon Valley in order to drive total dependence on venture capital by founders. And its current diversification into "autonomous vehicles", food delivery, et al is simply more evidence that the company will never be profitable, due to its wacky "blitzscaling" approach of layering on new "businesses" prior to achieving "fit" in its current one.

Its economic model has also metastasized into a form of startup cancer that is killing Silicon Valley as a "technology" innovator. Now it's all cargo cult marketing BS tied to "strategic capital".

UBER is the victory of venture capital and user subsidized startups over creativity by real entrepreneurs.

Its shadow is long and that's why this company should be .. wait for it .. UNBUNDLED (the new silicon valley word attached to that other BS religion called "disruption"). Call it a great unbundling and you can break up this monster corp any way you want.

Naked Capitalism is a great website.

Phil in KC , December 12, 2017 at 3:20 pm

1. I Agree with your last point.

2. The elevator pitch for Uber: subsidize rides to attract customers, put the competition out of business, and then enjoy an unregulated monopoly, all while exploiting economically ignorant drivers–ahem–"partners."

3. But more than one can play that game, and

4. Cab and livery companies are finding ways to survive!

Phil in KC , December 12, 2017 at 3:10 pm

If subsidizing rides is counted as an expense (not being an accountant, I would guess it is), then whether the subsidy goes to the driver or the passenger, that would account for the ballooning expenses, to answer my own question. Otherwise, the overhead for operating what Uber describes as a tech company should be minimal: A billion should fund a decent headquarters with staff, plus field offices in, say, 100 U.S. cities. However, their global pretensions are probably burning cash like crazy. On top of that, I wonder what the exec compensation is like?

After reading HH's initial series, I made a crude, back-of-the-envelope calculation that Uber would run out of money sometime in the third fiscal quarter of 2018, but that was based on assuming losses were stabilizing in the range of 3 billion a year. Not so, according to the article. I think crunch time is rapidly approaching. If so, then SoftBank's tender offer may look quite appetizing to VC firms and to any Uber employee able to cash in their options. I think there is a way to make a re-envisioned Uber profitable, and with a more independent board, they may be able to restructure the company to show a pathway to profitability before the IPO. But time is running out.

A not insignificant question is the recruitment and retention of the front line "partners." It would seem to me that at some point, Uber will run out of economically ignorant drivers with good manners and nice cars. I would be very interested to know how many drivers give up Uber and other ride-sharing gigs once the 1099s start flying at the beginning of the year. One of the harsh realities of owning a business or being a contractor is the humble fact that you get paid LAST!

Jan Stickle , December 12, 2017 at 5:00 pm

We became instant Uber riders while spending holidays with relatives in San Diego. While their model is indeed unique from a rider perspective, it was the driver pool that fascinated me. These are not professional livery drivers, but rather freebooters of all stripes driving for various reasons. The remuneration they receive cannot possibly generate much income after expenses, never mind the problems associated with IRS filing as independent contractors.

One guy was just cruising listening to music; cooler to get paid for it than just sitting home! A young lady was babbling and gesticulating non stop about nothing coherent and appeared to be on some sort of stimulant. A foreign gentleman, very professional, drove for extra money when not at his regular job. He was the only one who had actually bought a new Prius for this gig, hoping to pay it off in two years.

This is indeed a brave new world. There was a period in Nicaragua just after the Contra war ended when citizens emerged from their homes and hit the streets in large numbers, desperately looking for income. Every car was a taxi and there was a bipedal mini Walmart at every city intersection as individuals sold everything and anything in a sort of euphoric optimism towards the future. Reality just hadn't caught up with them yet .

[Dec 09, 2017] How to rsync only a specific list of files - Stack Overflow

Notable quotes:
"... The filenames that are read from the FILE are all relative to the source dir ..."
Dec 09, 2017 | stackoverflow.com

ash, May 11, 2015 at 20:05

There is a flag --files-from that does exactly what you want. From man rsync :
--files-from=FILE

Using this option allows you to specify the exact list of files to transfer (as read from the specified FILE or - for standard input). It also tweaks the default behavior of rsync to make transferring just the specified files and directories easier:

The filenames that are read from the FILE are all relative to the source dir -- any leading slashes are removed and no ".." references are allowed to go higher than the source dir. For example, take this command:

rsync -a --files-from=/tmp/foo /usr remote:/backup

If /tmp/foo contains the string "bin" (or even "/bin"), the /usr/bin directory will be created as /backup/bin on the remote host. If it contains "bin/" (note the trailing slash), the immediate contents of the directory would also be sent (without needing to be explicitly mentioned in the file -- this began in version 2.6.4). In both cases, if the -r option was enabled, that dir's entire hierarchy would also be transferred (keep in mind that -r needs to be specified explicitly with --files-from, since it is not implied by -a). Also note that the effect of the (enabled by default) --relative option is to duplicate only the path info that is read from the file -- it does not force the duplication of the source-spec path (/usr in this case).

In addition, the --files-from file can be read from the remote host instead of the local host if you specify a "host:" in front of the file (the host must match one end of the transfer). As a short-cut, you can specify just a prefix of ":" to mean "use the remote end of the transfer". For example:

rsync -a --files-from=:/path/file-list src:/ /tmp/copy

This would copy all the files specified in the /path/file-list file that was located on the remote "src" host.

If the --iconv and --protect-args options are specified and the --files-from filenames are being sent from one host to another, the filenames will be translated from the sending host's charset to the receiving host's charset.

NOTE: sorting the list of files in the --files-from input helps rsync to be more efficient, as it will avoid re-visiting the path elements that are shared between adjacent entries. If the input is not sorted, some path elements (implied directories) may end up being scanned multiple times, and rsync will eventually unduplicate them after they get turned into file-list elements.

Nicolas Mattia, Feb 11, 2016 at 11:06

Note that you still have to specify the directory where the files listed are located, for instance: rsync -av --files-from=file-list . target/ for copying files from the current dir. – Nicolas Mattia Feb 11 '16 at 11:06

ash, Feb 12, 2016 at 2:25

Yes, and to reiterate: The filenames that are read from the FILE are all relative to the source dir . – ash Feb 12 '16 at 2:25

Michael ,Nov 2, 2016 at 0:09

if the files-from file has anything starting with .. rsync appears to ignore the .. giving me an error like rsync: link_stat "/home/michael/test/subdir/test.txt" failed: No such file or directory (in this case running from the "test" dir and trying to specify "../subdir/test.txt", which does exist). – Michael Nov 2 '16 at 0:09

xxx,

The --files-from= parameter needs the source directory to be "/" if you want to keep the absolute paths intact. So your command would become something like below:
rsync -av --files-from=/path/to/file / /tmp/

This is useful when, for example, there are a large number of files and you want to copy all of them to some path. You would find the files and write the output to a file like below:

find /var/* -name '*.log' > file
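
Putting the two pieces together, a minimal sketch (the paths are placeholders): quote the glob so the shell does not expand it, and use "/" as the source dir so the absolute paths in the list are preserved on the destination side:

# build the list of files to transfer
find /var/ -name '*.log' > /tmp/file-list

# copy exactly those files, recreating their absolute paths under /backup
rsync -av --files-from=/tmp/file-list / /backup/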

[Dec 09, 2017] linux - What does the line '#!/bin/sh -e' do

Dec 09, 2017 | stackoverflow.com

,

That line defines what program will execute the given script. For sh, normally that line should start with the # character, like so:
#!/bin/sh -e

The -e flag's long name is errexit, causing the script to immediately exit on the first error.
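
A small sketch of what errexit changes in practice (the file name is made up):

#!/bin/sh -e
# the copy below fails because the source file does not exist;
# with -e the script aborts here, without -e it would keep going
cp /tmp/does-not-exist /tmp/copy
echo "never printed when the cp above fails"

The same behavior can also be switched on inside a script with: set -e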

[Dec 07, 2017] First Rule of Usability Don't Listen to Users

Notable quotes:
"... So, do users know what they want? No, no, and no. Three times no. ..."
Dec 07, 2017 | www.nngroup.com

But ultimately, the way to get user data boils down to the basic rules of usability

... ... ...

So, do users know what they want? No, no, and no. Three times no.

Finally, you must consider how and when to solicit feedback. Although it might be tempting to simply post a survey online, you're unlikely to get reliable input (if you get any at all). Users who see the survey and fill it out before they've used the site will offer irrelevant answers. Users who see the survey after they've used the site will most likely leave without answering the questions. One question that does work well in a website survey is "Why are you visiting our site today?" This question goes to users' motivation and they can answer it as soon as they arrive.

[Dec 07, 2017] The rogue DHCP server

Notable quotes:
"... from Don Watkins ..."
Dec 07, 2017 | opensource.com

from Don Watkins

I am a liberal arts person who wound up being a technology director. With the exception of 15 credit hours earned on my way to a Cisco Certified Network Associate credential, all of the rest of my learning came on the job. I believe that learning what not to do from real experiences is often the best teacher. However, those experiences can frequently come at the expense of emotional pain. Prior to my Cisco experience, I had very little experience with TCP/IP networking and the kinds of havoc I could create, albeit innocently, due to my lack of understanding of the nuances of routing and DHCP.

At the time our school network was an Active Directory domain with DHCP and DNS provided by a Windows 2000 server. All of our staff access to email, the Internet, and network shares was served this way. I had been researching the use of the K12 Linux Terminal Server (K12LTSP) project and had built a Fedora Core box with a single network card in it. I wanted to see how well my new project worked, so, without talking to my network support specialists, I connected it to our main LAN segment. In a very short period of time our help desk phones were ringing with principals, teachers, and other staff who could no longer access their email, printers, shared directories, and more. I had no idea that the Windows clients would see another DHCP server on our network (my test computer) and pick up an IP address and DNS information from it.

I had unwittingly created a "rogue" DHCP server and was oblivious to the havoc that it would create. I shared with the support specialist what had happened, and I can still see him making a bee-line for that rogue computer and disconnecting it from the network. All of our client computers had to be rebooted, along with many of our switches, which resulted in a lot of confusion and lost time due to my ignorance. That's when I learned that it is best to test new products on their own subnet.
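
For what it's worth, one quick way to spot a rogue DHCP server is to broadcast a DHCPDISCOVER from a test machine and see how many servers answer. A sketch, assuming nmap with its standard NSE scripts is installed (the interface name is a placeholder):

# list every DHCP server that responds on the local segment;
# more than one responder usually means a rogue server is present
sudo nmap --script broadcast-dhcp-discover -e eth0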

[Dec 03, 2017] Business Has Killed IT With Overspecialization by Charlie Schluting

Highly recommended!
Notable quotes:
"... What happened to the old "sysadmin" of just a few years ago? We've split what used to be the sysadmin into application teams, server teams, storage teams, and network teams. There were often at least a few people, the holders of knowledge, who knew how everything worked, and I mean everything. ..."
"... Now look at what we've done. Knowledge is so decentralized we must invent new roles to act as liaisons between all the IT groups. Architects now hold much of the high-level "how it works" knowledge, but without knowing how any one piece actually does work. In organizations with more than a few hundred IT staff and developers, it becomes nearly impossible for one person to do and know everything. This movement toward specializing in individual areas seems almost natural. That, however, does not provide a free ticket for people to turn a blind eye. ..."
"... Does your IT department function as a unit? Even 20-person IT shops have turf wars, so the answer is very likely, "no." As teams are split into more and more distinct operating units, grouping occurs. One IT budget gets split between all these groups. Often each group will have a manager who pitches his needs to upper management in hopes they will realize how important the team is. ..."
"... The "us vs. them" mentality manifests itself at all levels, and it's reinforced by management having to define each team's worth in the form of a budget. One strategy is to illustrate a doomsday scenario. If you paint a bleak enough picture, you may get more funding. Only if you are careful enough to illustrate the failings are due to lack of capital resources, not management or people. A manager of another group may explain that they are not receiving the correct level of service, so they need to duplicate the efforts of another group and just implement something themselves. On and on, the arguments continue. ..."
Apr 07, 2010 | Enterprise Networking Planet

What happened to the old "sysadmin" of just a few years ago? We've split what used to be the sysadmin into application teams, server teams, storage teams, and network teams. There were often at least a few people, the holders of knowledge, who knew how everything worked, and I mean everything. Every application, every piece of network gear, and how every server was configured -- these people could save a business in times of disaster.

Now look at what we've done. Knowledge is so decentralized we must invent new roles to act as liaisons between all the IT groups. Architects now hold much of the high-level "how it works" knowledge, but without knowing how any one piece actually does work. In organizations with more than a few hundred IT staff and developers, it becomes nearly impossible for one person to do and know everything. This movement toward specializing in individual areas seems almost natural. That, however, does not provide a free ticket for people to turn a blind eye.

Specialization

You know the story: Company installs new application, nobody understands it yet, so an expert is hired. Often, the person with a certification in using the new application only really knows how to run that application. Perhaps they aren't interested in learning anything else, because their skill is in high demand right now. And besides, everything else in the infrastructure is run by people who specialize in those elements. Everything is taken care of.

Except, how do these teams communicate when changes need to take place? Are the storage administrators teaching the Windows administrators about storage multipathing; or worse logging in and setting it up because it's faster for the storage gurus to do it themselves? A fundamental level of knowledge is often lacking, which makes it very difficult for teams to brainstorm about new ways evolve IT services. The business environment has made it OK for IT staffers to specialize and only learn one thing.

If you hire someone certified in the application, operating system, or network vendor you use, that is precisely what you get. Certifications may be a nice filter to quickly identify who has direct knowledge in the area you're hiring for, but often they indicate specialization or compensation for lack of experience.

Resource Competition

Does your IT department function as a unit? Even 20-person IT shops have turf wars, so the answer is very likely, "no." As teams are split into more and more distinct operating units, grouping occurs. One IT budget gets split between all these groups. Often each group will have a manager who pitches his needs to upper management in hopes they will realize how important the team is.

The "us vs. them" mentality manifests itself at all levels, and it's reinforced by management having to define each team's worth in the form of a budget. One strategy is to illustrate a doomsday scenario. If you paint a bleak enough picture, you may get more funding. Only if you are careful enough to illustrate the failings are due to lack of capital resources, not management or people. A manager of another group may explain that they are not receiving the correct level of service, so they need to duplicate the efforts of another group and just implement something themselves. On and on, the arguments continue.

Most often, I've seen competition between server groups result in horribly inefficient uses of hardware. For example, what happens in your organization when one team needs more server hardware? Assume that another team has five unused servers sitting in a blade chassis. Does the answer change? No, it does not. Even in test environments, sharing doesn't often happen between IT groups.

With virtualization, some aspects of resource competition get better and some remain the same. When first implemented, most groups will be running their own type of virtualization for their platform. The next step, I've most often seen, is for test servers to get virtualized. If a new group is formed to manage the virtualization infrastructure, virtual machines can be allocated to various application and server teams from a central pool and everyone is now sharing. Or, they begin sharing and then demand their own physical hardware to be isolated from others' resource hungry utilization. This is nonetheless a step in the right direction. Auto migration and guaranteed resource policies can go a long way toward making shared infrastructure, even between competing groups, a viable option.

Blamestorming

The most damaging side effect of splitting into too many distinct IT groups is the reinforcement of an "us versus them" mentality. Aside from the notion that specialization creates a lack of knowledge, blamestorming is what this article is really about. When a project is delayed, it is all too easy to blame another group. The SAN people didn't allocate storage on time, so another team was delayed. That is the timeline of the project, so all work halted until that hiccup was restored. Having someone else to blame when things get delayed makes it all too easy to simply stop working for a while.

More related to the initial points at the beginning of this article, perhaps, is the blamestorm that happens after a system outage.

Say an ERP system becomes unresponsive a few times throughout the day. The application team says it's just slowing down, and they don't know why. The network team says everything is fine. The server team says the application is "blocking on IO," which means it's a SAN issue. The SAN team says there is nothing wrong, and other applications on the same devices are fine. You've run through nearly every team, but still without an answer. The SAN people don't have access to the application servers to help diagnose the problem. The server team doesn't even know how the application runs.

See the problem? Specialized teams are distinct and by nature adversarial. Specialized staffers often relegate themselves into a niche knowing that as long as they continue working at large enough companies, "someone else" will take care of all the other pieces.

I unfortunately don't have an answer to this problem. Maybe rotating employees between departments will help. They gain knowledge and also get to know other people, which should lessen the propensity to view them as outsiders.

[Dec 03, 2017] Nokia Shareholders Fight Back

On the topic of outsourcing, IMO it can be cheaper if done right. On paper it always seems like a great idea, but in practice it's not always the best idea financially and/or getting the same or better result in comparison to keeping it in-house. I've worked for companies where they have outsourced a particular department/function to companies where I am the one the job is outsourced to. My observation has been the success of getting projects done (e.g.: programing) or facilitating a role (e.g.: sys admin) rely on a few factors regardless of outsourcing or not.
Notable quotes:
"... On the topic of outsourcing, IMO it can be cheaper if done right. On paper it always seems like a great idea, but in practice it's not always the best idea financially and/or getting the same or better result in comparison to keeping it in-house. I've worked for companies where they have outsourced a particular department/function to companies where I am the one the job is outsourced to. My observation has been the success of getting projects done (e.g.: programing) or facilitating a role (e.g.: sys admin) rely on a few factors regardless of outsourcing or not. ..."
Slashdot

noc007 (633443)

On the topic of outsourcing, IMO it can be cheaper if done right. On paper it always seems like a great idea, but in practice it's not always the best idea financially and/or getting the same or better result in comparison to keeping it in-house. I've worked for companies where they have outsourced a particular department/function to companies where I am the one the job is outsourced to. My observation has been the success of getting projects done (e.g.: programing) or facilitating a role (e.g.: sys admin) rely on a few factors regardless of outsourcing or not.

The first is a golden rule of sorts on doing anything:

Fast, cheap, good: you can only pick two; NO exceptions. I've encountered so many upper management types that foolishly think they can get away with having all three. In my experience, 9/10 of the time it turns out a lack of quality bites them in the butt sometime down the road when they assumed they somehow managed to achieve all three.

The second is communication. Mostly everyone in at least the US has experienced the pain of being subjected to some company's outsourced customer service and/or tech support that can't effectively communicate with both parties on the same page of understanding one another. I really shouldn't need to explain why communication, understanding one another is so important. Sadly this is something I have to constantly explain to my current boss with events like today where my non-outsourced colleague rebooted a number of production critical servers when he was asked to reboot just one secondary server.

Third is the employee's skill in doing the job. Again, another obvious one, but I've observed that it isn't always on the hiring menu. Additionally I've seen some people that interview well, but couldn't create a "Hello World" HTML page for a web developer position as an example. There's no point in hiring or keeping a hired individual to do a job that they lack the skill to do; even if it's an entry-level position with training, that person should be willing to put forth the effort to learn and take notes. I accept that everyone has their own unique skills that can aid or hinder their ability to learn and be proficient with a particular task. However, I firmly believe anyone can learn to do anything as long as they put their mind to it. I barely have any artistic ability and my drawing skills are stick figures at best (XKCD is miles ahead of me); if I were to put forth the effort to learn how to draw and paint, I could become a good artist. I taught an A+ technician certification class at a tech school a while back and I had a retired Marine that served in the Vietnam War as one of my students. One could argue his best skill was killing and blowing stuff up. He worked hard and learned to be a technician and passed CompTIA's certification test without a problem. That leads me to the next point.

Lastly is the attitude of the end employee doing the actual work. It boggles my mind how so many managers lose the plot when it comes to employee morale and motivation. Productivity generally is improved when those two are improved and it usually doesn't have to involve spending a bunch of money. The employee's attitude should be getting the work done correctly in a reasonable amount of time. Demanding it is a poor approach. Poisoning an employee will result in poisoning the company in a small manner all the way up to the failure of the company. Employees should be encouraged through actual morale improvements, positive motivation, and incentives for doing more work at the same and/or better quality level.

Outsourcing or keeping things in house can be successful and possibly economical if approached correctly with the appropriate support of upper management.

Max Littlemore (1001285)

How dramatic? Isn't outsourcing done (like it or not) to reduce costs?

Outsourcing is done to reduce the projected costs that PHBs see. In reality, outsourcing can lead to increased costs and delays due to time zone differences and language/cultural barriers.

I have seen it work reasonably well, but only when the extra effort and delays caused by the increased need for rework that comes with complex software projects are planned for. If you are working with others on software, it is so much quicker to produce quality software if the person who knows the business requirements is sitting right next to the person doing design and the person cutting code and the person doing the testing, etc, etc.

If these people or groups are scattered around the world with different cultures and native languages, communication can suffer, increasing misunderstanding and reducing the quality. I have personally seen this lead to massive increase in code defects in a project that went from in house development to outsourced.

Also, time zone differences cause problems. I have noticed that the further west people live, the less likely they are to take into account how far behind they are. Working with people who fail to realise that their Monday morning is the next day for someone else, or that by the time they are halfway through Friday, others are already on their weekend is not only frustrating, it leads to slow turn around of bug fixes, etc.

Yeah, I'm told outsourcing keeps costs down, but I am yet to see conclusive evidence of that in the real world. At least in complex development. YMMV for support/call centre stuff.

-- I don't therefore I'm not.

[Dec 03, 2017] IT workers voices heard in the Senate, confidentially

The resentment against outsourcing has been brewing for a long time.
Notable quotes:
"... Much of the frustration focused on the IT layoffs at Southern California Edison , which is cutting 500 IT workers after hiring two offshore outsourcing firms. This has become the latest example for critics of the visa program's capacity for abuse. ..."
"... Infosys whistleblower Jay Palmer, who testified, and is familiar with the displacement process, told Sessions said these workers will get sued if they speak out. "That's the fear and intimidation that these people go through - they're blindsided," said Palmer. ..."
"... Moreover, if IT workers refuse to train their foreign replacement, "they are going to be terminated with cause, which means they won't even get their unemployment insurance," said Ron Hira, an associate professor at Howard University, who also testified. Affected tech workers who speak out publicly and use their names, "will be blackballed from the industry," he said. ..."
"... Hatch, who is leading the effort to increase the H-1B cap, suggested a willingness to raise wage levels for H-1B dependent employers. They are exempt from U.S. worker protection rules if the H-1B worker is paid at least $60,000 or has a master's degree, a figure that was set in law in 1998. Hatch suggested a wage level of $95,000. ..."
"... Sen. Dick Durbin, (Dem-Ill.), who has joined with Grassley on legislation to impose some restrictions on H-1B visa use -- particularly in offshoring -- has argued for a rule that would keep large firms from having more than 50% of their workers on the visa. This so-called 50/50 rule, as Durbin has noted, has drawn much criticism from India, where most of the affected companies are located. ..."
"... "I want to put the H-1B factories out of business," said Durbin. ..."
"... Hal Salzman, a Rutgers University professor who studies STEM (Science, Technology, Engineering and Math) workforce issues, told the committee that the IT industry now fills about two-thirds of its entry-level positions with guest workers. "At the same time, IT wages have stagnated for over a decade," he said. ..."
"... H-1B supporters use demand for the visa - which will exceed the 85,000 cap -- as proof of economic demand. But Salzman argues that U.S. colleges already graduate more scientists and engineers than find employment in those fields, about 200,000 more. ..."
Mar 18, 2015 | Network World

A Senate Judiciary Committee hearing today on the H-1B visa offered up a stew of policy arguments, positioning and frustration.

Much of the frustration focused on the IT layoffs at Southern California Edison, which is cutting 500 IT workers after hiring two offshore outsourcing firms. This has become the latest example for critics of the visa program's capacity for abuse.

Sen. Charles Grassley (R-Iowa), the committee chair who has long sought H-1B reforms, said he invited Southern California Edison officials "to join us today" and testify. "I thought they would want to defend their actions and explain why U.S. workers have been left high and dry," said Grassley. "Unfortunately, they declined my invitation."

The hearing, by the people picked to testify, was weighted toward critics of the program, prompting a response by industry groups.

Compete America, the Consumer Electronics Association, FWD.us, the U.S. Chamber of Commerce and many others submitted a letter to the committee to rebut the "flawed studies" and "non-representative anecdotes used to create myths that suggest immigration harms American and American workers."

The claim that H-1B critics are using "anecdotes" to make their points (which include layoff reports at firms such as Edison) is a naked example of the pot calling the kettle black. The industry musters anecdotal stories in support of its positions readily and often. It makes available to the press and congressional committees people who came to the U.S. on an H-1B visa who started a business or took on a critical role in a start-up. These people are free to share their often compelling and admirable stories.

The voices of the displaced, who may be in fear of losing their homes, are thwarted by severance agreements.

The committee did hear from displaced workers, including some at Southern California Edison. But the communications with these workers are being kept confidential.

"I got the letters here from people, without the names," said Sen. Jeff Sessions (R-Ala.). "If they say what they know and think about this, they will lose the buy-outs."

Infosys whistleblower Jay Palmer, who testified and is familiar with the displacement process, told Sessions that these workers will get sued if they speak out. "That's the fear and intimidation that these people go through - they're blindsided," said Palmer.

Moreover, if IT workers refuse to train their foreign replacement, "they are going to be terminated with cause, which means they won't even get their unemployment insurance," said Ron Hira, an associate professor at Howard University, who also testified. Affected tech workers who speak out publicly and use their names, "will be blackballed from the industry," he said.

While lawmakers voiced either strong support or criticism of the program, there was interest in crafting legislation that impose some restrictions on H-1B use.

"America and American companies need more high-skilled workers - this is an undeniable fact," said Sen. Orrin Hatch (R-Utah). "America's high-skilled worker shortage has become a crisis."

Hatch, who is leading the effort to increase the H-1B cap, suggested a willingness to raise wage levels for H-1B dependent employers. They are exempt from U.S. worker protection rules if the H-1B worker is paid at least $60,000 or has a master's degree, a figure that was set in law in 1998. Hatch suggested a wage level of $95,000.

Sen. Dick Durbin, (Dem-Ill.), who has joined with Grassley on legislation to impose some restrictions on H-1B visa use -- particularly in offshoring -- has argued for a rule that would keep large firms from having more than 50% of their workers on the visa. This so-called 50/50 rule, as Durbin has noted, has drawn much criticism from India, where most of the affected companies are located.

"I want to put the H-1B factories out of business," said Durbin.

Durbin got some support for the 50/50 rule from one person testifying in support of expanding the cap, Bjorn Billhardt, the founder and president of Enspire Learning, an Austin-based company. Enspire creates learning development tools; Billhardt came to the U.S. as an exchange student and went from an H-1B visa to a green card to, eventually, citizenship.

"I actually think that's a reasonable provision," said Billhardt of the 50% visa limit. He said it could help, "quite a bit." At the same time, he urged lawmakers to raise the cap to end the lottery system now used to distribute visas once that cap is reached.

Today's hearing went well beyond the impact of H-1B use by outsourcing firms to the displacement of workers overall.

Hal Salzman, a Rutgers University professor who studies STEM (Science, Technology, Engineering and Math) workforce issues, told the committee that the IT industry now fills about two-thirds of its entry-level positions with guest workers. "At the same time, IT wages have stagnated for over a decade," he said.

H-1B supporters use demand for the visa - which will exceed the 85,000 cap -- as proof of economic demand. But Salzman argues that U.S. colleges already graduate more scientists and engineers than find employment in those fields, about 200,000 more.

"Asking domestic graduates, both native-born and immigrant, to compete with guest workers on wages is not a winning strategy for strengthening U.S. science, technology and innovation," said Salzman.

[Dec 02, 2017] BASH Shell How To Redirect stderr To stdout ( redirect stderr to a File )

Dec 02, 2017 | www.cyberciti.biz

BASH Shell: How To Redirect stderr To stdout (redirect stderr to a File)

Q. How do I redirect stderr to stdout? How do I redirect stderr to a file?

A. Bash and other modern shells provide an I/O redirection facility. There are 3 default standard files (standard streams) open:

[a] stdin – Used to get input (keyboard), i.e. data going into a program.

[b] stdout – Used to write information (screen)

[c] stderr – Used to write error messages (screen)

Understanding I/O streams numbers

The Unix / Linux standard I/O streams with numbers:

Handle Name Description
0 stdin Standard input
1 stdout Standard output
2 stderr Standard error
Redirecting the standard error stream to a file

The following will redirect program error message to a file called error.log:
$ program-name 2> error.log
$ command1 2> error.log

Redirecting the standard error (stderr) and stdout to file

Use the following syntax:
$ command-name &>file
OR
$ command > file-name 2>&1
Another useful example:
# find /usr/home -name .profile 2>&1 | more

Redirect stderr to stdout

Use the command as follows:
$ command-name 2>&1
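
One detail worth adding, since it trips people up: the order of redirections matters. A small illustration (the file name is arbitrary):

# stdout goes to out.log first, then stderr is pointed at the same place
command > out.log 2>&1

# wrong order for this purpose: stderr is duplicated to the terminal
# (where stdout pointed at that moment), and only stdout lands in the file
command 2>&1 > out.log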

[Dec 01, 2017] NSA hacks system administrators, new leak reveals

Highly recommended!
The "I hunt sys admins" approach is the most reasonable one if you want to get into some corporate network, so republication of this three-year-old post is just a reminder. Any sysadmin who accesses a corporate network from anything other than a dedicated computer over VPN (a corporate laptop) is endangering the corporation. As simple as that. The level of non-professionalism demonstrated by Hillary Clinton's IT staff suggests that this can be a problem in government too. After all, the Snowden documents are now studied by all major intelligence agencies of the world.
This also outlines the main danger of "shadow IT".
Notable quotes:
"... Journalist Ryan Gallagher reported that Edward Snowden , a former sys admin for NSA contractor Booz Allen Hamilton, provided The Intercept with the internal documents, including one from 2012 that's bluntly titled "I hunt sys admins." ..."
"... "Who better to target than the person that already has the 'keys to the kingdom'?" ..."
"... "They were written by an NSA official involved in the agency's effort to break into foreign network routers, the devices that connect computer networks and transport data across the Internet," ..."
"... "By infiltrating the computers of system administrators who work for foreign phone and Internet companies, the NSA can gain access to the calls and emails that flow over their networks." ..."
"... The latest leak suggests that some NSA analysts took a much different approach when tasked with trying to collect signals intelligence that otherwise might not be easily available. According to the posts, the author advocated for a technique that involves identifying the IP address used by the network's sys admin, then scouring other NSA tools to see what online accounts used those addresses to log-in. Then by using a ..."
"... that tricks targets into installing malware by being misdirected to fake Facebook servers, the intelligence analyst can hope that the sys admin's computer is sufficiently compromised and exploited. ..."
"... Once the NSA has access to the same machine a sys admin does, American spies can mine for a trove of possibly invaluable information, including maps of entire networks, log-in credentials, lists of customers and other details about how systems are wired. In turn, the NSA has found yet another way to, in theory, watch over all traffic on a targeted network. ..."
"... "Up front, sys admins generally are not my end target. My end target is the extremist/terrorist or government official that happens to be using the network some admin takes care of," the NSA employee says in the documents. ..."
"... "A key part of the protections that apply to both US persons and citizens of other countries is the mandate that information be in support of a valid foreign intelligence requirement, and comply with US Attorney General-approved procedures to protect privacy rights." ..."
"... Coincidentally, outgoing-NSA Director Keith Alexander said last year that he was working on drastically cutting the number of sys admins at that agency by upwards of 90 percent - but didn't say it was because they could be exploited by similar tactics waged by adversarial intelligence groups. ..."
Mar 21, 2014 | news.slashdot.org

In its quest to take down suspected terrorists and criminals abroad, the United States National Security Agency has adopted the practice of hacking the system administrators that oversee private computer networks, new documents reveal.

The Intercept has published a handful of leaked screenshots taken from an internal NSA message board where one spy agency specialist spoke extensively about compromising not the computers of specific targets, but rather the machines of the system administrators who control entire networks.

Journalist Ryan Gallagher reported that Edward Snowden, a former sys admin for NSA contractor Booz Allen Hamilton, provided The Intercept with the internal documents, including one from 2012 that's bluntly titled "I hunt sys admins."

According to the posts - some labeled "top secret" - NSA staffers should not shy away from hacking sys admins: a successful offensive mission waged against an IT professional with extensive access to a privileged network could provide the NSA with unfettered capabilities, the analyst acknowledged.

"Who better to target than the person that already has the 'keys to the kingdom'?" one of the posts reads.

"They were written by an NSA official involved in the agency's effort to break into foreign network routers, the devices that connect computer networks and transport data across the Internet," Gallagher wrote for the article published late Thursday. "By infiltrating the computers of system administrators who work for foreign phone and Internet companies, the NSA can gain access to the calls and emails that flow over their networks."

Since last June, classified NSA materials taken by Snowden and provided to certain journalists have exposed an increasing number of previously-secret surveillance operations that range from purposely degrading international encryption standards and implanting malware in targeted machines, to tapping into fiber-optic cables that transfer internet traffic and even vacuuming up data as it's moved into servers in a decrypted state.

The latest leak suggests that some NSA analysts took a much different approach when tasked with trying to collect signals intelligence that otherwise might not be easily available. According to the posts, the author advocated for a technique that involves identifying the IP address used by the network's sys admin, then scouring other NSA tools to see what online accounts used those addresses to log-in. Then by using a previously-disclosed NSA tool that tricks targets into installing malware by being misdirected to fake Facebook servers, the intelligence analyst can hope that the sys admin's computer is sufficiently compromised and exploited.

Once the NSA has access to the same machine a sys admin does, American spies can mine for a trove of possibly invaluable information, including maps of entire networks, log-in credentials, lists of customers and other details about how systems are wired. In turn, the NSA has found yet another way to, in theory, watch over all traffic on a targeted network.

"Up front, sys admins generally are not my end target. My end target is the extremist/terrorist or government official that happens to be using the network some admin takes care of," the NSA employee says in the documents.

When reached for comment by The Intercept, NSA spokesperson Vanee Vines said that, "A key part of the protections that apply to both US persons and citizens of other countries is the mandate that information be in support of a valid foreign intelligence requirement, and comply with US Attorney General-approved procedures to protect privacy rights."

Coincidentally, outgoing-NSA Director Keith Alexander said last year that he was working on drastically cutting the number of sys admins at that agency by upwards of 90 percent - but didn't say it was because they could be exploited by similar tactics waged by adversarial intelligence groups. Gen. Alexander's decision came just weeks after Snowden - previously one of around 1,000 sys admins working on the NSA's networks, according to Reuters - walked away from his role managing those networks with a trove of classified information.

[Nov 30, 2017] Will Robots Kill the Asian Century

This article is two years old, and not much has happened during those two years. But there is still a chance that highly automated factories can make manufacturing in the USA profitable again. The problem is that they will be even more profitable in East Asia ;-)
Notable quotes:
"... The National Interest ..."
The National Interest

The rise of technologies such as 3-D printing and advanced robotics means that the next few decades for Asia's economies will not be as easy or promising as the previous five.

OWEN HARRIES, the first editor, together with Robert Tucker, of The National Interest, once reminded me that experts (economists, strategists, business leaders and academics alike) tend to be relentless followers of intellectual fashion, and the learned, as Harold Rosenberg famously put it, a "herd of independent minds." Nowhere is this observation more apparent than in the prediction that we are already into the second decade of what will inevitably be an "Asian Century", a widely held but rarely examined view that Asia's continued economic rise will decisively shift global power from the Atlantic to the western Pacific Ocean.

No doubt the numbers appear quite compelling. In 1960, East Asia accounted for a mere 14 percent of global GDP; today that figure is about 27 percent. If linear trends continue, the region could account for about 36 percent of global GDP by 2030 and over half of all output by the middle of the century. As if symbolic of a handover of economic preeminence, China, which only accounted for about 5 percent of global GDP in 1960, will likely surpass the United States as the largest economy in the world over the next decade. If past record is an indicator of future performance, then the "Asian Century" prediction is close to a sure thing.

[Nov 29, 2017] Take This GUI and Shove It

Providing a great GUI for complex routers or Linux admin is hard. Of course there has to be a CLI, that's how pros get the job done. But a great GUI is one that teaches a new user to eventually graduate to using CLI.
Notable quotes:
"... Providing a great GUI for complex routers or Linux admin is hard. Of course there has to be a CLI, that's how pros get the job done. But a great GUI is one that teaches a new user to eventually graduate to using CLI. ..."
"... What would be nice is if the GUI could automatically create a shell script doing the change. That way you could (a) learn about how to do it per CLI by looking at the generated shell script, and (b) apply the generated shell script (after proper inspection, of course) to other computers. ..."
"... AIX's SMIT did this, or rather it wrote the commands that it executed to achieve what you asked it to do. This meant that you could learn: look at what it did and find out about which CLI commands to run. You could also take them, build them into a script, copy elsewhere, ... I liked SMIT. ..."
"... Cisco's GUI stuff doesn't really generate any scripts, but the commands it creates are the same things you'd type into a CLI. And the resulting configuration is just as human-readable (barring any weird naming conventions) as one built using the CLI. I've actually learned an awful lot about the Cisco CLI by using their GUI. ..."
"... Microsoft's more recent tools are also doing this. Exchange 2007 and newer, for example, are really completely driven by the PowerShell CLI. The GUI generates commands and just feeds them into PowerShell for you. So you can again issue your commands through the GUI, and learn how you could have done it in PowerShell instead. ..."
"... Moreover, the GUI authors seem to have a penchant to find new names for existing CLI concepts. Even worse, those names are usually inappropriate vagueries quickly cobbled together in an off-the-cuff afterthought, and do not actually tell you where the doodad resides in the menu system. With a CLI, the name of the command or feature set is its location. ..."
"... I have a cheap router with only a web gui. I wrote a two line bash script that simply POSTs the right requests to URL. Simply put, HTTP interfaces, especially if they implement the right response codes, are actually very nice to script. ..."
Slashdot

Deep End's Paul Venezia speaks out against the overemphasis on GUIs in today's admin tools, saying that GUIs are fine and necessary in many cases, but only after a complete CLI is in place, and that they cannot interfere with the use of the CLI, only complement it. Otherwise, the GUI simply makes easy things easy and hard things much harder. He writes, 'If you have to make significant, identical changes to a bunch of Linux servers, is it easier to log into them one-by-one and run through a GUI or text-menu tool, or write a quick shell script that hits each box and either makes the changes or simply pulls down a few new config files and restarts some services? And it's not just about conservation of effort - it's also about accuracy. If you write a script, you're certain that the changes made will be identical on each box. If you're doing them all by hand, you aren't.'"
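
A minimal sketch of the kind of script being described here, with the host list, config file, and service name invented for illustration:

#!/bin/bash
# push the same config file to each box and restart the affected service,
# so every server ends up with an identical change
for host in web01 web02 web03; do
    scp ntp.conf "$host:/etc/ntp.conf"
    ssh "$host" 'service ntpd restart'
done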

alain94040 (785132)

Here is a Link to the print version of the article [infoworld.com] (that conveniently fits on 1 page instead of 3).

Providing a great GUI for complex routers or Linux admin is hard. Of course there has to be a CLI, that's how pros get the job done. But a great GUI is one that teaches a new user to eventually graduate to using CLI.

A bad GUI with no CLI is the worst of both worlds, the author of the article got that right. The 80/20 rule applies: 80% of the work is common to everyone, and should be offered with a GUI. And the 20% that is custom to each sysadmin, well use the CLI.

maxwell demon:

What would be nice is if the GUI could automatically create a shell script doing the change. That way you could (a) learn about how to do it per CLI by looking at the generated shell script, and (b) apply the generated shell script (after proper inspection, of course) to other computers.

0123456 (636235) writes:

What would be nice is if the GUI could automatically create a shell script doing the change.

While it's not quite the same thing, our GUI-based home router has an option to download the config as a text file so you can automatically reconfigure it from that file if it has to be reset to defaults. You could presumably use sed to change IP addresses, etc, and copy it to a different router. Of course it runs Linux.

Alain Williams:

AIX's SMIT did this, or rather it wrote the commands that it executed to achieve what you asked it to do. This meant that you could learn: look at what it did and find out about which CLI commands to run. You could also take them, build them into a script, copy elsewhere, ... I liked SMIT.
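
For reference, and as far as I recall (treat the exact file names as an assumption), SMIT drops its record of executed commands into the user's home directory, which is what made the learn-from-it workflow possible:

# the commands SMIT actually ran, reusable as a script
cat ~/smit.script
# a fuller log, including the menus visited and command output
cat ~/smit.log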

Ephemeriis:

What would be nice is if the GUI could automatically create a shell script doing the change. That way you could (a) learn about how to do it per CLI by looking at the generated shell script, and (b) apply the generated shell script (after proper inspection, of course) to other computers.

Cisco's GUI stuff doesn't really generate any scripts, but the commands it creates are the same things you'd type into a CLI. And the resulting configuration is just as human-readable (barring any weird naming conventions) as one built using the CLI. I've actually learned an awful lot about the Cisco CLI by using their GUI.

We've just started working with Aruba hardware. Installed a mobility controller last week. They've got a GUI that does something similar. It's all a pretty web-based front-end, but it again generates CLI commands and a human-readable configuration. I'm still very new to the platform, but I'm already learning about their CLI through the GUI. And getting work done that I wouldn't be able to if I had to look up the CLI commands for everything.

Microsoft's more recent tools are also doing this. Exchange 2007 and newer, for example, are really completely driven by the PowerShell CLI. The GUI generates commands and just feeds them into PowerShell for you. So you can again issue your commands through the GUI, and learn how you could have done it in PowerShell instead.

Anpheus:

Just about every Microsoft tool newer than 2007 does this. Virtual machine manager, SQL Server has done it for ages, I think almost all the system center tools do, etc.

It's a huge improvement.

PoV:

All good admins document their work (don't they? DON'T THEY?). With a CLI or a script that's easy: it comes down to "log in as user X, change to directory Y, run script Z with arguments A B and C - the output should look like D". Try that when all you have is a GLUI (like a GUI, but you get stuck): open this window, select that option, drag a slider, check these boxes, click Yes, three times. The output might look a little like this blurry screen shot and the only record of a successful execution is a window that disappears as soon as the application ends.

I suppose the Linux community should be grateful that Windows made the fundamental systems design error of making everything graphic. Without that basic failure, Linux might never have even got the toe-hold it has now.

skids:

I think this is a stronger point than the OP: GUIs do not lead to good documentation. In fact, GUIs pretty much are limited to procedural documentation like the example you gave.

The best they can do as far as actual documentation, where the precise effect of all the widgets is explained, is a screenshot with little quote bubbles pointing to each doodad. That's a ridiculous way to document.

This is as opposed to a command reference which can organize, usually in a pretty sensible fashion, exact descriptions of what each command does.

Moreover, the GUI authors seem to have a penchant to find new names for existing CLI concepts. Even worse, those names are usually inappropriate vagueries quickly cobbled together in an off-the-cuff afterthought, and do not actually tell you where the doodad resides in the menu system. With a CLI, the name of the command or feature set is its location.

Not that even good command references are mandatory by today's pathetic standards. Even the big boys like Cisco have shown major degradation in the quality of their documentation during the last decade.

pedantic bore:

I think the author might not fully understand who most admins are. They're people who couldn't write a shell script if their lives depended on it, because they've never had to. GUI-dependent users become GUI-dependent admins.

As a percentage of computer users, people who can actually navigate a CLI are an ever-diminishing group.

arth1: /etc/resolv.conf

/etc/init.d/NetworkManager stop
chkconfig NetworkManager off
chkconfig network on
vi /etc/sysconfig/network
vi /etc/sysconfig/network-scripts/ifcfg-eth0

At least they named it NetworkManager, so experienced admins could recognize it as a culprit. Anything named in CamelCase is almost invariably written by new school programmers who don't grok the Unix toolbox concept and write applications instead of tools, and the bloated drivel is usually best avoided.

Darkness404 (1287218) writes: on Monday October 04, @07:21PM (#33789446)

There are more and more small businesses (5, 10 or so employees) realizing that they can get things done more easily if they have a server. Because the business can't really afford to hire a sysadmin or a full-time tech person, it's generally the employee who "knows computers" (you know, the person who has to help the boss check his e-mail every day, etc.), and since they don't have the knowledge of a skilled *Nix admin, a GUI makes their administration a lot easier.

So with the increasing use of servers among non-admins, it only makes sense for a growth in GUI-based solutions.

Svartalf (2997) writes: Ah... But the thing is... You don't NEED the GUI with recent Linux systems - you do with Windows.

oatworm (969674) writes: on Monday October 04, @07:38PM (#33789624) Homepage

Bingo. Realistically, if you're a company with less than a 100 employees (read: most companies), you're only going to have a handful of servers in house and they're each going to be dedicated to particular roles. You're not going to have 100 clustered fileservers - instead, you're going to have one or maybe two. You're not going to have a dozen e-mail servers - instead, you're going to have one or two. Consequently, the office admin's focus isn't going to be scalability; it just won't matter to the admin if they can script, say, creating a mailbox for 100 new users instead of just one. Instead, said office admin is going to be more focused on finding ways to do semi-unusual things (e.g. "create a VPN between this office and our new branch office", "promote this new server as a domain controller", "install SQL", etc.) that they might do, oh, once a year.

The trouble with Linux, and I'm speaking as someone who's used YaST in precisely this context, is that you have to make a choice - do you let the GUI manage it or do you CLI it? If you try to do both, there will be inconsistencies because the grammar of the config files is too ambiguous; consequently, the GUI config file parser will probably just overwrite whatever manual changes it thinks is "invalid", whether it really is or not. If you let the GUI manage it, you better hope the GUI has the flexibility necessary to meet your needs. If, for example, YaST doesn't understand named Apache virtual hosts, well, good luck figuring out where it's hiding all of the various config files that it was sensibly spreading out in multiple locations for you, and don't you dare use YaST to manage Apache again or it'll delete your Apache-legal but YaST-"invalid" directive.

The only solution I really see is for manual config file support with optional XML (or some other machine-friendly but still human-readable format) linkages. For example, if you want to hand-edit your resolv.conf, that's fine, but if the GUI is going to take over, it'll toss a directive on line 1 that says "#import resolv.conf.xml" and immediately overrides (but does not overwrite) everything following that. Then, if you still want to use the GUI but need to hand-edit something, you can edit the XML file using the appropriate syntax and know that your change will be reflected on the GUI.

That's my take. Your mileage, of course, may vary.

icebraining (1313345) writes: on Monday October 04, @07:24PM (#33789494) Homepage

I have a cheap router with only a web gui. I wrote a two line bash script that simply POSTs the right requests to URL. Simply put, HTTP interfaces, especially if they implement the right response codes, are actually very nice to script.
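
Something along these lines, with the address, credentials and form fields invented for illustration (a real device needs whatever its own web form actually submits):

#!/bin/bash
# replay the form the router's web GUI would POST, e.g. to change a DNS setting
curl -s -u admin:password -d 'wan_dns1=8.8.8.8' -d 'apply=1' \
     http://192.168.1.1/apply.cgi > /dev/null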

devent (1627873) writes:

Why Windows servers have a GUI is beyond me anyway. The servers are running 99.99% of the time without a monitor, and normally you just log in via ssh to a console if you need to administer them. But they are consuming the extra RAM and the extra CPU cycles, and carrying the extra security exposure. I don't know, but can you uninstall the GUI from a Windows server? Or better, do you have an option for a no-GUI installation? I just saw the minimum hardware requirements: 512 MB RAM and 32 GB or greater disk space. My server runs

sirsnork (530512) writes: on Monday October 04, @07:43PM (#33789672)

It's called a "core" install in Server 2008 and up, and if you do that, there is no going back; you can't ever add the GUI back.

What this means is you can run a small subset of MS services that don't need GUI interaction. With R2 that subset grew somewhat as they added the ability to install .NET too, which meant you could run IIS in a useful manner (arguably the strongest reason to want to do this in the first place).

Still, it's a one-way trip, and you had better be damn sure which services need to run on that box for the lifetime of that box or you're looking at a reinstall. Most Windows admins will still tell you the risk isn't worth it.

Simple things like network configuration without a GUI in Windows are tedious, and, at least the last time I looked, you lost the ability to trunk network ports because the NIC manufacturers all assumed you had a GUI to configure your NICs.

prichardson (603676) writes: on Monday October 04, @07:27PM (#33789520) Journal

This is also a problem with Mac OS X Server. Apple builds their services from open source products and adds a GUI for configuration to make it all clickable and easy to set up. However, many options that can be set on the command line can't be set in the GUI. Even worse, making CLI changes to services can break the GUI entirely.

The hardware and software are both super stable and run really smoothly, so once everything gets set up, it's awesome. Still, it's hard for a guy who would rather make changes on the CLI to get used to.

MrEricSir (398214) writes:

Just because you're used to a CLI doesn't make it better. Why would I want to read a bunch of documentation, mess with command line options, then read a whole block of text to see what it did? I'd much rather sit back in my chair, click something, and then see if it worked. Don't make me read a bunch of man pages just to do a simple task. In essence, the question here is whether it's okay for the user to be lazy and use a GUI, or whether the programmer should be too lazy to develop a GUI.

ak_hepcat (468765) writes: <leif@MENCKENdenali.net minus author> on Monday October 04, @07:38PM (#33789626) Homepage Journal

Probably because it's also about the ease of troubleshooting issues.

How do you troubleshoot something with a GUI after you've misconfigured it? How do you troubleshoot a programming error (bug) in the GUI -> device communication? How do you scale to tens, hundreds, or thousands of devices with a GUI?

CLI makes all this easier and more manageable.
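A trivial sketch of the scaling point: once a check or a change is expressed as a command, running it against one box or a thousand is the same loop. The host list, ssh options and the command itself are placeholders for illustration.

    #!/bin/bash
    # Sketch only: apply the same CLI check to every host in a list.
    # hosts.txt (one hostname per line) and the command are placeholders.
    HOSTS=hosts.txt
    CMD='grep -q "^PermitRootLogin no" /etc/ssh/sshd_config && echo OK || echo NEEDS-FIX'

    while read -r host; do
        [ -z "$host" ] && continue
        printf '%-30s ' "$host"
        ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" "$CMD" </dev/null \
            || echo "unreachable"
    done < "$HOSTS"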

arth1 (260657) writes:

"Why would I want to read a bunch of documentation, mess with command line options, then read a whole block of text to see what it did? I'd much rather sit back in my chair, click something, and then see if it worked. Don't make me read a bunch of man pages just to do a simple task." Because then you'll be stuck at doing simple tasks, and will never be able to do more advanced tasks. Without hiring a team to write an app for you instead of doing it yourself in two minutes, that is. The time you spend reading man

fandingo (1541045) writes: on Monday October 04, @07:54PM (#33789778)

I don't think you really understand systems administration. 'Users,' or in this case admins, don't typically do stuff once. Furthermore, they need to know what they did and how to do it again (i.e. on a new server or whatever), or at least remember what they did. One-off stuff isn't common, and it is a sign of poor administration (i.e. poor change tracking and process).

What I'm trying to get at is that admins shouldn't do anything without reading the manual. As a Windows/Linux admin, I tend to find Linux easier to properly administer because I either already know how to perform an operation or I have to read the manual (manpage) and learn a decent amount about the operation (i.e. more than click here/use this flag).

Don't get me wrong, GUIs can make unknown operations significantly easier, but they often lead to poor process management. To document processes, screenshots are typically needed. They can be done well, but I find that GUI documentation (created by admins, not vendor docs) tends to be of very low quality. It is also vulnerable to 'upgrades' where vendors change the interface design. CLI programs typically have more stable interfaces, but maybe that's just because they have been around longer...
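One way to read that last point: a CLI change captured as a small script is its own documentation and can be replayed on the next box, which is exactly what a folder of screenshots cannot do. A minimal sketch; the particular sysctl setting and file path are only examples.

    #!/bin/bash
    # Sketch only: a change recorded as a re-runnable script instead of screenshots.
    # The setting and paths are examples, not a recommendation.
    set -euo pipefail

    CONF=/etc/sysctl.d/99-swappiness.conf

    # The "why" lives next to the change itself.
    echo 'vm.swappiness = 10' > "$CONF"   # favor page cache on this database host
    sysctl --system > /dev/null           # re-apply all sysctl drop-in files

    # Leave an audit trail in syslog.
    logger -t change-mgmt "applied $CONF on $(hostname) at $(date -Iseconds)"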

maotx (765127) writes: <maotx@NoSPAM.yahoo.com> on Monday October 04, @07:42PM (#33789666)

That's one thing Microsoft did right with Exchange 2007. They built it entirely around their new PowerShell CLI and then built a GUI for it. The GUI is limited compared to what you can do with the CLI, but you can get most things done. The CLI becomes extremely handy for batch jobs and exporting statistics to CSV files. I'd say it's really up there with BASH in terms of scripting, data manipulation, and integration (not just Exchange but WMI, SQL, etc.).

They tried to do something similar with Windows 2008 and their Core [petri.co.il] feature, but they still have to load a GUI to present a prompt.

Charles Dodgeson (248492) writes: <jeffrey@goldmark.org> on Monday October 04, @08:51PM (#33790206) Homepage Journal

Probably Debian would have been OK, but I was finding admin of most Linux distros a pain for exactly these reasons. I couldn't find a layer where I could do everything that I needed to do without worrying about one thing stepping on another. No doubt there are ways that I could manage a Linux system without running into different layers of management tools stepping on each other, but it was a struggle.

There were other reasons as well (although there is a lot that I miss about Linux), but I think that this was one of the leading reasons.

(NB: I realize that this is flamebait (I've got karma to burn), but that isn't my intention here.)

[Nov 28, 2017] The Stigmatization of the Unemployed

"This overly narrow hiring spec then leads to absurd, widespread complaint that companies can't find people with the right skills" . In the IT job markets such postings are often called purple squirrels
Notable quotes:
"... In particular, there seems to be an extremely popular variant of the above where the starting proposition "God makes moral people rich" is improperly converted to "Rich people are more moral" which is then readily negated to "Poor people are immoral" and then expanded to "Poor people are immoral, thus they DESERVE to suffer for it". It's essentially the theological equivalent of dividing by zero ..."
"... That said, the ranks of the neoliberals are not small. They constitute what Jonathan Schell calls a "mass minority." I suspect the neoliberals have about the same level of popular support that the Nazis did at the time of their takeover of Germany in 1932, or the Bolsheviks had in Russia at the time of their takeover in 1917, which is about 20 or 25% of the total population. ..."
"... The ranks of the neoliberals are made to appear far greater than they really are because they have all but exclusive access to the nation's megaphone. The Tea Party can muster a handful of people to disrupt a town hall meeting and it gets coast to coast, primetime coverage. But let a million people protest against bank bailouts, and it is ignored. Thus, by manipulation of the media, the mass minority is made to appear to be much larger than it really is. ..."
Mar 20, 2011 | naked capitalism

Spencer Thomas:

Very good post. Thank you.

Over the past three decades, large parts of our culture here in the US have internalized the lessons of the new Social Darwinism, with a significant body of literature to explain and justify it. Many of us have internalized, without even realizing it, the ideas of "dog eat dog", "every man for himself", "society should be structured like the animal kingdom, where the weak and sick simply die because they cannot compete, and this is healthy", and "everything that happens to you is your own fault. There is no such thing as circumstance that cannot be overcome, and certainly no birth lottery."

The levers pulled by politicians and the Fed put these things into practice, but even if we managed get different (better) politicians or Fed chairmen, ones who weren't steeped in this culture and ideology, we'd still be left with the culture in the population at large, and things like the "unemployed stigma" are likely to die very, very hard. Acceptance of the "just-world phenomenon" here in the US runs deep.

perfect stranger:

"Religion is just as vulnerable to corporate capture as is the government or the academy."

This is a rather rhetorical statement, and a wrong one. One needs to distinguish the spiritual aspect of religion from religion as a tool.

Religion, as it is structured, is complicit in impoverishment, obedience, and people's preconditioning, and serves as a legislative enabler in institutions such as the Supreme – and non-supreme – Court(s). It is a form of PR by the ruling class for the governing class.

DownSouth:

perfect stranger,

Religion, just like human nature, is not that easy to put in a box.

For every example you can cite where religion "is complicit in impoverishment, obedience, people's preconditioning, and a legislative enabler in the institutions," I can point to an example where religion engendered a liberating, emancipatory and revolutionary spirit.

Examples:

• Early Christianity
• Nominalism
• Early Protestantism
• Gandhi
• Martin Luther King

Now granted, there don't seem to be any recent examples of this of any note, unless we consider Chris Hedges a religionist, which I'm not sure we can do. Would it be appropriate to consider Hedges a religionist?

perfect stranger:

Yes, that may, just maybe, have been the case in the early stages of forming new religion(s). In the case of Christianity, the old rulers in Rome were trying to save their own heads/thrones and the S.P.Q.R. empire by adopting the new religion.

You use the examples of Gandhi and MLK, which is highly questionable: both were fighters, the first for independence and the second for civil rights. In a word: not members of the establishment. Just as I said, they were (probably) seeing religion as a spiritual force, not a tool of enslavement.

Matt:

This link may provide some context:

http://en.wikipedia.org/wiki/Prosperity_theology

In particular, there seems to be an extremely popular variant of the above where the starting proposition "God makes moral people rich" is improperly converted to "Rich people are more moral" which is then readily negated to "Poor people are immoral" and then expanded to "Poor people are immoral, thus they DESERVE to suffer for it". It's essentially the theological equivalent of dividing by zero

DownSouth:

Rex,

I agree.

Poll after poll after poll has shown that a majority of Americans, and a rather significant majority, reject the values, attitudes, beliefs and opinions proselytized by the stealth religion we call "neoclassical economics."

That said, the ranks of the neoliberals are not small. They constitute what Jonathan Schell calls a "mass minority." I suspect the neoliberals have about the same level of popular support that the Nazis did at the time of their takeover of Germany in 1932, or the Bolsheviks had in Russia at the time of their takeover in 1917, which is about 20 or 25% of the total population.

The ranks of the neoliberals are made to appear far greater than they really are because they have all but exclusive access to the nation's megaphone. The Tea Party can muster a handful of people to disrupt a town hall meeting and it gets coast to coast, primetime coverage. But let a million people protest against bank bailouts, and it is ignored. Thus, by manipulation of the media, the mass minority is made to appear to be much larger than it really is.

The politicians love this, because as they carry water for their pet corporations, they can point to the Tea Partiers and say: "See what a huge upwelling of popular support I am responding to."

JTFaraday:

Well, if that's true, then the unemployed are employable but the mass mediated mentality would like them to believe they are literally and inherently unemployable so that they underestimate and under-sell themselves.

This is as much to the benefit of those who would like to pick up "damaged goods" on the cheap as of those who promote the unemployment problem as one that inheres in prospective employees rather than one that is a byproduct of a bad job market, lest someone be tempted to think we should address it politically.

That's where I see this blame the unemployed finger pointing really getting traction these days.

attempter:

I apologize for the fact that I only read the first few paragraphs of this before quitting in disgust.

I just can no longer abide the notion that "labor" can ever be seen by human beings as a "cost" at all. We really need to refuse to even tolerate that way of phrasing things. Workers create all wealth. Parasites have no right to exist. These are facts, and we should refuse to let argument range beyond them.

The only purpose of civilization is to provide a better way of living for all people. This includes the right and full opportunity to work and manage for oneself and/or as a cooperative group. If civilization doesn't do that, we're better off without it.

psychohistorian:

I am one of those long term unemployed.

I suppose my biggest employment claim would be as some sort of IT techie, with numerous supply chain systems and component design, development, implementation, interfaces with other systems and ongoing support. CCNP certification and a history of techiedom going back to WEYCOS.

I have a patent (6,209,954) in my name and 12+ years of beating my head against the wall in an industry that buys compliance with the "there is no problem here, move on now" approach.

Hell, I was a junior woodchuck program administrator back in the early 70's working for the Office of the Governor of the state of Washington on CETA PSE, or Public Service Employment. The Office of the Governor ran the PSE program for 32 of the 39 counties in the state that were not big enough to run their own. I helped organize the project approval process in all those counties to hire folk (if memory serves me, at a max of $833/mo.) to fix and expand parks and provide social and other government services, as defined projects with end dates. If we didn't have the anti-public Congress and other government leadership we have, this could be a current component in a rational labor policy, but I digress.

I have experience in the construction trades mostly as carpenter but some electrical, plumbing, HVAC, etc. also.

So, of course there is some sort of character flaw that is keeping me and all those others from employment... right. I may have more of an excuse than others, having paid into SS for 45 years, but I still would work if it was available... taking work away from others who may need it more. Why set up a society where we have to compete like this for mere existence?

One more facet to this rant. We need government by the people and for the people, which we do not have now. Good, public-focused, not corporate-focused government is bigger than any entities that exist under its jurisdiction, and is kept updated by required public participation in elections and potentially other things like military or peace corps service, etc., in exchange for advanced education. I say this as someone who has worked at various levels in both the public and private sectors; there are ignorant and misguided folks everywhere. At least with ongoing active participation there is a chance that government, once constructed, would be able to evolve as needed while staying focused on the public... IMO.

Ishmael:

Some people would say I have been unemployed for 10 years. In 2000, after losing the last of my four CFO gigs at public companies, I found it necessary to start consulting. This has led to two of my three biggest winning years. I am usually consulting on the cutting edge of my profession and many times have large staffs reporting to me that I bring on board to get jobs done. For several years I subcontracted to a large international consulting firm to clean up projects which went wrong. Let me give some insight here.

  1. First, most good positions have gatekeepers who are professional recruiters. It is near impossible to get by them, and if you are unemployed they will hardly talk to you. One time, talking to a recruiter at Korn Ferry, I was interviewing for a job I have done several times in an industry I have worked in several times. She made a statement that I had never worked at a well-known company. I just about fell out of my chair laughing. At one time I was a senior-level executive for the largest consulting firm in the world, lived on three continents and worked with companies on six. In addition, I had held senior positions at two Fortune 500 firms and was the CFO for a company with $4.5 billion in revenue. I am well known at several PE firms, and the founder of one of the largest mentioned in a meeting that one of his great mistakes was not investing in a very successful LBO (a return in excess of a 20 multiple to investors in 18 months) for which I was the CFO. In a word, most recruiters are incompetent.
  2. Second, most CEOs anymore are just insecure politicians. One time during an interview a CEO asked me to talk about some accomplishments. I was not paying too much attention as I rattled off accomplishments, and the CEO went nuclear and started yelling at me that he did not know where I thought I was going with this job, but the only position above the CFO job was his and he was not going anywhere. I assured him I was only interested in the CFO position and not his, but I knew the job was over. Twice, feedback that I got from recruiters, which they took as criticism, was that the "client said I seemed very assured of myself."
  3. Third, government, banking, business and the top MBA schools are based upon lying to move forward. I remember a top human resources executive telling me right before Enron, MCI and Sarbanes-Oxley that I needed to learn to be more flexible. My response was that flexibility would get me an orange jumpsuit. Don't get me wrong, I have a wide grey zone, but it used to be that in business they looked for people who could identify problems early and resolve them. Nowadays I see far more demand for people who can come up with PR spins to hide them. An attorney/treasurer consultant who partnered with me on a number of consulting jobs told me someone called me "not very charming." He said he asked what that meant, and the person who said it replied, "Ish walks into a meeting and within 10 minutes he is asking about the 10,000 pound gorilla sitting in the room that no one wants to talk about." CEOs do not want any challenges in their organization.
  4. Fourth, point three above has led to the hiring of very young and inexperienced people at senior levels. These people are insecure and do not want more senior and experienced people above them, and that has resulted in people older than 45 not finding positions.
  5. Fifth, people are considered expendable and are fired for the lamest reasons anymore. A partner at one of the larger and more prestigious recruiting firms one time told me, "If you have a good consulting business, just stick with it. Our average placement does not last 18 months any more." Another well known recruiter in S. Cal. one time commented to me, "Your average consulting gig runs longer than our average placement."

With all of that said, I have a hard time understanding such statements as @attempter's "Workers create all wealth. Parasites have no right to exist." What does that mean? Every worker creates wealth. There is no difference in people. Sounds like communism to me. I make a good living and my net worth has grown working for myself. I have never had a consulting gig terminated by the client, but I have terminated several. Usually, I am brought in to fix what several other people have failed at. I deliver what are basically intellectual properties to companies. Does that mean I am not a worker? I do not usually lift anything heavy or move equipment, but I tell people what to do and where to do it, so does that make me a parasite?

Those people who think everyone is equal and everyone deserves equal pay are fools or lazy. My rate is high, but what usually starts as short term projects usually run 6 months or more because companies find I can do so much more than what most of their staff can do and I am not a threat.

I would again like to have a senior challenging role at a decent size company but due to the reasons above will probably never get one. However, you can never tell. I am currently consulting for a midsize very profitable company (grew 400% last year) where I am twice the age of most people there, but everyone speaks to me with respect so you can never tell.

Lidia:

Ishmael, you're quite right. When I showed my Italian husband's resume to try and "network" in the US, my IT friends assumed he was lying about his skills and work history.

Contemporaneously, in Italy it is impossible to get a job because of incentives to hire "youth". Age discrimination is not illegal, so it's quite common to see ads that ask for a programmer under 30 with 5 years of experience in COBOL (the purple squirrel).

Hosswire:

Some good points about the foolishness of recruiters, but a great deal of that foolishness is forced by the clients themselves. I used to be a recruiter myself, including at Korn Ferry in Southern California. I described the recruiting industry as "yet more proof that God hates poor people" because my job was to ignore resumes from people seeking jobs and instead "source" aka "poach" people who already had good jobs by dangling a higher salary in front of them. I didn't do it because I disparaged the unemployed, or because I could not do the basic analysis to show that a candidate had analogous or transferable skills for the opening.

I did it because the client, as Yves said, wanted people who were literally in the same job description already. My theory is that the client wanted to have their ass covered in case the hire didn't work out, by being able to say that the candidate looked perfect "on paper." The lesson I learned for myself and my friends looking for jobs was simple, if morally dubious: if prospective employers are going to judge you based on a single piece of paper, take full advantage of the fact that you get to write that piece of paper yourself.

Ishmael:

Hosswire - I agree with your comment. There are poor recruiters like the one I cited, but in general it is the client's fault. Fear of failure. All hires have at least a 50% chance of going sideways on you. Most companies do not even have the ability to look at a resume or to interview. I did not mean to say nasty things about recruiters, and I even do it sometimes myself.

I look at failure in a different light than most companies. You need to be continually experimenting and changing to survive as a company and there will be some failures. The goal is to control the cost of failures while looking for the big pay off on a winner.

Mannwich:

As a former recruiter and HR "professional" (I use that term very loosely for obvious reasons), I can honestly say that you nailed it. Most big companies looking for mid to high level white collar "talent" will almost always take the perceived safest route by hiring those who look the best ON PAPER and in a suit and lack any real interviewing skills to find the real stars. What's almost comical is that companies almost always want to see the most linear resume possible because they want to see "job stability" (e.g. a CYA document in case the person fails in that job) when in many cases nobody cares about the long range view of the company anyway. My question was why should the candidate or employee care about the long range view if the employer clearly doesn't?

Ishmael:

Mannwich, another on-point comment. Sometimes, whether interviewing for a job or consulting with a CEO, things start getting absurd. I see all the time the requirement for stability in a person's background. Hello, where have they been for the last 15 years? In addition, the higher up you go, the more likely you will be terminated at some point, and that is especially true if you are hired from outside the organization. Companies want loyalty from an employee but offer none in return.

The average tenure for a CFO anymore is something around 18 months. I have been a first-party participant (more than once) in cases where I went through an endless recruiting process for a company (lasting more than 6 months), they finally hire someone, and that person is with the company for 3 months and then resigns (of course we all know it is through mutual agreement).

Ishmael:

Birch:

The real problem has become, and maybe this is what you are referring to, "Crony Capitalism." We have lost control of our financial situation. Basically, PE firms are not the gods of the universe that everyone thinks they are. However, every banker's secret wet dream is to become a private equity guy. Accordingly, bankers make ridiculous loans to PE because if you say no to them, then you cannot play in their sandbox any more. Since the govt will not let the banks go bankrupt like they should, this charade continues, enslaving everyone.

This country, as well as many others, has a large percentage of its assets tied up in overpriced deals that the bankers/governments will not let collapse, while the bloodsucking vampires suck the life out of the assets.

On the other hand, govt is not the answer. Govt is too large and accomplishes too little.

kevin de bruxelles:

The harsh reality is that, at least in the first few rounds, companies kick to the curb their weakest links and perceived slackers. Therefore when it comes time to hire again, they are loath to go sloppy seconds on what they perceive to be some other company's rejects. They would much rather hire someone who survived the layoffs while working in a similar position in a similar company. Of course the hiring company is going to have to pay for this privilege. Although not totally reliable, the fact that someone survived the layoffs provides a form of social proof of their workplace abilities.

On the macro level, labor has been under attack for thirty years by off-shoring and third-world immigration. It is no surprise that since the working classes have been severely undermined, the middle classes would start to feel some pressure. But mass immigration and off-shoring are strongly supported by both parties. Only when the pain gets strong enough will enough people rebel and these two policies be overturned. We still have a few years to go before this happens.

davver:

Let's say I run a factory. I produce cars and it requires very skilled work: skilled welding, skilled machinists. Now I introduce some robotic welders and an assembly-line system. The plant's productivity improves and the jobs actually get easier. They require less skill; in fact I've simplified each task to something any idiot can do. Would wages go up or down? Are the workers really contributing to that increase in productivity, or is it the machines and methods I created?

Let's say you think laying off or cutting the wages of my existing workers is wrong. What happens when a new entrant into the business employs a smaller workforce and lower wages, which they can do using the same technology? The new workers don't feel like they were cut down in any way; they are just happy to have a job. Before, they couldn't get a job at the old plant because they lacked the skill, but now they can work in the new plant because the work is genuinely easier. Won't I go out of business?

Escariot:

I am 54 and have a ton of peers who are former white-collar workers and professionals (project managers, architects, lighting designers, wholesalers and sales reps for industrial and construction materials and equipment) now out of work going on three years. When I say out of work, I mean out of our trained and experienced fields.

We now work two or three gigs (waiting tables, mowing lawns, doing freelance work, working in tourism, truck driving, working for moving companies and FedEx/UPS) and work HARD, for much, much less than we did, and we are seeing the few jobs that are coming back online going to younger workers. It is just the reality. And for most of us the descent has not been graceful, so our credit is a wreck, which also breeds a whole other level of issues, as it is now common for the credit record to be a deal breaker for employment, housing, etc.

Strangely I don't sense a lot of anger or bitterness as much as humility. And gratitude for ANY work that comes our way. Health insurance? Retirement accounts? not so much.

Mickey Marzick:

Yves and I have disagreed on how extensive the postwar "pact" between management and labor was in this country. But if you drew a line from, say, Trenton-Paterson, NJ to Cincinnati, OH to Minneapolis, MN, north and east of it, where blue-collar manufacturing in steel, rubber, auto, machinery, etc., predominated, this "pact" may have existed, but ONLY because physical plant and production were concentrated there and workers could STOP production.

Outside of these heavy industrial pockets, unions were not always viewed favorably. As one moved into the rural hinterlands surrounding them there was jealousy and/or outright hostility. Elsewhere, especially in the South, "unions" were the exception, not the rule. The differences between NE Ohio before 1975 – a line from Youngstown to Toledo – and the rest of the state exemplified this pattern. Even today, the NE counties of Ohio are traditional Democratic strongholds, with the rest of the state largely Republican. And I suspect this pattern existed elsewhere. But it is changing too.

In any case, the demonization of the unemployed is just one notch above the vicious demonization of the poor that has always existed in this country. It's a constant reminder for those still working that you could be next – cast out into the darkness – because you "failed" or worse yet, SINNED. This internalization of the "inner cop" reinforces the dominant ideology in two ways. First, it makes any resistance by individuals still employed less likely. Second, it pits those still working against those who aren't, both of which work against the formation of any significant class consciousness amongst working people. The "oppressed" very often internalize the value system of the oppressor.

As a nation of immigrants, ETHNICITY may have more explanatory power than CLASS. For increasingly, it would appear that the dominant ethnic group – suburban, white, European Americans – has thrown its lot in with corporate America. Scared by the prospect of downward social mobility and constantly reminded of URBAN America – the other America – this group is trapped with nowhere else to go.

It's the divide and conquer strategy employed by ruling elites in this country since its founding [Federalist #10] with the Know Nothings, blaming the Irish [NINA - no Irish need apply] and playing off each successive wave of immigrants against the next. Only when the forces of production became concentrated in the urban industrial enclaves of the North was this strategy less effective. And even then internal immigration by Blacks to the North in search of employment blunted the formation of class consciousness among white ethnic industrial workers.

Wherever the postwar "pact of domination" between unions and management held sway, once physical plant was relocated elsewhere [SOUTH] and eventually offshored, unemployment began to trend upwards. First it was the "rustbelt"; now it's a nationwide phenomenon. Needless to say, the "pact" between labor and management has been consigned to the dustbin of history.

White, suburban America has hitched its wagon to that of the corporate horse. Demonization of the unemployed coupled with demonization of the poor only serve to terrorize this ethnic group into acquiescence. And as the workplace becomes a multicultural matrix this ethnic group is constantly reminded of its perilous state. Until this increasingly atomized ethnic group breaks with corporate America once and for all, it's unlikely that the most debilitating scourge of all working people – UNEMPLOYMENT – will be addressed.

Make no mistake about it, involuntary UNEMPLOYMENT/UNDEREMPLOYMENT is a form of terrorism, and its demonization is terrorism in action. This "quiet violence" is psychological, and the intimidation wrought by unemployment and/or the threat of it is intended to dehumanize the individuals subjected to it. Much like spousal abuse, the emotional and psychological effects are experienced well before any physical violence. It's the inner cop that makes overt repression unnecessary. We terrorize ourselves into submission without even knowing it, because we accept it or come to tolerate it. So long as we accept "unemployment" as an inevitable consequence of progress, as something unfortunate but inevitable, we will continue to travel down the road to serfdom where ARBEIT MACHT FREI!

FULL and GAINFUL EMPLOYMENT are the ultimate labor power.

Eric:

It's delicate, since direct age discrimination is illegal, but when circumstances permit separating older workers, they have a very tough time getting back into the workforce in an era of high health care inflation. Older folks consume more health care, and if you are hiring from a huge surplus of available workers it isn't hard to steer around the more experienced. And nobody gets younger, so when you don't get job A and go for job B two weeks later, you're older still!

James:

Yves said- "This overly narrow hiring spec then leads to absurd, widespread complaint that companies can't find people with the right skills"

In the IT job market such postings are often called purple squirrels. The HR departments require the applicant to be an expert in a dozen programming languages. This is an excuse to hire a foreigner on a temporary H-1B or other visa.

Most people aren't aware that this model dominates the sciences. Politicians scream that we have a shortage of scientists, yet it seems we only have a shortage of cheap, easily exploitable labor. The Economist recently pointed out the glut of scientists that currently exists in the USA.

http://www.economist.com/node/17723223

This understates the problem. The majority of PhD recipients wander through years of postdocs only to end up eventually changing fields. My observation is that the top ten schools in biochem/chemistry/physics/ biology produce enough scientists to satisfy the national demand.

The exemption from H-1B visa caps for academic institutions exacerbates the problem, providing academics with almost unlimited access to labor.

The pharmaceutical sector has been decimated over the last ten years with tens of thousands of scientists/ factory workers looking for re-training in a dwindling pool of jobs (most of which will deem you overqualified.)

http://pipeline.corante.com/archives/2011/03/03/a_postdocs_lament.php

Abe, NYC:

I wonder how the demonization of the unemployed can be so strong even in the face of close to 10% unemployment/20% underemployment. It's easy and tempting to demonize an abstract young buck or Cadillac-driving welfare queen, but when a family member or a close friend loses a job, or your kids are stuck at your place because they can't find one, shouldn't that alter your perceptions? Of course the tendency will be to blame it all on the government, but there has to be a limit to that in hard-hit places like Ohio, Colorado, or Arizona. And yet, the dynamics aren't changing, or are even getting worse. Maybe Wisconsin marks a turning point; I certainly hope it does.

damien:

It's more than just stupid recruiting, this stigma. Having got out when the getting was good, years ago, I know that any corporate functionary would be insane to hire me now. Socialization wears off, the deformation process reverses, and the rituals and shibboleths become a joke. Even before I bailed I became a huge pain in the ass as economic exigency receded, every boss's nightmare. I suffered fools less gladly and did the right thing out of sheer anarchic malice.

You really can't maintain corporate culture without existential fear – not just, "Uh oh, I'm gonna get fired," fear, but a visceral feeling that you do not exist without a job. In properly indoctrinated workers that feeling is divorced from economic necessity. So anyone who's survived outside a while is bound to be suspect. That's a sign of economic security, and security of any sort undermines social control.

youniquelikeme:

You hit the proverbial nail with that reply. (Although, sorry, doing the right thing should not be done out of malice.) The real fit has to be with the corporate yes-man culture (malleable ass-kisser) to be suited for any executive position, and beyond that it is the willingness to be manipulated and drained that lets you keep a job in the lower echelons.

This is the new age of evolution in the workplace. The class wars will make it more of an eventual revolution, but it is coming. The unemployment rate (the actual one, not the government one), globalization and offshore hiring are not sustainable for much longer.

Something has to give, but it is more likely to snap than to come easily. People who are made to be repressed and down and out eventually find the courage to fight back, and by then it is usually not with words.

down and out in Silicon Valley:

This is the response I got from a recruiter:

"I'm going to be overly honest with you. My firm doesn't allow me to submit any candidate who hasn't worked in 6-12 months or more. Recruiting brokers are probably all similar in that way . You are going to have to go through a connection/relationship you have with a colleague, co-worker, past manager or friend to get your next job .that's my advice for you. Best of luck "

I'm 56 years old with an MSEE. I gained 20+ years of experience at the best of the best (TRW, Nortel, Microsoft) and have been issued a patent. Where do I sign up to gain the skills required to find a job now?

Litton Graft :

"Best of the Best?" I know you're down now, but looking back at these Gov'mint contractors you've enjoyed the best socialism money can by.

Nortel/TRW bill(ed) the Guvmint at 2x, 3x your salary; you can ride this for decades. At the same time that the Inc is attached to the Guvmint ATM, localities/counties are giving them a red carpet of total freedom from taxation. Double subsidies.

I've worked many years at the big boy bandits, and there is no delusion in my mind that almost anyone can do what I do and get paid 100K+. I've never understood the mindset of some folks who work in the Wehrmacht Inc: "Well, someone has to do this work" or worse, "What we do, no one else can do." The reason no one else "can do it" is that they are not allowed to. So we steal from the poor to build fighter jets, write code or network an agency.

Hosswire:

I used to work as a recruiter and can tell you that I only parroted the things my clients told me. I wanted to get you hired, because I was lazy and didn't want to have to talk to someone else next.

So what do you do? To place you, that recruiter needs to see on a piece of paper that you are currently working. Maybe get an email or phone call from someone who will vouch for your employment history. That should not be that hard to make happen.

Francois T :

The "bizarre way that companies now spec jobs" is essentially a coded way for mediocre managers to say without saying so explicitly that "we can afford to be extremely picky, and by God, we shall do so no matter what, because we can!"

Of course, when the time comes to hire back because, oh disaster! business is picking up again (I'm barely caricaturing here; some managers become despondent when they realize that workers regain a bit of the higher ground; loss of power does that to lesser beings), the same idiots who designed those overly narrow hiring specs that "lead to absurd, widespread complaint that companies can't find people with the right skills" are thrown into a tailspin of despair and misery. Instead of figuring out something as simple as "if demand is better, so will be our business," they can't see anything but the (eeeek!) cost of hiring workers. Unable to break out of their penny-pincher mental corset, they fail to realize that a lack of qualified workers will prevent them from executing well to begin with.

And guess what: qualified workers cost money, qualified workers urgently needed cost much more.

This managerial attitude must be another factor that explains why entrepreneurship and the formation of small businesses are on the decline in the US (contrary to the confabulations of the US officialdumb and the chattering class) while rising in Europe and India/China.

Kit:

If you are 55-60, worked as a professional (in engineering, say) and are now unemployed, you are dead meat. Sorry to be blunt, but that's the way it is in the US today. Let me repeat that: Dead Meat.

I was terminated at age 59, found absolutely NOTHING even though my qualifications were outstanding. Fortunately, my company had an old style pension plan which I was able to qualify for (at age 62 without reduced benefits). So for the next 2+ years my wife and I survived on unemployment insurance, severance, accumulated vacation pay and odd jobs. Not nice – actually, a living hell.

At age 62, I applied for my pension, early social security, sold our old house (at a good profit) just before the RE crash, moved back to our home state. Then my wife qualified for social security also. Our total income is now well above the US median.

Today, someone looking at us would think we were the typical corporate retiree. We surely don't let on any differently but the experience (to get to this point) almost killed us.

I sympathize very strongly with the millions caught in this unemployment death spiral. I wish I had an answer but I just don't. We were very lucky to survive intact.

Ming:

Thank you Yves for your excellent post, and for bringing to light this crucial issue.

Thank you to all the bloggers, who add to the richness of the this discussion.

I wonder if you could comment on this, Yves, and correct me if I am wrong: I believe that the power of labor was sapped by the massive available supply of global labor. The favorable economic policies enacted by China (both official and unofficial), and trade negotiations between the US government and the Chinese government, were critical to creating this massive supply of labor.

Thank you. No rush of course.

Nexus:

There are some odd comments and notions here that are used to support dogma and positions of prejudice. The world can be viewed in a number of ways. Firstly from a highly individualised and personal perspective – that is what has happened to me and here are my experiences. Or alternatively the world can be viewed from a broader societal perspective.

In the context of labour there has always been an unequal confrontation between those that control capital and those that offer their labour, contrary to some of the views exposed here – Marx was a first and foremost a political economist. The political economist seeks to understand the interplay of production, supply, the state and institutions like the media. Modern day economics branched off from political economy and has little value in explaining the real world as the complexity of the world has been reduced to a simplistic rationalistic model of human behaviour underpinned by other equally simplistic notions of 'supply and demand', which are in turn represented by mathematical models, which in themselves are complex but merely represent what is a simplistic view of the way the world operates. This dogmatic thinking has avoided the need to create an underpinning epistemology. This in turn underpins the notion of free choice and individualism which in itself is an illusion as it ignores the operation of the modern state and the exercise of power and influence within society.

It was stated in one of the comments that the use of capital (machines, robotics, CAD design, etc.) de-skills. This is hardly the case, as skills rise for those that remain to support highly automated/continuous production factories. This is symptomatic of the owners of capital wanting to extract the maximum value from labour, which is done via the substitution of labour by capital, making the labour that remains to run factories highly productive and thus eliminating low-skill jobs that have been picked up via services (people move into non-productive, low-skilled occupations: warehousing and retail distribution, fast food outlets, etc.). Of course the worker does not realise the additional value of his or her labour, as this is expropriated for the shareholders (including management as shareholders).

The issue for the US is that since the end of WW2 it is not the industrialists that have called the shots and made investments; it is the financial calculus of the investment banker (Finance Capital). Other comments have tried to ignore the existence of the elites in society – I would suggest that you read C. W. Mills, The Power Elite, as an analysis of how power is exercised in the US – it is not through the will of the people.

For Finance Capital, investments are not made on the basis of value added, or of contribution through product innovation and the exchange of goods, but on the basis of the lowest-cost inputs. Consequently, the 'elites' that make investment decisions, as they control all forms of capital, seek to gain access to the cheapest-cost inputs. The reality is that the US worker (a pool of 150m) is now part of a global labour pool of a couple of billion that now includes India and China. This means that the elites, US transnational corporations for instance, can access cheaper labour pools, relocate capital and avoid worker protections (health and safety is not a concern). The strategies of moving factories via off-shoring (over 40,000 US factories closed or relocated) and out-sourcing/in-sourcing labour are also representations of this.

The consequence for the US is that the need for domestic labour has diminished, substituted by cheap labour to extract the arbitrage between US labour rates and those of the Chinese and Indians. Ironically, in this context capital has become too successful, as the mode of consumption in the US shifted from a model where workers were notionally the people who created the goods, earned wages and then purchased the goods they created, to a new model where the worker was replaced by the consumer, underpinned by cheap debt and low-cost imports. It is illustrative to note that real wages have not increased in the US since the early 1970s, while at the same time debt has steadily increased to underpin the illusion of wealth – the 'borrow today and pay tomorrow' mode of capitalist operation. This model of operation is now broken. The labour force is now being demonized as there is now a surplus of labour and a need to drive down labour rates, through changes in legislation and austerity programs, to meet those of the emerging Chinese and Indian middle class, so workers' rights need to be broken. Once this is done, a process of in-sourcing may take place, as US labour costs will be on par with overseas labour pools.

It is ironic that during the Reagan administration a number of strategic thinkers saw the threat from emerging economies and the danger of Finance Capital and created 'Project Socrates', which would have sought to re-orientate the US economy from one based on the rationale of Finance Capital to one focused on productive innovation, entailing an alignment of capital investment, research and training to produce innovative goods. Of course this was ignored and the rest is history. The race to the lowest input cost is ultimately self-defeating, as it is clear that the economy de-industrialises through labour and capital changes and living standards collapse. The elites – bankers, US transnational corporations, media, the military-industrial complex and the politicians – don't care, as they make money either way, and this way you get other people overseas to work cheap for you.

S P:

Neoliberal orthodoxy treats unemployment as well as wage suppression as a necessary means to fight "inflation." If there were too much power in the hands of organized labor, inflationary pressures would spiral out of control as the supply of goods could not keep up with demand.

It also treats the printing press as a necessary means to fight "deflation."

So our present scenario: widespread unemployment along with QE to infinity, food stamps for all, is exactly what you'd expect.

The problem with this orthodoxy is that it assumes unlimited growth on a planet with finite resources, particularly oil and energy. Growth is not going to solve unemployment or wages, because we are bumping up against limits to growth.

There are only two solutions. One is to tax the rich and capital gains, slow growth, and reinvest the surplus into jobs/skills programs, mostly to maintain existing infrastructure or build new energy infrastructure. Even liberals like Krugman skirt around this, because they aren't willing to accept that we have reached the end of growth and need radical redistribution measures.

The other solution is genuine classical liberalism / libertarianism, along the lines of Austrian thought. Return to sound money, and let the deflation naturally take care of the imbalances. Yes, it would be wrenching, but it would likely be wrenching for everybody, making it fair in a universal sense.

Neither of these options is palatable to the elite classes, the financiers of Wall Street, or the leeches and bureaucrats of D.C.

So this whole experiment called America will fail.

[Nov 27, 2017] This Is Why Hewlett-Packard Just Fired Another 30K

Highly recommended!
Notable quotes:
"... Imagine working at HP and having to listen to Carly Fiorina bulldoze you...she is like a blow-torch...here are 4 minutes of Carly and Ralph Nader (if you can take it): https://www.youtube.com/watch?v=vC4JDwoRHtk ..."
"... My husband has been a software architect for 30 years at the same company. Never before has he seen the sheer unadulterated panic in the executives. All indices are down and they are planning for the worst. Quality is being sacrificed for " just get some relatively functional piece of shit out the door we can sell". He is fighting because he has always produced a stellar product and refuses to have shit tied to his name ( 90% of competitor benchmarks fail against his projects). They can't afford to lay him off, but the first time in my life I see my husband want to quit... ..."
"... HP basically makes computer equipment (PCs, servers, Printers) and software. Part of the problem is that computer hardware has been commodized. Since PCs are cheap and frequent replacements are need, People just by the cheapest models, expecting to toss it in a couple of years and by a newer model (aka the Flat screen TV model). So there is no justification to use quality components. Same is become true with the Server market. Businesses have switched to virtualization and/or cloud systems. So instead of taking a boat load of time to rebuild a crashed server, the VM is just moved to another host. ..."
"... I hung an older sign next to the one saying Information Technology. Somehow MIS-Information Technology seemed appropriate.) ..."
"... Then I got to my first duty assignment. It was about five months after the first moon landing, and the aerospace industry was facing cuts in government aerospace spending. I picked up a copy of an engineering journal in the base library and found an article about job cuts. There was a cartoon with two janitors, buckets at their feet and mops in their hands, standing before a blackboard filled with equations. Once was saying to the other, pointing to one section, "you can see where he made his mistake right here...". It represented two engineers who had been reduced to menial labor after losing their jobs. ..."
"... So while I resent all the H1Bs coming into the US - I worked with several for the last four years of my IT career, and was not at all impressed - and despise the politicians who allow it, I know that it is not the first time American STEM grads have been put out of jobs en masse. In some ways that old saying applies: the more things change, the more they stay the same ..."
"... Just like Amazon, HP will supposedly make billions in profit analyzing things in the cloud that nobody looks at and has no use to the real economy, but it makes good fodder for Power Point presentations. I am amazed how much daily productivity goes into creating fancy charts for meetings that are meaningless to the actual business of the company. ..."
"... 'Computers' cost as much - if not more time than they save, at least in corporate settings. Used to be you'd work up 3 budget projections - expected, worst case and best case, you'd have a meeting, hash it out and decide in a week. Now you have endless alternatives, endless 'tweaking' and changes and decisions take forever, with outrageous amounts of time spent on endless 'analysis' and presentations. ..."
"... A recent lay off here turned out to be quite embarrassing for Parmalat there was nobody left that knew how to properly run the place they had to rehire many ex employees as consultants-at a costly premium ..."
"... HP is laying off 80,000 workers or almost a third of its workforce, converting its long-term human capital into short-term gains for rich shareholders at an alarming rate. The reason that product quality has declined is due to the planned obsolescence that spurs needless consumerism, which is necessary to prop up our debt-backed monetary system and the capitalist-owned economy that sits on top of it. ..."
"... The world is heading for massive deflation. Computers have hit the 14 nano-meter lithography zone, the cost to go from 14nm to say 5nm is very high, and the net benefit to computing power is very low, but lets say we go from 14nm to 5nm over the next 4 years. Going from 5nm to 1nm is not going to net a large boost in computing power and the cost to shrink things down and re-tool will be very high for such an insignificant gain in performance. ..."
"... Another classic "Let's rape all we can and bail with my golden parachute" corporate leaders setting themselves up. Pile on the string of non-IT CEOs that have been leading the company to ruin. To them it is nothing more than a contest of being even worse than their predecessor. Just look at the billions each has lost before their exit. Compaq, a cluster. Palm Pilot, a dead product they paid millions for and then buried. And many others. ..."
"... Let's not beat around the bush, they're outsourcing, firing Americans and hiring cheap labor elsewhere: http://www.bloomberg.com/news/articles/2015-09-15/hewlett-packard-to-cut-up-to-30-000-more-jobs-in-restructuring It's also shifting employees to low-cost areas, and hopes to have 60 percent of its workers located in cheaper countries by 2018, Nefkens said. ..."
"... Carly Fiorina: (LOL, leading a tech company with a degree in medieval history and philosophy) While at ATT she was groomed from the Affirmative Action plan. ..."
"... It is very straightforward. Replace 45,000 US workers with 100,000 offshore workers and you still save millions of USD ! Use the "savings" to buy back stock, then borrow more $$ at ZIRP to buy more stock back. ..."
"... If you look on a site like LinkedIN, it will always say 'We're hiring!'. YES, HP is hiring.....but not YOU, they want Ganesh Balasubramaniamawapbapalooboopawapbamboomtuttifrutti, so that they can work him as modern day slave labor for ultra cheap. We can thank idiot 'leaders' like Meg Pasty Faced Whitman and Bill 'Forced Vaccinations' Gates for lobbying Congress for decades, against the rights of American workers. ..."
"... An era of leadership in computer technology has died, and there is no grave marker, not even a funeral ceremony or eulogy ... Hewlett-Packard, COMPAQ, Digital Equipment Corp, UNIVAC, Sperry-Rand, Data General, Tektronix, ZILOG, Advanced Micro Devices, Sun Microsystems, etc, etc, etc. So much change in so short a time, leaves your mind dizzy. ..."
Sep 15, 2015 | Zero Hedge

SixIsNinE

yeah thanks Carly ... HP made bullet-proof products that would last forever..... I still buy HP workstation notebooks, especially now when I can get them for $100 on ebay .... I sold HP products in the 1990s .... we had HP laserjet IIs that companies would run day & night .... virtually no maintenance ... when PCL5 came around then we had LJ IIIs .... and still companies would call for LJ I's, .... 100 pounds of invincible Printing ! .

This kind of product has no place in the World of Planned Obsolescence .... I'm currently running an 8510w, 8530w, 2530p, Dell 6420 quad i7, HP printers, HP scanner