Softpanorama

May the source be with you, but remember the KISS principle ;-)
(slightly skeptical) Educational society promoting "Back to basics" movement against IT overcomplexity and  bastardization of classic Unix

Unix system administration bulletin, 2018


For the list of top articles see Recommended Links section



NEWS CONTENTS

Old News ;-)

[Dec 24, 2018] Phone in sick: it's a small act of rebellion against wage slavery

Notable quotes:
"... By far the biggest act of wage slavery rebellion, don't buy shit. The less you buy, the less you need to earn. Holidays by far the minority of your life should not be a desperate escape from the majority of your life. Spend less, work less and actually really enjoy living more. ..."
"... How about don't shop at Walmart (they helped boost the Chinese economy while committing hari kari on the American Dream) and actually engaging in proper labour action? Calling in sick is just plain childish. ..."
"... I'm all for sticking it to "the man," but when you call into work for a stupid reason (and a hangover is a very stupid reason), it is selfish, and does more damage to the cause of worker's rights, not less. I don't know about where you work, but if I call in sick to my job, other people have to pick up my slack. I work for a public library, and we don't have a lot of funds, so we have the bear minimum of employees we can have and still work efficiently. As such, if anybody calls in, everyone else, up to and including the library director, have to take on more work. ..."
Oct 24, 2015 | The Guardian

"Phoning in sick is a revolutionary act." I loved that slogan. It came to me, as so many good things did, from Housmans, the radical bookshop in King's Cross. There you could rummage through all sorts of anarchist pamphlets and there I discovered, in the early 80s, the wondrous little magazine Processed World. It told you basically how to screw up your workplace. It was smart and full of small acts of random subversion. In many ways it was ahead of its time as it was coming out of San Francisco and prefiguring Silicon Valley. It saw the machines coming. Jobs were increasingly boring and innately meaningless. Workers were "data slaves" working for IBM ("Intensely Boring Machines").

What Processed World was doing was trying to disrupt the identification so many office workers were meant to feel with their management, not through old-style union organising, but through small acts of subversion. The modern office, it stressed, has nothing to do with human need. Its rebellion was about working as little as possible, disinformation and sabotage. It was making alienation fun. In 1981, it could not have known that a self-service till cannot ever phone in sick.

I was thinking of this today, as I wanted to do just that. I have made myself ill with a hangover. A hangover, I always feel, is nature's way of telling you to have a day off. One can be macho about it and eat your way back to sentience via the medium of bacon sandwiches and Maltesers. At work, one is dehydrated, irritable and only semi-present. Better, surely, though to let the day fall through you and dream away.

Having worked in America, though, I can say for sure that they brook no excuses whatsoever. When I was late for work and said things like, "My alarm clock did not go off", they would say that this was not a suitable explanation, which flummoxed me. I had to make up others. This was just to work in a shop.

This model of working – long hours, very few holidays, few breaks, two incomes needed to raise kids, crazed loyalty demanded by huge corporations, the American way – is where we're heading. Except now the model is even more punishing. It is China. We are expected to compete with an economy whose workers are often closer to indentured slaves than anything else.

This is what striving is, then: dangerous, demoralising, often dirty work. Buckle down. It's the only way forward, apparently, which is why our glorious leaders are sucking up to China, which is immoral, never mind ridiculously short-term thinking.

So again I must really speak up for the skivers. What we have to understand about austerity is its psychic effects. People must have less. So they must have less leisure, too. The fact is life is about more than work and work is rapidly changing. Skiving in China may get you killed but here it may be a small act of resistance, or it may just be that skivers remind us that there is meaning outside wage-slavery.

Work is too often discussed by middle-class people in ways that are simply unrecognisable to anyone who has done crappy jobs. Much work is not interesting and never has been. Now that we have a political and media elite who go from Oxbridge to working for a newspaper or a politician, a lot of nonsense is spouted. These people have not cleaned urinals on a nightshift. They don't sit lonely in petrol stations manning the till. They don't have to ask permission for a toilet break in a call centre. Instead, their work provides their own special identity. It is very important.

Low-status jobs, like caring, are for others. The bottom-wipers of this world do it for the glory, I suppose. But when we talk of the coming automation that will reduce employment, bottom-wiping will not be mechanised. Nor will it be romanticised, as old male manual labour is. The mad idea of reopening the coal mines was part of the left's strange notion of the nobility of labour. Have these people ever been down a coal mine? Would they want that life for their children?

Instead we need to talk about the dehumanising nature of work. Bertrand Russell and Keynes thought our goal should be less work, that technology would mean fewer hours.

Far from work giving meaning to life, in some surveys 40% of us say that our jobs are meaningless. Nonetheless, the art of skiving is verboten as we cram our children with ever longer hours of school and homework. All this striving is for what exactly? A soul-destroying job?

Just as education is decided by those who loved school, discussions about work are had by those to whom it is about more than income.

The parts of our lives that are not work – the places we dream or play or care, the space we may find creative – all these are deemed outside the economy. All this time is unproductive. But who decides that?

Skiving work is bad only to those who know the price of everything and the value of nothing.

So go on: phone in sick. You know you want to.

friedad 23 Oct 2015 18:27

We now exist in a society in which the Fear Cloud is wrapped around each citizen. Our proud history of unions and labor, fighting for decent wages and living conditions for all citizens, and mostly achieving these aims – a history which should be taught to every child educated in every school in this country – is now gradually but surely being eroded by ruthless speculators in government, and that is what future generations are inheriting. The workforce is in fear of taking a sick day, the young looking for work are in fear of speaking out at diminishing rewards; definitely this 21st century is the Century of Fear. And how is this fear deadened? With mind-blowing drugs, regardless of whether it is alcohol, prescription drugs or illicit drugs – a society in denial. We do not require a heavenly object to destroy us; a few soulless monsters in our midst are masters of manipulation, getting closer and closer to accomplishing their aim of having zombies do their bidding. Need a kidney? No worries, the zombie dishwasher is handy for one. Oh wait, that time is already here.

Hemulen6 23 Oct 2015 15:06

Oh join the real world, Suzanne! Many companies now have a limit to how often you can be sick. In the case of the charity I work for it's 9 days a year. I overstepped it, I was genuinely sick, and was hauled up in front of Occupational Health. That will now go on my record and count against me. I work for a cancer care charity. Irony? Surely not.

AlexLeo -> rebel7 23 Oct 2015 13:34

Which is exactly my point. You compete on relevant job skills and quality of your product, not what school you have attended.

Yes, there are thousands, tens of thousands of folks here around San Jose who barely speak English but are smart and hard-working as hell, and it takes them a few years to get to 150-200K per year. Many of them get to 300-400K if they come from strong schools in their countries of origin – compared to the 10K or so where they came from, and probably more than the whining readership here.

This is really difficult to swallow for the Brits back in Britain, isn't it? Those who have moved over have experienced the type of social mobility unthinkable in Britain, but they have had to work hard to get to 300K-700K per year, much better than the 50-100K their parents used to make back in GB. These are averages based on personal interactions with say 50 Brits in the last 15+ years, all employed in the Silicon Valley in very different jobs and roles.

Todd Owens -> Scott W 23 Oct 2015 11:00

I get what you're saying and I agree with a lot of what you said. My only gripe is most employees do not see an operation from a business owner's or managerial/financial perspective. They don't understand the costs associated with their performance or lack thereof. I've worked on a lot of projects that were operating at a loss for a future payoff. When someone decides they don't want to do the work they're contracted to perform, that can have a cascading effect on the entire company.

All in all, what's being described is for the most part misguided, because most people are not in the position to evaluate the particulars, or don't even care to. So saying you should do this to accomplish that is bullshit, because it's rarely such a simple equation. If anything, this type of tactic will lead to MORE loss and less money for payroll.


weematt -> Barry1858 23 Oct 2015 09:04

Sorry you just can't have a 'nicer' capitalism.

War (business by other means) and unemployment (you can't buck the market) are inevitable concomitants of capitalist competition over markets, trade routes and spheres of interest. (Remember the war science of Nagasaki and Hiroshima from the 'good guys'?)
"..capital comes dripping from head to foot, from every pore, with blood and dirt". (Marx)

You can't have full employment, or even the 'Right to Work'.

There is always, even in boom times, a reserve army of unemployed to drive down wages. (If necessary they will inject inflation into the economy.)
Unemployment is currently 5.5 percent or 1,860,000 people. If their "equilibrium rate" of unemployment is 4% rather than 5% this would still mean 1,352,000 "need be unemployed". The government don't want these people to find jobs as it would strengthen workers' bargaining position over wages, but that doesn't stop them harassing them with useless and petty form-filling, reporting to the so-called "job centre" just for the sake of it, calling them scroungers and now saying they are mentally defective.
Government is 'over' you not 'for' you.

Governments do not exist to ensure 'fair do's' but to manage social expectations with the minimum of dissent, commensurate with the needs of capitalism in the interests of profit.

Worker participation amounts to workers self-managing their own exploitation for the maximum profit of the capitalist class.

Exploitation takes place at the point of production.

" Instead of the conservative motto, 'A fair day's wage for a fair day's work!' they ought to inscribe on their banner the revolutionary watchword, 'Abolition of the wages system!'"

Karl Marx [Value, Price and Profit]

John Kellar 23 Oct 2015 07:19

Fortunately, as a retired veteran I don't have to worry about phoning in sick. However, during my Air Force days, if you were sick you had to get yourself to the Base Medical Section and prove to a medical officer that you were sick. If you convinced the medical officer of your sickness then you may have been lucky to receive one or two days' sick leave. For those who were very sick or incapable of getting themselves to Base Medical, an ambulance would be sent – promptly.


Rchrd Hrrcks -> wumpysmum 23 Oct 2015 04:17

The function of civil disobedience is to cause problems for the government. Let's imagine that we could get 100,000 people to agree to phone in sick on a particular date in protest at austerity etc. Leaving aside the direct problems to the economy that this would cause, it would also demonstrate a willingness to take action. It would demonstrate a capability to organise mass direct action. It would demonstrate an ability to bring people together to fight injustice. In and of itself it might not have much impact, but as a precedent set it could be the beginning of something massive, including further acts of civil disobedience.


wumpysmum Rchrd Hrrcks 23 Oct 2015 03:51

There's already a form of civil disobedience called industrial action, which the govt are currently attacking by attempting to change statute. Random sickies as per my post above are certainly not the answer, in the public sector at least; they make no coherent political point, they just cause problems for colleagues. Sadly too, in many sectors, with the advent of zero hours contracts, sickies put workers at risk of sanctions and lose them earnings.


Alyeska 22 Oct 2015 22:18

I'm American. I currently have two jobs and work about 70 hours a week, and I get no paid sick days. In fact, the last time I had a job with a paid sick day was 2001. If I could afford a day off, you think I'd be working 70 hours a week?

I barely make rent most months, and yes... I have two college degrees. When I try to organize my coworkers to unionize for decent pay and benefits, they all tell me not to bother.... they are too scared of getting on management's "bad side" and "getting in trouble" (yes, even though the law says management can't retaliate.)

Unions are different in the USA than in the UK. The workforce has to take a vote to unionize the company workers; you can't "just join" a union here. That's why our pay and working conditions have gotten worse, year after year.


rtb1961 22 Oct 2015 21:58

By far the biggest act of wage slavery rebellion, don't buy shit. The less you buy, the less you need to earn. Holidays by far the minority of your life should not be a desperate escape from the majority of your life. Spend less, work less and actually really enjoy living more.

Pay less attention to advertising and more attention to the enjoyable simplicity of life, of real direct human relationships, all of them, the ones in passing where you wish a stranger well, chats with service staff to make their life better as well as your own, exchange thoughts and ideas with others, be a human being and share humanity with other human beings.

Mkjaks 22 Oct 2015 20:35

How about not shopping at Walmart (they helped boost the Chinese economy while committing hara-kiri on the American Dream) and actually engaging in proper labour action? Calling in sick is just plain childish.

toffee1 22 Oct 2015 19:13

It is only considered productive if it feeds the beast, that is, contributes to the accumulation of capital so that the beast can have more power over us. The issue here is wage labor. Some 93 percent of the U.S. working population performs wage labor (see the BLS site) – the highest proportion in any society in history. Under the wage labor (employment) contract, the worker gives up his/her decision-making autonomy. The worker accepts the full command of his/her employer during the labor process. The employer directs and commands the labor process to achieve the goals set by himself. Compare this with, for example, a self-employed person providing a service (say, a plumber). In that case, the customer describes the problem to the service provider, but the service provider makes all the decisions on how to organize and apply his labor to solve it. Or compare it to a democratically organized co-op, where workers collectively make all the decisions about where, how and what to produce. Under the present economic system, a great majority of us are condemned to work in large corporations performing wage labor. The system of wage labor, by stripping us of autonomy over our own labor, creates all the misery in our present world through alienation. Men and women lose their humanity, alienated from their own labor. Outside the world of wage labor, labor can be a source of self-realization and true freedom. Labor can be real fulfillment and love. Labor, together with our capacity to love, makes us human. The bourgeoisie dehumanizes us, stealing our humanity. The bourgeoisie, having sold its soul to the beast, is attempting to turn us into ever-consuming machines for the accumulation of capital.

patimac54 -> Zach Baker 22 Oct 2015 17:39

Well said. Most retail employers have cut staff to the minimum possible to keep the stores open so if anyone is off sick, it's the devil's own job trying to just get customers served. Making your colleagues work even harder than they normally do because you can't be bothered to act responsibly and show up is just plain selfish.
And sorry, Suzanne, skiving work is nothing more than an act of complete disrespect for those you work with. If you don't understand that, try getting a proper job for a few months and learn how to exercise some self control.

TettyBlaBla -> FranzWilde 22 Oct 2015 17:25

It's quite the opposite in government jobs where I am in the US. As the fiscal year comes to a close, managers look at their budgets and go on huge spending sprees, particularly for temp (zero hours in some countries) help and consultants. They fear if they don't spend everything or even a bit more, their spending will be cut in the next budget. This results in people coming in to do work on projects that have no point or usefulness, that will never be completed or even presented up the food chain of management, and ends up costing taxpayers a small fortune.

I did this one year at an Air Quality Agency's IT department while the paid employees sat at their desks watching portable televisions all day. It was truly demeaning.

oommph -> Michael John Jackson 22 Oct 2015 16:59

Thing is though, children - dependents to pay for - are the easiest way to keep yourself chained to work.

The homemaker model works as long as your spouse's employer retains them (and your spouse retains you in an era of 40% divorce).

You are just as dependent on an employer and "work" but far less in control of it now.


Zach Baker 22 Oct 2015 16:41

I'm all for sticking it to "the man," but when you call into work for a stupid reason (and a hangover is a very stupid reason), it is selfish, and does more damage to the cause of workers' rights, not less. I don't know about where you work, but if I call in sick to my job, other people have to pick up my slack. I work for a public library, and we don't have a lot of funds, so we have the bare minimum of employees we can have and still work efficiently. As such, if anybody calls in, everyone else, up to and including the library director, has to take on more work. If I found out one of my co-workers called in because of a hangover, I'd be pissed. You made the choice to get drunk, knowing that you had to work the following morning. Putting it into the same category as someone who is sick and may not have the luxury of taking off because of a bad employer is insulting.


[Dec 23, 2018] Rule #0 of any checklist

Notable quotes:
"... The Checklist Manifesto ..."
"... The book talks about how checklists reduce major errors in surgery. Hospitals that use checklists are drastically less likely to amputate the wrong leg . ..."
"... any checklist should start off verifying that what you "know" to be true is true ..."
"... Before starting, ask the "Is it plugged in?" question first. What happened today was an example of when asking "Is it plugged in?" would have helped. ..."
"... moral of the story: Make sure that your understanding of the current state is correct. If you're a developer trying to fix a problem, make sure that you are actually able to understand the problem first. ..."
Dec 23, 2018 | hexmode.com

A while back I mentioned Atul Gawande's book The Checklist Manifesto. Today, I got another example of how to improve my checklists.

The book talks about how checklists reduce major errors in surgery. Hospitals that use checklists are drastically less likely to amputate the wrong leg.

So, the takeaway for me is this: any checklist should start off verifying that what you "know" to be true is true. (Thankfully, my errors can be backed out with very few long-term consequences, but I shouldn't use this as an excuse to forgo checklists.)

Before starting, ask the "Is it plugged in?" question first. What happened today was an example of when asking "Is it plugged in?" would have helped.

Today I was testing the thumbnailing of some MediaWiki code and trying to understand the $wgLocalFileRepo variable. I copied part of an /images/ directory over from another wiki to my test wiki. I verified that it thumbnailed correctly.

So far so good.

Then I changed the directory parameter and tested. No thumbnail. Later, I realized this is to be expected because I didn't copy over the original images. So that is one issue.

I erased (what I thought was) the thumbnail image and tried again on the main repo. It worked again–I got a thumbnail.

I tried copying over the images directory to the new directory, but the new thumbnailing directory structure didn't produce a thumbnail.

I tried over and over with the same thumbnail and was confused because it kept telling me the same thing.

I added debugging statements and still got nowhere.

Finally, I just did an ls on the directory to verify it was there. It was. And it had files in it.

But not the file I was trying to produce a thumbnail of.

The system that "worked" had the thumbnail, but not the original file.

So, moral of the story: Make sure that your understanding of the current state is correct. If you're a developer trying to fix a problem, make sure that you are actually able to understand the problem first.

Maybe your perception of reality is wrong. Mine was. I was sure that the thumbnails were being generated each time until I discovered that I hadn't deleted the thumbnails; I had deleted the original.
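
In shell terms, the "is it plugged in?" check for this situation is cheap; here is a minimal sketch with purely illustrative paths (adjust them to wherever your $wgLocalFileRepo actually points):

# Hypothetical paths, for illustration only
ORIG=/var/www/wiki/images/a/ab/Example.jpg
THUMB_DIR=/var/www/wiki/images/thumb/a/ab/Example.jpg

# Verify the assumed state before debugging anything else
if [ ! -f "$ORIG" ]; then
    echo "Original image is missing: $ORIG" >&2
    exit 1
fi
ls -l "$THUMB_DIR" 2>/dev/null || echo "No thumbnails have been generated yet for $ORIG"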

[Dec 20, 2018] Your .bashrc

Notable quotes:
"... Erm, did you know that `tar` autoextracts these days? This will work for pretty much anything: ..."
Dec 20, 2018 | forums.debian.net

pawRoot " 2018-10-15 17:13

Just spent some time editing .bashrc to make my life easier, and wondering if anyone has some cool "tricks" for bash as well.

Here is mine:

Code: Select all
# changing shell appearance
PS1='\[\033[0;32m\]\[\033[0m\033[0;32m\]\u\[\033[0;36m\] @ \[\033[0;36m\]\h \w\[\033[0;32m\]$(__git_ps1)\n\[\033[0;32m\]└─\[\033[0m\033[0;32m\] \$\[\033[0m\033[0;32m\] ▶\[\033[0m\] '

# aliases
alias la="ls -la --group-directories-first --color"

# clear terminal
alias cls="clear"

#
alias sup="sudo apt update && sudo apt upgrade"

# search for package
alias apts='apt-cache search'

# start x session
alias x="startx"

# download mp3 in best quality from YouTube
# usage: ytmp3 https://www.youtube.com/watch?v=LINK

alias ytmp3="youtube-dl -f bestaudio --extract-audio --audio-format mp3 --audio-quality 0"

# perform 'la' after 'cd'

alias cd="listDir"

listDir() {
builtin cd "$*"
RESULT=$?
if [ "$RESULT" -eq 0 ]; then
la
fi
}

# type "extract filename" to extract the file

extract () {
if [ -f "$1" ] ; then
case "$1" in
*.tar.bz2) tar xvjf "$1" ;;
*.tar.gz) tar xvzf "$1" ;;
*.bz2) bunzip2 "$1" ;;
*.rar) unrar x "$1" ;;
*.gz) gunzip "$1" ;;
*.tar) tar xvf "$1" ;;
*.tbz2) tar xvjf "$1" ;;
*.tgz) tar xvzf "$1" ;;
*.zip) unzip "$1" ;;
*.Z) uncompress "$1" ;;
*.7z) 7z x "$1" ;;
*) echo "don't know how to extract '$1'..." ;;
esac
else
echo "'$1' is not a valid file!"
fi
}

# obvious one

alias ..="cd .."
alias ...="cd ../.."
alias ....="cd ../../.."
alias .....="cd ../../../.."

# tail all logs in /var/log
alias logs="find /var/log -type f -exec file {} \; | grep 'text' | cut -d' ' -f1 | sed -e's/:$//g' | grep -v '[0-9]$' | xargs tail -f"

Head_on_a_Stick " 2018-10-15 18:11

pawRoot wrote:
Code: Select all
extract () {
if [ -f "$1" ] ; then
case "$1" in
*.tar.bz2) tar xvjf "$1" ;;
*.tar.gz) tar xvzf "$1" ;;
*.bz2) bunzip2 "$1" ;;
*.rar) unrar x "$1" ;;
*.gz) gunzip "$1" ;;
*.tar) tar xvf "$1" ;;
*.tbz2) tar xvjf "$1" ;;
*.tgz) tar xvzf "$1" ;;
*.zip) unzip "$1" ;;
*.Z) uncompress "$1" ;;
*.7z) 7z x "$1" ;;
*) echo "don't know how to extract '$1'..." ;;
esac
else
echo "'$1' is not a valid file!"
fi
}
Erm, did you know that `tar` autoextracts these days? This will work for pretty much anything:
Code: Select all
tar xf whatever.tar.whatever
I have these functions in my .mkshrc (bash is bloat!):
Code: Select all
function mnt {
for i in proc sys dev dev/pts; do sudo mount --bind /$i "$1"$i; done &
sudo chroot "$1" /bin/bash
sudo umount -R "$1"{proc,sys,dev}
}

function mkiso {
xorriso -as mkisofs \
-iso-level 3 \
-full-iso9660-filenames \
-volid SharpBang-stretch \
-eltorito-boot isolinux/isolinux.bin \
-eltorito-catalog isolinux/boot.cat \
-no-emul-boot -boot-load-size 4 -boot-info-table \
-isohybrid-mbr isolinux/isohdpfx.bin \
-eltorito-alt-boot \
-e boot/grub/efi.img \
-no-emul-boot -isohybrid-gpt-basdat \
-output ../"$1" ./
}

The mnt function acts like a poor person's arch-chroot and will bind mount /proc /sys & /dev before chrooting then tear it down afterwards.

The mkiso function builds a UEFI-capable Debian live system (with the name of the image given as the first argument).

The only other stuff I have are aliases, not really worth posting.

dbruce wrote: Ubuntu forums try to be like a coffee shop in Seattle. Debian forums strive for the charm and ambience of a skinhead bar in Bacau. We intend to keep it that way.

pawRoot " 2018-10-15 18:23

Head_on_a_Stick wrote: Erm, did you know that `tar` autoextracts these days? This will work for pretty much anything:

But it won't work for zip or rar, right?

None1975 " 2018-10-16 13:02

Here is a compilation of cool "tricks" for bash. This is similar to oh-my-zsh. OS: Debian Stretch / WM: Fluxbox
Debian Wiki | DontBreakDebian , My config files in github

debiman " 2018-10-21 14:38

i have a LOT of stuff in my /etc/bash.bashrc, because i want it to be available for the root user too.
i won't post everything, but here's a "best of" from both /etc/bash.bashrc and ~/.bashrc:
Code: Select all
case ${TERM} in
xterm*|rxvt*|Eterm|aterm|kterm|gnome*)
PROMPT_COMMAND=${PROMPT_COMMAND:+$PROMPT_COMMAND; }'printf "\033]0;%s: %s\007" "${SHELL##*/}" "${PWD/#$HOME/\~}"'
;;
screen)
PROMPT_COMMAND=${PROMPT_COMMAND:+$PROMPT_COMMAND; }'printf "\033_%s@%s:%s\033\\" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/\~}"'
;;
linux)
setterm --blength 0
setterm --blank 4
setterm --powerdown 8
;;
esac

PS2='cont> '
PS3='Choice: '
PS4='DEBUG: '

# Bash won't get SIGWINCH if another process is in the foreground.
# Enable checkwinsize so that bash will check the terminal size when
# it regains control.
# http://cnswww.cns.cwru.edu/~chet/bash/FAQ (E11)
shopt -s checkwinsize

# forums.bunsenlabs.org/viewtopic.php?pid=27494#p27494
# also see aliases '...' and '....'
shopt -s autocd
# opensource.com/article/18/5/bash-tricks
shopt -s cdspell

# as big as possible!!!
HISTSIZE=500000
HISTFILESIZE=2000000

# unix.stackexchange.com/a/18443
# history: erase duplicates...
HISTCONTROL=ignoredups:erasedups
shopt -s histappend

# next: enables usage of CTRL-S (backward search) with CTRL-R (forward search)
# digitalocean.com/community/tutorials/how-to-use-bash-history-commands-and-expansions-on-a-linux-vps#searching-through-bash-history
stty -ixon

if [[ ${EUID} == 0 ]] ; then
# root = color=1 # red
if [ "$TERM" != "linux" ]; then
PS1="\[$(tput setaf 1)\]\[$(tput rev)\] \[$(tput sgr0)\]\[$(tput setaf 5)\]\${?#0}\[$(tput setaf 1)\] \u@\h \w\[$(tput sgr0)\]\n\[$(tput rev)\] \[$(tput sgr0)\] "
else
# adding \t = time to tty prompt
PS1="\[$(tput setaf 1)\]\[$(tput rev)\] \[$(tput sgr0)\]\[$(tput setaf 5)\]\${?#0}\[$(tput setaf 1)\] \t \u@\h \w\[$(tput sgr0)\]\n\[$(tput rev)\] \[$(tput sgr0)\] "
fi
else
if [ "$TERM" != "linux" ]; then
PS1="\[$(tput setaf 2)\]\[$(tput rev)\] \[$(tput sgr0)\]\[$(tput setaf 5)\]\${?#0}\[$(tput setaf 2)\] \u@\h \w\[$(tput sgr0)\]\n\[$(tput rev)\] \[$(tput sgr0)\] "
else
# adding \t = time to tty prompt
PS1="\[$(tput setaf 2)\]\[$(tput rev)\] \[$(tput sgr0)\]\[$(tput setaf 5)\]\${?#0}\[$(tput setaf 2)\] \t \u@\h \w\[$(tput sgr0)\]\n\[$(tput rev)\] \[$(tput sgr0)\] "
fi
fi

[ -r /usr/share/bash-completion/bash_completion ] && . /usr/share/bash-completion/bash_completion || true

export EDITOR="nano"

man() {
env LESS_TERMCAP_mb=$(printf "\e[1;31m") \
LESS_TERMCAP_md=$(printf "\e[1;31m") \
LESS_TERMCAP_me=$(printf "\e[0m") \
LESS_TERMCAP_se=$(printf "\e[0m") \
LESS_TERMCAP_so=$(printf "\e[7m") \
LESS_TERMCAP_ue=$(printf "\e[0m") \
LESS_TERMCAP_us=$(printf "\e[1;32m") \
man "$@"
}
#LESS_TERMCAP_so=$(printf "\e[1;44;33m")
# that used to be in the man function for less's annoyingly over-colorful status line.
# changed it to simple reverse video (tput rev)

alias ls='ls --group-directories-first -hF --color=auto'
alias ll='ls --group-directories-first -hF --color=auto -la'
alias mpf='/usr/bin/ls -1 | mpv --playlist=-'
alias ruler='slop -o -c 1,0.3,0'
alias xmeasure='slop -o -c 1,0.3,0'
alias obxprop='obxprop | grep -v _NET_WM_ICON'
alias sx='exec startx > ~/.local/share/xorg/xlog 2>&1'
alias pngq='pngquant --nofs --speed 1 --skip-if-larger --strip '
alias screencap='ffmpeg -r 15 -s 1680x1050 -f x11grab -i :0.0 -vcodec msmpeg4v2 -qscale 2'
alias su='su -'
alias fblc='fluxbox -list-commands | column'
alias torrench='torrench -t -k -s -x -r -l -i -b --sorted'
alias F5='while sleep 60; do notify-send -u low "Pressed F5 on:" "$(xdotool getwindowname $(xdotool getwindowfocus))"; xdotool key F5; done'
alias aurs='aurman --sort_by_name -Ss'
alias cal3='cal -3 -m -w --color'
alias mkdir='mkdir -p -v'
alias ping='ping -c 5'
alias cd..='cd ..'
alias off='systemctl poweroff'
alias xg='xgamma -gamma'
alias find='find 2>/dev/null'
alias stressme='stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout'
alias hf='history|grep'
alias du1='du -m --max-depth=1|sort -g|sed "s/\t./M\t/g ; s/\///g"'
alias zipcat='gunzip -c'

mkcd() {
mkdir -p "$1"
echo cd "$1"
cd "$1"
}

[Dec 16, 2018] Red Hat Enterprise Linux 7.6 Released

Dec 16, 2018 | linux.slashdot.org

ArchieBunker ( 132337 ) , Tuesday October 30, 2018 @07:00PM ( #57565233 ) Homepage

New features include ( Score: 5 , Funny)

All of /etc has been moved to a flat binary database now called REGISTRY.DAT

A new configuration tool known as regeditor authored by Poettering himself (accidental deletion of /home only happens in rare occurrences)

In-kernel naughty words filter

systemd now includes a virtual userland previously known as busybox

[Dec 14, 2018] 10 of the best pieces of IT advice I ever heard

Dec 14, 2018 | www.techrepublic.com
  1. Learn to say "no"

    If you're new to the career, chances are you'll be saying "yes" to everything. However, as you gain experience and put in your time, the word "no" needs to creep into your vocabulary. Otherwise, you'll be exploited.

    Of course, you have to use this word with caution. Should the CTO approach and set a task before you, the "no" response might not be your best choice. But if you find end users – and friends – taking advantage of the word "yes," you'll wind up frustrated and exhausted at the end of the day.

  2. Be done at the end of the day

    I used to have a ritual at the end of every day. I would take off my watch and, at that point, I was done... no more work. That simple routine saved my sanity more often than not. I highly suggest you develop the means to inform yourself that, at some point, you are done for the day. Do not be that person who is willing to work through the evening and into the night... or you'll always be that person.

  3. Don't beat yourself up over mistakes made

    You are going to make mistakes. Some will be simple and can be quickly repaired. Others may lean toward the catastrophic. But when you finally call your IT career done, you will have made plenty of mistakes. Beating yourself up over them will prevent you from moving forward. Instead of berating yourself, learn from the mistakes so you don't repeat them.

  4. Always have something nice to say

    You work with others on a daily basis. Too many times I've watched IT pros become bitter, jaded people who rarely have anything nice or positive to say. Don't be that person. If you focus on the positive, people will be more inclined to enjoy working with you, companies will want to hire you, and the daily grind will be less "grindy."

  5. Measure twice, cut once

    How many times have you issued a command or clicked OK before you were absolutely sure you should? The old woodworking adage fits perfectly here. Considering this simple sentence – before you click OK – can save you from quite a lot of headache. Rushing into a task is never the answer, even during an emergency. Always ask yourself: Is this the right solution?

  6. At every turn, be honest

    I've witnessed engineers lie to avoid the swift arm of justice. In the end, however, you must remember that log files don't lie. Too many times there is a trail that can lead to the truth. When the CTO or your department boss discovers this truth, one that points to you lying, the arm of justice will be that much more forceful. Even though you may feel like your job is in jeopardy, or the truth will cause you added hours of work, always opt for the truth. Always.

  7. Make sure you're passionate about what you're doing

    Ask yourself this question: Am I passionate about technology? If not, get out now; otherwise, that job will beat you down. A passion for technology, on the other hand, will continue to drive you forward. Just know this: The longer you are in the field, the more likely that passion is to falter. To prevent that from happening, learn something new.

  8. Don't stop learning

    Quick – how many operating systems have you gone through over the last decade? No career evolves faster than technology. The second you believe you have something perfected, it changes. If you decide you've learned enough, it's time to give up the keys to your kingdom. Not only will you find yourself behind the curve, but all those servers and desktops you manage could quickly wind up vulnerable to every new attack in the wild. Don't fall behind.

  9. When you feel your back against a wall, take a breath and regroup

    This will happen to you. You'll be tasked to upgrade a server farm and one of the upgrades will go south. The sweat will collect, your breathing will reach panic level, and you'll lock up like Windows Me. When this happens... stop, take a breath, and reformulate your plan. Strangely enough, it's that breath taken in the moment of panic that will help you survive the nightmare. If a single, deep breath doesn't help, step outside and take in some fresh air so that you are in a better place to change course.

  10. Don't let clients see you Google a solution

    This should be a no-brainer... but I've watched it happen far too many times. If you're in the middle of something and aren't sure how to fix an issue, don't sit in front of a client and Google the solution. If you have to, step away, tell the client you need to use the restroom and, once in the safety of a stall, use your phone to Google the answer. Clients don't want to know you're learning on their dime.

See also

  • [Dec 14, 2018] Blatant neoliberal propaganda about "booming US job market" by Danielle Paquette

    That's way too much hype even for WaPo pressitutes... The reality is that you can apply to 50 jobs and not get a single response.
    Dec 12, 2018 | www.latimes.com

    Economists report that workers are starting to act like millennials on Tinder: They're ditching jobs with nary a text. "A number of contacts said that they had been 'ghosted,' a situation in which a worker stops coming to work without notice and then is impossible to contact," the Federal Reserve Bank of Chicago noted in December's Beige Book report, which tracks employment trends.

    National data on economic "ghosting" is lacking. The term, which normally applies to dating, first surfaced on Dictionary.com in 2016. But companies across the country say silent exits are on the rise. Analysts blame America's increasingly tight labor market. Job openings have surpassed the number of seekers for eight straight months, and the unemployment rate has clung to a 49-year low of 3.7% since September. Janitors, baristas, welders, accountants, engineers -- they're all in demand, said Michael Hicks, a labor economist at Ball State University in Indiana. More people may opt to skip tough conversations and slide right into the next thing. "Why hassle with a boss and a bunch of out-processing," he said, "when literally everyone has been hiring?"

    Recruiters at global staffing firm Robert Half have noticed a 10% to 20% increase in ghosting over the last year, D.C. district President Josh Howarth said. Applicants blow off interviews. New hires turn into no-shows. Workers leave one evening and never return. "You feel like someone has a high level of interest, only for them to just disappear," Howarth said. Over the summer, woes he heard from clients emerged in his own life. A job candidate for a recruiter role asked for a day to mull over an offer, saying she wanted to discuss the terms with her spouse. Then she halted communication. "In fairness," Howarth said, "there are some folks who might have so many opportunities they're considering, they honestly forget."

    Keith Station, director of business relations at Heartland Workforce Solutions, which connects job hunters with companies in Omaha, said workers in his area are most likely to skip out on low-paying service positions. "People just fall off the face of the Earth," he said of the area, which has an especially low unemployment rate of 2.8%. Some employers in Nebraska are trying to head off unfilled shifts by offering apprentice programs that guarantee raises and additional training over time. "Then you want to stay and watch your wage grow," Station said.

    Other recruitment businesses point to solutions from China, where ghosting took off during the last decade's explosive growth. "We generally make two offers for every job because somebody doesn't show up," said Rebecca Henderson, chief executive of Randstad Sourceright, a talent acquisition firm. And if both hires stick around, she said, her multinational clients are happy to deepen the bench. Though ghosting in the United States does not yet require that level of backup planning, consultants urge employers to build meaningful relationships at every stage of the hiring process. Someone who feels invested in an enterprise is less likely to bounce, said Melissa and Johnathan Nightingale, who have written about leadership and dysfunctional management. "Employees leave jobs that suck," they said in an email. "Jobs where they're abused. Jobs where they don't care about the work. And the less engaged they are, the less need they feel to give their bosses any warning."

    Some employees are simply young and restless, said James Cooper, former manager of the Old Faithful Inn at Yellowstone National Park, where he said people ghosted regularly. A few of his staffers were college students who lived in park dormitories for the summer. "My favorite," he said, "was a kid who left a note on the floor in his dorm room that said, 'Sorry bros, had to ghost.' " Other ghosters describe an inner voice that just says: Nah. Zach Keel, a 26-year-old server in Austin, Texas, made the call last year to flee a combination bar and cinema after realizing he would have to clean the place until sunrise. More work, he calculated, was always around the corner. "I didn't call," Keel said. "I didn't show up. I figured: No point in feeling guilty about something that wasn't that big of an issue. Turnover is so high, anyway."

    [Dec 14, 2018] You apply for a job. You hear nothing. Here's what to do next

    Dec 14, 2018 | finance.yahoo.com

    But the more common situation is that applicants are ghosted by companies. They apply for a job and never hear anything in response, not even a rejection. In the U.S., companies are generally not legally obligated to deliver bad news to job candidates, so many don't.

    They also don't provide feedback, because it could open the company up to a legal risk if it shows that they decided against a candidate for discriminatory reasons protected by law such as race, gender or disability.

    Hiring can be a lengthy process, and rejecting 99 candidates is much more work than accepting one. But a consistently poor hiring process that leaves applicants hanging can cause companies to lose out on the best talent and even damage perception of their brand.

    Here's what companies can do differently to keep applicants in the loop, and how job seekers can know that it's time to cut their losses.


    What companies can do differently

    There are many ways that technology can make the hiring process easier for both HR professionals and applicants.

    Only about half of all companies get back to the candidates they're not planning to interview, Natalia Baryshnikova, director of product management on the enterprise product team at SmartRecruiters, tells CNBC Make It .

    "Technology has defaults, one change is in the default option," Baryshnikova says. She said that SmartRecruiters changed the default on its technology from "reject without a note" to "reject with a note," so that candidates will know they're no longer involved in the process.

    Companies can also use technology as a reminder to prioritize rejections. For the company, rejections are less urgent than hiring. But for a candidate, they are a top priority. "There are companies out there that get back to 100 percent of candidates, but they are not yet common," Baryshnikova says.

    How one company is trying to help

    WayUp was founded to make the process of applying for a job simpler.

    "The No. 1 complaint from candidates we've heard, from college students and recent grads especially, is that their application goes into a black hole," Liz Wessel, co-founder and CEO of WayUp, a platform that connects college students and recent graduates with employers, tells CNBC Make It .

    WayUp attempts to increase transparency in hiring by helping companies source and screen applicants, and by giving applicants feedback based on soft skills. They also let applicants know if they have advanced to the next round of interviewing within 24 hours.

    Wessel says that in addition to creating a better experience for applicants, WayUp's system helps companies address bias during the resume-screening process. Resumes are assessed for hard skills up front, then each applicant participates in a phone screening before their application is passed to an employer. This ensures that no qualified candidate is passed over because their resume is different from the typical hire at an organization – something that can happen in a company that uses computers instead of people to scan resumes.

    "The companies we work with see twice as many minorities getting to offer letter," Wessel said.

    When you can safely assume that no news is bad news

    First, if you do feel that you're being ghosted by a company after sending in a job application, don't despair. No news could be good news, so don't assume right off the bat that silence means you didn't get the job.

    Hiring takes time, especially if you're applying for roles where multiple people could be hired, which is common in entry-level positions. It's possible that an HR team is working through hundreds or even thousands of resumes, and they might not have gotten to yours yet. It is not unheard of to hear back about next steps months after submitting an initial application.

    If you don't like waiting, you have a few options. Some companies have application tracking in their HR systems, so you can always check to see if the job you've applied for has that and if there's been an update to the status of your application.

    Otherwise, if you haven't heard anything, Wessel said that the only way to be sure that you aren't still in the running for the job is to determine if the position has started. Some companies will publish their calendar timelines for certain jobs and programs, so check that information to see if your resume could still be in review.

    "If that's the case and the deadline has passed," Wessel says, it's safe to say you didn't get the job.

    And finally, if you're still unclear on the status of your application, she says there's no problem with emailing a recruiter and asking outright.

    [Dec 13, 2018] Red Hat Linux Professional Users Groups

    Compare with Oracle recommendations; some settings might be wrong. For what Oracle recommends, see Oracle kernel parameters tuning on Linux
    Dec 13, 2018 | www.linkedin.com

    Oracle recommendations:

    ip_local_port_range Minimum:9000 Maximum: 65000 /proc/sys/net/ipv4/ip_local_port_range
    rmem_default 262144 /proc/sys/net/core/rmem_default
    rmem_max 4194304 /proc/sys/net/core/rmem_max
    wmem_default 262144 /proc/sys/net/core/wmem_default
    wmem_max 1048576 /proc/sys/net/core/wmem_max
    tcp_wmem 262144 /proc/sys/net/ipv4/tcp_wmem
    tcp_rmem 4194304 /proc/sys/net/ipv4/tcp_rmem

    Minesh Patel , Site Reliability Engineer, Austin, Texas Area

    Tuning the TCP I/O settings on Red Hat will reduce your intermittent or random slowness problems, and other issues, if you are running with the default TCP I/O settings.

    For Red Hat Linux, 131071 is the default value.

    Double the value from 131071 to 262144
    cat /proc/sys/net/core/rmem_max
    131071 → 262144
    cat /proc/sys/net/core/rmem_default
    129024 → 262144
     cat /proc/sys/net/core/wmem_default
    129024 → 262144
     cat /proc/sys/net/core/wmem_max
    131071 → 262144
    
    To improve failover performance in a RAC cluster, consider changing the following IP kernel parameters as well:
    net.ipv4.tcp_keepalive_time
    net.ipv4.tcp_keepalive_intvl
    net.ipv4.tcp_retries2
    net.ipv4.tcp_syn_retries
    # sysctl -w net.ipv4.ip_local_port_range="1024 65000"
    

    To make the change permanent, add the following line to the /etc/sysctl.conf file, which is used during the boot process:

    net.ipv4.ip_local_port_range=1024 65000
    

    The first number is the first local port allowed for TCP and UDP traffic, and the second number is the last port number.
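
    Putting the pieces together, here is a sketch of what the persistent settings could look like in /etc/sysctl.conf, using the Oracle-recommended values from the list above (treat the exact numbers as examples and verify them against your own Oracle documentation before applying them):

    # /etc/sysctl.conf -- example values, verify before use
    net.ipv4.ip_local_port_range = 9000 65000
    net.core.rmem_default = 262144
    net.core.rmem_max = 4194304
    net.core.wmem_default = 262144
    net.core.wmem_max = 1048576

    Then load the new settings without a reboot:

    # sysctl -p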

    [Dec 05, 2018] How can I scroll up to see the past output in PuTTY?

    Dec 05, 2018 | superuser.com


    user1721949 ,Dec 12, 2012 at 8:32

    I have a script which, when I run it from PuTTY, scrolls the screen. Now I want to go back to see the errors, but when I scroll up I can see the past commands, but not their output.

    How can I see the past output?

    Rico ,Dec 13, 2012 at 8:24

    Shift+Pgup/PgDn should work for scrolling without using the scrollbar.


    If shift pageup/pagedown fails, try this command: "reset", which seems to correct the display. – user530079 Jul 12 '17 at 21:45

    RedGrittyBrick ,Dec 12, 2012 at 9:31

    If you don't pipe the output of your commands into something like less, you will be able to use Putty's scroll-bars to view earlier output.

    Putty has settings for how many lines of past output it retains in its buffer.


    before scrolling

    after scrolling back (upwards)

    If you use something like less the output doesn't get into Putty's scroll buffer


    after using less

    David Dai ,Dec 14, 2012 at 3:31

    why is putty different from the native linux console at this point? – David Dai Dec 14 '12 at 3:31

    konradstrack ,Dec 12, 2012 at 9:52

    I would recommend using screen if you want to have good control over the scroll buffer on a remote shell.

    You can change the scroll buffer size to suit your needs by setting:

    defscrollback 4000
    

    in ~/.screenrc, which will specify the number of lines you want to be buffered (4000 in this case).

    Then you should run your script in a screen session, e.g. by executing screen ./myscript.sh or first executing screen and then ./myscript.sh inside the session.

    It's also possible to enable logging of the console output to a file. You can find more info on the screen's man page.
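
    If you also want a permanent record of the session, one simple approach (assuming GNU screen) is to start it with logging enabled; the output is then written to a screenlog.0 file in the current directory:

    screen -L ./myscript.sh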


    From your description, it sounds like the "problem" is that you are using screen, tmux, or another window manager dependent on them (byobu). Normally you should be able to scroll back in putty with no issue. Exceptions include if you are in an application like less or nano that creates its own "window" on the terminal.

    With screen and tmux you can generally scroll back with SHIFT + PGUP (same as you could from the physical terminal of the remote machine). They also both have a "copy" mode that frees the cursor from the prompt and lets you use arrow keys to move it around (for selecting text to copy with just the keyboard). It also lets you scroll up and down with the PGUP and PGDN keys. Copy mode under byobu using screen or tmux backends is accessed by pressing F7 (careful, F6 disconnects the session). To do so directly under screen you press CTRL + a then ESC or [ . You can use ESC to exit copy mode. Under tmux you press CTRL + b then [ to enter copy mode and ] to exit.

    The simplest solution, of course, is not to use either. I've found both to be quite a bit more trouble than they are worth. If you would like to use multiple different terminals on a remote machine simply connect with multiple instances of putty and manage your windows using, er... Windows. Now forgive me but I must flee before I am burned at the stake for my heresy.

    EDIT: almost forgot, some keys may not be received correctly by the remote terminal if putty has not been configured correctly. In your putty config check Terminal -> Keyboard . You probably want the function keys and keypad set to be either Linux or Xterm R6 . If you are seeing strange characters on the terminal when attempting the above this is most likely the problem.
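
    For tmux users, the rough equivalent of screen's defscrollback is the history-limit option; a minimal sketch for ~/.tmux.conf (the 50000-line figure is only an example):

    # ~/.tmux.conf
    set -g history-limit 50000

    Windows created after this setting is loaded keep that many lines of scrollback.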

    [Nov 22, 2018] Sorry, Linux. Kubernetes is now the OS that matters InfoWorld

That's very primitive thinking. If RHEL is royally screwed, as is the case with RHEL7, that affects Kubernetes -- it does not exist outside the OS
    Nov 22, 2018 | www.infoworld.com
    We now live in a Kubernetes world

    Perhaps Redmonk analyst Stephen O'Grady said it best : "If there was any question in the wake of IBM's $34 billion acquisition of Red Hat and its Kubernetes-based OpenShift offering that it's Kubernetes's world and we're all just living in it, those [questions] should be over." There has been nearly $60 billion in open source M&A in 2018, but most of it revolves around Kubernetes.

    Red Hat, for its part, has long been (rightly) labeled the enterprise Linux standard, but IBM didn't pay for Red Hat Enterprise Linux. Not really.

    [Nov 21, 2018] Linux Shutdown Command 5 Practical Examples Linux Handbook

    Nov 21, 2018 | linuxhandbook.com

    Restart the system with shutdown command

There is a separate reboot command but you don't need to learn a new command just for rebooting the system. You can use the Linux shutdown command for rebooting as well.

    To reboot a system using the shutdown command, use the -r option.

    sudo shutdown -r
    

    The behavior is the same as the regular shutdown command. It's just that instead of a shutdown, the system will be restarted.

So, if you use shutdown -r without any time argument, it will schedule a reboot after one minute.

    You can schedule reboots the same way you did with shutdown.

    sudo shutdown -r +30
    

    You can also reboot the system immediately with shutdown command:

    sudo shutdown -r now
    
    4. Broadcast a custom message

    If you are in a multi-user environment and there are several users logged on the system, you can send them a custom broadcast message with the shutdown command.

    By default, all the logged users will receive a notification about scheduled shutdown and its time. You can customize the broadcast message in the shutdown command itself:

    sudo shutdown 16:00 "systems will be shutdown for hardware upgrade, please save your work"
    

    Fun Stuff: You can use the shutdown command with -k option to initiate a 'fake shutdown'. It won't shutdown the system but the broadcast message will be sent to all logged on users.
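
For example, a sketch of such a fake shutdown (the 10-minute delay and the wording are only placeholders):

    sudo shutdown -k +10 "maintenance rehearsal only, nothing will actually shut down"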

    5. Cancel a scheduled shutdown

    If you scheduled a shutdown, you don't have to live with it. You can always cancel a shutdown with option -c.

    sudo shutdown -c
    

And if you had broadcast a message about the scheduled shutdown, as a good sysadmin, you might also want to notify other users about cancelling the scheduled shutdown.

    sudo shutdown -c "planned shutdown has been cancelled"
    

    Halt vs Power off

Halt (option -H): terminates all processes and shuts down the CPU.
    Power off (option -P): Pretty much like halt but it also turns off the unit itself (lights and everything on the system).

    Historically, the earlier computers used to halt the system and then print a message like "it's ok to power off now" and then the computers were turned off through physical switches.

These days, halt should automatically power off the system thanks to ACPI.

    These were the most common and the most useful examples of the Linux shutdown command. I hope you have learned how to shut down a Linux system via command line. You might also like reading about the less command usage or browse through the list of Linux commands we have covered so far.

    If you have any questions or suggestions, feel free to let me know in the comment section.

    [Nov 19, 2018] The rise of Shadow IT - Should CIOs take umbrage

    Notable quotes:
    "... Shadow IT broadly refers to technology introduced into an organisation that has not passed through the IT department. ..."
    "... The result is first; no proactive recommendations from the IT department and second; long approval periods while IT teams evaluate solutions that the business has proposed. Add an over-defensive approach to security, and it is no wonder that some departments look outside the organisation for solutions. ..."
    Nov 19, 2018 | cxounplugged.com

    Shadow IT broadly refers to technology introduced into an organisation that has not passed through the IT department. A familiar example of this is BYOD but, significantly, Shadow IT now includes enterprise grade software and hardware, which is increasingly being sourced and managed outside of the direct control of the organisation's IT department and CIO.

    Examples include enterprise wide CRM solutions and marketing automation systems procured by the marketing department, as well as data warehousing, BI and analysis services sourced by finance officers.

    So why have so many technology solutions slipped through the hands of so many CIOs? I believe a confluence of events is behind the trend; there is the obvious consumerisation of IT, which has resulted in non-technical staff being much more aware of possible solutions to their business needs – they are more tech-savvy. There is also the fact that some CIOs and technology departments have been too slow to react to the business's technology needs.

    The reason for this slow reaction is that very often IT Departments are just too busy running day-to-day infrastructure operations such as network and storage management along with supporting users and software. The result is first; no proactive recommendations from the IT department and second; long approval periods while IT teams evaluate solutions that the business has proposed. Add an over-defensive approach to security, and it is no wonder that some departments look outside the organisation for solutions.

    [Nov 18, 2018] Systemd killing screen and tmux

    Nov 18, 2018 | theregister.co.uk

    fobobob , Thursday 10th May 2018 18:00 GMT

    Might just be a Debian thing as I haven't looked into it, but I have enough suspicion towards systemd that I find it worth mentioning. Until fairly recently (in terms of Debian releases), the default configuration was to murder a user's processes when they log out. This includes things such as screen and tmux, and I seem to recall it also murdering disowned and NOHUPed processes as well.
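
The behaviour described above comes from logind's KillUserProcesses option. If your distribution ships with it turned on, a minimal sketch of the usual way to switch it off (assuming systemd-logind) is:

# /etc/systemd/logind.conf
KillUserProcesses=no

A restart of systemd-logind (or simply a reboot) is needed for the change to take effect.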
    Tim99 , Thursday 10th May 2018 06:26 GMT
    How can we make money?

    A dilemma for a Really Enterprise Dependant Huge Applications Technology company - The technology they provide is open, so almost anyone could supply and support it. To continue growing, and maintain a healthy profit they could consider locking their existing customer base in; but they need to stop other suppliers moving in, who might offer a better and cheaper alternative, so they would like more control of the whole ecosystem. The scene: An imaginary high-level meeting somewhere - The agenda: Let's turn Linux into Windows - That makes a lot of money:-

    Q: Windows is a monopoly, so how are we going to monopolise something that is free and open, because we will have to supply source code for anything that will do that? A: We make it convoluted and obtuse, then we will be the only people with the resources to offer it commercially; and to make certain, we keep changing it with dependencies to "our" stuff everywhere - Like Microsoft did with the Registry.

    Q: How are we going to sell that idea? A: Well, we could create a problem and solve it - The script kiddies who like this stuff, keep fiddling with things and rebooting all of the time. They don't appear to understand the existing systems - Sell the idea they do not need to know why *NIX actually works.

    Q: *NIX is designed to be dependable, and go for long periods without rebooting, How do we get around that. A: That is not the point, the kids don't know that; we can sell them the idea that a minute or two saved every time that they reboot is worth it, because they reboot lots of times in every session - They are mostly running single user laptops, and not big multi-user systems, so they might think that that is important - If there is somebody who realises that this is trivial, we sell them the idea of creating and destroying containers or stopping and starting VMs.

    Q: OK, you have sold the concept, how are we going to make it happen? A: Well, you know that we contribute quite a lot to "open" stuff. Let's employ someone with a reputation for producing fragile, barely functioning stuff for desktop systems, and tell them that we need a "fast and agile" approach to create "more advanced" desktop style systems - They would lead a team that will spread this everywhere. I think I know someone who can do it - We can have almost all of the enterprise market.

    Q: What about the other large players, surely they can foil our plan? A: No, they won't want to, they are all big companies and can see the benefit of keeping newer, efficient competitors out of the market. Some of them sell equipment and system-wide consulting, so they might just use our stuff with a suitable discount/mark-up structure anyway.

    ds6 , 6 months
    Re: How can we make money?

    This is scarily possible and undeserving of the troll icon.

    Harkens easily to non-critical software developers intentionally putting undocumented, buggy code into production systems, forcing the company to keep the guy on payroll to keep the wreck chugging along.

    DougS , Thursday 10th May 2018 07:30 GMT
    Init did need fixing

    But replacing it with systemd is akin to "fixing" the restrictions of travel by bicycle (limited speed and range, ending up sweaty at your destination, dangerous in heavy traffic) by replacing it with an Apache helicopter gunship that has a whole new set of restrictions (need for expensive fuel, noisy and pisses off the neighbors, need a crew of trained mechanics to keep it running, local army base might see you as a threat and shoot missiles at you)

    Too bad we didn't get the equivalent of a bicycle with an electric motor, or perhaps a moped.

    -tim , Thursday 10th May 2018 07:33 GMT
    Those who do not understand Unix are condemned to reinvent it, poorly.

    "It sounds super basic, but actually it is much more complex than people think," Poettering said. "Because Systemd knows which service a process belongs to, it can shut down that process."

    Poettering and Red Hat,

    Please learn about "Process Groups"

    Init has had the groundwork for most of the missing features since the early 1980s. For example the "id" field in /etc/inittab was intended for a "makefile" like syntax to fix most of these problems but was dropped in the early days of System V because it wasn't needed.
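    For readers who never used it, an inittab entry has the form id:runlevels:action:process, where the "id" field is the short tag referred to above. A sketch with a hypothetical daemon path:

    # /etc/inittab -- respawn keeps the process running, restarting it if it dies
    myd:2345:respawn:/usr/local/sbin/mydaemon --foreground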

    Herby , Thursday 10th May 2018 07:42 GMT
    Process 1 IS complicated.

    That is the main problem. With different processes you get different results. For all its faults, SysV init and RC scripts was understandable to some extent. My (cursory) understanding of systemd is that it appears more complicated to UNDERSTAND than the init stuff.

    The init scripts are nice text scripts which are executed by a nice well documented shell (bash mostly). Systemd has all sorts of blobs that somehow do things and are totally confusing to me. It suffers from "anti-KISS".

    Perhaps a nice book could be written WITH example to show what is going on.

    Now let's see does audio come before or after networking (or at the same time)?
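    In the spirit of the "book with examples" request: the systemd counterpart of a simple RC script is at least short and plain text. A minimal unit file sketch (service name and paths are hypothetical):

    # /etc/systemd/system/mydaemon.service
    [Unit]
    Description=Example daemon
    After=network.target              # ordering: start after the network target

    [Service]
    ExecStart=/usr/local/sbin/mydaemon --foreground
    Restart=on-failure                # roughly what "respawn" did in inittab

    [Install]
    WantedBy=multi-user.target        # pulled in on a normal multi-user boot

    # manage it with: systemctl start|stop|status mydaemon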

    Chronos , Thursday 10th May 2018 09:12 GMT
    Logging

    If they removed logging from the systemd core and went back to good ol' plaintext syslog[-ng], I'd have very little bad to say about Lennart's monolithic pet project. Indeed, I much prefer writing unit files than buggering about getting rcorder right in the old SysV init.

    Now, if someone wanted to nuke pulseaudio from orbit and do multiplexing in the kernel a la FreeBSD, I'll chip in with a contribution to the warhead fund. Needing a userland daemon just to pipe audio to a device is most certainly a solution in search of a problem.

    Tinslave_the_Barelegged , Thursday 10th May 2018 11:29 GMT
    Re: Logging

    > If they removed logging from the systemd core

    And time syncing

    And name resolution

    And disk mounting

    And logging in

    ...and...

    [Nov 18, 2018] From now on, I will call Systemd-based Linux distros "SNU Linux". Because Systemd's Not Unix-like.

    Nov 18, 2018 | theregister.co.uk

    tekHedd , Thursday 10th May 2018 15:28 GMT

    Not UNIX-like? SNU!

    From now on, I will call Systemd-based Linux distros "SNU Linux". Because Systemd's Not Unix-like.

    It's not clever, but it's the future. From now on, all major distributions will be called SNU Linux. You can still freely choose to use a non-SNU linux distro, but if you want to use any of the "normal" ones, you will have to call it "SNU" whether you like it or not. It's for your own good. You'll thank me later.

    [Nov 18, 2018] So in all reality, systemd is an answer to a problem that nobody who is administering servers ever had.

    Nov 18, 2018 | theregister.co.uk

    jake , Thursday 10th May 2018 20:23 GMT

    Re: Bah!

    Nice rant. Kinda.

    However, I don't recall any major agreement that init needed fixing. Between BSD and SysV inits, probably 99.999% of all use cases were covered. In the 1 in 100,000 use case, a little bit of C (stand alone code, or patching init itself) covered the special case. In the case of Slackware's SysV/BSD amalgam, I suspect it was more like one in ten million.

    So in all reality, systemd is an answer to a problem that nobody had. There was no reason for it in the first place. There still isn't a reason for it ... especially not in the 999,999 places out of 1,000,000 where it is being used. Throw in the fact that it's sticking its tentacles[0] into places where nobody in their right mind would expect an init as a dependency (disk partitioning software? WTF??), can you understand why us "old guard" might question the sanity of people singing it's praises?

    [0] My spall chucker insists that the word should be "testicles". Tempting ...

    [Nov 18, 2018] You love Systemd -- you just don't know it yet, wink Red Hat bods

    Nov 18, 2018 | theregister.co.uk

    sisk , Thursday 10th May 2018 21:17 GMT

    It's a pretty polarizing debate: either you see Systemd as a modern, clean, and coherent management toolkit

    Very, very few Linux users see it that way.

    or an unnecessary burden running roughshod over the engineering maxim: if it ain't broke, don't fix it.

    Seen as such by 90% of Linux users because it demonstrably is.

    Truthfully Systemd is flawed at a deeply fundamental level. While there are a very few things it can do that init couldn't - the killing off processes owned by a service mentioned as an example in this article is handled just fine by a well written init script - the tradeoffs just aren't worth it. For example: fscking BINARY LOGS. Even if all of Systemd's numerous other problems were fixed that one would keep it forever on my list of things to avoid if at all possible, and the fact that the Systemd team thought it a good idea to make the logs binary shows some very troubling flaws in their thinking at a very fundamental level.
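    For those stuck dealing with the binary journal anyway, it can at least be dumped back into grep-able text; these journalctl invocations are standard:

    journalctl -u sshd.service                  # logs for a single unit
    journalctl --since today | grep -i error    # back to plain text for grep
    journalctl -o export > journal.dump         # stable export format for shipping logs elsewhere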

    Dazed and Confused , Thursday 10th May 2018 21:43 GMT
    Re: fscking BINARY LOGS.

    And config too

    When it comes to logs and config file if you can't grep it then it doesn't belong on Linux/Unix

    Nate Amsden , Thursday 10th May 2018 23:51 GMT
    Re: fscking BINARY LOGS.

    WRT grep and logs I'm the same way, which is why I hate JSON so much. My saying has been along the lines of "if it's not friends with grep/sed then it's not friends with me". I have whipped up some whacky sed stuff to generate a tiny bit of JSON to read into chef for provisioning systems though.

    XML is similar, though I like XML a lot more; at least the closing tags are a lot easier to follow than trying to count the nested braces in JSON.

    I haven't had the displeasure much of dealing with the systemd binary logs yet myself.

    Tomato42 , Saturday 12th May 2018 08:26 GMT
    Re: fscking BINARY LOGS.

    > I haven't had the displeasure much of dealing with the systemd binary logs yet myself.

    "I have no clue what I'm talking about or what's a robust solution but dear god, that won't stop me!" – why is it that all the people complaining about journald sound like that?

    systemd works just fine with regular syslog-ng, without journald (that's the thing that has binary logs) in sight
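    The setup being described is roughly the following journald configuration (option names are real; the values shown are one common choice, not the only one):

    # /etc/systemd/journald.conf
    [Journal]
    Storage=volatile        # keep the journal in RAM only (Storage=none discards it entirely)
    ForwardToSyslog=yes     # hand every message to the local syslog daemon, e.g. syslog-ng
    SystemMaxUse=50M        # cap disk usage if persistent storage is ever enabled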

    HieronymusBloggs , Saturday 12th May 2018 18:17 GMT
    Re: fscking BINARY LOGS.

    "systemd works just fine with regular syslog-ng, without journald (that's the thing that has binary logs) in sight"

    Journald can't be switched off, only redirected to /dev/null. It still generates binary log data (which has caused me at least one system hang due to the absurd amount of data it was generating on a system that was otherwise functioning correctly) and consumes system resources. That isn't my idea of "works just fine".

    ""I have no clue what I'm talking about or what's a robust solution but dear god, that won't stop me!" – why is it that all the people complaining about journald sound like that?"

    Nice straw man. Most of the complaints I've seen have been from experienced people who do know what they're talking about.

    sisk , Tuesday 15th May 2018 20:22 GMT
    Re: fscking BINARY LOGS.

    "I have no clue what I'm talking about or what's a robust solution but dear god, that won't stop me!" – why is it that all the people complaining about journald sound like that?

    I have had the displeasure of dealing with journald and it is every bit as bad as everyone says and worse.

    systemd works just fine with regular syslog-ng, without journald (that's the thing that has binary logs) in sight

    Yeah, I've tried that. It caused problems. It wasn't a viable option.

    Anonymous Coward , Thursday 10th May 2018 22:30 GMT
    Parking U$5bn in redhad for a few months will fix this...

    So it's now been 4 years since they first tried to force that shoddy desktop init system into our servers? And yet they still feel compelled to tell everyone, look, it really isn't that terrible. That should tell you something. Unless you are tone deaf like Redhat. Surprised people didn't start walking out when Poettering outlined his plans for the next round of systemD power grabs...

    Anyway the only way this farce will end is with shareholder activism. Some hedge fund to buy 10-15 percent of redhat (about the amount you need to make life difficult for management) and force them to sack that "stable genius" Poettering. So market cap is 30bn today. Anyone with 5bn spare to park for a few months wanna step forward and do some good?

    cjcox , Thursday 10th May 2018 22:33 GMT
    He's a pain

    Early on I warned that he was trying to solve a very large problem space. He insisted he could do it with his 10 or so "correct" ways of doing things, which quickly became 20, then 30, then 50, then 90, etc.. etc. I asked for some of the features we had in init, he said "no valid use case". Then, much later (years?), he implements it (no use case provided btw).

    Interesting fellow. Very bitter. And not a good listener. But you don't need to listen when you're always right.

    Daggerchild , Friday 11th May 2018 08:27 GMT
    Spherical wheel is superior.

    @T42

    Now, you see, you just summed up the whole problem. Like systemd's author, you think you know better than the admin how to run his machine, without knowing, or caring to ask, what he's trying to achieve. Nobody ever runs a computer, to achieve running systemd do they.

    Tomato42 , Saturday 12th May 2018 09:05 GMT
    Re: Spherical wheel is superior.

    I don't claim I know better, but I do know that I never saw a non-distribution provided init script that handled correctly the basic of corner cases – service already running, run file left-over but process dead, service restart – let alone the more obscure ones, like application double forking when it shouldn't (even when that was the failure mode of the application the script was provided with). So maybe, just maybe, you haven't experienced everything there is to experience, so your opinion is subjective?

    Yes, the sides of the discussion should talk more, but this applies to both sides. "La, la, la, sysv is working fine on my machine, thankyouverymuch" is not what you can call "participating in discussion". So is quoting well known and long discussed (and disproven) points. (and then downvoting people into oblivion for daring to point this things out).

    now in the real world, people that have to deal with init systems on daily basis, as distribution maintainers, by large, have chosen to switch their distributions to systemd, so the whole situation I can sum up one way:

    "the dogs may bark, but the caravan moves on"

    Kabukiwookie , Monday 14th May 2018 00:14 GMT
    Re: Spherical wheel is superior.

    I do know that I never saw a non-distribution provided init script that handled correctly the basic of corner cases – service already running

    This only shows that you don't have much real life experience managing lots of hosts.

    like application double forking when it shouldn't

    If this is a problem in the init script, this should be fixed in the init script. If this is a problem in the application itself, it should be fixed in the application, not worked around by the init mechanism. If you're suggesting the latter, you should not be touching any production box.

    "La, la, la, sysv is working fine on my machine, thankyouverymuch" is not what you can call "participating in discussion".

    Shoving down systemd down people's throat as a solution to a non-existing problem, is not a discussion either; it is the very definition of 'my way or the highway' thinking.

    now in the real world, people that have to deal with init systems on daily basis

    Indeed and having a bunch of sub-par developers, focused on the 'year of the Linux desktop' to decide what the best way is for admins to manage their enterprise environment is not helping.

    "the dogs may bark, but the caravan moves on"

    Indeed. It's your way or the highway; I thought you were just complaining about the people complaining about systemd not wanting to have a discussion, while all the while it's systemd proponents ignoring and dismissing very valid complaints.

    Daggerchild , Monday 14th May 2018 14:10 GMT
    Re: Spherical wheel is superior.

    "I never saw ... run file left-over but process dead, service restart ..."

    Seriously? I wrote one last week! You use an OS atomic lock on the pidfile and exec the service if the lock succeeded. The lock dies with the process. It's a very small shellscript.
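    A minimal sketch of the kind of script being described, using flock(1) so the lock is tied to the process and vanishes with it (daemon name and paths are hypothetical):

    #!/bin/sh
    # open FD 9 on a lock file and take an exclusive, non-blocking lock
    exec 9>/var/run/mydaemon.lock
    if ! flock -n 9; then
        echo "mydaemon already running" >&2
        exit 1
    fi
    # exec the service; FD 9 (and the lock) is inherited and released only when the daemon exits
    exec /usr/local/sbin/mydaemon --foreground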

    I shot a systemd controlled service. Systemd put it into error state and wouldn't restart it unless I used the right runes. That is functionally identical to the thing you just complained about.

    "application double forking when it shouldn't"

    I'm going to have to guess what that means, and then point you at DJB's daemontools. You leave a FD open in the child. They can fork all they like. You'll still track when the last dies as the FD will cause an event on final close.

    "So maybe, just maybe, you haven't experienced everything there is to experience"

    You realise that's the conspiracy theorist argument "You don't know everything, therefore I am right". Doubt is never proof of anything.

    "La, la, la, sysv is working fine" is not what you can call "participating in discussion".

    Well, no.. it's called evidence. Evidence that things are already working fine, thanks. Evidence that the need for discussion has not been displayed. Would you like a discussion about the Earth being flat? Why not? Are you refusing to engage in a constructive discussion? How obstructive!

    "now in the real world..."

    In the *real* world people run Windows and Android, so you may want to rethink the "we outnumber you, so we must be right" angle. You're claiming an awful lot of highground you don't seem to actually know your way around, while trying to wield arguments you don't want to face yourself...

    "(and then downvoting people into oblivion for daring to point this things out)"

    It's not some denialist conspiracy to suppress your "daring" Truth - you genuinely deserve those downvotes.

    Anonymous Coward , Friday 11th May 2018 17:27 GMT
    I have no idea how or why systemd ended up on servers. Laptops I can see the appeal for "this is the year of the linux desktop" - for when you want your rebooted machine to just be there as fast as possible (or fail mysteriously as fast as possible). Servers, on the other hand, which take in the order of 10+ minutes to get through POST, initialising whatever LOM, disk controllers, and whatever exotica hardware you may also have connected, I don't see a benefit in Linux starting (or failing to start) a wee bit more quickly. You're only going to reboot those beasts when absolutely necessary. And it should boot the same as it booted last time. PID1 should be as simple as possible.

    I only use CentOS these days for FreeIPA but now I'm questioning my life decisions even here. That Debian adopted systemd too is a real shame. It's actually put me off the whole game. Time spent learning systemd is time that could have been spent doing something useful that won't end up randomly breaking with a "will not fix" response.

    Systemd should be taken out back and put out of our misery.

    [Nov 18, 2018] Just let chef start the services when it runs after the system boots (which means they start maybe 1 or 2 mins after bootup).

    Notable quotes:
    "... Another thing bit us with systemd recently as well again going back to bind. Someone on the team upgraded our DNS systems to systemd and the startup parameters for bind were not preserved because systemd ignores the /etc/default/bind file. As a result we had tons of DNS failures when bind was trying to reach out to IPv6 name servers(ugh), when there is no IPv6 connectivity in the network (the solution is to start bind with a -4 option). ..."
    "... I'm sure I've only scratched the surface of systemd pain. I'm sure it provides good value to some people, I hear it's good with containers (I have been running LXC containers for years now, I see nothing with systemd that changes that experience so far). ..."
    "... If systemd is a solution to any set of problems, I'd love to have those problems back! ..."
    Nov 18, 2018 | theregister.co.uk

    Nate Amsden , Thursday 10th May 2018 16:34 GMT

    as a linux user for 22 users

    (20 of which on Debian, before that was Slackware)

    I am new to systemd, maybe 3 or 4 months now tops on Ubuntu, and a tiny bit on Debian before that.

    I was confident I was going to hate systemd before I used it just based on the comments I had read over the years, I postponed using it as long as I could. Took just a few minutes of using it to confirm my thoughts. Now to be clear, if I didn't have to mess with the systemd to do stuff then I really wouldn't care since I don't interact with it (which is the case on my laptop at least though laptop doesn't have systemd anyway). I manage about 1,000 systems running Ubuntu for work, so I have to mess with systemd, and init etc there. If systemd would just do ONE thing I think it would remove all of the pain that it has inflicted on me over the past several months and I could learn to accept it.

    That one thing is, if there is an init script, RUN IT. Not run it like systemd does now. But turn off ALL intelligence systemd has when it finds that script and run it. Don't put it on any special timers, don't try to detect if it is running already, or stopped already or whatever, fire the script up in blocking mode and wait till it exits.

    My first experience with systemd was on one of my home servers, I re-installed Debian on it last year, rebuilt the hardware etc and with it came systemd. I believe there is a way to turn systemd off but I haven't tried that yet. The first experience was with bind. I have a slightly custom init script (from previous debian) that I have been using for many years. I copied it to the new system and tried to start bind. Nothing. I looked in the logs and it seems that it was trying to interface with rndc(internal bind thing) for some reason, and because rndc was not working(I never used it so I never bothered to configure it) systemd wouldn't launch bind. So I fixed rndc and systemd would now launch bind, only to stop it within 1 second of launching. My first workaround was just to launch bind by hand at the CLI (no init script), left it running for a few months. Had a discussion with a co-worker who likes systemd and he explained that making a custom unit file and using the type=forking option may fix it.. That did fix the issue.
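    The workaround mentioned looks roughly like this; Type=forking is the real systemd option, while the unit name, script path and PID file location below are assumptions to be adjusted for the actual setup:

    # /etc/systemd/system/bind-legacy.service -- hypothetical unit name
    [Unit]
    Description=BIND started via the legacy init script
    After=network.target

    [Service]
    Type=forking                      # the script forks and exits; systemd tracks the forked child
    ExecStart=/etc/init.d/bind9 start
    ExecStop=/etc/init.d/bind9 stop
    PIDFile=/run/named/named.pid      # assumption: point this at wherever named writes its pid

    [Install]
    WantedBy=multi-user.target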

    Next issue came up when dealing with MySQL clusters. I had to initialize the cluster with the "service mysql bootstrap-pxc" command (using the start command on the first cluster member is a bad thing). Run that with systemd, and systemd runs it fine. But go to STOP the service, and systemd thinks the service is not running so doesn't even TRY to stop the service(the service is running). My workaround for my automation for mysql clusters at this point is to just use mysqladmin to shut the mysql instances down. Maybe newer mysql versions have better systemd support though a co-worker who is our DBA and has used mysql for many years says even the new Maria DB builds don't work well with systemd. I am working with Mysql 5.6 which is of course much much older.

    Next issue came up with running init scripts that have the same words in them, in the case of most recently I upgraded systems to systemd that run OSSEC. OSSEC has two init scripts for us on the server side (ossec and ossec-auth). Systemd refuses to run ossec-auth because it thinks there is a conflict with the ossec service. I had the same problem with multiple varnish instances running on the same system (varnish instances were named varnish-XXX and varnish-YYY). In the varnish case using custom unit files I got systemd to the point where it would start the service but it still refuses to "enable" the service because of the name conflict (I even changed the name but then systemd was looking at the name of the binary being called in the unit file and said there is a conflict there).

    fucking a. Systemd shut up, just run the damn script. It's not hard.

    Later a co-worker explained the "systemd way" for handling something like multiple varnish instances on the system but I'm not doing that, in the meantime I just let chef start the services when it runs after the system boots(which means they start maybe 1 or 2 mins after bootup).

    Another thing bit us with systemd recently as well again going back to bind. Someone on the team upgraded our DNS systems to systemd and the startup parameters for bind were not preserved because systemd ignores the /etc/default/bind file. As a result we had tons of DNS failures when bind was trying to reach out to IPv6 name servers(ugh), when there is no IPv6 connectivity in the network (the solution is to start bind with a -4 option).
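    The defaults file can be wired back in explicitly with a drop-in; EnvironmentFile= and the empty ExecStart= reset are standard systemd mechanics, while the file name, binary path and $OPTIONS variable are assumptions about this particular setup:

    # /etc/systemd/system/named.service.d/defaults.conf
    [Service]
    EnvironmentFile=-/etc/default/bind9     # leading "-" means: do not fail if the file is missing
    ExecStart=
    ExecStart=/usr/sbin/named -f $OPTIONS   # re-declare ExecStart so it picks up $OPTIONS (e.g. "-4")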

    I believe I have also caught systemd trying to mess with file systems(iscsi mount points). I have lots of automation around moving data volumes on the SAN between servers and attaching them via software iSCSI directly to the VMs themselves(before vsphere 4.0 I attached them via fibre channel to the hypervisor but a feature in 4.0 broke that for me). I noticed on at least one occasion when I removed the file systems from a system that SOMETHING (I assume systemd) mounted them again, and it was very confusing to see file systems mounted again for block devices that DID NOT EXIST on the server at the time. I worked around THAT one I believe with the "noauto" option in fstab again. I had to put a lot of extra logic in my automation scripts to work around systemd stuff.

    I'm sure I've only scratched the surface of systemd pain. I'm sure it provides good value to some people, I hear it's good with containers (I have been running LXC containers for years now, I see nothing with systemd that changes that experience so far).

    But if systemd would just do this one thing and go into dumb mode with init scripts I would be quite happy.

    GrumpenKraut , Thursday 10th May 2018 17:52 GMT
    Re: as a linux user for 22 users

    Now more seriously: it really strikes me that complaints about systemd come from people managing non-trivial setups like the one you describe. While it might have been a PITA to get this done with the old init mechanism, you could make it work reliably.

    If systemd is a solution to any set of problems, I'd love to have those problems back!

    [Nov 18, 2018] SystemD is just a symptom of this regression of Red Hat into a money-making machine

    Nov 18, 2018 | theregister.co.uk

    Will Godfrey , Thursday 10th May 2018 16:30 GMT

    Business Model

    Red Hat have definitely taken a lurch to the dark side in recent years. It seems to be the way businesses go.

    They start off providing a service to customers.

    As they grow the customers become users.

    Once they reach a certain point the users become consumers, and at this point it is the 'consumers' that provide a service for the business.

    SystemD is just a symptom of this regression.

    [Nov 18, 2018] Fudging the start-up and restoring eth0

    Truth be told, the biosdevname abomination is from Dell
    Nov 18, 2018 | theregister.co.uk

    The Electron , Thursday 10th May 2018 12:05 GMT

    Fudging the start-up and restoring eth0

    I knew systemd was coming thanks to playing with Fedora. The quicker start-up times were welcomed. That was about it! I have had to kickstart many of my CentOS 7 builds to disable IPv6 (NFS complains bitterly), kill the incredibly annoying 'biosdevname' that turns sensible eth0/eth1 into some daftly named nonsense, replace Gnome 3 (shudder) with MATE, and fudge start-up processes. In a previous job, I maintained 2 sets of CentOS 7 'infrastructure' servers that provided DNS, DHCP, NTP, and LDAP to a large number of historical vlans. Despite enabling the systemd-network wait online option, which is supposed to start all networks *before* listening services, systemd would run off flicking all the "on" switches having only set-up a couple of vlans. Result: NTP would only be listening on one or two vlan interfaces. The only way I found to get around that was to enable rc.local and call systemd to restart the NTP daemon after 20 seconds. I never had the time to raise a bug with Red Hat, and I assume the issue still persists as no-one designed systemd to handle 15-odd vlans!?
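    For the record, the stock recipe for this ordering problem is a drop-in like the sketch below plus enabling the matching wait-online service; the poster reports it was still not enough for 15-odd vlans, so treat it as a starting point rather than a fix:

    # /etc/systemd/system/ntpd.service.d/wait-online.conf
    [Unit]
    Wants=network-online.target
    After=network-online.target

    # and make sure something actually implements network-online, e.g. one of:
    systemctl enable NetworkManager-wait-online.service
    systemctl enable systemd-networkd-wait-online.service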

    Jay 2 , Thursday 10th May 2018 15:02 GMT
    Re: Predictable names

    I can't remember if it's HPE or Dell (or both) where you can set the kernel option biosdevname=0 during build/boot to turn all that renaming stuff off and revert to ethX.

    However on (RHEL?)/CentOS 7 I've found that if you build a server like that, and then try to rename/swap the interfaces it will refuse point blank to allow you to swap the interfaces round so that something else can be eth0. In the end we just gave up and renamed everything lanX instead which it was quite happy with.
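    On RHEL/CentOS 7 the usual way to make the old names stick is via the kernel command line; a sketch (append the options to your existing GRUB_CMDLINE_LINUX rather than replacing it):

    # /etc/default/grub
    GRUB_CMDLINE_LINUX="... net.ifnames=0 biosdevname=0"

    # regenerate the grub config and reboot (path differs on UEFI systems)
    grub2-mkconfig -o /boot/grub2/grub.cfg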

    HieronymusBloggs , Thursday 10th May 2018 16:23 GMT
    Re: Predictable names

    "I can't remember if it's HPE or Dell (or both) where you can use set the kernel option biosdevname=0 during build/boot to turn all that renaming stuff off and revert to ethX."

    I'm using this on my Debian 9 systems. IIRC the option to do so will be removed in Debian 10.

    Dazed and Confused , Thursday 10th May 2018 19:21 GMT
    Re: Predictable names

    I can't remember if it's HPE or Dell (or both)

    It's Dell. I got the impression that much of this work had been done, at least, in conjunction with Dell.

    [Nov 18, 2018] The beatings will continue until morale improves.

    Nov 18, 2018 | theregister.co.uk

    Doctor Syntax , Thursday 10th May 2018 10:26 GMT

    "The more people learn about it, the more they like it."

    Translation: We define those who don't like it as not having learned enough about it.

    ROC , Friday 11th May 2018 17:32 GMT
    Alternate translation:

    The beatings will continue until morale improves.

    [Nov 18, 2018] I am barely tolerating SystemD on some servers because RHEL/CentOS 7 is the dominant business distro with a decent support life

    Nov 18, 2018 | theregister.co.uk

    AJ MacLeod , Thursday 10th May 2018 13:51 GMT

    @Sheepykins

    I'm not really bothered about whether init was perfect from the beginning - for as long as I've been using Linux (20 years) until now, I have never known the init system to be the cause of major issues. Since in my experience it's not been seriously broken for two decades, why throw it out now for something that is orders of magnitude more complex and ridiculously overreaching?

    Like many here I bet, I am barely tolerating SystemD on some servers because RHEL/CentOS 7 is the dominant business distro with a decent support life - but this is also the first time I can recall ever having serious unpredictable issues with startup and shutdown on Linux servers.


    stiine, Thursday 10th May 2018 15:38 GMT

    sysV init

    I've been using Linux (RedHat, CentOS, Ubuntu), BSD (Solaris, SunOS, freeBSD) and Unix (aix, sysv all of the way back to AT&T 3B2 servers) in farms of up to 400 servers since 1988 and I never, ever had issues with eth1 becoming eth0 after a reboot. I also never needed to run ifconfig before configuring an interface just to determine what the interface was going to be named on a server at this time. Then they hired Poettering... now, if you replace a failed nic, 9 times out of 10, the interface is going to have a randomly different name.

    /rant

    [Nov 18, 2018] systemd helps with mounting NFS4 filesystems

    Nov 18, 2018 | theregister.co.uk

    Chronos , Thursday 10th May 2018 13:32 GMT

    Re: Logging

    And disk mounting

    Well, I am compelled to agree with most everything you wrote except one niche area that systemd does better: Remember putzing about with the amd? One line in fstab:

    nasbox:/srv/set0 /nas nfs4 _netdev,noauto,nolock,x-systemd.automount,x-systemd.idle-timeout=1min 0 0
    

    Bloody thing only works and nobody's system comes grinding to a halt every time some essential maintenance is done on the NAS.

    Candour compels me to admit surprise that it worked as advertised, though.

    DCFusor , Thursday 10th May 2018 13:58 GMT

    Re: Logging

    No worries, as has happened with every workaround to make systemD simply mount cifs or NFS at boot, yours will fail as soon as the next change happens, yet it will remain on the 'net to be tried over and over as have all the other "fixes" for Poettering's arrogant breakages.

    The last one I heard from him on this was "don't mount shares at boot, it's not reliable WONTFIX".

    Which is why we're all bitching.

    Break my stuff.

    Web shows workaround.

    Break workaround without fixing the original issue, really.

    Never ensure one place for current dox on what works now.

    Repeat above endlessly.

    Fine if all you do is spin up endless identical instances in some cloud (EG a big chunk of RH customers - but not Debian for example). If like me you have 20+ machines customized to purpose...for which one workaround works on some but not others, and every new release of systemD seems to break something new that has to be tracked down and fixed, it's not acceptable - it's actually making proprietary solutions look more cost effective and less blood pressure raising.

    The old init scripts worked once you got them right, and stayed working. A new distro release didn't break them, nor did a systemD update (because there wasn't one). This feels more like sabotage.

    [Nov 18, 2018] Today I've kickstarted RHEL7 on a rack of 40 identical servers using the same script. On about 25 out of 40, the postinstall script added to rc.local failed to run with some obscure error

    Nov 18, 2018 | theregister.co.uk

    Dabbb , Thursday 10th May 2018 10:16 GMT

    Quite understandable that people who don't know anything else would accept systemd. For everyone else it has nothing to do with old school but everything to do with unpredictability of systemd.

    Today I've kickstarted RHEL7 on a rack of 40 identical servers using the same script. On about 25 out of 40, the postinstall script added to rc.local failed to run, with some obscure error about the script being terminated because something unintelligible did not like it. It never ever happened on RHEL6; it happens all the time on RHEL7. And that's exactly the reason I absolutely hate both RHEL7 and systemd.
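    One boring but common cause on RHEL7 is that rc.local is now an ordinary systemd unit gated on the execute bit; a couple of checks worth running before blaming anything more exotic (the error seen here may of course be something else entirely):

    chmod +x /etc/rc.d/rc.local          # rc-local.service only runs if this file is executable
    systemctl status rc-local.service    # shows whether it ran and how it exited
    journalctl -u rc-local.service       # the "obscure error" usually shows up here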

    [Nov 18, 2018] You love Systemd -- you just don't know it yet, wink Red Hat bods

    Nov 18, 2018 | theregister.co.uk

    Anonymous Coward , Thursday 10th May 2018 02:58 GMT

    Poettering still doesn't get it... Pid 1 is for people wearing big boy pants.

    "And perhaps, in the process, you may warm up a bit more to the tool"

    Like from LNG to Dry Ice? and by tool does he mean Poettering or systemd?

    I love the fact that they aren't trying to address the huge and legitimate issues with Systemd, while still plowing ahead adding more things we don't want Systemd to touch into it's ever expanding sprawl.

    The root of the issue with Systemd is the problems it causes, not the lack of "enhancements" initd offered. Replacing Init didn't require the breaking changes and incompatibility induced by Poettering's misguided handiwork. A clean init replacement would have made Big Linux more compatible with both it's roots and the other parts of the broader Linux/BSD/Unix world. As a result of his belligerent incompetence, other peoples projects have had to be re-engineered, resulting in incompatibility, extra porting work, and security problems. In short were stuck cleaning up his mess, and the consequences of his security blunders

    A worthy Init replacement should have moved to compiled code and given us asynchronous startup, threading, etc, without senselessly re-writing basic command syntax or compatibility. Considering the importance of PID 1, it should have used a formal development process like the BSD world.

    Fedora needs to stop enabling his prima donna antics and stop letting him touch things until he admits his mistakes and attempts to fix them. The flame wars not going away till he does.

    asdf , Thursday 10th May 2018 23:38 GMT
    Re: Poettering still doesn't get it... Pid 1 is for people wearing big boy pants.

    SystemD is corporate money (Redhat support dollars) triumphing over the long hairs sadly. Enough money can buy a shitload of code and you can overwhelm the hippies with hairball dependencies (the key moment was udev being dependent on systemd) and soon get as much FOSS as possible dependent on the Linux kernel. This has always been the end game as Red Hat makes its bones on Linux specifically not on FOSS in general (that say runs on Solaris or HP-UX). The tighter they can glue the FOSS ecosystem and the Linux kernel together ala Windows lite style the better for their bottom line. Poettering is just being a good employee asshat extraordinaire he is.

    whitepines , Thursday 10th May 2018 03:47 GMT
    Raise your hand if you've been completely locked out of a server or laptop (as in, break out the recovery media and settle down, it'll be a while) because systemd:

    1.) Couldn't raise a network interface

    2.) Farted and forgot the UUID for a disk, then refused to give a recovery shell

    3.) Decided an unimportant service (e.g. CUPS or avahi) was too critical to start before giving a login over SSH or locally, then that service stalls forever

    4.) Decided that no, you will not be network booting your server today. No way to recover and no debug information, just an interminable hang as it raises wrong network interfaces and waits for DHCP addresses that will never come.

    And lest the fun be restricted to startup, on shutdown systemd can quite happily hang forever doing things like stopping nonessential services, *with no timeout and no way to interrupt*. Then you have to Magic Sysreq the machine, except that sometimes secure servers don't have that ability, at least not remotely. Cue data loss and general excitement.
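    Strictly speaking there is a global stop timeout, it is just long and easy to miss; where you control the config, the knobs look like this (real option names, example values):

    # /etc/systemd/system.conf
    [Manager]
    DefaultTimeoutStopSec=30s    # global cap on how long a stopping service may hang at shutdown

    # or per unit, in a drop-in file:
    [Service]
    TimeoutStopSec=15s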

    And that's not even going into the fact that you need to *reboot the machine* to patch the *network enabled* and highly privileged systemd, or that it seems to have the attack surface of Jupiter.

    Upstart was better than this. SysV was better than this. Mac is better than this. Windows is better than this.

    Uggh.

    Daggerchild , Thursday 10th May 2018 11:39 GMT
    Re: Ahhh SystemD

    I honestly would love someone to lay out the problems it solves. Solaris has a similar parallellised startup system, with some similar problems, but it didn't need pid 1.

    Tridac , Thursday 10th May 2018 11:53 GMT
    Re: Ahhh SystemD

    Agreed, Solaris svcadm and svcs etc are an example of how it should be done. A layered approach maintaining what was already there, while adding functionality for management purposes. Keeps all the old text based log files and uses xml scripts (human readable and editable) for higher level functions. Afaics, systemd is a power grab by red hat and an ego trip for it's primary developer. Dumped bloatware Linux in favour of FreeBSD and others after Suse 11.4, though that was bad enough with Gnome 3...

    [Nov 17, 2018] hh command man page

    It was later renamed to hstr
    Notable quotes:
    "... By default it parses .bash-history file that is filtered as you type a command substring. ..."
    "... Favorite and frequently used commands can be bookmarked ..."
    Nov 17, 2018 | www.mankier.com

    hh -- easily view, navigate, sort and use your command history with shell history suggest box.

    Synopsis

    hh [option] [arg1] [arg2]...
    hstr [option] [arg1] [arg2]...

    Description

    hh uses shell history to provide suggest box like functionality for commands used in the past. By default it parses .bash-history file that is filtered as you type a command substring. Commands are not just filtered, but also ordered by a ranking algorithm that considers number of occurrences, length and timestamp. Favorite and frequently used commands can be bookmarked . In addition hh allows removal of commands from history - for instance a command with a typo or with sensitive content.

    Options
    -h --help
    Show help
    -n --non-interactive
    Print filtered history on standard output and exit
    -f --favorites
    Show favorites view immediately
    -s --show-configuration
    Show configuration that can be added to ~/.bashrc
    -b --show-blacklist
    Show blacklist of commands to be filtered out before history processing
    -V --version
    Show version information
    Keys
    pattern
    Type to filter shell history.
    Ctrl-e
    Toggle regular expression and substring search.
    Ctrl-t
    Toggle case sensitive search.
    Ctrl-/ , Ctrl-7
    Rotate view of history as provided by Bash, ranked history ordered by the number of occurrences/length/timestamp and favorites.
    Ctrl-f
    Add currently selected command to favorites.
    Ctrl-l
    Make search pattern lowercase or uppercase.
    Ctrl-r , UP arrow, DOWN arrow, Ctrl-n , Ctrl-p
    Navigate in the history list.
    TAB , RIGHT arrow
    Choose currently selected item for completion and let user to edit it on the command prompt.
    LEFT arrow
    Choose currently selected item for completion and let user to edit it in editor (fix command).
    ENTER
    Choose currently selected item for completion and execute it.
    DEL
    Remove currently selected item from the shell history.
    BACKSPACE , Ctrl-h
    Delete last pattern character.
    Ctrl-u , Ctrl-w
    Delete pattern and search again.
    Ctrl-x
    Write changes to shell history and exit.
    Ctrl-g
    Exit with empty prompt.
    Environment Variables

    hh defines the following environment variables:

    HH_CONFIG
    Configuration options:

    hicolor
    Get more colors with this option (default is monochromatic).

    monochromatic
    Ensure black and white view.

    prompt-bottom
    Show prompt at the bottom of the screen (default is prompt at the top).

    regexp
    Filter command history using regular expressions (substring match is default)

    substring
    Filter command history using substring.

    keywords
    Filter command history using keywords - item matches if contains all keywords in pattern in any order.

    casesensitive
    Make history filtering case sensitive (it's case insensitive by default).

    rawhistory
    Show normal history as a default view (metric-based view is shown otherwise).

    favorites
    Show favorites as a default view (metric-based view is shown otherwise).

    duplicates
    Show duplicates in rawhistory (duplicates are discarded by default).

    blacklist
    Load list of commands to skip when processing history from ~/.hh_blacklist (built-in blacklist used otherwise).

    big-keys-skip
    Skip big history entries i.e. very long lines (default).

    big-keys-floor
    Use different sorting slot for big keys when building metrics-based view (big keys are skipped by default).

    big-keys-exit
    Exit (fail) on presence of a big key in history (big keys are skipped by default).

    warning
    Show warning.

    debug
    Show debug information.

    Example:
    export HH_CONFIG=hicolor,regexp,rawhistory

    HH_PROMPT
    Change prompt string which is user@host$ by default.

    Example:
    export HH_PROMPT="$ "

    Files
    ~/.hh_favorites
    Bookmarked favorite commands.
    ~/.hh_blacklist
    Command blacklist.
    Bash Configuration

    Optionally add the following lines to ~/.bashrc:

    export HH_CONFIG=hicolor         # get more colors
    shopt -s histappend              # append new history items to .bash_history
    export HISTCONTROL=ignorespace   # leading space hides commands from history
    export HISTFILESIZE=10000        # increase history file size (default is 500)
    export HISTSIZE=${HISTFILESIZE}  # increase history size (default is 500)
    export PROMPT_COMMAND="history -a; history -n; ${PROMPT_COMMAND}"
    # if this is interactive shell, then bind hh to Ctrl-r (for Vi mode check doc)
    if [[ $- =~ .*i.* ]]; then bind '"\C-r": "\C-a hh -- \C-j"'; fi
    

    The prompt command ensures synchronization of the history between BASH memory and history file.

    ZSH Configuration

    Optionally add the following lines to ~/.zshrc:

    export HISTFILE=~/.zsh_history   # ensure history file visibility
    export HH_CONFIG=hicolor         # get more colors
    bindkey -s "\C-r" "\eqhh\n"  # bind hh to Ctrl-r (for Vi mode check doc, experiment with --)
    
    Examples
    hh git
    Start `hh` and show only history items containing 'git'.
    hh --non-interactive git
    Print history items containing 'git' to standard output and exit.
    hh --show-configuration >> ~/.bashrc
    Append default hh configuration to your Bash profile.
    hh --show-blacklist
    Show blacklist configured for history processing.
    Author

    Written by Martin Dvorak <martin.dvorak@mindforger.com>

    Bugs

    Report bugs to https://github.com/dvorka/hstr/issues

    See Also

    history(1), bash(1), zsh(1)

    Referenced By

    The man page hstr(1) is an alias of hh(1).

    [Nov 15, 2018] Is Glark a Better Grep? (Linux.com)

    Notable quotes:
    "... stringfilenames ..."
    Nov 15, 2018 | www.linux.com

    Is Glark a Better Grep? GNU grep is one of my go-to tools on any Linux box. But grep isn't the only tool in town. If you want to try something a bit different, check out glark, a grep alternative that might be better in some situations.

    What is glark? Basically, it's a utility that's similar to grep, but it has a few features that grep does not. This includes complex expressions, Perl-compatible regular expressions, and excluding binary files. It also makes showing contextual lines a bit easier. Let's take a look.

    I installed glark (yes, annoyingly it's yet another *nix utility that has no initial cap) on Linux Mint 11. Just grab it with apt-get install glark and you should be good to go.

    Simple searches work the same way as with grep: glark string filenames. So it's pretty much a drop-in replacement for those.

    But you're interested in what makes glark special. So let's start with a complex expression, where you're looking for this or that term:

    glark -r -o thing1 thing2 *

    This will search the current directory and subdirectories for "thing1" or "thing2." When the results are returned, glark will colorize the results and each search term will be highlighted in a different color. So if you search for, say "Mozilla" and "Firefox," you'll see the terms in different colors.

    You can also use this to see if something matches within a few lines of another term. Here's an example:

    glark --and=3 -o Mozilla Firefox -o ID LXDE *

    This was a search I was using in my directory of Linux.com stories that I've edited. I used three terms I knew were in one story, and one term I knew wouldn't be. You can also just use the --and option to spot two terms within X number of lines of each other, like so:

    glark --and=3 term1 term2

    That way, both terms must be present.

    You'll note the --and option is a bit simpler than grep's context line options. However, glark tries to stay compatible with grep, so it also supports the -A , -B and -C options from grep.

    Miss the grep output format? You can tell glark to use grep format with the --grep option.

    Most, if not all, GNU grep options should work with glark .

    Before and After

    If you need to search through the beginning or end of a file, glark has the --before and --after options (short versions, -b and -a ). You can use these as percentages or as absolute number of lines. For instance:

    glark -a 20 expression *

    That will find instances of expression after line 20 in a file.

    The glark Configuration File

    Note that you can have a ~/.glarkrc that will set common options for each use of glark (unless overridden at the command line). The man page for glark does include some examples, like so:

    after-context:     1
    before-context:    6
    context:           5
    file-color:        blue on yellow
    highlight:         off
    ignore-case:       false
    quiet:             yes
    text-color:        bold reverse
    line-number-color: bold
    verbose:           false
    grep:              true
    

    Just put that in your ~/.glarkrc and customize it to your heart's content. Note that I've set mine to grep: false and added the binary-files: without-match option. You'll definitely want the quiet option to suppress all the notes about directories, etc. See the man page for more options. It's probably a good idea to spend about 10 minutes on setting up a configuration file.

    Final Thoughts

    One thing that I have noticed is that glark doesn't seem as fast as grep . When I do a recursive search through a bunch of directories containing (mostly) HTML files, I seem to get results a lot faster with grep . This is not terribly important for most of the stuff I do with either utility. However, if you're doing something where performance is a major factor, then you may want to see if grep fits the bill better.

    Is glark "better" than grep? It depends entirely on what you're doing. It has a few features that give it an edge over grep, and I think it's very much worth trying out if you've never given it a shot.

    [Nov 13, 2018] GridFTP: User's Guide

    Notable quotes:
    "... file:///path/to/my/file ..."
    "... gsiftp://hostname/path/to/remote/file ..."
    "... third party transfer ..."
    toolkit.globus.org

    Table of Contents

    1. Introduction
    2. Usage scenarios
    2.1. Basic procedure for using GridFTP (globus-url-copy)
    2.2. Accessing data in...
    3. Command line tools
    4. Graphical user interfaces
    4.1. Globus GridFTP GUI
    4.2. UberFTP
    5. Security Considerations
    5.1. Two ways to configure your server
    5.2. New authentication options
    5.3. Firewall requirements
    6. Troubleshooting
    6.1. Establish control channel connection
    6.2. Try running globus-url-copy
    6.3. If your server starts...
    7. Usage statistics collection by the Globus Alliance
    1. Introduction

    The GridFTP User's Guide provides general end user-oriented information.

    2. Usage scenarios

    2.1. Basic procedure for using GridFTP (globus-url-copy)

    If you just want the "rules of thumb" on getting started (without all the details), the following options using globus-url-copy will normally give acceptable performance:
    globus-url-copy -vb -tcp-bs 2097152 -p 4 source_url destination_url
    
    The source/destination URLs will normally be one of the following: file:///path/to/my/file for a file on the local file system, or gsiftp://hostname/path/to/remote/file for a file on a remote GridFTP server.

    2.1.1. Putting files

    One of the most basic tasks in GridFTP is to "put" files, i.e., moving a file from your file system to the server. So for example, if you want to move the file /tmp/foo from a file system accessible to the host on which you are running your client to a file name /tmp/bar on a host named remote.machine.my.edu running a GridFTP server, you would use this command:
    globus-url-copy -vb -tcp-bs 2097152 -p 4 file:///tmp/foo gsiftp://remote.machine.my.edu/tmp/bar
    
    [Note] Note
    In theory, remote.machine.my.edu could be the same host as the one on which you are running your client, but that is normally only done in testing situations.
    2.1.2. Getting files

    A get, i.e., moving a file from a server to your file system, would just reverse the source and destination URLs:
    [Tip] Tip
    Remember file: always refers to your file system.
    globus-url-copy -vb -tcp-bs 2097152 -p 4 gsiftp://remote.machine.my.edu/tmp/bar file:///tmp/foo
    
    2.1.3. Third party transfers

    Finally, if you want to move a file between two GridFTP servers (a third party transfer), both URLs would use gsiftp: as the protocol:
    globus-url-copy -vb -tcp-bs 2097152 -p 4 gsiftp://other.machine.my.edu/tmp/foo gsiftp://remote.machine.my.edu/tmp/bar
    
    2.1.4. For more information

    If you want more information and details on URLs and the command line options, the Key Concepts Guide gives basic definitions and an overview of the GridFTP protocol as well as our implementation of it.

    2.2. Accessing data in...

    2.2.1. Accessing data in a non-POSIX file data source that has a POSIX interface

    If you want to access data in a non-POSIX file data source that has a POSIX interface, the standard server will do just fine. Just make sure it is really POSIX-like (out of order writes, contiguous byte writes, etc).

    2.2.2. Accessing data in HPSS

    The following information is helpful if you want to use GridFTP to access data in HPSS. Architecturally, the Globus GridFTP server can be divided into 3 modules: the GridFTP protocol module, the data transform module, and the Data Storage Interface (DSI). In the GT4.0.x implementation, the data transform module and the DSI have been merged, although we plan to have separate, chainable, data transform modules in the future.
    [Note] Note
    This architecture does NOT apply to the WU-FTPD implementation (GT3.2.1 and lower).
    2.2.2.1. GridFTP Protocol Module
    The GridFTP protocol module is the module that reads and writes to the network and implements the GridFTP protocol. This module should not need to be modified since to do so would make the server non-protocol compliant, and unable to communicate with other servers.
    2.2.2.2. Data Transform Functionality
    The data transform functionality is invoked by using the ERET (extended retrieve) and ESTO (extended store) commands. It is seldom used and bears careful consideration before it is implemented, but in the right circumstances can be very useful. In theory, any computation could be invoked this way, but it was primarily intended for cases where some simple pre-processing (such as a partial get or sub-sampling) can greatly reduce the network load. The disadvantage to this is that you remove any real option for planning, brokering, etc., and any significant computation could adversely affect the data transfer performance. Note that the client must also support the ESTO/ERET functionality as well.
    2.2.2.3. Data Storage Interface (DSI) / Data Transform module
    The Data Storage Interface (DSI) / Data Transform module knows how to read and write to the "local" storage system and can optionally transform the data. We put local in quotes because in a complicated storage system, the storage may not be directly attached, but for performance reasons, it should be relatively close (for instance on the same LAN). The interface consists of functions to be implemented such as send (get), receive (put), command (simple commands that simply succeed or fail like mkdir), etc.. Once these functions have been implemented for a specific storage system, a client should not need to know or care what is actually providing the data. The server can either be configured specifically with a specific DSI, i.e., it knows how to interact with a single class of storage system, or one particularly useful function for the ESTO/ERET functionality mentioned above is to load and configure a DSI on the fly.
    2.2.2.4. HPSS info
    Last Update: August 2005

    Working with Los Alamos National Laboratory and the High Performance Storage System (HPSS) collaboration ( http://www.hpss-collaboration.org ), we have written a Data Storage Interface (DSI) for read/write access to HPSS. This DSI would allow an existing application that uses a GridFTP compliant client to utilize HPSS data resources. This DSI is currently in testing. Due to changes in the HPSS security mechanisms, it requires HPSS 6.2 or later, which is due to be released in Q4 2005. Distribution for the DSI has not been worked out yet, but it will *probably* be available from both Globus and the HPSS collaboration. While this code will be open source, it requires underlying HPSS libraries which are NOT open source (proprietary).
    [Note] Note
    This is a purely server side change, the client does not know what DSI is running, so only a site that is already running HPSS and wants to allow GridFTP access needs to worry about access to these proprietary libraries.
    2.2.3. Accessing data in SRB

    The following information is helpful if you want to use GridFTP to access data in SRB. Architecturally, the Globus GridFTP server can be divided into 3 modules: the GridFTP protocol module, the data transform module, and the Data Storage Interface (DSI). In the GT4.0.x implementation, the data transform module and the DSI have been merged, although we plan to have separate, chainable, data transform modules in the future.
    Sections 2.2.3.1 - 2.2.3.3 describe the GridFTP Protocol Module, the Data Transform Functionality, and the Data Storage Interface (DSI) / Data Transform module exactly as in sections 2.2.2.1 - 2.2.2.3 above; the same note about the WU-FTPD implementation applies.
    2.2.3.4. SRB info
    Last Update: August 2005

    Working with the SRB team at the San Diego Supercomputing Center, we have written a Data Storage Interface (DSI) for read/write access to data in the Storage Resource Broker (SRB) (http://www.npaci.edu/DICE/SRB). This DSI will enable GridFTP compliant clients to read and write data to an SRB server, similar in functionality to the sput/sget commands. This DSI is currently in testing and is not yet publicly available, but will be available from both the SRB web site (here) and the Globus web site (here). It will also be included in the next stable release of the toolkit. We are working on performance tests, but early results indicate that for wide area network (WAN) transfers, the performance is comparable.

    When might you want to use this functionality:

    2.2.4. Accessing data in some other non-POSIX data source

    The following information is helpful if you want to use GridFTP to access data in a non-POSIX data source. Architecturally, the Globus GridFTP server can be divided into 3 modules: the GridFTP protocol module, the data transform module, and the Data Storage Interface (DSI). In the GT4.0.x implementation, the data transform module and the DSI have been merged, although we plan to have separate, chainable, data transform modules in the future.
    Sections 2.2.4.1 - 2.2.4.3 describe the GridFTP Protocol Module, the Data Transform Functionality, and the Data Storage Interface (DSI) / Data Transform module exactly as in sections 2.2.2.1 - 2.2.2.3 above; the same note about the WU-FTPD implementation applies.

    Nov 13, 2018 | toolkit.globus.org

    3. Command line tools

    Please see the GridFTP Command Reference .
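    For a quick sense of the client side, here is a minimal sketch of a transfer with globus-url-copy, the standard GridFTP command-line client; the host name and paths are placeholders rather than values from the reference.

    # Copy a file from a GridFTP server to local disk with 4 parallel TCP streams (-p 4)
    # and verbose performance output (-vb).
    globus-url-copy -vb -p 4 \
        gsiftp://source.example.org/data/input.dat \
        file:///tmp/input.dat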

    [Nov 13, 2018] Resuming rsync partial (-P/--partial) on an interrupted transfer

    May 15, 2013 | stackoverflow.com

    Glitches , May 15, 2013 at 18:06

    I am trying to back up my file server to a remote file server using rsync. Rsync is not successfully resuming when a transfer is interrupted. I used the partial option, but rsync doesn't find the file it already started because it renames it to a temporary file, and when resumed it creates a new file and starts from the beginning.

    Here is my command:

    rsync -avztP -e "ssh -p 2222" /volume1/ myaccont@backup-server-1:/home/myaccount/backup/ --exclude "@spool" --exclude "@tmp"

    When this command is run, a backup file named OldDisk.dmg from my local machine gets created on the remote machine as something like .OldDisk.dmg.SjDndj23 .

    Now when the internet connection gets interrupted and I have to resume the transfer, I have to find where rsync left off by locating the temp file (e.g., .OldDisk.dmg.SjDndj23 ) and renaming it to OldDisk.dmg so that rsync sees there is already a file it can resume.

    How do I fix this so I don't have to manually intervene each time?

    Richard Michael , Nov 6, 2013 at 4:26

    TL;DR : Use --timeout=X (X in seconds) to change the default rsync server timeout, not --inplace .

    The issue is the rsync server processes (of which there are two, see rsync --server ... in ps output on the receiver) continue running, to wait for the rsync client to send data.

    If the rsync server processes do not receive data for a sufficient time, they will indeed time out, self-terminate and clean up by moving the temporary file to its "proper" name (e.g., no temporary suffix). You'll then be able to resume.

    If you don't want to wait for the long default timeout to cause the rsync server to self-terminate, then when your internet connection returns, log into the server and clean up the rsync server processes manually. However, you must politely terminate rsync -- otherwise, it will not move the partial file into place; but rather, delete it (and thus there is no file to resume). To politely ask rsync to terminate, do not SIGKILL (e.g., -9 ), but SIGTERM (e.g., pkill -TERM -x rsync - only an example, you should take care to match only the rsync processes concerned with your client).

    Fortunately there is an easier way: use the --timeout=X (X in seconds) option; it is passed to the rsync server processes as well.

    For example, if you specify rsync ... --timeout=15 ... , both the client and server rsync processes will cleanly exit if they do not send/receive data in 15 seconds. On the server, this means moving the temporary file into position, ready for resuming.
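    For instance, the command from the question could simply be rerun with the timeout added; everything below except --timeout=15 is taken verbatim from the question:

    rsync -avztP --timeout=15 -e "ssh -p 2222" /volume1/ \
        myaccont@backup-server-1:/home/myaccount/backup/ \
        --exclude "@spool" --exclude "@tmp"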

    I'm not sure how long the various rsync processes will try to send/receive data before they die by default (it might vary with operating system). In my testing, the server rsync processes remain running longer than the local client. On a "dead" network connection, the client terminates with a broken pipe (e.g., no network socket) after about 30 seconds; you could experiment or review the source code. Meaning, you could try to "ride out" the bad internet connection for 15-20 seconds.

    If you do not clean up the server rsync processes (or wait for them to die), but instead immediately launch another rsync client process, two additional server processes will launch (for the other end of your new client process). Specifically, the new rsync client will not re-use/reconnect to the existing rsync server processes. Thus, you'll have two temporary files (and four rsync server processes) -- though, only the newer, second temporary file has new data being written (received from your new rsync client process).

    Interestingly, if you then clean up all rsync server processes (for example, stop your client, which will stop the new rsync servers, then SIGTERM the older rsync servers), it appears to merge (assemble) all the partial files into the new properly named file. So, imagine a long running partial copy which dies (and you think you've "lost" all the copied data), and a short running re-launched rsync (oops!). You can stop the second client, SIGTERM the first servers, it will merge the data, and you can resume.

    Finally, a few short remarks:

    JamesTheAwesomeDude , Dec 29, 2013 at 16:50

    Just curious: wouldn't SIGINT (aka ^C ) be 'politer' than SIGTERM ? – JamesTheAwesomeDude Dec 29 '13 at 16:50

    Richard Michael , Dec 29, 2013 at 22:34

    I didn't test how the server-side rsync handles SIGINT, so I'm not sure it will keep the partial file - you could check. Note that this doesn't have much to do with Ctrl-c ; it happens that your terminal sends SIGINT to the foreground process when you press Ctrl-c , but the server-side rsync has no controlling terminal. You must log in to the server and use kill . The client-side rsync will not send a message to the server (for example, after the client receives SIGINT via your terminal Ctrl-c ) - might be interesting though. As for anthropomorphizing, not sure what's "politer". :-) – Richard Michael Dec 29 '13 at 22:34

    d-b , Feb 3, 2015 at 8:48

    I just tried this timeout argument rsync -av --delete --progress --stats --human-readable --checksum --timeout=60 --partial-dir /tmp/rsync/ rsync://$remote:/ /src/ but then it timed out during the "receiving file list" phase (which in this case takes around 30 minutes). Setting the timeout to half an hour kind of defeats the purpose. Any workaround for this? – d-b Feb 3 '15 at 8:48

    Cees Timmerman , Sep 15, 2015 at 17:10

    @user23122 --checksum reads all data when preparing the file list, which is great for many small files that change often, but should be done on-demand for large files. – Cees Timmerman Sep 15 '15 at 17:10

    [Nov 12, 2018] Linux Find Out Which Process Is Listening Upon a Port

    Jun 25, 2012 | www.cyberciti.biz

    How do I find out which running processes are associated with each open port? How do I find out which process has TCP port 111 or UDP port 7000 open under Linux?

    You can use the following programs to find out about port numbers and their associated processes:

    1. netstat – a command-line tool that displays network connections, routing tables, and a number of network interface statistics.
    2. fuser – a command line tool to identify processes using files or sockets.
    3. lsof – a command line tool to list open files under Linux / UNIX to report a list of all open files and the processes that opened them.
    4. /proc/$pid/ file system – Under Linux, /proc includes a directory for each running process (including kernel processes) at /proc/PID, containing information about that process, notably including the name of the process that opened the port.

    You must run the above command(s) as the root user.

    netstat example

    Type the following command:
    # netstat -tulpn
    Sample outputs:

    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
    tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      1138/mysqld     
    tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      850/portmap     
    tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1607/apache2    
    tcp        0      0 0.0.0.0:55091           0.0.0.0:*               LISTEN      910/rpc.statd   
    tcp        0      0 192.168.122.1:53        0.0.0.0:*               LISTEN      1467/dnsmasq    
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      992/sshd        
    tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      1565/cupsd      
    tcp        0      0 0.0.0.0:7000            0.0.0.0:*               LISTEN      3813/transmission
    tcp6       0      0 :::22                   :::*                    LISTEN      992/sshd        
    tcp6       0      0 ::1:631                 :::*                    LISTEN      1565/cupsd      
    tcp6       0      0 :::7000                 :::*                    LISTEN      3813/transmission
    udp        0      0 0.0.0.0:111             0.0.0.0:*                           850/portmap     
    udp        0      0 0.0.0.0:662             0.0.0.0:*                           910/rpc.statd   
    udp        0      0 192.168.122.1:53        0.0.0.0:*                           1467/dnsmasq    
    udp        0      0 0.0.0.0:67              0.0.0.0:*                           1467/dnsmasq    
    udp        0      0 0.0.0.0:68              0.0.0.0:*                           3697/dhclient   
    udp        0      0 0.0.0.0:7000            0.0.0.0:*                           3813/transmission
    udp        0      0 0.0.0.0:54746           0.0.0.0:*                           910/rpc.statd
    

    TCP port 3306 was opened by the mysqld process with PID # 1138. You can verify this using /proc, enter:
    # ls -l /proc/1138/exe
    Sample outputs:

    lrwxrwxrwx 1 root root 0 2010-10-29 10:20 /proc/1138/exe -> /usr/sbin/mysqld
    

    You can use grep command to filter out information:
    # netstat -tulpn | grep :80
    Sample outputs:

    tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1607/apache2
    
    Video demo

    https://www.youtube.com/embed/h3fJlmuGyos

    fuser command

    Find out the PID of the process that opened TCP port 7000, enter:
    # fuser 7000/tcp
    Sample outputs:

    7000/tcp:             3813
    

    Finally, find out the process name associated with PID # 3813, enter:
    # ls -l /proc/3813/exe
    Sample outputs:

    lrwxrwxrwx 1 vivek vivek 0 2010-10-29 11:00 /proc/3813/exe -> /usr/bin/transmission
    

    /usr/bin/transmission is a bittorrent client, enter:
    # man transmission
    OR
    # whatis transmission
    Sample outputs:

    transmission (1)     - a bittorrent client
    
    Task: Find Out Current Working Directory Of a Process

    To find out the current working directory of a process, such as the bittorrent client with PID 3813, enter:
    # ls -l /proc/3813/cwd
    Sample outputs:

    lrwxrwxrwx 1 vivek vivek 0 2010-10-29 12:04 /proc/3813/cwd -> /home/vivek
    

    OR use pwdx command, enter:
    # pwdx 3813
    Sample outputs:

    3813: /home/vivek
    
    Task: Find Out Owner Of a Process

    Use the following command to find out the owner of the process with PID 3813:
    # ps aux | grep 3813
    OR
    # ps aux | grep '[3]813'
    Sample outputs:

    vivek     3813  1.9  0.3 188372 26628 ?        Sl   10:58   2:27 transmission
    

    OR try the following ps command:
    # ps -eo pid,user,group,args,etime,lstart | grep '[3]813'
    Sample outputs:

    3813 vivek    vivek    transmission                   02:44:05 Fri Oct 29 10:58:40 2010
    

    Another option is /proc/$PID/environ, enter:
    # cat /proc/3813/environ
    OR
    # grep --color -w -a USER /proc/3813/environ
    Sample outputs (note the --color option):

    Fig.01: grep output

    lsof Command Example

    Type the command as follows:

    lsof -i :portNumber 
    lsof -i tcp:portNumber 
    lsof -i udp:portNumber 
    lsof -i :80
    lsof -i :80 | grep LISTEN
    


    Sample outputs:

    apache2   1607     root    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
    apache2   1616 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
    apache2   1617 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
    apache2   1618 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
    apache2   1619 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
    apache2   1620 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
    

    Now, you get more information about pid # 1607 or 1616 and so on:
    # ps aux | grep '[1]616'
    Sample outputs:
    www-data 1616 0.0 0.0 35816 3880 ? S 10:20 0:00 /usr/sbin/apache2 -k start
    I recommend the following command to grab info about pid # 1616:
    # ps -eo pid,user,group,args,etime,lstart | grep '[1]616'
    Sample outputs:

    1616 www-data www-data /usr/sbin/apache2 -k start     03:16:22 Fri Oct 29 10:20:17 2010
    

    Where,

    Help: I Discover an Open Port Which I Don't Recognize At All

    The file /etc/services is used to map port numbers and protocols to service names. Try matching port numbers:
    $ grep port /etc/services
    $ grep 443 /etc/services

    Sample outputs:

    https		443/tcp				# http protocol over TLS/SSL
    https		443/udp
    
    Check For rootkit

    I strongly recommend that you find out which processes are really running, especially on servers connected to high-speed Internet access. You can look for a rootkit, which is a program designed to take fundamental control (in Linux / UNIX terms "root" access, in Windows terms "Administrator" access) of a computer system without authorization by the system's owners and legitimate managers. See how to detect / check for rootkits under Linux .

    Keep an Eye On Your Bandwidth Graphs

    Usually, rooted servers are used to send a large number of spam or malware or DoS style attacks on other computers.

    See also:

    See the following man pages for more information:
    $ man ps
    $ man grep
    $ man lsof
    $ man netstat
    $ man fuser

    Posted by: Vivek Gite

    The author is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting. Get the latest tutorials on SysAdmin, Linux/Unix and open source topics via RSS/XML feed or weekly email newsletter.

    [Nov 12, 2018] Shell Games Linux Magazine

    Nov 12, 2018 | www.linux-magazine.com

    First pdsh Commands

    To begin, I'll try to get the kernel version of a node by using its IP address:

    $ pdsh -w 192.168.1.250 uname -r
    192.168.1.250: 2.6.32-431.11.2.el6.x86_64
    

    The -w option means I am specifying the node(s) that will run the command. In this case, I specified the IP address of the node (192.168.1.250). After the list of nodes, I add the command I want to run, which is uname -r in this case. Notice that pdsh starts the output line by identifying the node name.

    If you need to mix rcmd modules in a single command, you can specify which module to use in the command line,

    $ pdsh -w ssh:laytonjb@192.168.1.250 uname -r
    192.168.1.250: 2.6.32-431.11.2.el6.x86_64
    

    by putting the rcmd module before the node name. In this case, I used ssh and typical ssh syntax.

    A very common way of using pdsh is to set the environment variable WCOLL to point to the file that contains the list of hosts you want to use in the pdsh command. For example, I created a subdirectory PDSH where I create a file hosts that lists the hosts I want to use:

    [laytonjb@home4 ~]$ mkdir PDSH
    [laytonjb@home4 ~]$ cd PDSH
    [laytonjb@home4 PDSH]$ vi hosts
    [laytonjb@home4 PDSH]$ more hosts
    192.168.1.4
    192.168.1.250
    

    I'm only using two nodes: 192.168.1.4 and 192.168.1.250. The first is my test system (like a cluster head node), and the second is my test compute node. You can put hosts in the file as you would on the command line separated by commas. Be sure not to put a blank line at the end of the file because pdsh will try to connect to it. You can put the environment variable WCOLL in your .bashrc file:

    export WCOLL=/home/laytonjb/PDSH/hosts
    

    As before, you can source your .bashrc file, or you can log out and log back in.

    Specifying Hosts

    I won't list all the several other ways to specify a list of nodes, because the pdsh website [9] discusses virtually all of them; however, some of the methods are pretty handy. The simplest way is to specify the nodes on the command line is to use the -w option:

    $ pdsh -w 192.168.1.4,192.168.1.250 uname -r
    192.168.1.4: 2.6.32-431.17.1.el6.x86_64
    192.168.1.250: 2.6.32-431.11.2.el6.x86_64
    

    In this case, I specified the node names separated by commas. You can also use a range of hosts as follows:

    pdsh -w host[1-11]
    pdsh -w host[1-4,8-11]
    

    In the first case, pdsh expands the host range to host1, host2, host3, ..., host11. In the second case, it expands the hosts similarly (host1, host2, host3, host4, host8, host9, host10, host11). You can go to the pdsh website for more information on hostlist expressions [10] .

    Another option is to have pdsh read the hosts from a file other than the one to which WCOLL points. The command shown in Listing 2 tells pdsh to take the hostnames from the file /tmp/hosts , which is listed after -w ^ (with no space between the "^" and the filename). You can also use several host files,

    Listing 2 Read Hosts from File
    $ more /tmp/hosts
    192.168.1.4
    $ more /tmp/hosts2
    192.168.1.250
    $ pdsh -w ^/tmp/hosts,^/tmp/hosts2 uname -r
    192.168.1.4: 2.6.32-431.17.1.el6.x86_64
    192.168.1.250: 2.6.32-431.11.2.el6.x86_64
    

    or you can exclude hosts from a list:

    $ pdsh -w -192.168.1.250 uname -r
    192.168.1.4: 2.6.32-431.17.1.el6.x86_64
    

    The option -w -192.168.1.250 excluded node 192.168.1.250 from the list and only output the information for 192.168.1.4. You can also exclude nodes using a node file:

    $ pdsh -w -^/tmp/hosts2 uname -r
    192.168.1.4: 2.6.32-431.17.1.el6.x86_64
    

    In this case, /tmp/hosts2 contains 192.168.1.250, which isn't included in the output. Using the -x option with a hostname,

    $ pdsh -x 192.168.1.4 uname -r
    192.168.1.250: 2.6.32-431.11.2.el6.x86_64
    $ pdsh -x ^/tmp/hosts uname -r
    192.168.1.250: 2.6.32-431.11.2.el6.x86_64
    $ more /tmp/hosts
    192.168.1.4
    

    or a list of hostnames to be excluded from the command to run also works.

    More Useful pdsh Commands

    Now I can shift into second gear and try some fancier pdsh tricks. First, I want to run a more complicated command on all of the nodes ( Listing 3 ). Notice that I put the entire command in quotes. This means the entire command is run on each node, including the first ( cat /proc/cpuinfo ) and second ( grep bogomips ) parts.

    Listing 3 Quotation Marks 1
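    Based on the description, the command in the listing would take roughly this form (a sketch, using the WCOLL host file set up earlier, not the original listing):

    # Both cat and grep run on each remote node, because the whole pipeline is quoted.
    $ pdsh "cat /proc/cpuinfo | grep bogomips"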

    In the output, the node precedes the command results, so you can tell what output is associated with which node. Notice that the BogoMips values are different on the two nodes, which is perfectly understandable because the systems are different. The first node has eight cores (four cores and four Hyper-Thread cores), and the second node has four cores.

    You can use this command across a homogeneous cluster to make sure all the nodes are reporting back the same BogoMips value. If the cluster is truly homogeneous, this value should be the same. If it's not, then I would take the offending node out of production and check it.

    A slightly different command shown in Listing 4 runs the first part contained in quotes, cat /proc/cpuinfo , on each node and the second part of the command, grep bogomips , on the node on which you issue the pdsh command.

    Listing 4 Quotation Marks 2
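    Based on the description, the variant in this listing would look roughly like this (again a sketch, not the original listing):

    # Only cat runs on the remote nodes; grep runs locally on the node issuing pdsh,
    # filtering the combined output after it comes back.
    $ pdsh "cat /proc/cpuinfo" | grep bogomips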

    The point here is that you need to be careful on the command line. In this example, the differences are trivial, but other commands could have differences that might be difficult to notice.

    One very important thing to note is that pdsh does not guarantee a return of output in any particular order. If you have a list of 20 nodes, the output does not necessarily start with node 1 and increase incrementally to node 20. For example, in Listing 5 , I run vmstat on each node and get three lines of output from each node.

    [Nov 12, 2018] Edge Computing vs. Cloud Computing: What's the Difference? by Andy Patrizio

    "... Download the authoritative guide: Cloud Computing 2018: Using the Cloud to Transform Your Business ..."
    Notable quotes:
    "... Download the authoritative guide: Cloud Computing 2018: Using the Cloud to Transform Your Business ..."
    "... Edge computing is a term you are going to hear more of in the coming years because it precedes another term you will be hearing a lot, the Internet of Things (IoT). You see, the formally adopted definition of edge computing is a form of technology that is necessary to make the IoT work. ..."
    "... Tech research firm IDC defines edge computing is a "mesh network of micro data centers that process or store critical data locally and push all received data to a central data center or cloud storage repository, in a footprint of less than 100 square feet." ..."
    Jan 23, 2018 | www.datamation.com

    The term cloud computing is now as firmly lodged in our technical lexicon as email and Internet, and the concept has taken firm hold in business as well. By 2020, Gartner estimates that a "no cloud" policy will be as prevalent in business as a "no Internet" policy. Which is to say no one who wants to stay in business will be without one.

    You are likely hearing a new term now, edge computing . One of the problems with technology is terms tend to come before the definition. Technologists (and the press, let's be honest) tend to throw a word around before it is well-defined, and in that vacuum come a variety of guessed definitions, of varying accuracy.

    Edge computing is a term you are going to hear more of in the coming years because it precedes another term you will be hearing a lot, the Internet of Things (IoT). You see, the formally adopted definition of edge computing is a form of technology that is necessary to make the IoT work.

    Tech research firm IDC defines edge computing as a "mesh network of micro data centers that process or store critical data locally and push all received data to a central data center or cloud storage repository, in a footprint of less than 100 square feet."

    It is typically used in IoT use cases, where edge devices collect data from IoT devices and do the processing there, or send it back to a data center or the cloud for processing. Edge computing takes some of the load off the central data center, reducing or even eliminating the processing work at the central location.

    IoT Explosion in the Cloud Era

    To understand the need for edge computing you must understand the explosive growth in IoT in the coming years, and it is coming on big. There have been a number of estimates of the growth in devices, and while they all vary, they are all in the billions of devices.

    This is taking place in a number of areas, most notably cars and industrial equipment. Cars are becoming increasingly more computerized and more intelligent. Gone are the days when the "Check engine" warning light came on and you had to guess what was wrong. Now it tells you which component is failing.

    The industrial sector is a broad one and includes sensors, RFID, industrial robotics, 3D printing, condition monitoring, smart meters, guidance, and more. This sector is sometimes called the Industrial Internet of Things (IIoT) and the overall market is expected to grow from $93.9 billion in 2014 to $151.01 billion by 2020.

    All of these sensors are taking in data but they are not processing it. Your car does some of the processing of sensor data but much of it has to be sent in to a data center for computation, monitoring and logging.

    The problem is that this would overload networks and data centers. Imagine the millions of cars on the road sending in data to data centers around the country. The 4G network would be overwhelmed, as would the data centers. And if you are in California and the car maker's data center is in Texas, that's a long round trip.

    [Nov 09, 2018] Cloud-hosted data must be accessed by users over existing WAN which creates performance issues due to bandwidth and latency constraints

    Notable quotes:
    "... Congestion problems lead to miserable performance. We have one WAN pipe, typically 1.5 Mbps to 10 MBps ..."
    Nov 09, 2018 | www.eiseverywhere.com

    However, cloud-hosted information assets must still be accessed by users over existing WAN infrastructures, where there are performance issues due to bandwidth and latency constraints.

    THE EXTREMELY UNFUNNY PART - UP TO 20x SLOWER

    Public/Private Cloud: thousands of companies, millions of users, varied bandwidth

    ♦ Per-unit provisioning costs do not decrease much with size after, say, 100 units.

    > Cloud data centers are potentially "far away"

    ♦ Cloud infrastructure supports many enterprises

    ♦ Large scale drives lower per-unit cost for data center services

    > All employees will be "remote" from their data

    ♦ Even single-location companies will be remote from their data

    ♦ HQ employees previously local to servers, but not with Cloud model

    > Lots of data needs to be sent over limited WAN bandwidth

    Congestion problems lead to miserable performance. We have one WAN pipe, typically 1.5 Mbps to 10 MBps

    > Disk-based deduplication technology

    ♦ Identify redundant data at the byte level, not application (e.g., file) level

    ♦ Use disks to store vast dictionaries of byte sequences for long periods of time

    ♦ Use symbols to transfer repetitive sequences of byte-level raw data

    ♦ Only deduplicated data stored on disk

    [Nov 09, 2018] Troubleshoot WAN Performance Issues SD Wan Experts by Steve Garson

    Feb 08, 2013 | www.sd-wan-experts.com

    Troubleshooting MPLS Networks

    How should you troubleshoot WAN performance issues? Your clients in field offices are complaining about slow WAN performance on your MPLS or VPLS network. Your network should be performing better, and you can't figure out what the problem is. You can contact SD-WAN-Experts to have their engineers solve your problem, but you want to try to solve the problems yourself.

    1. The first thing to check seems trivial, but you need to confirm that your router and switch ports are configured for the same speed and duplex. Log into your switches and check the logs for mismatches of speed or duplex. Auto-negotiation sometimes does not work properly, so a 10M port connected to a 100M port is mismatched. Or you might have a half-duplex port connected to a full-duplex port. Don't assume that a 10/100/1000 port is auto-negotiating correctly!
    2. Is your WAN performance problem consistent? Does it occur at roughly the same time of day? Or is it completely random? If you don't have the monitoring tools to measure this, you are at a big disadvantage in resolving the issues on your own.
    3. Do you have Class of Service configured on your WAN? Do you have DSCP configured on your LAN? What is the mapping of your DSCP values to CoS?
    4. What kind of applications are traversing your WAN? Are there specific apps that work better than others?
    5. Have your reviewed bandwidth utilization on your carrier's web portal to determine if you are saturating the MPLS port of any locations? Even brief peaks will be enough to generate complaints. Large files, such as CAD drawings, can completely saturate a WAN link.
    6. Are you backing up or synchronizing data over the WAN? Have you confirmed 100% that this work is completed before the work day begins.
    7. Might your routing be taking multiple paths and not the most direct path? Look at your routing tables.
    8. Next, you want to see long term trend statistics. This means monitoring the SNMP streams from all your routers, using tools such as MRTG, NTOP or Cacti. A two week sampling should provide a very good picture of what is happening on your network to help troubleshoot your WAN.
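    As a quick illustration of that kind of polling (the address 192.168.1.1 and the community string "public" are assumptions, not values from this article), you can verify SNMP read access and inspect interface counters before pointing MRTG or Cacti at a device:

    # List interface descriptions to confirm SNMP v2c read access and find ifIndex values.
    snmpwalk -v2c -c public 192.168.1.1 IF-MIB::ifDescr

    # Read the in/out octet counters for interface index 2 (adjust to your ifIndex).
    snmpget -v2c -c public 192.168.1.1 IF-MIB::ifInOctets.2 IF-MIB::ifOutOctets.2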

    NTOP allows you to

    MRTG (Multi-Router Traffic Grapher) provides easy to understand graphs of your network bandwidth utilization.


    Cacti requires a MySQL database. It is a complete network graphing solution designed to harness the power of RRDTool 's data storage and graphing functionality. Cacti provides a fast poller, advanced graph templating, multiple data acquisition methods, and user management features out of the box. All of this is wrapped in an intuitive, easy to use interface that makes sense for LAN-sized installations up to complex networks with hundreds of devices.

    Both NTOP and MRTG are freeware applications to help troubleshoot your WAN that will run on the freeware versions of Linux. As a result, they can be installed on almost any desktop computer that has out-lived its value as a Windows desktop machine. If you are skilled with Linux and networking, and you have the time, you can install this monitoring system on your own. You will need to get your carrier to provide read-only access to your router SNMP traffic.

    But you might find it more cost effective to have the engineers at SD-WAN-Experts do the work for you. All you need to do is provide an available machine with a Linux install (Ubuntu, CentOS, RedHat, etc) with remote access via a VPN. Our engineers will then download all the software remotely, install and configure the machine. When we are done with the monitoring, beside understanding how to solve your problem (and solving it!) you will have your own network monitoring system installed for your use on a daily basis. We'll teach you how to use it, which is quite simple using the web based tools, so you can view it from any machine on your network.

    If you need assistance in troubleshooting your wide area network, contact SD-WAN-Experts today !

    You might also find these troubleshooting tips of interest;

    Troubleshooting MPLS Network Performance Issues

    Packet Loss and How It Affects Performance

    Troubleshooting VPLS and Ethernet Tunnels over MPLS

    [Nov 09, 2018] Storage in private clouds

    Nov 09, 2018 | www.redhat.com

    Storage in private clouds

    Storage is one of the most popular uses of cloud computing, particularly for consumers. The user-friendly design of service-based companies has helped make "cloud" a pretty normal term -- even reaching meme status in 2016.

    However, cloud storage means something very different to businesses. Big data and the Internet of Things (IoT) have made it difficult to appraise the value of data until long after it's originally stored -- when finding that piece of data becomes the key to revealing valuable business insights or unlocking an application's new feature. Even after enterprises decide where to store their data in the cloud (on-premise, off-premise, public, or private), they still have to decide how they're going to store it. What good is data that can't be found?

    It's common to store data in the cloud using software-defined storage . Software-defined storage decouples storage software from hardware so you can abstract and consolidate storage capacity in a cloud. It allows you to scale beyond whatever individual hardware components your cloud is built on.

    Two of the more common software-defined storage solutions include Ceph for structured data and Gluster for unstructured data. Ceph is a massively scalable, programmable storage system that works well with clouds -- particularly those deployed using OpenStack ® -- because of its ability to unify object, block, and file storage into 1 pool of resources. Gluster is designed to handle the requirements of traditional file storage and is particularly adept at provisioning and managing elastic storage for container-based applications.

    [Nov 09, 2018] Cloud Computing vs Edge Computing Which Will Prevail

    Notable quotes:
    "... The recent widespread of edge computing in some 5G showcases, like the major sports events, has generated the ongoing discussion about the possibility of edge computing to replace cloud computing. ..."
    "... For instance, Satya Nadella, the CEO of Microsoft, announced in Microsoft Build 2017 that the company will focus its strategy on edge computing. Indeed, edge computing will be the key for the success of smart home and driverless vehicles ..."
    "... the edge will be the first to process and store the data generated by user devices. This will reduce the latency for the data to travel to the cloud. In other words, the edge optimizes the efficiency for the cloud. ..."
    Nov 09, 2018 | www.lannerinc.com

    The recent widespread of edge computing in some 5G showcases, like the major sports events, has generated the ongoing discussion about the possibility of edge computing to replace cloud computing.

    In fact, there have been announcements from global tech leaders like Nokia and Huawei demonstrating increased efforts and resources in developing edge computing.

    For instance, Satya Nadella, the CEO of Microsoft, announced in Microsoft Build 2017 that the company will focus its strategy on edge computing. Indeed, edge computing will be the key for the success of smart home and driverless vehicles.

    ... ... ...

    Cloud or edge, which will lead the future?

    The answer to that question is "Cloud – Edge Mixing". The cloud and the edge will complement each other to offer the real IoT experience. For instance, while the cloud coordinates all the technology and offers SaaS to users, the edge will be the first to process and store the data generated by user devices. This will reduce the latency for the data to travel to the cloud. In other words, the edge optimizes the efficiency for the cloud.

    It is strongly suggested to implement open architecture white-box servers for both cloud and edge, to minimize the latency for cloud-edge synchronization and optimize the compatibility between the two. For example, Lanner Electronics offers a wide range of Intel x86 white box appliances for data centers and edge uCPE/vCPE.

    http://www.lannerinc.com/telecom-datacenter-appliances/vcpe/ucpe-platforms/

    [Nov 09, 2018] OpenStack is overkill for Docker

    Notable quotes:
    "... OpenStack's core value is to gather a pool of hypervisor-enabled computers and enable the delivery of virtual machines (VMs) on demand to users. ..."
    Nov 09, 2018 | www.techrepublic.com


    Both OpenStack and Docker were conceived to make IT more agile. OpenStack has strived to do this by turning hitherto static IT resources into elastic infrastructure, whereas Docker has reached for this goal by harmonizing development, test, and production resources, as Red Hat's Neil Levine suggests .

    But while Docker adoption has soared, OpenStack is still largely stuck in neutral. OpenStack is kept relevant by so many wanting to believe its promise, but never hitting its stride due to a host of factors , including complexity.

    And yet Docker could be just the thing to turn OpenStack's popularity into productivity. Whether a Docker-plus-OpenStack pairing is right for your enterprise largely depends on the kind of capacity your enterprise hopes to deliver. If simply Docker, OpenStack is probably overkill.

    An open source approach to delivering virtual machines

    OpenStack is an operational model for delivering virtualized compute capacity.

    Sure, some give it a more grandiose definition ("OpenStack is a set of software tools for building and managing cloud computing platforms for public and private clouds"), but if we ignore secondary services like Cinder, Heat, and Magnum, for example, OpenStack's core value is to gather a pool of hypervisor-enabled computers and enable the delivery of virtual machines (VMs) on demand to users.

    That's it.

    Not that this is a small thing. After all, without OpenStack, the hypervisor sits idle, lonesome on a single computer, with no way to expose that capacity programmatically (or otherwise) to users.

    Before cloudy systems like OpenStack or Amazon's EC2, users would typically file a help ticket with IT. An IT admin, in turn, would use a GUI or command line to create a VM, and then share the credentials with the user.

    Systems like OpenStack significantly streamline this process, enabling IT to programmatically deliver capacity to users. That's a big deal.

    Docker peanut butter, meet OpenStack jelly

    Docker, the darling of the containers world, is similar to the VM in the IaaS picture painted above.

    A Docker host is really the unit of compute capacity that users need, and not the container itself. Docker addresses what you do with a host once you've got it, but it doesn't really help you get the host in the first place.

    Docker Machine is a client-side tool that lets you request Docker hosts from an IaaS provider (such as EC2, OpenStack, or vSphere), but it's far from a complete solution. In part, this stems from the fact that Docker doesn't have a tenancy model.
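    As a hedged sketch of that workflow (the OpenStack driver flags, image, flavor, and network names below are assumptions that vary by docker-machine and cloud version, not a definitive recipe), a developer could provision a Docker host on an OpenStack cloud and then run containers on it:

    # Ask docker-machine to boot a VM on OpenStack and install the Docker engine on it.
    docker-machine create -d openstack \
        --openstack-image-name "ubuntu-16.04" \
        --openstack-flavor-name "m1.small" \
        --openstack-net-name "private" \
        docker-host-1

    # Point the local docker client at the new host, then run containers there.
    eval "$(docker-machine env docker-host-1)"
    docker run -d --name web -p 80:80 nginx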

    With a hypervisor, each VM is a tenant. But in Docker, the Docker host is a tenant. You typically don't want multiple users sharing a Docker host because then they see each others' containers. So typically an enterprise will layer a cloud system underneath Docker to add tenancy. This yields a stack that looks like: hardware > hypervisor > Docker host > container.

    A common approach today would be to take OpenStack and use it as the enterprise platform to deliver capacity on demand to users. In other words, users rely on OpenStack to request a Docker host, and then they use Docker to run containers in their Docker host.

    So far, so good.

    If all you need is Docker...

    Things get more complicated when we start parsing what capacity needs delivering.

    When an enterprise wants to use Docker, they need to get Docker hosts from a data center. OpenStack can do that, and it can do it alongside delivering all sorts of other capacity to the various teams within the enterprise.

    But if all an enterprise IT team needs is Docker containers delivered, then OpenStack -- or a similar orchestration tool -- may be overkill, as VMware executive Jared Rosoff told me.

    For this sort of use case, we really need a new platform. This platform could take the form of a piece of software that an enterprise installs on all of its computers in the data center. It would expose an interface to developers that lets them programmatically create Docker hosts when they need them, and then use Docker to create containers in those hosts.

    Google has a vision for something like this with its Google Container Engine . Amazon has something similar in its EC2 Container Service . These are both API's that developers can use to provision some Docker-compatible capacity from their data center.

    As for Docker, the company behind Docker the technology, it seems to have punted on this problem, focusing instead on what happens on the host itself.

    While we probably don't need to build up a big OpenStack cloud simply to manage Docker instances, it's worth asking what OpenStack should look like if what we wanted to deliver was only Docker hosts, and not VMs.

    Again, we see Google and Amazon tackling the problem, but when will OpenStack, or one of its supporters, do the same? The obvious candidate would be VMware, given its longstanding dominance of tooling around virtualization. But the company that solves this problem first, and in a way that comforts traditional IT with familiar interfaces yet pulls them into a cloudy future, will win, and win big.

    [Nov 09, 2018] What is Hybrid Cloud Computing

    Nov 09, 2018 | www.dummies.com

    The hybrid cloud

    A hybrid cloud combines a private cloud with the use of public cloud services, where one or several touch points exist between the environments. The goal is to combine services and data from a variety of cloud models to create a unified, automated, and well-managed computing environment.

    Combining public services with private clouds and the data center as a hybrid is the new definition of corporate computing. Not all companies that use some public and some private cloud services have a hybrid cloud. Rather, a hybrid cloud is an environment where the private and public services are used together to create value.

    A cloud is hybrid

    A cloud is not hybrid

    [Nov 09, 2018] Why Micro Data Centers Deliver Good Things in Small Packages by Calvin Hennick

    Notable quotes:
    "... "There's a big transformation happening," says Thomas Humphrey, segment director for edge computing at APC . "Technologies like IoT have started to require that some local computing and storage happen out in that distributed IT architecture." ..."
    "... In retail, for example, edge computing will become more important as stores find success with IoT technologies such as mobile beacons, interactive mirrors and real-time tools for customer experience, behavior monitoring and marketing . ..."
    Nov 09, 2018 | solutions.cdw.com

    Enterprises are deploying self-contained micro data centers to power computing at the network edge.

    The location for data processing has changed significantly throughout the history of computing. During the mainframe era, data was processed centrally, but client/server architectures later decentralized computing. In recent years, cloud computing centralized many processing workloads, but digital transformation and the Internet of Things are poised to move computing to new places, such as the network edge .

    "There's a big transformation happening," says Thomas Humphrey, segment director for edge computing at APC . "Technologies like IoT have started to require that some local computing and storage happen out in that distributed IT architecture."

    For example, some IoT systems require processing of data at remote locations rather than a centralized data center , such as at a retail store instead of a corporate headquarters.

    To meet regulatory requirements and business needs, IoT solutions often need low latency, high bandwidth, robust security and superior reliability . To meet these demands, many organizations are deploying micro data centers: self-contained solutions that provide not only essential infrastructure, but also physical security, power and cooling and remote management capabilities.

    "Digital transformation happens at the network edge, and edge computing will happen inside micro data centers ," says Bruce A. Taylor, executive vice president at Datacenter Dynamics . "This will probably be one of the fastest growing segments -- if not the fastest growing segment -- in data centers for the foreseeable future."

    What Is a Micro Data Center?

    Delivering the IT capabilities needed for edge computing represents a significant challenge for many organizations, which need manageable and secure solutions that can be deployed easily, consistently and close to the source of computing . Vendors such as APC have begun to create comprehensive solutions that provide these necessary capabilities in a single, standardized package.

    "From our perspective at APC, the micro data center was a response to what was happening in the market," says Humphrey. "We were seeing that enterprises needed more robust solutions at the edge."

    Most micro data center solutions rely on hyperconverged infrastructure to integrate computing, networking and storage technologies within a compact footprint . A typical micro data center also incorporates physical infrastructure (including racks), fire suppression, power, cooling and remote management capabilities. In effect, the micro data center represents a sweet spot between traditional IT closets and larger modular data centers -- giving organizations the ability to deploy professional, powerful IT resources practically anywhere .

    Standardized Deployments Across the Country

    Having robust IT resources at the network edge helps to improve reliability and reduce latency, both of which are becoming more and more important as analytics programs require that data from IoT deployments be processed in real time .

    "There's always been edge computing," says Taylor. "What's new is the need to process hundreds of thousands of data points for analytics at once."

    Standardization, redundant deployment and remote management are also attractive features, especially for large organizations that may need to deploy tens, hundreds or even thousands of micro data centers. "We spoke to customers who said, 'I've got to roll out and install 3,500 of these around the country,'" says Humphrey. "And many of these companies don't have IT staff at all of these sites." To address this scenario, APC designed standardized, plug-and-play micro data centers that can be rolled out seamlessly. Additionally, remote management capabilities allow central IT departments to monitor and troubleshoot the edge infrastructure without costly and time-intensive site visits.

    In part because micro data centers operate in far-flung environments, security is of paramount concern. The self-contained nature of micro data centers ensures that only authorized personnel will have access to infrastructure equipment , and security tools such as video surveillance provide organizations with forensic evidence in the event that someone attempts to infiltrate the infrastructure.

    How Micro Data Centers Can Help in Retail, Healthcare

    Micro data centers make business sense for any organization that needs secure IT infrastructure at the network edge. But the solution is particularly appealing to organizations in fields such as retail, healthcare and finance , where IT environments are widely distributed and processing speeds are often a priority.

    In retail, for example, edge computing will become more important as stores find success with IoT technologies such as mobile beacons, interactive mirrors and real-time tools for customer experience, behavior monitoring and marketing .

    "It will be leading-edge companies driving micro data center adoption, but that doesn't necessarily mean they'll be technology companies," says Taylor. "A micro data center can power real-time analytics for inventory control and dynamic pricing in a supermarket."

    In healthcare, digital transformation is beginning to touch processes and systems ranging from medication carts to patient records, and data often needs to be available locally; for example, in case of a data center outage during surgery. In finance, the real-time transmission of data can have immediate and significant financial consequences. And in both of these fields, regulations governing data privacy make the monitoring and security features of micro data centers even more important.

    Micro data centers also have enormous potential to power smart city initiatives and to give energy companies a cost-effective way of deploying resources in remote locations , among other use cases.

    "The proliferation of edge computing will be greater than anything we've seen in the past," Taylor says. "I almost can't think of a field where this won't matter."

    Learn more about how solutions and services from CDW and APC can help your organization overcome its data center challenges.

    Micro Data Centers Versus IT Closets

    Think the micro data center is just a glorified update on the traditional IT closet? Think again.

    "There are demonstrable differences," says Bruce A. Taylor, executive vice president at Datacenter Dynamics. "With micro data centers, there's a tremendous amount of computing capacity in a very small, contained space, and we just didn't have that capability previously ."

    APC identifies three key differences between IT closets and micro data centers:

    1. Difference #1: Uptime Expectations. APC notes that, of the nearly 3 million IT closets in the U.S., over 70 percent report outages directly related to human error. In an unprotected IT closet, problems can result from something as preventable as cleaning staff unwittingly disconnecting a cable. Micro data centers, by contrast, utilize remote monitoring, video surveillance and sensors to reduce downtime related to human error.
    2. Difference #2: Cooling Configurations. The cooling of IT wiring closets is often approached both reactively and haphazardly, resulting in premature equipment failure. Micro data centers are specifically designed to assure cooling compatibility with anticipated loads.
    3. Difference #3: Power Infrastructure. Unlike many IT closets, micro data centers incorporate uninterruptible power supplies, ensuring that infrastructure equipment has the power it needs to help avoid downtime.

    Calvin Hennick is a freelance journalist who specializes in business and technology writing. He is a contributor to the CDW family of technology magazines.

    [Nov 09, 2018] Solving Office 365 and SaaS Performance Issues with SD-WAN

    Notable quotes:
    "... most of the Office365 deployments face network related problems - typically manifesting as screen freezes. Limited WAN optimization capability further complicates the problems for most SaaS applications. ..."
    "... Why enterprises overlook the importance of strategically placing cloud gateways ..."
    Nov 09, 2018 | www.brighttalk.com

    About this webinar

    Major research highlights that most Office 365 deployments face network-related problems, typically manifesting as screen freezes. Limited WAN optimization capability further complicates the problems for most SaaS applications. To compound the issue, different SaaS applications issue different guidelines for solving performance issues. We will investigate the major reasons for these problems.

    SD-WAN provides an essential set of features that solves these networking issues related to Office 365 and SaaS applications. This session will cover the following major topics:

    [Nov 09, 2018] Make sense of edge computing vs. cloud computing

    Notable quotes:
    "... We already know that computing at the edge pushes most of the data processing out to the edge of the network, close to the source of the data. Then it's a matter of dividing the processing between the edge and the centralized system, meaning a public cloud such as Amazon Web Services, Google Cloud, or Microsoft Azure. ..."
    "... The goal is to process near the device the data that it needs quickly, such as to act on. There are hundreds of use cases where reaction time is the key value of the IoT system, and consistently sending the data back to a centralized cloud prevents that value from happening. ..."
    Nov 09, 2018 | www.infoworld.com

    The internet of things is real, and it's a real part of the cloud. A key challenge is how you can get data processed from so many devices. Cisco Systems predicts that cloud traffic is likely to rise nearly fourfold by 2020, increasing from 3.9 zettabytes (ZB) per year in 2015 (the latest full year for which data is available) to 14.1 ZB per year by 2020.

    As a result, we could have the cloud computing perfect storm from the growth of IoT. After all, IoT is about processing device-generated data that is meaningful, and cloud computing is about using data from centralized computing and storage. Growth rates of both can easily become unmanageable.

    So what do we do? The answer is something called "edge computing." We already know that computing at the edge pushes most of the data processing out to the edge of the network, close to the source of the data. Then it's a matter of dividing the processing between the edge and the centralized system, meaning a public cloud such as Amazon Web Services, Google Cloud, or Microsoft Azure.

    That may sound like a client/server architecture, which also involved figuring out what to do at the client versus at the server. For IoT and any highly distributed applications, you've essentially got a client/network edge/server architecture going on, or, if your devices can't do any processing themselves, a network edge/server architecture.

    The goal is to process the data that a device needs quickly, such as data it must act on, close to the device itself. There are hundreds of use cases where reaction time is the key value of the IoT system, and consistently sending the data back to a centralized cloud prevents that value from happening.

    You would still use the cloud for processing that is either not as time-sensitive or is not needed by the device, such as for big data analytics on data from all your devices.
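    As a rough sketch of that split, the shell loop below acts locally on time-critical readings and only batches the rest up to a central API. This is a hedged illustration only: read_sensor, trigger_local_alarm and the cloud.example.com endpoint are hypothetical placeholders, not part of any particular IoT stack.

    #!/bin/sh
    # Hypothetical edge node loop: react locally to urgent readings,
    # queue everything else and upload it to the cloud in batches.
    QUEUE=/var/tmp/edge-queue.csv
    while true; do
        temp=$(read_sensor)                   # assumed helper printing an integer reading
        if [ "$temp" -gt 90 ]; then
            trigger_local_alarm "$temp"       # time-critical: handled at the edge, no round trip
        fi
        echo "$(date +%s),$temp" >> "$QUEUE"  # non-urgent data is queued locally
        if [ "$(wc -l < "$QUEUE")" -ge 60 ]; then
            # roughly once a minute, ship the batch to the cloud for analytics
            curl -s -X POST --data-binary @"$QUEUE" https://cloud.example.com/ingest \
                && : > "$QUEUE"               # clear the queue only if the upload succeeded
        fi
        sleep 1
    done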

    There's another dimension to this: edge computing and cloud computing are two very different things. One does not replace the other. But too many articles confuse IT pros by suggesting that edge computing will displace cloud computing. It's no more true than saying PCs would displace the datacenter.

    It makes perfect sense to create purpose-built edge computing-based applications, such as an app that places data processing in a sensor to quickly process reactions to alarms. But you're not going to place your inventory-control data and applications at the edge -- moving all compute to the edge would result in a distributed, unsecured, and unmanageable mess.

    All the public cloud providers have IoT strategies and technology stacks that include, or will include, edge computing. Edge and cloud computing can and do work well together, but edge computing is for purpose-built systems with special needs. Cloud computing is a more general-purpose platform that also can work with purpose-built systems in that old client/server model.


    David S. Linthicum is a chief cloud strategy officer at Deloitte Consulting, and an internationally recognized industry expert and thought leader. His views are his own.

    [Nov 08, 2018] GT 6.0 GridFTP

    Notable quotes:
    "... GridFTP is a high-performance, secure, reliable data transfer protocol optimized for high-bandwidth wide-area networks ..."
    Nov 08, 2018 | toolkit.globus.org

    The open source Globus® Toolkit is a fundamental enabling technology for the "Grid," letting people share computing power, databases, and other tools securely online across corporate, institutional, and geographic boundaries without sacrificing local autonomy. The toolkit includes software services and libraries for resource monitoring, discovery, and management, plus security and file management. In addition to being a central part of science and engineering projects that total nearly a half-billion dollars internationally, the Globus Toolkit is a substrate on which leading IT companies are building significant commercial Grid products.

    The toolkit includes software for security, information infrastructure, resource management, data management, communication, fault detection, and portability. It is packaged as a set of components that can be used either independently or together to develop applications. Every organization has unique modes of operation, and collaboration between multiple organizations is hindered by incompatibility of resources such as data archives, computers, and networks. The Globus Toolkit was conceived to remove obstacles that prevent seamless collaboration. Its core services, interfaces and protocols allow users to access remote resources as if they were located within their own machine room while simultaneously preserving local control over who can use resources and when.

    The Globus Toolkit has grown through an open-source strategy similar to the Linux operating system's, and distinct from proprietary attempts at resource-sharing software. This encourages broader, more rapid adoption and leads to greater technical innovation, as the open-source community provides continual enhancements to the product.

    Essential background is contained in the papers " Anatomy of the Grid " by Foster, Kesselman and Tuecke and " Physiology of the Grid " by Foster, Kesselman, Nick and Tuecke.

    Acclaim for the Globus Toolkit

    From version 1.0 in 1998 to the 2.0 release in 2002 and now the latest 4.0 version based on new open-standard Grid services, the Globus Toolkit has evolved rapidly into what The New York Times called "the de facto standard" for Grid computing. In 2002 the project earned a prestigious R&D 100 award, given by R&D Magazine in a ceremony where the Globus Toolkit was named "Most Promising New Technology" among the year's top 100 innovations. Other honors include project leaders Ian Foster of Argonne National Laboratory and the University of Chicago, Carl Kesselman of the University of Southern California's Information Sciences Institute (ISI), and Steve Tuecke of Argonne being named among 2003's top ten innovators by InfoWorld magazine, and a similar honor from MIT Technology Review, which named Globus Toolkit-based Grid computing one of "Ten Technologies That Will Change the World."

    GridFTP is a high-performance, secure, reliable data transfer protocol optimized for high-bandwidth wide-area networks. The GridFTP protocol is based on FTP, the highly popular Internet file transfer protocol. We have selected a set of protocol features and extensions defined already in IETF RFCs and added a few additional features to meet requirements from current data grid projects.
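    As an illustration, once the toolkit is installed a transfer is typically driven with the globus-url-copy client; the host name and paths below are placeholders, and exact flag support can vary between GT releases.

    # Pull a remote file over GridFTP with 4 parallel TCP streams (-p 4)
    # and verbose performance reporting (-vb); host and paths are hypothetical.
    globus-url-copy -vb -p 4 \
        gsiftp://gridftp.example.org/data/input.dat \
        file:///scratch/input.dat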

    The following guides are available for this component:

    Data Management Key Concepts For important general concepts [ pdf ].
    Admin Guide For system administrators and those installing, building and deploying GT. You should already have read the Installation Guide and Quickstart [ pdf ]
    User's Guide Describes how end-users typically interact with this component. [ pdf ].
    Developer's Guide Reference and usage scenarios for developers. [ pdf ].
    Other information available for this component includes:
    Release Notes What's new with the 6.0 release for this component. [ pdf ]
    Public Interface Guide Information for all public interfaces (including APIs, commands, etc). Please note this is a subset of information in the Developer's Guide [ pdf ].
    Quality Profile Information about test coverage reports, etc. [ pdf ].
    Migrating Guide Information for migrating to this version if you were using a previous version of GT. [ pdf ]
    All GridFTP Guides (PDF only) Includes all GridFTP guides except Public Interfaces (which is a subset of the Developer's Guide)

    [Nov 08, 2018] globus-gridftp-server-control-6.2-1.el7.x86_64.rpm

    Nov 08, 2018 | centos.pkgs.org
    6.2 x86_64 EPEL Testing
    globus-gridftp-server-control - - -
    Requires
    Name Value
    /sbin/ldconfig -
    globus-xio-gsi-driver(x86-64) >= 2
    globus-xio-pipe-driver(x86-64) >= 2
    libc.so.6(GLIBC_2.14)(64bit) -
    libglobus_common.so.0()(64bit) -
    libglobus_common.so.0(GLOBUS_COMMON_14)(64bit) -
    libglobus_gss_assist.so.3()(64bit) -
    libglobus_gssapi_error.so.2()(64bit) -
    libglobus_gssapi_gsi.so.4()(64bit) -
    libglobus_gssapi_gsi.so.4(globus_gssapi_gsi)(64bit) -
    libglobus_openssl_error.so.0()(64bit) -
    libglobus_xio.so.0()(64bit) -
    rtld(GNU_HASH) -
    See Also
    Package Description
    globus-gridftp-server-control-devel-6.1-1.el7.x86_64.rpm Globus Toolkit - Globus GridFTP Server Library Development Files
    globus-gridftp-server-devel-12.5-1.el7.x86_64.rpm Globus Toolkit - Globus GridFTP Server Development Files
    globus-gridftp-server-progs-12.5-1.el7.x86_64.rpm Globus Toolkit - Globus GridFTP Server Programs
    globus-gridmap-callout-error-2.5-1.el7.x86_64.rpm Globus Toolkit - Globus Gridmap Callout Errors
    globus-gridmap-callout-error-devel-2.5-1.el7.x86_64.rpm Globus Toolkit - Globus Gridmap Callout Errors Development Files
    globus-gridmap-callout-error-doc-2.5-1.el7.noarch.rpm Globus Toolkit - Globus Gridmap Callout Errors Documentation Files
    globus-gridmap-eppn-callout-1.13-1.el7.x86_64.rpm Globus Toolkit - Globus gridmap ePPN callout
    globus-gridmap-verify-myproxy-callout-2.9-1.el7.x86_64.rpm Globus Toolkit - Globus gridmap myproxy callout
    globus-gsi-callback-5.13-1.el7.x86_64.rpm Globus Toolkit - Globus GSI Callback Library
    globus-gsi-callback-devel-5.13-1.el7.x86_64.rpm Globus Toolkit - Globus GSI Callback Library Development Files
    globus-gsi-callback-doc-5.13-1.el7.noarch.rpm Globus Toolkit - Globus GSI Callback Library Documentation Files
    globus-gsi-cert-utils-9.16-1.el7.x86_64.rpm Globus Toolkit - Globus GSI Cert Utils Library
    globus-gsi-cert-utils-devel-9.16-1.el7.x86_64.rpm Globus Toolkit - Globus GSI Cert Utils Library Development Files
    globus-gsi-cert-utils-doc-9.16-1.el7.noarch.rpm Globus Toolkit - Globus GSI Cert Utils Library Documentation Files
    globus-gsi-cert-utils-progs-9.16-1.el7.noarch.rpm Globus Toolkit - Globus GSI Cert Utils Library Programs
    Provides
    Name Value
    globus-gridftp-server-control = 6.1-1.el7
    globus-gridftp-server-control(x86-64) = 6.1-1.el7
    libglobus_gridftp_server_control.so.0()(64bit) -
    Required By Download
    Type URL
    Binary Package globus-gridftp-server-control-6.1-1.el7.x86_64.rpm
    Source Package globus-gridftp-server-control-6.1-1.el7.src.rpm
    Install Howto
    1. Download the latest epel-release rpm from
      http://dl.fedoraproject.org/pub/epel/7/x86_64/
      
    2. Install epel-release rpm:
      # rpm -Uvh epel-release*rpm
      
    3. Install globus-gridftp-server-control rpm package:
      # yum install globus-gridftp-server-control
      
    Files
    Path
    /usr/lib64/libglobus_gridftp_server_control.so.0
    /usr/lib64/libglobus_gridftp_server_control.so.0.6.1
    /usr/share/doc/globus-gridftp-server-control-6.1/README
    /usr/share/licenses/globus-gridftp-server-control-6.1/GLOBUS_LICENSE
    Changelog
    2018-04-07 - Mattias Ellert <mattias.ellert@physics.uu.se> - 6.1-1
    - GT6 update: Don't error if acquire_cred fails when vhost env is set
    

    [Nov 08, 2018] 9 Aspera Sync Alternatives Top Best Alternatives

    Nov 08, 2018 | www.topbestalternatives.com

    Aspera Sync is a high-performance, scalable, multi-directional asynchronous file replication and synchronization tool. It is designed to overcome the performance and scalability shortcomings of conventional synchronization tools like rsync, and it can scale up and out for maximum-speed replication and synchronization over WANs. Notable capabilities include the FASP transport advantage, high performance, an intelligent replacement for rsync workflows, support for complex synchronization topologies, advanced file handling, and so on. Purpose-built by Aspera, it handles today's largest big-data file stores, from millions of individual files to the largest file sizes. Robust backup and recovery policies protect business-critical data and systems so enterprises can quickly recover critical files, directory structures or an entire site in the event of a disaster. These policies can be undermined, however, by slow transfer speeds between primary and backup sites, resulting in incomplete backups and extended recovery times. With FASP-powered transfers, replication fits within the small operational window so you can meet your recovery point objective (RPO) and recovery time objective (RTO).

    1. Syncthing: Syncthing replaces proprietary sync and cloud services with something open, trustworthy and decentralized. Your data is your data alone, and you deserve to choose where it is stored, whether it is shared with some third party, and how it's transmitted over the Internet. Syncthing is a file-sharing application that lets you share documents between multiple devices in a convenient way. Its web-based graphical user interface (GUI) makes it possible to configure and monitor it from a browser.

    [Nov 08, 2018] Can rsync resume after being interrupted?

    Sep 15, 2012 | unix.stackexchange.com

    Tim , Sep 15, 2012 at 23:36

    I used rsync to copy a large number of files, but my OS (Ubuntu) restarted unexpectedly.

    After reboot, I ran rsync again, but from the output on the terminal, I found that rsync still copied those already copied before. But I heard that rsync is able to find differences between source and destination, and therefore to just copy the differences. So I wonder in my case if rsync can resume what was left last time?

    Gilles , Sep 16, 2012 at 1:56

    Yes, rsync won't copy again files that it's already copied. There are a few edge cases where its detection can fail. Did it copy all the already-copied files? What options did you use? What were the source and target filesystems? If you run rsync again after it's copied everything, does it copy again? – Gilles Sep 16 '12 at 1:56

    Tim , Sep 16, 2012 at 2:30

    @Gilles: Thanks! (1) I think I saw rsync copied the same files again from its output on the terminal. (2) Options are same as in my other post, i.e. sudo rsync -azvv /home/path/folder1/ /home/path/folder2 . (3) Source and target are both NTFS, but source is an external HDD, and target is an internal HDD. (4) It is now running and hasn't finished yet. – Tim Sep 16 '12 at 2:30

    jwbensley , Sep 16, 2012 at 16:15

    There is also the --partial flag to resume partially transferred files (useful for large files) – jwbensley Sep 16 '12 at 16:15

    Tim , Sep 19, 2012 at 5:20

    @Gilles: What are some "edge cases where its detection can fail"? – Tim Sep 19 '12 at 5:20

    Gilles , Sep 19, 2012 at 9:25

    @Tim Off the top of my head, there's at least clock skew, and differences in time resolution (a common issue with FAT filesystems which store times in 2-second increments, the --modify-window option helps with that). – Gilles Sep 19 '12 at 9:25

    DanielSmedegaardBuus , Nov 1, 2014 at 12:32

    First of all, regarding the "resume" part of your question, --partial just tells the receiving end to keep partially transferred files if the sending end disappears as though they were completely transferred.

    While transferring files, they are temporarily saved as hidden files in their target folders (e.g. .TheFileYouAreSending.lRWzDC ), or a specifically chosen folder if you set the --partial-dir switch. When a transfer fails and --partial is not set, this hidden file will remain in the target folder under this cryptic name, but if --partial is set, the file will be renamed to the actual target file name (in this case, TheFileYouAreSending ), even though the file isn't complete. The point is that you can later complete the transfer by running rsync again with either --append or --append-verify .

    So, --partial doesn't itself resume a failed or cancelled transfer. To resume it, you'll have to use one of the aforementioned flags on the next run. So, if you need to make sure that the target won't ever contain files that appear to be fine but are actually incomplete, you shouldn't use --partial . Conversely, if you want to make sure you never leave behind stray failed files that are hidden in the target directory, and you know you'll be able to complete the transfer later, --partial is there to help you.

    With regards to the --append switch mentioned above, this is the actual "resume" switch, and you can use it whether or not you're also using --partial . Actually, when you're using --append , no temporary files are ever created. Files are written directly to their targets. In this respect, --append gives the same result as --partial on a failed transfer, but without creating those hidden temporary files.

    So, to sum up, if you're moving large files and you want the option to resume a cancelled or failed rsync operation from the exact point that rsync stopped, you need to use the --append or --append-verify switch on the next attempt.

    As @Alex points out below, since version 3.0.0 rsync now has a new option, --append-verify , which behaves like --append did before that switch existed. You probably always want the behaviour of --append-verify , so check your version with rsync --version . If you're on a Mac and not using rsync from homebrew , you'll (at least up to and including El Capitan) have an older version and need to use --append rather than --append-verify . Why they didn't keep the behaviour on --append and instead named the newcomer --append-no-verify is a bit puzzling. Either way, --append on rsync before version 3 is the same as --append-verify on the newer versions.

    --append-verify isn't dangerous: It will always read and compare the data on both ends and not just assume they're equal. It does this using checksums, so it's easy on the network, but it does require reading the shared amount of data on both ends of the wire before it can actually resume the transfer by appending to the target.

    Second of all, you said that you "heard that rsync is able to find differences between source and destination, and therefore to just copy the differences."

    That's correct, and it's called delta transfer, but it's a different thing. To enable this, you add the -c , or --checksum switch. Once this switch is used, rsync will examine files that exist on both ends of the wire. It does this in chunks, compares the checksums on both ends, and if they differ, it transfers just the differing parts of the file. But, as @Jonathan points out below, the comparison is only done when files are of the same size on both ends -- different sizes will cause rsync to upload the entire file, overwriting the target with the same name.

    This requires a bit of computation on both ends initially, but can be extremely efficient at reducing network load if, for example, you're frequently backing up very large fixed-size files that often contain minor changes. Examples that come to mind are virtual hard drive image files used in virtual machines or iSCSI targets.

    It is notable that if you use --checksum to transfer a batch of files that are completely new to the target system, rsync will still calculate their checksums on the source system before transferring them. Why I do not know :)

    So, in short:

    If you're often using rsync to just "move stuff from A to B" and want the option to cancel that operation and later resume it, don't use --checksum , but do use --append-verify .

    If you're using rsync to back up stuff often, using --append-verify probably won't do much for you, unless you're in the habit of sending large files that continuously grow in size but are rarely modified once written. As a bonus tip, if you're backing up to storage that supports snapshotting such as btrfs or zfs , adding the --inplace switch will help you reduce snapshot sizes since changed files aren't recreated but rather the changed blocks are written directly over the old ones. This switch is also useful if you want to avoid rsync creating copies of files on the target when only minor changes have occurred.

    When using --append-verify , rsync will behave just like it always does on all files that are the same size. If they differ in modification or other timestamps, it will overwrite the target with the source without scrutinizing those files further. --checksum will compare the contents (checksums) of every file pair of identical name and size.

    UPDATED 2015-09-01 Changed to reflect points made by @Alex (thanks!)

    UPDATED 2017-07-14 Changed to reflect points made by @Jonathan (thanks!)
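    To make the two modes described above concrete, here is a hedged sketch of both workflows; the paths are placeholders, and --append-verify assumes rsync 3.0.0 or later (check rsync --version first).

    # Resumable bulk copy: keep partial files and append to them on the next run.
    # If the transfer is interrupted, re-running the same command continues each
    # partially transferred file instead of starting it over.
    rsync -av --partial --append-verify /mnt/usb/big-files/ /home/user/big-files/

    # Delta transfer for large files that change in place (e.g. VM disk images):
    # compare checksums chunk by chunk and send only the differing blocks.
    rsync -av --checksum --inplace /vm/images/disk.img backup:/vm/images/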

    Alex , Aug 28, 2015 at 3:49

    According to the documentation --append does not check the data, but --append-verify does. Also, as @gaoithe points out in a comment below, the documentation claims --partial does resume from previous files. – Alex Aug 28 '15 at 3:49

    DanielSmedegaardBuus , Sep 1, 2015 at 13:29

    Thank you @Alex for the updates. Indeed, since 3.0.0, --append no longer compares the source to the target file before appending. Quite important, really! --partial does not itself resume a failed file transfer, but rather leaves it there for a subsequent --append(-verify) to append to it. My answer was clearly misrepresenting this fact; I'll update it to include these points! Thanks a lot :) – DanielSmedegaardBuus Sep 1 '15 at 13:29

    Cees Timmerman , Sep 15, 2015 at 17:21

    This says --partial is enough. – Cees Timmerman Sep 15 '15 at 17:21

    DanielSmedegaardBuus , May 10, 2016 at 19:31

    @CMCDragonkai Actually, check out Alexander's answer below about --partial-dir -- looks like it's the perfect bullet for this. I may have missed something entirely ;) – DanielSmedegaardBuus May 10 '16 at 19:31

    Jonathan Y. , Jun 14, 2017 at 5:48

    What's your level of confidence in the described behavior of --checksum ? According to the man it has more to do with deciding which files to flag for transfer than with delta-transfer (which, presumably, is rsync 's default behavior). – Jonathan Y. Jun 14 '17 at 5:48

    [Nov 08, 2018] How to remove all installed dependent packages while removing a package in centos 7?

    Aug 16, 2016 | unix.stackexchange.com

    ukll , Aug 16, 2016 at 15:26

    I am kinda new to Linux so this may be a dumb question. I searched both in stackoverflow and google but could not find any answer.

    I am using CentOS 7. I installed okular, which is a PDF viewer, with the command:

    sudo yum install okular
    

    As you can see in the picture below, it installed 37 dependent packages to install okular.

    But I wasn't satisfied with the features of the application and I decided to remove it. The problem is that if I remove it with the command:

    sudo yum autoremove okular
    

    It only removes four dependent packages.

    And if I remove it with the command:

    sudo yum remove okular
    

    It removes only one package which is okular.x86_64.

    Now, my question is that is there a way to remove all 37 installed packages with a command or do I have to remove all of them one by one?

    Thank you in advance.

    Jason Powell , Aug 16, 2016 at 17:25

    Personally, I don't like yum plugins because they don't work a lot of the time, in my experience.

    You can use the yum history command to view your yum history.

    [root@testbox ~]# yum history
    Loaded plugins: product-id, rhnplugin, search-disabled-repos, subscription-manager, verify, versionlock
    ID     | Login user               | Date and time    | Action(s)      | Altered
    ----------------------------------------------------------------------------------
    19 | Jason <jason>  | 2016-06-28 09:16 | Install        |   10
    

    You can find info about the transaction by doing yum history info <transaction id> . So:

    yum history info 19 would tell you all the packages that were installed with transaction 19 and the command line that was used to install the packages. If you want to undo transaction 19, you would run yum history undo 19 .

    Alternatively, if you just wanted to undo the last transaction you did (you installed a software package and didn't like it), you could just do yum history undo last

    Hope this helps!
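    Put together, the workflow described above looks like this; transaction ID 19 is just the example from the listing.

    yum history                 # list recent transactions and their IDs
    yum history info 19         # show the packages and command line for transaction 19
    yum history undo 19         # remove everything that transaction installed
    yum history undo last       # or simply undo the most recent transaction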

    ukll , Aug 16, 2016 at 18:34

    Firstly, thank you for your excellent answer. And secondly, when I did sudo yum history , it showed only actions with id 30 through 49. Is there a way to view all actions history (including with id 1-29)? – ukll Aug 16 '16 at 18:34

    Jason Powell , Aug 16, 2016 at 19:00

    You're welcome! Yes, there is a way to show all of your history. Just do yum history list all . – Jason Powell Aug 16 '16 at 19:00

    ,

    yum remove package_name will remove only that package itself (together with anything that requires it); the dependencies that were pulled in alongside it stay installed.

    yum autoremove will remove the unused dependencies

    To remove a package together with its dependencies, you need to install a yum plugin called remove-with-leaves

    To install it type:

    yum install yum-plugin-remove-with-leaves
    

    To remove package_name type:

    yum remove package_name --remove-leaves
    

    [Nov 08, 2018] collectl

    Nov 08, 2018 | collectl.sourceforge.net
    Collectl
    Latest Version: 4.2.0 June 12, 2017
    To use it download the tarball, unpack it and run ./INSTALL
    Collectl now supports OpenStack Clouds
    Colmux now part of collectl package
    Looking for colplot ? It's now here!

    Remember, to get lustre support contact Peter Piela to get his custom plugin.


    There are a number of times in which you find yourself needing performance data. These can include benchmarking, monitoring a system's general health or trying to determine what your system was doing at some time in the past. Sometimes you just want to know what the system is doing right now. Depending on what you're doing, you often end up using different tools, each designed for that specific situation.

    Unlike most monitoring tools, which focus on a small set of statistics, format their output in only one way, and run either interactively or as a daemon but not both, collectl tries to do it all. You can choose to monitor any of a broad set of subsystems which currently include buddyinfo, cpu, disk, inodes, infiniband, lustre, memory, network, nfs, processes, quadrics, slabs, sockets and tcp.

    The following is an example taken while writing a large file and running the collectl command with no arguments. By default it shows cpu, network and disk stats in brief format. The key point of this format is that all output appears on a single line, making it much easier to spot spikes or other anomalies in the output:

    collectl
    
    #<--------CPU--------><-----------Disks-----------><-----------Network---------->
    #cpu sys inter  ctxsw KBRead  Reads  KBWrit Writes netKBi pkt-in  netKBo pkt-out
      37  37   382    188      0      0   27144    254     45     68       3      21
      25  25   366    180     20      4   31280    296      0      1       0       0
      25  25   368    183      0      0   31720    275      2     20       0       1
    
    In this example, taken while writing to an NFS mounted filesystem, collectl displays interrupts, memory usage and nfs activity with timestamps. Keep in mind that you can mix and match any data and in the case of brief format you simply need to have a window wide enough to accommodate your output.
    collectl -sjmf -oT
    
    #         <-------Int--------><-----------Memory-----------><------NFS Totals------>
    #Time     Cpu0 Cpu1 Cpu2 Cpu3 Free Buff Cach Inac Slab  Map  Reads Writes Meta Comm
    08:36:52  1001   66    0    0   2G 201M 609M 363M 219M 106M      0      0    5    0
    08:36:53   999 1657    0    0   2G 201M   1G 918M 252M 106M      0  12622    0    2
    08:36:54  1001 7488    0    0   1G 201M   1G   1G 286M 106M      0  20147    0    2
    
    You can also display the same information in verbose format , in which case you get a single line for each type of data at the expense of more screen real estate, as can be seen in this example of network data during NFS writes. Note how you can actually see the network traffic stall while waiting for the server to physically write the data.
    collectl -sn --verbose -oT
    
    # NETWORK SUMMARY (/sec)
    #          KBIn  PktIn SizeIn  MultI   CmpI  ErrIn  KBOut PktOut  SizeO   CmpO ErrOut
    08:46:35   3255  41000     81      0      0      0 112015  78837   1454      0      0
    08:46:36      0      9     70      0      0      0     29     25   1174      0      0
    08:46:37      0      2     70      0      0      0      0      2    134      0      0
    
    In this last example we see what detail format looks like: multiple lines of output for a particular type of data, which in this case is interrupts. We've also elected to show the time in msecs.
    collectl -sJ -oTm
    
    #              Int    Cpu0   Cpu1   Cpu2   Cpu3   Type            Device(s)
    08:52:32.002   225       0      4      0      0   IO-APIC-level   ioc0
    08:52:32.002   000    1000      0      0      0   IO-APIC-edge    timer
    08:52:32.002   014       0      0     18      0   IO-APIC-edge    ide0
    08:52:32.002   090       0      0      0  15461   IO-APIC-level   eth1
    
    Collectl output can also be saved in a rolling set of logs for later playback or displayed interactively in a variety of formats. If all that isn't enough there are plugins that allow you to report data in alternate formats or even send them over a socket to remote tools such as ganglia or graphite. You can even create files in space-separated format for plotting with external packages like gnuplot. The one below was created with colplot, part of the collectl utilities project, which provides a web-based interface to gnuplot.
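    For example, recording to a file and playing it back later is just a matter of the -f and -p switches; the directory and the recorded file name shown here are illustrative (collectl names its raw files after the host and date).

    # Record cpu, disk and network stats to rolling logs under /var/log/collectl
    collectl -scdn -f /var/log/collectl

    # Later, replay a recorded file with whatever formatting options you like
    collectl -p /var/log/collectl/myhost-20181108-000000.raw.gz -scdn -oT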

    Are you a big user of the top command? Have you ever wanted to look across a cluster to see what the top processes are? Better yet, how about using iostat across a cluster? Or maybe vmstat or even looking at top network interfaces across a cluster? Look no more because if collectl reports it for one node, colmux can do it across a cluster AND you can sort by any column of your choice by simply using the right/left arrow keys.

    Collectl and Colmux run on all Linux distros and are available in Red Hat and Debian repositories, so getting them may be as simple as running yum or apt-get. Note that since colmux has just been merged into the collectl V4.0.0 package, it may not yet be available in the repository of your choice, and you should install collectl-utils V4.8.2 or earlier to get it for the time being.

    Collectl requires Perl, which is usually installed by default on all major Linux distros, and optionally uses Time::HiRes, which is also usually installed and allows collectl to use fractional intervals and display timestamps in msec. The Compress::Zlib module is usually installed as well, and if present the recorded data will be compressed and therefore use on average 90% less storage when recording to a file.

    If you're still not sure if collectl is right for you, take a couple of minutes to look at the Collectl Tutorial to get a better feel for what collectl can do. Also be sure to check back and see what's new on the website, sign up for a Mailing List or watch the Forums .

    "I absolutely love it and have been using it extensively for months."
    Kevin Closson: Performance Architect, EMC
    "Collectl is indispensable to any system admin."
    Matt Heaton: President, Bluehost.com

    [Nov 08, 2018] How to find which process is regularly writing to disk?

    Notable quotes:
    "... tick...tick...tick...trrrrrr ..."
    "... /var/log/syslog ..."
    Nov 08, 2018 | unix.stackexchange.com

    Cedric Martin , Jul 27, 2012 at 4:31

    How can I find which process is constantly writing to disk?

    I like my workstation to be close to silent and I just built a new system (P8B75-M + Core i5 3450s -- the 's' because it has a lower max TDP) with quiet fans etc. and installed Debian Wheezy 64-bit on it.

    And something is getting on my nerves: I can hear some kind of pattern, as if the hard disk was writing or seeking something ( tick...tick...tick...trrrrrr rinse and repeat every second or so).

    In the past (many, many years ago) I had a similar issue, and it turned out it was some CUPS log or something, and I simply redirected that one (unimportant) log to a (real) RAM disk.

    But here I'm not sure.

    I tried the following:

    ls -lR /var/log > /tmp/a.tmp && sleep 5 && ls -lR /var/log > /tmp/b.tmp && diff /tmp/?.tmp
    

    but nothing is changing there.

    Now the strange thing is that I also hear the pattern when the prompt asking me to enter my LVM decryption passphrase is showing.

    Could it be something in the kernel/system I just installed or do I have a faulty harddisk?

    hdparm -tT /dev/sda reports a correct HD speed (130 MB/s non-cached, SATA 6 Gb/s), and I've already installed and compiled from big sources (Emacs) without issue, so I don't think the system is bad.

    (HD is a Seagate Barracuda 500GB)

    Mat , Jul 27, 2012 at 6:03

    Are you sure it's a hard drive making that noise, and not something else? (Check the fans, including PSU fan. Had very strange clicking noises once when a very thin cable was too close to a fan and would sometimes very slightly touch the blades and bounce for a few "clicks"...) – Mat Jul 27 '12 at 6:03

    Cedric Martin , Jul 27, 2012 at 7:02

    @Mat: I'll take the hard drive outside of the case (the connectors should be long enough) to be sure and I'll report back ; ) – Cedric Martin Jul 27 '12 at 7:02

    camh , Jul 27, 2012 at 9:48

    Make sure your disk filesystems are mounted relatime or noatime. File reads can be causing writes to inodes to record the access time. – camh Jul 27 '12 at 9:48

    mnmnc , Jul 27, 2012 at 8:27

    Did you try examining what programs like iotop are showing? It will tell you exactly which process is currently writing to the disk.

    example output:

    Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
      TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
        1 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % init
        2 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kthreadd]
        3 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/0]
        6 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/0]
        7 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [watchdog/0]
        8 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/1]
     1033 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [flush-8:0]
       10 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/1]
    

    Cedric Martin , Aug 2, 2012 at 15:56

    thanks for that tip. I didn't know about iotop . On Debian I did an apt-cache search iotop to find out that I had to apt-get iotop . Very cool command! – Cedric Martin Aug 2 '12 at 15:56

    ndemou , Jun 20, 2016 at 15:32

    I use iotop -o -b -d 10 which every 10secs prints a list of processes that read/wrote to disk and the amount of IO bandwidth used. – ndemou Jun 20 '16 at 15:32

    scai , Jul 27, 2012 at 10:48

    You can enable IO debugging via echo 1 > /proc/sys/vm/block_dump and then watch the debugging messages in /var/log/syslog . This has the advantage of obtaining some type of log file with past activities whereas iotop only shows the current activity.
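    A cautious way to use this, taking the warning in the comments below into account, is to stop the syslog daemon first and read the messages from the kernel ring buffer instead; the following is a sketch assuming a Debian system with rsyslog and systemd.

    # Temporarily trace block I/O via the kernel log (run as root)
    systemctl stop rsyslog              # avoid the logging feedback loop noted below
    echo 1 > /proc/sys/vm/block_dump
    sleep 30                            # collect a short sample
    echo 0 > /proc/sys/vm/block_dump
    dmesg | grep -E 'WRITE|dirtied' | tail -n 50   # processes that wrote or dirtied inodes
    systemctl start rsyslog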

    dan3 , Jul 15, 2013 at 8:32

    It is absolutely crazy to leave sysloging enabled when block_dump is active. Logging causes disk activity, which causes logging, which causes disk activity etc. Better stop syslog before enabling this (and use dmesg to read the messages) – dan3 Jul 15 '13 at 8:32

    scai , Jul 16, 2013 at 6:32

    You are absolutely right, although the effect isn't as dramatic as you describe it. If you just want to have a short peek at the disk activity there is no need to stop the syslog daemon. – scai Jul 16 '13 at 6:32

    dan3 , Jul 16, 2013 at 7:22

    I've tried it about 2 years ago and it brought my machine to a halt. One of these days when I have nothing important running I'll try it again :) – dan3 Jul 16 '13 at 7:22

    scai , Jul 16, 2013 at 10:50

    I tried it, nothing really happened. Especially because of file system buffering. A write to syslog doesn't immediately trigger a write to disk. – scai Jul 16 '13 at 10:50

    Volker Siegel , Apr 16, 2014 at 22:57

    I would assume there is general rate limiting in place for the log messages, which handles this case too(?) – Volker Siegel Apr 16 '14 at 22:57

    Gilles , Jul 28, 2012 at 1:34

    Assuming that the disk noises are due to a process causing a write and not to some disk spindown problem , you can use the audit subsystem (install the auditd package ). Put a watch on the sync calls and its friends:
    auditctl -S sync -S fsync -S fdatasync -a exit,always
    

    Watch the logs in /var/log/audit/audit.log . Be careful not to do this if the audit logs themselves are flushed! Check in /etc/auditd.conf that the flush option is set to none .

    If files are being flushed often, a likely culprit is the system logs. For example, if you log failed incoming connection attempts and someone is probing your machine, that will generate a lot of entries; this can cause a disk to emit machine gun-style noises. With the basic log daemon sysklogd, check /etc/syslog.conf : if a log file name is not preceded by - , then that log is flushed to disk after each write.
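    Once the watch above has been running for a while, ausearch (from the same audit package) can summarize which executables triggered it; a rough one-liner, assuming the default audit log location.

    # Count audit events per executable for the fsync syscall (run as root)
    ausearch -sc fsync -i | grep -o 'exe=[^ ]*' | sort | uniq -c | sort -rn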

    Gilles , Mar 23 at 18:24

    @StephenKitt Huh. No. The asker mentioned Debian so I've changed it to a link to the Debian package. – Gilles Mar 23 at 18:24

    cas , Jul 27, 2012 at 9:40

    It might be your drives automatically spinning down, lots of consumer-grade drives do that these days. Unfortunately on even a lightly loaded system, this results in the drives constantly spinning down and then spinning up again, especially if you're running hddtemp or similar to monitor the drive temperature (most drives stupidly don't let you query the SMART temperature value without spinning up the drive - cretinous!).

    This is not only annoying, it can wear out the drives faster as many drives have only a limited number of park cycles. e.g. see https://bugs.launchpad.net/ubuntu/+source/hdparm/+bug/952556 for a description of the problem.

    I disable idle-spindown on all my drives with the following bit of shell code. you could put it in an /etc/rc.boot script, or in /etc/rc.local or similar.

    for disk in /dev/sd? ; do
      /sbin/hdparm -q -S 0 "$disk"
    done
    

    Cedric Martin , Aug 2, 2012 at 16:03

    that you can't query SMART readings without spinning up the drive leaves me speechless :-/ Now obviously the "spinning down" issue can become quite complicated. Regarding disabling the spinning down: wouldn't that in itself cause the HD to wear out faster? I mean: it's never ever "resting" as long as the system is on then? – Cedric Martin Aug 2 '12 at 16:03

    cas , Aug 2, 2012 at 21:42

    IIRC you can query some SMART values without causing the drive to spin up, but temperature isn't one of them on any of the drives i've tested (incl models from WD, Seagate, Samsung, Hitachi). Which is, of course, crazy because concern over temperature is one of the reasons for idling a drive. re: wear: AIUI 1. constant velocity is less wearing than changing speed. 2. the drives have to park the heads in a safe area and a drive is only rated to do that so many times (IIRC up to a few hundred thousand - easily exceeded if the drive is idling and spinning up every few seconds) – cas Aug 2 '12 at 21:42

    Micheal Johnson , Mar 12, 2016 at 20:48

    It's a long debate regarding whether it's better to leave drives running or to spin them down. Personally I believe it's best to leave them running - I turn my computer off at night and when I go out but other than that I never spin my drives down. Some people prefer to spin them down, say, at night if they're leaving the computer on or if the computer's idle for a long time, and in such cases the advantage of spinning them down for a few hours versus leaving them running is debatable. What's never good though is when the hard drive repeatedly spins down and up again in a short period of time. – Micheal Johnson Mar 12 '16 at 20:48

    Micheal Johnson , Mar 12, 2016 at 20:51

    Note also that spinning the drive down after it's been idle for a few hours is a bit silly, because if it's been idle for a few hours then it's likely to be used again within an hour. In that case, it would seem better to spin the drive down promptly if it's idle (like, within 10 minutes), but it's also possible for the drive to be idle for a few minutes when someone is using the computer and is likely to need the drive again soon. – Micheal Johnson Mar 12 '16 at 20:51

    ,

    I just found that s.m.a.r.t was causing an external USB disk to spin up again and again on my raspberry pi. Although SMART is generally a good thing, I decided to disable it again and since then it seems that unwanted disk activity has stopped

    [Nov 08, 2018] Determining what process is bound to a port

    Mar 14, 2011 | unix.stackexchange.com
    I know that using the command:
    lsof -i TCP

    (or some variant of parameters with lsof) I can determine which process is bound to a particular port. This is useful, say, if I'm trying to start something that wants to bind to 8080 and something else is already using that port, but I don't know what.

    Is there an easy way to do this without using lsof? I spend time working on many systems and lsof is often not installed.

    Cakemox , Mar 14, 2011 at 20:48

    netstat -lnp will list the pid and process name next to each listening port. This will work under Linux, but not all others (like AIX.) Add -t if you want TCP only.
    # netstat -lntp
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
    tcp        0      0 0.0.0.0:24800           0.0.0.0:*               LISTEN      27899/synergys
    tcp        0      0 0.0.0.0:8000            0.0.0.0:*               LISTEN      3361/python
    tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      2264/mysqld
    tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      22964/apache2
    tcp        0      0 192.168.99.1:53         0.0.0.0:*               LISTEN      3389/named
    tcp        0      0 192.168.88.1:53         0.0.0.0:*               LISTEN      3389/named
    

    etc.

    xxx , Mar 14, 2011 at 21:01

    Cool, thanks. Looks like that works under RHEL, but not under Solaris (as you indicated). Anybody know if there's something similar for Solaris? – user5721 Mar 14 '11 at 21:01

    Rich Homolka , Mar 15, 2011 at 19:56

    netstat -p above is my vote. also look at lsof . – Rich Homolka Mar 15 '11 at 19:56

    Jonathan , Aug 26, 2014 at 18:50

    As an aside, for windows it's similar: netstat -aon | more – Jonathan Aug 26 '14 at 18:50

    sudo , May 25, 2017 at 2:24

    What about for SCTP? – sudo May 25 '17 at 2:24

    frielp , Mar 15, 2011 at 13:33

    On AIX, netstat & rmsock can be used to determine process binding:
    [root@aix] netstat -Ana|grep LISTEN|grep 80
    f100070000280bb0 tcp4       0      0  *.37               *.*        LISTEN
    f1000700025de3b0 tcp        0      0  *.80               *.*        LISTEN
    f1000700002803b0 tcp4       0      0  *.111              *.*        LISTEN
    f1000700021b33b0 tcp4       0      0  127.0.0.1.32780    *.*        LISTEN
    
    # Port 80 maps to f1000700025de3b0 above, so we type:
    [root@aix] rmsock f1000700025de3b0 tcpcb
    The socket 0x25de008 is being held by process 499790 (java).
    

    Olivier Dulac , Sep 18, 2013 at 4:05

    Thanks for this! Is there a way, however, to just display what process listens on the socket (instead of using rmsock, which attempts to remove it)? – Olivier Dulac Sep 18 '13 at 4:05

    Vitor Py , Sep 26, 2013 at 14:18

    @OlivierDulac: "Unlike what its name implies, rmsock does not remove the socket, if it is being used by a process. It just reports the process holding the socket." ( ibm.com/developerworks/community/blogs/cgaix/entry/ ) – Vitor Py Sep 26 '13 at 14:18

    Olivier Dulac , Sep 26, 2013 at 16:00

    @vitor-braga: Ah thx! I thought it was trying but just said which process holds it when it couldn't remove it. Apparently it doesn't even try to remove it when a process holds it. That's cool! Thx! – Olivier Dulac Sep 26 '13 at 16:00

    frielp , Mar 15, 2011 at 13:27

    Another tool available on Linux is ss . From the ss man page on Fedora:
    NAME
           ss - another utility to investigate sockets
    SYNOPSIS
           ss [options] [ FILTER ]
    DESCRIPTION
           ss is used to dump socket statistics. It allows showing information 
           similar to netstat. It can display more TCP and state informations  
           than other tools.
    

    Example output below - the final column shows the process binding:

    [root@box] ss -ap
    State      Recv-Q Send-Q      Local Address:Port          Peer Address:Port
    LISTEN     0      128                    :::http                    :::*        users:(("httpd",20891,4),("httpd",20894,4),("httpd",20895,4),("httpd",20896,4)
    LISTEN     0      128             127.0.0.1:munin                    *:*        users:(("munin-node",1278,5))
    LISTEN     0      128                    :::ssh                     :::*        users:(("sshd",1175,4))
    LISTEN     0      128                     *:ssh                      *:*        users:(("sshd",1175,3))
    LISTEN     0      10              127.0.0.1:smtp                     *:*        users:(("sendmail",1199,4))
    LISTEN     0      128             127.0.0.1:x11-ssh-offset                  *:*        users:(("sshd",25734,8))
    LISTEN     0      128                   ::1:x11-ssh-offset                 :::*        users:(("sshd",25734,7))
    

    Eugen Constantin Dinca , Mar 14, 2011 at 23:47

    For Solaris you can use pfiles and then grep by sockname: or port: .

    A sample (from here ):

    pfiles `ptree | awk '{print $1}'` | egrep '^[0-9]|port:'
    

    rickumali , May 8, 2011 at 14:40

    I was once faced with trying to determine what process was behind a particular port (this time it was 8000). I tried a variety of lsof and netstat, but then took a chance and tried hitting the port via a browser (i.e. http://hostname:8000/ ). Lo and behold, a splash screen greeted me, and it became obvious what the process was (for the record, it was Splunk ).

    One more thought: "ps -e -o pid,args" (YMMV) may sometimes show the port number in the arguments list. Grep is your friend!
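    A couple of hedged one-liners along those lines, using port 8000 from the example above; fuser needs root to report processes owned by other users.

    # Grep the full argument list for the port number
    ps -e -o pid,args | grep -w 8000 | grep -v grep

    # On Linux, fuser can map a TCP port straight to its PID
    fuser -v 8000/tcp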

    Gilles , Oct 8, 2015 at 21:04

    In the same vein, you could telnet hostname 8000 and see if the server prints a banner. However, that's mostly useful when the server is running on a machine where you don't have shell access, and then finding the process ID isn't relevant. – Gilles May 8 '11 at 14:45

    [Nov 08, 2018] How to use parallel ssh (PSSH) for executing ssh in parallel on a number of Linux-Unix-BSD servers

    Looks like the -h option is slightly more convenient than the -w option.
    Notable quotes:
    "... Each line in the host file are of the form [user@]host[:port] and can include blank lines and comments lines beginning with "#". ..."
    Nov 08, 2018 | www.cyberciti.biz

    First you need to create a text file, the hosts file, from which pssh reads the host names. The syntax is pretty simple.

    Each line in the hosts file is of the form [user@]host[:port]; blank lines and comment lines beginning with "#" are allowed.

    Here is my sample file named ~/.pssh_hosts_files:
    $ cat ~/.pssh_hosts_files
    vivek@dellm6700
    root@192.168.2.30
    root@192.168.2.45
    root@192.168.2.46

    Run the date command all hosts:
    $ pssh -i -h ~/.pssh_hosts_files date
    Sample outputs:

    [1] 18:10:10 [SUCCESS] root@192.168.2.46
    Sun Feb 26 18:10:10 IST 2017
    [2] 18:10:10 [SUCCESS] vivek@dellm6700
    Sun Feb 26 18:10:10 IST 2017
    [3] 18:10:10 [SUCCESS] root@192.168.2.45
    Sun Feb 26 18:10:10 IST 2017
    [4] 18:10:10 [SUCCESS] root@192.168.2.30
    Sun Feb 26 18:10:10 IST 2017
    

    Run the uptime command on each host:
    $ pssh -i -h ~/.pssh_hosts_files uptime
    Sample outputs:

    [1] 18:11:15 [SUCCESS] root@192.168.2.45
     18:11:15 up  2:29,  0 users,  load average: 0.00, 0.00, 0.00
    [2] 18:11:15 [SUCCESS] vivek@dellm6700
     18:11:15 up 19:06,  0 users,  load average: 0.13, 0.25, 0.27
    [3] 18:11:15 [SUCCESS] root@192.168.2.46
     18:11:15 up  1:55,  0 users,  load average: 0.00, 0.00, 0.00
    [4] 18:11:15 [SUCCESS] root@192.168.2.30
     6:11PM  up 1 day, 21:38, 0 users, load averages: 0.12, 0.14, 0.09
    

    You can now automate common sysadmin tasks such as patching all servers:
    $ pssh -h ~/.pssh_hosts_files -- sudo yum -y update
    OR
    $ pssh -h ~/.pssh_hosts_files -- sudo apt-get -y update
    $ pssh -h ~/.pssh_hosts_files -- sudo apt-get -y upgrade
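
    When patching a whole fleet like this it can help to capture each host's output in its own file and to cap how long any single host may take. The -o, -e, -t and -p options below are standard parallel-ssh flags (output directory, error directory, per-host timeout, parallelism), but the directory names are only examples; treat this as a sketch and check your version's man page:

    $ mkdir -p /tmp/pssh-out /tmp/pssh-err
    $ pssh -h ~/.pssh_hosts_files -o /tmp/pssh-out -e /tmp/pssh-err -t 600 -p 10 -- sudo yum -y update
    ### each host's stdout/stderr ends up in /tmp/pssh-out/<host> and /tmp/pssh-err/<host>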

    How do I use pssh to copy file to all servers?

    The syntax is:
    pscp -h ~/.pssh_hosts_files src dest
    To copy $HOME/demo.txt to /tmp/ on all servers, enter:
    $ pscp -h ~/.pssh_hosts_files $HOME/demo.txt /tmp/
    Sample outputs:

    [1] 18:17:35 [SUCCESS] vivek@dellm6700
    [2] 18:17:35 [SUCCESS] root@192.168.2.45
    [3] 18:17:35 [SUCCESS] root@192.168.2.46
    [4] 18:17:35 [SUCCESS] root@192.168.2.30
    

    Or use the prsync command for efficient copying of files:
    $ prsync -h ~/.pssh_hosts_files /etc/passwd /tmp/
    $ prsync -h ~/.pssh_hosts_files *.html /var/www/html/

    How do I kill processes in parallel on a number of hosts?

    Use the pnuke command for killing processes in parallel on a number of hosts. The syntax is:
    $ pnuke -h ~/.pssh_hosts_files process_name
    ### kill nginx and firefox on hosts:
    $ pnuke -h ~/.pssh_hosts_files firefox
    $ pnuke -h ~/.pssh_hosts_files nginx

    See pssh/pscp command man pages for more information.

    [Nov 08, 2018] Parallel command execution with PDSH

    Notable quotes:
    "... (did I mention that Rittman Mead laptops are Macs, so I can do all of this straight from my work machine... :-) ) ..."
    "... open an editor and paste the following lines into it and save the file as /foo/bar ..."
    Nov 08, 2018 | www.rittmanmead.com

    In this series of blog posts I'm taking a look at a few very useful tools that can make your life as the sysadmin of a cluster of Linux machines easier. This may be a Hadoop cluster, or just a plain simple set of 'normal' machines on which you want to run the same commands and monitoring.

    Previously we looked at using SSH keys for intra-machine authorisation , which is a pre-requisite for what we'll look at here -- executing the same command across multiple machines using PDSH. In the next post of the series we'll see how we can monitor OS metrics across a cluster with colmux.

    PDSH is a very smart little tool that enables you to issue the same command on multiple hosts at once, and see the output. You need to have set up ssh key authentication from the client to host on all of them, so if you followed the steps in the first section of this article you'll be good to go.

    The syntax for using it is nice and simple: pdsh -w <host list> <command> , where the host list can include ranges such as node0[1-4].

    For example, running it against a small cluster of four machines that I have:

    robin@RNMMBP $ pdsh -w root@rnmcluster02-node0[1-4] date
    
    rnmcluster02-node01: Fri Nov 28 17:26:17 GMT 2014  
    rnmcluster02-node02: Fri Nov 28 17:26:18 GMT 2014  
    rnmcluster02-node03: Fri Nov 28 17:26:18 GMT 2014  
    rnmcluster02-node04: Fri Nov 28 17:26:18 GMT 2014
    

    PDSH can be installed on the Mac under Homebrew (did I mention that Rittman Mead laptops are Macs, so I can do all of this straight from my work machine... :-) )

    brew install pdsh
    

    And if you want to run it on Linux from the EPEL yum repository (RHEL-compatible, but packages for other distros are available):

    yum install pdsh
    

    You can run it from a cluster node, or from your client machine (assuming your client machine is mac/linux).
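
    If you find yourself typing the same -w list over and over, pdsh can also take its target list from a file: the ^file form of -w and the WCOLL environment variable (the default "working collective") are both standard pdsh features, though the details can vary between builds, so treat this as a sketch. The ~/cluster-hosts file name is just an example:

    robin@RNMMBP $ cat ~/cluster-hosts
    rnmcluster02-node01
    rnmcluster02-node02
    rnmcluster02-node03
    rnmcluster02-node04

    # read the host list from a file for this invocation...
    robin@RNMMBP $ pdsh -l root -w ^$HOME/cluster-hosts date

    # ...or export it once as the default target list
    robin@RNMMBP $ export WCOLL=$HOME/cluster-hosts
    robin@RNMMBP $ pdsh -l root date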

    Example - install and start collectl on all nodes

    I started looking into pdsh when it came to setting up a cluster of machines from scratch. One of the must-have tools I like to have on any machine that I work with is the excellent collectl . This is an OS resource monitoring tool that I initially learnt of through Kevin Closson and Greg Rahn , and provides the kind of information you'd get from top etc – and then some! It can run interactively, log to disk, run as a service – and it also happens to integrate very nicely with graphite , making it a no-brainer choice for any server.

    So, instead of logging into each box individually I could instead run this:

    pdsh -w root@rnmcluster02-node0[1-4] yum install -y collectl  
    pdsh -w root@rnmcluster02-node0[1-4] service collectl start  
    pdsh -w root@rnmcluster02-node0[1-4] chkconfig collectl on
    

    Yes, I know there are tools out there like puppet and chef that are designed for doing this kind of templated build of multiple servers, but the point I want to illustrate here is that pdsh enables you to do ad-hoc changes to a set of servers at once. Sure, once I have my cluster built and want to create an image/template for future builds, then it would be daft if I were building the whole lot through pdsh-distributed yum commands.

    Example - setting up the date/timezone/NTPD

    Often the accuracy of the clock on each server in a cluster is crucial, and we can easily do this with pdsh:

    Install packages

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] yum install -y ntp ntpdate
    

    Set the timezone:

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] ln -sf /usr/share/zoneinfo/Europe/London /etc/localtime
    

    Force a time refresh:

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] ntpdate pool.ntp.org  
    rnmcluster02-node03: 30 Nov 20:46:22 ntpdate[27610]: step time server 176.58.109.199 offset -2.928585 sec  
    rnmcluster02-node02: 30 Nov 20:46:22 ntpdate[28527]: step time server 176.58.109.199 offset -2.946021 sec  
    rnmcluster02-node04: 30 Nov 20:46:22 ntpdate[27615]: step time server 129.250.35.250 offset -2.915713 sec  
    rnmcluster02-node01: 30 Nov 20:46:25 ntpdate[29316]: 178.79.160.57 rate limit response from server.  
    rnmcluster02-node01: 30 Nov 20:46:22 ntpdate[29316]: step time server 176.58.109.199 offset -2.925016 sec
    

    Set NTPD to start automatically at boot:

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] chkconfig ntpd on
    

    Start NTPD:

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] service ntpd start
    
    Example - using a HEREDOC (here-document) and sending quotation marks in a command with PDSH

    Here documents (heredocs) are a nice way to embed multi-line content in a single command, enabling the scripting of a file creation rather than the clumsy instruction to " open an editor and paste the following lines into it and save the file as /foo/bar ".

    Fortunately heredocs work just fine with pdsh, so long as you remember to enclose the whole command in quotation marks. And speaking of which, if you need to include quotation marks in your actual command, you need to escape them with a backslash. Here's an example of both, setting up the configuration file for my ever-favourite gnu screen on all the nodes of the cluster:

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] "cat > ~/.screenrc <<EOF  
    hardstatus alwayslastline \"%{= RY}%H %{kG}%{G} Screen(s): %{c}%w %=%{kG}%c  %D, %M %d %Y  LD:%l\"  
    startup_message off  
    msgwait 1  
    defscrollback 100000  
    nethack on  
    EOF  
    "
    

    Now when I login to each individual node and run screen, I get a nice toolbar at the bottom:

    Combining commands

    To combine commands together that you send to each host you can use the standard bash operator semicolon ;

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] "date;sleep 5;date"  
    rnmcluster02-node01: Sun Nov 30 20:57:06 GMT 2014  
    rnmcluster02-node03: Sun Nov 30 20:57:06 GMT 2014  
    rnmcluster02-node04: Sun Nov 30 20:57:06 GMT 2014  
    rnmcluster02-node02: Sun Nov 30 20:57:06 GMT 2014  
    rnmcluster02-node01: Sun Nov 30 20:57:11 GMT 2014  
    rnmcluster02-node03: Sun Nov 30 20:57:11 GMT 2014  
    rnmcluster02-node04: Sun Nov 30 20:57:11 GMT 2014  
    rnmcluster02-node02: Sun Nov 30 20:57:11 GMT 2014
    

    Note the use of the quotation marks to enclose the entire command string. Without them the bash interpreter will treat the ; as the delimiter between local commands, and will try to run the subsequent commands locally:

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] date;sleep 5;date  
    rnmcluster02-node03: Sun Nov 30 20:57:53 GMT 2014  
    rnmcluster02-node04: Sun Nov 30 20:57:53 GMT 2014  
    rnmcluster02-node02: Sun Nov 30 20:57:53 GMT 2014  
    rnmcluster02-node01: Sun Nov 30 20:57:53 GMT 2014  
    Sun 30 Nov 2014 20:58:00 GMT
    

    You can also use && and || to run subsequent commands conditionally if the previous one succeeds or fails respectively:

    robin@RNMMBP $ pdsh -w root@rnmcluster02-node[01-4] "chkconfig collectl on && service collectl start"
    
    rnmcluster02-node03: Starting collectl: [  OK  ]  
    rnmcluster02-node02: Starting collectl: [  OK  ]  
    rnmcluster02-node04: Starting collectl: [  OK  ]  
    rnmcluster02-node01: Starting collectl: [  OK  ]
    
    Piping and file redirects

    Similar to combining commands above, you can pipe the output of commands, and you need to use quotation marks to enclose the whole command string.

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] "chkconfig|grep collectl"  
    rnmcluster02-node03: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off  
    rnmcluster02-node01: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off  
    rnmcluster02-node04: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off  
    rnmcluster02-node02: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off
    

    However, you can pipe the output from pdsh to a local process if you want:

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] chkconfig|grep collectl  
    rnmcluster02-node02: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off  
    rnmcluster02-node04: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off  
    rnmcluster02-node03: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off  
    rnmcluster02-node01: collectl           0:off   1:off   2:on    3:on    4:on    5:on    6:off
    

    The difference is that you'll be shifting the whole of the remote output across the network in order to process it locally, so if you're just grepping etc this doesn't make much sense. For utilities held locally and not on the remote server, though, it might.
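
    A related trick, offered as a sketch: pdsh output arrives interleaved and prefixed per host, and the dshbak utility (normally packaged alongside pdsh) regroups it, with -c collapsing hosts that produced identical output into a single block. Handy when you only care whether any node differs from the rest:

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] "chkconfig|grep collectl" | dshbak -c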

    File redirects work the same way – within quotation marks and the redirect will be to a file on the remote server, outside of them it'll be local:

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] "chkconfig>/tmp/pdsh.out"  
    robin@RNMMBP ~ $ ls -l /tmp/pdsh.out  
    ls: /tmp/pdsh.out: No such file or directory
    
    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] chkconfig>/tmp/pdsh.out  
    robin@RNMMBP ~ $ ls -l /tmp/pdsh.out  
    -rw-r--r--  1 robin  wheel  7608 30 Nov 19:23 /tmp/pdsh.out
    
    Cancelling PDSH operations

    As you can see from above, the precise syntax of pdsh calls can be hugely important. If you run a command and it appears 'stuck', or if you have that heartstopping realisation that the shutdown -h now you meant to run locally you ran across the cluster, you can press Ctrl-C once to see the status of your commands:

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] sleep 30  
    ^Cpdsh@RNMMBP: interrupt (one more within 1 sec to abort)
    pdsh@RNMMBP:  (^Z within 1 sec to cancel pending threads)  
    pdsh@RNMMBP: rnmcluster02-node01: command in progress  
    pdsh@RNMMBP: rnmcluster02-node02: command in progress  
    pdsh@RNMMBP: rnmcluster02-node03: command in progress  
    pdsh@RNMMBP: rnmcluster02-node04: command in progress
    

    and press it twice (or within a second of the first) to cancel:

    robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] sleep 30  
    ^Cpdsh@RNMMBP: interrupt (one more within 1 sec to abort)
    pdsh@RNMMBP:  (^Z within 1 sec to cancel pending threads)  
    pdsh@RNMMBP: rnmcluster02-node01: command in progress  
    pdsh@RNMMBP: rnmcluster02-node02: command in progress  
    pdsh@RNMMBP: rnmcluster02-node03: command in progress  
    pdsh@RNMMBP: rnmcluster02-node04: command in progress  
    ^Csending SIGTERM to ssh rnmcluster02-node01
    sending signal 15 to rnmcluster02-node01 [ssh] pid 26534  
    sending SIGTERM to ssh rnmcluster02-node02  
    sending signal 15 to rnmcluster02-node02 [ssh] pid 26535  
    sending SIGTERM to ssh rnmcluster02-node03  
    sending signal 15 to rnmcluster02-node03 [ssh] pid 26533  
    sending SIGTERM to ssh rnmcluster02-node04  
    sending signal 15 to rnmcluster02-node04 [ssh] pid 26532  
    pdsh@RNMMBP: interrupt, aborting.
    

    If you've got threads yet to run on the remote hosts, but want to keep running whatever has already started, you can use Ctrl-C, Ctrl-Z:

    robin@RNMMBP ~ $ pdsh -f 2 -w root@rnmcluster02-node[01-4] "sleep 5;date"  
    ^Cpdsh@RNMMBP: interrupt (one more within 1 sec to abort)
    pdsh@RNMMBP:  (^Z within 1 sec to cancel pending threads)  
    pdsh@RNMMBP: rnmcluster02-node01: command in progress  
    pdsh@RNMMBP: rnmcluster02-node02: command in progress  
    ^Zpdsh@RNMMBP: Canceled 2 pending threads.
    rnmcluster02-node01: Mon Dec  1 21:46:35 GMT 2014  
    rnmcluster02-node02: Mon Dec  1 21:46:35 GMT 2014
    

    NB the above example illustrates the use of the -f argument to limit how many threads are run against remote hosts at once. We can see the command is left running on the first two nodes and returns the date, whilst the Ctrl-C - Ctrl-Z stops it from being executed on the remaining nodes.

    PDSH_SSH_ARGS_APPEND

    By default, when you ssh to new host for the first time you'll be prompted to validate the remote host's SSH key fingerprint.

    The authenticity of host 'rnmcluster02-node02 (172.28.128.9)' can't be established.  
    RSA key fingerprint is 00:c0:75:a8:bc:30:cb:8e:b3:8e:e4:29:42:6a:27:1c.  
    Are you sure you want to continue connecting (yes/no)?
    

    This is one of those prompts that the majority of us just hit enter at and ignore; if that includes you then you will want to make sure that your PDSH call doesn't fall in a heap because you're connecting to a bunch of new servers all at once. PDSH is not an interactive tool, so if it requires input from the hosts it's connecting to it'll just fail. To avoid this SSH prompt, you can set the environment variable PDSH_SSH_ARGS_APPEND as follows:

    export PDSH_SSH_ARGS_APPEND="-q -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
    

    The -q makes failures less verbose, and the -o passes in a couple of options, StrictHostKeyChecking to disable the above check, and UserKnownHostsFile to stop SSH keeping a list of host IP/hostnames and corresponding SSH fingerprints (by pointing it at /dev/null ). You'll want this if you're working with VMs that are sharing a pool of IPs and get re-used, otherwise you get this scary failure:

    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@  
    @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!  
    Someone could be eavesdropping on you right now (man-in-the-middle attack)!  
    It is also possible that a host key has just been changed.  
    The fingerprint for the RSA key sent by the remote host is  
    00:c0:75:a8:bc:30:cb:8e:b3:8e:e4:29:42:6a:27:1c.  
    Please contact your system administrator.
    

    For both of the above options, make sure you're aware of the security implications that you're opening yourself up to. For a sandbox environment I just ignore them; for anything where security is of importance, make sure you know exactly which server you are connecting to over SSH, and protect yourself from MitM attacks.

    PDSH Reference

    You can find out more about PDSH at https://code.google.com/p/pdsh/wiki/UsingPDSH

    Summary

    When working with multiple Linux machines I would first and foremost make sure SSH keys are set up in order to ease management through password-less logins.

    After SSH keys, I would recommend pdsh for parallel execution of the same SSH command across the cluster. It's a big time saver particularly when initially setting up the cluster given the installation and configuration changes that are inevitably needed.

    In the next article of this series we'll see how the tool colmux is a powerful way to monitor OS metrics across a cluster.

    So now your turn – what particular tools or tips do you have for working with a cluster of Linux machines? Leave your answers in the comments below, or tweet them to me at @rmoff .

    [Nov 08, 2018] pexec utility is similar to parallel

    Nov 08, 2018 | www.gnu.org

    Welcome to the web page of the pexec program!

    The main purpose of the program pexec is to execute the given command or shell script (e.g. parsed by /bin/sh ) in parallel on the local host or on remote hosts, while some of the execution parameters, namely the redirected standard input, output or error and environmental variables, can be varied. This program is therefore capable of replacing the classic shell loop iterators (e.g. for ~ in ~ done , in bash ) by executing the body of the loop in parallel. Thus, the program pexec implements shell-level data parallelism in a very simple form. The capabilities of the program are extended with additional features, such as allowing you to define mutual exclusions, do atomic command executions and implement higher-level resource and job control. See the complete manual for more details. See a brief Hungarian description of the program here .
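
    To make the "shell loop iterator" comparison concrete, here is the kind of serial bash loop pexec is designed to replace, next to a naive hand-rolled parallel version using & and wait. This is plain bash rather than pexec syntax; pexec automates the same pattern while also handling redirection, environment variables and job limits for you:

    # classic serial loop
    for f in *.log; do
        gzip "$f"
    done

    # naive parallel version: background each job, then wait for all of them
    for f in *.log; do
        gzip "$f" &
    done
    wait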

    The current version of the program package is 1.0rc8 .

    You may browse the package directory here (for FTP access, see this directory ). See the GNU summary page of this project here . The latest version of the program source package is pexec-1.0rc8.tar.gz . Here is another mirror of the package directory.

    Please consider making donations to the author (via PayPal ) in order to help further development of the program or support the GNU project via the FSF .

    [Nov 08, 2018] How to split one string into multiple variables in bash shell? [duplicate]

    Nov 08, 2018 | stackoverflow.com
    This question already has an answer here:

    Rob I , May 9, 2012 at 19:22

    For your second question, see @mkb's comment to my answer below - that's definitely the way to go! – Rob I May 9 '12 at 19:22

    Dennis Williamson , Jul 4, 2012 at 16:14

    See my edited answer for one way to read individual characters into an array. – Dennis Williamson Jul 4 '12 at 16:14

    Nick Weedon , Dec 31, 2015 at 11:04

    Here is the same thing in a more concise form: var1=$(cut -f1 -d- <<<$STR) – Nick Weedon Dec 31 '15 at 11:04

    Rob I , May 9, 2012 at 17:00

    If your solution doesn't have to be general, i.e. only needs to work for strings like your example, you could do:
    var1=$(echo $STR | cut -f1 -d-)
    var2=$(echo $STR | cut -f2 -d-)
    

    I chose cut here because you could simply extend the code for a few more variables...

    crunchybutternut , May 9, 2012 at 17:40

    Can you look at my post again and see if you have a solution for the followup question? thanks! – crunchybutternut May 9 '12 at 17:40

    mkb , May 9, 2012 at 17:59

    You can use cut to cut characters too! cut -c1 for example. – mkb May 9 '12 at 17:59

    FSp , Nov 27, 2012 at 10:26

    Although this is very simple to read and write, is a very slow solution because forces you to read twice the same data ($STR) ... if you care of your script performace, the @anubhava solution is much better – FSp Nov 27 '12 at 10:26

    tripleee , Jan 25, 2016 at 6:47

    Apart from being an ugly last-resort solution, this has a bug: You should absolutely use double quotes in echo "$STR" unless you specifically want the shell to expand any wildcards in the string as a side effect. See also stackoverflow.com/questions/10067266/ – tripleee Jan 25 '16 at 6:47

    Rob I , Feb 10, 2016 at 13:57

    You're right about double quotes of course, though I did point out this solution wasn't general. However I think your assessment is a bit unfair - for some people this solution may be more readable (and hence extensible etc) than some others, and doesn't completely rely on arcane bash feature that wouldn't translate to other shells. I suspect that's why my solution, though less elegant, continues to get votes periodically... – Rob I Feb 10 '16 at 13:57

    Dennis Williamson , May 10, 2012 at 3:14

    read with IFS are perfect for this:
    $ IFS=- read var1 var2 <<< ABCDE-123456
    $ echo "$var1"
    ABCDE
    $ echo "$var2"
    123456
    

    Edit:

    Here is how you can read each individual character into array elements:

    $ read -a foo <<<"$(echo "ABCDE-123456" | sed 's/./& /g')"
    

    Dump the array:

    $ declare -p foo
    declare -a foo='([0]="A" [1]="B" [2]="C" [3]="D" [4]="E" [5]="-" [6]="1" [7]="2" [8]="3" [9]="4" [10]="5" [11]="6")'
    

    If there are spaces in the string:

    $ IFS=$'\v' read -a foo <<<"$(echo "ABCDE 123456" | sed 's/./&\v/g')"
    $ declare -p foo
    declare -a foo='([0]="A" [1]="B" [2]="C" [3]="D" [4]="E" [5]=" " [6]="1" [7]="2" [8]="3" [9]="4" [10]="5" [11]="6")'
    

    insecure , Apr 30, 2014 at 7:51

    Great, the elegant bash-only way, without unnecessary forks. – insecure Apr 30 '14 at 7:51

    Martin Serrano , Jan 11 at 4:34

    this solution also has the benefit that if delimiter is not present, the var2 will be empty – Martin Serrano Jan 11 at 4:34

    mkb , May 9, 2012 at 17:02

    If you know it's going to be just two fields, you can skip the extra subprocesses like this:
    var1=${STR%-*}
    var2=${STR#*-}
    

    What does this do? ${STR%-*} deletes the shortest substring of $STR that matches the pattern -* starting from the end of the string. ${STR#*-} does the same, but with the *- pattern and starting from the beginning of the string. They each have counterparts %% and ## which find the longest anchored pattern match. If anyone has a helpful mnemonic to remember which does which, let me know! I always have to try both to remember.
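
    A quick illustration of shortest versus longest match, using a string that contains the delimiter more than once (plain bash, nothing beyond what is described above):

    STR="a-b-c"
    echo "${STR%-*}"     # a-b   (%  : shortest suffix match removed)
    echo "${STR%%-*}"    # a     (%% : longest suffix match removed)
    echo "${STR#*-}"     # b-c   (#  : shortest prefix match removed)
    echo "${STR##*-}"    # c     (## : longest prefix match removed)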

    Jens , Jan 30, 2015 at 15:17

    Plus 1 For knowing your POSIX shell features, avoiding expensive forks and pipes, and the absence of bashisms. – Jens Jan 30 '15 at 15:17

    Steven Lu , May 1, 2015 at 20:19

    Dunno about "absence of bashisms" considering that this is already moderately cryptic .... if your delimiter is a newline instead of a hyphen, then it becomes even more cryptic. On the other hand, it works with newlines , so there's that. – Steven Lu May 1 '15 at 20:19

    mkb , Mar 9, 2016 at 17:30

    @KErlandsson: done – mkb Mar 9 '16 at 17:30

    mombip , Aug 9, 2016 at 15:58

    I've finally found documentation for it: Shell-Parameter-Expansion – mombip Aug 9 '16 at 15:58

    DS. , Jan 13, 2017 at 19:56

    Mnemonic: "#" is to the left of "%" on a standard keyboard, so "#" removes a prefix (on the left), and "%" removes a suffix (on the right). – DS. Jan 13 '17 at 19:56

    tripleee , May 9, 2012 at 17:57

    Sounds like a job for set with a custom IFS .
    IFS=-
    set $STR
    var1=$1
    var2=$2
    

    (You will want to do this in a function with a local IFS so you don't mess up other parts of your script where you require IFS to be what you expect.)
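
    A minimal sketch of that function-local approach (bash/ksh, since local is not POSIX); the function and variable names here are made up for illustration:

    split_pair() {
        local IFS=-      # only affects this function
        set -f           # word splitting also globs; disable that in case the input contains * or ?
        set -- $1
        set +f
        var1=$1
        var2=$2
    }
    split_pair "ABCDE-123456"
    echo "$var1"   # ABCDE
    echo "$var2"   # 123456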

    Rob I , May 9, 2012 at 19:20

    Nice - I knew about $IFS but hadn't seen how it could be used. – Rob I May 9 '12 at 19:20

    Sigg3.net , Jun 19, 2013 at 8:08

    I used triplee's example and it worked exactly as advertised! Just change last two lines to myvar1=`echo $1` && myvar2=`echo $2` if you need to store them throughout a script with several "thrown" variables. – Sigg3.net Jun 19 '13 at 8:08

    tripleee , Jun 19, 2013 at 13:25

    No, don't use a useless echo in backticks . – tripleee Jun 19 '13 at 13:25

    Daniel Andersson , Mar 27, 2015 at 6:46

    This is a really sweet solution if we need to write something that is not Bash specific. To handle IFS troubles, one can add OLDIFS=$IFS at the beginning before overwriting it, and then add IFS=$OLDIFS just after the set line. – Daniel Andersson Mar 27 '15 at 6:46

    tripleee , Mar 27, 2015 at 6:58

    FWIW the link above is broken. I was lazy and careless. The canonical location still works; iki.fi/era/unix/award.html#echo – tripleee Mar 27 '15 at 6:58

    anubhava , May 9, 2012 at 17:09

    Using bash regex capabilities:
    re="^([^-]+)-(.*)$"
    [[ "ABCDE-123456" =~ $re ]] && var1="${BASH_REMATCH[1]}" && var2="${BASH_REMATCH[2]}"
    echo $var1
    echo $var2
    

    OUTPUT

    ABCDE
    123456
    

    Cometsong , Oct 21, 2016 at 13:29

    Love pre-defining the re for later use(s)! – Cometsong Oct 21 '16 at 13:29

    Archibald , Nov 12, 2012 at 11:03

    string="ABCDE-123456"
    IFS=- # use "local IFS=-" inside the function
    set $string
    echo $1 # >>> ABCDE
    echo $2 # >>> 123456
    

    tripleee , Mar 27, 2015 at 7:02

    Hmmm, isn't this just a restatement of my answer ? – tripleee Mar 27 '15 at 7:02

    Archibald , Sep 18, 2015 at 12:36

    Actually yes. I just clarified it a bit. – Archibald Sep 18 '15 at 12:36

    [Nov 08, 2018] How to split a string in shell and get the last field

    Nov 08, 2018 | stackoverflow.com

    cd1 , Jul 1, 2010 at 23:29

    Suppose I have the string 1:2:3:4:5 and I want to get its last field ( 5 in this case). How do I do that using Bash? I tried cut , but I don't know how to specify the last field with -f .

    Stephen , Jul 2, 2010 at 0:05

    You can use string operators :
    $ foo=1:2:3:4:5
    $ echo ${foo##*:}
    5
    

    This trims everything from the front until a ':', greedily.

    ${foo  <-- from variable foo
      ##   <-- greedy front trim
      *    <-- matches anything
      :    <-- until the last ':'
     }
    

    eckes , Jan 23, 2013 at 15:23

    While this is working for the given problem, the answer of William below ( stackoverflow.com/a/3163857/520162 ) also returns 5 if the string is 1:2:3:4:5: (while using the string operators yields an empty result). This is especially handy when parsing paths that could contain (or not) a finishing / character. – eckes Jan 23 '13 at 15:23

    Dobz , Jun 25, 2014 at 11:44

    How would you then do the opposite of this? to echo out '1:2:3:4:'? – Dobz Jun 25 '14 at 11:44

    Mihai Danila , Jul 9, 2014 at 14:07

    And how does one keep the part before the last separator? Apparently by using ${foo%:*} . # - from beginning; % - from end. # , % - shortest match; ## , %% - longest match. – Mihai Danila Jul 9 '14 at 14:07

    Putnik , Feb 11, 2016 at 22:33

    If i want to get the last element from path, how should I use it? echo ${pwd##*/} does not work. – Putnik Feb 11 '16 at 22:33

    Stan Strum , Dec 17, 2017 at 4:22

    @Putnik that command sees pwd as a variable. Try dir=$(pwd); echo ${dir##*/} . Works for me! – Stan Strum Dec 17 '17 at 4:22

    a3nm , Feb 3, 2012 at 8:39

    Another way is to reverse before and after cut :
    $ echo ab:cd:ef | rev | cut -d: -f1 | rev
    ef
    

    This makes it very easy to get the last but one field, or any range of fields numbered from the end.

    Dannid , Jan 14, 2013 at 20:50

    This answer is nice because it uses 'cut', which the author is (presumably) already familiar. Plus, I like this answer because I am using 'cut' and had this exact question, hence finding this thread via search. – Dannid Jan 14 '13 at 20:50

    funroll , Aug 12, 2013 at 19:51

    Some cut-and-paste fodder for people using spaces as delimiters: echo "1 2 3 4" | rev | cut -d " " -f1 | rev – funroll Aug 12 '13 at 19:51

    EdgeCaseBerg , Sep 8, 2013 at 5:01

    the rev | cut -d: -f1 | rev is so clever! Thanks! Helped me a bunch (my use case was rev | cut -d ' ' -f 2- | rev ) – EdgeCaseBerg Sep 8 '13 at 5:01

    Anarcho-Chossid , Sep 16, 2015 at 15:54

    Wow. Beautiful and dark magic. – Anarcho-Chossid Sep 16 '15 at 15:54

    shearn89 , Aug 17, 2017 at 9:27

    I always forget about rev , was just what I needed! cut -b20- | rev | cut -b10- | rev – shearn89 Aug 17 '17 at 9:27

    William Pursell , Jul 2, 2010 at 7:09

    It's difficult to get the last field using cut, but here's (one set of) solutions in awk and perl
    $ echo 1:2:3:4:5 | awk -F: '{print $NF}'
    5
    $ echo 1:2:3:4:5 | perl -F: -wane 'print $F[-1]'
    5
    

    eckes , Jan 23, 2013 at 15:20

    great advantage of this solution over the accepted answer: it also matches paths that contain or do not contain a finishing / character: /a/b/c/d and /a/b/c/d/ yield the same result ( d ) when processing pwd | awk -F/ '{print $NF}' . The accepted answer results in an empty result in the case of /a/b/c/d/ – eckes Jan 23 '13 at 15:20

    stamster , May 21 at 11:52

    @eckes In case of the AWK solution, on GNU bash, version 4.3.48(1)-release, that's not true, as it matters whether you have a trailing slash or not. Simply put, AWK will use / as delimiter, and if your path is /my/path/dir/ it will use the value after the last delimiter, which is simply an empty string. So it's best to avoid the trailing slash if you need to do such a thing like I do. – stamster May 21 at 11:52

    Nicholas M T Elliott , Jul 1, 2010 at 23:39

    Assuming fairly simple usage (no escaping of the delimiter, for example), you can use grep:
    $ echo "1:2:3:4:5" | grep -oE "[^:]+$"
    5
    

    Breakdown - find all the characters not the delimiter ([^:]) at the end of the line ($). -o only prints the matching part.

    Dennis Williamson , Jul 2, 2010 at 0:05

    One way:
    var1="1:2:3:4:5"
    var2=${var1##*:}
    

    Another, using an array:

    var1="1:2:3:4:5"
    saveIFS=$IFS
    IFS=":"
    var2=($var1)
    IFS=$saveIFS
    var2=${var2[@]: -1}
    

    Yet another with an array:

    var1="1:2:3:4:5"
    saveIFS=$IFS
    IFS=":"
    var2=($var1)
    IFS=$saveIFS
    count=${#var2[@]}
    var2=${var2[$count-1]}
    

    Using Bash (version >= 3.2) regular expressions:

    var1="1:2:3:4:5"
    [[ $var1 =~ :([^:]*)$ ]]
    var2=${BASH_REMATCH[1]}
    

    liuyang1 , Mar 24, 2015 at 6:02

    Thanks so much for array style, as I need this feature, but not have cut, awk these utils. – liuyang1 Mar 24 '15 at 6:02

    user3133260 , Dec 24, 2013 at 19:04

    $ echo "a b c d e" | tr ' ' '\n' | tail -1
    e
    

    Simply translate the delimiter into a newline and choose the last entry with tail -1 .

    Yajo , Jul 30, 2014 at 10:13

    It will fail if the last item contains a \n , but for most cases is the most readable solution. – Yajo Jul 30 '14 at 10:13

    Rafael , Nov 10, 2016 at 10:09

    Using sed :
    $ echo '1:2:3:4:5' | sed 's/.*://' # => 5
    
    $ echo '' | sed 's/.*://' # => (empty)
    
    $ echo ':' | sed 's/.*://' # => (empty)
    $ echo ':b' | sed 's/.*://' # => b
    $ echo '::c' | sed 's/.*://' # => c
    
    $ echo 'a' | sed 's/.*://' # => a
    $ echo 'a:' | sed 's/.*://' # => (empty)
    $ echo 'a:b' | sed 's/.*://' # => b
    $ echo 'a::c' | sed 's/.*://' # => c
    

    Ab Irato , Nov 13, 2013 at 16:10

    If your last field is a single character, you could do this:
    a="1:2:3:4:5"
    
    echo ${a: -1}
    echo ${a:(-1)}
    

    Check string manipulation in bash .

    gniourf_gniourf , Nov 13, 2013 at 16:15

    This doesn't work: it gives the last character of a , not the last field . – gniourf_gniourf Nov 13 '13 at 16:15

    Ab Irato , Nov 25, 2013 at 13:25

    True, that's the idea, if you know the length of the last field it's good. If not you have to use something else... – Ab Irato Nov 25 '13 at 13:25

    sphakka , Jan 25, 2016 at 16:24

    Interesting, I didn't know of these particular Bash string manipulations. It also resembles to Python's string/array slicing . – sphakka Jan 25 '16 at 16:24

    ghostdog74 , Jul 2, 2010 at 1:16

    Using Bash.
    $ var1="1:2:3:4:0"
    $ IFS=":"
    $ set -- $var1
    $ eval echo  \$${#}
    0
    

    Sopalajo de Arrierez , Dec 24, 2014 at 5:04

    I would buy some details about this method, please :-) . – Sopalajo de Arrierez Dec 24 '14 at 5:04

    Rafa , Apr 27, 2017 at 22:10

    Could have used echo ${!#} instead of eval echo \$${#} . – Rafa Apr 27 '17 at 22:10
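
    In case the details help: with IFS set to ":", set -- $var1 splits the string into positional parameters, $# is then the number of fields, and both forms below fetch the last one. This is plain bash; the eval variant builds the command echo $5 as a string, while ${!#} uses indirect expansion to get the parameter whose name is held in $#:

    var1="1:2:3:4:0"
    IFS=":"
    set -- $var1        # positional parameters are now 1 2 3 4 0
    echo "$#"           # 5  (number of fields)
    eval echo \$${#}    # expands to: echo $5   -> 0
    echo "${!#}"        # indirect expansion of $5 -> 0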

    Crytis , Dec 7, 2016 at 6:51

    echo "a:b:c:d:e"|xargs -d : -n1|tail -1
    

    First use xargs to split it using ":"; -n1 means every line has only one part. Then print the last part.

    BDL , Dec 7, 2016 at 13:47

    Although this might solve the problem, one should always add an explanation to it. – BDL Dec 7 '16 at 13:47

    Crytis , Jun 7, 2017 at 9:13

    already added.. – Crytis Jun 7 '17 at 9:13

    021 , Apr 26, 2016 at 11:33

    There are many good answers here, but still I want to share this one using basename :
     basename $(echo "a:b:c:d:e" | tr ':' '/')
    

    However it will fail if there are already some '/' in your string . If slash / is your delimiter then you just have to (and should) use basename.

    It's not the best answer but it just shows how you can be creative using bash commands.

    Nahid Akbar , Jun 22, 2012 at 2:55

    for x in `echo $str | tr ";" "\n"`; do echo $x; done
    

    chepner , Jun 22, 2012 at 12:58

    This runs into problems if there is whitespace in any of the fields. Also, it does not directly address the question of retrieving the last field. – chepner Jun 22 '12 at 12:58

    Christoph Böddeker , Feb 19 at 15:50

    For those that comfortable with Python, https://github.com/Russell91/pythonpy is a nice choice to solve this problem.
    $ echo "a:b:c:d:e" | py -x 'x.split(":")[-1]'
    

    From the pythonpy help: -x treat each row of stdin as x .

    With that tool, it is easy to write python code that gets applied to the input.

    baz , Nov 24, 2017 at 19:27

    a solution using the read builtin
    IFS=':' read -a field <<< "1:2:3:4:5"
    echo ${field[4]}
    

    [Nov 08, 2018] How do I split a string on a delimiter in Bash?

    Notable quotes:
    "... Bash shell script split array ..."
    "... associative array ..."
    "... pattern substitution ..."
    "... Debian GNU/Linux ..."
    Nov 08, 2018 | stackoverflow.com

    stefanB , May 28, 2009 at 2:03

    I have this string stored in a variable:
    IN="bla@some.com;john@home.com"
    

    Now I would like to split the strings by ; delimiter so that I have:

    ADDR1="bla@some.com"
    ADDR2="john@home.com"
    

    I don't necessarily need the ADDR1 and ADDR2 variables. If they are elements of an array that's even better.


    After suggestions from the answers below, I ended up with the following which is what I was after:

    #!/usr/bin/env bash
    
    IN="bla@some.com;john@home.com"
    
    mails=$(echo $IN | tr ";" "\n")
    
    for addr in $mails
    do
        echo "> [$addr]"
    done
    

    Output:

    > [bla@some.com]
    > [john@home.com]
    

    There was a solution involving setting Internal_field_separator (IFS) to ; . I am not sure what happened with that answer, how do you reset IFS back to default?

    RE: IFS solution, I tried this and it works, I keep the old IFS and then restore it:

    IN="bla@some.com;john@home.com"
    
    OIFS=$IFS
    IFS=';'
    mails2=$IN
    for x in $mails2
    do
        echo "> [$x]"
    done
    
    IFS=$OIFS
    

    BTW, when I tried

    mails2=($IN)
    

    I only got the first string when printing it in loop, without brackets around $IN it works.

    Brooks Moses , May 1, 2012 at 1:26

    With regards to your "Edit2": You can simply "unset IFS" and it will return to the default state. There's no need to save and restore it explicitly unless you have some reason to expect that it's already been set to a non-default value. Moreover, if you're doing this inside a function (and, if you aren't, why not?), you can set IFS as a local variable and it will return to its previous value once you exit the function. – Brooks Moses May 1 '12 at 1:26

    dubiousjim , May 31, 2012 at 5:21

    @BrooksMoses: (a) +1 for using local IFS=... where possible; (b) -1 for unset IFS , this doesn't exactly reset IFS to its default value, though I believe an unset IFS behaves the same as the default value of IFS ($' \t\n'), however it seems bad practice to be assuming blindly that your code will never be invoked with IFS set to a custom value; (c) another idea is to invoke a subshell: (IFS=$custom; ...) when the subshell exits IFS will return to whatever it was originally. – dubiousjim May 31 '12 at 5:21

    nicooga , Mar 7, 2016 at 15:32

    I just want to have a quick look at the paths to decide where to throw an executable, so I resorted to run ruby -e "puts ENV.fetch('PATH').split(':')" . If you want to stay pure bash won't help but using any scripting language that has a built-in split is easier. – nicooga Mar 7 '16 at 15:32

    Jeff , Apr 22 at 17:51

    This is kind of a drive-by comment, but since the OP used email addresses as the example, has anyone bothered to answer it in a way that is fully RFC 5322 compliant, namely that any quoted string can appear before the @ which means you're going to need regular expressions or some other kind of parser instead of naive use of IFS or other simplistic splitter functions. – Jeff Apr 22 at 17:51

    user2037659 , Apr 26 at 20:15

    for x in $(IFS=';';echo $IN); do echo "> [$x]"; done – user2037659 Apr 26 at 20:15

    Johannes Schaub - litb , May 28, 2009 at 2:23

    You can set the internal field separator (IFS) variable, and then let it parse into an array. When this happens in a command, then the assignment to IFS only takes place to that single command's environment (to read ). It then parses the input according to the IFS variable value into an array, which we can then iterate over.
    IFS=';' read -ra ADDR <<< "$IN"
    for i in "${ADDR[@]}"; do
        # process "$i"
    done
    

    It will parse one line of items separated by ; , pushing it into an array. Stuff for processing whole of $IN , each time one line of input separated by ; :

     while IFS=';' read -ra ADDR; do
          for i in "${ADDR[@]}"; do
              # process "$i"
          done
     done <<< "$IN"
    

    Chris Lutz , May 28, 2009 at 2:25

    This is probably the best way. How long will IFS persist in it's current value, can it mess up my code by being set when it shouldn't be, and how can I reset it when I'm done with it? – Chris Lutz May 28 '09 at 2:25

    Johannes Schaub - litb , May 28, 2009 at 3:04

    now after the fix applied, only within the duration of the read command :) – Johannes Schaub - litb May 28 '09 at 3:04

    lhunath , May 28, 2009 at 6:14

    You can read everything at once without using a while loop: read -r -d '' -a addr <<< "$in" # The -d '' is key here, it tells read not to stop at the first newline (which is the default -d) but to continue until EOF or a NULL byte (which only occur in binary data). – lhunath May 28 '09 at 6:14

    Charles Duffy , Jul 6, 2013 at 14:39

    @LucaBorrione Setting IFS on the same line as the read with no semicolon or other separator, as opposed to in a separate command, scopes it to that command -- so it's always "restored"; you don't need to do anything manually. – Charles Duffy Jul 6 '13 at 14:39

    chepner , Oct 2, 2014 at 3:50

    @imagineerThis There is a bug involving herestrings and local changes to IFS that requires $IN to be quoted. The bug is fixed in bash 4.3. – chepner Oct 2 '14 at 3:50

    palindrom , Mar 10, 2011 at 9:00

    Taken from Bash shell script split array :
    IN="bla@some.com;john@home.com"
    arrIN=(${IN//;/ })
    

    Explanation:

    This construction replaces all occurrences of ';' (the initial // means global replace) in the string IN with ' ' (a single space), then interprets the space-delimited string as an array (that's what the surrounding parentheses do).

    The syntax used inside of the curly braces to replace each ';' character with a ' ' character is called Parameter Expansion .

    There are some common gotchas:

    1. If the original string has spaces, you will need to use IFS :
      • IFS=':'; arrIN=($IN); unset IFS;
    2. If the original string has spaces and the delimiter is a new line, you can set IFS with:
      • IFS=$'\n'; arrIN=($IN); unset IFS;

    Oz123 , Mar 21, 2011 at 18:50

    I just want to add: this is the simplest of all, you can access array elements with ${arrIN[1]} (starting from zeros of course) – Oz123 Mar 21 '11 at 18:50

    KomodoDave , Jan 5, 2012 at 15:13

    Found it: the technique of modifying a variable within a ${} is known as 'parameter expansion'. – KomodoDave Jan 5 '12 at 15:13

    qbolec , Feb 25, 2013 at 9:12

    Does it work when the original string contains spaces? – qbolec Feb 25 '13 at 9:12

    Ethan , Apr 12, 2013 at 22:47

    No, I don't think this works when there are also spaces present... it's converting the ',' to ' ' and then building a space-separated array. – Ethan Apr 12 '13 at 22:47

    Charles Duffy , Jul 6, 2013 at 14:39

    This is a bad approach for other reasons: For instance, if your string contains ;*; , then the * will be expanded to a list of filenames in the current directory. -1 – Charles Duffy Jul 6 '13 at 14:39

    Chris Lutz , May 28, 2009 at 2:09

    If you don't mind processing them immediately, I like to do this:
    for i in $(echo $IN | tr ";" "\n")
    do
      # process
    done
    

    You could use this kind of loop to initialize an array, but there's probably an easier way to do it. Hope this helps, though.

    Chris Lutz , May 28, 2009 at 2:42

    You should have kept the IFS answer. It taught me something I didn't know, and it definitely made an array, whereas this just makes a cheap substitute. – Chris Lutz May 28 '09 at 2:42

    Johannes Schaub - litb , May 28, 2009 at 2:59

    I see. Yeah i find doing these silly experiments, i'm going to learn new things each time i'm trying to answer things. I've edited stuff based on #bash IRC feedback and undeleted :) – Johannes Schaub - litb May 28 '09 at 2:59

    lhunath , May 28, 2009 at 6:12

    -1, you're obviously not aware of wordsplitting, because it's introducing two bugs in your code. one is when you don't quote $IN and the other is when you pretend a newline is the only delimiter used in wordsplitting. You are iterating over every WORD in IN, not every line, and DEFINATELY not every element delimited by a semicolon, though it may appear to have the side-effect of looking like it works. – lhunath May 28 '09 at 6:12

    Johannes Schaub - litb , May 28, 2009 at 17:00

    You could change it to echo "$IN" | tr ';' '\n' | while read -r ADDY; do # process "$ADDY"; done to make him lucky, i think :) Note that this will fork, and you can't change outer variables from within the loop (that's why i used the <<< "$IN" syntax) then – Johannes Schaub - litb May 28 '09 at 17:00

    mklement0 , Apr 24, 2013 at 14:13

    To summarize the debate in the comments: Caveats for general use : the shell applies word splitting and expansions to the string, which may be undesired; just try it with. IN="bla@some.com;john@home.com;*;broken apart" . In short: this approach will break, if your tokens contain embedded spaces and/or chars. such as * that happen to make a token match filenames in the current folder. – mklement0 Apr 24 '13 at 14:13

    F. Hauri , Apr 13, 2013 at 14:20

    Compatible answer

    To this SO question, there are already a lot of different ways to do this in bash . But bash has many special features, so-called bashisms , that work well but won't work in any other shell .

    In particular, arrays , associative array , and pattern substitution are pure bashisms and may not work under other shells .

    On my Debian GNU/Linux , there is a standard shell called dash , but I know many people who like to use ksh .

    Finally, in very small environments, there is a special tool called busybox with its own shell interpreter ( ash ).

    Requested string

    The string sample in SO question is:

    IN="bla@some.com;john@home.com"
    

    As this could be useful with whitespaces and as whitespaces could modify the result of the routine, I prefer to use this sample string:

     IN="bla@some.com;john@home.com;Full Name <fulnam@other.org>"
    
    Split string based on delimiter in bash (version >=4.2)

    Under pure bash, we may use arrays and IFS :

    var="bla@some.com;john@home.com;Full Name <fulnam@other.org>"
    
    oIFS="$IFS"
    IFS=";"
    declare -a fields=($var)
    IFS="$oIFS"
    unset oIFS
    
    
    IFS=\; read -a fields <<<"$var"
    

    Using this syntax under recent bash doesn't change $IFS for the current session, but only for the current command:

    set | grep ^IFS=
    IFS=$' \t\n'
    

    Now the string var is split and stored into an array (named fields ):

    set | grep ^fields=\\\|^var=
    fields=([0]="bla@some.com" [1]="john@home.com" [2]="Full Name <fulnam@other.org>")
    var='bla@some.com;john@home.com;Full Name <fulnam@other.org>'
    

    We could request for variable content with declare -p :

    declare -p var fields
    declare -- var="bla@some.com;john@home.com;Full Name <fulnam@other.org>"
    declare -a fields=([0]="bla@some.com" [1]="john@home.com" [2]="Full Name <fulnam@other.org>")
    

    read is the quickest way to do the split, because there are no forks and no external resources called.

    From there, you could use the syntax you already know for processing each field:

    for x in "${fields[@]}";do
        echo "> [$x]"
        done
    > [bla@some.com]
    > [john@home.com]
    > [Full Name <fulnam@other.org>]
    

    or drop each field after processing (I like this shifting approach):

    while [ "$fields" ] ;do
        echo "> [$fields]"
        fields=("${fields[@]:1}")
        done
    > [bla@some.com]
    > [john@home.com]
    > [Full Name <fulnam@other.org>]
    

    or even for simple printout (shorter syntax):

    printf "> [%s]\n" "${fields[@]}"
    > [bla@some.com]
    > [john@home.com]
    > [Full Name <fulnam@other.org>]
    
    Split string based on delimiter in shell

    But if you want to write something usable under many shells, you have to avoid bashisms .

    There is a syntax, used in many shells, for splitting a string across first or last occurrence of a substring:

    ${var#*SubStr}  # will drop begin of string up to first occur of `SubStr`
    ${var##*SubStr} # will drop begin of string up to last occur of `SubStr`
    ${var%SubStr*}  # will drop part of string from last occur of `SubStr` to the end
    ${var%%SubStr*} # will drop part of string from first occur of `SubStr` to the end
    

    (The absence of this is the main reason for publishing my answer ;)

    As pointed out by Score_Under :

    # and % delete the shortest possible matching string, and

    ## and %% delete the longest possible.

    This little sample script work well under bash , dash , ksh , busybox and was tested under Mac-OS's bash too:

    var="bla@some.com;john@home.com;Full Name <fulnam@other.org>"
    while [ "$var" ] ;do
        iter=${var%%;*}
        echo "> [$iter]"
        [ "$var" = "$iter" ] && \
            var='' || \
            var="${var#*;}"
      done
    > [bla@some.com]
    > [john@home.com]
    > [Full Name <fulnam@other.org>]
    

    Have fun!

    Score_Under , Apr 28, 2015 at 16:58

    The # , ## , % , and %% substitutions have what is IMO an easier explanation to remember (for how much they delete): # and % delete the shortest possible matching string, and ## and %% delete the longest possible. – Score_Under Apr 28 '15 at 16:58

    sorontar , Oct 26, 2016 at 4:36

    The IFS=\; read -a fields <<<"$var" fails on newlines and add a trailing newline. The other solution removes a trailing empty field. – sorontar Oct 26 '16 at 4:36

    Eric Chen , Aug 30, 2017 at 17:50

    The shell delimiter is the most elegant answer, period. – Eric Chen Aug 30 '17 at 17:50

    sancho.s , Oct 4 at 3:42

    Could the last alternative be used with a list of field separators set somewhere else? For instance, I mean to use this as a shell script, and pass a list of field separators as a positional parameter. – sancho.s Oct 4 at 3:42

    F. Hauri , Oct 4 at 7:47

    Yes, in a loop: for sep in "#" "ł" "@" ; do ... var="${var#*$sep}" ... – F. Hauri Oct 4 at 7:47

    DougW , Apr 27, 2015 at 18:20

    I've seen a couple of answers referencing the cut command, but they've all been deleted. It's a little odd that nobody has elaborated on that, because I think it's one of the more useful commands for doing this type of thing, especially for parsing delimited log files.

    In the case of splitting this specific example into a bash script array, tr is probably more efficient, but cut can be used, and is more effective if you want to pull specific fields from the middle.

    Example:

    $ echo "bla@some.com;john@home.com" | cut -d ";" -f 1
    bla@some.com
    $ echo "bla@some.com;john@home.com" | cut -d ";" -f 2
    john@home.com
    

    You can obviously put that into a loop, and iterate the -f parameter to pull each field independently.
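
    One hedged way to write that loop, counting the fields first with awk (this assumes a single-character delimiter, which is all cut supports anyway):

    IN="bla@some.com;john@home.com"
    n=$(echo "$IN" | awk -F';' '{print NF}')
    for ((i = 1; i <= n; i++)); do
        field=$(echo "$IN" | cut -d ';' -f "$i")
        echo "field $i: $field"
    done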

    This gets more useful when you have a delimited log file with rows like this:

    2015-04-27|12345|some action|an attribute|meta data
    

    cut is very handy to be able to cat this file and select a particular field for further processing.

    MisterMiyagi , Nov 2, 2016 at 8:42

    Kudos for using cut , it's the right tool for the job! Much cleared than any of those shell hacks. – MisterMiyagi Nov 2 '16 at 8:42

    uli42 , Sep 14, 2017 at 8:30

    This approach will only work if you know the number of elements in advance; you'd need to program some more logic around it. It also runs an external tool for every element. – uli42 Sep 14 '17 at 8:30

    Louis Loudog Trottier , May 10 at 4:20

    Exactly what I was looking for, trying to avoid empty strings in a csv. Now I can point at the exact 'column' value as well. Works with IFS already used in a loop. Better than expected for my situation. – Louis Loudog Trottier May 10 at 4:20

    , May 28, 2009 at 10:31

    How about this approach:
    IN="bla@some.com;john@home.com" 
    set -- "$IN" 
    IFS=";"; declare -a Array=($*) 
    echo "${Array[@]}" 
    echo "${Array[0]}" 
    echo "${Array[1]}"
    

    Source

    Yzmir Ramirez , Sep 5, 2011 at 1:06

    +1 ... but I wouldn't name the variable "Array" ... pet peev I guess. Good solution. – Yzmir Ramirez Sep 5 '11 at 1:06

    ata , Nov 3, 2011 at 22:33

    +1 ... but the "set" and declare -a are unnecessary. You could as well have used just IFS";" && Array=($IN)ata Nov 3 '11 at 22:33

    Luca Borrione , Sep 3, 2012 at 9:26

    +1 Only a side note: shouldn't it be recommendable to keep the old IFS and then restore it? (as shown by stefanB in his edit3) people landing here (sometimes just copying and pasting a solution) might not think about this – Luca Borrione Sep 3 '12 at 9:26

    Charles Duffy , Jul 6, 2013 at 14:44

    -1: First, @ata is right that most of the commands in this do nothing. Second, it uses word-splitting to form the array, and doesn't do anything to inhibit glob-expansion when doing so (so if you have glob characters in any of the array elements, those elements are replaced with matching filenames). – Charles Duffy Jul 6 '13 at 14:44

    John_West , Jan 8, 2016 at 12:29

    Suggest to use $'...' : IN=$'bla@some.com;john@home.com;bet <d@\ns* kl.com>' . Then echo "${Array[2]}" will print a string with newline. set -- "$IN" is also neccessary in this case. Yes, to prevent glob expansion, the solution should include set -f . – John_West Jan 8 '16 at 12:29

    Steven Lizarazo , Aug 11, 2016 at 20:45

    This worked for me:
    string="1;2"
    echo $string | cut -d';' -f1 # output is 1
    echo $string | cut -d';' -f2 # output is 2
    

    Pardeep Sharma , Oct 10, 2017 at 7:29

    this is short and sweet :) – Pardeep Sharma Oct 10 '17 at 7:29

    space earth , Oct 17, 2017 at 7:23

    Thanks...Helped a lot – space earth Oct 17 '17 at 7:23

    mojjj , Jan 8 at 8:57

    cut works only with a single char as delimiter. – mojjj Jan 8 at 8:57

    lothar , May 28, 2009 at 2:12

    echo "bla@some.com;john@home.com" | sed -e 's/;/\n/g'
    bla@some.com
    john@home.com
    

    Luca Borrione , Sep 3, 2012 at 10:08

    -1 what if the string contains spaces? for example IN="this is first line; this is second line" arrIN=( $( echo "$IN" | sed -e 's/;/\n/g' ) ) will produce an array of 8 elements in this case (an element for each word space separated), rather than 2 (an element for each line semi colon separated) – Luca Borrione Sep 3 '12 at 10:08

    lothar , Sep 3, 2012 at 17:33

    @Luca No the sed script creates exactly two lines. What creates the multiple entries for you is when you put it into a bash array (which splits on white space by default) – lothar Sep 3 '12 at 17:33

    Luca Borrione , Sep 4, 2012 at 7:09

    That's exactly the point: the OP needs to store entries into an array to loop over it, as you can see in his edits. I think your (good) answer missed to mention to use arrIN=( $( echo "$IN" | sed -e 's/;/\n/g' ) ) to achieve that, and to advice to change IFS to IFS=$'\n' for those who land here in the future and needs to split a string containing spaces. (and to restore it back afterwards). :) – Luca Borrione Sep 4 '12 at 7:09

    lothar , Sep 4, 2012 at 16:55

    @Luca Good point. However the array assignment was not in the initial question when I wrote up that answer. – lothar Sep 4 '12 at 16:55

    Ashok , Sep 8, 2012 at 5:01

    This also works:
    IN="bla@some.com;john@home.com"
    echo ADD1=`echo $IN | cut -d \; -f 1`
    echo ADD2=`echo $IN | cut -d \; -f 2`
    

    Be careful, this solution is not always correct. In case you pass "bla@some.com" only, it will assign it to both ADD1 and ADD2.

    fersarr , Mar 3, 2016 at 17:17

    You can use -s to avoid the mentioned problem: superuser.com/questions/896800/ "-f, --fields=LIST select only these fields; also print any line that contains no delimiter character, unless the -s option is specified" – fersarr Mar 3 '16 at 17:17

    Tony , Jan 14, 2013 at 6:33

    I think AWK is the best and most efficient command to resolve your problem. AWK is installed by default in almost every Linux distribution.
    echo "bla@some.com;john@home.com" | awk -F';' '{print $1,$2}'
    

    will give

    bla@some.com john@home.com
    

    Of course your can store each email address by redefining the awk print field.

    Jaro , Jan 7, 2014 at 21:30

    Or even simpler: echo "bla@some.com;john@home.com" | awk 'BEGIN{RS=";"} {print}' – Jaro Jan 7 '14 at 21:30

    Aquarelle , May 6, 2014 at 21:58

    @Jaro This worked perfectly for me when I had a string with commas and needed to reformat it into lines. Thanks. – Aquarelle May 6 '14 at 21:58

    Eduardo Lucio , Aug 5, 2015 at 12:59

    It worked in this scenario -> "echo "$SPLIT_0" | awk -F' inode=' '{print $1}'"! I had problems when trying to use strings (" inode=") instead of characters (";"). $1, $2, $3, $4 are set as positions in an array! If there is a way of setting an array... better! Thanks! – Eduardo Lucio Aug 5 '15 at 12:59

    Tony , Aug 6, 2015 at 2:42

    @EduardoLucio, what I'm thinking about is that maybe you can first replace your delimiter inode= with ; , for example by sed -i 's/inode\=/\;/g' your_file_to_process , then define -F';' when applying awk ; hope that can help you. – Tony Aug 6 '15 at 2:42

    nickjb , Jul 5, 2011 at 13:41

    A different take on Darron's answer , this is how I do it:
    IN="bla@some.com;john@home.com"
    read ADDR1 ADDR2 <<<$(IFS=";"; echo $IN)
    

    ColinM , Sep 10, 2011 at 0:31

    This doesn't work. – ColinM Sep 10 '11 at 0:31

    nickjb , Oct 6, 2011 at 15:33

    I think it does! Run the commands above and then "echo $ADDR1 ... $ADDR2" and i get "bla@some.com ... john@home.com" output – nickjb Oct 6 '11 at 15:33

    Nick , Oct 28, 2011 at 14:36

    This worked REALLY well for me... I used it to iterate over an array of strings which contained comma-separated DB,SERVER,PORT data to use mysqldump. – Nick Oct 28 '11 at 14:36

    dubiousjim , May 31, 2012 at 5:28

    Diagnosis: the IFS=";" assignment exists only in the $(...; echo $IN) subshell; this is why some readers (including me) initially think it won't work. I assumed that all of $IN was getting slurped up by ADDR1. But nickjb is correct; it does work. The reason is that echo $IN command parses its arguments using the current value of $IFS, but then echoes them to stdout using a space delimiter, regardless of the setting of $IFS. So the net effect is as though one had called read ADDR1 ADDR2 <<< "bla@some.com john@home.com" (note the input is space-separated not ;-separated). – dubiousjim May 31 '12 at 5:28

    sorontar , Oct 26, 2016 at 4:43

    This fails on spaces and newlines, and also expand wildcards * in the echo $IN with an unquoted variable expansion. – sorontar Oct 26 '16 at 4:43

    gniourf_gniourf , Jun 26, 2014 at 9:11

    In Bash, a bullet proof way, that will work even if your variable contains newlines:
    IFS=';' read -d '' -ra array < <(printf '%s;\0' "$in")
    

    Look:

    $ in=$'one;two three;*;there is\na newline\nin this field'
    $ IFS=';' read -d '' -ra array < <(printf '%s;\0' "$in")
    $ declare -p array
    declare -a array='([0]="one" [1]="two three" [2]="*" [3]="there is
    a newline
    in this field")'
    

    The trick for this to work is to use the -d option of read (delimiter) with an empty delimiter, so that read is forced to read everything it's fed. And we feed read with exactly the content of the variable in , with no trailing newline thanks to printf . Note that we're also putting the delimiter in printf to ensure that the string passed to read has a trailing delimiter. Without it, read would trim potential trailing empty fields:

    $ in='one;two;three;'    # there's an empty field
    $ IFS=';' read -d '' -ra array < <(printf '%s;\0' "$in")
    $ declare -p array
    declare -a array='([0]="one" [1]="two" [2]="three" [3]="")'
    

    the trailing empty field is preserved.


    Update for Bash≥4.4

    Since Bash 4.4, the builtin mapfile (aka readarray ) supports the -d option to specify a delimiter. Hence another canonical way is:

    mapfile -d ';' -t array < <(printf '%s;' "$in")
    
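
    For completeness, a quick check of the mapfile variant against the same trailing-empty-field input (assuming Bash ≥ 4.4; the exact declare -p formatting can vary slightly between Bash versions):

    $ in='one;two;three;'
    $ mapfile -d ';' -t array < <(printf '%s;' "$in")
    $ declare -p array
    declare -a array=([0]="one" [1]="two" [2]="three" [3]="")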

    John_West , Jan 8, 2016 at 12:10

    I found it as the rare solution on that list that works correctly with \n , spaces and * simultaneously. Also, no loops; array variable is accessible in the shell after execution (contrary to the highest upvoted answer). Note, in=$'...' , it does not work with double quotes. I think, it needs more upvotes. – John_West Jan 8 '16 at 12:10

    Darron , Sep 13, 2010 at 20:10

    How about this one liner, if you're not using arrays:
    IFS=';' read ADDR1 ADDR2 <<<$IN
    

    dubiousjim , May 31, 2012 at 5:36

    Consider using read -r ... to ensure that, for example, the two characters "\t" in the input end up as the same two characters in your variables (instead of a single tab char). – dubiousjim May 31 '12 at 5:36

    Luca Borrione , Sep 3, 2012 at 10:07

    -1 This is not working here (ubuntu 12.04). Adding echo "ADDR1 $ADDR1"\n echo "ADDR2 $ADDR2" to your snippet will output ADDR1 bla@some.com john@home.com\nADDR2 (\n is newline) – Luca Borrione Sep 3 '12 at 10:07

    chepner , Sep 19, 2015 at 13:59

    This is probably due to a bug involving IFS and here strings that was fixed in bash 4.3. Quoting $IN should fix it. (In theory, $IN is not subject to word splitting or globbing after it expands, meaning the quotes should be unnecessary. Even in 4.3, though, there's at least one bug remaining--reported and scheduled to be fixed--so quoting remains a good idea.) – chepner Sep 19 '15 at 13:59

    sorontar , Oct 26, 2016 at 4:55

    This breaks if $in contains newlines even if $IN is quoted. And adds a trailing newline. – sorontar Oct 26 '16 at 4:55

    kenorb , Sep 11, 2015 at 20:54

    Here is a clean 3-liner:
    in="foo@bar;bizz@buzz;fizz@buzz;buzz@woof"
    IFS=';' list=($in)
    for item in "${list[@]}"; do echo $item; done
    

    where IFS delimits words based on the separator and () is used to create an array . Then [@] is used to return each item as a separate word.

    If you have any code after that, you also need to restore $IFS , e.g. with unset IFS .

    sorontar , Oct 26, 2016 at 5:03

    The use of $in unquoted allows wildcards to be expanded. – sorontar Oct 26 '16 at 5:03

    user2720864 , Sep 24 at 13:46

    + for the unset command – user2720864 Sep 24 at 13:46

    Emilien Brigand , Aug 1, 2016 at 13:15

    Without setting the IFS

    If you just have one colon you can do that:

    a="foo:bar"
    b=${a%:*}
    c=${a##*:}
    

    you will get:

    b = foo
    c = bar
    
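
    A minimal sketch extending the same parameter-expansion idea to a string with more than one delimiter (%% and ## strip the longest match, % and # the shortest; the string is illustrative):

    a="foo:bar:baz"
    first=${a%%:*}    # foo      (longest ':*' suffix removed)
    last=${a##*:}     # baz      (longest '*:' prefix removed)
    rest=${a#*:}      # bar:baz  (shortest '*:' prefix removed)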

    Victor Choy , Sep 16, 2015 at 3:34

    There is a simple and smart way like this:
    echo "add:sfff" | xargs -d: -i  echo {}
    

    But you must use GNU xargs; BSD xargs does not support -d delim. If you use an Apple Mac like me, you can install GNU xargs:

    brew install findutils
    

    then

    echo "add:sfff" | gxargs -d: -i  echo {}
    

    Halle Knast , May 24, 2017 at 8:42

    The following Bash/zsh function splits its first argument on the delimiter given by the second argument:
    split() {
        local string="$1"
        local delimiter="$2"
        if [ -n "$string" ]; then
            local part
            while read -d "$delimiter" part; do
                echo $part
            done <<< "$string"
            echo $part
        fi
    }
    

    For instance, the command

    $ split 'a;b;c' ';'
    

    yields

    a
    b
    c
    

    This output may, for instance, be piped to other commands. Example:

    $ split 'a;b;c' ';' | cat -n
    1   a
    2   b
    3   c
    

    Compared to the other solutions given, this one has the following advantages:

    If desired, the function may be put into a script as follows:

    #!/usr/bin/env bash
    
    split() {
        # ...
    }
    
    split "$@"
    
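
    If the script is saved as, say, split.sh (a hypothetical name) and made executable, it can be called the same way as the function; a minimal sketch:

    $ chmod +x split.sh
    $ ./split.sh 'a;b;c' ';'
    a
    b
    c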

    sandeepkunkunuru , Oct 23, 2017 at 16:10

    works and neatly modularized. – sandeepkunkunuru Oct 23 '17 at 16:10

    Prospero , Sep 25, 2011 at 1:09

    This is the simplest way to do it.
    spo='one;two;three'
    OIFS=$IFS
    IFS=';'
    spo_array=($spo)
    IFS=$OIFS
    echo ${spo_array[*]}
    

    rashok , Oct 25, 2016 at 12:41

    IN="bla@some.com;john@home.com"
    IFS=';'
    read -a IN_arr <<< "${IN}"
    for entry in "${IN_arr[@]}"
    do
        echo $entry
    done
    

    Output

    bla@some.com
    john@home.com
    

    System : Ubuntu 12.04.1

    codeforester , Jan 2, 2017 at 5:37

    IFS is not getting set in the specific context of read here and hence it can upset the rest of the code, if any. – codeforester Jan 2 '17 at 5:37

    shuaihanhungry , Jan 20 at 15:54

    you can apply awk to many situations
    echo "bla@some.com;john@home.com"|awk -F';' '{printf "%s\n%s\n", $1, $2}'
    

    also you can use this

    echo "bla@some.com;john@home.com"|awk -F';' '{print $1,$2}' OFS="\n"
    

    ghost , Apr 24, 2013 at 13:13

    If there are no spaces, why not this?
    IN="bla@some.com;john@home.com"
    arr=(`echo $IN | tr ';' ' '`)
    
    echo ${arr[0]}
    echo ${arr[1]}
    

    eukras , Oct 22, 2012 at 7:10

    There are some cool answers here (errator esp.), but for something analogous to split in other languages -- which is what I took the original question to mean -- I settled on this:
    IN="bla@some.com;john@home.com"
    declare -a a="(${IN/;/ })";
    

    Now ${a[0]} , ${a[1]} , etc, are as you would expect. Use ${#a[*]} for number of terms. Or to iterate, of course:

    for i in ${a[*]}; do echo $i; done
    

    IMPORTANT NOTE:

    This works in cases where there are no spaces to worry about, which solved my problem, but may not solve yours. Go with the $IFS solution(s) in that case.

    olibre , Oct 7, 2013 at 13:33

    Does not work when IN contains more than two e-mail addresses. Please refer to same idea (but fixed) at palindrom's answerolibre Oct 7 '13 at 13:33

    sorontar , Oct 26, 2016 at 5:14

    Better use ${IN//;/ } (double slash) to make it also work with more than two values. Beware that any wildcard ( *?[ ) will be expanded. And a trailing empty field will be discarded. – sorontar Oct 26 '16 at 5:14

    jeberle , Apr 30, 2013 at 3:10

    Use the set built-in to load up the $@ array:
    IN="bla@some.com;john@home.com"
    IFS=';'; set $IN; IFS=$' \t\n'
    

    Then, let the party begin:

    echo $#
    for a; do echo $a; done
    ADDR1=$1 ADDR2=$2
    

    sorontar , Oct 26, 2016 at 5:17

    Better use set -- $IN to avoid some issues with "$IN" starting with dash. Still, the unquoted expansion of $IN will expand wildcards ( *?[ ). – sorontar Oct 26 '16 at 5:17

    NevilleDNZ , Sep 2, 2013 at 6:30

    Two bourne-ish alternatives where neither require bash arrays:

    Case 1 : Keep it nice and simple: Use a NewLine as the Record-Separator... eg.

    IN="bla@some.com
    john@home.com"
    
    while read i; do
      # process "$i" ... eg.
        echo "[email:$i]"
    done <<< "$IN"
    

    Note: in this first case no sub-process is forked to assist with list manipulation.

    Idea: Maybe it is worth using NL extensively internally , and only converting to a different RS when generating the final result externally .

    Case 2 : Using a ";" as a record separator... eg.

    NL="
    " IRS=";" ORS=";"
    
    conv_IRS() {
      exec tr "$1" "$NL"
    }
    
    conv_ORS() {
      exec tr "$NL" "$1"
    }
    
    IN="bla@some.com;john@home.com"
    IN="$(conv_IRS ";" <<< "$IN")"
    
    while read i; do
      # process "$i" ... eg.
        echo -n "[email:$i]$ORS"
    done <<< "$IN"
    

    In both cases a sub-list can be composed within the loop and is persistent after the loop has completed. This is useful when manipulating lists in memory, instead of storing lists in files. {p.s. keep calm and carry on B-) }

    fedorqui , Jan 8, 2015 at 10:21

    Apart from the fantastic answers that were already provided, if it is just a matter of printing out the data you may consider using awk :
    awk -F";" '{for (i=1;i<=NF;i++) printf("> [%s]\n", $i)}' <<< "$IN"
    

    This sets the field separator to ; , so that it can loop through the fields with a for loop and print accordingly.

    Test
    $ IN="bla@some.com;john@home.com"
    $ awk -F";" '{for (i=1;i<=NF;i++) printf("> [%s]\n", $i)}' <<< "$IN"
    > [bla@some.com]
    > [john@home.com]
    

    With another input:

    $ awk -F";" '{for (i=1;i<=NF;i++) printf("> [%s]\n", $i)}' <<< "a;b;c   d;e_;f"
    > [a]
    > [b]
    > [c   d]
    > [e_]
    > [f]
    

    18446744073709551615 , Feb 20, 2015 at 10:49

    In Android shell, most of the proposed methods just do not work:
    $ IFS=':' read -ra ADDR <<<"$PATH"                             
    /system/bin/sh: can't create temporary file /sqlite_stmt_journals/mksh.EbNoR10629: No such file or directory
    

    What does work is:

    $ for i in ${PATH//:/ }; do echo $i; done
    /sbin
    /vendor/bin
    /system/sbin
    /system/bin
    /system/xbin
    

    where // means global replacement.

    sorontar , Oct 26, 2016 at 5:08

    Fails if any part of $PATH contains spaces (or newlines). Also expands wildcards (asterisk *, question mark ? and braces [ ]). – sorontar Oct 26 '16 at 5:08

    Eduardo Lucio , Apr 4, 2016 at 19:54

    Okay guys!

    Here's my answer!

    DELIMITER_VAL='='
    
    read -d '' F_ABOUT_DISTRO_R <<"EOF"
    DISTRIB_ID=Ubuntu
    DISTRIB_RELEASE=14.04
    DISTRIB_CODENAME=trusty
    DISTRIB_DESCRIPTION="Ubuntu 14.04.4 LTS"
    NAME="Ubuntu"
    VERSION="14.04.4 LTS, Trusty Tahr"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 14.04.4 LTS"
    VERSION_ID="14.04"
    HOME_URL="http://www.ubuntu.com/"
    SUPPORT_URL="http://help.ubuntu.com/"
    BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
    EOF
    
    SPLIT_NOW=$(awk -F$DELIMITER_VAL '{for(i=1;i<=NF;i++){printf "%s\n", $i}}' <<<"${F_ABOUT_DISTRO_R}")
    while read -r line; do
       SPLIT+=("$line")
    done <<< "$SPLIT_NOW"
    for i in "${SPLIT[@]}"; do
        echo "$i"
    done
    

    Why is this approach "the best" for me?

    For two reasons:

    1. You do not need to escape the delimiter;
    2. You will not have problems with blank spaces. The values will be properly separated in the array!

    []'s

    gniourf_gniourf , Jan 30, 2017 at 8:26

    FYI, /etc/os-release and /etc/lsb-release are meant to be sourced, and not parsed. So your method is really wrong. Moreover, you're not quite answering the question about splitting a string on a delimiter. – gniourf_gniourf Jan 30 '17 at 8:26

    Michael Hale , Jun 14, 2012 at 17:38

    A one-liner to split a string separated by ';' into an array is:
    IN="bla@some.com;john@home.com"
    ADDRS=( $(IFS=";" echo "$IN") )
    echo ${ADDRS[0]}
    echo ${ADDRS[1]}
    

    This only sets IFS in a subshell, so you don't have to worry about saving and restoring its value.

    Luca Borrione , Sep 3, 2012 at 10:04

    -1 this doesn't work here (ubuntu 12.04). it prints only the first echo with all $IN value in it, while the second is empty. you can see it if you put echo "0: "${ADDRS[0]}\n echo "1: "${ADDRS[1]} the output is 0: bla@some.com;john@home.com\n 1: (\n is new line) – Luca Borrione Sep 3 '12 at 10:04

    Luca Borrione , Sep 3, 2012 at 10:05

    please refer to nickjb's answer at for a working alternative to this idea stackoverflow.com/a/6583589/1032370 – Luca Borrione Sep 3 '12 at 10:05

    Score_Under , Apr 28, 2015 at 17:09

    -1, 1. IFS isn't being set in that subshell (it's being passed to the environment of "echo", which is a builtin, so nothing is happening anyway). 2. $IN is quoted so it isn't subject to IFS splitting. 3. The process substitution is split by whitespace, but this may corrupt the original data. – Score_Under Apr 28 '15 at 17:09

    ajaaskel , Oct 10, 2014 at 11:33

    IN='bla@some.com;john@home.com;Charlie Brown <cbrown@acme.com;!"#$%&/()[]{}*? are no problem;simple is beautiful :-)'
    set -f
    oldifs="$IFS"
    IFS=';'; arrayIN=($IN)
    IFS="$oldifs"
    for i in "${arrayIN[@]}"; do
    echo "$i"
    done
    set +f
    

    Output:

    bla@some.com
    john@home.com
    Charlie Brown <cbrown@acme.com
    !"#$%&/()[]{}*? are no problem
    simple is beautiful :-)
    

    Explanation: Simple assignment using parentheses () converts a semicolon-separated list into an array, provided you have the correct IFS while doing that. A standard FOR loop handles individual items in that array as usual. Notice that the list given for the IN variable must be "hard" quoted, that is, with single quotes.

    IFS must be saved and restored since Bash does not treat an assignment the same way as a command. An alternate workaround is to wrap the assignment inside a function and call that function with a modified IFS. In that case separate saving/restoring of IFS is not needed. Thanks to "Bize" for pointing that out.

    gniourf_gniourf , Feb 20, 2015 at 16:45

    !"#$%&/()[]{}*? are no problem well... not quite: []*? are glob characters. So what about creating this directory and file: `mkdir '!"#$%&'; touch '!"#$%&/()[]{} got you hahahaha - are no problem' and running your command? simple may be beautiful, but when it's broken, it's broken. – gniourf_gniourf Feb 20 '15 at 16:45

    ajaaskel , Feb 25, 2015 at 7:20

    @gniourf_gniourf The string is stored in a variable. Please see the original question. – ajaaskel Feb 25 '15 at 7:20

    gniourf_gniourf , Feb 25, 2015 at 7:26

    @ajaaskel you didn't fully understand my comment. Go in a scratch directory and issue these commands: mkdir '!"#$%&'; touch '!"#$%&/()[]{} got you hahahaha - are no problem' . They will only create a directory and a file, with weird looking names, I must admit. Then run your commands with the exact IN you gave: IN='bla@some.com;john@home.com;Charlie Brown <cbrown@acme.com;!"#$%&/()[]{}*? are no problem;simple is beautiful :-)' . You'll see that you won't get the output you expect. Because you're using a method subject to pathname expansions to split your string. – gniourf_gniourf Feb 25 '15 at 7:26

    gniourf_gniourf , Feb 25, 2015 at 7:29

    This is to demonstrate that the characters * , ? , [...] and even, if extglob is set, !(...) , @(...) , ?(...) , +(...) are problems with this method! – gniourf_gniourf Feb 25 '15 at 7:29

    ajaaskel , Feb 26, 2015 at 15:26

    @gniourf_gniourf Thanks for detailed comments on globbing. I adjusted the code to have globbing off. My point was however just to show that rather simple assignment can do the splitting job. – ajaaskel Feb 26 '15 at 15:26

    > , Dec 19, 2013 at 21:39

    Maybe not the most elegant solution, but works with * and spaces:
    IN="bla@so me.com;*;john@home.com"
    for i in `delims=${IN//[^;]}; seq 1 $((${#delims} + 1))`
    do
       echo "> [`echo $IN | cut -d';' -f$i`]"
    done
    

    Outputs

    > [bla@so me.com]
    > [*]
    > [john@home.com]
    

    Other example (delimiters at beginning and end):

    IN=";bla@so me.com;*;john@home.com;"
    > []
    > [bla@so me.com]
    > [*]
    > [john@home.com]
    > []
    

    Basically it removes every character other than ; , making delims e.g. ;;; . Then it runs a for loop from 1 to the number of delimiters plus one, the delimiter count coming from ${#delims} . The final step is to safely get the $i th part using cut .

    [Nov 08, 2018] 15 Linux Split and Join Command Examples to Manage Large Files

    Nov 08, 2018 | www.thegeekstuff.com

    by Himanshu Arora on October 16, 2012


    Linux split and join commands are very helpful when you are manipulating large files. This article explains how to use the Linux split and join commands with descriptive examples.

    Join and split command syntax:

    join [OPTION] FILE1 FILE2
    split [OPTION] [INPUT [PREFIX]]

    Linux Split Command Examples

    1. Basic Split Example

    Here is a basic example of split command.

    $ split split.zip 
    
    $ ls
    split.zip  xab  xad  xaf  xah  xaj  xal  xan  xap  xar  xat  xav  xax  xaz  xbb  xbd  xbf  xbh  xbj  xbl  xbn
    xaa        xac  xae  xag  xai  xak  xam  xao  xaq  xas  xau  xaw  xay  xba  xbc  xbe  xbg  xbi  xbk  xbm  xbo
    

    So we see that the file split.zip was split into smaller files with x** as file names, where ** is the two-character suffix that is added by default. Also, by default each x** file contains 1000 lines.

    $ wc -l *
       40947 split.zip
        1000 xaa
        1000 xab
        1000 xac
        1000 xad
        1000 xae
        1000 xaf
        1000 xag
        1000 xah
        1000 xai
    ...
    ...
    ...
    

    So the output above confirms that by default each x** file contains 1000 lines.
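
    Although the excerpt does not show it, the chunks can be reassembled with plain cat, since the alphabetical suffix order matches the original order; a minimal sketch (the output file name is illustrative):

    $ cat x* > rejoined.zip
    $ cmp split.zip rejoined.zip && echo "files are identical"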

    2. Change the Suffix Length using -a option

    As discussed in example 1 above, the default suffix length is 2. But this can be changed by using -a option.


    As you see in the following example, it is using suffix of length 5 on the split files.

    $ split -a5 split.zip
    $ ls
    split.zip  xaaaac  xaaaaf  xaaaai  xaaaal  xaaaao  xaaaar  xaaaau  xaaaax  xaaaba  xaaabd  xaaabg  xaaabj  xaaabm
    xaaaaa     xaaaad  xaaaag  xaaaaj  xaaaam  xaaaap  xaaaas  xaaaav  xaaaay  xaaabb  xaaabe  xaaabh  xaaabk  xaaabn
    xaaaab     xaaaae  xaaaah  xaaaak  xaaaan  xaaaaq  xaaaat  xaaaaw  xaaaaz  xaaabc  xaaabf  xaaabi  xaaabl  xaaabo
    

    Note: Earlier we also discussed other file manipulation utilities – tac, rev, paste .

    3. Customize Split File Size using -b option

    Size of each output split file can be controlled using -b option.

    In this example, the split files were created with a size of 200000 bytes.

    $ split -b200000 split.zip 
    
    $ ls -lart
    total 21084
    drwxrwxr-x 3 himanshu himanshu     4096 Sep 26 21:20 ..
    -rw-rw-r-- 1 himanshu himanshu 10767315 Sep 26 21:21 split.zip
    -rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xad
    -rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xac
    -rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xab
    -rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xaa
    -rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xah
    -rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xag
    -rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xaf
    -rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xae
    -rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xar
    ...
    ...
    ...
    
    4. Create Split Files with Numeric Suffix using -d option

    As seen in the examples above, the output has the format of x** where ** are alphabetic characters. You can change this to numbers using the -d option.

    Here is an example. This has numeric suffix on the split files.

    $ split -d split.zip
    $ ls
    split.zip  x01  x03  x05  x07  x09  x11  x13  x15  x17  x19  x21  x23  x25  x27  x29  x31  x33  x35  x37  x39
    x00        x02  x04  x06  x08  x10  x12  x14  x16  x18  x20  x22  x24  x26  x28  x30  x32  x34  x36  x38  x40
    
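
    The PREFIX argument shown in the syntax at the top can be combined with these options to replace the default x prefix; a minimal sketch (the prefix part_ is illustrative):

    $ split -d -a3 -l1000 split.zip part_
    $ ls part_* | head -3
    part_000
    part_001
    part_002
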
    5. Customize the Number of Split Chunks using -n option

    To get control over the number of chunks, use the -n option.

    This example will create 50 chunks of split files.

    $ split -n50 split.zip
    $ ls
    split.zip  xac  xaf  xai  xal  xao  xar  xau  xax  xba  xbd  xbg  xbj  xbm  xbp  xbs  xbv
    xaa        xad  xag  xaj  xam  xap  xas  xav  xay  xbb  xbe  xbh  xbk  xbn  xbq  xbt  xbw
    xab        xae  xah  xak  xan  xaq  xat  xaw  xaz  xbc  xbf  xbi  xbl  xbo  xbr  xbu  xbx
    
    6. Avoid Zero Sized Chunks using -e option

    While splitting a relatively small file into a large number of chunks, it's good to avoid zero-sized chunks as they do not add any value. This can be done using the -e option.

    Here is an example:

    $ split -n50 testfile
    
    $ ls -lart x*
    -rw-rw-r-- 1 himanshu himanshu 0 Sep 26 21:55 xag
    -rw-rw-r-- 1 himanshu himanshu 1 Sep 26 21:55 xaf
    -rw-rw-r-- 1 himanshu himanshu 1 Sep 26 21:55 xae
    -rw-rw-r-- 1 himanshu himanshu 1 Sep 26 21:55 xad
    -rw-rw-r-- 1 himanshu himanshu 1 Sep 26 21:55 xac
    -rw-rw-r-- 1 himanshu himanshu 1 Sep 26 21:55 xab
    -rw-rw-r-- 1 himanshu himanshu 1 Sep 26 21:55 xaa
    -rw-rw-r-- 1 himanshu himanshu 0 Sep 26 21:55 xbx
    -rw-rw-r-- 1 himanshu himanshu 0 Sep 26 21:55 xbw
    -rw-rw-r-- 1 himanshu himanshu 0 Sep 26 21:55 xbv
    ...
    ...
    ...
    

    So we see that lots of zero-sized chunks were produced in the above output. Now, let's use the -e option and see the results:

    $ split -n50 -e testfile
    $ ls
    split.zip  testfile  xaa  xab  xac  xad  xae  xaf
    
    $ ls -lart x*
    -rw-rw-r-- 1 himanshu himanshu 1 Sep 26 21:57 xaf
    -rw-rw-r-- 1 himanshu himanshu 1 Sep 26 21:57 xae
    -rw-rw-r-- 1 himanshu himanshu 1 Sep 26 21:57 xad
    -rw-rw-r-- 1 himanshu himanshu 1 Sep 26 21:57 xac
    -rw-rw-r-- 1 himanshu himanshu 1 Sep 26 21:57 xab
    -rw-rw-r-- 1 himanshu himanshu 1 Sep 26 21:57 xaa
    

    So we see that no zero sized chunk was produced in the above output.

    7. Customize Number of Lines using -l option

    Number of lines per output split file can be customized using the -l option.

    As seen in the example below, split files are created with 20000 lines.

    $ split -l20000 split.zip
    
    $ ls
    split.zip  testfile  xaa  xab  xac
    
    $ wc -l x*
       20000 xaa
       20000 xab
         947 xac
       40947 total
    
    Get Detailed Information using --verbose option

    To get a diagnostic message each time a new split file is opened, use the --verbose option as shown below.

    $ split -l20000 --verbose split.zip
    creating file `xaa'
    creating file `xab'
    creating file `xac'
    

    [Nov 08, 2018] Utilizing multi core for tar+gzip-bzip compression-decompression

    Nov 08, 2018 | stackoverflow.com



    user1118764 , Sep 7, 2012 at 6:58

    I normally compress using tar zcvf and decompress using tar zxvf (using gzip due to habit).

    I've recently gotten a quad core CPU with hyperthreading, so I have 8 logical cores, and I notice that many of the cores are unused during compression/decompression.

    Is there any way I can utilize the unused cores to make it faster?

    Warren Severin , Nov 13, 2017 at 4:37

    The solution proposed by Xiong Chiamiov above works beautifully. I had just backed up my laptop with .tar.bz2 and it took 132 minutes using only one cpu thread. Then I compiled and installed tar from source: gnu.org/software/tar I included the options mentioned in the configure step: ./configure --with-gzip=pigz --with-bzip2=lbzip2 --with-lzip=plzip I ran the backup again and it took only 32 minutes. That's better than 4X improvement! I watched the system monitor and it kept all 4 cpus (8 threads) flatlined at 100% the whole time. THAT is the best solution. – Warren Severin Nov 13 '17 at 4:37

    Mark Adler , Sep 7, 2012 at 14:48

    You can use pigz instead of gzip, which does gzip compression on multiple cores. Instead of using the -z option, you would pipe it through pigz:
    tar cf - paths-to-archive | pigz > archive.tar.gz
    

    By default, pigz uses the number of available cores, or eight if it could not query that. You can ask for more with -p n, e.g. -p 32. pigz has the same options as gzip, so you can request better compression with -9. E.g.

    tar cf - paths-to-archive | pigz -9 -p 32 > archive.tar.gz
    

    user788171 , Feb 20, 2013 at 12:43

    How do you use pigz to decompress in the same fashion? Or does it only work for compression? – user788171 Feb 20 '13 at 12:43

    Mark Adler , Feb 20, 2013 at 16:18

    pigz does use multiple cores for decompression, but only with limited improvement over a single core. The deflate format does not lend itself to parallel decompression. The decompression portion must be done serially. The other cores for pigz decompression are used for reading, writing, and calculating the CRC. When compressing on the other hand, pigz gets close to a factor of n improvement with n cores. – Mark Adler Feb 20 '13 at 16:18
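
    For the decompression direction, a minimal sketch (-d decompresses, -c writes to stdout; with GNU tar, tar -I pigz -xf archive.tar.gz should be equivalent; the archive name is illustrative):

    pigz -dc archive.tar.gz | tar xf -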

    Garrett , Mar 1, 2014 at 7:26

    The hyphen here is stdout (see this page ). – Garrett Mar 1 '14 at 7:26

    Mark Adler , Jul 2, 2014 at 21:29

    Yes. 100% compatible in both directions. – Mark Adler Jul 2 '14 at 21:29

    Mark Adler , Apr 23, 2015 at 5:23

    There is effectively no CPU time spent tarring, so it wouldn't help much. The tar format is just a copy of the input file with header blocks in between files. – Mark Adler Apr 23 '15 at 5:23

    Jen , Jun 14, 2013 at 14:34

    You can also use the tar flag "--use-compress-program=" to tell tar what compression program to use.

    For example use:

    tar -c --use-compress-program=pigz -f tar.file dir_to_zip
    

    ranman , Nov 13, 2013 at 10:01

    This is an awesome little nugget of knowledge and deserves more upvotes. I had no idea this option even existed and I've read the man page a few times over the years. – ranman Nov 13 '13 at 10:01

    Valerio Schiavoni , Aug 5, 2014 at 22:38

    Unfortunately by doing so the concurrent feature of pigz is lost. You can see for yourself by executing that command and monitoring the load on each of the cores. – Valerio Schiavoni Aug 5 '14 at 22:38

    bovender , Sep 18, 2015 at 10:14

    @ValerioSchiavoni: Not here, I get full load on all 4 cores (Ubuntu 15.04 'Vivid'). – bovender Sep 18 '15 at 10:14

    Valerio Schiavoni , Sep 28, 2015 at 23:41

    On compress or on decompress ? – Valerio Schiavoni Sep 28 '15 at 23:41

    Offenso , Jan 11, 2017 at 17:26

    I prefer tar cf - dir_to_zip | pv | pigz > tar.file ; pv helps me estimate, you can skip it. But still it is easier to write and remember. – Offenso Jan 11 '17 at 17:26

    Maxim Suslov , Dec 18, 2014 at 7:31

    Common approach

    There is an option for the tar program:

    -I, --use-compress-program PROG
          filter through PROG (must accept -d)
    

    You can use a multithreaded version of an archiver or compressor utility.

    The most popular multithreaded archivers are pigz (instead of gzip) and pbzip2 (instead of bzip2). For instance:

    $ tar -I pbzip2 -cf OUTPUT_FILE.tar.bz2 paths_to_archive
    $ tar --use-compress-program=pigz -cf OUTPUT_FILE.tar.gz paths_to_archive
    

    The archiver must accept -d. If your replacement utility doesn't have this parameter and/or you need to specify additional parameters, then use pipes (add parameters if necessary):

    $ tar cf - paths_to_archive | pbzip2 > OUTPUT_FILE.tar.bz2
    $ tar cf - paths_to_archive | pigz > OUTPUT_FILE.tar.gz
    

    The input and output of the single-threaded and multithreaded versions are compatible. You can compress using the multithreaded version and decompress using the single-threaded version, and vice versa.

    p7zip

    For compression with p7zip you need a small shell script like the following:

    #!/bin/sh
    case $1 in
      -d) 7za -txz -si -so e;;
       *) 7za -txz -si -so a .;;
    esac 2>/dev/null
    

    Save it as 7zhelper.sh. Here the example of usage:

    $ tar -I 7zhelper.sh -cf OUTPUT_FILE.tar.7z paths_to_archive
    $ tar -I 7zhelper.sh -xf OUTPUT_FILE.tar.7z
    
    xz

    Regarding multithreaded XZ support: if you are running version 5.2.0 or above of XZ Utils, you can utilize multiple cores for compression by setting -T or --threads to an appropriate value via the environment variable XZ_DEFAULTS (e.g. XZ_DEFAULTS="-T 0" ).

    This is a fragment of the man page for the 5.1.0alpha version:

    Multithreaded compression and decompression are not implemented yet, so this option has no effect for now.

    However this will not work for decompression of files that haven't also been compressed with threading enabled. From the man page for version 5.2.2:

    Threaded decompression hasn't been implemented yet. It will only work on files that contain multiple blocks with size information in block headers. All files compressed in multi-threaded mode meet this condition, but files compressed in single-threaded mode don't even if --block-size=size is used.
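
    A minimal sketch of the environment-variable approach, assuming XZ Utils ≥ 5.2 and GNU tar (-J filters the archive through xz; the archive name is illustrative):

    XZ_DEFAULTS="-T 0" tar -cJf archive.tar.xz paths_to_archive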

    Recompiling with replacement

    If you build tar from sources, then you can recompile with parameters

    --with-gzip=pigz
    --with-bzip2=lbzip2
    --with-lzip=plzip
    

    After recompiling tar with these options you can check the output of tar's help:

    $ tar --help | grep "lbzip2\|plzip\|pigz"
      -j, --bzip2                filter the archive through lbzip2
          --lzip                 filter the archive through plzip
      -z, --gzip, --gunzip, --ungzip   filter the archive through pigz
    

    user1985657 , Apr 28, 2015 at 20:41

    This is indeed the best answer. I'll definitely rebuild my tar! – user1985657 Apr 28 '15 at 20:41

    user1985657 , Apr 28, 2015 at 20:57

    I just found pbzip2 and mpibzip2 . mpibzip2 looks very promising for clusters or if you have a laptop and a multicore desktop computer for instance. – user1985657 Apr 28 '15 at 20:57

    oᴉɹǝɥɔ , Jun 10, 2015 at 17:39

    This is a great and elaborate answer. It may be good to mention that multithreaded compression (e.g. with pigz ) is only enabled when it reads from the file. Processing STDIN may in fact be slower. – oᴉɹǝɥɔ Jun 10 '15 at 17:39

    selurvedu , May 26, 2016 at 22:13

    Plus 1 for the xz option. It's the simplest, yet effective approach. – selurvedu May 26 '16 at 22:13

    panticz.de , Sep 1, 2014 at 15:02

    You can use the shortcut -I for tar's --use-compress-program switch, and invoke pbzip2 for bzip2 compression on multiple cores:
    tar -I pbzip2 -cf OUTPUT_FILE.tar.bz2 DIRECTORY_TO_COMPRESS/
    

    einpoklum , Feb 11, 2017 at 15:59

    A nice TL;DR for @MaximSuslov's answer . – einpoklum Feb 11 '17 at 15:59

    ,

    If you want to have more flexibility with filenames and compression options, you can use:
    find /my/path/ -type f \( -name "*.sql" -o -name "*.log" \) -exec \
    tar -P --transform='s@/my/path/@@g' -cf - {} + | \
    pigz -9 -p 4 > myarchive.tar.gz
    
    Step 1: find

    find /my/path/ -type f \( -name "*.sql" -o -name "*.log" \) -exec

    This command will look for the files you want to archive, in this case /my/path/*.sql and /my/path/*.log . The parentheses group the -o alternatives so that -type f and -exec apply to both patterns. Add as many -o -name "pattern" as you want inside the parentheses.

    -exec will execute the next command using the results of find : tar

    Step 2: tar

    tar -P --transform='s@/my/path/@@g' -cf - {} +

    --transform is a simple string replacement parameter. It will strip the path of the files from the archive so the tarball's root becomes the current directory when extracting. Note that you can't use -C option to change directory as you'll lose benefits of find : all files of the directory would be included.

    -P tells tar to use absolute paths, so it doesn't trigger the warning "Removing leading `/' from member names". Leading '/' will be removed by --transform anyway.

    -cf - tells tar to write the archive to standard output; the actual file name is supplied later by the shell redirection.

    {} + passes every file that find found previously

    Step 3: pigz

    pigz -9 -p 4

    Use as many parameters as you want. In this case -9 is the compression level and -p 4 is the number of cores dedicated to compression. If you run this on a heavily loaded webserver, you probably don't want to use all available cores.

    Step 4: archive name

    > myarchive.tar.gz

    Finally.

    [Nov 08, 2018] Technology Detox The Health Benefits of Unplugging Unwinding by Sara Tipton

    Notable quotes:
    "... Another great tip is to buy one of those old-school alarm clocks so the smartphone isn't ever in your bedroom. ..."
    Nov 07, 2018 | www.zerohedge.com

    Authored by Sara Tipton via ReadyNutrition.com,

    Recent studies have shown that 90% of Americans use digital devices for two or more hours each day and the average American spends more time a day on high-tech devices than they do sleeping: 8 hours and 21 minutes to be exact. If you've ever considered attempting a "digital detox", there are some health benefits to making that change and a few tips to make things a little easier on yourself.

    Many Americans are on their phones rather than playing with their children or spending quality family time together. Some people give up technology, or certain aspects of it, such as social media for varying reasons, and there are some shockingly terrific health benefits that come along with that type of a detox from technology. In fact, more and more health experts and medical professionals are suggesting a periodic digital detox; an extended period without those technology gadgets. Studies continue to show that a digital detox, has proven to be beneficial for relationships, productivity, physical health, and mental health. If you find yourself overly stressed or unproductive or generally disengaged from those closest to you, it might be time to unplug.

    DIGITAL ADDICTION RESOLUTION

    It may go unnoticed but there are many who are actually addicted to their smartphones or tablets. It could be social media or YouTube videos, but these are the people who never step away. They are the ones with their face in their phone while out to dinner with their family. They can't have a quiet dinner without their phone on the table. We've seen them at the grocery store aimlessly pushing around a cart while ignoring their children and scrolling on their phone. A whopping 83% of American teenagers claim to play video games while other people are in the same room and 92% of teens report going online daily. 24% of those users access the internet via laptops, tablets, and mobile devices.

    Addiction therapists who treat gadget-obsessed people say their patients aren't that different from other kinds of addicts. Whereas alcohol, tobacco, and drugs involve a substance that a user's body gets addicted to, in behavioral addiction, it's the mind's craving to turn to the smartphone or the Internet. Taking a break teaches us that we can live without constant stimulation, and lessens our dependence on electronics. Trust us: that Facebook message with a funny meme attached or juicy tidbit of gossip can wait.

    IMPROVE RELATIONSHIPS AND BE MORE PERSONABLE

    Another benefit to keeping all your electronics off is that it will allow you to establish good mannerisms and people skills and build your relationships to a strong level of connection. If you have ever sat across from someone at the dinner table who made more phone contact than eye contact, you know how it feels to take a backseat to a screen. Cell phones and other gadgets force people to look down and away from their surroundings, giving them a closed off and inaccessible (and often rude) demeanor. A digital detox has the potential of forcing you out of that unhealthy comfort zone. It could be a start toward rebuilding a struggling relationship too. In a Forbes study , 3 out of 5 people claimed that they spend more time on their digital devices than they do with their partners. This can pose a real threat to building and maintaining real-life relationships. The next time you find yourself going out on a dinner date, try leaving your cell phone and other devices at home and actually have a conversation. Your significant other will thank you.

    BETTER SLEEP AND HEALTHIER EATING HABITS

    The sleep interference caused by these high-tech gadgets is another mental health concern. The stimulation caused by artificial light can make you feel more awake than you really are, which can potentially interfere with your sleep quality. It is recommended that you give yourself at least two hours of technology-free time before bedtime. The "blue light" has been shown to interfere with sleeping patterns by inhibiting melatonin (the hormone which controls our sleep/wake cycle known as circadian rhythm) production. Try shutting off your phone after dinner and leaving it in a room other than your bedroom. Another great tip is to buy one of those old-school alarm clocks so the smartphone isn't ever in your bedroom. This will help your body readjust to a normal and healthy sleep schedule.

    Your eating habits can also suffer if you spend too much time checking your newsfeed. The Rochester Institute of Technology released a study that revealed students are more likely to eat while staring into digital media than they are to eat at a dinner table. This means that eating has now become a multi-tasking activity, rather than a social and loving experience in which healthy foods meant to sustain the body are consumed. This can prevent students from eating consciously, which promotes unhealthy eating habits such as overeating and easy choices, such as a bag of chips as opposed to washing and peeling some carrots. Whether you're an overworked college student checking your Facebook, or a single bachelor watching reruns of The Office , a digital detox is a great way to promote healthy and conscious eating.

    IMPROVE OVERALL MENTAL HEALTH

    Social media addicts experience a wide array of emotions when looking at the photos of Instagram models and the exercise regimes of others who live in exotic locations. These emotions can be mentally draining and psychologically unhealthy and lead to depression. Smartphone use has been linked to loneliness, shyness, and less engagement at work. In other words, one may have many "social media friends" while being lonely and unsatisfied because those friends are only accessible through their screen. Start by limiting your time on social media. Log out of all social media accounts. That way, you've actually got to log back in if you want to see what that Parisian Instagram vegan model is up to.

    If you feel like a detox is in order but don't know how to go about it, start off small. Try shutting off your phone after dinner and don't turn it back on until after breakfast. Keep your phone in another room besides your bedroom overnight. If you use your phone as an alarm clock, buy a cheap alarm clock to use instead to lessen your dependence on your phone. Boredom is often the biggest factor in the beginning stages of a detox, but try playing an undistracted board game with your children, leaving your phone at home during a nice dinner out, or playing with a pet. All of these things are not only good for you but good for your family and beloved furry critter as well!

    [Nov 07, 2018] Stuxnet 2.0? Iran claims Israel launched new cyber attacks

    Nov 07, 2018 | arstechnica.com

    President Rouhani's phone "bugged," attacks against network infrastructure claimed.

    Sean Gallagher - 11/5/2018, 5:10 PM


    Last week, Iran's chief of civil defense claimed that the Iranian government had fought off Israeli attempts to infect computer systems with what he described as a new version of Stuxnet -- the malware reportedly developed jointly by the US and Israel that targeted Iran's uranium-enrichment program. Gholamreza Jalali, chief of the National Passive Defense Organization (NPDO), told Iran's IRNA news service, "Recently, we discovered a new generation of Stuxnet which consisted of several parts... and was trying to enter our systems."

    On November 5, Iran Telecommunications Minister Mohammad-Javad Azari Jahromi accused Israel of being behind the attack, and he said that the malware was intended to "harm the country's communication infrastructures." Jahromi praised "technical teams" for shutting down the attack, saying that the attackers "returned empty-handed." A report from Iran's Tasnim news agency quoted Deputy Telecommunications Minister Hamid Fattahi as stating that more details of the cyber attacks would be made public soon.

    Jahromi said that Iran would sue Israel over the attack through the International Court of Justice. The Iranian government has also said it would sue the US in the ICJ over the reinstatement of sanctions. Israel has remained silent regarding the accusations .

    The claims come a week after the NPDO's Jalali announced that President Hassan Rouhani's cell phone had been "tapped" and was being replaced with a new, more secure device. This led to a statement by Iranian Supreme Leader Ayatollah Ali Khamenei, exhorting Iran's security apparatus to "confront infiltration through scientific, accurate, and up-to-date action."

    While Iran protests the alleged attacks -- about which the Israeli government has been silent -- Iranian hackers have continued to conduct their own cyber attacks. A recent report from security tools company Carbon Black based on data from the company's incident-response partners found that Iran had been a significant source of attacks in the third quarter of this year, with one incident-response professional noting, "We've seen a lot of destructive actions from Iran and North Korea lately, where they've effectively wiped machines they suspect of being forensically analyzed."


    SymmetricChaos , 2018-11-05T17:16:46-05:00

    I feel like governments still think of cyber warfare as something that doesn't really count and are willing to be dangerously provocative in their use of it.

    ihatewinter , 2018-11-05T17:27:06-05:00

    Another day in international politics. Beats lobbing bombs at each other.

    fahrenheit_ak , 2018-11-05T17:46:44-05:00

    corey_1967 wrote:
    The twin pillars of Iran's foreign policy - America is evil and Wipe Israel off the map - do not appear to be serving the country very well.

    They serve Iran very well, America is an easy target to gather support against, and Israel is more than willing to play the bad guy (for a bunch of reasons including Israels' policy of nuclear hegemony in the region and historical antagonism against Arab states).
    revision0 , 2018-11-05T17:48:22-05:00 Israeli hackers?

    Go on!

    Quote:

    Israeli hackers offered Cambridge Analytica, the data collection firm that worked on U.S. President Donald Trump's election campaign, material on two politicians who are heads of state, the Guardian reported Wednesday, citing witnesses.

    https://www.haaretz.com/israel-news/isr ... -1.5933977

    Quote:

    For $20M, These Israeli Hackers Will Spy On Any Phone On The Planet

    https://www.forbes.com/sites/thomasbrew ... -ulin-ss7/

    Quote:

    While Israelis are not necessarily number one in technical skills -- that award goes to Russian hackers -- Israelis are probably the best at thinking on their feet and adjusting to changing situations on the fly, a trait essential for success in a wide range of areas, including cyber-security, said Forzieri. "In modern attacks, the human factor -- for example, getting someone to click on a link that will install malware -- constitutes as much as 85% of a successful attack," he said.

    http://www.timesofisrael.com/israeli-ha ... ty-expert/

    ihatewinter , 2018-11-05T17:52:15-05:00
    dramamoose wrote:
    thorpe wrote:
    The pro-Israel trolls out in front of this comment section...

    You don't have to be pro-Israel to be anti-Iran. Far from it. I think many of Israel's actions in Palestine are reprehensible, but I also know to (rightly) fear an Islamic dictatorship who is actively funding terrorism groups and is likely a few years away from having a working nuclear bomb, should they resume research (which the US actions seem likely to cause).

    The US created the Islamic Republic of Iran by holding a cruel dictator in power rather than risking a slide into communism. We should be engaging diplomatically, rather than trying sanctions which clearly don't work. But I don't think that the original Stuxnet was a bad idea, nor do I think that intense surveillance of what could be a potentially very dangerous country is a bad one either.

    If the Israelis (slash US) did in fact target civilian infrastructure, that's a problem. Unless, of course, they were bugging them for espionage purposes.

    Agree. While Israel is not about to win Humanitarian Nation of the year Award any time soon, I don't see it going to Iran in a close vote tally either.

    [Nov 05, 2018] Frequently there is no way to judge whether an individual is competent or incompetent to hold a given position. Stated another way: there is no adequate competence criterion for technical managers.

    Nov 05, 2018 | www.rako.com

    However, there is another anomaly with more interesting consequences; namely, there frequently is no way to judge whether an individual is competent or incompetent to hold a given position. Stated another way: there is no adequate competence criterion for technical managers.

    Consider. for example. the manager of a small group of chemists. He asked his group to develop a nonfading system of dyes using complex organic compounds that they had been studying for some time. Eighteen months later they reported little success with dyes but had discovered a new substance that was rather effective as an insect repellent.

    Should the manager be chastised for failing to accomplish anything toward his original objective, or should he be praised for resourcefulness in finding something useful in the new chemical system? Was 18 months a long time or a short time for this accomplishment?

    [Nov 05, 2018] Management theories for CIOs The Peter Principle and Parkinson's Law

    Notable quotes:
    "... Josι Ortega y Gasset. ..."
    "... "Works expands so as to fill the time available for its completion." ..."
    "... "The time spent on any item of the agenda will be in inverse proportion to the sum of money involved." ..."
    "... Gφdel, Escher, Bach: An Eternal Golden Braid, ..."
    "... "It always takes longer than you expect, even when you take into account Hofstadter's Law." ..."
    "... "Anything that can go wrong, will go wrong." ..."
    "... "Anything that can go wrong, will go wrong - at the worst possible moment." ..."
    Nov 05, 2018 | cio.co.uk

    From the semi-serious to the confusingly ironic, the business world is not short of pseudo-scientific principles, laws and management theories concerning how organisations and their leaders should and should not behave. CIO UK takes a look at some sincere, irreverent and leftfield management concepts that are relevant to CIOs and all business leaders.

    The Peter Principle

    A concept formulated by Laurence J Peter in 1969, the Peter Principle runs that in a hierarchical structure, employees are promoted to their highest level of incompetence at which point they are no longer able to fulfil an effective role for their organisation.

    In the Peter Principle people are promoted when they excel, but this process falls down when they are unlikely to gain further promotion or be demoted with the logical end point, according to Peter, where "every post tends to be occupied by an employee who is incompetent to carry out its duties" and that "work is accomplished by those employees who have not yet reached their level of incompetence".

    To counter the Peter Principle leaders could seek the advice of Spanish liberal philosopher Josι Ortega y Gasset. While he died 14 years before the Peter Principle was published, Ortega had been in exile in Argentina during the Spanish Civil War and prompted by his observations in South America had quipped: "All public employees should be demoted to their immediately lower level, as they have been promoted until turning incompetent."

    Parkinson's Law

    Cyril Northcote Parkinson's eponymous law, derived from his extensive experience in the British Civil Service, states that: "Works expands so as to fill the time available for its completion."

    The first sentence of a humorous essay published in The Economist in 1955, Parkinson's Law is familiar with CIOs, IT teams, journalists, students, and every other occupation that can learn from Parkinson's mocking of pubic administration in the UK. The corollary law most applicable to CIOs runs that "data expands to fill the space available for storage", while Parkinson's broader work about the self-satisfying uncontrolled growth of bureaucratic apparatus is as relevant for the scaling startup as it is to the large corporate.

    Related Parkinson's Law of Triviality

    Flirting with the ground between flippancy and seriousness, Parkinson argued that boards and members of an organisation give disproportional weight to trivial issues and those that are easiest to grasp for non-experts. In his words: "The time spent on any item of the agenda will be in inverse proportion to the sum of money involved."

    Parkinson's anecdote is of a fictional finance committee's three-item agenda to cover a £10 million contract discussing the components of a new nuclear reactor, a proposal to build a new £350 bicycle shed, and finally which coffee and biscuits should be supplied at future committee meetings. While the first item on the agenda is far too complex and ironed out in two and a half minutes, 45 minutes is spent discussing bike sheds, and debates about the £21 refreshment provisions are so drawn out that the committee runs over its two-hour time allocation with a note to provide further information about coffee and biscuits to be continued at the next meeting.

    The Dilbert Principle

    Referring to a 1990s theory by popular Dilbert cartoonist Scott Adams, the Dilbert Principle runs that companies tend to promote their least competent employees to management roles to curb the amount of damage they are capable of doing to the organisation.

    Unlike the Peter Principle , which is positive in its aims by rewarding competence, the Dilbert Principle assumes people are moved to quasi-senior supervisory positions in a structure where they are less likely to have an effect on productive output of the company which is performed by those lower down the ladder.

    Hofstadter's Law

    Coined by Douglas Hofstadter in his 1979 book Gφdel, Escher, Bach: An Eternal Golden Braid, Hofstadter's Law states: "It always takes longer than you expect, even when you take into account Hofstadter's Law."

    Particularly relevant to CIOs and business leaders overseeing large projects and transformation programmes, Hofstadter's Law suggests that even appreciating your own subjective pessimism in your projected timelines, they are still worth re-evaluating.

    Related Murphy's Law

    "Anything that can go wrong, will go wrong."

    An old adage and without basis in any scientific laws or management principles, Murphy's Law is always worth bearing in mind for CIOs or when undertaking thorough scenario planning for adverse situations. It's also perhaps worth bearing in mind the corollary principle Finagle's Law , which states: "Anything that can go wrong, will go wrong - at the worst possible moment."

    Lindy Effect

    Concerning the life expectancy of non-perishable things, the Lindy Effect is as relevant to CIOs procuring new technologies or maintaining legacy infrastructure as it is to the those buying homes, used cars, a fountain pen or mobile phone.

    Harder to define than other principles and laws, the Lindy Effect suggests that mortality rate decreases with time, unlike in nature and in human beings where - after childhood - mortality rate increases with time. Ergo, every day of server uptime implies a longer remaining life expectancy.

    A related corollary that helps explain the Lindy Effect is the Copernican Principle , which states that future life expectancy is equal to current age, i.e. that barring any additional evidence to the contrary, something should be assumed to be halfway through its life span.

    The Lindy Effect and the idea that older things are more robust has specific relevance to CIOs beyond servers and IT infrastructure with its association with source code, where newer code will in general have lower probability of remaining within a year and an increased likelihood of causing problems compared to code written a long time ago, and in project management where the lifecycle of a project grows and its scope changes, an Agile methodology can be used to mitigate project risks and fix mistakes.

    The Jevons Paradox

    Wikipedia offers the best economic description of the Jevons Paradox or Jevons effect, in which a technological progress increases efficiency with which a resource is used, but the rate of consumption of that resource subsequently rises because of increasing demand.

    Think email, think Slack, instant messaging, printing, how easy it is to create Excel reports, coffee-making, conference calls, network and internet speeds, the list is endless. If you suspect demand in these has increased along with technological advancement negating the positive impact of said efficiency gains in the first instance, sounds like the paradox first described by William Stanley Jevons in 1865 when observing coal consumption following the introduction of the Watt steam engine.

    Ninety-Ninety Rule

    A light-hearted quip specific to computer programming and software development, the Ninety-Ninety Rule states: "The first 90% of the code accounts for the first 90% of the development time. The remaining 10% of the code accounts for the other 90% of the development time." See also Hofstadter's Law .

    Related to this is the Pareto Principle , or the 80-20 Rule, as it applies to software, with supporting anecdotes such as "20% of the code has 80% of the errors", or the common practice in load testing of estimating that 80% of the traffic occurs during 20% of the time.

    Pygmalion Effect and Golem Effect

    Named after the Greek myth of Pygmalion, a sculptor who fell in love with a statue he carved, and relevant to managers across industry and seniority, the Pygmalion Effect runs that higher expectations lead to increased performance.

    Counter to the Pygmalion Effect is the Golem effect , whereby low expectations result in a decrease in performance.

    Dunning-Kruger Effect

    The Dunning-Kruger Effect , named after two psychologists from Cornell University, states that incompetent people are significantly less able to recognise their own lack of skill, the extent of their inadequacy, and even to gauge the skill of others. Furthermore, they are only able to acknowledge their own incompetence after they have been exposed to training in that skill.

    At a loss to find a better visual representation of the Dunning-Kruger Effect , here is Simon Wardley's graph with Knowledge and Expertise axes - a warning as to why self-professed experts are the worst people to listen to on a given subject.


    See also this picture of AOL "Digital Prophet" David Shing and web developer Sir Tim Berners-Lee.

    [Nov 05, 2018] Putt's Law

    Nov 05, 2018 | davewentzel.com

    ... ... ...

    Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand. --Putt's Law

    If you are in IT and are not familiar with Archibald Putt, I suggest you stop reading this blog post, RIGHT NOW, and go buy the book Putt's Law and the Successful Technocrat. How to Win in the Information Age . Putt's Law , for short, is a combination of Dilbert and The Mythical Man-Month . It shows you exactly how managers of technologists think, how they got to where they are, and how they stay there. Just like Dilbert, you'll initially laugh, then you'll cry, because you'll realize just how true Putt's Law really is. But, unlike Dilbert, whose technologist-fans tend to have a revulsion for management, Putt tries to show the technologist how to become one of the despised. Now granted, not all of us technologists have a desire to be management, it is still useful to "know one's enemy."

    Two amazing facts:

    1. Archibald Putt is a pseudonym and his true identity has yet to be revealed. A true "Deep Throat" for us IT guys.
    2. Putt's Law was written back in 1981. It amazes me how the Old IT Classics (Putt's Law, Mythical Man-Month, anything by Knuth) are even more relevant today than ever.

    Every technical hierarchy, in time, develops a competence inversion. --Putt's Corollary

    Putt's Corollary says that in a corporate technocracy, the more technically competent people will remain in charge of the technology, whereas the less competent will be promoted to management. That sounds a lot like The Peter Principle (another timeless classic written in 1969).

    People rise to their level of incompetence. --Dave's Summary of the Peter Principle

    I can tell you that managers have the least information about technical issues and they should be the last people making technical decisions. Period. I've often heard that managers are used as the arbiters of technical debates. Bad idea. Arbiters should always be the benevolent dictators (the most admired/revered technologist you have). The exception is when your manager is also your benevolent dictator, which is rare. Few humans have the capability, or time, for both.

    I see more and more hit-and-run managers where I work. They feel as though they are the technical decision-makers. They attend technical meetings they were not invited to. Then they ask pointless, irrelevant questions that suck the energy out of the team. Then they want status updates hourly. Eventually after they have totally derailed the process they move along to some other, sexier problem with more management visibility.

    I really admire managers who follow the MBWA ( management by walking around ) principle. This management philosophy is very simple...the best managers are those who leave their offices and observe. By observing they learn what the challenges are for their teams and how to help them better.

    So, what I am looking for in a manager

    1. He knows he is the least qualified person to make a technical decision.
    2. He is a facilitator. He knows how to help his technologists succeed.
    3. MBWA

    [Nov 05, 2018] Why the Peter Principle Works

    Notable quotes:
    "... The Corner Office ..."
    Aug 15, 2011 | www.cbsnews.com
    Everyone's heard of the Peter Principle - that employees tend to rise to their level of incompetence - a concept that walks that all-too-fine line between humor and reality.

    We've all seen it in action more times than we'd like. Ironically, some percentage of you will almost certainly be promoted to a position where you're no longer effective. For some of you, that's already happened. Sobering thought.

    Well, here's the thing. Not only is the Peter Principle alive and well in corporate America, but contrary to popular wisdom, it's actually necessary for a healthy capitalist system. That's right, you heard it here, folks, incompetence is a good thing. Here's why.

    Robert Browning once said, "A man's reach should exceed his grasp." It's a powerful statement that means you should seek to improve your situation, strive to go above and beyond. Not only is that an embodiment of capitalism, but it also leads directly to the Peter Principle because, well, how do you know when to quit?

    Now, most of us don't perpetually reach for the stars, but until there's clear evidence that we're not doing ourselves or anyone else any good, we're bound to keep right on reaching. After all, objectivity is notoriously difficult when opportunities for a better life are staring you right in the face.

    I mean, who turns down promotions? Who doesn't strive to reach that next rung on the ladder? When you get an email from an executive recruiter about a VP or CEO job, are you likely to respond, "Sorry, I think that may be beyond my competency" when you've got to send two kids to college and you may actually want to retire someday?

    Wasn't America founded by people who wanted a better life for themselves and their children? God knows, there were plenty of indications that they shouldn't take the plunge and, if they did, wouldn't succeed. That's called a challenge and, well, do you ever really know if you've reached too far until after the fact?

    Perhaps the most interesting embodiment of all this is the way people feel about CEOs. Some think pretty much anyone can do a CEO's job for a fraction of the compensation. Seriously, you hear that sort of thing a lot, especially these days with class warfare being the rage and all.

    One The Corner Office reader asked straight out in an email: "Would you agree that, in most cases, the company could fire the CEO and hire someone young, smart, and hungry at 1/10 the salary/perks/bonuses who would achieve the same performance?"

    Sure, it's easy: you just set the direction, hire a bunch of really smart executives, then get out of the way and let them do their jobs. Once in a blue moon you swoop in, deal with a problem, then return to your ivory tower. Simple.

    Well, not exactly.

    You see, I sort of grew up at Texas Instruments in the 80s when the company was nearly run into the ground by Mark Shepherd and J. Fred Bucy - two CEOs who never should have gotten that far in their careers.

    But the company's board, in its wisdom, promoted Jerry Junkins and, after his untimely death, Tom Engibous , to the CEO post. Not only were those guys competent, they revived the company and transformed it into what it is today.

    I've seen what a strong CEO can do for a company, its customers, its shareholders, and its employees. I've also seen the destruction the Peter Principle can bring to those same stakeholders. But, even now, after 30 years of corporate and consulting experience, the one thing I've never seen is a CEO or executive with an easy job.

    That's because there's no such thing. And to think you can eliminate incompetency from the executive ranks when it exists at every organizational level is, to be blunt, childlike or Utopian thinking. It's silly and trite. It doesn't even make sense.

    It's not as if TI's board knew ahead of time that Shepherd and Bucy weren't the right guys for the job. They'd both had long, successful careers at the company. But the board did right the ship in time. And that's the mark of a healthy system at work.

    The other day I read a truly fantastic story in Fortune about the rise and fall of Jeffrey Kindler as CEO of troubled pharmaceutical giant Pfizer . I remember when he suddenly stepped down amidst all sorts of rumor and conjecture about the underlying causes of the shocking news.

    What really happened is the guy had a fabulous career as a litigator, climbed the corporate ladder to general counsel of McDonald's and then Pfizer, had some limited success in operations, and once he was promoted to CEO, flamed out. Not because he was incompetent - he wasn't. And certainly not because he was a dysfunctional, antagonistic, micromanaging control freak - he was.

    He failed because it was a really tough job and he was in over his head. It happens. It happens a lot. After all, this wasn't just some everyday company that's simple to run. This was Pfizer - a pharmaceutical giant with its top products going generic and a dried-up drug pipeline in need of a major overhaul.

    The guy couldn't handle it. And when executives with issues get in over their heads, their issues become their undoing. It comes as no surprise that folks at McDonald's were surprised at the way he flamed out at Pfizer. That was a whole different ballgame.

    Now, I bet those same people who think a CEO's job is a piece of cake will have a similar response to the Kindler situation at Pfizer. Why take the job if he knew he couldn't handle it? The board should have canned him before it got to that point. Why didn't the guy's executives speak up sooner?

    Because, just like at TI, nobody knows ahead of time if people are going to be effective on the next rung of the ladder. Every situation is unique and there are no questions or tests that will foretell the future. I mean, it's not as if King Solomon comes along and writes who the right guy for the job is on the wall.

    The Peter Principle works because, in a capitalist system, there are top performers, abysmal failures, and everything in between. Expecting anything different when people must reach for the stars to achieve growth and success so our children have a better life than ours isn't how it works in the real world.

    The Peter Principle works because it's the yin to Browning's yang, the natural outcome of striving to better our lives. Want to know how to bring down a free market capitalist system? Don't take the promotion because you're afraid to fail.

    [Nov 05, 2018] Putt's Law, Peter Principle, Dilbert Principle of Incompetence Parkinson's Law

    Nov 05, 2018 | asmilingassasin.blogspot.com

    Putt's Law, Peter Principle, Dilbert Principle of Incompetence & Parkinson's Law

    June 10, 2015
    I am a big fan of Scott Adams and the Dilbert comic series. I realize that these laws and principles - Putt's Law, the Peter Principle, the Dilbert Principle, and Parkinson's Law - aren't necessarily founded in reality. It's easy to look at a manager's closed doors and wonder what he or she does all day, if anything. But having said that, I have come to realize the difficulty and scope of what management entails. It's hard work and requires a certain skill-set that I'm only beginning to develop. One should therefore look at these principles and laws with an acknowledgment that they most likely developed from the employee's perspective, not the manager's. Take with a pinch of salt!
    Putt's Law:

    · Putt's Law: "Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand."
    · Putt's Corollary: "Every technical hierarchy, in time, develops a competence inversion," with incompetence being "flushed out of the lower levels" of a technocratic hierarchy, ensuring that technically competent people remain directly in charge of the actual technology while those without technical competence move into management.

    The Peter Principle: The Peter Principle states that "in a hierarchy every employee tends to rise to his level of incompetence." In other words, employees who perform their roles with competence are promoted into successively higher levels until they reach a level at which they are no longer competent. There they remain. For example, let's say you are a brilliant programmer. You spend your days coding with amazing efficiency and prowess. After a couple of years, you're promoted to lead programmer, and then promoted to team manager. You may have no interest in managing other programmers, but it's the reward for your competence. There you sit -- you have risen to a level of incompetence. Your technical skills lie dormant while you fill your day with one-on-one meetings, department strategy meetings, planning meetings, budgets, and reports.

    The Dilbert Principle: The principle states that companies tend to promote the most incompetent employees to management as a form of damage control. The principle argues that leaders, specifically those in middle management, are in reality the ones that have little effect on productivity. In order to limit the harm caused by incompetent employees who are actually not doing the work, companies make them leaders. The Dilbert Principle assumes that "the majority of real, productive work in a company is done by people lower in the power ladder." Those in management don't actually do anything to move the work forward.

    How it happens? The Incompetent Leader Stereotype often hits new leaders, specifically those who have no prior experience in a particular field. Often, leaders who have been transferred from other departments are viewed as mere figureheads rather than actual leaders with knowledge of the work situation. Failure to prove technical capability can also lead to a leader being branded incompetent.

    Why it's bad? Being a victim of the incompetent leader stereotype is bad. Firstly, no one takes you seriously. Your ability to contribute input to projects is hampered when your followers actively disregard anything you say as fluff. This is especially true if you are in middle management, where your power as a leader is limited. Secondly, your chances of rising through the ranks are curtailed. If your followers view you as an incompetent leader, your superiors are unlikely to entrust you with further, higher-impact projects.

    How to get over it: Know when to concede. As a leader, no one expects you to be competent in every area, though basic knowledge of every section you are leading is necessary. Readily admitting incompetency in certain areas takes the impact out of it when others paint you as incompetent. Prove competency somewhere. Quickly establish yourself as having some purpose in the workplace, rather than being a mere token. This can be done by personally involving yourself in certain projects.

    Parkinson's Law: Parkinson's Law states that "work expands so as to fill the time available for its completion." Although this law has applications to procrastination, storage capacity, and resource usage, Parkinson focused his law on corporate bureaucracy. Parkinson says that bureaucracies swell for two reasons: (1) "A manager wants to multiply subordinates, not rivals" and (2) "Managers make work for each other." In other words, a team's size may swell not because the workload increases, but because the team has the capacity and resources that allow for an increased workload even if the workload does not in fact increase. People without any work find ways to increase the amount of "work" and therefore add to the size of the bureaucracy.

    My Analysis: I know none of these principles or laws gives much credit to management. The wrong person fills the wrong role, the role exists only as a form of damage control, or the role swells unnecessarily simply because it can. I find the whole topic of management somewhat fascinating, not because I think these theories apply to my own managers. These management theories are nevertheless relevant. Software coders looking to leverage coding talent for their projects often find themselves in management roles without a strong understanding of how to manage people. Most of the time, these coders fail to engage. The project leaders are usually brilliant at their technical job but don't excel at management.
    However the key principle to follow should be this: put individuals to work in their core competencies . It makes little sense to take your most brilliant engineer and have him or her manage people and budgets. Likewise, it makes no sense to take a shrewd consultant, one who can negotiate projects and requirements down to the minutest detail, and put that individual into a role involving creative design and content generation. However, to implement this model, you have to allow for reward without a dramatic change in job responsibilities or skills.

    [Nov 04, 2018] Archibald Putt The Unknown Technocrat Returns - IEEE Spectrum

    Nov 04, 2018 | spectrum.ieee.org

    While similar things can, and do, occur in large technical hierarchies, incompetent technical people experience a social pressure from their more competent colleagues that causes them to seek security within the ranks of management. In technical hierarchies, there is always the possibility that incompetence will be rewarded by promotion.

    Other Putt laws we love include the law of failure: "Innovative organizations abhor little failures but reward big ones." And the first law of invention: "An innovated success is as good as a successful innovation."

    Now Putt has revised and updated his short, smart book, to be released in a new edition by Wiley-IEEE Press ( http://www.wiley.com/ieee ) at the end of this month. There have been murmurings that Putt's identity, the subject of much rumormongering, will be revealed after the book comes out, but we think that's unlikely. How much more interesting it is to have an anonymous chronicler wandering the halls of the tech industry, codifying its unstated, sometimes bizarre, and yet remarkably consistent rules of behavior.

    This is management writing the way it ought to be. Think Dilbert , but with a very big brain. Read it and weep. Or laugh, depending on your current job situation.

    [Nov 04, 2018] Two Minutes on Hiring by Eric Samuelson

    Notable quotes:
    "... Eric Samuelson is the creator of the Confident Hiring System™. Working with Dave Anderson of Learn to Lead, he provides the Anderson Profiles and related services to clients in the automotive retail industry as well as a variety of other businesses. ..."
    Nov 04, 2018 | www.andersonprofiles.com

    In 1981, an author in the Research and Development field, writing under the pseudonym Archibald Putt, penned this famous quote, now known as Putt's Law:

    "Technology is dominated by two types of people: those who understand what they do not manage, and those who manage what they do not understand."

    Have you ever hired someone without knowing for sure if they can do the job? Have you promoted a good salesperson to management only to realize you made a dire mistake? The qualities needed to succeed in a technical field are quite different than for a leader.

    The legendary immigrant engineer Charles Steinmetz worked at General Electric in the early 1900s. He made phenomenal advancements in the field of electric motors. His work was instrumental to the growth of the electric power industry. With a goal of rewarding him, GE promoted him to a management position, but he failed miserably. Realizing their error, and not wanting to offend this genius, GE's leadership retitled him as a Chief Engineer, with no supervisory duties, and let him go back to his research.

    Avoid the double disaster of losing a good worker by promoting him to management failure. By using the unique Anderson Position Overlay system, you can avoid future regret by comparing your candidate's qualities to the requirements of the position before saying "Welcome Aboard".

    Eric Samuelson is the creator of the Confident Hiring System™. Working with Dave Anderson of Learn to Lead, he provides the Anderson Profiles and related services to clients in the automotive retail industry as well as a variety of other businesses.

    [Nov 04, 2018] Putt's Law and the Successful Technocrat

    Nov 04, 2018 | en.wikipedia.org


    Putt's Law and the Successful Technocrat
    Author: Archibald Putt (pseudonym)
    Illustrator: Dennis Driscoll
    Country: United States
    Language: English
    Genre: Industrial Management
    Publisher: Wiley-IEEE Press
    Publication date: 28 April 2006
    Media type: Print (hardcover)
    Pages: 171
    ISBN: 0-471-71422-4
    OCLC: 68710099
    Dewey Decimal: 658.22
    LC Class: HD31 .P855 2006

    Putt's Law and the Successful Technocrat is a book, credited to the pseudonym Archibald Putt, published in 1981. An updated edition, subtitled How to Win in the Information Age , was published by Wiley-IEEE Press in 2006. The book is based upon a series of articles published in Research/Development Magazine in 1976 and 1977.

    It proposes Putt's Law and Putt's Corollary [1] which are principles of negative selection similar to The Dilbert principle by Scott Adams proposed in the 1990s. Putt's law is sometimes grouped together with the Peter principle , Parkinson's Law and Stephen Potter 's Gamesmanship series as "P-literature". [2]

    Putt's Law

    The book proposes Putt's Law ("Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand") and Putt's Corollary ("Every technical hierarchy, in time, develops a competence inversion"). [3]

    References
    1. Archibald Putt. Putt's Law and the Successful Technocrat: How to Win in the Information Age, Wiley-IEEE Press (2006), ISBN 0-471-71422-4. Preface.
    2. John Walker (October 1981). "Review of Putt's Law and the Successful Technocrat". New Scientist: 52.
    3. Archibald Putt. Putt's Law and the Successful Technocrat: How to Win in the Information Age, Wiley-IEEE Press (2006), ISBN 0-471-71422-4. Page 7.

    [Nov 03, 2018] David Both

    Jun 22, 2017 | opensource.com
    ...

    The long listing of the /lib64 directory above shows that the first character in the filemode is the letter "l," which means that each is a soft or symbolic link.

    Hard links

    In An introduction to Linux's EXT4 filesystem , I discussed the fact that each file has one inode that contains information about that file, including the location of the data belonging to that file. Figure 2 in that article shows a single directory entry that points to the inode. Every file must have at least one directory entry that points to the inode that describes the file. The directory entry is a hard link, thus every file has at least one hard link.

    In Figure 1 below, multiple directory entries point to a single inode. These are all hard links. I have abbreviated the locations of three of the directory entries using the tilde ( ~ ) convention for the home directory, so that ~ is equivalent to /home/user in this example. Note that the fourth directory entry is in a completely different directory, /home/shared , which might be a location for sharing files between users of the computer.

    fig1directory_entries.png Figure 1

    Hard links are limited to files contained within a single filesystem. "Filesystem" is used here in the sense of a partition or logical volume (LV) that is mounted on a specified mount point, in this case /home . This is because inode numbers are unique only within each filesystem, and a different filesystem, for example, /var or /opt , will have inodes with the same number as the inode for our file.

    Because all the hard links point to the single inode that contains the metadata about the file, all of these attributes are part of the file, such as ownerships, permissions, and the total number of hard links to the inode, and cannot be different for each hard link. It is one file with one set of attributes. The only attribute that can be different is the file name, which is not contained in the inode. Hard links to a single file/inode located in the same directory must have different names, due to the fact that there can be no duplicate file names within a single directory.
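
    Because those attributes live in the shared inode, changing one of them through any name changes it for every name. A minimal sketch of that behaviour, using hypothetical file names (report.txt, report-link.txt):

    touch report.txt
    ln report.txt report-link.txt      # a second hard link to the same inode
    chmod 600 report.txt               # change permissions through one name...
    ls -l report.txt report-link.txt   # ...and both names now show -rw------- because the mode lives in the inode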

    The number of hard links for a file is displayed with the ls -l command. If you want to display the actual inode numbers, the command ls -li does that.
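
    If you prefer a single command that prints both values, GNU stat can report the inode number and hard-link count directly; a small sketch, again with a hypothetical file name:

    stat -c 'inode=%i links=%h name=%n' report.txt
    ls -li report.txt                  # same data from ls: inode number first, link count after the file mode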

    Symbolic (soft) links

    The difference between a hard link and a soft link, also known as a symbolic link (or symlink), is that, while hard links point directly to the inode belonging to the file, soft links point to a directory entry, i.e., one of the hard links. Because soft links point to a hard link for the file and not the inode, they are not dependent upon the inode number and can work across filesystems, spanning partitions and LVs.

    The downside to this is: If the hard link to which the symlink points is deleted or renamed, the symlink is broken. The symlink is still there, but it points to a hard link that no longer exists. Fortunately, the ls command highlights broken links with flashing white text on a red background in a long listing.
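
    Broken symlinks can also be hunted down non-interactively. A sketch assuming GNU findutils and coreutils, with broken.link as a hypothetical link name:

    find . -xtype l                    # list symlinks under the current directory whose targets no longer exist
    readlink broken.link               # show where a suspect link points, even if the target is gone
    readlink -e broken.link            # prints nothing and exits non-zero when the target is missing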

    Lab project: experimenting with links

    I think the easiest way to understand the use of and differences between hard and soft links is with a lab project that you can do. This project should be done in an empty directory as a non-root user . I created the ~/temp directory for this project, and you should, too. It creates a safe place to do the project and provides a new, empty directory to work in so that only files associated with this project will be located there.

    Initial setup

    First, create the temporary directory in which you will perform the tasks needed for this project. Ensure that the present working directory (PWD) is your home directory, then enter the following command.

    mkdir temp
    

    Change into ~/temp to make it the PWD with this command.

    cd temp
    

    To get started, we need to create a file we can link to. The following command does that and provides some content as well.

    du -h > main.file.txt
    

    Use the ls -l long list to verify that the file was created correctly. It should look similar to my results. Note that the file size is only 7 bytes, but yours may vary by a byte or two.

    [ dboth @ david temp ] $ ls -l
    total 4
    -rw-rw-r-- 1 dboth dboth 7 Jun 13 07: 34 main.file.txt

    Notice the number "1" following the file mode in the listing. That number represents the number of hard links that exist for the file. For now, it should be 1 because we have not created any additional links to our test file.

    Experimenting with hard links

    Hard links create a new directory entry pointing to the same inode, so when hard links are added to a file, you will see the number of links increase. Ensure that the PWD is still ~/temp . Create a hard link to the file main.file.txt , then do another long list of the directory.

    [ dboth @ david temp ] $ ln main.file.txt link1.file.txt
    [ dboth @ david temp ] $ ls -l
    total 8
    -rw-rw-r-- 2 dboth dboth 7 Jun 13 07: 34 link1.file.txt
    -rw-rw-r-- 2 dboth dboth 7 Jun 13 07: 34 main.file.txt

    Notice that both files have two links and are exactly the same size. The date stamp is also the same. This is really one file with one inode and two links, i.e., directory entries to it. Create a second hard link to this file and list the directory contents. You can create the link to either of the existing ones: link1.file.txt or main.file.txt .

    [ dboth @ david temp ] $ ln link1.file.txt link2.file.txt ; ls -l
    total 16
    -rw-rw-r-- 3 dboth dboth 7 Jun 13 07: 34 link1.file.txt
    -rw-rw-r-- 3 dboth dboth 7 Jun 13 07: 34 link2.file.txt
    -rw-rw-r-- 3 dboth dboth 7 Jun 13 07: 34 main.file.txt

    Notice that each new hard link in this directory must have a different name because two files -- really directory entries -- cannot have the same name within the same directory. Try to create another link with a target name the same as one of the existing ones.

    [ dboth @ david temp ] $ ln main.file.txt link2.file.txt
    ln: failed to create hard link 'link2.file.txt' : File exists

    Clearly that does not work, because link2.file.txt already exists. So far, we have created only hard links in the same directory. So, create a link in your home directory, the parent of the temp directory in which we have been working so far.

    [ dboth @ david temp ] $ ln main.file.txt .. / main.file.txt ; ls -l .. / main *
    -rw-rw-r-- 4 dboth dboth 7 Jun 13 07: 34 main.file.txt

    The ls command in the above listing shows that the main.file.txt file does exist in the home directory with the same name as the file in the temp directory. Of course, these are not different files; they are the same file with multiple links -- directory entries -- to the same inode. To help illustrate the next point, add a file that is not a link.

    [ dboth @ david temp ] $ touch unlinked.file ; ls -l
    total 12
    -rw-rw-r-- 4 dboth dboth 7 Jun 13 07: 34 link1.file.txt
    -rw-rw-r-- 4 dboth dboth 7 Jun 13 07: 34 link2.file.txt
    -rw-rw-r-- 4 dboth dboth 7 Jun 13 07: 34 main.file.txt
    -rw-rw-r-- 1 dboth dboth 0 Jun 14 08: 18 unlinked.file

    Look at the inode number of the hard links and that of the new file using the -i option to the ls command.

    [ dboth @ david temp ] $ ls -li
    total 12
    657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07: 34 link1.file.txt
    657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07: 34 link2.file.txt
    657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07: 34 main.file.txt
    657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08: 18 unlinked.file

    Notice the number 657024 to the left of the file mode in the example above. That is the inode number, and all three file links point to the same inode. You can use the -i option to view the inode number for the link we created in the home directory as well, and that will also show the same value. The inode number of the file that has only one link is different from the others. Note that the inode numbers will be different on your system.

    Let's change the size of one of the hard-linked files.

    [ dboth @ david temp ] $ df -h > link2.file.txt ; ls -li
    total 12
    657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14 : 14 link1.file.txt
    657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14 : 14 link2.file.txt
    657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14 : 14 main.file.txt
    657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08: 18 unlinked.file

    The file size of all the hard-linked files is now larger than before. That is because there is really only one file that is linked to by multiple directory entries.

    I know this next experiment will work on my computer because my /tmp directory is on a separate LV. If you have a separate LV or a filesystem on a different partition (if you're not using LVs), determine whether or not you have access to that LV or partition. If you don't, you can try to insert a USB memory stick and mount it. If one of those options works for you, you can do this experiment.

    Try to create a link to one of the files in your ~/temp directory in /tmp (or wherever your different filesystem directory is located).

    [ dboth @ david temp ] $ ln link2.file.txt / tmp / link3.file.txt
    ln: failed to create hard link '/tmp/link3.file.txt' = > 'link2.file.txt' :
    Invalid cross-device link

    Why does this error occur? The reason is each separate mountable filesystem has its own set of inode numbers. Simply referring to a file by an inode number across the entire Linux directory structure can result in confusion because the same inode number can exist in each mounted filesystem.

    There may be a time when you will want to locate all the hard links that belong to a single inode. You can find the inode number using the ls -li command. Then you can use the find command to locate all links with that inode number.

    [ dboth @ david temp ] $ find . -inum 657024
    . / main.file.txt
    . / link1.file.txt
    . / link2.file.txt

    Note that the find command did not find all four of the hard links to this inode because we started at the current directory of ~/temp . The find command only finds files in the PWD and its subdirectories. To find all the links, we can use the following command, which specifies your home directory as the starting place for the search.

    [ dboth @ david temp ] $ find ~ -samefile main.file.txt
    / home / dboth / temp / main.file.txt
    / home / dboth / temp / link1.file.txt
    / home / dboth / temp / link2.file.txt
    / home / dboth / main.file.txt

    You may see error messages if you do not have permissions as a non-root user. This command also uses the -samefile option instead of specifying the inode number. This works the same as using the inode number and can be easier if you know the name of one of the hard links.
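
    Because inode numbers are only meaningful within a single filesystem, a wider search for the same file is best confined to one mount point. A sketch building on the example above, assuming /home is its own filesystem as in this article:

    # -xdev keeps find on the starting filesystem, where the inode number is unique;
    # errors from unreadable directories are discarded.
    find /home -xdev -samefile /home/dboth/temp/main.file.txt 2>/dev/null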

    Experimenting with soft links

    As you have just seen, creating hard links is not possible across filesystem boundaries; that is, from a filesystem on one LV or partition to a filesystem on another. Soft links are a means to answer that problem with hard links. Although they can accomplish the same end, they are very different, and knowing these differences is important.

    Let's start by creating a symlink in our ~/temp directory to start our exploration.

    [ dboth @ david temp ] $ ln -s link2.file.txt link3.file.txt ; ls -li
    total 12
    657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14 : 14 link1.file.txt
    657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14 : 14 link2.file.txt
    658270 lrwxrwxrwx 1 dboth dboth 14 Jun 14 15 : 21 link3.file.txt - >
    link2.file.txt
    657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14 : 14 main.file.txt
    657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08: 18 unlinked.file

    The hard links, those that have the inode number 657024 , are unchanged, and the number of hard links shown for each has not changed. The newly created symlink has a different inode, number 658270 . The soft link named link3.file.txt points to link2.file.txt . Use the cat command to display the contents of link3.file.txt . The file mode information for the symlink starts with the letter " l " which indicates that this file is actually a symbolic link.

    The size of the symlink link3.file.txt is only 14 bytes in the example above. That is the size of the text link3.file.txt -> link2.file.txt , which is the actual content of the directory entry. The directory entry link3.file.txt does not point to an inode; it points to another directory entry, which makes it useful for creating links that span file system boundaries. So, let's create that link we tried before from the /tmp directory.

    [ dboth @ david temp ] $ ln -s / home / dboth / temp / link2.file.txt
    / tmp / link3.file.txt ; ls -l / tmp / link *
    lrwxrwxrwx 1 dboth dboth 31 Jun 14 21 : 53 / tmp / link3.file.txt - >
    / home / dboth / temp / link2.file.txt Deleting links

    There are some other things that you should consider when you need to delete links or the files to which they point.

    First, let's delete the link main.file.txt . Remember that every directory entry that points to an inode is simply a hard link.

    [ dboth @ david temp ] $ rm main.file.txt ; ls -li
    total 8
    657024 -rw-rw-r-- 3 dboth dboth 1157 Jun 14 14 : 14 link1.file.txt
    657024 -rw-rw-r-- 3 dboth dboth 1157 Jun 14 14 : 14 link2.file.txt
    658270 lrwxrwxrwx 1 dboth dboth 14 Jun 14 15 : 21 link3.file.txt - >
    link2.file.txt
    657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08: 18 unlinked.file

    The link main.file.txt was the first link created when the file was created. Deleting it now still leaves the original file and its data on the hard drive along with all the remaining hard links. To delete the file and its data, you would have to delete all the remaining hard links.

    Now delete the link2.file.txt hard link.

    [ dboth @ david temp ] $ rm link2.file.txt ; ls -li
    total 8
    657024 -rw-rw-r-- 3 dboth dboth 1157 Jun 14 14 : 14 link1.file.txt
    658270 lrwxrwxrwx 1 dboth dboth 14 Jun 14 15 : 21 link3.file.txt - >
    link2.file.txt
    657024 -rw-rw-r-- 3 dboth dboth 1157 Jun 14 14 : 14 main.file.txt
    657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08: 18 unlinked.file

    Notice what happens to the soft link. Deleting the hard link to which the soft link points leaves a broken link. On my system, the broken link is highlighted in colors and the target hard link is flashing. If the broken link needs to be fixed, you can create another hard link in the same directory with the same name as the old one, so long as not all the hard links have been deleted. You could also recreate the link itself, with the link maintaining the same name but pointing to one of the remaining hard links. Of course, if the soft link is no longer needed, it can be deleted with the rm command.

    The unlink command can also be used to delete files and links. It is very simple and has no options, as the rm command does. It does, however, more accurately reflect the underlying process of deletion, in that it removes the link -- the directory entry -- to the file being deleted.
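
    A minimal sketch of the difference between the two commands, using hypothetical file names rather than the lab files:

    unlink stale.link                  # removes exactly one directory entry; accepts no options
    rm -v old.log older.log            # rm can take several names at once, plus options such as -v, -i or -r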

    Final thoughts

    I worked with both types of links for a long time before I began to understand their capabilities and idiosyncrasies. It took writing a lab project for a Linux class I taught to fully appreciate how links work. This article is a simplification of what I taught in that class, and I hope it speeds your learning curve.

    David Both is a Linux and Open Source advocate who resides in Raleigh, North Carolina. He has been in the IT industry for over forty years and taught OS/2 for IBM where he worked for over 20 years. While at IBM, he wrote the first training course for the original IBM PC in 1981. He has taught RHCE classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State of North Carolina. He has been working with Linux and Open Source Software for almost 20 years.

    dgrb on 23 Jun 2017

    There is a hard link "gotcha" which IMHO is worth mentioning.

    If you use an editor which makes automatic backups - emacs certainly is one such - then you may end up with a new version of the edited file, while the backup is the linked copy, because the editor simply renames the file to the backup name (with emacs, test.c would be renamed test.c~) and the new version when saved under the old name is no longer linked.

    Symbolic links avoid this problem, so I tend to use them for source code where required.
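
    The gotcha is easy to reproduce by hand, without any particular editor. A sketch with hypothetical file names that mimics a rename-style backup:

    echo "original" > notes.txt
    ln notes.txt keep.txt              # hard link: both names share one inode
    mv notes.txt notes.txt~            # what a rename-style backup does
    echo "edited" > notes.txt          # saving creates a brand-new file with a new inode
    cat keep.txt                       # still prints "original" - the hard link followed the backup
    ls -li notes.txt notes.txt~ keep.txt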

    [Nov 03, 2018] Neoliberal Measurement Mania

    Highly recommended!
    Technology is dominated by two types of people: those who understand what they do not manage, and those who manage what they do not understand. -- Archibald Putt
    Neoliberal PHBs like to talk about KLOCs, error counts, tickets closed and other numerical measurements designed so that they can be used by lower-level PHBs to report fake results to higher-level PHBs. These attempts to quantify 'the quality' and volume of work performed by software developers and sysadmins completely miss the point. For software, it can lead to code bloat.
    The number of tickets taken and resolved in a specified time period is probably the most ignorant way to measure the performance of sysadmins. A sysadmin can invent creative ways of generating and resolving tickets, and spend time accomplishing fake tasks instead of thinking about the real problems the datacenter faces. Primitive measurement strategies devalue the work being performed by sysadmins and programmers. They focus on the wrong things. They create boundaries that are supposed to contain us in a manner that is comprehensible to the PHB, who knows nothing about the real problems we face.
    Notable quotes:
    "... Technology is dominated by two types of people: those who understand what they do not manage, and those who manage what they do not understand. ..."
    Nov 03, 2018 | www.rako.com

    In an advanced research or development project, success or failure is largely determined when the goals or objectives are set and before a manager is chosen. While a hard-working and diligent manager can increase the chances of success, the outcome of the project is most strongly affected by preexisting but unknown technological factors over which the project manager has no control. The success or failure of the project should not, therefore, be used as the sole measure or even the primary measure of the manager's competence.

    Putt's Law Is promulgated

    Without an adequate competence criterion for technical managers, there is no way to determine when a person has reached his level of incompetence. Thus a clever and ambitious individual may be promoted from one level of incompetence to another. He will ultimately perform incompetently in the highest level of the hierarchy just as he did in numerous lower levels. The lack of an adequate competence criterion combined with the frequent practice of creative incompetence in technical hierarchies results in a competence inversion, with the most competent people remaining near the bottom while persons of lesser talent rise to the top. It also provides the basis for Putt's Law, which can be stated in an intuitive and nonmathematical form as follows:

    Technology is dominated by two types of people: those who understand what they do not manage, and those who manage what they do not understand.

    As in any other hierarchy, the majority of persons in technology neither understand nor manage much of anything. This, however, does not create an exception to Putt's Law, because such persons clearly do not dominate the hierarchy. While this was not previously stated as a basic law, it is clear that the success of every technocrat depends on his ability to deal with and benefit from the consequences of Putt's Law.

    [Nov 03, 2018] Archibald Putt The Unknown Technocrat Returns - IEEE Spectrum

    Notable quotes:
    "... Who is Putt? Well, for those of you under 40, the pseudonymous Archibald Putt, Ph.D., penned a series of articles for Research/Development magazine in the 1970s that eventually became the 1981 cult classic Putt's Law and the Successful Technocrat , an unorthodox and archly funny how-to book for achieving tech career success. ..."
    "... His first law, "Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand," along with its corollary, "Every technical hierarchy, in time, develops a competence inversion," have been immortalized on Web sites around the world. ..."
    "... what's a competence inversion? It means that the best and the brightest in a technology company tend to settle on the lowest rungs of the corporate ladder -- where things like inventing and developing new products get done -- while those who manage what they cannot hope to make or understand float to the top (see Putt's first law, above, and a fine example of Putt's law in action in the editorial, " Is Bad Design a Nuisance? "). ..."
    "... Other Putt laws we love include the law of failure: "Innovative organizations abhor little failures but reward big ones." And the first law of invention: "An innovated success is as good as a successful innovation." ..."
    "... This is management writing the way it ought to be. Think Dilbert , but with a very big brain. Read it and weep. Or laugh, depending on your current job situation. ..."
    "... n.hantman@ieee.org ..."
    Nov 03, 2018 | spectrum.ieee.org

    If you want to jump-start your technology career, put aside your Peter Drucker, your Tom Peters, and your Marcus Buckingham management tomes. Archibald Putt is back.

    Who is Putt? Well, for those of you under 40, the pseudonymous Archibald Putt, Ph.D., penned a series of articles for Research/Development magazine in the 1970s that eventually became the 1981 cult classic Putt's Law and the Successful Technocrat , an unorthodox and archly funny how-to book for achieving tech career success.

    In the book, Putt put forth a series of laws and axioms for surviving and succeeding in the unique corporate cultures of big technology companies, where being the builder of the best technology and becoming the top dog on the block almost never mix. His first law, "Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand," along with its corollary, "Every technical hierarchy, in time, develops a competence inversion," have been immortalized on Web sites around the world.

    The first law is obvious, but what's a competence inversion? It means that the best and the brightest in a technology company tend to settle on the lowest rungs of the corporate ladder -- where things like inventing and developing new products get done -- while those who manage what they cannot hope to make or understand float to the top (see Putt's first law, above, and a fine example of Putt's law in action in the editorial, " Is Bad Design a Nuisance? ").

    Other Putt laws we love include the law of failure: "Innovative organizations abhor little failures but reward big ones." And the first law of invention: "An innovated success is as good as a successful innovation."

    Now Putt has revised and updated his short, smart book, to be released in a new edition by Wiley-IEEE Press ( http://www.wiley.com/ieee ) at the end of this month. There have been murmurings that Putt's identity, the subject of much rumormongering, will be revealed after the book comes out, but we think that's unlikely. How much more interesting it is to have an anonymous chronicler wandering the halls of the tech industry, codifying its unstated, sometimes bizarre, and yet remarkably consistent rules of behavior.

    This is management writing the way it ought to be. Think Dilbert , but with a very big brain. Read it and weep. Or laugh, depending on your current job situation.

    The editorial content of IEEE Spectrum does not represent official positions of the IEEE or its organizational units. Please address comments to Forum at n.hantman@ieee.org .

    [Nov 03, 2018] Technology is dominated by two types of people; those who understand what they don't manage; and those who manage what they don't understand – ARCHIBALD PUTT (PUTT'S LAW)

    Notable quotes:
    "... These C level guys see cloud services – applications, data, backup, service desk – as a great way to free up a blockage in how IT service is being delivered on premise. ..."
    "... IMHO there is a big difference between management of IT and management of IT service. Rarely do you get people who can do both. ..."
    Nov 03, 2018 | brummieruss.wordpress.com

    ...Cloud introduces a whole new ball game and will no doubt perpetuate Putt's Law for evermore. Why?

    Well unless 100% of IT infrastructure goes up into the clouds ( unlikely for any organization with a history ; likely for a new organization ( probably micro small ) that starts up in the next few years ) the 'art of IT management' will demand even more focus and understanding.

    I always think a great acid test of Putts Law is to look at one of the two aspects of IT management

    1. Show me a simple process that you follow each day that delivers an aspect of IT service i.e. how to buy a piece of IT stuff, or a way to report a fault
    2. Show me how you manage a single entity on the network i.e. a file server, a PC, a network switch

    Usually the answers ( which will be different from people on the same team, in the same room and from the same person on different days !) will give you an insight to Putts Law.

    Child's play, of course, for most who are challenged with some really complex management situations such as data center virtualization projects, storage explosion control, edge device management, backend application upgrades, global messaging migrations and B2C identity integration. But of course, if it's evidenced that they seem to be managing (simple things) without true understanding, one could argue 'how the hell can they be expected to manage what they understand with the complex things?' Fair point?

    Of course many C level people have an answer to Putts Law. Move the problem to people who do understand what they manage. Professionals who provide cloud versions of what the C level person struggles to get a professional service from. These C level guys see cloud services – applications, data, backup, service desk – as a great way to free up a blockage in how IT service is being delivered on premise. And they are right ( and wrong ).

    ... ... ...

    ( Quote attributed to Archibald Putt author of Putt's Law and the Successful Technocrat: How to Win in the Information Age )

    rowan says: March 9, 2012 at 9:03 am

    IMHO there is a big difference between management of IT and management of IT service. Rarely do you get people who can do both. Understanding inventory, disk space, security etc is one thing; but understanding the performance of apps and user impact is another ball game. Putts Law is alive and well in my organisation. TGIF.

    Rowan in Belfast.

    stephen777 says: March 31, 2012 at 7:32 am

    Rowan is right. I used to be an IT Manager but now my title is Service Delivery Manager. Why? Because we had a new CTO who changed how people saw what we did. I've been doing this new role for 5 years and I really do understand what I don't manage. LOL

    Stephen777

    [Nov 03, 2018] David Both

    Nov 03, 2018 | opensource.com

    In earlier articles - An introduction to Linux's EXT4 filesystem ; Managing devices in Linux ; An introduction to Linux filesystems ; and A Linux user's guide to Logical Volume Management - I have briefly mentioned an interesting feature of Linux filesystems that can make some tasks easier by providing access to files from multiple locations in the filesystem directory tree.

    There are two types of Linux filesystem links: hard and soft. The difference between the two types of links is significant, but both types are used to solve similar problems. They both provide multiple directory entries (or references) to a single file, but they do it quite differently. Links are powerful and add flexibility to Linux filesystems because everything is a file .

    I have found, for instance, that some programs required a particular version of a library. When a library upgrade replaced the old version, the program would crash with an error specifying the name of the old, now-missing library. Usually, the only change in the library name was the version number. Acting on a hunch, I simply added a link to the new library but named the link after the old library name. I tried the program again and it worked perfectly. And, okay, the program was a game, and everyone knows the lengths that gamers will go to in order to keep their games running.

    In fact, almost all applications are linked to libraries using a generic name with only a major version number in the link name, while the link points to the actual library file that also has a minor version number. In other instances, required files have been moved from one directory to another to comply with the Linux file specification, and there are links in the old directories for backwards compatibility with those programs that have not yet caught up with the new locations. If you do a long listing of the /lib64 directory, you can find many examples of both.

    lrwxrwxrwx. 1 root root 36 Dec 8 2016 cracklib_dict.hwm -> ../../usr/share/cracklib/pw_dict.hwm
    lrwxrwxrwx. 1 root root 36 Dec 8 2016 cracklib_dict.pwd -> ../../usr/share/cracklib/pw_dict.pwd
    lrwxrwxrwx. 1 root root 36 Dec 8 2016 cracklib_dict.pwi -> ../../usr/share/cracklib/pw_dict.pwi
    lrwxrwxrwx. 1 root root 27 Jun 9 2016 libaccountsservice.so.0 -> libaccountsservice.so.0.0.0
    -rwxr-xr-x. 1 root root 288456 Jun 9 2016 libaccountsservice.so.0.0.0
    lrwxrwxrwx 1 root root 15 May 17 11:47 libacl.so.1 -> libacl.so.1.1.0
    -rwxr-xr-x 1 root root 36472 May 17 11:47 libacl.so.1.1.0
    lrwxrwxrwx. 1 root root 15 Feb 4 2016 libaio.so.1 -> libaio.so.1.0.1
    -rwxr-xr-x. 1 root root 6224 Feb 4 2016 libaio.so.1.0.0
    -rwxr-xr-x. 1 root root 6224 Feb 4 2016 libaio.so.1.0.1
    lrwxrwxrwx. 1 root root 30 Jan 16 16:39 libakonadi-calendar.so.4 -> libakonadi-calendar.so.4.14.26
    -rwxr-xr-x. 1 root root 816160 Jan 16 16:39 libakonadi-calendar.so.4.14.26
    lrwxrwxrwx. 1 root root 29 Jan 16 16:39 libakonadi-contact.so.4 -> libakonadi-contact.so.4.14.26

    A few of the links in the /lib64 directory
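
    The compatibility-link trick for library versions described above can be reproduced by hand in the same way. A minimal sketch with an invented library name - not something to copy onto a production system:

    # Only the newer minor version is actually installed:
    ls libexample.so.2*
    #   libexample.so.2.4.1
    # A program hard-wired to the old file name can be satisfied with a soft link:
    ln -s libexample.so.2.4.1 libexample.so.2.3.0
    ls -l libexample.so.2.3.0
    #   lrwxrwxrwx ... libexample.so.2.3.0 -> libexample.so.2.4.1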

    The long listing of the /lib64 directory above shows that the first character in the filemode is the letter "l," which means that each is a soft or symbolic link.

    Hard links

    In An introduction to Linux's EXT4 filesystem , I discussed the fact that each file has one inode that contains information about that file, including the location of the data belonging to that file. Figure 2 in that article shows a single directory entry that points to the inode. Every file must have at least one directory entry that points to the inode that describes the file. The directory entry is a hard link, thus every file has at least one hard link.

    In Figure 1 below, multiple directory entries point to a single inode. These are all hard links. I have abbreviated the locations of three of the directory entries using the tilde ( ~ ) convention for the home directory, so that ~ is equivalent to /home/user in this example. Note that the fourth directory entry is in a completely different directory, /home/shared , which might be a location for sharing files between users of the computer.

    fig1directory_entries.png Figure 1

    Hard links are limited to files contained within a single filesystem. "Filesystem" is used here in the sense of a partition or logical volume (LV) that is mounted on a specified mount point, in this case /home . This is because inode numbers are unique only within each filesystem, and a different filesystem, for example, /var or /opt , will have inodes with the same number as the inode for our file.

    Because all the hard links point to the single inode that contains the metadata about the file, all of these attributes are part of the file, such as ownerships, permissions, and the total number of hard links to the inode, and cannot be different for each hard link. It is one file with one set of attributes. The only attribute that can be different is the file name, which is not contained in the inode. Hard links to a single file/inode located in the same directory must have different names, due to the fact that there can be no duplicate file names within a single directory.

    The number of hard links for a file is displayed with the ls -l command. If you want to display the actual inode numbers, the command ls -li does that.
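
    The stat command shows the same two values without the rest of the long-listing output; a minimal sketch, using /etc/hosts only because it exists on practically every Linux system:

    # %h = number of hard links, %i = inode number, %n = file name
    stat -c '%h %i %n' /etc/hosts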

    Symbolic (soft) links

    The difference between a hard link and a soft link, also known as a symbolic link (or symlink), is that, while hard links point directly to the inode belonging to the file, soft links point to a directory entry, i.e., one of the hard links. Because soft links point to a hard link for the file and not the inode, they are not dependent upon the inode number and can work across filesystems, spanning partitions and LVs.

    The downside to this is: If the hard link to which the symlink points is deleted or renamed, the symlink is broken. The symlink is still there, but it points to a hard link that no longer exists. Fortunately, the ls command highlights broken links with flashing white text on a red background in a long listing.
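
    With GNU find you can also hunt for broken symlinks directly instead of relying on the color highlighting; a small sketch, searching from the current directory:

    # -xtype l matches symbolic links whose targets no longer exist (GNU find)
    find . -xtype l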

    Lab project: experimenting with links

    I think the easiest way to understand the use of and differences between hard and soft links is with a lab project that you can do. This project should be done in an empty directory as a non-root user. I created the ~/temp directory for this project, and you should, too. It creates a safe place to do the project and provides a new, empty directory to work in so that only files associated with this project will be located there.

    Initial setup

    First, create the temporary directory in which you will perform the tasks needed for this project. Ensure that the present working directory (PWD) is your home directory, then enter the following command.

    mkdir temp
    

    Change into ~/temp to make it the PWD with this command.

    cd temp
    

    To get started, we need to create a file we can link to. The following command does that and provides some content as well.

    du -h > main.file.txt
    

    Use the ls -l long list to verify that the file was created correctly. It should look similar to my results. Note that the file size is only 7 bytes, but yours may vary by a byte or two.

    [dboth@david temp]$ ls -l
    total 4
    -rw-rw-r-- 1 dboth dboth 7 Jun 13 07:34 main.file.txt

    Notice the number "1" following the file mode in the listing. That number represents the number of hard links that exist for the file. For now, it should be 1 because we have not created any additional links to our test file.

    Experimenting with hard links

    Hard links create a new directory entry pointing to the same inode, so when hard links are added to a file, you will see the number of links increase. Ensure that the PWD is still ~/temp. Create a hard link to the file main.file.txt, then do another long list of the directory.

    [dboth@david temp]$ ln main.file.txt link1.file.txt
    [dboth@david temp]$ ls -l
    total 8
    -rw-rw-r-- 2 dboth dboth 7 Jun 13 07:34 link1.file.txt
    -rw-rw-r-- 2 dboth dboth 7 Jun 13 07:34 main.file.txt

    Notice that both files have two links and are exactly the same size. The date stamp is also the same. This is really one file with one inode and two links, i.e., directory entries to it. Create a second hard link to this file and list the directory contents. You can create the link to either of the existing ones: link1.file.txt or main.file.txt.

    [dboth@david temp]$ ln link1.file.txt link2.file.txt ; ls -l
    total 16
    -rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 link1.file.txt
    -rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 link2.file.txt
    -rw-rw-r-- 3 dboth dboth 7 Jun 13 07:34 main.file.txt

    Notice that each new hard link in this directory must have a different name because two files -- really directory entries -- cannot have the same name within the same directory. Try to create another link with a target name the same as one of the existing ones.

    [dboth@david temp]$ ln main.file.txt link2.file.txt
    ln: failed to create hard link 'link2.file.txt': File exists

    Clearly that does not work, because link2.file.txt already exists. So far, we have created only hard links in the same directory. So, create a link in your home directory, the parent of the temp directory in which we have been working so far.

    [dboth@david temp]$ ln main.file.txt ../main.file.txt ; ls -l ../main*
    -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 ../main.file.txt

    The ls command in the above listing shows that the main.file.txt file does exist in the home directory with the same name as the file in the temp directory. Of course, these are not different files; they are the same file with multiple links -- directory entries -- to the same inode. To help illustrate the next point, add a file that is not a link.

    [dboth@david temp]$ touch unlinked.file ; ls -l
    total 12
    -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link1.file.txt
    -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link2.file.txt
    -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 main.file.txt
    -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file

    Look at the inode number of the hard links and that of the new file using the -i option to the ls command.

    [dboth@david temp]$ ls -li
    total 12
    657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link1.file.txt
    657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 link2.file.txt
    657024 -rw-rw-r-- 4 dboth dboth 7 Jun 13 07:34 main.file.txt
    657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file

    Notice the number 657024 to the left of the file mode in the example above. That is the inode number, and all three file links point to the same inode. You can use the -i option to view the inode number for the link we created in the home directory as well, and that will also show the same value. The inode number of the file that has only one link is different from the others. Note that the inode numbers will be different on your system.

    Let's change the size of one of the hard-linked files.

    [dboth@david temp]$ df -h > link2.file.txt ; ls -li
    total 12
    657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link1.file.txt
    657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link2.file.txt
    657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 main.file.txt
    657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file

    The file size of all the hard-linked files is now larger than before. That is because there is really only one file that is linked to by multiple directory entries.

    I know this next experiment will work on my computer because my /tmp directory is on a separate LV. If you have a separate LV or a filesystem on a different partition (if you're not using LVs), determine whether or not you have access to that LV or partition. If you don't, you can try to insert a USB memory stick and mount it. If one of those options works for you, you can do this experiment.

    Try to create a link to one of the files in your ~/temp directory in /tmp (or wherever your different filesystem directory is located).

    [dboth@david temp]$ ln link2.file.txt /tmp/link3.file.txt
    ln: failed to create hard link '/tmp/link3.file.txt' => 'link2.file.txt': Invalid cross-device link

    Why does this error occur? The reason is that each separate mountable filesystem has its own set of inode numbers. Simply referring to a file by its inode number across the entire Linux directory structure would be ambiguous, because the same inode number can exist in every mounted filesystem.
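
    You can confirm that two directories live on different filesystems -- and therefore cannot share hard links -- by comparing the filesystem each is mounted from. A short sketch, assuming /tmp really is on a separate filesystem or LV on your machine:

    # df shows which filesystem backs each directory
    df -h ~/temp /tmp

    # stat's %d format prints the underlying device number; different numbers
    # mean hard links between the two locations are impossible
    stat -c '%d %n' ~/temp /tmp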

    There may be a time when you will want to locate all the hard links that belong to a single inode. You can find the inode number using the ls -li command. Then you can use the find command to locate all links with that inode number.

    [dboth@david temp]$ find . -inum 657024
    ./main.file.txt
    ./link1.file.txt
    ./link2.file.txt

    Note that the find command did not find all four of the hard links to this inode because we started at the current directory of ~/temp. The find command only finds files in the PWD and its subdirectories. To find all the links, we can use the following command, which specifies your home directory as the starting place for the search.

    [dboth@david temp]$ find ~ -samefile main.file.txt
    /home/dboth/temp/main.file.txt
    /home/dboth/temp/link1.file.txt
    /home/dboth/temp/link2.file.txt
    /home/dboth/main.file.txt

    You may see error messages if you do not have permissions as a non-root user. This command also uses the -samefile option instead of specifying the inode number. This works the same as using the inode number and can be easier if you know the name of one of the hard links.

    Experimenting with soft links

    As you have just seen, creating hard links is not possible across filesystem boundaries; that is, from a filesystem on one LV or partition to a filesystem on another. Soft links are a way around that limitation of hard links. Although both can accomplish the same end, they are very different, and knowing these differences is important.

    Let's start by creating a symlink in our ~/temp directory to start our exploration.

    [dboth@david temp]$ ln -s link2.file.txt link3.file.txt ; ls -li
    total 12
    657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link1.file.txt
    657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 link2.file.txt
    658270 lrwxrwxrwx 1 dboth dboth 14 Jun 14 15:21 link3.file.txt -> link2.file.txt
    657024 -rw-rw-r-- 4 dboth dboth 1157 Jun 14 14:14 main.file.txt
    657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file

    The hard links, those that have the inode number 657024, are unchanged, and the number of hard links shown for each has not changed. The newly created symlink has a different inode, number 658270. The soft link named link3.file.txt points to link2.file.txt. Use the cat command to display the contents of link3.file.txt. The file mode information for the symlink starts with the letter "l", which indicates that this file is actually a symbolic link.
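
    A couple of quick checks make the difference visible; a short sketch using the files from this lab:

    # cat follows the symlink, so this prints the contents of link2.file.txt
    cat link3.file.txt

    # readlink prints the target path that the symlink itself stores
    readlink link3.file.txt

    # %s prints the size in bytes: 14, the length of the string "link2.file.txt"
    stat -c '%s %N' link3.file.txt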

    The size of the symlink link3.file.txt is only 14 bytes in the example above. That is the length of the target path link2.file.txt, which is the actual content stored by the symlink. The directory entry link3.file.txt does not point to the target's inode; it stores the path of another directory entry, which is what makes it possible to create links that span filesystem boundaries. So, let's create that link we tried before from the /tmp directory.

    [dboth@david temp]$ ln -s /home/dboth/temp/link2.file.txt /tmp/link3.file.txt ; ls -l /tmp/link*
    lrwxrwxrwx 1 dboth dboth 31 Jun 14 21:53 /tmp/link3.file.txt -> /home/dboth/temp/link2.file.txt

    Deleting links

    There are some other things that you should consider when you need to delete links or the files to which they point.

    First, let's delete the link main.file.txt. Remember that every directory entry that points to an inode is simply a hard link.

    [dboth@david temp]$ rm main.file.txt ; ls -li
    total 8
    657024 -rw-rw-r-- 3 dboth dboth 1157 Jun 14 14:14 link1.file.txt
    657024 -rw-rw-r-- 3 dboth dboth 1157 Jun 14 14:14 link2.file.txt
    658270 lrwxrwxrwx 1 dboth dboth 14 Jun 14 15:21 link3.file.txt -> link2.file.txt
    657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file

    The link main.file.txt was the first link created when the file was created. Deleting it now still leaves the original file and its data on the hard drive along with all the remaining hard links. To delete the file and its data, you would have to delete all the remaining hard links.

    Now delete the link2.file.txt hard link.

    [dboth@david temp]$ rm link2.file.txt ; ls -li
    total 4
    657024 -rw-rw-r-- 2 dboth dboth 1157 Jun 14 14:14 link1.file.txt
    658270 lrwxrwxrwx 1 dboth dboth 14 Jun 14 15:21 link3.file.txt -> link2.file.txt
    657863 -rw-rw-r-- 1 dboth dboth 0 Jun 14 08:18 unlinked.file

    Notice what happens to the soft link. Deleting the hard link to which the soft link points leaves a broken link. On my system, the broken link is highlighted in colors and the target hard link is flashing. If the broken link needs to be fixed, you can create another hard link in the same directory with the same name as the old one, so long as not all the hard links have been deleted. You could also recreate the link itself, with the link maintaining the same name but pointing to one of the remaining hard links. Of course, if the soft link is no longer needed, it can be deleted with the rm command.
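
    In terms of the lab files, either repair would look something like this sketch:

    # Option 1: recreate the missing hard link so the existing symlink works again
    ln link1.file.txt link2.file.txt

    # Option 2: repoint the symlink at a hard link that still exists
    # (-f replaces the broken link, -s makes the new one symbolic)
    ln -sf link1.file.txt link3.file.txt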

    The unlink command can also be used to delete files and links. It is very simple and, unlike the rm command, has no options. It does, however, more accurately reflect the underlying process of deletion, in that it removes the link -- the directory entry -- to the file being deleted.
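
    For example, the leftover unlinked.file from this lab could be removed with unlink instead of rm:

    # unlink takes exactly one operand and removes that single directory entry
    unlink unlinked.file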

    Final thoughts

    I worked with both types of links for a long time before I began to understand their capabilities and idiosyncrasies. It took writing a lab project for a Linux class I taught to fully appreciate how links work. This article is a simplification of what I taught in that class, and I hope it speeds your learning curve.

    About the author: David Both is a Linux and Open Source advocate who resides in Raleigh, North Carolina. He has been in the IT industry for over forty years and taught OS/2 for IBM, where he worked for over 20 years. While at IBM, he wrote the first training course for the original IBM PC in 1981. He has taught RHCE classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State of North Carolina. He has been working with Linux and Open Source Software for almost 20 years. David has written articles for...

    [Nov 03, 2018] Is Red Hat IBM's 'Hail Mary' pass

    Notable quotes:
    "... if those employees become unhappy, they can effectively go anywhere they want. ..."
    "... IBM's partner/reseller ecosystem is nowhere near what it was since it owned the PC and Server businesses that Lenovo now owns. And IBM's Softlayer/BlueMix cloud is largely tied to its legacy software business, which, again, is slowing. ..."
    "... I came to IBM from their SoftLayer acquisition. Their ability to stomp all over the things SoftLayer was almost doing right were astounding. I stood and listened to Ginni say things like, "We purchased SoftLayer because we need to learn from you," and, "We want you to teach us how to do Cloud the right way, since we spent all these years doing things the wrong way," and, "If you find yourself in a meeting with one of our old teams, you guys are gonna be the ones in charge. You are the ones who know how this is supposed to work - our culture has failed at it." Promises which were nothing more than hollow words. ..."
    "... Next, it's a little worrisome that the author, now over the whole IBM thing is recommending firing "older people," you know, the ones who helped the company retain its performance in years' past. The smartest article I've read about IBM worried about its cheap style of "acquiring" non-best-of-breed companies and firing oodles of its qualified R&D guys. THAT author was right. ..."
    "... Four years in GTS ... joined via being outsourced to IBM by my previous employer. Left GTS after 4 years. ..."
    "... The IBM way of life was throughout the Oughts and the Teens an utter and complete failure from the perspective of getting work done right and using people to their appropriate and full potential. ..."
    "... As a GTS employee, professional technical training was deemed unnecessary, hence I had no access to any unless I paid for it myself and used my personal time ... the only training available was cheesy presentations or other web based garbage from the intranet, or casual / OJT style meetings with other staff who were NOT professional or expert trainers. ..."
    "... As a GTS employee, I had NO access to the expert and professional tools that IBM fricking made and sold to the same damn customers I was supposed to be supporting. Did we have expert and professional workflow / document management / ITIL aligned incident and problem management tools? NO, we had fricking Lotus Notes and email. Instead of upgrading to the newest and best software solutions for data center / IT management & support, we degraded everything down the simplest and least complex single function tools that no "best practices" organization on Earth would ever consider using. ..."
    "... And the people management paradigm ... employees ranked annually not against a static or shared goal or metric, but in relation to each other, and there was ALWAYS a "top 10 percent" and a "bottom ten percent" required by upper management ... a system that was sociopathic in it's nature because it encourages employees to NOT work together ... by screwing over one's coworkers, perhaps by not giving necessary information, timely support, assistance as needed or requested, one could potentially hurt their performance and make oneself look relatively better. That's a self-defeating system and it was encouraged by the way IBM ran things. ..."
    Nov 03, 2018 | www.zdnet.com
    Brain drain is a real risk

    IBM has not had a particularly great track record when it comes to integrating the cultures of other companies into its own, and brain drain with a company like Red Hat is a real risk because if those employees become unhappy, they can effectively go anywhere they want. They have the skills to command very high salaries at any of the top companies in the industry.

    The other issue is that IBM hasn't figured out how to capture revenue from SMBs -- and that has always been elusive for them. Unless a deal is worth at least $1 million, and realistically $10 million, sales guys at IBM don't tend to get motivated.

    Also: Red Hat changes its open-source licensing rules

    The 5,000-seat and below market segment has traditionally been partner territory, and when it comes to reseller partners for its cloud, IBM is way, way behind AWS, Microsoft, Google, or even (gasp) Oracle, which is now offering serious margins to partners that land workloads on the Oracle cloud.

    IBM's partner/reseller ecosystem is nowhere near what it was since it owned the PC and Server businesses that Lenovo now owns. And IBM's Softlayer/BlueMix cloud is largely tied to its legacy software business, which, again, is slowing.

    ... ... ...

    But I think that it is very unlikely the IBM Cloud, even when juiced on Red Hat steroids, will become anything more ambitious than a boutique business for hybrid workloads when compared with AWS or Azure. Realistically, it has to be the kind of cloud platform that interoperates well with the others or nobody will want it.


    geek49203_z , Wednesday, April 26, 2017 10:27 AM

    Ex-IBM contractor here...

    1. IBM used to value long-term employees. Now they "value" short-term contractors -- but they still pull them out of production for lots of training that, quite frankly, isn't exactly needed for what they are doing. Personally, I think that IBM would do well to return to valuing employees instead of looking at them as expendable commodities, but either way, they need to get past the legacies of when they had long-term employees all watching a single main frame.

    2. As IBM moved to an army of contractors, they killed off the informal (but important!) web of tribal knowledge. You know, a friend of a friend who knew the answer to some issue, or knew something about this customer? What has happened is that the transaction costs (as economists call it) have escalated until IBM can scarcely order IBM hardware for its own projects, or have SDM's work together.

    M Wagner geek49203_z , Wednesday, April 26, 2017 10:35 AM
    geek49203_z Number 2 is a problem everywhere. As long-time employees (mostly baby-boomers) retire, their replacements are usually straight out of college with various non-technical degrees. They come in with little history and few older-employees to which they can turn for "the tricks of the trade".
    Shmeg , Wednesday, April 26, 2017 10:41 AM
    I came to IBM from their SoftLayer acquisition. Their ability to stomp all over the things SoftLayer was almost doing right were astounding. I stood and listened to Ginni say things like, "We purchased SoftLayer because we need to learn from you," and, "We want you to teach us how to do Cloud the right way, since we spent all these years doing things the wrong way," and, "If you find yourself in a meeting with one of our old teams, you guys are gonna be the ones in charge. You are the ones who know how this is supposed to work - our culture has failed at it." Promises which were nothing more than hollow words.
    cavman , Wednesday, April 26, 2017 3:58 PM
    In the 1970's 80's and 90's I was working in tech support for a company called ROLM. We were doing communications , voice and data and did many systems for Fortune 500 companies along with 911 systems and the secure system at the White House. My job was to fly all over North America to solve problems with customers and integration of our equipment into their business model. I also did BETA trials and documented systems so others would understand what it took to make it run fine under all conditions.

    In 84 IBM bought a percentage of the company and the next year they bought out the company. When someone said to me "IBM just bought you out, you must think you died and went to heaven." My response was "Think of them as being like the Federal Government but making a profit". They were so heavily structured and hidebound that it was a constant battle working with them. Their response to any comments was "We are IBM".

    I was working on an equipment project in Colorado Springs and IBM took control. I was immediately advised that I could only talk to the people in my assigned group and if I had a question outside of my group I had to put it in writing and give it to my manager and if he thought it was relevant it would be forwarded up the ladder of management until it reached a level of a manager that had control of both groups and at that time if he thought it was relevant it would be sent to that group who would send the answer back up the ladder.

    I'm a Vietnam Veteran and I used my military training to get things done just like I did out in the field. I went looking for the person I could get an answer from.

    At first others were nervous about doing that but within a month I had connections all over the facility and started introducing people at the cafeteria. Things moved quickly as people started working together as a unit. I finished my part of the work which was figuring all the spares technicians would need plus the costs for packaging and service contract estimates. I submitted it to all the people that needed it. I was then hauled into a meeting room by the IBM management and advised that I was a disruptive influence and would be removed. Just then the final contracts that vendors had to sign showed up and it used all my info. The IBM people were livid that they were not involved.

    By the way a couple months later the IBM THINK magazine came out with a new story about a radical concept they had tried. A cover would not fit on a component and under the old system both the component and the cover would be thrown out and they would start from scratch doing it over. They decided to have the two groups sit together and figure out why it would not fit and correct it on the spot.

    Another great example of IBM people is we had a sales contract to install a multi node voice mail system at WANG computers but we lost it because the IBM people insisted on bundling in AS0400 systems into the sale to WANG computer. Instead we lost a multi million dollar contract.

    Eventually Siemens bought 50% of the company and eventually full control. Now all we heard was "That is how we do it in Germany" Our response was "How did that WW II thing work out".

    Stockholder , Wednesday, April 26, 2017 7:20 PM
    The author may have more loyalty to Microsoft than he confides, is the first thing noticeable about this article. The second thing is that in terms of getting rid of those aged IBM workers, I think he may have completely missed the mark, in fairness, that may be the product of his IBM experience, The sheer hubris of tech-talking from the middle of the story and missing the global misstep that is today's IBM is noticeable. As a stockholder, the first question is, "Where is the investigation to the breach of fiduciary duty by a board that owes its loyalty to stockholders who are scratching their heads at the 'positive' spin the likes of Ginni Rometty is putting on 20 quarters of dead losses?" Got that, 20 quarters of losses.

    Next, it's a little worrisome that the author, now over the whole IBM thing is recommending firing "older people," you know, the ones who helped the company retain its performance in years' past. The smartest article I've read about IBM worried about its cheap style of "acquiring" non-best-of-breed companies and firing oodles of its qualified R&D guys. THAT author was right.

    IBM's been run into the ground by Ginni, I'll use her first name, since apparently my money is now used to prop up this sham of a leader, who from her uncomfortable public announcement with Tim Cook of Apple, which HAS gone up, by the way, has embraced every political trend, not cause but trend from hiring more women to marginalizing all those old-time white males...You know the ones who produced for the company based on merit, sweat, expertise, all those non-feeling based skills that ultimately are what a shareholder is interested in and replaced them with young, and apparently "social" experts who are pasting some phony "modernity" on a company that under Ginni's leadership has become more of a pet cause than a company.

    Finally, regarding ageism and the author's advocacy for the same, IBM's been there, done that as they lost an age discrimination lawsuit decades ago. IBM gave up on doing what it had the ability to do as an enormous business and instead under Rometty's leadership has tried to compete with the scrappy startups where any halfwit knows IBM cannot compete.

    The company has rendered itself ridiculous under Rometty, a board that collects paychecks and breaches any notion of fiduciary duty to shareholders, an attempt at partnering with a "mod" company like Apple that simply bolstered Apple and left IBM languishing and a rejection of what has a track record of working, excellence, rewarding effort of employees and the steady plod of performance. Dump the board and dump Rometty.

    jperlow Stockholder , Wednesday, April 26, 2017 8:36 PM
    Stockholder Your comments regarding any inclination towards age discrimination are duly noted, so I added a qualifier in the piece.
    Gravyboat McGee , Wednesday, April 26, 2017 9:00 PM
    Four years in GTS ... joined via being outsourced to IBM by my previous employer. Left GTS after 4 years.

    The IBM way of life was throughout the Oughts and the Teens an utter and complete failure from the perspective of getting work done right and using people to their appropriate and full potential. I went from a multi-disciplinary team of engineers working across technologies to support corporate needs in the IT environment to being siloed into a single-function organization.

    My first year of on-boarding with IBM was spent deconstructing application integration and cross-organizational structures of support and interwork that I had spent 6 years building and maintaining. Handing off different chunks of work (again, before the outsourcing, an Enterprise solution supported by one multi-disciplinary team) to different IBM GTS work silos that had no physical spacial relationship and no interworking history or habits. What we're talking about here is the notion of "left hand not knowing what the right hand is doing" ...

    THAT was the IBM way of doing things, and nothing I've read about them over the past decade or so tells me it has changed.

    As a GTS employee, professional technical training was deemed unnecessary, hence I had no access to any unless I paid for it myself and used my personal time ... the only training available was cheesy presentations or other web based garbage from the intranet, or casual / OJT style meetings with other staff who were NOT professional or expert trainers.

    As a GTS employee, I had NO access to the expert and professional tools that IBM fricking made and sold to the same damn customers I was supposed to be supporting. Did we have expert and professional workflow / document management / ITIL aligned incident and problem management tools? NO, we had fricking Lotus Notes and email. Instead of upgrading to the newest and best software solutions for data center / IT management & support, we degraded everything down the simplest and least complex single function tools that no "best practices" organization on Earth would ever consider using.

    And the people management paradigm ... employees ranked annually not against a static or shared goal or metric, but in relation to each other, and there was ALWAYS a "top 10 percent" and a "bottom ten percent" required by upper management ... a system that was sociopathic in it's nature because it encourages employees to NOT work together ... by screwing over one's coworkers, perhaps by not giving necessary information, timely support, assistance as needed or requested, one could potentially hurt their performance and make oneself look relatively better. That's a self-defeating system and it was encouraged by the way IBM ran things.

    The "not invented here" ideology was embedded deeply in the souls of all senior IBMers I ever met or worked with ... if you come on board with any outside knowledge or experience, you must not dare to say "this way works better" because you'd be shut down before you could blink. The phrase "best practices" to them means "the way we've always done it".

    IBM gave up on innovation long ago. Since the 90's the vast majority of their software has been bought, not built. Buy a small company, strip out the innovation, slap an IBM label on it, sell it as the next coming of Jesus even though they refuse to expend any R&D to push the product to the next level ... damn near everything IBM sold was gentrified, never cutting edge.

    And don't get me started on sales practices ... tell the customer how product XYZ is a guaranteed moonshot, they'll be living on lunar real estate in no time at all, and after all the contracts are signed hand the customer a box of nuts & bolts and a letter telling them where they can look up instructions on how to build their own moon rocket. Or for XX dollars more a year, hire a Professional Services IBMer to build it for them.

    I have no sympathy for IBM. They need a clean sweep throughout upper management, especially any of the old True Blue hard-core IBMers.

    billa201 , Thursday, April 27, 2017 11:24 AM
    You obviously have been gone from IBM as they do not treat their employees well anymore and get rid of good talent not keep it a sad state.
    ClearCreek , Tuesday, May 9, 2017 7:04 PM
    We tried our best to be SMB partners with IBM & Arrow in the early 2000s ... but could never get any traction. I personally needed a mentor, but never found one. I still have/wear some of their swag, and I write this right now on a re-purposed IBM 1U server that is 10 years old, but ... I can't see any way our small company can make $ with them.

    Watson is impressive, but you can't build a company on just Watson. This author has some great ideas, yet the phrase that keeps coming to me is internal politics. That corrosive reality has & will kill companies, and it will kill IBM unless it is dealt with.

    Turn-arounds are possible (look at MS), but they are hard and dangerous. Hope IBM can figure it out...

    [Nov 03, 2018] The evaluation system in which there was ALWAYS a "top 10 percent" and a "bottom ten percent" is sociopathic in it's nature

    Notable quotes:
    "... Four years in GTS ... joined via being outsourced to IBM by my previous employer. Left GTS after 4 years. ..."
    "... The IBM way of life was throughout the Oughts and the Teens an utter and complete failure from the perspective of getting work done right and using people to their appropriate and full potential. ..."
    "... As a GTS employee, professional technical training was deemed unnecessary, hence I had no access to any unless I paid for it myself and used my personal time ... the only training available was cheesy presentations or other web based garbage from the intranet, or casual / OJT style meetings with other staff who were NOT professional or expert trainers. ..."
    "... As a GTS employee, I had NO access to the expert and professional tools that IBM fricking made and sold to the same damn customers I was supposed to be supporting. Did we have expert and professional workflow / document management / ITIL aligned incident and problem management tools? NO, we had fricking Lotus Notes and email. Instead of upgrading to the newest and best software solutions for data center / IT management & support, we degraded everything down the simplest and least complex single function tools that no "best practices" organization on Earth would ever consider using. ..."
    "... And the people management paradigm ... employees ranked annually not against a static or shared goal or metric, but in relation to each other, and there was ALWAYS a "top 10 percent" and a "bottom ten percent" required by upper management ... a system that was sociopathic in it's nature because it encourages employees to NOT work together ... by screwing over one's coworkers, perhaps by not giving necessary information, timely support, assistance as needed or requested, one could potentially hurt their performance and make oneself look relatively better. That's a self-defeating system and it was encouraged by the way IBM ran things. ..."
    Nov 03, 2018 | www.zdnet.com

    Gravyboat McGee , Wednesday, April 26, 2017 9:00 PM

    Four years in GTS ... joined via being outsourced to IBM by my previous employer. Left GTS after 4 years.

    The IBM way of life was throughout the Oughts and the Teens an utter and complete failure from the perspective of getting work done right and using people to their appropriate and full potential. I went from a multi-disciplinary team of engineers working across technologies to support corporate needs in the IT environment to being siloed into a single-function organization.

    My first year of on-boarding with IBM was spent deconstructing application integration and cross-organizational structures of support and interwork that I had spent 6 years building and maintaining. Handing off different chunks of work (again, before the outsourcing, an Enterprise solution supported by one multi-disciplinary team) to different IBM GTS work silos that had no physical special relationship and no interworking history or habits. What we're talking about here is the notion of "left hand not knowing what the right hand is doing" ...

    THAT was the IBM way of doing things, and nothing I've read about them over the past decade or so tells me it has changed.

    As a GTS employee, professional technical training was deemed unnecessary, hence I had no access to any unless I paid for it myself and used my personal time ... the only training available was cheesy presentations or other web based garbage from the intranet, or casual / OJT style meetings with other staff who were NOT professional or expert trainers.

    As a GTS employee, I had NO access to the expert and professional tools that IBM fricking made and sold to the same damn customers I was supposed to be supporting. Did we have expert and professional workflow / document management / ITIL aligned incident and problem management tools? NO, we had fricking Lotus Notes and email. Instead of upgrading to the newest and best software solutions for data center / IT management & support, we degraded everything down the simplest and least complex single function tools that no "best practices" organization on Earth would ever consider using.

    And the people management paradigm ... employees ranked annually not against a static or shared goal or metric, but in relation to each other, and there was ALWAYS a "top 10 percent" and a "bottom ten percent" required by upper management ... a system that was sociopathic in it's nature because it encourages employees to NOT work together ... by screwing over one's coworkers, perhaps by not giving necessary information, timely support, assistance as needed or requested, one could potentially hurt their performance and make oneself look relatively better. That's a self-defeating system and it was encouraged by the way IBM ran things.

    The "not invented here" ideology was embedded deeply in the souls of all senior IBMers I ever met or worked with ... if you come on board with any outside knowledge or experience, you must not dare to say "this way works better" because you'd be shut down before you could blink. The phrase "best practices" to them means "the way we've always done it".

    IBM gave up on innovation long ago. Since the 90's the vast majority of their software has been bought, not built. Buy a small company, strip out the innovation, slap an IBM label on it, sell it as the next coming of Jesus even though they refuse to expend any R&D to push the product to the next level ... damn near everything IBM sold was gentrified, never cutting edge.

    And don't get me started on sales practices ... tell the customer how product XYZ is a guaranteed moonshot, they'll be living on lunar real estate in no time at all, and after all the contracts are signed hand the customer a box of nuts & bolts and a letter telling them where they can look up instructions on how to build their own moon rocket. Or for XX dollars more a year, hire a Professional Services IBMer to build it for them.

    I have no sympathy for IBM. They need a clean sweep throughout upper management, especially any of the old True Blue hard-core IBMers.

    [Nov 02, 2018] The D in Systemd stands for 'Dammmmit!' A nasty DHCPv6 packet can pwn a vulnerable Linux box by Shaun Nichols

    Notable quotes:
    "... Hole opens up remote-code execution to miscreants – or a crash, if you're lucky ..."
    "... You can use NAT with IPv6. ..."
    Oct 26, 2018 | theregister.co.uk

    Hole opens up remote-code execution to miscreants – or a crash, if you're lucky

    A security bug in Systemd can be exploited over the network to, at best, potentially crash a vulnerable Linux machine, or, at worst, execute malicious code on the box.

    The flaw therefore puts Systemd-powered Linux computers – specifically those using systemd-networkd – at risk of remote hijacking: maliciously crafted DHCPv6 packets can try to exploit the programming cockup and arbitrarily change parts of memory in vulnerable systems, leading to potential code execution. This code could install malware, spyware, and other nasties, if successful.

    The vulnerability – which was made public this week – sits within the written-from-scratch DHCPv6 client of the open-source Systemd management suite, which is built into various flavors of Linux.

    This client is activated automatically if IPv6 support is enabled, and relevant packets arrive for processing. Thus, a rogue DHCPv6 server on a network, or in an ISP, could emit specially crafted router advertisement messages that wake up these clients, exploit the bug, and possibly hijack or crash vulnerable Systemd-powered Linux machines.

    Here's the Red Hat Linux summary:

    systemd-networkd is vulnerable to an out-of-bounds heap write in the DHCPv6 client when handling options sent by network adjacent DHCP servers. An attacker could exploit this via malicious DHCP server to corrupt heap memory on client machines, resulting in a denial of service or potential code execution.

    Felix Wilhelm, of the Google Security team, was credited with discovering the flaw, designated CVE-2018-15688. Wilhelm found that a specially crafted DHCPv6 network packet could trigger "a very powerful and largely controlled out-of-bounds heap write," which could be used by a remote hacker to inject and execute code.

    "The overflow can be triggered relatively easy by advertising a DHCPv6 server with a server-id >= 493 characters long," Wilhelm noted.

    In addition to Ubuntu and Red Hat Enterprise Linux, Systemd has been adopted as a service manager for Debian, Fedora, CoreOS, Mint, and SUSE Linux Enterprise Server. We're told RHEL 7, at least, does not use the vulnerable component by default.

    Systemd creator Lennart Poettering has already published a security fix for the vulnerable component – this should be weaving its way into distros as we type.

    If you run a Systemd-based Linux system, and rely on systemd-networkd, update your operating system as soon as you can to pick up the fix when available and as necessary.
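
    A quick way to gauge your exposure is to check whether systemd-networkd is actually running and which systemd build is installed; a sketch of the sort of checks an admin might run (package names and patched version numbers differ between distributions, so treat your distro's advisory as authoritative):

    # Is the vulnerable component in use at all?
    systemctl is-active systemd-networkd

    # Which systemd version is installed?
    systemctl --version

    # On RPM-based systems, see whether the vendor changelog mentions the fix
    rpm -q --changelog systemd | grep -i CVE-2018-15688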

    The bug will come as another argument against Systemd as the Linux management tool continues to fight for the hearts and minds of admins and developers alike. Though a number of major admins have in recent years adopted and championed it as the replacement for the old Init era, others within the Linux world seem to still be less than impressed with Systemd and Poettering's occasionally controversial management of the tool. ®


    Oh Homer , 6 days

    Meh

    As anyone who bothers to read my comments (BTW "hi" to both of you) already knows, I despise systemd with a passion, but this one is more an IPv6 problem in general.

    Yes this is an actual bug in networkd, but IPv6 seems to be far more bug prone than v4, and problems are rife in all implementations. Whether that's because the spec itself is flawed, or because nobody understands v6 well enough to implement it correctly, or possibly because there's just zero interest in making any real effort, I don't know, but it's a fact nonetheless, and my primary reason for disabling it wherever I find it. Which of course contributes to the "zero interest" problem that perpetuates v6's bug prone condition, ad nauseam.

    IPv6 is just one of those tech pariahs that everyone loves to hate, much like systemd, albeit fully deserved IMO.

    Oh yeah, and here's the obligatory "systemd sucks". Personally I always assumed the "d" stood for "destroyer". I believe the "IP" in "IPv6" stands for "Idiot Protocol".

    Anonymous Coward , 6 days
    Re: Meh

    "nonetheless, and my primary reason for disabling it wherever I find it. "

    The very first guide I read to hardening a system recommended disabling services you didn't need and emphasized IPV6 for the reasons you just stated.
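
    The usual runtime knob for that kind of hardening is sysctl; a minimal sketch, and only appropriate if the box genuinely has no need for IPv6:

    # Disable IPv6 on all current and future interfaces until the next reboot
    sysctl -w net.ipv6.conf.all.disable_ipv6=1
    sysctl -w net.ipv6.conf.default.disable_ipv6=1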

    Wasn't there a bug in Xorg reported recently as well?

    https://www.theregister.co.uk/2018/10/25/x_org_server_vulnerability/

    "FreeDesktop.org Might Formally Join Forces With The X.Org Foundation"

    https://www.phoronix.com/scan.php?page=news_item&px=FreeDesktop-org-Xorg-Forces

    Also, does this mean that Facebook was vulnerable to attack, again?

    "Simply put, you could say Facebook loves systemd."

    https://www.phoronix.com/scan.php?page=news_item&px=Facebook-systemd-2018

    Jay Lenovo , 6 days
    Re: Meh

    IPv6 and SystemD: Forced industry standard diseases that requires most of us to bite our lips and bear it.

    Fortunately, IPv6 by lack of adopted use, limits the scope of this bug.

    vtcodger , 6 days
    Re: Meh
    Fortunately, IPv6 by lack of adopted use, limits the scope of this bug.

    Yeah, fortunately IPv6 is only used by a few fringe organizations like Google and Microsoft.

    Seriously, I personally want nothing to do with either systemd or IPv6. Both seem to me to fall into the bin labeled "If it ain't broke, let's break it" But still it's troubling that things that some folks regard as major system components continue to ship with significant security flaws. How can one trust anything connected to the Internet that is more sophisticated and complex than a TV streaming box?

    DougS , 6 days
    Re: Meh

    Was going to say the same thing, and I disable IPv6 for the exact same reason. IPv6 code isn't as well tested, as well audited, or as well targeted looking for exploits as IPv4. Stuff like this only proves that it was smart to wait, and I should wait some more.

    Nate Amsden , 6 days
    Re: Meh

    Count me in the camp of who hates systemd(hates it being "forced" on just about every distro, otherwise wouldn't care about it - and yes I am moving my personal servers to Devuan, thought I could go Debian 7->Devuan but turns out that may not work, so I upgraded to Debian 8 a few weeks ago, and will go to Devuan from there in a few weeks, upgraded one Debian 8 to Devuan already 3 more to go -- Debian user since 1998), when reading this article it reminded me of

    https://www.theregister.co.uk/2017/06/29/systemd_pwned_by_dns_query/

    bombastic bob , 6 days
    The gift that keeps on giving (systemd) !!!

    This makes me glad I'm using FreeBSD. The Xorg version in FreeBSD's ports is currently *slightly* older than the Xorg version that had that vulnerability in it. AND, FreeBSD will *NEVER* have systemd in it!

    (and, for Linux, when I need it, I've been using Devuan)

    That being said, the whole idea of "let's do a re-write and do a 'systemd' instead of 'system V init' because WE CAN and it's OUR TURN NOW, 'modern' 'change for the sake of change' etc." kinda reminds me of recent "update" problems with Win-10-nic...

    Oh, and an obligatory Schadenfreude laugh: HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA!!!!!!!!!!!!!!!!!!!

    Long John Brass , 6 days
    Re: The gift that keeps on giving (systemd) !!!

    Finally got all my machines cut over from Debian to Devuan.

    Might spin a FreeBSD system up in a VM and have a play.

    I suspect that the infestation of stupid into the Linux space won't stop with or be limited to SystemD. I will wait and watch to see what damage the re-education gulag has done to Sweary McSwearFace (Mr Torvalds)

    Dan 55 , 6 days
    Re: Meh

    I despise systemd with a passion, but this one is more an IPv6 problem in general.

    Not really, systemd has its tentacles everywhere and runs as root. Exploits which affect systemd therefore give you the keys to the kingdom.

    Orv , 3 days
    Re: Meh
    Not really, systemd has its tentacles everywhere and runs as root.

    Yes, but not really the problem in this case. Any DHCP client is going to have to run at least part of the time as root. There's not enough nuance in the Linux privilege model to allow it to manipulate network interfaces, otherwise.

    Long John Brass , 3 days
    Re: Meh
    Yes, but not really the problem in this case. Any DHCP client is going to have to run at least part of the time as root. There's not enough nuance in the Linux privilege model to allow it to manipulate network interfaces, otherwise.

    Sorry but utter bullshit. You can, if you are so inclined, use the Linux Capabilities framework for this kind of thing. See https://wiki.archlinux.org/index.php/capabilities
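
    The mechanism being pointed at looks roughly like this; a hypothetical sketch (the binary path is made up for illustration), granting a DHCP client only the network-related capabilities instead of full root:

    # Grant an example DHCP client binary just the capabilities it needs
    setcap 'cap_net_admin,cap_net_raw,cap_net_bind_service+ep' /usr/local/bin/example-dhcp-client

    # Verify what was set
    getcap /usr/local/bin/example-dhcp-client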

    JohnFen , 6 days
    Yay for me

    "If you run a Systemd-based Linux system"

    I remain very happy that I don't use systemd on any of my machines anymore. :)

    "others within the Linux world seem to still be less than impressed with Systemd"

    Yep, I'm in that camp. I gave it a good, honest go, but it increased the amount of hassle and pain of system management without providing any noticeable benefit, so I ditched it.

    ElReg!comments!Pierre , 2 days
    Re: Time to troll

    > Just like it's entirely possible to have a Linux system without any GNU in it

    Just like it's possible to have a GNU system without Linux on it - ho well as soon as GNU MACH is finally up to the task ;-)

    On the systemd angle, I, too, am in the process of switching all my machines from Debian to Devuan but on my personal(*) network a few systemd-infected machines remain, thanks to a combination of laziness on my part and a stubborn "systemd is quite OK" attitude from the raspy foundation. That vuln may be the last straw: one of the aforementioned machines sits on my DMZ, chatting freely with the outside world. Nothing really crucial on it, but I'd hate it if it became a foothold for nasties on my network.

    (*) policy at work is RHEL, and that's negotiated far above my influence level, but I don't really care as all my important stuff runs on Z/OS anyway ;-). OK, we have to reboot a few VMs occasionally when systemd throws a hissy fit - which is surprisingly often for an "enterprise" OS - but meh.

    Destroy All Monsters , 5 days
    Re: Not possible

    This code is actually pretty bad and should raise all kinds of red flags in a code review.

    Anonymous Coward , 5 days
    Re: Not possible

    ITYM Lennart

    Christian Berger , 5 days
    Re: Not possible

    "This code is actually pretty bad and should raise all kinds of red flags in a code review."

    Yeah, but for that you need people who can do code reviews, and also people who can accept criticism. That also means saying "no" to people who are bad at coding, and saying that repeatedly if they don't learn.

    SystemD seems to be the area where people gather who want to get code in for their resumes, not for people who actually want to make the world a better place.

    jake , 6 days
    There is a reason ...

    ... that an init, traditionally, is a small bit of code that does one thing very well. Like most of the rest of the *nix core utilities. All an init should do is start PID1, set run level, spawn a tty (or several), handle a graceful shutdown, and log all the above in plaintext to make troubleshooting as simplistic as possible. Anything else is a vanity project that is best placed elsewhere, in it's own stand-alone code base.

    Inventing a clusterfuck init variation that's so big and bulky that it needs to be called a "suite" is just asking for trouble.

    IMO, systemd is a cancer that is growing out of control, and needs to be cut out of Linux before it infects enough of the system to kill it permanently.

    AdamWill , 6 days
    Re: There is a reason ...

    That's why systemd-networkd is a separate, optional component, and not actually part of the init daemon at all. Most systemd distros do not use it by default and thus are not vulnerable to this unless the user actively disables the default network manager and chooses to use networkd instead.

    Anonymous Coward , 4 days
    Re: There is a reason ...

    "Just go install a default Fedora or Ubuntu system and check for yourself: you'll have systemd, but you *won't* have systemd-networkd running."

    Funny that I installed ubuntu 18.04 a few weeks ago and the fucking thing installed itself then! ( and was a fucking pain to remove).

    LP is a fucking arsehole.

    Orv , 3 days
    Re: There is a reason ...
    Pardon my ignorance (I don't use a distro with systemd) why bother with networkd in the first place if you don't have to use it.

    Mostly because the old-style init system doesn't cope all that well with systems that move from network to network. It works for systems with a static IP, or that do a DHCP request at boot, but it falls down on anything more dynamic.

    In order to avoid restarting the whole network system every time they switch WiFi access points, people have kludged on solutions like NetworkManager. But it's hard to argue it's more stable or secure than networkd. And this is always going to be a point of vulnerability because anything that manipulates network interfaces will have to be running as root.

    These days networking is essential to the basic functionality of most computers; I think there's a good argument that it doesn't make much sense to treat it as a second-class citizen.

    AdamWill , 2 days
    Re: There is a reason ...

    "Funny that I installed ubuntu 18.04 a few weeks ago and the fucking thing installed itself then! ( and was a fucking pain to remove)."

    So I looked into it a bit more, and from a few references at least, it seems like Ubuntu has a sort of network configuration abstraction thingy that can use both NM and systemd-networkd as backends; on Ubuntu desktop flavors NM is usually the default, but apparently for recent Ubuntu Server, networkd might indeed be the default. I didn't notice that as, whenever I want to check what's going on in Ubuntu land, I tend to install the default desktop spin...

    "LP is a fucking arsehole."

    systemd's a lot bigger than Lennart, you know. If my grep fu is correct, out of 1543 commits to networkd, only 298 are from Lennart...

    alain williams , 6 days
    Old is good

    in many respects when it comes to software because, over time, the bugs will have been found and squashed. Systemd brings in a lot of new code which will, naturally, have lots of bugs that will take time to find & remove. This is why we get problems like this DHCP one.

    Much as I like the venerable init: it did need replacing. Systemd is one way to go, more flexible, etc, etc. Something event driven is a good approach.

    One of the main problems with systemd is that it has become too big, slurped up lots of functionality which has removed choice, increased fragility. They should have concentrated on adding ways of talking to existing daemons, eg dhcpd, through an API/something. This would have reused old code (good) and allowed other implementations to use the API - this letting people choose what they wanted to run.

    But no: Poettering seems to want to build a Cathedral rather than a Bazaar.

    He appears to want to make it his way or no way. This is bad, one reason that *nix is good is because different solutions to a problem have been able to be chosen, one removed and another slotted in. This encourages competition and the 'best of breed' comes out on top. Poettering is endangering that process.

    Also: his refusal to accept patches to let it work on non-Linux Unix is just plain nasty.

    oiseau , 4 days
    Re: Old is good

    Hello:

    One of the main problems with systemd is that it has become too big, slurped up lots of functionality which has removed choice, increased fragility.

    IMO, there is a striking parallel between systemd and the registry in Windows OSs.

    After many years of dealing with the registry (W98 to XPSP3) I ended up seeing it as a sort of developer-sanctioned virus running inside the OS, constantly changing and going deeper and deeper into the OS with every iteration and, as a result, progressively putting an end to any possibility of knowing/controlling what was going on inside your box/the OS.

    Years later, when I learned about the existence of systemd (I was already running Ubuntu) and read up on what it did and how it did it, it dawned on me that systemd was nothing more than a registry-class virus, and that it was infecting Linux_land at the behest of the developers involved.

    So I moved from Ubuntu to PCLinuxOS and then on to Devuan.

    Call me paranoid but I am convinced that there are people both inside and outside IT that actually want this and are quite willing to pay shitloads of money for it to happen.

    I don't see this MS cozying up to Linux in various ways lately as a coincidence: these things do not happen just because or on a senior manager's whim.

    What I do see (YMMV) is systemd being a sort of convergence of Linux with Windows, which will not be good for Linux and may well be its undoing.

    Cheers,

    O.

    Rich 2 , 4 days
    Re: Old is good

    "Also: he refusal to accept patches to let it work on non-Linux Unix is just plain nasty"

    Thank goodness this crap is unlikely to escape from Linux!

    By the way, for a systemd-free Linux, try void - it's rather good.

    Michael Wojcik , 3 days
    Re: Old is good

    Much as I like the venerable init: it did need replacing.

    For some use cases, perhaps. Not for any of mine. SysV init, or even BSD init, does everything I need a Linux or UNIX init system to do. And I don't need any of the other crap that's been built into or hung off systemd, either.

    Orv , 3 days
    Re: Old is good

    BSD init and SysV init work pretty darn well for their original purpose -- servers with static IP addresses that are rebooted no more than once in a fortnight. Anything more dynamic starts to give it trouble.

    Chairman of the Bored , 6 days
    Too bad Linus swore off swearing

    Situations like this go beyond a little "golly gee, I screwed up some C"...

    jake , 6 days
    Re: Too bad Linus swore off swearing

    Linus doesn't care. systemd has nothing to do with the kernel ... other than the fact that the lead devs for systemd have been banned from working on the kernel because they don't play nice with others.

    JLV , 6 days
    how did it get to this?

    I've been using runit, because I am too lazy and clueless to write init scripts reliably. It's very lightweight, runs on a bunch of systems and really does one thing - keep daemons up.

    I am not saying it's the best - but it looks like it has a very small codebase, it doesn't do much and generally has not bugged me after I configured each service correctly. I believe other systems also exist to avoid using init scripts directly. Not Monit, as it relies on you configuring the daemon start/stop commands elsewhere.

    On the other hand, systemd is a massive sprawl, does a lot of things - some of them useful, like dependencies - and generally has needed more looking after. Twice I've had errors on a Django server that, after a lot of looking around, turned out to be caused by a change in the Chef-related code that's exposed to systemd; esoteric errors (not emitted by systemd itself) resulted when systemd could not make sense of the incorrect configuration.

    I don't hate it - init scripts look a bit antiquated to me and they seem unforgiving to beginners - but I don't much like it. What I certainly do hate is how, in an OS that is supposed to be all about choice, sometimes excessively so as in the window manager menagerie, we somehow ended up with one mandatory daemon scheduler on almost all distributions. Via, of all types of dependencies, the GUI layer - for a window manager that you may not even have installed.

    Talk about the antithesis of the Unix philosophy of do one thing, do it well.

    Oh, then there are also the security bugs and the project owner is an arrogant twat. That too.

    Doctor Syntax , 6 days
    Re: how did it get to this?

    "init scripts look a bit antiquated to me and they seem unforgiving to beginners"

    Init scripts are shell scripts. Shell scripts are as old as Unix. If you think that makes them antiquated then maybe Unix-like systems are not for you. In practice any sub-system generally gets its own scripts installed with the rest of the S/W, so if being unforgiving puts beginners off tinkering with them, so much the better. If an experienced Unix user really needs to modify one of the system-provided scripts their existing shell knowledge will let them do exactly what's needed. In the extreme, if you need to develop a new init script then you can do so in the same way as you'd develop any other script - edit and test from the command line.

    onefang , 6 days
    Re: how did it get to this?

    "Init scripts are shell scripts."

    While generally true, some sysv init style inits can handle init "scripts" written in any language.

    sed gawk , 6 days
    Re: how did it get to this?

    I personally like openrc as an init system, but systemd is a symptom of the tooling problem.

    For me it's a retrograde step, but again, it's Linux: one can, as you and I do, just remove systemd.

    There are a lot of people in the industry now who don't seem able to cope with shell scripts, nor are they minded to research the arguments for or against shell as part of a Unix style of system design.

    In conclusion, we are outnumbered, but it will eventually collapse under its own weight and a worthy successor shall rise, perhaps called SystemV, might have to shorten that name a bit.

    AdamWill , 6 days
    Just about nothing actually uses networkd

    "In addition to Ubuntu and Red Hat Enterprise Linux, Systemd has been adopted as a service manager for Debian, Fedora, CoreOS, Mint, and SUSE Linux Enterprise Server. We're told RHEL 7, at least, does not use the vulnerable component by default."

    I can tell you for sure that no version of Fedora does, either, and I'm fairly sure that neither does Debian, SLES or Mint. I don't know anything much about CoreOS, but https://coreos.com/os/docs/latest/network-config-with-networkd.html suggests it actually *might* use systemd-networkd.

    systemd-networkd is not part of the core systemd init daemon. It's an optional component, and most distros use some other network manager (like NetworkManager or wicd) by default.

    Christian Berger , 5 days
    The important word here is "still"

    I mean, commercial distributions seem to be particularly interested in trying out new things that can increase their number of support calls. It's probably just that networkd is either too new and therefore not yet in the release, or still works so badly that even the most rudimentary tests fail.

    There is no reason to use that NTP daemon of systemd, yet more and more distros ship with it enabled, instead of some sane NTP server.

    NLCSGRV , 6 days
    The Curse of Poettering strikes again.
    _LC_ , 6 days
    Now hang on, please!

    Ser iss no neet to worry, systemd will becum stable soon after PulseAudio does.

    Ken Hagan , 6 days
    Re: Now hang on, please!

    I won't hold my breath, then. I have a laptop at the moment that refuses to boot because (as I've discovered from looking at the journal offline) pulseaudio is in an infinite loop waiting for the successful detection of some hardware that, presumably, I don't have.

    I imagine I can fix it by hacking the file-system (offline) so that fuckingpulse is no longer part of the boot configuration, but I shouldn't have to. A decent init system would be able to kick off everything else in parallel and, if one particular service doesn't come up properly, just log the error. I *thought* that was one of the claimed advantages of systemd, but apparently that's just a load of horseshit.

    Obesrver1 , 5 days
    Reason for disabling IPv6

    It punches through NAT routers, making all your little goodies behind them directly accessible.

    MS even supplies tunneling (IPv4 to IPv6), so if you are using Linux in a VM on an MS system you may still have it anyway.

    NAT was always recommended for hardening your system; I prefer to keep all my idIoT devices behind one.

    As they are just Idiot devices.

    In future I will need a NAT that acts as a DNS and offers some sort of solution for keeping IPv4.

    Orv , 3 days
    Re: Reason for disabling IPv6

    My NAT router statefully firewalls incoming IPv6 by default, which I consider equivalently secure. NAT adds security mostly by accident, because it de-facto adds a firewall that blocks incoming packets. It's not the address translation itself that makes things more secure, it's the inability to route in from the outside.

    dajames , 3 days
    Re: Reason for disabling IPv6

    You can use NAT with IPv6.

    You can, but why would you want to?

    NAT is a schtick for connecting a whole LAN to a WAN using a single IPv4 address (useful with IPv4 because most ISPs don't give you a /24 when you sign up). If you have a native IPv6 prefix you'll have something like 2^64 addresses, so machines on your LAN can have an actual WAN-visible address of their own without needing a trick like NAT.

    Using NAT with IPv6 is just missing the point.

    JohnFen , 3 days
    Re: Reason for disabling IPv6

    "so machines on your LAN can have an actual WAN-visible address of their own without needing a trick like NAT."

    Avoiding that configuration is exactly the use case for using NAT with IPv6. As others have pointed out, you can accomplish the same thing with IPv6 router configuration, but NAT is easier in terms of configuration and maintenance. Given that, and assuming that you don't want to be able to have arbitrary machines open ports that are visible to the internet, then why not use NAT?

    Also, if your goal is to make people more likely to move to IPv6, pointing out IPv4 methods that will work with IPv6 (even if you don't consider them optimal) seems like a really, really good idea. It eases the transition.

    Destroy All Monsters , 5 days
    Please, El Reg, these stories make me rage at breakfast. What's this?

    The bug will come as another argument against Systemd as the Linux management tool continues to fight for the hearts and minds of admins and developers alike.

    Less against systemd (which should get attacked on the design & implementation level) or against IPv6 than against the use of buffer-overflowable languages in 2018 in code that processes input from the Internet (it's not the middle ages anymore), or at least against the lack of very hard linting of the same.

    But in the end, what did it was a violation of the Don't Repeat Yourself principle and a lack of sufficiently high-level data structures. The pointer into the buffer and the remaining buffer length are two discrete variables that need to be updated simultaneously to keep the invariant, and this happens in several places. This is just a catastrophe waiting to happen: forget to update one of them once and you are out! Use structs, and functions that update the structs correctly.

    And use assertions in the code; this stuff all seems disturbingly assertion-free.

    Excellent explanation by Felix Wilhelm:

    https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1795921

    The function receives a pointer to the option buffer buf, it's remaining size buflen and the IA to be added to the buffer. While the check at (A) tries to ensure that the buffer has enough space left to store the IA option, it does not take the additional 4 bytes from the DHCP6Option header into account (B). Due to this the memcpy at (C) can go out-of-bound and *buflen can underflow [i.e. you suddenly have a gazillion byte buffer, Ed.] in (D) giving an attacker a very powerful and largely controlled OOB heap write starting at (E).
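
    To make that concrete, here is a minimal sketch in C of the struct-plus-update-function approach the commenter is advocating, with the assertions he asks for. This is not systemd's actual code: the names (opt_buf, opt_buf_append, DHCP6_OPT_HDR_LEN) are hypothetical, and it only illustrates the idea of keeping the buffer, its capacity and the bytes already written in one structure that is updated in exactly one place.

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define DHCP6_OPT_HDR_LEN 4   /* 2-byte option code + 2-byte option length */

    /* Hypothetical bounded option buffer: capacity and bytes-used travel together. */
    typedef struct {
        uint8_t *data;   /* start of the option buffer */
        size_t   size;   /* total capacity in bytes    */
        size_t   used;   /* bytes written so far       */
    } opt_buf;

    /* Append one option; returns 0 on success, -1 if it would not fit.
     * The 4 header bytes are counted before the copy (the check the
     * vulnerable code missed) and the length is updated in one place only. */
    static int opt_buf_append(opt_buf *b, uint16_t code,
                              const uint8_t *payload, size_t len)
    {
        assert(b != NULL && b->data != NULL && b->used <= b->size);  /* invariant in */
        if (len > UINT16_MAX || b->size - b->used < DHCP6_OPT_HDR_LEN + len)
            return -1;                        /* refuse instead of overflowing/underflowing */

        uint8_t *p = b->data + b->used;
        p[0] = (uint8_t)(code >> 8);
        p[1] = (uint8_t)(code & 0xff);
        p[2] = (uint8_t)(len >> 8);
        p[3] = (uint8_t)(len & 0xff);
        memcpy(p + DHCP6_OPT_HDR_LEN, payload, len);

        b->used += DHCP6_OPT_HDR_LEN + len;   /* cursor and free space cannot drift apart */
        assert(b->used <= b->size);           /* invariant out */
        return 0;
    }

    Callers that previously juggled a raw pointer and a separate remaining-length variable would go through this one function instead, so there is a single spot to get the arithmetic right and a single assert to catch it if they don't.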

    TheSkunkyMonk , 5 days
    Init is 1026 lines of code in one file and it works great.
    Anonymous Coward , 5 days
    "...and Poettering's occasionally controversial management of the tool."

    Shouldn't that be "...Poettering's controversial management as a tool."?

    clocKwize , 4 days
    Re: Contractor rights

    why don't we stop writing code in languages that make it so easy to screw up like this?

    There are plenty about nowadays. I'd rather my DHCP client be a little bit slower at processing packets if I had more confidence that it would not process them incorrectly and execute code hidden in said packets...

    Anonymous Coward , 4 days
    Switch, as easy as that

    The circus that is called "Linux" has forced me to Devuan and the like; however, the circus is getting worse and worse by the day, so I have switched to the BSD world. I will learn that rather than sit back and watch this unfold. As many of us have been saying, the sudden switch to SystemD was rather quick; perhaps you guys need to go investigate why it really happened. Don't assume you know: go dig and you will find the answers; it's rather scary. Thus I bid the Linux world farewell after 10 years of support. I will watch the grass dry out from the other side of the fence. It was destined to fail by means of infiltration and screw-it-up motive(s) from those we do not mention here.

    oiseau , 3 days
    Re: Switch, as easy as that

    Hello:

    As many of us have been saying, the sudden switch to SystemD was rather quick; perhaps you guys need to go investigate why it really happened. Don't assume you know: go dig and you will find the answers; it's rather scary ...

    Indeed, it was rather quick and is very scary.

    But there's really no need to dig much, just reason it out.

    It's a follow-the-money situation of sorts.

    I'll try to sum it up in three short questions:

    Q1: Hasn't the Linux philosophy (programs that do one thing and do it well) been a success?

    A1: Indeed, in spite of the many init systems out there, it has been a success in stability and OS management. And it can easily be tested and debugged, which is an essential requirement.

    Q2: So what would Linux need to have the practical equivalent of the registry in Windows for?

    A2: So that whatever the registry does in/to Windows can also be done in/to Linux.

    Q3: I see. And just who would want that to happen? Makes no sense, it is a huge step backwards.

    A3: ....

    Cheers,

    O.

    Dave Bell , 4 days
    Reporting weakness

    OK, so I was able to check through the link you provided, which says "up to and including 239", but I had just installed a systemd update, and since you said there was already a fix written, working its way through the distro update systems, all I had to do was check my log.

    Linux Mint makes it easy.

    But why didn't you say something such as "reported to affect systemd versions up to and including 239" and then give the link to the CVE? That failure looks like rather careless journalism.

    W.O.Frobozz , 3 days
    Hmm.

    /sbin/init never had these problems. But then again /sbin/init didn't pretend to be the entire operating system.

    [Nov 02, 2018] How to Recover from an Accidental SSH Disconnection on Linux RoseHosting

    Nov 02, 2018 | www.rosehosting.com

    ... I can get a list of all previous screens using the command:

    screen -ls
    

    And this gives me the output as shown here:

    [Screenshot: previous screen session is preserved]

    As you can see, there is a screen session here with the name:

    pts-0.test-centos-server

    To reconnect to it, just type:

    screen -r
    

    And this will take you back to where you were before the SSH connection was terminated! It's an amazing tool that you need to use for all important operations as insurance against accidental terminations.

    Manually Detaching Screens

    When you break an SSH session, what actually happens is that the screen is automatically detached from it and exists independently. While this is great, you can also detach screens manually and have multiple screens existing at the same time.

    For example, to detach a screen just type:

    screen -d
    

    And the current screen will be detached and preserved. However, all the processes inside it are still running, and all the states are preserved:

    [Screenshot: manually detaching screens]

    You can re-attach to a screen at any time using the "screen -r" command. To connect to a specific screen instead of the most recent, use:

    screen -r [screenname]
    
    Changing the Screen Names to Make Them More Relevant

    By default, the screen names don't mean much. And when you have a bunch of them present, you won't know which screens contain which processes. Fortunately, renaming a screen is easy when inside one. Just type:

    ctrl+a :

    We saw in the previous article that "ctrl+a" is the trigger condition for screen commands. The colon (:) will take you to the bottom of the screen where you can type commands. To rename, use:

    sessionname [newscreenname]
    

    As shown here:

    [Screenshot: changing the session name]

    And now when you detach the screen, it will show with the new name like this:

    [Screenshot: new session name created]

    Now you can have as many screens as you want without getting confused about which one is which!



    [Nov 01, 2018] IBM was doing their best to demoralize their staff (and contractors) and annoy their customers as much as possible!

    Notable quotes:
    "... I worked as a contractor for IBM's IGS division in the late '90s and early 2000s at their third biggest customer, and even then, IBM was doing their best to demoralize their staff (and contractors) and annoy their customers as much as possible! ..."
    "... IBM charged themselves 3x the actual price to customers for their ThinkPads at the time! This meant that I never had a laptop or desktop PC from IBM in the 8 years I worked there. If it wasn't for the project work I did I would not have had a PC to work on! ..."
    "... What was strange is that every single time I got a pay cut, IBM would then announce that they had bought a new company! I would have quit long before I did, but I was tied to them while waiting for my Green Card to be approved. I know that raises are few in the current IBM for normal employees and that IBM always pleads poverty for any employee request. Yet, they somehow manage to pay billions of dollars for a new company. Strange that, isn't it? ..."
    "... I moved to the company that had won the contract and regret not having the chance to tell that IBM manager what I thought about him and where he could stick the new laptop. ..."
    "... After that experience I decided to never work for them in any capacity ever again. I feel pity for the current Red Hat employees and my only advice to them is to get out while they can. "DON'T FOLLOW THE RED HAT TO HELL" ..."
    Nov 01, 2018 | theregister.co.uk

    Edwin Tumblebunny: Ashes to ashes, dust to dust - Red Hat is dead.

    Red Hat will be a distant memory in a few years as it gets absorbed by the abhorrent IBM culture and its bones picked clean by the IBM beancounters. Nothing good ever happens to a company bought by IBM.

    I worked as a contractor for IBM's IGS division in the late '90s and early 2000s at their third biggest customer, and even then, IBM was doing their best to demoralize their staff (and contractors) and annoy their customers as much as possible!

    Some examples:

    The on-site IBM employees (and contractors) had to use Lotus Notes for email. That was probably the worst piece of software I have ever used - I think baboons on drugs could have done a better design job. IBM set up a T1 (1.54 Mbps) link between the customer and the local IBM hub for email, etc. It sounds great until you realize there were over 150 people involved and due to the settings of Notes replication, it could often take over an hour to actually download email to read.

    To do my job I needed to install some IBM software. My PC did not have enough disk space for this software as well as the other software I needed. Rather than buy me a bigger hard disk I had to spend 8 hours a week installing and reinstalling software to do my job.

    I waited three months for a $50 stick of memory to be approved. When it finally arrived my machine had been changed out (due to a new customer project) and the memory was not compatible! Since I worked on a lot of projects I often had machines supplied by the customer on my desk. So, I would use one of these as my personal PC and would get an upgrade when the next project started!

    I was told I could not be supplied with a laptop or desktop from IBM as they were too expensive (my IBM division did not want to spend money on anything). IBM charged themselves 3x the actual price to customers for their ThinkPads at the time! This meant that I never had a laptop or desktop PC from IBM in the 8 years I worked there. If it wasn't for the project work I did I would not have had a PC to work on!

    IBM has many strange and weird processes that allow them to circumvent the contract they have with their preferred contractor companies. This meant that for a number of years I ended up getting a pay cut. What was strange is that every single time I got a pay cut, IBM would then announce that they had bought a new company! I would have quit long before I did, but I was tied to them while waiting for my Green Card to be approved. I know that raises are few in the current IBM for normal employees and that IBM always pleads poverty for any employee request. Yet, they somehow manage to pay billions of dollars for a new company. Strange that, isn't it?

    Eventually I was approved to get a laptop and excitedly watched it move slowly through the delivery system. I got confused when it was reported as delivered to Ohio rather than my work (not in Ohio). After some careful searching I discovered that my manager and his wife both worked for IBM from their home in, yes you can guess, Ohio. It looked like he had redirected my new laptop for his own use and most likely was going to send me his old one and claim it was a new one. I never got the chance to confront him about it, though, as IBM lost the contract with the customer that month and before the laptop should have arrived IBM was out! I moved to the company that had won the contract and regret not having the chance to tell that IBM manager what I thought about him and where he could stick the new laptop.

    After that experience I decided to never work for them in any capacity ever again. I feel pity for the current Red Hat employees and my only advice to them is to get out while they can. "DON'T FOLLOW THE RED HAT TO HELL"

    Certain Hollywood stars seem to be psychic types: https://twitter.com/JimCarrey/status/1057328878769721344

    rmstock , 2 days

    Re: "DON'T FOLLOW THE RED HAT TO HELL"

    I sense that a global effort is ongoing to shut down open source software by brute force. First, the enforcement of the EU General Data Protection Regulation (GDPR) by ICANN.org to enable untraceable takeovers of domains. Microsoft buying github. Linus Torvalds forced out of his own Linux kernel project because of the Code of Conduct and now IBM buying RedHat. I wrote the following at https://lulz.com/linux-devs-threaten-killswitch-coc-controversy-1252/ "Torvalds should lawyer up. The problems are the large IT Tech firms who platinum donated all over the place in Open Source land. When IBM donated 1 billion USD to Linux in 2000 https://itsfoss.com/ibm-invest-1-billion-linux/ a friend who was vehemently against the GPL and what Torvalds was doing, told me that in due time OSS would simply just go away.

    These Community Organizers, not Coders per se, are on a mission to overtake and control the Linux Foundation, and if they can't, will search and destroy all of it, even if it destroys themselves. Coraline is merely a expendable pion here. Torvalds is now facing unjust confrontations and charges resembling the nomination of Judge Brett Kavanaugh. Looking at the CoC document it even might have been written by a Google executive, who themselves currently are facing serious charges and lawsuits from their own Code of Conduct. See theintercept.com, their leaked video the day after the election of 2016. They will do anything to pursue this. However to pursue a personal bias or agenda furnishing enactments or acts such as to, omit contradicting facts (code), commit perjury, attend riots and harassments, cleanse Internet archives and search engines of exculpatory evidence and ultimately hire hit-men to exterminate witnesses of truth (developers), in an attempt to elevate bias as fabricated fact (code) are crimes and should be prosecuted accordingly."

    [Nov 01, 2018] Will Red Hat Survive IBM Ownership

    It does not matter if somebody "stresses independence": words are cheap. The mere fact that the owner is now IBM changes relationships. Also, IBM executives need to show "leadership", and that entails some "general direction" for Red Hat from now on. At least with Microsoft and HP the relationship will be much cooler than before.
    Also, IBM does not like "charity" projects like CentOS, and that will be affected too, no matter what executives tell you right now. Paradoxically, this greatly strengthens the position of Oracle Linux.
    The status of IBM software in the corporate world (outside finance companies) is low, and their games with licenses (licensing products per core, etc.) are viewed by most of their customers as despicable. This was one of the reasons IBM lost its share in enterprise software. For example, greed in selling Tivoli more or less killed the software. All credit to Lou Gerstner, who initially defined this culture of relentless outsourcing and the cult of the "bottom line" (which was easy for him because he did not understand technology at all). His successor was even more active in driving the company into the ground. Rampant cost-arbitrage-driven offshoring has left a legacy of dissatisfied customers. Most projects are overpriced, and most of those that were priced more or less at industry level had bad-quality results and cost overruns.
    IBM cut severance pay to one month, is firing people left and right, and is insensitive to the fired employees; the result is enormous negativity toward the company. Good people are scared to work for them and people are asking tough questions.
    This has been the strategy since Gerstner. Palmisano (a guy with a history diploma who switched into cybersecurity after retirement in 2012 and led Obama's cybersecurity commission) and Ginni Rometty followed the completely neoliberal mantra of "shareholder value": grow earnings per share at all costs. They all managed IBM for financial appearance rather than for quality products. There was no focus on breakthrough innovation, market leadership in mega-trends (like cloud computing) or even giving excellent service to customers.
    Ginni Rometty accepted bonuses during times of layoffs and outsourcing.
    When a company has lost its architectural talent, the brand will eventually die. IBM still files a very high number of patents every year. It is still a very large company in terms of revenue and number of employees. However, there are strong signs that its position in the technology industry might deteriorate further.
    Nov 01, 2018 | www.itprotoday.com

    Cormier also stressed Red Hat independence when we asked how the acquisition would affect ongoing partnerships in place with Amazon Web Services, Microsoft Azure, and other public clouds.

    "One of the things you keep seeing in this is that we're going to run Red Hat as a separate entity within IBM," he said. "One of the reasons is business. We need to, and will, remain Switzerland in terms of how we interact with our partners. We're going to continue to prioritize what we do for our partners within our products on a business case perspective, including IBM as a partner. We're not going to do unnatural things. We're going to do the right thing for the business, and most importantly, the customer, in terms of where we steer our products."

    Red Hat promises that independence will extend to Red Hat's community projects, such as its freely available Fedora Linux distribution, which is widely used by developers. When asked what impact the sale would have on Red Hat-maintained projects like Fedora and CentOS, Cormier replied, "None. We just came from an all-hands meeting from the company around the world for Red Hat, and we got asked this question and my answer was that the day after we close, I don't intend to do anything different or in any other way than we do our every day today. Arvind, Ginnie, Jim, and I have talked about this extensively. For us, it's business as usual. Whatever we were going to do for our road maps as a stand alone will continue. We have to do what's right for the community, upstream, our associates, and our business."

    This all sounds good, but as they say, talk is cheap. Six months from now we'll have a better idea of how well IBM can walk the walk and leave Red Hat alone. If it can't and Red Hat doesn't remain clearly distinct from IBM ownership control, then Big Blue will have wasted $33 billion it can't afford to lose and put its future, as well as the future of Red Hat, in jeopardy.

    [Oct 30, 2018] There are plenty of examples of people who were doing their jobs, IN SPADES, putting in tons of unpaid overtime, and generally doing whatever was humanly possible to make sure that whatever was promised to the customer was delivered within their span of control. As they grew older corporations threw them out like an empty can

    Notable quotes:
    "... The other alternative is a market-based life that, for many, will be cruel, brutish, and short. ..."
    Oct 30, 2018 | features.propublica.org

    Lorilynn King

    Step back and think about this for a minute. There are plenty of examples of people who were doing their jobs, IN SPADES, putting in tons of unpaid overtime, and generally doing whatever was humanly possible to make sure that whatever was promised to the customer was delivered (within their span of control... I'm not going to get into a discussion of how IBM pulls the rug out from underneath contracts after they've been signed).

    These people were, and still are, high performers; they are committed to the job and the purpose that has been communicated to them by their peers, management, and customers; and they take the time (their OWN time) to pick up new skills and make sure that they are still current and marketable. They do this because they are committed to doing the job to the best of their ability.... it's what makes them who they are.

    IBM (and other companies) are firing these very people ***for one reason and one reason ONLY***: their AGE. They have the skills and they're doing their jobs. If the same person was 30 you can bet that they'd still be there. Most of the time it has NOTHING to do with performance or lack of concurrency. Once the employee is fired, the job is done by someone else. The work is still there, but it's being done by someone younger and/or of a different nationality.

    The money that is being saved by these companies has to come from somewhere. People that are having to withdraw their retirement savings 20 or so years earlier than planned are going to run out of funds.... and when they're in nursing homes, guess who is going to be supporting them? Social security will be long gone, their kids have their own monetary challenges.... so it will be government programs.... maybe.

    This is not just a problem that impacts the 40 and over crowd. This is going to impact our entire society for generations to come.

    NoPolitician
    The business reality you speak of can be tempered via government actions. A few things:

    The other alternative is a market-based life that, for many, will be cruel, brutish, and short.

    [Oct 30, 2018] Soon after I started, the company fired hundreds of 50-something employees and put we "kids" in their jobs. Seeing that employee loyalty was a one way street at that place, I left after a couple of years. Best career move I ever made.

    Oct 30, 2018 | features.propublica.org

    Al Romig , Wednesday, April 18, 2018 5:20 AM

    As a new engineering graduate, I joined a similar-sized multinational US-based company in the early '70s. Their recruiting pitch was, "Come to work here, kid. Do your job, keep your nose clean, and you will enjoy great, secure work until you retire on easy street".

    Soon after I started, the company fired hundreds of 50-something employees and put we "kids" in their jobs. Seeing that employee loyalty was a one way street at that place, I left after a couple of years. Best career move I ever made.

    GoingGone , Friday, April 13, 2018 6:06 PM
    As a 25yr+ vet of IBM, I can confirm that this article is spot-on true. IBM used to be a proud and transparent company that clearly demonstrated that it valued its employees as much as it did its stock performance or dividend rate or EPS, simply because it is good for business. Those principles helped make and keep IBM atop the business world as the most trusted international brand and business icon of success for so many years. In 2000, all that changed when Sam Palmisano became the CEO. Palmisano's now infamous "Roadmap 2015" ran the company into the ground through its maniacal focus on increasing EPS at any and all costs. Literally. Like, its employees, employee compensation, benefits, skills, and education opportunities. Like, its products, product innovation, quality, and customer service. All of which resulted in the devastation of its technical capability and competitiveness, employee engagement, and customer loyalty. Executives seemed happy enough as their compensation grew nicely with greater financial efficiencies, and Palmisano got a sweet $270M+ exit package in 2012 for a job well done. The new CEO, Ginni Rometty, has since undergone a lot of scrutiny for her lack of business results, but she was screwed from day one. Of course, that doesn't leave her off the hook for the business practices outlined in the article, but what do you expect: she was hand picked by Palmisano and approved by the same board that thought Palmisano was golden.
    Paul V Sutera , Tuesday, April 3, 2018 7:33 PM
    In 1994, I saved my job at IBM for the first time, and survived. But I was 36 years old. I sat down at the desk of a man in his 50s, and found a few odds and ends left for me in the desk. Almost 20 years later, it was my turn to go. My health and well-being is much better now. Less money but better health. The sins committed by management will always be: "I was just following orders".

    [Oct 30, 2018] IBM age discrimination

    Notable quotes:
    "... Correction, March 24, 2018: Eileen Maroney lives in Aiken, South Carolina. The name of her city was incorrect in the original version of this story. ..."
    Oct 30, 2018 | features.propublica.org

    Consider, for example, a planning presentation that former IBM executives said was drafted by heads of a business unit carved out of IBM's once-giant software group and charged with pursuing the "C," or cloud, portion of the company's CAMS strategy.

    The presentation laid out plans for substantially altering the unit's workforce. It was shown to company leaders including Diane Gherson, the senior vice president for human resources, and James Kavanaugh, recently elevated to chief financial officer. Its language was couched in the argot of "resources," IBM's term for employees, and "EP's," its shorthand for early professionals or recent college graduates.

    Among the goals: "Shift headcount mix towards greater % of Early Professional hires." Among the means: "[D]rive a more aggressive performance management approach to enable us to hire and replace where needed, and fund an influx of EPs to correct seniority mix." Among the expected results: "[A] significant reduction in our workforce of 2,500 resources."

    A slide from a similar presentation prepared last spring for the same leaders called for "re-profiling current talent" to "create room for new talent." Presentations for 2015 and 2016 for the 50,000-employee software group also included plans for "aggressive performance management" and emphasized the need to "maintain steady attrition to offset hiring."

    IBM declined to answer questions about whether either presentation was turned into company policy. The description of the planned moves matches what hundreds of older ex-employees told ProPublica they believe happened to them: They were ousted because of their age. The company used their exits to hire replacements, many of them young; to ship their work overseas; or to cut its overall headcount.

    Ed Alpern, now 65, of Austin, started his 39-year run with IBM as a Selectric typewriter repairman. He ended as a project manager in October of 2016 when, he said, his manager told him he could either leave with severance and other parting benefits or be given a bad job review -- something he said he'd never previously received -- and risk being fired without them.

    Albert Poggi, now 70, was a three-decade IBM veteran and ran the company's Palisades, New York, technical center where clients can test new products. When notified in November of 2016 he was losing his job to layoff, he asked his bosses why, given what he said was a history of high job ratings. "They told me," he said, "they needed to fill it with someone newer."

    The presentations from the software group, as well as the stories of ex-employees like Alpern and Poggi, square with internal documents from two other major IBM business units. The documents for all three cover some or all of the years from 2013 through the beginning of 2018 and deal with job assessments, hiring, firing and layoffs.

    The documents detail practices that appear at odds with how IBM says it treats its employees. In many instances, the practices in effect, if not intent, tilt against the company's older U.S. workers.

    For example, IBM spokespeople and lawyers have said the company never considers a worker's age in making decisions about layoffs or firings.

    But one 2014 document reviewed by ProPublica includes dates of birth. An ex-IBM employee familiar with the process said executives from one business unit used it to decide about layoffs or other job changes for nearly a thousand workers, almost two-thirds of them over 50.

    Documents from subsequent years show that young workers are protected from cuts for at least a limited period of time. A 2016 slide presentation prepared by the company's global technology services unit, titled "U.S. Resource Action Process" and used to guide managers in layoff procedures, includes bullets for categories considered "ineligible" for layoff. Among them: "early professional hires," meaning recent college graduates.

    In responding to age-discrimination complaints that ex-employees file with the EEOC, lawyers for IBM say that front-line managers make all decisions about who gets laid off, and that their decisions are based strictly on skills and job performance, not age.

    But ProPublica reviewed spreadsheets that indicate front-line managers hardly acted alone in making layoff calls. Former IBM managers said the spreadsheets were prepared for upper-level executives and kept continuously updated. They list hundreds of employees together with codes like "lift and shift," indicating that their jobs were to be lifted from them and shifted overseas, and details such as whether IBM's clients had approved the change.

    An examination of several of the spreadsheets suggests that, whatever the criteria for assembling them, the resulting list of those marked for layoff was skewed toward older workers. A 2016 spreadsheet listed more than 400 full-time U.S. employees under the heading "REBAL," which refers to "rebalancing," the process that can lead to laying off workers and either replacing them or shifting the jobs overseas. Using the job search site LinkedIn, ProPublica was able to locate about 100 of these employees and then obtain their ages through public records. Ninety percent of those found were 40 or older. Seventy percent were over 50.

    IBM frequently cites its history of encouraging diversity in its responses to EEOC complaints about age discrimination. "IBM has been a leader in taking positive actions to ensure its business opportunities are made available to individuals without regard to age, race, color, gender, sexual orientation and other categories," a lawyer for the company wrote in a May 2017 letter. "This policy of non-discrimination is reflected in all IBM business activities."

    But ProPublica found at least one company business unit using a point system that disadvantaged older workers. The system awarded points for attributes valued by the company. The more points a person garnered, according to the former employee, the more protected she or he was from layoff or other negative job change; the fewer points, the more vulnerable.

    The arrangement appears on its face to favor younger newcomers over older veterans. Employees were awarded points for being relatively new at a job level or in a particular role. Those who worked for IBM for fewer years got more points than those who'd been there a long time.

    The ex-employee familiar with the process said a 2014 spreadsheet from that business unit, labeled "IBM Confidential," was assembled to assess the job prospects of more than 600 high-level employees, two-thirds of them from the U.S. It included employees' years of service with IBM, which the former employee said was used internally as a proxy for age. Also listed was an assessment by their bosses of their career trajectories as measured by the highest job level they were likely to attain if they remained at the company, as well as their point scores.

    The tilt against older workers is evident when employees' years of service are compared with their point scores. Those with no points and therefore most vulnerable to layoff had worked at IBM an average of more than 30 years; those with a high number of points averaged half that.

    Perhaps even more striking is the comparison between employees' service years and point scores on the one hand and their superiors' assessments of their career trajectories on the other.

    Along with many American employers, IBM has argued it needs to shed older workers because they're no longer at the top of their games or lack "contemporary" skills.

    But among those sized up in the confidential spreadsheet, fully 80 percent of older employees -- those with the most years of service but no points and therefore most vulnerable to layoff -- were rated by superiors as good enough to stay at their current job levels or be promoted. By contrast, only a small percentage of younger employees with a high number of points were similarly rated.

    "No major company would use tools to conduct a layoff where a disproportionate share of those let go were African Americans or women," said Cathy Ventrell-Monsees, senior attorney adviser with the EEOC and former director of age litigation for the senior lobbying giant AARP. "There's no difference if the tools result in a disproportionate share being older workers."

    In addition to the point system that disadvantaged older workers in layoffs, other documents suggest that IBM has made increasingly aggressive use of its job-rating machinery to pave the way for straight-out firings, or what the company calls "management-initiated separations." Internal documents suggest that older workers were especially targets.

    Like in many companies, IBM employees sit down with their managers at the start of each year and set goals for themselves. IBM graded on a scale of 1 to 4, with 1 being top-ranked.

    Those rated as 3 or 4 were given formal short-term goals known as personal improvement plans, or PIPs. Historically many managers were lenient, especially toward those with 3s whose ratings had dropped because of forces beyond their control, such as a weakness in the overall economy, ex-employees said.

    But within the past couple of years, IBM appears to have decided the time for leniency was over. For example, a software group planning document for 2015 said that, over and above layoffs, the unit should seek to fire about 3,000 of the unit's 50,000-plus workers.

    To make such deep cuts, the document said, executives should strike an "aggressive performance management posture." They needed to double the share of employees given low 3 and 4 ratings to at least 6.6 percent of the division's workforce. And because layoffs cost the company more than outright dismissals or resignations, the document said, executives should make sure that more than 80 percent of those with low ratings get fired or forced to quit.

    Finally, the 2015 document said the division should work "to attract the best and brightest early professionals" to replace up to two-thirds of those sent packing. A more recent planning document -- the presentation to top executives Gherson and Kavanaugh for a business unit carved out of the software group -- recommended using similar techniques to free up money by cutting current employees to fund an "influx" of young workers.

    In a recent interview, Poggi said he was resigned to being laid off. "Everybody at IBM has a bullet with their name on it," he said. Alpern wasn't nearly as accepting of being threatened with a poor job rating and then fired.

    Alpern had a particular reason for wanting to stay on at IBM, at least until the end of last year. His younger son, Justin, then a high school senior, had been named a National Merit semifinalist. Alpern wanted him to be able to apply for one of the company's Watson scholarships. But IBM had recently narrowed eligibility so only the children of current employees could apply, not also retirees as it was until 2014.

    Alpern had to make it through December for his son to be eligible.

    But in August, he said, his manager ordered him to retire. He sought to buy time by appealing to superiors. But he said the manager's response was to threaten him with a bad job review that, he was told, would land him on a PIP, where his work would be scrutinized weekly. If he failed to hit his targets -- and his managers would be the judges of that -- he'd be fired and lose his benefits.

    Alpern couldn't risk it; he retired on Oct. 31. His son, now a freshman on the dean's list at Texas A&M University, didn't get to apply.

    "I can think of only a couple regrets or disappointments over my 39 years at IBM,"" he said, "and that's one of them."

    'Congratulations on Your Retirement!'

    Like any company in the U.S., IBM faces few legal constraints to reducing the size of its workforce. And with its no-disclosure strategy, it eliminated one of the last regular sources of information about its employment practices and the changing size of its American workforce.

    But there remained the question of whether recent cutbacks were big enough to trigger state and federal requirements for disclosure of layoffs. And internal documents, such as a slide in a 2016 presentation titled "Transforming to Next Generation Digital Talent," suggest executives worried that "winning the talent war" for new young workers required IBM to improve the "attractiveness of (its) culture and work environment," a tall order in the face of layoffs and firings.

    So the company apparently has sought to put a softer face on its cutbacks by recasting many as voluntary rather than the result of decisions by the firm. One way it has done this is by converting many layoffs to retirements.

    Some ex-employees told ProPublica that, faced with a layoff notice, they were just as happy to retire. Others said they felt forced to accept a retirement package and leave. Several actively objected to the company treating their ouster as a retirement. The company nevertheless processed their exits as such.

    Project manager Ed Alpern's departure was treated in company paperwork as a voluntary retirement. He didn't see it that way, because the alternative he said he was offered was being fired outright.

    Lorilynn King, a 55-year-old IT specialist who worked from her home in Loveland, Colorado, had been with IBM almost as long as Alpern by May 2016 when her manager called to tell her the company was conducting a layoff and her name was on the list.

    King said the manager told her to report to a meeting in Building 1 on IBM's Boulder campus the following day. There, she said, she found herself in a group of other older employees being told by an IBM human resources representative that they'd all be retiring. "I have NO intention of retiring," she remembers responding. "I'm being laid off."

    ProPublica has collected documents from 15 ex-IBM employees who got layoff notices followed by a retirement package and has talked with many others who said they received similar paperwork. Critics say the sequence doesn't square well with the law.

    "This country has banned mandatory retirement," said Seiner, the University of South Carolina law professor and former EEOC appellate lawyer. "The law says taking a retirement package has to be voluntary. If you tell somebody 'Retire or we'll lay you off or fire you,' that's not voluntary."

    Until recently, the company's retirement paperwork included a letter from Rometty, the CEO, that read, in part, "I wanted to take this opportunity to wish you well on your retirement ... While you may be retiring to embark on the next phase of your personal journey, you will always remain a valued and appreciated member of the IBM family." Ex-employees said IBM stopped sending the letter last year.

    IBM has also embraced another practice that leads workers, especially older ones, to quit on what appears to be a voluntary basis. It substantially reversed its pioneering support for telecommuting, telling people who've been working from home for years to begin reporting to certain, often distant, offices. Their other choice: Resign.

    David Harlan had worked as an IBM marketing strategist from his home in Moscow, Idaho, for 15 years when a manager told him last year of orders to reduce the performance ratings of everybody at his pay grade. Then in February last year, when he was 50, came an internal video from IBM's new senior vice president, Michelle Peluso, which announced plans to improve the work of marketing employees by ordering them to work "shoulder to shoulder." Those who wanted to stay on would need to "co-locate" to offices in one of six cities.

    Early last year, Harlan received an email congratulating him on "the opportunity to join your team in Raleigh, North Carolina." He had 30 days to decide on the 2,600-mile move. He resigned in June.

    David Harlan worked for IBM for 15 years from his home in Moscow, Idaho, where he also runs a drama company. Early last year, IBM offered him a choice: Move 2,600 miles to Raleigh-Durham to begin working at an office, or resign. He left in June. (Rajah Bose for ProPublica)

    After the Peluso video was leaked to the press, an IBM spokeswoman told the Wall Street Journal that the "vast majority" of people ordered to change locations and begin reporting to offices did so. IBM Vice President Ed Barbini said in an initial email exchange with ProPublica in July that the new policy affected only about 2,000 U.S. employees and that "most" of those had agreed to move.

    But employees across a wide range of company operations, from the systems and technology group to analytics, told ProPublica they've also been ordered to co-locate in recent years. Many IBMers with long service said that they quit rather than sell their homes, pull children from school and desert aging parents. IBM declined to say how many older employees were swept up in the co-location initiative.

    "They basically knew older employees weren't going to do it," said Eileen Maroney, a 63-year-old IBM product manager from Aiken, South Carolina, who, like Harlan, was ordered to move to Raleigh or resign. "Older people aren't going to move. It just doesn't make any sense." Like Harlan, Maroney left IBM last June.

    Having people quit rather than being laid off may help IBM avoid disclosing how much it is shrinking its U.S. workforce and where the reductions are occurring.

    Under the federal WARN Act, adopted in the wake of huge job cuts and factory shutdowns during the 1980s, companies laying off 50 or more employees who constitute at least one-third of an employer's workforce at a site have to give advance notice of layoffs to the workers, public agencies and local elected officials.

    Similar laws in some states where IBM has a substantial presence are even stricter. California, for example, requires advanced notice for layoffs of 50 or more employees, no matter what the share of the workforce. New York requires notice for 25 employees who make up a third.

    Because the laws were drafted to deal with abrupt job cuts at individual plants, they can miss reductions that occur over long periods among a workforce like IBM's that was, at least until recently, widely dispersed because of the company's work-from-home policy.

    IBM's training sessions to prepare managers for layoffs suggest the company was aware of WARN thresholds, especially in states with strict notification laws such as California. A 2016 document entitled "Employee Separation Processing" and labeled "IBM Confidential" cautions managers about the "unique steps that must be taken when processing separations for California employees."

    A ProPublica review of five years of WARN disclosures for a dozen states where the company had large facilities that shed workers found no disclosures in nine. In the other three, the company alerted authorities of just under 1,000 job cuts -- 380 in California, 369 in New York and 200 in Minnesota. IBM's reported figures are well below the actual number of jobs the company eliminated in these states, where in recent years it has shuttered, sold off or leveled plants that once employed vast numbers.

    By contrast, other employers in the same 12 states reported layoffs last year alone totaling 215,000 people. They ranged from giant Walmart to Ostrom's Mushroom Farms in Washington state.

    Whether IBM operated within the rules of the WARN act, which are notoriously fungible, could not be determined because the company declined to provide ProPublica with details on its layoffs.

    A Second Act, But Poorer

    With 35 years at IBM under his belt, Ed Miyoshi had plenty of experience being pushed to take buyouts, or early retirement packages, and refusing them. But he hadn't expected to be pushed last fall.

    Miyoshi, of Hopewell Junction, New York, had some years earlier launched a pilot program to improve IBM's technical troubleshooting. With the blessing of an IBM vice president, he was busily interviewing applicants in India and Brazil to staff teams to roll the program out to clients worldwide.

    The interviews may have been why IBM mistakenly assumed Miyoshi was a manager, and so emailed him to eliminate the one U.S.-based employee still left in his group.

    "That was me," Miyoshi realized.

    In his sign-off email to colleagues shortly before Christmas 2016, Miyoshi, then 57, wrote: "I am too young and too poor to stop working yet, so while this is good-bye to my IBM career, I fully expect to cross paths with some of you very near in the future."

    He did, and perhaps sooner than his colleagues had expected; he started as a subcontractor to IBM about two weeks later, on Jan. 3.

    Miyoshi is an example of older workers who've lost their regular IBM jobs and been brought back as contractors. Some of them -- not Miyoshi -- became contract workers after IBM told them their skills were out of date and no longer needed.

    Employment law experts said that hiring ex-employees as contractors can be legally dicey. It raises the possibility that the layoff of the employee was not for the stated reason but perhaps because they were targeted for their age, race or gender.

    IBM appears to recognize the problem. Ex-employees say the company has repeatedly told managers -- most recently earlier this year -- not to contract with former employees or sign on with third-party contracting firms staffed by ex-IBMers. But ProPublica turned up dozens of instances where the company did just that.

    Only two weeks after IBM laid him off in December 2016, Ed Miyoshi of Hopewell Junction, New York, started work as a subcontractor to the company. But he took a $20,000-a-year pay cut. "I'm not a millionaire, so that's a lot of money to me," he says. (Demetrius Freeman for ProPublica)

    Responding to a question in a confidential questionnaire from ProPublica, one 35-year company veteran from New York said he knew exactly what happened to the job he left behind when he was laid off. "I'M STILL DOING IT. I got a new gig eight days after departure, working for a third-party company under contract to IBM doing the exact same thing."

    In many cases, of course, ex-employees are happy to have another job, even if it is connected with the company that laid them off.

    Henry, the Columbus-based sales and technical specialist who'd been with IBM's "resiliency services" unit, discovered that he'd lost his regular IBM job because the company had purchased an Indian firm that provided the same services. But after a year out of work, he wasn't going to turn down the offer of a temporary position as a subcontractor for IBM, relocating data centers. It got money flowing back into his household and got him back where he liked to be, on the road traveling for business.

    The compensation most ex-IBM employees make as contractors isn't comparable. While Henry said he collected the same dollar amount, it didn't include health insurance, which cost him $1,325 a month. Miyoshi said his paycheck is 20 percent less than what he made as an IBM regular.

    "I took an over $20,000 hit by becoming a contractor. I'm not a millionaire, so that's a lot of money to me," Miyoshi said.

    And lower pay isn't the only problem ex-IBM employees-now-subcontractors face. This year, Miyoshi's payable hours have been cut by an extra 10 "furlough days." Internal documents show that IBM repeatedly furloughs subcontractors without pay, often for two, three or more weeks a quarter. In some instances, the furloughs occur with little advance notice and at financially difficult moments. In one document, for example, it appears IBM managers, trying to cope with a cost overrun spotted in mid-November, planned to dump dozens of subcontractors through the end of the year, the middle of the holiday season.

    Former IBM employees now on contract said the company controls costs by notifying contractors in the midst of projects they have to take pay cuts or lose the work. Miyoshi said that he originally started working for his third-party contracting firm for 10 percent less than at IBM, but ended up with an additional 10 percent cut in the middle of 2017, when IBM notified the contractor it was slashing what it would pay.

    For many ex-employees, there are few ways out. Henry, for example, sought to improve his chances of landing a new full-time job by seeking assistance to finish a college degree through a federal program designed to retrain workers hurt by offshoring of jobs.

    But when he contacted the Ohio state agency that administers the Trade Adjustment Assistance, or TAA, program, which provides assistance to workers who lose their jobs for trade-related reasons, he was told IBM hadn't submitted necessary paperwork. State officials said Henry could apply if he could find other IBM employees who were laid off with him, information that the company doesn't provide.

    TAA is overseen by the Labor Department but is operated by states under individual agreements with Washington, so the rules can vary from state to state. But generally employers, unions, state agencies and groups of employers can petition for training help and cash assistance. Labor Department data compiled by the advocacy group Global Trade Watch shows that employers apply in about 40 percent of cases. Some groups of IBM workers have obtained retraining funds when they or their state have applied, but records dating back to the early 1990s show IBM itself has applied for and won taxpayer assistance only once, in 2008, for three Chicago-area workers whose jobs were being moved to India.

    Teasing New Jobs

    As IBM eliminated thousands of jobs in 2016, David Carroll, a 52-year-old Austin software engineer, thought he was safe.

    His job was in mobile development, the "M" in the company's CAMS strategy. And if that didn't protect him, he figured he was only four months shy of qualifying for a program that gives employees who leave within a year of their three-decade mark access to retiree medical coverage and other benefits.

    But the layoff notice Carroll received March 2 gave him three months -- not four -- to come up with another job. Having been a manager, he said he knew the gantlet he'd have to run to land a new position inside IBM.

    Still, he went at it hard, applying for more than 50 IBM jobs, including one for a job he'd successfully done only a few years earlier. For his effort, he got one offer -- the week after he'd been forced to depart. He got severance pay but lost access to what would have been more generous benefits.

    Edward Kishkill, then 60, of Hillsdale, New Jersey, had made a similar calculation.

    A senior systems engineer, Kishkill recognized the danger of layoffs, but assumed he was immune because he was working in systems security, the "S" in CAMS and another hot area at the company.

    The precaution did him no more good than it had Carroll. Kishkill received a layoff notice the same day, along with 17 of the 22 people on his systems security team, including Diane Moos. The notice said that Kishkill could look for other jobs internally. But if he hadn't landed anything by the end of May, he was out.

    With a daughter who was a senior in high school headed to Boston University, he scrambled to apply, but came up dry. His last day was May 31, 2016.

    For many, the fruitless search for jobs within IBM is the last straw, a final break with the values the company still says it embraces. Combined with the company's increasingly frequent request that departing employees train their overseas replacements, it has left many people bitter. Scores of ex-employees interviewed by ProPublica said that managers with job openings told them they weren't allowed to hire from layoff lists without getting prior, high-level clearance, something that's almost never given.

    ProPublica reviewed documents that show that a substantial share of recent IBM layoffs have involved what the company calls "lift and shift," lifting the work of specific U.S. employees and shifting it to specific workers in countries such as India and Brazil. For example, a document summarizing U.S. employment in part of the company's global technology services division for 2015 lists nearly a thousand people as layoff candidates, with the jobs of almost half coded for lift and shift.

    Ex-employees interviewed by ProPublica said the lift-and-shift process required their extensive involvement. For example, shortly after being notified she'd be laid off, Kishkill's colleague, Moos, was told to help prepare a "knowledge transfer" document and begin a round of conference calls and email exchanges with two Indian IBM employees who'd be taking over her work. Moos said the interactions consumed much of her last three months at IBM.

    Next Chapters

    While IBM has managed to keep the scale and nature of its recent U.S. employment cuts largely under the public's radar, the company drew some unwanted attention during the 2016 presidential campaign, when then-candidate Donald Trump lambasted it for eliminating 500 jobs in Minnesota, where the company has had a presence for a half century, and shifting the work abroad.

    The company also has caught flak -- in places like Buffalo, New York; Dubuque, Iowa; Columbia, Missouri, and Baton Rouge, Louisiana -- for promising jobs in return for state and local incentives, then failing to deliver. In all, according to public officials in those and other places, IBM promised to bring on 3,400 workers in exchange for as much as $250 million in taxpayer financing but has hired only about half as many.

    After Trump's victory, Rometty, in a move at least partly aimed at courting the president-elect, pledged to hire 25,000 new U.S. employees by 2020. Spokesmen said the hiring would increase IBM's U.S. employment total, although, given its continuing job cuts, the addition is unlikely to approach the promised hiring total.

    When The New York Times ran a story last fall saying IBM now has more employees in India than the U.S., Barbini, the corporate spokesman, rushed to declare, "The U.S. has always been and remains IBM's center of gravity." But his stream of accompanying tweets and graphics focused as much on the company's record for racking up patents as hiring people.

    IBM has long been aware of the damage its job cuts can do to people. In a series of internal training documents to prepare managers for layoffs in recent years, the company has included this warning: "Loss of a job often triggers a grief reaction similar to what occurs after a death."

    Most, though not all, of the ex-IBM employees with whom ProPublica spoke have weathered the loss and re-invented themselves.

    Marjorie Madfis, the digital marketing strategist, couldn't land another tech job after her 2013 layoff, so she headed in a different direction. She started a nonprofit called Yes She Can Inc. that provides job skills development for young autistic women, including her 21-year-old daughter.

    After almost two years of looking and desperate for useful work, Brian Paulson, the widely traveled IBM senior manager, applied for and landed a position as a part-time rural letter carrier in Plano, Texas. He now works as a contract project manager for a Las Vegas gaming and lottery firm.

    Ed Alpern, who started at IBM as a Selectric typewriter repairman, watched his son go on to become a National Merit Scholar at Texas A&M University, but not a Watson scholarship recipient.

    Lori King, the IT specialist and 33-year IBM veteran who's now 56, got in a parting shot. She added an addendum to the retirement papers the firm gave her that read in part: "It was never my plan to retire earlier than at least age 60 and I am not committing to retire. I have been informed that I am impacted by a resource action effective on 2016-08-22, which is my last day at IBM, but I am NOT retiring."

    King has aced more than a year of government-funded coding boot camps and university computer courses, but has yet to land a new job.

    David Harlan still lives in Moscow, Idaho, after refusing IBM's "invitation" to move to North Carolina, and is artistic director of the Moscow Art Theatre (Too).

    Ed Miyoshi is still a technical troubleshooter working as a subcontractor for IBM.

    Ed Kishkill, the senior systems engineer, works part time at a local tech startup, but pays his bills as an associate at a suburban New Jersey Staples store.

    This year, Paul Henry was back on the road, working as an IBM subcontractor in Detroit, about 200 miles from where he lived in Columbus. On Jan. 8, he put in a 14-hour day and said he planned to call home before turning in. He died in his sleep.

    Correction, March 24, 2018: Eileen Maroney lives in Aiken, South Carolina. The name of her city was incorrect in the original version of this story.


    Peter Gosselin joined ProPublica as a contributing reporter in January 2017 to cover aging. He has covered the U.S. and global economies for, among others, the Los Angeles Times and The Boston Globe, focusing on the lived experiences of working people. He is the author of "High Wire: The Precarious Financial Lives of American Families."

    Ariana Tobin is an engagement reporter at ProPublica, where she works to cultivate communities to inform our coverage. She was previously at The Guardian and WNYC. Ariana has also worked as digital producer for APM's Marketplace and contributed to outlets including The New Republic, On Being, the St. Louis Beacon and Bustle.

    Production by Joanna Brenner and Hannah Birch. Art direction by David Sleight. Illustrations by Richard Borge.

    [Oct 30, 2018] Elimination of loyalty: what corporations cloak as weeding out the low performers is transparently a way of catching older workers in the net as well.

    Oct 30, 2018 | features.propublica.org

    Great White North, Thursday, March 22, 2018 11:29 PM

    There's not a word of truth quoted in this article. That is, quoted from IBM spokespeople. It's the culture there now. They don't even realize that most of their customers have become deaf to the same crap from their Sales and Marketing BS, which is even worse than their HR speak.

    The sad truth is that IBM became incapable of taking its innovation (IBM is indeed a world beating, patent generating machine) to market a long time ago. It has also lost the ability (if it ever really had it) to acquire other companies and foster their innovation either - they ran most into the ground. As a result, for nearly a decade revenues have declined and resource actions grown. The resource actions may seem to be the ugly problem, but they're only the symptom of a fat greedy and pompous bureaucracy that's lost its ability to grow and stay relevant in a very competitive and changing industry. What they have been able to perfect and grow is their ability to downsize and return savings as dividends (Big Sam Palmisano's "innovation"). Oh, and for senior management to line their pockets.

    Nothing IBM is currently doing is sustainable.

    If you're still employed there, listen to the pain in the words of your fallen comrades and don't knock yourself out trying to stay afloat. Perhaps learn some BS of your own and milk your job (career? not...) until you find freedom and better pastures.

    If you own stock, do like Warren Buffett, and sell it while it still has some value.

    Danllo , Thursday, March 22, 2018 10:43 PM
    This is NOTHING NEW! All major corporations have done and will do this at some point in their existence. Another industry that does this regularly, every 3 to 5 years, is the pharmaceutical industry. They'll decimate their sales forces in order to, as they like to put it, "right size" the company.

    They'll cloak it as weeding out the low performers, but they'll try to catch the "older" workers in the net as well.

    [Oct 30, 2018] Cutting 'Old Heads' at IBM

    Notable quotes:
    "... I took an early retirement package when IBM first started downsizing. I had 30 years with them, but I could see the writing on the wall so I got out. I landed an exec job with a biotech company some years later and inherited an IBM consulting team that were already engaged. I reviewed their work for 2 months then had the pleasure of terminating the contract and actually escorting the team off the premises because the work product was so awful. ..."
    "... Every former or prospective IBM employee is a potential future IBM customer or partner. How you treat them matters! ..."
    "... I advise IBM customers now. My biggest professional achievements can be measured in how much revenue IBM lost by my involvement - millions. Favorite is when IBM paid customer to stop the bleeding. ..."
    Oct 30, 2018 | features.propublica.org

    I took an early retirement package when IBM first started downsizing. I had 30 years with them, but I could see the writing on the wall so I got out. I landed an exec job with a biotech company some years later and inherited an IBM consulting team that were already engaged. I reviewed their work for 2 months then had the pleasure of terminating the contract and actually escorting the team off the premises because the work product was so awful.

    They actually did a presentation of their interim results - but it was a 52-slide package that they had presented to me in my previous job, just with the names and numbers changed.

    DarthVaderMentor dauwkus , Thursday, April 5, 2018 4:43 PM

    Intellectual Capital Re-Use! LOL! Not many people in IBM realize that many, if not all, of the original IBM Consulting Group materials were made under the Type 2 Materials clause of the IBM contract, which means the customers actually owned the IP rights to the documents. Can you imagine the mess if just one customer demands to get paid for every re-use of the IP that was developed for them and then re-used over and over again?
    NoGattaca dauwkus , Monday, May 7, 2018 5:37 PM
    Beautiful! Yeah, these companies are so quick to push out experienced people who have dedicated their lives to the firm - how can you not, with all the hours and commitment it takes - and they way underestimate the power of the network of those left for dead and their influence on that next career gig. Memories are long... very long when it comes to experiences like this.
    davosil North_40 , Sunday, March 25, 2018 5:19 PM
    True dat! Every former or prospective IBM employee is a potential future IBM customer or partner. How you treat them matters!
    Playing Defense North_40 , Tuesday, April 3, 2018 4:41 PM
    I advise IBM customers now. My biggest professional achievements can be measured in how much revenue IBM lost by my involvement - millions. Favorite is when IBM paid customer to stop the bleeding.

    [Oct 30, 2018] American companies pay health insurance premiums based on their specific employee profiles

    Notable quotes:
    "... As long as companies pay for their employees' health insurance they will have an incentive to fire older employees. ..."
    "... The answer is to separate health insurance from employment. Companies can't be trusted. Not only health care, but retirement is also sorely abused by corporations. All the money should be in protected employee based accounts. ..."
    Oct 30, 2018 | features.propublica.org

    sometimestheyaresomewhatright , Thursday, March 22, 2018 4:13 PM

    American companies pay health insurance premiums based on their specific employee profiles. Insurance companies compete with each other for the business, but the costs are driven by actual claims experience, based on the profile of the pool of employees. So American companies fire older workers just to lower the average age of their workforce. Statistically, this is going to lower their health care costs.

    As long as companies pay for their employees' health insurance they will have an incentive to fire older employees. They have an incentive to fire sick employees and employees with genetic risks. Those are harder to implement as ways to lower costs. Firing older employees is simple to do, just look up their ages.

    The answer is to separate health insurance from employment. Companies can't be trusted. Not only health care, but retirement is also sorely abused by corporations. All the money should be in protected employee based accounts.

    By the way, most tech companies are actually run by older people. The goal is to broom out mid-level people based on age. Nobody is going to suggest to a sixty year old president that they should self fire, for the good of the company.
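
    The cost incentive described in this comment can be illustrated with a toy calculation. All of the figures below are invented for the sake of the example (real group rates depend on plan design, location and claims history); the point is only that shifting the age mix of a fixed-size pool noticeably moves the total premium bill.

```python
# Illustrative arithmetic only: made-up annual premiums per employee by age band.
# The bands and dollar amounts are assumptions, not actual insurer rates.
ANNUAL_PREMIUM = {"under_40": 6_000, "40_to_54": 10_000, "55_plus": 18_000}

def total_premium(headcount_by_band: dict) -> int:
    """Sum the annual premium bill for a workforce given its age-band headcounts."""
    return sum(ANNUAL_PREMIUM[band] * n for band, n in headcount_by_band.items())

# Two workforces of identical size (1,000 people), differing only in age mix.
older_mix   = {"under_40": 400, "40_to_54": 350, "55_plus": 250}
younger_mix = {"under_40": 600, "40_to_54": 300, "55_plus": 100}

print(total_premium(older_mix))    # 10,400,000
print(total_premium(younger_mix))  #  8,400,000 -- roughly 19 percent lower, same headcount
```

    Under these made-up numbers, the same 1,000-person payroll costs about $2 million a year less to insure once the age mix skews younger, which is exactly the incentive the commenter is pointing to.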

    [Oct 30, 2018] It's all about making the numbers so the management can present a Potemkin Village of profits and ever-increasing growth sufficient to get bonuses. There is no relation to any sort of quality or technological advancement, just HR 3-card monte

    Notable quotes:
    "... It's no coincidence whatsoever that Diane Gherson, mentioned prominently in the article, blasted out an all-employees email crowing about IBM being a great place to work according to (ahem) LinkedIn. I desperately want to post a link to this piece in the corporate Slack, but that would get me fired immediately instead of in a few months at the next "resource action." It's been a whole 11 months since our division had one, so I know one is coming soon. ..."
    "... I used to say when I was there that: "After every defeat, they pin medals on the generals and shoot the soldiers". ..."
    "... 1990 is also when H-1B visa rules were changed so that companies no longer had to even attempt to hire an American worker as long as the job paid $60,000, which hasn't changed since. This article doesn't even mention how our work visa system facilitated and even rewarded this abuse of Americans. ..."
    "... Well, starting in the 1980s, the American management was allowed by Reagan to get rid of its workforce. ..."
    "... It's all about making the numbers so the management can present a Potemkin Village of profits and ever-increasing growth sufficient to get bonuses. There is no relation to any sort of quality or technological advancement, just HR 3-card monte. They have installed air bearing in Old Man Watson's coffin as it has been spinning ever faster ..."
    "... Corporate America executive management is all about stock price management. Their bonus's in the millions of dollars are based on stock performance. With IBM's poor revenue performance since Ginny took over, profits can only be maintained by cost reduction. Look at the IBM executive's bonus's throughout the last 20 years and you can see that all resource actions have been driven by Palmisano's and Rominetty's greed for extravagant bonus's. ..."
    "... Also worth noting is that IBM drastically cut the cap on it's severance pay calculation. Almost enough to make me regret not having retired before that changed. ..."
    "... Yeah, severance started out at 2 yrs pay, went to 1 yr, then to 6 mos. and is now 1 month. ..."
    "... You need to investigate AT&T as well, as they did the same thing. I was 'sold' by IBM to AT&T as part of he Network Services operation. AT&T got rid of 4000 of the 8000 US employees sent to AT&T within 3 years. Nearly everyone of us was a 'senior' employee. ..."
    Oct 30, 2018 | disqus.com

    dragonflap• 7 months ago I'm a 49-year-old SW engineer who started at IBM as part of an acquisition in 2000. I got laid off in 2002 when IBM started sending reqs to Bangalore in batches of thousands. After various adventures, I rejoined IBM in 2015 as part of the "C" organization referenced in the article.

    It's no coincidence whatsoever that Diane Gherson, mentioned prominently in the article, blasted out an all-employees email crowing about IBM being a great place to work according to (ahem) LinkedIn. I desperately want to post a link to this piece in the corporate Slack, but that would get me fired immediately instead of in a few months at the next "resource action." It's been a whole 11 months since our division had one, so I know one is coming soon.

    Stewart Dean • 7 months ago ,

    The lead-in to this piece makes it sound like IBM was forced into these practices by inescapable forces. I'd say not, rather that it pursued them because a) the management was clueless about how to lead IBM in the new environment and new challenges so b) it started to play with numbers to keep the (apparent) profits up....to keep the bonuses coming. I used to say when I was there that: "After every defeat, they pin medals on the generals and shoot the soldiers".

    And then there's the Pig with the Wooden Leg shaggy dog story that ends with the punch line, "A pig like that you don't eat all at once", which has a lot of the flavor of how many of us saw our jobs as IBM die a slow death.

    IBM is about to fall out of the sky, much as General Motors did. How could that happen? By endlessly beating the cow to get more milk.

    IBM was hiring right through the Great Depression, such that It Did Not Pay Unemployment Insurance, because it never laid people off. Because until about 1990, your manager was responsible for making sure you had everything you needed to excel and grow.... and you would find people that had started on the loading dock and had become Senior Programmers. But then about 1990, IBM started paying unemployment insurance.... just out of the goodness of its heart. Right.

    CRAW Stewart Dean • 7 months ago ,

    1990 is also when H-1B visa rules were changed so that companies no longer had to even attempt to hire an American worker as long as the job paid $60,000, which hasn't changed since. This article doesn't even mention how our work visa system facilitated and even rewarded this abuse of Americans.

    DDRLSGC Stewart Dean • 7 months ago ,

    Well, starting in the 1980s, the American management was allowed by Reagan to get rid of its workforce.

    Georgann Putintsev Stewart Dean • 7 months ago ,

    I found that other Ex-IBMer's respect other Ex-IBMer's work ethics, knowledge and initiative.

    Other companies are happy to get them as a valuable resource. In '89 when our Palo Alto Datacenter moved, we were given three options: 1.) become a Programmer (w/training), 2.) move to Boulder, or 3.) leave.

    I got my training, along with programming experience, and left IBM in '92, when for 4 yrs IBM offered really good incentives for leaving the company. The Executives thought that the IBM Mainframe/MVS z/OS+ was on the way out and the Laptop (Small but Increasing Capacity) Computer would take over everything.

    It didn't. It did allow many skilled IBMers to succeed outside of IBM and helped build up our customer skill sets. And like many, when the opportunity arose to return, I did. In '91 I was accidentally given a male co-worker's paycheck, and that was one of the reasons for leaving. During my various contract work outside, I bumped into other male IBMers who had left too, some of whom I had trained, and when they disclosed that their salary (which was 20-40% higher than mine) was the reason they left, I knew I had made the right decision.

    Women tend to under-value themselves and their capabilities. Contracting also taught me that at companies with 70% employees and 30% contractors, the contractors would be let go if quarterly expenditures were exceeded.

    I first contracted with IBM in '98, and when I decided to re-join IBM in '01, I had (3) job offers and took the most lucrative, exciting one, to focus on fixing & improving DB2z Qry Parallelism. I developed a targeted L3 Technical Change Team to help L2 Support reduce reported customer problems and improve our product. The instability within IBM remained, and I saw IBM try to eliminate aging, salaried, benefited employees. The routine of 1.) find a job within IBM ... or 2.) leave ... was now standard.

    While my salary had more than doubled since I left IBM the first time, it still wasn't near that of male counterparts. There was continual rating competition based on salary-banded titles, and title raises were timed for after a round of layoffs, not before. I had another advantage going, and that was that my changed (reduced) retirement benefits helped me stay there. It all comes down to the numbers that Mgmt is told to cut to save IBM. While much of this article implies others were hired, at our Silicon Valley location and other locations they had no intent to backfill. So the already burdened employees were laden with more workloads & stress.

    In the early to mid 2000's IBM set up a counterpart lab in China, where they were paying 1/4th of U.S. salaries, and many SVL IBMers went to CSDL to train our new worldwide 24x7 support employees. But many were not IBM-loyal and their attrition rates were very high, so it fell to a wave of new hires at SVL to help address it.

    Stewart Dean Georgann Putintsev • 7 months ago ,

    It's all about making the numbers so the management can present a Potemkin Village of profits and ever-increasing growth sufficient to get bonuses. There is no relation to any sort of quality or technological advancement, just HR 3-card monte. They have installed air bearings in Old Man Watson's coffin, as it has been spinning ever faster.

    IBM32_retiree • 7 months ago ,

    Corporate America executive management is all about stock price management. Their bonuses in the millions of dollars are based on stock performance. With IBM's poor revenue performance since Ginni took over, profits can only be maintained by cost reduction. Look at the IBM executives' bonuses throughout the last 20 years and you can see that all resource actions have been driven by Palmisano's and Rometty's greed for extravagant bonuses.

    Dan Yurman • 7 months ago ,

    Bravo ProPublica for another "sock it to them" article - journalism in honor of the spirit of great newspapers everywhere: that the refuge of justice in hard times is with the press.

    Felix Domestica • 7 months ago ,

    Also worth noting is that IBM drastically cut the cap on its severance pay calculation. Almost enough to make me regret not having retired before that changed.

    RonF Felix Domestica • 7 months ago ,

    Yeah, severance started out at 2 yrs pay, went to 1 yr, then to 6 mos. and is now 1 month.

    mjmadfis RonF • 7 months ago ,

    When I was let go in June 2013 it was 6 months severance.

    Terry Taylor • 7 months ago ,

    You need to investigate AT&T as well, as they did the same thing. I was 'sold' by IBM to AT&T as part of he Network Services operation. AT&T got rid of 4000 of the 8000 US employees sent to AT&T within 3 years. Nearly everyone of us was a 'senior' employee.

    weelittlepeople Terry Taylor • 7 months ago ,

    Good Ol' Ma Bell is following the IBM playbook to a tee.

    emnyc • 7 months ago ,

    ProPublica deserves a Pulitzer for this article and all the extensive research that went into this investigation.

    Incredible job! Congrats.

    On a separate note, IBM should be ashamed of themselves and the executive team that enabled all of this should be fired.

    WmBlake • 7 months ago ,

    As a permanent old contractor and free-enterprise defender myself, I don't blame IBM a bit for wanting to cut the fat. But for the outright *lies, deception and fraud* that they use to break laws, weasel out of obligations... really just makes me want to shoot them... and I never even worked for them.

    Michael Woiwood • 7 months ago ,

    Great Article.

    Where I worked, in Rochester, MN, people have known what is happening for years. My last years with IBM were the most depressing time in my life.

    I hear a rumor that IBM would love to close plants they no longer use but they are so environmentally polluted that it is cheaper to maintain than to clean up and sell.

    scorcher14 • 7 months ago ,

    One of the biggest driving factors in age discrimination is health insurance costs, not salary. It can cost 4-5x as much to insure an older employee vs. a younger one, and employers know this. THE #1 THING WE CAN DO TO STOP AGE DISCRIMINATION IS TO MOVE AWAY FROM OUR EMPLOYER-PROVIDED INSURANCE SYSTEM. It could be single-payer, but it could also be a robust individual market with enough pool diversification to make it viable. Freeing employers from this cost burden would allow them to pick the right talent regardless of age.

    DDRLSGC scorcher14 • 7 months ago ,

    The American business have constantly fought against single payer since the end of World War II and why should I feel sorry for them when all of a sudden, they are complaining about health care costs? It is outrageous that workers have to face age discrimination; however, the CEOs don't have to deal with that issue since they belong to a tiny group of people who can land a job anywhere else.

    pieinthesky scorcher14 • 7 months ago ,

    Single payer won't help. We have single payer in Canada and just as much age discrimination in employment. Society in general does not like older people so unless you're a doctor, judge or pharmacist you will face age bias. It's even worse in popular culture never mind in employment.

    OrangeGina scorcher14 • 7 months ago ,

    I agree. Yet, a determined company will find other methods, explanations and excuses.

    JohnCordCutter • 7 months ago ,

    Thanks for the great article. I left IBM last year. USA-based. 49. Product Manager in one of IBM's strategic initiatives; however, I got told to relocate or leave. I found another job and left. I came to IBM from an acquisition. My only regret is that I didn't leave this toxic environment earlier. It truly is a dreadful place to work.

    60 Soon • 7 months ago ,

    The methodology has trickled down to smaller companies pursuing the same net results for headcount reduction. The similarities to my experience were painful to read. The grief I felt after my job was "eliminated" 10 years ago, while the Recession was at its worst and shortly after my 50th birthday, came flooding back. I have never recovered financially, but I have started writing a murder mystery. The first victim? The CEO who let me go. It's true. Revenge is best served cold.

    donttreadonme9 • 7 months ago ,

    Well written. People like me have experienced exactly what you wrote. IBM is a shadow of its former greatness, and I have advised my children to stay away from IBM and companies like it as they start their careers. IBM is a corrupt company. Shame on them!

    annapurna • 7 months ago ,

    I hope they find some way to bring a class action lawsuit against these assholes.

    Mark annapurna • 7 months ago ,

    I suspect someone will end up hunting them down with an axe at some point. That's the only way they'll probably learn. I don't know about IBM specifically, but when Carly Fiorina ran HP, she travelled with an armed security detail and even took it into engineering labs.

    OrangeGina Mark • 7 months ago ,

    all the bigwig CEOs have these black SUV security details now.

    Sarahw • 7 months ago ,

    IBM has been using these tactics at least since the 1980s, when my father was let go for similar 'reasons.'

    Vin • 7 months ago ,

    Was let go after 34 years of service. My Resource Action letter had additional lines after '...unless you are offered ... position within IBM before that date.', implying don't even try to look for a position. The lines were: 'Additional business controls are in effect to manage the business objectives of this resource action, therefore, job offers within (the name of division) will be highly unlikely.'

    Mark Vin • 7 months ago ,

    Absolutely and utterly disgusting.

    Greybeard • 7 months ago ,

    I've worked for a series of vendors for over thirty years. A job at IBM used to be the brass ring; nowadays, not so much.

    I've heard persistent rumors from IBMers that U.S. headcount is below 25,000 nowadays. Given events like the recent downtime of the internal systems used to order parts (5 or so days--website down because staff who maintained it were let go without replacements), it's hard not to see the spiral continue down the drain.

    What I can't figure out is whether Rometty and cronies know what they're doing or are just clueless. Either way, the result is the same: destruction of a once-great company and brand. Tragic.

    ManOnTheHill Greybeard • 7 months ago ,

    Well, none of these layoffs/ageist RIFs affect the execs, so they don't see the effects, or they see the effects but attribute them to some other cause.

    (I'm surprised the article doesn't address this part of the story; how many affected by layoffs are exec/senior management? My bet is very few.)

    ExIBMExec ManOnTheHill • 7 months ago ,

    I was a D-banded exec (Director-level) who was impacted and I know even some VPs who were affected as well, so they do spread the pain, even in the exec ranks.

    ManOnTheHill ExIBMExec • 7 months ago ,

    That's different than I have seen in companies I have worked for (like HP). There RIFs (Reduction In Force, their acronym for layoff) went to the director level and no further up.

    [Oct 30, 2018] Cutting Old Heads at IBM by Peter Gosselin and Ariana Tobin

    Mar 22, 2018 | features.propublica.org

    This story was co-published with Mother Jones.

    For nearly a half century, IBM came as close as any company to bearing the torch for the American Dream.

    As the world's dominant technology firm, payrolls at International Business Machines Corp. swelled to nearly a quarter-million U.S. white-collar workers in the 1980s. Its profits helped underwrite a broad agenda of racial equality, equal pay for women and an unbeatable offer of great wages and something close to lifetime employment, all in return for unswerving loyalty.


    But when high tech suddenly started shifting and companies went global, IBM faced the changing landscape with a distinction most of its fiercest competitors didn't have: a large number of experienced and aging U.S. employees.

    The company reacted with a strategy that, in the words of one confidential planning document, would "correct seniority mix." It slashed IBM's U.S. workforce by as much as three-quarters from its 1980s peak, replacing a substantial share with younger, less-experienced and lower-paid workers and sending many positions overseas. ProPublica estimates that in the past five years alone, IBM has eliminated more than 20,000 American employees ages 40 and over, about 60 percent of its estimated total U.S. job cuts during those years.

    In making these cuts, IBM has flouted or outflanked U.S. laws and regulations intended to protect later-career workers from age discrimination, according to a ProPublica review of internal company documents, legal filings and public records, as well as information provided via interviews and questionnaires filled out by more than 1,000 former IBM employees.

    Among ProPublica's findings, IBM:

    - Denied older workers information the law says they need in order to decide whether they've been victims of age bias, and required them to sign away the right to go to court or join with others to seek redress.
    - Targeted people for layoffs and firings with techniques that tilted against older workers, even when the company rated them high performers. In some instances, the money saved from the departures went toward hiring young replacements.
    - Converted job cuts into retirements and took steps to boost resignations and firings. The moves reduced the number of employees counted as layoffs, where high numbers can trigger public disclosure requirements.
    - Encouraged employees targeted for layoff to apply for other IBM positions, while quietly advising managers not to hire them and requiring many of the workers to train their replacements.
    - Told some older employees being laid off that their skills were out of date, but then brought them back as contract workers, often for the same work at lower pay and fewer benefits.

    IBM declined requests for the numbers or age breakdown of its job cuts. ProPublica provided the company with a 10-page summary of its findings and the evidence on which they were based. IBM spokesman Edward Barbini said that to respond the company needed to see copies of all documents cited in the story, a request ProPublica could not fulfill without breaking faith with its sources. Instead, ProPublica provided IBM with detailed descriptions of the paperwork. Barbini declined to address the documents or answer specific questions about the firm's policies and practices, and instead issued the following statement:

    "We are proud of our company and our employees' ability to reinvent themselves era after era, while always complying with the law. Our ability to do this is why we are the only tech company that has not only survived but thrived for more than 100 years."

    With nearly 400,000 people worldwide, and tens of thousands still in the U.S., IBM remains a corporate giant. How it handles the shift from its veteran baby-boom workforce to younger generations will likely influence what other employers do. And the way it treats its experienced workers will eventually affect younger IBM employees as they too age.

    Fifty years ago, Congress made it illegal with the Age Discrimination in Employment Act, or ADEA, to treat older workers differently than younger ones, with only a few exceptions, such as jobs that require special physical qualifications. And for years, judges and policymakers treated the law as essentially on a par with prohibitions against discrimination on the basis of race, gender, sexual orientation and other categories.

    In recent decades, however, the courts have responded to corporate pleas for greater leeway to meet global competition and satisfy investor demands for rising profits by expanding the exceptions and shrinking the protections against age bias.

    "Age discrimination is an open secret like sexual harassment was until recently," said Victoria Lipnic, the acting chair of the Equal Employment Opportunity Commission, or EEOC, the independent federal agency that administers the nation's workplace anti-discrimination laws.

    "Everybody knows it's happening, but often these cases are difficult to prove" because courts have weakened the law, Lipnic said. "The fact remains it's an unfair and illegal way to treat people that can be economically devastating."

    Many companies have sought to take advantage of the court rulings. But the story of IBM's downsizing provides an unusually detailed portrait of how a major American corporation systematically identified employees to coax or force out of work in their 40s, 50s and 60s, a time when many are still productive and need a paycheck, but face huge hurdles finding anything like comparable jobs.

    The dislocation caused by IBM's cuts has been especially great because until recently the company encouraged its employees to think of themselves as "IBMers" and many operated under the assumption that they had career-long employment.

    When the ax suddenly fell, IBM provided almost no information about why an employee was cut or who else was departing, leaving people to piece together what had happened through websites, listservs and Facebook groups such as "Watching IBM" or "Geographically Undesirable IBM Marketers," as well as informal support groups.

    Marjorie Madfis, at the time 57, was a New York-based digital marketing strategist and 17-year IBM employee when she and six other members of her nine-person team -- all women in their 40s and 50s -- were laid off in July 2013. The two who remained were younger men.

    Since her specialty was one that IBM had said it was expanding, she asked for a written explanation of why she was let go. The company declined to provide it.

    "They got rid of a group of highly skilled, highly effective, highly respected women, including me, for a reason nobody knows," Madfis said in an interview. "The only explanation is our age."

    Brian Paulson, also 57, a senior manager with 18 years at IBM, had been on the road for more than a year overseeing hundreds of workers across two continents as well as hitting his sales targets for new services, when he got a phone call in October 2015 telling him he was out. He said the caller, an executive who was not among his immediate managers, cited "performance" as the reason, but refused to explain what specific aspects of his work might have fallen short.

    It took Paulson two years to land another job, even though he was equipped with an advanced degree, continuously employed at high-level technical jobs for more than three decades and ready to move anywhere from his Fairview, Texas, home.

    "It's tough when you've worked your whole life," he said. "The company doesn't tell you anything. And once you get to a certain age, you don't hear a word from the places you apply."

    Paul Henry, a 61-year-old IBM sales and technical specialist who loved being on the road, had just returned to his Columbus home from a business trip in August 2016 when he learned he'd been let go. When he asked why, he said an executive told him to "keep your mouth shut and go quietly."

    Henry was jobless more than a year, ran through much of his savings to cover the mortgage and health insurance and applied for more than 150 jobs before he found a temporary slot.

    "If you're over 55, forget about preparing for retirement," he said in an interview. "You have to prepare for losing your job and burning through every cent you've saved just to get to retirement."

    IBM's latest actions aren't anything like what most ex-employees with whom ProPublica talked expected from their years of service, or what today's young workers think awaits them -- or are prepared to deal with -- later in their careers.

    "In a fast-moving economy, employers are always going to be tempted to replace older workers with younger ones, more expensive workers with cheaper ones, those who've performed steadily with ones who seem to be up on the latest thing," said Joseph Seiner, an employment law professor at the University of South Carolina and former appellate attorney for the EEOC.

    "But it's not good for society," he added. "We have rules to try to maintain some fairness in our lives, our age-discrimination laws among them. You can't just disregard them."

    [Oct 30, 2018] Red Hat hired the CentOS developers 4.5-years ago

    Oct 30, 2018 | linux.slashdot.org

    quantaman ( 517394 ) , Sunday October 28, 2018 @04:22PM ( #57550805 )

    Re:Well at least we'll still have Cent ( Score: 4 , Informative)
    Fedora is fully owned by Red Hat, and CentOS requires the availability of the Red Hat repositories, which they aren't obliged to make public to non-customers.

    Fedora is fully under Red Hat's control. It's used as a bleeding edge distro for hobbyists and as a testing ground for code before it goes into RHEL. I doubt its going away since it does a great job of establishing mindshare but no business in their right mind is going to run Fedora in production.

    But CentOS started as a separate organization with a fairly adversarial relationship to Red Hat since it really is free RHEL which cuts into their actual customer base. They didn't need Red Hat repos back then, just the code which they rebuilt from scratch (which is why they were often a few months behind).

    If IBM kills CentOS a new one will pop up in a week, that's the beauty of the GPL.

    Luthair ( 847766 ) , Sunday October 28, 2018 @04:22PM ( #57550799 )
    Re:Well at least we'll still have Cent ( Score: 3 )

    Red Hat hired the CentOS developers 4.5-years ago.

    [Oct 30, 2018] We run just about everything on CentOS around here, downstream of RHEL. Should we be worried?

    Oct 30, 2018 | arstechnica.com

    Muon , Ars Scholae Palatinae 6 hours ago Popular

    We run just about everything on CentOS around here, downstream of RHEL. Should we be worried?
    brandnewmath , Smack-Fu Master, in training et Subscriptor 6 hours ago Popular
    We'll see. Companies in an acquisition always rush to explain how nothing will change to reassure their customers. But we'll see.
    Kilroy420 , Ars Tribunus Militum 6 hours ago Popular
    Perhaps someone can explain this... Red Hat's revenue and assets barely total about $5B. Even factoring in market share and capitalization, how the hey did IBM come up with $34B cash being a justifiable purchase price??

    Honestly, why would Red Hat have said no?

    dorkbert , Ars Tribunus Militum 6 hours ago Popular
    My personal observation of IBM over the past 30 years or so is that everything it acquires dies horribly.
    barackorama , Smack-Fu Master, in training 6 hours ago
    ...IBM's own employees see it as a company in free fall. This is not good news.

    In other news, property values in Raleigh will rise even more...

    Moodyz , Ars Centurion 6 hours ago Popular
    Quote:
    This is fine

    Looking back at what's happened with many of IBM's past acquisitions, I'd say no, not quite fine.
    I am not your friend , Wise, Aged Ars Veteran et Subscriptor 6 hours ago Popular
    I just can't comprehend that price. Cloud has a rich future, but I didn't even know Red Hat had any presence there, let alone $35 billion worth.
    jandrese , Ars Tribunus Angusticlavius et Subscriptor 6 hours ago Popular
    50me12 wrote:
    Will IBM even know what to do with them?

    IBM has been fumbling around for a while. They didn't know how to sell Watson, as they sold it like a weird magical drop-in service and it failed repeatedly, where really it should be a long-term project that you bring customers along for the ride...

    I had a buddy using their cloud service and they went to spin up servers and IBM was all "no man we have to set them up first".... like that's not cloud IBM...

    If IBM can't figure out how to sell its own services I'm not sure the powers that be are capable of getting the job done ever. IBM's own leadership seems incompatible with the state of the world.

    IBM basically bought a ton of service contracts for companies all over the world. This is exactly what the suits want: reliable cash streams without a lot of that pesky development stuff.

    IMHO this is perilous for RHEL. It would be very easy for IBM to fire most of the developers and just latch on to the enterprise services stuff to milk it till its dry.

    skizzerz , Wise, Aged Ars Veteran et Subscriptor 6 hours ago
    toturi wrote:
    I can only see this as a net positive - the ability to scale legacy mainframes onto "Linux" and push for even more security auditing.

    I would imagine the RHEL team will get better funding but I would be worried if you're a centos or fedora user.

    I'm nearly certain that IBM's management ineptitude will kill off Fedora and CentOS (or at least severely gimp them compared to how they currently are), not realizing how massively important both of these projects are to the core RHEL product. We'll see RHEL itself suffer as a result.

    I normally try to understand things with an open mindset, but in this case, IBM has had too long of a history of doing things wrong for me to trust them. I'll be watching this carefully and am already prepping to move off of my own RHEL servers once the support contract expires in a couple years just in case it's needed.

    Iphtashu Fitz , Ars Scholae Palatinae 6 hours ago Popular
    50me12 wrote:
    Will IBM even know what to do with them?

    My previous job (6+ years ago now) was at a university that was rather heavily invested in IBM for a high performance research cluster. It was something around 100 or so of their X-series blade servers, all of which were running Red Hat Linux. It wouldn't surprise me if they decided to acquire Red Hat in large part because of all these sorts of IBM systems that run Red Hat on them.

    TomXP411 , Ars Tribunus Angusticlavius 6 hours ago Popular
    Iphtashu Fitz wrote:
    50me12 wrote:
    Will IBM even know what to do with them?

    My previous job (6+ years ago now) was at a university that was rather heavily invested in IBM for a high performance research cluster. It was something around 100 or so of their X-series blade servers, all of which were running Red Hat Linux. It wouldn't surprise me if they decided to acquire Red Hat in large part because of all these sorts of IBM systems that run Red Hat on them.

    That was my thought. IBM wants to own an operating system again. With AIX being relegated to obscurity, buying Red Hat is simpler than creating their own Linux fork.

    anon_lawyer , Wise, Aged Ars Veteran 6 hours ago Popular
    Valuing Red Hat at $34 billion means valuing it at more than 1/4 of IBM's current market cap. From my perspective this tells me IBM is in even worse shape than has been reported.
    dmoan , Ars Centurion 6 hours ago
    I am not your friend wrote:
    I just can't comprehend that price. Cloud has a rich future, but I didn't even know Red Hat had any presence there, let alone $35 billion worth.

    Red Hat made $258 million in net income last year, so they paid over 100 times its net income. That's a crazy valuation here...

    [Oct 30, 2018] I have worked at IBM 17 years and have worried about being laid off for about 11 of them. Morale is in the toilet. Bonuses for the rank and file are in the under 1% range while the CEO gets millions

    Notable quotes:
    "... Adjusting for inflation, I make $6K less than I did my first day. My group is a handful of people as at least 1/2 have quit or retired. To support our customers, we used to have several people, now we have one or two and if someone is sick or on vacation, our support structure is to hope nothing breaks. ..."
    Oct 30, 2018 | features.propublica.org

    Buzz , Friday, March 23, 2018 12:00 PM

    I've worked there 17 years and have worried about being laid off for about 11 of them. Morale is in the toilet. Bonuses for the rank and file are in the under 1% range while the CEO gets millions. Pay raises have been nonexistent or well under inflation for years.

    Adjusting for inflation, I make $6K less than I did my first day. My group is a handful of people as at least 1/2 have quit or retired. To support our customers, we used to have several people, now we have one or two and if someone is sick or on vacation, our support structure is to hope nothing breaks.

    We can't keep millennials because of pay, benefits and the expectation of being available 24/7 because we're shorthanded. As the unemployment rate drops, more leave to find a different job, leaving the old people as they are less willing to start over with pay, vacation, moving, selling a house, pulling kids from school, etc.

    The younger people are generally less likely to be willing to work as needed on off hours or to pull work from a busier colleague.

    I honestly have no idea what the plan is when the people who know what they are doing start to retire. We are way top-heavy with 30-40-year guys who are on their way out, very few of the 10-20-year guys due to hiring freezes, and we can't keep new people past 2-3 years. It's like our support business model is designed to fail.

    [Oct 30, 2018] Will systemd become standard on mainframes as well?

    It will be interesting to see what happens in any case.
    Oct 30, 2018 | theregister.co.uk

    Doctor Syntax , 1 day

    So now it becomes Blue Hat. Will systemd become standard on mainframes as well?
    DCFusor , 15 hrs
    @Doctor

    Maybe we get really lucky and they RIF Lennart Poettering or he quits? I hear IBM doesn't tolerate prima donnas and cults of personality quite as much as RH?

    Anonymous Coward , 15 hrs
    I hear IBM doesn't tolerate prima donnas and cults of personality quite as much as RH?

    Quite the contrary. IBM is run and managed by prima donnas and personality cults.

    Waseem Alkurdi , 18 hrs
    Re: Poettering