Softpanorama

May the source be with you, but remember the KISS principle ;-)
Bigger doesn't imply better. Bigger often is a sign of obesity, of lost control, of overcomplexity, of cancerous cells

Slightly Skeptical View on Enterprise Unix Administration



The KISS rule can be expanded as: Keep It Simple, Sysadmin ;-)

This page is written as a protest against the overcomplexity and bizarre data center atmosphere typical of "semi-outsourced" or fully outsourced datacenters ;-). Unix/Linux sysadmins are being killed by the overcomplexity of the environment. Whole swaths of Linux knowledge (and many excellent books) were rendered obsolete by the introduction of systemd. That hit hardest the older, most experienced members of the team, who hold a unique store of organizational knowledge, and whose careers let them watch the development of Linux almost from version 0.92.

As Charlie Schluting noted in 2010 (Enterprise Networking Planet, April 7, 2010):

What happened to the old "sysadmin" of just a few years ago? We've split what used to be the sysadmin into application teams, server teams, storage teams, and network teams. There were often at least a few people, the holders of knowledge, who knew how everything worked, and I mean everything. Every application, every piece of network gear, and how every server was configured -- these people could save a business in times of disaster.

Now look at what we've done. Knowledge is so decentralized we must invent new roles to act as liaisons between all the IT groups.

Architects now hold much of the high-level "how it works" knowledge, but without knowing how any one piece actually does work.

In organizations with more than a few hundred IT staff and developers, it becomes nearly impossible for one person to do and know everything. This movement toward specializing in individual areas seems almost natural. That, however, does not provide a free ticket for people to turn a blind eye.

Specialization

You know the story: Company installs new application, nobody understands it yet, so an expert is hired. Often, the person with a certification in using the new application only really knows how to run that application. Perhaps they aren't interested in learning anything else, because their skill is in high demand right now. And besides, everything else in the infrastructure is run by people who specialize in those elements. Everything is taken care of.

Except, how do these teams communicate when changes need to take place? Are the storage administrators teaching the Windows administrators about storage multipathing; or, worse, logging in and setting it up because it's faster for the storage gurus to do it themselves? A fundamental level of knowledge is often lacking, which makes it very difficult for teams to brainstorm about new ways to evolve IT services. The business environment has made it OK for IT staffers to specialize and only learn one thing.

If you hire someone certified in the application, operating system, or network vendor you use, that is precisely what you get. Certifications may be a nice filter to quickly identify who has direct knowledge in the area you're hiring for, but often they indicate specialization or compensation for lack of experience.

Resource Competition

Does your IT department function as a unit? Even 20-person IT shops have turf wars, so the answer is very likely, "no." As teams are split into more and more distinct operating units, grouping occurs. One IT budget gets split between all these groups. Often each group will have a manager who pitches his needs to upper management in hopes they will realize how important the team is.

The "us vs. them" mentality manifests itself at all levels, and it's reinforced by management having to define each team's worth in the form of a budget. One strategy is to illustrate a doomsday scenario. If you paint a bleak enough picture, you may get more funding. Only if you are careful enough to illustrate the failings are due to lack of capital resources, not management or people. A manager of another group may explain that they are not receiving the correct level of service, so they need to duplicate the efforts of another group and just implement something themselves. On and on, the arguments continue.

Most often, I've seen competition between server groups result in horribly inefficient uses of hardware. For example, what happens in your organization when one team needs more server hardware? Assume that another team has five unused servers sitting in a blade chassis. Does the answer change? No, it does not. Even in test environments, sharing doesn't often happen between IT groups.

With virtualization, some aspects of resource competition get better and some remain the same. When first implemented, most groups will be running their own type of virtualization for their platform. The next step, I've most often seen, is for test servers to get virtualized. If a new group is formed to manage the virtualization infrastructure, virtual machines can be allocated to various application and server teams from a central pool and everyone is now sharing. Or, they begin sharing and then demand their own physical hardware to be isolated from others' resource hungry utilization. This is nonetheless a step in the right direction. Auto migration and guaranteed resource policies can go a long way toward making shared infrastructure, even between competing groups, a viable option.

Blamestorming

The most damaging side effect of splitting into too many distinct IT groups is the reinforcement of an "us versus them" mentality. Aside from the notion that specialization creates a lack of knowledge, blamestorming is what this article is really about. When a project is delayed, it is all too easy to blame another group. The SAN people didn't allocate storage on time, so another team was delayed. That is the timeline of the project, so all work halted until that hiccup was resolved. Having someone else to blame when things get delayed makes it all too easy to simply stop working for a while.

More related to the initial points at the beginning of this article, perhaps, is the blamestorm that happens after a system outage.

Say an ERP system becomes unresponsive a few times throughout the day. The application team says it's just slowing down, and they don't know why. The network team says everything is fine. The server team says the application is "blocking on IO," which means it's a SAN issue. The SAN team says there is nothing wrong, and other applications on the same devices are fine. You've run through nearly every team, but still without an answer. The SAN people don't have access to the application servers to help diagnose the problem. The server team doesn't even know how the application runs.

See the problem? Specialized teams are distinct and by nature adversarial. Specialized staffers often relegate themselves into a niche knowing that as long as they continue working at large enough companies, "someone else" will take care of all the other pieces.

I unfortunately don't have an answer to this problem. Maybe rotating employees between departments will help. They gain knowledge and also get to know other people, which should lessen the propensity to view them as outsiders.

The tragic part of the current environment is that it's like shifting sands. If you are a sysadmin who writes his own scripts, you write on the sand: you spend a lot of time thinking over and debugging your scripts, which raises your productivity and diminishes the number of possible errors. But the next OS version wipes everything out, making that work worthless. Or a decision by the brass to switch to a different flavor of Linux does the same. Add the inevitable technological changes and the question arises: couldn't you get a more respectable profession, one in which 66% of your knowledge is not replaced within the next ten years?

The balkanization of Linux is also demonstrated in the Tower of Babel of system programming languages (C, C++, Perl, Python, Ruby, Go, Java, to name a few) and in systems that supposedly should help you but mostly do quite the opposite (Puppet, Ansible, Chef, etc). Add to this the monitoring infrastructure (say, Nagios) and you definitely have an information overload.

The laments about training just add to the stress. First of all, corporations no longer want to pay for it, so you are on your own and need to do it mostly in your free time, as the workload is substantial in most organizations. The days when you could travel to a vendor training center for a week and have a chance to communicate with admins from other organizations are long in the past. Most training now is via the Web, and chances for face-to-face communication have disappeared.

There is also the need to relearn stuff again and again, when the new technologies/daemons/versions of the OS are either the same as or inferior to the previous ones, or represent an open scam in which training is the way to extract money from lemmings (Agile, most of the DevOps hoopla, etc). There is also a tendency to treat virtual machine and cloud infrastructure as separate technologies, which require separate training and separate sets of certifications (AWS, Azure). This is a kind of infantilization of the profession, in which a person who learned a lot of stuff over the previous 10 years needs to forget it and relearn most of it again and again.

Of course, sysadmins are not the only ones who suffer. Computer scientists also now struggle with the excessive level of complexity and the too-quickly shifting sands. Look at the tragedy of Donald Knuth, with his lifelong project of creating a comprehensive monograph for system programmers (The Art of Computer Programming). He probably will not be able to finish even volume 4 (out of the seven that were planned) in his lifetime.

Of course, much depends on the evolution of hardware. Changes driven by that evolution, such as the mass introduction of large SSDs, multi-core CPUs, and large RAM (nobody is now surprised to see a server with 128GB of RAM), are painful but inevitable. Changes driven by fashion, and by the desire of the dominant player to entrench its position, are more difficult to accept. It is difficult or even impossible to predict which technology will become fashionable tomorrow, or how long DevOps will remain in fashion. Typically such things last around ten years. After that everything typically fades into oblivion, or is even crossed out, and the former idols are shattered. This strange period of re-invention of the "glass-walls datacenter" under the banner of DevOps (old-timers still remember that IBM datacenters were hated with passion, and this hate created an additional, non-technological incentive for minicomputers and later for the IBM PC) is characterized by a level of hype usually reserved for women's fashion. It sometimes looks to me as if the movie The Devil Wears Prada is a subtle parable on sysadmin work.

Add to this a horrible job market, especially for university graduates and older sysadmins (see Over 50 and unemployed), and one starts to suspect that the life of a modern sysadmin is far from paradise. When you read some job descriptions on sites like Monster, Dice or Indeed, you ask yourself whether those people really want to hire anybody, or whether this is just a smoke screen for H-1B candidate job certification. The level of detail is often so precise that it is almost impossible to change your current specialization. They do not care about the level of talent, and they do not want to train a suitable candidate. They want a person who fits 100% from day 1. Also, in places like NYC or SF, rents and property prices keep growing while income growth has been stagnant. I turned down several job opportunities in SF, NYC, and Silicon Valley because of the crazy cost of living.

The vandalism of Unix performed by Red Hat with RHEL 7 makes the current environment somewhat unhealthy. It is clear that this was done at the whim of Red Hat brass, not in the interest of the community. This is a typical Microsoft-style trick, which made dozens of high-quality books written by very talented authors instantly semi-obsolete. And the question arises whether it makes sense to write any book about RHEL other than for a solid advance. It generated some backlash, but the position of Red Hat as the Microsoft of Linux allowed it to shove its inferior technical decisions down everyone's throat. In a way it reminds me of how Microsoft dealt with Windows 7, replacing it with Windows 10 and essentially destroying the previous Windows interface ecosystem (while preserving binary compatibility).

See also

Here are my notes/reflections on sysadmin problems that often arise in the rather strange (and sometimes pretty toxic) IT departments of large corporations:




Old News ;-)


"I appreciate Woody Allen's humor because one of my safety valves is an appreciation for life's absurdities. His message is that life isn't a funeral march to the grave. It's a polka."

-- Dennis Kucinich

[Feb 15, 2019] Losing a job in your 50s is especially tough. Here are 3 steps to take when layoffs happen by Peter Dunn

Unemployment usually lasts just six months or so; this is the time when you can plan your "downsizing". You do not need to rush.
Often losing a job logically requires selling your home and moving to a modest apartment, especially if no children are living with you. At 50 it is about time... You will need to do it later anyway, so why not now?
But that's a very tough decision to make... Still, if the current housing market is close to the top, this is one of the best moves you can make. Getting several hundred thousand dollars out of your house allows you to create a kind of private pension to compensate for the loss of income until you hit your Social Security check, which currently means age 66.
A $300K investment in A-quality bonds returning 3% per year, drawn down over those 16 years, is enough to provide you with a $24K per year "pension" from 50 to the age of 66. That allows you to pay for the apartment and amenities. Food is extra...
This way you can take a lower-paid job and survive.
And in this case your 401k remains intact and can supplement your SS income later on. A simple Excel spreadsheet can provide you with a complete picture of what you can afford and what not. Actually, the ability to walk in fresh air for 3 or more hours each day is worth a lot of money ;-)
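The $24K figure assumes drawing down the principal, not living off interest alone ($300K at 3% yields only $9K a year). A sketch of the arithmetic, using the standard level-payment annuity formula with the 3% yield and the 16-year horizon from 50 to 66:

A = P * r / (1 - (1+r)^(-n)) = $300,000 * 0.03 / (1 - 1.03^(-16)) ≈ $23,900 per year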
Notable quotes:
"... Losing a job in your 50s is a devastating moment, especially if the job is connected to a long career ripe with upward mobility. As a frequent observer of this phenomenon, it's as scary and troublesome as unchecked credit card debt or an expensive chronic health condition. This is one of the many reasons why I believe our 50s can be the most challenging decade of our lives. ..."
"... The first thing you should do is identify the exact day your job income stops arriving ..."
"... Next, and by next I mean five minutes later, explore your eligibility for unemployment benefits, and then file for them if you're able. ..."
"... Grab your bank statement, a marker, and a calculator. As much as you want to pretend its business as usual, you shouldn't. Identify expenses that don't make sense if you don't have a job. Circle them. Add them up. Resolve to eliminate them for the time being, and possibly permanently. While this won't necessarily lengthen your fuse, it could lessen the severity of a potential boom. ..."
Feb 15, 2019 | finance.yahoo.com

... ... ...

Losing a job in your 50s is a devastating moment, especially if the job is connected to a long career ripe with upward mobility. As a frequent observer of this phenomenon, it's as scary and troublesome as unchecked credit card debt or an expensive chronic health condition. This is one of the many reasons why I believe our 50s can be the most challenging decade of our lives.

Assuming you can clear the mental challenges, the financial and administrative obstacles can leave you feeling like a Rube Goldberg machine.

Income, health insurance, life insurance, disability insurance, bills, expenses, short-term savings and retirement savings are all immediately important in the face of a job loss. Never mind your Parent PLUS loans, financially-dependent aging parents, and boomerang children (adult kids who live at home), which might all be lurking as well.

When does your income stop?

From the shocking moment a person learns their job is no longer their job, the word "triage" must flash in bright lights like an obnoxiously large sign in Times Square. This is more challenging than you might think. Like a pickpocket bumping into you right before he grabs your wallet, the distraction is the problem that takes your focus away from the real problem.

This is hard to do because of the emotion that arrives with the dirty deed. The mind immediately begins to race to sources of money and relief. And unfortunately that relief is often found in the wrong place.

The first thing you should do is identify the exact day your job income stops arriving. That's how much time you have to defuse the bomb. Your fuse may come in the form of a severance package, or work you've performed but haven't been paid for yet.

When do benefits kick in?

Next, and by next I mean five minutes later, explore your eligibility for unemployment benefits, and then file for them if you're able. However, in some states severance pay affects your immediate eligibility for unemployment benefits. In other words, you can't file for unemployment until your severance payments go away.

Assuming you can't just retire at this moment, which you likely can't, you must secure fresh employment income quickly. But quickly is relative to the length of your fuse. I've witnessed way too many people miscalculate the length and importance of their fuse. If you're able to get back to work quickly, the initial job loss plus severance ends up enhancing your financial life. If you take too much time, by your choice or that of the cosmos, boom.

The next move is much more hands-on, and must also be performed the day you find yourself without a job.

What nonessentials do I cut?

Grab your bank statement, a marker, and a calculator. As much as you want to pretend it's business as usual, you shouldn't. Identify expenses that don't make sense if you don't have a job. Circle them. Add them up. Resolve to eliminate them for the time being, and possibly permanently. While this won't necessarily lengthen your fuse, it could lessen the severity of a potential boom.

The idea of diving into your spending habits on the day you lose your job is no fun. But when else will you have such a powerful reason to do so? You won't. It's better than dipping into your assets to fund your current lifestyle. And that's where we'll pick it up the next time.

We've covered day one. In my next column we will tackle day two and beyond.

Peter Dunn is an author, speaker and radio host, and he has a free podcast: "Million Dollar Plan." Have a question for Pete the Planner? Email him at AskPete@petetheplanner.com. The views and opinions expressed in this column are the author's and do not necessarily reflect those of USA TODAY.

[Feb 13, 2019] Microsoft patches 0-day vulnerabilities in IE and Exchange

It is unclear how long this vulnerability existed, but this is pretty serious stuff that shows how the Hillary server could have been hacked via the Abedin account. As Abedin's technical level was lower than zero, hacking into her home laptop was just trivial.
Feb 13, 2019 | arstechnica.com

Microsoft also patched Exchange against a vulnerability that allowed remote attackers with little more than an unprivileged mailbox account to gain administrative control over the server. Dubbed PrivExchange, CVE-2019-0686 was publicly disclosed last month, along with proof-of-concept code that exploited it. In Tuesday's advisory, Microsoft officials said they haven't seen active exploits yet but that they were "likely."

[Feb 12, 2019] Older Workers Need a Different Kind of Layoff A 60-year-old whose position is eliminated might be unable to find another job, but could retire if allowed early access to Medicare

Highly recommended!
This is a constructive suggestion that is implementable even under neoliberalism. But as everything is perverted under neoliberalism, it might instead prompt layoffs before the age of 55.
Notable quotes:
"... Older workers often struggle to get rehired as easily as younger workers. Age discrimination is a well-known problem in corporate America. What's a 60-year-old back office worker supposed to do if downsized in a merger? The BB&T-SunTrust prospect highlights the need for a new type of unemployment insurance for some of the workforce. ..."
"... One policy might be treating unemployed older workers differently than younger workers. Giving them unemployment benefits for a longer period of time than younger workers would be one idea, as well as accelerating the age of Medicare eligibility for downsized employees over the age of 55. The latter idea would help younger workers as well, by encouraging older workers to accept buyout packages -- freeing up career opportunities for younger workers. ..."
Feb 12, 2019 | www.bloomberg.com

The proposed merger between SunTrust and BB&T makes sense for both firms -- which is why Wall Street sent both stocks higher on Thursday after the announcement. But employees of the two banks, especially older workers who are not yet retirement age, are understandably less enthused at the prospect of downsizing. In a nation with almost 37 million workers over the age of 55, the quandary of the SunTrust-BB&T workforce will become increasingly familiar across the U.S. economy.

But what's good for the firms isn't good for all of the workers. Older workers often struggle to get rehired as easily as younger workers. Age discrimination is a well-known problem in corporate America. What's a 60-year-old back office worker supposed to do if downsized in a merger? The BB&T-SunTrust prospect highlights the need for a new type of unemployment insurance for some of the workforce.

One policy might be treating unemployed older workers differently than younger workers. Giving them unemployment benefits for a longer period of time than younger workers would be one idea, as well as accelerating the age of Medicare eligibility for downsized employees over the age of 55. The latter idea would help younger workers as well, by encouraging older workers to accept buyout packages -- freeing up career opportunities for younger workers.

The economy can be callous toward older workers, but policy makers don't have to be. We should think about ways of dealing with this shift in the labor market before it happens.

[Feb 11, 2019] Resuming rsync on an interrupted transfer

May 15, 2013 | stackoverflow.com

Glitches , May 15, 2013 at 18:06

I am trying to back up my file server to a remote file server using rsync. Rsync is not successfully resuming when a transfer is interrupted. I used the partial option, but rsync doesn't find the file it already started, because it renames it to a temporary file, and when resumed it creates a new file and starts from the beginning.

Here is my command:

rsync -avztP -e "ssh -p 2222" /volume1/ myaccount@backup-server-1:/home/myaccount/backup/ --exclude "@spool" --exclude "@tmp"

When this command is run, a backup file named OldDisk.dmg from my local machine gets created on the remote machine as something like .OldDisk.dmg.SjDndj23 .

Now when the internet connection gets interrupted and I have to resume the transfer, I have to find where rsync left off by locating the temp file like .OldDisk.dmg.SjDndj23 and renaming it to OldDisk.dmg so that rsync sees there already exists a file that it can resume.

How do I fix this so I don't have to manually intervene each time?

Richard Michael , Nov 6, 2013 at 4:26

TL;DR : Use --timeout=X (X in seconds) to change the default rsync server timeout, not --inplace .

The issue is the rsync server processes (of which there are two, see rsync --server ... in ps output on the receiver) continue running, to wait for the rsync client to send data.

If the rsync server processes do not receive data for a sufficient time, they will indeed time out, self-terminate and clean up by moving the temporary file to its "proper" name (e.g., no temporary suffix). You'll then be able to resume.

If you don't want to wait for the long default timeout to cause the rsync server to self-terminate, then when your internet connection returns, log into the server and clean up the rsync server processes manually. However, you must politely terminate rsync -- otherwise, it will not move the partial file into place; but rather, delete it (and thus there is no file to resume). To politely ask rsync to terminate, do not SIGKILL (e.g., -9 ), but SIGTERM (e.g., pkill -TERM -x rsync - only an example, you should take care to match only the rsync processes concerned with your client).

Fortunately there is an easier way: use the --timeout=X (X in seconds) option; it is passed to the rsync server processes as well.

For example, if you specify rsync ... --timeout=15 ... , both the client and server rsync processes will cleanly exit if they do not send/receive data in 15 seconds. On the server, this means moving the temporary file into position, ready for resuming.
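Putting this together, a sketch of the original command with a timeout added (paths, port and excludes are taken from the question; the 60-second value is an arbitrary choice):

rsync -avztP --timeout=60 -e "ssh -p 2222" /volume1/ myaccount@backup-server-1:/home/myaccount/backup/ --exclude "@spool" --exclude "@tmp"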

I'm not sure how long the various rsync processes will try to send/receive data before they die (it might vary with the operating system). In my testing, the server rsync processes remain running longer than the local client. On a "dead" network connection, the client terminates with a broken pipe (e.g., no network socket) after about 30 seconds; you could experiment or review the source code. Meaning, you could try to "ride out" the bad internet connection for 15-20 seconds.

If you do not clean up the server rsync processes (or wait for them to die), but instead immediately launch another rsync client process, two additional server processes will launch (for the other end of your new client process). Specifically, the new rsync client will not re-use/reconnect to the existing rsync server processes. Thus, you'll have two temporary files (and four rsync server processes) -- though, only the newer, second temporary file has new data being written (received from your new rsync client process).

Interestingly, if you then clean up all rsync server processes (for example, stop your client, which will stop the new rsync servers, then SIGTERM the older rsync servers), it appears to merge (assemble) all the partial files into the new properly named file. So, imagine a long-running partial copy which dies (and you think you've "lost" all the copied data), and a short-running re-launched rsync (oops!)... you can stop the second client, SIGTERM the first servers, it will merge the data, and you can resume.

Finally, a few short remarks:

JamesTheAwesomeDude , Dec 29, 2013 at 16:50

Just curious: wouldn't SIGINT (aka ^C ) be 'politer' than SIGTERM ? – JamesTheAwesomeDude Dec 29 '13 at 16:50

Richard Michael , Dec 29, 2013 at 22:34

I didn't test how the server-side rsync handles SIGINT, so I'm not sure it will keep the partial file - you could check. Note that this doesn't have much to do with Ctrl-c ; it happens that your terminal sends SIGINT to the foreground process when you press Ctrl-c , but the server-side rsync has no controlling terminal. You must log in to the server and use kill . The client-side rsync will not send a message to the server (for example, after the client receives SIGINT via your terminal Ctrl-c ) - might be interesting though. As for anthropomorphizing, not sure what's "politer". :-) – Richard Michael Dec 29 '13 at 22:34

d-b , Feb 3, 2015 at 8:48

I just tried this timeout argument rsync -av --delete --progress --stats --human-readable --checksum --timeout=60 --partial-dir /tmp/rsync/ rsync://$remote:/ /src/ but then it timed out during the "receiving file list" phase (which in this case takes around 30 minutes). Setting the timeout to half an hour kind of defeats the purpose. Any workaround for this? – d-b Feb 3 '15 at 8:48

Cees Timmerman , Sep 15, 2015 at 17:10

@user23122 --checksum reads all data when preparing the file list, which is great for many small files that change often, but should be done on-demand for large files. – Cees Timmerman Sep 15 '15 at 17:10

[Feb 11, 2019] prsync command man page - pssh

Originally from Brent N. Chun ~ Intel Research Berkeley
Feb 11, 2019 | www.mankier.com

prsync -- parallel file sync program

Synopsis

prsync [-vAraz] [-h hosts_file] [-H [user@]host[:port]] [-l user] [-p par] [-o outdir] [-e errdir] [-t timeout] [-O options] [-x args] [-X arg] [-S args] local ... remote

Description

prsync is a program for copying files in parallel to a number of hosts using the popular rsync program. It provides features such as passing a password to ssh, saving output to files, and timing out.
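A hedged usage sketch (hosts.txt and the paths are hypothetical names): push a configuration directory to every host listed in hosts.txt, in archive mode with compression, at most 16 concurrent transfers, and a 5-minute timeout:

prsync -h hosts.txt -l root -p 16 -a -z -t 300 -o /tmp/prsync_out -e /tmp/prsync_err /etc/myapp/ /etc/myapp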

Options
-h host_file
--hosts host_file
Read hosts from the given host_file. Lines in the host file are of the form [user@]host[:port] and can include blank lines and comments (lines beginning with "#"). If multiple host files are given (the -h option is used more than once), then prsync behaves as though these files were concatenated together. If a host is specified multiple times, then prsync will connect the given number of times.
-H [user@]host[:port]
--host [user@]host[:port]
-H "[user@]host[:port] [ [user@]host[:port] ... ]"
--host "[user@]host[:port] [ [user@]host[:port] ... ]"

Add the given host strings to the list of hosts. This option may be given multiple times, and may be used in conjunction with the -h option.

-l user
--user user
Use the given username as the default for any host entries that don't specifically specify a user.
-p parallelism
--par parallelism
Use the given number as the maximum number of concurrent connections.
-t timeout
--timeout timeout
Make connections time out after the given number of seconds. With a value of 0, prsync will not timeout any connections.
-o outdir
--outdir outdir
Save standard output to files in the given directory. Filenames are of the form [user@]host[:port][.num] where the user and port are only included for hosts that explicitly specify them. The number is a counter that is incremented each time for hosts that are specified more than once.
-e errdir
--errdir errdir
Save standard error to files in the given directory. Filenames are of the same form as with the -o option.
-x args
--extra-args args
Passes extra rsync command-line arguments (see the rsync(1) man page for more information about rsync arguments). This option may be specified multiple times. The arguments are processed to split on whitespace, protect text within quotes, and escape with backslashes. To pass arguments without such processing, use the -X option instead.
-X arg
--extra-arg arg
Passes a single rsync command-line argument (see the rsync(1) man page for more information about rsync arguments). Unlike the -x option, no processing is performed on the argument, including word splitting. To pass multiple command-line arguments, use the option once for each argument (see the sketch after this options list).
-O options
--options options
SSH options in the format used in the SSH configuration file (see the ssh_config(5) man page for more information). This option may be specified multiple times.
-A
--askpass
Prompt for a password and pass it to ssh. The password may be used either to unlock a key or for password authentication. The password is transferred in a fairly secure manner (e.g., it will not show up in argument lists). However, be aware that a root user on your system could potentially intercept the password.
-v
--verbose
Include error messages from rsync with the -i and -e options.
-r
--recursive
Recursively copy directories.
-a
--archive
Use rsync archive mode (rsync's -a option).
-z
--compress
Use rsync compression.
-S args
--ssh-args args
Passes extra SSH command-line arguments (see the ssh(1) man page for more information about SSH arguments). The given value is appended to the ssh command (rsync's -e option) without any processing.
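To illustrate the difference between -x and -X, a hypothetical sketch (hosts.txt and the paths are made-up names):

# -x word-splits its value, so two rsync arguments can travel in one quoted string
prsync -h hosts.txt -a -x "--delete --exclude=*.log" /srv/data/ /srv/data

# -X passes exactly one argument verbatim, so give it once per argument
prsync -h hosts.txt -a -X --delete -X "--exclude=*.log" /srv/data/ /srv/data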
Tips

The ssh_config file can include an arbitrary number of Host sections. Each host entry specifies ssh options which apply only to the given host. Host definitions can even behave like aliases if the HostName option is included. This ssh feature, in combination with pssh host files, provides a tremendous amount of flexibility.
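For example (a hypothetical sketch of the aliasing described above), a Host section in ~/.ssh/config that acts as an alias with its own connection options:

Host web1
    HostName 192.0.2.10
    User root
    Port 2222

A pssh/prsync host file can then simply list web1, and the options are picked up automatically.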

Exit Status

The exit status codes from prsync are as follows:

0
Success
1
Miscellaneous error
2
Syntax or usage error
3
At least one process was killed by a signal or timed out.
4
All processes completed, but at least one rsync process reported an error (exit status other than 0).
Authors

Written by Brent N. Chun <bnc@theether.org> and Andrew McNabb <amcnabb@mcnabbs.org>.

https://github.com/lilydjwg/pssh

See Also

rsync(1), ssh(1), ssh_config(5), pssh(1), prsync(1), pslurp(1), pnuke(1)

Referenced By

pnuke(1), pscp.pssh(1), pslurp(1), pssh(1).

[Feb 07, 2019] Installing Nagios-3.4 in CentOS 6.3 LinTut

Feb 07, 2019 | lintut.com

Nagios is open-source software used for network and infrastructure monitoring. Nagios monitors servers, switches, applications and services. It alerts the system administrator when something goes wrong and alerts again when the issue has been rectified.

View also: How to Enable EPEL Repository for RHEL/CentOS 6/5

Install Nagios and the required packages:

# yum install nagios nagios-devel nagios-plugins* gd gd-devel httpd php gcc glibc glibc-common

By default, when doing yum install nagios, the authorized user name nagiosadmin is mentioned in the cgi.cfg file, and the file /etc/nagios/passwd is used as the htpasswd file. So to keep the steps simple, I am using the same name.
# htpasswd -c /etc/nagios/passwd nagiosadmin

Check the values given below in /etc/nagios/cgi.cfg:
# nano /etc/nagios/cgi.cfg
# AUTHENTICATION USAGE
use_authentication=1
# SYSTEM/PROCESS INFORMATION ACCESS
authorized_for_system_information=nagiosadmin
# CONFIGURATION INFORMATION ACCESS
authorized_for_configuration_information=nagiosadmin
# SYSTEM/PROCESS COMMAND ACCESS
authorized_for_system_commands=nagiosadmin
# GLOBAL HOST/SERVICE VIEW ACCESS
authorized_for_all_services=nagiosadmin
authorized_for_all_hosts=nagiosadmin
# GLOBAL HOST/SERVICE COMMAND ACCESS
authorized_for_all_service_commands=nagiosadmin
authorized_for_all_host_commands=nagiosadmin

For providing access to the nagiosadmin user over HTTP, the file /etc/httpd/conf.d/nagios.conf exists. Below is the nagios.conf configuration for the Nagios server.
# cat /etc/httpd/conf.d/nagios.conf
# SAMPLE CONFIG SNIPPETS FOR APACHE WEB SERVER
# Last Modified: 11-26-2005
#
# This file contains examples of entries that need
# to be incorporated into your Apache web server
# configuration file. Customize the paths, etc. as
# needed to fit your system.

ScriptAlias /nagios/cgi-bin/ "/usr/lib/nagios/cgi-bin/"

<Directory "/usr/lib/nagios/cgi-bin/">
#  SSLRequireSSL
   Options ExecCGI
   AllowOverride None
   Order allow,deny
   Allow from all
#  Order deny,allow
#  Deny from all
#  Allow from 127.0.0.1
   AuthName "Nagios Access"
   AuthType Basic
   AuthUserFile /etc/nagios/passwd
   Require valid-user
</Directory>

Alias /nagios "/usr/share/nagios/html"

<Directory "/usr/share/nagios/html">
#  SSLRequireSSL
   Options None
   AllowOverride None
   Order allow,deny
   Allow from all
#  Order deny,allow
#  Deny from all
   Allow from 127.0.0.1
   AuthName "Nagios Access"
   AuthType Basic
   AuthUserFile /etc/nagios/passwd
   Require valid-user
</Directory>

Start httpd and nagios:

# /etc/init.d/httpd start
# /etc/init.d/nagios start

Note: SELinux and iptables are disabled.

Access the Nagios server at http://nagios_server_ip-address/nagios . Give the username nagiosadmin and the password you assigned to the nagiosadmin user.
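At this point only the Nagios server itself is monitored. As a sketch of the next step (the file name, host name and address below are hypothetical; any *.cfg file under /etc/nagios/conf.d/ is read by the packaged default configuration), a minimal definition for an additional host would look like this:

# /etc/nagios/conf.d/myserver.cfg (hypothetical file name)
define host{
        use                     linux-server            ; inherit from the stock template
        host_name               myserver
        alias                   My CentOS server
        address                 192.168.1.50
        }

define service{
        use                     generic-service
        host_name               myserver
        service_description     PING
        check_command           check_ping!100.0,20%!500.0,60%
        }

After adding the file, restart nagios (/etc/init.d/nagios restart) so the new host is picked up.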

[Feb 04, 2019] Do not play dangerous games with resizing partitions unless absolutely necessary

Copying to an additional drive (can be USB), repartitioning, and then copying everything back is a safer bet.
May 07, 2017 | superuser.com
womble

In theory, you could reduce the size of sda1, increase the size of the extended partition, shift the contents of the extended partition down, then increase the size of the PV on the extended partition and you'd have the extra room.

However, the number of possible things that can go wrong there is just astronomical.

So I'd recommend either buying a second hard drive (and possibly transferring everything onto it in a more sensible layout, then repartitioning your current drive better) or just making some bind mounts of various bits and pieces out of /home into / to free up a bit more space.

--womble
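A sketch of the safer copy-out/repartition/copy-back route suggested above (device names and mount points are hypothetical; do this from a live/rescue system so the source filesystem is not in use):

# copy everything to the spare drive, preserving permissions, hard links, ACLs and xattrs
mount /dev/sdb1 /mnt/spare
rsync -aAXH /home/ /mnt/spare/home/
# ... repartition /dev/sda with fdisk/parted and recreate the filesystems ...
# then copy the data back
rsync -aAXH /mnt/spare/home/ /home/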

[Feb 04, 2019] Ticket 3745 (Integration mc with mc2(Lua))

This ticket is from 2016...
Dec 01, 2020 | midnight-commander.org
Ticket #3745 (closed enhancement: invalid)

Opened 2 years ago

Last modified 2 years ago

Integration mc with mc2 (Lua)

Description

I think it is necessary that the code bases of mc and mc2 correspond to each other. mooffie, can you check that patches from andrew_b merge easily with mc2, and if some patch conflicts with mc2 code, hold those changes by writing about it in the corresponding ticket? zaytsev, can you help automate this (continuous integration, Travis and so on)? Sorry, but a few words in Russian (translated):

Guys, I am not trying to give orders; you are doing great work. I just wanted to point out that mooffie is trying to keep his code up to date, but seeing how he runs into problems out of nowhere, I am afraid his enthusiasm may evaporate.

Change History comment:1 Changed 2 years ago by zaytsev-work

https://mail.gnome.org/archives/mc-devel/2016-February/msg00021.html

I have asked what plans mooffie has for mc2 some time ago and never got an answer. Note that I totally don't blame him for that. Everyone here is working at their own pace. Sometimes I disappear for weeks or months, because I can't get a spare 5 minutes, not even speaking of several hours, due to the non-mc related workload. I hope that one day we'll figure out the way towards merging it, and eventually get it done.

In the mean time, he's working together with us by offering extremely important and well-prepared contributions, which are a pleasure to deal with and we are integrating them as fast as we can, so it's not like we are at war and not talking to each other.

Anyways, creating random noise in the ticket tracking system will not help to advance your cause. The only way to influence the process is to invest serious amount of time in the development.

[Feb 02, 2019] Google Employees Are Fighting With Executives Over Pay

Notable quotes:
"... In July, Bloomberg reported that, for the first time, more than 50 percent of Google's workforce were temps, contractors, and vendors. ..."
Feb 02, 2019 | www.wired.com

... ... ...

Asked whether they have confidence in CEO Sundar Pichai and his management team to "effectively lead in the future," 74 percent of employees responded "positive," as opposed to "neutral" or "negative," in late 2018, down from 92 percent "positive" the year before. The 18-point drop left employee confidence at its lowest point in at least six years. The results of the survey, known internally as Googlegeist, also showed a decline in employees' satisfaction with their compensation, with 54 percent saying they were satisfied, compared with 64 percent the prior year.

The drop in employee sentiment helps explain why internal debate around compensation, pay equity, and trust in executives has heated up in recent weeks -- and why an HR presentation from 2016 went viral inside the company three years later.

The presentation, first reported by Bloomberg and reviewed by WIRED, dates from July 2016, about a year after Google started an internal effort to curb spending . In the slide deck, Google's human-resources department presents potential ways to cut the company's $20 billion compensation budget. Ideas include: promoting fewer people, hiring proportionately more low-level employees, and conducting an audit to make sure Google is paying benefits "(only) for the right people." In some cases, HR suggested ways to implement changes while drawing little attention, or tips on how to sell the changes to Google employees. Some of the suggestions were implemented, like eliminating the annual employee holiday gift; most were not.

Another, more radical proposal floated inside the company around the same time didn't appear in the deck. That suggested converting some full-time employees to contractors to save money. A person familiar with the situation said this proposal was not implemented. In July, Bloomberg reported that, for the first time, more than 50 percent of Google's workforce were temps, contractors, and vendors.

[Jan 31, 2019] Troubleshooting performance issue in CentOS-RHEL using collectl utility The Geek Diary

Jan 31, 2019 | www.thegeekdiary.com

Troubleshooting performance issue in CentOS/RHEL using collectl utility

By admin

Unlike most monitoring tools that either focus on a small set of statistics, format their output in only one way, run either interactively or as a daemon but not both, collectl tries to do it all. You can choose to monitor any of a broad set of subsystems which currently include buddyinfo, cpu, disk, inodes, InfiniBand, lustre, memory, network, nfs, processes, quadrics, slabs, sockets and tcp.

Installing collectl

The collectl community project is maintained at http://collectl.sourceforge.net/ as well as provided in the Fedora community project. For Red Hat Enterprise Linux 6 and 7, the easiest way to install collectl is via the EPEL repositories (Extra Packages for Enterprise Linux) maintained by the Fedora community.

Once set up, collectl can be installed with the following command:

# yum install collectl

The packages are also available for direct download using the following links:

RHEL 5 x86_64 (available in the EPEL archives) https://archive.fedoraproject.org/pub/archive/epel/5/x86_64/
RHEL 6 x86_64 http://dl.fedoraproject.org/pub/epel/6/x86_64/
RHEL 7 x86_64 http://dl.fedoraproject.org/pub/epel/7/x86_64/

General usage of collectl

The collectl utility can be run manually via the command line or as a service. Data will be logged to /var/log/collectl/*.raw.gz . The logs will be rotated every 24 hours by default. To run as a service:

# chkconfig collectl on       # [optional, to start at boot time]
# service collectl start
Sample Intervals

When run manually from the command line, the first Interval value is 1. When running as a service, the default sample intervals are as shown below. It might sometimes be desirable to lower these to avoid averaging, such as 1,30,60.

# grep -i interval /etc/collectl.conf 
#Interval =     10
#Interval2 =    60
#Interval3 =   120
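For example, a sketch of the override suggested above (the Interval lines in /etc/collectl.conf ship commented out; uncomment and edit them, then restart the collectl service):

Interval =    1
Interval2 =  30
Interval3 =  60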
Using collectl to troubleshoot disk or SAN storage performance

The defaults of 10s for all but process data (which is collected at 60s intervals) are best left as is, even for storage performance analysis.

The SAR Equivalence Matrix shows common SAR command equivalents to help experienced SAR users learn to use Collectl. The following example command will view summary detail of the CPU, Network and Disk from the file /var/log/collectl/HOSTNAME-20190116-164506.raw.gz :

# collectl -scnd -oT -p HOSTNAME-20190116-164506.raw.gz
#         <----CPU[HYPER]-----><----------Disks-----------><----------Network---------->
#Time     cpu sys inter  ctxsw KBRead  Reads KBWrit Writes   KBIn  PktIn  KBOut  PktOut 
16:46:10    9   2 14470  20749      0      0     69      9      0      1      0       2 
16:46:20   13   4 14820  22569      0      0    312     25    253    174      7      79 
16:46:30   10   3 15175  21546      0      0     54      5      0      2      0       3 
16:46:40    9   2 14741  21410      0      0     57      9      1      2      0       4 
16:46:50   10   2 14782  23766      0      0    374      8    250    171      5      75 
....

The next example will output the 1 minute period from 17:00 – 17:01.

# collectl -scnd -oT --from 17:00 --thru 17:01 -p HOSTNAME-20190116-164506.raw.gz
#         <----CPU[HYPER]-----><----------Disks-----------><----------Network---------->
#Time     cpu sys inter  ctxsw KBRead  Reads KBWrit Writes   KBIn  PktIn  KBOut  PktOut 
17:00:00   13   3 15870  25320      0      0     67      9    251    172      6      90 
17:00:10   16   4 16386  24539      0      0    315     17    246    170      6      84 
17:00:20   10   2 14959  22465      0      0     65     26      5      6      1       8 
17:00:30   11   3 15056  24852      0      0    323     12    250    170      5      69 
17:00:40   18   5 16595  23826      0      0    463     13      1      5      0       5 
17:00:50   12   3 15457  23663      0      0     57      9    250    170      6      76 
17:01:00   13   4 15479  24488      0      0    304      7    254    176      5      70

The next example will output Detailed Disk data.

# collectl -scnD -oT -p HOSTNAME-20190116-164506.raw.gz

### RECORD    7 >>> tabserver <<< (1366318860.001) (Thu Apr 18 17:01:00 2013) ###

# CPU[HYPER] SUMMARY (INTR, CTXSW & PROC /sec)
# User  Nice   Sys  Wait   IRQ  Soft Steal  Idle  CPUs  Intr  Ctxsw  Proc  RunQ   Run   Avg1  Avg5 Avg15 RunT BlkT
     8     0     3     0     0     0     0    86     8   15K    24K     0   638     5   1.07  1.05  0.99    0    0

# DISK STATISTICS (/sec)
#          <---------reads---------><---------writes---------><--------averages--------> Pct
#Name       KBytes Merged  IOs Size  KBytes Merged  IOs Size  RWSize  QLen  Wait SvcTim Util
sda              0      0    0    0     304     11    7   44      44     2    16      6    4
sdb              0      0    0    0       0      0    0    0       0     0     0      0    0
dm-0             0      0    0    0       0      0    0    0       0     0     0      0    0
dm-1             0      0    0    0       5      0    1    4       4     1     2      2    0
dm-2             0      0    0    0     298      0   14   22      22     1     4      3    4
dm-3             0      0    0    0       0      0    0    0       0     0     0      0    0
dm-4             0      0    0    0       0      0    0    0       0     0     0      0    0
dm-5             0      0    0    0       0      0    0    0       0     0     0      0    0
dm-6             0      0    0    0       0      0    0    0       0     0     0      0    0
dm-7             0      0    0    0       0      0    0    0       0     0     0      0    0
dm-8             0      0    0    0       0      0    0    0       0     0     0      0    0
dm-9             0      0    0    0       0      0    0    0       0     0     0      0    0
dm-10            0      0    0    0       0      0    0    0       0     0     0      0    0
dm-11            0      0    0    0       0      0    0    0       0     0     0      0    0

# NETWORK SUMMARY (/sec)
# KBIn  PktIn SizeIn  MultI   CmpI  ErrsI  KBOut PktOut  SizeO   CmpO  ErrsO
   253    175   1481      0      0      0      5     70     79      0      0
....
Commonly used options

These generate summary data, which is the total of ALL data for a particular type: the lowercase subsystem letters, e.g. -sc for CPU, -sd for disk, -sn for network, combined as -scnd in the examples above.

These generate detail data, typically but not limited to the device level: the corresponding uppercase letters, e.g. D for per-disk detail, as in the -scnD example above.

The most useful switches seen in the examples above are -oT (prefix each line with a timestamp), -p file (play back a recorded raw log file), and --from/--thru (restrict playback to a time window).

Final Thoughts

Performance Co-Pilot (PCP) is the preferred tool for collecting comprehensive performance metrics for performance analysis and troubleshooting. It is shipped and supported in Red Hat Enterprise Linux 6 & 7 and is the preferred recommendation over Collectl or Sar/Sysstat. It also includes conversion tools between its own performance data and Collectl & Sar/Sysstat.

[Jan 31, 2019] Linus Torvalds and others on Linux's systemd by By Steven J. Vaughan-Nichols

Notable quotes:
"... I think some of the design details are insane (I dislike the binary logs, for example) ..."
"... Systemd problems might not have mattered that much, except that GNOME has a similar attitude; they only care for a small subset of the Linux desktop users, and they have historically abandoned some ways of interacting the Desktop in the interest of supporting touchscreen devices and to try to attract less technically sophisticated users. ..."
"... If you don't fall in the demographic of what GNOME supports, you're sadly out of luck. (Or you become a second class citizen, being told that you have to rely on GNOME extensions that may break on every single new version of GNOME.) ..."
"... As a result, many traditional GNOME users have moved over to Cinnamon, XFCE, KDE, etc. But as systemd starts subsuming new functions, components like network-manager will only work on systemd or other components that are forced to be used due to a network of interlocking dependencies; and it may simply not be possible for these alternate desktops to continue to function, because there is [no] viable alternative to systemd supported by more and more distributions. ..."
| www.zdnet.com

So what do Linux's leaders think of all this? I asked them and this is what they told me.

Linus Torvalds said:

"I don't actually have any particularly strong opinions on systemd itself. I've had issues with some of the core developers that I think are much too cavalier about bugs and compatibility, and I think some of the design details are insane (I dislike the binary logs, for example) , but those are details, not big issues."

Theodore "Ted" Ts'o, a leading Linux kernel developer and a Google engineer, sees systemd as potentially being more of a problem. "The bottom line is that they are trying to solve some real problems that matter in some use cases. And, [that] sometimes that will break assumptions made in other parts of the system."

Another concern that Ts'o made -- which I've heard from many other developers -- is that the systemd move was made too quickly: "The problem is sometimes what they break are in other parts of the software stack, and so long as it works for GNOME, they don't necessarily consider it their responsibility to fix the rest of the Linux ecosystem."

This, as Ts'o sees it, feeds into another problem:

" Systemd problems might not have mattered that much, except that GNOME has a similar attitude; they only care for a small subset of the Linux desktop users, and they have historically abandoned some ways of interacting the Desktop in the interest of supporting touchscreen devices and to try to attract less technically sophisticated users.

If you don't fall in the demographic of what GNOME supports, you're sadly out of luck. (Or you become a second class citizen, being told that you have to rely on GNOME extensions that may break on every single new version of GNOME.) "

Ts'o has an excellent point. GNOME 3.x has alienated both users and developers. He continued,

" As a result, many traditional GNOME users have moved over to Cinnamon, XFCE, KDE, etc. But as systemd starts subsuming new functions, components like network-manager will only work on systemd or other components that are forced to be used due to a network of interlocking dependencies; and it may simply not be possible for these alternate desktops to continue to function, because there is [no] viable alternative to systemd supported by more and more distributions. "

Of course, Ts'o continued, "None of these nightmare scenarios have happened yet. The people who are most stridently objecting to systemd are people who are convinced that the nightmare scenario is inevitable so long as we continue on the same course and altitude."

Ts'o is "not entirely certain it's going to happen, but he's afraid it will.

What I find puzzling about all this is that even though everyone admits that sysvinit needed replacing and many people dislike systemd, the distributions keep adopting it. Only a few distributions, including Slackware, Gentoo, PCLinuxOS, and Chrome OS, haven't adopted it.

It's not like there aren't alternatives. These include Upstart, runit, and OpenRC.

If systemd really does turn out to be as bad as some developers fear, there are plenty of replacements waiting in the wings. Indeed, rather than hear so much about how awful systemd is, I'd rather see developers spending their time working on an alternative.

[Jan 29, 2019] hardware - Is post-sudden-power-loss filesystem corruption on an SSD drive's ext3 partition expected behavior

Dec 04, 2012 | serverfault.com

My company makes an embedded Debian Linux device that boots from an ext3 partition on an internal SSD drive. Because the device is an embedded "black box", it is usually shut down the rude way, by simply cutting power to the device via an external switch.

This is normally okay, as ext3's journalling keeps things in order, so other than the occasional loss of part of a log file, things keep chugging along fine.

However, we've recently seen a number of units where after a number of hard-power-cycles the ext3 partition starts to develop structural issues -- in particular, we run e2fsck on the ext3 partition and it finds a number of issues like those shown in the output listing at the bottom of this Question. Running e2fsck until it stops reporting errors (or reformatting the partition) clears the issues.

My question is... what are the implications of seeing problems like this on an ext3/SSD system that has been subjected to lots of sudden/unexpected shutdowns?

My feeling is that this might be a sign of a software or hardware problem in our system, since my understanding is that (barring a bug or hardware problem) ext3's journalling feature is supposed to prevent these sorts of filesystem-integrity errors. (Note: I understand that user-data is not journalled and so munged/missing/truncated user-files can happen; I'm specifically talking here about filesystem-metadata errors like those shown below)

My co-worker, on the other hand, says that this is known/expected behavior because SSD controllers sometimes re-order write commands and that can cause the ext3 journal to get confused. In particular, he believes that even given normally functioning hardware and bug-free software, the ext3 journal only makes filesystem corruption less likely, not impossible, so we should not be surprised to see problems like this from time to time.

Which of us is right?

Embedded-PC-failsafe:~# ls
Embedded-PC-failsafe:~# umount /mnt/unionfs
Embedded-PC-failsafe:~# e2fsck /dev/sda3
e2fsck 1.41.3 (12-Oct-2008)
embeddedrootwrite contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Invalid inode number for '.' in directory inode 46948.
Fix<y>? yes

Directory inode 46948, block 0, offset 12: directory corrupted
Salvage<y>? yes

Entry 'status_2012-11-26_14h13m41.csv' in /var/log/status_logs (46956) has deleted/unused inode 47075.  Clear<y>? yes
Entry 'status_2012-11-26_10h42m58.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47076.  Clear<y>? yes
Entry 'status_2012-11-26_11h29m41.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47080.  Clear<y>? yes
Entry 'status_2012-11-26_11h42m13.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47081.  Clear<y>? yes
Entry 'status_2012-11-26_12h07m17.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47083.  Clear<y>? yes
Entry 'status_2012-11-26_12h14m53.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47085.  Clear<y>? yes
Entry 'status_2012-11-26_15h06m49.csv' in /var/log/status_logs (46956) has deleted/unused inode 47088.  Clear<y>? yes
Entry 'status_2012-11-20_14h50m09.csv' in /var/log/status_logs (46956) has deleted/unused inode 47073.  Clear<y>? yes
Entry 'status_2012-11-20_14h55m32.csv' in /var/log/status_logs (46956) has deleted/unused inode 47074.  Clear<y>? yes
Entry 'status_2012-11-26_11h04m36.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47078.  Clear<y>? yes
Entry 'status_2012-11-26_11h54m45.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47082.  Clear<y>? yes
Entry 'status_2012-11-26_12h12m20.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47084.  Clear<y>? yes
Entry 'status_2012-11-26_12h33m52.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47086.  Clear<y>? yes
Entry 'status_2012-11-26_10h51m59.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47077.  Clear<y>? yes
Entry 'status_2012-11-26_11h17m09.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47079.  Clear<y>? yes
Entry 'status_2012-11-26_12h54m11.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47087.  Clear<y>? yes

Pass 3: Checking directory connectivity
'..' in /etc/network/run (46948) is <The NULL inode> (0), should be /etc/network (46953).
Fix<y>? yes

Couldn't fix parent of inode 46948: Couldn't find parent directory entry

Pass 4: Checking reference counts
Unattached inode 46945
Connect to /lost+found<y>? yes

Inode 46945 ref count is 2, should be 1.  Fix<y>? yes
Inode 46953 ref count is 5, should be 4.  Fix<y>? yes

Pass 5: Checking group summary information
Block bitmap differences:  -(208264--208266) -(210062--210068) -(211343--211491) -(213241--213250) -(213344--213393) -213397 -(213457--213463) -(213516--213521) -(213628--213655) -(213683--213688) -(213709--213728) -(215265--215300) -(215346--215365) -(221541--221551) -(221696--221704) -227517
Fix<y>? yes

Free blocks count wrong for group #6 (17247, counted=17611).
Fix<y>? yes

Free blocks count wrong (161691, counted=162055).
Fix<y>? yes

Inode bitmap differences:  +(47089--47090) +47093 +47095 +(47097--47099) +(47101--47104) -(47219--47220) -47222 -47224 -47228 -47231 -(47347--47348) -47350 -47352 -47356 -47359 -(47457--47488) -47985 -47996 -(47999--48000) -48017 -(48027--48028) -(48030--48032) -48049 -(48059--48060) -(48062--48064) -48081 -(48091--48092) -(48094--48096)
Fix<y>? yes

Free inodes count wrong for group #6 (7608, counted=7624).
Fix<y>? yes

Free inodes count wrong (61919, counted=61935).
Fix<y>? yes


embeddedrootwrite: ***** FILE SYSTEM WAS MODIFIED *****

embeddedrootwrite: ********** WARNING: Filesystem still has errors **********

embeddedrootwrite: 657/62592 files (24.4% non-contiguous), 87882/249937 blocks

Embedded-PC-failsafe:~# 
Embedded-PC-failsafe:~# e2fsck /dev/sda3
e2fsck 1.41.3 (12-Oct-2008)
embeddedrootwrite contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Directory entry for '.' in ... (46948) is big.
Split<y>? yes

Missing '..' in directory inode 46948.
Fix<y>? yes

Setting filetype for entry '..' in ... (46948) to 2.
Pass 3: Checking directory connectivity
'..' in /etc/network/run (46948) is <The NULL inode> (0), should be /etc/network (46953).
Fix<y>? yes

Pass 4: Checking reference counts
Inode 2 ref count is 12, should be 13.  Fix<y>? yes

Pass 5: Checking group summary information

embeddedrootwrite: ***** FILE SYSTEM WAS MODIFIED *****
embeddedrootwrite: 657/62592 files (24.4% non-contiguous), 87882/249937 blocks
Embedded-PC-failsafe:~# 
Embedded-PC-failsafe:~# e2fsck /dev/sda3
e2fsck 1.41.3 (12-Oct-2008)
embeddedrootwrite: clean, 657/62592 files, 87882/249937 blocks
ewwhite (answered Dec 4, 2012): You're both wrong (maybe?)... ext3 is coping the best it can with having its underlying storage removed so abruptly.

Your SSD probably has some type of onboard cache. You don't mention the make/model of SSD in use, but this sounds like a consumer-level SSD versus an enterprise or industrial-grade model.

Either way, the cache is used to help coalesce writes and prolong the life of the drive. If there are writes in-transit, the sudden loss of power is definitely the source of your corruption. True enterprise and industrial SSDs have supercapacitors that maintain power long enough to move data from cache to nonvolatile storage, much in the same way battery-backed and flash-backed RAID controller caches work.

If your drive doesn't have a supercap, the in-flight transactions are being lost, hence the filesystem corruption. ext3 is probably being told that everything is on stable storage, but that's just a function of the cache.

psusi (answered Dec 5, 2012): You are right and your coworker is wrong. Barring something going wrong, the journal makes sure you never have inconsistent fs metadata. You might check with hdparm to see if the drive's write cache is enabled. If it is, and you have not enabled IO barriers (off by default on ext3, on by default in ext4), then that would be the cause of the problem.

The barriers are needed to force the drive write cache to flush at the correct time to maintain consistency, but some drives are badly behaved and either report that their write cache is disabled when it is not, or silently ignore the flush commands. This prevents the journal from doing its job.
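For reference, checking the two things mentioned above might look like this (a sketch; the device and mount point are examples, not taken from the original posts):

hdparm -W /dev/sda             # query whether the drive's volatile write cache is enabled
hdparm -W0 /dev/sda            # disable the write cache if barriers cannot be trusted
mount -o remount,barrier=1 /   # remount ext3 with write barriers explicitly enabled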

[Jan 29, 2019] xfs corrupted after power failure

Highly recommended!
Oct 15, 2013 | www.linuxquestions.org

katmai90210

hi guys,

I have a problem. Yesterday there was a power outage at one of my datacenters, where I have a relatively large fileserver: 2 arrays, 1 x 14 TB and 1 x 18 TB, both in RAID6, with a 3ware card.

After the outage, the server came back online, the xfs partitions were mounted, and everything looked okay. I could access the data and everything seemed just fine.

Today I woke up to lots of I/O errors, and when I rebooted the server, the partitions would not mount:

Oct 14 04:09:17 kp4 kernel:
Oct 14 04:09:17 kp4 kernel: XFS internal error XFS_WANT_CORRUPTED_RETURN at line 279 of file fs/xfs/xfs_alloc.c. Caller 0xffffffff88342331
Oct 14 04:09:17 kp4 kernel: [<ffffffff80056933>] pdflush+0x0/0x1fb
Oct 14 04:09:17 kp4 kernel: [<ffffffff80056a84>] pdflush+0x151/0x1fb
Oct 14 04:09:17 kp4 kernel: [<ffffffff800cd931>] wb_kupdate+0x0/0x16a
Oct 14 04:09:17 kp4 kernel: [<ffffffff80032c2b>] kthread+0xfe/0x132
Oct 14 04:09:17 kp4 kernel: [<ffffffff8005dfc1>] child_rip+0xa/0x11
Oct 14 04:09:17 kp4 kernel: [<ffffffff800a3ab7>] keventd_create_kthread+0x0/0xc4
Oct 14 04:09:17 kp4 kernel: [<ffffffff80032b2d>] kthread+0x0/0x132
Oct 14 04:09:17 kp4 kernel: [<ffffffff8005dfb7>] child_rip+0x0/0x11
Oct 14 04:09:17 kp4 kernel:
Oct 14 04:09:17 kp4 kernel: XFS internal error XFS_WANT_CORRUPTED_RETURN at line 279 of file fs/xfs/xfs_alloc.c. Caller 0xffffffff88342331
Oct 14 04:09:17 kp4 kernel:

got a bunch of these in dmesg.

The array is fine:

[root@kp4 ~]# tw_cli
//kp4> focus c6
//kp4/c6> show

Unit UnitType Status %RCmpl %V/I/M Stripe Size(GB) Cache AVrfy
------------------------------------------------------------------------------
u0 RAID-6 OK - - 256K 13969.8 RiW ON
u1 RAID-6 OK - - 256K 16763.7 RiW ON

VPort Status Unit Size Type Phy Encl-Slot Model
------------------------------------------------------------------------------
p0 OK u1 2.73 TB SATA 0 - Hitachi HDS723030AL
p1 OK u1 2.73 TB SATA 1 - Hitachi HDS723030AL
p2 OK u1 2.73 TB SATA 2 - Hitachi HDS723030AL
p3 OK u1 2.73 TB SATA 3 - Hitachi HDS723030AL
p4 OK u1 2.73 TB SATA 4 - Hitachi HDS723030AL
p5 OK u1 2.73 TB SATA 5 - Hitachi HDS723030AL
p6 OK u1 2.73 TB SATA 6 - Hitachi HDS723030AL
p7 OK u1 2.73 TB SATA 7 - Hitachi HDS723030AL
p8 OK u0 2.73 TB SATA 8 - Hitachi HDS723030AL
p9 OK u0 2.73 TB SATA 9 - Hitachi HDS723030AL
p10 OK u0 2.73 TB SATA 10 - Hitachi HDS723030AL
p11 OK u0 2.73 TB SATA 11 - Hitachi HDS723030AL
p12 OK u0 2.73 TB SATA 12 - Hitachi HDS723030AL
p13 OK u0 2.73 TB SATA 13 - Hitachi HDS723030AL
p14 OK u0 2.73 TB SATA 14 - Hitachi HDS723030AL

Name OnlineState BBUReady Status Volt Temp Hours LastCapTest
---------------------------------------------------------------------------
bbu On Yes OK OK OK 0 xx-xxx-xxxx

I googled for solutions and I think I jumped the gun by doing

xfs_repair -L /dev/sdc

It would not clean it with xfs_repair /dev/sdc, and everybody pretty much says the same thing.

This is what I was getting when trying to mount the array:

Filesystem "sdc": Corruption of in-memory data detected. Shutting down filesystem: sdc

Did i jump the gun by using the -L switch :/ ?

jefro

Here is the RH data on that.

https://docs.fedoraproject.org/en-US...xfsrepair.html
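For the record, the usually recommended sequence before resorting to -L looks something like this (a sketch using the device from the post; -n is a report-only dry run):

mount /dev/sdc /mnt && umount /mnt   # let XFS replay its own log if it can
xfs_repair -n /dev/sdc               # dry run: report problems, write nothing
xfs_repair /dev/sdc                  # real repair; -L (zero the log) only as a last resort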

[Jan 29, 2019] An HVAC tech confused the BLACK button that got pushed to exit the room with the RED button clearly marked EMERGENCY POWER OFF

Jan 29, 2019 | thwack.solarwinds.com

George Sutherland Jul 8, 2015 9:58 AM (in response to RandyBrown): We had a similar thing happen with an HVAC tech who confused the BLACK button that got pushed to exit the room with the RED button clearly marked EMERGENCY POWER OFF. A clear plastic cover was installed within 24 hours... after 3 hours of recovery!

PS... He told his boss that he did not do it.... the camera that focused on the door told a much different story. He was persona non grata at our site after that.

[Jan 29, 2019] HVAC units greatly help to increase reliability

Jan 29, 2019 | thwack.solarwinds.com

sleeper_777 Jul 15, 2015 1:07 PM

Worked at a bank. 6" raised floor. Liebert cooling units on floor with all network equipment. Two units developed a water drain issue over a weekend.

About an hour into Monday morning, devices, servers, routers, in a domino effect, started shorting out and shutting down or blowing up, literally.

Opened the floor tiles to find three inches of water.

We did not have water alarms on the floor at the time.

Shortly after the incident, we did.

But the mistake was very costly and multiple 24 hour shifts of IT people made it a week of pure h3ll.

[Jan 29, 2019] In a former life, I had every server crash over the weekend when the facilities group took down the climate control and HVAC systems without warning

Jan 29, 2019 | thwack.solarwinds.com

[Jan 29, 2019] [SOLVED] Unable to mount root file system after a power failure

Jan 29, 2019 | www.linuxquestions.org
damateem, 07-01-2012, 12:56 PM:

We had a storm yesterday and the power dropped out, causing my Ubuntu server to shut off. Now, when booting, I get

[ 0.564310] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)

It looks like a file system corruption, but I'm having a hard time fixing the problem. I'm using Rescue Remix 12-04 to boot from USB and get access to the system.

Using

sudo fdisk -l

Shows the hard drive as

/dev/sda1: Linux
/dev/sda2: Extended
/dev/sda5: Linux LVM

Using

sudo lvdisplay

Shows LV Names as

/dev/server1/root
/dev/server1/swap_1

Using

sudo blkid

Shows types as

/dev/sda1: ext2
/dev/sda5: LVM2_member
/dev/mapper/server1-root: ext4
/dev/mapper/server1-swap_1: swap

I can mount sda1 and server1/root and all the files appear normal, although I'm not really sure what issues I should be looking for. On sda1, I see a grub folder and several other files. On root, I see the file system as it was before I started having trouble.

I've run the following fsck commands and none of them report any errors

sudo fsck -f /dev/sda1
sudo fsck -f /dev/server1/root
sudo fsck.ext2 -f /dev/sda1
sudo fsck.ext4 -f /dev/server1/root

and I still get the same error when the system boots.

I've hit a brick wall.

What should I try next?

What can I look at to give me a better understanding of what the problem is?

Thanks,
David

syg00, 07-02-2012, 05:58 AM:
Might depend a bit on what messages we aren't seeing.

Normally I'd reckon that means that either the filesystem or disk controller support isn't available. But with something like Ubuntu you'd expect that to all be in place from the initrd. And that is on the /boot partition, and shouldn't be subject to update activity in a normal environment. Unless maybe you're really unlucky and an update was in flight.

Can you chroot into the server (disk) install and run from there successfully?
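A minimal sketch of what such a chroot session could look like here, using the device names from the poster's output above (the update-initramfs/update-grub step is an assumption about what would need rerunning):

mount /dev/server1/root /mnt       # the LVM root volume
mount /dev/sda1 /mnt/boot          # the ext2 /boot partition
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt /bin/bash
update-initramfs -u                # inside the chroot: rebuild the initramfs
update-grub                        # and regenerate the grub configuration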

damateem, 07-02-2012, 06:08 PM:
I had a very hard time getting the Grub menu to appear. There must be a very small window for detecting the shift key. Holding it down through the boot didn't work. Repeatedly hitting it at about twice per second didn't work. Increasing the rate to about 4 hits per second got me into it.

Once there, I was able to select an older kernel (2.6.32-39-server). The non-booting kernel was 2.6.32-40-server. 39 booted without any problems.

When I initially setup this system, I couldn't send email from it. It wasn't important to me at the time, so I planned to come back and fix it later. Last week (before the power drop), email suddenly started working on its own. I was surprised because I haven't specifically performed any updates. However, I seem to remember setting up automatic updates, so perhaps an auto update was done that introduced a problem, but it wasn't seen until the reboot that was forced by the power outage.

Next, I'm going to try updating to the latest kernel and see if it has the same problem.

Thanks,
David

frieza, 07-02-2012, 06:24 PM:
IMHO auto updates are dangerous. If you want my opinion, make sure auto updates are off and only have the system tell you there are updates; that way you can choose not to install them during a power failure.

As for a possible future solution for what you went through: unlike other keys, the shift key being held doesn't register as a stuck key, to the best of my knowledge, so you can hold the shift key to get into grub. After that, edit the recovery line (the 'e' key) to say at the end init=/bin/bash, then boot the system using the keys specified at the bottom of the screen. Once booted to a prompt, you would run
Code:

fsck -f {root partition}
(in this state, the root partition should be either not mounted or mounted read-only, so you can safely run an fsck on the drive)

note: the -f flag forces a full check even when the filesystem is marked clean, which is more thorough than merely a standard run of fsck.

then reboot, and hopefully that fixes things

glad things seem to be working for the moment though.

suicidaleggroll, 07-02-2012, 06:32 PM:
Quote:
Originally Posted by damateem: However, I seem to remember setting up automatic updates, so perhaps an auto update was done that introduced a problem, but it wasn't seen until the reboot that was forced by the power outage.
I think this is very likely. Delayed reboots after performing an update can make tracking down errors impossibly difficult. I had a system a while back that wouldn't boot; it turned out to be caused by an update I had done 6 MONTHS earlier, and the system had simply never been restarted afterward.
damateem, 07-04-2012, 10:18 AM:
I discovered the root cause of the problem. When I attempted the update, I found that the boot partition was full. So I suspect that caused issues for the auto update, but they went undetected until the reboot.

I next tried to purge old kernels using the instructions at

http://www.liberiangeek.net/2011/11/...neiric-ocelot/

but that failed because a previous install had not completed, and it couldn't complete because of the full partition. So I had no choice but to manually rm the oldest kernel and its associated files. With that done, the command

apt-get -f install

got far enough that I could then purge the unwanted kernels. Finally,

sudo apt-get update
sudo apt-get upgrade

brought everything up to date.

I will be deactivating the auto updates.

Thanks for all the help!

David
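For reference, the cleanup David describes can be done along these lines on Ubuntu (a sketch; the kernel version shown is only an example, and the manual rm is the same last-resort step he mentions):

dpkg -l 'linux-image-*'                          # list installed kernels
sudo rm /boot/*-2.6.32-38-server                 # last resort when /boot is completely full
sudo apt-get -f install                          # finish the interrupted install
sudo apt-get purge linux-image-2.6.32-38-server  # then purge old kernels properly
sudo apt-get update && sudo apt-get upgrade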

[Jan 29, 2019] How to Setup DRBD to Replicate Storage on Two CentOS 7 Servers by Aaron Kili

Notable quotes:
"... It mirrors the content of block devices such as hard disks, partitions, logical volumes etc. between servers. ..."
"... It involves a copy of data on two storage devices, such that if one fails, the data on the other can be used. ..."
"... Originally, DRBD was mainly used in high availability (HA) computer clusters, however, starting with version 9, it can be used to deploy cloud storage solutions. In this article, we will show how to install DRBD in CentOS and briefly demonstrate how to use it to replicate storage (partition) on two servers. ..."
www.thegeekdiary.com
DRBD (which stands for Distributed Replicated Block Device) is a distributed, flexible and versatile replicated storage solution for Linux. It mirrors the content of block devices such as hard disks, partitions, logical volumes etc. between servers.

It involves a copy of data on two storage devices, such that if one fails, the data on the other can be used.

You can think of it somewhat like a network RAID 1 configuration with the disks mirrored across servers. However, it operates in a very different way from RAID and even network RAID.

Originally, DRBD was mainly used in high availability (HA) computer clusters, however, starting with version 9, it can be used to deploy cloud storage solutions. In this article, we will show how to install DRBD in CentOS and briefly demonstrate how to use it to replicate storage (partition) on two servers.

... ... ...

For the purpose of this article, we are using a two-node cluster for this setup.

... ... ...

Reference: The DRBD User's Guide.
Summary
Jan 19, 2019 | www.tecmint.com

DRBD is extremely flexible and versatile, which makes it a storage replication solution suitable for adding HA to just about any application. In this article, we have shown how to install DRBD in CentOS 7 and briefly demonstrated how to use it to replicate storage. Feel free to share your thoughts with us via the feedback form below.
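The configuration the article walks through boils down to a resource file like the following on both nodes (an illustrative sketch, not the article's exact listing; the hostnames, addresses and backing partition are assumptions):

# /etc/drbd.d/r0.res -- hypothetical example
resource r0 {
    device    /dev/drbd0;
    disk      /dev/sdb1;            # the partition to replicate
    meta-disk internal;
    on node1 {
        address 192.168.56.101:7788;
    }
    on node2 {
        address 192.168.56.102:7788;
    }
}

After copying the file to both nodes, drbdadm create-md r0 and drbdadm up r0 bring the resource up, and drbdadm primary --force r0 on the chosen node starts the initial sync.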

[Jan 29, 2019] mc^2 is the first version of Midnight Commander that supports Lua by mooffie

Highly recommended!
That was three years ago. No progress so far on merging it into the mainstream version. Sad but typical...
Links are now broken as the site was migrated to www.geek.co.il. Valid link is Getting started
Oct 15, 2015 | n2.nabble.com

[ANN] mc^2

mc^2 is a fork of Midnight Commander with Lua support:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/

...but let's skip the verbiage and go directly to the screenshots:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/guide/SCREENSHOTS.md.html

Now, I assume most of you here aren't users of MC.

So I won't bore you with a description of how Lua makes MC a better file-manager. Instead, I'll just list some details that may interest
any developer who works on extending some application.

And, as you'll shortly see, you may find mc^2 useful even if you aren't a user of MC!

So, some interesting details:

* Programmer Goodies

- You can restart the Lua system from within MC.

- Since MC has a built-in editor, you can edit Lua code right there and restart Lua. So it's somewhat like a live IDE:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/images/screenshots/game.png

- It comes with programmer utilities: regular expressions; global scope protected by default; good pretty printer for Lua tables; calculator where you can type Lua expressions; the editor can "lint" Lua code (and flag uses of global variables).

- It installs a /usr/bin/mcscript executable letting you use all the goodies from "outside" MC:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/guide/60-standalone.md.html

* User Interface programming (UI)

- You can program a UI (user interface) very easily. The API is fun
yet powerful. It has some DOM/JavaScript borrowings in it: you can
attach functions to events like on_click, on_change, etc. The API
uses "properties", so your code tends to be short and readable:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/guide/40-user-interface.md.html

- The UI has a "canvas" object letting you draw your own stuff. The
system is so fast you can program arcade games. Pacman, Tetris,
Digger, whatever:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/classes/ui.Canvas.html

Need timers in your game? You've got them:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/modules/timer.html

- This UI API is an ideal replacement for utilities like dialog(1).
You can write complex frontends to command-line tools with ease:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/images/screenshots/frontend-scanimage.png

- Thanks to the aforementioned /usr/bin/mcscript, you can run your
games/frontends from "outside" MC:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/images/screenshots/standalone-game.png

* Misc

- You can compile it against Lua 5.1, 5.2, 5.3, or LuaJIT.

- Extensive documentation.

[Jan 29, 2019] hstr -- Bash and zsh shell history suggest box - easily view, navigate, search and manage your command history

This is a quite useful command. An RPM exists for CentOS 7; on other versions you need to build it from source.
Nov 17, 2018 | dvorka.github.io

hstr -- Bash and zsh shell history suggest box - easily view, navigate, search and manage your command history.


Configuration

Get the most out of HSTR by configuring it with:

hstr --show-configuration >> ~/.bashrc

Run hstr --show-configuration to determine what will be appended to your Bash profile. Don't forget to source ~/.bashrc to apply changes.


For more configuration option details, please refer to the project documentation. Check also the configuration examples.

Binding HSTR to Keyboard Shortcut

Bash uses Emacs style keyboard shortcuts by default. There is also Vi mode. Find out how to bind HSTR to a keyboard shortcut based on the style you prefer below.

Check your active Bash keymap with:

bind -v | grep editing-mode
bind -v | grep keymap

To determine the character sequence emitted by a pressed key in the terminal, type Ctrl-v and then press the key. Check your current bindings using:

bind -S
Bash Emacs Keymap (default)

Bind HSTR to a Bash key, e.g. to Ctrl-r:

bind '"\C-r": "\C-ahstr -- \C-j"'

or Ctrl-Alt-r:

bind '"\e\C-r":"\C-ahstr -- \C-j"'

or Ctrl-F12:

bind '"\e[24;5~":"\C-ahstr -- \C-j"'

Bind HSTR to Ctrl-r only if it is an interactive shell:

if [[ $- =~ .*i.* ]]; then bind '"\C-r": "\C-a hstr -- \C-j"'; fi

You can also bind other HSTR commands, like --kill-last-command:

if [[ $- =~ .*i.* ]]; then bind '"\C-xk": "\C-a hstr -k \C-j"'; fi
Bash Vim Keymap

Bind HSTR to a Bash key, e.g. to Ctrl-r:

bind '"\C-r": "\e0ihstr -- \C-j"'
Zsh Emacs Keymap

Bind HSTR to a zsh key, e.g. to Ctrl-r:

bindkey -s "\C-r" "\eqhstr --\n"
Alias

If you want to make running hstr from the command line even easier, define an alias in your ~/.bashrc:

alias hh=hstr

Don't forget to source ~/.bashrc to be able to use the hh command.

Colors

Let HSTR use colors:

export HSTR_CONFIG=hicolor

or ensure black and white mode:

export HSTR_CONFIG=monochromatic
Default History View

To show normal history by default (instead of the metrics-based view, which is the default) use:

export HSTR_CONFIG=raw-history-view

To show favorite commands as the default view use:

export HSTR_CONFIG=favorites-view
Filtering

To use regular expressions based matching:

export HSTR_CONFIG=regexp-matching

To use substring based matching:

export HSTR_CONFIG=substring-matching

To use keyword-based matching, where the order of the substrings doesn't matter (the default):

export HSTR_CONFIG=keywords-matching

Make search case sensitive (insensitive by default):

export HSTR_CONFIG=case-sensitive

Keep duplicates in raw-history-view (duplicate commands are discarded by default):

export HSTR_CONFIG=duplicates
Static favorites

The last selected favorite command is put at the head of the favorite commands list by default. If you want to disable this behavior and make the favorite commands list static, use the following configuration:

export HSTR_CONFIG=static-favorites
Skip favorites comments

If you don't want to show lines starting with # (comments) among favorites, then use the following configuration:

export HSTR_CONFIG=skip-favorites-comments
Blacklist

Skip commands when processing history i.e. make sure that these commands will not be shown in any view:

export HSTR_CONFIG=blacklist

Commands to be skipped are stored in the ~/.hstr_blacklist file, one per line, with a trailing empty line. For instance:

cd
my-private-command
ls
ll
Confirm on Delete

Do not prompt for confirmation when deleting history items:

export HSTR_CONFIG=no-confirm
Verbosity

Show a message when deleting the last command from history:

export HSTR_CONFIG=verbose-kill

Show warnings:

export HSTR_CONFIG=warning

Show debug messages:

export HSTR_CONFIG=debug
Bash History Settings

Use the following Bash settings to get most out of HSTR.

Increase the size of history maintained by BASH - variables defined below increase the number of history items and history file size (default value is 500):

export HISTFILESIZE=10000
export HISTSIZE=${HISTFILESIZE}

Ensure syncing (flushing and reloading) of .bash_history with in-memory history:

export PROMPT_COMMAND="history -a; history -n; ${PROMPT_COMMAND}"

Force appending of in-memory history to .bash_history (instead of overwriting):

shopt -s histappend

Use leading space to hide commands from history:

export HISTCONTROL=ignorespace

Suitable for sensitive information like passwords.
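For example (assuming HISTCONTROL is set as above, the leading space is what keeps the command out of history):

 mysql -u root -p'S3cret!'   # note the leading space; the password never lands in .bash_history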

zsh History Settings

If you use zsh , set HISTFILE environment variable in ~/.zshrc :

export HISTFILE=~/.zsh_history
Examples

More colors with case sensitive search of history:

export HSTR_CONFIG=hicolor,case-sensitive

Favorite commands view in black and white with prompt at the bottom of the screen:

export HSTR_CONFIG=favorites-view,prompt-bottom

Keywords based search in colors with debug mode verbosity:

export HSTR_CONFIG=keywords-matching,hicolor,debug

[Jan 29, 2019] Split string into an array in Bash

May 14, 2012 | stackoverflow.com

Lgn, May 14, 2012 at 15:15

In a Bash script I would like to split a line into pieces and store them in an array.

The line:

Paris, France, Europe

I would like to have them in an array like this:

array[0] = Paris
array[1] = France
array[2] = Europe

I would like to use simple code, the command's speed doesn't matter. How can I do it?

antak, Jun 18, 2018 at 9:22

This is #1 Google hit but there's controversy in the answer because the question unfortunately asks about delimiting on , (comma-space) and not a single character such as comma. If you're only interested in the latter, answers here are easier to follow: stackoverflow.com/questions/918886/ – antak Jun 18 '18 at 9:22

Dennis Williamson, May 14, 2012 at 15:16

IFS=', ' read -r -a array <<< "$string"

Note that the characters in $IFS are treated individually as separators so that in this case fields may be separated by either a comma or a space rather than the sequence of the two characters. Interestingly though, empty fields aren't created when comma-space appears in the input because the space is treated specially.

To access an individual element:

echo "${array[0]}"

To iterate over the elements:

for element in "${array[@]}"
do
    echo "$element"
done

To get both the index and the value:

for index in "${!array[@]}"
do
    echo "$index ${array[index]}"
done

The last example is useful because Bash arrays are sparse. In other words, you can delete an element or add an element and then the indices are not contiguous.

unset "array[1]"
array[42]=Earth

To get the number of elements in an array:

echo "${#array[@]}"

As mentioned above, arrays can be sparse so you shouldn't use the length to get the last element. Here's how you can in Bash 4.2 and later:

echo "${array[-1]}"

in any version of Bash (from somewhere after 2.05b):

echo "${array[@]: -1:1}"

Larger negative offsets select farther from the end of the array. Note the space before the minus sign in the older form. It is required.

l0b0, May 14, 2012 at 15:24

Just use IFS=', ' , then you don't have to remove the spaces separately. Test: IFS=', ' read -a array <<< "Paris, France, Europe"; echo "${array[@]}" – l0b0 May 14 '12 at 15:24

Dennis Williamson, May 14, 2012 at 16:33

@l0b0: Thanks. I don't know what I was thinking. I like to use declare -p array for test output, by the way. – Dennis Williamson May 14 '12 at 16:33

Nathan Hyde, Mar 16, 2013 at 21:09

@Dennis Williamson - Awesome, thorough answer. – Nathan Hyde Mar 16 '13 at 21:09

dsummersl, Aug 9, 2013 at 14:06

MUCH better than multiple cut -f calls! – dsummersl Aug 9 '13 at 14:06

caesarsol, Oct 29, 2015 at 14:45

Warning: the IFS variable means split by one of these characters , so it's not a sequence of chars to split by. IFS=', ' read -a array <<< "a,d r s,w" => ${array[*]} == a d r s w – caesarsol Oct 29 '15 at 14:45

Jim Ho, Mar 14, 2013 at 2:20

Here is a way without setting IFS:
string="1:2:3:4:5"
set -f                      # avoid globbing (expansion of *).
array=(${string//:/ })
for i in "${!array[@]}"
do
    echo "$i=>${array[i]}"
done

The idea is using string replacement:

${string//substring/replacement}

to replace all matches of $substring with white space and then using the substituted string to initialize an array:

(element1 element2 ... elementN)

Note: this answer makes use of the split+glob operator . Thus, to prevent expansion of some characters (such as * ) it is a good idea to pause globbing for this script.

Werner Lehmann, May 4, 2013 at 22:32

Used this approach... until I came across a long string to split. 100% CPU for more than a minute (then I killed it). It's a pity because this method allows to split by a string, not some character in IFS. – Werner Lehmann May 4 '13 at 22:32

Dieter Gribnitz, Sep 2, 2014 at 15:46

WARNING: Just ran into a problem with this approach. If you have an element named * you will get all the elements of your cwd as well. thus string="1:2:3:4:*" will give some unexpected and possibly dangerous results depending on your implementation. Did not get the same error with (IFS=', ' read -a array <<< "$string") and this one seems safe to use. – Dieter Gribnitz Sep 2 '14 at 15:46

akostadinov, Nov 6, 2014 at 14:31

not reliable for many kinds of values, use with care – akostadinov Nov 6 '14 at 14:31

Andrew White, Jun 1, 2016 at 11:44

quoting ${string//:/ } prevents shell expansion – Andrew White Jun 1 '16 at 11:44

Mark Thomson, Jun 5, 2016 at 20:44

I had to use the following on OSX: array=(${string//:/ }) – Mark Thomson Jun 5 '16 at 20:44

bgoldst, Jul 19, 2017 at 21:20

All of the answers to this question are wrong in one way or another.

Wrong answer #1

IFS=', ' read -r -a array <<< "$string"

1: This is a misuse of $IFS . The value of the $IFS variable is not taken as a single variable-length string separator, rather it is taken as a set of single-character string separators, where each field that read splits off from the input line can be terminated by any character in the set (comma or space, in this example).

Actually, for the real sticklers out there, the full meaning of $IFS is slightly more involved. From the bash manual :

The shell treats each character of IFS as a delimiter, and splits the results of the other expansions into words using these characters as field terminators. If IFS is unset, or its value is exactly <space><tab><newline> , the default, then sequences of <space> , <tab> , and <newline> at the beginning and end of the results of the previous expansions are ignored, and any sequence of IFS characters not at the beginning or end serves to delimit words. If IFS has a value other than the default, then sequences of the whitespace characters <space> , <tab> , and <newline> are ignored at the beginning and end of the word, as long as the whitespace character is in the value of IFS (an IFS whitespace character). Any character in IFS that is not IFS whitespace, along with any adjacent IFS whitespace characters, delimits a field. A sequence of IFS whitespace characters is also treated as a delimiter. If the value of IFS is null, no word splitting occurs.

Basically, for non-default non-null values of $IFS , fields can be separated with either (1) a sequence of one or more characters that are all from the set of "IFS whitespace characters" (that is, whichever of <space> , <tab> , and <newline> ("newline" meaning line feed (LF) ) are present anywhere in $IFS ), or (2) any non-"IFS whitespace character" that's present in $IFS along with whatever "IFS whitespace characters" surround it in the input line.

For the OP, it's possible that the second separation mode I described in the previous paragraph is exactly what he wants for his input string, but we can be pretty confident that the first separation mode I described is not correct at all. For example, what if his input string was 'Los Angeles, United States, North America' ?

IFS=', ' read -ra a <<<'Los Angeles, United States, North America'; declare -p a;
## declare -a a=([0]="Los" [1]="Angeles" [2]="United" [3]="States" [4]="North" [5]="America")

2: Even if you were to use this solution with a single-character separator (such as a comma by itself, that is, with no following space or other baggage), if the value of the $string variable happens to contain any LFs, then read will stop processing once it encounters the first LF. The read builtin only processes one line per invocation. This is true even if you are piping or redirecting input only to the read statement, as we are doing in this example with the here-string mechanism, and thus unprocessed input is guaranteed to be lost. The code that powers the read builtin has no knowledge of the data flow within its containing command structure.

You could argue that this is unlikely to cause a problem, but still, it's a subtle hazard that should be avoided if possible. It is caused by the fact that the read builtin actually does two levels of input splitting: first into lines, then into fields. Since the OP only wants one level of splitting, this usage of the read builtin is not appropriate, and we should avoid it.

3: A non-obvious potential issue with this solution is that read always drops the trailing field if it is empty, although it preserves empty fields otherwise. Here's a demo:

string=', , a, , b, c, , , '; IFS=', ' read -ra a <<<"$string"; declare -p a;
## declare -a a=([0]="" [1]="" [2]="a" [3]="" [4]="b" [5]="c" [6]="" [7]="")

Maybe the OP wouldn't care about this, but it's still a limitation worth knowing about. It reduces the robustness and generality of the solution.

This problem can be solved by appending a dummy trailing delimiter to the input string just prior to feeding it to read , as I will demonstrate later.


Wrong answer #2

string="1:2:3:4:5"
set -f                     # avoid globbing (expansion of *).
array=(${string//:/ })

Similar idea:

t="one,two,three"
a=($(echo $t | tr ',' "\n"))

(Note: I added the missing parentheses around the command substitution which the answerer seems to have omitted.)

Similar idea:

string="1,2,3,4"
array=(`echo $string | sed 's/,/\n/g'`)

These solutions leverage word splitting in an array assignment to split the string into fields. Funnily enough, just like read , general word splitting also uses the $IFS special variable, although in this case it is implied that it is set to its default value of <space><tab><newline> , and therefore any sequence of one or more IFS characters (which are all whitespace characters now) is considered to be a field delimiter.

This solves the problem of two levels of splitting committed by read , since word splitting by itself constitutes only one level of splitting. But just as before, the problem here is that the individual fields in the input string can already contain $IFS characters, and thus they would be improperly split during the word splitting operation. This happens to not be the case for any of the sample input strings provided by these answerers (how convenient...), but of course that doesn't change the fact that any code base that used this idiom would then run the risk of blowing up if this assumption were ever violated at some point down the line. Once again, consider my counterexample of 'Los Angeles, United States, North America' (or 'Los Angeles:United States:North America' ).

Also, word splitting is normally followed by filename expansion ( aka pathname expansion aka globbing), which, if done, would potentially corrupt words containing the characters * , ? , or [ followed by ] (and, if extglob is set, parenthesized fragments preceded by ? , * , + , @ , or ! ) by matching them against file system objects and expanding the words ("globs") accordingly. The first of these three answerers has cleverly undercut this problem by running set -f beforehand to disable globbing. Technically this works (although you should probably add set +f afterward to reenable globbing for subsequent code which may depend on it), but it's undesirable to have to mess with global shell settings in order to hack a basic string-to-array parsing operation in local code.

Another issue with this answer is that all empty fields will be lost. This may or may not be a problem, depending on the application.

Note: If you're going to use this solution, it's better to use the ${string//:/ } "pattern substitution" form of parameter expansion , rather than going to the trouble of invoking a command substitution (which forks the shell), starting up a pipeline, and running an external executable ( tr or sed ), since parameter expansion is purely a shell-internal operation. (Also, for the tr and sed solutions, the input variable should be double-quoted inside the command substitution; otherwise word splitting would take effect in the echo command and potentially mess with the field values. Also, the $(...) form of command substitution is preferable to the old `...` form since it simplifies nesting of command substitutions and allows for better syntax highlighting by text editors.)


Wrong answer #3

str="a, b, c, d"  # assuming there is a space after ',' as in Q
arr=(${str//,/})  # delete all occurrences of ','

This answer is almost the same as #2 . The difference is that the answerer has made the assumption that the fields are delimited by two characters, one of which being represented in the default $IFS , and the other not. He has solved this rather specific case by removing the non-IFS-represented character using a pattern substitution expansion and then using word splitting to split the fields on the surviving IFS-represented delimiter character.

This is not a very generic solution. Furthermore, it can be argued that the comma is really the "primary" delimiter character here, and that stripping it and then depending on the space character for field splitting is simply wrong. Once again, consider my counterexample: 'Los Angeles, United States, North America' .

Also, again, filename expansion could corrupt the expanded words, but this can be prevented by temporarily disabling globbing for the assignment with set -f and then set +f .

Also, again, all empty fields will be lost, which may or may not be a problem depending on the application.


Wrong answer #4

string='first line
second line
third line'

oldIFS="$IFS"
IFS='
'
IFS=${IFS:0:1} # this is useful to format your code with tabs
lines=( $string )
IFS="$oldIFS"

This is similar to #2 and #3 in that it uses word splitting to get the job done, only now the code explicitly sets $IFS to contain only the single-character field delimiter present in the input string. It should be repeated that this cannot work for multicharacter field delimiters such as the OP's comma-space delimiter. But for a single-character delimiter like the LF used in this example, it actually comes close to being perfect. The fields cannot be unintentionally split in the middle as we saw with previous wrong answers, and there is only one level of splitting, as required.

One problem is that filename expansion will corrupt affected words as described earlier, although once again this can be solved by wrapping the critical statement in set -f and set +f .

Another potential problem is that, since LF qualifies as an "IFS whitespace character" as defined earlier, all empty fields will be lost, just as in #2 and #3 . This would of course not be a problem if the delimiter happens to be a non-"IFS whitespace character", and depending on the application it may not matter anyway, but it does vitiate the generality of the solution.

So, to sum up, assuming you have a one-character delimiter, and it is either a non-"IFS whitespace character" or you don't care about empty fields, and you wrap the critical statement in set -f and set +f , then this solution works, but otherwise not.

(Also, for information's sake, assigning a LF to a variable in bash can be done more easily with the $'...' syntax, e.g. IFS=$'\n'; .)


Wrong answer #5

countries='Paris, France, Europe'
OIFS="$IFS"
IFS=', ' array=($countries)
IFS="$OIFS"

Similar idea:

IFS=', ' eval 'array=($string)'

This solution is effectively a cross between #1 (in that it sets $IFS to comma-space) and #2-4 (in that it uses word splitting to split the string into fields). Because of this, it suffers from most of the problems that afflict all of the above wrong answers, sort of like the worst of all worlds.

Also, regarding the second variant, it may seem like the eval call is completely unnecessary, since its argument is a single-quoted string literal, and therefore is statically known. But there's actually a very non-obvious benefit to using eval in this way. Normally, when you run a simple command which consists of a variable assignment only , meaning without an actual command word following it, the assignment takes effect in the shell environment:

IFS=', '; ## changes $IFS in the shell environment

This is true even if the simple command involves multiple variable assignments; again, as long as there's no command word, all variable assignments affect the shell environment:

IFS=', ' array=($countries); ## changes both $IFS and $array in the shell environment

But, if the variable assignment is attached to a command name (I like to call this a "prefix assignment") then it does not affect the shell environment, and instead only affects the environment of the executed command, regardless whether it is a builtin or external:

IFS=', ' :; ## : is a builtin command, the $IFS assignment does not outlive it
IFS=', ' env; ## env is an external command, the $IFS assignment does not outlive it

Relevant quote from the bash manual :

If no command name results, the variable assignments affect the current shell environment. Otherwise, the variables are added to the environment of the executed command and do not affect the current shell environment.

It is possible to exploit this feature of variable assignment to change $IFS only temporarily, which allows us to avoid the whole save-and-restore gambit like that which is being done with the $OIFS variable in the first variant. But the challenge we face here is that the command we need to run is itself a mere variable assignment, and hence it would not involve a command word to make the $IFS assignment temporary. You might think to yourself, well why not just add a no-op command word to the statement like the : builtin to make the $IFS assignment temporary? This does not work because it would then make the $array assignment temporary as well:

IFS=', ' array=($countries) :; ## fails; new $array value never escapes the : command

So, we're effectively at an impasse, a bit of a catch-22. But, when eval runs its code, it runs it in the shell environment, as if it was normal, static source code, and therefore we can run the $array assignment inside the eval argument to have it take effect in the shell environment, while the $IFS prefix assignment that is prefixed to the eval command will not outlive the eval command. This is exactly the trick that is being used in the second variant of this solution:

IFS=', ' eval 'array=($string)'; ## $IFS does not outlive the eval command, but $array does

So, as you can see, it's actually quite a clever trick, and accomplishes exactly what is required (at least with respect to assignment effectation) in a rather non-obvious way. I'm actually not against this trick in general, despite the involvement of eval ; just be careful to single-quote the argument string to guard against security threats.

But again, because of the "worst of all worlds" agglomeration of problems, this is still a wrong answer to the OP's requirement.


Wrong answer #6

IFS=', '; array=(Paris, France, Europe)

IFS=' ';declare -a array=(Paris France Europe)

Um... what? The OP has a string variable that needs to be parsed into an array. This "answer" starts with the verbatim contents of the input string pasted into an array literal. I guess that's one way to do it.

It looks like the answerer may have assumed that the $IFS variable affects all bash parsing in all contexts, which is not true. From the bash manual:

IFS The Internal Field Separator that is used for word splitting after expansion and to split lines into words with the read builtin command. The default value is <space><tab><newline> .

So the $IFS special variable is actually only used in two contexts: (1) word splitting that is performed after expansion (meaning not when parsing bash source code) and (2) for splitting input lines into words by the read builtin.

Let me try to make this clearer. I think it might be good to draw a distinction between parsing and execution . Bash must first parse the source code, which obviously is a parsing event, and then later it executes the code, which is when expansion comes into the picture. Expansion is really an execution event. Furthermore, I take issue with the description of the $IFS variable that I just quoted above; rather than saying that word splitting is performed after expansion , I would say that word splitting is performed during expansion, or, perhaps even more precisely, word splitting is part of the expansion process. The phrase "word splitting" refers only to this step of expansion; it should never be used to refer to the parsing of bash source code, although unfortunately the docs do seem to throw around the words "split" and "words" a lot. Here's a relevant excerpt from the linux.die.net version of the bash manual:

Expansion is performed on the command line after it has been split into words. There are seven kinds of expansion performed: brace expansion , tilde expansion , parameter and variable expansion , command substitution , arithmetic expansion , word splitting , and pathname expansion .

The order of expansions is: brace expansion; tilde expansion, parameter and variable expansion, arithmetic expansion, and command substitution (done in a left-to-right fashion); word splitting; and pathname expansion.

You could argue the GNU version of the manual does slightly better, since it opts for the word "tokens" instead of "words" in the first sentence of the Expansion section:

Expansion is performed on the command line after it has been split into tokens.

The important point is, $IFS does not change the way bash parses source code. Parsing of bash source code is actually a very complex process that involves recognition of the various elements of shell grammar, such as command sequences, command lists, pipelines, parameter expansions, arithmetic substitutions, and command substitutions. For the most part, the bash parsing process cannot be altered by user-level actions like variable assignments (actually, there are some minor exceptions to this rule; for example, see the various compatxx shell settings , which can change certain aspects of parsing behavior on-the-fly). The upstream "words"/"tokens" that result from this complex parsing process are then expanded according to the general process of "expansion" as broken down in the above documentation excerpts, where word splitting of the expanded (expanding?) text into downstream words is simply one step of that process. Word splitting only touches text that has been spit out of a preceding expansion step; it does not affect literal text that was parsed right off the source bytestream.
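A two-line demonstration of that distinction (a sketch consistent with the quoted manual text):

IFS=','
a=(x,y,z);         declare -p a   ## declare -a a=([0]="x,y,z") -- a literal token is never word-split
s='x,y,z'; b=($s); declare -p b   ## declare -a b=([0]="x" [1]="y" [2]="z") -- the expansion result is split on $IFS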


Wrong answer #7

string='first line
        second line
        third line'

while read -r line; do lines+=("$line"); done <<<"$string"

This is one of the best solutions. Notice that we're back to using read . Didn't I say earlier that read is inappropriate because it performs two levels of splitting, when we only need one? The trick here is that you can call read in such a way that it effectively only does one level of splitting, specifically by splitting off only one field per invocation, which necessitates the cost of having to call it repeatedly in a loop. It's a bit of a sleight of hand, but it works.

But there are problems. First: When you provide at least one NAME argument to read , it automatically ignores leading and trailing whitespace in each field that is split off from the input string. This occurs whether $IFS is set to its default value or not, as described earlier in this post. Now, the OP may not care about this for his specific use-case, and in fact, it may be a desirable feature of the parsing behavior. But not everyone who wants to parse a string into fields will want this. There is a solution, however: A somewhat non-obvious usage of read is to pass zero NAME arguments. In this case, read will store the entire input line that it gets from the input stream in a variable named $REPLY , and, as a bonus, it does not strip leading and trailing whitespace from the value. This is a very robust usage of read which I've exploited frequently in my shell programming career. Here's a demonstration of the difference in behavior:

string=$'  a  b  \n  c  d  \n  e  f  '; ## input string

a=(); while read -r line; do a+=("$line"); done <<<"$string"; declare -p a;
## declare -a a=([0]="a  b" [1]="c  d" [2]="e  f") ## read trimmed surrounding whitespace

a=(); while read -r; do a+=("$REPLY"); done <<<"$string"; declare -p a;
## declare -a a=([0]="  a  b  " [1]="  c  d  " [2]="  e  f  ") ## no trimming

The second issue with this solution is that it does not actually address the case of a custom field separator, such as the OP's comma-space. As before, multicharacter separators are not supported, which is an unfortunate limitation of this solution. We could try to at least split on comma by specifying the separator to the -d option, but look what happens:

string='Paris, France, Europe';
a=(); while read -rd,; do a+=("$REPLY"); done <<<"$string"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France")

Predictably, the unaccounted surrounding whitespace got pulled into the field values, and hence this would have to be corrected subsequently through trimming operations (this could also be done directly in the while-loop). But there's another obvious error: Europe is missing! What happened to it? The answer is that read returns a failing return code if it hits end-of-file (in this case we can call it end-of-string) without encountering a final field terminator on the final field. This causes the while-loop to break prematurely and we lose the final field.

Technically this same error afflicted the previous examples as well; the difference there is that the field separator was taken to be LF, which is the default when you don't specify the -d option, and the <<< ("here-string") mechanism automatically appends a LF to the string just before it feeds it as input to the command. Hence, in those cases, we sort of accidentally solved the problem of a dropped final field by unwittingly appending an additional dummy terminator to the input. Let's call this solution the "dummy-terminator" solution. We can apply the dummy-terminator solution manually for any custom delimiter by concatenating it against the input string ourselves when instantiating it in the here-string:

a=(); while read -rd,; do a+=("$REPLY"); done <<<"$string,"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=" Europe")

There, problem solved. Another solution is to only break the while-loop if both (1) read returned failure and (2) $REPLY is empty, meaning read was not able to read any characters prior to hitting end-of-file. Demo:

a=(); while read -rd,|| [[ -n "$REPLY" ]]; do a+=("$REPLY"); done <<<"$string"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=$' Europe\n')

This approach also reveals the secretive LF that automatically gets appended to the here-string by the <<< redirection operator. It could of course be stripped off separately through an explicit trimming operation as described a moment ago, but obviously the manual dummy-terminator approach solves it directly, so we could just go with that. The manual dummy-terminator solution is actually quite convenient in that it solves both of these two problems (the dropped-final-field problem and the appended-LF problem) in one go.

So, overall, this is quite a powerful solution. It's only remaining weakness is a lack of support for multicharacter delimiters, which I will address later.


Wrong answer #8

string='first line
        second line
        third line'

readarray -t lines <<<"$string"

(This is actually from the same post as #7 ; the answerer provided two solutions in the same post.)

The readarray builtin, which is a synonym for mapfile , is ideal. It's a builtin command which parses a bytestream into an array variable in one shot; no messing with loops, conditionals, substitutions, or anything else. And it doesn't surreptitiously strip any whitespace from the input string. And (if -O is not given) it conveniently clears the target array before assigning to it. But it's still not perfect, hence my criticism of it as a "wrong answer".

First, just to get this out of the way, note that, just like the behavior of read when doing field-parsing, readarray drops the trailing field if it is empty. Again, this is probably not a concern for the OP, but it could be for some use-cases. I'll come back to this in a moment.

Second, as before, it does not support multicharacter delimiters. I'll give a fix for this in a moment as well.

Third, the solution as written does not parse the OP's input string, and in fact, it cannot be used as-is to parse it. I'll expand on this momentarily as well.

For the above reasons, I still consider this to be a "wrong answer" to the OP's question. Below I'll give what I consider to be the right answer.


Right answer

Here's a naïve attempt to make #8 work by just specifying the -d option:

string='Paris, France, Europe';
readarray -td, a <<<"$string"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=$' Europe\n')

We see the result is identical to the result we got from the double-conditional approach of the looping read solution discussed in #7 . We can almost solve this with the manual dummy-terminator trick:

readarray -td, a <<<"$string,"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=" Europe" [3]=$'\n')

The problem here is that readarray preserved the trailing field, since the <<< redirection operator appended the LF to the input string, and therefore the trailing field was not empty (otherwise it would've been dropped). We can take care of this by explicitly unsetting the final array element after-the-fact:

readarray -td, a <<<"$string,"; unset 'a[-1]'; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=" Europe")

The only two problems that remain, which are actually related, are (1) the extraneous whitespace that needs to be trimmed, and (2) the lack of support for multicharacter delimiters.

The whitespace could of course be trimmed afterward (for example, see How to trim whitespace from a Bash variable?). But if we can hack a multicharacter delimiter, then that would solve both problems in one shot.
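For instance, here is one common pure-bash trimming idiom (a sketch; the linked question covers several variants), applied to each element in place after the split:

for i in "${!a[@]}"; do
    a[i]="${a[i]#"${a[i]%%[![:space:]]*}"}";    ## strip leading whitespace
    a[i]="${a[i]%"${a[i]##*[![:space:]]}"}";    ## strip trailing whitespace
done;
declare -p a;
## declare -a a=([0]="Paris" [1]="France" [2]="Europe")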

Unfortunately, there's no direct way to get a multicharacter delimiter to work. The best solution I've thought of is to preprocess the input string to replace the multicharacter delimiter with a single-character delimiter that will be guaranteed not to collide with the contents of the input string. The only character that has this guarantee is the NUL byte. This is because, in bash (though not in zsh, incidentally), variables cannot contain the NUL byte. This preprocessing step can be done inline in a process substitution. Here's how to do it using awk:

readarray -td '' a < <(awk '{ gsub(/, /,"\0"); print; }' <<<"$string, "); unset 'a[-1]';
declare -p a;
## declare -a a=([0]="Paris" [1]="France" [2]="Europe")

There, finally! This solution will not erroneously split fields in the middle, will not cut out prematurely, will not drop empty fields, will not corrupt itself on filename expansions, will not automatically strip leading and trailing whitespace, will not leave a stowaway LF on the end, does not require loops, and does not settle for a single-character delimiter.


Trimming solution

Lastly, I wanted to demonstrate my own fairly intricate trimming solution using the obscure -C callback option of readarray . Unfortunately, I've run out of room against Stack Overflow's draconian 30,000 character post limit, so I won't be able to explain it. I'll leave that as an exercise for the reader.

function mfcb { local val="$4"; "$1"; eval "$2[$3]=\$val;"; };    ## callback: $1=modifier func, $2=target array name, $3=index, $4=field value
function val_ltrim { if [[ "$val" =~ ^[[:space:]]+ ]]; then val="${val:${#BASH_REMATCH[0]}}"; fi; };    ## strip leading whitespace from $val
function val_rtrim { if [[ "$val" =~ [[:space:]]+$ ]]; then val="${val:0:${#val}-${#BASH_REMATCH[0]}}"; fi; };    ## strip trailing whitespace from $val
function val_trim { val_ltrim; val_rtrim; };    ## trim both ends
readarray -c1 -C 'mfcb val_trim a' -td, <<<"$string,"; unset 'a[-1]'; declare -p a;    ## -c1 -C runs the callback once per field; readarray's own default array (MAPFILE) is ignored
## declare -a a=([0]="Paris" [1]="France" [2]="Europe")

fbicknel ,Aug 18, 2017 at 15:57

It may also be helpful to note (though understandably you had no room to do so) that the -d option to readarray first appears in Bash 4.4. – fbicknel Aug 18 '17 at 15:57
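(A small illustrative guard, not from the original comment: a script that must also run on pre-4.4 bash could test the version and fall back to the read loop.)

if (( BASH_VERSINFO[0] > 4 || (BASH_VERSINFO[0] == 4 && BASH_VERSINFO[1] >= 4) )); then
    readarray -td, a <<<"$string,"; unset 'a[-1]';
else
    a=(); while read -rd,; do a+=("$REPLY"); done <<<"$string,";
fi;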

Cyril Duchon-Doris ,Nov 3, 2017 at 9:16

You should add a "TL;DR : scroll 3 pages to see the right solution at the end of my answer" – Cyril Duchon-Doris Nov 3 '17 at 9:16

dawg ,Nov 26, 2017 at 22:28

Great answer (+1). If you change your awk to awk '{ gsub(/,[ ]+|$/,"\0"); print }' and eliminate that concatenation of the final ", " then you don't have to go through the gymnastics of eliminating the final record. So: readarray -td '' a < <(awk '{ gsub(/,[ ]+/,"\0"); print; }' <<<"$string") on Bash that supports readarray. Note your method is Bash 4.4+ I think because of the -d in readarray – dawg Nov 26 '17 at 22:28

datUser ,Feb 22, 2018 at 14:54

Looks like readarray is not an available builtin on OSX. – datUser Feb 22 '18 at 14:54

bgoldst ,Feb 23, 2018 at 3:37

@datUser That's unfortunate. Your version of bash must be too old for readarray . In this case, you can use the second-best solution built on read . I'm referring to this: a=(); while read -rd,; do a+=("$REPLY"); done <<<"$string,"; (with the awk substitution if you need multicharacter delimiter support). Let me know if you run into any problems; I'm pretty sure this solution should work on fairly old versions of bash, back to version 2-something, released like two decades ago. – bgoldst Feb 23 '18 at 3:37

Jmoney38 ,Jul 14, 2015 at 11:54

t="one,two,three"
a=($(echo "$t" | tr ',' '\n'))
echo "${a[2]}"

Prints three

shrimpwagon ,Oct 16, 2015 at 20:04

I actually prefer this approach. Simple. – shrimpwagon Oct 16 '15 at 20:04

Ben ,Oct 31, 2015 at 3:11

I copied and pasted this and it did not work with echo, but did work when I used it in a for loop. – Ben Oct 31 '15 at 3:11

Pinaki Mukherjee ,Nov 9, 2015 at 20:22

This is the simplest approach. thanks – Pinaki Mukherjee Nov 9 '15 at 20:22

abalter ,Aug 30, 2016 at 5:13

This does not work as stated. @Jmoney38 or shrimpwagon if you can paste this in a terminal and get the desired output, please paste the result here. – abalter Aug 30 '16 at 5:13

leaf ,Jul 17, 2017 at 16:28

@abalter Works for me with a=($(echo $t | tr ',' "\n")). Same result with a=($(echo $t | tr ',' ' ')). – leaf Jul 17 '17 at 16:28

Luca Borrione ,Nov 2, 2012 at 13:44

Sometimes the method described in the accepted answer didn't work for me, especially if the separator is a carriage return.
In those cases I solved it this way:
string='first line
second line
third line'

oldIFS="$IFS"
IFS='
'
IFS=${IFS:0:1} # this is useful to format your code with tabs
lines=( $string )
IFS="$oldIFS"

for line in "${lines[@]}"
    do
        echo "--> $line"
done

Stefan van den Akker ,Feb 9, 2015 at 16:52

+1 This completely worked for me. I needed to put multiple strings, divided by a newline, into an array, and read -a arr <<< "$strings" did not work with IFS=$'\n' . – Stefan van den Akker Feb 9 '15 at 16:52

Stefan van den Akker ,Feb 10, 2015 at 13:49

Here is the answer to make the accepted answer work when the delimiter is a newline . – Stefan van den Akker Feb 10 '15 at 13:49

,Jul 24, 2015 at 21:24

The accepted answer works for values in one line.
If the variable has several lines:
string='first line
        second line
        third line'

We need a very different command to get all lines:

while read -r line; do lines+=("$line"); done <<<"$string"

Or the much simpler bash readarray :

readarray -t lines <<<"$string"

Printing all lines is very easy taking advantage of a printf feature:

printf ">[%s]\n" "${lines[@]}"

>[first line]
>[        second line]
>[        third line]

Mayhem ,Dec 31, 2015 at 3:13

While not every solution works for every situation, your mention of readarray... replaced my last two hours with 5 minutes... you got my vote – Mayhem Dec 31 '15 at 3:13

Derek 朕會功夫 ,Mar 23, 2018 at 19:14

readarray is the right answer. – Derek 朕會功夫 Mar 23 '18 at 19:14

ssanch ,Jun 3, 2016 at 15:24

This is similar to the approach by Jmoney38, but using sed:
string="1,2,3,4"
array=(`echo $string | sed 's/,/\n/g'`)
echo ${array[0]}

Prints 1

dawg ,Nov 26, 2017 at 19:59

The key to splitting your string into an array is the multi-character delimiter ", ". Any solution using IFS for multi-character delimiters is inherently wrong, since IFS is a set of those characters, not a string.

If you assign IFS=", " then the string will break on EITHER "," OR " " or any combination of them, which is not an accurate representation of the two-character delimiter ", ".
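For example (an illustrative demo of the point above, not from the original post), with IFS=", " a field containing a space gets shredded too:

s='Hello, big wide world';
IFS=', ' read -ra parts <<<"$s"; declare -p parts;
## declare -a parts=([0]="Hello" [1]="big" [2]="wide" [3]="world")

"big wide world" was meant to be a single field, but it breaks on the spaces because IFS is treated as a set of single-character delimiters.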

You can use awk or sed to split the string, with process substitution:

#!/bin/bash

str="Paris, France, Europe"
array=()
while read -r -d $'\0' each; do   # use a NUL terminated field separator 
    array+=("$each")
done < <(printf "%s" "$str" | awk '{ gsub(/,[ ]+|$/,"\0"); print }')
declare -p array
# declare -a array=([0]="Paris" [1]="France" [2]="Europe") output

It is more efficient to use a regex directly in Bash:

#!/bin/bash

str="Paris, France, Europe"

array=()
while [[ $str =~ ([^,]+)(,[ ]+|$) ]]; do
    array+=("${BASH_REMATCH[1]}")   # capture the field
    i=${#BASH_REMATCH}              # length of field + delimiter
    str=${str:i}                    # advance the string by that length
done                                # the loop deletes $str, so make a copy if needed

declare -p array
# declare -a array=([0]="Paris" [1]="France" [2]="Europe") output...

With the second form, there is no subshell and it will be inherently faster.


Edit by bgoldst: Here are some benchmarks comparing my readarray solution to dawg's regex solution, and I also included the read solution for the heck of it (note: I slightly modified the regex solution for greater harmony with my solution) (also see my comments below the post):

## competitors
function c_readarray { readarray -td '' a < <(awk '{ gsub(/, /,"\0"); print; };' <<<"$1, "); unset 'a[-1]'; };
function c_read { a=(); local REPLY=''; while read -r -d ''; do a+=("$REPLY"); done < <(awk '{ gsub(/, /,"\0"); print; };' <<<"$1, "); };
function c_regex { a=(); local s="$1, "; while [[ $s =~ ([^,]+),\  ]]; do a+=("${BASH_REMATCH[1]}"); s=${s:${#BASH_REMATCH}}; done; };

## helper functions
function rep {
    local -i i=-1;
    for ((i = 0; i<$1; ++i)); do
        printf %s "$2";
    done;
}; ## end rep()

function testAll {
    local funcs=();
    local args=();
    local func='';
    local -i rc=-1;
    while [[ "$1" != ':' ]]; do
        func="$1";
        if [[ ! "$func" =~ ^[_a-zA-Z][_a-zA-Z0-9]*$ ]]; then
            echo "bad function name: $func" >&2;
            return 2;
        fi;
        funcs+=("$func");
        shift;
    done;
    shift;
    args=("$@");
    for func in "${funcs[@]}"; do
        echo -n "$func ";
        { time $func "${args[@]}" >/dev/null 2>&1; } 2>&1| tr '\n' '/';
        rc=${PIPESTATUS[0]}; if [[ $rc -ne 0 ]]; then echo "[$rc]"; else echo; fi;
    done| column -ts/;
}; ## end testAll()

function makeStringToSplit {
    local -i n=$1; ## number of fields
    if [[ $n -lt 0 ]]; then echo "bad field count: $n" >&2; return 2; fi;
    if [[ $n -eq 0 ]]; then
        echo;
    elif [[ $n -eq 1 ]]; then
        echo 'first field';
    elif [[ "$n" -eq 2 ]]; then
        echo 'first field, last field';
    else
        echo "first field, $(rep $[$1-2] 'mid field, ')last field";
    fi;
}; ## end makeStringToSplit()

function testAll_splitIntoArray {
    local -i n=$1; ## number of fields in input string
    local s='';
    echo "===== $n field$(if [[ $n -ne 1 ]]; then echo 's'; fi;) =====";
    s="$(makeStringToSplit "$n")";
    testAll c_readarray c_read c_regex : "$s";
}; ## end testAll_splitIntoArray()

## results
testAll_splitIntoArray 1;
## ===== 1 field =====
## c_readarray   real  0m0.067s   user 0m0.000s   sys  0m0.000s
## c_read        real  0m0.064s   user 0m0.000s   sys  0m0.000s
## c_regex       real  0m0.000s   user 0m0.000s   sys  0m0.000s
##
testAll_splitIntoArray 10;
## ===== 10 fields =====
## c_readarray   real  0m0.067s   user 0m0.000s   sys  0m0.000s
## c_read        real  0m0.064s   user 0m0.000s   sys  0m0.000s
## c_regex       real  0m0.001s   user 0m0.000s   sys  0m0.000s
##
testAll_splitIntoArray 100;
## ===== 100 fields =====
## c_readarray   real  0m0.069s   user 0m0.000s   sys  0m0.062s
## c_read        real  0m0.065s   user 0m0.000s   sys  0m0.046s
## c_regex       real  0m0.005s   user 0m0.000s   sys  0m0.000s
##
testAll_splitIntoArray 1000;
## ===== 1000 fields =====
## c_readarray   real  0m0.084s   user 0m0.031s   sys  0m0.077s
## c_read        real  0m0.092s   user 0m0.031s   sys  0m0.046s
## c_regex       real  0m0.125s   user 0m0.125s   sys  0m0.000s
##
testAll_splitIntoArray 10000;
## ===== 10000 fields =====
## c_readarray   real  0m0.209s   user 0m0.093s   sys  0m0.108s
## c_read        real  0m0.333s   user 0m0.234s   sys  0m0.109s
## c_regex       real  0m9.095s   user 0m9.078s   sys  0m0.000s
##
testAll_splitIntoArray 100000;
## ===== 100000 fields =====
## c_readarray   real  0m1.460s   user 0m0.326s   sys  0m1.124s
## c_read        real  0m2.780s   user 0m1.686s   sys  0m1.092s
## c_regex       real  17m38.208s   user 15m16.359s   sys  2m19.375s
##

bgoldst ,Nov 27, 2017 at 4:28

Very cool solution! I never thought of using a loop on a regex match, nifty use of $BASH_REMATCH . It works, and does indeed avoid spawning subshells. +1 from me. However, by way of criticism, the regex itself is a little non-ideal, in that it appears you were forced to duplicate part of the delimiter token (specifically the comma) so as to work around the lack of support for non-greedy multipliers (also lookarounds) in ERE ("extended" regex flavor built into bash). This makes it a little less generic and robust. – bgoldst Nov 27 '17 at 4:28

bgoldst ,Nov 27, 2017 at 4:28

Secondly, I did some benchmarking, and although the performance is better than the other solutions for smallish strings, it worsens exponentially due to the repeated string-rebuilding, becoming catastrophic for very large strings. See my edit to your answer. – bgoldst Nov 27 '17 at 4:28

dawg ,Nov 27, 2017 at 4:46

@bgoldst: What a cool benchmark! In defense of the regex, for 10's or 100's of thousands of fields (what the regex is splitting) there would probably be some form of record (like \n delimited text lines) comprising those fields so the catastrophic slow-down would likely not occur. If you have a string with 100,000 fields -- maybe Bash is not ideal ;-) Thanks for the benchmark. I learned a thing or two. – dawg Nov 27 '17 at 4:46

Geoff Lee ,Mar 4, 2016 at 6:02

Try this
IFS=', '; array=(Paris, France, Europe)
for item in ${array[@]}; do echo $item; done

It's simple. If you want, you can also add a declare (and also remove the commas):

IFS=' ';declare -a array=(Paris France Europe)

The IFS assignment is included to undo the one above, but it works without it in a fresh bash instance

MrPotatoHead ,Nov 13, 2018 at 13:19

Pure bash multi-character delimiter solution.

As others have pointed out in this thread, the OP's question gave an example of a comma delimited string to be parsed into an array, but did not indicate if he/she was only interested in comma delimiters, single character delimiters, or multi-character delimiters.

Since Google tends to rank this answer at or near the top of search results, I wanted to provide readers with a strong answer to the question of multiple character delimiters, since that is also mentioned in at least one response.

If you're in search of a solution to a multi-character delimiter problem, I suggest reviewing Mallikarjun M 's post, in particular the response from gniourf_gniourf who provides this elegant pure BASH solution using parameter expansion:

#!/bin/bash
str="LearnABCtoABCSplitABCaABCString"
delimiter=ABC
s=$str$delimiter
array=();
while [[ $s ]]; do
    array+=( "${s%%"$delimiter"*}" );
    s=${s#*"$delimiter"};
done;
declare -p array

Link to cited comment/referenced post

Link to cited question: Howto split a string on a multi-character delimiter in bash?
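For reference, the expected output of the snippet above (my own trace, assuming a reasonably recent bash) is:

declare -a array=([0]="Learn" [1]="to" [2]="Split" [3]="a" [4]="String")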

Eduardo Cuomo ,Dec 19, 2016 at 15:27

Use this:
countries='Paris, France, Europe'
OIFS="$IFS"
IFS=', ' array=($countries)
IFS="$OIFS"

#${array[0]} == Paris
#${array[1]} == France
#${array[2]} == Europe

gniourf_gniourf ,Dec 19, 2016 at 17:22

Bad: subject to word splitting and pathname expansion. Please don't revive old questions with good answers to give bad answers. – gniourf_gniourf Dec 19 '16 at 17:22

Scott Weldon ,Dec 19, 2016 at 18:12

This may be a bad answer, but it is still a valid answer. Flaggers / reviewers: For incorrect answers such as this one, downvote, don't delete! – Scott Weldon Dec 19 '16 at 18:12

George Sovetov ,Dec 26, 2016 at 17:31

@gniourf_gniourf Could you please explain why it is a bad answer? I really don't understand when it fails. – George Sovetov Dec 26 '16 at 17:31

gniourf_gniourf ,Dec 26, 2016 at 18:07

@GeorgeSovetov: As I said, it's subject to word splitting and pathname expansion. More generally, splitting a string into an array as array=( $string ) is a (sadly very common) antipattern: word splitting occurs: string='Prague, Czech Republic, Europe' ; Pathname expansion occurs: string='foo[abcd],bar[efgh]' will fail if you have a file named, e.g., food or barf in your directory. The only valid usage of such a construct is when string is a glob. – gniourf_gniourf Dec 26 '16 at 18:07
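(An illustrative demo of the word-splitting failure described above, using the same example string:)

string='Prague, Czech Republic, Europe';
IFS=', ' arr=($string); declare -p arr;
## declare -a arr=([0]="Prague" [1]="Czech" [2]="Republic" [3]="Europe")

"Czech Republic" is torn into two fields because the space in IFS splits it, exactly as gniourf_gniourf warns.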

user1009908 ,Jun 9, 2015 at 23:28

UPDATE: Don't do this, due to problems with eval.

With slightly less ceremony:

IFS=', ' eval 'array=($string)'

e.g.

string="foo, bar,baz"
IFS=', ' eval 'array=($string)'
echo ${array[1]} # -> bar

caesarsol ,Oct 29, 2015 at 14:42

eval is evil! don't do this. – caesarsol Oct 29 '15 at 14:42

user1009908 ,Oct 30, 2015 at 4:05

Pfft. No. If you're writing scripts large enough for this to matter, you're doing it wrong. In application code, eval is evil. In shell scripting, it's common, necessary, and inconsequential. – user1009908 Oct 30 '15 at 4:05

caesarsol ,Nov 2, 2015 at 18:19

put a $ in your variable and you'll see... I write many scripts and I never ever had to use a single eval – caesarsol Nov 2 '15 at 18:19

Dennis Williamson ,Dec 2, 2015 at 17:00

Eval command and security issues – Dennis Williamson Dec 2 '15 at 17:00

user1009908 ,Dec 22, 2015 at 23:04

You're right, this is only usable when the input is known to be clean. Not a robust solution. – user1009908 Dec 22 '15 at 23:04

Eduardo Lucio ,Jan 31, 2018 at 20:45

Here's my hack!

Splitting strings by strings is a pretty awkward thing to do in bash. The approaches we have either only work in a few cases (split by ";", "/", "." and so on) or produce a variety of side effects in the outputs.

The approach below has required a number of maneuvers, but I believe it will work for most of our needs!

#!/bin/bash

# --------------------------------------
# SPLIT FUNCTION
# ----------------

F_SPLIT_R=()
f_split() {
    : 'It does a "split" into a given string and returns an array.

    Args:
        TARGET_P (str): Target string to "split".
        DELIMITER_P (Optional[str]): Delimiter used to "split". If not 
    informed the split will be done by spaces.

    Returns:
        F_SPLIT_R (array): Array with the provided string separated by the 
    informed delimiter.
    '

    F_SPLIT_R=()
    TARGET_P=$1
    DELIMITER_P=$2
    if [ -z "$DELIMITER_P" ] ; then
        DELIMITER_P=" "
    fi

    REMOVE_N=1
    if [ "$DELIMITER_P" == "\n" ] ; then
        REMOVE_N=0
    fi

    # NOTE: This was the only parameter that has been a problem so far! 
    # By Questor
    # [Ref.: https://unix.stackexchange.com/a/390732/61742]
    if [ "$DELIMITER_P" == "./" ] ; then
        DELIMITER_P="[.]/"
    fi

    if [ ${REMOVE_N} -eq 1 ] ; then

        # NOTE: Due to bash limitations we have some problems getting the 
        # output of a split by awk inside an array and so we need to use 
        # "line break" (\n) to succeed. Seen this, we remove the line breaks 
        # momentarily afterwards we reintegrate them. The problem is that if 
        # there is a line break in the "string" informed, this line break will 
        # be lost, that is, it is erroneously removed in the output! 
        # By Questor
        TARGET_P=$(awk 'BEGIN {RS="dn"} {gsub("\n", "3F2C417D448C46918289218B7337FCAF"); printf $0}' <<< "${TARGET_P}")

    fi

    # NOTE: The replace of "\n" by "3F2C417D448C46918289218B7337FCAF" results 
    # in more occurrences of "3F2C417D448C46918289218B7337FCAF" than the 
    # amount of "\n" that there was originally in the string (one more 
    # occurrence at the end of the string)! We can not explain the reason for 
    # this side effect. The line below corrects this problem! By Questor
    TARGET_P=${TARGET_P%????????????????????????????????}

    SPLIT_NOW=$(awk -F"$DELIMITER_P" '{for(i=1; i<=NF; i++){printf "%s\n", $i}}' <<< "${TARGET_P}")

    while IFS= read -r LINE_NOW ; do
        if [ ${REMOVE_N} -eq 1 ] ; then

            # NOTE: We use "'" to prevent blank lines with no other characters 
            # in the sequence being erroneously removed! We do not know the 
            # reason for this side effect! By Questor
            LN_NOW_WITH_N=$(awk 'BEGIN {RS="dn"} {gsub("3F2C417D448C46918289218B7337FCAF", "\n"); printf $0}' <<< "'${LINE_NOW}'")

            # NOTE: We use the commands below to revert the intervention made 
            # immediately above! By Questor
            LN_NOW_WITH_N=${LN_NOW_WITH_N%?}
            LN_NOW_WITH_N=${LN_NOW_WITH_N#?}

            F_SPLIT_R+=("$LN_NOW_WITH_N")
        else
            F_SPLIT_R+=("$LINE_NOW")
        fi
    done <<< "$SPLIT_NOW"
}

# --------------------------------------
# HOW TO USE
# ----------------

STRING_TO_SPLIT="
 * How do I list all databases and tables using psql?

\"
sudo -u postgres /usr/pgsql-9.4/bin/psql -c \"\l\"
sudo -u postgres /usr/pgsql-9.4/bin/psql <DB_NAME> -c \"\dt\"
\"

\"
\list or \l: list all databases
\dt: list all tables in the current database
\"

[Ref.: https://dba.stackexchange.com/questions/1285/how-do-i-list-all-databases-and-tables-using-psql]


"

f_split "$STRING_TO_SPLIT" "bin/psql -c"

# --------------------------------------
# OUTPUT AND TEST
# ----------------

ARR_LENGTH=${#F_SPLIT_R[*]}
for (( i=0; i<=$(( $ARR_LENGTH -1 )); i++ )) ; do
    echo " > -----------------------------------------"
    echo "${F_SPLIT_R[$i]}"
    echo " < -----------------------------------------"
done

if [ "$STRING_TO_SPLIT" == "${F_SPLIT_R[0]}bin/psql -c${F_SPLIT_R[1]}" ] ; then
    echo " > -----------------------------------------"
    echo "The strings are the same!"
    echo " < -----------------------------------------"
fi

sel-en-ium ,May 31, 2018 at 5:56

Another way to do it without modifying IFS:
read -r -a myarray <<< "${string//, /$IFS}"

Rather than changing IFS to match our desired delimiter, we can replace all occurrences of our desired delimiter ", " with the contents of $IFS via "${string//, /$IFS}".

Maybe this will be slow for very large strings though?

This is based on Dennis Williamson's answer.

rsjethani ,Sep 13, 2016 at 16:21

Another approach can be:
str="a, b, c, d"  # assuming there is a space after ',' as in Q
arr=(${str//,/})  # delete all occurrences of ','

After this, 'arr' is an array with four strings. This doesn't require dealing with IFS or read or any other special stuff, hence it is much simpler and more direct.

gniourf_gniourf ,Dec 26, 2016 at 18:12

Same (sadly common) antipattern as other answers: subject to word splitting and filename expansion. – gniourf_gniourf Dec 26 '16 at 18:12

Safter Arslan ,Aug 9, 2017 at 3:21

Another way would be:
string="Paris, France, Europe"
IFS=', ' arr=(${string})

Now your elements are stored in the "arr" array. To iterate through the elements:

for i in ${arr[@]}; do echo $i; done

bgoldst ,Aug 13, 2017 at 22:38

I cover this idea in my answer ; see Wrong answer #5 (you might be especially interested in my discussion of the eval trick). Your solution leaves $IFS set to the comma-space value after-the-fact. – bgoldst Aug 13 '17 at 22:38

[Jan 29, 2019] A new term: PEBKAC

Jan 29, 2019 | thwack.solarwinds.com

dtreloar Jul 30, 2015 8:51 PM PEBKAC

Problem
Exists
Between
Keyboard
And
Chair

or the most common fault is the "ID ten T" error (ID10T)

[Jan 29, 2019] Are you sure?

Jan 29, 2019 | thwack.solarwinds.com

RichardLetts

Jul 13, 2015 8:13 PM Dealing with my ISP:

Me: There is a problem with your head-end router, you need to get an engineer to troubleshoot it

Them: no the problem is with your cable modem and router, we can see it fine on our network

Me: That's interesting because I powered it off and disconnected it from the wall before we started this conversation.

Them: Are you sure?

Me: I'm pretty sure that the lack of blinky lights means it's got no power, but if you think it's still working fine then I'd suggest the problem is at your end of this phone conversation and not at my end.

[Jan 29, 2019] RHEL7 is a fine OS, the only thing it s missing is a really good init system.

Highly recommended!
Or in other words, a simple, reliable and clear solution (which has some faults due to its age) was replaced with a gigantic KISS violation. No engineer worth the name will ever do that. And if it needs doing, any good engineer will make damned sure to achieve maximum compatibility and a clean way back. The systemd people seem to be hell-bent on making it as hard as possible to not use their monster. That alone is a good reason to stay away from it.
Notable quotes:
"... We are systemd. Lower your memory locks and surrender your processes. We will add your calls and code distinctiveness to our own. Your functions will adapt to service us. Resistance is futile. ..."
"... I think we should call systemd the Master Control Program since it seems to like making other programs functions its own. ..."
"... RHEL7 is a fine OS, the only thing it's missing is a really good init system. ..."
Oct 14, 2018 | linux.slashdot.org

Reverend Green ( 4973045 ) , Monday December 11, 2017 @04:48AM ( #55714431 )

Re: Does systemd make ... ( Score: 5 , Funny)

Systemd is nothing but a thinly-veiled plot by Vladimir Putin and Beyonce to import illegal German Nazi immigrants over the border from Mexico who will then corner the market in kimchi and implement Sharia law!!!

Anonymous Coward , Monday December 11, 2017 @01:38AM ( #55714015 )

Re:It violates fundamental Unix principles ( Score: 4 , Funny)

The Emacs of the 2010s.

DontBeAMoran ( 4843879 ) , Monday December 11, 2017 @01:57AM ( #55714059 )
Re:It violates fundamental Unix principles ( Score: 5 , Funny)

We are systemd. Lower your memory locks and surrender your processes. We will add your calls and code distinctiveness to our own. Your functions will adapt to service us. Resistance is futile.

serviscope_minor ( 664417 ) , Monday December 11, 2017 @04:47AM ( #55714427 ) Journal
Re:It violates fundamental Unix principles ( Score: 4 , Insightful)

I think we should call systemd the Master Control Program since it seems to like making other programs' functions its own.

Anonymous Coward , Monday December 11, 2017 @01:47AM ( #55714035 )
Don't go hating on systemd ( Score: 5 , Funny)

RHEL7 is a fine OS, the only thing it's missing is a really good init system.

[Jan 29, 2019] Your tax dollars at government IT work

Jan 29, 2019 | thwack.solarwinds.com

pzjones Jul 8, 2015 10:34 AM

My story is about required processes... Need to add DHCP entries to the DHCP server. Here is the process. Receive request. Write a 5-page document (no exaggeration) detailing who submitted the request, why the request was submitted, what the solution would be, the detailed steps of the solution including a spreadsheet showing how each field would be completed, and backup procedures. Produce a second document to include a pre-execution test plan and a post-execution test plan in minute detail. Submit to CAB board for review; submit to higher-level advisory board for review; attend CAB meeting for formal approval; attend additional approval board meeting if the data center is in freeze; attend post-implementation board for lessons learned... Lesson learned: now I know where our tax dollars go...

[Jan 29, 2019] Your worst sysadmin horror story

Notable quotes:
"... Disk Array not found. ..."
"... Disk Array not found. ..."
"... Windows 2003 is now loading. ..."
Jan 29, 2019 | www.reddit.com

highlord_fox, Moderator, /r/sysadmin, 3 years ago

9-10 year old Poweredge 2950. Four drives, 250GB ea, RAID 5. Not even sure the fourth drive was even part of the array at this point. Backups consist of cloud file-level backup of most of the server's files. I was working on the server, updating the OS, rebooting it to solve whatever was ailing it at the time, and it was probably about 7-8PM on a Friday. I powered it off, and went to power it back on.

Disk Array not found.

SHIT SHIT SHIT SHIT SHIT SHIT SHIT . Power it back off. Power it back on.

Disk Array not found.

I stared at it, and hope I don't have to call for emergency support on the thing. Power it off and back on a third time.

Windows 2003 is now loading.

OhThankTheGods

I didn't power it off again until I replaced it, some 4-6 months later. And then it stayed off for a good few weeks, before I had to buy a Perc 5i card off ebay to get it running again. Long story short, most of the speed issues I was having was due to the card dying. AH WELL.

EDIT: Formatting.

[Jan 29, 2019] Extra security can be a dangerous thing

Viewing backup logs is vital. Often it only looks like the backup is going fine...
Notable quotes:
"... Things looked fine until someone noticed that a directory with critically important and sensitive data was missing. Turned out that some manager had decided to 'secure' the directory by doing 'chmod 000 dir' to protect the data from inquisitive eyes when the data was not being used. ..."
"... Of course, tar complained about the situation and returned with non-null status, but since the backup procedure had seemed to work fine, no one thought it necessary to view the logs... ..."
Jul 20, 2017 | www.linuxjournal.com

Anonymous, 11/08/2002

At an unnamed location it happened thus... The customer had been using a home built 'tar' -based backup system for a long time. They were informed enough to have even tested and verified that recovery would work also.

Everything had been working fine, and they even had to do a recovery which went fine. Well, one day something evil happened to a disk and they had to replace the unit and do a full recovery.

Things looked fine until someone noticed that a directory with critically important and sensitive data was missing. Turned out that some manager had decided to 'secure' the directory by doing 'chmod 000 dir' to protect the data from inquisitive eyes when the data was not being used.

Of course, tar complained about the situation and returned with non-null status, but since the backup procedure had seemed to work fine, no one thought it necessary to view the logs...

[Jan 29, 2019] Backing things up with rsync

Notable quotes:
"... I RECURSIVELY DELETED ALL THE LIVE CORPORATE WEBSITES ON FRIDAY AFTERNOON AT 4PM! ..."
"... This is why it's ALWAYS A GOOD IDEA to use Midnight Commander or something similar to delete directories!! ..."
"... rsync with ssh as the transport mechanism works very well with my nightly LAN backups. I've found this page to be very helpful: http://www.mikerubel.org/computers/rsync_snapshots/ ..."
Jul 20, 2017 | www.linuxjournal.com

Anonymous on Fri, 11/08/2002 - 03:00.

The Subject, not the content, really brings back memories.

Imagine this: you're tasked with complete control over the network in a multi-million dollar company. You've had some experience in the real world of network maintenance, but mostly you've learned from breaking things at home.

Time comes to implement a backup routine (yes, this was a startup company). You carefully consider the best way to do it and decide that copying data to a holding disk before the tape run would be perfect in this situation: faster restores if the holding disk is still alive.

So off you go configuring all your servers for ssh pass through, and create the rsync scripts. Then before the trial run you think it would be a good idea to create a local backup of all the websites.

You logon to the web server, create a temp directory and start testing your newly advanced rsync skills. After a couple of goes, you think you're ready for the real thing, but you decide to run the test one more time.

Everything seems fine so you delete the temp directory. You pause for a second and your mouth drops open wider than it has ever opened before, and a feeling of terror overcomes you. You want to hide in a hole and hope you didn't see what you saw.

I RECURSIVELY DELETED ALL THE LIVE CORPORATE WEBSITES ON FRIDAY AFTERNOON AT 4PM!

Anonymous on Sun, 11/10/2002 - 03:00.

This is why it's ALWAYS A GOOD IDEA to use Midnight Commander or something similar to delete directories!!

...Root for (5) years and never trashed a filesystem yet (knockwoody)...

Anonymous on Fri, 11/08/2002 - 03:00.

rsync with ssh as the transport mechanism works very well with my nightly LAN backups. I've found this page to be very helpful: http://www.mikerubel.org/computers/rsync_snapshots/

[Jan 29, 2019] It helps if somebody checks whether the equipment really has power, but often this step is skipped.

Notable quotes:
"... On closer inspection, noticed this power lead was only half in the socket... I connected this back to the original switch, grabbed the "I.T manager" and asked him to "just push the power lead"... his face? Looked like Casper the friendly ghost. ..."
Jan 29, 2019 | thwack.solarwinds.com

nantwiched Jul 13, 2015 11:18 AM

I've had a few horrors, heres a few...

Had to travel from Cheshire to Glasgow (4+hours) at 3am to get to a major high street store for 8am, an hour before opening. A switch had failed and taken out a whole floor of the store. So I prepped the new switch, using the same power lead from the failed switch as that was the only available lead / socket. No power. Initially thought the replacement switch was faulty and I would be in trouble for not testing this prior to attending site...

On closer inspection, noticed this power lead was only half in the socket... I connected this back to the original switch, grabbed the "I.T manager" and asked him to "just push the power lead"... his face? Looked like Casper the friendly ghost.

Problem solved at a massive expense to the company due to the out of hours charges. Surely that would be the first thing to check? Obviously not...

The same thing happened in Aberdeen, a 13 hour round trip to resolve a fault on a "failed router". The router looked dead at first glance, but after taking the side panel off the cabinet, I discovered it always helps if the router is actually plugged in...

Yet the customer clearly said everything is plugged in as it should be and it "must be faulty"... It does tend to appear faulty when not supplied with any power...

[Jan 29, 2019] It can be hot inside the rack

Jan 29, 2019 | thwack.solarwinds.com

jemertz Mar 28, 2016 12:16 PM

Shortly after I started my first remote server-monitoring job, I started receiving, one by one, traps for servers that had gone heartbeat missing/no-ping at a remote site. I looked up the site, and there were 16 total servers there, of which about 4 or 5 (and counting) were already down. Clearly not network issues. I remoted into one of the ones that was still up, and found in the Windows event viewer that it was beginning to overheat.

I contacted my front-line team and asked them to call the site to find out if the data center air conditioner had gone out, or if there was something blocking the servers' fans or something. He called, the client at the site checked and said the data center was fine, so I dispatched IBM (our remote hands) to go to the site and check out the servers. They got there and called in laughing.

There was construction in the data center, and the contractors, being thoughtful, had draped a painter's dropcloth over the server racks to keep off saw dust. Of COURSE this caused the servers to overheat. Somehow the client had failed to mention this.

...so after all this went down, the client had the gall to ask us to replace the servers "just in case" there was any damage, despite the fact that each of them had shut itself down in order to prevent thermal damage. We went ahead and replaced them anyway. (I'm sure they were rebuilt and sent to other clients, but installing these servers on site takes about 2-3 hours of IBM's time on site and 60-90 minutes of my remote team's time, not counting the rebuild before recycling.)
Oh well. My employer paid me for my time, so no skin off my back.

[Jan 29, 2019] "Sure, I get out my laptop, plug in the network cable, get on the internet from home. I start the VPN client, take out this paper with the code on it, and type it in..." Yup. He wrote down the RSA token's code before he went home.

Jan 29, 2019 | thwack.solarwinds.com

jm_sysadmin Expert Jul 8, 2015 7:04 AM

I was just starting my IT career, and I was told a VIP user couldn't VPN in, and I was asked to help. Everything checked out with the computer, so I asked the user to try it in front of me. He took out his RSA token, knew what to do with it, and it worked.

I also knew this user had been complaining of this issue for some time, and I wasn't the first person to try to fix this. Something wasn't right.

I asked him to walk me through every step he took from when it failed the night before.

"Sure, I get out my laptop, plug in the network cable, get on the internet from home. I start the VPN client, take out this paper with the code on it, and type it in..." Yup. He wrote down the RSA token's code before he went home. See that little thing was expensive, and he didn't want to lose it. I explained that the number changes all time, and that he needed to have it with him. VPN issue resolved.

[Jan 29, 2019] How electricians can help to improve server uptime

Notable quotes:
"... "Oh my God, the server room is full of smoke!" Somehow they hooked up things wrong and fed 220v instead of 110v to all the circuits. Every single UPS was dead. Several of the server power supplies were fried. ..."
Jan 29, 2019 | thwack.solarwinds.com

wfordham Jul 13, 2015 1:09 PM

This happened back when we had an individual APC UPS for each server. Most of the servers were really just whitebox PCs in a rack mount case running a server OS.

The facilities department was doing some planned maintenance on the electrical panel in the server room over the weekend. They assured me that they were not going to touch any of the circuits for the server room, just for the rooms across the hallway. Well, they disconnected power to the entire panel. Then they called me to let me know what they did. I was able to remotely verify that everything was running on battery just fine. I let them know that they had about 20 minutes to restore power or I would need to start shutting down servers. They called me again and said,

"Oh my God, the server room is full of smoke!" Somehow they hooked up things wrong and fed 220v instead of 110v to all the circuits. Every single UPS was dead. Several of the server power supplies were fried.

And a few motherboards didn't make it either. It took me the rest of the weekend kludging things together to get the critical systems back online.

[Jan 29, 2019] 7th Circuit Rules Age Discrimination Law Does Not Include Job Applicants

Notable quotes:
"... By Jerri-Lynn Scofield, who has worked as a securities lawyer and a derivatives trader. She is currently writing a book about textile artisans. ..."
"... Kleber filed suit, pursuing claims for both disparate treatment and disparate impact under the ADEA. The Chicago Tribune notes in Hinsdale man loses appeal in age discrimination case that challenged experience caps in job ads that "Kleber had out of work and job hunting for three years" when he applied for the CareFusion job. ..."
"... Unfortunately, the seventh circuit has now held that the disparate impact section of the ADEA does not extend to job applicants. .Judge Michael Scudder, a Trump appointee, wrote the majority 8-4 opinion, which reverses an earlier 2-1 panel ruling last April in Kleber's favor that had initially overruled the district court's dismissal of Kleber's disparate impact claim. ..."
"... hiring discrimination is difficult to prove and often goes unreported. Only 3 percent have made a formal complaint. ..."
"... The decision narrowly applies to disparate impact claims of age discrimination under the ADEA. It is important to remember that job applicants are protected under the disparate treatment portion of the statute. ..."
"... I forbade my kids to study programming. ..."
"... I'm re reading the classic of Sociology Ain't No Makin It by Jay MacLeod, in which he studies the employment prospects of youths in the 1980s and determined that even then there was no stable private sector employment and your best option is a government job or to have an excellent "network" which is understandably hard for most people to achieve. ..."
"... I think the trick is to study something and programming, so the programming becomes a tool rather than an end. ..."
"... the problem is it is almost impossible to exit the programming business and join another domain. Anyone can enter it. (evidence – all the people with "engineering" degrees from India) Also my wages are now 50% of what i made 10 years ago (nominal). Also I notice that almost no one is doing sincere work. Most are just coasting, pretending to work with the latest toy (ie, preparing for the next interview). ..."
"... I am an "aging" former STEM worker (histology researcher) as well. Much like the IT landscape, you are considered "over-the-hill" at 35, which I turn on the 31st. ..."
"... Most of the positions in science and engineering fields now are basically "gig" positions, lasting a few months to a year. ..."
Jan 29, 2019 | www.nakedcapitalism.com

By Jerri-Lynn Scofield, who has worked as a securities lawyer and a derivatives trader. She is currently writing a book about textile artisans.

The US Court of Appeals for the Seventh Circuit decided in Kleber v. CareFusion Corporation last Wednesday that disparate impact liability under the Age Discrimination in Employment Act (ADEA) applies only to current employees and does not include job applicants.

The case was brought by Dale Kleber, an attorney, who applied for a senior position in CareFusion's legal department. The job description required applicants to have "3 to 7 years (no more than 7 years) of relevant legal experience."

Kleber was 58 at the time he applied and had more than seven years of pertinent experience. CareFusion hired a 29-year-old applicant who met but did not exceed the experience requirement.

Kleber filed suit, pursuing claims for both disparate treatment and disparate impact under the ADEA. The Chicago Tribune notes in Hinsdale man loses appeal in age discrimination case that challenged experience caps in job ads that "Kleber had been out of work and job hunting for three years" when he applied for the CareFusion job.

Some Basics

Let's start with some basics, as the US Equal Employment Opportunity Commission (EEOC) set out in a brief primer on basic US age discrimination law entitled Questions and Answers on EEOC Final Rule on Disparate Impact and "Reasonable Factors Other Than Age" Under the Age Discrimination in Employment Act of 1967 . The EEOC began with a brief description of the purpose of the ADEA:

The purpose of the ADEA is to prohibit employment discrimination against people who are 40 years of age or older. Congress enacted the ADEA in 1967 because of its concern that older workers were disadvantaged in retaining and regaining employment. The ADEA also addressed concerns that older workers were barred from employment by some common employment practices that were not intended to exclude older workers, but that had the effect of doing so and were unrelated to job performance.

It was with these concerns in mind that Congress created a system that included liability for both disparate treatment and disparate impact. What's the difference between these two concepts?

According to the EEOC:

[The ADEA] prohibits discrimination against workers because of their older age with respect to any aspect of employment. In addition to prohibiting intentional discrimination against older workers (known as "disparate treatment"), the ADEA prohibits practices that, although facially neutral with regard to age, have the effect of harming older workers more than younger workers (known as "disparate impact"), unless the employer can show that the practice is based on a [Reasonable Factor Other Than Age (RFOA)]

The crux: it's much easier for a plaintiff to prove disparate impact, because s/he needn't show that the employer intended to discriminate. Of course, many if not most employers are savvy enough not to be explicit about their intentions to discriminate against older people as they don't wish to get sued.

District, Panel, and Full Seventh Circuit Decisions

The district court dismissed Kleber's disparate impact claim, on the grounds that the text of the statute (§ 4(a)(2)) did not extend to outside job applicants. Kleber then voluntarily dismissed his separate claim for disparate treatment liability to appeal the dismissal of his disparate impact claim. No doubt he was aware – either because he was an attorney, or because of the legal advice received – that it is much more difficult to prevail on a disparate treatment claim, which would require that he establish CareFusion's intent to discriminate.

Or at least that was true before this decision was rendered.

Unfortunately, the Seventh Circuit has now held that the disparate impact section of the ADEA does not extend to job applicants. Judge Michael Scudder, a Trump appointee, wrote the majority 8-4 opinion, which reverses an earlier 2-1 panel ruling last April in Kleber's favor that had initially overruled the district court's dismissal of Kleber's disparate impact claim.

The majority ruled:

By its terms, § 4(a)(2) proscribes certain conduct by employers and limits its protection to employees. The prohibited conduct entails an employer acting in any way to limit, segregate, or classify its employees based on age. The language of § 4(a)(2) then goes on to make clear that its proscriptions apply only if an employer's actions have a particular impact -- "depriv[ing] or tend[ing] to deprive any individual of employment opportunities or otherwise adversely affect[ing] his status as an employee." This language plainly demonstrates that the requisite impact must befall an individual with "status as an employee." Put most simply, the reach of § 4(a)(2) does not extend to applicants for employment, as common dictionary definitions confirm that an applicant has no "status as an employee." (citation omitted)[opinion, pp. 3-4]

By contrast, in the disparate treatment part of the statute (§ 4(a)(1)):

Congress made it unlawful for an employer "to fail or refuse to hire or to discharge any individual or otherwise discriminate against any individual with respect to his compensation, terms, conditions, or privileges of employment, because of such individual's age."[opinion, p.6]

The court compared the disparate treatment section – § 4(a)(1) – directly with the disparate impact section – § 4(a)(2):

Yet a side-by-side comparison of § 4(a)(1) with § 4(a)(2) shows that the language in the former plainly covering applicants is conspicuously absent from the latter. Section 4(a)(2) says nothing about an employer's decision "to fail or refuse to hire any individual" and instead speaks only in terms of an employer's actions that "adversely affect his status as an employee." We cannot conclude this difference means nothing: "when 'Congress includes particular language in one section of a statute but omits it in another' -- let alone in the very next provision -- the Court presumes that Congress intended a difference in meaning." (citations omitted)[opinion, pp. 6-7]

The majority's conclusion:

In the end, the plain language of § 4(a)(2) leaves room for only one interpretation: Congress authorized only employees to bring disparate impact claims.[opinion, p.8]

Greying of the Workforce

Older people account for a growing percentage of the workforce, as Reuters reports in Age bias law does not cover job applicants: U.S. appeals court :

People 55 or older comprised 22.4 percent of U.S. workers in 2016, up from 11.9 percent in 1996, and may account for close to one-fourth of the labor force by 2022, according to the Bureau of Labor Statistics.

The greying of the workforce is "thanks to better health in older age and insufficient savings that require people to keep working longer," according to the Chicago Tribune. Yet:

numerous hiring practices are under fire for negatively impacting older applicants. In addition to experience caps, lawsuits have challenged the exclusive use of on-campus recruiting to fill positions and algorithms that target job ads to show only in certain people's social media feeds.

Unless Congress amends the ADEA to include job applicants, older people will continue to face barriers to getting jobs.

The Chicago Tribune reports:

The [EEOC], which receives about 20,000 age discrimination charges every year, issued a report in June citing surveys that found 3 in 4 older workers believe their age is an obstacle in getting a job. Yet hiring discrimination is difficult to prove and often goes unreported. Only 3 percent have made a formal complaint. Allowing older applicants to challenge policies that have an unintentionally discriminatory impact would offer another tool for fighting age discrimination, Ray Peeler, associate legal counsel at the EEOC, has said.

How will these disparate impact claims now fare?

The Bottom Line

FordHarrison, a firm specialising in human relations law, noted in Seventh Circuit Limits Job Applicants' Age Discrimination Claims :

The decision narrowly applies to disparate impact claims of age discrimination under the ADEA. It is important to remember that job applicants are protected under the disparate treatment portion of the statute. There is no split among the federal appeals courts on this issue, making it an unlikely candidate for Supreme Court review, but the four judges in dissent read the statute as being vague and susceptible to an interpretation that includes job applicants.

Their conclusion: "a decision finding disparate impact liability for job applicants under the ADEA is unlikely in the near future."

Alas, for reasons of space, I will not consider the extensive dissent. My purpose in writing this post is to discuss the majority decision, not to opine on which side made the better arguments.

antidlc , January 27, 2019 at 3:28 pm

8-4 opinion. Which judges ruled for the majority? Which judges ruled for the minority opinion?

Sorry... don't have time to research right now. It says a Trump appointee wrote the majority opinion. Who were the other 7?

grayslady , January 27, 2019 at 6:09 pm

There were 3 judges who dissented in whole and one who dissented in part. Of the three full dissents, two were by Clinton appointees (including the Chief Judge, who was one of the dissenters) and one by a Reagan appointee. The partial dissenter was also a Reagan appointee.

run75441 , January 27, 2019 at 11:25 pm

ant: Not your law clerk, read the opinion. Easterbrook and Wood dissented. Find the other two and you can figure out who agreed.

YankeeFrank , January 27, 2019 at 3:58 pm

"depriv[ing] or tend[ing] to deprive any individual of employment opportunities or otherwise adversely affect[ing] his status as an employee."

–This language plainly demonstrates that the requisite impact must befall an individual with "status as an employee."

So they totally ignore the first part of the sentence -- "depriv[ing] or tend[ing] to deprive any individual of employment opportunities " -- "employment opportunities" clearly applies to applicants.

It's as if these judges cannot make sense of the English language. Hopefully the judges on appeal will display better command of the language.

Alfred , January 27, 2019 at 5:56 pm

I agree. "Employment opportunities," in the "plain language" so meticulously respected by the 7th Circuit, must surely refer at minimum to 'the chance to apply for a job and to have one's application fairly considered'. It seems on the other hand a stretch to interpret the phrase to mean only 'the chance to keep a job one already has'. Both are important, however; to split them would challenge even Solomonic wisdom, as I suppose the curious decision discussed here demonstrates. I am less convinced that the facts as presented here establish a clear case of age discrimination. True, they point in that direction. But a hypothetical 58-year old who only earned a law degree in his or her early 50s, perhaps after an earlier career in paralegal work, could have legitimately applied for a position requiring 3 to 7 years of "relevant legal experience." That last phrase, is of course, quite weasel-y: what counts as "relevant" and what counts as "legal" experience would under any circumstances be subject to (discriminatory) interpretation. The limitation of years of experience in the job announcement strikes me as a means to keep the salary within a certain budgetary range as prescribed either by law or collective bargaining.

KLG , January 27, 2019 at 6:42 pm

Almost like the willful misunderstanding of "A well regulated militia being necessary to the security of a free State "? Of course, that militia also meant slave patrols and the occasional posse to put down the native "savages," but still.

Lambert Strether , January 28, 2019 at 2:08 am

> "depriv[ing] or tend[ing] to deprive any individual of employment opportunities or otherwise adversely affect[ing] his status as an employee."

Says "or." Not "and."

Magic Sam , January 27, 2019 at 5:53 pm

They are failing to find what they don't want to find.

Magic Sam , January 27, 2019 at 5:58 pm

Being pro-Labor will not get you Federalist Society approval to be nominated to the bench by Trump. This decision came down via the ideological makeup of the court, not the letter of the law. Their stated pretext is obviously b.s.; it contradicts itself.

Mattie , January 27, 2019 at 6:05 pm

Yep. That is when their Utah et al property mgt teams began breaking into homes, tossing contents – including pets – outside & changing locks

Even when borrowers were in approved HAMP, etc. pipelines

PLUG: If you haven't yet – See "The Florida Project"

nothing but the truth , January 27, 2019 at 7:18 pm

as an aging "STEM" (cough, coder) worker who typically has to look for a new "gig" every few years, I am trembling at this.

Luckily, I bought a small business when I had a few saved up, so I won't starve.

Health insurance is another matter.

I forbade my kids to study programming.

Lambert Strether , January 28, 2019 at 2:09 am

Plumbing. Electrical work. Permaculture. Get those kids Jackpot-ready!

Joe Well , January 28, 2019 at 11:40 am

I'm re reading the classic of Sociology Ain't No Makin It by Jay MacLeod, in which he studies the employment prospects of youths in the 1980s and determined that even then there was no stable private sector employment and your best option is a government job or to have an excellent "network" which is understandably hard for most people to achieve. So I'm genuinely interested in what possible options there are for anyone entering the job market today or God help you, re-entering. I am guessing the barriers to entry to those trades are quite high but would love to be corrected.

jrs , January 28, 2019 at 1:39 pm

what is the point of being jackpot ready if you can't even support yourself today? To fantasize about collapse while sleeping in a rented closet and driving for Uber? In that case one's personal collapse has already happened, which will matter a lot more to an individual than any potential jackpot.

Plumbers and electricians can make money now of course (although yea barriers to entry do seem high, don't you kind of have to know people to get in those industries?). But permaculture?

Ford Prefect , January 28, 2019 at 1:00 pm

I think the trick is to study something and programming, so the programming becomes a tool rather than an end. A couple of my kids used to ride horses. One of the instructors and stable owners said that a lot of people went to school for equine studies and ended up shoveling horse poop for a living. She said the thing to do was to study business and do the equestrian stuff as a hobby/minor. That way you came out prepared to run a business and hire the equine studies people to clean the stalls.

jrs , January 28, 2019 at 1:36 pm

Do you actually see that many jobs requiring something and programming though? I haven't really. There seems no easy transition out of software work which that would make possible either. Might as well just study the "something".

rd , January 28, 2019 at 2:21 pm

Programming is a means to an end, not the end itself. If all you do is program, then you are essentially a machine lathe operator, not somebody creating the products the lathe operators turn out.

Understanding what needs to be done helps with structured programs and better input/output design. In turn, structured programming is a good tool to understand the basics of how to manage tasks. At the higher level, Fred Brooks book "The Mythical Man-Month" has a lot of useful project management information that can be re-applied for non computer program development.

We are doing a lot of work with mobile computing and data collection to assist in our regular work. The people doing this are mainly non-computer scientists that have learned enough programming to get by.

The engineering programs that we use are typically written more by engineers than by programmers, as the entire point of such a program is to turn the theory into a numerical computation and presentation system. Programmers with a graphic design background can assist in creating much better user interfaces.

If you have some sort of information theory background (GIS, statistics, etc.) then big data actually means something.

nothing but the truth , January 28, 2019 at 7:02 pm

The problem is that it is almost impossible to exit the programming business and join another domain, while anyone can enter it (evidence: all the people with "engineering" degrees from India). Also, my wages are now 50% of what I made 10 years ago (nominal). And I notice that almost no one is doing sincere work. Most are just coasting, pretending to work with the latest toy (i.e., preparing for the next interview).

Now almost every "interview" requires a coding exam. What other profession makes 25-30 year veterans sit an exam? Could you pass your high school exams again today? What if your profession required you to, a couple of times almost every year?

Hepativore , January 28, 2019 at 2:56 pm

I am an "aging" former STEM worker (histology researcher) as well. Much like the IT landscape, you are considered "over-the-hill" at 35, which I turn on the 31st. While I do not have children and never intend to get married, many biotech companies consider this the age at which a worker is getting long in the tooth. This is because there is the underlying assumption that is when people start having familial obligations.

Most of the positions in science and engineering fields now are basically "gig" positions, lasting a few months to a year. A lot of people my age are finding how much harder it is to find any position at all in these areas as there is a massive pool of people to choose from, even for permatemp work simply because serfs in their mid-30s might get uppity about benefits like family health plans or 401k

Steve , January 27, 2019 at 7:32 pm

I am 59 and do not mind having employers discriminate against me due to age. (I also need a job.) I had my own business and over the years got quite damaged. I was a contractor specializing in older (historical) work.

I was always the lead worker, with many friends and others working with me. At 52 I was given a choice: very involved neck surgery, or quit. (No small businesses have disability insurance!)

I shut down everything and helped my friends who worked for me take over some of the work or find something else. I was also a nationally published computer consultant a long time ago, and a graphic artist.

The reality is that I can still do many things, but I do nothing as well as I did when I was younger, and the cost to employers for me is far higher than for a younger person. I had my chance and I chose poorly. Younger people, if that makes them a better fit, deserve a chance now more than I do.

Joe Well , January 27, 2019 at 7:49 pm

I'm sorry for your predicament. Do you mean you chose poorly when you chose not to get neck surgery? What was the choice you regret?

Steve , January 27, 2019 at 10:12 pm

My career choices. Choosing to close my business to possibly avoid the surgery was actually a good choice.

Joe Well , January 28, 2019 at 11:47 am

I'm sorry for your challenges, but I don't think there were many good careers you could have chosen, and it would have required a crystal ball to know which were the good ones. Americans your age entered the job market just after the very end of the Golden Age of labor conditions and have been weathering the decline your entire working lives. At least I entered the job market when everyone had known for years that things were falling apart. It's not your fault. You were cheated, plain and simple.

Lambert Strether , January 28, 2019 at 2:14 am

> I had my chance and I chose poorly.

I don't see how it's possible to predict the labor market years in advance. Why blame yourself for poor choices when so much chance is involved?

With a Jobs Guarantee, such questions would not arise. I also don't think it's only a question of doing, but a question of sharing ("experience, strength, and hope," as AA -- a very successful organization! -- puts it, in a way of thinking that has wide application).

Dianne Shatin , January 27, 2019 at 7:46 pm

An unelected plutocrat and his international syndicate, funded by a former IBM artificial-intelligence developer and social Darwinist, now operate data-manipulation platforms and social media at the highest levels of power in the USA. Anti-justice, anti-Enlightenment, etc.

Since the installation of GW Bush by the Supreme Court almost 20 years ago, they have tunneled deeply, speaking through propaganda machines such as Rush Limbaugh, gaining traction, and making it over the finish line with KGB and Russian-oligarch backing. The net effect on us? The loss of everything built on the foundation of the Enlightenment: an exceptional nation with no king, a nation of, for, and by the people, under the rule of law. There is nothing Judeo-Christian about social Darwinism, but it is eerily similar to National Socialism (Nazism). The ruling against the plaintiff by the 7th Circuit in the U.S., and their success in creating chaos in Great Britain vis-a-vis "Brexit" by fascist Lafarge Inc., are indicators of how easy their ascent has been and how powerful they have become.

anon y'mouse , January 27, 2019 at 9:19 pm

They had better get ready to lower the SSI retirement age to 55, then. Or I predict blood in the streets.

jrs , January 28, 2019 at 1:49 pm

I wish it were so. They just expect the older crowd to die quietly.

How is it legal , January 27, 2019 at 10:04 pm

Where are the bipartisan presidential candidates and legislators on condemning age discrimination, putting teeth into age discrimination laws, and fixing tax policy? Nowhere to be seen or heard, that I've noticed; particularly in Blue™ California, which is famed for age discrimination against those as young as 36, ever since Mark Zuckerberg proclaimed anyone over 35 over the hill in the early 2000s and never got crushed for it by the media or the politicians, as he should have (particularly in Silicon Valley).

I know those Republicans are venal, but I dare anyone to show me a meaningful age discrimination policy proposal pushed by Blue Obama, Hillary, or even Sanders and Jill Stein. Certainly none of California's nationally known (many well over retirement age) gubernatorial and legislative Democratic politicians -- Jerry Brown, Gavin Newsom, Dianne Feinstein, Barbara Boxer, Nancy Pelosi, Kamala Harris, and Ro Khanna (or the lesser-known California federal, state, and local Democratic politicians) -- have ever addressed it, despite the fact that homelessness deaths of those near retirement age have been increasing frighteningly in California's obscenely wealthy homelessness hotspots, such as Silicon Valley.

Such a tragic issue, which has unfolded while, for over a decade now, mainstream news and online pundits have proclaimed 50 to be the new 30. Sadistic. I have no doubt this is linked to the ever-increasing deaths of despair and attempted and successful suicides of those under, and just over, retirement age -- while the US has an average Senate age of 65, and a President and 2020 presidential contenders over 70 (I am not at all saying older persons shouldn't be elected, nor that younger persons shouldn't be; I'm pointing out the imbalance, insanity, and cruelty of it).

Further, age discrimination has been particularly brutal to single, divorced, and widowed females, who have most assuredly made far, far less on the dollar than males (if they could even get hired for the position, or leave the kids alone and the housekeeping undone to get a job):

Patrick Button, an assistant economics professor at Tulane University, was part of a research project last year that looked at callback rates from resumes in various entry-level jobs. He said women seeking the positions appeared to be most affected.

"Based on over 40,000 job applications, we find robust evidence of age discrimination in hiring against older women, especially those near retirement age, but considerably less evidence of age discrimination against men," according to an abstract of the study.

Jacquelyn James, co-director of the Center on Aging and Work at Boston College, said age discrimination in employment is a crucial issue in part because of societal changes that are forcing people to delay retirement. Moves away from defined-benefit pension plans to less assured forms of retirement savings are part of the reason.

Lambert Strether , January 28, 2019 at 2:15 am

> "Based on over 40,000 job applications, we find robust evidence of age discrimination in hiring against older women, especially those near retirement age, but considerably less evidence of age discrimination against men," according to an abstract of the study.

Well, these aren't real women, obviously. If they were, the Democrats would already be taking care of them.

jrs , January 28, 2019 at 1:58 pm

From the article: The greying of the workforce is "thanks to better health in older age and insufficient savings that require people to keep working longer," according to the Chicago Tribune.

Get on the clue train, Chicago Tribune: you're like W and Trump not knowing how a supermarket works, that's how dense you are. Even if one saved, and even if one won the luck lottery in terms of job stability and adequate income to save from, healthcare alone is a reason to keep working: either to get employer-provided coverage if lucky, or to work without it and put most of one's money toward an ACA plan or the like if not. Yes, the cost of almost all other necessities has also increased greatly, but even parts of the country without a high cost of living have unaffordable healthcare.

Enquiring Mind , January 27, 2019 at 11:07 pm

Benefits may be 23-30% or so of payroll and represent another expense management opportunity for the diligent executive. One piece of low-hanging fruit is the age-related healthcare cost. If you hire young people, who under-consume healthcare relative to older cohorts, you save money, ceteris paribus. They have lower premiums, lower loss experience and they rebound more quickly, so you hit a triple at your first at-bat swinging at that fruit. Yes, metaphors are fungible along with every line on the income statement.

If your company still has the vestiges of a pension or similar blandishment, you may even back-load contributions more aggressively, to the extent allowable, of course. That added expense diligence will pay off when those superannuated employees leave before hitting the more expensive funding years.

NB, the above reflects what I saw and heard at a Fortune 500 company.

rd , January 28, 2019 at 12:56 pm

Another good reason for a Canadian-style single-payer system. It turns a deciding factor into a non-factor.

Jack Hayes , January 28, 2019 at 8:15 am

A reason why the court system is overburdened is lack of clarity in laws and regulations. Fix the disparity between the two sections of the law so that courts don't have to decide which section rules.

rd , January 28, 2019 at 2:24 pm

Polarization has made tweaks and repairs of laws impossible.

Jeff N , January 28, 2019 at 10:17 am

Yep. Many police departments *legally* refuse to hire anyone over 35 years old (exceptions for prior police experience or certain military service)

Joe Well , January 28, 2019 at 12:36 pm

It amazes me how often the government gives itself exemptions to its own laws and principles, and how often "progressive" nonprofits and political groups give themselves the same exemptions, for instance regarding health insurance, paid overtime, paid training, etc. that they are legally required to provide.

Ford Prefect , January 28, 2019 at 2:27 pm

There are specific physical demands in things like policing, so it doesn't make much sense to hire a 55-year-old rookie policeman when many policemen are retiring at that age.

Arthur Dent , January 28, 2019 at 2:59 pm

It's an interesting quandary. We have older staff who went back to school and changed careers. They do a good job and get paid at a rate similar to the younger staff with similar job-related experience. However, they will be retiring at about the same time as the much more experienced staff, so they will not be future succession replacements for the senior staff.

So we also have to hire people in their 20s and 30s because that will be the future when people like me retire in a few years. That could very well be the reason for the specific wording of the job opening (I haven't read the opinion). I know of current hiring for a position where the firm is primarily looking for somebody in their 20s or early 30s for precisely that reason. The staff currently doing the work are in their 40s and 50s and need to start bringing up the next generation. If somebody went back to school late and was in their 40s or 50s (so would be at a lower billing rate due to lack of job related experience), they would be seriously considered. But the firm would still be left with the challenge of having to hire another person at the younger age within a couple of years to build the succession. Once people make it past 5 years at the firm, they tend to stay for a long time with senior staff generally having been at the firm for 20 years or more, so hiring somebody really is a long-term investment.

[Jan 28, 2019] Testing the backup system as the main source of power outages

Highly recommended!
Jan 28, 2019 | thwack.solarwinds.com

gcp Jul 8, 2015 10:33 PM

Many years ago I worked at an IBM mainframe site. To make systems more robust, they installed a UPS system for the mainframe, with a battery bank and a honkin' great diesel generator in the yard.

During the commissioning of the system, they decided to test the UPS cutover one afternoon - everything goes *dark* in seconds. Frantic running around to get power back on, the mainframe restarted, and databases recovered (afternoon, remember? during the work day...). Oh! The UPS batteries were not charged! Oops.

Over the next few weeks, they did two more 'tests' during the working day, with everything going *dark* in seconds for various reasons. Oops.

Then they decided - perhaps we should test this outside of office hours. (YAY!)

It still took a few more attempts to get everything working - the diesel generator wouldn't start automatically; they fixed that but forgot to fill up the diesel tank, so cutover was fine until the fuel ran out.

Many, many lessons learned from this episode.

[Jan 28, 2019] False alarm: bad smell in machine room came from a light fixture, not a server

Jan 28, 2019 | www.reddit.com

radiomix Jack of All Trades 5 points 6 points 7 points 3 years ago (2 children)

I was in my main network facility for a municipal fiber-optic ring. Outside were two technicians replacing our backup air-conditioning unit. I walked inside after talking with the two technicians, turned on the lights, and began walking around, just visually checking things in the room. All of a sudden I started smelling that dreaded electric hot/burning smell. In this place I have my core switch, primary router, a handful of servers, some customer equipment, and a couple of racks for my service provider. I started running around the place like a madman, sniffing all the equipment. I even called in the AC technicians to help me sniff.

After 15 minutes we could not narrow down where it was coming from. Finally I noticed that one of the florescent lights had not come on. I grabbed a ladder and opened it up.

The ballast had burned out on the light, and it just so happened to be the light right in front of the AC vent, which blew the smell all over the room.

The last time I had smelled that smell in that room a major piece of equipment went belly up and there was nothing I could do about it.

benjunmun 2 points 3 points 4 points 3 years ago (0 children)
The exact same thing has happened to me. Nothing quite as terrifying as the sudden smell of ozone as you're surrounded by critical computers and electrical gear.

[Jan 28, 2019] Loss of power problems: Machines are running, but every switch in the cabinet is dead. Some servers are dead. Panic sets in.

Jan 28, 2019 | www.reddit.com

eraser_6776 VP IT/Sec (a damn suit) 9 points 10 points 11 points 3 years ago (1 child)

May 22, 2004. There was a rather massive storm here that spawned one of the [biggest tornadoes recorded in Nebraska]( www.tornadochaser.net/hallam.html ), and I was a sysadmin for a small company. It was a Saturday, aka beer day, and as all hell was breaking loose my friends' and roommates' pagers and phones were all going off. "Ha ha!" I said, looking at a silent cellphone, "sucks to be you!"

Next morning around 10 my phone rings, and I groggily answer it because it's the owner of the company. "You'd better come in here; none of the computers will turn on," he says. Slight panic, but I hadn't received any emails. So it must have been breakers, and I can get that fixed. No problem.

I get into the office and something strikes me: that eerie sound of silence. Not a single machine is on... why not? Still shaking off too much beer from the night before, I go into the server room and find out why I didn't get paged. Machines are running, but every switch in the cabinet is dead. Some servers are dead. Panic sets in.

I start walking around the office trying to turn on machines and.. dead. All of them. Every last desktop won't power on. That's when panic REALLY set in.

In the aftermath I found out two things - one, when the building was built, it was built with a steel roof and steel trusses. Two, when my predecessor had the network cabling wired he hired an idiot who didn't know fire code and ran the network cabling, conveniently, along the trusses into the ceiling. Thus, when lightning hit the building it had a perfect ground path to every workstation in the company. Some servers that weren't in the primary cabinet had been wired to a wall jack (which, in turn, went up into the ceiling then back down into the cabinet because you know, wire management!). Thankfully they were all "legacy" servers.

The only thing that saved the main servers was that Cisco 2924 XL-EN's are some badass mofo's that would die before they let that voltage pass through to the servers in the cabinet. At least that's what I told myself.

All in all, it ended up being one of the longest work weeks ever as I first had to source a bunch of switches, fast to get things like mail and the core network back up. Next up was feeding my buddies a bunch of beer and pizza after we raided every box store in town for spools of Cat 5 and threw wire along the floor.

Finally I found out that CDW can and would get you a whole lot of desktops delivered to your door with your software pre-installed in less than 24 hours if you have an open checkbook. Thanks to a great insurance policy, we did. Shipping and "handling" for those were more than the cost of the machines (again, this was back in 2004 and they were business desktops so you can imagine).

Still, for weeks afterward I had non-stop user complaints that generally involved "...I think this is related to the lightning." I drank a lot that summer.

[Jan 28, 2019] Format of the wrong partition initiated during RHEL install

Notable quotes:
"... Look at the screen, check out what it is doing, realize that the installer had grabbed the backend and he said yeah format all(we are not sure exactly how he did it). ..."
Jan 28, 2019 | www.reddit.com

kitched 5 points 6 points 7 points 3 years ago (2 children)

~10 years ago. 100GB drives on a node attached to an 8TB SAN. Cabling is all hooked up as we are adding this new node to manage the existing data on the SAN. A guy is training up to help, so we let him install Red Hat and go through the GUI setup. We did not pay attention to him, and after a while wondered what was taking so long. We walk over to him and he is still staring at the install screen and says, "Hey guys, this format sure is taking a while."

We look at the screen, check out what it is doing, and realize that the installer had grabbed the backend storage and he had said yeah, format all (we are not sure exactly how he did it).

Middle of the day, better kick off the tape restore for 8TB of data.

[Jan 28, 2019] I still went to work that day: tired, grumpy, hyped on caffeine, and teetering between consciousness and a comatose state

Big mistake. This is a perfect state in which to commit some big SNAFU.
Jan 28, 2019 | thwack.solarwinds.com

porterseceng Jul 9, 2015 9:44 AM

I was the on-call technician for the security team supporting a Fortune 500 logistics company; in fact, it was my first time being on-call. My phone rings at about 2:00 AM and the help desk agent says that the Citrix portal is down for everyone. This is a big deal because it's a 24/7 shop with people remoting in from all around the world. While the Citrix Access Gateway, which ran on a NetScaler, is not strictly a security appliance, my team was responsible for it. Also on the line are the systems engineers responsible for the Citrix presentation/application servers.

I log in, check the appliance, look at all of the monitors, everything is reporting up. After about 4 hours of troubleshooting and trying everything within my limited knowledge of this system we get my boss on the line to help.

It came down to this: the Citrix team didn't troubleshoot anything, and it was the StoreFront and broker servers that were having the trouble; but since the CAG wouldn't let people see any applications, they instantly pointed the finger at the security team and blamed us.

I still went to work that day, tired, grumpy, and hyped on caffeine, teetering between consciousness and a comatose state, for two reasons: the Citrix team didn't know how to do their job, and I was too tired to ask the investigating questions like "When did it stop working? Has anything changed? What have you looked at so far?"

[Jan 28, 2019] Any horror stories about tired sysadmins...

Long story short, don't drink soda late at night, especially near your laptop! Soda spills are not easy to clean up.
Jan 28, 2019 | thwack.solarwinds.com

mickyred 1 point 2 points 3 points 4 years ago (1 child)

I initially read this as "Any horror stories about tired sysadmins..."
cpbills Sr. Linux Admin 1 point 2 points 3 points 4 years ago (0 children)
They exist. This is why 'good' employers provide coffee.

[Jan 28, 2019] Something about the meaning of the word space

Jul 13, 2015 | thwack.solarwinds.com

Jul 13, 2015 7:44 AM

Trying to walk a tech through some switch config.

me: type config space t

them: it doesn't work

me: <sigh> <spells out config> space the single letter t

them: it still doesn't work

--- try some other rudimentary things ---

me: uh, are you typing in the word 'space'?

them: you said to

[Jan 28, 2019] Any horror stories about fired sysadmins

Notable quotes:
"... leave chat logs on his computer detailing criminal activity like doing drugs in the office late at night and theft ..."
"... the law assumes that [he/she] has suffered this harm ..."
"... assumed by the law ..."
"... The Police are asking the public if anyone has information on "The Ethernet Killer" to please come forward ..."
Jan 28, 2016 | www.reddit.com

nai1sirk

Everyone seems to be really paranoid when firing a senior sysadmin. Advice seems to range from "check for backdoors" to "remove privileges while he is in the severance meeting."

I think it sounds a bit paranoid, to be honest. I know the media loves these stories, and I doubt they are that common.

Has anyone actually personally experienced a fired sysadmin who has retaliated?

skibumatbu 42 points 43 points 44 points 4 years ago (5 children)
Many moons ago I worked for a very large dot com. I won't even call it a startup, as they were pretty big and well used. They were hacked once. The guy did a ransom type of thing and the company paid him. But they also hired him as a sysadmin with a focus on security. For 6 months he did nothing but surf IRC channels. One thing leads to another and the guy was fired. A month later I'm looking at an issue on a host and notice a weird port open on the front-end web server (the guy was so good at security that he insisted on no firewalls). Turns out the guy had hacked back into our servers. The next week our primary database goes down for 24 hours. I wonder how that happened.

He eventually got on the Secret Service's radar for stealing credit card information from thousands of people. He's now in jail.

nai1sirk [ S ] 39 points 40 points 41 points 4 years ago (3 children)
Flawless logic; hire criminal as sheriff, suddenly he's a good guy
skibumatbu 13 points 14 points 15 points 4 years ago (0 children)
It works in TV shows.
VexingRaven 7 points 8 points 9 points 4 years ago (0 children)
It works for the FBI. But it all depends on the TYPE of blackhat you hire. You want the kind who are in it just to prove they can do it, not the type who are out to make money.
AsciiFace DevOps Tooling 7 points 8 points 9 points 4 years ago (0 children)
To be fair, I have a large group of friends to whom exactly this happened. From what I hear, it is kind of cool to be legally allowed to commit perjury for the sake of work (infosec).
cpbills Sr. Linux Admin 3 points 4 points 5 points 4 years ago (0 children)

the guy was so good at security that he insisted on no firewalls

HAHAHAHAHAHAHAHAHA.

Slamp872 Linux Admin 12 points 13 points 14 points 4 years ago (17 children)

"haha, nice backups dickhead"

That's career suicide, especially in a smaller IT market.

Ashmedai 4 points 5 points 6 points 4 years ago (15 children)

That's career suicide, especially in a smaller IT market.

It ought to be, but it would be libel per se to say that he did this, meaning that if he sues, you'd have to prove what you said was true, or you'd lose by default, and the damages would be assumed by the court to be large. Libel per se is nothing to sneeze at.

dmsean DevOps 3 points 4 points 5 points 4 years ago (5 children)
I always hated that. We had a guy steal from us: servers, mobile phones, computers, etc. We caught him on footage, and he was even dumb enough to leave chat logs on his computer detailing criminal activity like doing drugs in the office late at night and theft . We were such a small shop at the time. We fired him, and nobody followed up and filed any charges. Around 2 months later we got a call from the employment insurance office saying the dispute was that we claimed he stole office equipment but had no proof. We would have had to hire lawyers and it just wasn't worth it... we let it go and let him have his free money. Always pissed me off.
Ashmedai 4 points 5 points 6 points 4 years ago (0 children)
That resolution is typical, I'm afraid.
VexingRaven 4 points 5 points 6 points 4 years ago (3 children)
This is why you press charges. If he'd been convicted of theft, which he almost surely would have been with so much evidence, he not only would've had no ground to stand on for that suit, he'd be in jail. The best part is, the police handle the pressing of charges, because criminal prosecution is state v. accused.
dmsean DevOps 3 points 4 points 5 points 4 years ago (2 children)
Yeah, I really wish we had. But when your VP of customer service takes support calls, when your CFO is also a lead programmer, and your accountant's primary focus is calling customers to pay their bills, it's easier said than done!
neoice Principal Linux Systems Engineer 1 point 2 points 3 points 4 years ago (1 child)
I dunno, for criminal proceedings the police do most of the legwork. When my car got stolen, I had to fill out and sign a statement, and then the state did the rest.

You probably would have spent 2-16 hours with a detective going over the evidence and filing your report. I think that's a pretty low cost to let the wheels of justice spin!

dmsean DevOps 0 points 1 point 2 points 4 years ago (0 children)
I'm Canadian, and the theft was under $5000, so it would have had to go to the local court. The Vancouver PD are really horrible.
the_ancient1 Say no to BYOD 2 points 3 points 4 points 4 years ago (4 children)
Depending on the position of the person, i.e., a manager or someone in management at the former employee's company, a person "bad-mouthing" an ex-employee may also be in violation of the anti-blacklisting laws in place in many/most states, which prohibit companies and authorized agents of companies (HR, managers, etc.) from (and I quote from the statute)

using any words that any discharged employees, or attempt[ing] by words or writing, or any other means whatever, to prevent such discharged employee, or any employee who may have voluntarily left said company's service from obtaining employment with any other person, or company. -

So most businesses do not authorize their managers or HR to do anything other than confirm dates of employment, job title, and possibly wage.

Ashmedai 1 point 2 points 3 points 4 years ago (3 children)
I'm (only a little) surprised by this. Since you are quoting from statute, can you link me? I'm curious about that whole section of code. I should assume this is some specific state, yes?
the_ancient1 Say no to BYOD 2 points 3 points 4 points 4 years ago * (2 children)
http://www.in.gov/legislative/ic/2010/title22/ar5/ch3.pdf

The quote is from IC 22-5-3-2, which on its face seems to apply only to "Railroads" but also clearly says "any other company" in the text.

There are some of the standard libel protections as well for truthful statements, but you must prove the statement was truthful. Something like "he was always late," if you have time cards to prove it, would not be in violation, but something like "he smelled bad, was rude, and was incompetent" would likely be a violation.

user4201 1 point 2 points 3 points 4 years ago (1 child)
Your second example is actually just three opinions, which a person cannot sue you for. If I declare that I thought you smelled bad and were an incompetent employee, I'm not making libelous statements, because libel laws don't cover personal opinions. If I say you were late every day, that is a factual statement that can either be proven or disproven, so libel law now applies.
the_ancient1 Say no to BYOD 0 points 1 point 2 points 4 years ago (0 children)

libel laws don't cover personal opinions.

Libel laws do not, blacklisting laws do if your statements are in relation to an inquiry by a potential employer of a former employee

CaptainDave Infrastructure Engineer 0 points 1 point 2 points 4 years ago (3 children)
Libel per se is just libel that doesn't need intent to be proved. It has nothing to do with burdens or quanta of proof, much less presumptions.
Ashmedai 0 points 1 point 2 points 4 years ago * (2 children)
"CaptainDave is a convicted pederast" would be an example of libel per se. It doesn't matter if I believe it true. It doesn't matter if I am not malicious in my statement. The statement must be true for it to not be libel. If CaptainDave were to sue me, I'd have to show proof of that conviction. CaptainDave would not be required to prove the statement false. The court would not be interested in investigating the matter itself, so the burden of proof would shift to me . If I were not to succeed in proving this statement, the court would assume the damages of this class of libel to be high. Generally; with caveats.
CaptainDave Infrastructure Engineer 0 points 1 point 2 points 4 years ago (1 child)
That's what I was saying: there's no intent element. That's what's meant by "libel per se." That doesn't shift the burden of proof, it just means there's one less thing to show. You have burden shifting confused with the affirmative defense that the statement was true; however, that, as an affirmative defense, is always on the party pressing it. There is thus no burden shifting. Moreover, you have to prove your damages no matter what; there is no presumption as to their amount (beyond "nominal," IIRC).
Ashmedai 0 points 1 point 2 points 4 years ago * (0 children)

Moreover, you have to prove your damages no matter what; there is no presumption as to their amount

These are the actual written instructions given to juries in California:

"Even if [name of plaintiff] has not proved any actual damages for harm to reputation or shame, mortification or hurt feelings, the law assumes that [he/she] has suffered this harm . Without presenting evidence of damage, [name of plaintiff] is entitled to receive compensation for this assumed harm in whatever sum you believe is reasonable."

Juries, of course, are not instructed on an actual amount here. As you say, it might only be nominal. But in the case of an employer leaving a negative reference about an employee accusing them of a crime? It won't be as a matter of practice, now will it? The jury has been told the harm to reputation and mortification is assumed by the law . While this does not guarantee a sympathetic jury, and obviously the case will have its context, I'll make the assumption starting right now that you don't want to be on the receiving end of a legitimate libel per se case, is that fair? :-P

At least in California. I've been told not all states have libel per se laws, but I really wouldn't know.

As far as my statement that "would be assumed by the court to be large," this was sloppily worded, yes. Let's just say that, with wording like the above, the only real test is... "is the jury offended on your behalf?" Because if they are, with instructions like that, and any actually serious libel per se case, defendant is screwed. It's also a bit of a stinger that attorney's fees are generally included in libel per se cases (at least according to black letter law; IANAL, so I'm not acquainted with real case histories).

cpbills Sr. Linux Admin 0 points 1 point 2 points 4 years ago (0 children)

When we got rid of the last guy I worked with he remoted into one of our servers

People need to know that this is a big no-no. Whether your employer remembered to delete your accounts or not, attempting to access or accessing servers once you've been terminated is against the law in most places.

Whether you're being malicious or not, accessing or even attempting to access systems you no longer have permission to access can easily be construed as malicious.

secretphoto 13 points 14 points 15 points 4 years ago (0 children)
I was working late night in our colo within $LARGE_DATA_RECOVERY_CENTER. One of the sysadmins (financials) for the hosting company was there, telling me how she was getting the axe and had to train her overseas counterparts to do her job. Let's say she was less than gruntled. She mentioned something about "at(1) jobs they'll never find."

Years later I read a vague article in an industry journal about insider sabotage at a local company that caused millions of dollars of downtime...

I'm not sure if that was her, but the details lined up very closely, and it makes a lot of sense that the F500 company would want to sweep this under the rug.

VexingRaven 2 points 3 points 4 points 4 years ago (6 children)
That's ridiculous. If that's a crime, this whole sub and /r/talesfromtechsupport are filled with crimes. Instead, we call it stupidity.
thatmorrowguy Netsec Admin 8 points 9 points 10 points 4 years ago (5 children)
The Childs case is like if you took your average /r/talesfromtechsupport story and mixed it with about 50% more paranoia and half as much common sense - continuing to refuse requests for the administrator passwords even after being arrested. If management asked me for the passwords to all of my systems, they can have them. In fact, in my exit interview, I would be more than happy to point out each and every remote access method that I have to their systems, and request that all of those passwords are changed. I don't WANT there to be any conceivable way for me to get back into a previous employers' environment when I go. Whenever I leave a team, my last action is deactivating all of my own database logins, removing my sudo rights, removing myself from any groups with elevated rights, ensuring that the team will be changing admin passwords ASAP. That way when colleagues and customers come back pleading for me to fix stuff, I can honestly tell them I no longer have the ability to solve their problem - go hit up the new guy. They stop calling much quicker that way.
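A rough shell sketch of such a self-offboarding pass might look like the following; every name here (jdoe, the sudo group, the PostgreSQL role) is illustrative rather than taken from the comment above:

    #!/bin/sh
    # Hypothetical self-offboarding sketch; all names are made up.
    USER=jdoe

    sudo usermod -L "$USER"                        # lock the account password
    sudo chage -E 0 "$USER"                        # expire the account immediately
    sudo gpasswd -d "$USER" sudo                   # drop sudo group membership
    sudo rm -f "/home/$USER/.ssh/authorized_keys"  # remove SSH key trust

    # Disable a matching database role, if one exists (assumption):
    sudo -u postgres psql -c "ALTER ROLE jdoe NOLOGIN;"

The point of the comment stands either way: the goal is to make "I no longer have access" a verifiable fact rather than a promise.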
neoice Principal Linux Systems Engineer 2 points 3 points 4 points 4 years ago (4 children)

Whenever I leave a team, my last action is deactivating all of my own database logins, removing my sudo rights, removing myself from any groups with elevated rights, ensuring that the team will be changing admin passwords ASAP.

I <3 our version controlled infrastructure. I could remove my database logins, sudo rights and lock my user account with a single commit. then I could push another commit to revoke my commit privs :)

David_Crockett 0 points 1 point 2 points 4 years ago (1 child)
Sounds nifty. How do you have it set up? SVN?
neoice Principal Linux Systems Engineer 0 points 1 point 2 points 4 years ago (0 children)
git+gitolite+Puppet.
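For readers unfamiliar with that stack: when accounts and privileges are declared in a version-controlled Puppet tree served through gitolite, revoking your own access really can come down to a couple of commits. A hedged sketch of the workflow (repository address, file path, and user name are all hypothetical):

    # Hypothetical workflow; every name and path is made up.
    git clone git@gitolite.example.com:puppet
    cd puppet
    # Edit the manifest so the departing admin's account reads:
    #   user { 'jdoe': ensure => absent, }
    $EDITOR manifests/admins.pp
    git commit -am "Offboard jdoe: remove account, sudo and DB grants"
    git push   # every Puppet agent drops the account on its next run

    # A second commit, to the gitolite-admin repo, can then delete jdoe's
    # public key, revoking commit access itself.

The design appeal is the one neoice describes: access removal becomes a reviewable, audited change rather than a hunt through individual machines.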
thatmorrowguy Netsec Admin 0 points 1 point 2 points 4 years ago (1 child)
I am envious of your setup. Ours is very fragmented, but cobbled together with tons of somewhat fragile home-grown scripts, mostly manageable. Somehow configuration management never seems to make it to the top of the project list ...
neoice Principal Linux Systems Engineer 0 points 1 point 2 points 4 years ago (0 children)
Doooo it. It's so, so worth it, even if your initial rollout is just getting a config management daemon installed and managed. Once you get over the initial hurdle of having your config management infrastructure in place, updates become cheap and fast. It really will accelerate your organization.
Antoak 1 point 2 points 3 points 4 years ago (0 children)
What a coincidence: I'm listening to an interview with the chief security officer involved in that incident here.
the_ancient1 Say no to BYOD 25 points 26 points 27 points 4 years ago (21 children)
Most of these "horror stories" are not malicious in nature, but trace back to poor documentation, and experience.

If an admin has been in the same environment for 10+ years, they know all the quirks, all of one off scripts that hold critical systems together, how each piece fits with each other piece, etc.

So when a new person comes in off the street with no time to do a knowledge transfer, which normally takes months or years in some cases, problems arise and the immediate reaction is "Ex Employee did this on purpose because we fired them"

loquacious 16 points 17 points 18 points 4 years ago (3 children)
Solution: Replace management with a small shell script.
David_Crockett 1 point 2 points 3 points 4 years ago (1 child)

to0 valuable.

FTFY

Kreiger81 2 points 3 points 4 points 4 years ago (0 children)
I had a warehouse job where I ran into a similar issue. The person I was being brought in to replace had been there for 25+ years, and even though I was being trained by her, I wasn't up to her speed on a lot of the procedures, and I didn't have the entire warehouse memorized like she did (she was on the planning team that BUILT the damned thing).

They never understood that I could not do in six months what it took her 25 years to perfect. "But Kreiger, she's old and retiring, how come you can't do it that fast?"

the_ancient1 Say no to BYOD 8 points 9 points 10 points 4 years ago (10 children)
Bus factor ....

Unfortunately most companies have a bus factor of 1 on most systems. They do not want to pay the money to get a higher number.

gawdimatwrk 4 points 5 points 6 points 4 years ago (1 child)
I was hired because of the bus factor. My boss crashed his motorcycle, and management was forced to hire someone while he was recovering. But when he returned to work it was back to the old habits. Even now he leaves me off all the important stuff and documents nothing. Senior management is aware and they don't care. Needless to say, I haven't stopped looking.
the_ancient1 Say no to BYOD 4 points 5 points 6 points 4 years ago (0 children)
This is why bean counters and IT always clash.

IT sees redundancy as critical to operational health and security.

Bean counters see redundancy as waste and an easy way to bump the quarterly numbers.

dmsean DevOps 2 points 3 points 4 points 4 years ago (3 children)
I hate the bus factor. I prefer to be one of those optimist types and say, "What if Jon Bob won the lottery!"
ChrisOfAllTrades Admin ALL the things! 8 points 9 points 10 points 4 years ago (2 children)
Well, not being a dick, if I won the lottery, I'd probably stick around doing stuff but having way, way less stress about any of it. Finish up documentation, wrap up loose ends, take the team out for dinner and beers, then leave.

Hit by a bus? I'm not going to be doing jack shit if I'm in a casket.

VexingRaven 0 points 1 point 2 points 4 years ago (0 children)
TIL Jack Shit likes dead people.
neoice Principal Linux Systems Engineer 0 points 1 point 2 points 4 years ago (3 children)
My company's bus factor is 1.5-1.75. There are still places where knowledge is stored in a single head, but we're confident that the other party could figure things out given some time.
the_ancient1 Say no to BYOD 1 point 2 points 3 points 4 years ago (2 children)

other party could figure things out given some time.

Of course, given enough time, a qualified person could figure out any system; that is not what the bus factor is about.

The bus factor is: if you die, quit, or are in some way incapacitated TODAY, can someone pick up where you left off without any impact on business operations?

It does not mean "Yes, we only have 1 admin for this system, but the admins of another system can figure it out over 6 months." That would be a bus factor of 1.

neoice Principal Linux Systems Engineer 0 points 1 point 2 points 4 years ago (1 child)
I think it would be 3-6 months before anything broke that wasn't bus factor 2, and probably 6-9 months for something actually important.
Strelock 1 point 2 points 3 points 4 years ago (0 children)
Or tomorrow...
Nyarlathotep124 0 points 1 point 2 points 4 years ago (0 children)
"Job Security"
jhulbe Citrix Admin 5 points 6 points 7 points 4 years ago (1 child)
Yeah, it's usually a lot resting on one person's shoulders, and then he becomes overworked and resentful of his job but happy to have it. Then, if he's ever let go, he feels like he's owed something and gets upset because of all the time he has put into his masterpiece.

Or wait, is that serial killers?

Shock223 Student 1 point 2 points 3 points 4 years ago (0 children)

Or wait, is that serial killers?

The Police are asking the public if anyone has information on "The Ethernet Killer" to please come forward

curiousGambler 1 point 2 points 3 points 4 years ago (0 children)

the bean counters

Having just started as a software engineer at a major bank, I love this. Or hate it haha!

Slagwag 6 points 7 points 8 points 4 years ago (0 children)
At a previous job we had a strict password-change policy for when someone left the company or was let go. Unfortunately the password change didn't include a task to change it on the backup system, and we had a centralized offsite backup location for all of our customers. An employee who was let go must have tried all systems and found this one was still available. He connected in, deleted all backup data, and stopped the backups from running. He then connected into the customer's network (I believe this customer wanted RDP open on a specific port despite our advice) and used that to connect in and delete their data.

The person tried to make this look like it was not him by using local public wifi, but it was traced to him: his EZPass had triggered when he drove to the location where it was done.

Unfortunately, years after this occurred, I think it is still pending investigation, and nothing was really done.

Loki-L Please contact your System Administrator 7 points 8 points 9 points 4 years ago (2 children)
So far nothing worse than a bad online review has ever happened from a co-worker leaving. Mostly that is because everyone here is sort of a professional, and half of the co-workers who have left went to customers or partner companies or otherwise kept in business contact. There has been very little bridge burning, despite a relatively high turnover in the IT department.

Part of me is hoping that someone would try to do something, just so I could have something to show the bosses about why I am always talking about having a better exit procedure than just stopping paying people and having the rest of the company find out by themselves sooner or later. There have been several instances of me deactivating accounts days or even weeks after someone stopped working for us, because nobody thought to tell anyone...

On the flip side, it appears that if I ever left my current employer I would not need to sabotage them or withhold any critical information. Based on the fact that they managed to call me on the first day of my vacation (before I actually planned to get up, really) for something that was both obvious and well documented, I half expect them to simply collapse by themselves if I stayed away for more than two weeks.

mwerte in over my head 1 point 2 points 3 points 4 years ago (0 children)
My old company kept paying people on several occasions because nobody bothered to fill out the three-field form (first name, last name, date of termination) that was very prominently displayed on our intranet, send us an email, or even stop by and say "by the way...". It was good times.
Lagkiller 2 points 3 points 4 points 4 years ago (1 child)
It's not paranoia if they really are out to get you
AceBacker 0 points 1 point 2 points 4 years ago (1 child)
Reminds me of a saying that I heard once.

If Network Security guys ran the police department, the police would stop writing tickets. Instead they would just shoot the speeders.

punkwalrus DevOps 7 points 8 points 9 points 4 years ago (0 children)
It was 1998, and our company had been through another round of layoffs. A few months later, a rep in member services got a weird error while attempting to log into a large database. "Please enter in administrative password." She showed it to her supervisor, who had a password for those types of errors. The manager usually just keyed in the password, which was used to fix random corrupt or incomplete records. But instead, she paused.

"Why would my rep get this error upon login?"

She called down to the database folks, who did a search and immediately shut down database access, which pretty much kept all member service reps from doing any work for the rest of the day.

Turns out, one of the previous DBA/programmers had released a "time bomb" of sorts into the database client. Long story short, it was one of those, "if date is greater than [6 months from last build], run a delete query on the primary key upon first login." His mistake was that the db client was used by a rep who didn't have access to delete records. Had her manager just typed in a password, they would have wiped and made useless over 50 million records. Sure, they had backups, but upon restore, it would have done it again.

IIRC, the supervisor and rep got some kind of reward or bonus.

The former DBA was formally charged with whatever the law was back then, but I don't know what became of him after he was charged.

Sideonecincy 6 points 7 points 8 points 4 years ago (3 children)
This isn't a personal experience, but it was a recent news story that led to prison time. The guy ended up with a 4-year prison sentence and a $500k fine.

In June 2012, Mitchell found out he was going to be fired from EnerVest and in response he decided to reset the company's servers to their original factory settings. He also disabled cooling equipment for EnerVest's systems and disabled a data-replication process.

Mitchell's actions left EnerVest unable to "fully communicate or conduct business operations" for about 30 days, according to Booth's office. The company also had to spend hundreds of thousands of dollars on data-recovery efforts, and part of the information could not be retrieved.

http://www.pcworld.com/article/2158020/it-pro-gets-prison-time-for-sabotaging-exemployers-system.html

MightySasquatch 2 points 3 points 4 points 4 years ago (2 children)
I honestly think people don't stop to consider that intentionally damaging equipment is illegal.
telemecanique 0 points 1 point 2 points 4 years ago (1 child)
huh? they just don't think period, but that's the point... we're all capable of it
MightySasquatch 0 points 1 point 2 points 4 years ago (0 children)
Maybe that's true, I suppose it depends on circumstance
jdom22 Master of none 6 points 7 points 8 points 4 years ago (2 children)
Gaining access to a network you are not permitted to access = federal crime. Doing so after a bitter departure makes you suspect #1. Don't do it. You will get caught, you will go to jail, and you will likely never work in IT again.
wolfmann Jack of All Trades 2 points 3 points 4 points 4 years ago (0 children)

you will likely never work in IT again.

not so sure about that... there are several hackers out there that have their own consulting businesses that are doing quite well.

cpbills Sr. Linux Admin 0 points 1 point 2 points 4 years ago (0 children)
Even attempting to access systems you no longer have permission to access can be construed as malicious in nature and a crime.
lawrish Automation Lover 5 points 6 points 7 points 4 years ago (0 children)
Once upon a time my company was a mess; they had next to no network infrastructure. Firewalls? Too fancy. Everything on the external-facing servers was open. A contractor put a back door in 4 of those servers, granting root access. Did he ever use it? No idea. I discovered it 5 years later. Not only that, he had been bright enough to upload that really unique code to GitHub, with his full name and a LinkedIn profile linking him to my current company for 3 months.
jaydestro Sysadmin 16 points 17 points 18 points 4 years ago (5 children)
Here's a tip from a senior sysadmin to anyone considering terminating a peer's employment...

Treating someone like garbage is one of the main reasons the person in question might put in "backdoors" or anything else that could be malicious. I've been fired from a job before, and you know what I did to "get back at them"? I got another, better job.

Be a pro to the person and they'll be a pro to you, even when it's time to move on.

I know one person who retaliated after being fired, and he went to prison for a year. He was really young and dumb at the time, but it taught me a big lesson about how to act in this industry. Getting mad gets you nowhere.

dmsean DevOps 6 points 7 points 8 points 4 years ago (2 children)
I've watched 3 senior IT people get fired. All of them were given very cushy severances (like 4 months) and walked out the door with all sorts of statements like "we are willing to be a good reference for you," etc.
superspeck 3 points 4 points 5 points 4 years ago (1 child)
Seen this happen too, but it's usually been when a senior person gets a little crusty around the edges, starts being an impediment, and refuses to do things the 'new' way.
AceBacker 2 points 3 points 4 points 4 years ago (0 children)
I call this the Dick Van Dyke on Scrubs effect.

The Scrubs episode "My Brother, My Keeper" goes into perfect detail about this.

wolfmann Jack of All Trades 1 point 2 points 3 points 4 years ago (0 children)
Fear leads to anger. Anger leads to hate. Hate leads to suffering.

should have just watched Star Wars instead.

telemecanique 0 points 1 point 2 points 4 years ago (0 children)
You assume that you, or that person, can think rationally at that point in time, in every case; that assumption is incorrect.
telemecanique 1 point 2 points 3 points 4 years ago (2 children)
It has nothing to do with logic; EVERYONE can snap under the right circumstances. It's why school shootings, postman shootings, even regular road rage, and really any craziness happen: we all have a different amount of stress that will make us simply not care, but we're all capable of losing our shit. Imagine if your wife divorces you, you lose your kids, you get raped in divorce court, your work suffers, you get fired, and you have access to a gun, or in this case a keyboard + admin access... a million ways for a person to snap.
telemecanique 0 points 1 point 2 points 4 years ago (0 children)
And 99.9% of people in 99.9% of cases do; you're missing the simple truth that it can happen to anyone at any time, and you never know what someone you're firing today has been going through in the last 6 months. Hence you should worry.
JetlagMk2 Master of None 4 points 5 points 6 points 4 years ago (0 children)
The BACKGROUND section of this file is relevant
Omega Engineering Corp. ("Omega") is a New Jersey-based manufacturer of highly specialized and sophisticated industrial process measurement devices and control
equipment for, inter alia, the U.S. Navy and NASA. On July 31, 1996, all its design and production computer programs were permanently deleted. About 1,200 computer programs
were deleted and purged, crippling Omega's manufacturing capabilities and resulting in a loss of millions of dollars in sales and contracts.

There's an interesting rumor that because of the insurance payout Omega actually profited from the sabotage. Maybe that's the real business lesson.

danfirst 6 points 7 points 8 points 4 years ago (0 children)
Not a horror story exactly, but my first IT job was working for a small non-profit as the "IT Guy": servers, networks, users, whatever. My manager wanted to get her nephew into IT, so she "chose not to renew my contract". She and the HR lady brought me in, told me I wasn't being renewed, and said that for someone in my position they should have someone go to my desk, pack everything for me, and escort me out so I couldn't damage anything.

I told her, "listen, you both should know I'm not that sort of person, but really, I can access the entire system from home, easily, if I wanted to trash things I could, but I don't do that sort of thing. So how about you give me 10 minutes so I can pack up my own personal things?" They both turned completely white and nodded their heads and I left.

I got emails from the staff for months; the new guy was horrible. My manager was let go a few months later. Too bad about the timing, really, as it was a pretty great first IT job.

lawtechie 3 points 4 points 5 points 4 years ago (0 children)
I used to be a sysadmin at a small web hosting company owned by a family member. When I went to law school, I asked to take a reduced role. The current management didn't really understand systems administration, so they asked the outside developer to take on this role.

They then got into a licensing dispute with the developer over ownership of their code. The dev locked them out and threatened to wipe the servers if he wasn't paid. He started deleting email accounts and websites as threats. So, I get called the night before the bar exam by the family member.

I walked him through manually changing the root passwords and locking out all the unknown users. The real annoyance came when I asked the owner for some simple information with which to threaten the developer. It turned out the owner didn't even know the guy's full name or address. The checks were sent to a P.O. box.

BerkeleyFarmGirl Jane of Most Trades 4 points 5 points 6 points 4 years ago (0 children)
I had some minor issues with a co-worker. He got bounced out because he had a bad habit of changing things and not testing them (and not being around to deal with the fallout). He was also super high-control.

I knew enough about the system (he was actually my "replacement" when my helpdesk/support/sysadmin job was too big for one person) to head things off at the pass, but one day I was at the office late doing sysadmin stuff and got bounced off everything. Turns out he had set "login hours" on my account.

munky9002 7 points 8 points 9 points 4 years ago (3 children)
I had one where I was taking over and they still had access; we weren't supposed to cut off access. Well they setup backup exclusion and deleted all the backups of a certain directory. This directory had about 20 scripts which eliminates people's jobs and after about 1 week they deleted the folder.

Mind you I had no idea it was even there. The disaster started in the morning and eventually after lunch all I did was log in as their user and restore it from their recycle bin.

We then kept the story going and asking them for copies of the scripts etc etc. They played it off like 'oh wow you guys havent even taken over yet and there's a disaster' and 'unfortunately we don't have copies of your scripts.'

It was days before they managed to find them and send them to us. You should also read the things: REM This script is a limited license, to use only if you are our customer. Copyrights are ours.

So naturally I fixed their scripts, as there were problems with them, and I put the GPL at the top. A month later they contacted the CFO with a quote of $40,000 to allow us to keep using their intellectual property. I wish I got to see their faces when they got the email back saying:

"We caught you deleting the scripts and since it took you too long to respond and provide us with the scripts we wrote our owned and we licensed this with GPL, because it would be unethical to do otherwise.

Fortunately since we are not using your scripts and you just sent them to us without any mention of cost; we owe nothing."

munky9002, 4 years ago

However, slapping the GPL on top of someone else's licensed code doesn't actually GPL it.

I never said I put GPL on their work. I put GPL on my work. I recreated the scripts from scratch. I can license my own work however I damned well feel.

Skrp, 4 years ago
They're not that common, but the malicious insider threat is a very real concern.
punkwalrus (DevOps), 4 years ago
In the 1980s, there was a story that went around the computer hobbyist circles about a travel agency in our area. They were big, and had all kinds of TV ads. Their claim to fame was how modern they were (for the time), and used computers to find the best deal at the last second, predict travel cost trends, and so on.

But behind the scenes, all was not well. The person in charge of their computer system was this older, crotchety bastard who was a former IBM or DEC employee (I forget which) - the stereotype of the BOFH before that was even a thing. He was unfriendly, made his own hours, and as time went on demanded more money, less work, and more hardware, and management hated him. They tried to hire him help, but he refused to tell the new guys anything, and after 2-3 years of assistant techs quitting, they finally fired the guy and hired a consulting team to take over.

The programmer left quietly, didn't create a fuss, and no one suspected anything was amiss. But at some point, he dialed back into the mainframe and wiped all records and data. The backup tapes were all blank, too. He didn't document anything.

This pretty much fucked the company. They were out of business within a few months.

The big news at the time was that there was no precedent for this type of behavior, and there were no laws specific to this kind of crime. Essentially, they didn't have any proof of what he did, and those who could prove it didn't have a case because it wasn't a crime yet. He couldn't be charged with destruction of property, because no property was actually touched (from a legal perspective). This led to more modern laws, including some of the first laws against malicious deletion of data.

BerkeleyFarmGirl (Jane of Most Trades), 4 years ago
I worked for a local government agency and our group acquired a sociopath boss.

My then supervisor (direct report to $Crazy) found another job and gave his notice. On his last day he admitted that he had considered screwing with things but that the people mostly hurt by it would be us and his beef was not with us.

$Crazy must have heard because all future leavers in the group (and there was hella turnover) got put on admin leave the minute they gave notice. I.e., no system access.

girlgerms (Windows Syster), 4 years ago
This is also more a people/process issue than a technical one.

If your processes are in place to ensure documentation is written, access is listed somewhere etc. then it shouldn't be an issue.

If the people you are hiring are like this, then there was an issue in the hiring process - people with this kind of ethics aren't good admins. They're not even admins. They're chumps.

tahoebigah, 4 years ago
The guy I actually replaced was leaving on bad terms and let loose Conficker right before he left and caused a lot of other issues. He is now the Director of IT at another corporation ....
Pookiebeary, 4 years ago
Change the admin password and the pw of the Terminated Domain Admin (TAD). Reformat TAD's PCs. Tell TAD he's welcome to x months of severance as long as he doesn't come back or start shit. Worked for us so far...

[Jan 28, 2019] Happy Sysadmin Appreciation Day 2016

Jan 28, 2019 | opensource.com

dale.sykora on 29 Jul 2016

I have a horror story from another IT person. One day they were tasked with adding a new server to a rack in their data center. They added the server... being careful not to bump a cable to the nearby production servers, SAN, and network switch. The physical install went well. But when they powered on the server, the ENTIRE RACK went dark. Customers were not happy :( It turns out that the power circuit they attached the server to was already at max capacity and thus they caused the breaker to trip. Lessons learned... use redundant power and monitor power consumption.

Another issue was being a newbie on a Cisco switch, making a few changes, and thinking the innocent-sounding "reload" command would work like Linux does when you restart a daemon. Watching 48 link activity LEDs go dark on your VMware cluster switch... Priceless

[Jan 28, 2019] The ghost of the failed restore

Notable quotes:
"... "Of course! You told me that I had to stay a couple of extra hours to perform that task," I answered. "Exactly! But you preferred to leave early without finishing that task," he said. "Oh my! I thought it was optional!" I exclaimed. ..."
"... "It was, it was " ..."
"... Even with the best solution that promises to make the most thorough backups, the ghost of the failed restoration can appear, darkening our job skills, if we don't make a habit of validating the backup every time. ..."
Nov 01, 2018 | opensource.com

In a well-known data center (whose name I do not want to remember), one cold October night we had a production outage in which thousands of web servers stopped responding due to downtime in the main database. The database administrator asked me, the rookie sysadmin, to recover the database's last full backup and restore it to bring the service back online.

But, at the end of the process, the database was still broken. I didn't worry, because there were other full backup files in stock. However, even after doing the process several times, the result didn't change.

With great fear, I asked the senior sysadmin what to do to fix this behavior.

"You remember when I showed you, a few days ago, how the full backup script was running? Something about how important it was to validate the backup?" responded the sysadmin.

"Of course! You told me that I had to stay a couple of extra hours to perform that task," I answered. "Exactly! But you preferred to leave early without finishing that task," he said. "Oh my! I thought it was optional!" I exclaimed.

"It was, it was "

Moral of the story: Even with the best solution that promises to make the most thorough backups, the ghost of the failed restoration can appear, darkening our job skills, if we don't make a habit of validating the backup every time.

[Jan 28, 2019] The danger of a single backup hard drive (USB or not)

The most typical danger is dropping the hard drive on the floor.
Notable quotes:
"... Also, backing up to another disk in the same computer will probably not save you when lighting strikes, as the backup disk is just as likely to be fried as the main disk. ..."
"... In real life, the backup strategy and hardware/software choices to support it is (as most other things) a balancing act. The important thing is that you have a strategy, and that you test it regularly to make sure it works as intended (as the main point is in the article). Also, realizing that achieving 100% backup security is impossible might save a lot of time in setting up the strategy. ..."
Nov 08, 2002 | www.linuxjournal.com

Anonymous on Fri, 11/08/2002

Why don't you just buy an extra hard disk and have a copy of your important data there. With today's prices it doesn't cost anything.

Anonymous on Fri, 11/08/2002 - 03:00

A lot of people seem to have this idea, and in many situations it should work fine.

However, there is the human factor. Sometimes simple things go wrong (as simple as copying a file), and it takes a while before anybody notices that the contents of this file is not what is expected. This means you have to have many "generations" of backup of the file in order to be able to restore it, and in order to not put all the "eggs in the same basket" each of the file backups should be on a separate physical device.

Also, backing up to another disk in the same computer will probably not save you when lighting strikes, as the backup disk is just as likely to be fried as the main disk.

In real life, the backup strategy and hardware/software choices to support it is (as most other things) a balancing act. The important thing is that you have a strategy, and that you test it regularly to make sure it works as intended (as the main point is in the article). Also, realizing that achieving 100% backup security is impossible might save a lot of time in setting up the strategy.

(I.e. you have to say that this strategy has certain specified limits, like not being able to restore a file to its intermediate state sometime during a workday, only to the state it had when it was last backed up, which should be a maximum of xxx hours ago and so on...)

Hallvard P

[Jan 28, 2019] Those power cables ;-)

Jan 28, 2019 | opensource.com

John Fano on 31 Jul 2016

I was reaching down to power up the new UPS as my guy was stepping out from behind the rack, and the whole rack went dark. His foot caught the power cord of the working UPS and pulled it just enough to break the contacts, and since the battery had failed it couldn't provide power and shut off. It took about 30 minutes to bring everything back up.

Things went much better with the second UPS replacement. :-)

[Jan 28, 2019] "Right," I said. "Time to get the backup." I knew I had to leave when I saw his face start twitching and he whispered: "Backup ...?"

Jan 28, 2019 | opensource.com

SemperOSS on 13 Sep 2016

This one seems to be a classic too:

Working for a large UK-based international IT company, I had a call from newest guy in the internal IT department: "The main server, you know ..."

"Yes?"

"I was cleaning out somebody's homedir ..."

"Yes?"

"Well, the server stopped running properly ..."

"Yes?"

"... and I can't seem to get it to boot now ..."

"Oh-kayyyy. I'll just totter down to you and give it an eye."

I went down to the basement where the IT department was located and had a look at his terminal screen on his workstation. Going back through the terminal history, just before a hefty amount of error messages, I found his last command: 'rm -rf /home/johndoe /*'. And I probably do not have to say that he was root at the time (it was them there days before sudo, not that that would have helped in his situation).

"Right," I said. "Time to get the backup." I knew I had to leave when I saw his face start twitching and he whispered: "Backup ...?"

==========

Bonus entries from same company:

It was the days of the 5.25" floppy disks (Wikipedia is your friend, if you belong to the younger generation). I sometimes had to ask people to send a copy of a floppy to check why things weren't working properly. Once I got a nice photocopy and another time, the disk came with a polite note attached ... stapled through the disk, to be more precise!

[Jan 28, 2019] regex - Safe rm -rf function in shell script

Jan 28, 2019 | stackoverflow.com

community wiki, 5 revs, May 23, 2017 at 12:26

This question is similar to What is the safest way to empty a directory in *nix?

I'm writing bash script which defines several path constants and will use them for file and directory manipulation (copying, renaming and deleting). Often it will be necessary to do something like:

rm -rf "/${PATH1}"
rm -rf "${PATH2}/"*

While developing this script I'd want to protect myself from mistyping names like PATH1 and PATH2 and avoid situations where they are expanded to an empty string, thus resulting in wiping the whole disk. I decided to create a special wrapper:

rmrf() {
    if [[ $1 =~ "regex" ]]; then
        echo "Ignoring possibly unsafe path ${1}"
        exit 1
    fi

    shopt -s dotglob
    rm -rf -- $1
    shopt -u dotglob
}

Which will be called as:

rmrf "/${PATH1}"
rmrf "${PATH2}/"*

The regex (or sed expression) should catch paths like "*", "/*", "/**/", "///*" etc. but allow paths like "dir", "/dir", "/dir1/dir2/", "/dir1/dir2/*". Also I don't know how to enable shell globbing in cases like "/dir with space/*". Any ideas?

EDIT: this is what I came up with so far:

rmrf() {
    local RES
    local RMPATH="${1}"
    SAFE=$(echo "${RMPATH}" | sed -r 's:^((\.?\*+/+)+.*|(/+\.?\*+)+.*|[\.\*/]+|.*/\.\*+)$::g')
    if [ -z "${SAFE}" ]; then
        echo "ERROR! Unsafe deletion of ${RMPATH}"
        return 1
    fi

    shopt -s dotglob
    if [ '*' == "${RMPATH: -1}" ]; then
        echo rm -rf -- "${RMPATH/%\*/}"*
        RES=$?
    else
        echo rm -rf -- "${RMPATH}"
        RES=$?
    fi
    shopt -u dotglob

    return $RES
}

Intended use is (note an asterisk inside quotes):

rmrf "${SOMEPATH}"
rmrf "${SOMEPATH}/*"

where $SOMEPATH is not a system or /home directory (in my case all such operations are performed on a filesystem mounted under the /scratch directory).

CAVEATS:

SpliFF, Jun 14, 2009 at 13:45

A big danger I've found with rm in bash is that bash usually doesn't stop for errors. That means that:
cd $SOMEPATH
rm -rf *

Is a very dangerous combination if the change directory fails. A safer way would be:

cd $SOMEPATH && rm -rf *

Which will ensure the rm -rf won't run unless you are really in $SOMEPATH. This doesn't protect you from a bad $SOMEPATH, but it can be combined with the advice given by others to help make your script safer.

EDIT: @placeybordeaux makes a good point that if $SOMEPATH is undefined or empty, cd doesn't treat it as an error and returns 0. In light of that, this answer should be considered unsafe unless $SOMEPATH is validated as existing and non-empty first. I believe cd with no args should be an illegal command, since at best it performs a no-op and at worst it can lead to unexpected behaviour, but it is what it is.

Sazzad Hissain Khan, Jul 6, 2017 at 11:45

nice trick, I am one stupid victim. – Sazzad Hissain Khan Jul 6 '17 at 11:45

placeybordeaux, Jun 21, 2018 at 22:59

If $SOMEPATH is empty won't this rm -rf the user's home directory? – placeybordeaux Jun 21 '18 at 22:59

SpliFF, Jun 27, 2018 at 4:10

@placeybordeaux The && only runs the second command if the first succeeds - so if cd fails rm never runs – SpliFF Jun 27 '18 at 4:10

placeybordeaux, Jul 3, 2018 at 18:46

@SpliFF at least in ZSH the return value of cd $NONEXISTANTVAR is 0 – placeybordeaux Jul 3 '18 at 18:46

ruakh, Jul 13, 2018 at 6:46

Instead of cd $SOMEPATH , you should write cd "${SOMEPATH?}" . The ${varname?} notation ensures that the expansion fails with a warning-message if the variable is unset or empty (such that the && ... part is never run); the double-quotes ensure that special characters in $SOMEPATH , such as whitespace, don't have undesired effects. – ruakh Jul 13 '18 at 6:46
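Putting SpliFF's && guard together with ruakh's parameter-expansion tip, a minimal sketch (the error-message text is my own; the :? form also rejects an empty value, not just an unset one):

# Abort with a message if SOMEPATH is unset or empty, so the rm can never
# run against the wrong directory; otherwise cd, then delete.
cd "${SOMEPATH:?SOMEPATH is unset or empty}" && rm -rf -- *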

community wiki, 2 revs, Jul 24, 2009 at 22:36

There is a set -u bash directive that will cause an exit when an uninitialized variable is used. I read about it here, with rm -rf as an example. I think that's what you're looking for. And here is set's manual.
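A minimal sketch of what that looks like (the PATH1 name here is purely illustrative):

#!/bin/bash
set -u
# PATH1 was never assigned: with nounset in force, bash aborts here with
# "PATH1: unbound variable" instead of expanding to "" and running rm -rf /
rm -rf "/${PATH1}"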

Jun 14, 2009 at 12:38

I think "rm" command has a parameter to avoid the deleting of "/". Check it out.

Max, Jun 14, 2009 at 12:56

Thanks! I didn't know about such an option. Actually it is named --preserve-root and is not mentioned in the manpage. – Max Jun 14 '09 at 12:56

Max, Jun 14, 2009 at 13:18

On my system this option is on by default, but it can't help in cases like rm -ri /* – Max Jun 14 '09 at 13:18

ynimous, Jun 14, 2009 at 12:42

I would recommend using realpath(1) and not the command argument directly, so that you can avoid things like /A/B/../ or symbolic links.

Max, Jun 14, 2009 at 13:30

Useful but non-standard command. I've found a possible bash replacement: archlinux.org/pipermail/pacman-dev/2009-February/008130.html – Max Jun 14 '09 at 13:30

Jonathan Leffler, Jun 14, 2009 at 12:47

Generally, when I'm developing a command with operations such as ' rm -fr ' in it, I will neutralize the remove during development. One way of doing that is:
RMRF="echo rm -rf"
...
$RMRF "/${PATH1}"

This shows me what should be deleted - but does not delete it. I will do a manual clean up while things are under development - it is a small price to pay for not running the risk of screwing up everything.

The notation ' "/${PATH1}" ' is a little unusual; normally, you would ensure that PATH1 simply contains an absolute pathname.

Using the metacharacter with ' "${PATH2}/"* ' is unwise and unnecessary. The only difference between using that and using just ' "${PATH2}" ' is that if the directory specified by PATH2 contains any files or directories with names starting with dot, then those files or directories will not be removed. Such a design is unlikely and is rather fragile. It would be much simpler just to pass PATH2 and let the recursive remove do its job. Adding the trailing slash is not necessarily a bad idea; the system would have to ensure that $PATH2 contains a directory name, not just a file name, but the extra protection is rather minimal.

Using globbing with ' rm -fr ' is usually a bad idea. You want to be precise and restrictive and limiting in what it does - to prevent accidents. Of course, you'd never run the command (shell script you are developing) as root while it is under development - that would be suicidal. Or, if root privileges are absolutely necessary, you neutralize the remove operation until you are confident it is bullet-proof.

Max, Jun 14, 2009 at 13:09

To delete subdirectories and files starting with dot I use "shopt -s dotglob". Using rm -rf "${PATH2}" is not appropriate because in my case PATH2 can only be removed by the superuser, and this results in an error status for the "rm" command (and I verify it to track other errors). – Max Jun 14 '09 at 13:09

Jonathan Leffler, Jun 14, 2009 at 13:37

Then, with due respect, you should use a private sub-directory under $PATH2 that you can remove. Avoid glob expansion with commands like 'rm -rf' like you would avoid the plague (or should that be A/H1N1?). – Jonathan Leffler Jun 14 '09 at 13:37

Max, Jun 14, 2009 at 14:10

Meanwhile I've found this perl project: http://code.google.com/p/safe-rm/

community wiki, too much php, Jun 15, 2009 at 1:55

If it is possible, you should try and put everything into a folder with a hard-coded name which is unlikely to be found anywhere else on the filesystem, such as ' foofolder '. Then you can write your rmrf() function as:
rmrf() {
    rm -rf "foofolder/$PATH1"
    # or
    rm -rf "$PATH1/foofolder"
}

There is no way that function can delete anything but the files you want it to.

vadipp, Jan 13, 2017 at 11:37

Actually there is a way: if PATH1 is something like ../../someotherdir – vadipp Jan 13 '17 at 11:37
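One way to close that hole, sketched under the assumption that GNU realpath is available (this is my addition, not from the thread): canonicalise the path first and refuse anything that resolves outside the folder.

rmrf() {
    local base target
    base=$(realpath -- foofolder) || return 1
    # -m canonicalises even if the final component doesn't exist yet
    target=$(realpath -m -- "foofolder/$1") || return 1
    case "$target" in
        "$base"/*) rm -rf -- "$target" ;;
        *) echo "refusing to delete outside foofolder: $target" >&2; return 1 ;;
    esac
}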

community wiki, btop, Jun 15, 2009 at 6:34

You may use
set -f    # cf. help set

to disable filename generation (*).

community wiki, Howard Hong, Oct 28, 2009 at 19:56

You don't need to use regular expressions.
Just assign the directories you want to protect to a variable and then iterate over the variable. eg:
protected_dirs="/ /bin /usr/bin /home $HOME"
for d in $protected_dirs; do
    if [ "$1" = "$d" ]; then
        rm=0
        break;
    fi
done
if [ ${rm:-1} -eq 1 ]; then
    rm -rf $1
fi


Add the following code to your ~/.bashrc
# safe delete
move_to_trash () { now="$(date +%Y%m%d_%H%M%S)"; mv "$@" ~/.local/share/Trash/files/"$@_$now"; }
alias del='move_to_trash'

# safe rm
alias rmi='rm -i'

Every time you need to rm something, first consider del; you can change the trash folder. If you really do need to rm something, you can go to the trash folder and use rmi.

One small bug with del is that when you del a folder, for example my_folder, it should be del my_folder and not del my_folder/, since to allow a possible later restore I attach the time information at the end ("$@_$now"). For files, it works fine.
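A hedged rework of the same idea that sidesteps that bug (trash path as assumed above): handle each argument separately and strip any trailing slash, so del my_folder/ works too.

move_to_trash () {
    local now f
    now=$(date +%Y%m%d_%H%M%S)
    for f in "$@"; do
        f=${f%/}                       # tolerate a trailing slash on folders
        mv -- "$f" ~/.local/share/Trash/files/"$(basename -- "$f")_$now"
    done
}
alias del='move_to_trash'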

[Jan 28, 2019] That's how I learned to always check with somebody else before rebooting a production server, no matter how minor it may seem

Jan 28, 2019 | www.reddit.com

VexingRaven, 3 years ago

Not really a horror story but definitely one of my first "Oh shit" moments. I was the FNG helpdesk/sysadmin at a company of 150 people. I start getting calls that something (I think it was Outlook) wasn't working in Citrix, apparently something broken on one of the Citrix servers. I'm 100% positive it will be fixed with a reboot (I've seen this before on individual PCs), so I diligently start working to get people off that Citrix server (one of three) so I can reboot it.

I get it cleared out, hit Reboot... And almost immediately get a call from the call center manager saying every single person just got kicked off Citrix. Oh shit. But there was nobody on that server! Apparently that server also housed the Secure Gateway server, which my senior hadn't bothered to tell me or simply didn't know (it was set up by a consulting firm). Whoops. Thankfully the servers were pretty fast and people's sessions reconnected a few minutes later, no harm no foul. And on the plus side, it did indeed fix the problem.

And that's how I learned to always check with somebody else before rebooting a production server, no matter how minor it may seem.

[Jan 26, 2019] How and why I run my own DNS servers

zwischenzugs
Introduction

Despite my woeful knowledge of networking, I run my own DNS servers for my own websites, run from home. I achieved this through trial and error and now it requires almost zero maintenance, even though I don't have a static IP at home.

Here I share how (and why) I persist in this endeavour.

Overview

This is an overview of the setup (the original post illustrates it with a 'DNSSetup' diagram):

This is how I set up my DNS. I:

How?

Walking through step-by-step how I did it:

1) Set up two Virtual Private Servers (VPSes)

You will need two stable machines with static IP addresses. If you're not lucky enough to have these in your possession, then you can set one up on the cloud. I used this site, but there are plenty out there. NB I asked them, and their IPs are static per VPS. I use the cheapest cloud VPS ($1/month) and set up Debian on there.

NOTE: Replace any mention of DNSIP1 and DNSIP2 below with the first and second static IP addresses you are given.

Log on and set up root password

SSH to the servers and set up a strong root password.

2) Set up domains

You will need two domains: one for your DNS servers, and one for the application running on your host. I use dot.tk to get free throwaway domains. In this case, I might set up a myuniquedns.tk DNS domain and a myuniquesite.tk site domain. Whatever you choose, replace your DNS domain when you see YOURDNSDOMAIN below. Similarly, replace your app domain when you see YOURSITEDOMAIN below.

3) Set up a 'glue' record

If you use dot.tk as above, then to allow you to manage the YOURDNSDOMAIN domain you will need to set up a 'glue' record. What this does is tell the current domain authority (dot.tk) to defer to your nameservers (the two servers you've set up) for this specific domain. Otherwise it keeps referring back to the .tk domain for the IP. See here for a fuller explanation. Another good explanation is here.

To do this you need to check with the authority responsible how this is done, or become the authority yourself. dot.tk has a web interface for setting up a glue record, so I used that. There, you need to go to 'Manage Domains' => 'Manage Domain' => 'Management Tools' => 'Register Glue Records' and fill out the form. Your two hosts will be called ns1.YOURDNSDOMAIN and ns2.YOURDNSDOMAIN, and the glue records will point to either IP address.

Note: you may need to wait a few hours (or longer) for this to take effect. If really unsure, give it a day.
If you like this post, you might be interested in my book Learn Bash the Hard Way, available here for just $5.
4) Install bind on the DNS servers

On a Debian machine (for example), and as root, type:

apt install bind9

bind is the domain name server software you will be running.

5) Configure bind on the DNS servers

Now, this is the hairy bit. There are two parts to this, with two files involved: named.conf.local and the db.YOURDNSDOMAIN file. They are both in the /etc/bind folder. Navigate there and edit these files.

Part 1 – named.conf.local

This file lists the 'zones' (domains) served by your DNS servers. It also defines whether this bind instance is the 'master' or the 'slave'. I'll assume ns1.YOURDNSDOMAIN is the 'master' and ns2.YOURDNSDOMAIN is the 'slave'.
Part 1a – the master
On the master (ns1.YOURDNSDOMAIN), the named.conf.local should be changed to look like this:
zone "YOURDNSDOMAIN" {
 type master;
 file "/etc/bind/db.YOURDNSDOMAIN";
 allow-transfer { DNSIP2; };
};
zone "YOURSITEDOMAIN" {
 type master;
 file "/etc/bind/YOURDNSDOMAIN";
 allow-transfer { DNSIP2; };
};

zone "14.127.75.in-addr.arpa" {
 type master;
 notify no;
 file "/etc/bind/db.75";
 allow-transfer { DNSIP2; };
};

logging {
 channel query.log {
 file "/var/log/query.log";
 // Set the severity to dynamic to see all the debug messages.
 severity debug 3;
 };
category queries { query.log; };
};
The logging at the bottom is optional (I think). I added it a while ago, and I leave it in here for interest. I don't know what the 14.127 zone stanza is about.
Part 1b – the slave

On the slave (ns2.YOURDNSDOMAIN), the named.conf.local should be changed to look like this:

zone "YOURDNSDOMAIN" {
 type slave;
 file "/var/cache/bind/db.YOURDNSDOMAIN";
 masters { DNSIP1; };
};

zone "YOURSITEDOMAIN" {
 type slave;
 file "/var/cache/bind/db.YOURSITEDOMAIN";
 masters { DNSIP1; };
};

zone "14.127.75.in-addr.arpa" {
 type slave;
 file "/var/cache/bind/db.75";
 masters { DNSIP1; };
};
Part 2 – db.YOURDNSDOMAIN

Now we get to the meat – your DNS database is stored in this file.

On the master (ns1.YOURDNSDOMAIN) the db.YOURDNSDOMAIN file looks like this:

$TTL 4800
@ IN SOA ns1.YOURDNSDOMAIN. YOUREMAIL.YOUREMAILDOMAIN. (
  2018011615 ; Serial
  604800 ; Refresh
  86400 ; Retry
  2419200 ; Expire
  604800 ) ; Negative Cache TTL
;
@ IN NS ns1.YOURDNSDOMAIN.
@ IN NS ns2.YOURDNSDOMAIN.
ns1 IN A DNSIP1
ns2 IN A DNSIP2
YOURSITEDOMAIN. IN A YOURDYNAMICIP

On the slave (ns2.YOURDNSDOMAIN) it's very similar, but has ns1 in the SOA line, and the IN NS lines reversed. I can't remember if this reversal is needed or not:

$TTL 4800
@ IN SOA ns1.YOURDNSDOMAIN. YOUREMAIL.YOUREMAILDOMAIN. (
  2018011615 ; Serial
  604800 ; Refresh
  86400 ; Retry
  2419200 ; Expire
  604800 ) ; Negative Cache TTL
;
@ IN NS ns1.YOURDNSDOMAIN.
@ IN NS ns2.YOURDNSDOMAIN.
ns1 IN A DNSIP1
ns2 IN A DNSIP2
YOURSITEDOMAIN. IN A YOURDYNAMICIP

A few notes on the above:

The next step is to dynamically update the DNS server with your dynamic IP address whenever it changes.

6) Copy ssh keys

Before setting up your dynamic DNS you need to set up your ssh keys so that your home server can access the DNS servers.

NOTE: This is not security advice. Use at your own risk.

First, check whether you already have an ssh key generated:

ls ~/.ssh/id_rsa

If that returns a file, you're all set up. Otherwise, type:

ssh-keygen

and accept the defaults.

Then, once you have a key set up, copy your ssh ID to the nameservers:

ssh-copy-id root@DNSIP1
ssh-copy-id root@DNSIP2

Inputting your root password for each command.

7) Create an IP updater script

Now ssh to both servers and place this script in /root/update_ip.sh :

#!/bin/bash
set -o nounset
sed -i "s/^(.*) IN A (.*)$/1 IN A $1/" /etc/bind/db.YOURDNSDOMAIN
sed -i "s/.*Serial$/ $(date +%Y%m%d%H) ; Serial/" /etc/bind/db.YOURDNSDOMAIN
/etc/init.d/bind9 restart

Make it executable by running:

chmod +x /root/update_ip.sh

Going through it line by line:

set -o nounset

This line throws an error if the IP is not passed in as the argument to the script.

sed -i "s/^\(.*\) IN A \(.*\)$/\1 IN A $1/" /etc/bind/db.YOURDNSDOMAIN

Replaces the IP address with the contents of the first argument to the script.

sed -i "s/.*Serial$/ $(date +%Y%m%d%H) ; Serial/" /etc/bind/db.YOURDNSDOMAIN

Ups the 'serial number'.

/etc/init.d/bind9 restart

Restarts the bind service on the host.
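A habit worth adding before that restart (my addition; both tools ship with bind9): syntax-check the config and zone file, so a typo doesn't take your DNS down.

named-checkconf /etc/bind/named.conf.local
named-checkzone YOURDNSDOMAIN /etc/bind/db.YOURDNSDOMAIN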

8) Cron Your Dynamic DNS

At this point you've got access to update the IP when your dynamic IP changes, and the script to do the update.

Here's the raw cron entry:

* * * * * curl ifconfig.co 2>/dev/null > /tmp/ip.tmp && (diff /tmp/ip.tmp /tmp/ip || (mv /tmp/ip.tmp /tmp/ip && ssh root@DNSIP1 "/root/update_ip.sh $(cat /tmp/ip)")); curl ifconfig.co 2>/dev/null > /tmp/ip.tmp2 && (diff /tmp/ip.tmp2 /tmp/ip2 || (mv /tmp/ip.tmp2 /tmp/ip2 && ssh root@192.210.238.236 "/root/update_ip.sh $(cat /tmp/ip2)"))

Breaking this command down step by step:

curl ifconfig.co 2>/dev/null > /tmp/ip.tmp

This curls a 'what is my IP address' site, and deposits the output to /tmp/ip.tmp

diff /tmp/ip.tmp /tmp/ip || (mv /tmp/ip.tmp /tmp/ip && ssh root@DNSIP1 "/root/update_ip.sh $(cat /tmp/ip)")

This diffs the contents of /tmp/ip.tmp with /tmp/ip (which is yet to be created, and holds the last-updated IP address). If there is an error (ie there is a new IP address to update on the DNS server), then the subshell is run. This overwrites the stored IP address, and then ssh'es onto the nameserver to run the update script with the new address.

The same process is then repeated for DNSIP2 using separate files ( /tmp/ip.tmp2 and /tmp/ip2 ).
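The same logic is easier to follow unrolled into a script; a sketch of mine (not from the original post) that cron could call once a minute instead:

#!/bin/bash
# Fetch the current public IP and push it to a nameserver only when it
# differs from the last IP recorded in the state file.
update_if_changed() {
    local server=$1 state=$2
    curl ifconfig.co 2>/dev/null > "${state}.tmp" || return 1
    if ! diff "${state}.tmp" "$state" >/dev/null 2>&1; then
        mv "${state}.tmp" "$state"
        ssh "root@${server}" "/root/update_ip.sh $(cat "$state")"
    fi
}

update_if_changed DNSIP1 /tmp/ip
update_if_changed DNSIP2 /tmp/ip2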

Why!?

You may be wondering why I do this in the age of cloud services and outsourcing. There are a few reasons.

It's Cheap

The cost of running this stays at the cost of the two nameservers ($24/year) no matter how many domains I manage and whatever I want to do with them.

Learning

I've learned a lot by doing this, probably far more than any course would have taught me.

More Control

I can do what I like with these domains: set up any number of subdomains, try my hand at secure mail techniques, experiment with obscure DNS records and so on.

I could extend this into a service. If you're interested, my rates are very low :)


If you like this post, you might be interested in my book Learn Bash the Hard Way , available here for just $5.

[Jan 26, 2019] Shell startup scripts

flowblok's blog
That diagram shows what happens according to the man page, and not what happens when you actually try it out in real life. This second diagram more accurately captures the insanity of bash:

See how remote interactive login shells read /etc/bash.bashrc, but normal interactive login shells don't? Sigh.

Finally, here's a repository containing my implementation and the graphviz files for the above diagram. If your POSIX-compliant shell isn't listed here, or if I've made a horrible mistake (or just a tiny one), please send me a pull request or make a comment below, and I'll update this post accordingly.

[1]

and since I'm writing this, I can make you say whatever I want for the purposes of narrative.

[Jan 26, 2019] Shell startup script order of execution

Highly recommended!
Jan 26, 2019 | flowblok.id.au

Adrian • a month ago ,

6 years late, but...

In my experience, if your bash sources /etc/bash.bashrc, odds are good it also sources /etc/bash.bash_logout or something similar on logout (after ~/.bash_logout, of course).

From bash-4.4/config-top.h:

/* System-wide .bashrc file for interactive shells. */
/* #define SYS_BASHRC "/etc/bash.bashrc" */

/* System-wide .bash_logout for login shells. */
/* #define SYS_BASH_LOGOUT "/etc/bash.bash_logout" */

(Yes, they're disabled by default.)

Check the FILES section of your system's bash man page for details.

[Jan 26, 2019] Systemd developers don't want to replace the kernel, they are more than happy to leverage Linus's good work on what they see as a collection of device drivers

Jan 26, 2019 | blog.erratasec.com

John Morris said...

They don't want to replace the kernel, they are more than happy to leverage Linus's good work on what they see as a collection of device drivers. No, they want to replace the GNU/X in the traditional Linux/GNU/X arrangement. All of the command line tools, up to and including bash, are to go, replaced with the more Windows-like tools most of the systemd developers grew up on, while X and the desktop environments all get rubbished for Wayland and GNOME3.

And I would wish them luck, the world could use more diversity in operating systems. So long as they stayed the hell over at RedHat and did their grand experiment and I could still find a Linux/GNU/X distribution to run. But they had to be borg and insist that all must bend the knee and to that I say HELL NO!

[Jan 26, 2019] The coming enhancement to systemd

Jan 26, 2019 | blog.erratasec.com

Siegfried Kiermayer said...

I'm waiting for pulse audio being included in systemd to have a proper boot sound :D

[Jan 26, 2019] Ten Things I Wish I'd Known About bash

Highly recommended!
Jan 06, 2018 | zwischenzugs.com
Intro

Recently I wanted to deepen my understanding of bash by researching as much of it as possible. Because I felt bash is an often-used (and under-understood) technology, I ended up writing a book on it.

A preview is available here.

You don't have to look hard on the internet to find plenty of useful one-liners in bash, or scripts. And there are guides to bash that seem somewhat intimidating through either their thoroughness or their focus on esoteric detail.

Here I've focussed on the things that either confused me or increased my power and productivity in bash significantly, and tried to communicate them (as in my book) in a way that emphasises getting the understanding right.

Enjoy!


1) `` vs $()

These two operators do the same thing. Compare these two lines:

$ echo `ls`
$ echo $(ls)

Why these two forms existed confused me for a long time.

If you don't know, both forms substitute the output of the command contained within it into the command.

The principal difference is that nesting is simpler.

Which of these is easier to read (and write)?

    $ echo `echo \`echo \\\`echo inside\\\`\``

or:

    $ echo $(echo $(echo $(echo inside)))

If you're interested in going deeper, see here or here .

2) globbing vs regexps

Another one that can confuse if never thought about or researched.

While globs and regexps can look similar, they are not the same.

Consider this command:

$ rename -n 's/(.*)/new$1/' *

The two asterisks are interpreted in different ways.

The first is ignored by the shell (because it is in quotes), and is interpreted as '0 or more characters' by the rename application. So it's interpreted as a regular expression.

The second is interpreted by the shell (because it is not in quotes), and gets replaced by a list of all the files in the current working folder. It is interpreted as a glob.

So by looking at man bash can you figure out why these two commands produce different output?

$ ls *
$ ls .*

The second looks even more like a regular expression. But it isn't!

3) Exit Codes

Not everyone knows that every time you run a shell command in bash, an 'exit code' is returned to bash.

Generally, if a command 'succeeds' you get an exit code of 0. If it doesn't succeed, you get a non-zero code. 1 is a 'general error', and others can give you more information (e.g. which signal killed it).

But these rules don't always hold:

$ grep not_there /dev/null
$ echo $?

$? is a special bash variable that's set to the exit code of each command after it runs.

Grep uses exit codes to indicate whether it matched or not. I have to look up every time which way round it goes: does finding a match or not return 0 ?

Grok this and a lot will click into place in what follows.
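For the record, grep's man page pins this down: the exit status is 0 if a line is selected, 1 if no lines were selected, and 2 if an error occurred. Quick to confirm at the prompt:

$ grep -q root /etc/passwd; echo $?
0
$ grep -q not_there /etc/passwd; echo $?
1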

4) if statements, [ and [[

Here's another 'spot the difference' similar to the backticks one above.

What will this output?

if grep not_there /dev/null
then
    echo hi
else
    echo lo
fi

grep's return code makes code like this work more intuitively as a side effect of its use of exit codes.

Now what will this output?

a) hihi
b) lolo
c) something else

if [ $(grep not_there /dev/null) = '' ]
then
    echo -n hi
else
    echo -n lo
fi
if [[ $(grep not_there /dev/null) = '' ]]
then
    echo -n hi
else
    echo -n lo
fi

The difference between [ and [[ was another thing I never really understood. [ is the original form for tests, and then [[ was introduced, which is more flexible and intuitive. In the first if block above, the if statement barfs because the $(grep not_there /dev/null) is evaluated to nothing, resulting in this comparison:

[ = '' ]

which makes no sense. The double bracket form handles this for you.

This is why you occasionally see comparisons like this in bash scripts:

if [ x$(grep not_there /dev/null) = 'x' ]

so that if the command returns nothing it still runs. There's no need for it, but that's why it exists.

5) set s

Bash has configurable options which can be set on the fly. I use two of these all the time:

set -e

exits from a script if any command returned a non-zero exit code (see above).

This outputs the commands that get run as they run:

set -x

So a script might start like this:

#!/bin/bash
set -e
set -x
grep not_there /dev/null
echo $?

What would that script output?

6) <()

This is my favourite. It's so under-used, perhaps because it can be initially baffling, but I use it all the time.

It's similar to $() in that the output of the command inside is re-used.

In this case, though, the output is treated as a file. This file can be used as an argument to commands that take files as an argument.

Confused? Here's an example.

Have you ever done something like this?

$ grep somestring file1 > /tmp/a
$ grep somestring file2 > /tmp/b
$ diff /tmp/a /tmp/b

That works, but instead you can write:

diff <(grep somestring file1) <(grep somestring file2)

Isn't that neater?

7) Quoting

Quoting's a knotty subject in bash, as it is in many software contexts.

Firstly, variables in quotes:

A='123'  
echo "$A"
echo '$A'

Pretty simple – double quotes dereference variables, while single quotes go literal.

So what will this output?

mkdir -p tmp
cd tmp
touch a
echo "*"
echo '*'

Surprised? I was.

8) Top three shortcuts

There are plenty of shortcuts listed in man bash , and it's not hard to find comprehensive lists. This list consists of the ones I use most often, in order of how often I use them.

Rather than trying to memorize them all, I recommend picking one, and trying to remember to use it until it becomes unconscious. Then take the next one. I'll skip over the most obvious ones (eg !! – repeat last command, and ~ – your home directory).

!$

I use this dozens of times a day. It repeats the last argument of the last command. If you're working on a file, and can't be bothered to re-type it command after command it can save a lot of work:

grep somestring /long/path/to/some/file/or/other.txt
vi !$

!:1-$

This bit of magic takes this further. It takes all the arguments to the previous command and drops them in. So:

grep isthere /long/path/to/some/file/or/other.txt
egrep !:1-$
fgrep !:1-$

The ! means 'look at the previous command', the : is a separator, and the 1 means 'take the first word', the - means 'until' and the $ means 'the last word'.

Note: you can achieve the same thing with !* . Knowing the above gives you the control to limit to a specific contiguous subset of arguments, eg with !:2-3 .

:h

I use this one a lot too. If you put it after a filename, it will change that filename to remove everything up to the folder. Like this:

grep isthere /long/path/to/some/file/or/other.txt
cd !$:h

which can save a lot of work in the course of the day.

9) startup order

The order in which bash runs startup scripts can cause a lot of head-scratching. I keep this diagram handy (from this great page):

(diagram: shell-startup-actual)

It shows which scripts bash decides to run from the top, based on decisions made about the context bash is running in (which decides the colour to follow).

So if you are in a local (non-remote), non-login, interactive shell (eg when you run bash itself from the command line), you are on the 'green' line, and these are the order of files read:

/etc/bash.bashrc
~/.bashrc
[bash runs, then terminates]
~/.bash_logout

This can save you a hell of a lot of time debugging.

10) getopts (cheapci)

If you go deep with bash, you might end up writing chunky utilities in it. If you do, then getting to grips with getopts can pay large dividends.

For fun, I once wrote a script called cheapci which I used to work like a Jenkins job.

The code here implements the reading of the two required and 14 non-required arguments. Better to learn this than to build up a bunch of bespoke code that can get very messy pretty quickly as your utility grows.
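As a flavour of what that looks like, a minimal getopts skeleton (my own illustration, not code from cheapci; the -f and -v flags are invented for the example):

#!/bin/bash
usage() { echo "Usage: $0 -f <file> [-v]" >&2; exit 1; }

file=''
verbose=0
while getopts 'f:v' opt; do
    case "$opt" in
        f) file=$OPTARG ;;        # -f takes an argument (the ':' in 'f:')
        v) verbose=1 ;;
        *) usage ;;
    esac
done
shift $((OPTIND - 1))             # drop parsed flags; "$@" is what remains

[ -n "$file" ] || usage
if [ "$verbose" -eq 1 ]; then
    echo "processing $file"
fi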


This is based on some of the contents of my book Learn Bash the Hard Way, available at $7:

[Jan 25, 2019] Some systemd problems that arise in a reasonably complex datacenter environment

May 10, 2018 | theregister.co.uk
Thursday 10th May 2018 16:34 GMT Nate Amsden

As a Linux user for 22 years

(20 of which on Debian, before that was Slackware)

I am new to systemd, maybe 3 or 4 months now tops on Ubuntu, and a tiny bit on Debian before that.

I was confident I was going to hate systemd before I used it just based on the comments I had read over the years, I postponed using it as long as I could. Took just a few minutes of using it to confirm my thoughts. Now to be clear, if I didn't have to mess with the systemd to do stuff then I really wouldn't care since I don't interact with it (which is the case on my laptop at least though laptop doesn't have systemd anyway). I manage about 1,000 systems running Ubuntu for work, so I have to mess with systemd, and init etc there.

If systemd would just do ONE thing I think it would remove all of the pain that it has inflicted on me over the past several months and I could learn to accept it.

That one thing is, if there is an init script, RUN IT. Not run it like systemd does now. But turn off ALL intelligence systemd has when it finds that script and run it. Don't put it on any special timers, don't try to detect if it is running already, or stopped already or whatever, fire the script up in blocking mode and wait till it exits.

My first experience with systemd was on one of my home servers, I re-installed Debian on it last year, rebuilt the hardware etc and with it came systemd. I believe there is a way to turn systemd off but I haven't tried that yet. The first experience was with bind. I have a slightly custom init script (from previous debian) that I have been using for many years. I copied it to the new system and tried to start bind. Nothing. I looked in the logs and it seems that it was trying to interface with rndc(internal bind thing) for some reason, and because rndc was not working(I never used it so I never bothered to configure it) systemd wouldn't launch bind. So I fixed rndc and systemd would now launch bind, only to stop it within 1 second of launching. My first workaround was just to launch bind by hand at the CLI (no init script), left it running for a few months. Had a discussion with a co-worker who likes systemd and he explained that making a custom unit file and using the type=forking option may fix it.. That did fix the issue.
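For reference, the kind of unit file that fix implies looks roughly like this (a hypothetical sketch with illustrative paths, not the author's actual file):

# /etc/systemd/system/bind-custom.service (hypothetical)
[Unit]
Description=BIND started via a legacy init script
After=network.target

[Service]
# Type=forking: systemd considers the service up once the start command
# forks and exits, which matches how traditional init scripts daemonize.
Type=forking
ExecStart=/etc/init.d/bind start
ExecStop=/etc/init.d/bind stop

[Install]
WantedBy=multi-user.target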

Next issue came up when dealing with MySQL clusters. I had to initialize the cluster with the "service mysql bootstrap-pxc" command (using the start command on the first cluster member is a bad thing). Run that with systemd, and systemd runs it fine. But go to STOP the service, and systemd thinks the service is not running so doesn't even TRY to stop the service(the service is running). My workaround for my automation for mysql clusters at this point is to just use mysqladmin to shut the mysql instances down. Maybe newer mysql versions have better systemd support though a co-worker who is our DBA and has used mysql for many years says even the new Maria DB builds don't work well with systemd. I am working with Mysql 5.6 which is of course much much older.

Next issue came up with running init scripts that have the same words in them, in the case of most recently I upgraded systems to systemd that run OSSEC. OSSEC has two init scripts for us on the server side (ossec and ossec-auth). Systemd refuses to run ossec-auth because it thinks there is a conflict with the ossec service. I had the same problem with multiple varnish instances running on the same system (varnish instances were named varnish-XXX and varnish-YYY). In the varnish case using custom unit files I got systemd to the point where it would start the service but it still refuses to "enable" the service because of the name conflict (I even changed the name but then systemd was looking at the name of the binary being called in the unit file and said there is a conflict there).

fucking a. Systemd shut up, just run the damn script. It's not hard.

Later a co-worker explained the "systemd way" for handling something like multiple varnish instances on the system but I'm not doing that, in the meantime I just let chef start the services when it runs after the system boots(which means they start maybe 1 or 2 mins after bootup).

Another thing bit us with systemd recently as well again going back to bind. Someone on the team upgraded our DNS systems to systemd and the startup parameters for bind were not preserved because systemd ignores the /etc/default/bind file. As a result we had tons of DNS failures when bind was trying to reach out to IPv6 name servers(ugh), when there is no IPv6 connectivity in the network (the solution is to start bind with a -4 option).

I believe I have also caught systemd trying to mess with file systems(iscsi mount points). I have lots of automation around moving data volumes on the SAN between servers and attaching them via software iSCSI directly to the VMs themselves(before vsphere 4.0 I attached them via fibre channel to the hypervisor but a feature in 4.0 broke that for me). I noticed on at least one occasion when I removed the file systems from a system that SOMETHING (I assume systemd) mounted them again, and it was very confusing to see file systems mounted again for block devices that DID NOT EXIST on the server at the time. I worked around THAT one I believe with the "noauto" option in fstab again. I had to put a lot of extra logic in my automation scripts to work around systemd stuff.

I'm sure I've only scratched the surface of systemd pain. I'm sure it provides good value to some people, I hear it's good with containers (I have been running LXC containers for years now, I see nothing with systemd that changes that experience so far).

But if systemd would just do this one thing and go into dumb mode with init scripts I would be quite happy.

[Jan 25, 2019] SystemD vs Solaris 10 SMF

"Shadow files" approach of Solaris 10, where additional functions of init are controlled by XML script that exist in a separate directory with the same names as init scripts can be improved but architecturally it is much cleaner then systemd approach.
Notable quotes:
"... Solaris has a similar parallellised startup system, with some similar problems, but it didn't need pid 1. ..."
"... Agreed, Solaris svcadm and svcs etc are an example of how it should be done. A layered approach maintaining what was already there, while adding functionality for management purposes. Keeps all the old text based log files and uses xml scripts (human readable and editable) for higher level functions. ..."
"... AFAICT everyone followed RedHat because they also dominate Gnome, and chose to make Gnome depend on systemd. Thus if one had any aspirations for your distro supporting Gnome in any way, you have to have systemd underneath it all. ..."
Jan 25, 2019 | theregister.co.uk

Re: Poettering still doesn't get it... Pid 1 is for people wearing big boy pants.

SystemD is corporate money (Redhat support dollars) triumphing over the long hairs sadly. Enough money can buy a shitload of code and you can overwhelm the hippies with hairball dependencies (the key moment was udev being dependent on systemd) and soon get as much FOSS as possible dependent on the Linux kernel.

This has always been the end game as Red Hat makes its bones on Linux specifically not on FOSS in general (that say runs on Solaris or HP-UX).

The tighter they can glue the FOSS ecosystem and the Linux kernel together ala Windows lite style the better for their bottom line. Poettering is just being a good employee asshat extraordinaire he is.

Re: Ahhh SystemD

I honestly would love someone to lay out the problems it solves.

Solaris has a similar parallellised startup system, with some similar problems, but it didn't need pid 1.

Re: Ahhh SystemD

Agreed, Solaris svcadm and svcs etc are an example of how it should be done. A layered approach maintaining what was already there, while adding functionality for management purposes. Keeps all the old text based log files and uses xml scripts (human readable and editable) for higher level functions.

Afaics, systemd is a power grab by Red Hat and an ego trip for its primary developer.

Dumped bloatware Linux in favour of FreeBSD and others after Suse 11.4, though that was bad enough with Gnome 3...

starbase7, Thursday 10th May 2018 04:36 GMT

SMF?

As an older timer (on my way but not there yet), I never cared for the init.d startup and I dislike the systemd monolithic architecture.

What I do like is Solaris SMF, and I wish Linux had adopted a method such as that or similar to it. I still think SMF was/is a great compromise between the init.d method and the systemd manner.

I used SMF professionally, but now I have moved on with Linux professionally as Solaris is, well, dead. I only get to enjoy SMF on my home systems, and savor it. I'm trying to like Linux over all these years, but this systemd thing is a real big road block for me to get enthusiastic.

I have a hard time understanding why all the other Linux distros joined hands with Redhat and implemented that thing, systemd. Sigh.

Anonymous Coward, Thursday 10th May 2018 04:53 GMT

Re: SMF?

You're not alone in liking SMF and Solaris.

AFAICT everyone followed RedHat because they also dominate Gnome, and chose to make Gnome depend on systemd. Thus if one had any aspirations for your distro supporting Gnome in any way, you have to have systemd underneath it all.

RedHat seem to call the shots these days as to what a Linux distro has. I personally have mixed opinions on this; I think the vast anarchy of Linux is a bad thing for Linux adoption ("this is the year of the Linux desktop" don't make me laugh), and Linux would benefit from a significant culling of the vast number of distros out there. However if that did happen and all that was left was something controlled by RedHat, that would be a bad situation.

Steve Davies 3, Thursday 10th May 2018 07:30 GMT

Re: SMF?
Remember who 'owns' SMF... namely Oracle. They may well have made it impossible for anyone to adopt. That stance is not unknown now is it...?

As for systemd, I have bit my teeth and learned to tolerate it. I'll never be as comfortable with it as I was with the old init system but I did start running into issues especially with shutdown syncing with it on some complex systems.

Still not sure if systemd is the right way forward even after four years.

Daggerchild, Thursday 10th May 2018 14:30 GMT

Re: SMF?
SMF should be good, and yet they released it before they'd documented it. Strange priorities...

And XML is *not* a config file format you should let humans at. Finding out the correct order to put the XML elements in to avoid unexplained "parse error", was *not* a fun game.

And someone correct me, but it looks like there are SMF properties of a running service that can only be modified/added by editing the file, reloading *and* restarting the service. A metadata and state/dependency tracking system shouldn't require you to shut down the priority service it's meant to be ensuring... Again, strange priorities...

onefang, Friday 11th May 2018 07:55 GMT
Re: SMF?
"XML is *not* a config file format you should let humans at"

XML is a format you shouldn't let computers at, it was designed to be human readable and writable. It fails totally.

Hans 1, Friday 6th July 2018 12:27 GMT
Re: SMF?
Finding out the correct order to put the XML elements in to avoid unexplained "parse error", was *not* a fun game.

Hm, you do know the grammar is in a DTD? Yes, XML takes time to learn, but it is very powerful once mastered.

CrazyOldCatMan, Thursday 10th May 2018 13:24 GMT
Re: SMF?
I have a hard time understanding why all the other Linux distros joined hands with Redhat and implemented that thing, systemd

Several reasons:

A lot of other distros use Redhat (or Fedora) as their base and then customise it.

A lot of other distros include things dependent on systemd (Gnome being the one with the biggest dependencies - you can just about get it to run without systemd but it's a pain and every update will break your fixes).

Redhat has a lot of clout.


[Jan 17, 2019] The financial struggles of unplanned retirement

People who are kicked out of their IT jobs around 55 now have difficulty finding even full-time McJobs... Only part-time jobs are available. With the current round of layoffs and job freezes, neoliberalism in the USA is entering its terminal phase, I think.
Jan 17, 2019 | finance.yahoo.com

A survey by Transamerica Center for Retirement Studies found on average Americans are retiring at age 63, with more than half indicating they retired sooner than they had planned. Among them, most retired for health or employment-related reasons.

... ... ...

On April 3, 2018, Linda LaBarbera received the phone call that changed her life forever. "We are outsourcing your work to India and your services are no longer needed, effective today," the voice on the other end of the phone line said.

... ... ...

"It's not like we are starving or don't have a home or anything like that," she says. "But we did have other plans for before we retired and setting ourselves up a little better while we both still had jobs."

... ... ...

Linda hasn't needed to dip into her 401(k) yet. She plans to start collecting Social Security when she turns 70, which will give her the maximum benefit. To earn money and keep busy, Linda has taken short-term contract editing jobs. She says she will only withdraw money from her savings if something catastrophic happens. Her husband's salary is their main source of income.

"I am used to going out and spending money on other people," she says. "We are very generous with our family and friends who are not as well off as we are. So we take care of a lot of people. We can't do that anymore. I can't go out and be frivolous anymore. I do have to look at what we spend - what I spend."

Vogelbacher says cutting costs is essential when living in retirement, especially for those on a fixed income. He suggests moving to a tax-friendly location if possible. Kiplinger ranks Alaska, Wyoming, South Dakota, Mississippi, and Florida as the top five tax-friendly states for retirees. If their health allows, Vogelbacher recommends getting a part-time job. For those who own a home, he says paying off the mortgage is a smart financial move.

... ... ...

Monica is one of the 44 percent of unmarried persons who rely on Social Security for 90 percent or more of their income. At the beginning of 2019, Monica and more than 62 million Americans received a 2.8 percent cost of living adjustment from Social Security. The increase is the largest since 2012.

With the Social Security hike, Monica's monthly check climbed $33. Unfortunately, the new year also brought her a slight increase in what she pays for Medicare; along with a $500 property tax bill and the usual laundry list of monthly expenses.

"If you don't have much, the (Social Security) raise doesn't represent anything," she says with a dry laugh. "But it's good to get it."

[Jan 14, 2019] Safe rm stops you accidentally wiping the system! @ New Zealand Linux

Jan 14, 2019 | www.nzlinux.com
  1. Francois Marier October 21, 2009 at 10:34 am

    Another related tool, to prevent accidental reboots of servers this time, is molly-guard:

    http://packages.debian.org/sid/molly-guard

    It asks you to type the hostname of the machine you want to reboot as an extra confirmation step.
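On a protected host, a session looks roughly like this (a sketch; the exact prompt wording varies between molly-guard versions, and web01 is a hypothetical hostname):

    $ ssh web01
    web01$ sudo reboot
    W: molly-guard: SSH session detected!
    Please type in hostname of the machine to reboot: web01
    [the reboot proceeds only if the typed name matches]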

[Jan 14, 2019] Linux-UNIX xargs command examples

Jan 14, 2019 | www.linuxtechi.com

Example 10: Move files to a different location

linuxtechi@mail:~$ pwd
/home/linuxtechi
linuxtechi@mail:~$ ls -l *.sh
-rw-rw-r-- 1 linuxtechi linuxtechi 0 Sep 15 22:53 abcde.sh
-rw-rw-r-- 1 linuxtechi linuxtechi 0 Sep 15 22:53 abcd.sh
-rw-rw-r-- 1 linuxtechi linuxtechi 0 Sep 15 22:53 fg.sh

linuxtechi@mail:~$ sudo find . -name "*.sh" -print0 | xargs -0 -I {} mv {} backup/
linuxtechi@mail:~$ ls -ltr backup/

total 0
-rw-rw-r-- 1 linuxtechi linuxtechi 0 Sep 15 22:53 abcd.sh
-rw-rw-r-- 1 linuxtechi linuxtechi 0 Sep 15 22:53 abcde.sh
-rw-rw-r-- 1 linuxtechi linuxtechi 0 Sep 15 22:53 fg.sh
linuxtechi@mail:~$
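Note that -I {} forces one mv invocation per file. If your mv is GNU coreutils, the -t option (target directory first) lets xargs batch many file names into a single mv call instead; a minimal sketch of the same move:

find . -name "*.sh" -print0 | xargs -0 mv -t backup/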

[Jan 14, 2019] xargs command tutorial with examples by George Ornbo

Sep 11, 2017 | shapeshed.com
How to use xargs

By default xargs reads items from standard input, separated by blanks or newlines, and builds command lines from them, packing as many arguments as fit into each invocation. In the following example standard input is piped to xargs and the mkdir command is run with the three items as arguments, creating three folders.

echo 'one two three' | xargs mkdir
ls
one two three
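Since xargs packs as many arguments as fit into one command line, mkdir above runs only once. The -n option caps the number of arguments per invocation if you need one command per item:

echo 'one two three' | xargs echo        # one invocation: echo one two three
echo 'one two three' | xargs -n 1 echo   # three invocations, one argument each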
How to use xargs with find

The most common usage of xargs is to use it with the find command. This uses find to search for files or directories and then uses xargs to operate on the results. Typical examples of this are removing files, changing the ownership of files or moving files.

find and xargs can be used together to operate on files that match certain attributes. In the following example files older than two weeks in the temp folder are found and then piped to the xargs command which runs the rm command on each file and removes them.

find /tmp -mtime +14 | xargs rm
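As written, this breaks on file names containing spaces or newlines, because xargs splits input on blanks by default. With GNU find and xargs, the NUL-separated variant is safe (the same caveat applies to the other examples on this page):

find /tmp -mtime +14 -print0 | xargs -0 rm -f --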
xargs vs exec {}

The find command supports the -exec option, which allows arbitrary commands to be run on the files that are found. The following are equivalent.

find ./foo -type f -name "*.txt" -exec rm {} \; 
find ./foo -type f -name "*.txt" | xargs rm

So which one is faster? Let's compare a folder with 1000 files in it.

time find ./foo -type f -name "*.txt" -exec rm {} \;
0.35s user 0.11s system 99% cpu 0.467 total

time find ./foo -type f -name "*.txt" | xargs rm
0.00s user 0.01s system 75% cpu 0.016 total

Using xargs here is clearly far more efficient, because rm is spawned once (or a few times) rather than once per file; benchmark figures vary, but several suggest using xargs over exec {} is about six times more efficient.
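For completeness: modern find can batch arguments itself when -exec is terminated with + instead of \; , which closes most of the gap without needing a pipe at all:

find ./foo -type f -name "*.txt" -exec rm {} +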

How to print commands that are executed

The -t option prints each command that will be executed to the terminal. This can be helpful when debugging scripts.

echo 'one two three' | xargs -t rm
rm one two three
How to view the command and prompt for execution

The -p option prints the command to be executed and prompts the user to run it. This can be useful for destructive operations where you really want to be sure about the command to be run.

echo 'one two three' | xargs -p touch
touch one two three ?...
How to run multiple commands with xargs

It is possible to run multiple commands with xargs by using the -I flag, which replaces occurrences of the given placeholder in the command line with each argument read from input. The following echoes each string and creates a folder named after it.

cat foo.txt
one
two
three

cat foo.txt | xargs -I % sh -c 'echo %; mkdir %'
one 
two
three

ls 
one two three
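One caveat with this pattern: substituting % straight into a shell command string is fragile, since a file name containing quotes or $ would be interpreted by sh. A more robust sketch passes the item as a positional parameter (the _ fills $0):

cat foo.txt | xargs -I % sh -c 'echo "$1"; mkdir -- "$1"' _ %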
[Jan 10, 2019] When idiots are offloaded to the security department, interesting things eventually happen to the network

Highly recommended!
Security departments often do more damage to the network than any sophisticated hacker can, especially when they are staffed with morons, as they usually are. One of the most blatant examples is below... Those idiots decided to disable traceroute (which means blocking ICMP) in order to increase security.
Notable quotes:
"... Traceroute is disabled on every network I work with to prevent intruders from determining the network structure. Real pain in the neck, but one of those things we face to secure systems. ..."
"... Also really stupid. A competent attacker (and only those manage it into your network, right?) is not even slowed down by things like this. ..."
"... Breaking into a network is a slow process. Slow and precise. Trying to fix problems is a fast reactionary process. Who do you really think you're hurting? Yes another example of how ignorant opinions can become common sense. ..."
"... Disable all ICMP is not feasible as you will be disabling MTU negotiation and destination unreachable messages. You are essentially breaking the TCP/IP protocol. And if you want the protocol working OK, then people can do traceroute via HTTP messages or ICMP echo and reply. ..."
"... You have no fucking idea what you're talking about. I run a multi-regional network with over 130 peers. Nobody "disables ICMP". IP breaks without it. Some folks, generally the dimmer of us, will disable echo responses or TTL expiration notices thinking it is somehow secure (and they are very fucking wrong) but nobody blocks all ICMP, except for very very dim witted humans, and only on endpoint nodes. ..."
"... You have no idea what you're talking about, at any level. "disabled ICMP" - state statement alone requires such ignorance to make that I'm not sure why I'm even replying to ignorant ass. ..."
"... In short, he's a moron. I have reason to suspect you might be, too. ..."
"... No, TCP/IP is not working fine. It's broken and is costing you performance and $$$. But it is not evident because TCP/IP is very good about dealing with broken networks, like yours. ..."
"... It's another example of security by stupidity which seldom provides security, but always buys added cost. ..."
"... A brief read suggests this is a good resource: https://john.albin.net/essenti... [albin.net] ..."
"... Linux has one of the few IP stacks that isn't derived from the BSD stack, which in the industry is considered the reference design. Instead for linux, a new stack with it's own bugs and peculiarities was cobbled up. ..."
"... Reference designs are a good thing to promote interoperability. As far as TCP/IP is concerned, linux is the biggest and ugliest stepchild. A theme that fits well into this whole discussion topic, actually. ..."
May 27, 2018 | linux.slashdot.org

jfdavis668 ( 1414919 ) , Sunday May 27, 2018 @11:09AM ( #56682996 )

Re:So ( Score: 5 , Interesting)

Traceroute is disabled on every network I work with to prevent intruders from determining the network structure. Real pain in the neck, but one of those things we face to secure systems.

Anonymous Coward writes:
Re: ( Score: 2 , Insightful)

What is the point? If an intruder is already there couldn't they just upload their own binary?

Hylandr ( 813770 ) , Sunday May 27, 2018 @05:57PM ( #56685274 )
Re: So ( Score: 5 , Interesting)

They can easily. And often time will compile their own tools, versions of Apache, etc..

At best it slows down incident response and resolution while doing nothing to prevent discovery of their networks. If you only use Vlans to segregate your architecture you're boned.

gweihir ( 88907 ) , Sunday May 27, 2018 @12:19PM ( #56683422 )
Re: So ( Score: 5 , Interesting)

Also really stupid. A competent attacker (and only those manage it into your network, right?) is not even slowed down by things like this.

bferrell ( 253291 ) , Sunday May 27, 2018 @12:20PM ( #56683430 ) Homepage Journal
Re: So ( Score: 4 , Interesting)

Except it DOESN'T secure anything, simply renders things a little more obscure... Since when is obscurity security?

fluffernutter ( 1411889 ) writes:
Re: ( Score: 3 )

Doing something to make things more difficult for a hacker is better than doing nothing to make things more difficult for a hacker. Unless you're lazy, as many of these things should be done as possible.

DamnOregonian ( 963763 ) , Sunday May 27, 2018 @04:37PM ( #56684878 )
Re:So ( Score: 5 , Insightful)

No.

Things like this don't slow down "hackers" with even a modicum of network knowledge inside of a functioning network. What they do slow down is your ability to troubleshoot network problems.

Breaking into a network is a slow process. Slow and precise. Trying to fix problems is a fast reactionary process. Who do you really think you're hurting? Yes another example of how ignorant opinions can become common sense.

mSparks43 ( 757109 ) writes:
Re: So ( Score: 2 )

Pretty much my reaction. Like, WTF? OTOH, redhat flavors all still on glibc2 are starting to become a regular p.i.t.a., so the chances of this actually becoming a thing to be concerned about seem very low.

Kinda like gdpr, same kind of groupthink that anyone actually cares or concerns themselves with policy these days.

ruir ( 2709173 ) writes:
Re: ( Score: 3 )

Disabling all ICMP is not feasible, as you will be disabling MTU negotiation and destination-unreachable messages. You are essentially breaking the TCP/IP protocol. And even if you keep the protocol working OK, people can still do traceroute via HTTP messages or ICMP echo and reply.

Or they can do reverse traceroute at least until the border edge of your firewall via an external site.
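To illustrate the difference between "hide from ping" and "break the IP control channel", here is a hedged iptables sketch (the rule set is illustrative only, not a complete or recommended policy):

# keep the ICMP messages TCP/IP actually depends on
iptables -A INPUT -p icmp --icmp-type destination-unreachable -j ACCEPT  # includes fragmentation-needed, required for PMTUD
iptables -A INPUT -p icmp --icmp-type time-exceeded -j ACCEPT            # TTL expiry, what traceroute listens for
# then, if policy insists on it, drop only echo requests
iptables -A INPUT -p icmp --icmp-type echo-request -j DROP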

DamnOregonian ( 963763 ) , Sunday May 27, 2018 @04:32PM ( #56684858 )
Re:So ( Score: 4 , Insightful)

You have no fucking idea what you're talking about. I run a multi-regional network with over 130 peers. Nobody "disables ICMP". IP breaks without it. Some folks, generally the dimmer of us, will disable echo responses or TTL expiration notices thinking it is somehow secure (and they are very fucking wrong) but nobody blocks all ICMP, except for very very dim witted humans, and only on endpoint nodes.

DamnOregonian ( 963763 ) writes:
Re: ( Score: 3 )

That's hilarious... I am *the guy* who runs the network. I am our senior network engineer. Every line in every router -- mine.

You have no idea what you're talking about, at any level. "Disabled ICMP" - that statement alone requires such ignorance to make that I'm not sure why I'm even replying to this ignorant ass.

DamnOregonian ( 963763 ) writes:
Re: ( Score: 3 )

Nonsense. I conceded that morons may actually go through the work to totally break their PMTUD, IP error signaling channels, and make their nodes "invisible"

I understand "networking" at a level I'm pretty sure you only have a foggy understanding of. I write applications that require layer-2 packet building all the way up to layer-4.

In short, he's a moron. I have reason to suspect you might be, too.

DamnOregonian ( 963763 ) writes:
Re: ( Score: 3 )

A CDS is MAC. Turning off ICMP toward people who aren't allowed to access your node/network is understandable. They can't get anything else though, why bother supporting the IP control channel? CDS does *not* say turn off ICMP globally. I deal with CDS, SSAE16 SOC 2, and PCI compliance daily. If your CDS solution only operates with a layer-4 ACL, it's a pretty simple model, or You're Doing It Wrong (TM)

nyet ( 19118 ) writes:
Re: ( Score: 3 )

> I'm not a network person

IOW, nothing you say about networking should be taken seriously.

kevmeister ( 979231 ) , Sunday May 27, 2018 @05:47PM ( #56685234 ) Homepage
Re:So ( Score: 4 , Insightful)

No, TCP/IP is not working fine. It's broken and is costing you performance and $$$. But it is not evident because TCP/IP is very good about dealing with broken networks, like yours.

The problem is that doing this requires things like packet fragmentation, which greatly increases router CPU load and reduces the maximum PPS of your network, as well as resulting in dropped packets requiring re-transmission. It may also result in window collapse followed by slow-start; though rapid recovery mitigates much of this, it's still not free.

It's another example of security by stupidity which seldom provides security, but always buys added cost.

Hylandr ( 813770 ) writes:
Re: ( Score: 3 )

As a server engineer I am experiencing this with our network team right now.

Do you have some reading that I might be able to further educate myself? I would like to be able to prove to the directors why disabling ICMP on the network may be the cause of our issues.

Zaelath ( 2588189 ) , Sunday May 27, 2018 @07:51PM ( #56685758 )
Re:So ( Score: 4 , Informative)

A brief read suggests this is a good resource: https://john.albin.net/essenti... [albin.net]
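One practical way to demonstrate the breakage is to probe the path MTU by hand with Linux iputils (-M do sets the Don't Fragment bit; the host name is just an example):

ping -c 3 -M do -s 1472 example.com    # 1472 bytes of payload + 28 bytes of headers = 1500
tracepath example.com                  # reports the discovered PMTU hop by hop

If the large DF ping silently disappears while smaller sizes work, and no "fragmentation needed" ICMP error comes back, something on the path is eating ICMP.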

Bing Tsher E ( 943915 ) , Sunday May 27, 2018 @01:22PM ( #56683792 ) Journal
Re: Denying ICMP echo @ server/workstation level t ( Score: 5 , Insightful)

Linux has one of the few IP stacks that isn't derived from the BSD stack, which in the industry is considered the reference design. Instead, for Linux, a new stack with its own bugs and peculiarities was cobbled up.

Reference designs are a good thing to promote interoperability. As far as TCP/IP is concerned, linux is the biggest and ugliest stepchild. A theme that fits well into this whole discussion topic, actually.

[Jan 10, 2019] saferm Safely remove files, moving them to GNOME/KDE trash instead of deleting by Eemil Lagerspetz

Jan 10, 2019 | github.com
#!/bin/bash
##
## saferm.sh
## Safely remove files, moving them to GNOME/KDE trash instead of deleting.
## Made by Eemil Lagerspetz
## Login   <vermind@drache>
## 
## Started on  Mon Aug 11 22:00:58 2008 Eemil Lagerspetz
## Last update Sat Aug 16 23:49:18 2008 Eemil Lagerspetz
##

version="1.16";

## flags (change these to change default behaviour)
recursive="" # do not recurse into directories by default
verbose="true" # set verbose by default for inexperienced users.
force="" #disallow deleting special files by default
unsafe="" # do not behave like regular rm by default

## possible flags (recursive, verbose, force, unsafe)
# don't touch this unless you want to create/destroy flags
flaglist="r v f u q"

# Colours
blue='\e[1;34m'
red='\e[1;31m'
norm='\e[0m'

## trashbin definitions
# this is the same for newer KDE and GNOME:
trash_desktops="$HOME/.local/share/Trash/files"
# if neither is running:
trash_fallback="$HOME/Trash"

# use .local/share/Trash?
use_desktop=$( ps -U $USER | grep -E "gnome-settings|startkde|mate-session|mate-settings|mate-panel|gnome-shell|lxsession|unity" )

# mounted filesystems, for avoiding cross-device move on safe delete
filesystems=$( mount | awk '{print $3; }' )

if [ -n "$use_desktop" ]; then
    trash="${trash_desktops}"
    infodir="${trash}/../info";
    for k in "${trash}" "${infodir}"; do
        if [ ! -d "${k}" ]; then mkdir -p "${k}"; fi
    done
else
    trash="${trash_fallback}"
fi

usagemessage() {
        echo -e "This is ${blue}saferm.sh$norm $version. LXDE and Gnome3 detection.
    Will ask to unsafe-delete instead of cross-fs move. Allows unsafe (regular rm) delete (ignores trashinfo).
    Creates trash and trashinfo directories if they do not exist. Handles symbolic link deletion.
    Does not complain about different user any more.\n";
        echo -e "Usage: ${blue}/path/to/saferm.sh$norm [${blue}OPTIONS$norm] [$blue--$norm] ${blue}files and dirs to safely remove$norm"
        echo -e "${blue}OPTIONS$norm:"
        echo -e "$blue-r$norm      allows recursively removing directories."
        echo -e "$blue-f$norm      Allow deleting special files (devices, ...)."
  echo -e "$blue-u$norm      Unsafe mode, bypass trash and delete files permanently."
        echo -e "$blue-v$norm      Verbose, prints more messages. Default in this version."
  echo -e "$blue-q$norm      Quiet mode. Opposite of verbose."
        echo "";
}

detect() {
    fs="" # reset the result from any previous call
    if [ ! -e "$1" ]; then return; fi
    path=$(readlink -f "$1")
    for det in $filesystems; do
        match=$( echo "$path" | grep -oE "^$det" )
        if [ -n "$match" ]; then
            if [ ${#det} -gt ${#fs} ]; then
                fs="$det"
            fi
        fi
    done
}


trashinfo() {
#gnome: generate trashinfo:
        bname=$( basename -- "$1" )
    fname="${trash}/../info/${bname}.trashinfo"
    cat > "${fname}" << EOF
[Trash Info]
Path=$PWD/${1}
DeletionDate=$( date +%Y-%m-%dT%H:%M:%S )
EOF
}

setflags() {
    for k in $flaglist; do
        reduced=$( echo "$1" | sed "s/$k//" )
        if [ "$reduced" != "$1" ]; then
            flags_set="$flags_set $k"
        fi
    done
  for k in $flags_set; do
        if [ "$k" == "v" ]; then
            verbose="true"
        elif [ "$k" == "r" ]; then 
            recursive="true"
        elif [ "$k" == "f" ]; then 
            force="true"
        elif [ "$k" == "u" ]; then 
            unsafe="true"
        elif [ "$k" == "q" ]; then 
    unset verbose
        fi
  done
}

performdelete() {
    # "delete" = move to trash (or really remove, in unsafe mode)
    if [ -n "$unsafe" ]; then
        if [ -n "$verbose" ]; then echo -e "Deleting $red$1$norm"; fi
        # UNSAFE: permanently remove files.
        rm -rf -- "$1"
    else
        if [ -n "$verbose" ]; then echo -e "Moving $blue$1$norm to $red${trash}$norm"; fi
        mv -b -- "$1" "${trash}" # moves and backs up old files
    fi
}

askfs() {
  detect "$1"
  if [ "${fs}" != "${tfs}" ]; then
    unset answer;
    until [ "$answer" == "y" -o "$answer" == "n" ]; do
      echo -e "$blue$1$norm is on $blue${fs}$norm. Unsafe delete (y/n)?"
      read -n 1 answer;
    done
    if [ "$answer" == "y" ]; then
      unsafe="yes"
    fi
  fi
}

complain() {
  msg=""
  if [ ! -e "$1" -a ! -L "$1" ]; then # does not exist
    msg="File does not exist:"
        elif [ ! -w "$1" -a ! -L "$1" ]; then # not writable
    msg="File is not writable:"
        elif [ ! -f "$1" -a ! -d "$1" -a -z "$force" ]; then # Special or sth else.
        msg="Is not a regular file or directory (and -f not specified):"
        elif [ -f "$1" ]; then # is a file
    act="true" # operate on files by default
        elif [ -d "$1" -a -n "$recursive" ]; then # is a directory and recursive is enabled
    act="true"
        elif [ -d "$1" -a -z "${recursive}" ]; then
                msg="Is a directory (and -r not specified):"
        else
                # not file or dir. This branch should not be reached.
                msg="No such file or directory:"
        fi
}

asknobackup() {
  unset answer
        until [ "$answer" == "y" -o "$answer" == "n" ]; do
          echo -e "$blue$k$norm could not be moved to trash. Unsafe delete (y/n)?"
          read -n 1 answer
        done
        if [ "$answer" == "y" ]
        then
          unsafe="yes"
          performdelete "${k}"
          ret=$?
                # Reset temporary unsafe flag
          unset unsafe
          unset answer
        else
          unset answer
        fi
}

deletefiles() {
  for k in "$@"; do
          fdesc="$blue$k$norm";
          complain "${k}"
          if [ -n "$msg" ]
          then
                  echo -e "$msg $fdesc."
    else
        #actual action:
        if [ -z "$unsafe" ]; then
          askfs "${k}"
        fi
                  performdelete "${k}"
                  ret=$?
                  # Reset temporary unsafe flag
                  if [ "$answer" == "y" ]; then unset unsafe; unset answer; fi
      #echo "MV exit status: $ret"
      if [ ! "$ret" -eq 0 ]
      then 
        asknobackup "${k}"
      fi
      if [ -n "$use_desktop" ]; then
          # generate trashinfo for desktop environments
        trashinfo "${k}"
      fi
    fi
        done
}

# Make trash if it doesn't exist
if [ ! -d "${trash}" ]; then
    mkdir "${trash}";
fi

# find out which flags were given
afteropts=""; # boolean for end-of-options reached
for k in "$@"; do
        # if starts with dash and before end of options marker (--)
        if [ "${k:0:1}" == "-" -a -z "$afteropts" ]; then
                if [ "${k:1:2}" == "-" ]; then # if end of options marker
                        afteropts="true"
                else # option(s)
                    setflags "$k" # set flags
                fi
        else # not starting with dash, or after end-of-opts
                files[++i]="$k"
        fi
done

if [ -z "${files[1]}" ]; then # no parameters?
        usagemessage # tell them how to use this
        exit 0;
fi

# Which fs is trash on?
detect "${trash}"
tfs="$fs"

# do the work
deletefiles "${files[@]}"
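Based on the usage message above, invocation looks like this (the file names are examples):

./saferm.sh notes.txt             # moves notes.txt to the trash directory
./saferm.sh -r old-project/       # recursively trashes a directory
./saferm.sh -u -r /tmp/scratch/   # unsafe mode: plain rm -rf, bypassing the trash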



[Jan 08, 2019] Bind DNS threw a (network unreachable) error CentOS

Jan 08, 2019 | www.reddit.com

submitted 11 days ago by mr-bope

Bind 9 on my CentOS 7.6 machine threw this error:
error (network unreachable) resolving './DNSKEY/IN': 2001:7fe::53#53
error (network unreachable) resolving './NS/IN': 2001:7fe::53#53
error (network unreachable) resolving './DNSKEY/IN': 2001:500:a8::e#53
error (network unreachable) resolving './NS/IN': 2001:500:a8::e#53
error (FORMERR) resolving './NS/IN': 198.97.190.53#53
error (network unreachable) resolving './DNSKEY/IN': 2001:dc3::35#53
error (network unreachable) resolving './NS/IN': 2001:dc3::35#53
error (network unreachable) resolving './DNSKEY/IN': 2001:500:2d::d#53
error (network unreachable) resolving './NS/IN': 2001:500:2d::d#53
managed-keys-zone: Unable to fetch DNSKEY set '.': failure

What does it mean? Can it be fixed?

And is it at all related to DNSSEC? Because I cannot seem to get it working whatsoever.

cryan7755, 11 days ago
Looks like a failure to reach IPv6-addressed NS servers. If you don't utilize IPv6 on your network, then this should be expected.
knobbysideup, 11 days ago
Can be dealt with by adding
#/etc/sysconfig/named
OPTIONS="-4"
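For the option to take effect, restart the daemon (assuming the stock CentOS 7 package, where the systemd unit is called named):

systemctl restart named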

[Jan 01, 2019] Re: customize columns in single panel view

Jun 12, 2017 | mail.gnome.org
On 6/12/17, Karel <lists vcomp ch> wrote:
Hello,

Is it possible to customize the columns in the single panel view ?

For my default (two panel) view, I have customized it using:

 -> Listing Mode
   (*) User defined:
      half type name | size:15 | mtime

however, when I switch to the single panel view, there are different
columns (obviously):

  Permission   Nl   Owner   Group   Size   Modify time   Name

For instance, I need to change the width of "Size" to 15.
No, you can't change the format of the "Long" listing-mode.

(You can make the "User defined" listing-mode display in one panel (by
changing "half" to "full"), but this is not what you want.)

So, you have two options:

(1) Modify the source code (search panel.c for "full perm space" and
tweak it); or:

(2) Use mc^2. It allows you to do this. (It already comes with a
snippet that enlarges the "Size" field a bit so there'd be room for
the commas (or other locale-dependent formatting) it adds. This makes
reading long numbers much easier.)

[Jan 01, 2019] Re- Help- meaning of the panelize command in left-right menus

Feb 17, 2017 | mail.gnome.org


On Thu, Feb 16, 2017 at 01:25:22PM +1300, William Kimber wrote:
Briefly,  if you do a search over several directories you can put all those
files into a single panel. Not withstanding that they are from different
directories.
I'm not sure I understand what you mean here; anyway I noticed that if you do a
search using the "Find file" (M-?) command, choose "Panelize" (at the bottom
of the "Find File" popup window), then change to some other directory (thus
exiting from panelized mode), if you now choose Left -> Panelize, you can recall
the panelized view of the last "Find file" results. Is this what you mean?

However this seems to work only with panelized results coming from the
"Find file" command, not with results from the "External panelize" command:
if I change directory, and then choose Left -> Panelize I get an empty panel.
Is this a bug?

Cri

[Jan 01, 2019] %f macro in mcedit

Jan 01, 2019 | mail.gnome.org

    
Hi!
My mc version:
$ mc --version
GNU Midnight Commander 4.8.19
System: Fedora 24

I just want to tell you that %f macro in mcedit is not correct. It
contains the current file name that is selected in the panel but not
the actual file name that is opened in mcedit.

I created the mcedit item to run C++ program:
+= f \.cpp$
r       Run
    clear
    app_path=/tmp/$(uuidgen)
    if g++ -o $app_path "%f"; then
        $app_path
        rm $app_path
    fi
    echo 'Press any key to exit.'
    read -s -n 1

Imagine that I opened the file a.cpp in mcedit.
Then I pressed alt+` and switched to panel.
Then I selected (or even opened in mcedit) the file b.cpp.
Then I pressed alt+` and switched to mcedit with a.cpp.
Then I executed the "Run" item from user menu.
And... The b.cpp will be compiled and run. This is wrong! Why b.cpp???
I executed "Run" from a.cpp!

I propose you to do the new macros for mcedit.

%opened_file
- the file name that is opened in current instance of mcedit.

%opened_file_full_path
- as %opened_file but full path to that file.

I think that %opened_file may be not safe because the current
directory may be changed in mc panel. So it is better to use
%opened_file_full_path.

%opened_file_dir
- full path to directory where %opened_file is.

%save
- save opened file before executing the menu commands. May be useful
in some cases. For example I don't want to press F2 every time before
run changed code.

Thanks for the mc.
Best regards, Sergiy Vovk.

[Jan 01, 2019] Re- Setting left and right panel directories at startup

Jan 01, 2019 | mail.gnome.org

Re: Setting left and right panel directories at startup



Sorry, forgot to reply all.
I said that, personally, I would put ~/Documents in the directory hotlist and get there via C-\.

On Sun, Mar 18, 2018 at 5:38 PM, Keith Roberts < keith karsites net > wrote:

On 18/03/18 20:14, wwp wrote:

Hello Keith,

On Sun, 18 Mar 2018 19:14:33 +0000 Keith Roberts < keith karsites net > wrote:

Hi all,

I found this in /home/keith/.config/mc/panels. ini

[Dirs]
current_is_left=true
other_dir=/home/keith/Document s/

I'd like mc to open /home/keith/Documents/ in the left panel as well whenever I start mc up, so both panels are showing the /home/keith/Documents/ directory.

Is there some way to tell mc how to do this please?

I think you could use: `mc <path> <path>`, for instance:
`mc /home/keith/Documents/ /tmp`, but of course this requires you to know
the second path to open in addition to your ~/Documents. Not really
satisfying?

Regards,

Hi wwp,

Thanks for your suggestion and that seems to work OK - I just start mc with the following command:

mc ~/Documents

and both panes are now opened at the ~/Documents directory, which is fine.

Kind Regards,

Keith Roberts

[Jan 01, 2019] Mc2 by mooffie

Jan 01, 2019 | midnight-commander.org

#3745 (Integration mc with mc2(Lua)) – Midnight Commander

Ticket #3745 (closed enhancement: invalid)

Opened 2 years ago. Last modified 2 years ago.

Integration mc with mc2 (Lua)

Reported by: q19l405n5a
Owned by:
Priority: major
Milestone:
Component: mc-core
Version: master
Keywords:
Cc:
Blocked By:
Blocking:
Branch state: no branch
Votes for changeset:

Description: I think it is necessary that the code bases of mc and mc2 correspond to each other. mooffie, can you check that patches from andrew_b merge easily with mc2, and if some patch conflicts with the mc2 code, hold those changes by noting it in the corresponding ticket? zaytsev, can you help automate this (continuous integration, Travis and so on)? Sorry, but a few words in Russian (translated):

Guys, I'm not trying to give orders here; you are doing great work. I just want to point out that Mooffie is trying to keep his code up to date, but seeing how he keeps running into problems out of nowhere, I'm afraid his enthusiasm may fade.
Change History

comment:1 Changed 2 years ago by zaytsev-work

​ https://mail.gnome.org/archives/mc-devel/2016-February/msg00021.html

I asked some time ago what plans mooffie has for mc2 and never got an answer. Note that I totally don't blame him for that. Everyone here is working at their own pace. Sometimes I disappear for weeks or months, because I can't get a spare 5 minutes, not even speaking of several hours, due to the non-mc related workload. I hope that one day we'll figure out the way towards merging it, and eventually get it done.

In the mean time, he's working together with us by offering extremely important and well-prepared contributions, which are a pleasure to deal with and we are integrating them as fast as we can, so it's not like we are at war and not talking to each other.

Anyways, creating random noise in the ticket tracking system will not help to advance your cause. The only way to influence the process is to invest serious amount of time in the development.
comment:2 Changed 2 years ago by zaytsev

Lua-l - [ANN] mc^2

[ANN] mc^2, posted by Mooffie on Oct 15, 2015:

mc^2 is a fork of Midnight Commander with Lua support:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/

...but let's skip the verbiage and go directly to the screenshots:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/guide/SCREENSHOTS.md.html

Now, I assume most of you here aren't users of MC.

So I won't bore you with description of how Lua makes MC a better
file-manager. Instead, I'll just list some details that may interest
any developer who works on extending some application.

And, as you'll shortly see, you may find mc^2 useful even if you
aren't a user of MC!

So, some interesting details:

* Programmer Goodies

- You can restart the Lua system from within MC.

- Since MC has a built-in editor, you can edit Lua code right there
and restart Lua. So it's somewhat like a live IDE:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/images/screenshots/game.png

- It comes with programmer utilities: regular expressions; global scope
protected by default; good pretty printer for Lua tables; calculator
where you can type Lua expressions; the editor can "lint" Lua code (and
flag uses of global variables).

- It installs a /usr/bin/mcscript executable letting you use all the
goodies from "outside" MC:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/guide/60-standalone.md.html

* User Interface programming (UI)

- You can program a UI (user interface) very easily. The API is fun
yet powerful. It has some DOM/JavaScript borrowings in it: you can
attach functions to events like on_click, on_change, etc. The API
uses "properties", so your code tends to be short and readable:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/guide/40-user-interface.md.html

- The UI has a "canvas" object letting you draw your own stuff. The
system is so fast you can program arcade games. Pacman, Tetris,
Digger, whatever:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/classes/ui.Canvas.html

Need timers in your game? You've got them:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/modules/timer.html

- This UI API is an ideal replacement for utilities like dialog(1).
You can write complex frontends to command-line tools with ease:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/images/screenshots/frontend-scanimage.png

- Thanks to the aforementioned /usr/bin/mcscript, you can run your
games/frontends from "outside" MC:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/images/screenshots/standalone-game.png

* Misc

- You can compile it against Lua 5.1, 5.2, 5.3, or LuaJIT.

- Extensive documentation.

[Jan 01, 2019] mc - How can I set the default (user defined) listing mode in Midnight Commander- - Unix Linux Stack Exchange

Jan 01, 2019 | unix.stackexchange.com

papaiatis, Jul 14, 2016 at 11:51

I defined my own listing mode and I'd like to make it permanent so that on the next mc start my defined listing mode will be set. I found no configuration file for mc.

You probably have Auto save setup turned off in the Options->Configuration menu.

You can save the configuration manually via Options->Save setup.

Panels setup is saved to ~/.config/mc/panels.ini.
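After Options->Save setup you can confirm the user-defined format actually landed in the file; the key name below (user_format) is what recent 4.8.x releases appear to use, so verify against your version rather than hand-editing blindly:

grep -n -A1 user_format ~/.config/mc/panels.ini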

[Jan 01, 2019] Re change default configuration

Jan 01, 2019 | mail.gnome.org
On Fri, 27 Jul 2018 17:01:17 +0300 Sergey Naumov via mc-devel wrote:
I'm curious whether there is a way to change default configuration that is
generated when user invokes mc for the first time?

For example, I want "use_internal_edit" to be true by default instead of
false for any new user.
In vanilla mc the initial value of use_internal_edit is true. Some distros
(Debian and some others) change this to false.
If there is a way to do it, then is it possible to just use lines that I
want to change, not the whole configuration, say

[Midnight-Commander]
use_internal_edit=true
Before first run, ~/.config/mc/ini doesn't exist.
If ~/.config/mc/ini doesn't exist, /etc/mc/mc.ini is used.
If /etc/mc/mc.ini doesn't exist, /usr/share/mc/mc.ini is used.
You can create one of these files with required default settings set.

Unfortunately, there is no info about /etc/mc/mc.ini in the man page.
I'll fix that at this weekend.

[Jan 01, 2019] Re does mc support sftp

Jan 01, 2019 | mail.gnome.org

Yes, it does, if it has been compiled accordingly.

http://www.linux-databook.info/wp-content/uploads/2015/04/MC-02.jpeg

On Thu, 15 Nov 2018, Fourhundred Thecat wrote:

Hello,

I need to connect to server where I don't have shell access (no ssh)

the server only allows sftp. I can connect with winscp, for instance.

does mc support sftp  as well ?

thanks,

--
Sincerely yours,
Yury V. Zaytsev

[Jan 01, 2019] Re: Ctrl+J in mc

Jan 01, 2019 | mail.gnome.org

Thomas Zajic

* Ivan Pizhenko via mc-devel, 28.10.18 21:52

Hi, I'm wondering why following happens:
In Ubuntu and FreeBSD, when I press Ctrl+J in MC, it inserts the name
of the file the cursor is currently on into the command line. But this doesn't work in
CentOS and RHEL.
How to fix that in CentOS and RHEL?
Ivan.
Never heard about Ctrl+j, I always used Alt+Enter for that purpose.
Alt+a does the same thing for the path, BTW (just in case you didn't
know). :-)

HTH,
Thomas

[Jan 01, 2019] IBM Systems Magazine - All Hail the Midnight Commander! by Jesse Gorzinski

Notable quotes:
"... Sometimes, though, a tool is just too fun to pass up; such is the case for Midnight Commander! Of course, we also had numerous requests for it, and that helped, too! Today, let's explore this useful utility. ..."
Nov 27, 2018 | ibmsystemsmag.com

Quite often, I'm asked how open source deliveries are prioritized at IBM. The answer isn't simple. Even after we estimate the cost of a project, there are many factors to consider. For instance, does it enable a specific solution to run? Does it expand a programming language's abilities? Is it highly-requested by the community or vendors?

Sometimes, though, a tool is just too fun to pass up; such is the case for Midnight Commander! Of course, we also had numerous requests for it, and that helped, too! Today, let's explore this useful utility.

... ... ...

Getting Started
Installing Midnight Commander is easy. Once you have the yum package manager, use it to install the 'mc' package.

In order for the interface to display properly, you'll want to set the LC_ALL environment variable to a UTF-8 locale. For instance, "EN_US.UTF-8" would work just fine. You can have this done automatically by putting the following lines in your $HOME/.profile file (or $HOME/.bash_profile):

LC_ALL=EN_US.UTF-8
export LC_ALL

If you haven't done so already, you might want to also make sure the PATH environment variable is set up to use the new open source tools.

Once that's done, you can run 'mc -c' from your SSH terminal. (You didn't expect this to work from QSH, did you?) If you didn't set up your environment variables, you can just run 'LC_ALL=EN_US.UTF-8 /QOpenSys/pkgs/bin/mc -c' instead. I recommend the '-c' option because it enables colors.

A Community Effort
As with many things open source, IBM was not the only contributor. In this particular case, a "tip of the hat" goes to Jack Woehr. You may remember Jack as the creator of Ublu, an open source programming language for IBM i. He also hosts his own RPM repository with lynx, a terminal-based web browser (perhaps a future topic?). The initial port of Midnight Commander was done collaboratively, with work from both parties. Jack also helped with quality assurance and worked with project owners to upstream all code changes. In fact, the main code stream for Midnight Commander can now be built for IBM i with no modifications.

Now that we've delivered hundreds of open source packages, it seems like there's something for everybody. This seems like one of those tools that is useful for just about anyone. And with a name like "Midnight Commander," how can you go wrong? Try it today!

[Jan 01, 2019] NEWS-4.8.22 Midnight Commander

Looks like they fixed the sftp problems and it is now usable.
Jan 01, 2019 | midnight-commander.org
Major changes since 4.8.21 cover: Core, VFS, Editor, Viewer, Diff viewer, Misc, and Fixes.

[Dec 24, 2018] Phone in sick: it's a small act of rebellion against wage slavery

Notable quotes:
"... By far the biggest act of wage slavery rebellion, don't buy shit. The less you buy, the less you need to earn. Holidays by far the minority of your life should not be a desperate escape from the majority of your life. Spend less, work less and actually really enjoy living more. ..."
"... How about don't shop at Walmart (they helped boost the Chinese economy while committing hari kari on the American Dream) and actually engaging in proper labour action? Calling in sick is just plain childish. ..."
"... I'm all for sticking it to "the man," but when you call into work for a stupid reason (and a hangover is a very stupid reason), it is selfish, and does more damage to the cause of worker's rights, not less. I don't know about where you work, but if I call in sick to my job, other people have to pick up my slack. I work for a public library, and we don't have a lot of funds, so we have the bear minimum of employees we can have and still work efficiently. As such, if anybody calls in, everyone else, up to and including the library director, have to take on more work. ..."
Oct 24, 2015 | The Guardian

"Phoning in sick is a revolutionary act." I loved that slogan. It came to me, as so many good things did, from Housmans, the radical bookshop in King's Cross. There you could rummage through all sorts of anarchist pamphlets and there I discovered, in the early 80s, the wondrous little magazine Processed World. It told you basically how to screw up your workplace. It was smart and full of small acts of random subversion. In many ways it was ahead of its time as it was coming out of San Francisco and prefiguring Silicon Valley. It saw the machines coming. Jobs were increasingly boring and innately meaningless. Workers were "data slaves" working for IBM ("Intensely Boring Machines").

What Processed World was doing was trying to disrupt the identification so many office workers were meant to feel with their management, not through old-style union organising, but through small acts of subversion. The modern office, it stressed, has nothing to do with human need. Its rebellion was about working as little as possible, disinformation and sabotage. It was making alienation fun. In 1981, it could not have known that a self-service till cannot ever phone in sick.

I was thinking of this today, as I wanted to do just that. I have made myself ill with a hangover. A hangover, I always feel, is nature's way of telling you to have a day off. One can be macho about it and eat your way back to sentience via the medium of bacon sandwiches and Maltesers. At work, one is dehydrated, irritable and only semi-present. Better, surely, though to let the day fall through you and dream away.

Having worked in America, though, I can say for sure that they brook no excuses whatsoever. When I was late for work and said things like, "My alarm clock did not go off", they would say that this was not a suitable explanation, which flummoxed me. I had to make up others. This was just to work in a shop.

This model of working – long hours, very few holidays, few breaks, two incomes needed to raise kids, crazed loyalty demanded by huge corporations, the American way – is where we're heading. Except now the model is even more punishing. It is China. We are expected to compete with an economy whose workers are often closer to indentured slaves than anything else.

This is what striving is, then: dangerous, demoralising, often dirty work. Buckle down. It's the only way forward, apparently, which is why our glorious leaders are sucking up to China, which is immoral, never mind ridiculously short-term thinking.

So again I must really speak up for the skivers. What we have to understand about austerity is its psychic effects. People must have less. So they must have less leisure, too. The fact is life is about more than work and work is rapidly changing. Skiving in China may get you killed but here it may be a small act of resistance, or it may just be that skivers remind us that there is meaning outside wage-slavery.

Work is too often discussed by middle-class people in ways that are simply unrecognisable to anyone who has done crappy jobs. Much work is not interesting and never has been. Now that we have a political and media elite who go from Oxbridge to working for a newspaper or a politician, a lot of nonsense is spouted. These people have not cleaned urinals on a nightshift. They don't sit lonely in petrol stations manning the till. They don't have to ask permission for a toilet break in a call centre. Instead, their work provides their own special identity. It is very important.

Low-status jobs, like caring, are for others. The bottom-wipers of this world do it for the glory, I suppose. But when we talk of the coming automation that will reduce employment, bottom-wiping will not be mechanised. Nor will it be romanticised, as old male manual labour is. The mad idea of reopening the coal mines was part of the left's strange notion of the nobility of labour. Have these people ever been down a coal mine? Would they want that life for their children?

Instead we need to talk about the dehumanising nature of work. Bertrand Russell and Keynes thought our goal should be less work, that technology would mean fewer hours.

Far from work giving meaning to life, in some surveys 40% of us say that our jobs are meaningless. Nonetheless, the art of skiving is verboten as we cram our children with ever longer hours of school and homework. All this striving is for what exactly? A soul-destroying job?

Just as education is decided by those who loved school, discussions about work are had by those to whom it is about more than income.

The parts of our lives that are not work – the places we dream or play or care, the space we may find creative – all these are deemed outside the economy. All this time is unproductive. But who decides that?

Skiving work is bad only to those who know the price of everything and the value of nothing.

So go on: phone in sick. You know you want to.

friedad 23 Oct 2015 18:27

We now exist in a society in which the Fear Cloud is wrapped around each citizen. Our proud history of union and labor, fighting for decent wages and living conditions for all citizens, and mostly achieving these aims, a history which should be taught to every child educated in every school in this country, is now gradually but surely being eroded by ruthless speculators in government, and that is what future generations are inheriting. The workforce in fear of taking a sick day, the young looking for work in fear of speaking out at diminishing rewards: definitely this 21st century is the Century of Fear. And how is this fear denied? With mind-blowing drugs, whether alcohol, prescription drugs or illicit drugs: a society in denial. We do not require a heavenly object to destroy us; a few soulless monsters in our midst are master manipulators, getting closer and closer to accomplishing their aim of having zombies do their bidding. Need a kidney? No worries, a zombie dishwasher is handy for one. Oh wait, that time is already here.

Hemulen6 23 Oct 2015 15:06

Oh join the real world, Suzanne! Many companies now have a limit to how often you can be sick. In the case of the charity I work for it's 9 days a year. I overstepped it, I was genuinely sick, and was hauled up in front of Occupational Health. That will now go on my record and count against me. I work for a cancer care charity. Irony? Surely not.

AlexLeo -> rebel7 23 Oct 2015 13:34

Which is exactly my point. You compete on relevant job skills and quality of your product, not what school you have attended.

Yes, there are thousands, tens of thousands of folks here around San Jose who barely speak English, but are smart and hard working as hell, and it takes them a few years to get to 150-200K per year. Many of them get to 300-400K, if they come from strong schools in their countries of origin, compared to the 10K or so where they came from, and probably more than the whining readership here.

This is really difficult to swallow for the Brits back in Britain, isn't it. Those who have moved over have experienced the type of social mobility unthinkable in Britain, but they have had to work hard to get to 300K-700K per year, much better than the 50-100K their parents used to make back in GB. These are averages based on personal interactions with, say, 50 Brits over the last 15+ years, all employed in Silicon Valley in very different jobs and roles.

Todd Owens -> Scott W 23 Oct 2015 11:00

I get what you're saying and I agree with a lot of what you said. My only gripe is most employees do not see an operation from a business owner or managerial / financial perspective. They don't understand the costs associated with their performance or lack thereof. I've worked on a lot of projects that were operating at a loss for a future payoff. When someone decides they don't want to do the work they're contracted to perform, that can have a cascading effect on the entire company.

All in all, what's being described is for the most part misguided, because most people are not in the position to evaluate the particulars, or don't even care to. So saying you should do this to accomplish that is bullshit, because it's rarely such a simple equation. If anything, this type of tactic will lead to MORE loss and less money for payroll.


weematt -> Barry1858 23 Oct 2015 09:04

Sorry you just can't have a 'nicer' capitalism.

War ( business by other means) and unemployment ( you can't buck the market), are inevitable concomitants of capitalist competition over markets, trade routes and spheres of interests. (Remember the war science of Nagasaki and Hiroshima from the 'good guys' ?)
"..capital comes dripping from head to foot, from every pore, with blood and dirt". (Marx)

You can't have full employment, or even the 'Right to Work'.

There is always ,even in boom times a reserve army of unemployed, to drive down wages. (If necessary they will inject inflation into the economy)
Unemployment is currently 5.5 percent or 1,860,000 people. If their "equilibrium rate" of unemployment is 4% rather than 5% this would still mean 1,352,000 "need be unemployed". The government don't want these people to find jobs as it would strengthen workers' bargaining position over wages, but that doesn't stop them harassing them with useless and petty form-filling, reporting to the so-called "job centre" just for the sake of it, calling them scroungers and now saying they are mentally defective.
Government is 'over' you not 'for' you.

Governments do not exist to ensure 'fair do's' but to manage social expectations with the minimum of dissent, commensurate with the needs of capitalism in the interests of profit.

Worker participation amounts to self managing workers self exploitation for the maximum of profit for the capitalist class.

Exploitation takes place at the point of production.

" Instead of the conservative motto, 'A fair day's wage for a fair day's work!' they ought to inscribe on their banner the revolutionary watchword, 'Abolition of the wages system!'"

Karl Marx [Value, Price and Profit]

John Kellar 23 Oct 2015 07:19

Fortunately, as a retired veteran I don't have to worry about phoning in sick. However, during my Air Force days, if you were sick you had to get yourself to the Base Medical Section and prove to a medical officer that you were sick. If you convinced the medical officer of your sickness, then you may have been lucky to receive one or two days' sick leave. For those who were very sick or incapable of getting themselves to Base Medical, an ambulance would be sent - promptly.


Rchrd Hrrcks -> wumpysmum 23 Oct 2015 04:17

The function of civil disobedience is to cause problems for the government. Let's imagine that we could get 100,000 people to agree to phone in sick on a particular date in protest at austerity etc. Leaving aside the direct problems to the economy that this would cause. It would also demonstrate a willingness to take action. It would demonstrate a capability to organise mass direct action. It would demonstrate an ability to bring people together to fight injustice. In and of itself it might not have much impact, but as a precedent set it could be the beginning of something massive, including further acts of civil disobedience.


wumpysmum Rchrd Hrrcks 23 Oct 2015 03:51

There's already a form of civil disobedience called industrial action, which the govt are currently attacking by attempting to change statute. Random sickies as per my post above are certainly not the answer in the public sector at least, they make no coherent political point just cause problems for colleagues. Sadly too in many sectors and with the advent of zero hours contracts sickies put workers at risk of sanctions and lose them earnings.


Alyeska 22 Oct 2015 22:18

I'm American. I currently have two jobs and work about 70 hours a week, and I get no paid sick days. In fact, the last time I had a job with a paid sick day was 2001. If I could afford a day off, you think I'd be working 70 hours a week?

I barely make rent most months, and yes... I have two college degrees. When I try to organize my coworkers to unionize for decent pay and benefits, they all tell me not to bother.... they are too scared of getting on management's "bad side" and "getting in trouble" (yes, even though the law says management can't retaliate.)

Unions are different in the USA than in the UK. The workforce has to take a vote to unionize the company workers; you can't "just join" a union here. That's why our pay and working conditions have gotten worse, year after year.


rtb1961 22 Oct 2015 21:58

By far the biggest act of wage slavery rebellion, don't buy shit. The less you buy, the less you need to earn. Holidays by far the minority of your life should not be a desperate escape from the majority of your life. Spend less, work less and actually really enjoy living more.

Pay less attention to advertising and more attention to the enjoyable simplicity of life, of real direct human relationships, all of them, the ones in passing where you wish a stranger well, chats with service staff to make their life better as well as your own, exchange thoughts and ideas with others, be a human being and share humanity with other human beings.

Mkjaks 22 Oct 2015 20:35

How about don't shop at Walmart (they helped boost the Chinese economy while committing hari kari on the American Dream) and actually engaging in proper labour action? Calling in sick is just plain childish.

toffee1 22 Oct 2015 19:13

It is only considered productive if it feeds the beast, that is, contributes to the accumulation of capital so that the beast can have more power over us. The issue here is wage labor. Some 93 percent of the U.S. working population performs wage labor (see the BLS site). It is the highest proportion in any society in history. Under the wage labor (employment) contract, the worker gives up his or her decision-making autonomy. The worker accepts the full command of the employer during the labor process. The employer directs and commands the labor process to achieve goals he sets himself.

Compare this with, for example, a self-employed person providing a service (say, a plumber). In that case, the customer describes the problem to the service provider, but the service provider makes all the decisions on how to organize and apply his labor to solve the problem. Or compare it to a democratically organized coop, where workers collectively make all the decisions: where, how and what to produce. Under the present economic system, a great majority of us are condemned to work in large corporations performing wage labor.

The system of wage labor, stripping us of autonomy over our own labor, creates all the misery in our present world through alienation. Men and women lose their humanity, alienated from their own labor. Outside the world of wage labor, labor can be a source of self-realization and true freedom. Labor can be real fulfillment and love. Labor, together with our capacity to love, makes us human. The bourgeoisie dehumanized us, stealing our humanity; having sold their soul to the beast, they attempt to turn us into ever-consuming machines for the accumulation of capital.

patimac54 -> Zach Baker 22 Oct 2015 17:39

Well said. Most retail employers have cut staff to the minimum possible to keep the stores open, so if anyone is off sick, it's the devil's own job just to get customers served. Making your colleagues work even harder than they normally do because you can't be bothered to act responsibly and show up is just plain selfish.
And sorry, Suzanne, skiving off work is nothing more than an act of complete disrespect for those you work with. If you don't understand that, try getting a proper job for a few months and learn how to exercise some self-control.

TettyBlaBla -> FranzWilde 22 Oct 2015 17:25

It's quite the opposite in government jobs where I am in the US. As the fiscal year comes to a close, managers look at their budgets and go on huge spending sprees, particularly for temp (zero hours in some countries) help and consultants. They fear if they don't spend everything or even a bit more, their spending will be cut in the next budget. This results in people coming in to do work on projects that have no point or usefulness, that will never be completed or even presented up the food chain of management, and ends up costing taxpayers a small fortune.

I did this one year at an Air Quality Agency's IT department while the paid employees sat at their desks watching portable televisions all day. It was truly demeaning.

oommph -> Michael John Jackson 22 Oct 2015 16:59

Thing is though, children - dependents to pay for - are the easiest way to keep yourself chained to work.

The homemaker model works as long as your spouse's employer retains them (and your spouse retains you in an era of 40% divorce).

You are just as dependent on an employer and "work" but far less in control of it now.


Zach Baker 22 Oct 2015 16:41

I'm all for sticking it to "the man," but when you call into work for a stupid reason (and a hangover is a very stupid reason), it is selfish, and it does more damage to the cause of workers' rights, not less. I don't know about where you work, but if I call in sick to my job, other people have to pick up my slack. I work for a public library, and we don't have a lot of funds, so we have the bare minimum of employees we can have and still work efficiently. As such, if anybody calls in, everyone else, up to and including the library director, has to take on more work. If I found out one of my co-workers called in because of a hangover, I'd be pissed. You made the choice to get drunk, knowing that you had to work the following morning. Putting yourself in the same category as someone who is sick and may not have the luxury of taking time off because of a bad employer is insulting.


[Dec 23, 2018] Rule #0 of any checklist

Notable quotes:
"... The Checklist Manifesto ..."
"... The book talks about how checklists reduce major errors in surgery. Hospitals that use checklists are drastically less likely to amputate the wrong leg . ..."
"... any checklist should start off verifying that what you "know" to be true is true ..."
"... Before starting, ask the "Is it plugged in?" question first. What happened today was an example of when asking "Is it plugged in?" would have helped. ..."
"... moral of the story: Make sure that your understanding of the current state is correct. If you're a developer trying to fix a problem, make sure that you are actually able to understand the problem first. ..."
Dec 23, 2018 | hexmode.com

A while back I mentioned Atul Gawande's book The Checklist Manifesto. Today, I got another example of how to improve my checklists.

The book talks about how checklists reduce major errors in surgery. Hospitals that use checklists are drastically less likely to amputate the wrong leg .

So, the takeaway for me is this: any checklist should start off verifying that what you "know" to be true is true. (Thankfully, my errors can be backed out with very few long-term consequences, but I shouldn't use this as an excuse to forego checklists.)

Before starting, ask the "Is it plugged in?" question first. What happened today was an example of when asking "Is it plugged in?" would have helped.

Today I was testing the thumbnailing of some MediaWiki code and trying to understand the $wgLocalFileRepo variable. I copied part of an /images/ directory over from another wiki to my test wiki. I verified that it thumbnailed correctly.

So far so good.

Then I changed the directory parameter and tested. No thumbnail. Later, I realized this is to be expected because I didn't copy over the original images. So that is one issue.

I erased (what I thought was) the thumbnail image and tried again on the main repo. It worked again–I got a thumbnail.

I tried copying the images directory over to the new directory, but the new thumbnailing directory structure didn't produce a thumbnail.

I tried over and over with the same thumbnail and was confused because it kept telling me the same thing.

I added debugging statements and still got nowhere.

Finally, I just did an ls on the directory to verify it was there. It was. And it had files in it.

But not the file I was trying to produce a thumbnail of.

The system that "worked" had the thumbnail, but not the original file.

So, moral of the story: Make sure that your understanding of the current state is correct. If you're a developer trying to fix a problem, make sure that you are actually able to understand the problem first.

Maybe your perception of reality is wrong. Mine was. I was sure that the thumbnails were being generated each time until I discovered that I hadn't deleted the thumbnails, I had deleted the original.
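In shell terms, rule #0 can be made mechanical. A minimal sketch (the wiki path and file name here are hypothetical, not from the post): verify that the original file actually exists before debugging the thumbnailer at all:

ORIG=/var/www/wiki/images/a/ab/Example.jpg   # hypothetical original image
if [ -f "$ORIG" ]; then
    echo "original present: $ORIG"
else
    echo "original MISSING -- fix this before touching the thumbnailer" >&2
fi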

[Dec 20, 2018] Your .bashrc

Notable quotes:
"... Erm, did you know that `tar` autoextracts these days? This will work for pretty much anything: ..."
Dec 20, 2018 | forums.debian.net

pawRoot " 2018-10-15 17:13

Just spent some time editing .bashrc to make my life easier, and wondering if anyone has some cool "tricks" for bash as well.

Here is mine:

Code: Select all
# changing shell appearance
PS1='\[\033[0;32m\]\[\033[0m\033[0;32m\]\u\[\033[0;36m\] @ \[\033[0;36m\]\h \w\[\033[0;32m\]$(__git_ps1)\n\[\033[0;32m\]└─\[\033[0m\033[0;32m\] \$\[\033[0m\033[0;32m\] ▶\[\033[0m\] '

# aliases
alias la="ls -la --group-directories-first --color"

# clear terminal
alias cls="clear"

# update and upgrade packages
alias sup="sudo apt update && sudo apt upgrade"

# search for package
alias apts='apt-cache search'

# start x session
alias x="startx"

# download mp3 in best quality from YouTube
# usage: ytmp3 https://www.youtube.com/watch?v=LINK

alias ytmp3="youtube-dl -f bestaudio --extract-audio --audio-format mp3 --audio-quality 0"

# perform 'la' after 'cd'

alias cd="listDir"

listDir() {
    builtin cd "$*"
    RESULT=$?
    if [ "$RESULT" -eq 0 ]; then
        la
    fi
}

# type "extract filename" to extract the file

extract () {
    if [ -f "$1" ] ; then
        case "$1" in
            *.tar.bz2) tar xvjf "$1" ;;
            *.tar.gz) tar xvzf "$1" ;;
            *.bz2) bunzip2 "$1" ;;
            *.rar) unrar x "$1" ;;
            *.gz) gunzip "$1" ;;
            *.tar) tar xvf "$1" ;;
            *.tbz2) tar xvjf "$1" ;;
            *.tgz) tar xvzf "$1" ;;
            *.zip) unzip "$1" ;;
            *.Z) uncompress "$1" ;;
            *.7z) 7z x "$1" ;;
            *) echo "don't know how to extract '$1'..." ;;
        esac
    else
        echo "'$1' is not a valid file!"
    fi
}

# obvious one

alias ..="cd .."
alias ...="cd ../.."
alias ....="cd ../../.."
alias .....="cd ../../../.."

# tail all logs in /var/log
alias logs="find /var/log -type f -exec file {} \; | grep 'text' | cut -d' ' -f1 | sed -e's/:$//g' | grep -v '[0-9]$' | xargs tail -f"

Head_on_a_Stick " 2018-10-15 18:11

pawRoot wrote:
(the extract () function quoted above)
Erm, did you know that `tar` autoextracts these days? This will work for pretty much anything:
Code: Select all
tar xf whatever.tar.whatever
I have these functions in my .mkshrc (bash is bloat!):
Code: Select all
function mnt {
    for i in proc sys dev dev/pts; do sudo mount --bind /$i "$1"$i; done
    sudo chroot "$1" /bin/bash
    sudo umount -R "$1"{proc,sys,dev}
}

function mkiso {
xorriso -as mkisofs \
-iso-level 3 \
-full-iso9660-filenames \
-volid SharpBang-stretch \
-eltorito-boot isolinux/isolinux.bin \
-eltorito-catalog isolinux/boot.cat \
-no-emul-boot -boot-load-size 4 -boot-info-table \
-isohybrid-mbr isolinux/isohdpfx.bin \
-eltorito-alt-boot \
-e boot/grub/efi.img \
-no-emul-boot -isohybrid-gpt-basdat \
-output ../"$1" ./
}

The mnt function acts like a poor person's arch-chroot: it will bind-mount /proc, /sys and /dev before chrooting, then tear them down afterwards (note that it expects the target path to end in a slash).

The mkiso function builds a UEFI-capable Debian live system (with the name of the image given as the first argument).

The only other stuff I have are aliases, not really worth posting.

dbruce wrote: Ubuntu forums try to be like a coffee shop in Seattle. Debian forums strive for the charm and ambience of a skinhead bar in Bacau. We intend to keep it that way.

pawRoot " 2018-10-15 18:23

Head_on_a_Stick wrote: Erm, did you know that `tar` autoextracts these days? This will work for pretty much anything:

But it won't work for zip or rar, right?
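As an aside, a minimal sketch (not from the thread) of combining the two approaches: let tar auto-detect its own formats and keep explicit cases only for the types tar cannot unpack (assumes unzip, unrar and 7z are installed):

extract2 () {
    case "$1" in
        *.zip) unzip "$1" ;;
        *.rar) unrar x "$1" ;;
        *.7z)  7z x "$1" ;;
        *.Z)   uncompress "$1" ;;
        *.tar*|*.tgz|*.tbz2) tar xf "$1" ;;  # tar auto-detects the compression
        *.gz)  gunzip "$1" ;;
        *.bz2) bunzip2 "$1" ;;
        *)     echo "don't know how to extract '$1'" >&2 ;;
    esac
}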

None1975 " 2018-10-16 13:02

Here is a compilation of cool "tricks" for bash. This is similar to oh-my-zsh. OS: Debian Stretch / WM: Fluxbox
Debian Wiki | DontBreakDebian , My config files in github

debiman " 2018-10-21 14:38

i have a LOT of stuff in my /etc/bash.bashrc, because i want it to be available for the root user too.
i won't post everything, but here's a "best of" from both /etc/bash.bashrc and ~/.bashrc:
Code: Select all
case ${TERM} in
xterm*|rxvt*|Eterm|aterm|kterm|gnome*)
PROMPT_COMMAND=${PROMPT_COMMAND:+$PROMPT_COMMAND; }'printf "\033]0;%s: %s\007" "${SHELL##*/}" "${PWD/#$HOME/\~}"'
;;
screen)
PROMPT_COMMAND=${PROMPT_COMMAND:+$PROMPT_COMMAND; }'printf "\033_%s@%s:%s\033\\" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/\~}"'
;;
linux)
setterm --blength 0
setterm --blank 4
setterm --powerdown 8
;;
esac

PS2='cont> '
PS3='Choice: '
PS4='DEBUG: '

# Bash won't get SIGWINCH if another process is in the foreground.
# Enable checkwinsize so that bash will check the terminal size when
# it regains control.
# http://cnswww.cns.cwru.edu/~chet/bash/FAQ (E11)
shopt -s checkwinsize

# forums.bunsenlabs.org/viewtopic.php?pid=27494#p27494
# also see aliases '...' and '....'
shopt -s autocd
# opensource.com/article/18/5/bash-tricks
shopt -s cdspell

# as big as possible!!!
HISTSIZE=500000
HISTFILESIZE=2000000

# unix.stackexchange.com/a/18443
# history: erase duplicates...
HISTCONTROL=ignoredups:erasedups
shopt -s histappend

# next: enables usage of CTRL-S (backward search) with CTRL-R (forward search)
# digitalocean.com/community/tutorials/how-to-use-bash-history-commands-and-expansions-on-a-linux-vps#searching-through-bash-history
stty -ixon

if [[ ${EUID} == 0 ]] ; then
# root = color=1 # red
if [ "$TERM" != "linux" ]; then
PS1="\[$(tput setaf 1)\]\[$(tput rev)\] \[$(tput sgr0)\]\[$(tput setaf 5)\]\${?#0}\[$(tput setaf 1)\] \u@\h \w\[$(tput sgr0)\]\n\[$(tput rev)\] \[$(tput sgr0)\] "
else
# adding \t = time to tty prompt
PS1="\[$(tput setaf 1)\]\[$(tput rev)\] \[$(tput sgr0)\]\[$(tput setaf 5)\]\${?#0}\[$(tput setaf 1)\] \t \u@\h \w\[$(tput sgr0)\]\n\[$(tput rev)\] \[$(tput sgr0)\] "
fi
else
if [ "$TERM" != "linux" ]; then
PS1="\[$(tput setaf 2)\]\[$(tput rev)\] \[$(tput sgr0)\]\[$(tput setaf 5)\]\${?#0}\[$(tput setaf 2)\] \u@\h \w\[$(tput sgr0)\]\n\[$(tput rev)\] \[$(tput sgr0)\] "
else
# adding \t = time to tty prompt
PS1="\[$(tput setaf 2)\]\[$(tput rev)\] \[$(tput sgr0)\]\[$(tput setaf 5)\]\${?#0}\[$(tput setaf 2)\] \t \u@\h \w\[$(tput sgr0)\]\n\[$(tput rev)\] \[$(tput sgr0)\] "
fi
fi

[ -r /usr/share/bash-completion/bash_completion ] && . /usr/share/bash-completion/bash_completion || true

export EDITOR="nano"

man() {
env LESS_TERMCAP_mb=$(printf "\e[1;31m") \
LESS_TERMCAP_md=$(printf "\e[1;31m") \
LESS_TERMCAP_me=$(printf "\e[0m") \
LESS_TERMCAP_se=$(printf "\e[0m") \
LESS_TERMCAP_so=$(printf "\e[7m") \
LESS_TERMCAP_ue=$(printf "\e[0m") \
LESS_TERMCAP_us=$(printf "\e[1;32m") \
man "$@"
}
#LESS_TERMCAP_so=$(printf "\e[1;44;33m")
# that used to be in the man function for less's annoyingly over-colorful status line.
# changed it to simple reverse video (tput rev)

alias ls='ls --group-directories-first -hF --color=auto'
alias ll='ls --group-directories-first -hF --color=auto -la'
alias mpf='/usr/bin/ls -1 | mpv --playlist=-'
alias ruler='slop -o -c 1,0.3,0'
alias xmeasure='slop -o -c 1,0.3,0'
alias obxprop='obxprop | grep -v _NET_WM_ICON'
alias sx='exec startx > ~/.local/share/xorg/xlog 2>&1'
alias pngq='pngquant --nofs --speed 1 --skip-if-larger --strip '
alias screencap='ffmpeg -r 15 -s 1680x1050 -f x11grab -i :0.0 -vcodec msmpeg4v2 -qscale 2'
alias su='su -'
alias fblc='fluxbox -list-commands | column'
alias torrench='torrench -t -k -s -x -r -l -i -b --sorted'
alias F5='while sleep 60; do notify-send -u low "Pressed F5 on:" "$(xdotool getwindowname $(xdotool getwindowfocus))"; xdotool key F5; done'
alias aurs='aurman --sort_by_name -Ss'
alias cal3='cal -3 -m -w --color'
alias mkdir='mkdir -p -v'
alias ping='ping -c 5'
alias cd..='cd ..'
alias off='systemctl poweroff'
alias xg='xgamma -gamma'
alias find='find 2>/dev/null'
alias stressme='stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout'
alias hf='history|grep'
alias du1='du -m --max-depth=1|sort -g|sed "s/\t./M\t/g ; s/\///g"'
alias zipcat='gunzip -c'

mkcd() {
mkdir -p "$1"
echo cd "$1"
cd "$1"
}
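A related line that pairs well with the histappend and HISTCONTROL settings in the post above (a common idiom, not from the thread): flush each command to the history file as soon as it runs, so other open terminals can pick it up:

# Append to $HISTFILE after every command instead of only at shell exit.
PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND; }history -a"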

[Dec 16, 2018] Red Hat Enterprise Linux 7.6 Released

Dec 16, 2018 | linux.slashdot.org

ArchieBunker ( 132337 ) , Tuesday October 30, 2018 @07:00PM ( #57565233 ) Homepage

New features include ( Score: 5 , Funny)

All of /etc has been moved to a flat binary database now called REGISTRY.DAT

A new configuration tool known as regeditor, authored by Poettering himself (accidental deletion of /home only happens on rare occasions)

An in-kernel naughty-words filter

systemd now includes a virtual userland previously known as busybox

[Dec 14, 2018] 10 of the best pieces of IT advice I ever heard

Dec 14, 2018 | www.techrepublic.com
  1. Learn to say "no"

    If you're new to the career, chances are you'll be saying "yes" to everything. However, as you gain experience and put in your time, the word "no" needs to creep into your vocabulary. Otherwise, you'll be exploited.

    Of course, you have to use this word with caution. Should the CTO approach and set a task before you, the "no" response might not be your best choice. But if you find end users-and friends-taking advantage of the word "yes," you'll wind up frustrated and exhausted at the end of the day.

  2. Be done at the end of the day

    I used to have a ritual at the end of every day. I would take off my watch and, at that point, I was done... no more work. That simple routine saved my sanity more often than not. I highly suggest you develop the means to inform yourself that, at some point, you are done for the day. Do not be that person who is willing to work through the evening and into the night... or you'll always be that person.

  3. Don't beat yourself up over mistakes made

You are going to make mistakes. Some will be simple and can be quickly repaired. Others may lean toward the catastrophic. But when you finally call your IT career done, you will have made plenty of mistakes. Beating yourself up over them will prevent you from moving forward. Instead of berating yourself, learn from the mistakes so you don't repeat them.

  4. Always have something nice to say

    You work with others on a daily basis. Too many times I've watched IT pros become bitter, jaded people who rarely have anything nice or positive to say. Don't be that person. If you focus on the positive, people will be more inclined to enjoy working with you, companies will want to hire you, and the daily grind will be less "grindy."

  5. Measure twice, cut once

    How many times have you issued a command or clicked OK before you were absolutely sure you should? The old woodworking adage fits perfectly here. Considering this simple sentence-before you click OK-can save you from quite a lot of headache. Rushing into a task is never the answer, even during an emergency. Always ask yourself: Is this the right solution?

  6. At every turn, be honest

    I've witnessed engineers lie to avoid the swift arm of justice. In the end, however, you must remember that log files don't lie. Too many times there is a trail that can lead to the truth. When the CTO or your department boss discovers this truth, one that points to you lying, the arm of justice will be that much more forceful. Even though you may feel like your job is in jeopardy, or the truth will cause you added hours of work, always opt for the truth. Always.

  7. Make sure you're passionate about what you're doing

    Ask yourself this question: Am I passionate about technology? If not, get out now; otherwise, that job will beat you down. A passion for technology, on the other hand, will continue to drive you forward. Just know this: The longer you are in the field, the more likely that passion is to falter. To prevent that from happening, learn something new.

  8. Don't stop learning

Quick: how many operating systems have you gone through over the last decade? No career evolves faster than technology. The second you believe you have something perfected, it changes. If you decide you've learned enough, it's time to give up the keys to your kingdom. Not only will you find yourself behind the curve, but all those servers and desktops you manage could quickly wind up vulnerable to every new attack in the wild. Don't fall behind.

  9. When you feel your back against a wall, take a breath and regroup

    This will happen to you. You'll be tasked to upgrade a server farm and one of the upgrades will go south. The sweat will collect, your breathing will reach panic level, and you'll lock up like Windows Me. When this happens... stop, take a breath, and reformulate your plan. Strangely enough, it's that breath taken in the moment of panic that will help you survive the nightmare. If a single, deep breath doesn't help, step outside and take in some fresh air so that you are in a better place to change course.

  10. Don't let clients see you Google a solution

    This should be a no-brainer... but I've watched it happen far too many times. If you're in the middle of something and aren't sure how to fix an issue, don't sit in front of a client and Google the solution. If you have to, step away, tell the client you need to use the restroom and, once in the safety of a stall, use your phone to Google the answer. Clients don't want to know you're learning on their dime.

See also

  • [Dec 14, 2018] Blatant neoliberal propaganda about the "booming US job market" by Danielle Paquette

    That's way too much hype even for WaPo pressitutes... The reality is that you can apply to 50 jobs and not get a single response.
    Dec 12, 2018 | www.latimes.com

    Economists report that workers are starting to act like millennials on Tinder: They're ditching jobs with nary a text. "A number of contacts said that they had been 'ghosted,' a situation in which a worker stops coming to work without notice and then is impossible to contact," the Federal Reserve Bank of Chicago noted in December's Beige Book report, which tracks employment trends.

    National data on economic "ghosting" is lacking. The term, which normally applies to dating, first surfaced on Dictionary.com in 2016. But companies across the country say silent exits are on the rise. Analysts blame America's increasingly tight labor market. Job openings have surpassed the number of seekers for eight straight months, and the unemployment rate has clung to a 49-year low of 3.7% since September. Janitors, baristas, welders, accountants, engineers -- they're all in demand, said Michael Hicks, a labor economist at Ball State University in Indiana. More people may opt to skip tough conversations and slide right into the next thing. "Why hassle with a boss and a bunch of out-processing," he said, "when literally everyone has been hiring?"

    Recruiters at global staffing firm Robert Half have noticed a 10% to 20% increase in ghosting over the last year, D.C. district president Josh Howarth said. Applicants blow off interviews. New hires turn into no-shows. Workers leave one evening and never return. "You feel like someone has a high level of interest, only for them to just disappear," Howarth said. Over the summer, woes he heard from clients emerged in his own life. A job candidate for a recruiter role asked for a day to mull over an offer, saying she wanted to discuss the terms with her spouse. Then she halted communication. "In fairness," Howarth said, "there are some folks who might have so many opportunities they're considering, they honestly forget."

    Keith Station, director of business relations at Heartland Workforce Solutions, which connects job hunters with companies in Omaha, said workers in his area are most likely to skip out on low-paying service positions. "People just fall off the face of the Earth," he said of the area, which has an especially low unemployment rate of 2.8%. Some employers in Nebraska are trying to head off unfilled shifts by offering apprentice programs that guarantee raises and additional training over time. "Then you want to stay and watch your wage grow," Station said.

    Other recruitment businesses point to solutions from China, where ghosting took off during the last decade's explosive growth. "We generally make two offers for every job because somebody doesn't show up," said Rebecca Henderson, chief executive of Randstad Sourceright, a talent acquisition firm. And if both hires stick around, she said, her multinational clients are happy to deepen the bench. Though ghosting in the United States does not yet require that level of backup planning, consultants urge employers to build meaningful relationships at every stage of the hiring process. Someone who feels invested in an enterprise is less likely to bounce, said Melissa and Johnathan Nightingale, who have written about leadership and dysfunctional management. "Employees leave jobs that suck," they said in an email. "Jobs where they're abused. Jobs where they don't care about the work. And the less engaged they are, the less need they feel to give their bosses any warning."

    Some employees are simply young and restless, said James Cooper, former manager of the Old Faithful Inn at Yellowstone National Park, where he said people ghosted regularly. A few of his staffers were college students who lived in park dormitories for the summer. "My favorite," he said, "was a kid who left a note on the floor in his dorm room that said, 'Sorry bros, had to ghost.' " Other ghosters describe an inner voice that just says: Nah. Zach Keel, a 26-year-old server in Austin, Texas, made the call last year to flee a combination bar and cinema after realizing he would have to clean the place until sunrise. More work, he calculated, was always around the corner. "I didn't call," Keel said. "I didn't show up. I figured: No point in feeling guilty about something that wasn't that big of an issue. Turnover is so high, anyway."

    [Dec 14, 2018] You apply for a job. You hear nothing. Here's what to do next

    Dec 14, 2018 | finance.yahoo.com

    But the more common situation is that applicants are ghosted by companies. They apply for a job and never hear anything in response, not even a rejection. In the U.S., companies are generally not legally obligated to deliver bad news to job candidates, so many don't.

    They also don't provide feedback, because it could open the company up to a legal risk if it shows that they decided against a candidate for discriminatory reasons protected by law such as race, gender or disability.

    Hiring can be a lengthy process, and rejecting 99 candidates is much more work than accepting one. But a consistently poor hiring process that leaves applicants hanging can cause companies to lose out on the best talent and even damage perception of their brand.

    Here's what companies can do differently to keep applicants in the loop, and how job seekers can know that it's time to cut their losses.


    What companies can do differently

    There are many ways that technology can make the hiring process easier for both HR professionals and applicants.

    Only about half of all companies get back to the candidates they're not planning to interview, Natalia Baryshnikova, director of product management on the enterprise product team at SmartRecruiters, tells CNBC Make It .

    "Technology has defaults, one change is in the default option," Baryshnikova says. She said that SmartRecruiters changed the default on its technology from "reject without a note" to "reject with a note," so that candidates will know they're no longer involved in the process.

    Companies can also use technology as a reminder to prioritize rejections. For the company, rejections are less urgent than hiring. But for a candidate, they are a top priority. "There are companies out there that get back to 100 percent of candidates, but they are not yet common," Baryshnikova says.

    How one company is trying to help

    WayUp was founded to make the process of applying for a job simpler.

    "The No. 1 complaint from candidates we've heard, from college students and recent grads especially, is that their application goes into a black hole," Liz Wessel, co-founder and CEO of WayUp, a platform that connects college students and recent graduates with employers, tells CNBC Make It .

    WayUp attempts to increase transparency in hiring by helping companies source and screen applicants, and by giving applicants feedback based on soft skills. They also let applicants know if they have advanced to the next round of interviewing within 24 hours.

    Wessel says that in addition to creating a better experience for applicants, WayUp's system helps companies address bias during the resume-screening processes. Resumes are assessed for hard skills up front, then each applicant participates in a phone screening before their application is passed to an employer. This ensures that no qualified candidate is passed over because their resume is different from the typical hire at an organization – something that can happen in a company that uses computers instead of people to scan resumes .

    "The companies we work with see twice as many minorities getting to offer letter," Wessel said.

    When you can safely assume that no news is bad news

    First, if you do feel that you're being ghosted by a company after sending in a job application, don't despair. No news could be good news, so don't assume right off the bat that silence means you didn't get the job.

    Hiring takes time, especially if you're applying for roles where multiple people could be hired, which is common in entry-level positions. It's possible that an HR team is working through hundreds or even thousands of resumes, and they might not have gotten to yours yet. It is not unheard of to hear back about next steps months after submitting an initial application.

    If you don't like waiting, you have a few options. Some companies have application tracking in their HR systems, so you can always check to see if the job you've applied for has that and if there's been an update to the status of your application.

    Otherwise, if you haven't heard anything, Wessel said that the only way to be sure that you aren't still in the running for the job is to determine if the position has started. Some companies will publish their calendar timelines for certain jobs and programs, so check that information to see if your resume could still be in review.

    "If that's the case and the deadline has passed," Wessel says, it's safe to say you didn't get the job.

    And finally, if you're still unclear on the status of your application, she says there's no problem with emailing a recruiter and asking outright.

    [Dec 13, 2018] Red Hat Linux Professional Users Groups

    Compare with the Oracle recommendations below; some settings might be wrong. For what Oracle recommends, see Oracle kernel parameters tuning on Linux
    Dec 13, 2018 | www.linkedin.com

    Oracle recommendations:

    Parameter            Value                   File
    ip_local_port_range  min 9000, max 65000     /proc/sys/net/ipv4/ip_local_port_range
    rmem_default         262144                  /proc/sys/net/core/rmem_default
    rmem_max             4194304                 /proc/sys/net/core/rmem_max
    wmem_default         262144                  /proc/sys/net/core/wmem_default
    wmem_max             1048576                 /proc/sys/net/core/wmem_max
    tcp_wmem             262144                  /proc/sys/net/ipv4/tcp_wmem
    tcp_rmem             4194304                 /proc/sys/net/ipv4/tcp_rmem

    Minesh Patel , Site Reliability Engineer, Austin, Texas Area

    Tuning the TCP I/O settings on Red Hat can reduce intermittent or random slowness problems if you are running with the default settings.

    For Red Hat Linux, 131071 is the default value.

    Double the value from 131071 to 262144
    cat /proc/sys/net/core/rmem_max
    131071 → 262144
    cat /proc/sys/net/core/rmem_default
    129024 → 262144
     cat /proc/sys/net/core/wmem_default
    129024 → 262144
     cat /proc/sys/net/core/wmem_max
    131071 → 262144
    
    To improve failover performance in a RAC cluster, consider changing the following IP kernel parameters as well:
    net.ipv4.tcp_keepalive_time
    net.ipv4.tcp_keepalive_intvl
    net.ipv4.tcp_retries2
    net.ipv4.tcp_syn_retries
    # sysctl -w net.ipv4.ip_local_port_range="1024 65000"
    

    To make the change permanent, add the following line to the /etc/sysctl.conf file, which is used during the boot process:

    net.ipv4.ip_local_port_range=1024 65000
    

    The first number is the first local port allowed for TCP and UDP traffic, and the second number is the last port number.
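    Putting it together, a minimal sketch of making these settings survive a reboot (values copied from the Oracle table above; verify them against the Oracle documentation for your release before applying):

    # Make the settings permanent in /etc/sysctl.conf, then reload.
    echo 'net.ipv4.ip_local_port_range = 9000 65000' >> /etc/sysctl.conf
    echo 'net.core.rmem_default = 262144'            >> /etc/sysctl.conf
    echo 'net.core.rmem_max = 4194304'               >> /etc/sysctl.conf
    echo 'net.core.wmem_default = 262144'            >> /etc/sysctl.conf
    echo 'net.core.wmem_max = 1048576'               >> /etc/sysctl.conf
    sysctl -p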

    [Dec 05, 2018] How can I scroll up to see the past output in PuTTY?

    Dec 05, 2018 | superuser.com


    user1721949 ,Dec 12, 2012 at 8:32

    I have a script which, when I run it from PuTTY, scrolls the screen. Now I want to go back to see the errors, but when I scroll up I can see the past commands, not the output of those commands.

    How can I see the past output?

    Rico ,Dec 13, 2012 at 8:24

    Shift+PgUp/PgDn should work for scrolling without using the scrollbar.

    user530079, Jul 12, 2017 at 21:45

    If Shift+PgUp/PgDn fails, try the reset command, which seems to correct the display.

    RedGrittyBrick ,Dec 12, 2012 at 9:31

    If you don't pipe the output of your commands into something like less, you will be able to use PuTTY's scrollbars to view earlier output.

    PuTTY has settings for how many lines of past output it retains in its buffer.


    [screenshot: before scrolling]

    [screenshot: after scrolling back (upwards)]

    If you use something like less, the output doesn't get into PuTTY's scroll buffer.


    [screenshot: after using less]

    David Dai ,Dec 14, 2012 at 3:31

    why does PuTTY behave differently from the native Linux console on this point?

    konradstrack ,Dec 12, 2012 at 9:52

    I would recommend using screen if you want to have good control over the scroll buffer on a remote shell.

    You can change the scroll buffer size to suit your needs by setting:

    defscrollback 4000
    

    in ~/.screenrc , which will specify the number of lines you want to be buffered (4000 in this case).

    Then you should run your script in a screen session, e.g. by executing screen ./myscript.sh or first executing screen and then ./myscript.sh inside the session.

    It's also possible to enable logging of the console output to a file. You can find more info on the screen's man page .
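    For example, a minimal ~/.screenrc sketch combining the buffer setting above with logging (the log file name pattern is my choice; %n expands to the window number):

    # keep 4000 lines of scrollback per window
    defscrollback 4000
    # start logging in every new window, one file per window
    deflog on
    logfile screenlog.%n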


    From your description, it sounds like the "problem" is that you are using screen, tmux, or another window manager dependent on them (byobu). Normally you should be able to scroll back in PuTTY with no issue. Exceptions include when you are in an application like less or nano that creates its own "window" on the terminal.

    With screen and tmux you can generally scroll back with SHIFT + PGUP (same as you could from the physical terminal of the remote machine). They also both have a "copy" mode that frees the cursor from the prompt and lets you use arrow keys to move it around (for selecting text to copy with just the keyboard). It also lets you scroll up and down with the PGUP and PGDN keys. Copy mode under byobu using screen or tmux backends is accessed by pressing F7 (careful, F6 disconnects the session). To do so directly under screen you press CTRL + a then ESC or [ . You can use ESC to exit copy mode. Under tmux you press CTRL + b then [ to enter copy mode and ] to exit.

    The simplest solution, of course, is not to use either. I've found both to be quite a bit more trouble than they are worth. If you would like to use multiple different terminals on a remote machine simply connect with multiple instances of putty and manage your windows using, er... Windows. Now forgive me but I must flee before I am burned at the stake for my heresy.

    EDIT: almost forgot, some keys may not be received correctly by the remote terminal if putty has not been configured correctly. In your putty config check Terminal -> Keyboard . You probably want the function keys and keypad set to be either Linux or Xterm R6 . If you are seeing strange characters on the terminal when attempting the above this is most likely the problem.

    [Nov 22, 2018] Sorry, Linux. Kubernetes is now the OS that matters InfoWorld

    That's very primitive thinking. If RHEL is royally screwed, as is the case with RHEL7, that affects Kubernetes -- it does not exist outside the OS
    Nov 22, 2018 | www.infoworld.com
    We now live in a Kubernetes world

    Perhaps Redmonk analyst Stephen O'Grady said it best : "If there was any question in the wake of IBM's $34 billion acquisition of Red Hat and its Kubernetes-based OpenShift offering that it's Kubernetes's world and we're all just living in it, those [questions] should be over." There has been nearly $60 billion in open source M&A in 2018, but most of it revolves around Kubernetes.

    Red Hat, for its part, has long been (rightly) labeled the enterprise Linux standard, but IBM didn't pay for Red Hat Enterprise Linux. Not really.

    [Nov 21, 2018] Linux Shutdown Command 5 Practical Examples Linux Handbook

    Nov 21, 2018 | linuxhandbook.com

    Restart the system with the shutdown command

    There is a separate reboot command, but you don't need to learn a new command just for rebooting the system. You can use the Linux shutdown command for rebooting as well.

    To reboot a system using the shutdown command, use the -r option.

    sudo shutdown -r
    

    The behavior is the same as the regular shutdown command. It's just that instead of a shutdown, the system will be restarted.

    So, if you use shutdown -r without any time argument, it will schedule a reboot after one minute.

    You can schedule reboots the same way you did with shutdown.

    sudo shutdown -r +30
    

    You can also reboot the system immediately with shutdown command:

    sudo shutdown -r now
    
    4. Broadcast a custom message

    If you are in a multi-user environment and there are several users logged on to the system, you can send them a custom broadcast message with the shutdown command.

    By default, all logged-in users will receive a notification about a scheduled shutdown and its time. You can customize the broadcast message in the shutdown command itself:

    sudo shutdown 16:00 "systems will be shutdown for hardware upgrade, please save your work"
    

    Fun stuff: You can use the shutdown command with the -k option to initiate a 'fake shutdown'. It won't shut down the system, but the broadcast message will be sent to all logged-in users, as in the example below.
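    For example, a five-minute warning drill might look like this (the message text is just an illustration):

    sudo shutdown -k +5 "maintenance drill: this is only a test, nothing will be shut down"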

    5. Cancel a scheduled shutdown

    If you scheduled a shutdown, you don't have to live with it. You can always cancel a shutdown with the -c option.

    sudo shutdown -c
    

    And if you had broadcast a message about the scheduled shutdown, as a good sysadmin, you might also want to notify other users about cancelling the scheduled shutdown.

    sudo shutdown -c "planned shutdown has been cancelled"
    

    Halt vs Power off

    Halt (option -H): terminates all processes and shuts down the CPU.
    Power off (option -P): pretty much like halt, but it also turns off the unit itself (lights and everything on the system).

    Historically, earlier computers used to halt the system and then print a message like "it's OK to power off now", at which point the computer was turned off with a physical switch.

    These days, halt should automatically power off the system, thanks to ACPI.

    These were the most common and most useful examples of the Linux shutdown command. I hope you have learned how to shut down a Linux system via the command line. You might also like reading about less command usage or browsing through the list of Linux commands we have covered so far.

    If you have any questions or suggestions, feel free to let me know in the comment section.

    [Nov 19, 2018] The rise of Shadow IT - Should CIOs take umbrage

    Notable quotes:
    "... Shadow IT broadly refers to technology introduced into an organisation that has not passed through the IT department. ..."
    "... The result is first; no proactive recommendations from the IT department and second; long approval periods while IT teams evaluate solutions that the business has proposed. Add an over-defensive approach to security, and it is no wonder that some departments look outside the organisation for solutions. ..."
    Nov 19, 2018 | cxounplugged.com

    Shadow IT broadly refers to technology introduced into an organisation that has not passed through the IT department. A familiar example of this is BYOD but, significantly, Shadow IT now includes enterprise grade software and hardware, which is increasingly being sourced and managed outside of the direct control of the organisation's IT department and CIO.

    Examples include enterprise wide CRM solutions and marketing automation systems procured by the marketing department, as well as data warehousing, BI and analysis services sourced by finance officers.

    So why have so many technology solutions slipped through the hands of so many CIOs? I believe a confluence of events is behind the trend; there is the obvious consumerisation of IT, which has resulted in non-technical staff being much more aware of possible solutions to their business needs – they are more tech-savvy. There is also the fact that some CIOs and technology departments have been too slow to react to the business's technology needs.

    The reason for this slow reaction is that very often IT Departments are just too busy running day-to-day infrastructure operations such as network and storage management along with supporting users and software. The result is first; no proactive recommendations from the IT department and second; long approval periods while IT teams evaluate solutions that the business has proposed. Add an over-defensive approach to security, and it is no wonder that some departments look outside the organisation for solutions.

    [Nov 18, 2018] Systemd killing screen and tmux

    Nov 18, 2018 | theregister.co.uk

    fobobob , Thursday 10th May 2018 18:00 GMT

    Might just be a Debian thing as I haven't looked into it, but I have enough suspicion towards systemd that I find it worth mentioning. Until fairly recently (in terms of Debian releases), the default configuration was to murder a user's processes when they log out. This includes things such as screen and tmux, and I seem to recall it also murdering disowned and NOHUPed processes as well.
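    For reference, the knob being described is logind's KillUserProcesses setting; upstream systemd flipped its default to "yes" in version 230, and Debian turned it back off. A minimal sketch of the opt-out:

    # /etc/systemd/logind.conf
    [Login]
    # let screen, tmux and nohup'd jobs survive logout
    KillUserProcesses=no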
    Tim99 , Thursday 10th May 2018 06:26 GMT
    How can we make money?

    A dilemma for a Really Enterprise Dependant Huge Applications Technology company - The technology they provide is open, so almost anyone could supply and support it. To continue growing, and maintain a healthy profit they could consider locking their existing customer base in; but they need to stop other suppliers moving in, who might offer a better and cheaper alternative, so they would like more control of the whole ecosystem. The scene: An imaginary high-level meeting somewhere - The agenda: Let's turn Linux into Windows - That makes a lot of money:-

    Q: Windows is a monopoly, so how are we going to monopolise something that is free and open, because we will have to supply source code for anything that will do that? A: We make it convoluted and obtuse, then we will be the only people with the resources to offer it commercially; and to make certain, we keep changing it with dependencies to "our" stuff everywhere - Like Microsoft did with the Registry.

    Q: How are we going to sell that idea? A: Well, we could create a problem and solve it - The script kiddies who like this stuff, keep fiddling with things and rebooting all of the time. They don't appear to understand the existing systems - Sell the idea they do not need to know why *NIX actually works.

    Q: *NIX is designed to be dependable and go for long periods without rebooting. How do we get around that? A: That is not the point; the kids don't know that. We can sell them the idea that a minute or two saved every time they reboot is worth it, because they reboot lots of times in every session - They are mostly running single-user laptops, not big multi-user systems, so they might think that that is important - If there is somebody who realises that this is trivial, we sell them the idea of creating and destroying containers or stopping and starting VMs.

    Q: OK, you have sold the concept, how are we going to make it happen? A: Well, you know that we contribute quite a lot to "open" stuff. Let's employ someone with a reputation for producing fragile, barely functioning stuff for desktop systems, and tell them that we need a "fast and agile" approach to create "more advanced" desktop style systems - They would lead a team that will spread this everywhere. I think I know someone who can do it - We can have almost all of the enterprise market.

    Q: What about the other large players, surely they can foil our plan? A: No, they won't want to, they are all big companies and can see the benefit of keeping newer, efficient competitors out of the market. Some of them sell equipment and system-wide consulting, so they might just use our stuff with a suitable discount/mark-up structure anyway.

    ds6 , 6 months
    Re: How can we make money?

    This is scarily possible and undeserving of the troll icon.

    Harkens easily to non-critical software developers intentionally putting undocumented, buggy code into production systems, forcing the company to keep the guy on payroll to keep the wreck chugging along.

    DougS , Thursday 10th May 2018 07:30 GMT
    Init did need fixing

    But replacing it with systemd is akin to "fixing" the restrictions of travel by bicycle (limited speed and range, ending up sweaty at your destination, dangerous in heavy traffic) by replacing it with an Apache helicopter gunship that has a whole new set of restrictions (need for expensive fuel, noisy and pisses off the neighbors, need a crew of trained mechanics to keep it running, local army base might see you as a threat and shoot missiles at you)

    Too bad we didn't get the equivalent of a bicycle with an electric motor, or perhaps a moped.

    -tim , Thursday 10th May 2018 07:33 GMT
    Those who do not understand Unix are condemned to reinvent it, poorly.

    "It sounds super basic, but actually it is much more complex than people think," Poettering said. "Because Systemd knows which service a process belongs to, it can shut down that process."

    Poettering and Red Hat,

    Please learn about "Process Groups"

    Init has had the groundwork for most of the missing features since the early 1980s. For example the "id" field in /etc/inittab was intended for a "makefile" like syntax to fix most of these problems but was dropped in the early days of System V because it wasn't needed.
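    For readers who haven't met them: a process group lets one signal reach a whole family of processes, which is how pre-systemd inits could already tear down a service and its children. A minimal illustration (the PID is hypothetical):

    # Send SIGTERM to every process in process group 1234
    # (note the negated ID after the "--" end-of-options marker).
    kill -TERM -- -1234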

    Herby , Thursday 10th May 2018 07:42 GMT
    Process 1 IS complicated.

    That is the main problem. With different processes you get different results. For all their faults, SysV init and RC scripts were understandable to some extent. My (cursory) understanding of systemd is that it appears more complicated to UNDERSTAND than the init stuff.

    The init scripts are nice text scripts which are executed by a nice, well-documented shell (mostly bash). Systemd has all sorts of blobs that somehow do things and are totally confusing to me. It suffers from "anti-KISS".

    Perhaps a nice book could be written WITH examples to show what is going on.

    Now let's see does audio come before or after networking (or at the same time)?

    Chronos , Thursday 10th May 2018 09:12 GMT
    Logging

    If they removed logging from the systemd core and went back to good ol' plaintext syslog[-ng], I'd have very little bad to say about Lennart's monolithic pet project. Indeed, I much prefer writing unit files to buggering about getting rcorder right in the old SysV init.

    Now, if someone wanted to nuke pulseaudio from orbit and do multiplexing in the kernel a la FreeBSD, I'll chip in with a contribution to the warhead fund. Needing a userland daemon just to pipe audio to a device is most certainly a solution in search of a problem.

    Tinslave_the_Barelegged , Thursday 10th May 2018 11:29 GMT
    Re: Logging

    > If they removed logging from the systemd core

    And time syncing

    And name resolution

    And disk mounting

    And logging in

    ...and...

    [Nov 18, 2018] From now on, I will call Systemd-based Linux distros "SNU Linux". Because Systemd's Not Unix-like.

    Nov 18, 2018 | theregister.co.uk

    tekHedd , Thursday 10th May 2018 15:28 GMT

    Not UNIX-like? SNU!

    From now on, I will call Systemd-based Linux distros "SNU Linux". Because Systemd's Not Unix-like.

    It's not clever, but it's the future. From now on, all major distributions will be called SNU Linux. You can still freely choose to use a non-SNU linux distro, but if you want to use any of the "normal" ones, you will have to call it "SNU" whether you like it or not. It's for your own good. You'll thank me later.

    [Nov 18, 2018] So in all reality, systemd is an answer to a problem that nobody who administers servers ever had.

    Nov 18, 2018 | theregister.co.uk

    jake , Thursday 10th May 2018 20:23 GMT

    Re: Bah!

    Nice rant. Kinda.

    However, I don't recall any major agreement that init needed fixing. Between BSD and SysV inits, probably 99.999% of all use cases were covered. In the 1 in 100,000 use case, a little bit of C (stand alone code, or patching init itself) covered the special case. In the case of Slackware's SysV/BSD amalgam, I suspect it was more like one in ten million.

    So in all reality, systemd is an answer to a problem that nobody had. There was no reason for it in the first place. There still isn't a reason for it ... especially not in the 999,999 places out of 1,000,000 where it is being used. Throw in the fact that it's sticking its tentacles[0] into places where nobody in their right mind would expect an init as a dependency (disk partitioning software? WTF??), and can you understand why us "old guard" might question the sanity of people singing its praises?

    [0] My spall chucker insists that the word should be "testicles". Tempting ...

    [Nov 18, 2018] You love systemd -- you just don't know it yet, wink Red Hat bods

    Nov 18, 2018 | theregister.co.uk

    sisk , Thursday 10th May 2018 21:17 GMT

    It's a pretty polarizing debate: either you see Systemd as a modern, clean, and coherent management toolkit

    Very, very few Linux users see it that way.

    or an unnecessary burden running roughshod over the engineering maxim: if it ain't broke, don't fix it.

    Seen as such by 90% of Linux users because it demonstrably is.

    Truthfully, Systemd is flawed at a deeply fundamental level. While there are a few things it can do that init couldn't - the killing off of processes owned by a service, mentioned as an example in this article, is handled just fine by a well-written init script - the tradeoffs just aren't worth it. For example: fscking BINARY LOGS. Even if all of Systemd's numerous other problems were fixed, that one would keep it forever on my list of things to avoid if at all possible, and the fact that the Systemd team thought it a good idea to make the logs binary shows some very troubling flaws in their thinking at a very fundamental level.

    Dazed and Confused , Thursday 10th May 2018 21:43 GMT
    Re: fscking BINARY LOGS.

    And config too

    When it comes to logs and config files, if you can't grep it then it doesn't belong on Linux/Unix

    Nate Amsden , Thursday 10th May 2018 23:51 GMT
    Re: fscking BINARY LOGS.

    WRT grep and logs I'm the same way, which is why I hate json so much. My saying has been along the lines of "if it's not friends with grep/sed then it's not friends with me". I have whipped up some whacky sed stuff to generate a tiny bit of json to read into chef for provisioning systems, though.

    XML is similar, though I like XML a lot more; at least the closing tags are a lot easier to follow than trying to count the nested braces in json.

    I haven't had the displeasure much of dealing with the systemd binary logs yet myself.

    Tomato42 , Saturday 12th May 2018 08:26 GMT
    Re: fscking BINARY LOGS.

    > I haven't had the displeasure much of dealing with the systemd binary logs yet myself.

    "I have no clue what I'm talking about or what's a robust solution but dear god, that won't stop me!" – why is it that all the people complaining about journald sound like that?

    systemd works just fine with regular syslog-ng, without journald (that's the thing that has binary logs) in sight
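    For completeness, a sketch of the configuration usually meant by that (option names are from journald.conf(5)):

    # /etc/systemd/journald.conf
    [Journal]
    # keep journald's own copy in RAM only (Storage=none drops it entirely)
    Storage=volatile
    # hand each message to the classic syslog daemon
    ForwardToSyslog=yes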

    HieronymusBloggs , Saturday 12th May 2018 18:17 GMT
    Re: fscking BINARY LOGS.

    "systemd works just fine with regular syslog-ng, without journald (that's the thing that has binary logs) in sight"

    Journald can't be switched off, only redirected to /dev/null. It still generates binary log data (which has caused me at least one system hang due to the absurd amount of data it was generating on a system that was otherwise functioning correctly) and consumes system resources. That isn't my idea of "works just fine".

    ""I have no clue what I'm talking about or what's a robust solution but dear god, that won't stop me!" – why is it that all the people complaining about journald sound like that?"

    Nice straw man. Most of the complaints I've seen have been from experienced people who do know what they're talking about.

    sisk , Tuesday 15th May 2018 20:22 GMT
    Re: fscking BINARY LOGS.

    "I have no clue what I'm talking about or what's a robust solution but dear god, that won't stop me!" – why is it that all the people complaining about journald sound like that?

    I have had the displeasure of dealing with journald and it is every bit as bad as everyone says and worse.

    systemd works just fine with regular syslog-ng, without journald (that's the thing that has binary logs) in sight

    Yeah, I've tried that. It caused problems. It wasn't a viable option.

    Anonymous Coward , Thursday 10th May 2018 22:30 GMT
    Parking US$5bn in Red Hat for a few months will fix this...

    So it's now been 4 years since they first tried to force that shoddy desktop init system onto our servers? And yet they still feel compelled to tell everyone "look, it really isn't that terrible". That should tell you something. Unless you are tone deaf like Redhat. Surprised people didn't start walking out when Poettering outlined his plans for the next round of systemd power grabs...

    Anyway the only way this farce will end is with shareholder activism. Some hedge fund to buy 10-15 percent of redhat (about the amount you need to make life difficult for management) and force them to sack that "stable genius" Poettering. So market cap is 30bn today. Anyone with 5bn spare to park for a few months wanna step forward and do some good?

    cjcox , Thursday 10th May 2018 22:33 GMT
    He's a pain

    Early on I warned that he was trying to solve a very large problem space. He insisted he could do it with his 10 or so "correct" ways of doing things, which quickly became 20, then 30, then 50, then 90, etc.. etc. I asked for some of the features we had in init, he said "no valid use case". Then, much later (years?), he implements it (no use case provided btw).

    Interesting fellow. Very bitter. And not a good listener. But you don't need to listen when you're always right.

    Daggerchild , Friday 11th May 2018 08:27 GMT
    Spherical wheel is superior.

    @T42

    Now, you see, you just summed up the whole problem. Like systemd's author, you think you know better than the admin how to run his machine, without knowing, or caring to ask, what he's trying to achieve. Nobody ever runs a computer in order to achieve "running systemd", do they?

    Tomato42 , Saturday 12th May 2018 09:05 GMT
    Re: Spherical wheel is superior.

    I don't claim I know better, but I do know that I never saw a non-distribution-provided init script that correctly handled even the basic corner cases (service already running; run file left over but process dead; service restart), let alone the more obscure ones, like an application double-forking when it shouldn't (even when that was the known failure mode of the application the script shipped with). So maybe, just maybe, you haven't experienced everything there is to experience, and your opinion is subjective?

    Yes, the sides of the discussion should talk more, but this applies to both sides. "La, la, la, sysv is working fine on my machine, thankyouverymuch" is not what you can call "participating in discussion". Neither is quoting well-known and long-discussed (and disproven) points (and then downvoting people into oblivion for daring to point these things out).

    now in the real world, people who have to deal with init systems on a daily basis, the distribution maintainers, have by and large chosen to switch their distributions to systemd, so I can sum the whole situation up one way:

    "the dogs may bark, but the caravan moves on"

    Kabukiwookie , Monday 14th May 2018 00:14 GMT
    Re: Spherical wheel is superior.

    I do know that I never saw a non-distribution provided init script that handled correctly the basic of corner cases – service already running

    This only shows that you don't have much real life experience managing lots of hosts.

    like application double forking when it shouldn't

    If this is a problem in the init script, this should be fixed in the init script. If this is a problem in the application itself, it should be fixed in the application, not worked around by the init mechanism. If you're suggesting the latter, you should not be touching any production box.

    "La, la, la, sysv is working fine on my machine, thankyouverymuch" is not what you can call "participating in discussion".

    Shoving down systemd down people's throat as a solution to a non-existing problem, is not a discussion either; it is the very definition of 'my way or the highway' thinking.

    now in the real world, people that have to deal with init systems on daily basis

    Indeed, and having a bunch of sub-par developers, focused on the 'year of the Linux desktop', decide the best way for admins to manage their enterprise environments is not helping.

    "the dogs may bark, but the caravan moves on"

    Indeed. It's your way or the highway; I thought you were just complaining about the people complaining about systemd not wanting to have a discussion, while all the while it's systemd proponents ignoring and dismissing very valid complaints.

    Daggerchild , Monday 14th May 2018 14:10 GMT
    Re: Spherical wheel is superior.

    "I never saw ... run file left-over but process dead, service restart ..."

    Seriously? I wrote one last week! You use an OS atomic lock on the pidfile and exec the service if the lock succeeded. The lock dies with the process. It's a very small shellscript.
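    A minimal sketch of the kind of script being described, using flock(1) from util-linux (the daemon name and paths are hypothetical): the kernel releases the lock when the process dies, so a stale lock file can never block a restart:

    #!/bin/sh
    # Start at most one copy; the exclusive lock dies with the daemon.
    exec flock -n /run/mydaemon.lock /usr/sbin/mydaemon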

    I shot a systemd controlled service. Systemd put it into error state and wouldn't restart it unless I used the right runes. That is functionally identical to the thing you just complained about.

    "application double forking when it shouldn't"

    I'm going to have to guess what that means, and then point you at DJB's daemontools. You leave an FD open in the child. They can fork all they like; you'll still track when the last one dies, because the final close of the FD raises an event.
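
    The same trick can be sketched in shell with a FIFO standing in for the tracking descriptor (the daemon path is hypothetical; daemontools does this properly in C):

    #!/bin/sh
    mkfifo /tmp/track.$$
    # the daemon inherits fd 3 open for writing, and so does every
    # process it forks, however many times it double-forks
    /usr/sbin/mydaemon 3>/tmp/track.$$ &
    # opening the read side unblocks the writer; read then returns only
    # when the last process holding fd 3 exits and the final close
    # delivers EOF
    read -r _ </tmp/track.$$
    echo "last descendant has exited"
    rm -f /tmp/track.$$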

    "So maybe, just maybe, you haven't experienced everything there is to experience"

    You realise that's the conspiracy theorist argument "You don't know everything, therefore I am right". Doubt is never proof of anything.

    "La, la, la, sysv is working fine" is not what you can call "participating in discussion".

    Well, no.. it's called evidence. Evidence that things are already working fine, thanks. Evidence that the need for discussion has not been displayed. Would you like a discussion about the Earth being flat? Why not? Are you refusing to engage in a constructive discussion? How obstructive!

    "now in the real world..."

    In the *real* world people run Windows and Android, so you may want to rethink the "we outnumber you, so we must be right" angle. You're claiming an awful lot of high ground you don't seem to actually know your way around, while trying to wield arguments you don't want to face yourself...

    "(and then downvoting people into oblivion for daring to point this things out)"

    It's not some denialist conspiracy to suppress your "daring" Truth - you genuinely deserve those downvotes.

    Anonymous Coward , Friday 11th May 2018 17:27 GMT
    I have no idea how or why systemd ended up on servers. For laptops I can see the appeal – "this is the year of the Linux desktop" – when you want your rebooted machine to just be there as fast as possible (or fail mysteriously as fast as possible). Servers, on the other hand, take on the order of 10+ minutes to get through POST, initialising the LOM, disk controllers, and whatever exotic hardware you may have connected; there I see no benefit in Linux starting (or failing to start) a wee bit more quickly. You're only going to reboot those beasts when absolutely necessary, and they should boot the same as they booted last time. PID 1 should be as simple as possible.

    I only use CentOS these days for FreeIPA but now I'm questioning my life decisions even here. That Debian adopted systemd too is a real shame. It's actually put me off the whole game. Time spent learning systemd is time that could have been spent doing something useful that won't end up randomly breaking with a "will not fix" response.

    Systemd should be taken out back and put out of our misery.

    [Nov 18, 2018] Just let chef start the services when it runs after the system boots (which means they start maybe 1 or 2 minutes after bootup)

    Notable quotes:
    "... Another thing bit us with systemd recently as well again going back to bind. Someone on the team upgraded our DNS systems to systemd and the startup parameters for bind were not preserved because systemd ignores the /etc/default/bind file. As a result we had tons of DNS failures when bind was trying to reach out to IPv6 name servers(ugh), when there is no IPv6 connectivity in the network (the solution is to start bind with a -4 option). ..."
    "... I'm sure I've only scratched the surface of systemd pain. I'm sure it provides good value to some people, I hear it's good with containers (I have been running LXC containers for years now, I see nothing with systemd that changes that experience so far). ..."
    "... If systemd is a solution to any set of problems, I'd love to have those problems back! ..."
    Nov 18, 2018 | theregister.co.uk

    Nate Amsden , Thursday 10th May 2018 16:34 GMT

    as a Linux user for 22 years

    (20 of which on Debian, before that was Slackware)

    I am new to systemd, maybe 3 or 4 months now tops on Ubuntu, and a tiny bit on Debian before that.

    I was confident I was going to hate systemd before I used it, just based on the comments I had read over the years, so I postponed using it as long as I could. It took just a few minutes of using it to confirm my thoughts. Now, to be clear, if I didn't have to mess with systemd to do stuff then I really wouldn't care, since I wouldn't interact with it (which is the case on my laptop at least, though the laptop doesn't have systemd anyway). But I manage about 1,000 systems running Ubuntu for work, so I have to mess with systemd and init there. If systemd would just do ONE thing, I think it would remove all of the pain it has inflicted on me over the past several months, and I could learn to accept it.

    That one thing is: if there is an init script, RUN IT. Not run it the way systemd does now, but turn off ALL the intelligence systemd has when it finds that script – don't put it on any special timers, don't try to detect if it is already running or already stopped or whatever, just fire the script up in blocking mode and wait till it exits.

    My first experience with systemd was on one of my home servers. I re-installed Debian on it last year, rebuilt the hardware etc., and with it came systemd. I believe there is a way to turn systemd off but I haven't tried that yet. The first encounter was with bind. I have a slightly customised init script (from a previous Debian) that I have been using for many years. I copied it to the new system and tried to start bind. Nothing. I looked in the logs, and it seems it was trying to interface with rndc (an internal bind control channel) for some reason, and because rndc was not working (I never used it, so I never bothered to configure it) systemd wouldn't launch bind. So I fixed rndc and systemd would now launch bind – only to stop it within one second of launching. My first workaround was just to launch bind by hand at the CLI (no init script); I left it running for a few months. Then I had a discussion with a co-worker who likes systemd, and he explained that making a custom unit file with the Type=forking option might fix it. That did fix the issue.
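
    For reference, the kind of unit the co-worker was describing looks roughly like this (a sketch only; the unit name, script path and pidfile are illustrative, not the actual config):

    # /etc/systemd/system/bind9.service
    [Unit]
    Description=BIND via legacy init script
    After=network.target

    [Service]
    # Type=forking makes systemd wait for the script to exit and then
    # track the daemon it left behind, instead of killing it
    Type=forking
    ExecStart=/etc/init.d/bind9 start
    ExecStop=/etc/init.d/bind9 stop
    PIDFile=/run/named/named.pid

    [Install]
    WantedBy=multi-user.target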

    The next issue came up when dealing with MySQL clusters. I had to initialize the cluster with the "service mysql bootstrap-pxc" command (using the start command on the first cluster member is a bad thing). Run that with systemd, and systemd runs it fine. But go to STOP the service, and systemd thinks the service is not running, so it doesn't even TRY to stop it (the service is running). My workaround in my automation for MySQL clusters at this point is to just use mysqladmin to shut the instances down. Maybe newer MySQL versions have better systemd support, though a co-worker who is our DBA and has used MySQL for many years says even the new MariaDB builds don't work well with systemd. I am working with MySQL 5.6, which is of course much, much older.
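
    That workaround is just the stock client-side shutdown, which bypasses systemd's (wrong) idea of the service state entirely (credentials and socket options will vary per install):

    mysqladmin -u root -p shutdown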

    The next issue came up with running init scripts that have overlapping names – most recently when I upgraded the systems that run OSSEC to systemd. OSSEC has two init scripts for us on the server side (ossec and ossec-auth). Systemd refuses to run ossec-auth because it thinks there is a conflict with the ossec service. I had the same problem with multiple varnish instances running on the same system (the instances were named varnish-XXX and varnish-YYY). In the varnish case, using custom unit files I got systemd to the point where it would start the service, but it still refuses to "enable" the service because of the name conflict (I even changed the name, but then systemd looked at the name of the binary being called in the unit file and said there is a conflict there).

    fucking a. Systemd shut up, just run the damn script. It's not hard.

    Later a co-worker explained the "systemd way" for handling something like multiple varnish instances on one system, but I'm not doing that; in the meantime I just let chef start the services when it runs after the system boots (which means they start maybe 1 or 2 minutes after bootup).
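
    Presumably the "systemd way" in question is a template unit, one instance per name, which sidesteps the unit-name clash; a sketch under that assumption (flags and paths are illustrative):

    # /etc/systemd/system/varnish@.service
    [Unit]
    Description=Varnish instance %i

    [Service]
    # %i expands to whatever follows the @ when the unit is started
    ExecStart=/usr/sbin/varnishd -F -n %i -f /etc/varnish/%i.vcl

    [Install]
    WantedBy=multi-user.target

    # then: systemctl enable --now varnish@XXX varnish@YYY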

    Another thing bit us with systemd recently as well, again going back to bind. Someone on the team upgraded our DNS systems to systemd and the startup parameters for bind were not preserved, because systemd ignores the /etc/default/bind file. As a result we had tons of DNS failures when bind was trying to reach out to IPv6 name servers (ugh) when there is no IPv6 connectivity in the network (the solution is to start bind with a -4 option).
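
    Under systemd the surviving equivalent of /etc/default/bind is a drop-in that rewrites ExecStart (a sketch; the unit name and binary path differ between distros):

    # systemctl edit named.service  -- opens this drop-in for editing:
    # /etc/systemd/system/named.service.d/override.conf
    [Service]
    # an empty ExecStart= clears the packaged command before replacing it
    ExecStart=
    ExecStart=/usr/sbin/named -4 -u bind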

    I believe I have also caught systemd trying to mess with file systems (iSCSI mount points). I have lots of automation around moving data volumes on the SAN between servers and attaching them via software iSCSI directly to the VMs themselves (before vSphere 4.0 I attached them via fibre channel to the hypervisor, but a feature in 4.0 broke that for me). I noticed on at least one occasion, when I removed the file systems from a system, that SOMETHING (I assume systemd) mounted them again, and it was very confusing to see file systems mounted for block devices that DID NOT EXIST on the server at the time. I worked around THAT one, I believe, with the "noauto" option in fstab. I have had to put a lot of extra logic in my automation scripts to work around systemd.

    I'm sure I've only scratched the surface of systemd pain. I'm sure it provides good value to some people, I hear it's good with containers (I have been running LXC containers for years now, I see nothing with systemd that changes that experience so far).

    But if systemd would just do this one thing and go into dumb mode with init scripts I would be quite happy.

    GrumpenKraut , Thursday 10th May 2018 17:52 GMT
    Re: as a Linux user for 22 years

    Now more seriously: it really strikes me that complaints about systemd come from people managing non-trivial setups like the one you describe. While it might have been a PITA to get this done with the old init mechanism, you could make it work reliably.

    If systemd is a solution to any set of problems, I'd love to have those problems back!

    [Nov 18, 2018] SystemD is just a symptom of this regression of Red Hat into money making machine

    Nov 18, 2018 | theregister.co.uk

    Will Godfrey , Thursday 10th May 2018 16:30 GMT

    Business Model

    Red Hat have definitely taken a lurch to the dark side in recent years. It seems to be the way businesses go.

    They start off providing a service to customers.

    As they grow the customers become users.

    Once they reach a certain point the users become consumers, and at this point it is the 'consumers' that provide a service for the business.

    SystemD is just a symptom of this regression.

    [Nov 18, 2018] Fudging the start-up and restoring eth0

    Truth be told, the biosdevname abomination is from Dell
    Nov 18, 2018 | theregister.co.uk

    The Electron , Thursday 10th May 2018 12:05 GMT

    Fudging the start-up and restoring eth0

    I knew systemd was coming thanks to playing with Fedora. The quicker start-up times were welcome. That was about it! I have had to kickstart many of my CentOS 7 builds to disable IPv6 (NFS complains bitterly), kill the incredibly annoying 'biosdevname' that turns sensible eth0/eth1 into some daftly named nonsense, replace Gnome 3 (shudder) with MATE, and fudge start-up processes. In a previous job I maintained two sets of CentOS 7 'infrastructure' servers that provided DNS, DHCP, NTP, and LDAP to a large number of historical vlans. Despite enabling the systemd-networkd-wait-online option, which is supposed to bring all networks up *before* listening services start, systemd would run off flicking all the "on" switches having only set up a couple of vlans. Result: NTP would be listening on only one or two vlan interfaces. The only way I found to get around that was to enable rc.local and call systemd to restart the NTP daemon after 20 seconds. I never had the time to raise a bug with Red Hat, and I assume the issue still persists, as no one designed systemd to handle 15-odd vlans!?
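
    The rc.local band-aid described above amounts to something like this (a sketch; 20 seconds was simply the delay that worked, and the unit name on CentOS 7 is assumed to be ntpd):

    # /etc/rc.local -- re-kick NTP once systemd has finished flicking
    # the vlan interfaces on, so ntpd binds to all of them
    (sleep 20 && systemctl restart ntpd) &
    exit 0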

    Jay 2 , Thursday 10th May 2018 15:02 GMT
    Re: Predictable names

    I can't remember if it's HPE or Dell (or both) where you can set the kernel option biosdevname=0 during build/boot to turn all that renaming stuff off and revert to ethX.

    However, on (RHEL?)/CentOS 7 I've found that if you build a server like that and then try to rename/swap the interfaces, it will refuse point blank to let you swap them round so that something else can be eth0. In the end we just gave up and renamed everything lanX instead, which it was quite happy with.
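
    For reference, the usual way to apply that on RHEL/CentOS 7 (a sketch; on releases using systemd's own naming scheme you need net.ifnames=0 as well, and the grub.cfg path differs on EFI systems):

    # in /etc/default/grub, append to the kernel command line:
    GRUB_CMDLINE_LINUX="... biosdevname=0 net.ifnames=0"
    # then regenerate the config and reboot:
    grub2-mkconfig -o /boot/grub2/grub.cfg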

    HieronymusBloggs , Thursday 10th May 2018 16:23 GMT
    Re: Predictable names

    "I can't remember if it's HPE or Dell (or both) where you can use set the kernel option biosdevname=0 during build/boot to turn all that renaming stuff off and revert to ethX."

    I'm using this on my Debian 9 systems. IIRC the option to do so will be removed in Debian 10.

    Dazed and Confused , Thursday 10th May 2018 19:21 GMT
    Re: Predictable names

    I can't remember if it's HPE or Dell (or both)

    It's Dell. I got the impression that much of this work had been done, at least, in conjunction with Dell.

    [Nov 18, 2018] The beatings will continue until morale improves.

    Nov 18, 2018 | theregister.co.uk

    Doctor Syntax , Thursday 10th May 2018 10:26 GMT

    "The more people learn about it, the more they like it."

    Translation: we define those who don't like it as not having learned enough about it.

    ROC , Friday 11th May 2018 17:32 GMT
    Alternate translation:

    The beatings will continue until morale improves.

    [Nov 18, 2018] I am barely tolerating SystemD on some servers because RHEL/CentOS 7 is the dominant business distro with a decent support life

    Nov 18, 2018 | theregister.co.uk

    AJ MacLeod , Thursday 10th May 2018 13:51 GMT

    @Sheepykins

    I'm not really bothered about whether init was perfect from the beginning - for as long as I've been using Linux (20 years) until now, I have never known the init system to be the cause of major issues. Since in my experience it's not been seriously broken for two decades, why throw it out now for something that is orders of magnitude more complex and ridiculously overreaching?

    Like many here I bet, I am barely tolerating SystemD on some servers because RHEL/CentOS 7 is the dominant business distro with a decent support life - but this is also the first time I can recall ever having serious unpredictable issues with startup and shutdown on Linux servers.


    stiine, Thursday 10th May 2018 15:38 GMT

    sysV init

    I've been using Linux (Red Hat, CentOS, Ubuntu), BSD (SunOS, FreeBSD) and Unix (AIX, Solaris, SysV all the way back to AT&T 3B2 servers) in farms of up to 400 servers since 1988, and I never, ever had issues with eth1 becoming eth0 after a reboot. I also never needed to run ifconfig before configuring an interface just to determine what the interface was going to be named on a server this time. Then they hired Poettering... now, if you replace a failed NIC, 9 times out of 10 the interface is going to have a randomly different name.

    /rant

    [Nov 18, 2018] systemd helps with mounting NFS4 filesystems

    Nov 18, 2018 | theregister.co.uk

    Chronos , Thursday 10th May 2018 13:32 GMT

    Re: Logging

    And disk mounting

    Well, I am compelled to agree with almost everything you wrote, except for one niche area where systemd does better. Remember putzing about with amd, the automounter? One line in fstab:

    nasbox:/srv/set0 /nas nfs4 _netdev,noauto,nolock,x-systemd.automount,x-systemd.idle-timeout=1min 0 0
    

    The bloody thing just works, and nobody's system comes grinding to a halt every time some essential maintenance is done on the NAS.

    Candour compels me to admit surprise that it worked as advertised, though.
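
    A quick way to confirm systemd really generated the automount from that fstab line (the unit name derives from the mount point, so /nas becomes nas.automount):

    systemctl status nas.automount   # "Active: active (waiting)" until used
    ls /nas                          # first access triggers the actual NFS mount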

    DCFusor , Thursday 10th May 2018 13:58 GMT

    Re: Logging

    No worries: as has happened with every workaround to make systemD simply mount CIFS or NFS at boot, yours will fail as soon as the next change happens, yet it will remain on the 'net to be tried over and over, as have all the other "fixes" for Poettering's arrogant breakages.

    The last one I heard from him on this was "don't mount shares at boot, it's not reliable WONTFIX".

    Which is why we're all bitching.

    Break my stuff.

    Web shows workaround.

    Break workaround without fixing the original issue, really.

    Never ensure one place for current dox on what works now.

    Repeat above endlessly.

    Fine if all you do is spin up endless identical instances in some cloud (e.g. a big chunk of RH customers – but not Debian's, for example). If, like me, you have 20+ machines customized to purpose... for which one workaround works on some but not others, and every new release of systemD seems to break something new that has to be tracked down and fixed, it's not acceptable – it's actually making proprietary solutions look more cost-effective and less blood-pressure-raising.

    The old init scripts worked once you got them right, and stayed working. A new distro release didn't break them, nor did a systemD update (because there wasn't one). This feels more like sabotage.

    [Nov 18, 2018] Today I kickstarted RHEL7 on a rack of 40 identical servers using the same script. On about 25 out of 40, the postinstall script added to rc.local failed to run with some obscure error

    Nov 18, 2018 | theregister.co.uk

    Dabbb , Thursday 10th May 2018 10:16 GMT

    Quite understandable that people who don't know anything else would accept systemd. For everyone else it has nothing to do with being old school, but everything to do with the unpredictability of systemd.

    Today I kickstarted RHEL7 on a rack of 40 identical servers using the same script. On about 25 out of 40, the postinstall script added to rc.local failed to run, with some obscure error about the script being terminated because something unintelligible did not like it. That never, ever happened on RHEL6; it happens all the time on RHEL7. And that's exactly why I absolutely hate both RHEL7 and systemd.

    [Nov 18, 2018] You love Systemd – you just don't know it yet, wink Red Hat bods

    Nov 18, 2018 | theregister.co.uk

    Anonymous Coward , Thursday 10th May 2018 02:58 GMT

    Poettering still doesn't get it... Pid 1 is for people wearing big boy pants.

    "And perhaps, in the process, you may warm up a bit more to the tool"

    Like from LNG to Dry Ice? and by tool does he mean Poettering or systemd?

    I love the fact that they aren't trying to address the huge and legitimate issues with systemd, while still plowing ahead, adding more things we don't want systemd to touch into its ever-expanding sprawl.

    The root of the issue with systemd is the problems it causes, not any lack of "enhancements" in init. Replacing init didn't require the breaking changes and incompatibility induced by Poettering's misguided handiwork. A clean init replacement would have kept Big Linux compatible both with its roots and with the other parts of the broader Linux/BSD/Unix world. As a result of his belligerent incompetence, other people's projects have had to be re-engineered, resulting in incompatibility, extra porting work, and security problems. In short, we're stuck cleaning up his mess and the consequences of his security blunders.

    A worthy init replacement should have moved to compiled code and given us asynchronous startup, threading, etc., without senselessly rewriting basic command syntax or breaking compatibility. Considering the importance of PID 1, it should have used a formal development process like the BSD world's.

    Fedora needs to stop enabling his prima donna antics and stop letting him touch things until he admits his mistakes and attempts to fix them. The flame wars are not going away till he does.

    asdf , Thursday 10th May 2018 23:38 GMT
    Re: Poettering still doesn't get it... Pid 1 is for people wearing big boy pants.

    SystemD is corporate money (Red Hat support dollars) triumphing over the longhairs, sadly. Enough money can buy a shitload of code, and you can overwhelm the hippies with hairball dependencies (the key moment was udev becoming dependent on systemd) and soon get as much FOSS as possible dependent on the Linux kernel. This has always been the end game, as Red Hat makes its bones on Linux specifically, not on FOSS in general (which might, say, run on Solaris or HP-UX). The tighter they can glue the FOSS ecosystem and the Linux kernel together, Windows-lite style, the better for their bottom line. Poettering is just being a good employee, asshat extraordinaire that he is.

    whitepines , Thursday 10th May 2018 03:47 GMT
    Raise your hand if you've been completely locked out of a server or laptop (as in, break out the recovery media and settle down, it'll be a while) because systemd:

    1.) Couldn't raise a network interface

    2.) Farted and forgot the UUID for a disk, then refused to give a recovery shell

    3.) Decided an unimportant service (e.g. CUPS or avahi) was too critical to start before giving a login over SSH or locally, then that service stalls forever

    4.) Decided that no, you will not be network booting your server today. No way to recover and no debug information, just an interminable hang as it raises wrong network interfaces and waits for DHCP addresses that will never come.

    And lest the fun be restricted to startup, on shutdown systemd can quite happily hang forever doing things like stopping nonessential services, *with no timeout and no way to interrupt*. Then you have to Magic Sysreq the machine, except that sometimes secure servers don't have that ability, at least not remotely. Cue data loss and general excitement.

    And that's not even going into the fact that you need to *reboot the machine* to patch the *network enabled* and highly privileged systemd, or that it seems to have the attack surface of Jupiter.

    Upstart was better than this. SysV was better than this. Mac is better than this. Windows is better than this.

    Uggh.

    Daggerchild , Thursday 10th May 2018 11:39 GMT
    Re: Ahhh SystemD

    I honestly would love someone to lay out the problems it solves. Solaris has a similar parallelised startup system, with some similar problems, but it didn't need to be PID 1.

    Tridac , Thursday 10th May 2018 11:53 GMT
    Re: Ahhh SystemD

    Agreed – Solaris's svcadm and svcs etc. are an example of how it should be done: a layered approach maintaining what was already there while adding functionality for management purposes. It keeps all the old text-based log files and uses XML manifests (human-readable and editable) for the higher-level functions. AFAICS, systemd is a power grab by Red Hat and an ego trip for its primary developer. I dumped bloatware Linux in favour of FreeBSD and others after SuSE 11.4, though that was bad enough with Gnome 3...

    [Nov 17, 2018] hh command man page

    hh was later renamed to hstr
    Notable quotes:
    "... Favorite and frequently used commands can be bookmarked ..."
    Nov 17, 2018 | www.mankier.com

    hh -- easily view, navigate, sort and use your command history with shell history suggest box.

    Synopsis

    hh [option] [arg1] [arg2]...
    hstr [option] [arg1] [arg2]...

    Description

    hh uses shell history to provide suggest-box-like functionality for commands used in the past. By default it parses the ~/.bash_history file, which is filtered as you type a command substring. Commands are not just filtered, but also ordered by a ranking algorithm that considers the number of occurrences, length and timestamp. Favorite and frequently used commands can be bookmarked . In addition, hh allows removal of commands from history – for instance, ones with a typo or with sensitive content.

    Options
    -h --help
    Show help
    -n --non-interactive
    Print filtered history on standard output and exit
    -f --favorites
    Show favorites view immediately
    -s --show-configuration
    Show configuration that can be added to ~/.bashrc
    -b --show-blacklist
    Show blacklist of commands to be filtered out before history processing
    -V --version
    Show version information
    Keys
    pattern
    Type to filter shell history.
    Ctrl-e
    Toggle regular expression and substring search.
    Ctrl-t
    Toggle case sensitive search.
    Ctrl-/ , Ctrl-7
    Rotate between views of history: as provided by Bash, ranked history ordered by number of occurrences/length/timestamp, and favorites.
    Ctrl-f
    Add currently selected command to favorites.
    Ctrl-l
    Make search pattern lowercase or uppercase.
    Ctrl-r , UP arrow, DOWN arrow, Ctrl-n , Ctrl-p
    Navigate in the history list.
    TAB , RIGHT arrow
    Choose currently selected item for completion and let user to edit it on the command prompt.
    LEFT arrow
    Choose currently selected item for completion and let user to edit it in editor (fix command).
    ENTER
    Choose currently selected item for completion and execute it.
    DEL
    Remove currently selected item from the shell history.
    BACKSPACE , Ctrl-h
    Delete last pattern character.
    Ctrl-u , Ctrl-w
    Delete pattern and search again.
    Ctrl-x
    Write changes to shell history and exit.
    Ctrl-g
    Exit with empty prompt.
    Environment Variables

    hh defines the following environment variables:

    HH_CONFIG
    Configuration options:

    hicolor
    Get more colors with this option (default is monochromatic).

    monochromatic
    Ensure black and white view.

    prompt-bottom
    Show prompt at the bottom of the screen (default is prompt at the top).

    regexp
    Filter command history using regular expressions (substring match is default)

    substring
    Filter command history using substring.

    keywords
    Filter command history using keywords – an item matches if it contains all keywords of the pattern, in any order.

    casesensitive
    Make history filtering case sensitive (it's case insensitive by default).

    rawhistory
    Show normal history as a default view (metric-based view is shown otherwise).

    favorites
    Show favorites as a default view (metric-based view is shown otherwise).

    duplicates
    Show duplicates in rawhistory (duplicates are discarded by default).

    blacklist
    Load list of commands to skip when processing history from ~/.hh_blacklist (built-in blacklist used otherwise).

    big-keys-skip
    Skip big history entries i.e. very long lines (default).

    big-keys-floor
    Use different sorting slot for big keys when building metrics-based view (big keys are skipped by default).

    big-keys-exit
    Exit (fail) on presence of a big key in history (big keys are skipped by default).

    warning
    Show warning.

    debug
    Show debug information.

    Example:
    export HH_CONFIG=hicolor,regexp,rawhistory

    HH_PROMPT
    Change prompt string which is user@host$ by default.

    Example:
    export HH_PROMPT="$ "

    Files
    ~/.hh_favorites
    Bookmarked favorite commands.
    ~/.hh_blacklist
    Command blacklist.
    Bash Configuration

    Optionally add the following lines to ~/.bashrc:

    export HH_CONFIG=hicolor         # get more colors
    shopt -s histappend              # append new history items to .bash_history
    export HISTCONTROL=ignorespace   # leading space hides commands from history
    export HISTFILESIZE=10000        # increase history file size (default is 500)
    export HISTSIZE=${HISTFILESIZE}  # increase history size (default is 500)
    export PROMPT_COMMAND="history -a; history -n; ${PROMPT_COMMAND}"
    # if this is interactive shell, then bind hh to Ctrl-r (for Vi mode check doc)
    if [[ $- =~ .*i.* ]]; then bind '"\C-r": "\C-a hh -- \C-j"'; fi
    

    The prompt command ensures synchronization of the history between BASH memory and history file.

    ZSH Configuration

    Optionally add the following lines to ~/.zshrc:

    export HISTFILE=~/.zsh_history   # ensure history file visibility
    export HH_CONFIG=hicolor         # get more colors
    bindkey -s "\C-r" "\eqhh\n"  # bind hh to Ctrl-r (for Vi mode check doc, experiment with --)
    
    Examples
    hh git
    Start `hh` and show only history items containing 'git'.
    hh --non-interactive git
    Print history items containing 'git' to standard output and exit.
    hh --show-configuration >> ~/.bashrc
    Append default hh configuration to your Bash profile.
    hh --show-blacklist
    Show blacklist configured for history processing.
    Author

    Written by Martin Dvorak <martin.dvorak@mindforger.com>

    Bugs

    Report bugs to https://github.com/dvorka/hstr/issues

    See Also

    history(1), bash(1), zsh(1)

    Referenced By

    The man page hstr(1) is an alias of hh(1).

    [Nov 15, 2018] Is Glark a Better Grep? (Linux.com)

    Notable quotes:
    "... stringfilenames .