Softpanorama

May the source be with you, but remember the KISS principle ;-)
Bigger doesn't imply better. Bigger often is a sign of obesity, of lost control, of overcomplexity, of cancerous cells

Slightly Skeptical View on Enterprise Unix Administration



The KISS rule can be expanded as: Keep It Simple, Sysadmin ;-)

This page is written as a protest against overcomplexity and the bizarre data center atmosphere typical of "semi-outsourced" or fully outsourced datacenters ;-). Unix/Linux sysadmins are being killed by the overcomplexity of the environment. Large swaths of Linux knowledge (and many excellent books) were rendered obsolete by the introduction of systemd. This hit especially hard the older, most experienced members of the team, who hold a unique store of organizational knowledge and whose careers allowed them to watch the development of Linux almost from version 0.92.

System administration is still a unique area where people with the ability to program can display their creativity with relative ease and can still enjoy the "old style" atmosphere of software development, when you yourself write the specification, implement it, test the program, and then use it in daily work. This is a very exciting, unique opportunity that no DevOps job can ever provide. Why, then, are an increasing number of sysadmins far from excited about working in those positions, or outright want to quit the field (or, at least, work four days a week)? And that includes sysadmins who have tremendous speed and capability to process and learn new information. Even for them, "enough is enough." The answer is different for each individual sysadmin, but it is usually some variation of the following themes:

  1. Too rapid a pace of change, with a lot of "change for the sake of change" often serving as a smokescreen for outsourcing efforts (VMware yesterday, Azure today, Amazon cloud tomorrow, etc.)
  2. Job insecurity due to outsourcing/offshoring -- constant pressure to cut headcount in the name of "efficiency," which in reality is more connected to the size of top-brass bonuses than to anything related to the functioning of the IT datacenter. Sysadmins over 50 are an especially vulnerable category here, and if they are laid off they have almost no chance of getting back into the IT workforce at their previous level of salary/benefits. Often the only job they can find is at Home Depot or a similar retail outlet.
  3. A back-breaking level of overcomplexity and bizarre tech decisions crippling the data center (aka crapification). A Potemkin-style culture often prevails in the evaluation of software in large US corporations. The surface sheen is more important than the substance. The marketing brochures and manuals are no different from mainstream news media in the level of BS they spew. IBM is especially guilty (look how they marketed IBM Watson; as Oren Etzioni, CEO of the Allen Institute for AI, noted, "the only intelligent thing about Watson was IBM PR department [push]").
  4. Bureaucratization/fossilization of the IT environment in large companies. That includes using "Performance Reviews" (the variant of waterboarding prevalent in IT ;-) for the enforcement of management policies, priorities, whims, etc. That creates alienation from the company (as it should). One can think of the modern corporate data center as an organization in which the administration has tremendous power in the decision-making process and eats up an ever larger share of the corporate budget, while the people who do the actual work are increasingly ignored and their share of the budget shrinks.
  5. "Neoliberal austerity" (which is essentially another name for the "war on labor") -- drastic cost-cutting measures at the expense of the workforce, such as the elimination of external vendor training, crapification of benefits, limitation of business trips, and the enforcement of useless or outright harmful "new" products in place of "tried and true" old ones with the same function. These are accompanied by a new cultural obsession with 'character' (as in "he/she has the right character" -- which in "Neoliberal speak" means he/she is a toothless conformist ;-), glorification of groupthink, and the intensification of surveillance.

As Charlie Schluting noted in 2010 (Enterprise Networking Planet, April 7, 2010):

What happened to the old "sysadmin" of just a few years ago? We've split what used to be the sysadmin into application teams, server teams, storage teams, and network teams. There were often at least a few people, the holders of knowledge, who knew how everything worked, and I mean everything. Every application, every piece of network gear, and how every server was configured -- these people could save a business in times of disaster.

Now look at what we've done. Knowledge is so decentralized we must invent new roles to act as liaisons between all the IT groups.

Architects now hold much of the high-level "how it works" knowledge, but without knowing how any one piece actually does work.

In organizations with more than a few hundred IT staff and developers, it becomes nearly impossible for one person to do and know everything. This movement toward specializing in individual areas seems almost natural. That, however, does not provide a free ticket for people to turn a blind eye.

Specialization

You know the story: Company installs new application, nobody understands it yet, so an expert is hired. Often, the person with a certification in using the new application only really knows how to run that application. Perhaps they aren't interested in learning anything else, because their skill is in high demand right now. And besides, everything else in the infrastructure is run by people who specialize in those elements. Everything is taken care of.

Except, how do these teams communicate when changes need to take place? Are the storage administrators teaching the Windows administrators about storage multipathing; or worse, logging in and setting it up because it's faster for the storage gurus to do it themselves? A fundamental level of knowledge is often lacking, which makes it very difficult for teams to brainstorm about new ways to evolve IT services. The business environment has made it OK for IT staffers to specialize and only learn one thing.

If you hire someone certified in the application, operating system, or network vendor you use, that is precisely what you get. Certifications may be a nice filter to quickly identify who has direct knowledge in the area you're hiring for, but often they indicate specialization or compensation for lack of experience.

Resource Competition

Does your IT department function as a unit? Even 20-person IT shops have turf wars, so the answer is very likely, "no." As teams are split into more and more distinct operating units, grouping occurs. One IT budget gets split between all these groups. Often each group will have a manager who pitches his needs to upper management in hopes they will realize how important the team is.

The "us vs. them" mentality manifests itself at all levels, and it's reinforced by management having to define each team's worth in the form of a budget. One strategy is to illustrate a doomsday scenario. If you paint a bleak enough picture, you may get more funding. Only if you are careful enough to illustrate the failings are due to lack of capital resources, not management or people. A manager of another group may explain that they are not receiving the correct level of service, so they need to duplicate the efforts of another group and just implement something themselves. On and on, the arguments continue.

Most often, I've seen competition between server groups result in horribly inefficient uses of hardware. For example, what happens in your organization when one team needs more server hardware? Assume that another team has five unused servers sitting in a blade chassis. Does the answer change? No, it does not. Even in test environments, sharing doesn't often happen between IT groups.

With virtualization, some aspects of resource competition get better and some remain the same. When first implemented, most groups will be running their own type of virtualization for their platform. The next step, I've most often seen, is for test servers to get virtualized. If a new group is formed to manage the virtualization infrastructure, virtual machines can be allocated to various application and server teams from a central pool and everyone is now sharing. Or, they begin sharing and then demand their own physical hardware to be isolated from others' resource hungry utilization. This is nonetheless a step in the right direction. Auto migration and guaranteed resource policies can go a long way toward making shared infrastructure, even between competing groups, a viable option.

Blamestorming

The most damaging side effect of splitting into too many distinct IT groups is the reinforcement of an "us versus them" mentality. Aside from the notion that specialization creates a lack of knowledge, blamestorming is what this article is really about. When a project is delayed, it is all too easy to blame another group. The SAN people didn't allocate storage on time, so another team was delayed. That is the timeline of the project, so all work halted until that hiccup was resolved. Having someone else to blame when things get delayed makes it all too easy to simply stop working for a while.

More related to the initial points at the beginning of this article, perhaps, is the blamestorm that happens after a system outage.

Say an ERP system becomes unresponsive a few times throughout the day. The application team says it's just slowing down, and they don't know why. The network team says everything is fine. The server team says the application is "blocking on IO," which means it's a SAN issue. The SAN team says there is nothing wrong, and other applications on the same devices are fine. You've run through nearly every team, but still have no answer. The SAN people don't have access to the application servers to help diagnose the problem. The server team doesn't even know how the application runs.

See the problem? Specialized teams are distinct and by nature adversarial. Specialized staffers often relegate themselves into a niche knowing that as long as they continue working at large enough companies, "someone else" will take care of all the other pieces.

I unfortunately don't have an answer to this problem. Maybe rotating employees between departments will help. They gain knowledge and also get to know other people, which should lessen the propensity to view them as outsiders.

The tragic part of the current environment is that it is like shifting sands. And it is not only due to the "natural process of crapification of operating systems," in which the OS gradually loses its architectural integrity. The pace of change is just too fast for mere humans to adapt to. And most of it represents "change for the sake of change," not some valuable improvement or extension of capabilities.

If you are a sysadmin who writes his own scripts, you write on sand, spending a lot of time thinking over and debugging your scripts, which raises your productivity and diminishes the number of possible errors. But the next OS version wipes out a considerable part of your work, and you need to revise your scripts again. The tale of Sisyphus can now be re-interpreted as a prescient warning about the thankless task of the sysadmin: to learn new stuff and maintain his own script library ;-) Sometimes a lot of work is wiped out because the corporate brass decides to switch to a different flavor of Linux, or "yet another flavor" is added due to a large acquisition. Add to this the inevitable technological changes, and the question arises: can't you get a more respectable profession, one in which 66% of your knowledge is not replaced within the next ten years?

The Balkanization of Linux is also demonstrated by the Tower of Babel of system programming languages (C, C++, Perl, Python, Ruby, Go, Java, to name a few) and by systems that supposedly should help you but mostly do quite the opposite (Puppet, Ansible, Chef, etc.). Add to this the monitoring infrastructure (say, Nagios) and you definitely have information overload.

Inadequate training just adds to the stress. First of all, corporations no longer want to pay for it. So you are on your own and need to learn mostly in your free time, as the workload is substantial in most organizations -- using free or low-cost courses if they are available, or buying your own books and trying to learn new stuff from them (which of course is the mark of any good sysadmin, but should not be the only source of new knowledge). The days when you could travel to a vendor training center for a week and have a chance to communicate with admins from other organizations (which probably was the most valuable part of the whole exercise) are long in the past. I can tell you that training by Sun (Solaris) and IBM (AIX) in the late 1990s was of really high quality, with highly qualified instructors from whom you could learn a lot outside the main topic of the course. Unlike "Trump University," Sun courses could probably have been honestly called "Sun University." Most training now is via the Web, and chances for face-to-face communication have disappeared. Also, the stress is now on learning "how" rather than "why"; the "why" topics are typically reserved for "advanced" courses.

There is also the necessity to relearn the same stuff again and again, as new technologies/daemons/versions of the OS are often either the same as, or even inferior to, the previous ones, or represent an open scam in which training is the way to extract money from lemmings (Agile, most of the DevOps hoopla, etc.). This is typical neoliberal mentality ("greed is good") implemented in education. There is also a tendency to treat virtual machines and cloud infrastructure as separate technologies requiring separate training and separate sets of certifications (AWS, Azure). This is a kind of infantilization of the profession, in which a person who learned a lot of stuff in the previous 10 years needs to forget it and relearn most of it again and again.

Of course, sysadmins are not the only ones who suffer. Computer scientists also now struggle with the excessive level of complexity and too-quickly shifting sands. Look at the tragedy of Donald Knuth and his lifelong project of creating a comprehensive monograph for system programmers (The Art of Computer Programming). He was flattened by the shifting sands and probably will not be able to finish even volume 4 (out of the seven that were planned) in his lifetime.

Of course, much depends on the evolution of hardware and the changes it causes, such as the mass introduction of large SSDs, multi-core CPUs, and large RAM.

Nobody is now surprised to see a server with 128GB of RAM, a laptop with 16GB of RAM, or a cellphone with 4GB of RAM and a 1GHz CPU. (Note that the IBM PC started with a 1MB address space, of which only 640KB was available for programs, and a 4.77 MHz (not GHz) single-core CPU without a floating-point unit.) Such changes, while painful, are inevitable, and hardware progress has slowed down recently as it has reached the physical limits of the technology (we probably will not see 2-nanometer-lithography CPUs or 8GHz CPU clock speeds in our lifetimes).

Changes caused by fashion and by the desire of the dominant player to entrench its position are more difficult to accept. It is difficult or even impossible to predict which technology will become fashionable tomorrow and how long DevOps will remain in fashion. Typically such things last around ten years. After that, everything typically fades into oblivion, or is even crossed out, and former idols are shattered. This strange period of re-invention of the "glass-walls datacenter" under the banner of DevOps (and old-timers still remember that IBM datacenters were hated with a passion, and this hate created an additional non-technological incentive for mini-computers and later for the IBM PC) is characterized by a level of hype usually reserved for women's fashion. It sometimes looks to me as if the movie The Devil Wears Prada is a subtle parable on sysadmin work.

Add to this the horrible job market, especially for university graduates and older sysadmins (see Over 50 and unemployed), and one probably starts to suspect that the life of the modern sysadmin is far from paradise. When you read some job descriptions on sites like Monster, Dice, or Indeed, you ask yourself whether those people really want to hire anybody, or whether this is just a smokescreen for H-1B job certification. The level of detail is often so precise that it is almost impossible to change your current specialization. They do not care about the level of talent; they do not want to train a suitable candidate. They want a person who fits 100% from day one. Also, in places like NYC or SF, rents and property prices keep growing while income growth has been stagnant.

The vandalism of Unix performed by Red Hat with RHEL 7 makes the current environment somewhat unhealthy. It is clear that this was done at the whim of the Red Hat brass, not in the interest of the community. This is a typical Microsoft-style trick, which instantly made dozens of high-quality books written by very talented authors semi-obsolete. And the question arises whether it makes sense to write any book about RHEL other than for a solid advance. It generated some backlash, but Red Hat's position as the Microsoft of Linux allowed it to shove its inferior technical decisions down users' throats. In a way it reminds me of how Microsoft dealt with Windows 7, replacing it with Windows 10 and essentially destroying the previous Windows interface ecosystem (while preserving binary compatibility).

See also

Here are my notes/reflections on sysadmin problems that often arise in the rather strange (and sometimes pretty toxic) IT departments of large corporations:



NEWS CONTENTS

Old News ;-)


For the list of top articles see Recommended Links section


"I appreciate Woody Allen's humor because one of my safety valves is an appreciation for life's absurdities. His message is that life isn't a funeral march to the grave. It's a polka."

-- Dennis Kucinich

[Apr 13, 2019] For those IT guys who want to change their specialty

Highly recommended!
The neoliberal war on labor in the USA is real. And it is especially real for IT folk over 50. No country for old men, so to speak...
Notable quotes:
"... Obviously you need a financial cushion to not be earning for months and to pay for the training courses. ..."
"... Yeah, people get set in their ways and resistant to make changes. Steve Jobs talked about people developing grooves in their brain and how important it is to force yourself out of these grooves.* ..."
"... Your thoughts construct patterns like scaffolding in your mind. You are really etching chemical patterns. In most cases, people get stuck in those patterns, just like grooves in a record, and they never get out of them. ..."
"... The brain is like a muscle, it needs to be constantly worked to become strong. If you waste it watching football or looking at porn your brain will atrophy like the muscles of a person in a wheelchair. ..."
"... IBEW (licensed electricians) has no upper age limit for apprentices They have lots of American engineers who applied in their 30s after realizing most companies want diverse HI-B engineers. ..."
"... At 40+, I still can learn advanced mathematics as well as I ever did. In fact, I can still compete with the Chinese 20 year olds. The problem is not mental horsepower, it's time and energy. I rarely have time to concentrate these days (wife, kids, pets), which makes it hard to get the solid hours of prime mental time required to really push yourself at a hard pace and learn advanced material. ..."
"... That's a huge key and I discovered it when I was asked to tutor people who were failing chemistry. I quickly discovered that all it took for most of them to "get it" was to keep approaching the problem from different angles until a light came on for them and for me the challenge of finding the right approach was a great motivator. Invariably it was some minor issue and once they overcame that, it became easy for them. I'm still astonished at that to this day. ..."
"... Sorry man, English teaching is huge, and will remain so for some time to come. I'm heavily involved in the area and know plenty of ESL teachers. Spain for me, and the level of English here is still so dreadful and they all need it, the demand is staggering and their schools suck at teaching it themselves. ..."
"... You have to really dislike your circumstances in the US to leave and be willing to find some way to get by overseas. ..."
"... We already saw this in South Africa. Mandela took over, the country went down the tubes, the wealthy whites left and the Boers were left to die in refugee camps. They WANT to leave and a few went to Russia, but most developed countries don't want them. Not with the limited amount of money they have. ..."
"... Americans are mostly ignorant to the fact that they live in a 2nd world country except for blacks and rednecks I have met in the Philippines who were stationed there in the military and have a $1000 a month check. Many of them live in more dangerous and dirty internal third worlds in America than what they can have in Southeast Asia and a good many would be homeless. They are worldly enough to leave. ..."
Apr 13, 2019 | www.unz.com

Anonymous [388] Disclaimer , says: March 12, 2019 at 1:26 pm GMT

@YetAnotherAnon

" He's 28 years old getting too old and soft for the entry-level grunt work in the skilled trades as well. What then?"

I know a UK guy (ex City type) who retrained as an electrician in his early 50s. Competent guy. Obviously no one would take him on as an apprentice, so he wired up all his outbuildings as his project to get his certificate. But he's getting work now, word gets around if you're any good.

Obviously you need a financial cushion to not be earning for months and to pay for the training courses.

Yeah, people get set in their ways and resistant to make changes. Steve Jobs talked about people developing grooves in their brain and how important it is to force yourself out of these grooves.*

I know a Haitian immigrant without a college degree who was working three jobs, then dropped down to two jobs and went to school part time in his late 40's, earned his degree in engineering, and is now an engineer in his early 50's.

*From Steve Jobs by Walter Isaacson (Simon and Schuster, 2011), pp.330-331:

"It's rare that you see an artist in his 30s or 40s able to really contribute something amazing," Jobs said wistfully to the writer David Sheff, who published a long and intimate interview in Playboy the month he turned thirty. "Of course, there are some people who are innately curious, forever little kids in their awe of life, but they're rare." The interview touched on many subjects, but Jobs's most poignant ruminations were about growing old and facing the future:

Your thoughts construct patterns like scaffolding in your mind. You are really etching chemical patterns. In most cases, people get stuck in those patterns, just like grooves in a record, and they never get out of them.

I'll always stay connected with Apple. I hope that throughout my life I'll sort of have the thread of my life and the thread of Apple weave in and out of each other, like a tapestry. There may be a few years when I'm not there, but I'll always come back. . . .

If you want to live your life in a creative way, as an artist, you have to not look back too much. You have to be willing to take whatever you've done and whoever you were and throw them away.

The more the outside world tries to reinforce an image of you, the harder it is to continue to be an artist, which is why a lot of times, artists have to say, "Bye. I have to go. I'm going crazy and I'm getting out of here." And they go and hibernate somewhere. Maybe later they re-emerge a little differently.

anonymous [191] Disclaimer , says: March 12, 2019 at 9:59 pm GMT
@The Anti-Gnostic

"fluid intelligence" starts crystallizing after your 20's". Nonsense, I had a great deal of trouble learning anything from my teen years and 20's because I didn't know how to learn. I went for 30 years and eventually figured out a learning style that worked for me. I have learned more and mastered more skills in the past ten years ages 49-59 than I had in the previous 30.

You can challenge yourself like I did and after a while of doing this (6 months) you will find it a lot easier to learn and comprehend than you did previously. (This is true only if you haven't damaged your brain from years of smoking and drinking). I constantly challenged myself with trying to learn math that I had trouble with in school and eventually mastered it.

The brain is like a muscle, it needs to be constantly worked to become strong. If you waste it watching football or looking at porn your brain will atrophy like the muscles of a person in a wheelchair.

Anon [257] Disclaimer , says: March 15, 2019 at 4:29 am GMT
@YetAnotherAnon

IBEW (licensed electricians) has no upper age limit for apprentices. They have lots of American engineers who applied in their 30s after realizing most companies want diverse H-1B engineers.

Upper age limits for almost every occupation disappeared decades ago in America because of age discrimination laws.

I can't see how any 28 year old could possibly be too soft to go into any kind of manual labor job.

jbwilson24 , says: March 15, 2019 at 9:31 am GMT
@anonymous Yeah, there was a recent study showing that 70 year olds can form neural connections as quickly as teenagers.
At 40+, I still can learn advanced mathematics as well as I ever did. In fact, I can still compete with the Chinese 20 year olds. The problem is not mental horsepower, it's time and energy. I rarely have time to concentrate these days (wife, kids, pets), which makes it hard to get the solid hours of prime mental time required to really push yourself at a hard pace and learn advanced material.

This is why the Chinese are basically out of date when they are 30, their companies assume that they have kids and are not able to give 110% anymore.

jacques sheete , says: March 15, 2019 at 11:14 am GMT
@anonymous

eventually figured out a learning style that worked for me.

That's a huge key and I discovered it when I was asked to tutor people who were failing chemistry. I quickly discovered that all it took for most of them to "get it" was to keep approaching the problem from different angles until a light came on for them and for me the challenge of finding the right approach was a great motivator. Invariably it was some minor issue and once they overcame that, it became easy for them. I'm still astonished at that to this day.

The brain is like a muscle, it needs to be constantly worked to become strong. If you waste it watching football or looking at porn your brain will atrophy like the muscles of a person in a wheelchair.

No doubt about it. No embellishment needed there!

s.n , says: March 15, 2019 at 11:42 am GMT
@The Anti-Gnostic

Yeah. He's 28 years old and apparently his chosen skillset is teaching EASL in foreign countries. That sector is shrinking as English becomes the global lingua franca and is taught in elementary schools worldwide. He's really too old and soft for his Plan B (military), and getting too old and soft for the entry-level grunt work in the skilled trades as well. What then?

do you know anything first hand about the teaching- english- as-a- second- language hustle?

Asking sincerely – as I don't know anything about it. However I kinda suspect that 'native speakers' will be in demand in many parts of the globe for some time to come [as an aside – and maybe Linh has written of this and I missed it – but last spring I was in Saigon for a couple of weeks and, hanging out one day at the zoo & museum complex, was startled to see about three groups of Vietnamese primary-school students being led around by americans in their early 20s, narrating everything in american english . Apparently private schools offering entirely english-language curriculum are the big hit with the middle & upper class elite there. Perhaps more of the same elsewhere in the region?]

At any rate the young man in this interview has a lot more in the way of qualifications and skill sets than I had when I left the States 35 years ago, and I've done just fine. I'd advise any prospective expats to get that TEFL certificate as it's one extra thing to have in your back pocket and who knows?

PS: "It really can't be overstated how blessed you are to have American citizenship" – well, yes it can. Everyone knows that the best passport on earth is from Northwest Euroland, one of those places with free university education and free health care and where teenage mothers don't daily keel over dead from heroin overdoses in Dollar Stores .. Also more places visa-free

The Anti-Gnostic , says: Website March 15, 2019 at 2:37 pm GMT
@s.n

When you left the States 35 years ago, the world was 3 billion people smaller. The labor market has gotten a tad more competitive. I don't see any indication of a trade or other refined skillset in this article.

People who teach EASL for a living are like people who drive cars for a living: you don't do it because you're really good at teaching your native language, you do it because you're not marketable at anything else.

jeff stryker , says: March 15, 2019 at 3:20 pm GMT
@jacques sheete JACQUES

I think Australian citizenship is the best you can have. The country is far from perfect, but any lower middle class American white like myself would prefer to be lower middle class there than in Detroit or Phoenix, where being lower income means life around the unfettered urban underclass that is paranoia inducing.

Being from the US is not as bad as being Bangladeshi, but if you had to be white and urban and poor you'd be better off in Sydney than Flint.

The most patriotic Americans have never been anywhere, so they have no idea whether Australia or Tokyo are better. They have never traveled.

s.n , says: March 16, 2019 at 7:23 am GMT
@The Anti-Gnostic

People who teach EASL for a living are like people who drive cars for a living: you don't do it because you're really good at teaching your native language, you do it because you're not marketable at anything else.

well that's the beauty of it: you don't have to be good at anything other than just being a native speaker to succeed as an EASL teacher, and thousands more potential customers are born every day. I'd definitely advise any potential expats to become accomplished, and, even better, qualified, in as many trades as possible. But imho the real key to success as a long term expat is your mindset: determination and will-power to survive no matter what. If you really want to break out of the States and see the world, and don't have inherited wealth, you will be forced to rely on your wits and good luck and seize the opportunities that arise, whatever those opportunities may be.

Thedirtysponge , says: March 16, 2019 at 4:01 pm GMT
@The Anti-Gnostic

Sorry man, English teaching is huge, and will remain so for some time to come. I'm heavily involved in the area and know plenty of ESL teachers. Spain for me, and the level of English here is still so dreadful and they all need it, the demand is staggering and their schools suck at teaching it themselves.

You are one of those people who just like to shit on things:) and people make a lot of money out of it, not everyone of course, like any area. But it's perfectly viable and good to go for a long time yet. It's exactly that English is the lingua Franca that people need to be at a high level of it. The Chinese market is still massive. The bag packer esl teachers are the ones that give off this stigma, and 'bag packer' and 'traveller' are by now very much regarded as dirty words in the ESL world.

Mike P , says: March 16, 2019 at 5:52 pm GMT
@Thedirtysponge

ESL teachers. Spain for me

There is a very funny version also with Jack Lemmon in "Irma la Douce", but I can't find that one on youtube.

jeff stryker , says: March 17, 2019 at 7:26 am GMT
@Thedirtysponge S.N. & DIRTY SPONGE

Most Americans lack the initiative to move anywhere. Most will complain but will never leave the street they were born on. Urban whites are used to adapting, being around other cultures anyhow and being somewhat street smart, but the poor rural whites in the exurbs or sticks, whose lives would really improve if they got the hell out of America, will never move anywhere.

You have to really dislike your circumstances in the US to leave and be willing to find some way to get by overseas.

Lots of people will talk about leaving America without having a clue as to how hard this is to actually do. Australia and New Zealand are not crying out for white proles with high school education or GED. It is much more difficult to move overseas and stay overseas than most Americans think.

Except of course for the ruling elite. And that is because five-star hotels look the same everywhere and money is an international language.

We already saw this in South Africa. Mandela took over, the country went down the tubes, the wealthy whites left and the Boers were left to die in refugee camps. They WANT to leave and a few went to Russia, but most developed countries don't want them. Not with the limited amount of money they have.

Australia and NZ would rather have refugees than white people in dire circumstances.

Even immigrating to Canada, a country that I worked in, is much much harder than anyone imagines.

jeff stryker , says: March 17, 2019 at 7:37 am GMT
A LONGTIME EXPAT ON LIVING ABROAD

Americans are mostly ignorant of the fact that they live in a 2nd world country, except for blacks and rednecks I have met in the Philippines who were stationed there in the military and have a $1000 a month check. Many of them live in more dangerous and dirty internal third worlds in America than what they can have in Southeast Asia, and a good many would be homeless. They are worldly enough to leave.

But most Americans whose lives would be vastly improved overseas think they are living in the greatest country on earth.

[Mar 26, 2019] I wiped out a call center by mistyping the user profile expiration purge parameters in a script before leaving for the day.

Mar 26, 2019 | twitter.com

SwiftOnSecurity 7:07 PM - 25 Mar 2019

I wiped out a call center by mistyping the user profile expiration purge parameters in a script before leaving for the day.

https://twitter.com/soniagupta504/status/1109979183352942592

SwiftOnSecurity 7:08 PM - 25 Mar 2019

Luckily most of it was backed up with a custom-built user profile roaming system, but still it was down for an hour and a half and degraded for more...

[Mar 25, 2019] How to Monitor Disk IO in Linux (Linux Hint)

Mar 25, 2019 | linuxhint.com

Monitoring Specific Storage Devices or Partitions with iostat:

By default, iostat monitors all the storage devices of your computer. But, you can monitor specific storage devices (such as sda, sdb etc) or specific partitions (such as sda1, sda2, sdb4 etc) with iostat as well.

For example, to monitor the storage device sda only, run iostat as follows:

$ sudo iostat sda

Or

$ sudo iostat -d 2 sda

As you can see, only the storage device sda is monitored.

You can also monitor multiple storage devices with iostat.

For example, to monitor the storage devices sda and sdb , run iostat as follows:

$ sudo iostat sda sdb

Or

$ sudo iostat -d 2 sda sdb

If you want to monitor specific partitions, then you can do so as well.

For example, let's say, you want to monitor the partitions sda1 and sda2 , then run iostat as follows:

$ sudo iostat sda1 sda2

Or

$ sudo iostat -d 2 sda1 sda2

As you can see, only the partitions sda1 and sda2 are monitored.
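A side note, not from the original article but standard sysstat iostat behavior: a second number after the interval acts as a count, so iostat exits after printing that many reports, which is handy in scripts. It can be combined with a device list:

$ sudo iostat -d 2 5 sda sda1

Here iostat prints five reports, two seconds apart, for the device sda and the partition sda1, and then exits.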

Monitoring LVM Devices with iostat:

You can monitor the LVM devices of your computer with the -N option of iostat.

To monitor the LVM devices of your Linux machine as well, run iostat as follows:

$ sudo iostat -N -d 2

You can also monitor specific LVM logical volume as well.

For example, to monitor the LVM logical volume centos-root (let's say), run iostat as follows:

$ sudo iostat -N -d 2 centos-root
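If you are unsure what your logical volumes are called, lsblk (a separate util-linux tool, mentioned here as an aside) prints the whole block device tree, including the device-mapper names that iostat -N reports:

$ lsblk -o NAME,TYPE,SIZE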

Changing the Units of iostat:

By default, iostat generates reports in kilobytes (kB) unit. But there are options that you can use to change the unit.

For example, to change the unit to megabytes (MB), use the -m option of iostat.

You can also change the unit to human readable with the -h option of iostat. Human readable format will automatically pick the right unit depending on the available data.

To change the unit to megabytes, run iostat as follows:

$ sudo iostat -m -d 2 sda

To change the unit to human readable format, run iostat as follows:

$ sudo iostat -h -d 2 sda

I copied a file and, as you can see, the unit is now megabytes (MB).

It changed back to kilobytes (kB) as soon as the file copy was over.

Extended Display of iostat:

If you want, you can display a lot more information about disk i/o with iostat. To do that, use the -x option of iostat.

For example, to display extended information about disk i/o, run iostat as follows:

$ sudo iostat -x -d 2 sda

You can find what each of these fields (rrqm/s, %wrqm etc) means in the man page of iostat.
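One more option worth knowing about (standard in sysstat, though not covered in this article): -t prefixes every report with a timestamp, which is useful when collecting extended statistics into a log file for later review:

$ sudo iostat -xt -d 2 sda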

Getting Help:

If you need more information on each of the supported options of iostat and what each of the fields of iostat means, I recommend you take a look at the man page of iostat.

You can access the man page of iostat with the following command:

$ man iostat

So, that's how you use iostat in Linux. Thanks for reading this article.

[Mar 25, 2019] Concatenating Strings with the += Operator

Mar 25, 2019 | linuxize.com


Concatenating Strings with the += Operator

Another way of concatenating strings in bash is by appending variables or literal strings to a variable using the += operator:

VAR1="Hello, "
VAR1+=" World"
echo "$VAR1"
Hello, World

The following example uses the += operator to concatenate strings in a bash for loop:

languages.sh
VAR=""
for ELEMENT in 'Hydrogen' 'Helium' 'Lithium' 'Beryllium'; do
  VAR+="${ELEMENT} "
done

echo "$VAR"

[Mar 13, 2019] Getting started with the cat command by Alan Formy-Duval

Mar 13, 2019 | opensource.com


Cat can also number a file's lines during output. There are two options for this, as shown in the help documentation:

-b, --number-nonblank    number nonempty output lines, overrides -n
-n, --number             number all output lines

If I use the -b option with the hello.world file, the output will be numbered like this:

   $ cat -b hello.world
   1 Hello World !

In the example above, there is an empty line. We can determine why this empty line appears by using the -n argument:

$ cat -n hello.world
   1 Hello World !
   2
   $

Now we see that there is an extra empty line. These two arguments are operating on the final output rather than the file contents, so if we were to use the -n option with both files, numbering will count lines as follows:

   
   $ cat -n hello.world goodbye.world
   1 Hello World !
   2
   3 Good Bye World !
   4
   $

One other option that can be useful is -s for squeeze-blank . This argument tells cat to reduce repeated empty line output down to one line. This is helpful when reviewing files that have a lot of empty lines, because it effectively fits more text on the screen. Suppose I have a file with three lines that are spaced apart by several empty lines, such as in this example, greetings.world :

   $ cat greetings.world
   Greetings World !

   Take me to your Leader !

   We Come in Peace !
   $

Using the -s option saves screen space:

$ cat -s greetings.world
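Given the contents of greetings.world shown above, the squeezed output should look like this, with each run of blank lines collapsed to a single one:

Greetings World !

Take me to your Leader !

We Come in Peace !
$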

Cat is often used to copy contents of one file to another file. You may be asking, "Why not just use cp ?" Here is how I could create a new file, called both.files , that contains the contents of the hello and goodbye files:

$ cat hello.world goodbye.world > both.files
$ cat both.files
Hello World !
Good Bye World !
$
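As an aside (standard shell redirection, not from the article): with >> instead of >, cat appends to an existing file rather than overwriting it:

$ cat greetings.world >> both.files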
zcat

There is another variation on the cat command known as zcat . This command is capable of displaying files that have been compressed with Gzip without needing to uncompress the files with the gunzip command. As an aside, this also preserves disk space, which is the entire reason files are compressed!

The zcat command is a bit more exciting because it can be a huge time saver for system administrators who spend a lot of time reviewing system log files. Where can we find compressed log files? Take a look at /var/log on most Linux systems. On my system, /var/log contains several files, such as syslog.2.gz and syslog.3.gz . These files are the result of the log management system, which rotates and compresses log files to save disk space and prevent logs from growing to unmanageable file sizes. Without zcat , I would have to uncompress these files with the gunzip command before viewing them. Thankfully, I can use zcat :

$ cd /var/log
$ ls *.gz
syslog.2.gz syslog.3.gz
$
$ zcat syslog.2.gz | more
Jan 30 00:02:26 workstation systemd[1850]: Starting GNOME Terminal Server...
Jan 30 00:02:26 workstation dbus-daemon[1920]: [session uid=2112 pid=1920] Successfully activated service 'org.gnome.Terminal'
Jan 30 00:02:26 workstation systemd[1850]: Started GNOME Terminal Server.
Jan 30 00:02:26 workstation org.gnome.Terminal.desktop[2059]: # watch_fast: "/org/gnome/terminal/legacy/" (establishing: 0, active: 0)
Jan 30 00:02:26 workstation org.gnome.Terminal.desktop[2059]: # unwatch_fast: "/org/gnome/terminal/legacy/" (active: 0, establishing: 1)
Jan 30 00:02:26 workstation org.gnome.Terminal.desktop[2059]: # watch_established: "/org/gnome/terminal/legacy/" (establishing: 0)
--More--

We can also pass both files to zcat if we want to review both of them uninterrupted. Due to how log rotation works, you need to pass the filenames in reverse order to preserve the chronological order of the log contents:

$ ls -l *.gz
-rw-r----- 1 syslog adm  196383 Jan 31 00:00 syslog.2.gz
-rw-r----- 1 syslog adm 1137176 Jan 30 00:00 syslog.3.gz
$ zcat syslog.3.gz syslog.2.gz | more

The cat command seems simple but is very useful. I use it regularly. You also don't need to feed or pet it like a real cat. As always, I suggest you review the man pages ( man cat ) for the cat and zcat commands to learn more about how they can be used. You can also use the --help argument for a quick synopsis of command line arguments.

Victorhck on 13 Feb 2019

and there's also a "tac" command, that is just a "cat" upside down!
Following your example:

~~~~~

tac both.files
Good Bye World!
Hello World!
~~~~
Happy hacking! :)
Johan Godfried on 26 Feb 2019

Interesting article but please don't misuse cat to pipe to more......

I am trying to teach people to use less pipes and here you go abusing cat to pipe to other commands. IMHO, 99.9% of the time this is not necessary!

Instead of "cat file | command", most of the time you can use "command file" (yes, I am an old dinosaur from a time when memory was very expensive and forking multiple commands could fill it all up)

Uri Ran on 03 Mar 2019

Run cat then press keys to see the codes your shortcut send. (Press Ctrl+C to kill the cat when you're done.)

For example, on my Mac, the key combination option-leftarrow is ^[^[[D and command-downarrow is ^[[B.

I learned it from https://stackoverflow.com/users/787216/lolesque in his answer to https://stackoverflow.com/questions/12382499/looking-for-altleftarrowkey...

Geordie on 04 Mar 2019

cat is also useful to make (or append to) text files without an editor:

$ cat >> foo << "EOF"
> Hello World
> Another Line
> EOF
$
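A related detail worth knowing (standard shell behavior, not specific to this example): quoting the delimiter, as in << "EOF" above, prevents variable and command expansion inside the here-document. With an unquoted delimiter the shell expands them, so, assuming the current user is alice:

$ cat > vars.txt << EOF
> Logged in as: $USER
> EOF
$ cat vars.txt
Logged in as: alice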

[Mar 13, 2019] Pilots Complained About Boeing 737 Max 8 For Months Before Second Deadly Crash

Mar 13, 2019 | www.zerohedge.com

Several pilots repeatedly warned federal authorities of safety concerns over the now-grounded Boeing 737 Max 8 for months leading up to the second deadly disaster involving the plane, according to an investigation by the Dallas Morning News. One captain even called the Max 8's flight manual "inadequate and almost criminally insufficient," according to the report.

" The fact that this airplane requires such jury-rigging to fly is a red flag. Now we know the systems employed are error-prone -- even if the pilots aren't sure what those systems are, what redundancies are in place and failure modes. I am left to wonder: what else don't I know?" wrote the captain.

At least five complaints about the Boeing jet were found in a federal database which pilots routinely use to report aviation incidents without fear of repercussions.

The complaints are about the safety mechanism cited in preliminary reports for an October plane crash in Indonesia that killed 189.

The disclosures found by The News reference problems during flights of Boeing 737 Max 8s with an autopilot system during takeoff and nose-down situations while trying to gain altitude. While records show these flights occurred during October and November, information regarding which airlines the pilots were flying for at the time is redacted from the database. - Dallas Morning News

One captain who flies the Max 8 said in November that it was "unconscionable" that Boeing and federal authorities have allowed pilots to fly the plane without adequate training - including a failure to fully disclose how its systems were distinctly different from other planes.

An FAA spokesman said reports in the system are filed directly with NASA, which serves as a neutral third party in the reporting of grievances.

"The FAA analyzes these reports along with other safety data gathered through programs the FAA administers directly, including the Aviation Safety Action Program, which includes all of the major airlines including Southwest and American," said FAA southwest regional spokesman Lynn Lunsford.

Meanwhile, despite several airlines and foreign countries grounding the Max 8, US regulators have so far declined to follow suit. They have, however, mandated that Boeing upgrade the plane's software by April.

Sen. Ted Cruz (R-TX), who chairs a Senate subcommittee overseeing aviation, called for the grounding of the Max 8 in a Thursday statement.

"Further investigation may reveal that mechanical issues were not the cause, but until that time, our first priority must be the safety of the flying public," said Cruz.

At least 18 carriers -- including American Airlines and Southwest Airlines, the two largest U.S. carriers flying the 737 Max 8 -- have also declined to ground planes, saying they are confident in the safety and "airworthiness" of their fleets. American and Southwest have 24 and 34 of the aircraft in their fleets, respectively. - Dallas Morning News

"The United States should be leading the world in aviation safety," said Transport Workers Union president John Samuelsen. "And yet, because of the lust for profit in American aviation, we're still flying planes that dozens of other countries and airlines have now said need to be grounded."

[Mar 13, 2019] Boeing's automatic trim for the 737 MAX was not disclosed to the Pilots by Bjorn Fehrm

The background to Boeing's 737 MAX automatic trim
Mar 13, 2019 | leehamnews.com

The automatic trim we described last week has a name, MCAS, or Maneuvering Characteristics Augmentation System.

It's unique to the MAX because the 737 MAX no longer has the docile pitch characteristics of the 737NG at high Angles Of Attack (AOA). This is caused by the larger engine nacelles covering the higher bypass LEAP-1B engines.

The nacelles for the MAX are larger and placed higher and further forward of the wing, Figure 1.

Figure 1. Boeing 737NG (left) and MAX (right) nacelles compared. Source: Boeing 737 MAX brochure.

By placing the nacelle further forward of the wing, it could be placed higher. Combined with a higher nose landing gear, which raises the nacelle further, the same ground clearance could be achieved for the nacelle as for the 737NG.

The drawback of a larger nacelle, placed further forward, is it destabilizes the aircraft in pitch. All objects on an aircraft placed ahead of the Center of Gravity (the line in Figure 2, around which the aircraft moves in pitch) will contribute to destabilize the aircraft in pitch.

... ... ...

The 737 is a classical flight control aircraft. It relies on a naturally stable base aircraft for its flight control design, augmented in selected areas. One such area is the artificial yaw damping, present on virtually all larger aircraft (to stop passengers getting sick from the aircraft's natural tendency to Dutch Roll = wagging its tail).

Until the MAX, there was no need for artificial aids in pitch. Once the aircraft entered a stall, there were several actions described last week which assisted the pilot to exit the stall. But not in normal flight.

The larger nacelles, called for by the higher-bypass LEAP-1B engines, changed this. When flying at normal angles of attack (3° at cruise and, say, 5° in a turn), the destabilizing effect of the larger engines is not felt.

The nacelles are designed not to generate lift in normal flight. It would generate unnecessary drag, as the aspect ratio of an engine nacelle is lousy. The aircraft designer focuses the lift on the high-aspect-ratio wings.

But if the pilot for whatever reason manoeuvres the aircraft hard, generating an angle of attack close to the stall angle of around 14°, the previously neutral engine nacelle generates lift. A lift which is felt by the aircraft as a pitch-up moment (as it's ahead of the CG line), now stronger than on the 737NG. This destabilizes the MAX in pitch at higher Angles Of Attack (AOA). The most difficult situation is when the maneuver has a high pitch rate. The aircraft's inertia can then provoke an over-swing into stall AOA.

To counter the MAX's lower stability margins at high AOA, Boeing introduced MCAS. Dependent on AOA value and rate, altitude (air density) and Mach (changed flow conditions) the MCAS, which is a software loop in the Flight Control computer, initiates a nose down trim above a threshold AOA.

It can be stopped by the Pilot counter-trimming on the Yoke or by him hitting the CUTOUT switches on the center pedestal. It's not stopped by the Pilot pulling the Yoke, which for normal trim from the autopilot or runaway manual trim triggers trim hold sensors. This would negate why MCAS was implemented, the Pilot pulling so hard on the Yoke that the aircraft is flying close to stall.

It's probably this counterintuitive characteristic, which goes against what has been trained many times in the simulator for unwanted autopilot trim or manual trim runaway, which has confused the pilots of JT610. They learned that holding against the trim stopped the nose down, and then they could take action, like counter-trimming or outright CUTOUT the trim servo. But it didn't. After a 10 second trim to a 2.5° nose down stabilizer position, the trimming started again despite the Pilots pulling against it. The faulty high AOA signal was still present.

How should they know that pulling on the Yoke didn't stop the trim? It was described nowhere; neither in the aircraft's manual, the AFM, nor in the Pilot's manual, the FCOM. This has created strong reactions from airlines with the 737 MAX on the flight line and their Pilots. They have learned the NG and the MAX flies the same. They fly them interchangeably during the week.

They do fly the same as long as no fault appears. Then there are differences, and the Pilots should have been informed about the differences.

  1. Bruce Levitt
    November 14, 2018
In figure 2 it shows the same center of gravity for the NG as for the MAX. I find this a bit surprising, as I would have expected that mounting heavy engines further forward would have caused a forward shift in the center of gravity that would not have been offset by the longer tailcone, which I'm assuming is relatively light even with the APU installed.

    Based on what is coming out about the automatic trim, Boeing must be counting its lucky stars that this incident happened to Lion Air and not to an American aircraft. If this had happened in the US, I'm pretty sure the fleet would have been grounded by the FAA and the class action lawyers would be lined up outside the door to get their many pounds of flesh.

    This is quite the wake-up call for Boeing.

    • OV-099
      November 14, 2018
If the FAA is not going to comprehensively review the certification for the 737 MAX, I would not be surprised if EASA started taking a closer look at the aircraft and at why the FAA missed the seemingly inadequate testing of the automatic trim when it decided to certify the 737 MAX 8.
      • Doubting Thomas
        November 16, 2018
        One wonders if there are any OTHER goodies in the new/improved/yet identical handling latest iteration of this old bird that Boeing did not disclose so that pilots need not be retrained.
        EASA & FAA likely already are asking some pointed questions and will want to verify any statements made by the manufacturer.
        Depending on the answers pilot training requirements are likely to change materially.
    • jbeeko
      November 14, 2018
      CG will vary based on loading. I'd guess the line is the rear-most allowed CG.
    • ahmed
      November 18, 2018
      hi dears
I think that even though the pilots didn't know about MCAS, this case could have been corrected simply by applying the Boeing checklist (QRH) for stabilizer runaway.
When the pilots noticed that the stabilizer was trimming without a known input (from pilot or autopilot), they should have put the cutout switches in the OFF position, according to the QRH.
      • TransWorld
        November 19, 2018
Please note that the first action was pulling back on the yoke to stop it.

        Also keep in mind the aircraft is screaming stall and the stick shaker is activated.

        Pulling back on the yoke in that case is the WRONG thing to do if you are stalled.

The Pilot has to then determine which system is lying.

At the same time it's changing its behavior from previous training: every 5 seconds, it does it again.

        There also was another issue taking place at the same time.

        So now you have two systems lying to you, one that is actively trying to kill you.

If the Pitot static system is broken, you also have several key instruments feeding you bad data (VSI, altitude and speed).

    • TransWorld
      November 14, 2018
      Grubbie: I can partly answer that.

      Pilots are trained to immediately deal with emergency issues (engine loss etc)

Then there are detailed follow-up instructions for follow-on actions (if any).

Simulators are wonderful things because you can train lethal scenarios without lethal results.

In this case, with NO pilot training, let alone anything in the manuals, pilots have to either be really quick in the situation or you get the result you do. Some are better at it than others (Sullenberger, among other actions, elected to turn on his APU even though it was not part of the engine-out checklist).

The other decision was to ditch; too many pilots try to turn back even though we are trained not to.

What I can tell you from personal experience is that having got myself into a spin without any training, I was locked up logic-wise (panic), as suddenly nothing was working the way it should.

I was lucky I was high enough, and my brain kicked back into cold logic mode; I knew the counter to a spin from reading.

      Another 500 feet and I would not be here to post.

      While I did parts of the spin recovery wrong, fortunately in that aircraft it did not care, right rudder was enough to stop it.

  2. OV-099
    November 14, 2018
It's starting to look as if Boeing will not be able to just pay victims' relatives in the form of "condolence money", without admitting liability.
    • Dukeofurl
      November 14, 2018
I'm pretty sure that even though it's an Indonesian airline, any whiff of fault with the plane itself will have lawyers taking Boeing on in US courts.
  3. Tech-guru
    November 14, 2018
Astonishing, to say the least. It is quite unlike Boeing; they are normally very good with documentation and training. It makes everyone wonder how such a vital change on the MAX aircraft was omitted from the books as well as from crew training.
Your explanation is very good as to why you need this damn MCAS. But can you also tell us how just one faulty sensor can trigger the MCAS? In all other Boeing models, like the B777, the two AOA sensor signals are compared with a calculated AOA, and the mid value is chosen within the ADIRU. That eliminates the drastic mistake of following one wrong sensor input.
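
For readers unfamiliar with mid-value select, the idea fits in a few lines of Python. This is an illustration of the voting scheme the comment attributes to the B777 ADIRU, not actual avionics code.

```python
# Mid-value select: two physical AOA vanes plus a calculated (synthetic) AOA;
# taking the median means one hard-over sensor cannot drive the output.

def mid_value_select(aoa_left, aoa_right, aoa_calculated):
    """Return the median of three AOA sources, in degrees."""
    return sorted([aoa_left, aoa_right, aoa_calculated])[1]

# A stuck-high left vane (24 deg) is outvoted by the other two sources.
assert mid_value_select(24.0, 4.8, 5.1) == 5.1
```
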
    • Bjorn Fehrm
      November 14, 2018
Hi Tech-guru,

It's not certain that it's a one-sensor fault. One sensor was changed amid information that there was a 20-degree difference between the two sides. But then it happened again. I think we might be informed that something else is at the root of this, which could also trip such a plausibility check as you mention. We just don't know. What we know is that the MCAS function was triggered without the aircraft being close to stall.

      • Matthew
        November 14, 2018
If it's certain that the MCAS was doing unhelpful things, that, coupled with the fact that no one was telling pilots anything about it, suggests to me that this is already effectively an open-and-shut case as far as liability and regulatory remedies are concerned.

The technical root cause is also important, but probably irrelevant as far as establishing the ultimate reason behind the crash.


[Mar 13, 2019] Boeing Crapification Second 737 Max Plane Within Five Months Crashes Just After Takeoff

Notable quotes:
"... The key point I want to pick up on from that earlier post is this: the Boeing 737 Max includes a new "safety" feature about which the company failed to inform the Federal Aviation Administration (FAA). ..."
"... Boeing Co. withheld information about potential hazards associated with a new flight-control feature suspected of playing a role in last month's fatal Lion Air jet crash, according to safety experts involved in the investigation, as well as midlevel FAA officials and airline pilots. ..."
"... Notice that phrase: "under unusual conditions". Seems now that the pilots of two of these jets may have encountered such unusual conditions since October. ..."
"... Why did Boeing neglect to tell the FAA – or, for that matter, other airlines or regulatory authorities – about the changes to the 737 Max? Well, the airline marketed the new jet as not needing pilots to undergo any additional training in order to fly it. ..."
"... In addition to considerable potential huge legal liability, from both the Lion Air and Ethiopian Airlines crashes, Boeing also faces the commercial consequences of grounding some if not all 737 Max 8 'planes currently in service – temporarily? indefinitely? -and loss or at minimum delay of all future sales of this aircraft model. ..."
"... If this tragedy had happened on an aircraft of another manufacturer other than big Boeing, the fleet would already have been grounded by the FAA. The arrogance of engineers both at Airbus and Boeing, who refuse to give the pilots easy means to regain immediate and full authority over the plane (pitch and power) is just appalling. ..."
"... Boeing has made significant inroads in China with its 737 MAX family. A dozen Chinese airlines have ordered 180 of the planes, and 76 of them have been delivered, according Boeing. About 85% of Boeing's unfilled Chinese airline orders are for 737 MAX planes. ..."
"... "It's pretty asinine for them to put a system on an airplane and not tell the pilots who are operating the airplane, especially when it deals with flight controls," Captain Mike Michaelis, chairman of the safety committee for the Allied Pilots Association, told the Wall Street Journal. ..."
"... The aircraft company concealed the new system and minimized the differences between the MAX and other versions of the 737 to boost sales. On the Boeing website, the company claims that airlines can save "millions of dollars" by purchasing the new plane "because of its commonality" with previous versions of the plane. ..."
"... "Years of experience representing hundreds of victims has revealed a common thread through most air disaster cases," said Charles Herrmann the principle of Herrmann Law. "Generating profit in a fiercely competitive market too often involves cutting safety measures. In this case, Boeing cut training and completely eliminated instructions and warnings on a new system. Pilots didn't even know it existed. I can't blame so many pilots for being mad as hell." ..."
"... The Air France Airbus disaster was jumped on – Boeing's traditional hydraulic links between the sticks for the two pilots ensuring they move in tandem; the supposed comments by Captain Sully that the Airbus software didn't allow him to hit the water at the optimal angle he wanted, causing the rear rupture in the fuselage both showed the inferiority of fly-by-wire until Boeing started using it too. (Sully has taken issue with the book making the above point and concludes fly-by-wire is a "mixed blessing".) ..."
"... Money over people. ..."
Mar 13, 2019 | www.nakedcapitalism.com

Posted on March 11, 2019 by Jerri-Lynn Scofield By Jerri-Lynn Scofield, who has worked as a securities lawyer and a derivatives trader. She is currently writing a book about textile artisans.

Yesterday, an Ethiopian Airlines flight crashed minutes after takeoff, killing all 157 passengers on board.

The crash occurred less than five months after a Lion Air jet crashed near Jakarta, Indonesia, also shortly after takeoff, and killed all 189 passengers.

Both jets were Boeing's latest 737 Max 8 model.

The Wall Street Journal reports in Ethiopian Crash Carries High Stakes for Boeing, Growing African Airline :

The state-owned airline is among the early operators of Boeing's new 737 MAX single-aisle workhorse aircraft, which has been delivered to carriers around the world since 2017. The 737 MAX represents about two-thirds of Boeing's future deliveries and an estimated 40% of its profits, according to analysts.

Having delivered 350 of the 737 MAX planes as of January, Boeing has booked orders for about 5,000 more, many to airlines in fast-growing emerging markets around the world.

The voice and data recorders for the doomed flight have already been recovered, the New York Times reported in Ethiopian Airline Crash Updates: Data and Voice Recorders Recovered . Investigators will soon be able to determine whether the same factors that caused the Lion Air crash also caused the latest Ethiopian Airlines tragedy.

Boeing, Crapification, Two 737 Max Crashes Within Five Months

Yves wrote a post in November, Boeing, Crapification, and the Lion Air Crash , analyzing a devastating Wall Street Journal report on that earlier crash. I will not repeat the details of her post here, but instead encourage interested readers to read it in full.

The key point I want to pick up on from that earlier post is this: the Boeing 737 Max includes a new "safety" feature about which the company failed to inform the Federal Aviation Administration (FAA). As Yves wrote:

The short version of the story is that Boeing had implemented a new "safety" feature that operated even when its plane was being flown manually: if the plane went into a stall, it would lower the nose suddenly to pick up airspeed and fly normally again. However, Boeing didn't tell its buyers or even the FAA about this new goodie. It wasn't in pilot training or even the manuals. But even worse, this new control could force the nose down so far that it would be impossible not to crash the plane. And no, I am not making this up. From the Wall Street Journal:

Boeing Co. withheld information about potential hazards associated with a new flight-control feature suspected of playing a role in last month's fatal Lion Air jet crash, according to safety experts involved in the investigation, as well as midlevel FAA officials and airline pilots.

The automated stall-prevention system on Boeing 737 MAX 8 and MAX 9 models -- intended to help cockpit crews avoid mistakenly raising a plane's nose dangerously high -- under unusual conditions can push it down unexpectedly and so strongly that flight crews can't pull it back up. Such a scenario, Boeing told airlines in a world-wide safety bulletin roughly a week after the accident, can result in a steep dive or crash -- even if pilots are manually flying the jetliner and don't expect flight-control computers to kick in.

Notice that phrase: "under unusual conditions". Seems now that the pilots of two of these jets may have encountered such unusual conditions since October.

Why did Boeing neglect to tell the FAA – or, for that matter, other airlines or regulatory authorities – about the changes to the 737 Max? Well, the company marketed the new jet as not needing pilots to undergo any additional training in order to fly it.

I see.

Why Were 737 Max Jets Still in Service?

Today, Boeing executives no doubt rue not pulling all 737 Max 8 jets out of service after the October Lion Air crash, to allow their engineers and engineering safety regulators to make necessary changes in the 'plane's design or to develop new training protocols.

In addition to potentially huge legal liability from both the Lion Air and Ethiopian Airlines crashes, Boeing also faces the commercial consequences of grounding some if not all 737 Max 8 'planes currently in service – temporarily? indefinitely? – and the loss, or at minimum delay, of all future sales of this aircraft model.

Over to Yves again, who in her November post cut to the crux:

And why haven't the planes been taken out of service? As one Wall Street Journal reader put it:

If this tragedy had happened on an aircraft of another manufacturer other than big Boeing, the fleet would already have been grounded by the FAA. The arrogance of engineers both at Airbus and Boeing, who refuse to give the pilots easy means to regain immediate and full authority over the plane (pitch and power) is just appalling.

Accident and incident records abound where the automation has been a major contributing factor or precursor. Knowing our friends at Boeing, it is highly probable that they will steer the investigation towards maintenance deficiencies as primary cause of the accident

In the wake of the Ethiopian Airlines crash, other countries have not waited for the FAA to act. China and Indonesia, as well as Ethiopian Airlines and Cayman Airways, have grounded flights of all Boeing 737 Max 8 aircraft, the Guardian reported in Ethiopian Airlines crash: Boeing faces safety questions over 737 Max 8 jets . The FT has called the Chinese and Indonesian actions an "unparalleled flight ban" (see China and Indonesia ground Boeing 737 Max 8 jets after latest crash ). India's air regulator has also issued new rules covering flights of the 737 Max aircraft, requiring pilots to have a minimum of 1,000 hours experience to fly these 'planes, according to a report in the Economic Times, DGCA issues additional safety instructions for flying B737 MAX planes.

Future of Boeing?

The commercial consequences of grounding the 737 Max in China alone are significant, according to this CNN account, Why grounding 737 MAX jets is a big deal for Boeing . The 737 Max is Boeing's most important plane; China is also the company's major market:

"A suspension in China is very significant, as this is a major market for Boeing," said Greg Waldron, Asia managing editor at aviation research firm FlightGlobal.

Boeing has predicted that China will soon become the world's first trillion-dollar market for jets. By 2037, Boeing estimates China will need 7,690 commercial jets to meet its travel demands.

Airbus (EADSF) and Commercial Aircraft Corporation of China, or Comac, are vying with Boeing for the vast and rapidly growing Chinese market.

Comac's first plane, designed to compete with the single-aisle Boeing 737 MAX and Airbus A320, made its first test flight in 2017. It is not yet ready for commercial service, but Boeing can't afford any missteps.

Boeing has made significant inroads in China with its 737 MAX family. A dozen Chinese airlines have ordered 180 of the planes, and 76 of them have been delivered, according to Boeing. About 85% of Boeing's unfilled Chinese airline orders are for 737 MAX planes.

The 737 has been Boeing's bestselling product for decades. The company's future depends on the success of the 737 MAX, the newest version of the jet. Boeing has 4,700 unfilled orders for 737s, representing 80% of Boeing's order backlog. Virtually all 737 orders are for MAX versions.

As of the time of posting, US airlines have yet to ground their 737 Max 8 fleets. American Airlines, Alaska Air, Southwest Airlines, and United Airlines have ordered a combined 548 of the new 737 jets, of which 65 have been delivered, according to CNN.

Legal Liability?

Prior to Sunday's Ethiopian Airlines crash, Boeing already faced considerable potential legal liability for the October Lion Air crash. Just last Thursday, the Hermann Law Group of personal injury lawyers filed suit against Boeing on behalf of the families of 17 Indonesian passengers who died in that crash.

The Families of Lion Air Crash File Lawsuit Against Boeing – News Release did not mince words:

"It's pretty asinine for them to put a system on an airplane and not tell the pilots who are operating the airplane, especially when it deals with flight controls," Captain Mike Michaelis, chairman of the safety committee for the Allied Pilots Association, told the Wall Street Journal.

The president of the pilots union at Southwest Airlines, Jon Weaks, said, "We're pissed that Boeing didn't tell the companies, and the pilots didn't get notice."

The aircraft company concealed the new system and minimized the differences between the MAX and other versions of the 737 to boost sales. On the Boeing website, the company claims that airlines can save "millions of dollars" by purchasing the new plane "because of its commonality" with previous versions of the plane.

"Years of experience representing hundreds of victims has revealed a common thread through most air disaster cases," said Charles Herrmann the principle of Herrmann Law. "Generating profit in a fiercely competitive market too often involves cutting safety measures. In this case, Boeing cut training and completely eliminated instructions and warnings on a new system. Pilots didn't even know it existed. I can't blame so many pilots for being mad as hell."

Additionally, the complaint alleges the United States Federal Aviation Administration is partially culpable for negligently certifying Boeing's Air Flight Manual without requiring adequate instruction and training on the new system. Canadian and Brazilian authorities did require additional training.

What's Next?

The consequences for Boeing could be serious and will depend on what the flight and voice data recorders reveal. I also am curious as to what additional flight training or instructions, if any, the Ethiopian Airlines pilots received, either before or after the Lion Air crash, whether from Boeing, an air safety regulator, or any other source.


el_tel , March 11, 2019 at 5:04 pm

Of course we shouldn't engage in speculation, but we will anyway 'cause we're human. If fly-by-wire and the ability of software to override pilots are indeed implicated in the 737 Max 8, then you can bet the Airbus cheerleaders on YouTube videos will engage in huge Schadenfreude.

I really shouldn't even look at comments to YouTube videos – it's bad for my blood pressure. But I occasionally dip into the swamp on ones in areas like airlines. Of course – as you'd expect – you get a large amount of "flag waving" between Europeans and Americans. But the level of hatred and suspiciously similar comments by the "if it ain't Boeing I ain't going" brigade struck me as in a whole new league long before the "SJW" troll wars regarding things like Captain Marvel etc of today.

The Air France Airbus disaster was jumped on – Boeing's traditional hydraulic links between the sticks for the two pilots ensuring they move in tandem; the supposed comments by Captain Sully that the Airbus software didn't allow him to hit the water at the optimal angle he wanted, causing the rear rupture in the fuselage – both showed the inferiority of fly-by-wire, until Boeing started using it too. (Sully has taken issue with the book making the above point and concludes fly-by-wire is a "mixed blessing".)

I'm going to try to steer clear of my YouTube channels on airlines. Hopefully NC will continue to provide the real evidence as it emerges as to what's been going on here.

Monty , March 11, 2019 at 7:14 pm

Re SJW troll wars.

It is really disheartening how an idea as reasonable as "a just society" has been so thoroughly discredited among a large swath of the population.

No wonder there is such a wide interest in primitive construction and technology on YouTube. This society is very sick and it is nice to pretend there is a way to opt out.

none , March 11, 2019 at 8:17 pm

The version I heard (today, on Reddit) was "if it's Boeing, I'm not going". Hadn't seen the opposite version to just now.

Octopii , March 12, 2019 at 5:19 pm

Nobody is going to provide real evidence but the NTSB.

albert , March 12, 2019 at 6:44 pm

Indeed. The NTSB usually works with local investigation teams (as well as a manufacturer's rep) if the manufacturer is located in the US, or if specifically requested by the local authorities. I'd like to see their report. I don't care what the FAA or Boeing says about it.
. .. . .. -- .

d , March 12, 2019 at 5:58 pm

Fly-by-wire has been around since the '90s; it's not new.

notabanker , March 11, 2019 at 6:37 pm

Contains a link to a Seattle Times report as a "comprehensive wrap":
Speaking before China's announcement, Cox, who previously served as the top safety official for the Air Line Pilots Association, said it's premature to think of grounding the 737 MAX fleet.

"We don't know anything yet. We don't have close to sufficient information to consider grounding the planes," he said. "That would create economic pressure on a number of the airlines that's unjustified at this point.

China has grounded them. The US? Must not create undue economic pressure on the airlines. Right there in black and white. Money over people.

Joey , March 11, 2019 at 11:13 pm

I just emailed Southwest about an upcoming flight, asking about my choices for refusing to board MAX 8/9 planes based on this "feature". I expect a pro forma policy recitation, but customer pressure could trump too-big-to-fail sweeping the dirt under the carpet. I hope.

Thuto , March 12, 2019 at 3:35 am

We got the "safety of our customers is our top priority and we are remaining vigilant and are in touch with Boeing and the Civil Aviation Authority on this matter but will not be grounding the aircraft model until further information on the crash becomes available" speech from a local airline here in South Africa. It didn't take half a day for customer pressure to effect a swift reversal of that blatant disregard for their "top priority"; the model is grounded, so yeah, customer muscle-flexing will do it.

Jessica , March 12, 2019 at 5:26 am

On PPRUNE.ORG (where a lot of pilots hang out), they reported that after the Lion Air crash, Southwest added an extra display (to indicate when the two angle of attack sensors were disagreeing with each other) that the folks on PPRUNE thought was an extremely good idea and effective.
Of course, if the Ethiopian crash was due to something different from the Lion Air crash, that extra display on the Southwest planes may not make any difference.
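
For illustration, such a disagree annunciation amounts to a one-line comparison. The 10-degree tolerance below is an assumed figure for the sketch, not Southwest's or Boeing's actual threshold.

```python
# Sketch of an AOA DISAGREE alert of the kind described above: flag when the
# two vanes differ by more than a tolerance. The tolerance is an assumption.

def aoa_disagree(aoa_left_deg, aoa_right_deg, tol_deg=10.0):
    """True if the two angle-of-attack vanes disagree beyond tolerance."""
    return abs(aoa_left_deg - aoa_right_deg) > tol_deg

# The Lion Air reporting mentioned a roughly 20-degree split between sides,
# which a display driven by this check would have flagged.
assert aoa_disagree(20.0, 0.0)
```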

JerryDenim , March 12, 2019 at 2:09 pm

"On PPRUNE.ORG (where a lot of pilots hang out)"

Take those comments with a large dose of salt. Not to say everyone commenting on PPRUNE and sites like PPRUNE are posers, but PPRUNE.org is where a lot of wanna-be pilots and guys that spend a lot of time in basements playing flight simulator games hang out. The "real pilots" on PPRUNE are more frequently of the aspiring airline pilot type that fly smaller, piston-powered planes.

Altandmain , March 11, 2019 at 5:31 pm

We will have to wait and see what the final investigation reveals. However this does not look good for Boeing at all.

The Maneuvering Characteristics Augmentation System (MCAS) was implicated in the Lion Air crash. There have been a lot of complaints about the system on many of the pilot forums, suggesting, at least anecdotally, that there are issues. It is highly suspected that MCAS is responsible for this crash too.

Keep in mind that Ethiopian Airlines is a pretty well-known and regarded airline. This is not a cut rate airline we are talking about.

At this point, all we can do is to wait for the investigation results.

d , March 12, 2019 at 6:01 pm

One other minor thing: you remember that shutdown? It seems that would have delayed any updates from Boeing. It seems that's one of the things the pilots pointed out while the shutdown was in progress.

WestcoastDeplorable , March 11, 2019 at 5:33 pm

What really is the icing on this cake is the fact that the new, larger engines on the "Max" changed the center of gravity of the plane and made it unstable. From what I've read on aviation blogs, this is highly unusual for a commercial passenger jet. Boeing then created the new "safety" feature which makes the plane fly nose-down to avoid a stall. But of course, garbage in, garbage out on sensors (remember AF447, which stalled right into the S. Atlantic?).
It's all politics anyway. If Boeing had been forthcoming about the "Max", it would have required additional pilot training to certify pilots to fly the airliner. They didn't, and now another 189 passengers are D.O.A.
I wouldn't fly on one and wouldn't let family do so either.

Carey , March 11, 2019 at 5:40 pm

If I have read correctly, the MCAS system (not known of by pilots until after the Lion Air crash) is reliant on a single Angle of Attack sensor, without redundancy (!). It's too early
to say if MCAS was an issue in the crashes, I guess, but this does not look good.

Jessica , March 12, 2019 at 5:42 am

If it was some other issue with the plane, that will be almost worse for Boeing. Two crash-causing flaws would require grounding all of the planes, suspending production, then doing some kind of severe testing or other to make sure that there isn't a third flaw waiting to show up.

vomkammer , March 12, 2019 at 3:19 pm

If MCAS relies on only one Angle of Attack (AoA) sensor, then it might have been an error in the system design and the safety assessment, for which Boeing may be liable.

It appears that a failure of the AoA can produce an unannunciated erroneous pitch trim:
a) If the pilots had proper training and awareness, this event would "only" increase their workload,
b) But for an unaware or untrained pilot, the event would impair their ability to fly and introduce excessive workload.

The difference is important, because according to standard civil aviation safety assessment (see for instance EASA AMC 25.1309 Ch. 7), case a) should be classified as a "Major" failure, whereas b) should be classified as "Hazardous". "Hazardous" failures are required to have a much lower probability, which means MCAS needs two AoA sensors.

In summary: a safe MCAS would need either a second AoA or pilot training. It seems that it had neither.
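
A back-of-envelope version of that argument, using commonly cited AMC 25.1309-style probability targets; the per-sensor failure rate below is an assumed illustrative number, not a certified figure.

```python
# Rough numbers behind the classification argument above. Commonly cited
# per-flight-hour probability targets: Major <= 1e-5, Hazardous <= 1e-7.
# The vane failure rate is an assumption for illustration only.

P_VANE_FAILURE_PER_FH = 1e-5       # assumed single AOA vane failure rate

single_vane = P_VANE_FAILURE_PER_FH        # MCAS fed by one vane
dual_vane = P_VANE_FAILURE_PER_FH ** 2     # two independent vanes both wrong

print(f"single vane: {single_vane:.0e} per FH  (misses a 1e-7 Hazardous target)")
print(f"dual vane:   {dual_vane:.0e} per FH  (well below it)")
```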

drumlin woodchuckles , March 12, 2019 at 1:01 am

What are the ways an ignorant lay air traveler can find out about whether a particular airline has these new-type Boeing 737 MAXes in its fleet? What are the ways an ignorant air traveler can find out which airlines do not have ANY of these airplanes in their fleet?

What are the ways an ignorant air traveler can find out ahead of time, when still planning herm's trip, which flights use a 737 MAX as against some other kind of plane?

The only way the flying public could possibly torture the airlines into grounding these planes until it is safe to de-ground them is a total all-encompassing "fearcott" against this airplane all around the world. Only if the airlines in the "go ahead and fly it" countries sell zero seats, without exception, on every single 737 MAX plane that flies, will the airlines themselves take them out of service till the issues are resolved.

Hence my asking how people who wish to save their own lives from future accidents can tell when and where they might be exposed to the risk of boarding a Boeing 737 MAX plane.

Carey , March 12, 2019 at 2:13 am

Should be in your flight info, if not, contact the airline. I'm not getting on a 737 MAX.

pau llauter , March 12, 2019 at 10:57 am

Look up the flight on Seatguru. Generally tells type of aircraft. Of course, airlines do change them, too.

Old Jake , March 12, 2019 at 2:57 pm

Stop flying. Your employer requires it? Tell 'em where to get off. There are alternatives. The alternatives are less polluting and have lower climate impact also. Yes, this is a hard pill to swallow. No, I don't travel for employment any more; I telecommute. I used to enjoy flying, but now I avoid it like the plague. Crapification.

Darius , March 12, 2019 at 5:09 pm

Additional training won't do. If they wanted larger engines, they needed a different plane. Changing to an unstable center of gravity and compensating for it with new software sounds like a joke except for the hundreds of victims. I'm not getting on that plane.

Joe Well , March 11, 2019 at 5:35 pm

Has there been any study of crapification as a broad social phenomenon? When I Google the word I only get links to NC and sites that reference NC. And yet, this seems like one of the guiding concepts to understand our present world (the crapification of UK media and civil service goes a long way towards understanding Brexit, for instance).

I mean, my first thought is, why would Boeing commit corporate self-harm for the sake of a single bullet in sales materials (requires no pilot retraining!). And the answer, of course, is crapification: the people calling the shots don't know what they're doing.

none , March 11, 2019 at 11:56 pm

"Market for lemons" maybe? Anyway the phenomenon is well known.

Alfred , March 12, 2019 at 1:01 am

Google Books finds the word "crapification" quoted (from a 2004 source) in a work of literary criticism published in 2008 (Literature, Science and a New Humanities, by J. Gottschall). From 2013 it finds the following in a book by Edward Keenan, Some Great Idea: "Policy-wise, it represented a shift in momentum, a slowing down of the childish, intentional crapification of the city." So there the word appears clearly in the sense understood by regular readers here (along with an admission that crapification can be intentional and not just inadvertent). To illustrate that sense, Google Books finds the word used in Misfit Toymakers, by Keith T. Jenkins (2014): "We had been to the restaurant and we had water to drink, because after the takeovers, all of the soda makers were brought to ruination by the total crapification of their product, by government management." But almost twenty years earlier the word "crapification" had occurred in a comic strip published in New York Magazine (29 January 1996, p. 100): "Instant crapification! It's the perfect metaphor for the mirror on the soul of America!"

The word has been used on television. On 5 January 2010 a sketch subtitled "Night of Terror – The Crapification of the American Pant-scape" ran on The Colbert Report per: https://en.wikipedia.org/wiki/List_of_The_Colbert_Report_episodes_(2010) .

Searching the internet, Google results do indeed show many instances of the word "crapification" on NC, or quoted elsewhere from NC posts. But the same results show it used on many blogs since ca. 2010. Here, at http://nyceducator.com/2018/09/the-crapification-factor.html , is a recent example that comments on the word's popularization: "I stole that word, "crapification," from my friend Michael Fiorillo, but I'm fairly certain he stole it from someone else. In any case, I think it applies to our new online attendance system." A comment here, https://angrybearblog.com/2017/09/open-thread-sept-26-2017.html , recognizes NC to have been a vector of the word's increasing usage.

Googling shows that there have been numerous instances of the verb "crapify" used in computer-programming contexts, from at least as early as 2006. Google Books finds the word "crapified" used in a novel, Sonic Butler, by James Greve (2004). The derivation, "de-crapify," is also attested. "Crapify" was suggested to Merriam-Webster in 2007 per: http://nws.merriam-webster.com/opendictionary/newword_display_alpha.php?letter=Cr&last=40 . At that time the suggested definition was, "To make situations/things bad." The verb was posted to Urban Dictionary in 2003: https://www.urbandictionary.com/define.php?term=crapify .

The earliest serious discussion I could quickly find of crapification as a phenomenon was from 2009 at https://www.cryptogon.com/?p=10611 . I have found only two attempts to elucidate the causes of crapification: http://malepatternboldness.blogspot.com/2017/03/my-jockey-journey-or-crapification-of.html (an essay on undershirts) and https://twilightstarsong.blogspot.com/2017/04/complaints.html (a comment on refrigerators). This essay deals with the mechanics of job crapification: http://asserttrue.blogspot.com/2015/10/how-job-crapification-works.html (relating it to de-skilling).

An apparent Americanism, "crapification" has recently been 'translated' into French: "Mon bled est en pleine urbanisation, comprends : en pleine emmerdisation" [somewhat literally: my hole-in-the-road town is in the midst of development, meaning: in the midst of crapification]: https://twitter.com/entre2passions/status/1085567796703096832 Interestingly, perhaps, a comprehensive search of amazon.com yields "No results for crapification."

Joe Well , March 12, 2019 at 12:27 pm

You deserve a medal! That's amazing research!

drumlin woodchuckles , March 12, 2019 at 1:08 am

This seems more like a specific business conspiracy than like general crapification. This isn't "they just don't make them like they used to". This is like Ford deliberately selling the Crash and Burn Pinto with its special explode-on-impact gas-tank feature.

Maybe some Trump-style insults should be crafted for this plane so they can get memed-up and travel faster than Boeing's ability to manage the story. Epithets like "the new Boeing crash-a-matic dive-liner, with nose-to-the-ground pilot-override autocrash built into every plane." It seems unfair, but life and safety should come before fairness, and that will only happen if a worldwide wave of fear MAKES it happen.

pretzelattack , March 12, 2019 at 2:17 am

yeah first thing i thought of was the ford pinto.

The Rev Kev , March 12, 2019 at 4:19 am

Now there is a car tailor-made for modern suicidal Jihadists. You wouldn't even have to load it up with explosives, just a full fuel tank:

https://www.youtube.com/watch?v=lgOxWPGsJNY

drumlin woodchuckles , March 12, 2019 at 3:27 pm

" Instant car bomb. Just add gas."

EoH , March 12, 2019 at 8:47 am

Good time to reread Yves' recent post, Is a Harvard MBA Bad For You?:

The underlying problem is increasingly mercenary values in society.

JerryDenim , March 12, 2019 at 2:49 pm

I think crapification is the end result of a self-serving belief in the unfailing goodness and superiority of Ivy faux-meritocracy and the promotion/exaltation of the do-nothing, know-nothing, corporate, revolving-door MBAs and psych-major HR types over people with many years of both company and industry experience who also have excellent professional track records. The latter group was the group in charge of major corporations and big decisions in the 'good old days'; now it's the former. These morally bankrupt people and their vapid, self-righteous culture of PR first, management science second, and what-the-hell-else-matters-anyway are the prime drivers of crapification. Read the bio of an old-school celebrated CEO like Gordon Bethune (Continental CEO with corporate experience at Boeing), who skipped college altogether and joined the Navy at 17, and ask yourself how many people like that are in corporate board rooms today. I'm not saying going back to a 'Good Ole Boys' Club' is the best model of corporate governance either, but at least people like Bethune didn't think they were too good to mix with their fellow employees, understood leadership, the consequences of bullshit, and what the 'The buck stops here' thing was really about. Corporate types today sadly believe their own propaganda, and when their fraudulent schemes, can-kicking, and head-in-the-sand strategies inevitably blow up in their faces, they accept no blame and fail upwards to another posh corporate job or a nice golden parachute. The wrong people are in charge almost everywhere these days, hence crapification. Bad incentives, zero white-collar crime enforcement, self-replicating board rooms, and group-think beget a toxic corporate culture, which equals crapification.

Jeff Zink , March 12, 2019 at 5:46 pm

Also try "built in obsolescence"

VietnamVet , March 11, 2019 at 5:40 pm

As the son of a deceased former Boeing aeronautical engineer, I find this tragic. It highlights the problems of financialization, neoliberalism, and lack of corporate responsibility pointed out daily here on NC. The crapification was signaled by the move of the headquarters from Seattle to Chicago and by spending billions to build a second 787 line in South Carolina to bust their unions. Boeing is now an unregulated multinational corporation superior to sovereign nations. However, if the 737 MAX crashes have the same cause, this will be hard to whitewash. The design failure of the windows on the de Havilland Comet killed the British passenger aircraft business. The EU will keep a discreet silence, since manufacturing major airline passenger planes is a duopoly with Airbus. However, China hasn't (due to the trade war with the USA), even though Boeing is building a new assembly line there. Boeing escaped any blame for the loss of two Malaysia Airlines 777s. This may be an existential crisis for American aviation. Like a President who denies calling Tim Cook "Tim Apple", or the soft coup ongoing in DC against him, what is really happening globally is not factually reported by corporate media.

Jerry B , March 11, 2019 at 6:28 pm

===Boeing is now an unregulated multinational corporation superior to sovereign nations===

Susan Strange 101.

Or more recently Quinn Slobodian's Globalists: The End of Empire and the Birth of Neoliberalism.

And the beat goes on.

Synoia , March 11, 2019 at 6:49 pm

The design failure of windows on the de Havilland Comet killed the British passenger aircraft business.

Yes, a misunderstanding of the effect of square windows and three-dimensional stress cracking.

Gary Gray , March 11, 2019 at 7:54 pm

Sorry, but 'sovereign' nations were always a scam. Nothing more than an excuse to build capital markets, which are the underpinning of capitalism. Capital markets are what control countries and have since the 1700s. Maybe you should blame the monarchies for selling out to the bankers in the late middle ages. Sovereign nations are just economic units for the bankers and the businesses they finance, nothing more. I guess they figured out after the Great Depression that they would throw a bunch of goodies in "Indo-Europeans'" faces in western Europe, make them decadent and jaded via debt expansion. This goes back to my point about the yellow vests: me me me me me. You reek of it. This stuff with Boeing is all profit based. It could have happened in 2000, 1960 or 1920. It could happen even under state control. Did you love Hitler's Voltswagon?

As for the soft coup ... lol, you mean Trump's soft coup for his allies in Russia and the Middle East, viva la Saudi King!!!!!? Posts like these represent the problem with this board. The materialist over the spiritualist. It's like people who still don't get that some of the biggest supporters of a "GND" are racialists, and being somebody who has long run the environmentalist rally game, they are hugely in the game. Yet Progressives seem completely blind to it. The media ignores them for con men like David Duke (whose ancestry is not clean, no it's not) and "Unite the Right" (or, as one friend on the environmental circuit told me, Unite the Yahweh apologists) as what's "white". There is a reason they do this.

You need to wake up and stop the self-gratification crap. The planet is dying due to mishandling: over-urbanization, overpopulation, the constant need for me over ecosystem. It can only last so long. That is why I like zombie movies; it's Gaia Theory in a nutshell. Good for you, Earth ... or Midgard, whichever you prefer.

Carey , March 11, 2019 at 8:05 pm

Your job seems to be to muddy the waters, and I'm sure we'll be seeing much more of the same; much more.

Thanks!

pebird , March 11, 2019 at 10:24 pm

Hitler had an electric car?

JerryDenim , March 12, 2019 at 3:05 pm

Hee-hee. I noticed that one too.

TimR , March 12, 2019 at 9:41 am

Interesting but I'm unclear on some of it.. GND supporters are racialist?

JerryDenim , March 12, 2019 at 3:02 pm

Spot-on comment, VietnamVet; a lot of chickens can be seen coming home to roost in this latest Boeing disaster. It's remarkable how, not many years ago, the government could regulate the aviation industry without fear of killing it, since there was more than one aerospace company. Not anymore! The scourge of monopsony/monopoly power rears its head and bites in unexpected places.

Ptb , March 11, 2019 at 5:56 pm

More detail on the "MCAS" system responsible for the previous Lion Air crash here (theaircurrent.com)

It says the bigger, repositioned engines, which give the new model its fuel efficiency, plus the wing-angle tweaks needed to fit the engines against the landing gear and ground clearance, change the amount of pitch trim the aircraft needs in turns to remain level.

The auto system was added to neutralize the pitch trim during turns, to make it handle like the old model.

There is another pitch trim control besides the main stick. To deactivate the auto system, this other trim control has to be used; the main controls do not deactivate it (perhaps to prevent it from being unintentionally deactivated, which would be equally bad). If the sensor driving the correction system gives a false reading and the pilot were unaware, there would be seesawing and panic.

Actually, if this all happened again I would be very surprised. Nobody flying a 737 could fail to know about it after the previous crash. Curious what they find.

Ptb , March 11, 2019 at 6:38 pm

Ok, typo fixes didn't register – gobbledygook.

EoH , March 12, 2019 at 8:38 am

While logical, if your last comment were correct, it should have prevented this most recent crash. It appears that the "seesawing and panic" continue.

I assume it has now gone beyond the cockpit, beyond the design and sales teams, and reached the Boeing board room. From there, it is likely to travel to the board rooms of every airline flying this aircraft or thinking of buying one, to their banks and creditors, and to those who buy or recommend their stock. But it may not reach the FAA for some time.

marku52 , March 12, 2019 at 2:47 pm

Full technical discussion of why this was needed at:

https://leehamnews.com/2018/11/14/boeings-automatic-trim-for-the-737-max-was-not-disclosed-to-the-pilots/

Ptb , March 12, 2019 at 5:32 pm

Excellent link, thanks!

Kimac , March 11, 2019 at 6:20 pm

As to what's next?

Think, Too Big To Fail.

Any number of ways will be found to put lipstick on this pig once we recognize the context.

allan , March 11, 2019 at 6:38 pm

"Canadian and Brazilian authorities did require additional training" from the quote at the bottom is not
something I've seen before. What did they know and when did they know it?

rd , March 11, 2019 at 8:31 pm

They probably just assumed that the changes in the plane from previous 737s were big enough to warrant treating it like a major change requiring training.

Both countries fly into remote areas with highly variable weather conditions and some rugged terrain.

dcrane , March 11, 2019 at 7:25 pm

Re: withholding information from the FAA

For what it's worth, the quoted section says that Boeing withheld info about the MCAS from "midlevel FAA officials", while Jerri-Lynn refers to the FAA as a whole.

This makes me wonder if top-level FAA people certified the system.

Carey , March 11, 2019 at 7:37 pm

See under "regulatory capture"

Corps run the show, regulators are window-dressing.

IMO, of course. Of course

allan , March 11, 2019 at 8:04 pm

It wasn't always this way. From 1979:

DC-10 Type Certificate Lifted [Aviation Week]

FAA action follows finding of new cracks in pylon aft bulkhead forward flange; crash investigation continues

Suspension of the McDonnell Douglas DC-10's type certificate last week followed a separate grounding order from a federal court as government investigators were narrowing the scope of their investigation of the American Airlines DC-10 crash May 25 in Chicago.

The American DC-10-10, registration No. N110AA, crashed shortly after takeoff from Chicago's O'Hare International Airport, killing 259 passengers, 13 crewmembers and three persons on the ground. The 275 fatalities make the crash the worst in U.S. history.

The controversies surrounding the grounding of the entire U.S. DC-10 fleet and, by extension, many of the DC-10s operated by foreign carriers, by Federal Aviation Administrator Langhorne Bond on the morning of June 6 continue to revolve around several issues.

Carey , March 11, 2019 at 8:39 pm

Yes, I remember back when the FAA would revoke a type certificate if a plane was a danger to public safety. It wasn't even that long ago. Now their concern is any threat to Boeing™. There's a name for that

Joey , March 11, 2019 at 11:22 pm

A 'worst' disaster in Chicago would still ground planes. Lucky for Boeing it's brown and browner.

Max Peck , March 11, 2019 at 7:30 pm

It's not correct to claim the MCAS was concealed. It's right in the January 2017 rev of the NG/MAX differences manual.

Carey , March 11, 2019 at 7:48 pm

Mmm. Why do the dudes and dudettes *who fly the things* say they knew nothing
about MCAS? Their training is quite rigorous.

Max Peck , March 11, 2019 at 10:00 pm

See a post below for link. I'd have provided it in my original post but was on a phone in an inconvenient place for editing.

Carey , March 12, 2019 at 1:51 am

'Boeing's automatic trim for the 737 MAX was not disclosed to the Pilots':

https://leehamnews.com/2018/11/14/boeings-automatic-trim-for-the-737-max-was-not-disclosed-to-the-pilots/

marku52 , March 12, 2019 at 2:39 pm

Leeham News is the best site for info on this. For those of you interested in the tech details, go to Bjorn's Corner, where he writes about aeronautic design issues.

I was somewhat horrified to find that modern aircraft flying at near-Mach speeds have a lot of somewhat pasted-on pilot assistances. All of them. None of them fly with nothing but good old stick-and-rudder. Not Airbus (which is actually fully fly-by-wire: all pilot inputs go through a computer) and not Boeing, which is somewhat less so.

This latest "solution came about becuse the larger engines (and nacelles) fitted on the Max increased lift ahead of the center of gravity in a pitchup situation, which was destabilizing. The MCAS uses inputs from air speed and angle of attack sensors to put a pitch down input to the horizonatal stablisizer.

A faulty AoA sensor led to Lion Air's MAX pushing the nose down against the pilots' efforts all the way into the sea.

This is the best backgrounder

https://leehamnews.com/2018/11/14/boeings-automatic-trim-for-the-737-max-was-not-disclosed-to-the-pilots/

The Rev Kev , March 11, 2019 at 7:48 pm

One guy said last night on TV that Boeing had eight years of back orders for this aircraft, so you had better believe that this crash will be studied furiously. Saw a picture of the crash site, and it looks like it augered in almost straight down. There seems to be a large hole, and the wreckage is not strewn over much of an area. I understand that they were digging out the cockpit, as it was underground. Strange, that.

Carey , March 11, 2019 at 7:55 pm

It's said that the Flight Data Recorders have been found, FWIW.

EoH , March 12, 2019 at 9:28 am

Suggestive of a high-speed, nose-first impact. Not the angle of attack a pilot would ordinarily choose.

Max Peck , March 11, 2019 at 9:57 pm

It's not true that Boeing hid the existence of the MCAS. They documented it in the January 2017 rev of the NG/MAX differences manual and probably earlier than that. One can argue whether the description was adequate, but the system was in no way hidden.

Carey , March 11, 2019 at 10:50 pm

Looks like, for now, we're stuck between your "in no way hidden", and numerous 737 pilots' claims on various online aviation boards that they knew nothing about MCAS. Lots of money involved, so very cloudy weather expected. For now I'll stick with the pilots.

Alex V , March 12, 2019 at 2:27 am

To the best of my understanding and reading on the subject, the system was well documented in the Boeing technical manuals, but not in the pilots' manuals, where it was only briefly mentioned, at best, and not by all airlines. I'm not an airline pilot, but from what I've read, airlines often write their own additional operators manuals for aircraft models they fly, so it was up to them to decide the depth of documentation. These are in theory sufficient to safely operate the plane, but do not detail every aircraft system exhaustively, as a modern aircraft is too complex to fully understand. Other technical manuals detail how the systems work, and how to maintain them, but a pilot is unlikely to read them as they are used by maintenance personnel or instructors. The problem with these cases (if investigations come to the same conclusions) is that insufficient information was included in the pilots manual explaining the MCAS, even though the information was communicated via other technical manuals.

vlade , March 12, 2019 at 11:50 am

This is correct.

A friend of mine is a commercial pilot who's just doing a 'training' exercise having moved airlines.

He's been flying the planes in question most of his life, but the airline is asking him to re-do it all according to their manuals and their rules. If the airline manual does not bring it up, then the pilots will not read it – few of them have time to go after the actual technical manuals and read those in addition to what the airline wants. [oh, and it does not matter that he has tens of thousands of hours on the airplane in question, if he does not do something in accordance with his new airline manual, he'd get kicked out, even if he was right and the airline manual wrong]

I believe (but would have to check with him) that some countries regulators do their own testing over and above the airlines, but again, it depends on what they put in.

Alex V , March 12, 2019 at 11:58 am

Good to hear my understanding was correct. My take on the whole situation is that Boeing was negligent in communicating the significance of the change, given human psychology and current pilot training. The reason was to enable easier aircraft sales. The purpose of the MCAS system is, however, quite legitimate: it enables a more fuel-efficient plane while compensating for a corner case of the flight envelope.

Max Peck , March 12, 2019 at 8:01 am

The link is to the actual manual. If that doesn't make you reconsider, nothing will. Maybe some pilots aren't expected to read the manuals, I don't know.

Furthermore, the post stated that Boeing failed to inform the FAA about the MCAS. Surely the FAA has time to read all of the manuals.

Darius , March 12, 2019 at 6:18 pm

Nobody reads instruction manuals. They're for reference. Boeing needed to yell at the pilots to be careful to read new pages 1,576 through 1,629 closely. They're a lulu.

Also, what's with screwing with the geometry of a stable plane so that it will fall out of the sky without constant adjustments by computer software? It's like having a car designed to explode but don't worry. We've loaded software to prevent that. Except when there's an error. But don't worry. We've included reboot instructions. It takes 15 minutes but it'll be OK. And you can do it with one hand and drive with the other. No thanks. I want the car not designed to explode.

The Rev Kev , March 11, 2019 at 10:06 pm

The FAA is already leaping to the defense of the Boeing 737 Max 8 even before they have a chance to open up the black boxes. Hope that nothing "happens" to those recordings.

https://www.bbc.com/news/world-africa-47533052

Milton , March 11, 2019 at 11:04 pm

I don't know; crapification, at least for me, refers to products, services, or infrastructure that have declined to the point of becoming a nuisance rather than the benefit they once were. This case with Boeing borders on criminal negligence.

pretzelattack , March 12, 2019 at 8:20 am

I came across a word that was new to me, "crapitalism"; it goes well with crapification.

TG , March 12, 2019 at 12:50 am

1. It's really kind of amazing that we can fly to the other side of the world in a few hours – a journey that in my grandfather's time would have taken months and been pretty unpleasant and risky – and we expect perfect safety.

2. Of course the best-selling jet will see these issues. It's the law of large numbers.

3. I am not a fan of Boeing's corporate management, but still, compared to Wall Street and defense contractors and big education etc., they still produce an actual, technically useful artifact that mostly works, and at levels of performance that in other fields would be considered superhuman.

4. Even for Boeing, one wonders when the rot will set in. Building commercial airliners is hard! So many technical details, nowhere to hide if you make even one mistake – so easy to just abandon the business entirely. Do what the (ex) US auto industry did: contract out to foreign manufacturers, just slap a "USA" label on it, and double down on marketing. Milk the cost-plus cash cow of the defense market. Or just financialize the entire thing, become too big to fail, and walk away with all the profits before the whole edifice crumbles. Greed is good, right?

marku52 , March 12, 2019 at 2:45 pm

"Of course the best-selling jet will see these issues. It's the law of large numbers."

Two crashes of a new model in very similar circumstances is very unusual. And the FAA admits they are requiring an FW upgrade sometime in April. Pilots need to be hyperaware of what this MCAS system is doing. And they currently aren't.

Prairie Bear , March 12, 2019 at 2:42 am

if it went into a stall, it would lower the nose suddenly to pick up airspeed and fly normally again.

A while before I read this post, I listened to a news clip that reported that the plane was observed "porpoising" after takeoff. I know only enough about planes and aviation to be a more or less competent passenger, but it does seem like that is something that might happen if the plane had such a feature and the pilot was not familiar with it and was trying to fight it. The link below is not the story I saw, I don't think, but another one I just found.

https://www.yahoo.com/gma/know-boeing-737-max-8-crashed-ethiopia-221411537.html

none , March 12, 2019 at 5:33 am

https://www.reuters.com/article/us-ethiopia-airplane-witnesses/ethiopian-plane-smoked-and-shuddered-before-deadly-plunge-idUSKBN1QS1LJ

Reuters reports people saw smoke and debris coming out of the plane before the crash.

Jessica , March 12, 2019 at 6:06 am

At PPRUNE.ORG, many of the commentators are skeptical of what witnesses of airplane crashes say they see, but more trusting of what they say they hear.
The folks at PPRUNE.ORG who looked at the record of the flight from FlightRadar24, which only covers part of the flight because FlightRadar24's coverage in that area is not so good and the terrain is hilly, see a plane flying fast in a straight line very unusually low.

EoH , March 12, 2019 at 8:16 am

The dodge of making important changes that affect aircraft handling but not disclosing them – so as to avoid mandatory pilot training, which would discourage airlines from buying the modified aircraft – is an obvious business-over-safety choice by an ethics- and safety-challenged corporation.

But why does even a company of that description, many of whose top managers, designers, and engineers live and breathe flight, allow its s/w engineers to prevent the pilots from overriding a supposed "safety" feature while actually flying the aircraft? Was it because it would have taken a little longer to write and test the additional s/w, or because completing the circle by creating a pilot override would have mandated disclosure and additional pilot training?

Capt. "Sully" Sullenberger and his passengers and crew would have ended up in pieces at the bottom of the Hudson if the s/w on his aircraft had prohibited out of the ordinary flight maneuvers that contradicted its programming.

Alan Carr , March 12, 2019 at 9:13 am

If you carefully review the overall airframe of the 737, it has hardly changed over the past 20 years or so (see the Boeing 737 specifications). I believe the real issue here is that the avionics upgrades over the years have changed things dramatically. More and more precision avionics are installed, with less and less pilot input and ultimately no control of the aircraft. Though Boeing will get the brunt of the lawsuits, the avionics company will be the real culprit. I believe the avionics on the Boeing 737 are made by Rockwell Collins, which, you guessed it, is owned by Boeing.

Max Peck , March 12, 2019 at 9:38 am

Rockwell Collins has never been owned by Boeing.

Also, to correct some upthread assertions, MCAS has an off switch.

WobblyTelomeres , March 12, 2019 at 10:02 am

United Technologies, UTX, I believe. If I knew how to short, I'd probably short this 'cause if they aren't partly liable, they'll still be hurt if Boeing has to slow (or, horror, halt) production.

Alan Carr , March 12, 2019 at 11:47 am

You are right, Max, I misspoke. Rockwell Collins is owned by United Technologies Corporation.

Darius , March 12, 2019 at 6:24 pm

Which astronaut are you? Heh.

EoH , March 12, 2019 at 9:40 am

Using routine risk management protocols, the American FAA should require continuing "data" on an aircraft for it to maintain its airworthiness certificate. Its current press materials on the Boeing 737 Max 8 suggest instead that it needs data in order to yank the certificate or to ground the aircraft pending review. Has any other commercial aircraft suffered two apparently similar catastrophic losses this close together, within two years of the aircraft's launch?

Synoia , March 12, 2019 at 11:37 am

I am raising an issue with "crapification" as a meme. Crapification is a symptom of a specific behaviour.

GREED.

Please could you reconsider your writing to include this very old, tremendously venal, and "worst" sin?

The US inventiveness in coining a new word, "crapification," implies that some error could be corrected. If it is a deliberate sin, it requires atonement and forgiveness, and a sacrifice of worldly assets, for any chance of forgiveness and redemption.

Alan Carr , March 12, 2019 at 11:51 am

Something else that will be interesting to this thread: Boeing doesn't seem to mind letting the Boeing 737 Max aircraft remain for sale on the open market.

vlade , March 12, 2019 at 11:55 am

the EU suspends MAX 8s too

Craig H. , March 12, 2019 at 2:29 pm

The moderators in reddit.com/r/aviation are fantastic.

They have corralled everything into one mega-thread which is worth review:

https://www.reddit.com/r/aviation/comments/azzp0r/ethiopian_airlines_et302_and_boeing_737_max_8/

allan , March 12, 2019 at 3:00 pm

Thanks. That's a great link with what seem to be some very knowledgeable comments.

John Beech , March 12, 2019 at 2:30 pm

Experienced private pilot here. Lots of commercial pilot friends. First, the EU suspending the MAX 8 is politics. Second, the FAA-mandated changes were already in the pipeline. Third, this won't stop the ignorant from staking out a position on this and speculating about it on the internet, of course. Fourth, I'd hop a flight in a MAX 8 without concern – especially with a US pilot on board. Why? In part because the Lion Air event a few months back led to pointed discussion about the thrust line of the MAX 8 vs. the rest of the 737 fleet and the way the plane has software to help during strong pitch-up events (the MAX 8 and 9 have really powerful engines).

Basically, pilots have been made keenly aware of the issue and trained in what to do. Another reason I'd hop a flight in one right now is that there have been more than 31,000 trouble-free flights of this new aircraft in the USA to date. My point is, if there were a systemic issue we'd already know about it. Note, the PIC in the recent crash had 8,000+ hours, but the FO had about 200 hours, and there is speculation he was flying. Speculation.

Anyway, US commercial fleet pilots are very well trained to deal with runaway trim or uncommanded flight excursions. How? Simple, by switching the breaker off. It's right near your fingers. Note, my airplane has an autopilot also. In the event the autopilot does something unexpected, just like the commercial pilot flying the MAX 8, I'm trained in what to do (the very same thing, switch the thing off).

Moreover, I speak from experience because I've had it happen twice in 15 years – once an issue with a servo causing the plane to slowly drift right wing low, and once a connection came loose, leaving the plane trimmed right wing low (coincidence). My reaction was about the same as that of an experienced typist automatically hitting backspace upon realizing they mistyped a word, i.e. not reflex but nearly so. In my case, it was to throw the breaker to power off the autopilot as I leveled the plane. No big deal.

Finally, as of yet there has been no analysis of the black boxes. I advise holding off on the speculation until there is. They've been found and we'll learn something soon. The yammering and near hysteria by non-pilots – especially in this thread – reminds me of the old saw about not knowing how smart or ignorant someone is until they open their mouth.

notabanker , March 12, 2019 at 5:29 pm

So let me get this straight.

While Boeing was designing a new 787, Airbus redesigned the A320. Boeing could not compete with it, so instead of redesigning the 737 properly, they put larger engines on it, further forward, which was never intended in the original design. So to compensate they use software with two sensors, not three, to automatically adjust the pitch to prevent a stall, and this is the only true way to prevent a stall. With two sensors it is mathematically impossible, when one is faulty, to know which one it is; with three, the odd one out could simply be voted down. But since you can kill the breaker and disable it if you have a bad sensor you can't possibly identify, everything is ok. And now that the pilots can disable a feature required for certification, we should all feel good about these brand-new planes that, for the first time in history, crashed within 5 months.

And the FAA, which hasn't had a Director in 14 months, knows better than the UK, Europe, China, Australia, Singapore, India, Indonesia, Africa and basically every other country in the world except Canada. And the reason every country in the world except Canada has grounded the fleet is political? Singapore put Silk Air out of business because of politics?

How many people need to be rammed into the ground at 500 mph from 8000 feet before yammering and hysteria are justified here? 400 obviously isn't enough.

VietnamVet , March 12, 2019 at 5:26 pm

Overnight since my first post above, the 737 Max 8 crash has become political. The black boxes haven't been officially read yet. Still, airlines and aviation authorities have grounded the airplane in Europe, India, China, Mexico, Brazil, Australia and S.E. Asia, in opposition to the FAA's "Continued Airworthiness Notification to the International Community" issued yesterday.

I was wrong. There will be no whitewash. I thought they would remain silent. My guess is this is a result of an abundance of caution plus greed (Europeans couldn't help gutting Airbus's competitor Boeing). This will not be discussed, but it is also a manifestation of Trump Derangement Syndrome (TDS). Since the President has started dissing Atlantic Alliance partners, extorting defense money, fighting trade wars, and calling 3rd world countries s***-holes, there is no sympathy for the collapsing hegemon. Boeing stock is paying the price. If the cause is the faulty design of the flight position sensors and fly-by-wire software control system, it will take a long while to design and get approval of a new, safe, redundant control system and refit the airplanes to fly again overseas. A real disaster for America's last manufacturing industry.

[Mar 13, 2019] Boeing might not survive the third crash

Too much automation and too complex a flight control computer endanger the lives of pilots and passengers...
Notable quotes:
"... When systems (like those used to fly giant aircraft) become too automatic while remaining essentially stupid or limited by the feedback systems, they endanger the airplane and passengers. These two "accidents" are painful warnings for air passengers and voters. ..."
"... This sort of problem is not new. Search the web for pitot/static port blockage, erroneous stall / overspeed indications. Pilots used to be trained to handle such emergencies before the desk-jockey suits decided computers always know best. ..."
"... @Sky Pilot, under normal circumstances, yes. but there are numerous reports that Boeing did not sufficiently test the MCAS with unreliable or incomplete signals from the sensors to even comply to its own quality regulations. ..."
"... Boeing did cut corners when designing the B737 MAX by just replacing the engines but not by designing a new wing which would have been required for the new engine. ..."
"... I accept that it should be easier for pilots to assume manual control of the aircraft in such situations but I wouldn't rush to condemn the programmers before we get all the facts. ..."
Mar 13, 2019 | www.nytimes.com

Shirley OK March 11

I want to know if Boeing 767s, as well as the new 737s, now have the Max 8 flight control computer installed, with pilots maybe not being trained to use it, or with it being uncontrollable.

A 3rd Boeing (not a passenger plane but a big 767 cargo plane flying a bunch of stuff for Amazon) crashed near Houston, where it was to land, on 2-23-19. The 2 pilots were killed. Apparently there was no call for help (at least none mentioned in the AP article about it I read).

'If' the new Max 8 system had been installed, had either Boeing or the owner of the cargo plane business been informed of problems with Max 8 equipment that had caused a crash and many deaths in a passenger plane (this would have been after the Indonesian crash)? Was that info given to the 2 pilots who died, if Max 8 equipment is also being used in some 767s? Did Boeing get the black box from that plane and, if so, what did they find out?

Those 2 pilots' lives matter also - particularly since the Indonesian 737 crash with Max 8 equipment had already happened. Boeing hasn't said anything (yet, that I've seen) about whether or not the Max 8 new configuration computer and the extra steps to get manual control is on other of their planes.

I want to know the cause of that 3rd Boeing plane crashing, and whether there have been crashes/deaths in others of Boeing's big cargo planes. What's the total of all Boeing crashes/fatalities in the last few months, and how many of those planes had Max 8 equipment?

Rufus SF March 11

Gentle readers: In the aftermath of the Lion Air crash, do you think it possible that all 737Max pilots have not received mandatory training review in how to quickly disconnect the MCAS system and fly the plane manually?

Do you think it possible that every 737Max pilot does not have a "disconnect review" as part of his personal checklist? Do you think it possible that at the first hint of pitch instability, the pilot does not first think of the MCAS system and whether to disable it?

Harold Orlando March 11

Compare the altitude fluctuations with those from Lion Air in the NYTimes' excellent coverage ( https://www.nytimes.com/interactive/2018/11/16/world/asia/lion-air-crash-cockpit.html ), and they don't really suggest to me a pilot struggling to maintain proper pitch. Maybe the graph isn't detailed enough, but it looks more like a major, single event rather than a number of smaller corrections. I could be wrong.

Reports of smoke and fire are interesting; there is nothing in the modification that (we assume) caused Lion Air's crash that would explain smoke and fire. So I would hesitate to zero in on the modification at this point. Smoke and fire coming from the luggage bay suggest a runaway Li battery someone put in their suitcase. This is a larger issue because that can happen on any aircraft, Boeing, Airbus, or other.

mrpisces Loui March 11

It is a shame that Boeing will not ground this aircraft, knowing they introduced the MCAS component to automate the stall recovery of the 737 MAX, which in my opinion is behind these accidents. Stall recovery has always been a step all pilots handled when the stick shaker and other audible warnings were activated to alert the pilots.

Now, Boeing invented MCAS as a "selling and marketing point" for a problem that didn't exist. MCAS kicks in when the aircraft is about to enter the stall phase and places the aircraft in a nose dive to regain speed. This only works when the air speed sensors are working properly. Now imagine when the air speed sensors malfunction and the plane is wrongly put into a nose dive.

The pilots are going to pull back on the stick to level the plane. The MCAS which is still getting incorrect air speed data is going to place the airplane back into a nose dive. The pilots are going to pull back on the stick to level the aircraft. This repeats itself till the airplane impacts the ground which is exactly what happened.

Add the fact that Boeing did not disclose the existence of the MCAS and its role to pilots. At this point only money is keeping the 737 MAX in the air. When Boeing talks about safety, they are not referring to passenger safety but profit safety.

Tony San Diego March 11

1. The procedure to allow a pilot to take complete control of the aircraft from auto-pilot mode should have been standard, e.g. pull back on the control column. It is not reasonable to expect a pilot to follow some checklist to determine and then turn off a misbehaving module, especially in emergency situations, even if that procedure is written in fine print in a manual. (The number of modules to disable may keep increasing if this is allowed.)

2. How are US airlines confident of the safety of the 737 MAX right now, when nothing much is known about the cause of the 2nd crash? What is known is that both the crashed aircraft were brand new, and we should be seeing news articles on how the plane's brand-new advanced technology saved the day from the pilot, not the other way round.

3. In the first crash, the plane's advanced technology could not even recognize that the flight path was abnormal and/or the airspeed readings were too erroneous, and mandate that the pilot take complete control immediately!

John✔️✔️Brews Tucson, AZ March 11

It's straightforward to design for standard operation under normal circumstances. But when bizarre operation occurs resulting in extreme circumstances a lot more comes into play. Not just more variables interacting more rapidly, testing system response times, but much happening quickly, testing pilot response times and experience. It is doubtful that the FAA can assess exactly what happened in these crashes. It is a result of a complex and rapid succession of man-machine-software-instrumentation interactions, and the number of permutations is huge. Boeing didn't imagine all of them, and didn't test all those it did think of.

The FAA is even less likely to do so. Boeing eventually will fix some of the identified problems, and make pilot intervention more effective. Maybe all that effort to make the new cockpit look as familiar as the old one will be scrapped? Pilot retraining will be done? Redundant sensors will be added? Additional instrumentation? Software re-written?

That'll increase costs, of course. Future deliveries will cost more. Looks likely there will be some downtime. Whether the fixes will cover sufficient eventualities, time will tell. Whether Boeing will be more scrupulous in future designs, less willing to cut corners without evaluating them? Will heads roll? Well, we'll see...

Ron SC March 11

Boeing has been in trouble technologically since its merger with McDonnell Douglas, which some industry analysts called a takeover, though it isn't clear who took over whom since MD got Boeing's name while Boeing took the MD logo and moved their headquarters from Seattle to Chicago.

In addition to problems with the 737 Max, Boeing is charging NASA considerably more than the small startup, SpaceX, for a capsule designed to ferry astronauts to the space station. Boeing's Starliner looks like an Apollo-era craft and is launched via a 1960's-like ATLAS booster.

Despite using what appears to be old technology, the Starliner is well behind schedule and over budget while the SpaceX capsule has already docked with the space station using state-of-art reusable rocket boosters at a much lower cost. It seems Boeing is in trouble, technologically.

BSmith San Francisco March 11

When you read that this model of the Boeing 737 Max was more fuel efficient, and view the horrifying graphs of the vertical jerking up and down of both aircraft (the passengers spent their last minutes in sheer terror), and learn both crashes occurred minutes after takeoff, you are 90% sure that the problem is with the design, or a design not compatible with pilot training. Pilots in both planes had received permission to return to the airports. The likely culprit, to a trained designer, is the control system for injecting the huge amounts of fuel necessary to lift the plane to cruising altitude. Pilots knew it was happening and did not know how to override the fuel injection system.

These two crashes foretell what will happen if airlines, purely in the name of saving money, eliminate human control of aircraft. There will be many more crashes.

These ultra-complicated machines, which defy gravity and lift thousands of pounds of dead weight into the stratosphere to reduce friction with air, are immensely complex and common. Thousands of flight paths cover the globe each day. Human pilots must ultimately be in charge, for our own peace of mind, and for their ability to deal with unimaginable, unforeseen hazards.

When systems (like those used to fly giant aircraft) become too automatic while remaining essentially stupid or limited by the feedback systems, they endanger the airplane and passengers. These two "accidents" are painful warnings for air passengers and voters.

Brez Spring Hill, TN March 11

1. Ground the Max 737.

2. Deactivate the ability of the automated system to override pilot inputs, which it apparently can do even with the autopilot disengaged.

3. Make sure that the autopilot disengage button on the yoke (pickle switch) disconnects ALL non-manual control inputs.

4. I do not know if this version of the 737 has direct-input ("rope start") gyroscope, airspeed and vertical speed indicators for emergencies such as failure of the electronic wonder-stuff. If not, install them. Train pilots to use them.

5. This will cost money, a lot of money, so we can expect more self-serving excuses until the FAA forces Boeing to do the right thing.

6. This sort of problem is not new. Search the web for pitot/static port blockage, erroneous stall / overspeed indications. Pilots used to be trained to handle such emergencies before the desk-jockey suits decided computers always know best.

Harper Arkansas March 11

I flew big jets for 34 years, mostly Boeings. Boeing added new logic to the trim system and was allowed not to make it known to pilots. However, it was in maintenance manuals. Not great, but these airplanes are now so complex there are many systems whose intimate details pilots don't know.

NOT IDEAL, BUT NOT OVERLY SIGNIFICANT. Boeing changed one of the ways to stop a runaway trim system by eliminating the control column trim brake, i.e. airplane nose goes up, push down (which is instinct) and it stops the trim from running out of control.

BIG DEAL, BOEING AND FAA, NOT TELLING PILOTS. Boeing produces checklists for almost any conceivable malfunction. We pilots are trained to accomplish the obvious, then go immediately to the checklist. Some items on the checklist are so important they are called "Memory Items" or "Red Box Items".

These would include things like, in an explosive depressurization, putting on your O2 mask, checking that the passenger masks have dropped automatically, and starting a descent.

Another has always been STAB TRIM SWITCHES ...... CUTOUT which is surrounded by a RED BOX.

For very good reasons these two guarded switches are very conveniently located on the pedestal right between the pilots.

So if the nose is pitching incorrectly, STAB TRIM SWITCHES ..... CUTOUT!!! Ask questions later, go to the checklist. THAT IS THE PILOTS' AND TRAINING DEPARTMENTS' RESPONSIBILITY. At this point the cause is not important.

David Rubien New York March 11

If these crashes turn out to result from a Boeing flaw, how can that company continue to stay in business? It should be put into receivership and its executives prosecuted. How many deaths are permissible?

Osama Portland OR March 11

The emphasis on software is misplaced. The software intervention is triggered by readings from something called an Angle of Attack sensor. This sensor is relatively new on airplanes. A delicate blade protrudes from the fuselage and is deflected by airflow. The direction of the airflow determines the reading. A false reading from this instrument is the "garbage in" input to the software that takes over the trim function and directs the nose of the airplane down. The software seems to be working fine. The AOA sensor? Not so much.

experience Michiigan March 11

The basic problem seems to be that the 737 Max 8 was not designed for the larger engines, and so there are flight characteristics that could be dangerous. To compensate for the flaw, computer software was used to control the aircraft when the situation was encountered. The software failed to prevent the situation from becoming a fatal crash.

The workaround may itself be the big mistake: not redesigning the aircraft properly for the larger engines in the first place. The aircraft may need to be modified at a cost that would not be realistic, and therefore be abandoned and an entirely new aircraft design implemented. That sounds very drastic, but the only other solution would be to go back to the original engines. The Boeing Company is at a crossroads that could be its demise if the wrong decision is made.

Sky Pilot NY March 11

It may be a training issue, in that the 737 Max has several systems changes from previous 737 models that may not be covered adequately in differences training, checklists, etc. In the Lion Air crash, a sticky angle-of-attack vane caused the auto-trim to force the nose down in order to prevent a stall. This is a worthwhile safety feature of the Max, but the crew was slow (or unable) to troubleshoot and isolate the problem. It need not have caused a crash. I suspect the same thing happened with Ethiopian Airlines. The circumstances are temptingly similar.

Thomas Singapore March 11

@Sky Pilot, under normal circumstances, yes. But there are numerous reports that Boeing did not sufficiently test the MCAS with unreliable or incomplete signals from the sensors, to the point of not even complying with its own quality regulations. And that is just one of the many quality issues with the B737 MAX that have been in the news for a long time and have been of concern to some of the operators, while at the same time being covered up by the FAA.

Just look at the difference in training requirements between the FAA and the Brazilian aviation authority.

Brazilian pilots need to fully understand the MCAS and how to handle it in emergency situations, while the FAA does not even require pilots to know about it.

Thomas Singapore March 11

This is yet another beautiful example of the difference in approach between Europeans and US Americans. While Europeans usually test their products thoroughly before they deliver them, in order to avoid any potential failures in their customers' hands, the US approach is different: it is "make it work somehow and fix the problems when the client has them".

Which is what happened here as well. Boeing did cut corners when designing the B737 MAX by just replacing the engines but not by designing a new wing which would have been required for the new engine.

So the aircraft became unstable to fly in low-speed and tight turns, which required a fix by implementing the MCAS, which was then kept out of recertification procedures for clients for reasons of competitive sales arguments. And of course, the FAA played along and provided cover for this corner-cutting, as this was a product of a US company.

Then the proverbial brown stuff hit the fan, not once but twice. So Boeing sent its "thoughts and prayers" and started to hope for the storm to blow over and for a fix that would not be very expensive and would not eat the shareholder value away.

Sorry, but that is not the way to design and maintain aircraft. If you do it, do it right the first time, not after more than 300 people have died in accidents. There is a reason why China copied the Airbus A320 and not the Boeing B737 when building its COMAC C919: the Airbus is not a cheap fix still being tested on customers.

Rafael USA March 11

@Thomas And how do you know that Boeing does not test the aircraft before delivery? It is a requirement by the FAA for every complete product, system, part and sub-part to be tested before delivery. However, it seems Boeing has not gotten to the root of the problem (or maybe they do not know the real issue).

As for the design, are you an engineer who can say whether the design and the use of new engines without a complete re-design is wrong? Have you seen the design drawings of the airplane? I work in an industry whose products are used for testing different parts of aircraft, and Boeing is one of our customers.

Our products are used during manufacturing and maintenance of airplanes. My guess is that Boeing has no idea what is going on. Your biased opinion against any US product is evident. There are regulations in the USA (and not in some Asian countries) that companies have to follow. This is not a case of an untested product; it is a case of an unknown problem, and Boeing is really in the dark about what is going on...

Sam Europe March 11

Boeing and regulators continue to exhibit criminal behaviour in this case. Ethical responsibility dictates that when the first brand-new MAX 8 fell, potentially due to issues with its design, the fleet should have been grounded. Instead, money was the priority, and unfortunately still is. They are even now flying. Disgraceful and criminal behaviour.

Imperato NYC March 11

@Sam no...too soon to come anywhere near that conclusion.

YW New York, NY March 11

A terrible tragedy for Ethiopia and all of the families affected by this disaster. The fact that two 737 Max jets have crashed in one year is indeed suspicious, especially as it has long been safer to travel in a Boeing plane than a car or school bus. That said, it is way too early to speculate on the causes of the two crashes being identical. Eyewitness accounts of debris coming off the plane in mid-air, as has been widely reported, would not seem to square with the idea that software is again at fault. Let's hope this puzzle can be solved quickly.

Wayne Brooklyn, New York March 11

@Singh the difference is that consumer electronic products usually have a smaller number of components and less wiring compared to commercial aircraft, with miles of wiring and a multitude of sensors and thousands of components. From what I know, they usually issue a preliminary report in a short time, but the detailed report that takes the full analysis into account will take over a year to be written.

John A San Diego March 11

The engineers and management at Boeing need a crash course in ethics. After the crash in Indonesia, Boeing was trying to pass the blame rather than admit responsibility. The planes should all have been grounded then. Now the chickens have come home to roost. Boeing is in serious trouble, and it will take a long time to recover its reputation. Large multinationals never learn.

Imperato NYC March 11

@John A the previous pilot flying the Lion jet faced the same problem but dealt with it successfully. The pilot on the ill fated flight was less experienced and unfortunately failed.

BSmith San Francisco March 11

@Imperato Solving a repeat problem on an airplane type must not solely depend upon a pilot undertaking an emergency response! That is nonsense even to a non-pilot! This implies that Boeing allows a plane to keep flying which it knows has a fatal flaw! Shouldn't it be grounding all these planes until it identifies and solves the same problem?

Jimi DC March 11

NYT recently did an excellent job explaining how pilots were kept in the dark, by Boeing, during the software update for the 737 Max: https://www.nytimes.com/2019/02/03/world/asia/lion-air-plane-crash-pilots.html

Steve Charlotte, NC March 11

Something is wrong with those two graphs of altitude and vertical speed. For example, both are flat at the end, even though the vertical speed graph indicates that the plane was climbing rapidly. So what is the source of those numbers? Is it ground-based radar, or telemetry from onboard instruments? If the latter, it might be a clue to the problem.

Imperato NYC March 11

@Steve Addis Ababa is almost at 8000ft.

George North Carolina March 11

I wonder if, somewhere, there is a report from some engineers saying that the system, pushed by administrative types to get the plane on the market quickly, will result in serious problems down the line.

Rebecca Michigan March 11

If we don't know why the first two 737 Max Jets crashed, then we don't know how long it will be before another one has a catastrophic failure. All the planes need to be grounded until the problem can be duplicated and eliminated.

Shirley OK March 11

@Rebecca And if it is something about the plane itself - and maybe an interaction with the new software - then someone has to be ready to volunteer to die to replicate what's happened.....

Rebecca Michigan March 12

@Shirley Heavens no. When investigating failures, duplicating the problem helps develop the solution. If you can't recreate the problem, then there is nothing to solve. Duplicating the problem generally is done through analysis and simulations, not with actual planes and passengers.

Sisifo Carrboro, NC March 11

Computer geeks can be deadly. This is clearly a software problem. The more software goes into a plane, the more likely it is for a software failure to bring down a plane. And computer geeks are always happy to try "new things" not caring what the effects are in the real world. My PC has a feature that controls what gets typed depending on the speed and repetitiveness of what I type. The darn thing is a constant source of annoyance as I sit at my desk, and there is absolutely no way to neutralize it because a computer geek so decided. Up in an airliner cockpit, this same software idiocy is killing people like flies.

Pooja MA March 11

@Sisifo Software that goes into critical systems like aircraft has a lot more constraints. Comparing it to the user interface on your PC doesn't make any sense. It's insulting to assume programmers are happy to "try new things" at the expense of lives. If you'd read about the Lion Air crash carefully, you'd remember that there were faulty sensors involved. The software was doing what it was designed to do, but the input it was getting was incorrect. I accept that it should be easier for pilots to assume manual control of the aircraft in such situations, but I wouldn't rush to condemn the programmers before we get all the facts.

BSmith San Francisco March 11

@Pooja Mistakes happen. If humans on board can't respond to terrible situations then there is something wrong with the aircraft and its computer systems. By definition.

Patriot NJ March 11

Airbus had its own experiences with pilot "mode confusion" in the 1990's with at least 3 fatal crashes in the A320, but was able to control the media narrative until they resolved the automation issues. Look up Air Inter 148 in Wikipedia to learn the similarities.

Opinioned! NYC -- currently wintering in the Pacific March 11

"Commands issued by the plane's flight control computer that bypasses the pilots." What could possibly go wrong? Now let's see whether Boeing's spin doctors can sell this as a feature, not a bug.

Chris Hartnett Minneapolis March 11

It is telling that the Chinese government grounded their fleet of 737 Max 8 aircraft before the US government. The world truly has turned upside down when it potentially is safer to fly in China than the US. Oh, the times we live in. Chris Hartnett Datchet, UK (formerly Minneapolis)

Hollis Barcelona March 11

As a passenger who likes his captains with a head full of white hair: even if the plane is nosediving due to instrument failure, does not every pilot who buckles a seat belt worldwide know how to switch off automatic flight controls and fly the airplane manually?

Even if this were 1000% Boeing's fault pilots should be able to override electronics and fly the plane safely back to the airport. I'm sure it's not that black and white in the air and I know it's speculation at this point but can any pilots add perspective regarding human responsibility?

Karl Rollings Sydney, Australia March 11

@Hollis I'm not a pilot nor an expert, but my understanding is that planes these days are "fly by wire", meaning the control surfaces are operated electronically, with no mechanical connection between the pilot's stick and the wings. So if the computer goes down, the ability to control the plane goes with it.

William Philadelphia March 11

@Hollis The NYT's excellent reporting on the Lion Air crash indicated that in nearly all other commercial aircraft, manual control of the pilot's yoke would be sufficient to override the malfunctioning system (which was controlling the tail wings in response to erroneous sensor data). Your white haired captain's years of training would have ingrained that impulse.

Unfortunately, on the Max 8 that would not sufficiently override the tail wings until the pilots flicked a switch near the bottom of the yoke. It's unclear whether individual airlines made pilots aware of this. That procedure existed in older planes but may not have been standard practice because the yoke WOULD sufficiently override the tail wings. Boeing's position has been that had pilots followed the procedure, a crash would not have occurred.

Nat Netherlands March 11

@Hollis No, that is the entire crux of this problem; switching from auto-pilot to manual does NOT solve it. Hence the danger of this whole system.

This new Boeing 737 Max series has the engines placed a bit further away than before, and I don't know why they did this, but the result is that there can be some imbalance in the air, which they then tried to correct with this strange auto-pilot technical adjustment.

Problem is that it stalls the plane (by pushing its nose down and even flipping out small wings sometimes) even when it shouldn't, and even when they switch to manual this system OVERRULES the pilot and switches back to auto-pilot, continuing to try to 'stabilize' (nose dive) the plane. That's what makes it so dangerous.

It was designed to keep the plane stable but basically turned out to function more or less like a glitch once you are taking off and need to ascend. I don't know why it only happens now and then, as this plane had made many other take-offs prior, but when it hits, it can be deadly. So far Boeing's 'solution' has been sparsely sending out a HUGE manual for pilots on how to fight with this computer problem.

Which is complicated to follow in a situation of stress, with the plane's computer constantly pushing its nose down. The Max's mechanism is wrong, and instead of it being corrected properly, pilots need special training. Or a new technical update may help... which has been delayed and still hasn't been provided.

Mark Lebow Milwaukee, WI March 11

Is it the inability of the two airlines to maintain one of the plane's fly-by-wire systems that is at fault, not the plane itself? Or are both crashes due to pilot error, not knowing how to operate the system and then overreacting when it engages? Is the aircraft merely too advanced for its own good? None of these questions seems to have been answered yet.

Shane Marin County, CA March 11 Times Pick

This is such a devastating thing for Ethiopian Airlines, which has been doing critical work in connecting Africa internally and to the world at large. This is devastating for the nation of Ethiopia and for all the family members of those killed. May the memory of every passenger be a blessing. We should all hope a thorough investigation provides answers to why this make and model of airplane keeps crashing, so no other people have to go through this horror again.

Mal T KS March 11

A possible small piece of a big puzzle: Bishoftu is a city of 170,000 that is home to the main Ethiopian air force base, which has a long runway. Perhaps the pilot of Flight 302 was seeking to land there rather than returning to Bole Airport in Addis Ababa, a much larger and more densely populated city than Bishoftu. The pilot apparently requested return to Bole, but may have sought the Bishoftu runway when he experienced further control problems. Detailed analysis of radar data, conversations between pilot and control tower, flight path, and other flight-related information will be needed to establish the cause(s) of this tragedy.

Nan Socolow West Palm Beach, FL March 11

The business of building and selling airplanes is brutally competitive. Malfunctions in the systems of any kind on jet airplanes ("workhorses" for moving vast quantities of people around the earth) lead to disaster and loss of life. Boeing's much ballyhooed and vaunted MAX 8 737 jet planes must be grounded until whatever computer glitches brought down Ethiopian Air and LION Air planes -- with hundreds of passenger deaths -- are explained and fixed.

In 1946, Arthur Miller's play "All My Sons" brought to life the guilt of the airplane industry, whose defective parts led to the deaths of WWII pilots. Arthur Miller was brought before the House Un-American Activities Committee because of his criticism of the American Dream. His other seminal American play, "Death of a Salesman", was about an everyman to whom attention must be paid. Attention must be paid to our aircraft industry. The American dream must be repaired.

Rachel Brooklyn, NY March 11

This story makes me very afraid of driverless cars.

Chuck W. Seattle, WA March 11

Meanwhile, human drivers killed 40,000 and injured 4.5 million people in 2018... For comparison, 58,200 American troops died in the entire Vietnam war. Computers do not fall asleep, get drunk, drive angry, or get distracted. As far as I am concerned, we cannot get unreliable humans out from behind the wheel fast enough.

jcgrim Knoxville, TN March 11

@Chuck W. Humans write the algorithms of driverless cars. Algorithms are not 100% fail-safe, particularly since humans can't seem to write snap judgements or quick inferences into an algorithm. An algorithm can make driverless cars safe in predictable situations, but that doesn't mean driverless cars will work in unpredictable events. Also, I don't trust the hype from Uber or the tech industry. https://www.nytimes.com/2017/02/24/technology/anthony-levandowski-waymo-uber-google-lawsuit.html

John NYC March 11

The irony here seems to be that in attempting to make the aircraft as safe as possible (with systems updates and such), Boeing may very well have made its product less safe. Since the crashes, to date, have been limited to the one product, that product should be grounded until a viable determination has been made. John ~ American Net'Zen

cosmos Washington March 11

Knowing quite a few Boeing employees and retirees, people who have shared numerous stories of concerns about Boeing operations, I personally avoid flying. As for the assertion that "the business of building and selling jets is brutally competitive": it is monopolistic competition, as there are only two players. That means consumers (in this case airlines) do not end up with the best and widest array of airplanes. The more monopolistic a market, the more it needs to be regulated in the public interest, yet I seriously doubt the FAA or any governmental agency has peeked into all the cost-cutting measures Boeing has implemented in recent years.

drdeanster tinseltown March 11

@cosmos Patently ridiculous. Your odds are greater of dying from a lightning strike, or in a car accident. Or even from food poisoning. Do you avoid driving? Eating? Something about these major disasters makes people itch to abandon all sense of probability and statistics.

Bob Milan March 11

When the past year was the deadliest one in decades, and when two disasters within that year involved the same plane, how can anyone not draw the inference that there is something wrong with the plane? In statistical studies of a pattern, this is a very, very strong basis for logical reasoning that something is wrong with the plane. When the number involves human lives, we must take the possibility of design flaws very seriously. The MAX planes should all be grounded for now. Period.

mak pakistan March 11

@Bob couldn't agree more. However, the basic design and engineering of the 737 have proven dependable over the past ~6 decades. Not saying that there haven't been accidents, but these probably lie well within the industry / type averages. The problems seem to have arisen with the introduction of systems which have purportedly been introduced to take a part of the workload off the pilots and pass it onto a central computerised system.

Maybe the 'automated anti-stalling' programme installed in the 737 Max, due to some erroneous inputs from the sensors, provides inaccurate data to the flight management controls, leading to stalling of the aircraft. It seems that the manufacturer did not provide sufficient technical data about the upgraded software, and, in case of malfunction, the corrective procedures to be followed to mitigate such disasters, before delivery of the planes to customers.

The procedure for the pilot to take full control of the aircraft by disengaging the central computer should be simple and fast to execute. Please, we don't want Tesla driverless vehicles high up in the sky!

James Conner Northwestern Montana March 11

All we know at the moment is that a 737 Max crashed in Africa a few minutes after taking off from a high elevation airport. Some see similarities with the crash of Lion Air's 737 Max last fall -- but drawing a line between the only two dots that exist does not begin to present a useful picture of the situation.

Human nature seeks an explanation for an event, and may lead some to make assumptions that are without merit in order to provide closure. That tendency is why following a dramatic event, when facts are few, and the few that exist may be misleading, there is so much cocksure speculation masquerading as solid, reasoned, analysis. At this point, it's best to keep an open mind and resist connecting dots.

Peter Sweden March 11

@James Conner Two deadly crashes so soon after the introduction of a new airplane have no precedent in recent aviation history. And the last time it happened (with the Comet), it was due to a faulty aircraft design. There is, of course, some chance that there is no connection between the two accidents, but if there is, the consequences are huge. Especially because the two events happened in very similar fashion (right after takeoff, with wild altitude changes), so there are more similarities than just the type of the plane. So there is literally no reason to keep this model in the air until the investigation is concluded. Oh well, there is: money. Over human lives.

svenbi NY March 11

It might be a wrong analogy, but if Toyota/Lexus recalled over 1.5 million vehicles due to at least 20 fatalities in relation to potentially faulty airbags, Boeing should, after over 300 deaths in just about 6 months, pull their product off the market voluntarily until it is sorted out once and for all.

This tragic situation recalls the early days of the de Havilland Comet, operated by BOAC, which kept plunging from the skies within its first years of operation until the fault was found to be in the rectangular windows, which did not withstand the pressurization at jet altitudes; the subsequent cracks in the body ripped the planes apart in midflight.

Thore Eilertsen Oslo March 11

A third crash may have the potential to take the aircraft manufacturer out of business; it is therefore unbelievable that the reasons for the Lion Air crash haven't been properly established yet. With more than 100 Boeing 737 Max already grounded, I would expect crash investigations now to be severely fast-tracked.

And the entire fleet should be grounded on the principle of "better safe than sorry". But then again, that would cost Boeing money, suggesting that the company's assessment of the risks involved favours continued operations above the absolute safety of passengers.

Londoner London March 11

@Thore Eilertsen This is also not a case for a secretive and extended crash investigation process. As soon as the cockpit voice recording is extracted - which might be later today - it should be made public. We also need to hear the communications between the controllers and the aircraft and to know about the position regarding the special training the pilots received after the Lion Air crash.

Trevor Canada March 11

@Thore Eilertsen I would imagine that Boeing will be the first to propose grounding these planes if they believe with a high degree of probability that it's their issue. They have the most to lose. Let logic and patience prevail.

Marvin McConoughey oregon March 11

It is very clear, even in these early moments, that aircraft makers need far more comprehensive information on everything pertinent that is going on in cockpits when pilots encounter problems. That information should be continually transmitted to ground facilities in real time to permit possible ground technical support.

[Mar 11, 2019] The university professors, who teach but do not learn: neoliberal shill DeLong tries to prolong the life of neoliberalism in the USA

Highly recommended!
DeLong is more dangerous than Malkin... He poisons students with neoliberalism more effectively.
Mar 11, 2019 | www.nakedcapitalism.com

Kurtismayfield , March 10, 2019 at 10:52 am

Re:Wall Street Democrats

They know, however, that they've been conned, played, and they're absolute fools in the game.

Thank you, Mr. Black, for the laugh this morning. They know exactly what they have been doing. Whether it was deregulating so that hedge funds and vulture capitalism could thrive, or making sure us peons cannot discharge debts, or making everything about financialization, this was all done on purpose, without care for "winning the political game". Politics is economics, and the Wall Street Democrats have been winning.

notabanker , March 10, 2019 at 12:26 pm

For sure. I'm quite concerned at the behavior of the DNC leadership and pundits. They are doubling down on blatant corporatist agendas. They are acting like they have this in the bag when objective evidence says they do not and are in trouble. Assuming they are out of touch is naive to me. I would assume the opposite, they know a whole lot more than what they are letting on.

urblintz , March 10, 2019 at 12:49 pm

I think the notion holds that the DNC and the Democrats' ruling class would rather lose to a like-minded Republican corporatist than win with someone who stands for genuine progressive values offering "concrete material benefits." I held my nose and read comments at the Kos straw polls (where Sanders consistently wins by a large margin), and it's clear to me that the Clintonistas will do everything in their power to derail Bernie.

polecat , March 10, 2019 at 1:00 pm

"It's the Externalities, stupid economists !" *should be the new rallying cry ..

rd , March 10, 2019 at 3:26 pm

Keynes' "animal spirits" and the "tragedy of the commons" (Lloyd, 1833 and Hardin, 1968) both implied that economics was messier than Samuelson and Friedman would have us believe because there are actual people with different short- and long-term interests.

The behavioral folks (Kahneman, Tversky, Thaler, etc.) have all shown that people are even messier than we would have thought. So most macro-economic work over the past half-century has been largely BS, justifying trickle-down economics, deregulation, etc.

There needs to be some inequality, as that provides incentives via capitalism, but unfettered it turns into France 1789 or the Great Depression. It is no coincidence that the major experiment in this in the late 90s and early 2000s required massive government intervention to keep the ship from sinking less than a decade after the great unregulated creative forces were unleashed.

MMT is likely to be similar, where productive uses of deficits can be beneficial, but if the money is wasted on stupid stuff like unnecessary wars, then the loss of credibility means that the fiat currency won't be quite as fiat anymore. Britain was unbelievably economically powerful in the late 1800s but in half a century went to being an economic afterthought, hamstrung by deficits after two major wars and a depression.

So it is good that people like Brad DeLong are coming to understand that the pretty economic theories hold some truths but are utter BS (and dangerous) when extrapolated without accounting for how people and societies actually behave.

Chris Cosmos , March 10, 2019 at 6:43 pm

I never understood the incentive to make more money -- that only works if money = true value, and that is the implication of living in a capitalist society (not economy): everything then becomes a commodity, and alienation results, with all the depression, fear, and anxiety that I see around me. Whereas human happiness actually comes from helping others and finding meaning in life, not money or dominating others. That's what social science seems to be telling us.

Oregoncharles , March 10, 2019 at 2:46 pm

Quoting DeLong:

" He says we are discredited. Our policies have failed. And they've failed because we've been conned by the Republicans."

That's welcome, but it's still making excuses. Neoliberal policies have failed because the economics were wrong, not because "we've been conned by the Republicans." Furthermore, this may be important: if it isn't acknowledged, those policies are quite likely to come sneaking back, especially if Democrats are more in the ascendant, as they will be, given the seesaw built into the two-party system.

The Rev Kev , March 10, 2019 at 7:33 pm

Might be right there. Groups like the neocons were originally attached to the left side of politics but, when the winds changed, detached themselves and went over to the Republican right. The winds are changing again, so those who want power may be going over to what is called the left now to keep their grip on power. But what you say is quite true. It is not really the policies that failed but the economics themselves that were wrong, and which, in an honest debate, do not make sense either.

marku52 , March 10, 2019 at 3:39 pm

"And they've failed because we've been conned by the Republicans.""

Not at all. What about the "free trade" hokum that DeLong and his pal Krugman have been peddling since forever? History and every empirical test in the modern era show that it fails in developing countries and only exacerbates inequality in richer ones.

That's just a failed policy.

I'm still waiting for an apology for all those years that those two insulted anyone who questioned their dogma as just "too ignorant to understand."

Glen , March 10, 2019 at 4:47 pm

Thank you!

He created FAILED policies. He pushed policies which have harmed America, harmed Americans, and destroyed the American dream.

Kevin Carhart , March 10, 2019 at 4:29 pm

It's intriguing, but two other voices come to mind. One is Never Let a Serious Crisis Go To Waste by Mirowski and the other is Generation Like by Doug Rushkoff.

Neoliberalism is partially entrepreneurial self-conceptions which took a long time to promote. Rushkoff's Frontline shows the Youtube culture. There is a girl with a "leaderboard" on the wall of her suburban room, keeping track of her metrics.

There's a devastating VPRO Backlight film on the same topic. Internet-platform neoliberalism does not have much to do with the GOP.

It's going to be an odd hybrid at best – you could have deep-red communism but enacted for and by people whose self-conception is influenced by decades of Becker and Hayek. One place this question leads is to ask: what's the relationship between this set of ideas and material-conditions-centric philosophies? If new policies pass that create a different possibility materially, will the vise grip of the entrepreneurial self loosen?

Partially yeah, maybe, a Job Guarantee if it passes and actually works, would be an anti-neoliberal approach to jobs, which might partially loosen the regime of neoliberal advice for job candidates delivered with a smug attitude that There Is No Alternative. (Described by Gershon). We take it seriously because of a sense of dread that it might actually be powerful enough to lock us out if we don't, and an uncertainty of whether it is or not.

There has been deep damage which is now a very broad and resilient base. It is one of the prongs of why 2008 did not have the kind of discrediting effect that 1929 did. At least that's what I took away from _Never Let_.

Brad DeLong handing the baton might mean something but it is not going to ameliorate the sense-of-life that young people get from managing their channels and metrics.

Take the new 1099 platforms as another focal point. Suppose there were political measures that splice in on the platforms and take the edge off materially, such as underwritten healthcare not tied to your job. The platforms still use star ratings, make star ratings seem normal, and continually push a self-conception as a small business. If you have overt DSA plus covert Becker, it is, again, a strange hybrid.

Jeremy Grimm , March 10, 2019 at 5:13 pm

Your comment is very insightful. Neoliberalism embeds its mindset into the very fabric of our culture and self-concepts. It strangely twists many of our core myths and beliefs.

Raulb , March 10, 2019 at 6:36 pm

This is nothing but a Trojan horse to 'co-opt' and 'subvert'. Neoliberals sense a risk to their neo feudal project and are simply attempting to infiltrate and hollow out any threats from within.

These are the same folks who have let entire economics departments become mouthpieces for corporate propaganda and worked with thousands of think tanks and international organizations to mislead, misinform and cause pain to millions of people.

They have seeded decontextualized words like 'wealth creators' and 'job creators' to create a halo narrative for corporate interests and undermine society, citizenship, the social good, the environment that make 'wealth creation' even possible. So all those take a backseat to 'wealth creator' interests. Since you can't create wealth without society this is some achievement.

It's because of them that we live in a world where the most important economic idea is protecting the business and personal interests of people like the Kochs and making sure government is not 'impinging on their freedom'. And the corollary is a fundamentally anti-human narrative where ordinary people and workers are held in contempt for even expecting living wages and conditions, and their access to basics like education, health care and decent living conditions is hollowed out to promote privatization and recast as 'entitlements'.

Neoliberalism has left us with a decontextualized, highly unstable world, one that exists as a collective but is forcefully detached into a contextless individual existence. These are not the mistakes of otherwise 'well meaning' individuals; they are the results of hard-core ideologues and high priests of power.

Dan , March 10, 2019 at 7:31 pm

Two thumbs up. This has been an ongoing agenda for decades and it has succeeded in permeating every aspect of society, which is why the United States is such a vacuous, superficial place. And it's exporting that superficiality to the rest of the world.

VietnamVet , March 10, 2019 at 7:17 pm

I read Brad DeLong's and Paul Krugman's blogs until their contradictions became too great. If anything, we need more people seeing the truth. The Global War on Terror is into its 18th year. By October the USA will have spent approximately $6 trillion on it and will have accomplished nothing except to create blowback. The middle class is disappearing. Those who remain in their homes are head over heels in debt.

The average American household carries $137,063 in debt. The wealthy are getting richer.

The Jeff Bezos, Warren Buffett and Bill Gates families together have as much wealth as the lowest half of Americans. Donald Trump's Presidency and Brexit document that neoliberal politicians have lost contact with reality. They are nightmares from which there is no escaping. At best, perhaps, Roosevelt Progressives will be reborn to resurrect regulated capitalism and debt forgiveness.

But more likely is a middle-class revolt when Americans no longer can pay for water, electricity, food, medicine and are jailed for not paying a $1,500 fine for littering the Beltway.

A civil war inside a nuclear armed nation state is dangerous beyond belief. France is approaching this.

[Mar 10, 2019] How do I detach a process from Terminal, entirely?

Mar 10, 2019 | superuser.com

stackoverflow.com, Aug 25, 2016 at 17:24

I use Tilda (drop-down terminal) on Ubuntu as my "command central" - pretty much the way others might use GNOME Do, Quicksilver or Launchy.

However, I'm struggling with how to completely detach a process (e.g. Firefox) from the terminal it's been launched from -- i.e. prevent such a (non-)child process from staying tied to that terminal (polluting it with output, or being killed when the terminal closes).

For example, in order to start Vim in a "proper" terminal window, I have tried a simple script like the following:

exec gnome-terminal -e "vim $@" &> /dev/null &

However, that still causes pollution (also, passing a file name doesn't seem to work).

lhunath, Sep 23, 2016 at 19:08

First of all: once you've started a process, you can background it by first stopping it (hit Ctrl-Z) and then typing bg to let it resume in the background. It's now a "job", and its stdout/stderr/stdin are still connected to your terminal.

You can start a process as backgrounded immediately by appending a "&" to the end of it:

firefox &

To run it in the background silenced, use this:

firefox </dev/null &>/dev/null &

Some additional info:

nohup is a program you can use to run your application such that its stdout/stderr can be sent to a file instead and such that closing the parent script won't SIGHUP the child. However, you need to have had the foresight to use it before you started the application. Because of the way nohup works, you can't just apply it to a running process.
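For example, a minimal sketch of that pattern (the log path is arbitrary):

nohup firefox > /tmp/firefox.log 2>&1 &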

disown is a bash builtin that removes a shell job from the shell's job list. What this basically means is that you can't use fg , bg on it anymore, but more importantly, when you close your shell it won't hang or send a SIGHUP to that child anymore. Unlike nohup , disown is used after the process has been launched and backgrounded.

What you can't do, is change the stdout/stderr/stdin of a process after having launched it. At least not from the shell. If you launch your process and tell it that its stdout is your terminal (which is what you do by default), then that process is configured to output to your terminal. Your shell has no business with the processes' FD setup, that's purely something the process itself manages. The process itself can decide whether to close its stdout/stderr/stdin or not, but you can't use your shell to force it to do so.

To manage a background process' output, you have plenty of options from scripts, "nohup" probably being the first to come to mind. But for interactive processes you start but forgot to silence ( firefox < /dev/null &>/dev/null & ) you can't do much, really.

I recommend you get GNU screen . With screen you can just close your running shell when the process' output becomes a bother and open a new one ( ^Ac ).


Oh, and by the way, don't use "$@" where you're using it.

$@ means $1, $2, $3, ..., which would turn your command into:

gnome-terminal -e "vim $1" "$2" "$3" ...

That's probably not what you want because -e only takes one argument. Use $1 to show that your script can only handle one argument.

It's really difficult to get multiple arguments working properly in the scenario that you gave (with the gnome-terminal -e) because -e takes only one argument, which is a shell command string. You'd have to encode your arguments into one. The best and most robust, but rather kludgy, way is like so:

gnome-terminal -e "vim $(printf "%q " "$@")"

Limited Atonement ,Aug 25, 2016 at 17:22

nohup cmd &

nohup detaches the process from the terminal's hangup signal (it does not fully daemonize it, but it usually survives the terminal closing)

Randy Proctor ,Sep 13, 2016 at 23:00

If you are using bash , try disown [ jobspec ] ; see bash(1) .

Another approach you can try is at now . If you're not superuser, your permission to use at may be restricted.

Stephen Rosen ,Jan 22, 2014 at 17:08

Reading these answers, I was under the initial impression that issuing nohup <command> & would be sufficient. Running zsh in gnome-terminal, I found that nohup <command> & did not prevent my shell from killing child processes on exit. Although nohup is useful, especially with non-interactive shells, it only guarantees this behavior if the child process does not reset its handler for the SIGHUP signal.

In my case, nohup should have prevented hangup signals from reaching the application, but the child application (VMWare Player in this case) was resetting its SIGHUP handler. As a result, when the terminal emulator exited, it could still kill my subprocesses. This can only be resolved, to my knowledge, by ensuring that the process is removed from the shell's jobs table. If nohup is overridden with a shell builtin, as is sometimes the case, this may be sufficient; however, in the event that it is not...


disown is a shell builtin in bash, zsh, and ksh93:

<command> &
disown

or

<command> & disown

if you prefer one-liners. This has the generally desirable effect of removing the subprocess from the jobs table. This allows you to exit the terminal emulator without accidentally signaling the child process at all. No matter what the SIGHUP handler looks like, this should not kill your child process.

After the disown, the process is still a child of your terminal emulator (play with pstree if you want to watch this in action), but after the terminal emulator exits, you should see it attached to the init process. In other words, everything is as it should be, and as you presumably want it to be.

What to do if your shell does not support disown ? I'd strongly advocate switching to one that does, but in the absence of that option, you have a few choices.

  1. screen and tmux can solve this problem, but they are much heavier weight solutions, and I dislike having to run them for such a simple task. They are much more suitable for situations in which you want to maintain a tty, typically on a remote machine.
  2. For many users, it may be desirable to see if your shell supports a capability like zsh's setopt nohup . This can be used to specify that SIGHUP should not be sent to the jobs in the jobs table when the shell exits. You can either apply this just before exiting the shell, or add it to shell configuration like ~/.zshrc if you always want it on.
  3. Find a way to edit the jobs table. I couldn't find a way to do this in tcsh or csh , which is somewhat disturbing.
  4. Write a small C program to fork off and exec(). This is a very poor solution, but the source should only consist of a couple dozen lines (a sketch follows this list). You can then pass commands as command-line arguments to the C program, and thus avoid a process-specific entry in the jobs table.
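A minimal sketch of the small C program described in item 4 (an illustration only, with error handling kept to a bare minimum):

#include <stdio.h>
#include <unistd.h>

/* detach.c -- fork, start a new session (dropping the controlling
 * terminal), then exec the command given as arguments.
 * Compile with: cc -o detach detach.c
 * Usage: ./detach firefox
 */
int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
        return 1;
    }
    switch (fork()) {
    case -1:
        perror("fork");
        return 1;
    case 0:                 /* child */
        setsid();           /* new session: no controlling terminal */
        execvp(argv[1], &argv[1]);
        perror("execvp");   /* reached only if exec fails */
        _exit(127);
    default:                /* parent returns to the shell immediately */
        return 0;
    }
}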

Sheljohn ,Jan 10 at 10:20

  1. nohup $COMMAND &
  2. $COMMAND & disown
  3. setsid command

I've been using number 2 for a very long time, but number 3 works just as well. Also, disown has a 'nohup' flag of '-h', can disown all jobs with '-a', and can disown all running jobs with '-ar'.

Silencing is accomplished by '$COMMAND &>/dev/null'.

Hope this helps!
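Putting those pieces together, a fully detached and silenced launch might look like this (a sketch, using firefox as the example command):

setsid firefox </dev/null &>/dev/null &

or, with the disown builtin:

firefox </dev/null &>/dev/null &
disown -h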

dunkyp, Mar 25, 2009 at 1:51

I think screen might solve your problem

Nathan Fellman ,Mar 23, 2009 at 14:55

in tcsh (and maybe in other shells as well), you can use parentheses to detach the process.

Compare this:

> jobs # shows nothing
> firefox &
> jobs
[1]  + Running                       firefox

To this:

> jobs # shows nothing
> (firefox &)
> jobs # still shows nothing
>

This removes firefox from the jobs listing, but it is still tied to the terminal; if you logged in to this node via 'ssh', trying to log out will still hang the ssh process.


To dissociate from the controlling tty, run the command through a sub-shell, e.g.:

(command)&

When you exit, the terminal is closed but the process is still alive.

Check:

(sleep 100) & exit

Open another terminal:

ps aux | grep sleep

Process is still alive.

[Mar 10, 2019] linux - How to attach terminal to detached process

Mar 10, 2019 | unix.stackexchange.com


Gilles ,Feb 16, 2012 at 21:39

I have detached a process from my terminal, like this:
$ process &

That terminal is now long closed, but process is still running and I want to send some commands to that process's stdin. Is that possible?

Samuel Edwin Ward ,Dec 22, 2018 at 13:34

Yes, it is. First, create a pipe: mkfifo /tmp/fifo . Use gdb to attach to the process: gdb -p PID

Then close stdin: call close (0) ; and open it again: call open ("/tmp/fifo", 0600)

Finally, write away (from a different terminal, as gdb will probably hang):

echo blah > /tmp/fifo
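Collected into one session, the steps look like this (a sketch; note that open(2)'s second argument is the flags word, so 0, i.e. O_RDONLY on Linux, is arguably safer than the 0600 shown above):

$ mkfifo /tmp/fifo
$ gdb -p PID
(gdb) call close(0)
(gdb) call open("/tmp/fifo", 0)
(gdb) continue

Then, from the other terminal:

$ echo blah > /tmp/fifo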

NiKiZe ,Jan 6, 2017 at 22:52

When the original terminal is no longer accessible...

reptyr might be what you want, see https://serverfault.com/a/284795/187998

Quote from there:

Have a look at reptyr , which does exactly that. The github page has all the information.
reptyr - A tool for "re-ptying" programs.

reptyr is a utility for taking an existing running program and attaching it to a new terminal. Started a long-running process over ssh, but have to leave and don't want to interrupt it? Just start a screen, use reptyr to grab it, and then kill the ssh session and head on home.

USAGE

reptyr PID

"reptyr PID" will grab the process with id PID and attach it to your current terminal.

After attaching, the process will take input from and write output to the new terminal, including ^C and ^Z. (Unfortunately, if you background it, you will still have to run "bg" or "fg" in the old terminal. This is likely impossible to fix in a reasonable way without patching your shell.)

manatwork ,Nov 20, 2014 at 22:59

I am quite sure you can not.

Check using ps x. If a process has a ? as its controlling tty, you cannot send input to it any more.

9942 ?        S      0:00 tail -F /var/log/messages
9947 pts/1    S      0:00 tail -F /var/log/messages

In this example, you can send input to 9947 doing something like echo "test" > /dev/pts/1 . The other process ( 9942 ) is not reachable.

Next time, you could use screen or tmux to avoid this situation.

Stéphane Gimenez ,Feb 16, 2012 at 16:16

EDIT: As Stéphane Gimenez said, it's not that simple. It only allows you to print to a different terminal.

You can try to write to this process using /proc. It should be located in /proc/PID/fd/0, so a simple:

echo "hello" > /proc/PID/fd/0

should do it. I have not tried it, but it should work, as long as this process still has a valid stdin file descriptor. You can check it with ls -l on /proc/PID/fd/.

See nohup for more details about how to keep processes running.

Stéphane Gimenez ,Nov 20, 2015 at 5:08

Just ending the command line with & will not completely detach the process, it will just run it in the background. (With zsh you can use &! to actually detach it, otherwise you have to disown it later).

When a process runs in the background, it won't receive input from its controlling terminal anymore. But you can send it back into the foreground with fg and then it will read input again.

Otherwise, it's not possible to externally change its file descriptors (including stdin) or to reattach a lost controlling terminal unless you use debugging tools (see Ansgar's answer, or have a look at the retty command).

[Mar 10, 2019] linux - Preventing tmux session created by systemd from automatically terminating on Ctrl+C - Stack Overflow

Mar 10, 2019 | stackoverflow.com

Preventing tmux session created by systemd from automatically terminating on Ctrl+C


Jim Stewart ,Nov 10, 2018 at 12:55

For a few days now I've been successfully running the new Minecraft Bedrock Edition dedicated server on my Ubuntu 18.04 LTS home server. Because it should be available 24/7 and automatically start up after boot, I created a systemd service for a detached tmux session:

tmux.minecraftserver.service

[Unit]
Description=tmux minecraft_server detached

[Service]
Type=forking
WorkingDirectory=/home/mine/minecraftserver
ExecStart=/usr/bin/tmux new -s minecraftserver -d "LD_LIBRARY_PATH=. /home/mine/minecraftser$
User=mine

[Install]
WantedBy=multi-user.target

Everything works as expected but there's one tiny thing that keeps bugging me:

How can I prevent tmux from terminating its whole session when I press Ctrl+C? I just want to terminate the Minecraft server process itself instead of the whole tmux session. When starting the server from the command line in a manually created tmux session this does work (the session stays alive), but not when the session was brought up by systemd.

FlKo ,Nov 12, 2018 at 6:21

When starting the server from the command line in a manually created tmux session this does work (session stays alive) but not when the session was brought up by systemd .

The difference between these situations is actually unrelated to systemd. In one case, you're starting the server from a shell within the tmux session, and when the server terminates, control returns to the shell. In the other case, you're starting the server directly within the tmux session, and when it terminates there's no shell to return to, so the tmux session also dies.

tmux has an option to keep the session alive after the process inside it dies (look for remain-on-exit in the manpage), but that's probably not what you want: you want to be able to return to an interactive shell, to restart the server, investigate why it died, or perform maintenance tasks, for example. So it's probably better to change your command to this:

'LD_LIBRARY_PATH=. /home/mine/minecraftserver/ ; exec bash'

That is, first run the server, and then, after it terminates, replace the process (the shell which tmux implicitly spawns to run the command, but which will then exit) with another, interactive shell. (For some other ways to get an interactive shell after the command exits, see e.g. this question – but note that the <(echo commands) syntax suggested in the top answer is not available in systemd unit files.)
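For completeness, if you did want the remain-on-exit behavior mentioned above, it is a tmux window option; presumably something like this (session name as in the question):

tmux set-window-option -t minecraftserver remain-on-exit on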

FlKo ,Nov 12, 2018 at 6:21

I was able to solve this by using systemd's ExecStartPost and tmux's send-keys, like this:
[Unit]
Description=tmux minecraft_server detached

[Service]
Type=forking
WorkingDirectory=/home/mine/minecraftserver
ExecStart=/usr/bin/tmux new -d -s minecraftserver
ExecStartPost=/usr/bin/tmux send-keys -t minecraftserver "cd /home/mine/minecraftserver/" Enter "LD_LIBRARY_PATH=. ./bedrock_server" Enter

User=mine

[Install]
WantedBy=multi-user.target

[Mar 01, 2019] Emergency reboot/shutdown using SysRq by Ilija Matoski

peakoilbarrel.com
As you know, Linux implements a mechanism to gracefully shut down and reboot: daemons are stopped (usually one by one) and the file cache is synced to disk.

But what sometimes happens is that the system will not reboot or shut down no matter how many times you issue the shutdown or reboot command.

If the server is close to you, you can always do a physical reset, but what if it's far away, where you can't reach it? And what if the OpenSSH server crashes and you cannot log in to the system again?

If you ever find yourself in a situation like that, there is another option to force the system to reboot or shutdown.

The magic SysRq key is a key combination understood by the Linux kernel, which allows the user to perform various low-level commands regardless of the system's state. It is often used to recover from freezes, or to reboot a computer without corrupting the filesystem.

Key   Description
b     Immediately reboot the system, without unmounting or syncing filesystems
s     Sync all mounted filesystems
o     Shut off the system
i     Send the SIGKILL signal to all processes except init

So if you are in a situation where you cannot reboot or shut down the server, you can force an immediate reboot by issuing:

echo 1 > /proc/sys/kernel/sysrq 
echo b > /proc/sysrq-trigger

If you want, you can also force a sync before rebooting by issuing these commands:

echo 1 > /proc/sys/kernel/sysrq 
echo s > /proc/sysrq-trigger
echo b > /proc/sysrq-trigger

These are called magic commands, and they're pretty much synonymous with holding down Alt-SysRq and another key on older keyboards. Dropping 1 into /proc/sys/kernel/sysrq tells the kernel that you want to enable SysRq access (it's usually disabled). The second command is equivalent to pressing Alt-SysRq-b on a QWERTY keyboard.

If you want to keep SysRq enabled all the time, you can do that with an entry in your server's sysctl.conf:

echo "kernel.sysrq = 1" >> /etc/sysctl.conf

[Mar 01, 2019] Molly-guard for CentOS 7 UoB Unix by dg12158

Sep 21, 2015 | bris.ac.uk

Since I was looking at this already and had a few things to investigate and fix in our systemd-using hosts, I checked how plausible it is to insert a molly-guard-like password prompt as part of the reboot/shutdown process on CentOS 7 (i.e. using systemd).

Problems encountered include:

So for now this is shelved. It would be nice to have a solution though, so any hints from systemd experts are gratefully received!

(Note that CentOS 7 uses systemd 208, so new features in later versions which help won't be available to us.)

[Mar 01, 2019] molly-guard protects machines from accidental shutdowns-reboots by ruchi

Nov 28, 2009 | www.ubuntugeek.com
molly-guard installs a shell script that overrides the existing shutdown/reboot/halt/poweroff commands and first runs a set of scripts, which all have to exit successfully, before molly-guard invokes the real command.

One of the scripts checks for existing SSH sessions. If any of the four commands are called interactively over an SSH session, the shell script prompts you to enter the name of the host you wish to shut down. This should adequately prevent you from accidental shutdowns and reboots.

This shell script passes through the commands to the respective binaries in /sbin and should thus not get in the way if called non-interactively, or locally.

The tool is basically a replacement for halt, reboot and shutdown to prevent such accidents.

Install molly-guard in ubuntu

sudo apt-get install molly-guard

or click on the following link

apt://molly-guard

Now that it's installed, try it out (on a non-production box). Here you can see it save me from rebooting the box Ubuntu-test:

Ubuntu-test:~$ sudo reboot
W: molly-guard: SSH session detected!
Please type in hostname of the machine to reboot: ruchi
Good thing I asked; I won't reboot Ubuntu-test ...
W: aborting reboot due to 30-query-hostname exiting with code 1.
Ubuntu-Test:~$

By default you're only protected on sessions that look like SSH sessions (have $SSH_CONNECTION set). If, like us, you use a lot of virtual machines and RILOE cards, edit /etc/molly-guard/rc and uncomment ALWAYS_QUERY_HOSTNAME=true. Now you should be prompted for any interactive session.
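The relevant line in /etc/molly-guard/rc, once uncommented, looks like this:

ALWAYS_QUERY_HOSTNAME=true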

[Mar 01, 2019] Confirm before executing shutdown-reboot command on linux by Ilija Matoski

Notable quotes:
"... rushing to leave and was still logged into a server so I wanted to shutdown my laptop, but what I didn't notice is that I was still connected to the remote server. ..."
Oct 23, 2017 | matoski.com
I was rushing to leave and was still logged into a server; I wanted to shut down my laptop, but what I didn't notice was that I was still connected to the remote server. Luckily, before pressing Enter I noticed I was not on my machine but on a remote server. So I was thinking there should be a very easy way to prevent this from happening again, to me or to anyone else.

So the first thing we need to do is create a new bash script at /usr/local/bin/confirm with the contents below, and give it execution permissions.

#!/usr/bin/env bash
echo "About to execute $1 command"
echo -n "Would you like to proceed y/n? "
read reply

if [ "$reply" = y -o "$reply" = Y ]
then
   $1 "${@:2}"
else
   echo "$1 ${@:2} cancelled"
fi
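For the aliases below to work, the script must actually be executable, e.g.:

sudo chmod +x /usr/local/bin/confirm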

Now the only thing left to do is to set up the aliases so they go through this confirm script instead of directly calling the command.

So I created the following files:

/etc/profile.d/confirm-shutdown.sh

alias shutdown="/usr/local/bin/confirm /sbin/shutdown"

/etc/profile.d/confirm-reboot.sh

alias reboot="/usr/local/bin/confirm /sbin/reboot"

Now when I actually try to do a shutdown/reboot, it will prompt me like so:

ilijamt@x1 ~ $ reboot 
Before proceeding to perform /sbin/reboot, please ensure you have approval to perform this task
Would you like to proceed y/n? n
/sbin/reboot  cancelled

[Feb 26, 2019] THE CRISIS OF NEOLIBERALISM by Julie A. Wilson

Highly recommended!
Notable quotes:
"... While the Tea Party was critical of status-quo neoliberalism -- especially its cosmopolitanism and embrace of globalization and diversity, which was perfectly embodied by Obama's election and presidency -- it was not exactly anti-neoliberal. Rather, it was anti-left neoliberalism-, it represented a more authoritarian, right [wing] version of neoliberalism. ..."
"... Within the context of the 2016 election, Clinton embodied the neoliberal center that could no longer hold. Inequality. Suffering. Collapsing infrastructures. Perpetual war. Anger. Disaffected consent. ..."
"... Both Sanders and Trump were embedded in the emerging left and right responses to neoliberalism's crisis. Specifically, Sanders' energetic campaign -- which was undoubtedly enabled by the rise of the Occupy movement -- proposed a decidedly more "commongood" path. Higher wages for working people. Taxes on the rich, specifically the captains of the creditocracy. ..."
"... In other words, Trump supporters may not have explicitly voted for neoliberalism, but that's what they got. In fact, as Rottenberg argues, they got a version of right neoliberalism "on steroids" -- a mix of blatant plutocracy and authoritarianism that has many concerned about the rise of U.S. fascism. ..."
"... We can't know what would have happened had Sanders run against Trump, but we can think seriously about Trump, right and left neoliberalism, and the crisis of neoliberal hegemony. In other words, we can think about where and how we go from here. As I suggested in the previous chapter, if we want to construct a new world, we are going to have to abandon the entangled politics of both right and left neoliberalism; we have to reject the hegemonic frontiers of both disposability and marketized equality. After all, as political philosopher Nancy Fraser argues, what was rejected in the election of 2016 was progressive, left neoliberalism. ..."
"... While the rise of hyper-right neoliberalism is certainly nothing to celebrate, it does present an opportunity for breaking with neoliberal hegemony. We have to proceed, as Gary Younge reminds us, with the realization that people "have not rejected the chance of a better world. They have not yet been offered one."' ..."
Oct 08, 2017 | www.amazon.com

The quote from the book is courtesy of the Amazon preview of Neoliberalism (Key Ideas in Media & Cultural Studies).

In Chapter 1, we traced the rise of our neoliberal conjuncture back to the crisis of liberalism during the late nineteenth and early twentieth centuries, culminating in the Great Depression. During this period, huge transformations in capitalism proved impossible to manage with classical laissez-faire approaches. Out of this crisis, two movements emerged, both of which would eventually shape the course of the twentieth century and beyond. The first, and the one that became dominant in the aftermath of the crisis, was the conjuncture of embedded liberalism. The crisis indicated that capitalism wreaked too much damage on the lives of ordinary citizens. People (white workers and families, especially) warranted social protection from the volatilities and brutalities of capitalism. The state's public function was expanded to include the provision of a more substantive social safety net, a web of protections for people and a web of constraints on markets. The second response was the invention of neoliberalism. Deeply skeptical of the common-good principles that undergirded the emerging social welfare state, neoliberals began organizing on the ground to develop a "new" liberal governmentality, one rooted less in laissez-faire principles and more in the generalization of competition and enterprise. They worked to envision a new society premised on a new social ontology, that is, on new truths about the state, the market, and human beings. Crucially, neoliberals also began building infrastructures and institutions for disseminating their new knowledges and theories (i.e., the Neoliberal Thought Collective), as well as organizing politically to build mass support for new policies (i.e., working to unite anti-communists, Christian conservatives, and free marketers in common cause against the welfare state). When cracks in embedded liberalism began to surface -- which is bound to happen with any moving political equilibrium -- neoliberals were there with new stories and solutions, ready to make the world anew.

We are currently living through the crisis of neoliberalism. As I write this book, Donald Trump has recently secured the U.S. presidency, prevailing in the national election over his Democratic opponent Hillary Clinton. Throughout the election, I couldn't help but think back to the crisis of liberalism and the two responses that emerged. Similarly, after the Great Recession of 2008, we saw two responses emerge to challenge our unworkable status quo, which dispossesses so many people of vital resources for individual and collective life. On the one hand, we witnessed the rise of Occupy Wall Street. While many continue to critique the movement for its lack of leadership and a coherent political vision, Occupy was connected to burgeoning movements across the globe, and our current political horizons have been undoubtedly shaped by the movement's success at repositioning class and economic inequality within our political horizon. On the other hand, we saw the rise of the Tea Party, a right-wing response to the crisis. While the Tea Party was critical of status-quo neoliberalism -- especially its cosmopolitanism and embrace of globalization and diversity, which was perfectly embodied by Obama's election and presidency -- it was not exactly anti-neoliberal. Rather, it was anti-left neoliberalism; it represented a more authoritarian, right [wing] version of neoliberalism.

Within the context of the 2016 election, Clinton embodied the neoliberal center that could no longer hold. Inequality. Suffering. Collapsing infrastructures. Perpetual war. Anger. Disaffected consent. There were just too many fissures and fault lines in the glossy, cosmopolitan world of left neoliberalism and marketized equality. Indeed, while Clinton ran on status-quo stories of good governance and neoliberal feminism, confident that demographics and diversity would be enough to win the election, Trump effectively tapped into the unfolding conjunctural crisis by exacerbating the cracks in the system of marketized equality, channeling political anger into his celebrity brand that had been built on saying "f*** you" to the culture of left neoliberalism (corporate diversity, political correctness, etc.) In fact, much like Clinton's challenger in the Democratic primary, Bernie Sanders, Trump was a crisis candidate.

Both Sanders and Trump were embedded in the emerging left and right responses to neoliberalism's crisis. Specifically, Sanders' energetic campaign -- which was undoubtedly enabled by the rise of the Occupy movement -- proposed a decidedly more "common-good" path. Higher wages for working people. Taxes on the rich, specifically the captains of the creditocracy.

Universal health care. Free higher education. Fair trade. The repeal of Citizens United. Trump offered a different response to the crisis. Like Sanders, he railed against global trade deals like NAFTA and the Trans-Pacific Partnership (TPP). However, Trump's victory was fueled by right neoliberalism's culture of cruelty. While Sanders tapped into and mobilized desires for a more egalitarian and democratic future, Trump's promise was nostalgic, making America "great again" -- putting the nation back on "top of the world," and implying a time when women were "in their place" as male property, and minorities and immigrants were controlled by the state.

Thus, what distinguished Trump's campaign from more traditional Republican campaigns was that it actively and explicitly pitted one group's equality (white men) against everyone else's (immigrants, women, Muslims, minorities, etc.). As Catherine Rottenberg suggests, Trump offered voters a choice between a multiracial society (where folks are increasingly disadvantaged and dispossessed) and white supremacy (where white people would be back on top). However, "[w]hat he neglected to state," Rottenberg writes,

is that neoliberalism flourishes in societies where the playing field is already stacked against various segments of society, and that it needs only a relatively small select group of capital-enhancing subjects, while everyone else is ultimately dispensable. 1

In other words, Trump supporters may not have explicitly voted for neoliberalism, but that's what they got. In fact, as Rottenberg argues, they got a version of right neoliberalism "on steroids" -- a mix of blatant plutocracy and authoritarianism that has many concerned about the rise of U.S. fascism.

We can't know what would have happened had Sanders run against Trump, but we can think seriously about Trump, right and left neoliberalism, and the crisis of neoliberal hegemony. In other words, we can think about where and how we go from here. As I suggested in the previous chapter, if we want to construct a new world, we are going to have to abandon the entangled politics of both right and left neoliberalism; we have to reject the hegemonic frontiers of both disposability and marketized equality. After all, as political philosopher Nancy Fraser argues, what was rejected in the election of 2016 was progressive, left neoliberalism.

While the rise of hyper-right neoliberalism is certainly nothing to celebrate, it does present an opportunity for breaking with neoliberal hegemony. We have to proceed, as Gary Younge reminds us, with the realization that people "have not rejected the chance of a better world. They have not yet been offered one."'

Mark Fisher, the author of Capitalist Realism, put it this way:

The long, dark night of the end of history has to be grasped as an enormous opportunity. The very oppressive pervasiveness of capitalist realism means that even glimmers of alternative political and economic possibilities can have a disproportionately great effect. The tiniest event can tear a hole in the grey curtain of reaction which has marked the horizons of possibility under capitalist realism. From a situation in which nothing can happen, suddenly anything is possible again.4

I think that, for the first time in the history of U.S. capitalism, the vast majority of people might sense the lie of liberal, capitalist democracy. They feel anxious, unfree, disaffected. Fantasies of the good life have been shattered beyond repair for most people. Trump and this hopefully brief triumph of right neoliberalism will soon lay this bare for everyone to see. Now, with Trump, it is absolutely clear: the rich rule the world; we are all disposable; this is no democracy. The question becomes: How will we show up for history? Will there be new stories, ideas, visions, and fantasies to attach to? How can we productively and meaningfully intervene in the crisis of neoliberalism? How can we "tear a hole in the grey curtain" and open up better worlds? How can we put what we've learned to use and begin to imagine and build a world beyond living in competition? I hope our critical journey through the neoliberal conjuncture has enabled you to begin to answer these questions.

More specifically, in recent decades, especially since the end of the Cold War, our common-good sensibilities have been channeled into neoliberal platforms for social change and privatized action, funneling our political energies into brand culture and marketized struggles for equality (e.g., charter schools, NGOs and non-profits, neoliberal antiracism and feminism). As a result, despite our collective anger and disaffected consent, we find ourselves stuck in capitalist realism with no real alternative. Like the neoliberal care of the self, we are trapped in a privatized mode of politics that relies on cruel optimism; we are attached, it seems, to politics that inspire and motivate us to action, while keeping us living in competition.

To disrupt the game, we need to construct common political horizons against neoliberal hegemony. We need to use our common stories and common reason to build common movements against precarity -- for within neoliberalism, precarity is what ultimately has the potential to thread all of our lives together. Put differently, the ultimate fault line in the neoliberal conjuncture is the way it subjects us all to precarity and the biopolitics of disposability, thereby creating conditions of possibility for new coalitions across race, gender, citizenship, sexuality, and class. Recognizing this potential for coalition in the face of precarization is the most pressing task facing those who are yearning for a new world. The question is: How do we get there? How do we realize these coalitional potentialities and materialize common horizons?

HOW WE GET THERE

Ultimately, mapping the neoliberal conjuncture through everyday life in enterprise culture has not only provided some direction in terms of what we need; it has also cultivated concrete and practical intellectual resources for political intervention and social interconnection -- a critical toolbox for living in common. More specifically, this book has sought to provide resources for thinking and acting against the four Ds: resources for engaging in counter-conduct, modes of living that refuse, on one hand, to conduct one's life according to the norm of enterprise, and on the other, to relate to others through the norm of competition. Indeed, we need new ways of relating, interacting, and living as friends, lovers, workers, vulnerable bodies, and democratic people if we are to write new stories, invent new governmentalities, and build coalitions for new worlds.

Against Disimagination: Educated Hope and Affirmative Speculation

We need to stop turning inward, retreating into ourselves, and taking personal responsibility for our lives (a task which is ultimately impossible). Enough with the disimagination machine! Let's start looking outward, not inward -- to the broader structures that undergird our lives. Of course, we need to take care of ourselves; we must survive. But I firmly believe that we can do this in ways both big and small, that transform neoliberal culture and its status-quo stories.

Here's the thing I tell my students all the time. You cannot escape neoliberalism. It is the air we breathe, the water in which we swim. No job, practice of social activism, program of self-care, or relationship will be totally free from neoliberal impingements and logics. There is no pure "outside" to get to or work from -- that's just the nature of neoliberalism's totalizing cultural power. But let's not forget that neoliberalism's totalizing cultural power is also a source of weakness. Potential for resistance is everywhere, scattered throughout our everyday lives in enterprise culture. Our critical toolbox can help us identify these potentialities and navigate and engage our conjuncture in ways that tear open those new worlds we desire.

In other words, our critical perspective can help us move through the world with what Henry Giroux calls educated hope. Educated hope means holding in tension the material realities of power and the contingency of history. This orientation of educated hope knows very well what we're up against. However, in the face of seemingly totalizing power, it also knows that neoliberalism can never become total because the future is open. Educated hope is what allows us to see the fault lines, fissures, and potentialities of the present and emboldens us to think and work from that sliver of social space where we do have political agency and freedom to construct a new world. Educated hope is what undoes the power of capitalist realism. It enables affirmative speculation (such as discussed in Chapter 5), which does not try to hold the future to neoliberal horizons (that's cruel optimism!), but instead to affirm our commonalities and the potentialities for the new worlds they signal. Affirmative speculation demands a different sort of risk calculation and management. It senses how little we have to lose and how much we have to gain from knocking the hustle of our lives.

Against De-democratization: Organizing and Collective Governing

We can think of educated hope and affirmative speculation as practices of what Wendy Brown calls "bare democracy" -- the basic idea that ordinary people like you and me should govern our lives in common, that we should critique and try to change our world, especially the exploitative and oppressive structures of power that maintain social hierarchies and diminish lives. Neoliberal culture works to stomp out capacities for bare democracy by transforming democratic desires and feelings into meritocratic desires and feelings. In neoliberal culture, utopian sensibilities are directed away from the promise of collective governing to competing for equality.

We have to get back that democratic feeling! As Jeremy Gilbert taught us, disaffected consent is a post-democratic orientation. We don't like our world, but we don't think we can do anything about it. So, how do we get back that democratic feeling? How do we transform our disaffected consent into something new? As I suggested in the last chapter, we organize. Organizing is simply about people coming together around a common horizon and working collectively to materialize it. In this way, organizing is based on the idea of radical democracy, not liberal democracy. While the latter is based on formal and abstract rights guaranteed by the state, radical democracy insists that people should directly make the decisions that impact their lives, security, and well-being. Radical democracy is a practice of collective governing: it is about us hashing out, together in communities, what matters, and working in common to build a world based on these new sensibilities.

The work of organizing is messy, often unsatisfying, and sometimes even scary. Organizing based on affirmative speculation and coalition-building, furthermore, will have to be experimental and uncertain. As Lauren Berlant suggests, it means "embracing the discomfort of affective experience in a truly open social life that no one has ever experienced." Organizing through and for the common "requires more adaptable infrastructures. Keep forcing the existing infrastructures to do what they don't know how to do. Make new ways to be local together, where local doesn't require a physical neighborhood." 5 What Berlant is saying is that the work of bare democracy requires unlearning, and detaching from, our current stories and infrastructures in order to see and make things work differently. Organizing for a new world is not easy -- and there are no guarantees -- but it is the only way out of capitalist realism.

Against Disposability: Radical Equality

Getting back democratic feeling will at once require and help us to move beyond the biopolitics of disposability and entrenched systems of inequality. On one hand, organizing will never be enough if it is not animated by bare democracy, a sensibility that each of us is equally important when it comes to the project of determining our lives in common. Our bodies, our hurts, our dreams, and our desires matter regardless of our race, gender, sexuality, or citizenship, and regardless of how much capital (economic, social, or cultural) we have. Simply put, in a radical democracy, no one is disposable. This bare-democratic sense of equality must be foundational to organizing and coalition-building. Otherwise, we will always and inevitably fall back into a world of inequality.

On the other hand, organizing and collective governing will deepen and enhance our sensibilities and capacities for radical equality. In this context, the kind of self-enclosed individualism that empowers and underwrites the biopolitics of disposability melts away, as we realize the interconnectedness of our lives and just how amazing it feels to

fail, we affirm our capacities for freedom, political intervention, social interconnection, and collective social doing.

Against Dispossession: Shared Security and Common Wealth

Thinking and acting against the biopolitics of disposability goes hand-in-hand with thinking and acting against dispossession. Ultimately, when we really understand and feel ourselves in relationships of interconnection with others, we want for them as we want for ourselves. Our lives and sensibilities of what is good and just are rooted in radical equality, not possessive or self-appreciating individualism. Because we desire social security and protection, we also know others desire and deserve the same.

However, to really think and act against dispossession means not only advocating for shared security and social protection, but also for a new society that is built on the egalitarian production and distribution of social wealth that we all produce. In this sense, we can take Marx's critique of capitalism -- that wealth is produced collectively but appropriated individually -- to heart. Capitalism was built on the idea that one class -- the owners of the means of production -- could exploit and profit from the collective labors of everyone else (those who do not own and thus have to work), albeit in very different ways depending on race, gender, or citizenship. This meant that, for workers of all stripes, their lives existed not for themselves, but for others (the appropriating class), and that regardless of what we own as consumers, we are not really free or equal in that bare-democratic sense of the word.

If we want to be really free, we need to construct new material and affective social infrastructures for our common wealth. In these new infrastructures, wealth must not be reduced to economic value; it must be rooted in social value. Here, the production of wealth does not exist as a separate sphere from the reproduction of our lives. In other words, new infrastructures, based on the idea of common wealth, will not be set up to exploit our labor, dispossess our communities, or to divide our lives. Rather, they will work to provide collective social resources and care so that we may all be free to pursue happiness, create beautiful and/or useful things, and to realize our potential within a social world of living in common. Crucially, to create the conditions for these new, democratic forms of freedom rooted in radical equality, we need to find ways to refuse and exit the financial networks of Empire and the dispossessions of creditocracy, building new systems that invite everyone to participate in the ongoing production of new worlds and the sharing of the wealth that we produce in common.

It's not up to me to tell you exactly where to look, but I assure you that potentialities for these new worlds are everywhere around you.

[Feb 21, 2019] https://github.com/MikeDacre/careful_rm

Feb 21, 2019 | github.com

rm is a powerful *nix tool that simply drops a file from the drive index. It doesn't delete it or put it in a Trash can, it just de-indexes it which makes the file hard to recover unless you want to put in the work, and pretty easy to recover if you are willing to spend a few hours trying (use shred to actually secure erase files).
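For instance, a minimal shred invocation that overwrites a file and then removes it:

shred -u secret.txt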

careful_rm.py is inspired by the -I interactive mode of rm and by safe-rm . safe-rm adds a recycle bin mode to rm, and the -I interactive mode adds a prompt if you delete more than a handful of files or recursively delete a directory. ZSH also has an option to warn you if you recursively rm a directory.

These are all great, but I found them unsatisfying. What I want is for rm to be quick and not bother me for single file deletions (so rm -i is out), but to let me know when I am deleting a lot of files, and to actually print a list of files that are about to be deleted . I also want it to have the option to trash/recycle my files instead of just straight deleting them.... like safe-rm , but not so intrusive (safe-rm defaults to recycle, and doesn't warn).

careful_rm.py is fundamentally a simple rm wrapper that accepts all of the same commands as rm, but with a few additional options and features. In the source code CUTOFF is set to 3, so deleting more files than that will prompt the user. Also, deleting a directory will prompt the user separately with a count of all files and subdirectories within the folders to be deleted.

Furthermore, careful_rm.py implements a fully integrated trash mode that can be toggled on with -c . It can also be forced on by adding a file at ~/.rm_recycle , or toggled on only for $HOME (the best idea), by ~/.rm_recycle_home . The mode can be disabled on the fly by passing --direct , which forces off recycle mode.

The recycle mode tries to find the best location to recycle to on MacOS or Linux, on MacOS it also tries to use Apple Script to trash files, which means the original location is preserved (note Applescript can be slow, you can disable it by adding a ~/.no_apple_rm file, but Put Back won't work). The best location for trashes goes in this order:

  1. $HOME/.Trash on Mac or $HOME/.local/share/Trash on Linux
  2. <mountpoint>/.Trashes on Mac or <mountpoint>/.Trash-$UID on Linux
  3. /tmp/$USER_trash

The best trash can is always favored so as to avoid volume hopping, as moving across file systems is slow. If the trash does not exist, the user is prompted to create it; they then also have the option to fall back to the root trash ( /tmp/$USER_trash ) or just rm the files.

/tmp/$USER_trash is almost always used for deleting system/root files, but note that you most likely do not want to save those files, and straight rm is generally better.

[Feb 21, 2019] https://github.com/lagerspetz/linux-stuff/blob/master/scripts/saferm.sh by Eemil Lagerspetz

A shell script that tries to implement the trash can idea.
Feb 21, 2019 | github.com
#!/bin/bash
##
## saferm.sh
## Safely remove files, moving them to GNOME/KDE trash instead of deleting.
## Made by Eemil Lagerspetz
## Login <vermind@drache>
##
## Started on Mon Aug 11 22:00:58 2008 Eemil Lagerspetz
## Last update Sat Aug 16 23:49:18 2008 Eemil Lagerspetz
##
version="1.16";

... ... ...

[Feb 21, 2019] The rm='rm -i' alias is a horror

Feb 21, 2019 | superuser.com

The rm='rm -i' alias is a horror because after a while of using it, you will expect rm to prompt you by default before removing files. Of course, one day you'll run it with an account that doesn't have that alias set, and before you understand what's going on, it is too late.

... ... ...

If you want safe aliases, but don't want to risk getting used to the commands working differently on your system than on others, you can disable rm like this:
alias rm='echo "rm is disabled, use remove or trash or /bin/rm instead."'

Then you can create your own safe alias, e.g.

alias remove='/bin/rm -irv'

or use trash instead.

[Feb 21, 2019] Ubuntu Manpage trash - Command line trash utility.

Feb 21, 2019 | manpages.ubuntu.com

xenial ( 1 ) trash.1.gz

Provided by: trash-cli_0.12.9.14-2_all

NAME

       trash - Command line trash utility.
SYNOPSIS 
       trash [arguments] ...
DESCRIPTION 
       Trash-cli  package  provides  a command line interface trashcan utility compliant with the
       FreeDesktop.org Trash Specification.  It remembers the name, original path, deletion date,
       and permissions of each trashed file.

ARGUMENTS 
       Names of files or directory to move in the trashcan.
EXAMPLES
       $ cd /home/andrea/
       $ touch foo bar
       $ trash foo bar
BUGS 
       Report bugs to http://code.google.com/p/trash-cli/issues
AUTHORS
       Trash  was  written  by Andrea Francia <andreafrancia@users.sourceforge.net> and Einar Orn
       Olason <eoo@hi.is>.  This manual page was written by  Steve  Stalcup  <vorian@ubuntu.com>.
       Changes made by Massimo Cavalleri <submax@tiscalinet.it>.

SEE ALSO 
       trash-list(1),   trash-restore(1),   trash-empty(1),   and   the   FreeDesktop.org   Trash
       Specification at http://www.ramendik.ru/docs/trashspec.html.

       Both are released under the GNU General Public License, version 2 or later.

[Feb 21, 2019] How to prompt and read user input in a Bash shell script

Feb 21, 2019 | alvinalexander.com

By Alvin Alexander. Last updated: June 22, 2017.

Unix/Linux bash shell script FAQ: How do I prompt a user for input from a shell script (Bash shell script), and then read the input the user provides?

Answer: I usually use the shell script read function to read input from a shell script. Here are two slightly different versions of the same shell script. This first version prompts the user for input only once, and then dies if the user doesn't give a correct Y/N answer:

# (1) prompt user, and read command line argument
read -p "Run the cron script now? " answer

# (2) handle the command line argument we were given
while true
do
  case $answer in
   [yY]* ) /usr/bin/wget -O - -q -t 1 http://www.example.com/cron.php
           echo "Okay, just ran the cron script."
           break;;

   [nN]* ) exit;;

   * )     echo "Dude, just enter Y or N, please."; break ;;
  esac
done

This second version stays in a loop until the user supplies a Y/N answer:

while true
do
  # (1) prompt user, and read command line argument
  read -p "Run the cron script now? " answer

  # (2) handle the input we were given
  case $answer in
   [yY]* ) /usr/bin/wget -O - -q -t 1 http://www.example.com/cron.php
           echo "Okay, just ran the cron script."
           break;;

   [nN]* ) exit;;

   * )     echo "Dude, just enter Y or N, please.";;
  esac
done

I prefer the second approach, but I thought I'd share both of them here. They are subtly different, so note the extra break in the first script.

This Linux Bash 'read' function is nice, because it does both things, prompting the user for input, and then reading the input. The other nice thing it does is leave the cursor at the end of your prompt, as shown here:

Run the cron script now? _

(This is so much nicer than what I had to do years ago.)

[Feb 15, 2019] Losing a job in your 50s is especially tough. Here are 3 steps to take when layoffs happen by Peter Dunn

Unemployment usually lasts just six months or so; this is the time when you can plan your "downsizing". You do not need to rush.
Often losing a job logically requires selling your home and moving to a modest apartment, especially if no children are living with you. At 50 it is about time... You will need to do it later anyway, so why not now?
But that's a very tough decision to make... Still, if the current housing market is close to the top, this is one of the best moves you can make. Getting several hundred thousand dollars out of your house allows you to create a kind of private pension to compensate for losses in income until you hit your Social Security check, which currently means age 66.
A $300K investment in A-quality bonds returning 3% per year, drawn down as an annuity, is enough to provide you with a $24K per year "pension" from 50 to the age of 66. That allows you to pay for the apartment and amenities. The food is extra...
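A quick check of that figure: interest alone on $300K at 3% is only $9K per year, so the $24K assumes drawing down principal as well. Treating it as a 16-year annuity gives:

payment = P * r / (1 - (1 + r)^-n)
        = 300000 * 0.03 / (1 - 1.03^-16)
        ≈ $23,900 per year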
This way you can take a lower-paid job and survive.
And in this case your 401k remains intact and can supplement your SS income later on. A simple Excel spreadsheet can provide you with a complete picture of what you can afford and what not. Actually, the ability to walk in fresh air for 3 or more hours each day is worth a lot of money ;-)
Notable quotes:
"... Losing a job in your 50s is a devastating moment, especially if the job is connected to a long career ripe with upward mobility. As a frequent observer of this phenomenon, it's as scary and troublesome as unchecked credit card debt or an expensive chronic health condition. This is one of the many reasons why I believe our 50s can be the most challenging decade of our lives. ..."
"... The first thing you should do is identify the exact day your job income stops arriving ..."
"... Next, and by next I mean five minutes later, explore your eligibility for unemployment benefits, and then file for them if you're able. ..."
"... Grab your bank statement, a marker, and a calculator. As much as you want to pretend its business as usual, you shouldn't. Identify expenses that don't make sense if you don't have a job. Circle them. Add them up. Resolve to eliminate them for the time being, and possibly permanently. While this won't necessarily lengthen your fuse, it could lessen the severity of a potential boom. ..."
Feb 15, 2019 | finance.yahoo.com

... ... ...

Losing a job in your 50s is a devastating moment, especially if the job is connected to a long career ripe with upward mobility. As a frequent observer of this phenomenon, it's as scary and troublesome as unchecked credit card debt or an expensive chronic health condition. This is one of the many reasons why I believe our 50s can be the most challenging decade of our lives.

Assuming you can clear the mental challenges, the financial and administrative obstacles can leave you feeling like a Rube Goldberg machine.

Income, health insurance, life insurance, disability insurance, bills, expenses, short-term savings and retirement savings are all immediately important in the face of a job loss. Never mind your Parent PLUS loans, financially-dependent aging parents, and boomerang children (adult kids who live at home), which might all be lurking as well.

When does your income stop?

From the shocking moment a person learns their job is no longer their job, the word "triage" must flash in bright lights like an obnoxiously large sign in Times Square. This is more challenging than you might think. Like a pickpocket bumping into you right before he grabs your wallet, the distraction is the problem that takes your focus away from the real problem.

This is hard to do because of the emotion that arrives with the dirty deed. The mind immediately begins to race to sources of money and relief. And unfortunately that relief is often found in the wrong place.

The first thing you should do is identify the exact day your job income stops arriving . That's how much time you have to defuse the bomb. Your fuse may come in the form of a severance package, or work you've performed but haven't been paid for yet.

When do benefits kick in?

Next, and by next I mean five minutes later, explore your eligibility for unemployment benefits, and then file for them if you're able. However, in some states severance pay affects your immediate eligibility for unemployment benefits. In other words, you can't file for unemployment until your severance payments go away.

Assuming you can't just retire at this moment, which you likely can't, you must secure fresh employment income quickly. But quickly is relative to the length of your fuse. I've witnessed way too many people miscalculate the length and importance of their fuse. If you're able to get back to work quickly, the initial job loss plus severance ends up enhancing your financial life. If you take too much time, by your choice or that of the cosmos, boom.

The next move is much more hands-on, and must also be performed the day you find yourself without a job.

What nonessentials do I cut?

Grab your bank statement, a marker, and a calculator. As much as you want to pretend its business as usual, you shouldn't. Identify expenses that don't make sense if you don't have a job. Circle them. Add them up. Resolve to eliminate them for the time being, and possibly permanently. While this won't necessarily lengthen your fuse, it could lessen the severity of a potential boom.

The idea of diving into your spending habits on the day you lose your job is no fun. But when else will you have such a powerful reason to do so? You won't. It's better than dipping into your assets to fund your current lifestyle. And that's where we'll pick it up the next time.

We've covered day one. In my next column we will tackle day two and beyond.

Peter Dunn is an author, speaker and radio host, and he has a free podcast: "Million Dollar Plan." Have a question for Pete the Planner? Email him at AskPete@petetheplanner.com. The views and opinions expressed in this column are the author's and do not necessarily reflect those of USA TODAY.

[Feb 13, 2019] Microsoft patches 0-day vulnerabilities in IE and Exchange

It is unclear how long this vulnerability existed, but this is pretty serious stuff that shows how Hillary's server could be hacked via Abedin's account. As Abedin's technical level was lower than zero, hacking into her home laptop was just trivial.
Feb 13, 2019 | arstechnica.com

Microsoft also patched Exchange against a vulnerability that allowed remote attackers with little more than an unprivileged mailbox account to gain administrative control over the server. Dubbed PrivExchange, CVE-2019-0686 was publicly disclosed last month, along with proof-of-concept code that exploited it. In Tuesday's advisory, Microsoft officials said they haven't seen active exploits yet but that they were "likely."

[Feb 12, 2019] Older Workers Need a Different Kind of Layoff A 60-year-old whose position is eliminated might be unable to find another job, but could retire if allowed early access to Medicare

Highly recommended!
This is a constructive suggestion that is implementable even under neoliberalism. As everything is perverted under neoliberalism, though, it might instead prompt layoffs before the age of 55.
Notable quotes:
"... Older workers often struggle to get rehired as easily as younger workers. Age discrimination is a well-known problem in corporate America. What's a 60-year-old back office worker supposed to do if downsized in a merger? The BB&T-SunTrust prospect highlights the need for a new type of unemployment insurance for some of the workforce. ..."
"... One policy might be treating unemployed older workers differently than younger workers. Giving them unemployment benefits for a longer period of time than younger workers would be one idea, as well as accelerating the age of Medicare eligibility for downsized employees over the age of 55. The latter idea would help younger workers as well, by encouraging older workers to accept buyout packages -- freeing up career opportunities for younger workers. ..."
Feb 12, 2019 | www.bloomberg.com

The proposed merger between SunTrust and BB&T makes sense for both firms -- which is why Wall Street sent both stocks higher on Thursday after the announcement. But employees of the two banks, especially older workers who are not yet retirement age, are understandably less enthused at the prospect of downsizing. In a nation with almost 37 million workers over the age of 55, the quandary of the SunTrust-BB&T workforce will become increasingly familiar across the U.S. economy.

But what's good for the firms isn't good for all of the workers. Older workers often struggle to get rehired as easily as younger workers. Age discrimination is a well-known problem in corporate America. What's a 60-year-old back office worker supposed to do if downsized in a merger? The BB&T-SunTrust prospect highlights the need for a new type of unemployment insurance for some of the workforce.

One policy might be treating unemployed older workers differently than younger workers. Giving them unemployment benefits for a longer period of time than younger workers would be one idea, as well as accelerating the age of Medicare eligibility for downsized employees over the age of 55. The latter idea would help younger workers as well, by encouraging older workers to accept buyout packages -- freeing up career opportunities for younger workers.

The economy can be callous toward older workers, but policy makers don't have to be. We should think about ways of dealing with this shift in the labor market before it happens.

[Feb 11, 2019] Resuming rsync on a interrupted transfer

May 15, 2013 | stackoverflow.com

Glitches, May 15, 2013 at 18:06

I am trying to back up my file server to a remote file server using rsync. Rsync does not successfully resume when a transfer is interrupted. I used the partial option, but rsync doesn't find the file it already started, because it renames it to a temporary file, and when resumed it creates a new file and starts from the beginning.

Here is my command:

rsync -avztP -e "ssh -p 2222" /volume1/ myaccont@backup-server-1:/home/myaccount/backup/ --exclude "@spool" --exclude "@tmp"

When this command is run, a backup file named OldDisk.dmg from my local machine gets created on the remote machine as something like .OldDisk.dmg.SjDndj23.

Now when the internet connection gets interrupted and I have to resume the transfer, I have to find where rsync left off by finding the temp file like .OldDisk.dmg.SjDndj23 and rename it to OldDisk.dmg so that it sees there already exists a file that it can resume.

How do I fix this so I don't have to manually intervene each time?

Richard Michael, Nov 6, 2013 at 4:26

TL;DR: Use --timeout=X (X in seconds) to change the default rsync server timeout, not --inplace.

The issue is the rsync server processes (of which there are two, see rsync --server ... in ps output on the receiver) continue running, to wait for the rsync client to send data.

If the rsync server processes do not receive data for a sufficient time, they will indeed timeout, self-terminate and clean up by moving the temporary file to its "proper" name (e.g., no temporary suffix). You'll then be able to resume.

If you don't want to wait for the long default timeout to cause the rsync server to self-terminate, then when your internet connection returns, log into the server and clean up the rsync server processes manually. However, you must politely terminate rsync -- otherwise, it will not move the partial file into place; but rather, delete it (and thus there is no file to resume). To politely ask rsync to terminate, do not SIGKILL (e.g., -9 ), but SIGTERM (e.g., pkill -TERM -x rsync - only an example, you should take care to match only the rsync processes concerned with your client).

Fortunately there is an easier way: use the --timeout=X (X in seconds) option; it is passed to the rsync server processes as well.

For example, if you specify rsync ... --timeout=15 ... , both the client and server rsync processes will cleanly exit if they do not send/receive data in 15 seconds. On the server, this means moving the temporary file into position, ready for resuming.
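Applied to the original command from the question, a minimal sketch of a resumable invocation would look roughly like this (paths and username exactly as posted above; the timeout value is just an example):

# -P already implies --partial, so interrupted files are kept; adding
# --timeout=15 makes both client and server exit cleanly on a stall,
# moving the partial file into place ready for resuming
rsync -avztP --timeout=15 -e "ssh -p 2222" /volume1/ \
    myaccont@backup-server-1:/home/myaccount/backup/ \
    --exclude "@spool" --exclude "@tmp"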

I'm not sure how long the various rsync processes will try to send/receive data before they die (it might vary with operating system). In my testing, the server rsync processes remain running longer than the local client. On a "dead" network connection, the client terminates with a broken pipe (e.g., no network socket) after about 30 seconds; you could experiment or review the source code. Meaning, you could try to "ride out" the bad internet connection for 15-20 seconds.

If you do not clean up the server rsync processes (or wait for them to die), but instead immediately launch another rsync client process, two additional server processes will launch (for the other end of your new client process). Specifically, the new rsync client will not re-use/reconnect to the existing rsync server processes. Thus, you'll have two temporary files (and four rsync server processes) -- though, only the newer, second temporary file has new data being written (received from your new rsync client process).

Interestingly, if you then clean up all rsync server processes (for example, stop your client, which will stop the new rsync servers, then SIGTERM the older rsync servers), it appears to merge (assemble) all the partial files into the new properly named file. So, imagine a long-running partial copy which dies (and you think you've "lost" all the copied data), and a short-running re-launched rsync (oops!): you can stop the second client, SIGTERM the first servers, it will merge the data, and you can resume.

Finally, a few short remarks:

JamesTheAwesomeDude, Dec 29, 2013 at 16:50

Just curious: wouldn't SIGINT (aka ^C ) be 'politer' than SIGTERM ? – JamesTheAwesomeDude Dec 29 '13 at 16:50

Richard Michael, Dec 29, 2013 at 22:34

I didn't test how the server-side rsync handles SIGINT, so I'm not sure it will keep the partial file - you could check. Note that this doesn't have much to do with Ctrl-c ; it happens that your terminal sends SIGINT to the foreground process when you press Ctrl-c , but the server-side rsync has no controlling terminal. You must log in to the server and use kill . The client-side rsync will not send a message to the server (for example, after the client receives SIGINT via your terminal Ctrl-c ) - might be interesting though. As for anthropomorphizing, not sure what's "politer". :-) – Richard Michael Dec 29 '13 at 22:34

d-b, Feb 3, 2015 at 8:48

I just tried this timeout argument rsync -av --delete --progress --stats --human-readable --checksum --timeout=60 --partial-dir /tmp/rsync/ rsync://$remote:/ /src/ but then it timed out during the "receiving file list" phase (which in this case takes around 30 minutes). Setting the timeout to half an hour kind of defeats the purpose. Any workaround for this? – d-b Feb 3 '15 at 8:48

Cees Timmerman, Sep 15, 2015 at 17:10

@user23122 --checksum reads all data when preparing the file list, which is great for many small files that change often, but should be done on-demand for large files. – Cees Timmerman Sep 15 '15 at 17:10

[Feb 11, 2019] prsync command man page - pssh

Originally from Brent N. Chun ~ Intel Research Berkeley
Feb 11, 2019 | www.mankier.com

prsync -- parallel file sync program

Synopsis

prsync [-vAraz] [-h hosts_file] [-H [user@]host[:port]] [-l user] [-p par] [-o outdir] [-e errdir] [-t timeout] [-O options] [-x args] [-X arg] [-S args] local ... remote

Description

prsync is a program for copying files in parallel to a number of hosts using the popular rsync program. It provides features such as passing a password to ssh, saving output to files, and timing out.
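For example (a hypothetical invocation; the host file and paths are assumptions, using only the options documented below), copying a directory tree to every host listed in hosts.txt with ten concurrent rsync connections:

# hosts.txt contains one [user@]host[:port] entry per line
prsync -h hosts.txt -l root -p 10 -a -z -t 120 \
    -o /tmp/prsync-out -e /tmp/prsync-err \
    /etc/nagios/ /etc/nagios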

Options
-h host_file
--hosts host_file
Read hosts from the given host_file. Lines in the host file are of the form [user@]host[:port] and can include blank lines and comments (lines beginning with "#"). If multiple host files are given (the -h option is used more than once), then prsync behaves as though these files were concatenated together. If a host is specified multiple times, then prsync will connect the given number of times.
-H [user@]host[:port]
--host [user@]host[:port]
-H "[user@]host[:port] [ [user@]host[:port] ... ]"
--host "[user@]host[:port] [ [user@]host[:port] ... ]"

Add the given host strings to the list of hosts. This option may be given multiple times, and may be used in conjunction with the -h option.

-l user
--user user
Use the given username as the default for any host entries that don't explicitly specify a user.
-p parallelism
--par parallelism
Use the given number as the maximum number of concurrent connections.
-t timeout
--timeout timeout
Make connections time out after the given number of seconds. With a value of 0, prsync will not timeout any connections.
-o outdir
--outdir outdir
Save standard output to files in the given directory. Filenames are of the form [user@]host[:port][.num] where the user and port are only included for hosts that explicitly specify them. The number is a counter that is incremented each time for hosts that are specified more than once.
-e errdir
--errdir errdir
Save standard error to files in the given directory. Filenames are of the same form as with the -o option.
-x args
--extra-args args
Passes extra rsync command-line arguments (see the rsync(1) man page for more information about rsync arguments). This option may be specified multiple times. The arguments are processed to split on whitespace, protect text within quotes, and escape with backslashes. To pass arguments without such processing, use the -X option instead.
-X arg
--extra-arg arg
Passes a single rsync command-line argument (see the rsync(1) man page for more information about rsync arguments). Unlike the -x option, no processing is performed on the argument, including word splitting. To pass multiple command-line arguments, use the option once for each argument.
-O options
--options options
SSH options in the format used in the SSH configuration file (see the ssh_config(5) man page for more information). This option may be specified multiple times.
-A
--askpass
Prompt for a password and pass it to ssh. The password may be used either to unlock a key or for password authentication. The password is transferred in a fairly secure manner (e.g., it will not show up in argument lists). However, be aware that a root user on your system could potentially intercept the password.
-v
--verbose
Include error messages from rsync with the -i and -e options.
-r
--recursive
Recursively copy directories.
-a
--archive
Use rsync archive mode (rsync's -a option).
-z
--compress
Use rsync compression.
-S args
--ssh-args args
Passes extra SSH command-line arguments (see the ssh(1) man page for more information about SSH arguments). The given value is appended to the ssh command (rsync's -e option) without any processing.
Tips

The ssh_config file can include an arbitrary number of Host sections. Each host entry specifies ssh options which apply only to the given host. Host definitions can even behave like aliases if the HostName option is included. This ssh feature, in combination with pssh host files, provides a tremendous amount of flexibility.
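A brief sketch of that combination (the host name, address, and options here are assumptions): an ssh_config Host entry acts as an alias that a pssh/prsync host file can reference by name.

# ~/.ssh/config -- "web1" becomes an alias usable in pssh/prsync host files
Host web1
    HostName 10.0.0.11
    User deploy
    Port 2222

A host file line containing just web1 then picks up the HostName, User, and Port from this entry.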

Exit Status

The exit status codes from prsync are as follows:

0
Success
1
Miscellaneous error
2
Syntax or usage error
3
At least one process was killed by a signal or timed out.
4
All processes completed, but at least one rsync process reported an error (exit status other than 0).
Authors

Written by Brent N. Chun <bnc@theether.org> and Andrew McNabb <amcnabb@mcnabbs.org>.

https://github.com/lilydjwg/pssh

See Also

rsync(1), ssh(1), ssh_config(5), pssh(1), pslurp(1), pnuke(1)

Referenced By

pnuke(1), pscp.pssh(1), pslurp(1), pssh(1).

[Feb 07, 2019] Installing Nagios-3.4 in CentOS 6.3 LinTut

Feb 07, 2019 | lintut.com

Nagios is open-source software used for network and infrastructure monitoring. Nagios will monitor servers, switches, applications and services. It alerts the system administrator when something goes wrong and alerts again when the issue has been rectified.

View also: How to Enable EPEL Repository for RHEL/CentOS 6/5

yum install nagios nagios-devel nagios-plugins* gd gd-devel httpd php gcc glibc glibc-common

By default, doing yum install nagios sets the authorized user name in the cgi.cfg file to nagiosadmin and uses /etc/nagios/passwd as the htpasswd file. So to keep the steps simple, I am using the same name.
# htpasswd -c /etc/nagios/passwd nagiosadmin

Check the values given below in /etc/nagios/cgi.cfg:
nano /etc/nagios/cgi.cfg
# AUTHENTICATION USAGE
use_authentication=1
# SYSTEM/PROCESS INFORMATION ACCESS
authorized_for_system_information=nagiosadmin
# CONFIGURATION INFORMATION ACCESS
authorized_for_configuration_information=nagiosadmin
# SYSTEM/PROCESS COMMAND ACCESS
authorized_for_system_commands=nagiosadmin
# GLOBAL HOST/SERVICE VIEW ACCESS
authorized_for_all_services=nagiosadmin
authorized_for_all_hosts=nagiosadmin
# GLOBAL HOST/SERVICE COMMAND ACCESS
authorized_for_all_service_commands=nagiosadmin
authorized_for_all_host_commands=nagiosadmin

To give the nagiosadmin user access over HTTP, the file /etc/httpd/conf.d/nagios.conf exists. Below is the nagios.conf configuration for the Nagios server.
cat /etc/httpd/conf.d/nagios.conf
# SAMPLE CONFIG SNIPPETS FOR APACHE WEB SERVER
# Last Modified: 11-26-2005
#
# This file contains examples of entries that need
# to be incorporated into your Apache web server
# configuration file. Customize the paths, etc. as
# needed to fit your system.

ScriptAlias /nagios/cgi-bin/ "/usr/lib/nagios/cgi-bin/"

<Directory "/usr/lib/nagios/cgi-bin/">
#  SSLRequireSSL
   Options ExecCGI
   AllowOverride None
   Order allow,deny
   Allow from all
#  Order deny,allow
#  Deny from all
#  Allow from 127.0.0.1
   AuthName "Nagios Access"
   AuthType Basic
   AuthUserFile /etc/nagios/passwd
   Require valid-user
</Directory>

Alias /nagios "/usr/share/nagios/html"

<Directory "/usr/share/nagios/html">
#  SSLRequireSSL
   Options None
   AllowOverride None
   Order allow,deny
   Allow from all
#  Order deny,allow
#  Deny from all
#  Allow from 127.0.0.1
   AuthName "Nagios Access"
   AuthType Basic
   AuthUserFile /etc/nagios/passwd
   Require valid-user
</Directory>

Start httpd and nagios:

/etc/init.d/httpd start
/etc/init.d/nagios start

Note: SELINUX and IPTABLES are disabled.

Access the Nagios server at http://nagios_server_ip-address/nagios and give the username nagiosadmin and the password which you set for the nagiosadmin user.
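Whenever the configuration changes later, it is worth validating it before a restart; a minimal sanity check (assuming the standard config path from this install):

# Validate the Nagios main and object configuration before a restart
nagios -v /etc/nagios/nagios.cfg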

[Feb 04, 2019] Do not play those dangerous games with resizing of partitions unless absolutely necessary

Copying to an additional drive (can be USB), repartitioning, and then copying everything back is a safer bet
May 07, 2017 | superuser.com
womble

In theory, you could reduce the size of sda1, increase the size of the extended partition, shift the contents of the extended partition down, then increase the size of the PV on the extended partition and you'd have the extra room.

However, the number of possible things that can go wrong there is just astronomical.

So I'd recommend either buying a second hard drive (and possibly transferring everything onto it in a more sensible layout, then repartitioning your current drive better) or just making some bind mounts of various bits and pieces out of /home into / to free up a bit more space.

--womble
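A minimal sketch of womble's bind-mount suggestion (the directory names here are assumptions): move a space-hungry directory onto the partition that has room, then bind-mount it back at its original path.

mv /var/cache/bigapp /home/bigapp-cache
mkdir -p /var/cache/bigapp
mount --bind /home/bigapp-cache /var/cache/bigapp

# make the bind mount persistent across reboots
echo '/home/bigapp-cache /var/cache/bigapp none bind 0 0' >> /etc/fstab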

[Feb 04, 2019] Ticket 3745 (Integration mc with mc2(Lua))

This ticket is from 2016...
Dec 01, 2020 | midnight-commander.org
Ticket #3745 (closed enhancement: invalid)

Opened 2 years ago

Last modified 2 years ago

Integration mc with mc2 (Lua)

Description: I think it is necessary that the mc and mc2 code bases correspond to each other. mooffie, can you check that patches from andrew_b merge easily with mc2, and if some patch conflicts with mc2 code, hold those changes by writing about it in the corresponding ticket? zaytsev, can you help automate this (continuous integration, Travis and so on)? Sorry, but some words in Russian (translated):

Guys, I am not trying to give orders, you are doing great work. I just wanted to point out that mooffie is trying to keep his code up to date, but seeing how he runs into problems out of nowhere, I am afraid he may lose his enthusiasm.

Change History comment:1 Changed 2 years ago by zaytsev-work

https://mail.gnome.org/archives/mc-devel/2016-February/msg00021.html

I asked some time ago what plans mooffie has for mc2 and never got an answer. Note that I totally don't blame him for that. Everyone here is working at their own pace. Sometimes I disappear for weeks or months, because I can't get a spare 5 minutes, not even speaking of several hours, due to the non-mc-related workload. I hope that one day we'll figure out the way towards merging it, and eventually get it done.

In the mean time, he's working together with us by offering extremely important and well-prepared contributions, which are a pleasure to deal with and we are integrating them as fast as we can, so it's not like we are at war and not talking to each other.

Anyways, creating random noise in the ticket tracking system will not help to advance your cause. The only way to influence the process is to invest serious amount of time in the development.

[Feb 02, 2019] Google Employees Are Fighting With Executives Over Pay

Notable quotes:
"... In July, Bloomberg reported that, for the first time, more than 50 percent of Google's workforce were temps, contractors, and vendors. ..."
Feb 02, 2019 | www.wired.com

... ... ...

Asked whether they have confidence in CEO Sundar Pichai and his management team to "effectively lead in the future," 74 percent of employees responded "positive," as opposed to "neutral" or "negative," in late 2018, down from 92 percent "positive" the year before. The 18-point drop left employee confidence at its lowest point in at least six years. The results of the survey, known internally as Googlegeist, also showed a decline in employees' satisfaction with their compensation, with 54 percent saying they were satisfied, compared with 64 percent the prior year.

The drop in employee sentiment helps explain why internal debate around compensation, pay equity, and trust in executives has heated up in recent weeks -- and why an HR presentation from 2016 went viral inside the company three years later.

The presentation, first reported by Bloomberg and reviewed by WIRED, dates from July 2016, about a year after Google started an internal effort to curb spending. In the slide deck, Google's human-resources department presents potential ways to cut the company's $20 billion compensation budget. Ideas include: promoting fewer people, hiring proportionately more low-level employees, and conducting an audit to make sure Google is paying benefits "(only) for the right people." In some cases, HR suggested ways to implement changes while drawing little attention, or tips on how to sell the changes to Google employees. Some of the suggestions were implemented, like eliminating the annual employee holiday gift; most were not.

Another, more radical proposal floated inside the company around the same time didn't appear in the deck. That suggested converting some full-time employees to contractors to save money. A person familiar with the situation said this proposal was not implemented. In July, Bloomberg reported that, for the first time, more than 50 percent of Google's workforce were temps, contractors, and vendors.

[Jan 31, 2019] Troubleshooting performance issue in CentOS-RHEL using collectl utility The Geek Diary

Jan 31, 2019 | www.thegeekdiary.com

Troubleshooting performance issue in CentOS/RHEL using collectl utility

By admin

Unlike most monitoring tools, which either focus on a small set of statistics, format their output in only one way, or run either interactively or as a daemon but not both, collectl tries to do it all. You can choose to monitor any of a broad set of subsystems which currently include buddyinfo, cpu, disk, inodes, InfiniBand, lustre, memory, network, nfs, processes, quadrics, slabs, sockets and tcp.

Installing collectl

The collectl community project is maintained at http://collectl.sourceforge.net/ as well as provided in the Fedora community project. For Red Hat Enterprise Linux 6 and 7, the easiest way to install collectl is via the EPEL repositories (Extra Packages for Enterprise Linux) maintained by the Fedora community.

Once set up, collectl can be installed with the following command:

# yum install collectl

The packages are also available for direct download using the following links:

RHEL 5 x86_64 (available in the EPEL archives) https://archive.fedoraproject.org/pub/archive/epel/5/x86_64/
RHEL 6 x86_64 http://dl.fedoraproject.org/pub/epel/6/x86_64/
RHEL 7 x86_64 http://dl.fedoraproject.org/pub/epel/7/x86_64/

General usage of collectl

The collectl utility can be run manually via the command line or as a service. Data will be logged to /var/log/collectl/*.raw.gz. The logs will be rotated every 24 hours by default. To run as a service:

# chkconfig collectl on       # [optional, to start at boot time]
# service collectl start
Sample Intervals

When run manually from the command line, the first Interval value is 1. When running as a service, default sample intervals are as shown below. It might sometimes be desired to lower these to avoid averaging, such as to 1,30,60 (see the sketch after the excerpt below).

# grep -i interval /etc/collectl.conf 
#Interval =     10
#Interval2 =    60
#Interval3 =   120
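For instance, applying the lower 1,30,60 intervals suggested above means uncommenting and editing those lines in /etc/collectl.conf; a sketch:

# /etc/collectl.conf -- lowered sample intervals (in seconds)
Interval =    1
Interval2 =  30
Interval3 =  60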
Using collectl to troubleshoot disk or SAN storage performance

The defaults (10s for all subsystems except process data, which is collected at 60s intervals) are best left as-is, even for storage performance analysis.

The SAR Equivalence Matrix shows common SAR command equivalents to help experienced SAR users learn to use Collectl. The following example command will view summary detail of the CPU, Network and Disk from the file /var/log/collectl/HOSTNAME-20190116-164506.raw.gz :

# collectl -scnd -oT -p HOSTNAME-20190116-164506.raw.gz
#         <----CPU[HYPER]-----><----------Disks-----------><----------Network---------->
#Time     cpu sys inter  ctxsw KBRead  Reads KBWrit Writes   KBIn  PktIn  KBOut  PktOut 
16:46:10    9   2 14470  20749      0      0     69      9      0      1      0       2 
16:46:20   13   4 14820  22569      0      0    312     25    253    174      7      79 
16:46:30   10   3 15175  21546      0      0     54      5      0      2      0       3 
16:46:40    9   2 14741  21410      0      0     57      9      1      2      0       4 
16:46:50   10   2 14782  23766      0      0    374      8    250    171      5      75 
....

The next example will output the 1 minute period from 17:00 – 17:01.

# collectl -scnd -oT --from 17:00 --thru 17:01 -p HOSTNAME-20190116-164506.raw.gz
#         <----CPU[HYPER]-----><----------Disks-----------><----------Network---------->
#Time     cpu sys inter  ctxsw KBRead  Reads KBWrit Writes   KBIn  PktIn  KBOut  PktOut 
17:00:00   13   3 15870  25320      0      0     67      9    251    172      6      90 
17:00:10   16   4 16386  24539      0      0    315     17    246    170      6      84 
17:00:20   10   2 14959  22465      0      0     65     26      5      6      1       8 
17:00:30   11   3 15056  24852      0      0    323     12    250    170      5      69 
17:00:40   18   5 16595  23826      0      0    463     13      1      5      0       5 
17:00:50   12   3 15457  23663      0      0     57      9    250    170      6      76 
17:01:00   13   4 15479  24488      0      0    304      7    254    176      5      70

The next example will output Detailed Disk data.

# collectl -scnD -oT -p HOSTNAME-20190116-164506.raw.gz

### RECORD    7 >>> tabserver <<< (1366318860.001) (Thu Apr 18 17:01:00 2013) ###

# CPU[HYPER] SUMMARY (INTR, CTXSW & PROC /sec)
# User  Nice   Sys  Wait   IRQ  Soft Steal  Idle  CPUs  Intr  Ctxsw  Proc  RunQ   Run   Avg1  Avg5 Avg15 RunT BlkT
     8     0     3     0     0     0     0    86     8   15K    24K     0   638     5   1.07  1.05  0.99    0    0

# DISK STATISTICS (/sec)
#          <---------reads---------><---------writes---------><--------averages--------> Pct
#Name       KBytes Merged  IOs Size  KBytes Merged  IOs Size  RWSize  QLen  Wait SvcTim Util
sda              0      0    0    0     304     11    7   44      44     2    16      6    4
sdb              0      0    0    0       0      0    0    0       0     0     0      0    0
dm-0             0      0    0    0       0      0    0    0       0     0     0      0    0
dm-1             0      0    0    0       5      0    1    4       4     1     2      2    0
dm-2             0      0    0    0     298      0   14   22      22     1     4      3    4
dm-3             0      0    0    0       0      0    0    0       0     0     0      0    0
dm-4             0      0    0    0       0      0    0    0       0     0     0      0    0
dm-5             0      0    0    0       0      0    0    0       0     0     0      0    0
dm-6             0      0    0    0       0      0    0    0       0     0     0      0    0
dm-7             0      0    0    0       0      0    0    0       0     0     0      0    0
dm-8             0      0    0    0       0      0    0    0       0     0     0      0    0
dm-9             0      0    0    0       0      0    0    0       0     0     0      0    0
dm-10            0      0    0    0       0      0    0    0       0     0     0      0    0
dm-11            0      0    0    0       0      0    0    0       0     0     0      0    0

# NETWORK SUMMARY (/sec)
# KBIn  PktIn SizeIn  MultI   CmpI  ErrsI  KBOut PktOut  SizeO   CmpO  ErrsO
   253    175   1481      0      0      0      5     70     79      0      0
....
Commonly used options

Lowercase subsystem letters passed to -s (for example c, n, d) generate summary data, which is the total of ALL data for a particular type. Uppercase letters (for example D, as in the -scnD example above) generate detail data, typically but not limited to the device level. The most useful switches beyond -s are the ones used in the examples above: -oT to prefix each line with a timestamp, -p to play back a recorded raw file, and --from/--thru to restrict playback to a time window.

Final Thoughts

Performance Co-Pilot (PCP) is the preferred tool for collecting comprehensive performance metrics for performance analysis and troubleshooting. It is shipped and supported in Red Hat Enterprise Linux 6 & 7 and is the preferred recommendation over Collectl or Sar/Sysstat. It also includes conversion tools between its own performance data and Collectl & Sar/Sysstat.

[Jan 31, 2019] Linus Torvalds and others on Linux's systemd by By Steven J. Vaughan-Nichols

Notable quotes:
"... I think some of the design details are insane (I dislike the binary logs, for example) ..."
"... Systemd problems might not have mattered that much, except that GNOME has a similar attitude; they only care for a small subset of the Linux desktop users, and they have historically abandoned some ways of interacting the Desktop in the interest of supporting touchscreen devices and to try to attract less technically sophisticated users. ..."
"... If you don't fall in the demographic of what GNOME supports, you're sadly out of luck. (Or you become a second class citizen, being told that you have to rely on GNOME extensions that may break on every single new version of GNOME.) ..."
"... As a result, many traditional GNOME users have moved over to Cinnamon, XFCE, KDE, etc. But as systemd starts subsuming new functions, components like network-manager will only work on systemd or other components that are forced to be used due to a network of interlocking dependencies; and it may simply not be possible for these alternate desktops to continue to function, because there is [no] viable alternative to systemd supported by more and more distributions. ..."
| www.zdnet.com

So what do Linux's leaders think of all this? I asked them and this is what they told me.

Linus Torvalds said:

"I don't actually have any particularly strong opinions on systemd itself. I've had issues with some of the core developers that I think are much too cavalier about bugs and compatibility, and I think some of the design details are insane (I dislike the binary logs, for example) , but those are details, not big issues."

Theodore "Ted" Ts'o, a leading Linux kernel developer and a Google engineer, sees systemd as potentially being more of a problem. "The bottom line is that they are trying to solve some real problems that matter in some use cases. And, [that] sometimes that will break assumptions made in other parts of the system."

Another concern that Ts'o made -- which I've heard from many other developers -- is that the systemd move was made too quickly: "The problem is sometimes what they break are in other parts of the software stack, and so long as it works for GNOME, they don't necessarily consider it their responsibility to fix the rest of the Linux ecosystem."

This, as Ts'o sees it, feeds into another problem:

" Systemd problems might not have mattered that much, except that GNOME has a similar attitude; they only care for a small subset of the Linux desktop users, and they have historically abandoned some ways of interacting the Desktop in the interest of supporting touchscreen devices and to try to attract less technically sophisticated users.

If you don't fall in the demographic of what GNOME supports, you're sadly out of luck. (Or you become a second class citizen, being told that you have to rely on GNOME extensions that may break on every single new version of GNOME.) "

Ts'o has an excellent point. GNOME 3.x has alienated both users and developers. He continued,

" As a result, many traditional GNOME users have moved over to Cinnamon, XFCE, KDE, etc. But as systemd starts subsuming new functions, components like network-manager will only work on systemd or other components that are forced to be used due to a network of interlocking dependencies; and it may simply not be possible for these alternate desktops to continue to function, because there is [no] viable alternative to systemd supported by more and more distributions. "

Of course, Ts'o continued, "None of these nightmare scenarios have happened yet. The people who are most stridently objecting to systemd are people who are convinced that the nightmare scenario is inevitable so long as we continue on the same course and altitude."

Ts'o is "not entirely certain it's going to happen, but he's afraid it will.

What I find puzzling about all this is that even though everyone admits that sysvinit needed replacing and many people dislike systemd, the distributions keep adopting it. Only a few distributions, including Slackware, Gentoo, PCLinuxOS, and Chrome OS, haven't adopted it.

It's not like there aren't alternatives. These include Upstart, runit, and OpenRC.

If systemd really does turn out to be as bad as some developers fear, there are plenty of replacements waiting in the wings. Indeed, rather than hear so much about how awful systemd is, I'd rather see developers spending their time working on an alternative.

[Jan 29, 2019] hardware - Is post-sudden-power-loss filesystem corruption on an SSD drive's ext3 partition expected behavior

Dec 04, 2012 | serverfault.com

My company makes an embedded Debian Linux device that boots from an ext3 partition on an internal SSD drive. Because the device is an embedded "black box", it is usually shut down the rude way, by simply cutting power to the device via an external switch.

This is normally okay, as ext3's journalling keeps things in order, so other than the occasional loss of part of a log file, things keep chugging along fine.

However, we've recently seen a number of units where after a number of hard-power-cycles the ext3 partition starts to develop structural issues -- in particular, we run e2fsck on the ext3 partition and it finds a number of issues like those shown in the output listing at the bottom of this Question. Running e2fsck until it stops reporting errors (or reformatting the partition) clears the issues.

My question is... what are the implications of seeing problems like this on an ext3/SSD system that has been subjected to lots of sudden/unexpected shutdowns?

My feeling is that this might be a sign of a software or hardware problem in our system, since my understanding is that (barring a bug or hardware problem) ext3's journalling feature is supposed to prevent these sorts of filesystem-integrity errors. (Note: I understand that user-data is not journalled and so munged/missing/truncated user-files can happen; I'm specifically talking here about filesystem-metadata errors like those shown below)

My co-worker, on the other hand, says that this is known/expected behavior because SSD controllers sometimes re-order write commands and that can cause the ext3 journal to get confused. In particular, he believes that even given normally functioning hardware and bug-free software, the ext3 journal only makes filesystem corruption less likely, not impossible, so we should not be surprised to see problems like this from time to time.

Which of us is right?

Embedded-PC-failsafe:~# ls
Embedded-PC-failsafe:~# umount /mnt/unionfs
Embedded-PC-failsafe:~# e2fsck /dev/sda3
e2fsck 1.41.3 (12-Oct-2008)
embeddedrootwrite contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Invalid inode number for '.' in directory inode 46948.
Fix<y>? yes

Directory inode 46948, block 0, offset 12: directory corrupted
Salvage<y>? yes

Entry 'status_2012-11-26_14h13m41.csv' in /var/log/status_logs (46956) has deleted/unused inode 47075.  Clear<y>? yes
Entry 'status_2012-11-26_10h42m58.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47076.  Clear<y>? yes
Entry 'status_2012-11-26_11h29m41.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47080.  Clear<y>? yes
Entry 'status_2012-11-26_11h42m13.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47081.  Clear<y>? yes
Entry 'status_2012-11-26_12h07m17.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47083.  Clear<y>? yes
Entry 'status_2012-11-26_12h14m53.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47085.  Clear<y>? yes
Entry 'status_2012-11-26_15h06m49.csv' in /var/log/status_logs (46956) has deleted/unused inode 47088.  Clear<y>? yes
Entry 'status_2012-11-20_14h50m09.csv' in /var/log/status_logs (46956) has deleted/unused inode 47073.  Clear<y>? yes
Entry 'status_2012-11-20_14h55m32.csv' in /var/log/status_logs (46956) has deleted/unused inode 47074.  Clear<y>? yes
Entry 'status_2012-11-26_11h04m36.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47078.  Clear<y>? yes
Entry 'status_2012-11-26_11h54m45.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47082.  Clear<y>? yes
Entry 'status_2012-11-26_12h12m20.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47084.  Clear<y>? yes
Entry 'status_2012-11-26_12h33m52.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47086.  Clear<y>? yes
Entry 'status_2012-11-26_10h51m59.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47077.  Clear<y>? yes
Entry 'status_2012-11-26_11h17m09.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47079.  Clear<y>? yes
Entry 'status_2012-11-26_12h54m11.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47087.  Clear<y>? yes

Pass 3: Checking directory connectivity
'..' in /etc/network/run (46948) is <The NULL inode> (0), should be /etc/network (46953).
Fix<y>? yes

Couldn't fix parent of inode 46948: Couldn't find parent directory entry

Pass 4: Checking reference counts
Unattached inode 46945
Connect to /lost+found<y>? yes

Inode 46945 ref count is 2, should be 1.  Fix<y>? yes
Inode 46953 ref count is 5, should be 4.  Fix<y>? yes

Pass 5: Checking group summary information
Block bitmap differences:  -(208264--208266) -(210062--210068) -(211343--211491) -(213241--213250) -(213344--213393) -213397 -(213457--213463) -(213516--213521) -(213628--213655) -(213683--213688) -(213709--213728) -(215265--215300) -(215346--215365) -(221541--221551) -(221696--221704) -227517
Fix<y>? yes

Free blocks count wrong for group #6 (17247, counted=17611).
Fix<y>? yes

Free blocks count wrong (161691, counted=162055).
Fix<y>? yes

Inode bitmap differences:  +(47089--47090) +47093 +47095 +(47097--47099) +(47101--47104) -(47219--47220) -47222 -47224 -47228 -47231 -(47347--47348) -47350 -47352 -47356 -47359 -(47457--47488) -47985 -47996 -(47999--48000) -48017 -(48027--48028) -(48030--48032) -48049 -(48059--48060) -(48062--48064) -48081 -(48091--48092) -(48094--48096)
Fix<y>? yes

Free inodes count wrong for group #6 (7608, counted=7624).
Fix<y>? yes

Free inodes count wrong (61919, counted=61935).
Fix<y>? yes


embeddedrootwrite: ***** FILE SYSTEM WAS MODIFIED *****

embeddedrootwrite: ********** WARNING: Filesystem still has errors **********

embeddedrootwrite: 657/62592 files (24.4% non-contiguous), 87882/249937 blocks

Embedded-PC-failsafe:~# 
Embedded-PC-failsafe:~# e2fsck /dev/sda3
e2fsck 1.41.3 (12-Oct-2008)
embeddedrootwrite contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Directory entry for '.' in ... (46948) is big.
Split<y>? yes

Missing '..' in directory inode 46948.
Fix<y>? yes

Setting filetype for entry '..' in ... (46948) to 2.
Pass 3: Checking directory connectivity
'..' in /etc/network/run (46948) is <The NULL inode> (0), should be /etc/network (46953).
Fix<y>? yes

Pass 4: Checking reference counts
Inode 2 ref count is 12, should be 13.  Fix<y>? yes

Pass 5: Checking group summary information

embeddedrootwrite: ***** FILE SYSTEM WAS MODIFIED *****
embeddedrootwrite: 657/62592 files (24.4% non-contiguous), 87882/249937 blocks
Embedded-PC-failsafe:~# 
Embedded-PC-failsafe:~# e2fsck /dev/sda3
e2fsck 1.41.3 (12-Oct-2008)
embeddedrootwrite: clean, 657/62592 files, 87882/249937 blocks
ewwhite answered, Dec 4 '12 at 1:24:

You're both wrong (maybe?)... ext3 is coping the best it can with having its underlying storage removed so abruptly.

Your SSD probably has some type of onboard cache. You don't mention the make/model of SSD in use, but this sounds like a consumer-level SSD versus an enterprise or industrial-grade model.

Either way, the cache is used to help coalesce writes and prolong the life of the drive. If there are writes in-transit, the sudden loss of power is definitely the source of your corruption. True enterprise and industrial SSDs have supercapacitors that maintain power long enough to move data from cache to nonvolatile storage, much in the same way battery-backed and flash-backed RAID controller caches work.

If your drive doesn't have a supercap, the in-flight transactions are being lost, hence the filesystem corruption. ext3 is probably being told that everything is on stable storage, but that's just a function of the cache.

psusi answered, Dec 5 '12 at 19:09:

You are right and your coworker is wrong. Barring something going wrong, the journal makes sure you never have inconsistent fs metadata. You might check with hdparm to see if the drive's write cache is enabled. If it is, and you have not enabled IO barriers (off by default on ext3, on by default in ext4), then that would be the cause of the problem.

The barriers are needed to force the drive write cache to flush at the correct time to maintain consistency, but some drives are badly behaved and either report that their write cache is disabled when it is not, or silently ignore the flush commands. This prevents the journal from doing its job.
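A short sketch of the checks psusi suggests (the device and mount point here are assumptions):

# Show whether the drive's volatile write cache is enabled (1 = on)
hdparm -W /dev/sda

# Remount ext3 with write barriers so journal ordering survives power loss
mount -o remount,barrier=1 /dev/sda1 /mnt/data

# Or disable the drive's write cache entirely (slower, but safer)
hdparm -W0 /dev/sda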

[Jan 29, 2019] xfs corrupted after power failure

Highly recommended!
Oct 15, 2013 | www.linuxquestions.org

katmai90210

hi guys,

i have a problem. yesterday there was a power outage at one of my datacenters, where i have a relatively large fileserver. 2 arrays, 1 x 14 tb and 1 x 18 tb both in raid6, with a 3ware card.

after the outage, the server came back online, the xfs partitions were mounted, and everything looked okay. i could access the data and everything seemed just fine.

today i woke up to lots of i/o errors, and when i rebooted the server, the partitions would not mount:

Oct 14 04:09:17 kp4 kernel:
Oct 14 04:09:17 kp4 kernel: XFS internal error XFS_WANT_CORRUPTED_RETURN a<ffffffff80056933>] pdflush+0x0/0x1fb
Oct 14 04:09:17 kp4 kernel: [<ffffffff80056a84>] pdflush+0x151/0x1fb
Oct 14 04:09:17 kp4 kernel: [<ffffffff800cd931>] wb_kupdate+0x0/0x16a
Oct 14 04:09:17 kp4 kernel: [<ffffffff80032c2b>] kthread+0xfe/0x132
Oct 14 04:09:17 kp4 kernel: [<ffffffff8005dfc1>] child_rip+0xa/0x11
Oct 14 04:09:17 kp4 kernel: [<ffffffff800a3ab7>] keventd_create_kthread+0x0/0xc4
Oct 14 04:09:17 kp4 kernel: [<ffffffff80032b2d>] kthread+0x0/0x132
Oct 14 04:09:17 kp4 kernel: [<ffffffff8005dfb7>] child_rip+0x0/0x11
Oct 14 04:09:17 kp4 kernel:
Oct 14 04:09:17 kp4 kernel: XFS internal error XFS_WANT_CORRUPTED_RETURN at line 279 of file fs/xfs/xfs_alloc.c. Caller 0xffffffff88342331
Oct 14 04:09:17 kp4 kernel:

got a bunch of these in dmesg.

The array is fine:

[root@kp4 ~]# tw_cli
//kp4> focus c6
//kp4/c6> show

Unit UnitType Status %RCmpl %V/I/M Stripe Size(GB) Cache AVrfy
------------------------------------------------------------------------------
u0 RAID-6 OK - - 256K 13969.8 RiW ON
u1 RAID-6 OK - - 256K 16763.7 RiW ON

VPort Status Unit Size Type Phy Encl-Slot Model
------------------------------------------------------------------------------
p0 OK u1 2.73 TB SATA 0 - Hitachi HDS723030AL
p1 OK u1 2.73 TB SATA 1 - Hitachi HDS723030AL
p2 OK u1 2.73 TB SATA 2 - Hitachi HDS723030AL
p3 OK u1 2.73 TB SATA 3 - Hitachi HDS723030AL
p4 OK u1 2.73 TB SATA 4 - Hitachi HDS723030AL
p5 OK u1 2.73 TB SATA 5 - Hitachi HDS723030AL
p6 OK u1 2.73 TB SATA 6 - Hitachi HDS723030AL
p7 OK u1 2.73 TB SATA 7 - Hitachi HDS723030AL
p8 OK u0 2.73 TB SATA 8 - Hitachi HDS723030AL
p9 OK u0 2.73 TB SATA 9 - Hitachi HDS723030AL
p10 OK u0 2.73 TB SATA 10 - Hitachi HDS723030AL
p11 OK u0 2.73 TB SATA 11 - Hitachi HDS723030AL
p12 OK u0 2.73 TB SATA 12 - Hitachi HDS723030AL
p13 OK u0 2.73 TB SATA 13 - Hitachi HDS723030AL
p14 OK u0 2.73 TB SATA 14 - Hitachi HDS723030AL

Name OnlineState BBUReady Status Volt Temp Hours LastCapTest
---------------------------------------------------------------------------
bbu On Yes OK OK OK 0 xx-xxx-xxxx

i googled for solutions and i think i jumped the gun by doing

xfs_repair -L /dev/sdc

it would not clean it with xfs_repair /dev/sdc, and everybody pretty much says the same thing.

this is what i was getting when trying to mount the array.

Filesystem Corruption of in-memory data detected. Shutting down filesystem xfs_check

Did i jump the gun by using the -L switch :/ ?

jefro

Here is the RH data on that.

https://docs.fedoraproject.org/en-US...xfsrepair.html
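For what it's worth, the usual escalation before reaching for -L looks roughly like this (device and mount point match the thread above; treat it as a sketch):

# 1. Try mounting first -- a successful mount replays the XFS log safely
mount /dev/sdc /mnt/array

# 2. If the mount fails, do a dry run (-n = no modify) to gauge the damage
xfs_repair -n /dev/sdc

# 3. Only as a last resort, zero the log; -L throws away in-flight
#    transactions and can lose the most recent metadata updates
xfs_repair -L /dev/sdc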

[Jan 29, 2019] an HVAC tech that confused the BLACK button that got pushed to exit the room with the RED button clearly marked EMERGENCY POWER OFF.

Jan 29, 2019 | thwack.solarwinds.com

George Sutherland Jul 8, 2015 9:58 AM ( in response to RandyBrown ) had similar thing happen with an HVAC tech that confused the BLACK button that got pushed to exit the room with the RED button clearly marked EMERGENCY POWER OFF. Clear plastic cover installed with in 24 hours.... after 3 hours of recovery!

PS... He told his boss that he did not do it.... the camera that focused on the door told a much different story. He was persona non grata at our site after that.

[Jan 29, 2019] HVAC units greatly help to increase reliability

Jan 29, 2019 | thwack.solarwinds.com

sleeper_777 Jul 15, 2015 1:07 PM

Worked at a bank. 6" raised floor. Liebert cooling units on floor with all network equipment. Two units developed a water drain issue over a weekend.

About an hour into Monday morning, devices, servers, routers, in a domino effect starting shorting out and shutting down or blowing up, literally.

Opened the floor tiles to find three inches of water.

We did not have water alarms on the floor at the time.

Shortly after the incident, we did.

But the mistake was very costly and multiple 24 hour shifts of IT people made it a week of pure h3ll.

[Jan 29, 2019] In a former life, I had every server crash over the weekend when the facilities group took down the climate control and HVAC systems without warning

Jan 29, 2019 | thwack.solarwinds.com

[Jan 29, 2019] [SOLVED] Unable to mount root file system after a power failure

Jan 29, 2019 | www.linuxquestions.org
07-01-2012, 12:56 PM # 1
damateem LQ Newbie
Unable to mount root file system after a power failure

We had a storm yesterday and the power dropped out, causing my Ubuntu server to shut off. Now, when booting, I get

[ 0.564310] Kernel panic - not syncing: VFS: Unable to mount root fs on unkown-block(0,0)

It looks like a file system corruption, but I'm having a hard time fixing the problem. I'm using Rescue Remix 12-04 to boot from USB and get access to the system.

Using

sudo fdisk -l

Shows the hard drive as

/dev/sda1: Linux
/dev/sda2: Extended
/dev/sda5: Linux LVM

Using

sudo lvdisplay

Shows LV Names as

/dev/server1/root
/dev/server1/swap_1

Using

sudo blkid

Shows types as

/dev/sda1: ext2
/dev/sda5: LVM2_member
/dev/mapper/server1-root: ext4
/dev/mapper/server1-swap_1: swap

I can mount sda1 and server1/root and all the files appear normal, although I'm not really sure what issues I should be looking for. On sda1, I see a grub folder and several other files. On root, I see the file system as it was before I started having trouble.

I've ran the following fsck commands and none of them report any errors

sudo fsck -f /dev/sda1
sudo fsck -f /dev/server1/root
sudo fsck.ext2 -f /dev/sda1
sudo fsck.ext4 -f /dev/server1/root

and I still get the same error when the system boots.

I've hit a brick wall.

What should I try next?

What can I look at to give me a better understanding of what the problem is?

Thanks,
David

Old 07-02-2012, 05:58 AM # 2
syg00 LQ Veteran
Might depend a bit on what messages we aren't seeing.

Normally I'd reckon that means that either the filesystem or disk controller support isn't available. But with something like Ubuntu you'd expect that to all be in place from the initrd. And that is on the /boot partition, and shouldn't be subject to update activity in a normal environment. Unless maybe you're real unlucky and an update was in flight.

Can you chroot into the server (disk) install and run from there successfully ?.

Old 07-02-2012, 06:08 PM # 3
damateem LQ Newbie
Original Poster
I had a very hard time getting the Grub menu to appear. There must be a very small window for detecting the shift key. Holding it down through the boot didn't work. Repeatedly hitting it at about twice per second didn't work. Increasing the rate to about 4 hits per second got me into it.

Once there, I was able to select an older kernel (2.6.32-39-server). The non-booting kernel was 2.6.32-40-server. 39 booted without any problems.

When I initially setup this system, I couldn't send email from it. It wasn't important to me at the time, so I planned to come back and fix it later. Last week (before the power drop), email suddenly started working on its own. I was surprised because I haven't specifically performed any updates. However, I seem to remember setting up automatic updates, so perhaps an auto update was done that introduced a problem, but it wasn't seen until the reboot that was forced by the power outage.

Next, I'm going to try updating to the latest kernel and see if it has the same problem.

Thanks,
David

Old 07-02-2012, 06:24 PM # 4
frieza Senior Member Contributing Member
imho auto updates are dangerous. If you want my opinion, make sure auto updates are off and only have the system tell you there are updates; that way you can choose not to install them during a power failure

as for a possible future solution for what you went through: unlike other keys, the shift key being held doesn't register as a stuck key to the best of my knowledge, so you can hold the shift key to get into grub. After that, edit the recovery line (the e key) to say at the end init=/bin/bash, then boot the system using the keys specified on the bottom of the screen. Once booted to a prompt, you would run
Code:

fsck -f {root partition}
(in this state, the root partition should be either not mounted or mounted read-only, so you can safely run an fsck on the drive)

note the -f seems to be an undocumented flag that does a more thorough scan than merely a standard run of fsck.

then reboot, and hopefully that fixes things

glad things seem to be working for the moment though.

Old 07-02-2012, 06:32 PM # 5
suicidaleggroll LQ Guru Contributing Member
Quote:

Originally Posted by damateem: However, I seem to remember setting up automatic updates, so perhaps an auto update was done that introduced a problem, but it wasn't seen until the reboot that was forced by the power outage.
I think this is very likely. Delayed reboots after performing an update can make tracking down errors impossibly difficult. I had a system a while back that wouldn't boot, turns out it was caused by an update I had done 6 MONTHS earlier, and the system had simply never been restarted afterward.
Old 07-04-2012, 10:18 AM # 6
damateem LQ Newbie
Original Poster
I discovered the root cause of the problem. When I attempted the update, I found that the boot partition was full. So I suspect that caused issues for the auto update, but they went undetected until the reboot.

I next tried to purge old kernels using the instructions at

http://www.liberiangeek.net/2011/11/...neiric-ocelot/

but that failed because a previous install had not completed, and it couldn't complete because of the full partition. So I had no choice but to manually rm the oldest kernel and its associated files. With that done, the command

apt-get -f install

got far enough that I could then purge the unwanted kernels. Finally,

sudo apt-get update
sudo apt-get upgrade

brought everything up to date.

I will be deactivating the auto updates.

Thanks for all the help!

David
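As a footnote to David's fix, a hedged sketch of the same cleanup on a Debian/Ubuntu system (the kernel version below is only an example; never remove the kernel reported by uname -r):

# See which kernel is running and which images are installed
uname -r
dpkg -l 'linux-image-*'

# Purge one old kernel to free space on /boot (version is an example)
apt-get purge linux-image-2.6.32-38-server

# Finish any interrupted installs, then clear remaining leftovers
apt-get -f install
apt-get autoremove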

[Jan 29, 2019] How to Setup DRBD to Replicate Storage on Two CentOS 7 Servers by Aaron Kili

Notable quotes:
"... It mirrors the content of block devices such as hard disks, partitions, logical volumes etc. between servers. ..."
"... It involves a copy of data on two storage devices, such that if one fails, the data on the other can be used. ..."
"... Originally, DRBD was mainly used in high availability (HA) computer clusters, however, starting with version 9, it can be used to deploy cloud storage solutions. In this article, we will show how to install DRBD in CentOS and briefly demonstrate how to use it to replicate storage (partition) on two servers. ..."
Jan 19, 2019 | www.tecmint.com
DRBD (short for Distributed Replicated Block Device) is a distributed, flexible and versatile replicated storage solution for Linux. It mirrors the content of block devices such as hard disks, partitions, logical volumes etc. between servers.

It involves a copy of data on two storage devices, such that if one fails, the data on the other can be used.

You can think of it somewhat like a network RAID 1 configuration with the disks mirrored across servers. However, it operates in a very different way from RAID and even network RAID.

Originally, DRBD was mainly used in high availability (HA) computer clusters, however, starting with version 9, it can be used to deploy cloud storage solutions. In this article, we will show how to install DRBD in CentOS and briefly demonstrate how to use it to replicate storage (partition) on two servers.
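As an illustration (not from the original article; the node names, addresses and backing partition are assumptions), a two-node DRBD resource definition looks roughly like this:

# /etc/drbd.d/data.res -- hypothetical two-node mirrored resource
resource data {
  protocol C;                   # fully synchronous replication
  device    /dev/drbd0;         # replicated block device exposed to users
  disk      /dev/sdb1;          # backing partition on each node
  meta-disk internal;
  on node1 { address 192.168.56.101:7789; }
  on node2 { address 192.168.56.102:7789; }
}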

... ... ...

For the purpose of this article, we are using a two-node cluster for this setup.

... ... ...

Reference: The DRBD User's Guide.

Summary

DRBD is extremely flexible and versatile, which makes it a storage replication solution suitable for adding HA to just about any application. In this article, we have shown how to install DRBD in CentOS 7 and briefly demonstrated how to use it to replicate storage. Feel free to share your thoughts with us via the feedback form below.

[Jan 29, 2019] mc2 is the first version of Midnight commander that supports LUA by mooffie

Highly recommended!
That was three years ago. There has been no progress so far in merging it into the mainstream version. Sad but typical...
The links below are now broken, as the site was migrated to www.geek.co.il. A valid link is Getting started.
Oct 15, 2015 | n2.nabble.com

[ANN] mc^2 11 posts

mc^2 is a fork of Midnight Commander with Lua support:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/

...but let's skip the verbiage and go directly to the screenshots:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/guide/SCREENSHOTS.md.html

Now, I assume most of you here aren't users of MC.

So I won't bore you with a description of how Lua makes MC a better file-manager. Instead, I'll just list some details that may interest
any developer who works on extending some application.

And, as you'll shortly see, you may find mc^2 useful even if you aren't a user of MC!

So, some interesting details:

* Programmer Goodies

- You can restart the Lua system from within MC.

- Since MC has a built-in editor, you can edit Lua code right there and restart Lua. So it's somewhat like a live IDE:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/images/screenshots/game.png

- It comes with programmer utilities: regular expressions; global scope protected by default; good pretty printer for Lua tables; calculator where you can type Lua expressions; the editor can "lint" Lua code (and flag uses of global variables).

- It installs a /usr/bin/mcscript executable letting you use all the goodies from "outside" MC:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/guide/60-standalone.md.html

* User Interface programming (UI)

- You can program a UI (user interface) very easily. The API is fun
yet powerful. It has some DOM/JavaScript borrowings in it: you can
attach functions to events like on_click, on_change, etc. The API
uses "properties", so your code tends to be short and readable:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/guide/40-user-interface.md.html

- The UI has a "canvas" object letting you draw your own stuff. The
system is so fast you can program arcade games. Pacman, Tetris,
Digger, whatever:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/classes/ui.Canvas.html

Need timers in your game? You've got them:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/modules/timer.html

- This UI API is an ideal replacement for utilities like dialog(1).
You can write complex frontends to command-line tools with ease:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/images/screenshots/frontend-scanimage.png

- Thanks to the aforementioned /usr/bin/mcscript, you can run your
games/frontends from "outside" MC:

http://www.typo.co.il/~mooffie/mc-lua/docs/html/images/screenshots/standalone-game.png

* Misc

- You can compile it against Lua 5.1, 5.2, 5.3, or LuaJIT.

- Extensive documentation.

[Jan 29, 2019] hstr -- Bash and zsh shell history suggest box - easily view, navigate, search and manage your command history

This is a quite useful command. An RPM exists for CentOS 7; on other versions you need to build it from source.
Nov 17, 2018 | dvorka.github.io

hstr -- Bash and zsh shell history suggest box - easily view, navigate, search and manage your command history.

View on GitHub

Configuration

Get the most out of HSTR by configuring it with:

hstr --show-configuration >> ~/.bashrc

Run hstr --show-configuration to determine what will be appended to your Bash profile. Don't forget to source ~/.bashrc to apply changes.


For more details on configuration options, please refer to the documentation.

Check also the configuration examples.

Binding HSTR to Keyboard Shortcut

Bash uses Emacs-style keyboard shortcuts by default. There is also a Vi mode. Find out below how to bind HSTR to a keyboard shortcut based on the style you prefer.

Check your active Bash keymap with:

bind -v | grep editing-mode
bind -v | grep keymap

To determine the character sequence emitted by a pressed key in the terminal, type Ctrl-v and then press the key. Check your current bindings using:

bind -S
Bash Emacs Keymap (default)

Bind HSTR to a Bash key, e.g. to Ctrl-r:

bind '"\C-r": "\C-ahstr -- \C-j"'

or Ctrl-Alt-r:

bind '"\e\C-r":"\C-ahstr -- \C-j"'

or Ctrl-F12:

bind '"\e[24;5~":"\C-ahstr -- \C-j"'

Bind HSTR to Ctrl-r only if the shell is interactive:

if [[ $- =~ .*i.* ]]; then bind '"\C-r": "\C-a hstr -- \C-j"'; fi

You can also bind other HSTR commands, like --kill-last-command:

if [[ $- =~ .*i.* ]]; then bind '"\C-xk": "\C-a hstr -k \C-j"'; fi
Bash Vim Keymap

Bind HSTR to a Bash key, e.g. to Ctrl-r:

bind '"\C-r": "\e0ihstr -- \C-j"'
Zsh Emacs Keymap

Bind HSTR to a zsh key, e.g. to Ctrl-r:

bindkey -s "\C-r" "\eqhstr --\n"
Alias

If you want to make running hstr from the command line even easier, define an alias in your ~/.bashrc:

alias hh=hstr

Don't forget to source ~/.bashrc to be able to use the hh command.

Colors

Let HSTR use colors:

export HSTR_CONFIG=hicolor

or ensure black and white mode:

export HSTR_CONFIG=monochromatic
Default History View

To show normal history by default (instead of the metrics-based view, which is the default), use:

export HSTR_CONFIG=raw-history-view

To show favorite commands as the default view, use:

export HSTR_CONFIG=favorites-view
Filtering

To use regular-expression-based matching:

export HSTR_CONFIG=regexp-matching

To use substring-based matching:

export HSTR_CONFIG=substring-matching

To use keyword-based matching, i.e. substrings whose order doesn't matter (the default):

export HSTR_CONFIG=keywords-matching

Make search case sensitive (insensitive by default):

export HSTR_CONFIG=case-sensitive

Keep duplicates in raw-history-view (duplicate commands are discarded by default):

export HSTR_CONFIG=duplicates
Static favorites

The last selected favorite command is put at the head of the favorites list by default. If you want to disable this behavior and make the favorites list static, then use the following configuration:

export HSTR_CONFIG=static-favorites
Skip favorites comments

If you don't want to show lines starting with # (comments) among favorites, then use the following configuration:

export HSTR_CONFIG=skip-favorites-comments
Blacklist

Skip certain commands when processing history, i.e. make sure that these commands will not be shown in any view:

export HSTR_CONFIG=blacklist

Commands to be skipped are stored in the ~/.hstr_blacklist file, with a trailing empty line. For instance:

cd
my-private-command
ls
ll
Confirm on Delete

Do not prompt for confirmation when deleting history items:

export HSTR_CONFIG=no-confirm
Verbosity

Show a message when deleting the last command from history:

export HSTR_CONFIG=verbose-kill

Show warnings:

export HSTR_CONFIG=warning

Show debug messages:

export HSTR_CONFIG=debug
Bash History Settings

Use the following Bash settings to get the most out of HSTR.

Increase the size of the history maintained by Bash - the variables defined below increase the number of history items and the history file size (the default value is 500):

export HISTFILESIZE=10000
export HISTSIZE=${HISTFILESIZE}

Ensure syncing (flushing and reloading) of .bash_history with in-memory history:

export PROMPT_COMMAND="history -a; history -n; ${PROMPT_COMMAND}"

Force appending of in-memory history to .bash_history (instead of overwriting):

shopt -s histappend

Use leading space to hide commands from history:

export HISTCONTROL=ignorespace

Suitable for sensitive information like passwords.
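
Putting the pieces together, a typical ~/.bashrc block combining the settings described above might look like this sketch:

# HSTR configuration (options described above)
alias hh=hstr
export HSTR_CONFIG=hicolor,keywords-matching
# larger, append-only history, synced between sessions
export HISTFILESIZE=10000
export HISTSIZE=${HISTFILESIZE}
shopt -s histappend
export HISTCONTROL=ignorespace
export PROMPT_COMMAND="history -a; history -n; ${PROMPT_COMMAND}"
# bind HSTR to Ctrl-r in interactive shells
if [[ $- =~ .*i.* ]]; then bind '"\C-r": "\C-a hstr -- \C-j"'; fi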

zsh History Settings

If you use zsh, set the HISTFILE environment variable in ~/.zshrc:

export HISTFILE=~/.zsh_history
Examples

More colors with case-sensitive search of history:

export HSTR_CONFIG=hicolor,case-sensitive

Favorite commands view in black and white with prompt at the bottom of the screen:

export HSTR_CONFIG=favorites-view,prompt-bottom

Keyword-based search in colors with debug-mode verbosity:

export HSTR_CONFIG=keywords-matching,hicolor,debug

[Jan 29, 2019] Split string into an array in Bash

May 14, 2012 | stackoverflow.com

Lgn ,May 14, 2012 at 15:15

In a Bash script I would like to split a line into pieces and store them in an array.

The line:

Paris, France, Europe

I would like to have them in an array like this:

array[0] = Paris
array[1] = France
array[2] = Europe

I would like to use simple code, the command's speed doesn't matter. How can I do it?

antak ,Jun 18, 2018 at 9:22

This is #1 Google hit but there's controversy in the answer because the question unfortunately asks about delimiting on , (comma-space) and not a single character such as comma. If you're only interested in the latter, answers here are easier to follow: stackoverflow.com/questions/918886/ – antak Jun 18 '18 at 9:22

Dennis Williamson ,May 14, 2012 at 15:16

IFS=', ' read -r -a array <<< "$string"

Note that the characters in $IFS are treated individually as separators so that in this case fields may be separated by either a comma or a space rather than the sequence of the two characters. Interestingly though, empty fields aren't created when comma-space appears in the input because the space is treated specially.

To access an individual element:

echo "${array[0]}"

To iterate over the elements:

for element in "${array[@]}"
do
    echo "$element"
done

To get both the index and the value:

for index in "${!array[@]}"
do
    echo "$index ${array[index]}"
done

The last example is useful because Bash arrays are sparse. In other words, you can delete an element or add an element and then the indices are not contiguous.

unset "array[1]"
array[42]=Earth

To get the number of elements in an array:

echo "${#array[@]}"

As mentioned above, arrays can be sparse so you shouldn't use the length to get the last element. Here's how you can in Bash 4.2 and later:

echo "${array[-1]}"

in any version of Bash (from somewhere after 2.05b):

echo "${array[@]: -1:1}"

Larger negative offsets select farther from the end of the array. Note the space before the minus sign in the older form. It is required.

l0b0 ,May 14, 2012 at 15:24

Just use IFS=', ' , then you don't have to remove the spaces separately. Test: IFS=', ' read -a array <<< "Paris, France, Europe"; echo "${array[@]}" – l0b0 May 14 '12 at 15:24

Dennis Williamson ,May 14, 2012 at 16:33

@l0b0: Thanks. I don't know what I was thinking. I like to use declare -p array for test output, by the way. – Dennis Williamson May 14 '12 at 16:33

Nathan Hyde ,Mar 16, 2013 at 21:09

@Dennis Williamson - Awesome, thorough answer. – Nathan Hyde Mar 16 '13 at 21:09

dsummersl ,Aug 9, 2013 at 14:06

MUCH better than multiple cut -f calls! – dsummersl Aug 9 '13 at 14:06

caesarsol ,Oct 29, 2015 at 14:45

Warning: the IFS variable means split by one of these characters , so it's not a sequence of chars to split by. IFS=', ' read -a array <<< "a,d r s,w" => ${array[*]} == a d r s w – caesarsol Oct 29 '15 at 14:45

Jim Ho ,Mar 14, 2013 at 2:20

Here is a way without setting IFS:
string="1:2:3:4:5"
set -f                      # avoid globbing (expansion of *).
array=(${string//:/ })
for i in "${!array[@]}"
do
    echo "$i=>${array[i]}"
done

The idea is using string replacement:

${string//substring/replacement}

to replace all matches of $substring with white space and then using the substituted string to initialize an array:

(element1 element2 ... elementN)

Note: this answer makes use of the split+glob operator . Thus, to prevent expansion of some characters (such as * ) it is a good idea to pause globbing for this script.

Werner Lehmann ,May 4, 2013 at 22:32

Used this approach... until I came across a long string to split. 100% CPU for more than a minute (then I killed it). It's a pity because this method allows to split by a string, not some character in IFS. – Werner Lehmann May 4 '13 at 22:32

Dieter Gribnitz ,Sep 2, 2014 at 15:46

WARNING: Just ran into a problem with this approach. If you have an element named * you will get all the elements of your cwd as well. thus string="1:2:3:4:*" will give some unexpected and possibly dangerous results depending on your implementation. Did not get the same error with (IFS=', ' read -a array <<< "$string") and this one seems safe to use. – Dieter Gribnitz Sep 2 '14 at 15:46

akostadinov ,Nov 6, 2014 at 14:31

not reliable for many kinds of values, use with care – akostadinov Nov 6 '14 at 14:31

Andrew White ,Jun 1, 2016 at 11:44

quoting ${string//:/ } prevents shell expansion – Andrew White Jun 1 '16 at 11:44

Mark Thomson ,Jun 5, 2016 at 20:44

I had to use the following on OSX: array=(${string//:/ }) – Mark Thomson Jun 5 '16 at 20:44

bgoldst ,Jul 19, 2017 at 21:20

All of the answers to this question are wrong in one way or another.

Wrong answer #1

IFS=', ' read -r -a array <<< "$string"

1: This is a misuse of $IFS . The value of the $IFS variable is not taken as a single variable-length string separator, rather it is taken as a set of single-character string separators, where each field that read splits off from the input line can be terminated by any character in the set (comma or space, in this example).

Actually, for the real sticklers out there, the full meaning of $IFS is slightly more involved. From the bash manual :

The shell treats each character of IFS as a delimiter, and splits the results of the other expansions into words using these characters as field terminators. If IFS is unset, or its value is exactly <space><tab><newline> , the default, then sequences of <space> , <tab> , and <newline> at the beginning and end of the results of the previous expansions are ignored, and any sequence of IFS characters not at the beginning or end serves to delimit words. If IFS has a value other than the default, then sequences of the whitespace characters <space> , <tab> , and <newline> are ignored at the beginning and end of the word, as long as the whitespace character is in the value of IFS (an IFS whitespace character). Any character in IFS that is not IFS whitespace, along with any adjacent IFS whitespace characters, delimits a field. A sequence of IFS whitespace characters is also treated as a delimiter. If the value of IFS is null, no word splitting occurs.

Basically, for non-default non-null values of $IFS , fields can be separated with either (1) a sequence of one or more characters that are all from the set of "IFS whitespace characters" (that is, whichever of <space> , <tab> , and <newline> ("newline" meaning line feed (LF) ) are present anywhere in $IFS ), or (2) any non-"IFS whitespace character" that's present in $IFS along with whatever "IFS whitespace characters" surround it in the input line.

For the OP, it's possible that the second separation mode I described in the previous paragraph is exactly what he wants for his input string, but we can be pretty confident that the first separation mode I described is not correct at all. For example, what if his input string was 'Los Angeles, United States, North America' ?

IFS=', ' read -ra a <<<'Los Angeles, United States, North America'; declare -p a;
## declare -a a=([0]="Los" [1]="Angeles" [2]="United" [3]="States" [4]="North" [5]="America")

2: Even if you were to use this solution with a single-character separator (such as a comma by itself, that is, with no following space or other baggage), if the value of the $string variable happens to contain any LFs, then read will stop processing once it encounters the first LF. The read builtin only processes one line per invocation. This is true even if you are piping or redirecting input only to the read statement, as we are doing in this example with the here-string mechanism, and thus unprocessed input is guaranteed to be lost. The code that powers the read builtin has no knowledge of the data flow within its containing command structure.

You could argue that this is unlikely to cause a problem, but still, it's a subtle hazard that should be avoided if possible. It is caused by the fact that the read builtin actually does two levels of input splitting: first into lines, then into fields. Since the OP only wants one level of splitting, this usage of the read builtin is not appropriate, and we should avoid it.

3: A non-obvious potential issue with this solution is that read always drops the trailing field if it is empty, although it preserves empty fields otherwise. Here's a demo:

string=', , a, , b, c, , , '; IFS=', ' read -ra a <<<"$string"; declare -p a;
## declare -a a=([0]="" [1]="" [2]="a" [3]="" [4]="b" [5]="c" [6]="" [7]="")

Maybe the OP wouldn't care about this, but it's still a limitation worth knowing about. It reduces the robustness and generality of the solution.

This problem can be solved by appending a dummy trailing delimiter to the input string just prior to feeding it to read , as I will demonstrate later.


Wrong answer #2

string="1:2:3:4:5"
set -f                     # avoid globbing (expansion of *).
array=(${string//:/ })

Similar idea:

t="one,two,three"
a=($(echo $t | tr ',' "\n"))

(Note: I added the missing parentheses around the command substitution which the answerer seems to have omitted.)

Similar idea:

string="1,2,3,4"
array=(`echo $string | sed 's/,/\n/g'`)

These solutions leverage word splitting in an array assignment to split the string into fields. Funnily enough, just like read , general word splitting also uses the $IFS special variable, although in this case it is implied that it is set to its default value of <space><tab><newline> , and therefore any sequence of one or more IFS characters (which are all whitespace characters now) is considered to be a field delimiter.

This solves the problem of two levels of splitting committed by read , since word splitting by itself constitutes only one level of splitting. But just as before, the problem here is that the individual fields in the input string can already contain $IFS characters, and thus they would be improperly split during the word splitting operation. This happens to not be the case for any of the sample input strings provided by these answerers (how convenient...), but of course that doesn't change the fact that any code base that used this idiom would then run the risk of blowing up if this assumption were ever violated at some point down the line. Once again, consider my counterexample of 'Los Angeles, United States, North America' (or 'Los Angeles:United States:North America' ).
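To see the breakage concretely, here is what the word-splitting idiom does to that counterexample (a quick demo of the pattern-substitution variant):

string='Los Angeles, United States, North America'
set -f                      # suspend globbing, as discussed just below
array=(${string//,/ })
set +f
declare -p array
## declare -a array=([0]="Los" [1]="Angeles" [2]="United" [3]="States" [4]="North" [5]="America")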

Also, word splitting is normally followed by filename expansion ( aka pathname expansion aka globbing), which, if done, would potentially corrupt words containing the characters * , ? , or [ followed by ] (and, if extglob is set, parenthesized fragments preceded by ? , * , + , @ , or ! ) by matching them against file system objects and expanding the words ("globs") accordingly. The first of these three answerers has cleverly undercut this problem by running set -f beforehand to disable globbing. Technically this works (although you should probably add set +f afterward to reenable globbing for subsequent code which may depend on it), but it's undesirable to have to mess with global shell settings in order to hack a basic string-to-array parsing operation in local code.

Another issue with this answer is that all empty fields will be lost. This may or may not be a problem, depending on the application.

Note: If you're going to use this solution, it's better to use the ${string//:/ } "pattern substitution" form of parameter expansion , rather than going to the trouble of invoking a command substitution (which forks the shell), starting up a pipeline, and running an external executable ( tr or sed ), since parameter expansion is purely a shell-internal operation. (Also, for the tr and sed solutions, the input variable should be double-quoted inside the command substitution; otherwise word splitting would take effect in the echo command and potentially mess with the field values. Also, the $(...) form of command substitution is preferable to the old `...` form since it simplifies nesting of command substitutions and allows for better syntax highlighting by text editors.)


Wrong answer #3

str="a, b, c, d"  # assuming there is a space after ',' as in Q
arr=(${str//,/})  # delete all occurrences of ','

This answer is almost the same as #2 . The difference is that the answerer has made the assumption that the fields are delimited by two characters, one of which being represented in the default $IFS , and the other not. He has solved this rather specific case by removing the non-IFS-represented character using a pattern substitution expansion and then using word splitting to split the fields on the surviving IFS-represented delimiter character.

This is not a very generic solution. Furthermore, it can be argued that the comma is really the "primary" delimiter character here, and that stripping it and then depending on the space character for field splitting is simply wrong. Once again, consider my counterexample: 'Los Angeles, United States, North America' .

Also, again, filename expansion could corrupt the expanded words, but this can be prevented by temporarily disabling globbing for the assignment with set -f and then set +f .

Also, again, all empty fields will be lost, which may or may not be a problem depending on the application.


Wrong answer #4

string='first line
second line
third line'

oldIFS="$IFS"
IFS='
'
IFS=${IFS:0:1} # this is useful to format your code with tabs
lines=( $string )
IFS="$oldIFS"

This is similar to #2 and #3 in that it uses word splitting to get the job done, only now the code explicitly sets $IFS to contain only the single-character field delimiter present in the input string. It should be repeated that this cannot work for multicharacter field delimiters such as the OP's comma-space delimiter. But for a single-character delimiter like the LF used in this example, it actually comes close to being perfect. The fields cannot be unintentionally split in the middle as we saw with previous wrong answers, and there is only one level of splitting, as required.

One problem is that filename expansion will corrupt affected words as described earlier, although once again this can be solved by wrapping the critical statement in set -f and set +f .

Another potential problem is that, since LF qualifies as an "IFS whitespace character" as defined earlier, all empty fields will be lost, just as in #2 and #3 . This would of course not be a problem if the delimiter happens to be a non-"IFS whitespace character", and depending on the application it may not matter anyway, but it does vitiate the generality of the solution.

So, to sum up, assuming you have a one-character delimiter, and it is either a non-"IFS whitespace character" or you don't care about empty fields, and you wrap the critical statement in set -f and set +f , then this solution works, but otherwise not.

(Also, for information's sake, assigning a LF to a variable in bash can be done more easily with the $'...' syntax, e.g. IFS=$'\n'; .)


Wrong answer #5

countries='Paris, France, Europe'
OIFS="$IFS"
IFS=', ' array=($countries)
IFS="$OIFS"

Similar idea:

IFS=', ' eval 'array=($string)'

This solution is effectively a cross between #1 (in that it sets $IFS to comma-space) and #2-4 (in that it uses word splitting to split the string into fields). Because of this, it suffers from most of the problems that afflict all of the above wrong answers, sort of like the worst of all worlds.

Also, regarding the second variant, it may seem like the eval call is completely unnecessary, since its argument is a single-quoted string literal, and therefore is statically known. But there's actually a very non-obvious benefit to using eval in this way. Normally, when you run a simple command which consists of a variable assignment only , meaning without an actual command word following it, the assignment takes effect in the shell environment:

IFS=', '; ## changes $IFS in the shell environment

This is true even if the simple command involves multiple variable assignments; again, as long as there's no command word, all variable assignments affect the shell environment:

IFS=', ' array=($countries); ## changes both $IFS and $array in the shell environment

But, if the variable assignment is attached to a command name (I like to call this a "prefix assignment") then it does not affect the shell environment, and instead only affects the environment of the executed command, regardless whether it is a builtin or external:

IFS=', ' :; ## : is a builtin command, the $IFS assignment does not outlive it
IFS=', ' env; ## env is an external command, the $IFS assignment does not outlive it

Relevant quote from the bash manual :

If no command name results, the variable assignments affect the current shell environment. Otherwise, the variables are added to the environment of the executed command and do not affect the current shell environment.

It is possible to exploit this feature of variable assignment to change $IFS only temporarily, which allows us to avoid the whole save-and-restore gambit like that which is being done with the $OIFS variable in the first variant. But the challenge we face here is that the command we need to run is itself a mere variable assignment, and hence it would not involve a command word to make the $IFS assignment temporary. You might think to yourself, well why not just add a no-op command word to the statement like the : builtin to make the $IFS assignment temporary? This does not work because it would then make the $array assignment temporary as well:

IFS=', ' array=($countries) :; ## fails; new $array value never escapes the : command

So, we're effectively at an impasse, a bit of a catch-22. But, when eval runs its code, it runs it in the shell environment, as if it was normal, static source code, and therefore we can run the $array assignment inside the eval argument to have it take effect in the shell environment, while the $IFS prefix assignment that is prefixed to the eval command will not outlive the eval command. This is exactly the trick that is being used in the second variant of this solution:

IFS=', ' eval 'array=($string)'; ## $IFS does not outlive the eval command, but $array does

So, as you can see, it's actually quite a clever trick, and accomplishes exactly what is required (at least with respect to assignment effectation) in a rather non-obvious way. I'm actually not against this trick in general, despite the involvement of eval ; just be careful to single-quote the argument string to guard against security threats.

But again, because of the "worst of all worlds" agglomeration of problems, this is still a wrong answer to the OP's requirement.


Wrong answer #6

IFS=', '; array=(Paris, France, Europe)

IFS=' ';declare -a array=(Paris France Europe)

Um... what? The OP has a string variable that needs to be parsed into an array. This "answer" starts with the verbatim contents of the input string pasted into an array literal. I guess that's one way to do it.

It looks like the answerer may have assumed that the $IFS variable affects all bash parsing in all contexts, which is not true. From the bash manual:

IFS The Internal Field Separator that is used for word splitting after expansion and to split lines into words with the read builtin command. The default value is <space><tab><newline> .

So the $IFS special variable is actually only used in two contexts: (1) word splitting that is performed after expansion (meaning not when parsing bash source code) and (2) for splitting input lines into words by the read builtin.

Let me try to make this clearer. I think it might be good to draw a distinction between parsing and execution . Bash must first parse the source code, which obviously is a parsing event, and then later it executes the code, which is when expansion comes into the picture. Expansion is really an execution event. Furthermore, I take issue with the description of the $IFS variable that I just quoted above; rather than saying that word splitting is performed after expansion , I would say that word splitting is performed during expansion, or, perhaps even more precisely, word splitting is part of the expansion process. The phrase "word splitting" refers only to this step of expansion; it should never be used to refer to the parsing of bash source code, although unfortunately the docs do seem to throw around the words "split" and "words" a lot. Here's a relevant excerpt from the linux.die.net version of the bash manual:

Expansion is performed on the command line after it has been split into words. There are seven kinds of expansion performed: brace expansion , tilde expansion , parameter and variable expansion , command substitution , arithmetic expansion , word splitting , and pathname expansion .

The order of expansions is: brace expansion; tilde expansion, parameter and variable expansion, arithmetic expansion, and command substitution (done in a left-to-right fashion); word splitting; and pathname expansion.

You could argue the GNU version of the manual does slightly better, since it opts for the word "tokens" instead of "words" in the first sentence of the Expansion section:

Expansion is performed on the command line after it has been split into tokens.

The important point is, $IFS does not change the way bash parses source code. Parsing of bash source code is actually a very complex process that involves recognition of the various elements of shell grammar, such as command sequences, command lists, pipelines, parameter expansions, arithmetic substitutions, and command substitutions. For the most part, the bash parsing process cannot be altered by user-level actions like variable assignments (actually, there are some minor exceptions to this rule; for example, see the various compatxx shell settings , which can change certain aspects of parsing behavior on-the-fly). The upstream "words"/"tokens" that result from this complex parsing process are then expanded according to the general process of "expansion" as broken down in the above documentation excerpts, where word splitting of the expanded (expanding?) text into downstream words is simply one step of that process. Word splitting only touches text that has been spit out of a preceding expansion step; it does not affect literal text that was parsed right off the source bytestream.
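The distinction between parsing and expansion is easy to demonstrate (a small illustration of the point above; run it in a throwaway shell, since it changes $IFS):

IFS=x
echo axb                 ## prints "axb" -- $IFS has no effect on literal source text
v=axb
printf '[%s]\n' $v       ## prints "[a]" and "[b]" -- word splitting applies to the expansion of $v
unset IFS                ## restore default word splitting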


Wrong answer #7

string='first line
        second line
        third line'

while read -r line; do lines+=("$line"); done <<<"$string"

This is one of the best solutions. Notice that we're back to using read . Didn't I say earlier that read is inappropriate because it performs two levels of splitting, when we only need one? The trick here is that you can call read in such a way that it effectively only does one level of splitting, specifically by splitting off only one field per invocation, which necessitates the cost of having to call it repeatedly in a loop. It's a bit of a sleight of hand, but it works.

But there are problems. First: When you provide at least one NAME argument to read , it automatically ignores leading and trailing whitespace in each field that is split off from the input string. This occurs whether $IFS is set to its default value or not, as described earlier in this post. Now, the OP may not care about this for his specific use-case, and in fact, it may be a desirable feature of the parsing behavior. But not everyone who wants to parse a string into fields will want this. There is a solution, however: A somewhat non-obvious usage of read is to pass zero NAME arguments. In this case, read will store the entire input line that it gets from the input stream in a variable named $REPLY , and, as a bonus, it does not strip leading and trailing whitespace from the value. This is a very robust usage of read which I've exploited frequently in my shell programming career. Here's a demonstration of the difference in behavior:

string=$'  a  b  \n  c  d  \n  e  f  '; ## input string

a=(); while read -r line; do a+=("$line"); done <<<"$string"; declare -p a;
## declare -a a=([0]="a  b" [1]="c  d" [2]="e  f") ## read trimmed surrounding whitespace

a=(); while read -r; do a+=("$REPLY"); done <<<"$string"; declare -p a;
## declare -a a=([0]="  a  b  " [1]="  c  d  " [2]="  e  f  ") ## no trimming

The second issue with this solution is that it does not actually address the case of a custom field separator, such as the OP's comma-space. As before, multicharacter separators are not supported, which is an unfortunate limitation of this solution. We could try to at least split on comma by specifying the separator to the -d option, but look what happens:

string='Paris, France, Europe';
a=(); while read -rd,; do a+=("$REPLY"); done <<<"$string"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France")

Predictably, the unaccounted surrounding whitespace got pulled into the field values, and hence this would have to be corrected subsequently through trimming operations (this could also be done directly in the while-loop). But there's another obvious error: Europe is missing! What happened to it? The answer is that read returns a failing return code if it hits end-of-file (in this case we can call it end-of-string) without encountering a final field terminator on the final field. This causes the while-loop to break prematurely and we lose the final field.

Technically this same error afflicted the previous examples as well; the difference there is that the field separator was taken to be LF, which is the default when you don't specify the -d option, and the <<< ("here-string") mechanism automatically appends a LF to the string just before it feeds it as input to the command. Hence, in those cases, we sort of accidentally solved the problem of a dropped final field by unwittingly appending an additional dummy terminator to the input. Let's call this solution the "dummy-terminator" solution. We can apply the dummy-terminator solution manually for any custom delimiter by concatenating it against the input string ourselves when instantiating it in the here-string:

a=(); while read -rd,; do a+=("$REPLY"); done <<<"$string,"; declare -p a;
declare -a a=([0]="Paris" [1]=" France" [2]=" Europe")

There, problem solved. Another solution is to only break the while-loop if both (1) read returned failure and (2) $REPLY is empty, meaning read was not able to read any characters prior to hitting end-of-file. Demo:

a=(); while read -rd,|| [[ -n "$REPLY" ]]; do a+=("$REPLY"); done <<<"$string"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=$' Europe\n')

This approach also reveals the secretive LF that automatically gets appended to the here-string by the <<< redirection operator. It could of course be stripped off separately through an explicit trimming operation as described a moment ago, but obviously the manual dummy-terminator approach solves it directly, so we could just go with that. The manual dummy-terminator solution is actually quite convenient in that it solves both of these two problems (the dropped-final-field problem and the appended-LF problem) in one go.

So, overall, this is quite a powerful solution. Its only remaining weakness is a lack of support for multicharacter delimiters, which I will address later.


Wrong answer #8

string='first line
        second line
        third line'

readarray -t lines <<<"$string"

(This is actually from the same post as #7 ; the answerer provided two solutions in the same post.)

The readarray builtin, which is a synonym for mapfile , is ideal. It's a builtin command which parses a bytestream into an array variable in one shot; no messing with loops, conditionals, substitutions, or anything else. And it doesn't surreptitiously strip any whitespace from the input string. And (if -O is not given) it conveniently clears the target array before assigning to it. But it's still not perfect, hence my criticism of it as a "wrong answer".

First, just to get this out of the way, note that, just like the behavior of read when doing field-parsing, readarray drops the trailing field if it is empty. Again, this is probably not a concern for the OP, but it could be for some use-cases. I'll come back to this in a moment.

Second, as before, it does not support multicharacter delimiters. I'll give a fix for this in a moment as well.

Third, the solution as written does not parse the OP's input string, and in fact, it cannot be used as-is to parse it. I'll expand on this momentarily as well.

For the above reasons, I still consider this to be a "wrong answer" to the OP's question. Below I'll give what I consider to be the right answer.


Right answer

Here's a naïve attempt to make #8 work by just specifying the -d option:

string='Paris, France, Europe';
readarray -td, a <<<"$string"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=$' Europe\n')

We see the result is identical to the result we got from the double-conditional approach of the looping read solution discussed in #7 . We can almost solve this with the manual dummy-terminator trick:

readarray -td, a <<<"$string,"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=" Europe" [3]=$'\n')

The problem here is that readarray preserved the trailing field, since the <<< redirection operator appended the LF to the input string, and therefore the trailing field was not empty (otherwise it would've been dropped). We can take care of this by explicitly unsetting the final array element after-the-fact:

readarray -td, a <<<"$string,"; unset 'a[-1]'; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=" Europe")

The only two problems that remain, which are actually related, are (1) the extraneous whitespace that needs to be trimmed, and (2) the lack of support for multicharacter delimiters.

The whitespace could of course be trimmed afterward (for example, see How to trim whitespace from a Bash variable? ). But if we can hack a multicharacter delimiter, then that would solve both problems in one shot.

Unfortunately, there's no direct way to get a multicharacter delimiter to work. The best solution I've thought of is to preprocess the input string to replace the multicharacter delimiter with a single-character delimiter that will be guaranteed not to collide with the contents of the input string. The only character that has this guarantee is the NUL byte . This is because, in bash (though not in zsh, incidentally), variables cannot contain the NUL byte. This preprocessing step can be done inline in a process substitution. Here's how to do it using awk :

readarray -td '' a < <(awk '{ gsub(/, /,"\0"); print; }' <<<"$string, "); unset 'a[-1]';
declare -p a;
## declare -a a=([0]="Paris" [1]="France" [2]="Europe")

There, finally! This solution will not erroneously split fields in the middle, will not cut out prematurely, will not drop empty fields, will not corrupt itself on filename expansions, will not automatically strip leading and trailing whitespace, will not leave a stowaway LF on the end, does not require loops, and does not settle for a single-character delimiter.


Trimming solution

Lastly, I wanted to demonstrate my own fairly intricate trimming solution using the obscure -C callback option of readarray . Unfortunately, I've run out of room against Stack Overflow's draconian 30,000 character post limit, so I won't be able to explain it. I'll leave that as an exercise for the reader.

function mfcb { local val="$4"; "$1"; eval "$2[$3]=\$val;"; };
function val_ltrim { if [[ "$val" =~ ^[[:space:]]+ ]]; then val="${val:${#BASH_REMATCH[0]}}"; fi; };
function val_rtrim { if [[ "$val" =~ [[:space:]]+$ ]]; then val="${val:0:${#val}-${#BASH_REMATCH[0]}}"; fi; };
function val_trim { val_ltrim; val_rtrim; };
readarray -c1 -C 'mfcb val_trim a' -td, <<<"$string,"; unset 'a[-1]'; declare -p a;
## declare -a a=([0]="Paris" [1]="France" [2]="Europe")

fbicknel ,Aug 18, 2017 at 15:57

It may also be helpful to note (though understandably you had no room to do so) that the -d option to readarray first appears in Bash 4.4. – fbicknel Aug 18 '17 at 15:57

Cyril Duchon-Doris ,Nov 3, 2017 at 9:16

You should add a "TL;DR : scroll 3 pages to see the right solution at the end of my answer" – Cyril Duchon-Doris Nov 3 '17 at 9:16

dawg ,Nov 26, 2017 at 22:28

Great answer (+1). If you change your awk to awk '{ gsub(/,[ ]+|$/,"\0"); print }' and eliminate that concatenation of the final ", " then you don't have to go through the gymnastics on eliminating the final record. So: readarray -td '' a < <(awk '{ gsub(/,[ ]+/,"\0"); print; }' <<<"$string") on Bash that supports readarray . Note your method is Bash 4.4+ I think because of the -d in readarray – dawg Nov 26 '17 at 22:28

datUser ,Feb 22, 2018 at 14:54

Looks like readarray is not an available builtin on OSX. – datUser Feb 22 '18 at 14:54

bgoldst ,Feb 23, 2018 at 3:37

@datUser That's unfortunate. Your version of bash must be too old for readarray . In this case, you can use the second-best solution built on read . I'm referring to this: a=(); while read -rd,; do a+=("$REPLY"); done <<<"$string,"; (with the awk substitution if you need multicharacter delimiter support). Let me know if you run into any problems; I'm pretty sure this solution should work on fairly old versions of bash, back to version 2-something, released like two decades ago. – bgoldst Feb 23 '18 at 3:37

Jmoney38 ,Jul 14, 2015 at 11:54

t="one,two,three"
a=($(echo "$t" | tr ',' '\n'))
echo "${a[2]}"

Prints three

shrimpwagon ,Oct 16, 2015 at 20:04

I actually prefer this approach. Simple. – shrimpwagon Oct 16 '15 at 20:04

Ben ,Oct 31, 2015 at 3:11

I copied and pasted this and it did not work with echo, but did work when I used it in a for loop. – Ben Oct 31 '15 at 3:11

Pinaki Mukherjee ,Nov 9, 2015 at 20:22

This is the simplest approach. thanks – Pinaki Mukherjee Nov 9 '15 at 20:22

abalter ,Aug 30, 2016 at 5:13

This does not work as stated. @Jmoney38 or shrimpwagon if you can paste this in a terminal and get the desired output, please paste the result here. – abalter Aug 30 '16 at 5:13

leaf ,Jul 17, 2017 at 16:28

@abalter Works for me with a=($(echo $t | tr ',' "\n")) . Same result with a=($(echo $t | tr ',' ' ')) . – leaf Jul 17 '17 at 16:28

Luca Borrione ,Nov 2, 2012 at 13:44

Sometimes it happened to me that the method described in the accepted answer didn't work, especially if the separator is a carriage return.
In those cases I solved it in this way:
string='first line
second line
third line'

oldIFS="$IFS"
IFS='
'
IFS=${IFS:0:1} # this is useful to format your code with tabs
lines=( $string )
IFS="$oldIFS"

for line in "${lines[@]}"
    do
        echo "--> $line"
done

Stefan van den Akker ,Feb 9, 2015 at 16:52

+1 This completely worked for me. I needed to put multiple strings, divided by a newline, into an array, and read -a arr <<< "$strings" did not work with IFS=$'\n' . – Stefan van den Akker Feb 9 '15 at 16:52

Stefan van den Akker ,Feb 10, 2015 at 13:49

Here is the answer to make the accepted answer work when the delimiter is a newline . – Stefan van den Akker Feb 10 '15 at 13:49

,Jul 24, 2015 at 21:24

The accepted answer works for values in one line.
If the variable has several lines:
string='first line
        second line
        third line'

We need a very different command to get all lines:

while read -r line; do lines+=("$line"); done <<<"$string"

Or the much simpler bash readarray :

readarray -t lines <<<"$string"

Printing all lines is very easy taking advantage of a printf feature:

printf ">[%s]\n" "${lines[@]}"

>[first line]
>[        second line]
>[        third line]

Mayhem ,Dec 31, 2015 at 3:13

While not every solution works for every situation, your mention of readarray... replaced my last two hours with 5 minutes... you got my vote – Mayhem Dec 31 '15 at 3:13

Derek 朕會功夫 ,Mar 23, 2018 at 19:14

readarray is the right answer. – Derek 朕會功夫 Mar 23 '18 at 19:14

ssanch ,Jun 3, 2016 at 15:24

This is similar to the approach by Jmoney38, but using sed:
string="1,2,3,4"
array=(`echo $string | sed 's/,/\n/g'`)
echo ${array[0]}

Prints 1

dawg ,Nov 26, 2017 at 19:59

The key to splitting your string into an array is the multi-character delimiter of ", ". Any solution using IFS for multi-character delimiters is inherently wrong, since IFS is a set of those characters, not a string.

If you assign IFS=", " then the string will break on EITHER "," OR " " or any combination of them, which is not an accurate representation of the two-character delimiter of ", ".

You can use awk or sed to split the string, with process substitution:

#!/bin/bash

str="Paris, France, Europe"
array=()
while read -r -d $'\0' each; do   # use a NUL terminated field separator 
    array+=("$each")
done < <(printf "%s" "$str" | awk '{ gsub(/,[ ]+|$/,"\0"); print }')
declare -p array
# declare -a array=([0]="Paris" [1]="France" [2]="Europe") output

It is more efficient to use a regex directly in Bash:

#!/bin/bash

str="Paris, France, Europe"

array=()
while [[ $str =~ ([^,]+)(,[ ]+|$) ]]; do
    array+=("${BASH_REMATCH[1]}")   # capture the field
    i=${#BASH_REMATCH}              # length of field + delimiter
    str=${str:i}                    # advance the string by that length
done                                # the loop deletes $str, so make a copy if needed

declare -p array
# declare -a array=([0]="Paris" [1]="France" [2]="Europe") output...

With the second form, there is no subshell and it will be inherently faster.


Edit by bgoldst: Here are some benchmarks comparing my readarray solution to dawg's regex solution, and I also included the read solution for the heck of it (note: I slightly modified the regex solution for greater harmony with my solution) (also see my comments below the post):

## competitors
function c_readarray { readarray -td '' a < <(awk '{ gsub(/, /,"\0"); print; };' <<<"$1, "); unset 'a[-1]'; };
function c_read { a=(); local REPLY=''; while read -r -d ''; do a+=("$REPLY"); done < <(awk '{ gsub(/, /,"\0"); print; };' <<<"$1, "); };
function c_regex { a=(); local s="$1, "; while [[ $s =~ ([^,]+),\  ]]; do a+=("${BASH_REMATCH[1]}"); s=${s:${#BASH_REMATCH}}; done; };

## helper functions
function rep {
    local -i i=-1;
    for ((i = 0; i<$1; ++i)); do
        printf %s "$2";
    done;
}; ## end rep()

function testAll {
    local funcs=();
    local args=();
    local func='';
    local -i rc=-1;
    while [[ "$1" != ':' ]]; do
        func="$1";
        if [[ ! "$func" =~ ^[_a-zA-Z][_a-zA-Z0-9]*$ ]]; then
            echo "bad function name: $func" >&2;
            return 2;
        fi;
        funcs+=("$func");
        shift;
    done;
    shift;
    args=("$@");
    for func in "${funcs[@]}"; do
        echo -n "$func ";
        { time $func "${args[@]}" >/dev/null 2>&1; } 2>&1| tr '\n' '/';
        rc=${PIPESTATUS[0]}; if [[ $rc -ne 0 ]]; then echo "[$rc]"; else echo; fi;
    done| column -ts/;
}; ## end testAll()

function makeStringToSplit {
    local -i n=$1; ## number of fields
    if [[ $n -lt 0 ]]; then echo "bad field count: $n" >&2; return 2; fi;
    if [[ $n -eq 0 ]]; then
        echo;
    elif [[ $n -eq 1 ]]; then
        echo 'first field';
    elif [[ "$n" -eq 2 ]]; then
        echo 'first field, last field';
    else
        echo "first field, $(rep $[$1-2] 'mid field, ')last field";
    fi;
}; ## end makeStringToSplit()

function testAll_splitIntoArray {
    local -i n=$1; ## number of fields in input string
    local s='';
    echo "===== $n field$(if [[ $n -ne 1 ]]; then echo 's'; fi;) =====";
    s="$(makeStringToSplit "$n")";
    testAll c_readarray c_read c_regex : "$s";
}; ## end testAll_splitIntoArray()

## results
testAll_splitIntoArray 1;
## ===== 1 field =====
## c_readarray   real  0m0.067s   user 0m0.000s   sys  0m0.000s
## c_read        real  0m0.064s   user 0m0.000s   sys  0m0.000s
## c_regex       real  0m0.000s   user 0m0.000s   sys  0m0.000s
##
testAll_splitIntoArray 10;
## ===== 10 fields =====
## c_readarray   real  0m0.067s   user 0m0.000s   sys  0m0.000s
## c_read        real  0m0.064s   user 0m0.000s   sys  0m0.000s
## c_regex       real  0m0.001s   user 0m0.000s   sys  0m0.000s
##
testAll_splitIntoArray 100;
## ===== 100 fields =====
## c_readarray   real  0m0.069s   user 0m0.000s   sys  0m0.062s
## c_read        real  0m0.065s   user 0m0.000s   sys  0m0.046s
## c_regex       real  0m0.005s   user 0m0.000s   sys  0m0.000s
##
testAll_splitIntoArray 1000;
## ===== 1000 fields =====
## c_readarray   real  0m0.084s   user 0m0.031s   sys  0m0.077s
## c_read        real  0m0.092s   user 0m0.031s   sys  0m0.046s
## c_regex       real  0m0.125s   user 0m0.125s   sys  0m0.000s
##
testAll_splitIntoArray 10000;
## ===== 10000 fields =====
## c_readarray   real  0m0.209s   user 0m0.093s   sys  0m0.108s
## c_read        real  0m0.333s   user 0m0.234s   sys  0m0.109s
## c_regex       real  0m9.095s   user 0m9.078s   sys  0m0.000s
##
testAll_splitIntoArray 100000;
## ===== 100000 fields =====
## c_readarray   real  0m1.460s   user 0m0.326s   sys  0m1.124s
## c_read        real  0m2.780s   user 0m1.686s   sys  0m1.092s
## c_regex       real  17m38.208s   user 15m16.359s   sys  2m19.375s
##

bgoldst ,Nov 27, 2017 at 4:28

Very cool solution! I never thought of using a loop on a regex match, nifty use of $BASH_REMATCH . It works, and does indeed avoid spawning subshells. +1 from me. However, by way of criticism, the regex itself is a little non-ideal, in that it appears you were forced to duplicate part of the delimiter token (specifically the comma) so as to work around the lack of support for non-greedy multipliers (also lookarounds) in ERE ("extended" regex flavor built into bash). This makes it a little less generic and robust. – bgoldst Nov 27 '17 at 4:28

bgoldst ,Nov 27, 2017 at 4:28

Secondly, I did some benchmarking, and although the performance is better than the other solutions for smallish strings, it worsens exponentially due to the repeated string-rebuilding, becoming catastrophic for very large strings. See my edit to your answer. – bgoldst Nov 27 '17 at 4:28

dawg ,Nov 27, 2017 at 4:46

@bgoldst: What a cool benchmark! In defense of the regex, for 10's or 100's of thousands of fields (what the regex is splitting) there would probably be some form of record (like \n delimited text lines) comprising those fields so the catastrophic slow-down would likely not occur. If you have a string with 100,000 fields -- maybe Bash is not ideal ;-) Thanks for the benchmark. I learned a thing or two. – dawg Nov 27 '17 at 4:46

Geoff Lee ,Mar 4, 2016 at 6:02

Try this
IFS=', '; array=(Paris, France, Europe)
for item in ${array[@]}; do echo $item; done

It's simple. If you want, you can also add a declare (and also remove the commas):

IFS=' ';declare -a array=(Paris France Europe)

The IFS is added to undo the above, but it works without it in a fresh bash instance.

MrPotatoHead ,Nov 13, 2018 at 13:19

Pure bash multi-character delimiter solution.

As others have pointed out in this thread, the OP's question gave an example of a comma delimited string to be parsed into an array, but did not indicate if he/she was only interested in comma delimiters, single character delimiters, or multi-character delimiters.

Since Google tends to rank this answer at or near the top of search results, I wanted to provide readers with a strong answer to the question of multiple character delimiters, since that is also mentioned in at least one response.

If you're in search of a solution to a multi-character delimiter problem, I suggest reviewing Mallikarjun M 's post, in particular the response from gniourf_gniourf who provides this elegant pure BASH solution using parameter expansion:

#!/bin/bash
str="LearnABCtoABCSplitABCaABCString"
delimiter=ABC
s=$str$delimiter
array=();
while [[ $s ]]; do
    array+=( "${s%%"$delimiter"*}" );
    s=${s#*"$delimiter"};
done;
declare -p array

Link to cited comment/referenced post

Link to cited question: Howto split a string on a multi-character delimiter in bash?

Eduardo Cuomo ,Dec 19, 2016 at 15:27

Use this:
countries='Paris, France, Europe'
OIFS="$IFS"
IFS=', ' array=($countries)
IFS="$OIFS"

#${array[1]} == Paris
#${array[2]} == France
#${array[3]} == Europe

gniourf_gniourf ,Dec 19, 2016 at 17:22

Bad: subject to word splitting and pathname expansion. Please don't revive old questions with good answers to give bad answers. – gniourf_gniourf Dec 19 '16 at 17:22

Scott Weldon ,Dec 19, 2016 at 18:12

This may be a bad answer, but it is still a valid answer. Flaggers / reviewers: For incorrect answers such as this one, downvote, don't delete! – Scott Weldon Dec 19 '16 at 18:12

George Sovetov ,Dec 26, 2016 at 17:31

@gniourf_gniourf Could you please explain why it is a bad answer? I really don't understand when it fails. – George Sovetov Dec 26 '16 at 17:31

gniourf_gniourf ,Dec 26, 2016 at 18:07

@GeorgeSovetov: As I said, it's subject to word splitting and pathname expansion. More generally, splitting a string into an array as array=( $string ) is a (sadly very common) antipattern: word splitting occurs: string='Prague, Czech Republic, Europe' ; Pathname expansion occurs: string='foo[abcd],bar[efgh]' will fail if you have a file named, e.g., food or barf in your directory. The only valid usage of such a construct is when string is a glob. – gniourf_gniourf Dec 26 '16 at 18:07

user1009908 ,Jun 9, 2015 at 23:28

UPDATE: Don't do this, due to problems with eval.

With slightly less ceremony:

IFS=', ' eval 'array=($string)'

e.g.

string="foo, bar,baz"
IFS=', ' eval 'array=($string)'
echo ${array[1]} # -> bar

caesarsol ,Oct 29, 2015 at 14:42

eval is evil! don't do this. – caesarsol Oct 29 '15 at 14:42

user1009908 ,Oct 30, 2015 at 4:05

Pfft. No. If you're writing scripts large enough for this to matter, you're doing it wrong. In application code, eval is evil. In shell scripting, it's common, necessary, and inconsequential. – user1009908 Oct 30 '15 at 4:05

caesarsol ,Nov 2, 2015 at 18:19

put a $ in your variable and you'll see... I write many scripts and I never ever had to use a single eval – caesarsol Nov 2 '15 at 18:19

Dennis Williamson ,Dec 2, 2015 at 17:00

Eval command and security issues – Dennis Williamson Dec 2 '15 at 17:00

user1009908 ,Dec 22, 2015 at 23:04

You're right, this is only usable when the input is known to be clean. Not a robust solution. – user1009908 Dec 22 '15 at 23:04

Eduardo Lucio ,Jan 31, 2018 at 20:45

Here's my hack!

Splitting strings by strings is a pretty boring thing to do using bash. What happens is that we have limited approaches that only work in a few cases (split by ";", "/", "." and so on) or we have a variety of side effects in the outputs.

The approach below has required a number of maneuvers, but I believe it will work for most of our needs!

#!/bin/bash

# --------------------------------------
# SPLIT FUNCTION
# ----------------

F_SPLIT_R=()
f_split() {
    : 'It does a "split" into a given string and returns an array.

    Args:
        TARGET_P (str): Target string to "split".
        DELIMITER_P (Optional[str]): Delimiter used to "split". If not 
    informed the split will be done by spaces.

    Returns:
        F_SPLIT_R (array): Array with the provided string separated by the 
    informed delimiter.
    '

    F_SPLIT_R=()
    TARGET_P=$1
    DELIMITER_P=$2
    if [ -z "$DELIMITER_P" ] ; then
        DELIMITER_P=" "
    fi

    REMOVE_N=1
    if [ "$DELIMITER_P" == "\n" ] ; then
        REMOVE_N=0
    fi

    # NOTE: This was the only parameter that has been a problem so far! 
    # By Questor
    # [Ref.: https://unix.stackexchange.com/a/390732/61742]
    if [ "$DELIMITER_P" == "./" ] ; then
        DELIMITER_P="[.]/"
    fi

    if [ ${REMOVE_N} -eq 1 ] ; then

        # NOTE: Due to bash limitations we have some problems getting the 
        # output of a split by awk inside an array and so we need to use 
        # "line break" (\n) to succeed. Seen this, we remove the line breaks 
        # momentarily afterwards we reintegrate them. The problem is that if 
        # there is a line break in the "string" informed, this line break will 
        # be lost, that is, it is erroneously removed in the output! 
        # By Questor
        TARGET_P=$(awk 'BEGIN {RS="dn"} {gsub("\n", "3F2C417D448C46918289218B7337FCAF"); printf $0}' <<< "${TARGET_P}")

    fi

    # NOTE: The replace of "\n" by "3F2C417D448C46918289218B7337FCAF" results 
    # in more occurrences of "3F2C417D448C46918289218B7337FCAF" than the 
    # amount of "\n" that there was originally in the string (one more 
    # occurrence at the end of the string)! We can not explain the reason for 
    # this side effect. The line below corrects this problem! By Questor
    TARGET_P=${TARGET_P%????????????????????????????????}

    SPLIT_NOW=$(awk -F"$DELIMITER_P" '{for(i=1; i<=NF; i++){printf "%s\n", $i}}' <<< "${TARGET_P}")

    while IFS= read -r LINE_NOW ; do
        if [ ${REMOVE_N} -eq 1 ] ; then

            # NOTE: We use "'" to prevent blank lines with no other characters 
            # in the sequence being erroneously removed! We do not know the 
            # reason for this side effect! By Questor
            LN_NOW_WITH_N=$(awk 'BEGIN {RS="dn"} {gsub("3F2C417D448C46918289218B7337FCAF", "\n"); printf $0}' <<< "'${LINE_NOW}'")

            # NOTE: We use the commands below to revert the intervention made 
            # immediately above! By Questor
            LN_NOW_WITH_N=${LN_NOW_WITH_N%?}
            LN_NOW_WITH_N=${LN_NOW_WITH_N#?}

            F_SPLIT_R+=("$LN_NOW_WITH_N")
        else
            F_SPLIT_R+=("$LINE_NOW")
        fi
    done <<< "$SPLIT_NOW"
}

# --------------------------------------
# HOW TO USE
# ----------------

STRING_TO_SPLIT="
 * How do I list all databases and tables using psql?

\"
sudo -u postgres /usr/pgsql-9.4/bin/psql -c \"\l\"
sudo -u postgres /usr/pgsql-9.4/bin/psql <DB_NAME> -c \"\dt\"
\"

\"
\list or \l: list all databases
\dt: list all tables in the current database
\"

[Ref.: https://dba.stackexchange.com/questions/1285/how-do-i-list-all-databases-and-tables-using-psql]


"

f_split "$STRING_TO_SPLIT" "bin/psql -c"

# --------------------------------------
# OUTPUT AND TEST
# ----------------

ARR_LENGTH=${#F_SPLIT_R[*]}
for (( i=0; i<=$(( $ARR_LENGTH -1 )); i++ )) ; do
    echo " > -----------------------------------------"
    echo "${F_SPLIT_R[$i]}"
    echo " < -----------------------------------------"
done

if [ "$STRING_TO_SPLIT" == "${F_SPLIT_R[0]}bin/psql -c${F_SPLIT_R[1]}" ] ; then
    echo " > -----------------------------------------"
    echo "The strings are the same!"
    echo " < -----------------------------------------"
fi
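For comparison, a multi-character delimiter can also be handled with parameter expansion alone, with no awk round-trip and no placeholder tokens; embedded newlines survive untouched. A minimal sketch (the split_by name and SPLIT_R array are illustrative, not part of the answer above):

split_by() {
    local target=$1 delim=$2
    SPLIT_R=()
    # Peel off the piece before the first delimiter, then drop that
    # piece plus the delimiter, until no delimiter remains.
    while [[ $target == *"$delim"* ]]; do
        SPLIT_R+=("${target%%"$delim"*}")
        target=${target#*"$delim"}
    done
    SPLIT_R+=("$target")
}

split_by "foo::bar::baz" "::"
printf '%s\n' "${SPLIT_R[@]}" # -> foo, bar, baz (one per line)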

sel-en-ium ,May 31, 2018 at 5:56

Another way to do it without modifying IFS:
read -r -a myarray <<< "${string//, /$IFS}"

Rather than changing IFS to match our desired delimiter, we can replace all occurrences of our desired delimiter ", " with the contents of $IFS via "${string//, /$IFS}".

Maybe this will be slow for very large strings though?

This is based on Dennis Williamson's answer.

rsjethani ,Sep 13, 2016 at 16:21

Another approach can be:
str="a, b, c, d"  # assuming there is a space after ',' as in Q
arr=(${str//,/})  # delete all occurrences of ','

After this, 'arr' is an array with four strings. This doesn't require dealing with IFS or read or any other special machinery, and hence is much simpler and more direct.

gniourf_gniourf ,Dec 26, 2016 at 18:12

Same (sadly common) antipattern as other answers: subject to word splitting and filename expansion. – gniourf_gniourf Dec 26 '16 at 18:12
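To make the objection concrete, here is a small illustration of the filename-expansion half of the problem (the input value is contrived; run it in a non-empty directory to see the effect):

str="*, b"
arr=(${str//,/ })   # unquoted expansion: '*' is glob-expanded to file names
echo "${#arr[@]}"   # the element count now depends on directory contents
set -f              # with globbing disabled the literal '*' survives...
arr=(${str//,/ })
set +f
echo "${arr[0]}"    # -> * (but word splitting on whitespace still applies)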

Safter Arslan ,Aug 9, 2017 at 3:21

Another way would be:
string="Paris, France, Europe"
IFS=', ' arr=(${string})

Now your elements are stored in "arr" array. To iterate through the elements:

for i in ${arr[@]}; do echo $i; done

bgoldst ,Aug 13, 2017 at 22:38

I cover this idea in my answer ; see Wrong answer #5 (you might be especially interested in my discussion of the eval trick). Your solution leaves $IFS set to the comma-space value after-the-fact. – bgoldst Aug 13 '17 at 22:38
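The leak is easy to demonstrate: a command line consisting only of assignments runs them all in the current shell, so the seemingly temporary IFS prefix is in fact a permanent assignment:

string="Paris, France, Europe"
IFS=', ' arr=(${string})   # both assignments persist in the current shell
echo "[$IFS]"              # -> [, ]  (IFS is still comma-space afterwards)
unset IFS                  # restores default splitting on space/tab/newline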

[Jan 29, 2019] A new term: PEBKAC

Jan 29, 2019 | thwack.solarwinds.com

dtreloar Jul 30, 2015 8:51 PM PEBKAC

Problem
Exists
Between
Keyboard
And
Chair

Or the most common fault: the "id ten t" error, i.e. ID10T.

[Jan 29, 2019] Are you sure?

Jan 29, 2019 | thwack.solarwinds.com

RichardLetts

Jul 13, 2015 8:13 PM Dealing with my ISP:

Me: There is a problem with your head-end router, you need to get an engineer to troubleshoot it

Them: no the problem is with your cable modem and router, we can see it fine on our network

Me: That's interesting because I powered it off and disconnected it from the wall before we started this conversation.

Them: Are you sure?

Me: I'm pretty sure that the lack of blinky lights means it's got no power, but if you think it's still working fine then I'd suggest the problem is at your end of this phone conversation and not at my end.

[Jan 29, 2019] RHEL7 is a fine OS, the only thing it's missing is a really good init system.

Highly recommended!
Or in other words, a simple, reliable and clear solution (which has some faults due to its age) was replaced with a gigantic KISS violation. No engineer worth the name will ever do that. And if it needs doing, any good engineer will make damned sure to achieve maximum compatibility and a clean way back. The systemd people seem to be hell-bent on making it as hard as possible to not use their monster. That alone is a good reason to stay away from it.
Notable quotes:
"... We are systemd. Lower your memory locks and surrender your processes. We will add your calls and code distinctiveness to our own. Your functions will adapt to service us. Resistance is futile. ..."
"... I think we should call systemd the Master Control Program since it seems to like making other programs functions its own. ..."
"... RHEL7 is a fine OS, the only thing it's missing is a really good init system. ..."
Oct 14, 2018 | linux.slashdot.org

Reverend Green ( 4973045 ) , Monday December 11, 2017 @04:48AM ( #55714431 )

Re: Does systemd make ... ( Score: 5 , Funny)

Systemd is nothing but a thinly-veiled plot by Vladimir Putin and Beyonce to import illegal German Nazi immigrants over the border from Mexico who will then corner the market in kimchi and implement Sharia law!!!

Anonymous Coward , Monday December 11, 2017 @01:38AM ( #55714015 )

Re:It violates fundamental Unix principles ( Score: 4 , Funny)

The Emacs of the 2010s.

DontBeAMoran ( 4843879 ) , Monday December 11, 2017 @01:57AM ( #55714059 )
Re:It violates fundamental Unix principles ( Score: 5 , Funny)

We are systemd. Lower your memory locks and surrender your processes. We will add your calls and code distinctiveness to our own. Your functions will adapt to service us. Resistance is futile.

serviscope_minor ( 664417 ) , Monday December 11, 2017 @04:47AM ( #55714427 ) Journal
Re:It violates fundamental Unix principles ( Score: 4 , Insightful)

I think we should call systemd the Master Control Program since it seems to like making other programs' functions its own.

Anonymous Coward , Monday December 11, 2017 @01:47AM ( #55714035 )
Don't go hating on systemd ( Score: 5 , Funny)

RHEL7 is a fine OS, the only thing it's missing is a really good init system.

[Jan 29, 2019] Your tax dollars at work: government IT

Jan 29, 2019 | thwack.solarwinds.com

pzjones Jul 8, 2015 10:34 AM

My story is about required processes... Need to add DHCP entries to the DHCP server? Here is the process. Receive request. Write a 5-page document (no exaggeration) detailing who submitted the request, why the request was submitted, and what the solution would be, with the detailed steps of the solution, including a spreadsheet showing how each field would be completed, and backup procedures. Produce a second document to include a pre-execution test plan and a post-execution test plan in minute detail. Submit to the CAB board for review; submit to a higher-level advisory board for review; attend the CAB meeting for formal approval; attend an additional approval board meeting if the data center is in freeze; attend the post-implementation board for lessons learned... Lesson learned: now I know where our tax dollars go...

[Jan 29, 2019] Your worst sysadmin horror story

Notable quotes:
"... Disk Array not found. ..."
"... Disk Array not found. ..."
"... Windows 2003 is now loading. ..."
Jan 29, 2019 | www.reddit.com

highlord_fox Moderator | /r/sysadmin Sock Puppet 10 points 11 points 12 points 3 years ago (1 child)

9-10 year old Poweredge 2950. Four drives, 250GB ea, RAID 5. Not even sure the fourth drive was even part of the array at this point. Backups consist of cloud file-level backup of most of the server's files. I was working on the server, updating the OS, rebooting it to solve whatever was ailing it at the time, and it was probably about 7-8PM on a Friday. I powered it off, and went to power it back on.

Disk Array not found.

SHIT SHIT SHIT SHIT SHIT SHIT SHIT . Power it back off. Power it back on.

Disk Array not found.

I stare at it, hoping I don't have to call for emergency support on the thing. Power it off and back on a third time.

Windows 2003 is now loading.

OhThankTheGods

I didn't power it off again until I replaced it, some 4-6 months later. And then it stayed off for a good few weeks, before I had to buy a Perc 5i card off ebay to get it running again. Long story short, most of the speed issues I was having were due to the card dying. AH WELL.

EDIT: Formatting.

[Jan 29, 2019] Extra security can be a dangerous thing

Reviewing backup logs is vital. Often it only looks as if the backup is going fine...
Notable quotes:
"... Things looked fine until someone noticed that a directory with critically important and sensitive data was missing. Turned out that some manager had decided to 'secure' the directory by doing 'chmod 000 dir' to protect the data from inquisitive eyes when the data was not being used. ..."
"... Of course, tar complained about the situation and returned with non-null status, but since the backup procedure had seemed to work fine, no one thought it necessary to view the logs... ..."
Jul 20, 2017 | www.linuxjournal.com

Anonymous, 11/08/2002

At an unnamed location it happened thus... The customer had been using a home-built 'tar'-based backup system for a long time. They were informed enough to have even tested and verified that recovery would work also.

Everything had been working fine, and they even had to do a recovery which went fine. Well, one day something evil happened to a disk and they had to replace the unit and do a full recovery.

Things looked fine until someone noticed that a directory with critically important and sensitive data was missing. Turned out that some manager had decided to 'secure' the directory by doing 'chmod 000 dir' to protect the data from inquisitive eyes when the data was not being used.

Of course, tar complained about the situation and returned with non-null status, but since the backup procedure had seemed to work fine, no one thought it necessary to view the logs...
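The fix costs a few lines: have the backup script treat tar's exit status as authoritative instead of trusting appearances. A minimal sketch, with placeholder paths and alert address:

tar -czf /backup/home.tar.gz /home 2> /var/log/backup.err
rc=$?
# tar exits non-zero when it cannot read a file (e.g. a directory
# someone "secured" with chmod 000), so surface that loudly.
if [ "$rc" -ne 0 ]; then
    mail -s "BACKUP FAILED (tar exit $rc)" root < /var/log/backup.err
    exit "$rc"
fi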

[Jan 29, 2019] Backing things up with rsync

Notable quotes:
"... I RECURSIVELY DELETED ALL THE LIVE CORPORATE WEBSITES ON FRIDAY AFTERNOON AT 4PM! ..."
"... This is why it's ALWAYS A GOOD IDEA to use Midnight Commander or something similar to delete directories!! ..."
"... rsync with ssh as the transport mechanism works very well with my nightly LAN backups. I've found this page to be very helpful: http://www.mikerubel.org/computers/rsync_snapshots/ ..."
Jul 20, 2017 | www.linuxjournal.com

Anonymous on Fri, 11/08/2002 - 03:00.

The Subject, not the content, really brings back memories.

Imagine this: you're tasked with complete control over the network in a multi-million dollar company. You've had some experience in the real world of network maintenance, but mostly you've learned from breaking things at home.

Time comes to implement a backup routine (yes, this was a startup company). You carefully consider the best way to do it and decide that copying data to a holding disk before the tape run would be perfect in the situation: a faster restore if the holding disk is still alive.

So off you go configuring all your servers for ssh pass through, and create the rsync scripts. Then before the trial run you think it would be a good idea to create a local backup of all the websites.

You log on to the web server, create a temp directory and start testing your newly advanced rsync skills. After a couple of goes, you think you're ready for the real thing, but you decide to run the test one more time.

Everything seems fine, so you delete the temp directory. You pause for a second and your mouth drops open wider than it has ever opened before, and a feeling of terror overcomes you. You want to hide in a hole and hope you didn't see what you saw.

I RECURSIVELY DELETED ALL THE LIVE CORPORATE WEBSITES ON FRIDAY AFTERNOON AT 4PM!

Anonymous on Sun, 11/10/2002 - 03:00.

This is why it's ALWAYS A GOOD IDEA to use Midnight Commander or something similar to delete directories!!

...Root for (5) years and never trashed a filesystem yet (knockwoody)...

Anonymous on Fri, 11/08/2002 - 03:00.

rsync with ssh as the transport mechanism works very well with my nightly LAN backups. I've found this page to be very helpful: http://www.mikerubel.org/computers/rsync_snapshots/
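The heart of the scheme on that page is a rotation of hard-linked snapshot trees, so unchanged files take no extra space. A minimal sketch of the idea (SRC and DEST are placeholders; the real page adds error checking and remote transport):

SRC=/srv/www/
DEST=/backup/www
rm -rf "$DEST/snap.3"                                         # drop the oldest
[ -d "$DEST/snap.2" ] && mv "$DEST/snap.2" "$DEST/snap.3"
[ -d "$DEST/snap.1" ] && mv "$DEST/snap.1" "$DEST/snap.2"
[ -d "$DEST/snap.0" ] && cp -al "$DEST/snap.0" "$DEST/snap.1" # hard-link copy
rsync -a --delete "$SRC" "$DEST/snap.0/"                      # refresh newest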

[Jan 29, 2019] It helps if somebody checks whether the equipment really has power, but often this step is skipped.

Notable quotes:
"... On closer inspection, noticed this power lead was only half in the socket... I connected this back to the original switch, grabbed the "I.T manager" and asked him to "just push the power lead"... his face? Looked like Casper the friendly ghost. ..."
Jan 29, 2019 | thwack.solarwinds.com

nantwiched Jul 13, 2015 11:18 AM

I've had a few horrors; here are a few...

Had to travel from Cheshire to Glasgow (4+hours) at 3am to get to a major high street store for 8am, an hour before opening. A switch had failed and taken out a whole floor of the store. So I prepped the new switch, using the same power lead from the failed switch as that was the only available lead / socket. No power. Initially thought the replacement switch was faulty and I would be in trouble for not testing this prior to attending site...

On closer inspection, noticed this power lead was only half in the socket... I connected this back to the original switch, grabbed the "I.T manager" and asked him to "just push the power lead"... his face? Looked like Casper the friendly ghost.

Problem solved at a massive expense to the company due to the out of hours charges. Surely that would be the first thing to check? Obviously not...

The same thing happened in Aberdeen, a 13 hour round trip to resolve a fault on a "failed router". The router looked dead at first glance, but after taking the side panel off the cabinet, I discovered it always helps if the router is actually plugged in...

Yet the customer clearly said everything is plugged in as it should be and it "must be faulty"... It does tend to appear faulty when not supplied with any power...

[Jan 29, 2019] It can be hot inside the rack

Jan 29, 2019 | thwack.solarwinds.com

jemertz Mar 28, 2016 12:16 PM

Shortly after I started my first remote server-monitoring job, I started receiving, one by one, traps for servers that had gone heartbeat missing/no-ping at a remote site. I looked up the site, and there were 16 total servers there, of which about 4 or 5 (and counting) were already down. Clearly not network issues. I remoted into one of the ones that was still up, and found in the Windows event viewer that it was beginning to overheat.

I contacted my front-line team and asked them to call the site to find out if the data center air conditioner had gone out, or if there was something blocking the servers' fans or something. He called, the client at the site checked and said the data center was fine, so I dispatched IBM (our remote hands) to go to the site and check out the servers. They got there and called in laughing.

There was construction in the data center, and the contractors, being thoughtful, had draped a painter's dropcloth over the server racks to keep off saw dust. Of COURSE this caused the servers to overheat. Somehow the client had failed to mention this.

...so after all this went down, the client had the gall to ask us to replace the servers "just in case" there was any damage, despite the fact that each of them had shut itself down in order to prevent thermal damage. We went ahead and replaced them anyway. (I'm sure they were rebuilt and sent to other clients, but installing these servers on site takes about 2-3 hours of IBM's time and 60-90 minutes of my remote team's time, not counting the rebuild before recycling.)
Oh well. My employer paid me for my time, so no skin off my back.

[Jan 29, 2019] "Sure, I get out my laptop, plug in the network cable, get on the internet from home. I start the VPN client, take out this paper with the code on it, and type it in..." Yup. He wrote down the RSA token's code before he went home.

Jan 29, 2019 | thwack.solarwinds.com

jm_sysadmin Expert Jul 8, 2015 7:04 AM

I was just starting my IT career, and I was told a VIP user couldn't VPN in, and I was asked to help. Everything checked out with the computer, so I asked the user to try it in front of me. He took out his RSA token, knew what to do with it, and it worked.

I also knew this user had been complaining of this issue for some time, and I wasn't the first person to try to fix this. Something wasn't right.

I asked him to walk me through every step he took from when it failed the night before.

"Sure, I get out my laptop, plug in the network cable, get on the internet from home. I start the VPN client, take out this paper with the code on it, and type it in..." Yup. He wrote down the RSA token's code before he went home. See that little thing was expensive, and he didn't want to lose it. I explained that the number changes all time, and that he needed to have it with him. VPN issue resolved.

[Jan 29, 2019] How electricians can help to improve server uptime

Notable quotes:
"... "Oh my God, the server room is full of smoke!" Somehow they hooked up things wrong and fed 220v instead of 110v to all the circuits. Every single UPS was dead. Several of the server power supplies were fried. ..."
Jan 29, 2019 | thwack.solarwinds.com

wfordham Jul 13, 2015 1:09 PM

This happened back when we had an individual APC UPS for each server. Most of the servers were really just whitebox PCs in a rack mount case running a server OS.

The facilities department was doing some planned maintenance on the electrical panel in the server room over the weekend. They assured me that they were not going to touch any of the circuits for the server room, just for the rooms across the hallway. Well, they disconnected power to the entire panel. Then they called me to let me know what they did. I was able to remotely verify that everything was running on battery just fine. I let them know that they had about 20 minutes to restore power or I would need to start shutting down servers. They called me again and said,

"Oh my God, the server room is full of smoke!" Somehow they hooked up things wrong and fed 220v instead of 110v to all the circuits. Every single UPS was dead. Several of the server power supplies were fried.

And a few motherboards didn't make it either. It took me the rest of the weekend kludging things together to get the critical systems back online.

[Jan 29, 2019] 7th Circuit Rules Age Discrimination Law Does Not Include Job Applicants

Notable quotes:
"... By Jerri-Lynn Scofield, who has worked as a securities lawyer and a derivatives trader. She is currently writing a book about textile artisans. ..."
"... Kleber filed suit, pursuing claims for both disparate treatment and disparate impact under the ADEA. The Chicago Tribune notes in Hinsdale man loses appeal in age discrimination case that challenged experience caps in job ads that "Kleber had out of work and job hunting for three years" when he applied for the CareFusion job. ..."
"... Unfortunately, the seventh circuit has now held that the disparate impact section of the ADEA does not extend to job applicants. .Judge Michael Scudder, a Trump appointee, wrote the majority 8-4 opinion, which reverses an earlier 2-1 panel ruling last April in Kleber's favor that had initially overruled the district court's dismissal of Kleber's disparate impact claim. ..."
"... hiring discrimination is difficult to prove and often goes unreported. Only 3 percent have made a formal complaint. ..."
"... The decision narrowly applies to disparate impact claims of age discrimination under the ADEA. It is important to remember that job applicants are protected under the disparate treatment portion of the statute. ..."
"... I forbade my kids to study programming. ..."
"... I'm re reading the classic of Sociology Ain't No Makin It by Jay MacLeod, in which he studies the employment prospects of youths in the 1980s and determined that even then there was no stable private sector employment and your best option is a government job or to have an excellent "network" which is understandably hard for most people to achieve. ..."
"... I think the trick is to study something and programming, so the programming becomes a tool rather than an end. ..."
"... the problem is it is almost impossible to exit the programming business and join another domain. Anyone can enter it. (evidence – all the people with "engineering" degrees from India) Also my wages are now 50% of what i made 10 years ago (nominal). Also I notice that almost no one is doing sincere work. Most are just coasting, pretending to work with the latest toy (ie, preparing for the next interview). ..."
"... I am an "aging" former STEM worker (histology researcher) as well. Much like the IT landscape, you are considered "over-the-hill" at 35, which I turn on the 31st. ..."
"... Most of the positions in science and engineering fields now are basically "gig" positions, lasting a few months to a year. ..."
Jan 29, 2019 | www.nakedcapitalism.com

By Jerri-Lynn Scofield, who has worked as a securities lawyer and a derivatives trader. She is currently writing a book about textile artisans.

The US Court of Appeals for the Seventh Circuit decided in Kleber v. CareFusion Corporation last Wednesday that disparate impact liability under the Age Discrimination in Employment Act (ADEA) applies only to current employees and does not include job applicants.

The case was brought by Dale Kleber, an attorney, who applied for a senior position in CareFusion's legal department. The job description required applicants to have "3 to 7 years (no more than 7 years) of relevant legal experience."

Kleber was 58 at the time he applied and had more than seven years of pertinent experience. CareFusion hired a 29-year-old applicant who met but did not exceed the experience requirement.

Kleber filed suit, pursuing claims for both disparate treatment and disparate impact under the ADEA. The Chicago Tribune notes in Hinsdale man loses appeal in age discrimination case that challenged experience caps in job ads that "Kleber had been out of work and job hunting for three years" when he applied for the CareFusion job.

Some Basics

Let's start with some basics, as the US Equal Employment Opportunity Commission (EEOC) set out in a brief primer on basic US age discrimination law entitled Questions and Answers on EEOC Final Rule on Disparate Impact and "Reasonable Factors Other Than Age" Under the Age Discrimination in Employment Act of 1967 . The EEOC began with a brief description of the purpose of the ADEA:

The purpose of the ADEA is to prohibit employment discrimination against people who are 40 years of age or older. Congress enacted the ADEA in 1967 because of its concern that older workers were disadvantaged in retaining and regaining employment. The ADEA also addressed concerns that older workers were barred from employment by some common employment practices that were not intended to exclude older workers, but that had the effect of doing so and were unrelated to job performance.

It was with these concerns in mind that Congress created a system that included liability for both disparate treatment and disparate impact. What's the difference between these two concepts?

According to the EEOC:

[The ADEA] prohibits discrimination against workers because of their older age with respect to any aspect of employment. In addition to prohibiting intentional discrimination against older workers (known as "disparate treatment"), the ADEA prohibits practices that, although facially neutral with regard to age, have the effect of harming older workers more than younger workers (known as "disparate impact"), unless the employer can show that the practice is based on a [Reasonable Factor Other Than Age (RFOA)]

The crux: it's much easier for a plaintiff to prove disparate impact, because s/he needn't show that the employer intended to discriminate. Of course, many if not most employers are savvy enough not to be explicit about their intentions to discriminate against older people as they don't wish to get sued.

District, Panel, and Full Seventh Circuit Decisions

The district court dismissed Kleber's disparate impact claim, on the grounds that the text of the statute (§ 4(a)(2)) did not extend to outside job applicants. Kleber then voluntarily dismissed his separate claim for disparate treatment liability to appeal the dismissal of his disparate impact claim. No doubt he was aware – either because he was an attorney, or because of the legal advice received – that it is much more difficult to prevail on a disparate treatment claim, which would require that he establish CareFusion's intent to discriminate.

Or at least that was true before this decision was rendered.

Unfortunately, the seventh circuit has now held that the disparate impact section of the ADEA does not extend to job applicants. Judge Michael Scudder, a Trump appointee, wrote the majority 8-4 opinion, which reverses an earlier 2-1 panel ruling last April in Kleber's favor that had initially overruled the district court's dismissal of Kleber's disparate impact claim.

The majority ruled:

By its terms, § 4(a)(2) proscribes certain conduct by employers and limits its protection to employees. The prohibited conduct entails an employer acting in any way to limit, segregate, or classify its employees based on age. The language of § 4(a)(2) then goes on to make clear that its proscriptions apply only if an employer's actions have a particular impact -- "depriv[ing] or tend[ing] to deprive any individual of employment opportunities or otherwise adversely affect[ing] his status as an employee." This language plainly demonstrates that the requisite impact must befall an individual with "status as an employee." Put most simply, the reach of § 4(a)(2) does not extend to applicants for employment, as common dictionary definitions confirm that an applicant has no "status as an employee." (citation omitted)[opinion, pp. 3-4]

By contrast, in the disparate treatment part of the statute (§ 4(a)(1)):

Congress made it unlawful for an employer "to fail or refuse to hire or to discharge any individual or otherwise discriminate against any individual with respect to his compensation, terms, conditions, or privileges of employment, because of such individual's age."[opinion, p.6]

The court compared the disparate treatment section – § 4(a)(1) – directly with the disparate impact section – § 4(a)(2):

Yet a side-by-side comparison of § 4(a)(1) with § 4(a)(2) shows that the language in the former plainly covering applicants is conspicuously absent from the latter. Section 4(a)(2) says nothing about an employer's decision "to fail or refuse to hire any individual" and instead speaks only in terms of an employer's actions that "adversely affect his status as an employee." We cannot conclude this difference means nothing: "when 'Congress includes particular language in one section of a statute but omits it in another' -- let alone in the very next provision -- the Court presumes that Congress intended a difference in meaning." (citations omitted)[opinion, pp. 6-7]

The majority's conclusion:

In the end, the plain language of § 4(a)(2) leaves room for only one interpretation: Congress authorized only employees to bring disparate impact claims.[opinion, p.8]

Greying of the Workforce

Older people account for a growing percentage of the workforce, as Reuters reports in Age bias law does not cover job applicants: U.S. appeals court :

People 55 or older comprised 22.4 percent of U.S. workers in 2016, up from 11.9 percent in 1996, and may account for close to one-fourth of the labor force by 2022, according to the Bureau of Labor Statistics.

The greying of the workforce is "thanks to better health in older age and insufficient savings that require people to keep working longer," according to the Chicago Tribune. Yet:

numerous hiring practices are under fire for negatively impacting older applicants. In addition to experience caps, lawsuits have challenged the exclusive use of on-campus recruiting to fill positions and algorithms that target job ads to show only in certain people's social media feeds.

Unless Congress amends the ADEA to include job applicants, older people will continue to face barriers to getting jobs.

The Chicago Tribune reports:

The [EEOC], which receives about 20,000 age discrimination charges every year, issued a report in June citing surveys that found 3 in 4 older workers believe their age is an obstacle in getting a job. Yet hiring discrimination is difficult to prove and often goes unreported. Only 3 percent have made a formal complaint. Allowing older applicants to challenge policies that have an unintentionally discriminatory impact would offer another tool for fighting age discrimination, Ray Peeler, associate legal counsel at the EEOC, has said.

How will these disparate impact claims now fare?

The Bottom Line

FordHarrison, a firm specialising in human relations law, noted in Seventh Circuit Limits Job Applicants' Age Discrimination Claims :

The decision narrowly applies to disparate impact claims of age discrimination under the ADEA. It is important to remember that job applicants are protected under the disparate treatment portion of the statute. There is no split among the federal appeals courts on this issue, making it an unlikely candidate for Supreme Court review, but the four judges in dissent read the statute as being vague and susceptible to an interpretation that includes job applicants.

Their conclusion: "a decision finding disparate impact liability for job applicants under the ADEA is unlikely in the near future."

Alas, for reasons of space, I will not consider the extensive dissent. My purpose in writing this post is to discuss the majority decision, not to opine on which side made the better arguments.

antidlc , January 27, 2019 at 3:28 pm

8-4 opinion. Which judges ruled for the majority? Which judges ruled for the minority opinion?

Sorry... don't have time to research right now. It says a Trump appointee wrote the majority opinion. Who were the other 7?

grayslady , January 27, 2019 at 6:09 pm

There were 3 judges who dissented in whole and one who dissented in part. Of the three full dissents, two were Clinton appointees (including the Chief Judge, who was one of the dissenters) and one was a Reagan appointee. The partial dissenter was also a Reagan appointee.

run75441 , January 27, 2019 at 11:25 pm

ant: Not your law clerk, read the opinion. Easterbrook and Wood dissented. Find the other two and you can figure out who agreed.

YankeeFrank , January 27, 2019 at 3:58 pm

"depriv[ing] or tend[ing] to deprive any individual of employment opportunities or otherwise adversely affect[ing] his status as an employee."

–This language plainly demonstrates that the requisite impact must befall an individual with "status as an employee."

So they totally ignore the first part of the sentence -- "depriv[ing] or tend[ing] to deprive any individual of employment opportunities " -- "employment opportunities" clearly applies to applicants.

It's as if these judges cannot make sense of the English language. Hopefully the judges on appeal will display better command of the language.

Alfred , January 27, 2019 at 5:56 pm

I agree. "Employment opportunities," in the "plain language" so meticulously respected by the 7th Circuit, must surely refer at minimum to 'the chance to apply for a job and to have one's application fairly considered'. It seems on the other hand a stretch to interpret the phrase to mean only 'the chance to keep a job one already has'. Both are important, however; to split them would challenge even Solomonic wisdom, as I suppose the curious decision discussed here demonstrates. I am less convinced that the facts as presented here establish a clear case of age discrimination. True, they point in that direction. But a hypothetical 58-year old who only earned a law degree in his or her early 50s, perhaps after an earlier career in paralegal work, could have legitimately applied for a position requiring 3 to 7 years of "relevant legal experience." That last phrase, is of course, quite weasel-y: what counts as "relevant" and what counts as "legal" experience would under any circumstances be subject to (discriminatory) interpretation. The limitation of years of experience in the job announcement strikes me as a means to keep the salary within a certain budgetary range as prescribed either by law or collective bargaining.

KLG , January 27, 2019 at 6:42 pm

Almost like the willful misunderstanding of "A well regulated militia being necessary to the security of a free State "? Of course, that militia also meant slave patrols and the occasional posse to put down the native "savages," but still.

Lambert Strether , January 28, 2019 at 2:08 am

> "depriv[ing] or tend[ing] to deprive any individual of employment opportunities or otherwise adversely affect[ing] his status as an employee."

Says "or." Not "and."

Magic Sam , January 27, 2019 at 5:53 pm

They are failing to find what they don't want to find.

Magic Sam , January 27, 2019 at 5:58 pm

Being pro-Labor will not get you Federalist Society approval to be nominated to the bench by Trump. This decision came down via the ideological makeup of the court, not the letter of the law. Their stated pretext is obviously b.s. It contradicts itself.

Mattie , January 27, 2019 at 6:05 pm

Yep. That is when their Utah et al property mgt teams began breaking into homes, tossing contents – including pets – outside & changing locks

Even when borrowers were in approved HAMP, etc. pipelines

PLUG: If you haven't yet – See "The Florida Project"

nothing but the truth , January 27, 2019 at 7:18 pm

as an aging "stem" (cough coder) worker who typically has to look for a new "gig" every few years, i am trembling at this.

Luckily, i bought a small business when I had a few saved up, so I won't starve.

Health insurance is another matter.

I forbade my kids to study programming.

Lambert Strether , January 28, 2019 at 2:09 am

Plumbing. Electrical work. Permaculture. Get those kids Jackpot-ready!

Joe Well , January 28, 2019 at 11:40 am

I'm re-reading the classic of sociology Ain't No Makin' It by Jay MacLeod, in which he studies the employment prospects of youths in the 1980s and determines that even then there was no stable private sector employment, and your best option is a government job or an excellent "network", which is understandably hard for most people to achieve. So I'm genuinely interested in what possible options there are for anyone entering the job market today or, God help you, re-entering. I am guessing the barriers to entry to those trades are quite high but would love to be corrected.

jrs , January 28, 2019 at 1:39 pm

what is the point of being jackpot ready if you can't even support yourself today? To fantasize about collapse while sleeping in a rented closet and driving for Uber? In that case one's personal collapse has already happened, which will matter a lot more to an individual than any potential jackpot.

Plumbers and electricians can make money now of course (although yea barriers to entry do seem high, don't you kind of have to know people to get in those industries?). But permaculture?

Ford Prefect , January 28, 2019 at 1:00 pm

I think the trick is to study something and programming, so the programming becomes a tool rather than an end. A couple of my kids used to ride horses. One of the instructors and stable owners said that a lot of people went to school for equine studies and ended up shoveling horse poop for a living. She said the thing to do was to study business and do the equestrian stuff as a hobby/minor. That way you came out prepared to run a business and hire the equine studies people to clean the stalls.

jrs , January 28, 2019 at 1:36 pm

Do you actually see that many jobs requiring something and programming, though? I haven't really. There seems to be no easy transition out of software work that this would make possible either. Might as well just study the "something".

rd , January 28, 2019 at 2:21 pm

Programming is a means to an end, not the end itself. If all you do is program, then you are essentially a machine lathe operator, not somebody creating the products the lathe operators turn out.

Understanding what needs to be done helps with structured programs and better input/output design. In turn, structured programming is a good tool to understand the basics of how to manage tasks. At the higher level, Fred Brooks' book "The Mythical Man-Month" has a lot of useful project management information that can be re-applied outside computer program development.

We are doing a lot of work with mobile computing and data collection to assist in our regular work. The people doing this are mainly non-computer scientists that have learned enough programming to get by.

The engineering programs that we use are typically written more by engineers than by programmers as the entire point behind the program is to apply the theory into a numerical computation and presentation system. Programmers with a graphic design background can assist in creating much better user interfaces.

If you have some sort of information theory background (GIS, statistics, etc.) then big data actually means something.

nothing but the truth , January 28, 2019 at 7:02 pm

The problem is it is almost impossible to exit the programming business and join another domain, while anyone can enter it (evidence: all the people with "engineering" degrees from India). Also, my wages are now 50% of what I made 10 years ago (nominal). And I notice that almost no one is doing sincere work. Most are just coasting, pretending to work with the latest toy (i.e., preparing for the next interview).

Now almost every "interview" requires writing a coding exam. Which other profession will make you write an exam for 25-30 year veterans? Can you write your high school exam again today? What if your profession requires you to write it a couple of times almost every year?

Hepativore , January 28, 2019 at 2:56 pm

I am an "aging" former STEM worker (histology researcher) as well. Much like the IT landscape, you are considered "over-the-hill" at 35, which I turn on the 31st. While I do not have children and never intend to get married, many biotech companies consider this the age at which a worker is getting long in the tooth. This is because there is the underlying assumption that is when people start having familial obligations.

Most of the positions in science and engineering fields now are basically "gig" positions, lasting a few months to a year. A lot of people my age are finding how much harder it is to find any position at all in these areas, as there is a massive pool of people to choose from, even for permatemp work, simply because serfs in their mid-30s might get uppity about benefits like family health plans or a 401(k).

Steve , January 27, 2019 at 7:32 pm

I am 59 and do not mind having employers discriminate against me due to age. (I also need a job.) I had my own business and over the years got quite damaged. I was a contractor specializing in older (historical) work.

I was always the lead worker, with many friends and others working with me. At 52 I was given a choice of very involved neck surgery or quitting. (No small businesses have disability insurance!)

I shut down everything and helped my friends who worked for me take some of the work or find something else. I was also a nationally published computer consultant a long time ago, and a graphic artist.

Reality is I can still do many things, but I do nothing as well as I did when I was younger, and the cost to employers for me is far higher than for a younger person. I had my chance and I chose poorly. Younger people, if that makes them a better fit, deserve a chance now more than I do.

Joe Well , January 27, 2019 at 7:49 pm

I'm sorry for your predicament. Do you mean you chose poorly when you chose not to get neck surgery? What was the choice you regret?

Steve , January 27, 2019 at 10:12 pm

My career choices. Choosing to close my business to possibly avoid the surgery was actually a good choice.

Joe Well , January 28, 2019 at 11:47 am

I'm sorry for your challenges but I don't think there were many good careers you could have chosen and it would have required a crystal ball to know which were the good ones. Americans your age entered the job market just after the very end of the Golden Age of labor conditions and have been weathering the decline your entire working lives. At least I entered the job market when everyone knew for years things were falling apart. It's not your fault. You were cheated plain and simple.

Lambert Strether , January 28, 2019 at 2:14 am

> I had my chance and I chose poorly.

I don't see how it's possible to predict the labor market years in advance. Why blame yourself for poor choices when so much chance is involved?

With a Jobs Guarantee, such questions would not arise. I also don't think it's only a question of doing, but a question of sharing ("experience, strength, and hope," as AA -- a very successful organization! -- puts it, in a way of thinking that has wide application).

Dianne Shatin , January 27, 2019 at 7:46 pm

Unelected plutocrat and his international syndicate, funded by a former IBM artificial intelligence developer and social darwinian; data manipulation electronic platforms and social media are at the levels of power in the USA. Anti-justice, anti-enlightenment, etc.

Since the installation of GW Bush by the Supreme Court almost 20 yrs. ago, they have tunneled deeply, speaking through propaganda machines such as Rush Limbaugh, gaining traction, making it over the finish line with KGB and Russian oligarch backing. The net effect on us? The loss of all that was built on the foundation of the enlightenment and an exceptional nation: no king, a nation of, for and by the people, and the rule of law. There is nothing Judeo-Christian about social darwinism, but it is eerily similar to National Socialism (Nazis). The ruling against the plaintiff by the 7th circuit in the U.S., and their success in creating chaos in Great Britain vis a vis "Brexit" by fascist Lafarge Inc., are indicators of how easy their ascent was. It shows how powerful they have become.

anon y'mouse , January 27, 2019 at 9:19 pm

They had better get ready to lower the SSI retirement age to 55, then. Or I predict blood in the streets.

jrs , January 28, 2019 at 1:49 pm

I wish it was so. They just expect the older crowd to die quietly.

How is it legal , January 27, 2019 at 10:04 pm

Where are the Bipartisan Presidential Candidates and Legislators on oral and verbal condemnation of Age Discrimination, along with putting teeth into Age Discrimination Laws and Tax Policy? Nowhere to be seen, or heard, that I've noticed; particularly in Blue ™ California, which is famed for Age Discrimination of those as young as 36 years of age, ever since Mark Zuckerberg proclaimed anyone over 35 over the hill in the early 2000's, and never got crushed for it by the media, or the Politicians, as he should have (particularly in Silicon Valley).

I know those Republicans are venal, but I dare anyone to show me a meaningful Age Discrimination Policy Proposal, pushed by Blue Obama, Hillary, even Sanders and Jill Stein. Certainly none of California's Nationally known (many well over retirement age) Gubernatorial and Legislative Democratic Politicians: Jerry Brown, Gavin Newsom, Dianne Feinstein, Barbara Boxer, Nancy Pelosi, Kamala Harris, and Ro Khanna (or the lesser known California Federal State and Local Democratic Politicians) have ever addressed it; despite the fact that homelessness deaths of those near 'retirement age' have been frighteningly increasing in California's obscenely wealthy homelessness 'hotspots,' such as Silicon Valley.

Such a tragic issue, which has occurred while, for over a decade, Mainstream News and Online Pundits have Proclaimed 50 to be the new 30. Sadistic. I have no doubt this is linked to the ever increasing Deaths of Despair and attempted and successful suicides of those under, and just over, retirement age – while the US has an average Senate age of 65, and a President and 2020 Presidential contenders over 70 (I am not at all saying older persons shouldn't be elected, nor that younger persons shouldn't be elected; I'm pointing out the imbalance, insanity, and cruelty of it).

Further, age discrimination has been particularly brutal to single, divorced, and widowed females, who have most assuredly made far, far less on the dollar than males (if they could even get hired for the position, or leave the kids alone, and the housekeeping undone, to get a job):

Patrick Button, an assistant economics professor at Tulane University, was part of a research project last year that looked at callback rates from resumes in various entry-level jobs. He said women seeking the positions appeared to be most affected.

"Based on over 40,000 job applications, we find robust evidence of age discrimination in hiring against older women, especially those near retirement age, but considerably less evidence of age discrimination against men," according to an abstract of the study.

Jacquelyn James, co-director of the Center on Aging and Work at Boston College, said age discrimination in employment is a crucial issue in part because of societal changes that are forcing people to delay retirement. Moves away from defined-benefit pension plans to less assured forms of retirement savings are part of the reason.

Lambert Strether , January 28, 2019 at 2:15 am

> "Based on over 40,000 job applications, we find robust evidence of age discrimination in hiring against older women, especially those near retirement age, but considerably less evidence of age discrimination against men," according to an abstract of the study.

Well, these aren't real women, obviously. If they were, the Democrats would already be taking care of them.

jrs , January 28, 2019 at 1:58 pm

From the article: The greying of the workforce is "thanks to better health in older age and insufficient savings that require people to keep working longer," according to the Chicago Tribune.

Get on the clue train, Chicago Tribune, because you're like W and Trump not knowing how a supermarket works; that's how dense you are. Even if one saved, and even if one won the luck lottery in terms of job stability and adequate income to save from, healthcare alone is a reason to work: either to get employer-provided coverage if lucky, or to work without it and put most of one's money toward an ACA plan or the like if not lucky. Yes, the cost of almost all other necessities has also increased greatly, but even parts of the country without a high cost of living have unaffordable healthcare.

Enquiring Mind , January 27, 2019 at 11:07 pm

Benefits may be 23-30% or so of payroll and represent another expense management opportunity for the diligent executive. One piece of low-hanging fruit is the age-related healthcare cost. If you hire young people, who under-consume healthcare relative to older cohorts, you save money, ceteris paribus. They have lower premiums, lower loss experience and they rebound more quickly, so you hit a triple at your first at-bat swinging at that fruit. Yes, metaphors are fungible along with every line on the income statement.

If your company still has the vestiges of a pension or similar blandishment, you may even back-load contributions more aggressively, of course to the extent allowable. That added expense diligence will pay off when those superannuated employees leave before hitting the more expensive funding years.

NB, the above reflects what I saw and heard at a Fortune 500 company.

rd , January 28, 2019 at 12:56 pm

Another good reason for a Canadian style single payer system. That turns a deciding factor into a non-factor.

Jack Hayes , January 28, 2019 at 8:15 am

A reason why the court system is overburdened is lack of clarity in laws and regulations. Fix the disparity between the two sections of the law so that courts don't have to decide which section rules.

rd , January 28, 2019 at 2:24 pm

Polarization has made tweaks and repairs of laws impossible.

Jeff N , January 28, 2019 at 10:17 am

Yep. Many police departments *legally* refuse to hire anyone over 35 years old (exceptions for prior police experience or certain military service)

Joe Well , January 28, 2019 at 12:36 pm

It amazes me how often the government will give itself exemptions to its own laws and principles, and also how often "progressive" nonprofits and political groups will also give themselves such exemptions, for instance, regarding health insurance, paid overtime, paid training, etc. that they are legally required to provide.

Ford Prefect , January 28, 2019 at 2:27 pm

There are specific physical demands in things like policing. So it doesn't make much sense to hire 55 year old rookie policemen when many policemen are retiring at that age.

Arthur Dent , January 28, 2019 at 2:59 pm

It's an interesting quandary. We have older staff that went back to school and changed careers. They do a good job and get paid at a rate similar to the younger staff with similar job-related experience. However, they will be retiring at about the same time as the much more experienced staff, so they will not be future succession replacements for the senior staff.

So we also have to hire people in their 20s and 30s because that will be the future when people like me retire in a few years. That could very well be the reason for the specific wording of the job opening (I haven't read the opinion). I know of current hiring for a position where the firm is primarily looking for somebody in their 20s or early 30s for precisely that reason. The staff currently doing the work are in their 40s and 50s and need to start bringing up the next generation. If somebody went back to school late and was in their 40s or 50s (so would be at a lower billing rate due to lack of job related experience), they would be seriously considered. But the firm would still be left with the challenge of having to hire another person at the younger age within a couple of years to build the succession. Once people make it past 5 years at the firm, they tend to stay for a long time with senior staff generally having been at the firm for 20 years or more, so hiring somebody really is a long-term investment.

[Jan 28, 2019] Testing the backup system as the main source of power outages

Highly recommended!
Jan 28, 2019 | thwack.solarwinds.com

gcp Jul 8, 2015 10:33 PM

Many years ago I worked at an IBM Mainframe site. To make systems more robust they installed a UPS system for the mainframe with battery bank and a honkin' great diesel generator in the yard.

During the commissioning of the system, they decided to test the UPS cutover one afternoon - everything goes *dark* in seconds. Frantic running around to get power back on and MF restarted and databases recovered (afternoon, remember? during the work day...). Oh! The UPS batteries were not charged! Oops.

Over the next few weeks, they did two more 'tests' during the working day, with everything going *dark* in seconds for various reasons. Oops.

Then they decided - perhaps we should test this outside of office hours. (YAY!)

Still took a few more efforts to get everything working - the diesel generator wouldn't start automatically; fixed that, but forgot to fill up the diesel tank, so cutover was fine until the fuel ran out.

Many, many lessons learned from this episode.

[Jan 28, 2019] False alarm: bad smell in machine room caused by electrical light, not a server

Jan 28, 2019 | www.reddit.com

radiomix Jack of All Trades 5 points 6 points 7 points 3 years ago (2 children)

I was in my main network facility, for a municipal fiber optic ring. Outside were two technicians replacing our backup air conditioning unit. I walk inside after talking with the two technicians, turn on the lights and begin walking around just visually checking things around the room. All of a sudden I started smelling that dreaded electric hot/burning smell. In this place I have my core switch, primary router, a handful of servers, some customer equipment and a couple of racks for my service provider. I start running around the place like a mad man sniffing all the equipment. I even called in the AC technicians to help me sniff.

After 15 minutes we could not narrow down where it was coming from. Finally I noticed that one of the fluorescent lights had not come on. I grabbed a ladder and opened it up.

The ballast had burned out on the light and it just so happen to be the light right in front of the AC vent blowing the smell all over the room.

The last time I had smelled that smell in that room a major piece of equipment went belly up and there was nothing I could do about it.

benjunmun 2 points 3 points 4 points 3 years ago (0 children)
The exact same thing has happened to me. Nothing quite as terrifying as the sudden smell of ozone as you're surrounded by critical computers and electrical gear.

[Jan 28, 2019] Loss of power problems: Machines are running, but every switch in the cabinet is dead. Some servers are dead. Panic sets in.

Jan 28, 2019 | www.reddit.com

eraser_6776 VP IT/Sec (a damn suit) 9 points 10 points 11 points 3 years ago (1 child)

May 22, 2004. There was a rather massive storm here that spurred one of the [biggest tornadoes recorded in Nebraska](www.tornadochaser.net/hallam.html), and I was a sysadmin for a small company. It was a Saturday, aka beer day, and as all hell was breaking loose my friends' and roommates' pagers and phones were all going off. "Ha ha!" I said, looking at a silent cellphone, "sucks to be you!"

Next morning around 10 my phone rings, and I groggily answer it because it's the owner of the company. "You'd better come in here, none of the computers will turn on" he says. Slight panic, but I hadn't received any emails. So it must have been breakers, and I can get that fixed. No problem.

I get into the office and something strikes me. That eery sound of silence. Not a single machine is on.. why not? Still shaking off too much beer from the night before, I go into the server room and find out why I didn't get paged. Machines are running, but every switch in the cabinet is dead. Some servers are dead. Panic sets in.

I start walking around the office trying to turn on machines and.. dead. All of them. Every last desktop won't power on. That's when panic REALLY set in.

In the aftermath I found out two things - one, when the building was built, it was built with a steel roof and steel trusses. Two, when my predecessor had the network cabling wired he hired an idiot who didn't know fire code and ran the network cabling, conveniently, along the trusses into the ceiling. Thus, when lightning hit the building it had a perfect ground path to every workstation in the company. Some servers that weren't in the primary cabinet had been wired to a wall jack (which, in turn, went up into the ceiling then back down into the cabinet because you know, wire management!). Thankfully they were all "legacy" servers.

The only thing that saved the main servers was that Cisco 2924 XL-EN's are some badass mofo's that would die before they let that voltage pass through to the servers in the cabinet. At least that's what I told myself.

All in all, it ended up being one of the longest work weeks ever as I first had to source a bunch of switches, fast to get things like mail and the core network back up. Next up was feeding my buddies a bunch of beer and pizza after we raided every box store in town for spools of Cat 5 and threw wire along the floor.

Finally I found out that CDW can and would get you a whole lot of desktops delivered to your door with your software pre-installed in less than 24 hours if you have an open checkbook. Thanks to a great insurance policy, we did. Shipping and "handling" for those were more than the cost of the machines (again, this was back in 2004 and they were business desktops so you can imagine).

Still, for weeks after I had non-stop user complaints that generally involved "...I think this is related to the lightning". I drank a lot that summer.

[Jan 28, 2019] Format of wrong partition initiated during RHEL install

Notable quotes:
"... Look at the screen, check out what it is doing, realize that the installer had grabbed the backend and he said yeah format all(we are not sure exactly how he did it). ..."
Jan 28, 2019 | www.reddit.com

kitched 5 points 6 points 7 points 3 years ago (2 children)

~10 years ago. 100GB drives on a node attached to an 8TB SAN. Cabling is all hooked up as we are adding this new node to manage the existing data on the SAN. A guy that is training up to help, we let him install RedHat and go through the GUI setup. Did not pay attention to him, and after a while wonder what is taking so long. Walk over to him and he is still staring at the install screen and says, "Hey guys, this format sure is taking a while".

Look at the screen, check out what it is doing, and realize that the installer had grabbed the SAN backend and he had said yes to format all (we are not sure exactly how he did it).

Middle of the day, better kick off the tape restore for 8TB of data.

[Jan 28, 2019] I still went to work that day, tired, grumpy and hyped on caffeine teetering between consciousness and a comatose state

Big mistake. This is a perfect state to commit some big SNAFU
Jan 28, 2019 | thwack.solarwinds.com

porterseceng Jul 9, 2015 9:44 AM

I was the on-call technician for the security team supporting a Fortune 500 logistics company, in fact it was my first time being on-call. My phone rings at about 2:00 AM and the help desk agent says that the Citrix portal is down for everyone. This is a big deal because it's a 24/7 shop with people remoting in all around the world. While not strictly a security appliance, my team was responsible for the Citrix Access Gateway that was run on a NetScaler. Also on the line are the systems engineers responsible for the Citrix presentation/application servers.

I log in, check the appliance, look at all of the monitors, everything is reporting up. After about 4 hours of troubleshooting and trying everything within my limited knowledge of this system we get my boss on the line to help.

It came down to this: the Citrix team didn't troubleshoot anything, and it was the StoreFront and broker servers that were having the trouble; but since the CAG wouldn't let people see any applications, they instantly pointed the finger at the security team and blamed us.

I still went to work that day, tired, grumpy and hyped on caffeine, teetering between consciousness and a comatose state, for two reasons: the Citrix team doesn't know how to do their job, and I was too tired to ask the investigative questions like "when did it stop working? has anything changed? what have you looked at so far?".

[Jan 28, 2019] Any horror stories about tired sysadmins...

Long story short, don't drink soda late at night, especially near your laptop! Soda spills are not easy to clean up.
Jan 28, 2019 | thwack.solarwinds.com

mickyred 1 point 2 points 3 points 4 years ago (1 child)

I initially read this as "Any horror stories about tired sysadmins..."
cpbills Sr. Linux Admin 1 point 2 points 3 points 4 years ago (0 children)
They exist. This is why 'good' employers provide coffee.

[Jan 28, 2019] Something about the meaning of the word space

Jul 13, 2015 | thwack.solarwinds.com

Jul 13, 2015 7:44 AM

Trying to walk a tech through some switch config.

me: type config space t

them: it doesn't work

me: <sigh> <spells out config> space the single letter t

them: it still doesn't work

--- try some other rudimentary things ---

me: uh, are you typing in the word 'space'?

them: you said to

[Jan 28, 2019] Any horror stories about fired sysadmins

Notable quotes:
"... leave chat logs on his computer detailing criminal activity like doing drugs in the office late at night and theft ..."
"... the law assumes that [he/she] has suffered this harm ..."
"... assumed by the law ..."
"... The Police are asking the public if anyone has information on "The Ethernet Killer" to please come forward ..."
Jan 28, 2016 | www.reddit.com

nai1sirk

Everyone seems to be really paranoid when firing a senior sysadmin. Advice seems to range from "check for backdoors" to "remove privileges while he is in the severance meeting".

I think it sounds a bit paranoid to be honest. I know the media loves these stories, and I doubt they are that common.

Has anyone actually personally experienced a fired sysadmin who has retaliated?

skibumatbu 42 points 43 points 44 points 4 years ago (5 children)
Many moons ago I worked for a very large dot-com. I won't even call it a startup, as they were pretty big and well used. They were hacked once. The guy did a ransom type of thing and the company paid him. But they also hired him as a sysadmin with a focus on security. For 6 months he did nothing but surf IRC channels. One thing led to another and the guy was fired. A month later I'm looking at an issue on a host and notice a weird port open on the front-end web server (the guy was so good at security that he insisted on no firewalls). Turns out the guy had hacked back into our servers. The next week our primary database goes down for 24 hours. I wonder how that happened.

He eventually got on the Secret Service's radar for stealing credit card information from thousands of people. He's now in jail.

nai1sirk [ S ] 39 points 40 points 41 points 4 years ago (3 children)
Flawless logic; hire criminal as sheriff, suddenly he's a good guy
skibumatbu 13 points 14 points 15 points 4 years ago (0 children)
It works in TV shows.
VexingRaven 7 points 8 points 9 points 4 years ago (0 children)
It works for the FBI. But it all depends on the TYPE of blackhat you hire. You want the kind in it just to prove they can do it, not the type that are out for making money.
AsciiFace DevOps Tooling 7 points 8 points 9 points 4 years ago (0 children)
To be fair, I have a large group of friends that exactly this happened to. From what I hear, it is kind of cool to be legally allowed to commit perjury for the sake of work (infosec).
cpbills Sr. Linux Admin 3 points 4 points 5 points 4 years ago (0 children)

the guy was so good at security that he insisted on no firewalls

HAHAHAHAHAHAHAHAHA.

Slamp872 Linux Admin 12 points 13 points 14 points 4 years ago (17 children)

"haha, nice backups dickhead"

That's career suicide, especially in a smaller IT market.

Ashmedai 4 points 5 points 6 points 4 years ago (15 children)

That's career suicide, especially in a smaller IT market.

It ought to be, but it would be libel per se to say that he did this, meaning that if he sues, you'd have to prove what you said true, or you'd lose by default, and the finding would be assumed by the court to be large. Libel per se is nothing to sneeze at.

dmsean DevOps 3 points 4 points 5 points 4 years ago (5 children)
I always hated that. We had a guy steal from us: servers, mobile phones, computers, etc. We caught him on footage, and he was even dumb enough to leave chat logs on his computer detailing criminal activity like doing drugs in the office late at night and theft. We were such a small shop at the time. We fired him, and nobody followed up and filed any charges. Around 2 months later we got a call from the employment insurance office, and they said the dispute was that we claimed he stole office equipment but had no proof. We would have had to hire lawyers and it just wasn't worth it... we let it go and let him have his free money. Always pissed me off.
Ashmedai 4 points 5 points 6 points 4 years ago (0 children)
That resolution is typical, I'm afraid.
VexingRaven 4 points 5 points 6 points 4 years ago (3 children)
This is why you press charges. If he'd been convicted of theft, which he almost surely would have if you had so much evidence, he not only would've had no ground to stand on for that suit, but he'd be in jail. The best part is, the police handle the pressing of charges, because criminal prosecution is state v. accused.
dmsean DevOps 3 points 4 points 5 points 4 years ago (2 children)
Yah, I really wish we had. But when your VP of customer service takes support calls, your CFO is also a lead programmer, and your accountant's primary focus is calling customers to pay their bills, it's easier said than done!
neoice Principal Linux Systems Engineer 1 point 2 points 3 points 4 years ago (1 child)
I dunno, for criminal proceedings the police do most of the legwork. when my car got stolen, I had to fill out and sign a statement then the state did the rest.

you probably would have spent 2-16 hours with a detective going over the evidence and filing your report. I think that's a pretty low cost to let the wheels of justice spin!

dmsean DevOps 0 points 1 point 2 points 4 years ago (0 children)
I'm Canadian; the theft was under $5000, so it would have had to go to the local court. The Vancouver PD are really horrible.
the_ancient1 Say no to BYOD 2 points 3 points 4 points 4 years ago (4 children)
Depending on the position of the person, i.e. a manager or someone in management at the former employee's company, a person "bad mouthing" an ex-employee may also be in violation of anti-blacklisting laws that are in place in many/most states, which prohibit companies and authorized agents of companies (HR, managers, etc.) from (and I quote from the statute)

using any words that any discharged employees, or attempt[ing] by words or writing, or any other means whatever, to prevent such discharged employee, or any employee who may have voluntarily left said company's service from obtaining employment with any other person, or company. -

So most businesses do not authorize their managers or HR to do anything other than confirm dates of employment, job title, and possibly wage.

Ashmedai 1 point 2 points 3 points 4 years ago (3 children)
I'm (only a little) surprised by this. Since you are quoting from statute, can you link me? I'm curious about that whole section of code. I should assume this is some specific state, yes?
the_ancient1 Say no to BYOD 2 points 3 points 4 points 4 years ago * (2 children)
http://www.in.gov/legislative/ic/2010/title22/ar5/ch3.pdf

the quote is from IC 22-5-3-2, which on its face seems to apply only to "Railroads" but also clearly says "any other company" in the text.

There are the standard libel protections as well for truthful statements, but you must prove the statement was truthful. So something like "he was always late", if you have time cards to prove it, would not be in violation, but something like "He smelled bad, was rude and incompetent" would likely be a violation.

user4201 1 point 2 points 3 points 4 years ago (1 child)
Your second example is actually just three opinions, which a person cannot sue you for. If I declare that I thought you smelled bad and were an incompetent employee, I'm not making libelous statements, because libel laws don't cover personal opinions. If I say you were late every day, that is a factual statement that can either be proven or disproven, so libel law now applies.
the_ancient1 Say no to BYOD 0 points 1 point 2 points 4 years ago (0 children)

libel laws don't cover personal opinions.

Libel laws do not, blacklisting laws do if your statements are in relation to an inquiry by a potential employer of a former employee

CaptainDave Infrastructure Engineer 0 points 1 point 2 points 4 years ago (3 children)
Libel per se is just libel that doesn't need intent to be proved. It has nothing to do with burdens or quanta of proof, much less presumptions.
Ashmedai 0 points 1 point 2 points 4 years ago * (2 children)
"CaptainDave is a convicted pederast" would be an example of libel per se. It doesn't matter if I believe it true. It doesn't matter if I am not malicious in my statement. The statement must be true for it to not be libel. If CaptainDave were to sue me, I'd have to show proof of that conviction. CaptainDave would not be required to prove the statement false. The court would not be interested in investigating the matter itself, so the burden of proof would shift to me . If I were not to succeed in proving this statement, the court would assume the damages of this class of libel to be high. Generally; with caveats.
CaptainDave Infrastructure Engineer 0 points 1 point 2 points 4 years ago (1 child)
That's what I was saying: there's no intent element. That's what's meant by "libel per se." That doesn't shift the burden of proof, it just means there's one less thing to show. You have burden shifting confused with the affirmative defense that the statement was true; however, that, as an affirmative defense, is always on the party pressing it. There is thus no burden shifting. Moreover, you have to prove your damages no matter what; there is no presumption as to their amount (beyond "nominal," IIRC).
Ashmedai 0 points 1 point 2 points 4 years ago * (0 children)

Moreover, you have to prove your damages no matter what

These are the actual written instructions given to juries in California:

"Even if [name of plaintiff] has not proved any actual damages for harm to reputation or shame, mortification or hurt feelings, the law assumes that [he/she] has suffered this harm . Without presenting evidence of damage, [name of plaintiff] is entitled to receive compensation for this assumed harm in whatever sum you believe is reasonable."

Juries, of course, are not instructed on an actual amount here. As you say, it might only be nominal. But in the case of an employer leaving a negative reference about an employee accusing them of a crime? It won't be as a matter of practice, now will it? The jury has been told the harm to reputation and mortification is assumed by the law . While this does not guarantee a sympathetic jury, and obviously the case will have its context, I'll make the assumption starting right now that you don't want to be on the receiving end of a legitimate libel per se case, is that fair? :-P

At least in California. I've been told not all states have libel per se laws, but I really wouldn't know.

As far as my statement that "would be assumed by the court to be large," this was sloppily worded, yes. Let's just say that, with wording like the above, the only real test is... "is the jury offended on your behalf?" Because if they are, with instructions like that, and any actually serious libel per se case, defendant is screwed. It's also a bit of a stinger that attorney's fees are generally included in libel per se cases (at least according to black letter law; IANAL, so I'm not acquainted with real case histories).

cpbills Sr. Linux Admin 0 points 1 point 2 points 4 years ago (0 children)

When we got rid of the last guy I worked with he remoted into one of our servers

People need to know that this is a big no-no. Whether your employer remembered to delete your accounts or not, attempting to access or accessing servers once you've been termed is against the law in most places.

Whether you're being malicious or not, accessing or even attempting to access systems you no longer have permission to access can easily be construed as malicious.

secretphoto 13 points 14 points 15 points 4 years ago (0 children)
i was working late night in our colo within $LARGE_DATA_RECOVERY_CENTER. one of the sysadmins (financials) for the hosting company was there telling me about how she was getting the axe and had to train her overseas counterparts to do her job. let's say she was less than gruntled. she mentioned something about "at(1) jobs they'll never find".

years later i read a vague article in an industry journal about insider sabotage at a local company that caused millions of dollars of downtime..

i'm not sure if that was her, but the details lined up very closely and it makes a lot of sense that the f500 company would want to sweep this under the rug.

VexingRaven 2 points 3 points 4 points 4 years ago (6 children)
That's ridiculous. If that's a crime, this whole sub and /r/talesfromtechsupport is filled with crimes. Instead, we call it stupidity.
thatmorrowguy Netsec Admin 8 points 9 points 10 points 4 years ago (5 children)
The Childs case is like if you took your average /r/talesfromtechsupport story and mixed it with about 50% more paranoia and half as much common sense - continuing to refuse requests for the administrator passwords even after being arrested. If management asked me for the passwords to all of my systems, they can have them. In fact, in my exit interview, I would be more than happy to point out each and every remote access method that I have to their systems, and request that all of those passwords are changed. I don't WANT there to be any conceivable way for me to get back into a previous employers' environment when I go. Whenever I leave a team, my last action is deactivating all of my own database logins, removing my sudo rights, removing myself from any groups with elevated rights, ensuring that the team will be changing admin passwords ASAP. That way when colleagues and customers come back pleading for me to fix stuff, I can honestly tell them I no longer have the ability to solve their problem - go hit up the new guy. They stop calling much quicker that way.
neoice Principal Linux Systems Engineer 2 points 3 points 4 points 4 years ago (4 children)

Whenever I leave a team, my last action is deactivating all of my own database logins, removing my sudo rights, removing myself from any groups with elevated rights, ensuring that the team will be changing admin passwords ASAP.

I <3 our version controlled infrastructure. I could remove my database logins, sudo rights and lock my user account with a single commit. then I could push another commit to revoke my commit privs :)

David_Crockett 0 points 1 point 2 points 4 years ago (1 child)
Sounds nifty. How do you have it set up? SVN?
neoice Principal Linux Systems Engineer 0 points 1 point 2 points 4 years ago (0 children)
git+gitolite+Puppet.
thatmorrowguy Netsec Admin 0 points 1 point 2 points 4 years ago (1 child)
I am envious of your setup. Ours is very fragmented, cobbled together with tons of somewhat fragile home-grown scripts, but mostly manageable. Somehow configuration management never seems to make it to the top of the project list ...
neoice Principal Linux Systems Engineer 0 points 1 point 2 points 4 years ago (0 children)
doooo it. it's so so worth it. even if your initial rollout is just getting a config mgmt daemon installed and managed. once you get over the initial hurdle of having your config mgmt infrastructure in place, updates become so cheap and fast. it really will accelerate your organization.
Antoak 1 point 2 points 3 points 4 years ago (0 children)
What a coincidence, I'm listening to an interview with the chief security officer involved in that incident here.
the_ancient1 Say no to BYOD 25 points 26 points 27 points 4 years ago (21 children)
Most of these "horror stories" are not malicious in nature, but trace back to poor documentation and lack of experience.

If an admin has been in the same environment for 10+ years, they know all the quirks, all of one off scripts that hold critical systems together, how each piece fits with each other piece, etc.

So when a new person comes in off the street with no time to do a knowledge transfer, which normally takes months or years in some cases, problems arise and the immediate reaction is "Ex Employee did this on purpose because we fired them"

loquacious 16 points 17 points 18 points 4 years ago (3 children)
Solution: Replace management with a small shell script.
David_Crockett 1 point 2 points 3 points 4 years ago (1 child)

to0 valuable.

FTFY

Kreiger81 2 points 3 points 4 points 4 years ago (0 children)
I had a warehouse job where I ran into a similar issue. The person I was brought in to replace had been there for 25+ years, and even though I was being trained by her, I wasn't up to her speed on a lot of the procedures, and I didn't have the entire warehouse memorized like she did (she was on the planning team that BUILT the damned thing).

They never understood that I could not do in six months what it took her 25 years to perfect. "But Kreiger, she's old and retiring, how come you can't do it that fast"

the_ancient1 Say no to BYOD 8 points 9 points 10 points 4 years ago (10 children)
Bus factor ....

Unfortunately most companies have a bus factor of 1 on most systems. They do not want to pay the money to get a higher number

gawdimatwrk 4 points 5 points 6 points 4 years ago (1 child)
I was hired because of bus factor. My boss crashed his motorcycle and management was forced to hire someone while he was recovering. But when he returned to work it was back to the old habits. Even now he leaves me off all the important stuff and documents nothing. Senior management is aware and they don't care. Needless to say, I haven't stopped looking.
the_ancient1 Say no to BYOD 4 points 5 points 6 points 4 years ago (0 children)
This is why Bean Counters and IT always clash

IT sees redundancy as critical to operational health and security.

Bean Counters see Redundancy as waste and an easy way to bump the quarterly numbers

dmsean DevOps 2 points 3 points 4 points 4 years ago (3 children)
I hate the bus factor. I prefer to be one of those optimist types and say "What if Jon Bob won the lottery!"
ChrisOfAllTrades Admin ALL the things! 8 points 9 points 10 points 4 years ago (2 children)
Well, not being a dick, if I won the lottery, I'd probably stick around doing stuff but having way, way less stress about any of it. Finish up documentation, wrap up loose ends, take the team out for dinner and beers, then leave.

Hit by a bus? I'm not going to be doing jack shit if I'm in a casket.

VexingRaven 0 points 1 point 2 points 4 years ago (0 children)
TIL Jack Shit likes dead people.
neoice Principal Linux Systems Engineer 0 points 1 point 2 points 4 years ago (3 children)
my company's bus factor is 1.5-1.75. there are still places where knowledge is stored in a single head, but we're confident that the other party could figure things out given some time.
the_ancient1 Say no to BYOD 1 point 2 points 3 points 4 years ago (2 children)

other party could figure things out given some time.

Of course, given enough time a qualified person could figure out any system; that is not what the bus factor is about.

The bus factor is: if you die, quit, or are in some way incapacitated TODAY, can someone pick up where you left off without any impact to business operations?

That does not mean "Yes, we only have 1 admin for this system, but the admins of another system can figure it out over 6 months." That would still be a bus factor of 1.

neoice Principal Linux Systems Engineer 0 points 1 point 2 points 4 years ago (1 child)
I think it would be 3-6 months before anything broke that wasn't bus factor 2; probably 6-9 months for something actually important.
Strelock 1 point 2 points 3 points 4 years ago (0 children)
Or tomorrow...
Nyarlathotep124 0 points 1 point 2 points 4 years ago (0 children)
"Job Security"
jhulbe Citrix Admin 5 points 6 points 7 points 4 years ago (1 child)
yeah, it's usually a lot resting on one person's shoulders, and then he becomes overworked and resentful of his job but happy to have it. Then if he's ever let go he feels like he's owed something and gets upset because of all the time he has put into his masterpiece.

Or wait, is that serial killers?

Shock223 Student 1 point 2 points 3 points 4 years ago (0 children)

Or wait, is that serial killers?

The Police are asking the public if anyone has information on "The Ethernet Killer" to please come forward

curiousGambler 1 point 2 points 3 points 4 years ago (0 children)

the bean counters

Having just started as a software engineer at a major bank, I love this. Or hate it haha!

Slagwag 6 points 7 points 8 points 4 years ago (0 children)
At a previous job we had a strict password change policy for when someone left the company or was let go. Unfortunately the password change procedure didn't have a task to change it on the backup system, and we had a centralized backup location, offsite, for all of our customers. An employee who was let go must have tried all systems and found this one was available. He connected in, deleted all backup data, and stopped the backups from running. He then somehow connected into the customer (I believe this customer wanted RDP open on a specific port despite our advice) and used that to connect in and delete their data.

The person tried to make this look like it was not him by using local public wifi, but it was traced to him: his EZPass triggered when he drove to the location it was done from.

Unfortunately, years after this occurred, I believe it is still pending investigation and nothing was really done.

Loki-L Please contact your System Administrator 7 points 8 points 9 points 4 years ago (2 children)
So far nothing worse than a bad online review has ever happened from a co-worker leaving. Mostly that is because everyone here is sort of professional, and half of the co-workers that have left went to customers or partner companies or otherwise kept in business contact. There has been very little bridge burning despite a relatively high turnover in the IT department.

Part of me is hoping that someone would try to do something just so I could have something to show to the bosses about why I am always talking about having a better exit procedure than just stopping paying people and having the rest of the company find out by themselves sooner or later. There have been several instances of me deactivating accounts days or even weeks after someone stopped working for us because nobody thought to tell anyone....

On the flip side, it appears that if I ever left my current employer I would not need to sabotage them or withhold any critical information or anything. Based on the fact that they managed to call me on the first day of my vacation (before I actually planned to get up, really) for something that was both obvious and well documented, I half expect them to simply collapse by themselves if I stayed away for more than two weeks.

mwerte in over my head 1 point 2 points 3 points 4 years ago (0 children)
My old company kept paying people on several occasions because nobody bothered to fill out the three-field form (first name, last name, date of termination) that was very prominently displayed on our intranet, send us an email, or even stop by and say "by the way...". It was good times.
Lagkiller 2 points 3 points 4 points 4 years ago (1 child)
It's not paranoia if they really are out to get you
AceBacker 0 points 1 point 2 points 4 years ago (1 child)
Reminds me of a saying that I heard once.

If Network Security guys ran the police department, the police would stop writing tickets. Instead they would just shoot the speeders.

punkwalrus DevOps 7 points 8 points 9 points 4 years ago (0 children)
It was 1998, and our company had been through another round of layoffs. A few months later, a rep in member services got a weird error while attempting to log into a large database. "Please enter in administrative password." She showed it to her supervisor, who had a password for those types of errors. The manager usually just keyed in the password, which was used to fix random corrupt or incomplete records. But instead, she paused.

"Why would my rep get this error upon login?"

She called down to the database folks, who did a search and immediately shut down the database access, which pretty much killed all member service reps from doing any work for the rest of the day.

Turns out, one of the previous DBA/programmers had released a "time bomb" of sorts into the database client. Long story short, it was one of those, "if date is greater than [6 months from last build], run a delete query on the primary key upon first login." His mistake was that the db client was used by a rep who didn't have access to delete records. Had her manager just typed in a password, they would have wiped and made useless over 50 million records. Sure, they had backups, but upon restore, it would have done it again.

IIRC, the supervisor and rep got some kind of reward or bonus.

The former DBA was formally charged with whatever the law was back then, but I don't know what became of him after he was charged.

Sideonecincy 6 points 7 points 8 points 4 years ago (3 children)
This isn't a personal experience but a recent news story that led to prison time. The guy ended up with a 4-year prison sentence and a $500k fine.

In June 2012, Mitchell found out he was going to be fired from EnerVest and in response he decided to reset the company's servers to their original factory settings. He also disabled cooling equipment for EnerVest's systems and disabled a data-replication process.

Mitchell's actions left EnerVest unable to "fully communicate or conduct business operations" for about 30 days, according to Booth's office. The company also had to spend hundreds of thousands of dollars on data-recovery efforts, and part of the information could not be retrieved.

http://www.pcworld.com/article/2158020/it-pro-gets-prison-time-for-sabotaging-exemployers-system.html

MightySasquatch 2 points 3 points 4 points 4 years ago (2 children)
I honestly think people don't stop to think about how intentionally damaging equipment is illegal.
telemecanique 0 points 1 point 2 points 4 years ago (1 child)
huh? they just don't think period, but that's the point... we're all capable of it
MightySasquatch 0 points 1 point 2 points 4 years ago (0 children)
Maybe that's true, I suppose it depends on circumstance
jdom22 Master of none 6 points 7 points 8 points 4 years ago (2 children)
gaining access to a network you are not permitted to access = federal crime. doing so after a bitter departure makes you suspect #1. Don't do it. You will get caught, you will go to jail, you will likely never work in IT again.
wolfmann Jack of All Trades 2 points 3 points 4 points 4 years ago (0 children)

you will likely never work in IT again.

not so sure about that... there are several hackers out there that have their own consulting businesses that are doing quite well.

cpbills Sr. Linux Admin 0 points 1 point 2 points 4 years ago (0 children)
Even attempting to access systems you no longer have permission to access can be construed as malicious in nature and a crime.
lawrish Automation Lover 5 points 6 points 7 points 4 years ago (0 children)
Once upon a time my company was a mess; they had next to no network infrastructure. Firewalls? Too fancy. Everything on an external-facing server was open. A contractor put a back door in 4 of those servers, granting root access. Did he ever use it? No idea. I discovered it 5 years later. Not only that, he was bright enough to upload that really unique code to GitHub, with his full name and a LinkedIn profile linking him to my current company for 3 months.
jaydestro Sysadmin 16 points 17 points 18 points 4 years ago (5 children)
here's a tip from a senior sysadmin to anyone considering terminating a peer's employment...

treating someone like garbage is often the reason the person in question might put in "backdoors" or anything else that could be malicious. i've been fired from a job before, and you know what i did to "get back at them?" i got another, better job.

be a pro to the person and they'll be a pro to you, even when it's time to move on.

i know one person who retaliated after being fired, and he went to prison for a year. he was really young and dumb at the time, but it taught me a big lesson on how to act in this industry. getting mad gets you nowhere.

dmsean DevOps 6 points 7 points 8 points 4 years ago (2 children)
I've watched 3 senior IT people fired. All of them were given very cushy severances (like 4 months) and walked out the door with all sorts of statements like "we are willing to be a good reference for you" etc etc.
superspeck 3 points 4 points 5 points 4 years ago (1 child)
Seen this happen too, but it's usually been when a senior person gets a little crusty around the edges, starts being an impediment, and refuses to do things the 'new' way.
AceBacker 2 points 3 points 4 points 4 years ago (0 children)
I call this the Dick Van Dyke on Scrubs effect.

The Scrubs episode "My Brother, My Keeper" goes into perfect detail about this.

wolfmann Jack of All Trades 1 point 2 points 3 points 4 years ago (0 children)
Fear leads to anger. Anger leads to hate. Hate leads to suffering.

should have just watched Star Wars instead.

telemecanique 0 points 1 point 2 points 4 years ago (0 children)
you assume you/that person can think rationally at that point in time in every case, that assumption is incorrect.
telemecanique 1 point 2 points 3 points 4 years ago (2 children)
it has nothing to do with logic; EVERYONE can snap under the right circumstances. It's why school shootings, postal shootings, even regular road rage and really any craziness happen: we all have a different amount of stress that will make us simply not care, but we're all capable of losing our shit. Imagine if your wife divorces you, you lose your kids, you get raped in divorce court, your work suffers, you get fired, and you have access to a gun or, in this case, a keyboard + admin access... a million ways for a person to snap.
telemecanique 0 points 1 point 2 points 4 years ago (0 children)
and 99.9% of people in 99.9% of cases do, you're missing the simple truth that it can happen to anyone at anytime, you never know what someone you're firing today has been going through in the last 6 months. Hence you should worry.
JetlagMk2 Master of None 4 points 5 points 6 points 4 years ago (0 children)
The BACKGROUND section of this file is relevant
Omega Engineering Corp. ("Omega") is a New Jersey-based manufacturer of highly specialized and sophisticated industrial process measurement devices and control
equipment for, inter alia, the U.S. Navy and NASA. On July 31, 1996, all its design and production computer programs were permanently deleted. About 1,200 computer programs
were deleted and purged, crippling Omega's manufacturing capabilities and resulting in a loss of millions of dollars in sales and contracts.

There's an interesting rumor that because of the insurance payout Omega actually profited from the sabotage. Maybe that's the real business lesson.

danfirst 6 points 7 points 8 points 4 years ago (0 children)
Not a horror story exactly, but my first IT job I worked for a small non-profit as the "IT Guy", so servers, networks, users, whatever. My manager wanted to get her nephew into IT, so she "chose not to renew my contract". She and the HR lady brought me in, told me I wasn't being renewed, and said that for someone in my position they should have someone go to my desk and clean everything out for me and escort me out so I can't damage anything.

I told her, "listen, you both should know I'm not that sort of person, but really, I can access the entire system from home, easily, if I wanted to trash things I could, but I don't do that sort of thing. So how about you give me 10 minutes so I can pack up my own personal things?" They both turned completely white and nodded their heads and I left.

I got emails from the staff for months, the new guy was horrible. My manager was let go a few months later. Too bad on the timing really as it was a pretty great first IT job.

lawtechie 3 points 4 points 5 points 4 years ago (0 children)
I used to be a sysadmin at a small web hosting company owned by a family member. When I went to law school, I asked to take a reduced role. The current management didn't really understand systems administration, so they asked the outside developer to take on this role.

They then got into a licensing dispute with the developer over ownership of their code. The dev locked them out and threatened to wipe the servers if he wasn't paid. He started deleting email accounts and websites as threats. So, I get called the night before the bar exam by the family member.

I walk him through manually changing the root passwords and locking out all the unknown users. The real annoyance came when I asked the owner for some simple information with which to threaten the developer. Turns out, the owner didn't even know the guy's full name or address. The checks were sent to a P.O. box.

BerkeleyFarmGirl Jane of Most Trades 4 points 5 points 6 points 4 years ago (0 children)
I had some minor issues with a co-worker. He got bounced out because he had a bad habit of changing things and not testing them (and not being around to deal with the fallout). He was also super high control.

I knew enough about the system (he was actually my "replacement" when my helpdesk/support/sysadmin job was too big for one person) to head things off at the pass, but one day I was at the office late doing sysadmin stuff and got bounced off everything. Turns out he had set "login hours" on my account.

munky9002 7 points 8 points 9 points 4 years ago (3 children)
I had one where I was taking over and they still had access; we weren't supposed to cut off access. Well, they set up a backup exclusion and deleted all the backups of a certain directory. This directory held about 20 scripts that automated away parts of people's jobs, and after about 1 week they deleted the folder.

Mind you I had no idea it was even there. The disaster started in the morning and eventually after lunch all I did was log in as their user and restore it from their recycle bin.

We then kept the story going, asking them for copies of the scripts etc. They played it off like 'oh wow, you guys haven't even taken over yet and there's a disaster' and 'unfortunately we don't have copies of your scripts.'

It was days before they managed to find them and send them to us. You should also read the things; at the top: "REM This script is a limited license, for use only if you are our customer. Copyrights are ours."

So naturally I fixed their scripts, as there were problems with them, and I put the GPL at the top. A month later they contacted the CFO with a quote of $40,000 to allow the company to keep using their intellectual property. I wish I could have seen their faces when they got the email back saying:

"We caught you deleting the scripts and since it took you too long to respond and provide us with the scripts we wrote our owned and we licensed this with GPL, because it would be unethical to do otherwise.

Fortunately since we are not using your scripts and you just sent them to us without any mention of cost; we owe nothing."

munky9002 7 points 8 points 9 points 4 years ago (1 child)

However, slapping the GPL on top of someone else's licensed code doesn't actually GPL it.

I never said I put GPL on their work. I put GPL on my work. I recreated the scripts from scratch. I can license my own work however I damned well feel.

Skrp 2 points 3 points 4 points 4 years ago (0 children)
They're not that common, but the malicious insider threat is a very real concern.
punkwalrus DevOps 2 points 3 points 4 points 4 years ago (0 children)
In the 1980s, there was a story that went around the computer hobbyist circles about a travel agency in our area. They were big, and had all kinds of TV ads. Their claim to fame was how modern they were (for the time), and used computers to find the best deal at the last second, predict travel cost trends, and so on.

But behind the scenes, all was not well. The person in charge of their computer system was this older, crotchety bastard, a former IBM or DEC employee (I forget which). The stereotype of the BOFH before that was even a thing. He was unfriendly, made his own hours, and as time went on demanded more money, less work, and more hardware, and management hated him. They tried to hire help for him, but he refused to tell the new guys anything, and after 2-3 years of assistant techs quitting, they finally fired the guy and hired a consulting team to take over.

The programmer left quietly, didn't create a fuss, and no one suspected anything was amiss. But at some point, he dialed back into the mainframe and wiped all records and data. The backup tapes were all blank, too. He didn't document anything.

This pretty much fucked the company. They were out of business within a few months.

The big news about this was that at the time there was no precedent for this type of behavior, and there were no laws specific to this kind of crime. Essentially, they didn't have any proof of what he did, and those that could prove it didn't have a case because it wasn't a crime yet. He couldn't be charged with destruction of property, because no property was actually touched (from a legal perspective). This led to more modern laws, including some of the first aimed at preventing data from being deleted.

BerkeleyFarmGirl Jane of Most Trades 1 point 2 points 3 points 4 years ago (0 children)
I worked for a local government agency and our group acquired a sociopath boss.

My then supervisor (direct report to $Crazy) found another job and gave his notice. On his last day he admitted that he had considered screwing with things but that the people mostly hurt by it would be us and his beef was not with us.

$Crazy must have heard, because all future leavers in the group (and there was hella turnover) got put on admin leave the minute they gave notice, i.e. no system access.

girlgerms Windows Syster 0 points 1 point 2 points 4 years ago (0 children)
This is also more a people/process issue than a technical one.

If your processes are in place to ensure documentation is written, access is listed somewhere etc. then it shouldn't be an issue.

If the people who were hired are like this, then there was an issue in the hiring process. People with this kind of ethics aren't good admins. They're not even admins. They're chumps.

tahoebigah 0 points 1 point 2 points 4 years ago (0 children)
The guy I actually replaced was leaving on bad terms and let loose Conficker right before he left and caused a lot of other issues. He is now the Director of IT at another corporation ....
Pookiebeary 0 points 1 point 2 points 4 years ago (0 children)
Change the admin password and the password of the terminated domain admin (TAD). Reformat TAD's PCs. Tell TAD he's welcome to X months of severance as long as he doesn't come back or start shit. Worked for us so far...

[Jan 28, 2019] Happy Sysadmin Appreciation Day 2016

Jan 28, 2019 | opensource.com

dale.sykora on 29 Jul 2016 Permalink

I have a horror story from another IT person. One day they were tasked with adding a new server to a rack in their data center. They added the server... being careful not to bump a cable to the nearby production servers, SAN, and network switch. The physical install went well. But when they powered on the server, the ENTIRE RACK went dark. Customers were not happy :( It turns out that the power circuit they attached the server to was already at max capacity, and thus they caused the breaker to trip. Lessons learned... use redundant power and monitor power consumption.

Another issue was being a newbie on a Cisco switch, making a few changes and thinking the innocent-sounding "reload" command would work like Linux does when you restart a daemon. Watching 48 link activity LEDs go dark on your VMware cluster switch... priceless.

[Jan 28, 2019] The ghost of the failed restore

Notable quotes:
"... "Of course! You told me that I had to stay a couple of extra hours to perform that task," I answered. "Exactly! But you preferred to leave early without finishing that task," he said. "Oh my! I thought it was optional!" I exclaimed. ..."
"... "It was, it was " ..."
"... Even with the best solution that promises to make the most thorough backups, the ghost of the failed restoration can appear, darkening our job skills, if we don't make a habit of validating the backup every time. ..."
Nov 01, 2018 | opensource.com

In a well-known data center (whose name I do not want to remember), one cold October night we had a production outage in which thousands of web servers stopped responding due to downtime in the main database. The database administrator asked me, the rookie sysadmin, to recover the database's last full backup and restore it to bring the service back online.

But, at the end of the process, the database was still broken. I didn't worry, because there were other full backup files in stock. However, even after doing the process several times, the result didn't change.

With great fear, I asked the senior sysadmin what to do to fix this behavior.

"You remember when I showed you, a few days ago, how the full backup script was running? Something about how important it was to validate the backup?" responded the sysadmin.

"Of course! You told me that I had to stay a couple of extra hours to perform that task," I answered. "Exactly! But you preferred to leave early without finishing that task," he said. "Oh my! I thought it was optional!" I exclaimed.

"It was, it was "

Moral of the story: Even with the best solution that promises to make the most thorough backups, the ghost of the failed restoration can appear, darkening our job skills, if we don't make a habit of validating the backup every time.

[Jan 28, 2019] The danger of a single backup harddrive (USB or not)

The most typical danger is dropping the hard drive on the floor.
Notable quotes:
"... Also, backing up to another disk in the same computer will probably not save you when lighting strikes, as the backup disk is just as likely to be fried as the main disk. ..."
"... In real life, the backup strategy and hardware/software choices to support it is (as most other things) a balancing act. The important thing is that you have a strategy, and that you test it regularly to make sure it works as intended (as the main point is in the article). Also, realizing that achieving 100% backup security is impossible might save a lot of time in setting up the strategy. ..."
Nov 08, 2002 | www.linuxjournal.com

Anonymous on Fri, 11/08/2002

Why don't you just buy an extra hard disk and keep a copy of your important data there? With today's prices it barely costs anything.

Anonymous on Fri, 11/08/2002 - 03:00. A lot of people seem to have this idea, and in many situations it should work fine.

However, there is the human factor. Sometimes simple things go wrong (as simple as copying a file), and it takes a while before anybody notices that the contents of the file are not what is expected. This means you have to have many "generations" of backup of the file in order to be able to restore it, and in order not to put all the "eggs in the same basket", each of the file backups should be on a separate physical device.

Also, backing up to another disk in the same computer will probably not save you when lighting strikes, as the backup disk is just as likely to be fried as the main disk.

In real life, the backup strategy and hardware/software choices to support it is (as most other things) a balancing act. The important thing is that you have a strategy, and that you test it regularly to make sure it works as intended (as the main point is in the article). Also, realizing that achieving 100% backup security is impossible might save a lot of time in setting up the strategy.

(I.e. you have to say that this strategy has certain specified limits, like not being able to restore a file to its intermediate state sometime during a workday, only to the state it had when it was last backed up, which should be a maximum of xxx hours ago and so on...)

Hallvard P
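A concrete way to act on that "test it regularly" advice, as a minimal sketch (assumes GNU tar; the archive path is hypothetical):

# List every file in the archive that differs from, or is missing on,
# the live filesystem; a cheap, scriptable restore check.
tar --diff --file=/backup/home-20021108.tar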

[Jan 28, 2019] Those power cables ;-)

Jan 28, 2019 | opensource.com

John Fano on 31 Jul 2016

I was reaching down to power up the new UPS as my guy was stepping out from behind the rack, and the whole rack went dark. His foot caught the power cord of the working UPS and pulled it just enough to break the contacts, and since the battery had failed it couldn't provide power and shut off. It took about 30 minutes to bring everything back up.

Things went much better with the second UPS replacement. :-)

[Jan 28, 2019] "Right," I said. "Time to get the backup." I knew I had to leave when I saw his face start twitching and he whispered: "Backup ...?"

Jan 28, 2019 | opensource.com

SemperOSS on 13 Sep 2016 Permalink This one seems to be a classic too:

Working for a large UK-based international IT company, I had a call from the newest guy in the internal IT department: "The main server, you know ..."

"Yes?"

"I was cleaning out somebody's homedir ..."

"Yes?"

"Well, the server stopped running properly ..."

"Yes?"

"... and I can't seem to get it to boot now ..."

"Oh-kayyyy. I'll just totter down to you and give it an eye."

I went down to the basement where the IT department was located and had a look at his terminal screen on his workstation. Going back through the terminal history, just before a hefty amount of error messages, I found his last command: 'rm -rf /home/johndoe /*'. And I probably do not have to say that he was root at the time (it was them there days before sudo, not that that would have helped in his situation).

"Right," I said. "Time to get the backup." I knew I had to leave when I saw his face start twitching and he whispered: "Backup ...?"

==========

Bonus entries from same company:

It was the days of the 5.25" floppy disks (Wikipedia is your friend, if you belong to the younger generation). I sometimes had to ask people to send a copy of a floppy to check why things weren't working properly. Once I got a nice photocopy and another time, the disk came with a polite note attached ... stapled through the disk, to be more precise!

[Jan 28, 2019] regex - Safe rm -rf function in shell script

Jan 28, 2019 | stackoverflow.com

community wiki
5 revs
May 23, 2017 at 12:26

This question is similar to What is the safest way to empty a directory in *nix?

I'm writing bash script which defines several path constants and will use them for file and directory manipulation (copying, renaming and deleting). Often it will be necessary to do something like:

rm -rf "/${PATH1}"
rm -rf "${PATH2}/"*

While developing this script I'd want to protect myself from mistyping names like PATH1 and PATH2 and avoid situations where they are expanded to an empty string, thus resulting in wiping the whole disk. I decided to create a special wrapper:

rmrf() {
    if [[ $1 =~ "regex" ]]; then
        echo "Ignoring possibly unsafe path ${1}"
        exit 1
    fi

    shopt -s dotglob
    rm -rf -- $1
    shopt -u dotglob
}

Which will be called as:

rmrf "/${PATH1}"
rmrf "${PATH2}/"*

Regex (or sed expression) should catch paths like "*", "/*", "/**/", "///*" etc. but allow paths like "dir", "/dir", "/dir1/dir2/", "/dir1/dir2/*". Also I don't know how to enable shell globbing in a case like "/dir with space/*". Any ideas?

EDIT: this is what I came up with so far:

rmrf() {
    local RES
    local RMPATH="${1}"
    SAFE=$(echo "${RMPATH}" | sed -r 's:^((\.?\*+/+)+.*|(/+\.?\*+)+.*|[\.\*/]+|.*/\.\*+)$::g')
    if [ -z "${SAFE}" ]; then
        echo "ERROR! Unsafe deletion of ${RMPATH}"
        return 1
    fi

    shopt -s dotglob
    if [ '*' == "${RMPATH: -1}" ]; then
        echo rm -rf -- "${RMPATH/%\*/}"*
        RES=$?
    else
        echo rm -rf -- "${RMPATH}"
        RES=$?
    fi
    shopt -u dotglob

    return $RES
}

Intended use is (note an asterisk inside quotes):

rmrf "${SOMEPATH}"
rmrf "${SOMEPATH}/*"

where $SOMEPATH is not system or /home directory (in my case all such operations are performed on filesystem mounted under /scratch directory).

CAVEATS:

SpliFF ,Jun 14, 2009 at 13:45

I've found a big danger with rm in bash is that bash usually doesn't stop for errors. That means that:
cd $SOMEPATH
rm -rf *

Is a very dangerous combination if the change directory fails. A safer way would be:

cd $SOMEPATH && rm -rf *

Which will ensure the rm -rf won't run unless you are really in $SOMEPATH. This doesn't protect you from a bad $SOMEPATH, but it can be combined with the advice given by others to help make your script safer.

EDIT: @placeybordeaux makes a good point that if $SOMEPATH is undefined or empty, cd doesn't treat it as an error and returns 0. In light of that, this answer should be considered unsafe unless $SOMEPATH is validated as existing and non-empty first. I believe cd with no args should be an illegal command, since at best it performs a no-op and at worst it can lead to unexpected behaviour, but it is what it is.

Sazzad Hissain Khan ,Jul 6, 2017 at 11:45

nice trick, I am one stupid victim. – Sazzad Hissain Khan Jul 6 '17 at 11:45

placeybordeaux ,Jun 21, 2018 at 22:59

If $SOMEPATH is empty won't this rm -rf the user's home directory? – placeybordeaux Jun 21 '18 at 22:59

SpliFF ,Jun 27, 2018 at 4:10

@placeybordeaux The && only runs the second command if the first succeeds - so if cd fails rm never runs – SpliFF Jun 27 '18 at 4:10

placeybordeaux ,Jul 3, 2018 at 18:46

@SpliFF at least in ZSH the return value of cd $NONEXISTANTVAR is 0placeybordeaux Jul 3 '18 at 18:46

ruakh ,Jul 13, 2018 at 6:46

Instead of cd $SOMEPATH , you should write cd "${SOMEPATH?}" . The ${varname?} notation ensures that the expansion fails with a warning-message if the variable is unset or empty (such that the && ... part is never run); the double-quotes ensure that special characters in $SOMEPATH , such as whitespace, don't have undesired effects. – ruakh Jul 13 '18 at 6:46
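Putting those two comments together gives a minimal sketch of the pattern (SOMEPATH and its value are stand-ins for illustration; the :? form also catches an empty value, not just an unset one):

#!/usr/bin/env bash
# ${SOMEPATH:?message} aborts with the message if the variable is unset
# or empty, so the && chain (and therefore rm) never runs; the quotes
# keep paths containing spaces intact.
SOMEPATH="/scratch/build"    # hypothetical path

cd "${SOMEPATH:?SOMEPATH is unset or empty}" && rm -rf -- *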

community wiki
2 revs
Jul 24, 2009 at 22:36

There is a set -u bash directive that will cause an exit when an uninitialized variable is used. I read about it here, with rm -rf as an example. I think that's what you're looking for. And here is set's manual.
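A minimal sketch of the behaviour (the variable names are illustrative, and the echo keeps the demo harmless either way):

#!/usr/bin/env bash
set -u                      # expanding an unset variable is a fatal error

BUILDDIR="/tmp/build"       # hypothetical path
# The misspelled variable below would normally expand to the empty
# string, turning the argument into /* ; under set -u the script
# aborts with "BULIDDIR: unbound variable" instead.
echo rm -rf "${BULIDDIR}/"*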

Jun 14, 2009 at 12:38

I think "rm" command has a parameter to avoid the deleting of "/". Check it out.

Max ,Jun 14, 2009 at 12:56

Thanks! I didn't know about such option. Actually it is named --preserve-root and is not mentioned in the manpage. – Max Jun 14 '09 at 12:56

Max ,Jun 14, 2009 at 13:18

On my system this option is on by default, but it can't help in a case like rm -ri /* – Max Jun 14 '09 at 13:18
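The reason, sketched harmlessly below, is that the shell expands the glob before rm ever sees it, so --preserve-root never gets a chance to trigger:

# --preserve-root only protects the literal argument "/". A glob like
# /* is expanded by the shell into /bin /boot /etc ... before rm runs,
# so rm never sees "/" at all. Preview the expansion safely with echo:
echo rm -rf /*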

ynimous ,Jun 14, 2009 at 12:42

I would recommend using realpath(1) rather than the command argument directly, so that you can avoid things like /A/B/../ or symbolic links.
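A minimal sketch of that idea, assuming GNU realpath and a whitelisted /scratch area (both are assumptions for illustration):

rmrf_canon() {
    # Resolve "..", duplicate slashes and symlinks first, then only
    # allow deletion inside the whitelisted area.
    local target
    target=$(realpath -- "$1") || return 1
    case "$target" in
        /scratch/*) rm -rf -- "$target" ;;
        *) echo "Refusing to delete ${target}" >&2; return 1 ;;
    esac
}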

Max ,Jun 14, 2009 at 13:30

Useful but non-standard command. I've found a possible bash replacement: archlinux.org/pipermail/pacman-dev/2009-February/008130.html – Max Jun 14 '09 at 13:30

Jonathan Leffler ,Jun 14, 2009 at 12:47

Generally, when I'm developing a command with operations such as ' rm -fr ' in it, I will neutralize the remove during development. One way of doing that is:
RMRF="echo rm -rf"
...
$RMRF "/${PATH1}"

This shows me what should be deleted - but does not delete it. I will do a manual clean up while things are under development - it is a small price to pay for not running the risk of screwing up everything.

The notation ' "/${PATH1}" ' is a little unusual; normally, you would ensure that PATH1 simply contains an absolute pathname.

Using the metacharacter with ' "${PATH2}/"* ' is unwise and unnecessary. The only difference between using that and using just ' "${PATH2}" ' is that if the directory specified by PATH2 contains any files or directories with names starting with dot, then those files or directories will not be removed. Such a design is unlikely and is rather fragile. It would be much simpler just to pass PATH2 and let the recursive remove do its job. Adding the trailing slash is not necessarily a bad idea; the system would have to ensure that $PATH2 contains a directory name, not just a file name, but the extra protection is rather minimal.

Using globbing with ' rm -fr ' is usually a bad idea. You want to be precise and restrictive and limiting in what it does - to prevent accidents. Of course, you'd never run the command (shell script you are developing) as root while it is under development - that would be suicidal. Or, if root privileges are absolutely necessary, you neutralize the remove operation until you are confident it is bullet-proof.

Max ,Jun 14, 2009 at 13:09

To delete subdirectories and files starting with a dot I use "shopt -s dotglob". Using rm -rf "${PATH2}" is not appropriate because in my case PATH2 can only be removed by the superuser, and that results in an error status for the "rm" command (and I verify it to track other errors). – Max Jun 14 '09 at 13:09

Jonathan Leffler ,Jun 14, 2009 at 13:37

Then, with due respect, you should use a private sub-directory under $PATH2 that you can remove. Avoid glob expansion with commands like 'rm -rf' like you would avoid the plague (or should that be A/H1N1?). – Jonathan Leffler Jun 14 '09 at 13:37

Max ,Jun 14, 2009 at 14:10

Meanwhile I've found this perl project: http://code.google.com/p/safe-rm/

community wiki
too much php
Jun 15, 2009 at 1:55

If it is possible, you should try and put everything into a folder with a hard-coded name which is unlikely to be found anywhere else on the filesystem, such as ' foofolder '. Then you can write your rmrf() function as:
rmrf() {
    rm -rf "foofolder/$PATH1"
    # or
    rm -rf "$PATH1/foofolder"
}

There is no way that function can delete anything but the files you want it to.

vadipp ,Jan 13, 2017 at 11:37

Actually there is a way: if PATH1 is something like ../../someotherdirvadipp Jan 13 '17 at 11:37

community wiki
btop
Jun 15, 2009 at 6:34

You may use
set -f    # cf. help set

to disable filename generation (*).
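For example (safe to paste, since the expansion is only echoed):

set -f             # disable filename generation (globbing)
echo rm -rf /*     # prints the literal /* rather than every top-level dir
set +f             # re-enable globbing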

community wiki
Howard Hong
Oct 28, 2009 at 19:56

You don't need to use regular expressions.
Just assign the directories you want to protect to a variable and then iterate over the variable. eg:
protected_dirs="/ /bin /usr/bin /home $HOME"
for d in $protected_dirs; do
    if [ "$1" = "$d" ]; then
        rm=0
        break
    fi
done
# note: "$1" is quoted so paths containing spaces survive word splitting
if [ "${rm:-1}" -eq 1 ]; then
    rm -rf "$1"
fi
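
Wrapped as a function, the same check might look like this (a sketch; safe_rm is a hypothetical name, not part of the original answer):

safe_rm() {
    protected_dirs="/ /bin /usr/bin /home $HOME"
    for d in $protected_dirs; do
        if [ "$1" = "$d" ]; then
            echo "safe_rm: refusing to remove protected directory '$1'" >&2
            return 1
        fi
    done
    rm -rf "$1"
}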

,

Add the following codes to your ~/.bashrc
# safe delete
move_to_trash () { now="$(date +%Y%m%d_%H%M%S)"; mv "$@" ~/.local/share/Trash/files/"$@_$now"; }
alias del='move_to_trash'

# safe rm
alias rmi='rm -i'

Every time you need to rm something, consider del first; you can change the trash folder to suit. If you really do need to rm something, you can go to the trash folder and use rmi .

One small bug with del : when deleting a folder, for example my_folder , it should be del my_folder and not del my_folder/ , because I attach the time information at the end ( "$@_$now" ) to allow a possible later restore, and a trailing slash breaks that. For files, it works fine.
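
A possible companion for browsing and restoring what del has trashed (an illustrative addition, not part of the original snippet):

# list trashed items newest first, so the timestamp suffix is easy to spot
alias lstrash='ls -t ~/.local/share/Trash/files/'
# restore by moving an item back and stripping the suffix by hand, e.g.:
# mv ~/.local/share/Trash/files/notes.txt_20190128_093000 ./notes.txt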

[Jan 28, 2019] That's how I learned to always check with somebody else before rebooting a production server, no matter how minor it may seem

Jan 28, 2019 | www.reddit.com

VexingRaven 1 point 2 points 3 points 3 years ago (1 child)

Not really a horror story but definitely one of my first "Oh shit" moments. I was the FNG helpdesk/sysadmin at a company of 150 people. I start getting calls that something (I think it was Outlook) wasn't working in Citrix, apparently something broken on one of the Citrix servers. I'm 100% positive it will be fixed with a reboot (I've seen this before on individual PCs), so I diligently start working to get people off that Citrix server (one of three) so I can reboot it.

I get it cleared out, hit Reboot... And almost immediately get a call from the call center manager saying every single person just got kicked off Citrix. Oh shit. But there was nobody on that server! Apparently that server also housed the Secure Gateway server which my senior hadn't bothered to tell me or simply didn't know (Set up by a consulting firm). Whoops. Thankfully the servers were pretty fast and people's sessions reconnected a few minutes later, no harm no foul. And on the plus side, it did indeed fix the problem.

And that's how I learned to always check with somebody else before rebooting a production server, no matter how minor it may seem.

[Jan 26, 2019] How and why i run my own dns servers

Notable quotes:
"... Learn Bash the Hard Way ..."
"... Learn Bash the Hard Way ..."
zwischenzugs
Introduction

Despite my woeful knowledge of networking, I run my own DNS servers for my own websites, which are run from home. I achieved this through trial and error, and now it requires almost zero maintenance, even though I don't have a static IP at home.

Here I share how (and why) I persist in this endeavour.

Overview

This is an overview of the setup:

[diagram: DNS setup]

This is how I set up my DNS. I:

- set up two VPSes with static IPs to act as nameservers
- registered two domains, one for the nameservers and one for the site
- created a 'glue' record so the domain authority defers to my nameservers
- installed and configured bind on both VPSes, one as master and one as slave
- set up a cron job at home that pushes my dynamic IP to the nameservers whenever it changes

How?

Walking through step-by-step how I did it:

1) Set up two Virtual Private Servers (VPSes)

You will need two stable machines with static IP addresses. If you're not lucky enough to have these in your possession, then you can set one up on the cloud. I used this site , but there are plenty out there. NB I asked them, and their IPs are static per VPS. I use the cheapest cloud VPS ($1/month) and set up debian on there.

NOTE: Replace any mention of DNSIP1 and DNSIP2 below with the first and second static IP addresses you are given.

Log on and set up root password

SSH to the servers and set up a strong root password.

2) Set up domains

You will need two domains: one for your dns servers, and one for the application running on your host. I use dot.tk to get free throwaway domains. In this case, I might set up a myuniquedns.tk DNS domain and a myuniquesite.tk site domain. Whatever you choose, replace your DNS domain when you see YOURDNSDOMAIN below. Similarly, replace your app domain when you see YOURSITEDOMAIN below.

3) Set up a 'glue' record

If you use dot.tk as above, then to allow you to manage the YOURDNSDOMAIN domain you will need to set up a 'glue' record. What this does is tell the current domain authority (dot.tk) to defer to your nameservers (the two servers you've set up) for this specific domain. Otherwise it keeps referring back to the .tk domain for the IP. See here for a fuller explanation. Another good explanation is here .

To do this you need to check with the authority responsible how this is done, or become the authority yourself. dot.tk has a web interface for setting up a glue record, so I used that. There, you need to go to 'Manage Domains' => 'Manage Domain' => 'Management Tools' => 'Register Glue Records' and fill out the form. Your two hosts will be called ns1.YOURDNSDOMAIN and ns2.YOURDNSDOMAIN , and the glue records will point to either IP address.

Note, you may need to wait a few hours (or longer) for this to take effect. If really unsure, give it a day.
If you like this post, you might be interested in my book Learn Bash the Hard Way , available here for just $5.
4) Install bind on the DNS Servers

On a Debian machine (for example), and as root, type:

apt install bind9

bind is the domain name server software you will be running.

5) Configure bind on the DNS Servers

Now, this is the hairy bit. There are two parts to this, with two files involved: named.conf.local , and the db.YOURDNSDOMAIN file. They are both in the /etc/bind folder. Navigate there and edit these files.

Part 1 – named.conf.local

This file lists the 'zone's (domains) served by your DNS servers. It also defines whether this bind instance is the 'master' or the 'slave'. I'll assume ns1.YOURDNSDOMAIN is the 'master' and ns2.YOURDNSDOMAIN is the 'slave'.
Part 1a – the master
On the master/ ns1.YOURDNSDOMAIN , the named.conf.local should be changed to look like this:
zone "YOURDNSDOMAIN" {
 type master;
 file "/etc/bind/db.YOURDNSDOMAIN";
 allow-transfer { DNSIP2; };
};
zone "YOURSITEDOMAIN" {
 type master;
 file "/etc/bind/YOURDNSDOMAIN";
 allow-transfer { DNSIP2; };
};

zone "14.127.75.in-addr.arpa" {
 type master;
 notify no;
 file "/etc/bind/db.75";
 allow-transfer { DNSIP2; };
};

logging {
 channel query.log {
 file "/var/log/query.log";
 // Set the severity to dynamic to see all the debug messages.
 severity debug 3;
 };
category queries { query.log; };
};
The logging at the bottom is optional (I think). I added it a while ago, and I leave it in here for interest. I don't know what the 14.127 zone stanza is about.
Part 1b – the slave

On the slave/ ns2.YOURDNSDOMAIN , the named.conf.local should be changed to look like this:

zone "YOURDNSDOMAIN" {
 type slave;
 file "/var/cache/bind/db.YOURDNSDOMAIN";
 masters { DNSIP1; };
};

zone "YOURSITEDOMAIN" {
 type slave;
 file "/var/cache/bind/db.YOURSITEDOMAIN";
 masters { DNSIP1; };
};

zone "14.127.75.in-addr.arpa" {
 type slave;
 file "/var/cache/bind/db.75";
 masters { DNSIP1; };
};
Part 2 – db.YOURDNSDOMAIN

Now we get to the meat – your DNS database is stored in this file.

On the master/ ns1.YOURDNSDOMAIN the db.YOURDNSDOMAIN file looks like this :

$TTL 4800
@ IN SOA ns1.YOURDNSDOMAIN. YOUREMAIL.YOUREMAILDOMAIN. (
  2018011615 ; Serial
  604800 ; Refresh
  86400 ; Retry
  2419200 ; Expire
  604800 ) ; Negative Cache TTL
;
@ IN NS ns1.YOURDNSDOMAIN.
@ IN NS ns2.YOURDNSDOMAIN.
ns1 IN A DNSIP1
ns2 IN A DNSIP2
YOURSITEDOMAIN. IN A YOURDYNAMICIP

On the slave/ ns2.YOURDNSDOMAIN it's very similar, but has ns1 in the SOA line, and the IN NS lines reversed. I can't remember if this reversal is needed or not :

$TTL 4800
@ IN SOA ns1.YOURDNSDOMAIN. YOUREMAIL.YOUREMAILDOMAIN. (
  2018011615 ; Serial
 604800 ; Refresh
 86400 ; Retry
 2419200 ; Expire
 604800 ) ; Negative Cache TTL
;
@ IN NS ns1.YOURDNSDOMAIN.
@ IN NS ns2.YOURDNSDOMAIN.
ns1 IN A DNSIP1
ns2 IN A DNSIP2
YOURSITEDOMAIN. IN A YOURDYNAMICIP

A few notes on the above:

The next step is to dynamically update the DNS server with your dynamic IP address whenever it changes.

6) Copy ssh keys

Before setting up your dynamic DNS you need to set up your ssh keys so that your home server can access the DNS servers.

NOTE: This is not security advice. Use at your own risk.

First, check whether you already have an ssh key generated:

ls ~/.ssh/id_rsa

If that returns a file, you're all set up. Otherwise, type:

ssh-keygen

and accept the defaults.

Then, once you have a key set up, copy your ssh ID to the nameservers:

ssh-copy-id root@DNSIP1
ssh-copy-id root@DNSIP2

Input your root password for each command when prompted.

7) Create an IP updater script

Now ssh to both servers and place this script in /root/update_ip.sh :

#!/bin/bash
set -o nounset
sed -i "s/^(.*) IN A (.*)$/1 IN A $1/" /etc/bind/db.YOURDNSDOMAIN
sed -i "s/.*Serial$/ $(date +%Y%m%d%H) ; Serial/" /etc/bind/db.YOURDNSDOMAIN
/etc/init.d/bind9 restart

Make it executable by running:

chmod +x /root/update_ip.sh

Going through it line by line:

set -o nounset

This line makes the script throw an error if the IP is not passed in as the argument to the script.

sed -i "s/^\(.*\) IN A \(.*\)$/\1 IN A $1/" /etc/bind/db.YOURDNSDOMAIN

Replaces the IP address in the A records with the contents of the first argument to the script.

sed -i "s/.*Serial$/ $(date +%Y%m%d%H) ; Serial/" /etc/bind/db.YOURDNSDOMAIN

Ups the 'serial number', which tells the slave that the zone has changed.

/etc/init.d/bind9 restart

Restarts the bind service on the host.
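
Before wiring the script into cron, it is worth a manual test with a throwaway address (203.0.113.7 is a documentation IP, used here purely as an example):

ssh root@DNSIP1 "/root/update_ip.sh 203.0.113.7"
ssh root@DNSIP1 "grep ' IN A ' /etc/bind/db.YOURDNSDOMAIN"   # confirm the A records changed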

8) Cron Your Dynamic DNS

At this point you've got access to update the IP when your dynamic IP changes, and the script to do the update.

Here's the raw cron entry:

* * * * * curl ifconfig.co 2>/dev/null > /tmp/ip.tmp && (diff /tmp/ip.tmp /tmp/ip || (mv /tmp/ip.tmp /tmp/ip && ssh root@DNSIP1 "/root/update_ip.sh $(cat /tmp/ip)")); curl ifconfig.co 2>/dev/null > /tmp/ip.tmp2 && (diff /tmp/ip.tmp2 /tmp/ip2 || (mv /tmp/ip.tmp2 /tmp/ip2 && ssh root@DNSIP2 "/root/update_ip.sh $(cat /tmp/ip2)"))

Breaking this command down step by step:

curl ifconfig.co 2>/dev/null > /tmp/ip.tmp

This curls a 'what is my IP address' site, and deposits the output to /tmp/ip.tmp

diff /tmp/ip.tmp /tmp/ip || (mv /tmp/ip.tmp /tmp/ip && ssh root@DNSIP1 "/root/update_ip.sh $(cat /tmp/ip)")

This diffs the contents of /tmp/ip.tmp with /tmp/ip (which is yet to be created, and holds the last-updated IP address). If they differ (ie there is a new IP address to update on the DNS server), then the subshell is run. This overwrites the stored IP address, and then ssh'es onto the nameserver to run the update_ip.sh script with the new address.

The same process is then repeated for DNSIP2 using separate files ( /tmp/ip.tmp2 and /tmp/ip2 ).
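
Unrolled into a script, the same logic is easier to audit (a sketch; /root/check_ip.sh is a hypothetical name you could then call from a single cron line):

#!/bin/bash
# Sketch: detect a change in the external IP and push it to one nameserver.
update_one() {
    local server="$1" cache="$2"
    curl -s ifconfig.co > "${cache}.tmp" || return     # fetch current external IP
    if ! diff -q "${cache}.tmp" "$cache" >/dev/null 2>&1; then
        mv "${cache}.tmp" "$cache"                     # remember the new IP
        ssh "root@${server}" "/root/update_ip.sh $(cat "$cache")"
    fi
}
update_one DNSIP1 /tmp/ip
update_one DNSIP2 /tmp/ip2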

Why!?

You may be wondering why I do this in the age of cloud services and outsourcing. There are a few reasons.

It's Cheap

The cost of running this stays at the cost of the two nameservers ($24/year) no matter how many domains I manage and whatever I want to do with them.

Learning

I've learned a lot by doing this, probably far more than any course would have taught me.

More Control

I can do what I like with these domains: set up any number of subdomains, try my hand at secure mail techniques, experiment with obscure DNS records and so on.

I could extend this into a service. If you're interested, my rates are very low :)


If you like this post, you might be interested in my book Learn Bash the Hard Way , available here for just $5.

[Jan 26, 2019] Shell startup scripts

flowblok's blog
That diagram shows what happens according to the man page, and not what happens when you actually try it out in real life. This second diagram more accurately captures the insanity of bash:

[diagram: actual bash startup-file order]

See how remote interactive login shells read /etc/bash.bashrc, but normal interactive login shells don't? Sigh.

Finally, here's a repository containing my implementation and the graphviz files for the above diagram. If your POSIX-compliant shell isn't listed here, or if I've made a horrible mistake (or just a tiny one), please send me a pull request or make a comment below, and I'll update this post accordingly.

[1]

and since I'm writing this, I can make you say whatever I want for the purposes of narrative.

[Jan 26, 2019] Shell startup script order of execution

Highly recommended!
Jan 26, 2019 | flowblok.id.au

Adrian • a month ago ,

6 years late, but...

In my experience, if your bash sources /etc/bash.bashrc, odds are good it also sources /etc/bash.bash_logout or something similar on logout (after ~/.bash_logout, of course).

From bash-4.4/config-top.h:

/* System-wide .bashrc file for interactive shells. */
/* #define SYS_BASHRC "/etc/bash.bashrc" */

/* System-wide .bash_logout for login shells. */
/* #define SYS_BASH_LOGOUT "/etc/bash.bash_logout" */

(Yes, they're disabled by default.)

Check the FILES section of your system's bash man page for details.

[Jan 26, 2019] Systemd developers don't want to replace the kernel, they are more than happy to leverage Linus's good work on what they see as a collection of device drivers

Jan 26, 2019 | blog.erratasec.com

John Morris said...

They don't want to replace the kernel; they are more than happy to leverage Linus's good work on what they see as a collection of device drivers. No, they want to replace the GNU/X in the traditional Linux/GNU/X arrangement. All of the command line tools, up to and including bash, are to go, replaced with the more Windows-like tools most of the systemd developers grew up on, while X and the desktop environments all get rubbished for Wayland and GNOME3.

And I would wish them luck, the world could use more diversity in operating systems. So long as they stayed the hell over at RedHat and did their grand experiment and I could still find a Linux/GNU/X distribution to run. But they had to be borg and insist that all must bend the knee and to that I say HELL NO!

[Jan 26, 2019] The coming enhancement to systemd

Jan 26, 2019 | blog.erratasec.com

Siegfried Kiermayer said...

I'm waiting for PulseAudio to be included in systemd so we can have a proper boot sound :D

[Jan 26, 2019] Ten Things I Wish I'd Known About bash

Highly recommended!
Jan 06, 2018 | zwischenzugs.com
Intro

Recently I wanted to deepen my understanding of bash by researching as much of it as possible. Because I felt bash is an often-used (and under-understood) technology, I ended up writing a book on it .

A preview is available here .

You don't have to look hard on the internet to find plenty of useful one-liners in bash, or scripts. And there are guides to bash that seem somewhat intimidating through either their thoroughness or their focus on esoteric detail.

Here I've focussed on the things that either confused me or increased my power and productivity in bash significantly, and tried to communicate them (as in my book) in a way that emphasises getting the understanding right.

Enjoy!


1) `` vs $()

These two operators do the same thing. Compare these two lines:

$ echo `ls`
$ echo $(ls)

Why these two forms existed confused me for a long time.

If you don't know, both forms substitute the output of the command contained within it into the command.

The principal difference is that nesting is simpler.

Which of these is easier to read (and write)?

    $ echo `echo \`echo \\\`echo inside\\\`\``

or:

    $ echo $(echo $(echo $(echo inside)))

If you're interested in going deeper, see here or here .

2) globbing vs regexps

Another one that can confuse if never thought about or researched.

While globs and regexps can look similar, they are not the same.

Consider this command:

$ rename -n 's/(.*)/new$1/' *

The two asterisks are interpreted in different ways.

The first is ignored by the shell (because it is in quotes), and is interpreted as '0 or more characters' by the rename application. So it's interpreted as a regular expression.

The second is interpreted by the shell (because it is not in quotes), and gets replaced by a list of all the files in the current working folder. It is interpreted as a glob.

So by looking at man bash can you figure out why these two commands produce different output?

$ ls *
$ ls .*

The second looks even more like a regular expression. But it isn't!

3) Exit Codes

Not everyone knows that every time you run a shell command in bash, an 'exit code' is returned to bash.

Generally, if a command 'succeeds' you get an exit code of 0 . If it doesn't succeed, you get a non-zero code. 1 is a 'general error', and others can give you more information (eg which signal killed the process).

But these rules don't always hold:

$ grep not_there /dev/null
$ echo $?

$? is a special bash variable that's set to the exit code of each command after it runs.

Grep uses exit codes to indicate whether it matched or not. I have to look up every time which way round it goes: does finding a match or not return 0 ?

Grok this and a lot will click into place in what follows.
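
If you can never remember which way round grep goes, a two-line experiment settles it (0 means a match was found):

$ echo hello | grep hello > /dev/null; echo $?
0
$ echo hello | grep absent > /dev/null; echo $?
1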

4) if statements, [ and [[

Here's another 'spot the difference' similar to the backticks one above.

What will this output?

if grep not_there /dev/null
then
    echo hi
else
    echo lo
fi

grep's return code makes code like this work more intuitively as a side effect of its use of exit codes.

Now what will this output?

a) hihi
b) lolo
c) something else

if [ $(grep not_there /dev/null) = '' ]
then
    echo -n hi
else
    echo -n lo
fi
if [[ $(grep not_there /dev/null) = '' ]]
then
    echo -n hi
else
    echo -n lo
fi

The difference between [ and [[ was another thing I never really understood. [ is the original form for tests, and then [[ was introduced, which is more flexible and intuitive. In the first if block above, the if statement barfs because the $(grep not_there /dev/null) is evaluated to nothing, resulting in this comparison:

[ = '' ]

which makes no sense. The double bracket form handles this for you.

This is why you occasionally see comparisons like this in bash scripts:

if [ x$(grep not_there /dev/null) = 'x' ]

so that if the command returns nothing, the comparison still has an operand on both sides and the test still runs. With [[ there's no need for it, but that's why the idiom exists.

5) set s

Bash has configurable options which can be set on the fly. I use two of these all the time:

set -e

exits from a script if any command returned a non-zero exit code (see above).

This outputs the commands that get run as they run:

set -x

So a script might start like this:

#!/bin/bash
set -e
set -x
grep not_there /dev/null
echo $?

What would that script output?

6) <()

This is my favourite. It's so under-used, perhaps because it can be initially baffling, but I use it all the time.

It's similar to $() in that the output of the command inside is re-used.

In this case, though, the output is treated as a file. This file can be used as an argument to commands that take files as an argument.

Confused? Here's an example.

Have you ever done something like this?

$ grep somestring file1 > /tmp/a
$ grep somestring file2 > /tmp/b
$ diff /tmp/a /tmp/b

That works, but instead you can write:

diff <(grep somestring file1) <(grep somestring file2)

Isn't that neater?
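
Another place <() earns its keep (an extra example, not from the original post) is feeding two sorted streams to comm to find common lines:

comm -12 <(sort file1) <(sort file2)    # lines present in both files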

7) Quoting

Quoting's a knotty subject in bash, as it is in many software contexts.

Firstly, variables in quotes:

A='123'  
echo "$A"
echo '$A'

Pretty simple – double quotes dereference variables, while single quotes go literal.

So what will this output?

mkdir -p tmp
cd tmp
touch a
echo "*"
echo '*'

Surprised? I was.

8) Top three shortcuts

There are plenty of shortcuts listed in man bash , and it's not hard to find comprehensive lists. This list consists of the ones I use most often, in order of how often I use them.

Rather than trying to memorize them all, I recommend picking one, and trying to remember to use it until it becomes unconscious. Then take the next one. I'll skip over the most obvious ones (eg !! – repeat last command, and ~ – your home directory).

!$

I use this dozens of times a day. It repeats the last argument of the last command. If you're working on a file, and can't be bothered to re-type it command after command it can save a lot of work:

grep somestring /long/path/to/some/file/or/other.txt
vi !$

!:1-$

This bit of magic takes this further. It takes all the arguments to the previous command and drops them in. So:

grep isthere /long/path/to/some/file/or/other.txt
egrep !:1-$
fgrep !:1-$

The ! means 'look at the previous command', the : is a separator, and the 1 means 'take the first word', the - means 'until' and the $ means 'the last word'.

Note: you can achieve the same thing with !* . Knowing the above gives you the control to limit to a specific contiguous subset of arguments, eg with !:2-3 .

:h

I use this one a lot too. If you put it after a filename, it will strip the last path component, leaving just the path to the containing folder. Like this:

grep isthere /long/path/to/some/file/or/other.txt
cd !$:h

which can save a lot of work in the course of the day.

9) startup order

The order in which bash runs startup scripts can cause a lot of head-scratching. I keep this diagram handy (from this great page):

[diagram: shell-startup-actual]

It shows which scripts bash decides to run from the top, based on decisions made about the context bash is running in (which decides the colour to follow).

So if you are in a local (non-remote), non-login, interactive shell (eg when you run bash itself from the command line), you are on the 'green' line, and these are the order of files read:

/etc/bash.bashrc
~/.bashrc
[bash runs, then terminates]
~/.bash_logout

This can save you a hell of a lot of time debugging.

10) getopts (cheapci)

If you go deep with bash, you might end up writing chunky utilities in it. If you do, then getting to grips with getopts can pay large dividends.

For fun, I once wrote a script called cheapci which I used to work like a Jenkins job.

The code here implements the reading of the two required, and 14 non-required arguments . Better to learn this than to build up a bunch of bespoke code that can get very messy pretty quickly as your utility grows.
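
For reference, a minimal getopts skeleton looks something like this (a generic sketch, not the actual cheapci code):

#!/bin/bash
# Parse -v (a flag) and -f <file> (an option taking an argument).
verbose=0
file=''
while getopts 'vf:' opt; do
    case "$opt" in
        v) verbose=1 ;;
        f) file="$OPTARG" ;;
        *) echo "usage: $0 [-v] [-f file]" >&2; exit 1 ;;
    esac
done
shift $((OPTIND - 1))    # what remains in "$@" are the positional arguments
echo "verbose=$verbose file=$file rest=$*"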


This is based on some of the contents of my book Learn Bash the Hard Way , available at $7 :

[Jan 25, 2019] Some systemd problems that arise in reasonably complex datacenter environment

May 10, 2018 | theregister.co.uk
Thursday 10th May 2018 16:34 GMT Nate Amsden

as a linux user for 22 years

(20 of which on Debian, before that was Slackware)

I am new to systemd, maybe 3 or 4 months now tops on Ubuntu, and a tiny bit on Debian before that.

I was confident I was going to hate systemd before I used it just based on the comments I had read over the years, I postponed using it as long as I could. Took just a few minutes of using it to confirm my thoughts. Now to be clear, if I didn't have to mess with the systemd to do stuff then I really wouldn't care since I don't interact with it (which is the case on my laptop at least though laptop doesn't have systemd anyway). I manage about 1,000 systems running Ubuntu for work, so I have to mess with systemd, and init etc there.

If systemd would just do ONE thing I think it would remove all of the pain that it has inflicted on me over the past several months and I could learn to accept it.

That one thing is, if there is an init script, RUN IT. Not run it like systemd does now. But turn off ALL intelligence systemd has when it finds that script and run it. Don't put it on any special timers, don't try to detect if it is running already, or stopped already or whatever, fire the script up in blocking mode and wait till it exits.

My first experience with systemd was on one of my home servers, I re-installed Debian on it last year, rebuilt the hardware etc and with it came systemd. I believe there is a way to turn systemd off but I haven't tried that yet. The first experience was with bind. I have a slightly custom init script (from previous debian) that I have been using for many years. I copied it to the new system and tried to start bind. Nothing. I looked in the logs and it seems that it was trying to interface with rndc(internal bind thing) for some reason, and because rndc was not working(I never used it so I never bothered to configure it) systemd wouldn't launch bind. So I fixed rndc and systemd would now launch bind, only to stop it within 1 second of launching. My first workaround was just to launch bind by hand at the CLI (no init script), left it running for a few months. Had a discussion with a co-worker who likes systemd and he explained that making a custom unit file and using the type=forking option may fix it.. That did fix the issue.
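
For anyone hitting the same bind-under-systemd wall, the kind of forking unit the co-worker described might look roughly like this (a sketch; the unit name and script path are assumptions, not taken from the poster's system):

cat > /etc/systemd/system/bind-legacy.service <<'EOF'
[Unit]
Description=BIND started via the legacy init script
After=network.target

[Service]
Type=forking
ExecStart=/etc/init.d/bind9 start
ExecStop=/etc/init.d/bind9 stop

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload      # make systemd pick up the new unit
systemctl start bind-legacy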

Next issue came up when dealing with MySQL clusters. I had to initialize the cluster with the "service mysql bootstrap-pxc" command (using the start command on the first cluster member is a bad thing). Run that with systemd, and systemd runs it fine. But go to STOP the service, and systemd thinks the service is not running so doesn't even TRY to stop the service(the service is running). My workaround for my automation for mysql clusters at this point is to just use mysqladmin to shut the mysql instances down. Maybe newer mysql versions have better systemd support though a co-worker who is our DBA and has used mysql for many years says even the new Maria DB builds don't work well with systemd. I am working with Mysql 5.6 which is of course much much older.

Next issue came up with running init scripts that have the same words in them, in the case of most recently I upgraded systems to systemd that run OSSEC. OSSEC has two init scripts for us on the server side (ossec and ossec-auth). Systemd refuses to run ossec-auth because it thinks there is a conflict with the ossec service. I had the same problem with multiple varnish instances running on the same system (varnish instances were named varnish-XXX and varnish-YYY). In the varnish case using custom unit files I got systemd to the point where it would start the service but it still refuses to "enable" the service because of the name conflict (I even changed the name but then systemd was looking at the name of the binary being called in the unit file and said there is a conflict there).

fucking a. Systemd shut up, just run the damn script. It's not hard.

Later a co-worker explained the "systemd way" for handling something like multiple varnish instances on the system but I'm not doing that, in the meantime I just let chef start the services when it runs after the system boots(which means they start maybe 1 or 2 mins after bootup).

Another thing bit us with systemd recently as well again going back to bind. Someone on the team upgraded our DNS systems to systemd and the startup parameters for bind were not preserved because systemd ignores the /etc/default/bind file. As a result we had tons of DNS failures when bind was trying to reach out to IPv6 name servers(ugh), when there is no IPv6 connectivity in the network (the solution is to start bind with a -4 option).

I believe I have also caught systemd trying to mess with file systems(iscsi mount points). I have lots of automation around moving data volumes on the SAN between servers and attaching them via software iSCSI directly to the VMs themselves(before vsphere 4.0 I attached them via fibre channel to the hypervisor but a feature in 4.0 broke that for me). I noticed on at least one occasion when I removed the file systems from a system that SOMETHING (I assume systemd) mounted them again, and it was very confusing to see file systems mounted again for block devices that DID NOT EXIST on the server at the time. I worked around THAT one I believe with the "noauto" option in fstab again. I had to put a lot of extra logic in my automation scripts to work around systemd stuff.

I'm sure I've only scratched the surface of systemd pain. I'm sure it provides good value to some people, I hear it's good with containers (I have been running LXC containers for years now, I see nothing with systemd that changes that experience so far).

But if systemd would just do this one thing and go into dumb mode with init scripts I would be quite happy.

[Jan 25, 2019] SystemD vs Solaris 10 SMF

"Shadow files" approach of Solaris 10, where additional functions of init are controlled by XML script that exist in a separate directory with the same names as init scripts can be improved but architecturally it is much cleaner then systemd approach.
Notable quotes:
"... Solaris has a similar parallellised startup system, with some similar problems, but it didn't need pid 1. ..."
"... Agreed, Solaris svcadm and svcs etc are an example of how it should be done. A layered approach maintaining what was already there, while adding functionality for management purposes. Keeps all the old text based log files and uses xml scripts (human readable and editable) for higher level functions. ..."
"... AFAICT everyone followed RedHat because they also dominate Gnome, and chose to make Gnome depend on systemd. Thus if one had any aspirations for your distro supporting Gnome in any way, you have to have systemd underneath it all. ..."
Jan 25, 2019 | theregister.co.uk

Re: Poettering still doesn't get it... Pid 1 is for people wearing big boy pants.

SystemD is corporate money (Redhat support dollars) triumphing over the long hairs sadly. Enough money can buy a shitload of code and you can overwhelm the hippies with hairball dependencies (the key moment was udev being dependent on systemd) and soon get as much FOSS as possible dependent on the Linux kernel.

This has always been the end game as Red Hat makes its bones on Linux specifically not on FOSS in general (that say runs on Solaris or HP-UX).

The tighter they can glue the FOSS ecosystem and the Linux kernel together ala Windows lite style the better for their bottom line. Poettering is just being a good employee asshat extraordinaire he is.

Re: Ahhh SystemD

I honestly would love someone to lay out the problems it solves.

Solaris has a similar parallellised startup system, with some similar problems, but it didn't need pid 1.

Re: Ahhh SystemD

Agreed, Solaris svcadm and svcs etc are an example of how it should be done. A layered approach maintaining what was already there, while adding functionality for management purposes. Keeps all the old text based log files and uses xml scripts (human readable and editable) for higher level functions.

Afaics, systemd is a power grab by Red Hat and an ego trip for its primary developer.

Dumped bloatware Linux in favour of FreeBSD and others after Suse 11.4, though that was bad enough with Gnome 3...
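
For readers who never met Solaris, the svcs / svcadm workflow praised above looks like this (illustrative invocations, so treat the details as approximate):

svcs -a                        # list all services and their states
svcs -x                        # explain services that are in a broken state
svcadm enable ssh              # enable (and start) a service
svcadm restart network/dns/server   # restart a service by its FMRI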

starbase7, Thursday 10th May 2018 04:36 GMT

SMF?

As an older timer (on my way but not there yet), I never cared for the init.d startup and I dislike the systemd monolithic architecture.

What I do like is Solaris SMF and wish Linux would have adopted a method such as or similar to that. I still think SMF was/is a great comprise to the init.d method or systemd manor.

I used SMF professionally, but now I have moved on with Linux professionally as Solaris is, well, dead. I only get to enjoy SMF on my home systems, and savor it. I'm trying to like Linux over all these years, but this systemd thing is a real big road block for me to get enthusiastic.

I have a hard time understanding why all the other Linux distros joined hands with Redhat and implemented that thing, systemd. Sigh.

Anonymous Coward, Thursday 10th May 2018 04:53 GMT

Re: SMF?

You're not alone in liking SMF and Solaris.

AFAICT everyone followed RedHat because they also dominate Gnome, and chose to make Gnome depend on systemd. Thus if one had any aspirations for your distro supporting Gnome in any way, you have to have systemd underneath it all.

RedHat seem to call the shots these days as to what a Linux distro has. I personally have mixed opinions on this; I think the vast anarchy of Linux is a bad thing for Linux adoption ("this is the year of the Linux desktop" don't make me laugh), and Linux would benefit from a significant culling of the vast number of distros out there. However if that did happen and all that was left was something controlled by RedHat, that would be a bad situation.

Steve Davies 3, Thursday 10th May 2018 07:30 GMT

Re: SMF?
Remember who 'owns' SMF... namely Oracle. They may well have made it impossible for anyone to adopt. That stance is not unknown now is it...?

As for systemd, I have gritted my teeth and learned to tolerate it. I'll never be as comfortable with it as I was with the old init system, but I did start running into issues with it, especially with shutdown syncing on some complex systems.

Still not sure if systemd is the right way forward even after four years.

Daggerchild, Thursday 10th May 2018 14:30 GMT

Re: SMF?
SMF should be good, and yet they released it before they'd documented it. Strange priorities...

And XML is *not* a config file format you should let humans at. Finding out the correct order to put the XML elements in to avoid unexplained "parse error", was *not* a fun game.

And someone correct me, but it looks like there are SMF properties of a running service that can only be modified/added by editing the file, reloading *and* restarting the service. A metadata and state/dependency tracking system shouldn't require you to shut down the priority service it's meant to be ensuring... Again, strange priorities...

onefang, Friday 11th May 2018 07:55 GMT

Re: SMF?
"XML is *not* a config file format you should let humans at"

XML is a format you shouldn't let computers at, it was designed to be human readable and writable. It fails totally.

Hans 1, Friday 6th July 2018 12:27 GMT

Re: SMF?
Finding out the correct order to put the XML elements in to avoid unexplained "parse error", was *not* a fun game.

Hm, you do know the grammar is in a DTD? Yes, XML takes time to learn, but it is very powerful once mastered.

CrazyOldCatMan, Thursday 10th May 2018 13:24 GMT

Re: SMF?
I have a hard time understanding why all the other Linux distros joined hands with Redhat and implemented that thing, systemd

Several reasons:

A lot of other distros use Redhat (or Fedora) as their base and then customise it.

A lot of other distros include things dependant on systemd (Gnome being the one with biggest dependencies - you can just about to get it to run without systemd but it's a pain and every update will break your fixes).

Redhat has a lot of clout.


[Jan 17, 2019] The financial struggles of unplanned retirement

People who are kicked out of their IT jobs around 55 now have difficulty finding even full-time McJobs; often only part-time jobs are available. With the current round of layoffs and job freezes, neoliberalism in the USA is entering its terminal phase, I think.
Jan 17, 2019 | finance.yahoo.com

A survey by Transamerica Center for Retirement Studies found on average Americans are retiring at age 63, with more than half indicating they retired sooner than they had planned. Among them, most retired for health or employment-related reasons.

... ... ...

On April 3, 2018, Linda LaBarbera received the phone call that changed her life forever. "We are outsourcing your work to India and your services are no longer needed, effective today," the voice on the other end of the phone line said.

... ... ...

"It's not like we are starving or don't have a home or anything like that," she says. "But we did have other plans for before we retired and setting ourselves up a little better while we both still had jobs."

... ... ...

Linda hasn't needed to dip into her 401(k) yet. She plans to start collecting Social Security when she turns 70, which will give her the maximum benefit. To earn money and keep busy, Linda has taken short-term contract editing jobs. She says she will only withdraw money from her savings if something catastrophic happens. Her husband's salary is their main source of income.

"I am used to going out and spending money on other people," she says. "We are very generous with our family and friends who are not as well off as we are. So we take care of a lot of people. We can't do that anymore. I can't go out and be frivolous anymore. I do have to look at what we spend - what I spend."

Vogelbacher says cutting costs is essential when living in retirement, especially for those on a fixed income. He suggests moving to a tax-friendly location if possible. Kiplinger ranks Alaska, Wyoming, South Dakota, Mississippi, and Florida as the top five tax-friendly states for retirees. If their health allows, Vogelbacher recommends getting a part-time job. For those who own a home, he says paying off the mortgage is a smart financial move.

... ... ...

Monica is one of the 44 percent of unmarried persons who rely on Social Security for 90 percent or more of their income. At the beginning of 2019, Monica and more than 62 million Americans received a 2.8 percent cost of living adjustment from Social Security. The increase is the largest since 2012.

With the Social Security hike, Monica's monthly check climbed $33. Unfortunately, the new year also brought her a slight increase in what she pays for Medicare; along with a $500 property tax bill and the usual laundry list of monthly expenses.

"If you don't have much, the (Social Security) raise doesn't represent anything," she says with a dry laugh. "But it's good to get it."

[Jan 14, 2019] Safe rm stops you accidentally wiping the system! @ New Zealand Linux

Jan 14, 2019 | www.nzlinux.com
  1. Francois Marier October 21, 2009 at 10:34 am

    Another related tool, to prevent accidental reboots of servers this time, is molly-guard:

    http://packages.debian.org/sid/molly-guard

    It asks you to type the hostname of the machine you want to reboot as an extra confirmation step.

[Jan 14, 2019] Linux-UNIX xargs command examples

Jan 14, 2019 | www.linuxtechi.com

Example:10 Move files to a different location

linuxtechi@mail:~$ pwd
/home/linuxtechi
linuxtechi@mail:~$ ls -l *.sh
-rw-rw-r-- 1 linuxtechi linuxtechi 0 Sep 15 22:53 abcde.sh
-rw-rw-r-- 1 linuxtechi linuxtechi 0 Sep 15 22:53 abcd.sh
-rw-rw-r-- 1 linuxtechi linuxtechi 0 Sep 15 22:53 fg.sh

linuxtechi@mail:~$ sudo find . -name "*.sh" -print0 | xargs -0 -I {} mv {} backup/
linuxtechi@mail:~$ ls -ltr backup/

total 0
-rw-rw-r-- 1 linuxtechi linuxtechi 0 Sep 15 22:53 abcd.sh
-rw-rw-r-- 1 linuxtechi linuxtechi 0 Sep 15 22:53 abcde.sh
-rw-rw-r-- 1 linuxtechi linuxtechi 0 Sep 15 22:53 fg.sh
linuxtechi@mail:~$

[Jan 14, 2019] xargs command tutorial with examples by George Ornbo

Sep 11, 2017 | shapeshed.com
How to use xargs

By default xargs reads items from standard input as separated by blanks and executes a command once for each argument. In the following example standard input is piped to xargs and the mkdir command is run for each argument, creating three folders.

echo 'one two three' | xargs mkdir
ls
one two three
How to use xargs with find

The most common usage of xargs is to use it with the find command. This uses find to search for files or directories and then uses xargs to operate on the results. Typical examples of this are removing files, changing the ownership of files or moving files.

find and xargs can be used together to operate on files that match certain attributes. In the following example files older than two weeks in the temp folder are found and then piped to the xargs command which runs the rm command on each file and removes them.

find /tmp -mtime +14 | xargs rm
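
One caveat the post glosses over: a plain pipe like the one above breaks on filenames containing spaces or newlines. The null-delimited form is safer:

find /tmp -mtime +14 -print0 | xargs -0 rm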
xargs v exec {}

The find command supports the -exec option that allows arbitrary commands to be run on the files that are found. The following are equivalent.

find ./foo -type f -name "*.txt" -exec rm {} \; 
find ./foo -type f -name "*.txt" | xargs rm

So which one is faster? Let's compare a folder with 1000 files in it.

time find ./foo -type f -name "*.txt" -exec rm {} \;
0.35s user 0.11s system 99% cpu 0.467 total

time find ./foo -type f -name "*.txt" | xargs rm
0.00s user 0.01s system 75% cpu 0.016 total

Clearly using xargs is far more efficient. In fact several benchmarks suggest using xargs over exec {} is six times more efficient.
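
Worth adding (this is not in the quoted post): modern find can batch arguments itself with the '+' terminator, which behaves much like xargs and closes most of that performance gap:

find ./foo -type f -name "*.txt" -exec rm {} +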

How to print commands that are executed

The -t option prints each command that will be executed to the terminal. This can be helpful when debugging scripts.

echo 'one two three' | xargs -t rm
rm one two three
How to view the command and prompt for execution

The -p command will print the command to be executed and prompt the user to run it. This can be useful for destructive operations where you really want to be sure of the command to be run.

echo 'one two three' | xargs -p touch
touch one two three ?...
How to run multiple commands with xargs

It is possible to run multiple commands with xargs by using the -I flag. This defines a placeholder string; each occurrence of it is replaced with the argument read by xargs. The following echoes a string and creates a folder.

cat foo.txt
one
two
three

cat foo.txt | xargs -I % sh -c 'echo %; mkdir %'
one 
two
three

ls 
one two three
Further reading

[Jan 10, 2019] When idiots are offloaded to security department, interesting things with network eventually happen

Highly recommended!
The security department often does more damage to the network than any sophisticated hacker can, especially if it is staffed with morons, as it usually is. One of the most blatant examples is below... Those idiots decided to disable traceroute (which means blocking ICMP) in order to increase security.
Notable quotes:
"... Traceroute is disabled on every network I work with to prevent intruders from determining the network structure. Real pain in the neck, but one of those things we face to secure systems. ..."
"... Also really stupid. A competent attacker (and only those manage it into your network, right?) is not even slowed down by things like this. ..."
"... Breaking into a network is a slow process. Slow and precise. Trying to fix problems is a fast reactionary process. Who do you really think you're hurting? Yes another example of how ignorant opinions can become common sense. ..."
"... Disable all ICMP is not feasible as you will be disabling MTU negotiation and destination unreachable messages. You are essentially breaking the TCP/IP protocol. And if you want the protocol working OK, then people can do traceroute via HTTP messages or ICMP echo and reply. ..."
"... You have no fucking idea what you're talking about. I run a multi-regional network with over 130 peers. Nobody "disables ICMP". IP breaks without it. Some folks, generally the dimmer of us, will disable echo responses or TTL expiration notices thinking it is somehow secure (and they are very fucking wrong) but nobody blocks all ICMP, except for very very dim witted humans, and only on endpoint nodes. ..."
"... You have no idea what you're talking about, at any level. "disabled ICMP" - state statement alone requires such ignorance to make that I'm not sure why I'm even replying to ignorant ass. ..."
"... In short, he's a moron. I have reason to suspect you might be, too. ..."
"... No, TCP/IP is not working fine. It's broken and is costing you performance and $$$. But it is not evident because TCP/IP is very good about dealing with broken networks, like yours. ..."
"... It's another example of security by stupidity which seldom provides security, but always buys added cost. ..."
"... A brief read suggests this is a good resource: https://john.albin.net/essenti... [albin.net] ..."
"... Linux has one of the few IP stacks that isn't derived from the BSD stack, which in the industry is considered the reference design. Instead for linux, a new stack with it's own bugs and peculiarities was cobbled up. ..."
"... Reference designs are a good thing to promote interoperability. As far as TCP/IP is concerned, linux is the biggest and ugliest stepchild. A theme that fits well into this whole discussion topic, actually. ..."
May 27, 2018 | linux.slashdot.org

jfdavis668 ( 1414919 ) , Sunday May 27, 2018 @11:09AM ( #56682996 )

Re:So ( Score: 5 , Interesting)

Traceroute is disabled on every network I work with to prevent intruders from determining the network structure. Real pain in the neck, but one of those things we face to secure systems.

Anonymous Coward writes:
Re: ( Score: 2 , Insightful)

What is the point? If an intruder is already there couldn't they just upload their own binary?

Hylandr