Softpanorama

May the source be with you, but remember the KISS principle ;-)
Skepticism and critical thinking are not a panacea, but they can help to understand the world better

Slightly Skeptical View on Enterprise Unix Administration

News Webliography of problems with "pure" cloud environment Recommended Books Recommended Links Shadow IT Orthodox Editors Programmable Keyboards
The tar pit of Red Hat overcomplexity Systemd invasion into Linux Server space Unix System Monitoring Nagios in Large Enterprise Environment Sudoer File Examples Dealing with multiple flavors of Unix SSH Configuration
Unix Configuration Management Tools Job schedulers Red Hat Certification Program Red Hat Enterprise Linux Life Cycle Registering a server using Red Hat Subscription Manager (RHSM) Open source politics: IBM acquires Red Hat Recommended Tools to Enhance Command Line Usage in Windows
Is DevOps a yet another "for profit" technocult Using HP ILO virtual CDROM iDRAC7 goes unresponsive - can't connect to iDRAC7 Resetting frozen iDRAC without unplugging the server Troubleshooting HPOM agents Saferm -- wrapper for rm command ILO command line interface
Bare metal recovery of Linux systems Relax-and-Recover on RHEL HP Operations Manager Troubleshooting HPOM agents Number of Servers per Sysadmin Tivoli Enterprise Console Tivoli Workload Scheduler
Over 50 and unemployed Surviving a Bad Performance Review Understanding Micromanagers and Control Freaks Bosos or Empty Suits (Aggressive Incompetent Managers) Narcissists Female Sociopaths Bully Managers
Slackerism Information Overload Workaholism and Burnout Unix Sysadmin Tips Sysadmin Horror Stories Admin Humor Sysadmin Health Issues


The KISS rule can be expanded as: Keep It Simple, Sysadmin ;-)

This page is written as a protest against overcomplexity and the bizarre data center atmosphere typical of "semi-outsourced" or fully outsourced datacenters ;-). Unix/Linux sysadmins are being killed by the overcomplexity of the environment. Large swaths of Linux knowledge (and many excellent books) were rendered obsolete by the introduction of systemd. This hits especially hard the older, most experienced members of the team, who hold a unique body of organizational knowledge and whose careers allowed them to watch the development of Linux almost from version 0.92.

System administration is still a unique area where people with the ability to program can display their own creativity with relative ease and can still enjoy the "old style" atmosphere of software development, when you yourself write the specification, implement it, test the program and then use it in daily work. This is a very exciting, unique opportunity that no DevOps can ever provide. Why, then, are an increasing number of sysadmins far from excited about working in those positions, or outright want to quit the field (or, at least, work 4 days a week)? And that includes sysadmins who have tremendous speed and capability to process and learn new information. Even for them, "enough is enough." The answer is different for each individual sysadmin, but usually it is some variation of the following themes:

  1. Too rapid a pace of change, with a lot of "change for the sake of the change" often serving as a smokescreen for outsourcing efforts (VMware yesterday, Azure today, Amazon cloud tomorrow, etc.)
  2. Job insecurity due to outsourcing/offshoring -- constant pressure to cut headcount in the name of "efficiency," which in reality is more connected with the size of top brass bonuses than with anything related to IT datacenter functioning. Sysadmins over 50 are an especially vulnerable category here, and in case they are laid off they have almost no chance of getting back into the IT workforce at the previous level of salary/benefits. Often the only job they can find is at Home Depot or similar retail outlets.
  3. A back-breaking level of overcomplexity and bizarre tech decisions crippling the data center (aka crapification). A Potemkin-style culture often prevails in the evaluation of software in large US corporations: the surface sheen is more important than the substance. The marketing brochures and manuals are no different from mainstream news media in the level of BS they spew. IBM is especially guilty (look how they marketed IBM Watson; as Oren Etzioni, CEO of the Allen Institute for AI, noted, "the only intelligent thing about Watson was IBM PR department [push]").
  4. Bureaucratization/fossilization of the IT environment of large companies. That includes using "Performance Reviews" (the IT variant of waterboarding ;-) for the enforcement of management policies, priorities, whims, etc. That creates alienation from the company (as it should). One can think of the modern corporate data center as an organization where the administration has ever more power in the decision-making process and eats up ever more of the corporate budget, while the people who do the actual work are increasingly ignored and their share of the budget shrinks.
  5. "Neoliberal austerity" (which is essentially another name for the "war on labor") -- Drastic cost cutting measures at the expense of workforce such as elimination of external vendor training, crapification of benefits, limitation of business trips and enforcing useless or outright harmful for business "new" products instead of "tried and true" old with  the same function.    They are accompanied by the new cultural obsession with ‘character’ (as in "he/she has a right character" -- which in "Neoliberal speak" means he/she is a toothless conformist ;-), glorification of groupthink,   and the intensification of surveillance.

As Charlie Schluting noted in 2010 (Enterprise Networking Planet, April 7, 2010):

What happened to the old "sysadmin" of just a few years ago? We've split what used to be the sysadmin into application teams, server teams, storage teams, and network teams. There were often at least a few people, the holders of knowledge, who knew how everything worked, and I mean everything. Every application, every piece of network gear, and how every server was configured -- these people could save a business in times of disaster.

Now look at what we've done. Knowledge is so decentralized we must invent new roles to act as liaisons between all the IT groups.

Architects now hold much of the high-level "how it works" knowledge, but without knowing how any one piece actually does work.

In organizations with more than a few hundred IT staff and developers, it becomes nearly impossible for one person to do and know everything. This movement toward specializing in individual areas seems almost natural. That, however, does not provide a free ticket for people to turn a blind eye.

Specialization

You know the story: Company installs new application, nobody understands it yet, so an expert is hired. Often, the person with a certification in using the new application only really knows how to run that application. Perhaps they aren't interested in learning anything else, because their skill is in high demand right now. And besides, everything else in the infrastructure is run by people who specialize in those elements. Everything is taken care of.

Except, how do these teams communicate when changes need to take place? Are the storage administrators teaching the Windows administrators about storage multipathing; or worse, logging in and setting it up because it's faster for the storage gurus to do it themselves? A fundamental level of knowledge is often lacking, which makes it very difficult for teams to brainstorm about new ways to evolve IT services. The business environment has made it OK for IT staffers to specialize and only learn one thing.

If you hire someone certified in the application, operating system, or network vendor you use, that is precisely what you get. Certifications may be a nice filter to quickly identify who has direct knowledge in the area you're hiring for, but often they indicate specialization or compensation for lack of experience.

Resource Competition

Does your IT department function as a unit? Even 20-person IT shops have turf wars, so the answer is very likely, "no." As teams are split into more and more distinct operating units, grouping occurs. One IT budget gets split between all these groups. Often each group will have a manager who pitches his needs to upper management in hopes they will realize how important the team is.

The "us vs. them" mentality manifests itself at all levels, and it's reinforced by management having to define each team's worth in the form of a budget. One strategy is to illustrate a doomsday scenario. If you paint a bleak enough picture, you may get more funding. Only if you are careful enough to illustrate the failings are due to lack of capital resources, not management or people. A manager of another group may explain that they are not receiving the correct level of service, so they need to duplicate the efforts of another group and just implement something themselves. On and on, the arguments continue.

Most often, I've seen competition between server groups result in horribly inefficient uses of hardware. For example, what happens in your organization when one team needs more server hardware? Assume that another team has five unused servers sitting in a blade chassis. Does the answer change? No, it does not. Even in test environments, sharing doesn't often happen between IT groups.

With virtualization, some aspects of resource competition get better and some remain the same. When first implemented, most groups will be running their own type of virtualization for their platform. The next step, I've most often seen, is for test servers to get virtualized. If a new group is formed to manage the virtualization infrastructure, virtual machines can be allocated to various application and server teams from a central pool and everyone is now sharing. Or, they begin sharing and then demand their own physical hardware to be isolated from others' resource hungry utilization. This is nonetheless a step in the right direction. Auto migration and guaranteed resource policies can go a long way toward making shared infrastructure, even between competing groups, a viable option.

Blamestorming

The most damaging side effect of splitting into too many distinct IT groups is the reinforcement of an "us versus them" mentality. Aside from the notion that specialization creates a lack of knowledge, blamestorming is what this article is really about. When a project is delayed, it is all too easy to blame another group. The SAN people didn't allocate storage on time, so another team was delayed. That is the timeline of the project, so all work halted until that hiccup was restored. Having someone else to blame when things get delayed makes it all too easy to simply stop working for a while.

More related to the initial points at the beginning of this article, perhaps, is the blamestorm that happens after a system outage.

Say an ERP system becomes unresponsive a few times throughout the day. The application team says it's just slowing down, and they don't know why. The network team says everything is fine. The server team says the application is "blocking on IO," which means it's a SAN issue. The SAN team says there is nothing wrong, and other applications on the same devices are fine. You've run through nearly every team, but still without an answer. The SAN people don't have access to the application servers to help diagnose the problem. The server team doesn't even know how the application runs.

See the problem? Specialized teams are distinct and by nature adversarial. Specialized staffers often relegate themselves into a niche knowing that as long as they continue working at large enough companies, "someone else" will take care of all the other pieces.

I unfortunately don't have an answer to this problem. Maybe rotating employees between departments will help. They gain knowledge and also get to know other people, which should lessen the propensity to view them as outsiders.

The tragic part of the current environment is that it is like shifting sands. And it is not only due to the "natural process of crapification of operating systems," in which the OS gradually loses its architectural integrity. The pace of change is just too fast for mere humans to adapt to. And most of it represents "change for the sake of change," not some valuable improvement or extension of capabilities.

If you are a sysadmin who writes his own scripts, you are writing on sand: you spend a lot of time thinking over and debugging your scripts, which raises your productivity and diminishes the number of possible errors. But the next OS version wipes out a considerable part of your work and you need to revise your scripts again. The tale of Sisyphus can now be re-interpreted as a prescient warning about the thankless task of sysadmins who must learn new stuff and maintain their own script library ;-) Sometimes a lot of work is wiped out because the corporate brass decides to switch to a different flavor of Linux, or we add "yet another flavor" due to a large acquisition. Add to this the inevitable technological changes and the question arises: can't you get a more respectable profession, one in which 66% of your knowledge is not replaced within the next ten years?

The Balkanization of Linux is also demonstrated by the Tower of Babel of system programming languages (C, C++, Perl, Python, Ruby, Go, Java, to name a few) and by systems that supposedly should help you but mostly do quite the opposite (Puppet, Ansible, Chef, etc.). Add to this the monitoring infrastructure (say, Nagios) and you definitely have an information overload.

Inadequate training just adds to the stress. First of all, corporations no longer want to pay for it, so you are on your own and need to do it mostly in your free time, as the workload is substantial in most organizations: using free or low-cost courses if they are available, or buying your own books and trying to learn new stuff from them (which of course is the mark of any good sysadmin, but should not be the only source of new knowledge). The days when you could travel to a vendor training center for a week and have a chance to communicate with admins from other organizations (which probably was the most valuable part of the whole exercise) are long in the past. I can tell you that training by Sun (Solaris) and IBM (AIX) in the late 1990s was of really high quality, delivered by highly qualified instructors from whom you could learn a lot outside the main topic of the course. Unlike "Trump University," Sun courses could probably have been called "Sun University." Most training is now delivered via the Web, and chances for face-to-face communication have disappeared. Also, the stress is now on learning "how" rather than "why"; the "why" topics are typically reserved for "advanced" courses.

There is also the necessity to relearn stuff again and again, even when the new technologies/daemons/versions of the OS are either the same as, or inferior to, the previous ones, or represent an open scam in which training is just a way to extract money from lemmings (Agile, most of the DevOps hoopla, etc.). This is the typical neoliberal mentality ("greed is good") implemented in education. There is also a tendency to treat virtual machines and cloud infrastructure as separate technologies which require separate training and a separate set of certifications (AWS, Azure). This is a kind of infantilization of the profession, when a person who learned a lot of stuff in the previous 10 years needs to forget it and relearn most of it again and again.

Of course, sysadmins are not the only ones who suffer. Computer scientists also now struggle with the excessive level of complexity and the too-quickly shifting sands. Look at the tragedy of Donald Knuth with his lifelong project to create a comprehensive monograph for system programmers (The Art of Computer Programming). He was flattened by the shifting sands and probably will not be able to finish even volume 4 (out of the seven that were planned) in his lifetime.

Of course, much depends on the evolution of hardware and the changes it causes, such as the mass introduction of large SSDs, multi-core CPUs and large amounts of RAM.

Nobody is now surprised to see a server with 128GB of RAM, a laptop with 16GB of RAM, or a cellphone with 4GB of RAM and a 1GHz CPU. (Please note that the IBM PC started with 1 MB of address space -- of which only 640KB was available for programs -- and a 4.77 MHz (not GHz) single-core CPU without a floating-point unit.) Such changes, while painful, are inevitable, and hardware progress has slowed down recently as it reached the physical limits of the technology (we probably will not see 2-nanometer lithography CPUs or 8GHz CPU clock speeds in our lifetimes).

Changes caused by fashion, and by the desire of the dominant player to entrench its position, are more difficult to accept. It is difficult or even impossible to predict which technology will become fashionable tomorrow and how long DevOps will remain in fashion. Typically such things last around ten years. After that everything typically fades into oblivion, or is even crossed out, and former idols are shattered. This strange period of re-invention of the "glass-walls datacenter" under the banner of DevOps (and old timers still remember that IBM datacenters were hated with a passion, and this hate created an additional, non-technological incentive for mini-computers and later for the IBM PC) is characterized by a level of hype usually reserved for women's fashion. It sometimes looks to me as if the movie The Devil Wears Prada is a subtle parable about sysadmin work.

Add to this the horrible job market, especially for university graduates and older sysadmins (see Over 50 and unemployed), and one starts to suspect that the life of the modern sysadmin is far from paradise. When you read some job descriptions on sites like Monster, Dice or Indeed, you ask yourself whether those people really want to hire anybody, or whether this is just a smokescreen for H1B labor certification. The level of detail is often so precise that it is almost impossible to change your current specialization. They do not care about the level of talent, and they do not want to train a suitable candidate; they want a person who fits 100% from day one. Also, in places like NYC or SF, rents and property prices keep growing while income growth has been stagnant.

The vandalism of Unix performed by Red Hat with RHEL 7 makes the current environment somewhat unhealthy. It is clear that this was done at the whim of Red Hat brass, not in the interest of the community. This is a typical Microsoft-style trick, which makes dozens of high-quality books written by very talented authors instantly semi-obsolete. And the question arises whether it makes sense to write any book about RHEL other than for a solid advance. It generated some backlash, but the position of Red Hat as the Microsoft of Linux allowed it to shove its inferior technical decisions down users' throats. In a way it reminds me of the way Microsoft dealt with Windows 7, replacing it with Windows 10 and essentially destroying the previous Windows interface ecosystem (while preserving binary compatibility).

See also

Here are my notes/reflections on sysadmin problems that often arise in the rather strange (and sometimes pretty toxic) IT departments of large corporations:



NEWS CONTENTS

Old News ;-)


For the list of top articles see Recommended Links section


"I appreciate Woody Allen's humor because one of my safety valves is an appreciation for life's absurdities. His message is that life isn't a funeral march to the grave. It's a polka."

-- Dennis Kucinich

[Aug 22, 2019] How To Display Bash History Without Line Numbers - OSTechNix

Aug 22, 2019 | www.ostechnix.com

Method 2 – Using history command

We can use the history command's write option to print the history without numbers like below.

$ history -w /dev/stdout
Method 3 – Using history and cut commands

One such way is to use history and cut commands like below.

$ history | cut -c 8-
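Another approach, not from the article but worth noting, is the bash built-in fc, whose -n flag suppresses the history line numbers:

$ fc -ln 1       # whole history without numbers (output is tab-indented)
$ fc -ln -20     # just the last 20 commands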

[Aug 22, 2019] Why Micro Data Centers Deliver Good Things in Small Packages by Calvin Hennick

Aug 22, 2019 | solutions.cdw.com

Enterprises are deploying self-contained micro data centers to power computing at the network edge.

Calvin Hennick is a freelance journalist who specializes in business and technology writing. He is a contributor to the CDW family of technology magazines.

The location for data processing has changed significantly throughout the history of computing. During the mainframe era, data was processed centrally, but client/server architectures later decentralized computing. In recent years, cloud computing centralized many processing workloads, but digital transformation and the Internet of Things are poised to move computing to new places, such as the network edge .

"There's a big transformation happening," says Thomas Humphrey, segment director for edge computing at APC . "Technologies like IoT have started to require that some local computing and storage happen out in that distributed IT architecture."

For example, some IoT systems require processing of data at remote locations rather than a centralized data center , such as at a retail store instead of a corporate headquarters.

To meet regulatory requirements and business needs, IoT solutions often need low latency, high bandwidth, robust security and superior reliability . To meet these demands, many organizations are deploying micro data centers: self-contained solutions that provide not only essential infrastructure, but also physical security, power and cooling and remote management capabilities.

"Digital transformation happens at the network edge, and edge computing will happen inside micro data centers ," says Bruce A. Taylor, executive vice president at Datacenter Dynamics . "This will probably be one of the fastest growing segments -- if not the fastest growing segment -- in data centers for the foreseeable future."

What Is a Micro Data Center?

Delivering the IT capabilities needed for edge computing represents a significant challenge for many organizations, which need manageable and secure solutions that can be deployed easily, consistently and close to the source of computing . Vendors such as APC have begun to create comprehensive solutions that provide these necessary capabilities in a single, standardized package.

"From our perspective at APC, the micro data center was a response to what was happening in the market," says Humphrey. "We were seeing that enterprises needed more robust solutions at the edge."

Most micro data center solutions rely on hyperconverged infrastructure to integrate computing, networking and storage technologies within a compact footprint . A typical micro data center also incorporates physical infrastructure (including racks), fire suppression, power, cooling and remote management capabilities. In effect, the micro data center represents a sweet spot between traditional IT closets and larger modular data centers -- giving organizations the ability to deploy professional, powerful IT resources practically anywhere .

Standardized Deployments Across the Country

Having robust IT resources at the network edge helps to improve reliability and reduce latency, both of which are becoming more and more important as analytics programs require that data from IoT deployments be processed in real time .

"There's always been edge computing," says Taylor. "What's new is the need to process hundreds of thousands of data points for analytics at once."

Standardization, redundant deployment and remote management are also attractive features, especially for large organizations that may need to deploy tens, hundreds or even thousands of micro data centers. "We spoke to customers who said, 'I've got to roll out and install 3,500 of these around the country,'" says Humphrey. "And many of these companies don't have IT staff at all of these sites." To address this scenario, APC designed standardized, plug-and-play micro data centers that can be rolled out seamlessly. Additionally, remote management capabilities allow central IT departments to monitor and troubleshoot the edge infrastructure without costly and time-intensive site visits.

In part because micro data centers operate in far-flung environments, security is of paramount concern. The self-contained nature of micro data centers ensures that only authorized personnel will have access to infrastructure equipment , and security tools such as video surveillance provide organizations with forensic evidence in the event that someone attempts to infiltrate the infrastructure.

How Micro Data Centers Can Help in Retail, Healthcare

Micro data centers make business sense for any organization that needs secure IT infrastructure at the network edge. But the solution is particularly appealing to organizations in fields such as retail, healthcare and finance , where IT environments are widely distributed and processing speeds are often a priority.

In retail, for example, edge computing will become more important as stores find success with IoT technologies such as mobile beacons, interactive mirrors and real-time tools for customer experience, behavior monitoring and marketing .

"It will be leading-edge companies driving micro data center adoption, but that doesn't necessarily mean they'll be technology companies," says Taylor. "A micro data center can power real-time analytics for inventory control and dynamic pricing in a supermarket."

In healthcare, digital transformation is beginning to touch processes and systems ranging from medication carts to patient records, and data often needs to be available locally; for example, in case of a data center outage during surgery. In finance, the real-time transmission of data can have immediate and significant financial consequences. And in both of these fields, regulations governing data privacy make the monitoring and security features of micro data centers even more important.

Micro data centers also have enormous potential to power smart city initiatives and to give energy companies a cost-effective way of deploying resources in remote locations , among other use cases.

"The proliferation of edge computing will be greater than anything we've seen in the past," Taylor says. "I almost can't think of a field where this won't matter."

Learn more about how solutions and services from CDW and APC can help your organization overcome its data center challenges.

Micro Data Centers Versus IT Closets

Think the micro data center is just a glorified update on the traditional IT closet? Think again.

"There are demonstrable differences," says Bruce A. Taylor, executive vice president at Datacenter Dynamics. "With micro data centers, there's a tremendous amount of computing capacity in a very small, contained space, and we just didn't have that capability previously ."

APC identifies three key differences between IT closets and micro data centers:

Difference #1: Uptime Expectations. APC notes that, of the nearly 3 million IT closets in the U.S., over 70 percent report outages directly related to human error. In an unprotected IT closet, problems can result from something as preventable as cleaning staff unwittingly disconnecting a cable. Micro data centers, by contrast, utilize remote monitoring, video surveillance and sensors to reduce downtime related to human error.

Difference #2: Cooling Configurations. The cooling of IT wiring closets is often approached both reactively and haphazardly, resulting in premature equipment failure. Micro data centers are specifically designed to assure cooling compatibility with anticipated loads.

Difference #3: Power Infrastructure. Unlike many IT closets, micro data centers incorporate uninterruptible power supplies, ensuring that infrastructure equipment has the power it needs to help avoid downtime.

[Aug 20, 2019] Is it possible to insert separator in midnight commander menu?

Jun 07, 2010 | superuser.com


okutane ,Jun 7, 2010 at 3:36

I want to insert some items into mc menu (which is opened by F2) grouped together. Is it possible to insert some sort of separator before them or put them into some submenu?
Probably not. The format of the menu file is very simple. Lines that start with anything but a space or tab are considered entries for the menu (in order to be able to use it like a hot key, the first character should be a letter). All the lines that start with a space or a tab are the commands that will be executed when the entry is selected.

But MC allows you to make multiple menu entries with same shortcut and title, so you can make a menu entry that looks like separator and does nothing, like:

a hello
  echo world
- --------
b world
  echo hello
- --------
c superuser
  ls /

This will look like a menu with three entries (hello, world, superuser) separated by dashed lines that act as separators and do nothing when selected.

[Aug 20, 2019] Midnight Commander, using date in User menu

Dec 31, 2013 | unix.stackexchange.com

user2013619 ,Dec 31, 2013 at 0:43

I would like to use MC (midnight commander) to compress the selected dir with date in its name, e.g: dirname_20131231.tar.gz

The command in the User menu is :

tar -czf dirname_`date '+%Y%m%d'`.tar.gz %d

The date is missing from the archive name because %m and %d have another meaning in MC. I made an alias for the date, but it also doesn't work.

Has anybody ever solved this problem?

John1024 ,Dec 31, 2013 at 1:06

To escape the percent signs, double them:
tar -czf dirname_$(date '+%%Y%%m%%d').tar.gz %d

The above would compress the current directory (%d) to a file also in the current directory. If you want to compress the directory pointed to by the cursor rather than the current directory, use %f instead:

tar -czf %f_$(date '+%%Y%%m%%d').tar.gz %f

mc handles escaping of special characters so there is no need to put %f in quotes.

By the way, midnight commander's special treatment of percent signs occurs not just in the user menu file but also at the command line. This is an issue when using shell commands with constructs like ${var%.c} . At the command line, the same as in the user menu file, percent signs can be escaped by doubling them.
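Putting the two answers together, a complete user menu entry might look like the following minimal sketch (the hotkey, title and use of %f for the directory under the cursor are illustrative assumptions):

t       Tar the directory under cursor with a date stamp
        tar -czf %f_$(date '+%%Y%%m%%d').tar.gz %f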

[Aug 20, 2019] How to exclude file when using scp command recursively

Aug 12, 2019 | www.cyberciti.biz

I need to copy all the *.c files from a local laptop named hostA to hostB, including all directories. I am using the following scp command but do not know how to exclude specific files (such as *.out):

$ scp -r ~/projects/ user@hostB:/home/delta/projects/

How do I tell the scp command to exclude a particular file or directory at the Linux/Unix command line? One can use the scp command to securely copy files between hosts on a network. It uses ssh for data transfer and authentication purposes. Typical scp command syntax is as follows:

scp file1 user@host:/path/to/dest/
scp -r /path/to/source/ user@host:/path/to/dest/
scp [options] /dir/to/source/ user@host:/dir/to/dest/

Scp exclude files

I don't think you can filter or exclude files when using the scp command. However, there is a great workaround to exclude files and copy them securely using ssh. This page explains how to filter or exclude files when using scp to copy a directory recursively.

How to use rsync command to exclude files

The syntax is:

rsync -av -e ssh --exclude='*.out' /path/to/source/ user@hostB:/path/to/dest/

Where,

  1. -a : Recurse into directories i.e. copy all files and subdirectories. Also, turn on archive mode and all other options (-rlptgoD)
  2. -v : Verbose output
  3. -e ssh : Use ssh for remote shell so everything gets encrypted
  4. --exclude='*.out' : exclude files matching PATTERN e.g. *.out or *.c and so on.
Example of rsync command

In this example copy all file recursively from ~/virt/ directory but exclude all *.new files:
$ rsync -av -e ssh --exclude='*.new' ~/virt/ root@centos7:/tmp
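For the original question (copy only the *.c files and skip everything else), rsync's include/exclude filters can be combined. A sketch, assuming the same hostB and paths as in the scp example above:

$ rsync -av -e ssh --include='*/' --include='*.c' --exclude='*' ~/projects/ user@hostB:/home/delta/projects/

The --include='*/' rule lets rsync descend into subdirectories; --exclude='*' then drops every file not matched by an earlier include rule.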

[Aug 19, 2019] Moreutils - A Collection Of More Useful Unix Utilities - OSTechNix

Parallel is a really useful utility. RPM is installable from EPEL.
Aug 19, 2019 | www.ostechnix.com

... ... ...

On RHEL , CentOS , Scientific Linux :
$ sudo yum install epel-release
$ sudo yum install moreutils
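The collection also contains small gems such as sponge (soak up stdin and write it back to a file that is read earlier in the same pipeline) and ts (prepend a timestamp to each line). Two illustrative one-liners (app.log is a hypothetical file name):

$ grep -v 'DEBUG' app.log | sponge app.log
$ tail -f /var/log/messages | ts '%Y-%m-%d %H:%M:%S'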

[Aug 19, 2019] mc - Is there any documentation about the user-defined menu in midnight-commander - Unix Linux Stack Exchange

Aug 19, 2019 | unix.stackexchange.com



login ,Jun 11, 2014 at 13:13

I'd like to create my own user-defined menu for mc ( menu file). I see some lines like
+ t r & ! t t

or

+ t t

What does it mean?

goldilocks ,Jun 11, 2014 at 13:35

It is documented in the help, the node is "Edit Menu File" under "Command Menu"; if you scroll down you should find "Addition Conditions":

If the condition begins with '+' (or '+?') instead of '=' (or '=?') it is an addition condition. If the condition is true the menu entry will be included in the menu. If the condition is false the menu entry will not be included in the menu.

This is preceded by "Default conditions" (the = condition), which determine which entry will be highlighted as the default choice when the menu appears. Anyway, by way of example:

+ t r & ! t t

t r means if this is a regular file ("t(ype) r"), and ! t t means if the file has not been tagged in the interface.

Jarek

On top of what has been written above, this page can be browsed on the Internet when searching for man pages, e.g.: https://www.systutorials.com/docs/linux/man/1-mc/

Search for "Menu File Edit" .

Best regards, Jarek
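To illustrate the syntax described above, a menu entry that should appear only for regular, untagged files could look like this (a sketch; the hotkey, title and command are arbitrary):

+ t r & ! t t
i       Show file type of this regular file
        file %f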

[Aug 14, 2019] bash - PID background process - Unix Linux Stack Exchange

Aug 14, 2019 | unix.stackexchange.com



Raul ,Nov 27, 2016 at 18:21

As I understand pipes and commands, bash takes each command, spawns a process for each one and connects stdout of the previous one with the stdin of the next one.

For example, in "ls -lsa | grep feb", bash will create two processes, and connect the output of "ls -lsa" to the input of "grep feb".

When you execute a background command like "sleep 30 &" in bash, you get the pid of the background process running your command. Surprisingly for me, when I wrote "ls -lsa | grep feb &" bash returned only one PID.

How should this be interpreted? A process runs both "ls -lsa" and "grep feb"? Several process are created but I only get the pid of one of them?

Raul ,Nov 27, 2016 at 19:21

Spawns 2 processes. The & displays the PID of the second process. Example below.
$ echo $$
13358
$ sleep 100 | sleep 200 &
[1] 13405
$ ps -ef|grep 13358
ec2-user 13358 13357  0 19:02 pts/0    00:00:00 -bash
ec2-user 13404 13358  0 19:04 pts/0    00:00:00 sleep 100
ec2-user 13405 13358  0 19:04 pts/0    00:00:00 sleep 200
ec2-user 13406 13358  0 19:04 pts/0    00:00:00 ps -ef
ec2-user 13407 13358  0 19:04 pts/0    00:00:00 grep --color=auto 13358
$


When you run a job in the background, bash prints the process ID of its subprocess, the one that runs the command in that job. If that job happens to create more subprocesses, that's none of the parent shell's business.

When the background job is a pipeline (i.e. the command is of the form something1 | something2 & , and not e.g. { something1 | something2; } & ), there's an optimization which is strongly suggested by POSIX and performed by most shells including bash: each of the elements of the pipeline are executed directly as subprocesses of the original shell. What POSIX mandates is that the variable $! is set to the last command in the pipeline in this case. In most shells, that last command is a subprocess of the original process, and so are the other commands in the pipeline.

When you run ls -lsa | grep feb , there are three processes involved: the one that runs the left-hand side of the pipe (a subshell that finishes setting up the pipe then executes ls ), the one that runs the right-hand side of the pipe (a subshell that finishes setting up the pipe then executes grep ), and the original process that waits for the pipe to finish.

You can watch what happens by tracing the processes:

$ strace -f -e clone,wait4,pipe,execve,setpgid bash --norc
execve("/usr/local/bin/bash", ["bash", "--norc"], [/* 82 vars */]) = 0
setpgid(0, 24084)                       = 0
bash-4.3$ sleep 10 | sleep 20 &

Note how the second sleep is reported and stored as $! , but the process group ID is the first sleep . Dash has the same oddity, ksh and mksh don't.
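If a script needs to terminate the whole background pipeline rather than just its last element, one simple option follows from the above: since every element is a direct child of the original shell, killing all children of the shell does the job. A minimal sketch (the sleeps are stand-ins for real commands):

#!/bin/bash
sleep 100 | sleep 200 &
last_pid=$!          # $! holds only the PID of the last element (the second sleep)
echo "last element: $last_pid"
# ... do other work ...
pkill -P $$          # kill every child of this shell, i.e. both pipeline elements
wait 2>/dev/null     # reap the terminated children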

[Aug 14, 2019] unix - How to get PID of process by specifying process name and store it in a variable to use further - Stack Overflow

Aug 14, 2019 | stackoverflow.com

Nidhi ,Nov 28, 2014 at 0:54

pids=$(pgrep <name>)

will get you the pids of all processes with the given name. To kill them all, use

kill -9 $pids

To refrain from using a variable and directly kill all processes with a given name issue

pkill -9 <name>

panticz.de ,Nov 11, 2016 at 10:11

On a single line...
pgrep -f process_name | xargs kill -9

flazzarini ,Jun 13, 2014 at 9:54

Another possibility would be to use pidof; it usually comes with most distributions. It will return the PID of a given process given its name.
pidof process_name

This way you could store that information in a variable and execute kill -9 on it.

#!/bin/bash
pid=`pidof process_name`
kill -9 $pid

Pawel K ,Dec 20, 2017 at 10:27

Use grep [n]ame to avoid the need for grep -v grep -- that's the first point. Second, using xargs the way it is shown above is wrong: to run whatever is piped to it you have to use -i (replace mode), otherwise you may have issues with the command.

ps axf | grep name | grep -v grep | awk '{print "kill -9 " $1}'

vs.

ps aux | grep [n]ame | awk '{print "kill -9 " $2}'

Isn't the latter better?
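A gentler variant of the same idea, sketched with a hypothetical process name myapp (and assuming a single matching process): look the PID up with pgrep, send a polite SIGTERM first, and escalate to -9 only if the process is still alive.

pid=$(pgrep -x myapp)        # -x matches the exact process name
if [ -n "$pid" ]; then
    kill "$pid"
    sleep 2
    kill -0 "$pid" 2>/dev/null && kill -9 "$pid"
fi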

[Aug 14, 2019] linux - How to get PID of background process - Stack Overflow

Highly recommended!
Aug 14, 2019 | stackoverflow.com



pixelbeat ,Mar 20, 2013 at 9:11

I start a background process from my shell script, and I would like to kill this process when my script finishes.

How to get the PID of this process from my shell script? As far as I can see variable $! contains the PID of the current script, not the background process.

WiSaGaN ,Jun 2, 2015 at 14:40

You need to save the PID of the background process at the time you start it:
foo &
FOO_PID=$!
# do other stuff
kill $FOO_PID

You cannot use job control, since that is an interactive feature and tied to a controlling terminal. A script will not necessarily have a terminal attached at all so job control will not necessarily be available.

Phil ,Dec 2, 2017 at 8:01

You can use the jobs -l command to get to a particular job:
^Z
[1]+  Stopped                 guard

my_mac:workspace r$ jobs -l
[1]+ 46841 Suspended: 18           guard

In this case, 46841 is the PID.

From help jobs :

-l Report the process group ID and working directory of the jobs.

jobs -p is another option which shows just the PIDs.

Timo ,Dec 2, 2017 at 8:03

Here's a sample transcript from a bash session ( %1 refers to the ordinal number of background process as seen from jobs ):

$ echo $$
3748

$ sleep 100 &
[1] 192

$ echo $!
192

$ kill %1

[1]+  Terminated              sleep 100

lepe ,Dec 2, 2017 at 8:29

An even simpler way to kill all child processes of a bash script:
pkill -P $$

The -P flag works the same way with pkill and pgrep - it gets child processes, only with pkill the child processes get killed and with pgrep child PIDs are printed to stdout.

Luis Ramirez ,Feb 20, 2013 at 23:11

this is what I have done. Check it out, hope it can help.
#!/bin/bash
#
# So something to show.
echo "UNO" >  UNO.txt
echo "DOS" >  DOS.txt
#
# Initialize Pid List
dPidLst=""
#
# Generate background processes
tail -f UNO.txt&
dPidLst="$dPidLst $!"
tail -f DOS.txt&
dPidLst="$dPidLst $!"
#
# Report process IDs
echo PID=$$
echo dPidLst=$dPidLst
#
# Show process on current shell
ps -f
#
# Start killing background processes from list
for dPid in $dPidLst
do
        echo killing $dPid. Process is still there.
        ps | grep $dPid
        kill $dPid
        ps | grep $dPid
        echo Just ran "'"ps"'" command, $dPid must not show again.
done

Then just run it as: ./bgkill.sh with proper permissions of course

root@umsstd22 [P]:~# ./bgkill.sh
PID=23757
dPidLst= 23758 23759
UNO
DOS
UID        PID  PPID  C STIME TTY          TIME CMD
root      3937  3935  0 11:07 pts/5    00:00:00 -bash
root     23757  3937  0 11:55 pts/5    00:00:00 /bin/bash ./bgkill.sh
root     23758 23757  0 11:55 pts/5    00:00:00 tail -f UNO.txt
root     23759 23757  0 11:55 pts/5    00:00:00 tail -f DOS.txt
root     23760 23757  0 11:55 pts/5    00:00:00 ps -f
killing 23758. Process is still there.
23758 pts/5    00:00:00 tail
./bgkill.sh: line 24: 23758 Terminated              tail -f UNO.txt
Just ran 'ps' command, 23758 must not show again.
killing 23759. Process is still there.
23759 pts/5    00:00:00 tail
./bgkill.sh: line 24: 23759 Terminated              tail -f DOS.txt
Just ran 'ps' command, 23759 must not show again.
root@umsstd22 [P]:~# ps -f
UID        PID  PPID  C STIME TTY          TIME CMD
root      3937  3935  0 11:07 pts/5    00:00:00 -bash
root     24200  3937  0 11:56 pts/5    00:00:00 ps -f

Phil ,Oct 15, 2013 at 18:22

You might also be able to use pstree:
pstree -p user

This typically gives a text representation of all the processes for the "user" and the -p option gives the process-id. It does not depend, as far as I understand, on having the processes be owned by the current shell. It also shows forks.

Phil ,Dec 4, 2018 at 9:46

pgrep can get you all of the child PIDs of a parent process. As mentioned earlier, $$ is the current script's PID. So, if you want a script that cleans up after itself, this should do the trick:
trap 'kill $( pgrep -P $$ | tr "\n" " " )' SIGINT SIGTERM EXIT

[Aug 10, 2019] Midnight Commander (mc): convenient hard links creation from user menu

Notable quotes:
"... You can create hard links and symbolic links using C-x l and C-x s keyboard shortcuts. However, these two shortcuts invoke two completely different dialogs. ..."
"... he had also uploaded a sample mc user menu script ( local copy ), which works wonderfully! ..."
Dec 03, 2015 | bogdan.org.ua

Midnight Commander (mc): convenient hard links creation from user menu

3rd December 2015

Midnight Commander is a convenient two-panel file manager with tons of features.

You can create hard links and symbolic links using C-x l and C-x s keyboard shortcuts. However, these two shortcuts invoke two completely different dialogs.

While for C-x s you get 2 pre-populated fields (path to the existing file, and path to the link – which is pre-populated with your opposite file panel path plus the name of the file under cursor; simply try it to see what I mean), for C-x l you only get 1 empty field: the path of the hard link to create for the file under cursor. The symlink behaviour would be much more convenient here.

Fortunately, a good man called Wiseman1024 created a feature request in the MC's bug tracker 6 years ago. Not only had he done so, but he had also uploaded a sample mc user menu script ( local copy ), which works wonderfully! You can select multiple files, then F2 l (lower-case L), and hard-links to your selected files (or a file under cursor) will be created in the opposite file panel. Great, thank you Wiseman1024 !

Word of warning: you must know what hard links are and what their limitations are before using this menu script. You also must check and understand the user menu code before adding it to your mc (by F9 C m u , and then pasting the script from the file).

Word of hope: 4 years ago Wiseman's feature request was assigned to the Future Releases version, so a more convenient C-x l will (sooner or later) become part of mc. Hopefully.
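For readers who do not want to fetch the linked script, the general idea can be sketched as a user menu entry along these lines (an illustrative sketch only, not Wiseman1024's actual script; %s expands to the tagged files, or the file under cursor if nothing is tagged, and %D to the directory of the other panel):

l       Create hard links to the selected file(s) in the other panel
        for f in %s; do ln "$f" %D/; done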

[Aug 10, 2019] How to check the file size in Linux-Unix bash shell scripting by Vivek Gite

Aug 10, 2019 | www.cyberciti.biz

The stat command shows information about the file. The syntax to get the file size with GNU/Linux stat is as follows:

stat -c %s "/etc/passwd"

OR

stat --format=%s "/etc/passwd"
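In a shell script the result is usually captured in a variable; a small sketch:

size=$(stat -c %s "/etc/passwd")
echo "/etc/passwd is ${size} bytes"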

[Aug 10, 2019] bash - How to check size of a file - Stack Overflow

Aug 10, 2019 | stackoverflow.com

[ -n file.txt ] doesn't check its size , it checks that the string file.txt is non-zero length, so it will always succeed.

If you want to say " size is non-zero", you need [ -s file.txt ] .

To get a file's size , you can use wc -c to get the size ( file length) in bytes:

file=file.txt
minimumsize=90000
actualsize=$(wc -c <"$file")
if [ $actualsize -ge $minimumsize ]; then
    echo size is over $minimumsize bytes
else
    echo size is under $minimumsize bytes
fi

In this case, it sounds like that's what you want.

But FYI, if you want to know how much disk space the file is using, you could use du -k to get the size (disk space used) in kilobytes:

file=file.txt
minimumsize=90
actualsize=$(du -k "$file" | cut -f 1)
if [ $actualsize -ge $minimumsize ]; then
    echo size is over $minimumsize kilobytes
else
    echo size is under $minimumsize kilobytes
fi

If you need more control over the output format, you can also look at stat . On Linux, you'd start with something like stat -c '%s' file.txt , and on BSD/Mac OS X, something like stat -f '%z' file.txt .

--Mikel


Oz Solomon ,Jun 13, 2014 at 21:44

It surprises me that no one mentioned stat to check file size. Some methods are definitely better: using -s to find out whether the file is empty or not is easier than anything else if that's all you want. And if you want to find files of a certain size, then find is certainly the way to go.

I also like du a lot to get file size in kb, but, for bytes, I'd use stat :

size=$(stat -f%z $filename) # BSD stat

size=$(stat -c%s $filename) # GNU stat?
An alternative solution with awk and double parentheses:
FILENAME=file.txt
SIZE=$(du -sb $FILENAME | awk '{ print $1 }')

if ((SIZE<90000)) ; then 
    echo "less"; 
else 
    echo "not less"; 
fi

[Aug 07, 2019] Find files and tar them (with spaces)

Aug 07, 2019 | stackoverflow.com



porges ,Sep 6, 2012 at 17:43

Alright, so simple problem here. I'm working on a simple back up code. It works fine except if the files have spaces in them. This is how I'm finding files and adding them to a tar archive:
find . -type f | xargs tar -czvf backup.tar.gz

The problem is when the file has a space in the name because tar thinks that it's a folder. Basically is there a way I can add quotes around the results from find? Or a different way to fix this?

Brad Parks ,Mar 2, 2017 at 18:35

Use this:
find . -type f -print0 | tar -czvf backup.tar.gz --null -T -

It will handle file names containing spaces (and even newlines) correctly: find -print0 separates the names with NUL characters, and tar reads that NUL-separated list from standard input thanks to --null -T -.

czubehead ,Mar 19, 2018 at 11:51

There could be another way to achieve what you want. Basically,
  1. Use the find command to output path to whatever files you're looking for. Redirect stdout to a filename of your choosing.
  2. Then tar with the -T option which allows it to take a list of file locations (the one you just created with find!)
    find . -name "*.whatever" > yourListOfFiles
    tar -cvf yourfile.tar -T yourListOfFiles
    

gsteff ,May 5, 2011 at 2:05

Try running:
    find . -type f | xargs -d "\n" tar -czvf backup.tar.gz

Caleb Kester ,Oct 12, 2013 at 20:41

Why not:
tar czvf backup.tar.gz *

Sure it's clever to use find and then xargs, but you're doing it the hard way.

Update: Porges has commented with a find-option that I think is a better answer than my answer, or the other one: find -print0 ... | xargs -0 ....

Kalibur x ,May 19, 2016 at 13:54

If you have multiple files or directories and you want to zip them into independent *.gz files, you can do this (the -type f and -mtime options are optional):
find -name "httpd-log*.txt" -type f -mtime +1 -exec tar -vzcf {}.gz {} \;

This will compress

httpd-log01.txt
httpd-log02.txt

to

httpd-log01.txt.gz
httpd-log02.txt.gz

Frank Eggink ,Apr 26, 2017 at 8:28

Why not give something like this a try: tar cvf scala.tar `find src -name *.scala`

tommy.carstensen ,Dec 10, 2017 at 14:55

Another solution as seen here :
find var/log/ -iname "anaconda.*" -exec tar -cvzf file.tar.gz {} +

Robino ,Sep 22, 2016 at 14:26

The best solution seems to be to create a file list and then archive the files, because you can use other sources and do something else with the list.

For example this allows using the list to calculate size of the files being archived:

#!/bin/sh

backupFileName="backup-big-$(date +"%Y%m%d-%H%M")"
backupRoot="/var/www"
backupOutPath=""

archivePath=$backupOutPath$backupFileName.tar.gz
listOfFilesPath=$backupOutPath$backupFileName.filelist

#
# Make a list of files/directories to archive
#
echo "" > $listOfFilesPath
echo "${backupRoot}/uploads" >> $listOfFilesPath
echo "${backupRoot}/extra/user/data" >> $listOfFilesPath
find "${backupRoot}/drupal_root/sites/" -name "files" -type d >> $listOfFilesPath

#
# Size calculation
#
sizeForProgress=`
cat $listOfFilesPath | while read nextFile;do
    if [ ! -z "$nextFile" ]; then
        du -sb "$nextFile"
    fi
done | awk '{size+=$1} END {print size}'
`

#
# Archive with progress
#
## simple with dump of all files currently archived
#tar -czvf $archivePath -T $listOfFilesPath
## progress bar
sizeForShow=$(($sizeForProgress/1024/1024))
echo -e "\nRunning backup [source files are $sizeForShow MiB]\n"
tar -cPp -T $listOfFilesPath | pv -s $sizeForProgress | gzip > $archivePath

user3472383 ,Jun 27 at 1:11

Would add a comment to @Steve Kehlet post but need 50 rep (RIP).

For anyone that has found this post through numerous googling, I found a way to not only find specific files given a time range, but also NOT include the relative paths OR whitespaces that would cause tarring errors. (THANK YOU SO MUCH STEVE.)

find . -name "*.pdf" -type f -mtime 0 -printf "%f\0" | tar -czvf /dir/zip.tar.gz --null -T -
  1. . relative directory
  2. -name "*.pdf" look for pdfs (or any file type)
  3. -type f type to look for is a file
  4. -mtime 0 look for files created in last 24 hours
  5. -printf "%f\0" Regular -print0 OR -printf "%f" did NOT work for me. From man pages:

This quoting is performed in the same way as for GNU ls. This is not the same quoting mechanism as the one used for -ls and -fls. If you are able to decide what format to use for the output of find then it is normally better to use '\0' as a terminator than to use newline, as file names can contain white space and newline characters.

  1. -czvf create archive, filter the archive through gzip , verbosely list files processed, archive name

[Aug 06, 2019] Tar archiving that takes input from a list of files>

Aug 06, 2019 | stackoverflow.com



Kurt McKee ,Apr 29 at 10:22

I have a file that contains a list of files I want to archive with tar. Let's call it mylist.txt

It contains:

file1.txt
file2.txt
...
file10.txt

Is there a way I can issue a tar command that takes mylist.txt as input? Something like

tar -cvf allfiles.tar -[someoption?] mylist.txt

So that it is similar to issuing this command:

tar -cvf allfiles.tar file1.txt file2.txt file10.txt

Stphane ,May 25 at 0:11

Yes:
tar -cvf allfiles.tar -T mylist.txt

drue ,Jun 23, 2014 at 14:56

Assuming GNU tar (as this is Linux), the -T or --files-from option is what you want.

Stphane ,Mar 1, 2016 at 20:28

You can also pipe in the file names which might be useful:
find /path/to/files -name \*.txt | tar -cvf allfiles.tar -T -

David C. Rankin ,May 31, 2018 at 18:27

Some versions of tar, for example, the default versions on HP-UX (I tested 11.11 and 11.31), do not include a command line option to specify a file list, so a decent work-around is to do this:
tar cvf allfiles.tar $(cat mylist.txt)

Jan ,Sep 25, 2015 at 20:18

On Solaris, you can use the option -I to read the filenames that you would normally state on the command line from a file. In contrast to the command line, this can create tar archives with hundreds of thousands of files (just did that).

So the example would read

tar -cvf allfiles.tar -I mylist.txt


For me on AIX, it worked as follows:
tar -L List.txt -cvf BKP.tar

[Aug 06, 2019] Shell command to tar directory excluding certain files-folders

Aug 06, 2019 | stackoverflow.com



Rekhyt ,Jun 24, 2014 at 16:06

Is there a simple shell command/script that supports excluding certain files/folders from being archived?

I have a directory that needs to be archived, with a subdirectory that has a number of very large files I do not need to back up.

Not quite solutions:

The tar --exclude=PATTERN command matches the given pattern and excludes those files, but I need specific files & folders to be ignored (full file path), otherwise valid files might be excluded.

I could also use the find command to create a list of files, exclude the ones I don't want to archive, and pass the list to tar, but that only works for a small number of files. I have tens of thousands.

I'm beginning to think the only solution is to create a file with a list of files/folders to be excluded, then use rsync with --exclude-from=file to copy all the files to a tmp directory, and then use tar to archive that directory.

Can anybody think of a better/more efficient solution?

EDIT: Charles Ma 's solution works well. The big gotcha is that the --exclude='./folder' MUST be at the beginning of the tar command. Full command (cd first, so backup is relative to that directory):

cd /folder_to_backup
tar --exclude='./folder' --exclude='./upload/folder2' -zcvf /backup/filename.tgz .

James O'Brien ,Nov 24, 2016 at 9:55

You can have multiple exclude options for tar so
$ tar --exclude='./folder' --exclude='./upload/folder2' -zcvf /backup/filename.tgz .

etc will work. Make sure to put --exclude before the source and destination items.

Johan Soderberg ,Jun 11, 2009 at 23:10

You can exclude directories with --exclude for tar.

If you want to archive everything except /usr you can use:

tar -zcvf /all.tgz / --exclude=/usr

In your case perhaps something like

tar -zcvf archive.tgz arc_dir --exclude=dir/ignore_this_dir

cstamas ,Oct 8, 2018 at 18:02

Possible options to exclude files/directories from backup using tar:

Exclude files using multiple patterns

tar -czf backup.tar.gz --exclude=PATTERN1 --exclude=PATTERN2 ... /path/to/backup

Exclude files using an exclude file filled with a list of patterns

tar -czf backup.tar.gz -X /path/to/exclude.txt /path/to/backup

Exclude files using tags by placing a tag file in any directory that should be skipped

tar -czf backup.tar.gz --exclude-tag-all=exclude.tag /path/to/backup
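The tag-file variant deserves a tiny illustration (a sketch; cache/ stands for a hypothetical subdirectory you want tar to skip): drop an empty exclude.tag file into the directory, and --exclude-tag-all will omit that directory entirely.

touch /path/to/backup/cache/exclude.tag
tar -czf backup.tar.gz --exclude-tag-all=exclude.tag /path/to/backup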

Anish Ramaswamy ,Apr 1 at 16:18

An old question with many answers, but I found that none were quite clear enough for me, so I would like to add my attempt.

if you have the following structure

/home/ftp/mysite/

with following file/folders

/home/ftp/mysite/file1
/home/ftp/mysite/file2
/home/ftp/mysite/file3
/home/ftp/mysite/folder1
/home/ftp/mysite/folder2
/home/ftp/mysite/folder3

So, you want to make a tar file that contains everything inside /home/ftp/mysite (to move the site to a new server), but file3 is just junk, and everything in folder3 is also not needed, so we will skip those two.

we use the format

tar -czvf <name of tar file> <what to tar> <any excludes>

where c = create, z = zip, and v = verbose (you can see the files as they are entered, useful to make sure none of the files you exclude are being added), and f = file.

so, my command would look like this

cd /home/ftp/
tar -czvf mysite.tar.gz mysite --exclude='file3' --exclude='folder3'

Note that the files/folders excluded are relative to the root of your tar (I have tried full paths here relative to / but I cannot make that work).

hope this will help someone (and me next time I google it)

not2qubit ,Apr 4, 2018 at 3:24

You can use standard "ant notation" to exclude directories with relative paths.
This works for me and excludes any .git or node_modules directories.
tar -cvf myFile.tar --exclude=**/.git/* --exclude=**/node_modules/*  -T /data/txt/myInputFile.txt 2> /data/txt/myTarLogFile.txt

myInputFile.txt Contains:

/dev2/java
/dev2/javascript

GeertVc ,Feb 9, 2015 at 13:37

I've experienced that, at least with the Cygwin version of tar I'm using ("CYGWIN_NT-5.1 1.7.17(0.262/5/3) 2012-10-19 14:39 i686 Cygwin" on a Windows XP Home Edition SP3 machine), the order of options is important.

While this construction worked for me:

tar cfvz target.tgz --exclude='<dir1>' --exclude='<dir2>' target_dir

that one didn't work:

tar cfvz --exclude='<dir1>' --exclude='<dir2>' target.tgz target_dir

This, even though tar --help reveals the following:

tar [OPTION...] [FILE]

So the second command should also work, but apparently that is not the case...

Best rgds,

Scott Stensland ,Feb 12, 2015 at 20:55

This exclude pattern handles filename suffixes like png or mp3 as well as directory names like .git and node_modules:
tar --exclude={*.png,*.mp3,*.wav,.git,node_modules} -Jcf ${target_tarball}  ${source_dirname}

Michael ,May 18 at 23:29

I found this somewhere else so I won't take credit, but it worked better than any of the solutions above for my Mac-specific issues (even though this is closed):
tar zc --exclude __MACOSX --exclude .DS_Store -f <archive> <source(s)>

J. Lawson ,Apr 17, 2018 at 23:28

For those who have issues with it, some versions of tar only work properly without the './' prefix in the exclude value.
$ tar --version

tar (GNU tar) 1.27.1

Command syntax that works:

tar -czvf ../allfiles-butsome.tar.gz * --exclude=acme/foo

These will not work:

$ tar -czvf ../allfiles-butsome.tar.gz * --exclude=./acme/foo
$ tar -czvf ../allfiles-butsome.tar.gz * --exclude='./acme/foo'
$ tar --exclude=./acme/foo -czvf ../allfiles-butsome.tar.gz *
$ tar --exclude='./acme/foo' -czvf ../allfiles-butsome.tar.gz *
$ tar -czvf ../allfiles-butsome.tar.gz * --exclude=/full/path/acme/foo
$ tar -czvf ../allfiles-butsome.tar.gz * --exclude='/full/path/acme/foo'
$ tar --exclude=/full/path/acme/foo -czvf ../allfiles-butsome.tar.gz *
$ tar --exclude='/full/path/acme/foo' -czvf ../allfiles-butsome.tar.gz *

Jerinaw ,May 6, 2017 at 20:07

For Mac OSX I had to do

tar -zcv --exclude='folder' -f theOutputTarFile.tar folderToTar

Note the -f after the --exclude=

Aaron Votre ,Jul 15, 2016 at 15:56

I agree the --exclude flag is the right approach.
$ tar --exclude='./folder_or_file' --exclude='file_pattern' --exclude='fileA'

A word of warning for a side effect that I did not find immediately obvious: The exclusion of 'fileA' in this example will search for 'fileA' RECURSIVELY!

Example: A directory with a single subdirectory containing a file of the same name (data.txt):

data.txt
config.txt
--+dirA
  |  data.txt
  |  config.docx

Znik ,Nov 15, 2014 at 5:12

To avoid possible 'xargs: Argument list too long' errors due to the use of find ... | xargs ... when processing tens of thousands of files, you can pipe the output of find directly to tar using find ... -print0 | tar --null ... .
# archive a given directory, but exclude various files & directories 
# specified by their full file paths
find "$(pwd -P)" -type d \( -path '/path/to/dir1' -or -path '/path/to/dir2' \) -prune \
   -or -not \( -path '/path/to/file1' -or -path '/path/to/file2' \) -print0 | 
   gnutar --null --no-recursion -czf archive.tar.gz --files-from -
   #bsdtar --null -n -czf archive.tar.gz -T -

Mike ,May 9, 2014 at 21:29

After reading this thread, I did a little testing on RHEL 5 and here are my results for tarring up the abc directory:

This will exclude the directories error and logs and all files under the directories:

tar cvpzf abc.tgz abc/ --exclude='abc/error' --exclude='abc/logs'

Adding a wildcard after the excluded directory will exclude the files but preserve the directories:

tar cvpzf abc.tgz abc/ --exclude='abc/error/*' --exclude='abc/logs/*'

Alex B ,Jun 11, 2009 at 23:03

Use the find command in conjunction with the tar append (-r) option. This way you can add files to an existing tar in a single step, instead of a two pass solution (create list of files, create tar).
find /dir/dir -prune ... -o etc etc.... -exec tar rvf ~/tarfile.tar {} \;

frommelmak ,Sep 10, 2012 at 14:08

You can also use one of the "--exclude-tag" options (--exclude-tag, --exclude-tag-under, --exclude-tag-all) depending on your needs:

The directory hosting the specified tag FILE will be excluded.
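
A minimal sketch of the tag approach (the paths and the tag file name here are only examples): drop an empty marker file into each directory that should be skipped, then point tar at it:

$ touch /path/to/backup/cache/exclude.tag
$ tar -czf backup.tar.gz --exclude-tag-all=exclude.tag /path/to/backup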

camh ,Jun 12, 2009 at 5:53

You can use cpio(1) to create tar files. cpio takes the files to archive on stdin, so if you've already figured out the find command you want to use to select the files to archive, pipe it into cpio to create the tar file:
find ... | cpio -o -H ustar | gzip -c > archive.tar.gz

PicoutputCls ,Aug 21, 2018 at 14:13

With GNU tar v1.26 the --exclude needs to come after the archive file and backup directory arguments, should have no leading or trailing slashes, and prefers no quotes (single or double). So, relative to the PARENT directory to be backed up, it's:

tar cvfz /path_to/mytar.tgz ./dir_to_backup --exclude=some_path/to_exclude

user2553863 ,May 28 at 21:41

After reading all these good answers for different versions, and having solved the problem for myself, I think there are small details that are very important, and unusual for general GNU/Linux use, that aren't stressed enough and deserve more than comments.

So I'm not going to try to answer the question for every case, but instead try to register where to look when things don't work.

IT IS VERY IMPORTANT TO NOTICE:

  1. THE ORDER OF THE OPTIONS MATTERS: it is not the same to put the --exclude before or after the file option and the directories to back up. This is unexpected, at least to me, because in my experience the order of options in GNU/Linux commands usually doesn't matter.
  2. Different tar versions expect these options in a different order: for instance, @Andrew's answer indicates that in GNU tar v1.26 and 1.28 the excludes come last, whereas in my case, with GNU tar 1.29, it's the other way around.
  3. THE TRAILING SLASHES MATTER: at least in GNU tar 1.29, there shouldn't be any.

In my case, for GNU tar 1.29 on Debian stretch, the command that worked was

tar --exclude="/home/user/.config/chromium" --exclude="/home/user/.cache" -cf file.tar  /dir1/ /home/ /dir3/

The quotes didn't matter, it worked with or without them.

I hope this will be useful to someone.

jørgensen ,Dec 19, 2015 at 11:10

Your best bet is to use find with tar, via xargs (to handle the large number of arguments). For example:
find / -print0 | xargs -0 tar cjf tarfile.tar.bz2

Ashwini Gupta ,Jan 12, 2018 at 10:30

tar -cvzf destination_folder source_folder -X /home/folder/excludes.txt

-X indicates a file which contains a list of filenames which must be excluded from the backup. For instance, you can specify *~ in this file to exclude any filenames ending with ~ from the backup.
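
As a hedged illustration (the file and directory names are only examples), the exclude file simply lists one pattern per line:

$ cat /home/folder/excludes.txt
*~
*.log
tmp
$ tar -czvf backup.tar.gz -X /home/folder/excludes.txt source_folder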

George ,Sep 4, 2013 at 22:35

Possible redundant answer but since I found it useful, here it is:

While FreeBSD root (i.e., using csh) I wanted to copy my whole root filesystem to /mnt but without /usr and (obviously) /mnt. This is what worked (I was at /):

tar --exclude ./usr --exclude ./mnt --create --file - . | (cd /mnt && tar xvf -)

My whole point is that it was necessary (by putting the ./ ) to specify to tar that the excluded directories were part of the greater directory being copied.

My €0.02

t0r0X ,Sep 29, 2014 at 20:25

I had no luck getting tar to exclude a 5 Gigabyte subdirectory a few levels deep. In the end, I just used the Unix zip command. It was a lot easier for me.

So for this particular example from the original post
(tar --exclude='./folder' --exclude='./upload/folder2' -zcvf /backup/filename.tgz . )

The equivalent would be:

zip -r /backup/filename.zip . -x upload/folder/**\* upload/folder2/**\*

(NOTE: Here is the post I originally used that helped me https://superuser.com/questions/312301/unix-zip-directory-but-excluded-specific-subdirectories-and-everything-within-t )

RohitPorwal ,Jul 21, 2016 at 9:56

Check it out
tar cvpzf zip_folder.tgz . --exclude=./public --exclude=./tmp --exclude=./log --exclude=fileName

tripleee ,Sep 14, 2017 at 4:38

The following bash script should do the trick. It uses the answer given here by Marcus Sundman.
#!/bin/bash

echo -n "Please enter the name of the tar file you wish to create with out extension "
read nam

echo -n "Please enter the path to the directories to tar "
read pathin

echo tar -czvf $nam.tar.gz
excludes=`find $pathin -iname "*.CC" -exec echo "--exclude \'{}\'" \;|xargs`
echo $pathin

echo tar -czvf $nam.tar.gz $excludes $pathin

This will print out the command you need and you can just copy and paste it back in. There is probably a more elegant way to provide it directly to the command line.

Just change *.CC for any other common extension, file name or regex you want to exclude and this should still work.

EDIT

Just to add a little explanation: find generates a list of files matching the chosen pattern (in this case *.CC). This list is passed via xargs to the echo command. This prints --exclude 'one entry from the list'. The backslashes (\) are escape characters for the ' marks.

[Aug 06, 2019] bash - More efficient way to find tar millions of files - Stack Overflow

Aug 06, 2019 | stackoverflow.com



theomega ,Apr 29, 2010 at 13:51

I've got a job running on my server at the command line prompt for a two days now:
find data/ -name filepattern-*2009* -exec tar uf 2009.tar {} \;

It is taking forever, and then some. Yes, there are millions of files in the target directory. (Each file is a measly 8 bytes in a well hashed directory structure.) But just running...

find data/ -name filepattern-*2009* -print > filesOfInterest.txt

...takes only two hours or so. At the rate my job is running, it won't be finished for a couple of weeks. That seems unreasonable. Is there a more efficient way to do this? Maybe with a more complicated bash script?

A secondary questions is "why is my current approach so slow?"

Stu Thompson ,May 6, 2013 at 1:11

If you already did the second command that created the file list, just use the -T option to tell tar to read the files names from that saved file list. Running 1 tar command vs N tar commands will be a lot better.
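
A short sketch of that suggestion, reusing the file list produced by the second command (exact option placement may vary between tar versions; for file names with unusual characters see the -print0 variant further down):

find data/ -name 'filepattern-*2009*' -print > filesOfInterest.txt
tar -uf 2009.tar -T filesOfInterest.txt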

Matthew Mott ,Jul 3, 2014 at 19:21

One option is to use cpio to generate a tar-format archive:
$ find data/ -name "filepattern-*2009*" | cpio -ov --format=ustar > 2009.tar

cpio works natively with a list of filenames from stdin, rather than a top-level directory, which makes it an ideal tool for this situation.

bashfu ,Apr 23, 2010 at 10:05

Here's a find-tar combination that can do what you want without the use of xargs or exec (which should result in a noticeable speed-up):
tar --version    # tar (GNU tar) 1.14 

# FreeBSD find (on Mac OS X)
find -x data -name "filepattern-*2009*" -print0 | tar --null --no-recursion -uf 2009.tar --files-from -

# for GNU find use -xdev instead of -x
gfind data -xdev -name "filepattern-*2009*" -print0 | tar --null --no-recursion -uf 2009.tar --files-from -

# added: set permissions via tar
find -x data -name "filepattern-*2009*" -print0 | \
    tar --null --no-recursion --owner=... --group=... --mode=... -uf 2009.tar --files-from -

Stu Thompson ,Apr 28, 2010 at 12:50

There is xargs for this:
find data/ -name filepattern-*2009* -print0 | xargs -0 tar uf 2009.tar

Guessing why it is slow is hard, as there is not much information: what is the structure of the directory, what filesystem do you use, and how was it configured when created? Having millions of files in a single directory is quite a hard situation for most filesystems.

bashfu ,May 1, 2010 at 14:18

To correctly handle file names with weird (but legal) characters (such as newlines, ...) you should write your file list to filesOfInterest.txt using find's -print0:
find -x data -name "filepattern-*2009*" -print0 > filesOfInterest.txt
tar --null --no-recursion -uf 2009.tar --files-from filesOfInterest.txt

Michael Aaron Safyan ,Apr 23, 2010 at 8:47

The way you currently have things, you are invoking the tar command every single time it finds a file, which is not surprisingly slow. Instead of taking the two hours to print plus the amount of time it takes to open the tar archive, see if the files are out of date, and add them to the archive, you are actually multiplying those times together. You might have better success invoking the tar command once, after you have batched together all the names, possibly using xargs to achieve the invocation. By the way, I hope you are using 'filepattern-*2009*' and not filepattern-*2009* as the stars will be expanded by the shell without quotes.

ruffrey ,Nov 20, 2018 at 17:13

There is a utility for this called tarsplitter .
tarsplitter -m archive -i folder/*.json -o archive.tar -p 8

will use 8 threads to archive the files matching "folder/*.json" into an output archive of "archive.tar"

https://github.com/AQUAOSOTech/tarsplitter

syneticon-dj ,Jul 22, 2013 at 8:47

Simplest (also remove file after archive creation):
find *.1  -exec tar czf '{}.tgz' '{}' --remove-files \;

[Aug 06, 2019] backup - Fastest way combine many files into one (tar czf is too slow) - Unix Linux Stack Exchange

Aug 06, 2019 | unix.stackexchange.com



Gilles ,Nov 5, 2013 at 0:05

Currently I'm running tar czf to combine backup files. The files are in a specific directory.

But the number of files is growing. Using tar czf takes too much time (more than 20 minutes and counting).

I need to combine the files more quickly and in a scalable fashion.

I've found genisoimage, readom and mkisofs. But I don't know which is fastest and what the limitations are for each of them.

Rufo El Magufo ,Aug 24, 2017 at 7:56

You should check whether most of your time is being spent on CPU or on I/O. Either way, there are ways to improve it:

A: don't compress

You didn't mention "compression" in your list of requirements, so try dropping the "z" from your argument list: tar cf. This might speed things up a bit.

There are other techniques to speed up the process, like using "-N" to skip files you have already backed up before.
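
A small sketch of the -N idea, assuming GNU tar (the date and path are only examples): files older than the given date are skipped.

tar -cf backup.tar -N '2019-08-01' /path/to/files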

B: backup the whole partition with dd

Alternatively, if you're backing up an entire partition, take a copy of the whole disk image instead. This would save processing and a lot of disk head seek time. tar and any other program working at a higher level have the overhead of reading and processing directory entries and inodes to find where the file content is, and of doing more disk head seeks, reading each file from a different place on the disk.

To backup the underlying data much faster, use:

dd bs=16M if=/dev/sda1 of=/another/filesystem

(This assumes you're not using RAID, which may change things a bit)


To repeat what others have said: we need to know more about the files that are being backed up. I'll go with some assumptions here.

Append to the tar file

If files are only being added to the directories (that is, no file is being deleted), make sure you are appending to the existing tar file rather than re-creating it every time. You can do this by specifying the existing archive filename in your tar command instead of a new one (or deleting the old one).
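
A minimal sketch of that approach (the paths are only examples): create the archive once, then update it on later runs instead of rebuilding it. Note that append/update only work on uncompressed archives.

tar -cf backup.tar /path/to/files    # first run: create the archive
tar -uf backup.tar /path/to/files    # later runs: add only files newer than their archived copies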

Write to a different disk

Reading from the same disk you are writing to may be killing performance. Try writing to a different disk to spread the I/O load. If the archive file needs to be on the same disk as the original files, move it afterwards.

Don't compress

Just repeating what @Yves said. If your backup files are already compressed, there's not much need to compress again. You'll just be wasting CPU cycles.

[Aug 04, 2019] 10 YAML tips for people who hate YAML Enable SysAdmin

Aug 04, 2019 | www.redhat.com

10 YAML tips for people who hate YAML Do you hate YAML? These tips might ease your pain.

Posted June 10, 2019 | by Seth Kenlon (Red Hat)

There are lots of formats for configuration files: a list of values, key and value pairs, INI files, YAML, JSON, XML, and many more. Of these, YAML sometimes gets cited as a particularly difficult one to handle for a few different reasons. While its ability to reflect hierarchical values is significant and its minimalism can be refreshing to some, its Python-like reliance upon syntactic whitespace can be frustrating.

However, the open source world is diverse and flexible enough that no one has to suffer through abrasive technology, so if you hate YAML, here are 10 things you can (and should!) do to make it tolerable. Starting with zero, as any sensible index should.

0. Make your editor do the work

Whatever text editor you use probably has plugins to make dealing with syntax easier. If you're not using a YAML plugin for your editor, find one and install it. The effort you spend on finding a plugin and configuring it as needed will pay off tenfold the very next time you edit YAML.

For example, the Atom editor comes with a YAML mode by default, and while GNU Emacs ships with minimal support, you can add additional packages like yaml-mode to help.

Emacs in YAML and whitespace mode.

If your favorite text editor lacks a YAML mode, you can address some of your grievances with small configuration changes. For instance, the default text editor for the GNOME desktop, Gedit, doesn't have a YAML mode available, but it does provide YAML syntax highlighting by default and features configurable tab width:

Configuring tab width and type in Gedit.

With the drawspaces Gedit plugin package, you can make white space visible in the form of leading dots, removing any question about levels of indentation.

Take some time to research your favorite text editor. Find out what the editor, or its community, does to make YAML easier, and leverage those features in your work. You won't be sorry.

1. Use a linter

Ideally, programming languages and markup languages use predictable syntax. Computers tend to do well with predictability, so the concept of a linter was invented in 1978. If you're not using a linter for YAML, then it's time to adopt this 40-year-old tradition and use yamllint .

You can install yamllint on Linux using your distribution's package manager. For instance, on Red Hat Enterprise Linux 8 or Fedora :

$ sudo dnf install yamllint

Invoking yamllint is as simple as telling it to check a file. Here's an example of yamllint 's response to a YAML file containing an error:

$ yamllint errorprone.yaml
errorprone.yaml
23:10     error    syntax error: mapping values are not allowed here
23:11     error    trailing spaces  (trailing-spaces)

That's not a time stamp on the left. It's the error's line and column number. You may or may not understand what error it's talking about, but now you know the error's location. Taking a second look at the location often makes the error's nature obvious. Success is eerily silent, so if you want feedback based on the lint's success, you can add a conditional second command with a double-ampersand ( && ). In a POSIX shell, && fails if a command returns anything but 0, so upon success, your echo command makes that clear. This tactic is somewhat superficial, but some users prefer the assurance that the command did run correctly, rather than failing silently. Here's an example:

$ yamllint perfect.yaml && echo "OK"
OK

The reason yamllint is so silent when it succeeds is that it returns 0 errors when there are no errors.

2. Write in Python, not YAML

If you really hate YAML, stop writing in YAML, at least in the literal sense. You might be stuck with YAML because that's the only format an application accepts, but if the only requirement is to end up in YAML, then work in something else and then convert. Python, along with the excellent pyyaml library, makes this easy, and you have two methods to choose from: self-conversion or scripted.

Self-conversion

In the self-conversion method, your data files are also Python scripts that produce YAML. This works best for small data sets. Just write your JSON data into a Python variable, prepend an import statement, and end the file with a simple three-line output statement.

#!/usr/bin/python3	
import yaml 

d={
"glossary": {
  "title": "example glossary",
  "GlossDiv": {
	"title": "S",
	"GlossList": {
	  "GlossEntry": {
		"ID": "SGML",
		"SortAs": "SGML",
		"GlossTerm": "Standard Generalized Markup Language",
		"Acronym": "SGML",
		"Abbrev": "ISO 8879:1986",
		"GlossDef": {
		  "para": "A meta-markup language, used to create markup languages such as DocBook.",
		  "GlossSeeAlso": ["GML", "XML"]
		  },
		"GlossSee": "markup"
		}
	  }
	}
  }
}

f=open('output.yaml','w')
f.write(yaml.dump(d))
f.close()

Run the file with Python to produce a file called output.yaml.

$ python3 ./example.json
$ cat output.yaml
glossary:
  GlossDiv:
	GlossList:
	  GlossEntry:
		Abbrev: ISO 8879:1986
		Acronym: SGML
		GlossDef:
		  GlossSeeAlso: [GML, XML]
		  para: A meta-markup language, used to create markup languages such as DocBook.
		GlossSee: markup
		GlossTerm: Standard Generalized Markup Language
		ID: SGML
		SortAs: SGML
	title: S
  title: example glossary

This output is perfectly valid YAML, although yamllint does issue a warning that the file is not prefaced with --- , which is something you can adjust either in the Python script or manually.

Scripted conversion

In this method, you write in JSON and then run a Python conversion script to produce YAML. This scales better than self-conversion, because it keeps the converter separate from the data.

Create a JSON file and save it as example.json . Here is an example from json.org :

{
	"glossary": {
	  "title": "example glossary",
	  "GlossDiv": {
		"title": "S",
		"GlossList": {
		  "GlossEntry": {
			"ID": "SGML",
			"SortAs": "SGML",
			"GlossTerm": "Standard Generalized Markup Language",
			"Acronym": "SGML",
			"Abbrev": "ISO 8879:1986",
			"GlossDef": {
			  "para": "A meta-markup language, used to create markup languages such as DocBook.",
			  "GlossSeeAlso": ["GML", "XML"]
			  },
			"GlossSee": "markup"
			}
		  }
		}
	  }
	}

Create a simple converter and save it as json2yaml.py . This script imports both the YAML and JSON Python modules, loads a JSON file defined by the user, performs the conversion, and then writes the data to output.yaml .

#!/usr/bin/python3
import yaml
import sys
import json

OUT=open('output.yaml','w')
IN=open(sys.argv[1], 'r')

JSON = json.load(IN)
IN.close()
yaml.dump(JSON, OUT)
OUT.close()

Save this script in your system path, and execute as needed:

$ ~/bin/json2yaml.py example.json
3. Parse early, parse often

Sometimes it helps to look at a problem from a different angle. If your problem is YAML, and you're having a difficult time visualizing the data's relationships, you might find it useful to restructure that data, temporarily, into something you're more familiar with.

If you're more comfortable with dictionary-style lists or JSON, for instance, you can convert YAML to JSON in two commands using an interactive Python shell. Assume your YAML file is called mydata.yaml .

$ python3
>>> import yaml
>>> f=open('mydata.yaml','r')
>>> yaml.load(f)
{'document': 34843, 'date': datetime.date(2019, 5, 23), 'bill-to': {'given': 'Seth', 'family': 'Kenlon', 'address': {'street': '51b Mornington Road\n', 'city': 'Brooklyn', 'state': 'Wellington', 'postal': 6021, 'country': 'NZ'}}, 'words': 938, 'comments': 'Good article. Could be better.'}
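
The same idea works non-interactively; a one-liner sketch, assuming PyYAML is installed and the file is named mydata.yaml (default=str keeps the date field from breaking JSON serialization):

$ python3 -c 'import yaml, json; print(json.dumps(yaml.safe_load(open("mydata.yaml")), indent=2, default=str))'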

There are many other examples, and there are plenty of online converters and local parsers, so don't hesitate to reformat data when it starts to look more like a laundry list than markup.

4. Read the spec

After I've been away from YAML for a while and find myself using it again, I go straight back to yaml.org to re-read the spec. If you've never read the specification for YAML and you find YAML confusing, a glance at the spec may provide the clarification you never knew you needed. The specification is surprisingly easy to read, with the requirements for valid YAML spelled out with lots of examples in chapter 6 .

5. Pseudo-config

Before I started writing my book, Developing Games on the Raspberry Pi , Apress, 2019, the publisher asked me for an outline. You'd think an outline would be easy. By definition, it's just the titles of chapters and sections, with no real content. And yet, out of the 300 pages published, the hardest part to write was that initial outline.

YAML can be the same way. You may have a notion of the data you need to record, but that doesn't mean you fully understand how it's all related. So before you sit down to write YAML, try doing a pseudo-config instead.

A pseudo-config is like pseudo-code. You don't have to worry about structure or indentation, parent-child relationships, inheritance, or nesting. You just create iterations of data in the way you currently understand it inside your head.

A pseudo-config.

Once you've got your pseudo-config down on paper, study it, and transform your results into valid YAML.

6. Resolve the spaces vs. tabs debate

OK, maybe you won't definitively resolve the spaces-vs-tabs debate , but you should at least resolve the debate within your project or organization. Whether you resolve this question with a post-process sed script, text editor configuration, or a blood-oath to respect your linter's results, anyone in your team who touches a YAML project must agree to use spaces (in accordance with the YAML spec).

Any good text editor allows you to define a number of spaces instead of a tab character, so the choice shouldn't negatively affect fans of the Tab key.

Tabs and spaces are, as you probably know all too well, essentially invisible. And when something is out of sight, it rarely comes to mind until the bitter end, when you've tested and eliminated all of the "obvious" problems. An hour wasted to an errant tab or group of spaces is your signal to create a policy to use one or the other, and then to develop a fail-safe check for compliance (such as a Git hook to enforce linting).
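
As a rough illustration of that last idea, a minimal pre-commit hook sketch (the hook path and file patterns are assumptions, and it does not handle file names with spaces):

#!/bin/sh
# hypothetical .git/hooks/pre-commit: reject the commit if any staged YAML file fails yamllint
for f in $(git diff --cached --name-only --diff-filter=ACM -- '*.yml' '*.yaml'); do
    yamllint "$f" || exit 1
done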

7. Less is more (or more is less)

Some people like to write YAML to emphasize its structure. They indent vigorously to help themselves visualize chunks of data. It's a sort of cheat to mimic markup languages that have explicit delimiters.

Here's a good example from Ansible's documentation :

# Employee records
-  martin:
        name: Martin D'vloper
        job: Developer
        skills:
            - python
            - perl
            - pascal
-  tabitha:
        name: Tabitha Bitumen
        job: Developer
        skills:
            - lisp
            - fortran
            - erlang

For some users, this approach is a helpful way to lay out a YAML document, while other users miss the structure for the void of seemingly gratuitous white space.

If you own and maintain a YAML document, then you get to define what "indentation" means. If blocks of horizontal white space distract you, then use the minimal amount of white space required by the YAML spec. For example, the same YAML from the Ansible documentation can be represented with fewer indents without losing any of its validity or meaning:

---
- martin:
   name: Martin D'vloper
   job: Developer
   skills:
   - python
   - perl
   - pascal
- tabitha:
   name: Tabitha Bitumen
   job: Developer
   skills:
   - lisp
   - fortran
   - erlang
8. Make a recipe

I'm a big fan of repetition breeding familiarity, but sometimes repetition just breeds repeated stupid mistakes. Luckily, a clever peasant woman experienced this very phenomenon back in 396 AD (don't fact-check me), and invented the concept of the recipe .

If you find yourself making YAML document mistakes over and over, you can embed a recipe or template in the YAML file as a commented section. When you're adding a section, copy the commented recipe and overwrite the dummy data with your new real data. For example:

---
# - <common name>:
#   name: Given Surname
#   job: JOB
#   skills:
#   - LANG
- martin:
  name: Martin D'vloper
  job: Developer
  skills:
  - python
  - perl
  - pascal
- tabitha:
  name: Tabitha Bitumen
  job: Developer
  skills:
  - lisp
  - fortran
  - erlang
9. Use something else

I'm a fan of YAML, generally, but sometimes YAML isn't the answer. If you're not locked into YAML by the application you're using, then you might be better served by some other configuration format. Sometimes config files outgrow themselves and are better refactored into simple Lua or Python scripts.

YAML is a great tool and is popular among users for its minimalism and simplicity, but it's not the only tool in your kit. Sometimes it's best to part ways. One of the benefits of YAML is that parsing libraries are common, so as long as you provide migration options, your users should be able to adapt painlessly.

If YAML is a requirement, though, keep these tips in mind and conquer your YAML hatred once and for all!

[Aug 04, 2019] Ansible IT automation for everybody Enable SysAdmin

Aug 04, 2019 | www.redhat.com


Ansible: IT automation for everybody Kick the tires with Ansible and start automating with these simple tasks.

Posted July 31, 2019 | by Jörg Kastning


Ansible is an open source tool for software provisioning, application deployment, orchestration, configuration, and administration. Its purpose is to help you automate your configuration processes and simplify the administration of multiple systems. Thus, Ansible essentially pursues the same goals as Puppet, Chef, or Saltstack.

What I like about Ansible is that it's flexible, lean, and easy to start with. In most use cases, it keeps the job simple.

I chose to use Ansible back in 2016 because no agent has to be installed on the managed nodes -- a node is what Ansible calls a managed remote system. All you need to start managing a remote system with Ansible is SSH access to the system, and Python installed on it. Python is preinstalled on most Linux systems, and I was already used to managing my hosts via SSH, so I was ready to start right away. And if the day comes where I decide not to use Ansible anymore, I just have to delete my Ansible controller machine (control node) and I'm good to go. There are no agents left on the managed nodes that have to be removed.

Ansible offers two ways to control your nodes. The first one uses playbooks . These are simple ASCII files written in Yet Another Markup Language (YAML) , which is easy to read and write. And second, there are the ad-hoc commands , which allow you to run a command or module without having to create a playbook first.

You organize the hosts you would like to manage and control in an inventory file, which offers flexible format options. For example, this could be an INI-like file that looks like:

mail.example.com

[webservers]
foo.example.com
bar.example.com

[dbservers]
one.example.com
two.example.com
three.example.com

[site1:children]
webservers
dbservers
Examples

I would like to give you two small examples of how to use Ansible. I started with these really simple tasks before I used Ansible to take control of more complex tasks in my infrastructure.

Ad-hoc: Check if Ansible can remote manage a system

As you might recall from the beginning of this article, all you need to manage a remote host is SSH access to it, and a working Python interpreter on it. To check if these requirements are fulfilled, run the following ad-hoc command against a host from your inventory:

[jkastning@ansible]$ ansible mail.example.com -m ping
mail.example.com | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
Playbook: Keep installed packages up to date

This example shows how to use a playbook to keep installed packages up to date. The playbook is an ASCII text file which looks like this:

---
# Make sure all packages are up to date
- name: Update your system
  hosts: mail.example.com
  tasks:
  - name: Make sure all packages are up to date
    yum:
      name: "*"
      state: latest

Now, we are ready to run the playbook:

[jkastning@ansible]$ ansible-playbook yum_update.yml 

PLAY [Update your system] **************************************************************************

TASK [Gathering Facts] *****************************************************************************
ok: [mail.example.com]

TASK [Make sure all packages are up to date] *******************************************************
ok: [mail.example.com]

PLAY RECAP *****************************************************************************************
mail.example.com : ok=2    changed=0    unreachable=0    failed=0

Here everything is OK and there is nothing else to do: all installed packages are already at the latest version.

It's simple: Try and use it

The examples above are quite simple and should only give you a first impression. But, from the start, it did not take me long to use Ansible for more complex tasks like the Poor Man's RHEL Mirror or the Ansible Role for RHEL Patchmanagment .

Today, Ansible saves me a lot of time and supports my day-to-day work tasks quite well. So what are you waiting for? Try it, use it, and feel a bit more comfortable at work.


[Aug 03, 2019] Creating Bootable Linux USB Drive with Etcher

Aug 03, 2019 | linuxize.com

There are several different applications available for free use which will allow you to flash ISO images to USB drives. In this example, we will use Etcher. It is a free and open-source utility for flashing images to SD cards & USB drives and supports Windows, macOS, and Linux.

Head over to the Etcher downloads page , and download the most recent Etcher version for your operating system. Once the file is downloaded, double-click on it and follow the installation wizard.

Creating Bootable Linux USB Drive using Etcher is a relatively straightforward process, just follow the steps outlined below:

  1. Connect the USB flash drive to your system and Launch Etcher.
  2. Click on the Select image button and locate the distribution .iso file.
  3. If only one USB drive is attached to your machine, Etcher will automatically select it. Otherwise, if more than one SD card or USB drive is connected, make sure you have selected the correct drive before flashing the image.

[Aug 02, 2019] linux - How to tar directory and then remove originals including the directory - Super User

Aug 02, 2019 | superuser.com



mit ,Dec 7, 2016 at 1:22

I'm trying to tar a collection of files in a directory called 'my_directory' and remove the originals by using the command:
tar -cvf files.tar my_directory --remove-files

However it is only removing the individual files inside the directory and not the directory itself (which is what I specified in the command). What am I missing here?

EDIT:

Yes, I suppose the 'remove-files' option is fairly literal, although I too found the man page unclear on that point. (In Linux I tend not to distinguish much between directories and files, and sometimes forget that they are not the same thing.) It looks like the consensus is that it doesn't remove directories.

However, my main reason for asking this question stems from tar's handling of absolute paths. Because you must specify a relative path to the file(s) to be compressed, you must change to the parent directory to tar it properly. As I see it, using any kind of follow-on 'rm' command is potentially dangerous in that situation. Thus I was hoping to simplify things by making tar itself do the removal.

For example, imagine a backup script where the directory to backup (ie. tar) is included as a shell variable. If that shell variable value was badly entered, it is possible that the result could be deleted files from whatever directory you happened to be in last.
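
To illustrate the concern, a defensive sketch (the variable name and path are hypothetical): validate the variable first, and remove the directory only if tar succeeded.

backup_dir="/srv/my_directory"
[ -n "$backup_dir" ] && [ -d "$backup_dir" ] || { echo "bad backup_dir: '$backup_dir'" >&2; exit 1; }
tar -cvf files.tar "$backup_dir" && rm -rf "$backup_dir"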

Arjan ,Feb 13, 2016 at 13:08

You are missing the part which says the --remove-files option removes files after adding them to the archive.

You could follow the archive and file-removal operation with a command like,

find /path/to/be/archived/ -depth -type d -empty -exec rmdir {} \;


Update: You may be interested in reading this short Debian discussion on,
Bug 424692: --remove-files complains that directories "changed as we read it" .

Kim ,Feb 13, 2016 at 13:08

Since the --remove-files option only removes files , you could try
tar -cvf files.tar my_directory && rm -R my_directory

so that the directory is removed only if the tar returns an exit status of 0

redburn ,Feb 13, 2016 at 13:08

Have you tried putting the --remove-files directive after the archive name? It works for me.
tar -cvf files.tar --remove-files my_directory

shellking ,Oct 4, 2010 at 19:58

source={directory argument}

e.g.

source={FULL ABSOLUTE PATH}/my_directory
parent={parent directory of argument}

e.g.

parent={ABSOLUTE PATH of 'my_directory'}/
logFile={path to a run log that captures status messages}

Then you could execute something along the lines of:

cd ${parent}

tar cvf Tar_File.`date +%Y%m%d_%H%M%S` ${source}

if [ $? != 0 ]
then
  echo "Backup FAILED for ${source} at `date`" >> ${logFile}
else
  echo "Backup SUCCESS for ${source} at `date`" >> ${logFile}
  rm -rf ${source}
fi

mit ,Nov 14, 2011 at 13:21

This was probably a bug.

Also the word "file" is ambigous in this case. But because this is a command line switch I would it expect to mean also directories, because in unix/lnux everything is a file, also a directory. (The other interpretation is of course also valid, but It makes no sense to keep directories in such a case. I would consider it unexpected and confusing behavior.)

But I have found that on some distributions GNU tar actually removes the directory tree. Another indication that keeping the tree was a bug, or at least some workaround until they fixed it.

This is what I tried out on an ubuntu 10.04 console:

mit:/var/tmp$ mkdir tree1                                                                                               
mit:/var/tmp$ mkdir tree1/sub1                                                                                          
mit:/var/tmp$ > tree1/sub1/file1                                                                                        

mit:/var/tmp$ ls -la                                                                                                    
drwxrwxrwt  4 root root 4096 2011-11-14 15:40 .                                                                              
drwxr-xr-x 16 root root 4096 2011-02-25 03:15 ..
drwxr-xr-x  3 mit  mit  4096 2011-11-14 15:40 tree1

mit:/var/tmp$ tar -czf tree1.tar.gz tree1/ --remove-files

# AS YOU CAN SEE THE TREE IS GONE NOW:

mit:/var/tmp$ ls -la
drwxrwxrwt  3 root root 4096 2011-11-14 15:41 .
drwxr-xr-x 16 root root 4096 2011-02-25 03:15 ..
-rw-r--r--  1 mit   mit    159 2011-11-14 15:41 tree1.tar.gz                                                                   


mit:/var/tmp$ tar --version                                                                                             
tar (GNU tar) 1.22                                                                                                           
Copyright © 2009 Free Software Foundation, Inc.

If you want to see it on your machine, paste this into a console at your own risk:

tar --version                                                                                             
cd /var/tmp
mkdir -p tree1/sub1                                                                                          
> tree1/sub1/file1                                                                                        
tar -czf tree1.tar.gz tree1/ --remove-files
ls -la

[Jul 31, 2019] Mounting archives with FUSE and archivemount Linux.com The source for Linux information

Jul 31, 2019 | www.linux.com

Mounting archives with FUSE and archivemount Author: Ben Martin The archivemount FUSE filesystem lets you mount a possibly compressed tarball as a filesystem. Because FUSE exposes its filesystems through the Linux kernel, you can use any application to load and save files directly into such mounted archives. This lets you use your favourite text editor, image viewer, or music player on files that are still inside an archive file. Going one step further, because archivemount also supports write access for some archive formats, you can edit a text file directly from inside an archive too.

I couldn't find any packages that let you easily install archivemount for mainstream distributions. Its distribution includes a single source file and a Makefile.

archivemount depends on libarchive for the heavy lifting. Packages of libarchive exist for Ubuntu Gutsy and openSUSE, but not for Fedora. To compile libarchive you need to have uudecode installed; my version came with the sharutils package on Fedora 8. Once you have uudecode, you can build libarchive using the standard ./configure; make; sudo make install process.

With libarchive installed, either from source or from packages, simply invoke make to build archivemount itself. To install archivemount, copy its binary into /usr/local/bin and set permissions appropriately. A common setup on Linux distributions is to have a fuse group that a user must be a member of in order to mount a FUSE filesystem. It makes sense to have the archivemount command owned by this group as a reminder to users that they require that permission in order to use the tool. Setup is shown below:

# cp -av archivemount /usr/local/bin/
# chown root:fuse /usr/local/bin/archivemount
# chmod 550 /usr/local/bin/archivemount

To show how you can use archivemount I'll first create a trivial compressed tarball, then mount it with archivemount. You can then explore the directory structure of the contents of the tarball with the ls command, and access a file from the archive directly with cat.

$ mkdir -p /tmp/archivetest
$ cd /tmp/archivetest
$ date >datefile1
$ date >datefile2
$ mkdir subA
$ date >subA/foobar
$ cd /tmp
$ tar czvf archivetest.tar.gz archivetest
$ mkdir testing
$ archivemount archivetest.tar.gz testing
$ ls -l testing/archivetest/
-rw-r--r-- 0 root root 29 2008-04-02 21:04 datefile1
-rw-r--r-- 0 root root 29 2008-04-02 21:04 datefile2
drwxr-xr-x 0 root root 0 2008-04-02 21:04 subA
$ cat testing/archivetest/datefile2
Wed Apr 2 21:04:08 EST 2008

Next, I'll create a new file in the archive and read its contents back again. Notice that the first use of the tar command directly on the tarball does not show that the newly created file is in the archive. This is because archivemount delays all write operations until the archive is unmounted. After issuing the fusermount -u command, the new file is added to the archive itself.

$ date > testing/archivetest/new-file1
$ cat testing/archivetest/new-file1
Wed Apr 2 21:12:07 EST 2008
$ tar tzvf archivetest.tar.gz
drwxr-xr-x root/root 0 2008-04-02 21:04 archivetest/
-rw-r--r-- root/root 29 2008-04-02 21:04 archivetest/datefile2
-rw-r--r-- root/root 29 2008-04-02 21:04 archivetest/datefile1
drwxr-xr-x root/root 0 2008-04-02 21:04 archivetest/subA/
-rw-r--r-- root/root 29 2008-04-02 21:04 archivetest/subA/foobar

$ fusermount -u testing
$ tar tzvf archivetest.tar.gz
drwxr-xr-x root/root 0 2008-04-02 21:04 archivetest/
-rw-r--r-- root/root 29 2008-04-02 21:04 archivetest/datefile2
-rw-r--r-- root/root 29 2008-04-02 21:04 archivetest/datefile1
drwxr-xr-x root/root 0 2008-04-02 21:04 archivetest/subA/
-rw-r--r-- root/root 29 2008-04-02 21:04 archivetest/subA/foobar
-rw-rw-r-- ben/ben 29 2008-04-02 21:12 archivetest/new-file1

When you unmount a FUSE filesystem, the unmount command can return before the FUSE filesystem has fully exited. This can lead to a situation where the FUSE filesystem might run into an error in some processing but not have a good place to report that error. The archivemount documentation warns that if there is an error writing changes to an archive during unmount then archivemount cannot be blamed for a loss of data. Things are not quite as grim as they sound though. I mounted a tar.gz archive to which I had only read access and attempted to create new files and write to existing ones. The operations failed immediately with a "Read-only filesystem" message.

In an effort to trick archivemount into losing data, I created an archive in a format that libarchive has only read support for. I created archivetest.zip with the original contents of the archivetest directory and mounted it. Creating a new file worked, and reading it back was fine. As expected from the warnings in the README file for archivemount, I did not see any error message when I unmounted the zip file. However, attempting to view the manifest of the zip file with unzip -l failed. It turns out that my archivemount operations had turned archivetest.zip into a non-compressed POSIX tar archive. Using tar tvf I saw that the manifest of the archivetest.zip tar archive included the original contents as well as the new file that I created. There was also an archivetest.zip.orig which was in zip format and contained the contents of the zip archive as they were when I mounted it with archivemount.

So it turns out to be fairly tricky to get archivemount to lose data. Mounting a read-only archive file didn't work, and modifying an archive format that libarchive could only read from didn't work, though in the last case you will have to contend with the archive format being silently changed. One other situation could potentially trip you up: Because archivemount creates a new archive at unmount time, you should make sure that you will not run out of disk space where the archives are stored.

To test archivemount's performance, I used the bonnie++ filesystem benchmark version 1.03. Because archivemount holds off updating the actual archive until the filesystem is unmounted, you will get good performance when accessing and writing to a mounted archive. As shown below, when comparing the use of archivemount on an archive file stored in /tmp to direct access to a subdirectory in /tmp, seek times for archivemount were halved on average relative to direct access, and you can expect about 70% of the performance of direct access when using archivemount for rewriting. The bonnie++ documentation explains that for the rewrite test, a chunk of data is a read, dirtied, and written back to a file, and this requires a seek, so archivemount's slower seek performance likely causes this benchmark to be slower as well.

$ cd /tmp
$ mkdir empty
$ ls -d empty | cpio -ov > empty.cpio
$ mkdir empty-mounted
$ archivemount empty.cpio empty-mounted
$ mkdir bonnie-test
$ /usr/sbin/bonnie++ -d /tmp/bonnie-test
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
v8tsrv 2G 14424 25 14726 4 13930 6 28502 49 52581 17 8322 123

$ /usr/sbin/bonnie++ -d /tmp/empty-mounted
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
v8tsrv 2G 12016 19 12918 7 9766 6 27543 40 52937 6 4457 24

When you want to pluck a few files out of a tarball, archivemount might be just the command for the job. Instead of expanding the archive into /tmp just to load a few files into Emacs, just mount the archive and run Emacs directly on the archivemount filesystem. As the bonnie++ benchmarks above show, an application using an archivemount filesystem does not necessarily suffer a performance hit.

[Jul 31, 2019] Advanced GNU tar Operations

Jul 31, 2019 | www.gnu.org


In the last chapter, you learned about the first three operations to tar . This chapter presents the remaining five operations to tar : `--append' , `--update' , `--concatenate' , `--delete' , and `--compare' .

You are not likely to use these operations as frequently as those covered in the last chapter; however, since they perform specialized functions, they are quite useful when you do need to use them. We will give examples using the same directory and files that you created in the last chapter. As you may recall, the directory is called `practice' , the files are `jazz' , `blues' , `folk' , and the two archive files you created are `collection.tar' and `music.tar' .

We will also use the archive files `afiles.tar' and `bfiles.tar' . The archive `afiles.tar' contains the members `apple' , `angst' , and `aspic' ; `bfiles.tar' contains the members `./birds' , `baboon' , and `./box' .

Unless we state otherwise, all practicing you do and examples you follow in this chapter will take place in the `practice' directory that you created in the previous chapter; see Preparing a Practice Directory for Examples . (Below in this section, we will remind you of the state of the examples where the last chapter left them.)

The five operations that we will cover in this chapter are:

`--append'
`-r'
Add new entries to an archive that already exists.
`--update'
`-u'
Add more recent copies of archive members to the end of an archive, if they exist.
`--concatenate'
`--catenate'
`-A'
Add one or more pre-existing archives to the end of another archive.
`--delete'
Delete items from an archive (does not work on tapes).
`--compare'
`--diff'
`-d'
Compare archive members to their counterparts in the file system.

4.2.2 How to Add Files to Existing Archives: `--append'

If you want to add files to an existing archive, you don't need to create a new archive; you can use `--append' ( `-r' ). The archive must already exist in order to use `--append' . (A related operation is the `--update' operation; you can use this to add newer versions of archive members to an existing archive. To learn how to do this with `--update' , see section Updating an Archive .)

If you use `--append' to add a file that has the same name as an archive member to an archive containing that archive member, then the old member is not deleted. What does happen, however, is somewhat complex. tar allows you to have infinite number of files with the same name. Some operations treat these same-named members no differently than any other set of archive members: for example, if you view an archive with `--list' ( `-t' ), you will see all of those members listed, with their data modification times, owners, etc.

Other operations don't deal with these members as perfectly as you might prefer; if you were to use `--extract' to extract the archive, only the most recently added copy of a member with the same name as other members would end up in the working directory. This is because `--extract' extracts an archive in the order the members appeared in the archive; the most recently archived members will be extracted last. Additionally, an extracted member will replace a file of the same name which existed in the directory already, and tar will not prompt you about this (10) . Thus, only the most recently archived member will end up being extracted, as it will replace the one extracted before it, and so on.

There exists a special option that allows you to get around this behavior and extract (or list) only a particular copy of the file. This is `--occurrence' option. If you run tar with this option, it will extract only the first copy of the file. You may also give this option an argument specifying the number of copy to be extracted. Thus, for example if the archive `archive.tar' contained three copies of file `myfile' , then the command

tar --extract --file archive.tar --occurrence=2 myfile

would extract only the second copy. See section --occurrence , for the description of `--occurrence' option.


If you want to replace an archive member, use `--delete' to delete the member you want to remove from the archive, and then use `--append' to add the member you want to be in the archive. Note that you can not change the order of the archive; the most recently added member will still appear last. In this sense, you cannot truly "replace" one member with another. (Replacing one member with another will not work on certain types of media, such as tapes; see Removing Archive Members Using `--delete' and Tapes and Other Archive Media , for more information.)
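
A minimal sketch of that replace sequence, using the `collection.tar' example from this chapter (it assumes the archive is a regular file on disk, since `--delete' does not work on tapes):

$ tar --delete --file=collection.tar blues
$ tar --append --file=collection.tar blues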

4.2.2.1 Appending Files to an Archive

The simplest way to add a file to an already existing archive is the `--append' ( `-r' ) operation, which writes specified files into the archive whether or not they are already among the archived files.

When you use `--append' , you must specify file name arguments, as there is no default. If you specify a file that already exists in the archive, another copy of the file will be added to the end of the archive. As with other operations, the member names of the newly added files will be exactly the same as their names given on the command line. The `--verbose' ( `-v' ) option will print out the names of the files as they are written into the archive.

`--append' cannot be performed on some tape drives, unfortunately, due to deficiencies in the formats those tape drives use. The archive must be a valid tar archive, or else the results of using this operation will be unpredictable. See section Tapes and Other Archive Media .

To demonstrate using `--append' to add a file to an archive, create a file called `rock' in the `practice' directory. Make sure you are in the `practice' directory. Then, run the following tar command to add `rock' to `collection.tar' :

$ tar --append --file=collection.tar rock

If you now use the `--list' ( `-t' ) operation, you will see that `rock' has been added to the archive:

$ tar --list --file=collection.tar
-rw-r--r-- me/user          28 1996-10-18 16:31 jazz
-rw-r--r-- me/user          21 1996-09-23 16:44 blues
-rw-r--r-- me/user          20 1996-09-23 16:44 folk
-rw-r--r-- me/user          20 1996-09-23 16:44 rock
4.2.2.2 Multiple Members with the Same Name

You can use `--append' ( `-r' ) to add copies of files which have been updated since the archive was created. (However, we do not recommend doing this since there is another tar option called `--update' ; See section Updating an Archive , for more information. We describe this use of `--append' here for the sake of completeness.) When you extract the archive, the older version will be effectively lost. This works because files are extracted from an archive in the order in which they were archived. Thus, when the archive is extracted, a file archived later in time will replace a file of the same name which was archived earlier, even though the older version of the file will remain in the archive unless you delete all versions of the file.

Suppose you change the file `blues' and then append the changed version to `collection.tar' . As you saw above, the original `blues' is in the archive `collection.tar' . If you change the file and append the new version of the file to the archive, there will be two copies in the archive. When you extract the archive, the older version of the file will be extracted first, and then replaced by the newer version when it is extracted.

You can append the new, changed copy of the file `blues' to the archive in this way:

$ tar --append --verbose --file=collection.tar blues
blues

Because you specified the `--verbose' option, tar has printed the name of the file being appended as it was acted on. Now list the contents of the archive:

$ tar --list --verbose --file=collection.tar
-rw-r--r-- me/user          28 1996-10-18 16:31 jazz
-rw-r--r-- me/user          21 1996-09-23 16:44 blues
-rw-r--r-- me/user          20 1996-09-23 16:44 folk
-rw-r--r-- me/user          20 1996-09-23 16:44 rock
-rw-r--r-- me/user          58 1996-10-24 18:30 blues

The newest version of `blues' is now at the end of the archive (note the different creation dates and file sizes). If you extract the archive, the older version of the file `blues' will be replaced by the newer version. You can confirm this by extracting the archive and running `ls' on the directory.

If you wish to extract the first occurrence of the file `blues' from the archive, use the `--occurrence' option, as shown in the following example:

$ tar --extract -vv --occurrence --file=collection.tar blues
-rw-r--r-- me/user          21 1996-09-23 16:44 blues

See section Changing How tar Writes Files , for more information on `--extract' , and see --occurrence , for a description of the `--occurrence' option.

4.2.3 Updating an Archive

In the previous section, you learned how to use `--append' to add a file to an existing archive. A related operation is `--update' ( `-u' ). The `--update' operation updates a tar archive by comparing the date of the specified archive members against the date of the file with the same name. If the file has been modified more recently than the archive member, then the newer version of the file is added to the archive (as with `--append' ).

Unfortunately, you cannot use `--update' with magnetic tape drives. The operation will fail.

Both `--update' and `--append' work by adding to the end of the archive. When you extract a file from the archive, only the version stored last will wind up in the file system, unless you use the `--backup' option. See section Multiple Members with the Same Name , for a detailed discussion.
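
As a hedged illustration (the exact backup suffix depends on the backup control method chosen), extracting with `--backup=numbered' keeps the previously extracted copy instead of silently overwriting it:

$ tar --extract --backup=numbered --file=collection.tar blues
$ ls blues*
blues  blues.~1~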

4.2.3.1 How to Update an Archive Using `--update'

You must use file name arguments with the `--update' ( `-u' ) operation. If you don't specify any files, tar won't act on any files and won't tell you that it didn't do anything (which may end up confusing you).

To see the `--update' option at work, create a new file, `classical' , in your practice directory, and add some extra text to the file `blues' , using any text editor. Then invoke tar with the `--update' operation and the `--verbose' ( `-v' ) option specified, using the names of all the files in the `practice' directory as file name arguments:

$ tar --update -v -f collection.tar blues folk rock classical
blues
classical
$

Because we have specified verbose mode, tar prints out the names of the files it is working on, which in this case are the names of the files that needed to be updated. If you run `tar --list' and look at the archive, you will see `blues' and `classical' at its end. There will be a total of two versions of the member `blues' ; the one at the end will be newer and larger, since you added text before updating it.

The reason tar does not overwrite the older file when updating it is that writing to the middle of a section of tape is a difficult process. Tapes are not designed to go backward. See section Tapes and Other Archive Media , for more information about tapes.

`--update' ( `-u' ) is not suitable for performing backups for two reasons: it does not change directory content entries, and it lengthens the archive every time it is used. The GNU tar options intended specifically for backups are more efficient. If you need to run backups, please consult Performing Backups and Restoring Files .


4.2.4 Combining Archives with `--concatenate'

Sometimes it may be convenient to add a second archive onto the end of an archive rather than adding individual files to the archive. To add one or more archives to the end of another archive, you should use the `--concatenate' ( `--catenate' , `-A' ) operation.

To use `--concatenate' , give the name of the first archive with the `--file' option and name the rest of the archives to be concatenated on the command line. The members, and their member names, will be copied verbatim from those archives to the first one (11) . The new, concatenated archive will be called by the same name as the one given with the `--file' option. As usual, if you omit `--file' , tar will use the value of the environment variable TAPE , or, if this has not been set, the default archive name.

To demonstrate how `--concatenate' works, create two small archives called `bluesrock.tar' and `folkjazz.tar' , using the relevant files from `practice' :

$ tar -cvf bluesrock.tar blues rock
blues
rock
$ tar -cvf folkjazz.tar folk jazz
folk
jazz

If you like, you can run `tar --list' to make sure the archives contain what they are supposed to:

$ tar -tvf bluesrock.tar
-rw-r--r-- melissa/user    105 1997-01-21 19:42 blues
-rw-r--r-- melissa/user     33 1997-01-20 15:34 rock
$ tar -tvf folkjazz.tar
-rw-r--r-- melissa/user     20 1996-09-23 16:44 folk
-rw-r--r-- melissa/user     65 1997-01-30 14:15 jazz

We can concatenate these two archives with tar :

$ cd ..
$ tar --concatenate --file=bluesrock.tar folkjazz.tar

If you now list the contents of `bluesrock.tar' , you will see that it now also contains the archive members of `folkjazz.tar' :

$ tar --list --file=bluesrock.tar
blues
rock
folk
jazz

When you use `--concatenate' , the source and target archives must already exist and must have been created using compatible format parameters. Notice that tar does not check whether the archives it concatenates have compatible formats; it does not even check whether the files are really tar archives.

Like `--append' ( `-r' ), this operation cannot be performed on some tape drives, due to deficiencies in the formats those tape drives use.

It may seem more intuitive to you to want or try to use cat to concatenate two archives instead of using the `--concatenate' operation; after all, cat is the utility for combining files.

However, tar archives incorporate an end-of-file marker which must be removed if the concatenated archives are to be read properly as one archive. `--concatenate' removes the end-of-archive marker from the target archive before each new archive is appended. If you use cat to combine the archives, the result will not be a valid tar format archive. If you need to retrieve files from an archive that was added to using the cat utility, use the `--ignore-zeros' ( `-i' ) option. See section Ignoring Blocks of Zeros , for further information on dealing with archives improperly combined using the cat shell utility.
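
A minimal sketch of that recovery, assuming `combined.tar' (a hypothetical name) was produced by running cat on the two example archives:

$ cat bluesrock.tar folkjazz.tar > combined.tar
$ tar --list --ignore-zeros --file=combined.tar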


4.2.5 Removing Archive Members Using `--delete'

You can remove members from an archive by using the `--delete' option. Specify the name of the archive with `--file' ( `-f' ) and then specify the names of the members to be deleted; if you list no member names, nothing will be deleted. The `--verbose' option will cause tar to print the names of the members as they are deleted. As with `--extract' , you must give the exact member names when using `tar --delete' . `--delete' will remove all versions of the named file from the archive. The `--delete' operation can run very slowly.

Unlike other operations, `--delete' has no short form.

This operation will rewrite the archive. You can only use `--delete' on an archive if the archive device allows you to write to any point on the media, such as a disk; because of this, it does not work on magnetic tapes. Do not try to delete an archive member from a magnetic tape; the action will not succeed, and you will be likely to scramble the archive and damage your tape. There is no safe way (except by completely re-writing the archive) to delete files from most kinds of magnetic tape. See section Tapes and Other Archive Media .

To delete all versions of the file `blues' from the archive `collection.tar' in the `practice' directory, make sure you are in that directory, and then run:

$ tar --list --file=collection.tar
blues
folk
jazz
rock
$ tar --delete --file=collection.tar blues
$ tar --list --file=collection.tar
folk
jazz
rock

The `--delete' option has been reported to work properly when tar acts as a filter from stdin to stdout .

4.2.6 Comparing Archive Members with the File System

The `--compare' ( `-d' ), or `--diff' operation compares specified archive members against files with the same names, and then reports differences in file size, mode, owner, modification date and contents. You should only specify archive member names, not file names. If you do not name any members, then tar will compare the entire archive. If a file is represented in the archive but does not exist in the file system, tar reports a difference.

You have to specify the record size of the archive when modifying an archive with a non-default record size.
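
For example (a hedged sketch; the device and blocking factor are hypothetical), an archive written with a blocking factor of 128 should be appended to or compared with the same factor:

$ tar --append --blocking-factor=128 --file=/dev/st0 newfile
$ tar --compare --blocking-factor=128 --file=/dev/st0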

tar ignores files in the file system that do not have corresponding members in the archive.

The following example compares the archive members `rock' , `blues' and `funk' in the archive `bluesrock.tar' with files of the same name in the file system. (Note that there is no file, `funk' ; tar will report an error message.)

$ tar --compare --file=bluesrock.tar rock blues funk
rock
blues
tar: funk not found in archive

The spirit behind the `--compare' ( `--diff' , `-d' ) option is to check whether the archive represents the current state of files on disk, more than validating the integrity of the archive media. For this latter goal, see Verifying Data as It is Stored .

[Jul 30, 2019] The difference between tar and tar.gz archives

With a tar.gz archive, to extract a file the archiver first creates an intermediate tarball x.tar from x.tar.gz by uncompressing the whole archive, and then unpacks the requested files from this intermediate tarball. If the tar.gz archive is large, unpacking can take several hours or even days.
Jul 30, 2019 | askubuntu.com
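
A hedged illustration of the difference described above (file names are hypothetical): extracting a single member from a plain tar only has to seek through the archive, while a .tar.gz must be decompressed from the beginning:

$ tar -xf big.tar path/to/one/file          # reads until the member is found
$ tar -xzf big.tar.gz path/to/one/file      # gunzips the whole stream to reach the member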

[Jul 29, 2019] A Guide to Kill, Pkill and Killall Commands to Terminate a Process in Linux

Jul 26, 2019 | www.tecmint.com
... ... ...

How about killing a process using the process name?

You must be aware of the process name before killing it; entering a wrong process name may screw you.

# pkill mysqld
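
One way to reduce that risk (a minimal sketch) is to check which processes match the name with pgrep before running pkill:

# pgrep -l mysqld
# pkill mysqld
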
Kill more than one process at a time.
# kill PID1 PID2 PID3

or

# kill -9 PID1 PID2 PID3

or

# kill -SIGKILL PID1 PID2 PID3
What if a process has too many instances and a number of child processes? For this we have the command ' killall '. This is the only command of this family which takes a process name as its argument in place of a process number.

Syntax:

# killall [signal or option] Process Name

To kill all mysql instances along with child processes, use the command as follow.

# killall mysqld

You can always verify whether the process is still running using any of the commands below.

# service mysql status
# pgrep mysql
# ps aux | grep mysql

[Jul 29, 2019] Locate Command in Linux

Jul 25, 2019 | linuxize.com

... ... ...

The locate command also accepts patterns containing globbing characters such as the wildcard character * . When the pattern contains no globbing characters the command searches for *PATTERN* , that's why in the previous example all files containing the search pattern in their names were displayed.

The wildcard is a symbol used to represent zero, one or more characters. For example, to search for all .md files on the system you would use:

locate *.md
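
Note that, depending on your shell and the contents of the current directory, an unquoted * may be expanded by the shell before locate ever sees it; quoting the pattern avoids this (a minor precaution, not part of the quoted article):

locate '*.md'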

To limit the search results use the -n option followed by the number of results you want to be displayed. For example, the following command will search for all .py files and display only 10 results:

locate -n 10 *.py

By default, locate performs case-sensitive searches. The -i ( --ignore-case ) option tells locate to ignore case and run a case-insensitive search.

locate -i readme.md
/home/linuxize/p1/readme.md
/home/linuxize/p2/README.md
/home/linuxize/p3/ReadMe.md

To display the count of all matching entries, use the -c ( --count ) option. The following command would return the number of all files containing .bashrc in their names:

locate -c .bashrc
6

By default, locate doesn't check whether the found files still exist on the file system. If you deleted a file after the latest database update and the file matches the search pattern, it will still be included in the search results.

To display only the names of the files that exist at the time locate is run use the -e ( --existing ) option. For example, the following would return only the existing .json files:

locate -e *.json

If you need to run a more complex search you can use the -r ( --regexp ) option which allows you to search using a basic regexp instead of patterns. This option can be specified multiple times.
For example, to search for all .mp4 and .avi files on your system and ignore case you would run:

locate --regex -i "(\.mp4|\.avi)"

[Jul 29, 2019] How do I tar a directory of files and folders without including the directory itself - Stack Overflow

Jan 05, 2017 | stackoverflow.com

How do I tar a directory of files and folders without including the directory itself?


tvanfosson ,Jan 5, 2017 at 12:29

I typically do:
tar -czvf my_directory.tar.gz my_directory

What if I just want to include everything (including any hidden system files) in my_directory, but not the directory itself? I don't want:

my_directory
   --- my_file
   --- my_file
   --- my_file

I want:

my_file
my_file
my_file

PanCrit ,Feb 19 at 13:04

cd my_directory/ && tar -zcvf ../my_dir.tgz . && cd -

should do the job in one line. It works well for hidden files as well. "*" doesn't expand to hidden files by pathname expansion, at least in bash. Below is my experiment:

$ mkdir my_directory
$ touch my_directory/file1
$ touch my_directory/file2
$ touch my_directory/.hiddenfile1
$ touch my_directory/.hiddenfile2
$ cd my_directory/ && tar -zcvf ../my_dir.tgz . && cd ..
./
./file1
./file2
./.hiddenfile1
./.hiddenfile2
$ tar ztf my_dir.tgz
./
./file1
./file2
./.hiddenfile1
./.hiddenfile2

JCotton ,Mar 3, 2015 at 2:46

Use the -C switch of tar:
tar -czvf my_directory.tar.gz -C my_directory .

The -C my_directory tells tar to change the current directory to my_directory , and then . means "add the entire current directory" (including hidden files and sub-directories).

Make sure you do -C my_directory before you do . or else you'll get the files in the current directory.

Digger ,Mar 23 at 6:52

You can also create archive as usual and extract it with:
tar --strip-components 1 -xvf my_directory.tar.gz

jwg ,Mar 8, 2017 at 12:56

Have a look at --transform / --xform , it gives you the opportunity to massage the file name as the file is added to the archive:
% mkdir my_directory
% touch my_directory/file1
% touch my_directory/file2
% touch my_directory/.hiddenfile1
% touch my_directory/.hiddenfile2
% tar -v -c -f my_dir.tgz --xform='s,my_directory/,,' $(find my_directory -type f)
my_directory/file2
my_directory/.hiddenfile1
my_directory/.hiddenfile2
my_directory/file1
% tar -t -f my_dir.tgz 
file2
.hiddenfile1
.hiddenfile2
file1

The transform expression is similar to that of sed , and we can use separators other than / (a comma in the above example).
https://www.gnu.org/software/tar/manual/html_section/tar_52.html

Alex ,Mar 31, 2017 at 15:40

TL;DR
find /my/dir/ -printf "%P\n" | tar -czf mydir.tgz --no-recursion -C /my/dir/ -T -

With some conditions (archive only files, dirs and symlinks):

find /my/dir/ -printf "%P\n" -type f -o -type l -o -type d | tar -czf mydir.tgz --no-recursion -C /my/dir/ -T -
Explanation

The below unfortunately includes a parent directory ./ in the archive:

tar -czf mydir.tgz -C /my/dir .

You can move all the files out of that directory by using the --transform configuration option, but that doesn't get rid of the . directory itself. It becomes increasingly difficult to tame the command.

You could use $(find ...) to add a file list to the command (like in magnus' answer ), but that potentially causes a "file list too long" error. The best way is to combine it with tar's -T option, like this:

find /my/dir/ -printf "%P\n" -type f -o -type l -o -type d | tar -czf mydir.tgz --no-recursion -C /my/dir/ -T -

Basically what it does is list all files ( -type f ), links ( -type l ) and subdirectories ( -type d ) under your directory, make all filenames relative using -printf "%P\n" , and then pass that to the tar command (it takes filenames from STDIN using -T - ). The -C option is needed so tar knows where the files with relative names are located. The --no-recursion flag is so that tar doesn't recurse into folders it is told to archive (causing duplicate files).

If you need to do something special with filenames (filtering, following symlinks etc), the find command is pretty powerful, and you can test it by just removing the tar part of the above command:

$ find /my/dir/ -printf "%P\n" -type f -o -type l -o -type d
> textfile.txt
> documentation.pdf
> subfolder2
> subfolder
> subfolder/.gitignore

For example, if you want to exclude PDF files, add ! -name '*.pdf' :

$ find /my/dir/ -printf "%P\n" -type f ! -name '*.pdf' -o -type l -o -type d
> textfile.txt
> subfolder2
> subfolder
> subfolder/.gitignore
Non-GNU find

The command uses printf (available in GNU find ) which tells find to print its results with relative paths. However, if you don't have GNU find , this works to make the paths relative (removes parents with sed ):

find /my/dir/ -type f -o -type l -o -type d | sed s,^/my/dir/,, | tar -czf mydir.tgz --no-recursion -C /my/dir/ -T -

BrainStone ,Dec 21, 2016 at 22:14

This Answer should work in most situations. Notice however how the filenames are stored in the tar file as, for example, ./file1 rather than just file1 . I found that this caused problems when using this method to manipulate tarballs used as package files in BuildRoot .

One solution is to use some Bash globs to list all files except for .. like this:

tar -C my_dir -zcvf my_dir.tar.gz .[^.]* ..?* *

This is a trick I learnt from this answer .

Now tar will return an error if there are no files matching ..?* or .[^.]* , but it will still work. If the error is a problem (you are checking for success in a script), this works:

shopt -s nullglob
tar -C my_dir -zcvf my_dir.tar.gz .[^.]* ..?* *
shopt -u nullglob

Though now we are messing with shell options, we might decide that it is neater to have * match hidden files:

shopt -s dotglob
tar -C my_dir -zcvf my_dir.tar.gz *
shopt -u dotglob

This might not work where your shell globs * in the current directory, so alternatively, use:

shopt -s dotglob
cd my_dir
tar -zcvf ../my_dir.tar.gz *
cd ..
shopt -u dotglob

PanCrit ,Jun 14, 2010 at 6:47

cd my_directory
tar zcvf ../my_directory.tar.gz *

anion ,May 11, 2018 at 14:10

If it's a Unix/Linux system, and you care about hidden files (which will be missed by *), you need to do:
cd my_directory
tar zcvf ../my_directory.tar.gz * .??*

I don't know what hidden files look like under Windows.

gpz500 ,Feb 27, 2014 at 10:46

I would propose the following Bash function (first argument is the path to the dir, second argument is the basename of resulting archive):
function tar_dir_contents ()
{
    local DIRPATH="$1"
    local TARARCH="$2.tar.gz"
    local ORGIFS="$IFS"
    IFS=$'\n'
    tar -C "$DIRPATH" -czf "$TARARCH" $( ls -a "$DIRPATH" | grep -v '\(^\.$\)\|\(^\.\.$\)' )
    IFS="$ORGIFS"
}

You can run it in this way:

$ tar_dir_contents /path/to/some/dir my_archive

and it will generate the archive my_archive.tar.gz within current directory. It works with hidden (.*) elements and with elements with spaces in their filename.

med ,Feb 9, 2017 at 17:19

cd my_directory && tar -czvf ../my_directory.tar.gz $(ls -A) && cd ..

This one worked for me, and it includes all hidden files without putting all files in a root directory named "." like in tomoe's answer :

Breno Salgado ,Apr 16, 2016 at 15:42

Use pax.

Pax is a deprecated package but does the job perfectly and in a simple fashion.

pax -w > mydir.tar mydir

asynts ,Jun 26 at 16:40

Simplest way I found:

cd my_dir && tar -czvf ../my_dir.tar.gz *

marcingo ,Aug 23, 2016 at 18:04

# tar all files within and deeper in a given directory
# with no prefixes ( neither <directory>/ nor ./ )
# parameters: <source directory> <target archive file>
function tar_all_in_dir {
    { cd "$1" && find -type f -print0; } \
    | cut --zero-terminated --characters=3- \
    | tar --create --file="$2" --directory="$1" --null --files-from=-
}

Safely handles filenames with spaces or other unusual characters. You can optionally add a -name '*.sql' or similar filter to the find command to limit the files included.
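
For instance, a hypothetical variant of the pipeline inside the function that archives only .sql files ("$1" and "$2" are the source directory and the target archive file, as in the function above):

{ cd "$1" && find -type f -name '*.sql' -print0; } \
| cut --zero-terminated --characters=3- \
| tar --create --file="$2" --directory="$1" --null --files-from=-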

user1456599 ,Feb 13, 2013 at 21:37

 tar -cvzf  tarlearn.tar.gz --remove-files mytemp/*

If the folder is mytemp, then applying the above will archive and remove all the files in the folder but leave the folder itself alone.

 tar -cvzf  tarlearn.tar.gz --remove-files --exclude='*12_2008*' --no-recursion mytemp/*

You can give exclude patterns and also specify not to descend into subfolders.

Aaron Digulla ,Jun 2, 2009 at 15:33

tar -C my_dir -zcvf my_dir.tar.gz `ls my_dir`

[Jul 28, 2019] command line - How do I extract a specific file from a tar archive - Ask Ubuntu

Jul 28, 2019 | askubuntu.com

CMCDragonkai, Jun 3, 2016 at 13:04

1. Using the Command-line tar

Yes, just give the full stored path of the file after the tarball name.

Example: suppose you want file etc/apt/sources.list from etc.tar :

tar -xf etc.tar etc/apt/sources.list

This will extract sources.list and create the directories etc/apt under the current directory.

2. Extract it with the Archive Manager

Open the tar in Archive Manager from Nautilus, go down into the folder hierarchy to find the file you need, and extract it.

3. Using Nautilus/Archive-Mounter

Right-click the tar in Nautilus, and select Open with ArchiveMounter.

The tar will now appear similar to a removable drive on the left, and you can explore/navigate it like a normal drive and drag/copy/paste any file(s) you need to any destination.

[Jul 28, 2019] iso - midnight commander rules for accessing archives through VFS - Unix Linux Stack Exchange

Jul 28, 2019 | unix.stackexchange.com

,

Midnight Commander uses a virtual filesystem ( VFS ) for displaying files, such as the contents of a .tar.gz archive or of an .iso image. This is configured in mc.ext with rules such as this one ( Open is Enter , View is F3 ):
regex/\.([iI][sS][oO])$
    Open=%cd %p/iso9660://
    View=%view{ascii} isoinfo -d -i %f

When I press Enter on an .iso file, mc will open the .iso and I can browse individual files. This is very useful.

Now my question: I have also files which are disk images, i.e. created with pv /dev/sda1 > sda1.img

I would like mc to "browse" the files inside these images in the same fashion as .iso .

Is this possible? What would such a rule look like?

[Jul 28, 2019] Find files in tar archives and extract specific files from tar archives - Raymii.org

Jul 28, 2019 | raymii.org

Find files in tar archives and extract specific files from tar archives

Published: 17-10-2018 | Author: Remy van Elst | Text only version of this article


Table of Contents
This is a small tip on how to find specific files in tar archives and how to extract those specific files from said archive. Useful when you have a 2 GB tar file with millions of small files and you need just one.

Finding files in tar archives

Using the command line flags -ft (long flags are --file --list ) we can list the contents of an archive. Using grep you can search that list for the correct file. Example:

tar -ft large_file.tar.gz | grep "the-filename-you-want"

Output:

"full-path/to-the-file/in-the-archive/the-filename-you-want"

With a modern tar on modern linux you can omit the flags for compressed archives and just pass a .tar.gz or .tar.bz2 file directly.

Extracting one file from a tar archive

When extracting a tar archive, you can specify the filename of the file you want (full path, use the command above to find it), as the second command line option. Example:

tar -xf large_file.tar.gz "full-path/to-the-file/in-the-archive/the-filename-you-want"

It might just take a long time, at least for my 2 GB file it took a while.

An alternative is to use "mc" (midnight commander), which can open archive files just a a local folder.

Tags: archive , bash , grep , shell , snippets , tar

[Jul 28, 2019] How to Use Midnight Commander, a Visual File Manager

Jul 28, 2019 | www.linode.com
  1. Another tool that can save you time is Midnight Commander's user menu. Go back to /tmp/test where you created nine files. Press F2 and bring up the user menu. Select Compress the current subdirectory (tar.gz) . After you choose the name for the archive, this will be created in /tmp (one level up from the directory being compressed). If you highlight the .tar.gz file and press ENTER you'll notice it will open like a regular directory. This allows you to browse archives and extract files by simply copying them ( F5 ) to the opposite panel's working directory.

    Midnight Commander User Menu

  2. To find out the size of a directory (actually, the size of all the files it contains), highlight the directory and then press CTRL+SPACE .
  3. To search, go up in your directory tree until you reach the top level, / , called root directory. Now press F9 , then c , followed by f . After the Find File dialog opens, type *.gz . This will find any accessible gzip archive on the system. In the results dialog, press l (L) for Panelize . All the results will be fed to one of your panels so you can easily browse, copy, view and so on. If you enter a directory from that list, you lose the list of found files, but you can easily return to it with F9 , l (L) then z (to select Panelize from the Left menu).

    Midnight Commander - Find File Dialog

  4. Managing files is not always done locally. Midnight Commander also supports accessing remote filesystems through SSH's Secure File Transfer Protocol, SFTP . This way you can easily transfer files between servers.

    Press F9 , followed by l (L), then select the SFTP link menu entry. In the dialog box titled SFTP to machine enter sftp://example@203.0.113.1 . Replace example with the username you have created on the remote machine and 203.0.113.1 with the IP address of your server. This will work only if the server at the other end accepts password logins. If you're logging in with SSH keys, then you'll first need to create and/or edit ~/.ssh/config . It could look something like this:

    ~/.ssh/config
    
    Host sftp_server
        HostName 203.0.113.1
        Port 22
        User your_user
        IdentityFile ~/.ssh/id_rsa
    

    You can choose whatever you want as the Host value, it's only an identifier. IdentityFile is the path to your private SSH key.

    After the config file is setup, access your SFTP server by typing the identifier value you set after Host in the SFTP to machine dialog. In this example, enter sftp_server .

[Jul 28, 2019] Bartosz Kosarzycki's blog Midnight Commander how to compress a file-directory; Make a tar archive with midnight commander

Jul 28, 2019 | kosiara87.blogspot.com

Midnight Commander how to compress a file/directory; Make a tar archive with midnight commander

To compress a file in Midnight Commander (e.g. to make a tar.gz archive) navigate to the directory you want to pack and press 'F2'. This will bring up the 'User menu'. Choose the option 'Compress the current subdirectory'. This will compress the WHOLE directory you're currently in - not the highlighted directory.

[Jul 26, 2019] Sort Command in Linux [10 Useful Examples] by Christopher Murray

Notable quotes:
"... The sort command option "k" specifies a field, not a column. ..."
"... In gnu sort, the default field separator is 'blank to non-blank transition' which is a good default to separate columns. ..."
"... What is probably missing in that article is a short warning about the effect of the current locale. It is a common mistake to assume that the default behavior is to sort according ASCII texts according to the ASCII codes. ..."
Jul 12, 2019 | linuxhandbook.com
5. Sort by months [option -M]

Sort also has built-in functionality to arrange by month. It recognizes several formats based on locale-specific information. I tried to demonstrate some unique tests to show that it will arrange by date-day, but not year. Month abbreviations display before full names.

Here is the sample text file in this example:

March
Jan
Feb
February
April
August
July
June
November
October
December
May
September
1
4
3
6
01/05/19
01/10/19
02/06/18

Let's sort it by months using the -M option:

sort filename.txt -M

Here's the output you'll see:

01/05/19
01/10/19
02/06/18
1
3
4
6
Jan
Feb
February
March
April
May
June
July
August
September
October
November
December

... ... ...

7. Sort Specific Column [option -k]

If you have a table in your file, you can use the -k option to specify which column to sort. I added some arbitrary numbers as a third column and will display the output sorted by each column. I've included several examples to show the variety of output possible. Options are added following the column number.

1. MX Linux 100
2. Manjaro 400
3. Mint 300
4. elementary 500
5. Ubuntu 200

sort filename.txt -k 2

This will sort the text on the second column in alphabetical order:

4. elementary 500
2. Manjaro 400
3. Mint 300
1. MX Linux 100
5. Ubuntu 200
sort filename.txt -k 3n

This will sort the text by the numerals on the third column.

1. MX Linux 100
5. Ubuntu 200
3. Mint 300
2. Manjaro 400
4. elementary 500
sort filename.txt -k 3nr

Same as the above command just that the sort order has been reversed.

4. elementary 500
2. Manjaro 400
3. Mint 300
5. Ubuntu 200
1. MX Linux 100
8. Sort and remove duplicates [option -u]

If you have a file with potential duplicates, the -u option will make your life much easier. Remember that sort will not make changes to your original data file. I chose to redirect the deduplicated output to a new file. Below you'll see the input and then the contents of the output file after the command is run.

1. MX Linux
2. Manjaro
3. Mint
4. elementary
5. Ubuntu
1. MX Linux
2. Manjaro
3. Mint
4. elementary
5. Ubuntu
1. MX Linux
2. Manjaro
3. Mint
4. elementary
5. Ubuntu

sort filename.txt -u > filename_duplicates.txt

Here's the output files sorted and without duplicates.

1. MX Linux 
2. Manjaro 
3. Mint 
4. elementary 
5. Ubuntu
9. Ignore case while sorting [option -f]

Many modern distros running sort will implement ignore case by default. If yours does not, adding the -f option will produce the expected results.

sort filename.txt -f

Here's the output where cases are ignored by the sort command:

alpha
alPHa
Alpha
ALpha
beta
Beta
BEta
BETA
10. Sort by human numeric values [option -h]

This option allows the comparison of alphanumeric values like 1k (i.e. 1000).

sort filename.txt -h

Here's the sorted output:

10.0
100
1000.0
1k

I hope this tutorial helped you get the basic usage of the sort command in Linux. If you have some cool sort trick, why not share it with us in the comment section?

John
The sort command option "k" specifies a field, not a column. In your example all five lines have the same character in column 2 – a "."

Stephane Chauveau

In gnu sort, the default field separator is 'blank to non-blank transition' which is a good default to separate columns. In his example, the "." is part of the first column so it should work fine. If –debug is used then the range of characters used as keys is dumped.

What is probably missing in that article is a short warning about the effect of the current locale. It is a common mistake to assume that the default behavior is to sort ASCII text according to the ASCII codes. For example, the command echo `printf ".\nx\n0\nX\n@\në" | sort` produces ". 0 @ X x ë" with LC_ALL=C but ". @ 0 ë x X" with LC_ALL=en_US.UTF-8.
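
A quick way to reproduce the locale effect described above (a minimal sketch; the exact ordering depends on your locale data):

$ printf '.\nx\n0\nX\n@\në\n' | LC_ALL=C sort
$ printf '.\nx\n0\nX\n@\në\n' | LC_ALL=en_US.UTF-8 sort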

[Jul 26, 2019] How To Check Swap Usage Size and Utilization in Linux by Vivek Gite

Jul 26, 2019 | www.cyberciti.biz

The procedure to check swap space usage and size in Linux is as follows:

  1. Open a terminal application.
  2. To see swap size in Linux, type the command: swapon -s .
  3. You can also refer to the /proc/swaps file to see swap areas in use on Linux.
  4. Type free -m to see both your ram and your swap space usage in Linux.
  5. Finally, one can use the top or htop command to look for swap space Utilization on Linux too.
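
As an aside (not part of the quoted article, and availability depends on your util-linux version), on recent distributions swapon also accepts --show, which prints a similar summary in columnar form:

# swapon --show
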
How to Check Swap Space in Linux using /proc/swaps file

Type the following cat command to see total and used swap size:
# cat /proc/swaps
Sample outputs:

Filename                           Type            Size    Used    Priority
/dev/sda3                               partition       6291448 65680   0

Another option is to type the grep command as follows:
grep Swap /proc/meminfo

SwapCached:            0 kB
SwapTotal:        524284 kB
SwapFree:         524284 kB
Look for swap space in Linux using swapon command

Type the following command to show swap usage summary by device
# swapon -s
Sample outputs:

Filename                           Type            Size    Used    Priority
/dev/sda3                               partition       6291448 65680   0
Use free command to monitor swap space usage

Use the free command as follows:
# free -g
# free -k
# free -m

Sample outputs:

             total       used       free     shared    buffers     cached
Mem:         11909      11645        264          0        324       8980
-/+ buffers/cache:       2341       9568
Swap:         6143         64       6079
See swap size in Linux using vmstat command

Type the following vmstat command:
# vmstat
# vmstat 1 5

... ... ...

[Jul 26, 2019] Cheat.sh Shows Cheat Sheets On The Command Line Or In Your Code Editor>

The choice of shell as a programming language is strange, but the idea is good...
Notable quotes:
"... The tool is developed by Igor Chubin, also known for its console-oriented weather forecast service wttr.in , which can be used to retrieve the weather from the console using only cURL or Wget. ..."
Jul 26, 2019 | www.linuxuprising.com

While it does have its own cheat sheet repository too, the project is actually concentrated around the creation of a unified mechanism to access well developed and maintained cheat sheet repositories.

The tool is developed by Igor Chubin, also known for its console-oriented weather forecast service wttr.in , which can be used to retrieve the weather from the console using only cURL or Wget.

It's worth noting that cheat.sh is not new. In fact it had its initial commit around May, 2017, and is a very popular repository on GitHub. But I personally only came across it recently, and I found it very useful, so I figured there must be some Linux Uprising readers who are not aware of this cool gem.

cheat.sh features & more
cheat.sh tar example
cheat.sh major features:

The command line client features a special shell mode with a persistent queries context and readline support. It also has a query history, it integrates with the clipboard, supports tab completion for shells like Bash, Fish and Zsh, and it includes the stealth mode I mentioned in the cheat.sh features.

The web, curl and cht.sh (command line) interfaces all make use of https://cheat.sh/ but if you prefer, you can self-host it .

It should be noted that each editor plugin supports a different feature set (configurable server, multiple answers, toggle comments, and so on). You can view a feature comparison of each cheat.sh editor plugin on the Editors integration section of the project's GitHub page.

Want to contribute a cheat sheet? See the cheat.sh guide on editing or adding a new cheat sheet.

Interested in bookmarking commands instead? You may want to give Marker, a command bookmark manager for the console , a try.

cheat.sh curl / command line client usage examples
Examples of using cheat.sh using the curl interface (this requires having curl installed as you'd expect) from the command line:

Show the tar command cheat sheet:

curl cheat.sh/tar

Example with output:
$ curl cheat.sh/tar
# To extract an uncompressed archive:
tar -xvf /path/to/foo.tar

# To create an uncompressed archive:
tar -cvf /path/to/foo.tar /path/to/foo/

# To extract a .gz archive:
tar -xzvf /path/to/foo.tgz

# To create a .gz archive:
tar -czvf /path/to/foo.tgz /path/to/foo/

# To list the content of an .gz archive:
tar -ztvf /path/to/foo.tgz

# To extract a .bz2 archive:
tar -xjvf /path/to/foo.tgz

# To create a .bz2 archive:
tar -cjvf /path/to/foo.tgz /path/to/foo/

# To extract a .tar in specified Directory:
tar -xvf /path/to/foo.tar -C /path/to/destination/

# To list the content of an .bz2 archive:
tar -jtvf /path/to/foo.tgz

# To create a .gz archive and exclude all jpg,gif,... from the tgz
tar czvf /path/to/foo.tgz --exclude=\*.{jpg,gif,png,wmv,flv,tar.gz,zip} /path/to/foo/

# To use parallel (multi-threaded) implementation of compression algorithms:
tar -z ... -> tar -Ipigz ...
tar -j ... -> tar -Ipbzip2 ...
tar -J ... -> tar -Ipixz ...

cht.sh also works instead of cheat.sh:
curl cht.sh/tar

Want to search for a keyword in all cheat sheets? Use:
curl cheat.sh/~keyword

List the Python programming language cheat sheet for random list :
curl cht.sh/python/random+list

Example with output:
$ curl cht.sh/python/random+list
#  python - How to randomly select an item from a list?
#  
#  Use random.choice
#  (https://docs.python.org/2/library/random.html#random.choice):

import random

foo = ['a', 'b', 'c', 'd', 'e']
print(random.choice(foo))

#  For cryptographically secure random choices (e.g. for generating a
#  passphrase from a wordlist), use random.SystemRandom
#  (https://docs.python.org/2/library/random.html#random.SystemRandom)
#  class:

import random

foo = ['battery', 'correct', 'horse', 'staple']
secure_random = random.SystemRandom()
print(secure_random.choice(foo))

#  [Pēteris Caune] [so/q/306400] [cc by-sa 3.0]

Replace python with some other programming language supported by cheat.sh, and random+list with the cheat sheet you want to show.

Want to eliminate the comments from your answer? Add ?Q at the end of the query (below is an example using the same /python/random+list):

$ curl cht.sh/python/random+list?Q
import random

foo = ['a', 'b', 'c', 'd', 'e']
print(random.choice(foo))

import random

foo = ['battery', 'correct', 'horse', 'staple']
secure_random = random.SystemRandom()
print(secure_random.choice(foo))

For more flexibility and tab completion you can use cht.sh, the command line cheat.sh client; you'll find instructions for how to install it further down this article. Examples of using the cht.sh command line client:

Show the tar command cheat sheet:

cht.sh tar

List the Python programming language cheat sheet for random list :
cht.sh python random list

There is no need to use quotes with multiple keywords.

You can start the cht.sh client in a special shell mode using:

cht.sh --shell

And then you can start typing your queries. Example:
$ cht.sh --shell
cht.sh> bash loop

If all your queries are about the same programming language, you can start the client in the special shell mode, directly in that context. As an example, start it with the Bash context using:
cht.sh --shell bash

Example with output:
$ cht.sh --shell bash
cht.sh/bash> loop
...........
cht.sh/bash> switch case

Want to copy the previously listed answer to the clipboard? Type c , then press Enter to copy the whole answer, or type C and press Enter to copy it without comments.

Type help in the cht.sh interactive shell mode to see all available commands. Also look under the Usage section from the cheat.sh GitHub project page for more options and advanced usage.

How to install cht.sh command line client
You can use cheat.sh in a web browser, from the command line with the help of curl and without having to install anything else, as explained above, as a code editor plugin, or using its command line client which has some extra features, which I already mentioned. The steps below are for installing this cht.sh command line client.

If you'd rather install a code editor plugin for cheat.sh, see the Editors integration page.

1. Install dependencies.

To install the cht.sh command line client, the curl command line tool will be used, so this needs to be installed on your system. Another dependency is rlwrap , which is required by the cht.sh special shell mode. Install these dependencies as follows.

sudo apt install curl rlwrap

sudo dnf install curl rlwrap

sudo pacman -S curl rlwrap

sudo zypper install curl rlwrap

The packages seem to be named the same on most (if not all) Linux distributions, so if your Linux distribution is not on this list, just install the curl and rlwrap packages using your distro's package manager.

2. Download and install the cht.sh command line interface.

You can install this either for your user only (so only you can run it), or for all users:

curl https://cht.sh/:cht.sh > ~/.bin/cht.sh

chmod +x ~/.bin/cht.sh

curl https://cht.sh/:cht.sh | sudo tee /usr/local/bin/cht.sh

sudo chmod +x /usr/local/bin/cht.sh

If the first command appears to have frozen displaying only the cURL output, press the Enter key and you'll be prompted to enter your password in order to save the file to /usr/local/bin .

You may also download and install the cheat.sh command completion for Bash or Zsh:

mkdir ~/.bash.d

curl https://cheat.sh/:bash_completion > ~/.bash.d/cht.sh

echo ". ~/.bash.d/cht.sh" >> ~/.bashrc

mkdir ~/.zsh.d

curl https://cheat.sh/:zsh > ~/.zsh.d/_cht

echo 'fpath=(~/.zsh.d/ $fpath)' >> ~/.zshrc

Open a new shell / terminal and it will load the cheat.sh completion.

[Jul 26, 2019] What Is /dev/null in Linux by Alexandru Andrei

Images removed...
Jul 23, 2019 | www.maketecheasier.com
... ... ...

In technical terms, "/dev/null" is a virtual device file. As far as programs are concerned, these are treated just like real files. Utilities can request data from this kind of source, and the operating system feeds them data. But, instead of reading from disk, the operating system generates this data dynamically. An example of such a file is "/dev/zero."

In this case, however, you will write to a device file. Whatever you write to "/dev/null" is discarded, forgotten, thrown into the void. To understand why this is useful, you must first have a basic understanding of standard output and standard error in Linux or *nix type operating systems.

Related : How to Use the Tee Command in Linux

stdout and stderr

A command-line utility can generate two types of output. Standard output is sent to stdout. Errors are sent to stderr.

By default, stdout and stderr are associated with your terminal window (or console). This means that anything sent to stdout and stderr is normally displayed on your screen. But through shell redirections, you can change this behavior. For example, you can redirect stdout to a file. This way, instead of displaying output on the screen, it will be saved to a file for you to read later – or you can redirect stdout to a physical device, say, a digital LED or LCD display.

A full article about pipes and redirections is available if you want to learn more.

Related : 12 Useful Linux Commands for New User

Use /dev/null to Get Rid of Output You Don't Need

Since there are two types of output, standard output and standard error, the first use case is to filter out one type or the other. It's easier to understand through a practical example. Let's say you're looking for a string in "/sys" to find files that refer to power settings.

grep -r power /sys/

There will be a lot of files that a regular, non-root user cannot read. This will result in many "Permission denied" errors.

These clutter the output and make it harder to spot the results that you're looking for. Since "Permission denied" errors are part of stderr, you can redirect them to "/dev/null."

grep -r power /sys/ 2>/dev/null

As you can see, this is much easier to read.

In other cases, it might be useful to do the reverse: filter out standard output so you can only see errors.

ping google.com 1>/dev/null

The screenshot above shows that, without redirecting, ping displays its normal output when it can reach the destination machine. In the second command, nothing is displayed while the network is online, but as soon as it gets disconnected, only error messages are displayed.

You can redirect both stdout and stderr to two different locations.

ping google.com 1>/dev/null 2>error.log

In this case, stdout messages won't be displayed at all, and error messages will be saved to the "error.log" file.

Redirect All Output to /dev/null

Sometimes it's useful to get rid of all output. There are two ways to do this.

grep -r power /sys/ >/dev/null 2>&1

The string >/dev/null means "send stdout to /dev/null," and the second part, 2>&1 , means send stderr to stdout. In this case you have to refer to stdout as "&1" instead of simply "1." Writing "2>1" would just redirect stdout to a file named "1."

What's important to note here is that the order is important. If you reverse the redirect parameters like this:

grep -r power /sys/ 2>&1 >/dev/null

it won't work as intended. That's because as soon as 2>&1 is interpreted, stderr is sent to stdout and displayed on screen. Next, stdout is suppressed when sent to "/dev/null." The final result is that you will see errors on the screen instead of suppressing all output. If you can't remember the correct order, there's a simpler redirect that is much easier to type:

grep -r power /sys/ &>/dev/null

In this case, &>/dev/null is equivalent to saying "redirect both stdout and stderr to this location."

Other Examples Where It Can Be Useful to Redirect to /dev/null

Say you want to see how fast your disk can read sequential data. The test is not extremely accurate but accurate enough. You can use dd for this, but dd either outputs to stdout or can be instructed to write to a file. With of=/dev/null you can tell dd to write to this virtual file. You don't even have to use shell redirections here. if= specifies the location of the input file to be read; of= specifies the name of the output file, where to write.

dd if=debian-disk.qcow2 of=/dev/null status=progress bs=1M iflag=direct

In some scenarios, you may want to see how fast you can download from a server. But you don't want to write to your disk unnecessarily. Simply enough, don't write to a regular file, write to "/dev/null."

wget -O /dev/null http://ftp.halifax.rwth-aachen.de/ubuntu-releases/18.04/ubuntu-18.04.2-desktop-amd64.iso
Conclusion

Hopefully, the examples in this article can inspire you to find your own creative ways to use "/dev/null."

Know an interesting use-case for this special device file? Leave a comment below and share the knowledge!

[Jul 26, 2019] How to check open ports in Linux using the CLI> by Vivek Gite

Jul 26, 2019 | www.cyberciti.biz

Using netstat to list open ports

Type the following netstat command
sudo netstat -tulpn | grep LISTEN

... ... ...

For example, TCP port 631 is opened by the cupsd process, and cupsd is only listening on the loopback address (127.0.0.1). Similarly, TCP port 22 is opened by the sshd process, and sshd is listening on all IP addresses for ssh connections:

Proto Recv-Q Send-Q Local Address           Foreign Address         State       User       Inode      PID/Program name 
tcp   0      0      127.0.0.1:631           0.0.0.0:*               LISTEN      0          43385      1821/cupsd  
tcp   0      0      0.0.0.0:22              0.0.0.0:*               LISTEN      0          44064      1823/sshd

Here the columns show the protocol, the receive and send queues, the local and foreign addresses, the socket state, the owning user and inode, and the PID/program name of the process that opened the port.

Use ss to list open ports

The ss command is used to dump socket statistics. It allows showing information similar to netstat. It can display more TCP and state information than other tools. The syntax is:
sudo ss -tulpn

... ... ...

[Jul 26, 2019] The day the virtual machine manager died by Nathan Lager

"Dangerous" commands like dd should probably be always typed first in the editor and only when you verity that you did not make a blunder , executed...
A good decision was to go home and think the situation over, not to aggravate it with impulsive attempts to correct the situation, which typically only make it worse.
Lack of checking of the health of backups suggest that this guy is an arrogant sucker, despite his 20 years of sysadmin experience.
Notable quotes:
"... I started dd as root , over the top of an EXISTING DISK ON A RUNNING VM. What kind of idiot does that?! ..."
"... Since my VMs were still running, and I'd already done enough damage for one night, I stopped touching things and went home. ..."
Jul 26, 2019 | www.redhat.com

... ... ...

See, my RHEV manager was a VM running on a stand-alone Kernel-based Virtual Machine (KVM) host, separate from the cluster it manages. I had been running RHEV since version 3.0, before hosted engines were a thing, and I hadn't gone through the effort of migrating. I was already in the process of building a new set of clusters with a new manager, but this older manager was still controlling most of our production VMs. It had filled its disk again, and the underlying database had stopped itself to avoid corruption.

See, for whatever reason, we had never set up disk space monitoring on this system. It's not like it was an important box, right?

So, I logged into the KVM host that ran the VM, and started the well-known procedure of creating a new empty disk file, and then attaching it via virsh . The procedure goes something like this: Become root , use dd to write a stream of zeros to a new file, of the proper size, in the proper location, then use virsh to attach the new disk to the already running VM. Then, of course, log into the VM and do your disk expansion.
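
For readers unfamiliar with that procedure, a minimal sketch under the same assumptions (all names, sizes and the target device are hypothetical; double-check the of= path before running dd):

# dd if=/dev/zero of=/var/lib/libvirt/images/vmname-disk3.img bs=1M count=40960
# virsh attach-disk vmname /var/lib/libvirt/images/vmname-disk3.img vdb --persistent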

I logged in, ran sudo -i , and started my work. I ran cd /var/lib/libvirt/images , ran ls -l to find the existing disk images, and then started carefully crafting my dd command:

dd ... bs=1k count=40000000 if=/dev/zero ... of=./vmname-disk ...

Which was the next disk again? <Tab> of=vmname-disk2.img <Back arrow, Back arrow, Back arrow, Back arrow, Backspace> Don't want to dd over the existing disk, that'd be bad. Let's change that 2 to a 3 , and Enter . OH CRAP, I CHANGED THE 2 TO A 2 NOT A 3 ! <Ctrl+C><Ctrl+C><Ctrl+C><Ctrl+C><Ctrl+C><Ctrl+C>

I still get sick thinking about this. I'd done the stupidest thing I possibly could have done, I started dd as root , over the top of an EXISTING DISK ON A RUNNING VM. What kind of idiot does that?! (The kind that's at work late, trying to get this one little thing done before he heads off to see his friend. The kind that thinks he knows better, and thought he was careful enough to not make such a newbie mistake. Gah.)

So, how fast does dd start writing zeros? Faster than I can move my fingers from the Enter key to the Ctrl+C keys. I tried a number of things to recover the running disk from memory, but all I did was make things worse, I think. The system was still up, but still broken like it was before I touched it, so it was useless.

Since my VMs were still running, and I'd already done enough damage for one night, I stopped touching things and went home. The next day I owned up to the boss and co-workers pretty much the moment I walked in the door. We started taking an inventory of what we had, and what was lost. I had taken the precaution of setting up backups ages ago. So, we thought we had that to fall back on.

I opened a ticket with Red Hat support and filled them in on how dumb I'd been. I can only imagine the reaction of the support person when they read my ticket. I worked a help desk for years, I know how this usually goes. They probably gathered their closest coworkers to mourn for my loss, or get some entertainment out of the guy who'd been so foolish. (I say this in jest. Red Hat's support was awesome through this whole ordeal, and I'll tell you how soon. )

So, I figured the next thing I would need from my broken server, which was still running, was the backups I'd diligently been collecting. They were on the VM but on a separate virtual disk, so I figured they were safe. The disk I'd overwritten was the last disk I'd made to expand the volume the database was on, so that logical volume was toast, but I've always set up my servers such that the main mounts -- / , /var , /home , /tmp , and /root -- were all separate logical volumes.

In this case, /backup was an entirely separate virtual disk. So, I scp -r 'd the entire /backup mount to my laptop. It copied, and I felt a little sigh of relief. All of my production systems were still running, and I had my backup. My hope was that these factors would mean a relatively simple recovery: Build a new VM, install RHEV-M, and restore my backup. Simple right?

By now, my boss had involved the rest of the directors, and let them know that we were looking down the barrel of a possibly bad time. We started organizing a team meeting to discuss how we were going to get through this. I returned to my desk and looked through the backups I had copied from the broken server. All the files were there, but they were tiny. Like, a couple hundred kilobytes each, instead of the hundreds of megabytes or even gigabytes that they should have been.

Happy feeling, gone.

Turns out, my backups were running, but at some point after an RHEV upgrade, the database backup utility had changed. Remember how I said this system had existed since version 3.0? Well, 3.0 didn't have an engine-backup utility, so in my RHEV training, we'd learned how to make our own. Mine broke when the tools changed, and for who knows how long, it had been getting an incomplete backup -- just some files from /etc .

No database. Ohhhh ... Fudge. (I didn't say "Fudge.")

I updated my support case with the bad news and started wondering what it would take to break through one of these 4th-floor windows right next to my desk. (Ok, not really.)

At this point, we basically had three RHEV clusters with no manager. One of those was for development work, but the other two were all production. We started using these team meetings to discuss how to recover from this mess. I don't know what the rest of my team was thinking about me, but I can say that everyone was surprisingly supportive and un-accusatory. I mean, with one typo I'd thrown off the entire department. Projects were put on hold and workflows were disrupted, but at least we had time: We couldn't reboot machines, we couldn't change configurations, and couldn't get to VM consoles, but at least everything was still up and operating.

Red Hat support had escalated my SNAFU to an RHEV engineer, a guy I'd worked with in the past. I don't know if he remembered me, but I remembered him, and he came through yet again. About a week in, for some unknown reason (we never figured out why), our Windows VMs started dropping offline. They were still running as far as we could tell, but they dropped off the network. Just boom. Offline. In the course of a workday, we lost about a dozen Windows systems. All of our RHEL machines were working fine, and not even every Windows machine was affected -- just those dozen or so.

Well great, how could this get worse? Oh right, add a ticking time bomb. Why were the Windows servers dropping off? Would they all eventually drop off? Would the RHEL systems eventually drop off? I made a panicked call back to support, emailed my account rep, and called in every favor I'd ever collected from contacts I had within Red Hat to get help as quickly as possible.

I ended up on a conference call with two support engineers, and we got to work. After about 30 minutes on the phone, we'd worked out the most insane recovery method. We had the newer RHEV manager I mentioned earlier, that was still up and running, and had two new clusters attached to it. Our recovery goal was to get all of our workloads moved from the broken clusters to these two new clusters.

Want to know how we ended up doing it? Well, as our Windows VMs were dropping like flies, the engineers and I came up with this plan. My clusters used a Fibre Channel Storage Area Network (SAN) as their storage domains. We took a machine that was not in use, but had a Fibre Channel host bus adapter (HBA) in it, and attached the logical unit numbers (LUNs) for both the old cluster's storage domains and the new cluster's storage domains to it. The plan there was to make a new VM on the new clusters, attach blank disks of the proper size to the new VM, and then use dd (the irony is not lost on me) to block-for-block copy the old broken VM disk over to the newly created empty VM disk.

I don't know if you've ever delved deeply into an RHEV storage domain, but under the covers it's all Logical Volume Manager (LVM). The problem is, the LVs aren't human-readable: they're just universally unique identifiers (UUIDs) that the RHEV manager's database links from VM to disk. These VMs are running, but we don't have the database to reference. So how do you get this data?

virsh ...

Luckily, I managed KVM and Xen clusters long before RHEV was a thing that was viable. I was no stranger to libvirt 's virsh utility. With the proper authentication -- which the engineers gave to me -- I was able to virsh dumpxml on a source VM while it was running, get all the info I needed about its memory, disk, CPUs, and even MAC address, and then create an empty clone of it on the new clusters.
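For illustration, a minimal sketch of that inspection step; the VM name (vm01) is a placeholder, and as noted above, virsh on an RHEV host needs the special authentication the support engineers provided:

# List the domains libvirt knows about on this host
virsh -c qemu:///system list --all

# Dump the running VM's full XML definition to a file
virsh -c qemu:///system dumpxml vm01 > vm01.xml

# Pull out what is needed to rebuild an empty clone:
# memory, vCPUs, disk source paths (the UUID-named LVs), and the MAC address
grep -E '<memory|<vcpu|<source dev=|<mac address=' vm01.xml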

Once I felt everything was perfect, I would shut down the VM on the broken cluster with either virsh shutdown , or by logging into the VM and shutting it down. The catch here is that if I missed something and shut down that VM, there was no way I'd be able to power it back on. Once the data was no longer in memory, the config would be completely lost, since that information is all in the database -- and I'd hosed that. Once I had everything, I'd log into my migration host (the one that was connected to both storage domains) and use dd to copy, bit for bit, the source storage domain disk over to the destination storage domain disk. Talk about nerve-wracking, but it worked! We picked one of the broken Windows VMs and followed this process, and within about half an hour we'd completed all of the steps and brought it back online.
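A heavily hedged sketch of that copy step, for illustration only -- the device-mapper paths below are placeholders for the UUID-named logical volumes of the source and destination storage domains, which have to be identified by hand:

# Confirm the source and destination LVs and their sizes before touching anything
lvs -o vg_name,lv_name,lv_size

# Bit-for-bit copy from the broken domain's LV onto the new VM's blank LV.
# Triple-check if= and of= -- getting them backwards destroys the source.
dd if=/dev/mapper/OLD_DOMAIN-source_lv of=/dev/mapper/NEW_DOMAIN-dest_lv bs=4M status=progress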

We did hit one snag, though. See, we'd used snapshots here and there. RHEV snapshots are LVM snapshots. Consolidating them without the RHEV manager was a bit of a chore, and took even more legwork and research before we could dd the disks. I had to mimic the snapshot tree by creating symbolic links in the right places, and then start the dd process. I worked that one out late that evening after the engineers were off, probably enjoying time with their families. They asked me to write the process up in detail later. I suspect it turned into some internal Red Hat documentation, never to be given to a customer because of the chance of royally hosing your storage domain.

Somehow, over the course of 3 months and probably a dozen scheduled maintenance windows, I managed to migrate every single VM (of about 100 VMs) from the old zombie clusters to the working clusters. This migration included our Zimbra collaboration system (10 VMs in itself), our file servers (another dozen VMs), our Enterprise Resource Planning (ERP) platform, and even Oracle databases.

We didn't lose a single VM and had no more unplanned outages. The Red Hat Enterprise Linux (RHEL) systems, and even some Windows systems, never fell to the mysterious drop-off that those dozen or so Windows servers did early on. During this ordeal, though, I had trouble sleeping. I was stressed out and felt so guilty for creating all this work for my co-workers, I even had trouble eating. No exaggeration, I lost 10lbs.

So, don't be like Nate. Monitor your important systems, check your backups, and for all that's holy, double-check your dd output file. That way, you won't have drama, and can truly enjoy Sysadmin Appreciation Day!

Nathan Lager is an experienced sysadmin, with 20 years in the industry. He runs his own blog at undrground.org, and hosts the Iron Sysadmin Podcast.

[Jul 13, 2019] Articles on Linux by Ken Hess

Jul 13, 2019 | www.linuxtoday.com

Hardening Linux for Production Use (Jul 12, 2019)

Quick and Dirty MySQL Performance Troubleshooting (May 09, 2019)

[Jun 26, 2019] The Individual Costs of Occupational Decline

Jun 26, 2019 | www.nakedcapitalism.com

Yves here. You have to read a bit into this article on occupational decline, aka, "What happens to me after the robots take my job?" to realize that the authors studied Swedish workers. One has to think that the findings would be more pronounced in the US, due both to pronounced regional and urban/rural variations, as well as the weakness of social institutions in the US. While there may be small cities in Sweden that have been hit hard by the decline of a key employer, I don't have the impression that Sweden has areas that have suffered the way our Rust Belt has. Similarly, in the US, a significant amount of hiring starts with resume reviews with the job requirements overspecified because the employer intends to hire someone who has done the same job somewhere else and hence needs no training (which in practice is an illusion; how companies do things is always idiosyncratic and new hires face a learning curve). On top of that, many positions are filled via personal networks, not formal recruiting. Some studies have concluded that having a large network of weak ties is more helpful in landing a new post than fewer close connections. It's easier to know a lot of people casually in a society with strong community institutions.

The article does not provide much in the way of remedies; it hints at "let them eat training" when programs have proven to be ineffective. One approach would be aggressive enforcement of laws against age discrimination. And even though some readers dislike a Job Guarantee, not only would it enable people who wanted to work to keep working, but private sector employers are particularly loath to employ someone who has been out of work for more than six months, so a Job Guarantee post would also help keep someone who'd lost a job from looking like damaged goods.

By Per-Anders Edin, Professor of Industrial Relations, Uppsala University; Tiernan Evans, Economics MRes/PhD Candidate, LSE; Georg Graetz, Assistant Professor in the Department of Economics, Uppsala University; Sofia Hernnäs, PhD student, Department of Economics, Uppsala University; Guy Michaels, Associate Professor in the Department of Economics, LSE. Originally published at VoxEU

As new technologies replace human labour in a growing number of tasks, employment in some occupations invariably falls. This column compares outcomes for similar workers in similar occupations over 28 years to explore the consequences of large declines in occupational employment for workers' careers. While mean losses in earnings and employment for those initially working in occupations that later declined are relatively moderate, low-earners lose significantly more.

How costly is it for workers when demand for their occupation declines? As new technologies replace human labour in a growing number of tasks, employment in some occupations invariably falls. Until recently, technological change mostly automated routine production and clerical work (Autor et al. 2003). But machines' capabilities are expanding, as recent developments include self-driving vehicles and software that outperforms professionals in some tasks. Debates on the labour market implications of these new technologies are ongoing (e.g. Brynjolfsson and McAfee 2014, Acemoglu and Restrepo 2018). But in these debates, it is important to ask not only "Will robots take my job?", but also "What would happen to my career if robots took my job?"

Much is at stake. Occupational decline may hurt workers and their families, and may also have broader consequences for economic inequality, education, taxation, and redistribution. If it exacerbates differences in outcomes between economic winners and losers, populist forces may gain further momentum (Dal Bo et al. 2019).

In a new paper (Edin et al. 2019) we explore the consequences of large declines in occupational employment for workers' careers. We assemble a dataset with forecasts of occupational employment changes that allow us to identify unanticipated declines, population-level administrative data spanning several decades, and a highly detailed occupational classification. These data allow us to compare outcomes for similar workers who perform similar tasks and have similar expectations of future occupational employment trajectories, but experience different actual occupational changes.

Our approach is distinct from previous work that contrasts career outcomes of routine and non-routine workers (e.g. Cortes 2016), since we compare workers who perform similar tasks and whose careers would likely have followed similar paths were it not for occupational decline. Our work is also distinct from studies of mass layoffs (e.g. Jacobson et al. 1993), since workers who experience occupational decline may take action before losing their jobs.

In our analysis, we follow individual workers' careers for almost 30 years, and we find that workers in declining occupations lose on average 2-5% of cumulative earnings, compared to other similar workers. Workers with low initial earnings (relative to others in their occupations) lose more – about 8-11% of mean cumulative earnings. These earnings losses reflect both lost years of employment and lower earnings conditional on employment; some of the employment losses are due to increased time spent in unemployment and retraining, and low earners spend more time in both unemployment and retraining.

Estimating the Consequences of Occupational Decline

We begin by assembling data from the Occupational Outlook Handbooks (OOH), published by the US Bureau of Labor Statistics, which cover more than 400 occupations. In our main analysis we define occupations as declining if their employment fell by at least 25% from 1984-2016, although we show that our results are robust to using other cutoffs. The OOH also provides information on technological change affecting each occupation, and forecasts of employment over time. Using these data, we can separate technologically driven declines, and also unanticipated declines. Occupations that declined include typesetters, drafters, proof readers, and various machine operators.

We then match the OOH data to detailed Swedish occupations. This allows us to study the consequences of occupational decline for workers who, in 1985, worked in occupations that declined over the subsequent decades. We verify that occupations that declined in the US also declined in Sweden, and that the employment forecasts that the BLS made for the US have predictive power for employment changes in Sweden.

Detailed administrative micro-data, which cover all Swedish workers, allow us to address two potential concerns for identifying the consequences of occupational decline: that workers in declining occupations may have differed from other workers, and that declining occupations may have differed even in absence of occupational decline. To address the first concern, about individual sorting, we control for gender, age, education, and location, as well as 1985 earnings. Once we control for these characteristics, we find that workers in declining occupations were no different from others in terms of their cognitive and non-cognitive test scores and their parents' schooling and earnings. To address the second concern, about occupational differences, we control for occupational earnings profiles (calculated using the 1985 data), the BLS forecasts, and other occupational and industry characteristics.

Assessing the losses and how their incidence varied

We find that prime age workers (those aged 25-36 in 1985) who were exposed to occupational decline lost about 2-6 months of employment over 28 years, compared to similar workers whose occupations did not decline. The higher end of the range refers to our comparison between similar workers, while the lower end of the range compares similar workers in similar occupations. The employment loss corresponds to around 1-2% of mean cumulative employment. The corresponding earnings losses were larger, and amounted to around 2-5% of mean cumulative earnings. These mean losses may seem moderate given the large occupational declines, but the average outcomes do not tell the full story. The bottom third of earners in each occupation fared worse, losing around 8-11% of mean earnings when their occupations declined.

The earnings and employment losses that we document reflect increased time spent in unemployment and government-sponsored retraining – more so for workers with low initial earnings. We also find that older workers who faced occupational decline retired a little earlier.

We also find that workers in occupations that declined after 1985 were less likely to remain in their starting occupation. It is quite likely that this reduced supply to declining occupations contributed to mitigating the losses of the workers that remained there.

We show that our main findings are essentially unchanged when we restrict our analysis to technology-related occupational declines.

Further, our finding that mean earnings and employment losses from occupational decline are small is not unique to Sweden. We find similar results using a smaller panel dataset on US workers, using the National Longitudinal Survey of Youth 1979.

Theoretical implications

Our paper also considers the implications of our findings for Roy's (1951) model, which is a workhorse model for labour economists. We show that the frictionless Roy model predicts that losses are increasing in initial occupational earnings rank, under a wide variety of assumptions about the skill distribution. This prediction is inconsistent with our finding that the largest earnings losses from occupational decline are incurred by those who earned the least. To reconcile our findings, we add frictions to the model: we assume that workers who earn little in one occupation incur larger time costs searching for jobs or retraining if they try to move occupations. This extension of the model, especially when coupled with the addition of involuntary job displacement, allows us to reconcile several of our empirical findings.

Conclusions

There is a vivid academic and public debate on whether we should fear the takeover of human jobs by machines. New technologies may replace not only factory and office workers but also drivers and some professional occupations. Our paper compares similar workers in similar occupations over 28 years. We show that although mean losses in earnings and employment for those initially working in occupations that later declined are relatively moderate (2-5% of earnings and 1-2% of employment), low-earners lose significantly more.

The losses that we find from occupational decline are smaller than those suffered by workers who experience mass layoffs, as reported in the existing literature. Because the occupational decline we study took years or even decades, its costs for individual workers were likely mitigated through retirements, reduced entry into declining occupations, and increased job-to-job exits to other occupations. Compared to large, sudden shocks, such as plant closures, the decline we study may also have a less pronounced impact on local economies.

While the losses we find are on average moderate, there are several reasons why future occupational decline may have adverse impacts. First, while we study unanticipated declines, the declines were nevertheless fairly gradual. Costs may be larger for sudden shocks following, for example, a quick evolution of machine learning. Second, the occupational decline that we study mainly affected low- and middle-skilled occupations, which require less human capital investment than those that may be impacted in the future. Finally, and perhaps most importantly, our findings show that low-earning individuals are already suffering considerable (pre-tax) earnings losses, even in Sweden, where institutions are geared towards mitigating those losses and facilitating occupational transitions. Helping these workers stay productive when they face occupational decline remains an important challenge for governments.

Please see original post for references

[Jun 26, 2019] Linux Package Managers Compared - AppImage vs Snap vs Flatpak

Jun 26, 2019 | www.ostechnix.com

by editor · Published June 24, 2019 · Updated June 24, 2019

Package managers provide a way of packaging, distributing, installing, and maintaining apps in an operating system. With modern desktop, server and IoT applications of the Linux operating system and the hundreds of different distros that exist, it becomes necessary to move away from platform specific packaging methods to platform agnostic ones. This post explores 3 such tools, namely AppImage , Snap and Flatpak , that each aim to be the future of software deployment and management in Linux. At the end we summarize a few key findings.

1. AppImage

AppImage follows a concept called "One app = one file" . An AppImage is a regular, independent "file" containing one application together with everything it needs to run. Once made executable, the AppImage can be run like any application on a computer by simply double-clicking it in the user's file system.[1]

It is a format for creating portable software for Linux without requiring the user to install the said application. The format allows the original developers of the software (upstream developers) to create a platform and distribution independent (also called a distribution-agnostic binary) version of their application that will basically run on any flavor of Linux.

AppImage has been around for a long time. Klik, a predecessor of AppImage, was created by Simon Peter in 2004. The project was shut down in 2011 after not having passed the beta stage. A project named PortableLinuxApps was created by Simon around the same time, and the format was picked up by a few portals offering software for Linux users. The project was renamed again in 2013 to its current name, AppImage, and a repository has been maintained on GitHub with all the latest changes since 2018.[2][3]

Written primarily in C and donning the MIT license since 2013, AppImage is currently developed by The AppImage project . It is a very convenient way to use applications as demonstrated by the following features:

  1. AppImages can run on virtually any Linux system. Applications normally derive a lot of functionality from the operating system and a few common libraries -- a common practice in the software world, since there is no point in re-implementing what already exists. The problem is that many Linux distros might not ship all the files a particular application requires, since it is left to the developers of each distro to include the necessary packages. Hence developers normally have to package the dependencies of their application separately for each Linux distro they publish for. Using the AppImage format, developers can instead include in the AppImage file all the libraries and files that they cannot expect the target operating system to provide. The same AppImage file can then work on different operating systems and machines without granular, per-distro control.
  2. The one app = one file philosophy means that the user experience is simple and elegant: users need only download and execute one file to use the application (see the short sketch after this list).
  3. No requirement of root access . System administrators typically restrict root access to keep people from messing with computers and their default setup. This also means that people without root or superuser privileges cannot install the apps they need as they please -- a common situation in public settings (such as library or university computers, or enterprise systems). An AppImage does not require users to "install" anything; users need only download the file and make it executable to start using it. This removes the access dilemma for system administrators and makes their job easier without sacrificing user experience.
  4. No effect on core operating system . The AppImage-application format allows using applications with their full functionality without needing to change or even access most system files. Meaning whatever the applications do, the core operating system setup and files remain untouched.
  5. An AppImage can be made by a developer for a particular version of their application. Any updated version is made as a different AppImage. Hence users if need be can test multiple versions of the same application by running different instances using different AppImages. This is an invaluable feature when you need to test your applications from an end-user POV to notice differences.
  6. Take your applications where you go. As mentioned previously, AppImages are archived files of everything an application requires and can be used without installing, and without worrying about the distribution the system runs. If you have a set of apps that you use regularly, you can even copy a few AppImage files onto a thumb drive and take them with you to use on multiple computers running different distros, without worrying whether they'll work or not.
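As a rough illustration of the download-and-run flow referenced above (the file name and URL are placeholders, not a recommendation of any particular project):

# Fetch an AppImage, mark it executable, and run it -- no root, no installation
wget https://example.com/SomeApp-x86_64.AppImage
chmod +x SomeApp-x86_64.AppImage
./SomeApp-x86_64.AppImage

# Running two versions side by side is just a matter of keeping two files around
./SomeApp-1.0-x86_64.AppImage &
./SomeApp-2.0-x86_64.AppImage &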

Furthermore, the AppImageKit allows users from all backgrounds to build their own AppImages from applications they already have or for applications that are not provided an AppImage by their upstream developer.

The package manager is platform independent but focuses primarily on software distribution to end users on their desktops, with a dedicated daemon, appimaged, for integrating the AppImage format into the respective desktop environments. AppImage is now supported natively by a variety of distros such as Ubuntu, Debian, openSUSE, CentOS, Fedora, etc., and others may set it up as per their needs. AppImages can also be run on servers with limited functionality via the included CLI tools.

To know more about AppImages, go to the official AppImage documentation page.




2. Snappy

Snappy is a software deployment and package management system like AppImage or any other package manager, for that matter. It was originally designed for the now-defunct Ubuntu Touch operating system. Snappy lets developers create software packages for use in a variety of Linux-based distributions. The initial intention behind creating Snappy and deploying "snaps" on Ubuntu-based systems was to obtain a single unified format that could be used in everything from IoT devices to full-fledged computer systems running some version of Ubuntu and, in a larger sense, Linux itself.[4]

The lead developer behind the project is Canonical , the same company that pilots the Ubuntu project. Ubuntu had native snap support from version 16.04 LTS with more and more distros supporting it out of the box or via a simple setup these days. If you use Arch or Debian or openSUSE you'll find it easy to install support for the package manager using simple commands in the terminal as explained later in this section. This is also made possible by making the necessary snap platform files available on the respective repos.[5]

Several components make up the entire Snappy package manager system: the snap package format itself, the snapd background service, the Snapcraft build framework, and the Snap Store.[6]

The snapd component is written primarily in C and Golang, whereas the Snapcraft framework is built using Python. Although both modules use the GPLv3 license, it should be noted that the server-side operations rely on proprietary Canonical code, with only the client side published under the GPL. This is a major point of contention with developers, since participating in snap development also requires signing Canonical's CLA form.[7]

Going deeper into the finer details of the Snappy package manager the following may be noted:

  1. Snaps, as noted before, are all-inclusive and contain all the necessary files (dependencies) that the application needs to run. Hence, developers need not make different snaps for the different distros they target. Being mindful of the runtimes is all that's necessary if base runtimes are excluded from the snap.
  2. Snappy packages are meant to support transactional updates. Such a transactional update is atomic and fully reversible, meaning you can use the application while it's being updated, and if an update does not behave the way it's supposed to, you can reverse it with no other effects whatsoever. The concept is also called delta programming, in which only changes to the application are transmitted as an update instead of the whole package. An Ubuntu derivative called Ubuntu Core actually applies the snappy update protocol to the OS itself.[8]
  3. A key point of difference between snaps and AppImages is how they handle version differences. With AppImages, different versions of the application have different AppImage files, allowing you to use two or more versions of the same application concurrently. Using snaps, however, means conforming to the transactional or delta update system; while this makes updates faster, it keeps you from running two versions of the same application at the same time. If you need to use the old version of an app, you'll need to revert or uninstall the new version. Snappy does support a feature called "parallel install" that lets users accomplish similar goals, but it is still experimental and cannot be considered a stable implementation. Snappy also makes use of channels, meaning you can use the beta or nightly build of an app and the stable version at the same time.[9]
  4. Extensive support from major Linux distros and major developers including Google, Mozilla, Microsoft, etc.[4]
  5. Snapd, the desktop integration tool, supports taking "snapshots" of the current state of all installed snaps in the system. This lets users save the configuration state of every application installed via the Snappy package manager and revert to that state whenever they wish. The feature can also be set to take snapshots automatically at a frequency the user deems necessary. Snapshots are created using the snap save command in the snapd framework (a short usage sketch follows this list).[10]
  6. Snaps are designed to be sandboxed during operation. This provides a much-needed layer of security and isolation for users, who need not worry about snap-based applications messing with the rest of the software on their computer. Sandboxing is implemented using three levels of isolation, viz. classic, strict, and devmode. Each level allows the app a different degree of access to the file system and the rest of the computer.[11]
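As promised above, a short usage sketch; it assumes snapd is already installed and that the package name used here (hello-world) is available in the Snap Store:

# Install a snap and list what is installed
sudo snap install hello-world
snap list

# Follow a different channel (e.g. beta) for the same snap
sudo snap refresh --channel=beta hello-world

# Save a snapshot of the installed snaps' data, then list saved snapshots
sudo snap save
snap saved

# Roll a snap back to its previous revision
sudo snap revert hello-world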

On the flip side, snaps are widely criticized for being centered on Canonical's modus operandi. Most commits to the project come from Canonical employees or contractors, and other contributors are required to sign a release form (CLA). The sandboxing feature, important as it is from a security standpoint, is flawed in that it requires certain other core services (such as Mir) to be running, while applications on a plain X11 desktop do not get the promised isolation, which undermines the security benefit. Questionable press releases and other marketing efforts from Canonical, along with the "central" and closed app repository, are also widely criticized aspects of Snappy. Furthermore, snap files tend to be considerably larger than packages of the same apps made using AppImage.[7]

For more details, check Snap official documentation .




3. Flatpak

Like the Snap/Snappy listed above, Flatpak is also a software deployment tool that aims to ease software distribution and use in Linux. Flatpak was previously known as "xdg-app" and was based on a concept proposed by Lennart Poettering in 2004: contain applications in a secure virtual sandbox so that they can be used without root privileges and without compromising the system's security. Alexander Larsson started tinkering with Klik (considered a forerunner of AppImage) and wanted to implement the concept better; while working at Red Hat, he wrote an implementation called xdg-app in 2015 that acted as a precursor to the current Flatpak format.

Flatpak officially came out in 2016 with backing from Red Hat, Endless Computers and Collabora. Flathub is the official repository of all Flatpak application packages. On the surface, Flatpak, like the others, is a framework for building and packaging distribution-agnostic applications for Linux. It simply requires developers to conform to a few desktop-environment guidelines for the application to integrate successfully into the Flatpak environment (a short usage sketch follows).
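A minimal sketch of the usual Flathub workflow, assuming the flatpak client is installed and that the application ID used here is published on Flathub:

# Add the Flathub remote (once per system or per user)
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

# Install an application by its ID, then run it
flatpak install flathub org.gnome.Calculator
flatpak run org.gnome.Calculator

# Update everything installed through flatpak
flatpak update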

Targeted primarily at the three popular desktop implementations FreeDesktop, KDE, and GNOME, the Flatpak framework itself is written in C and released under the LGPL. The maintenance repository can be accessed on GitHub.

A few features of Flatpak that make it stand apart are mentioned below. Notice that features Flatpak shares with AppImage and Snappy are omitted here.

One of the most criticized aspects of Flatpak, however, is its sandboxing feature itself. Sandboxing is how package managers such as Snappy and Flatpak implement important security features: it isolates the application from everything else in the system, allowing only user-defined exchange of information between the inside of the sandbox and the outside. The flaw in the concept is that the sandbox cannot be inherently impregnable. Data eventually has to be transferred between the two domains, and simple Linux commands can get around the sandbox restrictions, meaning that malicious applications might potentially escape the sandbox.[15]

This combined with the worse than expected commitment to rolling out security updates for Flatpak has resulted in widespread criticism of the team's tall claim of providing a secure framework. The blog (named flatkill ) linked at the end of this guide in fact mentions a couple of exploits that were not addressed by the Flatpak team as soon as they should've been.[15]

For more details, I suggest you to read Flatpak official documentation .




AppImage vs Snap vs Flatpak

The table attached below summarizes all the above findings into a concise and technical comparison of the three frameworks.

Feature comparison (AppImage / Snappy / Flatpak):

Unique feature -- AppImage: not an app store or repository, it's simply a packaging format for software distribution. Snappy: led by Canonical (the same company as Ubuntu), features a central app repository and active contribution from Canonical. Flatpak: features an app store called Flathub, but individuals may still host and distribute packages themselves.
Target system -- AppImage: desktops and servers. Snappy: desktops, servers, IoT devices, embedded devices, etc. Flatpak: desktops, with limited functionality on servers.
Libraries/Dependencies -- AppImage: base system; runtimes optional, libraries and other dependencies packaged. Snappy: base system, via plugins, or packaged. Flatpak: GNOME, KDE, and Freedesktop runtimes bundled, or custom bundles.
Developers -- AppImage: community driven, led by Simon Peter. Snappy: corporate driven by Canonical Ltd. Flatpak: community driven by the flatpak team, supported by enterprise backers.
Written in -- AppImage: C. Snappy: Golang, C and Python. Flatpak: C.
Initial release -- AppImage: 2004. Snappy: 2014. Flatpak: 2015.
Sandboxing -- AppImage: can be implemented. Snappy: three modes (strict, classic, and devmode) with varying confinement capabilities; runs in isolation. Flatpak: isolated, but uses system files to run applications by default.
Sandboxing platform -- AppImage: Firejail, AppArmor, Bubblewrap. Snappy: AppArmor. Flatpak: Bubblewrap.
App installation -- AppImage: not necessary; acts as a self-mounted disc. Snappy: installation via snapd. Flatpak: installed using the flatpak client tools.
App execution -- AppImage: can be run after setting the executable bit. Snappy: via desktop-integrated snap tools; runs isolated with user-defined resources. Flatpak: needs to be executed with the flatpak command if the CLI is used.
User privileges -- AppImage: can be run without root access. Snappy: can be run without root access. Flatpak: root selectively required.
Hosting applications -- AppImage: can be hosted anywhere by anybody. Snappy: has to be hosted on Canonical's proprietary servers. Flatpak: can be hosted anywhere by anybody.
Portable execution from non-system locations -- AppImage: yes. Snappy: no. Flatpak: yes, after the flatpak client is configured.
Central repository -- AppImage: AppImageHub. Snappy: Snap Store. Flatpak: Flathub.
Running multiple versions of the app -- AppImage: possible, any number of versions simultaneously. Snappy: one version of the app per channel; has to be separately configured for more. Flatpak: yes.
Updating applications -- AppImage: using the CLI command AppImageUpdate or an updater tool built into the AppImage. Snappy: requires snapd; supports delta updating and will update automatically. Flatpak: requires flatpak; update using the flatpak update command.
Package sizes on disk -- AppImage: application remains archived. Snappy: application remains archived. Flatpak: client side is uncompressed.

Here is a long tabular comparison of AppImage vs. Snap vs. Flatpak features. Please note that the comparison is made from an AppImage perspective.

Conclusion

While all three of these platforms have a lot in common and aim to be platform agnostic in approach, they offer different levels of competence in a few areas. Snaps can run on a variety of devices, including embedded ones, while AppImages and Flatpaks are built with the desktop user in mind. AppImages of popular applications, on the other hand, have superior package sizes and portability, whereas Flatpak really shines with its forward compatibility when used in a "set it and forget it" system.

If there are any flaws in this guide, please let us know in the comment section below. We will update the guide accordingly.

References:

[Jun 23, 2019] Utilizing multi core for tar+gzip-bzip compression-decompression

Highly recommended!
Notable quotes:
"... There is effectively no CPU time spent tarring, so it wouldn't help much. The tar format is just a copy of the input file with header blocks in between files. ..."
"... You can also use the tar flag "--use-compress-program=" to tell tar what compression program to use. ..."
Jun 23, 2019 | stackoverflow.com

user1118764 , Sep 7, 2012 at 6:58

I normally compress using tar zcvf and decompress using tar zxvf (using gzip due to habit).

I've recently gotten a quad core CPU with hyperthreading, so I have 8 logical cores, and I notice that many of the cores are unused during compression/decompression.

Is there any way I can utilize the unused cores to make it faster?

Warren Severin , Nov 13, 2017 at 4:37

The solution proposed by Xiong Chiamiov above works beautifully. I had just backed up my laptop with .tar.bz2 and it took 132 minutes using only one cpu thread. Then I compiled and installed tar from source: gnu.org/software/tar I included the options mentioned in the configure step: ./configure --with-gzip=pigz --with-bzip2=lbzip2 --with-lzip=plzip I ran the backup again and it took only 32 minutes. That's better than 4X improvement! I watched the system monitor and it kept all 4 cpus (8 threads) flatlined at 100% the whole time. THAT is the best solution. – Warren Severin Nov 13 '17 at 4:37

Mark Adler , Sep 7, 2012 at 14:48

You can use pigz instead of gzip, which does gzip compression on multiple cores. Instead of using the -z option, you would pipe it through pigz:
tar cf - paths-to-archive | pigz > archive.tar.gz

By default, pigz uses the number of available cores, or eight if it could not query that. You can ask for more with -p n, e.g. -p 32. pigz has the same options as gzip, so you can request better compression with -9. E.g.

tar cf - paths-to-archive | pigz -9 -p 32 > archive.tar.gz

user788171 , Feb 20, 2013 at 12:43

How do you use pigz to decompress in the same fashion? Or does it only work for compression?

Mark Adler , Feb 20, 2013 at 16:18

pigz does use multiple cores for decompression, but only with limited improvement over a single core. The deflate format does not lend itself to parallel decompression.

The decompression portion must be done serially. The other cores for pigz decompression are used for reading, writing, and calculating the CRC. When compressing on the other hand, pigz gets close to a factor of n improvement with n cores.
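For completeness, a decompression sketch to pair with the compression pipeline above (archive name assumed to match):

# Decompress with pigz and unpack; reading, writing, and CRC work use extra cores,
# but the deflate decompression itself remains single-threaded
pigz -dc archive.tar.gz | tar xf -

# Or let tar invoke pigz directly
tar -I pigz -xf archive.tar.gz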

Garrett , Mar 1, 2014 at 7:26

The hyphen here is stdout (see this page ).

Mark Adler , Jul 2, 2014 at 21:29

Yes. 100% compatible in both directions.

Mark Adler , Apr 23, 2015 at 5:23

There is effectively no CPU time spent tarring, so it wouldn't help much. The tar format is just a copy of the input file with header blocks in between files.

Jen , Jun 14, 2013 at 14:34

You can also use the tar flag "--use-compress-program=" to tell tar what compression program to use.

For example use:

tar -c --use-compress-program=pigz -f tar.file dir_to_zip

Valerio Schiavoni , Aug 5, 2014 at 22:38

Unfortunately by doing so the concurrent feature of pigz is lost. You can see for yourself by executing that command and monitoring the load on each of the cores. – Valerio Schiavoni Aug 5 '14 at 22:38

bovender , Sep 18, 2015 at 10:14

@ValerioSchiavoni: Not here, I get full load on all 4 cores (Ubuntu 15.04 'Vivid'). – bovender Sep 18 '15 at 10:14

Valerio Schiavoni , Sep 28, 2015 at 23:41

On compress or on decompress ? – Valerio Schiavoni Sep 28 '15 at 23:41

Offenso , Jan 11, 2017 at 17:26

I prefer tar - dir_to_zip | pv | pigz > tar.file . pv helps me estimate progress; you can skip it. But it's still easier to write and remember. – Offenso Jan 11 '17 at 17:26

Maxim Suslov , Dec 18, 2014 at 7:31

Common approach

There is option for tar program:

-I, --use-compress-program PROG
      filter through PROG (must accept -d)

You can use multithread version of archiver or compressor utility.

Most popular multithread archivers are pigz (instead of gzip) and pbzip2 (instead of bzip2). For instance:

$ tar -I pbzip2 -cf OUTPUT_FILE.tar.bz2 paths_to_archive
$ tar --use-compress-program=pigz -cf OUTPUT_FILE.tar.gz paths_to_archive

Archiver must accept -d. If your replacement utility doesn't have this parameter and/or you need to specify additional parameters, then use pipes (add parameters if necessary):

$ tar cf - paths_to_archive | pbzip2 > OUTPUT_FILE.tar.bz2
$ tar cf - paths_to_archive | pigz > OUTPUT_FILE.tar.gz

Input and output of singlethread and multithread are compatible. You can compress using multithread version and decompress using singlethread version and vice versa.

p7zip

For p7zip for compression you need a small shell script like the following:

#!/bin/sh
case $1 in
  -d) 7za -txz -si -so e;;
   *) 7za -txz -si -so a .;;
esac 2>/dev/null

Save it as 7zhelper.sh. Here the example of usage:

$ tar -I 7zhelper.sh -cf OUTPUT_FILE.tar.7z paths_to_archive
$ tar -I 7zhelper.sh -xf OUTPUT_FILE.tar.7z
xz

Regarding multithreaded XZ support. If you are running version 5.2.0 or above of XZ Utils, you can utilize multiple cores for compression by setting -T or --threads to an appropriate value via the environmental variable XZ_DEFAULTS (e.g. XZ_DEFAULTS="-T 0" ).
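A hedged example of both forms; it assumes XZ Utils 5.2.0 or later, and the archive and path names are placeholders:

$ XZ_DEFAULTS="-T 0" tar -Jcf OUTPUT_FILE.tar.xz paths_to_archive
$ tar cf - paths_to_archive | xz -T 0 > OUTPUT_FILE.tar.xz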

This is a fragment of man for 5.1.0alpha version:

Multithreaded compression and decompression are not implemented yet, so this option has no effect for now.

However this will not work for decompression of files that haven't also been compressed with threading enabled. From man for version 5.2.2:

Threaded decompression hasn't been implemented yet. It will only work on files that contain multiple blocks with size information in block headers. All files compressed in multi-threaded mode meet this condition, but files compressed in single-threaded mode don't even if --block-size=size is used.

Recompiling with replacement

If you build tar from sources, then you can recompile with parameters

--with-gzip=pigz
--with-bzip2=lbzip2
--with-lzip=plzip

After recompiling tar with these options you can check the output of tar's help:

$ tar --help | grep "lbzip2\|plzip\|pigz"
  -j, --bzip2                filter the archive through lbzip2
      --lzip                 filter the archive through plzip
  -z, --gzip, --gunzip, --ungzip   filter the archive through pigz

mpibzip2 , Apr 28, 2015 at 20:57

I just found pbzip2 and mpibzip2 . mpibzip2 looks very promising for clusters or if you have a laptop and a multicore desktop computer for instance. – user1985657 Apr 28 '15 at 20:57

oᴉɹǝɥɔ , Jun 10, 2015 at 17:39

Processing STDIN may in fact be slower. – oᴉɹǝɥɔ Jun 10 '15 at 17:39

selurvedu , May 26, 2016 at 22:13

Plus 1 for the xz option. It's the simplest, yet effective, approach. – selurvedu May 26 '16 at 22:13

panticz.de , Sep 1, 2014 at 15:02

You can use the shortcut -I for tar's --use-compress-program switch, and invoke pbzip2 for bzip2 compression on multiple cores:
tar -I pbzip2 -cf OUTPUT_FILE.tar.bz2 DIRECTORY_TO_COMPRESS/

einpoklum , Feb 11, 2017 at 15:59

A nice TL;DR for @MaximSuslov's answer . – einpoklum Feb 11 '17 at 15:59
If you want to have more flexibility with filenames and compression options, you can use:
find /my/path/ -type f -name "*.sql" -o -name "*.log" -exec \
tar -P --transform='s@/my/path/@@g' -cf - {} + | \
pigz -9 -p 4 > myarchive.tar.gz
Step 1: find

find /my/path/ -type f -name "*.sql" -o -name "*.log" -exec

This command will look for the files you want to archive, in this case /my/path/*.sql and /my/path/*.log . Add as many -o -name "pattern" as you want.

-exec will execute the next command using the results of find : tar

Step 2: tar

tar -P --transform='s@/my/path/@@g' -cf - {} +

--transform is a simple string replacement parameter. It strips the path of the files from the archive, so the tarball's root becomes the current directory when extracting. Note that you can't use the -C option to change directory, as you would lose the benefit of find : all files of the directory would be included.

-P tells tar to use absolute paths, so it doesn't trigger the warning "Removing leading `/' from member names". The leading '/' will be removed by --transform anyway.

-cf - tells tar to create the archive and write it to stdout; the actual file name comes from the redirection at the end of the pipeline

{} + passes every file that find found to the tar command

Step 3: pigz

pigz -9 -p 4

Use as many parameters as you want. In this case, -9 is the compression level and -p 4 is the number of cores dedicated to compression. If you run this on a heavily loaded web server, you probably don't want to use all available cores.

Step 4: archive name

> myarchive.tar.gz

Finally.

[Jun 23, 2019] Test with rsync between two partitions

Jun 23, 2019 | www.fsarchiver.org

An important test is done using rsync. It requires two partitions: the original one, and a spare partition where the archive is restored. It lets you know whether there are differences between the original and the restored filesystem. rsync is able to compare both file contents and file attributes (timestamps, permissions, owner, extended attributes, ACLs, etc.), so it's a very good test. The following command can be used to find out whether files are the same (data and attributes) on two filesystems:

rsync -axHAXnP /mnt/part1/ /mnt/part2/
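For context, a short sketch around that command; the device names are examples, and the -n flag keeps rsync in dry-run mode so neither side is modified:

# Mount the original and the restored filesystems
mount /dev/sda1 /mnt/part1
mount /dev/sdb1 /mnt/part2

# Dry-run comparison: any path rsync prints differs in data or attributes
rsync -axHAXnP /mnt/part1/ /mnt/part2/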

[Jun 22, 2019] Using SSH and Tmux for screen sharing Enable by Seth Kenlon Tmux

Jun 22, 2019 | www.redhat.com

Tmux is a screen multiplexer, meaning that it provides your terminal with virtual terminals, allowing you to switch from one virtual session to another. Modern terminal emulators feature a tabbed UI, making the use of Tmux seem redundant, but Tmux has a few peculiar features that still prove difficult to match without it.

First of all, you can launch Tmux on a remote machine, start a process running, detach from Tmux, and then log out. In a normal terminal, logging out would end the processes you started. Since those processes were started in Tmux, they persist even after you leave.

Secondly, Tmux can "mirror" its session on multiple screens. If two users log into the same Tmux session, then they both see the same output on their screens in real time.

Tmux is a lightweight, simple, and effective solution in cases where you're training someone remotely, debugging a command that isn't working for them, reviewing text, monitoring services or processes, or just avoiding the ten minutes it sometimes takes to read commands aloud over a phone clearly enough that your user is able to accurately type them.

To try this option out, you must have two computers. Assume one computer is owned by Alice, and the other by Bob. Alice remotely logs into Bob's PC and launches a Tmux session:

alice$ ssh bob.local
alice$ tmux

On his PC, Bob starts Tmux, attaching to the same session:

bob$ tmux attach

When Alice types, Bob sees what she is typing, and when Bob types, Alice sees what he's typing.
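A small variation on the same idea uses an explicitly named session, so the right session is joined when several exist (the session name here is arbitrary):

alice$ ssh bob.local
alice$ tmux new-session -s support

bob$ tmux attach -t support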

It's a simple but effective trick that enables interactive live sessions between computer users, but it is entirely text-based.

Collaboration

With these two applications, you have access to some powerful methods of supporting users. You can use these tools to manage systems remotely, as training tools, or as support tools, and in every case, it sure beats wandering around the office looking for somebody's desk. Get familiar with SSH and Tmux, and start using them today.

[Jun 20, 2019] Exploring run filesystem on Linux by Sandra Henry-Stocker

Jun 20, 2019 | www.networkworld.com

/run is home to a wide assortment of data. For example, if you take a look at /run/user, you will notice a group of directories with numeric names.

$ ls /run/user
1000  1002  121

A long file listing will clarify the significance of these numbers.

$ ls -l
total 0
drwx------ 5 shs  shs  120 Jun 16 12:44 1000
drwx------ 5 dory dory 120 Jun 16 16:14 1002
drwx------ 8 gdm  gdm  220 Jun 14 12:18 121

This allows us to see that each directory is related to a user who is currently logged in, or to the display manager, gdm. The numbers represent their UIDs. The contents of each of these directories are files used by running processes.

The /run/user files represent only a very small portion of what you'll find in /run. There are lots of other files, as well. A handful contain the process IDs for various system processes.

$ ls *.pid
acpid.pid  atopacctd.pid  crond.pid  rsyslogd.pid
atd.pid    atop.pid       gdm3.pid   sshd.pid

That sshd.pid file listed above contains the process ID for the ssh daemon (sshd), as the quick check below confirms.
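A quick way to confirm that, assuming sshd is running (the PID shown is just an example):

$ cat /run/sshd.pid
1234
$ ps -p $(cat /run/sshd.pid) -o pid,comm
  PID COMMAND
 1234 sshd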

[Jun 19, 2019] America s Suicide Epidemic

Highly recommended!
Notable quotes:
"... A suicide occurs in the United States roughly once every 12 minutes . What's more, after decades of decline, the rate of self-inflicted deaths per 100,000 people annually -- the suicide rate -- has been increasing sharply since the late 1990s. Suicides now claim two-and-a-half times as many lives in this country as do homicides , even though the murder rate gets so much more attention. ..."
"... In some states the upsurge was far higher: North Dakota (57.6%), New Hampshire (48.3%), Kansas (45%), Idaho (43%). ..."
"... Since 2008 , suicide has ranked 10th among the causes of death in this country. For Americans between the ages of 10 and 34, however, it comes in second; for those between 35 and 45, fourth. The United States also has the ninth-highest rate in the 38-country Organization for Economic Cooperation and Development. Globally , it ranks 27th. ..."
"... The rates in rural counties are almost double those in the most urbanized ones, which is why states like Idaho, Kansas, New Hampshire, and North Dakota sit atop the suicide list. Furthermore, a far higher percentage of people in rural states own guns than in cities and suburbs, leading to a higher rate of suicide involving firearms, the means used in half of all such acts in this country. ..."
"... Education is also a factor. The suicide rate is lowest among individuals with college degrees. Those who, at best, completed high school are, by comparison, twice as likely to kill themselves. Suicide rates also tend to be lower among people in higher-income brackets. ..."
"... Evidence from the United States , Brazil , Japan , and Sweden does indicate that, as income inequality increases, so does the suicide rate. ..."
"... One aspect of the suicide epidemic is puzzling. Though whites have fared far better economically (and in many other ways) than African Americans, their suicide rate is significantly higher . ..."
"... The higher suicide rate among whites as well as among people with only a high school diploma highlights suicide's disproportionate effect on working-class whites. This segment of the population also accounts for a disproportionate share of what economists Anne Case and Angus Deaton have labeled " deaths of despair " -- those caused by suicides plus opioid overdoses and liver diseases linked to alcohol abuse. Though it's hard to offer a complete explanation for this, economic hardship and its ripple effects do appear to matter. ..."
"... Trump has neglected his base on pretty much every issue; this one's no exception. ..."
Jun 19, 2019 | www.nakedcapitalism.com

Yves here. This post describes how the forces driving the US suicide surge started well before the Trump era, but explains how Trump has not only refused to acknowledge the problem, but has made matters worse.

However, it's not as if the Democrats are embracing this issue either.

BY Rajan Menon, the Anne and Bernard Spitzer Professor of International Relations at the Powell School, City College of New York, and Senior Research Fellow at Columbia University's Saltzman Institute of War and Peace Studies. His latest book is The Conceit of Humanitarian Intervention Originally published at TomDispatch .

We hear a lot about suicide when celebrities like Anthony Bourdain and Kate Spade die by their own hand. Otherwise, it seldom makes the headlines. That's odd given the magnitude of the problem.

In 2017, 47,173 Americans killed themselves. In that single year, in other words, the suicide count was nearly seven times greater than the number of American soldiers killed in the Afghanistan and Iraq wars between 2001 and 2018.

A suicide occurs in the United States roughly once every 12 minutes . What's more, after decades of decline, the rate of self-inflicted deaths per 100,000 people annually -- the suicide rate -- has been increasing sharply since the late 1990s. Suicides now claim two-and-a-half times as many lives in this country as do homicides , even though the murder rate gets so much more attention.

In other words, we're talking about a national epidemic of self-inflicted deaths.

Worrisome Numbers

Anyone who has lost a close relative or friend to suicide or has worked on a suicide hotline (as I have) knows that statistics transform the individual, the personal, and indeed the mysterious aspects of that violent act -- Why this person? Why now? Why in this manner? -- into depersonalized abstractions. Still, to grasp how serious the suicide epidemic has become, numbers are a necessity.

According to a 2018 Centers for Disease Control study , between 1999 and 2016, the suicide rate increased in every state in the union except Nevada, which already had a remarkably high rate. In 30 states, it jumped by 25% or more; in 17, by at least a third. Nationally, it increased 33% . In some states the upsurge was far higher: North Dakota (57.6%), New Hampshire (48.3%), Kansas (45%), Idaho (43%).

Alas, the news only gets grimmer.

Since 2008 , suicide has ranked 10th among the causes of death in this country. For Americans between the ages of 10 and 34, however, it comes in second; for those between 35 and 45, fourth. The United States also has the ninth-highest rate in the 38-country Organization for Economic Cooperation and Development. Globally , it ranks 27th.

More importantly, the trend in the United States doesn't align with what's happening elsewhere in the developed world. The World Health Organization, for instance, reports that Great Britain, Canada, and China all have notably lower suicide rates than the U.S., as do all but six countries in the European Union. (Japan's is only slightly lower.)

World Bank statistics show that, worldwide, the suicide rate fell from 12.8 per 100,000 in 2000 to 10.6 in 2016. It's been falling in China , Japan (where it has declined steadily for nearly a decade and is at its lowest point in 37 years), most of Europe, and even countries like South Korea and Russia that have a significantly higher suicide rate than the United States. In Russia, for instance, it has dropped by nearly 26% from a high point of 42 per 100,000 in 1994 to 31 in 2019.

We know a fair amount about the patterns of suicide in the United States. In 2017, the rate was highest for men between the ages of 45 and 64 (30 per 100,000) and those 75 and older (39.7 per 100,000).

The rates in rural counties are almost double those in the most urbanized ones, which is why states like Idaho, Kansas, New Hampshire, and North Dakota sit atop the suicide list. Furthermore, a far higher percentage of people in rural states own guns than in cities and suburbs, leading to a higher rate of suicide involving firearms, the means used in half of all such acts in this country.

There are gender-based differences as well. From 1999 to 2017, the rate for men was substantially higher than for women -- almost four-and-a-half times higher in the first of those years, slightly more than three-and-a-half times in the last.

Education is also a factor. The suicide rate is lowest among individuals with college degrees. Those who, at best, completed high school are, by comparison, twice as likely to kill themselves. Suicide rates also tend to be lower among people in higher-income brackets.

The Economics of Stress

This surge in the suicide rate has taken place in years during which the working class has experienced greater economic hardship and psychological stress. Increased competition from abroad and outsourcing, the results of globalization, have contributed to job loss, particularly in economic sectors like manufacturing, steel, and mining that had long been mainstays of employment for such workers. The jobs still available often paid less and provided fewer benefits.

Technological change, including computerization, robotics, and the coming of artificial intelligence, has similarly begun to displace labor in significant ways, leaving Americans without college degrees, especially those 50 and older, in far more difficult straits when it comes to finding new jobs that pay well. The lack of anything resembling an industrial policy of a sort that exists in Europe has made these dislocations even more painful for American workers, while a sharp decline in private-sector union membership -- down from nearly 17% in 1983 to 6.4% today -- has reduced their ability to press for higher wages through collective bargaining.

Furthermore, the inflation-adjusted median wage has barely budged over the last four decades (even as CEO salaries have soared). And a decline in worker productivity doesn't explain it: between 1973 and 2017 productivity increased by 77%, while a worker's average hourly wage only rose by 12.4%. Wage stagnation has made it harder for working-class Americans to get by, let alone have a lifestyle comparable to that of their parents or grandparents.

The gap in earnings between those at the top and bottom of American society has also increased -- a lot. Since 1979, the wages of Americans in the 10th percentile increased by a pitiful 1.2%. Those in the 50th percentile did a bit better, making a gain of 6%. By contrast, those in the 90th percentile increased by 34.3% and those near the peak of the wage pyramid -- the top 1% and especially the rarefied 0.1% -- made far more substantial gains.

And mind you, we're just talking about wages, not other forms of income like large stock dividends, expensive homes, or eyepopping inheritances. The share of net national wealth held by the richest 0.1% increased from 10% in the 1980s to 20% in 2016. By contrast, the share of the bottom 90% shrank in those same decades from about 35% to 20%. As for the top 1%, by 2016 its share had increased to almost 39% .

The precise relationship between economic inequality and suicide rates remains unclear, and suicide certainly can't simply be reduced to wealth disparities or financial stress. Still, strikingly, in contrast to the United States, suicide rates are noticeably lower and have been declining in Western European countries where income inequalities are far less pronounced, publicly funded healthcare is regarded as a right (not demonized as a pathway to serfdom), social safety nets far more extensive, and apprenticeships and worker retraining programs more widespread.

Evidence from the United States , Brazil , Japan , and Sweden does indicate that, as income inequality increases, so does the suicide rate. If so, the good news is that progressive economic policies -- should Democrats ever retake the White House and the Senate -- could make a positive difference. A study based on state-by-state variations in the U.S. found that simply boosting the minimum wage and Earned Income Tax Credit by 10% appreciably reduces the suicide rate among people without college degrees.

The Race Enigma

One aspect of the suicide epidemic is puzzling. Though whites have fared far better economically (and in many other ways) than African Americans, their suicide rate is significantly higher . It increased from 11.3 per 100,000 in 2000 to 15.85 per 100,000 in 2017; for African Americans in those years the rates were 5.52 per 100,000 and 6.61 per 100,000. Black men are 10 times more likely to be homicide victims than white men, but the latter are two-and-a-half times more likely to kill themselves.

The higher suicide rate among whites as well as among people with only a high school diploma highlights suicide's disproportionate effect on working-class whites. This segment of the population also accounts for a disproportionate share of what economists Anne Case and Angus Deaton have labeled " deaths of despair " -- those caused by suicides plus opioid overdoses and liver diseases linked to alcohol abuse. Though it's hard to offer a complete explanation for this, economic hardship and its ripple effects do appear to matter.

According to a study by the St. Louis Federal Reserve , the white working class accounted for 45% of all income earned in the United States in 1990, but only 27% in 2016. In those same years, its share of national wealth plummeted, from 45% to 22%. And as inflation-adjusted wages have decreased for men without college degrees, many white workers seem to have lost hope of success of any sort. Paradoxically, the sense of failure and the accompanying stress may be greater for white workers precisely because they traditionally were much better off economically than their African American and Hispanic counterparts.

In addition, the fraying of communities knit together by employment in once-robust factories and mines has increased social isolation among them, and the evidence that it -- along with opioid addiction and alcohol abuse -- increases the risk of suicide is strong . On top of that, a significantly higher proportion of whites than blacks and Hispanics own firearms, and suicide rates are markedly higher in states where gun ownership is more widespread.

Trump's Faux Populism

The large increase in suicide within the white working class began a couple of decades before Donald Trump's election. Still, it's reasonable to ask what he's tried to do about it, particularly since votes from these Americans helped propel him to the White House. In 2016, he received 64% of the votes of whites without college degrees; Hillary Clinton, only 28%. Nationwide, he beat Clinton in counties where deaths of despair rose significantly between 2000 and 2015.

White workers will remain crucial to Trump's chances of winning in 2020. Yet while he has spoken about, and initiated steps aimed at reducing, the high suicide rate among veterans , his speeches and tweets have never highlighted the national suicide epidemic or its inordinate impact on white workers. More importantly, to the extent that economic despair contributes to their high suicide rate, his policies will only make matters worse.

The real benefits from the December 2017 Tax Cuts and Jobs Act championed by the president and congressional Republicans flowed to those on the top steps of the economic ladder. By 2027, when the Act's provisions will run out, the wealthiest Americans are expected to have captured 81.8% of the gains. And that's not counting the windfall they received from recent changes in taxes on inheritances. Trump and the GOP doubled the annual amount exempt from estate taxes -- wealth bequeathed to heirs -- through 2025 from $5.6 million per individual to $11.2 million (or $22.4 million per couple). And who benefits most from this act of generosity? Not workers, that's for sure, but every household with an estate worth $22 million or more will.

As for job retraining provided by the Workforce Innovation and Opportunity Act, the president proposed cutting that program by 40% in his 2019 budget, later settling for keeping it at 2017 levels. Future cuts seem in the cards as long as Trump is in the White House. The Congressional Budget Office projects that his tax cuts alone will produce even bigger budget deficits in the years to come. (The shortfall last year was $779 billion and it is expected to reach $1 trillion by 2020.) Inevitably, the president and congressional Republicans will then demand additional reductions in spending for social programs.

This is all the more likely because Trump and those Republicans also slashed corporate taxes from 35% to 21% -- an estimated $1.4 trillion in savings for corporations over the next decade. And unlike the income tax cut, the corporate tax has no end date . The president assured his base that the big bucks those companies had stashed abroad would start flowing home and produce a wave of job creation -- all without adding to the deficit. As it happens, however, most of that repatriated cash has been used for corporate stock buy-backs, which totaled more than $800 billion last year. That, in turn, boosted share prices, but didn't exactly rain money down on workers. No surprise, of course, since the wealthiest 10% of Americans own at least 84% of all stocks and the bottom 60% have less than 2% of them.

And the president's corporate tax cut hasn't produced the tsunami of job-generating investments he predicted either. Indeed, in its aftermath, more than 80% of American companies stated that their plans for investment and hiring hadn't changed. As a result, the monthly increase in jobs has proven unremarkable compared to President Obama's second term, when the economic recovery that Trump largely inherited began. Yes, the economy did grow 2.3% in 2017 and 2.9% in 2018 (though not 3.1% as the president claimed). There wasn't, however, any "unprecedented economic boom -- a boom that has rarely been seen before" as he insisted in this year's State of the Union Address .

Anyway, what matters for workers struggling to get by is growth in real wages, and there's nothing to celebrate on that front: between 2017 and mid-2018 they actually declined by 1.63% for white workers and 2.5% for African Americans, while they rose for Hispanics by a measly 0.37%. And though Trump insists that his beloved tariff hikes are going to help workers, they will actually raise the prices of goods, hurting the working class and other low-income Americans the most .

Then there are the obstacles those susceptible to suicide face in receiving insurance-provided mental-health care. If you're a white worker without medical coverage or have a policy with a deductible and co-payments that are high and your income, while low, is too high to qualify for Medicaid, Trump and the GOP haven't done anything for you. Never mind the president's tweet proclaiming that "the Republican Party Will Become 'The Party of Healthcare!'"

Let me amend that: actually, they have done something. It's just not what you'd call helpful. The percentage of uninsured adults, which fell from 18% in 2013 to 10.9% at the end of 2016, thanks in no small measure to Obamacare , had risen to 13.7% by the end of last year.

The bottom line? On a problem that literally has life-and-death significance for a pivotal portion of his base, Trump has been AWOL. In fact, to the extent that economic strain contributes to the alarming suicide rate among white workers, his policies are only likely to exacerbate what is already a national crisis of epidemic proportions.


Seamus Padraig , June 19, 2019 at 6:46 am

Trump has neglected his base on pretty much every issue; this one's no exception.

DanB , June 19, 2019 at 8:55 am

Trump is running on the claim that he's turned the economy around; addressing suicide undermines this (false) claim. To state the obvious, NC readers know that Trump is incapable of caring about anyone or anything beyond his in-the-moment interpretation of his self-interest.

JCC , June 19, 2019 at 9:25 am

Not just Trump. Most of the Republican Party and much too many Democrats have also abandoned this base, otherwise known as working class Americans.

The economic facts are near staggering and this article has done a nice job of summarizing these numbers that are spread out across a lot of different sites.

I've experienced this rise within my own family and probably because of that fact I'm well aware that Trump is only a symptom of an entire political system that has all but abandoned its core constituency, the American Working Class.

sparagmite , June 19, 2019 at 10:13 am

Yep It's not just Trump. The author mentions this, but still focuses on him for some reason. Maybe accurately attributing the problems to a failed system makes people feel more hopeless. Current nihilists in Congress make it their duty to destroy once helpful institutions in the name of "fiscal responsibility," i.e., tax cuts for corporate elites.

dcblogger , June 19, 2019 at 12:20 pm

Maybe because Trump is president and bears the greatest responsibility in this particular time. A great piece and appreciate all the documentation.

Svante , June 19, 2019 at 7:00 am

I'd assumed, the "working class" had dissappeared, back during Reagan's Miracle? We'd still see each other, sitting dazed on porches & stoops of rented old places they'd previously; trying to garden, fix their car while smoking, drinking or dazed on something? Those able to morph into "middle class" lives, might've earned substantially less, especially benefits and retirement package wise. But, a couple decades later, it was their turn, as machines and foreigners improved productivity. You could lease a truck to haul imported stuff your kids could sell to each other, or help robots in some warehouse, but those 80s burger flipping, rent-a-cop & repo-man gigs dried up. Your middle class pals unemployable, everybody in PayDay Loan debt (without any pay day in sight?) SHTF Bug-out bags® & EZ Credit Bushmasters began showing up at yard sales, even up North. Opioids became the religion of the proletariat Whites simply had much farther to fall, more equity for our betters to steal. And it was damned near impossible to get the cops to shoot you?

Man, this just ain't turning out as I'd hoped. Need coffee!

Svante , June 19, 2019 at 7:55 am

We especially love the euphemism "Deaths O' Despair." since it works so well on a Chyron, especially supered over obese crackers waddling in crusty MossyOak™ Snuggies®

https://mobile.twitter.com/BernieSanders/status/1140998287933300736
https://m.youtube.com/watch?v=apxZvpzq4Mw

DanB , June 19, 2019 at 9:29 am

This is a very good article, but I have a comment about the section titled, "The Race Enigma." I think the key to understanding why African Americans have a lower suicide rate lies in understanding the sociological notion of community, and the related concept Emil Durkheim called social solidarity. This sense of solidarity and community among African Americans stands in contrast to the "There is no such thing as society" neoliberal zeitgeist that in fact produces feelings of extreme isolation, failure, and self-recriminations. An aside: as a white boy growing up in 1950s-60s Detroit I learned that if you yearned for solidarity and community what you had to do was to hang out with black people.

Amfortas the hippie , June 19, 2019 at 2:18 pm

" if you yearned for solidarity and community what you had to do was to hang out with black people."
amen, to that. in my case rural black people.
and I'll add Hispanics to that.
My wife's extended Familia is so very different from mine.
Solidarity/Belonging is cool.
I recommend it.
on the article we keep the scanner on("local news").we had a 3-4 year rash of suicides and attempted suicides(determined by chisme, or deduction) out here.
all of them were despair related more than half correlated with meth addiction itself a despair related thing.
ours were equally male/female, and across both our color spectrum.
that leaves economics/opportunity/just being able to get by as the likely cause.

David B Harrison , June 19, 2019 at 10:05 am

What's left out here is the vast majority of these suicides are men.

Christy , June 19, 2019 at 1:53 pm

Actually, in the article it states:
"There are gender-based differences as well. From 1999 to 2017, the rate for men was substantially higher than for women -- almost four-and-a-half times higher in the first of those years, slightly more than three-and-a-half times in the last."

jrs , June 19, 2019 at 1:58 pm

which in some sense makes despair the wrong word, as females are actually quite a bit more likely to be depressed for instance, but much less likely to "do the deed". Despair if we mean a certain social context maybe, but not just a psychological state.

Ex-Pralite Monk , June 19, 2019 at 10:10 am

obese cracker

You lay off the racial slur "cracker" and I'll lay off the racial slur "nigger". Deal?

rd , June 19, 2019 at 10:53 am

Suicide deaths are a function of the suicide attempt rate and the efficacy of the method used. A unique aspect of the US is the prevalence of guns in the society and therefore the greatly increased usage of them in suicide attempts compared to other countries. Guns are a very efficient way of committing suicide with a very high "success" rate. As of 2010, half of US suicides were using a gun as opposed to other countries with much lower percentages. So if the US comes even close to other countries in suicide rates then the US will surpass them in deaths. https://en.wikipedia.org/wiki/Suicide_methods#Firearms

Now we can add in opiates, especially fentanyl, that can be quite effective as well.

The economic crisis hitting middle America over the past 30 years has been quite focused on the states and populations that also tend to have high gun ownership rates. So suicide attempts in those populations have a high probability of "success".

Joe Well , June 19, 2019 at 11:32 am

I would just take this opportunity to add that the police end up getting called in to prevent a lot of suicide attempts, and just about every successful one.

In the face of so much blanket demonization of the police, along with justified criticism, it's important to remember that.

B:H , June 19, 2019 at 11:44 am

As someone who works in the mental health treatment system, acute inpatient psychiatry to be specific, I can say that of the 25 inpatients currently here, 11 have been here before, multiple times. And this is because of several issues, in my experience: inadequate inpatient resources, staff burnout, inadequate support once they leave the hospital, and the nature of their illnesses. It's a grim picture here and it's been this way for YEARS. Until MAJOR money is spent on this issue it's not going to get better. This includes opening more facilities for people to live in long term, instead of closing them, which has been the trend I've seen.

B:H , June 19, 2019 at 11:53 am

One last thing: the CEO wants "asses in beds", aka census, which is the money maker. There's less profit if people get better and don't return. And I guess I wouldn't have a job either. Hmmmm: sickness generates wealth.

[Jun 18, 2019] Introduction to Bash Shell Parameter Expansions

Jun 18, 2019 | linuxconfig.org

Before proceeding further, let me give you one tip. In the example above the shell tried to expand a non-existing variable, producing a blank result. This can be very dangerous, especially when working with path names; therefore, when writing scripts, it's always recommended to use the nounset option, which causes the shell to exit with an error whenever a non-existing variable is referenced:

$ set -o nounset
$ echo "You are reading this article on $site_!"
bash: site_: unbound variable
Working with indirection

The use of the ${!parameter} syntax adds a level of indirection to our parameter expansion. What does it mean? The parameter which the shell will try to expand is not parameter ; instead it will try to use the value of parameter as the name of the variable to be expanded. Let's explain this with an example. We all know the HOME variable expands to the path of the user's home directory in the system, right?

$ echo "${HOME}"
/home/egdoc

Very well, if now we assign the string "HOME", to another variable, and use this type of expansion, we obtain:

$ variable_to_inspect="HOME"
$ echo "${!variable_to_inspect}"
/home/egdoc

As you can see in the example above, instead of obtaining "HOME" as a result, as it would have happened if we performed a simple expansion, the shell used the value of variable_to_inspect as the name of the variable to expand, that's why we talk about a level of indirection.

Case modification expansion

This parameter expansion syntax lets us change the case of the alphabetic characters inside the string resulting from the expansion of the parameter. Say we have a variable called name ; to capitalize the text returned by the expansion of the variable we would use the ${parameter^} syntax:

$ name="egidio"
$ echo "${name^}"
Egidio

What if we want to uppercase the entire string, instead of just capitalizing it? Easy! We use the ${parameter^^} syntax:

$ echo "${name^^}"
EGIDIO

Similarly, to lowercase the first character of a string, we use the ${parameter,} expansion syntax:

$ name="EGIDIO"
$ echo "${name,}"
eGIDIO

To lowercase the entire string, instead, we use the ${parameter,,} syntax:

$ name="EGIDIO"
$ echo "${name,,}"
egidio

In all cases a pattern to match a single character can also be provided. When the pattern is provided, the operation is applied only to the parts of the original string that match it:

$ name="EGIDIO"
$ echo "${name,,[DIO]}"
EGidio

In the example above we enclose the characters in square brackets: this causes any one of them to be matched as a pattern.

When using the expansions we explained in this paragraph and the parameter is an array subscripted by @ or * , the operation is applied to all the elements contained in it:

$ my_array=(one two three)
$ echo "${my_array[@]^^}"
ONE TWO THREE

When the index of a specific element in the array is referenced, instead, the operation is applied only to it:

$ my_array=(one two three)
$ echo "${my_array[2]^^}"
THREE
Substring removal

The next syntax we will examine allows us to remove a pattern from the beginning or from the end of the string resulting from the expansion of a parameter.

Remove matching pattern from the beginning of the string

The next syntax we will examine, ${parameter#pattern} , allows us to remove a pattern from the beginning of the string resulting from the parameter expansion:

$ name="Egidio"
$ echo "${name#Egi}"
dio

A similar result can be obtained by using the "${parameter##pattern}" syntax, but with one important difference: contrary to the one we used in the example above, which removes the shortest matching pattern from the beginning of the string, it removes the longest one. The difference is clearly visible when using the * character in the pattern :

$ name="Egidio Docile"
$ echo "${name#*i}"
dio Docile

In the example above we used * as part of the pattern that should be removed from the string resulting from the expansion of the name variable. This wildcard matches any character, so the pattern itself translates to "'i' character and everything before it". As we already said, when we use the ${parameter#pattern} syntax, the shortest matching pattern is removed; in this case it is "Egi". Let's see what happens when we use the "${parameter##pattern}" syntax instead:

$ name="Egidio Docile"
$ echo "${name##*i}"
le

This time the longest matching pattern is removed ("Egidio Doci"): the longest possible match includes the third 'i' and everything before it. The result of the expansion is just "le".

Remove matching pattern from the end of the string

The syntaxes we saw above remove the shortest or longest matching pattern from the beginning of the string. If we want the pattern to be removed from the end of the string, instead, we must use the ${parameter%pattern} or ${parameter%%pattern} expansions, to remove, respectively, the shortest and longest match from the end of the string:

$ name="Egidio Docile"
$ echo "${name%i*}"
Egidio Doc

In this example the pattern we provided roughly translates to "'i' character and everything after it starting from the end of the string". The shortest match is "ile", so what is returned is "Egidio Doc". If we try the same example but use the syntax which removes the longest match, we obtain:

$ name="Egidio Docile"
$ echo "${name%%i*}"
Eg

In this case, once the longest match is removed, what is returned is "Eg".

In all the expansions we saw above, if parameter is an array and it is subscripted with * or @ , the removal of the matching pattern is applied to all its elements:

$ my_array=(one two three)
$ echo "${my_array[@]#*o}"
ne three


Search and replace pattern

We used the previous syntax to remove a matching pattern from the beginning or from the end of the string resulting from the expansion of a parameter. What if we want to replace pattern with something else? We can use the ${parameter/pattern/string} or ${parameter//pattern/string} syntax. The former replaces only the first occurrence of the pattern, the latter all the occurrences:

$ phrase="yellow is the sun and yellow is the
lemon"
$ echo "${phrase/yellow/red}"
red is the sun and yellow is the lemon

The parameter (phrase) is expanded, and the longest match of the pattern (yellow) is matched against it. The match is then replaced by the provided string (red). As you can observe only the first occurrence is replaced, so the lemon remains yellow! If we want to change all the occurrences of the pattern, we must prefix it with the / character:

$ phrase="yellow is the sun and yellow is the
lemon"
$ echo "${phrase//yellow/red}"
red is the sun and red is the lemon

This time all the occurrences of "yellow" have been replaced by "red". As you can see, the pattern is matched wherever it is found in the string resulting from the expansion of parameter . If we want to specify that it must be matched only at the beginning or at the end of the string, we must prefix it respectively with the # or % character.
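
For instance, here is a quick sketch of those anchored forms, reusing the phrase variable from above (the replacement words are just illustrative):

$ echo "${phrase/#yellow/red}"
red is the sun and yellow is the lemon
$ echo "${phrase/%lemon/lime}"
yellow is the sun and yellow is the lime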

Just like in the previous cases, if parameter is an array subscripted by either * or @ , the substitution happens in each one of its elements:

$ my_array=(one two three)
$ echo "${my_array[@]/o/u}"
une twu three
Substring expansion

The ${parameter:offset} and ${parameter:offset:length} expansions let us expand only a part of the parameter, returning a substring starting at the specified offset and length characters long. If the length is not specified the expansion proceeds until the end of the original string. This type of expansion is called substring expansion :

$ name="Egidio Docile"
$ echo "${name:3}"
dio Docile

In the example above we provided just the offset , without specifying the length , therefore the result of the expansion was the substring obtained by starting at the character specified by the offset (3).

If we specify a length, the substring will start at offset and will be length characters long:

$ echo "${name:3:3}"
dio

If the offset is negative, it is calculated from the end of the string. In this case an additional space must be added after : otherwise the shell will consider it as another type of expansion identified by :- which is used to provide a default value if the parameter to be expanded doesn't exist (we talked about it in the article about managing the expansion of empty or unset bash variables ):

$ echo "${name: -6}"
Docile
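
For comparison, here is a minimal sketch of the :- expansion that the extra space guards against; it substitutes a default value when the variable is unset or empty (the variable name city is just an example):

$ unset city
$ echo "${city:-somewhere}"
somewhere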

If the provided length is negative, instead of being interpreted as the total number of characters the resulting string should be long, it is considered as an offset to be calculated from the end of the string. The result of the expansion will therefore be a substring starting at offset and ending at length characters from the end of the original string:

$ echo "${name:7:-3}"
Doc

When using this expansion and parameter is an indexed array subscripted by * or @ , the offset is relative to the indexes of the array elements. For example:

$ my_array=(one two three)
$ echo "${my_array[@]:0:2}"
one two
$ echo "${my_array[@]: -2}"
two three

[Jun 17, 2019] Accessing remote desktops by Seth Kenlon

Jun 17, 2019 | www.redhat.com

Accessing remote desktops Need to see what's happening on someone else's screen? Here's what you need to know about accessing remote desktops.

Posted June 13, 2019 | by Seth Kenlon (Red Hat)

Anyone who's worked a support desk has had the experience: sometimes, no matter how descriptive your instructions, and no matter how concise your commands, it's just easier and quicker for everyone involved to share screens. Likewise, anyone who's ever maintained a server located in a loud and chilly data center -- or across town, or the world -- knows that often a remote viewer is the easiest method for viewing distant screens.

Linux is famously capable of being managed without seeing a GUI, but that doesn't mean you have to manage your box that way. If you need to see the desktop of a computer that you're not physically in front of, there are plenty of tools for the job.

Barriers

Half the battle of successfully screen sharing is getting into the target computer. That's by design, of course. It should be difficult to get into a computer without explicit consent.

Usually, there are up to 3 blockades for accessing a remote machine:

  1. The network firewall
  2. The target computer's firewall
  3. Screen share settings

Specific instruction on how to get past each barrier is impossible. Every network and every computer is configured uniquely, but here are some possible solutions.

Barrier 1: The network firewall

A network firewall is the target computer's LAN entry point, often a part of the router (whether an appliance from an Internet Service Provider or a dedicated server in a rack). In order to pass through the firewall and access a computer remotely, your network firewall must be configured so that the appropriate port for the remote desktop protocol you're using is accessible.

The most common, and most universal, protocol for screen sharing is VNC.

If the network firewall is on a Linux server you can access, you can broadly allow VNC traffic to pass through using firewall-cmd , first by getting your active zone, and then by allowing VNC traffic in that zone:

$ sudo firewall-cmd --get-active-zones
example-zone
  interfaces: enp0s31f6
$ sudo firewall-cmd --add-service=vnc-server --zone=example-zone

If you're not comfortable allowing all VNC traffic into the network, add a rich rule to firewalld in order to let in VNC traffic from only your IP address. For example, using an example IP address of 93.184.216.34, a rule to allow VNC traffic is:

$ sudo firewall-cmd \
--add-rich-rule='rule family="ipv4" source address="93.184.216.34" service name=vnc-server accept'

To ensure the firewall changes were made, reload the rules:

$ sudo firewall-cmd --reload

If network reconfiguration isn't possible, see the section "Screen sharing through a browser."

Barrier 2: The computer's firewall

Most personal computers have built-in firewalls. Users who are mindful of security may actively manage their firewall. Others, though, blissfully trust their default settings. This means that when you're trying to access their computer for screen sharing, their firewall may block incoming remote connection requests without the user even realizing it. Your request to view their screen may successfully pass through the network firewall only to be silently dropped by the target computer's firewall.

Changing zones in Linux.

To remedy this problem, have the user either lower their firewall or, on Fedora and RHEL, place their computer into the trusted zone. Do this only for the duration of the screen sharing session. Alternatively, have them add either one of the rules you added to the network firewall (if your user is on Linux).
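
On Fedora or RHEL, a minimal sketch of temporarily moving the active interface into the trusted zone might look like the following (enp0s31f6 is only the example interface name used earlier; substitute the user's actual interface):

$ sudo firewall-cmd --zone=trusted --change-interface=enp0s31f6
$ # when the screen sharing session is over, return the interface to its usual zone (public, for example)
$ sudo firewall-cmd --zone=public --change-interface=enp0s31f6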

A reboot is a simple way to ensure the new firewall setting is instantiated, so that's probably the easiest next step for your user. Power users can instead reload the firewall rules manually :

$ sudo firewall-cmd --reload

If you have a user override their computer's default firewall, remember to close the session by instructing them to re-enable the default firewall zone. Don't leave the door open behind you!

Barrier 3: The computer's screen share settings

To share another computer's screen, the target computer must be running remote desktop software (technically, a remote desktop server , since this software listens to incoming requests). Otherwise, you have nothing to connect to.

Some desktops, like GNOME, provide screen sharing options, which means you don't have to launch a separate screen sharing application. To activate screen sharing in GNOME, open Settings and select Sharing from the left column. In the Sharing panel, click on Screen Sharing and toggle it on.

Remote desktop viewers

There are a number of remote desktop viewers out there. Here are some of the best options.

GNOME Remote Desktop Viewer

The GNOME Remote Desktop Viewer application is codenamed Vinagre . It's a simple application that supports multiple protocols, including VNC, Spice, RDP, and SSH. Vinagre's interface is intuitive, and yet this application offers many options, including whether you want to control the target computer or only view it.

If Vinagre's not already installed, use your distribution's package manager to add it. On Red Hat Enterprise Linux and Fedora , use:

$ sudo dnf install vinagre

In order to open Vinagre, go to the GNOME desktop's Activities menu and launch Remote Desktop Viewer . Once it opens, click the Connect button in the top left corner. In the Connect window that appears, select the VNC protocol. In the Host field, enter the IP address of the computer you're connecting to. If you want to use the computer's hostname instead, you must have a valid DNS service in place, or Avahi , or entries in /etc/hosts . Do not prepend your entry with a username.

Select any additional options you prefer, and then click Connect .

If you use the GNOME Remote Desktop Viewer as a full-screen application, move your mouse to the screen's top center to reveal additional controls. Most importantly, the exit fullscreen button.

If you're connecting to a Linux virtual machine, you can use the Spice protocol instead. Spice is robust, lightweight, and transmits both audio and video, usually with no noticeable lag.

TigerVNC and TightVNC

Sometimes you're not on a Linux machine, so the GNOME Remote Desktop Viewer isn't available. As usual, open source has an answer. In fact, open source has several answers, but two popular ones are TigerVNC and TightVNC , which are both cross-platform VNC viewers. TigerVNC offers separate downloads for each platform, while TightVNC has a universal Java client.

Both of these clients are simple, with additional options included in case you need them. The defaults are generally acceptable. In order for these particular clients to connect, turn off the encryption setting for GNOME's embedded VNC server (codenamed Vino) as follows:

$ gsettings set org.gnome.Vino require-encryption false

This modification must be done on the target computer before you attempt to connect, either in person or over SSH.

Red Hat Enterprise Linux 7 remoted to RHEL 8 with TightVNC

Use the option for an SSH tunnel to ensure that your VNC connection is fully encrypted.
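
One common way to set up such a tunnel by hand is with ssh -L; this is only a sketch, assuming the target's VNC server listens on the default port 5900 and that user@remote-host is a valid SSH account on that machine:

$ ssh -L 5901:localhost:5900 user@remote-host
$ # then point the VNC viewer at localhost:5901; the traffic travels through the encrypted SSH connection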

Screen sharing through a browser

If network re-configuration is out of the question, sharing over an online meeting or collaboration platform is yet another option. The best open source platform for this is Nextcloud , which offers screen sharing over plain old HTTPS. With no firewall exceptions and no additional encryption required, Nextcloud's Talk app provides video and audio chat, plus whole-screen sharing using WebRTC technology.

This option requires a Nextcloud installation, but given that it's the best open source groupware package out there, it's probably worth looking at if you're not already running an instance. You can install Nextcloud yourself, or you can purchase hosting from Nextcloud.

To install the Talk app, go to Nextcloud's app store. Choose the Social & Communication category and then select the Talk plugin.

Next, add a user for the target computer's owner. Have them log into Nextcloud, and then click on the Talk app in the top left of the browser window.

When you start a new chat with your user, they'll be prompted by their browser to allow notifications from Nextcloud. Whether they accept or decline, Nextcloud's interface alerts them of the incoming call in the notification area at the top right corner.

Once you're in the call with your remote user, have them click on the Share screen button at the bottom of their chat window.

Remote screens

Screen sharing can be an easy method of support as long as you plan ahead so your network and clients support it from trusted sources. Integrate VNC into your support plan early, and use screen sharing to help your users get better at what they do.

Seth Kenlon is a free culture advocate and UNIX geek.


[Jun 17, 2019] How to use tee command in Linux by Fahmida Yesmin

Several examples. Mostly trivial. But a couple are interesting.
Notable quotes:
"... `tee` command can be used to store the output of any command into more than one files. ..."
"... `tee` command with '-i' option is used in this example to ignore any interrupt at the time of command execution. ..."
Jun 17, 2019 | linuxhint.com

Example-3: Writing the output into multiple files

The `tee` command can be used to store the output of any command in more than one file. To do this, list the file names separated by spaces. Run the following commands to store the output of the `date` command in two files, output1.txt and output2.txt.

$ date | tee output1.txt output2.txt
$ cat output1.txt output2.txt

... ... ...

Example-4: Ignoring interrupt signal

The `tee` command with the '-i' option is used in this example to ignore any interrupt at the time of command execution. So, the command will complete properly even if the user presses CTRL+C. Run the following commands from the terminal and check the output.

$ wc -l output.txt | tee -i output3.txt
$ cat output.txt
$ cat output3.txt

... ... ...

Example-5: Passing `tee` command output into another command

The output of the `tee` command can be passed to another command by using a pipe. In this example, the output of the first command is passed to the `tee` command, and the output of `tee` is passed on to another command. Run the following commands from the terminal.

$ ls | tee output4.txt | wc -lcw
$ ls
$ cat output4.txt

Output:
... ... ...

[Jun 10, 2019] Screen Command Examples To Manage Multiple Terminal Sessions

Jun 10, 2019 | www.ostechnix.com

OSTechNix

Screen Command Examples To Manage Multiple Terminal Sessions

by sk · Published June 6, 2019 · Updated June 7, 2019

GNU Screen is a terminal multiplexer (window manager). As the name says, Screen multiplexes the physical terminal between multiple interactive shells, so we can perform different tasks in each terminal session. All screen sessions run their programs completely independently. So, a program or process running inside a screen session will keep running even if the session is accidentally closed or disconnected. For instance, when upgrading an Ubuntu server via SSH, the Screen command will keep the upgrade process running in case your SSH session is terminated for any reason.

The GNU Screen allows us to easily create multiple screen sessions, switch between different sessions, copy text between sessions, attach or detach from a session at any time and so on. It is one of the important command line tools every Linux admin should learn and use wherever necessary. In this brief guide, we will see the basic usage of the Screen command with examples in Linux.

Installing GNU Screen

GNU Screen is available in the default repositories of most Linux operating systems.

To install GNU Screen on Arch Linux, run:

$ sudo pacman -S screen

On Debian, Ubuntu, Linux Mint:

$ sudo apt-get install screen

On Fedora:

$ sudo dnf install screen

On RHEL, CentOS:

$ sudo yum install screen

On SUSE/openSUSE:

$ sudo zypper install screen

Let us go ahead and see some screen command examples.

Screen Command Examples To Manage Multiple Terminal Sessions

The default prefix shortcut to all commands in Screen is Ctrl+a . You need to use this shortcut a lot when using Screen. So, just remember this keyboard shortcut.

Create new Screen session

Let us create a new Screen session and attach to it. To do so, type the following command in terminal:

screen

Now, run any program or process inside this session. The running process or program will keep running even if you're disconnected from this session.

Detach from Screen sessions

To detach from inside a screen session, press Ctrl+a and d . You don't have to press both key combinations at the same time. First press Ctrl+a and then press d . After detaching from a session, you will see output something like below.

[detached from 29149.pts-0.sk]

Here, 29149 is the screen ID and pts-0.sk is the name of the screen session. You can attach, detach and kill Screen sessions using either screen ID or name of the respective session.

Create a named session

You can also create a screen session with any custom name of your choice, rather than the default name, like below.

screen -S ostechnix

The above command will create a new screen session with name "xxxxx.ostechnix" and attach to it immediately. To detach from the current session, press Ctrl+a followed by d .

Naming screen sessions can be helpful when you want to find which processes are running on which sessions. For example, when setting up a LAMP stack inside a session, you can simply name it like below.

screen -S lampstack
Create detached sessions

Sometimes, you might want to create a session, but don't want to attach to it automatically. In such cases, run the following command to create a detached session named "senthil" :

screen -S senthil -d -m

Or, shortly:

screen -dmS senthil

The above command will create a session called "senthil", but won't attach to it.

List Screen sessions

To list all running sessions (attached or detached), run:

screen -ls

Sample output:

There are screens on:
	29700.senthil	(Detached)
	29415.ostechnix	(Detached)
	29149.pts-0.sk	(Detached)
3 Sockets in /run/screens/S-sk.

As you can see, I have three running sessions and all are detached.

Attach to Screen sessions

If you want to attach to a session at any time, for example 29415.ostechnix , simply run:

screen -r 29415.ostechnix

Or,

screen -r ostechnix

Or, just use the screen ID:

screen -r 29415

To verify if we are attached to the aforementioned session, simply list the open sessions and check.

screen -ls

Sample output:

There are screens on:
        29700.senthil   (Detached)
        29415.ostechnix (Attached)
        29149.pts-0.sk  (Detached)
3 Sockets in /run/screens/S-sk.

As you see in the above output, we are currently attached to the 29415.ostechnix session. To detach from the current session, press Ctrl+a followed by d.

Create nested sessions

When we run "screen" command, it will create a single session for us. We can, however, create nested sessions (a session inside a session).

First, create a new session or attach to an opened session. I am going to create a new session named "nested".

screen -S nested

Now, press Ctrl+a and c inside the session to create another session. Just repeat this to create any number of nested Screen sessions. Each session will be assigned a number, starting from 0 .

You can move to the next session by pressing Ctrl+a n and to the previous one by pressing Ctrl+a p .

Here is the list of important keyboard shortcuts to manage nested sessions:

Ctrl+a c – create a new nested session
Ctrl+a n – switch to the next session
Ctrl+a p – switch to the previous session
Ctrl+a " – list all sessions and pick one
Ctrl+a 0-9 – switch to the session with that number

Lock sessions

Screen has an option to lock a screen session. To do so, press Ctrl+a and x . Enter your Linux password to lock the screen.

Screen used by sk <sk> on ubuntuserver.
Password:
Logging sessions

You might want to log everything when you're in a Screen session. To do so, just press Ctrl+a and H .

Alternatively, you can enable the logging when starting a new session using -L parameter.

screen -L

From now on, all activities you've done inside the session will be recorded and stored in a file named screenlog.x in your $HOME directory. Here, x is a number.

You can view the contents of the log file using cat command or any text viewer applications.
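
For example, assuming the first log file created was screenlog.0 in your home directory:

$ cat ~/screenlog.0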




Kill Screen sessions

If a session is not required anymore, just kill it. To kill a detached session named "senthil":

screen -r senthil -X quit

Or,

screen -X -S senthil quit

Or,

screen -X -S 29415 quit

If there are no open sessions, you will see the following output:

$ screen -ls
No Sockets found in /run/screens/S-sk.

For more details, refer to the man pages.

$ man screen

There is also a similar command line utility named "Tmux" which does the same job as GNU Screen. To know more about it, refer to the following guide.


[Jun 06, 2019] For Profit College, Student Loan Default, and the Economic Impact of Student Loans

We should object to the complete neoliberal "instrumentalization" of education: education became just a means to get a nicely paid job. And even this hope is mostly an illusion for all but the top 5% of students...
And while students share their own part of responsibility for accumulating the debt, the predatory behaviour of neoliberal universities is an important factor that should not be discounted, and the perpetrators should be held responsible. Especially the dirty tricks of ballooning the size of the debt and pushing students into "hopeless" specialties, which would be fine if they were sons or daughters of the well-to-do and their parents still supported them financially.
Actually neoliberalism justifies predatory behaviour and as such is a doomed social system, as without solidarity some members of the financial oligarchy that rules the country might sooner or later hang from the lampposts.
Notable quotes:
"... It also never ceases to amaze me the number of anti-educational opinions which flare up when the discussion of student loan default arises. There are always those who will prophesize there is no need to attain a higher level of education as anyone could be something else and be successful and not require a higher level of education. Or they come forth with the explanation on how young 18 year-olds and those already struggling should be able to ascertain the risk of higher debt when the cards are already stacked against them legally. ..."
"... There does not appear to be much movement on the part of Congress to reconcile the issues in favor of students as opposed to the non-profit and for profit institutes. ..."
"... It's easy to explain, really. According to the Department of Education ( https://studentaid.ed.gov/sa/repay-loans/understand/plans ) you're going to be paying off that loan at minimum payments for 25 years. Assuming your average bachelor's degree is about $30k if you go all-loans ( http://collegecost.ed.gov/catc/ ) and the average student loan interest rate is a generous 5% ( http://www.direct.ed.gov/calc.html ), you're going to be paying $175 a month for a sizable chunk of your adult life. ..."
"... Majoring in IT or Computer Science would have a been a great move in the late 1990's; however, if you graduated around 2000, you likely would have found yourself facing a tough job market.. Likewise, majoring in petroleum engineering or petroleum geology would have seemed like a good move a couple of years ago; however, now that oil prices are crashing, it's presumably a much tougher job market. ..."
"... To confuse going to college with vocational education is to commit a major category error. I think bright, ambitious high school graduates– who are looking for upward social mobility– would be far better served by a plumbing or carpentry apprenticeship program. A good plumber can earn enough money to send his or her children to Yale to study Dante, Boccaccio, and Chaucer. ..."
"... A bright working class kid who goes off to New Haven, to study medieval lit, will need tremendous luck to overcome the enormous class prejudice she will face in trying to establish herself as a tenure-track academic. If she really loves medieval literature for its own sake, then to study it deeply will be "worth it" even if she finds herself working as a barista or store-clerk. ..."
"... As a middle-aged doctoral student in the humanities you should not even be thinking much about your loans. Write the most brilliant thesis that you can, get a book or some decent articles published from it– and swim carefully in the shark-infested waters of academia until you reach the beautiful island of tenured full-professorship. If that island turns out to be an ever-receding mirage, sell your soul to our corporate overlords and pay back your loans! Alternatively, tune in, drop out, and use your finely tuned research and rhetorical skills to help us overthrow the kleptocratic regime that oppresses us all!! ..."
"... Genuine education should provide one with profound contentment, grateful for the journey taken, and a deep appreciation of life. ..."
"... Instead many of us are left confused – confusing career training (redundant and excessive, as it turned out, unfortunate for the student, though not necessarily bad for those on the supply side, one must begrudgingly admit – oops, there goes one's serenity) with enlightenment. ..."
"... We all should be against Big Educational-Complex and its certificates-producing factory education that does not put the student's health and happiness up there with co-existing peacefully with Nature. ..."
"... Remember DINKs? Dual Income No Kids. Dual Debt Bad Job No House No Kids doesn't work well for acronyms. Better for an abbreviated hash tag? ..."
"... I graduated law school with $100k+ in debt inclusive of undergrad. I've never missed a loan payment and my credit score is 830. my income has never reached $100k. my payments started out at over $1000 a month and through aggressive payment and refinancing, I've managed to reduce the payments to $500 a month. I come from a lower middle class background and my parents offered what I call 'negative help' throughout college. ..."
"... my unfortunate situation is unique and I wouldn't wish my debt on anyone. it's basically indentured servitude. it's awful, it's affects my life and health in ways no one should have to live, I have all sorts of stress related illnesses. I'm basically 2 months away from default of everything. my savings is negligible and my net worth is still negative 10 years after graduating. ..."
"... My story is very similar to yours, although I haven't had as much success whittling down my loan balances. But yes, it's made me a socialist as well; makes me wonder how many of us, i.e. ppl radicalized by student loans, are out there. Perhaps the elites' grand plan to make us all debt slaves will eventually backfire in more ways than via the obvious economic issues? ..."
Nov 09, 2015 | naked capitalism

It also never ceases to amaze me the number of anti-educational opinions which flare up when the discussion of student loan default arises. There are always those who will prophesy there is no need to attain a higher level of education, as anyone could be something else and be successful and not require a higher level of education. Or they come forth with the explanation of how young 18 year-olds and those already struggling should be able to ascertain the risk of higher debt when the cards are already stacked against them legally. In any case, during a poor economy, those with more education appear to be employed at a higher rate than those with less education. The issue for those pursuing an education is the ever-increasing burden and danger of student loans and associated interest rates which prevent younger people from moving into the economy successfully after graduation, the failure of the government to support higher education and protect students from for-profit fraud, the increased risk of default and becoming indentured to the government, and the increased cost of an education which has surpassed healthcare in rising costs.

There does not appear to be much movement on the part of Congress to reconcile the issues in favor of students as opposed to the non-profit and for profit institutes.

Ranger Rick, November 9, 2015 at 11:34 am

It's easy to explain, really. According to the Department of Education ( https://studentaid.ed.gov/sa/repay-loans/understand/plans ) you're going to be paying off that loan at minimum payments for 25 years. Assuming your average bachelor's degree is about $30k if you go all-loans ( http://collegecost.ed.gov/catc/ ) and the average student loan interest rate is a generous 5% ( http://www.direct.ed.gov/calc.html ), you're going to be paying $175 a month for a sizable chunk of your adult life.

If you're merely hitting the median income of a bachelor's degree after graduation, $55k (http://nces.ed.gov/fastfacts/display.asp?id=77 ), and good luck with that in this economy, you're still paying ~31.5% of that in taxes (http://www.oecd.org/ctp/tax-policy/taxing-wages-20725124.htm ) you're left with $35.5k before any other costs. Out of that, you're going to have to come up with the down payment to buy a house and a car after spending more money than you have left (http://www.bls.gov/cex/csxann13.pdf).

Louis, November 9, 2015 at 12:33 pm

The last paragraph sums it up perfectly, especially the predictable counterarguments. Accurately assessing what jobs will be in demand several years down the road is very difficult, if not impossible.

Majoring in IT or Computer Science would have been a great move in the late 1990's; however, if you graduated around 2000, you likely would have found yourself facing a tough job market. Likewise, majoring in petroleum engineering or petroleum geology would have seemed like a good move a couple of years ago; however, now that oil prices are crashing, it's presumably a much tougher job market.

Do we blame the computer science majors graduating in 2000 or the graduates struggling to break into the energy industry, now that oil prices have dropped, for majoring in "useless" degrees? It's much easier to create a strawman about useless degrees that accept the fact that there is a element of chance in terms of what the job market will look like upon graduation.

The cost of higher education is absurd and there simply aren't enough good jobs to go around-there are people out there who majored in the "right" fields and have found themselves underemployed or unemployed-so I'm not unsympathetic to the plight of many people in my generation.

At the same time, I do believe in personal responsibility-I'm wary of creating a moral hazard if people can discharge loans in bankruptcy. I've been paying off my student loans (grad school) for a couple of years-I kept the level debt below any realistic starting salary-and will eventually have the loans paid off, though it may be a few more years.

I am really conflicted between believing in personal responsibility but also seeing how this generation has gotten screwed. I really don't know what the right answer is.

Ulysses, November 9, 2015 at 1:47 pm

"The cost of higher education is absurd and there simply aren't enough good jobs to go around-there are people out there who majored in the "right" fields and have found themselves underemployed or unemployed-so I'm not unsympathetic to the plight of many people in my generation."

To confuse going to college with vocational education is to commit a major category error. I think bright, ambitious high school graduates– who are looking for upward social mobility– would be far better served by a plumbing or carpentry apprenticeship program. A good plumber can earn enough money to send his or her children to Yale to study Dante, Boccaccio, and Chaucer.

A bright working class kid who goes off to New Haven, to study medieval lit, will need tremendous luck to overcome the enormous class prejudice she will face in trying to establish herself as a tenure-track academic. If she really loves medieval literature for its own sake, then to study it deeply will be "worth it" even if she finds herself working as a barista or store-clerk.

None of this, of course, excuses the outrageously high tuition charges, administrative salaries, etc. at the "top schools." They are indeed institutions that reinforce class boundaries. My point is that strictly career education is best begun at a less expensive community college. After working in the IT field, for example, a talented associate's degree-holder might well find that her employer will subsidize study at an elite school with an excellent computer science program.

My utopian dream would be a society where all sorts of studies are open to everyone– for free. Everyone would have a basic Job or Income guarantee and could study as little, or as much, as they like!

Ulysses, November 9, 2015 at 2:05 pm

As a middle-aged doctoral student in the humanities you should not even be thinking much about your loans. Write the most brilliant thesis that you can, get a book or some decent articles published from it– and swim carefully in the shark-infested waters of academia until you reach the beautiful island of tenured full-professorship.

If that island turns out to be an ever-receding mirage, sell your soul to our corporate overlords and pay back your loans! Alternatively, tune in, drop out, and use your finely tuned research and rhetorical skills to help us overthrow the kleptocratic regime that oppresses us all!!

subgenius, November 9, 2015 at 3:07 pm

except (in my experience) the corporate overlords want young meat.

I have 2 masters degrees, 2 undergraduate degrees and a host of random diplomas – but at 45, I am variously too old, too qualified, or lacking sufficient recent corporate experience in the field to get hired

Trying to get enough cash to get a contractor license seems my best chance at anything other than random day work.

MyLessThanPrimeBeef, November 9, 2015 at 3:41 pm

Genuine education should provide one with profound contentment, grateful for the journey taken, and a deep appreciation of life.

Instead many of us are left confused – confusing career training (redundant and excessive, as it turned out, unfortunate for the student, though not necessarily bad for those on the supply side, one must begrudgingly admit – oops, there goes one's serenity) with enlightenment.

"I would spend another 12 soul-nourishing years pursuing those non-profit degrees' vs 'I can't feed my family with those paper certificates.'

jrs, November 9, 2015 at 2:55 pm

I am anti-education as the solution to our economic woes. We need jobs or a guaranteed income. And we need to stop outsourcing the jobs that exist. And we need a much higher minimum wage. And maybe we need work sharing. I am also against using screwdrivers to pound in a nail. But why are you so anti screwdriver anyway?

And I see calls for more and more education used to make it seem ok to pay people without much education less than a living wage. Because they deserve it for being whatever drop outs. And it's not ok.

I don't actually have anything against the professors (except their overall political cowardice in times demanding radicalism!). Now the administrators, yea I can see the bloat and the waste there. But mostly, I have issues with more and more education being preached as the answer to a jobs and wages crisis.

MyLessThanPrimeBeef -> jrs, November 9, 2015 at 3:50 pm

We all should be against Big Educational-Complex and its certificates-producing factory education that does not put the student's health and happiness up there with co-existing peacefully with Nature.

Kris Alman, November 9, 2015 at 11:11 am

Remember DINKs? Dual Income No Kids. Dual Debt Bad Job No House No Kids doesn't work well for acronyms. Better for an abbreviated hash tag?

debitor serf, November 9, 2015 at 7:17 pm

I graduated law school with $100k+ in debt inclusive of undergrad. I've never missed a loan payment and my credit score is 830. my income has never reached $100k. my payments started out at over $1000 a month and through aggressive payment and refinancing, I've managed to reduce the payments to $500 a month. I come from a lower middle class background and my parents offered what I call 'negative help' throughout college.

my unfortunate situation is unique and I wouldn't wish my debt on anyone. it's basically indentured servitude. it's awful, it affects my life and health in ways no one should have to live, I have all sorts of stress-related illnesses. I'm basically 2 months away from default of everything. my savings is negligible and my net worth is still negative 10 years after graduating.

student loans, combined with a rigged system, turned me into a closeted socialist. I am smart, hard working and resourceful. if I can't make it in this world, heck, then who can? few, because the system is rigged!

I have no problems at all taking all the wealth of the oligarchs and redistributing it. people look at me like I'm crazy. confiscate it all I say, and reset the system from scratch. let them try to make their billions in a system where things are fair and not rigged...

Ramoth, November 9, 2015 at 9:23 pm

My story is very similar to yours, although I haven't had as much success whittling down my loan balances. But yes, it's made me a socialist as well; makes me wonder how many of us, i.e. ppl radicalized by student loans, are out there. Perhaps the elites' grand plan to make us all debt slaves will eventually backfire in more ways than via the obvious economic issues?

[May 24, 2019] Deal with longstanding issues like government favoritism toward local companies

May 24, 2019 | theregister.co.uk

How is it that that can be a point of contention? Name me one country in this world that doesn't favor local companies.

These company representatives who are complaining about local favoritism would be howling like wolves if Huawei were given favor in the US over any one of them.

I'm not saying that there are no reasons to be unhappy about business with China, but that is not one of them.


A.P. Veening , 1 day

Re: "deal with longstanding issues like government favoritism toward local companies"

Name me one country in this world that doesn't favor local companies.

I'll give you two: Liechtenstein and Vatican City, though admittedly neither has a lot of local companies.

STOP_FORTH , 1 day
Re: "deal with longstanding issues like government favoritism toward local companies"

Doesn't Liechtenstein make most of the dentures in the EU? Try taking a bite out of that market.

Kabukiwookie , 1 day
Re: "deal with longstanding issues like government favoritism toward local companies"

How can you leave Andorra out of that list?

A.P. Veening , 14 hrs
Re: "deal with longstanding issues like government favoritism toward local companies"

While you are at it, how can you leave Monaco and San Marino out of that list?

[May 24, 2019] Huawei equipment can't be trusted? As distinct from Cisco which we already have backdoored :]

May 24, 2019 | theregister.co.uk

" The Trump administration, backed by US cyber defense experts, believes that Huawei equipment can't be trusted " .. as distinct from Cisco which we already have backdoored :]

Sir Runcible Spoon
Re: Huawei equipment can't be trusted?

Didn't someone once say "I don't trust anyone who can't be bribed"?

Not sure why that popped into my head.

[May 24, 2019] The USA isn't annoyed at Huawei spying, they are annoyed that Huawei isn't spying for them

May 24, 2019 | theregister.co.uk

Pick your poison

The USA isn't annoyed at Huawei spying, they are annoyed that Huawei isn't spying for them . If you don't use Huawei who would you use instead? Cisco? Yes, just open up and let the NSA ream your ports. Oooo, filthy.

If you don't know the chip design, can't verify the construction, don't know the code and can't verify the deployment to the hardware; you are already owned.

The only question is by which state actor: China, USA, Israel, UK...? Anonymous Coward

[May 24, 2019] This is going to get ugly

May 24, 2019 | theregister.co.uk

..and we're all going to be poorer for it. Americans, Chinese and bystanders.

I was recently watching the WW1 channel on youtube (awesome thing, go Indy and team!) - the delusion, lack of situational understanding and short sightedness underscoring the actions of the main actors that started the Great War can certainly be paralleled to the situation here.

The very idea that you can manage to send China 40 years back in time with no harm on your side is bonkers.

[May 24, 2019] Networks are usually highly segmented and protected via firewalls and proxy. so access to routers from Internet is impossible

You can put a backdoor in the router, but the problem is that you will never be able to access it. Also, for important deployments, countries inspect the source code of the firmware. The USA is playing dirty games here, no matter whether the Chinese are right or wrong.
May 24, 2019 | theregister.co.uk
Re: Technological silos

They're not necessarily silos. If you design a network as a flat space with all interactions peer to peer, then you have set yourself the problem of ensuring all nodes on that network are secure and enforcing traffic rules equally on each node. This is impractical -- it's not that it couldn't be done, but it's a huge waste of resources. A more practical strategy is to layer the network, providing choke points where traffic can be monitored and managed. We currently do this with firewalls and demilitarized zones, the goal normally being to prevent unwanted traffic coming in (although it can be used to monitor and control traffic going out). This has nothing to do with incompatible standards.

I'm not sure about the rest of the FUD in this article. Yes, it's all very complicated. But just as we have to know how to layer our networks, we also know how to manage our information. For example, anyone who has a smartphone on which they co-mingle sensitive data and public access, relying on the integrity of its software to keep everything separate, is just plain asking for trouble. Quite apart from the risk of data leakage between applications, it's a portable device that can get lost, stolen or confiscated (and duplicated.....). Use common sense. Manage your data.

[May 24, 2019] Internet and phones aren't the issue. Its the chips

Notable quotes:
"... The real issue is the semiconductors - the actual silicon. ..."
"... China has some fabs now, but far too few to handle even just their internal demand - and tech export restrictions have long kept their leading edge capabilities significantly behind the cutting edge. ..."
"... On the flip side: Foxconn, Huawei et al are so ubiquitous in the electronics global supply chain that US retail tech companies - specifically Apple - are going to be severely affected, or at least extremely vulnerable to being pushed forward as a hostage. ..."
May 24, 2019 | theregister.co.uk

Duncan Macdonald

Internet, phones, Android aren't the issue - except if the US is able to push China out of GSM/ITU.

The real issue is the semiconductors - the actual silicon.

The majority of raw silicon wafers as well as the finished chips are created in the US or its most aligned allies: Japan, Taiwan. The dominant manufacturers of semiconductor equipment are also largely US with some Japanese and EU suppliers.

If fabs can't sell to China -- regardless of who actually paid to manufacture the chips -- because Applied Materials has been banned from any business related to China, this is pretty severe for the 5-10 years it will take until the Chinese can ramp up their capacity.

China has some fabs now, but far too few to handle even just their internal demand - and tech export restrictions have long kept their leading edge capabilities significantly behind the cutting edge.

On the flip side: Foxconn, Huawei et al are so ubiquitous in the electronics global supply chain that US retail tech companies - specifically Apple - are going to be severely affected, or at least extremely vulnerable to being pushed forward as a hostage.

Interesting times...

[May 24, 2019] We shared and the Americans shafted us. And now *they* are bleating about people not respecting Intellectual Property Rights?

Notable quotes:
"... The British aerospace sector (not to be confused with the company of a similar name but more Capital Letters) developed, amongst other things, the all-flying tailplane, successful jet-powered VTOL flight, noise-and drag-reducing rotor blades and the no-tailrotor systems and were promised all sorts of crunchy goodness if we shared it with our wonderful friends across the Atlantic. ..."
"... We shared and the Americans shafted us. Again. And again. And now *they* are bleating about people not respecting Intellectual Property Rights? ..."
May 24, 2019 | theregister.co.uk

Anonymous Coward

Sic semper tyrannis

"Without saying so publicly, they're glad there's finally some effort to deal with longstanding issues like government favoritism toward local companies, intellectual property theft, and forced technology transfers."

The British aerospace sector (not to be confused with the company of a similar name but more Capital Letters) developed, amongst other things, the all-flying tailplane, successful jet-powered VTOL flight, noise-and drag-reducing rotor blades and the no-tailrotor systems and were promised all sorts of crunchy goodness if we shared it with our wonderful friends across the Atlantic.

We shared and the Americans shafted us. Again. And again. And now *they* are bleating about people not respecting Intellectual Property Rights?

And as for moaning about backdoors in Chinese kit, who do Cisco et al report to again? Oh yeah, those nice Three Letter Acronym people loitering in Washington and Langley...

[May 24, 2019] Oh dear. Secret Huawei enterprise router snoop 'backdoor' was Telnet service, sighs Vodafone The Register

May 24, 2019 | theregister.co.uk

A claimed deliberate spying "backdoor" in Huawei routers used in the core of Vodafone Italy's 3G network was, in fact, a Telnet -based remote debug interface.

The Bloomberg financial newswire reported this morning that Vodafone had found "vulnerabilities going back years with equipment supplied by Shenzhen-based Huawei for the carrier's Italian business".

"Europe's biggest phone company identified hidden backdoors in the software that could have given Huawei unauthorized access to the carrier's fixed-line network in Italy," wailed the newswire.

Unfortunately for Bloomberg, Vodafone had a far less alarming explanation for the deliberate secret "backdoor" – a run-of-the-mill LAN-facing diagnostic service, albeit a hardcoded undocumented one.

"The 'backdoor' that Bloomberg refers to is Telnet, which is a protocol that is commonly used by many vendors in the industry for performing diagnostic functions. It would not have been accessible from the internet," said the telco in a statement to The Register , adding: "Bloomberg is incorrect in saying that this 'could have given Huawei unauthorized access to the carrier's fixed-line network in Italy'.

"This was nothing more than a failure to remove a diagnostic function after development."

It added the Telnet service was found during an audit, which means it can't have been that secret or hidden: "The issues were identified by independent security testing, initiated by Vodafone as part of our routine security measures, and fixed at the time by Huawei."

Huawei itself told us: "We were made aware of historical vulnerabilities in 2011 and 2012 and they were addressed at the time. Software vulnerabilities are an industry-wide challenge. Like every ICT vendor we have a well-established public notification and patching process, and when a vulnerability is identified we work closely with our partners to take the appropriate corrective action."

Prior to removing the Telnet server, Huawei was said to have insisted in 2011 on using the diagnostic service to configure and test the network devices. Bloomberg reported, citing a leaked internal memo from then-Vodafone CISO Bryan Littlefair, that the Chinese manufacturer thus refused to completely disable the service at first:

Vodafone said Huawei then refused to fully remove the backdoor, citing a manufacturing requirement. Huawei said it needed the Telnet service to configure device information and conduct tests including on Wi-Fi, and offered to disable the service after taking those steps, according to the document.

El Reg understands that while Huawei indeed resisted removing the Telnet functionality from the affected items – broadband network gateways in the core of Vodafone Italy's 3G network – this was done to the satisfaction of all involved parties by the end of 2011, with another network-level product de-Telnet-ised in 2012.

Broadband network gateways in 3G UMTS mobile networks are described in technical detail in this Cisco (sorry) PDF . The devices are also known as Broadband Remote Access Servers and sit at the edge of a network operator's core.

The issue is separate from Huawei's failure to fully patch consumer-grade routers , as exclusively revealed by The Register in March.

Plenty of other things (cough, cough, Cisco) to panic about

Characterising this sort of Telnet service as a covert backdoor for government spies is a bit like describing your catflap as an access portal that allows multiple species to pass unhindered through a critical home security layer. In other words, massively over-egging the pudding.

Many Reg readers won't need it explaining, but Telnet is a routinely used method of connecting to remote devices for management purposes. When deployed with appropriate security and authentication controls in place, it can be very useful. In Huawei's case, the Telnet service wasn't facing the public internet, and was used to set up and test devices.

Look, it's not great that this was hardcoded into the equipment and undocumented – it was, after all, declared a security risk – and had to be removed after some pressure. However, it's not quite the hidden deliberate espionage backdoor for Beijing that some fear.

Twitter-enabled infoseccer Kevin Beaumont also shared his thoughts on the story, highlighting the number of vulns in equipment from Huawei competitor Cisco, a US firm:


For example, a pretty bad remote access hole was discovered in some Cisco gear , which the mainstream press didn't seem too fussed about. Ditto hardcoded root logins in Cisco video surveillance boxes. Lots of things unfortunately ship with insecure remote access that ought to be removed; it's not evidence of a secret backdoor for state spies.

Given Bloomberg's previous history of trying to break tech news, when it claimed that tiny spy chips were being secretly planted on Supermicro server motherboards – something that left the rest of the tech world scratching its collective head once the initial dust had settled – it may be best to take this latest revelation with a pinch of salt. Telnet wasn't even mentioned in the latest report from the UK's Huawei Cyber Security Evaluation Centre, which savaged Huawei's pisspoor software development practices.

While there is ample evidence in the public domain that Huawei is doing badly on the basics of secure software development, so far there has been little that tends to show it deliberately implements hidden espionage backdoors. Rhetoric from the US alleging Huawei is a threat to national security seems to be having the opposite effect around the world.

With Bloomberg, an American company, characterising Vodafone's use of Huawei equipment as "defiance" showing "that countries across Europe are willing to risk rankling the US in the name of 5G preparedness," it appears that the US-Euro-China divide on 5G technology suppliers isn't closing up any time soon. ®

Bootnote

This isn't shaping up to be a good week for Bloomberg. Only yesterday High Court judge Mr Justice Nicklin ordered the company to pay up £25k for the way it reported a live and ongoing criminal investigation.

[May 17, 2019] Shareholder Capitalism, the Military, and the Beginning of the End for Boeing

Highly recommended!
Notable quotes:
"... Like many of its Wall Street counterparts, Boeing also used complexity as a mechanism to obfuscate and conceal activity that is incompetent, nefarious and/or harmful to not only the corporation itself but to society as a whole (instead of complexity being a benign byproduct of a move up the technology curve). ..."
"... The economists who built on Friedman's work, along with increasingly aggressive institutional investors, devised solutions to ensure the primacy of enhancing shareholder value, via the advocacy of hostile takeovers, the promotion of massive stock buybacks or repurchases (which increased the stock value), higher dividend payouts and, most importantly, the introduction of stock-based pay for top executives in order to align their interests to those of the shareholders. These ideas were influenced by the idea that corporate efficiency and profitability were impinged upon by archaic regulation and unionization, which, according to the theory, precluded the ability to compete globally. ..."
"... "Return on Net Assets" (RONA) forms a key part of the shareholder capitalism doctrine. ..."
"... If the choice is between putting a million bucks into new factory machinery or returning it to shareholders, say, via dividend payments, the latter is the optimal way to go because in theory it means higher net returns accruing to the shareholders (as the "owners" of the company), implicitly assuming that they can make better use of that money than the company itself can. ..."
"... It is an absurd conceit to believe that a dilettante portfolio manager is in a better position than an aviation engineer to gauge whether corporate investment in fixed assets will generate productivity gains well north of the expected return for the cash distributed to the shareholders. But such is the perverse fantasy embedded in the myth of shareholder capitalism ..."
"... When real engineering clashes with financial engineering, the damage takes the form of a geographically disparate and demoralized workforce: The factory-floor denominator goes down. Workers' wages are depressed, testing and quality assurance are curtailed. ..."
May 17, 2019 | www.nakedcapitalism.com

The fall of the Berlin Wall and the corresponding end of the Soviet Empire gave the fullest impetus imaginable to the forces of globalized capitalism, and correspondingly unfettered access to the world's cheapest labor. What was not to like about that? It afforded multinational corporations vastly expanded opportunities to fatten their profit margins and increase the bottom line with seemingly no risk posed to their business model.

Or so it appeared. In 2000, aerospace engineer L.J. Hart-Smith's remarkable paper, sardonically titled "Out-Sourced Profits – The Cornerstone of Successful Subcontracting," laid out the case against several business practices of Hart-Smith's previous employer, McDonnell Douglas, which had incautiously ridden the wave of outsourcing when it merged with the author's new employer, Boeing. Hart-Smith's intention in telling his story was a cautionary one for the newly combined Boeing, lest it follow its then recent acquisition down the same disastrous path.

Of the manifold points and issues identified by Hart-Smith, there is one that stands out as the most compelling in terms of understanding the current crisis enveloping Boeing: The embrace of the metric "Return on Net Assets" (RONA). When combined with the relentless pursuit of cost reduction (via offshoring), RONA taken to the extreme can undermine overall safety standards.

Related to this problem is the intentional and unnecessary use of complexity as an instrument of propaganda. Like many of its Wall Street counterparts, Boeing also used complexity as a mechanism to obfuscate and conceal activity that is incompetent, nefarious and/or harmful to not only the corporation itself but to society as a whole (instead of complexity being a benign byproduct of a move up the technology curve).

All of these pernicious concepts are branches of the same poisoned tree: " shareholder capitalism ":

[A] notion best epitomized by Milton Friedman that the only social responsibility of a corporation is to increase its profits, laying the groundwork for the idea that shareholders, being the owners and the main risk-bearing participants, ought therefore to receive the biggest rewards. Profits therefore should be generated first and foremost with a view toward maximizing the interests of shareholders, not the executives or managers who (according to the theory) were spending too much of their time, and the shareholders' money, worrying about employees, customers, and the community at large. The economists who built on Friedman's work, along with increasingly aggressive institutional investors, devised solutions to ensure the primacy of enhancing shareholder value, via the advocacy of hostile takeovers, the promotion of massive stock buybacks or repurchases (which increased the stock value), higher dividend payouts and, most importantly, the introduction of stock-based pay for top executives in order to align their interests to those of the shareholders. These ideas were influenced by the idea that corporate efficiency and profitability were impinged upon by archaic regulation and unionization, which, according to the theory, precluded the ability to compete globally.

"Return on Net Assets" (RONA) forms a key part of the shareholder capitalism doctrine. In essence, it means maximizing the returns of those dollars deployed in the operation of the business. Applied to a corporation, it comes down to this: If the choice is between putting a million bucks into new factory machinery or returning it to shareholders, say, via dividend payments, the latter is the optimal way to go because in theory it means higher net returns accruing to the shareholders (as the "owners" of the company), implicitly assuming that they can make better use of that money than the company itself can.

It is an absurd conceit to believe that a dilettante portfolio manager is in a better position than an aviation engineer to gauge whether corporate investment in fixed assets will generate productivity gains well north of the expected return for the cash distributed to the shareholders. But such is the perverse fantasy embedded in the myth of shareholder capitalism.

Engineering reality, however, is far more complicated than what is outlined in university MBA textbooks. For corporations like McDonnell Douglas, for example, RONA was used not as a way to prioritize new investment in the corporation but rather to justify disinvestment in the corporation. This disinvestment ultimately degraded the company's underlying profitability and the quality of its planes (which is one of the reasons the Pentagon helped to broker the merger with Boeing; in another perverse echo of the 2008 financial disaster, it was a politically engineered bailout).

RONA in Practice

When real engineering clashes with financial engineering, the damage takes the form of a geographically disparate and demoralized workforce: The factory-floor denominator goes down. Workers' wages are depressed, testing and quality assurance are curtailed. Productivity is diminished, even as labor-saving technologies are introduced. Precision machinery is sold off and replaced by inferior, but cheaper, machines. Engineering quality deteriorates. And the upshot is that a reliable plane like Boeing's 737, which had been a tried and true money-spinner with an impressive safety record since 1967, becomes a high-tech death trap.

The drive toward efficiency is translated into a drive to do more with less. Get more out of workers while paying them less. Make more parts with fewer machines. Outsourcing is viewed as a way to release capital by transferring investment from skilled domestic human capital to offshore entities not imbued with the same talents, corporate culture and dedication to quality. The benefits to the bottom line are temporary; the long-term pathologies become embedded as the company's market share begins to shrink, as the airlines search for less shoddy alternatives.

You must do one more thing if you are a Boeing director: you must erect barriers to bad news, because there is nothing that bursts a magic bubble faster than reality, particularly if it's bad reality.

The illusion that Boeing sought to perpetuate was that it continued to produce the same thing it had produced for decades: namely, a safe, reliable, quality airplane. But it was doing so with a production apparatus that was stripped, for cost reasons, of many of the means necessary to make good aircraft. So while the wine still came in a bottle signifying Premier Cru quality, and still carried the same price, someone had poured out the contents and replaced them with cheap plonk.

And that has become remarkably easy to do in aviation. Because Boeing is no longer subject to proper independent regulatory scrutiny. This is what happens when you're allowed to " self-certify" your own airplane , as the Washington Post described: "One Boeing engineer would conduct a test of a particular system on the Max 8, while another Boeing engineer would act as the FAA's representative, signing on behalf of the U.S. government that the technology complied with federal safety regulations."

This is a recipe for disaster. Boeing relentlessly cut costs, it outsourced across the globe to workforces that knew nothing about aviation or aviation's safety culture. It sent things everywhere on one criteria and one criteria only: lower the denominator. Make it the same, but cheaper. And then self-certify the plane, so that nobody, including the FAA, was ever the wiser.

Boeing also greased the wheels in Washington to ensure the continuation of this convenient state of regulatory affairs for the company. According to OpenSecrets.org, Boeing and its affiliates spent $15,120,000 in lobbying expenses in 2018, after spending $16,740,000 in 2017 (along with a further $4,551,078 in 2018 political contributions, which placed the company 82nd out of a total of 19,087 contributors). Looking back at these figures over the past four elections (congressional and presidential) since 2012, these numbers represent fairly typical spending sums for the company.

But clever financial engineering, extensive political lobbying and self-certification can't perpetually hold back the effects of shoddy engineering. One of the sad byproducts of the FAA's acquiescence to "self-certification" is how many things fall through the cracks so easily.

[May 05, 2019] The Left Needs to Stop Crushing on the Generals by Danny Sjursen

Highly recommended!
The Pentagon serves Wall Street and is controlled by the CIA, which can actually be viewed as a Wall Street arm as well.
Notable quotes:
"... This time, though, the general got to talking about Russia. So I perked up. He made it crystal clear that he saw Moscow as an adversary to be contained, checked, and possibly defeated. There was no nuance, no self-reflection, not even a basic understanding of the general complexity of geopolitics in the 21st century. ..."
"... General It-Doesn't-Matter-His-Name thundered that we need not worry, however, because his tanks and troops could "mop the floor" with the Russians, in a battle that "wouldn't even be close." It was oh-so-typical, another U.S. Army general -- who clearly longs for the Cold War fumes that defined his early career -- overestimating the Russian menace and underestimating Russian military capability . ..."
"... The problem with the vast majority of generals, however, is that they don't think strategically. What they call strategy is really large-scale operations -- deploying massive formations and winning campaigns replete with battles. Many remain mired in the world of tactics, still operating like lieutenants or captains and proving the Peter Principle right, as they get promoted past their respective levels of competence. ..."
"... If America's generals, now and over the last 18 years, really were strategic thinkers, they'd have spoken out about -- and if necessary resigned en masse over -- mission sets that were unwinnable, illegal (in the case of Iraq), and counterproductive . Their oath is to the Constitution, after all, not Emperors Bush, Obama, and Trump. Yet few took that step. It's all symptomatic of the disease of institutionalized intellectual mediocrity. ..."
"... Let's start with Mattis. "Mad Dog" Mattis was so anti-Iran and bellicose in the Persian Gulf that President Barack Obama removed him from command of CENTCOM. ..."
"... Furthermore, the supposedly morally untainted, "intellectual" " warrior monk " chose, when he finally resigned, to do so in response to Trump's altogether reasonable call for a modest troop withdrawal from Afghanistan and Syria. ..."
May 03, 2019 | www.theamericanconservative.com

The two-star army general strode across the stage in his rumpled combat fatigues, almost like George Patton -- all that was missing was the cigar and riding crop. It was 2017 and I was in the audience, just another mid-level major attending yet another mandatory lecture in the auditorium of the Command and General Staff College at Fort Leavenworth, Kansas.

The general then commanded one of the Army's two true armored divisions and had plenty of his tanks forward deployed in Eastern Europe, all along the Russian frontier. Frankly, most CGSC students couldn't stand these talks. Substance always seemed lacking, as each general reminded us to "take care of soldiers" and "put the mission first," before throwing us a few nuggets of conventional wisdom on how to be good staff officers should we get assigned to his vaunted command.

This time, though, the general got to talking about Russia. So I perked up. He made it crystal clear that he saw Moscow as an adversary to be contained, checked, and possibly defeated. There was no nuance, no self-reflection, not even a basic understanding of the general complexity of geopolitics in the 21st century. Generals can be like that -- utterly "in-the-box," "can-do" thinkers. They take pride in how little they discuss policy and politics, even when they command tens of thousands of troops and control entire districts, provinces, or countries. There is some value in this -- we'd hardly want active generals meddling in U.S. domestic affairs. But they nonetheless can take the whole "aw shucks" act a bit too far.

General It-Doesn't-Matter-His-Name thundered that we need not worry, however, because his tanks and troops could "mop the floor" with the Russians, in a battle that "wouldn't even be close." It was oh-so-typical, another U.S. Army general -- who clearly longs for the Cold War fumes that defined his early career -- overestimating the Russian menace and underestimating Russian military capability . Of course, it was all cloaked in the macho bravado so common among generals who think that talking like sergeants will win them street cred with the troops. (That's not their job anymore, mind you.) He said nothing, of course, about the role of mid- and long-range nuclear weapons that could be the catastrophic consequence of an unnecessary war with the Russian Bear.

I got to thinking about that talk recently as I reflected in wonder at how the latest generation of mainstream "liberals" loves to fawn over generals, admirals -- any flag officers, really -- as alternatives to President Donald Trump. The irony of that alliance should not be lost on us. It's built on the standard Democratic fear of looking "soft" on terrorism, communism, or whatever-ism, and their visceral, blinding hatred of Trump. Some of this is understandable. Conservative Republicans masterfully paint liberals as "weak sisters" on foreign policy, and Trump's administration is, well, a wild card in world affairs.

The problem with the vast majority of generals, however, is that they don't think strategically. What they call strategy is really large-scale operations -- deploying massive formations and winning campaigns replete with battles. Many remain mired in the world of tactics, still operating like lieutenants or captains and proving the Peter Principle right, as they get promoted past their respective levels of competence.

If America's generals, now and over the last 18 years, really were strategic thinkers, they'd have spoken out about -- and if necessary resigned en masse over -- mission sets that were unwinnable, illegal (in the case of Iraq), and counterproductive . Their oath is to the Constitution, after all, not Emperors Bush, Obama, and Trump. Yet few took that step. It's all symptomatic of the disease of institutionalized intellectual mediocrity. More of the same is all they know: their careers were built on fighting "terror" anywhere it raised its evil head. Some, though no longer most, still subscribe to the faux intellectualism of General Petraeus and his legion of Coindinistas , who never saw a problem that a little regime change, followed by expert counterinsurgency, couldn't solve. Forget that they've been proven wrong time and again and can count zero victories since 2002. Generals (remember this!) are never held accountable.

Flag officers also rarely seem to recognize that they owe civilian policymakers more than just tactical "how" advice. They ought to be giving "if" advice -- if we invade Iraq, it will take 500,000 troops to occupy the place, and even then we'll ultimately destabilize the country and region, justify al-Qaeda's worldview, kick off a nationalist insurgency, and become immersed in an unwinnable war. Some, like Army Chief General Eric Shinseki and CENTCOM head John Abizaid, seemed to know this deep down. Still, Shinseki quietly retired after standing up to Secretary of Defense Donald Rumsfeld, and Abizaid rode out his tour to retirement.


Generals also love to tell the American people that victory is "just around the corner," or that there's a "light at the end of the tunnel." General William Westmoreland used the very same language when predicting imminent victory in Vietnam. Two months later, the North Vietnamese and Vietcong unleashed the largest uprising of the war, the famed Tet Offensive.

Take Afghanistan as exhibit A: 17 or so generals have now commanded U.S. troops in this, America's longest war. All have commanded within the system and framework of their predecessors. Sure, they made marginal operational and tactical changes -- some preferred surges, others advising, others counterterror -- but all failed to achieve anything close to victory, instead laundering failure into false optimism. None refused to play the same-old game or question the very possibility of victory in landlocked, historically xenophobic Afghanistan. That would have taken real courage, which is in short supply among senior officers.

Exhibit B involves Trump's former cabinet generals -- National Security Advisor H.R. McMaster, Chief of Staff John Kelley, and Defense Secretary Jim Mattis -- whom adoring and desperate liberals took as saviors and canonized as the supposed adults in the room . They were no such thing. The generals' triumvirate consisted ultimately of hawkish conventional thinkers married to the dogma of American exceptionalism and empire. Period.

Let's start with Mattis. "Mad Dog" Mattis was so anti-Iran and bellicose in the Persian Gulf that President Barack Obama removed him from command of CENTCOM.

Furthermore, the supposedly morally untainted, "intellectual" " warrior monk " chose, when he finally resigned, to do so in response to Trump's altogether reasonable call for a modest troop withdrawal from Afghanistan and Syria.

Helping Saudi Arabia terror bomb Yemen and starve 85,000 children to death? Mattis rebuked Congress and supported that. He never considered resigning in opposition to that war crime. No, he fell on his "courageous" sword over downgrading a losing 17-year-old war in Afghanistan. Not to mention he came to Trump's cabinet straight from the board of contracting giant General Dynamics, where he collected hundreds of thousands of military-industrial complex dollars.

Then there was John Kelley, whom Press Secretary Sarah Sanders implied was above media questioning because he was once a four-star marine general. And there's McMaster, another lauded intellectual who once wrote an interesting book and taught history at West Point. Yet he still drew all the wrong conclusions in his famous book on Vietnam -- implying that more troops, more bombing, and a mass invasion of North Vietnam could have won the war. Furthermore, his work with Mattis on Trump's unhinged , imperial National Defense Strategy proved that he was, after all, just another devotee of American hyper-interventionism.

So why reflect on these and other Washington generals? It's simple: liberal veneration for these, and seemingly all, military flag officers is a losing proposition and a formula for more intervention, possible war with other great powers, and the creeping militarization of the entire U.S. government. We know what the generals expect -- and potentially want -- for America's foreign policy future.

Just look at the curriculum at the various war and staff colleges from Kansas to Rhode Island. Ten years ago, they were all running war games focused on counterinsurgency in the Middle East and Africa. Now those same schools are drilling for future "contingencies" in the Baltic, Caucasus, and in the South China Sea. Older officers have always lamented the end of the Cold War "good old days," when men were men and the battlefield was "simple." A return to a state of near-war with Russia and China is the last thing real progressives should be pushing for in 2020.

The bottom line is this: the faint hint that mainstream libs would relish a Six Days in May style military coup is more than a little disturbing, no matter what you think of Trump. Democrats must know the damage such a move would do to our ostensible republic. I say: be a patriot. Insist on civilian control of foreign affairs. Even if that means two more years of The Donald.

Danny Sjursen is a retired U.S. Army Major and regular contributor to Truthdig . His work has also appeared in Harper's, the Los Angeles Times , The Nation , Tom Dispatch , and The Hill . He served combat tours in Iraq and Afghanistan, and later taught history at his alma mater, West Point. He is the author of Ghostriders of Baghdad: Soldiers, Civilians, and the Myth of the Surge . Follow him on Twitter @SkepticalVet .

[ Note: The views expressed in this article are those of the author, expressed in an unofficial capacity, and do not reflect the official policy or position of the Department of the Army, Department of Defense, or the U.S. government.]

[May 05, 2019] Does America Have an Economy or Any Sense of Reality by Paul Craig Roberts

Notable quotes:
"... We are having a propaganda barrage about the great Trump economy. We have been hearing about the great economy for a decade while the labor force participation rate declined, real family incomes stagnated, and debt burdens rose. The economy has been great only for large equity owners whose stock ownership benefited from the trillions of dollars the Fed poured into financial markets and from buy-backs by corporations of their own stocks. ..."
"... Federal Reserve data reports that a large percentage of the younger work force live at home with parents, because the jobs available to them are insufficient to pay for an independent existence. How then can the real estate, home furnishings, and appliance markets be strong? ..."
"... In contrast, Robotics, instead of displacing labor, eliminates it. Unlike jobs offshoring which shifted jobs from the US to China, robotics will cause jobs losses in both countries. If consumer incomes fall, then demand for output also falls, and output will fall. Robotics, then, is a way to shrink gross domestic product. ..."
"... The tech nerds and corporations who cannot wait for robotics to reduce labor cost in their profits calculation are incapable of understanding that when masses of people are without jobs, there is no consumer income with which to purchase the products of robots. The robots themselves do not need housing, food, clothing, entertainment, transportation, and medical care. The mega-rich owners of the robots cannot possibly consume the robotic output. An economy without consumers is a profitless economy. ..."
"... A country incapable of dealing with real problems has no future. ..."
May 02, 2019 | www.unz.com

We are having a propaganda barrage about the great Trump economy. We have been hearing about the great economy for a decade while the labor force participation rate declined, real family incomes stagnated, and debt burdens rose. The economy has been great only for large equity owners whose stock ownership benefited from the trillions of dollars the Fed poured into financial markets and from buy-backs by corporations of their own stocks.

I have pointed out for years that the jobs reports are fabrications and that the jobs that do exist are lowly paid domestic service jobs such as waitresses and bartenders and health care and social assistance. What has kept the American economy going is the expansion of consumer debt, not higher pay from higher productivity. The reported low unemployment rate is obtained by not counting discouraged workers who have given up on finding a job.

Do you remember all the corporate money that the Trump tax cut was supposed to bring back to America for investment? It was all BS. Yesterday I read reports that Apple is losing its trillion dollar market valuation because Apple is using its profits to buy back its own stock. In other words, the demand for Apple's products does not justify more investment. Therefore, the best use of the profit is to repurchase the equity shares, thus shrinking Apple's capitalization. The great economy does not include expanding demand for Apple's products.

I read also of endless store and mall closings, losses falsely attributed to online purchasing, which only accounts for a small percentage of sales.

Federal Reserve data reports that a large percentage of the younger work force live at home with parents, because the jobs available to them are insufficient to pay for an independent existence. How then can the real estate, home furnishings, and appliance markets be strong?

When a couple of decades ago I first wrote of the danger of jobs offshoring to the American middle class, state and local government budgets, and pension funds, idiot critics raised the charge of Luddite.

The Luddites were wrong. Mechanization raised the productivity of labor and real wages, but jobs offshoring shifts jobs from the domestic economy to abroad. Domestic labor is displaced, but overseas labor gets the jobs, thus boosting jobs there. In other words, labor income declines in the country that loses jobs and rises in the country to which the jobs are offshored. This is the way American corporations spurred the economic development of China. It was due to jobs offshoring that China developed far more rapidly than the CIA expected.

In contrast, Robotics, instead of displacing labor, eliminates it. Unlike jobs offshoring which shifted jobs from the US to China, robotics will cause jobs losses in both countries. If consumer incomes fall, then demand for output also falls, and output will fall. Robotics, then, is a way to shrink gross domestic product.

The tech nerds and corporations who cannot wait for robotics to reduce labor cost in their profits calculation are incapable of understanding that when masses of people are without jobs, there is no consumer income with which to purchase the products of robots. The robots themselves do not need housing, food, clothing, entertainment, transportation, and medical care. The mega-rich owners of the robots cannot possibly consume the robotic output. An economy without consumers is a profitless economy.

One would think that there would be a great deal of discussion about the economic effects of robotics before the problems are upon us, just as one would think there would be enormous concern about the high tensions Washington has caused between the US and Russia and China, just as one would think there would be preparations for the adverse economic consequences of global warming, whatever the cause. Instead, the US, a country facing many crises, is focused on whether President Trump obstructed investigation of a crime that the special prosecutor said did not take place.

A country incapable of dealing with real problems has no future.

[May 04, 2019] Someone is getting a raise. It just isn't you

stackoverflow.com

As is usual, the headline economic number is always the rosiest number.

Wages for production and nonsupervisory workers accelerated to a 3.4 percent annual pace, signaling gains for lower-paid employees.

That sounds pretty good. Except for the part where it is a lie.
For starters, it doesn't account for inflation.

Labor Department numbers released Wednesday show that real average hourly earnings, which compare the nominal rise in wages with the cost of living, rose 1.7 percent in January on a year-over-year basis.

1.7% is a lot less than 3.4%.
While the financial news was bullish, the actual professionals took the news differently.

Wage inflation was also muted with average hourly earnings rising six cents, or 0.2% in April after rising by the same margin in March.
Average hourly earnings "were disappointing," said Ian Lyngen, head of U.S. rates strategy at BMO Capital Markets in New York.

Secondly, 1.7% is an average, not a median. For instance, none of this applied to you if you are an older worker.

Weekly earnings for workers aged 55 to 64 were only 0.8% higher in the first quarter of 2019 than they were in the first quarter of 2007, after accounting for inflation, they found. For comparison, earnings rose 4.7% during that same period for workers between the ages of 35 and 54.

On the other hand, if you worked for a bank your wages went up at a rate far above average. This goes double if you are in management.

Among the biggest standouts: commercial banks, which employ an estimated 1.3 million people in the U.S. Since Trump took office in January 2017, they have increased their average hourly wage at an annualized pace of almost 11 percent, compared with just 3.3 percent under Obama.

Finally, there is the reason for this incredibly small wage increase for regular workers. Hint: it wasn't because of capitalism and all the bullsh*t jobs it creates. The tiny wage increase that the working class has seen is because of what the capitalists said was a terrible idea.

For Americans living in the 21 states where the federal minimum wage is binding, inflation means that the minimum wage has lost 16 percent of its purchasing power.

But elsewhere, many workers and employers are experiencing a minimum wage well above 2009 levels. That's because state capitols and, to an unprecedented degree, city halls have become far more active in setting their own minimum wages.
...
Averaging across all of these federal, state and local minimum wage laws, the effective minimum wage in the United States -- the average minimum wage binding each hour of minimum wage work -- will be $11.80 an hour in 2019. Adjusted for inflation, this is probably the highest minimum wage in American history.
The effective minimum wage has not only outpaced inflation in recent years, but it has also grown faster than typical wages. We can see this from the Kaitz index, which compares the minimum wage with median overall wages.

So if you are waiting for capitalism to trickle down on you, it's never going to happen.

gjohnsit on Fri, 05/03/2019 - 6:21pm

Carolinas

Teachers need free speech protection

Thousands of South Carolina teachers rallied outside their state capitol Wednesday, demanding pay raises, more planning time, increased school funding -- and, in a twist, more legal protections for their freedom of speech.
SC for Ed, the grassroots activist group that organized Wednesday's demonstration, told CNN that many teachers fear protesting or speaking up about education issues, worrying they'll face retaliation at work. Saani Perry, a teacher in Fort Mill, S.C., told CNN that people in his profession are "expected to sit in the classroom and stay quiet and not speak [their] mind."

To address these concerns, SC for Ed is lobbying for the Teachers' Freedom of Speech Act, which was introduced earlier this year in the state House of Representatives. The bill would specify that "a public school district may not willfully transfer, terminate or fail to renew the contract of a teacher because the teacher has publicly or privately supported a public policy decision of any kind." If that happens, teachers would be able to sue for three times their salary.

Teachers across the country are raising similar concerns about retaliation. Such fears aren't unfounded: Lawmakers in some states that saw strikes last year have introduced bills this year that would punish educators for skipping school to protest.

[May 03, 2019] Creating a Redhat package repository

Apr 12, 2016 | linuxconfig.org
Introduction

If your Redhat server is not connected to the official RHN repositories, you will need to configure your own private repository which you can later use to install packages. The procedure for creating a Redhat repository is quite a simple task. In this article we will show you how to create a local file Redhat repository as well as a remote HTTP repository.

Using Official Redhat DVD as repository

After a default installation, and without registering your server to the official RHN repositories, you are left without any way to install new packages from a Redhat repository, as your repository list will show 0 entries:

# yum repolist
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
repolist: 0

At this point the easiest thing to do is to attach your Redhat installation DVD as a local repository. To do that, first make sure that your RHEL DVD is mounted:

# mount | grep iso9660
/dev/sr0 on /media/RHEL_6.4 x86_64 Disc 1 type iso9660 (ro,nosuid,nodev,uhelper=udisks,uid=500,gid=500,iocharset=utf8,mode=0400,dmode=0500)
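
If the installation DVD (or its ISO image) is not mounted automatically, you can mount it by hand. A minimal sketch, assuming an ISO image at an illustrative path (adjust the path and mount point to your system):

# mkdir -p /media/rhel_dvd
# mount -o loop /path/to/rhel-server-6.4-x86_64-dvd.iso /media/rhel_dvd

In that case, substitute /media/rhel_dvd for the "/media/RHEL_6.4 x86_64 Disc 1" path used in the rest of this section.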

The directory which most interests us at the moment is " /media/RHEL_6.4 x86_64 Disc 1/repodata " as this is the directory which contains information about all packages found on this particular DVD disc.

Next we need to define our new repository pointing to " /media/RHEL_6.4 x86_64 Disc 1/ " by creating a repository entry in /etc/yum.repos.d/. Create a new file called /etc/yum.repos.d/RHEL_6.4_Disc.repo using the vi editor and insert the following text:

[RHEL_6.4_Disc]
name=RHEL_6.4_x86_64_Disc
baseurl="file:///media/RHEL_6.4 x86_64 Disc 1/"
gpgcheck=0

Once the file is created, your local Redhat DVD repository should be ready to use:

# yum repolist
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
repo id                                                     repo name                                                           status
RHEL_6.4_Disc                                               RHEL_6.4_x86_64_Disc                                                3,648
repolist: 3,648
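
At this point you can install packages straight from the DVD repository. As a quick sanity check -- assuming, purely as an example, that the vim-enhanced package is present on the disc -- something like the following should work:

# yum --disablerepo="*" --enablerepo="RHEL_6.4_Disc" install vim-enhanced

The --disablerepo and --enablerepo options simply ensure that only the DVD repository is consulted for this transaction.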



Creating a local file Redhat repository

Normally, having a Redhat DVD repository will be enough to get you started; however, the disadvantage is that you are not able to alter the repository in any way and thus cannot insert new or updated packages into it. To resolve this issue we can create a local file repository sitting somewhere on the filesystem. To aid us with this plan we will use the createrepo utility.

By default createrepo may not be installed on your system:

# yum list installed | grep createrepo
#

No output indicates that this package is currently not present on your system. If you followed the previous section on how to attach the official RHEL DVD as your system's repository, then to install the createrepo package simply execute:

# yum install createrepo

The above command will install the createrepo utility along with all prerequisites. In case you do not have your repository defined yet, you can install createrepo manually:

Using your mounted Redhat DVD, first install the prerequisites:

# rpm -hiv /media/RHEL_6.4\ x86_64\ Disc\ 1/Packages/deltarpm-*
# rpm -hiv /media/RHEL_6.4\ x86_64\ Disc\ 1/Packages/python-deltarpm-*

followed by the installation of the actual createrepo package:

# rpm -hiv /media/RHEL_6.4\ x86_64\ Disc\ 1/Packages/createrepo-*

If all went well, you should be able to see the createrepo package installed on your system:

# yum list installed | grep createrepo
createrepo.noarch                        0.9.9-17.el6                          installed

At this stage we are ready to create our own Redhat local file repository. Create a new directory called /rhel_repo:

# mkdir /rhel_repo

Next, copy all packages from your mounted RHEL DVD to your new directory:

# cp /media/RHEL_6.4\ x86_64\ Disc\ 1/Packages/* /rhel_repo/

When the copy is finished, execute the createrepo command with a single argument, which is your new local repository directory name:

# createrepo /rhel_repo/
Spawning worker 0 with 3648 pkgs
Workers Finished
Gathering worker results

Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete
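
Later, when you drop new or updated rpm packages into /rhel_repo, regenerate the repository metadata. A minimal sketch (the package file name is hypothetical):

# cp /tmp/mytool-1.0-1.el6.x86_64.rpm /rhel_repo/
# createrepo --update /rhel_repo/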

You can also create a Redhat repository on any Debian-like Linux system such as Debian, Ubuntu or Mint. The procedure is the same, except that the createrepo utility is installed with: # apt-get install createrepo


As a last step we will create a new yum repository entry:

# vi /etc/yum.repos.d/rhel_repo.repo
[rhel_repo]
name=RHEL_6.4_x86_64_Local
baseurl="file:///rhel_repo/"
gpgcheck=0
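
Optionally, clear yum's cached metadata so the new repository definition is picked up immediately; a minimal sketch:

# yum clean all
# yum makecache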

Your new repository should now be accessible:

# yum repolist
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
rhel_repo                                                                                                      | 2.9 kB     00:00 ... 
rhel_repo/primary_db                                                                                           | 367 kB     00:00 ... 
repo id                                                     repo name                                                           status
RHEL_6.4_Disc                                               RHEL_6.4_x86_64_Disc                                                3,648
rhel_repo                                                   RHEL_6.4_x86_64_Local                                                 3,648

Creating a remote HTTP Redhat repository

If you have multiple Redhat servers, you may want to create a single Redhat repository accessible to all other servers on the network. For this you will need the Apache web server. Detailed installation and configuration of Apache is beyond the scope of this guide, so we assume that your httpd daemon (Apache web server) is already configured. To make your new repository accessible via HTTP, either configure Apache with the /rhel_repo/ directory created in the previous section as its document root, or simply copy the entire directory to /var/www/html/ (the default document root).
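
A minimal sketch of the second approach, assuming the default document root and that curl is available for a quick check from a client:

# cp -r /rhel_repo /var/www/html/
# curl -I http://myhost/rhel_repo/repodata/repomd.xml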

Then create a new yum repository entry on your client system by creating a new repo configuration file:

vi /etc/yum.repos.d/rhel_http_repo.repo

with the following content, where myhost is the IP address or hostname of your Redhat repository server:

[rhel_repo_http]
name=RHEL_6.4_x86_64_HTTP
baseurl="http://myhost/rhel_repo/"
gpgcheck=0

Confirm that your new repository is working:

# yum repolist
Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
repo id                                                      repo name                                                          status
rhel_repo_http                                               RHEL_6.4_x86_64_HTTP                                               3,648
repolist: 3,648

Conclusion

Creating your own package repository gives you more options for managing packages on your Redhat system, even without a paid RHN subscription. When using a remote HTTP Redhat repository you may also want to enable GPG checking for that repository, to make sure that no packages have been tampered with prior to installation.
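
A minimal sketch of such a configuration, assuming the Red Hat release key is already present under /etc/pki/rpm-gpg/ on the client:

[rhel_repo_http]
name=RHEL_6.4_x86_64_HTTP
baseurl="http://myhost/rhel_repo/"
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release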

[Apr 29, 2019] When disaster hits, you need to resolve things quickly and efficiently, with panic being the worst enemy. The amount of training and previous experience become crucial factors in such situations

It is rarely just one thing that causes an “accident”. There are multiple contributors here.
Notable quotes:
"... Panic in my experience stems from a number of things here, but two crucial ones are: ..."
"... not knowing what to do, or learned actions not having any effect ..."
Apr 29, 2019 | www.nakedcapitalism.com

vlade , April 29, 2019 at 11:04 am

...I suspect that for both of those, when they hit, you need to resolve things quickly and efficiently, with panic being the worst enemy.

Panic in my experience stems from a number of things here, but two crucial ones are:
input overload
not knowing what to do, or learned actions not having any effect

Both of them can be, to a very large extent, overcome with training, training, and more training (of actually practising the emergency situation, not just reading about it and filling in questionnaires).

... ... ...

[Apr 28, 2019] Prisoners of Overwork A Dilemma by Peter Dorman

Highly recommended!
This is true of IT jobs, probably even more so than of lawyers. IT became a plantation economy under neoliberalism.
Notable quotes:
"... mandatory overwork in professional jobs. ..."
"... The logical solution is some form of binding regulation. ..."
"... One place to start would be something like France's right-to-disconnect law . ..."
"... "the situation it describes is a classic prisoners dilemma." ..."
Apr 28, 2019 | angrybearblog.com

The New York Times has an illuminating article today summarizing recent research on the gender effects of mandatory overwork in professional jobs. Lawyers, people in finance and other client-centered occupations are increasingly required to be available round-the-clock, with 50-60 or more hours of work per week the norm. Among other costs, the impact on wage inequality between men and women is severe. Since women are largely saddled with primary responsibility for child care, even when couples ostensibly embrace equality on a theoretical level, the workaholic jobs are allocated to men. This shows up in dramatic differences between typical male and female career paths. The article doesn't discuss comparable issues in working class employment, but availability for last-minute changes in work schedules and similar demands are likely to impact men and women differentially as well.

What the article doesn't point out is that the situation it describes is a classic prisoners dilemma.* Consider law firms. They compete for clients, and clients prefer attorneys who are available on call, always prepared and willing to adjust to whatever schedule the client throws at them. Assume that most lawyers want sane, predictable work hours if they are offered without a severe penalty in pay. If law firms care about the well-being of their employees but also about profits, we have all the ingredients to construct a standard PD payoff matrix:

There is a penalty to unilateral cooperation, cutting work hours back to a work-life balance level. If your firm does it and the others don't, you lose clients to them.

There is a benefit to unilateral defection. If everyone else is cutting hours but you don't, you scoop up the lion's share of the clients.

Mutual cooperation is preferred to mutual defection. Law firms, we are assuming, would prefer a world in which overwork was removed from the contest for competitive advantage. They would compete for clients as before, but none would require their staff to put in soul-crushing hours. The alternative equilibrium, in which competition is still on the basis of the quality of work but everyone is on call 24/7 is inferior.

If the game is played once, mutual defection dominates. If it is played repeatedly there is a possibility for mutual cooperation to establish itself, but only under favorable conditions (which apparently don't exist in the world of NY law firms). The logical solution is some form of binding regulation.

The reason for bringing this up is that it strengthens the case for collective action rather than placing all the responsibility on individuals caught in the system, including for that matter individual law firms. Or, the responsibility is political, to demand constraints on the entire industry. One place to start would be something like France's right-to-disconnect law .

*I haven't read the studies by economists and sociologists cited in the article, but I suspect many of them make the same point I'm making here.

Sandwichman said...
"the situation it describes is a classic prisoners dilemma."

Now why didn't I think of that?

https://econospeak.blogspot.com/2016/04/zero-sum-foolery-4-of-4-wage-prisoners.html April 26, 2019 at 6:22 PM

[Apr 28, 2019] AI is software. Software bugs. Software doesn't autocorrect bugs. Men correct bugs. A bugging self-driving car leads its passengers to death. A man driving a car can steer away from death

Apr 28, 2019 | www.unz.com

Vojkan , April 27, 2019 at 7:42 am GMT

The infatuation with AI makes people overlook three AI's built-in glitches. AI is software. Software bugs. Software doesn't autocorrect bugs. Men correct bugs. A bugging self-driving car leads its passengers to death. A man driving a car can steer away from death. Humans love to behave in erratic ways, it is just impossible to program AI to respond to all possible erratic human behaviour. Therefore, instead of adapting AI to humans, humans will be forced to adapt to AI, and relinquish a lot of their liberty as humans. Humans have moral qualms (not everybody is Hillary Clinton), AI being strictly utilitarian, will necessarily be "psychopathic".

In short AI is the promise of communism raised by several orders of magnitude. Welcome to the "Brave New World".

Digital Samizdat , says: April 27, 2019 at 11:42 am GMT

@Vojkan You've raised some interesting objections, Vojkan. But here are a few quibbles:

1) AI is software. Software bugs. Software doesn't autocorrect bugs. Men correct bugs. A bugging self-driving car leads its passengers to death. A man driving a car can steer away from death.

Learn to code! Seriously, until and unless the AI devices acquire actual power over their human masters (as in The Matrix ), this is not as big a problem as you think. You simply test the device over and over and over until the bugs are discovered and worked out -- in other words, we just keep on doing what we've always done with software: alpha, beta, etc.

2) Humans love to behave in erratic ways, it is just impossible to program AI to respond to all possible erratic human behaviour. Therefore, instead of adapting AI to humans, humans will be forced to adapt to AI, and relinquish a lot of their liberty as humans.

There's probably some truth to that. This reminds me of the old Marshall McCluhan saying that "the medium is the message," and that we were all going to adapt our mode of cognition (somewhat) to the TV or the internet, or whatever. Yeah, to some extent that has happened. But to some extent, that probably happened way back when people first began domesticating horses and riding them. Human beings are 'programmed', as it were, to adapt to their environments to some extent, and to condition their reactions on the actions of other things/creatures in their environment.

However, I think you may be underestimating the potential to create interfaces that allow AI to interact with a human in much more complex ways, such as how another human would interact with a human: subtle visual cues, pheromones, etc. That, in fact, was the essence of the old Turing Test, which is still the Holy Grail of AI:

https://en.wikipedia.org/wiki/Turing_test

3) Humans have moral qualms (not everybody is Hillary Clinton), AI being strictly utilitarian, will necessarily be "psychopathic".

I don't see why AI devices can't have some moral principles -- or at least moral biases -- programmed into them. Isaac Asimov didn't think this was impossible either:

https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

reiner Tor , says: April 27, 2019 at 11:47 am GMT
@Digital Samizdat

You simply test the device over and over and over until the bugs are discovered and worked out -- in other words, we just keep on doing what we've always done with software: alpha, beta, etc.

Some bugs stay dormant for decades. I've seen one up close.

Digital Samizdat , says: April 27, 2019 at 11:57 am GMT
@reiner Tor

Well, you fix it whenever you find it!

That's a problem as old as programming; in fact, it's a problem as old as engineering itself. It's nothing new.

reiner Tor , says: April 27, 2019 at 12:11 pm GMT
@Digital Samizdat

What's new with AI is the amount of damage a faulty software multiplied many times over can do. My experience was pretty horrible (I was one of the two humans overseeing the system, but it was a pretty horrifying experience), but if the system was fully autonomous, it'd have driven my employer bankrupt.

Now I'm not against using AI in any form whatsoever; I also think that it's inevitable anyway. I'd support AI driving cars or flying planes, because they are likely safer than humans, though it's of course changing a manageable risk for a very small probability tail risk. But I'm pretty worried about AI in general.

[Mar 26, 2019] I wiped out a call center by mistyping the user profile expiration purge parameters in a script before leaving for the day.

Mar 26, 2019 | twitter.com

SwiftOnSecurity 7:07 PM - 25 Mar 2019

I wiped out a call center by mistyping the user profile expiration purge parameters in a script before leaving for the day.

https://twitter.com/soniagupta504/status/1109979183352942592

SwiftOnSecurity 7:08 PM - 25 Mar 2019

Luckily most of it was backed up with a custom-built user profile roaming system, but still it was down for an hour and a half and degraded for more...

[Mar 25, 2019] How to Monitor Disk IO in Linux Linux Hint

Mar 25, 2019 | linuxhint.com

Monitoring Specific Storage Devices or Partitions with iostat:

By default, iostat monitors all the storage devices of your computer. But, you can monitor specific storage devices (such as sda, sdb etc) or specific partitions (such as sda1, sda2, sdb4 etc) with iostat as well.

For example, to monitor the storage device sda only, run iostat as follows:

$ sudo iostat sda

Or

$ sudo iostat -d 2 sda

As you can see, only the storage device sda is monitored.

You can also monitor multiple storage devices with iostat.

For example, to monitor the storage devices sda and sdb , run iostat as follows:

$ sudo iostat sda sdb

Or

$ sudo iostat -d 2 sda sdb

If you want to monitor specific partitions, then you can do so as well.

For example, let's say, you want to monitor the partitions sda1 and sda2 , then run iostat as follows:

$ sudo iostat sda1 sda2

Or

$ sudo iostat -d 2 sda1 sda2

As you can see, only the partitions sda1 and sda2 are monitored.
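
You can also let iostat expand a device into all of its partitions with the -p option; a minimal sketch:

$ sudo iostat -p sda -d 2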

Monitoring LVM Devices with iostat:

You can monitor the LVM devices of your computer with the -N option of iostat.

To monitor the LVM devices of your Linux machine as well, run iostat as follows:

$ sudo iostat -N -d 2

You can also monitor specific LVM logical volume as well.

For example, to monitor the LVM logical volume centos-root (let's say), run iostat as follows:

$ sudo iostat -N -d 2 centos-root

Changing the Units of iostat:

By default, iostat generates reports in kilobytes (kB) unit. But there are options that you can use to change the unit.

For example, to change the unit to megabytes (MB), use the -m option of iostat.

You can also change the unit to human readable with the -h option of iostat. Human readable format will automatically pick the right unit depending on the available data.

To change the unit to megabytes, run iostat as follows:

$ sudo iostat -m -d 2 sda

To change the unit to human readable format, run iostat as follows:

$ sudo iostat -h -d 2 sda

I copied a file while iostat was running and, as you can see, the unit is now in megabytes (MB).

It changed back to kilobytes (kB) as soon as the file copy was over.

Extended Display of iostat:

If you want, you can display a lot more information about disk i/o with iostat. To do that, use the -x option of iostat.

For example, to display extended information about disk i/o, run iostat as follows:

$ sudo iostat -x -d 2 sda

You can find out what each of these fields (rrqm/s, %wrqm, etc.) means in the man page of iostat.

Getting Help:

If you need more information on each of the supported options of iostat and what each of the fields of iostat means, I recommend you take a look at the man page of iostat.

You can access the man page of iostat with the following command:

$ man iostat

So, that's how you use iostat in Linux. Thanks for reading this article.

[Mar 25, 2019] Concatenating Strings with the += Operator

Mar 25, 2019 | linuxize.com


Concatenating Strings with the += Operator

Another way of concatenating strings in bash is by appending variables or literal strings to a variable using the += operator:

VAR1="Hello, "
VAR1+=" World"
echo "$VAR1"
Hello, World

The following example uses the += operator to concatenate strings in a bash for loop:

languages.sh
VAR=""
for ELEMENT in 'Hydrogen' 'Helium' 'Lithium' 'Beryllium'; do
  VAR+="${ELEMENT} "
done

echo "$VAR"

[Mar 13, 2019] Getting started with the cat command by Alan Formy-Duval

Mar 13, 2019 | opensource.com

Cat can also number a file's lines during output. There are two options to do this, as shown in the help documentation:

   -b, --number-nonblank    number nonempty output lines, overrides -n
   -n, --number             number all output lines

If I use the -b option with the hello.world file, the output will be numbered like this:

   $ cat -b hello.world
   1 Hello World !

In the example above, there is an empty line. We can determine why this empty line appears by using the -n argument:

$ cat -n hello.world
   1 Hello World !
   2
   $

Now we see that there is an extra empty line. These two arguments are operating on the final output rather than the file contents, so if we were to use the -n option with both files, numbering will count lines as follows:

   
   $ cat -n hello.world goodbye.world
   1 Hello World !
   2
   3 Good Bye World !
   4
   $

One other option that can be useful is -s for squeeze-blank . This argument tells cat to reduce repeated empty line output down to one line. This is helpful when reviewing files that have a lot of empty lines, because it effectively fits more text on the screen. Suppose I have a file with three lines that are spaced apart by several empty lines, such as in this example, greetings.world :

   $ cat greetings.world
   Greetings World !

   Take me to your Leader !

   We Come in Peace !
   $

Using the -s option saves screen space:

$ cat -s greetings.world
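
Based on the greetings.world file shown above, the squeezed output should look roughly like this, with each run of blank lines reduced to a single blank line:

Greetings World !

Take me to your Leader !

We Come in Peace !
$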

Cat is often used to copy contents of one file to another file. You may be asking, "Why not just use cp ?" Here is how I could create a new file, called both.files , that contains the contents of the hello and goodbye files:

$ cat hello.world goodbye.world > both.files
$ cat both.files
Hello World !
Good Bye World !
$
zcat

There is another variation on the cat command known as zcat . This command is capable of displaying files that have been compressed with Gzip without needing to uncompress the files with the gunzip command. As an aside, this also preserves disk space, which is the entire reason files are compressed!

The zcat command is a bit more exciting because it can be a huge time saver for system administrators who spend a lot of time reviewing system log files. Where can we find compressed log files? Take a look at /var/log on most Linux systems. On my system, /var/log contains several files, such as syslog.2.gz and syslog.3.gz . These files are the result of the log management system, which rotates and compresses log files to save disk space and prevent logs from growing to unmanageable file sizes. Without zcat , I would have to uncompress these files with the gunzip command before viewing them. Thankfully, I can use zcat :

$ cd /var/log
$ ls *.gz
syslog.2.gz syslog.3.gz
$
$ zcat syslog.2.gz | more
Jan 30 00:02:26 workstation systemd[1850]: Starting GNOME Terminal Server...
Jan 30 00:02:26 workstation dbus-daemon[1920]: [session uid=2112 pid=1920] Successfully activated service 'org.gnome.Terminal'
Jan 30 00:02:26 workstation systemd[1850]: Started GNOME Terminal Server.
Jan 30 00:02:26 workstation org.gnome.Terminal.desktop[2059]: # watch_fast: "/org/gnome/terminal/legacy/" (establishing: 0, active: 0)
Jan 30 00:02:26 workstation org.gnome.Terminal.desktop[2059]: # unwatch_fast: "/org/gnome/terminal/legacy/" (active: 0, establishing: 1)
Jan 30 00:02:26 workstation org.gnome.Terminal.desktop[2059]: # watch_established: "/org/gnome/terminal/legacy/" (establishing: 0)
--More--

We can also pass both files to zcat if we want to review both of them uninterrupted. Due to how log rotation works, you need to pass the filenames in reverse order to preserve the chronological order of the log contents:

$ ls -l *.gz
-rw-r----- 1 syslog adm  196383 Jan 31 00:00 syslog.2.gz
-rw-r----- 1 syslog adm 1137176 Jan 30 00:00 syslog.3.gz
$ zcat syslog.3.gz syslog.2.gz | more
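
zcat also combines well with other filters when hunting through rotated logs; a minimal sketch, assuming the same file names as above:

$ zcat syslog.3.gz syslog.2.gz | grep -i terminal | less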

The cat command seems simple but is very useful. I use it regularly. You also don't need to feed or pet it like a real cat. As always, I suggest you review the man pages ( man cat ) for the cat and zcat commands to learn more about how they can be used. You can also use the --help argument for a quick synopsis of command line arguments.

Victorhck on 13 Feb 2019 Permalink

and there's also a "tac" command, that is just a "cat" upside down!
Following your example:

~~~~~

tac both.files
Good Bye World!
Hello World!
~~~~
Happy hacking! :)
Johan Godfried on 26 Feb 2019 Permalink

Interesting article but please don't misuse cat to pipe to more......

I am trying to teach people to use less pipes and here you go abusing cat to pipe to other commands. IMHO, 99.9% of the time this is not necessary!

In stead of "cat file | command" most of the time, you can use "command file" (yes, I am an old dinosaur from a time where memory was very expensive and forking multiple commands could fill it all up)

Uri Ran on 03 Mar 2019 Permalink

Run cat then press keys to see the codes your shortcut send. (Press Ctrl+C to kill the cat when you're done.)

For example, on my Mac, the key combination option-leftarrow is ^[^[[D and command-downarrow is ^[[B.

I learned it from https://stackoverflow.com/users/787216/lolesque in his answer to https://stackoverflow.com/questions/12382499/looking-for-altleftarrowkey...

Geordie on 04 Mar 2019 Permalink

cat is also useful to make (or append to) text files without an editor:

$ cat >> foo << "EOF"
> Hello World
> Another Line
> EOF
$

[Mar 13, 2019] Pilots Complained About Boeing 737 Max 8 For Months Before Second Deadly Crash

Mar 13, 2019 | www.zerohedge.com

Several Pilots repeatedly warned federal authorities of safety concerns over the now-grounded Boeing 737 Max 8 for months leading up to the second deadly disaster involving the plane, according to an investigation by the Dallas Morning News. One captain even called the Max 8's flight manual "inadequate and almost criminally insufficient," according to the report.

" The fact that this airplane requires such jury-rigging to fly is a red flag. Now we know the systems employed are error-prone -- even if the pilots aren't sure what those systems are, what redundancies are in place and failure modes. I am left to wonder: what else don't I know?" wrote the captain.

At least five complaints about the Boeing jet were found in a federal database which pilots routinely use to report aviation incidents without fear of repercussions.

The complaints are about the safety mechanism cited in preliminary reports for an October plane crash in Indonesia that killed 189.

The disclosures found by The News reference problems during flights of Boeing 737 Max 8s with an autopilot system during takeoff and nose-down situations while trying to gain altitude. While records show these flights occurred during October and November, information regarding which airlines the pilots were flying for at the time is redacted from the database. - Dallas Morning News

One captain who flies the Max 8 said in November that it was "unconscionable" that Boeing and federal authorities have allowed pilots to fly the plane without adequate training - including a failure to fully disclose how its systems were distinctly different from other planes.

An FAA spokesman said the reporting system is directly filed to NASA, which serves as a neutral third party in the reporting of grievances.

"The FAA analyzes these reports along with other safety data gathered through programs the FAA administers directly, including the Aviation Safety Action Program, which includes all of the major airlines including Southwest and American," said FAA southwest regional spokesman Lynn Lunsford.

Meanwhile, despite several airlines and foreign countries grounding the Max 8, US regulators have so far declined to follow suit. They have, however, mandated that Boeing upgrade the plane's software by April.

Sen. Ted Cruz (R-TX), who chairs a Senate subcommittee overseeing aviation, called for the grounding of the Max 8 in a Thursday statement.

"Further investigation may reveal that mechanical issues were not the cause, but until that time, our first priority must be the safety of the flying public," said Cruz.

At least 18 carriers -- including American Airlines and Southwest Airlines, the two largest U.S. carriers flying the 737 Max 8 -- have also declined to ground planes, saying they are confident in the safety and "airworthiness" of their fleets. American and Southwest have 24 and 34 of the aircraft in their fleets, respectively. - Dallas Morning News

"The United States should be leading the world in aviation safety," said Transport Workers Union president John Samuelsen. "And yet, because of the lust for profit in the American aviation, we're still flying planes that dozens of other countries and airlines have now said need to be grounded."

[Mar 13, 2019] Boeing's automatic trim for the 737 MAX was not disclosed to the Pilots by Bjorn Fehrm

The background to Boeing's 737 MAX automatic trim
Mar 13, 2019 | leehamnews.com

The automatic trim we described last week has a name, MCAS, or Maneuvering Characteristics Augmentation System.

It's unique to the MAX because the 737 MAX no longer has the docile pitch characteristics of the 737NG at high Angles Of Attack (AOA). This is caused by the larger engine nacelles covering the higher bypass LEAP-1B engines.

The nacelles for the MAX are larger and placed higher and further forward of the wing, Figure 1.

Figure 1. Boeing 737NG (left) and MAX (right) nacelles compared. Source: Boeing 737 MAX brochure.

By placing the nacelle further forward of the wing, it could be placed higher. Combined with a higher nose landing gear, which raises the nacelle further, the same ground clearance could be achieved for the nacelle as for the 737NG.

The drawback of a larger nacelle, placed further forward, is it destabilizes the aircraft in pitch. All objects on an aircraft placed ahead of the Center of Gravity (the line in Figure 2, around which the aircraft moves in pitch) will contribute to destabilize the aircraft in pitch.

... ... ...

The 737 is a classical flight control aircraft. It relies on a naturally stable base aircraft for its flight control design, augmented in selected areas. One such area is the artificial yaw damping, present on virtually all larger aircraft (to stop passengers getting sick from the aircraft's natural tendency to Dutch Roll = Wagging its tail).

Until the MAX, there was no need for artificial aids in pitch. Once the aircraft entered a stall, there were several actions described last week which assisted the pilot to exit the stall. But not in normal flight.

The larger nacelles, called for by the higher bypass LEAP-1B engines, changed this. When flying at normal angles of attack (3° at cruise and say 5° in a turn) the destabilizing effect of the larger engines is not felt.

The nacelles are designed to not generate lift in normal flight. It would generate unnecessary drag as the aspect ratio of an engine nacelle is lousy. The aircraft designer focuses the lift to the high aspect ratio wings.

But if the pilot for whatever reason manoeuvres the aircraft hard, generating an angle of attack close to the stall angle of around 14°, the previously neutral engine nacelle generates lift. A lift which is felt by the aircraft as a pitch up moment (as it's ahead of the CG line), now stronger than on the 737NG. This destabilizes the MAX in pitch at higher Angles Of Attack (AOA). The most difficult situation is when the maneuver has a high pitch ratio. The aircraft's inertia can then provoke an over-swing into stall AOA.

To counter the MAX's lower stability margins at high AOA, Boeing introduced MCAS. Dependent on AOA value and rate, altitude (air density) and Mach (changed flow conditions) the MCAS, which is a software loop in the Flight Control computer, initiates a nose down trim above a threshold AOA.

It can be stopped by the Pilot counter-trimming on the Yoke or by him hitting the CUTOUT switches on the center pedestal. It's not stopped by the Pilot pulling the Yoke, which for normal trim from the autopilot or runaway manual trim triggers trim hold sensors. This would negate why MCAS was implemented, the Pilot pulling so hard on the Yoke that the aircraft is flying close to stall.

It's probably this counterintuitive characteristic, which goes against what has been trained many times in the simulator for unwanted autopilot trim or manual trim runaway, which has confused the pilots of JT610. They learned that holding against the trim stopped the nose down, and then they could take action, like counter-trimming or outright CUTOUT the trim servo. But it didn't. After a 10 second trim to a 2.5° nose down stabilizer position, the trimming started again despite the Pilots pulling against it. The faulty high AOA signal was still present.

How should they know that pulling on the Yoke didn't stop the trim? It was described nowhere; neither in the aircraft's manual, the AFM, nor in the Pilot's manual, the FCOM. This has created strong reactions from airlines with the 737 MAX on the flight line and their Pilots. They have learned the NG and the MAX flies the same. They fly them interchangeably during the week.

They do fly the same as long as no fault appears. Then there are differences, and the Pilots should have been informed about the differences.

  1. Bruce Levitt
    November 14, 2018
    In figure 2 it shows the same center of gravity for the NG as the Max. I find this a bit surprising as I would have expected that mounting heavy engines further forward would have cause a shift forward in the center of gravity that would not have been offset by the longer tailcone, which I'm assuming is relatively light even with APU installed.

    Based on what is coming out about the automatic trim, Boeing must be counting its lucky stars that this incident happened to Lion Air and not to an American aircraft. If this had happened in the US, I'm pretty sure the fleet would have been grounded by the FAA and the class action lawyers would be lined up outside the door to get their many pounds of flesh.

    This is quite the wake-up call for Boeing.

    • OV-099
      November 14, 2018
If the FAA is not going to comprehensively review the certification for the 737 MAX, I would not be surprised if EASA would start taking a closer look at the aircraft and why the FAA seemingly missed the seemingly inadequate testing of the automatic trim when they decided to certify the 737 MAX 8.
      • Doubting Thomas
        November 16, 2018
        One wonders if there are any OTHER goodies in the new/improved/yet identical handling latest iteration of this old bird that Boeing did not disclose so that pilots need not be retrained.
        EASA & FAA likely already are asking some pointed questions and will want to verify any statements made by the manufacturer.
        Depending on the answers pilot training requirements are likely to change materially.
    • jbeeko
      November 14, 2018
      CG will vary based on loading. I'd guess the line is the rear-most allowed CG.
    • ahmed
      November 18, 2018
      hi dears
I think that even if the pilot didn't know about MCAS, this case could be corrected just by applying the Boeing checklist (QRH) for stabilizer runaway.
When the pilots noticed that the stabilizer was trimming without a known input (from pilot or from autopilot), they should have put the cutout switch in the off position according to the QRH.
      • TransWorld
        November 19, 2018
Please note that the first action is pulling back on the yoke to stop it.

        Also keep in mind the aircraft is screaming stall and the stick shaker is activated.

        Pulling back on the yoke in that case is the WRONG thing to do if you are stalled.

The Pilot has to then determine which system is lying.

At the same time it's changing its behavior from previous training; every 5 seconds, it does it again.

        There also was another issue taking place at the same time.

        So now you have two systems lying to you, one that is actively trying to kill you.

        If the Pitot static system is broken, you also have several key instruments feeding you bad data (VSI, altitude and speed)

    • TransWorld
      November 14, 2018
      Grubbie: I can partly answer that.

      Pilots are trained to immediately deal with emergency issues (engine loss etc)

      Then there is a follow up detailed instructions for follow on actions (if any).

      Simulators are wonderful things because you can train lethal scenes without lethal results.

      In this case, with NO pilot training let alone in the manuals, pilots have to either be really quick in the situation or you get the result you do. Some are better at it than others (Sullenbergers along with other aspects elected to turn on his APU even though it was not part of the engine out checklist)

      The other one was to ditch, too many pilots try to turn back even though we are trained not to.

What I can tell you from personal experience is having got myself into a spin without any training, I was locked up logic wise (panic) as suddenly nothing was working the way it should.

      I was lucky I was high enough and my brain kicked back into cold logic mode and I knew the counter to a spin from reading)

      Another 500 feet and I would not be here to post.

      While I did parts of the spin recovery wrong, fortunately in that aircraft it did not care, right rudder was enough to stop it.

  1. OV-099
    November 14, 2018
It's starting to look as if Boeing will not be able to just pay victims' relatives in the form of "condolence money", without admitting liability.
    • Dukeofurl
      November 14, 2018
      Im pretty sure, even though its an Indonesian Airline, any whiff of fault with the plane itself will have lawyers taking Boeing on in US courts.
  1. Tech-guru
    November 14, 2018
Astonishing to say the least. It is quite unlike Boeing. They are normally very good in documentation and training. It makes everyone wonder how such a vital change on the MAX aircraft was omitted from the books as well as from crew training.
    Your explanation is very good as to why you need this damn MCAS. But can you also tell us how just one faulty sensor can trigger this MCAS. In all other Boeing models like B777, the two AOA sensor signals are compared with a calculated AOA and choose the mid value within the ADIRU. It eliminates a drastic mistake of following a wrong sensor input.
    • Bjorn Fehrm
      November 14, 2018
Hi Tech-Guru,

      it's not sure it's a one sensor fault. One sensor was changed amid information there was a 20 degree diff between the two sides. But then it happened again. I think we might be informed something else is at the root of this, which could also trip such a plausibility check you mention. We just don't know. What we know is the MCAS function was triggered without the aircraft being close to stall.

      • Matthew
        November 14, 2018
        If it's certain that the MCAS was doing unhelpful things, that coupled with the fact that no one was telling pilots anything about it suggests to me that this is already effectively an open-and-shut case so far as liability, regulatory remedies are concerned.

The technical root cause is also important, but probably irrelevant so far as establishing the ultimate reason behind the crash.


[Mar 13, 2019] Boeing Crapification Second 737 Max Plane Within Five Months Crashes Just After Takeoff

Notable quotes:
"... The key point I want to pick up on from that earlier post is this: the Boeing 737 Max includes a new "safety" feature about which the company failed to inform the Federal Aviation Administration (FAA). ..."
"... Boeing Co. withheld information about potential hazards associated with a new flight-control feature suspected of playing a role in last month's fatal Lion Air jet crash, according to safety experts involved in the investigation, as well as midlevel FAA officials and airline pilots. ..."
"... Notice that phrase: "under unusual conditions". Seems now that the pilots of two of these jets may have encountered such unusual conditions since October. ..."
"... Why did Boeing neglect to tell the FAA – or, for that matter, other airlines or regulatory authorities – about the changes to the 737 Max? Well, the airline marketed the new jet as not needing pilots to undergo any additional training in order to fly it. ..."
"... In addition to considerable potential huge legal liability, from both the Lion Air and Ethiopian Airlines crashes, Boeing also faces the commercial consequences of grounding some if not all 737 Max 8 'planes currently in service – temporarily? indefinitely? -and loss or at minimum delay of all future sales of this aircraft model. ..."
"... If this tragedy had happened on an aircraft of another manufacturer other than big Boeing, the fleet would already have been grounded by the FAA. The arrogance of engineers both at Airbus and Boeing, who refuse to give the pilots easy means to regain immediate and full authority over the plane (pitch and power) is just appalling. ..."
"... Boeing has made significant inroads in China with its 737 MAX family. A dozen Chinese airlines have ordered 180 of the planes, and 76 of them have been delivered, according Boeing. About 85% of Boeing's unfilled Chinese airline orders are for 737 MAX planes. ..."
"... "It's pretty asinine for them to put a system on an airplane and not tell the pilots who are operating the airplane, especially when it deals with flight controls," Captain Mike Michaelis, chairman of the safety committee for the Allied Pilots Association, told the Wall Street Journal. ..."
"... The aircraft company concealed the new system and minimized the differences between the MAX and other versions of the 737 to boost sales. On the Boeing website, the company claims that airlines can save "millions of dollars" by purchasing the new plane "because of its commonality" with previous versions of the plane. ..."
"... "Years of experience representing hundreds of victims has revealed a common thread through most air disaster cases," said Charles Herrmann the principle of Herrmann Law. "Generating profit in a fiercely competitive market too often involves cutting safety measures. In this case, Boeing cut training and completely eliminated instructions and warnings on a new system. Pilots didn't even know it existed. I can't blame so many pilots for being mad as hell." ..."
"... The Air France Airbus disaster was jumped on – Boeing's traditional hydraulic links between the sticks for the two pilots ensuring they move in tandem; the supposed comments by Captain Sully that the Airbus software didn't allow him to hit the water at the optimal angle he wanted, causing the rear rupture in the fuselage both showed the inferiority of fly-by-wire until Boeing started using it too. (Sully has taken issue with the book making the above point and concludes fly-by-wire is a "mixed blessing".) ..."
"... Money over people. ..."
Mar 13, 2019 | www.nakedcapitalism.com

Posted on March 11, 2019 by Jerri-Lynn Scofield By Jerri-Lynn Scofield, who has worked as a securities lawyer and a derivatives trader. She is currently writing a book about textile artisans.

Yesterday, an Ethiopian Airlines flight crashed minutes after takeoff, killing all 157 passengers on board.

The crash occurred less than five months after a Lion Air jet crashed near Jakarta, Indonesia, also shortly after takeoff, and killed all 189 passengers.

Both jets were Boeing's latest 737 Max 8 model.

The Wall Street Journal reports in Ethiopian Crash Carries High Stakes for Boeing, Growing African Airline :

The state-owned airline is among the early operators of Boeing's new 737 MAX single-aisle workhorse aircraft, which has been delivered to carriers around the world since 2017. The 737 MAX represents about two-thirds of Boeing's future deliveries and an estimated 40% of its profits, according to analysts.

Having delivered 350 of the 737 MAX planes as of January, Boeing has booked orders for about 5,000 more, many to airlines in fast-growing emerging markets around the world.

The voice and data recorders for the doomed flight have already been recovered, the New York Times reported in Ethiopian Airline Crash Updates: Data and Voice Recorders Recovered . Investigators will soon be able to determine whether the same factors that caused the Lion Air crash also caused the latest Ethiopian Airlines tragedy.

Boeing, Crapification, Two 737 Max Crashes Within Five Months

Yves wrote a post in November, Boeing, Crapification, and the Lion Air Crash, analyzing a devastating Wall Street Journal report on that earlier crash. I will not repeat the details of her post here, but instead encourage interested readers to read it in full.

The key point I want to pick up on from that earlier post is this: the Boeing 737 Max includes a new "safety" feature about which the company failed to inform the Federal Aviation Administration (FAA). As Yves wrote:

The short version of the story is that Boeing had implemented a new "safety" feature that operated even when its plane was being flown manually, that if it went into a stall, it would lower the nose suddenly to pick up airspeed and fly normally again. However, Boeing didn't tell its buyers or even the FAA about this new goodie. It wasn't in pilot training or even the manuals. But even worse, this new control could force the nose down so far that it would be impossible not to crash the plane. And no, I am not making this up. From the Wall Street Journal:

Boeing Co. withheld information about potential hazards associated with a new flight-control feature suspected of playing a role in last month's fatal Lion Air jet crash, according to safety experts involved in the investigation, as well as midlevel FAA officials and airline pilots.

The automated stall-prevention system on Boeing 737 MAX 8 and MAX 9 models -- intended to help cockpit crews avoid mistakenly raising a plane's nose dangerously high -- under unusual conditions can push it down unexpectedly and so strongly that flight crews can't pull it back up. Such a scenario, Boeing told airlines in a world-wide safety bulletin roughly a week after the accident, can result in a steep dive or crash -- even if pilots are manually flying the jetliner and don't expect flight-control computers to kick in.

Notice that phrase: "under unusual conditions". Seems now that the pilots of two of these jets may have encountered such unusual conditions since October.

Why did Boeing neglect to tell the FAA – or, for that matter, other airlines or regulatory authorities – about the changes to the 737 Max? Well, the airline marketed the new jet as not needing pilots to undergo any additional training in order to fly it.

I see.

Why Were 737 Max Jets Still in Service?

Today, Boeing executives no doubt rue not pulling all 737 Max 8 jets out of service after the October Lion Air crash, to allow their engineers and engineering safety regulators to make necessary changes in the 'plane's design or to develop new training protocols.

In addition to considerable potential huge legal liability, from both the Lion Air and Ethiopian Airlines crashes, Boeing also faces the commercial consequences of grounding some if not all 737 Max 8 'planes currently in service – temporarily? indefinitely? -and loss or at minimum delay of all future sales of this aircraft model.

Over to Yves again, who in her November post cut to the crux:

And why haven't the planes been taken out of service? As one Wall Street Journal reader put it:

If this tragedy had happened on an aircraft of another manufacturer other than big Boeing, the fleet would already have been grounded by the FAA. The arrogance of engineers both at Airbus and Boeing, who refuse to give the pilots easy means to regain immediate and full authority over the plane (pitch and power) is just appalling.

Accident and incident records abound where the automation has been a major contributing factor or precursor. Knowing our friends at Boeing, it is highly probable that they will steer the investigation towards maintenance deficiencies as primary cause of the accident

In the wake of the Ethiopian Airlines crash, other countries have not waited for the FAA to act. China and Indonesia, as well as Ethiopian Airlines and Cayman Airways, have grounded flights of all Boeing 737 Max 8 aircraft, the Guardian reported in Ethiopian Airlines crash: Boeing faces safety questions over 737 Max 8 jets . The FT has called the Chinese and Indonesian actions an "unparalleled flight ban" (see China and Indonesia ground Boeing 737 Max 8 jets after latest crash ). India's air regulator has also issued new rules covering flights of the 737 Max aircraft, requiring pilots to have a minimum of 1,000 hours experience to fly these 'planes, according to a report in the Economic Times, DGCA issues additional safety instructions for flying B737 MAX planes.

Future of Boeing?

The commercial consequences of grounding the 737 Max in China alone are significant, according to this CNN account, Why grounding 737 MAX jets is a big deal for Boeing . The 737 Max is Boeing's most important plane; China is also the company's major market:

"A suspension in China is very significant, as this is a major market for Boeing," said Greg Waldron, Asia managing editor at aviation research firm FlightGlobal.

Boeing has predicted that China will soon become the world's first trillion-dollar market for jets. By 2037, Boeing estimates China will need 7,690 commercial jets to meet its travel demands.

Airbus (EADSF) and Commercial Aircraft Corporation of China, or Comac, are vying with Boeing for the vast and rapidly growing Chinese market.

Comac's first plane, designed to compete with the single-aisle Boeing 737 MAX and Airbus A320, made its first test flight in 2017. It is not yet ready for commercial service, but Boeing can't afford any missteps.

Boeing has made significant inroads in China with its 737 MAX family. A dozen Chinese airlines have ordered 180 of the planes, and 76 of them have been delivered, according Boeing. About 85% of Boeing's unfilled Chinese airline orders are for 737 MAX planes.

The 737 has been Boeing's bestselling product for decades. The company's future depends on the success the 737 MAX, the newest version of the jet. Boeing has 4,700 unfilled orders for 737s, representing 80% of Boeing's orders backlog. Virtually all 737 orders are for MAX versions.

As of the time of posting, US airlines have yet to ground their 737 Max 8 fleets. American Airlines, Alaska Air, Southwest Airlines, and United Airlines have ordered a combined 548 of the new 737 jets, of which 65 have been delivered, according to CNN.

Legal Liability?

Prior to Sunday's Ethiopian Airlines crash, Boeing already faced considerable potential legal liability for the October Lion Air crash. Just last Thursday, the Hermann Law Group of personal injury lawyers filed suit against Boeing on behalf of the families of 17 Indonesian passengers who died in that crash.

The Families of Lion Air Crash File Lawsuit Against Boeing – News Release did not mince words:

"It's pretty asinine for them to put a system on an airplane and not tell the pilots who are operating the airplane, especially when it deals with flight controls," Captain Mike Michaelis, chairman of the safety committee for the Allied Pilots Association, told the Wall Street Journal.

The president of the pilots union at Southwest Airlines, Jon Weaks, said, "We're pissed that Boeing didn't tell the companies, and the pilots didn't get notice."

The aircraft company concealed the new system and minimized the differences between the MAX and other versions of the 737 to boost sales. On the Boeing website, the company claims that airlines can save "millions of dollars" by purchasing the new plane "because of its commonality" with previous versions of the plane.

"Years of experience representing hundreds of victims has revealed a common thread through most air disaster cases," said Charles Herrmann the principle of Herrmann Law. "Generating profit in a fiercely competitive market too often involves cutting safety measures. In this case, Boeing cut training and completely eliminated instructions and warnings on a new system. Pilots didn't even know it existed. I can't blame so many pilots for being mad as hell."

Additionally, the complaint alleges the United States Federal Aviation Administration is partially culpable for negligently certifying Boeing's Air Flight Manual without requiring adequate instruction and training on the new system. Canadian and Brazilian authorities did require additional training.

What's Next?

The consequences for Boeing could be serious and will depend on what the flight and voice data recorders reveal. I also am curious as to what additional flight training or instructions, if any, the Ethiopian Airlines pilots received, either before or after the Lion Air crash, whether from Boeing, an air safety regulator, or any other source.


el_tel , March 11, 2019 at 5:04 pm

Of course we shouldn't engage in speculation, but we will anyway 'cause we're human. If fly-by-wire and the ability of software to over-ride pilots are indeed implicated in the 737 Max 8 then you can bet the Airbus cheer-leaders on YouTube videos will engage in huge Schadenfreude.

I really shouldn't even look at comments to YouTube videos – it's bad for my blood pressure. But I occasionally dip into the swamp on ones in areas like airlines. Of course – as you'd expect – you get a large amount of "flag waving" between Europeans and Americans. But the level of hatred and suspiciously similar comments by the "if it ain't Boeing I ain't going" brigade struck me as in a whole new league long before the "SJW" troll wars regarding things like Captain Marvel etc of today.

The Air France Airbus disaster was jumped on – Boeing's traditional hydraulic links between the sticks for the two pilots ensuring they move in tandem; the supposed comments by Captain Sully that the Airbus software didn't allow him to hit the water at the optimal angle he wanted, causing the rear rupture in the fuselage both showed the inferiority of fly-by-wire until Boeing started using it too. (Sully has taken issue with the book making the above point and concludes fly-by-wire is a "mixed blessing".)

I'm going to try to steer clear of my YouTube channels on airlines. Hopefully NC will continue to provide the real evidence as it emerges as to what's been going on here.

Monty , March 11, 2019 at 7:14 pm

Re SJW troll wars.

It is really disheartening how an idea as reasonable as "a just society" has been so thoroughly discredited among a large swath of the population.

No wonder there is such a wide interest in primitive construction and technology on YouTube. This society is very sick and it is nice to pretend there is a way to opt out.

none , March 11, 2019 at 8:17 pm

The version I heard (today, on Reddit) was "if it's Boeing, I'm not going". Hadn't seen the opposite version to just now.

Octopii , March 12, 2019 at 5:19 pm

Nobody is going to provide real evidence but the NTSB.

albert , March 12, 2019 at 6:44 pm

Indeed. The NTSB usually works with local investigation teams (as well as a manufacturers rep) if the manufacturer is located in the US, or if specifically requested by the local authorities. I'd like to see their report. I don't care what the FAA or Boeing says about it.

d , March 12, 2019 at 5:58 pm

fly by wire has been around since the 90s, it's not new

notabanker , March 11, 2019 at 6:37 pm

Contains a link to a Seattle Times report as a "comprehensive wrap":
Speaking before China's announcement, Cox, who previously served as the top safety official for the Air Line Pilots Association, said it's premature to think of grounding the 737 MAX fleet.

"We don't know anything yet. We don't have close to sufficient information to consider grounding the planes," he said. "That would create economic pressure on a number of the airlines that's unjustified at this point.

China has grounded them. US? Must not create undue economic pressure on the airlines. Right there in black and white. Money over people.

Joey , March 11, 2019 at 11:13 pm

I just emailed southwest about an upcoming flight asking about my choices for refusal to board MAX 8/9 planes based on this "feature". I expect pro forma policy recitation, but customer pressure could trump too big to fail sweeping the dirt under the carpet. I hope.

Thuto , March 12, 2019 at 3:35 am

We got the "safety of our customers is our top priority and we are remaining vigilant and are in touch with Boeing and the Civial Aviation Authority on this matter but will not be grounding the aircraft model until further information on the crash becomes available" speech from a local airline here in South Africa. It didn't take half a day for customer pressure to effect a swift reversal of that blatant disregard for their "top priority", the model is grounded so yeah, customer muscle flexing will do it

Jessica , March 12, 2019 at 5:26 am

On PPRUNE.ORG (where a lot of pilots hang out), they reported that after the Lion Air crash, Southwest added an extra display (to indicate when the two angle of attack sensors were disagreeing with each other) that the folks on PPRUNE thought was an extremely good idea and effective.
Of course, if the Ethiopian crash was due to something different from the Lion Air crash, that extra display on the Southwest planes may not make any difference.

JerryDenim , March 12, 2019 at 2:09 pm

"On PPRUNE.ORG (where a lot of pilots hang out)"

Take those comments with a large dose of salt. Not to say everyone commenting on PPRUNE and sites like PPRUNE are posers, but PPRUNE.org is where a lot of wanna-be pilots and guys that spend a lot of time in basements playing flight simulator games hang out. The "real pilots" on PPRUNE are more frequently of the aspiring airline pilot type that fly smaller, piston-powered planes.

Altandmain , March 11, 2019 at 5:31 pm

We will have to wait and see what the final investigation reveals. However this does not look good for Boeing at all.

The Maneuvering Characteristics Augmentation System (MCAS) was implicated in the Lion Air crash. There have been a lot of complaints about the system on many of the pilot forums, suggesting at least anecdotally that there are issues. It is highly suspected that the MCAS system is responsible for this crash too.

Keep in mind that Ethiopian Airlines is a pretty well-known and regarded airline. This is not a cut rate airline we are talking about.

At this point, all we can do is to wait for the investigation results.

d , March 12, 2019 at 6:01 pm

One other minor thing: you remember that shutdown? It seems that would have delayed any updates from Boeing. It seems that's one of the things the pilots pointed out while the shutdown was in progress.

WestcoastDeplorable , March 11, 2019 at 5:33 pm

What really is the icing on this cake is the fact the new, larger engines on the "Max" changed the center of gravity of the plane and made it unstable. From what I've read on aviation blogs, this is highly unusual for a commercial passenger jet. Boeing then created the new "safety" feature which makes the plane fly nose down to avoid a stall. But of course garbage in, garbage out on sensors (remember AF447 which stalled right into the S. Atlantic?).
It's all politics anyway. If Boeing had been forthcoming about the "Max" it would have required additional pilot training to certify pilots to fly the airliner. They didn't, and now another 189 passengers are D.O.A.
I wouldn't fly on one and wouldn't let family do so either.

Carey , March 11, 2019 at 5:40 pm

If I have read correctly, the MCAS system (not known of by pilots until after the Lion Air crash) is reliant on a single Angle of Attack sensor, without redundancy (!). It's too early
to say if MCAS was an issue in the crashes, I guess, but this does not look good.

Jessica , March 12, 2019 at 5:42 am

If it was some other issue with the plane, that will be almost worse for Boeing. Two crash-causing flaws would require grounding all of the planes, suspending production, then doing some kind of severe testing or other to make sure that there isn't a third flaw waiting to show up.

vomkammer , March 12, 2019 at 3:19 pm

If MCAS relies only on one Angle of Attack (AoA) sensor, then it might have been an error in the system design and the safety assessment, for which Boeing may be liable.

It appears that a failure of the AoA can produce an unannunciated erroneous pitch trim:
a) If the pilots had proper training and awareness, this event would "only" increase their workload,
b) But for an unaware or untrained pilot, the event would impair their ability to fly and introduce excessive workload.

The difference is important, because according to standard civil aviation safety assessment (see for instance EASA AMC 25.1309 Ch. 7), the case a) should be classified as "Major" failure, whereas b) should be classified as "Hazardous". "Hazardous" failures are required to have much lower probability, which means MCAS needs two AoA sensors.

In summary: a safe MCAS would need either a second AoA or pilot training. It seems that it had neither.
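To make the probability arithmetic behind that classification concrete, here is a minimal Python sketch. The per-flight-hour budgets are the figures commonly cited for "Major" and "Hazardous" failure conditions under CS-25/AMC 25.1309; the AoA sensor failure rate is an assumed, purely illustrative number, not a real Boeing or supplier value.

```python
# Illustrative only: assumed sensor failure rate, standard-ish probability budgets.
AOA_SENSOR_FAILURE_RATE = 1e-5   # assumed probability of one AoA sensor failing, per flight hour (hypothetical)
MAJOR_BUDGET = 1e-5              # max allowed probability for a "Major" failure condition
HAZARDOUS_BUDGET = 1e-7          # max allowed probability for a "Hazardous" failure condition

single = AOA_SENSOR_FAILURE_RATE        # erroneous trim driven by one bad sensor
dual = AOA_SENSOR_FAILURE_RATE ** 2     # both independent sensors wrong at the same time

print(f"single sensor {single:.0e}: Major ok={single <= MAJOR_BUDGET}, Hazardous ok={single <= HAZARDOUS_BUDGET}")
print(f"dual sensors  {dual:.0e}: Hazardous ok={dual <= HAZARDOUS_BUDGET}")
```

With numbers like these, a single sensor can fit a "Major" budget but falls two orders of magnitude short of the "Hazardous" one, which is the commenter's point about needing either a second AoA input or pilot training.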

drumlin woodchuckles , March 12, 2019 at 1:01 am

What are the ways an ignorant lay air traveler can find out about whether a particular airline has these new-type Boeing 737 MAXes in its fleet? What are the ways an ignorant air traveler can find out which airlines do not have ANY of these airplanes in their fleet?

What are the ways an ignorant air traveler can find out ahead of time, when still planning herm's trip, which flights use a 737 MAX as against some other kind of plane?

The only way the flying public could possibly torture the airlines into grounding these planes until it is safe to de-ground them is a total all-encompassing "fearcott" against this airplane all around the world. Only if the airlines in the "go ahead and fly it" countries sell zero seats, without exception, on every single 737 MAX plane that flies, will the airlines themselves take them out of service till the issues are resolved.

Hence my asking how people who wish to save their own lives from future accidents can tell when and where they might be exposed to the risk of boarding a Boeing 737 MAX plane.

Carey , March 12, 2019 at 2:13 am

Should be in your flight info, if not, contact the airline. I'm not getting on a 737 MAX.

pau llauter , March 12, 2019 at 10:57 am

Look up the flight on Seatguru. Generally tells type of aircraft. Of course, airlines do change them, too.

Old Jake , March 12, 2019 at 2:57 pm

Stop flying. Your employer requires it? Tell'em where to get off. There are alternatives. The alternatives are less polluting and have lower climate impact also. Yes, this is a hard pill to swallow. No, I don't travel for employment any more, I telecommute. I used to enjoy flying, but I avoid it like plague any more. Crapification.

Darius , March 12, 2019 at 5:09 pm

Additional training won't do. If they wanted larger engines, they needed a different plane. Changing to an unstable center of gravity and compensating for it with new software sounds like a joke except for the hundreds of victims. I'm not getting on that plane.

Joe Well , March 11, 2019 at 5:35 pm

Has there been any study of crapification as a broad social phenomenon? When I Google the word I only get links to NC and sites that reference NC. And yet, this seems like one of the guiding concepts to understand our present world (the crapification of UK media and civil service go a long way towards understanding Brexit, for instance).

I mean, my first thought is, why would Boeing commit corporate self-harm for the sake of a single bullet in sales materials (requires no pilot retraining!). And the answer, of course, is crapification: the people calling the shots don't know what they're doing.

none , March 11, 2019 at 11:56 pm

"Market for lemons" maybe? Anyway the phenomenon is well known.

Alfred , March 12, 2019 at 1:01 am

Google Books finds the word "crapification" quoted (from a 2004) in a work of literary criticism published in 2008 (Literature, Science and a New Humanities, by J. Gottschall). From 2013 it finds the following in a book by Edward Keenan, Some Great Idea: "Policy-wise, it represented a shift in momentum, a slowing down of the childish, intentional crapification of the city." So there the word appears clearly in the sense understood by regular readers here (along with an admission that crapification can be intentional and not just inadvertent). To illustrate that sense, Google Books finds the word used in Misfit Toymakers, by Keith T. Jenkins (2014): "We had been to the restaurant and we had water to drink, because after the takeover's, all of the soda makers were brought to ruination by the total crapification of their product, by government management." But almost twenty years earlier the word "crapification" had occurred in a comic strip published in New York Magazine (29 January 1996, p. 100): "Instant crapification! It's the perfect metaphor for the mirror on the soul of America!"

The word has been used on television. On 5 January 2010 a sketch subtitled "Night of Terror – The Crapification of the American Pant-scape" ran on The Colbert Report per: https://en.wikipedia.org/wiki/List_of_The_Colbert_Report_episodes_(2010) .

Searching the internet, Google results do indeed show many instances of the word "crapification" on NC, or quoted elsewhere from NC posts. But the same results show it used on many blogs since ca. 2010. Here, at http://nyceducator.com/2018/09/the-crapification-factor.html , is a recent example that comments on the word's popularization: "I stole that word, "crapification," from my friend Michael Fiorillo, but I'm fairly certain he stole it from someone else. In any case, I think it applies to our new online attendance system." A comment here, https://angrybearblog.com/2017/09/open-thread-sept-26-2017.html , recognizes NC to have been a vector of the word's increasing usage.

Googling shows that there have been numerous instances of the verb "crapify" used in computer-programming contexts, from at least as early as 2006. Google Books finds the word "crapified" used in a novel, Sonic Butler, by James Greve (2004). The derivation, "de-crapify," is also attested. "Crapify" was suggested to Merriam-Webster in 2007 per: http://nws.merriam-webster.com/opendictionary/newword_display_alpha.php?letter=Cr&last=40 . At that time the suggested definition was, "To make situations/things bad." The verb was posted to Urban Dictionary in 2003: https://www.urbandictionary.com/define.php?term=crapify .

The earliest serious discussion I could quickly find of crapification as a phenomenon was from 2009 at https://www.cryptogon.com/?p=10611 . I have found only two attempts to elucidate the causes of crapification: http://malepatternboldness.blogspot.com/2017/03/my-jockey-journey-or-crapification-of.html (an essay on undershirts) and https://twilightstarsong.blogspot.com/2017/04/complaints.html (a comment on refrigerators). This essay deals with the mechanics of job crapification: http://asserttrue.blogspot.com/2015/10/how-job-crapification-works.html (relating it to de-skilling).
An apparent Americanism, "crapification" has recently been 'translated' into French: "Mon bled est en pleine urbanisation, comprends : en pleine emmerdisation" [somewhat literally -- My hole in the road is in the midst of development, meaning: in the midst of crapification]: https://twitter.com/entre2passions/status/1085567796703096832 Interestingly, perhaps, a comprehensive search of amazon.com yields "No results for crapification."

Joe Well , March 12, 2019 at 12:27 pm

You deserve a medal! That's amazing research!

drumlin woodchuckles , March 12, 2019 at 1:08 am

This seems more like a specific business conspiracy than like general crapification. This isn't "they just don't make them like they used to". This is like Ford deliberately selling the Crash and Burn Pinto with its special explode-on-impact gas-tank feature.

Maybe some Trump-style insults should be crafted for this plane so they can get memed-up and travel faster than Boeing's ability to manage the story. Epithets like "the new Boeing crash-a-matic dive-liner, with nose-to-the-ground pilot-override autocrash built into every plane." It seems unfair, but life and safety should come before fairness, and that will only happen if a worldwide wave of fear MAKES it happen.

pretzelattack , March 12, 2019 at 2:17 am

yeah first thing i thought of was the ford pinto.

The Rev Kev , March 12, 2019 at 4:19 am

Now there is a car tailor-made for modern suicidal jihadists. You wouldn't even have to load it up with explosives, just a full fuel tank:

https://www.youtube.com/watch?v=lgOxWPGsJNY

drumlin woodchuckles , March 12, 2019 at 3:27 pm

" Instant car bomb. Just add gas."

EoH , March 12, 2019 at 8:47 am

Good time to reread Yves' recent Is a Harvard MBA Bad For You?:

The underlying problem is increasingly mercenary values in society.

JerryDenim , March 12, 2019 at 2:49 pm

I think crapification is the end result of a self-serving belief in the unfailing goodness and superiority of Ivy faux-meritocracy and the promotion/exaltation of the do-nothing, know-nothing, corporate, revolving-door MBA's and Psych-major HR types over people with many years of both company and industry experience who also have excellent professional track records. The latter group was the group in charge of major corporations and big decisions in the 'good old days', now it's the former. These morally bankrupt people and their vapid, self-righteous culture of PR first, management science second, and what-the-hell-else-matters anyway, are the prime drivers of crapification. Read the bio of an old-school celebrated CEO like Gordon Bethune (Continental CEO with corporate experience at Boeing) who skipped college altogether and joined the Navy at 17, and ask yourself how many people like that are in corporate board rooms today? I'm not saying going back to a 'Good Ole Boy's Club' is the best model of corporate governance either, but at least people like Bethune didn't think they were too good to mix with their fellow employees, understood leadership, the consequences of bullshit, and what the 'The buck stops here' thing was really about. Corporate types today sadly believe their own propaganda, and when their fraudulent schemes, can-kicking, and head-in-the-sand strategies inevitably blow up in their faces, they accept no blame and fail upwards to another posh corporate job or a nice golden parachute. The wrong people are in charge almost everywhere these days, hence crapification. Bad incentives, zero white collar crime enforcement, self-replicating board rooms, and group-think beget toxic corporate culture, which equals crapification.

Jeff Zink , March 12, 2019 at 5:46 pm

Also try "built in obsolescence"

VietnamVet , March 11, 2019 at 5:40 pm

As a son of a deceased former Boeing aeronautic engineer, this is tragic. It highlights the problem of financialization, neoliberalism, and lack of corporate responsibility pointed out daily here on NC. The crapification was signaled by the move of the headquarters from Seattle to Chicago and spending billions to build a second 787 line in South Carolina to bust their Unions. Boeing is now an unregulated multinational corporation superior to sovereign nations. However, if the 737 Max crashes have the same cause, this will be hard to whitewash. The design failure of windows on the de Havilland Comet killed the British passenger aircraft business. The EU will keep a discreet silence since manufacturing major airline passenger planes is a duopoly with Airbus. However, China hasn't (due to the trade war with the USA), even though Boeing is building a new assembly line there. Boeing escaped any blame for the loss of two Malaysia Airlines 777s. This may be an existential crisis for American aviation. Like a President who denies calling Tim Cook, Tim Apple, or the soft coup ongoing in DC against him, what is really happening globally is not factually reported by corporate media.

Jerry B , March 11, 2019 at 6:28 pm

===Boeing is now an unregulated multinational corporation superior to sovereign nations===

Susan Strange 101.

Or more recently Quinn Slobodian's Globalists: The End of Empire and the Birth of Neoliberalism.

And the beat goes on.

Synoia , March 11, 2019 at 6:49 pm

The design failure of windows on the de Havilland Comet killed the British passenger aircraft business.

Yes, a misunderstanding of the effect of square windows and three-dimensional stress cracking.

Gary Gray , March 11, 2019 at 7:54 pm

Sorry, but 'sovereign' nations were always a scam. Nothing more than an excuse to build capital markets, which are the underpinning of capitalism. Capital markets are what control countries and have since the 1700's. Maybe you should blame the monarchies for selling out to the bankers in the late Middle Ages. Sovereign nations are just economic units for the bankers and the businesses they finance, nothing more. I guess they figured out after the Great Depression that they would throw a bunch of goodies at the "Indo Europeans" in western Europe, make them decadent and jaded via debt expansion. This goes back to my point about the yellow vests .. me me me me me. You reek of it. This stuff with Boeing is all profit based. It could have happened in 2000, 1960 or 1920. It could happen even under state control. Did you love Hitler's Volkswagen?

As for the soft coup .. lol, you mean Trump's soft coup for his allies in Russia and the Middle East, viva la Saudi King!!!!!? Posts like these represent the problem with this board. The materialist over the spiritualist. It's like people who still don't get that some of the biggest supporters of a "GND" are racialists, and being somebody who has long run the environmentalist rally game, they are hugely in the game. Yet Progressives seem completely blind to it. The media ignores them for con men like David Duke (whose ancestry is not clean, no it's not) and "Unite the Right" (or as one friend on the environmental circuit told me, Unite the Yahweh apologists) as what's "white". There is a reason they do this.

You need to wake up and stop the self-gratification crap. The planet is dying due to mismanagement. Over-urbanization, overpopulation, the constant need for me over ecosystem. It can only last so long. That is why I like zombie movies; it's Gaia Theory in a nutshell. Good for you, Earth .. or Midgard. Whichever you prefer.

Carey , March 11, 2019 at 8:05 pm

Your job seems to be to muddy the waters, and I'm sure we'll be seeing much more of the same; much more.

Thanks!

pebird , March 11, 2019 at 10:24 pm

Hitler had an electric car?

JerryDenim , March 12, 2019 at 3:05 pm

Hee-hee. I noticed that one too.

TimR , March 12, 2019 at 9:41 am

Interesting but I'm unclear on some of it.. GND supporters are racialist?

JerryDenim , March 12, 2019 at 3:02 pm

Spot on comment VietnamVet, a lot of chickens can be seen coming home to roost in this latest Boeing disaster. Remarkable how not many years ago the government could regulate the aviation industry without fear of killing it, since there was more than one aerospace company, not anymore! The scourge of monopsony/monopoly power rears its head and bites in unexpected places.

Ptb , March 11, 2019 at 5:56 pm

More detail on the "MCAS" system responsible for the previous Lion Air crash here (theaircurrent.com)

It says the bigger and repositioned engine, which gives the new model its fuel efficiency, and the wing angle tweaks needed to fit the engine vs landing gear and clearance, change the amount of pitch trim it needs in turns to remain level.

The auto system was added to neutralize the pitch trim during turns, to make it handle like the old model.

There is another pitch trim control besides the main "stick". To deactivate the auto system, this other trim control has to be used; the main controls do not deactivate it (perhaps to prevent it from being unintentionally deactivated, which would be equally bad). If the sensor driving the correction system gives a false reading and the pilot were unaware, there would be seesawing and panic.

Actually, if this all happened again I would be very surprised. No one flying a 737 could be unaware of it after the previous crash. Curious what they find.

Ptb , March 11, 2019 at 6:38 pm

Ok typo fixes didn't register gobbledygook.

EoH , March 12, 2019 at 8:38 am

While logical, if your last comment were correct, it should have prevented this most recent crash. It appears that the "seesawing and panic" continue.

I assume it has now gone beyond the cockpit, and beyond the design, and sales teams and reached the Boeing board room. From there, it is likely to travel to the board rooms of every airline flying this aircraft or thinking of buying one, to their banks and creditors, and to those who buy or recommend their stock. But it may not reach the FAA for some time.

marku52 , March 12, 2019 at 2:47 pm

Full technical discussion of why this was needed at:

https://leehamnews.com/2018/11/14/boeings-automatic-trim-for-the-737-max-was-not-disclosed-to-the-pilots/

Ptb , March 12, 2019 at 5:32 pm

Excellent link, thanks!

Kimac , March 11, 2019 at 6:20 pm

As to what's next?

Think, Too Big To Fail.

Any number of ways will be found to put lipstick on this pig once we recognize the context.

allan , March 11, 2019 at 6:38 pm

"Canadian and Brazilian authorities did require additional training" from the quote at the bottom is not
something I've seen before. What did they know and when did they know it?

rd , March 11, 2019 at 8:31 pm

They probably just assumed that the changes in the plane from previous 737s were big enough to warrant treating it like a major change requiring training.

Both countries fly into remote areas with highly variable weather conditions and some rugged terrain.

dcrane , March 11, 2019 at 7:25 pm

Re: withholding information from the FAA

For what it's worth, the quoted section says that Boeing withheld info about the MCAS from "midlevel FAA officials", while Jerri-Lynn refers to the FAA as a whole.

This makes me wonder if top-level FAA people certified the system.

Carey , March 11, 2019 at 7:37 pm

See under "regulatory capture"

Corps run the show, regulators are window-dressing.

IMO, of course. Of course

allan , March 11, 2019 at 8:04 pm

It wasn't always this way. From 1979:

DC-10 Type Certificate Lifted [Aviation Week]

FAA action follows finding of new cracks in pylon aft bulkhead forward flange; crash investigation continues

Suspension of the McDonnell Douglas DC-10's type certificate last week followed a separate grounding order from a federal court as government investigators were narrowing the scope of their investigation of the American Airlines DC-10 crash May 25 in Chicago.

The American DC-10-10, registration No. N110AA, crashed shortly after takeoff from Chicago's O'Hare International Airport, killing 259 passengers, 13 crewmembers and three persons on the ground. The 275 fatalities make the crash the worst in U.S. history.

The controversies surrounding the grounding of the entire U.S. DC-10 fleet and, by extension, many of the DC-10s operated by foreign carriers, by Federal Aviation Administrator Langhorne Bond on the morning of June 6 continued to revolve around several issues.

Carey , March 11, 2019 at 8:39 pm

Yes, I remember back when the FAA would revoke a type certificate if a plane was a danger to public safety. It wasn't even that long ago. Now their concern is any threat to Boeing™. There's a name for that

Joey , March 11, 2019 at 11:22 pm

'Worst' disaster in Chicago would still ground planes. Lucky for Boeing it's brown and browner.

Max Peck , March 11, 2019 at 7:30 pm

It's not correct to claim the MCAS was concealed. It's right in the January 2017 rev of the NG/MAX differences manual.

Carey , March 11, 2019 at 7:48 pm

Mmm. Why do the dudes and dudettes *who fly the things* say they knew nothing
about MCAS? Their training is quite rigorous.

Max Peck , March 11, 2019 at 10:00 pm

See a post below for link. I'd have provided it in my original post but was on a phone in an inconvenient place for editing.

Carey , March 12, 2019 at 1:51 am

'Boeing's automatic trim for the 737 MAX was not disclosed to the Pilots':

https://leehamnews.com/2018/11/14/boeings-automatic-trim-for-the-737-max-was-not-disclosed-to-the-pilots/

marku52 , March 12, 2019 at 2:39 pm

Leeham News is the best site for info on this. For those of you interested in the tech details, go to Bjorn's Corner, where he writes about aeronautic design issues.

I was somewhat horrified to find that modern aircraft flying at near mach speeds have a lot of somewhat pasted-on pilot assistances. All of them. None of them fly with just good old stick-and-rudder. Not Airbus (which is actually fully fly-by-wire: all pilot inputs go through a computer) and not Boeing, which is somewhat less so.

This latest "solution" came about because the larger engines (and nacelles) fitted on the Max increased lift ahead of the center of gravity in a pitch-up situation, which was destabilizing. The MCAS uses inputs from airspeed and angle of attack sensors to put a pitch-down input to the horizontal stabilizer.

A faulty AoA sensor led to Lion Air's Max pushing the nose down against the pilots' efforts all the way into the sea.

This is the best backgrounder

https://leehamnews.com/2018/11/14/boeings-automatic-trim-for-the-737-max-was-not-disclosed-to-the-pilots/
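For readers who want the mechanism described above spelled out, here is a minimal sketch in Python of the failure mode being discussed: logic that trusts a single angle-of-attack reading keeps commanding nose-down trim as long as that reading stays (wrongly) above a stall threshold. This is emphatically not Boeing's control law; the threshold, trim step, and function names are invented for illustration only.

```python
# Deliberately simplified sketch -- NOT Boeing's implementation. All numbers are hypothetical.
AOA_STALL_THRESHOLD_DEG = 14.0   # assumed "approaching stall" angle
TRIM_STEP = 0.6                  # assumed nose-down stabilizer increment per activation

def trim_command(aoa_deg: float, flaps_up: bool, autopilot_off: bool) -> float:
    """Return a nose-down trim increment; 0.0 means no action this cycle."""
    if flaps_up and autopilot_off and aoa_deg > AOA_STALL_THRESHOLD_DEG:
        return TRIM_STEP
    return 0.0

# A vane stuck at 22 degrees feeds the same bad value every cycle, so nose-down
# trim accumulates regardless of what the aircraft is actually doing.
total_trim = 0.0
for cycle in range(5):
    total_trim += trim_command(aoa_deg=22.0, flaps_up=True, autopilot_off=True)
    print(f"cycle {cycle}: cumulative nose-down trim = {total_trim:.1f} units")
```

The point of the sketch is only that a single bad input, trusted without cross-checking, turns a "helper" into a system that fights the pilot on every cycle.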

The Rev Kev , March 11, 2019 at 7:48 pm

One guy said last night on TV that Boeing had eight years of back orders for this aircraft, so you had better believe that this crash will be studied furiously. Saw a picture of the crash site and it looks like it augered in almost straight down. There seems to be a large hole and the wreckage is not strewn over that much area. I understand that they were digging out the cockpit as it was underground. Strange that.

Carey , March 11, 2019 at 7:55 pm

It's said that the Flight Data Recorders have been found, FWIW.

EoH , March 12, 2019 at 9:28 am

Suggestive of a high-speed, nose-first impact. Not the angle of attack a pilot would ordinarily choose.

Max Peck , March 11, 2019 at 9:57 pm

It's not true that Boeing hid the existence of the MCAS. They documented it in the January 2017 rev of the NG/MAX differences manual and probably earlier than that. One can argue whether the description was adequate, but the system was in no way hidden.

Carey , March 11, 2019 at 10:50 pm

Looks like, for now, we're stuck between your "in no way hidden", and numerous 737 pilots' claims on various online aviation boards that they knew nothing about MCAS. Lots of money involved, so very cloudy weather expected. For now I'll stick with the pilots.

Alex V , March 12, 2019 at 2:27 am

To the best of my understanding and reading on the subject, the system was well documented in the Boeing technical manuals, but not in the pilots' manuals, where it was only briefly mentioned, at best, and not by all airlines. I'm not an airline pilot, but from what I've read, airlines often write their own additional operators manuals for aircraft models they fly, so it was up to them to decide the depth of documentation. These are in theory sufficient to safely operate the plane, but do not detail every aircraft system exhaustively, as a modern aircraft is too complex to fully understand. Other technical manuals detail how the systems work, and how to maintain them, but a pilot is unlikely to read them as they are used by maintenance personnel or instructors. The problem with these cases (if investigations come to the same conclusions) is that insufficient information was included in the pilots manual explaining the MCAS, even though the information was communicated via other technical manuals.

vlade , March 12, 2019 at 11:50 am

This is correct.

A friend of mine is a commercial pilot who's just doing a 'training' exercise having moved airlines.

He's been flying the planes in question most of his life, but the airline is asking him to re-do it all according to their manuals and their rules. If the airline manual does not bring it up, then the pilots will not read it – few of them have time to go after the actual technical manuals and read those in addition to what the airline wants. [oh, and it does not matter that he has tens of thousands of hours on the airplane in question, if he does not do something in accordance with his new airline manual, he'd get kicked out, even if he was right and the airline manual wrong]

I believe (but would have to check with him) that some countries regulators do their own testing over and above the airlines, but again, it depends on what they put in.

Alex V , March 12, 2019 at 11:58 am

Good to hear my understanding was correct. My take on the whole situation was that Boeing was negligent in communicating the significance of the change, given human psychology and current pilot training. The reason was to enable easier aircraft sales. The purpose of the MCAS system is however quite legitimate – it enables a more fuel efficient plane while compensating for a corner case of the flight envelope.

Max Peck , March 12, 2019 at 8:01 am

The link is to the actual manual. If that doesn't make you reconsider, nothing will. Maybe some pilots aren't expected to read the manuals, I don't know.

Furthermore, the post stated that Boeing failed to inform the FAA about the MCAS. Surely the FAA has time to read all of the manuals.

Darius , March 12, 2019 at 6:18 pm

Nobody reads instruction manuals. They're for reference. Boeing needed to yell at the pilots to be careful to read new pages 1,576 through 1,629 closely. They're a lulu.

Also, what's with screwing with the geometry of a stable plane so that it will fall out of the sky without constant adjustments by computer software? It's like having a car designed to explode but don't worry. We've loaded software to prevent that. Except when there's an error. But don't worry. We've included reboot instructions. It takes 15 minutes but it'll be OK. And you can do it with one hand and drive with the other. No thanks. I want the car not designed to explode.

The Rev Kev , March 11, 2019 at 10:06 pm

The FAA is already leaping to the defense of the Boeing 737 Max 8 even before they have a chance to open up the black boxes. Hope that nothing "happens" to those recordings.

https://www.bbc.com/news/world-africa-47533052

Milton , March 11, 2019 at 11:04 pm

I don't know, crapification, at least for me, refers to products, services, or infrastructure that has declined to the point that it has become a nuisance rather than a benefit it once was. This case with Boeing borders on criminal negligence.

pretzelattack , March 12, 2019 at 8:20 am

i came across a word that was new to me "crapitalism", goes well with crapification.

TG , March 12, 2019 at 12:50 am

1. It's really kind of amazing that we can fly to the other side of the world in a few hours – a journey that in my grandfather's time would have taken months and been pretty unpleasant and risky – and we expect perfect safety.

2. Of course the best-selling jet will see these issues. It's the law of large numbers.

3. I am not a fan of Boeing's corporate management, but still, compared to Wall Street and Defense Contractors and big education etc. they still produce an actual technical useful artifact that mostly works, and at levels of performance that in other fields would be considered superhuman.

4. Even for Boeing, one wonders when the rot will set in. Building commercial airliners is hard! So many technical details, nowhere to hide if you make even one mistake; so easy to just abandon the business entirely. Do what the (ex) US auto industry did, contract out to foreign manufacturers and just slap a "USA" label on it and double down on marketing. Milk the cost-plus cash cow of the defense market. Or just financialize the entire thing and become too big to fail and walk away with all the profits before the whole edifice crumbles. Greed is good, right?

marku52 , March 12, 2019 at 2:45 pm

"Of course the best-selling jet will see these issues. It's the law of large numbers."

2 crashes of a new model in very similar circumstances is very unusual. And the FAA admits they are requiring a FW upgrade sometime in April. Pilots need to be hyperaware of what this MCAS system is doing. And they currently aren't.

Prairie Bear , March 12, 2019 at 2:42 am

if it went into a stall, it would lower the nose suddenly to pick airspeed and fly normally again.

A while before I read this post, I listened to a news clip that reported that the plane was observed "porpoising" after takeoff. I know only enough about planes and aviation to be a more or less competent passenger, but it does seem like that is something that might happen if the plane had such a feature and the pilot was not familiar with it and was trying to fight it? The below link is not to the story I saw I don't think, but another one I just found.

https://www.yahoo.com/gma/know-boeing-737-max-8-crashed-ethiopia-221411537.html

none , March 12, 2019 at 5:33 am

https://www.reuters.com/article/us-ethiopia-airplane-witnesses/ethiopian-plane-smoked-and-shuddered-before-deadly-plunge-idUSKBN1QS1LJ

Reuters reports people saw smoke and debris coming out of the plane before the crash.

Jessica , March 12, 2019 at 6:06 am

At PPRUNE.ORG, many of the commentators are skeptical of what witnesses of airplane crashes say they see, but more trusting of what they say they hear.
The folks at PPRUNE.ORG who looked at the record of the flight from FlightRadar24, which only covers part of the flight because FlightRadar24's coverage in that area is not so good and the terrain is hilly, see a plane flying fast in a straight line very unusually low.

EoH , March 12, 2019 at 8:16 am

The dodge about making important changes that affect aircraft handling but not disclosing them – so as to avoid mandatory pilot training, which would discourage airlines from buying the modified aircraft – is an obvious business-over-safety choice by an ethics and safety challenged corporation.

But why does even a company of that description, many of whose top managers, designers, and engineers live and breathe flight, allow its s/w engineers to prevent the pilots from overriding a supposed "safety" feature while actually flying the aircraft? Was it because it would have taken a little longer to write and test the additional s/w or because completing the circle through creating a pilot override would have mandated disclosure and additional pilot training?

Capt. "Sully" Sullenberger and his passengers and crew would have ended up in pieces at the bottom of the Hudson if the s/w on his aircraft had prohibited out of the ordinary flight maneuvers that contradicted its programming.

Alan Carr , March 12, 2019 at 9:13 am

If you carefully review the overall airframe of the 737, it has hardly changed over the past 20 years or so (for the most part; see Boeing 737 specifications). What I believe the real issue here is that the avionics upgrades over the years have changed dramatically. More and more precision avionics are installed with less and less pilot input and ultimately no control of the aircraft. Though Boeing will get the brunt of the lawsuits, the avionics company will be the real culprit. I believe the avionics on the Boeing 737 are made by Rockwell Collins, which, you guessed it, is owned by Boeing.

Max Peck , March 12, 2019 at 9:38 am

Rockwell Collins has never been owned by Boeing.

Also, to correct some upthread assertions, MCAS has an off switch.

WobblyTelomeres , March 12, 2019 at 10:02 am

United Technologies, UTX, I believe. If I knew how to short, I'd probably short this 'cause if they aren't partly liable, they'll still be hurt if Boeing has to slow (or, horror, halt) production.

Alan Carr , March 12, 2019 at 11:47 am

You are right Max, I misspoke. Rockwell Collins is owned by United Technologies Corporation.

Darius , March 12, 2019 at 6:24 pm

Which astronaut are you? Heh.

EoH , March 12, 2019 at 9:40 am

Using routine risk management protocols, the American FAA should need continuing "data" on an aircraft for it to maintain its airworthiness certificate. Its current press materials on the Boeing 737 Max 8 suggest it needs data to yank it or to ground the aircraft pending review. Has it had any other commercial aircraft suffer two apparently similar catastrophic losses this close together within two years of the aircraft's launch?

Synoia , March 12, 2019 at 11:37 am

I am raising an issue with "crapification" as a meme. Crapification is a symptom of a specific behaviour.

GREED.

Please could you reconsider your writing to include this very old, tremendously venal, and "worst" sin?

US inventiveness in coining a new word, "crapification", implies that some error could be corrected. If it is a deliberate sin, it requires atonement and forgiveness, and a sacrifice of worldly assets, for any chance of forgiveness and redemption.

Alan Carr , March 12, 2019 at 11:51 am

Something else that will be interesting to this thread is that Boeing doesn't seem to mind letting the Boeing 737 Max aircraft remain for sale on the open market

vlade , March 12, 2019 at 11:55 am

the EU suspends MAX 8s too

Craig H. , March 12, 2019 at 2:29 pm

The moderators in reddit.com/r/aviation are fantastic.

They have corralled everything into one mega-thread which is worth review:

https://www.reddit.com/r/aviation/comments/azzp0r/ethiopian_airlines_et302_and_boeing_737_max_8/

allan , March 12, 2019 at 3:00 pm

Thanks. That's a great link with what seem to be some very knowledgeable comments.

John Beech , March 12, 2019 at 2:30 pm

Experienced private pilot here. Lots of commercial pilot friends. First, the EU suspending the MAX 8 is politics. Second, the FAA mandated changes were already in the pipeline. Third, this won't stop the ignorant from staking out a position on this, and speculating about it on the internet, of course. Fourth, I'd hop a flight in a MAX 8 without concern – especially with a US pilot on board. Why? In part because the Lion Air event a few months back led to pointed discussion about the thrust line of the MAX 8 vs. the rest of the 737 fleet and the way the plane has software to help during strong pitch up events (MAX 8 and 9 have really powerful engines).

Basically, pilots have been made keenly aware of the issue and trained in what to do. Another reason I'd hop a flight in one right now is because there have been more than 31,000 trouble free flights in the USA in this new aircraft to date. My point is, if there were a systemic issue we'd already know about it. Note, the PIC in the recent crash had +8000 hours but the FO had about 200 hours and there is speculation he was flying. Speculation.

Anyway, US commercial fleet pilots are very well trained to deal with runaway trim or uncommanded flight excursions. How? Simple, by switching the breaker off. It's right near your fingers. Note, my airplane has an autopilot also. In the event the autopilot does something unexpected, just like the commercial pilot flying the MAX 8, I'm trained in what to do (the very same thing, switch the thing off).

Moreover, I speak from experience because I've had it happen twice in 15 years – once an issue with a servo causing the plane to slowly drift right wing low, and once a connection came loose leaving the plane trimmed right wing low (coincidence). My reaction is/was about the same as that of an experienced typist automatically hitting backspace on the keyboard upon realizing they mistyped a word, e.g. not reflex but nearly so. In my case, it was to throw the breaker to power off the autopilot as I leveled the plane. No big deal.

Finally, as of yet there has been no analysis from the black boxes. I advise holding off on the speculation until they do. They've been found and we'll learn something soon. The yammering and near hysteria by non-pilots – especially with this thread – reminds me of the old saw about not knowing how smart or ignorant someone is until they open their mouth.

notabanker , March 12, 2019 at 5:29 pm

So let me get this straight.

While Boeing is designing a new 787, Airbus redesigns the A320. Boeing cannot compete with it, so instead of redesigning the 737 properly, they put larger engines on it further forward, which was never intended in the original design. So to compensate they use software fed by two sensors, not three, making it mathematically impossible to know, if you have a faulty sensor, which one it is, to automatically adjust the pitch to prevent a stall, and this is the only true way to prevent a stall. But since you can kill the breaker and disable it if you have a bad sensor and can't possibly know which one, everything is ok. And now that the pilots can disable a feature required for certification, we should all feel good about these brand new planes that, for the first time in history, crashed within 5 months.

And the FAA, which hasn't had a Director in 14 months, knows better than the UK, Europe, China, Australia, Singapore, India, Indonesia, Africa and basically every other country in the world except Canada. And the reason every country in the world except Canada has grounded the fleet is political? Singapore put Silk Air out of business because of politics?

How many people need to be rammed into the ground at 500 mph from 8000 feet before yammering and hysteria are justified here? 400 obviously isn't enough.
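The two-sensors-versus-three point above can be made concrete with a small Python sketch. It is purely illustrative and not the aircraft's actual arbitration logic: with two sensors you can detect a disagreement but cannot tell which reading is the bad one; with three, taking the median votes the single outlier out.

```python
# Illustrative sensor-voting sketch; not the 737 MAX's actual AoA handling.
from statistics import median

def two_sensor_check(a: float, b: float, tolerance_deg: float = 2.0):
    """Return an agreed value, or None when the pair disagrees and neither can be trusted."""
    return a if abs(a - b) <= tolerance_deg else None

def three_sensor_vote(a: float, b: float, c: float) -> float:
    """The median discards a single wildly wrong reading."""
    return median([a, b, c])

print(two_sensor_check(5.0, 22.0))         # None -- fault detected, but unresolvable
print(three_sensor_vote(5.0, 22.0, 5.3))   # 5.3  -- the faulty 22.0 is outvoted
```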

VietnamVet , March 12, 2019 at 5:26 pm

Overnight since my first post above, the 737 Max 8 crash has become political. The black boxes haven't been officially read yet. Still airlines and aviation authorities have grounded the airplane in Europe, India, China, Mexico, Brazil, Australia and S.E. Asia in opposition to FAA's "Continued Airworthiness Notification to the International Community" issued yesterday.

I was wrong. There will be no whitewash. I thought they would remain silent. My guess is this is a result of an abundance of caution plus greed (Europeans couldn't help gutting Airbus's competitor Boeing). This will not be discussed, but it is also a manifestation of Trump Derangement Syndrome (TDS). Since the President has started dissing Atlantic Alliance partners, extorting defense money, fighting trade wars, and calling 3rd world countries s***-holes, there is no sympathy for the collapsing hegemon. Boeing stock is paying the price. If the cause is the faulty design of the flight position sensors and fly-by-wire software control system, it will take a long while to design and get approval of a new safe redundant control system and refit the airplanes to fly again overseas. A real disaster for America's last manufacturing industry.

[Mar 13, 2019] Boeing might not survive the third crash

Too much automation and an overly complex flight control computer endanger the lives of pilots and passengers...
Notable quotes:
"... When systems (like those used to fly giant aircraft) become too automatic while remaining essentially stupid or limited by the feedback systems, they endanger the airplane and passengers. These two "accidents" are painful warnings for air passengers and voters. ..."
"... This sort of problem is not new. Search the web for pitot/static port blockage, erroneous stall / overspeed indications. Pilots used to be trained to handle such emergencies before the desk-jockey suits decided computers always know best. ..."
"... @Sky Pilot, under normal circumstances, yes. but there are numerous reports that Boeing did not sufficiently test the MCAS with unreliable or incomplete signals from the sensors to even comply to its own quality regulations. ..."
"... Boeing did cut corners when designing the B737 MAX by just replacing the engines but not by designing a new wing which would have been required for the new engine. ..."
"... I accept that it should be easier for pilots to assume manual control of the aircraft in such situations but I wouldn't rush to condemn the programmers before we get all the facts. ..."
Mar 13, 2019 | www.nytimes.com

Shirley OK March 11

I want to know if Boeing 767s, as well as the new 737s, now have the Max 8 flight control computer installed, with pilots maybe not being trained to use it, or it being uncontrollable.

A 3rd Boeing - not a passenger plane but a big 767 cargo plane flying a bunch of stuff for Amazon crashed near Houston (where it was to land) on 2-23-19. The 2 pilots were killed. Apparently there was no call for help (at least not mentioned in the AP article about it I read).

'If' the new Max 8 system had been installed, had either Boeing or the owner of the cargo plane business been informed of problems with Max 8 equipment that had caused a crash and many deaths in a passenger plane (this would have been after the Indonesian crash)? Was that info given to the 2 pilots who died if Max 8 is also being used in some 767s? Did Boeing get the black box from that plane and, if so, what did they find out?

Those 2 pilots' lives matter also - particularly since the Indonesian 737 crash with Max 8 equipment had already happened. Boeing hasn't said anything (yet, that I've seen) about whether or not the Max 8 new configuration computer and the extra steps to get manual control is on other of their planes.

I want to know about the cause of that 3rd Boeing plane crashing and if there have been crashes/deaths in others of Boeing's big cargo planes. What's the total of all Boeing crashes/fatalities in the last few months and how many of those planes had Max 8?

Rufus SF March 11

Gentle readers: In the aftermath of the Lion Air crash, do you think it possible that all 737Max pilots have not received mandatory training review in how to quickly disconnect the MCAS system and fly the plane manually?

Do you think it possible that every 737Max pilot does not have a "disconnect review" as part of his personal checklist? Do you think it possible that at the first hint of pitch instability, the pilot does not first think of the MCAS system and whether to disable it?

Harold Orlando March 11

Compare the altitude fluctuations with those from Lion Air in NYTimes excellent coverage( https://www.nytimes.com/interactive/2018/11/16/world/asia/lion-air-crash-cockpit.html ), and they don't really suggest to me a pilot struggling to maintain proper pitch. Maybe the graph isn't detailed enough, but it looks more like a major, single event rather than a number of smaller corrections. I could be wrong.

Reports of smoke and fire are interesting; there is nothing in the modification that (we assume) caused Lion Air's crash that would explain smoke and fire. So I would hesitate to zero in on the modification at this point. Smoke and fire coming from the luggage bay suggest a runaway Li battery someone put in their suitcase. This is a larger issue because that can happen on any aircraft, Boeing, Airbus, or other.

mrpisces Loui March 11

It is a shame that Boeing will not ground this aircraft, knowing they introduced the MCAS component to automate stall recovery on the 737 MAX, which in my opinion is behind these accidents. Stall recovery has always been a step all pilots handled when the stick shaker and other audible warnings were activated to alert the pilots.

Now, Boeing invented MCAS as a "selling and marketing point" to a problem that didn't exist. MCAS kicks in when the aircraft is about to enter the stall phase and places the aircraft in a nose dive to regain speed. This only works when the air speed sensors are working properly. Now imagine when the air speed sensors have a malfunction and the plane is wrongly put into a nose dive.

The pilots are going to pull back on the stick to level the plane. The MCAS which is still getting incorrect air speed data is going to place the airplane back into a nose dive. The pilots are going to pull back on the stick to level the aircraft. This repeats itself till the airplane impacts the ground which is exactly what happened.

Add the fact that Boeing did not disclose the existence of the MCAS and its role to pilots. At this point only money is keeping the 737 MAX in the air. When Boeing talks about safety, they are not referring to passenger safety but profit safety.

Tony San Diego March 11

1. The procedure to allow a pilot to take complete control of the aircraft from auto-pilot mode should have been standard, e.g. pull back on the control column. It is not reasonable to expect a pilot to follow some checklist to determine and then turn off a misbehaving module, especially in emergency situations. Even if that procedure is written in fine print in a manual. (The number of modules to disable may keep increasing if this is allowed.)

2. How are US airlines confident of the safety of the 737 MAX right now when nothing much is known about the cause of the 2nd crash? What is known is that both the crashed aircraft were brand new, and we should be seeing news articles on how the plane's brand-new advanced technology saved the day from the pilot and not the other way round.

3. In the first crash, the plane's advanced technology could not even recognize that either the flight path was abnormal and/or the airspeed readings were too erroneous and mandate the pilot to therefore take complete control immediately!

John✔️✔️Brews Tucson, AZ March 11

It's straightforward to design for standard operation under normal circumstances. But when bizarre operation occurs resulting in extreme circumstances a lot more comes into play. Not just more variables interacting more rapidly, testing system response times, but much happening quickly, testing pilot response times and experience. It is doubtful that the FAA can assess exactly what happened in these crashes. It is a result of a complex and rapid succession of man-machine-software-instrumentation interactions, and the number of permutations is huge. Boeing didn't imagine all of them, and didn't test all those it did think of.

The FAA is even less likely to do so. Boeing eventually will fix some of the identified problems, and make pilot intervention more effective. Maybe all that effort to make the new cockpit look as familiar as the old one will be scrapped? Pilot retraining will be done? Redundant sensors will be added? Additional instrumentation? Software re-written?

That'll increase costs, of course. Future deliveries will cost more. Looks likely there will be some downtime. Whether the fixes will cover sufficient eventualities, time will tell. Whether Boeing will be more scrupulous in future designs, less willing to cut corners without evaluating them? Will heads roll? Well, we'll see...

Ron SC March 11

Boeing has been in trouble technologically since its merger with McDonnell Douglas, which some industry analysts called a takeover, though it isn't clear who took over whom since MD got Boeing's name while Boeing took the MD logo and moved their headquarters from Seattle to Chicago.

In addition to problems with the 737 Max, Boeing is charging NASA considerably more than the small startup, SpaceX, for a capsule designed to ferry astronauts to the space station. Boeing's Starliner looks like an Apollo-era craft and is launched via a 1960's-like ATLAS booster.

Despite using what appears to be old technology, the Starliner is well behind schedule and over budget while the SpaceX capsule has already docked with the space station using state-of-art reusable rocket boosters at a much lower cost. It seems Boeing is in trouble, technologically.

BSmith San Francisco March 11

When you read that this model of the Boeing 737 Max was more fuel efficient, and view the horrifying graphs (the passengers spent their last minutes in sheer terror) of the vertical jerking up and down of both aircraft, and learn both crashes occurred minutes after takeoff, you are 90% sure that the problem is with design, or design not compatible with pilot training. Pilots in both planes had received permission to return to the airports. The likely culprit, to a trained designer, is the control system for injecting the huge amounts of fuel necessary to lift the plane to cruising altitude. Pilots knew it was happening and did not know how to override the fuel injection system.

These two crashes foretell what will happen if airlines, purely in the name of saving money, elmininate human control of aircraft. There will be many more crashes.

These ultra-complicated machines, which defy gravity and lift thousands of pounds of dead weight into the stratosphere to reduce friction with air, are immensely complex and common. Thousands of flight paths cover the globe each day. Human pilots must ultimately be in charge - for our own peace of mind, and for their ability to deal with unimaginable, unforeseen hazards.

When systems (like those used to fly giant aircraft) become too automatic while remaining essentially stupid or limited by the feedback systems, they endanger the airplane and passengers. These two "accidents" are painful warnings for air passengers and voters.

Brez Spring Hill, TN March 11

1. Ground the Max 737.

2. Deactivate the ability of the automated system to override pilot inputs, which it apparently can do even with the autopilot disengaged.

3. Make sure that the autopilot disengage button on the yoke (pickle switch) disconnects ALL non-manual control inputs.

4. I do not know if this version of the 737 has direct input ("rope start") gyroscope, airspeed and vertical speed indicators for emergencies such as failure of the electronic wonder-stuff. If not, install them. Train pilots to use them.

5. This will cost money, a lot of money, so we can expect more self-serving excuses until the FAA forces Boeing to do the right thing.

6. This sort of problem is not new. Search the web for pitot/static port blockage, erroneous stall / overspeed indications. Pilots used to be trained to handle such emergencies before the desk-jockey suits decided computers always know best.

Harper Arkansas March 11

I flew big jets for 34 years, mostly Boeings. Boeing added new logic to the trim system and was allowed to not make it known to pilots. However, it was in the maintenance manuals. Not great, but these airplanes are now so complex there are many systems of which pilots don't know all of the intimate details.

NOT IDEAL, BUT NOT OVERLY SIGNIFICANT. Boeing changed one of the ways to stop a runaway trim system by eliminating the control column trim brake, ie airplane nose goes up, push down (which is instinct) and it stops the trim from running out of control.

BIG DEAL, BOEING AND FAA, NOT TELLING PILOTS. Boeing produces checklists for almost any conceivable malfunction. We pilots are trained to accomplish the obvious then go immediately to the checklist. Some items on the checklist are so important they are called "Memory Items" or "Red Box Items".

These would include things like in an explosive depressurization to put on your o2 mask, check to see that the passenger masks have dropped automatically and start a descent.

Another has always been STAB TRIM SWITCHES ...... CUTOUT which is surrounded by a RED BOX.

For very good reasons these two guarded switches are very conveniently located on the pedestal right between the pilots.

So if the nose is pitching incorrectly, STAB TRIM SWITCHES ..... CUTOUT!!! Ask questions later, go to the checklist. THAT IS THE PILOTS AND TRAINING DEPARTMENTS RESPONSIBILITY. At this point it is not important as to the cause.

David Rubien New York March 11

If these crashes turn out to result from a Boeing flaw, how can that company continue to stay in business? It should be put into receivership and its executives prosecuted. How many deaths are permissible?

Osama Portland OR March 11

The emphasis on software is misplaced. The software intervention is triggered by readings from something called an Angle of Attack sensor. This sensor is relatively new on airplanes. A delicate blade protrudes from the fuselage and is deflected by airflow. The direction of the airflow determines the reading. A false reading from this instrument is the "garbage in" input to the software that takes over the trim function and directs the nose of the airplane down. The software seems to be working fine. The AOA sensor? Not so much.
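As an aside on the "garbage in" point, here is a minimal Python sketch of the kind of plausibility filtering a consumer of a single vane's output could apply before acting on it. The limits and function name are assumed for illustration; this is not the actual 737 MAX logic.

```python
# Illustrative sanity check on a single AoA reading; thresholds are hypothetical.
def plausible_aoa(previous_deg: float, current_deg: float,
                  max_abs_deg: float = 25.0, max_step_deg: float = 5.0) -> bool:
    """Reject readings outside a sensible range or jumping faster than physics allows."""
    if abs(current_deg) > max_abs_deg:
        return False
    if abs(current_deg - previous_deg) > max_step_deg:
        return False
    return True

print(plausible_aoa(2.0, 3.1))    # True  -- small, smooth change
print(plausible_aoa(2.0, 22.4))   # False -- implausible jump, treat the reading as suspect
```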

experience Michiigan March 11

The basic problem seems to be that the 737 Max 8 was not designed for the larger engines and so there are flight characteristics that could be dangerous. To compensate for the flaw, computer software was used to control the aircraft when the situation was encountered. The software failed to prevent the situation from becoming a fatal crash.

The workaround may stem from the big mistake of not redesigning the aircraft properly for the larger engines in the first place. The aircraft may need to be modified at a cost that would not be realistic, and therefore abandoned and an entirely new aircraft design implemented. That sounds very drastic, but the only other solution would be to go back to the original engines. The Boeing Company is at a crossroads that could be their demise if the wrong decision is made.

Sky Pilot NY March 11

It may be a training issue in that the 737 Max has several systems changes from previous 737 models that may not be covered adequately in differences training, checklists, etc. In the Lion Air crash, a sticky angle-of-attack vane caused the auto-trim to force the nose down in order to prevent a stall. This is a worthwhile safety feature of the Max, but the crew was slow (or unable) to troubleshoot and isolate the problem. It need not have caused a crash. I suspect the same thing happened with Ethiopian Airlines. The circumstances are temptingly similar.

Thomas Singapore March 11

@Sky Pilot, under normal circumstances, yes. but there are numerous reports that Boeing did not sufficiently test the MCAS with unreliable or incomplete signals from the sensors to even comply to its own quality regulations. And that is just one of the many quality issues with the B737 MAX that have been in the news for a long time and have been of concern to some of the operators while at the same time being covered up by the FAA.

Just look at the difference in training requirements between the FAA and the Brazilian aviation authority.

Brazilian pilots need to fully understand the MCAS and how to handle it in emergency situations while FAA does not even require pilots to know about it.

Thomas Singapore March 11

This is yet another beautiful example of the difference in approach between Europeans and US Americans. While Europeans usually test their product thoroughly before they deliver it, in order to avoid any potential failures of the product in their customers' hands, the US approach is different: it is "make it work somehow and fix the problems when the client has them".

Which is what happened here as well. Boeing did cut corners when designing the B737 MAX by just replacing the engines but not by designing a new wing which would have been required for the new engine.

So the aircraft became unstable to fly at low speeds and in tight turns, which required a fix by implementing the MCAS, which was then kept out of recertification procedures for clients for reasons of competitive sales arguments. And of course, the FAA played along and provided cover for this cutting of corners, as this was a product of a US company.

Then the proverbial brown stuff hit the fan, not once but twice. So Boeing sent its "thoughts and prayers" and started to hope for the storm to blow over and for finding a fix that would not be very expensive and not eat the share holder value away.

Sorry, but that is not the way to design and maintain aircraft. If you do it, do it right the first time and not fix it after more than 300 people died in accidents. There is a reason why China has copied the Airbus A-320 and not the Boeing B737 when building its COMAC C919. The Airbus is not a cheap fix, still tested by customers.

Rafael USA March 11

@Thomas And how do you know that Boeing do not test the aircrafts before delivery? It is a requirement by FAA for all complete product, systems, parts and sub-parts to be tested before delivery. However it seems Boeing has not approached the problem (or maybe they do not know the real issue).

As for the design, are you an engineer who can say whether the design and use of new engines without a complete re-design is wrong? Have you seen the design drawings of the airplane? I work in an industry in which our products are used for testing different parts of aircraft, and Boeing is one of our customers.

Our products are used during manufacturing and maintenance of airplanes. My guess is that Boeing has no idea what is going on. Your biased opinion against any US product is evident. There are regulations in the USA (and not in other Asian countries) that companies have to follow. This is not a case of an untested product; it is a case of an unknown problem, and Boeing is really in the dark about what is going on...

Sam Europe March 11

Boeing and the regulators continue to exhibit criminal behaviour in this case. Ethical responsibility demands that when the first brand-new MAX 8 fell, potentially due to issues with its design, the fleet should have been grounded. Instead, money was the priority; and unfortunately it still is. The planes are even now flying. Disgraceful and criminal behaviour.

Imperato NYC March 11

@Sam no...too soon to come anywhere near that conclusion.

YW New York, NY March 11

A terrible tragedy for Ethiopia and all of the families affected by this disaster. The fact that two 737 Max jets have crashed in one year is indeed suspicious, especially as it has long been safer to travel in a Boeing plane than a car or school bus. That said, it is way too early to speculate on the causes of the two crashes being identical. Eyewitness accounts of debris coming off the plane in mid-air, as has been widely reported, would not seem to square with the idea that software is again at fault. Let's hope this puzzle can be solved quickly.

Wayne Brooklyn, New York March 11

@Singh the difference is that consumer electronic products usually have far fewer components and much less wiring compared to commercial aircraft, with their miles of wiring, multitude of sensors and thousands of components. From what I know, a preliminary report usually comes out in a short time, but the detailed report that takes the full analysis into account will take over a year to write.

John A San Diego March 11

The engineers and management at Boeing need a crash course in ethics. After the crash in Indonesia, Boeing was trying to pass the blame rather than admit responsibility. The planes should all have been grounded then. Now the chickens have come home to roost. Boeing is in serious trouble and it will take a long time to recover its reputation. Large multinationals never learn.

Imperato NYC March 11

@John A the previous pilot flying the Lion jet faced the same problem but dealt with it successfully. The pilot on the ill-fated flight was less experienced and unfortunately failed.

BSmith San Francisco March 11

@Imperato Solving a repeat problem on an airplane type must not solely depend upon a pilot undertaking an emergency response! That is nonsense even to a non-pilot! This implies that Boeing allows a plane to keep flying which it knows has a fatal flaw! Shouldn't it be grounding all these planes until it identifies and solves the same problem?

Jimi DC March 11

NYT recently did an excellent job explaining how pilots were kept in the dark by Boeing during the software update for the 737 Max: https://www.nytimes.com/2019/02/03/world/asia/lion-air-plane-crash-pilots.html#click=https://t.co/MRgpKKhsly

Steve Charlotte, NC March 11

Something is wrong with those two graphs of altitude and vertical speed. For example, both are flat at the end, even though the vertical speed graph indicates that the plane was climbing rapidly. So what is the source of those numbers? Is it ground-based radar, or telemetry from onboard instruments? If the latter, it might be a clue to the problem.

Imperato NYC March 11

@Steve Addis Ababa is almost at 8000ft.

George North Carolina March 11

I wonder if, somewhere, there is a report from some engineers saying that the system, pushed by administrative types to get the plane on the market quickly, will result in serious problems down the line.

Rebecca Michigan March 11

If we don't know why the first two 737 Max Jets crashed, then we don't know how long it will be before another one has a catastrophic failure. All the planes need to be grounded until the problem can be duplicated and eliminated.

Shirley OK March 11

@Rebecca And if it is something about the plane itself - and maybe an interaction with the new software - then someone has to be ready to volunteer to die to replicate what's happened.....

Rebecca Michigan March 12

@Shirley Heavens no. When investigating failures, duplicating the problem helps develop the solution. If you can't recreate the problem, then there is nothing to solve. Duplicating the problem generally is done through analysis and simulations, not with actual planes and passengers.

Sisifo Carrboro, NC March 11

Computer geeks can be deadly. This is clearly a software problem. The more software goes into a plane, the more likely it is for a software failure to bring down a plane. And computer geeks are always happy to try "new things" not caring what the effects are in the real world. My PC has a feature that controls what gets typed depending on the speed and repetitiveness of what I type. The darn thing is a constant source of annoyance as I sit at my desk, and there is absolutely no way to neutralize it because a computer geek so decided. Up in an airliner cockpit, this same software idiocy is killing people like flies.

Pooja MA March 11

@Sisifo Software that goes into critical systems like aircraft has a lot more constraints. Comparing it to the user interface on your PC doesn't make any sense. It's insulting to assume programmers are happy to "try new things" at the expense of lives. If you'd read about the Lion Air crash carefully, you'd remember that there were faulty sensors involved. The software was doing what it was designed to do, but the input it was getting was incorrect. I accept that it should be easier for pilots to assume manual control of the aircraft in such situations, but I wouldn't rush to condemn the programmers before we get all the facts.

BSmith San Francisco March 11

@Pooja Mistakes happen. If humans on board can't respond to terrible situations then there is something wrong with the aircraft and its computer systems. By definition.

Patriot NJ March 11

Airbus had its own experiences with pilot "mode confusion" in the 1990's with at least 3 fatal crashes in the A320, but was able to control the media narrative until they resolved the automation issues. Look up Air Inter 148 in Wikipedia to learn the similarities.

Opinioned! NYC -- currently wintering in the Pacific March 11

"Commands issued by the plane's flight control computer that bypasses the pilots." What could possibly go wrong? Now let's see whether Boeing's spin doctors can sell this as a feature, not a bug.

Chris Hartnett Minneapolis March 11

It is telling that the Chinese government grounded their fleet of 737 Max 8 aircraft before the US government. The world truly has turned upside down when it potentially is safer to fly in China than the US. Oh, the times we live in. Chris Hartnett Datchet, UK (formerly Minneapolis)

Hollis Barcelona March 11

As a passenger who likes his captains with a head full of white hair: even if the plane is nosediving due to instrument failure, does not every pilot who buckles a seat belt worldwide know how to switch off automatic flight controls and fly the airplane manually?

Even if this were 1000% Boeing's fault pilots should be able to override electronics and fly the plane safely back to the airport. I'm sure it's not that black and white in the air and I know it's speculation at this point but can any pilots add perspective regarding human responsibility?

Karl Rollings Sydney, Australia March 11

@Hollis I'm not a pilot nor an expert, but my understanding is that planes these days are "fly by wire", meaning the control surfaces are operated electronically, with no mechanical connection between the pilot's stick and the wings. So if the computer goes down, the ability to control the plane goes with it.

William Philadelphia March 11

@Hollis The NYT's excellent reporting on the Lion Air crash indicated that in nearly all other commercial aircraft, manual control of the pilot's yoke would be sufficient to override the malfunctioning system (which was controlling the tail wings in response to erroneous sensor data). Your white haired captain's years of training would have ingrained that impulse.

Unfortunately, on the Max 8 that would not sufficiently override the tail wings until the pilots flicked a switch near the bottom of the yoke. It's unclear whether individual airlines made pilots aware of this. That procedure existed in older planes but may not have been standard practice because the yoke WOULD sufficiently override the tail wings. Boeing's position has been that had pilots followed the procedure, a crash would not have occurred.

Nat Netherlands March 11

@Hollis No, that is the entire crux of this problem; switching from auto-pilot to manual does NOT solve it. Hence the danger of this whole system.

This new Boeing 737 Max series has the engines placed a bit further forward than before, and I don't know why they did this, but the result is that there can be some imbalance in the air, which they then tried to correct with this strange auto-pilot adjustment.

The problem is that it pushes the plane's nose down (and sometimes even deploys small control surfaces) when it shouldn't, and even when the pilots switch to manual this system OVERRULES them and switches back to auto-pilot, continuing to try to 'stabilize' (nose-dive) the plane. That's what makes it so dangerous.

It was designed to keep the plane stable but basically turned out to function more or less like a glitch once you are taking off and need to ascend. I don't know why it only happens now and then, as these planes had made many other take-offs before, but when it hits, it can be deadly. So far Boeing's 'solution' has been to send out a HUGE manual for pilots about how to fight this computer problem.

Instructions which are complicated to follow in a stressful situation, with the computer constantly pushing the nose of your plane down. The Max's mechanism is wrong, and instead of correcting it properly, Boeing expects pilots to undergo special training. Or a new technical update may help... which has been delayed and still hasn't been provided.

Mark Lebow Milwaukee, WI March 11

Is it the inability of the two airlines to maintain one of the plane's fly-by-wire systems that is at fault, not the plane itself? Or are both crashes due to pilot error, not knowing how to operate the system and then overreacting when it engages? Is the aircraft merely too advanced for its own good? None of these questions seems to have been answered yet.

Shane Marin County, CA March 11 Times Pick

This is such a devastating thing for Ethiopian Airlines, which has been doing critical work in connecting Africa internally and to the world at large. This is devastating for the nation of Ethiopia and for all the family members of those killed. May the memory of every passenger be a blessing. We should all hope a thorough investigation provides answers to why this make and model of airplane keeps crashing, so no other people have to go through this horror again.

Mal T KS March 11

A possible small piece of a big puzzle: Bishoftu is a city of 170,000 that is home to the main Ethiopian air force base, which has a long runway. Perhaps the pilot of Flight 302 was seeking to land there rather than returning to Bole Airport in Addis Ababa, a much larger and more densely populated city than Bishoftu. The pilot apparently requested return to Bole, but may have sought the Bishoftu runway when he experienced further control problems. Detailed analysis of radar data, conversations between pilot and control tower, flight path, and other flight-related information will be needed to establish the cause(s) of this tragedy.

Nan Socolow West Palm Beach, FL March 11

The business of building and selling airplanes is brutally competitive. Malfunctions in the systems of any kind on jet airplanes ("workhorses" for moving vast quantities of people around the earth) lead to disaster and loss of life. Boeing's much ballyhooed and vaunted MAX 8 737 jet planes must be grounded until whatever computer glitches brought down Ethiopian Air and LION Air planes -- with hundreds of passenger deaths -- are explained and fixed.

In 1946, Arthur Miller's play, "All My Sons", brought to life guilt by the airplane industry leading to deaths of WWII pilots in planes with defective parts. Arthur Miller was brought before the House UnAmerican Activities Committee because of his criticism of the American Dream. His other seminal American play, "Death of a Salesman", was about an everyman to whom attention must be paid. Attention must be paid to our aircraft industry. The American dream must be repaired.

Rachel Brooklyn, NY March 11

This story makes me very afraid of driverless cars.

Chuck W. Seattle, WA March 11

Meanwhile, human drivers killed 40,000 and injured 4.5 million people in 2018... For comparison, 58,200 American troops died in the entire Vietnam war. Computers do not fall asleep, get drunk, drive angry, or get distracted. As far as I am concerned, we cannot get unreliable humans out from behind the wheel fast enough.

jcgrim Knoxville, TN March 11

@Chuck W. Humans write the algorithms of driverless cars. Algorithms are not 100% fail-safe, particularly when humans can't seem to write snap judgements or quick inferences into an algorithm. An algorithm can make driverless cars safe in predictable situations, but that doesn't mean driverless cars will work in unpredictable events. Also, I don't trust the hype from Uber or the tech industry. https://www.nytimes.com/2017/02/24/technology/anthony-levandowski-waymo-uber-google-lawsuit.html?mtrref=t.co&gwh=D6880521C2C06930788921147F4506C8&gwt=pay

John NYC March 11

The irony here seems to be that in attempting to make the aircraft as safe as possible (with systems updates and such), Boeing may very well have made their product less safe. Since the crashes, to date, have been limited to the one product, that product should be grounded until a viable determination has been made. John~ American Net'Zen

cosmos Washington March 11

Knowing quite a few Boeing employees and retirees, people who have shared numerous stories of concerns about Boeing operations -- I personally avoid flying. As for the assertion: "The business of building and selling jets is brutally competitive" -- it is monopolistic competition, as there are only two players. That means consumers (in this case airlines) do not end up with the best and widest array of airplanes. The more monopolistic a market, the more it needs to be regulated in the public interest -- yet I seriously doubt the FAA or any governmental agency has peeked into all the cost cutting measures Boeing has implemented in recent years

drdeanster tinseltown March 11

@cosmos Patently ridiculous. Your odds are greater of dying from a lightning strike, or in a car accident. Or even from food poisoning. Do you avoid driving? Eating? Something about these major disasters makes people itching to abandon all sense of probability and statistics.

Bob Milan March 11

When the past year was the deadliest one in decades, and when two disasters within that year involved the same plane, how can anyone not draw the inference that there is something wrong with the plane? In statistical studies of a pattern, this is a very strong basis for reasoning that something is wrong with the plane. When the numbers involve human lives, we must take the possibility of design flaws very seriously. The MAX planes should all be grounded for now. Period.

mak pakistan March 11

@Bob couldn't agree more - however, the basic design and engineering of the 737 has proven dependable over the past ~6 decades... not saying that there haven't been accidents, but these probably lie well within the industry/type averages. The problems seem to have arisen with the introduction of systems which were purportedly introduced to take part of the workload off the pilots and pass it onto a central computerised system.

Maybe the 'automated anti-stalling' programme installed in the 737 Max, due to some erroneous inputs from the sensors, provided inaccurate data to the flight management controls, leading to the stalling of the aircraft. It seems that the manufacturer did not, before delivery of the planes to customers, provide sufficient technical data about the upgraded software and, in case of malfunction, the corrective procedures to be followed to mitigate such disasters.

The procedure for the pilot to take full control of the aircraft by disengaging the central computer should be simple and fast to execute. Please we don't want Tesla driverless vehicles high up in the sky !

James Conner Northwestern Montana March 11

All we know at the moment is that a 737 Max crashed in Africa a few minutes after taking off from a high elevation airport. Some see similarities with the crash of Lion Air's 737 Max last fall -- but drawing a line between the only two dots that exist does not begin to present a useful picture of the situation.

Human nature seeks an explanation for an event, and may lead some to make assumptions that are without merit in order to provide closure. That tendency is why following a dramatic event, when facts are few, and the few that exist may be misleading, there is so much cocksure speculation masquerading as solid, reasoned, analysis. At this point, it's best to keep an open mind and resist connecting dots.

Peter Sweden March 11

@James Conner Two deadly crashes shortly after the introduction of a new airplane have no precedent in recent aviation history. And the last time it happened (with the Comet), it was due to a faulty aircraft design. There is, of course, some chance that there is no connection between the two accidents, but if there is, the consequences are huge. Especially because the two events happened in a very similar fashion (right after takeoff, with wild altitude changes), there are more similarities than just the type of the plane. So there is literally no reason to keep this model in the air until the investigation is concluded. Oh well, there is one: money. Over human lives.

svenbi NY March 11

It might be the wrong analogy, but if Toyota/Lexus recalled over 1.5 million vehicles due to at least 20 fatalities in relation to potentially faulty airbags, Boeing should -- after over 300 deaths in just about 6 months -- pull their product off the market voluntarily until it is sorted out once and for all.

This tragic situation recalls the early days of the de Havilland Comet, operated by BOAC, which kept plunging from the skies in its first years of operation until the fault was traced to the rectangular windows, which did not withstand the pressurization at jet speeds; the subsequent cracks in the fuselage ripped the planes apart in midflight.

Thore Eilertsen Oslo March 11

A third crash could have the potential to take the aircraft manufacturer out of business; it is therefore unbelievable that the reasons for the Lion Air crash haven't been properly established yet. With more than 100 Boeing 737 Max aircraft already grounded, I would expect crash investigations now to be severely fast-tracked.

And the entire fleet should be grounded on the principle of "better safe than sorry". But then again, that would cost Boeing money, suggesting that the company's assessment of the risks involved favours continued operations above the absolute safety of passengers.

Londoner London March 11

@Thore Eilertsen This is also not a case for a secretive and extended crash investigation process. As soon as the cockpit voice recording is extracted - which might be later today - it should be made public. We also need to hear the communications between the controllers and the aircraft and to know about the position regarding the special training the pilots received after the Lion Air crash.

Trevor Canada March 11

@Thore Eilertsen I would imagine that Boeing will be the first to propose grounding these planes if they believe with a high degree of probability that it's their issue. They have the most to lose. Let logic and patience prevail.

Marvin McConoughey oregon March 11

It is very clear, even in these early moments, that aircraft makers need far more comprehensive information on everything pertinent that is going on in cockpits when pilots encounter problems. That information should be continually transmitted to ground facilities in real time to permit possible ground technical support.

[Mar 11, 2019] The university professors, who teach but do not learn: neoliberal shill DeLong tries to prolong the life of neoliberalism in the USA

Highly recommended!
DeLong is more dangerous than Malkin... He poisons students with neoliberalism more effectively.
Mar 11, 2019 | www.nakedcapitalism.com

Kurtismayfield , , March 10, 2019 at 10:52 am

Re:Wall Street Democrats

They know, however, that they've been conned, played, and they're absolute fools in the game.

Thank you Mr. Black for the laugh this morning. They know exactly what they have been doing. Whether it was deregulating so that hedge funds and vulture capitalism can thrive, or making sure us peons cannot discharge debts, or making everything about financialization. This was all done on purpose, without care for "winning the political game". Politics is economics, and the Wall Street Democrats have been winning.

notabanker , , March 10, 2019 at 12:26 pm

For sure. I'm quite concerned at the behavior of the DNC leadership and pundits. They are doubling down on blatant corporatist agendas. They are acting like they have this in the bag when objective evidence says they do not and are in trouble. Assuming they are out of touch is naive to me. I would assume the opposite, they know a whole lot more than what they are letting on.

urblintz , , March 10, 2019 at 12:49 pm

I think the notion that the DNC and the Democrats' ruling class would rather lose to a like-minded Republican corporatist than win with someone who stands for genuine progressive values offering "concrete material benefits" is correct. I held my nose and read the comments at the Kos straw polls (where Sanders consistently wins by a large margin), and it's clear to me that the Clintonistas will do everything in their power to derail Bernie.

polecat , , March 10, 2019 at 1:00 pm

"It's the Externalities, stupid economists !" *should be the new rallying cry ..

rd , , March 10, 2019 at 3:26 pm

Keynes' "animal spirits" and the "tragedy of the commons" (Lloyd, 1833 and Hardin, 1968) both implied that economics was messier than Samuelson and Friedman would have us believe because there are actual people with different short- and long-term interests.

The behavioral folks (Kahnemann, Tversky, Thaler etc.) have all shown that people are even messier than we would have thought. So most macro-economic stuff over the past half-century has been largely BS in justifying trickle-down economics, deregulation etc.

There needs to be some inequality, as that provides incentives via capitalism, but unfettered it turns into France 1789 or the Great Depression. It is no coincidence that the major experiment in this in the late 90s and early 2000s required massive government intervention to keep the ship from sinking less than a decade after the great unregulated creative forces were unleashed.

MMT is likely to be similar, where productive uses of deficits can be beneficial, but if the money is wasted on stupid stuff like unnecessary wars, then the loss of credibility means that the fiat currency won't be quite as fiat anymore. Britain was unbelievably economically powerful in the late 1800s but within half a century became an economic afterthought, hamstrung by deficits after two major wars and a depression.

So it is good that people like Brad DeLong are coming to understand that the pretty economic theories have some truths but are utter BS (and dangerous) when extrapolated without accounting for how people and societies actually behave.

Chris Cosmos , , March 10, 2019 at 6:43 pm

I never understood the incentive to make more money -- that only works if money = true value, and that is the implication of living in a capitalist society (not just a capitalist economy): everything then becomes a commodity, and the result is alienation and all the depression, fear, and anxiety that I see around me. Whereas human happiness actually comes from helping others and finding meaning in life, not from money or from dominating others. That's what social science seems to be telling us.

Oregoncharles , , March 10, 2019 at 2:46 pm

Quoting DeLong:

" He says we are discredited. Our policies have failed. And they've failed because we've been conned by the Republicans."

That's welcome, but it's still making excuses. Neoliberal policies have failed because the economics were wrong, not because "we've been conned by the Republicans." Furthermore, this may be important: if it isn't acknowledged, those policies are quite likely to come sneaking back, especially when Democrats are again in the ascendant, as they will be, given the seesaw built into the two-party system.

The Rev Kev , , March 10, 2019 at 7:33 pm

Might be right there. Groups like the neocons were originally attached to the left side of politics but, when the winds changed, detached themselves and went over to the Republican right. The winds are changing again, so those who want power may be going over to what is now called the left to keep their grip on it. But what you say is quite true. It is not really the policies that failed but the economics themselves that were wrong and which, in an honest debate, do not make sense either.

marku52 , , March 10, 2019 at 3:39 pm

"And they've failed because we've been conned by the Republicans.""

Not at all. What about the "free trade" hokum that DeLong and his pal Krugman have been peddling since forever? History and every empirical test in the modern era show that it fails in developing countries and only exacerbates inequality in richer ones.

That's just a failed policy.

I'm still waiting for an apology for all those years that those two insulted anyone who questioned their dogma as just "too ignorant to understand."

Glen , , March 10, 2019 at 4:47 pm

Thank you!

He created FAILED policies. He pushed policies which have harmed America, harmed Americans, and destroyed the American dream.

Kevin Carhart , , March 10, 2019 at 4:29 pm

It's intriguing, but two other voices come to mind. One is Never Let a Serious Crisis Go To Waste by Mirowski and the other is Generation Like by Doug Rushkoff.

Neoliberalism is partly a set of entrepreneurial self-conceptions which took a long time to promote. Rushkoff's Frontline film shows the YouTube culture. There is a girl with a "leaderboard" on the wall of her suburban room, keeping track of her metrics.

There's a devastating VPRO Backlight film on the same topic. Internet-platform neoliberalism does not have much to do with the GOP.

It's going to be an odd hybrid at best – you could have deep-red communism, but enacted for and by people whose self-conception is influenced by decades of Becker and Hayek? One place this question leads is to ask what the relationship is between this set of ideas and philosophies centered on material conditions. If new policies pass that create a different possibility materially, will the vise grip of the entrepreneurial self loosen?

Partially yeah, maybe, a Job Guarantee if it passes and actually works, would be an anti-neoliberal approach to jobs, which might partially loosen the regime of neoliberal advice for job candidates delivered with a smug attitude that There Is No Alternative. (Described by Gershon). We take it seriously because of a sense of dread that it might actually be powerful enough to lock us out if we don't, and an uncertainty of whether it is or not.

There has been deep damage which is now a very broad and resilient base. It is one of the prongs of why 2008 did not have the kind of discrediting effect that 1929 did. At least that's what I took away from _Never Let_.

Brad DeLong handing the baton might mean something but it is not going to ameliorate the sense-of-life that young people get from managing their channels and metrics.

Take the new 1099 platforms as another focal point. Suppose there were political measures that splice in on the platforms and take the edge off materially, such as underwritten healthcare not tied to your job. The platforms still use star ratings, make star ratings seem normal, and continually push a self-conception as a small business. If you have overt DSA plus covert Becker it is, again, a strange hybrid.

Jeremy Grimm , , March 10, 2019 at 5:13 pm

Your comment is very insightful. Neoliberalism embeds its mindset into the very fabric of our culture and self-concepts. It strangely twists many of our core myths and beliefs.

Raulb , , March 10, 2019 at 6:36 pm

This is nothing but a Trojan horse to 'co-opt' and 'subvert'. Neoliberals sense a risk to their neo-feudal project and are simply attempting to infiltrate and hollow out any threats from within.

These are the same folks who have let entire economics departments become mouthpieces for corporate propaganda and worked with thousands of think tanks and international organizations to mislead, misinform and cause pain to millions of people.

They have seeded decontextualized words like 'wealth creators' and 'job creators' to create a halo narrative for corporate interests and undermine society, citizenship, the social good, the environment that make 'wealth creation' even possible. So all those take a backseat to 'wealth creator' interests. Since you can't create wealth without society this is some achievement.

It's because of them that we live in a world where the most important economic idea is protecting the likes of the Kochs' business and personal interests and making sure government is not 'impinging on their freedom'. And the corollary is a fundamentally anti-human narrative where ordinary people and workers are held in contempt for even expecting living wages and conditions, and where their access to basics like education, health care and decent living conditions is hollowed out to promote privatization and recast as 'entitlements'.

Neoliberalism has left us with a decontextualized, highly unstable world that exists as a collective but is forcefully detached into a context-less individual existence. These are not the mistakes of otherwise 'well-meaning' individuals; they are the results of hard-core ideologues and high priests of power.

Dan , , March 10, 2019 at 7:31 pm

Two thumbs up. This has been an ongoing agenda for decades and it has succeeded in permeating every aspect of society, which is why the United States is such a vacuous, superficial place. And it's exporting that superficiality to the rest of the world.

VietnamVet , , March 10, 2019 at 7:17 pm

I read Brad DeLong's and Paul Krugman's blogs until their contradictions became too great. If anything, we need more people seeing the truth. The Global War on Terror is in its 18th year. By this October the USA will have spent approximately $6 trillion on it and accomplished nothing except creating blowback. The middle class is disappearing. Those who remain in their homes are head over heels in debt.

The average American household carries $137,063 in debt. The wealthy are getting richer.

The Jeff Bezos, Warren Buffett and Bill Gates families together have as much wealth as the lowest half of Americans. Donald Trump's Presidency and Brexit document that neoliberal politicians have lost contact with reality. They are nightmares that there is no escaping. At best, perhaps, Roosevelt Progressives will be reborn to resurrect regulated capitalism and debt forgiveness.

But more likely is a middle-class revolt when Americans no longer can pay for water, electricity, food, medicine and are jailed for not paying a $1,500 fine for littering the Beltway.

A civil war inside a nuclear armed nation state is dangerous beyond belief. France is approaching this.

[Mar 10, 2019] How do I detach a process from Terminal, entirely?

Mar 10, 2019 | superuser.com

stackoverflow.com, Aug 25, 2016 at 17:24

I use Tilda (drop-down terminal) on Ubuntu as my "command central" - pretty much the way others might use GNOME Do, Quicksilver or Launchy.

However, I'm struggling with how to completely detach a process (e.g. Firefox) from the terminal it's been launched from - i.e. prevent such a (non-)child process from being terminated when the originating terminal is closed, and from polluting that terminal via STDOUT/STDERR.

For example, in order to start Vim in a "proper" terminal window, I have tried a simple script like the following:

exec gnome-terminal -e "vim $@" &> /dev/null &

However, that still causes pollution (also, passing a file name doesn't seem to work).

lhunath, Sep 23, 2016 at 19:08

First of all: once you've started a process, you can background it by first stopping it (hit Ctrl-Z) and then typing bg to let it resume in the background. It's now a "job", and its stdout/stderr/stdin are still connected to your terminal.
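As a minimal illustration of that stop-and-resume sequence (sleep is just a stand-in for any long-running command; the exact job-control messages vary slightly by shell):

$ sleep 600          # some long-running foreground process
^Z                   # press Ctrl-Z to stop it
[1]+  Stopped        sleep 600
$ bg %1              # let job 1 resume in the background
$ jobs               # it is now a background job of this shell
[1]+  Running        sleep 600 &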

You can start a process as backgrounded immediately by appending a "&" to the end of it:

firefox &

To run it in the background silenced, use this:

firefox </dev/null &>/dev/null &

Some additional info:

nohup is a program you can use to run your application such that its stdout/stderr can be sent to a file instead, and such that closing the parent script won't SIGHUP the child. However, you need to have had the foresight to use it before you started the application. Because of the way nohup works, you can't just apply it to a running process.
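A minimal sketch of using nohup up front (firefox is only an example command; the log file name is arbitrary):

nohup firefox > firefox.log 2>&1 &   # immune to SIGHUP, output captured in firefox.log
# with no redirection, nohup appends stdout/stderr to ./nohup.out by default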

disown is a bash builtin that removes a shell job from the shell's job list. What this basically means is that you can't use fg or bg on it anymore, but more importantly, when you close your shell it won't hang or send a SIGHUP to that child anymore. Unlike nohup, disown is used after the process has been launched and backgrounded.
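In practice that looks roughly like this (the %1 assumes it is the only background job):

firefox &      # launch and background the process first
disown %1      # then drop job 1 from the job table; no SIGHUP when this shell exits
# plain 'disown' with no argument acts on the most recently backgrounded job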

What you can't do is change the stdout/stderr/stdin of a process after having launched it. At least not from the shell. If you launch your process and tell it that its stdout is your terminal (which is what you do by default), then that process is configured to output to your terminal. Your shell has no business with the process's FD setup; that's purely something the process itself manages. The process itself can decide whether to close its stdout/stderr/stdin or not, but you can't use your shell to force it to do so.

To manage a background process' output, you have plenty of options from scripts, "nohup" probably being the first to come to mind. But for interactive processes you start but forgot to silence ( firefox < /dev/null &>/dev/null & ) you can't do much, really.

I recommend you get GNU screen. With screen you can just close your running shell when the process's output becomes a bother and open a new one (^A c).
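A minimal screen workflow, assuming GNU screen is installed (the session name is arbitrary):

screen -S scratch     # start a named screen session
firefox               # run the noisy command inside it
# press Ctrl-a d to detach; the process keeps running inside screen
screen -r scratch     # reattach later if you want to look at the output again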


Oh, and by the way, don't use "$@" where you're using it.

$@ means $1, $2, $3, ..., which would turn your command into:

gnome-terminal -e "vim $1" "$2" "$3" ...

That's probably not what you want because -e only takes one argument. Use $1 to show that your script can only handle one argument.

It's really difficult to get multiple arguments working properly in the scenario that you gave (with gnome-terminal -e) because -e takes only one argument, which is a shell command string. You'd have to encode your arguments into one. The best and most robust, but rather kludgy, way is like so:

gnome-terminal -e "vim $(printf "%q " "$@")"
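For illustration, assuming that last variant is saved in a script called vimterm (a hypothetical name), the %q quoting lets arguments containing spaces survive being packed into the single -e string:

./vimterm "notes with spaces.txt" todo.txt
# printf "%q " "$@" expands them to:  notes\ with\ spaces.txt todo.txt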

Limited Atonement ,Aug 25, 2016 at 17:22

nohup cmd &

nohup detaches the process completely (daemonizes it)

Randy Proctor ,Sep 13, 2016 at 23:00

If you are using bash, try disown [jobspec]; see bash(1).

Another approach you can try is at now. If you're not the superuser, your permission to use at may be restricted.

Stephen Rosen ,Jan 22, 2014 at 17:08

Reading these answers, I was under the initial impression that issuing nohup <command> & would be sufficient. Running zsh in gnome-terminal, I found that nohup <command> & did not prevent my shell from killing child processes on exit. Although nohup is useful, especially with non-interactive shells, it only guarantees this behavior if the child process does not reset its handler for the SIGHUP signal.

In my case, nohup should have prevented hangup signals from reaching the application, but the child application (VMWare Player in this case) was resetting its SIGHUP handler. As a result when the terminal emulator exits, it could still kill your subprocesses. This can only be resolved, to my knowledge, by ensuring that the process is removed from the shell's jobs table. If nohup is overridden with a shell builtin, as is sometimes the case, this may be sufficient, however, in the event that it is not...


disown is a shell builtin in bash, zsh, and ksh93:

<command> &
disown

or

<command> & disown

if you prefer one-liners. This has the generally desirable effect of removing the subprocess from the jobs table. This allows you to exit the terminal emulator without accidentally signaling the child process at all. No matter what the SIGHUP handler looks like, this should not kill your child process.

After the disown, the process is still a child of your terminal emulator (play with pstree if you want to watch this in action), but after the terminal emulator exits, you should see it attached to the init process. In other words, everything is as it should be, and as you presumably want it to be.
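A rough way to watch that reparenting happen (PIDs will differ, and on systemd-based distributions the orphan may be adopted by a user-session manager rather than PID 1):

firefox & disown               # launch, background, and disown in one go
pstree -p $$ | grep firefox    # still listed under the current shell's tree
# after closing the terminal emulator, from another shell:
pstree -p 1 | grep firefox     # now hangs off init (or the session manager)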

What to do if your shell does not support disown ? I'd strongly advocate switching to one that does, but in the absence of that option, you have a few choices.

  1. screen and tmux can solve this problem, but they are much heavier weight solutions, and I dislike having to run them for such a simple task. They are much more suitable for situations in which you want to maintain a tty, typically on a remote machine.
  2. For many users, it may be desirable to see if your shell supports a capability like zsh's setopt nohup . This can be used to specify that SIGHUP should not be sent to the jobs in the jobs table when the shell exits. You can either apply this just before exiting the shell, or add it to shell configuration like ~/.zshrc if you always want it on.
  3. Find a way to edit the jobs table. I couldn't find a way to do this in tcsh or csh , which is somewhat disturbing.
  4. Write a small C program to fork off and exec() . This is a very poor solution, but the source should only consist of a couple dozen lines. You can then pass commands as commandline arguments to the C program, and thus avoid a process specific entry in the jobs table.

Sheljohn ,Jan 10 at 10:20

  1. nohup $COMMAND &
  2. $COMMAND & disown
  3. setsid command

I've been using number 2 for a very long time, but number 3 works just as well. Also, disown has a 'nohup' flag of '-h', can disown all processes with '-a', and can disown all running processes with '-ar'.
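A rough sketch of the third option, setsid, which starts the command in a brand-new session so it is detached from this terminal from the outset, plus the -h flag mentioned above (firefox is only an example command):

setsid firefox &     # firefox ends up in its own session, detached from this terminal
firefox &
disown -h %1         # job stays listed, but the shell won't send it SIGHUP on exit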

Silencing is accomplished by '$COMMAND &>/dev/null'.

Hope this helps!

dunkyp, Mar 25, 2009 at 1:51

I think screen might solve your problem

Nathan Fellman ,Mar 23, 2009 at 14:55

in tcsh (and maybe in other shells as well), you can use parentheses to detach the process.

Compare this:

> jobs # shows nothing
> firefox &
> jobs
[1]  + Running                       firefox

To this:

> jobs # shows nothing
> (firefox &)
> jobs # still shows nothing
>

This removes firefox from the jobs listing, but it is still tied to the terminal; if you logged in to this node via 'ssh', trying to log out will still hang the ssh process.


To dissociate from the tty/shell, run the command through a sub-shell, e.g.:

(command)&

When exit is used, the terminal closes but the process is still alive.

check -

(sleep 100) & exit

Open other terminal

ps aux | grep sleep

Process is still alive.

[Mar 10, 2019] linux - How to attach terminal to detached process

Mar 10, 2019 | unix.stackexchange.com


Gilles ,Feb 16, 2012 at 21:39

I have detac