
Enterprise Linux sickness with overcomplexity:
a slightly skeptical view on enterprise Linux distributions


Introduction

Imagine a language in which both grammar and vocabulary are changing every three to five years. And both are so huge that they are beyond any normal human comprehension. You can learn some subset of both the vocabulary and the grammar when you work closely with a particular subsystem for several months in a row, only to forget it after a couple of months or quarters. The classic example here is RHEL kickstart.

In a sense, all talk about Linux security is a joke, as you can't secure an OS that is far, far beyond your ability to comprehend. So state-sponsored hackers will always have an edge in breaking into Linux.

Linux became too complex for a single person to master. It is now yet another monstrous OS that nobody knows well (its sheer scale puts it far above the capabilities of mere mortals). And that's the problem. Both Red Hat and Suse are now software development companies that can be called "overcomplexity junkies". And it shows in their recent products. Actually SLES is even worse than RHEL in this respect, despite being (originally) a German distribution.

Generally in Linux administration (as previously in enterprise Unix administration) you get what you paid for. Nothing can replace multi-year experience, and experience is often acquired by making expensive mistakes (see Admin Horror Stories). Vendor training is expensive and is more or less available only to sysadmins in a few industries (the financial industry is one). For Red Hat we have a situation that closely resembles the one well known from Solaris: training is rather good, but prices are exorbitant.

Due to the current complexity (or, more correctly, overcomplexity) of Linux environments, most sysadmins can master only the commonly used subsystems, and only for one flavor of Linux. Better ones might be able to support two (with a highly asymmetrical level of skills, being usually considerably more proficient in one flavor than the other). In other words, the Unix wars are now replayed on Linux turf with a vengeance.

The level of mental overload and frustration from the overcomplexity of the two major enterprise Linux flavours (RHEL and SLES) is such that people are ready for a change. Note that in an OS ecosystem there is a natural tendency toward monopoly -- nothing succeeds like success -- and the critical mass of installations that those two "monstrously complex" Linux distributions hold prevents any escape, especially in enterprise environments. Red Hat can essentially dictate what Linux should be, as it did by incorporating systemd in RHEL 7.

Still, there is a large difference between RHEL and SLES in popularity.

Ubuntu -- a dumbed-down Linux based on Debian, with some strange design decisions -- is now gaining ground at the expense of Suse. It is still mainly a desktop OS, but it is gradually acquiring some enterprise share too. That makes the number of enterprise Linux distributions close to what we used to have in the commercial Unix space (Solaris, AIX and HP-UX), with Debian/Ubuntu playing the role of Solaris.

Package Hell

The idea of precompiled packages is great until it is not. And that's where we are now. Important packages such as the R language or InfiniBand drivers from Mellanox routinely block the ability to patch systems in RHEL 6.
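A common workaround is to exclude the offending packages from the update transaction, or to pin them with the versionlock plugin. A sketch (the package names here are just examples; substitute the actual offenders on your system):

   # Patch the system while skipping the packages that break dependency resolution
   yum update --exclude='R-core*' --exclude='kmod-mlnx*'

   # Or pin the problem packages (requires the yum-plugin-versionlock package)
   yum install yum-plugin-versionlock
   yum versionlock R-core kmod-mlnx-ofa_kernel
   yum update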

The total number of packages is just way too great, with many overlapping packages. Typically it is over one thousand, unless you use the base system or an HPC computational node distribution. In the latter case it is still over six hundred.
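This is easy to check on any given box:

   # Count installed packages
   rpm -qa | wc -l

   # List the largest packages, to see what is actually eating space
   rpm -qa --queryformat '%{SIZE} %{NAME}\n' | sort -rn | head -20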

The number of daemons running in a default RHEL installation is also very high, and few people understand what all those daemons are doing and why they are running after startup. In other words, RHEL is the Microsoft Windows of the Linux world. And with systemd pushed down the throat of enterprise customers, you will understand even less.
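To see it for yourself (RHEL 6 commands, with the RHEL 7 systemd equivalent last):

   # RHEL 6: which services are enabled in the current runlevels
   chkconfig --list | grep ':on'

   # RHEL 6: how many daemons are actually running
   service --status-all 2>/dev/null | grep -c 'is running'

   # RHEL 7: currently running services under systemd
   systemctl list-units --type=service --state=running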

Support is expensive, but the help from support is marginal. All those guys do is look into a database to see if something similar exists. That works for some problems, but for most it does not. Using a free version of Linux such as CentOS is an escape, but with commercial applications you are facing trouble: the vendor can easily blame the OS for the problem you are having, and then you are holding the bag.

No effort is made to consolidate those hundreds of overlapping packages (some barely supported or unsupported). This "package mess" is a distinct feature of modern enterprise Linux distributions, along with library hell.

Troubles with SELinux

SLES until recently was slightly simpler than RHEL, as it did not include the horribly complex security subsystem that RHEL uses -- SELinux. It takes a lot of effort to learn even the basics of SELinux and to properly configure a single Internet-facing server. Most sysadmins just use it blindly, either enabling or disabling it without understanding any details of its functioning (or, more correctly, understanding it only at the level that allows them to use common protocols, much as is the case with firewalls).

Actually there was a better solution in Linux space, used in SLES (AppArmor), which was a pretty elegant solution to a complex problem, if you ask me. But the critical mass of installations and market share secured by Red Hat made SELinux "king of the hill" and prevented AppArmor from becoming the Linux standard. As a result SUSE was forced to incorporate SELinux.

SELinux provides a Mandatory Access Control (MAC) system built into the Linux kernel (that is, the stuff that labels things as "super secret", "secret" and "confidential", which three-letter agencies use to guard information). Historically, Security Enhanced Linux (SELinux) was an open source project sponsored by the National Security Agency. Despite the user-friendly GUI, SELinux is difficult to configure and hard to understand. The documentation does not help much either. Most administrators just turn the SELinux subsystem off during the initial install, but for an Internet-facing server you need to configure and use it, or... And sometimes the effects can be really subtle: for example, you can log in as root using password authentication but not using a passwordless ssh certificate. That's why many complex applications, especially in the HPC area, explicitly recommend disabling SELinux as a starting point of installation. You can find articles on the Web devoted to this topic.
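The typical sequence that those installation guides effectively prescribe looks like this (shown for illustration; disabling SELinux on an Internet-facing server is exactly the blind practice criticized above):

   # Check the current mode
   getenforce        # prints Enforcing, Permissive, or Disabled
   sestatus          # more detailed status

   # Switch to permissive mode until the next reboot (denials are logged, not enforced)
   setenforce 0

   # Disable permanently (takes effect after reboot)
   sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config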

SELinux produces some very interesting errors (see for example http://bugs.mysql.com/bug.php?id=12676) and is not very compatible with some subsystems and complex applications. Especially telling is this comment to the blog post How to disable SELinux in RHEL 5:

Aeon said... @ May 13, 2008 2:34 PM
Thanks a million! I was dealing with a samba refusing to access the server shared folders. After about 2 hours of scrolling forums I found out the issue may be this shitty thing samba_selinux.

I usually disable it when I install, but this time I had to use the Dell utilities (no choice at all) and they enabled the thing. Disabled it your way, rebooted and it works as I wanted it. Thanks again!

SLES has one significant defect: by default it does not assign each user a unique group as RHEL does. But this can be fixed with a special wrapper for the useradd command. In its simplest form it can be just:


   # Wrapper for the useradd command that assigns each user
   # a unique private group, RHEL-style.
   # Accepts two arguments: UID and user name, for example:
   #   uadd 3333 joedoers

   function uadd
   {
       groupadd -g "$1" "$2"             # create private group with GID equal to UID
       useradd -u "$1" -g "$1" -m "$2"   # create user with that UID and primary group
   }
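
Usage is then simply:

   uadd 3333 joedoers
   id joedoers     # uid=3333(joedoers) gid=3333(joedoers) groups=3333(joedoers)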


Working closely with commercial Linuxes and seeing all their warts, one instantly understands that traditional Open Source (GPL-based Open Source) is a very problematic business model. Historically (especially in the case of Red Hat) it was used as a smoke screen for the VCs to get software engineers to work for free -- not even for minimum wage, but for free -- and to grab as much money from suckers as they can, using all the right words as an anesthetic. Essentially they take this hard work, pump $$$ into marketing, and either sell the resulting company to one of their other portfolio companies or take it public and dump the shares on the public. Meanwhile the software engineers who worked to develop that software for free, aka slave labor, get $0.00 for their hard work, while the VCs, the top brass of the startup and the investment bankers make a killing.

And of course they then get their buddies in the mainstream media to hype GPL-based Open Source development as the best thing since sliced bread.

Licensing

RHEL licensing is a mess too. In addition, the two higher-level licenses are expensive and make a Microsoft server license look very competitive. Recently they went the "IBM way" and started to charge different prices for 4-socket servers: you can't just use two 2-socket licenses to license a 4-socket server with their new subscription manager. The next step will be classic IBM per-core licensing; that's why so many people passionately hate IBM.

There are three different types of licensing (let's call them patch-only, regular, and premium support). Each has several variations (for example, the HPC computational node license is a variant of the patch-only license, but does not provide a GUI and many packages in the repository). The level of tech support with the latter two (which are truly enterprise licenses) is very similar -- similarly dismal -- especially for complex problems, unless you press them really hard.

In addition, Red Hat people screwed up their portal so much that you can't tell which server is assigned to which license. That situation improved with the subscription manager, but new problems arose.
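At least subscription-manager lets you query the state from the server itself instead of trusting the portal; a few standard subcommands, shown for illustration:

   subscription-manager identity           # how this system is registered
   subscription-manager list --consumed    # which subscriptions this server consumes
   subscription-manager list --available   # which subscriptions could be attached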

Generally the level of screw-up of the RHEL user portal is such that there are doubts they can do anything useful in the Linux space in the future, other than try to hold on to their market share.

All in all, RHEL 6 is very complex but still a usable enterprise Linux distribution, because it did not change radically from RHEL 4 and 5. But it is not fun to use anymore. It's a pain. It's a headache. The same is true for SLES.

For RHEL 7 stronger words are applicable.




Old News ;-)


[Jun 09, 2017] Sneaky hackers use Intel management tools to bypass Windows firewall

Notable quotes:
"... the group's malware requires AMT to be enabled and serial-over-LAN turned on before it can work. ..."
"... Using the AMT serial port, for example, is detectable. ..."
"... Do people really admin a machine through AMT through an external firewall? ..."
"... Businesses demanded this technology and, of course, Intel beats the drum for it as well. While I understand their *original* concerns I would never, ever connect it to the outside LAN. A real admin, in jeans and a tee, is a much better solution. ..."
Jun 09, 2017 | arstechnica.com
When you're a bad guy breaking into a network, the first problem you need to solve is, of course, getting into the remote system and running your malware on it. But once you're there, the next challenge is usually to make sure that your activity is as hard to detect as possible. Microsoft has detailed a neat technique used by a group in Southeast Asia that abuses legitimate management tools to evade firewalls and other endpoint-based network monitoring.

The group, which Microsoft has named PLATINUM, has developed a system for sending files -- such as new payloads to run and new versions of their malware -- to compromised machines. PLATINUM's technique leverages Intel's Active Management Technology (AMT) to do an end-run around the built-in Windows firewall. The AMT firmware runs at a low level, below the operating system, and it has access to not just the processor, but also the network interface.

The AMT needs this low-level access for some of the legitimate things it's used for. It can, for example, power cycle systems, and it can serve as an IP-based KVM (keyboard/video/mouse) solution, enabling a remote user to send mouse and keyboard input to a machine and see what's on its display. This, in turn, can be used for tasks such as remotely installing operating systems on bare machines. To do this, AMT not only needs to access the network interface, it also needs to simulate hardware, such as the mouse and keyboard, to provide input to the operating system.

But this low-level operation is what makes AMT attractive for hackers: the network traffic that AMT uses is handled entirely within AMT itself. That traffic never gets passed up to the operating system's own IP stack and, as such, is invisible to the operating system's own firewall or other network monitoring software. The PLATINUM software uses another piece of virtual hardware -- an AMT-provided virtual serial port -- to provide a link between the network itself and the malware application running on the infected PC.

Communication between machines uses serial-over-LAN traffic, which is handled by AMT in firmware. The malware connects to the virtual AMT serial port to send and receive data. Meanwhile, the operating system and its firewall are none the wiser. In this way, PLATINUM's malware can move files between machines on the network while being largely undetectable to those machines.

PLATINUM uses AMT's serial-over-LAN (SOL) to bypass the operating system's network stack and firewall.


AMT has been under scrutiny recently after the discovery of a long-standing remote authentication flaw that enabled attackers to use AMT features without needing to know the AMT password. This in turn could be used to enable features such as the remote KVM to control systems and run code on them.

However, that's not what PLATINUM is doing: the group's malware requires AMT to be enabled and serial-over-LAN turned on before it can work. This isn't exploiting any flaw in AMT; the malware just uses the AMT as it's designed in order to do something undesirable.

Both the PLATINUM malware and the AMT security flaw require AMT to be enabled in the first place; if it's not turned on at all, there's no remote access. Microsoft's write-up of the malware expressed uncertainty about this part; it's possible that the PLATINUM malware itself enabled AMT -- if the malware has Administrator privileges, it can enable many AMT features from within Windows -- or that AMT was already enabled and the malware managed to steal the credentials.

While this novel use of AMT is useful for transferring files while evading firewalls, it's not undetectable. Using the AMT serial port, for example, is detectable. Microsoft says that its own Windows Defender Advanced Threat Protection can even distinguish between legitimate uses of serial-over-LAN and illegitimate ones. But it's nonetheless a neat way of bypassing one of the more common protective measures that we depend on to detect and prevent unwanted network activity.

potato44819, Ars Legatus Legionis, Jun 8, 2017 8:59 PM

"Microsoft says that its own Windows Defender Advanced Threat Protection can even distinguish between legitimate uses of serial-over-LAN and illegitimate ones. But it's nonetheless a neat way of bypassing one of the more common protective measures that we depend on to detect and prevent unwanted network activity."

It's worth noting that this is NOT Windows Defender.

Windows Defender Advanced Threat Protection is an enterprise product.

aexcorp, Ars Scholae Palatinae, Jun 8, 2017 9:04 PM
This is pretty fascinating and clever TBH. AMT might be convenient for sysadmin, but it's proved to be a massive PITA from the security perspective. Intel needs to really reconsider its approach or drop it altogether.

"it's possible that the PLATINUM malware itself enabled AMT-if the malware has Administrator privileges, it can enable many AMT features from within Windows"

I've only had 1 machine that had AMT (a Thinkpad T500 that somehow still runs like a charm despite hitting the 10yrs mark this summer), and AMT was toggled directly via the BIOS (this is all pre-UEFI.) Would Admin privileges be able to overwrite a BIOS setting? Would it matter if it was handled via UEFI instead?

bothered , Ars Scholae Palatinae Jun 8, 2017 9:16 PM
Always on and undetectable. What more can you ask for? I have to imagine that an IDS system at the egress point would help here.
faz , Ars Praefectus Jun 8, 2017 9:18 PM
Using SOL and AMT to bypass the OS sounds like it would work over SOL and IPMI as well.

I only have one server that supports AMT, I just double-checked that the webui for AMT does not allow you to enable/disable SOL. It does not, at least on my version. But my IPMI servers do allow someone to enable SOL from the web interface.

xxx, Jun 8, 2017 9:24 PM
But do we know of an exploit over AMT? I wouldn't think any router firewall would allow packets bound for an AMT to go through. Is this just a mechanism to move within a LAN once an exploit has a beachhead? That is not a small thing, but it would give us a way to gauge the severity of the threat.

Do people really admin a machine through AMT through an external firewall?

zogus , Ars Tribunus Militum Jun 8, 2017 9:26 PM
fake-name wrote:

Hi there! I do hardware engineering, and I wish more computers had serial ports. Just because you don't use them doesn't mean their disappearance is "fortunate".

Just out of curiosity, what do you use on the PC end when you still do require traditional serial communication? USB-to-RS232 adapter?

bthylafh, Ars Tribunus Angusticlavius, Jun 8, 2017 9:34 PM
zogus wrote:
Just out of curiosity, what do you use on the PC end when you still do require traditional serial communication? USB-to-RS232 adapter?
tomca13 , Wise, Aged Ars Veteran Jun 8, 2017 9:53 PM
This PLATINUM group must be pissed about the INTEL-SA-00075 vulnerability being headline news. All those perfectly vulnerable systems having AMT disabled and limiting their hack.
Darkness1231 , Ars Tribunus Militum et Subscriptor Jun 8, 2017 10:41 PM
Causality wrote:
Intel AMT is a fucking disaster from a security standpoint. It is utterly dependent on security through obscurity with its "secret" coding, and anybody should know that security through obscurity is no security at all.
Businesses demanded this technology and, of course, Intel beats the drum for it as well. While I understand their *original* concerns I would never, ever connect it to the outside LAN. A real admin, in jeans and a tee, is a much better solution.

Hopefully, either Intel will start looking into improving this and/or MSFT will make enough noise that businesses might learn to do their update, provisioning in a more secure manner.

Nah, that ain't happening. Who am I kidding?

Darkness1231 , Ars Tribunus Militum et Subscriptor Jun 8, 2017 10:45 PM
meta.x.gdb wrote:
But do we know of an exploit over AMT? I wouldn't think any router firewall would allow packets bound for an AMT to go through. Is this just a mechanism to move within a LAN once an exploit has a beachhead? That is not a small thing, but it would give us a way to gauge the severity of the threat. Do people really admin a machine through AMT through an external firewall?
The interconnect is via W*. We ran this dog into the ground last month. Other OSs (all as far as I know (okay, !MSDOS)) keep them separate. Lan0 and lan1 as it were. However it is possible to access the supposedly closed off Lan0/AMT via W*. Which is probably why this was caught in the first place.

Note that MSFT has stepped up to the plate here. This is much better than their traditional silence-until-forced solution. Which is just the same security through plugging your fingers in your ears that Intel is supporting.

rasheverak , Wise, Aged Ars Veteran Jun 8, 2017 11:05 PM
Hardly surprising: https://blog.invisiblethings.org/papers ... armful.pdf

This is why I adamantly refuse to use any processor with Intel management features on any of my personal systems.

michaelar , Smack-Fu Master, in training Jun 8, 2017 11:12 PM
Brilliant. Also, manifestly evil.

Is there a word for that? Perhaps "bastardly"?

JDinKC , Smack-Fu Master, in training Jun 8, 2017 11:23 PM
meta.x.gdb wrote:
But do we know of an exploit over AMT? I wouldn't think any router firewall would allow packets bound for an AMT to go through. Is this just a mechanism to move within a LAN once an exploit has a beachhead? That is not a small thing, but it would give us a way to gauge the severity of the threat. Do people really admin a machine through AMT through an external firewall?
The catch would be any machine that leaves your network with AMT enabled. Say perhaps an AMT managed laptop plugged into a hotel wired network. While still a smaller attack surface, any cabled network an AMT computer is plugged into, and not managed by you, would be a source of concern.
Anonymouspock , Wise, Aged Ars Veteran Jun 8, 2017 11:42 PM
Serial ports are great. They're so easy to drive that they work really early in the boot process. You can fix issues with machines that are otherwise impossible to debug.
sphigel , Ars Centurion Jun 9, 2017 12:57 AM
aexcorp wrote:
This is pretty fascinating and clever TBH. AMT might be convenient for sysadmin, but it's proved to be a massive PITA from the security perspective. Intel needs to really reconsider its approach or drop it altogether.

"it's possible that the PLATINUM malware itself enabled AMT-if the malware has Administrator privileges, it can enable many AMT features from within Windows"

I've only had 1 machine that had AMT (a Thinkpad T500 that somehow still runs like a charm despite hitting the 10yrs mark this summer), and AMT was toggled directly via the BIOS (this is all pre-UEFI.) Would Admin privileges be able to overwrite a BIOS setting? Would it matter if it was handled via UEFI instead?

I'm not even sure it's THAT convenient for sys admins. I'm one of a couple hundred sys admins at a large organization and none that I've talked with actually use Intel's AMT feature. We have an enterprise KVM (raritan) that we use to access servers pre OS boot up and if we have a desktop that we can't remote into after sending a WoL packet then it's time to just hunt down the desktop physically. If you're just pushing out a new image to a desktop you can do that remotely via SCCM with no local KVM access necessary. I'm sure there's some sys admins that make use of AMT but I wouldn't be surprised if the numbers were quite small.
gigaplex , Ars Scholae Palatinae Jun 9, 2017 3:53 AM
zogus wrote:
fake-name wrote:

Hi there! I do hardware engineering, and I wish more computers had serial ports. Just because you don't use them doesn't mean their disappearance is "fortunate".

Just out of curiosity, what do you use on the PC end when you still do require traditional serial communication? USB-to-RS232 adapter?
We just got some new Dell workstations at work recently. They have serial ports. We avoid the consumer machines.

GekkePrutser , Ars Centurion Jun 9, 2017 4:18 AM
Quote:
Physical serial ports (the blue ones) are fortunately a relic of a lost era and are nowadays quite rare to find on PCs.
Not that fortunately.. Serial ports are still very useful for management tasks. It's simple and it works when everything else fails. The low speeds impose little restrictions on cables.

Sure, they don't have much security but that is partly mitigated by them usually only using a few metres cable length. So they'd be covered under the same physical security as the server itself. Making this into a LAN protocol without any additional security, that's where the problem was introduced. Wherever long-distance lines were involved (modems) the security was added at the application level.

[Feb 04, 2017] Restoring deleted /tmp folder

Jan 13, 2015 | cyberciti.biz

As my journey continues with Linux and the Unix shell, I made a few mistakes. I accidentally deleted the /tmp folder. To restore it, all you have to do is:

mkdir /tmp                # recreate the directory
chmod 1777 /tmp           # world-writable, with the sticky bit set
chown root:root /tmp      # owned by root, group root
ls -ld /tmp               # verify: drwxrwxrwt ... root root ... /tmp
 
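One caveat worth adding to this recipe: on a RHEL or CentOS system with SELinux enabled, the recreated directory also needs its default security context restored, or some services may not be able to write to it:

restorecon -v /tmp        # restore the default SELinux context on /tmp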

[Feb 04, 2017] Use CDPATH to access frequent directories in bash - Mac OS X Hints

Feb 04, 2017 | hints.macworld.com
The variable CDPATH defines the search path for the cd command, so it serves much like a "home for directories". The danger is in creating too complex a CDPATH; often a single directory works best. For example: export CDPATH=/srv/www/public_html. Now, instead of typing cd /srv/www/public_html/CSS I can simply type: cd CSS
Use CDPATH to access frequent directories in bash UNIX
Mar 21, '05 10:01:00AM • Contributed by: jonbauman

I often find myself wanting to cd to the various directories beneath my home directory (i.e. ~/Library, ~/Music, etc.), but being lazy, I find it painful to have to type the ~/ if I'm not in my home directory already. Enter CDPATH, as described in man bash:

The search path for the cd command. This is a colon-separated list of directories in which the shell looks for destination directories specified by the cd command. A sample value is ".:~:/usr".
Personally, I use the following command (either on the command line for use in just that session, or in .bash_profile for permanent use):
CDPATH=".:~:~/Library"

This way, no matter where I am in the directory tree, I can just cd dirname , and it will take me to the directory that is a subdirectory of any of the ones in the list. For example:
$ cd
$ cd Documents 
/Users/baumanj/Documents
$ cd Pictures
/Users/username/Pictures
$ cd Preferences
/Users/username/Library/Preferences
etc...

[ robg adds: No, this isn't some deeply buried treasure of OS X, but I'd never heard of the CDPATH variable, so I'm assuming it will be of interest to some other readers as well.]

cdable_vars is also nice
Authored by: clh on Mar 21, '05 08:16:26PM

Check out the bash command shopt -s cdable_vars

From the man bash page:

cdable_vars

If set, an argument to the cd builtin command that is not a directory is assumed to be the name of a variable whose value is the directory to change to.

With this set, if I give the following bash command:

export d="/Users/chap/Desktop"

I can then simply type

cd d

to change to my Desktop directory.

I put the shopt command and the various export commands in my .bashrc file.

[Dec 26, 2016] A Typo Led To Podestas Email Hack, Says Report

Dec 26, 2016 | yro.slashdot.org
(thehill.com)

Posted by BeauHD on Tuesday December 13, 2016 @06:30PM from the auto-correct dept. tomhath quotes a report from The Hill:

Last March, Podesta received an email purportedly from Google saying hackers had tried to infiltrate his Gmail account. When an aide emailed the campaign's IT staff to ask if the notice was real, Clinton campaign aide Charles Delavan replied that it was "a legitimate email" and that Podesta should "change his password immediately."

Instead of telling the aide that the email was a threat and that a good response would be to change his password directly through Google's website, he had inadvertently told the aide to click on the fraudulent email and give the attackers access to the account.

Delavan told The New York Times he had intended to type "illegitimate," a typo he still has not forgiven himself for making.

The email was a phishing scam that ultimately revealed Podesta's password to hackers.

Soon after, WikiLeaks began releasing 10 years of his emails.

[Dec 26, 2016] U2F Security Keys May Be the World's Best Hope Against Account Takeovers

Notable quotes:
"... After more than two years of public implementation and internal study, Google security architects have declared Security Keys their preferred form of two-factor authentication. ..."
Dec 26, 2016 | it.slashdot.org
(arstechnica.com)

Posted by BeauHD on Friday December 23, 2016 @09:05PM from the new-kid-on-the-block dept.

earlytime writes:

Large-scale account hacks such as the billion-user Yahoo breach and targeted phishing hacks of Gmail accounts during the U.S. election have made 2016 an infamous year for web security. Along come U2F security keys to address these issues at a critical time.

Ars Technica reports that U2F keys "may be the world's best hope against account takeovers":

"The Security Keys are based on Universal Second Factor , an open standard that's easy for end users to use and straightforward for engineers to stitch into hardware and websites. When plugged into a standard USB port, the keys provide a 'cryptographic assertion' that's just about impossible for attackers to guess or phish. Accounts can require that cryptographic key in addition to a normal user password when users log in. Google, Dropbox, GitHub, and other sites have already implemented the standard into their platforms.

After more than two years of public implementation and internal study, Google security architects have declared Security Keys their preferred form of two-factor authentication.

The architects based their assessment on the ease of using and deploying keys, the security it provided against phishing and other types of password attacks, and the lack of privacy trade-offs that accompany some other forms of two-factor authentication."

The researchers wrote in a recently published report:

"We have shipped support for Security Keys in the Chrome browser, have deployed it within Google's internal sign-in system, and have enabled Security Keys as an available second factor in Google's Web services.

In this work, we demonstrate that Security Keys lead to both an increased level of security and user satisfaction as well as cheaper support cost."

[May 31, 2016] RHEL 6.8 is out

Notable quotes:
"... For customers with ever-increasing volumes of data, the Scalable File System Add-on for Red Hat Enterprise Linux 6.8 now supports xfs filesystem sizes up to 300TB. ..."
"... enables customers to migrate their traditional workloads into container-based applications – suitable for deployment on Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux Atomic Host. ..."
redhat.com

Red Hat Enterprise Linux 6.8 adds improved system archiving, new visibility into storage performance and an updated open standard for secure virtual private networks

Red Hat, Inc. (NYSE: RHT), the world's leading provider of open source solutions, today announced the general availability of Red Hat Enterprise Linux 6.8, the latest version of the Red Hat Enterprise Linux 6 platform. Red Hat Enterprise Linux 6.8 delivers new capabilities and provides a stable and trusted platform for critical IT infrastructure. With nearly six years of field-proven success, Red Hat Enterprise Linux 6 has set the stage for the innovations of today, as Red Hat Enterprise Linux continues to power not only existing workloads, but also the technologies of the future, from cloud-native applications to Linux containers.

With enhancements to security features and management, Red Hat Enterprise Linux 6.8 remains a solid, proven base for modern enterprise IT operations.

Jim Totton, vice president and general manager, Platforms Business Unit, Red Hat

Red Hat Enterprise Linux 6.8 includes a number of new and updated features to help organizations bolster platform security and enhance systems management/monitoring capabilities, including:

Enhanced Security, Authentication, and Interoperability

To enhance security for virtual private networks (VPNs), Red Hat Enterprise Linux 6.8 includes libreswan, an implementation of one of the most widely supported and standardized VPN protocols, which replaces openswan as the Red Hat Enterprise Linux 6 VPN endpoint solution, giving Red Hat Enterprise Linux 6 customers access to recent advances in VPN security.

Customers running the latest version of Red Hat Enterprise Linux 6 can see increased client-side performance and simpler management through the addition of new capabilities to the Identity Management client code (SSSD). Cached authentication lookup on the client reduces the unnecessary exchange of user credentials with Active Directory servers. Support for adcli simplifies the management of Red Hat Enterprise Linux 6 systems interoperating with an Active Directory domain. In addition, SSSD now supports user authentication via smart cards, for both system login and related functions such as sudo.

Enhanced Management and Monitoring
The inclusion of Relax-and-Recover, a system archiving tool, provides a more streamlined system administration experience, enabling systems administrators to create local backups in an ISO format that can be centrally archived and replicated remotely for simplified disaster recovery operations. An enhanced yum tool simplifies the addition of packages, adding intelligence to the process of locating required packages to add/enable new platform features.
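
For reference, the Relax-and-Recover workflow mentioned above boils down to a couple of commands once /etc/rear/local.conf is configured (a minimal sketch, not part of the original announcement):

rear -v mkbackup     # create a bootable rescue image plus a backup archive
rear -v mkrescue     # or create just the rescue image, without the backup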

Red Hat Enterprise Linux 6.8 provides increased visibility into storage usage and performance through dmstats, a program that displays and manages I/O statistics for user-defined regions of devices using the device-mapper driver.

Additional Enhancements and Updates

For customers with ever-increasing volumes of data, the Scalable File System Add-on for Red Hat Enterprise Linux 6.8 now supports xfs filesystem sizes up to 300TB.

Additionally, the general availability of Red Hat Enterprise Linux 6.8 includes the launch of an updated Red Hat Enterprise Linux 6.8 base image which enables customers to migrate their traditional workloads into container-based applications – suitable for deployment on Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux Atomic Host.

Today's release also marks the transition of Red Hat Enterprise Linux 6 into Production Phase 2, a phase which prioritizes ongoing stability and security features for critical platform deployments. More information on the Red Hat Enterprise Linux lifecycle can be found at https://access.redhat.com/support/policy/updates/errata .

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.8_Release_Notes/new_features_compiler_and_tools.html

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.8_Release_Notes/new_features_file_systems.html
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.8_Release_Notes/new_features_networking.html
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.8_Release_Notes/new_features_servers_and_services.html
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.8_Release_Notes/new_features_storage.html
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.8_Release_Notes/new_features_system_and_subscription_management.html
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.8_Release_Notes/chap-Red_Hat_Enterprise_Linux-6.8_Release_Notes-Red_Hat_Software_Collections.html

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.8_Release_Notes/part-Red_Hat_Enterprise_Linux-6.8_Release_Notes-Known_Issues.html

[May 31, 2016] Red Hat Enterprise Linux 6.8 Deprecates Btrfs

Notable quotes:
"... Deprecated functionality continues to be supported until the end of life of Red Hat Enterprise Linux 6. Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments. ..."
www.phoronix.com
Buried within the notes for today's Red Hat Enterprise Linux 6.8 release are a few interesting notes.

First, RHEL has deprecated support for the Btrfs file-system.

Btrfs file system
Development of B-tree file system (Btrfs) has been discontinued, and Btrfs is considered deprecated. Btrfs was previously provided as a Technology Preview, available on AMD64 and Intel 64 architectures.

Huh? Since when was Btrfs development discontinued? At least in the upstream space it's still ongoing, and Facebook (as well as other companies) continues pouring resources into stabilizing and advancing the capabilities of Btrfs, which is widely sought as a Linux alternative to ZFS. There are no signs of things stalling on the Btrfs mailing list. Especially as Red Hat hasn't been packaging ZFS for RHEL officially (but you can grab packages via ZFSOnLinux.org) as an alternative, this move doesn't make a lot of sense. While Btrfs development has dragged on for a while and, outside of openSUSE/SUSE, it hasn't been deployed by default by other tier-one Linux distributions, it's a bit odd that Red Hat seems to be tossing in the towel on Btrfs.

Red Hat's definition of "deprecated" in their RHEL context means (as shown on the same page), "Deprecated functionality continues to be supported until the end of life of Red Hat Enterprise Linux 6. Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments."

[Apr 25, 2016] What's New in Red Hat Enterprise Linux 7.2

Video presentation.

[Dec 09, 2015] Three ways to easily encrypt your data on Linux

Ok, so you need to quickly encrypt the contents of your pen drive. The easiest solution is to compress them using the 7z archive file format, which is open source, cross-platform, and supports 256-bit encryption using the AES algorithm.
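For example (with -p given without a value, 7z prompts for the password; -mhe=on also encrypts the file names in the archive):

# Create an AES-256 encrypted archive of the pen drive contents
7z a -p -mhe=on backup.7z /media/pendrive/

# Extract it later; you will be prompted for the password
7z x backup.7z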

Encrypt with Seahorse

The third option that I will show basically utilizes the popular GnuPG tool to encrypt anything you want on your disk. What we need to install first are the following packages: gpg, seahorse, seahorse-nautilus, seahorse-daemon, and seahorse-contracts, which is needed if you're using ElementaryOS like I do. The encryption will be based on a key that we need to create first by opening a terminal and typing the following command:
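The command referred to here is presumably the standard GnuPG key generation invocation:

gpg --gen-key     # interactive: choose key type, key size, expiration and a passphrase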

[Aug 22, 2015] How Complex Systems Fail

"...This is really a profound observation – things rarely fail in an out-the-blue, unimaginable, catastrophic way. Very often just such as in the MIT article the fault or faults in the system are tolerated. But if they get incrementally worse, then the ad-hoc fixes become the risk (i.e. the real risk isn't the original fault condition, but the application of the fixes)."
.
"...It is that cumulative concentration of wealth and power over time which is ultimately destabilizing, producing accepted social norms and customs that lead to fragility in the face of both expected and unexpected shocks. This fragility comes from all sorts of specific consequences of that inequality, from secrecy to group think to brain drain to two-tiered justice to ignoring incompetence and negligence to protecting incumbents necessary to maintain such an unnatural order."
.
"...The problem arises with any societal order over time in that corrosive elements in the form of corruptive behavior (not principle based) by decision makers are institutionalized. I may not like Trump as a person but the fact that he seems to unravel and shake the present arrangement and serves as an indicator that the people begin to realize what game is being played, makes me like him in that specific function."
.
".... . .but it is also true that the incentives of the capitalist system ensure that there will be more and worse accidents than necessary, as the agents involved in maintaining the system pursue their own personal interests which often conflict with the interests of system stability and safety."
.
"...Globalization factors in maximizing the impact of Murphy's Law..."
.
"...Operators or engineers controlling or modifying the system are providing feedback. Feedback can push the system past "safe" limits. Once past safe limits, the system can fail catastrophically Such failure happen very quickly, and are always "a surprise"."
.
"...Where one can only say: "Forgive them Father, for they know not what they do""
.
"...The Iron Law of Institutions (agents act in ways that benefit themselves in the context of the institution [system], regardless of the effect those actions have on the larger system) would seem to mitigate against any attempts to correct our many, quickly failing complex social and technological systems."
Aug 21, 2015 | naked capitalism
August 21, 2015 by Yves Smith

Lambert found a short article by Richard Cook that I've embedded at the end of the post. I strongly urge you to read it in full. It discusses how complex systems are prone to catastrophic failure, how that possibility is held at bay through a combination of redundancies and ongoing vigilance, and how, due to the impractical cost of keeping all possible points of failure fully protected (and even of identifying them all), complex systems "always run in degraded mode". Think of the human body. No one is in perfect health. At a minimum, people are growing cancers all the time, virtually all of which recede for reasons not well understood.

The article contends that failures therefore are not the result of single causes. As Clive points out:

This is really a profound observation – things rarely fail in an out-the-blue, unimaginable, catastrophic way. Very often, just as in the MIT article, the fault or faults in the system are tolerated. But if they get incrementally worse, then the ad-hoc fixes become the risk (i.e. the real risk isn't the original fault condition, but the application of the fixes). https://en.wikipedia.org/wiki/Windscale_fire#Wigner_energy documents how a problem of core instability was a snag, but the disaster was caused by what was done to try to fix it. The plant operators kept applying the fix in ever more extreme doses until the bloody thing blew up.

But I wonder about the validity of one of the hidden assumptions of this article. There is a lack of agency in terms of who is responsible for the care and feeding of complex systems (the article eventually identifies "practitioners" but even then, that's comfortably vague). The assumption is that the parties who have influence and responsibility want to preserve the system, and have incentives to do at least an adequate job of that.

There are reasons to doubt that now. Economics has promoted ways of looking at commercial entities that encourage "practitioners" to compromise on safety measures. Mainstream economics has as a core belief that economies have a propensity to equilibrium, and that equilibrium is at full employment. That assumption has served as a wide-spread justification for encouraging businesses and governments to curtail or end pro-stability measures like regulation as unnecessary costs.

To put it more simply, the drift of both economic and business thinking has been to optimize activity for efficiency. But highly efficient systems are fragile. Formula One cars are optimized for speed and can only run one race.

Highly efficient systems also are more likely to suffer from what Richard Bookstaber called "tight coupling." A tightly coupled system is one in which events occur in a sequence that cannot be interrupted. A way to re-characterize a tightly coupled system is as a complex system that has been in part reoptimized for efficiency, maybe by accident, maybe at a local level. That strips out some of the redundancies that serve as safeties to prevent positive feedback loops from having things spin out of control.

To use Bookstaber's nomenclature, as opposed to this paper's, in a tightly coupled system, measures to reduce risk directly make things worse. You need to reduce the tight coupling first.

A second way that the economic thinking has arguably increased the propensity of complex systems of all sorts to fail is by encouraging people to see themselves as atomized agents operating in markets. And that's not just an ideology; it's reflected in low attachment to institutions of all sorts, ranging from local communities to employers (yes, employers may insist on all sorts of extreme shows of fealty, but they are ready to throw anyone in the dust bin at a moment's notice). The reality of weak institutional attachments and the societal inculcation of selfish viewpoints means that more and more people regard complex systems as vehicles for personal advancement. And if they see those relationships as short-term or unstable, they don't have much reason to invest in helping to preserving the soundness of that entity. Hence the attitude called "IBY/YBG" ("I'll Be Gone, You'll Be Gone") appears to be becoming more widespread.

I've left comments open because I'd very much enjoy getting reader reactions to this article. Thanks!

James Levy August 21, 2015 at 6:35 am

So many ideas….
Mike Davis argues that in the case of Los Angeles, the key to understanding the city's dysfunction is in the idea of sunk capital – every major investment leads to further investments (no matter how dumb or large) to protect the value of past investments.

Tainter argues that the energy cost (defined broadly) of maintaining the dysfunction eventually overwhelms the ability of the system to generate surpluses to meet the rising needs of maintenance.

Goldsworthy has argued powerfully and persuasively that the Roman Empire in the West was done in by a combination of shrinking revenue base and the subordination of all systemic needs to the needs of individual emperors to stay in power and therefore stay alive. Their answer was endlessly subdividing power and authority below them and using massive bribes to the bureaucrats and the military to try to keep them loyal.

In each case, some elite individual or grouping sees throwing good money after bad as necessary to keeping their power and their positions. Our current sclerotic system seems to fit this description nicely.

Jim August 21, 2015 at 8:15 am

I immediately thought of Tainter's "The Collapse of Complex Societies" when I started reading this. One point that Tainter made is that collapse is not all bad. He presents evidence that the average well-being of people in Italy was probably higher in the sixth century than in the fifth century as the Western Roman Empire died. Somewhat as death is necessary for biological evolution, collapse may be the only solution to the problem of excessive complexity.

xxx August 22, 2015 at 4:39 am

Tainter insists culture has nothing to do with collapse, and therefore refuses to consider it, but he then acknowledges that the elites in some societies were able to pull them out of a collapse trajectory. And from the inside, culture, as in a big decay in what is considered to be acceptable conduct by our leaders and what interests they should be serving (historically, at least the appearance of the greater good, now unabashedly their own ends), sure looks to be playing a big, and arguably the defining, role in the rapid rise of open corruption and related social and political dysfunction.

Praedor August 21, 2015 at 9:19 am

That also sounds like the EU and even Greece's extreme actions to stay in the EU.

jgordon August 21, 2015 at 7:44 am

Then I'll add my two cents: you've left out that when systems scale linearly, the amount of complexity, and points for failure, and therefore instability, that they contain scale exponentially–that is according to the analysis of James Rickards, and supported by the work of people like Joseph Tainter and Jared Diamond.

Every complex problem that arises in a complex system is fixed with an even more complex "solution" which requires ever more energy to maintain, and eventually the inevitably growing complexity of the system causes the complex system to collapse in on itself. This process requires no malignant agency by humans, only time.

nowhere August 21, 2015 at 12:10 pm

Sounds a lot like JMG and catabolic collapse.

jgordon August 21, 2015 at 2:04 pm

Well, he got his stuff from somewhere too.

Synoia August 21, 2015 at 1:26 pm

There are no linear systems. They are all non-linear because they include a random, non-linear element – people.

Jim August 21, 2015 at 2:26 pm

Long before there were people the Earth's eco-system was highly complex and highly unstable.

Ormond Otvos August 21, 2015 at 4:37 pm

The presumption that fixes increase complexity may be incorrect.

Fixes should include awareness of complexity.

That was the beauty of Freedom Club by Kaczinsky, T.

JTMcPhee August 21, 2015 at 4:44 pm

Maybe call the larger entity "meta-stable?" Astro and geo inputs seem to have been big perturbers. Lots of genera were around a very long time before naked apes set off on their romp. But then folks, even these hot, increasingly dry days, brag on their ability to anticipate, and profit from, and even cause, with enough leverage, de- stability. Good thing the macrocosms of our frail, violent, kindly, destructive bodies are blessed with the mechanisms of homeostasis.

Too bad our "higher" functions are not similarly gifted… But that's what we get to chat about, here and in similar meta-spaces…

MikeW August 21, 2015 at 7:52 am

Agree, positive density of ideas, thoughts and implications.

I wonder if the reason that humans don't appreciate the failure of complex systems is that (a) complex systems are constantly trying to correct or cure themselves (as in your cancer example) until they can't, at which point they collapse, and (b) things like cancer leading to death are not commonly viewed as complex system failures when in fact that is what they are. Thus, while we do experience complex system failure on one level on a daily basis, we don't interpret it as such, and given that we are hardwired for pattern recognition, we don't address complex systems in the right ways.

This, to my mind, has to be extended to the environment and the likely disaster we are currently trying to instigate. While the system is collapsing at one level, massive species extinctions, while we have experienced record temperatures, while the experts keep warning us, etc., most people to date have experienced climate change as an inconvenience - not the early stages of systemwide failure.

Civilization collapses have been regular, albeit spaced out, occurrences. We seem to think we are immune to them happening again. Yet, it isn't hard to list the near catastrophic system failures that have occurred or are currently occurring (famines, financial markets, genocides, etc.).

And, in most systems that relate to humans with an emphasis on short term gain how does one address system failures?

Brooklin Bridge August 21, 2015 at 9:21 am

Good-For-Me-Who-Effing-Cares-If-It's-Bad-For-You-And-Everyone-Else

would be a GREAT category heading though it's perhaps a little close to "Imperial Collapse"

Whine Country August 21, 2015 at 9:52 am

To paraphrase President Bill Clinton, who I would argue was one of the major inputs that caused the catastrophic failure of our banking system (through the repeal of Glass-Steagall), it all depends on what the definition of WE is.

jrs August 21, 2015 at 10:12 pm

And all that is just a 21st century version of "apres moi le deluge", which sounds very likely to be the case.

Oregoncharles August 21, 2015 at 3:55 pm

JT – just go to the Archdruid site. They link it regularly, I suppose for this purpose.

Jim August 21, 2015 at 8:42 am

Civilizational collapse is extremely common in history when one takes a long term view. I'm not sure though that I would describe it as having that much "regularity" and while internal factors are no doubt often important external factors like the Mongol Onslaught are also important. It's usually very hard to know exactly what happened since historical documentation tends to disappear in periods of collapse. In the case of Mycenae the archaeological evidence indicates a near total population decline of 99% in less than a hundred years together with an enormous cultural decline but we don't know what caused it.

As for long term considerations, the further one tries to project into the future the more uncertain such projections become, so that long term planning far into the future is not likely to be evolutionarily stable. Because much more information is available about present conditions than future conditions, organisms are probably selected much more to optimize for the short term rather than for the largely unpredictable long term.

Gio Bruno August 21, 2015 at 1:51 pm

…it's not in question. Evolution is about responding to the immediate environment. Producing survivable offspring (which requires finding a niche). If the environment changes (climate?) faster than the production of survivable offspring then extinction (for that species) ensues.

Now, Homo sapiens is supposedly "different" in some respects, but I don't think so.

Jim August 21, 2015 at 2:14 pm

I agree. There's nothing uniquely special about our species. Of course species can often respond to gradual change by migration. The really dangerous things are global catastrophes such as the asteroid impact at the end of the Cretaceous or whatever happened at the Permian-Triassic boundary (gamma ray burst maybe?).

Ormond Otvos August 21, 2015 at 4:46 pm

Interesting that you sit there and type on a world-spanning network batting around ideas from five thousand years ago, or yesterday, and then use your fingers to type that the human species isn't special.

Do you really think humans are unable to think about the future, like a bear hibernating, or perhaps the human mind, and its offspring, human culture and history, can't see ahead?

Why is "Learn the past, or repeat it!" such a popular saying, then?

diptherio August 21, 2015 at 9:24 am

The Iron Law of Institutions (agents act in ways that benefit themselves in the context of the institution [system], regardless of the effect those actions have on the larger system) would seem to mitigate against any attempts to correct our many, quickly failing complex social and technological systems.

jgordon August 21, 2015 at 10:40 am

This would tend to imply that attempts to organize large-scale social structures are temporary at best, and largely futile. I agree. The real key is to embrace and ride the wave as it crests and collapses so it's possible to manage the fall -- not to try to stand against it so you get knocked down and drowned. Focus your efforts on something useful instead of wasting them on a hopeless, and worthless, cause.

Jim August 21, 2015 at 2:21 pm

Civilization is obviously highly unstable. However it should be remembered that even Neolithic cultures are almost all less than 10,000 years old. So there has been little time for evolutionary adaptations to living in complex cultures (although there is evidence that the last 10,000 years have seen very rapid genetic changes in human populations). If civilization can continue indefinitely, which of course is not very clear, then it would be expected that evolutionary selection would produce humans much better adapted to living in complex cultures, so they might become more stable in the distant future. At present mean time to collapse is probably a few hundred years.

Ormond Otvos August 21, 2015 at 4:50 pm

But perhaps you're not contemplating that too much individual freedom can destabilize society. Is that a part of your vast psychohistorical equation?

washunate August 21, 2015 at 10:34 am

Well said, but something I find intriguing is that the author isn't talking so much about civilizational collapse. The focus is more on various subsystems of civilization (transportation, energy, healthcare, etc.).

These individual components are not inherently particularly dangerous (at a systemic/civilizational level). They have been made that way by purposeful public policy choices, from allowing enormous compensation packages in healthcare to dismantling our passenger rail system to subsidizing fossil fuel energy over wind and solar to creating tax incentives that distort community development. These things are not done for efficiency. They are done to promote inequality, to allow connected insiders and technocratic gatekeepers to expropriate the productive wealth of society. Complexity isn't a byproduct; it is the mechanism of the looting. If MDs in hospital management made similar wages as home health aides, then how would they get rich off the labor of others? And if they couldn't get rich, what would be the point of managing the hospital in the first place? They're not actually trying to provide quality, affordable healthcare to all Americans.

It is that cumulative concentration of wealth and power over time which is ultimately destabilizing, producing accepted social norms and customs that lead to fragility in the face of both expected and unexpected shocks. This fragility comes from all sorts of specific consequences of that inequality, from secrecy to group think to brain drain to two-tiered justice to ignoring incompetence and negligence to protecting incumbents necessary to maintain such an unnatural order.

Linus Huber August 21, 2015 at 7:05 pm

I tend to agree with your point of view.

The problem that arises with any societal order over time is that corrosive elements, in the form of corrupt (not principle-based) behavior by decision makers, become institutionalized. I may not like Trump as a person, but the fact that he seems to unravel and shake the present arrangement, and serves as an indicator that people are beginning to realize what game is being played, makes me like him in that specific function. There may be some truth in Thomas Jefferson's quote: "The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants. It is its natural manure." Those presently benefiting greatly from the present arrangement are fighting with all means to retain their position; whether they succeed or not, we will see.

animalogic August 22, 2015 at 2:18 am

Well said, washunate. I think an argument could be made that outside economic areas there has been a drive to de-complexify.
Non-economic institutions, bodies which exist for non-market/non-profit reasons, have been either hollowed out or co-opted to market purposes. Charities become vast engines of self-enrichment for a chain of insiders. Community groups are defunded, or shriveled to an appendix by "market forces". The list goes on…and on.
Reducing the "not-market" to the status of sliced white bread makes us all the more dependent on the machinated complexities of "the market"….god help us….

Jay Jay August 21, 2015 at 8:00 am

Joseph Tainter's thesis, set out in "The Collapse of Complex Societies", is simple: as a civilization ages, its use of energy becomes less efficient and more costly, until the Law of Diminishing Returns kicks in, generates its own momentum, and the system grinds to a halt. Perhaps this article describes a late stage of that process. However, it is worth noting that, for the societies Tainter studied, the process was ineluctable. Not so for our society: we have the ability -- and the opportunity -- to switch energy sources.

Moneta August 21, 2015 at 5:48 pm

In my grandmother's youth, they did not burn wood for nothing. Splitting wood was hard work that required calories.

Today, we heat up our patios at night with gas heaters… The amount of economic activity based on burning energy not related to survival is astounding.

A huge percentage of our GDP is based on economies of scale and economic efficiencies that are completely disconnected from environmental efficiencies.

This total loss of control between nature and our lifestyles will be our Waterloo.

TG August 21, 2015 at 8:20 am

An interesting article as usual, but here is another take.

Indeed, sometimes complex systems can collapse under the weight of their own complexity (Think: credit default swaps). But sometimes there is a single simple thing that is crushing the system, and the complexity is a desperate attempt to patch things up that is eventually destroyed by brute force.

Consider a forced population explosion: the population is multiplied exponentially. This reduces per capita physical resources, tends to reduce per capita capital, and limits the amount of time available to adapt: a rapidly growing population puts an economy on a treadmill that gets faster and faster and steeper and steeper until it takes superhuman effort just to maintain the status quo. There is a reason why, for societies without an open frontier, essentially no nation has ever become prosperous without first moderating its fertility rate.

However, you can adapt. New technologies can be developed. New regulations can be written to coordinate an ever more complex system. Instead of just pumping water from a reservoir, you need networks of desalinization plants – with their own vast networks of power plants and maintenance supply chains – and recycling plants, and monitors and laws governing water use, and more efficient appliances, etc., etc.

As an extreme, consider how much effort and complexity it takes to keep a single person alive in the space station.

That's why in California cars need to be emissions tested, but in Alabama they don't – and the air is cleaner in Alabama. More people means more controls and more exotic technology and more rules.

Eventually the whole thing starts to fall apart. But to blame complexity itself is possibly missing the point.

Steve H. August 21, 2015 at 8:30 am

No system is ever 'the'.

Jim Haygood August 21, 2015 at 11:28 am

Two words, Steve: Soviet Union.

It's gone now. But we're rebuilding it, bigger and better.

Ormond Otvos August 21, 2015 at 4:54 pm

If, of course, bigger is better.

Facts not in evidence.

Ulysses August 21, 2015 at 8:40 am

"But because system operations are never trouble free, human practitioner adaptations to changing conditions actually create safety from moment to moment. These adaptations often amount to just the selection of a well-rehearsed routine from a store of available responses; sometimes, however, the adaptations are novel combinations or de novo creations of new approaches."

This may just be a rationalization, on my part, for having devoted so much time to historical studies– but it seems to me that historians help civilizations prevent collapse, by preserving for them the largest possible "store of available responses."

aronj August 21, 2015 at 8:41 am

Yves,

Thanks for posting this very interesting piece! As you know, I am a fan of Bookstaber's concept of tight coupling. Interestingly, Bookstaber (2007) does not reference Cook's significant work on complex systems.

Before reading this article, I considered that the most preventable accidents involve a sequence of events uninterrupted by human intelligence. This view needs to be modified by Cook's points 8, 9, 10 and 12.

Using the aircraft landing in the Hudson River as an example of interrupting a sequence of events: the inevitable accident occurred, but no lives were lost. Thus the human intervention was made possible by the unknowable probability of coupling the cause with a possible alternative landing site. A number of aircraft accidents involve failed attempts to find a possible landing site, even though Cook's point #12 was in play.

Thanks for the post!!!!!

Brooklin Bridge August 21, 2015 at 8:47 am

A possible issue with, or a misunderstanding of, #7. Catastrophic failure can be made up of small failures that tend to follow a critical path or multiple critical paths. While a single point of origin for catastrophic failure may rarely if ever occur in a complex system, it is possible and likely in such a system to have collections of small failures that occur, or tend to occur, in specific sequences. Population explosion (as TG points out) would be a good example of a failure in a complex social system that is part of a critical path to catastrophic failure.

Such sequences, characterized by orders of precedence, are more likely in tightly coupled systems (which, as Yves points out, can be any system pushed to the max). The point is, they can be identified and isolated, at least where a complex system is not being misused, pushed to its limits, or shaped by human corruption so that such likely sequences are viewed as, or baked into the system as (for instance via propaganda->ideology), features and not bugs.

Spring Texan August 21, 2015 at 8:53 am

I agree completely that maximum efficiency comes with horrible costs. When hospitals are staffed so that people are normally busy every minute, patients routinely suffer more as often no one has time to treat them like a human being, and when things deviate from the routine, people have injuries and deaths. Same is true in other contexts.

washunate August 21, 2015 at 10:40 am

Agreed, but that's not caused by efficiency. That's caused by inequality. Healthcare has huge disparities in wages and working conditions. The point of keeping things tightly staffed is to allow big bucks for the top doctors and administrators.

susan the other August 21, 2015 at 2:55 pm

Yes. When one efficiency conflicts with and destroys another efficiency. E.g.: your mother juggled a job and a family and ran around in turbo mode, but she dropped everything when her kids were in trouble. That is an example of an efficiency that can juggle contradictions and still not fail.

JTMcPhee August 21, 2015 at 11:38 am

Might this nurse observe that in hospitals, there isn't and can't be a "routine" to deviate from, no matter how fondly "managers" wish to try to make it and how happy they may be to take advantage of the decent, empathic impulses of many nurses and/or the need to work to eat of those that are just doing a job. Hence the kindly (sic) practice of "calling nurses off" or sending them home if "the census is down," which always runs aground against a sudden influx of billable bodies or medical crises that the residual staff is expected to just somehow cope with caring for or at least processing, until the idiot frictions in the staffing machinery add a few more person-hours of labor to the mix. The larger the institution, the greater the magnitude and impact (pain, and dead or sicker patients and staff too) of the "excursions from the norm."

It's all about the ruling decisions on what are deemed (as valued by where the money goes) appropriate outcomes of the micro-political economy… In the absence of an organizing principle that values decency and stability and sustainability rather than upward wealth transfer.

Will August 21, 2015 at 8:54 am

I'll join the choir recommending Tainter as a critical source for anybody interested in this stuff.

IBG/YBG is a new concept for me, with at least one famous antecedent: "Après moi, le déluge" ("After me, the flood").

diptherio August 21, 2015 at 9:17 am

The author presents the best-case scenario for complex systems: one in which the practitioners involved are actually concerned with maintaining system integrity. However, as Yves points out, that is far from being case in many of our most complex systems.

For instance, the Silvertip pipeline spill near Billings, MT a few years ago may indeed have been a case of multiple causes leading to unforeseen/unforeseeable failure of an oil pipeline as it crossed the Yellowstone river. However, the failure was made immeasurably worse due to the fact that Exxon had failed to supply that pump-station with a safety manual, so when the alarms started going off the guy in the station had to call around to a bunch of people to figure out what was going on. So while it's possible that the failure would have occurred no matter what, the failure of the management to implement even the most basic of safety procedures made the failure much worse than it otherwise would have been.

And this is a point that the oil company apologists are all too keen to obscure. The argument gets trotted out with some regularity that because these oil/gas transmission systems are so complex, some accidents and mishaps are bound to occur. This is true–but it is also true that the incentives of the capitalist system ensure that there will be more and worse accidents than necessary, as the agents involved in maintaining the system pursue their own personal interests which often conflict with the interests of system stability and safety.

Complex systems have their own built-in instabilities, as the author points out; but we've added a system of un-accountability and irresponsibility on top of our complex systems which ensures that failures will occur more often and with greater fall-out than the best-case scenario imagined by the author.

Brooklin Bridge August 21, 2015 at 9:42 am

As Yves pointed out, there is a lack of agency in the article. A corrupt society will tend to generate corrupt systems, just as it tends to generate corrupt technology and corrupt ideology. For instance, we get lots of little cars driving themselves about, profitably for the ideology of consumption, but also with an invisible thumb of control, rather than a useful system of public transportation. We get "abstinence only" population explosion because "groaf", rather than any rational assessment of obvious future catastrophe.

washunate August 21, 2015 at 10:06 am

Right on. The primary issue of our time is a failure of management. Complexity is an excuse more often than an explanatory variable.

abynormal August 21, 2015 at 3:28 pm

Am I the only one hearing 9″ Nails, "March of the Pigs"?

Aug. 21, 2015 1:54 a.m. ET

A Carlyle Group LP hedge fund that anticipated a sudden currency-policy shift in China gained roughly $100 million in two days last week, a sign of how some bearish bets on the world's second-largest economy are starting to pay off.
http://www.wsj.com/articles/hedge-fund-gains-100-million-in-two-days-on-bearish-china-bet-1440136499?mod=e2tw

oink oink is the sound of system fail

Oregoncharles August 21, 2015 at 3:40 pm

A very important principle:

All systems have a failure rate, including people. We don't get to live in a world where we don't need to lock our doors and banks don't need vaults. (If you find it, be sure to radio back.)

The article is about how we deal with that failure rate. Pointing out that there are failures misses the point.

cnchal August 21, 2015 at 5:05 pm

. . .but it is also true that the incentives of the capitalist system ensure that there will be more and worse accidents than necessary, as the agents involved in maintaining the system pursue their own personal interests which often conflict with the interests of system stability and safety.

How true. A Chinese city exploded. Talk about a black swan. I wonder what the next disaster will be?

hemeantwell August 21, 2015 at 9:32 am

After a skim of the post and of James' lead-off comment re emperors (Brooklin Bridge's comment re misuse is somewhat resonant), it seems to me that a distinguishing feature of systems is not being addressed and is therefore being treated as though it's irrelevant.

What about the mandate for a system to have an overarching, empowered regulatory agent, one that could presumably learn from the reflections contained in this post? In much of what is posted here at NC, writers give due emphasis to the absence/failure of a range of regulatory functions relevant to this stage of capitalism. These run from SEC corruption to the uncontrolled movement of massive amounts of questionably valuable value in off-the-books transactions between banks, hedge funds, etc. This system has a deliberately weakened control/monitoring function, ideologically rationalized as freedom but practically justified as maximizing accumulation possibilities for the powerful. It is self-lobotomizing, a condition exacerbated by national economic territories (to some degree). I'm not going to jump up now with 3 cheers for socialism as capable of resolving problems posed by capitalism. But, to stay closer to the level of abstraction of the article, doesn't the distinction between distributed opacity + unregulated concentrations of power vs. transparency + some kind of central governing authority matter? Maybe my Enlightenment hubris is riding high after the morning coffee, but this is a kind of self-awareness that assumes its range is limited, even as it posits that limit. Hegel was all over this, which isn't to say he resolved the conundrum, but it's not even identified here.

Ormond Otvos August 21, 2015 at 5:06 pm

Think of Trump as the pimple finally coming to a head: he's making the greed so obvious, and pissing off so many people that some useful regulation might occur.

Another thought about world social collapse: if such a thing is likely (and I'm sure the PTB know whether it is, judging from the Pentagon's reports about global warming being a national security concern), wouldn't it be a good idea to have a huge ability to overpower the rest of the world?

We might be the only nation that survives as a nation, and we might actually have an Empire of the World, previously unattainable. Maybe SkyNet is really USANet. It wouldn't require any real change in the national majority of creepy grabby people.

Jim August 21, 2015 at 9:43 am

Government bureaucrats and politicians pursue their own interests just as businessmen do. Pollution was much worse in the non-capitalist Soviet Union, East Germany and Eastern Europe than it was in the capitalist West. Chernobyl happened under socialism, not capitalism. The present system in China, although not exactly "socialism", certainly involves a massively powerful government, but a glance at the current news shows that massive governmental power does not necessarily prevent accidents. The agency problem is not unique to, or worse in, capitalism than in other systems.

Holly August 21, 2015 at 9:51 am

I'd throw in the theory of cognitive dissonance as an integral part of the failure of complex systems. (Example: Tavris and Aronson's recent book, Mistakes Were Made (But Not by Me).)

We are more apt to justify bad decisions, with bizarre stories, than to accept our own errors (or mistakes of people important to us). It explains (but doesn't make it easier to accept) the complete disconnect between accepted facts and fanciful justifications people use to support their ideas/organization/behavior.

craazymann August 21, 2015 at 10:03 am

I think this one suffers from "Metaphysical Foo Foo Syndrome" (MFFS). That means the use of words to reference realities that are inherently ill-defined and often unobservable, leading to untestable theories and deeply personal approaches to epistemological reasoning.

Just what is a "complex system"? A system implies a boundary – there are things that are part of the system and things outside it. That's a hard boundary to identify – just where the system ends and something else begins. So when "the system" breaks down, it's hard to tell with any degree of testable objectivity whether the breakdown resulted from "the system" or from something outside it, and the rest was just "an accident that could have happened to anybody".

Maybe the idea is: "if something breaks down at the worst possible time and in a way that fkks everything up, then it must have been a complex system". But it could also have been a simple system that ran into bad luck. Consider your toilet. Maybe you put too much toilet paper in it, and it clogged. Then it overflowed and ran out into your hallway with your shit everywhere. Then you realized you had an expensive Chinese rug on the floor. Oh no! That was bad. You were gonna put that rug away as soon as you had a chance to admire it unrolled. Why did you do that? Big fckk up. But it wasn't a complex system. It was just one of those things.

susan the other August 21, 2015 at 12:14 pm

thanks for that, I think…

Gio Bruno August 21, 2015 at 2:27 pm

Actually, it was a system too complex for this individual. S(He) became convinced the plumbing would work as it had previously. But doo to poor maintenance, too much paper, or a stiff BM the "system" didn't work properly. There must have been opportunity to notice something anomalous, but appropriate oversight wasn't applied.

Oregoncharles August 21, 2015 at 3:29 pm

You mean the BM was too tightly coupled?

craazyman August 21, 2015 at 4:22 pm

It could happen to anybody after enough pizza and red wine

people weren't meant to be efficient. paper towels and duct tape can sometimes help

This occurred to me: The entire 1960s music revolution wouldn't have happened if anybody had had to be efficient about hanging out and jamming. You really have to lay around and do nothing if you want to achieve great things. You need many opportunities to fail and learn before the genius flies. That's why tightly coupled systems are self-defeating. Because they wipe too many people out before they've had a chance to figure out the universe.

JustAnObserver August 21, 2015 at 3:01 pm

Excellent example of tight coupling: Toilet -> Floor -> Hallway -> $$$ Rug

Fix: Apply Break coupling procedure #1: Shut toilet door.
Then: Procedure #2 Jam inexpensive old towels in gap at the bottom.

As with all such measures this buys the most important thing of all – time. In this case to get the $$$Rug out of the way.

IIRC one of Bookstaber's points was that, in the extreme, tight coupling allows problems to propagate through the system so fast and so widely that we have no chance to mitigate them before they escalate to disaster.

washunate August 21, 2015 at 10:03 am

To put it more simply, the drift of both economic and business thinking has been to optimize activity for efficiency.

I think that's an interesting framework. I would say efficiency is achieving the goal in the most effective manner possible. Perhaps that's measured in energy, perhaps labor, perhaps currency units; but whatever the unit of measure, you are minimizing that input cost.

What our economics and business thinking (and most importantly, political thinking) has primarily been doing, I would say, is not optimizing for efficiency. Rather, it is changing the goal being optimized. The will to power has replaced efficiency as the actual outcome.

Unchecked theft, looting, predation, is not efficient. Complexity and its associated secrecy is used to hide the inefficiency, to justify and promote that which would not otherwise stand scrutiny in the light of day.

BigEd August 21, 2015 at 10:11 am

What nonsense. All around us 'complex systems' (airliners, pipelines, coal mines, space stations, etc.) have become steadily LESS prone to failure/disaster over the decades. We are near the stage where the only remaining danger in air travel is human error. We will soon see driverless cars & trucks, and you can be sure accident rates will decline as the human element is taken out of their operation.

tegnost August 21, 2015 at 12:23 pm

See Fukushima, lithium batteries spontaneously catching fire, financial engineering leading to collapse unless vast energy is invested to restabilize it… Driverless cars and trucks are not that soon; tech buddies say ten years, I say malarkey, based on several points made in the article. Meanwhile, as Brooklin Bridge points out, public transit languishes, and as washunate points out, trains and other more efficient means of locomotion are starved while more complex methods have more energy thrown at them that could be better applied elsewhere. I think you're missing the point by saying "look at all our complex systems, they work fine", then rambling off a list of things with high failure potential and saying "look, they haven't broken yet", while things that have broken and don't support your view are left out. By this mechanism safety protocols are eroded (that accident you keep avoiding hasn't happened, which means you're being too cautious, so your efficiency can be enhanced by not worrying about it until it happens, then you can fix it; but as pointed out above, tightly coupled systems can't react fast enough, at which point we all have to hear the whocoodanode justification…)

susan the other August 21, 2015 at 12:34 pm

And the new points of failure will be what?

susan the other August 21, 2015 at 3:00 pm

So here's a question: what is the failure hierarchy? And why don't those crucial failsafe nodes protect the system? Could it be that we don't know what they are?

Moneta August 22, 2015 at 8:09 am

While 90% of people were producing food a few decades ago, I think a large percentage will be producing energy in a few decades… right now we are still propping up our golf courses and avoiding investing in pipelines and refineries. We are still exploiting the assets of the 50s and 60s to live our hyper material lives. Those investments are what gave us a few decades of consumerism.

Now everyone wants government to spend on infra without even knowing what needs to go and what needs to stay. Maybe half of Californians need to get out of there and forget about building more infra there… just a thought.

America still has a frontier ethos… how in the world can the right investments in infra be made with a collection of such values?

We're going to get city after city imploding. More workers producing energy and less leisure over the next few decades. That's what breakdown is going to look like.

Moneta August 22, 2015 at 8:22 am

Flying might get safer and safer while we get more and more cities imploding.

Just like statues on Easter Island were getting increasingly elaborate as trees were disappearing.

ian August 21, 2015 at 4:02 pm

What you say is true, but only if you have a sufficient number of failures to learn from. A lot of planes had to crash for air travel to be as safe as it is today.

wm.annis August 21, 2015 at 10:19 am

I am surprised to see no reference to John Gall's General Systemantics in this discussion, an entire study of systems and how they misbehave. I tend to read it from the standpoint of managing a complex IT infrastructure, but his work starts from human systems (organizations).

The work is organized around aphorisms – "Systems tend to oppose their own proper function", "The real world is what is reported to the system" – but one or two from this paper should be added to that repertoire. Point 7 seems especially important. From Gall, I have come to especially appreciate the Fail-Safe Theorem: "when a Fail-Safe system fails, it fails by failing to fail safe."

flora August 21, 2015 at 10:32 am

Instead of writing something long and rambling about complex systems being aggregates of smaller, discrete systems, each depending on a functioning and accurate information processing/feedback (not IT) system to maintain its coherence; and upon equally well functioning feedback systems between the parts and the whole - instead of that I'll quote a poem.

" Turning and turning in the widening gyre
The falcon cannot hear the falconer;
Things fall apart; the centre cannot hold; "

-Yates, "The Second Coming"

flora August 21, 2015 at 10:46 am

erm… make that "Yeats", as in W.B.

Steve H. August 21, 2015 at 11:03 am

So, naturalists observe, a flea
Has smaller fleas that on him prey;
And these have smaller still to bite 'em,
And so proceed ad infinitum.

– Swift

LifelongLib August 21, 2015 at 7:38 pm

IIRC in Robert A. Heinlein's "The Puppet Masters" there's a different version:

Big fleas have little fleas
Upon their backs to bite 'em,
And little fleas have lesser fleas
And so, ad infinitum.

Since the story is about humans being parasitized and controlled by alien "slugs" that sit on their backs, and the slugs in turn being destroyed by an epidemic disease started by the surviving humans, the verse has a macabre appropriateness.

LifelongLib August 21, 2015 at 10:14 pm

Original reply got eaten, so I hope not double post. Robert A. Heinlein's (and others?) version:

Big fleas have little fleas
Upon their backs to bite 'em
And little fleas have lesser fleas
And so ad infinitum!

Lambert Strether August 21, 2015 at 10:26 pm

The order Siphonaptera….

Oregoncharles August 21, 2015 at 10:59 pm

"And what rough beast, its hour come round at last,
slouches toward Bethlehem to be born?"

I can't leave that poem without its ending – especially as it becomes ever more relevant.

Oldeguy August 21, 2015 at 11:02 am

Terrific post – just the sort of thing that has made me a NC fan for years.
I'm a bit surprised that the commenters (thus far) have not referred to the Financial Crisis of 2008 and the ensuing Great Recession as an excellent example of Cook's failure analysis.

Bethany McLean and Joe Nocera's All The Devils Are Here (www.amazon.com/All-Devils-Are-Here-Financial/dp/159184438X/) describes beautifully how the protective mechanisms in the U.S. financial system eroded – the loss of no single one of which would by itself have been deadly (Cook's Point 3) – and combined to produce the Perfect Storm.

It brought to mind Garrett Hardin's The Tragedy Of The Commons (https://en.wikipedia.org/wiki/Tragedy_of_the_commons). While the explosive growth of debt (and therefore risk) obviously jeopardized the entire system, it was very much within the narrow self-interest of individual players to keep the growth (and therefore the danger) increasing.

Ormond Otvos August 21, 2015 at 5:14 pm

Bingo. Failure of the culture to properly train its members. Not so much a lack of morality as a failure to point out that when the temple falls, it falls on Samson.

The next big fix is to use the US military to wall off our entire country, maybe include Canada (language is important in alliances) during the Interregnum.

Why is no one mentioning the Foundation Trilogy and Hari Seldon here?

Deloss August 21, 2015 at 11:29 am

My only personal experience with the crash of a complex, tightly-coupled system was the crash of the trading floor of a very big stock exchange in the early part of this century. The developers were in the computer room, telling the operators NOT to roll back to the previous release, and the operators ignored them and did so anyway. Crash!

In Claus Jensen's fascinating account of the Challenger disaster, NO DOWNLINK, he describes how the managers overrode the engineers' warnings not to fly under existing weather conditions. We all know the result.

Human error was the final cause in both cases.

Now we are undergoing the terrible phenomenon of global warming, which everybody but Republicans, candidates and elected, seems to understand is real and catastrophic. The Republicans have a majority in Congress, and refuse–for ideological and monetary reasons–to admit that the problem exists. I think this is another unfolding disaster that we can ascribe to human error.

Ormond Otvos August 21, 2015 at 5:17 pm

"Human error" needs unpacking here. In this discussion, it's become a Deus ex Humanitas. Humans do what they do because their cultural experiences impel them to do so. Human plus culture is not the same as human. That's why capitalism doesn't work in a selfish society.

Oldeguy August 21, 2015 at 5:52 pm

"Capitalism doesn't work in a selfish society"
Very true, not nearly so widely realized as it should be, and the Irony of Ironies.

BayesianGame August 21, 2015 at 11:48 am

But highly efficient systems are fragile. Formula One cars are optimized for speed and can only run one race.

Another problem with obsessing about (productive or technical) efficiency is that it usually means a narrow focus on the most measured or measurable inputs and outputs, to the detriment of less measurable but no less important aspects. Wages are easier to measure than the costs of turnover, including changes in morale, loss of knowledge and skill, and regard for the organization vs. regard for the individual. You want low cost fish? Well, it might be caught by slaves. Squeeze the measurable margins, and the hidden margins will move.

Donw August 21, 2015 at 3:18 pm

You hint at a couple of fallacies.

1) Measuring what is easy instead of what is important.
2) Assuming that measuring many things and then optimizing all of them optimizes the whole.

Then, have some linear thinker try to optimize those in a complex system (like any organization involving humans) with multiple hidden and delayed feedback loops, and the result will certainly be unexpected. Whether for good or ill is going to be fairly unpredictable unless someone has actually looked for the feedback loops.

IsabelPS August 21, 2015 at 1:02 pm

Very good.

It's nice to see a couple of intuitions I've had for a long time spelled out so well. For example, that we are going in the wrong direction when we try to streamline instead of following the path of biology: redundancies, "dirtiness" and, of course, the king of mechanisms, negative feedback (am I wrong in thinking that the main failure of finance, as opposed to the economy, is that it has inbuilt positive feedback instead of negative?). And yes, my professional experience has taught me that when things go really wrong it is never just one mistake; it is a cluster of them.

downunderer August 22, 2015 at 3:52 am

Yes, as you hint here, and I would make forcefully explicit: COMPLEX vs NOT-COMPLEX is a false dichotomy that is misleading from the start.

We ourselves, and all the organisms we must interact with in order to stay alive, are individually among the most complex systems that we know of. And the interactions of all of us that add up to Gaia are yet more complex. And still it moves.

Natural selection built the necessary stability features into our bodily complexity. We even have a word for it: homeostasis. Based on negative feedback loops that can keep the balancing act going. And our bodies are vastly more complex than our societies.

Society's problem right now is not complexity per se, but the exploitation of complexity by system components that want to hog the resources and to hell with the whole, quite exactly parallel to the behavior of cancer cells in our bodies when regulatory systems fail.

In our society's case, it is the intelligent teamwork of the stupidly selfish that has destroyed the regulatory systems. Instead of negative feedback keeping deviations from optimum within tolerable limits, we now have positive feedback so obvious it is trite: the rich get richer.

We not only don't need to de-complexify, we don't dare to. We really need to foster the intelligent teamwork that our society is capable of, or we will fail to survive challenges like climate change and the need to sensibly control the population. The alternative is to let natural selection do the job for us, using the old reliable four horsemen.

We are unlikely to change our own evolved selfishness, and probably shouldn't. But we need to control the monsters that we have created within our society. These monsters have all the selfishness of a human at his worst, plus several natural large advantages, including size, longevity, and the ability to metamorphose and regenerate. And as powerful as they already were, they have recently been granted all the legal rights of human citizens, without appropriate negative feedback controls. Everyone here will already know what I'm talking about, so I'll stop.

Peter Pan August 21, 2015 at 1:18 pm

Formula One cars are optimized for speed and can only run one race.

Actually, I believe F1 has rules regarding the number of changes that can be made to a car during the season – typically four or five (replacements or rebuilds) – so an F1 car has to be able to run more than one race or otherwise face penalties.

jo6pac August 21, 2015 at 1:41 pm

Yes, F-1 allows four power plants per season (recently updated to five). There isn't anything in the air or on the ground as complex as an F-1 car power plant. The cars feed 30 or more engineers at the track, and back home (normally in England), millions of bits of info per second; and no, Microsoft is not used, but very complex programs watching every system in the car. A pit stop in F-1 is 2.7 seconds; anything above 3.5 and you're not trying hard enough.

Honda, who pride themselves on engineering, have struggled in power plant design this year, and admit it, but have put more engineers on the case. At the beginning of this tech engine design era the big teams hired over 100 more engineers to solve the problems. Ferrari threw out the first design and did a total rebuild, and it is working.

This is how the world of F-1 has moved into other designs; long but a fun read.
http://www.wired.com/2015/08/mclaren-applied-technologies-f1/

I'm sure those in F-1 system design would look at stories like this and conclude that these nice people are the gatekeepers and not the future. Yes, I'm a long-time fan of F-1. Then again, what do I know.

The sad thing in F-1 is that the gatekeepers are the owners, CVC.

Brooklin Bridge August 21, 2015 at 3:25 pm

Interesting comment! One has to wonder why every complex system can't be treated as the be-all. Damn the torpedoes. Spare no expense! Maybe if we just admitted we are all doing absolutely nothing but going around in a big circle at an ever-increasing speed, we could get a near-perfect complex system to help us along.

Ormond Otvos August 21, 2015 at 5:21 pm

If the human race were as important as auto racing, maybe. But we know that's not true ;->

jo6pac August 21, 2015 at 5:51 pm

In the link it's the humans of McLaren that make all the decisions on the car and the race at hand. The link is about humans working together, either in real race time or designing out problems created by others.

Marsha August 21, 2015 at 1:19 pm

Globalization factors that maximize the impact of Murphy's Law:

  1. Meltdown potential of a globalized 'too big to fail' financial system associated with trade imbalances and international capital flows, and boom and bust impact of volatile "hot money".
  2. Environmental damage associated with the inefficiency of excessively long supply chains seeking cheap commodities and dirty, polluting manufacturing zones.
  3. Military vulnerability of the same long, tightly coupled "just in time" supply chains across vast oceans, war zones, and choke points that are very easy to attack and nearly impossible to defend.
  4. Consumer product safety threat of manufacturing somewhere offshore out of sight out of mind outside the jurisdiction of the domestic regulatory system.
  5. Geographic concentration and contagion of risk of all kinds – fragile pattern of horizontal integration – manufacturing in China, finance in New York and London, industrialized monoculture agriculture lacking biodiversity (Iowa feeds the world). If all the bulbs on the Christmas tree are wired in series, it takes only one to fail and they all go out (see the worked example below).

Globalization is not a weather event, not a thermodynamic process of atoms and molecules, not a principle of Newtonian physics, not water running downhill, but a hyper aggressive top down policy agenda by power hungry politicians and reckless bean counter economists. An agenda hell bent on creating a tightly coupled globally integrated unstable house of cards with a proven capacity for catastrophic (trade) imbalance, global financial meltdown, contagion of bad debt, susceptibility to physical threats of all kinds.
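A minimal worked example of the series-wiring point in item 5 above (the per-bulb failure probability and string length are illustrative numbers, not anything from the comment itself):

% A string of n bulbs wired in series works only if every bulb works.
% With independent per-bulb failure probability p:
P(\text{string works}) = (1 - p)^{n}
% e.g. p = 0.01 and n = 100 gives 0.99^{100} \approx 0.37,
% so a 1% component failure rate already kills the tightly
% coupled string almost two times out of three.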

Synoia August 21, 2015 at 1:23 pm

Any complex system contains non-linear feedback. Management presumes it is their skill that keeps the system working over some limited range, where the behavior approximates linear. Outside those limits, the system can fail catastrophically. What is perceived as operating or management skill is either the result of the system being kept within "safe" limits, or just happenstance. See chaos theory.

Operators or engineers controlling or modifying the system are providing feedback. Feedback can push the system past "safe" limits. Once past safe limits, the system can fail catastrophically. Such failures happen very quickly, and are always "a surprise".

Synoia August 21, 2015 at 1:43 pm

All complex systems contain non-linear feedback, and all appear manageable over a small range of operation, under specific conditions.

These are the system's safe working limits. Sometimes the limits are known, but in many cases they are unknown (see stock markets).

All systems with non-linear feedback can and will fail, catastrophically.

All predicted by chaos theory, the mathematical field best applicable to the real world of systems.

So I'll repeat: all complex systems will fail when operating outside safe limits; changes in the system, management-induced and stimulus-induced, can and will redefine those limits, with spectacular results.

We hope and pray the system will remain within safe limits, but greed and complacency lead us humans to test those limits (loosen the controls) or enable greater levels of feedback (increase volumes of transactions). See the Crash of 2007, following the repeal of Glass-Steagall, etc.

Brooklin Bridge August 21, 2015 at 4:05 pm

It's Ronnie Ray Gun. He redefined it as, "Safe for me but not for thee." Who says you can't isolate the root?

Synoia August 21, 2015 at 5:25 pm

Ronnie Ray Gun was the classic example of a Manager.

Where one can only say: "Forgive them Father, for they know not what they do"

Oregoncharles August 21, 2015 at 2:54 pm

Three quite different thoughts:

First, I don't think the use of "practitioner" is an evasion of agency. Instead, it reflects the very high level of generality inherent in systems theory. The pitfall is that generality is very close to vagueness. However, the piece does contain an argument against the importance of agency; it argues that the system is more important than the individual practitioners, that since catastrophic failures have multiple causes, individual agency is unimportant. That might not apply to practitioners with overall responsibility or who intentionally wrecked the system; there's a naive assumption that everyone's doing their best. I think the author would argue that control fraud is also a system failure, that there are supposed to be safeguards against malicious operators. Bill Black would probably agree. (Note that I dropped off the high level of generality to a particular example.)

Second, this appears to defy the truism from ecology that more complex systems are more stable. I think that's because ecologies generally are not tightly coupled. There are not only many parts but many pathways (and no "practitioners"). So "coupling" is a key concept not much dealt with in the article. It's about HUMAN systems, even though the concept should apply more widely than that.

Third, Yves mentioned the economists' use of "equilibrium." This keeps coming up; the way the word is used seems to me to badly need definition. It comes from chemistry, where it's used to calculate the production from a reaction. The ideal case is a closed system: for instance, the production of ammonia from nitrogen and hydrogen in a closed pressure chamber. You can calculate the proportion of ammonia produced from the temperature and pressure of the vessel. It's a fairly fast reaction, so time isn't a big factor.

The Earth is not a closed system, nor are economies. Life is driven by the flow of energy from the Sun (and various other factors, like the steady rain of material from space). In open systems, "equilibrium" is a constantly moving target. In principle, you could calculate the results at any given condition, given long enough for the many reactions to finish. It's as if the potential equilibrium drives the process (actually, the inputs do).

Not only is the target moving, but the whole system is chaotic in the sense that it's highly dependent on variables we can't really measure, like people, so the outcomes aren't actually predictable. That doesn't really mean you can't use the concept of equilibrium, but it has to be used very carefully. Unfortunately, most economists are pretty ignorant of physical science, so ignorant they insistently defy the laws of thermodynamics ("groaf"), so there's a lot of magical thinking going on. It's really ideology, so the misuse of "equilibrium" is just one aspect of the system failure.

Synoia August 21, 2015 at 5:34 pm

Really?

"equilibrium…from chemistry, where it's used to calculate the production from a reaction"

That is certainly a definition in one scientific field.

There is another definition from physics.

When all the forces that act upon an object are balanced, then the object is said to be in a state of equilibrium.

However objects on a table are considered in equilibrium, until one considers an earthquake.

The conditions for an equilibrium need to be carefully defined, and there are few cases, if any, of equilibrium "under all conditions".

nat scientist August 21, 2015 at 7:42 pm

Equilibrium ceases when Chemistry breaks out, dear Physicist.

Synoia August 21, 2015 at 10:19 pm

Equilibrium ceases when Chemistry breaks out

This is only a subset.

Oregoncharles August 21, 2015 at 10:56 pm

I avoided physics, being not so very mathematical, so learned the chemistry version – but I do think it's the one the economists are thinking of.

What I neglected to say: it's an analogy, hence potentially useful but never literally true – especially since there's no actual stopping point, like your table.

John Merryman August 21, 2015 at 3:09 pm

There is much simpler way to look at it, in terms of natural cycles, because the alternative is that at the other extreme, a happy medium is also a flatline on the big heart monitor. So the bigger it builds, the more tension and pressure accumulates. The issue then becomes as to how to leverage the consequences. As they say, a crisis should never be wasted. At its heart, there are two issues, economic overuse of resources and a financial medium in which the rent extraction has overwhelmed its benefits. These actually serve as some sort of balance, in that we are in the process of an economic heart attack, due to the clogging of this monetary circulation system, that will seriously slow economic momentum.

The need then is to reformulate how these relationships function, in order to direct and locate our economic activities within the planetary resources. One idea to take into consideration being that money functions as a social contract, though we treat it as a commodity. So recognizing it is not property to be collected, rather contracts exchanged, then there wouldn't be the logic of basing the entire economy around the creation and accumulation of notational value, to the detriment of actual value. Treating money as a public utility seems like socialism, but it is just an understanding of how it functions. Like a voucher system, simply creating excess notes to keep everyone happy is really, really stupid, big picture wise.

Obviously some parts of the system need more than others, but not simply for ego gratification. Like a truck needs more road than a car, but an expensive car only needs as much road as an economy car. The brain needs more blood than the feet, but it doesn't want the feet rotting off due to poor circulation either.
So basically, yes, complex systems are finite, but we need to recognize and address the particular issues of the system in question.

Bob Stapp August 21, 2015 at 5:30 pm

Perhaps in a too-quick scan of the comments, I overlooked any mention of Nassim Nicholas Taleb's book, Antifragile. If so, my apologies. If not, it's a serious omission from this discussion.

Local to Oakland August 21, 2015 at 6:34 pm

Thank you for this.

I first wondered about something related to this theme when I first heard about just in time sourcing of inventory. (Now also staff.) I wondered then whether this was possible because we (middle and upper class US citizens) had been shielded from war and other catastrophic events. We can plan based on everything going right because most of us don't know in our gut that things can always go wrong.

I'm genX, but 3 out of 4 of my grandparents were born during or just after WWI. Their generation built for redundancy, safety, stability. Our generation, well. We take risks and I'm not sure the decision makers have a clue that any of it can bite them.

Jeremy Grimm August 22, 2015 at 4:23 pm

The just-in-time supply of components for manufacturing was described in Barry Lynn's book "Cornered" and identified as creating extreme fragility in the American production system. There have already been natural disasters that shut down American automobile production in our recent past.

Everything going right wasn't part of the thinking that went into just-in-time parts. Everything going right - long enough - to steal away market share on price-point was the thinking. Decision makers don't worry about any of this biting them. Passing the blame down and golden parachutes assure that.

flora August 21, 2015 at 7:44 pm

This is really a very good paper. My direct comments are:

point 2: yes. provided the safety shields are not discarded for bad reasons like expedience or ignorance or avarice. See Glass-Steagall Act, for example.

point 4: yes. true of all dynamic systems.

point 7: 'root cause' is not the same as 'key factors'. ( And here the doctor's sensitivity to malpractice suits may be guiding his language.) It is important to determine key factors in order to devise better safety shields for the system. Think airplane black boxes and the 1932 Pecora Commission after the 1929 stock market crash.

Jay M August 21, 2015 at 9:01 pm

It's easy, complexity became too complex. And I can't read the small print. We are devolving into a world of happy people with gardens full of flowers that they live in on their cell phones.

Ancaeus August 22, 2015 at 5:22 am

There are a number of counter-examples; engineered and natural systems with a high degree of complexity that are inherently stable and fault-tolerant, nonetheless.

1. Subsumption architecture is a method of controlling robots, invented by Rodney Brooks in the 1980s. This scheme is modeled on the way the nervous systems of animals work. In particular, the parts of the robot exist in a hierarchy of subsystems, e.g., foot, leg, torso, etc. Each of these subsystems is autonomously controlled. Each of the subsystems can override the autonomous control of its constituent subsystems. So, the leg controller can directly control the leg muscle, and can override the foot subsystem. This method of control was remarkably successful at producing walking robots which were not sensitive to unevenness of the surface. In other words, they were not brittle in the sense of Dr. Cook. Of course, subsumption architecture is not a panacea. But it is a demonstrated way to produce very complex engineered systems consisting of many interacting parts that are very stable.

2. The inverted pendulum. Suppose you wanted to build a device to balance a pencil on its point. You could imagine a sensor to detect the angle of the pencil, an actuator to move the balance point, and a controller to link the two in a feedback loop. Indeed, this is, very roughly, how a Segway remains upright. However, there is a simpler way to do it, without a sensor or a feedback controller. It turns out that if your device just moves the balance point sinusoidally (e.g., in a small circle), and if the size of the circle and the rate are within certain ranges, then the pencil will be stable. This is a well-known consequence of the Mathieu equation (see the sketch below). The lesson here is that stability (i.e., safety) can be inherent in systems for subtle reasons that defy a straightforward fault/response feedback.

3. Emergent behavior of swarms. Large numbers of very simple agents interacting with one another can sometimes exhibit complex, even "intelligent", behavior. Ants are a good example. Each ant has only simple behavior. However, the entire ant colony can act in complex and effective ways that would be hard to predict from the individual ant behaviors. A typical ant colony is highly resistant to disturbances in spite of the primitiveness of its constituent ants.

4. Another example is the mammalian immune system that uses negative selection as one mechanism to avoid attacking the organism itself. Immature B cells are generated in large numbers at random, each one with receptors for specifically configured antigens. During maturation, if they encounter a matching antigen (likely a protein of the organism) then the B cell either dies, or is inactivated. At maturity, what is left is a highly redundant cohort of B cells that only recognize (and neutralize) foreign antigens.

Well, these are just a few examples of systems that exhibit stability (or fault-tolerance) that defies the kind of Cartesian analysis in Dr. Cook's article.
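On point 2 above, a minimal sketch of the best-known special case – Kapitza's pendulum, where the pivot is driven vertically rather than in a small circle; the symbols and the averaging step are the standard textbook treatment, not anything from the comment itself:

% Pendulum of length L in gravity g; pivot driven vertically as y(t) = a cos(wt);
% theta measured from the inverted (upright) position. Linearizing gives a
% Mathieu-type equation:
\ddot{\theta} + \frac{a\omega^{2}\cos(\omega t) - g}{L}\,\theta = 0
% Averaging over the fast drive (valid for a << L and high omega) gives the
% classic condition for the inverted position to be stable:
a^{2}\omega^{2} > 2gL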

Marsha August 22, 2015 at 11:42 am

Glass-Steagall Act: interactions between unrelated functions are something to be avoided. Auto recall: honking the horn could stall the engine by shorting out the ignition system. The simple fix is a bit of insulation.

Ada software language: former DOD standard for large-scale, safety-critical software development: encapsulation, data hiding, strong typing of data, minimization of dependencies between parts to minimize the impact of fixes and changes. Has safety-critical software gone the way of the Glass-Steagall Act? Now it is buffer overflows, security holes, and Internet protocols in the hardware controlling "critical infrastructure" that can blow things up.

[Feb 11, 2015] GHOST: glibc vulnerability (CVE-2015-0235)

First of all, this is the kind of error that is not easy to exploit. You need to locate the vulnerable function in the core image and be able to overwrite memory via a call whose argument length any reasonable programmer will check. So whether this vulnerability is exploitable in the applications we are running is an open question.

In any case, most installed systems are theoretically vulnerable – and practically too, if they are running applications that do not check argument length for such calls.

Only recently patched systems with glibc-2.11.3-17.74.13.x86_64 and above are not vulnerable.
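A quick way to check where a given box stands is to compare the installed glibc build against the first fixed one. A minimal sketch, assuming an RPM-based system; the fixed version string is the SUSE build quoted above, and other distributions number their backported fixes differently:

# Query the installed glibc build and compare it against the fixed one,
# e.g. glibc-2.11.3-17.74.13.x86_64 or later on SLES 11.
rpm -q glibc

# After updating glibc, the old copy stays mapped into long-running
# processes, so restart every service that links against it
# (sshd, nscd, mail daemons, ...) or simply reboot.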

[Aug 31, 2012] Scientific Linux 6.3 Live CD/DVD Has Been Released

Official site is www.scientificlinux.org. Downloads are available from CERN
August 27, 2012 | Softpedia

Scientific Linux 6.3 is now based on Red Hat Enterprise Linux 6.3, powered by Linux kernel 2.6.32, and features X.Org Server 1.7.7, IceWM 1.2.37, GNOME 2.28, Firefox 10.0.6, Thunderbird 10.0.6, LibreOffice 3.4.5.2 and KDE Software Compilation 4.3.4.

Moreover, the distro includes software from the rpmforge, epel and elrepo repositories in order to provide support for the NTFS and ReiserFS filesystems, secure network connections via OpenVPN, VPNC and PPTP, better multimedia support, and various filesystem tools like dd_rescue, gparted, ddrescue and gdisk.

Scientific Linux 6.3 is distributed as Live CD and DVD ISO images, supporting both 32-bit and 64-bit architectures.

The complete list of changes with a comprehensive list of fixes, improvements, removed and updated packages, can be found in the official release announcement for Scientific Linux 6.3 Live CD/DVD.

[Aug 1, 2012] Oracle Linux A better alternative to CentOS

They provide a conversion script: centos2ol.sh

Oracle Linux: A better alternative to CentOS

We firmly believe that Oracle Linux is the best Linux distribution on the market today. It's reliable, it's affordable, it's 100% compatible with your existing applications, and it gives you access to some of the most cutting-edge innovations in Linux like Ksplice and dtrace.

But if you're here, you're a CentOS user. Which means that you don't pay for a distribution at all, for at least some of your systems. So even if we made the best paid distribution in the world (and we think we do), we can't actually get it to you... or can we?

We're putting Oracle Linux in your hands by doing two things:

◦ We've made the Oracle Linux software available free of charge
◦ We've created a simple script to switch your CentOS systems to Oracle Linux

We think you'll like what you find, and we'd love for you to give it a try.

Switch your CentOS systems to Oracle Linux

Run the following as root:

curl -O https://linux.oracle.com/switch/centos2ol.sh 
sh centos2ol.sh 

FAQ

Q: Wait, doesn't Oracle Linux cost money?
A: Oracle Linux support costs money. If you just want the software, it's 100% free. And it's all in our yum repo at public-yum.oracle.com. Major releases, errata, the whole shebang. Free source code, free binaries, free updates, freely redistributable, free for production use. Yes, we know that this is Oracle, but it's actually free. Seriously.
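After the script finishes, a few quick sanity checks confirm that the switch took effect. A minimal sketch; the file and repository names assume Oracle Linux 6-era conventions:

cat /etc/oracle-release   # should now identify the system as Oracle Linux
yum repolist              # should list repos served from public-yum.oracle.com
yum update -y             # pull current errata from the Oracle repo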

[Apr 20, 2012] Oracle Linux: The Past, Present and Future Revealed

Apr 19, 2012 | The VAR Guy

During our conversation, Coekaerts touched on a range of additional topics - such as:

[Feb 28, 2012] Red Hat vs. Oracle Linux Support: 10 Years Is New Standard

The VAR Guy

The support showdown started a couple of weeks ago, when Red Hat extended the life cycle of Red Hat Enterprise Linux (RHEL) versions 5 and 6 from the norm of seven years to a new standard of 10 years. A few days later, Oracle responded by extending Oracle Linux life cycles to 10 years. Side note: It sounds like SUSE, now owned by Attachmate, also offers extended Linux support of up to 10 years.


Red Hat's success aside, it's hard to profit from free, by Barb Darrow

Dec 19 2014 | dewaynenet.wordpress.com

Posted by wa8dzp


<https://gigaom.com/2014/12/19/red-hats-success-aside-its-hard-to-profit-from-free/>

Red Hat, which just reported a profit of $47.9 million (or 26 cents a share) on revenue of $456 million for its third quarter, has managed to pull off a tricky feat: it's been able to make money off of free, well, open-source, software. (Its profit for the year-ago quarter was $52 million.)

In a blog post, Red Hat CEO Jim Whitehurst said the old days when IT pros risked their careers by betting on open source rather than proprietary software are over. That old adage that you can't be fired for buying IBM should be updated, I guess.

In what looks something like a victory lap, Whitehurst wrote that every company now runs some sort of open source software. He wrote:

Many of us remember the now infamous "Halloween Documents," the classic quote from former Microsoft CEO Steve Ballmer describing Linux as a "cancer," and comments made by former Microsoft CEO Bill Gates, saying, "So certainly we think of [Linux] as a competitor in the student and hobbyist market. But I really do not think in the commercial market, we'll see it [compete with Windows] in any significant way."

He contrasted that with Ballmer's successor Satya Nadella's professed love of Linux. To be fair, Azure was well down the road to embracing open source late in Ballmer's reign, but Microsoft's transition from open-source basher to open-source lover is still noteworthy - and indicative of open-source software's widespread adoption. If you can't beat 'em, join 'em.

Open source is great, but profitable?

So everyone agrees that open source is goodness. But not everyone is sure that many companies will be able to replicate Red Hat's success profiting from it.

Sure, Microsoft wants people to run Linux and Java and whatever on Azure because that gives Azure a critical mass of new-age users who are not necessarily enamored of .NET and Windows. And, Microsoft has lots of revenue opportunities once those developers and companies are on Azure. (The fact that Microsoft is open-sourcing .NET is icing on the open-source cake.)

But how does a company that is 100 percent focused on say, selling support and services and enhancements to Apache Hadoop, make money? A couple of these companies are extremely well-funded and it's unclear where the cash burn ends and the profits can begin.

[snip]

Docker - FreeBSD-like containers + API for Linux

Introduction

Linux Containers (LXC) is a virtualization method for running multiple isolated Linux systems on a single host. Docker extends LXC: it uses LXC, cgroups and other kernel facilities to automate the deployment of applications inside software containers.

It comes with an API to run processes in isolation. With Docker I can pack WordPress (or any other app written in Python/Ruby/PHP and friends) and its dependencies into a lightweight, portable, self-sufficient container. I can deploy and test such a container on any Linux-based server.
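
As an illustration of the workflow just described, here is a minimal sketch of packing a small Python app into an image and running it (app.py, requirements.txt and the myapp tag are hypothetical placeholders, not anything from the article):

# Describe the image: base OS plus the app and its dependencies
cat > Dockerfile <<'EOF'
FROM python:2.7
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
EOF

# Build the self-sufficient image, then run it on any Linux host with Docker
docker build -t myapp .
docker run -d -p 8080:8080 myapp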

Bad Lockup Bug Plagues Linux

Slashdot

jones_supa (887896) writes "A hard-to-track system lockup bug seems to have appeared in the span of a couple of the most recent Linux kernel releases. Dave Jones of Red Hat was the first to report his experience of frequent lockups with 3.18. Later he found out that the issue is present in 3.17 too. The problem was first suspected to be related to Xen.

A patch dating back to 2005 was pushed for Xen to fix a vmalloc_fault() path that was similar to what was reported by Dave. The patch had a comment that read "the line below does not always work. Needs investigating!" But it looks like this issue was never properly investigated. Due to the nature of the bug and its difficulty in tracking down, testers might be finding multiple but similar bugs within the kernel. Linus even suggested taking a look in the watchdog code. He also concluded the Xen bug to be a different issue. The bug hunt continues in the Linux Kernel Mailing List."

Selected Skeptical Comments

binarylarry (1338699) on Saturday November 29, 2014 @01:04PM (#48485753)

Re: Have they checked systemd? (Score:5, Funny)

It's not systemd related, you can check by opening a termin

Anonymous Coward on Saturday November 29, 2014 @12:34PM (#48485599)

Re: What's happening to Linux? (Score:0)

The kernel with the above problems isn't in the 14.04 Ubuntu repo; the latest kernel in 14.04 is 3.13 and does not have this problem. I'm sure it will be fixed soon.

Anonymous Coward on Saturday November 29, 2014 @01:15PM (#48485819)

Re:What's happening to Linux? (Score:1)

I love the assumption that this isn't happening in the corporate world.

It is. It just happens behind closed doors. Thus, patches.

raymorris (2726007) on Saturday November 29, 2014 @01:08PM (#48485775)

Try a stable distro like RH/CentOS. Or Mac (Score:3)

> First got into it ... because Linux was totally stable

If stable is your top priority, Fedora is approximately the worst possible choice. Fedora is essentially Red Hat Beta. If you want stable, the devel / beta branch is not for you. You'll probably be much happier with Red Hat or its twin, CentOS.

Also, you mentioned that you did an "upgrade" to Debian Unstable. You didn't mention any _reason_ for doing that. If stability is a top priority for you, don't upgrade just because you can, don't fix it if it aint broke.

Mac OS X may indeed be a good choice for you also. It is certified Unix, and if you use the command line in Linux you'll find that day-to-day tasks are the same on a Mac. System internals are different of course, but bash, sed, awk, grep, and vim work just like they do on Linux.

Anonymous Coward on Saturday November 29, 2014 @02:14PM (#48486131)

Re:But guys... (Score:0)

RHEL is an entire distribution. Does this magically make every package inside "enterprise"?
I was referring to single tools and programs. Before you hit me with that "Windows is not a single tool" bat - it does not contain too much. Let's take usable entities instead of packages, software, tools, etc.

And that "doubled Software thing", it was kind of "finger intelligence", i.e. if your fingers type stupid things for themselves. I have another such example: Ever typed Touring complete instead of Turing complete? How about reading holocaust instead of localhost? ;)

jones_supa (887896) on Saturday November 29, 2014 @02:08PM (#48486099)

Re: But guys... (Score:4, Informative)

Have you ever compared enterprise class software (I also count Windows 7 Enterprise) with OSS Software? Windows does not even reliably support STR and resume. Using multiple monitors is a PITA.

Suspend and multiple monitors have always worked great in Windows for me. Under Linux, they have also worked fine in some machines, but I have also occasionally experienced serious problems with those areas. During recent times I have found out that even laptop screen brightness adjustment cannot be expected to work reliably out of the box under Linux.

SuricouRaven (1897204) on Saturday November 29, 2014 @03:26PM (#48486683)

Re: But guys... (Score:2)

There's an imbalance in development. Under Windows, every hardware manufacturer does all they can to ensure their hardware is good - investing a lot of money in developing and testing the drivers. Under Linux, the manufacturers usually don't care - aside from some server hardware, there just aren't enough resources to justify it from a business perspective. So development falls to a three-man team on a side project, and sometimes it's down to community volunteers working from reverse-engineered specifications.

jellomizer (103300) on Saturday November 29, 2014 @03:09PM (#48486527)

Re: Come on Slashdot, get your news current (Score:3)

A Microsoft bug: proof of the incompetence of closed source.
A Linux bug: either point to some closed-source factor, or claim that solving it is a victory for the flexibility of open source.

Anonymous Coward on Saturday November 29, 2014 @01:36PM (#48485973)

Some actual information (Score:0)

So it may be a "bad" lockup bug in the sense that nobody knows exactly what causes it, but it's not "bad" in the sense that people should worry overly.

Why?

Dave Jones sees it only under insane loads (CPU loads of 150+) running a stress tester that is designed to do crazy things (trinity). And he can reproduce it on only one of his machines, and even there it takes hours. And it happens on a debug kernel that has DEBUG_PAGEALLOC and other explicit (and complex) debug code enabled. And even then the bug is a "Hmm. We made no progress in the last 21 seconds", rather than anything stranger.

In other words, it's "bad" in the sense that any unknown behavior is bad, but it's unknown mainly because it's so hard to trigger. Nobody other than core developers should really care. And those developers do care, so it's not like it's worrisome there either. It just takes longer to figure out because the usual "bisect it" approach isn't very easy when it can take a day to reproduce...
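
For reference, the "bisect it" approach mentioned above is the standard way to pin down a kernel regression; here it stalls only because each test step can take a day. A generic sketch of the workflow in a kernel git tree (the good/bad tags are picked for illustration, not taken from the report):

# Binary search over kernel commits
git bisect start
git bisect bad v3.17     # a release where the lockups were seen
git bisect good v3.16    # assumed last known-good release
# build and boot the commit git checks out, run the workload, then:
git bisect good          # or: git bisect bad
# repeat until git names the first bad commit; finish with:
git bisect reset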

Recommended Links


Sites

Please visit the nixCraft site. It has material well worth your visit.

Dr. Nikolai Bezroukov


Top 10 Classic Unix Humor Stories

1. The Jargon File -- the most famous Unix-related humor file.

Please note that the so-called "hacker dictionary" is the Jargon File spoiled by Eric Raymond :-) -- earlier versions of the Jargon File are better than the latest hacker dictionary...

2. Tao_Of_Programming (originated in 1992). This is probably No. 2 classic. There are several variants, but the link provided seems to be the original text (or at least an early version close to the original).

Here is a classic quote:

"When you have learned to snatch the error code from the trap frame, it will be time for you to leave."

... ...

If the Tao is great, then the operating system is great. If the operating system is great, then the compiler is great. If the compiler is great, then the application is great. The user is pleased and there is harmony in the world.

3. Know your Unix System Administrator by Stephan Zielinski -- probably the third most famous Unix humor item. See also KNOW YOUR UNIX SYSTEM ADMINISTRATOR at Field Guide to System Administrators [rec.humor.funny]. I personally like the descriptions of idiots and fascists, and tend to believe that a lot of administrative fascists are ex-secretaries :-). At the same time former programmers can become sadists quite often as well -- there is something in the sysadmin job that seems to cultivate a feeling of superiority and sadism (the "Users are Losers" mentality). IMHO the other members of the classification are not that realistic :-) :

There are four major species of Unix sysad:

  1. The Technical Thug.
    Usually a systems programmer who has been forced into system administration; writes scripts in a polyglot of the Bourne shell, sed, C, awk, perl, and APL.

  2. The Administrative Fascist.
    Usually a retentive drone (or rarely, a harridan ex-secretary) who has been forced into system administration.
  3. The Maniac.
    Usually an aging cracker who discovered that neither the Mossad nor Cuba are willing to pay a living wage for computer espionage. Fell into system administration; occasionally approaches major competitors with indesp schemes.
  4. The Idiot.
    Usually a cretin, morphodite, or old COBOL programmer selected to be the system administrator by a committee of cretins, morphodites, and old COBOL programmers.

---------------- SITUATION: Root disk fails. ----------------

TECHNICAL THUG:

Repairs drive. Usually is able to repair filesystem from boot monitor. Failing that, front-panel toggles microkernel in and starts script on neighboring machine to load binary boot code into broken machine, reformat and reinstall OS. Lets it run over the weekend while he goes mountain climbing.

ADMINISTRATIVE FASCIST:
Begins investigation to determine who broke the drive. Refuses to fix system until culprit is identified and charged for the equipment.
MANIAC, LARGE SYSTEM:
Rips drive from system, uses sledgehammer to smash same to flinders. Calls manufacturer, threatens pets. Abuses field engineer while they put in a new drive and reinstall the OS.
MANIAC, SMALL SYSTEM:
Rips drive from system, uses ball-peen hammer to smash same to flinders. Calls Requisitions, threatens pets. Abuses bystanders while putting in new drive and reinstalling OS.
IDIOT:
Doesn't notice anything wrong.

---------------- SITUATION: Poor network response. ----------------

TECHNICAL THUG:

Writes scripts to monitor network, then rewires entire machine room, improving response time by 2%. Shrugs shoulders, says, "I've done all I can do," and goes mountain climbing.

ADMINISTRATIVE FASCIST:
Puts network usage policy in motd. Calls up Berkeley and AT&T, badgers whoever answers for network quotas. Tries to get xtrek freaks fired.
MANIAC:
Every two hours, pulls ethernet cable from wall and waits for connections to time out.
IDIOT:
# compress -f /dev/en0

---------------- SITUATION: User questions. ----------------

TECHNICAL THUG:

Hacks the code of emacs' doctor-mode to answer new users questions. Doesn't bother to tell people how to start the new "guru-mode", or for that matter, emacs.

ADMINISTRATIVE FASCIST:
Puts user support policy in motd. Maintains queue of questions. Answers them when he gets a chance, often within two weeks of receipt of the proper form.
MANIAC:
Screams at users until they go away. Sometimes barters knowledge for powerful drink and/or sycophantic adulation.
IDIOT:
Answers all questions to best of his knowledge until the user realizes few UNIX systems support punched cards or JCL.

4. RFC 1925 The Twelve Networking Truths by R. Callon

  1. It Has To Work.
  2. No matter how hard you push and no matter what the priority, you can't increase the speed of light. (2a) (corollary). No matter how hard you try, you can't make a baby in much less than 9 months. Trying to speed this up *might* make it slower, but it won't make it happen any quicker.
  3. With sufficient thrust, pigs fly just fine. However, this is not necessarily a good idea. It is hard to be sure where they are going to land, and it could be dangerous sitting under them as they fly overhead.
  4. Some things in life can never be fully appreciated nor understood unless experienced firsthand. Some things in networking can never be fully understood by someone who neither builds commercial networking equipment nor runs an operational network.
  5. It is always possible to aglutenate multiple separate problems into a single complex interdependent solution. In most cases this is a bad idea.
  6. It is easier to move a problem around (for example, by moving the problem to a different part of the overall network architecture) than it is to solve it. (6a) (corollary). It is always possible to add another level of indirection.
  7. It is always something (7a) (corollary). Good, Fast, Cheap: Pick any two (you can't have all three).
  8. It is more complicated than you think.
  9. For all resources, whatever it is, you need more. (9a) (corollary) Every networking problem always takes longer to solve than it seems like it should.
  10. One size never fits all.
  11. Every old idea will be proposed again with a different name and a different presentation, regardless of whether it works. (11a) (corollary). See rule 6a.
  12. In protocol design, perfection has been reached not when there is nothing left to add, but when there is nothing left to take away.

5. Murphy's laws -- I especially like "Experts arose from their own urgent need to exist." :-)

  1. Nothing is as easy as it looks.

  2. Everything takes longer than you think.
  3. Anything that can go wrong will go wrong.
  4. If there is a possibility of several things going wrong, the one that will cause the most damage will be the one to go wrong. Corollary: If there is a worse time for something to go wrong, it will happen then.
  5. If anything simply cannot go wrong, it will anyway.
  6. If you perceive that there are four possible ways in which a procedure can go wrong, and circumvent these, then a fifth way, unprepared for, will promptly develop.
  7. Left to themselves, things tend to go from bad to worse.
  8. If everything seems to be going well, you have obviously overlooked something.
  9. Nature always sides with the hidden flaw.
  10. Mother nature is a bitch.
  11. It is impossible to make anything foolproof because fools are so ingenious.
  12. Whenever you set out to do something, something else must be done first.
  13. Every solution breeds new problems.

... ... ....

6. Network Week/The Bastard Operator from Hell. The classic story about an Administrative Fascist sysadmin.

7. Academic Programmers- A Spotter's Guide by Pete Fenelon; Department of Computer Science, University of York

Preamble
I Am The Greatest
Internet Vegetable
Rabid Prototyper
Get New Utilities!
Square Peg...
Objectionably ...

My Favourite ...
Give Us The Tools!
Macro Magician
Nightmare Networker
Configuration ...
Artificial Stupidity
Number Crusher

Meta Problem Solver
What's A Core File?
I Come From Ruritania
Old Fart At Play
I Can Do That!
What Colour ...
It's Safety Critical!

Objectionably Oriented

OO experienced a Road To Damascus situation the moment objects first crossed her mind. From that moment on everything in her life became object oriented and the project never looked back. Or forwards.

Instead, it kept sending messages to itself asking it what direction it was facing in and would it mind having a look around and send me a message telling me what was there...

OO thinks in Smalltalk and talks to you in Eiffel or Modula-3; unfortunately she's filled the disk with the compilers for them and instead of getting any real work done she's busy writing papers on holes in the type systems and, like all OOs, is designing her own perfect language.

The most dangerous OOs are OODB hackers; they inevitably demand a powerful workstation with local disk onto which they'll put a couple of hundred megabytes of unstructured, incoherent pointers all of which point to the number 42; any attempt to read or write it usually results in the network being down for a week at least.

8. Real Programmers Don't Write Specs

Real Programmers don't write specs -- users should consider themselves lucky to get any programs at all, and take what they get.

Real Programmers don't comment their code. If it was hard to write, it should be hard to understand.

Real Programmers don't write application programs, they program right down on the bare metal. Application programming is for feebs who can't do system programming.

... ... ...

Real Programmers aren't scared of GOTOs... but they really prefer branches to absolute locations.

9. Real Programmers Don't Use Pascal -- [ A letter to the editor of Datamation, volume 29 number 7, July 1983. Ed Post Tektronix, Inc. P.O. Box 1000 m/s 63-205 Wilsonville, OR 97070 Copyright (c) 1982]

Back in the good old days-- the "Golden Era" of computers-- it was easy to separate the men from the boys (sometimes called "Real Men" and "Quiche Eaters" in the literature). During this period, the Real Men were the ones who understood computer programming, and the Quiche Eaters were the ones who didn't. A real computer programmer said things like "DO 10 I=1,10" and "ABEND" (they actually talked in capital letters, you understand), and the rest of the world said things like "computers are too complicated for me" and "I can't relate to computers-- they're so impersonal". (A previous work [1] points out that Real Men don't "relate" to anything, and aren't afraid of being impersonal.)

But, as usual, times change. We are faced today with a world in which little old ladies can get computers in their microwave ovens, 12 year old kids can blow Real Men out of the water playing Asteroids and Pac-Man, and anyone can buy and even understand their very own personal Computer. The Real Programmer is in danger of becoming extinct, of being replaced by high school students with TRASH-80s.

There is a clear need to point out the differences between the typical high school junior Pac-Man player and a Real Programmer. If this difference is made clear, it will give these kids something to aspire to -- a role model, a Father Figure. It will also help explain to the employers of Real Programmers why it would be a mistake to replace the Real Programmers on their staff with 12 year old Pac-Man players (at a considerable salary savings).

10. bsd_logo_story

Last week I walked into a local "home style cookin' restaurant/watering hole" to pick up a take out order. I spoke briefly to the waitress behind the counter, who told me my order would be done in a few minutes.

So, while I was busy gazing at the farm implements hanging on the walls, I was approached by two, uh, um... well, let's call them "natives".

These guys might just be the original Texas rednecks -- complete with ten-gallon hats, snakeskin boots and the pervasive odor of cheap beer and whiskey.

"Pardon us, ma'am. Mind of we ask you a question?"

Well, people keep telling me that Texans are real friendly, so I nodded.

"Are you a Satanist?"

Etc: other historically important items

Programming Eagles

... ... ... ... ... ... ... ... ...

And they showed me the way
There were salesmen down the corridor
I thought I heard them say
Welcome to Mountain View California
Such a lovely place
Such a lovely place (backgrounded)
Such a lovely trace(1)
Plenty of jobs at Mountain View California
Any time of year
Any time of year (backgrounded)
You can find one here
You can find one here

... ... ... ... ... ... ...

The Beatles' Yesterday -- variation for programmers.

Yesterday,
All those backups seemed a waste of pay.
Now my database has gone away.
Oh I believe in yesterday.

Suddenly,
There's not half the files there used to be,
And there's a milestone hanging over me
The system crashed so suddenly.

I pushed something wrong
What it was I could not say.
Now all my data's gone
and I long for yesterday-ay-ay-ay.

Yesterday,
The need for back-ups seemed so far away.
I knew my data was all here to stay,
Now I believe in yesterday.

The UNIX cult -- a satiric history of Unix

Notes from some recent archeological findings on the birth of the UNIX cult on Sol 3 are presented. Recently discovered electronic records have shed considerable light on the beginnings of the cult. A sketchy history of the cult is attempted.

On the Design of the UNIX operating System

This article was written in 1984 and was published in various UNIX newsletters across the world. I thought that it should be revived to mark the first 25 years of UNIX. If you like this, then you might also like The UNIX Cult.
Peter Collinson

,,, ,,, ,,,

'I Provide Office Solutions,' Says Pitiful Little Man -- a nice parody of programmers in general and open source programmers in particular

"VisTech is your one-stop source for Internet and Intranet open source development, as well as open source software support and collaborative development" said Smuda, adjusting the toupee he has worn since age 23. "We are a full-service company that can evaluate and integrate multi-platform open source solutions, including Linux, Solaris, Aix and HP-UX"

"Remember, no job is too small for the professionals at VisTech," added the spouseless, childless man, who is destined to die alone and unloved. "And no job is too big, either."

Unofficial Unix Administration Horror Story Summary

Best of DATAMATION: GOTO-less Programming

By R. Lawrence Clark*

From DATAMATION, December, 1973


Nearly six years after publication of Dijkstra's now-famous letter, [1] the subject of GOTO-less programming still stirs considerable controversy. Dijkstra and his supporters claim that the GOTO statement leads to difficulty in debugging, modifying, understanding and proving programs. GOTO advocates argue that this statement, used correctly, need not lead to problems, and that it provides a natural, straightforward solution to common programming procedures.

Numerous solutions have been advanced in an attempt to resolve this debate. Nevertheless, despite the efforts of some of the foremost computer scientists, the battle continues to rage.

The author has developed a new language construct on which, he believes, both the pro- and the anti-GOTO factions can agree. This construct is called the COME FROM statement. Although usage of the COME FROM statement is independent of the linguistic environment, its use will be illustrated within the FORTRAN language.

Netslave quiz

1. AT YOUR LAST JOB INTERVIEW, YOU EXHIBITED:

A. Optimism
B. Mild Wariness
C. Tried to overcome a headache. I was really tired
D. Controlled Hostility

2. DESCRIBE YOUR WORKPLACE:

A. An enterprising, dynamic group of individuals laying the groundwork for tomorrow's economy.
B. A bunch of geeks with questionable social skills.
C. An anxiety-ridden bunch of backbiting finger-pointers, with long hours and a lot of stress.
D. Jerks and PHB

3. DESCRIBE YOUR HOME:

A. Small, but efficient.
B. Shared and dormlike.
C. Rubble-strewn and fetid.
D. I have a personal network at my home with three or more connected computers and permanent connection to the Internet

NEW ELEMENT DISCOVERED!

The heaviest element known to science was recently discovered by university physicists. The new element was tentatively named Administratium. It has no protons and no electrons, and thus has an atomic number of 0. However, it does have one neutron, 15 assistant neutrons, 70 vice-neutrons, and 161 assistant vice-neutrons. This gives it an atomic mass of 247. These 247 particles are held together by a force that involves constant exchange of a special class of particle called morons.

Since it does not have electrons, Administratium is inert. However, it can be detected chemically as it impedes every reaction with which it comes into contact. According to the discoverers, a minute amount of Administratium added to one reaction caused it to take over four days to complete. Without Administratium, the reaction took less than one second.

Administratium has a half-life of approximately three years, after which it does not normally decay but instead undergoes a complex nuclear process called "Reorganization". In this little-understood process, assistant neutrons, vice-neutrons, and assistant vice-neutrons appear to exchange places. Early results indicate that atomic mass actually increases after each "Reorganization".

Misc Unproductive Time Classification -- nice parody on timesheets

You Might Be A Programmer If... By Clay Shannon - [email protected]

Jokes Magazine: Drug Dealers Vs Software Developers

Jokes Magazine: Ten Commandments For Stress Free Programming (December 23, 1999)

  1. Thou shalt not worry about bugs. Bugs in your software are actually special features.
  2. Thou shalt not fix abort conditions. Your user has a better chance of winning state lottery than getting the same abort again.
  3. Thou shalt not handle errors. Error handling was meant for error-prone people; neither you nor your users are error prone.
  4. Thou shalt not restrict users. Don't do any editing, let the user input anything, anywhere, anytime. That is being very user friendly.
  5. Thou shalt not optimize. Your users are very thankful to get the information; they don't worry about speed and efficiency.
  6. Thou shalt not provide help. If your users cannot figure out how to use your software themselves, then they are too dumb to deserve its benefits anyway.
  7. Thou shalt not document. Documentation only comes in handy for making future modifications. You made the software perfect the first time, it will never need mods.
  8. Thou shalt not hurry. Only the cute and the mighty should get the program by deadline.
  9. Thou shalt not revise. Your interpretation of specs was right, you know the users' requirements better than them.
  10. Thou shalt not share. If other programmers needed some of your code, they should have written it themselves.

Other Collections of Unix Humor


Don't let a few insignificant facts distract you from waging a holy war

A Slashdot post

It's spelled Linux, but it's pronounced "Not Windows"

- Usenet sig

It is time to unmask the programming community as a Secret Society for the Creation and Preservation of Artificial Complexity.

Edsger W. Dijkstra: The next forty years (EWD 1051)




The last but not least. Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand. ~ Archibald Putt, Ph.D.


Copyright © 1996-2021 by Softpanorama Society. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Copyright for original materials belongs to their respective owners. Quotes are made for educational purposes only, in compliance with the fair use doctrine.

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...

You can use PayPal to buy a cup of coffee for the authors of this site

Disclaimer:

The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the Softpanorama society. We do not warrant the correctness of the information provided or its fitness for any purpose. The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable JavaScript for this site. This site is perfectly usable without JavaScript.

Created May 16, 1996; Last modified: September 17, 2017