
The tar pit of Red Hat overcomplexity

The differences between RHEL 6 and RHEL 7 are no smaller than those between SUSE and RHEL, which essentially doubles the workload of sysadmins, since every "extra" OS leads to mental overflow and loss of productivity. That's why most sysadmins hate RHEL 7. The joke is that "the D in systemd stands for 'Dammmmit!'"

 
Without that discipline, too often, software teams get lost in what are known in the field as "boil-the-ocean" projects -- vast schemes to improve everything at once. That can be inspiring, but in the end we might prefer that they hunker down and make incremental improvements to rescue us from bugs and viruses and make our computers easier to use.

Idealistic software developers love to dream about world-changing innovations; meanwhile, we wait and wait for all the potholes to be fixed.

Frederick Brooks, Jr.,
The Mythical Man-Month, 1975


Introduction

Imagine a language in which both the grammar and the vocabulary change every decade. Add to this that the syntax is complex and the vocabulary is huge, far beyond normal human comprehension. You can learn some subset while you work closely with a particular subsystem, only to forget it after a couple of quarters. This is the reality of Red Hat enterprise editions.

Moreover, Red Hat exists in several incarnations, some of which are free for developers and some of which are relatively low cost (around $100 a year for patches).

The key problem with RHEL 7 is its adoption of systemd, which makes this distribution a less attractive choice for many large enterprises: the learning curve is steep and the troubleshooting skills needed are different from RHEL 6. The distance between RHEL 6 and RHEL 7 is approximately the same as the distance between RHEL 6 and SUSE, so we can speak of RHEL 7 as a new flavor of Linux. This daemon replaces init in a way I find problematic, including, but not limited to, paying outsize attention to the subsystem that loads the initial set of daemons and replacing runlevels with a different and a hundred times more complex alternative.

Systemd made RHEL 7 as different from RHEL 6 as the SUSE distribution is from Red Hat.

Hopefully we do not need to upgrade to RHEL 7 before 2020, but eventually, when support of RHEL 6 ends, all RHEL users will either need to switch or abandon RHEL for another distribution that does not use systemd. This is actually an opportunity for Oracle to bring the Solaris solution of this problem to Linux, but I doubt that they want to spend money on it. They will probably continue their successful attempts to clone Red Hat (under the name of Oracle Linux), improving the kernel and some other subsystems to work better with the Oracle database. Taking into account the dominant market share of RHEL (which became the Microsoft of Linux), finding an alternative is a difficult proposition. But the acquisition of Red Hat by IBM can change that.

The IBM acquisition of Red Hat changed a lot of things, as neither HP nor Oracle has any warm feelings toward IBM, and Red Hat's preexisting neutrality toward major hardware vendors now disappears, making IBM the owner of enterprise Linux and hurting HP and Oracle.

If the Devuan distribution survives, it can be an option for customers who do not depend much on commercial software (for example, for genomic computational clusters), but it is not an enterprise distribution per se, as there is no commercial support for it from a vendor. So most probably you will be forced to convert, in the same way as you were forced to convert from Windows 7 to Windows 10 in 2020. So much for open source. With the current complexity of Linux the key question is "Open for whom?" I do not see it as radically different from, say, Solaris 11 (which, while more expensive, is also more secure and has better polished lightweight virtual machines -- zones -- and a better filesystem, ZFS, although XFS is not bad either).

Actually Lennart Poettering is an interesting Trojan horse within Red Hat. This is an example of how one talented, motivated and productive desktop-biased (Apple or Windows style) C programmer can cripple a large open source project without meeting any organized resistance. Of course, that means that his goals align with the goals of Red Hat management, which is to control the Linux environment in a way similar to Microsoft -- via complexity (Windows can be called the king of software complexity) -- providing lame sysadmins with GUI-based tools for "click, click, click" style administration, the inner workings of which they do not understand.

And despite wide resentment, I did not see the slogan "Boycott RHEL 7" too often, and none of the major websites joined the "resistance". While the key to shoving systemd down other distributions' developers' throats (both SUSE and Ubuntu adopted systemd) was its close integration with Gnome, it is evident that the absence of an "architectural council" in projects like Red Hat is a serious weakness. It also suggests that developers from the companies behind the major Linux distributions became the uncontrolled elite of the Linux world, an "open source nomenklatura", if you wish. In other words, we see the "iron law of oligarchy" in action here. See Systemd invasion into Linux distributions.

RHEL 7 fiasco

The architectural quality of RHEL 7 is low. The complexity of this distribution is considerably higher than that of RHEL 6, with almost zero return on investment.

All in all, RHEL 7 is way too complex to administer and requires a regular human being to remember so many things that they can never fit into one head. This means it is by definition a brittle system, the elephant that nobody understands completely. And I am talking not only about systemd, which was introduced in this version. Other subsystems suffer from overcomplexity as well; some of them came directly from RHEL 6 (biosdevname).

Troubleshooting became increasingly difficult. Partially this is due to the general complexity of the modern Linux environment (library hell, introduction of redundant utilities, etc.), and partially due to the lack of architectural vision on the part of Red Hat. Add to this multiple examples of sloppy programming (dealing with proxies is still not unified, and individual packages have their own settings which can conflict with the environment variable settings) and old warts (the RPM format is now old and has partially outlived its usefulness, creating rather complex issues with patching and installing software, issues that take a lot of sysadmin time to resolve) and you get the picture.

A nice example of Red Hat ineptitude is how they handle proxy settings. Even for software fully controlled by Red Hat, such as yum and the subscription manager, proxy settings appear in each and every configuration file. Why not put them in /etc/sysconfig/network and always, consistently, read them from the environment variables first and from this file second? Any well-behaving application needs to read the environment variables, which should take precedence over settings in configuration files. They do not do it. God knows why.

Those giants of system programming even managed to embed the proxy settings from /etc/rhsm/rhsm.conf into the yum file /etc/yum.repos.d/redhat.repo, so the proxy value is taken from this file, not from your /etc/yum.conf settings, as you would expect. Moreover, this is done without any elementary checks for consistency: if you make a pretty innocent mistake and specify the proxy setting in /etc/rhsm/rhsm.conf as

proxy http://yourproxy.yourdomain.com

the Red Hat registration manager will accept this and will work fine. But for yum to work properly, the /etc/rhsm/rhsm.conf proxy specification requires just the DNS name, without the http:// or https:// prefix -- the https:// prefix is added blindly (and that's wrong) in redhat.repo without checking whether you already specified an http:// (or https://) prefix or not. This SNAFU leads to the generation in redhat.repo of a proxy statement of the form https://http://yourproxy.yourdomain.com

At this point you are up for a nasty surprise -- yum will not work with any Red Hat repository, and there are no meaningful diagnostic messages. Looks like RHEL managers are either engaged in binge drinking, or watch too much porn on the job ;-).
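For what it is worth, here is a sketch of consistent settings (yourproxy.yourdomain.com and port 3128 are placeholders, not real values); note that rhsm.conf wants a bare hostname, while yum.conf wants a full URL:

# /etc/rhsm/rhsm.conf -- bare hostname and port, no http:// prefix
proxy_hostname = yourproxy.yourdomain.com
proxy_port = 3128

# /etc/yum.conf -- full URL, prefix included
proxy=http://yourproxy.yourdomain.com:3128

# refresh the subscription data and check what ended up in redhat.repo
subscription-manager refresh
grep proxy /etc/yum.repos.d/redhat.repo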

Yum started as a very helpful utility, but gradually it turned into a complex monster that requires quite a bit of study and introduces a set of very complex bugs of its own, some of which are almost features.

SELinux was never a transparent security subsystem and has a lot of quirks of its own. Its key idea is far from being as elegant as the key idea of AppArmor (assigning a per-application umask for key directories and config files), which has actually disappeared from the Linux security landscape. Many sysadmins simply disable SELinux, leaving only the firewall to protect the server. Some applications require disabling SELinux for proper functioning, and this is specified in their documentation.

The deterioration of architectural vision within Red Hat as a company is clearly visible in the terrible (simply terrible, without any exaggeration) quality of the customer portal, which is probably the worst I have ever encountered. Sometimes I open tickets just to understand how to perform a particular operation on the portal. The old, or "Classic" as they call it, RHEL customer portal was actually OK, and even had some useful features. Then for some reason they tried to introduce something new and completely messed things up. As of August 2017 the quality has somewhat improved, but it still leaves much to be desired. Sometimes I wonder why I am still using the distribution if the company which produces it (and charges substantial money for it) is so tremendously inept architecturally that it is unable to create a usable customer portal. And in view of the existence of Oracle Linux I do not really know the answer to this question. Although Oracle has its own set of problems, thousands of EPEL packages, signed and built by Oracle, have been added to the Oracle Linux yum server.

RHEL 7 is a much bigger mess than RHEL 6

I hate RHEL 7. I really do. Red Hat honchos, either out of envy of Apple's (and/or Microsoft's) success in the desktop space, or for some other reason (such as the ability to dictate the composition of enterprise Linux), made server Linux hostage to desktop Linux enthusiasts and broke way too many things by introducing systemd (which is actually useful mostly for laptops and similar portable devices, as the initial agenda was to speed up boot). See Systemd invasion into Linux Server space.

Revamping anaconda in a "consumer friendly" fashion in RHEL 7, and doing a lot of other things unnecessary for the server space, somewhat destroys, or at least diminishes, the value of Red Hat as a server OS. Most of those changes also increase complexity by hiding "basic" things under layers of indirection. Try to remove Network Manager in RHEL 7. Now explain to me why we need it for a server room with servers "forever" attached by a 10 Mbit/sec cable.

It is undeniable that in version 7 RHEL became even more complex. But what is really bad is that it negates the usefulness of previous knowledge of Red Hat and of the tons of books (some of which are good) published in 1998-2010, during the dot-com boom and the inertia after the dot-com crash. After that the genre of Red Hat books died, like the computer book genre in general, and shelves with computer books almost disappeared from Barnes & Noble.

This utter disrespect for people who spent years learning Red Hat crap increased my desire to switch to CentOS or Oracle Linux. I do not want to pay Red Hat money any longer, and support has deteriorated to the level where it is almost completely useless unless you buy premium support. And even in this case much depends on which analyst your ticket is assigned to.

Rising cost, deteriorating quality of tech support: This is an expensive open source, my friend

RHEL is a very expensive distribution for small and medium firms. There is no free lunch, and if you are using a commercial distribution you need to pay annual maintenance, if only as insurance. In some areas you can be OK with just patches, but in others you need support from the vendor, and preferably qualified support. The less static your datacenter is, the more you need vendor support.

RHEL support deteriorated recently while prices almost doubled from RHEL 5 to 6 (especially if you use virtual guests a lot; see the discussion at RHEL 6: how much for your package (The Register)). And with deteriorating support it is not very clear what you are paying for.

At first I was incensed by the quality of RHEL support, but with time I started to understand that they have no choice -- the product is way over their heads.

Rising costs also lead to a preference for low cost "self-support" licenses for all similar servers except one (on which you can perform the troubleshooting) and other similar cost-saving tricks that further complicate your licensing picture (the different number of repositories attached to each type of RHEL license is one such problem). And outside the financial industry, companies are now mostly tight on money for IT.

But with rising complexity, qualified support gradually shrank and then disappeared. Now what you get is low level support which mostly consists of pointing you to articles in the Red Hat knowledgebase that are relevant (or often irrelevant) to your case. With patience you can escalate your ticket and get to a proper specialist, but the amount of effort might not justify that -- for the amount of time and money you might be better off using external consultants, for example from universities, which have high quality people in need of additional money.

In most cases the level of support can still be viewed as acceptable (especially if you use a Premium subscription for critical servers), but at least for me this creates resentment; that is why I am trying to install CentOS or Scientific Linux instead whenever they work OK for a particular application.

The new licensing scheme, while improving the situation in many areas, has a couple of serious drawbacks, such as the handling of 4-socket servers and the problem of license expiration. We will talk about 4-socket servers later (and who needs them now, if Intel packs 15 cores into one CPU ;-). But the tight control of licenses that Red Hat probably enjoys is a hassle for me: if a license expires, now this is "your problem", as there is no automatic renewal from the pool of available licenses (which was one of the positive things about RHN); you need to write scripts and deploy them on all nodes to be able to run patching on all nodes simultaneously via some parallel execution tool (which is an OK way to deploy security patches after testing).
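The workaround is typically a small script pushed to every node with your parallel execution tool of choice; a minimal sketch (the pool ID is a placeholder):

#!/bin/bash
# re-attach a subscription if the current one is no longer valid (sketch)
if ! subscription-manager status | grep -q "Overall Status: Current"; then
    subscription-manager refresh
    subscription-manager attach --auto    # or --pool=<POOL_ID> to pick a specific pool
fi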

And please note that large corporations typically overpay Red Hat for licensing, as they have bad licensing controls and prefer to err on the side of more licenses than they need. This adds insult to injury -- why on earth can't we use available free licenses to automatically replace a license when it expires, while an equivalent pool of licenses is available and not used?

So IMHO it makes sense to mix and match RHEL with CentOS and use the latter on non-critical servers instead of the commercial version, just to avoid troubles with licensing.

For HPC clusters Red Hat provides a discounted version (for computational nodes only; the headnode is licensed at the full price) with a limited number of packages, called Red Hat Enterprise Linux for High Performance Computing. See sp.ts.fujitsu.com for a more or less OK explanation of what you should expect. In this cost-cutting world this is also an option (actually the HPC structure, where several similar servers are managed from one node, is applicable in many areas not directly connected with computations or genomic research). The same is true for freely available HPC schedulers such as Son of Grid Engine (SGE).

Essentially, buying an HPC license you pay the full price for the headnode and a discounted price for each computational node. That can cut your costs, but I am not sure whether Oracle Linux is not a better deal, as in that case you get the same distribution both for the headnode and the computational nodes for roughly the same price as the Red Hat HPC license, which gives you two different distributions. Truth be told, Red Hat does provide an optimized networking stack with the HPC compute node license. The question is what the difference is and whether you should pay such a price for it.

The product is so complex and big that providing quality support for it is impossible. First of all, deep understanding of the OS is lacking. Now all tech support does when trying to resolve most tickets is search the database of cases and post as a solution something that is related to your case (or maybe not ;-).

Premium support is still above average, and in critical cases they can switch you to a live engineer on a different continent during late hours, so if server downtime is important this is a kind of (weak) insurance.

In any case, Red Hat support is probably overwhelmed by problems with RHEL 7, and even for subsystems fully developed by Red Hat, such as the subscription manager and yum, it is usually dismal, unless you are lucky and get a knowledgeable guy who is willing to help.

As I already mentioned, in most cases RHEL tech support limits itself to searching the database (which is a good first step, but only the first step) and recommending something from the article that they found closest to your case. It looks to me like they have no intelligent way to analyze the SOS files which are provided with each more or less complex ticket. Often their replies demonstrate a complete lack of understanding of what problem you are experiencing.

Sometimes this "quote service" from their database that they sell instead of customer support helps, but often it is just a "go to hell" type of response, an imitation of support, if you wish.  In the past (in time of RHEL 4) the quality of support was much better. Now it is unclear what we are paying for.  So it you have substantial money (and I mean 100 or more systems to support) you probably should be thinking about third party that suit your needs

There are two viable options here:

The dismal quality and the large number of problems with RHEL 7 also mean that you need to spend real money on configuration management (or hire a couple of dedicated guys to create your own system). Making images before each change, and storing a set of images so that you can return to the previous one in a matter of hours, is better insurance than RHEL tech support. For example, using a small USB 3 drive (like a Samsung) or even a flash drive like the FIT (which now maxes out at 256GB) to store a recovery image created, say, by Relax-and-Recover does make sense.

Using Git or Subversion (or any other version control system that you are comfortable with; Git is not ideally suited for sysadmin change management, as by default it does not preserve attributes and ACLs, but that can be added) for all changes is another opportunity, at least via etckeeper, which is far from perfect but can at least serve as a starting point. That capitalizes on the fact that right after installation the OS usually works. So we observe the phenomenon well known to Windows administrators: self-disintegration of the OS over time ;-)
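A minimal starting point with etckeeper (assuming the package is available to you, for example from EPEL):

yum install etckeeper              # available from EPEL
etckeeper init                     # puts /etc under version control (git by default)
etckeeper commit "baseline after install"
# from then on, commit before and after every change:
etckeeper commit "before reconfiguring bonding"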

Initially RHEL 7 works and is more or less stable. The problems typically come later, with additional packages, libraries, etc., which often are not critical for system functionality. "Keep it simple, stupid" is a good approach here, although for servers used in research this mantra is impossible to implement.

For example, for many servers you do not need X11 to be installed (and Gnome is a horrible package, if you ask me; LXDE, which is the default on Knoppix, is smaller and adequate for most sysadmin and most user needs). That cuts a lot of packages and a lot of complexity. Exotic protocols designed for laptops can also be eliminated on a server which uses a regular wired connection and a static IP address (although in RHEL 7 this is difficult or impossible to do; I would not recommend trying to deinstall Network Manager on RHEL 7 -- you will suffer). Avoiding a complex, taxing package in favor of something simpler is another worthwhile approach.
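If deinstalling Network Manager is too painful, a less intrusive alternative (a sketch; eth0 and the addresses are examples) is to disable it and fall back to the classic network scripts:

systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl enable network                 # legacy network-scripts service

# and in /etc/sysconfig/network-scripts/ifcfg-eth0:
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.10
NETMASK=255.255.255.0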

In any case Unix now (and Linux in particular) is an operating system which is clearly beyond the human capability to comprehend it. So in a way it is amazing that it still works. Also, strong architects (like Thompson) are long gone, so "entropy" is high and initially clean architectural solutions become much less clean over time.

Despite the several levels of support included in licenses (with premium supposedly being the higher level), technical support for really complex cases is uniformly weak, with mostly a "monkey looking in the database" type of service. If you have a complex problem you are usually stuck, although the premium service provides you with an opportunity to talk with a live person, which might help. In a way, unless you buy a premium license, the only way to use RHEL is "as is". And with RHEL 7 even this is not a very attractive proposition, as the switch to systemd creates its own set of problems and a learning curve for sysadmins.

Some of this deterioration is connected with the fact that Linux became a very complex, Byzantine OS that nobody actually knows. Even the number of utilities is such that nobody probably knows more than 30% or 50% of them, and even if you learn some utility during a particular case of troubleshooting, you will soon forget it, as you probably will not get the same case for another year or two. In this sense the title "Red Hat Engineer" became a sad joke.

Even if you learned something important today, you will soon forget it if you do not use it, as there are way too many utilities, applications, configuration files -- you name it.

Licensing model: four different types of RHEL licenses

Red Hat is struggling to fence off "copycats" by complicating access to the source of patches, but the problem is that its licensing model is Byzantine. It is based on a half-dozen different types of subscriptions, some pretty expensive. In the past I resented paying Red Hat for our 4-socket servers to the extent that I stopped using this type of server and completely switched to two-socket servers, which, with Intel's rising core counts, was an easy escape from RHEL restrictions and greed. Currently Red Hat probably has the most complex, the most Byzantine system of subscriptions after IBM (which is probably the leader in licensing obscurantism and the ability to force customers to overpay for its software ;-).

And there are at least four different RHEL licenses for real (hardware-based) servers (https://access.redhat.com/support/offerings/production/sla):

  1. Self-support. If you have many identical or almost identical servers or virtual machines, it does not make sense to buy standard or premium licenses for all of them. Such a license can be bought for one server, and all others can use self-support licenses, which provide access to patches. They are sometimes used for a group of servers, for example a park of webservers with only one server getting standard or premium licensing.
  2. Standard. Actually means web only, although formally you can try chat and phone during business hours (if you manage to get to a support specialist). So this is mostly web-based support.
  3. Premium. Web and phone, with phone support 24x7 for severity 1 and 2 problems. Here you really can get a specialist on the phone if you have a critical problem, even after hours.
  4. HPC computational nodes, with a limited number of packages in the distribution and repositories (I wonder if Oracle Linux is not a better deal for computational nodes than this type of RHEL license; sometimes CentOS can be used too, which eliminates the problem).
  5. No-cost RHEL Developer Subscription, available from March 2016 -- I do not know much about this license.

The RHEL licensing scheme is based on so-called "entitlements", which, oversimplifying, means one license for a 2-socket server. In the past they were "mergeable", so if your 4-socket license expired and you had two spare two-socket licenses, RHEL was happy to accommodate your needs. Now they are not.

But that does not assure the right mix if you need different types of licenses for different classes of servers. All is fine until you use a mixture of licenses (for example some cluster licenses, some patch-only (aka self-support), some premium licenses -- four types of licenses altogether). In that case understanding where a particular license landed was, in the past, almost impossible. But if you use uniform licenses this scheme works reasonably well. Red Hat fixed this problem with the introduction of the subscription manager, so it is in the past.

This path works well, but to cut the costs you need to buy a five-year license with the server, which is a lot of money, and you lose the ability to switch Linux flavor. This is also a problem with buying a cluster license -- Dell and HP can install basic cluster software on the enclosure for a minimal fee, but they force on you additional software which you might not want or need. And believe me, this HPC setup can be used outside computational tasks. It is actually an interesting paradigm of managing a heterogeneous datacenter. The only problem is that you need to learn to use it :-). For example, SGE is a very well engineered scheduler (originally from Sun, but later open sourced). While it is free software it beats many commercial offerings, and while it lacks calendar scheduling, any calendar scheduler can be used with it to compensate (even cron -- in this case each cron task becomes an SGE submit script; see the sketch below).
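For example, a crontab entry that turns a nightly task into an SGE job might look like this (the paths and the script name are placeholders):

# submit the nightly report as an SGE job at 02:00
0 2 * * *  . /opt/sge/default/common/settings.sh && qsub -N nightly_report /home/batch/nightly_report.sh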

Still, using an HPC-like config might be an option to lower the fees if you use multiple similar servers (for example a blade enclosure with 16 identical blades). The idea is to organize a particular set of servers as a cluster with SGE (or another similar scheduler) installed on the head node. Now Hadoop is a fashionable thing (while being just a simple case of distributed search), and you can already claim that this is a Hadoop type of service. In this case you pay twice the price for the headnode, but all computational nodes are $100 a year each or so. Although you can get the same self-support license from Oracle for the same price without the Red Hat restrictions, so from another point of view, why bother?

Note about licensing management system

There are two licensing systems used by Red Hat:

  1. Classic (RHN) -- the old system, which was phased out in mid 2017.
  2. "New" (RHSM) -- a newer, better system used predominantly on RHEL 6 and 7. Obligatory from August 2017.

RHSM is complex and requires study. Many hours of sysadmin time are wasted on mastering its complexities, while in reality this is just overhead that allows Red Hat to charge money for the product. So the fact that they do NOT support it well tells us a lot about the level of deterioration of the company. Those Red Hat honchos with high salaries have essentially created a new job in the enterprise environment -- license administrator. Congratulations!

All in all, Red Hat successfully created an almost impenetrable mess of obsolete and semi-obsolete notes, poorly written and incomplete documentation, dismal diagnostics and poor troubleshooting tools. And the level of frustration sometimes reaches such heights that people just abandon RHEL. I did for several non-critical systems. If CentOS or Scientific Linux works, there is no reason to suffer from Red Hat licensing issues. That also makes Oracle, surprisingly, a more attractive option :-). Oracle Linux is also cheaper. But usually you are bound by corporate policy here.

"New" subscription system (RHSM) is slightly better then RHN for large organizations but it created problem with 4 socket server which now are treated as a distinct entity from two socket servers.

RHSM allows you to assign a specific license to a specific box and to list the current status of licensing. But like RHN it requires proxy settings in a configuration file; it does not take them from the environment. If the company has several proxies and you have a mismatch, you can be royally screwed. In general you need to check the consistency of your environment settings with the settings in the conf files. The level of understanding of proxy environments by RHEL tech support is basic or worse, so they use their database of articles instead of actually troubleshooting based on sosreport data. Moreover, each day a new person might be working on your ticket, so there is no continuity.
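A quick consistency check before opening a ticket (a sketch):

env | grep -i proxy                                   # what the environment says
subscription-manager config --list | grep -i proxy    # what subscription-manager thinks
grep -i proxy /etc/rhsm/rhsm.conf /etc/yum.conf /etc/yum.repos.d/redhat.repo   # what is actually in the files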

The RHEL System Registration Guide (https://access.redhat.com/articles/737393) is weak and does not cover more complex cases and typical mishaps.

The RHN system of RHEL licenses could also cover various numbers of sockets (the default is 2): for a 4-socket server it would simply consume two 2-socket licenses. This was a pretty logical (albeit expensive) solution. This is not the case with RHSM. They want you to buy a specific license for a 4-socket server, and generally those are tied to the upper levels of RHEL licensing (no self-support for 4-socket servers). In RHN, at least, licenses were eventually converted into some kind of uniform licensing tokens that were assigned to unlicensed systems more or less automatically (for example, if you had a 4-socket system, two tokens were consumed). With RHSM this is not true, which creates a set of complex problems for large enterprises.

In general, licensing by physical socket (or worse, by number of cores -- an old IBM trick) is a dirty trick on the part of Red Hat, and it points to its direction in the future.

"New" subscription system (RHSM) is slightly better then old RHN in a sense that it creates for you a pool of licensing and gives you the ability to assign more expensive licensird to more valuable servers.  It allows to assign specific license to specific box and to list the current status of licensing. 

Troubleshooting

Learn More

The RHEL System Registration Guide (https://access.redhat.com/articles/737393) outlines the major options available for registering a system (and carefully avoids mentioning bugs and pitfalls, which are many). For some reason migration from RHN to RHSM usually worked well.

Also potentially useful (to the extent any poorly written Red Hat documentation is useful) is the document How to register and subscribe a system to the Red Hat Customer Portal using Red Hat Subscription-Manager (https://access.redhat.com/solutions/253273). At least it tries to answer some of the most basic questions.

There is also an online tool to assist you in selecting the most appropriate registration technology for your system -- the Red Hat Labs Registration Assistant (https://access.redhat.com/labs/registrationassistant/).

Pretty convoluted RPM packaging system which creates problems

The idea of RPM was to simplify the installation of complex packages. But RPMs create a set of problems of their own, especially connected with libraries (which is not exactly a Red Hat problem; it is a Linux problem called "library hell"). One example is the so-called multilib problem that is detected by yum:

--> Finished Dependency Resolution

Error:  Multilib version problems found. This often means that the root
       cause is something else and multilib version checking is just
       pointing out that there is a problem. Eg.:

         1. You have an upgrade for libicu which is missing some
            dependency that another package requires. Yum is trying to
            solve this by installing an older version of libicu of the
            different architecture. If you exclude the bad architecture
            yum will tell you what the root cause is (which package
            requires what). You can try redoing the upgrade with
            --exclude libicu.otherarch ... this should give you an error
            message showing the root cause of the problem.

         2. You have multiple architectures of libicu installed, but
            yum can only see an upgrade for one of those arcitectures.
            If you don't want/need both architectures anymore then you
            can remove the one with the missing update and everything
            will work.

         3. You have duplicate versions of libicu installed already.
            You can use "yum check" to get yum show these errors.

       ...you can also use --setopt=protected_multilib=false to remove
       this checking, however this is almost never the correct thing to
       do as something else is very likely to go wrong (often causing
       much more problems).

       Protected multilib versions: libicu-4.2.1-14.el6.x86_64 != libicu-4.2.1-11.el6.i686
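In practice the fastest ways out are usually one of the following (libicu here is just the package from the error message above):

yum check                                # list duplicates and broken dependencies
yum remove libicu.i686                   # drop the stray 32-bit copy if nothing needs it
yum update --exclude=libicu.i686         # or retry the update excluding the offending arch
package-cleanup --cleandupes             # from yum-utils: remove duplicate package versions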

Selecting packages for installation

You can improve the typical RHEL situation, with a lot of useless daemons installed, by carefully selecting packages and then reusing the generated kickstart file. That can be done via the advanced menu for one box, and then this kickstart file can be used for all other boxes with minor modifications. Kickstart still works, despite the trend toward overcomplexity in other parts of the distribution ;-)
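The installer records exactly what was selected in /root/anaconda-ks.cfg, so the workflow is roughly as follows (installserver is a placeholder):

# on the reference box, keep the generated kickstart and trim the %packages section
cp /root/anaconda-ks.cfg minimal-ks.cfg

# then point the installer of the next box at it from the boot prompt:
#   RHEL 6:  ks=http://installserver/ks/minimal-ks.cfg
#   RHEL 7:  inst.ks=http://installserver/ks/minimal-ks.cfg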

Problems with architectural vision of Red Hat brass

Both the architectural level of thinking of the Red Hat brass (with daemons like avahi, systemd and Network Manager installed by default for all types of servers) and clear attempts along "not invented here" lines in virtualization create concerns. It is clear that Red Hat by itself can't become a major virtualization player like VMware. It just does not have enough money for development and marketing.

You would think that the safest bet is to reuse the leader among open source offerings, which is currently Xen. But the Red Hat brass thinks differently and wants to play a more dangerous poker game: it started promoting KVM, making it an obligatory part of the RHCSA exam. Actually, Red Hat released Enterprise Linux 5 with integrated virtualization (Xen) and then changed their mind after RHEL 5.5 or so. In RHEL 6 Xen was replaced by KVM.

What is good is that after ten years they eventually managed, to some extent, to re-implement Solaris 10 zones (without RBAC). In RHEL 7 they are more or less usable.

Security overkill with SELinux

RHEL contains a security layer called SELinux, but in most corporate deployments it is either disabled or operates in permissive mode. The reason is that it is notoriously difficult to configure correctly, and in many cases the game is not worth the candle.
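Switching it off is trivial, which is part of the reason it happens so often:

getenforce                     # current mode: Enforcing / Permissive / Disabled
setenforce 0                   # switch the running system to permissive mode
# make the change persistent across reboots:
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config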

The firewall is more usable in corporate deployments, especially in cases when you have an obnoxious or incompetent security department (a pretty typical situation for a large corporation ;-), as it prevents a lot of stupid questions from utterly incompetent "security gurus" about open ports, and it can stop dead the scanning attempts of tools that test for known vulnerabilities, by means of which security departments try to justify their miserable existence. Those tools sometimes crash production servers.

Generally it is dangerous to allow the exploits used in such tools, which local script kiddies (aka the "security team") recklessly launch against your production servers (as if checking for a particular vulnerability using an internal script were an inferior solution). There have been reports of crashes of production servers due to such games. Some "security script kiddies" who understand very little about Unix even try to prove their worth by downloading exploits from hacker sites and then using them against production servers on the internal corporate network. Unfortunately they are not always fired for such valiant efforts.

To get an idea of the level of complexity of SELinux, try to read the RHEL 7 Deployment Guide. The full set of documentation is available from www.redhat.com/docs/manuals/enterprise/

So it is not accidental that in many cases SELinux is disabled in enterprise installations. Some commercial software packages explicitly recommend disabling it in their installation manuals.

There is an alternative to SELinux which is more elegant, usable and understandable -- AppArmor, which is/was used in SLES (only by knowledgeable sysadmins ;-), but it never achieved real popularity either (SLES now has SELinux too and suffers from overcomplexity even more than RHEL ;-). It is essentially the idea of a per-application umask for all major directories, which can stop dead attempts to write to them and/or read information from certain sensitive files even if the application has a vulnerability.

But Solaris is dying in the enterprise environment, as Oracle limits it to their own (expensive) hardware. You can, however, run Solaris for x86 in a VM (Xen). Still, IMHO, if you need a really high level of security for a particular server this might be a preferable path to take. Or you can use Solaris if you have a knowledgeable Solaris sysadmin on the floor (security via obscurity actually works pretty well in this case), and Solaris has RBAC implemented, which is a better solution than sudo.

RHEL became a kind of Microsoft of the Linux world, and as such it is the most hackable flavor of Linux, just due to its general popularity. That means that it is a very bad idea to use RHEL if security is of vital importance, although with SELinux enabled it is definitely a more hardened variant of the OS than without. See Potemkin Villages of Computer Security for a more detailed discussion.

The road to hell is paved with good intentions: the biosdevname package on Dell servers

The loss of architectural integrity of Unix is now very pronounced in RHEL, both RHEL 6 and RHEL 7, although 7 is definitely worse. And this is not only the systemd fiasco. For example, recently I spent a day troubleshooting an interesting and unusual problem: one out of 16 identical (both hardware- and software-wise) blades in a small HPC cluster (and only one) failed to enable its bonded interface on boot and thus remained offline. Most sysadmins would think that something is wrong with the hardware -- for example, the Ethernet card on the blade and/or the switch port, or even the internal enclosure interconnect. I also initially thought this way. But this was not the case ;-)

Tech support from Dell was not able to locate any hardware problem, although they did diligently upgrade the CMC on the enclosure and the BIOS and firmware on the blade. BTW, this blade had had similar problems in the past, and Dell tech support once even replaced the Ethernet card in it, thinking that it was the culprit. Now I know that this was a completely wrong decision on their part, and a waste of both time and money :-). They came to this conclusion by swapping the blade to a different slot and seeing that the problem migrated to the new slot. Bingo -- the card is the root cause. The problem is that it was not. What is especially funny is that replacing the card did solve the problem for a while. After reading the information provided below you will be as puzzled as me as to why that happened.

But after yet another power outage the problem  returned.

This time I started to suspect that the card had nothing to do with the problem. After closer examination I discovered that in its infinite wisdom Red Hat introduced in RHEL 6 a package called biosdevname. The package was developed by Dell (a fact which seriously undermined my trust in Dell hardware ;-).

This package renames interfaces to a new set of names, supposedly consistent with their etching on the case of rack servers. It is useless (or, more correctly, harmful) for blades. The package is primitive and does not understand whether the server is a rack server or a blade.

Moreover, while doing this supposedly useful renaming, this package introduces a stealth rule into the 70-persistent-net.rules file:

KERNEL=="eth*", ACTION=="add", PROGRAM="/sbin/biosdevname -i %k", NAME="%c"

I did not look at the code, but from the observed behaviour it looks like in some cases in RHEL 6 (and most probably in RHEL 7 too) the package adds a "stealth" rule to the END (not the beginning but the end !!!) of the /etc/udev/rules.d/70-persistent-net.rules file, which means that if a similar rule in 70-persistent-net.rules exists, it is overridden. Or something similar to this effect.

If you look at the Dell knowledge base, there are dozens of advisories related to this package (just search for biosdevname), which suggests that there is something deeply wrong with its architecture.

What I observed is that on some blades (the key word is some, converting the situation into an Alice in Wonderland environment) the rules for interfaces listed in the 70-persistent-net.rules file simply do not work if this package is enabled. For example, Dell Professional Services in their infinite wisdom renamed the interfaces back to eth0-eth4 for the Intel X710 4-port 10Gb Ethernet card that we have on some blades. On 15 out of 16 blades in the Dell enclosure this absolutely wrong idea works perfectly well. But on blade 16 sometimes it does not, and as a result this blade does not boot after a power outage or reboot. When this happens is unpredictable: sometimes it boots, and sometimes it does not. And you can't understand what is happening, no matter how hard you try, because of the stealth nature of the changes introduced by the biosdevname package.

Two interfaces on this blade (as you now suspect, eth0 and eth1) were bonded. After around 6 hours of poking around the problem, I discovered that despite the presence of the rule for eth0-eth4 in the 70-persistent-net.rules file, RHEL 6.7 still renames all four interfaces on boot to the em0-em4 scheme, and naturally bonding fails, as the eth0 and eth1 interfaces do not exist.

First I decided to deinstall the biosdevname package and see what would happen. It did not work (see below why -- the de-installation script in this RPM is incorrect and contains a bug: it is not enough to remove the files, you also need to regenerate the initramfs; the suggested command was update-initramfs -u, which is the Debian-style equivalent of dracut -f on RHEL (hat tip to Oler)).

Searching for "Renaming em to eth" I found a post in which the author recommended disabling this "feature" via adding biosdevname=0 to kernel parameters /etc/grub.conf

That worked. So two days of my life were lost finding a way to disable this RHEL "enhancement", which is completely unnecessary for blades.
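For the record, the workaround looks like this (the kernel version and root device below are examples):

# RHEL 6: append biosdevname=0 to the kernel line in /etc/grub.conf
kernel /vmlinuz-2.6.32-573.el6.x86_64 ro root=/dev/mapper/vg-root ... biosdevname=0

# RHEL 7: add biosdevname=0 (and, if desired, net.ifnames=0) to GRUB_CMDLINE_LINUX
# in /etc/default/grub, then regenerate the configuration:
grub2-mkconfig -o /boot/grub2/grub.cfg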

Here is some information about this package:

biosdevname
Copyright (c) 2006, 2007 Dell, Inc.  
Licensed under the GNU General Public License, Version 2.

biosdevname in its simplest form takes a kernel device name as an argument, and returns the BIOS-given name it "should" be. This is necessary on systems where the BIOS name for a given device (e.g. the label on the chassis is "Gb1") doesn't map directly and obviously to the kernel name (e.g. eth0).

The distro-patches/sles10/ directory contains a patch needed to integrate biosdevname into the SLES10 udev ethernet naming rules. This also works as a straight udev rule. On RHEL4, that looks like:

KERNEL=="eth*", ACTION=="add", PROGRAM="/sbin/biosdevname -i %k", NAME="%c"

This makes use of various BIOS-provided tables:

PCI Configuration Space
PCI IRQ Routing Table ($PIR)
PCMCIA Card Information Structure
SMBIOS 2.6 Type 9, Type 41, and HP OEM-specific types

therefore it's likely that this will only work well on architectures that provide such information in their BIOS.

To add insult to injury, this behaviour was demonstrated on only one of 16 absolutely identically configured Dell M630 blades with identical hardware and absolutely identical (cloned) OS instances, which makes RHEL a kind of "Alice in Wonderland" system. After this experience it is difficult not to hate Red Hat, but we can do very little to change this situation for the better.

This is just one example. I have more similar stories to tell.

I would like to stress that the fact that the utility included in this package does not understand whether the target for installation is a blade (there is no etching on blade network interfaces ;-) or a rack server is pretty typical RHEL behaviour, despite the fact that the package was developed by Dell. They also install audio packages on boxes that have no audio card, and do a lot of similar things ;-)

If you look at this topic using your favorite search engine (which should not necessarily be Google anymore ;-), you will find dozens of posts in which people try to resolve this problem with various levels of competency and success. Such a tremendous waste of time and effort. Among the best that I have found were:

Current versions and year of end of support

As of October 2018 the supported versions of RHEL are 6.10 and 7.3-7.5. Usually a large enterprise uses a mixture of versions, so running out of support is a common problem.

Compatibility within a single major version of RHEL is usually very good (I would say on par with Solaris), and the risk of upgrading from, say, 6.5 to 6.10 is minimal. There were some broken minor upgrades, like RHEL 6.6, but that is in the past.

Problems arise with a major version upgrade. Usually a total reinstallation is the best bet. Formally you can upgrade from RHEL 6.10 to RHEL 7.5, but only for a very narrow class of servers with not much installed (or much de-installed ;-). You need to remove Gnome via yum groupremove gnome-desktop, if it works (how to remove gnome-desktop using yum), and several other packages.

Again, here your mileage may vary and reinstallation might be a better option (RedHat 6 to RedHat 7 upgrade):

1. Prerequisites

1. Make sure that you are running on the latest minor version (e.g., RHEL 6.8).

2. The upgrade process can handle only the following package groups and packages: Minimal (@minimal), Base (@base), Web Server (@web-server), DHCP Server, File Server (@nfs-server), and Print Server (@print-server).

3. Upgrading GNOME and KDE is not supported, so uninstall the GUI desktop before the upgrade and install it again after the upgrade.

4. Back up the entire system to avoid potential data loss.

5. If your system is registered with RHN Classic, you must first unregister from RHN Classic and register with subscription-manager.

6. Make sure /usr is not on a separate partition.

2. Assessment

Before upgrading we need to assess the machine first and check whether it is eligible for the upgrade; this can be done with a utility called "Preupgrade Assistant".

Preupgrade Assistant does the following:

2.1. Installing dependencies of Preupgrade Assistant:

Preupgrade Assistant needs some dependency packages (openscap, openscap-engine-sce, openscap-utils, pykickstart, mod_wsgi). All these packages can be installed from the installation media, but the "openscap-engine-sce" package needs to be downloaded from the Red Hat portal.

See Red Hat Enterprise Linux - Wikipedia and DistroWatch.com Red Hat Enterprise Linux.

Tip:

In Linux there is no universal convention for determining which flavor of Linux you are running. For Red Hat, in order to determine which version is installed on the server, you can use the command

cat /etc/redhat-release

Oracle Linux adds its own release file while preserving the RHEL one, so a more appropriate command would be

cat /etc/*release

End of support issues

See Red Hat Enterprise Linux Life Cycle - Red Hat Customer Portal:

What's new in RHEL 7

RHEL 7 was released in June 2014. With the release of RHEL 7 we see a hard push toward systemd exclusivity. Runlevels are gone. The release of RHEL 7 with systemd as the only option for system and process management has reignited the old debate about whether Red Hat is trying to establish a Microsoft-style monopoly over enterprise Linux and move Linux closer to Windows: a closed but user-friendly system.
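For sysadmins the practical translation is roughly the following (httpd is just an example service):

# SysV / RHEL 6                      # systemd / RHEL 7
runlevel                             systemctl get-default
init 3                               systemctl isolate multi-user.target
init 5                               systemctl isolate graphical.target
chkconfig httpd on                   systemctl enable httpd.service
service httpd restart                systemctl restart httpd.service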

Scripting language versions in RHEL 7

RHEL7 uses Bash 4.2, Perl 5.16.3, PHP 5.4.16, Python 2.7.5. 

XFS filesystem

The high-capacity, 64-bit XFS file system, which was available in RHEL 6, is now the default file system. It originated in the Silicon Graphics IRIX operating system. In RHEL 7 it can scale up to 500 TB per filesystem. In comparison, previous file systems such as ext4 were limited to 16 TB.
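The practical catch to remember is that an XFS filesystem can be grown online but cannot be shrunk (the device and mount point below are examples):

mkfs.xfs /dev/sdb1                 # create the filesystem
mount /dev/sdb1 /data
xfs_info /data                     # show the filesystem geometry
xfs_growfs /data                   # grow to fill the enlarged underlying device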

For more package version information see DistroWatch.com Red Hat Enterprise Linux.

Systemd

For server sysadmins, systemd is a massive, fundamental change to core Linux administration for no perceivable gain. So while there is a high level of support for systemd among Linux users who run Linux on their laptops and maybe as a home server, there is a strong backlash against systemd from Linux system administrators who are responsible for a significant number of Linux servers in an enterprise environment.

After all, runlevels were used in production environments, if only to run the system with or without X11. Please read an interesting essay on systemd (ProSystemdAntiSystemd); some excerpts follow.

Often initiated by opponents, they will lament on the horrors of PulseAudio and point out their scorn for Lennart Poettering. This later became a common canard for proponents to dismiss criticism as Lennart-bashing. Futile to even discuss, but it’s a staple.

Lennart’s character is actually, at times, relevant.. Trying to have a large discussion about systemd without ever invoking him is like discussing glibc in detail without ever mentioning Ulrich Drepper. Most people take it overboard, however.

A lot of systemd opponents will express their opinions regarding a supposed takeover of the Linux ecosystem by systemd, as its auxiliaries (all requiring governance by the systemd init) expose APIs, which are then used by various software in the desktop stack, creating dependency chains between it and systemd that the opponents deemed unwarranted. They will also point out the udev debacle and occasionally quote Lennart. Opponents see this as anti-competitive behavior and liken it to “embrace, extend, extinguish”. They often exaggerate and go all out with their vitriol though, as they start to contemplate shadowy backroom conspiracies at Red Hat (admittedly it is pretty fun to pretend that anyone defending a given piece of software is actually a shill who secretly works for it, but I digress), leaving many of their concerns to be ignored and deem ridiculous altogether.

... ... ...

In addition, the Linux community is known for reinventing the square wheel over and over again. Chaos is both Linux’s greatest strength and its greatest weakness. Remember HAL? Distro adoption is not an indicator of something being good, so much as something having sufficient mindshare.

... ... ...

The observation that sysinit is dumb and heavily flawed with its clunky inittab and runlevel abstractions, is absolutely nothing new. Richard Gooch wrote a paper back in 2002 entitled “Linux Boot Scripts”, which criticized both the SysV and BSD approaches, based on his earlier work on simpleinit(8). That said, his solution is still firmly rooted in the SysV and BSD philosophies, but he makes it more elegant by supplying primitives for modularity and expressing dependencies.

Even before that, DJB wrote the famous daemontools suite which has had many successors influenced by its approach, including s6, perp, runit and daemontools-encore. The former two are completely independent implementations, but based on similar principles, though with significant improvements. An article dated to 2007 entitled “Init Scripts Considered Harmful” encourages this approach and criticizes initscripts.

Around 2002, Richard Lightman wrote depinit(8), which introduced parallel service start, a dependency system, named service groups rather than runlevels (similar to systemd targets), its own unmount logic on shutdown, arbitrary pipelines between daemons for logging purposes, and more. It failed to gain traction and is now a historical relic.

Other systems like initng and eINIT came afterward, which were based on highly modular plugin-based architectures, implementing large parts of their logic as plugins, for a wide variety of actions that software like systemd implements as an inseparable part of its core. Initmacs, anyone?

Even Fefe, anti-bloat activist extraordinaire, wrote his own system called minit early on, which could handle dependencies and autorestart. As is typical of Fefe’s software, it is painful to read and makes you want to contemplate seppuku with a pizza cutter.

And that’s just Linux. Partial list, obviously.

At the end of the day, all comparing to sysvinit does is show that you’ve been living under a rock for years. What’s more, it is no secret to a lot of people that the way distros have been writing initscripts has been totally anathema to basic software development practices, like modularizing and reusing common functions, for years. Among other concerns such as inadequate use of already leaky abstractions like start-stop-daemon(8). Though sysvinit does encourage poor work like this to an extent, it’s distro maintainers who do share a deal of the blame for the mess. See the BSDs for a sane example of writing initscripts. OpenRC was directly inspired by the BSDs’ example. Hint: it’s in the name - “RC”.

The rather huge scope and opinionated nature of systemd leads to people yearning for the days of sysvinit. A lot of this is ignorance about good design principles, but a good part may also be motivated from an inability to properly convey desires of simple and transparent systems. In this way, proponents and opponents get caught in feedback loops of incessantly going nowhere with flame wars over one initd implementation (that happened to be dominant), completely ignoring all the previous research on improving init, as it all gets left to bite the dust. Even further, most people fail to differentiate init from rc scripts, and sort of hold sysvinit to be equivalent to the shoddy initscripts that distros have written, and all the hacks they bolted on top like LSB headers and startpar(2). This is a huge misunderstanding that leads to a lot of wasted energy.

Don’t talk about sysvinit. Talk about systemd on its own merits and the advantages or disadvantages of how it solves problems, potentially contrasting them to other init systems. But don’t immediately go “SysV initscripts were way better and more configurable, I don’t see what systemd helps solve beyond faster boot times.”, or from the other side “systemd is way better than sysvinit, look at how clean unit files are compared to this horribly written initscript I cherrypicked! Why wouldn’t you switch?”

... ... ...

Now that we pointed out how most systemd debates play out in practice and why it’s usually a colossal waste of time to partake in them, let’s do a crude overview of the personalities that make this clusterfuck possible.

The technically competent sides tend to largely fall in these two broad categories:

a) Proponents are usually part of the modern Desktop Linux bandwagon. They run contemporary mainstream distributions with the latest software, use and contribute to large desktop environment initiatives and related standards like the *kits. They’re not necessarily purely focused on the Linux desktop. They’ll often work on features ostensibly meant for enterprise server management, cloud computing, embedded systems and other needs, but the rhetoric of needing a better desktop and following the example set by Windows and OS X is largely pervasive amongst their ranks. They will decry what they perceive as “integration failures”, “fragmentation” and are generally hostile towards research projects and anything they see as “toy projects”. They are hackers, but their mindset is largely geared towards reducing interface complexity, instead of implementation complexity, and will frequently argue against the alleged pitfalls of too much configurability, while seeing computers as appliances instead of tools.

b) Opponents are a bit more varied in their backgrounds, but they typically hail from more niche distributions like Slackware, Gentoo, CRUX and others. They are largely uninterested in many of the Desktop Linux “advancements”, value configuration, minimalism and care about malleability more than user friendliness. They’re often familiar with many other Unix-like environments besides Linux, though they retain a fondness for the latter. They have their own pet projects and are likely to use, contribute to or at least follow a lot of small projects in the low-level system plumbing area. They can likely name at least a dozen alternatives to the GNU coreutils (I can name about 7, I think), generally favor traditional Unix principles and see computers as tools. These are the people more likely to be sympathetic to things like the suckless philosophy.

It should really come as no surprise that the former group dominates. They’re the ones that largely shape the end user experience. The latter are pretty apathetic or even critical of it, in contrast. Additionally, the former group simply has far more manpower in the right places. Red Hat’s employees alone dominate much of the Linux kernel, the GNU base system, GNOME, NetworkManager, many projects affiliated with Freedesktop.org standards (including Polkit) and more. There’s no way to compete with a vast group of paid individuals like those.

Conclusion

The “Year of the Linux Desktop” has become a meme at this point, one that is used most often sarcastically. Yet there are still a lot of people who deeply hold onto it and think that if only Linux had a good abstraction engine for package manager backends, those Windows users will be running Fedora in no time.

What we’re seeing is undoubtedly a cultural clash by two polar opposites that coexist in the Linux community. We can see it in action through the vitriol against Red Hat developers, and conversely the derision against Gentoo users on part of Lennart Poettering, Greg K-H and others. Though it appears in this case “Gentoo user” is meant as a metonym for Linux users whose needs fall outside the mainstream application set. Theo de Raadt infamously quipped that Linux is “for people who hate Microsoft”, but that quote is starting to appear outdated.

Many of the more technically competent people with views critical of systemd have been rather quiet in public, for some reason. Likely it’s a realization that the Linux desktop’s direction is inevitable, and thus trying to criticize it is a futile endeavor. There are people who still think GNOME abandoning Sawfish was a mistake, so yes.

The non-desktop people still have their own turf, but they feel threatened by systemd to one degree or another. Still, I personally do not see them dwindling down. What I believe will happen is that they will become even more segregated than they already are from mainstream Linux and that using their software will feel more otherworldly as time goes on.

There are many who are predicting a huge renaissance for BSD in the aftermath of systemd, but I’m skeptical of this. No doubt there will be increased interest, but as a whole it seems most of the anti-systemd crowd is still deeply invested in sticking to Linux.

Ultimately, the cruel irony is that in systemd’s attempt to supposedly unify the distributions, it has created a huge rift unlike any other and is exacerbating the long-present hostilities between desktop Linux and minimalist Linux sides at rates that are absolutely atypical. What will become of systemd remains unknown. Given Linux’s tendency for chaos, it might end up the new HAL, though with a significantly more painful aftermath, or it might continue on its merry way and become a Linux standard set in stone, in which case the Linux community will see a sharp ideological divide. Or perhaps it won’t. Perhaps things will go on as usual, on an endless spiral of reinvention without climax. Perhaps we will be doomed to flame on systemd for all eternity. Perhaps we’ll eventually get sick of it and just part our own ways into different corners.

Either way, I’ve become less and less fond of the politics surrounding uselessd and see systemd debates as being metaphorically like car crashes. I likely won’t help but chime in at times, though I intend uselessd to branch off into its own direction with time.

SYSTEMD

RHEL 7 adopts systemd, a very controversial subsystem. systemd is a suite of system management daemons, libraries, and utilities designed for Linux and programmed exclusively for the Linux API. There are no more runlevels. For servers systemd makes little sense. Sysadmins now need to learn new systemd commands for starting and stopping various services. The ‘service’ command is still included for backwards compatibility, but it may go away in future releases. See CentOS 7 - RHEL 7 systemd commands (Linux Brigade).
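For day-to-day administration the translation is mostly mechanical; a short cheat sheet (the httpd service name is just an example):

    # SysV-style commands (still work through compatibility shims):
    service httpd status
    chkconfig httpd on

    # systemd equivalents:
    systemctl status httpd.service
    systemctl enable httpd.service
    systemctl start httpd.service
    systemctl stop httpd.service

    # Runlevels are replaced by targets:
    systemctl get-default
    systemctl set-default multi-user.target    # roughly the old runlevel 3
    systemctl isolate rescue.target            # roughly the old runlevel 1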

From Wikipedia (systemd)

In a 2012 interview, Slackware's founder Patrick Volkerding expressed the following reservations about the systemd architecture, which are fully applicable to the server environment:

Concerning systemd, I do like the idea of a faster boot time (obviously), but I also like controlling the startup of the system with shell scripts that are readable, and I'm guessing that's what most Slackware users prefer too. I don't spend all day rebooting my machine, and having looked at systemd config files it seems to me a very foreign way of controlling a system to me, and attempting to control services, sockets, devices, mounts, etc., all within one daemon flies in the face of the UNIX concept of doing one thing and doing it well.
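The config files Volkerding refers to are declarative unit files rather than scripts; a minimal sketch for comparison, with the service name and paths purely illustrative:

    # /etc/systemd/system/mydaemon.service
    [Unit]
    Description=Example daemon
    After=network.target

    [Service]
    ExecStart=/usr/sbin/mydaemon --foreground
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target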

In an August 2014 article published in InfoWorld, Paul Venezia wrote about the systemd controversy, and attributed the controversy to violation of the Unix philosophy, and to "enormous egos who firmly believe they can do no wrong."[42] The article also characterizes the architecture of systemd as more similar to that of Microsoft Windows software:[42]

While systemd has succeeded in its original goals, it's not stopping there. systemd is becoming the Svchost of Linux – which I don't think most Linux folks want. You see, systemd is growing, like wildfire, well outside the bounds of enhancing the Linux boot experience. systemd wants to control most, if not all, of the fundamental functional aspects of a Linux system – from authentication to mounting shares to network configuration to syslog to cron.
 

LINUX CONTAINERS

Roughly ten years after Solaris 10 introduced them, Linux at last got containers.

Linux containers have emerged as a key open source application packaging and delivery technology, combining lightweight application isolation with the flexibility of image-based deployment methods. Developers have rapidly embraced Linux containers because they simplify and accelerate application deployment, and many Platform-as-a-Service (PaaS) platforms are built around Linux container technology, including OpenShift by Red Hat.

Red Hat Enterprise Linux 7 implements Linux containers using core technologies such as control groups (cGroups) for resource management, namespaces for process isolation, and SELinux for security, enabling secure multi-tenancy and reducing the potential for security exploits. The Red Hat container certification ensures that application containers built using Red Hat Enterprise Linux will operate seamlessly across certified container hosts.
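The same kernel primitives are visible without any container runtime at all; a quick illustration using unshare(1) from util-linux (not a substitute for a proper runtime):

    # Start a shell in new PID and mount namespaces; /proc is remounted so
    # ps only sees processes inside the namespace:
    unshare --fork --pid --mount-proc /bin/bash
    ps -ef                          # inside: only bash and ps are listed

    # cgroups handle the resource-management side; e.g. inspect the
    # hierarchy the current shell has been placed in:
    cat /proc/self/cgroup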

NUMA AFFINITY

With more and more systems, even at the low end, presenting non-uniform memory access (NUMA) topologies, Red Hat Enterprise Linux 7 addresses the performance irregularities that such systems present. A new, kernel-based NUMA affinity mechanism automates memory and scheduler optimization. It attempts to match processes that consume significant resources with available memory and CPU resources in order to reduce cross-node traffic. The resulting improved NUMA resource alignment improves performance for applications and virtual machines, especially when running memory-intensive workloads.
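A few commands are enough to see what the kernel is working with; a quick check (numactl and numastat come from the numactl package):

    numactl --hardware                      # nodes, CPUs and memory per node
    numastat                                # per-node hit/miss statistics
    cat /proc/sys/kernel/numa_balancing     # 1 = automatic NUMA balancing on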

HARDWARE EVENT REPORTING MECHANISM

Red Hat Enterprise Linux 7 unifies hardware event reporting into a single reporting mechanism. Instead of various tools collecting errors from different sources with different timestamps, a new hardware event reporting mechanism (HERM) will make it easier to correlate events and get an accurate picture of system behavior. HERM reports events in a single location and in a sequential timeline. HERM uses a new userspace daemon, rasdaemon, to catch and log all RAS events coming from the kernel tracing infrastructure.
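In practice this amounts to enabling the daemon and querying its log; a minimal sketch:

    systemctl enable rasdaemon
    systemctl start rasdaemon

    ras-mc-ctl --summary        # counts of logged memory/hardware errors
    ras-mc-ctl --errors         # the individual events, in timeline order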

VIRTUALIZATION GUEST INTEGRATION WITH VMWARE

Red Hat Enterprise Linux 7 advances the level of integration and usability between the Red Hat Enterprise Linux guest and VMware vSphere. Integration now includes:

• Open VM Tools — bundled open source virtualization utilities.
• 3D graphics drivers for hardware-accelerated OpenGL and X11 rendering.
• Fast communication mechanisms between VMware ESX and the virtual machine.

PARTITIONING DEFAULTS FOR ROLLBACK

The ability to revert to a known, good system configuration is crucial in a production environment. Using LVM snapshots with ext4 and XFS (or the integrated snapshotting feature in Btrfs described in the “Snapper” section) an administrator can capture the state of a system and preserve it for future use. An example use case would involve an in-place upgrade that does not present a desired outcome and an administrator who wants to restore the original configuration.
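With LVM the procedure is short; a hedged sketch, assuming a volume group vg0 with a root logical volume and enough free extents to hold the snapshot:

    # Before the risky change, snapshot the root LV:
    lvcreate --size 5G --snapshot --name root_pre_upgrade /dev/vg0/root

    # If the upgrade disappoints, merge the snapshot back into the origin;
    # the merge completes when the origin is next activated (i.e. on reboot):
    lvconvert --merge /dev/vg0/root_pre_upgrade

    # If all went well, simply drop the snapshot instead:
    lvremove /dev/vg0/root_pre_upgrade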

CREATING INSTALLATION MEDIA

Red Hat Enterprise Linux 7 introduces Live Media Creator for creating customized installation media from a kickstart file for a range of deployment use cases. Media can then be used to deploy standardized images whether on standardized corporate desktops, standardized servers, virtual machines, or hyperscale deployments. Live Media Creator, especially when used with templates, provides a way to control and manage configurations across the enterprise.
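Live Media Creator is the livemedia-creator tool shipped in the lorax package; a hedged sketch of building a custom boot image from a kickstart (all paths are placeholders):

    livemedia-creator --make-iso \
        --ks=/path/to/standard-server.ks \
        --iso=/path/to/rhel-server-7-x86_64-boot.iso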

SERVER PROFILE TEMPLATES

Red Hat Enterprise Linux 7 features the ability to use installation templates to create servers for common workloads. These templates can simplify and speed creating and deploying Red Hat Enterprise Linux servers, even for those with little or no experience with Linux.



NEWS CONTENTS

Old News ;-)

[Nov 09, 2018] OpenStack is overkill for Docker

Notable quotes:
"... OpenStack's core value is to gather a pool of hypervisor-enabled computers and enable the delivery of virtual machines (VMs) on demand to users. ..."
Nov 09, 2018 | www.techrepublic.com


Both OpenStack and Docker were conceived to make IT more agile. OpenStack has strived to do this by turning hitherto static IT resources into elastic infrastructure, whereas Docker has reached for this goal by harmonizing development, test, and production resources, as Red Hat's Neil Levine suggests.

But while Docker adoption has soared, OpenStack is still largely stuck in neutral. OpenStack is kept relevant by so many wanting to believe its promise, but it keeps failing to hit its stride due to a host of factors, including complexity.

And yet Docker could be just the thing to turn OpenStack's popularity into productivity. Whether a Docker-plus-OpenStack pairing is right for your enterprise largely depends on the kind of capacity your enterprise hopes to deliver. If it's simply Docker, OpenStack is probably overkill.

An open source approach to delivering virtual machines

OpenStack is an operational model for delivering virtualized compute capacity.

Sure, some give it a more grandiose definition ("OpenStack is a set of software tools for building and managing cloud computing platforms for public and private clouds"), but if we ignore secondary services like Cinder, Heat, and Magnum, for example, OpenStack's core value is to gather a pool of hypervisor-enabled computers and enable the delivery of virtual machines (VMs) on demand to users.

That's it.

Not that this is a small thing. After all, without OpenStack, the hypervisor sits idle, lonesome on a single computer, with no way to expose that capacity programmatically (or otherwise) to users.

Before cloudy systems like OpenStack or Amazon's EC2, users would typically file a help ticket with IT. An IT admin, in turn, would use a GUI or command line to create a VM, and then share the credentials with the user.

Systems like OpenStack significantly streamline this process, enabling IT to programmatically deliver capacity to users. That's a big deal.
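Programmatic really does mean a single call; a hedged sketch using the standard OpenStack client (flavor, image and network names are whatever the local cloud defines):

    openstack server create \
        --flavor m1.small \
        --image centos7 \
        --network private \
        demo-vm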

Docker peanut butter, meet OpenStack jelly

Docker, the darling of the containers world, is similar to the VM in the IaaS picture painted above.

A Docker host is really the unit of compute capacity that users need, and not the container itself. Docker addresses what you do with a host once you've got it, but it doesn't really help you get the host in the first place.

Docker Machine provides a client-side tool that lets you request Docker hosts from an IaaS provider (like EC2 or OpenStack or vSphere), but it's far from a complete solution. In part, this stems from the fact that Docker doesn't have a tenancy model.

With a hypervisor, each VM is a tenant. But in Docker, the Docker host is a tenant. You typically don't want multiple users sharing a Docker host because then they see each other's containers. So typically an enterprise will layer a cloud system underneath Docker to add tenancy. This yields a stack that looks like: hardware > hypervisor > Docker host > container.

A common approach today would be to take OpenStack and use it as the enterprise platform to deliver capacity on demand to users. In other words, users rely on OpenStack to request a Docker host, and then they use Docker to run containers in their Docker host.

So far, so good.
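In practice that pairing looks roughly like the following sketch, assuming Docker Machine's OpenStack driver and the usual OpenStack credential variables (OS_AUTH_URL, OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME) already exported; flavor and image names are placeholders:

    # Ask OpenStack for a VM and provision it as a Docker host:
    docker-machine create -d openstack \
        --openstack-flavor-name m1.small \
        --openstack-image-name centos7 \
        docker-host-1

    # Point the local docker client at that host and run containers in it:
    eval "$(docker-machine env docker-host-1)"
    docker run -d --name web nginx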

If all you need is Docker...

Things get more complicated when we start parsing what capacity needs delivering.

When an enterprise wants to use Docker, they need to get Docker hosts from a data center. OpenStack can do that, and it can do it alongside delivering all sorts of other capacity to the various teams within the enterprise.

But if all an enterprise IT team needs is Docker containers delivered, then OpenStack -- or a similar orchestration tool -- may be overkill, as VMware executive Jared Rosoff told me.

For this sort of use case, we really need a new platform. This platform could take the form of a piece of software that an enterprise installs on all of its computers in the data center. It would expose an interface to developers that lets them programmatically create Docker hosts when they need them, and then use Docker to create containers in those hosts.

Google has a vision for something like this with its Google Container Engine. Amazon has something similar in its EC2 Container Service. These are both APIs that developers can use to provision some Docker-compatible capacity from their data center.

As for Docker, the company behind Docker the technology, it seems to have punted on this problem, focusing instead on what happens on the host itself.

While we probably don't need to build up a big OpenStack cloud simply to manage Docker instances, it's worth asking what OpenStack should look like if what we wanted to deliver was only Docker hosts, and not VMs.

Again, we see Google and Amazon tackling the problem, but when will OpenStack, or one of its supporters, do the same? The obvious candidate would be VMware, given its longstanding dominance of tooling around virtualization. But the company that solves this problem first, and in a way that comforts traditional IT with familiar interfaces yet pulls them into a cloudy future, will win, and win big.

[Nov 06, 2018] Welcome to devuan.org Devuan GNU+Linux Free Operating System

Nov 06, 2018 | devuan.org

Devuan GNU+Linux is a fork of Debian without systemd. Devuan's stable release is now 2.0.0 ASCII. The 1.0.0 Jessie release (LTS) has moved to oldstable status. Since the declaration of intention to fork in 2014, infrastructure has been put in place to support Devuan's mission to offer users control over their system. Devuan Jessie provided a safe upgrade path from Debian 7 (Wheezy) and Debian 8 (Jessie). Now Devuan ASCII offers an upgrade from Devuan Jessie as well as a transition from Debian 9 (Stretch) that avoids unnecessary entanglements and ensures Init Freedom.

Devuan aliases its releases using minor planet names as codenames. Devuan file names follow this release naming scheme.

Devuan release   Suite            Planet nr.   Debian release
Jessie           Oldstable        10464        Jessie
ASCII            Stable           3568         Stretch
Beowulf          In development   38086        Buster
Ceres            Unstable         1            Sid
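For the record, the transition from Debian 9 (Stretch) that the project describes amounts to repointing apt at Devuan's merged repository; a hedged sketch only, so verify the current instructions on devuan.org before relying on it:

    # /etc/apt/sources.list -- replace the Debian entries with:
    deb http://deb.devuan.org/merged ascii main

    # Pull in Devuan's signing keys, then dist-upgrade
    # (the first update/install may need --allow-unauthenticated
    # until devuan-keyring is in place):
    apt-get update
    apt-get install devuan-keyring
    apt-get update
    apt-get dist-upgrade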

[Nov 06, 2018] Init system support in Debian by jake

Notable quotes:
"... The Devuan distribution is a Debian derivative that has removed systemd; many of the vocal anti-systemd Debian developers have switched, ..."
Oct 31, 2018 | lwn.net

The " systemd question " has roiled Debian multiple times over the years, but things had mostly been quiet on that front of late.

The Devuan distribution is a Debian derivative that has removed systemd; many of the vocal anti-systemd Debian developers have switched, which helps reduce the friction on the Debian mailing lists.

But that seems to have allowed support for init system alternatives (and System V init in particular) to bitrot in Debian.

There are signs that a bit of reconciliation between Debian and Devuan will help fix that problem.

[Nov 03, 2018] Is Red Hat IBM's 'Hail Mary' pass

Notable quotes:
"... if those employees become unhappy, they can effectively go anywhere they want. ..."
"... IBM's partner/reseller ecosystem is nowhere near what it was since it owned the PC and Server businesses that Lenovo now owns. And IBM's Softlayer/BlueMix cloud is largely tied to its legacy software business, which, again, is slowing. ..."
"... I came to IBM from their SoftLayer acquisition. Their ability to stomp all over the things SoftLayer was almost doing right were astounding. I stood and listened to Ginni say things like, "We purchased SoftLayer because we need to learn from you," and, "We want you to teach us how to do Cloud the right way, since we spent all these years doing things the wrong way," and, "If you find yourself in a meeting with one of our old teams, you guys are gonna be the ones in charge. You are the ones who know how this is supposed to work - our culture has failed at it." Promises which were nothing more than hollow words. ..."
"... Next, it's a little worrisome that the author, now over the whole IBM thing is recommending firing "older people," you know, the ones who helped the company retain its performance in years' past. The smartest article I've read about IBM worried about its cheap style of "acquiring" non-best-of-breed companies and firing oodles of its qualified R&D guys. THAT author was right. ..."
"... Four years in GTS ... joined via being outsourced to IBM by my previous employer. Left GTS after 4 years. ..."
"... The IBM way of life was throughout the Oughts and the Teens an utter and complete failure from the perspective of getting work done right and using people to their appropriate and full potential. ..."
"... As a GTS employee, professional technical training was deemed unnecessary, hence I had no access to any unless I paid for it myself and used my personal time ... the only training available was cheesy presentations or other web based garbage from the intranet, or casual / OJT style meetings with other staff who were NOT professional or expert trainers. ..."
"... As a GTS employee, I had NO access to the expert and professional tools that IBM fricking made and sold to the same damn customers I was supposed to be supporting. Did we have expert and professional workflow / document management / ITIL aligned incident and problem management tools? NO, we had fricking Lotus Notes and email. Instead of upgrading to the newest and best software solutions for data center / IT management & support, we degraded everything down the simplest and least complex single function tools that no "best practices" organization on Earth would ever consider using. ..."
"... And the people management paradigm ... employees ranked annually not against a static or shared goal or metric, but in relation to each other, and there was ALWAYS a "top 10 percent" and a "bottom ten percent" required by upper management ... a system that was sociopathic in it's nature because it encourages employees to NOT work together ... by screwing over one's coworkers, perhaps by not giving necessary information, timely support, assistance as needed or requested, one could potentially hurt their performance and make oneself look relatively better. That's a self-defeating system and it was encouraged by the way IBM ran things. ..."
Nov 03, 2018 | www.zdnet.com
Brain drain is a real risk

IBM has not had a particularly great track record when it comes to integrating the cultures of other companies into its own, and brain drain with a company like Red Hat is a real risk because if those employees become unhappy, they can effectively go anywhere they want. They have the skills to command very high salaries at any of the top companies in the industry.

The other issue is that IBM hasn't figured out how to capture revenue from SMBs -- and that has always been elusive for them. Unless a deal is worth at least $1 million, and realistically $10 million, sales guys at IBM don't tend to get motivated.

Also: Red Hat changes its open-source licensing rules

The 5,000-seat and below market segment has traditionally been partner territory, and when it comes to reseller partners for its cloud, IBM is way, way behind AWS, Microsoft, Google, or even (gasp) Oracle, which is now offering serious margins to partners that land workloads on the Oracle cloud.

IBM's partner/reseller ecosystem is nowhere near what it was since it owned the PC and Server businesses that Lenovo now owns. And IBM's Softlayer/BlueMix cloud is largely tied to its legacy software business, which, again, is slowing.

... ... ...

But I think that it is very unlikely the IBM Cloud, even when juiced on Red Hat steroids, will become anything more ambitious than a boutique business for hybrid workloads when compared with AWS or Azure. Realistically, it has to be the kind of cloud platform that interoperates well with the others or nobody will want it.


geek49203_z , Wednesday, April 26, 2017 10:27 AM

Ex-IBM contractor here...

1. IBM used to value long-term employees. Now they "value" short-term contractors -- but they still pull them out of production for lots of training that, quite frankly, isn't exactly needed for what they are doing. Personally, I think that IBM would do well to return to valuing employees instead of looking at them as expendable commodities, but either way, they need to get past the legacies of when they had long-term employees all watching a single main frame.

2. As IBM moved to an army of contractors, they killed off the informal (but important!) web of tribal knowledge. You know, a friend of a friend who knew the answer to some issue, or knew something about this customer? What has happened is that the transaction costs (as economists call them) have escalated until IBM can scarcely order IBM hardware for its own projects, or have SDM's work together.

M Wagner geek49203_z , Wednesday, April 26, 2017 10:35 AM
geek49203_z Number 2 is a problem everywhere. As long-time employees (mostly baby-boomers) retire, their replacements are usually straight out of college with various non-technical degrees. They come in with little history and few older-employees to which they can turn for "the tricks of the trade".
Shmeg , Wednesday, April 26, 2017 10:41 AM
I came to IBM from their SoftLayer acquisition. Their ability to stomp all over the things SoftLayer was almost doing right were astounding. I stood and listened to Ginni say things like, "We purchased SoftLayer because we need to learn from you," and, "We want you to teach us how to do Cloud the right way, since we spent all these years doing things the wrong way," and, "If you find yourself in a meeting with one of our old teams, you guys are gonna be the ones in charge. You are the ones who know how this is supposed to work - our culture has failed at it." Promises which were nothing more than hollow words.
cavman , Wednesday, April 26, 2017 3:58 PM
In the 1970's 80's and 90's I was working in tech support for a company called ROLM. We were doing communications , voice and data and did many systems for Fortune 500 companies along with 911 systems and the secure system at the White House. My job was to fly all over North America to solve problems with customers and integration of our equipment into their business model. I also did BETA trials and documented systems so others would understand what it took to make it run fine under all conditions.

In 84 IBM bought a percentage of the company and the next year they bought out the company. When someone said to me "IBM just bought you out, you must think you died and went to heaven." My response was "Think of them as being like the Federal Government but making a profit". They were so heavily structured and hidebound that it was a constant battle working with them. Their response to any comments was "We are IBM"

I was working on an equipment project in Colorado Springs and IBM took control. I was immediately advised that I could only talk to the people in my assigned group and if I had a question outside of my group I had to put it in writing and give it to my manager and if he thought it was relevant it would be forwarded up the ladder of management until it reached a level of a manager that had control of both groups and at that time if he thought it was relevant it would be sent to that group who would send the answer back up the ladder.

I'm a Vietnam Veteran and I used my military training to get things done just like I did out in the field. I went looking for the person I could get an answer from.

At first others were nervous about doing that but within a month I had connections all over the facility and started introducing people at the cafeteria. Things moved quickly as people started working together as a unit. I finished my part of the work which was figuring all the spares technicians would need plus the costs for packaging and service contract estimates. I submitted it to all the people that needed it. I was then hauled into a meeting room by the IBM management and advised that I was a disruptive influence and would be removed. Just then the final contracts that vendors had to sign showed up and it used all my info. The IBM people were livid that they were not involved.

By the way a couple months later the IBM THINK magazine came out with a new story about a radical concept they had tried. A cover would not fit on a component and under the old system both the component and the cover would be thrown out and they would start from scratch doing it over. They decided to have the two groups sit together and figure out why it would not fit and correct it on the spot.

Another great example of IBM people is we had a sales contract to install a multi-node voice mail system at WANG computers but we lost it because the IBM people insisted on bundling AS/400 systems into the sale to WANG computer. Instead we lost a multi-million dollar contract.

Eventually Siemens bought 50% of the company and eventually full control. Now all we heard was "That is how we do it in Germany" Our response was "How did that WW II thing work out".

Stockholder , Wednesday, April 26, 2017 7:20 PM
The author may have more loyalty to Microsoft than he confides, is the first thing noticeable about this article. The second thing is that in terms of getting rid of those aged IBM workers, I think he may have completely missed the mark, in fairness, that may be the product of his IBM experience, The sheer hubris of tech-talking from the middle of the story and missing the global misstep that is today's IBM is noticeable. As a stockholder, the first question is, "Where is the investigation to the breach of fiduciary duty by a board that owes its loyalty to stockholders who are scratching their heads at the 'positive' spin the likes of Ginni Rometty is putting on 20 quarters of dead losses?" Got that, 20 quarters of losses.

Next, it's a little worrisome that the author, now over the whole IBM thing is recommending firing "older people," you know, the ones who helped the company retain its performance in years' past. The smartest article I've read about IBM worried about its cheap style of "acquiring" non-best-of-breed companies and firing oodles of its qualified R&D guys. THAT author was right.

IBM's been run into the ground by Ginni, I'll use her first name, since apparently my money is now used to prop up this sham of a leader, who from her uncomfortable public announcement with Tim Cook of Apple, which HAS gone up, by the way, has embraced every political trend, not cause but trend from hiring more women to marginalizing all those old-time white males...You know the ones who produced for the company based on merit, sweat, expertise, all those non-feeling based skills that ultimately are what a shareholder is interested in and replaced them with young, and apparently "social" experts who are pasting some phony "modernity" on a company that under Ginni's leadership has become more of a pet cause than a company.

Finally, regarding ageism and the author's advocacy for the same, IBM's been there, done that as they lost an age discrimination lawsuit decades ago. IBM gave up on doing what it had the ability to do as an enormous business and instead under Rometty's leadership has tried to compete with the scrappy startups where any halfwit knows IBM cannot compete.

The company has rendered itself ridiculous under Rometty, a board that collects paychecks and breaches any notion of fiduciary duty to shareholders, an attempt at partnering with a "mod" company like Apple that simply bolstered Apple and left IBM languishing and a rejection of what has a track record of working, excellence, rewarding effort of employees and the steady plod of performance. Dump the board and dump Rometty.

jperlow Stockholder , Wednesday, April 26, 2017 8:36 PM
Stockholder Your comments regarding any inclination towards age discrimination are duly noted, so I added a qualifier in the piece.
Gravyboat McGee , Wednesday, April 26, 2017 9:00 PM
Four years in GTS ... joined via being outsourced to IBM by my previous employer. Left GTS after 4 years.

The IBM way of life was throughout the Oughts and the Teens an utter and complete failure from the perspective of getting work done right and using people to their appropriate and full potential. I went from a multi-disciplinary team of engineers working across technologies to support corporate needs in the IT environment to being siloed into a single-function organization.

My first year of on-boarding with IBM was spent deconstructing application integration and cross-organizational structures of support and interwork that I had spent 6 years building and maintaining. Handing off different chunks of work (again, before the outsourcing, an Enterprise solution supported by one multi-disciplinary team) to different IBM GTS work silos that had no physical spatial relationship and no interworking history or habits. What we're talking about here is the notion of "left hand not knowing what the right hand is doing" ...

THAT was the IBM way of doing things, and nothing I've read about them over the past decade or so tells me it has changed.

As a GTS employee, professional technical training was deemed unnecessary, hence I had no access to any unless I paid for it myself and used my personal time ... the only training available was cheesy presentations or other web based garbage from the intranet, or casual / OJT style meetings with other staff who were NOT professional or expert trainers.

As a GTS employee, I had NO access to the expert and professional tools that IBM fricking made and sold to the same damn customers I was supposed to be supporting. Did we have expert and professional workflow / document management / ITIL aligned incident and problem management tools? NO, we had fricking Lotus Notes and email. Instead of upgrading to the newest and best software solutions for data center / IT management & support, we degraded everything down the simplest and least complex single function tools that no "best practices" organization on Earth would ever consider using.

And the people management paradigm ... employees ranked annually not against a static or shared goal or metric, but in relation to each other, and there was ALWAYS a "top 10 percent" and a "bottom ten percent" required by upper management ... a system that was sociopathic in its nature because it encourages employees to NOT work together ... by screwing over one's coworkers, perhaps by not giving necessary information, timely support, assistance as needed or requested, one could potentially hurt their performance and make oneself look relatively better. That's a self-defeating system and it was encouraged by the way IBM ran things.

The "not invented here" ideology was embedded deeply in the souls of all senior IBMers I ever met or worked with ... if you come on board with any outside knowledge or experience, you must not dare to say "this way works better" because you'd be shut down before you could blink. The phrase "best practices" to them means "the way we've always done it".

IBM gave up on innovation long ago. Since the 90's the vast majority of their software has been bought, not built. Buy a small company, strip out the innovation, slap an IBM label on it, sell it as the next coming of Jesus even though they refuse to expend any R&D to push the product to the next level ... damn near everything IBM sold was gentrified, never cutting edge.

And don't get me started on sales practices ... tell the customer how product XYZ is a guaranteed moonshot, they'll be living on lunar real estate in no time at all, and after all the contracts are signed hand the customer a box of nuts & bolts and a letter telling them where they can look up instructions on how to build their own moon rocket. Or for XX dollars more a year, hire a Professional Services IBMer to build it for them.

I have no sympathy for IBM. They need a clean sweep throughout upper management, especially any of the old True Blue hard-core IBMers.

billa201 , Thursday, April 27, 2017 11:24 AM
You obviously have been gone from IBM for a while, as they do not treat their employees well anymore and get rid of good talent rather than keeping it -- a sad state.
ClearCreek , Tuesday, May 9, 2017 7:04 PM
We tried our best to be SMB partners with IBM & Arrow in the early 2000s ... but could never get any traction. I personally needed a mentor, but never found one. I still have/wear some of their swag, and I write this right now on a re-purposed IBM 1U server that is 10 years old, but ... I can't see any way our small company can make $ with them.

Watson is impressive, but you can't build a company on just Watson. This author has some great ideas, yet the phrase that keeps coming to me is internal politics. That corrosive reality has & will kill companies, and it will kill IBM unless it is dealt with.

Turn-arounds are possible (look at MS), but they are hard and dangerous. Hope IBM can figure it out...

[Nov 02, 2018] The D in Systemd stands for 'Dammmmit!' A nasty DHCPv6 packet can pwn a vulnerable Linux box by Shaun Nichols

Notable quotes:
"... Hole opens up remote-code execution to miscreants – or a crash, if you're lucky ..."
"... You can use NAT with IPv6. ..."
Oct 26, 2018 | theregister.co.uk

Hole opens up remote-code execution to miscreants – or a crash, if you're lucky

A security bug in Systemd can be exploited over the network to, at best, potentially crash a vulnerable Linux machine, or, at worst, execute malicious code on the box.

The flaw therefore puts Systemd-powered Linux computers – specifically those using systemd-networkd – at risk of remote hijacking: maliciously crafted DHCPv6 packets can try to exploit the programming cockup and arbitrarily change parts of memory in vulnerable systems, leading to potential code execution. This code could install malware, spyware, and other nasties, if successful.

The vulnerability – which was made public this week – sits within the written-from-scratch DHCPv6 client of the open-source Systemd management suite, which is built into various flavors of Linux.

This client is activated automatically if IPv6 support is enabled, and relevant packets arrive for processing. Thus, a rogue DHCPv6 server on a network, or in an ISP, could emit specially crafted router advertisement messages that wake up these clients, exploit the bug, and possibly hijack or crash vulnerable Systemd-powered Linux machines.

Here's the Red Hat Linux summary:

systemd-networkd is vulnerable to an out-of-bounds heap write in the DHCPv6 client when handling options sent by network adjacent DHCP servers. A attacker could exploit this via malicious DHCP server to corrupt heap memory on client machines, resulting in a denial of service or potential code execution.

Felix Wilhelm, of the Google Security team, was credited with discovering the flaw, designated CVE-2018-15688 . Wilhelm found that a specially crafted DHCPv6 network packet could trigger "a very powerful and largely controlled out-of-bounds heap write," which could be used by a remote hacker to inject and execute code.

"The overflow can be triggered relatively easy by advertising a DHCPv6 server with a server-id >= 493 characters long," Wilhelm noted.

In addition to Ubuntu and Red Hat Enterprise Linux, Systemd has been adopted as a service manager for Debian, Fedora, CoreOS, Mint, and SUSE Linux Enterprise Server. We're told RHEL 7, at least, does not use the vulnerable component by default.

Systemd creator Lennart Poettering has already published a security fix for the vulnerable component – this should be weaving its way into distros as we type.

If you run a Systemd-based Linux system, and rely on systemd-networkd, update your operating system as soon as you can to pick up the fix when available and as necessary.
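Before rushing a patch cycle it is worth confirming whether a box is exposed at all; a quick check on a systemd-based host:

    systemctl is-enabled systemd-networkd    # 'disabled' plus 'inactive' means
    systemctl is-active systemd-networkd     #   the vulnerable client never runs
    rpm -q systemd                           # installed version (RHEL/CentOS/Fedora)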

The bug will come as another argument against Systemd as the Linux management tool continues to fight for the hearts and minds of admins and developers alike. Though a number of major admins have in recent years adopted and championed it as the replacement for the old Init era, others within the Linux world seem to still be less than impressed with Systemd and Poettering's occasionally controversial management of the tool. ®

Oh Homer , 6 days

Meh

As anyone who bothers to read my comments (BTW "hi" to both of you) already knows, I despise systemd with a passion, but this one is more an IPv6 problem in general.

Yes this is an actual bug in networkd, but IPv6 seems to be far more bug prone than v4, and problems are rife in all implementations. Whether that's because the spec itself is flawed, or because nobody understands v6 well enough to implement it correctly, or possibly because there's just zero interest in making any real effort, I don't know, but it's a fact nonetheless, and my primary reason for disabling it wherever I find it. Which of course contributes to the "zero interest" problem that perpetuates v6's bug prone condition, ad nauseam.

IPv6 is just one of those tech pariahs that everyone loves to hate, much like systemd, albeit fully deserved IMO.

Oh yeah, and here's the obligatory "systemd sucks". Personally I always assumed the "d" stood for "destroyer". I believe the "IP" in "IPv6" stands for "Idiot Protocol".

Anonymous Coward , 6 days
Re: Meh

"nonetheless, and my primary reason for disabling it wherever I find it. "

The very first guide I read to hardening a system recommended disabling services you didn't need and emphasized IPV6 for the reasons you just stated.

Wasn't there a bug in Xorg reported recently as well?

https://www.theregister.co.uk/2018/10/25/x_org_server_vulnerability/

"FreeDesktop.org Might Formally Join Forces With The X.Org Foundation"

https://www.phoronix.com/scan.php?page=news_item&px=FreeDesktop-org-Xorg-Forces

Also, does this mean that Facebook was vulnerable to attack, again?

"Simply put, you could say Facebook loves systemd."

https://www.phoronix.com/scan.php?page=news_item&px=Facebook-systemd-2018

Jay Lenovo , 6 days
Re: Meh

IPv6 and SystemD: Forced industry standard diseases that requires most of us to bite our lips and bear it.

Fortunately, IPv6 by lack of adopted use, limits the scope of this bug.

vtcodger , 6 days
Re: Meh
Fortunately, IPv6 by lack of adopted use, limits the scope of this bug.

Yeah, fortunately IPv6 is only used by a few fringe organizations like Google and Microsoft.

Seriously, I personally want nothing to do with either systemd or IPv6. Both seem to me to fall into the bin labeled "If it ain't broke, let's break it" But still it's troubling that things that some folks regard as major system components continue to ship with significant security flaws. How can one trust anything connected to the Internet that is more sophisticated and complex than a TV streaming box?

DougS , 6 days
Re: Meh

Was going to say the same thing, and I disable IPv6 for the exact same reason. IPv6 code isn't as well tested, as well audited, or as well targeted looking for exploits as IPv4. Stuff like this only proves that it was smart to wait, and I should wait some more.

Nate Amsden , 6 days
Re: Meh

Count me in the camp of who hates systemd(hates it being "forced" on just about every distro, otherwise wouldn't care about it - and yes I am moving my personal servers to Devuan, thought I could go Debian 7->Devuan but turns out that may not work, so I upgraded to Debian 8 a few weeks ago, and will go to Devuan from there in a few weeks, upgraded one Debian 8 to Devuan already 3 more to go -- Debian user since 1998), when reading this article it reminded me of

https://www.theregister.co.uk/2017/06/29/systemd_pwned_by_dns_query/

bombastic bob , 6 days
The gift that keeps on giving (systemd) !!!

This makes me glad I'm using FreeBSD. The Xorg version in FreeBSD's ports is currently *slightly* older than the Xorg version that had that vulnerability in it. AND, FreeBSD will *NEVER* have systemd in it!

(and, for Linux, when I need it, I've been using Devuan)

That being said, the whole idea of "let's do a re-write and do a 'systemd' instead of 'system V init' because WE CAN and it's OUR TURN NOW, 'modern' 'change for the sake of change' etc." kinda reminds me of recent "update" problems with Win-10-nic...

Oh, and an obligatory Schadenfreude laugh: HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA!!!!!!!!!!!!!!!!!!!

Long John Brass , 6 days
Re: The gift that keeps on giving (systemd) !!!

Finally got all my machines cut over from Debian to Devuan.

Might spin a FreeBSD system up in a VM and have a play.

I suspect that the infestation of stupid into the Linux space won't stop with or be limited to SystemD. I will wait and watch to see what damage the re-education gulag has done to Sweary McSwearFace (Mr Torvalds)

Dan 55 , 6 days
Re: Meh

I despise systemd with a passion, but this one is more an IPv6 problem in general.

Not really, systemd has its tentacles everywhere and runs as root. Exploits which affect systemd therefore give you the keys to the kingdom.

Orv , 3 days
Re: Meh
Not really, systemd has its tentacles everywhere and runs as root.

Yes, but not really the problem in this case. Any DHCP client is going to have to run at least part of the time as root. There's not enough nuance in the Linux privilege model to allow it to manipulate network interfaces, otherwise.

Long John Brass , 3 days
Re: Meh
Yes, but not really the problem in this case. Any DHCP client is going to have to run at least part of the time as root. There's not enough nuance in the Linux privilege model to allow it to manipulate network interfaces, otherwise.

Sorry but utter bullshit. You can if you are so inclined you can use the Linux Capabilities framework for this kind of thing. See https://wiki.archlinux.org/index.php/capabilities
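For illustration, file capabilities let a binary manipulate interfaces without running as full root; a sketch only, and the client path is a placeholder rather than any real package's binary:

    # Grant only the network-related capabilities:
    setcap cap_net_admin,cap_net_raw,cap_net_bind_service+ep /usr/local/sbin/example-dhcp-client
    getcap /usr/local/sbin/example-dhcp-client    # verify what was granted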

JohnFen , 6 days
Yay for me

"If you run a Systemd-based Linux system"

I remain very happy that I don't use systemd on any of my machines anymore. :)

"others within the Linux world seem to still be less than impressed with Systemd"

Yep, I'm in that camp. I gave it a good, honest go, but it increased the amount of hassle and pain of system management without providing any noticeable benefit, so I ditched it.

ElReg!comments!Pierre , 2 days
Re: Time to troll

> Just like it's entirely possible to have a Linux system without any GNU in it

Just like it's possible to have a GNU system without Linux on it - ho well as soon as GNU MACH is finally up to the task ;-)

On the systemd angle, I, too, am in the process of switching all my machines from Debian to Devuan but on my personal(*) network a few systemd-infected machines remain, thanks to a combination of laziness on my part and a stubborn "systemd is quite OK" attitude from the raspy foundation. That vuln may be the last straw: one of the aforementioned machines sits on my DMZ, chatting freely with the outside world. Nothing really crucial on it, but I'd hate it if it became a foothold for nasties on my network.

(*) policy at work is RHEL, and that's negotiated far above my influence level, but I don't really care as all my important stuff runs on Z/OS anyway ;-). OK, we have to reboot a few VMs occasionally when systemd throws a hissy fit - which is surprisingly often for an "enterprise" OS - but meh.

Destroy All Monsters , 5 days
Re: Not possible

This code is actually pretty bad and should raise all kinds of red flags in a code review.

Anonymous Coward , 5 days
Re: Not possible

ITYM Lennart

Christian Berger , 5 days
Re: Not possible

"This code is actually pretty bad and should raise all kinds of red flags in a code review."

Yeah, but for that you need people who can do code reviews, and also people who can accept criticism. That also means saying "no" to people who are bad at coding, and saying that repeatedly if they don't learn.

SystemD seems to be the area where people gather who want to get code in for their resumes, not for people who actually want to make the world a better place.

jake , 6 days
There is a reason ...

... that an init, traditionally, is a small bit of code that does one thing very well. Like most of the rest of the *nix core utilities. All an init should do is start PID1, set run level, spawn a tty (or several), handle a graceful shutdown, and log all the above in plaintext to make troubleshooting as simplistic as possible. Anything else is a vanity project that is best placed elsewhere, in its own stand-alone code base.

Inventing a clusterfuck init variation that's so big and bulky that it needs to be called a "suite" is just asking for trouble.

IMO, systemd is a cancer that is growing out of control, and needs to be cut out of Linux before it infects enough of the system to kill it permanently.

AdamWill , 6 days
Re: There is a reason ...

That's why systemd-networkd is a separate, optional component, and not actually part of the init daemon at all. Most systemd distros do not use it by default and thus are not vulnerable to this unless the user actively disables the default network manager and chooses to use networkd instead.

Anonymous Coward , 4 days
Re: There is a reason ...

"Just go install a default Fedora or Ubuntu system and check for yourself: you'll have systemd, but you *won't* have systemd-networkd running."

Funny that I installed ubuntu 18.04 a few weeks ago and the fucking thing installed itself then! ( and was a fucking pain to remove).

LP is a fucking arsehole.

Orv , 3 days
Re: There is a reason ...
Pardon my ignorance (I don't use a distro with systemd) why bother with networkd in the first place if you don't have to use it.

Mostly because the old-style init system doesn't cope all that well with systems that move from network to network. It works for systems with a static IP, or that do a DHCP request at boot, but it falls down on anything more dynamic.

In order to avoid restarting the whole network system every time they switch WiFi access points, people have kludged on solutions like NetworkManager. But it's hard to argue it's more stable or secure than networkd. And this is always going to be a point of vulnerability because anything that manipulates network interfaces will have to be running as root.

These days networking is essential to the basic functionality of most computers; I think there's a good argument that it doesn't make much sense to treat it as a second-class citizen.

AdamWill , 2 days
Re: There is a reason ...

"Funny that I installed ubuntu 18.04 a few weeks ago and the fucking thing installed itself then! ( and was a fucking pain to remove)."

So I looked into it a bit more, and from a few references at least, it seems like Ubuntu has a sort of network configuration abstraction thingy that can use both NM and systemd-networkd as backends; on Ubuntu desktop flavors NM is usually the default, but apparently for recent Ubuntu Server, networkd might indeed be the default. I didn't notice that as, whenever I want to check what's going on in Ubuntu land, I tend to install the default desktop spin...

"LP is a fucking arsehole."

systemd's a lot bigger than Lennart, you know. If my grep fu is correct, out of 1543 commits to networkd, only 298 are from Lennart...

alain williams , 6 days
Old is good

in many respects when it comes to software because, over time, the bugs will have been found and squashed. Systemd brings in a lot of new code which will, naturally, have lots of bugs that will take time to find & remove. This is why we get problems like this DHCP one.

Much as I like the venerable init: it did need replacing. Systemd is one way to go, more flexible, etc, etc. Something event driven is a good approach.

One of the main problems with systemd is that it has become too big, slurped up lots of functionality which has removed choice, increased fragility. They should have concentrated on adding ways of talking to existing daemons, eg dhcpd, through an API/something. This would have reused old code (good) and allowed other implementations to use the API - this letting people choose what they wanted to run.

But no: Poettering seems to want to build a Cathedral rather than a Bazaar.

He appears to want to make it his way or no way. This is bad, one reason that *nix is good is because different solutions to a problem have been able to be chosen, one removed and another slotted in. This encourages competition and the 'best of breed' comes out on top. Poettering is endangering that process.

Also: his refusal to accept patches to let it work on non-Linux Unix is just plain nasty.

oiseau , 4 days
Re: Old is good

Hello:

One of the main problems with systemd is that it has become too big, slurped up lots of functionality which has removed choice, increased fragility.

IMO, there is a striking parallel between systemd and the registry in Windows OSs.

After many years of dealing with the registry (W98 to XPSP3) I ended up seeing the registry as a sort of developer sanctioned virus running inside the OS, constantly changing and going deeper and deeper into the OS with every iteration and as a result, progressively putting an end to the possibility of knowing/controlling what was going on inside your box/the OS.

Years later, when I learned about the existence of systemd (I was already running Ubuntu) and read up on what it did and how it did it, it dawned on me that systemd was nothing more than a registry class virus and it was infecting Linux_land at the behest of the developers involved.

So I moved from Ubuntu to PCLinuxOS and then on to Devuan.

Call me paranoid but I am convinced that there are people both inside and outside IT that actually want this and are quite willing to pay shitloads of money for it to happen.

I don't see this MS cozying up to Linux in various ways lately as a coincidence: these things do not happen just because or on a senior manager's whim.

What I do see (YMMV) is systemd being a sort of convergence of Linux with Windows, which will not be good for Linux and may well be its undoing.

Cheers,

O.

Rich 2 , 4 days
Re: Old is good

"Also: he refusal to accept patches to let it work on non-Linux Unix is just plain nasty"

Thank goodness this crap is unlikely to escape from Linux!

By the way, for a systemd-free Linux, try void - it's rather good.

Michael Wojcik , 3 days
Re: Old is good

Much as I like the venerable init: it did need replacing.

For some use cases, perhaps. Not for any of mine. SysV init, or even BSD init, does everything I need a Linux or UNIX init system to do. And I don't need any of the other crap that's been built into or hung off systemd, either.

Orv , 3 days
Re: Old is good

BSD init and SysV init work pretty darn well for their original purpose -- servers with static IP addresses that are rebooted no more than once in a fortnight. Anything more dynamic starts to give it trouble.

Chairman of the Bored , 6 days
Too bad Linus swore off swearing

Situations like this go beyond a little "golly gee, I screwed up some C"...

jake , 6 days
Re: Too bad Linus swore off swearing

Linus doesn't care. systemd has nothing to do with the kernel ... other than the fact that the lead devs for systemd have been banned from working on the kernel because they don't play nice with others.

JLV , 6 days
how did it get to this?

I've been using runit, because I am too lazy and clueless to write init scripts reliably. It's very lightweight, runs on a bunch of systems and really does one thing - keep daemons up.

I am not saying it's the best - but it looks like it has a very small codebase, it doesn't do much and generally has not bugged me after I configured each service correctly. I believe other systems also exist to avoid using init scripts directly. Not Monit, as it relies on you configuring the daemon start/stop commands elsewhere.
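For readers who have not met runit: a service is just a directory containing a run script, which runsv keeps alive. A minimal sketch, with the daemon name illustrative and the scan directory varying by distro (/var/service on some, /etc/service on others):

    # Contents of /etc/sv/mydaemon/run:
    #!/bin/sh
    exec 2>&1
    exec /usr/sbin/mydaemon --foreground

    # Activate it by linking into the directory runsvdir scans, then query it:
    ln -s /etc/sv/mydaemon /var/service/
    sv status mydaemon
    sv restart mydaemon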

On the other hand, systemd is a massive sprawl, does a lot of things - some of them useful, like dependencies - and generally has needed more looking after. Twice I've had errors on a Django server that, after a lot of looking around, turned out to be because something had changed in the Chef-related code that's exposed to systemd, and esoteric (not emitted by systemd) errors resulted when systemd could not make sense of the incorrect configuration.

I don't hate it -- init scripts look a bit antiquated to me and they seem unforgiving to beginners -- but I don't much like it. What I certainly do hate is how, in an OS that is supposed to be all about choice, sometimes excessively so as in the window manager menagerie, we somehow ended up with one mandatory daemon scheduler on almost all distributions. Via, of all types of dependencies, the GUI layer. For a window manager that you may not even have installed.

Talk about the antithesis of the Unix philosophy of do one thing, do it well.

Oh, then there are also the security bugs and the project owner is an arrogant twat. That too.

Doctor Syntax , 6 days
Re: how did it get to this?

"init scripts look a bit antiquated to me and they seem unforgiving to beginners"

Init scripts are shell scripts. Shell scripts are as old as Unix. If you think that makes them antiquated then maybe Unix-like systems are not for you. In practice any sub-system generally gets its own scripts installed with the rest of the S/W so if being unforgiving puts beginners off tinkering with them so much the better. If an experienced Unix user really needs to modify one of the system-provided scripts their existing shell knowledge will let them do exactly what's needed. In the extreme, if you need to develop a new init script then you can do so in the same way as you'd develop any other script - edit and test from the command line.
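To make that concrete, a minimal SysV-style script for a hypothetical daemon called mydaemon is little more than a case statement (a sketch only -- a distribution-quality script would add LSB headers, locking and so on), and it can be exercised straight from the command line before being wired into the runlevels with chkconfig or update-rc.d:

    #!/bin/sh
    # /etc/init.d/mydaemon -- hypothetical minimal SysV init script
    PIDFILE=/var/run/mydaemon.pid

    case "$1" in
      start)
        echo "Starting mydaemon"
        /usr/sbin/mydaemon --pidfile "$PIDFILE"
        ;;
      stop)
        echo "Stopping mydaemon"
        [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")"
        ;;
      restart)
        "$0" stop
        sleep 1
        "$0" start
        ;;
      status)
        if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
          echo "mydaemon is running"
        else
          echo "mydaemon is not running"
        fi
        ;;
      *)
        echo "Usage: $0 {start|stop|restart|status}" >&2
        exit 1
        ;;
    esac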

onefang , 6 days
Re: how did it get to this?

"Init scripts are shell scripts."

While generally true, some sysv init style inits can handle init "scripts" written in any language.

sed gawk , 6 days
Re: how did it get to this?

I personally like openrc as an init system, but systemd is a symptom of the tooling problem.

For me it's a retrograde step, but again, it's Linux: one can, as you and I do, just remove systemd.

There are a lot of people in the industry now who don't seem able to cope with shell scripts nor are minded to research the arguments for or against shell as part of a unix style of system design.

In conclusion, we are outnumbered, but it will eventually collapse under its own weight and a worthy successor shall rise, perhaps called SystemV, might have to shorten that name a bit.

AdamWill , 6 days
Just about nothing actually uses networkd

"In addition to Ubuntu and Red Hat Enterprise Linux, Systemd has been adopted as a service manager for Debian, Fedora, CoreOS, Mint, and SUSE Linux Enterprise Server. We're told RHEL 7, at least, does not use the vulnerable component by default."

I can tell you for sure that no version of Fedora does, either, and I'm fairly sure that neither does Debian, SLES or Mint. I don't know anything much about CoreOS, but https://coreos.com/os/docs/latest/network-config-with-networkd.html suggests it actually *might* use systemd-networkd.

systemd-networkd is not part of the core systemd init daemon. It's an optional component, and most distros use some other network manager (like NetworkManager or wicd) by default.
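If you would rather verify that on a given box than take anyone's word for it, the check is quick (standard systemctl and NetworkManager commands; nothing here is specific to this bug):

    # Is the vulnerable component installed and enabled on this host?
    systemctl is-enabled systemd-networkd.service
    systemctl is-active systemd-networkd.service

    # Most desktop and server distros run NetworkManager instead
    systemctl is-active NetworkManager.service
    nmcli general status    # only if the nmcli client is installed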

Christian Berger , 5 days
The important word here is "still"

I mean commercial distributions seem to be particularly interested in trying out new things that can increase their number of support calls. It's probably just that networkd is either too new and therefore not yet in the release, or still works so badly that even the most rudimentary tests fail.

There is no reason to use systemd's NTP daemon, yet more and more distros ship with it enabled instead of some sane NTP server.
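Where a distribution does ship with systemd-timesyncd enabled, switching back to a standalone NTP daemon is straightforward; a sketch for a RHEL/CentOS-style system (package and service names differ slightly on Debian-family distros), using chrony as the replacement:

    # See what is currently handling time synchronisation
    timedatectl status
    systemctl status systemd-timesyncd.service

    # Stop and disable timesyncd, run chronyd instead
    systemctl stop systemd-timesyncd.service
    systemctl disable systemd-timesyncd.service
    yum install -y chrony
    systemctl enable chronyd.service
    systemctl start chronyd.service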

NLCSGRV , 6 days
The Curse of Poettering strikes again.
_LC_ , 6 days
Now hang on, please!

Ser iss no neet to worry, systemd will becum stable soon after PulseAudio does.

Ken Hagan , 6 days
Re: Now hang on, please!

I won't hold my breath, then. I have a laptop at the moment that refuses to boot because (as I've discovered from looking at the journal offline) pulseaudio is in an infinite loop waiting for the successful detection of some hardware that, presumably, I don't have.

I imagine I can fix it by hacking the file-system (offline) so that fuckingpulse is no longer part of the boot configuration, but I shouldn't have to. A decent init system would be able to kick off everything else in parallel and, if one particular service doesn't come up properly, just log the error. I *thought* that was one of the claimed advantages of systemd, but apparently that's just a load of horseshit.
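For what it's worth, the offline hack described above usually amounts to masking the offending units; a sketch, assuming the broken root filesystem is mounted at /mnt from a rescue environment, and assuming (as apparently on this laptop) PulseAudio is being pulled in as a system-level unit rather than the more usual per-user service:

    # Masking points a unit at /dev/null so nothing can start it at boot
    ln -sf /dev/null /mnt/etc/systemd/system/pulseaudio.service
    ln -sf /dev/null /mnt/etc/systemd/system/pulseaudio.socket

    # On a system that still boots, the supported equivalent is:
    #   systemctl mask pulseaudio.service pulseaudio.socket          (system units)
    #   systemctl --user mask pulseaudio.service pulseaudio.socket   (per-user units)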

Obesrver1 , 5 days
Reason for disabling IVP6

It punches through NAT routers, making all your little goodies behind them directly accessible.

MS even supplies tunneling (IPv6 over IPv4), so if you are using Linux in a VM on an MS system you may still have it anyway.

NAT was always recommended as part of hardening your system; I prefer to keep all my idIoT devices behind one.

As they are just Idiot devices.

In future I will need a NAT that acts as a DNS and offers some sort of solution for keeping IPv4.

Orv , 3 days
Re: Reason for disabling IVP6

My NAT router statefully firewalls incoming IPv6 by default, which I consider equivalently secure. NAT adds security mostly by accident, because it de-facto adds a firewall that blocks incoming packets. It's not the address translation itself that makes things more secure, it's the inability to route in from the outside.
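The "stateful firewall instead of NAT" point boils down to a handful of rules; a rough ip6tables sketch of the default-deny-inbound posture a home NAT box gives you as a side effect (nftables syntax differs, and a production ruleset needs more care, particularly around ICMPv6):

    # Drop unsolicited inbound traffic, allow replies to outbound connections
    ip6tables -P INPUT DROP
    ip6tables -P FORWARD DROP
    ip6tables -P OUTPUT ACCEPT
    ip6tables -A INPUT   -i lo -j ACCEPT
    ip6tables -A INPUT   -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    ip6tables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    # ICMPv6 is not optional in IPv6 (neighbour discovery, path MTU), so permit it
    ip6tables -A INPUT   -p ipv6-icmp -j ACCEPT
    ip6tables -A FORWARD -p ipv6-icmp -j ACCEPT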

dajames , 3 days
Re: Reason for disabling IVP6

You can use NAT with IPv6.

You can, but why would you want to?

NAT is a trick for connecting a whole LAN to a WAN using a single IPv4 address (useful with IPv4 because most ISPs don't give you a /24 when you sign up). If you have a native IPv6 allocation you'll typically get at least a /64 prefix -- something like 2^64 addresses -- so machines on your LAN can have an actual WAN-visible address of their own without needing a trick like NAT.

Using NAT with IPv6 is just missing the point.

JohnFen , 3 days
Re: Reason for disabling IVP6

"so machines on your LAN can have an actual WAN-visible address of their own without needing a trick like NAT."

Avoiding that configuration is exactly the use case for using NAT with IPv6. As others have pointed out, you can accomplish the same thing with IPv6 router configuration, but NAT is easier in terms of configuration and maintenance. Given that, and assuming that you don't want to be able to have arbitrary machines open ports that are visible to the internet, then why not use NAT?

Also, if your goal is to make people more likely to move to IPv6, pointing out IPv4 methods that will work with IPv6 (even if you don't consider them optimal) seems like a really, really good idea. It eases the transition.
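To be fair to that argument, the "easier configuration" claim is literal: on a Linux router with IPv6 NAT support in the kernel (3.7 or later), masquerading the whole LAN is a single rule, with eth0 here standing in for whatever the WAN interface is actually called:

    ip6tables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

Whether that is a good idea is exactly the debate above; it trades globally routable addresses for the familiar IPv4-style setup.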

Destroy All Monsters , 5 days
Please El Reg, these stories make me rage at breakfast, what's this?

The bug will come as another argument against Systemd as the Linux management tool continues to fight for the hearts and minds of admins and developers alike.

This is less an argument against systemd (which should get attacked on the design and implementation level) or against IPv6 than against the use of buffer-overflowable languages in 2018 in code that processes input from the Internet (it's not the middle ages anymore) -- or at least against the absence of very hard linting of the same.

But in the end, what did it was a violation of the Don't Repeat Yourself principle and a lack of sufficiently high-level data structures. The pointer into the buffer and the remaining buffer length are two discrete variables that need to be updated simultaneously to keep the invariant, and this happens in several places. That is just a catastrophe waiting to happen: forget to update one of them once and you are out. Use structs, and functions that update the structs correctly.

And use assertions in the code; this stuff all seems disturbingly assertion-free.

Excellent explanation by Felix Wilhelm:

https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1795921

The function receives a pointer to the option buffer buf, it's remaining size buflen and the IA to be added to the buffer. While the check at (A) tries to ensure that the buffer has enough space left to store the IA option, it does not take the additional 4 bytes from the DHCP6Option header into account (B). Due to this the memcpy at (C) can go out-of-bound and *buflen can underflow [i.e. you suddenly have a gazillion byte buffer, Ed.] in (D) giving an attacker a very powerful and largely controlled OOB heap write starting at (E).

TheSkunkyMonk , 5 days
Init is 1026 lines of code in one file and it works great.
Anonymous Coward , 5 days
"...and Poettering's occasionally controversial management of the tool."

Shouldn't that be "...Potterings controversial management as a tool."?

clocKwize , 4 days
Re: Contractor rights

why don't we stop writing code in languages that make it so easy to screw up like this?

There are plenty about nowadays. I'd rather my DHCP client be a little bit slower at processing packets if I had more confidence it would not process them incorrectly and execute code hidden in said packets...

Anonymous Coward , 4 days
Switch, as easy as that

The circus that is called "Linux" had already forced me to Devuan and the like, but the circus is getting worse and worse by the day, so I have switched to the BSD world; I will learn that rather than sit back and watch this unfold. As many of us have been saying, the sudden switch to SystemD was rather quick, perhaps you guys need to go investigate why it really happened, don't assume you know, go dig and you will find the answers, it's rather scary. Thus I bid the Linux world farewell after 10 years of support; I will watch the grass dry out from the other side of the fence. It was destined to fail by means of infiltration and screw-it-up motives on the part of those we do not mention here.

oiseau , 3 days
Re: Switch, as easy as that

Hello:

As many of us have been saying, the sudden switch to SystemD was rather quick, perhaps you guys need to go investigate why it really happened, don't assume you know, go dig and you will find the answers, it's rather scary ...

Indeed, it was rather quick and is very scary.

But there's really no need to dig much, just reason it out.

It's like a follow the money situation of sorts.

I'll try to sum it up in three short questions:

Q1: Hasn't the Linux philosophy (programs that do one thing and do it well) been a success?

A1: Indeed, in spite of the many init systems out there, it has been a success in stability and OS management. And it can easily be tested and debugged, which is an essential requirement.

Q2: So what would Linux need to have the practical equivalent of the registry in Windows for?

A2: So that whatever the registry does in/to Windows can also be done in/to Linux.

Q3: I see. And just who would want that to happen? Makes no sense, it is a huge step backwards.

A3: ....

Cheers,

O.

Dave Bell , 4 days
Reporting weakness

OK, so I was able to check through the link you provided, which says "up to and including 239". I had just installed a systemd update, and since you said there was already a fix written, working its way through the distro update systems, all I had to do was check my log.

Linux Mint makes it easy.

But why didn't you say something such as "reported to affect systemd versions up to and including 239" and then give the link to the CVE? That failure looks like rather careless journalism.
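The version check being described is a one-liner on any affected distribution; the advisory covers systemd up to and including 239, so for instance:

    systemctl --version | head -n 1     # e.g. "systemd 239"
    rpm -q systemd                      # RHEL/CentOS/Fedora package version
    dpkg -l systemd | tail -n 1         # Debian/Ubuntu/Mint package version

Note that a patched distro build may keep the same upstream version number, so the package changelog (for example rpm -q --changelog systemd) is the more reliable indicator of whether the fix has landed.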

W.O.Frobozz , 3 days
Hmm.

/sbin/init never had these problems. But then again /sbin/init didn't pretend to be the entire operating system.

[Nov 01, 2018] IBM was doing their best to demoralize their staff (and contractors) and annoy their customers as much as possible!

Notable quotes:
"... I worked as a contractor for IBM's IGS division in the late '90s and early 2000s at their third biggest customer, and even then, IBM was doing their best to demoralize their staff (and contractors) and annoy their customers as much as possible! ..."
"... IBM charged themselves 3x the actual price to customers for their ThinkPads at the time! This meant that I never had a laptop or desktop PC from IBM in the 8 years I worked there. If it wasn't for the project work I did I would not have had a PC to work on! ..."
"... What was strange is that every single time I got a pay cut, IBM would then announce that they had bought a new company! I would have quit long before I did, but I was tied to them while waiting for my Green Card to be approved. I know that raises are few in the current IBM for normal employees and that IBM always pleads poverty for any employee request. Yet, they somehow manage to pay billions of dollars for a new company. Strange that, isn't it? ..."
"... I moved to the company that had won the contract and regret not having the chance to tell that IBM manager what I thought about him and where he could stick the new laptop. ..."
"... After that experience I decided to never work for them in any capacity ever again. I feel pity for the current Red Hat employees and my only advice to them is to get out while they can. "DON'T FOLLOW THE RED HAT TO HELL" ..."
Nov 01, 2018 | theregister.co.uk

Edwin Tumblebunny

Ashes to ashes, dust to dust - Red Hat is dead.

Red Hat will be a distant memory in a few years as it gets absorbed by the abhorrent IBM culture and its bones picked clean by the IBM beancounters. Nothing good ever happens to a company bought by IBM.

I worked as a contractor for IBM's IGS division in the late '90s and early 2000s at their third biggest customer, and even then, IBM was doing their best to demoralize their staff (and contractors) and annoy their customers as much as possible!

Some examples:

The on-site IBM employees (and contractors) had to use Lotus Notes for email. That was probably the worst piece of software I have ever used - I think baboons on drugs could have done a better design job. IBM set up a T1 (1.54 Mbps) link between the customer and the local IBM hub for email, etc. It sounds great until you realize there were over 150 people involved and due to the settings of Notes replication, it could often take over an hour to actually download email to read.

To do my job I needed to install some IBM software. My PC did not have enough disk space for this software as well as the other software I needed. Rather than buy me a bigger hard disk I had to spend 8 hours a week installing and reinstalling software to do my job.

I waited three months for a $50 stick of memory to be approved. When it finally arrived my machine had been changed out (due to a new customer project) and the memory was not compatible! Since I worked on a lot of projects I often had machines supplied by the customer on my desk. So, I would use one of these as my personal PC and would get an upgrade when the next project started!

I was told I could not be supplied with a laptop or desktop from IBM as they were too expensive (my IBM division did not want to spend money on anything). IBM charged themselves 3x the actual price to customers for their ThinkPads at the time! This meant that I never had a laptop or desktop PC from IBM in the 8 years I worked there. If it wasn't for the project work I did I would not have had a PC to work on!

IBM has many strange and weird processes that allow them to circumvent the contract they have with their preferred contractor companies. This meant that for a number of years I ended up getting a pay cut. What was strange is that every single time I got a pay cut, IBM would then announce that they had bought a new company! I would have quit long before I did, but I was tied to them while waiting for my Green Card to be approved. I know that raises are few in the current IBM for normal employees and that IBM always pleads poverty for any employee request. Yet, they somehow manage to pay billions of dollars for a new company. Strange that, isn't it?

Eventually I was approved to get a laptop and excitedly watched it move slowly through the delivery system. I got confused when it was reported as delivered to Ohio rather than my work (not in Ohio). After some careful searching I discovered that my manager and his wife both worked for IBM from their home in, yes you can guess, Ohio. It looked like he had redirected my new laptop for his own use and most likely was going to send me his old one and claim it was a new one. I never got the chance to confront him about it, though, as IBM lost the contract with the customer that month and before the laptop should have arrived IBM was out! I moved to the company that had won the contract and regret not having the chance to tell that IBM manager what I thought about him and where he could stick the new laptop.

After that experience I decided to never work for them in any capacity ever again. I feel pity for the current Red Hat employees and my only advice to them is to get out while they can. "DON'T FOLLOW THE RED HAT TO HELL"

Certain Hollywood stars seem to be psychic types : https://twitter.com/JimCarrey/status/1057328878769721344

rmstock , 2 days

Re: "DON'T FOLLOW THE RED HAT TO HELL"

I sense that a global effort is ongoing to shut down open source software by brute force. First, the enforcement of the EU General Data Protection Regulation (GDPR) by ICANN.org to enable untraceable takeovers of domains. Microsoft buying GitHub. Linus Torvalds forced out of his own Linux kernel project because of the Code of Conduct, and now IBM buying Red Hat. I wrote the following at https://lulz.com/linux-devs-threaten-killswitch-coc-controversy-1252/ : "Torvalds should lawyer up. The problems are the large IT Tech firms who platinum-donated all over the place in Open Source land. When IBM donated 1 billion USD to Linux in 2000 https://itsfoss.com/ibm-invest-1-billion-linux/ a friend who was vehemently against the GPL and what Torvalds was doing told me that in due time OSS would simply just go away.

These Community Organizers, not Coders per se, are on a mission to overtake and control the Linux Foundation, and if they can't, they will search and destroy all of it, even if it destroys themselves. Coraline is merely an expendable pawn here. Torvalds is now facing unjust confrontations and charges resembling those around the nomination of Judge Brett Kavanaugh. Looking at the CoC document, it might even have been written by a Google executive; Google themselves are currently facing serious charges and lawsuits arising from their own Code of Conduct. See theintercept.com and their leaked video from the day after the 2016 election. They will do anything to pursue this. However, to pursue a personal bias or agenda through acts such as omitting contradicting facts (code), committing perjury, attending riots and harassments, cleansing Internet archives and search engines of exculpatory evidence, and ultimately hiring hit-men to exterminate witnesses of truth (developers), in an attempt to elevate bias to fabricated fact (code), are crimes and should be prosecuted accordingly."

[Nov 01, 2018] Will Red Hat Survive IBM Ownership

It does not matter if somebody "stresses independence" -- words are cheap. The mere fact that this is now IBM changes relationships; IBM executives also need to show "leadership", and that entails setting some "general direction" for Red Hat from now on. At the very least, relationships with Microsoft and HP will be much cooler than before.
IBM also does not like "charity" projects like CentOS, and that will be affected too, no matter what executives tell you right now. Paradoxically, this greatly strengthens the position of Oracle Linux.
The status of IBM software in the corporate world (outside finance companies) is low, and its games with licenses (licensing products per core, etc.) are viewed by most of its customers as despicable. This was one of the reasons IBM lost its share of enterprise software. For example, greed in selling Tivoli more or less killed the product. All credit goes to Lou Gerstner, who initially defined this culture of relentless outsourcing and the cult of the "bottom line" (which was easy for him because he did not understand technology at all). His successor was even more active in driving the company into the ground. Rampant cost-arbitrage-driven offshoring has left a legacy of dissatisfied customers. Most projects are overpriced, and most of those that were priced more or less at industry level delivered bad-quality results and cost overruns.
IBM cut severance pay to one month, is firing people left and right, and is insensitive to the fired employees; the result is enormous negativity towards the company. Good people are scared to work for it and are asking tough questions.
This has been the strategy since Gerstner. Under Palmisano (a guy with a history diploma who switched into cybersecurity after his retirement in 2012 and led Obama's cybersecurity commission) and Ginni Rometty, the mantra was the purely neoliberal "shareholder value": grow earnings per share at all costs. They all managed IBM for financial appearance rather than for quality products. There was no focus on breakthrough innovation, on market leadership in mega-trends (like cloud computing), or even on giving excellent service to customers.
Ginni Rometty accepted bonuses during times of layoffs and outsourcing.
When a company has lost its architectural talent, the brand will eventually die. IBM is still granted a very high number of patents every year, and it is still a very large company in terms of revenue and number of employees. However, there are strong signs that its position in the technology industry might deteriorate further.
Nov 01, 2018 | www.itprotoday.com

Cormier also stressed Red Hat independence when we asked how the acquisition would affect ongoing partnerships in place with Amazon Web Services, Microsoft Azure, and other public clouds.

"One of the things you keep seeing in this is that we're going to run Red Hat as a separate entity within IBM," he said. "One of the reasons is business. We need to, and will, remain Switzerland in terms of how we interact with our partners. We're going to continue to prioritize what we do for our partners within our products on a business case perspective, including IBM as a partner. We're not going to do unnatural things. We're going to do the right thing for the business, and most importantly, the customer, in terms of where we steer our products."

Red Hat promises that independence will extend to Red Hat's community projects, such as its freely available Fedora Linux distribution which is widely used by developers. When asked what impact the sale would have on Red Hat-maintained projects like Fedora and CentOS, Cormier replied, "None. We just came from an all-hands meeting from the company around the world for Red Hat, and we got asked this question and my answer was that the day after we close, I don't intend to do anything different or in any other way than we do our every day today. Arvind, Ginnie, Jim, and I have talked about this extensively. For us, it's business as usual. Whatever we were going to do for our road maps as a stand alone will continue. We have to do what's right for the community, upstream, our associates, and our business."

This all sounds good, but as they say, talk is cheap. Six months from now we'll have a better idea of how well IBM can walk the walk and leave Red Hat alone. If it can't and Red Hat doesn't remain clearly distinct from IBM ownership control, then Big Blue will have wasted $33 billion it can't afford to lose and put its future, as well as the future of Red Hat, in jeopardy.

[Oct 30, 2018] Red Hat hired the CentOS developers 4.5-years ago

Oct 30, 2018 | linux.slashdot.org

quantaman ( 517394 ) , Sunday October 28, 2018 @04:22PM ( #57550805 )

Re:Well at least we'll still have Cent ( Score: 4 , Informative)
Fedora is fully owned by Red Hat, and CentOS requires the availability of the Red Hat repositories, which they aren't obliged to make public to non-customers.

Fedora is fully under Red Hat's control. It's used as a bleeding edge distro for hobbyists and as a testing ground for code before it goes into RHEL. I doubt it's going away since it does a great job of establishing mindshare, but no business in their right mind is going to run Fedora in production.

But CentOS started as a separate organization with a fairly adversarial relationship to Red Hat since it really is free RHEL which cuts into their actual customer base. They didn't need Red Hat repos back then, just the code which they rebuilt from scratch (which is why they were often a few months behind).

If IBM kills CentOS a new one will pop up in a week, that's the beauty of the GPL.

Luthair ( 847766 ) , Sunday October 28, 2018 @04:22PM ( #57550799 )
Re:Well at least we'll still have Cent ( Score: 3 )

Red Hat hired the CentOS developers 4.5-years ago.

[Oct 30, 2018] We run just about everything on CentOS around here, downstream of RHEL. Should we be worried?

Oct 30, 2018 | arstechnica.com

Muon , Ars Scholae Palatinae 6 hours ago Popular

We run just about everything on CentOS around here, downstream of RHEL. Should we be worried?
brandnewmath , Smack-Fu Master, in training et Subscriptor 6 hours ago Popular
We'll see. Companies in an acquisition always rush to explain how nothing will change to reassure their customers. But we'll see.
Kilroy420 , Ars Tribunus Militum 6 hours ago Popular
Perhaps someone can explain this... Red Hat's revenue and assets barely total about $5B. Even factoring in market share and capitalization, how the hey did IBM come up with $34B cash being a justifiable purchase price??

Honestly, why would Red Hat have said no?

dorkbert , Ars Tribunus Militum 6 hours ago Popular
My personal observation of IBM over the past 30 years or so is that everything it acquires dies horribly.
barackorama , Smack-Fu Master, in training 6 hours ago
...IBM's own employees see it as a company in free fall. This is not good news.

In other news, property values in Raleigh will rise even more...

Moodyz , Ars Centurion 6 hours ago Popular
Quote:
This is fine

Looking back at what's happened with many of IBM's past acquisitions, I'd say no, not quite fine.
I am not your friend , Wise, Aged Ars Veteran et Subscriptor 6 hours ago Popular
I just can't comprehend that price. Cloud has a rich future, but I didn't even know Red Hat had any presence there, let alone $35 billion worth.
jandrese , Ars Tribunus Angusticlavius et Subscriptor 6 hours ago Popular
50me12 wrote:
Will IBM even know what to do with them?

IBM has been fumbling around for a while. They didn't know how to sell Watson; they sold it like a weird magical drop-in service and it failed repeatedly, where really it should be a long-term project that you bring customers along for the ride...

I had a buddy using their cloud service and they went to spin up servers and IBM was all "no man we have to set them up first".... like that's not cloud IBM...

If IBM can't figure out how to sell its own services I'm not sure the powers that be are capable of getting the job done ever. IBM's own leadership seems incompatible with the state of the world.

IBM basically bought a ton of service contracts for companies all over the world. This is exactly what the suits want: reliable cash streams without a lot of that pesky development stuff.

IMHO this is perilous for RHEL. It would be very easy for IBM to fire most of the developers and just latch on to the enterprise services stuff to milk it till its dry.

skizzerz , Wise, Aged Ars Veteran et Subscriptor 6 hours ago
toturi wrote:
I can only see this as a net positive - the ability to scale legacy mainframes onto "Linux" and push for even more security auditing.

I would imagine the RHEL team will get better funding but I would be worried if you're a centos or fedora user.

I'm nearly certain that IBM's management ineptitude will kill off Fedora and CentOS (or at least severely gimp them compared to how they currently are), not realizing how massively important both of these projects are to the core RHEL product. We'll see RHEL itself suffer as a result.

I normally try to understand things with an open mindset, but in this case, IBM has had too long of a history of doing things wrong for me to trust them. I'll be watching this carefully and am already prepping to move off of my own RHEL servers once the support contract expires in a couple years just in case it's needed.

Iphtashu Fitz , Ars Scholae Palatinae 6 hours ago Popular
50me12 wrote:
Will IBM even know what to do with them?

My previous job (6+ years ago now) was at a university that was rather heavily invested in IBM for a high performance research cluster. It was something around 100 or so of their X-series blade servers, all of which were running Red Hat Linux. It wouldn't surprise me if they decided to acquire Red Hat in large part because of all these sorts of IBM systems that run Red Hat on them.

TomXP411 , Ars Tribunus Angusticlavius 6 hours ago Popular
Iphtashu Fitz wrote:
50me12 wrote:
Will IBM even know what to do with them?

My previous job (6+ years ago now) was at a university that was rather heavily invested in IBM for a high performance research cluster. It was something around 100 or so of their X-series blade servers, all of which were running Red Hat Linux. It wouldn't surprise me if they decided to acquire Red Hat in large part because of all these sorts of IBM systems that run Red Hat on them.

That was my thought. IBM wants to own an operating system again. With AIX being relegated to obscurity, buying Red Hat is simpler than creating their own Linux fork.

anon_lawyer , Wise, Aged Ars Veteran 6 hours ago Popular
Valuing Red Hat at $34 billion means valuing it at more than 1/4 of IBM's current market cap. From my perspective this tells me IBM is in even worse shape than has been reported.
dmoan , Ars Centurion 6 hours ago
I am not your friend wrote:
I just can't comprehend that price. Cloud has a rich future, but I didn't even know Red Hat had any presence there, let alone $35 billion worth.

Red Hat made $258 million in net income last year, so IBM paid over 100 times net income. That's a crazy valuation...

[Oct 30, 2018] I have worked at IBM 17 years and have worried about being laid off for about 11 of them. Morale is in the toilet. Bonuses for the rank and file are in the under 1% range while the CEO gets millions

Notable quotes:
"... Adjusting for inflation, I make $6K less than I did my first day. My group is a handful of people as at least 1/2 have quit or retired. To support our customers, we used to have several people, now we have one or two and if someone is sick or on vacation, our support structure is to hope nothing breaks. ..."
Oct 30, 2018 | features.propublica.org

Buzz , Friday, March 23, 2018 12:00 PM

I've worked there 17 years and have worried about being laid off for about 11 of them. Morale is in the toilet. Bonuses for the rank and file are in the under-1% range while the CEO gets millions. Pay raises have been non-existent or well under inflation for years.

Adjusting for inflation, I make $6K less than I did my first day. My group is a handful of people as at least 1/2 have quit or retired. To support our customers, we used to have several people, now we have one or two and if someone is sick or on vacation, our support structure is to hope nothing breaks.

We can't keep millennials because of pay, benefits and the expectation of being available 24/7 because we're shorthanded. As the unemployment rate drops, more leave to find a different job, leaving the old people as they are less willing to start over with pay, vacation, moving, selling a house, pulling kids from school, etc.

The younger people are generally less likely to be willing to work as needed on off hours or to pull work from a busier colleague.

I honestly have no idea what the plan is when the people who know what they are doing start to retire, we are way top heavy with 30-40 year guys who are on their way out, very few of the 10-20 year guys due to hiring freezes and we can't keep new people past 2-3 years. It's like our support business model is designed to fail.

[Oct 30, 2018] Will systemd become standard on mainframes as well?

It will be interesting to see what happens in any case.
Oct 30, 2018 | theregister.co.uk

Doctor Syntax , 1 day

So now it becomes Blue Hat. Will systemd become standard on mainframes as well?
DCFusor , 15 hrs
@Doctor

Maybe we get really lucky and they RIF Lennart Poettering or he quits? I hear IBM doesn't tolerate prima donnas and cults of personality quite as much as RH?

Anonymous Coward , 15 hrs
I hear IBM doesn't tolerate prima donnas and cults of personality quite as much as RH?

Quite the contrary. IBM is run and managed by prima donnas and personality cults.

Waseem Alkurdi , 18 hrs
Re: Poettering

OS/2 and Poettering? Best joke I've ever heard!

(It'd be interesting if somebody locked them both up in an office and see what happens!)

Glen Turner 666 , 16 hrs
Re: Patents

IBM already had access to Red Hat's patents, including for patent defence purposes. Look up "open innovation network".

This acquisition is about: (1) IBM needing growth, or at least a plausible scenario for growth. (2) Red Hat wanting an easy expansion of its sales channels, again for growth. (3) Red Hat stockholders being given an offer they can't refuse.

This acquisition is not about: cultural change at IBM. Which is why the acquisition will 'fail'. The bottom line is that engineering matters at the moment (see: Google, Amazon), and IBM sacked their engineering culture across the past two decades. To be successful IBM need to get that culture back, and acquiring Red Hat gives IBM the opportunity to create a product-building, client-service culture within IBM. Except that IBM aren't taking the opportunity, so there's a large risk the reverse will happen -- the acquisition will destroy Red Hat's engineering- and service-oriented culture.

Anonymous Coward , 1 day
The kraken versus the container ship

This could be interesting: will the systemd kraken manage to wrap its tentacles around the big blue container ship and bring it to a halt, or will the container ship turn out to be well armed and fatally harpoon the kraken (causing much rejoicing in the rest of the Linux world)?

Sitaram Chamarty , 1 day
disappointed...

Honestly, this is a time for optimism: if they manage to get rid of Lennart Poettering, everything else will be tolerable!

dbtx , 1 day
*if*

you can change the past so that a "proper replacement" isn't automatically expected to do lots of things that systemd does. That damage is done. We got "better is worse" and enough people liked it-- good luck trying to go back to "worse is better"

tfb , 13 hrs
I presume they were waiting to see what happened to Solaris. When Oracle bought Sun (presumably the only other company who might have bought them was IBM) there were really three enterprise unixoid platforms: Solaris, AIX and RHEL (there were some smaller ones and some which were clearly dying like HPUX). It seemed likely at the time, but not yet certain, that Solaris was going to die (I worked for Sun at the time this happened and that was my opinion anyway). If Solaris did die, then if one company owned both AIX and RHEL then that company would own the enterprise unixoid market. If Solaris didn't die on the other hand then RHEL would be a lot less valuable to IBM as there would be meaningful competition. So, obviously, they waited to see what would happen.

Well, Solaris is perhaps not technically quite dead yet but certainly is moribund, and IBM now owns both AIX and RHEL, and hence the enterprise unixoid market. As an interesting side-note, unless Oracle can keep Solaris on life-support, this means that IBM all but owns Oracle's OS as well ('Oracle Linux' is RHEL with, optionally, some of their own additions to the kernel).

[Oct 30, 2018] The next version of systemd will be branded IBM(R) SystemD/2.

Oct 30, 2018 | linux.slashdot.org

Anonymous Coward , Sunday October 28, 2018 @03:21PM ( #57550457 )

systemd ( Score: 4 , Funny)

The next version will be branded IBM(R) SystemD/2.

Anonymous Coward , Sunday October 28, 2018 @05:10PM ( #57551035 )
Re:Please God No ( Score: 5 , Funny)

Look on the bright side: Poettering works for Red Hat. (Reposting because apparently Poettering has mod points.)

Anonymous Coward , Sunday October 28, 2018 @05:09PM ( #57551031 )
Re: It all ( Score: 5 , Informative)

Lennart already fucked up RHEL, I hope IBM will get rid of him and systemd.

gweihir ( 88907 ) , Sunday October 28, 2018 @07:14PM ( #57551677 )
Re: It all ( Score: 4 , Insightful)

Indeed. Maybe they will even sack Poettering. If so, they will do a ton of good.

HanzoSpam ( 713251 ) , Sunday October 28, 2018 @10:56PM ( #57552591 )
Re: It all ( Score: 5 , Insightful)
Indeed. Maybe they will even sack Poettering. If so, they will do a ton of good.

Surely you jest. Knowing IBM, as I do, they're more likely to make Poettering the head of the division.

cayenne8 ( 626475 ) , Sunday October 28, 2018 @06:18PM ( #57551371 ) Homepage Journal
Re:Lol ( Score: 5 , Interesting)
With systemd how can they fuck it up worse than it already is?

Well, it *is* IBM....I'm sure they can still make it worse.

[Oct 30, 2018] IBM's Red Hat acquisition is a 'desperate deal,' says analyst

Notable quotes:
"... "It's a desperate deal by a company that missed the boat for the last five years," the managing director at BTIG said on " Closing Bell ." "I'm not surprised that they bought Red Hat , I'm surprised that it took them so long. They've been behind the cloud eight ball." ..."
Oct 30, 2018 | www.cnbc.com
IBM's $34 billion acquisition of Red Hat is a last-ditch effort by IBM to play catch-up in the cloud industry, analyst Joel Fishbein told CNBC on Monday.

"It's a desperate deal by a company that missed the boat for the last five years," the managing director at BTIG said on " Closing Bell ." "I'm not surprised that they bought Red Hat , I'm surprised that it took them so long. They've been behind the cloud eight ball."

This is IBM's largest deal ever and the third-biggest tech deal in the history of the United States. IBM is paying more than a 60 percent premium for the software maker, but CEO Ginni Rometty told CNBC earlier in the day it was a "fair price."

[Oct 30, 2018] Sam Palmisano's now infamous Roadmap 2015 ran the company into the ground through its maniacal focus on increasing EPS at any and all costs. Literally.

Oct 30, 2018 | features.propublica.org

GoingGone , Friday, April 13, 2018 6:06 PM

As a 25yr+ vet of IBM, I can confirm that this article is spot-on true. IBM used to be a proud and transparent company that clearly demonstrated that it valued its employees as much as it did its stock performance or dividend rate or EPS, simply because it is good for business. Those principles helped make and keep IBM atop the business world as the most trusted international brand and business icon of success for so many years. In 2000, all that changed when Sam Palmisano became the CEO. Palmisano's now infamous "Roadmap 2015" ran the company into the ground through its maniacal focus on increasing EPS at any and all costs. Literally.

Like, its employees, employee compensation, benefits, skills, and education opportunities. Like, its products, product innovation, quality, and customer service.

All of which resulted in the devastation of its technical capability and competitiveness, employee engagement, and customer loyalty. Executives seemed happy enough as their compensation grew nicely with greater financial efficiencies, and Palmisano got a sweet $270M+ exit package in 2012 for a job well done.

The new CEO, Ginni Rometty has since undergone a lot of scrutiny for her lack of business results, but she was screwed from day one. Of course, that doesn't leave her off the hook for the business practices outlined in the article, but what do you expect: she was hand picked by Palmisano and approved by the same board that thought Palmisano was golden.

People (and companies) who have nothing to hide, hide nothing. People (and companies) who are proud of their actions, share it proudly. IBM believes it is being clever and outsmarting employment discrimination laws and saving the company money while retooling its workforce. That may end up being so (but probably won't), but it's irrelevant. Through its practices, IBM has lost the trust of its employees, customers, and ironically, stockholders (just ask Warren Buffett), who are the very(/only) audience IBM was trying to impress. It's just a huge shame.

HiJinks , Sunday, March 25, 2018 3:07 AM
I agree with many who state the report is well done. However, this crap started in the early 1990s. In the late 1980s, IBM offered decent packages to retirement-eligible employees. For those close to retirement age it was a great deal -- 2 weeks pay for every year of service (capped at 26 years), plus being kept on to perform their old job for 6 months while collecting retirement (until the government stepped in and put a halt to it). Nobody eligible was forced to take the package (at least not to general knowledge).

The last decent package was in 1991 -- similar, but without the option of coming back for 6 months. However, in 1991, those offered the package were basically told take it or else. Anyone with 30 years of service, or 15 years of service and age 55, was eligible, and anyone within 5 years of eligibility could "bridge" the difference. They also had to sign a form stating they would not sue IBM in order to get up to a year's pay -- not taxable per IRS documents back then (but IBM took out the taxes anyway and the IRS refused to return them; an employee group hired lawyers to get the taxes back, a failed attempt which only enriched the lawyers).

After that, things went downhill and accelerated when Gerstner took over. After 1991 there were still some workers who could get to 30 years or more, but that was more the exception. I suspect the way the company has been run for the past 25 years or so has the Watsons spinning in their graves. Gone are the 3 core beliefs - "Respect for the individual", "Service to the customer" and "Excellence must be a way of life".
ArnieTracey , Saturday, March 24, 2018 7:15 PM
IBM's policy reminds me of the "If a citizen = 30 y.o., then mass execute such, else if they run then hunt and kill them one by one" social policy in the Michael York movie "Logan's Run."

From Wiki, in case you don't know: "It depicts a utopian future society on the surface, revealed as a dystopia where the population and the consumption of resources are maintained in equilibrium by killing everyone who reaches the age of 30. The story follows the actions of Logan 5, a "Sandman" who has terminated others who have attempted to escape death, and is now faced with termination himself."

Jr Jr , Saturday, March 24, 2018 4:37 PM
Corporate loyalty has been gone for 25 years, so this isn't surprising. But this age discrimination is blatantly illegal.

[Oct 30, 2018] This might just be the deal that kills IBM because there's no way that they don't do a writedown of 90% of the value of this acquisition within 5 years.

Oct 30, 2018 | arstechnica.com

afidel, 2018-10-29T13:17:22-04:00

tipoo wrote:
Kilroy420 wrote:
Perhaps someone can explain this... Red Hat's revenue and assets barely total about $5B. Even factoring in market share and capitalization, how the hey did IBM come up with $34B cash being a justifiable purchase price??

Honestly, why would Red Hat have said no?

You don't trade at your earnings, you trade at your share price, which for Red Hat and many other tech companies can be quite high on Price/Earnings. They were trading at 52 P/E. Investors factor in a bunch of things involving future growth, and particularly for any companies in the cloud can quite highly overvalue things.

A 25 year old company trading at a P/E of 52 was already overpriced, buying at more than 2x that is insane. This might just be the deal that kills IBM because there's no way that they don't do a writedown of 90% of the value of this acquisition within 5 years.

[Oct 30, 2018] The institutionalized stupidity of IBM brass is connected with the desire to get bonuses

Oct 30, 2018 | arstechnica.com

3 hours ago

afidel wrote:

Kilroy420 wrote:

Perhaps someone can explain this... Red Hat's revenue and assets barely total about $5B. Even factoring in market share and capitalization, how the hey did IBM come up with $34B cash being a justifiable purchase price??

Honestly, why would Red Hat have said no?

You don't trade at your earnings, you trade at your share price, which for Red Hat and many other tech companies can be quite high on Price/Earnings. They were trading at 52 P/E. Investors factor in a bunch of things involving future growth, and particularly for any companies in the cloud can quite highly overvalue things.
A 25 year old company trading at a P/E of 52 was already overpriced, buying at more than 2x that is insane. This might just be the deal that kills IBM because there's no way that they don't do a writedown of 90% of the value of this acquisition within 5 years.

OK. I did 10 years at IBM Boulder..

The problem isn't the purchase price or the probable write-down later.

The problem is going to be with the executives above it. One thing I noticed at IBM is that the executives needed to put their own stamp on operations to justify their bonuses. We were on a 2 year cycle of execs coming in and saying "Whoa.. things are too centralized, we need to decentralize", then the next exec coming in and saying "things are too decentralized, we need to centralize".

No IBM exec will get a bonus if they are over RedHat and exercise no authority over it. "We left it alone" generates nothing for the PBC. If they are in the middle of a re-org, then the specific metrics used to calculate their bonus can get waived. (Well, we took an unexpected hit this year on sales because we are re-orging to better optimize our resources). With that P/E, no IBM exec is going to get a bonus based on metrics. IBM execs do *not* care about what is good for IBM's business. They are all about gaming the bonuses. Customers aren't even on the list of things they care about.

I am reminded of a coworker who quit in frustration back in the early 2000's due to just plain bad management. At the time, IBM was working on Project Monterey. This was supposed to be a Unix system across multiple architectures. My coworker sent his resignation out to all hands basically saying "This is stupid. we should just be porting Linux". He even broke down the relative costs. Billions for Project Monterey vs thousands for a Linux port. Six months later, we get an email from on-high announcing this great new idea that upper management had come up with. It would be far cheaper to just support Linux than write a new OS.. you'd think that would be a great thing, but the reality is that all it did was create the AIX 5L family, which was AIX 5 with an additional CD called Linux ToolBox, which was loaded with a few Linux programs ported to a specific version of AIX, but never kept current. IBM can make even great decisions into bad decisions.

In May 2007, IBM announced the transition to LEAN. Sounds great, but this LEAN was not on the manufacturing side of the equation. It was in e-Business under Global Services. The new procedures were basically call center operations. Now, prior to this, IBM would have specific engineers for specific accounts. So, Major Bank would have that AIX admin, that Sun admin, that windows admin, etc. They knew who to call and those engineers would have docs and institutional knowledge of that account. During the LEAN announcement, Bob Moffat described the process. Accounts would now call an 800 number and the person calling would open a ticket. This would apply to *any* work request as all the engineers would be pooled and whoever had time would get the ticket. So, reset a password - ticket. So, load a tape - ticket. Install 20 servers - ticket.

Now, the kicker to this was that the change was announced at 8AM and went live at noon. IBM gave their customers who represented over $12 Billion in contracts 4 *hours* notice that they were going to strip their support teams and treat them like a call center. (I will leave it as an exercise to the reader to determine if they would accept that kind of support after spending hundreds of millions on a support contract).

(The pilot program for the LEAN process had its call center outsourced overseas, if that helps you try to figure out why IBM wanted to get rid of dedicated engineers and move to a call-center operation).

[Oct 30, 2018] Presumably the acquisition will have to jump various regulatory hurdles before it is set in stone. If it is successful, Red Hat will be absorbed into IBM's Hybrid Cloud unit

IBM will have to work hard to overcome RH customers' natural (and IMHO largely justified) suspicion of Big Blue.
Notable quotes:
"... focused its workforce on large "hub" cities where graduate engineers prefer to live – New York City, San Francisco, and Austin, in the US for instance – which allowed it to drive out older, settled staff who refused to move closer to the office. ..."
"... The acquisition of Sun Microsystems by Oracle comes to mind there. ..."
"... When Microsoft bought out GitHub, they made a promise to let it run independently and now IBM's given a similar pledge in respect of RedHat. They ought to abide by that promise because the alternatives are already out there in the form of Ubuntu and SUSE Linux Enterprise Server. ..."
Oct 30, 2018 | theregister.co.uk

...That transformation has led to accusations of Big Blue ditching its older staff for newer workers to somehow spark some new energy within it. It also cracked down on remote employees , and focused its workforce on large "hub" cities where graduate engineers prefer to live – New York City, San Francisco, and Austin, in the US for instance – which allowed it to drive out older, settled staff who refused to move closer to the office.

Ledswinger, 1 day

Easy, the same way they deal with their existing employees. It'll be the IBM way or the highway. We'll see the usual repetitive and doomed IBM strategy of brutal downsizings accompanied by the earnest IBM belief that a few offshore wage slaves can do as good a job as anybody else.

The product and service will deteriorate, pricing will have to go up significantly to recover the tens of billions of dollars of "goodwill" that IBM have just splurged, and in five years time we'll all be saying "remember how the clueless twats at IBM bought Red Hat and screwed it up?"

One of IBM's main problems is lack of revenue, and yet Red Hat only adds about $3bn to their revenue. As with most M&A the motivators here are a surplus of cash and hopeless optimism, accompanied by the suppression of all common sense.

Well done Gini, it's another winner.

TVU, 12 hrs
Re: "they will buy a lot of talent."

"What happens over the next 12 -- 24 months will be ... interesting. Usually the acquisition of a relatively young, limber outfit with modern product and service by one of the slow-witted traditional brontosaurs does not end well"

The acquisition of Sun Microsystems by Oracle comes to mind there.

When Microsoft bought out GitHub, they made a promise to let it run independently and now IBM's given a similar pledge in respect of RedHat. They ought to abide by that promise because the alternatives are already out there in the form of Ubuntu and SUSE Linux Enterprise Server.

[Oct 30, 2018] About time: Red Hat support was bad already

Oct 30, 2018 | theregister.co.uk
Anonymous Coward, 15 hrs

and the support goes bad already...

Someone (RH staff, or one of the Gods) is unhappy with the deal: Red Hat's support site has been down all day.

https://status.redhat.com

[Oct 30, 2018] Purple rain will fall from that blue cloud

Notable quotes:
"... IBM was already "working on Linux." For decades. With multiple hundreds of full-time Linux developers--more than any other corporate contributor--around the world. And not just on including IBM-centric function into Linux, but on mainstream community projects. There have been lots of Linux people in IBM since the early-90's. ..."
"... From a customer standpoint the main thing RedHat adds is formal support. There are still a lot of companies who are uncomfortable deploying an OS that has product support only from StackExchange and web forums ..."
"... You would do better to look at the execution on the numbers - RedHat is not hitting it's targets and there are signs of trouble. These two businesses were both looking for a prop and the RedHat shareholders are getting out while the business is near it's peak. ..."
Oct 30, 2018 | theregister.co.uk

Anonymous Coward , 6 hrs

Re: So exactly how is IBM going to tame employees that are used to going to work in shorts...

Purple rain will fall from that cloud...

asdf , 5 hrs
Wow bravo brutal and accurate. +1
I can't believe its not butter , 1 day
Redhat employees - get out now

You're utterly fucked. Run away now as you have zero future in IBM.

Anonymous Coward , 1 day
Re: Redhat employees - get out now

You're utterly fucked. Run away now as you have zero future in IBM.

That was my immediate thought upon hearing this. I've already worked for IBM, swore never to do it again. Time to dust off & update the resume.

Jove , 19 hrs
Re: Redhat employees - get out now

Another major corporate splashes out a fortune on a star business only to find the clash of cultures destroys substantial value.

W@ldo , 1 day
Re: At least is isnt oracle or M$

Sort of the lesser of evils---do you want to be shot or hung by the neck? No good choice for this acquisition.

Anonymous Coward , 1 day
Re: At least is isnt oracle or M$

honestly, MS would be fine. They're big into Linux and open source and still a heavily pro-engineering company.

Companies like Oracle and IBM are about nothing but making money. Which is why they're both going down the tubes. No-one who doesn't already have them goes near them.

Uncle Ron , 14 hrs
Re: At least is isnt oracle or M$

IBM was already "working on Linux." For decades. With multiple hundreds of full-time Linux developers--more than any other corporate contributor--around the world. And not just on including IBM-centric function into Linux, but on mainstream community projects. There have been lots of Linux people in IBM since the early-90's.

Orv , 9 hrs
Re: At least it is not Oracle or M$

The OS is from Linus and chums, Redhat adds a few storage bits and some Redhat logos and erm.......

From a customer standpoint the main thing RedHat adds is formal support. There are still a lot of companies who are uncomfortable deploying an OS that has product support only from StackExchange and web forums. This market is fairly insensitive to price, which is good for a company like RedHat. (Although there has been an exodus of higher education customers as the price has gone up; like Sun did back in the day, they've been squeezing out that market. Two campuses I've worked for have switched wholesale to CentOS.)

Jove , 11 hrs
RedHat take-over IBM - @HmmmYes

"Just compare the share price of RH v IBM"

You would do better to look at the execution on the numbers - RedHat is not hitting its targets and there are signs of trouble. These two businesses were both looking for a prop and the RedHat shareholders are getting out while the business is near its peak.

[Oct 30, 2018] Pay your licensing fee

Oct 30, 2018 | linux.slashdot.org

red crab ( 1044734 ) , Monday October 29, 2018 @12:10AM ( #57552797 )

Re:Pay your licensing fee ( Score: 4 , Interesting)

Footnote: $699 License Fee applies to your systemP server running RHEL 7 with 4 cores activated for one year.

To activate additional processor cores on the systemP server, a fee of $199 per core applies. systemP offers a new Semi-Activation Mode now. In systemP Semi-Activation Mode, you will be only charged for all processor calls exceeding 258 MIPS, which will be processed by additional semi-activated cores on a pro-rata basis.

RHEL on systemP servers also offers a Partial Activation Mode, where additional cores can be activated in Inhibited Efficiency Mode.

To know more about Semi-Activation Mode, Partial Activation Mode and Inhibited Efficiency Mode, visit http://www.ibm.com/systemp [ibm.com] or contact your IBM systemP Sales Engineer.

[Oct 30, 2018] $34B? I was going to say this is the biggest tech acquisition ever, but it's second after Dell buying EMC

Notable quotes:
"... I'm not too sure what IBM is going to do with that, but congrats to whoever is getting the money... ..."
Oct 30, 2018 | theregister.co.uk

ratfox , 1 day

$34B? I was going to say this is the biggest tech acquisition ever, but it's second after Dell buying EMC. I'm not too sure what IBM is going to do with that, but congrats to whoever is getting the money...

[Oct 30, 2018] "OMG" comments rests on three assumptions: Red Hat is 100% brilliant and speckless, IBM is beyond hope and unchangeable, this is a hostile takeover

Notable quotes:
"... But I do beg to differ about the optimism, because, as my boss likes to quote, "culture eats strategy for breakfast". ..."
"... So the problem is that IBM have bought a business whose competencies and success factors differ from the IBM core. Its culture is radically different, and incompatible with IBM. ..."
"... Many of its best employees will be hostile to IBM ..."
"... And just like a gas giant, IBM can be considered a failed star ..."
Oct 30, 2018 | theregister.co.uk

LeoP , 1 day

Less pessimistic here

Quite a lot of the "OMG" moments rest on three assumptions:

- Red Hat is 100% brilliant and speckless;
- IBM is beyond hope and unchangeable;
- this is a hostile takeover.

I beg to differ on all counts. Call me beyond hope myself because of my optimism, but I do think what IBM bought, most of all, is a way to run a business. RH is just too big to be borged into a failing giant without leaving quite a substantial mark.

Ledswinger , 1 day
Re: Less pessimistic here

I beg to differ on all counts.

Note: I didn't downvote you, it's a valid argument. I can understand why you think that, because I'm in the minority that thinks the IBM "bear" case is overdone. They've been cleaning their stables for some years now, and that means dropping quite a lot of low margin business, and seeing the topline shrink. That attracts a lot of criticism, although it is good business sense.

But I do beg to differ about the optimism, because, as my boss likes to quote, "culture eats strategy for breakfast".

And (speaking as a strategist) that's 100% true, and 150% true when doing M&A.

So the problem is that IBM have bought a business whose competencies and success factors differ from the IBM core. Its culture is radically different, and incompatible with IBM.

Many of its best employees will be hostile to IBM. RedHat will be borged, and it will leave quite a mark. A bit like Shoemaker-Levy did on Jupiter. Likewise there will be lots of turbulence, but it won't endure, and at the end of it all the gas giant will be unchanged (just a bit poorer). And just like a gas giant, IBM can be considered a failed star.

[Oct 30, 2018] If IBM buys Redhat then what will happen to CentOS?

Notable quotes:
"... As long as IBM doesn't close-source RH stuff -- most of which they couldn't if they wanted to -- CentOS will still be able to do builds of it. The only thing RH can really enforce control over is the branding and documentation. ..."
"... Might be a REALLY good time to fork CentOS before IBM pulls an OpenSolaris on it. Same thing with Fedora. ..."
"... I used to be a Solaris admin. Now I am a linux admin in a red hat shop. Sun was bought by Oracle and more or less died a death. Will the same happen now? I know that Sun and RH are _very_ differeht beasts but I am thinking that now is the time to stop playing on the merry go round called systems administration. ..."
Oct 30, 2018 | theregister.co.uk

Orv , 9 hrs

Re: If IBM buys Redhat then what will happen to CentOS?

If IBM buys Redhat then what will happen to CentOS?

As long as IBM doesn't close-source RH stuff -- most of which they couldn't if they wanted to -- CentOS will still be able to do builds of it. The only thing RH can really enforce control over is the branding and documentation.
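[Editorial aside: the "builds" mentioned above refer to rebuilding Red Hat's published source RPMs with the trademarks stripped; for the GPL-licensed bulk of the distribution, the sources must remain available to those who receive the binaries. A minimal sketch of that rebuild workflow, using the standard mock/rpmbuild tooling -- the package name and URL below are purely illustrative, not a real download location:

    # fetch a source package published by the upstream vendor (URL is illustrative)
    wget https://example.com/SRPMS/bash-4.2.46-34.el7.src.rpm

    # rebuild it in a clean chroot with mock (reproducible, pulls build deps itself)
    mock -r epel-7-x86_64 --rebuild bash-4.2.46-34.el7.src.rpm

    # or rebuild directly with rpmbuild, after installing the build dependencies
    rpmbuild --rebuild bash-4.2.46-34.el7.src.rpm

Debranding (swapping logos, release files and documentation) is a separate step that rebuild distributions such as CentOS and Scientific Linux handle in their own branding packages.]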

Anonymous Coward , 9 hrs
Re: "they will buy a lot of talent."

"I found that those with real talent that matched IBM needs are well looked after."

The problem is that once you have served your purpose you tend to get downsized, as the expectation is that the cheaper offshore bodies can simply take over support etc. after picking up the skills they need over a few months!!!

This 'Blue meets Red and assimilates' will be very interesting to watch and will need lots and lots of popcorn on hand !!!

:)

FrankAlphaXII , 1 day
Might be a REALLY good time to fork CentOS before IBM pulls an OpenSolaris on it. Same thing with Fedora.

Kind of sad really; when I used Linux, CentOS and Fedora were my go-to distros.

Missing Semicolon , 1 day
Goodbye Centos

Centos is owned by RedHat now. So why on earth would IBM bother keeping it?

Plus, of course, all the support and coding will be done in India now.....

Doctor Syntax , 17 hrs
Re: Goodbye Centos

"Centos is owned by RedHat now."

RedHat is The Upstream Vendor of Scientific Linux. What happens to them if IBM turn nasty?

Anonymous Coward , 16 hrs
Re: Scientific Linux

Scientific Linux is a good idea in theory, dreadful in practice.

The idea of a research/academic-software-focused distro is a good one: unfortunately (I say unfortunately, but it's certainly what I would do myself), increasing numbers of researchers are now developing their pet projects on Debian or Ubuntu, and therefore often only make .deb packages available.

Anyone who has had any involvement in research software knows that if you find yourself in the position of needing to compile someone else's pet project from source, you are often in for an even more bumpy ride than usual.

And the lack of compatible RPM packages just encourages more and more researchers to go where the packages (and the free-ness) are, namely Debian and friends, which continue to gather momentum, while Red Hat continues to stagnate.

Red Hat may be very stable for running servers (as long as you don't need anything reasonably new (not bleeding edge, but at least newer than three years old)), but I have never really seen the attraction in it myself (especially as there isn't much of a "community" feeling around it, as its commercial focus gets in the way).

Roger Kynaston , 1 day
another buy out of my bread and butter

I used to be a Solaris admin. Now I am a linux admin in a red hat shop. Sun was bought by Oracle and more or less died a death. Will the same happen now? I know that Sun and RH are _very_ different beasts but I am thinking that now is the time to stop playing on the merry go round called systems administration.

Anonymous Coward , 1 day
Don't become a Mad Hatter

Dear Red Hatters, welcome to the world of battery hens, all clucking and clicking away to produce the elusive golden egg while the axe looms over their heads. Even if your mental health survives, you will become chicken feed by the time you are middle aged. IBM doesn't have a heart; get out while you can. Follow the example of IBMers in Australia who are jumping ship in their droves, leaving behind a crippled, demoralised workforce. Don't become a Mad Hatter.

[Oct 30, 2018] Kind of surprising since IBM was aligned with SUSE for so many years.

Oct 30, 2018 | arstechnica.com

tgx Ars Centurion reply 5 hours ago

Kind of surprising since IBM was aligned with SUSE for so many years.

IBM is an awful software company. OS/2, Lotus Notes, AIX all withered on the vine.

Doesn't bode well for RedHat.

[Oct 30, 2018] Hello, we are mandating some new policies, git can no longer be used, we must use IBM synergy software with rational rose

Oct 30, 2018 | arstechnica.com

lordofshadows , Smack-Fu Master, in training 6 hours ago

Hello, we are mandating some new policies, git can no longer be used, we must use IBM synergy software with rational rose.

All open source software is subject to corporate approval first, we know we know, to help streamline this process we have approved GNU CC and are looking into this Mak file program.

We are very pleased with systemd, we wish to further expand its DHCP capabilities and also integrate IBM analytics -- We are also going through a rebranding operation as we feel the color red is too jarring for our customers, we will now be known as IBM Rational Hat and will only distribute through our retail channels to boost sales -- Look for us at walmart, circuit city, and staples

[Oct 30, 2018] And RH customers will want to check their contracts...

Oct 30, 2018 | arstechnica.com

CousinSven , Smack-Fu Master, in training et Subscriptor 4 hours ago New Poster

IBM are paying around 12x annual revenue for Red Hat which is a significant multiple so they will have to squeeze more money out of the business somehow. Either they grow customers or they increase margins or both.

IBM had little choice but to do something like this. They are in a terminal spiral thanks to years of bad leadership. The confused billing of the purchase smacks of rush; so far I have seen Red Hat described as a cloud company, an infosec company, an open source company...

So IBM are buying Red Hat as a last chance bid to avoid being put through the PE threshing machine. Red Hat get a ludicrous premium so will take the money.

And RH customers will want to check their contracts...

[Oct 30, 2018] IBM To Buy Red Hat, the Top Linux Distributor, For $34 Billion

Notable quotes:
"... IBM license fees are predatory. Plus they require you to install agents on your servers for the sole purpose of calculating use and licenses. ..."
"... IBM exploits workers by offshoring and are slow to fix bugs and critical CVEs ..."
Oct 30, 2018 | linux.slashdot.org

Anonymous Coward , Sunday October 28, 2018 @03:34PM ( #57550555 )

Re: Damn. ( Score: 5 , Insightful)

IBM license fees are predatory. Plus they require you to install agents on your servers for the sole purpose of calculating use and licenses.

IBM exploits workers by offshoring and are slow to fix bugs and critical CVEs (WAS and DB2 especially)

The Evil Atheist ( 2484676 ) , Sunday October 28, 2018 @04:13PM ( #57550755 ) Homepage
Re:Damn. ( Score: 4 , Insightful)

IBM buys a company, fires all the transferred employees and hopes they can keep selling their acquired software without further development. If they were serious, they'd have improved their own Linux contribution efforts.

But they literally think they can somehow keep selling software without anyone with knowledge of the software, and without transferring skills to their own employees.

They literally have no interest in actual software development. It's all about sales targets.

Anonymous Coward , Monday October 29, 2018 @01:00AM ( #57552963 )
Re:Damn. ( Score: 3 , Informative)

My advice to Red Hat engineers is to get out now. I was an engineer at a company that was acquired by IBM. I was fairly senior so I stayed on and ended up retiring from IBM, even though I hated my last few years working there. I worked for several companies during my career, from startups to Fortune 100 companies. IBM was the worst place I worked by far. Consider every bad thing you've ever heard about IBM. I've heard those things too, and the reality was much worse.

IBM hasn't improved their Linux contribution efforts because it wouldn't know how. It's not for lack of talented engineers. The management culture is simply pathological. No dissent is allowed. Everyone lives in fear of a low stack ranking and getting laid off. In the end it doesn't matter anyway. Eventually the product you work on that they originally purchased becomes unprofitable and they lay you off anyway. They've long forgotten how to develop software on their own. Don't believe me? Try to think of an IBM branded software product that they built from the ground up in the last 25 years that has significant market share. Development managers chase one development fad after another hoping to find the silver bullet that will allow them to continue the relentless cost-cutting regime made necessary by revenue that has been falling consistently for over a decade now.

As far as I could tell, IBM is good at two things:

  1. Financial manipulation to disguise their shrinking revenue
  2. Buying software companies and mining them for value

Yes, there are still some brilliant people that work there. But IBM is just not good at turning ideas into revenue producing products. They are nearly always unsuccessful when they try, and then they go out and buy a company that succeeded in bringing to market the kind of product that they tried and failed to build themselves.

They used to be good at customer support, but that is mainly lip service now. Just before I left the company I was tapped to deliver a presentation at a customer seminar. The audience did not care much about my presentation. The only thing they wanted to talk about was how they had invested millions in re-engineering their business to use our software and now IBM appeared to be wavering in its long term commitment to supporting the product. It was all very embarrassing because I knew what they didn't: that the amount of development and support resources currently allocated to the product line was a small fraction of what it once was. After having worked there I don't know why anyone would ever want to buy a license for any of their products.

gtall ( 79522 ) , Sunday October 28, 2018 @03:59PM ( #57550691 )
Re:A Cloudy argument. ( Score: 5 , Insightful)

So you are saying that IBM has been asleep at the wheel for the last 8 years. Buying Red Hat won't save them, IBM is IBM's enemy.

Aighearach ( 97333 ) writes:
Re: ( Score: 3 )

They're already one of the large cloud providers, but you don't know that because they only focus on big customers.

The Evil Atheist ( 2484676 ) , Sunday October 28, 2018 @04:29PM ( #57550829 ) Homepage
Re:A Cloudy argument. ( Score: 5 , Insightful)

IBM engineers aren't actually crappy. It's the fucking MBAs in management who have no clue about how to run a software development company. Their engineers will want to do good work, but management will worry more about headcount and sales.

The Evil Atheist ( 2484676 ) , Sunday October 28, 2018 @03:57PM ( #57550679 ) Homepage
Goodbye Redhat. ( Score: 5 , Insightful)

IBM acquisitions never go well. All companies acquired by IBM go through a process of "Blue washing", in which the heart and soul of the acquired company is ripped out, the body burnt, and the remaining ashes devoured and defecated by its army of clueless salesmen and consultants. It's a sad, and infuriating, repeated pattern. They no longer develop internal talent. They drive away the remaining people left over from the time when they still did develop things. They think they can just buy their way into a market or technology, completely oblivious to the fact that their strategy of firing all their acquired employees/knowledge and hoping to sell software they have no interest in developing would somehow still retain customers. They literally could have just reshuffled and/or hired more developers to work on the kernel, but the fact they didn't shows they have no intention of actually contributing.

Nkwe ( 604125 ) , Sunday October 28, 2018 @04:26PM ( #57550819 )
Cha-Ching ( Score: 3 )

Red Hat closed Friday at $116.68 per share; the buyout looks to be for $190. Not everyone will be unhappy with this. I hope the Red Hat employees that won't like the upcoming cultural changes have stock and options; it may soften the blow a bit.

DougDot ( 966387 ) writes: < dougr@parrot-farm.net > on Sunday October 28, 2018 @05:43PM ( #57551189 ) Homepage
AIX Redux ( Score: 5 , Interesting)

Oh, good. Now IBM can turn RH into AIX while simultaneously suffocating whatever will be left of Redhat's staff with IBM's crushing, indifferent, incompetent bureaucracy.

This is what we call a lose - lose situation. Well, except for the president of Redhat, of course. Jim Whitehurst just got rich.

Tough Love ( 215404 ) writes: on Sunday October 28, 2018 @10:55PM ( #57552583 )
Re:AIX Redux ( Score: 2 )

Worse than Redhat's crushing, indifferent, incompetent bureaucracy? Maybe, but it's close.

ArchieBunker ( 132337 ) , Sunday October 28, 2018 @06:01PM ( #57551279 ) Homepage
Re:AIX Redux ( Score: 5 , Insightful)

Redhat is damn near AIX already. AIX had binary log files long before systemd.

Antique Geekmeister ( 740220 ) , Monday October 29, 2018 @05:51AM ( #57553587 )
Re:Please God No ( Score: 4 , Informative)

The core CentOS leadership are now Red Hat employees. They are neither clear of, nor uninvolved in, this purchase.

alvinrod ( 889928 ) , Sunday October 28, 2018 @03:46PM ( #57550623 )
Re:Please God No ( Score: 5 , Insightful)

Depends on the state. Non-compete clauses are unenforceable in some jurisdictions. IBM would want some of the people to stick around. You can't just take over a complex system from someone else and expect everything to run smoothly or know how to fix or extend it. Also, not everyone who works at Red Hat gets anything from the buyout unless they were regularly giving employees stock. A lot of people are going to want the stable paycheck of working for IBM instead of trying to start a new company.

However, some will inevitably get sick of working at IBM or end up being laid off at some point. If these people want to keep doing what they're doing, they can start a new company. If they're good at what they do, they probably won't have much trouble attracting some venture capital either.

wyattstorch516 ( 2624273 ) , Sunday October 28, 2018 @04:41PM ( #57550887 )
Re:Please God No ( Score: 2 )

Red Hat went public in 1999, they are far from being a start-up. They have acquired several companies themselves so they are just as corporate as IBM although significantly smaller.

Anonymous Coward , Sunday October 28, 2018 @05:10PM ( #57551035 )
Re:Please God No ( Score: 5 , Funny)

Look on the bright side: Poettering works for Red Hat. (Reposting because apparently Poettering has mod points.)

Anonymous Coward , Monday October 29, 2018 @09:24AM ( #57554491 )
Re: It all ( Score: 5 , Interesting)

My feelings exactly. As a former employee for both places, I see this as the death knell for Red Hat. Not immediately, not quickly, but eventually Red Hat's going to go the same way as every other company IBM has acquired.

Red Hat's doom (again, all IMO) started about 10 years ago or so when Matt Szulik left and Jim Whitehurst came on board. Nothing against Jim, but he NEVER seemed to grasp what F/OSS was about. Hell, when he came onboard he wouldn't (and never did) use Linux at all: instead he used a Mac, and so did the rest of the EMT (executive management team) over time. What company is run by people who refuse to use its own product, except one that doesn't have faith in it? The person on top of the BRAND AND PEOPLE team "needed" an iPad, she said, to do her work (quoting a friend in the IT dept who was asked to get it and set it up for her).

Then when they (the EMTs) wanted to move away from using F/OSS internally to outsourcing huge aspects of our infrastructure (like no longer using F/OSS for email and instead contracting with GOOGLE to do our email, calendaring and document sharing) is when, again for me, the plane started to spiral. How can we sell to OUR CUSTOMERS the idea that "Red Hat and F/OSS will suit all of your corporate needs" when, again, the people running the ship didn't think it would work for OURS? We had no special email or calendar needs, and if we did WE WERE THE LEADERS OF OPEN SOURCE, couldn't we make it do what we want? Hell, I was on an internal (but on our own time) team whose goal was to take needs like this and incubate them with an open source solution to meet that need.

But the EMTs just didn't want to do that. They were too interested in what was "the big thing" (at the time Open Shift was where all of our hiring and resources were being poured) to pay attention to the foundations that were crumbling.

And now, here we are. Red Hat is being subsumed by the largest closed-source company on the planet, one who does their job sub-optimally (to be nice). This is the end of Red Hat as we know it. Within 5-7 years Red Hat will go the way of Tivoli and Lotus: it will be a brand name that lacks any of what made the original company what it was when it was acquired.

[Oct 30, 2018] Why didn't IBM do the same as Oracle?

Notable quotes:
"... Just fork it, call it Blue Hat Linux, and rake in those sweet support dollars. ..."
Oct 30, 2018 | arstechnica.com

Twilight Sparkle , Ars Scholae Palatinae 4 hours ago

Why would you pay even 34 dollars for software worth $0?

Just fork it, call it Blue Hat Linux, and rake in those sweet support dollars.

[Oct 30, 2018] Soon after I started, the company fired hundreds of 50-something employees and put us "kids" in their jobs. Seeing that employee loyalty was a one way street at that place, I left after a couple of years. Best career move I ever made.

Oct 30, 2018 | features.propublica.org

Al Romig , Wednesday, April 18, 2018 5:20 AM

As a new engineering graduate, I joined a similar-sized multinational US-based company in the early '70s. Their recruiting pitch was, "Come to work here, kid. Do your job, keep your nose clean, and you will enjoy great, secure work until you retire on easy street".

Soon after I started, the company fired hundreds of 50-something employees and put us "kids" in their jobs. Seeing that employee loyalty was a one way street at that place, I left after a couple of years. Best career move I ever made.

GoingGone , Friday, April 13, 2018 6:06 PM
As a 25yr+ vet of IBM, I can confirm that this article is spot-on true. IBM used to be a proud and transparent company that clearly demonstrated that it valued its employees as much as it did its stock performance or dividend rate or EPS, simply because it is good for business. Those principles helped make and keep IBM atop the business world as the most trusted international brand and business icon of success for so many years. In 2000, all that changed when Sam Palmisano became the CEO. Palmisano's now infamous "Roadmap 2015" ran the company into the ground through its maniacal focus on increasing EPS at any and all costs. Literally. Like, its employees, employee compensation, benefits, skills, and education opportunities. Like, its products, product innovation, quality, and customer service. All of which resulted in the devastation of its technical capability and competitiveness, employee engagement, and customer loyalty. Executives seemed happy enough as their compensation grew nicely with greater financial efficiencies, and Palmisano got a sweet $270M+ exit package in 2012 for a job well done. The new CEO, Ginni Rometty, has since undergone a lot of scrutiny for her lack of business results, but she was screwed from day one. Of course, that doesn't leave her off the hook for the business practices outlined in the article, but what do you expect: she was hand picked by Palmisano and approved by the same board that thought Palmisano was golden.

Paul V Sutera , Tuesday, April 3, 2018 7:33 PM
In 1994, I saved my job at IBM for the first time, and survived. But I was 36 years old. I sat down at the desk of a man in his 50s, and found a few odds and ends left for me in the desk. Almost 20 years later, it was my turn to go. My health and well-being are much better now. Less money but better health. The excuse for the sins committed by management will always be: "I was just following orders".

[Oct 30, 2018] IBM age discrimination

Notable quotes:
"... Correction, March 24, 2018: Eileen Maroney lives in Aiken, South Carolina. The name of her city was incorrect in the original version of this story. ..."
Oct 30, 2018 | features.propublica.org

Consider, for example, a planning presentation that former IBM executives said was drafted by heads of a business unit carved out of IBM's once-giant software group and charged with pursuing the "C," or cloud, portion of the company's CAMS strategy.

The presentation laid out plans for substantially altering the unit's workforce. It was shown to company leaders including Diane Gherson, the senior vice president for human resources, and James Kavanaugh, recently elevated to chief financial officer. Its language was couched in the argot of "resources," IBM's term for employees, and "EP's," its shorthand for early professionals or recent college graduates.

Among the goals: "Shift headcount mix towards greater % of Early Professional hires." Among the means: "[D]rive a more aggressive performance management approach to enable us to hire and replace where needed, and fund an influx of EPs to correct seniority mix." Among the expected results: "[A] significant reduction in our workforce of 2,500 resources."

A slide from a similar presentation prepared last spring for the same leaders called for "re-profiling current talent" to "create room for new talent." Presentations for 2015 and 2016 for the 50,000-employee software group also included plans for "aggressive performance management" and emphasized the need to "maintain steady attrition to offset hiring."

IBM declined to answer questions about whether either presentation was turned into company policy. The description of the planned moves matches what hundreds of older ex-employees told ProPublica they believe happened to them: They were ousted because of their age. The company used their exits to hire replacements, many of them young; to ship their work overseas; or to cut its overall headcount.

Ed Alpern, now 65, of Austin, started his 39-year run with IBM as a Selectric typewriter repairman. He ended as a project manager in October of 2016 when, he said, his manager told him he could either leave with severance and other parting benefits or be given a bad job review -- something he said he'd never previously received -- and risk being fired without them.

Albert Poggi, now 70, was a three-decade IBM veteran and ran the company's Palisades, New York, technical center where clients can test new products. When notified in November of 2016 he was losing his job to layoff, he asked his bosses why, given what he said was a history of high job ratings. "They told me," he said, "they needed to fill it with someone newer."

The presentations from the software group, as well as the stories of ex-employees like Alpern and Poggi, square with internal documents from two other major IBM business units. The documents for all three cover some or all of the years from 2013 through the beginning of 2018 and deal with job assessments, hiring, firing and layoffs.

The documents detail practices that appear at odds with how IBM says it treats its employees. In many instances, the practices in effect, if not intent, tilt against the company's older U.S. workers.

For example, IBM spokespeople and lawyers have said the company never considers a worker's age in making decisions about layoffs or firings.

But one 2014 document reviewed by ProPublica includes dates of birth. An ex-IBM employee familiar with the process said executives from one business unit used it to decide about layoffs or other job changes for nearly a thousand workers, almost two-thirds of them over 50.

Documents from subsequent years show that young workers are protected from cuts for at least a limited period of time. A 2016 slide presentation prepared by the company's global technology services unit, titled "U.S. Resource Action Process" and used to guide managers in layoff procedures, includes bullets for categories considered "ineligible" for layoff. Among them: "early professional hires," meaning recent college graduates.

In responding to age-discrimination complaints that ex-employees file with the EEOC, lawyers for IBM say that front-line managers make all decisions about who gets laid off, and that their decisions are based strictly on skills and job performance, not age.

But ProPublica reviewed spreadsheets that indicate front-line managers hardly acted alone in making layoff calls. Former IBM managers said the spreadsheets were prepared for upper-level executives and kept continuously updated. They list hundreds of employees together with codes like "lift and shift," indicating that their jobs were to be lifted from them and shifted overseas, and details such as whether IBM's clients had approved the change.

An examination of several of the spreadsheets suggests that, whatever the criteria for assembling them, the resulting list of those marked for layoff was skewed toward older workers. A 2016 spreadsheet listed more than 400 full-time U.S. employees under the heading "REBAL," which refers to "rebalancing," the process that can lead to laying off workers and either replacing them or shifting the jobs overseas. Using the job search site LinkedIn, ProPublica was able to locate about 100 of these employees and then obtain their ages through public records. Ninety percent of those found were 40 or older. Seventy percent were over 50.

IBM frequently cites its history of encouraging diversity in its responses to EEOC complaints about age discrimination. "IBM has been a leader in taking positive actions to ensure its business opportunities are made available to individuals without regard to age, race, color, gender, sexual orientation and other categories," a lawyer for the company wrote in a May 2017 letter. "This policy of non-discrimination is reflected in all IBM business activities."

But ProPublica found at least one company business unit using a point system that disadvantaged older workers. The system awarded points for attributes valued by the company. The more points a person garnered, according to the former employee, the more protected she or he was from layoff or other negative job change; the fewer points, the more vulnerable.

The arrangement appears on its face to favor younger newcomers over older veterans. Employees were awarded points for being relatively new at a job level or in a particular role. Those who worked for IBM for fewer years got more points than those who'd been there a long time.

The ex-employee familiar with the process said a 2014 spreadsheet from that business unit, labeled "IBM Confidential," was assembled to assess the job prospects of more than 600 high-level employees, two-thirds of them from the U.S. It included employees' years of service with IBM, which the former employee said was used internally as a proxy for age. Also listed was an assessment by their bosses of their career trajectories as measured by the highest job level they were likely to attain if they remained at the company, as well as their point scores.

The tilt against older workers is evident when employees' years of service are compared with their point scores. Those with no points and therefore most vulnerable to layoff had worked at IBM an average of more than 30 years; those with a high number of points averaged half that.

Perhaps even more striking is the comparison between employees' service years and point scores on the one hand and their superiors' assessments of their career trajectories on the other.

Along with many American employers, IBM has argued it needs to shed older workers because they're no longer at the top of their games or lack "contemporary" skills.

But among those sized up in the confidential spreadsheet, fully 80 percent of older employees -- those with the most years of service but no points and therefore most vulnerable to layoff -- were rated by superiors as good enough to stay at their current job levels or be promoted. By contrast, only a small percentage of younger employees with a high number of points were similarly rated.

"No major company would use tools to conduct a layoff where a disproportionate share of those let go were African Americans or women," said Cathy Ventrell-Monsees, senior attorney adviser with the EEOC and former director of age litigation for the senior lobbying giant AARP. "There's no difference if the tools result in a disproportionate share being older workers."

In addition to the point system that disadvantaged older workers in layoffs, other documents suggest that IBM has made increasingly aggressive use of its job-rating machinery to pave the way for straight-out firings, or what the company calls "management-initiated separations." Internal documents suggest that older workers were particular targets.

Like in many companies, IBM employees sit down with their managers at the start of each year and set goals for themselves. IBM graded on a scale of 1 to 4, with 1 being top-ranked.

Those rated as 3 or 4 were given formal short-term goals known as personal improvement plans, or PIPs. Historically many managers were lenient, especially toward those with 3s whose ratings had dropped because of forces beyond their control, such as a weakness in the overall economy, ex-employees said.

But within the past couple of years, IBM appears to have decided the time for leniency was over. For example, a software group planning document for 2015 said that, over and above layoffs, the unit should seek to fire about 3,000 of the unit's 50,000-plus workers.

To make such deep cuts, the document said, executives should strike an "aggressive performance management posture." They needed to double the share of employees given low 3 and 4 ratings to at least 6.6 percent of the division's workforce. And because layoffs cost the company more than outright dismissals or resignations, the document said, executives should make sure that more than 80 percent of those with low ratings get fired or forced to quit.

Finally, the 2015 document said the division should work "to attract the best and brightest early professionals" to replace up to two-thirds of those sent packing. A more recent planning document -- the presentation to top executives Gherson and Kavanaugh for a business unit carved out of the software group -- recommended using similar techniques to free up money by cutting current employees to fund an "influx" of young workers.

In a recent interview, Poggi said he was resigned to being laid off. "Everybody at IBM has a bullet with their name on it," he said. Alpern wasn't nearly as accepting of being threatened with a poor job rating and then fired.

Alpern had a particular reason for wanting to stay on at IBM, at least until the end of last year. His younger son, Justin, then a high school senior, had been named a National Merit semifinalist. Alpern wanted him to be able to apply for one of the company's Watson scholarships. But IBM had recently narrowed eligibility so that only the children of current employees could apply, not those of retirees as well, as had been the case until 2014.

Alpern had to make it through December for his son to be eligible.

But in August, he said, his manager ordered him to retire. He sought to buy time by appealing to superiors. But he said the manager's response was to threaten him with a bad job review that, he was told, would land him on a PIP, where his work would be scrutinized weekly. If he failed to hit his targets -- and his managers would be the judges of that -- he'd be fired and lose his benefits.

Alpern couldn't risk it; he retired on Oct. 31. His son, now a freshman on the dean's list at Texas A&M University, didn't get to apply.

"I can think of only a couple regrets or disappointments over my 39 years at IBM,"" he said, "and that's one of them."

'Congratulations on Your Retirement!'

Like any company in the U.S., IBM faces few legal constraints to reducing the size of its workforce. And with its no-disclosure strategy, it eliminated one of the last regular sources of information about its employment practices and the changing size of its American workforce.

But there remained the question of whether recent cutbacks were big enough to trigger state and federal requirements for disclosure of layoffs. And internal documents, such as a slide in a 2016 presentation titled "Transforming to Next Generation Digital Talent," suggest executives worried that "winning the talent war" for new young workers required IBM to improve the "attractiveness of (its) culture and work environment," a tall order in the face of layoffs and firings.

So the company apparently has sought to put a softer face on its cutbacks by recasting many as voluntary rather than the result of decisions by the firm. One way it has done this is by converting many layoffs to retirements.

Some ex-employees told ProPublica that, faced with a layoff notice, they were just as happy to retire. Others said they felt forced to accept a retirement package and leave. Several actively objected to the company treating their ouster as a retirement. The company nevertheless processed their exits as such.

Project manager Ed Alpern's departure was treated in company paperwork as a voluntary retirement. He didn't see it that way, because the alternative he said he was offered was being fired outright.

Lorilynn King, a 55-year-old IT specialist who worked from her home in Loveland, Colorado, had been with IBM almost as long as Alpern by May 2016 when her manager called to tell her the company was conducting a layoff and her name was on the list.

King said the manager told her to report to a meeting in Building 1 on IBM's Boulder campus the following day. There, she said, she found herself in a group of other older employees being told by an IBM human resources representative that they'd all be retiring. "I have NO intention of retiring," she remembers responding. "I'm being laid off."

ProPublica has collected documents from 15 ex-IBM employees who got layoff notices followed by a retirement package and has talked with many others who said they received similar paperwork. Critics say the sequence doesn't square well with the law.

"This country has banned mandatory retirement," said Seiner, the University of South Carolina law professor and former EEOC appellate lawyer. "The law says taking a retirement package has to be voluntary. If you tell somebody 'Retire or we'll lay you off or fire you,' that's not voluntary."

Until recently, the company's retirement paperwork included a letter from Rometty, the CEO, that read, in part, "I wanted to take this opportunity to wish you well on your retirement ... While you may be retiring to embark on the next phase of your personal journey, you will always remain a valued and appreciated member of the IBM family." Ex-employees said IBM stopped sending the letter last year.

IBM has also embraced another practice that leads workers, especially older ones, to quit on what appears to be a voluntary basis. It substantially reversed its pioneering support for telecommuting, telling people who've been working from home for years to begin reporting to certain, often distant, offices. Their other choice: Resign.

David Harlan had worked as an IBM marketing strategist from his home in Moscow, Idaho, for 15 years when a manager told him last year of orders to reduce the performance ratings of everybody at his pay grade. Then in February last year, when he was 50, came an internal video from IBM's new senior vice president, Michelle Peluso, which announced plans to improve the work of marketing employees by ordering them to work "shoulder to shoulder." Those who wanted to stay on would need to "co-locate" to offices in one of six cities.

Early last year, Harlan received an email congratulating him on "the opportunity to join your team in Raleigh, North Carolina." He had 30 days to decide on the 2,600-mile move. He resigned in June.

David Harlan worked for IBM for 15 years from his home in Moscow, Idaho, where he also runs a drama company. Early last year, IBM offered him a choice: Move 2,600 miles to Raleigh-Durham to begin working at an office, or resign. He left in June. (Rajah Bose for ProPublica)

After the Peluso video was leaked to the press, an IBM spokeswoman told the Wall Street Journal that the "vast majority" of people ordered to change locations and begin reporting to offices did so. IBM Vice President Ed Barbini said in an initial email exchange with ProPublica in July that the new policy affected only about 2,000 U.S. employees and that "most" of those had agreed to move.

But employees across a wide range of company operations, from the systems and technology group to analytics, told ProPublica they've also been ordered to co-locate in recent years. Many IBMers with long service said that they quit rather than sell their homes, pull children from school and desert aging parents. IBM declined to say how many older employees were swept up in the co-location initiative.

"They basically knew older employees weren't going to do it," said Eileen Maroney, a 63-year-old IBM product manager from Aiken, South Carolina, who, like Harlan, was ordered to move to Raleigh or resign. "Older people aren't going to move. It just doesn't make any sense." Like Harlan, Maroney left IBM last June.

Having people quit rather than being laid off may help IBM avoid disclosing how much it is shrinking its U.S. workforce and where the reductions are occurring.

Under the federal WARN Act, adopted in the wake of huge job cuts and factory shutdowns during the 1980s, companies laying off 50 or more employees who constitute at least one-third of an employer's workforce at a site have to give advance notice of layoffs to the workers, public agencies and local elected officials.

Similar laws in some states where IBM has a substantial presence are even stricter. California, for example, requires advanced notice for layoffs of 50 or more employees, no matter what the share of the workforce. New York requires notice for 25 employees who make up a third.

Because the laws were drafted to deal with abrupt job cuts at individual plants, they can miss reductions that occur over long periods among a workforce like IBM's that was, at least until recently, widely dispersed because of the company's work-from-home policy.

IBM's training sessions to prepare managers for layoffs suggest the company was aware of WARN thresholds, especially in states with strict notification laws such as California. A 2016 document entitled "Employee Separation Processing" and labeled "IBM Confidential" cautions managers about the "unique steps that must be taken when processing separations for California employees."

A ProPublica review of five years of WARN disclosures for a dozen states where the company had large facilities that shed workers found no disclosures in nine. In the other three, the company alerted authorities of just under 1,000 job cuts -- 380 in California, 369 in New York and 200 in Minnesota. IBM's reported figures are well below the actual number of jobs the company eliminated in these states, where in recent years it has shuttered, sold off or leveled plants that once employed vast numbers.

By contrast, other employers in the same 12 states reported layoffs last year alone totaling 215,000 people. They ranged from giant Walmart to Ostrom's Mushroom Farms in Washington state.

Whether IBM operated within the rules of the WARN act, which are notoriously fungible, could not be determined because the company declined to provide ProPublica with details on its layoffs.

A Second Act, But Poorer

With 35 years at IBM under his belt, Ed Miyoshi had plenty of experience being pushed to take buyouts, or early retirement packages, and refusing them. But he hadn't expected to be pushed last fall.

Miyoshi, of Hopewell Junction, New York, had some years earlier launched a pilot program to improve IBM's technical troubleshooting. With the blessing of an IBM vice president, he was busily interviewing applicants in India and Brazil to staff teams to roll the program out to clients worldwide.

The interviews may have been why IBM mistakenly assumed Miyoshi was a manager, and so emailed him to eliminate the one U.S.-based employee still left in his group.

"That was me," Miyoshi realized.

In his sign-off email to colleagues shortly before Christmas 2016, Miyoshi, then 57, wrote: "I am too young and too poor to stop working yet, so while this is good-bye to my IBM career, I fully expect to cross paths with some of you very near in the future."

He did, and perhaps sooner than his colleagues had expected; he started as a subcontractor to IBM about two weeks later, on Jan. 3.

Miyoshi is an example of older workers who've lost their regular IBM jobs and been brought back as contractors. Some of them -- not Miyoshi -- became contract workers after IBM told them their skills were out of date and no longer needed.

Employment law experts said that hiring ex-employees as contractors can be legally dicey. It raises the possibility that the layoff of the employee was not for the stated reason but perhaps because they were targeted for their age, race or gender.

IBM appears to recognize the problem. Ex-employees say the company has repeatedly told managers -- most recently earlier this year -- not to contract with former employees or sign on with third-party contracting firms staffed by ex-IBMers. But ProPublica turned up dozens of instances where the company did just that.

Only two weeks after IBM laid him off in December 2016, Ed Miyoshi of Hopewell Junction, New York, started work as a subcontractor to the company. But he took a $20,000-a-year pay cut. "I'm not a millionaire, so that's a lot of money to me," he says. (Demetrius Freeman for ProPublica)

Responding to a question in a confidential questionnaire from ProPublica, one 35-year company veteran from New York said he knew exactly what happened to the job he left behind when he was laid off. "I'M STILL DOING IT. I got a new gig eight days after departure, working for a third-party company under contract to IBM doing the exact same thing."

In many cases, of course, ex-employees are happy to have another job, even if it is connected with the company that laid them off.

Henry, the Columbus-based sales and technical specialist who'd been with IBM's "resiliency services" unit, discovered that he'd lost his regular IBM job because the company had purchased an Indian firm that provided the same services. But after a year out of work, he wasn't going to turn down the offer of a temporary position as a subcontractor for IBM, relocating data centers. It got money flowing back into his household and got him back where he liked to be, on the road traveling for business.

The compensation most ex-IBM employees make as contractors isn't comparable. While Henry said he collected the same dollar amount, it didn't include health insurance, which cost him $1,325 a month. Miyoshi said his paycheck is 20 percent less than what he made as an IBM regular.

"I took an over $20,000 hit by becoming a contractor. I'm not a millionaire, so that's a lot of money to me," Miyoshi said.

And lower pay isn't the only problem ex-IBM employees-now-subcontractors face. This year, Miyoshi's payable hours have been cut by an extra 10 "furlough days." Internal documents show that IBM repeatedly furloughs subcontractors without pay, often for two, three or more weeks a quarter. In some instances, the furloughs occur with little advance notice and at financially difficult moments. In one document, for example, it appears IBM managers, trying to cope with a cost overrun spotted in mid-November, planned to dump dozens of subcontractors through the end of the year, the middle of the holiday season.

Former IBM employees now on contract said the company controls costs by notifying contractors in the midst of projects they have to take pay cuts or lose the work. Miyoshi said that he originally started working for his third-party contracting firm for 10 percent less than at IBM, but ended up with an additional 10 percent cut in the middle of 2017, when IBM notified the contractor it was slashing what it would pay.

For many ex-employees, there are few ways out. Henry, for example, sought to improve his chances of landing a new full-time job by seeking assistance to finish a college degree through a federal program designed to retrain workers hurt by offshoring of jobs.

But when he contacted the Ohio state agency that administers the Trade Adjustment Assistance, or TAA, program, which provides assistance to workers who lose their jobs for trade-related reasons, he was told IBM hadn't submitted necessary paperwork. State officials said Henry could apply if he could find other IBM employees who were laid off with him, information that the company doesn't provide.

TAA is overseen by the Labor Department but is operated by states under individual agreements with Washington, so the rules can vary from state to state. But generally employers, unions, state agencies and groups of employers can petition for training help and cash assistance. Labor Department data compiled by the advocacy group Global Trade Watch shows that employers apply in about 40 percent of cases. Some groups of IBM workers have obtained retraining funds when they or their state have applied, but records dating back to the early 1990s show IBM itself has applied for and won taxpayer assistance only once, in 2008, for three Chicago-area workers whose jobs were being moved to India.

Teasing New Jobs

As IBM eliminated thousands of jobs in 2016, David Carroll, a 52-year-old Austin software engineer, thought he was safe.

His job was in mobile development, the "M" in the company's CAMS strategy. And if that didn't protect him, he figured he was only four months shy of qualifying for a program that gives employees who leave within a year of their three-decade mark access to retiree medical coverage and other benefits.

But the layoff notice Carroll received March 2 gave him three months -- not four -- to come up with another job. Having been a manager, he said he knew the gantlet he'd have to run to land a new position inside IBM.

Still, he went at it hard, applying for more than 50 IBM jobs, including one for a job he'd successfully done only a few years earlier. For his effort, he got one offer -- the week after he'd been forced to depart. He got severance pay but lost access to what would have been more generous benefits.

Edward Kishkill, then 60, of Hillsdale, New Jersey, had made a similar calculation.

A senior systems engineer, Kishkill recognized the danger of layoffs, but assumed he was immune because he was working in systems security, the "S" in CAMS and another hot area at the company.

The precaution did him no more good than it had Carroll. Kishkill received a layoff notice the same day, along with 17 of the 22 people on his systems security team, including Diane Moos. The notice said that Kishkill could look for other jobs internally. But if he hadn't landed anything by the end of May, he was out.

With a daughter who was a senior in high school headed to Boston University, he scrambled to apply, but came up dry. His last day was May 31, 2016.

For many, the fruitless search for jobs within IBM is the last straw, a final break with the values the company still says it embraces. Combined with the company's increasingly frequent request that departing employees train their overseas replacements, it has left many people bitter. Scores of ex-employees interviewed by ProPublica said that managers with job openings told them they weren't allowed to hire from layoff lists without getting prior, high-level clearance, something that's almost never given.

ProPublica reviewed documents that show that a substantial share of recent IBM layoffs have involved what the company calls "lift and shift," lifting the work of specific U.S. employees and shifting it to specific workers in countries such as India and Brazil. For example, a document summarizing U.S. employment in part of the company's global technology services division for 2015 lists nearly a thousand people as layoff candidates, with the jobs of almost half coded for lift and shift.

Ex-employees interviewed by ProPublica said the lift-and-shift process required their extensive involvement. For example, shortly after being notified she'd be laid off, Kishkill's colleague, Moos, was told to help prepare a "knowledge transfer" document and begin a round of conference calls and email exchanges with two Indian IBM employees who'd be taking over her work. Moos said the interactions consumed much of her last three months at IBM.

Next Chapters

While IBM has managed to keep the scale and nature of its recent U.S. employment cuts largely under the public's radar, the company drew some unwanted attention during the 2016 presidential campaign, when then-candidate Donald Trump lambasted it for eliminating 500 jobs in Minnesota, where the company has had a presence for a half century, and shifting the work abroad.

The company also has caught flak -- in places like Buffalo, New York; Dubuque, Iowa; Columbia, Missouri; and Baton Rouge, Louisiana -- for promising jobs in return for state and local incentives, then failing to deliver. In all, according to public officials in those and other places, IBM promised to bring on 3,400 workers in exchange for as much as $250 million in taxpayer financing but has hired only about half as many.

After Trump's victory, Rometty, in a move at least partly aimed at courting the president-elect, pledged to hire 25,000 new U.S. employees by 2020. Spokesmen said the hiring would increase IBM's U.S. employment total, although, given its continuing job cuts, the addition is unlikely to approach the promised hiring total.

When The New York Times ran a story last fall saying IBM now has more employees in India than the U.S., Barbini, the corporate spokesman, rushed to declare, "The U.S. has always been and remains IBM's center of gravity." But his stream of accompanying tweets and graphics focused as much on the company's record for racking up patents as hiring people.

IBM has long been aware of the damage its job cuts can do to people. In a series of internal training documents to prepare managers for layoffs in recent years, the company has included this warning: "Loss of a job often triggers a grief reaction similar to what occurs after a death."

Most, though not all, of the ex-IBM employees with whom ProPublica spoke have weathered the loss and re-invented themselves.

Marjorie Madfis, the digital marketing strategist, couldn't land another tech job after her 2013 layoff, so she headed in a different direction. She started a nonprofit called Yes She Can Inc. that provides job skills development for young autistic women, including her 21-year-old daughter.

After almost two years of looking and desperate for useful work, Brian Paulson, the widely traveled IBM senior manager, applied for and landed a position as a part-time rural letter carrier in Plano, Texas. He now works as a contract project manager for a Las Vegas gaming and lottery firm.

Ed Alpern, who started at IBM as a Selectric typewriter repairman, watched his son go on to become a National Merit Scholar at Texas A&M University, but not a Watson scholarship recipient.

Lori King, the IT specialist and 33-year IBM veteran who's now 56, got in a parting shot. She added an addendum to the retirement papers the firm gave her that read in part: "It was never my plan to retire earlier than at least age 60 and I am not committing to retire. I have been informed that I am impacted by a resource action effective on 2016-08-22, which is my last day at IBM, but I am NOT retiring."

King has aced more than a year of government-funded coding boot camps and university computer courses, but has yet to land a new job.

David Harlan still lives in Moscow, Idaho, after refusing IBM's "invitation" to move to North Carolina, and is artistic director of the Moscow Art Theatre (Too).

Ed Miyoshi is still a technical troubleshooter working as a subcontractor for IBM.

Ed Kishkill, the senior systems engineer, works part time at a local tech startup, but pays his bills as an associate at a suburban New Jersey Staples store.

This year, Paul Henry was back on the road, working as an IBM subcontractor in Detroit, about 200 miles from where he lived in Columbus. On Jan. 8, he put in a 14-hour day and said he planned to call home before turning in. He died in his sleep.

Correction, March 24, 2018: Eileen Maroney lives in Aiken, South Carolina. The name of her city was incorrect in the original version of this story.



[Oct 30, 2018] Cutting 'Old Heads' at IBM

Notable quotes:
"... I took an early retirement package when IBM first started downsizing. I had 30 years with them, but I could see the writing on the wall so I got out. I landed an exec job with a biotech company some years later and inherited an IBM consulting team that were already engaged. I reviewed their work for 2 months then had the pleasure of terminating the contract and actually escorting the team off the premises because the work product was so awful. ..."
"... Every former or prospective IBM employee is a potential future IBM customer or partner. How you treat them matters! ..."
"... I advise IBM customers now. My biggest professional achievements can be measured in how much revenue IBM lost by my involvement - millions. Favorite is when IBM paid customer to stop the bleeding. ..."
Oct 30, 2018 | features.propublica.org

I took an early retirement package when IBM first started downsizing. I had 30 years with them, but I could see the writing on the wall so I got out. I landed an exec job with a biotech company some years later and inherited an IBM consulting team that were already engaged. I reviewed their work for 2 months then had the pleasure of terminating the contract and actually escorting the team off the premises because the work product was so awful.

They actually did a presentation of their interim results - but it was a 52 slide package that they had presented to me in my previous job but with the names and numbers changed.

DarthVaderMentor dauwkus , Thursday, April 5, 2018 4:43 PM

Intellectual Capital Re-Use! LOL! Not many people realize in IBM that many, if not all of the original IBM Consulting Group materials were made under the Type 2 Materials clause of the IBM Contract, which means the customers actually owned the IP rights of the documents. Can you imagine the mess if just one customer demands to get paid for every re-use of the IP that was developed for them and then re-used over and over again?
NoGattaca dauwkus , Monday, May 7, 2018 5:37 PM
Beautiful! Yea, these companies so fast to push experienced people who have dedicated their lives to the firm - how can you not...all the hours and commitment it takes - way underestimate the power of the network of those left for dead and their influence in that next career gig. Memories are long...very long when it comes to experiences like this.
davosil North_40 , Sunday, March 25, 2018 5:19 PM
True dat! Every former or prospective IBM employee is a potential future IBM customer or partner. How you treat them matters!
Playing Defense North_40 , Tuesday, April 3, 2018 4:41 PM
I advise IBM customers now. My biggest professional achievements can be measured in how much revenue IBM lost by my involvement - millions. Favorite is when IBM paid customer to stop the bleeding.

[Oct 30, 2018] It s all about making the numbers so the management can present a Potemkin Village of profits and ever-increasing growth sufficient to get bonuses. There is no relation to any sort of quality or technological advancement, just HR 3-card monte

Notable quotes:
"... It's no coincidence whatsoever that Diane Gherson, mentioned prominently in the article, blasted out an all-employees email crowing about IBM being a great place to work according to (ahem) LinkedIn. I desperately want to post a link to this piece in the corporate Slack, but that would get me fired immediately instead of in a few months at the next "resource action." It's been a whole 11 months since our division had one, so I know one is coming soon. ..."
"... I used to say when I was there that: "After every defeat, they pin medals on the generals and shoot the soldiers". ..."
"... 1990 is also when H-1B visa rules were changed so that companies no longer had to even attempt to hire an American worker as long as the job paid $60,000, which hasn't changed since. This article doesn't even mention how our work visa system facilitated and even rewarded this abuse of Americans. ..."
"... Well, starting in the 1980s, the American management was allowed by Reagan to get rid of its workforce. ..."
"... It's all about making the numbers so the management can present a Potemkin Village of profits and ever-increasing growth sufficient to get bonuses. There is no relation to any sort of quality or technological advancement, just HR 3-card monte. They have installed air bearing in Old Man Watson's coffin as it has been spinning ever faster ..."
"... Corporate America executive management is all about stock price management. Their bonus's in the millions of dollars are based on stock performance. With IBM's poor revenue performance since Ginny took over, profits can only be maintained by cost reduction. Look at the IBM executive's bonus's throughout the last 20 years and you can see that all resource actions have been driven by Palmisano's and Rominetty's greed for extravagant bonus's. ..."
"... Also worth noting is that IBM drastically cut the cap on it's severance pay calculation. Almost enough to make me regret not having retired before that changed. ..."
"... Yeah, severance started out at 2 yrs pay, went to 1 yr, then to 6 mos. and is now 1 month. ..."
"... You need to investigate AT&T as well, as they did the same thing. I was 'sold' by IBM to AT&T as part of he Network Services operation. AT&T got rid of 4000 of the 8000 US employees sent to AT&T within 3 years. Nearly everyone of us was a 'senior' employee. ..."
Oct 30, 2018 | disqus.com

dragonflap7 months ago I'm a 49-year-old SW engineer who started at IBM as part of an acquisition in 2000. I got laid off in 2002 when IBM started sending reqs to Bangalore in batches of thousands. After various adventures, I rejoined IBM in 2015 as part of the "C" organization referenced in the article.

It's no coincidence whatsoever that Diane Gherson, mentioned prominently in the article, blasted out an all-employees email crowing about IBM being a great place to work according to (ahem) LinkedIn. I desperately want to post a link to this piece in the corporate Slack, but that would get me fired immediately instead of in a few months at the next "resource action." It's been a whole 11 months since our division had one, so I know one is coming soon.

Stewart Dean7 months ago ,

The lead-in to this piece makes it sound like IBM was forced into these practices by inescapable forces. I'd say not, rather that it pursued them because a) the management was clueless about how to lead IBM in the new environment and new challenges so b) it started to play with numbers to keep the (apparent) profits up....to keep the bonuses coming. I used to say when I was there that: "After every defeat, they pin medals on the generals and shoot the soldiers".

And then there's the Pig with the Wooden Leg shaggy dog story that ends with the punch line, "A pig like that you don't eat all at once", which has a lot of the flavor of how many of us saw our jobs as IBM die a slow death.

IBM is about to fall out of the sky, much as General Motors did. How could that happen? By endlessly beating the cow to get more milk.

IBM was hiring right through the Great Depression, such that It Did Not Pay Unemployment Insurance, because it never laid people off. Until about 1990, your manager was responsible for making sure you had everything you needed to excel and grow....and you would find people that had started on the loading dock and had become Senior Programmers. But then about 1990, IBM started paying unemployment insurance....just out of the goodness of its heart. Right.

CRAW Stewart Dean7 months ago ,

1990 is also when H-1B visa rules were changed so that companies no longer had to even attempt to hire an American worker as long as the job paid $60,000, which hasn't changed since. This article doesn't even mention how our work visa system facilitated and even rewarded this abuse of Americans.

DDRLSGC Stewart Dean7 months ago ,

Well, starting in the 1980s, the American management was allowed by Reagan to get rid of its workforce.

Georgann Putintsev Stewart Dean7 months ago ,

I found that other Ex-IBMer's respect other Ex-IBMer's work ethics, knowledge and initiative.

Other companies are happy to get them as a valuable resource. In '89 when our Palo Alto Datacenter moved, we were given three options: 1.) to become a Programmer (w/training) 2.) move to Boulder or 3.) to leave.

I got my training with programming experience and left IBM in '92, when for 4 yrs IBM offered really good incentives for leaving the company. The Executives thought that the IBM Mainframe/MVS z/OS+ was on the way out and the Laptop (Small but Increasing Capacity) Computer would take over everything.

It didn't. It did allow many skilled IBMers to succeed outside of IBM and helped build up our customer skill sets. And like many, when the opportunity arose to return I did. In '91 I was accidentally given a male co-worker's paycheck and that was one of the reasons for leaving. During my various Contract work outside, I bumped into other male IBMer's that had left too, some I had trained, and when they disclosed that their salary (which was 20-40% higher than mine) was the reason they left, I knew I had made the right decision.

Women tend to under-value themselves and their capabilities. Contracting also taught me that companies that had 70% employees and 30% contractors, meant that contractors would be let go if they exceeded their quarterly expenditures.

I first contracted with IBM in '98 and when I decided to re-join IBM '01, I had (3) job offers and I took the most lucrative exciting one to focus on fixing & improving DB2z Qry Parallelism. I developed a targeted L3 Technical Change Team to help L2 Support reduce Customer problems reported and improve our product. The instability within IBM remained and I saw IBM try to eliminate aging, salaried, benefited employees. The 1.) find a job within IBM ... to 2.) to leave ... was now standard.

While my salary had more than doubled since I left IBM the first time, it still wasn't near other male counterparts. The continual rating competition based on salary ranged titles and timing a title raise after a round of layoffs, not before. I had another advantage going and that was that my changed reduced retirement benefits helped me stay there. It all comes down to the numbers that Mgmt is told to cut & save IBM. While much of this article implies others were hired, at our Silicon Valley Location and other locations, they had no intent to backfill. So the already burdened employees were laden with more workloads & stress.

In the early to mid 2000's IBM setup a counter lab in China where they were paying 1/4th U.S. salaries and many SVL IBMers went to CSDL to train our new world 24x7 support employees. But many were not IBM loyal and their attrition rates were very high, so it fell to a wave of new-hires at SVL to help address it.

Stewart Dean Georgann Putintsev7 months ago ,

It's all about making the numbers so the management can present a Potemkin Village of profits and ever-increasing growth sufficient to get bonuses. There is no relation to any sort of quality or technological advancement, just HR 3-card monte. They have installed air bearing in Old Man Watson's coffin as it has been spinning ever faster

IBM32_retiree • 7 months ago ,

Corporate America executive management is all about stock price management. Their bonus's in the millions of dollars are based on stock performance. With IBM's poor revenue performance since Ginny took over, profits can only be maintained by cost reduction. Look at the IBM executive's bonus's throughout the last 20 years and you can see that all resource actions have been driven by Palmisano's and Rominetty's greed for extravagant bonus's.

Dan Yurman7 months ago ,

Bravo ProPublica for another "sock it to them" article - journalism in honor of the spirit of great newspapers everywhere that the refuge of justice in hard times is with the press.

Felix Domestica7 months ago ,

Also worth noting is that IBM drastically cut the cap on it's severance pay calculation. Almost enough to make me regret not having retired before that changed.

RonF Felix Domestica7 months ago ,

Yeah, severance started out at 2 yrs pay, went to 1 yr, then to 6 mos. and is now 1 month.

mjmadfis RonF7 months ago ,

When I was let go in June 2013 it was 6 months severance.

Terry Taylor7 months ago ,

You need to investigate AT&T as well, as they did the same thing. I was 'sold' by IBM to AT&T as part of the Network Services operation. AT&T got rid of 4000 of the 8000 US employees sent to AT&T within 3 years. Nearly every one of us was a 'senior' employee.

weelittlepeople Terry Taylor7 months ago ,

Good Ol Ma Bell is following the IBM playbook to a Tee

emnyc7 months ago ,

ProPublica deserves a Pulitzer for this article and all the extensive research that went into this investigation.

Incredible job! Congrats.

On a separate note, IBM should be ashamed of themselves and the executive team that enabled all of this should be fired.

WmBlake7 months ago ,

As a permanent old contractor and free-enterprise defender myself, I don't blame IBM a bit for wanting to cut the fat. But for the outright *lies, deception and fraud* that they use to break laws, weasel out of obligations... really just makes me want to shoot them... and I never even worked for them.

Michael Woiwood7 months ago ,

Great Article.

Where I worked, in Rochester, MN, people have known what is happening for years. My last years with IBM were the most depressing time in my life.

I hear a rumor that IBM would love to close plants they no longer use but they are so environmentally polluted that it is cheaper to maintain than to clean up and sell.

scorcher147 months ago ,

One of the biggest driving factors in age discrimination is health insurance costs, not salary. It can cost 4-5x as much to insure and older employee vs. a younger one, and employers know this. THE #1 THING WE CAN DO TO STOP AGE DISCRIMINATION IS TO MOVE AWAY FROM OUR EMPLOYER-PROVIDED INSURANCE SYSTEM. It could be single-payer, but it could also be a robust individual market with enough pool diversification to make it viable. Freeing employers from this cost burden would allow them to pick the right talent regardless of age.

DDRLSGC scorcher147 months ago ,

The American business have constantly fought against single payer since the end of World War II and why should I feel sorry for them when all of a sudden, they are complaining about health care costs? It is outrageous that workers have to face age discrimination; however, the CEOs don't have to deal with that issue since they belong to a tiny group of people who can land a job anywhere else.

pieinthesky scorcher147 months ago ,

Single payer won't help. We have single payer in Canada and just as much age discrimination in employment. Society in general does not like older people so unless you're a doctor, judge or pharmacist you will face age bias. It's even worse in popular culture never mind in employment.

OrangeGina scorcher147 months ago ,

I agree. Yet, a determined company will find other methods, explanations and excuses.

JohnCordCutter7 months ago ,

Thanks for the great article. I left IBM last year. USA based. 49. Product Manager in one of IBM's strategic initiatives, however got told to relocate or leave. I found another job and left. I came to IBM from an acquisition. My only regret is, I wish I had left this toxic environment earlier. It truly is a dreadful place to work.

60 Soon • 7 months ago ,

The methodology has trickled down to smaller companies pursuing the same net results for headcount reduction. The similarities to my experience were painful to read. The grief I felt after my job was "eliminated" 10 years ago while the Recession was at its worst and shortly after my 50th birthday was coming back. I never have recovered financially but have started writing a murder mystery. The first victim? The CEO who let me go. It's true. Revenge is best served cold.

donttreadonme97 months ago ,

Well written. People like me have experienced exactly what you wrote. IBM is a shadow of its former greatness and I have advised my children to stay away from IBM and companies like it as they start their careers. IBM is a corrupt company. Shame on them!

annapurna7 months ago ,

I hope they find some way to bring a class action lawsuit against these assholes.

Mark annapurna7 months ago ,

I suspect someone will end up hunting them down with an axe at some point. That's the only way they'll probably learn. I don't know about IBM specifically, but when Carly Fiorina ran HP, she travelled with and even went into engineering labs with an armed security detail.

OrangeGina Mark7 months ago ,

all the bigwig CEOs have these black SUV security details now.

Sarahw7 months ago ,

IBM has been using these tactics at least since the 1980s, when my father was let go for similar 'reasons.'

Vin7 months ago ,

Was let go after 34 years of service. My Resource Action letter had additional lines after '...unless you are offered ... position within IBM before that date.', implying don't even try to look for a position. The lines were: 'Additional business controls are in effect to manage the business objectives of this resource action, therefore, job offers within (the name of division) will be highly unlikely.'

Mark Vin7 months ago ,

Absolutely and utterly disgusting.

Greybeard7 months ago ,

I've worked for a series of vendors for over thirty years. A job at IBM used to be the brass ring; nowadays, not so much.

I've heard persistent rumors from IBMers that U.S. headcount is below 25,000 nowadays. Given events like the recent downtime of the internal systems used to order parts (5 or so days--website down because staff who maintained it were let go without replacements), it's hard not to see the spiral continue down the drain.

What I can't figure out is whether Rometty and cronies know what they're doing or are just clueless. Either way, the result is the same: destruction of a once-great company and brand. Tragic.

ManOnTheHill Greybeard7 months ago ,

Well, none of these layoffs/ageist RIFs affect the execs, so they don't see the effects, or they see the effects but attribute them to some other cause.

(I'm surprised the article doesn't address this part of the story; how many affected by layoffs are exec/senior management? My bet is very few.)

ExIBMExec ManOnTheHill7 months ago ,

I was a D-banded exec (Director-level) who was impacted and I know even some VPs who were affected as well, so they do spread the pain, even in the exec ranks.

ManOnTheHill ExIBMExec7 months ago ,

That's different than I have seen in companies I have worked for (like HP). There RIFs (Reduction In Force, their acronym for layoff) went to the director level and no further up.

[Oct 30, 2018] There are plenty of examples of people who were doing their jobs, IN SPADES, putting in tons of unpaid overtime, and generally doing whatever was humanly possible to make sure that whatever was promised to the customer was delivered within their span of control. As they grew older corporations threw them out like an empty can

Notable quotes:
"... The other alternative is a market-based life that, for many, will be cruel, brutish, and short. ..."
Oct 30, 2018 | features.propublica.org

Lorilynn King

Step back and think about this for a minute. There are plenty of examples of people who were doing their jobs, IN SPADES, putting in tons of unpaid overtime, and generally doing whatever was humanly possible to make sure that whatever was promised to the customer was delivered (within their span of control... I'm not going to get into a discussion of how IBM pulls the rug out from underneath contracts after they've been signed).

These people were, and still are, high performers, they are committed to the job and the purpose that has been communicated to them by their peers, management, and customers; and they take the time (their OWN time) to pick up new skills and make sure that they are still current and marketable. They do this because they are committed to doing the job to the best of their ability.... it's what makes them who they are.

IBM (and other companies) are firing these very people ***for one reason and one reason ONLY***: their AGE. They have the skills and they're doing their jobs. If the same person was 30 you can bet that they'd still be there. Most of the time it has NOTHING to do with performance or lack of concurrency. Once the employee is fired, the job is done by someone else. The work is still there, but it's being done by someone younger and/or of a different nationality.

The money that is being saved by these companies has to come from somewhere. People that are having to withdraw their retirement savings 20 or so years earlier than planned are going to run out of funds.... and when they're in nursing homes, guess who is going to be supporting them? Social security will be long gone, their kids have their own monetary challenges.... so it will be government programs.... maybe.

This is not just a problem that impacts the 40 and over crowd. This is going to impact our entire society for generations to come.

NoPolitician
The business reality you speak of can be tempered via government actions. A few things:

The other alternative is a market-based life that, for many, will be cruel, brutish, and short.

[Oct 30, 2018] Elimination of loyalty: what corporations cloak as weeding out the low performers tranparantly reveals catching the older workers in the net as well.

Oct 30, 2018 | features.propublica.org

Great White North, Thursday, March 22, 2018 11:29 PM

There's not a word of truth quoted in this article. That is, quoted from IBM spokespeople. It's the culture there now. They don't even realize that most of their customers have become deaf to the same crap from their Sales and Marketing BS, which is even worse than their HR speak.

The sad truth is that IBM became incapable of taking its innovation (IBM is indeed a world beating, patent generating machine) to market a long time ago. It has also lost the ability (if it ever really had it) to acquire other companies and foster their innovation either - they ran most into the ground. As a result, for nearly a decade revenues have declined and resource actions grown. The resource actions may seem to be the ugly problem, but they're only the symptom of a fat greedy and pompous bureaucracy that's lost its ability to grow and stay relevant in a very competitive and changing industry. What they have been able to perfect and grow is their ability to downsize and return savings as dividends (Big Sam Palmisano's "innovation"). Oh, and for senior management to line their pockets.

Nothing IBM is currently doing is sustainable.

If you're still employed there, listen to the pain in the words of your fallen comrades and don't knock yourself out trying to stay afloat. Perhaps learn some BS of your own and milk your job (career? not...) until you find freedom and better pastures.

If you own stock, do like Warren Buffett, and sell it while it still has some value.

Danllo , Thursday, March 22, 2018 10:43 PM
This is NOTHING NEW! All major corporations have and will do this at some point in their existence. Another industry that does this regularly every 3 to 5 years is the pharmaceutical industry. They'll decimate their sales forces in order to, as they like to put it, "right size" the company.

They'll cloak it as weeding out the low performers, but they'll try to catch the "older" workers in the net as well.

[Oct 30, 2018] American companies pay health insurance premiums based on their specific employee profiles

Notable quotes:
"... As long as companies pay for their employees' health insurance they will have an incentive to fire older employees. ..."
"... The answer is to separate health insurance from employment. Companies can't be trusted. Not only health care, but retirement is also sorely abused by corporations. All the money should be in protected employee based accounts. ..."
Oct 30, 2018 | features.propublica.org

sometimestheyaresomewhatright , Thursday, March 22, 2018 4:13 PM

American companies pay health insurance premiums based on their specific employee profiles. Insurance companies compete with each other for the business, but costs are actual. And based on the profile of the pool of employees. So American companies fire older workers just to lower the average age of their employees. Statistically this is going to lower their health care costs.

As long as companies pay for their employees' health insurance they will have an incentive to fire older employees. They have an incentive to fire sick employees and employees with genetic risks. Those are harder to implement as ways to lower costs. Firing older employees is simple to do, just look up their ages.

The answer is to separate health insurance from employment. Companies can't be trusted. Not only health care, but retirement is also sorely abused by corporations. All the money should be in protected employee based accounts.

By the way, most tech companies are actually run by older people. The goal is to broom out mid-level people based on age. Nobody is going to suggest to a sixty year old president that they should self fire, for the good of the company.

[Oct 30, 2018] Cutting Old Heads at IBM by Peter Gosselin and Ariana Tobin

Mar 22, 2018 | features.propublica.org

This story was co-published with Mother Jones.

For nearly a half century, IBM came as close as any company to bearing the torch for the American Dream.

As the world's dominant technology firm, payrolls at International Business Machines Corp. swelled to nearly a quarter-million U.S. white-collar workers in the 1980s. Its profits helped underwrite a broad agenda of racial equality, equal pay for women and an unbeatable offer of great wages and something close to lifetime employment, all in return for unswerving loyalty.


But when high tech suddenly started shifting and companies went global, IBM faced the changing landscape with a distinction most of its fiercest competitors didn't have: a large number of experienced and aging U.S. employees.

The company reacted with a strategy that, in the words of one confidential planning document, would "correct seniority mix." It slashed IBM's U.S. workforce by as much as three-quarters from its 1980s peak, replacing a substantial share with younger, less-experienced and lower-paid workers and sending many positions overseas. ProPublica estimates that in the past five years alone, IBM has eliminated more than 20,000 American employees ages 40 and over, about 60 percent of its estimated total U.S. job cuts during those years.

In making these cuts, IBM has flouted or outflanked U.S. laws and regulations intended to protect later-career workers from age discrimination, according to a ProPublica review of internal company documents, legal filings and public records, as well as information provided via interviews and questionnaires filled out by more than 1,000 former IBM employees.

Among ProPublica's findings, IBM:

Denied older workers information the law says they need in order to decide whether they've been victims of age bias, and required them to sign away the right to go to court or join with others to seek redress.

Targeted people for layoffs and firings with techniques that tilted against older workers, even when the company rated them high performers. In some instances, the money saved from the departures went toward hiring young replacements.

Converted job cuts into retirements and took steps to boost resignations and firings. The moves reduced the number of employees counted as layoffs, where high numbers can trigger public disclosure requirements.

Encouraged employees targeted for layoff to apply for other IBM positions, while quietly advising managers not to hire them and requiring many of the workers to train their replacements.

Told some older employees being laid off that their skills were out of date, but then brought them back as contract workers, often for the same work at lower pay and fewer benefits.

IBM declined requests for the numbers or age breakdown of its job cuts. ProPublica provided the company with a 10-page summary of its findings and the evidence on which they were based. IBM spokesman Edward Barbini said that to respond the company needed to see copies of all documents cited in the story, a request ProPublica could not fulfill without breaking faith with its sources. Instead, ProPublica provided IBM with detailed descriptions of the paperwork. Barbini declined to address the documents or answer specific questions about the firm's policies and practices, and instead issued the following statement:

"We are proud of our company and our employees' ability to reinvent themselves era after era, while always complying with the law. Our ability to do this is why we are the only tech company that has not only survived but thrived for more than 100 years."

With nearly 400,000 people worldwide, and tens of thousands still in the U.S., IBM remains a corporate giant. How it handles the shift from its veteran baby-boom workforce to younger generations will likely influence what other employers do. And the way it treats its experienced workers will eventually affect younger IBM employees as they too age.

Fifty years ago, Congress made it illegal with the Age Discrimination in Employment Act , or ADEA, to treat older workers differently than younger ones with only a few exceptions, such as jobs that require special physical qualifications. And for years, judges and policymakers treated the law as essentially on a par with prohibitions against discrimination on the basis of race, gender, sexual orientation and other categories.

In recent decades, however, the courts have responded to corporate pleas for greater leeway to meet global competition and satisfy investor demands for rising profits by expanding the exceptions and shrinking the protections against age bias .

"Age discrimination is an open secret like sexual harassment was until recently," said Victoria Lipnic, the acting chair of the Equal Employment Opportunity Commission, or EEOC, the independent federal agency that administers the nation's workplace anti-discrimination laws.

"Everybody knows it's happening, but often these cases are difficult to prove" because courts have weakened the law, Lipnic said. "The fact remains it's an unfair and illegal way to treat people that can be economically devastating."

Many companies have sought to take advantage of the court rulings. But the story of IBM's downsizing provides an unusually detailed portrait of how a major American corporation systematically identified employees to coax or force out of work in their 40s, 50s and 60s, a time when many are still productive and need a paycheck, but face huge hurdles finding anything like comparable jobs.

The dislocation caused by IBM's cuts has been especially great because until recently the company encouraged its employees to think of themselves as "IBMers" and many operated under the assumption that they had career-long employment.

When the ax suddenly fell, IBM provided almost no information about why an employee was cut or who else was departing, leaving people to piece together what had happened through websites, listservs and Facebook groups such as "Watching IBM" or "Geographically Undesirable IBM Marketers," as well as informal support groups.

Marjorie Madfis, at the time 57, was a New York-based digital marketing strategist and 17-year IBM employee when she and six other members of her nine-person team -- all women in their 40s and 50s -- were laid off in July 2013. The two who remained were younger men.

Since her specialty was one that IBM had said it was expanding, she asked for a written explanation of why she was let go. The company declined to provide it.

"They got rid of a group of highly skilled, highly effective, highly respected women, including me, for a reason nobody knows," Madfis said in an interview. "The only explanation is our age."

Brian Paulson, also 57, a senior manager with 18 years at IBM, had been on the road for more than a year overseeing hundreds of workers across two continents as well as hitting his sales targets for new services, when he got a phone call in October 2015 telling him he was out. He said the caller, an executive who was not among his immediate managers, cited "performance" as the reason, but refused to explain what specific aspects of his work might have fallen short.

It took Paulson two years to land another job, even though he was equipped with an advanced degree, continuously employed at high-level technical jobs for more than three decades and ready to move anywhere from his Fairview, Texas, home.

"It's tough when you've worked your whole life," he said. "The company doesn't tell you anything. And once you get to a certain age, you don't hear a word from the places you apply."

Paul Henry, a 61-year-old IBM sales and technical specialist who loved being on the road, had just returned to his Columbus home from a business trip in August 2016 when he learned he'd been let go. When he asked why, he said an executive told him to "keep your mouth shut and go quietly."

Henry was jobless more than a year, ran through much of his savings to cover the mortgage and health insurance and applied for more than 150 jobs before he found a temporary slot.

"If you're over 55, forget about preparing for retirement," he said in an interview. "You have to prepare for losing your job and burning through every cent you've saved just to get to retirement."

IBM's latest actions aren't anything like what most ex-employees with whom ProPublica talked expected from their years of service, or what today's young workers think awaits them -- or are prepared to deal with -- later in their careers.

"In a fast-moving economy, employers are always going to be tempted to replace older workers with younger ones, more expensive workers with cheaper ones, those who've performed steadily with ones who seem to be up on the latest thing," said Joseph Seiner, an employment law professor at the University of South Carolina and former appellate attorney for the EEOC.

"But it's not good for society," he added. "We have rules to try to maintain some fairness in our lives, our age-discrimination laws among them. You can't just disregard them."

[Oct 30, 2018] How do you say "Red Hat" in Hindi??

Oct 30, 2018 | theregister.co.uk


[Oct 30, 2018] IBM must be borrowing a lot of cash to fund the acquisition. At last count it had about $12B in the bank. Layoffs are emminent in such situation as elimnation of headcount is one of the way to justify the price paid

Oct 30, 2018 | theregister.co.uk

Anonymous Coward 1 day

Borrowing $ at low rates

IBM must be borrowing a lot of cash to fund the acquisition. At last count it had about $12B in the bank... https://www.marketwatch.com/investing/stock/ibm/financials/balance-sheet.

Unlike everyone else - https://www.thestreet.com/story/14513643/1/apple-microsoft-google-are-sitting-on-crazy-amounts-of-cash.html

Jove
Over-paid ...

Looking at the Red Hat numbers, I would not want to be an existing IBM share-holder this morning; both companies missing market expectations and in need of each other to get out of the rut.

It is going to take a lot of effort to make that 63% premium pay off. If it does not pay off pretty quickly, the existing Red Hat leadership will be gone in 18 months.

P.S.

Apparently this is going to be financed by a mixture of cash and debt - increasing IBM's existing debt by nearly 50%. Possible credit rating downgrade on the way?

steviebuk
Goodbye...

...Red Hat.

No doubt IBM will scare off all the decent employees that make it what it is.

SecretSonOfHG
RH employees will start to jump ship

As soon as they have a minimum of experience with the terrible IBM change management processes, the many layers of bureaucracy and management involved, and the zero or negative value they add to anything at all.

IBM is a sinking ship, the only question being how long it will take to happen. Anyone thinking RH has any future other than languishing and disappearing under IBM management is delusional. Or an IBM stock owner.

Jove
Product lines EoL ...

What gets the chop because it either does not fit in with the Hybrid-Cloud model, or does not generate sufficient margin?

cloth
But *Why* did they buy them?

I'm still trying to figure out "why" they bought Red hat.

The only thing my not insignificant google trawling can find me is that Red Hat sell to the likes of Microsoft and google - now, that *is* interesting. IBM seem to be saying that they can't compete directly but they will sell upwards to their overlords - no ?

Anonymous Coward

Re: But *Why* did they buy them?

As far as I can tell, it is to be part of IBM's cloud (or hybrid cloud) strategy. RH have become/are becoming increasingly successful in this arena.

If I was being cynical, I would also say that it will enable IBM to put the RH brand and appropriate open source soundbites on the table for deal-making and sales with or without the RH workforce and philosophy. Also, RH's subscription-base must figure greatly here - a list of perfect customers ripe for "upselling".

bazza

Re: But *Why* did they buy them?

I'm fairly convinced that it's because of who uses RedHat. Certainly a lot of financial institutions do, they're in the market for commercial support (the OS cost itself is irrelevant). You can tell this by looking at the prices RedHat were charging for RedHat MRG - beloved by the high speed share traders. To say eye-watering, PER ANNUM too, is an understatement. You'd have to have got deep pockets before such prices became ignorable.

IBM is a business services company that just happens to make hardware and write OSes. RedHat has a lot of customers interested in business services. The ones I think who will be kicking themselves are Hewlett Packard (or whatever they're called these days).

tfb
Re: But *Why* did they buy them?

Because AIX and RHEL are the two remaining enterprise unixoid platforms (Solaris & HPUX are moribund and the other players are pretty small). Now both of those are owned by IBM: they now own the enterprise unixoid market.

theblackhand
Re: But *Why* did they buy them?

"I'm still trying to figure out "why" they bought Red hat."

What they say? It somehow helps them with cloud. Doesn't sound like much money there - certainly not enough to justify the significant increase in debt (~US$17B).

What could it be then? Well RedHat pushed up support prices and their customers didn't squeal much. A lot of those big enterprise customers moved from expensive hardware/expensive OS support over the last ten years to x86 with much cheaper OS support so there's plenty of scope for squeezing more.

[Oct 29, 2018] If I (hypothetically) worked for a company acquired by Big Blue

Oct 29, 2018 | arstechnica.com

MagicDot / Ars Praetorian reply 6 hours ago

If I (hypothetically) worked for a company acquired by Big Blue, I would offer the following:

...but this is all hypothetical.

Belisarius , Ars Tribunus Angusticlavius et Subscriptor 5 hours ago

sviola wrote:
I can see what each company will get out of the deal and how they might potentially benefit. However, Red Hat's culture is integral to their success. Both company CEOs were asked today at an all-hands meeting about how they intend to keep the promise of remaining distinct and maintaining the RH culture without IBM suffocating it. Nothing is supposed to change (for now), but IBM has a track record of driving successful companies and open dynamic cultures into the ground. Many, many eyes will be watching this.

Hopefully IBM current top Brass will be smart and give some autonomy to Red Hat and leave it to its own management style. Of course, that will only happen if they deliver IBM goals (and that will probably mean high double digit y2y growth) on regular basis...

One thing is sure, they'll probably kill any overlapping product in the medium term (who will survive between JBoss and WebSphere is an open bet).

(On a dream side note, maybe, just maybe they'll move some of their software development to Red Hat)

Good luck. Every CEO thinks they're the latest incarnation of Adam Smith, and they're all dying to be seen as "doing something." Doing nothing, while sometimes a really smart thing and oftentimes the right thing to do, isn't looked upon favorably these days in American business. IBM will definitely do something with Red Hat; it's just a matter of what.

[Oct 29, 2018] The D in Systemd stands for 'Dammmmit!' A nasty DHCPv6 packet can pwn a vulnerable Linux box

Oct 29, 2018 | lxer.com
A security bug in Systemd can be exploited over the network to, at best, potentially crash a vulnerable Linux machine, or, at worst, execute malicious code on the box... Systemd creator Lennart Poettering has already published a security fix for the vulnerable component – this should be weaving its way into distros as we type.
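The flaw is reported to sit in the DHCPv6 client side of systemd's networking component, so a rough way to gauge exposure (a sketch, not an authoritative audit; the package commands assume a RHEL/CentOS-style box) is to check whether systemd-networkd is actually in use and whether a patched systemd package is pending:

systemctl is-active systemd-networkd      # is the affected network daemon even running?
systemctl --version                       # compare against your distro's security advisory
rpm -q systemd                            # currently installed package
yum updateinfo list security | grep -i systemd    # any pending systemd security errata?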

[Oct 29, 2018] IBM to acquire software company Red Hat for $34 billion

Oct 29, 2018 | finance.yahoo.com

BIG BLUE

IBM was founded in 1911 and is known in the technology industry as Big Blue, a reference to its once ubiquitous blue computers. It has faced years of revenue declines, as it transitions its legacy computer maker business into new technology products and services. Its recent initiatives have included artificial intelligence and business lines around Watson, named after the supercomputer it developed.

To be sure, IBM is no stranger to acquisitions. It acquired cloud infrastructure provider Softlayer in 2013 for $2 billion, and the Weather Channel's data assets for more than $2 billion in 2015. It also acquired Canadian business software maker Cognos in 2008 for $5 billion.

Other big technology companies have also recently sought to reinvent themselves through acquisitions. Microsoft this year acquired open source software platform GitHub for $7.5 billion; chip maker Broadcom Inc agreed to acquire software maker CA Inc for nearly $19 billion; and Adobe Inc agreed to acquire marketing software maker Marketo for $5 billion.

One of IBM's main competitors, Dell Technologies Inc, made a big bet on software and cloud computing two years ago, when it acquired data storage company EMC for $67 billion. As part of that deal, Dell inherited an 82 percent stake in virtualization software company VMware Inc.

The deal between IBM and Red Hat is expected to close in the second half of 2019. IBM said it planned to suspend its share repurchase program in 2020 and 2021 to help pay for the deal.

IBM said Red Hat would continue to be led by Red Hat CEO Jim Whitehurst and Red Hat's current management team. It intends to maintain Red Hat's headquarters, facilities, brands and practices.

[Oct 28, 2018] In Desperation Move, IBM Buys Red Hat For $34 Billion In Largest Ever Acquisition

Oct 28, 2018 | www.zerohedge.com

In what can only be described as a desperation move, IBM announced that it would acquire Linux distributor Red Hat for a whopping $33.4 billion, its biggest purchase ever, as the company scrambles to catch up to the competition and to boost its flagging cloud sales. Still hurting from its Q3 earnings , which sent its stock tumbling to the lowest level since 2010 after Wall Street was disappointed by yet another quarter of declining revenue...

... IBM will pay $190 for the Raleigh, NC-based Red Hat , a 63% premium to the company's stock price, which closed at $116.68 on Friday, and down 3% on the year.

In the statement, IBM CEO Ginni Rometty said that "the acquisition of Red Hat is a game-changer. It changes everything about the cloud market," but what the acquisition really means is that the company has thrown in the towel in years of accounting gimmicks and attempts to paint lipstick on a pig with the help of ever lower tax rates and pro forma addbacks, and instead will now "kitchen sink" its endless income statement troubles and non-GAAP adjustments in the form of massive purchase accounting tricks for the next several years.

While Rometty has been pushing hard to transition the 107-year-old company into modern businesses such as the cloud, AI and security software, the company's recent improvements had been largely from IBM's legacy mainframe business, rather than its so-called strategic imperatives. Meanwhile, revenues have continued to shrink and, after a brief rebound, sales dipped once again this quarter, after an unprecedented period of 22 consecutive declines starting in 2012, when Rometty took over as CEO.

[Oct 26, 2018] RHCSA Rapid Track course with exam - RH200

The cost is $3,895 USD (Plus all applicable taxes) or 13 Training Units
Oct 08, 2018 | www.redhat.com
Course overview

On completion of course materials, students should be prepared to take the Red Hat Certified System Administrator (RHCSA) exam. This version of the course includes the exam.

Note: This course builds on a student's existing understanding of command-line based Linux system administration. Students should be able to execute common commands using the shell, work with common command options, and access man pages for help. Students lacking this knowledge are strongly encouraged to take Red Hat System Administration I (RH124) and II (RH134) instead.

Course content summary

Outline for this course
Accessing the command line
Log in to a Linux system and run simple commands using the shell.
Managing files from the command line
Work with files from the bash shell prompt.
Managing local Linux users and groups
Manage Linux users and groups and administer local password policies.
Controlling access to files with Linux file system permissions
Set access permissions on files and interpret the security effects of different permission settings.
Managing SELinux security
Use SELinux to manage access to files and interpret and troubleshoot SELinux security effects.
Monitoring and managing Linux processes
Monitor and control processes running on the system.
Installing and updating software packages
Download, install, update, and manage software packages from Red Hat and yum package repositories.
Controlling services and daemons
Control and monitor network services and system daemons using systemd.
Managing Red Hat Enterprise Linux networking
Configure basic IPv4 networking on Red Hat Enterprise Linux systems.
Analyzing and storing logs
Locate and interpret relevant system log files for troubleshooting purposes.
Managing storage and file systems
Create and use disk partitions, logical volumes, file systems, and swap spaces.
Scheduling system tasks
Schedule recurring system tasks using cron and systemd timer units.
Mounting network file systems
Mount network file system (NFS) exports and server message block (SMB) shares from network file servers.
Limiting network communication with firewalld
Configure a basic local firewall.
Virtualization and kickstart
Manage KVMs and install them with Red Hat Enterprise Linux using Kickstart.
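To give a feel for the command-line level the outline above targets, here is a small illustrative sketch covering three of the listed topics (controlling services with systemd, a basic firewalld rule, and a scheduled task); the service name, port and script path are made-up examples, not course material:

systemctl enable httpd.service                  # controlling services and daemons
systemctl start httpd.service
systemctl status httpd.service

firewall-cmd --permanent --add-service=http     # limiting network communication with firewalld
firewall-cmd --reload

echo '0 2 * * * root /usr/local/bin/backup.sh' >> /etc/crontab    # scheduling recurring system tasks with cron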

[Oct 18, 2018] Fedora switching from NetworkManager to explicit ifcfg networking Linux

Nov 23, 2015 | lxer.com

penguinist. Nov 23, 2015

Now that I've spent uncounted hours reaching a solution on this one I wanted to document it somewhere for other LXers who might be faced with a similar problem in the future.

In a fresh installation of Fedora23 the default configuration came up with NetworkManager running the show. This workstation however has a more complex configuration than average with two network interfaces, one running with dhcp and the other serving as a static gateway to an internal lan. Since this configuration is set up once and never changes, the right way seemed to be an explicit configuration, one with custom crafted /etc/sysconfig/network-scripts/ifcfg-ethx config files. After setting up those files, the next part went fairly easily:

systemctl stop NetworkManager.service
systemctl disable NetworkManager.service
systemctl enable network.service
systemctl start network.service

Checking the result however showed errors which indicated that dhclient had been invoked on the eth1 interface even though its ifcfg-eth1 configuration was clearly marked:

BOOTPROTO=none

After a long investigation it turned out that NetworkManager had saved an eth1 lease file under /var/lib/dhclient/ and network.service dutifully attempted to restore that lease even though the interface was explicitly marked for no dhcp service (should file a bugzilla report on this one).

Manually removing the extraneous lease file fixed the problem and we now start network cleanly with NetworkManager disabled.
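For reference, a static ifcfg file of the kind described above looks roughly like the following; the interface name, addresses and lease file pattern are illustrative guesses rather than the exact values from that workstation:

# /etc/sysconfig/network-scripts/ifcfg-eth1 -- static LAN gateway interface
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.10.1
PREFIX=24
NM_CONTROLLED=no

# clear any stale DHCP lease NetworkManager left behind for this interface
rm -f /var/lib/dhclient/*eth1*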

JaseP

Nov 23, 2015
8:48 PM EDT Nice catch & fix!

[Oct 17, 2018] How to upgrade Red Hat Linux 6.9 to 7.4 Experiences Sharing by Comnet

Oct 17, 2018 | mycomnet.info

Red Hat is one of many Linux distributions, alongside Ubuntu, CentOS, Fedora, and others. Many servers around the world run on Red Hat.

I recently had to upgrade one of our clients from Red Hat Linux 6.8 to 7.4, and I would like to show you how I set up a lab to test the upgrade tools. I recommend duplicating the production server environment for lab testing.

Please note the following below

· This guide aims to show you the tools Red Hat has given us to upgrade from version 6 to version 7

· The environment is only a VM, which does not reflect the actual environment of our client; therefore, the outcome may differ when upgrading the production server!!!

· I assume you can install Red Hat on your VM, which could be VirtualBox, VMware or any other VM application you are familiar with. (Mine is VirtualBox.)

Precaution: Please back up your system before running the upgrade, in case anything goes wrong and you need to do a fresh install of Red Hat 7.4 or roll back to Red Hat 6.9.

1. First things first, check your current Red Hat version. Mine was Red Hat 6.9.
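Checking the version is a one-liner (the same command is used again in step 7 after the upgrade); on my test VM it reported something like the second line below:

cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.9 (Santiago)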

2. Be sure to update your Red Hat 6 to the latest version before attempting the preupgrade tools. So run 'yum update' to update to the latest Red Hat 6.

If an error shows as below, note that these steps require an internet connection to the Red Hat servers.

Be sure to register your subscription carefully again. I have found that for a system that has been running for a long time, like my client's, I have to unregister and register again for 'yum update' to run properly.

Refer to thread on https://access.redhat.com/discussions/3066851?tour=8

3. For some systems, the error might say something like 'It is registered but cannot get updates'. The method is the same: run these commands in sequence. Sometimes you don't have to unregister; refresh and attach --auto alone might do the trick.

sudo subscription-manager remove --all

sudo subscription-manager unregister

sudo subscription-manager clean

Now you can re-register the system, attach the subscriptions

sudo subscription-manager register

sudo subscription-manager refresh

sudo subscription-manager attach --auto
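Once re-registered, it is worth confirming that the subscription really is attached before retrying 'yum update'; subscription-manager has a couple of read-only subcommands for that:

sudo subscription-manager status
sudo subscription-manager list --consumed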

Note that when you unregister, your server does not go down; this only unregisters the Red Hat subscription, meaning you cannot get any updates from them, but your server keeps running.

After that, you should be able to run 'yum update' and download the updates. At the end, you should see the screen below, which means you can now proceed to the upgrade procedure.

4. Then enable the subscription repository that provides the Preupgrade Assistant.

Then install the Preupgrade Tools
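The screenshots are omitted here, so as a sketch: from memory of the Red Hat migration guide (linked in the references at the end), enabling the repos and installing the tools looks roughly like the commands below; double-check the repo and package names against the official documentation for your minor release:

subscription-manager repos --enable rhel-6-server-extras-rpms --enable rhel-6-server-optional-rpms
yum -y install preupgrade-assistant preupgrade-assistant-el6toel7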

5. Once you have installed everything, run the preupgrade tool; it will take a while. This tool examines every package on the system and determines whether there are any errors you need to fix before an upgrade. In my experience, I found solutions to most errors by searching the Internet, but that may not always work for your environment.
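The assessment itself is a single command run as root (preupg is the binary the Preupgrade Assistant installs); its reports land under /root/preupgrade/ as described next:

preupg
ls /root/preupgrade/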

After preupg has finished running, please check the file '/root/preupgrade/result.html', which can be viewed in any browser. You can transfer the file to a computer that has a browser.

The file result.html will show all the necessary information about your system before an upgrade. Basically, if you see information like 5.1 on the screen, you are good to go.

See 5.2 for all the results after running the Preupgrade tool; be sure to check them all. I found that some items are purely informational, but please check the 'needs_action' sections carefully.

Scroll down and you will see specific information about each result. Check the remediation description for any 'needs_action' item and perform the suggested instructions.

6. So, you have checked everything from the Preupgrade tool. Now it's time to start an upgrade.

6.1 Install the upgrade tool

[root@localhost ~]# yum -y install redhat-upgrade-tool

6.2 Disable all active repository

[root@localhost ~]# yum -y install yum-utils

[root@localhost ~]# yum-config-manager --disable \*

Now start the upgrade. I recommend saving the Red Hat 7.4 ISO file to the server and then issuing the command below; it's easier. Alternatively, you could use other options such as:

--device [DEV]

Device or mount point of mounted install media. If DEV is omitted,

redhat-upgrade-tool will scan all currently-mounted removable devices

(for example USB disks and optical media).

--network RELEASEVER

Online repos. RELEASEVER will be used to replace $releasever variable

if it occurs in some repo URL.

[root@localhost /]# cd /root/

[root@localhost ~]# ls
anaconda-ks.cfg  install.log.syslog  preupgrade          rhel-server-7.4-x86_64-dvd.iso

install.log      playground          preupgrade-results

[root@localhost ~]# redhat-upgrade-tool --iso rhel-server-7.4-x86_64-dvd.iso

Then reboot

[root@localhost ~]# reboot

7. The upgrade is now complete. Check your version after the upgrade!!!! Then don't forget to verify that other software and functionality still run correctly.

[root@localhost ~]# cat /etc/redhat-release

Red Hat Enterprise Linux Server release 7.4 (Maipo)

---------------------------------------------------------------------------------------------------------------------------

I hope this information helps guide you through upgrading Red Hat Linux 6.9 to 7.4. I'm new to Red Hat myself and still have a lot to learn.

Please let me know about your experience upgrading your own Red Hat system, or ask if you have any questions; I will try my best to help. Thanks for reading!

Reference Sites

1. https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/migration_planning_guide/chap-red_hat_enterprise_linux-migration_planning_guide-upgrading

2. https://access.redhat.com/discussions/3066851?tour=8

[Oct 16, 2018] You love Systemd – you just don't know it yet, wink Red Hat bods by Shaun Nichols

Notable quotes:
"... You're not alone in liking SMF and Solaris. ..."
"... AFAICT everyone followed RedHat because they also dominate Gnome, and chose to make Gnome depend on systemd. Thus if one had any aspirations for your distro supporting Gnome in any way, you have to have systemd underneath it all. ..."
"... The source of systemd is 15628582 bytes in size - the source of sysvinit is 224531 bytes (less than 2% of the size). (Both sizes from zips of the sources downloaded today - does not include config files makefiles etc - only the contents of the src directory.) ..."
"... Today I've kickstarted RHEL7 on a rack of 40 identical servers using same script. On about 25 out of 40 postinstall script added to rc.local failed to run with some obscure error about script being terminated because something unintelligible did not like it. It never ever happened on RHEL6, it happens all the time on RHEL7. And that's exactly the reason I absolutely hate it both RHEL7 and systemd. ..."
"... I have no problem with Red Hat wanting their semi-proprietary system. Unfortunately, in order to preserve their data centre presence they had to eliminate competition from non-systemd distros by ensuring it got into the likes of Debian. ..."
"... I've been using Linux ( RedHat, CentOS, Ubuntu), BSD (Solaris, SunOS, freeBSD) and Unix ( aix, sysv all of the way back to AT&T 3B2 servers) in farms of up to 400 servers since 1988 and I never, ever had issues with eth1 becoming eth0 after a reboot. ..."
"... I can see some (well, one) benefit of systemd in the depandancies. So things may only start when a pre-requisite has been met. Unfortunately that is outwighed by the annoyances. ..."
"... For me the worst is stuffing everything in a binary log and not telling you any error message when you restart the service. So you have to go hunting for an error message and in some cases just end up running manually what the service was meant to do just to see what it's complaining about. It's time I'd rather spend fixing the problem itself. ..."
"... Like other commenters I tolerate systemd as it's part of RHEL/CentOS/Oracle Linux etc 7.x and that's what we have to use. Obviously that's no endorsement. ..."
"... It's not clever, but it's the future. From now on, all major distributions will be called SNU Linux. ..."
"... I don't recall any major agreement that init needed fixing. Between BSD and SysV inits, probably 99.999% of all use cases were covered. In the 1 in 100,000 use case, a little bit of C (stand alone code, or patching init itself) covered the special case. In the case of Slackware's SysV/BSD amalgam, I suspect it was more like one in ten million. ..."
"... So in all reality, systemd is an answer to a problem that nobody had. There was no reason for it in the first place. There still isn't a reason for it ... especially not in the 999,999 places out of 1,000,000 where it is being used. Throw in the fact that it's sticking its tentacles[0] into places where nobody in their right mind would expect an init as a dependency (disk partitioning software? WTF??), can you understand why us "old guard" might question the sanity of people singing it's praises? ..."
"... If systemd would just do ONE thing I think it would remove all of the pain that it has inflicted on me over the past several months and I could learn to accept it. That one thing is, if there is an init script, RUN IT. Not run it like systemd does now. But turn off ALL intelligence systemd has when it finds that script and run it. Don't put it on any special timers, don't try to detect if it is running already, or stopped already or whatever, fire the script up in blocking mode and wait till it exits. ..."
"... Don't use a Linux with systemd! Stay with Ubuntu 14 LTS or Ubuntu 16 LTS, or the new Devuan ASCII (Debian). ..."
"... Unfortunately it's not really an option for businesses that need safety of support contract, either from Red Hat or Oracle and majority of environments which actually pay sysadmins money do run either RHEL or OL. ..."
"... Systemd is the Linux world's MS Office "ribbon," another of those sad inflection points where developers have crawled up their own asses and lost sight of the light of day. ..."
Oct 15, 2018 | theregister.co.uk

You love Systemd – you just don't know it yet, wink Red Hat bods

At the Red Hat confab, Breard admitted that since Systemd was officially introduced as the default init option in Red Hat Enterprise Linux 7 in 2014, the software hasn't always been met with open arms by the community.

"People respond to it with anything from curiosity to rage," Breard mused. "The more people learn about it, the more they like it. We have seen this pan out over the last few years."

Breard and Poettering told attendees that, in many cases, Systemd is able to dramatically simplify the management of processes while at the same time giving administrators tighter control over their machines.

For example, noted Poettering, Systemd can fully track down and kill off all processes associated with a service being shut down, something rival init systems are unable to cleanly do.

"It sounds super basic, but actually it is much more complex than people think," Poettering said. "Because Systemd knows which service a process belongs to, it can shut down that process."

Systemd is pretty good at enforcing security policies, we were told. Because it has the ability to limit services' access to resources, controls can be put in place to effectively sandbox software and lock down code from doing potentially malicious things, such as write to storage or read sensitive data in particular directories.

Sure, you can do all this sort of thing with containers and chroots, and your own shell scripts, but it's there out of the box from Systemd if you need it. That's the pro-Systemd argument, anyway.
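
As a rough sketch of the kind of lockdown being described (the unit name foo.service and the drop-in path are hypothetical), a drop-in with a few of systemd's sandboxing directives might look like this:

# /etc/systemd/system/foo.service.d/hardening.conf
[Service]
PrivateTmp=true
ProtectHome=true
ProtectSystem=full
NoNewPrivileges=true

After editing, run systemctl daemon-reload and restart the service for the restrictions to take effect.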

Additionally, services' access to processor core time can be regulated, thus tuning system performance as certain programs and servers are given a lot, or fewer, CPU cycles.
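
A minimal example of that resource control, again assuming a hypothetical unit foo.service:

# systemctl set-property foo.service CPUQuota=20%
# systemctl set-property foo.service CPUShares=512

The first line caps the service at 20% of one CPU; the second lowers its relative share of CPU time compared with the default weight of 1024.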

Breard and Poettering said they will try to further enhance Systemd by, for instance, extending its ability to manage network connections and containers.

And perhaps, in the process, you may warm up a bit more to the tool.


Anonymous Coward

Re: Poettering still doesn't get it... Pid 1 is for people wearing big boy pants.

SystemD is corporate money (Redhat support dollars) triumphing over the long hairs sadly. Enough money can buy a shitload of code and you can overwhelm the hippies with hairball dependencies (the key moment was udev being dependent on systemd) and soon get as much FOSS as possible dependent on the Linux kernel. This has always been the end game as Red Hat makes its bones on Linux specifically not on FOSS in general (that say runs on Solaris or HP-UX). The tighter they can glue the FOSS ecosystem and the Linux kernel together ala Windows lite style the better for their bottom line. Poettering is just being a good employee asshat extraordinaire he is.

Tinslave_the_Barelegged
Re: Ahhh SystemD

> A solution that no one wants for problems no one has.

That's not strictly true - systemd introduces loads of additional problems...

I've just hit another, and the answer was classic. I'm testing OpenSUSE 15.0, and as expected it is already rock solid. But there's an odd error message about vconsole at boot, a known issue for a few years. Systemd's (Poettering's) response is that an upstream application has to change to fit in with what systemd wants to do. It's that attitude, of forcing changes all over the linux ecosphere, that is a genuine cause for concern. We thought that aggression would come from an antagonistic proprietary corporate, but no, it's come from the supposed good guy, Red Hat.

P. Lee
Re: Ahhh SystemD

>A solution that no one wants for problems no one has.

Hmmm. I think we should be a little more accurate. There are real problems which it solves. It just appears to have used the worst possible method of solving them.

Daggerchild
Re: Ahhh SystemD

I honestly would love someone to lay out the problems it solves. Solaris has a similar parallelised startup system, with some similar problems, but it didn't need pid 1.

starbase7
SMF?

As an older timer (on my way but not there yet), I never cared for the init.d startup and I dislike the systemd monolithic architecture. What I do like is Solaris SMF and wish Linux would have adopted a method such as or similar to that. I still think SMF was/is a great compromise between the init.d method and the systemd manner. I used SMF professionally, but now I have moved on with Linux professionally as Solaris is, well, dead. I only get to enjoy SMF on my home systems, and savor it. I'm trying to like Linux over all these years, but this systemd thing is a real big road block for me to get enthusiastic. I have a hard time understanding why all the other Linux distros joined hands with Redhat and implemented that thing, systemd. Sigh.

Anonymous Coward

You're not alone in liking SMF and Solaris.

AFAICT everyone followed RedHat because they also dominate Gnome, and chose to make Gnome depend on systemd. Thus if one had any aspirations for your distro supporting Gnome in any way, you have to have systemd underneath it all.

RedHat seem to call the shots these days as to what a Linux distro has. I personally have mixed opinions on this; I think the vast anarchy of Linux is a bad thing for Linux adoption ("this is the year of the Linux desktop" don't make me laugh), and Linux would benefit from a significant culling of the vast number of distros out there. However if that did happen and all that was left was something controlled by RedHat, that would be a bad situation.

Steve Davies 3

Remember who 'owns' SMF... namely Oracle. They may well have made it impossible for anyone to adopt.

That stance is not unknown now is it...?

as for systemd, I have bit my teeth and learned to tolerate it. I'll never be as comfortable with it as I was with the old init system but I did start running into issues especially with shutdown syncing with it on some complex systems.

Still not sure if systemd is the right way forward even after four years.

CrazyOldCatMan

I have a hard time understanding why all the other Linux distros joined hands with Redhat and implemented that thing, systemd

Several reasons:

A lot of other distros use Redhat (or Fedora) as their base and then customise it.

A lot of other distros include things dependent on systemd (Gnome being the one with the biggest dependencies - you can just about get it to run without systemd but it's a pain and every update will break your fixes).

Redhat has a lot of clout.

Daggerchild

SMF should be good, and yet they released it before they'd documented it. Strange priorities...

And XML is *not* a config file format you should let humans at. Finding out the correct order to put the XML elements in to avoid unexplained "parse error", was *not* a fun game.

And someone correct me, but it looks like there are SMF properties of a running service that can only be modified/added by editing the file, reloading *and* restarting the service. A metadata and state/dependency tracking system shouldn't require you to shut down the priority service it's meant to be ensuring... Again, strange priorities...

onefang

"XML is *not* a config file format you should let humans at"

XML is a format you shouldn't let computers at, it was designed to be human readable and writable. It fails totally.

Chris King

If systemd is the MCP, does that make Poettering Sark ?

"Greetings. The Master Control Program has chosen you to serve your system on the Game Grid. Those of you who continue to profess a belief in the Users will receive the standard substandard training which will result in your eventual elimination. Those of you who renounce this superstitious and hysterical belief will be eligible to join the warrior elite of the MCP. You will each receive an identity disc. Everything you do or learn will be imprinted on this disc. If you lose your disc, or fail to follow commands, you will be subject to immediate de-resolution. That will be all".

Hmm, this explains a lot...

ds6
Re: How can we make money?

This is scarily possible and undeserving of the troll icon.

Harkens easily to non-critical software developers intentionally putting undocumented, buggy code into production systems, forcing the company to keep the guy on payroll to keep the wreck chugging along.

tim
Those who do not understand Unix are condemned to reinvent it, poorly.

"It sounds super basic, but actually it is much more complex than people think," Poettering said. "Because Systemd knows which service a process belongs to, it can shut down that process."

Poettering and Red Hat,

Please learn about "Process Groups"

Init has had the groundwork for most of the missing features since the early 1980s. For example the "id" field in /etc/inittab was intended for a "makefile" like syntax to fix most of these problems but was dropped in the early days of System V because it wasn't needed.

Herby
Process 1 IS complicated.

That is the main problem. With different processes you get different results. For all its faults, SysV init and RC scripts were understandable to some extent. My (cursory) understanding of systemd is that it appears more complicated to UNDERSTAND than the init stuff.

The init scripts are nice text scripts which are executed by a nice well documented shell (bash mostly). Systemd has all sorts of blobs that somehow do things and are totally confusing to me. It suffers from "anti-KISS".

Perhaps a nice book could be written WITH example to show what is going on.

Now let's see does audio come before or after networking (or at the same time)?

Anonymous Coward

Frankly, systemD's abominable configuration scheme needs to be thrown away, shot, buried, and replaced with more structured methods, because the thesaurus is running out of synonyms.

Lennart Poettering is a German who can barely articulate himself in English. https://en.wikipedia.org/wiki/Lennart_Poettering

Let's ban Poettering from Linux, and his PulseAudio and SystemD. And let's revert back to MATE, as GNOME3 is butt ugly.

wayne 8

Re: Process 1 IS complicated.

A connection to PulseAudio, why am I not surprised. Audio that mysteriously refuses to recognize a browser playing audio as a sound source. "Predictable" network interface names? Not to me. I am fine with "enp" and "wlx", but the seemingly random character strings that follow are not predictable. Oxymoronic. I dropped Gnome when it went 2.0. Ubuntu when it went to Unity. Firefox when it redesigned the theme to just appear different.

ds6
Re: Process 1 IS complicated.

To digress, I am so, so disappointed in Firefox and Mozilla. A shell of what they once were, now dedicated to the Mozilla "brand" as if they're a clothing company. Why has technology in general gone down the shitter? Why is everything so awful? Will anyone save us from this neverending technohell?

onefang
Re: Process 1 IS complicated.

"Why has technology in general gone down the shitter? Why is everything so awful?"

Coz the most important thing in the world is making more profit than you did last quarter. Nothing else is important. "Will anyone save us from this neverending technohell?" I try, but every one ignores me, coz I don't have the marketing department of the profit makers.

Duncan Macdonald

Systemd seems to me to be an attempt at "Embrace, extend, and extinguish" .

With some time any reasonable competent programmer can follow init scripts and find out where any failures in startup are occurring. As init is purely driven by the scripts there are no hidden interactions to cause unexplained failures. The same is NOT true of systemd.

The source of systemd is 15628582 bytes in size - the source of sysvinit is 224531 bytes (less than 2% of the size). (Both sizes from zips of the sources downloaded today - does not include config files makefiles etc - only the contents of the src directory.)

It is of note that the most widely used Linux kernel of all - (the kernel in Android) does NOT use systemd

Dabbb
Quite understandable that people who don't know anything else would accept systemd. For everyone else it has nothing to do with old school but everything to do with unpredictability of systemd.

Today I've kickstarted RHEL7 on a rack of 40 identical servers using same script. On about 25 out of 40 postinstall script added to rc.local failed to run with some obscure error about script being terminated because something unintelligible did not like it. It never ever happened on RHEL6, it happens all the time on RHEL7. And that's exactly the reason I absolutely hate it both RHEL7 and systemd.

Doctor Syntax
the people with the problem with it are more of the Torvalds type, old school who want to use Unix-like systems

FTFY

I have no problem with Red Hat wanting their semi-proprietary system. Unfortunately, in order to preserve their data centre presence they had to eliminate competition from non-systemd distros by ensuring it got into the likes of Debian.

Jay 2
Ah yes, another systemd (RHEL?) annoyance in that by default the actual rc.local script is not set to be executable and that rc-local.service is disabled. It's almost as if someone doesn't want you to be using it.

Also in systemd, rc.local is no longer the last script/thing to be run. So in order to fudge something to work in rc.local I had to create an override module for rc-local.service that has a dependency on the network being up before it runs.
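
For reference, the sort of workaround being described might look like this on a RHEL/CentOS 7 box (a sketch, not the commenter's exact setup):

# chmod +x /etc/rc.d/rc.local
# mkdir -p /etc/systemd/system/rc-local.service.d
# cat > /etc/systemd/system/rc-local.service.d/override.conf <<'EOF'
[Unit]
After=network-online.target
Wants=network-online.target
EOF
# systemctl daemon-reload

Making the script executable satisfies the unit's ConditionFileIsExecutable check, and the drop-in delays it until the network is reported up.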

stiine
sysV init

I've been using Linux ( RedHat, CentOS, Ubuntu), BSD (Solaris, SunOS, freeBSD) and Unix ( aix, sysv all of the way back to AT&T 3B2 servers) in farms of up to 400 servers since 1988 and I never, ever had issues with eth1 becoming eth0 after a reboot. I also never needed to run ifconfig before configuring an interface just to determine what the interface was going to be named on a server at this time. Then they hired Poettering... now, if you replace a failed nic, 9 times out of 10, the interface is going to have a randomly different name.

HieronymusBloggs
"if only all the people put half the effort of writing the tirades about Pottering and SystemD into writing their SysV init replacement we would have a working alternative"

Why would I feel the need to write a replacement for something which has been working for me without major problems for two decades?

If I did want to stop using sysvinit there are already several working alternatives apart from systemd, but it seems that pretending otherwise is quite a popular activity.

Jay 2
I can see some (well, one) benefit of systemd in the dependencies. So things may only start when a pre-requisite has been met. Unfortunately that is outweighed by the annoyances.

For me the worst is stuffing everything in a binary log and not telling you any error message when you restart the service. So you have to go hunting for an error message and in some cases just end up running manually what the service was meant to do just to see what it's complaining about. It's time I'd rather spend fixing the problem itself.
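
When you do have to go hunting, the usual incantations are these (foo.service is a placeholder unit name, an editorial aside rather than part of the comment):

# systemctl status foo.service -l
# journalctl -u foo.service -b --no-pager

The first shows the unit's state plus its last few log lines; the second dumps everything the unit has logged to the journal since boot.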

Like other commenters I tolerate systemd as it's part of RHEL/CentOS/Oracle Linux etc 7.x and that's what we have to use. Obviously that's no endorsement.

Doctor Syntax
"The more people learn about it, the more they like it."

Translation: We define those who don't like it as not having learned enough about it.

The Electron
Fudging the start-up and restoring eth0

I knew systemd was coming thanks to playing with Fedora. The quicker start-up times were welcomed. That was about it!

I have had to kickstart many of my CentOS 7 builds

In a previous job, I maintained 2 sets of CentOS 7 'infrastructure' servers that provided DNS, DHCP, NTP, and LDAP to a large number of historical vlans. Despite enabling the systemd-network wait online option, which is supposed to start all networks *before* listening services, systemd would run off flicking all the "on" switches having only set-up a couple of vlans. Result: NTP would only be listening on one or two vlan interfaces. The only way I found to get around that was to enable rc.local and call systemd to restart the NTP daemon after 20 seconds. I never had the time to raise a bug with Red Hat, and I assume the issue still persists as no-one designed systemd to handle 15-odd vlans!?
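
The workaround described above presumably amounted to something like this at the end of /etc/rc.d/rc.local (a reconstruction, not the original script):

sleep 20
systemctl restart ntpd

Crude, but it re-binds ntpd to the vlan interfaces that systemd had not finished bringing up when the service first started.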

HieronymusBloggs
Re: Fudging the start-up and restoring eth0

"no-one designed systemd to handle 15-odd vlans!?"

Why would Lennart's laptop need to handle 15 vlans?

tekHedd
Not UNIX-like? SNU!

From now on, I will call Systemd-based Linux distros "SNU Linux". Because Systemd's Not Unix-like.

It's not clever, but it's the future. From now on, all major distributions will be called SNU Linux. You can still freely choose to use a non-SNU linux distro, but if you want to use any of the "normal" ones, you will have to call it "SNU" whether you like it or not. It's for your own good. You'll thank me later.

Chris King
Re: Not UNIX-like? SNU!

It's not clever, but it's the future. From now on, all major distributions will be called SNU Linux. You can still freely choose to use a non-SNU linux distro, but if you want to use any of the "normal" ones, you will have to call it "SNU" whether you like it or not. It's for your own good. You'll thank me later.

And if you have a pair of machines go belly-up at the same time because of SystemD, is that Death by Snu-Snu ?

Stevie
Bah!

It is fascinating to me how the Unix(like) IT community can agree that "things need fixing" yet sit on their hands until someone leaps in, at which point they all know better.

Massive confirmation bias larded on thick there, but my observation stands.

Coming to Korn Shell almost three decades ago after many years working in a very stable mainframe environment I was appalled that every other man page listed "known bugs" with essential utilities (like grep) which had been there for twenty years and more, with no sign anyone was even slightly interested in fixing them.

So I guess my point is: If you think systemd is a load of dingoes kidneys and care passionately about that, why on earth aren't you organizing and specifying-out an alternative that does all the things needed that don't work right with init, yet avoids all the nasty with systemd?

You can complain the systemd design is poor and the implementation bad all you want, but if there is no better alternative - and for the world at large it seems that init isn't hacking it and, as someone so astutely mentioned, Solaris has now a questionable future so *its* "fix" isn't gong to become a systemd rival any time soon - that is what will be taking the carpet from under your feet.

Welcome to my world. When I started a power user was someone who could read paper tape without feeding it through a Westrex. I've lost count of the paradigm changes and Other People's Bad Choices I Have To Live With I've weathered over the years.

Now absent thyselves from my greensward soonest, varlets!

jake
Re: Bah!

Nice rant. Kinda.

However, I don't recall any major agreement that init needed fixing. Between BSD and SysV inits, probably 99.999% of all use cases were covered. In the 1 in 100,000 use case, a little bit of C (stand alone code, or patching init itself) covered the special case. In the case of Slackware's SysV/BSD amalgam, I suspect it was more like one in ten million.

So in all reality, systemd is an answer to a problem that nobody had. There was no reason for it in the first place. There still isn't a reason for it ... especially not in the 999,999 places out of 1,000,000 where it is being used. Throw in the fact that it's sticking its tentacles[0] into places where nobody in their right mind would expect an init as a dependency (disk partitioning software? WTF??), can you understand why us "old guard" might question the sanity of people singing it's praises?

[0] My spall chucker insists that the word should be "testicles". Tempting ...

jake
Re: However, I don't recall any major agreement that init needed fixing.

"Then why did Red Hat commit their resources, time and effort to developing and releasing systemd into the world at large?"

Marketing making engineering decisions would be my guess.

"Are you telling me they decided to change it up for the sake of it?"

Essentially, yes.

"Comments in this very thread show that init is not up to the job of firing up computers that *aren't* single-purpose servers."

Those are exceptions to the rule. In the 35 years since SysV init was released, I think I can count on both hands the number of times I've had to actually code something that it couldn't handle. And those cases were extreme edge cases (SLAC, Sandia, NASA, USGS, etc.). And note that in none of those cases would systemd have been any help. In a couple of those cases, BSD worked where SysV didn't.

"And considering that every desktop distro I've looked at now comes with a daily FDA requirement of systemd, it would appear as though those building the distros don't agree with you either."

Do they actually not agree with me? Or is it more that they are blindly following Redhat's lead, simply because it's easier to base their distro on somebody else's work than it is to rollout their own?

"saying that the startup can be got to work with some script changes and a little bit of C code is a non-starter. "

Correct. For the vast majority of folks. But then, for the vast majority of folks a box-stock Slackware installation will work quite nicely. They are never going to even know that init exists, much less if it's SysV, BSD or systemd. They don't care, either. Nor should they. All they want to do is B0rk faces, twatter about, and etc. The very concept of PID1 is foreign to people like that. So why make this vast sweeping change THAT DOESN'T HELP THE VAST MAJORITY OF PEOPLE WHO HAVE IT INFLICTED UPON THEM? Especially when it adds complexity and makes the system less secure and less stable?

Nate Amsden
as a linux user for 22 users (20 of which on Debian, before that was Slackware)

I am new to systemd, maybe 3 or 4 months now tops on Ubuntu, and a tiny bit on Debian before that.

I was confident I was going to hate systemd before I used it just based on the comments I had read over the years, I postponed using it as long as I could. Took just a few minutes of using it to confirm my thoughts. Now to be clear, if I didn't have to mess with the systemd to do stuff then I really wouldn't care since I don't interact with it (which is the case on my laptop at least though laptop doesn't have systemd anyway). I manage about 1,000 systems running Ubuntu for work, so I have to mess with systemd, and init etc there.

If systemd would just do ONE thing I think it would remove all of the pain that it has inflicted on me over the past several months and I could learn to accept it. That one thing is, if there is an init script, RUN IT. Not run it like systemd does now. But turn off ALL intelligence systemd has when it finds that script and run it. Don't put it on any special timers, don't try to detect if it is running already, or stopped already or whatever, fire the script up in blocking mode and wait till it exits.

My first experience with systemd was on one of my home servers, I re-installed Debian on it last year, rebuilt the hardware etc and with it came systemd. I believe there is a way to turn systemd off but I haven't tried that yet. The first experience was with bind. I have a slightly custom init script (from previous debian) that I have been using for many years. I copied it to the new system and tried to start bind. Nothing. I looked in the logs and it seems that it was trying to interface with rndc(internal bind thing) for some reason, and because rndc was not working(I never used it so I never bothered to configure it) systemd wouldn't launch bind. So I fixed rndc and systemd would now launch bind, only to stop it within 1 second of launching. My first workaround was just to launch bind by hand at the CLI (no init script), left it running for a few months. Had a discussion with a co-worker who likes systemd and he explained that making a custom unit file and using the type=forking option may fix it.. That did fix the issue.
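
A unit of the kind the co-worker suggested would look roughly like this (the file name, script path and PID file are assumptions based on a typical Debian bind9 layout, not the poster's actual files):

# /etc/systemd/system/bind-legacy.service
[Unit]
Description=BIND started via the legacy init script
After=network.target

[Service]
Type=forking
ExecStart=/etc/init.d/bind9 start
ExecStop=/etc/init.d/bind9 stop
PIDFile=/run/named/named.pid

[Install]
WantedBy=multi-user.target

With Type=forking, systemd waits for the start command to exit and then tracks the daemon it left behind instead of assuming the first process it spawned is the service.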

Next issue came up when dealing with MySQL clusters. I had to initialize the cluster with the "service mysql bootstrap-pxc" command (using the start command on the first cluster member is a bad thing). Run that with systemd, and systemd runs it fine. But go to STOP the service, and systemd thinks the service is not running so doesn't even TRY to stop the service(the service is running). My workaround for my automation for mysql clusters at this point is to just use mysqladmin to shut the mysql instances down. Maybe newer mysql versions have better systemd support though a co-worker who is our DBA and has used mysql for many years says even the new Maria DB builds don't work well with systemd. I am working with Mysql 5.6 which is of course much much older.

Next issue came up with running init scripts that have the same words in them, in the case of most recently I upgraded systems to systemd that run OSSEC. OSSEC has two init scripts for us on the server side (ossec and ossec-auth). Systemd refuses to run ossec-auth because it thinks there is a conflict with the ossec service. I had the same problem with multiple varnish instances running on the same system (varnish instances were named varnish-XXX and varnish-YYY). In the varnish case using custom unit files I got systemd to the point where it would start the service but it still refuses to "enable" the service because of the name conflict (I even changed the name but then systemd was looking at the name of the binary being called in the unit file and said there is a conflict there).

fucking a. Systemd shut up, just run the damn script. It's not hard.

Later a co-worker explained the "systemd way" for handling something like multiple varnish instances on the system but I'm not doing that, in the meantime I just let chef start the services when it runs after the system boots(which means they start maybe 1 or 2 mins after bootup).

Another thing bit us with systemd recently as well again going back to bind. Someone on the team upgraded our DNS systems to systemd and the startup parameters for bind were not preserved because systemd ignores the /etc/default/bind file. As a result we had tons of DNS failures when bind was trying to reach out to IPv6 name servers(ugh), when there is no IPv6 connectivity in the network (the solution is to start bind with a -4 option).

I believe I have also caught systemd trying to mess with file systems(iscsi mount points). I have lots of automation around moving data volumes on the SAN between servers and attaching them via software iSCSI directly to the VMs themselves(before vsphere 4.0 I attached them via fibre channel to the hypervisor but a feature in 4.0 broke that for me). I noticed on at least one occasion when I removed the file systems from a system that SOMETHING (I assume systemd) mounted them again, and it was very confusing to see file systems mounted again for block devices that DID NOT EXIST on the server at the time. I worked around THAT one I believe with the "noauto" option in fstab again. I had to put a lot of extra logic in my automation scripts to work around systemd stuff.

I'm sure I've only scratched the surface of systemd pain. I'm sure it provides good value to some people, I hear it's good with containers (I have been running LXC containers for years now, I see nothing with systemd that changes that experience so far).

But if systemd would just do this one thing and go into dumb mode with init scripts I would be quite happy.

GrumpenKraut
Re: as a linux user for 22 users

Now more seriously: it really strikes me that complaints about systemd come from people managing non-trivial setups like the one you describe. While it might have been a PITA to get this done with the old init mechanism, you could make it work reliably.

If systemd is a solution to any set of problems, I'd love to have those problems back!

Anonymous Coward

Linux without systemd: Devuan ASCII or Ubuntu 16 LTS

It's time to install Debian without systemd, called Devuan: https://dev1galaxy.org/viewtopic.php?id=2047

Don't use a Linux with systemd! Stay with Ubuntu 14 LTS or Ubuntu 16 LTS, or the new Devuan ASCII (Debian).

And stop supporting Ubuntu 18 LTS, Debian and RedHat. They are sleeping with the evil cancerous corp. MSFT is paying RedHat and Ubuntu to EEE Linux with systemd

Dabbb

Re: What's the alternative?

Unfortunately it's not really an option for businesses that need safety of support contract, either from Red Hat or Oracle and majority of environments which actually pay sysadmins money do run either RHEL or OL.

So there's no alternative once RHEL6/OL6 are EOL.

There might be an opportunity for Oracle (the only real vendor that has developers and resources required) to create systemd-less variant of OL7 and put the nail in the coffin of RHEL7 while gaining lots of new customers and significantly grow OL install base, but they don't seem interested doing it.

doug_bostrom
Systemd is the Linux world's MS Office "ribbon," another of those sad inflection points where developers have crawled up their own asses and lost sight of the light of day.
cjcox
He's a pain

Early on I warned that he was trying to solve a very large problem space. He insisted he could do it with his 10 or so "correct" ways of doing things, which quickly became 20, then 30, then 50, then 90, etc.. etc. I asked for some of the features we had in init, he said "no valid use case". Then, much later (years?), he implements it (no use case provided btw).

Interesting fellow. Very bitter. And not a good listener. But you don't need to listen when you're always right.

Kabukiwookie
Re: Spherical wheel is superior.

I do know that I never saw a non-distribution provided init script that handled correctly the basic of corner cases – service already running

This only shows that you don't have much real life experience managing lots of hosts.

like application double forking when it shouldn't

If this is a problem in the init script, this should be fixed in the init script. If this is a problem in the application itself, it should be fixed in the application, not worked around by the init mechanism. If you're suggesting the latter, you should not be touching any production box.

"La, la, la, sysv is working fine on my machine, thankyouverymuch" is not what you can call "participating in discussion".

Shoving systemd down people's throats as a solution to a non-existing problem is not a discussion either; it is the very definition of 'my way or the highway' thinking.

now in the real world, people that have to deal with init systems on daily basis

Indeed and having a bunch of sub-par developers, focused on the 'year of the Linux desktop' to decide what the best way is for admins to manage their enterprise environment is not helping.

"the dogs may bark, but the caravan moves on"

Indeed. It's your way or the highway; I thought you were just complaining about the people complaining about systemd not wanting to have a discussion, while all the while it's systemd proponents ignoring and dismissing very valid complaints.

[Oct 16, 2018] How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command by Prakash Subramanian

Oct 15, 2018 | www.2daygeek.com
This is an important topic for Linux admins (and a wonderful one), so everyone should be aware of it and practice how to use it efficiently.

In Linux, whenever we install a package that provides a service or daemon, its init or systemd scripts are added by default, but the service is not enabled.

Hence, we need to enable or disable the service manually when required. Three major init systems are available in Linux; all are well known and still in use.

What is init System?

In Linux/Unix based operating systems, init (short for initialization) is the first process started by the kernel during system boot.

It holds process ID (PID) 1 and keeps running in the background until the system is shut down.

Init looks at the /etc/inittab file to decide the Linux run level, then starts all other processes and applications in the background according to that run level.

The BIOS, MBR, GRUB and kernel stages run before the init process as part of the Linux boot sequence.

There are seven run levels in Linux, from zero to six: 0 (halt), 1 (single-user mode), 2 (multi-user without NFS), 3 (full multi-user, text mode), 4 (unused/custom), 5 (multi-user with graphical login), 6 (reboot).

The three init systems below are widely used in Linux.

What is System V (Sys V)?

System V (Sys V) is one of the first and most traditional init systems for Unix-like operating systems. init is the first process started by the kernel during system boot, and it is the parent process of everything else.

Most Linux distributions initially used the traditional init system called System V (Sys V). Over the years, several replacement init systems were released to address design limitations in it, such as launchd, the Service Management Facility, systemd and Upstart.

But systemd has been adopted by several major Linux distributions over the traditional SysV init systems.

What is Upstart?

Upstart is an event-based replacement for the /sbin/init daemon which handles starting of tasks and services during boot, stopping them during shutdown and supervising them while the system is running.

It was originally developed for the Ubuntu distribution, but is intended to be suitable for deployment in all Linux distributions as a replacement for the venerable System-V init.

It was used in Ubuntu from 9.10 through Ubuntu 14.10 and in RHEL 6 based systems; after that it was replaced by systemd.

What is systemd?

Systemd is a new init system and system manager which has been adopted by all the major Linux distributions in place of the traditional SysV init system.

systemd is compatible with SysV and LSB init scripts and can work as a drop-in replacement for sysvinit. systemd is the first process started by the kernel and holds PID 1.

It is the parent process of everything else. Fedora 15 was the first distribution to adopt systemd instead of Upstart. systemctl is the command-line utility and primary tool to manage systemd daemons/services (start, restart, stop, enable, disable, reload and status).

systemd uses .service unit files instead of the bash scripts SysVinit uses. systemd sorts all daemons into their own Linux cgroups, and you can inspect the hierarchy under /sys/fs/cgroup/systemd.
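
As an illustration (not from the article), a minimal .service unit might look like this; the name myapp.service and the binary path are hypothetical:

# /etc/systemd/system/myapp.service
[Unit]
Description=My example application
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target

It would then be enabled on boot with systemctl enable myapp.service and started with systemctl start myapp.service.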

How to Enable or Disable Services on Boot Using the chkconfig Command?

The chkconfig utility is a command-line tool that allows you to specify in which run level to start a selected service, as well as to list all available services along with their current settings.

It also allows us to enable or disable a service on boot. Make sure you have superuser privileges (either root or sudo) to use this command.

All the service scripts are located in /etc/rc.d/init.d.
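
For example, enabling or disabling a service on boot looks like this (httpd is just an example service name):

# chkconfig httpd on
# chkconfig --level 35 httpd on
# chkconfig httpd off

The first form enables the service in the default run levels (2, 3, 4 and 5), the second restricts it to run levels 3 and 5 only, and the third disables it on boot.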

How to List All Services and Their Run Levels

The --list parameter displays all the services along with their current status (in which run levels each service is enabled or disabled).

# chkconfig --list
NetworkManager     0:off    1:off    2:on    3:on    4:on    5:on    6:off
abrt-ccpp          0:off    1:off    2:off    3:on    4:off    5:on    6:off
abrtd              0:off    1:off    2:off    3:on    4:off    5:on    6:off
acpid              0:off    1:off    2:on    3:on    4:on    5:on    6:off
atd                0:off    1:off    2:off    3:on    4:on    5:on    6:off
auditd             0:off    1:off    2:on    3:on    4:on    5:on    6:off
.
.

How to Check the Status of a Specific Service

If you would like to see the run-level status of a particular service, use the following format and grep for the required service.

In this case, we are going to check the run-level status of the auditd service.
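
The original article is cut off here; the command presumably continues along these lines (the auditd line simply repeats the listing shown above):

# chkconfig --list | grep auditd
auditd             0:off    1:off    2:on    3:on    4:on    5:on    6:off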

[Oct 16, 2018] CentOS - [CentOS] Cemtos 7 Systemd alternatives

Oct 16, 2018 | n5.nabble.com

Ned Slider

On 08/07/14 02:22, Always Learning wrote:
>
> On Mon, 2014-07-07 at 20:46 -0400, Robert Moskowitz wrote:
>
>> On 07/07/2014 07:47 PM, Always Learning wrote:
>>> Reading about systemd, it seems it is not well liked and reminiscent of
>>> Microsoft's "put everything into the Windows Registry" (Win 95 onwards).
>>>
>>> Is there a practical alternative to omnipresent, or invasive, systemd ?
>
>> So you are following the thread on the Fedora list? I have been
>> ignoring it.
>
> No. I read some of
> http://www.phoronix.com/scan.php?page=news_topic&q=systemd
>
> The systemd proponent, advocate and chief developer? wants to
> abolish /etc and /var in favour of having the /etc and /var data
> in /usr.
>
> Seems a big revolution is being forced on Linux users when stability and
> the "same old familiar Linux" is desired by many, including me.
> ...
It's already started. Some configs have already moved from /etc to /usr under el7.

Whilst I'm as resistant to change as the next man, I've learned you can't fight it so best start getting used to it ;-)

[Oct 15, 2018] Breaking News! SUSE Linux Sold for $2.5 Billion It's FOSS by Abhishek Prakash

Acquisition by a private equity shark is never good news for a software vendor...
Jul 03, 2018 | itsfoss.com

British software company Micro Focus International has agreed to sell SUSE Linux and its associated software business to Swedish private equity group EQT Partners for $2.535 billion. Read the details. (rm, 3 months ago)


Novell acquired SUSE in 2003 for $210 million. (asoc, 4 months ago)


"It has over 1400 employees all over the globe "
They should be updating their CVs.

[Oct 15, 2018] I honestly, seriously sometimes wonder if systemd is Skynet... or, a way for Skynet to 'waken'.

Oct 15, 2018 | linux.slashdot.org

thegarbz ( 1787294 ) , Sunday August 30, 2015 @04:08AM ( #50419549 )

Re:Hang on a minute... ( Score: 5 , Funny)
I honestly, seriously sometimes wonder if systemd is Skynet... or, a way for Skynet to 'waken'.

Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. At 2:15am it crashes.
No one knows why. The binary log file was corrupted in the process and is unrecoverable. All anyone could remember is a bug listed in the systemd bug tracker talking about su which was classified as WON'T FIX as the developer thought it was a broken concept.

[Oct 15, 2018] Systemd as doord interface for cars ;-) by Nico Schottelius

Notable quotes:
"... Let's say every car manufacturer recently discovered a new technology named "doord", which lets you open up car doors much faster than before. It only takes 0.05 seconds, instead of 1.2 seconds on average. So every time you open a door, you are much, much faster! ..."
"... Many of the manufacturers decide to implement doord, because the company providing doord makes it clear that it is beneficial for everyone. And additional to opening doors faster, it also standardises things. How to turn on your car? It is the same now everywhere, it is not necessarily to look for the keyhole anymore. ..."
"... Unfortunately though, sometimes doord does not stop the engine. Or if it is cold outside, it stops the ignition process, because it takes too long. Doord also changes the way how your navigation system works, because that is totally related to opening doors, but leads to some users being unable to navigate, which is accepted as collateral damage. In the end, you at least have faster door opening and a standard way to turn on the car. Oh, and if you are in a traffic jam and have to restart the engine often, it will stop restarting it after several times, because that's not what you are supposed to do. You can open the engine hood and tune that setting though, but it will be reset once you buy a new car. ..."
Oct 15, 2018 | blog.ungleich.ch

Let's say every car manufacturer recently discovered a new technology named "doord", which lets you open up car doors much faster than before. It only takes 0.05 seconds, instead of 1.2 seconds on average. So every time you open a door, you are much, much faster!

Many of the manufacturers decide to implement doord, because the company providing doord makes it clear that it is beneficial for everyone. And additional to opening doors faster, it also standardises things. How to turn on your car? It is the same now everywhere, it is not necessarily to look for the keyhole anymore.

Unfortunately though, sometimes doord does not stop the engine. Or if it is cold outside, it stops the ignition process, because it takes too long. Doord also changes the way how your navigation system works, because that is totally related to opening doors, but leads to some users being unable to navigate, which is accepted as collateral damage. In the end, you at least have faster door opening and a standard way to turn on the car. Oh, and if you are in a traffic jam and have to restart the engine often, it will stop restarting it after several times, because that's not what you are supposed to do. You can open the engine hood and tune that setting though, but it will be reset once you buy a new car.

[Oct 15, 2018] Future History of Init Systems

Oct 15, 2018 | linux.slashdot.org

AntiSol ( 1329733 ) , Saturday August 29, 2015 @03:52PM ( #50417111 )

Re:Approaching the Singularity ( Score: 4 , Funny)

Future History of Init Systems

  • 2015: systemd becomes default boot manager in debian.
  • 2017: "complete, from-scratch rewrite" [jwz.org]. In order to not have to maintain backwards compatibility, project is renamed to system-e.
  • 2019: debut of systemf, absorption of other projects including alsa, pulseaudio, xorg, GTK, and opengl.
  • 2021: systemg maintainers make the controversial decision to absorb The Internet Archive. Systemh created as a fork without Internet Archive.
  • 2022: systemi, a fork of systemf focusing on reliability and minimalism becomes default debian init system.
  • 2028: systemj, a complete, from-scratch rewrite is controversial for trying to reintroduce binary logging. Consensus is against the systemj devs as sysadmins remember the great systemd logging bug of 2017 unkindly. Systemj project is eventually abandoned.
  • 2029: systemk codebase used as basis for a military project to create a strong AI, known as "project skynet". Software behaves paradoxically and project is terminated.
  • 2033: systeml - "system lean" - a "back to basics", from-scratch rewrite, takes off on several server platforms, boasting increased reliability. systemm, "system mean", a fork, used in security-focused distros.
  • 2117: critical bug discovered in the long-abandoned but critical and ubiquitous system-r project. A new project, system-s, is announced to address shortcomings in the hundred-year-old codebase. A from-scratch rewrite begins.
  • 2142: systemu project, based on a derivative of systemk, introduces "Artificially intelligent init system which will shave 0.25 seconds off your boot time and absolutely definitely will not subjugate humanity". Millions die. The survivors declare "thou shalt not make an init system in the likeness of the human mind" as their highest law.
  • 2147: systemv - a collection of shell scripts written around a very simple and reliable PID 1 introduced, based on the brand new religious doctrines of "keep it simple, stupid" and "do one thing, and do it well". People's computers start working properly again, something few living people can remember. Wyld Stallyns release their 94th album. Everybody lives in peace and harmony.

[Oct 15, 2018] They should have just rename the machinectl into command.com.

Oct 15, 2018 | linux.slashdot.org

RabidReindeer ( 2625839 ) , Saturday August 29, 2015 @11:38AM ( #50415833 )

What's with all the awkward systemd command names? ( Score: 5 , Insightful)

I know systemd sneers at the old Unix convention of keeping it simple, keeping it separate, but that's not the only convention they spit on. God intended Unix (Linux) commands to be cryptic things 2-4 letters long (like "su", for example). Not "systemctl", "machinectl", "journalctl", etc. Might as well just give everything a 47-character long multi-word command like the old Apple commando shell did.

Seriously, though, when you're banging through system commands all day long, it gets old and their choices aren't especially friendly to tab completion. On top of which why is "machinectl" a shell and not some sort of hardware function? They should have just named the bloody thing command.com.

[Oct 15, 2018] Oh look, another Powershell

Oct 15, 2018 | linux.slashdot.org

Anonymous Coward , Saturday August 29, 2015 @11:37AM ( #50415825 )

Cryptic command names ( Score: 5 , Funny)

Great to see that systemd is finally doing something about all of those cryptic command names that plague the unix ecosystem.

Upcoming systemd re-implementations of standard utilities:

ls to be replaced by filectl directory contents [pathname]
grep to be replaced by datactl file contents search [plaintext] (note: regexp no longer supported as it's ambiguous)
gimp to be replaced by imagectl open file [filename] draw box [x1,y1,x2,y2] draw line [x1,y1,x2,y2] ...
Anonymous Coward , Saturday August 29, 2015 @11:58AM ( #50415939 )
Re: Cryptic command names ( Score: 3 , Funny)

Oh look, another Powershell

[Oct 15, 2018] Check man chroot. The authors of chroot say it's useless for security

Notable quotes:
"... Noexec is basically a suggestion, not an enforcement mechanism . Just run ld /path/to/executable. ld is the loader/lilinker for elf binaries. Without ld ,you can't run bash, or ls. With ld, noexec is ignored. ..."
Oct 15, 2018 | linux.slashdot.org

raymorris ( 2726007 ) , Saturday August 29, 2015 @07:53PM ( #50418235 ) Journal

read the man page ( Score: 5 , Informative)

> In short: I think chroot is plenty good for security

Check man chroot. The authors of chroot say it's useless for security. Perhaps you think you know more than they do, and more than security professionals like myself do. Let's find out.

> you get a shell in one of my chroot's used for security, then.....
ur uid and gid are not going to be 0. Good luck telling the kernel to try and get you out.
There aren't going to be any /dev, /proc, or other special filesystems

Gonna be kind of tough to have a shell without a tty, aka /dev/*tty*

So yeah, you need /dev. Can't launch a process, including /bin/ls, without /proc, so you're going to need proc. Have a look in /proc/1. You'll see a very interesting symlink there.

> mounted noexec

Noexec is basically a suggestion, not an enforcement mechanism. Just run ld /path/to/executable. ld is the loader/linker for elf binaries. Without ld, you can't run bash, or ls. With ld, noexec is ignored.

My company does IT security for banks. Meaning we show the banks how they can be hacked. When I say chroot is not a security control, I'm not guessing.

[Oct 15, 2018] Systemd moved Linux closer to Windows

And that's why it is supported by Red Hat management. It's role as middleware for containers is very questionable indeed.
Oct 15, 2018 | linux.slashdot.org

Opportunist ( 166417 ) , Monday December 11, 2017 @05:19AM ( #55714501 )

Systemd moved Linux closer to Windows ( Score: 5 , Interesting)

Windows is a very complex system. Not necessarily because it needs to be complex, but rather because of "wouldn't it be great if we could also..." thinking. Take the registry. Good idea in its core, a centralized repository for all configuration files. Great. But wouldn't it be nice if we could also store some states in there? And we could put the device database in there, too. And how about the security settings? And ...

And eventually you had the mess you have now, where we're again putting configuration files into the %appdata% directory. But when we have configuration in there already anyway, couldn't we... and we could sync this for roaming, ya know...

Which is the second MS disease. How many users actually need roaming? 2, maybe 3 out of 10? The rest is working on a stationary desktop, never moving, never roaming. But they have to have this feature, needed or not. And if you take a look through the services, you'll notice that a lot of services that you simply know you don't need MUST run because the OS needs them for some freakish reason. Because of "wouldn't it be great if this service did also...".

systemd now brought this to the Linux world. Yes, it can do a lot. But unfortunately it does so, whether you need it or not. And it requires you to take these "features" into account when configuring it, even if you have exactly zero use for them and wouldn't potentially not even know just wtf they're supposed to do.

systemd is as overengineered as many Windows components. And thus of course as error prone. And while it can make things more manageable for huge systems, everything becomes more convoluted and complicated for anyone that has no use for these "wouldn't it be great if it also..." features.

[Oct 15, 2018] I don't care about systemd. By the time I used systemd, main problems were ironed out and now system just works

A very naive and self-centered view. Systemd is an architectural blunder. In such cases problems never cease to exist. That's the nature of the beast.
Oct 15, 2018 | linux.slashdot.org

Plus1Entropy ( 4481723 ) , Monday December 11, 2017 @02:34AM ( #55714139 )

I have no problem with systemd ( Score: 4 , Interesting)

Yeah, yeah I know the history of its development and how log files are binary and the whole debug kernel flag fiasco. And I don't care. By the time I used systemd, that had already long passed.

I switched from Squeeze to Jessie a couple years ago, had some growing pains as I learned how to use systemd... but that was it. No stability issues, no bugs. Can't say whether things run better, but they definitely don't run worse.

I had only really been using Linux for a few years before the onset of systemd, and honestly I think that's part of the problem. People who complain about systemd the most seem to have been using Linux for a very long time and just "don't want to change". Whether its nostalgia or sunk-cost fallacy, I can't say, but beyond that it seems much more like a philosophical difference than a practical one. It just reminds me of people's refusal to use the metric system, for no better reason than they are unfamiliar with it.

If systemd is so terrible, then why did a lot of the major distros switch over? If they didn't, it would just be a footnote in the history of open source: "Hey remember when they tried to replace sysV and init with that stupid thing with the binary log files? What was it called? SystemP?" The fact that Devuan has not overtaken Debian in any real way (at least from what I've seen, feel free to correct me if I'm wrong) indicates that my experience with systemd is the norm, not the exception. The market has spoken.

I read TFA, there is not one single specific bug or instability mentioned about systemd. What is the "tiny detail" that split the community? I have no idea, because TFA doesn't say what it is. I know that part of the philosophy behind Linux is "figuring it out yourself", but if you don't explain to me these low level kernel details (if that's even what they are; again, I have no idea), then don't expect people like me to be on your side. Linux is just a tool to me, I don't have any emotional attachment to it, so if things are working OK I am not going to start poking around under the hood just because someone posts an article claiming there are problems, but never specifying what those problems are and how they affect me as a user.

Honestly TFA reads like "We are having development problems, therefore systemd sucks." I get that when major changes to the platform happens there are going to be issues and annoyances, but that's the way software development has always been and will always be. Even if systemd was perfect there would still be all kinds of compatibility issues and new conventions that developers would have to adapt to. That's what I would expect to happen whenever any major change is made to a widely used and versatile platform like Linux.

Even Linus doesn't really care [zdnet.com]:

"I don't actually have any particularly strong opinions on systemd itself. I've had issues with some of the core developers that I think are much too cavalier about bugs and compatibility, and I think some of the design details are insane (I dislike the binary logs, for example), but those are details, not big issues."

I'm not saying systemd is "better" or "the right answer". If you want to stick to distros that don't use it, that's up to you. But what I am saying is, get over it.

chaoskitty ( 11449 ) writes: < john AT sixgirls DOT org > on Monday December 11, 2017 @03:14AM ( #55714245 ) Homepage
Re:I have no problem with systemd ( Score: 5 , Insightful)

Perhaps you have had no problems with systemd because you aren't trying to use it to do much.

Lots of people, myself included, have had issues trying to get things which are trivial in pre-systemd or on other OSes to work properly and consistently on systemd. There are many, many, many examples of issues. If someone asked me for examples, I'd have a hard time deciding where to start because so many things have been gratuitously changed. If you really think there aren't examples, just read this thread.

On the other hand, I have yet to see real technical discussion about problems that systemd apparently is fixing. I honestly and openmindedly am curious about what makes systemd good, so I've tried on several occasions to find these discussions where good technical reasoning is used to explain the motivations behind systemd. If they exist, I haven't found any yet. I'm hoping some will appear as a result of this thread.

But you bring up the idea that the "market has spoken"? You do realize that a majority of users use Windows, right? And people in the United States are constantly electing politicians who directly hurt the people who vote for them more than anyone else. It's called marketing. Just because something has effective marketing doesn't mean it doesn't suck.

Plus1Entropy ( 4481723 ) writes:
Re: ( Score: 3 )

Don't let the perfect be the enemy of the good. If you have so many examples that you "don't know where to start", then start anywhere. You don't have to come at me with the best, most perfect example. Any example will do! I'm actually very interested. And I have, out of curiosity, looked a bit. But like you looking for why systemd is better, I came across a similar problem.

Your reply just continues the cycle I spoke of, where people who potentially know better than me, like you, claim there are problems bu

amorsen ( 7485 ) writes: < benny+slashdot@amorsen.dk > on Monday December 11, 2017 @04:35AM ( #55714405 )
Re:I have no problem with systemd ( Score: 5 , Insightful)

systemd fails silently if something is wrong in /etc/fstab. It just doesn't finish booting. Which is moderately annoying if you have access to the system console and you can guess that an unchanged /etc/fstab from before systemd, one that worked for a while with systemd, is now suddenly toxic.

If you do not have easy access to the system console or you are not blessed with divine inspiration, that is quite a bit more than annoying. Thanks to the binary log files you cannot even boot something random and read the logs, but at least you aren't missing anything, because nothing pertinent to the error is logged anyway.
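
For anyone bitten by the same /etc/fstab hang, the usual mitigation is to mark non-essential mounts so a missing device degrades the boot instead of stopping it. A minimal sketch (the UUID and mount point are placeholders; nofail is a standard mount option and x-systemd.device-timeout= is a systemd mount option, but check the man pages for your version):

    # /etc/fstab excerpt -- illustrative only
    # nofail: the boot may proceed even if this mount fails
    # x-systemd.device-timeout=10s: wait 10 seconds for the device instead of the long default
    UUID=0123-ABCD  /mnt/backup  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  2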

The problem is that one camp won't admit that old init is a pile of shit from the 80's whose only virtue is that the stench has faded over time, and the other camp won't admit that their new shiny toy needs to be understandable and debuggable.

A proper init system needs dependencies and service monitoring. init + monit does not cut it today. Systemd does that bit rather impressively well. It's just terrible at actually booting the system, all the early boot stuff that you could depend on old init to get right every time, or at least spit out aggressive messages about why it failed.
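
For what it's worth, the dependency and monitoring part really is just a few declarative lines. A minimal sketch of a unit file, assuming a hypothetical /usr/local/bin/example-worker daemon (the name and path are made up for illustration):

    # /etc/systemd/system/example-worker.service -- sketch only
    [Unit]
    Description=Example worker daemon
    After=network.target          # ordering: start after the network target
    Requires=network.target       # hard dependency on the network target

    [Service]
    ExecStart=/usr/local/bin/example-worker
    Restart=on-failure            # supervision: respawn if the process dies abnormally
    RestartSec=5

    [Install]
    WantedBy=multi-user.target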

marcansoft ( 727665 ) writes: < (moc.tfosnacram) (ta) (rotceh) > on Monday December 11, 2017 @05:25AM ( #55714513 ) Homepage
Re:I have no problem with systemd ( Score: 4 , Interesting)

Meanwhile here I am, running Gentoo, with init scripts that have had real dependencies for over 15 years (as well as a bash-based but much nicer scaffolding to write them), with simple to use admin tools and fully based on text files, with cgroup-based process monitoring (these days), and I'm wondering why everyone else didn't get the memo and suddenly decided to switch to systemd instead and bring along all the other baggage it comes with. Debian and Ubuntu had garbage init systems, and yet it seems *nobody* ever took notice of how Gentoo has been doing things right for a decade and a half. You can also use systemd with Gentoo if you want, because user choice is a good thing.

lkcl ( 517947 ) writes: < lkcl@lkcl.net > on Monday December 11, 2017 @05:05AM ( #55714467 ) Homepage
Re:I have no problem with systemd ( Score: 5 , Informative)
People who complain about systemd the most seem to have been using Linux for a very long time and just "don't want to change".

no, that's not it. people who have been using linux for a long time usually *know the corner-cases better*. in other words, they know *exactly* why it doesn't work and won't work, they know *exactly* the hell that it can and will create, under what circumstances, and they know *precisely* how they've been betrayed by the rail-roaded decisions made by distros without consulting them as to the complexities of the scenario to which they have been (successfully up until that point) deploying a GNU/Linux system.

also they've done the research - looked up systemd vs other init systems on the CVE mitre databases and gone "holy fuck".

also they've seen - perhaps even reported bugs themselves over the years - how well bugs are handled, and how reasonable and welcoming (or in some sad cases not, but generally it's ok) the developers are... then they've looked up the systemd bug database and how pottering abruptly CLOSES LEGITIMATE BUGREPORTS and they've gone "WHAT the fuck??"

also, they've been through the hell that was the "proprietary world", if they're REALLY old they've witnessed first-hand the "Unix Wars" and if they're not that old they experienced the domination of Windows through the 1990s. they know what a monoculture looks like and how dangerous that is for a computing eco-system.

in short, i have to apologise for pointing this out: they can read the danger signs far better than you can. sorry! :)

marcansoft ( 727665 ) writes: < (moc.tfosnacram) (ta) (rotceh) > on Monday December 11, 2017 @05:10AM ( #55714477 ) Homepage
Re:I have no problem with systemd ( Score: 4 , Informative)

Everyone* switched to systemd because everyone* was using something that was much, much worse. Traditional sysvinit is a joke for service startup, it can't even handle dependencies in a way that actually works reliably (sure, it works until a process fails to start or hangs, then all bets are off, and good luck keeping dependencies starting in the right order as the system changes). Upstart is a mess (with plenty of corner case bugs) and much harder to make sense of and use than systemd. I'm a much happier person writing systemd units than Upstart whatever-you-call-thems on the Ubuntu systems I have to maintain.

The problem with systemd is that although it does init systems *better* than everything else*, it's also trying to take over half a dozen more responsibilities that are none of its damn business. It's a monolithic repo, and it's trying as hard as it can to position itself as a hard dependency for every Linux system on the face of the planet. Distros needed* a new init system, and they got an attempt to take over the Linux ecosystem along with it.

* The exception is Gentoo, which for over 15 years has had an rc-script system (later rewritten as OpenRC) based on sysvinit as PID 1 but with real dependencies, easy to write initscripts, and all the features you might need in a server environment (works great for desktops too). It's the only distro that has had a truly server-worthy init system, with the right balance of features and understandability and ease of maintenance. Gentoo is the only major distro that hasn't switched to systemd, though it does offer systemd as an option for those who want it. OpenRC was proposed as a systemd alternative in the Debian talks, but Gentoo didn't advertise it, and nobody on the Debian side cared to give it a try. Interestingly Poettering seems to be *very* careful to *never, ever* mention OpenRC when he talks about how systemd is better than everything else. I wonder why. Gentoo developers have had to fork multiple things assimilated by systemd (like udev) in order to keep offering OpenRC as an option.

[Oct 15, 2018] The role played by http://angband.pl/debian/ [angband.pl] should be absorbed into the main debian packaging, providing "Replaces / Provides / Conflicts" alternatives of pulseaudio, libcups, bsdutils, udev, util-linux, uuid-runtime, xserver-xorg and many more - all with a -nosystemd extension on the package name

Oct 15, 2018 | linux.slashdot.org

lkcl ( 517947 ) writes: < lkcl@lkcl.net > on Monday December 11, 2017 @01:55AM ( #55714053 ) Homepage

Re:It's the implementation. ( Score: 5 , Interesting)
I don't think there's a problem with the idea of systemd. Having a standard way to handle process start-up, dependencies, failures, recovery, "contracts", etc... isn't a bad, or unique, thing -- Solaris has Service Manager, for example.

the difference is that solaris is - was - written and maintained by a single vendor. they have - had - the resources to keep it running, and you "bought in" to the sun microsystems (now oracle) way, and that was that. problems? pay oracle some money, get support... fixed.

free software is not *just* about a single way of doing things... because the single way doesn't fit absolutely *all* cases. take angstrom linux for example: an embedded version of GNU/Linux that doesn't even *have* an init system! you're expected to write your own initialisation system with hard-coded entries in /dev. why? because on an embedded system with only 32mb of RAM there *wasn't room* to run an init service.

then also we have freebsd and netbsd to consider, where security is much tighter and the team is smaller. in short: in the free software world unlike solaris there *is* no "single way" and any "single way" is guaranteed to be a nightmare pain-in-the-ass for at least somebody, somewhere.

this is what the "majority voting" that primarily debian - other distros less so because to some extent they have a narrower focus than debian - completely failed to appreciate. the "majority rule" decision-making, for all that it is blindly accepted to be "How Democracy Works" basically pissed in the faces of every debian sysadmin who has a setup that the "one true systemd way" does not suit - for whatever reason, where that reason ultimately DOES NOT MATTER, betraying an IMPLICIT trust placed by those extremely experienced users in the debian developers that you DO NOT fuck about with the underlying infrastructure without making it entirely optional.

now, it has to be said that the loss of several key debian developers, despite the incredible reasonable-ness of the way that they went about making their decision, made it clear to the whole debian team quite how badly they misjudged things: joey hess leaving with the declaration that debian's charter is a "toxic document" for example, and on that basis they have actually tried very hard to undo some of that damage.

the problem is that their efforts simply don't go far enough. udisk2, policykit, and several absolutely CRITICAL programs without which it is near flat-out impossible to run a desktop system - all gone. the only way to get those back is to add http://angband.pl/debian/ [angband.pl] to /etc/apt/sources.list and use the (often out-of-date) nosystemd recompiled versions of packages that SHOULD BE A PERMANENT PART OF DEBIAN.

in essence: whilst debian developers are getting absolutely fed up of hearing about systemd, they need to accept that the voices that tell them that there is a problem - even though those voices cannot often actually quite say precisely what is wrong - are never, ever, going to stop, UNTIL the day that the role played by http://angband.pl/debian/ [angband.pl] is absorbed into the main debian packaging, providing "Replaces / Provides / Conflicts" alternatives of pulseaudio, libcups, bsdutils, udev, util-linux, uuid-runtime, xserver-xorg and many more - all with a -nosystemd extension on the package name.

ONLY WHEN it is possible for debian users to run a debian system COMPLETELY free of everything associated with systemd - including libsystemd - will the utterly relentless voices and complaints stop, because only then, FINALLY, will people feel safer about running a debian system where there is absolutely NO possibility of harm, cost or inconvenience caused by the poisonous and utterly irresponsible attitude shown by pottering, with his blatant disregard for security, good design practices, and complete lack of respect for other peoples' valuable input by abruptly and irrationally closing extremely important bugreports. we may have been shocked that there were people who *literally* wanted to kill him, but those people did not react the way that they did, despite their inability to properly and rationally express themselves, without having a good underlying reason for doing so.

software libre is supposed to be founded on ethical principles. that's what the GPL license is actually about (the four freedoms are a reflection of ETHICAL standards). can you honestly declare that systemd has been developed - and then adopted - in a truly ethical fashion? because everything i see about systemd says the complete and absolute opposite. and that is why i won't allow it on any computers that i run - not just because technically it's an inferior design with no overall redeeming features (mass-adoption is NOT a redeeming feature, it's a monoculture-level microsoft-emulating disaster), but because its developers and its blatant rail-roaded adoption across so many distributions fundamentally violates the ethical principles on which the software libre community is *supposed* to be based.

[Oct 15, 2018] Systemd Absorbs su Command Functionality

Systemd might signify a change of generations of developers...
Notable quotes:
"... Lennart Poettering's long story short: "`su` is really a broken concept ..."
Oct 15, 2018 | linux.slashdot.org

mysidia ( 191772 ) , Saturday August 29, 2015 @11:34AM ( #50415809 )

Bullshit ( Score: 5 , Insightful)

Lennart Poettering's long story short: "`su` is really a broken concept

Declaring established concepts as broken so you can "fix" them.

Su is not a broken concept; it's a long well-established fundamental of BSD Unix/Linux. You need a shell with some commands to be run with additional privileges in the original user's context.

If you need a full login you invoke 'su -' or 'sudo bash -'

Deciding what a full login comprises is the shell's responsibility, not your init system's job.

RightwingNutjob ( 1302813 ) , Saturday August 29, 2015 @12:38PM ( #50416133 )
Re:Hang on a minute... ( Score: 5 , Interesting)

I've had a job now for about 10 years where a large fraction of the time I wear a software engineer's hat. Looking back now, I can point to a lot of design decisions in the software I work on that made me go "WTF?" when I first saw them as a young'un, but after having to contend with them for a good number of years, and thinking about how I would do them differently, I've come to the conclusion that the original WTF may be ugly and could use some polish, but the decisionmaking that produced it was fundamentally sound.

The more I hear about LP and systemd, the more it screams out that this guy just hasn't worked with Unix and Linux long enough to understand what it's used for and why it's built the way it is. His pronouncements just sound to me like an echo of my younger, stupider, self (and I just turned 30), and I can't take any of his output seriously. I really hope a critical mass of people are of the same mind with me and this guy can be made to redirect his energies somewhere where it doesn't fuck it up for the rest of us.

magamiako1 ( 1026318 ) , Saturday August 29, 2015 @01:42PM ( #50416503 )
Re:Hang on a minute... ( Score: 5 , Insightful)

Welcome to IT. Where the youngin's come in and rip up everything that was built for decades because "oh that's too complicated".

TheGratefulNet ( 143330 ) , Saturday August 29, 2015 @02:19PM ( #50416699 )
Re:Hang on a minute... ( Score: 5 , Insightful)

its the other way around. we used to have small, simple programs that did not take whole systems to build and gigs of mem to run in. things were easier to understand and concepts were not overdone a hundred times, just because 'reasons'.

now, we have software that can't be debugged well, people who are current software eng's have no attention span to fix bugs or do proper design, older guys who DO remember 'why' are no longer being hired and we can't seem to stand on our giants' shoulders anymore. again, because 'reasons'.

[Oct 15, 2018] Developers with the mentality of hobbyists are a problem; super productive developers with the mentality of a hobbyist can be a menace

I wonder how it happens that Red Hat has no developers able to form a countervailing force to Poettering and his "desktop linux" clique. Maybe because he has the implicit support of management, as Windowization of Linux is the strategic goal of Red Hat.
Looks like Poettering never used the environment modules package
Oct 15, 2018 | linux.slashdot.org

rubycodez ( 864176 ) , Saturday August 29, 2015 @11:53AM ( #50415919 )

Re:Bullshit ( Score: 4 , Interesting)

Poettering is so very wrong on many things, having a superficial and shallow understanding of why Unix is designed the way it is. He is just a hobbyist, not a hardened sys admin with years of experience. It's almost time to throw popular Linux distros in the garbage can and just go to BSD

Anonymous Coward writes:
Change for change's sake ( Score: 2 , Insightful)
he is the guy who delivers.

"Delivering" the wrong thing is not an asset, it's a liability.

And that's why Poettering is a liability to the Linux community.

0123456 ( 636235 ) , Saturday August 29, 2015 @12:24PM ( #50416057 )
Re:Bullshit ( Score: 4 , Insightful)

There are plenty of programmers who can spew out hundreds of lines of crap code in a day.

The problem is that others then have to spend years fixing it.

It's even worse when you let the code-spewers actually design the system, because you'll never be allowed to go back and redo things right.

Anonymous Coward , Saturday August 29, 2015 @12:53PM ( #50416235 )
Re:Bullshit ( Score: 5 , Insightful)

He brings new code, but brings nothing new. That's called re-inventing the wheel, and in Poettering's case, the old wheels worked better and didn't go flat as often, and were easier for average people to fix.

lucm ( 889690 ) writes:
Re: ( Score: 3 )
What we're dealing with now is something that neither "average person" nor "master geek" find easy to fix.

This is the best summary I've seen of the whole systemd thing. They try to Apple-ize linux but it's half-baked and neither more user-friendly nor more reliable than the stuff they replace.

rnturn ( 11092 ) , Saturday August 29, 2015 @04:37PM ( #50417391 )
Re:Bullshit ( Score: 4 , Insightful)
``They try to Apple-ize linux but it's half-baked and neither more user-friendly nor more reliable than the stuff they replace.

I've had the same complaint about CUPS -- Apple's screwball replacement for simple lpd -- for years. (And it's not just the Linux version that, IMHO, sucks. I recently had to live through using CUPS in an Apple shop and getting hard copy of anything was a real time sink.) I have a hard time figuring out what problem CUPS was intended to solve. All I can come up with was that it was shiny and new whereas lpd was old (but reliable). For my trusty, rock-solid HP LaserJet, I keep an old Linux distribution running so I can set it up using LPRng. A couple of lines in a text file and -- Voila! -- I have a print queue. Time spent^Wwasted in CUPS' GUI never seemed to make anything work.
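
The "couple of lines in a text file" is an /etc/printcap entry; a minimal local-printer example looks roughly like this (device path, queue name and spool directory are illustrative):

    # /etc/printcap -- minimal LPRng/lpd entry, sketch only
    lp|laserjet:\
        :lp=/dev/usb/lp0:\
        :sd=/var/spool/lpd/lp:\
        :sh: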

Systemd and well, just about anything Poettering touches is more obtuse than what it replaces, has commands that are difficult to remember, require more typing (making them prone to typos), and don't make much sense. Am I looking for the status of "servicename" or am I looking for the status of "servicename.target"? What's the difference? The guy's pushing me back to Slackware. Or, as someone above mentioned, BSD.
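
The servicename-versus-servicename.target confusion at least has a simple rule behind it: systemctl assumes a ".service" suffix when none is given, while a ".target" is a grouping of units (roughly the old runlevel), not a daemon. Using sshd purely as an example:

    systemctl status sshd                           # shorthand for sshd.service
    systemctl status sshd.service                   # identical, spelled out
    systemctl status multi-user.target              # a target: a group of units, not a daemon
    systemctl list-dependencies multi-user.target   # what that target pulls in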

techno-vampire ( 666512 ) , Saturday August 29, 2015 @04:02PM ( #50417183 ) Homepage
The way this should end ( Score: 4 , Insightful)

PoetteringOS

In the long run, he's not going to be satisfied until he's created his own OS, kernel and all because he calls anything he didn't write a "broken concept," whatever that is, and does his best to shove his version down everybody's throat. And, since his version is far more complex, far more pervasive and much, much harder to use or maintain, the community suffers. I do wish he would get off the pot and start developing the One True (Pottering) kernel so that the rest of the world can go back to ignoring him.

Kavonte ( 4239129 ) writes:
Re: ( Score: 3 , Interesting)

I tried a bunch of them a few years ago. I found that FreeBSD was the best one, even though it doesn't come with a GUI by default, and so you have to install it afterwards. (Seems kind of ridiculous to me, but that's how they package it for some reason.) I don't know if they've changed the documentation since then, but note that you don't have to compile X11 and your window manager, as there is a system that can install pre-compiled packages that they don't bother to mention until after they tell you how

rubycodez ( 864176 ) writes:
Re: ( Score: 3 )

But there are distros based on FreeBSD, such as PC-BSD, that have the UI and other desktop features and apps canned and ready to go

Anonymous Coward , Saturday August 29, 2015 @11:55AM ( #50415925 )
Re:Bullshit ( Score: 5 , Informative)

Just like he considers exit statuses, stderr, and syslog "broken concepts." That is why systemd supports them so poorly. He just doesn't understand why those things are critical. An su system that doesn't properly log to syslog is a serious security problem.

phantomfive ( 622387 ) , Saturday August 29, 2015 @01:55PM ( #50416559 ) Journal
Re:Bullshit ( Score: 5 , Interesting)

ok, I just spent my morning researching the problem, and why the feature got built, starting from here [github.com] (linked to in the article). Essentially, the timeline goes like this:

1) On Linux, the su command uses PAM to manage logins (that's probably ok).
2) systemd wrote their own version of PAM (because containers)
3) Unlike normal su, the systemd-pam su doesn't transfer over all environment variables, which led to:
4) A bug filed by a user, that the XDG_RUNTIME_DIR variable wasn't being maintained when su was run.
5) Lennart said that's because su is confusing, and he wouldn't fix it.
6) The user asked for a feature request to be added to machinectl, that would retain that environment variable
7) Lennart said, "sure, no problem." (Which shows why systemd is gaining usage, when people want a feature, he adds it)

It's important to note that there isn't a conspiracy here to destroy su. The process would more accurately be called "design by feature accretion," which doesn't really make you feel better, but it's not malice.
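
The difference the original bug report was about can be seen directly. A sketch, assuming a test account called someuser (exact behaviour depends on your PAM configuration and systemd version):

    # Plain su: no new logind session is created, so XDG_RUNTIME_DIR is
    # typically inherited from the caller or simply left unset.
    su someuser -c 'echo "su:         XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR"'

    # machinectl shell asks systemd-logind for a fresh session on the local host
    # (".host"), so /run/user/<uid> is set up and the variable points at it.
    machinectl shell someuser@.host /bin/sh -c 'echo "machinectl: XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR"'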

gweihir ( 88907 ) , Saturday August 29, 2015 @12:35PM ( #50416121 )
Re:Bullshit ( Score: 5 , Insightful)
Deciding what a full login comprises is the shell's responsibility, not your init system's job.

And certainly not the job of one Poettering, who still has not produced one piece of good software in his life.

Anonymous Coward , Saturday August 29, 2015 @12:27PM ( #50416069 )
Re:Bullshit ( Score: 5 , Interesting)
If you want a FULL shell
Oh I dont know 'su bash' usually works pretty fng good...

It does if you are fine with only getting root privilege, without the FULL environment of root. But if you have to make sure you get the FULL root environment, first discarding anything you had in the calling user's environment and then executing the root user's environment (/etc/profile etc.), you had better use "su - bash" or "sudo -i". Compare what you get both ways, "su bash" vs "su - bash", by running the "set" and "env" commands.

Failing to get the FULL root environment can have security implications (umask, wrong path, wrong path order, ...) which may or may not be critical depending on what system you are operating and for whom. Also some commands may fail or misbehave just because of path differences etc.

The above is trivial information and should be clear without further explanation to anyone running *nix systems for someone else as part of a job, i.e. anyone working professionally in the field. In case you don't, it's still useful information you should learn about administering the platform you happen to use.
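
A quick way to see the difference for yourself, with nothing systemd-specific involved (run as a user allowed to su to root):

    su  root -c 'env | sort' > /tmp/su-plain.env    # keeps most of the caller's environment
    su - root -c 'env | sort' > /tmp/su-login.env   # full login: root's own profile is sourced
    diff /tmp/su-plain.env /tmp/su-login.env        # PATH, HOME, LOGNAME, MAIL etc. typically differ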

[Oct 15, 2018] The debate over replacing the "init system" was a complete red herring; systemd knows no boundaries and continues to expand its tentacles over the system as it subsumes more and more components.

Notable quotes:
"... The debate over replacing the "init system" was a complete red herring; systemd knows no boundaries and continues to expand its tentacles over the system as it subsumes more and more components. ..."
"... My problem with this is that once a distribution has adopted systemd, they have to basically just accept whatever crap is shovelled out in the subsequent systemd releases--it's all or nothing and once you're on the train you can't get off it. This was absolutely obvious years ago. Quality software engineering and a solid base system walked out of the door when systemd arrived; I certainly did. ..."
Oct 15, 2018 | linux.slashdot.org

wnfJv8eC ( 1579097 ) , Saturday August 29, 2015 @12:43PM ( #50416173 )

Thinking about leaving any systemd linux behind ( Score: 5 , Insightful)

I am really tired of systemd. So really tired of the developers shoving that shit down the linux throat. It's not pretty, it seems to grow out of control, taking on more and more responsibility .... I don't even have an idea how to look at my logs anymore. Nor how to clear the damn things out! Adding toolkits should make the system as clear to understand as it was, not more complex. If it gets any worse it might as well be Windows 10! init was easy to understand, easy to use. syslog was easy to read, easy to understand and easy to clear. All this bull about "it's a faster startup" is just ... well, bull. I'm using a computer 20 times faster than I was a decade ago. You think 20 seconds off a minute startup is an achievement? It's seconds on a couple of days uptime; big f*cking deal. Redhat, Fedora, turn away from the light and return to your roots!
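
For completeness, the journalctl equivalents of "look at my logs" and "clear the damn things out" do exist, whether or not one likes the binary format (these are standard journalctl options):

    journalctl -b                      # everything logged since the current boot
    journalctl -u sshd -f              # follow one service's log, like tail -f on a syslog file
    journalctl --vacuum-time=2weeks    # delete journal entries older than two weeks
    journalctl --vacuum-size=500M      # or cap the journal's total disk usage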

rl117 ( 110595 ) writes: < rleigh@[ ]elibre.net ['cod' in gap] > on Saturday August 29, 2015 @03:57PM ( #50417157 ) Homepage
Re:What path have we chosen? ( Score: 5 , Interesting)

I can't speak for any distribution, after quitting as a Debian developer some months back, for several reasons one of which was systemd. But speaking for myself, it was quite clear during the several years of "debate" (i.e. flamewars) over systemd that this was the inevitable outcome. The debate over replacing the "init system" was a complete red herring; systemd knows no boundaries and continues to expand its tentacles over the system as it subsumes more and more components.

My problem with this is that once a distribution has adopted systemd, they have to basically just accept whatever crap is shovelled out in the subsequent systemd releases--it's all or nothing and once you're on the train you can't get off it. This was absolutely obvious years ago. Quality software engineering and a solid base system walked out of the door when systemd arrived; I certainly did.

When I commit to a system such as a Linux distribution like Debian, I'm making an investment of my time and effort to use it. I do want to be able to rely on future releases being sane and not too radical a departure from previous releases--I am after all basing my work and livelihood upon it. With systemd, I don't know what I'm going to get with future versions and being able to rely on the distribution being usable and reliable in the future is now an unknown. That's why I got off this particular train before the jessie release. After 18 years, that wasn't an easy decision to make, but I still think it was the right one. And yes, I'm one of the people who moved to FreeBSD. Not because I wanted to move from Debian after having invested so much into it personally, but because I was forced to by this stupidity. And FreeBSD is a good solid dose of sanity.

[Oct 15, 2018] Ever stop and ask why Red Hat executives support systemd?

That does not prevent Oracle from copying it, does it?
Oct 15, 2018 | linux.slashdot.org

walterbyrd ( 182728 ) , Saturday August 29, 2015 @11:09PM ( #50418815 )

Ever stop and ask why? ( Score: 5 , Insightful)

This has been going on for years, and has years more to go. This is a long term strategy.

But why?

Why has Red Hat been replacing standard Linux components with Red Hat components, when the Red Hat stuff is worse?

Why isn't systemd optional? It is just an init replacement, right? Why does Red Hat care which init you use?

Why is systemd being tied to so many other components?

Why binary logging? Who asked for that?

Why throw away POSIX, and the entire UNIX philosophy? Clearly you do not have to do that just to replace init.

Why does Red Hat instantly berate anybody who does not like systemd? Why the barrage of ad hominem attacks on systemd critics?

I think there is only one logical answer to all of those questions, and it's glaringly obvious.

[Oct 15, 2018] Actually, the 'magic' in su is in the kernel. Basically, since it's marked suid root, the kernel sets the uid on the new process to root before it even starts running. The program itself just then decides if it is willing to do anything for you.

Oct 15, 2018 | linux.slashdot.org

sjames ( 1099 ) , Saturday August 29, 2015 @12:55PM ( #50416253 ) Homepage Journal

Re:Is it April 1st already? ( Score: 2 )

Actually, the 'magic' in su is in the kernel. Basically, since it's marked suid root, the kernel sets the uid on the new process to root before it even starts running. The program itself just then decides if it is willing to do anything for you.
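
In other words, the privilege comes from the setuid bit on the binary, not from anything clever inside su itself; it is visible in the file mode (path and exact output vary by distro):

    $ ls -l /bin/su
    -rwsr-xr-x 1 root root ... /bin/su    # the "s" in place of the owner's "x" is the setuid bit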

rubycodez ( 864176 ) , Saturday August 29, 2015 @11:55AM ( #50415923 )
Re:BSD is looking better all the time ( Score: 2 , Insightful)

That's what Poettering has been doing his whole life, getting into good open source projects, squatting and then shitting all over them. The infection, stink and filth then linger for decades. He's a cancer on open source.

0123456 ( 636235 ) , Saturday August 29, 2015 @12:25PM ( #50416063 )
Re:BSD is looking better all the time ( Score: 5 , Insightful)
That's a bit rude... I think Poettering's main motivation has been to simply modernize Linux.

Where 'modernize' is a codeword for 'shit all over'.

el_chicano ( 36361 ) writes:
Re: ( Score: 2 )
That's a bit rude... I think Poettering's main motivation has been to simply modernize Linux.

I can see that as being one of his goals, but if you want to improve Linux, why a new init system plus everything else? I did not hear any system admins asking for this.

He would be considered a saint if he would do something useful like fix the desktop environments so the "Year of the Linux Desktop" finally gets here.

phantomfive ( 622387 ) , Saturday August 29, 2015 @12:35PM ( #50416117 ) Journal
Re:BSD is looking better all the time ( Score: 5 , Insightful)
That's a bit rude... I think Poettering's main motivation has been to simply modernize Linux.

Yeah, that's true. He sees features people want, and he builds them. For example, Debian distro builders were frustrated writing init scripts, so Poettering made something that filled the need of those distro builders [slashdot.org]. That's why it got adopted, because it contained features they wanted.

The problem of course is that he doesn't understand the Unix way [catb.org], especially when it comes to good interfaces between code [slashdot.org] (IMNSHO).

The people who like systemd tend to like the features.......the people who dislike it, the architecture.

RabidReindeer ( 2625839 ) writes:
Re: ( Score: 3 )

I had trouble with init scripts. The systemd init subsystem was a better approach. The problem was, systemd also brought in a lot of stuff that wasn't directly part of the init subsystem that I didn't want, don't want, and don't see any probability of ever wanting.

Because Poettering doesn't understand "modular", I don't get just the good stuff - it's all or nothing. And because systemd isn't even modular as an overgrown bloated monstrosity, the only way to avoid it is to either run old distros or some other

phantomfive ( 622387 ) , Saturday August 29, 2015 @02:32PM ( #50416759 ) Journal
Re:BSD is looking better all the time ( Score: 4 , Insightful)
I had trouble with init scripts. The systemd init subsystem was a better approach. The problem was, systemd also brought in a lot of stuff that wasn't directly part of the init subsystem that I didn't want, don't want, and don't see any probability of ever wanting.

Yeah, that's basically the problem. Systemd is really three different things:

1) init system
2) cgroups manager (cgroups architecture is still crap, btw)
3) session manager

It probably does more stuff, but it's hard to keep track of it all
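
Each of those roles has its own inspection tool, which is one way to get a feel for how much ground the project covers (all three commands ship with systemd):

    systemctl list-units --type=service   # the init / service-manager view
    systemd-cgls                          # the cgroup tree systemd maintains for those services
    loginctl list-sessions                # the session-manager (logind) view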

ezakimak ( 160186 ) , Saturday August 29, 2015 @02:16PM ( #50416681 )
Re:BSD is looking better all the time ( Score: 5 , Informative)

OpenRC++

OpenRC init scripts are fairly straightforward.
Coupled with Gentoo's baselayout, the config file layout is fairly normalized also.
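
For readers who have never seen one, a bare-bones OpenRC script looks roughly like this (the service name and binary path are invented; real scripts live in /etc/init.d and the directives shown are standard OpenRC ones):

    #!/sbin/openrc-run
    # /etc/init.d/example-worker -- hypothetical service, sketch only

    command="/usr/local/bin/example-worker"
    command_background="yes"
    pidfile="/run/example-worker.pid"

    depend() {
        need net       # require the "net" virtual service
        use logger     # start after the logger, if one is configured
    }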

Electricity Likes Me ( 1098643 ) writes:
Re: ( Score: 3 )

Yes and init scripts are just a bastion of race-free stateful design, and service monitoring. Except not at all those things.

menkhaura ( 103150 ) writes: < espinafre@gmail.com > on Saturday August 29, 2015 @12:39PM ( #50416147 ) Homepage Journal
Re:BSD is looking better all the time ( Score: 5 , Insightful)

Please remember devuan (http://www.devuan.org), a Debian fork which aims to do away with systemd and all that bullcrap. It's picking up steam, and I believe things like these make it more and more worth it to help the new fork.

[Oct 15, 2018] There are two types of German engineering. Good engineering and over engineering. And there is a fine line between them. And it looks like Mr. Poettering crossed it

The RHEL7 story looks more and more like the Windows 10 story.
Oct 15, 2018 | linux.slashdot.org

prefec2 ( 875483 ) , Saturday August 29, 2015 @12:39PM ( #50416149 )

Strange path he is taking ( Score: 2 )

First of all, there are two types of German engineering. Good engineering and over engineering. And there is a fine line between them. And it looks like Mr. Poettering crossed it. However, it could also be German advertising, and that is either bad or worse. In general, you do not build bloated components. In old Unix days these were called programs and could be combined in various ways, including pipes and files. In GNU days many of these programs were bundled together in one archive, but stayed separate.

Now with systemd I am puzzled, is he really integrating that thing in the init system?

Integrating something which does not belong to an init system? In that case he is nuts and definitely over engineering. Or he has just created a new program and just bundles it in the same package as systemd.

Then this is acceptable, however, a little weird. It would be like bundling systemd with a sound service.

Session separation or VM separation is a task of the operating system. And you may write any number of tool to call the necessary OS functions, but PLEASE keep them out of components which have nothing to do with that.

[Oct 15, 2018] The rumours that vi will become part of systemd are groundless, comrade. Anyone who suggests such a thing is guilty of agitation and propaganda, and will be sent to the re-education camps.

Oct 15, 2018 | linux.slashdot.org

0123456 ( 636235 ) , Saturday August 29, 2015 @12:31PM ( #50416085 )

Re:Embrace, Extend, Extinguish ( Score: 2 )
The feature creep will be fast and merciless, but I'm just a systemd "hater", right?

The rumours that vi will become part of systemd are groundless, comrade. Anyone who suggests such a thing is guilty of agitation and propaganda, and will be sent to the re-education camps.

[Oct 15, 2018] Doing everything as systemd does, and adding 'su', is likely a new security threat

Oct 15, 2018 | linux.slashdot.org

slashways ( 4172247 ) , Saturday August 29, 2015 @11:42AM ( #50415845 )

Security ( Score: 5 , Insightful)

Doing everything as systemd does, and adding 'su', is likely a new security threat.

ThorGod ( 456163 ) writes:
Re: ( Score: 2 )

That's a pretty good point I think

Microlith ( 54737 ) writes:
Re: ( Score: 3 , Interesting)

No offense, but I see lots of attacks like this on systemd. Can you explain how it is "likely a new security threat" or is it simply FUD?

phantomfive ( 622387 ) , Saturday August 29, 2015 @12:42PM ( #50416169 ) Journal
Re:Security ( Score: 5 , Insightful)
Can you explain how it is "likely a new security threat" or is it simply FUD?

Bruce Schneier (in Cryptography Engineering ) pointed out that to keep something secure, you need to keep it simple (because exploits hide in complexity). When you have a large, complex, system that does a lot of different things, there's a high chance that there are security flaws. If you go to DefCon, speakers will actually say that one of the things they look for when doing 'security research' is a large, complex interface.

So that's the reason. When you see a large complex system running as root, it means hackers will be root.

phantomfive ( 622387 ) , Saturday August 29, 2015 @11:42AM ( #50415853 ) Journal
quality engineering ( Score: 4 , Insightful)

There is no reason the creation of privileged sessions should depend on a particular init system. It's fairly obvious that is a bad idea from a software design perspective. The only architectural reason to build it like that is because so many distros already include systemd, so they don't have to worry about getting people to adopt this (incidentally, that's the same reason Microsoft tried to deeply embed the browser in their OS.....remember active desktop?)

If there are any systemd fans out there, I would love to hear them justify this from an architectural perspective.

QuietLagoon ( 813062 ) , Saturday August 29, 2015 @12:10PM ( #50415985 )
Re:quality engineering ( Score: 5 , Insightful)

Poettering is following the philosophy that has created nearly every piece of bloated software in existence today: the design is not complete unless there is nothing more that can be added. Bloated software feeds upon the constant influx of new features, regardless of whether those new features are appropriate or not. They are new, therefore they are justified.

You know you have achieved perfection in design, not when you have nothing more to add, but when you have nothing more to take away.
-- Antoine de Saint-Exupery

penguinoid ( 724646 ) , Saturday August 29, 2015 @11:51AM ( #50415899 ) Homepage Journal
Upgrade ( Score: 5 , Funny)

You should replace it with the fu command.

QuietLagoon ( 813062 ) , Saturday August 29, 2015 @12:02PM ( #50415953 )
systemd is a broken concept ( Score: 5 , Insightful)

... Lennart Poettering's long story short: "`su` is really a broken concept. ...

So every command that Poettering thinks may be broken is added to the already bloated systemd?

How long before there is nothing left to GNU/Linux besides the Linux kernel and systemd?

Anonymous Coward writes:
Re: ( Score: 3 , Insightful)

I'd just like to interject for moment. What you're refering to as GNU/Linux, is in fact, Systemd/Linux, or as I've recently taken to calling it, Systemd plus Linux. GNU is not a modern userland unto itself, but rather another free component of a fully functioning Linux system that needs to be replaced by a shitty nonfunctional init system, broken logging system, and half-assed vital system components comprising a fully broken OS as defined by Lennart Poettering.

Many computer users run a version of the Syste

Anonymous Coward , Saturday August 29, 2015 @12:11PM ( #50415997 )
Seems like a 'while they were at it' sort of thing ( Score: 2 , Interesting)

So systemd has ambition of being a container and VM management infrastructure (I have no idea how this should make sense for VMs though.)

machinectl shell looks to be designed to be some way to attach to a container environment with an interactive shell, without said container needing to do anything to provide such a way in. While they were at the task of doing that not too terribly unreasonable thing, they did the same function for what they call '.host', essentially meaning they can use the same syntax for current container context as guest contexts. A bit superfluous, but so trivial as not to raise any additional eyebrows (at least until Lennart did his usual thing and stated one of the most straightforward, least troublesome parts of UNIX is hopelessly broken and the world desperately needed his precious answer). In short, systemd can have their little 'su' so long as no one proposes removal of su or sudo or making them wrappers over the new and 'improved' systemd behavior.

Funnily enough, they used sudo in the article talking about how awesome an idea this is... I am amused.

tlambert ( 566799 ) , Saturday August 29, 2015 @12:23PM ( #50416047 )
I, for one, welcome this addition... ( Score: 5 , Insightful)

I, for one, welcome this addition... every privilege escalation path you add is good for literally years of paid contract work.

butlerm ( 3112 ) , Saturday August 29, 2015 @12:25PM ( #50416059 )
Only incidentally similar to su ( Score: 5 , Informative)

machinectl shell is only incidentally similar to su. Its primary purpose is to establish an su-like session on a different container or VM. Systemd refers to these as 'machines', hence the name machinectl.

http://www.freedesktop.org/sof... [freedesktop.org]

su cannot and does not do that sort of thing. machinectl shell is more like a variant of rsh than a replacement for su.
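
The distinction is clearer from the commands themselves. A rough sketch (the container name is a placeholder and assumes a container registered with systemd-machined, e.g. one started by systemd-nspawn):

    machinectl list                          # containers/VMs registered with systemd-machined
    machinectl shell webapp01 /bin/bash      # open a shell inside the "webapp01" container
    machinectl shell root@.host              # the su-like special case: a login shell on the host itself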

LVSlushdat ( 854194 ) , Saturday August 29, 2015 @04:09PM ( #50417233 )
Re:What path have we chosen? ( Score: 4 , Informative)

I currently run Ubuntu 14.04, and see where part of systemd has already begun its encroachment on what *had* been a great Linux distro. My only actual full-on experience so far with systemd is trying to get Virtualbox guest additions installed on a CentOS7 vm... I've installed those additions countless times since I started using VBox, and I think I could almost do the install in my sleep.. Not so with CentOS7.. systemd bitches loudly with strange "errors" and when it tells me to use journalctl to see what the error was, there *is* no error.. But still the additions don't install... I'm soooooo NOT looking forward to the next LTS out of Ubuntu, which I'm told will be infested with this systemd crap... Guess its time to dust off the old Slackware DVD and get acquainted with Pat again... GO FUCK YOURSELF, POETTERING.....

rl117 ( 110595 ) writes: < rleigh@[ ]elibre.net ['cod' in gap] > on Saturday August 29, 2015 @04:47PM ( #50417429 ) Homepage
Re:What path have we chosen? ( Score: 4 , Informative)

The main thing I noticed with Ubuntu 15.04 at work is that rather than startup becoming faster and more deterministic as claimed, it's actually slower and randomly fails due to what looks like some race condition, requiring me to reset the machine. So the general experience is "meh", plus annoyance that it's actually degraded the reliability of booting.

I also suffered from the "we won't allow you to boot if your fstab contains an unmountable filesystem". So I reformatted an ext4 filesystem as NTFS to accomplish some work task on Windows; this really shouldn't be a reason to refuse to start up. I know the justification for doing this, and I think it's as bogus as the first time I saw it. I want my systems to boot, not hang up on a technicality because the configuration or system wasn't "perfect" - i.e. a bit of realism and pragmatism rather than absolutist perfectionism, like we used to have when people like me wrote the init scripts.

PPH ( 736903 ) , Saturday August 29, 2015 @01:30PM ( #50416429 )
Fully isolated? ( Score: 5 , Interesting)

I just skimmed TFA (Poettering's ramblings really don't make much sense anyway). By "fully isolated", it sounds like machinectl breaks the audit trail that su has always supported (not being 'fully isolated' by design). Many *NIX systems are configured to prohibit root logins from anything other than the system console. And the reason that su doesn't do a 'full login' either as root or another user is to maintain the audit trail of who (which system user) is actually running what.

Lennart, this UNIX/Linux stuff appears to be way over your head. Sure, it seems neat for lots of gamers who can't be bothered with security and just want all the machine cycles for rendering FPS games. Perhaps you'd be better off playing with an XBox.

alvieboy ( 61292 ) , Saturday August 29, 2015 @02:49PM ( #50416849 ) Homepage
What about sandwiches ? ( Score: 3 )

So, now we have to say "machinectl shell systemd-run do make me a sandwich" ?

Looks way more complicated.

https://xkcd.com/149/ [xkcd.com]

lucm ( 889690 ) , Saturday August 29, 2015 @04:03PM ( #50417185 )
Fountainhead anyone? ( Score: 3 )

This systemd guy is just like Ellsworth Toohey. As long as the sheep follow he'll keep pushing things further and further into idiotland and have a good laugh in the process.

"Kill man's sense of values. Kill his capacity to recognise greatness or to achieve it. Great men can't be ruled. We don't want any great men. Don't deny conception of greatness. Destroy it from within. The great is the rare, the difficult, the exceptional. Set up standards of achievement open to all, to the least, to the most inept – and you stop the impetus to effort in men, great or small. You stop all incentive to improvement, to excellence, to perfection. Laugh at Roark and hold Peter Keating as a great architect. You've destroyed architecture. Build Lois Cook and you've destroyed literature. Hail Ike and you've destroyed the theatre. Glorify Lancelot Clankey and you've destroyed the press. Don't set out to raze all shrines – you'll frighten men, Enshrine mediocrity - and the shrines are razed."

-- Ellsworth Toohey

[Oct 15, 2018] The importance of Devuan by Nico Schottelius

Without strong corporate support it is difficult to develop and maintain a distribution. Red Hat, Oracle, SUSE and Ubuntu are vendors that have some money, which means that now Ubuntu indirectly supports Debian with systemd. So what will happen with Devuan in 10 years is unclear. Hopefully it will survive.
Notable quotes:
"... Let's say every car manufacturer recently discovered a new technology named "doord", which lets you open up car doors much faster than before. It only takes 0.05 seconds, instead of 1.2 seconds on average. So every time you open a door, you are much, much faster! ..."
"... Many of the manufacturers decide to implement doord, because the company providing doord makes it clear that it is beneficial for everyone. And additional to opening doors faster, it also standardises things. How to turn on your car? It is the same now everywhere, it is not necessarily to look for the keyhole anymore. ..."
"... Unfortunately though, sometimes doord does not stop the engine. Or if it is cold outside, it stops the ignition process, because it takes too long. Doord also changes the way how your navigation system works, because that is totally related to opening doors, but leads to some users being unable to navigate, which is accepted as collateral damage. In the end, you at least have faster door opening and a standard way to turn on the car. Oh, and if you are in a traffic jam and have to restart the engine often, it will stop restarting it after several times, because that's not what you are supposed to do. You can open the engine hood and tune that setting though, but it will be reset once you buy a new car. ..."
Dec 10, 2017 | blog.ungleich.ch
Good morning,

my name is Nico Schottelius, I am the CEO of ungleich glarus ltd . It is a beautiful Sunday morning with the house surrounded by a meter of snow. A perfect time to write about the importance of Devuan to us, but also for the future of Linux.

But first, let me put some warning out here: Dear Devuan friends, while I honor your work, I also have to be very honest with you: in theory, you should not have done this. Looking at creating Devuan, which means splitting off from Debian, economically, you caused approximately infinite cost. Additional maintenance to what is being done in Debian already plus the work spent to clean packages of their systemd dependencies PLUS causing headache for everyone else: Should I use Debian? Is it required to use Devuan? What are the advantages of either? Looking at it from a user point of view, you added "a second, almost equal option". That's horrible!

Think of it in real world terms: You are in a supermarket and there is a new variant of a product that you used to buy (let it be razor blades, toilet paper rolls, whiskey, you name it). Instead of instantly buying what you used to buy, you might spend minutes staring at both options, comparing and in the end still being unable to properly choose, because both options are TOO SIMILAR. Yes, dear Devuan community, you have to admit it, you caused this cost for every potential Linux user.

For those who have read until here and actually wonder, why systemd is considered to be a problem, let me give you a real world analogy:

Let's say every car manufacturer recently discovered a new technology named "doord", which lets you open up car doors much faster than before. It only takes 0.05 seconds, instead of 1.2 seconds on average. So every time you open a door, you are much, much faster!

Many of the manufacturers decide to implement doord, because the company providing doord makes it clear that it is beneficial for everyone. And in addition to opening doors faster, it also standardises things. How to turn on your car? It is the same now everywhere; it is no longer necessary to look for the keyhole.

Unfortunately though, sometimes doord does not stop the engine. Or if it is cold outside, it stops the ignition process, because it takes too long. Doord also changes the way how your navigation system works, because that is totally related to opening doors, but leads to some users being unable to navigate, which is accepted as collateral damage. In the end, you at least have faster door opening and a standard way to turn on the car. Oh, and if you are in a traffic jam and have to restart the engine often, it will stop restarting it after several times, because that's not what you are supposed to do. You can open the engine hood and tune that setting though, but it will be reset once you buy a new car.

Some of you might ask yourselves now "Is systemd THAT bad?". And my answer to it is: No. It is even worse. Systemd developers split the community over a tiny detail that decreases stability significantly and increases complexity for not much real value. And this is not theoretical: We tried to build Data Center Light on Debian and Ubuntu, but servers that don't boot, that don't reboot or systemd-resolved that constantly interferes with our core network configuration made it too expensive to run Debian or Ubuntu.
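
For readers who run into the systemd-resolved interference mentioned above, the common workaround is to take it out of the resolver path entirely. A sketch, assuming /etc/resolv.conf on your distribution is a resolved-managed symlink and that the nameserver address below is replaced with your own:

    systemctl disable --now systemd-resolved.service    # stop it and keep it from starting at boot
    rm /etc/resolv.conf                                  # often a symlink into /run/systemd/resolve/
    printf 'nameserver 192.0.2.53\n' > /etc/resolv.conf # example address from the documentation range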

Yes, you read right: too expensive. While I am writing here in flowery words, the reason to use Devuan is hard calculated costs. We are a small team at ungleich and we simply don't have the time to fix problems caused by systemd on a daily basis. This is even without calculating the security risks that come with systemd. Our objective is to create a great, easy-to-use platform for VM hosting, not to walk a tightrope.

So, coming back to the original title of this essay: the importance of Devuan. Yes, the Devuan community creates infinite economic costs, but it is not their fault. Creating Devuan is simply a counteraction to ensure Linux stays stable, which is of high importance for a lot of people.

Yes, you read right: what the Devuan developers are doing is creating stability. Think about it not in terms of a few repeating systemd bugs, or the insecurity caused by a huge, monolithic piece of software running with root privileges. Why do people favor Linux on servers over Windows? It is very easy: people don't use Windows, because it is too complex, too error-prone and not suitable as a stable basis. Read it again. This is exactly what systemd introduces into Linux: error-prone complexity and instability.

With systemd the main advantage to use Linux is obsolete.

So what is the importance of Devuan? It is not only crucial to Linux users, but to everyone who is running servers. Or rockets. Or watches. Or anything that actually depends on a stable operating system.

Thus I would like to urge every reader who made it until here: Do what we do:

Support Devuan.

Support the future with stability.

[Oct 15, 2018] Does Systemd Make Linux Complex, Error-Prone, and Unstable

Oct 14, 2018 | linux.slashdot.org

nightfire-unique ( 253895 ) , Monday December 11, 2017 @12:45AM ( #55713823 )

Problems with Linux that should have been solved ( Score: 5 , Insightful)

Here's a list of actual problems that should have been solved instead of introducing the nightmare of systemd upon the Linux (Debian specifically) world:

My $0.02, as a 25-year Linux admin.

gweihir ( 88907 ) , Monday December 11, 2017 @01:56AM ( #55714055 )
Re:Problems with Linux that should have been solve ( Score: 5 , Insightful)

I disagree on SELinux, not because its interface is well-designed (it is not), but because it is needed for some things.

On the rest, I fully agree. And instead, systemd solves things that were already solved and does it badly. The amount of stupidity in that decision is staggering.

drinkypoo ( 153816 ) writes: < martin.espinoza@gmail.com > on Monday December 11, 2017 @10:03AM ( #55715515 ) Homepage Journal
Re:Problems with Linux that should have been solve ( Score: 5 , Informative)
I really struggle to reconcile the Slashdot view that systemd is total crap and the fact that every major Linux distro has switched to it.

The Linux ecosystem is not sane. Redhat wanted more control of Linux so they pushed systemd. GNOME developers are easily distracted by shiny things (as proof I submit GNOME 3) so they went ahead and made GNOME dependent on it. And then Debian (which most Linux distributions are based upon) adopted systemd because GNOME depended on it.

There were some other excuses, but that's the biggest reason. You can blame Redhat and Debian for this clusterfuck, and really, only a small handful of people in the Debian community are actually responsible for Debian's involvement.

Debian's leaders were split almost down the middle on whether they should go to systemd. This is why major changes should require a 2/3 vote (or more!)

phantomfive ( 622387 ) , Monday December 11, 2017 @05:43AM ( #55714555 ) Journal
Re:Problems with Linux that should have been solve ( Score: 5 , Informative)
Can I ask, why don't you and other admins/devs like you start to contribute to systemd?

Lennart Poettering has specifically said that he will not accept many important kinds of patches, for example he refuses to merge any patch that improves cross-platform compatibility.

And what's the reason, because people on forums are complaining? Because binary log files break the UNIX philosophy?

Here is my analysis of systemd, spread across multiple posts (links towards the bottom) [slashdot.org]. It's poorly written software (the interfaces are bad, you can read through my links for more explanation), and that will only get worse over time if an effort isn't made to isolate it over time. This is basic system architecture.

silentcoder ( 1241496 ) , Monday December 11, 2017 @06:02AM ( #55714599 )
Re:Problems with Linux that should have been solve ( Score: 5 , Insightful)

>To me, the fact that the major distros have adopted systemd is strong evidence that it is probably better

"Better" is a subjective term. Software (and any product really) does not have some absolute measurable utility. It's utility is specific to an audience. The fact that the major distros switch is probably strong evidence that systemd is "better" for distro developers. But the utility it brings them may not apply to all users, or even any particular user.
A big part of the reason people were upset was exactly that - the key reasons distros had for switching was benefits to people building distros which subsequent users would never experience. These should not have trumped the user experience.

All that would still have been fine - we could easily have ended up with a world that had systemd for those who wanted it, and didn't have it for those who didn't want it. Linux systems are supposed to be flexible enough that you can set them up to whatever purpose you desire.

So where the real anger came in was that systemd's obsessive feature-creep made it go into lots and lots of areas that have nothing to do with its supposed purpose (boot process management). In that area its biggest advantages are only useful to people building distributions (who have to maintain thousands of packages and ensure they reliably handle their bootup requirements regardless of what combination of them is actually installed - systemd genuinely did make that easier on them - but no user or admin ever experiences that scenario). But that feature creep itself wasn't even the issue; the issue was that, as it entered into all these unrelated areas (login was the first of many), it broke compatibility with the existing software to do those jobs. This meant that, if you built a system to support systemd, that same system could not use any alternatives. So now, you had to create hard dependencies on systemd to support it at all - for distros to gain those benefits, they had to remove the capacity for anybody to forgo them, or alternatively provide two versions of every package - even ones that never touch the boot process and get no benefit from systemd's changes there.

And the trouble is - in none of those other areas has it offered anything of significant value to anybody. Logind doesn't actually do anything that good old login didn't do anyway, but it's incompatible, so a distro that compiles its packages around logind can't work with anything else. The same goes for replacing the process handler: not only did it not add any new functionality, it broke some existing functionality (granted, in rarer edge cases - but there was no reason for any breakage at all, because these were long-solved problems).

Many years ago, I worked as a unix admin for a company that developed for lots of different target unix systems. As such I had to maintain test environments running all the targets. I had several linux systems running about 5 different distros, I had solaris boxes with every version from 8 onwards (yep, actual Sparcs), I had IBM's running AIX, I even had two original (by then 30 year old) DEC Alphas running Tru64... and I had several HPUX boxes.

At the time, while adminning all these disparate unix environments on a day-to-day basis and learning all their various issues and problems - I came to announce routinely that Solaris pre-version-10 had the worst init system in the world to admin, but the worst Unix in the world was definitely HPUX, because HPUX was the only Unix where I could not, with absolute certainty, know that if I kill -9 a process - that process would definitely be gone. Wiped out of memory and the process table with absolutely no regard for anything else - it's a nuclear option, and it's supposed to work that way - because sometimes that's what you need to keep things running.
SystemD brought to Linux an init system that replicated everything I used to hate about the Solaris 8/9 init system - but what's worse than that, it brought the one breakage that got me to declare HPUX the absolute worst unix system in history: it made kill -9 less than one hundred percent, absolutely, infallibly reliable (nothing less than perfect is good enough - because perfect HAS been achieved there; in fact, outside of HPUX and SystemD, no other Unix system has ever had anything LESS than absolute perfection on this one).

I absolutely despise it. And yet I'm running systemd systems - both professionally and at home, because I'm a grown man now, I have other responsibilities, I don't want to spend all my time working, and even my home playing-with-the-computer time is limited, so I want to focus on interesting stuff - there is simply not enough time for the amount of effort required to use a non-niche distro. I don't have the time to custom-build all the software the small distros simply don't have packages for, and to deal with the integration issues of not using proper distro-built-and-tested packages.
I live with systemd. I tolerate it. It's not an unsurvivable trainsmash -but I still hate it. It still makes my life harder than it used to be.
It makes my job more difficult and time-consuming. It makes my personal ventures more complicated and annoying. It adds no value whatsoever to my life (seriously - who reboots a Linux system often enough to CARE about boot time? You only DO that if you have a security patch for the kernel or glibc - anything else is a soft restart). It just adds hassle and extra effort... the best thing I can say about it is that it adds LESS extra effort than avoiding it does, but that's not because it's superior in any way - it's because it's taken over every distro with a decent-sized package repository that isn't "built by hand" like arch or gentoo.
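On the kill -9 point above: one common reason a killed service "comes back" under systemd is simply the unit's Restart= setting, so the signal still works but the respawn hides it. A minimal sketch (the unit name and daemon are made up):

# /etc/systemd/system/exampled.service  (hypothetical unit)
[Service]
ExecStart=/usr/local/bin/exampled
Restart=always      # systemd respawns the process even after a SIGKILL
RestartSec=1

# kill -9 still destroys the process, but a second later it is back:
#   kill -9 $(pidof exampled)
# the reliable way to keep it down is to stop the unit instead:
#   systemctl stop exampled.service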

silentcoder ( 1241496 ) , Monday December 11, 2017 @08:49AM ( #55715147 )
Re: Problems with Linux that should have been solv ( Score: 4 , Insightful)

Tough question. Depends what that functionality is. Compatibility is valuable but sometimes it must be sacrificed to deal with technical debt or make genuine progress. Even Microsoft had a huge compatibility break with Vista which was needed at the time (even if Vista itself was atrocious).
It would depend what those features were, what benefits it gave me. It would be a trade off and should be evaluated as such. A major sacrifice requires an even more major advantage to be worthwhile. I've yet to see any such advantage from anything systemd has added. I'm not saying advantages don't exist, I'm saying whatever they may be they do not benefit me, personally, in any measurable way. The disadvantages however do, and compatibility is the least of them.
Config outside /etc is a major deal - it utterly breaks a standard around which disk space allocation is done professionally. /usr ought not even need backups, because everything there is supposed to be installed and never hand-edited. It means modifying backup strategy, which is a big, very risky change. Logs aren't where I expect them. Boot errors flash on screen and disappear before you can read them, so you have to remember to go look in the binary log to figure out if it was something serious.

I was never a fan of System V. It was a complicated, slow mess of code duplication. It needed a replacement. I was championing Richard Gooch's make-init circa 2001 (and his devfs, the forerunner to udev, was in my kernels - I built a powerful hardware autoconfig system on it in 2005 when I built the first installable live-CD distribution, the way they all work now: I invented it [I later discovered that PCLinuxOS had invented the same thing independently at the same time, but Ubuntu, for example, still came on two disks, a live CD and a separate text-based installation disk, and more than once I had machines where the live CD ran great but the installed system broke due to disparate hardware setup systems]). Later I praised upstart - it was a fantastic init system that solved the issues with System V, retained compatibility, yet was easy to admin, standards- and philosophy-compliant, and fast. It was even parallel.

That is the system that should have won the init wars. I'm not a huge fan of Ubuntu's eclectic side - Unity has always been a fugly, unusable mess of a desktop to me - but upstart was great; that and PPAs are Ubuntu's two most amazing accomplishments. Sadly one got lost instead of becoming the world-changing tech it deserved to be, and it lost to a wholly inferior technology for no sane reason.

It's the Amiga of the Linux world.
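On the "boot errors flash on screen and disappear" complaint a few paragraphs up: if you are stuck with the binary journal anyway, the messages can usually be pulled back out. A few hedged examples (standard journalctl options; adjust to taste):

journalctl -b -p err                 # errors and worse from the current boot
journalctl -b -1 -p warning          # previous boot (needs a persistent journal)
journalctl -u sshd --since today     # one unit's messages, plain-text style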

fisted ( 2295862 ) , Monday December 11, 2017 @06:11AM ( #55714613 )
Re:Problems with Linux that should have been solve ( Score: 3 )
To me, the fact that the major distros have adopted systemd is strong evidence that it is probably better.

Raises the question, better for whom? Systemd seems to make some things easier for distro maintainers, at the cost of fucking shit up for users and admins.

That said, Debian's vote on the matter was essentially 50:50, and they're going to keep supporting SysV init. Most distros are descendants of Debian, so there's that. Redhat switched for obvious reasons (having the main systemd developer on their payroll and massively profiting from increased support demands).

With Debian and Redhat removed, what remains on the list of major distros [futurist.se]?

Yeah.. strong evidence...

lkcl ( 517947 ) writes: < lkcl@lkcl.net > on Monday December 11, 2017 @01:01AM ( #55713891 ) Homepage
faster boot time as well ( Score: 5 , Interesting)

it turns out that, on arm embedded systems at the very least - where context switching is a little slower, and booting off of microSD cards amplifies any performance issues associated with drive reads/writes compared to an SSD or HDD - sysvinit easily outperforms systemd for boot times.

Anonymous Coward , Monday December 11, 2017 @01:04AM ( #55713901 )
It violates fundamental Unix principles ( Score: 5 , Insightful)

Do one thing, and do it well. Systemd has eaten init, udev, inetd, syslog and soon dhcpd. Yes, that is getting ridiculous.

fahrbot-bot ( 874524 ) , Monday December 11, 2017 @01:10AM ( #55713923 )
It's the implementation. ( Score: 5 , Insightful)

I don't think there's a problem with the idea of systemd. Having a standard way to handle process start-up, dependencies, failures, recovery, "contracts", etc. isn't a bad, or unique, thing -- Solaris has SMF (the Service Management Facility), for example. I think there are just too many things unnecessarily built into systemd rather than it utilizing external, usually already existing, utilities. Does systemd really need, for example, NFS, DNS, NTP services built in? Why can't it run as PID 2 and leave PID 1 for init to simply reap orphaned processes? That would make it easier to restart or handle a failed systemd without rebooting the entire system (or so I've read).

In short, systemd has too many things stuffed into its kitchen sink -- if you want that, use Emacs :-)
[ Note, I'm a fan and long-time user of Emacs, so the joke's in good fun. ]

[Oct 14, 2018] Does Systemd Make Linux Complex, Error-Prone, and Unstable

Or in other words, a simple, reliable and clear solution (which has some faults due to its age) was replaced with a gigantic KISS violation. No engineer worth the name will ever do that. And if it needs doing, any good engineer will make damned sure to achieve maximum compatibility and a clean way back. The systemd people seem to be hell-bent on making it as hard as possible to not use their monster. That alone is a good reason to stay away from it.
Oct 14, 2018 | linux.slashdot.org

Reverend Green ( 4973045 ) , Monday December 11, 2017 @04:48AM ( #55714431 )

Re: Does systemd make ... ( Score: 5 , Funny)

Systemd is nothing but a thinly-veiled plot by Vladimir Putin and Beyonce to import illegal German Nazi immigrants over the border from Mexico who will then corner the market in kimchi and implement Sharia law!!!

Anonymous Coward , Monday December 11, 2017 @01:38AM ( #55714015 )

Re:It violates fundamental Unix principles ( Score: 4 , Funny)

The Emacs of the 2010s.

DontBeAMoran ( 4843879 ) , Monday December 11, 2017 @01:57AM ( #55714059 )
Re:It violates fundamental Unix principles ( Score: 5 , Funny)

We are systemd. Lower your memory locks and surrender your processes. We will add your calls and code distinctiveness to our own. Your functions will adapt to service us. Resistance is futile.

serviscope_minor ( 664417 ) , Monday December 11, 2017 @04:47AM ( #55714427 ) Journal
Re:It violates fundamental Unix principles ( Score: 4 , Insightful)

I think we should call systemd the Master Control Program, since it seems to like making other programs' functions its own.

Anonymous Coward , Monday December 11, 2017 @01:47AM ( #55714035 )
Don't go hating on systemd ( Score: 5 , Funny)

RHEL7 is a fine OS, the only thing it's missing is a really good init system.

[Oct 14, 2018] The problem isn't so much new tools as new tools that suck

Systemd looks OK until you get into major troubles and start troubleshooting. After that you are ready to kill systemd developers and blow up Red Hat headquarters ;-)
Notable quotes:
"... Crap tools written by morons with huge egos and rather mediocre skills. Happens time and again an the only sane answer to these people is "no". Good new tools also do not have to be pushed on anybody, they can compete on merit. As soon as there is pressure to use something new though, you can be sure it is inferior. ..."
Oct 14, 2018 | linux.slashdot.org

drinkypoo ( 153816 ) writes: < martin.espinoza@gmail.com > on Sunday May 27, 2018 @11:14AM ( #56683018 ) Homepage Journal

Re:That would break scripts which use the UI ( Score: 5 , Informative)
In general, it's better for application programs, including scripts, to use an application programming interface (API) such as /proc, rather than a user interface such as ifconfig, but in reality tons of scripts do use ifconfig and such.

...and they have no other choice, and shell scripting is a central feature of UNIX.

The problem isn't so much new tools as new tools that suck. If I just type ifconfig it will show me the state of all the active interfaces on the system. If I type ifconfig interface I get back pretty much everything I want to know about it. If I want to get the same data back with the ip tool, not only can't I, but I have to type multiple commands, with far more complex arguments.

The problem isn't new tools. It's crap tools.
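For what it's worth, rough equivalents between the two tool sets (a cheat sheet from memory rather than gospel; the new commands rarely reproduce the old output format exactly):

ifconfig              ->  ip addr show        (short form: ip a)
ifconfig eth0         ->  ip addr show dev eth0
ifconfig eth0 up      ->  ip link set eth0 up
route -n              ->  ip route show
netstat -rn           ->  ip route show
netstat -tlnp         ->  ss -tlnp
arp -a                ->  ip neigh show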

gweihir ( 88907 ) , Sunday May 27, 2018 @12:22PM ( #56683440 )
Re:That would break scripts which use the UI ( Score: 5 , Insightful)
The problem isn't new tools. It's crap tools.

Crap tools written by morons with huge egos and rather mediocre skills. Happens time and again, and the only sane answer to these people is "no". Good new tools also do not have to be pushed on anybody, they can compete on merit. As soon as there is pressure to use something new, though, you can be sure it is inferior.

Anonymous Coward , Sunday May 27, 2018 @02:00PM ( #56684068 )
Re:That would break scripts which use the UI ( Score: 5 , Interesting)
The problem isn't new tools. It's crap tools.

The problem isn't new tools. It's not even crap tools. It's the mindset that we need to get rid of an ~70KB netstat, ~120KB ifconfig, etc. Like others have posted, this has more to do with the ego of the new tools' creators and/or their supporters, who see the old tools as some sort of competition. Well, that's the real problem, then, isn't it? They don't want to have to face competition, and their tools aren't so vastly superior for the user as to justify switching completely, so they must force the issue.

Now, it'd be different if this was 5 years down the road, netstat wasn't being maintained*, and most scripts/dependents had already been converted over. At that point there'd be a good, serious reason to consider removing an outdated package. That's obviously not the debate, though.

* Vs developed. If seven year old stable tools are sufficiently bug free that no further work is necessary, that's a good thing.

locofungus ( 179280 ) , Sunday May 27, 2018 @02:46PM ( #56684296 )
Re:That would break scripts which use the UI ( Score: 4 , Informative)
If I type ifconfig interface I get back pretty much everything I want to know about it

How do you tell in ifconfig output which addresses are deprecated? When I run ifconfig eth0.100 it lists 8 global addresses. I can deduce that the one with fffe in the middle is the permanent address, but I have no idea which address it will use for outgoing connections.

ip addr show dev eth0.100 tells me what I need to know. And it's only a few more keystrokes to type.

Anonymous Coward , Sunday May 27, 2018 @11:13AM ( #56683016 )
Re:So ( Score: 5 , Insightful)

Following the systemd model, "if it aint broken, you're not trying hard enough"...

Anonymous Coward , Sunday May 27, 2018 @11:35AM ( #56683144 )
That's the reason ( Score: 5 , Interesting)

It did one thing: maintain the routing table.

"ip" (and "ip2" and whatever that other candidate not-so-better not-so-replacement of ifconfig was) all have the same problem: They try to be the one tool that does everything "ip". That's "assign ip address somewhere", "route the table", and all that. But that means you still need a complete zoo of other tools, like brconfig, iwconfig/iw/whatever-this-week.

In other words, it's a modeling difference. On sane systems, ifconfig _configures the interface_, for all protocols and hardware features, bridges, vlans, what-have-you. And then route _configures the routing table_. On linux... the poor kids didn't understand what they were doing, couldn't fix their broken ifconfig to save their lives, and so went off to reinvent the wheel, badly, a couple times over.

And I say the blogposter is just as much an idiot.

Per various people, netstat et al operate by reading various files in /proc, and doing this is not the most efficient thing in the world

So don't use it. That does not mean you gotta change the user interface too. Sheesh.

However, the deeper issue is the interface that netstat, ifconfig, and company present to users.

No, that interface is a close match to the hardware. Here is an interface, IOW something that connects to a radio or a wire, and you can make it ready to talk IP (or back when, IPX, appletalk, and whatever other networks your system supported). That makes those tools hardware-centric. At least on sane systems. It's when you want to pretend shit that it all goes awry. And boy, does linux like to pretend. The linux ifconfig-replacements are IP-only-stack-centric. Which causes problems.

For example because that only does half the job and you still need the aforementioned zoo of helper utilities that do things you can have ifconfig do if your system is halfway sane. Which linux isn't, it's just completely confused. As is this blogposter.

On the other hand, the users expect netstat, ifconfig and so on to have their traditional interface (in terms of output, command line arguments, and so on); any number of scripts and tools fish things out of ifconfig output, for example.

linux' ifconfig always was enormously shitty here. It outputs lots of stuff I expect to find through netstat and it doesn't output stuff I expect to find out through ifconfig. That's linux, and that is NOT "traditional" compared to, say, the *BSDs.

As the Linux kernel has changed how it does networking, this has presented things like ifconfig with a deep conflict; their traditional output is no longer necessarily an accurate representation of reality.

Was it ever? linux is the great pretender here.

But then, "linux" embraced the idiocy oozing out of poettering-land. Everything out of there so far has caused me problems that were best resolved by getting rid of that crap code. Point in case: "Network-Manager". Another attempt at "replacing ifconfig" with something that causes problems and solves very few.

locofungus ( 179280 ) , Sunday May 27, 2018 @03:27PM ( #56684516 )
Re:That's the reason ( Score: 4 , Insightful)
It did one thing: maintain the routing table.

Should the ip rule stuff be part of route or a separate command?

There are things that could be better with ip. IIRC it's very fussy about where the table selector goes in the argument list but route doesn't support this at all.

I also don't think route has anything like 'nexthop dev $if' which is a godsend for ipv6 configuration.

I stayed with route for years. But ipv6 exposed how incomplete the tool is - and clearly nobody cares enough to add all the missing functionality.

Perhaps ip addr, ip route, ip rule, ip mroute, ip link should be separate commands. I've never looked at the sourcecode to see whether it's mostly common or mostly separate.
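A couple of the features mentioned above, as hedged examples (syntax from memory of current iproute2; addresses are documentation prefixes):

# default route via a link-local gateway - handy for IPv6, awkward with route(8):
ip -6 route add default via fe80::1 dev eth0

# the table selector; route(8) has no equivalent for alternate routing tables:
ip route add 192.0.2.0/24 via 198.51.100.1 table 100
ip route show table 100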

Anonymous Coward writes:
Re: That's the reason ( Score: 3 , Informative)

^this^

The people who think the old tools work fine don't understand all the advanced networking concepts that are only possible with the new tools: interfaces can have multiple IPs, one IP can be assigned to multiple interfaces, there's more than one routing table, firewall rules can add metadata to packets that affects routing, etc. These features can't be accommodated by the old tools without breaking compatibility.
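A hedged sketch of the "firewall rules add metadata that affects routing" case, which the old route/ifconfig pair has no vocabulary for at all (addresses and the table number are made up):

# mark outgoing SMTP packets in netfilter...
iptables -t mangle -A OUTPUT -p tcp --dport 25 -j MARK --set-mark 1

# ...and route marked traffic through an alternate table
ip rule add fwmark 1 table 100
ip route add default via 203.0.113.1 dev eth1 table 100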

DamnOregonian ( 963763 ) , Sunday May 27, 2018 @09:11PM ( #56686032 )
Re:That's the reason ( Score: 3 )
Someone cared enough to implement an entirely different tool to do the same old jobs plus some new stuff, it's too bad they didn't do the sane thing and add that functionality to the old tool where it would have made sense.

It's not that simple. The iproute2 suite wasn't written to *replace* anything.
It was written to provide a user interface to the rapidly expanding RTNL API.
The net-tools maintainers (or anyone who cared) could have started porting it if they liked. They didn't. iproute2 kept growing to provide access to all the new RTNL interfaces, while net-tools got farther and farther behind.
What happened was organic. If someone brought net-tools up to date tomorrow and everyone liked the interface, iproute2 would be dead in its tracks. As it sits, myself and most of the more advanced system and network engineers I know have been using iproute2 for just over a decade now (really, since the point where ifconfig became an incomplete and poorly simplified way to manage the networking stack).

DamnOregonian ( 963763 ) , Monday May 28, 2018 @02:26AM ( #56686960 )
Re:That's the reason ( Score: 4 , Informative)

Nope. Kernel authors come up with a fancy new netlink interface for better interaction with the kernel's network stack. They don't give two squirts of piss whether or not a user-space interface exists for it yet. Some guy decides to write an interface to it. Initially, it only supports things like modifying the routing rule database (something that can't be done with route), and he is trying to make an implementation of this protocol, not trying to hack it into software that already has its own framework using different APIs.
This source was always freely available for the net-tools guys to take and add to their own software.
Instead, we get this. [sourceforge.net]
Nobody is giving a positive spin. This is simply how it happened. This is what happens when software isn't maintained, and you don't get to tell other people to maintain it. You're free, right now, today, to port the iproute2 functionality into net-tools. They're unwilling to, however. That's their right. It's also the right of other people to either fork it, or move to more functional software. It's your right to help influence that. Or bitch on slashdot. That probably helps, too.

TeknoHog ( 164938 ) writes:
Re: ( Score: 2 )
keep the command names the same but rewrite how they function?

Well, keep the syntax too, so old scripts would still work. The old command name could just be a script that calls the new commands under the hood. (Perhaps this is just what you meant, but I thought I'd elaborate.)
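Something along those lines is easy to sketch. A toy shim, purely illustrative (it covers only the two most common invocations; real scripts depend on far more of ifconfig's output format):

#!/bin/sh
# /usr/local/bin/ifconfig -- toy wrapper that forwards to ip(8)
if [ $# -eq 0 ]; then
    exec ip -o addr show            # all interfaces, one line each
else
    exec ip addr show dev "$1"      # a single interface
fi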

gweihir ( 88907 ) , Sunday May 27, 2018 @12:18PM ( #56683412 )
Re:So ( Score: 4 , Insightful)
What was the reason for replacing "route" anyhow? It's worked for decades and done one thing.

Idiots that confuse "new" with better and want to put their mark on things. Because they are so much greater than the people that got the things to work originally, right? Same as the systemd crowd. Sometimes, they realize decades later they were stupid, but only after having done a lot of damage for a long time.

TheRaven64 ( 641858 ) writes:
Re: ( Score: 2 )

I didn't RTFA (this is Slashdot, after all) but from TFS it sounds like exactly the reason I moved to FreeBSD in the first place: the Linux attitude of 'our implementation is broken, let's completely change the interface'. ALSA replacing OSS was the instance of this that pushed me away. On Linux, back around 2002, I had some KDE and some GNOME apps that talked to their respective sound daemon, and some things like XMMS and BZFlag that used /dev/dsp directly. Unfortunately, Linux decided to only support s

zippthorne ( 748122 ) writes:
Re: ( Score: 3 )

On the other hand, on most systems, vi is basically an alias to vim....

goombah99 ( 560566 ) , Sunday May 27, 2018 @11:08AM ( #56682986 )
Bad idea ( Score: 5 , Insightful)

Unix was founded on the idea of lots of simple command line tools that do one job well and don't depend on system idiosyncrasies. If you make the tool have to know the lower layers of the system to exploit them, then you break the encapsulation. Polling /proc has worked across eons of Linux flavors without breaking. When you make everything integrated, it creates paralysis to change down the road for backward compatibility's sake. A small speed gain now for massive fragility and no portability later.

goombah99 ( 560566 ) writes:
Re: ( Score: 3 )

GNU may not be Unix, but its foundational idea lies in the simple command tool paradigm. It's why GNU was so popular, and it's why people even think that Linux is Unix. That idea is the character of Linux. If you want a marvelously smooth, efficient, consistent, integrated system that, after a decade of revisions, feels like a knotted tangle of twine in your junk drawer, try Windows.

llamalad ( 12917 ) , Sunday May 27, 2018 @11:46AM ( #56683198 )
Re:Bad idea ( Score: 5 , Insightful)

The error you're making is thinking that Linux is UNIX.

It's not. It's merely UNIX-like. And with first SystemD and now this nonsense, it's rapidly becoming less UNIX-like. The Windows of the UNIX(ish) world.

Happily, the BSDs seem to be staying true to their UNIX roots.

petes_PoV ( 912422 ) , Sunday May 27, 2018 @12:01PM ( #56683282 )
The dislike of support work ( Score: 5 , Interesting)
In theory netstat, ifconfig, and company could be rewritten to use netlink too; in practice this doesn't seem to have happened and there may be political issues involving different groups of developers with different opinions on which way to go.

No, it is far simpler than looking for some mythical "political" issues. It is simply that hackers - especially amateur ones, who write code as a hobby - dislike trying to work out how old stuff works. They like writing new stuff, instead.

Partly this is because of the poor documentation: explanations of why things work, what other code was tried but didn't work out, the reasons for weird-looking constructs, techniques and the history behind patches. It could even be that many programmers are wedded to a particular development environment and lack the skill and experience (or find it beyond their capacity) to do things in ways that are alien to it. I feel that another big part is that merely rewriting old code does not allow for the " look how clever I am " element that is present in fresh, new, software. That seems to be a big part of the amateur hacker's effort-reward equation.

One thing that is imperative, however, is to keep backwards compatibility, so that the same options continue to work and provide the same content and format. Possibly Unix/Linux's only remaining advantage over Windows for sysadmins is its scripting. If that were lost, there would be little point keeping it around.

DamnOregonian ( 963763 ) , Sunday May 27, 2018 @05:13PM ( #56685074 )
Re:The dislike of support work ( Score: 5 , Insightful)

iproute2 exists because ifconfig, netstat, and route do not support the full capabilities of the linux network stack.
This is because today's network stack is far more complicated than it was in the past. For very simple networks, the old tools work fine. For complicated ones, you must use the new ones.

Your post could not be any more wrong. Your moderation amazes me. It seems that slashdot is full of people who are mostly amateurs.
iproute2 has been the main network management suite for linux amongst higher end sysadmins for a decade. It wasn't written to sate someone's desire to change for the sake of change, to make more complicated, to NIH. It was written because the old tools can't encompass new functionality without being rewritten themselves.

Craig Cruden ( 3592465 ) , Sunday May 27, 2018 @12:11PM ( #56683352 )
So windowification (making it incompatible) ( Score: 5 , Interesting)

So basically there is a proposal to dump existing terminal utilities that are cross-platform and create custom Linux utilities - then get rid of the existing functionality? That would be moronic! I already go nuts remoting into a windows platform and then an AIX and Linux platform and having different command line utilities / directory separators / etc. Adding yet another difference between my Linux and macOS/AIX terminals would absolutely drive me bonkers!

I have no problem with updating or rewriting or adding functionalities to existing utilities (for all 'nix platforms), but creating a yet another incompatible platform would be crazily annoying.

(not a sys admin, just a dev who has to deal with multiple different server platforms)

Anonymous Coward , Sunday May 27, 2018 @12:16PM ( #56683388 )
Output for 'ip' is machine readable, not human ( Score: 5 , Interesting)

All output for 'ip' is machine readable, not human.
Compare
$ ip route
to
$ route -n

Which is more readable? Fuckers.

Same for
$ ip a
and
$ ifconfig
Which is more readable? Fuckers.

The new commands should, by default, generally produce the same output as the old ones, using the same options, with additional options to get new behavior. -m is commonly used to get "machine readable" output. Fuckers.

It is like the systemd interface fuckers took hold of everything. Fuckers.

BTW, I'm a happy person almost always, but change for the sake of change is fucking stupid.

Want to talk about resolv.conf, anyone? Fuckers! Easier just to purge that shit.
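For what it's worth, later iproute2 releases did grow a terser, more human-friendly output mode (available in reasonably recent versions; check yours before relying on it):

ip -br addr      # one line per interface: name, state, addresses
ip -br link      # same, for link-layer information
ip -c route      # colorized routing table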

SigmundFloyd ( 994648 ) , Sunday May 27, 2018 @12:39PM ( #56683558 )
Linux' userland is UNSTABLE ! ( Score: 3 )

I'm growing increasingly annoyed with Linux' userland instability. Seriously considering a switch to NetBSD because I'm SICK of having to learn new ways of doing old things.

For those who are advocating the new tools as additions rather than replacements: Remember that this will lead to some scripts expecting the new tools and some other scripts expecting the old tools. You'll need to keep both flavors installed to do ONE thing. I don't know about you, but I HATE to waste disk space on redundant crap.

fluffernutter ( 1411889 ) , Sunday May 27, 2018 @12:46PM ( #56683592 )
Piss and vinigar ( Score: 5 , Interesting)

What pisses me off is when I go to run ifconfig and it isn't there, and then I Google on it and there doesn't seem to be *any* direct substitute that gives me the same information. If you want to change the command then fine, but allow the same output from the new commands. Furthermore, another bitch I have is most systemd installations don't have an easy substitute for /etc/rc.local.
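On the /etc/rc.local point: many systemd distributions still ship a compatibility unit (rc-local.service) that runs /etc/rc.local when the file exists and is executable; where it is missing, a small oneshot unit does the same job. A hedged sketch (the unit file name is illustrative):

# /etc/systemd/system/local-startup.service  (hypothetical name)
[Unit]
Description=Local startup commands (rc.local replacement)
After=network.target

[Service]
Type=oneshot
ExecStart=/etc/rc.local
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

# then:  chmod +x /etc/rc.local  &&  systemctl enable --now local-startup.service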

what about ( 730877 ) , Sunday May 27, 2018 @01:35PM ( #56683874 ) Homepage
Let's try hard to break Linux ( Score: 3 , Insightful)

It does not make any sense that some people spend time and money replacing what is currently working with some incompatible crap.

Therefore, the only logical alternative is that they are paid (in some way) to break what is working.

Also, if you rewrite tons of systems tools you have plenty of opportunities to insert useful bugs that can be used by the various spying agencies.

You do not think that the current CPU flaws are just by chance, right? Imagine the wonder of being able to spy on any machine, regardless of the level of SW protection.

There is no need to point out that I cannot prove it, I know; it just makes sense to me.

Kjella ( 173770 ) writes:
Re: ( Score: 3 )
It does not make any sense that some people spend time and money replacing what is currently working with some incompatible crap. (...) There is no need to point out that I cannot prove it, I know; it just makes sense to me.

Many developers fix problems like a guy about to lose a two week vacation because he can't find his passport. Rip open every drawer, empty every shelf, spread it all across the tables and floors until you find it, then rush out the door leaving everything in a mess. It solved HIS problem.

WaffleMonster ( 969671 ) , Sunday May 27, 2018 @01:52PM ( #56684010 )
Changes for changes sake ( Score: 4 , Informative)

TFA is full of shit.

IP aliases have always and still do appear in ifconfig as separate logical interfaces.

The assertion that ifconfig only displays one IP address per interface is also demonstrably false.

Using these false bits of information to advocate for change seems rather ridiculous.

One change I would love to see... "ping" bundled with most Linux distros doesn't support IPv6. You have to call an IPv6-specific analogue, which is unworkable. Knowing the address family in advance is not a reasonable expectation, and it works contrary to how all other IPv6-capable software a user would actually run works.

Heck, for a while traceroute supported both address families. The one by Olaf Kirch did, eons ago; then someone decided "not invented here" and replaced it with one that works like ping6, where you have to call traceroute6 if you want v6.

It seems nobody spends time fixing broken shit anymore... they just spend their time finding new ways to piss me off. Now I have to type journalctl and wait for hell to freeze over just to liberate log data I previously could access nearly instantaneously. It almost feels like Microsoft's Event Viewer now.

DamnOregonian ( 963763 ) , Sunday May 27, 2018 @05:30PM ( #56685130 )
Re:Changes for changes sake ( Score: 4 , Insightful)
TFA is full of shit. IP aliases have always and still do appear in ifconfig as separate logical interfaces.

No, you're just ignorant.
Aliases do not appear in ifconfig as separate logical interfaces.
Logical interfaces appear in ifconfig as logical interfaces.
Logical interfaces are one way to add an alias to an interface. A crude way, but a way.

The assertion ifconfig only displays one IP address per interface also demonstrably false.

Nope. Again, you're just ignorant.

root@swalker-samtop:~# tunctl
Set 'tap0' persistent and owned by uid 0
root@swalker-samtop:~# ifconfig tap0 10.10.10.1 netmask 255.255.255.0 up
root@swalker-samtop:~# ip addr add 10.10.10.2/24 dev tap0
root@swalker-samtop:~# ifconfig tap0:0 10.10.10.3 netmask 255.255.255.0 up
root@swalker-samtop:~# ip addr add 10.10.1.1/24 scope link dev tap0:0
root@swalker-samtop:~# ifconfig tap0 | grep inet
inet 10.10.1.1 netmask 255.255.255.0 broadcast 0.0.0.0
root@swalker-samtop:~# ifconfig tap0:0 | grep inet
inet 10.10.10.3 netmask 255.255.255.0 broadcast 10.10.10.255
root@swalker-samtop:~# ip addr show dev tap0 | grep inet
inet 10.10.1.1/24 scope link tap0
inet 10.10.10.1/24 brd 10.10.10.255 scope global tap0
inet 10.10.10.2/24 scope global secondary tap0
inet 10.10.10.3/24 brd 10.10.10.255 scope global secondary tap0:0

If you don't understand what the differences are, you really aren't qualified to opine on the matter.
Ifconfig is fundamentally incapable of displaying the amount of information that can go with layer-3 addresses, interfaces, and the architecture of the stack in general. This is why iproute2 exists.

JustNiz ( 692889 ) , Sunday May 27, 2018 @01:55PM ( #56684030 )
I propose a new word: ( Score: 5 , Funny)

SysD: (v). To force an unnecessary replacement of something that already works well with an alternative that the majority perceive as fundamentally worse.
Example usage: Wow you really SysD'd that up.

[Oct 14, 2018] Ever tried to edit systemd files? Depending on systemd version you have to create overrides, modify symlinks or edit systemd files straight up which can be in about 5 different locations and on top of that, systemd can have overrides on any changes either with an update or just inherited.

Notable quotes:
"... A big ol ball? My init.d was about 13 scripts big which were readable and editable. Ever tried to edit systemd files? Depending on systemd version you have to create overrides, modify symlinks or edit systemd files straight up which can be in about 5 different locations and on top of that, systemd can have overrides on any changes either with an update or just inherited. ..."
"... Remove/fail a hard drive and your system will boot into single user mode ..."
"... because it was in fstab and apparently everything in fstab is a hard dependency on systemd. ..."
"... So the short answer is: Yes, systemd makes things unnecessarily complex with little benefit. ..."
"... Troubleshooting is really a bitch with systemd, much more time-consuming. For instance, often systemctl reports a daemon as failed while it's not, or suddenly decides that it didn't start because of some mysterious arbitrary timeout while the daemon just needs some time to run a maintenance tasks at startup time. And getting anything of value out of the log is a pain in the ass. ..."
"... Granted, I have never needed any kind of tampering or corruption mitigation in my log files over the last 20 years of Linux administration. So the value for at least my usage of journalctl has been sum negative because I don't see the value in a command that by default truncates log output. ..."
"... So the answer for systemd is to workaround it by using a "legacy" service to restore decades of functionality. ..."
Oct 14, 2018 | linux.slashdot.org

guruevi ( 827432 ) writes: < evi&evcircuits,com > on Monday December 11, 2017 @12:46AM ( #55713829 ) Homepage

Re: Ah yes the secret to simplicity ( Score: 5 , Insightful)

A big ol ball? My init.d was about 13 scripts big which were readable and editable. Ever tried to edit systemd files? Depending on systemd version you have to create overrides, modify symlinks or edit systemd files straight up which can be in about 5 different locations and on top of that, systemd can have overrides on any changes either with an update or just inherited.
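For reference, on reasonably current systemd the intended workflow is drop-in overrides rather than editing shipped unit files; roughly (the service name is just an example):

systemctl edit httpd.service        # creates /etc/systemd/system/httpd.service.d/override.conf
                                    # containing only the directives you change, e.g.:
                                    #   [Service]
                                    #   Restart=always
systemctl edit --full httpd.service # copies the whole unit into /etc/systemd/system for editing
systemctl cat httpd.service         # shows the unit plus every drop-in that applies to it
systemctl daemon-reload             # needed after editing unit files by hand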

Systemd makes every system into a dependency mess.

Remove/fail a hard drive and your system will boot into single user mode - not even remote access will be available, so you better be near the machine - just because it was in fstab, and apparently everything in fstab is a hard dependency on systemd.
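The fstab behavior described above can usually be softened per mount, since systemd treats fstab entries as hard dependencies unless told otherwise. A hedged example (the UUID is made up):

# /etc/fstab -- mark a non-essential disk so its absence does not stop the boot
UUID=3c0e3f4b-0000-4b2a-9d5e-example  /data  xfs  defaults,nofail,x-systemd.device-timeout=10s  0 2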

Z00L00K ( 682162 ) , Monday December 11, 2017 @12:52AM ( #55713851 ) Homepage
Re: Ah yes the secret to simplicity ( Score: 5 , Insightful)

So the short answer is: Yes, systemd makes things unnecessarily complex with little benefit.

That matches my experience - losing a lot of time trying to figure out why things don't work. The improved boot time is lost several times over.

lucm ( 889690 ) , Monday December 11, 2017 @01:33AM ( #55713995 )
Re: Ah yes the secret to simplicity ( Score: 5 , Informative)
So the short answer is: Yes, systemd makes things unnecessarily complex with little benefit.

That matches my experience - losing a lot of time trying to figure out why things don't work. The improved boot time is lost several times over.

I completely agree. Troubleshooting is really a bitch with systemd, much more time-consuming. For instance, often systemctl reports a daemon as failed while it's not, or suddenly decides that it didn't start because of some mysterious arbitrary timeout while the daemon just needs some time to run maintenance tasks at startup time. And getting anything of value out of the log is a pain in the ass.

Quite often I end up writing control shell scripts specifically to be called by systemd, because this junkware is too fragile and capricious to work with actual daemons. That says a lot about the overall usefulness of systemd.

Nothing has been gained with systemd, at least not on servers.
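The spurious "failed because of a timeout" case described above is often just the default start timeout expiring before a slow daemon finishes its startup housekeeping, and it can be relaxed per unit. A hedged example (the value and service name are illustrative):

# /etc/systemd/system/slowdaemon.service.d/override.conf
[Service]
TimeoutStartSec=300      # allow 5 minutes instead of the ~90 second default
# For forking daemons, also check that Type= (simple/forking/notify) matches how
# the daemon actually behaves, or systemd will misjudge when it is "started".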

93 Escort Wagon ( 326346 ) , Monday December 11, 2017 @01:39AM ( #55714017 )
Re: Ah yes the secret to simplicity ( Score: 5 , Informative)
Troubleshooting is really a bitch with systemd, much more time-consuming. For instance, often systemctl reports a daemon as failed while it's not, or suddenly decides that it didn't start because of some mysterious arbitrary timeout while the daemon just needs some time to run maintenance tasks at startup time.

Not to mention that the damn logs are not plain text, which in itself complicates things before you even have the chance to start troubleshooting.

merky1 ( 83978 ) , Monday December 11, 2017 @07:41AM ( #55714833 ) Journal
Re: Ah yes the secret to simplicity ( Score: 5 , Insightful)

Granted, I have never needed any kind of tampering or corruption mitigation in my log files over the last 20 years of Linux administration. So the value for at least my usage of journalctl has been sum negative because I don't see the value in a command that by default truncates log output.

So the answer for systemd is to workaround it by using a "legacy" service to restore decades of functionality.

SMF was the death knell for Solaris (along with the Oracle purchase), and it feels like systemd is going to be the anchor which drags Linux into the abyss.

[Oct 14, 2018] I got about 15y left and this thing called Linux that I've made a good living on will be the-next-guys steaming pile to deal with

If Redhat7 is the future, I long for the past...
Notable quotes:
"... OP, if you're an old hat like me, I'd fucking LOVE to know how old? You sound like you've got about 5 days soaking wet under your belt with a Milkshake IPA in your hand. You sound like a millennial developer-turned-sysadmin-for-a-day who's got all but cloud-framework-administration under your belt and are being a complete poser. ..."
"... I'd say 0.0001% of any of us are in control of those types of changes, no matter how we feel about is as end-user administrators of those tools we've grown to be complacent about. I got about 15y left and this thing called Linux that I've made a good living on will be the-next-guys steaming pile to deal with. ..."
"... My point being that I don't think it's unreasonable to expect to know and understand every aspect of my system's behavior, and to expect it to continue to behave in the way that I know it does ..."
"... We Linux developers virtually created Internet programming, where most of our effort was accomplished online, but in those days everybody still used books and of course the Linux Documentation Project. I have a huge stack of UNIX and Linux books from the 1990's, and I even wrote a mini-HOWTO. There was no Google. People who used Linux back then may seem like wizards today because we had to memorize everything, or else waste time looking it up in a book. Today, even if I'm fairly certain I already know how to do something, I look it up with Google anyway. ..."
"... Given that, ip and route were downright offensive. We were supposed to switch from a well-documented system to programs written by somebody who can barely speak English (the lingua franca of Linux development)? ..."
"... Today, the discussion is irrelevant. Solaris, HP-UX, and the other commercial UNIX versions are dead. Ubuntu has the common user and CentOS has the server. Google has complete documentation for these tools at a glance. In my mind, there is now no reason to not switch. ..."
Oct 14, 2018 | linux.slashdot.org

adosch ( 1397357 ) , Sunday May 27, 2018 @04:04PM ( #56684724 )

Thats... the argument? FML ( Score: 4 , Interesting)

The OP's argument is that netlink sockets are more efficient in theory so we should abandon anything that uses a pseudo-proc, re-invent the wheel and move even farther from the UNIX tradition and POSIX compliance? And it may be slower on larger systems? Define that for me because I've never experienced that.

I've worked on single stove-pipe x86 systems, through the 'SPARC architecture' generation where everyone thought Sun/Solaris was the way to go with a single entire system in a 42U rack, IRIX systems, all the way to hundreds of RPM-based Linux nodes - physical, hypervisored and containerized - in an HPC, which is a LARGE compute system (fat and compute nodes).

That's a total shit comment with zero facts to back it up. This is like Good Will Hunting 'the bar scene' revisited...

OP, if you're an old hat like me, I'd fucking LOVE to know how old? You sound like you've got about 5 days soaking wet under your belt with a Milkshake IPA in your hand. You sound like a millennial developer-turned-sysadmin-for-a-day who's got all but cloud-framework-administration under your belt and are being a complete poser.

Any true sys-admin is going to flip-their-shit just like we ALL did with systemd, and that shit still needs to die. There, I got that off my chest.

I'd say you got two things right, but are completely off on one of them:

Anymore, I'm just a disgruntled and I'm sure soon-to-be-modded-down voice on /. that should be taken with a grain of salt. I'm not happy with the way the movements of Linux have gone, and if this doesn't sound old hat I don't know what is: At the end of the day, you have to embrace change.

I'd say 0.0001% of any of us are in control of those types of changes, no matter how we feel about is as end-user administrators of those tools we've grown to be complacent about. I got about 15y left and this thing called Linux that I've made a good living on will be the-next-guys steaming pile to deal with.

Greyfox ( 87712 ) , Sunday May 27, 2018 @05:45PM ( #56685218 ) Homepage Journal
Re:Thats... the argument? FML ( Score: 3 )

Yeah. The other day I set up some demo video streaming on a Linux box. Fire up screen, start my streaming program. Disconnect screen and exit my ssh session, and my streaming freezes. There are a metric fuckton of reports of systemd killing detached/nohup'd processes, but I check my config file and it's not that. Although them being that willing to walk away from expected system behavior is already cause to blow a gasket. But no, something else is going on here. I tweak the streaming code to catch all catchable signals, still nothing. So probably not systemd, but I can't be 100% certain. I'm still testing all the possibilities -- if I start the servers from the console in screen and then detach and exit, I don't have the problem; it's only if I start them from ssh. And if I ssh in later, attach and detach, I still don't have the problem. So I'm looking forward to a couple of days of digging around in the ssh code to see if I can figure out what's going on with it. In the meantime, I have a reasonable workaround.

My point being that I don't think it's unreasonable to expect to know and understand every aspect of my system's behavior, and to expect it to continue to behave in the way that I know it does. I've worked on systems where you had to type the sequence of numbers that was the machine code for the bootstrap sequence in order to boot the system.

I know how the boot sequence works. I know how fork and exec and the default file handles work. I know how my system is supposed to start from the very first process. And I don't mind changing those things, as long as I trust the judgment of the people changing them. And I very much don't, for a lot of this new shit.

Not systemd, not wayland, not the new networking utilities. Still not enough to take matters into my own hands, though.
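If anyone hits the same "detached screen dies when the ssh session ends" symptom: on several distros this turns out to be systemd-logind's KillUserProcesses= setting, which reaps everything left in the user session at logout. Two hedged ways around it:

# /etc/systemd/logind.conf -- restore the traditional behavior globally
KillUserProcesses=no
# then: systemctl restart systemd-logind

# or, per user, allow background processes to outlive the login session:
loginctl enable-linger alice      # 'alice' is a placeholder user name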

jgotts ( 2785 ) writes: < jgottsNO@SPAMgmail.com > on Sunday May 27, 2018 @06:17PM ( #56685380 )
Some historical color ( Score: 4 , Interesting)

Just to give you guys some color commentary, I was participating quite heavily in Linux development from 1994-1999, and Linus even added me to the CREDITS file while I was at the University of Michigan for my fairly modest contributions to the kernel. [I prefer application development, and I'm still a Linux developer after 24 years. I currently work for the company Internet Brands.]

What I remember about ip and net is that they came about seemingly out of nowhere two decades ago and the person who wrote the tools could barely communicate in English. There was no documentation. net-tools by that time was a well-understood and well-documented package, and many Linux devs at the time had UNIX experience pre-dating Linux (which was announced in 1991 but not very usable until 1994).

We Linux developers virtually created Internet programming, where most of our effort was accomplished online, but in those days everybody still used books and of course the Linux Documentation Project. I have a huge stack of UNIX and Linux books from the 1990's, and I even wrote a mini-HOWTO. There was no Google. People who used Linux back then may seem like wizards today because we had to memorize everything, or else waste time looking it up in a book. Today, even if I'm fairly certain I already know how to do something, I look it up with Google anyway.

Given that, ip and route were downright offensive. We were supposed to switch from a well-documented system to programs written by somebody who can barely speak English (the lingua franca of Linux development)?

Today, the discussion is irrelevant. Solaris, HP-UX, and the other commercial UNIX versions are dead. Ubuntu has the common user and CentOS has the server. Google has complete documentation for these tools at a glance. In my mind, there is now no reason to not switch.

Although, to be fair, I still use ifconfig, even if it is not installed by default.

Hognoxious ( 631665 ) , Sunday May 27, 2018 @04:28PM ( #56684830 ) Homepage Journal
Re:Poor reasoning ( Score: 3 )
If the output has to change, there can either be a new tool or ifconfig itself changes. Either way, my script has to be changed.

Yes. I mean it would be mathematically impossible to introduce a new tool and leave the old one there too .

[Oct 14, 2018] The fix for Debian removing systemd using angband.pl

Dec 11, 2017 | linux.slashdot.org

lkcl ( 517947 ) writes: < lkcl@lkcl.net > on Monday December 11, 2017 @05:07AM ( #55714471 ) Homepage

Re:Fix ( Score: 3 )
apt purge systemd

add http://angband.pl/debian/ [angband.pl] to /etc/apt/sources.list before doing that and it will actually succeed. okok it's a bit more complex than that, but you can read the instructions online which are neeearly as simple :)

[Oct 14, 2018] Yes, systemd makes things unnecessarily complex with little benefit. Nothing has been gained with systemd, at least not on servers.

Oct 14, 2018 | linux.slashdot.org


coofercat ( 719737 ) writes:
Re: ( Score: 3 )

I'd say it is worse having to type something different for one log on the system, when the other 100+ are plain text and so accessible with the old tools we've all learned backwards. It means you don't have the necessary switches or key presses to hand because you don't do it often enough.

"journalctl" might be the best thing since sliced bread, but making it a hard requirement of systemd makes adoption of systemd that much harder. IMHO, systemd should "pick it's battles" and concentrate on managing system p

Junta ( 36770 ) writes:
Re: ( Score: 3 )

The point being that text format is more universally readable, and also should it get corrupted, it has a better shot of still being readable.

On the other hand, pure binary logging was not necessary to achieve what they wanted. In fact, strictly speaking a split format of fixed-size, well aligned binary metadata alongside a text record of the variable length data would have been even *better* performance and still be readable.

Junta ( 36770 ) , Monday December 11, 2017 @09:49AM ( #55715423 )
Re: Ah yes the secret to simplicity ( Score: 5 , Insightful)

Using text processing skills to process a generic text file isn't any harder than using journalctl. The difference is that the former is generically applicable to just about any other software on the planet, and the latter is for journald. It's not that complex to confer the journalctl benefits without ditching *native* text log capability, but they refuse to do so.

Using ForwardToSyslog just means there's an unnecessary middle-man, meaning both services must be functional to complete logging. The problem is that the time when you actually want logs is the time when something is going wrong. A few weeks ago I was trying to support someone who did something pretty catastrophic to his system. One of the side effects was that it broke the syslog forwarding (syslog would still work, but nothing from journald would get to it). The other thing that happened was that the system locked out all access. I thought 'ok, I'll reboot and use journalctl', but wait, on CentOS 7 journald defaults to not persisting the journal across boots, because you have syslog to do that.

Of course the other problem (not entirely the systemd project's fault) was the quest to 'simplify' console output, so he just saw 'fail' instead of the much more useful error messages that would formerly spam the console when experiencing the sort of problem he hit (because it would be terrible to have an 'ugly' console...). This hints at another source of the systemd controversy: it's also symbolic of a lot of other design choices that have come out of the distros.
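The "journald does not persist across boots on CentOS 7" behavior is a default, not a hard limit; persistence can be switched on. A hedged example:

mkdir -p /var/log/journal                      # the directory's presence enables persistence
systemd-tmpfiles --create --prefix /var/log/journal
systemctl restart systemd-journald

# or set it explicitly in /etc/systemd/journald.conf:
#   [Journal]
#   Storage=persistent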

[Oct 14, 2018] In essence, Red Hat is attempting to out-MS MS by polluting and warping Linux needlessly but surely

Highly recommended!
So there is strategy behind systemd introduction, and pretty nefarious one...
Oct 14, 2018 | linux.slashdot.org

OneHundredAndTen ( 1523865 ) , Monday December 11, 2017 @09:36AM ( #55715351 )

Re:Oh stop complaining FFS ( Score: 5 , Interesting)

I don't agree. Systemd is the most visible part of a clear trend within Red Hat, consisting in an attempt to make their particular version of Linux THE canonical Linux, to the point that, if you are not using Red Hat, or some derived distribution, things will not work. In essence, Red Hat is attempting to out-MS MS by polluting and warping Linux needlessly but surely. The latest: they have come up with the 'timedatectl' command, which does exactly the same as 'date'. The latter is to be deprecated. Red Hat, the MS wannabee. They will not pull it off, but they are inflicting a lot of damage on Linux in the process.

[Oct 14, 2018] I m curious if Redhat are regretting their decision on the early hard switch to systemd in RHEL7

RHEL7 looks like a fiasco to me...
Notable quotes:
"... I have friends in the industry still on RHEL6 as the upgrade to version 7 is a logistical nightmare and one who works at a reasonably large enterprise considering ditching Redhat entirely to go to a systemd-less alternative. ..."
"... That faster boot time sure helps with servers that are only restarted once a year ... ..."
"... The best part is how they want to use systemd to save 12 seconds of boot time on my servers that take 5-10 minutes to boot, once every few years. Anyone who thinks that complexity is worth that negligible difference is insane. Of course there are other arguments for systemd, but don't tell me I need it on my servers so they "boot faster". ..."
Oct 14, 2018 | linux.slashdot.org

Zarjazz ( 36278 ) , Monday December 11, 2017 @08:42AM ( #55715117 )

Enterprise Headache ( Score: 4 , Informative)

I'm curious if Redhat are regretting their decision on the early hard switch to systemd in RHEL7. I know for a fact their support system was flooded with issues from early adopters.

I have friends in the industry still on RHEL6 as the upgrade to version 7 is a logistical nightmare and one who works at a reasonably large enterprise considering ditching Redhat entirely to go to a systemd-less alternative.

That faster boot time sure helps with servers that are only restarted once a year ...

jon3k ( 691256 ) , Monday December 11, 2017 @10:44AM ( #55715759 )
Re:Enterprise Headache ( Score: 4 , Informative)

The best part is how they want to use systemd to save 12 seconds of boot time on my servers that take 5-10 minutes to boot, once every few years. Anyone who thinks that complexity is worth that negligible difference is insane. Of course there are other arguments for systemd, but don't tell me I need it on my servers so they "boot faster".

[Oct 13, 2018] How to Change the Log Level in Systemd

Apr 01, 2016 | 410gone.click

SyslogLevel=
See syslog(3) for details. This option is only useful when StandardOutput= or StandardError= are set to syslog or kmsg. Note that individual lines output by the daemon might be prefixed with a different log level which can be used to override the default log level specified here. The interpretation of these prefixes may be disabled with SyslogLevelPrefix=, see below. For details, see sd-daemon(3). Defaults to info.

The Current Log Level

To check the log level of systemd which is currently set, use the show command of systemctl with the -p option to select the 'LogLevel' parameter:

systemctl -p LogLevel show
LogLevel=info

How to Temporarily Change the Log Level

To temporarily change the log level of systemd, use the set-log-level option of the systemd-analyze command. For example, to change the log level to notice:

systemd-analyze set-log-level notice

How to Permanently Change the Log Level

To keep the changed log level after a restart of the system, change 'LogLevel' in /etc/systemd/system.conf. For example, to change the log level to notice:

vi /etc/systemd/system.conf
#LogLevel=info
LogLevel=notice
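After editing system.conf the new level takes effect at the next boot (re-executing the manager with systemctl daemon-reexec should also pick it up); it can be verified with the same query used earlier, which with the notice example above would show:

systemctl -p LogLevel show
LogLevel=notice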


[Oct 12, 2018] RHEL 7 Root Password Recovery - Red Hat Customer Portal

Notable quotes:
"... That should be "touch /.autorelabel" (forward-slash dot instead of dot forward-slash). ..."
Oct 12, 2018 | access.redhat.com

Robert Krátký

Adding rd.break to the end of the line with kernel parameters in Grub stops the start up process before the regular root filesystem is mounted (hence the necessity to chroot into sysroot ). Emergency mode, on the other hand, does mount the regular root filesystem, but it only mounts it in a read-only mode. Note that you can even change to emergency mode using the systemctl command on a running system ( systemctl emergency ).

I updated the linked solution ( Resetting the Root Password of RHEL-7 / systemd ) to make the instructions easier to follow.

R. Hinton (Community Leader), 11 November 2014 1:44 AM

I've found recently that sometimes on a virtual system you have to add "console=tty0" at the end, or else the output goes to a non-available serial console.

IO Loreal.Linux, 9 November 2017 2:40 PM: RHEL 7 root Password Reset.

Step 1: Interrupt the console while Linux boots (at the Grub menu).

Step 2: Press "e" to edit the kernel entry.

Step 3: Append the following at the end of the linux line: rd.break console=tty1

Step 4: Press CTRL+X

Step 5: mount -o remount,rw /sysroot

Step 6: chroot /sysroot; change root password

Step 7: touch /.autorelabel

Step 8: Type exit two times

Thanks & Regards Namasivayam

Siggy Sigwald 22 February 2018 3:55 PM

1 - on grub menu, select the kernel to boot from and press "e".
2 - append the following to the end of the line that starts with "linux16": rd.break console=tty1
3 - use ctrl+x to boot up, then: mount -o remount,rw /sysroot
4 - chroot /sysroot
5 - passwd (enter new password).
6 - touch /.autorelabel
7 - ctrl+d twice to resume normal boot process.

William Schmidt, 18 April 2018 11:14 AM

If you use the syntax above (touch ./autorelabel, dot forward-slash), the result is to create a file called autorelabel in the current directory. The result will NOT be a relabel, and you may end up with a system you are not able to log in to.

That should be "touch /.autorelabel" (forward-slash dot instead of dot forward-slash). Notice the position of the period in front of the autorelabel filename. The result is to create a file called .autorelabel in the root directory. The result will be a relabel function and a system you can log in to.

I have made that mistake!

// Bill //

[Oct 08, 2018] RHCSA Exam Training by Infinite Skills

For $29.99 they provide a course with 6.5 hours of on-demand video
Oct 08, 2018 | www.udemy.com

Description

This Red Hat Certified Systems Administrator Exam EX200 training course from Infinite Skills will teach you everything you need to know to become a Red Hat Certified System Administrator (RHCSA) and pass the EX200 Exam. This course is designed for users that are familiar with Red Hat Enterprise Linux environments.

You will start by learning the fundamentals, such as basic shell commands, creating and modifying users, and changing passwords. The course will then teach you about the shell, explaining how to manage files, use the stream editor, and locate files. This video tutorial will also cover system management, including booting and rebooting, network services, and installing packages. Other topics that are covered include storage management, server management, virtual machines, and security.

Once you have completed this computer based training course, you will be fully capable of taking the RHCSA EX200 exam and becoming a Red Hat Certified System Administrator.

*Infinite Skills has no affiliation with Red Hat, Inc. The Red Hat trademark is used for identification purposes only and is not intended to indicate affiliation with or approval by Red Hat, Inc.

[Sep 04, 2018] Unifying custom scripts system-wide with rpm on Red Hat-CentOS

Highly recommended!
Aug 24, 2018 | linuxconfig.org
Objective

Our goal is to build rpm packages with custom content, unifying scripts across any number of systems, including versioning, deployment and undeployment.

Requirements

Privileged access to the system for install, normal access for build.

Difficulty

MEDIUM

Introduction

One of the core features of any Linux system is that it is built for automation. If a task may need to be executed more than one time - even with some part of it changing on the next run - a sysadmin is provided with countless tools to automate it, from simple shell scripts run by hand on demand (thus eliminating typos, or just saving some keystrokes) to complex scripted systems where tasks run from cron at a specified time, interacting with each other, working with the result of another script, maybe controlled by a central management system, etc.

While this freedom and rich toolset indeed add to productivity, there is a catch: as a sysadmin, you write a useful script on a system, which proves to be useful on another, so you copy the script over. On a third system the script is useful too, but with minor modification - maybe a new feature useful only on that system, reachable with a new parameter. With generalization in mind, you extend the script to provide the new feature, and complete the task it was written for as well. Now you have two versions of the script: the first is on the first two systems, the second is on the third system.

You have 1024 computers running in the datacenter, and 256 of them will need some of the functionality provided by that script. In time you will have 64 versions of the script all over, every version doing its job. On the next system deployment you need a feature you recall you coded into some version, but which one? And on which systems is it present?

On RPM based systems, such as Red Hat flavors, a sysadmin can take advantage of the package manager to create order in the custom content, including simple shell scripts that may provide nothing else but the tools the admin wrote for convenience.

In this tutorial we will build a custom rpm for Red Hat Enterprise Linux 7.5 containing two bash scripts, parselogs.sh and pullnews.sh, to provide a way to ensure that all systems have the latest version of these scripts in the /usr/local/sbin directory, and thus on the path of any user who logs in to the system.




Distributions, major and minor versions

In general, the minor and major version of the build machine should be the same as those of the systems the package is to be deployed on, as well as the distribution, to ensure compatibility. If there are various versions of a given distribution, or even different distributions with many versions in your environment (oh, joy!), you should set up build machines for each. To cut the work short, you can just set up a build environment for each distribution and each major version, and have them on the lowest minor version existing in your environment for the given major version. Of course they don't need to be physical machines, and only need to be running at build time, so you can use virtual machines or containers.

In this tutorial our work is much easier: we only deploy two scripts that have no dependencies at all (except bash), so we will build noarch packages, which stands for "not architecture dependent"; we'll also not specify the distribution the package is built for. This way we can install and upgrade them on any distribution that uses rpm, and on any version - we only need to ensure that the build machine's rpm-build package is on the oldest version in the environment.

Setting up the building environment

To build custom rpm packages, we need to install the rpm-build package:

# yum install rpm-build
From now on, we do not use the root user, and for a good reason: building packages does not require root privileges, and you don't want to break your build machine.
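A minimal sketch of setting up an unprivileged build account (the user name rpmbuilder is only an example, not part of the original tutorial):

# as root, once per build machine
useradd -m rpmbuilder
# then do all building as that user
su - rpmbuilder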

Building the first version of the package

Let's create the directory structure needed for building:

$ mkdir -p rpmbuild/SPECS
Our package is called admin-scripts, version 1.0. We create a specfile that specifies the metadata, contents and tasks performed by the package. This is a simple text file we can create with our favorite text editor, such as vi . The previously installed rpmbuild package will fill your empty specfile with template data if you use vi to create an empty one, but for this tutorial consider the specification below called admin-scripts-1.0.spec :



Name:           admin-scripts
Version:        1
Release:        0
Summary:        FooBar Inc. IT dept. admin scripts
Packager:       John Doe 
Group:          Application/Other
License:        GPL
URL:            www.foobar.com/admin-scripts
Source0:        %{name}-%{version}.tar.gz
BuildArch:      noarch

%description
Package installing latest version the admin scripts used by the IT dept.

%prep
%setup -q


%build

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/local/sbin
cp scripts/* $RPM_BUILD_ROOT/usr/local/sbin/

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root,-)
%dir /usr/local/sbin
/usr/local/sbin/parselogs.sh
/usr/local/sbin/pullnews.sh

%doc

%changelog
* Wed Aug 1 2018 John Doe 
- release 1.0 - initial release
Place the specfile in the rpmbuild/SPECS directory we created earlier.

We need the sources referenced in the specfile - in this case the two shell scripts. Let's create the directory for the sources (named as the package name appended with the major version):

$ mkdir -p rpmbuild/SOURCES/admin-scripts-1/scripts
And copy/move the scripts into it:
$ ls rpmbuild/SOURCES/admin-scripts-1/scripts/
parselogs.sh  pullnews.sh



As this tutorial is not about shell scripting, the contents of these scripts are irrelevant. As we will create a new version of the package, and pullnews.sh is the script we will demonstrate with, its source in the first version is as below:
#!/bin/bash
echo "news pulled"
exit 0
Do not forget to add the appropriate rights to the files in the source - in our case, execute permission:
chmod +x rpmbuild/SOURCES/admin-scripts-1/scripts/*.sh
Now we create a tar.gz archive from the source in the same directory:
cd rpmbuild/SOURCES/ && tar -czf admin-scripts-1.tar.gz admin-scripts-1
We are ready to build the package:
rpmbuild --bb rpmbuild/SPECS/admin-scripts-1.0.spec
We'll get some output about the build, and if anything goes wrong, errors will be shown (for example, missing file or path). If all goes well, our new package will appear in the RPMS directory generated by default under the rpmbuild directory (sorted into subdirectories by architecture):
$ ls rpmbuild/RPMS/noarch/
admin-scripts-1-0.noarch.rpm
We have created a simple yet fully functional rpm package. We can query it for all the metadata we supplied earlier:
$ rpm -qpi rpmbuild/RPMS/noarch/admin-scripts-1-0.noarch.rpm 
Name        : admin-scripts
Version     : 1
Release     : 0
Architecture: noarch
Install Date: (not installed)
Group       : Application/Other
Size        : 78
License     : GPL
Signature   : (none)
Source RPM  : admin-scripts-1-0.src.rpm
Build Date  : 2018. aug.  1., Wed, 13.27.34 CEST
Build Host  : build01.foobar.com
Relocations : (not relocatable)
Packager    : John Doe 
URL         : www.foobar.com/admin-scripts
Summary     : FooBar Inc. IT dept. admin scripts
Description :
Package installing latest version the admin scripts used by the IT dept.
And of course we can install it (with root privileges), as sketched below.
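A minimal install command, using the package file produced by the build above and assuming it is run from the directory containing rpmbuild:

# rpm -ivh rpmbuild/RPMS/noarch/admin-scripts-1-0.noarch.rpm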



As we installed the scripts into a directory that is on every user's $PATH , you can run them as any user in the system, from any directory:
$ pullnews.sh 
news pulled
The package can be distributed as it is, and can be pushed into repositories available to any number of systems. How to do so is out of the scope of this tutorial - however, building another version of the package is certainly not.

Building another version of the package

Our package and the extremely useful scripts in it become popular in no time, considering they are reachable anywhere with a simple yum install admin-scripts within the environment. There will soon be many requests for some improvements - in this example, many votes come from happy users that pullnews.sh should print another line on execution; this feature would save the whole company. We need to build another version of the package, as we don't want to install another script, but a new version of it with the same name and path, as the sysadmins in our organization already rely on it heavily.

First we change the source of the pullnews.sh in the SOURCES to something even more complex:

#!/bin/bash
echo "news pulled"
echo "another line printed"
exit 0
We need to recreate the tar.gz with the new source content - we can use the same filename as the first time, as we don't change the version, only the release (and so the Source0 reference will still be valid). Note that we delete the previous archive first:
cd rpmbuild/SOURCES/ && rm -f admin-scripts-1.tar.gz && tar -czf admin-scripts-1.tar.gz admin-scripts-1
Now we create another specfile with a higher release number:
cp rpmbuild/SPECS/admin-scripts-1.0.spec rpmbuild/SPECS/admin-scripts-1.1.spec
We don't change much on the package itself, so we simply record the new release as shown below:
Name:           admin-scripts
Version:        1
Release:        1
Summary:        FooBar Inc. IT dept. admin scripts
Packager:       John Doe 
Group:          Application/Other
License:        GPL
URL:            www.foobar.com/admin-scripts
Source0:        %{name}-%{version}.tar.gz
BuildArch:      noarch

%description
Package installing latest version the admin scripts used by the IT dept.

%prep
%setup -q


%build

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/local/sbin
cp scripts/* $RPM_BUILD_ROOT/usr/local/sbin/

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root,-)
%dir /usr/local/sbin
/usr/local/sbin/parselogs.sh
/usr/local/sbin/pullnews.sh

%doc

%changelog
* Wed Aug 22 2018 John Doe 
- release 1.1 - pullnews.sh v1.1 prints another line
* Wed Aug 1 2018 John Doe 
- release 1.0 - initial release



All done, we can build another version of our package containing the updated script. Note that we reference the specfile with the higher release number as the source of the build:
rpmbuild --bb rpmbuild/SPECS/admin-scripts-1.1.spec
If the build is successful, we now have two versions of the package under our RPMS directory:
ls rpmbuild/RPMS/noarch/
admin-scripts-1-0.noarch.rpm  admin-scripts-1-1.noarch.rpm
And now we can install the "advanced" script, or upgrade it if it is already installed, as sketched below.
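An upgrade sketch using the freshly built release, again run from the directory containing rpmbuild (rpm -Uvh upgrades in place if release 1-0 is already installed):

# rpm -Uvh rpmbuild/RPMS/noarch/admin-scripts-1-1.noarch.rpm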

And our sysadmins can see that the feature request has landed in this version:

rpm -q --changelog admin-scripts
* sze aug 22 2018 John Doe 
- release 1.1 - pullnews.sh v1.1 prints another line

* sze aug 01 2018 John Doe 
- release 1.0 - initial release
Conclusion

We wrapped our custom content into versioned rpm packages. This means no older versions are left scattered across systems; everything is in its place, at the version we installed or upgraded to. RPM gives us the ability to replace old files needed only in previous versions, and to add custom dependencies or provide some tools or services our other packages rely on. With effort, we can pack nearly any of our custom content into rpm packages, and distribute it across our environment, not only with ease, but with consistency.

[Jun 12, 2018] How to convert RHEL 6.x to CentOS 6.x The Picky SysAdmin

Jun 12, 2018 | www.pickysysadmin.ca

How to convert RHEL 6.x to CentOS 6.x

2014-07-15 by Eric Schewe


This post relates to my older post about converting RHEL 5.x to CentOS 5.x . All the reasons for doing so and other background information can be found in that post.

This post will cover how to convert RHEL 6.x to CentOS 6.x.

Updated 2016-03-29 – Thanks to feedback from here I've updated the guide.

Updates and Backups!
  1. Fully patch your system and reboot your system before starting this process
  2. Take a full backup of your system or a Snapshot if it's a VM
Conversion
  1. Login to the server and become root
  2. Clean up yum's cache
     localhost :~ root # yum clean all
  3. Create a temporary working area
     localhost :~ root # mkdir -p /temp/centos
     localhost :~ root # cd /temp/centos
  4. Determine your version of RHEL
     localhost :~ root # cat /etc/redhat-release
  5. Determine your architecture (32-bit = i386, 64-bit = x86_64)
     localhost :~ root # uname -i
  6. Download the applicable files for your release and architecture. The version numbers on these packages could change. To find the current versions of these files browse this FTP site: http://mirror.centos.org/centos/6/os/i386/Packages/ (32-bit) or http://mirror.centos.org/centos/6/os/x86_64/Packages/ (64-bit) and replace the 'x' values below with the current version numbers
     CentOS 6.5 / 32-bit
     localhost :~ root # wget http://mirror.centos.org/centos/6/os/i386/RPM-GPG-KEY-CentOS-6
     localhost :~ root # wget http://mirror.centos.org/centos/6/os/i386/Packages/centos-release-6-x.el6.centos.x.x.i686.rpm
     localhost :~ root # wget http://mirror.centos.org/centos/6/os/i386/Packages/centos-indexhtml-6-x.el6.centos.noarch.rpm
     localhost :~ root # wget http://mirror.centos.org/centos/6/os/i386/Packages/yum-x.x.x-x.el6.centos.noarch.rpm
     localhost :~ root # wget http://mirror.centos.org/centos/6/os/i386/Packages/yum-plugin-fastestmirror-x.x.x-x.el6.noarch.rpm

     CentOS 6.5 / 64-bit
     localhost :~ root # wget http://mirror.centos.org/centos/6/os/x86_64/RPM-GPG-KEY-CentOS-6
     localhost :~ root # wget http://mirror.centos.org/centos/6/os/x86_64/Packages/centos-release-6-x.el6.centos.xx.x.x86_64.rpm
     localhost :~ root # wget http://mirror.centos.org/centos/6/os/x86_64/Packages/centos-indexhtml-6-x.el6.centos.noarch.rpm
     localhost :~ root # wget http://mirror.centos.org/centos/6/os/x86_64/Packages/yum-x.x.xx-xx.el6.centos.noarch.rpm
     localhost :~ root # wget http://mirror.centos.org/centos/6/os/x86_64/Packages/yum-plugin-fastestmirror-x.x.xx-xx.el6.noarch.rpm
  7. Import the GPG key for the appropriate version of CentOS
     localhost :~ root # rpm --import RPM-GPG-KEY-CentOS-6
  8. Remove RHEL packages

     Note: If the 'rpm -e' command fails saying one of the packages is not installed, remove the package from the command and run it again.

     localhost :~ root # yum remove rhnlib abrt-plugin-bugzilla redhat-release-notes*
     localhost :~ root # rpm -e --nodeps redhat-release-server-6Server redhat-indexhtml
  9. Remove any left over RHEL subscription information and the subscription-manager

     Note: If you do not do this, every time you run 'yum' you will receive the following message: "This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register."

     localhost :~ root # subscription-manager clean
     localhost :~ root # yum remove subscription-manager
  10. Force install the CentOS RPMs we downloaded
     localhost :~ root # rpm -Uvh --force *.rpm
  11. Clean up yum one more time and then upgrade
     localhost :~ root # yum clean all
     localhost :~ root # yum upgrade
  12. Reboot your server
  13. Verify functionality (see the quick check after this list)
  14. Delete VM Snapshot if you took one as part of the backup
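A quick post-reboot check, confirming the release file and repositories now point at CentOS (exact version strings will differ):

localhost :~ root # cat /etc/redhat-release
CentOS release 6.x (Final)
localhost :~ root # yum repolist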

[Apr 29, 2018] RHEL 6.10 Beta is available

Apr 29, 2018 | www.serverwatch.com

...On April 26, Red Hat announced the RHEL 6.10 beta, providing stability and security updates.

RHEL 6 was first released in November 2010 and was superseded as the leading edge of RHEL development when RHEL 7 was released in June 2014.

"Red Hat Enterprise Linux offers a ten year lifecycle, one of the longest in the industry, and version 6 is within its seventh year of support, putting it in the Maintenance Support 2 phase," Marcel Kolaja, product manager, Red Hat Enterprise Linux at Red Hat, told ServerWatch . "This means that Red Hat Enterprise Linux 6 receives Critical-rated Red Hat Security Advisories (RHSAs), and selected Urgent-rated Red Hat Bug Fix Advisories (RHBAs) may be released as they become available."

Kolaja noted that the Red Hat Enterprise Linux 6.10 Beta delivers updates only to maintain and enhance the security, stability, and reliability of the platform in production roles.

[Apr 26, 2018] chkservice - A Tool For Managing Systemd Units From Linux Terminal

Apr 26, 2018 | www.2daygeek.com

For RPM-based systems, use yum (or dnf) to install chkservice:

$ sudo yum install https://github.com/linuxenko/chkservice/releases/download/0.1/chkservice_0.1.0-amd64.rpm
How To Use chkservice

Just fire the following command to launch the chkservice tool. The output is split into four parts.

$ sudo chkservice

[Jan 29, 2018] How good is Red Hat Enterprise Linux or CentOS as a desktop OS - Quora

Jan 29, 2018 | www.quora.com

It's harder to use CentOS as a desktop than Debian, the latter of which has many, many more packages. You can use the Nux repo at My RPM repositories (or use his CentOS 6 remix called Stella) to beef up your CentOS installation.

But I think there's safety in numbers, and a lot more developers package things for Debian.

That said, I use Fedora because it's pretty much the CentOS of the future, and there are a lot more packages. Sure, you need to add RPM Fusion for some things, but that's not so difficult.

Stability is a funny thing. Often in the Linux world, "stable" just means old. I find Fedora -- with much newer packages -- very stable in terms of its ability to keep running. Using what works with your hardware is more important than what is labeled "stable." Older hardware is often better with older software. And newer hardware may be more "stable" with newer software.

Your mileage will definitely vary, and it doesn't hurt to do a few installations (Debian, CentOS, Fedora, Ubuntu) to see what works best for you. Try before you buy (even if what you're "buying" is free).

[Oct 25, 2017] So after 4 hours of debugging systemd and NetworkManager; nothing but pain (/r/linux)

Oct 25, 2017 | www.reddit.com

This is an old but pretty instructive post. Comments provide important tips on how to deal with systemd problems.

it drops the default route every time I reboot the machine

Can't work out how to resolve this at all. Nothing in logs or anything. Two hours down the pan.

ALL of this is config management stuff related to systemd and NetworkManager. Not at all impressed. Excuse the rant but this is motivating me to move our CentOS 5.x machines to FreeBSD which has some semblance of debuggability, configuration management that doesn't make you want to gouge your eyes out and stability.

This feels like Windows, WMI and the registry which is an abhorrent place for Linux to be. I know because I deal with Windows Server as well.

I know there is a fan following of systemd on here but this is what it's like in the real world for those of us who need to argue with it to get shit done, so Dear Lennart (and RedHat): Fuck you for taking away 4 hours of my life and fuck you for forcing these bits of crap on the distributions.

Edit: please read all my replies before you kick me down.

Edit 2: I've spoken to our CTO. 95% of our kit can be moved to windows as it's hosting JVMs so that's going in the mix as well.

I have a few CentOS 7 servers online and have had no trouble with routes being dropped, during reboot or at any other time.

godlesspeon (3 years ago):

How did you do the install on these? I'm genuinely interested in where this has gone wrong.

e_t_ (3 years ago):

I went through Anaconda. It wrote a config file to /etc/sysconfig/network-scripts, which NM picks up. I think the same file could be created with nmtui .

godlesspeon (3 years ago):

I used nmtui and the sysconfig folder is expected (all contents as expected, correctly configured) and this is good hardware (Intel gbit cards).

I will say that the damn thing has just spontaneously started working which is even more worrying as the whole thing is completely non-deterministic. Ugh.

riking27 (3 years ago):

Nondeterministic? Sounds like a missing ordering dependency to me ( After= , Before= ).

Example: NFS mounts need an After=network-online.target to work; anything using the network on stop needs an After=network.target .
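The ordering riking27 describes can be expressed in a systemd drop-in; the unit name below is hypothetical and only illustrates the syntax:

# /etc/systemd/system/my-nfs-dependent.service.d/network.conf
[Unit]
Wants=network-online.target
After=network-online.target

Run systemctl daemon-reload afterwards for the drop-in to take effect.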

azalynx (3 years ago):

[...] this is motivating me to move our CentOS 5.x machines to FreeBSD which has [...]

As a fan of systemd, I was going to take your side and agree that these issues are unacceptable, until you said this. You're not helping, seriously. Your problem is related to bugs in a piece of software, not to systemd's design philosophy, and especially not to Linux as a whole.

The fact that your solution to some bugs in the software is to throw the baby out with the bathwater, speaks a lot about your attitude. Chances are the issues you mentioned will be resolved as RHEL/CentOS 7 become more deployed, and others run across similar problems and actually report and/or fix the actual bugs instead of embarking on an anti-Lennart crusade and spewing vitriol all over the place.

I would also be willing to fucking bet that other people have had similar problems with every new version of RHEL or CentOS ever, in history, but because this time there's a convenient bullseye in the form of systemd to assign blame to, everyone loses their shit.

[...] and fuck you for forcing these bits of crap on the distributions.

Woah, what? This fallacy again? Red Hat isn't forcing jack shit on anyone. The reality is that many users/businesses need this functionality; just because you don't need it, that doesn't mean that there aren't a thousand other use cases out there that require systemd. Please stop assuming that you are at the center of the universe and that everyone else's use cases are like yours. Distributions have always been designed to be "one-size-fits-all" in order to fit everyone's uses; obviously many distributions want systemd's functionality because they have users who demand it.

Once again, as I said in my first sentence; it's not ok for an enterprise product to have the problems you experienced, but your attitude isn't helping, really. It's also worth noting that if everyone just flipped tables when something broke, we'd never have any progress.

[Oct 25, 2017] Top 5 Linux pain points in 2017

Looks like the author is a dilettante if he thinks that the library compatibility problem is not an issue. He also failed to mention systemd.
Oct 25, 2017 | opensource.com
Following up on the 2016 Open Source Yearbook article on troubleshooting tips for the 5 most common Linux issues: Linux installs and operates as expected for most users, but some inevitably run into problems. How have things changed over the past year in this regard? Once again, I posted the question to LinuxQuestions.org and on social media, and analyzed LQ posting patterns. Here are the updated results.

1. Documentation

Documentation, or lack thereof, was one of the largest pain points this year. Although open source methodology produces superior code, the importance of producing quality documentation has only recently come to the forefront. As more non-technical users adopt Linux and open source software, the quality and quantity of documentation will become paramount. If you've wanted to contribute to an open source project but don't feel you are technical enough to offer code, improving documentation is a great way to participate. Many projects even keep the documentation in their repository, so you can use your contribution to get acclimated to the version control workflow.


2. Software/library version incompatibility

I was surprised by this one, but software/library version incompatibility was mentioned frequently...

3. UEFI and secure boot

Although this issue continues to improve as more supported hardware is deployed, many users indicate that they still have issues with UEFI and/or secure boot. Using a distribution that fully supports UEFI/secure boot out of the box is the best solution here.

4. Deprecation of 32-bit

Many users are lamenting the death of 32-bit support in their favorite distributions and software projects. Although you still have many options if 32-bit support is a must, fewer and fewer projects are likely to continue supporting a platform with decreasing market share and mind share. Luckily, we're talking about open source, so you'll likely have at least a couple options as long as someone cares about the platform.

5. Deteriorating support and testing for X-forwarding

Although many longtime and power users of Linux regularly use X-forwarding and consider it critical functionality, as Linux becomes more mainstream it appears to be seeing less testing and support, especially from newer apps. With Wayland network transparency still evolving, the situation may get worse before it improves.

[Sep 27, 2017] Chkservice - An Easy Way to Manage Systemd Units in Terminal

Sep 27, 2017 | linoxide.com

Systemd is a system and service manager for Linux operating systems which introduces the concept of systemd units and provides a number of features such as parallel startup of system services at boot time, on-demand activation of daemons, etc. It helps to manage services on your Linux OS, such as starting/stopping/reloading them. But to operate on services with systemd, you need to know the different services launched and the name which exactly matches the service. There is a tool which can help Linux users navigate through the different services available on their system, much as the top command does for the processes currently running.

What is chkservice?

chkservice is a new and handy tool for systemd unit management in a terminal. It is a GitHub project developed by Svetlana Linuxenko. Its particularity is that it lists the different services present on your system. You have a view of each available service and you are able to manage it as you want.
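Roughly the same information chkservice displays can be gathered with plain systemctl; for example:

# list unit files and whether they are enabled or disabled
systemctl list-unit-files --type=service
# show the run-time state of a single unit
systemctl status sshd.service

chkservice simply presents this in one interactive screen.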

Debian:

sudo add-apt-repository ppa:linuxenko/chkservice
sudo apt-get update
sudo apt-get install chkservice

Arch

git clone https://aur.archlinux.org/chkservice.git
cd chkservice
makepkg -si

Fedora

dnf copr enable srakitnican/default
dnf install chkservice

chkservice requires super-user privileges to make changes to unit states or sysv scripts. For a regular user it works read-only.

Package dependencies:

Build dependencies:

Build and install debian package.

git clone https://github.com/linuxenko/chkservice.git
mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX=/usr ../
cpack

dpkg -i chkservice-x.x.x.deb

Build release version.

git clone https://github.com/linuxenko/chkservice.git
mkdir build
cd build
cmake ../
make

[Sep 23, 2017] CentOS 7 Server Hardening Guide Lisenet.com Linux Security Networking

Highly recommended!
Notable quotes:
"... As a rule of thumb, malicious applications usually write to /tmp and then attempt to run whatever was written. A way to prevent this is to mount /tmp on a separate partition with the options noexec , nodev and nosuid enabled. ..."
Sep 23, 2017 | www.lisenet.com

Remove packages which you don't require on a server, e.g. sound card firmware, WinTV firmware, wireless drivers, etc.

# yum remove alsa-* ivtv-* iwl*firmware ic94xx-firmware
2. System Settings – File Permissions and Masks

2.1 Restrict Partition Mount Options

Partitions should have hardened mount options:

  1. /boot – rw,nodev,noexec,nosuid
  2. /home – rw,nodev,nosuid
  3. /tmp – rw,nodev,noexec,nosuid
  4. /var – rw,nosuid
  5. /var/log – rw,nodev,noexec,nosuid
  6. /var/log/audit – rw,nodev,noexec,nosuid
  7. /var/www – rw,nodev,nosuid

As a rule of thumb, malicious applications usually write to /tmp and then attempt to run whatever was written. A way to prevent this is to mount /tmp on a separate partition with the options noexec , nodev and nosuid enabled.

This will deny binary execution from /tmp , disable any binary to be suid root, and disable any block devices from being created.

The storage location /var/tmp should be bind mounted to /tmp , as having multiple locations for temporary storage is not required:

/tmp /var/tmp none rw,nodev,noexec,nosuid,bind 0 0

The same applies to shared memory /dev/shm :

tmpfs /dev/shm tmpfs rw,nodev,noexec,nosuid 0 0

The proc pseudo-filesystem /proc should be mounted with hidepid. When hidepid is set to 2, process directories in /proc belonging to other users will be hidden.

proc /proc proc rw,hidepid=2 0 0

Harden removable media mounts by adding nodev, noexec and nosuid, e.g.:

/dev/cdrom /mnt/cdrom iso9660 ro,noexec,nosuid,nodev,noauto 0 0
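Putting the /tmp recommendation together, an /etc/fstab entry for a dedicated /tmp partition could look like the line below (the device name is an illustration only; substitute your own volume):

/dev/mapper/vg_sys-tmp  /tmp  ext4  rw,nodev,noexec,nosuid  0 2

The options can be applied to an already mounted filesystem with mount -o remount /tmp.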
2.2 Restrict Dynamic Mounting and Unmounting of Filesystems

Add the following to /etc/modprobe.d/hardening.conf to disable uncommon filesystems:

install cramfs /bin/true

install freevxfs /bin/true

install jffs2 /bin/true

install hfs /bin/true

install hfsplus /bin/true

install squashfs /bin/true

install udf /bin/true

Depending on your setup (if you don't run clusters, NFS, CIFS etc), you may consider disabling the following too:

install fat /bin/true

install vfat /bin/true

install cifs /bin/true

install nfs /bin/true

install nfsv3 /bin/true

install nfsv4 /bin/true

install gfs2 /bin/true

It is wise to leave ext4, xfs and btrfs enabled at all times.
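To verify that one of the blacklisted filesystems can no longer be loaded, a modprobe dry-run should show the /bin/true override, e.g.:

# modprobe -n -v cramfs
install /bin/true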

2.3 Prevent Users Mounting USB Storage

Add the following to /etc/modprobe.d/hardening.conf to disable modprobe loading of USB and FireWire storage drivers:

blacklist usb-storage

blacklist firewire-core

install usb-storage /bin/true

Disable USB authorisation. Create a file /opt/usb-auth.sh with the following content:

#!/bin/bash

echo 0 > /sys/bus/usb/devices/usb1/authorized

echo 0 > /sys/bus/usb/devices/usb1/authorized_default

If more than one USB device is available, then add them all. Create a service file /etc/systemd/system/usb-auth.service with the following content:

[Unit]

Description=Disable USB auth

DefaultDependencies=no



[Service]

Type=oneshot

ExecStart=/bin/bash /opt/usb-auth.sh



[Install]

WantedBy=multi-user.target

Set permissions, enable and start the service:

# chmod 0700 /opt/usb-auth.sh

# systemctl enable usb-auth.service

# systemctl start usb-auth.service

If required, disable kernel support for USB via bootloader configuration. To do so, append nousb to the kernel line GRUB_CMDLINE_LINUX in /etc/default/grub and generate the Grub2 configuration file:

# grub2-mkconfig -o /boot/grub2/grub.cfg

Note that disabling all kernel support for USB will likely cause problems for systems with USB-based keyboards etc.

2.4 Restrict Programs from Dangerous Execution Patterns

Configure /etc/sysctl.conf with the following:

# Disable core dumps

fs.suid_dumpable = 0



# Disable System Request debugging functionality

kernel.sysrq = 0



# Restrict access to kernel logs

kernel.dmesg_restrict = 1



# Enable ExecShield protection

kernel.exec-shield = 1



# Randomise memory space

kernel.randomize_va_space = 2



# Hide kernel pointers

kernel.kptr_restrict = 2

Load sysctl settings:

# sysctl -p
2.5 Set UMASK 027

The following files require umask hardening: /etc/bashrc , /etc/csh.cshrc , /etc/init.d/functions and /etc/profile .

Sed one-liner:

# sed -i -e 's/umask 022/umask 027/g' -e 's/umask 002/umask 027/g' /etc/bashrc

# sed -i -e 's/umask 022/umask 027/g' -e 's/umask 002/umask 027/g' /etc/csh.cshrc

# sed -i -e 's/umask 022/umask 027/g' -e 's/umask 002/umask 027/g' /etc/profile

# sed -i -e 's/umask 022/umask 027/g' -e 's/umask 002/umask 027/g' /etc/init.d/functions
2.6 Disable Core Dumps

Open /etc/security/limits.conf and set the following:

*  hard  core  0
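If systemd-coredump is in use on the host (it ships with systemd, although CentOS 7 commonly routes core dumps through abrt instead), its storage can be disabled as well; a short sketch:

# /etc/systemd/coredump.conf
[Coredump]
Storage=none
ProcessSizeMax=0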
2.7 Set Security Limits to Prevent DoS

Add the following to /etc/security/limits.conf to enforce sensible security limits:

# 4096 is a good starting point

*      soft   nofile    4096

*      hard   nofile    65536

*      soft   nproc     4096

*      hard   nproc     4096

*      soft   locks     4096

*      hard   locks     4096

*      soft   stack     10240

*      hard   stack     32768

*      soft   memlock   64

*      hard   memlock   64

*      hard   maxlogins 10



# Soft limit 32GB, hard 64GB

*      soft   fsize     33554432

*      hard   fsize     67108864



# Limits for root

root   soft   nofile    4096

root   hard   nofile    65536

root   soft   nproc     4096

root   hard   nproc     4096

root   soft   stack     10240

root   hard   stack     32768

root   soft   fsize     33554432
2.8 Verify Permissions of Files

Ensure that all files are owned by a user:

# find / -ignore_readdir_race -nouser -print -exec chown root {} \;

Ensure that all files are owned by a group:

# find / -ignore_readdir_race -nogroup -print -exec chgrp root {} \;

Automate the process by creating a cron file /etc/cron.daily/unowned_files with the following content:

#!/bin/bash

find / -ignore_readdir_race -nouser -print -exec chown root {} \;

find / -ignore_readdir_race -nogroup -print -exec chgrp root {} \;

Set ownership and permissions:

# chown root:root /etc/cron.daily/unowned_files

# chmod 0700 /etc/cron.daily/unowned_files
2.9 Monitor SUID/SGID Files

Search for setuid/setgid files and identify if all are required:

# find / -xdev -type f -perm -4000 -o -perm -2000
3. System Settings – Firewall and Network Configuration

3.1 Firewall

Setting the default firewalld zone to drop causes any packet which is not explicitly permitted to be dropped.

# sed -i "s/DefaultZone=.*/DefaultZone=drop/g" /etc/firewalld/firewalld.conf

Unless firewalld is required, mask it and replace with iptables:

# systemctl stop firewalld.service

# systemctl mask firewalld.service

# systemctl daemon-reload

# yum install iptables-services

# systemctl enable iptables.service ip6tables.service

Add the following to /etc/sysconfig/iptables to allow only minimal outgoing traffic (DNS, NTP, HTTP/S and SMTPS):

*filter

-F INPUT

-F OUTPUT

-F FORWARD

-P INPUT ACCEPT

-P FORWARD DROP

-P OUTPUT ACCEPT

-A INPUT -i lo -m comment --comment local -j ACCEPT

-A INPUT -d 127.0.0.0/8 ! -i lo -j REJECT --reject-with icmp-port-unreachable

-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

-A INPUT -p tcp -m tcp -m conntrack --ctstate NEW --dport 22 -s 10.0.0.0/8 -j ACCEPT

-A INPUT -p tcp -m tcp -m conntrack --ctstate NEW --dport 22 -s 172.16.0.0/12 -j ACCEPT

-A INPUT -p tcp -m tcp -m conntrack --ctstate NEW --dport 22 -s 192.168.0.0/16 -j ACCEPT

-A INPUT -p tcp -m tcp -m conntrack --ctstate NEW --dport 22 -j ACCEPT

-A INPUT -j DROP

-A OUTPUT -d 127.0.0.0/8 -o lo -m comment --comment local -j ACCEPT

-A OUTPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

-A OUTPUT -p icmp -m icmp --icmp-type any -j ACCEPT

-A OUTPUT -p udp -m udp -m conntrack --ctstate NEW --dport 53 -j ACCEPT

-A OUTPUT -p tcp -m tcp -m conntrack --ctstate NEW --dport 53 -j ACCEPT

-A OUTPUT -p udp -m udp -m conntrack --ctstate NEW --dport 123 -j ACCEPT

-A OUTPUT -p tcp -m tcp -m conntrack --ctstate NEW --dport 80 -j ACCEPT

-A OUTPUT -p tcp -m tcp -m conntrack --ctstate NEW --dport 443 -j ACCEPT

-A OUTPUT -p tcp -m tcp -m conntrack --ctstate NEW --dport 587 -j ACCEPT

-A OUTPUT -j LOG --log-prefix "iptables_output "

-A OUTPUT -j REJECT --reject-with icmp-port-unreachable

COMMIT

Note that the rule allowing all incoming SSH traffic should be removed, restricting access to an IP whitelist only, or hiding SSH behind a VPN.

Add the following to /etc/sysconfig/ip6tables to deny all IPv6:

*filter

-F INPUT

-F OUTPUT

-F FORWARD

-P INPUT DROP

-P FORWARD DROP

-P OUTPUT DROP

COMMIT

Apply configurations:

# iptables-restore < /etc/sysconfig/iptables

# ip6tables-restore < /etc/sysconfig/ip6tables
3.2 TCP Wrappers

Open /etc/hosts.allow and allow localhost traffic and SSH:

ALL: 127.0.0.1

sshd: ALL

The file /etc/hosts.deny should be configured to deny all by default:

ALL: ALL
3.3 Kernel Parameters Which Affect Networking

Open /etc/sysctl.conf and add the following:

# Disable packet forwarding

net.ipv4.ip_forward = 0



# Disable redirects, not a router

net.ipv4.conf.all.accept_redirects = 0

net.ipv4.conf.default.accept_redirects = 0

net.ipv4.conf.all.send_redirects = 0

net.ipv4.conf.default.send_redirects = 0

net.ipv4.conf.all.secure_redirects = 0

net.ipv4.conf.default.secure_redirects = 0

net.ipv6.conf.all.accept_redirects = 0

net.ipv6.conf.default.accept_redirects = 0



# Disable source routing

net.ipv4.conf.all.accept_source_route = 0

net.ipv4.conf.default.accept_source_route = 0

net.ipv6.conf.all.accept_source_route = 0



# Enable source validation by reversed path

net.ipv4.conf.all.rp_filter = 1

net.ipv4.conf.default.rp_filter = 1



# Log packets with impossible addresses to kernel log

net.ipv4.conf.all.log_martians = 1

net.ipv4.conf.default.log_martians = 1



# Disable ICMP broadcasts

net.ipv4.icmp_echo_ignore_broadcasts = 1



# Ignore bogus ICMP errors

net.ipv4.icmp_ignore_bogus_error_responses = 1



# Against SYN flood attacks

net.ipv4.tcp_syncookies = 1



# Turning off timestamps could improve security but degrade performance.

# TCP timestamps are used to improve performance as well as protect against

# late packets messing up your data flow. A side effect of this feature is 

# that the uptime of the host can sometimes be computed.

# If you disable TCP timestamps, you should expect worse performance 

# and less reliable connections.

net.ipv4.tcp_timestamps = 1



# Disable IPv6 unless required

net.ipv6.conf.lo.disable_ipv6 = 1

net.ipv6.conf.all.disable_ipv6 = 1

net.ipv6.conf.default.disable_ipv6 = 1



# Do not accept router advertisements

net.ipv6.conf.all.accept_ra = 0

net.ipv6.conf.default.accept_ra = 0
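As with the earlier kernel settings, load the new values without a reboot:

# sysctl -p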
3.4 Kernel Modules Which Affect Networking

Open /etc/modprobe.d/hardening.conf and disable Bluetooth kernel modules:

install bnep /bin/true

install bluetooth /bin/true

install btusb /bin/true

install net-pf-31 /bin/true

Also disable AppleTalk:

install appletalk /bin/true

Unless required, disable support for IPv6:

options ipv6 disable=1

Disable (uncommon) protocols:

install dccp /bin/true

install sctp /bin/true

install rds /bin/true

install tipc /bin/true

Since we're looking at server security, wireless shouldn't be an issue, therefore we can disable all the wireless drivers.

# for i in $(find /lib/modules/$(uname -r)/kernel/drivers/net/wireless -name "*.ko" -type f);do \

  echo blacklist "$i" >>/etc/modprobe.d/hardening-wireless.conf;done
3.5 Disable Radios

Disable radios (wifi and wwan):

# nmcli radio all off
3.6 Disable Zeroconf Networking

Open /etc/sysconfig/network and add the following:

NOZEROCONF=yes
3.7 Disable Interface Usage of IPv6

Open /etc/sysconfig/network and add the following:

NETWORKING_IPV6=no

IPV6INIT=no
3.8 Network Sniffer

The server should not be acting as a network sniffer and capturing packets. Run the following to determine if any interface is running in promiscuous mode:

# ip link | grep PROMISC
3.9 Secure VPN Connection

Install the libreswan package if implementation of IPsec and IKE is required.

# yum install libreswan
3.10 Disable DHCP Client

Manual assignment of IP addresses provides a greater degree of management.

For each network interface that is available on the server, open the corresponding file /etc/sysconfig/network-scripts/ifcfg-<interface> and configure the following parameters:

BOOTPROTO=none

IPADDR=

NETMASK=

GATEWAY=
4. System Settings – SELinux

Ensure that SELinux is not disabled in /etc/default/grub , and verify that the state is enforcing:

# sestatus
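If sestatus reports anything other than enforcing, set it persistently in /etc/selinux/config and switch the running system immediately; a minimal sketch:

# /etc/selinux/config
SELINUX=enforcing
SELINUXTYPE=targeted

# apply to the running kernel
setenforce 1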
5. System Settings – Account and Access Control

5.1 Delete Unused Accounts and Groups

Remove any account which is not required, e.g.:

# userdel -r adm

# userdel -r ftp

# userdel -r games

# userdel -r lp

Remove any group which is not required, e.g.:

# groupdel games
5.2 Disable Direct root Login
# echo > /etc/securetty
5.3 Enable Secure (high quality) Password Policy

Note that running authconfig will overwrite the PAM configuration files, destroying any manually made changes. Make sure that you have a backup.

Secure password policy rules are outlined below.

  1. Minimum length of a password – 16.
  2. Minimum number of character classes in a password – 4.
  3. Maximum number of same consecutive characters in a password – 2.
  4. Maximum number of consecutive characters of same class in a password – 2.
  5. Require at least one lowercase and one uppercase characters in a password.
  6. Require at least one digit in a password.
  7. Require at least one other character in a password.

The following command will enable SHA512 as well as set the above password requirements:

# authconfig --passalgo=sha512 \

 --passminlen=16 \

 --passminclass=4 \

 --passmaxrepeat=2 \

 --passmaxclassrepeat=2 \

 --enablereqlower \

 --enablerequpper \

 --enablereqdigit \

 --enablereqother \

 --update

Open /etc/security/pwquality.conf and add the following:

difok = 8

gecoscheck = 1

These ensure that at least 8 characters in the new password are not present in the old password, and that words from the user's passwd GECOS entry are checked against the new password.

5.4 Prevent Log In to Accounts With Empty Password

Remove any instances of nullok from /etc/pam.d/system-auth and /etc/pam.d/password-auth to prevent logins with empty passwords.

Sed one-liner:

# sed -i 's/\<nullok\>//g' /etc/pam.d/system-auth /etc/pam.d/password-auth
5.5 Set Account Expiration Following Inactivity

Disable accounts as soon as the password has expired.

Open /etc/default/useradd and set the following:

INACTIVE=0

Sed one-liner:

# sed -i 's/^INACTIVE.*/INACTIVE=0/' /etc/default/useradd
5.6 Secure Password Policy

Open /etc/login.defs and set the following:

PASS_MAX_DAYS 60

PASS_MIN_DAYS 1

PASS_MIN_LEN 14

PASS_WARN_AGE 14

Sed one-liner:

# sed -i -e 's/^PASS_MAX_DAYS.*/PASS_MAX_DAYS 60/' \

  -e 's/^PASS_MIN_DAYS.*/PASS_MIN_DAYS 1/' \

  -e 's/^PASS_MIN_LEN.*/PASS_MIN_LEN 14/' \

  -e 's/^PASS_WARN_AGE.*/PASS_WARN_AGE 14/' /etc/login.defs
5.7 Log Failed Login Attempts

Open /etc/login.defs and enable logging:

FAILLOG_ENAB yes

Also add a delay in seconds before being allowed another attempt after a login failure:

FAIL_DELAY 4
5.8 Ensure Home Directories are Created for New Users

Open /etc/login.defs and configure:

CREATE_HOME yes
5.9 Verify All Account Password Hashes are Shadowed

The command below should return "x":

# cut -d: -f2 /etc/passwd|uniq
5.10 Set Deny and Lockout Time for Failed Password Attempts

Add the following line immediately before the pam_unix.so statement in the AUTH section of /etc/pam.d/system-auth and /etc/pam.d/password-auth :

auth required pam_faillock.so preauth silent deny=3 unlock_time=900 fail_interval=900

Add the following line immediately after the pam_unix.so statement in the AUTH section of /etc/pam.d/system-auth and /etc/pam.d/password-auth :

auth [default=die] pam_faillock.so authfail deny=3 unlock_time=900 fail_interval=900

Add the following line immediately before the pam_unix.so statement in the ACCOUNT section of /etc/pam.d/system-auth and /etc/pam.d/password-auth :

account required pam_faillock.so

The content of the file /etc/pam.d/system-auth can be seen below.

#%PAM-1.0

auth        required      pam_env.so

auth        required      pam_faillock.so preauth silent deny=3 unlock_time=900 fail_interval=900

auth        sufficient    pam_unix.so  try_first_pass

auth        [default=die] pam_faillock.so authfail deny=3 unlock_time=900 fail_interval=900

auth        requisite     pam_succeed_if.so uid >= 1000 quiet_success

auth        required      pam_deny.so



account     required      pam_unix.so

account     required      pam_faillock.so

account     sufficient    pam_localuser.so

account     sufficient    pam_succeed_if.so uid < 1000 quiet

account     required      pam_permit.so



password    requisite     pam_pwquality.so try_first_pass local_users_only retry=3 authtok_type=

password    sufficient    pam_unix.so sha512 shadow  try_first_pass use_authtok remember=5

password    required      pam_deny.so



session     optional      pam_keyinit.so revoke

session     required      pam_limits.so

-session    optional      pam_systemd.so

session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid

session     required      pam_unix.so

Also, do not allow users to reuse recent passwords by adding the remember option.

Make /etc/pam.d/system-auth and /etc/pam.d/password-auth configurations immutable so that they don't get overwritten when authconfig is run:

# chattr +i /etc/pam.d/system-auth /etc/pam.d/password-auth

Accounts will get locked after 3 failed login attempts:

login[]: pam_faillock(login:auth): Consecutive login failures for user tomas account temporarily locked

Use the following to clear user's fail count:

# faillock --user tomas --reset
5.11 Set Boot Loader Password

Prevent users from entering the grub command line and edit menu entries:

# grub2-setpassword

# grub2-mkconfig -o /boot/grub2/grub.cfg

This will create the file /boot/grub2/user.cfg if one is not already present, which will contain the hashed Grub2 bootloader password.

Verify permissions of /boot/grub2/grub.cfg :

# chmod 0600 /boot/grub2/grub.cfg
5.12 Password-protect Single User Mode

CentOS 7 single user mode is password protected by the root password by default as part of the design of Grub2 and systemd.

5.13 Ensure Users Re-Authenticate for Privilege Escalation

The NOPASSWD tag allows a user to execute commands using sudo without having to provide a password. While this may sometimes be useful, it is also dangerous.

Ensure that the NOPASSWD tag does not exist in /etc/sudoers configuration file or /etc/sudoers.d/ .
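A quick way to check (a sketch; the sudoers files are only readable by root, so run it as root):

# grep -r "NOPASSWD" /etc/sudoers /etc/sudoers.d/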

5.14 Multiple Console Screens and Console Locking

Install the screen package to be able to emulate multiple console windows:

# yum install screen

Install the vlock package to enable console screen locking:

# yum install vlock
5.15 Disable Ctrl-Alt-Del Reboot Activation

Prevent a locally logged-in console user from rebooting the system when Ctrl-Alt-Del is pressed:

# systemctl mask ctrl-alt-del.target
5.16 Warning Banners for System Access

Add the following line to the files /etc/issue and /etc/issue.net :

Unauthorised access prohibited. Logs are recorded and monitored.
5.17 Set Interactive Session Timeout

Open /etc/profile and set:

readonly TMOUT=900
5.18 Two Factor Authentication

Recent versions of the OpenSSH server allow chaining several authentication methods, meaning that all of them have to be satisfied in order for a user to log in successfully.

Adding the following line to /etc/ssh/sshd_config would require a user to authenticate with a key first, and then also provide a password.

AuthenticationMethods publickey,password

This is by definition a two factor authentication: the key file is something that a user has, and the account password is something that a user knows.

Alternatively, two factor authentication for SSH can be set up by using Google Authenticator.

5.19 Configure History File Size

Open /etc/profile and set the number of commands to remember in the command history to 5000:

HISTSIZE=5000

Sed one-liner:

# sed -i 's/HISTSIZE=.*/HISTSIZE=5000/g' /etc/profile
6. System Settings – System Accounting with auditd

6.1 Auditd Configuration

Open /etc/audit/auditd.conf and configure the following:

local_events = yes

write_logs = yes

log_file = /var/log/audit/audit.log

max_log_file = 25

num_logs = 10

max_log_file_action = rotate

space_left = 30

space_left_action = email

admin_space_left = 10

admin_space_left_action = email

disk_full_action = suspend

disk_error_action = suspend

action_mail_acct = root@example.com

flush = data

The above auditd configuration should never use more than 250MB of disk space (10x25MB=250MB) on /var/log/audit .

Set admin_space_left_action=single if you want to cause the system to switch to single user mode for corrective action rather than send an email.

Automatically rotating logs ( max_log_file_action=rotate ) minimises the chances of the system unexpectedly running out of disk space by being filled up with log data.

We need to ensure that audit event data is fully synchronised ( flush=data ) with the log files on the disk .
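After editing auditd.conf the daemon must be restarted to pick up the changes; on RHEL/CentOS 7 the auditd unit refuses a manual stop via systemctl, so the legacy service wrapper is normally used:

# service auditd restart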

6.2 Auditd Rules

System audit rules must have mode 0640 or less permissive and owned by the root user:

# chown root:root /etc/audit/rules.d/audit.rules

# chmod 0600 /etc/audit/rules.d/audit.rules

Open /etc/audit/rules.d/audit.rules and add the following:

# Delete all currently loaded rules

-D



# Set kernel buffer size

-b 8192



# Set the action that is performed when a critical error is detected.

# Failure modes: 0=silent 1=printk 2=panic

-f 1



# Record attempts to alter the localtime file

-w /etc/localtime -p wa -k audit_time_rules



# Record events that modify user/group information

-w /etc/group -p wa -k audit_rules_usergroup_modification

-w /etc/passwd -p wa -k audit_rules_usergroup_modification

-w /etc/gshadow -p wa -k audit_rules_usergroup_modification

-w /etc/shadow -p wa -k audit_rules_usergroup_modification

-w /etc/security/opasswd -p wa -k audit_rules_usergroup_modification



# Record events that modify the system's network environment

-w /etc/issue.net -p wa -k audit_rules_networkconfig_modification

-w /etc/issue -p wa -k audit_rules_networkconfig_modification

-w /etc/hosts -p wa -k audit_rules_networkconfig_modification

-w /etc/sysconfig/network -p wa -k audit_rules_networkconfig_modification

-a always,exit -F arch=b32 -S sethostname -S setdomainname -k audit_rules_networkconfig_modification

-a always,exit -F arch=b64 -S sethostname -S setdomainname -k audit_rules_networkconfig_modification



# Record events that modify the system's mandatory access controls

-w /etc/selinux/ -p wa -k MAC-policy



# Record attempts to alter logon and logout events

-w /var/log/tallylog -p wa -k logins

-w /var/log/lastlog -p wa -k logins

-w /var/run/faillock/ -p wa -k logins



# Record attempts to alter process and session initiation information

-w /var/log/btmp -p wa -k session

-w /var/log/wtmp -p wa -k session

-w /var/run/utmp -p wa -k session



# Ensure auditd collects information on kernel module loading and unloading

-w /usr/sbin/insmod -p x -k modules

-w /usr/sbin/modprobe -p x -k modules

-w /usr/sbin/rmmod -p x -k modules

-a always,exit -F arch=b64 -S init_module -S delete_module -k modules



# Ensure auditd collects system administrator actions

-w /etc/sudoers -p wa -k actions



# Record attempts to alter time through adjtimex

-a always,exit -F arch=b32 -S adjtimex -S settimeofday -S stime -k audit_time_rules



# Record attempts to alter time through settimeofday

-a always,exit -F arch=b64 -S adjtimex -S settimeofday -k audit_time_rules



# Record attempts to alter time through clock_settime

-a always,exit -F arch=b32 -S clock_settime -F a0=0x0 -k time-change



# Record attempts to alter time through clock_settime

-a always,exit -F arch=b64 -S clock_settime -F a0=0x0 -k time-change



# Record events that modify the system's discretionary access controls

-a always,exit -F arch=b32 -S chmod -S fchmod -S fchmodat -F auid>=1000 -F auid!=4294967295 -k perm_mod

-a always,exit -F arch=b32 -S chown -S fchown -S fchownat -S lchown -F auid>=1000 -F auid!=4294967295 -k perm_mod

-a always,exit -F arch=b64 -S chmod -S fchmod -S fchmodat -F auid>=1000 -F auid!=4294967295 -k perm_mod

-a always,exit -F arch=b64 -S chown -S fchown -S fchownat -S lchown -F auid>=1000 -F auid!=4294967295 -k perm_mod

-a always,exit -F arch=b32 -S setxattr -S lsetxattr -S fsetxattr -S removexattr -S lremovexattr -S fremovexattr -F auid>=1000 -F auid!=4294967295 -k perm_mod

-a always,exit -F arch=b64 -S setxattr -S lsetxattr -S fsetxattr -S removexattr -S lremovexattr -S fremovexattr -F auid>=1000 -F auid!=4294967295 -k perm_mod



# Ensure auditd collects unauthorised access attempts to files (unsuccessful)

-a always,exit -F arch=b32 -S creat -S open -S openat -S open_by_handle_at -S truncate -S ftruncate -F exit=-EACCES -F auid>=1000 -F auid!=4294967295 -k access

-a always,exit -F arch=b32 -S creat -S open -S openat -S open_by_handle_at -S truncate -S ftruncate -F exit=-EPERM -F auid>=1000 -F auid!=4294967295 -k access

-a always,exit -F arch=b64 -S creat -S open -S openat -S open_by_handle_at -S truncate -S ftruncate -F exit=-EACCES -F auid>=1000 -F auid!=4294967295 -k access

-a always,exit -F arch=b64 -S creat -S open -S openat -S open_by_handle_at -S truncate -S ftruncate -F exit=-EPERM -F auid>=1000 -F auid!=4294967295 -k access



# Ensure auditd collects information on exporting to media (successful)

-a always,exit -F arch=b32 -S mount -F auid>=1000 -F auid!=4294967295 -k export

-a always,exit -F arch=b64 -S mount -F auid>=1000 -F auid!=4294967295 -k export



# Ensure auditd collects file deletion events by user

-a always,exit -F arch=b32 -S rmdir -S unlink -S unlinkat -S rename -S renameat -F auid>=1000 -F auid!=4294967295 -k delete

-a always,exit -F arch=b64 -S rmdir -S unlink -S unlinkat -S rename -S renameat -F auid>=1000 -F auid!=4294967295 -k delete



# Ensure auditd collects information on the use of privileged commands

-a always,exit -F path=/usr/bin/chage -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/chcon -F perm=x -F auid>=1000 -F auid!=4294967295 -F key=privileged-priv_change

-a always,exit -F path=/usr/bin/chfn -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/chsh -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/crontab -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/gpasswd -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/mount -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/newgrp -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/passwd -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/pkexec -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/screen -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/ssh-agent -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/sudo -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/sudoedit -F perm=x -F auid>=1000 -F auid!=4294967295 -F key=privileged

-a always,exit -F path=/usr/bin/su -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/umount -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/wall -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/write -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/lib64/dbus-1/dbus-daemon-launch-helper -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/libexec/openssh/ssh-keysign -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/libexec/utempter/utempter -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/lib/polkit-1/polkit-agent-helper-1 -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/sbin/netreport -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/sbin/pam_timestamp_check -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/sbin/postdrop -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/sbin/postqueue -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/sbin/restorecon -F perm=x -F auid>=1000 -F auid!=4294967295 -F key=privileged-priv_change

-a always,exit -F path=/usr/sbin/semanage -F perm=x -F auid>=1000 -F auid!=4294967295 -F key=privileged-priv_change

-a always,exit -F path=/usr/sbin/setsebool -F perm=x -F auid>=1000 -F auid!=4294967295 -F key=privileged-priv_change

-a always,exit -F path=/usr/sbin/unix_chkpwd -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/sbin/userhelper -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/sbin/usernetctl -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged



# Make the auditd configuration immutable.

# The configuration can only be changed by rebooting the machine.

-e 2

The auditd service does not natively include the ability to send audit records to a centralised server for management.

It does, however, include a plug-in for the audit event multiplexor (audispd) that can pass audit records to the local syslog server.

To do so, open the file /etc/audisp/plugins.d/syslog.conf and set:

active = yes

Enable and start the service:

# systemctl enable auditd.service

# systemctl start auditd.service
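
The rules above tag records with keys (-k), which makes them easy to query afterwards. A few illustrative read-only commands, assuming the key names used in the rule file above:

# auditctl -l                           # list the rules currently loaded in the kernel
# ausearch -k privileged --start today  # records tagged with the "privileged" key
# ausearch -k delete -i                 # file deletion events, with numeric fields interpreted
# aureport --summary                    # high-level summary of collected audit events
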
6.3 Enable Kernel Auditing

Open /etc/default/grub and append the following parameter to the kernel boot line GRUB_CMDLINE_LINUX:

audit=1

Update Grub2 configuration to reflect changes:

# grub2-mkconfig -o /boot/grub2/grub.cfg
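
As an alternative to editing /etc/default/grub by hand, the same result can be achieved with grubby, which edits the kernel command line of the installed boot entries directly (a sketch; verify the resulting entries afterwards):

# grubby --update-kernel=ALL --args="audit=1"
# grubby --info=ALL | grep args
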
7. System Settings – Software Integrity Checking
7.1 Advanced Intrusion Detection Environment (AIDE)

Install AIDE:

# yum install aide

Build AIDE database:

# /usr/sbin/aide --init

By default, the database will be written to the file /var/lib/aide/aide.db.new.gz .

# cp /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz

Storing the database and the configuration file /etc/aide.conf (or SHA2 hashes of the files) in a secure location provides additional assurance about their integrity.

Check AIDE database:

# /usr/sbin/aide --check

By default, AIDE does not install itself for periodic execution. Configure periodic execution of AIDE by adding to cron:

# echo "30 4 * * * root /usr/sbin/aide --check|mail -s 'AIDE' root@example.com" >> /etc/crontab

Periodically running AIDE is necessary in order to reveal system changes.
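
After legitimate changes (for example package updates), the AIDE database has to be regenerated, otherwise every subsequent check keeps reporting the same differences. A minimal sketch:

# /usr/sbin/aide --update
# cp /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz
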

7.2 Tripwire

Open Source Tripwire is an alternative to AIDE. It is recommended to use one or the other, but not both.

Install Tripwire from the EPEL repository:

# yum install epel-release

# yum install tripwire

# /usr/sbin/tripwire-setup-keyfiles

The Tripwire configuration file is /etc/tripwire/twcfg.txt and the policy file is /etc/tripwire/twpol.txt. These can be edited to match the system Tripwire is installed on; see this blog post for more details.

Initialise the database to implement the policy:

# tripwire --init

Check for policy violations:

# tripwire --check

Tripwire adds itself to /etc/cron.daily/ for daily execution therefore no extra configuration is required.

7.3 Prelink

Prelinking is done by the prelink package, which is not installed by default.

# yum install prelink

To disable prelinking, open the file /etc/sysconfig/prelink and set the following:

PRELINKING=no

Sed one-liner:

# sed -i 's/PRELINKING.*/PRELINKING=no/g' /etc/sysconfig/prelink

Disable existing prelinking on all system files:

# prelink -ua
8. System Settings – Logging and Message Forwarding
8.1 Configure Persistent Journald Storage

By default, the journal stores log files only in memory or in a small ring buffer under /run/log/journal. This is sufficient to show recent history with journalctl, but logs are not preserved across reboots. Enabling persistent journal storage ensures that comprehensive log data remains available after a system reboot.

Open the file /etc/systemd/journald.conf and put the following:

[Journal]

Storage=persistent



# How much disk space the journal may use up at most

SystemMaxUse=256M



# How much disk space systemd-journald shall leave free for other uses

SystemKeepFree=512M



# How large individual journal files may grow at most

SystemMaxFileSize=32M

Restart the service:

# systemctl restart systemd-journald
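
A few journalctl commands are useful for confirming that persistent storage is active and within the configured limits:

# journalctl --disk-usage      # total space used by archived and active journal files
# journalctl --verify          # check journal files for internal consistency
# journalctl -b -p err         # errors and worse from the current boot
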
8.2 Configure Message Forwarding to Remote Server

Depending on your setup, open /etc/rsyslog.conf and add the following to forward messages to a remote server:

*.* @graylog.example.com:514

Here *.* stands for facility.severity. Note that a single @ sends logs over UDP, whereas a double @@ sends logs over TCP.
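
When forwarding over TCP, it is also worth configuring an on-disk action queue so that messages are buffered locally if the remote server is temporarily unreachable. A sketch using rsyslog's legacy directives (the queue file name fwdq and the 256m limit are arbitrary examples):

$ActionQueueType LinkedList
$ActionQueueFileName fwdq
$ActionQueueMaxDiskSpace 256m
$ActionQueueSaveOnShutdown on
$ActionResumeRetryCount -1
*.* @@graylog.example.com:514

Restart rsyslog afterwards:

# systemctl restart rsyslog.service
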

8.3 Logwatch

Logwatch is a customisable log-monitoring system.

# yum install logwatch

Logwatch adds itself to /etc/cron.daily/ for daily execution therefore no configuration is mandatory.

9. System Settings – Security Software
9.1 Malware Scanners

Install Rkhunter and ClamAV:

# yum install epel-release

# yum install rkhunter clamav clamav-update

# rkhunter --update

# rkhunter --propupd

# freshclam -v

Rkhunter adds itself to /etc/cron.daily/ for daily execution therefore no configuration is required. ClamAV scans should be tailored to individual needs.
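
As an illustration of how a ClamAV scan could be scheduled, the entry below runs a recursive scan of /home every Sunday night and mails the result to root. The path and schedule are assumptions to adapt; mailing the output requires a working MTA and the mailx package.

# echo "0 3 * * 0 root /usr/bin/clamscan -ri /home 2>&1 | mail -s 'ClamAV' root@example.com" >> /etc/crontab
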

9.2 Arpwatch

Arpwatch is a tool for monitoring ARP activity on a local network (ARP spoofing detection). It is therefore unlikely to be used in the cloud; however, it is still worth mentioning that the tool exists.

Be aware of the configuration file /etc/sysconfig/arpwatch, which is used to set the email address that reports are sent to.

9.3 Commercial AV

Consider installing a commercial AV product that provides real-time on-access scanning capabilities.

9.4 Grsecurity

Grsecurity is an extensive security enhancement to the Linux kernel. Although it isn't free nowadays, the software is still worth mentioning.

The company behind Grsecurity stopped publicly distributing stable patches back in 2015, with the exception of the test series, which continued to be available to the public in order to avoid impact on the Gentoo Hardened and Arch Linux communities.

Two years later, the company decided to cease free distribution of the test patches as well, therefore as of 2017, Grsecurity software is available to paying customers only.

10. System Settings – OS Update Installation

Install the package yum-utils for better consistency checking of the package database.

# yum install yum-utils
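
A few of the yum-utils tools that are handy for the consistency checking mentioned above:

# package-cleanup --problems   # report dependency problems in the local rpm database
# package-cleanup --dupes      # list duplicate packages
# needs-restarting             # list running processes that use files updated or removed since they started
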

Configure automatic package updates via yum-cron.

# yum install yum-cron

Add the following to /etc/yum/yum-cron.conf to get notified via email when new updates are available:

update_cmd = default

update_messages = yes

download_updates = no	

apply_updates = no

emit_via = email	

email_from = root@example.com

email_to = user@example.com

email_host = localhost

Add the following to /etc/yum/yum-cron-hourly.conf to check for security-related updates every hour and automatically download and install them:

update_cmd = security

update_messages = yes

download_updates = yes

apply_updates = yes

emit_via = stdio

Enable and start the service:

# systemctl enable yum-cron.service

# systemctl start yum-cron.service
11. System Settings – Process Accounting

The psacct package contains utilities for monitoring process activity:

  1. ac – displays statistics about how long users have been logged on.
  2. lastcomm – displays information about previously executed commands.
  3. accton – turns process accounting on or off.
  4. sa – summarises information about previously executed commands.

Install and enable the service:

# yum install psacct

# systemctl enable psacct.service

# systemctl start psacct.service
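
Once accounting data has been collected, the utilities listed above can be used as follows (illustrative examples):

# ac -p            # total connect time per user
# lastcomm root    # commands previously executed by root
# sa -m            # per-user summary of processes and CPU time
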
1. Services – SSH Server

Create a group for SSH access as well as a regular user account that will be a member of the group:

# groupadd ssh-users

# useradd -m -s /bin/bash -G ssh-users tomas

Generate SSH keys for the user:

# su - tomas

$ mkdir --mode=0700 ~/.ssh

$ ssh-keygen -b 4096 -t rsa -C "tomas" -f ~/.ssh/id_rsa

Generate SSH host keys:

# ssh-keygen -b 4096 -t rsa -N "" -f /etc/ssh/ssh_host_rsa_key

# ssh-keygen -b 1024 -t dsa -N "" -f /etc/ssh/ssh_host_dsa_key

# ssh-keygen -b 521 -t ecdsa -N "" -f /etc/ssh/ssh_host_ecdsa_key

# ssh-keygen -t ed25519 -N "" -f /etc/ssh/ssh_host_ed25519_key

For RSA keys, 2048 bits is considered sufficient. DSA keys must be exactly 1024 bits as specified by FIPS 186-2.

For ECDSA keys, the -b flag determines the key length by selecting from one of three elliptic curve sizes: 256, 384 or 521 bits. ED25519 keys have a fixed length and the -b flag is ignored.

The host can be impersonated if an unauthorised user obtains the private SSH host key file, therefore ensure that permissions of /etc/ssh/*_key are properly set:

# chmod 0600 /etc/ssh/*_key

Configure /etc/ssh/sshd_config with the following:

# SSH port

Port 22



# Listen on IPv4 only

ListenAddress 0.0.0.0



# Protocol version 1 is obsolete and has known weaknesses

Protocol 2



# Limit the ciphers to those which are FIPS-approved, the AES and 3DES ciphers

# Counter (CTR) mode is preferred over cipher-block chaining (CBC) mode

Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,aes192-cbc,aes256-cbc,3des-cbc



# Use FIPS-approved MACs

MACs hmac-sha2-512,hmac-sha2-256,hmac-sha1



# INFO is a basic logging level that will capture user login/logout activity

# DEBUG logging level is not recommended for production servers

LogLevel INFO



# Disconnect if no successful login is made in 60 seconds

LoginGraceTime 60



# Do not permit root logins via SSH

PermitRootLogin no



# Check file modes and ownership of the user's files before login

StrictModes yes



# Close TCP socket after 2 invalid login attempts

MaxAuthTries 2



# The maximum number of sessions per network connection

MaxSessions 2



# User/group permissions

# Uncomment AllowUsers and list specific users if needed; group-based access is enforced below
#AllowUsers

AllowGroups ssh-users

DenyUsers root

DenyGroups root



# Password and public key authentications

PasswordAuthentication no

PermitEmptyPasswords no

PubkeyAuthentication yes

AuthorizedKeysFile  .ssh/authorized_keys



# Disable unused authentications mechanisms

# RSAAuthentication and RhostsRSAAuthentication are deprecated in recent OpenSSH releases
RSAAuthentication no

RhostsRSAAuthentication no

ChallengeResponseAuthentication no

KerberosAuthentication no

GSSAPIAuthentication no

HostbasedAuthentication no

IgnoreUserKnownHosts yes



# Disable insecure access via rhosts files

IgnoreRhosts yes



AllowAgentForwarding no

AllowTcpForwarding no



# Disable X Forwarding

X11Forwarding no



# Disable message of the day but print last log

PrintMotd no

PrintLastLog yes



# Show banner

Banner /etc/issue



# Do not send TCP keepalive messages

TCPKeepAlive no



# Default for new installations

UsePrivilegeSeparation sandbox



# Prevent users from potentially bypassing some access restrictions

PermitUserEnvironment no



# Disable compression

Compression no



# Disconnect the client if no activity has been detected for 900 seconds

ClientAliveInterval 900

ClientAliveCountMax 0



# Do not look up the remote hostname

UseDNS no



UsePAM yes

In case you want to change the default SSH port to something else, you will need to tell SELinux about it.

# yum install policycoreutils-python

For example, to allow SSH server to listen on TCP 2222, do the following:

# semanage port -a -t ssh_port_t 2222 -p tcp

Ensure that the firewall allows incoming traffic on the new SSH port and restart the sshd service.
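
A minimal sketch of the firewall and service steps, assuming the default firewalld is in use and the example port 2222 from above:

# firewall-cmd --permanent --add-port=2222/tcp
# firewall-cmd --reload
# sshd -t                        # syntax-check the new sshd_config before restarting
# systemctl restart sshd.service
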

2. Service – Network Time Protocol

CentOS 7 ships with Chrony; make sure that the service is enabled:

# systemctl enable chronyd.service
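
To verify that time synchronisation is actually working once chronyd is running:

# systemctl start chronyd.service
# chronyc sources -v    # list configured NTP sources and their reachability
# chronyc tracking      # current offset and synchronisation status
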
3. Services – Mail Server
3.1 Postfix

Postfix should be installed and enabled already. In case it isn't, do the following:

# yum install postfix

# systemctl enable postfix.service

Open /etc/postfix/main.cf and configure the following to act as a null client:

smtpd_banner = $myhostname ESMTP

inet_interfaces = loopback-only

inet_protocols = ipv4

mydestination =

local_transport = error: local delivery disabled

unknown_local_recipient_reject_code = 550

mynetworks = 127.0.0.0/8

relayhost = [mail.example.com]:587

Optionally (depending on your setup), you can configure Postfix to use authentication:

# yum install cyrus-sasl-plain

Open /etc/postfix/main.cf and add the following:

smtp_sasl_auth_enable = yes

smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd

smtp_sasl_security_options = noanonymous

smtp_tls_CApath = /etc/ssl/certs

smtp_use_tls = yes

Open /etc/postfix/sasl_passwd and put the authentication credentials in the following format:

[mail.example.com]:587 user@example.com:password

Set permissions and create a database file:

# chmod 0600 /etc/postfix/sasl_passwd

# postmap /etc/postfix/sasl_passwd

Restart the service and ensure that firewall allows outgoing traffic to the SMTP relay server.
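
A short sketch of restarting Postfix and verifying that relaying works (sending a test message assumes the mailx package is installed):

# systemctl restart postfix.service
# echo "relay test" | mail -s "relay test" user@example.com
# postqueue -p             # anything stuck in the queue indicates a relay or firewall problem
# tail /var/log/maillog
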

3.2 Mail Distribution to Active Mail Accounts

Configure the file /etc/aliases to have a forward rule for the root user.
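
For example (the destination address below is an assumption), add a forward rule to /etc/aliases:

root: user@example.com

Then rebuild the aliases database:

# newaliases
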

4. Services – Remove Obsolete Services

None of these should be installed on CentOS 7 minimal:

# yum erase xinetd telnet-server rsh-server \

  telnet rsh ypbind ypserv tftp-server bind \

  vsftpd dovecot squid net-snmp talk-server talk

Check all enabled services:

# systemctl list-unit-files --type=service|grep enabled

Disable kernel dump service:

# systemctl disable kdump.service

# systemctl mask kdump.service

Disable everything that is not required, e.g.:

# systemctl disable tuned.service
5. Services – Restrict at and cron to Authorised Users

If the file cron.allow exists, then only users listed in the file are allowed to use cron, and the cron.deny file is ignored.

# echo root > /etc/cron.allow

# echo root > /etc/at.allow

# rm -f /etc/at.deny /etc/cron.deny

Note that the root user can always use cron, regardless of the usernames listed in the access control files.

6. Services – Disable X Windows Startup

This can be achieved by setting a default target:

# systemctl set-default multi-user.target
7. Services – Fail2ban

Install Fail2ban from the EPEL repository:

# yum install epel-release

# yum install fail2ban

If using iptables rather than firewalld, open the file /etc/fail2ban/jail.d/00-firewalld.conf and comment out the following line:

#banaction = firewallcmd-ipset

Fail2Ban uses /etc/fail2ban/jail.conf. A configuration snippet for SSH is provided below:

[sshd]

port    = ssh

enabled = true

ignoreip = 10.8.8.61

bantime  = 600

maxretry = 5

If you run SSH on a non-default port, you can change the port value to any positive integer and then enable the jail.

# systemctl enable fail2ban.service
# systemctl start fail2ban.service
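
Once the service is running, fail2ban-client can be used to inspect and manage jails; the IP address below is just an example:

# fail2ban-client status                      # list configured jails
# fail2ban-client status sshd                 # currently banned hosts for the sshd jail
# fail2ban-client set sshd unbanip 10.8.8.61
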
8. Services – Sysstat to Collect Performance Activity

Sysstat can provide useful insight into system usage and performance; however, unless it is actually used, the service should be disabled or not installed at all.

# yum install sysstat
# systemctl enable sysstat.service
# systemctl start sysstat.service
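
With the service running, sar can be used to query the collected data, for example:

# sar -u 1 3     # CPU utilisation, three samples one second apart
# sar -r         # memory usage from today's collected data
# sar -n DEV     # per-interface network statistics
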
References

[Sep 18, 2017] Kickstart File Example

Kickstart File Example

Below is an example of a kickstart file that you can use to install and configure Parallels Cloud Server in unattended mode. You can use this file as the basis for creating your own kickstart files.

# Install Parallels Cloud Server

install

# Uncomment the line below to install Parallels Cloud Server in a completely unattended mode

# cmdline

# Use the path of http://example.com/pcs to get the installation files.

url --url http://example.com/pcs

# Use English as the language during the installation and as the default system language.

lang en_US.UTF-8

# Use the English keyboard type.

keyboard us

# Uncomment the line below to remove all partitions from the SDA hard drive and create these partitions: /boot, /, /vz, and swap.

# clearpart --drives=sda --all --initlabel

# zerombr

part /boot --fstype=ext4 --size=512

part / --fstype=ext4 --size=20096

part /vz --fstype=ext4 --size=40768 --grow

part swap --size=4000

# Use a DHCP server to obtain network configuration.

network --bootproto dhcp

# Set the root password for the server.

rootpw xxxxxxxxx

# Use the SHA-512 encryption for user passwords and enable shadow passwords.

auth --enableshadow --passalgo=sha512

# Set the system time zone to America/New York and the hardware clock to UTC.

timezone --utc America/New_York

# Set sda as the first drive in the BIOS boot order and write the boot record to mbr.

bootloader --location=mbr

# Tell the Parallels Cloud Server installer to reboot the system after installation.

reboot

# Install the Parallels Cloud Server license on the server.

key XXXXXX-XXXXXX-XXXXXX-XXXXXX-XXXXXX

# Create the virt_network1 Virtual Network on the server and associate it with the network adapter eth0.

vznetcfg --net=virt_network1:eth0

# Configure the ip_tables ipt_REJECT ipt_tos ipt_limit modules to be loaded in Containers.

# Use the http://myrepository.com to handle Fedora OS and application templates.

vztturlmap $FC_SERVER http://myrepository.com

# Install the listed EZ templates. Cache all OS templates after installation. Skip the installation of pre-created templates.

nosfxtemplate

%eztemplates --cache

centos-6-x86_64

centos-6-x86

mailman-centos-6-x86_64

mailman-centos-6-x86

# Install the packages for Parallels Cloud Server on the server.

%packages

@base

@core

@vz

@ps

Kickstart file example for installing on EFI-based systems

You can use the file above to install Parallels Cloud Server on BIOS-based systems. For installation on EFI-based systems, you need to modify the following places in the file (the changes are highlighted in bold):

# The following 4 commands are used to remove all partitions from the SDA hard drive and create these partitions: /boot/efi (required for EFI-based systems), /boot, /, /vz, and swap.

# clearpart --drives=sda --all --initlabel

part /boot/efi --fstype=efi --grow --maxsize=200 --size=20

part /boot --fstype=ext4 --size=512

part / --fstype=ext4 --size=20096

part /vz --fstype=ext4 --size=40768 --grow

part swap --size=4000

# Configure the bootloader.

bootloader --location=partition

Kickstart file example for upgrading to Parallels Cloud Server 6.0

Below is an example of a kickstart file you can use to upgrade your system to Parallels Cloud Server 6.0.

# Upgrade to Parallels Cloud Server rather than perform a fresh installation.

upgrade

# Use the path of http://example.com/pcs to get the installation files.

url --url http://example.com/pcs

# Use English as the language during the upgrade and as the default system language.

lang en_US.UTF-8

# Use the English keyboard type.

keyboard us

# Set the system time zone to America/New York and the hardware clock to UTC.

timezone --utc America/New_York

# Upgrade the bootloader configuration.

bootloader --upgrade

[Sep 18, 2017] RHEL ISO with kickstart file

Aug 06, 2017 | unix.stackexchange.com

Hugo , asked Jan 7 '15 at 16:36

I am trying to edit the original RHEL 6.5 DVD (rhel-server-6.5-x86_64-dvd.iso) from Red Hat in order to add a kickstart file to it. The goal is to have a single 3.4GB ISO with automatic install, rather than separate boot media and a DVD.

This technique is not officially supported by Red Hat, but I found a procedure: https://access.redhat.com/solutions/60959

My ks.cfg looks like:

install
cdrom
repo --name="Red Hat Enterprise Linux"  --baseurl=file:/mnt/source --cost=100
repo --name=HighAvailability --baseurl=file:///mnt/source/HighAvailability

I get an error when the installer starts: it doesn't find the "Red Hat Enterprise Linux" disc.

I guess this is because the installer is not looking at its own media.

Is there a way to achieve this? Does the cdrom command have an optional parameter to hard-link the device?

tonioc , answered Jan 7 '15 at 17:30

You don't need to set the repo URLs in ks.cfg; here is an example of a kickstart file that I currently use with RHEL 6.
# interactive install from CD-ROM/DVD
interactive
install
cdrom

key --skip
lang en_US.UTF-8
# keyboard us

#
clearpart --all --initlabel
part /boot --fstype ext4 --size=100
part pv.100000 --size=1 --grow
volgroup vg00 --pesize=32768 pv.100000
logvol / --fstype ext4 --name=lvroot --vgname=vg00 --size=15360
logvol swap --fstype swap --name=lvswap --vgname=vg00 --size=2048
logvol /var --fstype ext4 --name=lvvar --vgname=vg00 --size 5120

timezone Europe/Paris
firewall --disabled
authconfig --useshadow --passalgo=sha512
selinux --enforcing

#skipx

# pre-set list of packages/groups to install 
%packages
@core
@server-policy
acpid
device-mapper-multipath
dmidecode
# ... and so on the list of packages/groups I pre-customize (and with - those I don't want)
vsftpd
wget
xfsprogs
-autoconf
-automake
-bc
# ... and so on
#-----------------------------------------------------------------------------
# post-install, executed with chroot into the installed system
%post --interpreter=/bin/sh --log=/root/post_install.log
echo -e "================================================================="
echo -e "       Starting kickStart post install script "

# do some extra stuff here , like mounting cd-rom copying add-ons specific for my product

[Sep 17, 2017] RHEL6 Unable to download kickstart file

Sep 17, 2017 | unix.stackexchange.com

robertpas , asked Nov 9 '15 at 15:13

In our lab we have a set of scripts that automatically configure a kickstart installation for RHEL5 on HP ProLiant DL380p Gen8. Based on data from several configuration files, it does the following steps:
  1. mounts redhat dvd
  2. modifies isolinux.cfg accordingly
  3. creates ks.cfg
  4. creates a bootdisk with the installation data (isolinux.cfg, ks.cfg, etc)
  5. creates a http server with the bootdisk directory.
  6. mounts the bootdisk through ILO ( /dev/scd1 )
  7. installs RHEL5

Here is the line referring to the kickstart file location :

append initrd=initrd.img ks=hd:scd1:/isolinux/ks.cfg ksdevice=eth4

Everything works well for RHEL5, but there have been requests for RHEL6.

For RHEL6, everything seems to work OK until step 7, where it returns the message "unable to download kickstart file". I have commented out some lines in the scripts, eliminating the installation part and leaving only the ILO mount part.

The bootdisk is mounted and accessible on /dev/scd1, and the ks.cfg file is present there. I have also verified that the files on the kickstart server are accessible with wget.

I have also tried accessing the ks.cfg file over HTTP:

append initrd=initrd.img ks=http://<ip>:<port>/boot/isolinux/ks.cfg ksdevice=eth4

The above part did not work.

But what really vexes me is that RHEL5 works in the same conditions, but RHEL6 does not.

I have been talking to redhat support for a week and they don't seem to know what is wrong.

Any help would be greatly appreciated.

robertpas , answered Nov 11 '15 at 8:11

I have figured out the problem.

There seems to be a difference between RHEL5 and RHEL6 at the installation level.

RHEL5 detects the physical cdrom and mounts it on /dev/scd0, therefore the ILO-mounted bootdisk ends up on /dev/scd1. RHEL6 does not seem to do this, therefore the bootdisk is mounted on /dev/scd0.

The correct way to declare the ks file location in a case like this is :

append initrd=initrd.img ks=hd:scd0:/isolinux/ks.cfg ksdevice=eth4

I hope someone will find this helpful in the future.

[Sep 17, 2017] systemd-free.org Archlinux, systemd-free

Notable quotes:
"... Since the adoption of systemd by Arch Linux I've encountered many problems with my systems, ranging from lost temporary files which systemd deemed proper to delete without asking (changing default behaviour on a whim), to total, consistent boot lockups because systemd-210+ couldn't mount an empty /usr/local partition (whereas systemd-208 could; go figure). ..."
"... As each "upgrade" of systemd aggressively assimilated more and more key system components into itself, it became apparent that the only way to avoid this single most critical point of failure was to stay as far away from it as possible. ..."
"... How about defaulting KillUserProcesses to yes , which effectively kills all backgrounded user processes (tmux and screen included) on logout? ..."
Aug 02, 2017 | systemd-free.org

An init system must be an init system

Systemd problems

Since the adoption of systemd by Arch Linux I've encountered many problems with my systems, ranging from lost temporary files which systemd deemed proper to delete without asking (changing default behaviour on a whim), to total, consistent boot lockups because systemd-210+ couldn't mount an empty /usr/local partition (whereas systemd-208 could; go figure).

As each "upgrade" of systemd aggressively assimilated more and more key system components into itself, it became apparent that the only way to avoid this single most critical point of failure was to stay as far away from it as possible.

Reading the list of those system components is daunting: login, pam, getty, syslog, udev, cryptsetup, cron, at, dbus, acpi, cgroups, gnome-session, autofs, tcpwrappers, audit, chroot, mount ... How about defaulting KillUserProcesses to yes , which effectively kills all backgrounded user processes (tmux and screen included) on logout?

It would seem that the only thing still missing from systemd is a decent init system.

The solution: Remove systemd, install OpenRC

"Coincidentally", there were others before me who had had similar concerns and had prepared the way.

Their efforts and experience are summarised in these pages. Sincere, warm thanks go to artoo and Aaditya who have done most of the work in Archland and, of course, the Gentoo developers who have made this possible in the first place. I administer a handsome lot of linux boxes and I've performed the migration procedure (successfully and without exception) on all of them, even remote ones.

The procedure is explained in Installation ; however you might want to read about OpenRC in the links below:

The Archlinux OpenRC wiki page doesn't contain information on the migration process anymore; it breaks things down into several different articles and provides links to other resources that are not always Arch-specific, which unnecessarily obfuscates things, not to mention the omnipresent warning not to remove systemd. The migration procedure described here instead is reliable and as plain and simple as possible, explaining what is to be done and why in clearly defined steps, despite what a FUD-spreading Arch Wiki admin says against it in his every other post. This is proven time and again on many different boxes and setups.

For your convenience, an up-to-date OpenRC ISO image is also provided for clean installations. Go to Installation for additional information.

Other Linux distros: Escape from systemd

Here we focus on removing systemd from Arch Linux and derivatives: Manjaro, ArchBang, Antergos etc. For information about removing systemd from other Linux distributions (namely Debian and deb/apt-get based ones like Ubuntu and Mint) you can visit the Without systemd wiki .
Additionally, a list of Operating systems without systemd in the default installation might be of special interest as, ultimately, the future of the Linux init systems will be determined by the popularity (or lack thereof) of systemd-free distros like Gentoo , Slackware , PCLinuxOS , Void Linux and Devuan .

Non-Linux OSes are also a viable (if somewhat last-resort) option, especially if the situation for non-systemd setups significantly worsens; FreeBSD and DragonFlyBSD are totally worth taking a shot at.

Clean installations

Contact: You may contact us at Freenode, channels #openrc, #manjaro-openrc and #arch-openrc.

[Aug 14, 2017] Are 32-bit applications supported in RHEL 7?

Aug 14, 2017 | access.redhat.com
Solution Verified - Updated June 1 2017 at 1:22 PM -

Environment

Issue Resolution

Red Hat Enterprise Linux 7 does not support installation on i686, 32 bit hardware. ISO installation media is only provided for 64-bit hardware. Refer to Red Hat Enterprise Linux technology capabilities and limits for additional details.

However, 32-bit applications are supported in a 64-bit RHEL 7 environment in the following scenarios:

While RHEL 7 will not natively support 32-bit hardware, certified hardware can be searched for in the certified hardware database .

[Aug 14, 2017] CentOS-RHEL - WineHQ Wiki

Aug 14, 2017 | wiki.winehq.org

Notes on EPEL 7

At the time of this writing, EPEL 7 still has no 32-bit packages (including wine and its dependencies). There is a 64-bit version of wine, but without the 32-bit libraries needed for WoW64 capabilities, it cannot support any 32-bit Windows apps (the vast majority) and even many 64-bit ones (that still include 32-bit components).

This is primarily because with release 7, Red Hat didn't have enough customer demand to justify an i386 build. While Red Hat itself still comes with lean multilib and 32-bit support for legacy needs, this is part of Red Hat's release process, not the packages themselves. Therefore CentOS 7 had to develop its own workflow for building an i386 release, a process that was completed in Oct 2015 .

With its i386 release, CentOS has cleared a major hurdle on the way to an EPEL with 32-bit libraries, and now the ball is in the Fedora project's court (as the maintainers of the EPEL). Once a i386 version of the EPEL becomes available, you should be able to follow the same instructions above to install a fully functional wine package for CentOS 7 and its siblings.

Thankfully, this also means that EPEL 8 shouldn't suffer from this same problem. In the meantime though, you can keep reading for some hints on getting a recent version of wine from the source code.

Special Considerations for Red Hat

Those with a Red Hat subscription should have access to enhanced help and support, but we wanted to provide some very quick notes on enabling the EPEL for your system. Before installing the epel-release package, you'll first need to activate some additional repositories.

On release 6 and older, which used the Red Hat Network Classic system for package subscriptions, you need to activate the optional repo with rhn-channel

rhn-channel -a -c rhel-6-server-optional-rpms -u <your-RHN-username> -p <your-RHN-password>

Starting with release 7 and the Subscription Manager system, you'll need to activate both the optional and extras repos with subscription-manager

subscription-manager repos --enable=rhel-7-server-optional-rpms
subscription-manager repos --enable=rhel-7-server-extras-rpms

As for source RPMs signed by Red Hat, there doesn't seem to be much public-facing documentation. With a subscription, you should be able to login and browse the repos ; this post on LWN also has some background.

[Aug 14, 2017] CentOS 6 or CentOS 7

Aug 14, 2017 | www.lowendtalk.com

[Aug 13, 2017] The Tar Pit - How and why systemd has won

Notable quotes:
"... However, despite this and despite the flame wars it has caused throughout the open source communities, and the endless attempts to boycott it, systemd has already won. Red Hat Enterprise Linux now uses it; Debian made it the default init system for their next version 6 and as a consequence, Ubuntu is replacing Upstart with systemd; openSUSE and Arch have it enabled for quite some time now. Basically every major GNU/Linux distribution is now using it 7 . ..."
"... systemd doesn't even know what the fuck it wants to be. It is variously referred to as a "system daemon" or a "basic userspace building block to make an OS from", both of which are highly ambiguous. [...] Ironically, despite aiming to standardize Linux distributions, it itself has no clear standard, and is perpetually rolling. ..."
Sep 27, 2014 | thetarpit.org

systemd is the work of Lennart Poettering, some guy from Germany who makes free and open source software, and who's been known to rub people the wrong way more than once. In case you haven't heard of him, he's also the author of PulseAudio, also known as that piece of software people often remove from their GNU/Linux systems in order to make their sound work. Like any software engineer, or rather like one who's gotten quite a few projects up and running, Poettering has an ego. Well, this should be about systemd, not about Poettering, but it very well is.

systemd started as a replacement for the System V init process. Like everything else, operating systems have a beginning and an end, and like every other operating system, Linux also has one: the Linux kernel passes control over to user space by executing a predefined process commonly called init , but which can be whatever process the user desires. Now, the problem with the old System V approach is that, well, I don't really know what the problem is with it, other than the fact that it's based on so-called "init scripts" 1 and that this, and maybe a few other design aspects impose some fundamental performance limitations. Of course, there are other aspects, such as the fact that no one ever expects or wants the init process to die, otherwise the whole system will crash.

The broader history is that systemd isn't the first attempt to stand out as a new, "better" init system. Canonical have already tried that with Upstart; Gentoo relies on OpenRC; Android uses a combination between Busybox and their own home-made flavour of initialization scripts, but then again, Android does a lot of things differently. However, contrary to the basic tenets 2 of the Unix philosophy , systemd also aims to do a lot of things differently.

For example, it aims to integrate as many other system-critical daemons as possible: from device management, IPC and logging to session management and time-based scheduling, systemd wants to do it all. This is indeed rather stupid from a software engineering point of view 3 , as it increases software complexity and bugs and the attack surface and whatnot 4 , but I can understand the rationale behind it: the maintainers want more control over everything, so they end up requesting that all other daemons are written as systemd plugins 5 .

However, despite this and despite the flame wars it has caused throughout the open source communities, and the endless attempts to boycott it, systemd has already won. Red Hat Enterprise Linux now uses it; Debian made it the default init system for their next version 6 and as a consequence, Ubuntu is replacing Upstart with systemd; openSUSE and Arch have it enabled for quite some time now. Basically every major GNU/Linux distribution is now using it 7 .

At the end of the day, systemd has won by being integrated into the democratic ecosystem that is GNU/Linux. As much as I hate PulseAudio and as much as I don't like Poettering, I see that distribution developers and maintainers seem to desperately need it, although I must confess I don't really know why. Either way, compare this :

systemd doesn't even know what the fuck it wants to be. It is variously referred to as a "system daemon" or a "basic userspace building block to make an OS from", both of which are highly ambiguous. [...] Ironically, despite aiming to standardize Linux distributions, it itself has no clear standard, and is perpetually rolling.

to this :

Verifiable Systems are closely related to stateless systems: if the underlying storage technology can cryptographically ensure that the vendor-supplied OS is trusted and in a consistent state, then it must be made sure that /etc or /var are either included in the OS image, or simply unnecessary for booting.

and this . Some of the stuff there might be downright weird or unrealistic or bullshit, but other than that, these guys (especially Poettering) have a damn good idea what they want to do and where they're going, unlike many other free software, open source projects.

And now's one of those times when such a clear vision makes all the difference.


  1. That is, it's "imperative" instead of "declarative". Does this matter to the average guy? I haven't the vaguest idea, to be honest.
  2. Some people don't consider software engineering a science, that's why. But I guess it would be fairer to call them "principles", wouldn't it?
  3. One does not simply integrate components for the sake of "integration". There are good reasons to have isolation and well-established communication protocols between software components: for example if I want to build my own udev or cron or you-name-it, systemd won't let me do that, because it "integrates". Well, fuck that.
  4. And guess what; for system software, systemd has a shitload of bugs . This is just not acceptable for production. Not. Acceptable. Ever.
  5. That's what "having systemd as a dependency" really means, no matter how they try to sugarcoat it.
  6. Jessie, at the time of writing.
  7. Well, except Slackware.

[Aug 13, 2017] Is Modern Linux Becoming Too Complex

One man's variety is another man's hopelessly confusing goddamn mess.

Feb 11, 2015 | Slashdot

An anonymous reader writes: Debian developer John Goerzen asks whether Li