
Enterprise Linux sickness with overcomplexity:
slightly skeptical view on enterprise Linux distributions


Introduction

Imagine a language in which both grammar and vocabulary change every three to five years, and both are so huge that they are beyond any normal human comprehension. You can learn some subset of both vocabulary and grammar when you work closely with a particular subsystem for several months in a row, only to forget it after a couple of months or quarters. The classic example here is RHEL kickstart.

In a sense, all talk about Linux security is a joke, as you cannot secure an OS that is far, far beyond your ability to comprehend. So state-sponsored hackers will always have an edge in breaking into Linux.

Linux became too complex for a single person to master. It is now yet another monstrous OS that nobody knows well (its sheer size puts it far above mere mortal capabilities). And that's the problem. Both Red Hat and Suse are now software development companies that can be called "overcomplexity junkies", and it shows in their recent products. Actually SLES is even worse than RHEL in this respect, despite being (originally) a German distribution.

Generally, in Linux administration (as previously in enterprise Unix administration) you get what you pay for. Nothing can replace multi-year experience, and experience is often acquired by making expensive mistakes (see Admin Horror Stories). Vendor training is expensive and is more or less available only to sysadmins in a few industries (the financial industry is one). With Red Hat we have a situation that closely resembles the one well known from Solaris: training is rather good, but prices are exorbitant.

Due to the current complexity (or, more correctly, overcomplexity) of Linux environments, most sysadmins can master only the commonly used subsystems, and only for one flavor of Linux. A better one might be able to support two (with a highly asymmetrical level of skills, usually being considerably more proficient in one flavor than the other). In other words, the Unix wars are now replayed on Linux turf with a vengeance.

The level of mental overload and frustration caused by the overcomplexity of the two major enterprise Linux flavors (RHEL and SLES) is such that people are ready for a change. Note that in the OS ecosystem there is a natural tendency toward monopoly -- nothing succeeds like success, and the critical mass of installations that those two "monstrously complex" Linux distributions hold prevents any escape, especially in enterprise environments. Red Hat can essentially dictate what Linux should be -- as it did by incorporating systemd into RHEL 7.

Still, there is a large difference between RHEL and SLES in popularity:

Ubuntu -- a dumbed-down Linux based on Debian, with some strange design decisions -- is now getting some corporate sales, especially in cloud environments, at the expense of Suse. It is still mainly a desktop OS, but it is gradually acquiring some enterprise share too. That makes the number of enterprise Linux distributions close to what we used to have in the commercial Unix space (Solaris, AIX and HP-UX), with Debian and Ubuntu playing the role of Solaris.

Package Hell

The idea of precompiled packages is great until it is not. And that's where we are now. Important packages such as the R language or the Infiniband drivers from Mellanox routinely block the ability to patch systems in RHEL 6.
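The usual stopgap, for what it's worth, is to exclude the offending packages from updates until the vendor catches up (the package name patterns below are just placeholders):

    yum update --exclude='R-core*' --exclude='mlnx-ofa*'   # skip the blocking packages for this run
    # (or add a matching  exclude=  line to /etc/yum.conf to make the exclusion permanent)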

The total number of packages is just way too great, with many overlapping packages. Typically it is over one thousand, unless you use the base system or an HPC computational node distribution. In the latter case it is still over six hundred.

The number of daemons running in a default RHEL installation is also very high, and few people understand what all those daemons are doing and why they are running after startup. In other words, RHEL is the Microsoft Windows of the Linux world. And with systemd shoved down the throats of enterprise customers, you will understand even less.
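A quick way to get a feel for the scale on a given box (a sketch; the numbers obviously vary by install):

    rpm -qa | wc -l                                          # installed packages
    ps -e --no-headers | wc -l                               # processes running right after boot
    systemctl list-units --type=service --state=running      # on RHEL 7: which services are actually up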

Support is expensive, but the help from support is marginal. All those guys do is look into a database to see if something similar already exists. That works for some problems, but for most it does not. Using a free version of Linux such as CentOS is an escape, but with commercial applications you are facing trouble: the vendor can easily blame the OS for the problem you are having, and then you are left holding the bag.

No effort is made to consolidate those hundreds of overlapping packages (some barely supported or unsupported). This "package mess", along with library hell, is a distinct feature of modern enterprise Linux distributions.

Troubles with SELinux

Until recently SLES was slightly simpler than RHEL, as it did not include the horribly complex security subsystem that RHEL uses -- SELinux. It takes a lot of effort to learn even the basics of SELinux and to properly configure a single Internet-facing server. Most sysadmins just use it blindly, either enabling or disabling it without understanding any details of its functioning (or, more correctly, understanding it only at the level that allows them to handle common protocols, much as is the case with firewalls).

Actually Linux had a better solution in this space, used in SLES: AppArmor. It was a pretty elegant solution to a complex problem, if you ask me. But the critical mass of installations and the market share secured by Red Hat made SELinux "king of the hill" and prevented AppArmor from becoming the Linux standard. As a result SUSE was forced to incorporate SELinux.

SELinux provides a Mandatory Access Control (MAC) system built into the Linux kernel (that is, the kind of stuff that labels things as "super secret", "secret" and "confidential", which three-letter agencies use to guard information). Historically, Security Enhanced Linux (SELinux) was an open source project sponsored by the National Security Agency. Despite the user-friendly GUI, SELinux is difficult to configure and hard to understand. The documentation does not help much either. Most administrators simply turn the SELinux subsystem off during the initial install, but for an Internet-facing server you need to configure and use it, or... And sometimes the effects can be really subtle: for example, you can log in as root using password authentication but not with a passwordless ssh key. That's why many complex applications, especially in the HPC area, explicitly recommend disabling SELinux as a starting point of installation. You can find articles on the Web devoted to this topic.

SELinux produces some very interesting errors (see for example http://bugs.mysql.com/bug.php?id=12676) and is not very compatible with some subsystems and complex applications. Especially telling is this comment to the blog post How to disable SELinux in RHEL 5:

Aeon said... @ May 13, 2008 2:34 PM
 
Thanks a million! I was dealing with a samba refusing to access the server shared folders. After about 2 hours of scrolling forums I found out the issue may be this shitty thing samba_selinux.

I usually disable it when I install, but this time I had to use the Dell utilities (no choice at all) and they enabled the thing. Disabled it your way, rebooted and it works as I wanted it. Thanks again!
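For completeness, this is roughly what "disabling it" amounts to in practice (a sketch; for an Internet-facing server you should think twice, as argued above):

    getenforce                                                      # current mode: Enforcing, Permissive or Disabled
    setenforce 0                                                    # drop to Permissive until the next reboot
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config    # permanent; takes effect after reboot
    sealert -a /var/log/audit/audit.log                             # (if setroubleshoot is installed) see what was actually denied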

SLES has one significant defect: by default it does not assign each user a unique private group, as RHEL does. But this can be fixed with a simple wrapper for the useradd command. In its simplest form it can be just:

   # Wrapper for the useradd command.
   # Accepts two arguments: UID and user name, for example:
   #    uadd 3333 joedoers
   # Creates a private group with GID equal to the UID, then creates the user in it.

   function uadd
   {
       groupadd -g "$1" "$2"
       useradd  -u "$1" -g "$1" -m "$2"
   }
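Usage is as in the comment above; a quick check afterwards (the UID and name are just the example values):

    uadd 3333 joedoers
    id joedoers      # should report uid=3333(joedoers) gid=3333(joedoers)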


Working closely with commercial Linuxes and seeing all their warts, one instantly understands that traditional Open Source (GPL-based Open Source) is a very problematic business model. Historically (especially in the case of Red Hat) it was used as a smoke screen for the VCs to get software engineers to work for free -- not even for minimum wage, but for free -- and to grab as much money from suckers as they can, using all the right words as an anesthetic. Essentially they take that hard work, pump $$$ into marketing, and either sell the resulting company to one of their other portfolio companies or take it public and dump the shares on the public. Meanwhile the software engineers who developed that software for free, aka slave labor, get $0.00 for their hard work, while the VCs, the top brass of the startup, and the investment bankers make a killing.

And of course they then get their buddies in the mainstream media to hype GPL-based Open Source development as the best thing since sliced bread.

Licensing

RHEL licensing is a mess too. In addition, the two higher-level licenses are expensive and make a Microsoft server license look very competitive. Recently they went the "IBM way" and started to charge different prices for 4-socket servers: with their new subscription manager you can't just use two 2-socket licenses to license a 4-socket server. The next step will be classic IBM per-core licensing; that's why so many people passionately hate IBM.

There are three different types of licensing (let's call them patch-only, regular, and premium support). Each has several variations (for example, the HPC computational node license is a variant of the "patch-only" license, but does not provide a GUI and lacks many packages in the repository). The level of tech support with the latter two (which are the truly enterprise licenses) is very similar -- similarly dismal -- especially for complex problems, unless you press them really hard.

In addition, Red Hat people have screwed up their portal so much that you can't tell which server is assigned to which license. That situation improved with the subscription manager, but new problems arose.

Generally, the level of screw-up of the RHEL user portal is such that there are doubts they can do anything useful in the Linux space in the future, other than try to hold on to their market share.

All in all, RHEL 6 is very complex but still a usable enterprise Linux distribution, because it did not change radically from RHEL 4 and 5. But it is not fun to use anymore. It's a pain. It's a headache. The same is true for SLES.

For RHEL 7, stronger words are applicable.



Old News ;-)


[Nov 21, 2018] Red Hat Enterprise Linux 8 Hits Beta With Integrated Container Features

Nov 21, 2018 | www.eweek.com

Among the biggest changes in the last four years across the compute landscape has been the emergence of containers and microservices as being a primary paradigm for application deployment. In RHEL 8, Red Hat is including multiple container tools that it has been developing and proving out in the open-source community, including Buildah (container building), Podman (running containers) and Skopeo (sharing/finding containers).

Systems management is also getting a boost in RHEL 8 with the Composer features that enable organizations to build and deploy custom RHEL images. Management of RHEL is further enhanced via the new Red Hat Enterprise Linux Web Console, which enables administrators to manage bare metal, virtual, local and remote Linux servers.
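For those who have not tried the new tools yet, a taste of the daemonless workflow (a sketch; the image names are just common public examples, not anything RHEL 8 mandates):

    podman run --rm -it registry.access.redhat.com/ubi8/ubi /bin/bash   # run a container without a Docker daemon
    skopeo inspect docker://docker.io/library/alpine:latest             # examine a remote image without pulling it
    buildah from registry.access.redhat.com/ubi8/ubi                    # start building a new image from a base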

[Nov 18, 2018] Systemd killing screen and tmux

Nov 18, 2018 | theregister.co.uk

fobobob , Thursday 10th May 2018 18:00 GMT

Might just be a Debian thing as I haven't looked into it, but I have enough suspicion towards systemd that I find it worth mentioning. Until fairly recently (in terms of Debian releases), the default configuration was to murder a user's processes when they log out. This includes things such as screen and tmux, and I seem to recall it also murdering disowned and NOHUPed processes as well.
Tim99 , Thursday 10th May 2018 06:26 GMT
How can we make money?

A dilemma for a Really Enterprise Dependant Huge Applications Technology company - The technology they provide is open, so almost anyone could supply and support it. To continue growing, and maintain a healthy profit they could consider locking their existing customer base in; but they need to stop other suppliers moving in, who might offer a better and cheaper alternative, so they would like more control of the whole ecosystem. The scene: An imaginary high-level meeting somewhere - The agenda: Let's turn Linux into Windows - That makes a lot of money:-

Q: Windows is a monopoly, so how are we going to monopolise something that is free and open, because we will have to supply source code for anything that will do that? A: We make it convoluted and obtuse, then we will be the only people with the resources to offer it commercially; and to make certain, we keep changing it with dependencies to "our" stuff everywhere - Like Microsoft did with the Registry.

Q: How are we going to sell that idea? A: Well, we could create a problem and solve it - The script kiddies who like this stuff, keep fiddling with things and rebooting all of the time. They don't appear to understand the existing systems - Sell the idea they do not need to know why *NIX actually works.

Q: *NIX is designed to be dependable, and go for long periods without rebooting, How do we get around that. A: That is not the point, the kids don't know that; we can sell them the idea that a minute or two saved every time that they reboot is worth it, because they reboot lots of times in every session - They are mostly running single user laptops, and not big multi-user systems, so they might think that that is important - If there is somebody who realises that this is trivial, we sell them the idea of creating and destroying containers or stopping and starting VMs.

Q: OK, you have sold the concept, how are we going to make it happen? A: Well, you know that we contribute quite a lot to "open" stuff. Let's employ someone with a reputation for producing fragile, barely functioning stuff for desktop systems, and tell them that we need a "fast and agile" approach to create "more advanced" desktop style systems - They would lead a team that will spread this everywhere. I think I know someone who can do it - We can have almost all of the enterprise market.

Q: What about the other large players, surely they can foil our plan? A: No, they won't want to, they are all big companies and can see the benefit of keeping newer, efficient competitors out of the market. Some of them sell equipment and system-wide consulting, so they might just use our stuff with a suitable discount/mark-up structure anyway.

ds6 , 6 months
Re: How can we make money?

This is scarily possible and undeserving of the troll icon.

Harkens easily to non-critical software developers intentionally putting undocumented, buggy code into production systems, forcing the company to keep the guy on payroll to keep the wreck chugging along.

DougS , Thursday 10th May 2018 07:30 GMT
Init did need fixing

But replacing it with systemd is akin to "fixing" the restrictions of travel by bicycle (limited speed and range, ending up sweaty at your destination, dangerous in heavy traffic) by replacing it with an Apache helicopter gunship that has a whole new set of restrictions (need for expensive fuel, noisy and pisses off the neighbors, need a crew of trained mechanics to keep it running, local army base might see you as a threat and shoot missiles at you)

Too bad we didn't get the equivalent of a bicycle with an electric motor, or perhaps a moped.

-tim , Thursday 10th May 2018 07:33 GMT
Those who do not understand Unix are condemned to reinvent it, poorly.

"It sounds super basic, but actually it is much more complex than people think," Poettering said. "Because Systemd knows which service a process belongs to, it can shut down that process."

Poettering and Red Hat,

Please learn about "Process Groups"

Init has had the groundwork for most of the missing features since the early 1980s. For example the "id" field in /etc/inittab was intended for a "makefile" like syntax to fix most of these problems but was dropped in the early days of System V because it wasn't needed.

Herby , Thursday 10th May 2018 07:42 GMT
Process 1 IS complicated.

That is the main problem. With different processes you get different results. For all its faults, SysV init and RC scripts was understandable to some extent. My (cursory) understanding of systemd is that it appears more complicated to UNDERSTAND than the init stuff.

The init scripts are nice text scripts which are executed by a nice well documented shell (bash mostly). Systemd has all sorts of blobs that somehow do things and are totally confusing to me. It suffers from "anti- kiss "

Perhaps a nice book could be written WITH example to show what is going on.

Now let's see does audio come before or after networking (or at the same time)?
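For readers who have never seen the two styles side by side, a deliberately minimal sketch (the daemon name and paths are hypothetical):

    #!/bin/sh
    # /etc/init.d/mydaemon -- classic SysV style: plain shell you can read and debug with any editor
    case "$1" in
      start) /usr/sbin/mydaemon & echo $! > /var/run/mydaemon.pid ;;
      stop)  kill "$(cat /var/run/mydaemon.pid)" ;;
      *)     echo "Usage: $0 {start|stop}" ;;
    esac

    # /etc/systemd/system/mydaemon.service -- the systemd equivalent: a declarative unit file
    [Unit]
    Description=My daemon
    After=network.target

    [Service]
    ExecStart=/usr/sbin/mydaemon

    [Install]
    WantedBy=multi-user.target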

Chronos , Thursday 10th May 2018 09:12 GMT
Logging

If they removed logging from the systemd core and went back to good ol' plaintext syslog[-ng], I'd have very little bad to say about Lennart's monolithic pet project. Indeed, I much prefer writing unit files than buggering about getting rcorder right in the old SysV init.

Now, if someone wanted to nuke pulseaudio from orbit and do multiplexing in the kernel a la FreeBSD, I'll chip in with a contribution to the warhead fund. Needing a userland daemon just to pipe audio to a device is most certainly a solution in search of a problem.

Tinslave_the_Barelegged , Thursday 10th May 2018 11:29 GMT
Re: Logging

> If they removed logging from the systemd core

And time syncing

And name resolution

And disk mounting

And logging in

...and...

[Nov 18, 2018] From now on, I will call Systemd-based Linux distros "SNU Linux". Because Systemd's Not Unix-like.

Nov 18, 2018 | theregister.co.uk

tekHedd , Thursday 10th May 2018 15:28 GMT

Not UNIX-like? SNU!

From now on, I will call Systemd-based Linux distros "SNU Linux". Because Systemd's Not Unix-like.

It's not clever, but it's the future. From now on, all major distributions will be called SNU Linux. You can still freely choose to use a non-SNU linux distro, but if you want to use any of the "normal" ones, you will have to call it "SNU" whether you like it or not. It's for your own good. You'll thank me later.

[Nov 18, 2018] So in all reality, systemd is an answer to a problem that nobody who administers servers ever had.

Nov 18, 2018 | theregister.co.uk

jake , Thursday 10th May 2018 20:23 GMT

Re: Bah!

Nice rant. Kinda.

However, I don't recall any major agreement that init needed fixing. Between BSD and SysV inits, probably 99.999% of all use cases were covered. In the 1 in 100,000 use case, a little bit of C (stand alone code, or patching init itself) covered the special case. In the case of Slackware's SysV/BSD amalgam, I suspect it was more like one in ten million.

So in all reality, systemd is an answer to a problem that nobody had. There was no reason for it in the first place. There still isn't a reason for it ... especially not in the 999,999 places out of 1,000,000 where it is being used. Throw in the fact that it's sticking its tentacles[0] into places where nobody in their right mind would expect an init as a dependency (disk partitioning software? WTF??), can you understand why us "old guard" might question the sanity of people singing it's praises?

[0] My spall chucker insists that the word should be "testicles". Tempting ...


sisk , Thursday 10th May 2018 21:17 GMT

It's a pretty polarizing debate: either you see Systemd as a modern, clean, and coherent management toolkit

Very, very few Linux users see it that way.

or an unnecessary burden running roughshod over the engineering maxim: if it ain't broke, don't fix it.

Seen as such by 90% of Linux users because it demonstrably is.

Truthfully Systemd is flawed at a deeply fundamental level. While there are a very few things it can do that init couldn't - the killing off processes owned by a service mentioned as an example in this article is handled just fine by a well written init script - the tradeoffs just aren't worth it. For example: fscking BINARY LOGS. Even if all of Systemd's numerous other problems were fixed that one would keep it forever on my list of things to avoid if at all possible, and the fact that the Systemd team thought it a good idea to make the logs binary shows some very troubling flaws in their thinking at a very fundamental level.

Dazed and Confused , Thursday 10th May 2018 21:43 GMT
Re: fscking BINARY LOGS.

And config too

When it comes to logs and config file if you can't grep it then it doesn't belong on Linux/Unix

Nate Amsden , Thursday 10th May 2018 23:51 GMT
Re: fscking BINARY LOGS.

WRT grep and logs I'm the same way which is why I hate json so much. My saying has been along the lines of "if it's not friends with grep/sed then it's not friends with me". I have whipped some some whacky sed stuff to generate a tiny bit of json to read into chef for provisioning systems though.

XML is similar though I like XML a lot more at least the closing tags are a lot easier to follow then trying to count the nested braces in json.

I haven't had the displeasure much of dealing with the systemd binary logs yet myself.

Tomato42 , Saturday 12th May 2018 08:26 GMT
Re: fscking BINARY LOGS.

> I haven't had the displeasure much of dealing with the systemd binary logs yet myself.

"I have no clue what I'm talking about or what's a robust solution but dear god, that won't stop me!" – why is it that all the people complaining about journald sound like that?

systemd works just fine with regular syslog-ng, without journald (that's the thing that has binary logs) in sight

HieronymusBloggs , Saturday 12th May 2018 18:17 GMT
Re: fscking BINARY LOGS.

"systemd works just fine with regular syslog-ng, without journald (that's the thing that has binary logs) in sight"

Journald can't be switched off, only redirected to /dev/null. It still generates binary log data (which has caused me at least one system hang due to the absurd amount of data it was generating on a system that was otherwise functioning correctly) and consumes system resources. That isn't my idea of "works just fine".

""I have no clue what I'm talking about or what's a robust solution but dear god, that won't stop me!" – why is it that all the people complaining about journald sound like that?"

Nice straw man. Most of the complaints I've seen have been from experienced people who do know what they're talking about.

sisk , Tuesday 15th May 2018 20:22 GMT
Re: fscking BINARY LOGS.

"I have no clue what I'm talking about or what's a robust solution but dear god, that won't stop me!" – why is it that all the people complaining about journald sound like that?

I have had the displeasure of dealing with journald and it is every bit as bad as everyone says and worse.

systemd works just fine with regular syslog-ng, without journald (that's the thing that has binary logs) in sight

Yeah, I've tried that. It caused problems. It wasn't a viable option.
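For reference, the configuration being argued about above -- keeping conventional plain-text syslog while journald is present -- usually comes down to a couple of lines in /etc/systemd/journald.conf plus an ordinary rsyslog or syslog-ng install (a sketch; whether it actually resolves the objections is evidently disputed):

    # /etc/systemd/journald.conf
    [Journal]
    Storage=volatile        # keep the journal in RAM only (Storage=none discards journal storage entirely)
    RuntimeMaxUse=64M       # cap how much RAM the volatile journal may use
    ForwardToSyslog=yes     # hand every message to the regular syslog daemon for plain-text logs

    systemctl restart systemd-journald   # apply the change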

Anonymous Coward , Thursday 10th May 2018 22:30 GMT
Parking U$5bn in redhad for a few months will fix this...

So it's now been 4 years since they first tried to force that shoddy desk-top init system into our servers? And yet they still feel compelled to tell everyone, look it really isn't that terrible. That should tell you something. Unless you are tone death like redhat. Surprised people didn't start walking out when Poettering outlined his plans for the next round of systemD power grabs...

Anyway the only way this farce will end is with shareholder activism. Some hedge fund to buy 10-15 percent of redhat (about the amount you need to make life difficult for management) and force them to sack that "stable genius" Poettering. So market cap is 30bn today. Anyone with 5bn spare to park for a few months wanna step forward and do some good?

cjcox , Thursday 10th May 2018 22:33 GMT
He's a pain

Early on I warned that he was trying to solve a very large problem space. He insisted he could do it with his 10 or so "correct" ways of doing things, which quickly became 20, then 30, then 50, then 90, etc.. etc. I asked for some of the features we had in init, he said "no valid use case". Then, much later (years?), he implements it (no use case provided btw).

Interesting fellow. Very bitter. And not a good listener. But you don't need to listen when you're always right.

Daggerchild , Friday 11th May 2018 08:27 GMT
Spherical wheel is superior.

@T42

Now, you see, you just summed up the whole problem. Like systemd's author, you think you know better than the admin how to run his machine, without knowing, or caring to ask, what he's trying to achieve. Nobody ever runs a computer, to achieve running systemd do they.

Tomato42 , Saturday 12th May 2018 09:05 GMT
Re: Spherical wheel is superior.

I don't claim I know better, but I do know that I never saw a non-distribution provided init script that handled correctly the basic of corner cases – service already running, run file left-over but process dead, service restart – let alone the more obscure ones, like application double forking when it shouldn't (even when that was the failure mode of the application the script was provided with). So maybe, just maybe, you haven't experienced everything there is to experience, so your opinion is subjective?

Yes, the sides of the discussion should talk more, but this applies to both sides. "La, la, la, sysv is working fine on my machine, thankyouverymuch" is not what you can call "participating in discussion". So is quoting well known and long discussed (and disproven) points. (and then downvoting people into oblivion for daring to point this things out).

now in the real world, people that have to deal with init systems on daily basis, as distribution maintainers, by large, have chosen to switch their distributions to systemd, so the whole situation I can sum up one way:

"the dogs may bark, but the caravan moves on"

Kabukiwookie , Monday 14th May 2018 00:14 GMT
Re: Spherical wheel is superior.

I do know that I never saw a non-distribution provided init script that handled correctly the basic of corner cases – service already running

This only shows that you don't have much real life experience managing lots of hosts.

like application double forking when it shouldn't

If this is a problem in the init script, this should be fixed in the init script. If this is a problem in the application itself, it should be fixed in the application, not worked around by the init mechanism. If you're suggesting the latter, you should not be touching any production box.

"La, la, la, sysv is working fine on my machine, thankyouverymuch" is not what you can call "participating in discussion".

Shoving down systemd down people's throat as a solution to a non-existing problem, is not a discussion either; it is the very definition of 'my way or the highway' thinking.

now in the real world, people that have to deal with init systems on daily basis

Indeed and having a bunch of sub-par developers, focused on the 'year of the Linux desktop' to decide what the best way is for admins to manage their enterprise environment is not helping.

"the dogs may bark, but the caravan moves on"

Indeed. It's your way or the highway; I thought you were just complaining about the people complaining about systemd not wanting to have a discussion, while all the while it's systemd proponents ignoring and dismissing very valid complaints.

Daggerchild , Monday 14th May 2018 14:10 GMT
Re: Spherical wheel is superior.

"I never saw ... run file left-over but process dead, service restart ..."

Seriously? I wrote one last week! You use an OS atomic lock on the pidfile and exec the service if the lock succeeded. The lock dies with the process. It's a very small shellscript.

I shot a systemd controlled service. Systemd put it into error state and wouldn't restart it unless I used the right runes. That is functionally identical to the thing you just complained about.

"application double forking when it shouldn't"

I'm going to have to guess what that means, and then point you at DJB's daemontools. You leave a FD open in the child. They can fork all they like. You'll still track when the last dies as the FD will cause an event on final close.

"So maybe, just maybe, you haven't experienced everything there is to experience"

You realise that's the conspiracy theorist argument "You don't know everything, therefore I am right". Doubt is never proof of anything.

"La, la, la, sysv is working fine" is not what you can call "participating in discussion".

Well, no.. it's called evidence. Evidence that things are already working fine, thanks. Evidence that the need for discussion has not been displayed. Would you like a discussion about the Earth being flat? Why not? Are you refusing to engage in a constructive discussion? How obstructive!

"now in the real world..."

In the *real* world people run Windows and Android, so you may want to rethink the "we outnumber you, so we must be right" angle.

You're claiming an awful lot of highground you don't seem to actually know your way around, while trying to wield arguments you don't want to face yourself...

"(and then downvoting people into oblivion for daring to point this things out)"

It's not some denialist conspiracy to suppress your "daring" Truth - you genuinely deserve those downvotes.

Anonymous Coward , Friday 11th May 2018 17:27 GMT
I have no idea how or why systemd ended up on servers. Laptops I can see the appeal for "this is the year of the linux desktop" - for when you want your rebooted machine to just be there as fast as possible (or fail mysteriously as fast as possible). Servers, on the other hand, which take in the order of 10+ minutes to get through POST, initialising whatever LOM, disk controllers, and whatever exotica hardware you may also have connected, I don't see a benefit in Linux starting (or failing to start) a wee bit more quickly. You're only going to reboot those beasts when absolutely necessary. And it should boot the same as it booted last time. PID1 should be as simple as possible.

I only use CentOS these days for FreeIPA but now I'm questioning my life decisions even here. That Debian adopted systemd too is a real shame. It's actually put me off the whole game. Time spent learning systemd is time that could have been spent doing something useful that won't end up randomly breaking with a "will not fix" response.

Systemd should be taken out back and put out of our misery.

Miss Config , Saturday 12th May 2018 11:48 GMT
SystemD ? Was THAT What Buggered My Mint AND Laptop ?

The technical details of SystemD are over my head but I do use Mint as the main OS on this laptop which makes me Mr. Innocent Bystander in this argument. I had heard of SystemD and even a rumour that Mint was going to use it. That Mint ALREADY is using SystemD is news to me

( provided by this article ).

My problem is that a month ago a boot of Mint failed and after reading this thread I must wonder whether SystemD is at least one of the usual suspects as the cause of the problem ?

Here's what happened :

As I do every couple of weeks, I installed the latest available updates from Mint but the next time I booted up it did not get beyond the Mint logo. All I got were terminal-level messages about sudo commands and the ability to enter them. Or rather NOT enter them. Further use of Terminal showed that one system file did not now exist. This was in etc/ and related to the granting of sudo permissions. The fact that it did not exist created a vicious circle and sudo was completely out of action. I took the laptop to a shop where they managed to save my Backups folder that had been on the desktop and install a fresh version of Mint.

So what are the chances that this was a SystemD problem ?

GrumpenKraut , Sunday 13th May 2018 10:51 GMT
Re: SystemD ? Was THAT What Buggered My Mint AND Laptop ?

From what you say the file /etc/sudoers got deleted (or corrupted). It may have been some (badly effed up) update.

Btw. you could have booted from a rescue image (CD or USB stick) and fixed it yourself. Easy when you have a proper backup, not-quite-so-easy when you have to 'manually' recreate that file.

jake , Monday 14th May 2018 18:28 GMT
Re: SystemD ? Was THAT What Buggered My Mint AND Laptop ?

Probably not systemd. If you were the only one it happened to, and it only happened once, write it off as the proverbial "stray cosmic ray" flipping a bit at an inopportune time during the install. If you can repeat it, this is the wrong forum to address the issue. Try instead https://forums.linuxmint.com/

That said, if anybody reading this in the future has a similar problem, you can get a working system back by logging in as root[0], using your favorite text editor[1] to create the file /etc/sudoers with the single line root ALL=(ALL) ALL , saving the file and then running chown 644 /etc/sudoers ... logout of root and back into your user account and get on with it. May I suggest starting with backing up all your personal work (pictures, tunes, correspondence, whathaveyou)?

[0] Yeah, yeah, yeah, I know, don't suggest newbies use root. But if su doesn't work, what would you suggest as an alternative?

[1] visudo wont work for obvious reasons ... even if it did, would you suggest vi to a newbie? Besides, on a single-user system it's hardly necessary for this kind of brute-force bodge.

Miss Config , Monday 14th May 2018 18:38 GMT
Re: SystemD ? Was THAT What Buggered My Mint AND Laptop ?

So even those who are paranoid ( rightly or wrongly ) about SystemD did not pile in to blame it here. I'll take that as a 'no'.

Backup you say ? Tell me about it. I must admit that when it comes to backups I very much talk the talk, full stop.I have since bought a 1TB detachable hard drive which at least makes full backups fast via USB3.

( All I need now is software for DIFFERENTIAL backups ).

jake , Monday 14th May 2018 19:24 GMT
Re: SystemD ? Was THAT What Buggered My Mint AND Laptop ?

Living long enough to have ton of experience is not paranoia (although it can help!). Instead, try the other "P" word ... pragmatism.

Backups are a vital part of properly running any computerized system. However, I can make a case for simply having multiple copies (off site is good!) of all your important personal files being all that's needed for the average single-user, at home system. The OS can be reinstalled, your pictures and personal correspondence (etc.) cannot.

[Nov 18, 2018] Just let chef start the services when it runs after the system boots(which means they start maybe 1 or 2 mins after bootup).

Notable quotes:
"... Another thing bit us with systemd recently as well again going back to bind. Someone on the team upgraded our DNS systems to systemd and the startup parameters for bind were not preserved because systemd ignores the /etc/default/bind file. As a result we had tons of DNS failures when bind was trying to reach out to IPv6 name servers(ugh), when there is no IPv6 connectivity in the network (the solution is to start bind with a -4 option). ..."
"... I'm sure I've only scratched the surface of systemd pain. I'm sure it provides good value to some people, I hear it's good with containers (I have been running LXC containers for years now, I see nothing with systemd that changes that experience so far). ..."
"... If systemd is a solution to any set of problems, I'd love to have those problems back! ..."
Nov 18, 2018 | theregister.co.uk

Nate Amsden , Thursday 10th May 2018 16:34 GMT

as a linux user for 22 users

(20 of which on Debian, before that was Slackware)

I am new to systemd, maybe 3 or 4 months now tops on Ubuntu, and a tiny bit on Debian before that.

I was confident I was going to hate systemd before I used it just based on the comments I had read over the years, I postponed using it as long as I could. Took just a few minutes of using it to confirm my thoughts. Now to be clear, if I didn't have to mess with the systemd to do stuff then I really wouldn't care since I don't interact with it (which is the case on my laptop at least though laptop doesn't have systemd anyway). I manage about 1,000 systems running Ubuntu for work, so I have to mess with systemd, and init etc there. If systemd would just do ONE thing I think it would remove all of the pain that it has inflicted on me over the past several months and I could learn to accept it.

That one thing is, if there is an init script, RUN IT. Not run it like systemd does now. But turn off ALL intelligence systemd has when it finds that script and run it. Don't put it on any special timers, don't try to detect if it is running already, or stopped already or whatever, fire the script up in blocking mode and wait till it exits.

My first experience with systemd was on one of my home servers, I re-installed Debian on it last year, rebuilt the hardware etc and with it came systemd. I believe there is a way to turn systemd off but I haven't tried that yet. The first experience was with bind. I have a slightly custom init script (from previous debian) that I have been using for many years. I copied it to the new system and tried to start bind. Nothing. I looked in the logs and it seems that it was trying to interface with rndc(internal bind thing) for some reason, and because rndc was not working(I never used it so I never bothered to configure it) systemd wouldn't launch bind. So I fixed rndc and systemd would now launch bind, only to stop it within 1 second of launching. My first workaround was just to launch bind by hand at the CLI (no init script), left it running for a few months. Had a discussion with a co-worker who likes systemd and he explained that making a custom unit file and using the type=forking option may fix it.. That did fix the issue.

Next issue came up when dealing with MySQL clusters. I had to initialize the cluster with the "service mysql bootstrap-pxc" command (using the start command on the first cluster member is a bad thing). Run that with systemd, and systemd runs it fine. But go to STOP the service, and systemd thinks the service is not running so doesn't even TRY to stop the service(the service is running). My workaround for my automation for mysql clusters at this point is to just use mysqladmin to shut the mysql instances down. Maybe newer mysql versions have better systemd support though a co-worker who is our DBA and has used mysql for many years says even the new Maria DB builds don't work well with systemd. I am working with Mysql 5.6 which is of course much much older.

Next issue came up with running init scripts that have the same words in them, in the case of most recently I upgraded systems to systemd that run OSSEC. OSSEC has two init scripts for us on the server side (ossec and ossec-auth). Systemd refuses to run ossec-auth because it thinks there is a conflict with the ossec service. I had the same problem with multiple varnish instances running on the same system (varnish instances were named varnish-XXX and varnish-YYY). In the varnish case using custom unit files I got systemd to the point where it would start the service but it still refuses to "enable" the service because of the name conflict (I even changed the name but then systemd was looking at the name of the binary being called in the unit file and said there is a conflict there).

fucking a. Systemd shut up, just run the damn script. It's not hard.

Later a co-worker explained the "systemd way" for handling something like multiple varnish instances on the system but I'm not doing that, in the meantime I just let chef start the services when it runs after the system boots(which means they start maybe 1 or 2 mins after bootup).

Another thing bit us with systemd recently as well again going back to bind. Someone on the team upgraded our DNS systems to systemd and the startup parameters for bind were not preserved because systemd ignores the /etc/default/bind file. As a result we had tons of DNS failures when bind was trying to reach out to IPv6 name servers(ugh), when there is no IPv6 connectivity in the network (the solution is to start bind with a -4 option).

I believe I have also caught systemd trying to mess with file systems(iscsi mount points). I have lots of automation around moving data volumes on the SAN between servers and attaching them via software iSCSI directly to the VMs themselves(before vsphere 4.0 I attached them via fibre channel to the hypervisor but a feature in 4.0 broke that for me). I noticed on at least one occasion when I removed the file systems from a system that SOMETHING (I assume systemd) mounted them again, and it was very confusing to see file systems mounted again for block devices that DID NOT EXIST on the server at the time. I worked around THAT one I believe with the "noauto" option in fstab again. I had to put a lot of extra logic in my automation scripts to work around systemd stuff.

I'm sure I've only scratched the surface of systemd pain. I'm sure it provides good value to some people, I hear it's good with containers (I have been running LXC containers for years now, I see nothing with systemd that changes that experience so far).

But if systemd would just do this one thing and go into dumb mode with init scripts I would be quite happy.
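For what it's worth, the "systemd way" the co-worker suggested is normally a drop-in override rather than an edit of the packaged unit; a minimal sketch for the bind case described above (the unit name, binary path and options vary by distribution):

    # /etc/systemd/system/bind9.service.d/override.conf
    [Service]
    Type=forking
    ExecStart=
    ExecStart=/usr/sbin/named -4 -u bind   # the empty ExecStart= clears the packaged command; restate it with -4

    systemctl daemon-reload && systemctl restart bind9   # pick up the override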

GrumpenKraut , Thursday 10th May 2018 17:52 GMT
Re: as a linux user for 22 users

Now more seriously: it really strikes me that complaints about systemd come from people managing non-trivial setups like the one you describe. While it might have been a PITA to get this done with the old init mechanism, you could make it work reliably.

If systemd is a solution to any set of problems, I'd love to have those problems back!

[Nov 18, 2018] SystemD is just a symptom of this regression of Red Hat into a money-making machine

Nov 18, 2018 | theregister.co.uk

Will Godfrey , Thursday 10th May 2018 16:30 GMT

Business Model

Red Hat have definitely taken a lurch to the dark side in recent years. It seems to be the way businesses go.

They start off providing a service to customers.

As they grow the customers become users.

Once they reach a certain point the users become consumers, and at this point it is the 'consumers' that provide a service for the business.

SystemD is just a symptom of this regression.

[Nov 18, 2018] Fudging the start-up and restoring eth0

Truth be told, the biosdevname abomination is from Dell
Nov 18, 2018 | theregister.co.uk

The Electron , Thursday 10th May 2018 12:05 GMT

Fudging the start-up and restoring eth0

I knew systemd was coming thanks to playing with Fedora. The quicker start-up times were welcomed. That was about it! I have had to kickstart many of my CentOS 7 builds to disable IPv6 (NFS complains bitterly), kill the incredibly annoying 'biosdevname' that turns sensible eth0/eth1 into some daftly named nonsense, replace Gnome 3 (shudder) with MATE, and fudge start-up processes. In a previous job, I maintained 2 sets of CentOS 7 'infrastructure' servers that provided DNS, DHCP, NTP, and LDAP to a large number of historical vlans. Despite enabling the systemd-network wait online option, which is supposed to start all networks *before* listening services, systemd would run off flicking all the "on" switches having only set-up a couple of vlans. Result: NTP would only be listening on one or two vlan interfaces. The only way I found to get around that was to enable rc.local and call systemd to restart the NTP daemon after 20 seconds. I never had the time to raise a bug with Red Hat, and I assume the issue still persists as no-one designed systemd to handle 15-odd vlans!?

Jay 2 , Thursday 10th May 2018 15:02 GMT
Re: Predictable names

I can't remember if it's HPE or Dell (or both) where you can use set the kernel option biosdevname=0 during build/boot to turn all that renaming stuff off and revert to ethX.

However on (RHEL?)/CentOS 7 I've found that if you build a server like that, and then try to renam/swap the interfaces it will refuse point blank to allow you to swap the interfaces round so that something else can be eth0. In the end we just gave up and renamed everything lanX instead which it was quite happy with.
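For reference, the boot options mentioned above are usually made permanent through GRUB rather than typed at each boot; a sketch for RHEL/CentOS 7 (the grub.cfg path differs on UEFI systems):

    # append to the existing GRUB_CMDLINE_LINUX= line in /etc/default/grub:
    #     net.ifnames=0 biosdevname=0
    grub2-mkconfig -o /boot/grub2/grub.cfg   # regenerate grub.cfg, then reboot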

HieronymusBloggs , Thursday 10th May 2018 16:23 GMT
Re: Predictable names

"I can't remember if it's HPE or Dell (or both) where you can use set the kernel option biosdevname=0 during build/boot to turn all that renaming stuff off and revert to ethX."

I'm using this on my Debian 9 systems. IIRC the option to do so will be removed in Debian 10.

Dazed and Confused , Thursday 10th May 2018 19:21 GMT
Re: Predictable names

I can't remember if it's HPE or Dell (or both)

It's Dell. I got the impression that much of this work had been done, at least, in conjunction with Dell.

[Nov 18, 2018] The beatings will continue until morale improves.

Nov 18, 2018 | theregister.co.uk

Doctor Syntax , Thursday 10th May 2018 10:26 GMT

"The more people learn about it, the more they like it."

Translation: We define those who don't like it as not have learned enough about it.

ROC , Friday 11th May 2018 17:32 GMT
Alternate translation:

The beatings will continue until morale improves.

[Nov 18, 2018] I am barely tolerating SystemD on some servers because RHEL/CentOS 7 is the dominant business distro with a decent support life

Nov 18, 2018 | theregister.co.uk

AJ MacLeod , Thursday 10th May 2018 13:51 GMT

@Sheepykins

I'm not really bothered about whether init was perfect from the beginning - for as long as I've been using Linux (20 years) until now, I have never known the init system to be the cause of major issues. Since in my experience it's not been seriously broken for two decades, why throw it out now for something that is orders of magnitude more complex and ridiculously overreaching?

Like many here I bet, I am barely tolerating SystemD on some servers because RHEL/CentOS 7 is the dominant business distro with a decent support life - but this is also the first time I can recall ever having serious unpredictable issues with startup and shutdown on Linux servers.


stiine, Thursday 10th May 2018 15:38 GMT

sysV init

I've been using Linux ( RedHat, CentOS, Ubuntu), BSD (Solaris, SunOS, freeBSD) and Unix ( aix, sysv all of the way back to AT&T 3B2 servers) in farms of up to 400 servers since 1988 and I never, ever had issues with eth1 becoming eth0 after a reboot. I also never needed to run ifconfig before configuring an interface just to determine what the inteface was going to be named on a server at this time. Then they hired Poettering... now, if you replace a failed nic, 9 times out of 10, the interface is going to have a randomly different name.

/rant

[Nov 18, 2018] systemd helps with mounting NFS4 filesystems

Nov 18, 2018 | theregister.co.uk

Chronos , Thursday 10th May 2018 13:32 GMT

Re: Logging

And disk mounting

Well, I am compelled to agree with most everything you wrote except one niche area that systemd does better: Remember putzing about with the amd? One line in fstab:

nasbox:/srv/set0 /nas nfs4 _netdev,noauto,nolock,x-systemd.automount,x-systemd.idle-timeout=1min 0 0

Bloody thing only works and nobody's system comes grinding to a halt every time some essential maintenance is done on the NAS.

Candour compels me to admit surprise that it worked as advertised, though.

DCFusor , Thursday 10th May 2018 13:58 GMT

Re: Logging

No worries, as has happened with every workaround to make systemD simply mount cifs or NFS at boot, yours will fail as soon as the next change happens, yet it will remain on the 'net to be tried over and over as have all the other "fixes" for Poettering's arrogant breakages.

The last one I heard from him on this was "don't mount shares at boot, it's not reliable WONTFIX".

Which is why we're all bitching.

Break my stuff.

Web shows workaround.

Break workaround without fixing the original issue, really.

Never ensure one place for current dox on what works now.

Repeat above endlessly.

Fine if all you do is spin up endless identical instances in some cloud (EG a big chunk of RH customers - but not Debian for example). If like me you have 20+ machines customized to purpose...for which one workaround works on some but not others, and every new release of systemD seems to break something new that has to be tracked down and fixed, it's not acceptable - it's actually making proprietary solutions look more cost effective and less blood pressure raising.

The old init scripts worked once you got them right, and stayed working. A new distro release didn't break them, nor did a systemD update (because there wasn't one). This feels more like sabotage.

[Nov 18, 2018] Today I've kickstarted RHEL7 on a rack of 40 identical servers using same script. On about 25 out of 40 postinstall script added to rc.local failed to run with some obscure error

Nov 18, 2018 | theregister.co.uk

Dabbb , Thursday 10th May 2018 10:16 GMT

Quite understandable that people who don't know anything else would accept systemd. For everyone else it has nothing to do with old school but everything to do with unpredictability of systemd.

Today I've kickstarted RHEL7 on a rack of 40 identical servers using same script. On about 25 out of 40 postinstall script added to rc.local failed to run with some obscure error about script being terminated because something unintelligible did not like it. It never ever happened on RHEL6, it happens all the time on RHEL7. And that's exactly the reason I absolutely hate it both RHEL7 and systemd.

[Nov 18, 2018] You love Systemd you just don't know it yet, wink Red Hat bods

Nov 18, 2018 | theregister.co.uk

Anonymous Coward , Thursday 10th May 2018 02:58 GMT

Poettering still doesn't get it... Pid 1 is for people wearing big boy pants.

"And perhaps, in the process, you may warm up a bit more to the tool"

Like from LNG to Dry Ice? and by tool does he mean Poettering or systemd?

I love the fact that they aren't trying to address the huge and legitimate issues with Systemd, while still plowing ahead adding more things we don't want Systemd to touch into it's ever expanding sprawl.

The root of the issue with Systemd is the problems it causes, not the lack of "enhancements" initd offered. Replacing Init didn't require the breaking changes and incompatibility induced by Poettering's misguided handiwork. A clean init replacement would have made Big Linux more compatible with both it's roots and the other parts of the broader Linux/BSD/Unix world. As a result of his belligerent incompetence, other peoples projects have had to be re-engineered, resulting in incompatibility, extra porting work, and security problems. In short were stuck cleaning up his mess, and the consequences of his security blunders

A worthy Init replacement should have moved to compiled code and given us asynchronous startup, threading, etc, without senselessly re-writing basic command syntax or compatibility. Considering the importance of PID 1, it should have used a formal development process like the BSD world.

Fedora needs to stop enabling his prima donna antics and stop letting him touch things until he admits his mistakes and attempts to fix them. The flame wars not going away till he does.

asdf , Thursday 10th May 2018 23:38 GMT
Re: Poettering still doesn't get it... Pid 1 is for people wearing big boy pants.

SystemD is corporate money (Redhat support dollars) triumphing over the long hairs sadly. Enough money can buy a shitload of code and you can overwhelm the hippies with hairball dependencies (the key moment was udev being dependent on systemd) and soon get as much FOSS as possible dependent on the Linux kernel. This has always been the end game as Red Hat makes its bones on Linux specifically not on FOSS in general (that say runs on Solaris or HP-UX). The tighter they can glue the FOSS ecosystem and the Linux kernel together ala Windows lite style the better for their bottom line. Poettering is just being a good employee asshat extraordinaire he is.

whitepines , Thursday 10th May 2018 03:47 GMT
Raise your hand if you've been completely locked out of a server or laptop (as in, break out the recovery media and settle down, it'll be a while) because systemd:

1.) Couldn't raise a network interface

2.) Farted and forgot the UUID for a disk, then refused to give a recovery shell

3.) Decided an unimportant service (e.g. CUPS or avahi) was too critical to start before giving a login over SSH or locally, then that service stalls forever

4.) Decided that no, you will not be network booting your server today. No way to recover and no debug information, just an interminable hang as it raises wrong network interfaces and waits for DHCP addresses that will never come.

And lest the fun be restricted to startup, on shutdown systemd can quite happily hang forever doing things like stopping nonessential services, *with no timeout and no way to interrupt*. Then you have to Magic Sysreq the machine, except that sometimes secure servers don't have that ability, at least not remotely. Cue data loss and general excitement.

And that's not even going into the fact that you need to *reboot the machine* to patch the *network enabled* and highly privileged systemd, or that it seems to have the attack surface of Jupiter.

Upstart was better than this. SysV was better than this. Mac is better than this. Windows is better than this.

Uggh.

Daggerchild , Thursday 10th May 2018 11:39 GMT
Re: Ahhh SystemD

I honestly would love someone to lay out the problems it solves. Solaris has a similar parallellised startup system, with some similar problems, but it didn't need pid 1.

Tridac , Thursday 10th May 2018 11:53 GMT
Re: Ahhh SystemD

Agreed, Solaris svcadm and svcs etc are an example of how it should be done. A layered approach maintaining what was already there, while adding functionality for management purposes. Keeps all the old text based log files and uses xml scripts (human readable and editable) for higher level functions. Afaics, systemd is a power grab by red hat and an ego trip for it's primary developer. Dumped bloatware Linux in favour of FreeBSD and others after Suse 11.4, though that was bad enough with Gnome 3...

[Nov 17, 2018] RHEL 8 Beta arrives with application streams and more Network World

Nov 17, 2018 | www.networkworld.com

What is changing in networking?

More efficient networking is provided in containers through IPVLAN, which connects containers nested in virtual machines to networking hosts with minimal impact on throughput and latency.

RHEL 8 Beta also provides a new TCP/IP stack that provides bandwidth and round-trip propagation time (BBR) congestion control. BBR is a fairly new TCP delay-controlled TCP flow control algorithm from Google. These changes will lead to higher performance network connections, minimized latency, and less packet loss for all internet services (e.g., streaming video and hosted storage).
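Assuming the RHEL 8 kernel exposes BBR the same way upstream kernels do (an assumption on my part), switching a host over is a two-line sysctl change:

    sysctl -w net.core.default_qdisc=fq              # BBR is normally paired with the fq queueing discipline
    sysctl -w net.ipv4.tcp_congestion_control=bbr
    sysctl net.ipv4.tcp_congestion_control           # verify the active algorithm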

[Nov 09, 2018] OpenStack is overkill for Docker

Notable quotes:
"... OpenStack's core value is to gather a pool of hypervisor-enabled computers and enable the delivery of virtual machines (VMs) on demand to users. ..."
Nov 09, 2018 | www.techrepublic.com


Both OpenStack and Docker were conceived to make IT more agile. OpenStack has strived to do this by turning hitherto static IT resources into elastic infrastructure, whereas Docker has reached for this goal by harmonizing development, test, and production resources, as Red Hat's Neil Levine suggests.

But while Docker adoption has soared, OpenStack is still largely stuck in neutral. OpenStack is kept relevant by so many wanting to believe its promise, but never hitting its stride due to a host of factors , including complexity.

And yet Docker could be just the thing to turn OpenStack's popularity into productivity. Whether a Docker-plus-OpenStack pairing is right for your enterprise largely depends on the kind of capacity your enterprise hopes to deliver. If all you need is Docker, OpenStack is probably overkill.

An open source approach to delivering virtual machines

OpenStack is an operational model for delivering virtualized compute capacity.

Sure, some give it a more grandiose definition ("OpenStack is a set of software tools for building and managing cloud computing platforms for public and private clouds"), but if we ignore secondary services like Cinder, Heat, and Magnum, for example, OpenStack's core value is to gather a pool of hypervisor-enabled computers and enable the delivery of virtual machines (VMs) on demand to users.

That's it.

Not that this is a small thing. After all, without OpenStack, the hypervisor sits idle, lonesome on a single computer, with no way to expose that capacity programmatically (or otherwise) to users.

Before cloudy systems like OpenStack or Amazon's EC2, users would typically file a help ticket with IT. An IT admin, in turn, would use a GUI or command line to create a VM, and then share the credentials with the user.

Systems like OpenStack significantly streamline this process, enabling IT to programmatically deliver capacity to users. That's a big deal.

Docker peanut butter, meet OpenStack jelly

Docker, the darling of the containers world, is similar to the VM in the IaaS picture painted above.

A Docker host is really the unit of compute capacity that users need, and not the container itself. Docker addresses what you do with a host once you've got it, but it doesn't really help you get the host in the first place.

Docker Machine is a client-side tool that lets you request Docker hosts from an IaaS provider (like EC2, OpenStack, or vSphere), but it's far from a complete solution. In part, this stems from the fact that Docker doesn't have a tenancy model.

With a hypervisor, each VM is a tenant. But in Docker, the Docker host is a tenant. You typically don't want multiple users sharing a Docker host because then they see each others' containers. So typically an enterprise will layer a cloud system underneath Docker to add tenancy. This yields a stack that looks like: hardware > hypervisor > Docker host > container.

A common approach today would be to take OpenStack and use it as the enterprise platform to deliver capacity on demand to users. In other words, users rely on OpenStack to request a Docker host, and then they use Docker to run containers in their Docker host.
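As a sketch of that workflow, Docker Machine's OpenStack driver can do the "request a Docker host" step. The flavor, image and network names below are placeholders, and the cloud credentials (OS_AUTH_URL and friends) are assumed to be exported already:

    # Ask OpenStack for a VM and provision it as a Docker host
    docker-machine create --driver openstack \
      --openstack-flavor-name m1.medium \
      --openstack-image-name ubuntu-16.04 \
      --openstack-net-name private \
      --openstack-ssh-user ubuntu \
      docker-host-01

    # Point the local Docker client at the new host and run containers there
    eval "$(docker-machine env docker-host-01)"
    docker run -d nginx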

So far, so good.

If all you need is Docker...

Things get more complicated when we start parsing what capacity needs delivering.

When an enterprise wants to use Docker, they need to get Docker hosts from a data center. OpenStack can do that, and it can do it alongside delivering all sorts of other capacity to the various teams within the enterprise.

But if all an enterprise IT team needs is Docker containers delivered, then OpenStack -- or a similar orchestration tool -- may be overkill, as VMware executive Jared Rosoff told me.

For this sort of use case, we really need a new platform. This platform could take the form of a piece of software that an enterprise installs on all of its computers in the data center. It would expose an interface to developers that lets them programmatically create Docker hosts when they need them, and then use Docker to create containers in those hosts.

Google has a vision for something like this with its Google Container Engine. Amazon has something similar in its EC2 Container Service. These are both APIs that developers can use to provision some Docker-compatible capacity from their data center.

As for Docker (the company behind Docker, the technology), it seems to have punted on this problem, focusing instead on what happens on the host itself.

While we probably don't need to build up a big OpenStack cloud simply to manage Docker instances, it's worth asking what OpenStack should look like if what we wanted to deliver was only Docker hosts, and not VMs.

Again, we see Google and Amazon tackling the problem, but when will OpenStack, or one of its supporters, do the same? The obvious candidate would be VMware, given its longstanding dominance of tooling around virtualization. But the company that solves this problem first, and in a way that comforts traditional IT with familiar interfaces yet pulls them into a cloudy future, will win, and win big.

[Nov 03, 2018] Is Red Hat IBM's 'Hail Mary' pass?

Notable quotes:
"... if those employees become unhappy, they can effectively go anywhere they want. ..."
"... IBM's partner/reseller ecosystem is nowhere near what it was since it owned the PC and Server businesses that Lenovo now owns. And IBM's Softlayer/BlueMix cloud is largely tied to its legacy software business, which, again, is slowing. ..."
"... I came to IBM from their SoftLayer acquisition. Their ability to stomp all over the things SoftLayer was almost doing right were astounding. I stood and listened to Ginni say things like, "We purchased SoftLayer because we need to learn from you," and, "We want you to teach us how to do Cloud the right way, since we spent all these years doing things the wrong way," and, "If you find yourself in a meeting with one of our old teams, you guys are gonna be the ones in charge. You are the ones who know how this is supposed to work - our culture has failed at it." Promises which were nothing more than hollow words. ..."
"... Next, it's a little worrisome that the author, now over the whole IBM thing is recommending firing "older people," you know, the ones who helped the company retain its performance in years' past. The smartest article I've read about IBM worried about its cheap style of "acquiring" non-best-of-breed companies and firing oodles of its qualified R&D guys. THAT author was right. ..."
"... Four years in GTS ... joined via being outsourced to IBM by my previous employer. Left GTS after 4 years. ..."
"... The IBM way of life was throughout the Oughts and the Teens an utter and complete failure from the perspective of getting work done right and using people to their appropriate and full potential. ..."
"... As a GTS employee, professional technical training was deemed unnecessary, hence I had no access to any unless I paid for it myself and used my personal time ... the only training available was cheesy presentations or other web based garbage from the intranet, or casual / OJT style meetings with other staff who were NOT professional or expert trainers. ..."
"... As a GTS employee, I had NO access to the expert and professional tools that IBM fricking made and sold to the same damn customers I was supposed to be supporting. Did we have expert and professional workflow / document management / ITIL aligned incident and problem management tools? NO, we had fricking Lotus Notes and email. Instead of upgrading to the newest and best software solutions for data center / IT management & support, we degraded everything down the simplest and least complex single function tools that no "best practices" organization on Earth would ever consider using. ..."
"... And the people management paradigm ... employees ranked annually not against a static or shared goal or metric, but in relation to each other, and there was ALWAYS a "top 10 percent" and a "bottom ten percent" required by upper management ... a system that was sociopathic in it's nature because it encourages employees to NOT work together ... by screwing over one's coworkers, perhaps by not giving necessary information, timely support, assistance as needed or requested, one could potentially hurt their performance and make oneself look relatively better. That's a self-defeating system and it was encouraged by the way IBM ran things. ..."
Nov 03, 2018 | www.zdnet.com
Brain drain is a real risk

IBM has not had a particularly great track record when it comes to integrating the cultures of other companies into its own, and brain drain with a company like Red Hat is a real risk because if those employees become unhappy, they can effectively go anywhere they want. They have the skills to command very high salaries at any of the top companies in the industry.

The other issue is that IBM hasn't figured out how to capture revenue from SMBs -- and that has always been elusive for them. Unless a deal is worth at least $1 million, and realistically $10 million, sales guys at IBM don't tend to get motivated.

Also: Red Hat changes its open-source licensing rules

The 5,000-seat and below market segment has traditionally been partner territory, and when it comes to reseller partners for its cloud, IBM is way, way behind AWS, Microsoft, Google, or even (gasp) Oracle, which is now offering serious margins to partners that land workloads on the Oracle cloud.

IBM's partner/reseller ecosystem is nowhere near what it was since it owned the PC and Server businesses that Lenovo now owns. And IBM's Softlayer/BlueMix cloud is largely tied to its legacy software business, which, again, is slowing.

... ... ...

But I think that it is very unlikely the IBM Cloud, even when juiced on Red Hat steroids, will become anything more ambitious than a boutique business for hybrid workloads when compared with AWS or Azure. Realistically, it has to be the kind of cloud platform that interoperates well with the others or nobody will want it.


geek49203_z , Wednesday, April 26, 2017 10:27 AM

Ex-IBM contractor here...

1. IBM used to value long-term employees. Now they "value" short-term contractors -- but they still pull them out of production for lots of training that, quite frankly, isn't exactly needed for what they are doing. Personally, I think that IBM would do well to return to valuing employees instead of looking at them as expendable commodities, but either way, they need to get past the legacies of when they had long-term employees all watching a single main frame.

2. As IBM moved to an army of contractors, they killed off the informal (but important!) web of tribal knowledge. You know, a friend of a friend who knew the answer to some issue, or knew something about this customer? What has happened is that the transaction costs (as economists call them) have escalated until IBM can scarcely order IBM hardware for its own projects, or have SDMs work together.

M Wagner geek49203_z , Wednesday, April 26, 2017 10:35 AM
geek49203_z Number 2 is a problem everywhere. As long-time employees (mostly baby boomers) retire, their replacements are usually straight out of college with various non-technical degrees. They come in with little history and few older employees to whom they can turn for "the tricks of the trade".
Shmeg , Wednesday, April 26, 2017 10:41 AM
I came to IBM from their SoftLayer acquisition. Their ability to stomp all over the things SoftLayer was almost doing right were astounding. I stood and listened to Ginni say things like, "We purchased SoftLayer because we need to learn from you," and, "We want you to teach us how to do Cloud the right way, since we spent all these years doing things the wrong way," and, "If you find yourself in a meeting with one of our old teams, you guys are gonna be the ones in charge. You are the ones who know how this is supposed to work - our culture has failed at it." Promises which were nothing more than hollow words.
cavman , Wednesday, April 26, 2017 3:58 PM
In the 1970's 80's and 90's I was working in tech support for a company called ROLM. We were doing communications , voice and data and did many systems for Fortune 500 companies along with 911 systems and the secure system at the White House. My job was to fly all over North America to solve problems with customers and integration of our equipment into their business model. I also did BETA trials and documented systems so others would understand what it took to make it run fine under all conditions.

In '84 IBM bought a percentage of the company and the next year they bought out the company. When someone said to me, "IBM just bought you out, you must think you died and went to heaven," my response was, "Think of them as being like the Federal Government but making a profit." They were so heavily structured and hidebound that it was a constant battle working with them. Their response to any comments was "We are IBM".

I was working on an equipment project in Colorado Springs and IBM took control. I was immediately advised that I could only talk to the people in my assigned group; if I had a question outside of my group I had to put it in writing and give it to my manager, and if he thought it was relevant it would be forwarded up the ladder of management until it reached a manager who had control of both groups. At that point, if he thought it was relevant, it would be sent to that group, which would send the answer back up the ladder.

I'm a Vietnam Veteran and I used my military training to get things done just like I did out in the field. I went looking for the person I could get an answer from.

At first others were nervous about doing that but within a month I had connections all over the facility and started introducing people at the cafeteria. Things moved quickly as people started working together as a unit. I finished my part of the work which was figuring all the spares technicians would need plus the costs for packaging and service contract estimates. I submitted it to all the people that needed it. I was then hauled into a meeting room by the IBM management and advised that I was a disruptive influence and would be removed. Just then the final contracts that vendors had to sign showed up and it used all my info. The IBM people were livid that they were not involved.

By the way a couple months later the IBM THINK magazine came out with a new story about a radical concept they had tried. A cover would not fit on a component and under the old system both the component and the cover would be thrown out and they would start from scratch doing it over. They decided to have the two groups sit together and figure out why it would not fit and correct it on the spot.

Another great example of IBM people: we had a sales contract to install a multi-node voice mail system at Wang computers, but we lost it because the IBM people insisted on bundling AS/400 systems into the sale to Wang. As a result we lost a multi-million dollar contract.

Eventually Siemens bought 50% of the company and eventually full control. Now all we heard was "That is how we do it in Germany" Our response was "How did that WW II thing work out".

Stockholder , Wednesday, April 26, 2017 7:20 PM
The first thing noticeable about this article is that the author may have more loyalty to Microsoft than he confides. The second thing is that in terms of getting rid of those aged IBM workers, I think he may have completely missed the mark; in fairness, that may be the product of his IBM experience. The sheer hubris of tech-talking from the middle of the story while missing the global misstep that is today's IBM is noticeable. As a stockholder, the first question is, "Where is the investigation into the breach of fiduciary duty by a board that owes its loyalty to stockholders who are scratching their heads at the 'positive' spin the likes of Ginni Rometty is putting on 20 quarters of dead losses?" Got that, 20 quarters of losses.

Next, it's a little worrisome that the author, now over the whole IBM thing is recommending firing "older people," you know, the ones who helped the company retain its performance in years' past. The smartest article I've read about IBM worried about its cheap style of "acquiring" non-best-of-breed companies and firing oodles of its qualified R&D guys. THAT author was right.

IBM's been run into the ground by Ginni, I'll use her first name, since apparently my money is now used to prop up this sham of a leader, who from her uncomfortable public announcement with Tim Cook of Apple, which HAS gone up, by the way, has embraced every political trend, not cause but trend from hiring more women to marginalizing all those old-time white males...You know the ones who produced for the company based on merit, sweat, expertise, all those non-feeling based skills that ultimately are what a shareholder is interested in and replaced them with young, and apparently "social" experts who are pasting some phony "modernity" on a company that under Ginni's leadership has become more of a pet cause than a company.

Finally, regarding ageism and the author's advocacy for the same, IBM's been there, done that as they lost an age discrimination lawsuit decades ago. IBM gave up on doing what it had the ability to do as an enormous business and instead under Rometty's leadership has tried to compete with the scrappy startups where any halfwit knows IBM cannot compete.

The company has rendered itself ridiculous under Rometty, a board that collects paychecks and breaches any notion of fiduciary duty to shareholders, an attempt at partnering with a "mod" company like Apple that simply bolstered Apple and left IBM languishing and a rejection of what has a track record of working, excellence, rewarding effort of employees and the steady plod of performance. Dump the board and dump Rometty.

jperlow Stockholder , Wednesday, April 26, 2017 8:36 PM
Stockholder Your comments regarding any inclination towards age discrimination are duly noted, so I added a qualifier in the piece.
Gravyboat McGee , Wednesday, April 26, 2017 9:00 PM
Four years in GTS ... joined via being outsourced to IBM by my previous employer. Left GTS after 4 years.

The IBM way of life was throughout the Oughts and the Teens an utter and complete failure from the perspective of getting work done right and using people to their appropriate and full potential. I went from a multi-disciplinary team of engineers working across technologies to support corporate needs in the IT environment to being siloed into a single-function organization.

My first year of on-boarding with IBM was spent deconstructing application integration and cross-organizational structures of support and interworking that I had spent 6 years building and maintaining. Handing off different chunks of work (again, before the outsourcing, an Enterprise solution supported by one multi-disciplinary team) to different IBM GTS work silos that had no physical spatial relationship and no interworking history or habits. What we're talking about here is the notion of "left hand not knowing what the right hand is doing" ...

THAT was the IBM way of doing things, and nothing I've read about them over the past decade or so tells me it has changed.

As a GTS employee, professional technical training was deemed unnecessary, hence I had no access to any unless I paid for it myself and used my personal time ... the only training available was cheesy presentations or other web based garbage from the intranet, or casual / OJT style meetings with other staff who were NOT professional or expert trainers.

As a GTS employee, I had NO access to the expert and professional tools that IBM fricking made and sold to the same damn customers I was supposed to be supporting. Did we have expert and professional workflow / document management / ITIL aligned incident and problem management tools? NO, we had fricking Lotus Notes and email. Instead of upgrading to the newest and best software solutions for data center / IT management & support, we degraded everything down the simplest and least complex single function tools that no "best practices" organization on Earth would ever consider using.

And the people management paradigm ... employees ranked annually not against a static or shared goal or metric, but in relation to each other, and there was ALWAYS a "top 10 percent" and a "bottom ten percent" required by upper management ... a system that was sociopathic in its nature because it encourages employees to NOT work together ... by screwing over one's coworkers, perhaps by not giving necessary information, timely support, assistance as needed or requested, one could potentially hurt their performance and make oneself look relatively better. That's a self-defeating system and it was encouraged by the way IBM ran things.

The "not invented here" ideology was embedded deeply in the souls of all senior IBMers I ever met or worked with ... if you come on board with any outside knowledge or experience, you must not dare to say "this way works better" because you'd be shut down before you could blink. The phrase "best practices" to them means "the way we've always done it".

IBM gave up on innovation long ago. Since the 90's the vast majority of their software has been bought, not built. Buy a small company, strip out the innovation, slap an IBM label on it, sell it as the next coming of Jesus even though they refuse to expend any R&D to push the product to the next level ... damn near everything IBM sold was gentrified, never cutting edge.

And don't get me started on sales practices ... tell the customer how product XYZ is a guaranteed moonshot, they'll be living on lunar real estate in no time at all, and after all the contracts are signed hand the customer a box of nuts & bolts and a letter telling them where they can look up instructions on how to build their own moon rocket. Or for XX dollars more a year, hire a Professional Services IBMer to build it for them.

I have no sympathy for IBM. They need a clean sweep throughout upper management, especially any of the old True Blue hard-core IBMers.

billa201 , Thursday, April 27, 2017 11:24 AM
You obviously have been gone from IBM, as they do not treat their employees well anymore and get rid of good talent rather than keeping it. A sad state.
ClearCreek , Tuesday, May 9, 2017 7:04 PM
We tried our best to be SMB partners with IBM & Arrow in the early 2000s ... but could never get any traction. I personally needed a mentor, but never found one. I still have/wear some of their swag, and I write this right now on a re-purposed IBM 1U server that is 10 years old, but ... I can't see any way our small company can make $ with them.

Watson is impressive, but you can't build a company on just Watson. This author has some great ideas, yet the phrase that keeps coming to me is internal politics. That corrosive reality has & will kill companies, and it will kill IBM unless it is dealt with.

Turn-arounds are possible (look at MS), but they are hard and dangerous. Hope IBM can figure it out...

[Nov 02, 2018] The D in Systemd stands for 'Dammmmit!' A nasty DHCPv6 packet can pwn a vulnerable Linux box by Shaun Nichols

Notable quotes:
"... Hole opens up remote-code execution to miscreants – or a crash, if you're lucky ..."
"... You can use NAT with IPv6. ..."
Oct 26, 2018 | theregister.co.uk

Hole opens up remote-code execution to miscreants – or a crash, if you're lucky A security bug in Systemd can be exploited over the network to, at best, potentially crash a vulnerable Linux machine, or, at worst, execute malicious code on the box.

The flaw therefore puts Systemd-powered Linux computers – specifically those using systemd-networkd – at risk of remote hijacking: maliciously crafted DHCPv6 packets can try to exploit the programming cockup and arbitrarily change parts of memory in vulnerable systems, leading to potential code execution. This code could install malware, spyware, and other nasties, if successful.

The vulnerability – which was made public this week – sits within the written-from-scratch DHCPv6 client of the open-source Systemd management suite, which is built into various flavors of Linux.

This client is activated automatically if IPv6 support is enabled, and relevant packets arrive for processing. Thus, a rogue DHCPv6 server on a network, or in an ISP, could emit specially crafted router advertisement messages that wake up these clients, exploit the bug, and possibly hijack or crash vulnerable Systemd-powered Linux machines.

Here's the Red Hat Linux summary:

systemd-networkd is vulnerable to an out-of-bounds heap write in the DHCPv6 client when handling options sent by network-adjacent DHCP servers. An attacker could exploit this via a malicious DHCP server to corrupt heap memory on client machines, resulting in a denial of service or potential code execution.

Felix Wilhelm, of the Google Security team, was credited with discovering the flaw, designated CVE-2018-15688 . Wilhelm found that a specially crafted DHCPv6 network packet could trigger "a very powerful and largely controlled out-of-bounds heap write," which could be used by a remote hacker to inject and execute code.

"The overflow can be triggered relatively easy by advertising a DHCPv6 server with a server-id >= 493 characters long," Wilhelm noted.

In addition to Ubuntu and Red Hat Enterprise Linux, Systemd has been adopted as a service manager for Debian, Fedora, CoreOS, Mint, and SUSE Linux Enterprise Server. We're told RHEL 7, at least, does not use the vulnerable component by default.

Systemd creator Lennart Poettering has already published a security fix for the vulnerable component – this should be weaving its way into distros as we type.

If you run a Systemd-based Linux system, and rely on systemd-networkd, update your operating system as soon as you can to pick up the fix when available and as necessary.
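On most distros that boils down to pulling the patched systemd package once it lands; a minimal sketch (the package name is the standard one, the exact command depends on your distro):

    sudo apt update && sudo apt install --only-upgrade systemd   # Debian/Ubuntu
    sudo yum update systemd                                      # RHEL/CentOS
    sudo dnf upgrade systemd                                     # Fedora

    # Confirm what is now running
    systemctl --version | head -n1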

The bug will come as another argument against Systemd as the Linux management tool continues to fight for the hearts and minds of admins and developers alike. Though a number of major admins have in recent years adopted and championed it as the replacement for the old Init era, others within the Linux world seem to still be less than impressed with Systemd and Poettering's occasionally controversial management of the tool. ®


Oh Homer , 6 days

Meh

As anyone who bothers to read my comments (BTW "hi" to both of you) already knows, I despise systemd with a passion, but this one is more an IPv6 problem in general.

Yes this is an actual bug in networkd, but IPv6 seems to be far more bug-prone than v4, and problems are rife in all implementations. Whether that's because the spec itself is flawed, or because nobody understands v6 well enough to implement it correctly, or possibly because there's just zero interest in making any real effort, I don't know, but it's a fact nonetheless, and it's my primary reason for disabling it wherever I find it. Which of course contributes to the "zero interest" problem that perpetuates v6's bug-prone condition, ad nauseam.

IPv6 is just one of those tech pariahs that everyone loves to hate, much like systemd, albeit fully deserved IMO.

Oh yeah, and here's the obligatory "systemd sucks". Personally I always assumed the "d" stood for "destroyer". I believe the "IP" in "IPv6" stands for "Idiot Protocol".

Anonymous Coward , 6 days
Re: Meh

"nonetheless, and my primary reason for disabling it wherever I find it. "

The very first guide I read to hardening a system recommended disabling services you didn't need and emphasized IPV6 for the reasons you just stated.

Wasn't there a bug in Xorg reported recently as well?

https://www.theregister.co.uk/2018/10/25/x_org_server_vulnerability/

"FreeDesktop.org Might Formally Join Forces With The X.Org Foundation"

https://www.phoronix.com/scan.php?page=news_item&px=FreeDesktop-org-Xorg-Forces

Also, does this mean that Facebook was vulnerable to attack, again?

"Simply put, you could say Facebook loves systemd."

https://www.phoronix.com/scan.php?page=news_item&px=Facebook-systemd-2018

Jay Lenovo , 6 days
Re: Meh

IPv6 and SystemD: Forced industry-standard diseases that require most of us to bite our lips and bear it.

Fortunately, IPv6 by lack of adopted use, limits the scope of this bug.

vtcodger , 6 days
Re: Meh
Fortunately, IPv6 by lack of adopted use, limits the scope of this bug.

Yeah, fortunately IPv6 is only used by a few fringe organizations like Google and Microsoft.

Seriously, I personally want nothing to do with either systemd or IPv6. Both seem to me to fall into the bin labeled "If it ain't broke, let's break it". But still it's troubling that things that some folks regard as major system components continue to ship with significant security flaws. How can one trust anything connected to the Internet that is more sophisticated and complex than a TV streaming box?

DougS , 6 days
Re: Meh

Was going to say the same thing, and I disable IPv6 for the exact same reason. IPv6 code isn't as well tested, as well audited, or as well targeted looking for exploits as IPv4. Stuff like this only proves that it was smart to wait, and I should wait some more.
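For reference, "disabling IPv6" on a typical Linux box usually comes down to a couple of sysctls (a minimal sketch; some setups instead pass ipv6.disable=1 on the kernel command line, and applications that explicitly bind v6 sockets may still complain):

    # Runtime
    sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
    sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1

    # Persistent across reboots
    printf 'net.ipv6.conf.all.disable_ipv6 = 1\nnet.ipv6.conf.default.disable_ipv6 = 1\n' | \
      sudo tee /etc/sysctl.d/99-disable-ipv6.conf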

Nate Amsden , 6 days
Re: Meh

Count me in the camp of those who hate systemd (hate it being "forced" on just about every distro, otherwise I wouldn't care about it - and yes, I am moving my personal servers to Devuan; I thought I could go Debian 7 -> Devuan, but it turns out that may not work, so I upgraded to Debian 8 a few weeks ago and will go to Devuan from there in a few weeks; one Debian 8 box upgraded to Devuan already, 3 more to go -- Debian user since 1998). When reading this article it reminded me of

https://www.theregister.co.uk/2017/06/29/systemd_pwned_by_dns_query/

bombastic bob , 6 days
The gift that keeps on giving (systemd) !!!

This makes me glad I'm using FreeBSD. The Xorg version in FreeBSD's ports is currently *slightly* older than the Xorg version that had that vulnerability in it. AND, FreeBSD will *NEVER* have systemd in it!

(and, for Linux, when I need it, I've been using Devuan)

That being said, the whole idea of "let's do a re-write and do a 'systemd' instead of 'system V init' because WE CAN and it's OUR TURN NOW, 'modern' 'change for the sake of change' etc." kinda reminds me of recent "update" problems with Win-10-nic...

Oh, and an obligatory Schadenfreude laugh: HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA HA!!!!!!!!!!!!!!!!!!!

Long John Brass , 6 days
Re: The gift that keeps on giving (systemd) !!!

Finally got all my machines cut over from Debian to Devuan.

Might spin a FreeBSD system up in a VM and have a play.

I suspect that the infestation of stupid into the Linux space won't stop with or be limited to SystemD. I will wait and watch to see what damage the re-education gulag has done to Sweary McSwearFace (Mr Torvalds)

Dan 55 , 6 days
Re: Meh

I despise systemd with a passion, but this one is more an IPv6 problem in general.

Not really, systemd has its tentacles everywhere and runs as root. Exploits which affect systemd therefore give you the keys to the kingdom.

Orv , 3 days
Re: Meh
Not really, systemd has its tentacles everywhere and runs as root.

Yes, but not really the problem in this case. Any DHCP client is going to have to run at least part of the time as root. There's not enough nuance in the Linux privilege model to allow it to manipulate network interfaces, otherwise.

Long John Brass , 3 days
Re: Meh
Yes, but not really the problem in this case. Any DHCP client is going to have to run at least part of the time as root. There's not enough nuance in the Linux privilege model to allow it to manipulate network interfaces, otherwise.

Sorry but utter bullshit. If you are so inclined, you can use the Linux Capabilities framework for this kind of thing. See https://wiki.archlinux.org/index.php/capabilities
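A minimal sketch of what that looks like in practice, using a stand-alone DHCP client as the example (the binary path is illustrative; the systemd directives shown are the stock CapabilityBoundingSet/AmbientCapabilities ones):

    # File capabilities: let the client manage interfaces without full root
    sudo setcap cap_net_admin,cap_net_raw,cap_net_bind_service+ep /usr/sbin/dhclient

    # Or confine a service declaratively in its unit file:
    #   [Service]
    #   User=dhcp
    #   AmbientCapabilities=CAP_NET_ADMIN CAP_NET_RAW CAP_NET_BIND_SERVICE
    #   CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_RAW CAP_NET_BIND_SERVICE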

JohnFen , 6 days
Yay for me

"If you run a Systemd-based Linux system"

I remain very happy that I don't use systemd on any of my machines anymore. :)

"others within the Linux world seem to still be less than impressed with Systemd"

Yep, I'm in that camp. I gave it a good, honest go, but it increased the amount of hassle and pain of system management without providing any noticeable benefit, so I ditched it.

ElReg!comments!Pierre , 2 days
Re: Time to troll

> Just like it's entirely possible to have a Linux system without any GNU in it

Just like it's possible to have a GNU system without Linux on it - oh well, as soon as GNU Mach is finally up to the task ;-)

On the systemd angle, I, too, am in the process of switching all my machines from Debian to Devuan, but on my personal(*) network a few systemd-infected machines remain, thanks to a combination of laziness on my part and a stubborn "systemd is quite OK" attitude from the raspy foundation. That vuln may be the last straw: one of the aforementioned machines sits on my DMZ, chatting freely with the outside world. Nothing really crucial on it, but I'd hate it if it became a foothold for nasties on my network.

(*) policy at work is RHEL, and that's negotiated far above my influence level, but I don't really care as all my important stuff runs on z/OS anyway ;-). OK, we have to reboot a few VMs occasionally when systemd throws a hissy fit - which is surprisingly often for an "enterprise" OS - but meh.

Destroy All Monsters , 5 days
Re: Not possible

This code is actually pretty bad and should raise all kinds of red flags in a code review.

Anonymous Coward , 5 days
Re: Not possible

ITYM Lennart

Christian Berger , 5 days
Re: Not possible

"This code is actually pretty bad and should raise all kinds of red flags in a code review."

Yeah, but for that you need people who can do code reviews, and also people who can accept criticism. That also means saying "no" to people who are bad at coding, and saying that repeatedly if they don't learn.

SystemD seems to be the area where people gather who want to get code in for their resumes, not people who actually want to make the world a better place.

jake , 6 days
There is a reason ...

... that an init, traditionally, is a small bit of code that does one thing very well. Like most of the rest of the *nix core utilities. All an init should do is start PID 1, set the run level, spawn a tty (or several), handle a graceful shutdown, and log all the above in plaintext to make troubleshooting as simple as possible. Anything else is a vanity project that is best placed elsewhere, in its own stand-alone code base.

Inventing a clusterfuck init variation that's so big and bulky that it needs to be called a "suite" is just asking for trouble.

IMO, systemd is a cancer that is growing out of control, and needs to be cut out of Linux before it infects enough of the system to kill it permanently.

AdamWill , 6 days
Re: There is a reason ...

That's why systemd-networkd is a separate, optional component, and not actually part of the init daemon at all. Most systemd distros do not use it by default and thus are not vulnerable to this unless the user actively disables the default network manager and chooses to use networkd instead.

Anonymous Coward , 4 days
Re: There is a reason ...

"Just go install a default Fedora or Ubuntu system and check for yourself: you'll have systemd, but you *won't* have systemd-networkd running."

Funny that I installed ubuntu 18.04 a few weeks ago and the fucking thing installed itself then! ( and was a fucking pain to remove).

LP is a fucking arsehole.

Orv , 3 days
Re: There is a reason ...
Pardon my ignorance (I don't use a distro with systemd), but why bother with networkd in the first place if you don't have to use it?

Mostly because the old-style init system doesn't cope all that well with systems that move from network to network. It works for systems with a static IP, or that do a DHCP request at boot, but it falls down on anything more dynamic.

In order to avoid restarting the whole network system every time they switch WiFi access points, people have kludged on solutions like NetworkManager. But it's hard to argue it's more stable or secure than networkd. And this is always going to be a point of vulnerability because anything that manipulates network interfaces will have to be running as root.

These days networking is essential to the basic functionality of most computers; I think there's a good argument that it doesn't make much sense to treat it as a second-class citizen.

AdamWill , 2 days
Re: There is a reason ...

"Funny that I installed ubuntu 18.04 a few weeks ago and the fucking thing installed itself then! ( and was a fucking pain to remove)."

So I looked into it a bit more, and from a few references at least, it seems like Ubuntu has a sort of network configuration abstraction thingy that can use both NM and systemd-networkd as backends; on Ubuntu desktop flavors NM is usually the default, but apparently for recent Ubuntu Server, networkd might indeed be the default. I didn't notice that as, whenever I want to check what's going on in Ubuntu land, I tend to install the default desktop spin...

"LP is a fucking arsehole."

systemd's a lot bigger than Lennart, you know. If my grep fu is correct, out of 1543 commits to networkd, only 298 are from Lennart...

alain williams , 6 days
Old is good

in many respects when it comes to software because, over time, the bugs will have been found and squashed. Systemd brings in a lot of new code which will, naturally, have lots of bugs that will take time to find & remove. This is why we get problems like this DHCP one.

Much as I like the venerable init: it did need replacing. Systemd is one way to go, more flexible, etc, etc. Something event driven is a good approach.

One of the main problems with systemd is that it has become too big, slurped up lots of functionality which has removed choice, increased fragility. They should have concentrated on adding ways of talking to existing daemons, eg dhcpd, through an API/something. This would have reused old code (good) and allowed other implementations to use the API - this letting people choose what they wanted to run.

But no: Poettering seems to want to build a Cathedral rather than a Bazaar.

He appears to want to make it his way or no way. This is bad, one reason that *nix is good is because different solutions to a problem have been able to be chosen, one removed and another slotted in. This encourages competition and the 'best of breed' comes out on top. Poettering is endangering that process.

Also: his refusal to accept patches to let it work on non-Linux Unix is just plain nasty.

oiseau , 4 days
Re: Old is good

Hello:

One of the main problems with systemd is that it has become too big, slurped up lots of functionality which has removed choice, increased fragility.

IMO, there is a striking parallel between systemd and the registry in Windows OSs.

After many years of dealing with the registry (W98 to XPSP3) I ended up seeing the registry as a sort of developer sanctioned virus running inside the OS, constantly changing and going deeper and deeper into the OS with every iteration and as a result, progressively putting an end to the possibility of knowing/controlling what was going on inside your box/the OS.

Years later, when I learned about the existence of systemd (I was already running Ubuntu) and read up on what it did and how it did it, it dawned on me that systemd was nothing more than a registry class virus and it was infecting Linux_land at the behest of the developers involved.

So I moved from Ubuntu to PCLinuxOS and then on to Devuan.

Call me paranoid but I am convinced that there are people both inside and outside IT that actually want this and are quite willing to pay shitloads of money for it to happen.

I don't see this MS cozying up to Linux in various ways lately as a coincidence: these things do not happen just because or on a senior manager's whim.

What I do see (YMMV) is systemd being a sort of convergence of Linux with Windows, which will not be good for Linux and may well be its undoing.

Cheers,

O.

Rich 2 , 4 days
Re: Old is good

"Also: he refusal to accept patches to let it work on non-Linux Unix is just plain nasty"

Thank goodness this crap is unlikely to escape from Linux!

By the way, for a systemd-free Linux, try void - it's rather good.

Michael Wojcik , 3 days
Re: Old is good

Much as I like the venerable init: it did need replacing.

For some use cases, perhaps. Not for any of mine. SysV init, or even BSD init, does everything I need a Linux or UNIX init system to do. And I don't need any of the other crap that's been built into or hung off systemd, either.

Orv , 3 days
Re: Old is good

BSD init and SysV init work pretty darn well for their original purpose -- servers with static IP addresses that are rebooted no more than once in a fortnight. Anything more dynamic starts to give it trouble.

Chairman of the Bored , 6 days
Too bad Linus swore off swearing

Situations like this go beyond a little "golly gee, I screwed up some C"...

jake , 6 days
Re: Too bad Linus swore off swearing

Linus doesn't care. systemd has nothing to do with the kernel ... other than the fact that the lead devs for systemd have been banned from working on the kernel because they don't play nice with others.

JLV , 6 days
how did it get to this?

I've been using runit, because I am too lazy and clueless to write init scripts reliably. It's very lightweight, runs on a bunch of systems and really does one thing - keep daemons up.

I am not saying it's the best - but it looks like it has a very small codebase, it doesn't do much and generally has not bugged me after I configured each service correctly. I believe other systems also exist to avoid using init scripts directly. Not Monit, as it relies on you configuring the daemon start/stop commands elsewhere.

On the other hand, systemd is a massive sprawl, does a lot of things - some of them useful, like dependencies - and generally has needed more looking after. Twice I've had errors on a Django server that, after a lot of looking around, turned out to be because something had changed in the Chef-related code that's exposed to systemd, and esoteric (not emitted by systemd) errors resulted when systemd could not make sense of the incorrect configuration.

I don't hate it - init scripts look a bit antiquated to me and they seem unforgiving to beginners - but I don't much like it. What I certainly do hate is how, in an OS that is supposed to be all about choice, sometimes excessively so as in the window manager menagerie, we somehow ended up with one mandatory daemon scheduler on almost all distributions. Via, of all types of dependencies, the GUI layer. For a window manager that you may not even have installed.

Talk about the antithesis of the Unix philosophy of do one thing, do it well.

Oh, then there are also the security bugs and the project owner is an arrogant twat. That too.
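For readers who have not seen runit, the "very small" claim above is accurate: an entire service definition is normally one short run script that runit restarts whenever the process exits. A minimal sketch, where "myapp" is a placeholder for a daemon that stays in the foreground:

    #!/bin/sh
    # /etc/sv/myapp/run
    exec 2>&1
    exec chpst -u myapp /usr/local/bin/myapp --foreground

Enabling it is just a symlink into the scanned service directory (for example ln -s /etc/sv/myapp /var/service/ on Void; other runit setups use /etc/service).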

Doctor Syntax , 6 days
Re: how did it get to this?

"init scripts look a bit antiquated to me and they seem unforgiving to beginners"

Init scripts are shell scripts. Shell scripts are as old as Unix. If you think that makes them antiquated then maybe Unix-like systems are not for you. In practice any sub-system generally gets its own scripts installed with the rest of the S/W so if being unforgiving puts beginners off tinkering with them so much the better. If an experienced Unix user really needs to modify one of the system-provided scripts their existing shell knowledge will let them do exactly what's needed. In the extreme, if you need to develop a new init script then you can do so in the same way as you'd develop any other script - edit and test from the command line.
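To make that concrete, a minimal SysV-style init script is little more than a case statement you can run by hand from the command line. A sketch using Debian's start-stop-daemon; "myapp" and its path are placeholders:

    #!/bin/sh
    # /etc/init.d/myapp
    ### BEGIN INIT INFO
    # Provides:          myapp
    # Required-Start:    $network
    # Required-Stop:     $network
    # Default-Start:     2 3 4 5
    # Default-Stop:      0 1 6
    ### END INIT INFO

    DAEMON=/usr/local/bin/myapp

    case "$1" in
      start)   start-stop-daemon --start --background --exec "$DAEMON" ;;
      stop)    start-stop-daemon --stop --exec "$DAEMON" ;;
      restart) "$0" stop; "$0" start ;;
      status)  pidof "$DAEMON" >/dev/null && echo running || echo stopped ;;
      *)       echo "Usage: $0 {start|stop|restart|status}"; exit 1 ;;
    esac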

onefang , 6 days
Re: how did it get to this?

"Init scripts are shell scripts."

While generally true, some sysv init style inits can handle init "scripts" written in any language.

sed gawk , 6 days
Re: how did it get to this?

I personally like openrc as an init system, but systemd is a symptom of the tooling problem.

For me it's a retrograde step, but again, it's Linux: one can, as you and I do, just remove systemd.

There are a lot of people in the industry now who don't seem able to cope with shell scripts nor are minded to research the arguments for or against shell as part of a unix style of system design.

In conclusion, we are outnumbered, but it will eventually collapse under its own weight and a worthy successor shall rise, perhaps called SystemV, might have to shorten that name a bit.

AdamWill , 6 days
Just about nothing actually uses networkd

"In addition to Ubuntu and Red Hat Enterprise Linux, Systemd has been adopted as a service manager for Debian, Fedora, CoreOS, Mint, and SUSE Linux Enterprise Server. We're told RHEL 7, at least, does not use the vulnerable component by default."

I can tell you for sure that no version of Fedora does, either, and I'm fairly sure that neither does Debian, SLES or Mint. I don't know anything much about CoreOS, but https://coreos.com/os/docs/latest/network-config-with-networkd.html suggests it actually *might* use systemd-networkd.

systemd-networkd is not part of the core systemd init daemon. It's an optional component, and most distros use some other network manager (like NetworkManager or wicd) by default.
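A quick way to check whether a given box is actually exposed (assuming the standard unit names):

    systemctl is-enabled systemd-networkd   # "disabled" or "not-found" on most default installs
    systemctl is-active systemd-networkd
    systemctl is-active NetworkManager      # the usual default network manager on desktop distros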

Christian Berger , 5 days
The important word here is "still"

I mean commercial distributions seem to be particularly interested in trying out new things that can increase their number of support calls. It's probably just that networkd is either too new and therefore not yet in the release, or still works so badly even the most rudimentary tests fail.

There is no reason to use that NTP daemon of systemd, yet more and more distros ship with it enabled, instead of some sane NTP-server.
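For anyone who wants to opt out, handing timekeeping back to a dedicated daemon is a couple of commands (a sketch; the chrony service is named chrony on Debian/Ubuntu and chronyd on Fedora/RHEL):

    timedatectl status                               # shows whether systemd-timesyncd is in charge
    sudo systemctl disable --now systemd-timesyncd
    sudo apt install chrony                          # or: sudo dnf install chrony
    sudo systemctl enable --now chrony               # chronyd on Fedora/RHEL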

NLCSGRV , 6 days
The Curse of Poettering strikes again.
_LC_ , 6 days
Now hang on, please!

Ser iss no neet to worry, systemd will becum stable soon after PulseAudio does.

Ken Hagan , 6 days
Re: Now hang on, please!

I won't hold my breath, then. I have a laptop at the moment that refuses to boot because (as I've discovered from looking at the journal offline) pulseaudio is in an infinite loop waiting for the successful detection of some hardware that, presumably, I don't have.

I imagine I can fix it by hacking the file-system (offline) so that fuckingpulse is no longer part of the boot configuration, but I shouldn't have to. A decent init system would be able to kick off everything else in parallel and if one particular service doesn't come up properly then it just logs the error. I *thought* that was one of the claimed advantages of systemd, but apparently that's just a load of horseshit.

Obesrver1 , 5 days
Reason for disabling IPv6

That it punches through NAT routers, making all your little goodies behind them directly accessible.

MS even supplies tunneling (IPv4 to IPv6), so if using Linux in a VM on an MS system you may still have it anyway.

NAT was always recommended to be used in hardening your system, I prefer to keep all my idIoT devices behind one.

As they are just Idiot devices.

In future I will need a NAT that acts as a DNS and offers some sort of solution for keeping IPv4.

Orv , 3 days
Re: Reason for disabling IPv6

My NAT router statefully firewalls incoming IPv6 by default, which I consider equivalently secure. NAT adds security mostly by accident, because it de-facto adds a firewall that blocks incoming packets. It's not the address translation itself that makes things more secure, it's the inability to route in from the outside.
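The equivalent of that "NAT effect" on a Linux-based IPv6 router is a short stateful ruleset: allow replies to outbound traffic, drop unsolicited inbound connections. A sketch with ip6tables, where "lan0" is a placeholder for the inside interface and ICMPv6 is left open so neighbour discovery and path-MTU discovery keep working:

    sudo ip6tables -P FORWARD DROP
    sudo ip6tables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    sudo ip6tables -A FORWARD -p icmpv6 -j ACCEPT
    sudo ip6tables -A FORWARD -i lan0 -j ACCEPT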

dajames , 3 days
Re: Reason for disabling IPv6

You can use NAT with IPv6.

You can, but why would you want to?

NAT is a schtick for connecting a whole LAN to a WAN using a single IPv4 address (useful with IPv4 because most ISPs don't give you a /24 when you sign up). If you have a native IPv6 address you'll have something like 2^64 addresses, so machines on your LAN can have an actual WAN-visible address of their own without needing a trick like NAT.

Using NAT with IPv6 is just missing the point.

JohnFen , 3 days
Re: Reason for disabling IPv6

"so machines on your LAN can have an actual WAN-visible address of their own without needing a trick like NAT."

Avoiding that configuration is exactly the use case for using NAT with IPv6. As others have pointed out, you can accomplish the same thing with IPv6 router configuration, but NAT is easier in terms of configuration and maintenance. Given that, and assuming that you don't want to be able to have arbitrary machines open ports that are visible to the internet, then why not use NAT?

Also, if your goal is to make people more likely to move to IPv6, pointing out IPv4 methods that will work with IPv6 (even if you don't consider them optimal) seems like a really, really good idea. It eases the transition.

Destroy All Monsters , 5 days
Please El Reg, these stories make me rage at breakfast, what's this?

The bug will come as another argument against Systemd as the Linux management tool continues to fight for the hearts and minds of admins and developers alike.

Less against systemd (which should be attacked on the design and implementation level) or against IPv6 than against the use of buffer-overflowable languages in 2018, in code that processes input from the Internet (it's not the middle ages anymore), or at least against the absence of very hard linting of the same.

But in the end, what did it was a violation of the Don't Repeat Yourself principle and a lack of sufficiently high-level data structures. The pointer into the buffer and the remaining buffer length are two discrete variables that need to be updated simultaneously to keep the invariant, and this happens in several places. This is just a catastrophe waiting to happen. You forget to update one of them once, and you are out! Use structs and functions updating the structs correctly.

And use assertions in the code; this stuff all seems disturbingly assertion-free.

Excellent explanation by Felix Wilhelm:

https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1795921

The function receives a pointer to the option buffer buf, it's remaining size buflen and the IA to be added to the buffer. While the check at (A) tries to ensure that the buffer has enough space left to store the IA option, it does not take the additional 4 bytes from the DHCP6Option header into account (B). Due to this the memcpy at (C) can go out-of-bound and *buflen can underflow [i.e. you suddenly have a gazillion byte buffer, Ed.] in (D) giving an attacker a very powerful and largely controlled OOB heap write starting at (E).
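Translating that argument into code, a minimal sketch (not the actual networkd fix) of keeping the cursor and the remaining length in one struct, so the "header plus payload must fit" check can never be forgotten:

    /* Illustrative only: a DHCPv6-style option writer with the invariant
     * kept in one place instead of scattered across call sites. */
    #include <stdint.h>
    #include <string.h>
    #include <assert.h>

    struct buf_cursor {
        uint8_t *pos;    /* next free byte in the buffer */
        size_t   left;   /* bytes remaining in the buffer */
    };

    /* Append one option: 4-byte header (code + length) plus 'len' payload bytes.
     * Returns 0 on success, -1 if it does not fit. Callers never touch pos/left
     * directly, so the two can never get out of sync. */
    static int append_option(struct buf_cursor *c, uint16_t code,
                             const void *payload, uint16_t len)
    {
        const size_t need = 4u + (size_t)len;   /* header counted in the check */
        assert(c && c->pos && payload);
        if (c->left < need)
            return -1;
        c->pos[0] = (uint8_t)(code >> 8);
        c->pos[1] = (uint8_t)(code & 0xff);
        c->pos[2] = (uint8_t)(len >> 8);
        c->pos[3] = (uint8_t)(len & 0xff);
        memcpy(c->pos + 4, payload, len);
        c->pos  += need;
        c->left -= need;
        return 0;
    }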

TheSkunkyMonk , 5 days
Init is 1026 lines of code in one file and it works great.
Anonymous Coward , 5 days
"...and Poettering's occasionally controversial management of the tool."

Shouldn't that be "...Potterings controversial management as a tool."?

clocKwize , 4 days
Re: Contractor rights

Why don't we stop writing code in languages that make it this easy to screw up?

There are plenty about nowadays. I'd rather my DHCP client be a little bit slower at processing packets if I had more confidence it would not process them incorrectly and execute code hidden in said packets...

Anonymous Coward , 4 days
Switch, as easy as that

The circus that is called "Linux" has forced me to Devuan and the like; however, the circus is getting worse and worse by the day, so I have switched to the BSD world and will learn that rather than sit back and watch this unfold. As many of us have been saying, the sudden switch to SystemD was rather quick; perhaps you guys need to go investigate why it really happened. Don't assume you know - go dig and you will find the answers, and it's rather scary. Thus I bid the Linux world farewell after 10 years of support; I will watch the grass dry out from the other side of the fence. It was destined to fail by means of infiltration and screw-it-up motive(s) from those we do not mention here.

oiseau , 3 days
Re: Switch, as easy as that

Hello:

As many of us have been saying, the sudden switch to SystemD was rather quick, perhaps you guys need to go investigate why it really happened, don't assume you know, go dig and you will find the answers, it's rather scary ...

Indeed, it was rather quick and is very scary.

But there's really no need to dig much, just reason it out.

It's like a follow the money situation of sorts.

I'll try to sum it up in three short questions:

Q1: Hasn't the Linux philosophy (programs that do one thing and do it well) been a success?

A1: Indeed, in spite of the many init systems out there, it has been a success in stability and OS management. And it can easily be tested and debugged, which is an essential requirement.

Q2: So what would Linux need to have the practical equivalent of the registry in Windows for?

A2: So that whatever the registry does in/to Windows can also be done in/to Linux.

Q3: I see. And just who would want that to happen? Makes no sense, it is a huge step backwards.

A3: ....

Cheers,

O.

Dave Bell , 4 days
Reporting weakness

OK, so I was able to check through the link you provided, which says "up to and including 239", but I had just installed a systemd update, and when you said there was already a fix written, working its way through the distro update systems, all I had to do was check my log.

Linux Mint makes it easy.

But why didn't you say something such as "reported to affect systemd versions up to and including 239" and then give the link to the CVE? That failure looks like rather careless journalism.

W.O.Frobozz , 3 days
Hmm.

/sbin/init never had these problems. But then again /sbin/init didn't pretend to be the entire operating system.

[Nov 01, 2018] IBM was doing their best to demoralize their staff (and contractors) and annoy their customers as much as possible!

Notable quotes:
"... I worked as a contractor for IBM's IGS division in the late '90s and early 2000s at their third biggest customer, and even then, IBM was doing their best to demoralize their staff (and contractors) and annoy their customers as much as possible! ..."
"... IBM charged themselves 3x the actual price to customers for their ThinkPads at the time! This meant that I never had a laptop or desktop PC from IBM in the 8 years I worked there. If it wasn't for the project work I did I would not have had a PC to work on! ..."
"... What was strange is that every single time I got a pay cut, IBM would then announce that they had bought a new company! I would have quit long before I did, but I was tied to them while waiting for my Green Card to be approved. I know that raises are few in the current IBM for normal employees and that IBM always pleads poverty for any employee request. Yet, they somehow manage to pay billions of dollars for a new company. Strange that, isn't it? ..."
"... I moved to the company that had won the contract and regret not having the chance to tell that IBM manager what I thought about him and where he could stick the new laptop. ..."
"... After that experience I decided to never work for them in any capacity ever again. I feel pity for the current Red Hat employees and my only advice to them is to get out while they can. "DON'T FOLLOW THE RED HAT TO HELL" ..."
Nov 01, 2018 | theregister.co.uk

Edwin Tumblebunny Ashes to ashes, dust to dust - Red Hat is dead.

Red Hat will be a distant memory in a few years as it gets absorbed by the abhorrent IBM culture and its bones picked clean by the IBM beancounters. Nothing good ever happens to a company bought by IBM.

I worked as a contractor for IBM's IGS division in the late '90s and early 2000s at their third biggest customer, and even then, IBM was doing their best to demoralize their staff (and contractors) and annoy their customers as much as possible!

Some examples:

The on-site IBM employees (and contractors) had to use Lotus Notes for email. That was probably the worst piece of software I have ever used - I think baboons on drugs could have done a better design job. IBM set up a T1 (1.54 Mbps) link between the customer and the local IBM hub for email, etc. It sounds great until you realize there were over 150 people involved and due to the settings of Notes replication, it could often take over an hour to actually download email to read.

To do my job I needed to install some IBM software. My PC did not have enough disk space for this software as well as the other software I needed. Rather than buy me a bigger hard disk I had to spend 8 hours a week installing and reinstalling software to do my job.

I waited three months for a $50 stick of memory to be approved. When it finally arrived my machine had been changed out (due to a new customer project) and the memory was not compatible! Since I worked on a lot of projects I often had machines supplied by the customer on my desk. So, I would use one of these as my personal PC and would get an upgrade when the next project started!

I was told I could not be supplied with a laptop or desktop from IBM as they were too expensive (my IBM division did not want to spend money on anything). IBM charged themselves 3x the actual price to customers for their ThinkPads at the time! This meant that I never had a laptop or desktop PC from IBM in the 8 years I worked there. If it wasn't for the project work I did I would not have had a PC to work on!

IBM has many strange and weird processes that allow them to circumvent the contract they have with their preferred contractor companies. This meant that for a number of years I ended up getting a pay cut. What was strange is that every single time I got a pay cut, IBM would then announce that they had bought a new company! I would have quit long before I did, but I was tied to them while waiting for my Green Card to be approved. I know that raises are few in the current IBM for normal employees and that IBM always pleads poverty for any employee request. Yet, they somehow manage to pay billions of dollars for a new company. Strange that, isn't it?

Eventually I was approved to get a laptop and excitedly watched it move slowly through the delivery system. I got confused when it was reported as delivered to Ohio rather than my work (not in Ohio). After some careful searching I discovered that my manager and his wife both worked for IBM from their home in, yes you can guess, Ohio. It looked like he had redirected my new laptop for his own use and most likely was going to send me his old one and claim it was a new one. I never got the chance to confront him about it, though, as IBM lost the contract with the customer that month and before the laptop should have arrived IBM was out! I moved to the company that had won the contract and regret not having the chance to tell that IBM manager what I thought about him and where he could stick the new laptop.

After that experience I decided to never work for them in any capacity ever again. I feel pity for the current Red Hat employees and my only advice to them is to get out while they can. "DON'T FOLLOW THE RED HAT TO HELL"

Certain Hollywood stars seem to be psychic types: https://twitter.com/JimCarrey/status/1057328878769721344

rmstock , 2 days

Re: "DON'T FOLLOW THE RED HAT TO HELL"

I sense that a global effort is ongoing to shut down open source software by brute force. First, the enforcement of the EU General Data Protection Regulation (GDPR) by ICANN.org to enable untraceable takeovers of domains. Microsoft buying GitHub. Linus Torvalds forced out of his own Linux kernel project because of the Code of Conduct, and now IBM buying RedHat. I wrote the following at https://lulz.com/linux-devs-threaten-killswitch-coc-controversy-1252/ "Torvalds should lawyer up. The problem is the large IT tech firms who platinum-donated all over the place in Open Source land. When IBM donated 1 billion USD to Linux in 2000 https://itsfoss.com/ibm-invest-1-billion-linux/ a friend who was vehemently against the GPL and what Torvalds was doing told me that in due time OSS would simply just go away.

These Community Organizers, not Coders per se, are on a mission to overtake and control the Linux Foundation, and if they can't, will search and destroy all of it, even if it destroys themselves. Coraline is merely an expendable pawn here. Torvalds is now facing unjust confrontations and charges resembling the nomination of Judge Brett Kavanaugh. Looking at the CoC document, it might even have been written by a Google executive, who themselves are currently facing serious charges and lawsuits from their own Code of Conduct. See theintercept.com, their leaked video the day after the election of 2016. They will do anything to pursue this. However, pursuing a personal bias or agenda through acts such as omitting contradicting facts (code), committing perjury, attending riots and harassment, cleansing Internet archives and search engines of exculpatory evidence, and ultimately hiring hit-men to exterminate witnesses of truth (developers), in an attempt to elevate bias into fabricated fact (code), amounts to crimes that should be prosecuted accordingly."

[Nov 01, 2018] Will Red Hat Survive IBM Ownership

It does not matter if somebody "stresses independence": words are cheap. The mere fact that Red Hat is now part of IBM changes relationships. Also, IBM executives need to show "leadership", and that entails setting some "general direction" for Red Hat from now on. At the very least, the relationships with Microsoft and HP will be much cooler than before.
Also, IBM does not like "charity" projects like CentOS, and those will be affected too, no matter what executives tell you right now. Paradoxically, this greatly strengthens the position of Oracle Linux.
The status of IBM software in the corporate world (outside financial companies) is low, and its games with licenses (licensing products per core, etc.) are viewed by most of its customers as despicable. This was one of the reasons IBM lost its share in enterprise software. For example, greed in selling Tivoli more or less killed the software. All credit goes to Lou Gerstner, who initially defined this culture of relentless outsourcing and the cult of the "bottom line" (which was easy for him because he did not understand technology at all). His successor was even more active in driving the company into the ground. Rampant cost-arbitrage-driven offshoring has left a legacy of dissatisfied customers. Most projects are overpriced, and most of those that were priced more or less at industry level had poor-quality results and cost overruns.
IBM cut severance pay to one month, is firing people left and right, and is insensitive to the fired employees; the result is enormous negativity toward the company. Good people are scared to work for them and people are asking tough questions.
This has been the strategy since Gerstner. Under Palmisano (a guy with a history diploma who switched into cybersecurity after his retirement in 2012 and led Obama's cybersecurity commission) and Ginni Rometty, the guiding principle was the purely neoliberal mantra of "shareholder value": grow earnings per share at all costs. They all managed IBM for financial appearance rather than for the quality of its products. There was no focus on breakthrough innovation, market leadership in mega-trends (like cloud computing), or even giving excellent service to customers.
Ginni Rometty accepted bonuses during times of layoffs and outsourcing.
When a company has lost its architectural talent, the brand will eventually die. IBM still files a very high number of patents every year, and it is still a very large company in terms of revenue and number of employees. However, there are strong signs that its position in the technology industry may deteriorate further.
Nov 01, 2018 | www.itprotoday.com

Cormier also stressed Red Hat independence when we asked how the acquisition would affect ongoing partnerships in place with Amazon Web Services, Microsoft Azure, and other public clouds.

"One of the things you keep seeing in this is that we're going to run Red Hat as a separate entity within IBM," he said. "One of the reasons is business. We need to, and will, remain Switzerland in terms of how we interact with our partners. We're going to continue to prioritize what we do for our partners within our products on a business case perspective, including IBM as a partner. We're not going to do unnatural things. We're going to do the right thing for the business, and most importantly, the customer, in terms of where we steer our products."

Red Hat promises that independence will extend to Red Hat's community projects, such as its freely available Fedora Linux distribution, which is widely used by developers. When asked what impact the sale would have on Red Hat-maintained projects like Fedora and CentOS, Cormier replied, "None. We just came from an all-hands meeting from the company around the world for Red Hat, and we got asked this question and my answer was that the day after we close, I don't intend to do anything different or in any other way than we do our every day today. Arvind, Ginni, Jim, and I have talked about this extensively. For us, it's business as usual. Whatever we were going to do for our road maps as a standalone will continue. We have to do what's right for the community, upstream, our associates, and our business."

This all sounds good, but as they say, talk is cheap. Six months from now we'll have a better idea of how well IBM can walk the walk and leave Red Hat alone. If it can't and Red Hat doesn't remain clearly distinct from IBM ownership control, then Big Blue will have wasted $33 billion it can't afford to lose and put its future, as well as the future of Red Hat, in jeopardy.

[Oct 30, 2018] Red Hat hired the CentOS developers 4.5 years ago

Oct 30, 2018 | linux.slashdot.org

quantaman ( 517394 ) , Sunday October 28, 2018 @04:22PM ( #57550805 )

Re:Well at least we'll still have Cent ( Score: 4 , Informative)
Fedora is fully owned by Red Hat, and CentOS requires the availability of the Red Hat repositories, which they aren't obliged to make public to non-customers.

Fedora is fully under Red Hat's control. It's used as a bleeding-edge distro for hobbyists and as a testing ground for code before it goes into RHEL. I doubt it's going away, since it does a great job of establishing mindshare, but no business in their right mind is going to run Fedora in production.

But CentOS started as a separate organization with a fairly adversarial relationship to Red Hat since it really is free RHEL which cuts into their actual customer base. They didn't need Red Hat repos back then, just the code which they rebuilt from scratch (which is why they were often a few months behind).

If IBM kills CentOS a new one will pop up in a week, that's the beauty of the GPL.
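
To illustrate that point about rebuilding from source: in miniature, a rebuild distribution starts from the published source RPMs and recompiles them, stripping trademarks and branding along the way. A rough sketch, assuming the dnf download plugin is available and build dependencies are installed ("bash" is only an illustrative package name; the real CentOS build system is far more involved):

    # Fetch a source RPM published by the upstream vendor.
    dnf download --source bash

    # Rebuild binary RPMs from it locally; rebuild distros do this at
    # scale for every package, then swap out logos and branding packages.
    rpmbuild --rebuild bash-*.src.rpm

    # The resulting packages land under ~/rpmbuild/RPMS/.
    ls ~/rpmbuild/RPMS/*/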

Luthair ( 847766 ) , Sunday October 28, 2018 @04:22PM ( #57550799 )
Re:Well at least we'll still have Cent ( Score: 3 )

Red Hat hired the CentOS developers 4.5 years ago.

[Oct 30, 2018] We run just about everything on CentOS around here, downstream of RHEL. Should we be worried?

Oct 30, 2018 | arstechnica.com

Muon , Ars Scholae Palatinae 6 hours ago Popular

We run just about everything on CentOS around here, downstream of RHEL. Should we be worried?
brandnewmath , Smack-Fu Master, in training et Subscriptor 6 hours ago Popular
We'll see. Companies in an acquisition always rush to explain how nothing will change to reassure their customers. But we'll see.
Kilroy420 , Ars Tribunus Militum 6 hours ago Popular
Perhaps someone can explain this... Red Hat's revenue and assets barely total about $5B. Even factoring in market share and capitalization, how the hey did IBM come up with $34B cash being a justifiable purchase price??

Honestly, why would Red Hat have said no?

dorkbert , Ars Tribunus Militum 6 hours ago Popular
My personal observation of IBM over the past 30 years or so is that everything it acquires dies horribly.
barackorama , Smack-Fu Master, in training 6 hours ago
...IBM's own employees see it as a company in free fall. This is not good news.

In other news, property values in Raleigh will rise even more...

Moodyz , Ars Centurion 6 hours ago Popular
Quote:
This is fine

Looking back at what's happened with many of IBM's past acquisitions, I'd say no, not quite fine.
I am not your friend , Wise, Aged Ars Veteran et Subscriptor 6 hours ago Popular
I just can't comprehend that price. Cloud has a rich future, but I didn't even know Red Hat had any presence there, let alone $35 billion worth.
jandrese , Ars Tribunus Angusticlavius et Subscriptor 6 hours ago Popular
50me12 wrote:
Will IBM even know what to do with them?

IBM has been fumbling around for a while. They didn't know how to sell Watson: they sold it like a weird magical drop-in service and it failed repeatedly, where really it should be a long-term project that you bring customers along for the ride...

I had a buddy using their cloud service and they went to spin up servers and IBM was all "no man we have to set them up first".... like that's not cloud IBM...

If IBM can't figure out how to sell its own services I'm not sure the powers that be are capable of getting the job done ever. IBM's own leadership seems incompatible with the state of the world.

IBM basically bought a ton of service contracts for companies all over the world. This is exactly what the suits want: reliable cash streams without a lot of that pesky development stuff.

IMHO this is perilous for RHEL. It would be very easy for IBM to fire most of the developers and just latch on to the enterprise services stuff to milk it till it's dry.

skizzerz , Wise, Aged Ars Veteran et Subscriptor 6 hours ago
toturi wrote:
I can only see this as a net positive - the ability to scale legacy mainframes onto "Linux" and push for even more security auditing.

I would imagine the RHEL team will get better funding but I would be worried if you're a centos or fedora user.

I'm nearly certain that IBM's management ineptitude will kill off Fedora and CentOS (or at least severely gimp them compared to how they currently are), not realizing how massively important both of these projects are to the core RHEL product. We'll see RHEL itself suffer as a result.

I normally try to understand things with an open mindset, but in this case, IBM has had too long of a history of doing things wrong for me to trust them. I'll be watching this carefully and am already prepping to move off of my own RHEL servers once the support contract expires in a couple years just in case it's needed.

Iphtashu Fitz , Ars Scholae Palatinae 6 hours ago Popular
50me12 wrote:
Will IBM even know what to do with them?

My previous job (6+ years ago now) was at a university that was rather heavily invested in IBM for a high performance research cluster. It was something around 100 or so of their X-series blade servers, all of which were running Red Hat Linux. It wouldn't surprise me if they decided to acquire Red Hat in large part because of all these sorts of IBM systems that run Red Hat on them.

TomXP411 , Ars Tribunus Angusticlavius 6 hours ago Popular
Iphtashu Fitz wrote:
50me12 wrote:
Will IBM even know what to do with them?

My previous job (6+ years ago now) was at a university that was rather heavily invested in IBM for a high performance research cluster. It was something around 100 or so of their X-series blade servers, all of which were running Red Hat Linux. It wouldn't surprise me if they decided to acquire Red Hat in large part because of all these sorts of IBM systems that run Red Hat on them.

That was my thought. IBM wants to own an operating system again. With AIX being relegated to obscurity, buying Red Hat is simpler than creating their own Linux fork.

anon_lawyer , Wise, Aged Ars Veteran 6 hours ago Popular
Valuing Red Hat at $34 billion means valuing it at more than 1/4 of IBM's current market cap. From my perspective, this tells me IBM is in even worse shape than has been reported.
dmoan , Ars Centurion 6 hours ago
I am not your friend wrote:
I just can't comprehend that price. Cloud has a rich future, but I didn't even know Red Hat had any presence there, let alone $35 billion worth.

Red Hat made $258 million in income last year, so IBM paid over 100 times its net income. That's a crazy valuation...
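
The back-of-the-envelope arithmetic behind that complaint is easy to reproduce from the figures quoted in these comments ($34B purchase price, roughly $258M of net income, and about $3bn of revenue):

    # Implied multiples, all figures in millions of dollars.
    echo "scale=1; 34000 / 258" | bc    # price / net income -> roughly 131x earnings
    echo "scale=1; 34000 / 3000" | bc   # price / revenue    -> roughly 11x revenue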

[Oct 30, 2018] I have worked at IBM 17 years and have worried about being laid off for about 11 of them. Morale is in the toilet. Bonuses for the rank and file are in the under-1% range while the CEO gets millions

Notable quotes:
"... Adjusting for inflation, I make $6K less than I did my first day. My group is a handful of people as at least 1/2 have quit or retired. To support our customers, we used to have several people, now we have one or two and if someone is sick or on vacation, our support structure is to hope nothing breaks. ..."
Oct 30, 2018 | features.propublica.org

Buzz , Friday, March 23, 2018 12:00 PM

I've worked there 17 years and have worried about being laid off for about 11 of them. Morale is in the toilet. Bonuses for the rank and file are in the under-1% range while the CEO gets millions. Pay raises have been nonexistent or well under inflation for years.

Adjusting for inflation, I make $6K less than I did my first day. My group is a handful of people as at least 1/2 have quit or retired. To support our customers, we used to have several people, now we have one or two and if someone is sick or on vacation, our support structure is to hope nothing breaks.

We can't keep millennials because of pay, benefits and the expectation of being available 24/7 because we're shorthanded. As the unemployment rate drops, more leave to find a different job, leaving the old people as they are less willing to start over with pay, vacation, moving, selling a house, pulling kids from school, etc.

The younger people are generally less likely to be willing to work as needed on off hours or to pull work from a busier colleague.

I honestly have no idea what the plan is when the people who know what they are doing start to retire, we are way top heavy with 30-40 year guys who are on their way out, very few of the 10-20 year guys due to hiring freezes and we can't keep new people past 2-3 years. It's like our support business model is designed to fail.

[Oct 30, 2018] Will systemd become standard on mainframes as well?

It will be interesting to see what happens in any case.
Oct 30, 2018 | theregister.co.uk

Doctor Syntax , 1 day

So now it becomes Blue Hat. Will systemd become standard on mainframes as well?
DCFusor , 15 hrs
@Doctor

Maybe we get really lucky and they RIF Lennart Poettering or he quits? I hear IBM doesn't tolerate prima donnas and cults of personality quite as much as RH?

Anonymous Coward , 15 hrs
I hear IBM doesn't tolerate prima donnas and cults of personality quite as much as RH?

Quite the contrary. IBM is run and managed by prima donnas and personality cults.

Waseem Alkurdi , 18 hrs
Re: Poettering

OS/2 and Poettering? Best joke I've ever heard!

(It'd be interesting if somebody locked them both up in an office and see what happens!)

Glen Turner 666 , 16 hrs
Re: Patents

IBM already had access to Red Hat's patents, including for patent defence purposes. Look up the "Open Invention Network".

This acquisition is about: (1) IBM needing growth, or at least a plausible scenario for growth. (2) Red Hat wanting an easy expansion of its sales channels, again for growth. (3) Red Hat stockholders being given an offer they can't refuse.

This acquisition is not about: cultural change at IBM. Which is why the acquisition will 'fail'. The bottom line is that engineering matters at the moment (see: Google, Amazon), and IBM sacked their engineering culture across the past two decades. To be successful IBM need to get that culture back, and acquiring Red Hat gives IBM the opportunity to create a product-building, client-service culture within IBM. Except that IBM aren't taking the opportunity, so there's a large risk the reverse will happen -- the acquisition will destroy Red Hat's engineering- and service-oriented culture.

Anonymous Coward , 1 day
The kraken versus the container ship

This could be interesting: will the systemd kraken manage to wrap its tentacles around the big blue container ship and bring it to a halt, or will the container ship turn out to be well armed and fatally harpoon the kraken (causing much rejoicing in the rest of the Linux world)?

Sitaram Chamarty , 1 day
disappointed...

Honestly, this is a time for optimism: if they manage to get rid of Lennart Poettering, everything else will be tolerable!

dbtx , 1 day
*if*

you can change the past so that a "proper replacement" isn't automatically expected to do lots of things that systemd does. That damage is done. We got "better is worse" and enough people liked it-- good luck trying to go back to "worse is better"

tfb , 13 hrs
I presume they were waiting to see what happened to Solaris. When Oracle bought Sun (presumably the only other company who might have bought them was IBM) there were really three enterprise unixoid platforms: Solaris, AIX and RHEL (there were some smaller ones and some which were clearly dying like HPUX). It seemed likely at the time, but not yet certain, that Solaris was going to die (I worked for Sun at the time this happened and that was my opinion anyway). If Solaris did die, then if one company owned both AIX and RHEL then that company would own the enterprise unixoid market. If Solaris didn't die on the other hand then RHEL would be a lot less valuable to IBM as there would be meaningful competition. So, obviously, they waited to see what would happen.

Well, Solaris is perhaps not technically quite dead yet, but it is certainly moribund, and IBM now owns both AIX and RHEL, and hence the enterprise unixoid market. As an interesting side note, unless Oracle can keep Solaris on life support, this means that IBM all but owns Oracle's OS as well ('Oracle Linux' is RHEL with, optionally, some of their own additions to the kernel).

[Oct 30, 2018] IBM's Red Hat acquisition is a 'desperate deal,' says analyst

Notable quotes:
"... "It's a desperate deal by a company that missed the boat for the last five years," the managing director at BTIG said on " Closing Bell ." "I'm not surprised that they bought Red Hat , I'm surprised that it took them so long. They've been behind the cloud eight ball." ..."
Oct 30, 2018 | www.cnbc.com
IBM's $34 billion acquisition of Red Hat is a last-ditch effort by IBM to play catch-up in the cloud industry, analyst Joel Fishbein told CNBC on Monday.

"It's a desperate deal by a company that missed the boat for the last five years," the managing director at BTIG said on " Closing Bell ." "I'm not surprised that they bought Red Hat , I'm surprised that it took them so long. They've been behind the cloud eight ball."

This is IBM's largest deal ever and the third-biggest tech deal in the history of the United States. IBM is paying more than a 60 percent premium for the software maker, but CEO Ginni Rometty told CNBC earlier in the day it was a "fair price."

[Oct 30, 2018] Sam Palmisano's now infamous Roadmap 2015 ran the company into the ground through its maniacal focus on increasing EPS at any and all costs. Literally.

Oct 30, 2018 | features.propublica.org

GoingGone , Friday, April 13, 2018 6:06 PM

As a 25yr+ vet of IBM, I can confirm that this article is spot-on true. IBM used to be a proud and transparent company that clearly demonstrated that it valued its employees as much as it did its stock performance or dividend rate or EPS, simply because it is good for business. Those principles helped make and keep IBM atop the business world as the most trusted international brand and business icon of success for so many years. In 2000, all that changed when Sam Palmisano became the CEO. Palmisano's now infamous "Roadmap 2015" ran the company into the ground through its maniacal focus on increasing EPS at any and all costs. Literally.

Like, its employees, employee compensation, benefits, skills, and education opportunities. Like, its products, product innovation, quality, and customer service.

All of which resulted in the devastation of its technical capability and competitiveness, employee engagement, and customer loyalty. Executives seemed happy enough as their compensation grew nicely with greater financial efficiencies, and Palmisano got a sweet $270M+ exit package in 2012 for a job well done.

The new CEO, Ginni Rometty, has since come under a lot of scrutiny for her lack of business results, but she was screwed from day one. Of course, that doesn't let her off the hook for the business practices outlined in the article, but what do you expect: she was hand-picked by Palmisano and approved by the same board that thought Palmisano was golden.

People (and companies) who have nothing to hide, hide nothing. People (and companies) who are proud of their actions, share it proudly. IBM believes it is being clever and outsmarting employment discrimination laws and saving the company money while retooling its workforce. That may end up being so (but probably won't), but it's irrelevant. Through its practices, IBM has lost the trust of its employees, customers, and ironically, stockholders (just ask Warren Buffett), who are the very(/only) audience IBM was trying to impress. It's just a huge shame.

HiJinks , Sunday, March 25, 2018 3:07 AM
I agree with many who state the report is well done. However, this crap started in the early 1990s. In the late 1980s, IBM offered decent packages to retirement-eligible employees. For those close to retirement age, it was a great deal: two weeks' pay for every year of service (capped at 26 years) plus being kept on to perform their old job for 6 months (while collecting retirement, until the government stepped in and put a halt to it). Nobody eligible was forced to take the package (at least not to general knowledge).

The last decent package was in 1991: similar, but without the option of coming back for 6 months. However, in 1991, those offered the package were basically told to take it or else. Anyone with 30 years of service, or 15 years and age 55, was eligible, and anyone within 5 years of eligibility could "bridge" the difference. They also had to sign a form stating they would not sue IBM in order to get up to a year's pay, which was not taxable per IRS documents back then (but IBM took out the taxes anyway and the IRS refused to return them; an employee group hired lawyers to get the taxes back, a failed attempt which only enriched the lawyers).

After that, things went downhill and accelerated when Gerstner took over. After 1991, there were still some workers who could get 30 years or more, but that was more the exception. I suspect the way the company has been run the past 25 years or so has the Watsons spinning in their graves. Gone are the 3 core beliefs: "Respect for the individual", "Service to the customer" and "Excellence must be a way of life".
ArnieTracey , Saturday, March 24, 2018 7:15 PM
IBM's policy reminds me of the "If a citizen = 30 y.o., then mass execute such, else if they run then hunt and kill them one by one" social policy in the Michael York movie "Logan's Run."

From Wiki, in case you don't know: "It depicts a utopian future society on the surface, revealed as a dystopia where the population and the consumption of resources are maintained in equilibrium by killing everyone who reaches the age of 30. The story follows the actions of Logan 5, a "Sandman" who has terminated others who have attempted to escape death, and is now faced with termination himself."

Jr Jr , Saturday, March 24, 2018 4:37 PM
Corporate loyalty has been gone for 25 years. This isn't surprising. But this age discrimination is blatantly illegal.

[Oct 30, 2018] This might just be the deal that kills IBM because there's no way that they don't do a writedown of 90% of the value of this acquisition within 5 years.

Oct 30, 2018 | arstechnica.com

afidel, 2018-10-29T13:17:22-04:00

tipoo wrote:
Kilroy420 wrote:
Perhaps someone can explain this... Red Hat's revenue and assets barely total about $5B. Even factoring in market share and capitalization, how the hey did IBM come up with $34B cash being a justifiable purchase price??

Honestly, why would Red Hat have said no?

You don't trade at your earnings, you trade at your share price, which for Red Hat and many other tech companies can be quite high on Price/Earnings. They were trading at 52 P/E. Investors factor in a bunch of things involving future growth, and particularly for any companies in the cloud can quite highly overvalue things.

A 25 year old company trading at a P/E of 52 was already overpriced, buying at more than 2x that is insane. This might just be the deal that kills IBM because there's no way that they don't do a writedown of 90% of the value of this acquisition within 5 years.

[Oct 30, 2018] The institutionalized stupidity of IBM brass is connected with the desire to get bonuses

Oct 30, 2018 | arstechnica.com

3 hours ago afidel wrote: Kilroy420 wrote: Perhaps someone can explain this... Red Hat's revenue and assets barely total about $5B. Even factoring in market share and capitalization, how the hey did IBM come up with $34B cash being a justifiable purchase price??

Honestly, why would Red Hat have said no?

You don't trade at your earnings, you trade at your share price, which for Red Hat and many other tech companies can be quite high on Price/Earnings. They were trading at 52 P/E. Investors factor in a bunch of things involving future growth, and particularly for any companies in the cloud can quite highly overvalue things.
A 25 year old company trading at a P/E of 52 was already overpriced, buying at more than 2x that is insane. This might just be the deal that kills IBM because there's no way that they don't do a writedown of 90% of the value of this acquisition within 5 years.

OK. I did 10 years at IBM Boulder..

The problem isn't the purchase price or the probable write-down later.

The problem is going to be with the executives above it. One thing I noticed at IBM is that the executives needed to put their own stamp on operations to justify their bonuses. We were on a 2 year cycle of execs coming in and saying "Whoa.. things are too centralized, we need to decentralize", then the next exec coming in and saying "things are too decentralized, we need to centralize".

No IBM exec will get a bonus if they are over RedHat and exercise no authority over it. "We left it alone" generates nothing for the PBC. If they are in the middle of a re-org, then the specific metrics used to calculate their bonus can get waived. (Well, we took an unexpected hit this year on sales because we are re-orging to better optimize our resources). With that P/E, no IBM exec is going to get a bonus based on metrics. IBM execs do *not* care about what is good for IBM's business. They are all about gaming the bonuses. Customers aren't even on the list of things they care about.

I am reminded of a coworker who quit in frustration back in the early 2000's due to just plain bad management. At the time, IBM was working on Project Monterey. This was supposed to be a Unix system across multiple architectures. My coworker sent his resignation out to all hands basically saying "This is stupid. we should just be porting Linux". He even broke down the relative costs. Billions for Project Monterey vs thousands for a Linux port. Six months later, we get an email from on-high announcing this great new idea that upper management had come up with. It would be far cheaper to just support Linux than write a new OS.. you'd think that would be a great thing, but the reality is that all it did was create the AIX 5L family, which was AIX 5 with an additional CD called Linux ToolBox, which was loaded with a few Linux programs ported to a specific version of AIX, but never kept current. IBM can make even great decisions into bad decisions.

In May 2007, IBM announced the transition to LEAN. Sounds great, but this LEAN was not on the manufacturing side of the equation. It was in e-Business under Global Services. The new procedures were basically call center operations. Now, prior to this, IBM would have specific engineers for specific accounts. So, Major Bank would have that AIX admin, that Sun admin, that windows admin, etc. They knew who to call and those engineers would have docs and institutional knowledge of that account. During the LEAN announcement, Bob Moffat described the process. Accounts would now call an 800 number and the person calling would open a ticket. This would apply to *any* work request as all the engineers would be pooled and whoever had time would get the ticket. So, reset a password - ticket. So, load a tape - ticket. Install 20 servers - ticket.

Now, the kicker to this was that the change was announced at 8AM and went live at noon. IBM gave their customers who represented over $12 Billion in contracts 4 *hours* notice that they were going to strip their support teams and treat them like a call center. (I will leave it as an exercise to the reader to determine if they would accept that kind of support after spending hundreds of millions on a support contract).

(The pilot program for the LEAN process had its call center outsourced overseas, if that helps you try to figure out why IBM wanted to get rid of dedicated engineers and move to a call-center operation).

[Oct 30, 2018] Presumably the acquisition will have to jump various regulatory hurdles before it is set in stone. If it is successful, Red Hat will be absorbed into IBM's Hybrid Cloud unit

IBM will have to work hard to overcome RH customers' natural (and IMHO largely justified) suspicion of Big Blue.
Notable quotes:
"... focused its workforce on large "hub" cities where graduate engineers prefer to live – New York City, San Francisco, and Austin, in the US for instance – which allowed it to drive out older, settled staff who refused to move closer to the office. ..."
"... The acquisition of Sun Microsystems by Oracle comes to mind there. ..."
"... When Microsoft bought out GitHub, they made a promise to let it run independently and now IBM's given a similar pledge in respect of RedHat. They ought to abide by that promise because the alternatives are already out there in the form of Ubuntu and SUSE Linux Enterprise Server. ..."
Oct 30, 2018 | theregister.co.uk

...That transformation has led to accusations of Big Blue ditching its older staff for newer workers to somehow spark some new energy within it. It also cracked down on remote employees , and focused its workforce on large "hub" cities where graduate engineers prefer to live – New York City, San Francisco, and Austin, in the US for instance – which allowed it to drive out older, settled staff who refused to move closer to the office.

Ledswinger, 1 day

Easy, the same way they deal with their existing employees. It'll be the IBM way or the highway. We'll see the usual repetitive and doomed IBM strategy of brutal downsizings accompanied by the earnest IBM belief that a few offshore wage slaves can do as good a job as anybody else.

The product and service will deteriorate, pricing will have to go up significantly to recover the tens of billions of dollars of "goodwill" that IBM have just splurged, and in five years time we'll all be saying "remember how the clueless twats at IBM bought Red Hat and screwed it up?"

One of IBM's main problems is lack of revenue, and yet Red Hat only adds about $3bn to their revenue. As with most M&A the motivators here are a surplus of cash and hopeless optimism, accompanied by the suppression of all common sense.

Well done Ginni, it's another winner.

TVU, 12 hrs
Re: "they will buy a lot of talent."

"What happens over the next 12 -- 24 months will be ... interesting. Usually the acquisition of a relatively young, limber outfit with modern product and service by one of the slow-witted traditional brontosaurs does not end well"

The acquisition of Sun Microsystems by Oracle comes to mind there.

When Microsoft bought out GitHub, they made a promise to let it run independently and now IBM's given a similar pledge in respect of RedHat. They ought to abide by that promise because the alternatives are already out there in the form of Ubuntu and SUSE Linux Enterprise Server.

[Oct 30, 2018] About time: Red Hat support was bad already

Oct 30, 2018 | theregister.co.uk
Anonymous Coward, 15 hrs

and the support goes bad already...

Someone (RH staff, or one of the Gods) is unhappy with the deal: Red Hat's support site has been down all day.

https://status.redhat.com

[Oct 30, 2018] Purple rain will fall from that blue cloud

Notable quotes:
"... IBM was already "working on Linux." For decades. With multiple hundreds of full-time Linux developers--more than any other corporate contributor--around the world. And not just on including IBM-centric function into Linux, but on mainstream community projects. There have been lots of Linux people in IBM since the early-90's. ..."
"... From a customer standpoint the main thing RedHat adds is formal support. There are still a lot of companies who are uncomfortable deploying an OS that has product support only from StackExchange and web forums ..."
"... You would do better to look at the execution on the numbers - RedHat is not hitting it's targets and there are signs of trouble. These two businesses were both looking for a prop and the RedHat shareholders are getting out while the business is near it's peak. ..."
Oct 30, 2018 | theregister.co.uk

Anonymous Coward , 6 hrs

Re: So exactly how is IBM going to tame employees that are used to going to work in shorts...

Purple rain will fall from that cloud...

asdf , 5 hrs
Wow bravo brutal and accurate. +1
I can't believe its not butter , 1 day
Redhat employees - get out now

You're utterly fucked. Run away now as you have zero future in IBM.

Anonymous Coward , 1 day
Re: Redhat employees - get out now

You're utterly fucked. Run away now as you have zero future in IBM.

That was my immediate thought upon hearing this. I've already worked for IBM, swore never to do it again. Time to dust off & update the resume.

Jove , 19 hrs
Re: Redhat employees - get out now

Another major corporate splashes out a fortune on a star business only to find the clash of cultures destroys substantial value.

W@ldo , 1 day
Re: At least is isnt oracle or M$

Sort of the lesser of evils---do you want to be shot or hung by the neck? No good choice for this acquisition.

Anonymous Coward , 1 day
Re: At least is isnt oracle or M$

honestly, MS would be fine. They're big into Linux and open source and still a heavily pro-engineering company.

Companies like Oracle and IBM are about nothing but making money. Which is why they're both going down the tubes. No-one who doesn't already have them goes near them.

Uncle Ron , 14 hrs
Re: At least is isnt oracle or M$

IBM was already "working on Linux." For decades. With multiple hundreds of full-time Linux developers--more than any other corporate contributor--around the world. And not just on including IBM-centric function into Linux, but on mainstream community projects. There have been lots of Linux people in IBM since the early-90's.

Orv , 9 hrs
Re: At least it is not Oracle or M$

The OS is from Linus and chums, Redhat adds a few storage bits and some Redhat logos and erm.......

From a customer standpoint the main thing RedHat adds is formal support. There are still a lot of companies who are uncomfortable deploying an OS that has product support only from StackExchange and web forums. This market is fairly insensitive to price, which is good for a company like RedHat. (Although there has been an exodus of higher education customers as the price has gone up; like Sun did back in the day, they've been squeezing out that market. Two campuses I've worked for have switched wholesale to CentOS.)

Jove , 11 hrs
RedHat take-over IBM - @HmmmYes

"Just compare the share price of RH v IBM"

You would do better to look at the execution on the numbers - RedHat is not hitting its targets and there are signs of trouble. These two businesses were both looking for a prop, and the RedHat shareholders are getting out while the business is near its peak.

[Oct 30, 2018] Pay your licensing fee

Oct 30, 2018 | linux.slashdot.org

red crab ( 1044734 ) , Monday October 29, 2018 @12:10AM ( #57552797 )

Re:Pay your licensing fee ( Score: 4 , Interesting)

Footnote: $699 License Fee applies to your systemP server running RHEL 7 with 4 cores activated for one year.

To activate additional processor cores on the systemP server, a fee of $199 per core applies. systemP offers a new Semi-Activation Mode now. In systemP Semi-Activation Mode, you will be only charged for all processor calls exceeding 258 MIPS, which will be processed by additional semi-activated cores on a pro-rata basis.

RHEL on systemP servers also offers a Partial Activation Mode, where additional cores can be activated in Inhibited Efficiency Mode.

To know more about Semi-Activation Mode, Partial Activation Mode and Inhibited Efficiency Mode, visit http://www.ibm.com/systemp [ibm.com] or contact your IBM systemP Sales Engineer.

[Oct 30, 2018] $34B? I was going to say this is the biggest tech acquisition ever, but it's second after Dell buying EMC

Notable quotes:
"... I'm not too sure what IBM is going to do with that, but congrats to whoever is getting the money... ..."
Oct 30, 2018 | theregister.co.uk

ratfox , 1 day

$34B? I was going to say this is the biggest tech acquisition ever, but it's second after Dell buying EMC. I'm not too sure what IBM is going to do with that, but congrats to whoever is getting the money...

[Oct 30, 2018] "OMG" comments rests on three assumptions: Red Hat is 100% brilliant and speckless, IBM is beyond hope and unchangeable, this is a hostile takeover

Notable quotes:
"... But I do beg to differ about the optimism, because, as my boss likes to quote, "culture eats strategy for breakfast". ..."
"... So the problem is that IBM have bought a business whose competencies and success factors differ from the IBM core. Its culture is radically different, and incompatible with IBM. ..."
"... Many of its best employees will be hostile to IBM ..."
"... And just like a gas giant, IBM can be considered a failed star ..."
Oct 30, 2018 | theregister.co.uk

LeoP , 1 day

Less pessimistic here

Quite a lot of the "OMG" moments rest on three assumptions:

  • Red Hat is 100% brilliant and speckless
  • IBM is beyond hope and unchangeable
  • This is a hostile takeover

I beg to differ on all counts. Call me beyond hope myself because of my optimism, but I do think what IBM bought most is a way to run a business. RH is just too big to be borged into a failing giant without leaving quite a substantial mark.

Ledswinger , 1 day
Re: Less pessimistic here

I beg to differ on all counts.

Note: I didn't downvote you, its a valid argument. I can understand why you think that, because I'm in the minority that think the IBM "bear" case is overdone. They've been cleaning their stables for some years now, and that means dropping quite a lot of low margin business, and seeing the topline shrink. That attracts a lot of criticism, although it is good business sense.

But I do beg to differ about the optimism, because, as my boss likes to quote, "culture eats strategy for breakfast".

And (speaking as a strategist) that's 100% true, and 150% true when doing M&A.

So the problem is that IBM have bought a business whose competencies and success factors differ from the IBM core. Its culture is radically different, and incompatible with IBM.

Many of its best employees will be hostile to IBM. RedHat will be borged, and it will leave quite a mark. A bit like Shoemaker-Levy 9 did on Jupiter. Likewise there will be lots of turbulence, but it won't endure, and at the end of it all the gas giant will be unchanged (just a bit poorer). And just like a gas giant, IBM can be considered a failed star.

[Oct 30, 2018] If IBM buys Redhat then what will happen to CentOS?

Notable quotes:
"... As long as IBM doesn't close-source RH stuff -- most of which they couldn't if they wanted to -- CentOS will still be able to do builds of it. The only thing RH can really enforce control over is the branding and documentation. ..."
"... Might be a REALLY good time to fork CentOS before IBM pulls an OpenSolaris on it. Same thing with Fedora. ..."
"... I used to be a Solaris admin. Now I am a linux admin in a red hat shop. Sun was bought by Oracle and more or less died a death. Will the same happen now? I know that Sun and RH are _very_ differeht beasts but I am thinking that now is the time to stop playing on the merry go round called systems administration. ..."
Oct 30, 2018 | theregister.co.uk

Orv , 9 hrs

Re: If IBM buys Redhat then what will happen to CentOS?

If IBM buys Redhat then what will happen to CentOS?

As long as IBM doesn't close-source RH stuff -- most of which they couldn't if they wanted to -- CentOS will still be able to do builds of it. The only thing RH can really enforce control over is the branding and documentation.

Anonymous Coward , 9 hrs
Re: "they will buy a lot of talent."

"I found that those with real talent that matched IBM needs are well looked after."

The problem is that when you have served your need you tend to get downsized as the expectation is that the cheaper offshore bodies can simply take over support etc after picking up the skills they need over a few months !!!

This 'Blue meets Red and assimilates' will be very interesting to watch and will need lots and lots of popcorn on hand !!!

:)

FrankAlphaXII , 1 day
Might be a REALLY good time to fork CentOS before IBM pulls an OpenSolaris on it. Same thing with Fedora.

Kind of sad really, when I used Linux CentOS and Fedora were my go-to distros.

Missing Semicolon , 1 day
Goodbye Centos

Centos is owned by RedHat now. So why on earth would IBM bother keeping it?

Plus, of course, all the support and coding will be done in India now.....

Doctor Syntax , 17 hrs
Re: Goodbye Centos

"Centos is owned by RedHat now."

RedHat is The Upstream Vendor of Scientific Linux. What happens to them if IBM turn nasty?

Anonymous Coward , 16 hrs
Re: Scientific Linux

Scientific Linux is a good idea in theory, dreadful in practice.

The idea of a research/academic-software-focused distro is a good one: unfortunately (I say unfortunately, but it's certainly what I would do myself), increasing numbers of researchers are now developing their pet projects on Debian or Ubuntu, and so therefore often only make .deb packages available.

Anyone who has had any involvement in research software knows that if you find yourself in the position of needing to compile someone else's pet project from source, you are often in for an even more bumpy ride than usual.

And the lack of compatible RPM packages just encourages more and more researchers to go where the packages (and the free-ness) are, namely Debian and friends, which continue to gather momentum, while Red Hat continues to stagnate.

Red Hat may be very stable for running servers (as long as you don't need anything reasonably new (not bleeding edge, but at least newer than three years old)), but I have never really seen the attraction in it myself (especially as there isn't much of a "community" feeling around it, as its commercial focus gets in the way).
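
For what it's worth, when a research tool ships only as a .deb there are two well-worn (if imperfect) workarounds on an RPM-based system; a rough sketch with hypothetical file names:

    # Option 1: convert the .deb to an .rpm with alien. Often good enough
    # for simple, self-contained tools; fragile for anything with complex
    # dependencies or maintainer scripts.
    alien --to-rpm sometool_1.0_amd64.deb
    sudo rpm -i sometool*.rpm   # alien adjusts the version number unless told otherwise

    # Option 2: the "bumpy ride" -- build the pet project from source.
    tar xf sometool-1.0.tar.gz && cd sometool-1.0
    ./configure && make && sudo make install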

Roger Kynaston , 1 day
another buy out of my bread and butter

I used to be a Solaris admin. Now I am a Linux admin in a Red Hat shop. Sun was bought by Oracle and more or less died a death. Will the same happen now? I know that Sun and RH are _very_ different beasts, but I am thinking that now is the time to stop playing on the merry-go-round called systems administration.

Anonymous Coward , 1 day
Don't become a Mad Hatter

Dear Red Hatters, welcome to the world of battery hens, all clucking and clicking away to produce the elusive golden egg while the axe looms over their heads. Even if your mental health survives, you will become chicken feed by the time you are middle aged. IBM doesn't have a heart; get out while you can. Follow the example of IBMers in Australia who are jumping ship in their droves, leaving behind a crippled, demoralised workforce. Don't become a Mad Hatter.

[Oct 30, 2018] Kind of surprising since IBM was aligned with SUSE for so many years.

Oct 30, 2018 | arstechnica.com

tgx Ars Centurion reply 5 hours ago

Kind of surprising since IBM was aligned with SUSE for so many years.

IBM is an awful software company. OS/2, Lotus Notes, AIX all withered on the vine.

Doesn't bode well for RedHat.

[Oct 30, 2018] Hello, we are mandating some new policies, git can no longer be used, we must use IBM synergy software with rational rose

Oct 30, 2018 | arstechnica.com

lordofshadows , Smack-Fu Master, in training 6 hours ago

Hello, we are mandating some new policies, git can no longer be used, we must use IBM synergy software with rational rose.

All open source software is subject to corporate approval first, we know we know, to help streamline this process we have approved GNU CC and are looking into this Mak file program.

We are very pleased with systemd, we wish to further expand its DHCP capabilities and also integrate IBM analytics -- We are also going through a rebranding operation as we feel the color red is too jarring for our customers, we will now be known as IBM Rational Hat and will only distribute through our retail channels to boost sales -- Look for us at walmart, circuit city, and staples

[Oct 30, 2018] And RH customers will want to check their contracts...

Oct 30, 2018 | arstechnica.com

CousinSven , Smack-Fu Master, in training et Subscriptor 4 hours ago New Poster

IBM are paying around 12x annual revenue for Red Hat which is a significant multiple so they will have to squeeze more money out of the business somehow. Either they grow customers or they increase margins or both.

IBM had little choice but to do something like this. They are in a terminal spiral thanks to years of bad leadership. The confused billing of the purchase smacks of rush: so far I have seen Red Hat described as a cloud company, an infosec company, an open source company...

So IBM are buying Red Hat as a last chance bid to avoid being put through the PE threshing machine. Red Hat get a ludicrous premium so will take the money.

And RH customers will want to check their contracts...

[Oct 30, 2018] IBM To Buy Red Hat, the Top Linux Distributor, For $34 Billion

Notable quotes:
"... IBM license fees are predatory. Plus they require you to install agents on your servers for the sole purpose of calculating use and licenses. ..."
"... IBM exploits workers by offshoring and are slow to fix bugs and critical CVEs ..."
Oct 30, 2018 | linux.slashdot.org

Anonymous Coward , Sunday October 28, 2018 @03:34PM ( #57550555 )

Re: Damn. ( Score: 5 , Insightful)

IBM license fees are predatory. Plus they require you to install agents on your servers for the sole purpose of calculating use and licenses.

IBM exploits workers by offshoring and are slow to fix bugs and critical CVEs (WAS and DB2 especially)

The Evil Atheist ( 2484676 ) , Sunday October 28, 2018 @04:13PM ( #57550755 ) Homepage
Re:Damn. ( Score: 4 , Insightful)

IBM buys a company, fires all the transferred employees and hopes they can keep selling their acquired software without further development. If they were serious, they'd have improved their own Linux contribution efforts.

But they literally think they can somehow keep selling software without anyone with knowledge of the software, or for transferring skills to their own employees.

They literally have no interest in actual software development. It's all about sales targets.

Anonymous Coward , Monday October 29, 2018 @01:00AM ( #57552963 )
Re:Damn. ( Score: 3 , Informative)

My advice to Red Hat engineers is to get out now. I was an engineer at a company that was acquired by IBM. I was fairly senior so I stayed on and ended up retiring from IBM, even though I hated my last few years working there. I worked for several companies during my career, from startups to Fortune 100 companies. IBM was the worst place I worked by far. Consider every bad thing you've ever heard about IBM. I've heard those things too, and the reality was much worse.

IBM hasn't improved their Linux contribution efforts because it wouldn't know how. It's not for lack of talented engineers. The management culture is simply pathological. No dissent is allowed. Everyone lives in fear of a low stack ranking and getting laid off. In the end it doesn't matter anyway. Eventually the product you work on that they originally purchased becomes unprofitable and they lay you off anyway. They've long forgotten how to develop software on their own. Don't believe me? Try to think of an IBM branded software product that they built from the ground up in the last 25 years that has significant market share. Development managers chase one development fad after another hoping to find the silver bullet that will allow them to continue the relentless cost cutting regime made necessary in order to make up revenue that has been falling consistently for over a decade now.

As far as I could tell, IBM is good at two things:

  1. Financial manipulation to disguise their shrinking revenue
  2. Buying software companies and mining them for value

Yes, there are still some brilliant people who work there. But IBM is just not good at turning ideas into revenue-producing products. They are nearly always unsuccessful when they try, and then they go out and buy a company that succeeded in bringing to market the kind of product that they tried and failed to build themselves.

They used to be good at customer support, but that is mainly lip service now. Just before I left the company I was tapped to deliver a presentation at a customer seminar. The audience did not care much about my presentation. The only thing they wanted to talk about was that they had invested millions in re-engineering their business to use our software and now IBM appeared to be wavering in its long-term commitment to supporting the product. It was all very embarrassing because I knew what they didn't: that the development and support resources currently allocated to the product line were a small fraction of what they once were. After having worked there I don't know why anyone would ever want to buy a license for any of their products.

gtall ( 79522 ) , Sunday October 28, 2018 @03:59PM ( #57550691 )
Re:A Cloudy argument. ( Score: 5 , Insightful)

So you are saying that IBM has been asleep at the wheel for the last 8 years. Buying Red Hat won't save them, IBM is IBM's enemy.

Aighearach ( 97333 ) writes:
Re: ( Score: 3 )

They're already one of the large cloud providers, but you don't know that because they only focus on big customers.

The Evil Atheist ( 2484676 ) , Sunday October 28, 2018 @04:29PM ( #57550829 ) Homepage
Re:A Cloudy argument. ( Score: 5 , Insightful)

IBM engineers aren't actually crappy. It's the fucking MBAs in management who have no clue about how to run a software development company. Their engineers will want to do good work, but management will worry more about headcount and sales.

The Evil Atheist ( 2484676 ) , Sunday October 28, 2018 @03:57PM ( #57550679 ) Homepage
Goodbye Redhat. ( Score: 5 , Insightful)

IBM acquisitions never go well. All companies acquired by IBM go through a process of "Blue washing", in which the heart and soul of the acquired company is ripped out, the body burnt, and the remaining ashes to be devoured and defecated by its army of clueless salesmen and consultants. It's a sad, and infuriating, repeated pattern. They no longer develop internal talent. They drive away the remaining people left over from the time when they still did develop things. They think they can just buy their way into a market or technology, somehow completely oblivious to the fact that their strategy of firing all their acquired employees/knowledge and hoping to sell software they have no interest in developing would somehow still retain customers. They literally could have just reshuffled and/or hired more developers to work on the kernel, but the fact they didn't shows they have no intention of actually contributing.

Nkwe ( 604125 ) , Sunday October 28, 2018 @04:26PM ( #57550819 )
Cha-Ching ( Score: 3 )

Red Hat closed Friday at $116.68 per share; it looks like the buyout is for $190. Not everyone will be unhappy with this. I hope the Red Hat employees who won't like the upcoming cultural changes have stock and options; it may soften the blow a bit.

DougDot ( 966387 ) writes: < dougr@parrot-farm.net > on Sunday October 28, 2018 @05:43PM ( #57551189 ) Homepage
AIX Redux ( Score: 5 , Interesting)

Oh, good. Now IBM can turn RH into AIX while simultaneously suffocating whatever will be left of Redhat's staff with IBM's crushing, indifferent, incompetent bureaucracy.

This is what we call a lose-lose situation. Well, except for the president of Redhat, of course. Jim Whitehurst just got rich.

Tough Love ( 215404 ) writes: on Sunday October 28, 2018 @10:55PM ( #57552583 )
Re:AIX Redux ( Score: 2 )

Worse than Redhat's crushing, indifferent, incompetent bureaucracy? Maybe, but it's close.

ArchieBunker ( 132337 ) , Sunday October 28, 2018 @06:01PM ( #57551279 ) Homepage
Re:AIX Redux ( Score: 5 , Insightful)

Redhat is damn near AIX already. AIX had binary log files long before systemd.

Antique Geekmeister ( 740220 ) , Monday October 29, 2018 @05:51AM ( #57553587 )
Re:Please God No ( Score: 4 , Informative)

The core CentOS leadership are now Red Hat employees. They are not in the clear, nor uninvolved, in this purchase.

alvinrod ( 889928 ) , Sunday October 28, 2018 @03:46PM ( #57550623 )
Re:Please God No ( Score: 5 , Insightful)

Depends on the state. Non-compete clauses are unenforceable in some jurisdictions. IBM would want some of the people to stick around. You can't just take over a complex system from someone else and expect everything to run smoothly or know how to fix or extend it. Also, not everyone who works at Red Hat gets anything from the buyout unless they were regularly giving employees stock. A lot of people are going to want the stable paycheck of working for IBM instead of trying to start a new company.

However, some will inevitably get sick of working at IBM or end up being laid off at some point. If these people want to keep doing what they're doing, they can start a new company. If they're good at what they do, they probably won't have much trouble attracting some venture capital either.

wyattstorch516 ( 2624273 ) , Sunday October 28, 2018 @04:41PM ( #57550887 )
Re:Please God No ( Score: 2 )

Red Hat went public in 1999; they are far from being a start-up. They have acquired several companies themselves, so they are just as corporate as IBM, although significantly smaller.

Anonymous Coward , Sunday October 28, 2018 @05:10PM ( #57551035 )
Re:Please God No ( Score: 5 , Funny)

Look on the bright side: Poettering works for Red Hat. (Reposting because apparently Poettering has mod points.)

Anonymous Coward , Monday October 29, 2018 @09:24AM ( #57554491 )
Re: It all ( Score: 5 , Interesting)

My feelings exactly. As a former employee for both places, I see this as the death knell for Red Hat. Not immediately, not quickly, but eventually Red Hat's going to go the same way as every other company IBM has acquired.

Red Hat's doom (again, all IMO) started about 10 years ago or so when Matt Szulik left and Jim Whitehurst came on board. Nothing against Jim, but he NEVER seemed to grasp what F/OSS was about. Hell, when he came onboard he wouldn't (and never did) use Linux at all: instead he used a Mac, and so did the rest of the EMT (executive management team) over time. What company is run by people who refuse to use its own product, except one that doesn't have faith in it? The person on top of the BRAND AND PEOPLE team "needed" an iPad, she said, to do her work (quoting a friend in the IT dept who was asked to get it and set it up for her).

Then the EMTs wanted to move away from using F/OSS internally and to outsource huge aspects of our infrastructure (like no longer using F/OSS for email and instead contracting with GOOGLE to do our email, calendaring and document sharing). That, again for me, is when the plane started to spiral. How can we sell to OUR CUSTOMERS the idea that "Red Hat and F/OSS will suit all of your corporate needs" when the people running the ship didn't think it would work for OURS? We had no special email or calendar needs, and if we did, WE WERE THE LEADERS OF OPEN SOURCE; couldn't we make it do what we want? Hell, I was on an internal (but on our own time) team whose goal was to take needs like this and incubate an open source solution to meet them.

But the EMTs just didn't want to do that. They were too interested in what was "the big thing" (at the time Open Shift was where all of our hiring and resources were being poured) to pay attention to the foundations that were crumbling.

And now, here we are. Red Hat is being subsumed by the largest closed-source company on the planet, one that does its job sub-optimally (to be nice). This is the end of Red Hat as we know it. Within 5-7 years Red Hat will go the way of Tivoli and Lotus: it will be a brand name that lacks any of what made the original company what it was when it was acquired.

[Oct 30, 2018] Why didn't IBM do the same as Oracle?

Notable quotes:
"... Just fork it, call it Blue Hat Linux, and rake in those sweet support dollars. ..."
Oct 30, 2018 | arstechnica.com

Twilight Sparkle , Ars Scholae Palatinae 4 hours ago

Why would you pay even 34 dollars for software worth $0?

Just fork it, call it Blue Hat Linux, and rake in those sweet support dollars.

[Oct 30, 2018] Soon after I started, the company fired hundreds of 50-something employees and put us "kids" in their jobs. Seeing that employee loyalty was a one way street at that place, I left after a couple of years. Best career move I ever made.

Oct 30, 2018 | features.propublica.org

Al Romig , Wednesday, April 18, 2018 5:20 AM

As a new engineering graduate, I joined a similar-sized multinational US-based company in the early '70s. Their recruiting pitch was, "Come to work here, kid. Do your job, keep your nose clean, and you will enjoy great, secure work until you retire on easy street".

Soon after I started, the company fired hundreds of 50-something employees and put us "kids" in their jobs. Seeing that employee loyalty was a one way street at that place, I left after a couple of years. Best career move I ever made.

GoingGone , Friday, April 13, 2018 6:06 PM
As a 25yr+ vet of IBM, I can confirm that this article is spot-on true. IBM used to be a proud and transparent company that clearly demonstrated that it valued its employees as much as it did its stock performance or dividend rate or EPS, simply because it is good for business. Those principles helped make and keep IBM atop the business world as the most trusted international brand and business icon of success for so many years.

In 2000, all that changed when Sam Palmisano became the CEO. Palmisano's now infamous "Roadmap 2015" ran the company into the ground through its maniacal focus on increasing EPS at any and all costs. Literally. Like its employees, employee compensation, benefits, skills, and education opportunities. Like its products, product innovation, quality, and customer service. All of which resulted in the devastation of its technical capability and competitiveness, employee engagement, and customer loyalty.

Executives seemed happy enough as their compensation grew nicely with greater financial efficiencies, and Palmisano got a sweet $270M+ exit package in 2012 for a job well done. The new CEO, Ginni Rometty, has since undergone a lot of scrutiny for her lack of business results, but she was screwed from day one. Of course, that doesn't leave her off the hook for the business practices outlined in the article, but what do you expect: she was hand-picked by Palmisano and approved by the same board that thought Palmisano was golden.
Paul V Sutera , Tuesday, April 3, 2018 7:33 PM
In 1994, I saved my job at IBM for the first time, and survived. But I was 36 years old. I sat down at the desk of a man in his 50s, and found a few odds and ends left for me in the desk. Almost 20 years later, it was my turn to go. My health and well-being are much better now. Less money but better health. The sins committed by management will always be: "I was just following orders".

[Oct 30, 2018] IBM age discrimination

Notable quotes:
"... Correction, March 24, 2018: Eileen Maroney lives in Aiken, South Carolina. The name of her city was incorrect in the original version of this story. ..."
Oct 30, 2018 | features.propublica.org

Consider, for example, a planning presentation that former IBM executives said was drafted by heads of a business unit carved out of IBM's once-giant software group and charged with pursuing the "C," or cloud, portion of the company's CAMS strategy.

The presentation laid out plans for substantially altering the unit's workforce. It was shown to company leaders including Diane Gherson, the senior vice president for human resources, and James Kavanaugh, recently elevated to chief financial officer. Its language was couched in the argot of "resources," IBM's term for employees, and "EP's," its shorthand for early professionals or recent college graduates.

Among the goals: "Shift headcount mix towards greater % of Early Professional hires." Among the means: "[D]rive a more aggressive performance management approach to enable us to hire and replace where needed, and fund an influx of EPs to correct seniority mix." Among the expected results: "[A] significant reduction in our workforce of 2,500 resources."

A slide from a similar presentation prepared last spring for the same leaders called for "re-profiling current talent" to "create room for new talent." Presentations for 2015 and 2016 for the 50,000-employee software group also included plans for "aggressive performance management" and emphasized the need to "maintain steady attrition to offset hiring."

IBM declined to answer questions about whether either presentation was turned into company policy. The description of the planned moves matches what hundreds of older ex-employees told ProPublica they believe happened to them: They were ousted because of their age. The company used their exits to hire replacements, many of them young; to ship their work overseas; or to cut its overall headcount.

Ed Alpern, now 65, of Austin, started his 39-year run with IBM as a Selectric typewriter repairman. He ended as a project manager in October of 2016 when, he said, his manager told him he could either leave with severance and other parting benefits or be given a bad job review -- something he said he'd never previously received -- and risk being fired without them.

Albert Poggi, now 70, was a three-decade IBM veteran and ran the company's Palisades, New York, technical center where clients can test new products. When notified in November of 2016 he was losing his job to layoff, he asked his bosses why, given what he said was a history of high job ratings. "They told me," he said, "they needed to fill it with someone newer."

The presentations from the software group, as well as the stories of ex-employees like Alpern and Poggi, square with internal documents from two other major IBM business units. The documents for all three cover some or all of the years from 2013 through the beginning of 2018 and deal with job assessments, hiring, firing and layoffs.

The documents detail practices that appear at odds with how IBM says it treats its employees. In many instances, the practices in effect, if not intent, tilt against the company's older U.S. workers.

For example, IBM spokespeople and lawyers have said the company never considers a worker's age in making decisions about layoffs or firings.

But one 2014 document reviewed by ProPublica includes dates of birth. An ex-IBM employee familiar with the process said executives from one business unit used it to decide about layoffs or other job changes for nearly a thousand workers, almost two-thirds of them over 50.

Documents from subsequent years show that young workers are protected from cuts for at least a limited period of time. A 2016 slide presentation prepared by the company's global technology services unit, titled "U.S. Resource Action Process" and used to guide managers in layoff procedures, includes bullets for categories considered "ineligible" for layoff. Among them: "early professional hires," meaning recent college graduates.

In responding to age-discrimination complaints that ex-employees file with the EEOC, lawyers for IBM say that front-line managers make all decisions about who gets laid off, and that their decisions are based strictly on skills and job performance, not age.

But ProPublica reviewed spreadsheets that indicate front-line managers hardly acted alone in making layoff calls. Former IBM managers said the spreadsheets were prepared for upper-level executives and kept continuously updated. They list hundreds of employees together with codes like "lift and shift," indicating that their jobs were to be lifted from them and shifted overseas, and details such as whether IBM's clients had approved the change.

An examination of several of the spreadsheets suggests that, whatever the criteria for assembling them, the resulting list of those marked for layoff was skewed toward older workers. A 2016 spreadsheet listed more than 400 full-time U.S. employees under the heading "REBAL," which refers to "rebalancing," the process that can lead to laying off workers and either replacing them or shifting the jobs overseas. Using the job search site LinkedIn, ProPublica was able to locate about 100 of these employees and then obtain their ages through public records. Ninety percent of those found were 40 or older. Seventy percent were over 50.

IBM frequently cites its history of encouraging diversity in its responses to EEOC complaints about age discrimination. "IBM has been a leader in taking positive actions to ensure its business opportunities are made available to individuals without regard to age, race, color, gender, sexual orientation and other categories," a lawyer for the company wrote in a May 2017 letter. "This policy of non-discrimination is reflected in all IBM business activities."

But ProPublica found at least one company business unit using a point system that disadvantaged older workers. The system awarded points for attributes valued by the company. The more points a person garnered, according to the former employee, the more protected she or he was from layoff or other negative job change; the fewer points, the more vulnerable.

The arrangement appears on its face to favor younger newcomers over older veterans. Employees were awarded points for being relatively new at a job level or in a particular role. Those who worked for IBM for fewer years got more points than those who'd been there a long time.

The ex-employee familiar with the process said a 2014 spreadsheet from that business unit, labeled "IBM Confidential," was assembled to assess the job prospects of more than 600 high-level employees, two-thirds of them from the U.S. It included employees' years of service with IBM, which the former employee said was used internally as a proxy for age. Also listed was an assessment by their bosses of their career trajectories as measured by the highest job level they were likely to attain if they remained at the company, as well as their point scores.

The tilt against older workers is evident when employees' years of service are compared with their point scores. Those with no points and therefore most vulnerable to layoff had worked at IBM an average of more than 30 years; those with a high number of points averaged half that.

Perhaps even more striking is the comparison between employees' service years and point scores on the one hand and their superiors' assessments of their career trajectories on the other.

Along with many American employers, IBM has argued it needs to shed older workers because they're no longer at the top of their games or lack "contemporary" skills.

But among those sized up in the confidential spreadsheet, fully 80 percent of older employees -- those with the most years of service but no points and therefore most vulnerable to layoff -- were rated by superiors as good enough to stay at their current job levels or be promoted. By contrast, only a small percentage of younger employees with a high number of points were similarly rated.

"No major company would use tools to conduct a layoff where a disproportionate share of those let go were African Americans or women," said Cathy Ventrell-Monsees, senior attorney adviser with the EEOC and former director of age litigation for the senior lobbying giant AARP. "There's no difference if the tools result in a disproportionate share being older workers."

In addition to the point system that disadvantaged older workers in layoffs, other documents suggest that IBM has made increasingly aggressive use of its job-rating machinery to pave the way for straight-out firings, or what the company calls "management-initiated separations." Internal documents suggest that older workers were especially targeted.

Like in many companies, IBM employees sit down with their managers at the start of each year and set goals for themselves. IBM graded on a scale of 1 to 4, with 1 being top-ranked.

Those rated as 3 or 4 were given formal short-term goals known as personal improvement plans, or PIPs. Historically many managers were lenient, especially toward those with 3s whose ratings had dropped because of forces beyond their control, such as a weakness in the overall economy, ex-employees said.

But within the past couple of years, IBM appears to have decided the time for leniency was over. For example, a software group planning document for 2015 said that, over and above layoffs, the unit should seek to fire about 3,000 of the unit's 50,000-plus workers.

To make such deep cuts, the document said, executives should strike an "aggressive performance management posture." They needed to double the share of employees given low 3 and 4 ratings to at least 6.6 percent of the division's workforce. And because layoffs cost the company more than outright dismissals or resignations, the document said, executives should make sure that more than 80 percent of those with low ratings get fired or forced to quit.

Finally, the 2015 document said the division should work "to attract the best and brightest early professionals" to replace up to two-thirds of those sent packing. A more recent planning document -- the presentation to top executives Gherson and Kavanaugh for a business unit carved out of the software group -- recommended using similar techniques to free up money by cutting current employees to fund an "influx" of young workers.

In a recent interview, Poggi said he was resigned to being laid off. "Everybody at IBM has a bullet with their name on it," he said. Alpern wasn't nearly as accepting of being threatened with a poor job rating and then fired.

Alpern had a particular reason for wanting to stay on at IBM, at least until the end of last year. His younger son, Justin, then a high school senior, had been named a National Merit semifinalist. Alpern wanted him to be able to apply for one of the company's Watson scholarships. But IBM had recently narrowed eligibility so that only the children of current employees could apply, not those of retirees as well, as had been the case until 2014.

Alpern had to make it through December for his son to be eligible.

But in August, he said, his manager ordered him to retire. He sought to buy time by appealing to superiors. But he said the manager's response was to threaten him with a bad job review that, he was told, would land him on a PIP, where his work would be scrutinized weekly. If he failed to hit his targets -- and his managers would be the judges of that -- he'd be fired and lose his benefits.

Alpern couldn't risk it; he retired on Oct. 31. His son, now a freshman on the dean's list at Texas A&M University, didn't get to apply.

"I can think of only a couple regrets or disappointments over my 39 years at IBM,"" he said, "and that's one of them."

'Congratulations on Your Retirement!'

Like any company in the U.S., IBM faces few legal constraints to reducing the size of its workforce. And with its no-disclosure strategy, it eliminated one of the last regular sources of information about its employment practices and the changing size of its American workforce.

But there remained the question of whether recent cutbacks were big enough to trigger state and federal requirements for disclosure of layoffs. And internal documents, such as a slide in a 2016 presentation titled "Transforming to Next Generation Digital Talent," suggest executives worried that "winning the talent war" for new young workers required IBM to improve the "attractiveness of (its) culture and work environment," a tall order in the face of layoffs and firings.

So the company apparently has sought to put a softer face on its cutbacks by recasting many as voluntary rather than the result of decisions by the firm. One way it has done this is by converting many layoffs to retirements.

Some ex-employees told ProPublica that, faced with a layoff notice, they were just as happy to retire. Others said they felt forced to accept a retirement package and leave. Several actively objected to the company treating their ouster as a retirement. The company nevertheless processed their exits as such.

Project manager Ed Alpern's departure was treated in company paperwork as a voluntary retirement. He didn't see it that way, because the alternative he said he was offered was being fired outright.

Lorilynn King, a 55-year-old IT specialist who worked from her home in Loveland, Colorado, had been with IBM almost as long as Alpern by May 2016 when her manager called to tell her the company was conducting a layoff and her name was on the list.

King said the manager told her to report to a meeting in Building 1 on IBM's Boulder campus the following day. There, she said, she found herself in a group of other older employees being told by an IBM human resources representative that they'd all be retiring. "I have NO intention of retiring," she remembers responding. "I'm being laid off."

ProPublica has collected documents from 15 ex-IBM employees who got layoff notices followed by a retirement package and has talked with many others who said they received similar paperwork. Critics say the sequence doesn't square well with the law.

"This country has banned mandatory retirement," said Seiner, the University of South Carolina law professor and former EEOC appellate lawyer. "The law says taking a retirement package has to be voluntary. If you tell somebody 'Retire or we'll lay you off or fire you,' that's not voluntary."

Until recently, the company's retirement paperwork included a letter from Rometty, the CEO, that read, in part, "I wanted to take this opportunity to wish you well on your retirement ... While you may be retiring to embark on the next phase of your personal journey, you will always remain a valued and appreciated member of the IBM family." Ex-employees said IBM stopped sending the letter last year.

IBM has also embraced another practice that leads workers, especially older ones, to quit on what appears to be a voluntary basis. It substantially reversed its pioneering support for telecommuting, telling people who've been working from home for years to begin reporting to certain, often distant, offices. Their other choice: Resign.

David Harlan had worked as an IBM marketing strategist from his home in Moscow, Idaho, for 15 years when a manager told him last year of orders to reduce the performance ratings of everybody at his pay grade. Then in February last year, when he was 50, came an internal video from IBM's new senior vice president, Michelle Peluso, which announced plans to improve the work of marketing employees by ordering them to work "shoulder to shoulder." Those who wanted to stay on would need to "co-locate" to offices in one of six cities.

Early last year, Harlan received an email congratulating him on "the opportunity to join your team in Raleigh, North Carolina." He had 30 days to decide on the 2,600-mile move. He resigned in June.

David Harlan worked for IBM for 15 years from his home in Moscow, Idaho, where he also runs a drama company. Early last year, IBM offered him a choice: Move 2,600 miles to Raleigh-Durham to begin working at an office, or resign. He left in June. (Rajah Bose for ProPublica)

After the Peluso video was leaked to the press, an IBM spokeswoman told the Wall Street Journal that the "vast majority" of people ordered to change locations and begin reporting to offices did so. IBM Vice President Ed Barbini said in an initial email exchange with ProPublica in July that the new policy affected only about 2,000 U.S. employees and that "most" of those had agreed to move.

But employees across a wide range of company operations, from the systems and technology group to analytics, told ProPublica they've also been ordered to co-locate in recent years. Many IBMers with long service said that they quit rather than sell their homes, pull children from school and desert aging parents. IBM declined to say how many older employees were swept up in the co-location initiative.

"They basically knew older employees weren't going to do it," said Eileen Maroney, a 63-year-old IBM product manager from Aiken, South Carolina, who, like Harlan, was ordered to move to Raleigh or resign. "Older people aren't going to move. It just doesn't make any sense." Like Harlan, Maroney left IBM last June.

Having people quit rather than being laid off may help IBM avoid disclosing how much it is shrinking its U.S. workforce and where the reductions are occurring.

Under the federal WARN Act, adopted in the wake of huge job cuts and factory shutdowns during the 1980s, companies laying off 50 or more employees who constitute at least one-third of an employer's workforce at a site have to give advance notice of layoffs to the workers, public agencies and local elected officials.

Similar laws in some states where IBM has a substantial presence are even stricter. California, for example, requires advanced notice for layoffs of 50 or more employees, no matter what the share of the workforce. New York requires notice for 25 employees who make up a third.

Because the laws were drafted to deal with abrupt job cuts at individual plants, they can miss reductions that occur over long periods among a workforce like IBM's that was, at least until recently, widely dispersed because of the company's work-from-home policy.

IBM's training sessions to prepare managers for layoffs suggest the company was aware of WARN thresholds, especially in states with strict notification laws such as California. A 2016 document entitled "Employee Separation Processing" and labeled "IBM Confidential" cautions managers about the "unique steps that must be taken when processing separations for California employees."

A ProPublica review of five years of WARN disclosures for a dozen states where the company had large facilities that shed workers found no disclosures in nine. In the other three, the company alerted authorities of just under 1,000 job cuts -- 380 in California, 369 in New York and 200 in Minnesota. IBM's reported figures are well below the actual number of jobs the company eliminated in these states, where in recent years it has shuttered, sold off or leveled plants that once employed vast numbers.

By contrast, other employers in the same 12 states reported layoffs last year alone totaling 215,000 people. They ranged from giant Walmart to Ostrom's Mushroom Farms in Washington state.

Whether IBM operated within the rules of the WARN act, which are notoriously fungible, could not be determined because the company declined to provide ProPublica with details on its layoffs.

A Second Act, But Poorer

With 35 years at IBM under his belt, Ed Miyoshi had plenty of experience being pushed to take buyouts, or early retirement packages, and refusing them. But he hadn't expected to be pushed last fall.

Miyoshi, of Hopewell Junction, New York, had some years earlier launched a pilot program to improve IBM's technical troubleshooting. With the blessing of an IBM vice president, he was busily interviewing applicants in India and Brazil to staff teams to roll the program out to clients worldwide.

The interviews may have been why IBM mistakenly assumed Miyoshi was a manager, and so emailed him to eliminate the one U.S.-based employee still left in his group.

"That was me," Miyoshi realized.

In his sign-off email to colleagues shortly before Christmas 2016, Miyoshi, then 57, wrote: "I am too young and too poor to stop working yet, so while this is good-bye to my IBM career, I fully expect to cross paths with some of you very near in the future."

He did, and perhaps sooner than his colleagues had expected; he started as a subcontractor to IBM about two weeks later, on Jan. 3.

Miyoshi is an example of older workers who've lost their regular IBM jobs and been brought back as contractors. Some of them -- not Miyoshi -- became contract workers after IBM told them their skills were out of date and no longer needed.

Employment law experts said that hiring ex-employees as contractors can be legally dicey. It raises the possibility that the layoff of the employee was not for the stated reason but perhaps because they were targeted for their age, race or gender.

IBM appears to recognize the problem. Ex-employees say the company has repeatedly told managers -- most recently earlier this year -- not to contract with former employees or sign on with third-party contracting firms staffed by ex-IBMers. But ProPublica turned up dozens of instances where the company did just that.

Only two weeks after IBM laid him off in December 2016, Ed Miyoshi of Hopewell Junction, New York, started work as a subcontractor to the company. But he took a $20,000-a-year pay cut. "I'm not a millionaire, so that's a lot of money to me," he says. (Demetrius Freeman for ProPublica)

Responding to a question in a confidential questionnaire from ProPublica, one 35-year company veteran from New York said he knew exactly what happened to the job he left behind when he was laid off. "I'M STILL DOING IT. I got a new gig eight days after departure, working for a third-party company under contract to IBM doing the exact same thing."

In many cases, of course, ex-employees are happy to have another job, even if it is connected with the company that laid them off.

Henry, the Columbus-based sales and technical specialist who'd been with IBM's "resiliency services" unit, discovered that he'd lost his regular IBM job because the company had purchased an Indian firm that provided the same services. But after a year out of work, he wasn't going to turn down the offer of a temporary position as a subcontractor for IBM, relocating data centers. It got money flowing back into his household and got him back where he liked to be, on the road traveling for business.

The compensation most ex-IBM employees make as contractors isn't comparable. While Henry said he collected the same dollar amount, it didn't include health insurance, which cost him $1,325 a month. Miyoshi said his paycheck is 20 percent less than what he made as an IBM regular.

"I took an over $20,000 hit by becoming a contractor. I'm not a millionaire, so that's a lot of money to me," Miyoshi said.

And lower pay isn't the only problem ex-IBM employees-now-subcontractors face. This year, Miyoshi's payable hours have been cut by an extra 10 "furlough days." Internal documents show that IBM repeatedly furloughs subcontractors without pay, often for two, three or more weeks a quarter. In some instances, the furloughs occur with little advance notice and at financially difficult moments. In one document, for example, it appears IBM managers, trying to cope with a cost overrun spotted in mid-November, planned to dump dozens of subcontractors through the end of the year, the middle of the holiday season.

Former IBM employees now on contract said the company controls costs by notifying contractors in the midst of projects they have to take pay cuts or lose the work. Miyoshi said that he originally started working for his third-party contracting firm for 10 percent less than at IBM, but ended up with an additional 10 percent cut in the middle of 2017, when IBM notified the contractor it was slashing what it would pay.

For many ex-employees, there are few ways out. Henry, for example, sought to improve his chances of landing a new full-time job by seeking assistance to finish a college degree through a federal program designed to retrain workers hurt by offshoring of jobs.

But when he contacted the Ohio state agency that administers the Trade Adjustment Assistance, or TAA, program, which provides assistance to workers who lose their jobs for trade-related reasons, he was told IBM hadn't submitted necessary paperwork. State officials said Henry could apply if he could find other IBM employees who were laid off with him, information that the company doesn't provide.

TAA is overseen by the Labor Department but is operated by states under individual agreements with Washington, so the rules can vary from state to state. But generally employers, unions, state agencies and groups of employers can petition for training help and cash assistance. Labor Department data compiled by the advocacy group Global Trade Watch shows that employers apply in about 40 percent of cases. Some groups of IBM workers have obtained retraining funds when they or their state have applied, but records dating back to the early 1990s show IBM itself has applied for and won taxpayer assistance only once, in 2008, for three Chicago-area workers whose jobs were being moved to India.

Teasing New Jobs

As IBM eliminated thousands of jobs in 2016, David Carroll, a 52-year-old Austin software engineer, thought he was safe.

His job was in mobile development, the "M" in the company's CAMS strategy. And if that didn't protect him, he figured he was only four months shy of qualifying for a program that gives employees who leave within a year of their three-decade mark access to retiree medical coverage and other benefits.

But the layoff notice Carroll received March 2 gave him three months -- not four -- to come up with another job. Having been a manager, he said he knew the gantlet he'd have to run to land a new position inside IBM.

Still, he went at it hard, applying for more than 50 IBM jobs, including one for a job he'd successfully done only a few years earlier. For his effort, he got one offer -- the week after he'd been forced to depart. He got severance pay but lost access to what would have been more generous benefits.

Edward Kishkill, then 60, of Hillsdale, New Jersey, had made a similar calculation.

A senior systems engineer, Kishkill recognized the danger of layoffs, but assumed he was immune because he was working in systems security, the "S" in CAMS and another hot area at the company.

The precaution did him no more good than it had Carroll. Kishkill received a layoff notice the same day, along with 17 of the 22 people on his systems security team, including Diane Moos. The notice said that Kishkill could look for other jobs internally. But if he hadn't landed anything by the end of May, he was out.

With a daughter who was a senior in high school headed to Boston University, he scrambled to apply, but came up dry. His last day was May 31, 2016.

For many, the fruitless search for jobs within IBM is the last straw, a final break with the values the company still says it embraces. Combined with the company's increasingly frequent request that departing employees train their overseas replacements, it has left many people bitter. Scores of ex-employees interviewed by ProPublica said that managers with job openings told them they weren't allowed to hire from layoff lists without getting prior, high-level clearance, something that's almost never given.

ProPublica reviewed documents that show that a substantial share of recent IBM layoffs have involved what the company calls "lift and shift," lifting the work of specific U.S. employees and shifting it to specific workers in countries such as India and Brazil. For example, a document summarizing U.S. employment in part of the company's global technology services division for 2015 lists nearly a thousand people as layoff candidates, with the jobs of almost half coded for lift and shift.

Ex-employees interviewed by ProPublica said the lift-and-shift process required their extensive involvement. For example, shortly after being notified she'd be laid off, Kishkill's colleague, Moos, was told to help prepare a "knowledge transfer" document and begin a round of conference calls and email exchanges with two Indian IBM employees who'd be taking over her work. Moos said the interactions consumed much of her last three months at IBM.

Next Chapters

While IBM has managed to keep the scale and nature of its recent U.S. employment cuts largely under the public's radar, the company drew some unwanted attention during the 2016 presidential campaign, when then-candidate Donald Trump lambasted it for eliminating 500 jobs in Minnesota, where the company has had a presence for a half century, and shifting the work abroad.

The company also has caught flak -- in places like Buffalo, New York; Dubuque, Iowa; Columbia, Missouri; and Baton Rouge, Louisiana -- for promising jobs in return for state and local incentives, then failing to deliver. In all, according to public officials in those and other places, IBM promised to bring on 3,400 workers in exchange for as much as $250 million in taxpayer financing but has hired only about half as many.

After Trump's victory, Rometty, in a move at least partly aimed at courting the president-elect, pledged to hire 25,000 new U.S. employees by 2020. Spokesmen said the hiring would increase IBM's U.S. employment total, although, given its continuing job cuts, the addition is unlikely to approach the promised hiring total.

When The New York Times ran a story last fall saying IBM now has more employees in India than the U.S., Barbini, the corporate spokesman, rushed to declare, "The U.S. has always been and remains IBM's center of gravity." But his stream of accompanying tweets and graphics focused as much on the company's record for racking up patents as hiring people.

IBM has long been aware of the damage its job cuts can do to people. In a series of internal training documents to prepare managers for layoffs in recent years, the company has included this warning: "Loss of a job often triggers a grief reaction similar to what occurs after a death."

Most, though not all, of the ex-IBM employees with whom ProPublica spoke have weathered the loss and re-invented themselves.

Marjorie Madfis, the digital marketing strategist, couldn't land another tech job after her 2013 layoff, so she headed in a different direction. She started a nonprofit called Yes She Can Inc. that provides job skills development for young autistic women, including her 21-year-old daughter.

After almost two years of looking and desperate for useful work, Brian Paulson, the widely traveled IBM senior manager, applied for and landed a position as a part-time rural letter carrier in Plano, Texas. He now works as a contract project manager for a Las Vegas gaming and lottery firm.

Ed Alpern, who started at IBM as a Selectric typewriter repairman, watched his son go on to become a National Merit Scholar at Texas A&M University, but not a Watson scholarship recipient.

Lori King, the IT specialist and 33-year IBM veteran who's now 56, got in a parting shot. She added an addendum to the retirement papers the firm gave her that read in part: "It was never my plan to retire earlier than at least age 60 and I am not committing to retire. I have been informed that I am impacted by a resource action effective on 2016-08-22, which is my last day at IBM, but I am NOT retiring."

King has aced more than a year of government-funded coding boot camps and university computer courses, but has yet to land a new job.

David Harlan still lives in Moscow, Idaho, after refusing IBM's "invitation" to move to North Carolina, and is artistic director of the Moscow Art Theatre (Too).

Ed Miyoshi is still a technical troubleshooter working as a subcontractor for IBM.

Ed Kishkill, the senior systems engineer, works part time at a local tech startup, but pays his bills as an associate at a suburban New Jersey Staples store.

This year, Paul Henry was back on the road, working as an IBM subcontractor in Detroit, about 200 miles from where he lived in Columbus. On Jan. 8, he put in a 14-hour day and said he planned to call home before turning in. He died in his sleep.

Correction, March 24, 2018: Eileen Maroney lives in Aiken, South Carolina. The name of her city was incorrect in the original version of this story.

Peter Gosselin joined ProPublica as a contributing reporter in January 2017 to cover aging. He has covered the U.S. and global economies for, among others, the Los Angeles Times and The Boston Globe, focusing on the lived experiences of working people. He is the author of "High Wire: The Precarious Financial Lives of American Families."

Ariana Tobin is an engagement reporter at ProPublica, where she works to cultivate communities to inform our coverage. She was previously at The Guardian and WNYC. Ariana has also worked as digital producer for APM's Marketplace and contributed to outlets including The New Republic, On Being, the St. Louis Beacon and Bustle.

Production by Joanna Brenner and Hannah Birch. Art direction by David Sleight. Illustrations by Richard Borge.

[Oct 30, 2018] Cutting 'Old Heads' at IBM

Notable quotes:
"... I took an early retirement package when IBM first started downsizing. I had 30 years with them, but I could see the writing on the wall so I got out. I landed an exec job with a biotech company some years later and inherited an IBM consulting team that were already engaged. I reviewed their work for 2 months then had the pleasure of terminating the contract and actually escorting the team off the premises because the work product was so awful. ..."
"... Every former or prospective IBM employee is a potential future IBM customer or partner. How you treat them matters! ..."
"... I advise IBM customers now. My biggest professional achievements can be measured in how much revenue IBM lost by my involvement - millions. Favorite is when IBM paid customer to stop the bleeding. ..."
Oct 30, 2018 | features.propublica.org

I took an early retirement package when IBM first started downsizing. I had 30 years with them, but I could see the writing on the wall so I got out. I landed an exec job with a biotech company some years later and inherited an IBM consulting team that were already engaged. I reviewed their work for 2 months then had the pleasure of terminating the contract and actually escorting the team off the premises because the work product was so awful.

They actually did a presentation of their interim results -- but it was a 52-slide package that they had presented to me in my previous job, with only the names and numbers changed.

DarthVaderMentor dauwkus , Thursday, April 5, 2018 4:43 PM

Intellectual Capital Re-Use! LOL! Not many people in IBM realize that many, if not all, of the original IBM Consulting Group materials were made under the Type 2 Materials clause of the IBM contract, which means the customers actually owned the IP rights to the documents. Can you imagine the mess if just one customer demands to get paid for every re-use of the IP that was developed for them and then re-used over and over again?
NoGattaca dauwkus , Monday, May 7, 2018 5:37 PM
Beautiful! Yeah, these companies are so quick to push out experienced people who have dedicated their lives to the firm -- how can you not, given all the hours and commitment it takes -- and they way underestimate the power of the network of those left for dead and their influence in that next career gig. Memories are long... very long when it comes to experiences like this.
davosil North_40 , Sunday, March 25, 2018 5:19 PM
True dat! Every former or prospective IBM employee is a potential future IBM customer or partner. How you treat them matters!
Playing Defense North_40 , Tuesday, April 3, 2018 4:41 PM
I advise IBM customers now. My biggest professional achievements can be measured in how much revenue IBM lost by my involvement - millions. Favorite is when IBM paid customer to stop the bleeding.

[Oct 30, 2018] It's all about making the numbers so the management can present a Potemkin Village of profits and ever-increasing growth sufficient to get bonuses. There is no relation to any sort of quality or technological advancement, just HR 3-card monte

Notable quotes:
"... It's no coincidence whatsoever that Diane Gherson, mentioned prominently in the article, blasted out an all-employees email crowing about IBM being a great place to work according to (ahem) LinkedIn. I desperately want to post a link to this piece in the corporate Slack, but that would get me fired immediately instead of in a few months at the next "resource action." It's been a whole 11 months since our division had one, so I know one is coming soon. ..."
"... I used to say when I was there that: "After every defeat, they pin medals on the generals and shoot the soldiers". ..."
"... 1990 is also when H-1B visa rules were changed so that companies no longer had to even attempt to hire an American worker as long as the job paid $60,000, which hasn't changed since. This article doesn't even mention how our work visa system facilitated and even rewarded this abuse of Americans. ..."
"... Well, starting in the 1980s, the American management was allowed by Reagan to get rid of its workforce. ..."
"... It's all about making the numbers so the management can present a Potemkin Village of profits and ever-increasing growth sufficient to get bonuses. There is no relation to any sort of quality or technological advancement, just HR 3-card monte. They have installed air bearing in Old Man Watson's coffin as it has been spinning ever faster ..."
"... Corporate America executive management is all about stock price management. Their bonus's in the millions of dollars are based on stock performance. With IBM's poor revenue performance since Ginny took over, profits can only be maintained by cost reduction. Look at the IBM executive's bonus's throughout the last 20 years and you can see that all resource actions have been driven by Palmisano's and Rominetty's greed for extravagant bonus's. ..."
"... Also worth noting is that IBM drastically cut the cap on it's severance pay calculation. Almost enough to make me regret not having retired before that changed. ..."
"... Yeah, severance started out at 2 yrs pay, went to 1 yr, then to 6 mos. and is now 1 month. ..."
"... You need to investigate AT&T as well, as they did the same thing. I was 'sold' by IBM to AT&T as part of he Network Services operation. AT&T got rid of 4000 of the 8000 US employees sent to AT&T within 3 years. Nearly everyone of us was a 'senior' employee. ..."
Oct 30, 2018 | disqus.com

dragonflap • 7 months ago

I'm a 49-year-old SW engineer who started at IBM as part of an acquisition in 2000. I got laid off in 2002 when IBM started sending reqs to Bangalore in batches of thousands. After various adventures, I rejoined IBM in 2015 as part of the "C" organization referenced in the article.

It's no coincidence whatsoever that Diane Gherson, mentioned prominently in the article, blasted out an all-employees email crowing about IBM being a great place to work according to (ahem) LinkedIn. I desperately want to post a link to this piece in the corporate Slack, but that would get me fired immediately instead of in a few months at the next "resource action." It's been a whole 11 months since our division had one, so I know one is coming soon.

Stewart Dean • 7 months ago

The lead-in to this piece makes it sound like IBM was forced into these practices by inescapable forces. I'd say not, rather that it pursued them because a) the management was clueless about how to lead IBM in the new environment and new challenges so b) it started to play with numbers to keep the (apparent) profits up....to keep the bonuses coming. I used to say when I was there that: "After every defeat, they pin medals on the generals and shoot the soldiers".

And then there's the Pig with the Wooden Leg shaggy dog story that ends with the punch line, "A pig like that you don't eat all at once", which has a lot of the flavor of how many of us saw our jobs at IBM die a slow death.

IBM is about to fall out of the sky, much as General Motors did. How could that happen? By endlessly beating the cow to get more milk.

IBM was hiring right through the Great Depression, such that It Did Not Pay Unemployment Insurance, because it never laid people off. Until about 1990, your manager was responsible for making sure you had everything you needed to excel and grow... and you would find people who had started on the loading dock and had become Senior Programmers. But then about 1990, IBM started paying unemployment insurance... just out of the goodness of its heart. Right.

CRAW Stewart Dean • 7 months ago

1990 is also when H-1B visa rules were changed so that companies no longer had to even attempt to hire an American worker as long as the job paid $60,000, which hasn't changed since. This article doesn't even mention how our work visa system facilitated and even rewarded this abuse of Americans.

DDRLSGC Stewart Dean • 7 months ago

Well, starting in the 1980s, the American management was allowed by Reagan to get rid of its workforce.

Georgann Putintsev Stewart Dean • 7 months ago

I found that other Ex-IBMers respect other Ex-IBMers' work ethics, knowledge and initiative.

Other companies are happy to get them as a valuable resource. In '89 when our Palo Alto Datacenter moved, we were given three options: 1) become a Programmer (w/training), 2) move to Boulder, or 3) leave.

I got my training with programming experience and left IBM in '92, when for 4 yrs IBM offered really good incentives for leaving the company. The Executives thought that the IBM Mainframe/MVS z/OS+ was on the way out and the Laptop (Small but Increasing Capacity) Computer would take over everything.

It didn't. It did allow many skilled IBMers to succeed outside of IBM and help build up our customer skill sets. And like many, when the opportunity arose to return, I did. In '91 I was accidentally given a male co-worker's paycheck, and that was one of the reasons for leaving. During my various contract work outside, I bumped into other male IBMers who had left too, some of whom I had trained, and when they disclosed that their salary (which was 20-40% higher than mine) was the reason they left, I knew I had made the right decision.

Women tend to under-value themselves and their capabilities. Contracting also taught me that at companies with 70% employees and 30% contractors, contractors would be let go if quarterly expenditures were exceeded.

I first contracted with IBM in '98, and when I decided to re-join IBM in '01, I had three job offers and took the most lucrative and exciting one, focusing on fixing & improving DB2z Qry Parallelism. I developed a targeted L3 Technical Change Team to help L2 Support reduce reported customer problems and improve our product. The instability within IBM remained, and I saw IBM try to eliminate aging, salaried, benefited employees. The routine of 1) find a job within IBM ... or 2) leave ... was now standard.

While my salary had more than doubled since I left IBM the first time, it still wasn't near that of other male counterparts. There was continual rating competition based on salary-ranged titles, and title raises were timed after a round of layoffs, not before. I had another advantage going: my changed, reduced retirement benefits helped me stay there. It all comes down to the numbers that Mgmt is told to cut to save IBM. While much of this article implies others were hired, at our Silicon Valley location and other locations they had no intent to backfill. So the already burdened employees were laden with more workloads & stress.

In the early to mid 2000's IBM set up a counter lab in China where they were paying 1/4 of U.S. salaries, and many SVL IBMers went to CSDL to train our new worldwide 24x7 support employees. But many were not IBM-loyal and their attrition rates were very high, so it fell to a wave of new hires at SVL to help address it.

Stewart Dean Georgann Putintsev • 7 months ago

It's all about making the numbers so the management can present a Potemkin Village of profits and ever-increasing growth sufficient to get bonuses. There is no relation to any sort of quality or technological advancement, just HR 3-card monte. They have installed air bearing in Old Man Watson's coffin as it has been spinning ever faster

IBM32_retiree • 7 months ago ,

Corporate America executive management is all about stock price management. Their bonuses in the millions of dollars are based on stock performance. With IBM's poor revenue performance since Ginni took over, profits can only be maintained by cost reduction. Look at the IBM executives' bonuses throughout the last 20 years and you can see that all resource actions have been driven by Palmisano's and Rometty's greed for extravagant bonuses.

Dan Yurman • 7 months ago

Bravo ProPublica for another "sock it to them" article -- journalism in honor of the spirit of great newspapers everywhere, that the refuge of justice in hard times is with the press.

Felix Domestica • 7 months ago

Also worth noting is that IBM drastically cut the cap on its severance pay calculation. Almost enough to make me regret not having retired before that changed.

RonF Felix Domestica • 7 months ago

Yeah, severance started out at 2 yrs pay, went to 1 yr, then to 6 mos. and is now 1 month.

mjmadfis RonF • 7 months ago

When I was let go in June 2013 it was 6 months severance.

Terry Taylor • 7 months ago

You need to investigate AT&T as well, as they did the same thing. I was 'sold' by IBM to AT&T as part of the Network Services operation. AT&T got rid of 4000 of the 8000 US employees sent to AT&T within 3 years. Nearly every one of us was a 'senior' employee.

weelittlepeople Terry Taylor • 7 months ago

Good Ol' Ma Bell is following the IBM playbook to a tee

emnyc • 7 months ago

ProPublica deserves a Pulitzer for this article and all the extensive research that went into this investigation.

Incredible job! Congrats.

On a separate note, IBM should be ashamed of themselves and the executive team that enabled all of this should be fired.

WmBlake • 7 months ago

As a permanent old contractor and free-enterprise defender myself, I don't blame IBM a bit for wanting to cut the fat. But the outright *lies, deception and fraud* that they use to break laws and weasel out of obligations... really just make me want to shoot them... and I never even worked for them.

Michael Woiwood • 7 months ago

Great Article.

Where I worked, in Rochester, MN, people have known what is happening for years. My last years with IBM were the most depressing time in my life.

I hear a rumor that IBM would love to close plants they no longer use, but they are so environmentally polluted that it is cheaper to maintain them than to clean them up and sell.

scorcher14 • 7 months ago

One of the biggest driving factors in age discrimination is health insurance costs, not salary. It can cost 4-5x as much to insure an older employee vs. a younger one, and employers know this. THE #1 THING WE CAN DO TO STOP AGE DISCRIMINATION IS TO MOVE AWAY FROM OUR EMPLOYER-PROVIDED INSURANCE SYSTEM. It could be single-payer, but it could also be a robust individual market with enough pool diversification to make it viable. Freeing employers from this cost burden would allow them to pick the right talent regardless of age.

DDRLSGC scorcher14 • 7 months ago

American businesses have constantly fought against single payer since the end of World War II, so why should I feel sorry for them when, all of a sudden, they are complaining about health care costs? It is outrageous that workers have to face age discrimination; however, the CEOs don't have to deal with that issue since they belong to a tiny group of people who can land a job anywhere else.

pieinthesky scorcher147 months ago ,

Single payer won't help. We have single payer in Canada and just as much age discrimination in employment. Society in general does not like older people so unless you're a doctor, judge or pharmacist you will face age bias. It's even worse in popular culture never mind in employment.

OrangeGina scorcher147 months ago ,

I agree. Yet, a determined company will find other methods, explanations and excuses.

JohnCordCutter7 months ago ,

Thanks for the great article. I left IBM last year. USA based. 49. Product Manager in one of IBM's strategic initiatives; however, I got told to relocate or leave. I found another job and left. I came to IBM from an acquisition. My only regret is, I wish I had left this toxic environment earlier. It truly is a dreadful place to work.

60 Soon • 7 months ago ,

The methodology has trickled down to smaller companies pursuing the same net results for headcount reduction. The similarities to my experience were painful to read. The grief I felt after my job was "eliminated" 10 years ago, while the Recession was at its worst and shortly after my 50th birthday, came flooding back. I have never recovered financially, but I have started writing a murder mystery. The first victim? The CEO who let me go. It's true. Revenge is best served cold.

donttreadonme97 months ago ,

Well written. People like me have experienced exactly what you wrote. IBM is a shadow of its former greatness, and I have advised my children to stay away from IBM and companies like it as they start their careers. IBM is a corrupt company. Shame on them!

annapurna7 months ago ,

I hope they find some way to bring a class action lawsuit against these assholes.

Mark annapurna7 months ago ,

I suspect someone will end up hunting them down with an axe at some point. That's the only way they'll probably learn. I don't know about IBM specifically, but when Carly Fiorina ran HP, she travelled with, and even went into engineering labs with, an armed security detail.

OrangeGina Mark7 months ago ,

all the bigwig CEOs have these black SUV security details now.

Sarahw7 months ago ,

IBM has been using these tactics at least since the 1980s, when my father was let go for similar 'reasons.'

Vin7 months ago ,

Was let go after 34 years of service. My Resource Action letter had additional lines after '...unless you are offered ... position within IBM before that date.', implying don't even try to look for a position. The lines were: 'Additional business controls are in effect to manage the business objectives of this resource action, therefore, job offers within (the name of division) will be highly unlikely.'

Mark Vin7 months ago ,

Absolutely and utterly disgusting.

Greybeard7 months ago ,

I've worked for a series of vendors for over thirty years. A job at IBM used to be the brass ring; nowadays, not so much.

I've heard persistent rumors from IBMers that U.S. headcount is below 25,000 nowadays. Given events like the recent downtime of the internal systems used to order parts (5 or so days--website down because staff who maintained it were let go without replacements), it's hard not to see the spiral continue down the drain.

What I can't figure out is whether Rometty and cronies know what they're doing or are just clueless. Either way, the result is the same: destruction of a once-great company and brand. Tragic.

ManOnTheHill Greybeard7 months ago ,

Well, none of these layoffs/ageist RIFs affect the execs, so they don't see the effects, or they see the effects but attribute them to some other cause.

(I'm surprised the article doesn't address this part of the story; how many affected by layoffs are exec/senior management? My bet is very few.)

ExIBMExec ManOnTheHill7 months ago ,

I was a D-banded exec (Director-level) who was impacted and I know even some VPs who were affected as well, so they do spread the pain, even in the exec ranks.

ManOnTheHill ExIBMExec7 months ago ,

That's different than I have seen in companies I have worked for (like HP). There RIFs (Reduction In Force, their acronym for layoff) went to the director level and no further up.

[Oct 30, 2018] There are plenty of examples of people who were doing their jobs, IN SPADES, putting in tons of unpaid overtime, and generally doing whatever was humanly possible to make sure that whatever was promised to the customer was delivered within their span of control. As they grew older corporations threw them out like an empty can

Notable quotes:
"... The other alternative is a market-based life that, for many, will be cruel, brutish, and short. ..."
Oct 30, 2018 | features.propublica.org

Lorilynn King

Step back and think about this for a minute. There are plenty of examples of people who were doing their jobs, IN SPADES, putting in tons of unpaid overtime, and generally doing whatever was humanly possible to make sure that whatever was promised to the customer was delivered (within their span of control... I'm not going to get into a discussion of how IBM pulls the rug out from underneath contracts after they've been signed).

These people were, and still are, high performers, they are committed to the job and the purpose that has been communicated to them by their peers, management, and customers; and they take the time (their OWN time) to pick up new skills and make sure that they are still current and marketable. They do this because they are committed to doing the job to the best of their ability.... it's what makes them who they are.

IBM (and other companies) are firing these very people ***for one reason and one reason ONLY***: their AGE. They have the skills and they're doing their jobs. If the same person was 30 you can bet that they'd still be there. Most of the time it has NOTHING to do with performance or lack of concurrency. Once the employee is fired, the job is done by someone else. The work is still there, but it's being done by someone younger and/or of a different nationality.

The money that is being saved by these companies has to come from somewhere. People that are having to withdraw their retirement savings 20 or so years earlier than planned are going to run out of funds.... and when they're in nursing homes, guess who is going to be supporting them? Social security will be long gone, their kids have their own monetary challenges.... so it will be government programs.... maybe.

This is not just a problem that impacts the 40 and over crowd. This is going to impact our entire society for generations to come.

NoPolitician
The business reality you speak of can be tempered via government actions. A few things:
  • One of the major hardships here is laying someone off when they need income the most - to pay for their children's college education. To mitigate this, as a country we could make a public education free. That takes off a lot of the sting, some people might relish a change in career when they are in their 50s except that the drop in salary is so steep when changing careers.
  • We could lower the retirement age to 55 and increase Social Security to more than a poverty-level existence. Being laid off when you're 50 or 55 - with little chance to be hired anywhere else - would not hurt as much.
  • We could offer federal wage subsidies for older workers to make them more attractive to hire. While some might see this as a thumb on the scale against younger workers, in reality it would be simply a counterweight to the thumb that is already there against older workers.
  • Universal health care equalizes the cost of older and younger workers.

The other alternative is a market-based life that, for many, will be cruel, brutish, and short.

[Oct 30, 2018] Elimination of loyalty: what corporations cloak as weeding out the low performers transparently reveals catching the older workers in the net as well.

Oct 30, 2018 | features.propublica.org

Great White North, Thursday, March 22, 2018 11:29 PM

There's not a word of truth quoted in this article. That is, quoted from IBM spokespeople. It's the culture there now. They don't even realize that most of their customers have become deaf to the same crap from their Sales and Marketing BS, which is even worse than their HR speak.

The sad truth is that IBM became incapable of taking its innovation (IBM is indeed a world beating, patent generating machine) to market a long time ago. It has also lost the ability (if it ever really had it) to acquire other companies and foster their innovation either - they ran most into the ground. As a result, for nearly a decade revenues have declined and resource actions grown. The resource actions may seem to be the ugly problem, but they're only the symptom of a fat greedy and pompous bureaucracy that's lost its ability to grow and stay relevant in a very competitive and changing industry. What they have been able to perfect and grow is their ability to downsize and return savings as dividends (Big Sam Palmisano's "innovation"). Oh, and for senior management to line their pockets.

Nothing IBM is currently doing is sustainable.

If you're still employed there, listen to the pain in the words of your fallen comrades and don't knock yourself out trying to stay afloat. Perhaps learn some BS of your own and milk your job (career? not...) until you find freedom and better pastures.

If you own stock, do like Warren Buffett, and sell it while it still has some value.

Danllo , Thursday, March 22, 2018 10:43 PM
This is NOTHING NEW! All major corporations have done and will do this at some point in their existence. Another industry that does this regularly, every 3 to 5 years, is the pharmaceutical industry. They'll decimate their sales forces in order to, as they like to put it, "right size" the company.

They'll cloak it as weeding out the low performers, but they'll try to catch the "older" workers in the net as well.

[Oct 30, 2018] American companies pay health insurance premiums based on their specific employee profiles

Notable quotes:
"... As long as companies pay for their employees' health insurance they will have an incentive to fire older employees. ..."
"... The answer is to separate health insurance from employment. Companies can't be trusted. Not only health care, but retirement is also sorely abused by corporations. All the money should be in protected employee based accounts. ..."
Oct 30, 2018 | features.propublica.org

sometimestheyaresomewhatright , Thursday, March 22, 2018 4:13 PM

American companies pay health insurance premiums based on their specific employee profiles. Insurance companies compete with each other for the business, but costs are actual. And based on the profile of the pool of employees. So American companies fire older workers just to lower the average age of their employees. Statistically this is going to lower their health care costs.

As long as companies pay for their employees' health insurance they will have an incentive to fire older employees. They have an incentive to fire sick employees and employees with genetic risks. Those are harder to implement as ways to lower costs. Firing older employees is simple to do, just look up their ages.

The answer is to separate health insurance from employment. Companies can't be trusted. Not only health care, but retirement is also sorely abused by corporations. All the money should be in protected employee based accounts.

By the way, most tech companies are actually run by older people. The goal is to broom out mid-level people based on age. Nobody is going to suggest to a sixty year old president that they should self fire, for the good of the company.

[Oct 30, 2018] Cutting Old Heads at IBM by Peter Gosselin and Ariana Tobin

Mar 22, 2018 | features.propublica.org

This story was co-published with Mother Jones.

For nearly a half century, IBM came as close as any company to bearing the torch for the American Dream.

As the world's dominant technology firm, payrolls at International Business Machines Corp. swelled to nearly a quarter-million U.S. white-collar workers in the 1980s. Its profits helped underwrite a broad agenda of racial equality, equal pay for women and an unbeatable offer of great wages and something close to lifetime employment, all in return for unswerving loyalty.

But when high tech suddenly started shifting and companies went global, IBM faced the changing landscape with a distinction most of its fiercest competitors didn't have: a large number of experienced and aging U.S. employees.

The company reacted with a strategy that, in the words of one confidential planning document, would "correct seniority mix." It slashed IBM's U.S. workforce by as much as three-quarters from its 1980s peak, replacing a substantial share with younger, less-experienced and lower-paid workers and sending many positions overseas. ProPublica estimates that in the past five years alone, IBM has eliminated more than 20,000 American employees ages 40 and over, about 60 percent of its estimated total U.S. job cuts during those years.

In making these cuts, IBM has flouted or outflanked U.S. laws and regulations intended to protect later-career workers from age discrimination, according to a ProPublica review of internal company documents, legal filings and public records, as well as information provided via interviews and questionnaires filled out by more than 1,000 former IBM employees.

Among ProPublica's findings, IBM:

  • Denied older workers information the law says they need in order to decide whether they've been victims of age bias, and required them to sign away the right to go to court or join with others to seek redress.
  • Targeted people for layoffs and firings with techniques that tilted against older workers, even when the company rated them high performers. In some instances, the money saved from the departures went toward hiring young replacements.
  • Converted job cuts into retirements and took steps to boost resignations and firings. The moves reduced the number of employees counted as layoffs, where high numbers can trigger public disclosure requirements.
  • Encouraged employees targeted for layoff to apply for other IBM positions, while quietly advising managers not to hire them and requiring many of the workers to train their replacements.
  • Told some older employees being laid off that their skills were out of date, but then brought them back as contract workers, often for the same work at lower pay and fewer benefits.

IBM declined requests for the numbers or age breakdown of its job cuts. ProPublica provided the company with a 10-page summary of its findings and the evidence on which they were based. IBM spokesman Edward Barbini said that to respond the company needed to see copies of all documents cited in the story, a request ProPublica could not fulfill without breaking faith with its sources. Instead, ProPublica provided IBM with detailed descriptions of the paperwork. Barbini declined to address the documents or answer specific questions about the firm's policies and practices, and instead issued the following statement:

"We are proud of our company and our employees' ability to reinvent themselves era after era, while always complying with the law. Our ability to do this is why we are the only tech company that has not only survived but thrived for more than 100 years."

With nearly 400,000 people worldwide, and tens of thousands still in the U.S., IBM remains a corporate giant. How it handles the shift from its veteran baby-boom workforce to younger generations will likely influence what other employers do. And the way it treats its experienced workers will eventually affect younger IBM employees as they too age.

Fifty years ago, Congress made it illegal with the Age Discrimination in Employment Act , or ADEA, to treat older workers differently than younger ones with only a few exceptions, such as jobs that require special physical qualifications. And for years, judges and policymakers treated the law as essentially on a par with prohibitions against discrimination on the basis of race, gender, sexual orientation and other categories.

In recent decades, however, the courts have responded to corporate pleas for greater leeway to meet global competition and satisfy investor demands for rising profits by expanding the exceptions and shrinking the protections against age bias .

"Age discrimination is an open secret like sexual harassment was until recently," said Victoria Lipnic, the acting chair of the Equal Employment Opportunity Commission, or EEOC, the independent federal agency that administers the nation's workplace anti-discrimination laws.

"Everybody knows it's happening, but often these cases are difficult to prove" because courts have weakened the law, Lipnic said. "The fact remains it's an unfair and illegal way to treat people that can be economically devastating."

Many companies have sought to take advantage of the court rulings. But the story of IBM's downsizing provides an unusually detailed portrait of how a major American corporation systematically identified employees to coax or force out of work in their 40s, 50s and 60s, a time when many are still productive and need a paycheck, but face huge hurdles finding anything like comparable jobs.

The dislocation caused by IBM's cuts has been especially great because until recently the company encouraged its employees to think of themselves as "IBMers" and many operated under the assumption that they had career-long employment.

When the ax suddenly fell, IBM provided almost no information about why an employee was cut or who else was departing, leaving people to piece together what had happened through websites, listservs and Facebook groups such as "Watching IBM" or "Geographically Undesirable IBM Marketers," as well as informal support groups.

Marjorie Madfis, at the time 57, was a New York-based digital marketing strategist and 17-year IBM employee when she and six other members of her nine-person team -- all women in their 40s and 50s -- were laid off in July 2013. The two who remained were younger men.

Since her specialty was one that IBM had said it was expanding, she asked for a written explanation of why she was let go. The company declined to provide it.

"They got rid of a group of highly skilled, highly effective, highly respected women, including me, for a reason nobody knows," Madfis said in an interview. "The only explanation is our age."

Brian Paulson, also 57, a senior manager with 18 years at IBM, had been on the road for more than a year overseeing hundreds of workers across two continents as well as hitting his sales targets for new services, when he got a phone call in October 2015 telling him he was out. He said the caller, an executive who was not among his immediate managers, cited "performance" as the reason, but refused to explain what specific aspects of his work might have fallen short.

It took Paulson two years to land another job, even though he was equipped with an advanced degree, continuously employed at high-level technical jobs for more than three decades and ready to move anywhere from his Fairview, Texas, home.

"It's tough when you've worked your whole life," he said. "The company doesn't tell you anything. And once you get to a certain age, you don't hear a word from the places you apply."

Paul Henry, a 61-year-old IBM sales and technical specialist who loved being on the road, had just returned to his Columbus home from a business trip in August 2016 when he learned he'd been let go. When he asked why, he said an executive told him to "keep your mouth shut and go quietly."

Henry was jobless more than a year, ran through much of his savings to cover the mortgage and health insurance and applied for more than 150 jobs before he found a temporary slot.

"If you're over 55, forget about preparing for retirement," he said in an interview. "You have to prepare for losing your job and burning through every cent you've saved just to get to retirement."

IBM's latest actions aren't anything like what most ex-employees with whom ProPublica talked expected from their years of service, or what today's young workers think awaits them -- or are prepared to deal with -- later in their careers.

"In a fast-moving economy, employers are always going to be tempted to replace older workers with younger ones, more expensive workers with cheaper ones, those who've performed steadily with ones who seem to be up on the latest thing," said Joseph Seiner, an employment law professor at the University of South Carolina and former appellate attorney for the EEOC.

"But it's not good for society," he added. "We have rules to try to maintain some fairness in our lives, our age-discrimination laws among them. You can't just disregard them."

[Oct 30, 2018] How do you say "Red Hat" in Hindi??

Oct 30, 2018 | theregister.co.uk

[Oct 30, 2018] IBM must be borrowing a lot of cash to fund the acquisition. At last count it had about $12B in the bank. Layoffs are imminent in such a situation, as elimination of headcount is one of the ways to justify the price paid

Oct 30, 2018 | theregister.co.uk

Anonymous Coward 1 day

Borrowing $ at low rates

IBM must be borrowing a lot of cash to fund the acquisition. At last count it had about $12B in the bank... https://www.marketwatch.com/investing/stock/ibm/financials/balance-sheet.

Unlike everyone else - https://www.thestreet.com/story/14513643/1/apple-microsoft-google-are-sitting-on-crazy-amounts-of-cash.html

Jove
Over-paid ...

Looking at the Red Hat numbers, I would not want to be an existing IBM share-holder this morning; both companies missing market expectations and in need of each other to get out of the rut.

It is going to take a lot of effort to make that 63% premium pay off. If it does not pay off pretty quickly, the existing Red Hat leadership will be gone in 18 months.

P.S.

Apparently this is going to be financed by a mixture of cash and debt - increasing IBM's existing debt by nearly 50%. Possible credit rating downgrade on the way?

steviebuk
Goodbye...

...Red Hat.

No doubt IBM will scare off all the decent employees that make it what it is.

SecretSonOfHG
RH employees will start to jump ship

As soon as they have a minimum of experience with the terrible IBM change management processes, the many layers of bureaucracy and management involved, and the zero or negative value they add to anything at all.

IBM is a sinking ship, the only question being how long it will take to happen. Anyone thinking RH has any future other than to languish and disappear under IBM management is delusional. Or an IBM stock owner.

Jove
Product lines EoL ...

What gets the chop because it either does not fit in with the Hybrid-Cloud model, or does not generate sufficient margin?

cloth
But *Why* did they buy them?

I'm still trying to figure out "why" they bought Red hat.

The only thing my not insignificant google trawling can find me is that Red Hat sell to the likes of Microsoft and google - now, that *is* interesting. IBM seem to be saying that they can't compete directly but they will sell upwards to their overlords - no ?

Anonymous Coward

Re: But *Why* did they buy them?

As far as I can tell, it is be part of IBM's cloud (or hybrid cloud) strategy. RH have become/are becoming increasingly successful in this arena.

If I was being cynical, I would also say that it will enable IBM to put the RH brand and appropriate open source soundbites on the table for deal-making and sales with or without the RH workforce and philosophy. Also, RH's subscription base must figure greatly here - a list of perfect customers ripe for "upselling".

bazza

Re: But *Why* did they buy them?

I'm fairly convinced that it's because of who uses RedHat. Certainly a lot of financial institutions do, they're in the market for commercial support (the OS cost itself is irrelevant). You can tell this by looking at the prices RedHat were charging for RedHat MRG - beloved by the high speed share traders. To say eye-watering, PER ANNUM too, is an understatement. You'd have to have got deep pockets before such prices became ignorable.

IBM is a business services company that just happens to make hardware and write OSes. RedHat has a lot of customers interested in business services. The ones I think who will be kicking themselves are Hewlett Packard (or whatever they're called these days).

tfb
Re: But *Why* did they buy them?

Because AIX and RHEL are the two remaining enterprise unixoid platforms (Solaris & HPUX are moribund and the other players are pretty small). Now both of those are owned by IBM: they now own the enterprise unixoid market.

theblackhand
Re: But *Why* did they buy them?

"I'm still trying to figure out "why" they bought Red hat."

What they say? It somehow helps them with cloud. Doesn't sound like much money there - certainly not enough to justify the significant increase in debt (~US$17B).

What could it be then? Well RedHat pushed up support prices and their customers didn't squeal much. A lot of those big enterprise customers moved from expensive hardware/expensive OS support over the last ten years to x86 with much cheaper OS support so there's plenty of scope for squeezing more.

[Oct 29, 2018] If I (hypothetically) worked for a company acquired by Big Blue

Oct 29, 2018 | arstechnica.com

MagicDot / Ars Praetorian reply 6 hours ago

If I (hypothetically) worked for a company acquired by Big Blue, I would offer the following:

  • Say hello to good salaries.
  • Say goodbye to perks, bonuses, and your company culture...oh, and you can't work from home anymore.

...but this is all hypothetical.

Belisarius , Ars Tribunus Angusticlavius et Subscriptor 5 hours ago

sviola wrote:
I can see what each company will get out of the deal and how they might potentially benefit. However, Red Hat's culture is integral to their success. Both company CEOs were asked today at an all-hands meeting about how they intend to keep the promise of remaining distinct and maintaining the RH culture without IBM suffocating it. Nothing is supposed to change (for now), but IBM has a track record of driving successful companies and open dynamic cultures into the ground. Many, many eyes will be watching this.

Hopefully IBM current top Brass will be smart and give some autonomy to Red Hat and leave it to its own management style. Of course, that will only happen if they deliver IBM goals (and that will probably mean high double digit y2y growth) on regular basis...

One thing is sure, they'll probably kill any overlapping products in the medium term (who will survive between JBoss and WebSphere is an open bet).

(On a dream side note, maybe, just maybe they'll move some of their software development to Red Hat)

Good luck. Every CEO thinks they're the latest incarnation of Adam Smith, and they're all dying to be seen as "doing something." Doing nothing, while sometimes a really smart thing and oftentimes the right thing to do, isn't looked upon favorably these days in American business. IBM will definitely do something with Red Hat; it's just a matter of what.

[Oct 29, 2018] IBM to acquire software company Red Hat for $34 billion

Oct 29, 2018 | finance.yahoo.com

BIG BLUE

IBM was founded in 1911 and is known in the technology industry as Big Blue, a reference to its once ubiquitous blue computers. It has faced years of revenue declines, as it transitions its legacy computer maker business into new technology products and services. Its recent initiatives have included artificial intelligence and business lines around Watson, named after the supercomputer it developed.

To be sure, IBM is no stranger to acquisitions. It acquired cloud infrastructure provider Softlayer in 2013 for $2 billion, and the Weather Channel's data assets for more than $2 billion in 2015. It also acquired Canadian business software maker Cognos in 2008 for $5 billion.

Other big technology companies have also recently sought to reinvent themselves through acquisitions. Microsoft this year acquired open source software platform GitHub for $7.5 billion; chip maker Broadcom Inc agreed to acquire software maker CA Inc for nearly $19 billion; and Adobe Inc agreed to acquire marketing software maker Marketo for $5 billion.

One of IBM's main competitors, Dell Technologies Inc, made a big bet on software and cloud computing two years ago, when it acquired data storage company EMC for $67 billion. As part of that deal, Dell inherited an 82 percent stake in virtualization software company VMware Inc.

The deal between IBM and Red Hat is expected to close in the second half of 2019. IBM said it planned to suspend its share repurchase program in 2020 and 2021 to help pay for the deal.

IBM said Red Hat would continue to be led by Red Hat CEO Jim Whitehurst and Red Hat's current management team. It intends to maintain Red Hat's headquarters, facilities, brands and practices.

[Oct 28, 2018] In Desperation Move, IBM Buys Red Hat For $34 Billion In Largest Ever Acquisition

Oct 28, 2018 | www.zerohedge.com

In what can only be described as a desperation move, IBM announced that it would acquire Linux distributor Red Hat for a whopping $33.4 billion, its biggest purchase ever, as the company scrambles to catch up to the competition and to boost its flagging cloud sales. Still hurting from its Q3 earnings , which sent its stock tumbling to the lowest level since 2010 after Wall Street was disappointed by yet another quarter of declining revenue...

... IBM will pay $190 per share for the Raleigh, NC-based Red Hat, a 63% premium to the company's stock price, which closed at $116.68 on Friday, down 3% on the year.

In the statement, IBM CEO Ginni Rometty said that "the acquisition of Red Hat is a game-changer. It changes everything about the cloud market," but what the acquisition really means is that the company has thrown in the towel in years of accounting gimmicks and attempts to paint lipstick on a pig with the help of ever lower tax rates and pro forma addbacks, and instead will now "kitchen sink" its endless income statement troubles and non-GAAP adjustments in the form of massive purchase accounting tricks for the next several years.

While Rometty has been pushing hard to transition the 107-year-old company into modern businesses such as the cloud, AI and security software, the company's recent improvements had been largely from IBM's legacy mainframe business, rather than its so-called strategic imperatives. Meanwhile, revenues have continued to shrink, and after a brief rebound, sales dipped once again this quarter, after an unprecedented period of 22 consecutive quarterly declines starting in 2012, when Rometty took over as CEO.

[Oct 26, 2018] RHCSA Rapid Track course with exam - RH200

The cost is $3,895 USD (Plus all applicable taxes) or 13 Training Units
Oct 08, 2018 | www.redhat.com
Course overview

On completion of course materials, students should be prepared to take the Red Hat Certified System Administrator (RHCSA) exam. This version of the course includes the exam.

Note: This course builds on a student's existing understanding of command-line based Linux system administration. Students should be able to execute common commands using the shell, work with common command options, and access man pages for help. Students lacking this knowledge are strongly encouraged to take Red Hat System Administration I (RH124) and II (RH134) instead.

Course content summary

Outline for this course
  • Accessing the command line: Log in to a Linux system and run simple commands using the shell.
  • Managing files from the command line: Work with files from the bash shell prompt.
  • Managing local Linux users and groups: Manage Linux users and groups and administer local password policies.
  • Controlling access to files with Linux file system permissions: Set access permissions on files and interpret the security effects of different permission settings.
  • Managing SELinux security: Use SELinux to manage access to files and interpret and troubleshoot SELinux security effects.
  • Monitoring and managing Linux processes: Monitor and control processes running on the system.
  • Installing and updating software packages: Download, install, update, and manage software packages from Red Hat and yum package repositories.
  • Controlling services and daemons: Control and monitor network services and system daemons using systemd.
  • Managing Red Hat Enterprise Linux networking: Configure basic IPv4 networking on Red Hat Enterprise Linux systems.
  • Analyzing and storing logs: Locate and interpret relevant system log files for troubleshooting purposes.
  • Managing storage and file systems: Create and use disk partitions, logical volumes, file systems, and swap spaces.
  • Scheduling system tasks: Schedule recurring system tasks using cron and systemd timer units.
  • Mounting network file systems: Mount network file system (NFS) exports and server message block (SMB) shares from network file servers.
  • Limiting network communication with firewalld: Configure a basic local firewall.
  • Virtualization and kickstart: Manage KVMs and install them with Red Hat Enterprise Linux using Kickstart.

[Oct 17, 2018] How to upgrade Red Hat Linux 6.9 to 7.4 Experiences Sharing by Comnet

Oct 17, 2018 | mycomnet.info

Red Hat is one of many Linux distributions, alongside Ubuntu, CentOS, Fedora, and others. Many servers around the world run Red Hat.

I recently had to upgrade one of our clients from Red Hat Linux 6.8 to 7.4, and I would like to show you how I set up a lab to test the upgrade tools. I recommend that you duplicate the environment of the production server for lab testing.

Please note the following below

· This guide aims to show you the tools Red Hat has given us to upgrade from version 6 to version 7

· The environment is only a VM, which does not reflect the actual environment of our client; therefore, the outcome could differ when upgrading the production server!!!

· I assume you can install Red Hat on your VM, which could be VirtualBox, VMware or any other VM application you are familiar with. (Mine is VirtualBox).

Precaution: Please back up your system before running the upgrade, in case anything happens and you need to do a fresh install of Red Hat 7.4 or roll back to Red Hat 6.9.

1. First thing first, check your current Red Hat version. Mine was Red Hat 6.9

2. Be sure to update your Red Hat 6 to the latest version before attempting the preupgrade tools. So, run 'yum update' to update to the latest Red Hat 6.
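
A rough sketch of the commands behind steps 1 and 2 (the exact package set and output will of course depend on your system and subscription):

[root@localhost ~]# cat /etc/redhat-release    # confirm the starting point, e.g. release 6.9 (Santiago)
[root@localhost ~]# yum update                 # bring RHEL 6 fully up to date
[root@localhost ~]# reboot                     # boot into the newest RHEL 6 kernel before continuing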

If an error shows as below, it is because these tools require an internet connection to reach the Red Hat server.

Be sure to register your subscription carefully again. I have found that for a system that has been running for a long time, like my client's, I had to unregister and register again for 'yum update' to run properly.

Refer to thread on https://access.redhat.com/discussions/3066851?tour=8

3. For some systems, the error might show something like 'It is registered but cannot get updates'. The method is the same: try to run these commands in sequence. Sometimes you don't have to unregister; only refresh and attach --auto might do the trick.

sudo subscription-manager remove --all

sudo subscription-manager unregister

sudo subscription-manager clean

Now you can re-register the system, attach the subscriptions

sudo subscription-manager register

sudo subscription-manager refresh

sudo subscription-manager attach --auto

Note that when you unregister, your server is not down; this only unregisters the Red Hat subscription, meaning you cannot get any updates from them, but your server can still be running.

After that, you should be able to run 'yum update' and download the updates. At the end, you should see the screen below, which means you can now proceed to the upgrade procedure.

4. Then enable your subscription to the repositories of the Preupgrade Assistant.

Then install the Preupgrade Tools
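
The original post shows these two steps as screenshots; below is a hedged sketch of the equivalent commands. The repository IDs and package names are assumptions based on a typical RHEL 6 Server subscription (on some releases the rule set ships as preupgrade-assistant-contents instead), so adjust them to whatever your channel actually provides:

[root@localhost ~]# subscription-manager repos --enable rhel-6-server-extras-rpms --enable rhel-6-server-optional-rpms
[root@localhost ~]# yum -y install preupgrade-assistant preupgrade-assistant-el6toel7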

5. Once you have installed everything, run the preupgrade tool; it will take a while. This tool examines every package in the system and determines whether there are any errors you need to fix before an upgrade. In my experience, I found solutions to most errors by googling, but that may not always work for your environment.

After preupg has finished running, please check the file '/root/preupgrade/result.html', which can be viewed in any browser. You can transfer the file to a computer that has a browser.
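
End to end, that part of the process looks roughly like the sketch below; the workstation host name is only a placeholder for whatever machine you use to view the report:

[root@localhost ~]# preupg
[root@localhost ~]# scp /root/preupgrade/result.html user@workstation:/tmp/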

The file result.html shows all the necessary information about your system before an upgrade. Basically, if you see information like in 5.1 on the screen, you are good to go.

See 5.2 for all the results after running the Preupgrade tool, and be sure to check them all. I found that some of the information is just informational, but please check the 'needs_action' sections carefully.

Go down and you will see specific information about each result. Check the remediation description for any 'needs_action' item and perform the suggested instructions.

6. So, you have checked everything from the Preupgrade tool. Now it's time to start an upgrade.

6.1 Install the upgrade tool

[root@localhost ~]# yum -y install redhat-upgrade-tool

6.2 Disable all active repository

[root@localhost ~]# yum -y install yum-utils

[root@localhost ~]# yum-config-manager --disable \*

Now start the upgrade. I recommend saving the ISO file of Red Hat 7.4 to the server and then issuing the command like below; it's easier. Alternatively, you could use other options like:

--device [DEV]
    Device or mount point of mounted install media. If DEV is omitted, redhat-upgrade-tool will scan all currently-mounted removable devices (for example USB disks and optical media).

--network RELEASEVER
    Online repos. RELEASEVER will be used to replace the $releasever variable if it occurs in some repo URL.
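
For the --network path, the invocation would look roughly like the following. I have not verified this exact command against every tool version, and the repository URL is only a placeholder for your own RHEL 7.4 installation repository; check 'redhat-upgrade-tool --help' for the exact flags shipped with your release:

[root@localhost ~]# redhat-upgrade-tool --network 7.4 --instrepo http://reposerver.example.com/rhel-7.4/Server/x86_64/os/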

[root@localhost /]# cd /root/

[root@localhost ~]# ls
anaconda-ks.cfg  install.log.syslog  preupgrade          rhel-server-7.4-x86_64-dvd.iso

install.log      playground          preupgrade-results

[root@localhost ~]# redhat-upgrade-tool --iso rhel-server-7.4-x86_64-dvd.iso

Then reboot

[root@localhost ~]# reboot

7. The upgrade is now complete. Check your version after the upgrade!!!! Then don't forget to check that your other software and functionality still runs correctly.

[root@localhost ~]# cat /etc/redhat-release

Red Hat Enterprise Linux Server release 7.4 (Maipo)

---------------------------------------------------------------------------------------------------------------------------

I hope this information can guide you through how to upgrade Red Hat Linux 6.9 to 7.4, more or less. I'm also new to Red Hat myself and still have a lot to learn.

Please let me know your experience upgrading your own Red Hat, or if you have any questions I will try my best to help. Thanks for reading!

Reference Sites

1. https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/migration_planning_guide/chap-red_hat_enterprise_linux-migration_planning_guide-upgrading

2. https://access.redhat.com/discussions/3066851?tour=8

[Oct 16, 2018] How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command by Prakash Subramanian

Oct 15, 2018 | www.2daygeek.com
It's an important topic for Linux admins (such a wonderful topic), so everyone must be aware of it and practice how to use it in an efficient way.

In Linux, whenever we install a package which has services or daemons, all of its service scripts ("init & systemd") are added by default, but they won't be enabled.

Hence, we need to enable or disable the service manually if required. There are three major init systems available in Linux which are very famous and still in use.

What is init System?

In Linux/Unix based operating systems, init (short for initialization) is the first process started by the kernel during system boot up.

It holds a process id (PID) of 1. It keeps running in the background continuously until the system is shut down.

Init looks at the /etc/inittab file to decide the Linux run level then it starts all other processes & applications in the background as per the run level.

BIOS, MBR, GRUB and kernel processes are kicked off before hitting the init process as part of the Linux booting process.

Below are the available run levels for Linux (there are seven runlevels, from zero to six):

  • 0 - halt (shut down the system)
  • 1 - single user mode
  • 2 - multi-user mode, without networking
  • 3 - full multi-user mode (text console)
  • 4 - unused / user-definable
  • 5 - full multi-user mode with a graphical display manager
  • 6 - reboot

The three init systems below are widely used in Linux.

What is System V (Sys V)?

System V (Sys V) is one of the first and most traditional init systems for Unix-like operating systems. init is the first process started by the kernel during system boot up, and it's the parent process for everything.

Most Linux distributions started with the traditional init system called System V (Sys V). Over the years, several replacement init systems were released to address design limitations in the standard version, such as launchd, the Service Management Facility, systemd and Upstart.

But systemd has been adopted by several major Linux distributions over the traditional SysV init systems.

What is Upstart?

Upstart is an event-based replacement for the /sbin/init daemon which handles starting of tasks and services during boot, stopping them during shutdown and supervising them while the system is running.

It was originally developed for the Ubuntu distribution, but is intended to be suitable for deployment in all Linux distributions as a replacement for the venerable System-V init.

It was used in Ubuntu from 9.10 to Ubuntu 14.10 and in RHEL 6 based systems; after that it was replaced by systemd.

What is systemd?

systemd is a new init system and system manager which has been adopted by all the major Linux distributions in place of the traditional SysV init systems.

systemd is compatible with SysV and LSB init scripts. It can work as a drop-in replacement for the sysvinit system. systemd is the first process started by the kernel and holds PID 1.

It's the parent process for everything, and Fedora 15 was the first distribution to adopt systemd instead of Upstart. systemctl is the command line utility and primary tool to manage systemd daemons/services, with actions such as start, restart, stop, enable, disable, reload & status.

systemd uses .service files instead of the bash scripts that SysVinit uses. systemd sorts all daemons into their own Linux cgroups, and you can see the system hierarchy by exploring the /cgroup/systemd file.
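
As a quick illustration of the systemctl actions listed above (sshd is only an example unit; substitute any service installed on your system):

# systemctl status sshd.service        # show whether the daemon is currently running
# systemctl enable sshd.service        # start the daemon automatically at boot
# systemctl disable sshd.service       # do not start the daemon at boot
# systemctl is-enabled sshd.service    # print "enabled" or "disabled"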

How to Enable or Disable Services on Boot Using chkconfig Commmand?

The chkconfig utility is a command-line tool that allows you to specify in which runlevel to start a selected service, as well as to list all available services along with their current setting.

Also, it allows us to enable or disable a service at boot. Make sure you have superuser privileges (either root or sudo) to use this command.

All the service scripts are located in /etc/rc.d/init.d.

How to list All Services in run-level

The --list parameter displays all the services along with their current status (in which run levels each service is enabled or disabled).

# chkconfig --list
NetworkManager     0:off    1:off    2:on    3:on    4:on    5:on    6:off
abrt-ccpp          0:off    1:off    2:off    3:on    4:off    5:on    6:off
abrtd              0:off    1:off    2:off    3:on    4:off    5:on    6:off
acpid              0:off    1:off    2:on    3:on    4:on    5:on    6:off
atd                0:off    1:off    2:off    3:on    4:on    5:on    6:off
auditd             0:off    1:off    2:on    3:on    4:on    5:on    6:off
.
.

How to check the Status of Specific Service

If you would like to see the status of a particular service across run levels, then use the following format and grep for the required service.

In this case, we are going to check the auditd service status across run levels.
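
A short sketch of what that looks like; the output line matches the listing shown earlier, and chkconfig also accepts the service name directly:

# chkconfig --list | grep auditd
auditd             0:off    1:off    2:on    3:on    4:on    5:on    6:off

# chkconfig --list auditd
auditd             0:off    1:off    2:on    3:on    4:on    5:on    6:off

To change the boot behaviour of the same service, the classic chkconfig syntax applies:

# chkconfig auditd on      # enable in run levels 2, 3, 4 and 5
# chkconfig auditd off     # disable in all run levels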

[Oct 15, 2018] Breaking News! SUSE Linux Sold for $2.5 Billion It's FOSS by Abhishek Prakash

Acquisition by a private equity shark is never good news for a software vendor...
Jul 03, 2018 | itsfoss.com

British software company Micro Focus International has agreed to sell SUSE Linux and its associated software business to Swedish private equity group EQT Partners for $2.535 billion.

asoc, 4 months ago:

Novell acquired SUSE in 2003 for $210 million.

"It has over 1400 employees all over the globe "
They should be updating their CVs.

[Oct 15, 2018] I honestly, seriously sometimes wonder if systemd is Skynet... or, a way for Skynet to 'waken'.

Oct 15, 2018 | linux.slashdot.org

thegarbz ( 1787294 ) , Sunday August 30, 2015 @04:08AM ( #50419549 )

Re:Hang on a minute... ( Score: 5 , Funny)
I honestly, seriously sometimes wonder if systemd is Skynet... or, a way for Skynet to 'waken'.

Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. At 2:15am it crashes.
No one knows why. The binary log file was corrupted in the process and is unrecoverable. All anyone could remember is a bug listed in the systemd bug tracker talking about su which was classified as WON'T FIX as the developer thought it was a broken concept.

[Oct 15, 2018] Systemd as doord interface for cars ;-) by Nico Schottelius

Notable quotes:
"... Let's say every car manufacturer recently discovered a new technology named "doord", which lets you open up car doors much faster than before. It only takes 0.05 seconds, instead of 1.2 seconds on average. So every time you open a door, you are much, much faster! ..."
"... Many of the manufacturers decide to implement doord, because the company providing doord makes it clear that it is beneficial for everyone. And additional to opening doors faster, it also standardises things. How to turn on your car? It is the same now everywhere, it is not necessarily to look for the keyhole anymore. ..."
"... Unfortunately though, sometimes doord does not stop the engine. Or if it is cold outside, it stops the ignition process, because it takes too long. Doord also changes the way how your navigation system works, because that is totally related to opening doors, but leads to some users being unable to navigate, which is accepted as collateral damage. In the end, you at least have faster door opening and a standard way to turn on the car. Oh, and if you are in a traffic jam and have to restart the engine often, it will stop restarting it after several times, because that's not what you are supposed to do. You can open the engine hood and tune that setting though, but it will be reset once you buy a new car. ..."
Oct 15, 2018 | blog.ungleich.ch

Let's say every car manufacturer recently discovered a new technology named "doord", which lets you open up car doors much faster than before. It only takes 0.05 seconds, instead of 1.2 seconds on average. So every time you open a door, you are much, much faster!

Many of the manufacturers decide to implement doord, because the company providing doord makes it clear that it is beneficial for everyone. And additional to opening doors faster, it also standardises things. How to turn on your car? It is the same now everywhere, it is not necessarily to look for the keyhole anymore.

Unfortunately though, sometimes doord does not stop the engine. Or if it is cold outside, it stops the ignition process, because it takes too long. Doord also changes the way how your navigation system works, because that is totally related to opening doors, but leads to some users being unable to navigate, which is accepted as collateral damage. In the end, you at least have faster door opening and a standard way to turn on the car. Oh, and if you are in a traffic jam and have to restart the engine often, it will stop restarting it after several times, because that's not what you are supposed to do. You can open the engine hood and tune that setting though, but it will be reset once you buy a new car.

[Oct 15, 2018] Future History of Init Systems

Oct 15, 2018 | linux.slashdot.org

AntiSol ( 1329733 ) , Saturday August 29, 2015 @03:52PM ( #50417111 )

Re:Approaching the Singularity ( Score: 4 , Funny)

Future History of Init Systems

Future History of Init Systems
  • 2015: systemd becomes default boot manager in debian.
  • 2017: "complete, from-scratch rewrite" [jwz.org]. In order to not have to maintain backwards compatibility, project is renamed to system-e.
  • 2019: debut of systemf, absorption of other projects including alsa, pulseaudio, xorg, GTK, and opengl.
  • 2021: systemg maintainers make the controversial decision to absorb The Internet Archive. Systemh created as a fork without Internet Archive.
  • 2022: systemi, a fork of systemf focusing on reliability and minimalism becomes default debian init system.
  • 2028: systemj, a complete, from-scratch rewrite is controversial for trying to reintroduce binary logging. Consensus is against the systemj devs as sysadmins remember the great systemd logging bug of 2017 unkindly. Systemj project is eventually abandoned.
  • 2029: systemk codebase used as basis for a military project to create a strong AI, known as "project skynet". Software behaves paradoxically and project is terminated.
  • 2033: systeml - "system lean" - a "back to basics", from-scratch rewrite, takes off on several server platforms, boasting increased reliability. systemm, "system mean", a fork, used in security-focused distros.
  • 2117: critical bug discovered in the long-abandoned but critical and ubiquitous system-r project. A new project, system-s, is announced to address shortcomings in the hundred-year-old codebase. A from-scratch rewrite begins.
  • 2142: systemu project, based on a derivative of systemk, introduces "Artificially intelligent init system which will shave 0.25 seconds off your boot time and absolutely definitely will not subjugate humanity". Millions die. The survivors declare "thou shalt not make an init system in the likeness of the human mind" as their highest law.
  • 2147: systemv - a collection of shell scripts written around a very simple and reliable PID 1 introduced, based on the brand new religious doctrines of "keep it simple, stupid" and "do one thing, and do it well". People's computers start working properly again, something few living people can remember. Wyld Stallyns release their 94th album. Everybody lives in peace and harmony.

[Oct 15, 2018] They should have just rename the machinectl into command.com.

Oct 15, 2018 | linux.slashdot.org

RabidReindeer ( 2625839 ) , Saturday August 29, 2015 @11:38AM ( #50415833 )

What's with all the awkward systemd command names? ( Score: 5 , Insightful)

I know systemd sneers at the old Unix convention of keeping it simple, keeping it separate, but that's not the only convention they spit on. God intended Unix (Linux) commands to be cryptic things 2-4 letters long (like "su", for example). Not "systemctl", "machinectl", "journalctl", etc. Might as well just give everything a 47-character long multi-word command like the old Apple commando shell did.

Seriously, though, when you're banging through system commands all day long, it gets old and their choices aren't especially friendly to tab completion. On top of which why is "machinectl" a shell and not some sort of hardware function? They should have just named the bloody thing command.com.

[Oct 15, 2018] Oh look, another Powershell

Oct 15, 2018 | linux.slashdot.org

Anonymous Coward , Saturday August 29, 2015 @11:37AM ( #50415825 )

Cryptic command names ( Score: 5 , Funny)

Great to see that systemd is finally doing something about all of those cryptic command names that plague the unix ecosystem.

Upcoming systemd re-implementations of standard utilities:

ls to be replaced by filectl directory contents [pathname]
grep to be replaced by datactl file contents search [plaintext] (note: regexp no longer supported as it's ambiguous)
gimp to be replaced by imagectl open file filename draw box [x1,y1,x2,y2] draw line [x1,y1,x2,y2] ...
Anonymous Coward , Saturday August 29, 2015 @11:58AM ( #50415939 )
Re: Cryptic command names ( Score: 3 , Funny)

Oh look, another Powershell

[Oct 14, 2018] Does Systemd Make Linux Complex, Error-Prone, and Unstable

Or in other words, a simple, reliable and clear solution (which has some faults due to its age) was replaced with a gigantic KISS violation. No engineer worth the name will ever do that. And if it needs doing, any good engineer will make damned sure to achieve maximum compatibility and a clean way back. The systemd people seem to be hell-bent on making it as hard as possible to not use their monster. That alone is a good reason to stay away from it.
Oct 14, 2018 | linux.slashdot.org

Reverend Green ( 4973045 ) , Monday December 11, 2017 @04:48AM ( #55714431 )

Re: Does systemd make ... ( Score: 5 , Funny)

Systemd is nothing but a thinly-veiled plot by Vladimir Putin and Beyonce to import illegal German Nazi immigrants over the border from Mexico who will then corner the market in kimchi and implement Sharia law!!!

Anonymous Coward , Monday December 11, 2017 @01:38AM ( #55714015 )

Re:It violates fundamental Unix principles ( Score: 4 , Funny)

The Emacs of the 2010s.

DontBeAMoran ( 4843879 ) , Monday December 11, 2017 @01:57AM ( #55714059 )
Re:It violates fundamental Unix principles ( Score: 5 , Funny)

We are systemd. Lower your memory locks and surrender your processes. We will add your calls and code distinctiveness to our own. Your functions will adapt to service us. Resistance is futile.

serviscope_minor ( 664417 ) , Monday December 11, 2017 @04:47AM ( #55714427 ) Journal
Re:It violates fundamental Unix principles ( Score: 4 , Insightful)

I think we should call systemd the Master Control Program since it seems to like making other programs functions its own.

Anonymous Coward , Monday December 11, 2017 @01:47AM ( #55714035 )
Don't go hating on systemd ( Score: 5 , Funny)

RHEL7 is a fine OS, the only thing it's missing is a really good init system.

[Oct 08, 2018] RHCSA Exam Training by Infinite Skills

For $29.99 they provide a course with 6.5 hours of on demand video
Oct 08, 2018 | www.udemy.com

Description

This Red Hat Certified Systems Administrator Exam EX200 training course from Infinite Skills will teach you everything you need to know to become a Red Hat Certified System Administrator (RHCSA) and pass the EX200 Exam. This course is designed for users that are familiar with Red Hat Enterprise Linux environments.

You will start by learning the fundamentals, such as basic shell commands, creating and modifying users, and changing passwords. The course will then teach you about the shell, explaining how to manage files, use the stream editor, and locate files. This video tutorial will also cover system management, including booting and rebooting, network services, and installing packages. Other topics that are covered include storage management, server management, virtual machines, and security.

Once you have completed this computer based training course, you will be fully capable of taking the RHCSA EX200 exam and becoming a Red Hat Certified System Administrator.

*Infinite Skills has no affiliation with Red Hat, Inc. The Red Hat trademark is used for identification purposes only and is not intended to indicate affiliation with or approval by Red Hat, Inc

[Sep 04, 2018] Unifying custom scripts system-wide with rpm on Red Hat-CentOS

Highly recommended!
Aug 24, 2018 | linuxconfig.org
Objective: Our goal is to build rpm packages with custom content, unifying scripts across any number of systems, including versioning, deployment and undeployment.

Requirements: Privileged access to the system for install, normal access for build.

Difficulty: MEDIUM

Introduction

One of the core features of any Linux system is that it is built for automation. If a task may need to be executed more than once - even with some part of it changing on the next run - a sysadmin is provided with countless tools to automate it, from simple shell scripts run by hand on demand (thus eliminating typo errors, or just saving some keyboard hits) to complex scripted systems where tasks run from cron at a specified time, interacting with each other, working with the result of another script, maybe controlled by a central management system, etc.

While this freedom and rich toolset indeed add to productivity, there is a catch: as a sysadmin, you write a useful script on one system, it proves to be useful on another, so you copy the script over. On a third system the script is useful too, but with minor modification - maybe a new feature useful only on that system, reachable with a new parameter. With generalization in mind, you extend the script to provide the new feature and still complete the task it was originally written for. Now you have two versions of the script: the first is on the first two systems, the second is on the third system.

You have 1024 computers running in the datacenter, and 256 of them will need some of the functionality provided by that script. In time you will have 64 versions of the script all over the place, every version doing its job. On the next system deployment you need a feature you recall coding into some version, but which one? And on which systems is it?

On RPM-based systems, such as Red Hat flavors, a sysadmin can take advantage of the package manager to create order in the custom content, including simple shell scripts that may provide nothing more than the tools the admin wrote for convenience.

In this tutorial we will build a custom rpm for Red Hat Enterprise Linux 7.5 containing two bash scripts, parselogs.sh and pullnews.sh to provide a way that all systems have the latest version of these scripts in the /usr/local/sbin directory, and thus on the path of any user who logs in to the system.




Distributions, major and minor versions

In general, the major and minor version of the build machine should be the same as those of the systems the package is to be deployed on, as should the distribution, to ensure compatibility. If there are various versions of a given distribution, or even different distributions with many versions in your environment (oh, joy!), you should set up build machines for each. To cut the work short, you can just set up a build environment for each distribution and each major version, and have them on the lowest minor version existing in your environment for the given major version. Of course they don't need to be physical machines, and they only need to be running at build time, so you can use virtual machines or containers.

In this tutorial our work is much easier: we only deploy two scripts that have no dependencies at all (except bash), so we will build noarch packages, which stands for "not architecture dependent"; we also won't specify the distribution the package is built for. This way we can install and upgrade them on any distribution that uses rpm, and on any version - we only need to ensure that the build machine's rpm-build package is at the oldest version in the environment.

Setting up the build environment

To build custom rpm packages, we need to install the rpm-build package:

# yum install rpm-build
From now on, we do not use the root user, and for a good reason: building packages does not require root privileges, and you don't want to break your build machine.

Building the first version of the package

Let's create the directory structure needed for building:

$ mkdir -p rpmbuild/SPECS
Our package is called admin-scripts, version 1.0. We create a specfile that specifies the metadata, contents and tasks performed by the package. This is a simple text file we can create with our favorite text editor, such as vi. The previously installed rpm-build package will fill your empty specfile with template data if you use vi to create an empty one, but for this tutorial consider the specification below, called admin-scripts-1.0.spec:



Name:           admin-scripts
Version:        1
Release:        0
Summary:        FooBar Inc. IT dept. admin scripts
Packager:       John Doe 
Group:          Application/Other
License:        GPL
URL:            www.foobar.com/admin-scripts
Source0:        %{name}-%{version}.tar.gz
BuildArch:      noarch

%description
Package installing latest version the admin scripts used by the IT dept.

%prep
%setup -q


%build

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/local/sbin
cp scripts/* $RPM_BUILD_ROOT/usr/local/sbin/

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root,-)
%dir /usr/local/sbin
/usr/local/sbin/parselogs.sh
/usr/local/sbin/pullnews.sh

%doc

%changelog
* Wed Aug 1 2018 John Doe 
- release 1.0 - initial release
Place the specfile in the rpmbuild/SPECS directory we created earlier.

We need the sources referenced in the specfile - in this case the two shell scripts. Let's create the directory for the sources (named after the package name with the major version appended):

$ mkdir -p rpmbuild/SOURCES/admin-scripts-1/scripts
And copy/move the scripts into it:
$ ls rpmbuild/SOURCES/admin-scripts-1/scripts/
parselogs.sh  pullnews.sh



As this tutorial is not about shell scripting, the contents of these scripts are irrelevant. As we will create a new version of the package, and pullnews.sh is the script we will demonstrate with, its source in the first version is as below:
#!/bin/bash
echo "news pulled"
exit 0
Do not forget to set the appropriate rights on the files in the source - in our case, execute permission:
chmod +x rpmbuild/SOURCES/admin-scripts-1/scripts/*.sh
Now we create a tar.gz archive from the source in the same directory:
cd rpmbuild/SOURCES/ && tar -czf admin-scripts-1.tar.gz admin-scripts-1
We are ready to build the package:
rpmbuild --bb rpmbuild/SPECS/admin-scripts-1.0.spec
We'll get some output about the build, and if anything goes wrong, errors will be shown (for example, missing file or path). If all goes well, our new package will appear in the RPMS directory generated by default under the rpmbuild directory (sorted into subdirectories by architecture):
$ ls rpmbuild/RPMS/noarch/
admin-scripts-1-0.noarch.rpm
We have created a simple yet fully functional rpm package. We can query it for all the metadata we supplied earlier:
$ rpm -qpi rpmbuild/RPMS/noarch/admin-scripts-1-0.noarch.rpm 
Name        : admin-scripts
Version     : 1
Release     : 0
Architecture: noarch
Install Date: (not installed)
Group       : Application/Other
Size        : 78
License     : GPL
Signature   : (none)
Source RPM  : admin-scripts-1-0.src.rpm
Build Date  : 2018. aug.  1., Wed, 13.27.34 CEST
Build Host  : build01.foobar.com
Relocations : (not relocatable)
Packager    : John Doe 
URL         : www.foobar.com/admin-scripts
Summary     : FooBar Inc. IT dept. admin scripts
Description :
Package installing latest version the admin scripts used by the IT dept.
And of course we can install it (with root privileges):
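The screenshot that illustrated this step did not survive extraction; the command itself is a one-liner (a sketch, assuming the package file produced above):

# rpm -ivh rpmbuild/RPMS/noarch/admin-scripts-1-0.noarch.rpm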



As we installed the scripts into a directory that is on every user's $PATH , you can run them as any user in the system, from any directory:
$ pullnews.sh 
news pulled
The package can be distributed as it is, and can be pushed into repositories available to any number of systems. Doing so is out of the scope of this tutorial - however, building another version of the package is certainly not.

Building another version of the package

Our package and the extremely useful scripts in it become popular in no time, considering they are reachable anywhere with a simple yum install admin-scripts within the environment. There will soon be many requests for improvements - in this example, many votes come from happy users asking that pullnews.sh print another line on execution; this feature would save the whole company. We need to build another version of the package: we don't want to install another script, but a new version of it with the same name and path, as the sysadmins in our organization already rely on it heavily.

First we change the source of the pullnews.sh in the SOURCES to something even more complex:

#!/bin/bash
echo "news pulled"
echo "another line printed"
exit 0
We need to recreate the tar.gz with the new source content - we can use the same filename as the first time, as we don't change the version, only the release (and so the Source0 reference will still be valid). Note that we delete the previous archive first:
cd rpmbuild/SOURCES/ && rm -f admin-scripts-1.tar.gz && tar -czf admin-scripts-1.tar.gz admin-scripts-1
Now we create another specfile with a higher release number:
cp rpmbuild/SPECS/admin-scripts-1.0.spec rpmbuild/SPECS/admin-scripts-1.1.spec
We don't change much in the package itself, so we simply record the new release as shown below:
Name:           admin-scripts
Version:        1
Release:        1
Summary:        FooBar Inc. IT dept. admin scripts
Packager:       John Doe 
Group:          Application/Other
License:        GPL
URL:            www.foobar.com/admin-scripts
Source0:        %{name}-%{version}.tar.gz
BuildArch:      noarch

%description
Package installing latest version the admin scripts used by the IT dept.

%prep
%setup -q


%build

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/local/sbin
cp scripts/* $RPM_BUILD_ROOT/usr/local/sbin/

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root,-)
%dir /usr/local/sbin
/usr/local/sbin/parselogs.sh
/usr/local/sbin/pullnews.sh

%doc

%changelog
* Wed Aug 22 2018 John Doe 
- release 1.1 - pullnews.sh v1.1 prints another line
* Wed Aug 1 2018 John Doe 
- release 1.0 - initial release



All done, we can build another version of our package containing the updated script. Note that we reference the specfile with the higher release number as the source of the build:
rpmbuild --bb rpmbuild/SPECS/admin-scripts-1.1.spec
If the build is successful, we now have two versions of the package under our RPMS directory:
ls rpmbuild/RPMS/noarch/
admin-scripts-1-0.noarch.rpm  admin-scripts-1-1.noarch.rpm
And now we can install the "advanced" script, or upgrade it if it is already installed.
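The screenshot is missing here as well; the upgrade is again a single command (a sketch based on the package built above):

# rpm -Uvh rpmbuild/RPMS/noarch/admin-scripts-1-1.noarch.rpm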

And our sysadmins can see that the feature request has landed in this version:

rpm -q --changelog admin-scripts
* sze aug 22 2018 John Doe 
- release 1.1 - pullnews.sh v1.1 prints another line

* sze aug 01 2018 John Doe 
- release 1.0 - initial release
Conclusion

We wrapped our custom content into versioned rpm packages. This means no older versions are left scattered across systems; everything is in its place, at the version we installed or upgraded to. RPM gives us the ability to replace old files needed only in previous versions, to add custom dependencies, and to provide tools or services our other packages rely on. With some effort, we can pack nearly any of our custom content into rpm packages and distribute it across our environment, not only with ease but with consistency.

[Aug 07, 2018] May I sort the -etc-group and -etc-passwd files

Aug 07, 2018 | unix.stackexchange.com

Ned64 ,Feb 18 at 13:52

My /etc/group has grown by adding new users as well as installing programs that have added their own user and/or group. The same is true for /etc/passwd . Editing has now become a little cumbersome due to the lack of structure.

May I sort these files (e.g. by numerical id or alphabetical by name) without negative effect on the system and/or package managers?

I would guess that it does not matter, but just to be sure I would like to get a second opinion. Maybe root needs to be the first line or within the first 1k lines or something?

The same goes for /etc/*shadow .

Kevin ,Feb 19 at 23:50

"Editing has now become a little cumbersome due to the lack of structure" Why are you editing those files by hand? – Kevin Feb 19 at 23:50

Barmar ,Feb 21 at 20:51

How does sorting the file help with editing? Is it because you want to group related accounts together, and then do similar changes in a range of rows? But will related accounts be adjacent if you sort by uid or name? – Barmar Feb 21 at 20:51

Ned64 ,Mar 13 at 23:15

@Barmar It has helped mainly because user accounts are grouped by ranges and separate from system accounts (when sorting by UID). Therefore it is easier e.g. to spot the correct line to examine or change when editing with vi . – Ned64 Mar 13 at 23:15

ErikF ,Feb 18 at 14:12

You should be OK doing this: in fact, according to the article and the documentation, you can sort /etc/passwd and /etc/group by UID/GID with pwck -s and grpck -s, respectively.
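A minimal illustration of those two commands (run as root; both rewrite the files in place, so keep a backup first):

# pwck -s      # sorts /etc/passwd and /etc/shadow by UID
# grpck -s     # sorts /etc/group and /etc/gshadow by GID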

hvd ,Feb 18 at 22:59

@Menasheh This site's colours don't make them stand out as much as on other sites, but "OK doing this" in this answer is a hyperlink. – hvd Feb 18 at 22:59

mickeyf ,Feb 19 at 14:05

OK, fine, but... In general, are there valid reasons to manually edit /etc/passwd and similar files? Isn't it considered better to access these via the tools that are designed to create and modify them? – mickeyf Feb 19 at 14:05

ErikF ,Feb 20 at 21:21

@mickeyf I've seen people manually edit /etc/passwd when they're making batch changes, like changing the GECOS field for all users due to moving/restructuring (global room or phone number changes, etc.) It's not common anymore, but there are specific reasons that crop up from time to time. – ErikF Feb 20 at 21:21

hvd ,Feb 18 at 17:28

Although ErikF is correct that this should generally be okay, I do want to point out one potential issue:

You're allowed to map different usernames to the same UID. If you make use of this, tools that map a UID back to a username will generally pick the first username they find for that UID in /etc/passwd . Sorting may cause a different username to appear first. For display purposes (e.g. ls -l output), either username should work, but it's possible that you've configured some program to accept requests from username A, where it will deny those requests if it sees them coming from username B, even if A and B are the same user.
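If you want to check whether this applies to your system before sorting, a small awk one-liner (not from the original answer) will list any UIDs that appear more than once:

awk -F: 'seen[$3]++ { print "duplicate UID " $3 ": " $1 }' /etc/passwd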

Rui F Ribeiro ,Feb 19 at 17:53

Having root on the first line has long been a de facto "standard" and is very convenient if you ever have to fix its shell or delete the password when dealing with problems or recovering systems.

Likewise I prefer to have daemons/utils users in the middle and standard users at the end of both passwd and shadow .

hvd's answer also makes a very good point about disturbing the order of users, especially on systems with many users maintained by hand.

If you manage to sort the files only partially, for instance only the standard users, that would be more sensible than changing the order of all users, imo.

Barmar ,Feb 21 at 20:13

If you sort numerically by UID, you should get your preferred order. Root is always 0 , and daemons conventionally have UIDs under 100. – Barmar Feb 21 at 20:13

Rui F Ribeiro ,Feb 21 at 20:16

@Barmar If sorting by UID and not by name, indeed, thanks for remembering. – Rui F Ribeiro Feb 21 at 20:16

[Aug 07, 2018] Consistency checking of /etc/passwd and /etc/shadow

Aug 07, 2018 | linux-audit.com

Linux distributions usually provide a pwck utility. This small utility will check the consistency of both files and state any specific issues. By specifying the -r option it runs in read-only mode.

Example of running pwck on the /etc/passwd and /etc/shadow files:
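The original output listing is missing; the invocation itself (a read-only check that modifies nothing) looks like this:

# pwck -r /etc/passwd /etc/shadow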

[Aug 07, 2018] passwd - Copying Linux users and passwords to a new server

Aug 07, 2018 | serverfault.com
I am migrating a server over to new hardware. Part of the system will be rebuilt. Which files and directories need to be copied so that usernames, passwords, groups, file ownership and file permissions stay intact?

Ubuntu 12.04 LTS.

Mikko Ohtamaa, Mar 20 '14 at 7:54

/etc/passwd - user account information less the encrypted passwords 
/etc/shadow - contains encrypted passwords 
/etc/group - user group information 
/etc/gshadow - group encrypted passwords

Be sure to ensure that the permissions on the files are correct too.

Iain


I did this with Gentoo Linux already and copied:

that's it.

If the files on the other machine have different owner IDs, you might change them to the ones on /etc/group and /etc/passwd and then you have the effective permissions restored.

vanthome

Be careful that you don't delete or renumber system accounts when copying over the files mentioned in the other answers. System services don't usually have fixed user ids, and if you've installed the packages in a different order from the original machine (which is very likely if it was long-lived), then they'll end up in a different order. I tend to copy those files to somewhere like /root/saved-from-old-system and hand-edit them in order to copy over just the non-system accounts. (There's probably a tool for this, but I don't tend to copy systems like this often enough to warrant investigating one.)
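One hedged sketch of automating that hand-editing, assuming the old files were saved as /root/saved-from-old-system/passwd and /root/saved-from-old-system/group, and that regular users start at UID/GID 1000 (adjust the limit for your distribution; shadow and gshadow lines would still need to be matched by user name):

# append only the non-system accounts from the saved copies
awk -F: '$3 >= 1000 && $3 < 65534' /root/saved-from-old-system/passwd >> /etc/passwd
awk -F: '$3 >= 1000 && $3 < 65534' /root/saved-from-old-system/group  >> /etc/group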

[Jul 30, 2018] Configuring sudo Access

Jul 30, 2018 | access.redhat.com

Note: A Red Hat training course, the RHCSA Rapid Track Course, is available for this material.

The sudo command offers a mechanism for providing trusted users with administrative access to a system without sharing the password of the root user. When users who have been given access via this mechanism precede an administrative command with sudo, they are prompted to enter their own password. Once authenticated, and assuming the command is permitted, the administrative command is executed as if run by the root user. Follow this procedure to create a normal user account and give it sudo access. You will then be able to use the sudo command from this user account to execute administrative commands without logging in to the account of the root user.

Procedure 2.2. Configuring sudo Access

  1. Log in to the system as the root user.
  2. Create a normal user account using the useradd command. Replace USERNAME with the user name that you wish to create.
    # useradd USERNAME
  3. Set a password for the new user using the passwd command.
    # passwd USERNAME
    Changing password for user USERNAME.
    New password: 
    Retype new password: 
    passwd: all authentication tokens updated successfully.
    
  4. Run the visudo command to edit the /etc/sudoers file. This file defines the policies applied by the sudo command.
    # visudo
    
  5. Find the lines in the file that grant sudo access to users in the group wheel when enabled.
    ## Allows people in group wheel to run all commands
    # %wheel        ALL=(ALL)       ALL
    
  6. Remove the comment character ( # ) at the start of the second line. This enables the configuration option.
  7. Save your changes and exit the editor.
  8. Add the user you created to the wheel group using the usermod command.
    # usermod -aG wheel USERNAME
    
  9. Test that the updated configuration allows the user you created to run commands using sudo .
    1. Use the su command to switch to the new user account that you created.
      # su - USERNAME
      
    2. Use the groups command to verify that the user is in the wheel group.
      $ groups
      USERNAME wheel
      
    3. Use the sudo command to run the whoami command. As this is the first time you have run a command using sudo from this user account, the banner message will be displayed. You will also be prompted to enter the password for the user account.
      $ sudo whoami
      We trust you have received the usual lecture from the local System
      Administrator. It usually boils down to these three things:
      
          #1) Respect the privacy of others.
          #2) Think before you type.
          #3) With great power comes great responsibility.
      
      [sudo] password for USERNAME:
      root
      
      The last line of the output is the user name returned by the whoami command. If sudo is configured correctly this value will be root .
You have successfully configured a user with sudo access. You can now log in to this user account and use sudo to run commands as if you were logged in to the account of the root user.

[Jul 30, 2018] 10 Useful Sudoers Configurations for Setting 'sudo' in Linux

Jul 30, 2018 | www.tecmint.com

Below are ten /etc/sudoers file configurations to modify the behavior of the sudo command using Defaults entries.

$ sudo cat /etc/sudoers
/etc/sudoers File
#
# This file MUST be edited with the 'visudo' command as root.
#
# Please consider adding local content in /etc/sudoers.d/ instead of
# directly modifying this file.
#
# See the man page for details on how to write a sudoers file.
#
Defaults        env_reset
Defaults        mail_badpass
Defaults        secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
Defaults        logfile="/var/log/sudo.log"
Defaults        lecture="always"
Defaults        badpass_message="Password is wrong, please try again"
Defaults        passwd_tries=5
Defaults        insults
Defaults        log_input,log_output
Types of Defaults Entries
Defaults                parameter,   parameter_list     #affect all users on any host
Defaults@Host_List      parameter,   parameter_list     #affects all users on a specific host
Defaults:User_List      parameter,   parameter_list     #affects a specific user
Defaults!Cmnd_List      parameter,   parameter_list     #affects  a specific command 
Defaults>Runas_List     parameter,   parameter_list     #affects commands being run as a specific user

For the scope of this guide, we will focus on the first type of Defaults, in the forms below. Parameters may be flags, integer values, strings, or lists.

You should note that flags are implicitly boolean and can be turned off using the '!' operator, and lists have two additional assignment operators, += (add to list) and -= (remove from list).

Defaults     parameter
OR
Defaults     parameter=value
OR
Defaults     parameter -=value   
Defaults     parameter +=value  
OR
Defaults     !parameter
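For example, combining a flag toggle with the two list operators described above (a generic illustration, not tied to any particular setup):

Defaults    !lecture                    # turn the lecture flag off
Defaults    env_keep += "http_proxy"    # append to the env_keep list
Defaults    env_keep -= "http_proxy"    # remove it again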

[Jul 30, 2018] Configuring sudo and adding users to Wheel group

Here you can find an additional example of granting access to all commands in a particular directory via sudo...
Formatting changed and some errors corrected...
Nov 28, 2014 | linuxnlenux.wordpress.com
If a server needs to be administered by a number of people it is normally not a good idea for them all to use the root account. This is because it becomes difficult to determine exactly who did what, when and where if everyone logs in with the same credentials. The sudo utility was designed to overcome this difficulty.

With sudo (which stands for "superuser do"), you can delegate a limited set of administrative responsibilities to other users, who are strictly limited to the commands you allow them. sudo creates a thorough audit trail, so everything users do gets logged; if users somehow manage to do something they shouldn't have, you'll be able to detect it and apply the needed fixes. You can even configure sudo centrally, so its permissions apply to several hosts.

The privileged command you want to run must first begin with the word sudo followed by the command's regular syntax. When running the command with the sudo prefix, you will be prompted for your regular password before it is executed. You may run other privileged commands using sudo within a five-minute period without being re-prompted for a password. All commands run as sudo are logged in the log file /var/log/messages.

The sudo configuration file is /etc/sudoers . We should never edit this file manually. Instead, use the visudo command: # visudo

This protects against conflicts (when two admins edit this file at the same time) and guarantees that the right syntax is used (and that the permission bits stay correct). The program uses the vi text editor.

All Access to Specific Users

You can grant users user1 and user2 full access to all privileged commands, with this sudoers entry.

user1, user2 ALL=(ALL) ALL

This is generally not a good idea because this allows user1 and user2 to use the su command to grant themselves permanent root privileges thereby bypassing the command logging features of sudo.

Access To Specific Users To Specific Files

This entry allows user1 and all the members of the group operator to gain access to all the program files in the /sbin and /usr/sbin directories, plus the privilege of running the command /usr/apps/check.pl.

user1, %operator ALL= /sbin/, /usr/sbin/, /usr/apps/check.pl

Access to Specific Files as Another User

user1 ALL=(accounts) /bin/kill, /usr/bin/kill, /usr/bin/pkill
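With that rule in place, user1 invokes the command with the -u option, for example (the process name here is purely illustrative):

$ sudo -u accounts /usr/bin/pkill nightly-report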

Access Without Needing Passwords

This example allows all users in the group operator to execute all the commands in the /sbin directory without the need for entering a password.

%operator ALL= NOPASSWD: /sbin/

Adding users to the wheel group

The wheel group is a legacy from UNIX. When a server had to be maintained at a higher level than the day-to-day system administrator, root rights were often required. The 'wheel' group was used to create a pool of user accounts that were allowed to get that level of access to the server. If you weren't in the 'wheel' group, you were denied access to root.

Edit the configuration file (/etc/sudoers) with visudo and change these lines:

# Uncomment to allow people in group wheel to run all commands
# %wheel ALL=(ALL) ALL

To this (as recommended):

# Uncomment to allow people in group wheel to run all commands
%wheel ALL=(ALL) ALL

This will allow anyone in the wheel group to execute commands using sudo (rather than having to add each person one by one).

Now finally use the following command to add any user (e.g. user1) to the wheel group:

# usermod -aG wheel user1
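A quick check that the change took effect (not part of the original article):

# id -nG user1    # "wheel" should now appear among the groups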

[Jul 30, 2018] Non-root user getting root access after running sudo vi -etc-hosts

Notable quotes:
"... as the original user ..."
Jul 30, 2018 | unix.stackexchange.com

Gilles, Mar 10, 2018 at 10:24

If sudo vi /etc/hosts is successful, it means that the system administrator has allowed the user to run vi /etc/hosts as root. That's the whole point of sudo: it lets the system administrator authorize certain users to run certain commands with extra privileges.

Giving a user the permission to run vi gives them the permission to run any vi command, including :sh to run a shell and :w to overwrite any file on the system. A rule allowing only to run vi /etc/hosts does not make any sense since it allows the user to run arbitrary commands.

There is no "hacking" involved. The breach of security comes from a misconfiguration, not from a hole in the security model. Sudo does not particularly try to prevent against misconfiguration. Its documentation is well-known to be difficult to understand; if in doubt, ask around and don't try to do things that are too complicated.

It is in general a hard problem to give a user a specific privilege without giving them more than intended. A bulldozer approach like giving them the right to run an interactive program such as vi is bound to fail. A general piece of advice is to give the minimum privileges necessary to accomplish the task. If you want to allow a user to modify one file, don't give them the permission to run an editor. Instead, either:

  • Give them the permission to write to the file. This is the simplest method with the least risk of doing something you didn't intend.
    setfacl -m u:bob:rw /etc/hosts
    
  • Give them permission to edit the file via sudo. To do that, don't give them the permission to run an editor. As explained in the sudo documentation, give them the permission to run sudoedit , which invokes an editor as the original user and then uses the extra privileges only to modify the file.
    bob ALL = sudoedit /etc/hosts

    The sudo method is more complicated to set up, and is less transparent for the user because they have to invoke sudoedit instead of just opening the file in their editor, but has the advantage that all accesses are logged.

Note that allowing a user to edit /etc/hosts may have an impact on your security infrastructure: if there's any place where you rely on a host name corresponding to a specific machine, then that user will be able to point it to a different machine. Consider that it is probably unnecessary anyway .
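As a usage note for the sudoedit approach above: the user simply runs sudoedit on the file, optionally selecting the editor through SUDO_EDITOR (a generic illustration):

$ SUDO_EDITOR=vim sudoedit /etc/hosts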

[Jun 21, 2018] Create a Sudo Log File by Aaron Kili

Jun 21, 2018 | www.tecmint.com

By default, sudo logs through syslog(3). However, to specify a custom log file, use the logfile parameter like so:

Defaults  logfile="/var/log/sudo.log"

To log hostname and the four-digit year in the custom log file, use log_host and log_year parameters respectively as follows:

Defaults  log_host, log_year, logfile="/var/log/sudo.log"
Log Sudo Command Input/Output

The log_input and log_output parameters enable sudo to run a command in a pseudo-tty and log all user input and all output sent to the screen, respectively.

The default I/O log directory is /var/log/sudo-io , and if there is a session sequence number, it is stored in this directory. You can specify a custom directory through the iolog_dir parameter.

Defaults   log_input, log_output

Some escape sequences are supported, such as %{seq}, which expands to a monotonically increasing base-36 sequence number, such as 000001, where every two digits are used to form a new directory, e.g. 00/00/01, as in the example below:
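(The article's original example did not survive extraction; the following is a reconstruction based on the escape sequence just described.)

Defaults   log_input, log_output
Defaults   iolog_dir=/var/log/sudo-io/%{seq}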

[Jun 21, 2018] Lecture: Sudo Users by Aaron Kili

Jun 21, 2018 | www.tecmint.com

To lecture sudo users about password usage on the system, use the lecture parameter as below.

It has 3 possible values:

  1. always – always lecture a user.
  2. once – only lecture a user the first time they execute sudo command (this is used when no value is specified)
  3. never – never lecture the user.
 
Defaults  lecture="always"

Additionally, you can set a custom lecture file with the lecture_file parameter, type the appropriate message in the file:

Defaults  lecture_file="/path/to/file"

Show Custom Message When You Enter Wrong sudo Password

When a user enters a wrong password, a certain message is displayed on the command line. The default message is "sorry, try again"; you can modify the message using the badpass_message parameter as follows:

Defaults  badpass_message="Password is wrong, please try again"
Increase sudo Password Tries Limit

The parameter passwd_tries is used to specify the number of times a user can try to enter a password.

The default value is 3:

Defaults   passwd_tries=5

Increase Sudo Password Attempts

To set a password timeout (default is 5 minutes) using passwd_timeout parameter, add the line below:

Defaults   passwd_timeout=2
9. Let Sudo Insult You When You Enter Wrong Password

In case a user types a wrong password, sudo will display insults on the terminal with the insults parameter. This will automatically turn off the badpass_message parameter.

Defaults  insults

[Jun 20, 2018] Suse Doc Administration Guide - Configuring sudo

Notable quotes:
"... WARNING: Dangerous constructs ..."
Sep 06, 2013 | www.suse.com
Basic sudoers Configuration Syntax

In the sudoers configuration files, there are two types of options: strings and flags. While strings can contain any value, flags can be turned either ON or OFF. The most important syntax constructs for sudoers configuration files are:

# Everything on a line after a # gets ignored 
Defaults !insults # Disable the insults flag 
Defaults env_keep += "DISPLAY HOME" # Add DISPLAY and HOME to env_keep
tux ALL = NOPASSWD: /usr/bin/frobnicate, PASSWD: /usr/bin/journalctl

There are two exceptions to the comment rule: #include and #includedir are normal commands. A # followed by digits specifies a UID.

Remove the ! to set the specified flag to ON.

See Section 2.2.3, Rules in sudoers .

Table 2-1 Useful Flags and Options (each entry lists the option name, its description, and an example)

targetpw

This flag controls whether the invoking user is required to enter the password of the target user (ON) (for example root ) or the invoking user (OFF).

Defaults targetpw # Turn targetpw flag ON

rootpw

If set, sudo will prompt for the root password instead of the target user's or the invoker's. The default is OFF.

Defaults !rootpw # Turn rootpw flag OFF

env_reset

If set, sudo constructs a minimal environment with only TERM , PATH , HOME , MAIL , SHELL , LOGNAME , USER , USERNAME , and SUDO_* set. Additionally, variables listed in env_keep get imported from the calling environment. The default is ON.

Defaults env_reset # Turn env_reset flag ON

env_keep

List of environment variables to keep when the env_reset flag is ON.

# Set env_keep to contain EDITOR and PROMPT
Defaults env_keep = "EDITOR PROMPT"
Defaults env_keep += "JRE_HOME" # Add JRE_HOME
Defaults env_keep -= "JRE_HOME" # Remove JRE_HOME

env_delete

List of environment variables to remove when the env_reset flag is OFF.

# Set env_delete to contain EDITOR and PROMPT
Defaults env_delete = "EDITOR PROMPT"
Defaults env_delete += "JRE_HOME" # Add JRE_HOME
Defaults env_delete -= "JRE_HOME" # Remove JRE_HOME

The Defaults token can also be used to create aliases for a collection of users, hosts, and commands. Furthermore, it is possible to apply an option only to a specific set of users.

For detailed information about the /etc/sudoers configuration file, consult man 5 sudoers.

2.2.3 Rules in sudoers

Rules in the sudoers configuration can be very complex, so this section will only cover the basics. Each rule follows the basic scheme ( [] marks optional parts):

#Who      Where         As whom      Tag                What
User_List Host_List = [(User_List)] [NOPASSWD:|PASSWD:] Cmnd_List
Syntax for sudoers Rules
User_List

One or more (separated by , ) identifiers: Either a user name, a group in the format %GROUPNAME or a user ID in the format #UID . Negation can be performed with a ! prefix.

Host_List

One or more (separated by , ) identifiers: Either a (fully qualified) host name or an IP address. Negation can be performed with a ! prefix. ALL is the usual choice for Host_List .

NOPASSWD:|PASSWD:

The user will not be prompted for a password when running commands matching CMDSPEC after NOPASSWD: .

PASSWD is the default, it only needs to be specified when both are on the same line:

tux ALL = PASSWD: /usr/bin/foo, NOPASSWD: /usr/bin/bar
Cmnd_List

One or more (separated by , ) specifiers: A path to an executable, followed by allowed arguments or nothing.

/usr/bin/foo     # Anything allowed
/usr/bin/foo bar # Only "/usr/bin/foo bar" allowed
/usr/bin/foo ""  # No arguments allowed

ALL can be used as User_List , Host_List , and Cmnd_List .

A rule that allows tux to run all commands as root without entering a password:

tux ALL = NOPASSWD: ALL

A rule that allows tux to run systemctl restart apache2 :

tux ALL = /usr/bin/systemctl restart apache2

A rule that allows tux to run wall as admin with no arguments:

tux ALL = (admin) /usr/bin/wall ""

WARNING: Dangerous constructs

Constructs of the kind

ALL ALL = ALL

must not be used without Defaults targetpw , otherwise anyone can run commands as root .

[Jun 20, 2018] Sudo - ArchWiki

Jun 20, 2018 | wiki.archlinux.org

Sudoers default file permissions

The owner and group for the sudoers file must both be 0. The file permissions must be set to 0440. These permissions are set by default, but if you accidentally change them, they should be changed back immediately or sudo will fail.

# chown -c root:root /etc/sudoers
# chmod -c 0440 /etc/sudoers
Tips and tricks

Disable per-terminal sudo

Warning: This will let any process use your sudo session.

If you are annoyed by sudo's defaults that require you to enter your password every time you open a new terminal, disable tty_tickets :

Defaults !tty_tickets
Environment variables

If you have a lot of environment variables, or you export your proxy settings via export http_proxy="..." , when using sudo these variables do not get passed to the root account unless you run sudo with the -E option.

$ sudo -E pacman -Syu

The recommended way of preserving environment variables is to append them to env_keep :

/etc/sudoers
Defaults env_keep += "ftp_proxy http_proxy https_proxy no_proxy"
Passing aliases

If you use a lot of aliases, you might have noticed that they do not carry over to the root account when using sudo. However, there is an easy way to make them work. Simply add the following to your ~/.bashrc or /etc/bash.bashrc :

alias sudo='sudo '
Root password

Users can configure sudo to ask for the root password instead of the user password by adding targetpw (target user, defaults to root) or rootpw to the Defaults line in /etc/sudoers :

Defaults targetpw

To prevent exposing your root password to users, you can restrict this to a specific group:

Defaults:%wheel targetpw
%wheel ALL=(ALL) ALL
Disable root login

Users may wish to disable the root login. Without root, attackers must first guess a user name configured as a sudoer as well as the user password. See for example Ssh#Deny .

Warning:

The account can be locked via passwd :

# passwd -l root

A similar command unlocks root.

$ sudo passwd -u root

Alternatively, edit /etc/shadow and replace the root's encrypted password with "!":

root:!:12345::::::

To enable root login again:

$ sudo passwd root
Tip: To get to an interactive root prompt, even after disabling the root account, use sudo -i.

kdesu

kdesu may be used under KDE to launch GUI applications with root privileges. It is possible that by default kdesu will try to use su even if the root account is disabled. Fortunately one can tell kdesu to use sudo instead of su. Create/edit the file ~/.config/kdesurc :

[super-user-command]
super-user-command=sudo

or use the following command:

$ kwriteconfig5 --file kdesurc --group super-user-command --key super-user-command sudo

Alternatively, install kdesudo AUR , which has the added advantage of tab-completion for the command following.

Harden with Sudo Example

Let us say you create 3 users: admin, devel, and joe. The user "admin" is used for journalctl, systemctl, mount, kill, and iptables; "devel" is used for installing packages, and editing config files; and "joe" is the user you log in with. To let "joe" reboot, shutdown, and use netctl we would do the following:

Edit /etc/pam.d/su and /etc/pam.d/su-l to require the user to be in the wheel group, but do not put anyone in it.

#%PAM-1.0
auth            sufficient      pam_rootok.so
# Uncomment the following line to implicitly trust users in the "wheel" group.
#auth           sufficient      pam_wheel.so trust use_uid
# Uncomment the following line to require a user to be in the "wheel" group.
auth            required        pam_wheel.so use_uid
auth            required        pam_unix.so
account         required        pam_unix.so
session         required        pam_unix.so

Limit SSH login to the 'ssh' group. Only "joe" will be part of this group.

groupadd -r ssh
gpasswd -a joe ssh
echo 'AllowGroups ssh' >> /etc/ssh/sshd_config

Restart sshd.service .

Add users to other groups.

for g in power network; do gpasswd -a joe $g; done
for g in network power storage; do gpasswd -a admin $g; done

Set permissions on configs so devel can edit them.

chown -R devel:root /etc/{http,openvpn,cups,zsh,vim,screenrc}
Cmnd_Alias  POWER       =   /usr/bin/shutdown -h now, /usr/bin/halt, /usr/bin/poweroff, /usr/bin/reboot
Cmnd_Alias  STORAGE     =   /usr/bin/mount -o nosuid\,nodev\,noexec, /usr/bin/umount
Cmnd_Alias  SYSTEMD     =   /usr/bin/journalctl, /usr/bin/systemctl
Cmnd_Alias  KILL        =   /usr/bin/kill, /usr/bin/killall
Cmnd_Alias  PKGMAN      =   /usr/bin/pacman
Cmnd_Alias  NETWORK     =   /usr/bin/netctl
Cmnd_Alias  FIREWALL    =   /usr/bin/iptables, /usr/bin/ip6tables
Cmnd_Alias  SHELL       =   /usr/bin/zsh, /usr/bin/bash
%power      ALL         =   (root)  NOPASSWD: POWER
%network    ALL         =   (root)  NETWORK
%storage    ALL         =   (root)  STORAGE
root        ALL         =   (ALL)   ALL
admin       ALL         =   (root)  SYSTEMD, KILL, FIREWALL
devel       ALL         =   (root)  PKGMAN
joe         ALL         =   (devel) SHELL, (admin) SHELL

With this setup, you will almost never need to log in as the root user.

"joe" can connect to his home WiFi.

sudo netctl start home
sudo poweroff

"joe" can not use netctl as any other user.

sudo -u admin -- netctl start home

When "joe" needs to use journalctl or kill run away process he can switch to that user

sudo -i -u devel
sudo -i -u admin

But "joe" cannot switch to the root user.

sudo -i -u root

If "joe" want to start a gnu-screen session as admin he can do it like this:

sudo -i -u admin
admin% chown admin:tty `echo $TTY`
admin% screen
Configure sudo using drop-in files in /etc/sudoers.d

sudo parses files contained in the directory /etc/sudoers.d/ . This means that instead of editing /etc/sudoers , you can change settings in standalone files and drop them in that directory. This has two advantages:

The format for entries in these drop-in files is the same as for /etc/sudoers itself. To edit them directly, use visudo -f /etc/sudoers.d/somefile . See the "Including other files from within sudoers" section of sudoers(5) for details.

The files in /etc/sudoers.d/ directory are parsed in lexicographical order, file names containing . or ~ are skipped. To avoid sorting problems, the file names should begin with two digits, e.g. 01_foo .
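A minimal sketch of creating such a drop-in (the file name, user, and service below are illustrative assumptions, not from the wiki):

# visudo -f /etc/sudoers.d/10_deploy

Then, inside the editor, a single rule such as:

deploy ALL = NOPASSWD: /usr/bin/systemctl restart myapp.service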

Note: The order of entries in the drop-in files is important: make sure that the statements do not override each other. Warning: The files in /etc/sudoers.d/ are just as fragile as /etc/sudoers itself: any improperly formatted file will prevent sudo from working. Hence, for the same reason, it is strongly advised to use visudo.

Editing files

sudo -e or sudoedit lets you edit a file as another user while still running the text editor as your user.

This is especially useful for editing files as root without elevating the privilege of your text editor, for more details read sudo(8) .

Note that you can set the editor to any program, so for example one can use meld to manage pacnew files:

$ SUDO_EDITOR=meld sudo -e /etc/file{,.pacnew}
Troubleshooting SSH TTY Problems


SSH does not allocate a tty by default when running a remote command. Without a tty, sudo cannot disable echo when prompting for a password. You can use ssh's -t option to force it to allocate a tty.

The Defaults option requiretty only allows the user to run sudo if they have a tty.

# Disable "ssh hostname sudo <cmd>", because it will show the password in clear text. You have to run "ssh -t hostname sudo <cmd>".
#
#Defaults    requiretty
Permissive umask


Sudo will union the user's umask value with its own umask (which defaults to 0022). This prevents sudo from creating files with more open permissions than the user's umask allows. While this is a sane default if no custom umask is in use, this can lead to situations where a utility run by sudo may create files with different permissions than if run by root directly. If errors arise from this, sudo provides a means to fix the umask, even if the desired umask is more permissive than the umask that the user has specified. Adding this (using visudo ) will override sudo's default behavior:

Defaults umask = 0022
Defaults umask_override

This sets sudo's umask to root's default umask (0022) and overrides the default behavior, always using the indicated umask regardless of what umask the user has set.

Defaults skeleton


The author's site has a list of all the options that can be used with the Defaults command in the /etc/sudoers file.

See [1] for a list of options (parsed from the version 1.8.7 source code) in a format optimized for sudoers .

[Jun 20, 2018] sudo - Gentoo Wiki

Jun 20, 2018 | wiki.gentoo.org

Non-root execution

It is also possible to have a user run an application as a different, non-root user. This can be very interesting if you run applications as a different user (for instance apache for the web server) and want to allow certain users to perform administrative steps as that user (like killing zombie processes).

Inside /etc/sudoers you list the user(s) in between ( and ) before the command listing:

CODE Non-root execution syntax
users  hosts = (run-as) commands

For instance, to allow larry to run the kill tool as the apache or gorg user:

CODE Non-root execution example
Cmnd_Alias KILL = /bin/kill, /usr/bin/pkill
 
larry   ALL = (apache, gorg) KILL

With this set, the user can run sudo -u to select the user he wants to run the application as:

user $ sudo -u apache pkill apache

You can set an alias for the user to run an application as using the Runas_Alias directive. Its use is identical to the other _Alias directives we have seen before.

Passwords and default settings

By default, sudo asks the user to identify himself using his own password. Once a password is entered, sudo remembers it for 5 minutes, allowing the user to focus on his tasks and not repeatedly re-entering his password.

Of course, this behavior can be changed: you can set the Defaults: directive in /etc/sudoers to change the default behavior for a user.

For instance, to change the default 5 minutes to 0 (never remember):

CODE Changing the timeout value
Defaults:larry  timestamp_timeout=0

A setting of -1 would remember the password indefinitely (until the system reboots).

A different setting would be to require the password of the user that the command should be run as and not the users' personal password. This is accomplished using runaspw . In the following example we also set the number of retries (how many times the user can re-enter a password before sudo fails) to 2 instead of the default 3:

CODE Requiring the root password instead of the user's password
Defaults:john   runaspw, passwd_tries=2

Another interesting feature is to keep the DISPLAY variable set so that you can execute graphical tools:

CODE Keeping the DISPLAY variable alive
Defaults:john env_keep=DISPLAY

You can change dozens of default settings using the Defaults: directive. Fire up the sudoers manual page and search for Defaults .

If you however want to allow a user to run a certain set of commands without providing any password whatsoever, you need to start the commands with NOPASSWD: , like so:

CODE Allowing emerge to be ran as root without asking for a password
larry     localhost = NOPASSWD: /usr/bin/emerge
Bash completion

Users that want bash completion with sudo need to run this once.

user $ echo "complete -cf sudo" >> $HOME/.bashrc

[Jun 20, 2018] Trick 4: Switching to root

Jun 20, 2018 | www.networkworld.com

There are times when prefacing every command with "sudo" gets in the way of getting your work done. With a default /etc/sudoers configuration and membership in the sudo (or admin) group, you can assume root control using the command sudo su - . Extra care should always be taken when using the root account in this way.

$ sudo -i -u root
[sudo] password for jdoe:
root@stinkbug:~#

[Jun 20, 2018] Prolonging password timeout

Jun 20, 2018 | wiki.gentoo.org

Prolonging password timeout

By default, if a user has entered their password to authenticate themselves to sudo, it is remembered for 5 minutes. If the user wants to prolong this period, he can run sudo -v to reset the time stamp so that it will take another 5 minutes before sudo asks for the password again.

user $ sudo -v

The inverse is to kill the time stamp using sudo -k .

[Jun 20, 2018] Shared Administration with Sudo

Jun 20, 2018 | www.freebsd.org

Finally, this line in /usr/local/etc/sudoers allows any member of the webteam group to manage webservice :

%webteam   ALL=(ALL)       /usr/sbin/service webservice *

Unlike su(1), Sudo only requires the end user's password. This has the advantage that users do not need shared passwords - a finding in most security audits and just bad all the way around.

Users permitted to run applications with Sudo only enter their own passwords. This is more secure and gives better control than su (1) , where the root password is entered and the user acquires all root permissions.

Tip:

Most organizations are moving or have moved toward a two factor authentication model. In these cases, the user may not have a password to enter. Sudo provides for these cases with the NOPASSWD variable. Adding it to the configuration above will allow all members of the webteam group to manage the service without the password requirement:

%webteam   ALL=(ALL)       NOPASSWD: /usr/sbin/service webservice *

13.14.1. Logging Output

An advantage to implementing Sudo is the ability to enable session logging. Using the built-in log mechanisms and the included sudoreplay command, all commands initiated through Sudo are logged for later verification. To enable this feature, add a default log directory entry; this example uses a user variable. Several other log filename conventions exist; consult the manual page for sudoreplay for additional information.

Defaults iolog_dir=/var/log/sudo-io/%{user}
Tip:

This directory will be created automatically after the logging is configured. It is best to let the system create the directory with default permissions just to be safe. In addition, this entry will also log administrators who use the sudoreplay command. To change this behavior, read and uncomment the logging options inside sudoers.

Once this directive has been added to the sudoers file, any user configuration can be updated with the request to log access. In the example shown, the updated webteam entry would have the following additional changes:

%webteam ALL=(ALL) NOPASSWD: LOG_INPUT: LOG_OUTPUT: /usr/sbin/service webservice *

From this point on, all webteam members altering the status of the webservice application will be logged. The list of previous and current sessions can be displayed with:

# sudoreplay -l

In the output, to replay a specific session, search for the TSID= entry, and pass that to sudoreplay with no other options to replay the session at normal speed. For example:

# sudoreplay user1/00/00/02
Warning:

While sessions are logged, any administrator is able to remove sessions and leave only a question of why they had done so. It is worthwhile to add a daily check through an intrusion detection system ( IDS ) or similar software so that other administrators are alerted to manual alterations.

The sudoreplay command is extremely extensible. Consult the documentation for more information.

[Jun 20, 2018] SCOM 1801, 2016 and 2012 Configuring sudo Elevation for UNIX and Linux Monitoring

Jun 20, 2018 | technet.microsoft.com

LINUX

#-----------------------------------------------------------------------------------

#Example user configuration for Operations Manager agent

#Example assumes users named: scomadm & scomadm

#Replace usernames & corresponding /tmp/scx-<username> specification for your environment

#General requirements

Defaults:scomadm !requiretty

#Agent maintenance

##Certificate signing

scomadm ALL=(root) NOPASSWD: /bin/sh -c cp /tmp/scx-scomadm/scx.pem /etc/opt/microsoft/scx/ssl/scx.pem; rm -rf /tmp/scx-scomadm; /opt/microsoft/scx/bin/tools/scxadmin -restart

scomadm ALL=(root) NOPASSWD: /bin/sh -c cat /etc/opt/microsoft/scx/ssl/scx.pem

scomadm ALL=(root) NOPASSWD: /bin/sh -c if test -f /opt/microsoft/omsagent/bin/service_control; then cat /etc/opt/microsoft/omsagent/scom/certs/scom-cert.pem; else cat /etc/opt/microsoft/scx/ssl/scx.pem; fi

scomadm ALL=(root) NOPASSWD: /bin/sh -c if test -f /opt/microsoft/omsagent/bin/service_control; then mv /tmp/scx-scomadm/scom-cert.pem /etc/opt/microsoft/omsagent/scom/certs/scom-cert.pem; fi

scomadm ALL=(root) NOPASSWD: /bin/sh -c if test -r /etc/opt/microsoft/scx/ssl/scx.pem; then cat /etc/opt/microsoft/scx/ssl/scx.pem; else cat /etc/opt/microsoft/scx/ssl/scx-seclevel1.pem; fi

##SCOM Workspace

scomadm ALL=(root) NOPASSWD: /bin/sh -c if test -f /opt/microsoft/omsagent/bin/service_control; then cp /tmp/scx-scomadm/omsadmin.conf /etc/opt/microsoft/omsagent/scom/conf/omsadmin.conf; /opt/microsoft/omsagent/bin/service_control restart scom; fi

scomadm ALL=(root) NOPASSWD: /bin/sh -c if test -f /opt/microsoft/omsagent/bin/omsadmin.sh && test ! -f /etc/opt/microsoft/omsagent/scom/certs/scom-cert.pem; then /opt/microsoft/omsagent/bin/omsadmin.sh -w scom; fi

##Install or upgrade

#Linux

scomadm ALL=(root) NOPASSWD: /bin/sh -c sh /tmp/scx-scomadm/omsagent-1.[0-9].[0-9]-[0-9][0-9].universal[[\:alpha\:]].[[\:digit\:]].x[6-8][4-6].sh --install --enable-opsmgr; if test -f /opt/microsoft/omsagent/bin/omsadmin.sh && test ! -f /etc/opt/microsoft/omsagent/scom/certs/scom-cert.pem; then /opt/microsoft/omsagent/bin/omsadmin.sh -w scom; fi; EC=$?; cd /tmp; rm -rf /tmp/scx-scomadm; exit $EC

scomadm ALL=(root) NOPASSWD: /bin/sh -c sh /tmp/scx-scomadm/omsagent-1.[0-9].[0-9]-[0-9][0-9].universal[[\:alpha\:]].[[\:digit\:]].x[6-8][4-6].sh --upgrade --enable-opsmgr; if test -f /opt/microsoft/omsagent/bin/omsadmin.sh && test ! -f /etc/opt/microsoft/omsagent/scom/certs/scom-cert.pem; then /opt/microsoft/omsagent/bin/omsadmin.sh -w scom; fi; EC=$?; cd /tmp; rm -rf /tmp/scx-scomadm; exit $EC

#RHEL

scomadm ALL=(root) NOPASSWD: /bin/sh -c sh /tmp/scx-scomadm/omsagent-1.[0-9].[0-9]-[0-9][0-9].rhel.[[\:digit\:]].x[6-8][4-6].sh --install --enable-opsmgr; if test -f /opt/microsoft/omsagent/bin/omsadmin.sh && test ! -f /etc/opt/microsoft/omsagent/scom/certs/scom-cert.pem; then /opt/microsoft/omsagent/bin/omsadmin.sh -w scom; fi; EC=$?; cd /tmp; rm -rf /tmp/scx-scomadm; exit $EC

scomadm ALL=(root) NOPASSWD: /bin/sh -c sh /tmp/scx-scomadm/omsagent-1.[0-9].[0-9]-[0-9][0-9].rhel.[[\:digit\:]].x[6-8][4-6].sh --upgrade --enable-opsmgr; if test -f /opt/microsoft/omsagent/bin/omsadmin.sh && test ! -f /etc/opt/microsoft/omsagent/scom/certs/scom-cert.pem; then /opt/microsoft/omsagent/bin/omsadmin.sh -w scom; fi; EC=$?; cd /tmp; rm -rf /tmp/scx-scomadm; exit $EC

#SUSE

scomadm ALL=(root) NOPASSWD: /bin/sh -c sh /tmp/scx-scomadm/omsagent-1.[0-9].[0-9]-[0-9][0-9].sles.1[[\:digit\:]].x[6-8][4-6].sh --install --enable-opsmgr; if test -f /opt/microsoft/omsagent/bin/omsadmin.sh && test ! -f /etc/opt/microsoft/omsagent/scom/certs/scom-cert.pem; then /opt/microsoft/omsagent/bin/omsadmin.sh -w scom; fi; EC=$?; cd /tmp; rm -rf /tmp/scx-scomadm; exit $EC

scomadm ALL=(root) NOPASSWD: /bin/sh -c sh /tmp/scx-scomadm/omsagent-1.[0-9].[0-9]-[0-9][0-9].sles.1[[\:digit\:]].x[6-8][4-6].sh --upgrade --enable-opsmgr; if test -f /opt/microsoft/omsagent/bin/omsadmin.sh && test ! -f /etc/opt/microsoft/omsagent/scom/certs/scom-cert.pem; then /opt/microsoft/omsagent/bin/omsadmin.sh -w scom; fi; EC=$?; cd /tmp; rm -rf /tmp/scx-scomadm; exit $EC

## RHEL PPC

scomadm ALL=(root) NOPASSWD: /bin/sh -c sh /tmp/scx-scomadm/scx-1.[0-9].[0-9]-[0-9][0-9][0-9].rhel.[[\:digit\:]].ppc.sh --install --enable-opsmgr; if test -f /opt/microsoft/omsagent/bin/omsadmin.sh && test ! -f /etc/opt/microsoft/omsagent/scom/certs/scom-cert.pem; then /opt/microsoft/omsagent/bin/omsadmin.sh -w scom; fi; EC=$?; cd /tmp; rm -rf /tmp/scx-scomadm; exit $EC

scomadm ALL=(root) NOPASSWD: /bin/sh -c sh /tmp/scx-scomadm/scx-1.[0-9].[0-9]-[0-9][0-9][0-9].rhel.[[\:digit\:]].ppc.sh --upgrade --enable-opsmgr; if test -f /opt/microsoft/omsagent/bin/omsadmin.sh && test ! -f /etc/opt/microsoft/omsagent/scom/certs/scom-cert.pem; then /opt/microsoft/omsagent/bin/omsadmin.sh -w scom; fi; EC=$?; cd /tmp; rm -rf /tmp/scx-scomadm; exit $EC

##Uninstall

scomadm ALL=(root) NOPASSWD: /bin/sh -c if test -f /opt/microsoft/omsagent/bin/omsadmin.sh; then if test "$(/opt/microsoft/omsagent/bin/omsadmin.sh -l | grep scom | wc -l)" \= "1" && test "$(/opt/microsoft/omsagent/bin/omsadmin.sh -l | wc -l)" \= "1" || test "$(/opt/microsoft/omsagent/bin/omsadmin.sh -l)" \= "No Workspace"; then /opt/microsoft/omsagent/bin/uninstall; else /opt/microsoft/omsagent/bin/omsadmin.sh -x scom; fi; else /opt/microsoft/scx/bin/uninstall; fi

##Log file monitoring

scomadm ALL=(root) NOPASSWD: /opt/microsoft/scx/bin/scxlogfilereader -p

###Examples

#Custom shell command monitoring example -replace <shell command> with the correct command string

#scomadm ALL=(root) NOPASSWD: /bin/bash -c <shell command>

#Daemon diagnostic and restart recovery tasks example (using cron)

#scomadm ALL=(root) NOPASSWD: /bin/sh -c ps -ef | grep cron | grep -v grep

#scomadm ALL=(root) NOPASSWD: /usr/sbin/cron &

#End user configuration for Operations Manager agent

#-----------------------------------------------------------------------------------

[Jun 20, 2018] Sudo and Sudoers Configuration Servers for Hackers

Jun 20, 2018 | serversforhackers.com

%group

We can try editing a group. The following will allow group www-data to run sudo service php5-fpm * commands without a password, great for deployment!

%www-data ALL=(ALL:ALL) NOPASSWD:/usr/sbin/service php5-fpm *

Here's the same configuration as a comma-separated list of multiple commands. This lets us get more specific about which service commands we can use with php5-fpm:

%www-data ALL=(ALL:ALL) NOPASSWD:/usr/sbin/service php5-fpm reload,/usr/sbin/service php5-fpm restart

We can enforce the use of a password with some commands, but no password for others:

%admin ALL = NOPASSWD:/bin/mkdir, PASSWD:/bin/rm

[Jun 20, 2018] IBM Knowledge Center - Configuring sudo

Jun 20, 2018 | www.ibm.com
  1. Open the /etc/sudoers file with a text editor. The sudo installation includes the visudo editor, which checks the syntax of the file before closing.
  2. Add the following commands to the file. Important: Enter each command on a single line:
    # Preserve GPFS environment variables:
    Defaults env_keep += "MMMODE environmentType GPFS_rshPath GPFS_rcpPath mmScriptTrace GPFSCMDPORTRANGE GPFS_CIM_MSG_FORMAT" 
    
    # Allow members of the gpfs group to run all commands but only selected commands without a password:
    %gpfs ALL=(ALL) PASSWD: ALL, NOPASSWD: /usr/lpp/mmfs/bin/mmremote, /usr/bin/scp, /bin/echo, /usr/lpp/mmfs/bin/mmsdrrestore
    
    # Disable requiretty for group gpfs:
    Defaults:%gpfs !requiretty
    

[Jun 20, 2018] Understanding and using sudo in Unix or Linux (with examples)

Jun 20, 2018 | aplawrence.com

Limiting commands

There's more that sudo does to protect you from malicious mischief. The "man sudo" pages cover that completely. Let's continue with our examples; it's time to limit "jim" to specific commands. There are two ways to do that. We can specifically list commands, or we can say that jim can only run commands in a certain directory. A combination of those methods is useful:

jim     ALL=    /bin/kill,/sbin/linuxconf, /usr/sbin/jim/

The careful reader will note that there was a bit of a change here. The line used to read "jim ALL=(ALL) ALL", but now there's only one "ALL" left. Reading the man page can easily leave you quite confused as to what those three "ALL"s meant. In the example above, ALL refers to machines - the assumption is that this is a network-wide sudoers file. In the case of this machine (lnxserve) we could do this:

jim     lnxserve=       /bin/kill, /usr/sbin/jim/

So what was the "(ALL)" for? Well, here's a clue:

jim     lnxserve=(paul,linda)   /bin/kill, /usr/sbin/jim/

That says that jim can (using "sudo -u ") run commands as paul or linda.

This is perfect for giving jim the power to kill paul or linda's processes without giving him anything else. There is one thing we need to add though: if we just left it like this, jim is forced to use "sudo -u paul" or "sudo -u linda" every time. We can add a default "runas_default":

Defaults:jim    timestamp_timeout=-1, env_delete+="BOOP", runas_default=linda
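
With that default in place, a plain sudo from jim behaves as if "-u linda" had been given. A quick illustration (added here, not part of the quoted article; the PID is made up):

jim$ sudo kill 12345            # same as: sudo -u linda kill 12345
jim$ sudo -u paul kill 12345    # the explicit form still works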

[Jun 20, 2018] Configuring sudo Explaination with an example by Ankit Mehta

May 14, 2009 | www.linux.com

sudo commands use a basic syntax. By default, the /etc/sudoers file will have one stanza:

root      ALL=(ALL) ALL

This tells sudo to give root sudo access to everything on every host. The syntax is simple:

user       host = (user) command

The first column defines the user the command applies to. The host section defines the host this stanza applies to. The (user) section defines the user to run the command as, while the command section defines the command itself.

You can also define aliases for Hosts, Users, and Commands by using the keywords Host_Alias , User_Alias , and Cmnd_Alias respectively.

Let's take a look at a few examples of the different aliases you can use.

... ... ...

Next, let's define some User aliases:

User_Alias        WEBADMIN = ankit, sam
User_Alias        MAILADMIN = ankit, navan
User_Alias        BINADMIN = ankit, jon

Here we've also defined three User aliases. The first user alias has the name WEBADMIN for web administrators. Here we've defined Ankit and Sam. The second alias is MAILADMIN, for mail administrators, and here we have Ankit and Navan. Finally, we define an alias of BINADMIN for the regular sysadmins, again Ankit, but with Jon as well.

So far we've defined some hosts and some users. Now we get to define what commands they may be able to run, also using some aliases:

Cmnd_Alias         SU = /bin/su
Cmnd_Alias         BIN = /bin/rpm, /bin/rm, /sbin/linuxconf
Cmnd_Alias         SWATCH = /usr/bin/swatch, /bin/touch
Cmnd_Alias         HTTPD = /etc/rc.d/init.d/httpd, /etc/rc.d/init.d/mysql
Cmnd_Alias         SMTP = /etc/rc.d/init.d/qmail

Here we have a few aliases. The first we call SU, and enables the user to run the /bin/su command. The second we call BIN, which enables the user to run the commands: /bin/rpm , /bin/rm , and /sbin/linuxconf . The next is the SWATCH alias which allows the user to run /usr/bin/swatch and /bin/touch . Then we define the HTTPD alias which allows the user to execute /etc/rc.d/init.d/httpd and /etc/rc.d/init.d/mysql , for web maintenance. Finally, we define SMTP, which allows the user to manipulate the running of the qmail SMTP server...
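
A rough sketch of how these aliases might then be tied together in user specifications (a reconstruction, not from the quoted article; ALL stands in for the host aliases elided above):

WEBADMIN    ALL = HTTPD
MAILADMIN   ALL = SMTP
BINADMIN    ALL = BIN, SU

With such lines in place a member of WEBADMIN could, for example, run sudo /etc/rc.d/init.d/httpd restart, but nothing outside the HTTPD alias.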

... ... ...

[Jun 20, 2018] Running Commands as Another User via sudo

Jun 20, 2018 | www.safaribooksonline.com

You want one user to run commands as another, without sharing passwords.

Solution

Suppose you want user smith to be able to run a given command as user jones.

               /etc/sudoers:
smith  ALL = (jones) /usr/local/bin/mycommand

User smith runs:

smith$ sudo -u jones /usr/local/bin/mycommand
smith$ sudo -u jones mycommand                     # if /usr/local/bin is in $PATH

User smith will be prompted for his own password, not jones's. The ALL keyword, which matches anything, in this case specifies that the line is valid on any host.

Discussion

sudo exists for this very reason!

To authorize root privileges for smith, replace "jones" with "root" in the above example.
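
Spelled out, that substitution gives a line like this (the same rule as above with only the runas user changed):

smith  ALL = (root) /usr/local/bin/mycommand

and smith can then run it with a plain sudo /usr/local/bin/mycommand, since root is the default target user.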

[Jun 20, 2018] Quick HOWTO Ch09 Linux Users and Sudo

This article contains some pretty perverse examples that show that lists can be used on the right part of the user statement too ;-)
Jun 20, 2018 | www.linuxhomenetworking.com
Simple /etc/sudoers Examples

This section presents some simple examples of how to do many commonly required tasks using the sudo utility.

Granting All Access to Specific Users

You can grant users bob and bunny full access to all privileged commands, with this sudoers entry.

bob, bunny  ALL=(ALL) ALL

This is generally not a good idea because this allows bob and bunny to use the su command to grant themselves permanent root privileges, thereby bypassing the command logging features of sudo. The example on using aliases in the sudoers file shows how to eliminate this problem.

Granting Access To Specific Users To Specific Files

This entry allows user peter and all the members of the group operator to gain access to all the program files in the /sbin and /usr/sbin directories, plus the privilege of running the command /usr/local/apps/check.pl. Notice how the trailing slash (/) is required to specify a directory location:

peter, %operator ALL= /sbin/, /usr/sbin/, /usr/local/apps/check.pl

Notice also that the lack of any username entries within parentheses () after the = sign prevents the users from running the commands automatically masquerading as another user. This is explained further in the next example.

Granting Access to Specific Files as Another User

The sudo -u entry allows you to execute a command as if you were another user, but first you have to be granted this privilege in the sudoers file.

This feature can be convenient for programmers who sometimes need to kill processes related to projects they are working on. For example, programmer peter is on the team developing a financial package that runs a program called monthend as user accounts. From time to time the application fails, requiring "peter" to stop it with the /bin/kill, /usr/bin/kill or /usr/bin/pkill commands but only as user "accounts". The sudoers entry would look like this:

peter ALL=(accounts) /bin/kill, /usr/bin/kill, /usr/bin/pkill

User peter is allowed to stop the monthend process with this command:

[peter@bigboy peter]# sudo -u accounts pkill monthend
Granting Access Without Needing Passwords

This example allows all users in the group operator to execute all the commands in the /sbin directory without the need for entering a password. This has the added advantage of being more convenient to the user:

%operator ALL= NOPASSWD: /sbin/
Using Aliases in the sudoers File

Sometimes you'll need to assign very similar sets of privileges to random groupings of users from various departments. The sudoers file allows users to be grouped according to function, with the group then being assigned a nickname or alias which is used throughout the rest of the file. Groupings of commands can be assigned aliases too.

In the next example, users peter, bob and bunny and all the users in the operator group are made part of the user alias ADMINS. All the command shell programs are then assigned to the command alias SHELLS. Users ADMINS are then denied the option of running any SHELLS commands and su:

Cmnd_Alias    SHELLS = /usr/bin/sh,  /usr/bin/csh, \
                       /usr/bin/ksh, /usr/local/bin/tcsh, \
                       /usr/bin/rsh, /usr/local/bin/zsh
 
 
User_Alias    ADMINS = peter, bob, bunny, %operator
ADMINS        ALL    = !/usr/bin/su, !SHELLS

This attempts to ensure that users don't permanently su to become root, or enter command shells that bypass sudo's command logging. It doesn't prevent them from copying the files to other locations to be run. The advantage of this is that it helps to create an audit trail, but the restrictions can be enforced only as part of the company's overall security policy.

Other Examples

You can view a comprehensive list of /etc/sudoers file options by issuing the command man sudoers.

Using syslog To Track All sudo Commands

All sudo commands are logged in the log file /var/log/messages which can be very helpful in determining how user error may have contributed to a problem. All the sudo log entries have the word sudo in them, so you can easily get a thread of commands used by using the grep command to selectively filter the output accordingly.

Here is sample output from a user bob failing to enter their correct sudo password when issuing a command, immediately followed by the successful execution of the command /bin/more sudoers.

[root@bigboy tmp]# grep sudo /var/log/messages
Nov 18 22:50:30 bigboy sudo(pam_unix)[26812]: authentication failure; logname=bob uid=0 euid=0 tty=pts/0 ruser= rhost= user=bob
Nov 18 22:51:25 bigboy sudo: bob : TTY=pts/0 ; PWD=/etc ; USER=root ; COMMAND=/bin/more sudoers
[root@bigboy tmp]#

[Jun 20, 2018] bash - sudo as another user with their environment

Using strace is an interesting debugging tip
Jun 20, 2018 | unix.stackexchange.com

user80551 ,Jan 2, 2015 at 4:29

$ whoami
admin
$ sudo -S -u otheruser whoami
otheruser
$ sudo -S -u otheruser /bin/bash -l -c 'echo $HOME'
/home/admin

Why isn't $HOME being set to /home/otheruser even though bash is invoked as a login shell?

Specifically, /home/otheruser/.bashrc isn't being sourced. Also, /home/otheruser/.profile isn't being sourced. - ( /home/otheruser/.bash_profile doesn't exist)

EDIT: The exact problem is actually https://stackoverflow.com/questions/27738224/mkvirtualenv-with-fabric-as-another-user-fails

Pavel Šimerda ,Jan 2, 2015 at 8:29

A solution to this question will solve the other question as well, you might want to delete the other question in this situation. – Pavel Šimerda Jan 2 '15 at 8:29

Pavel Šimerda ,Jan 2, 2015 at 8:27

To invoke a login shell using sudo just use -i . When command is not specified you'll get a login shell prompt, otherwise you'll get the output of your command.

Example (login shell):

sudo -i

Example (with a specified user):

sudo -i -u user

Example (with a command):

sudo -i -u user whoami

Example (print user's $HOME ):

sudo -i -u user echo \$HOME

Note: The backslash character ensures that the dollar sign reaches the target user's shell and is not interpreted in the calling user's shell.

I have just checked the last example with strace which tells you exactly what's happening. The output below shows that the shell is being called with --login and with the specified command, just as in your explicit call to bash, but in addition sudo can do its own work like setting the $HOME .

# strace -f -e process sudo -S -i -u user echo \$HOME
execve("/usr/bin/sudo", ["sudo", "-S", "-i", "-u", "user", "echo", "$HOME"], [/* 42 vars */]) = 0
...
[pid 12270] execve("/bin/bash", ["-bash", "--login", "-c", "echo \\$HOME"], [/* 16 vars */]) = 0
...

I noticed that you are using -S and I don't think it is generally a good technique. If you want to run commands as a different user without performing authentication from the keyboard, you might want to use SSH instead. It works for localhost as well as for other hosts and provides public key authentication that works without any interactive input.

ssh user@localhost echo \$HOME

Note: You don't need any special options with SSH as the SSH server always creates a login shell to be accessed by the SSH client.

John_West ,Nov 23, 2015 at 11:12

sudo -i -u user echo \$HOME doesn't work for me. Output: $HOME . strace gives the same output as yours. What's the issue? – John_West Nov 23 '15 at 11:12

Pavel Šimerda ,Jan 20, 2016 at 19:02

No idea, it still works for me, I'd need to see it or maybe even touch the system. – Pavel Šimerda Jan 20 '16 at 19:02

Jeff Snider ,Jan 2, 2015 at 8:04

You're giving Bash too much credit. All "login shell" means to Bash is what files are sourced at startup and shutdown. The $HOME variable doesn't figure into it.

The Bash docs explain some more what login shell means: https://www.gnu.org/software/bash/manual/html_node/Bash-Startup-Files.html#Bash-Startup-Files

In fact, Bash doesn't do anything to set $HOME at all. $HOME is set by whatever invokes the shell (login, ssh, etc.), and the shell inherits it. Whatever started your shell as admin set $HOME and then exec-ed bash; sudo by design doesn't alter the environment unless asked or configured to do so, so bash as otheruser inherited it from your shell.

If you want sudo to handle more of the environment in the way you're expecting, look at the -i switch for sudo. Try:

sudo -S -u otheruser -i /bin/bash -l -c 'echo $HOME'

The man page for sudo describes it in more detail, though not really well, I think: http://linux.die.net/man/8/sudo

user80551 ,Jan 2, 2015 at 8:11

$HOME isn't set by bash - Thanks, I didn't know that. – user80551 Jan 2 '15 at 8:11

Pavel Šimerda ,Jan 2, 2015 at 9:46

Look for strace in my answer. It shows that you don't need to build /bin/bash -l -c 'echo $HOME' command line yourself when using -i .

palswim ,Oct 13, 2016 at 20:21

That sudo syntax threw an error on my machine. ( su uses the -c option, but I don't think sudo does.) I had better luck with: HomeDir=$( sudo -u "$1" -H -s echo "\$HOME" )palswim Oct 13 '16 at 20:21

[Jun 20, 2018] What are the differences between su, sudo -s, sudo -i, sudo su

Notable quotes:
"... (which means "substitute user" or "switch user") ..."
"... (hmm... what's the mnemonic? Super-User-DO?) ..."
"... The official meaning of "su" is "substitute user" ..."
"... Interestingly, Ubuntu's manpage does not mention "substitute" at all. The manpage at gnu.org ( gnu.org/software/coreutils/manual/html_node/su-invocation.html ) does indeed say "su: Run a command with substitute user and group ID". ..."
"... sudo -s runs a [specified] shell with root privileges. sudo -i also acquires the root user's environment. ..."
"... To see the difference between su and sudo -s , do cd ~ and then pwd after each of them. In the first case, you'll be in root's home directory, because you're root. In the second case, you'll be in your own home directory, because you're yourself with root privileges. There's more discussion of this exact question here . ..."
"... I noticed sudo -s doesnt seem to process /etc/profile ..."
Jun 20, 2018 | askubuntu.com

Sergey ,Oct 22, 2011 at 7:21

The main difference between these commands is in the way they restrict access to their functions.

su (which means "substitute user" or "switch user") - does exactly that, it starts another shell instance with privileges of the target user. To ensure you have the rights to do that, it asks you for the password of the target user . So, to become root, you need to know root password. If there are several users on your machine who need to run commands as root, they all need to know root password - note that it'll be the same password. If you need to revoke admin permissions from one of the users, you need to change root password and tell it only to those people who need to keep access - messy.

sudo (hmm... what's the mnemonic? Super-User-DO?) is completely different. It uses a config file (/etc/sudoers) which lists which users have rights to specific actions (run commands as root, etc.) When invoked, it asks for the password of the user who started it - to ensure the person at the terminal is really the same "joe" who's listed in /etc/sudoers . To revoke admin privileges from a person, you just need to edit the config file (or remove the user from a group which is listed in that config). This results in much cleaner management of privileges.

As a result of this, in many Debian-based systems root user has no password set - i.e. it's not possible to login as root directly.

Also, /etc/sudoers allows you to specify some additional options - i.e. user X is only able to run program Y etc.

The often-used sudo su combination works as follows: first sudo asks you for your password, and, if you're allowed to do so, invokes the next command ( su ) as a super-user. Because su is invoked by root, it does not prompt for a password itself - so in the end you entered your own password rather than root's.

So, sudo su allows you to open a shell as another user (including root), if you're allowed super-user access by the /etc/sudoers file.

dr jimbob ,Oct 22, 2011 at 13:47

I've never seen su as "switch user", but always as superuser; the default behavior without another's user name (though it makes sense). From wikipedia : "The su command, also referred to as super user[1] as early as 1974, has also been called "substitute user", "spoof user" or "set user" because it allows changing the account associated with the current terminal (window)."

Sergey ,Oct 22, 2011 at 20:33

@dr jimbob: you're right, but I'm finding that "switch user" is kinda describes better what it does - though historically it stands for "super user". I'm also delighted to find that the wikipedia article is very similar to my answer - I never saw the article before :)

Angel O'Sphere ,Nov 26, 2013 at 13:02

The official meaning of "su" is "substitute user". See: "man su". – Angel O'Sphere Nov 26 '13 at 13:02

Sergey ,Nov 26, 2013 at 20:25

@AngelO'Sphere: Interestingly, Ubuntu's manpage does not mention "substitute" at all. The manpage at gnu.org ( gnu.org/software/coreutils/manual/html_node/su-invocation.html ) does indeed say "su: Run a command with substitute user and group ID". I think gnu.org is a canonical source :) – Sergey Nov 26 '13 at 20:25

Mike Scott ,Oct 22, 2011 at 6:28

sudo lets you run commands in your own user account with root privileges. su lets you switch user so that you're actually logged in as root.

sudo -s runs a [specified] shell with root privileges. sudo -i also acquires the root user's environment.

To see the difference between su and sudo -s , do cd ~ and then pwd after each of them. In the first case, you'll be in root's home directory, because you're root. In the second case, you'll be in your own home directory, because you're yourself with root privileges. There's more discussion of this exact question here .

Sergey ,Oct 22, 2011 at 7:28

"you're yourself with root privileges" is not what's actually happening :) Actually, it's not possible to be "yourself with root privileges" - either you're root or you're yourself. Try typing whoami in both cases. The fact that cd ~ results are different is a result of sudo -s not setting $HOME environment variable. – Sergey Oct 22 '11 at 7:28

Octopus ,Feb 6, 2015 at 22:15

@Sergey, whoami it says are 'root' because you are running the 'whoami' cmd as though you sudoed it, so temporarily (for the duration of that command) you appear to be the root user, but you might still not have full root access according to the sudoers file. – Octopus Feb 6 '15 at 22:15

Sergey ,Feb 6, 2015 at 22:24

@Octopus: what I was trying to say is that in Unix, a process can only have one UID, and that UID determines the permissions of the process. You can't be "yourself with root privileges", a program either runs with your UID or with root's UID (0). – Sergey Feb 6 '15 at 22:24

Sergey ,Feb 6, 2015 at 22:32

Regarding "you might still not have full root access according to the sudoers file": the sudoers file controls who can run which command as another user, but that happens before the command is executed. However, once you were allowed to start a process as, say, root -- the running process has root's UID and has a full access to the system, there's no way for sudo to restrict that.

Again, you're always either yourself or root, there's no "half-n-half". So, if sudoers file allows you to run shell as root -- permissions in that shell would be indistinguishable from a "normal" root shell. – Sergey Feb 6 '15 at 22:32

dotancohen ,Nov 8, 2014 at 14:07

This answer is a dupe of my answer on a dupe of this question , put here on the canonical answer so that people can find it!

The major difference between sudo -i and sudo -s is:

  • sudo -i gives you the root environment, i.e. your ~/.bashrc is ignored.
  • sudo -s gives you the user's environment, so your ~/.bashrc is respected.

Here is an example, you can see that I have an application lsl in my ~/.bin/ directory which is accessible via sudo -s but not accessible with sudo -i . Note also that the Bash prompt changes as well with sudo -i but not with sudo -s :

dotancohen@melancholy:~$ ls .bin
lsl

dotancohen@melancholy:~$ which lsl
/home/dotancohen/.bin/lsl

dotancohen@melancholy:~$ sudo -i

root@melancholy:~# which lsl

root@melancholy:~# exit
logout

dotancohen@melancholy:~$ sudo -s
Sourced .bashrc

dotancohen@melancholy:~$ which lsl
/home/dotancohen/.bin/lsl

dotancohen@melancholy:~$ exit
exit

Though sudo -s is convenient for giving you the environment that you are familiar with, I recommend the use of sudo -i for two reasons:

  1. The visual reminder that you are in a 'root' session.
  2. The root environment is far less likely to be poisoned with malware, such as a rogue line in .bashrc .

meffect ,Feb 23, 2017 at 5:21

I noticed sudo -s doesnt seem to process /etc/profile , or anything I have in /etc/profile.d/ .. any idea why? – meffect Feb 23 '17 at 5:21

Marius Gedminas ,Oct 22, 2011 at 19:38

su asks for the password of the user "root".

sudo asks for your own password (and also checks if you're allowed to run commands as root, which is configured through /etc/sudoers -- by default all user accounts that belong to the "admin" group are allowed to use sudo).

sudo -s launches a shell as root, but doesn't change your working directory. sudo -i simulates a login into the root account: your working directory will be /root , and root's .profile etc. will be sourced as if on login.

DJCrashdummy ,Jul 29, 2017 at 0:58

to make the answer more complete: sudo -s is almost equal to su ($HOME is different) and sudo -i is equal to su -

In Ubuntu or a related system, I don't find much use for su in the traditional, super-user sense. sudo handles that case much better. However, su is great for becoming another user in one-off situations where configuring sudoers would be silly.

For example, if I'm repairing my system from a live CD/USB, I'll often mount my hard drive and other necessary stuff and chroot into the system. In such a case, my first command is generally:

su - myuser  # Note the '-'. It means to act as if that user had just logged in.

That way, I'm operating not as root, but as my normal user, and I then use sudo as appropriate.

[Jun 20, 2018] How to invoke login shell for another user using sudo

Notable quotes:
"... To invoke a login shell using sudo just use -i . When command is not specified you'll get a login shell prompt, otherwise you'll get the output of your command. ..."
Jun 20, 2018 | unix.stackexchange.com

To invoke a login shell using sudo just use -i . When command is not specified you'll get a login shell prompt, otherwise you'll get the output of your command.

Example (login shell):

sudo -i

Example (with a specified user):

sudo -i -u user

Example (with a command):

sudo -i -u user whoami

Example (print user's $HOME ):

sudo -i -u user echo \$HOME

[Jun 20, 2018] Changing the timeout value

Jun 20, 2018 | wiki.gentoo.org

By default, sudo asks the user to identify himself using his own password. Once a password is entered, sudo remembers it for 5 minutes, allowing the user to focus on his tasks and not repeatedly re-entering his password.

Of course, this behavior can be changed: you can set the Defaults: directive in /etc/sudoers to change the default behavior for a user.

For instance, to change the default 5 minutes to 0 (never remember):

CODE Changing the timeout value
Defaults:larry  timestamp_timeout=0

A setting of -1 would remember the password indefinitely (until the system reboots).

A different setting would be to require the password of the user that the command should be run as and not the users' personal password. This is accomplished using runaspw . In the following example we also set the number of retries (how many times the user can re-enter a password before sudo fails) to 2 instead of the default 3:
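
The corresponding entry would be something along these lines (a minimal sketch, reusing the larry user from the earlier example):

Defaults:larry  runaspw, passwd_tries=2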

[Jun 20, 2018] Bash completion with sudo

Jun 20, 2018 | wiki.gentoo.org
Bash completion

Users that want bash completion with sudo need to run this once.

user $ echo "complete -cf sudo" >> $HOME/.bashrc

[Jun 20, 2018] permission - allow sudo to another user without password

Jun 20, 2018 | apple.stackexchange.com


zio ,Feb 17, 2013 at 13:12

I want to be able to 'su' to a specific user, allowing me to run any command without a password being entered.

For example:

If my login were user1 and the user I want to 'su' to is user2:

I would use the command:

su - user2

but then it prompts me with

Password:

Global nomad ,Feb 17, 2013 at 13:17

Ask the other user for the password. At least the other user knows what's been done under his/her id. – Global nomad Feb 17 '13 at 13:17

zio ,Feb 17, 2013 at 13:24

This is nothing to do with another physical user. Both ID's are mine. I know the password as I created the account. I just don't want to have to type the password every time. – zio Feb 17 '13 at 13:24

bmike ♦ ,Feb 17, 2013 at 15:32

Would it be ok to ssh to that user or do you need to inherit one shell in particular and need su to work? – bmike ♦ Feb 17 '13 at 15:32

bmike ♦ ,Feb 17, 2013 at 23:59

@zio Great use case. Does open -na Skype not work for you? – bmike ♦ Feb 17 '13 at 23:59

user495470 ,Feb 18, 2013 at 4:50

You could also try copying the application bundle and changing CFBundleIdentifier . – user495470 Feb 18 '13 at 4:50

Huygens ,Feb 18, 2013 at 7:39

sudo can do just that for you :)

It needs a bit of configuration though, but once done you would only do this:

sudo -u user2 -s

And you would be logged in as user2 without entering a password.

Configuration

To configure sudo, you must edit its configuration file via visudo . Note: this command will open the configuration using the vi text editor; if you are uncomfortable with that, you need to set another editor (using export EDITOR=<command> ) before executing the following line. Another command line editor sometimes regarded as easier is nano , so you would do export EDITOR=/usr/bin/nano . You usually need super user privileges for visudo :

sudo visudo

This file is structured in different sections: the aliases, then the defaults, and finally, at the end, the rules. This is where you need to add the new line. So navigate to the end of the file and add this:

user1    ALL=(user2) NOPASSWD: /bin/bash

You can also replace /bin/bash with ALL, and then you could launch any command as user2 without a password: sudo -u user2 <command> .
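
In other words, the broader rule would look something like this:

user1    ALL=(user2) NOPASSWD: ALL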

Update

I have just seen your comment regarding Skype. You could consider adding Skype directly to the sudo's configuration file. I assume you have Skype installed in your Applications folder:

user1    ALL=(user2) NOPASSWD: /Applications/Skype.app/Contents/MacOS/Skype

Then you would call from the terminal:

sudo -u user2 /Applications/Skype.app/Contents/MacOS/Skype

bmike ♦ ,May 28, 2014 at 16:04

This is far less complicated than the ssh keys idea, so use this unless you need the ssh keys for remote access as well. – bmike ♦ May 28 '14 at 16:04

Stan Kurdziel ,Oct 26, 2015 at 16:56

One thing to note from a security-perspective is that specifying a specific command implies that it should be a read-only command for user1; Otherwise, they can overwrite the command with something else and run that as user2. And if you don't care about that, then you might as well specify that user1 can run any command as user2 and therefore have a simpler sudo config. – Stan Kurdziel Oct 26 '15 at 16:56

Huygens ,Oct 26, 2015 at 19:24

@StanKurdziel good point! Although it is something to be aware of, it's really seldom to have system executables writable by users unless you're root but in this case you don't need sudo ;-) But you're right to add this comment because it's so seldom that I've probably overlooked it more than one time. – Huygens Oct 26 '15 at 19:24

Gert van den Berg ,Aug 10, 2016 at 14:24

To get it nearer to the behaviour su - user2 instead of su user2 , the commands should probably all involve sudo -u user2 -i , in order to simulate an initial login as user2 – Gert van den Berg Aug 10 '16 at 14:24

bmike ,Feb 18, 2013 at 0:05

I would set up public/private ssh keys for the second account and store the key in the first account.

Then you could run a command like:

 ssh user@localhost -n /Applications/Skype.app/Contents/MacOS/Skype &

You'd still have the issues where Skype gets confused since two instances are running on one user account and files read/written by that program might conflict. It also might work well enough for your needs and you'd not need an iPod touch to run your second Skype instance.

calum_b ,Feb 18, 2013 at 9:54

This is a good secure solution for the general case of password-free login to any account on any host, but I'd say it's probably overkill when both accounts are on the same host and belong to the same user. – calum_b Feb 18 '13 at 9:54

bmike ♦ ,Feb 18, 2013 at 14:02

@scottishwildcat It's far more secure than the alternative of scripting the password and feeding it in clear text or using a variable and storing the password in the keychain and using a tool like expect to script the interaction. I just use sudo su - blah and type my password. I think the other answer covers sudo well enough to keep this as a comment. – bmike ♦ Feb 18 '13 at 14:02

calum_b ,Feb 18, 2013 at 17:47

Oh, I certainly wasn't suggesting your answer should be removed I didn't even down-vote, it's a perfectly good answer. – calum_b Feb 18 '13 at 17:47

bmike ♦ ,Feb 18, 2013 at 18:46

We appear to be in total agreement - thanks for the addition - feel free to edit it into the answer if you can improve on it. – bmike ♦ Feb 18 '13 at 18:46

Gert van den Berg ,Aug 10, 2016 at 14:20

The accepted solution ( sudo -u user2 <...> ) does have the advantage that it can't be used remotely, which might help for security - there is no private key for user1 that can be stolen. – Gert van den Berg Aug 10 '16 at 14:20

[Jun 20, 2018] linux - Automating the sudo su - user command

Jun 20, 2018 | superuser.com


sam ,Feb 9, 2011 at 11:11

I want to automate
sudo su - user

from a script. It should then ask for a password.

grawity ,Feb 9, 2011 at 12:07

Don't sudo su - user , use sudo -iu user instead. (Easier to manage through sudoers , by the way.) – grawity Feb 9 '11 at 12:07

Hello71 ,Feb 10, 2011 at 1:33

How are you able to run sudo su without being able to run sudo visudo ? – Hello71 Feb 10 '11 at 1:33

Torian ,Feb 9, 2011 at 11:37

I will try and guess what you asked.

If you want to use sudo su - user without a password, you should (if you have the privileges) do the following in your sudoers file:

<yourusername>  ALL = NOPASSWD: /bin/su - <otheruser>

where:

  • <yourusername> is your username :D (i.e., saumun89)
  • <otheruser> is the user you want to change to

Then put into the script:

sudo /bin/su - <otheruser>

Doing just this won't get subsequent commands run by <otheruser>; it will spawn a new shell. If you want to run another command from within the script as this other user, you should use something like:

 sudo -u <otheruser> <command>

And in sudoers file:

<yourusername>  ALL = (<otheruser>) NOPASSWD: <command>

Obviously, a more generic line like:

<yourusername> ALL = (ALL) NOPASSWD: ALL

Will get things done, but would grant the permission to do anything as anyone.

sam ,Feb 9, 2011 at 11:43

when the sudo su - user command gets executed,it asks for a password. i want a solution in which script automaticaaly reads password from somewhere. i dont have permission to do what u told earlier. – sam Feb 9 '11 at 11:43

sam ,Feb 9, 2011 at 11:47

i have the permission to store password in a file. the script should read password from that file – sam Feb 9 '11 at 11:47

Olli ,Feb 9, 2011 at 12:46

You can use the command
 echo "your_password" | sudo -S [rest of your parameters for sudo]

(Of course without [ and ])

Please note that you should protect your script from read access by unauthorized users. If you want to read the password from a separate file, you can use

  sudo -S [rest of your parameters for sudo] < /etc/sudo_password_file

(Or whatever is the name of password file, containing password and single line break.)

From sudo man page:

   -S          The -S (stdin) option causes sudo to read the password from
               the standard input instead of the terminal device.  The
               password must be followed by a newline character.

AlexandruC ,Dec 6, 2014 at 8:10

This actually works for me. – AlexandruC Dec 6 '14 at 8:10

Oscar Foley ,Feb 8, 2016 at 16:36

This is brilliant – Oscar Foley Feb 8 '16 at 16:36

Mikel ,Feb 9, 2011 at 11:26

The easiest way is to make it so that user doesn't have to type a password at all.

You can do that by running visudo , then changing the line that looks like:

someuser  ALL=(ALL) ALL

to

someuser  ALL=(ALL) NOPASSWD: ALL

However if it's just for one script, it would be more secure to restrict passwordless access to only that script, and remove the (ALL) , so they can only run it as root, not any user , e.g.

Cmnd_Alias THESCRIPT = /usr/local/bin/scriptname

someuser  ALL=NOPASSWD: THESCRIPT

Run man 5 sudoers to see all the details in the sudoers man page .

sam ,Feb 9, 2011 at 11:34

i do not have permission to edit sudoers file.. any other so that it should read password from somewhere so that automation of this can be done. – sam Feb 9 '11 at 11:34

Torian ,Feb 9, 2011 at 11:40

you are out of luck ... you could do this with, lets say expect but that would let the password for your user hardcoded somewhere, where people could see it (granted that you setup permissions the right way, it could still be read by root). – Torian Feb 9 '11 at 11:40

Mikel ,Feb 9, 2011 at 11:40

Try using expect . man expect for details. – Mikel Feb 9 '11 at 11:40

> ,

when the sudo su - user command gets executed,it asks for a password. i want a solution in which script automaticaaly reads password from somewhere. i dont have permission to edit sudoers file.i have the permission to store password in a file.the script should read password from that file – sam

[Jun 20, 2018] sudo - What does ALL ALL=(ALL) ALL mean in sudoers

Jun 20, 2018 | unix.stackexchange.com


LoukiosValentine79 ,May 6, 2015 at 19:29

If a server has the following in /etc/sudoers:
Defaults targetpw
ALL ALL=(ALL) ALL

Then what does this mean? all the users can sudo to all the commands, only their password is needed?

lcd047 ,May 6, 2015 at 20:51

It means "security Nirvana", that's what it means. ;) – lcd047 May 6 '15 at 20:51

poz2k4444 ,May 6, 2015 at 20:19

From the sudoers(5) man page:

The sudoers policy plugin determines a user's sudo privileges.

For the targetpw:

sudo will prompt for the password of the user specified by the -u option (defaults to root) instead of the password of the invoking user when running a command or editing a file.

sudo(8) allows you to execute commands as someone else

So, basically it says that any user can run any command on any host as any user and yes, the user just has to authenticate, but with the password of the other user, in order to run anything.

The first ALL is the users allowed
The second one is the hosts
The third one is the user you are running the command as
The last one is the commands allowed
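
For contrast, a more restrictive line using the same four fields could look like this (a made-up example with hypothetical user, host and command):

alice  web01 = (postgres) /usr/bin/psql

Here alice may run /usr/bin/psql as postgres, but only on the host web01; and because of the targetpw default above she would be asked for the postgres password, not her own.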

LoukiosValentine79 ,May 7, 2015 at 16:37

Thanks! In the meantime I found the "Defaults targetpw" entry in sudoers.. updated the Q – LoukiosValentine79 May 7 '15 at 16:37

poz2k4444 ,May 7, 2015 at 18:24

@LoukiosValentine79 I just update the answer, does that answer your question? – poz2k4444 May 7 '15 at 18:24

evan54 ,Feb 28, 2016 at 20:24

wait he has to enter his own password not of the other user right? – evan54 Feb 28 '16 at 20:24

x-yuri ,May 19, 2017 at 12:20

with targetpw the one of the other (target) user – x-yuri May 19 '17 at 12:20

[Jun 20, 2018] sudo - What is ALL ALL=!SUDOSUDO for

Jun 20, 2018 | unix.stackexchange.com

gasko peter ,Dec 6, 2012 at 12:50

The last line of the /etc/sudoers file is:
grep -i sudosudo /etc/sudoers
Cmnd_Alias SUDOSUDO = /usr/bin/sudo
ALL ALL=!SUDOSUDO

why? What does it exactly do?

UPDATE#1: Now I know that it prevents users to use the: "/usr/bin/sudo".

UPDATE#2: not allowing "root ALL=(ALL) ALL" is not a solution.

Updated Question: What is better besides this "SUDOSUDO"? (the problem with this is that the sudo binary could be copied..)

Chris Down ,Dec 6, 2012 at 12:53

SUDOSUDO is probably an alias. Does it exist elsewhere in the file? – Chris Down Dec 6 '12 at 12:53

gasko peter ,Dec 6, 2012 at 14:21

question updated :D - so what does it means exactly? – gasko peter Dec 6 '12 at 14:21

gasko peter ,Dec 6, 2012 at 14:30

is "ALL ALL=!SUDOSUDO" as the last line is like when having DROP iptables POLICY and still using a -j DROP rule as last rule in ex.: INPUT chain? :D or does it has real effects? – gasko peter Dec 6 '12 at 14:30

Kevin ,Dec 6, 2012 at 14:48

I'm not 100% sure, but I believe it only prevents anyone from running sudo sudo ... . – Kevin Dec 6 '12 at 14:48

[Mar 29, 2018] Answers to questions to recover password should never be truthful

Notable quotes:
"... So long as you choose from fictional sources which mean something to you, it's pretty easy to remember those answers. ..."
Mar 29, 2018 | discussion.theguardian.com

AlanAudio -> SamXTherapy , 28 Mar 2018 10:24

It's easy to use simple to remember associations from fiction.

For instance, your first school could be Grange Hill, Greyfriars or St Trinians.

First car could be Genevieve, Chitty Chitty Bang Bang, or maybe James Bond's Aston Martin.

Mother's maiden name could be a favorite author while your first pet's name could be Lassie, Trigger or Peter Rabbit.

So long as you choose from fictional sources which mean something to you, it's pretty easy to remember those answers.

[Jan 29, 2018] How Much Swap Should You Use in Linux by Abhishek Prakash

Red Hat recommends a swap size of 20% of RAM for modern systems (i.e. 4GB or higher RAM).
Notable quotes:
"... So many people (including this article) are misinformed about the Linux swap algorithm. It doesn't just check if your RAM reaches a certain usage point. It's incredibly complicated. Linux will swap even if you are using only 20-50% of your RAM. Inactive processes are often swapped and swapping inactive processes makes more room for buffer and cache. Even if you have 16GB of RAM, having a swap partition can be beneficial ..."
Jan 25, 2018 | itsfoss.com

27 Comments

How much should be the swap size? Should the swap be double of the RAM size or should it be half of the RAM size? Do I need swap at all if my system has got several GBs of RAM? Perhaps these are the most commonly asked questions about choosing swap size while installing Linux. It's nothing new. There has always been a lot of confusion around swap size.

For a long time, the recommended swap size was double the RAM size, but that golden rule is not applicable to modern computers anymore. We have systems with RAM sizes up to 128 GB; many old computers don't even have that much hard disk space.

... ... ...

Swap acts as a breather for your system when the RAM is exhausted. What happens here is that when the RAM is exhausted, your Linux system uses part of the hard disk as memory and allocates it to the running application.

That sounds cool. This means if you allocate like 50GB of swap size, your system can run hundreds or perhaps thousands of applications at the same time? WRONG!

You see, the speed matters here. RAM accesses data in the order of nanoseconds. An SSD accesses data in microseconds, while a normal hard disk accesses data in milliseconds. This means that RAM is 1,000 times faster than an SSD and 100,000 times faster than the usual HDD.

If an application relies too much on the swap, its performance will degrade as it cannot access the data at the same speed as it would have in RAM. So instead of taking 1 second for a task, it may take several minutes to complete the same task. It will leave the application almost useless. This is known as thrashing in computing terms.

In other words, a little swap is helpful. A lot of it will be of no good use.

Why is swap needed?

There are several reasons why you would need swap.

... ... ...

Can you use Linux without swap?

Yes, you can, especially if your system has plenty of RAM. But as explained in the previous section, a little bit of swap is always advisable.

How much should be the swap size?

... ... ...

If you go by Red Hat's suggestion , they recommend a swap size of 20% of RAM for modern systems (i.e. 4GB or higher RAM).

CentOS has a different recommendation for the swap partition size . It suggests swap size to be:

Ubuntu has an entirely different perspective on the swap size as it takes hibernation into consideration. If you need hibernation, a swap of the size of RAM becomes necessary for Ubuntu. Otherwise, it recommends:

... ... ...

Jaden

So many people (including this article) are misinformed about the Linux swap algorithm. It doesn't just check if your RAM reaches a certain usage point. It's incredibly complicated. Linux will swap even if you are using only 20-50% of your RAM. Inactive processes are often swapped and swapping inactive processes makes more room for buffer and cache. Even if you have 16GB of RAM, having a swap partition can be beneficial (especially if hibernating)

kaylee

I have 4 gigs of ram on old laptop running cinnamon going by this, it is set at 60 ( what does 60 mean, and would 10 be better ) i do a little work with blender ( VSE and just starting to mess with 3d text ) should i change to 10
thanks

a. First check your current swappiness value. Type in the terminal (use copy/paste):

cat /proc/sys/vm/swappiness

Press Enter.

The result will probably be 60.

b. To change the swappiness into a more sensible setting, type in the terminal (use copy/paste to avoid typo's):

gksudo xed /etc/sysctl.conf

Press Enter.

Now a text file opens. Scroll to the bottom of that text file and add your swappiness parameter to override the default. Copy/paste the following two lines:

# Decrease swap usage to a more reasonable level
vm.swappiness=10

c. Save and close the text file. Then reboot your computer.
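
As a side note, the new value can usually be applied immediately, without a reboot, from the terminal (the line in /etc/sysctl.conf still makes it persist across reboots):

sudo sysctl vm.swappiness=10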

DannyB

I have 32 GB of memory. Since I use SSD and no actual hard drive, having Swap would add wear to my SSD. For more than two years now I have used Linux Mint with NO SWAP.

Rationale: a small 2 GB of extra "cushion" doesn't really matter. If programs that misbehave use up 32 GB, then they'll use up 34 GB. If I had 32 GB of SWAP for a LOT of cushion, then a misbehaving program is likely to use it all up anyway.

In practice I have NEVER had a problem with 32 GB with no swap at all. At install time I made the decision to try this (which has been great in hindsight) knowing that if I really did need swap later, I could always configure a swap FILE instead of a swap PARTITION.

But I've never needed to and have never looked back at that decision to use no swap. I would recommend it.

John_Betong

No swap on Ubuntu 17.04 I am pleased to say http://www.omgubuntu.co.uk/2016/12/ubuntu-17-04-drops-swaps-swap-partitions-swap-files

Yerry Sherry

A very good explanation why you SHOULD use swap: https://chrisdown.name/2018/01/02/in-defence-of-swap.htm

[Nov 20, 2017] Sudoers - Community Help Wiki

Notable quotes:
"... The special command '"sudoedit"' allows users to run sudo with the -e flag or as the command sudoedit . If you include command line arguments in a command in an alias these must exactly match what the user enters on the command line. If you include any of the following they will need to be escaped with a backslash (\): ",", "\", ":", "=". ..."
Nov 09, 2017 | help.ubuntu.com

... ... ...

Aliases

There are four kinds of aliases: User_Alias, Runas_Alias, Host_Alias and Cmnd_Alias. Each alias definition is of the form:

 Alias_Type NAME = item1, item2, item3

Where Alias_Type is one of User_Alias, Runas_Alias, Host_Alias or Cmnd_Alias. A name is a string of uppercase letters, numbers and underscores starting with an uppercase letter. You can put several aliases of the same type on one line by separating them with colons (:) as so:

 Alias_Type NAME_1 = item1, item2 : NAME_2 = item3, item4

You can include other aliases in an alias specification provided they would normally fit there. For example you can use a user alias wherever you would normally expect to see a list of users (for example in a user or runas alias).

There are also built in aliases called ALL which match everything where they are used. If you used ALL in place of a user list it matches all users for example. If you try and set an alias of ALL it will be overridden by this built in alias so don't even try.

User Aliases

User aliases are used to specify groups of users. You can specify usernames, system groups (prefixed by a %) and netgroups (prefixed by a +) as follows:

 # Everybody in the system group "admin" is covered by the alias ADMINS
 User_Alias ADMINS = %admin
 # The users "tom", "dick", and "harry" are covered by the USERS alias
 User_Alias USERS = tom, dick, harry
 # The users "tom" and "mary" are in the WEBMASTERS alias
 User_Alias WEBMASTERS = tom, mary
 # You can also use ! to exclude users from an alias
 # This matches anybody in the USERS alias who isn't in WEBMASTERS or ADMINS aliases
 User_Alias LIMITED_USERS = USERS, !WEBMASTERS, !ADMINS
Runas Aliases

Runas Aliases are almost the same as user aliases but you are allowed to specify users by uid's. This is helpful as usernames and groups are matched as strings so two users with the same uid but different usernames will not be matched by entering a single username but can be matched with a uid. For example:

 # UID 0 is normally used for root
 # Note the hash (#) on the following line indicates a uid, not a comment.
 Runas_Alias ROOT = #0
 # This is for all the admin users similar to the User_Alias of ADMINS set earlier 
 # with the addition of "root"
 Runas_Alias ADMINS = %admin, root
Host Aliases

A host alias is a list of hostnames, IP addresses, networks and netgroups (prefixed with a +). If you do not specify a netmask with a network, the netmask of the host's ethernet interface(s) will be used when matching.

 # This is all the servers
 Host_Alias SERVERS = 192.168.0.1, 192.168.0.2, server1
 # This is the whole network
 Host_Alias NETWORK = 192.168.0.0/255.255.255.0
 # And this is every machine in the network that is not a server
 Host_Alias WORKSTATIONS = NETWORK, !SERVERS
 # This could have been done in one step with 
 # Host_Alias WORKSTATIONS = 192.168.0.0/255.255.255.0, !SERVERS
 # but I think this method is clearer.
Command Aliases

Command aliases are lists of commands and directories. You can use this to specify a group of commands. If you specify a directory it will include any file within that directory but not in any subdirectories.

The special command '"sudoedit"' allows users to run sudo with the -e flag or as the command sudoedit . If you include command line arguments in a command in an alias these must exactly match what the user enters on the command line. If you include any of the following they will need to be escaped with a backslash (\): ",", "\", ":", "=".

Examples:

 # All the shutdown commands
 Cmnd_Alias SHUTDOWN_CMDS = /sbin/poweroff, /sbin/reboot, /sbin/halt
 # Printing commands
 Cmnd_Alias PRINTING_CMDS = /usr/sbin/lpc, /usr/sbin/lprm
 # Admin commands
 Cmnd_Alias ADMIN_CMDS = /usr/sbin/passwd, /usr/sbin/useradd, /usr/sbin/userdel, /usr/sbin/usermod, /usr/sbin/visudo
 # Web commands
 Cmnd_Alias WEB_CMDS = /etc/init.d/apache2
User Specifications

User Specifications are where the sudoers file sets who can run what as whom. It is the key part of the file and all the aliases have just been set up for this very point. If this was a film, this part is where all the key threads of the story come together in the glorious unveiling before the final climactic ending. Basically it is important, and without this you ain't going anywhere.

A user specification is in the format

<user list> <host list> = <operator list> <tag list> <command list>

The user list is a list of users or a user alias that has already been set, the host list is a list of hosts or a host alias, the operator list is a list of users they must be running as or a runas alias and the command list is a list of commands or a cmnd alias.

The tag list has not been covered yet and allows you to set special things for each command. You can use PASSWD and NOPASSWD to specify whether the user has to enter a password or not, and you can also use NOEXEC to prevent any programs launching shells themselves (as once a program is running with sudo it has full root privileges, so it could launch a root shell to circumvent any restrictions in the sudoers file).

For example (using the aliases and users from earlier)

 # This lets the webmasters run all the web commands on the machine 
 # "webserver" provided they give a password
 WEBMASTERS webserver= WEB_CMDS
 # This lets the admins run all the admin commands on the servers
 ADMINS SERVERS= ADMIN_CMDS
 # This lets all the USERS run admin commands on the workstations provided 
 # they give the root password or an admin password (using "sudo -u <username>")
 USERS WORKSTATIONS=(ADMINS) ADMIN_CMDS
 # This lets "harry" shutdown his own machine without a password
 harry harrys-machine= NOPASSWD: SHUTDOWN_CMDS
 # And this lets everybody print without requiring a password
 ALL ALL=(ALL) NOPASSWD: PRINTING_CMDS
The Default Ubuntu Sudoers File

The sudoers file that ships with Ubuntu 8.04 by default is included here so if you break everything you can restore it if needed and also to highlight some key things.

# /etc/sudoers
#
# This file MUST be edited with the 'visudo' command as root.
#
# See the man page for details on how to write a sudoers file.
#

Defaults    env_reset

# Uncomment to allow members of group sudo to not need a password
# %sudo ALL=NOPASSWD: ALL

# Host alias specification

# User alias specification

# Cmnd alias specification

# User privilege specification
root    ALL=(ALL) ALL

# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL

This is pretty much empty and only has three rules in it. The first ( Defaults env_reset ) resets the terminal environment after switching to root, i.e. all user-set variables are removed. The second ( root ALL=(ALL) ALL ) just lets root do everything on any machine as any user. And the third ( %admin ALL=(ALL) ALL ) lets anybody in the admin group run anything as any user. Note that they will still require a password (thus giving you the normal behaviour you are so used to).

If you want to add your own specifications and you are a member of the admin group then you will need to add them after this line. Otherwise all your changes will be overridden by this line saying you (as part of the admin group) can do anything on any machine as any user provided you give a password.

Common Tasks

This section includes some common tasks and how to accomplish them using the sudoers file.

Shutting Down From The Console Without A Password

Often people want to be able to shut their computers down without requiring a password to do so. This is particularly useful in media PCs where you want to be able to use the shutdown command in the media centre to shutdown the whole computer.

To do this you need to add some cmnd aliases as follows:

Cmnd_Alias SHUTDOWN_CMDS = /sbin/poweroff, /sbin/halt, /sbin/reboot

You also need to add a user specification (at the end of the file after the " %admin ALL = (ALL) ALL " line so it takes effect - see above for details):

<your username> ALL=(ALL) NOPASSWD: SHUTDOWN_CMDS

Obviously you need to replace "<your username>" with the username of the user who needs to be able to shutdown the pc without a password. You can use a user alias here as normal.

Multiple tags on a line

There are times where you need to have both NOPASSWD and NOEXEC or other tags on the same configuration line. The man page for sudoers is less than clear, so here is an example of how this is done:

myuser ALL = (root) NOPASSWD:NOEXEC: /usr/bin/vim

This example lets the user "myuser" run as root the "vim" binary without a password, and without letting vim shell out (the :shell command).

Enabling Visual Feedback when Typing Passwords

As of Ubuntu 10.04 (Lucid), you can enable visual feedback when you are typing a password at a sudo prompt.

Simply edit /etc/sudoers and change the Defaults line to read:

Defaults        env_reset,pwfeedback
Troubleshooting

If your changes don't seem to have had any effect, check that they are not trying to use aliases that are not defined yet and that no other user specifications later in the file are overriding what you are trying to accomplish.

[Nov 19, 2017] Understanding sudoers syntax

Notable quotes:
"... A command may also be the full path to a directory (including a trailing /). This permits execution of all the files in that directory, but not in any subdirectories. ..."
"... The keyword sudoedit is also recognised as a command name, and arguments can be specified as with other commands. Use this instead of allowing a particular editor to be run with sudo, because it runs the editor as the user and only installs the editor's output file into place as root (or other target user). ..."
Nov 09, 2017 | toroid.org

User specifications

The /etc/sudoers file contains "user specifications" that define the commands that users may execute. When sudo is invoked, these specifications are checked in order, and the last match is used. A user specification looks like this at its most basic:

User Host = (Runas) Command

Read this as "User may run Command as the Runas user on Host".

Any or all of the above may be the special keyword ALL, which always matches.

User and Runas may be usernames, group names prefixed with %, numeric UIDs prefixed with #, or numeric GIDs prefixed with %#. Host may be a hostname, IP address, or a whole network (e.g., 192.0.2.0/24), but not 127.0.0.1.

Runas

This optional clause controls the target user (and group) sudo will run the Command as, or in other words, which combinations of the -u and -g arguments it will accept.

If the clause is omitted, the user will be permitted to run commands only as root. If you specify a username, e.g., (postgres), sudo will accept "-u postgres" and run commands as that user. In both cases, sudo will not accept -g.

If you also specify a target group, e.g., (postgres:postgres), sudo will accept any combination of the listed users and groups (see the section on aliases below). If you specify only a target group, e.g., (:postgres), sudo will accept and act on "-g postgres" but run commands only as the invoking user.

This is why you see (ALL:ALL) in so many examples.

Commands

In the simplest case, a command is the full path to an executable, which permits it to be executed with any arguments. You may specify a list of arguments after the path to permit the command only with those exact arguments, or write "" to permit execution only without any arguments.

A command may also be the full path to a directory (including a trailing /). This permits execution of all the files in that directory, but not in any subdirectories.

ams ALL=/bin/ls, /bin/df -h /, /bin/date "", \
        /usr/bin/, sudoedit /etc/hosts, \
        OTHER_COMMANDS

The keyword sudoedit is also recognised as a command name, and arguments can be specified as with other commands. Use this instead of allowing a particular editor to be run with sudo, because it runs the editor as the user and only installs the editor's output file into place as root (or other target user).

As shown above, comma-separated lists of commands and aliases may be specified. Commands may also use shell wildcards either in the path or in the argument list (but see the warning below about the latter).

Sudo is very flexible, and it's tempting to set up very fine-grained access, but it can be difficult to understand the consequences of a complex setup, and you can end up with unexpected problems. Try to keep things simple.

Options

Before the command, you can specify zero or more options to control how it will be executed. The most important options are NOPASSWD (to not require a password) and SETENV (to allow the user to set environment variables for the command).

ams ALL=(ALL) NOPASSWD: SETENV: /bin/ls

Other available options include NOEXEC, LOG_INPUT and LOG_OUTPUT, and SELinux role and type specifications. These are all documented in the manpage.
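For instance, NOEXEC can keep a permitted pager or editor from spawning further commands (a sketch; the username and path are assumptions):

ams ALL = (root) NOEXEC: /usr/bin/less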

Digests

The path to a binary (i.e., not a directory or alias) may also be prefixed with a digest:

ams ALL=(ALL) sha224:IkotndXGTmZtH5ZNFtRfIwkG0WuiuOs7GoZ+6g== /bin/ls

The specified binary will then be executed only if it matches the digest. SHA-2 digests of 224, 256, 384, and 512-bits are accepted in hex or Base64 format. The values can be generated using, e.g., sha512sum or openssl.
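One way to produce a Base64-format digest for such an entry is sketched below (sha512sum and friends print hex output instead):

openssl dgst -sha224 -binary /bin/ls | openssl base64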

Aliases

In addition to the things listed above, a User, Host, Runas, or Command may be an alias, which is a named list of comma-separated values of the corresponding type. An alias may be used wherever a User, Host, Runas, or Command may occur. They are always named in uppercase, and can be defined as shown in these examples:

# Type_Alias NAME = a, b : NAME_2 = c, d

User_Alias TRUSTED = %admin, !ams
Runas_Alias LEGACYUSERS = oldapp1, oldapp2
Runas_Alias APPUSERS = app1, app2, LEGACYUSERS
Host_Alias PRODUCTION = www1, www2, \
    192.0.2.1/24, !192.0.2.222
Cmnd_Alias DBA = /usr/pgsql-9.4/bin, \
    /usr/local/bin/pgadmin

An alias definition can also include another alias of the same type (e.g., LEGACYUSERS above). You cannot include options like NOPASSWD: in command aliases.

Any term in a list may be prefixed with ! to negate it. This can be used to include a group but exclude a certain user, or to exclude certain addresses in a network, and so on. Negation can also be used in command lists, but note the manpage's warning that trying to "subtract" commands from ALL using ! is generally not effective.

Use aliases whenever you need rules involving multiple users, hosts, or commands.

Default options

Sudo has a number of options whose values may be set in the configuration file, overriding the defaults either unconditionally, or only for a given user, host, or command. The defaults are sensible, so you do not need to care about options unless you're doing something special.

Option values are specified in one or more "Defaults" lines. The example below switches on env_reset, turns off insults (read !insults as "not insults"), sets password_tries to 4, and so on. All the values are set unconditionally, i.e. they apply to every user specification.

Defaults env_reset, !insults, password_tries=4, \
    lecture=always
Defaults passprompt="Password for %p:"

Options may also be set only for specific hosts, users, or commands, as shown below. Defaults@host sets options for a host, Defaults:user for a (requesting) user, Defaults!command for a command, and Defaults>user for a target user. You can also use aliases in these definitions.

Defaults@localhost insults
Defaults:ams insults, !lecture
Defaults>root mail_always, mailto="foo@example.org"

Cmnd_Alias FOO = /usr/bin/foo, /usr/bin/bar, \
    /usr/local/bin/baz
Defaults!FOO always_set_home

Unconditional defaults are parsed first, followed by host and user defaults, then runas defaults, then command defaults.

The many available options are explained well in the manpage.

Complications

In addition to the alias mechanism, a User, Host, Runas, or Command may each be a comma-separated list of things of the corresponding type. Also, a user specification may contain multiple host and command sets for a single User. Please be sparing in your use of this syntax, in case you ever have to make sense of it again.
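For illustration only, a sketch of that syntax (the hosts, user, and commands are assumptions):

ams www1, www2 = /bin/ls, /bin/df -h / : ALL = (postgres) /usr/bin/psql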

Users and hosts can also be a +netgroup or other more esoteric things, depending on plugins. Host names may also use shell wildcards (see the fqdn option).

If Runas is omitted but the () are not, sudo will reject -u and -g and run commands only as the invoking user.

You can use wildcards in command paths and in arguments, but their meaning is different. In a path, a * will not match a /, so /usr/bin/* will match /usr/bin/who but not /usr/bin/X11/xterm. In arguments, a * does match /; also, arguments are matched as a single string (not a list of separate words), so * can match across words. The manpage includes the following problematic example, which permits additional arguments to be passed to /bin/cat without restriction:

%operator ALL = /bin/cat /var/log/messages*
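To see why this is dangerous: because the * also matches spaces, extra arguments slip through, so an invocation like the following would be permitted:

sudo cat /var/log/messages /etc/shadow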

Warning: Sudo will not work if /etc/sudoers contains syntax errors, so you should only ever edit it using visudo, which performs basic sanity checks, and installs the new file only if it parses correctly.

Another warning: if you take the EBNF in the manpage seriously enough, you will discover that the implementation doesn't follow it. You can avoid this sad fate by linking to this article instead of trying to write your own. Happy sudoing!

[Nov 11, 2017] Example of the sudoers file

Nov 09, 2017 | support.symantec.com

Example of the sudoers file

This is an example of the contents of the sudoers file, which is located in the /etc directory of the UNIX target computer. This example contains sample configurations required to use the sudo functionality as mentioned in the section Using sudo functionality for querying Oracle UNIX targets.

# User alias specification
##
User_Alias UNIX_USERS = unix1, unix2, unix3
User_Alias BV_CONTROL_USERS = bvunix1, bvunix2, bvunix3
##
# Runas alias specification
Defaults:UNIX_USERS !authenticate
Defaults:BV_CONTROL_USERS !authenticate
##
Runas_Alias SUPER_USERS = root
Defaults logfile=/var/log/sudolog
##
# Cmnd alias specification
##
Cmnd_Alias APPLICATIONS = /usr/sbin/named
Cmnd_Alias AIX_ADMINCMDS = /usr/sbin/lsps, /usr/sbin/lsattr
Cmnd_Alias ADMINCMDS = /usr/sbin/prtconf, /sbin/runlevel, ulimit, AIX_ADMINCMDS
Cmnd_Alias NETWORKCMDS = /sbin/ifconfig, /usr/local/bin/nslookup, inetadm -p
Cmnd_Alias FILECMDS = /bin/cat, /bin/date '+%Z', /usr/bin/strings -n, \
   /usr/bin/diff, /usr/bin/cmp, /usr/bin/find, \
   /bin/echo, /usr/bin/file, /bin/df -P, \
   /usr/bin/cksum, /bin/ls -la, /bin/ls -lad, \
   /bin/ls -lac, /bin/ls -lau
#Cmnd_Alias COMMONCMDS = /usr/bin, /bin, /usr/local/bin
Cmnd_Alias SU = /usr/bin/su
Cmnd_Alias SYSADMCMD = /usr/lib/sendmail
Cmnd_Alias ACTIVEADMCMDS = /usr/sbin/adduser
UNIX_USERS ALL = (SUPER_USERS) APPLICATIONS, NETWORKCMDS, ADMINCMDS, FILECMDS, !SU, !ACTIVEADMCMDS, !SYSADMCMD, NOPASSWD: ALL
BV_CONTROL_USERS ALL = NOPASSWD: ALL

See Using sudo functionality for querying Oracle UNIX targets .

See Disabling password prompt in the sudoers file .

See Minimum required privileges to query an Oracle database .

[Nov 10, 2017] Make sudo work harder

Notable quotes:
"... timestamp_timeout ..."
www.linux.com
Also at www.ibm.com/developerworks

Managing sudoers

Over time, your sudoers file will grow with more and more entries, which is to be expected. This could be because more application environments are being placed on the server, or because the delegation of current tasks is being split down further to segregate responsibility. With many entries, typos are common. Making the sudoers file more manageable by the root user makes good administrative sense. Let's look at two ways this can be achieved, or at least a good standard to build on. If you have many static entries (meaning the same command is run on every machine where sudo is installed), put these into a separate sudoers file, which can be achieved using the include directive.

Having many entries for individual users can also be time consuming when adding or amending entries. With many user entries, it is good practice to gather them into groups; the groups used must be valid AIX groups.

Now look at these two methods more closely.

Include file

Within large-enterprise environments, keeping the sudoers file maintained is an important and regularly required task. A solution to make this chore easier is to reorganize the sudoers file. One way to do this is to extract entries that are static or reusable, where the same commands are run on every box: audit and security checks, storix backups, or general performance reports, for example. With sudo you can now use the include directive for these. The main sudoers file then contains the local entries, and the include file barely needs editing, because its entries are static. When visudo is invoked and it sees the include entry, it scans that file, then comes back to the main sudoers file and carries on scanning. In practice it works like this: when you exit visudo from the main sudoers file, it takes you to the include file for editing; once you quit the include file, you are back at the AIX prompt. You can have more than one include file, but I cannot think of a reason why you would want more than one.

Let's call our secondary sudoers file sudo_static.<hostname>. In the examples in this demonstration the hostname I am using is rs6000. In the main sudoers file, make the entry as follows:

#include /etc/sudo_static.rs6000

Next, add some entries to the /etc/sudo_static.rs6000 file. You do not have to put in all the sudoers directives or stanzas; if this file contains entries that are not required, don't include them. For example, my include file contains only the following text, and nothing more:

bravo rs6000 = (root) NOPASSWD: /usr/opt/db2_08_01/adm/db2licd -end
bravo rs6000 = (root) NOPASSWD: /usr/opt/db2_08_01/adm/db2licd
bravo rs6000 = (db2inst) NOPASSWD: /home/db2inst/sqllib/adm/db2start
bravo rs6000 = (db2inst) NOPASSWD: /home/db2inst/sqllib/adm/db2stop force

In the include directive you can use %h instead of typing the actual hostname. I personally do not use this method because I have experienced extra characters being returned in the hostname; that issue is fixed in sudo 1.7.2p1.

When you run visudo and you save and quit the file, visudo will prompt you to press Enter to edit the included sudoers file. Once you have edited that file, visudo will pick up any syntax errors, as with the main file. Alternatively, to edit the include file directly, use:

visudo -f /etc/sudo_static.rs6000
Using groups

Users belonging to a valid AIX group can be included in sudoers, making the sudoers file more manageable with fewer entries per user. When reorganizing the sudoers entries to include groups, you may have to create new groups under AIX to hold users that are only allowed to use sudo for certain commands. To use groups, simply prefix the entries with a '%'. Assume you have groups called devops and devuat, and those groups have the following users:

# lsgroup -f -a users devops
devops:
        users=joex,delta,charlie,tstgn
# lsgroup -f -a users devuat
devuat:
        users=zebra,spsys,charlie

Suppose the group devops should be allowed to run the /usr/local/bin/data_ext.sh command as dbdftst, and the group devuat should be allowed to run the commands /usr/local/bin/data_mvup.sh and /usr/local/bin/data_rep.sh as dbukuat.

We could have the following sudoers entries:

%devops rs6000 = (dbdftst) NOPASSWD: /usr/local/bin/data_ext.sh
%devuat rs6000 = (dbukuat) /usr/local/bin/data_mvup.sh
%devuat rs6000 = (dbukuat) /usr/local/bin/data_rep.sh

Notice that in the previous entries, the devops group users will not be prompted for their password when executing /usr/local/bin/data_ext.sh; however, the devuat group users will be prompted for their password. User "charlie" is a member of both groups (devops and devuat), so he can execute all of the above commands.

Timeout with sudo

Sudo has a feature that uses time tickets to determine how long since the last sudo command was run. During this time period, the user can re-run the command without being prompted for the password (that's the user's own password). Once this time allotment has ended, the user is prompted for the password again to re-run the command. If the user gives the correct password, the command is executed, the ticket is then re-set, and the time clock starts all over again. The ticket feature will not work if you have NOPASSWD in the user's entry in sudoers. The default timeout is five minutes. If you wish to change the default value, simply put an entry in sudoers. For example, to set the timeout value for user "bravo" on any commands he runs to 20 minutes, you could use:

Defaults:bravo timestamp_timeout=20

To destroy the ticket, as the user, use:

$ sudo -k

When the ticket is destroyed, the user will be prompted for his password again, when running a sudo command.

Please do not set the timeout value for all users, as this can cause problems, especially when running batch jobs that take longer to run than normal. To disable this feature, use the value -1 for the timestamp_timeout variable. The time tickets are directory entries, named after the user, located in /var/run/sudo.
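For example, to keep the ticket from ever expiring for a hypothetical batch account (the username is an assumption):

Defaults:batchusr timestamp_timeout=-1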

Those variables

As discussed earlier, sudo will strip out potentially dangerous system variables. To check which variables are kept and which are stripped, use sudo -V. The output will give you a listing of preserved and stripped variables. Stripping out LIBPATH is clearly an inconvenience. There are a couple of ways around this: either write a wrapper script or specify the environment variables on the command line. Looking at the wrapper script solution first, suppose you have an application that stops or starts a DB2® instance. You could create a bare-bones script that keeps the variables intact. In Listing 1 (rc.db2), notice that you source the instance profile, which in turn exports the various LIBPATH and DB2 environment variables, keeping the environment intact, by using:

. /home/$inst/sqllib/db2profile

For completeness, the entries in sudoers that allow this to be executed without stripping out any system environment variables are:

bravo rs6000 = (dbinst4) NOPASSWD: /home/dbinst4/sqllib/adm/db2start
bravo rs6000 = (dbinst4) NOPASSWD: /home/dbinst4/sqllib/adm/db2stop force
bravo rs6000 = (dbinst4) NOPASSWD: /usr/local/bin/rc.db2 stop db2inst4
bravo rs6000 = (dbinst4) NOPASSWD: /usr/local/bin/rc.db2 start db2inst4

Note in this example, user "bravo" can execute the above commands as user "dbinst4." Typically, the user would run:

sudo -u dbinst4 /usr/local/bin/rc.db2 stop db2inst4
sudo -u dbinst4 /usr/local/bin/rc.db2 start db2inst4
Listing 1. rc.db2
#!/bin/sh
# rc.db2
# stop/start db2 instances

# check to see if db2 inst is running
db2_running()
{
    state=`ps -ef | grep db2sysc | grep -v grep | awk '$1=="'${inst}'" { print $1 }'`
    if [ "$state" = "" ]
    then
        return 1
    else
        return 0
    fi
}

usage ()
{
    echo "`basename $0` start | stop <instance>"
}

# stop db2
stop_db2 ()
{
    echo "stopping db2 instance as user $inst"
    if [ -f /home/$inst/sqllib/db2profile ]; then
        . /home/$inst/sqllib/db2profile
    else
        echo "Cannot source DB2..exiting"
        exit 1
    fi
    /home/$inst/sqllib/adm/db2stop force
}

# start db2
start_db2 ()
{
    echo "starting db2 instance as user $inst"
    if [ -f /home/$inst/sqllib/db2profile ]; then
        . /home/$inst/sqllib/db2profile
    else
        echo "Cannot source DB2..exiting"
        exit 1
    fi
    /home/$inst/sqllib/adm/db2start
}

# check we get 2 params
if [ $# != 2 ]
then
    usage
    exit 1
fi

inst=$2

case "$1" in
Start|start)
    if db2_running
    then
        echo "db2 instance $inst appears to be already running"
        exit 0
    else
        echo "instance not running as user $inst..attempting to start it"
        start_db2 $inst
    fi
    ;;
Stop|stop)
    if db2_running
    then
        echo "instance running as $inst..attempting to stop it"
        stop_db2 $inst
    else
        echo "db2 instance $inst appears to be not running anyway"
        exit 0
    fi
    ;;
*)
    usage
    ;;
esac

The other way to preserve system environment variables is to use the Defaults !env_reset directive in sudoers:

Defaults !env_reset

Then from the command line, specify the environment variable name with its value:

$ sudo LIBPATH="/usr/lib:/opt/db2_09_05/lib64" -u delta /usr/local/bin/datapmp

If you do not put the !env_reset entry in, you will get the following error from sudo when you try to run the command:

sudo: sorry, you are not allowed to set the following environment variables: LIBPATH

If you find that sudo is also stripping out other environment variables, you can specify the variable name in sudoers so that sudo keeps those variables intact (with the Defaults env_keep += directive). For instance, suppose sudo was stripping out the application variables DSTAGE_SUP and DSTAGE_META from one of my sudo-ised scripts. To preserve these variables, I could put the following entries in sudoers:

Defaults env_keep += "DSTAGE_SUP"
Defaults env_keep += "DSTAGE_META"

Notice that I give the variable name and not the variable value. The values are already contained in my script like this:

export DSTAGE_SUP=/opt/dstage/dsengine; export DSTAGE_META=/opt/dstage/db2

Now when the sudo script is executed, the above environment variables are preserved.

Securing the sudo path

A default PATH within sudoers can be imposed using the secure_path directive. This directive specifies where to look for binaries and commands when a user executes a sudo command. This option clearly tries to lock down specific areas where a user runs a sudo command, which is good practice. Use the following directive in sudoers, specifying the secure PATH with its search directories:

Defaults secure_path="/usr/local/sbin:/usr/local/bin:/opt/freeware/bin:/usr/sbin"
Getting restrictive

Restrictions can be put in place to prevent specific users from running certain commands or certain command arguments. Assume you have a group called dataex, whose members are "alpha", "bravo", and "charlie". That group has been allowed to run the sudo command /usr/local/bin/mis_ext *, where the asterisk represents the many parameters passed to the script. However, user "charlie" is not allowed to execute that script if the parameter is import. This type of condition can be met by using the logical NOT '!' operator. Here is how that is achieved in sudoers:

%dataex rs6000 = (dbmis) NOPASSWD: /usr/local/bin/mis_ext *
charlie rs6000 = (dbmis) NOPASSWD: !/usr/local/bin/mis_ext import

Note that the logical NOT operator entries go after the non-restrictive entry. Many conditional NOT entries can be applied on the same line; just make sure that they are comma separated, like so:

charlie rs6000 = (dbmis) NOPASSWD: /usr/local/bin/aut_pmp *
charlie rs6000 = (dbmis) NOPASSWD: !/usr/local/bin/aut_pmp create, \
    !/usr/local/bin/aut_pmp delete, \
    !/usr/local/bin/aut_pmp amend
When in visudo, do not think that just saving the entry and staying in visudo will make the changes effective; it won't. You must exit visudo for the changes to take effect.

Rolling out sudo commands

Rolling out sudo commands to remote hosts in an enterprise environment is best done using an ssh script run as root, with keys already exchanged between the hosts for password-less logins. Let's look at one example of how to do this. With geographically remote machines, if you get a hardware issue of some sort (disk or memory), the IBM® engineer will be on-site to replace the failing hardware. There will be occasions when they require the root password to carry out their task. One procedure you might want to put in place is that, for the engineer to gain access to root, they must use sudo. Informing the engineer of the password prior to the visit would be advantageous. Listing 2 demonstrates one way you could roll out this configuration. Looking more closely at Listing 2, a for loop contains the list of hosts you are pushing out to. (Generally, though, you would have these hosts in a text file and read them in using a while loop.) Using the 'here' document method, a backup copy of sudoers is made, and an entry is then appended to sudoers, like so:

# -- ibmeng sudo root
ibmeng host1 = (root) NOPASSWD:ALL

Next, the user "ibmeng" is created, and the password is set for the user using chpasswd. In this demonstration, it is ibmpw. A message is then appended to their profile, informing the user how to sudo to root. So when the engineer logs in, he is presented with the message:

IBM Engineer, to access root account type: sudo -u root su -

Of course the account for ibmeng would be locked after the visit.

Listing 2. dis_ibm
#!/bin/sh
# dis_ibm
dest_hosts='host1 host2 host3 host4'
for host in $dest_hosts
do
  echo "doing [$host]"
  ssh -T -t -l root $host<<'mayday'
  host=`hostname`
  cp /etc/sudoers /etc/sudoers.bak
  if [ $? != 0 ]
  then
    echo "error: unable to cp sudoers file"
    exit 1
  fi
  echo "# -- ibmeng sudo root\nibmeng $host = (root) NOPASSWD:ALL">>/etc/sudoers
  mkuser su=false ibmeng
  if [ $? = 0 ]
  then
    echo "ibmeng:ibmpw" | chpasswd -c
  else
    echo "error: unable to create user ibmeng and or passwd"
    exit 1
  fi
  chuser gecos='IBM engineer acc' ibmeng
  if [ -f /home/ibmeng/.profile ]
  then
    echo "echo \"IBM Engineer, to access root account type: sudo -u root su -\"" >>/home/ibmeng/.profile
  fi
mayday
done
Conclusion

Sudo allows you to control who can run what commands as whom. But you must understand the features of sudoers fully to grasp its implications and the responsibility that comes with it.



[Nov 09, 2017] Add an netgroup in sudoers instead a group

Nov 09, 2017 | hd.free.fr

5 thoughts on "sudo command & sudoers file : Concepts and Practical examples"

  2. Andres Ferreo July 16, 2014 at 21:18

I'd like to add a netgroup in sudoers instead of a group. Is that possible? How should I do this setup?

    Thanks.

    1. Pier Post author July 17, 2014 at 22:50

In order to use a netgroup in the sudoers file, you just need to explicitly define it as a netgroup by using the "+" sign (instead of the "%" sign that would be used for a system group).

      You will need to include this netgroup inside a User_Alias (you may want to create a new User_Alias for this purpose)
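For example, a sketch (the netgroup name, target user, and command are assumptions):

User_Alias DB_ADMINS = +dbadmins
DB_ADMINS ALL = (oracle) /usr/bin/sqlplus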

Please check the "3.1.2 User_Alias" section for more info, and feel free to ask for a more detailed explanation.

      Hope this helps.

      Pier.

  3. Matthew February 14, 2014 at 15:43

    Great info, just diving into the world of this, and was trying to figure out how to limit a login to run a cache clearing command

    user ALL=NOPASSWD: rm -rf /usr/nginx/cache/*

but I got a syntax error

    1. Pier Post author February 17, 2014 at 07:22

      Hi,

Looks like you forgot the following part of the command specs:
3. (ALL): This is the part that specifies which user(s) you may act as.

      Check the 2.1 Section of the current page, you may want to have something like :
      user ALL=(ALL) NOPASSWD: /sbin/rm -rf /usr/nginx/cache/*

Always use the full path for any given command: this will prevent you from using a bad aliased command.

[Oct 25, 2017] How to Lock User Accounts After Failed Login Attempts

Oct 25, 2017 | www.tecmint.com

How to Lock User Accounts After Consecutive Failed Authentications

You can configure the above functionality in the /etc/pam.d/system-auth and /etc/pam.d/password-auth files, by adding the entries below to the auth section.

auth    required       pam_faillock.so preauth silent audit deny=3 unlock_time=600
auth    [default=die]  pam_faillock.so authfail audit deny=3 unlock_time=600

Where:

    audit: logs the username to the system log if the user does not exist.
    deny=3: locks the account after 3 unsuccessful attempts.
    unlock_time=600: keeps the account locked for 600 seconds (10 minutes), after which it is automatically unlocked.

Note that the order of these lines is very important; a wrong configuration can cause all user accounts to be locked.

The auth section in both files should have the content below arranged in this order:

auth        required      pam_env.so
auth        required      pam_faillock.so preauth silent audit deny=3 unlock_time=300
auth        sufficient    pam_unix.so  nullok  try_first_pass
auth        [default=die]  pam_faillock.so  authfail  audit  deny=3  unlock_time=300
auth        requisite     pam_succeed_if.so uid >= 1000 quiet_success
auth        required      pam_deny.so

Now open these two files with your choice of editor.

# vi /etc/pam.d/system-auth
# vi /etc/pam.d/password-auth

The default entries in auth section both files looks like this.

#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth        required      pam_env.so
auth        sufficient    pam_fprintd.so
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 1000 quiet
auth        required      pam_deny.so

After adding the above settings, it should appear as follows.

#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth        required      pam_env.so
auth        required      pam_faillock.so preauth silent audit deny=3 unlock_time=300
auth        sufficient    pam_fprintd.so
auth        sufficient    pam_unix.so nullok try_first_pass
auth        [default=die]  pam_faillock.so  authfail  audit  deny=3  unlock_time=300
auth        requisite     pam_succeed_if.so uid >= 1000 quiet
auth        required      pam_deny.so

Then add the following entry (the pam_faillock line) to the account section in both of the above files.

account     required      pam_unix.so
account     sufficient    pam_localuser.so
account     sufficient    pam_succeed_if.so uid < 500 quiet
account     required      pam_permit.so
account     required      pam_faillock.so
How to Lock Root Account After Failed Login Attempts

To lock the root account after failed authentication attempts, add the even_deny_root option to the lines in both files in the auth section like this.

auth        required      pam_faillock.so preauth silent audit deny=3 even_deny_root unlock_time=300
auth        [default=die]  pam_faillock.so  authfail  audit  deny=3 even_deny_root unlock_time=300

Once you have configured everything, you can restart remote access services such as sshd for the above policy to take effect, that is, if users will employ ssh to connect to the server.

# systemctl restart sshd  [On SystemD]
# service sshd restart    [On SysVInit]
How to Test SSH User Failed Login Attempts

From the above settings, we configured the system to lock a user's account after failed authentication attempts.

In this scenario, the user tecmint is trying to switch to the user aaronkilik, but after repeated incorrect logins caused by a wrong password, indicated by the "Permission denied" message, the aaronkilik account is locked, as shown by the "authentication failure" message on the fourth attempt.

Test User Failed Login Attempts

The root user is also notified of the failed login attempts on the system, as shown in the screen shot below.

Failed Login Attempts Message

How to View Failed Authentication Attempts

You can see all failed authentication logs using the faillock utility, which is used to display and modify the authentication failure log.

You can view failed login attempts for a particular user like this.

# faillock --user aaronkilik
View User Failed Login Attempts

To view all unsuccessful login attempts, run faillock without any argument like so:

# faillock

To clear a user's authentication failure logs, run this command.

# faillock --user aaronkilik --reset
OR
# faillock --reset  # clears all authentication failure records

Lastly, to tell the system not to lock out a particular user or users after several unsuccessful login attempts, add the pam_succeed_if entry shown below (the second line), just above where pam_faillock is first called under the auth section in both files ( /etc/pam.d/system-auth and /etc/pam.d/password-auth ), as follows.

Simply add the colon-separated usernames to the user in option:

auth  required      pam_env.so
auth   [success=1 default=ignore] pam_succeed_if.so user in tecmint:aaronkilik 
auth   required      pam_faillock.so preauth silent audit deny=3 unlock_time=600
auth   sufficient    pam_unix.so  nullok  try_first_pass
auth   [default=die]  pam_faillock.so  authfail  audit  deny=3  unlock_time=600
auth   requisite     pam_succeed_if.so uid >= 1000 quiet_success
auth   required      pam_deny.so

For more information, see the pam_faillock and faillock man pages.

# man pam_faillock
# man faillock

[Sep 25, 2017] Artificial Intelligence Just Made Guessing Your Password a Whole Lot Easier by Matthew Hutson

Looks like pseudo-science. When the number of attempts is limited to five or seven, AI is of little or no help... Only when password files (say, the shadow file) are stolen might those methods be deployed effectively. If the number of attempts to decode a password is unlimited, then you definitely can use heuristic strategies to limit the space in which you generate probes, such as the "style" of password selection seen in previously stolen archives for the same user.
Sep 15, 2017 | www.sciencemag.org

Researchers at the Stevens Institute of Technology used artificial intelligence to generate a program that successfully guessed 27 percent of the passwords from more than 43 million LinkedIn profiles. The team employed a generative adversarial network (GAN), PassGAN, featuring two artificial neural networks -- a "generator" that produces artificial outputs resembling real examples, and a "discriminator" that attempts to differentiate real from false examples.

New York University's Martin Arjovsky says the work "confirms that there are clear, important problems where applying simple machine-learning solutions can bring a crucial advantage."

However, Cornell Tech's Thomas Ristenpart says this same GAN-based methodology could be applied to help users and enterprises rate password strength, as well as "potentially be used to generate decoy passwords to help detect breaches."

Meanwhile, Stevens' Giuseppe Ateniese says PassGAN can invent passwords indefinitely, noting, "if you give enough data to PassGAN, it will be able to come up with rules that humans cannot think about."

[Aug 29, 2017] The booklet for common tasks on a Linux system.

Aug 29, 2017 | bumble.sourceforge.net

This booklet is designed to help with common tasks on a Linux system. It is designed to be presentable as a series of "recipes" for accomplishing common tasks. These recipes consist of a plain English one-line description, followed by the Linux command which carries out the task.

The document is focused on performing tasks in Linux using the 'command line' or 'console'.

The format of the booklet was largely inspired by the "Linux Cookbook" www.dsl.org/cookbook

[Aug 06, 2017] uefi - CentOS Kickstart Installation - Error populating transaction

Aug 06, 2017 | superuser.com

I am trying to perform a network unattended installation for my servers. They are all UEFI systems and I have gotten them to successfully boot over the network, load grub2, and start the kickstart script for installation.

It seems to reach the point where it runs yum update, although I am not entirely sure. It downloads the CentOS image from the mirror fine, then tells me "error populating transaction" 10 times and quits.

I've run through this multiple times with different mirrors, so I don't think this is a bad image problem.

Here is an image of the error.

Here is the compiled code for my kickstart script.

install
url --url http://mirror.umd.edu/centos/7/os/x86_64/
lang en_US.UTF-8
selinux --enforcing
keyboard us
skipx

network --bootproto dhcp --hostname r2s2.REDACTED.com --device=REDACTED
rootpw --iscrypted REDACTED
firewall --service=ssh
authconfig --useshadow --passalgo=SHA256 --kickstart
timezone --utc UTC
services --disabled gpm,sendmail,cups,pcmcia,isdn,rawdevices,hpoj,bluetooth,openibd,avahi-daemon,avahi-dnsconfd,hidd,hplip,pcscd




bootloader --location=mbr --append="nofb quiet splash=quiet" 


zerombr
clearpart --all --initlabel
autopart



text
reboot

%packages
yum
dhclient
ntp
wget
@Core
redhat-lsb-core
%end

%post --nochroot
exec < /dev/tty3 > /dev/tty3
#changing to VT 3 so that we can see whats going on....
/usr/bin/chvt 3
(
cp -va /etc/resolv.conf /mnt/sysimage/etc/resolv.conf
/usr/bin/chvt 1
) 2>&1 | tee /mnt/sysimage/root/install.postnochroot.log
%end
%post
logger "Starting anaconda r2s2.REDACTED.com postinstall"
exec < /dev/tty3 > /dev/tty3
#changing to VT 3 so that we can see whats going on....
/usr/bin/chvt 3
(



# eno1 interface
real=`ip -o link | awk '/REDACTED/ {print $2;}' | sed s/:$//`
sanitized_real=`echo $real | sed s/:/_/`


cat << EOF > /etc/sysconfig/network-scripts/ifcfg-$sanitized_real
BOOTPROTO="dhcp"
DEVICE=$real
HWADDR="REDACTED"
ONBOOT=yes
PEERDNS=yes
PEERROUTES=yes
DEFROUTE=yes
EOF



#update local time
echo "updating system time"
/usr/sbin/ntpdate -sub 0.fedora.pool.ntp.org
/usr/sbin/hwclock --systohc


rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm


# update all the base packages from the updates repository
if [ -f /usr/bin/dnf ]; then
  dnf -y update
else
  yum -t -y update
fi


# SSH keys setup snippet for Remote Execution plugin
#
# Parameters:
#
# remote_execution_ssh_keys: public keys to be put in ~/.ssh/authorized_keys
#
# remote_execution_ssh_user: user for which remote_execution_ssh_keys will be
#                            authorized
#
# remote_execution_create_user: create user if it not already existing
#
# remote_execution_effective_user_method: method to switch from ssh user to
#                                         effective user
#
# This template sets up SSH keys in any host so that as long as your public
# SSH key is in remote_execution_ssh_keys, you can SSH into a host. This only
# works in combination with Remote Execution plugin.

# The Remote Execution plugin queries smart proxies to build the
# remote_execution_ssh_keys array which is then made available to this template
# via the host's parameters. There is currently no way of supplying this
# parameter manually.
# See http://projects.theforeman.org/issues/16107 for details.









if [ -f /usr/bin/dnf ]; then
  dnf -y install puppet
else
  yum -t -y install puppet
fi

cat > /etc/puppet/puppet.conf << EOF


[main]
vardir = /var/lib/puppet
logdir = /var/log/puppet
rundir = /var/run/puppet
ssldir = \$vardir/ssl

[agent]
pluginsync      = true
report          = true
ignoreschedules = true
ca_server       = foreman.REDACTED.com
certname        = r2s2.lab.REDACTED.com
environment     = production
server          = foreman.REDACTED.com

EOF

puppet_unit=puppet
/usr/bin/systemctl list-unit-files | grep -q puppetagent && puppet_unit=puppetagent
/usr/bin/systemctl enable ${puppet_unit}
/sbin/chkconfig --level 345 puppet on

# export a custom fact called 'is_installer' to allow detection of the installer environment in Puppet modules
export FACTER_is_installer=true
# passing a non-existent tag like "no_such_tag" to the puppet agent only initializes the node
/usr/bin/puppet agent --config /etc/puppet/puppet.conf --onetime --tags no_such_tag --server foreman.REDACTED.com --no-daemonize




sync

# Inform the build system that we are done.
echo "Informing Foreman that we are built"
wget -q -O /dev/null --no-check-certificate http://foreman.REDACTED.com/unattended/built?token=REDACTED
) 2>&1 | tee /root/install.post.log
exit 0

%end

[Aug 06, 2017] Some basics of MBR vs GPT and BIOS vs UEFI - Manjaro Linux

Aug 06, 2017 | wiki.manjaro.org

MBR

A master boot record (MBR) is a special type of boot sector at the very beginning of partitioned computer mass storage devices like fixed disks or removable drives intended for use with IBM PC-compatible systems and beyond. The concept of MBRs was publicly introduced in 1983 with PC DOS 2.0.

The MBR holds the information on how the logical partitions, containing file systems, are organized on that medium. Besides that, the MBR also contains executable code to function as a loader for the installed operating system, usually by passing control over to the loader's second stage, or in conjunction with each partition's volume boot record (VBR). This MBR code is usually referred to as a boot loader.

The organization of the partition table in the MBR limits the maximum addressable storage space of a disk to 2 TiB (2^32 × 512 bytes). Therefore, the MBR-based partitioning scheme is in the process of being superseded by the GUID Partition Table (GPT) scheme in new computers. A GPT can coexist with an MBR in order to provide some limited form of backwards compatibility for older systems. [1]

GPT

GUID Partition Table (GPT) is a standard for the layout of the partition table on a physical hard disk, using globally unique identifiers (GUID). Although it forms a part of the Unified Extensible Firmware Interface (UEFI) standard (Unified EFI Forum proposed replacement for the PC BIOS), it is also used on some BIOS systems because of the limitations of master boot record (MBR) partition tables, which use 32 bits for storing logical block addresses (LBA) and size information.

MBR-based partition table schemes insert the partitioning information for (usually) four "primary" partitions in the master boot record (MBR) (which on a BIOS system is also the container for code that begins the process of booting the system). In a GPT, the first sector of the disk is reserved for a "protective MBR" such that booting a BIOS-based computer from a GPT disk is supported, but the boot loader and O/S must both be GPT-aware. Regardless of the sector size, the GPT header begins on the second logical block of the device. [2]


GPT uses modern logical block addressing (LBA) in place of the cylinder-head-sector (CHS) addressing used with MBR. Legacy MBR information is contained in LBA 0, the GPT header is in LBA 1, and the partition table itself follows. In 64-bit Windows operating systems, 16,384 bytes, or 32 sectors, are reserved for the GPT, leaving LBA 34 as the first usable sector on the disk. [3]

MBR vs. GPT

Compared with an MBR disk, a GPT disk can support volumes larger than 2 TB, which MBR cannot. A GPT disk can be basic or dynamic, just like an MBR disk. GPT disks also support up to 128 partitions, rather than the 4 primary partitions to which MBR is limited. Also, GPT keeps a backup of the partition table at the end of the disk. Furthermore, a GPT disk provides greater reliability due to replication and cyclical redundancy check (CRC) protection of the partition table. [4]

The GUID partition table (GPT) disk partitioning style supports volumes up to 18 exabytes in size and up to 128 partitions per disk, compared to the master boot record (MBR) disk partitioning style, which supports volumes up to 2 terabytes in size and up to 4 primary partitions per disk (or three primary partitions, one extended partition, and unlimited logical drives). Unlike MBR partitioned disks, data critical to platform operation is located in partitions instead of unpartitioned or hidden sectors. In addition, GPT partitioned disks have redundant primary and backup partition tables for improved partition data structure integrity. [5]

BIOS

In IBM PC compatible computers, the Basic Input/Output System (BIOS), also known as System BIOS, ROM BIOS or PC BIOS, is a de facto standard defining a firmware interface. The name originated from the Basic Input/Output System used in the CP/M operating system in 1975. The BIOS software is built into the PC, and is the first software run by a PC when powered on.

The fundamental purposes of the BIOS are to initialize and test the system hardware components, and to load a bootloader or an operating system from a mass memory device. The BIOS additionally provides abstraction layer for the hardware, i.e. a consistent way for application programs and operating systems to interact with the keyboard, display, and other input/output devices. Variations in the system hardware are hidden by the BIOS from programs that use BIOS services instead of directly accessing the hardware. Modern operating systems ignore the abstraction layer provided by the BIOS and access the hardware components directly. [6]

UEFI

The Unified Extensible Firmware Interface (UEFI) (pronounced as an initialism U-E-F-I or like "unify" without the n) is a specification that defines a software interface between an operating system and platform firmware. UEFI is meant to replace the Basic Input/Output System (BIOS) firmware interface, present in all IBM PC-compatible personal computers. In practice, most UEFI images provide legacy support for BIOS services. UEFI can support remote diagnostics and repair of computers, even without another operating system.

The original EFI (Extensible Firmware Interface) specification was developed by Intel. Some of its practices and data formats mirror ones from Windows. In 2005, UEFI deprecated EFI 1.10 (the final release of EFI). The UEFI specification is managed by the Unified EFI Forum.


BIOS vs. UEFI

UEFI enables better use of bigger hard drives. Though UEFI supports the traditional master boot record (MBR) method of hard drive partitioning, it doesn't stop there. It's also capable of working with the GUID Partition Table (GPT), which is free of the limitations the MBR places on the number and size of partitions. GPT ups the maximum partition size from 2.19 TB to 9.4 zettabytes.

UEFI may be faster than the BIOS. Various tweaks and optimizations in the UEFI may help your system boot more quickly than it could before. For example: with UEFI you may not have to endure messages asking you to set up hardware functions (such as a RAID controller) unless your immediate input is required; and UEFI can choose to initialize only certain components. The degree to which a boot is sped up will depend on your system configuration and hardware, so you may see a significant or a minor speed increase.

Technical changes abound in UEFI. UEFI has room for more useful and usable features than could ever be crammed into the BIOS. Among these are cryptography, network authentication, support for extensions stored on non-volatile media, an integrated boot manager, and even a shell environment for running other EFI applications such as diagnostic utilities or flash updates. In addition, both the architecture and the drivers are CPU-independent, which opens the door to a wider variety of processors (including those using the ARM architecture, for example).

However, UEFI is still not widespread. Though major hardware companies have switched over almost exclusively to UEFI use, you still won't find the new firmware in use on all motherboards, or in quite the same way across the spectrum. Many older and less expensive motherboards also still use the BIOS system. [7]

MBR vs. GPT and BIOS vs. UEFI

Usually, MBR and BIOS (MBR + BIOS), and GPT and UEFI (GPT + UEFI) go hand in hand. This is compulsory for some systems (eg Windows), while optional for others (eg Linux).

http://en.wikipedia.org/wiki/GUID_Partition_Table#Operating_systems_support

http://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface#DISKDEVCOMPAT

Converting from MBR to GPT

From http://www.rodsbooks.com/gdisk/mbr2gpt.html

One of the more unusual features of gdisk is its ability to read an MBR partition table or BSD disklabel and convert it to GPT format without damaging the contents of the partitions on the disk. This feature exists to enable upgrading to GPT in case the limitations of MBRs or BSD disklabels become too onerous, for instance, if you want to add more OSes to a multi-boot configuration, but the OSes you want to add require too many primary partitions to fit on an MBR disk.

Conversions from MBR to GPT work because of inefficiencies in the MBR partitioning scheme. On an MBR disk, the bulk of the first cylinder of the disk goes unused; only the first sector (which holds the MBR itself) is used. Depending on the disk's CHS geometry, this first cylinder is likely to be sufficient space to store the GPT header and partition table. Likewise, space is likely to go unused at the end of the disk, because the last cylinder (as seen by the BIOS and whatever tool originally partitioned the disk) will be incomplete, so the last few sectors will go unused. This leaves space for the backup GPT header and partition table. (Disks partitioned with 1 MiB alignment sometimes leave no gaps at the end of the disk, which can prevent conversion to GPT format, at least unless you delete or resize the final partition.)

The task of converting MBR to GPT therefore becomes one of extracting the MBR data and stuffing the data into the appropriate GPT locations. Partition start and end points are straightforward to manage, with one important caveat: GPT fdisk ignores the CHS values and uses the LBA values exclusively. This means that the conversion will fail on disks that were partitioned with very old software. If the disk is over 8 GiB in size, though, GPT fdisk should find the data it needs.

Once the conversion is complete, there will be a series of gaps between partitions. Gaps at the start and end of the partition set will be related to the inefficiencies mentioned earlier that permit the conversion to work. Additional gaps before each partition that used to be a logical partition exist because of inefficiencies in the way logical partitions are allocated. These gaps are likely to be quite small (a few kilobytes), so you're unlikely to be able to put useful partitions in those spaces. You could resize your partitions with GNU Parted to remove the gaps, but the risks of such an operation outweigh the very small benefits of recovering a few kilobytes of disk space.
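As a rough outline of how such a conversion is typically driven with gdisk (a sketch, assuming the target disk is /dev/sdb; back up the disk and its partition table first):

# save a copy of the existing MBR partition table
sfdisk -d /dev/sdb > sdb-mbr-backup.txt

# gdisk reads the MBR table and builds an in-memory GPT; nothing is written until 'w'
gdisk /dev/sdb
# at the gdisk prompt:
#   p  - print the converted partition table and review it
#   v  - verify the disk and report any problems (e.g., no room for the backup GPT)
#   w  - write the new GPT to disk (this is the point of no return)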

Switching from BIOS to UEFI

See: UEFI_-_Install_Guide#Switching_from_BIOS_to_UEFI

Note

Switching from [MBR + BIOS] to [GPT + UEFI]

Switching from BIOS to UEFI consists of 2 parts:

i. Conversion of the disk from MBR to GPT. Side effects: possible data loss; other OSes installed on the same disk may or may not boot (eg Windows).

ii. Changing from BIOS to UEFI (and installing GRUB in UEFI mode). Side effects: other OSes (both Linux and Windows) may or may not boot; with systemd you need to comment out the swap partition in /etc/fstab on a GPT partition table (if you use a swap partition).

After converting from MBR to GPT, your installed Manjaro will probably not work, so you should prepare beforehand for what to do in such a case (eg, chroot using a live disk and install GRUB the UEFI way).

And Windows 8, if installed the MBR way, would need to be repaired or reinstalled in accordance with the UEFI way.


[Jun 09, 2017] Sneaky hackers use Intel management tools to bypass Windows firewall

Notable quotes:
"... the group's malware requires AMT to be enabled and serial-over-LAN turned on before it can work. ..."
"... Using the AMT serial port, for example, is detectable. ..."
"... Do people really admin a machine through AMT through an external firewall? ..."
"... Businesses demanded this technology and, of course, Intel beats the drum for it as well. While I understand their *original* concerns I would never, ever connect it to the outside LAN. A real admin, in jeans and a tee, is a much better solution. ..."
Jun 09, 2017 | arstechnica.com
When you're a bad guy breaking into a network, the first problem you need to solve is, of course, getting into the remote system and running your malware on it. But once you're there, the next challenge is usually to make sure that your activity is as hard to detect as possible. Microsoft has detailed a neat technique used by a group in Southeast Asia that abuses legitimate management tools to evade firewalls and other endpoint-based network monitoring.

The group, which Microsoft has named PLATINUM, has developed a system for sending files, such as new payloads to run and new versions of their malware, to compromised machines. PLATINUM's technique leverages Intel's Active Management Technology (AMT) to do an end-run around the built-in Windows firewall. The AMT firmware runs at a low level, below the operating system, and it has access to not just the processor, but also the network interface.

The AMT needs this low-level access for some of the legitimate things it's used for. It can, for example, power cycle systems, and it can serve as an IP-based KVM (keyboard/video/mouse) solution, enabling a remote user to send mouse and keyboard input to a machine and see what's on its display. This, in turn, can be used for tasks such as remotely installing operating systems on bare machines. To do this, AMT not only needs to access the network interface, it also needs to simulate hardware, such as the mouse and keyboard, to provide input to the operating system.

But this low-level operation is what makes AMT attractive for hackers: the network traffic that AMT uses is handled entirely within AMT itself. That traffic never gets passed up to the operating system's own IP stack and, as such, is invisible to the operating system's own firewall or other network monitoring software. The PLATINUM software uses another piece of virtual hardware, an AMT-provided virtual serial port, to provide a link between the network itself and the malware application running on the infected PC.

Communication between machines uses serial-over-LAN traffic, which is handled by AMT in firmware. The malware connects to the virtual AMT serial port to send and receive data. Meanwhile, the operating system and its firewall are none the wiser. In this way, PLATINUM's malware can move files between machines on the network while being largely undetectable to those machines.

PLATINUM uses AMT's serial-over-LAN (SOL) to bypass the operating system's network stack and firewall. (Diagram: Microsoft)

AMT has been under scrutiny recently after the discovery of a long-standing remote authentication flaw that enabled attackers to use AMT features without needing to know the AMT password. This in turn could be used to enable features such as the remote KVM to control systems and run code on them.

However, that's not what PLATINUM is doing: the group's malware requires AMT to be enabled and serial-over-LAN turned on before it can work. This isn't exploiting any flaw in AMT; the malware just uses the AMT as it's designed in order to do something undesirable.

Both the PLATINUM malware and the AMT security flaw require AMT to be enabled in the first place; if it's not turned on at all, there's no remote access. Microsoft's write-up of the malware expressed uncertainty about this part; it's possible that the PLATINUM malware itself enabled AMT (if the malware has Administrator privileges, it can enable many AMT features from within Windows), or that AMT was already enabled and the malware managed to steal the credentials.

While this novel use of AMT is useful for transferring files while evading firewalls, it's not undetectable. Using the AMT serial port, for example, is detectable. Microsoft says that its own Windows Defender Advanced Threat Protection can even distinguish between legitimate uses of serial-over-LAN and illegitimate ones. But it's nonetheless a neat way of bypassing one of the more common protective measures that we depend on to detect and prevent unwanted network activity.

potato44819 , Ars Legatus Legionis Jun 8, 2017 8:59 PM Popular

"Microsoft says that its own Windows Defender Advanced Threat Protection can even distinguish between legitimate uses of serial-over-LAN and illegitimate ones. But it's nonetheless a neat way of bypassing one of the more common protective measures that we depend on to detect and prevent unwanted network activity."

It's worth noting that this is NOT Windows Defender.

Windows Defender Advanced Threat Protection is an enterprise product.

aexcorp , Ars Scholae Palatinae Jun 8, 2017 9:04 PM Popular
This is pretty fascinating and clever TBH. AMT might be convenient for sysadmin, but it's proved to be a massive PITA from the security perspective. Intel needs to really reconsider its approach or drop it altogether.

"it's possible that the PLATINUM malware itself enabled AMT-if the malware has Administrator privileges, it can enable many AMT features from within Windows"

I've only had 1 machine that had AMT (a Thinkpad T500 that somehow still runs like a charm despite hitting the 10yrs mark this summer), and AMT was toggled directly via the BIOS (this is all pre-UEFI.) Would Admin privileges be able to overwrite a BIOS setting? Would it matter if it was handled via UEFI instead?

bothered , Ars Scholae Palatinae Jun 8, 2017 9:16 PM
Always on and undetectable. What more can you ask for? I have to imagine that an IDS system at the egress point would help here.
faz , Ars Praefectus Jun 8, 2017 9:18 PM
Using SOL and AMT to bypass the OS sounds like it would work over SOL and IPMI as well.

I only have one server that supports AMT, I just double-checked that the webui for AMT does not allow you to enable/disable SOL. It does not, at least on my version. But my IPMI servers do allow someone to enable SOL from the web interface.

xxx, Jun 8, 2017 9:24 PM
But do we know of an exploit over AMT? I wouldn't think any router firewall would allow packets bound for an AMT to go through. Is this just a mechanism to move within a LAN once an exploit has a beachhead? That is not a small thing, but it would give us a way to gauge the severity of the threat.

Do people really admin a machine through AMT through an external firewall?

zogus , Ars Tribunus Militum Jun 8, 2017 9:26 PM
fake-name wrote:

Hi there! I do hardware engineering, and I wish more computers had serial ports. Just because you don't use them doesn't mean their disappearance is "fortunate".

Just out of curiosity, what do you use on the PC end when you still do require traditional serial communication? USB-to-RS232 adapter?

bthylafh , Ars Tribunus Angusticlavius Jun 8, 2017 9:34 PM Popular
zogus wrote:
Just out of curiosity, what do you use on the PC end when you still do require traditional serial communication? USB-to-RS232 adapter?
tomca13 , Wise, Aged Ars Veteran Jun 8, 2017 9:53 PM
This PLATINUM group must be pissed about the INTEL-SA-00075 vulnerability being headline news. All those perfectly vulnerable systems having AMT disabled and limiting their hack.
Darkness1231 , Ars Tribunus Militum et Subscriptor Jun 8, 2017 10:41 PM
Causality wrote:
Intel AMT is a fucking disaster from a security standpoint. It is utterly dependent on security through obscurity with its "secret" coding, and anybody should know that security through obscurity is no security at all.
Businesses demanded this technology and, of course, Intel beats the drum for it as well. While I understand their *original* concerns I would never, ever connect it to the outside LAN. A real admin, in jeans and a tee, is a much better solution.

Hopefully, either Intel will start looking into improving this and/or MSFT will make enough noise that businesses might learn to do their update, provisioning in a more secure manner.

Nah, that ain't happening. Who am I kidding?

Darkness1231 , Ars Tribunus Militum et Subscriptor Jun 8, 2017 10:45 PM
meta.x.gdb wrote:
But do we know of an exploit over AMT? I wouldn't think any router firewall would allow packets bound for an AMT to go through. Is this just a mechanism to move within a LAN once an exploit has a beachhead? That is not a small thing, but it would give us a way to gauge the severity of the threat. Do people really admin a machine through AMT through an external firewall?
The interconnect is via W*. We ran this dog into the ground last month. Other OSs (all as far as I know (okay, !MSDOS)) keep them separate. Lan0 and lan1 as it were. However it is possible to access the supposedly closed off Lan0/AMT via W*. Which is probably why this was caught in the first place.

Note that MSFT has stepped up to the plate here. This is much better than their traditional silence-until-forced approach, which is just the same security through plugging your fingers in your ears that Intel is supporting. 1644 posts | registered 3/31/2012

rasheverak , Wise, Aged Ars Veteran Jun 8, 2017 11:05 PM
Hardly surprising: https://blog.invisiblethings.org/papers ... armful.pdf

This is why I adamantly refuse to use any processor with Intel management features on any of my personal systems. 160 posts | registered 3/6/2014

michaelar , Smack-Fu Master, in training Jun 8, 2017 11:12 PM
Brilliant. Also, manifestly evil.

Is there a word for that? Perhaps "bastardly"?

JDinKC , Smack-Fu Master, in training Jun 8, 2017 11:23 PM
meta.x.gdb wrote:
But do we know of an exploit over AMT? I wouldn't think any router firewall would allow packets bound for an AMT to go through. Is this just a mechanism to move within a LAN once an exploit has a beachhead? That is not a small thing, but it would give us a way to gauge the severity of the threat. Do people really admin a machine through AMT through an external firewall?
The catch would be any machine that leaves your network with AMT enabled. Say perhaps an AMT managed laptop plugged into a hotel wired network. While still a smaller attack surface, any cabled network an AMT computer is plugged into, and not managed by you, would be a source of concern. 55 posts | registered 11/19/2012
Anonymouspock , Wise, Aged Ars Veteran Jun 8, 2017 11:42 PM
Serial ports are great. They're so easy to drive that they work really early in the boot process. You can fix issues with machines that are otherwise impossible to debug.
sphigel , Ars Centurion Jun 9, 2017 12:57 AM
aexcorp wrote:
This is pretty fascinating and clever TBH. AMT might be convenient for sysadmins, but it's proved to be a massive PITA from the security perspective. Intel really needs to reconsider its approach or drop it altogether.

"it's possible that the PLATINUM malware itself enabled AMT-if the malware has Administrator privileges, it can enable many AMT features from within Windows"

I've only had 1 machine that had AMT (a Thinkpad T500 that somehow still runs like a charm despite hitting the 10yrs mark this summer), and AMT was toggled directly via the BIOS (this is all pre-UEFI.) Would Admin privileges be able to overwrite a BIOS setting? Would it matter if it was handled via UEFI instead?

I'm not even sure it's THAT convenient for sys admins. I'm one of a couple hundred sys admins at a large organization and none that I've talked with actually use Intel's AMT feature. We have an enterprise KVM (raritan) that we use to access servers pre OS boot up and if we have a desktop that we can't remote into after sending a WoL packet then it's time to just hunt down the desktop physically. If you're just pushing out a new image to a desktop you can do that remotely via SCCM with no local KVM access necessary. I'm sure there's some sys admins that make use of AMT but I wouldn't be surprised if the numbers were quite small. 273 posts | registered 5/5/2010
gigaplex , Ars Scholae Palatinae Jun 9, 2017 3:53 AM
zogus wrote:
fake-name wrote:
Quote:

Hi there! I do hardware engineering, and I wish more computers had serial ports. Just because you don't use them doesn't mean their disappearance is "fortunate".

Just out of curiosity, what do you use on the PC end when you still do require traditional serial communication? USB-to-RS232 adapter?
We just got some new Dell workstations at work recently. They have serial ports. We avoid the consumer machines. 728 posts | registered 9/23/2011

GekkePrutser , Ars Centurion Jun 9, 2017 4:18 AM
Quote:
Physical serial ports (the blue ones) are fortunately a relic of a lost era and are nowadays quite rare to find on PCs.
Not that fortunate... Serial ports are still very useful for management tasks. They're simple and they work when everything else fails. The low speeds impose few restrictions on cables.

Sure, they don't have much security, but that is partly mitigated by them usually using only a few metres of cable. So they'd be covered under the same physical security as the server itself. Making this into a LAN protocol without any additional security is where the problem was introduced. Wherever long-distance lines were involved (modems), the security was added at the application level.

[Feb 04, 2017] Restoring deleted /tmp folder

Jan 13, 2015 | cyberciti.biz

As my journey with Linux and the Unix shell continues, I have made a few mistakes. I accidentally deleted the /tmp folder. To restore it, all you have to do is:

mkdir /tmp            # recreate the directory
chmod 1777 /tmp       # world-writable with the sticky bit, as /tmp requires
chown root:root /tmp  # owned by root
ls -ld /tmp           # verify: drwxrwxrwt ... root root ... /tmp
 

[Feb 04, 2017] Use CDPATH to access frequent directories in bash - Mac OS X Hints

Feb 04, 2017 | hints.macworld.com
The variable CDPATH defines the search path for the cd command, so it serves much like a "home for directories". The danger is in creating an overly complex CDPATH; often a single directory works best. For example, after export CDPATH=/srv/www/public_html , instead of typing cd /srv/www/public_html/CSS I can simply type: cd CSS
Use CDPATH to access frequent directories in bash
Mar 21, '05 10:01:00AM • Contributed by: jonbauman

I often find myself wanting to cd to the various directories beneath my home directory (i.e. ~/Library, ~/Music, etc.), but being lazy, I find it painful to have to type the ~/ if I'm not in my home directory already. Enter CDPATH , as described in man bash :

The search path for the cd command. This is a colon-separated list of directories in which the shell looks for destination directories specified by the cd command. A sample value is ".:~:/usr".
Personally, I use the following command (either on the command line for use in just that session, or in .bash_profile for permanent use):
CDPATH=".:~:~/Library"

This way, no matter where I am in the directory tree, I can just cd dirname , and it will take me to the directory that is a subdirectory of any of the ones in the list. For example:
$ cd
$ cd Documents 
/Users/baumanj/Documents
$ cd Pictures
/Users/username/Pictures
$ cd Preferences
/Users/username/Library/Preferences
etc...
[ robg adds: No, this isn't some deeply buried treasure of OS X, but I'd never heard of the CDPATH variable, so I'm assuming it will be of interest to some other readers as well.]

cdable_vars is also nice
Authored by: clh on Mar 21, '05 08:16:26PM

Check out the bash command shopt -s cdable_vars

From the man bash page:

cdable_vars

If set, an argument to the cd builtin command that is not a directory is assumed to be the name of a variable whose value is the directory to change to.

With this set, if I give the following bash command:

export d="/Users/chap/Desktop"

I can then simply type

cd d

to change to my Desktop directory.

I put the shopt command and the various export commands in my .bashrc file.
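
Putting the two tips together, a minimal .bashrc sketch might look like this (the directory names are just placeholders, not taken from the hints above):

# ~/.bashrc fragment: directory navigation shortcuts
CDPATH=".:~:~/Library"            # cd searches these when the argument is not a path
shopt -s cdable_vars              # let cd accept variable names as targets
export d="$HOME/Desktop"          # hypothetical shortcut variables
export www="/srv/www/public_html"
# Usage: "cd Music" jumps to ~/Music from anywhere; "cd d" jumps to the Desktop.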

[Dec 26, 2016] A Typo Led To Podestas Email Hack, Says Report

Dec 26, 2016 | yro.slashdot.org
(thehill.com) 274

Posted by BeauHD on Tuesday December 13, 2016 @06:30PM from the auto-correct dept. tomhath quotes a report from The Hill:

Last March, Podesta received an email purportedly from Google saying hackers had tried to infiltrate his Gmail account . When an aide emailed the campaign's IT staff to ask if the notice was real, Clinton campaign aide Charles Delavan replied that it was "a legitimate email" and that Podesta should "change his password immediately."

Instead of telling the aide that the email was a threat and that a good response would be to change his password directly through Google's website, he had inadvertently told the aide to click on the fraudulent email and give the attackers access to the account.

Delavan told The New York Times he had intended to type "illegitimate," a typo he still has not forgiven himself for making.

The email was a phishing scam that ultimately revealed Podesta's password to hackers.

Soon after, WikiLeaks began releasing 10 years of his emails.

[Dec 26, 2016] U2F Security Keys May Be the World's Best Hope Against Account Takeovers

Notable quotes:
"... After more than two years of public implementation and internal study, Google security architects have declared Security Keys their preferred form of two-factor authentication. ..."
Dec 26, 2016 | it.slashdot.org
(arstechnica.com) 153

Posted by BeauHD on Friday December 23, 2016 @09:05PM from the new-kid-on-the-block dept.

earlytime writes:

Large scale account hacks such as the billion user Yahoo breach and targeted phishing hacks of gmail accounts during the U.S. election have made 2016 an infamous year for web security. Along comes U2F/web-security keys to address these issues at a critical time.

Ars Technica reports that U2F keys "may be the world's best hope against account takeovers":

"The Security Keys are based on Universal Second Factor , an open standard that's easy for end users to use and straightforward for engineers to stitch into hardware and websites. When plugged into a standard USB port, the keys provide a 'cryptographic assertion' that's just about impossible for attackers to guess or phish. Accounts can require that cryptographic key in addition to a normal user password when users log in. Google, Dropbox, GitHub, and other sites have already implemented the standard into their platforms.

After more than two years of public implementation and internal study, Google security architects have declared Security Keys their preferred form of two-factor authentication.

The architects based their assessment on the ease of using and deploying keys, the security it provided against phishing and other types of password attacks, and the lack of privacy trade-offs that accompany some other forms of two-factor authentication."

The researchers wrote in a recently published report :

"We have shipped support for Security Keys in the Chrome browser, have deployed it within Google's internal sign-in system, and have enabled Security Keys as an available second factor in Google's Web services.

In this work, we demonstrate that Security Keys lead to both an increased level of security and user satisfaction as well as cheaper support cost."

[May 31, 2016] RHEL 6.8 is out

Notable quotes:
"... For customers with ever-increasing volumes of data, the Scalable File System Add-on for Red Hat Enterprise Linux 6.8 now supports xfs filesystem sizes up to 300TB. ..."
"... enables customers to migrate their traditional workloads into container-based applications – suitable for deployment on Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux Atomic Host. ..."
redhat.com

Red Hat Enterprise Linux 6.8 adds improved system archiving, new visibility into storage performance and an updated open standard for secure virtual private networks

Red Hat, Inc. (NYSE: RHT), the world's leading provider of open source solutions, today announced the general availability of Red Hat Enterprise Linux 6.8, the latest version of the Red Hat Enterprise Linux 6 platform. Red Hat Enterprise Linux 6.8 delivers new capabilities and provides a stable and trusted platform for critical IT infrastructure. With nearly six years of field-proven success, Red Hat Enterprise Linux 6 has set the stage for the innovations of today, as Red Hat Enterprise Linux continues to power not only existing workloads, but also the technologies of the future, from cloud-native applications to Linux containers.

With enhancements to security features and management, Red Hat Enterprise Linux 6.8 remains a solid, proven base for modern enterprise IT operations.

Jim Totton vice president and general manager, Platforms Business Unit, Red Hat

Red Hat Enterprise Linux 6.8 includes a number of new and updated features to help organizations bolster platform security and enhance systems management/monitoring capabilities, including:

Enhanced Security, Authentication, and Interoperability

To enhance security for virtual private networks (VPNs), Red Hat Enterprise Linux 6.8 includes libreswan, an implementation of one of the most widely supported and standardized VPN protocols, which replaces openswan as the Red Hat Enterprise Linux 6 VPN endpoint solution, giving Red Hat Enterprise Linux 6 customers access to recent advances in VPN security.

Customers running the latest version of Red Hat Enterprise Linux 6 can see increased client-side performance and simpler management through the addition of new capabilities to the Identity Management client code (SSSD). Cached authentication lookup on the client reduces the unnecessary exchange of user credentials with Active Directory servers. Support for adcli simplifies the management of Red Hat Enterprise Linux 6 systems interoperating with an Active Directory domain. In addition, SSSD now supports user authentication via smart cards, for both system login and related functions such as sudo.

Enhanced Management and Monitoring
The inclusion of Relax-and-Recover, a system archiving tool, provides a more streamlined system administration experience, enabling systems administrators to create local backups in an ISO format that can be centrally archived and replicated remotely for simplified disaster recovery operations. An enhanced yum tool simplifies the addition of packages, adding intelligence to the process of locating required packages to add/enable new platform features.

Red Hat Enterprise Linux 6.8 provides increased visibility into storage usage and performance through dmstats, a program that displays and manages I/O statistics for user-defined regions of devices using the device-mapper driver.

Additional Enhancements and Updates

For customers with ever-increasing volumes of data, the Scalable File System Add-on for Red Hat Enterprise Linux 6.8 now supports xfs filesystem sizes up to 300TB.

Additionally, the general availability of Red Hat Enterprise Linux 6.8 includes the launch of an updated Red Hat Enterprise Linux 6.8 base image which enables customers to migrate their traditional workloads into container-based applications – suitable for deployment on Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux Atomic Host.

Today's release also marks the transition of Red Hat Enterprise Linux 6 into Production Phase 2, a phase which prioritizes ongoing stability and security features for critical platform deployments. More information on the Red Hat Enterprise Linux lifecycle can be found at https://access.redhat.com/support/policy/updates/errata .

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.8_Release_Notes/new_features_compiler_and_tools.html

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.8_Release_Notes/new_features_file_systems.html
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.8_Release_Notes/new_features_networking.html
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.8_Release_Notes/new_features_servers_and_services.html
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.8_Release_Notes/new_features_storage.html
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.8_Release_Notes/new_features_system_and_subscription_management.html
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.8_Release_Notes/chap-Red_Hat_Enterprise_Linux-6.8_Release_Notes-Red_Hat_Software_Collections.html

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.8_Release_Notes/part-Red_Hat_Enterprise_Linux-6.8_Release_Notes-Known_Issues.html

[May 31, 2016] Red Hat Enterprise Linux 6.8 Deprecates Btrfs

Notable quotes:
"... Deprecated functionality continues to be supported until the end of life of Red Hat Enterprise Linux 6. Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments. ..."
www.phoronix.com
Buried within the notes for today's Red Hat Enterprise Linux 6.8 release are a few interesting notes.

First, RHEL has deprecated support for the Btrfs file-system.

Btrfs file system
Development of B-tree file system (Btrfs) has been discontinued, and Btrfs is considered deprecated. Btrfs was previously provided as a Technology Preview, available on AMD64 and Intel 64 architectures.

Huh? Since when was Btrfs development discontinued? At least in the upstream space it's still ongoing, and Facebook (as well as other companies) continues pouring resources into stabilizing and advancing the capabilities of Btrfs, which is widely sought as a Linux alternative to ZFS. There are no signs of things stalling on the Btrfs mailing list. Especially as Red Hat hasn't been officially packaging ZFS for RHEL as an alternative (but you can grab packages via ZFSOnLinux.org), this move doesn't make a lot of sense. While Btrfs development has dragged on for a while and, apart from openSUSE/SUSE, it hasn't been deployed by default by other tier-one Linux distributions, it's a bit odd that Red Hat seems to be throwing in the towel on Btrfs.

Red Hat's definition of "deprecated" in their RHEL context means (as shown on the same page), "Deprecated functionality continues to be supported until the end of life of Red Hat Enterprise Linux 6. Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments."

[Apr 25, 2016] What's New in Red Hat Enterprise Linux 7.2

Video presentation.

[Dec 09, 2015] Three ways to easily encrypt your data on Linux

OK, so you need to quickly encrypt the contents of your pen drive. The easiest solution is to compress them using the 7z archive file format, which is open source, cross-platform, and supports 256-bit AES encryption.
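
For instance, a minimal sketch (the archive name and source folder are placeholders, not from the original article):

7z a -p -mhe=on backup.7z ~/pendrive-files/

The -p switch makes 7z prompt for a password, and -mhe=on also encrypts the file names in the archive header, so the directory listing is protected as well.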

Encrypt with Seahorse

The third option that I will show basically utilizes the popular GNU PG tool to encrypt anything you want on your disk. What we need to install first are the following packages: gpg, seahorse, seahorse-nautilus, seahorse-daemon, and seahorse-contracts, which is needed if you're using ElementaryOS as I do. The encryption will be based on a key that we need to create first by opening a terminal and typing the following command:
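
(The actual command did not survive this excerpt; the standard GnuPG way to generate a new key pair, which is presumably what the article used, is:)

gpg --gen-key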

[Feb 11, 2015] GHOST: glibc vulnerability (CVE-2015-0235)

First of all, this is the kind of system-level bug that is not easy to exploit. You need to locate the vulnerable functions in the core image and be able to overwrite them via a call whose length any reasonable programmer would check. So whether this vulnerability is exploitable for the applications we are running is an open question.

In any case, most installed systems are theoretically vulnerable, and practically too if they are running applications that do not check the length of arguments passed to such calls.

Only recently patched systems with glibc-2.11.3-17.74.13.x86_64 and above are not vulnerable.
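
To check which glibc build an RPM-based box is actually running (the fixed version quoted above is a SUSE build string; Red Hat ships its own build numbers), a query like this is enough:

rpm -q glibc    # compare the installed build against your vendor's GHOST advisory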

Red Hat's success aside, it's hard to profit from free by Barb Darrow

Dec 19 2014 | dewaynenet.wordpress.com

Posted by wa8dzp

Red Hat's success aside, it's hard to profit from free

<https://gigaom.com/2014/12/19/red-hats-success-aside-its-hard-to-profit-from-free/>

Red Hat, which just reported a profit of $47.9 million (or 26 cents a share) on revenue of $456 million for its third quarter, has managed to pull off a tricky feat: it's been able to make money off of free, well, open source, software. (Its profit for the year-ago quarter was $52 million.)

In a blog post, Red Hat CEO Jim Whitehurst said the old days when IT pros risked their careers by betting on open source rather than proprietary software are over. That old adage that you can't be fired for buying IBM should be updated, I guess.

In what looks something like a victory lap, Whitehurst wrote that every company now runs some sort of open source software. He wrote:

Many of us remember the now infamous "Halloween Documents," the classic quote from former Microsoft CEO Steve Ballmer describing Linux as a "cancer," and comments made by former Microsoft CEO Bill Gates, saying, "So certainly we think of [Linux] as a competitor in the student and hobbyist market. But I really do not think in the commercial market, we'll see it [compete with Windows] in any significant way."

He contrasted that with the professed love of Linux of Ballmer's successor, Satya Nadella. To be fair, Azure was well down the road to embracing open source late in Ballmer's reign, but Microsoft's transition from open-source basher to open-source lover is still noteworthy - and indicative of open-source software's widespread adoption. If you can't beat 'em, join 'em.

Open source is great, but profitable?

So everyone agrees that open source is goodness. But not everyone is sure that many companies will be able to replicate Red Hat's success profiting from it.

Sure, Microsoft wants people to run Linux and Java and whatever on Azure because that gives Azure a critical mass of new-age users who are not necessarily enamored of .NET and Windows. And, Microsoft has lots of revenue opportunities once those developers and companies are on Azure. (The fact that Microsoft is open-sourcing .NET is icing on the open-source cake.)

But how does a company that is 100 percent focused on say, selling support and services and enhancements to Apache Hadoop, make money? A couple of these companies are extremely well-funded and it's unclear where the cash burn ends and the profits can begin.

[snip]

Docker - FreeBSD-like containers + API for Linux

Linux Containers (LXC) is a virtualization method for running multiple isolated Linux systems. Docker extends LXC: it uses LXC, cgroups, the Linux kernel, and other components to automate the deployment of applications inside software containers.

It comes with an API to run processes in isolation. With Docker I can pack WordPress (or any other app written in Python/Ruby/PHP and friends) and its dependencies into a lightweight, portable, self-sufficient container. I can deploy and test such a container on any Linux-based server.
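
The day-to-day workflow is just a handful of commands; a minimal sketch (the image name and ports are made up for illustration):

docker build -t myblog .                     # build an image from the Dockerfile in the current directory
docker run -d -p 8080:80 myblog              # run it, publishing container port 80 as 8080 on the host
docker save myblog | gzip > myblog.tar.gz    # export the self-sufficient image to copy to another server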

Bad Lockup Bug Plagues Linux

Slashdot

jones_supa (887896) writes "A hard-to-track system lockup bug seems to have appeared in the span of a couple of the most recent Linux kernel releases. Dave Jones of Red Hat was the first to report his experience of frequent lockups with 3.18. Later he found out that the issue is present in 3.17 too. The problem was first suspected to be related to Xen.

A patch dating back to 2005 was pushed for Xen to fix a vmalloc_fault() path that was similar to what was reported by Dave. The patch had a comment that read "the line below does not always work. Needs investigating!" But it looks like this issue was never properly investigated. Due to the nature of the bug and its difficulty in tracking down, testers might be finding multiple but similar bugs within the kernel. Linus even suggested taking a look in the watchdog code. He also concluded the Xen bug to be a different issue. The bug hunt continues in the Linux Kernel Mailing List."

Selected Skeptical Comments

binarylarry (1338699) on Saturday November 29, 2014 @01:04PM (#48485753)

Re: Have they checked systemd? (Score:5, Funny)

It's not systemd related, you can check by opening a termin

Anonymous Coward on Saturday November 29, 2014 @12:34PM (#48485599)

Re: What's happening to Linux? (Score:0)

The kernel with the above problems isn't in the 14.04 ubuntu repo, the latest kernel in 14.04 is 3.13 and is not having this problem. I'm sure it will be fixed soon.

Anonymous Coward on Saturday November 29, 2014 @01:15PM (#48485819)

Re:What's happening to Linux? (Score:1)

I love the assumption that this isn't happening in the corporate world.

It is. It just happens behind closed doors. Thus, patches.

raymorris (2726007) on Saturday November 29, 2014 @01:08PM (#48485775)

Try a stable distro like RH/CentOS. Or Mac (Score:3)

> First got into it ... because Linux was totally stable

If stable is your top priority, Fedora is approximately the worst possible choice. Fedora is essentially Red Hat Beta. If you want stable, the devel / beta branch is not for you. You'll probably be much happier with Red Hat or its twin, CentOS.

Also, you mentioned that you did an "upgrade" to Debian Unstable. You didn't mention any _reason_ for doing that. If stability is a top priority for you, don't upgrade just because you can, don't fix it if it aint broke.

Mac OSX may indeed be a good choice for you also. It is certified Unix, and if you use the command line in Linux you'll find that day-to-day tasks are the same on a Mac. System internals are different of course, but bash, sed, awk, grep, and vim work just like they do on Linux.

Anonymous Coward on Saturday November 29, 2014 @02:14PM (#48486131)

Re:But guys... (Score:0)

RHEL is an entire distribution. Does this magically make every package inside "enterprise"?
I was referring to single tools and programs. Before you hit me with that "Windows is not a single tool" bat - it does not contain too much. Let's take usable entities instead of packages, software, tools, etc.

And that "doubled Software thing", it was kind of "finger intelligence", i.e. if your fingers type stupid things for themselves. I have another such example: Ever typed Touring complete instead of Turing complete? How about reading holocaust instead of localhost? ;)

jones_supa (887896) on Saturday November 29, 2014 @02:08PM (#48486099)

Re: But guys... (Score:4, Informative)

Have you ever compared enterprise class software (I also count Windows 7 Enterprise) with OSS Software? Windows does not even reliably support STR and resume. Using multiple monitors is a PITA.

Suspend and multiple monitors have always worked great in Windows for me. Under Linux, they have also worked fine in some machines, but I have also occasionally experienced serious problems with those areas. During recent times I have found out that even laptop screen brightness adjustment cannot be expected to work reliably out of the box under Linux.

SuricouRaven (1897204) on Saturday November 29, 2014 @03:26PM (#48486683)

Re: But guys... (Score:2)

There's an imbalance in development. Under Windows, every hardware manufacturer does all they can to ensure their hardware is good - investing a lot of money in developing and testing the drivers. Under Linux, the manufacturers usually don't care - aside from some server hardware, there just aren't enough resources to justify it from a business perspective. So development falls to a three-man team on a side project, and sometimes it's down to community volunteers working from reverse-engineered specifications.

jellomizer (103300) on Saturday November 29, 2014 @03:09PM (#48486527)

Re: Come on Slashdot, get your news current (Score:3)

A Microsoft bug, proof of the incompetence of closed source.
A Linux bug. Either point to some closed source factor, or claim its solving a victory in the flexibility of open source.

Anonymous Coward on Saturday November 29, 2014 @01:36PM (#48485973)

Some actual information (Score:0)

So it may be a "bad" lockup bug in the sense that nobody knows exactly what causes it, but it's not "bad" in the sense that people should worry overly.

Why?

Dave Jones sees it only under insane loads (CPU loads of 150+) running a stress tester that is designed to do crazy things (trinity). And he can reproduce it on only one of his machines, and even there it takes hours. And it happens on a debug kernel that has DEBUG_PAGEALLOC and other explicit (and complex) debug code enabled. And even then the bug is a "Hmm. We made no progress in the last 21 seconds", rather than anything stranger.

In other words, it's "bad" in the sense that any unknown behavior is bad, but it's unknown mainly because it's so hard to trigger. Nobody else than core developers should really care. And those developers do care, so it's not like it's worrisome there either. It just takes longer to figure out because the usual "bisect it" approach isn't very easy when it can take a day to reproduce..

[Aug 31, 2012] Scientific Linux 6.3 Live CD/DVD Has Been Released

Official site is www.scientificlinux.org. Downloads are available from CERN
August 27, 2012 | Softpedia

Scientific Linux 6.3 is now based on Red Hat Enterprise Linux 6.3, powered by Linux kernel 2.6.32, and features X.Org Server 1.7.7, IceWM 1.2.37, GNOME 2.28, Firefox 10.0.6, Thunderbird 10.0.6, LibreOffice 3.4.5.2 and KDE Software Compilation 4.3.4.

Moreover, the distro includes software from rpmforge, epel and elrepo in order to provide support for NTFS and Reiserfs filesystems, secure network connection via OpenVPN, VPNC, PPTP, better multimedia support, and various filesystem tools like dd_rescue, gparted, ddrescue, gdisk.

Scientific Linux 6.3 is distributed as Live CD and DVD ISO images, supporting both 32-bit and 64-bit architectures.

The complete list of changes with a comprehensive list of fixes, improvements, removed and updated packages, can be found in the official release announcement for Scientific Linux 6.3 Live CD/DVD.

[Aug 1, 2012] Oracle Linux A better alternative to CentOS

They provide a conversion script: centos2ol.sh

Oracle Linux: A better alternative to CentOS

We firmly believe that Oracle Linux is the best Linux distribution on the market today. It's reliable, it's affordable, it's 100% compatible with your existing applications, and it gives you access to some of the most cutting-edge innovations in Linux like Ksplice and dtrace.

But if you're here, you're a CentOS user. Which means that you don't pay for a distribution at all, for at least some of your systems. So even if we made the best paid distribution in the world (and we think we do), we can't actually get it to you... or can we?

We're putting Oracle Linux in your hands by doing two things:

◦ We've made the Oracle Linux software available free of charge
◦ We've created a simple script to switch your CentOS systems to Oracle Linux

We think you'll like what you find, and we'd love for you to give it a try.

Switch your CentOS systems to Oracle Linux

Run the following as root:

curl -O https://linux.oracle.com/switch/centos2ol.sh 
sh centos2ol.sh 

FAQ

Q: Wait, doesn't Oracle Linux cost money?

A: Oracle Linux support costs money. If you just want the software, it's 100% free. And it's all in our yum repo at public-yum.oracle.com. Major releases, errata, the whole shebang. Free source code, free binaries, free updates, freely redistributable, free for production use. Yes, we know that this is Oracle, but it's actually free. Seriously.

[Apr 20, 2012] Oracle Linux The Past, Present and Future Revealed

Apr 19, 2012 | The VAR Guy

During our conversation, Coekaerts touched on a range of additional topics - such as:

[Feb 28, 2012] Red Hat vs. Oracle Linux Support 10 Years Is New Standard

The VAR Guy

The support showdown started a couple of weeks ago, when Red Hat extended the life cycle of Red Hat Enterprise Linux (RHEL) versions 5 and 6 from the norm of seven years to a new standard of 10 years. A few days later, Oracle responded by extending Oracle Linux life cycles to 10 years. Side note: It sounds like SUSE, now owned by Attachmate, also offers extended Linux support of up to 10 years.


Recommended Links


Softpanorama Recommended

Top articles

[Sep 04, 2018] Unifying custom scripts system-wide with rpm on Red Hat/CentOS

Published on Aug 24, 2018 | linuxconfig.org

Sites

Please visit the nixCraft site. It has material well worth your visit.

Dr. Nikolai Bezroukov


Top 10 Classic Unix Humor Stories

1. The Jargon File -- the most famous Unix-related humor file.

Please note that the so-called "hacker dictionary" is the Jargon File spoiled by Eric Raymond :-) -- earlier versions of the Jargon File are better than the latest hacker dictionary...

2. Tao_Of_Programming (originated in 1992). This is probably No. 2 classic. There are several variants, but the link provided seems to be the original text (or at least an early version close to the original).

Here is a classic quote:

"When you have learned to snatch the error code from the trap frame, it will be time for you to leave."

... ...

If the Tao is great, then the operating system is great. If the operating system is great, then the compiler is great. If the compiler is great, then the application is great. The user is pleased and there is harmony in the world.

3. Know Your Unix System Administrator by Stephan Zielinski -- probably the third most famous Unix humor item. See also KNOW YOUR UNIX SYSTEM ADMINISTRATOR at Field Guide to System Administrators [rec.humor.funny]. I personally like the descriptions of idiots and fascists and tend to believe that a lot of administrative fascists are ex-secretaries :-). At the same time, former programmers quite often become sadists too -- there is something in the sysadmin job that seems to cultivate a feeling of superiority and sadism (the "Users are Losers" mentality). IMHO the other members of the classification are not that realistic :-) :

There are four major species of Unix sysad:

  1. The Technical Thug.
    Usually a systems programmer who has been forced into system administration; writes scripts in a polyglot of the Bourne shell, sed, C, awk, perl, and APL.
  2. The Administrative Fascist.
    Usually a retentive drone (or rarely, a harridan ex-secretary) who has been forced into system administration.
  3. The Maniac.
    Usually an aging cracker who discovered that neither the Mossad nor Cuba are willing to pay a living wage for computer espionage. Fell into system administration; occasionally approaches major competitors with indesp schemes.
  4. The Idiot.
    Usually a cretin, morphodite, or old COBOL programmer selected to be the system administrator by a committee of cretins, morphodites, and old COBOL programmers.

---------------- SITUATION: Root disk fails. ----------------

TECHNICAL THUG:

Repairs drive. Usually is able to repair filesystem from boot monitor. Failing that, front-panel toggles microkernel in and starts script on neighboring machine to load binary boot code into broken machine, reformat and reinstall OS. Lets it run over the weekend while he goes mountain climbing.

ADMINISTRATIVE FASCIST:
Begins investigation to determine who broke the drive. Refuses to fix system until culprit is identified and charged for the equipment.
MANIAC, LARGE SYSTEM:
Rips drive from system, uses sledgehammer to smash same to flinders. Calls manufacturer, threatens pets. Abuses field engineer while they put in a new drive and reinstall the OS.
MANIAC, SMALL SYSTEM:
Rips drive from system, uses ball-peen hammer to smash same to flinders. Calls Requisitions, threatens pets. Abuses bystanders while putting in new drive and reinstalling OS.
IDIOT:
Doesn't notice anything wrong.

---------------- SITUATION: Poor network response. ----------------

TECHNICAL THUG:

Writes scripts to monitor network, then rewires entire machine room, improving response time by 2%. Shrugs shoulders, says, "I've done all I can do," and goes mountain climbing.

ADMINISTRATIVE FASCIST:
Puts network usage policy in motd. Calls up Berkeley and AT&T, badgers whoever answers for network quotas. Tries to get xtrek freaks fired.
MANIAC:
Every two hours, pulls ethernet cable from wall and waits for connections to time out.
IDIOT:
# compress -f /dev/en0

---------------- SITUATION: User questions. ----------------

TECHNICAL THUG:

Hacks the code of emacs' doctor-mode to answer new users questions. Doesn't bother to tell people how to start the new "guru-mode", or for that matter, emacs.

ADMINISTRATIVE FASCIST:
Puts user support policy in motd. Maintains queue of questions. Answers them when he gets a chance, often within two weeks of receipt of the proper form.
MANIAC:
Screams at users until they go away. Sometimes barters knowledge for powerful drink and/or sycophantic adulation.
IDIOT:
Answers all questions to best of his knowledge until the user realizes few UNIX systems support punched cards or JCL.

4. RFC 1925 The Twelve Networking Truths by R. Callon

  1. It Has To Work.
  2. No matter how hard you push and no matter what the priority, you can't increase the speed of light. (2a) (corollary). No matter how hard you try, you can't make a baby in much less than 9 months. Trying to speed this up *might* make it slower, but it won't make it happen any quicker.
  3. With sufficient thrust, pigs fly just fine. However, this is not necessarily a good idea. It is hard to be sure where they are going to land, and it could be dangerous sitting under them as they fly overhead.
  4. Some things in life can never be fully appreciated nor understood unless experienced firsthand. Some things in networking can never be fully understood by someone who neither builds commercial networking equipment nor runs an operational network.
  5. It is always possible to agglutinate multiple separate problems into a single complex interdependent solution. In most cases this is a bad idea.
  6. It is easier to move a problem around (for example, by moving the problem to a different part of the overall network architecture) than it is to solve it. (6a) (corollary). It is always possible to add another level of indirection.
  7. It is always something (7a) (corollary). Good, Fast, Cheap: Pick any two (you can't have all three).
  8. It is more complicated than you think.
  9. For all resources, whatever it is, you need more. (9a) (corollary) Every networking problem always takes longer to solve than it seems like it should.
  10. One size never fits all.
  11. Every old idea will be proposed again with a different name and a different presentation, regardless of whether it works. (11a) (corollary). See rule 6a.
  12. In protocol design, perfection has been reached not when there is nothing left to add, but when there is nothing left to take away.

5. Murphy's laws -- I especially like "Experts arose from their own urgent need to exist." :-). See also

  1. Nothing is as easy as it looks.

  2. Everything takes longer than you think.
  3. Anything that can go wrong will go wrong.
  4. If there is a possibility of several things going wrong, the one that will cause the most damage will be the one to go wrong. Corollary: If there is a worse time for something to go wrong, it will happen then.
  5. If anything simply cannot go wrong, it will anyway.
  6. If you perceive that there are four possible ways in which a procedure can go wrong, and circumvent these, then a fifth way, unprepared for, will promptly develop.
  7. Left to themselves, things tend to go from bad to worse.
  8. If everything seems to be going well, you have obviously overlooked something.
  9. Nature always sides with the hidden flaw.
  10. Mother nature is a bitch.
  11. It is impossible to make anything foolproof because fools are so ingenious.
  12. Whenever you set out to do something, something else must be done first.
  13. Every solution breeds new problems.

... ... ....

6. Network Week/The Bastard Operator from Hell. The classic story about an Administrative Fascist sysadmin.

7. Academic Programmers- A Spotter's Guide by Pete Fenelon; Department of Computer Science, University of York

Preamble
I Am The Greatest
Internet Vegetable
Rabid Prototyper
Get New Utilities!
Square Peg...
Objectionably ...

My Favourite ...
Give Us The Tools!
Macro Magician
Nightmare Networker
Configuration ...
Artificial Stupidity
Number Crusher

Meta Problem Solver
What's A Core File?
I Come From Ruritania
Old Fart At Play
I Can Do That!
What Colour ...
It's Safety Critical!

Objectionably Oriented

OO experienced a Road To Damascus situation the moment objects first crossed her mind. From that moment on everything in her life became object oriented and the project never looked back. Or forwards.

Instead, it kept sending messages to itself asking it what direction it was facing in and would it mind having a look around and send me a message telling me what was there...

OO thinks in Smalltalk and talks to you in Eiffel or Modula-3; unfortunately she's filled the disk with the compilers for them and instead of getting any real work done she's busy writing papers on holes in the type systems and, like all OOs, is designing her own perfect language.

The most dangerous OOs are OODB hackers; they inevitably demand a powerful workstation with local disk onto which they'll put a couple of hundred megabytes of unstructured, incoherent pointers all of which point to the number 42; any attempt to read or write it usually results in the network being down for a week at least.

8. Real Programmers Don't Write Specs

Real Programmers don't write specs -- users should consider themselves lucky to get any programs at all, and take what they get.

Real Programmers don't comment their code. If it was hard to write, it should be hard to understand.

Real Programmers don't write application programs, they program right down on the bare metal. Application programming is for feebs who can't do system programming.

... ... ...

Real Programmers aren't scared of GOTOs... but they really prefer branches to absolute locations.

9. Real Programmers Don't Use Pascal -- [ A letter to the editor of Datamation, volume 29 number 7, July 1983. Ed Post Tektronix, Inc. P.O. Box 1000 m/s 63-205 Wilsonville, OR 97070 Copyright (c) 1982]

Back in the good old days-- the "Golden Era" of computers-- it was easy to separate the men from the boys (sometimes called "Real Men" and "Quiche Eaters" in the literature). During this period, the Real Men were the ones who understood computer programming, and the Quiche Eaters were the ones who didn't. A real computer programmer said things like "DO 10 I=1,10" and "ABEND" (they actually talked in capital letters, you understand), and the rest of the world said things like "computers are too complicated for me" and "I can't relate to computers-- they're so impersonal". (A previous work [1] points out that Real Men don't "relate" to anything, and aren't afraid of being impersonal.)

But, as usual, times change. We are faced today with a world in which little old ladies can get computers in their microwave ovens, 12 year old kids can blow Real Men out of the water playing Asteroids and Pac-Man, and anyone can buy and even understand their very own personal Computer. The Real Programmer is in danger of becoming extinct, of being replaced by high school students with TRASH-80s.

There is a clear need to point out the differences between the typical high school junior Pac-Man player and a Real Programmer. If this difference is made clear, it will give these kids something to aspire to -- a role model, a Father Figure. It will also help explain to the employers of Real Programmers why it would be a mistake to replace the Real Programmers on their staff with 12 year old Pac-Man players (at a considerable salary savings).

10. bsd_logo_story

Last week I walked into a local "home style cookin' restaurant/watering hole" to pick up a take out order. I spoke briefly to the waitress behind the counter, who told me my order would be done in a few minutes.

So, while I was busy gazing at the farm implements hanging on the walls, I was approached by two, uh, um... well, let's call them "natives".

These guys might just be the original Texas rednecks -- complete with ten-gallon hats, snakeskin boots and the pervasive odor of cheap beer and whiskey.

"Pardon us, ma'am. Mind of we ask you a question?"

Well, people keep telling me that Texans are real friendly, so I nodded.

"Are you a Satanist?"

Etc: other historically important items

Programming Eagles

... ... ... ... ... ... ... ... ...

And they showed me the way There were salesmen down the corridor I thought I heard them say Welcome to Mountain View California Such a lovely place Such a lovely place (backgrounded) Such a lovely trace(1) Plenty of jobs at Mountain View California Any time of year Any time of year (backgrounded) You can find one here You can find one here

... ... ... ... ... ... ...

John Lennon's Yesterday -- variation for programmers.

Yesterday,
All those backups seemed a waste of pay.
Now my database has gone away.
Oh I believe in yesterday.

Suddenly,
There's not half the files there used to be,
And there's a milestone hanging over me
The system crashed so suddenly.

I pushed something wrong
What it was I could not say.
Now all my data's gone
and I long for yesterday-ay-ay-ay.

Yesterday,
The need for back-ups seemed so far away.
I knew my data was all here to stay,
Now I believe in yesterday.

The UNIX cult -- a satiric history of Unix

Notes from some recent archeological findings on the birth of the UNIX cult on Sol 3 are presented. Recently discovered electronic records have shed considerable light on the beginnings of the cult. A sketchy history of the cult is attempted.

On the Design of the UNIX operating System

This article was written in 1984 and was published in various UNIX newsletters across the world. I thought that it should be revived to mark the first 25 years of UNIX. If you like this, then you might also like The UNIX Cult.
Peter Collinson

,,, ,,, ,,,

'I Provide Office Solutions,' Says Pitiful Little Man a nice parody on programmers in general and open source programmers in particular

"VisTech is your one-stop source for Internet and Intranet open source development, as well as open source software support and collaborative development" said Smuda, adjusting the toupee he has worn since age 23. "We are a full-service company that can evaluate and integrate multi-platform open source solutions, including Linux, Solaris, Aix and HP-UX"

"Remember, no job is too small for the professionals at VisTech," added the spouseless, childless man, who is destined to die alone and unloved. "And no job is too big, either."

Unofficial Unix Administration Horror Story Summary

Best of DATAMATION GOTO-less

By R. Lawrence Clark*

From DATAMATION, December, 1973


Nearly six years after publication of Dijkstra's now-famous letter, [1] the subject of GOTO-less programming still stirs considerable controversy. Dijkstra and his supporters claim that the GOTO statement leads to difficulty in debugging, modifying, understanding and proving programs. GOTO advocates argue that this statement, used correctly, need not lead to problems, and that it provides a natural, straightforward solution to common programming procedures.

Numerous solutions have been advanced in an attempt to resolve this debate. Nevertheless, despite the efforts of some of the foremost computer scientists, the battle continues to rage.

The author has developed a new language construct on which, he believes, both the pro- and the anti-GOTO factions can agree. This construct is called the COME FROM statement. Although usage of the COME FROM statement is independent of the linguistic environment, its use will be illustrated within the FORTRAN language.

Netslave quiz

1. AT YOUR LAST JOB INTERVIEW, YOU EXHIBITED:

A. Optimism
B. Mild Wariness
C. Tried to overcome a headache. I was really tired.
D. Controlled Hostility

2. DESCRIBE YOUR WORKPLACE:

A. An enterprising, dynamic group of individuals laying the groundwork for tomorrow's economy.
B. A bunch of geeks with questionable social skills.
C. An anxiety-ridden place with long hours, a lot of stress, and a backbiting bunch of finger-pointers.
D. Jerks and PHB

3. DESCRIBE YOUR HOME:

A. Small, but efficient.
B. Shared and dormlike.
C. Rubble-strewn and fetid.
D. I have a personal network at my home with three or more connected computers and permanent connection to the Internet

NEW ELEMENT DISCOVERED!

The heaviest element known to science was recently discovered by university physicists. The new element was tentatively named Administratium. It has no protons and no electrons, and thus has an atomic number of 0. However, it does have one neutron, 15 assistant neutrons, 70 vice-neutrons, and 161 assistant vice-neutrons. This gives it an atomic mass of 247. These 247 particles are held together by a force that involves constant exchange of a special class of particle called morons.

Since it does not have electrons, Administratium is inert. However, it can be detected chemically as it impedes every reaction with which it comes into contact. According to the discoverers, a minute amount of Administratium added to one reaction caused it to take over four days to complete. Without Administratium, the reaction took less than one second.

Administratium has a half-life of approximately three years, after which it does not normally decay but instead undergoes a complex nuclear process called "Reorganization". In this little-understood process, assistant neutrons, vice-neutrons, and assistant vice-neutrons appear to exchange places. Early results indicate that atomic mass actually increases after each "Reorganization".

Misc Unproductive Time Classification -- nice parody on timesheets

You Might Be A Programmer If... By Clay Shannon - bclayshannon@earthlink.net

Jokes Magazine Drug Dealers Vs Software Developers

Jokes Magazine Ten Commandments For Stress Free Programming December 23, 1999

  1. Thou shalt not worry about bugs. Bugs in your software are actually special features.
  2. Thou shalt not fix abort conditions. Your user has a better chance of winning state lottery than getting the same abort again.
  3. Thou shalt not handle errors. Error handling was meant for error-prone people; neither you nor your users are error prone.
  4. Thou shalt not restrict users. Don't do any editing, let the user input anything, anywhere, anytime. That is being very user friendly.
  5. Thou shalt not optimize. Your users are very thankful to get the information; they don't worry about speed and efficiency.
  6. Thou shalt not provide help. If your users cannot figure out themselves how to use your software, then they are too dumb to deserve the benefits of your software anyway.
  7. Thou shalt not document. Documentation only comes in handy for making future modifications. You made the software perfect the first time, it will never need mods.
  8. Thou shalt not hurry. Only the cute and the mighty should get the program by deadline.
  9. Thou shalt not revise. Your interpretation of specs was right, you know the users' requirements better than them.
  10. Thou shalt not share. If other programmers needed some of your code, they should have written it themselves.

Other Collections of Unix Humor


Don't let a few insignificant facts distract you from waging a holy war

A Slashdot post

It's spelled Linux, but it's pronounced "Not Windows"

- Usenet sig

It is time to unmask the programming community as a Secret Society for the Creation and Preservation of Artificial Complexity.

Edsger W. Dijkstra: The next forty years (EWD 1051)




The Last but not Least Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand ~Archibald Putt. Ph.D


Copyright © 1996-2018 by Dr. Nikolai Bezroukov. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) in the author's free time and without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials' copyright belongs to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...

You can use PayPal to make a contribution, supporting development of this site and speeding up access. In case softpanorama.org is down you can use the mirror at softpanorama.info

Disclaimer:

The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author present and former employers, SDNP or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.

The site uses AdSense so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable Javascript for this site. This site is perfectly usable without Javascript.

Created May 16, 1996; Last modified: November 17, 2018