
The tar pit of Red Hat overcomplexity

The differences between RHEL 6 and RHEL 7 are no smaller than the differences between SUSE and RHEL, which essentially doubles the workload of sysadmins, as any "extra" OS leads to mental overflow and loss of productivity. That's why most sysadmins hate RHEL 7.

Without that discipline, too often, software teams get lost in what are known in the field as "boil-the-ocean" projects -- vast schemes to improve everything at once. That can be inspiring, but in the end we might prefer that they hunker down and make incremental improvements to rescue us from bugs and viruses and make our computers easier to use.

Idealistic software developers love to dream about world-changing innovations; meanwhile, we wait and wait for all the potholes to be fixed.

Frederick Brooks, Jr,
The Mythical Man-Month, 1975


Imagine a language in which both grammar and vocabulary change each decade. Add to this that the syntax is complex and the vocabulary is huge, beyond any normal human comprehension. You can learn some subset while you work closely with a particular subsystem, only to forget it after a couple of quarters. This is the reality of Red Hat enterprise editions.

Moreover, Red Hat exists in several incarnations, some of which are free for developers, and some low cost ($100 a year for patches or so).

RHEL is a very complex mess

The architectural quality of RHEL is low. RHEL is way too complex to administer and requires a regular human being to remember too many things, which can never fit into one head. This means it is by definition a brittle system, the elephant that nobody understands completely. Add to this some examples of sloppy programming and old warts (the RPM format is now old and has partially outlived its usefulness, creating rather complex issues with patching and installing software, issues that take a lot of sysadmin time to resolve) and you get the picture.

A nice example of Red Hat ineptitude is how they handle proxy settings. Even for software fully controlled by Red Hat, such as yum and the subscription manager, they put proxy settings in each and every configuration file. Why not put them in /etc/sysconfig/network, or at least read those settings first if they exist? And any well-behaved application should read the environment variables, which should take precedence over settings in configuration files. They do not do it. God knows why.

Also, some programs pick up settings from the environment, such as http_proxy, and those settings override the configuration files; others do not, and for this category the configuration file is the truth in the last instance.
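The sane precedence is easy to state. Here is a minimal sketch in Python of what a well-behaved client could do (the function and variable names are hypothetical, not any actual yum or subscription-manager code): the http_proxy environment variable, when present, wins over the value from the tool's own configuration file.

```python
# Hypothetical sketch of sane proxy precedence: environment first,
# configuration file second. Not actual Red Hat code.
import os

def effective_proxy(conf_proxy, env=None):
    """Return the proxy to use: http_proxy/HTTP_PROXY from the
    environment takes precedence over the config-file value."""
    if env is None:
        env = os.environ
    return env.get("http_proxy") or env.get("HTTP_PROXY") or conf_proxy

# The config-file value is used only when the environment is silent:
print(effective_proxy("http://conf-proxy:8080", env={}))
# With http_proxy set, the environment wins:
print(effective_proxy("http://conf-proxy:8080",
                      env={"http_proxy": "http://env-proxy:3128"}))
```

If every Red Hat tool implemented this one rule, a single exported variable would cover yum, the subscription manager, and everything else at once.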

Those giants of system programming even managed to embed proxy settings from /etc/rhsm/rhsm.conf into the yum file /etc/yum.repos.d/redhat.repo, so the proxy value is taken from that file, not from your /etc/yum.conf settings as you would expect. Moreover, this is done without any elementary checks for consistency: you can make a pretty innocent mistake and specify the proxy setting in /etc/rhsm/rhsm.conf with an http:// or https:// prefix.

The Red Hat registration manager will accept this and work fine. But for yum to work properly, the /etc/rhsm/rhsm.conf proxy specification requires just a DNS name without the http:// or https:// prefix: the https:// prefix will be added blindly (and that's wrong) in redhat.repo, without checking whether you already specified an http:// (or https://) prefix. This SNAFU leads to the generation in redhat.repo of a malformed proxy statement beginning with https://http://
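The missing check is trivial. A sketch (the helper names are hypothetical; subscription-manager does nothing of the sort) of stripping any user-typed scheme from the rhsm.conf proxy value before the redhat.repo proxy URL is composed:

```python
# Hypothetical fix: strip any scheme the user typed into
# /etc/rhsm/rhsm.conf before blindly prefixing "https://".
def normalize_proxy_host(raw):
    """rhsm.conf expects a bare host name; tolerate a pasted URL."""
    for scheme in ("http://", "https://"):
        if raw.startswith(scheme):
            return raw[len(scheme):]
    return raw

def repo_proxy_url(raw_host, port=3128):
    """Compose the proxy URL that would be written to redhat.repo."""
    return "https://%s:%d" % (normalize_proxy_host(raw_host), port)

# Without the check, "http://proxy.example.com" would end up as the
# malformed "https://http://proxy.example.com:3128".
print(repo_proxy_url("http://proxy.example.com"))
```

A few lines of normalization would have saved customers hours of debugging; the fact that they are absent is the whole point of this section.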

At this point you are in for a nasty surprise: yum will not work with any Red Hat repository, and there are no meaningful diagnostic messages. It looks like RHEL managers are either engaged in binge drinking, or watch too much porn on the job ;-).

Yum started as a very helpful utility. But gradually it turned into a complex monster that requires quite a bit of study and has a set of very complex bugs, some of which are almost features.

SELinux was never a transparent security subsystem and has a lot of quirks of its own. And its key ideas are far from elegant, unlike the key ideas of AppArmor, which has actually disappeared from the Linux security landscape. Many sysadmins simply disable SELinux, leaving only the firewall to protect the server. Some applications require disabling SELinux for proper functioning.

The deterioration of architectural vision within Red Hat as a company is clearly visible in the terrible (simply terrible, without any exaggeration) quality of the customer portal, which is probably the worst I have ever encountered. Sometimes I file tickets just to understand how to perform a particular operation. The old (or "Classic", as they call it) RHEL customer portal actually was OK, and even used to have some useful features. Then for some reason they tried to introduce something new and completely messed things up. As of August 2017 the quality has somewhat improved, but it still leaves much to be desired. Sometimes I wonder why I am using the distribution, if the company which produces it (and charges substantial money for it) is so tremendously architecturally inept that it is unable to create a usable customer portal.

RHEL 7 is a bigger mess than RHEL 6

I hate RHEL 7. I really do. Red Hat honchos, either out of envy of Apple's (and/or Microsoft's) success in the desktop space or for some other reason, broke way too many things by introducing systemd, which is actually useful only for laptops and similar portable devices. See Systemd invasion into Linux Server space.

Revamping anaconda in a "consumer friendly" fashion, and doing a lot of other things unnecessary for the server space, destroys or at least diminishes the value of Red Hat as a server OS. Most of those changes also increase complexity by hiding "basic" things under layers of indirection. Try to remove Network Manager in RHEL 7. Now explain to me why we need it in a server room full of servers "forever" attached by cable with static addresses.

It is undeniable that in version 7 RHEL became even more complex. It negates the usefulness of previous knowledge of Red Hat and of tons of books (some of which are good) published in 1998-2010, during the dot-com boom and the inertia after the dot-com crash. After that the genre of Red Hat books died, like the computer books genre in general, and shelves of computer books almost disappeared from Barnes and Noble.

This utter disrespect to people who spent years learning Red Hat crap increased my desire to switch to CentOS or Oracle Linux. I do not want to pay Red Hat money any longer, and support has deteriorated to the level where it is almost completely useless, unless you buy premium support. And even in that case much depends on which analyst your ticket is assigned to.

This is an expensive open source, my friend

RHEL is a very expensive distribution for small and medium firms. There is no free lunch: if you are using a commercial distribution you need to pay annual maintenance, or get used to some delays in the availability of new versions and security patches. In most cases this is acceptable, so if CentOS or Scientific Linux works OK for a particular application, it should be used instead of the commercial version, just to avoid troubles with licensing.

For HPC clusters Red Hat provides a discounted version (for computational nodes only; the headnode is licensed at full price) with a limited number of packages, called Red Hat Enterprise Linux for High Performance Computing. See Red Hat's documentation for a more or less OK explanation of what you should expect.

Essentially, you pay the full price for the headnode and a discounted price for each computational node. I am not sure Oracle Linux is not a better deal, as in that case you have the same distribution on both the headnode and the computational nodes, for the same price as a Red Hat HPC license with two different distributions. Truth be told, Red Hat does provide an optimized networking stack with the HPC compute node license. The question is what the difference is, and whether you should pay such a price for it.

Quality of support

RHEL support has deteriorated recently, while prices almost doubled from RHEL 5 to 6 (especially if you use virtual guests a lot; see the discussion at RHEL 6: how much for your package (The Register)), and it is now not very clear what you are paying for.

The product is so complex and big that providing quality support for it is impossible. First of all, deep understanding of the OS is lacking. Now all tech support does when trying to resolve most tickets is search the database of cases and post as a solution something that is related to your case (or maybe not ;-).

Premium support is still above average, and they can switch you to a live engineer on a different continent in critical cases during late hours, so if server downtime is important this is a kind of (weak) insurance.

In any case, Red Hat support, even for subsystems fully developed by Red Hat such as the subscription manager and yum, is usually dismal, unless you are lucky and get a knowledgeable guy (I once did).

As I already mentioned, in most cases they limit themselves to searching the database and recommending something from the article they found closest to your case, often without even understanding what problem you are experiencing. Sometimes this "quote service" from their database, which they sell instead of customer support, helps, but often it is just a distraction, an imitation of support, if you wish. In the past (in the time of RHEL 4) the quality of support was much better. Now it is unclear what we are paying for.

I for one think that making images before each change, and storing a set of images so that you can return to the previous one in a matter of hours, is a better insurance. That capitalizes on the fact that after installation the OS usually works. So we observe the phenomenon well known to Windows administrators: self-disintegration of the OS with time ;-)

The problems typically come later, with additional packages, libraries, etc., which often are not critical for system functionality and can be eliminated with small sacrifices. "Keep it simple, stupid" is a good approach here, although for servers used in research it is very difficult to implement.

For example, on many servers you do not need X11 installed. That cuts a lot of packages and a lot of complexity. Exotic protocols designed for laptops can also be eliminated on a server which uses a regular wired connection and a static IP address (although in RHEL 7 this is more difficult to do; but it is a crappy version in any case, and it is prudent to postpone moving to it as long as practical). Avoiding a complex, taxing package in favor of something simpler is another worthwhile approach.

In any case, Unix now (and Linux in particular) is an operating system which is clearly beyond human capability to comprehend. So in a way it is amazing that it still works. Also, strong architects (like Thompson) are long gone, so "entropy" is high, and once-clean architectural solutions became much less clean with time.

Despite several levels of support included in licenses (with premium supposedly the higher level), technical support for really complex cases is uniformly weak, with mostly "monkey looking in database" type of service. If you have a complex problem, you are usually stuck, although premium service gives you the opportunity to talk with a live person, which might help. In a way, unless you buy a premium license, the only way to use RHEL is "as is". And with RHEL 7 even this is not a very attractive proposition, as the switch to systemd creates its own set of problems and a learning curve for sysadmins.

Some of this deterioration is connected with the fact that Linux became a very complex, Byzantine OS that nobody actually knows. The number of utilities alone is such that nobody knows probably more than 30% or 50% of them, and even if you learn some utility during a particular case of troubleshooting, you will soon forget it, as you probably will not get the same case within a year or two. In this sense the title "Red Hat Engineer" became a sad joke.

Even if you learned something important today, you will soon forget it if you do not use it, as there are way too many utilities, applications, configuration files. You name it.

Licensing model: four different types of RHEL licenses

RHEL is struggling to fence off "copycats" by complicating access to the source of patches, but the problem is that its licensing model is Byzantine. It is based on a half-dozen different types of subscriptions, some of them pretty expensive. In the past I resented paying Red Hat for our 4-socket servers to the extent that I stopped using this type of server and completely switched to 2-socket servers, which, with the rising core count of Intel CPUs, was an easy escape from RHEL restrictions. Currently Red Hat probably has the most complex, the most Byzantine system of subscriptions after IBM (which is probably the leader in licensing obscurantism ;-).

And there are at least four different RHEL licenses for real (hardware-based) servers:

  1. Self-support. If you have many identical or almost identical servers or virtual machines, it does not make sense to buy standard or premium licenses for all of them. One can be bought for one server, and all the others can use self-support licenses, which provide access to patches.
  2. Standard (web and phone during business hours (if you manage to get to a support specialist), but mostly web)
  3. Premium (Web and phone with phone 24 x7 for severity 1 and 2 problems)
  4. HPC: computational nodes with a limited number of packages in the distribution and repositories. (I wonder if Oracle Linux is not a better deal for computational nodes than this type of RHEL license; sometimes CentOS can be used too, which eliminates the problem.)
  5. No-cost RHEL Developer Subscription, available since March 2016.

The RHEL licensing scheme is based on so-called "entitlements", which, oversimplifying, is one license for a 2-socket server. In the past they were "mergeable", so if your 4-socket license expired and you had two spare 2-socket licenses, RHEL was happy to accommodate your needs. Now they are not.

But that does not ensure the right mix if you need different types of licenses for different classes of servers. All is fine until you use a mixture of licenses (for example some cluster licenses, some patch-only (aka self-support), some premium: four types of licenses altogether). In that case it is almost impossible to understand where a particular license landed. If you use uniform licenses, this scheme works reasonably well; it breaks the moment you buy several types of licenses. One way to avoid this is to buy RHEL via resellers (such as Dell and HP). In this case Dell or HP engineers provide support for RHEL, and naturally they know their hardware much better than RHEL engineers, so, for example, driver problems are much easier to debug.

This path works well, but to cut costs you need to buy a five-year license with the server, which is a lot of money, and you lose the ability to switch Linux flavor. This is also a problem with buying a cluster license: Dell and HP can install basic cluster software on the enclosure for a minimal fee, but they force upon you additional software which you might not want or need. And believe me, HPC can be used outside computational tasks. It is actually an interesting paradigm for managing a heterogeneous datacenter. The only problem is that you need to learn to use it :-). For example, SGE is a very well engineered scheduler (originally from Sun, but later open sourced). While it is free software, it beats many commercial offerings, and while it lacks calendar scheduling, any calendar scheduler can be used with it to compensate (even cron; in this case each cron task becomes an SGE submit script).

Still, using an HPC-like config might be an option to lower the fees if you use multiple similar servers (for example a blade enclosure with 16 identical blades). The idea is to organize a particular set of servers as a cluster, with SGE (or another similar scheduler) installed on the head node. Now Hadoop is a fashionable thing (while being just a simple case of distributed search) and you can already claim that this is a Hadoop type of service. In this case you pay twice the price for the headnode, but all computation nodes are $100 a year each or so. Although you can get the same self-support license from Oracle for the same price without Red Hat restrictions, so from another point of view, why bother?

Two licensing systems: RHN and RHSM

There are two licensing systems used by Red Hat:

  1. Classic (RHN): the old system, phased out in mid-2017.
  2. "New" (RHSM): the new system, used predominantly on RHEL 6 and 7. Obligatory from August 2017.

Both are complex and require study. Many hours of sysadmin time are wasted on mastering their complexities, while in reality this is overhead that allows Red Hat to charge money for the product. So the fact that they do NOT support it well tells us a lot about the level of deterioration of the company.

All in all, Red Hat successfully created an almost impenetrable mess of obsolete and semi-obsolete notes, poorly written and incomplete documentation, dismal diagnostics and poor troubleshooting tools. The level of frustration sometimes reaches such heights that people just abandon RHEL. I did, for several non-critical systems. If CentOS or Scientific Linux works, there is no reason to suffer from Red Hat licensing issues. That also makes Oracle, surprisingly, a more attractive option :-). Oracle Linux is also cheaper. But usually you are bound by corporate policy here.

The "new" subscription system (RHSM) is slightly better than RHN for large organizations. It allows you to assign a specific license to a specific box and to list the current status of licensing. But like RHN, it requires proxy settings in a configuration file; it does not take them from the environment. If the company has several proxies and you have a mismatch, you can be royally screwed. In general, you need to check the consistency of your environment against the conf file settings. The level of understanding of proxy environments by RHEL tech support is basic or worse, so they use the database of articles instead of actually troubleshooting based on sosreport data. Moreover, each day there might be a new person working on your ticket, so there is no continuity.

The RHEL System Registration Guide is weak and does not cover more complex cases and typical mishaps.

The RHN system of RHEL licenses can also cover various numbers of sockets (the default is 2). A 4-socket server will take two licenses. This is not the case with RHSM.
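The arithmetic behind that RHN behaviour is simple, and can be sketched in a few lines (illustrative only; the function name is mine, not Red Hat's):

```python
# Illustrative only: RHN-era entitlements came in 2-socket units,
# so a 4-socket server consumed two of them.
import math

def entitlements_needed(sockets, sockets_per_entitlement=2):
    """Number of 2-socket entitlements a server consumes under RHN."""
    return math.ceil(sockets / sockets_per_entitlement)

for s in (1, 2, 4, 8):
    print(s, "sockets ->", entitlements_needed(s), "entitlement(s)")
```

Under RHSM this automatic conversion no longer happens, which is exactly the complaint below.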

In general, licensing by physical socket or even core is an old and dirty IBM trick that many companies now reuse (and Red Hat simply can't claim that they are not greedy).

In RHN, at least, licenses were eventually converted into some kind of uniform licensing tokens that were assigned to unlicensed systems more or less automatically (for example, a 4-socket system consumed two tokens). With RHSM this is not true, which creates a set of complex problems for large enterprises.

But the major drawback of RHN for large enterprises is that there is no way (or at least I do not know how) to specify which type of license a particular system requires.

In its current state, the classic licensing system is simply not functional enough for a large enterprise that has a complex mix of systems (HPC clusters, servers that require premium support, regular support (most of the servers), self-support systems (only patching), etc.). You can slightly improve things by using your own patch distribution server, but the licensing system remains complex and sysadmin-unfriendly. Using multiple accounts with RHN (one for each type of license) might help, but I never tried that. There might be better ways to use RHN, but as far as I know most organizations use the most primitive "flat license space" model. And most companies have a single account with Red Hat.



Learn More

So those Red Hat honchos with high salaries essentially created a new job: license administrator. Congratulations!

If you are an unlucky guy without such a person, then you need to read and understand at least the RHEL System Registration Guide, which outlines the major options available for registering a system (and carefully avoids mentioning bugs and pitfalls, of which there are many). For some reason migration from RHN to RHSM usually works well, so it might make sense to register a system first in RHN and then migrate it.

Also useful (to the extent any poorly written Red Hat documentation is useful) is How to register and subscribe a system to the Red Hat Customer Portal using Red Hat Subscription-Manager. At least it tries to answer some of the most basic questions.

There is also an online tool to assist you in selecting the most appropriate registration technology for your system: the Red Hat Labs Registration Assistant.

Pretty convoluted RPM packaging system which creates problems

The idea of RPM was to simplify installation of complex packages. But RPM packages create a set of problems of their own, especially connected with libraries (which is not exactly a Red Hat problem; it is a Linux problem). One example is the so-called multilib problem, as detected by yum:

--> Finished Dependency Resolution

Error:  Multilib version problems found. This often means that the root
       cause is something else and multilib version checking is just
       pointing out that there is a problem. Eg.:

         1. You have an upgrade for libicu which is missing some
            dependency that another package requires. Yum is trying to
            solve this by installing an older version of libicu of the
            different architecture. If you exclude the bad architecture
            yum will tell you what the root cause is (which package
            requires what). You can try redoing the upgrade with
            --exclude libicu.otherarch ... this should give you an error
            message showing the root cause of the problem.

         2. You have multiple architectures of libicu installed, but
            yum can only see an upgrade for one of those arcitectures.
            If you don't want/need both architectures anymore then you
            can remove the one with the missing update and everything
            will work.

         3. You have duplicate versions of libicu installed already.
            You can use "yum check" to get yum show these errors.

       ...you can also use --setopt=protected_multilib=false to remove
       this checking, however this is almost never the correct thing to
       do as something else is very likely to go wrong (often causing
       much more problems).

       Protected multilib versions: libicu-4.2.1-14.el6.x86_64 != libicu-4.2.1-11.el6.i686
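The condition the check complains about can be reproduced in a few lines. This is a sketch (hypothetical function names, not yum internals) of the protected_multilib rule: the same package name installed for more than one architecture with mismatched versions.

```python
# Hypothetical sketch of yum's protected_multilib condition: flag any
# package installed for multiple arches with differing versions.
from collections import defaultdict

def multilib_conflicts(packages):
    """packages: iterable of (name, version, arch) tuples.
    Return {name: sorted (version, arch) pairs} for conflicting names."""
    by_name = defaultdict(set)
    for name, version, arch in packages:
        by_name[name].add((version, arch))
    conflicts = {}
    for name, pairs in by_name.items():
        arches = {a for _, a in pairs}
        versions = {v for v, _ in pairs}
        # More than one arch AND more than one version -> multilib problem.
        if len(arches) > 1 and len(versions) > 1:
            conflicts[name] = sorted(pairs)
    return conflicts

installed = [
    ("libicu", "4.2.1-14.el6", "x86_64"),
    ("libicu", "4.2.1-11.el6", "i686"),
    ("bash", "4.1.2", "x86_64"),
]
print(multilib_conflicts(installed))
```

With the libicu versions from the error message above, only libicu is flagged; bash, installed for a single arch, is not.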

Selecting packages for installation

You can improve the typical RHEL situation, with a lot of useless daemons installed, by carefully selecting packages and then reusing the generated kickstart file. That can be done via the advanced menu for one box, and then this kickstart file can be used for all other boxes with minor modifications. Kickstart still works, despite the trend toward overcomplexity in other parts of the distribution ;-)
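For illustration, a fragment of a kickstart %packages section in the RHEL 6 style that starts from the core group and drops daemons a headless server does not need (the package names removed here are examples; adjust to your environment):

```
%packages
@core
-avahi-daemon
-NetworkManager
-cups
%end
```

The same file, minus the removals you regret, can then be fed to every other box in the rack.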

Problems with architectural vision of Red Hat brass

Both the architectural level of thinking of the Red Hat brass (with daemons like avahi and systemd installed by default) and clear attempts along the lines of "not invented here" in virtualization create some concerns. It is clear that Red Hat by itself can't become a major virtualization player like VMware. It just does not have enough money for development.

You would think that the safest bet is to reuse the leader among open source offerings, which was Xen. But the Red Hat brass thinks differently and wants to play a more dangerous poker game: it started promoting KVM. Red Hat released Enterprise Linux 5 with integrated virtualization (Xen) and then changed their mind after RHEL 5.5 or so. In RHEL 6 Xen was replaced by KVM.

What is good is that after ten years they eventually managed to re-implement Solaris 10 zones. In RHEL 7 they are usable.

Security overkill with SELinux

RHEL contains a security layer called SELinux, but in most corporate deployments it is either disabled or operates in permissive mode. The reason is that it is notoriously difficult to configure correctly, and in most cases the game is not worth the candles.

The firewall is more usable in corporate deployments, especially when you have an obnoxious or incompetent security department (a pretty typical situation in a large corporation ;-), as it prevents a lot of stupid questions from utterly incompetent "security gurus" about open ports, and can stop dead the scanning attempts of tools that test for known vulnerabilities, by means of which security departments try to justify their miserable existence. Generally it is dangerous to allow the exploits used in such tools, which local script kiddies (aka the "security team") recklessly launch against your production servers (as if checking for a particular vulnerability using an internal script were an inferior solution). There have been reports of production servers crashing due to such games. Some "security script kiddies" who understand very little about Unix even try to prove their worth by downloading exploits from hacker sites and using them against production servers on the internal corporate network. Unfortunately, they are not always fired for such valiant efforts.

To get an idea of the level of complexity, try to read the Deployment Guide. The full set of documentation is available from the Red Hat documentation site.

So it is not accidental that in many cases SELinux is disabled in enterprise installations. Some commercial software packages explicitly recommend disabling it in their installation manuals.
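For the record, this is what "disabled in enterprise installations" usually amounts to, a two-line change in the SELinux configuration file (values other than the enforcing default are shown for illustration):

```
# /etc/selinux/config
SELINUX=permissive      # log denials but do not enforce; "disabled" turns SELinux off entirely
SELINUXTYPE=targeted
```

Running setenforce 0 switches a live system to permissive mode until the next reboot; the file above controls the state after reboot.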

There is an alternative to SELinux with a more elegant, usable and understandable approach: AppArmor, which is/was used in SLES (SLES now has SELinux too, and suffers from overcomplexity even more than RHEL ;-). But it did not get enough traction. Still, IMHO, if you need a really high level of security for a particular server, this is the preferable path. Or you can use Solaris, if you have a knowledgeable Solaris sysadmin on the floor (security via obscurity actually works pretty well in this case).

RHEL became a kind of Microsoft of the Linux world, and as such it is the most attacked flavor of Linux, just due to its general popularity. It is not a good idea to use RHEL if security is of vital importance, although with SELinux enabled it is definitely a more hardened variant of the OS than without. See Potemkin Villages of Computer Security for a more detailed discussion.

The road to hell is paved with good intentions

The loss of architectural integrity of Unix is now very pronounced in RHEL, both RHEL 6 and RHEL 7, although 7 is definitely worse. And this is not only the systemd fiasco. For example, recently I spent a day troubleshooting an interesting and unusual problem: one out of 16 identical (both hardware- and software-wise) blades in a small HPC cluster (and only one) failed to enable its bonded interface on boot, and thus remained offline. Most sysadmins would think that something is wrong with the hardware: for example, the Ethernet card on the blade and/or the switch port, or even the internal enclosure interconnect. I also initially thought this way. But this was not the case ;-)

Tech support from Dell was not able to locate any hardware problem, although they diligently upgraded the CMC on the enclosure and the BIOS and firmware on the blade. BTW, this blade had had similar problems in the past, and Dell tech support once even replaced the Ethernet card in it, thinking that it was the culprit. Now I know that this was a completely wrong decision on their part, and a waste of both time and money :-). They came to this conclusion by swapping the blade to a different slot and seeing that the problem migrated to the new slot. Bingo: the card is the root cause. The problem is that it was not. What is especially funny is that replacing the card did solve the problem for a while. After reading the information provided below, you will be as puzzled as me as to why that happened.

But after yet another power outage the problem  returned.

This time I started to suspect that the card had nothing to do with the problem. After closer examination I discovered that in its infinite wisdom, in RHEL 6 Red Hat introduced a package called biosdevname. The package was developed by Dell (a fact which seriously undermined my trust in Dell hardware ;-).

This package renames interfaces to a new set of names, supposedly consistent with their etching on the case of rack servers. It is useless (or more correctly, harmful) for blades. The package is primitive and does not understand whether a server is a rack server or a blade.

Moreover, while doing this supposedly useful renaming, the package introduces into the 70-persistent-net.rules file a stealth rule:

KERNEL=="eth*", ACTION=="add", PROGRAM="/sbin/biosdevname -i %k", NAME="%c"

I did not look at the code, but from the observed behaviour it looks like in some cases in RHEL 6 (and most probably in RHEL 7 too) the package adds a "stealth" rule to the END (not the beginning, but the end !!!) of the /etc/udev/rules.d/70-persistent-net.rules file, which means that if a similar rule exists in 70-persistent-net.rules, it is overridden. Or something similar to this effect.

If you look at the Dell knowledge base, there are dozens of advisories related to this package (just search for biosdevname), which suggests that something is deeply wrong with its architecture.

What I observed is that on some blades (the key word is some, which converts the situation into an Alice-in-Wonderland environment) the rules for interfaces listed in the 70-persistent-net.rules file simply do not work if this package is enabled. For example, Dell Professional Services, in their infinite wisdom, renamed interfaces back to eth0-eth4 for the Intel X710 4-port 10Gb Ethernet card that we have on some blades. On 15 out of 16 blades in the Dell enclosure this absolutely wrong idea works perfectly well. But on blade 16 it sometimes does not, and as a result this blade does not boot after a power outage or reboot. When this happens is unpredictable. Sometimes it boots, and sometimes it does not. And you can't understand what is happening, no matter how hard you try, because of the stealth nature of the changes introduced by the biosdevname package.

Two interfaces on this blade (as you now suspect, eth0 and eth1) were bonded. After around six hours of poking around the problem, I discovered that despite the presence of rules for eth0-eth4 in the 70-persistent-net.rules file, RHEL 6.7 still renames all four interfaces on boot to the em0-em4 scheme, and naturally bonding fails, as the eth0 and eth1 interfaces do not exist.

First I decided to uninstall the biosdevname package and see what would happen. That did not work (see below why -- the de-installation script in this RPM is incorrect and contains a bug: it is not enough to remove the files, you also need to rebuild the initramfs, e.g. with update-initramfs -u on Debian-type systems, or dracut -f on RHEL; hat tip to Oler).

Searching for "Renaming em to eth" I found a post in which the author recommended disabling this "feature" by adding biosdevname=0 to the kernel parameters in /etc/grub.conf.

That worked. So two days of my life were lost finding a way to disable this RHEL "enhancement", which is completely unnecessary for blades.
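For RHEL 6 the change is a one-line edit to the kernel line in /etc/grub.conf; the kernel version, device names and other parameters below are illustrative, only the trailing biosdevname=0 matters:

```
# /etc/grub.conf -- append biosdevname=0 to the kernel line (versions illustrative)
title Red Hat Enterprise Linux (2.6.32-573.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-573.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root rhgb quiet biosdevname=0
        initrd /initramfs-2.6.32-573.el6.x86_64.img
```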

Here is some information about this package

Copyright (c) 2006, 2007 Dell, Inc.  
Licensed under the GNU General Public License, Version 2.

biosdevname in its simplest form takes a kernel device name as an argument, and returns the BIOS-given name it "should" be.  This is
necessary on systems where the BIOS name for a given device (e.g. the label on the chassis is "Gb1") doesn't map directly 
and obviously to the kernel name (e.g. eth0).
The distro-patches/sles10/ directory contains a patch needed to integrate biosdevname into the SLES10 udev ethernet 
naming rules. This also works as a straight udev rule.  On RHEL4, that looks like:
KERNEL=="eth*", ACTION=="add", PROGRAM="/sbin/biosdevname -i %k", NAME="%c"
This makes use of various BIOS-provided tables:
PCI Configuration Space
PCI IRQ Routing Table ($PIR)
PCMCIA Card Information Structure
SMBIOS 2.6 Type 9, Type 41, and HP OEM-specific types
therefore it's likely that this will only work well on architectures that provide such information in their BIOS.

To add insult to injury, this behaviour was demonstrated on only one of 16 absolutely identically configured Dell M630 blades with identical hardware and absolutely identical (cloned) OS instances, which makes RHEL a kind of "Alice in Wonderland" system. After this experience it is difficult not to hate Red Hat, but we can do very little to change this situation for the better.

This is just one example. I have more similar stories to tell.

I would like to stress that the fact that the utility included in this package does not understand that the target for installation is a blade (there is no etching on blade network interfaces ;-) is pretty typical RHEL behaviour, despite the fact that the package was developed by Dell. They also install audio packages on boxes that have no audio card, and do a lot of similar things ;-)

If you research this topic using your favorite search engine (which should not be Google anymore ;-) you will find dozens of posts in which people try to resolve this problem with various levels of competency and success. Such a tremendous waste of time and effort. Among the best that I have found were:

Current versions and year of end of support

Supported versions of RHEL (as of April 2016) are 5.11, 6.7, and 7.2. Usually a large enterprise uses a mixture of versions, often all three of them. Compatibility within a single version is usually very good (I would say on par with Solaris), and the risk of upgrading from, say, 6.2 to 6.7 is minimal. Not so in the case of major versions. Here your mileage may vary.

See Red Hat Enterprise Linux - Wikipedia and Red Hat Enterprise Linux.


In Linux there is no single convention for determining which flavor of Linux you are running. For Red Hat, in order to determine which version is installed on the server, you can use the command

cat /etc/redhat-release

Oracle Linux adds its own file while preserving the RHEL file, so a more appropriate command would be

cat /etc/*release
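For scripts that need this across Red Hat derivatives, the command can be wrapped in a tiny function; this is a sketch, and the directory argument exists only so it can be exercised outside /etc:

```shell
#!/bin/sh
# show_release [DIR] -- print the contents of all *release files under DIR
# (defaults to /etc); covers redhat-release, oracle-release, etc.
show_release() {
    dir=${1:-/etc}
    cat "$dir"/*release 2>/dev/null
}

# on a real server:
#   show_release      # prints e.g. the redhat-release string
```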

End of support issues

See Red Hat Enterprise Linux Life Cycle - Red Hat Customer Portal:

For more information on package versions see Red Hat Enterprise Linux

Updates in RHEL 5

RHEL 5, especially versions 5.6-5.9, is probably one of the most stable versions of Red Hat I have ever encountered. It still supports more or less recent hardware (Oracle provides an updated kernel if you want it). This is a very conservative distribution. For example, it still uses such really old (or obsolete, if you wish) versions as bash 3.2.25, Perl 5.8.8, and Python 2.4.3.

Oracle produced an improved kernel for 5.x versions based on a later version of the Linux kernel than the "stock" RHEL kernel. It might benefit stability if you are running Oracle applications. It is 64-bit only and is more capricious toward hardware than the Red Hat stock kernel, so your mileage may vary.

RHEL 5 suffers from a proliferation of useless or semi-useless daemons, and as such is not secure and probably can't be made secure in the default installation. You need to carefully minimize the system to get a usable server.
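One low-tech way to audit this on RHEL 5 is to compare the output of `chkconfig --list` against a whitelist of daemons you actually want enabled; the sketch below assumes you maintain such a whitelist file yourself (its name and contents are up to you):

```shell
#!/bin/sh
# list_extras WHITELIST -- read `chkconfig --list` style output on stdin and
# print the names of services enabled in runlevel 3 that are not in WHITELIST
list_extras() {
    awk '$5 == "3:on" { print $1 }' | grep -vxF -f "$1"
}

# typical use on a server:
#   chkconfig --list | list_extras /root/wanted-daemons.txt
```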

Systemtap is a GPL-based infrastructure which simplifies information gathering on a running Linux system. This assists in diagnosis of performance or functional problems. With systemtap, the tedious and disruptive "instrument, recompile, install, and reboot" sequence is no longer needed to collect diagnostic data. Systemtap is now fully supported.
The Internet storage name service for Linux (isns-utils) is now supported. This allows you to register iSCSI and iFCP storage devices on the network. isns-utils allows dynamic discovery of available storage targets through storage initiators.

isns-utils provides intelligent storage discovery and management services comparable to those found in fibre-channel networks. This allows an IP network to function in a similar capacity to a storage area network.

With its ability to emulate fibre-channel fabric services, isns-utils allows for seamless integration of IP and fibre-channel networks. In addition, isns-utils also provides utilities for managing both iSCSI and fibre-channel devices within the network.

For usage instructions, refer to /usr/share/docs/isns-utils-[version]/README and /usr/share/docs/isns-utils-[version]/README.redhat.setup.

rsyslog is an enhanced multi-threaded syslogd daemon that supports the following (among others):

rsyslog is compatible with the stock sysklogd, and can be used as a replacement in most cases. Its advanced features make it suitable for enterprise-class, encrypted syslog relay chains; at the same time, its user-friendly interface is designed to make setup easy for novice users.
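As an illustration of the sysklogd-compatible syntax plus one rsyslog-specific extension, a minimal /etc/rsyslog.conf might contain lines like the following (the relay hostname is a placeholder):

```
# classic sysklogd-style selector, understood unchanged by rsyslog
*.info;mail.none;authpriv.none          /var/log/messages

# rsyslog extension: forward everything to a central relay over TCP ("@@")
*.*                                     @@loghost.example.com:514
```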


Openswan is a free implementation of Internet Protocol Security (IPsec) and Internet Key Exchange (IKE) for Linux. IPsec uses strong cryptography to provide authentication and encryption services. These services allow you to build secure tunnels through untrusted networks. Everything passing through the untrusted network is encrypted by the IPsec gateway machine and decrypted by the gateway at the other end of the tunnel. The resulting tunnel is a virtual private network (VPN).

This release of Openswan supports IKEv2 (RFC 4306, RFC 4718) and contains an IKEv2 daemon that conforms to the IETF RFCs.

Password Hashing Using SHA-256/SHA-512
Password hashing using the SHA-256 and SHA-512 hash functions is now supported.

To switch to SHA-256 or SHA-512 on an installed system, run authconfig --passalgo=sha256 --update or authconfig --passalgo=sha512 --update. To configure the hashing method through a GUI, use authconfig-gtk. Existing user accounts will not be affected until their passwords are changed.

For newly installed systems, using SHA-256 or SHA-512 can be configured only for kickstart installations. To do so, use the --passalgo=sha256 or --passalgo=sha512 options of the kickstart command auth; also, remove the --enablemd5 option if present.

If your installation does not use kickstart, use authconfig as described above. After installation, change all created passwords, including the root password.
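In a kickstart file the relevant directive is a single line (the rest of the kickstart is omitted here):

```
# kickstart fragment: shadow passwords with SHA-512 hashing (no --enablemd5)
auth --useshadow --passalgo=sha512
```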

Appropriate options were also added to libuser, pam, and shadow-utils to support these password hashing algorithms. authconfig configures the necessary options automatically, so it is usually not necessary to modify them manually.

OFED in comps.xml
The group OpenFabrics Enterprise Distribution is now included in comps.xml. This group contains components used for high-performance networking and clustering (for example, InfiniBand and Remote Direct Memory Access).

Further, the Workstation group has been removed from comps.xml in the Red Hat Enterprise Linux 5.2 Client version. This group only contained the openib package, which is now part of the OpenFabrics Enterprise Distribution group.

system-config-netboot is now included in this update. This is a GUI-based tool used for enabling, configuring, and disabling network booting. It is also useful in configuring PXE-booting for network installations and diskless clients.
In order to accommodate the use of compilers other than gcc for specific applications that use message passing interface (MPI), the following updates have been applied to the openmpi and lam packages:

Note that when upgrading to this release's version of openmpi, you should migrate any default parameters set for lam or openmpi to /usr/lib(64)/lam/etc/ and /usr/lib(64)/openmpi/[openmpi version]-[compiler name]/etc/. All configurations for either openmpi or lam should be set in these directories.

lvm2 Snapshot Volume Warning
lvm2 will now warn if a snapshot volume is near its maximum capacity. However, this feature is not enabled by default. To enable this feature, uncomment the following line in /etc/lvm/lvm.conf
snapshot_library = ""

Ensure that the dmeventd section and its delimiters ({ }) are also uncommented.
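Assuming the stock RHEL 5 lvm.conf layout, the uncommented result would look roughly like this; the library name is the one shipped in the lvm2 package, but verify it against the comments in your own lvm.conf:

```
# /etc/lvm/lvm.conf -- dmeventd section with snapshot monitoring enabled
dmeventd {
    snapshot_library = "libdevmapper-event-lvm2snapshot.so"
}
```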

bash has been updated to version 3.2. This version fixes a number of outstanding bugs, most notably:

Note that with this update, the output of ulimit -a has also changed from the Red Hat Enterprise Linux 5.1 version. This may cause problems with some automated scripts. If you have any scripts that use ulimit -a output strings, you should revise them accordingly.

Updates in RHEL 6

RHEL 6 was released in November 2010, so technical support and patching will last till 2020. See Red Hat Enterprise Linux Life Cycle - Red Hat Customer Portal

RHEL 6 cut the number of daemons installed by default in comparison with RHEL 5. NFSv4 became the default, and that causes problems: if the master is shut down, the client often can't recover and enters a zombie state. Luckily they spared us from systemd in this version ;-)

RHEL 6 initially gave me the impression of a half-baked distribution rushed to customers, and may signal an internal crisis in RHEL development, as in some areas it is worse than RHEL 5.6. It stabilized around version 6.5. Some changes were arbitrary and just make the distribution look "new" without bringing anything significant to the table. For example, during installation the partitioning procedure changed, and probably not for the better. Some "mostly desktop or home network" daemons are present by default, for example the complex and potentially insecure avahi daemon (an implementation of Zeroconf).

The Avahi daemon discovers network resources, allocates IP addresses without a DHCP server, and makes the computer accessible via its local mDNS hostname (hostname.local).

As RHEL is targeted at corporate environments, which typically use static IPs for servers, it makes little or no sense there. It is better to disable it at installation. See Disabling the Avahi daemon.

Also, the ability of the distribution to select the right set of daemons is compromised in RHEL 6 even more than in RHEL 5, despite the addition of the useful concept of "server roles": by default there are a lot of useless daemons. If you, for example, install the "database server" role, you then need to check and delete/disable the redundant ones manually.

Documentation for version 6

Red Hat Enterprise Linux 6 Technical Details : What's New

... ... ...


* Red Hat Enterprise Linux 6 supports more sockets, more cores, more threads, and more memory.

Efficient Scheduling

* The CFS schedules the next task to be run based on which task has consumed the least time, task prioritization, and other factors. Using hardware awareness and multi-core topologies, the CFS optimizes task performance and power consumption.

Reliability, Availability, and Serviceability (RAS)

* RAS hardware-based hot add of CPUs and memory is enabled.
* When supported by machine check hardware, the system can recover from some previously fatal hardware errors with minimal disruption.
* Memory pages with errors can be declared as "poisoned", and will be avoided.


* The new default file system, ext4, is faster, more robust, and scales to 16TB.
* The Scalable File System Add-On contains the XFS file system which scales to 100TB.
* The Resilient Storage Add-On includes the high availability, clustered GFS2 file system.
* NFSv4 is significantly improved over NFSv3, and backwards compatible.
* FUSE allows filesystems to run in user space, allowing testing and development of newer FUSE-based filesystems (such as cloud filesystems).

High Availability

* The web interface based on Conga has been re-designed for added functionality and ease of use.
* The cluster group communication system, Corosync, is mature, secure, high performance, and light-weight.
* Nodes can re-enable themselves after failure without administrative intervention using unfencing.
* Unified logging and debugging simplifies administrative work.
* Virtualized KVM guests can be run as managed services which enables fail-over, including between physical and virtual hosts.
* Centralized configuration and management is provided by Conga.
* A single cluster command can be used to manage system logs from different services, and the logs have a consistent format that is easier to parse.

Power Management

* The tickless kernel feature keeps systems in the idle state longer, resulting in net power savings.
* Active State Power Management and Aggressive Link Power Management provide enhanced system control, reducing the power consumption of I/O subsystems. Administrators can actively throttle power levels to reduce consumption.
* Realtime drive access optimization reduces filesystem metadata write overhead.

System Resource Allocation

* Cgroups organize system tasks so that they can be tracked, and so that other system services can control the resources that cgroup tasks may consume (Partitioning). Two user-space tools, cgexec and cgclassify, provide easy configuration and management of cgroups.
* Cpuset applies CPU resource limits to cgroups, allowing processing performance to be allocated across tasks.
* The memory resource controller applies memory resource limits to cgroups.
* The network resource controller applies network traffic limits to cgroups.
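On RHEL 6 these controllers are usually wired up via /etc/cgconfig.conf from the libcgroup package; a minimal sketch capping CPU shares and memory for a hypothetical group named "limited":

```
# /etc/cgconfig.conf fragment -- hypothetical "limited" group
group limited {
    cpu {
        cpu.shares = 256;               # a quarter of the default 1024
    }
    memory {
        memory.limit_in_bytes = 1G;
    }
}
# then run a job inside the group, e.g.:
#   cgexec -g cpu,memory:limited /usr/bin/somejob
```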


* A snapshot of a logical volume may be merged back into the original logical volume, reverting changes that occurred after the snapshot.
* Mirror logs of regions that need to be synchronized can be replicated, supporting high availability.
* LVM hot spare allows the behavior of a mirrored logical volume after a device failure to be explicitly defined.
* DM-Multipath allows paths to be dynamically selected based on queue size or I/O time data.
* Very large SAN-based storage is supported.
* Automated I/O alignment and self-tuning is supported.
* Filesystem usage information is provided to the storage device, allowing administrators to use thin provisioning to allocate storage on-demand.
* SCSI and ATA standards have been extended to provide alignment and I/O hints, allowing automated tuning and I/O alignment.
* DIF/DIX provides better integrity checks for application data.


* UDP Lite tolerates partially corrupted packets to provide better service for multimedia protocols, such as VOIP, where partial packets are better than none.
* Multiqueue Networking increases processing parallelism for better performance from multiple processors and CPU cores.
* Large Receive Offload (LRO) and Generic Receive Offload (GRO) aggregate packets for better performance.
* Support for Data Center Bridging includes data traffic priorities and flow control for increased Quality of Service.
* New support for software Fiber Channel over Ethernet (FCoE) is provided.
* iSCSI partitions may be used as either root or boot filesystems.
* IPv6 is supported.

Security and Access Control

* SELinux policies have been extended to more system services.
* SELinux sandboxing allows users to run untrusted applications safely and securely.
* File and process permissions have been systematically reduced whenever possible to reduce the risk of privilege escalation.
* New utilities and system libraries provide more control over process privileges for easily managing reduced capabilities.
* Walk-up kiosks (as in banks, HR departments, etc.) are protected by SELinux access control, with on-the-fly environment setup and take-down, for secure public use.
* Openswan includes a general implementation of IPsec that works with Cisco IPsec.

Enforcement and Verification of Security Policies

* OpenScap standardizes system security information, enabling automatic patch verification and system compromise evaluation.

Identity and Authentication

* The new System Security Services Daemon (SSSD) provides centralized access to identity and authentication resources, enables caching and offline support.
* OpenLDAP is a compliant LDAP client with high availability from N-way MultiMaster replication, and performance improvements.

Web Infrastructure

* This release of Apache includes many improvements, see Overview of new features in Apache 2.2
* A major revision of Squid includes manageability and IPv6 support
* Memcached 1.4.4 is a high-performance and highly scalable, distributed, memory-based object caching system which enhances the speed of dynamic web applications.


* OpenJDK 6 is an open source implementation of the Java Platform Standard Edition (SE) 6 specification. It is TCK-certified based on the IcedTea project, and the implementation of a Java Web Browser plugin and Java web start removes the need for proprietary plugins.
* Tight integration of OpenJDK and Red Hat Enterprise Linux includes support for Java probes in SystemTap to enable better debugging for Java.
* Tomcat 6 is an open source and best-of-breed application server running on the Java platform. With support for Java Servlets and Java Server Pages (JSP), tomcat provides a robust environment for developing and deploying dynamic web applications.


* Ruby 1.8.7 is included, and Rails 3 supports dependencies.
* Version 4.4 of gcc includes OpenMP3 conformance for portable parallel programs, Integrated Register Allocator, Tuples, additional C++0x conformance implementations, and debuginfo handling improvements.
* Improvements to the libraries include malloc optimizations, improved speed and efficiency for large blocks, NUMA considerations, lock-free C++ class libraries, NSS crypto consolidation for LSB 4.0 and FIPS level 2, and improved automatic parallel mode in the C++ library.
* Gdb 7.1.29 improvements include C++ function, class, templates, variables, constructor / destructor improvements, catch / throw and exception improvements, large program debugging optimizations, and non-blocking thread debugging (threads can be stopped and continued independently).
* TurboGears 2 is a powerful internet-enabled framework that enables rapid web application development and deployment in Python.
* Updates to the popular web scripting and programming languages PHP (5.3.2), Perl (5.10.1) include many improvements.

Application Tuning

* SystemTap uses the kernel to generate non-intrusive debugging information about running applications.
* The tuned daemon monitors system use and uses that information to automatically and dynamically adjust system settings for better performance.
* SELinux can be used to observe, then tighten application access to system resources, leading to greater security.


* PostgreSQL 8.4.4 includes many improvements, please see PostgreSQL 8.4 Feature List for details.
* MySQL 5.1.47 improvement are listed here: What Is New in MySQL 5.1.
* SQLite 3.6.20 includes significant performance improvements, and many important bug fixes. Note that this release has made incompatible changes to the internal OS interface and VFS layers (compared to earlier releases).

System API / ABI Stability

* The API / ABI Compatibility Commitment defines stable, public, system interfaces for the full ten-year life cycle of Red Hat Enterprise Linux 6. During that time, applications will not be affected by security errata or service packs, and will not require re-certification. Backward compatibility for the core ABI is maintained across major releases, allowing applications to span subsequent releases.

Integrated Virtualization, Kernel-Based Virtualization

* The KVM hypervisor is fully integrated into the kernel, so all RHEL system improvements benefit the virtualized environment.
* The application environment is consistent for physical and virtual systems.
* Deployment flexibility, provided by the ability to easily move guests between hosts, allows administrators to consolidate resources onto fewer machines during quiet times, or free up hardware for maintenance downtime.

Leverages Kernel Features

* Hardware abstraction enables applications to move from physical to virtualized environments independently of the underlying hardware.
* Increased scalability of CPUs and memory provides more guests per server.
* Block storage benefits from selectable I/O schedulers and support for asynchronous I/O.
* Cgroups and related CPU, memory, and networking resource controls provide the ability to reduce resource contention and improve overall system performance.
* Reliability, Availability, and Serviceability (RAS) features (e.g., hot add of processors and memory, machine check handling, and recovery from previously fatal errors) minimize downtime.
* Multicast bridging includes the first release of IGMP snooping (in IPv4) to build intelligent packet routing and enhance network efficiency.
* CPU affinity assigns guests to specific CPUs.

Guest Acceleration

* CPU masking allows all guests to use the same type of CPU.
* SR-IOV virtualizes physical I/O card resources, primarily networking, allowing multiple guests to share a single physical resource.
* Message signaled interrupts deliver interrupts as specific signals, increasing the number of interrupts.
* Transparent hugepages provides significant performance improvements for guest memory allocation.
* Kernel Same Page (KSM) provides reuse of identical pages across virtual machines (known as deduplication in the storage context).
* The tickless kernel defines a stable time model for guests, avoiding clock drift.
* Advanced paravirtualization interfaces include non-traditional devices such as the clock (enabled by the tickless kernel), interrupt controller, spinlock subsystem, and vmchannel.


* In virtualized environments, sVirt (powered by SELinux) protects guests from one another

Microsoft Windows Support

* Windows WHQL-certified drivers enable virtualized Windows systems, and allow Microsoft customers to receive technical support for virtualized instances of Windows Server.

Installation, Updates, and Deployment

* Anaconda supports installation of a “minimal platform” as a specific server installation, or as a strategy for reducing the number of software packages to increase security.
* Red Hat Network (RHN) and Satellite continue to provide management, provisioning and monitoring for large deployments.
* Installation options have been reorganized into “workload profiles” so that each system installation will provide the right software for specific tasks.
* Dracut, a replacement for mkinitrd, minimizes the impact of underlying hardware changes, is more maintainable, and makes it easier to support third party drivers.
* The new yum history command provides information about yum transactions, and supports undo and redo of selected operations.
* Yum and RPM offer significantly improved performance.
* RPM signatures use the Secure Hash Algorithm (SHA256) for data verification and authentication, improving security.
* Storage devices can be designated for encryption at installation time, protecting user and system data. Key escrow allows recovery of lost keys.
* Standards Based Linux Instrumentation for Manageability (SBLIM) manages systems using Web-Based Enterprise Management (WBEM).
* ABRT enhanced error reporting speeds triage and resolution of software failures.

Routine Task Delegation

* PolicyKit allows administrators to provide users access to privileged operations, such adding a printer or rebooting a desktop, without granting administrative privileges.


* Improvements include better printing, printer discovery, and printer configuration services from cups and system-config-printer.
* SNMP-based monitoring of ink and toner supply levels and printer status provides easier monitoring to enable efficient inventory management of ink and toner cartridges.
* Automatic PPD configuration for postscript printers, where PPD option values are queried from printer, are available in CUPS web interface.

Microsoft Interoperability

* Samba improvements include support for Windows 2008R2 trust relationships: Windows cross-forest, transitive trust, and one-way domain trust.
* Applications can use OpenChange to gain access to Microsoft Exchange servers using native protocols, allowing mail clients like Evolution to have tighter integration with Exchange servers.

RHEL 7 and systemd invasion into server space

RHEL 7 was released in June 2014. With the release of RHEL 7 we see a hard push to systemd exclusivity. Runlevels are gone. The release of RHEL 7 with systemd as the only option for system and process management has reignited the old debate over whether Red Hat is trying to establish a Microsoft-style monopoly over enterprise Linux and move Linux closer to Windows: a closed but user-friendly system.

For server sysadmins systemd is a massive, fundamental change to core Linux administration for no perceivable gain. So while there is a high level of support for systemd from Linux users who run Linux on their laptops and maybe as a home server, there is a strong backlash against systemd from Linux system administrators who are responsible for a significant number of Linux servers in enterprise environments.

After all, runlevels were used in production environments, if only to run the system with or without X11. Please read an interesting essay on systemd (ProSystemdAntiSystemd).
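For the record, the runlevel-style switching sysadmins relied on maps onto systemd "targets"; the RHEL 7 equivalents of the two most common cases look like this:

```
# runlevel 3 (multi-user, no X11):
#   systemctl set-default multi-user.target    # persistent default
#   systemctl isolate multi-user.target        # switch the running system
# runlevel 5 (multi-user with X11):
#   systemctl set-default graphical.target
```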

Often initiated by opponents, they will lament on the horrors of PulseAudio and point out their scorn for Lennart Poettering. This later became a common canard for proponents to dismiss criticism as Lennart-bashing. Futile to even discuss, but it’s a staple.

Lennart’s character is actually, at times, relevant. Trying to have a large discussion about systemd without ever invoking him is like discussing glibc in detail without ever mentioning Ulrich Drepper. Most people take it overboard, however.

A lot of systemd opponents will express their opinions regarding a supposed takeover of the Linux ecosystem by systemd, as its auxiliaries (all requiring governance by the systemd init) expose APIs, which are then used by various software in the desktop stack, creating dependency chains between it and systemd that the opponents deem unwarranted. They will also point out the udev debacle and occasionally quote Lennart. Opponents see this as anti-competitive behavior and liken it to “embrace, extend, extinguish”. They often exaggerate and go all out with their vitriol though, as they start to contemplate shadowy backroom conspiracies at Red Hat (admittedly it is pretty fun to pretend that anyone defending a given piece of software is actually a shill who secretly works for it, but I digress), leaving many of their concerns to be ignored and deemed ridiculous altogether.

... ... ...

In addition, the Linux community is known for reinventing the square wheel over and over again. Chaos is both Linux’s greatest strength and its greatest weakness. Remember HAL? Distro adoption is not an indicator of something being good, so much as something having sufficient mindshare.

... ... ...

The observation that sysinit is dumb and heavily flawed with its clunky inittab and runlevel abstractions, is absolutely nothing new. Richard Gooch wrote a paper back in 2002 entitled “Linux Boot Scripts”, which criticized both the SysV and BSD approaches, based on his earlier work on simpleinit(8). That said, his solution is still firmly rooted in the SysV and BSD philosophies, but he makes it more elegant by supplying primitives for modularity and expressing dependencies.

Even before that, DJB wrote the famous daemontools suite which has had many successors influenced by its approach, including s6, perp, runit and daemontools-encore. The former two are completely independent implementations, but based on similar principles, though with significant improvements. An article dated to 2007 entitled “Init Scripts Considered Harmful” encourages this approach and criticizes initscripts.

Around 2002, Richard Lightman wrote depinit(8), which introduced parallel service start, a dependency system, named service groups rather than runlevels (similar to systemd targets), its own unmount logic on shutdown, arbitrary pipelines between daemons for logging purposes, and more. It failed to gain traction and is now a historical relic.

Other systems like initng and eINIT came afterward, which were based on highly modular plugin-based architectures, implementing large parts of their logic as plugins, for a wide variety of actions that software like systemd implements as an inseparable part of its core. Initmacs, anyone?

Even Fefe, anti-bloat activist extraordinaire, wrote his own system called minit early on, which could handle dependencies and autorestart. As is typical of Fefe’s software, it is painful to read and makes you want to contemplate seppuku with a pizza cutter.

And that’s just Linux. Partial list, obviously.

At the end of the day, all comparing to sysvinit does is show that you’ve been living under a rock for years. What’s more, it is no secret to a lot of people that the way distros have been writing initscripts has been totally anathema to basic software development practices, like modularizing and reusing common functions, for years. Among other concerns such as inadequate use of already leaky abstractions like start-stop-daemon(8). Though sysvinit does encourage poor work like this to an extent, it’s distro maintainers who do share a deal of the blame for the mess. See the BSDs for a sane example of writing initscripts. OpenRC was directly inspired by the BSDs’ example. Hint: it’s in the name - “RC”.

The rather huge scope and opinionated nature of systemd leads to people yearning for the days of sysvinit. A lot of this is ignorance about good design principles, but a good part may also be motivated from an inability to properly convey desires of simple and transparent systems. In this way, proponents and opponents get caught in feedback loops of incessantly going nowhere with flame wars over one initd implementation (that happened to be dominant), completely ignoring all the previous research on improving init, as it all gets left to bite the dust. Even further, most people fail to differentiate init from rc scripts, and sort of hold sysvinit to be equivalent to the shoddy initscripts that distros have written, and all the hacks they bolted on top like LSB headers and startpar(2). This is a huge misunderstanding that leads to a lot of wasted energy.

Don’t talk about sysvinit. Talk about systemd on its own merits and the advantages or disadvantages of how it solves problems, potentially contrasting them to other init systems. But don’t immediately go “SysV initscripts were way better and more configurable, I don’t see what systemd helps solve beyond faster boot times.”, or from the other side “systemd is way better than sysvinit, look at how clean unit files are compared to this horribly written initscript I cherrypicked! Why wouldn’t you switch?”

... ... ...

Now that we have pointed out how most systemd debates play out in practice and why it’s usually a colossal waste of time to partake in them, let’s do a crude overview of the personalities that make this clusterfuck possible.

The technically competent sides tend to largely fall in these two broad categories:

a) Proponents are usually part of the modern Desktop Linux bandwagon. They run contemporary mainstream distributions with the latest software, use and contribute to large desktop environment initiatives and related standards like the *kits. They’re not necessarily purely focused on the Linux desktop. They’ll often work on features ostensibly meant for enterprise server management, cloud computing, embedded systems and other needs, but the rhetoric of needing a better desktop and following the example set by Windows and OS X is largely pervasive amongst their ranks. They will decry what they perceive as “integration failures”, “fragmentation” and are generally hostile towards research projects and anything they see as “toy projects”. They are hackers, but their mindset is largely geared towards reducing interface complexity, instead of implementation complexity, and will frequently argue against the alleged pitfalls of too much configurability, while seeing computers as appliances instead of tools.

b) Opponents are a bit more varied in their backgrounds, but they typically hail from more niche distributions like Slackware, Gentoo, CRUX and others. They are largely uninterested in many of the Desktop Linux “advancements”, value configuration, minimalism and care about malleability more than user friendliness. They’re often familiar with many other Unix-like environments besides Linux, though they retain a fondness for the latter. They have their own pet projects and are likely to use, contribute to or at least follow a lot of small projects in the low-level system plumbing area. They can likely name at least a dozen alternatives to the GNU coreutils (I can name about 7, I think), generally favor traditional Unix principles and see computers as tools. These are the people more likely to be sympathetic to things like the suckless philosophy.

It should really come as no surprise that the former group dominates. They’re the ones that largely shape the end user experience. The latter are pretty apathetic or even critical of it, in contrast. Additionally, the former group simply has far more manpower in the right places. Red Hat’s employees alone dominate much of the Linux kernel, the GNU base system, GNOME, NetworkManager, many projects affiliated with standards (including Polkit) and more. There’s no way to compete with a vast group of paid individuals like those.


The “Year of the Linux Desktop” has become a meme at this point, one that is used most often sarcastically. Yet there are still a lot of people who deeply hold onto it and think that if only Linux had a good abstraction engine for package manager backends, those Windows users will be running Fedora in no time.

What we’re seeing is undoubtedly a cultural clash by two polar opposites that coexist in the Linux community. We can see it in action through the vitriol against Red Hat developers, and conversely the derision against Gentoo users on part of Lennart Poettering, Greg K-H and others. Though it appears in this case “Gentoo user” is meant as a metonym for Linux users whose needs fall outside the mainstream application set. Theo de Raadt infamously quipped that Linux is “for people who hate Microsoft”, but that quote is starting to appear outdated.

Many of the more technically competent people with views critical of systemd have been rather quiet in public, for some reason. Likely it’s a realization that the Linux desktop’s direction is inevitable, and thus trying to criticize it is a futile endeavor. There are people who still think GNOME abandoning Sawfish was a mistake, so yes.

The non-desktop people still have their own turf, but they feel threatened by systemd to one degree or another. Still, I personally do not see them dwindling down. What I believe will happen is that they will become even more segregated than they already are from mainstream Linux and that using their software will feel more otherworldly as time goes on.

There are many who are predicting a huge renaissance for BSD in the aftermath of systemd, but I’m skeptical of this. No doubt there will be increased interest, but as a whole it seems most of the anti-systemd crowd is still deeply invested in sticking to Linux.

Ultimately, the cruel irony is that in systemd’s attempt to supposedly unify the distributions, it has created a huge rift unlike any other and is exacerbating the long-present hostilities between the desktop Linux and minimalist Linux camps at rates that are absolutely atypical. What will become of systemd remains unknown. Given Linux’s tendency for chaos, it might end up the new HAL, though with a significantly more painful aftermath, or it might continue on its merry way and become a Linux standard set in stone, in which case the Linux community will see a sharp ideological divide. Or perhaps it won’t. Perhaps things will go on as usual, on an endless spiral of reinvention without climax. Perhaps we will be doomed to flame on systemd for all eternity. Perhaps we’ll eventually get sick of it and just part our own ways into different corners.

Either way, I’ve become less and less fond of politics for uselessd and see systemd debates as being metaphorically like car crashes. I likely won’t help but chime in at times, though I intend uselessd to branch off into its own direction with time.


RHEL 7 adopts systemd, a very controversial subsystem. systemd is a suite of system management daemons, libraries, and utilities designed for Linux and programmed exclusively against the Linux API. There are no more runlevels. For servers systemd makes little sense. Sysadmins now need to learn new systemd commands for starting and stopping various services. The ‘service’ command is still included for backwards compatibility, but it may go away in future releases. See CentOS 7 - RHEL 7 systemd commands (Linux Brigade).
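For example, the day-to-day invocations translate roughly as follows (a sketch using httpd as a placeholder service; unit names vary by system):

```shell
# sysvinit style (still accepted via the compatibility shims):
service httpd start
service httpd status
chkconfig httpd on

# native systemd equivalents:
systemctl start httpd.service
systemctl status httpd.service
systemctl enable httpd.service              # start at boot
systemctl list-unit-files --type=service    # roughly replaces 'chkconfig --list'
```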

From Wikipedia (systemd)

In a 2012 interview, Slackware's founder Patrick Volkerding expressed the following reservations about the systemd architecture, which are fully applicable to the server environment:

Concerning systemd, I do like the idea of a faster boot time (obviously), but I also like controlling the startup of the system with shell scripts that are readable, and I'm guessing that's what most Slackware users prefer too. I don't spend all day rebooting my machine, and having looked at systemd config files it seems to me a very foreign way of controlling a system to me, and attempting to control services, sockets, devices, mounts, etc., all within one daemon flies in the face of the UNIX concept of doing one thing and doing it well.

In an August 2014 article published in InfoWorld, Paul Venezia wrote about the systemd controversy, and attributed the controversy to violation of the Unix philosophy, and to "enormous egos who firmly believe they can do no wrong."[42] The article also characterizes the architecture of systemd as more similar to that of Microsoft Windows software:[42]

While systemd has succeeded in its original goals, it's not stopping there. systemd is becoming the Svchost of Linux – which I don't think most Linux folks want. You see, systemd is growing, like wildfire, well outside the bounds of enhancing the Linux boot experience. systemd wants to control most, if not all, of the fundamental functional aspects of a Linux system – from authentication to mounting shares to network configuration to syslog to cron.


Roughly ten years after Solaris 10 introduced containers (zones), Linux at last got them.

Linux containers have emerged as a key open source application packaging and delivery technology, combining lightweight application isolation with the flexibility of image-based deployment methods. Developers have rapidly embraced Linux containers because they simplify and accelerate application deployment, and many Platform-as-a-Service (PaaS) platforms are built around Linux container technology, including OpenShift by Red Hat.

Red Hat Enterprise Linux 7 implements Linux containers using core technologies such as control groups (cGroups) for resource management, namespaces for process isolation, and SELinux for security, enabling secure multi-tenancy and reducing the potential for security exploits. The Red Hat container certification ensures that application containers built using Red Hat Enterprise Linux will operate seamlessly across certified container hosts.
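To make those building blocks concrete, here is a minimal sketch (assumes root and the util-linux unshare tool; not RHEL-specific beyond standard paths):

```shell
# Control groups: resource controllers are exposed under /sys/fs/cgroup
ls /sys/fs/cgroup

# Namespaces: start a shell in its own PID namespace with a private /proc;
# inside it, 'ps ax' shows only the new shell and ps itself
unshare --fork --pid --mount-proc bash -c 'ps ax'

# SELinux: the confinement layer used for container isolation
getenforce
```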


With more and more systems, even at the low end, presenting non-uniform memory access (NUMA) topologies, Red Hat Enterprise Linux 7 addresses the performance irregularities that such systems present. A new, kernel-based NUMA affinity mechanism automates memory and scheduler optimization. It attempts to match processes that consume significant resources with available memory and CPU resources in order to reduce cross-node traffic. The resulting improved NUMA resource alignment improves performance for applications and virtual machines, especially when running memory-intensive workloads.
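A quick way to see the topology and the automatic balancing knob on a RHEL 7 box (a sketch; requires the numactl package):

```shell
numactl --hardware                    # NUMA nodes with their CPUs and memory sizes
cat /proc/sys/kernel/numa_balancing   # 1 = automatic NUMA balancing is enabled
numastat                              # per-node numa_hit / numa_miss counters
```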


Red Hat Enterprise Linux 7 unifies hardware event reporting into a single reporting mechanism. Instead of various tools collecting errors from different sources with different timestamps, a new hardware event reporting mechanism (HERM) will make it easier to correlate events and get an accurate picture of system behavior. HERM reports events in a single location and in a sequential timeline. HERM uses a new userspace daemon, rasdaemon, to catch and log all RAS events coming from the kernel tracing infrastructure.
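In practice this means running the rasdaemon service and querying it with ras-mc-ctl (a sketch of the usual workflow):

```shell
yum install -y rasdaemon
systemctl enable rasdaemon
systemctl start rasdaemon
ras-mc-ctl --status     # verify that the kernel RAS drivers are loaded
ras-mc-ctl --summary    # summarize logged memory and other hardware errors
```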


Red Hat Enterprise Linux 7 advances the level of integration and usability between the Red Hat Enterprise Linux guest and VMware vSphere. Integration now includes:
• Open VM Tools - bundled open source virtualization utilities.
• 3D graphics drivers for hardware-accelerated OpenGL and X11 rendering.
• Fast communication mechanisms between VMware ESX and the virtual machine.


The ability to revert to a known, good system configuration is crucial in a production environment. Using LVM snapshots with ext4 and XFS (or the integrated snapshotting feature in Btrfs described in the “Snapper” section) an administrator can capture the state of a system and preserve it for future use. An example use case would involve an in-place upgrade that does not present a desired outcome and an administrator who wants to restore the original configuration.
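A minimal sketch of that workflow with LVM (volume group vg0 and logical volume root are placeholder names):

```shell
# Capture the state of the root LV before a risky change
lvcreate --snapshot --size 5G --name root_pre_upgrade /dev/vg0/root

# ... perform the in-place upgrade; if the result is not desired:
lvconvert --merge /dev/vg0/root_pre_upgrade
# Merging back into a mounted origin is deferred and completes on the
# next activation (i.e. after a reboot), restoring the saved state.
```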


Red Hat Enterprise Linux 7 introduces Live Media Creator for creating customized installation media from a kickstart file for a range of deployment use cases. Media can then be used to deploy standardized images whether on standardized corporate desktops, standardized servers, virtual machines, or hyperscale deployments. Live Media Creator, especially when used with templates, provides a way to control and manage configurations across the enterprise.
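Invocation is roughly as follows (a sketch; livemedia-creator ships in the lorax package, and the kickstart and ISO paths are placeholders):

```shell
yum install -y lorax
livemedia-creator --make-iso \
    --ks=/path/to/corporate-standard.ks \
    --iso=/path/to/rhel7-boot.iso
```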


Red Hat Enterprise Linux 7 features the ability to use installation templates to create servers for common workloads. These templates can simplify and speed creating and deploying Red Hat Enterprise Linux servers, even for those with little or no experience with Linux.



Old News ;-)

[Sep 04, 2018] Unifying custom scripts system-wide with rpm on Red Hat-CentOS

Highly recommended!
Aug 24, 2018 |
Objective: Our goal is to build rpm packages with custom content, unifying scripts across any number of systems, including versioning, deployment and undeployment.

Requirements: Privileged access to the system for install, normal access for build.

Difficulty: MEDIUM

Introduction: One of the core features of any Linux system is that they are built for automation. If a task may need to be executed more than once - even with some part of it changing between runs - a sysadmin has countless tools to automate it, from simple shell scripts run by hand on demand (eliminating typos, or saving a few keystrokes) to complex scripted systems where tasks run from cron at specified times, interacting with each other, working with the results of other scripts, perhaps controlled by a central management system, etc.

While this freedom and rich toolset indeed add to productivity, there is a catch: as a sysadmin, you write a useful script on one system, it proves useful on another, so you copy the script over. On a third system the script is useful too, but with minor modification - maybe a new feature useful only on that system, reachable with a new parameter. With generalization in mind, you extend the script to provide the new feature while still completing the task it was originally written for. Now you have two versions of the script: the first on the first two systems, the second on the third.

You have 1024 computers running in the datacenter, and 256 of them need some of the functionality provided by that script. In time you will have 64 versions of the script all over, every version doing its job. On the next system deployment you need a feature you recall coding in some version - but which one? And on which systems is it?

On RPM-based systems, such as Red Hat flavors, a sysadmin can take advantage of the package manager to bring order to this custom content, including simple shell scripts that provide nothing beyond the tools the admin wrote for convenience.

In this tutorial we will build a custom rpm for Red Hat Enterprise Linux 7.5 containing two bash scripts, and provide a way for all systems to have the latest version of these scripts in the /usr/local/sbin directory, and thus on the path of any user who logs in to the system.


Distributions, major and minor versions: In general, the major and minor version of the build machine should be the same as on the systems the package is to be deployed to, as should the distribution, to ensure compatibility. If there are various versions of a given distribution, or even different distributions with many versions in your environment (oh, joy!), you should set up build machines for each. To cut the work short, you can set up a build environment for each distribution and each major version, and have them on the lowest minor version existing in your environment for the given major version. Of course they don't need to be physical machines, and they only need to be running at build time, so you can use virtual machines or containers.

In this tutorial our work is much easier: we only deploy two scripts that have no dependencies at all (except bash), so we will build noarch packages, where "noarch" stands for "not architecture dependent"; we also won't specify the distribution the package is built for. This way we can install and upgrade them on any distribution that uses rpm, of any version - we only need to ensure that the build machine's rpm-build package is at the oldest version in the environment. Setting up the build environment: To build custom rpm packages, we need to install the rpm-build package:

# yum install rpm-build
From now on, we do not use the root user, and for a good reason: building packages does not require root privileges, and you don't want to break your build machine.

Building the first version of the package Let's create the directory structure needed for building:

$ mkdir -p rpmbuild/SPECS
Our package is called admin-scripts, version 1.0. We create a specfile that specifies the metadata, contents and tasks performed by the package. This is a simple text file we can create with our favorite text editor, such as vi. The previously installed rpm-build package will fill an empty specfile with template data if you use vi to create one, but for this tutorial consider the specification below, called admin-scripts-1.0.spec:


Name:           admin-scripts
Version:        1
Release:        0
Summary:        FooBar Inc. IT dept. admin scripts
Packager:       John Doe 
Group:          Application/Other
License:        GPL
Source0:        %{name}-%{version}.tar.gz
BuildArch:      noarch

%description
Package installing latest version the admin scripts used by the IT dept.

%prep
%setup -q

%install
mkdir -p $RPM_BUILD_ROOT/usr/local/sbin
cp scripts/* $RPM_BUILD_ROOT/usr/local/sbin/

%files
%dir /usr/local/sbin
/usr/local/sbin/*.sh

%changelog
* Wed Aug 1 2018 John Doe 
- release 1.0 - initial release
Place the specfile in the rpmbuild/SPECS directory we created earlier.

We need the sources referenced in the specfile - in this case the two shell scripts. Let's create the directory for the sources (named after the package, with the major version appended):

$ mkdir -p rpmbuild/SOURCES/admin-scripts-1/scripts
And copy/move the scripts into it:
$ ls rpmbuild/SOURCES/admin-scripts-1/scripts/


As this tutorial is not about shell scripting, the contents of these scripts are irrelevant. As we will create a new version of the package, the script below is the one we will demonstrate with; its source in the first version is as follows:
#!/bin/bash
echo "news pulled"
exit 0
Do not forget to add the appropriate rights to the files in the source - in our case, execution right:
chmod +x rpmbuild/SOURCES/admin-scripts-1/scripts/*.sh
Now we create a tar.gz archive from the source in the same directory:
cd rpmbuild/SOURCES/ && tar -czf admin-scripts-1.tar.gz admin-scripts-1
We are ready to build the package:
rpmbuild -bb rpmbuild/SPECS/admin-scripts-1.0.spec
We'll get some output about the build, and if anything goes wrong, errors will be shown (for example, missing file or path). If all goes well, our new package will appear in the RPMS directory generated by default under the rpmbuild directory (sorted into subdirectories by architecture):
$ ls rpmbuild/RPMS/noarch/
We have created a simple yet fully functional rpm package. We can query it for all the metadata we supplied earlier:
$ rpm -qpi rpmbuild/RPMS/noarch/admin-scripts-1-0.noarch.rpm 
Name        : admin-scripts
Version     : 1
Release     : 0
Architecture: noarch
Install Date: (not installed)
Group       : Application/Other
Size        : 78
License     : GPL
Signature   : (none)
Source RPM  : admin-scripts-1-0.src.rpm
Build Date  : 2018. aug.  1., Wed, 13.27.34 CEST
Build Host  :
Relocations : (not relocatable)
Packager    : John Doe 
URL         :
Summary     : FooBar Inc. IT dept. admin scripts
Description :
Package installing latest version the admin scripts used by the IT dept.
And of course we can install it (with root privileges).


As we installed the scripts into a directory that is on every user's $PATH , you can run them as any user in the system, from any directory:
news pulled
The package can be distributed as it is, and can be pushed into repositories available to any number of systems. Doing so is out of the scope of this tutorial - however, building another version of the package is certainly not. Building another version of the package: Our package and the extremely useful scripts in it become popular in no time, considering they are reachable anywhere with a simple yum install admin-scripts within the environment. There will soon be many requests for improvements - in this example, many votes come from happy users asking that the script print another line on execution; this feature would save the whole company. We need to build another version of the package: we don't want to install another script, but a new version of it with the same name and path, as the sysadmins in our organization already rely on it heavily.

First we change the source of the script in the SOURCES directory to something even more complex:

#!/bin/bash
echo "news pulled"
echo "another line printed"
exit 0
We need to recreate the tar.gz with the new source content - we can use the same filename as the first time, as we don't change version, only release (and so the Source0 reference will be still valid). Note that we delete the previous archive first:
cd rpmbuild/SOURCES/ && rm -f admin-scripts-1.tar.gz && tar -czf admin-scripts-1.tar.gz admin-scripts-1
Now we create another specfile with a higher release number:
cp rpmbuild/SPECS/admin-scripts-1.0.spec rpmbuild/SPECS/admin-scripts-1.1.spec
We don't change much on the package itself, so we simply record the new release as shown below:
Name:           admin-scripts
Version:        1
Release:        1
Summary:        FooBar Inc. IT dept. admin scripts
Packager:       John Doe 
Group:          Application/Other
License:        GPL
Source0:        %{name}-%{version}.tar.gz
BuildArch:      noarch

%description
Package installing latest version the admin scripts used by the IT dept.

%prep
%setup -q

%install
mkdir -p $RPM_BUILD_ROOT/usr/local/sbin
cp scripts/* $RPM_BUILD_ROOT/usr/local/sbin/

%files
%dir /usr/local/sbin
/usr/local/sbin/*.sh

%changelog
* Wed Aug 22 2018 John Doe 
- release 1.1 - v1.1 prints another line
* Wed Aug 1 2018 John Doe 
- release 1.0 - initial release


All done, we can build another version of our package containing the updated script. Note that we reference the specfile with the higher version as the source of the build:
rpmbuild -bb rpmbuild/SPECS/admin-scripts-1.1.spec
If the build is successful, we now have two versions of the package under our RPMS directory:
ls rpmbuild/RPMS/noarch/
admin-scripts-1-0.noarch.rpm  admin-scripts-1-1.noarch.rpm
And now we can install the "advanced" script, or upgrade it if it is already installed.

And our sysadmins can see that the feature request has landed in this version:

rpm -q --changelog admin-scripts
* sze aug 22 2018 John Doe 
- release 1.1 - v1.1 prints another line

* sze aug 01 2018 John Doe 
- release 1.0 - initial release

We wrapped our custom content into versioned rpm packages. This means no older versions are left scattered across systems; everything is in its place, at the version we installed or upgraded to. RPM gives us the ability to replace old files needed only in previous versions, to add custom dependencies, or to provide tools or services our other packages rely on. With some effort, we can pack nearly any of our custom content into rpm packages and distribute it across our environment, not only with ease, but with consistency.
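For example, the dependency and replacement mechanics mentioned above are plain specfile tags; a hypothetical sketch (the package names here are invented for illustration):

```
Requires:       bash
Obsoletes:      legacy-admin-tools < 1.0
Provides:       foobar-it-scripts
```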

[Jun 12, 2018] How to convert RHEL 6.x to CentOS 6.x The Picky SysAdmin

Jun 12, 2018 |

How to convert RHEL 6.x to CentOS 6.x 2014-07-15 by Eric Schewe



This post relates to my older post about converting RHEL 5.x to CentOS 5.x . All the reasons for doing so and other background information can be found in that post.

This post will cover how to convert RHEL 6.x to CentOS 6.x.

Updated 2016-03-29 – Thanks to feedback from here I've updated the guide.

Updates and Backups!
  1. Fully patch your system and reboot your system before starting this process
  2. Take a full backup of your system or a Snapshot if it's a VM
  1. Login to the server and become root
  2. Clean up yum's cache
    localhost :~ root # yum clean all
  3. Create a temporary working area
    localhost :~ root # mkdir -p /temp/centos
    localhost :~ root # cd /temp/centos
  4. Determine your version of RHEL
    localhost :~ root # cat /etc/redhat-release
  5. Determine your architecture (32-bit = i386, 64-bit = x86_64)
    localhost :~ root # uname -i
  6. Download the applicable files for your release and architecture. The version numbers on these packages could change. To find the current versions of these files browse this FTP site: (32-bit) or (64-bit) and replace the 'x' values below with the current version numbers
    CentOS 6.5 / 32-bit
    localhost :~ root # wget
    localhost :~ root # wget
    localhost :~ root # wget
    localhost :~ root # wget
    localhost :~ root # wget

    CentOS 6.5 / 64-bit
    localhost :~ root # wget
    localhost :~ root # wget
    localhost :~ root # wget
    localhost :~ root # wget
    localhost :~ root # wget
  7. Import the GPG key for the appropriate version of CentOS
    localhost :~ root # rpm --import RPM-GPG-KEY-CentOS-6
  8. Remove RHEL packages

    Note: If the 'rpm -e' command fails saying one of the packages is not installed remove the package from the command and run it again.

    localhost :~ root # yum remove rhnlib abrt-plugin-bugzilla redhat-release-notes*
    localhost :~ root # rpm -e --nodeps redhat-release-server-6Server redhat-indexhtml
  9. Remove any left over RHEL subscription information and the subscription-manager

    Note: If you do not do this every time you run 'yum' you will receive the following message: "This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register."

    localhost :~ root # subscription-manager clean
    localhost :~ root # yum remove subscription-manager
  10. Force install the CentOS RPMs we downloaded
    localhost :~ root # rpm -Uvh --force *.rpm
  11. Clean up yum one more time and then upgrade
    localhost :~ root # yum clean all
    localhost :~ root # yum upgrade
  12. Reboot your server
  13. Verify functionality
  14. Delete VM Snapshot if you took one as part of the backup
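The steps above can be condensed into one script for reference (a sketch; run as root only after the backup/snapshot in the first steps, and fill in the wget URLs from the CentOS mirror for your release and architecture as described in step 6):

```shell
#!/bin/bash
set -e                       # stop at the first failure
yum clean all
mkdir -p /temp/centos && cd /temp/centos
# wget the centos-release, yum and related CentOS RPMs for your
# minor release and architecture here (see step 6)
rpm --import RPM-GPG-KEY-CentOS-6
yum remove -y rhnlib abrt-plugin-bugzilla redhat-release-notes\*
rpm -e --nodeps redhat-release-server-6Server redhat-indexhtml
subscription-manager clean
yum remove -y subscription-manager
rpm -Uvh --force ./*.rpm
yum clean all
yum -y upgrade
reboot
```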

[Apr 29, 2018] RHEL 6.10 Beta is available

Apr 29, 2018 |

...On April 26, Red Hat announced the RHEL 6.10 beta, providing stability and security updates.

RHEL 6 was first released in November 2010 and was superseded as the leading edge of RHEL development when RHEL 7 was released in June 2014.

"Red Hat Enterprise Linux offers a ten year lifecycle, one of the longest in the industry, and version 6 is within its seventh year of support, putting it in the Maintenance Support 2 phase," Marcel Kolaja, product manager, Red Hat Enterprise Linux at Red Hat, told ServerWatch . "This means that Red Hat Enterprise Linux 6 receives Critical-rated Red Hat Security Advisories (RHSAs), and selected Urgent-rated Red Hat Bug Fix Advisories (RHBAs) may be released as they become available."

Kolaja noted that the Red Hat Enterprise Linux 6.10 Beta delivers updates only to maintain and enhance the security, stability, and reliability of the platform in production roles.

[Apr 26, 2018] chkservice - A Tool For Managing Systemd Units From Linux Terminal

Apr 26, 2018 |

For RPM-based systems, use the yum or dnf command to install chkservice.

$ sudo yum install chkservice
How To Use chkservice

Just fire the following command to launch the chkservice tool. The output is split into four parts.

$ sudo chkservice

[Jan 29, 2018] How good is Red Hat Enterprise Linux or CentOS as a desktop OS - Quora

Jan 29, 2018 |

It's harder to use CentOS as a desktop than Debian, the latter of which has many, many more packages. You can use the Nux repo (or use his CentOS 6 remix called Stella) to beef up your CentOS installation.

But I think there's safety in numbers, and a lot more developers package things for Debian.

That said, I use Fedora because it's pretty much the CentOS of the future, and there are a lot more packages. Sure, you need to add RPM Fusion for some things, but that's not so difficult.

Stability is a funny thing. Often in the Linux world, "stable" just means old. I find Fedora -- with much newer packages -- very stable in terms of its ability to keep running. Using what works with your hardware is more important than what is labeled "stable." Older hardware is often better with older software. And newer hardware may be more "stable" with newer software.

Your mileage will definitely vary, and it doesn't hurt to do a few installations (Debian, CentOS, Fedora, Ubuntu) to see what works best for you. Try before you buy (even if what you're "buying" is free).

[Oct 25, 2017] So after 4 hours of debugging systemd and NetworkManager; nothing but pain linux

Oct 25, 2017 |

This is an old but pretty instructive post. The comments provide important tips on how to deal with systemd problems.

it drops the default route every time I reboot the machine

Can't work out how to resolve this at all. Nothing in logs or anything. Two hours down the pan.

ALL of this is config management stuff related to systemd and NetworkManager. Not at all impressed. Excuse the rant but this is motivating me to move our CentOS 5.x machines to FreeBSD which has some semblance of debuggability, configuration management that doesn't make you want to gouge your eyes out and stability.

This feels like Windows, WMI and the registry which is an abhorrent place for Linux to be. I know because I deal with Windows Server as well.

I know there is a fan following of systemd on here but this is what it's like in the real world for those of us who need to argue with it to get shit done, so Dear Lennart (and RedHat): Fuck you for taking away 4 hours of my life and fuck you for forcing these bits of crap on the distributions.

Edit: please read all my replies before you kick me down.

Edit 2: I've spoken to our CTO. 95% of our kit can be moved to windows as it's hosting JVMs so that's going in the mix as well.

I have a few CentOS 7 servers online and have had no trouble with routes being dropped, during reboot or at any other time.

[–] godlesspeon:

How did you do the install on these? I'm genuinely interested in where this has gone wrong.

[–] e_t_:

I went through Anaconda. It wrote a config file to /etc/sysconfig/network-scripts, which NM picks up. I think the same file could be created with nmtui .

[–] godlesspeon:

I used nmtui and the sysconfig folder is expected (all contents as expected, correctly configured) and this is good hardware (Intel gbit cards).

I will say that the damn thing has just spontaneously started working which is even more worrying as the whole thing is completely non-deterministic. Ugh.

[–] riking27:

Nondeterministic? Sounds like a missing ordering dependency to me ( After= , Before= ).

Example: NFS mounts need an to work; anything using the network on stop needs an .

[–] azalynx:

[...] this is motivating me to move our CentOS 5.x machines to FreeBSD which has [...]

As a fan of systemd, I was going to take your side and agree that these issues are unacceptable, until you said this. You're not helping, seriously. Your problem is related to bugs in a piece of software, not to systemd's design philosophy, and especially not to Linux as a whole.

The fact that your solution to some bugs in the software is to throw the baby out with the bathwater, speaks a lot about your attitude. Chances are the issues you mentioned will be resolved as RHEL/CentOS 7 become more deployed, and others run across similar problems and actually report and/or fix the actual bugs instead of embarking on an anti-Lennart crusade and spewing vitriol all over the place.

I would also be willing to fucking bet that other people have had similar problems with every new version of RHEL or CentOS ever, in history, but because this time there's a convenient bullseye in the form of systemd to assign blame to, everyone loses their shit.

[...] and fuck you for forcing these bits of crap on the distributions.

Woah, what? This fallacy again? Red Hat isn't forcing jack shit on anyone. The reality is that many users/businesses need this functionality; just because you don't need it, that doesn't mean that there aren't a thousand other use cases out there that require systemd. Please stop assuming that you are at the center of the universe and that everyone else's use cases are like yours. Distributions have always been designed to be "one-size-fits-all" in order to fit everyone's uses; obviously many distributions want systemd's functionality because they have users who demand it.

Once again, as I said in my first sentence; it's not ok for an enterprise product to have the problems you experienced, but your attitude isn't helping, really. It's also worth noting that if everyone just flipped tables when something broke, we'd never have any progress.

[Oct 25, 2017] Top 5 Linux pain points in 2017

Looks like the author is a dilettante if he thinks that the library compatibility problem is not an issue. He also failed to mention systemd.
Oct 25, 2017 |
As a follow-up to the 2016 Open Source Yearbook article on troubleshooting tips for the five most common Linux issues: Linux installs and operates as expected for most users, but some inevitably run into problems. How have things changed over the past year in this regard? Once again, I posted the question on social media and analyzed LQ posting patterns. Here are the updated results.

1. Documentation

Documentation, or lack thereof, was one of the largest pain points this year. Although open source methodology produces superior code, the importance of producing quality documentation has only recently come to the forefront. As more non-technical users adopt Linux and open source software, the quality and quantity of documentation will become paramount. If you've wanted to contribute to an open source project but don't feel you are technical enough to offer code, improving documentation is a great way to participate. Many projects even keep the documentation in their repository, so you can use your contribution to get acclimated to the version control workflow.


2. Software/library version incompatibility

I was surprised by this one, but software/library version incompatibility was mentioned frequently...

3. UEFI and secure boot

Although this issue continues to improve as more supported hardware is deployed, many users indicate that they still have issues with UEFI and/or secure boot. Using a distribution that fully supports UEFI/secure boot out of the box is the best solution here.

4. Deprecation of 32-bit

Many users are lamenting the death of 32-bit support in their favorite distributions and software projects. Although you still have many options if 32-bit support is a must, fewer and fewer projects are likely to continue supporting a platform with decreasing market share and mind share. Luckily, we're talking about open source, so you'll likely have at least a couple options as long as someone cares about the platform.

5. Deteriorating support and testing for X-forwarding

Although many longtime and power users of Linux regularly use X-forwarding and consider it critical functionality, as Linux becomes more mainstream it appears to be seeing less testing and support, especially from newer apps. With Wayland network transparency still evolving, the situation may get worse before it improves.

[Sep 27, 2017] Chkservice - An Easy Way to Manage Systemd Units in Terminal

Sep 27, 2017 |

Systemd is a system and service manager for Linux operating systems which introduces the concept of systemd units and provides a number of features such as parallel startup of system services at boot time, on-demand activation of daemons, etc. It helps manage services on your Linux OS: starting, stopping and reloading them. But to operate on a service with systemd, you need to know the exact name of its unit. chkservice is a tool that helps you navigate the available services, much like the top command lets you navigate the running processes on your system.

What is chkservice?

chkservice is a new and handy tool for systemd unit management in a terminal. It is a GitHub project developed by Svetlana Linuxenko. Its particularity is that it lists the different services present on your system: you get a view of each available service and can manage it as you want.


On Ubuntu, install from the developer's PPA:

sudo add-apt-repository ppa:linuxenko/chkservice
sudo apt-get update
sudo apt-get install chkservice


On Arch Linux, clone the repository and build the package with makepkg:

git clone
cd chkservice
makepkg -si


On Fedora, enable the Copr repository and install:

dnf copr enable srakitnican/default
dnf install chkservice

chkservice requires super-user privileges to make changes to unit states or sysv scripts. For a regular user it works in read-only mode.

Package dependencies:

Build dependencies:

Build and install the Debian package:

git clone
mkdir build
cd build

dpkg -i chkservice-x.x.x.deb

Build the release version:

git clone
mkdir build
cd build
cmake ../

[Sep 23, 2017] CentOS 7 Server Hardening Guide Linux Security Networking

Highly recommended!
Notable quotes:
"... As a rule of thumb, malicious applications usually write to /tmp and then attempt to run whatever was written. A way to prevent this is to mount /tmp on a separate partition with the options noexec , nodev and nosuid enabled. ..."
Sep 23, 2017 |

Remove packages which you don't require on a server, e.g. firmware for sound cards, WinTV cards, wireless drivers, etc.

# yum remove alsa-* ivtv-* iwl*firmware ic94xx-firmware
2. System Settings – File Permissions and Masks

2.1 Restrict Partition Mount Options

Partitions should have hardened mount options:

  1. /boot – rw,nodev,noexec,nosuid
  2. /home – rw,nodev,nosuid
  3. /tmp – rw,nodev,noexec,nosuid
  4. /var – rw,nosuid
  5. /var/log – rw,nodev,noexec,nosuid
  6. /var/log/audit – rw,nodev,noexec,nosuid
  7. /var/www – rw,nodev,nosuid

As a rule of thumb, malicious applications usually write to /tmp and then attempt to run whatever was written. A way to prevent this is to mount /tmp on a separate partition with the options noexec , nodev and nosuid enabled.

This will deny binary execution from /tmp , disable any binary to be suid root, and disable any block devices from being created.
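As a quick check, a small helper can confirm that a mount's option string carries the required flags (a sketch; check_opts is a hypothetical helper, and the commented findmnt line shows intended usage against a live system):

```shell
# check_opts OPTIONS opt1 [opt2 ...] - succeed only if every option
# appears as a complete token in the comma-separated OPTIONS string.
check_opts() {
  opts=",$1,"; shift
  for o in "$@"; do
    case "$opts" in
      *,"$o",*) ;;                      # option present, keep going
      *) echo "MISSING:$o"; return 1 ;; # option absent
    esac
  done
  echo "OK"
}

check_opts "rw,nodev,noexec,nosuid" nodev noexec nosuid   # prints OK
# Against the live system:
# check_opts "$(findmnt -no OPTIONS /tmp)" nodev noexec nosuid
```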

The storage location /var/tmp should be bind mounted to /tmp , as having multiple locations for temporary storage is not required:

/tmp /var/tmp none rw,nodev,noexec,nosuid,bind 0 0

The same applies to shared memory /dev/shm :

tmpfs /dev/shm tmpfs rw,nodev,noexec,nosuid 0 0

The proc pseudo-filesystem /proc should be mounted with hidepid . When hidepid is set to 2, process directories in /proc are hidden from all users other than their owners (and root).

proc /proc proc rw,hidepid=2 0 0

Harden removable media mounts by adding nodev , noexec and nosuid , e.g.:

/dev/cdrom /mnt/cdrom iso9660 ro,noexec,nosuid,nodev,noauto 0 0
2.2 Restrict Dynamic Mounting and Unmounting of Filesystems

Add the following to /etc/modprobe.d/hardening.conf to disable uncommon filesystems:

install cramfs /bin/true

install freevxfs /bin/true

install jffs2 /bin/true

install hfs /bin/true

install hfsplus /bin/true

install squashfs /bin/true

install udf /bin/true

Depending on a setup (if you don't run clusters, NFS, CIFS etc), you may consider disabling the following too:

install fat /bin/true

install vfat /bin/true

install cifs /bin/true

install nfs /bin/true

install nfsv3 /bin/true

install nfsv4 /bin/true

install gfs2 /bin/true

It is wise to leave ext4, xfs and btrfs enabled at all times.
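The install <module> /bin/true trick works because modprobe runs the given command instead of loading the module, and /bin/true simply exits successfully. A small loop can generate the lines (a sketch; gen_disable is a hypothetical helper, and the filesystem list is the one from this section):

```shell
# Print an "install <fs> /bin/true" line for each filesystem name given.
gen_disable() {
  for fs in "$@"; do
    printf 'install %s /bin/true\n' "$fs"
  done
}

gen_disable cramfs freevxfs jffs2 hfs hfsplus squashfs udf
# As root, append the output to the hardening file:
# gen_disable cramfs freevxfs jffs2 hfs hfsplus squashfs udf >> /etc/modprobe.d/hardening.conf
```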

2.3 Prevent Users Mounting USB Storage

Add the following to /etc/modprobe.d/hardening.conf to disable modprobe loading of USB and FireWire storage drivers:

blacklist usb-storage

blacklist firewire-core

install usb-storage /bin/true

Disable USB authorisation. Create a file /opt/ with the following content:


echo 0 > /sys/bus/usb/devices/usb1/authorized

echo 0 > /sys/bus/usb/devices/usb1/authorized_default

If more than one USB device is available, then add them all. Create a service file /etc/systemd/system/usb-auth.service with the following content:

[Unit]

Description=Disable USB auth

[Service]

Type=oneshot

ExecStart=/bin/bash /opt/

[Install]

WantedBy=multi-user.target

Set permissions, enable and start the service:

# chmod 0700 /opt/

# systemctl enable usb-auth.service

# systemctl start usb-auth.service

If required, disable kernel support for USB via bootloader configuration. To do so, append nousb to the kernel line GRUB_CMDLINE_LINUX in /etc/default/grub and generate the Grub2 configuration file:

# grub2-mkconfig -o /boot/grub2/grub.cfg

Note that disabling all kernel support for USB will likely cause problems for systems with USB-based keyboards etc.

2.4 Restrict Programs from Dangerous Execution Patterns

Configure /etc/sysctl.conf with the following:

# Disable core dumps

fs.suid_dumpable = 0

# Disable System Request debugging functionality

kernel.sysrq = 0

# Restrict access to kernel logs

kernel.dmesg_restrict = 1

# Enable ExecShield protection (note: this key does not exist on CentOS 7 x86_64 kernels, which rely on NX instead; sysctl will report an unknown key there)

kernel.exec-shield = 1

# Randomise memory space

kernel.randomize_va_space = 2

# Hide kernel pointers

kernel.kptr_restrict = 2

Load sysctl settings:

# sysctl -p
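Each sysctl key corresponds to a file under /proc/sys, with dots replaced by slashes, so a setting can be verified even without the sysctl binary (a sketch; sysctl_path is a hypothetical helper, and keys containing interface names with embedded dots are not handled):

```shell
# Map a sysctl key such as kernel.dmesg_restrict to its /proc/sys path.
sysctl_path() {
  echo "/proc/sys/$(echo "$1" | tr . /)"
}

sysctl_path kernel.dmesg_restrict   # prints /proc/sys/kernel/dmesg_restrict
# cat "$(sysctl_path kernel.dmesg_restrict)"   # shows the live value
```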
2.5 Set UMASK 027

The following files require umask hardening: /etc/bashrc , /etc/csh.cshrc , /etc/init.d/functions and /etc/profile .

Sed one-liner:

# sed -i -e 's/umask 022/umask 027/g' -e 's/umask 002/umask 027/g' /etc/bashrc

# sed -i -e 's/umask 022/umask 027/g' -e 's/umask 002/umask 027/g' /etc/csh.cshrc

# sed -i -e 's/umask 022/umask 027/g' -e 's/umask 002/umask 027/g' /etc/profile

# sed -i -e 's/umask 022/umask 027/g' -e 's/umask 002/umask 027/g' /etc/init.d/functions
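The effect of umask 027 can be demonstrated in a throwaway directory: new files come out 640 (rw-r-----) and new directories 750 (rwxr-x---):

```shell
# Create a file and a directory under umask 027 and show their modes.
tmpdir=$(mktemp -d)
( umask 027; touch "$tmpdir/file"; mkdir "$tmpdir/dir" )
stat -c %a "$tmpdir/file"   # prints 640
stat -c %a "$tmpdir/dir"    # prints 750
rm -rf "$tmpdir"
```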
2.6 Disable Core Dumps

Open /etc/security/limits.conf and set the following:

*  hard  core  0
2.7 Set Security Limits to Prevent DoS

Add the following to /etc/security/limits.conf to enforce sensible security limits:

# 4096 is a good starting point

*      soft   nofile    4096

*      hard   nofile    65536

*      soft   nproc     4096

*      hard   nproc     4096

*      soft   locks     4096

*      hard   locks     4096

*      soft   stack     10240

*      hard   stack     32768

*      soft   memlock   64

*      hard   memlock   64

*      hard   maxlogins 10

# Soft limit 32GB, hard 64GB

*      soft   fsize     33554432

*      hard   fsize     67108864

# Limits for root

root   soft   nofile    4096

root   hard   nofile    65536

root   soft   nproc     4096

root   hard   nproc     4096

root   soft   stack     10240

root   hard   stack     32768

root   soft   fsize     33554432
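The fsize limit in limits.conf is expressed in kilobytes, so the figures in the comment above can be checked with shell arithmetic:

```shell
# fsize is in KB: convert the soft and hard limits to gigabytes.
echo "$((33554432 / 1024 / 1024)) GB soft"   # prints 32 GB soft
echo "$((67108864 / 1024 / 1024)) GB hard"   # prints 64 GB hard
```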
2.8 Verify Permissions of Files

Ensure that all files are owned by a user:

# find / -ignore_readdir_race -nouser -print -exec chown root {} \;

Ensure that all files are owned by a group:

# find / -ignore_readdir_race -nogroup -print -exec chgrp root {} \;

Automate the process by creating a cron file /etc/cron.daily/unowned_files with the following content:


find / -ignore_readdir_race -nouser -print -exec chown root {} \;

find / -ignore_readdir_race -nogroup -print -exec chgrp root {} \;

Set ownership and permissions:

# chown root:root /etc/cron.daily/unowned_files

# chmod 0700 /etc/cron.daily/unowned_files
2.9 Monitor SUID/GUID Files

Search for setuid/setgid files and identify if all are required:

# find / -xdev -type f \( -perm -4000 -o -perm -2000 \)
3. System Settings – Firewall and Network Configuration

3.1 Firewall

Setting the default firewalld zone to drop causes any packet which is not explicitly permitted to be dropped.

# sed -i "s/DefaultZone=.*/DefaultZone=drop/g" /etc/firewalld/firewalld.conf

Unless firewalld is required, mask it and replace with iptables:

# systemctl stop firewalld.service

# systemctl mask firewalld.service

# systemctl daemon-reload

# yum install iptables-services

# systemctl enable iptables.service ip6tables.service

Add the following to /etc/sysconfig/iptables to allow only minimal outgoing traffic (DNS, NTP, HTTP/S and SMTPS):








-A INPUT -i lo -m comment --comment local -j ACCEPT

-A INPUT -d ! -i lo -j REJECT --reject-with icmp-port-unreachable

-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

-A INPUT -p tcp -m tcp -m conntrack --ctstate NEW --dport 22 -s -j ACCEPT

-A INPUT -p tcp -m tcp -m conntrack --ctstate NEW --dport 22 -s -j ACCEPT

-A INPUT -p tcp -m tcp -m conntrack --ctstate NEW --dport 22 -s -j ACCEPT

-A INPUT -p tcp -m tcp -m conntrack --ctstate NEW --dport 22 -j ACCEPT


-A OUTPUT -d -o lo -m comment --comment local -j ACCEPT

-A OUTPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

-A OUTPUT -p icmp -m icmp --icmp-type any -j ACCEPT

-A OUTPUT -p udp -m udp -m conntrack --ctstate NEW --dport 53 -j ACCEPT

-A OUTPUT -p tcp -m tcp -m conntrack --ctstate NEW --dport 53 -j ACCEPT

-A OUTPUT -p udp -m udp -m conntrack --ctstate NEW --dport 123 -j ACCEPT

-A OUTPUT -p tcp -m tcp -m conntrack --ctstate NEW --dport 80 -j ACCEPT

-A OUTPUT -p tcp -m tcp -m conntrack --ctstate NEW --dport 443 -j ACCEPT

-A OUTPUT -p tcp -m tcp -m conntrack --ctstate NEW --dport 587 -j ACCEPT

-A OUTPUT -j LOG --log-prefix "iptables_output "

-A OUTPUT -j REJECT --reject-with icmp-port-unreachable


Note that the rule allowing all incoming SSH traffic should be removed, restricting access to an IP whitelist only, or SSH should be hidden behind a VPN.

Add the following to /etc/sysconfig/ip6tables to deny all IPv6:









Apply configurations:

# iptables-restore < /etc/sysconfig/iptables

# ip6tables-restore < /etc/sysconfig/ip6tables
3.2 TCP Wrappers

Open /etc/hosts.allow and allow localhost traffic and SSH:

ALL: 127.0.0.1
sshd: ALL

The file /etc/hosts.deny should be configured to deny all by default:
ALL: ALL

3.3 Kernel Parameters Which Affect Networking

Open /etc/sysctl.conf and add the following:

# Disable packet forwarding

net.ipv4.ip_forward = 0

# Disable redirects, not a router

net.ipv4.conf.all.accept_redirects = 0

net.ipv4.conf.default.accept_redirects = 0

net.ipv4.conf.all.send_redirects = 0

net.ipv4.conf.default.send_redirects = 0

net.ipv4.conf.all.secure_redirects = 0

net.ipv4.conf.default.secure_redirects = 0

net.ipv6.conf.all.accept_redirects = 0

net.ipv6.conf.default.accept_redirects = 0

# Disable source routing

net.ipv4.conf.all.accept_source_route = 0

net.ipv4.conf.default.accept_source_route = 0

net.ipv6.conf.all.accept_source_route = 0

# Enable source validation by reversed path

net.ipv4.conf.all.rp_filter = 1

net.ipv4.conf.default.rp_filter = 1

# Log packets with impossible addresses to kernel log

net.ipv4.conf.all.log_martians = 1

net.ipv4.conf.default.log_martians = 1

# Disable ICMP broadcasts

net.ipv4.icmp_echo_ignore_broadcasts = 1

# Ignore bogus ICMP errors

net.ipv4.icmp_ignore_bogus_error_responses = 1

# Against SYN flood attacks

net.ipv4.tcp_syncookies = 1

# Turning off timestamps could improve security but degrade performance.

# TCP timestamps are used to improve performance as well as protect against

# late packets messing up your data flow. A side effect of this feature is 

# that the uptime of the host can sometimes be computed.

# If you disable TCP timestamps, you should expect worse performance 

# and less reliable connections.

net.ipv4.tcp_timestamps = 1

# Disable IPv6 unless required

net.ipv6.conf.lo.disable_ipv6 = 1

net.ipv6.conf.all.disable_ipv6 = 1

net.ipv6.conf.default.disable_ipv6 = 1

# Do not accept router advertisements

net.ipv6.conf.all.accept_ra = 0

net.ipv6.conf.default.accept_ra = 0
3.4 Kernel Modules Which Affect Networking

Open /etc/modprobe.d/hardening.conf and disable Bluetooth kernel modules:

install bnep /bin/true

install bluetooth /bin/true

install btusb /bin/true

install net-pf-31 /bin/true

Also disable AppleTalk:

install appletalk /bin/true

Unless required, disable support for IPv6:

options ipv6 disable=1

Disable (uncommon) protocols:

install dccp /bin/true

install sctp /bin/true

install rds /bin/true

install tipc /bin/true

Since we're looking at server security, wireless shouldn't be an issue; therefore we can disable all the wireless drivers. Note that blacklist entries take module names, not file paths:

# for i in $(find /lib/modules/$(uname -r)/kernel/drivers/net/wireless -name "*.ko" -type f);do \

  echo blacklist "$(basename "$i" .ko)" >>/etc/modprobe.d/hardening-wireless.conf;done
3.5 Disable Radios

Disable radios (wifi and wwan):

# nmcli radio all off
3.6 Disable Zeroconf Networking

Open /etc/sysconfig/network and add the following:
NOZEROCONF=yes
3.7 Disable Interface Usage of IPv6

Open /etc/sysconfig/network and add the following:
NETWORKING_IPV6=no

IPV6INIT=no
3.8 Network Sniffer

The server should not be acting as a network sniffer capturing packets. Run the following to determine whether any interface is running in promiscuous mode:

# ip link | grep PROMISC
3.9 Secure VPN Connection

Install the libreswan package if implementation of IPsec and IKE is required.

# yum install libreswan
3.10 Disable DHCP Client

Manual assignment of IP addresses provides a greater degree of management.

For each network interface that is available on the server, open the corresponding file /etc/sysconfig/network-scripts/ifcfg-interface and configure the following parameters (static addressing instead of DHCP):

BOOTPROTO=none
4. System Settings – SELinux

Ensure that SELinux is not disabled in /etc/default/grub , and verify that the state is enforcing:

# sestatus
5. System Settings – Account and Access Control

5.1 Delete Unused Accounts and Groups

Remove any account which is not required, e.g.:

# userdel -r adm

# userdel -r ftp

# userdel -r games

# userdel -r lp

Remove any group which is not required, e.g.:

# groupdel games
5.2 Disable Direct root Login
# echo > /etc/securetty
5.3 Enable Secure (high quality) Password Policy

Note that running authconfig will overwrite the PAM configuration files, destroying any manually made changes. Make sure that you have a backup.

Secure password policy rules are outlined below.

  1. Minimum length of a password – 16.
  2. Minimum number of character classes in a password – 4.
  3. Maximum number of same consecutive characters in a password – 2.
  4. Maximum number of consecutive characters of same class in a password – 2.
  5. Require at least one lowercase and one uppercase character in a password.
  6. Require at least one digit in a password.
  7. Require at least one other character in a password.

The following command will enable SHA512 as well as set the above password requirements:

# authconfig --passalgo=sha512 \

 --passminlen=16 \

 --passminclass=4 \

 --passmaxrepeat=2 \

 --passmaxclassrepeat=2 \

 --enablereqlower \

 --enablerequpper \

 --enablereqdigit \

 --enablereqother \

 --update
Open /etc/security/pwquality.conf and add the following:

difok = 8

gecoscheck = 1

These will ensure that at least 8 characters in the new password are not present in the old password, and will check the new password against words from the GECOS string of the user's passwd entry.

5.4 Prevent Log In to Accounts With Empty Password

Remove any instances of nullok from /etc/pam.d/system-auth and /etc/pam.d/password-auth to prevent logins with empty passwords.

Sed one-liner:

# sed -i 's/\<nullok\>//g' /etc/pam.d/system-auth /etc/pam.d/password-auth
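The \< and \> markers are GNU sed word boundaries, so only the standalone token nullok is removed, not substrings of other options; a quick pipe against a sample PAM line shows the behaviour:

```shell
# Remove the word "nullok" from a sample PAM line (GNU sed word boundaries).
echo "auth sufficient pam_unix.so nullok try_first_pass" \
  | sed 's/\<nullok\>//g'
# prints: auth sufficient pam_unix.so  try_first_pass
```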
5.5 Set Account Expiration Following Inactivity

Disable accounts as soon as the password has expired.

Open /etc/default/useradd and set the following:

INACTIVE=0
Sed one-liner:

# sed -i 's/^INACTIVE.*/INACTIVE=0/' /etc/default/useradd
5.6 Secure Password Policy

Open /etc/login.defs and set the following:

PASS_MAX_DAYS 60

PASS_MIN_DAYS 1

PASS_MIN_LEN 14

PASS_WARN_AGE 14
Sed one-liner:

# sed -i -e 's/^PASS_MAX_DAYS.*/PASS_MAX_DAYS 60/' \

  -e 's/^PASS_MIN_DAYS.*/PASS_MIN_DAYS 1/' \

  -e 's/^PASS_MIN_LEN.*/PASS_MIN_LEN 14/' \

  -e 's/^PASS_WARN_AGE.*/PASS_WARN_AGE 14/' /etc/login.defs
5.7 Log Failed Login Attempts

Open /etc/login.defs and enable logging:

FAILLOG_ENAB yes
Also add a delay in seconds before being allowed another attempt after a login failure:
FAIL_DELAY 4
5.8 Ensure Home Directories are Created for New Users

Open /etc/login.defs and configure:
CREATE_HOME yes
5.9 Verify All Account Password Hashes are Shadowed

The command below should return "x":

# cut -d: -f2 /etc/passwd|uniq
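Against sample data, the pipeline should collapse to a single x when every hash is shadowed (sort -u is used here instead of plain uniq, since uniq only collapses adjacent duplicates):

```shell
# Two sample /etc/passwd lines; field 2 is "x" when hashes live in /etc/shadow.
printf 'root:x:0:0:root:/root:/bin/bash\ndaemon:x:2:2:daemon:/sbin:/sbin/nologin\n' \
  | cut -d: -f2 | sort -u
# prints: x
```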
5.10 Set Deny and Lockout Time for Failed Password Attempts

Add the following line immediately before the pam_unix.so statement in the AUTH section of /etc/pam.d/system-auth and /etc/pam.d/password-auth :

auth required pam_faillock.so preauth silent deny=3 unlock_time=900 fail_interval=900

Add the following line immediately after the pam_unix.so statement in the AUTH section of /etc/pam.d/system-auth and /etc/pam.d/password-auth :

auth [default=die] pam_faillock.so authfail deny=3 unlock_time=900 fail_interval=900

Add the following line immediately before the pam_unix.so statement in the ACCOUNT section of /etc/pam.d/system-auth and /etc/pam.d/password-auth :

account required pam_faillock.so

The content of the file /etc/pam.d/system-auth can be seen below.


auth        required      pam_env.so

auth        required      pam_faillock.so preauth silent deny=3 unlock_time=900 fail_interval=900

auth        sufficient    pam_unix.so try_first_pass

auth        [default=die] pam_faillock.so authfail deny=3 unlock_time=900 fail_interval=900

auth        requisite     pam_succeed_if.so uid >= 1000 quiet_success

auth        required      pam_deny.so

account     required      pam_faillock.so

account     required      pam_unix.so

account     sufficient    pam_localuser.so

account     sufficient    pam_succeed_if.so uid < 1000 quiet

account     required      pam_permit.so

password    requisite     pam_pwquality.so try_first_pass local_users_only retry=3 authtok_type=

password    sufficient    pam_unix.so sha512 shadow try_first_pass use_authtok remember=5

password    required      pam_deny.so

session     optional      pam_keyinit.so revoke

session     required      pam_limits.so

-session    optional      pam_systemd.so

session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid

session     required      pam_unix.so

Also, do not allow users to reuse recent passwords by adding the remember option.

Make /etc/pam.d/system-auth and /etc/pam.d/password-auth configurations immutable so that they don't get overwritten when authconfig is run:

# chattr +i /etc/pam.d/system-auth /etc/pam.d/password-auth

Accounts will get locked after 3 failed login attempts:

login[]: pam_faillock(login:auth): Consecutive login failures for user tomas account temporarily locked

Use the following to clear user's fail count:

# faillock --user tomas --reset
5.11 Set Boot Loader Password

Prevent users from entering the grub command line and edit menu entries:

# grub2-setpassword

# grub2-mkconfig -o /boot/grub2/grub.cfg

This will create the file /boot/grub2/user.cfg if one is not already present, which will contain the hashed Grub2 bootloader password.

Restrict permissions of /boot/grub2/grub.cfg :

# chmod 0600 /boot/grub2/grub.cfg
5.12 Password-protect Single User Mode

CentOS 7 single user mode is password protected by the root password by default as part of the design of Grub2 and systemd.

5.13 Ensure Users Re-Authenticate for Privilege Escalation

The NOPASSWD tag allows a user to execute commands using sudo without having to provide a password. While this may sometimes be useful, it is also dangerous.

Ensure that the NOPASSWD tag does not exist in /etc/sudoers configuration file or /etc/sudoers.d/ .

5.14 Multiple Console Screens and Console Locking

Install the screen package to be able to emulate multiple console windows:

# yum install screen

Install the vlock package to enable console screen locking:

# yum install vlock
5.15 Disable Ctrl-Alt-Del Reboot Activation

Prevent a locally logged-in console user from rebooting the system when Ctrl-Alt-Del is pressed:

# systemctl mask ctrl-alt-del.target
5.16 Warning Banners for System Access

Add the following line to the files /etc/issue and /etc/ :

Unauthorised access prohibited. Logs are recorded and monitored.
5.17 Set Interactive Session Timeout

Open /etc/profile and set:

readonly TMOUT=900
5.18 Two Factor Authentication

Recent versions of the OpenSSH server allow chaining several authentication methods, meaning that all of them have to be satisfied in order for a user to log in successfully.

Adding the following line to /etc/ssh/sshd_config would require a user to authenticate with a key first, and then also provide a password.

AuthenticationMethods publickey,password

This is by definition a two factor authentication: the key file is something that a user has, and the account password is something that a user knows.

Alternatively, two factor authentication for SSH can be set up by using Google Authenticator.

5.19 Configure History File Size

Open /etc/profile and set the number of commands to remember in the command history to 5000:

HISTSIZE=5000
Sed one-liner:

# sed -i 's/HISTSIZE=.*/HISTSIZE=5000/g' /etc/profile
6. System Settings – System Accounting with auditd

6.1 Auditd Configuration

Open /etc/audit/auditd.conf and configure the following:

local_events = yes

write_logs = yes

log_file = /var/log/audit/audit.log

max_log_file = 25

num_logs = 10

max_log_file_action = rotate

space_left = 30

space_left_action = email

admin_space_left = 10

admin_space_left_action = email

disk_full_action = suspend

disk_error_action = suspend

action_mail_acct =

flush = data

The above auditd configuration should never use more than 250MB of disk space (10x25MB=250MB) on /var/log/audit .
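The worst-case footprint is simply num_logs rotated files of max_log_file megabytes each:

```shell
# 10 rotated logs of 25 MB each:
echo "$((10 * 25)) MB"   # prints 250 MB
```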

Set admin_space_left_action=single if you want to cause the system to switch to single user mode for corrective action rather than send an email.

Automatically rotating logs ( max_log_file_action=rotate ) minimises the chances of the system unexpectedly running out of disk space by being filled up with log data.

We need to ensure that audit event data is fully synchronised ( flush=data ) with the log files on the disk .

6.2 Auditd Rules

System audit rules must have mode 0640 or less permissive and be owned by the root user:

# chown root:root /etc/audit/rules.d/audit.rules

# chmod 0600 /etc/audit/rules.d/audit.rules

Open /etc/audit/rules.d/audit.rules and add the following:

# Delete all currently loaded rules

-D
# Set kernel buffer size

-b 8192

# Set the action that is performed when a critical error is detected.

# Failure modes: 0=silent 1=printk 2=panic

-f 1

# Record attempts to alter the localtime file

-w /etc/localtime -p wa -k audit_time_rules

# Record events that modify user/group information

-w /etc/group -p wa -k audit_rules_usergroup_modification

-w /etc/passwd -p wa -k audit_rules_usergroup_modification

-w /etc/gshadow -p wa -k audit_rules_usergroup_modification

-w /etc/shadow -p wa -k audit_rules_usergroup_modification

-w /etc/security/opasswd -p wa -k audit_rules_usergroup_modification

# Record events that modify the system's network environment

-w /etc/ -p wa -k audit_rules_networkconfig_modification

-w /etc/issue -p wa -k audit_rules_networkconfig_modification

-w /etc/hosts -p wa -k audit_rules_networkconfig_modification

-w /etc/sysconfig/network -p wa -k audit_rules_networkconfig_modification

-a always,exit -F arch=b32 -S sethostname -S setdomainname -k audit_rules_networkconfig_modification

-a always,exit -F arch=b64 -S sethostname -S setdomainname -k audit_rules_networkconfig_modification

# Record events that modify the system's mandatory access controls

-w /etc/selinux/ -p wa -k MAC-policy

# Record attempts to alter logon and logout events

-w /var/log/tallylog -p wa -k logins

-w /var/log/lastlog -p wa -k logins

-w /var/run/faillock/ -p wa -k logins

# Record attempts to alter process and session initiation information

-w /var/log/btmp -p wa -k session

-w /var/log/wtmp -p wa -k session

-w /var/run/utmp -p wa -k session

# Ensure auditd collects information on kernel module loading and unloading

-w /usr/sbin/insmod -p x -k modules

-w /usr/sbin/modprobe -p x -k modules

-w /usr/sbin/rmmod -p x -k modules

-a always,exit -F arch=b64 -S init_module -S delete_module -k modules

# Ensure auditd collects system administrator actions

-w /etc/sudoers -p wa -k actions

# Record attempts to alter time through adjtimex

-a always,exit -F arch=b32 -S adjtimex -S settimeofday -S stime -k audit_time_rules

# Record attempts to alter time through settimeofday

-a always,exit -F arch=b64 -S adjtimex -S settimeofday -k audit_time_rules

# Record attempts to alter time through clock_settime

-a always,exit -F arch=b32 -S clock_settime -F a0=0x0 -k time-change

# Record attempts to alter time through clock_settime

-a always,exit -F arch=b64 -S clock_settime -F a0=0x0 -k time-change

# Record events that modify the system's discretionary access controls

-a always,exit -F arch=b32 -S chmod -S fchmod -S fchmodat -F auid>=1000 -F auid!=4294967295 -k perm_mod

-a always,exit -F arch=b32 -S chown -S fchown -S fchownat -S lchown -F auid>=1000 -F auid!=4294967295 -k perm_mod

-a always,exit -F arch=b64 -S chmod -S fchmod -S fchmodat -F auid>=1000 -F auid!=4294967295 -k perm_mod

-a always,exit -F arch=b64 -S chown -S fchown -S fchownat -S lchown -F auid>=1000 -F auid!=4294967295 -k perm_mod

-a always,exit -F arch=b32 -S setxattr -S lsetxattr -S fsetxattr -S removexattr -S lremovexattr -S fremovexattr -F auid>=1000 -F auid!=4294967295 -k perm_mod

-a always,exit -F arch=b64 -S setxattr -S lsetxattr -S fsetxattr -S removexattr -S lremovexattr -S fremovexattr -F auid>=1000 -F auid!=4294967295 -k perm_mod

# Ensure auditd collects unauthorised access attempts to files (unsuccessful)

-a always,exit -F arch=b32 -S creat -S open -S openat -S open_by_handle_at -S truncate -S ftruncate -F exit=-EACCES -F auid>=1000 -F auid!=4294967295 -k access

-a always,exit -F arch=b32 -S creat -S open -S openat -S open_by_handle_at -S truncate -S ftruncate -F exit=-EPERM -F auid>=1000 -F auid!=4294967295 -k access

-a always,exit -F arch=b64 -S creat -S open -S openat -S open_by_handle_at -S truncate -S ftruncate -F exit=-EACCES -F auid>=1000 -F auid!=4294967295 -k access

-a always,exit -F arch=b64 -S creat -S open -S openat -S open_by_handle_at -S truncate -S ftruncate -F exit=-EPERM -F auid>=1000 -F auid!=4294967295 -k access

# Ensure auditd collects information on exporting to media (successful)

-a always,exit -F arch=b32 -S mount -F auid>=1000 -F auid!=4294967295 -k export

-a always,exit -F arch=b64 -S mount -F auid>=1000 -F auid!=4294967295 -k export

# Ensure auditd collects file deletion events by user

-a always,exit -F arch=b32 -S rmdir -S unlink -S unlinkat -S rename -S renameat -F auid>=1000 -F auid!=4294967295 -k delete

-a always,exit -F arch=b64 -S rmdir -S unlink -S unlinkat -S rename -S renameat -F auid>=1000 -F auid!=4294967295 -k delete

# Ensure auditd collects information on the use of privileged commands

-a always,exit -F path=/usr/bin/chage -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/chcon -F perm=x -F auid>=1000 -F auid!=4294967295 -F key=privileged-priv_change

-a always,exit -F path=/usr/bin/chfn -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/chsh -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/crontab -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/gpasswd -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/mount -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/newgrp -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/passwd -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/pkexec -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/screen -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/ssh-agent -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/sudo -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/sudoedit -F perm=x -F auid>=1000 -F auid!=4294967295 -F key=privileged

-a always,exit -F path=/usr/bin/su -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/umount -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/wall -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/bin/write -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/lib64/dbus-1/dbus-daemon-launch-helper -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/libexec/openssh/ssh-keysign -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/libexec/utempter/utempter -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/lib/polkit-1/polkit-agent-helper-1 -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/sbin/netreport -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/sbin/pam_timestamp_check -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/sbin/postdrop -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/sbin/postqueue -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/sbin/restorecon -F perm=x -F auid>=1000 -F auid!=4294967295 -F key=privileged-priv_change

-a always,exit -F path=/usr/sbin/semanage -F perm=x -F auid>=1000 -F auid!=4294967295 -F key=privileged-priv_change

-a always,exit -F path=/usr/sbin/setsebool -F perm=x -F auid>=1000 -F auid!=4294967295 -F key=privileged-priv_change

-a always,exit -F path=/usr/sbin/unix_chkpwd -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/sbin/userhelper -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

-a always,exit -F path=/usr/sbin/usernetctl -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged

# Make the auditd configuration immutable.

# The configuration can only be changed by rebooting the machine.

-e 2
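Because the -e 2 flag makes the configuration immutable, it must be the last effective line in the rules file; rules loaded after it will be rejected until reboot. A minimal stand-alone sketch of checking this (a temp file stands in for /etc/audit/rules.d/audit.rules, and the rule content is illustrative):

```shell
# Sketch: verify that the immutable flag (-e 2) is the final effective rule.
# A temp file stands in for /etc/audit/rules.d/audit.rules.
rules=$(mktemp)
cat > "$rules" <<'EOF'
-a always,exit -F path=/usr/bin/sudo -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged
-e 2
EOF
# Drop blank and comment lines, then look at the last remaining rule.
last=$(grep -v '^[[:space:]]*$' "$rules" | grep -v '^#' | tail -n 1)
if [ "$last" = "-e 2" ]; then
    echo "immutable flag is the final rule"
fi
rm -f "$rules"
```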

The auditd service does not include the ability to send audit records directly to a centralised server for management.

It does, however, include a plug-in for the audit event multiplexor (audispd) that can pass audit records to the local syslog server.

To do so, open the file /etc/audisp/plugins.d/syslog.conf and set:

active = yes

Enable and start the service:

# systemctl enable auditd.service

# systemctl start auditd.service
6.3. Enable Kernel Auditing

Open /etc/default/grub and append the following parameter to the kernel boot line GRUB_CMDLINE_LINUX:

audit=1
Update Grub2 configuration to reflect changes:

# grub2-mkconfig -o /boot/grub2/grub.cfg
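As a sketch of the edit itself, the parameter can be appended to the GRUB_CMDLINE_LINUX line with sed (shown on a temp copy; run it against /etc/default/grub as root, and the initial value here is only an example):

```shell
# Sketch: append audit=1 to the GRUB_CMDLINE_LINUX line.
# A temp file stands in for /etc/default/grub; the initial value is an example.
f=$(mktemp)
echo 'GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet"' > "$f"
sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"/\1 audit=1"/' "$f"
result=$(cat "$f")
echo "$result"
rm -f "$f"
```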
7. System Settings – Software Integrity Checking

7.1 Advanced Intrusion Detection Environment (AIDE)

Install AIDE:

# yum install aide

Build AIDE database:

# /usr/sbin/aide --init

By default, the new database will be written to the file /var/lib/aide/aide.db.new.gz .

# cp /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz

Storing the database and the configuration file /etc/aide.conf (or SHA2 hashes of the files) in a secure location provides additional assurance about their integrity.

Check AIDE database:

# /usr/sbin/aide --check

By default, AIDE does not install itself for periodic execution. Configure periodic execution of AIDE by adding to cron:

# echo "30 4 * * * root /usr/sbin/aide --check | mail -s 'AIDE' root" >> /etc/crontab

Periodically running AIDE is necessary in order to reveal system changes.
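Note that entries in /etc/crontab, unlike per-user crontabs, must include a user field before the command. A stand-alone sketch of a format sanity check (using a simplified version of the entry above):

```shell
# Sketch: check that a system crontab entry has 5 time fields, a user field
# and a command (the /etc/crontab format).
set -f                                  # keep '*' fields from globbing
line='30 4 * * * root /usr/sbin/aide --check'
set -- $line
fields=$#
user=$6
set +f
if [ "$fields" -ge 7 ] && [ "$user" = "root" ]; then
    echo "system crontab format OK"
fi
```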

7.2 Tripwire

Open Source Tripwire is an alternative to AIDE. It is recommended to use one or the other, but not both.

Install Tripwire from the EPEL repository:

# yum install epel-release

# yum install tripwire

# /usr/sbin/tripwire-setup-keyfiles

The Tripwire configuration file is /etc/tripwire/twcfg.txt and the policy file is /etc/tripwire/twpol.txt . These can be edited and configured to match the system Tripwire is installed on, see this blog post for more details.

Initialise the database to implement the policy:

# tripwire --init

Check for policy violations:

# tripwire --check

Tripwire adds itself to /etc/cron.daily/ for daily execution, so no extra configuration is required.

7.3 Prelink

Prelinking is done by the prelink package, which is not installed by default.

# yum install prelink

To disable prelinking, open the file /etc/sysconfig/prelink and set the following:

PRELINKING=no
Sed one-liner:

# sed -i 's/PRELINKING.*/PRELINKING=no/g' /etc/sysconfig/prelink

Disable existing prelinking on all system files:

# prelink -ua
8. System Settings – Logging and Message Forwarding

8.1 Configure Persistent Journald Storage

By default, the journal stores log files only in memory or in a small ring buffer in the directory /run/log/journal . This is sufficient to show recent log history with journalctl, but logs aren't saved permanently. Enabling persistent journal storage ensures that comprehensive data is available after a system reboot.

Open the file /etc/systemd/journald.conf and set the following (the values below are examples; adjust them to your needs):

[Journal]
Storage=persistent

# How much disk space the journal may use up at most
SystemMaxUse=512M

# How much disk space systemd-journald shall leave free for other uses
SystemKeepFree=1G

# How large individual journal files may grow at most
SystemMaxFileSize=64M
Restart the service:

# systemctl restart systemd-journald
8.2 Configure Message Forwarding to Remote Server

Depending on your setup, open /etc/rsyslog.conf and add a forwarding rule to send messages to a remote server, for example (hostname and port are placeholders):

*.* @logs.example.com:514

Here *.* stands for facility.severity . Note that a single @ sends logs over UDP, while a double @@ sends logs over TCP.

8.3 Logwatch

Logwatch is a customisable log-monitoring system.

# yum install logwatch

Logwatch adds itself to /etc/cron.daily/ for daily execution, so no configuration is mandatory.

9. System Settings – Security Software

9.1 Malware Scanners

Install Rkhunter and ClamAV:

# yum install epel-release

# yum install rkhunter clamav clamav-update

# rkhunter --update

# rkhunter --propupd

# freshclam -v

Rkhunter adds itself to /etc/cron.daily/ for daily execution, so no configuration is required. ClamAV scans should be tailored to individual needs.

9.2 Arpwatch

Arpwatch is a tool for monitoring ARP activity on a local network (ARP spoofing detection). It is therefore unlikely to be used in the cloud; however, it is still worth mentioning that the tool exists.

Be aware of the configuration file /etc/sysconfig/arpwatch , which is used to set the email address the reports are sent to.

9.3 Commercial AV

Consider installing a commercial AV product that provides real-time on-access scanning capabilities.

9.4 Grsecurity

Grsecurity is an extensive security enhancement to the Linux kernel. Although it isn't free nowadays, the software is still worth mentioning.

The company behind Grsecurity stopped publicly distributing stable patches back in 2015, with an exception of the test series continuing to be available to the public in order to avoid impact to the Gentoo Hardened and Arch Linux communities.

Two years later, the company decided to cease free distribution of the test patches as well, therefore as of 2017, Grsecurity software is available to paying customers only.

10. System Settings – OS Update Installation

Install the package yum-utils for better consistency checking of the package database.

# yum install yum-utils

Configure automatic package updates via yum-cron.

# yum install yum-cron

Add the following to /etc/yum/yum-cron.conf to get notified via email when new updates are available:

update_cmd = default

update_messages = yes

download_updates = no	

apply_updates = no

emit_via = email	

email_from =

email_to =

email_host = localhost

Add the following to /etc/yum/yum-cron-hourly.conf to check for security-related updates every hour and automatically download and install them:

update_cmd = security

update_messages = yes

download_updates = yes

apply_updates = yes

emit_via = stdio

Enable and start the service:

# systemctl enable yum-cron.service

# systemctl start yum-cron.service
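A minimal stand-alone sketch of verifying the key setting in such an ini-style file (a temp copy stands in for /etc/yum/yum-cron-hourly.conf):

```shell
# Sketch: confirm apply_updates is enabled in a yum-cron style config file.
cfg=$(mktemp)
printf '[commands]\nupdate_cmd = security\napply_updates = yes\n' > "$cfg"
ok=no
if grep -qE '^apply_updates[[:space:]]*=[[:space:]]*yes' "$cfg"; then
    ok=yes
    echo "automatic security updates enabled"
fi
rm -f "$cfg"
```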
11. System Settings – Process Accounting

The package psacct contains utilities for monitoring process activities:

  1. ac – displays statistics about how long users have been logged on.
  2. lastcomm – displays information about previously executed commands.
  3. accton – turns process accounting on or off.
  4. sa – summarises information about previously executed commands.

Install and enable the service:

# yum install psacct

# systemctl enable psacct.service

# systemctl start psacct.service
1. Services – SSH Server

Create a group for SSH access and a regular user account that will be a member of the group:

# groupadd ssh-users

# useradd -m -s /bin/bash -G ssh-users tomas

Generate SSH keys for the user:

# su - tomas

$ mkdir --mode=0700 ~/.ssh

$ ssh-keygen -b 4096 -t rsa -C "tomas" -f ~/.ssh/id_rsa

Generate SSH host keys:

# ssh-keygen -b 4096 -t rsa -N "" -f /etc/ssh/ssh_host_rsa_key

# ssh-keygen -b 1024 -t dsa -N "" -f /etc/ssh/ssh_host_dsa_key

# ssh-keygen -b 521 -t ecdsa -N "" -f /etc/ssh/ssh_host_ecdsa_key

# ssh-keygen -t ed25519 -N "" -f /etc/ssh/ssh_host_ed25519_key

For RSA keys, 2048 bits is considered sufficient. DSA keys must be exactly 1024 bits as specified by FIPS 186-2.

For ECDSA keys, the -b flag determines the key length by selecting from one of three elliptic curve sizes: 256, 384 or 521 bits. ED25519 keys have a fixed length and the -b flag is ignored.

The host can be impersonated if an unauthorised user obtains the private SSH host key file, therefore ensure that permissions of /etc/ssh/*_key are properly set:

# chmod 0600 /etc/ssh/*_key

Configure /etc/ssh/sshd_config with the following:

# SSH port

Port 22

# Listen on IPv4 only

AddressFamily inet
# Protocol version 1 is insecure; use version 2 only

Protocol 2

# Limit the ciphers to those which are FIPS-approved, the AES and 3DES ciphers

# Counter (CTR) mode is preferred over cipher-block chaining (CBC) mode

Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,aes192-cbc,aes256-cbc,3des-cbc

# Use FIPS-approved MACs

MACs hmac-sha2-512,hmac-sha2-256,hmac-sha1

# INFO is a basic logging level that will capture user login/logout activity

# DEBUG logging level is not recommended for production servers

LogLevel INFO

# Disconnect if no successful login is made in 60 seconds

LoginGraceTime 60

# Do not permit root logins via SSH

PermitRootLogin no

# Check file modes and ownership of the user's files before login

StrictModes yes

# Close TCP socket after 2 invalid login attempts

MaxAuthTries 2

# The maximum number of sessions per network connection

MaxSessions 2

# User/group permissions


AllowGroups ssh-users

DenyUsers root

DenyGroups root

# Password and public key authentications

PasswordAuthentication no

PermitEmptyPasswords no

PubkeyAuthentication yes

AuthorizedKeysFile  .ssh/authorized_keys

# Disable unused authentication mechanisms

RSAAuthentication no # DEPRECATED

RhostsRSAAuthentication no # DEPRECATED

ChallengeResponseAuthentication no

KerberosAuthentication no

GSSAPIAuthentication no

HostbasedAuthentication no

IgnoreUserKnownHosts yes

# Disable insecure access via rhosts files

IgnoreRhosts yes

AllowAgentForwarding no

AllowTcpForwarding no

# Disable X Forwarding

X11Forwarding no

# Disable message of the day but print last log

PrintMotd no

PrintLastLog yes

# Show banner

Banner /etc/issue

# Do not send TCP keepalive messages

TCPKeepAlive no

# Default for new installations

UsePrivilegeSeparation sandbox

# Prevent users from potentially bypassing some access restrictions

PermitUserEnvironment no

# Disable compression

Compression no

# Disconnect the client if no activity has been detected for 900 seconds

ClientAliveInterval 900

ClientAliveCountMax 0

# Do not look up the remote hostname

UseDNS no

UsePAM yes
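Before restarting sshd with a configuration this strict, it is worth validating it so a typo doesn't lock you out; as root, sshd -t -f /etc/ssh/sshd_config performs a full syntax check. A lighter stand-alone sketch of checking one critical directive (a temp file stands in for the real config):

```shell
# Sketch: check that PermitRootLogin is explicitly disabled in a config file.
# A temp file stands in for /etc/ssh/sshd_config.
cfg=$(mktemp)
printf 'Port 22\nPermitRootLogin no\n' > "$cfg"
root_login=enabled
if grep -qiE '^[[:space:]]*PermitRootLogin[[:space:]]+no' "$cfg"; then
    root_login=disabled
    echo "root login disabled"
fi
rm -f "$cfg"
```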

In case you want to change the default SSH port to something else, you will need to tell SELinux about it.

# yum install policycoreutils-python

For example, to allow SSH server to listen on TCP 2222, do the following:

# semanage port -a -t ssh_port_t -p tcp 2222

Ensure that the firewall allows incoming traffic on the new SSH port and restart the sshd service.

2. Service – Network Time Protocol

CentOS 7 should come with Chrony; make sure that the service is enabled:

# systemctl enable chronyd.service
3. Services – Mail Server

3.1 Postfix

Postfix should be installed and enabled already. If it isn't, do the following:

# yum install postfix

# systemctl enable postfix.service

Open /etc/postfix/main.cf and configure the following to act as a null client:

smtpd_banner = $myhostname ESMTP

inet_interfaces = loopback-only

inet_protocols = ipv4

mydestination =

local_transport = error: local delivery disabled

unknown_local_recipient_reject_code = 550

mynetworks =

relayhost = []:587

Optionally (depending on your setup), you can configure Postfix to use authentication:

# yum install cyrus-sasl-plain

Open /etc/postfix/main.cf and add the following:

smtp_sasl_auth_enable = yes

smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd

smtp_sasl_security_options = noanonymous

smtp_tls_CApath = /etc/ssl/certs

smtp_use_tls = yes

Open /etc/postfix/sasl_passwd and put authentication credentials in the following format (host, port and credentials are placeholders):

[smtp.example.com]:587 username:password
Set permissions and create a database file:

# chmod 0600 /etc/postfix/sasl_passwd

# postmap /etc/postfix/sasl_passwd
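Since sasl_passwd holds credentials in plain text, the 0600 mode matters (postmap also creates a sasl_passwd.db file, which should be protected the same way). A stand-alone sketch of verifying the permissions (a temp file stands in for /etc/postfix/sasl_passwd):

```shell
# Sketch: confirm a credentials file is readable by its owner only.
f=$(mktemp)
chmod 0600 "$f"
perms=$(stat -c '%a' "$f")
if [ "$perms" = "600" ]; then
    echo "permissions OK"
fi
rm -f "$f"
```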

Restart the service and ensure that firewall allows outgoing traffic to the SMTP relay server.

3.2 Mail Distribution to Active Mail Accounts

Configure the file /etc/aliases to have a forward rule for the root user.

4. Services – Remove Obsolete Services

None of these should be installed on CentOS 7 minimal:

# yum erase xinetd telnet-server rsh-server \
  telnet rsh ypbind ypserv tftp-server bind \
  vsftpd dovecot squid net-snmp talk-server talk

Check all enabled services:

# systemctl list-unit-files --type=service|grep enabled

Disable kernel dump service:

# systemctl disable kdump.service

# systemctl mask kdump.service

Disable everything that is not required, e.g.:

# systemctl disable tuned.service
5. Services – Restrict at and cron to Authorised Users

If the file cron.allow exists, then only users listed in the file are allowed to use cron, and the cron.deny file is ignored.

# echo root > /etc/cron.allow

# echo root > /etc/at.allow

# rm -f /etc/at.deny /etc/cron.deny

Note that the root user can always use cron, regardless of the usernames listed in the access control files.
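A stand-alone sketch of auditing an allow file for unexpected entries (a temp file stands in for /etc/cron.allow, and "root" is the only approved user here):

```shell
# Sketch: report any user in the allow file that is not on the approved list.
allow=$(mktemp)
printf 'root\n' > "$allow"
extra=$(grep -vx 'root' "$allow" || true)
if [ -z "$extra" ]; then
    echo "cron.allow contains only approved users"
fi
rm -f "$allow"
```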

6. Services – Disable X Windows Startup

This can be achieved by setting the default target to multi-user:

# systemctl set-default multi-user.target
7. Services – Fail2ban

Install Fail2ban from the EPEL repository:

# yum install epel-release

# yum install fail2ban

If using iptables rather than firewalld, open the file /etc/fail2ban/jail.d/00-firewalld.conf and comment out the following line:

#banaction = firewallcmd-ipset

Fail2Ban reads /etc/fail2ban/jail.conf (local customisations are better placed in /etc/fail2ban/jail.local). A configuration snippet for SSH is provided below:

[sshd]
port    = ssh

enabled = true

ignoreip =

bantime  = 600

maxretry = 5

If you run SSH on a non-default port, change the port value accordingly and then enable the jail.

# systemctl enable fail2ban.service
# systemctl start fail2ban.service
8. Services – Sysstat to Collect Performance Activity

Sysstat may provide useful insight into system usage and performance; however, unless it is actually used, the service should be disabled or not installed at all.

# yum install sysstat
# systemctl enable sysstat.service
# systemctl start sysstat.service

[Sep 18, 2017] Kickstart File Example

Kickstart File Example

Below is an example of a kickstart file that you can use to install and configure Parallels Cloud Server in unattended mode. You can use this file as the basis for creating your own kickstart files.

# Install Parallels Cloud Server


# Uncomment the line below to install Parallels Cloud Server in a completely unattended mode

# cmdline

# Use the path of to get the installation files.

url --url

# Use English as the language during the installation and as the default system language.

lang en_US.UTF-8

# Use the English keyboard type.

keyboard us

# Uncomment the line below to remove all partitions from the SDA hard drive and create these partitions: /boot, /, /vz, and swap.

# clearpart --drives=sda --all --initlabel

# zerombr

part /boot --fstype=ext4 --size=512

part / --fstype=ext4 --size=20096

part /vz --fstype=ext4 --size=40768 --grow

part swap --size=4000

# Use a DHCP server to obtain network configuration.

network --bootproto dhcp

# Set the root password for the server.

rootpw xxxxxxxxx

# Use the SHA-512 encryption for user passwords and enable shadow passwords.

auth --enableshadow --passalgo=sha512

# Set the system time zone to America/New York and the hardware clock to UTC.

timezone --utc America/New_York

# Set sda as the first drive in the BIOS boot order and write the boot record to mbr.

bootloader --location=mbr

# Tell the Parallels Cloud Server installer to reboot the system after installation.


# Install the Parallels Cloud Server license on the server.


# Create the virt_network1 Virtual Network on the server and associate it with the network adapter eth0.

vznetcfg --net=virt_network1:eth0

# Configure the ip_tables ipt_REJECT ipt_tos ipt_limit modules to be loaded in Containers.

# Use the to handle Fedora OS and application templates.

vztturlmap $FC_SERVER

# Install the listed EZ templates. Cache all OS templates after installation. Skip the installation of pre-created templates.


%eztemplates --cache





# Install the packages for Parallels Cloud Server on the server.






Kickstart file example for installing on EFI-based systems

You can use the file above to install Parallels Cloud Server on BIOS-based systems. For installation on EFI-based systems, you need to modify the following places in the file:

# The following 4 commands are used to remove all partitions from the SDA hard drive and create these partitions: /boot/efi (required for EFI-based systems), /boot, /, /vz, and swap.

# clearpart --drives=sda --all --initlabel

part /boot/efi --fstype=efi --grow --maxsize=200 --size=20

part /boot --fstype=ext4 --size=512

part / --fstype=ext4 --size=20096

part /vz --fstype=ext4 --size=40768 --grow

part swap --size=4000

# Configure the bootloader.

bootloader --location=partition

Kickstart file example for upgrading to Parallels Cloud Server 6.0

Below is an example of a kickstart file you can use to upgrade your system to Parallels Cloud Server 6.0.

# Upgrade to Parallels Cloud Server rather than perform a fresh installation.


# Use the path of to get the installation files.

url --url

# Use English as the language during the upgrade and as the default system language.

lang en_US.UTF-8

# Use the English keyboard type.

keyboard us

# Set the system time zone to America/New York and the hardware clock to UTC.

timezone --utc America/New_York

# Upgrade the bootloader configuration.

bootloader --upgrade

[Sep 18, 2017] RHEL ISO with kickstart file

Aug 06, 2017 |

Hugo , asked Jan 7 '15 at 16:36

I am trying to edit the original RHEL 6.5 DVD (rhel-server-6.5-x86_64-dvd.iso) from redhat in order to add kickstart file on it. The goal is to have one 3.4Go iso with automatic install. And not one boot media and one DVD.

This technique is not supported by redhat officially, but I found a procedure :

My ks.cfg looks like :

repo --name="Red Hat Enterprise Linux"  --baseurl=file:/mnt/source --cost=100
repo --name=HighAvailability --baseurl=file:///mnt/source/HighAvailability

I got an error when the installer start : it didn't find the disk Red Hat Enterprise Linux.

I guess this is because installer is not looking on its own media.

Is there a way to achieve this ? Does the cdrom have optional parameter to hard link the device ?

tonioc , answered Jan 7 '15 at 17:30

You don't need to set the repo URLs in ks.cfg, here is an example of kickstart that I use currently with rhel6.
# interactive install from CD-ROM/DVD

key --skip
lang en_US.UTF-8
# keyboard us

clearpart --all --initlabel
part /boot --fstype ext4 --size=100
part pv.100000 --size=1 --grow
volgroup vg00 --pesize=32768 pv.100000
logvol / --fstype ext4 --name=lvroot --vgname=vg00 --size=15360
logvol swap --fstype swap --name=lvswap --vgname=vg00 --size=2048
logvol /var --fstype ext4 --name=lvvar --vgname=vg00 --size 5120

timezone Europe/Paris
firewall --disabled
authconfig --useshadow --passalgo=sha512
selinux --enforcing


# pre-set list of packages/groups to install 
# ... and so on the list of packages/groups I pre-customize (and with - those I don't want)
# ... and so on
# postinstall, execution avec chroot dans le systeme installé.
%post --interpreter=/bin/sh --log=/root/post_install.log
echo -e "================================================================="
echo -e "       Starting kickStart post install script "

# do some extra stuff here , like mounting cd-rom copying add-ons specific for my product


[Sep 17, 2017] RHEL6 Unable to download kickstart file

Sep 17, 2017 |

robertpas , asked Nov 9 '15 at 15:13

In our lab we have a set of scripts that automatically configure a kickstart installation for RHEL5 on HP ProLiant DL380p Gen8. Based on data from several configuration files, it does the following steps:
  1. mounts redhat dvd
  2. modifies isolinux.cfg accordingly
  3. creates ks.cfg
  4. creates a bootdisk with the installation data (isolinux.cfg, ks.cfg, etc)
  5. creates a http server with the bootdisk directory.
  6. mounts the bootdisk through ILO ( /dev/scd1 )
  7. installs RHEL5

Here is the line referring to the kickstart file location :

append initrd=initrd.img ks=hd:scd1:/isolinux/ks.cfg ksdevice=eth4

Everything works well for RHEL5, but there have been requests for RHEL6.

For RHEL6, everything seems to work OK until #7, where it returns the message "unable to download kickstart file" . I have commented some lines in the scripts, eliminating the installation part and leaving only the ILO mount part.

The bootdisk is mounted and accessible on /dev/scd1 . The ks.cfg file is present there. I have also tested and the files from the Kickstart server are accessible with wget .

I have also tried accessing the ks.cfg file through http :

append initrd=initrd.img ks=http://<ip>:<port>/boot/isolinux/ks.cfg ksdevice=eth4

The above part did not work.

But what really vexes me is that RHEL5 works in the same conditions, but RHEL6 does not.

I have been talking to redhat support for a week and they don't seem to know what is wrong.

Any help would be greatly appreciated.

robertpas , answered Nov 11 '15 at 8:11

I have figured out the problem.

There seems to be a difference between RHEL5 and RHEL6 at the installation level.

RHEL5 will detect your physical cdrom and mount it on /dev/scd0 , therefore the location of the mount will be /dev/scd1 . RHEL6 does not seem to do this, therefore the mount location will be /dev/scd0 .

The correct way to declare the ks file location in a case like this is :

append initrd=initrd.img ks=hd:scd0:/isolinux/ks.cfg ksdevice=eth4

I hope someone will find this helpful in the future.

[Sep 17, 2017] Archlinux, systemd-free

Notable quotes:
"... Since the adoption of systemd by Arch Linux I've encountered many problems with my systems, ranging from lost temporary files which systemd deemed proper to delete without asking (changing default behaviour on a whim), to total, consistent boot lockups because systemd-210+ couldn't mount an empty /usr/local partition (whereas systemd-208 could; go figure). ..."
"... As each "upgrade" of systemd aggressively assimilated more and more key system components into itself, it became apparent that the only way to avoid this single most critical point of failure was to stay as far away from it as possible. ..."
"... How about defaulting KillUserProcesses to yes , which effectively kills all backgrounded user processes (tmux and screen included) on logout? ..."
Aug 02, 2017 |

An init system must be an init system

Systemd problems

Since the adoption of systemd by Arch Linux I've encountered many problems with my systems, ranging from lost temporary files which systemd deemed proper to delete without asking (changing default behaviour on a whim), to total, consistent boot lockups because systemd-210+ couldn't mount an empty /usr/local partition (whereas systemd-208 could; go figure).

As each "upgrade" of systemd aggressively assimilated more and more key system components into itself, it became apparent that the only way to avoid this single most critical point of failure was to stay as far away from it as possible.

Reading the list of those system components is daunting: login, pam, getty, syslog, udev, cryptsetup, cron, at, dbus, acpi, cgroups, gnome-session, autofs, tcpwrappers, audit, chroot, mount ... How about defaulting KillUserProcesses to yes , which effectively kills all backgrounded user processes (tmux and screen included) on logout?

It would seem that the only thing still missing from systemd is a decent init system.

The solution: Remove systemd, install OpenRC

"Coincidentally", there were others before me who had had similar concerns and had prepared the way.

Their efforts and experience are summarised in these pages. Sincere, warm thanks go to artoo and Aaditya who have done most of the work in Archland and, of course, the Gentoo developers who have made this possible in the first place. I administer a handsome lot of linux boxes and I've performed the migration procedure (successfully and without exception) in all of them, even remote ones.

The procedure is explained in Installation ; however you might want to read about OpenRC in the links below:

The Archlinux OpenRC wiki page doesn't contain information on the migration process anymore; it breaks down things in several different articles and provides links to other resources not always Arch-specific, which unnecessarily obfuscates things, not to mention the omnipresent warning to not remove systemd. The migration procedure described here instead is reliable and as plain and simple as possible, explaining what is to be done and why in clearly defined steps, despite what a FUD-spreading Arch Wiki admin says against it in his every other post. This is proven time and again on many different boxes and setups.

For your convenience, an up-to-date OpenRC ISO image is also provided for clean installations. Go to Installation for additional information.

Other Linux distros: Escape from systemd

Here we focus on removing systemd from Arch Linux and derivatives: Manjaro, ArchBang, Antergos etc. For information about removing systemd from other Linux distributions (namely Debian and deb/apt-get based ones like Ubuntu and Mint) you can visit the Without systemd wiki .
Additionally, a list of Operating systems without systemd in the default installation might be of special interest as, ultimately, the future of the Linux init systems will be determined by the popularity (or lack thereof) of systemd-free distros like Gentoo , Slackware , PCLinuxOS , Void Linux and Devuan .

Non-Linux OSes are also a viable (if somewhat last-resort) option, especially if the situation for non-systemd setups significantly worsens; FreeBSD and DragonFlyBSD are totally worth taking a shot at.

Clean installations

Contact

You may contact us at Freenode, channels #openrc, #manjaro-openrc and #arch-openrc.

[Aug 14, 2017] Are 32-bit applications supported in RHEL 7?

Aug 14, 2017 |
Solution Verified - Updated June 1 2017 at 1:22 PM -


Issue Resolution

Red Hat Enterprise Linux 7 does not support installation on i686, 32 bit hardware. ISO installation media is only provided for 64-bit hardware. Refer to Red Hat Enterprise Linux technology capabilities and limits for additional details.

However, 32-bit applications are supported in a 64-bit RHEL 7 environment in the following scenarios:

While RHEL 7 will not natively support 32-bit hardware, certified hardware can be searched for in the certified hardware database .

[Aug 14, 2017] CentOS-RHEL - WineHQ Wiki

Aug 14, 2017 |

Notes on EPEL 7

At the time of this writing, EPEL 7 still has no 32-bit packages (including wine and its dependencies). There is a 64-bit version of wine, but without the 32-bit libraries needed for WoW64 capabilities, it cannot support any 32-bit Windows apps (the vast majority) and even many 64-bit ones (that still include 32-bit components).

This is primarily because with release 7, Red Hat didn't have enough customer demand to justify an i386 build. While Red Hat itself still comes with lean multilib and 32-bit support for legacy needs, this is part of Red Hat's release process, not the packages themselves. Therefore CentOS 7 had to develop its own workflow for building an i386 release, a process that was completed in Oct 2015 .

With its i386 release, CentOS has cleared a major hurdle on the way to an EPEL with 32-bit libraries, and now the ball is in the Fedora project's court (as the maintainers of the EPEL). Once an i386 version of the EPEL becomes available, you should be able to follow the same instructions above to install a fully functional wine package for CentOS 7 and its siblings.

Thankfully, this also means that EPEL 8 shouldn't suffer from this same problem. In the meantime though, you can keep reading for some hints on getting a recent version of wine from the source code.

Special Considerations for Red Hat

Those with a Red Hat subscription should have access to enhanced help and support, but we wanted to provide some very quick notes on enabling the EPEL for your system. Before installing the epel-release package, you'll first need to activate some additional repositories.

On release 6 and older, which used the Red Hat Network Classic system for package subscriptions, you need to activate the optional repo with rhn-channel

rhn-channel -a -c rhel-6-server-optional-rpms -u <your-RHN-username> -p <your-RHN-password>

Starting with release 7 and the Subscription Manager system, you'll need to activate both the optional and extras repos with subscription-manager

subscription-manager repos --enable=rhel-7-server-optional-rpms
subscription-manager repos --enable=rhel-7-server-extras-rpms

As for source RPMs signed by Red Hat, there doesn't seem to be much public-facing documentation. With a subscription, you should be able to login and browse the repos ; this post on LWN also has some background.

[Aug 14, 2017] CentOS 6 or CentOS 7

Aug 14, 2017 |

[Aug 13, 2017] The Tar Pit - How and why systemd has won

Notable quotes:
"... However, despite this and despite the flame wars it has caused throughout the open source communities, and the endless attempts to boycott it, systemd has already won. Red Hat Enterprise Linux now uses it; Debian made it the default init system for their next version 6 and as a consequence, Ubuntu is replacing Upstart with systemd; openSUSE and Arch have it enabled for quite some time now. Basically every major GNU/Linux distribution is now using it 7 . ..."
"... systemd doesn't even know what the fuck it wants to be. It is variously referred to as a "system daemon" or a "basic userspace building block to make an OS from", both of which are highly ambiguous. [...] Ironically, despite aiming to standardize Linux distributions, it itself has no clear standard, and is perpetually rolling. ..."
Sep 27, 2014

systemd is the work of Lennart Poettering, some guy from Germany who makes free and open source software, and who's been known to rub people the wrong way more than once. In case you haven't heard of him, he's also the author of PulseAudio, also known as that piece of software people often remove from their GNU/Linux systems in order to make their sound work. Like any software engineer, or rather like one who's gotten quite a few projects up and running, Poettering has an ego. Well, this should be about systemd, not about Poettering, but it very well is.

systemd started as a replacement for the System V init process. Like everything else, operating systems have a beginning and an end, and like every other operating system, Linux also has one: the Linux kernel passes control over to user space by executing a predefined process commonly called init , but which can be whatever process the user desires. Now, the problem with the old System V approach is that, well, I don't really know what the problem is with it, other than the fact that it's based on so-called "init scripts" 1 and that this, and maybe a few other design aspects impose some fundamental performance limitations. Of course, there are other aspects, such as the fact that no one ever expects or wants the init process to die, otherwise the whole system will crash.
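For contrast, systemd replaces those imperative init scripts with declarative unit files: you state what to run and what it depends on, and the init daemon derives the ordering. A minimal sketch of a service unit, with a made-up service name and binary path, looks like this:

```
[Unit]
Description=Example daemon (hypothetical)
After=network.target

[Service]
ExecStart=/usr/sbin/exampled --no-daemonize
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Dropped into /etc/systemd/system/exampled.service, it would be enabled with systemctl enable exampled; the point is that there is no script logic to get wrong, only key=value declarations.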

The broader history is that systemd isn't the first attempt to stand out as a new, "better" init system. Canonical have already tried that with Upstart; Gentoo relies on OpenRC; Android uses a combination of Busybox and its own home-made flavour of initialization scripts, but then again, Android does a lot of things differently. However, contrary to the basic tenets 2 of the Unix philosophy, systemd also aims to do a lot of things differently.

For example, it aims to integrate as many other system-critical daemons as possible: from device management, IPC and logging to session management and time-based scheduling, systemd wants to do it all. This is indeed rather stupid from a software engineering point of view 3 , as it increases software complexity and bugs and the attack surface and whatnot 4 , but I can understand the rationale behind it: the maintainers want more control over everything, so they end up requesting that all other daemons are written as systemd plugins 5 .

However, despite this and despite the flame wars it has caused throughout the open source communities, and the endless attempts to boycott it, systemd has already won. Red Hat Enterprise Linux now uses it; Debian made it the default init system for their next version 6 and as a consequence, Ubuntu is replacing Upstart with systemd; openSUSE and Arch have it enabled for quite some time now. Basically every major GNU/Linux distribution is now using it 7 .

At the end of the day, systemd has won by being integrated into the democratic ecosystem that is GNU/Linux. As much as I hate PulseAudio and as much as I don't like Poettering, I see that distribution developers and maintainers seem to desperately need it, although I must confess I don't really know why. Either way, compare this:

systemd doesn't even know what the fuck it wants to be. It is variously referred to as a "system daemon" or a "basic userspace building block to make an OS from", both of which are highly ambiguous. [...] Ironically, despite aiming to standardize Linux distributions, it itself has no clear standard, and is perpetually rolling.

to this:

Verifiable Systems are closely related to stateless systems: if the underlying storage technology can cryptographically ensure that the vendor-supplied OS is trusted and in a consistent state, then it must be made sure that /etc or /var are either included in the OS image, or simply unnecessary for booting.

and this. Some of the stuff there might be downright weird or unrealistic or bullshit, but other than that, these guys (especially Poettering) have a damn good idea what they want to do and where they're going, unlike many other free software and open source projects.

And now's one of those times when such a clear vision makes all the difference.

  1. That is, it's "imperative" instead of "declarative". Does this matter to the average guy? I haven't the vaguest idea, to be honest.
  2. Some people don't consider software engineering a science, that's why. But I guess it would be fairer to call them "principles", wouldn't it?
  3. One does not simply integrate components for the sake of "integration". There are good reasons to have isolation and well-established communication protocols between software components: for example if I want to build my own udev or cron or you-name-it, systemd won't let me do that, because it "integrates". Well, fuck that.
  4. And guess what; for system software, systemd has a shitload of bugs . This is just not acceptable for production. Not. Acceptable. Ever.
  5. That's what "having systemd as a dependency" really means, no matter how they try to sugarcoat it.
  6. Jessie, at the time of writing.
  7. Well, except Slackware.

[Aug 13, 2017] Is Modern Linux Becoming Too Complex

One man's variety is another man's hopelessly confusing goddamn mess.

Feb 11, 2015 | Slashdot

An anonymous reader writes: Debian developer John Goerzen asks whether Linux has become so complex that it has lost some of its defining characteristics. "I used to be able to say Linux was clean, logical, well put-together, and organized. I can't really say this anymore. Users and groups are not really determinative for permissions, now that we have things like polkit running around. (Yes, by the way, I am a member of plugdev.) Error messages are unhelpful (WHY was I not authorized?) and logs are nowhere to be found. Traditionally, one could twiddle who could mount devices via /etc/fstab lines and perhaps some sudo rules. Granted, you had to know where to look, but when you did, it was simple; only two pieces to fit together. I've even spent time figuring out where to look and STILL have no idea what to do."

Lodragandraoidh (639696) on Wednesday February 11, 2015 @11:21AM (#49029667)

Re:So roll your own. (Score:5, Insightful)

I think you're missing the point. Linux is the kernel, and it is very stable; while it has modern extensions, it still keeps the POSIX interfaces consistent to allow interoperation as desired. The issue here is not that forks and new versions of Linux distros are an aberration, but how the major distributions have changed; the article is a symptom of those changes towards homogeneity.

The Linux kernel is by definition identically complex on any distro using a given version of the kernel (the variances created by compilation switches notwithstanding). The real variance is in the distros - and I don't think variety is a bad thing, particularly in this day and age when we are having to focus more and more on security, and small applications on different types of devices - from small ARM processor systems, to virtual cluster systems in data centers.

Variety creates a strong ecosystem that is more resilient to security exploitation as a whole; variety is needed now more than ever given the security threats we are seeing. If you look at the history of Linux distributions over time - you'll see that from the very beginning it was a vibrant field with many distros - some that bombed out - some that were forked and then died, and forks and forks of forks that continued on - keeping the parts that seemed to work for those users.

Today - I think people perceive what is happening with the major distros as a reduction in choice (if Redhat is essentially identical to Debian, Ubuntu, et al - why bother having different distros?) - a bottleneck in variability; from a security perspective, I think people are worried that a monoculture is emerging that will present a very large and crystallized attack surface after the honeymoon period is over.

If people don't like what is available, if they are concerned about the security implications, then they or their friends need to do something about it. Fork an existing distro, roll your own distro, or if you are really clever - build your own operating system from scratch to provide an answer, and hopefully something better/different in the long run. Progress isn't a bad thing; sitting around doing nothing and complaining about it is.

NotDrWho (3543773) on Wednesday February 11, 2015 @11:28AM (#49029739)

Re: So roll your own. (Score:5, Funny)

One man's variety is another man's hopelessly confusing goddamn mess.

Anonymous Coward on Wednesday February 11, 2015 @09:31AM (#49028605)

Re: Yes (Score:4, Insightful)

Systemd has been the most divisive force in the Linux community lately, and perhaps ever. It has been foisted upon many unwilling victims. It has torn apart the Debian community. It has forced many long-time Linux users to the BSDs, just so they can get systems that boot properly.

Systemd has harmed the overall Linux community more than anything else has. Microsoft and SCO, for example, couldn't have dreamed of harming Linux as much as systemd has managed to do, and in such a short amount of time, too.

Amen. It's sad, but a single person has managed to kill the momentum of GNU/Linux as an operating system. Microsoft should give the guy a medal.

People are loath to publish new projects because keeping them running with systemd and all its dependencies in all possible permutations is a full time job. The whole "do one thing only and do it well" concept has been flushed down the drain.

I know that I am not the only sysadmin who refuses to install Red Hat Enterprise Linux 7, but install new systems with RHE

gmack (197796) on Wednesday February 11, 2015 @11:55AM (#49030073) Homepage Journal

(Score:4, Informative)

Who modded this up?

SystemD has put in jeopardy the entire presence of Linux in the server room:

1: AFAIK, as there has been zero mention of this, SystemD appears to have had -zero- formal code testing, auditing, or other assurance that it is stable. It was foisted on people in RHEL 7 and downstreams with no ability to transition to it.

Formal code testing is pretty much what Redhat brings to the table.

2: It breaks applications that use the init.d mechanism to start with. This is very bad, since some legacy applications can not be upgraded. Contrast that to AIX where in some cases, programs written back in 1991 will run without issue on AIX 7.1. Similar with Solaris.

At worst it breaks their startup scripts, and since they are shell scripts they are easy to fix.

3: SystemD is one large code blob with zero internal separation... and it listens on the network with root permissions. It does not even drop perms which virtually every other utility does. Combine this with the fact that this has seen no testing... and this puts every production system on the Internet at risk of a remote root hole. It will be -decades- before SystemD becomes a solid program. Even programs like sendmail went through many bug fixes where security was a big problem... and sendmail has multiple daemons to separate privs, unlike SystemD.

Do you really understand the architecture of either SystemD or sendmail? Sendmail was a single binary written in a time before anyone cared about security. I don't recall sendmail being a bundle of programs, but then it's been a decade since I stopped using it, precisely because of its poor security track record. Contrary to your FUD, SystemD runs things as separate daemons, with each component using the least amount of privileges needed to do its job; on top of that, many of the network services (ntp, dhcpd) that people complain about are completely optional add-ons and, quite frankly, since they seem designed around the single purpose of Linux containers, I have not installed them. This is a basic FAQ entry on the systemd web site, so I really don't get how you didn't know this.

4: SystemD cannot be worked around. The bash hole, I used busybox to fix. If SystemD breaks, since it encompasses everything including the bootloader, it can't be replaced. At best, the system would need major butchery to work. In the enterprise, this isn't going to happen, and the Linux box will be "upgraded" to a Windows or Solaris box.

Unlikely; it is a minority of malcontents upset about SystemD who have created an echo chamber of half-truths and outright lies. Anyone who needs to get work done will not even notice the transition.

5: SystemD replaces many utilities that have stood 20+ years of testing, and takes a step back in security by the monolithic userland and untested code. Even AIX with its ODM has at least seen certification under FIPS, Common Criteria, and other items.

Again you use the word "monolithic" without having a shred of knowledge about how SystemD works. The previous init system, despite all of its testing, was a huge mess. There is a reason there were multiple projects before SystemD that tried to clean up the horrific mess that was the previous init.

6: SystemD has no real purpose, other than ego. The collective response justifying its existence is, "because we say so. Fuck you and use it." Well, this is no way to treat enterprise customers. Enterprise customers can easily move to Solaris if push comes to shove, and Solaris has a very good record of security, without major code added without actual testing being done, and a way to be compatible. I can turn Solaris 11's root role into a user, for example.

Solaris has already transitioned to its own equivalent daemon that does roughly what SystemD does.

As for SystemD: It allows booting on more complicated hardware. Debian switched because they were losing market share on larger systems that the current init system only handles under extreme protest. As a side effect of the primary problem it was meant to solve, it happens to be faster, which is great for desktops, and uses a lot less memory, which is good for embedded systems.

So, all in all, SystemD is the worst thing that has happened to Linux, its reputation, and potentially its future in 10 years, since the ACTA treaty was put to rest. SystemD is not production ready, and potentially puts every single box in jeopardy of a remote root hole.

Riight.. Meanwhile in the real world, none of my desktops or servers have any SystemD related network services running so no root hole here.

Dragonslicer (991472) on Wednesday February 11, 2015 @12:26PM (#49030407)

(Score:5, Insightful)

3: SystemD is one large code blob with zero internal separation... and it listens on the network with root permissions. It does not even drop perms which virtually every other utility does. Combine this with the fact that this has seen no testing... and this puts every production system on the Internet at risk of a remote root hole. It will be -decades- before SystemD becomes a solid program.

Even programs like sendmail went through many bug fixes where security was a big problem... and sendmail has multiple daemons to separate privs, unlike SystemD.

Because of course it's been years since anyone found any security holes in well-tested software like Bash or OpenSSL.

Anonymous Coward on Wednesday February 11, 2015 @08:24AM (#49028117)

(Score:5, Interesting)

I was reading through the article's comments and saw this thread of discussion. Well, it's hard to call it a thread of discussion because John apparently put an end to it right away.

The first comment in that thread is totally right though. It is systemd and Gnome 3 that are causing so many of these problems with Linux today. I don't use Debian, but I do use another distro that switched to systemd, and it is in fact the problem here. My workstation doesn't work anywhere near as well as it did a couple of years ago, before systemd got installed. So when somebody blames systemd for these kinds of problems, that person is totally correct. I don't get why John would censor the discussion like that. I also don't get why he'd label somebody who points out the real problem a 'troll'.

John needs to admit that the real problem here is not the people who are against systemd. These people are actually the ones who are right, and who have the solution to John's problems!

The comment I linked to says 'Systemd needs to be removed from Debian immediately.', and that's totally right. But I think we need to expand it to 'Systemd needs to be removed from all Linux distros immediately.'

If we want Linux to be usable again, systemd does need to go. It's just as simple as that. Censoring any and all discussion of the real problem here, systemd, sure isn't going to get these problems resolved any quicker!

Re:Why does John shut down all systemd talk? (Score:5, Insightful)

[Aug 10, 2017] Kickstart Problem with --initlabel

Notable quotes:
"... Last edited by phatrik; 08-07-2012 at 11:17 AM . Reason: prefixed with solved ..."
Aug 07, 2012
Kickstart: Problem with --initlabel

I'm having a problem when using kickstart to deploy CentOS 6.3 KVM guest OS and no one seems to know why so I figured I'd ask in the KVM SF :-) Details:

- Trying to install CentOS 6.3
- Doing a netinstall using a FTP site
- The installation is for a guest OS (KVM).

The install is being launched with:

virt-install -n -r 768 -l /media/netinstall -x "ks="

The install starts and gets to the point where I see "Error processing drive. This device may need to be re-initialized." The relevant part of my KS file:

clearpart --initlabel --all

# Disk partitioning information

part /boot --fstype="ext4" --size=500
part /home --fstype="ext4" --size=2048
part swap --fstype="swap" --size=2048
part / --fstype="ext4" --grow --size=1

When I switch to the 3rd terminal for information, here's what I see:

required disklabel type for sda (1) is None
default disklabel type for sda is msdos
selecting msdos disklabel for sda based on size

Based on "required disklabel type for sda (1) is None" I decided to remove the --initlabel parm; however, I still face the same problem (being prompted to initialize the disk).



Last edited by phatrik; 08-07-2012 at 11:17 AM . Reason: prefixed with solved
dyasny 08-07-2012 Registered: Dec 2007 Location: Canada Distribution: RHEL,Fedora Posts: 995
I'd just abandon virt-install and deploy VMs from a template instead. Much faster and easier to do
phatrik 08-07-2012, 08:14

Thank you for your reply, but that's obviously not the answer I'm looking for. Yes I know virt-clone could be used but what I'm truly interested in is getting at the bottom of this problem.


From RedHat Knowledge base

The 'clearpart --initlabel' option in a kickstart no longer initializes drives in RHEL 6.3.

Red Hat Enterprise Linux 6.3
Anaconda (kickstart)


Use the 'zerombr' option in the kickstart to initialize disks and create a new partition table.

Use the 'ignoredisk' option in the kickstart to limit which disks are initialized by the 'zerombr' option. The following example will limit initialization to the 'vda' disk only:
ignoredisk --only-use=vda
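Combining the knowledge-base advice with the partitioning from the original post, the top of a working kickstart for a KVM guest whose only disk is vda would look roughly like this (a sketch, not the verbatim Red Hat example):

```
# Wipe the partition table so anaconda does not prompt to re-initialize
zerombr
ignoredisk --only-use=vda
clearpart --all

# Disk partitioning information
part /boot --fstype="ext4" --size=500
part swap --fstype="swap" --size=2048
part / --fstype="ext4" --grow --size=1
```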


Thank you for your reply, that's exactly what I was looking for.



[Aug 08, 2017] Kickstart option to set GRUB drive location?

Notable quotes:
"... ignoredisk --drives=sdb ..."
Aug 08, 2017

andersbiro " 2010/03/04 12:36:32

Hello, I have successfully created a Centos USB stick installation with an automated kickstart configuration according to the instructions at .

Everything works flawlessly with the exception that the installation writes the GRUB Boot Loaders on the USB stick instead of the destination hard drive and hence can only be booted from the USB stick.

Afterwards I can solve this manually by editing grub.conf to point to the hard drive, and using the grub utility I can install the GRUB loader on the hard drive MBR instead, and then it boots normally.

The aim, however, is to create a fully automated installation, since the end users in question are not expected to be technically proficient, so my question is whether there is a kickstart option to explicitly write GRUB to the hard drive from the very beginning?

There seems to be a kickstart "bootloader" option, but I cannot really see any flags that would explicitly set GRUB on a specified hard drive?

Forum Moderator
Posts: 9310
Joined: 2007/10/22 11:30:09
Location: ~/Earth/UK/England/Suffolk
Contact: Contact AlanBartlett Website
Re: Kickstart option to set GRUB drive location?

Post by AlanBartlett " 2010/03/04 18:52:16

In the CentOS wiki article that you reference, under the heading Notes , there is a cherry-red block of text. Isn't that appropriate?

If not, do you have any suggestions for improvement to the article?

Retired Moderator
Posts: 18276
Joined: 2006/12/13 20:15:34
Location: Tidewater, Virginia, North America
Contact: Contact pschaff Website
Kickstart option to set GRUB drive location?

Post by pschaff " 2010/03/04 21:36:41

andersbiro wrote:
There seems to be a kickstart "bootloader" option but I can not really see any flags that would explicitly set the GRUB on a specified hard drive?

How about Kickstart Options : Code: Select all

bootloader (required)
Specifies how the boot loader should be installed. This option is required for both installations and upgrades.

* --append= : Specifies kernel parameters. To specify multiple parameters, separate them with spaces. For example:

bootloader --location=mbr --append="hdd=ide-scsi ide=nodma"

* --driveorder : Specify which drive is first in the BIOS boot order. For example:

bootloader --driveorder=sda,hda

* --location= : Specifies where the boot record is written. Valid values are the following: mbr (the default),
partition (installs the boot loader on the first sector of the partition containing the kernel), or
none (do not install the boot loader).

Still can't guarantee that a totally automated approach is possible, unless the hardware is identical, as the devices and ordering will be system-dependent.

Posts: 12
Joined: 2010/02/22 10:07:54
Re: Kickstart option to set GRUB drive location?

Post by andersbiro " 2010/03/08 08:48:17

AlanBartlett wrote:
In the CentOS wiki article that you reference, under the heading Notes , there is a cherry-red block of text. Isn't that appropriate?

If not, do you have any suggestions for improvement to the article?

To my understanding, this specific part of the text refers to an interactive installation, but since I am dealing with a fully automatic installation I do not think that part applies, which is why I am looking for corresponding kickstart options to achieve the same thing.
To be fair, it also mentions the line "bootloader --driveorder=cciss/c0d0,sda --location=mbr", which might be appropriate, but I am not very proficient at comprehending the parameters.

Posts: 12
Joined: 2010/02/22 10:07:54
Re: Kickstart option to set GRUB drive location?

Post by andersbiro " 2010/03/08 08:58:16


I was aware of these parameters but I am not fully sure how to apply them... the "--location" flag seemed easy enough, and also --driveorder, but the "append" kernel parameters elude me; perhaps that one is not required.

I know that the kernel and GRUB part should reside on the first partition of disk "sda" and the USB stick on "sdb", so would setting "--driveorder=sda,sdb" ensure that grub.conf points to the sda disk?

Also, would that automatically write the GRUB loader on "sda" as well, or do you need to use the "partition" flag for that?

Posts: 12
Joined: 2010/02/22 10:07:54
Re: Kickstart option to set GRUB drive location?

Post by andersbiro " 2010/03/08 12:04:09

As a matter of fact I tried the --driveorder flag, and that actually worked: it can now boot directly without the USB stick, which is a great step forward.
The only remaining obstacle is that somehow the FAT32 partition disappears from the USB stick, so it cannot be used for future installations.
This can however be fixed by using fdisk to create a new FAT32 partition in the same space, and somehow this also restores the previous file in the partition.

Since the GRUB bootloader seems to be written to the destination disk, I cannot understand at all why the FAT32 partition disappears.
Are additional flags required to prevent this from happening?

Retired Moderator
Posts: 18276
Joined: 2006/12/13 20:15:34
Location: Tidewater, Virginia, North America
Contact: Contact pschaff Website
Re: Kickstart option to set GRUB drive location?

Post by pschaff " 2010/03/08 12:51:41

andersbiro wrote:
Since the GRUB bootloader seems to be written to the destination disk I must say that I cannot understand at all why the FAT32 partition disappears?
Are additional flags required to prevent this from happening?

I have not seen that happen. You have both FAT32 and ext3 partitions, and the FAT32 one is gone after the install? I'd check the kickstart file carefully to be sure it is not inadvertently messing with the USB drive.

Thanks for reporting back, and please keep us posted. Any recommendations for the Wiki article appreciated.

Posts: 12
Joined: 2010/02/22 10:07:54
Re: Kickstart option to set GRUB drive location?

Post by andersbiro " 2010/03/08 14:49:12

I managed to solve the issue by adding the "ignoredisk --drives=sdb" parameter for the USB drive and now the installer leaves the USB stick intact and the installation works flawlessly.
However, I still do not know why the installer affected the disk in the first place, but this flag did at any rate solve the problem for me.
Retired Moderator
Posts: 18276
Joined: 2006/12/13 20:15:34
Location: Tidewater, Virginia, North America
Contact: Contact pschaff Website
Re: Kickstart option to set GRUB drive location?

Post by pschaff " 2010/03/08 14:53:40

Thanks for the additional info. Still seems that a general solution is elusive, as there's no guarantee that on a different set of hardware the USB drive will show up as /dev/sdb.
Posts: 1
Joined: 2012/04/03 17:00:26
Re: Kickstart option to set GRUB drive location?

Post by nektoid " 2012/04/03 17:06:27

Hi. I ran into this recently kickstarting both 5.5 and 6.2 hosts. Kickstarts worked one day and the next the bootloader wanted to be on the USB key, odd. This is an example of what worked for me with 5.5, where the USB key was consistently seen as sdb. Both lines go in the preamble section of ks.cfg.

#stop writing bootloader to usb
bootloader --driveorder=sda,sdb --location=mbr

#stop erasing my usb stick
ignoredisk --drives=sdb

[Aug 08, 2017] Unattended Installation of Red Hat Enterprise Linux 7 Operating System on Dell PowerEdge Servers Using iDRAC With Lifecycle Controller

The OS Deployment feature available in Lifecycle Controller enables you to deploy standard and custom operating systems on the managed system. You can also configure RAID before installing the operating system if it is not already configured. You can deploy the operating system using any of the following methods:

The unattended installation feature requires an OS configuration or answer file. During unattended installation, the answer file is provided to the OS loader. This activity requires minimal or no user intervention. Currently, the unattended installation feature is supported only for Microsoft Windows and Red Hat Enterprise Linux 7 operating systems from Lifecycle Controller.

Note: This paper only covers unattended installation of the Red Hat Enterprise Linux 7 operating system from Lifecycle Controller. For more information about unattended installation of Microsoft Windows operating systems, see the "Unattended Installation of Windows Operating Systems" document.

[Aug 06, 2017] Unable to download the kickstart file

Aug 06, 2017
Hi there
There is a kickstart file on the web server; it opens in the browser fine and I can read the content.
Also, wget fetches it flawlessly, yet the installation fails with the following error: "Unable to download the kickstart file, please modify the kickstart parameter..."
I'm using Apache as the web server.
Any idea what's causing the problem ?
Do I need some special configuration of Apache ?

Jesus is the King

Michal Kapalka (mikap) Honored Contributor

11-15-2010 11:54 PM

Re: Unable to download the kickstart file

check this forum thread :

mikap

Piotr Kirklewski Super Advisor

11-16-2010 05:50 AM

Re: Unable to download the kickstart file
Well - I'm not using NFS here.
So it seems to me that post is irrelevant, unless I don't get it. Jesus is the King

Piotr Kirklewski Super Advisor

11-16-2010 06:08 AM

Re: Unable to download the kickstart file
Also I have only one NIC in my VM and I'm not specifying the ksdevice in my ks.cfg.
Should I do that? Jesus is the King

Matti_Kurkela Honored Contributor

11-17-2010 01:52 AM

Re: Unable to download the kickstart file
You did not mention the name of your Linux distribution, but Kickstart suggests RedHat or one of its derivatives.

RedHat's installer sets up a shell prompt on one of the virtual consoles, and a log or two on other virtual consoles.

You might take a look at the logged messages, and/or use the shell prompt to verify the system has a working network connection.

See paragraph 4.1.1 of this document for a description of the virtual console functionality of the installer (this part of the installer has not changed significantly since RHEL 3):

Another possibility is that the web server offers the kickstart file using a MIME content type the installer does not like. I guess "text/plain" would be acceptable, but this might not be Apache's default content type for .cfg files.

If this is the case, you might have to add the file type to your Apache configuration:

AddType text/plain .cfg
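If the MIME type is the suspect, you can check what the server actually sends without rerunning the installer; the host and path below are placeholders for your own setup:

```shell
# Inspect the Content-Type header Apache returns for the kickstart file
curl -sI http://192.168.1.10/ks.cfg | grep -i '^Content-Type'
```

If this reports something other than text/plain, the AddType line above (followed by an Apache reload) should fix it.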

MK

Jimmy Vance HPE Pro

11-17-2010 04:06 AM

Re: Unable to download the kickstart file
You don't mention the server model or distribution version you're working with. Even though other systems can access the ks file OK, maybe the system you're trying to install on does not have proper NIC drivers during boot to access the ks file?

No support by private messages. Please ask the forum! I work for HPE

If you feel this was helpful please click the KUDOS! thumb below!

Piotr Kirklewski Super Advisor

11-17-2010 09:47 AM

Re: Unable to download the kickstart file
I'm trying to install CentOS 5.5 64-bit.
The PXE server is Debian.

vim /tftpboot/pxelinux.cfg/default

### CENTOS 5.5 - CUSTOM ###
LABEL Centos 5.5 x86_64 Custom
kernel CentOS/vmlinuz noapic
append initrd=CentOS/initrd.img ks= text nofb
Customized, unattended installation version of CentOS 5.5

Did you mean that I should add this to the Apache config?:

DirectoryIndex index.html index.cgi index.php index.xhtml index.htm index.cfg ks.cfg
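For reference, a cleaned-up version of the PXELINUX stanza above might look like the following. Note that PXELINUX LABEL names cannot contain spaces, and ks= needs an actual URL; the server address below is a placeholder, not taken from this thread:

```
LABEL centos55-custom
  MENU LABEL CentOS 5.5 x86_64 Custom
  KERNEL CentOS/vmlinuz
  APPEND initrd=CentOS/initrd.img noapic text nofb ks=http://192.168.1.10/ks.cfg
```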

Jimmy Vance HPE Pro

11-17-2010 10:15 AM

Re: Unable to download the kickstart file

OK, so now we know you're working with CentOS.

Does the CentOS 5.5 initrd image you're booting from contain the correct NIC drivers for the server you're trying to install on? You didn't mention the server model.

Piotr Kirklewski Super Advisor

11-17-2010 02:30 PM

Re: Unable to download the kickstart file
It's a VM on ESXi 4.1.
I'm already using the same initrd here:
Label CentOS 5 64-bit installer & rescue
kernel centos5x64/vmlinuz
append initrd=centos5x64/initrd.img text vga=791

I have just copied it to a different dir and started working on the kickstart installation.
The manual installation goes without any problems, so I can't see why it would be different for the kickstart.
Jimmy Vance HPE Pro

11-17-2010 04:56 PM

Re: Unable to download the kickstart file
With a manual installation from media, it doesn't matter whether it is bare iron or a VM: the network does not come into play. For a network installation, bare iron or VM, the boot installer needs to be able to access the network to pull the ks file. While you're in the installer, switch over to one of the other console screens (F2, F4, etc.) and see if the network is functional.

Piotr Kirklewski Super Advisor

11-18-2010 04:52 PM

Re: Unable to download the kickstart file
Alt + F3 shows the following message:
ERROR: No network device in choose network device!
ERROR: No network drivers for dooing kickstart
ERROR: Unable to bring up network

Where do I go from there ?

Jimmy Vance HPE Pro

11-18-2010 05:12 PM

Re: Unable to download the kickstart file
Hopefully someone with more VMware experience than I have will chime in. I'm not sure what driver you need to load for the virtual NIC VMware presents to the guest OS
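One hedged possibility, not confirmed in this thread: the stock CentOS 5 installer initrd carries the Intel e1000 driver, so configuring the VM to present an e1000 NIC usually lets the loader bring the network up. The .vmx setting below is an assumption about a typical ESXi 4.x guest:

```
ethernet0.virtualDev = "e1000"
```

Choosing a guest OS type whose default virtual NIC is e1000 when creating the VM has the same effect.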

DeafFrog Valued Contributor

11-22-2010 12:05 AM

Re: Unable to download the kickstart file
Hi ,

Please check this:

section "network" > ESXi configuration guide > page 96 .

hope this helps..

Reg. FrogIsDeaf

[Aug 06, 2017] Customising Anaconda uEFI boot menu to include kickstart parameter - Red Hat Customer Portal

Aug 06, 2017

Customising Anaconda uEFI boot menu to include kickstart parameter
Latest response March 3 2017 at 6:50 AM

Hi

I am producing a remastered RHEL 6.5 which contains a custom kickstart. I have got the ISO to boot using the kickstart from a BIOS boot by making the standard modifications to isolinux.cfg (i.e. "append initrd=initrd.img ks=cdrom:/ks-x86_64.cfg"). However I cannot locate the correct file(s) to perform the same customisation when I boot to UEFI. I can enter the "ks=cdrom:/ks-x86_64.cfg" parameter to the UEFI anaconda boot menu by editing the kernel parameters but I cannot find a way of customising it like you can with editing isolinux.cfg.

Does anyone know how to customise the anaconda boot parameters when using UEFI?

Many thanks

Started March 21 2014 at 11:55 AM by Aidan Beeson

Guru 6863 points 21 March 2014 4:03 PM James Radtke Community Leader

Hey Aidan - I don't know this for certain (and don't have time to validate right now), but hopefully this gets you moving forward.

Look in /EFI/BOOT in your media. It seems to resemble what is in /isolinux

Specifically, check out BOOTX64.conf

#debug --graphics
timeout 5
title Red Hat Enterprise Linux 6.4
    kernel /images/pxeboot/vmlinuz
    initrd /images/pxeboot/initrd.img
title Install system with basic video driver
    kernel /images/pxeboot/vmlinuz xdriver=vesa nomodeset askmethod
    initrd /images/pxeboot/initrd.img
title rescue
    kernel /images/pxeboot/vmlinuz rescue askmethod
    initrd /images/pxeboot/initrd.img

I'll revisit this later if I have something to update/change. But, hopefully this is correct and helpful. ;-)

Community Member 87 points 21 March 2014 4:26 PM Aidan Beeson

Hi James,

That does look promising. Bit of a "d'oh" moment as it was kinda obvious really! When I get a chance to get onto the UEFI hardware again I'll have a play and update this thread...


Community Member 87 points 25 March 2014 10:07 AM Aidan Beeson

James,

You're correct, the configuration is located in /EFI/BOOT/BOOTX64.conf.
Unlike the non-EFI Anaconda boot, it uses a standard GRUB menu rather than the (slightly) fancier one. To get it to boot anaconda using the kickstart file I've used the following:

#debug --graphics
timeout 60
# hiddenmenu
title Install RHEL with kickstart
    kernel /images/pxeboot/vmlinuz ks=cdrom:/ks-x86_64.cfg
    initrd /images/pxeboot/initrd.img
title Install Standard Red Hat Enterprise Linux OS
    kernel /images/pxeboot/vmlinuz
    initrd /images/pxeboot/initrd.img
title Install system with basic video driver
    kernel /images/pxeboot/vmlinuz xdriver=vesa nomodeset askmethod
    initrd /images/pxeboot/initrd.img
title rescue
    kernel /images/pxeboot/vmlinuz rescue askmethod
    initrd /images/pxeboot/initrd.img

Many thanks


Guru 6863 points 25 March 2014 2:27 PM James Radtke Community Leader

Good to hear it. I'm glad we were on the right page. I believe this (uEFI) will start to become more of a hot topic as time goes on. (I know I have a lot to learn yet ;-)

Community Member 30 points 9 March 2015 5:33 PM Jun Li

Yes, UEFI is becoming more popular now. UEFI is standard for HP Gen9 servers, while the IBM/Lenovo x series made UEFI standard a couple of years ago.

Satellite 6 is falling behind; so far it doesn't support PXE UEFI kickstart. The reasons are:
1. The tftp server doesn't have a UEFI boot image, only a BIOS boot image.
2. The dhcp server config doesn't have a definition for UEFI-based PXE requests. This actually isn't a Satellite/Foreman issue, because Foreman is missing the function of updating dhcpd.conf when a subnet is defined.
3. A UEFI-compatible PXE config file is missing when a new host is defined in Foreman/Satellite;
the one defined in /var/lib/tftpboot/pxelinux.cfg only works for BIOS-based PXE.

Based on those 3 issues, I got PXE UEFI kickstart working this morning by addressing each of them:

  1. Manually add the UEFI boot image to the tftp server:

[root@capsule tftpboot]# pwd
[root@capsule tftpboot]# cd efi/
[root@capsule efi]# ls
bootx64.efi efidefault images splash.xpm.gz TRANS.TBL
[root@capsule efi]# ls -l images/pxeboot/
total 36644
-r--r--r-- 1 root root 33383449 Mar 6 12:27 initrd.img
-r--r--r-- 1 root root 441 Mar 6 12:27 TRANS.TBL
-r-xr-xr-x 1 root root 4128944 Mar 6 12:27 vmlinuz
[root@capsule efi]#
[root@capsule efi]# cat efidefault

#debug --graphics

timeout 5
title Red Hat Enterprise Linux 6.5
root (nd)
kernel /images/pxeboot/vmlinuz ks= ksdevice=bootif network kssendmac
title Install system with basic video driver
kernel /images/pxeboot/vmlinuz xdriver=vesa nomodeset askmethod
initrd /images/pxeboot/initrd.img
title rescue
kernel /images/pxeboot/vmlinuz rescue askmethod
initrd /images/pxeboot/initrd.img
[root@capsule efi]#

  2. Add a section to serve UEFI-based PXE requests, so the client will get the bootx64.efi boot image instead of pxelinux.0:

class "pxeclients" {
        match if substring (option vendor-class-identifier, 0, 9) = "PXEClient";

        if option arch = 00:06 {
                filename "efi/bootia32.efi";
        } else if option arch = 00:07 {
                filename "efi/bootx64.efi";
        } else {
                filename "pxelinux.0";
        }
}

  3. Every time a new host is defined in Satellite 6/Foreman, you will get an unattended provision URL; copy this URL and replace the ks URL in /var/lib/tftpboot/efi/efidefault.

Now you should be able to kickstart a UEFI system via PXE in a Satellite 6/Capsule environment.

Community Member 95 points 6 March 2015 12:02 AM David Worth

Hi,

I am trying to create a custom iso for kickstart builds on servers with UEFI. From looking at the post, it looks like you were able to create a boot iso for EFI. If that is the case, what were the steps that you followed to create it? I am struggling to find good detailed info.


Community Member 39 points 9 March 2015 3:35 PM Aidan Beeson

Hi David,

There are probably better/different ways of doing this but this is how I got it to work:

  1. Loop mount the install iso.
  2. Create local disk copies of the "isolinux", "EFI" and "images" directories.
  3. Modify files in isolinux & EFI directories as required (e.g. isolinux.cfg and EFI/BOOT/BOOTX64.conf)
  4. Create the new ISO:
 mkisofs -o my.iso \
        -R -J -A "MyProject" \
  -hide-rr-moved \
  -v -d -N \
  -no-emul-boot -boot-load-size 4 -boot-info-table \
  -b isolinux/isolinux.bin \
  -c isolinux/isolinux.boot \
  -eltorito-alt-boot -no-emul-boot  \
  -eltorito-boot images/efiboot.img \
  -x ${mountDIR}/isolinux \
  -x ${mountDIR}/images \
  -x ${mountDIR}/EFI \
  -x .svn \
  -graft-points /path/to/loopmount/install_dvd my_kickstart.cfg=my_kickstart.cfg isolinux/=isolinux images=images EFI=EFI

The isolinux.cfg and BOOTx64.conf should contain a reference to the ks file, for example:


label linux
  menu label ^Install OS using kickstart
  menu default
  kernel vmlinuz
  append initrd=initrd.img ks=cdrom:/my_kickstart.cfg
label vesa
  menu label Install ^standard Red Hat Enterprise Linux OS
  kernel vmlinuz
  append initrd=initrd.img 
label rescue
  menu label ^Rescue installed system
  kernel vmlinuz
  append initrd=initrd.img rescue
label local
  menu label Boot from ^local drive
  localboot 0xffff
label memtest86
  menu label ^Memory test
  kernel memtest
  append -


  title Install OS using kickstart
        kernel /images/pxeboot/vmlinuz ks=cdrom:/my_kickstart.cfg
        initrd /images/pxeboot/initrd.img
title Install standard Red Hat Enterprise Linux OS
        kernel /images/pxeboot/vmlinuz
        initrd /images/pxeboot/initrd.img
title Install system with basic video driver
        kernel /images/pxeboot/vmlinuz xdriver=vesa nomodeset askmethod
        initrd /images/pxeboot/initrd.img
title rescue
        kernel /images/pxeboot/vmlinuz rescue askmethod
        initrd /images/pxeboot/initrd.img

Hope this helps. I've tested it using a VMware EFI emulation but not in anger on any "real" EFI systems.


Community Member 95 points 11 March 2015 8:23 PM David Worth

Thanks for the info. We are building new servers on HP Gen8 and Gen9 servers. The Gen9's are defaulting to UEFI. For this go around I reverted back to legacy, but we will have more builds to come, so it would be nice to figure out a way to boot and install using UEFI. I am guessing that is the direction hardware vendors are going. Also interesting that it was not easy to find a lot of good info; just pieces here and there.

I will give this a try on the next hardware build using EFI.

Thanks again!

Community Member 95 points 22 March 2015 12:44 AM David Worth

Hi Aidan

Thanks again for the info. I was troubleshooting why boot from SAN was not working with an HP BL460c Gen9 blade. I was trying legacy mode, but it would not boot after install from kickstart. Waiting for HP on this issue.

However, I created a dual boot ISO, legacy and EFI. I was able to set the bios to EFI and image the server from kickstart. I just added the following to EFI/BOOT/BOOTX64.conf:

#debug --graphics

timeout 360
title RHEL 6.6 l00l
kernel /images/pxeboot/vmlinuz ks=nfs:nfsserver:/ifs/data/kickstart/KSConfigs/l001/l001-ks.cfg initrd=rhe6664.img ksdevice=eth0 ip= gateway= netmask= dns=
initrd /images/pxeboot/initrd.img
title RHEL 6.6 l002
kernel /images/pxeboot/vmlinuz ks=nfs:nfsserver:/ifs/data/kickstart/KSConfigs/l002/l002-ks.cfg initrd=rhe6664.img ksdevice=eth0 ip= gateway= netmask= dns=
initrd /images/pxeboot/initrd.img
title Red Hat Enterprise Linux 6.6
kernel /images/pxeboot/vmlinuz
initrd /images/pxeboot/initrd.img
title Install system with basic video driver
kernel /images/pxeboot/vmlinuz xdriver=vesa nomodeset askmethod
initrd /images/pxeboot/initrd.img
title rescue
kernel /images/pxeboot/vmlinuz rescue askmethod
initrd /images/pxeboot/initrd.img

There was only one instance where the ISO booted up and went to a grub menu. Not sure why, but I rebooted and everything was good. So your steps work on physical hardware as well.


Community Member 35 points 18 June 2015 5:38 PM Jose Carlos Alves

Hi David,

What do you have in your isolinux/isolinux.cfg file?


Community Member 95 points 18 June 2015 10:24 PM David Worth

Hi Sarah,

To set up a dual legacy and EFI boot iso, I mount the latest RHEL 6.x DVD and copy the following directories to a work area on my server.

EFI/ images/ isolinux

If you are just setting up legacy boot, you can ignore the EFI and images directories.

Here is an example of what I am putting in the isolinux.cfg

------start of file --------
default vesamenu.c32

prompt 1

timeout 600

display boot.msg

menu background splash.jpg
menu title Welcome to Red Hat Enterprise Linux 6.6!
menu color border 0 #ffffffff #00000000
menu color sel 7 #ffffffff #ff000000
menu color title 0 #ffffffff #00000000
menu color tabmsg 0 #ffffffff #00000000
menu color unsel 0 #ffffffff #00000000
menu color hotsel 0 #ff000000 #ffffffff
menu color hotkey 7 #ffffffff #ff000000
menu color scrollbar 0 #ffffffff #00000000

label linux
menu label ^Install or upgrade an existing system
menu default
kernel vmlinuz
append initrd=initrd.img
label vesa
menu label Install system with ^basic video driver
kernel vmlinuz
append initrd=initrd.img xdriver=vesa nomodeset
label rescue
menu label ^Rescue installed system
kernel vmlinuz
append initrd=initrd.img rescue
label local
menu label Boot from ^local drive
localboot 0xffff
label servera-set-Network
kernel vmlinuz
append initrd=initrd.img ksdevice=eth0 ip= gateway= netmask= dns=192.168.1.2
label serverb-dhcp
kernel vmlinuz
append initrd=initrd.img ksdevice=eth0

------end of file --------

Then you have to run mkisofs to create the iso.

Hope that helps.


Community Member 35 points 19 June 2015 8:15 AM Jose Carlos Alves

Hi David,

My problem is that I have an HP Gen9 with UEFI and not EFI.

I have

In the isolinux.cfg, I have this


prompt 1

timeout 5

label ptmtshdpnopp01
kernel vmlinuz
append initrd=initrd.img ksdevice=eth0 ip= netmask= gateway= dns= ks=nfs:

label ptmtshdpnopp02
kernel vmlinuz

append initrd=initrd.img ksdevice=eth0 ip= netmask= gateway= dns= ks=nfs:

I create the iso with

mkisofs -N -J -joliet-long -D -V "HADOOP" -o rhel-server-6.6-x86_64-hadoop.iso -b "isolinux/isolinux.bin" -c "isolinux/" -hide "isolinux/" -no-emul-boot -boot-load-size 4 -boot-info-table isolinux-6.6-x86_64/

But the machine doesn't see the iso

Thanks for the help

Community Member 95 points 19 June 2015 1:18 PM David Worth

Hi Sara,

I ran into the same thing earlier this year when we started using Gen9's. We had issues with boot from SAN and with switching them to legacy mode, so we went with EFI, aka UEFI. EFI or UEFI boot is a different method of managing booting the OS; it was originally developed by Intel for the Itanium servers. You can search for the features and differences versus legacy boot.

The way I went after booting with an ISO for kickstart installs is to create a dual boot ISO, legacy and EFI boot. Anything that will boot from legacy mode will read the isolinux.cfg and EFI mode will read the Bootx64.conf.

To create a dual ISO, create a directory and copy the EFI/, images/ and isolinux/ directories from an install DVD. Next, all your legacy boot entries will go into isolinux/isolinux.cfg; you can see the example I have above. For EFI, you will have to edit EFI/BOOT/BOOTx64.conf. Here is an example of what I put into mine.


#debug --graphics

timeout 900
title Red Hat Enterprise Linux 6.6
kernel /images/pxeboot/vmlinuz
initrd /images/pxeboot/initrd.img
title rescue
kernel /images/pxeboot/vmlinuz rescue askmethod
initrd /images/pxeboot/initrd.img
title server-d
kernel /images/pxeboot/vmlinuz ksdevice=eth0 ip= gateway= netmask= dns=
initrd /images/pxeboot/initrd.img
title server-e
kernel /images/pxeboot/vmlinuz ksdevice=eth0
initrd /images/pxeboot/initrd.img

In the example above, I put in an example of setting an IP as well as using DHCP. After you edit your legacy or EFI configs, then you have to create the ISO. Here is what I run to create a dual ISO:

  1. cd into the directory that contains the 3 directories I mentioned earlier.
  2. run > mkisofs -o ../my-iso-name.iso -R -J -A "MyDualISO" -hide-rr-moved -v -d -N -no-emul-boot -boot-load-size 4 -boot-info-table -b isolinux/isolinux.bin -c isolinux/isolinux.boot -eltorito-alt-boot -no-emul-boot -eltorito-boot images/efiboot.img -x ${mountDIR}/isolinux -x ${mountDIR}/images -x ${mountDIR}/EFI .

Also, after this is done, your kickstart will need to work with EFI boots. First, the partition table needs to be GPT. Next, the bootloader location needs to be "partition". Last, you will need a /boot/efi partition with fstype efi and size 200.

If you need help with that, I can send you my disk layout from my kickstart config. Just let me know.


Community Member 97 points 5 August 2015 7:26 PM Ivan Borghetti

Hello David, how are you? I am also trying to create an iso that supports dual boot. I am now testing the UEFI boot and it works; however, I still need to customize the menu to list my different kickstarts, etc.

After copying the directories I needed, I created the iso by running mkisofs; however, I did not have the following lines in my command:

-x ${mountDIR}/isolinux -x ${mountDIR}/images -x ${mountDIR}/EFI .

Would you mind explaining why those are needed, and what ${mountDIR} would be? I guess it is the parent directory where those 3 subdirectories are, right? In my case it would be iso:

├── EFI
│ └── BOOT
├── images
│ └── pxeboot
└── isolinux

thanks in advance

Community Member 95 points 8 August 2015 10:27 AM David Worth

Hi Ivan,

I am almost sure you do not need them. The -x option is similar to -m, which, if you look in the man page, allows you to exclude files. I was basing my mkisofs command on the one in this thread above. I thought I had removed it from the one I use, but I still have it in.

Try removing it and see what happens. Can you let me know the outcome? Just curious.

Community Member 97 points 10 August 2015 12:12 PM Ivan Borghetti

Hello David, thanks for your response. I tried without those parameters and it worked without issues.

thanks again

Community Member 95 points 10 August 2015 1:06 PM David Worth

Great. Thanks for letting me know. I will take it out of my build script as well.

Community Member 35 points 19 June 2015 2:37 PM Jose Carlos Alves

Hi David,

Yes please, let me know what you have in your kickstart config about the disk layout.


Community Member 95 points 19 June 2015 3:05 PM David Worth

Hi Sara,

I am setting up my disk partitioning in a %pre section, then writing the disk layout to a file that gets included. The script will check all drives and only select a disk that is greater than 130000 MB, roughly 126 GB. This is to deal with the ISO, USB, or cdrom that you are using to image the server: it will be ignored, and my disk, which is 134 GB, selected.


Disk Configuration

%include /tmp/partitioning

End Disk Configuration

%pre --log /root/ks-rhn-pre.log

Find OS disk

list-harddrives | while read DISK DSIZE
#Convert float into int
# get scsi ID for disk
scsi_id="$(/mnt/runtime/lib/udev/scsi_id -gud /dev/${DISK})"
echo "F-$DSIZE I-$DSIZEI-- ID $scsi_id"

# determine if we should partition this device
###  if disk is smaller than 130000, than change the size in mb to reflect it.
if [ ${DSIZEI} -gt 130000 ]; then
     # add device to ignoredisk --only-use list
     ##the following is for SAN disks##
     ##End SAN Disk###

    ##if using local disks, comment the tdsk above and use this###
    ###End Local disk###

    echo "DISK - $tdsk"
Create GPT partition

echo "creating gpt on ${tdsk}"
parted -s ${tdsk} mklabel gpt

cat << EOF >> /tmp/partitioning
bootloader --location=partition --driveorder=${tdsk}
ignoredisk --only-use=${tdsk}
clearpart --linux --drives=${tdsk}

part /boot/efi --fstype=efi --size=200 --ondisk=${tdsk}
part /boot --fstype ext4 --size=512 --ondisk=${tdsk}
part pv.4 --size=100 --grow --ondisk=${tdsk}
volgroup vg00 --pesize=32768 pv.4

logvol / --fstype ext4 --name=lvroot --vgname=vg00 --size=10240
logvol /opt --fstype ext4 --name=lvopt --vgname=vg00 --size=2048
logvol /home --fstype ext4 --name=lvhome --vgname=vg00 --size=2048
logvol /tmp --fstype ext4 --name=lvtmp --vgname=vg00 --size=6144
logvol /var --fstype ext4 --name=lvvar --vgname=vg00 --size=5120
logvol /usr/local --fstype ext4 --name=lvulocal --vgname=vg00 --size=4096
logvol swap --fstype swap --name=lvswap1 --vgname=vg00 --size=16384
logvol swap --fstype swap --name=lvswap2 --vgname=vg00 --size=16384



Also note, I am booting from SAN so this line will work for SAN disks. > tdsk="/dev/disk/by-id/scsi-${scsi_id}"

If you are using local disks or VMware, use this > tdsk="/dev/$DISK"

I added comments to the pre script so you can see where to make changes. Hope that helps.


Community Member 95 points 19 June 2015 3:18 PM David Worth

It seems the formatting of the code is off. Let me try again. Raw
##Disk Configuration##
%include /tmp/partitioning

##End Disk Configuration##

%pre --log /root/ks-rhn-pre.log
##Find OS disk

 list-harddrives | while read DISK DSIZE
    #Convert float into int
    # get scsi ID for disk
    scsi_id="$(/mnt/runtime/lib/udev/scsi_id -gud /dev/${DISK})"
        echo "F-$DSIZE I-$DSIZEI-- ID $scsi_id"

    # determine if we should partition this device
###  if disk is smaller than 130000, than change the size in mb to reflect it.
    if [ ${DSIZEI} -gt 130000 ]; then
         # add device to ignoredisk --only-use list
##the following is for SAN disks##
##End SAN Disk###

    ##if using local disks, comment the tdsk above and use this###
    ###End Local disk###

        echo "DISK - $tdsk"

##Create GPT partition
echo "creating gpt on ${tdsk}"
parted -s ${tdsk} mklabel gpt

cat << EOF >> /tmp/partitioning
bootloader --location=partition --driveorder=${tdsk}
ignoredisk --only-use=${tdsk}
clearpart --linux --drives=${tdsk}

part /boot/efi --fstype=efi --size=200 --ondisk=${tdsk}
part /boot --fstype ext4 --size=512 --ondisk=${tdsk}
part pv.4 --size=100 --grow --ondisk=${tdsk}
volgroup vg00 --pesize=32768 pv.4

logvol / --fstype ext4 --name=lvroot --vgname=vg00 --size=10240
logvol /opt --fstype ext4 --name=lvopt --vgname=vg00 --size=2048
logvol /home --fstype ext4 --name=lvhome --vgname=vg00 --size=2048
logvol /tmp --fstype ext4 --name=lvtmp --vgname=vg00 --size=6144
logvol /var --fstype ext4 --name=lvvar --vgname=vg00 --size=5120
logvol /usr/local --fstype ext4 --name=lvulocal --vgname=vg00 --size=4096
logvol swap --fstype swap --name=lvswap1 --vgname=vg00 --size=16384
logvol swap --fstype swap --name=lvswap2 --vgname=vg00 --size=16384


Community Member 35 points 29 June 2015 8:52 AM Jose Carlos Alves

Thanks David,

I already installed my machines with your help.

Best regards,
Sara Soares

Community Member 32 points 1 October 2015 6:45 PM Gustavo Vegas

Hello everyone,

I have a similar requirement; in our case we use ISO images to kickstart our servers. With pieces of information from this thread as well as some other resources out there, I have been able to get the RHEL6 side working. Now I am also trying to do this for RHEL7, and things seem to have changed: BOOTX64.conf seems to be getting ignored, and GRUB2's grub.cfg seems to be the one being taken into account. Any insights on how to work this out on RHEL7? Any help would be appreciated.


Red Hat Active Contributor 190 points 2 October 2015 1:07 PM Petr Bokoc

Hello,

For RHEL7 on UEFI systems, the file you want to modify on the boot media is EFI/BOOT/grub.cfg. You can append the inst.ks= parameter to the line starting with linuxefi in any of the entries. The second entry is selected by default; you can use the set default= option at the beginning of the file to change that (entries are numbered from 0).

After you change grub.cfg to your preferences, you can follow the instructions in the Anaconda Customization Guide to create a new bootable ISO image with the modified boot menu.
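A hedged sketch of what such a modified entry might look like; the volume label and kickstart location below are placeholders, not taken from this post:

```
set default="0"

menuentry 'Install Red Hat Enterprise Linux 7 with kickstart' {
    linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-7.0\x20Server.x86_64 inst.ks=cdrom:/ks.cfg quiet
    initrdefi /images/pxeboot/initrd.img
}
```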

Newbie 11 points 3 March 2017 11:48 AM systeembeheer beeldengeluid

Hello, just an update to help a few people along the way: on RHEL 7.3 the dual (UEFI/BIOS) boot iso can be built with a few steps.

Use the following steps: copy the content of a boot iso image, say "rhel-server-7.3-x86_64-boot.iso", to your own image directory, where you modify your dual-boot image. Modify EFI/BOOT/grub.cfg for your UEFI boot needs. Modify isolinux/isolinux.cfg for your BIOS-based boot needs.

Now the right mkisofs command (this one did it for me). Start this in the root of your own image directory:

mkisofs -U -A "RHEL-7.3 x86_64" -V "RHEL-7.3 x86_64" -volset "RHEL-7.3 x86_64" \
  -J -joliet-long -r -v -T -x ./lost+found \
  -o ../Kickstart_lab_7.3-disc1-dualboot.iso \
  -b isolinux/isolinux.bin -c isolinux/ -no-emul-boot -boot-load-size 4 \
  -boot-info-table -eltorito-alt-boot -e images/efiboot.img -no-emul-boot .

The volume name ("RHEL-7.3 x86_64") may come back to bite you (I haven't tried to change it). This label is important in the UEFI part of your dual boot and comes back in EFI/BOOT/grub.cfg in two places: first in the search line, and second after the inst.stage2 parameter in the menu entry.

------------------- snippet grub.cfg ------

search --no-floppy --set=root -l 'RHEL-7.3 x86_64'

### BEGIN /etc/grub.d/10_linux ###
menuentry 'Install Red Hat Enterprise Linux 7.3' --class fedora --class gnu-linux --class gnu --class os {
    linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-7.3\x20x86_64 quiet
    initrdefi /images/pxeboot/initrd.img
}


After the inst.stage2 parameter you can place whatever inst.ks parameters you need.

The isolinux/isolinux.cfg holds your BIOS-based boot parameters.

Have fun.. and good luck

[Aug 06, 2017] Nathan Mike - Senior System Engineer - LPI1-C and CLA-11

Notable quotes:
"... mkdir -p bootdisk/RHEL ..."
"... example: cp -R /mnt/isolinux/* ~/bootdisk/RHEL/ ..."
"... cd ~/bootdisk/RHEL/ ..."
"... example: cp ks.cfg ~/bootdisk/RHEL/ ..."
"... mkisofs -r -T -J -V "RedHat KSBoot" -b isolinux.bin -c boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -v -o linuxboot.iso . ..."
"... linux ks=cdrom:/ks.cfg ..."
"... linux ks=cdrom:/ks.cfg append ip=<IPaddress> netmask=<netmask> ksdevice=<NICx> ..."
"... example: linux ks=cdrom:/ks.cfg append ip= netmask= ksdevice=eth0 ..."
Aug 06, 2017
How to create a kickstart ISO boot disk for RedHat

Posted by mikent on April 12, 2012

1) logon as root

2) create a directory named bootdisk/RHEL

mkdir -p bootdisk/RHEL

3) copy the isolinux directory from your RedHat DVD (or another location containing the RedHat binaries) into bootdisk/RHEL

example: cp -R /mnt/isolinux/* ~/bootdisk/RHEL/

4) change directory to ~/bootdisk/RHEL/

cd ~/bootdisk/RHEL/

5) create (or copy) your ks.cfg in ~/bootdisk/RHEL/ (how to create a kickstart file will be discussed later in another post)

example: cp ks.cfg ~/bootdisk/RHEL/

6) Now you can create the ISO boot disk as follows (make sure you run the command from ~/bootdisk/RHEL/):

mkisofs -r -T -J -V "RedHat KSBoot" -b isolinux.bin -c boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -v -o linuxboot.iso .

7) Burn your linuxboot.iso to a blank CD-ROM, or mount it as-is on a virtual machine, for example

8) At linux boot prompt, type the following command:

linux ks=cdrom:/ks.cfg

if you need to install using a specific IP address or a specific ks boot device, type the following:

linux ks=cdrom:/ks.cfg append ip=<IPaddress> netmask=<netmask> ksdevice=<NICx>

example: linux ks=cdrom:/ks.cfg append ip= netmask= ksdevice=eth0

9) you are done!
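The steps above can be sketched as one script. Paths and the /mnt mount point are placeholders, and boot.cat is an assumed catalog filename for the -c option, which the original command left incomplete:

```shell
# Build a kickstart boot ISO from the RedHat DVD's isolinux directory.
mkdir -p ~/bootdisk/RHEL
cp -R /mnt/isolinux/* ~/bootdisk/RHEL/   # /mnt is where the DVD is mounted
cp ks.cfg ~/bootdisk/RHEL/               # your kickstart file
cd ~/bootdisk/RHEL/
mkisofs -r -T -J -V "RedHat KSBoot" -b isolinux.bin -c boot.cat \
  -no-emul-boot -boot-load-size 4 -boot-info-table -v -o linuxboot.iso .
```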

[Aug 06, 2017] How can I troubleshoot a failing Linux kickstart installation

Aug 06, 2017
I am having some trouble getting my PXE-booting kickstart process working.

The script is known to work when installed via a DVD with the local media on the disk. I have updated the script to work with a remote repository and am using PXE booting so this can be leveraged across my enterprise. The script executes fine up until it starts to download packages.

Checking the log files on the web server hosting the repository, the first file is downloaded successfully with a 200 HTTP code. But the server running the kickstart indicates a failure and attempts to download the package again. I have confirmed this on the web server, as I see multiple requests for the same package repeated, all with the 200 HTTP code. But kickstart indicates that the download failed.

I am using CentOS 5. I copied the entire first DVD (I do not need the OpenOffice suite from the second) to the location on the web server, so the repodata already exists.

I have been able to successfully download software from this repository using other systems that have already been built.

No errors appear in any of the log files that I have found on the kickstarted server, no messages are output to the screen. I have not been able to find any means of debugging this issue.

I am hoping that someone here can provide a link or information on how to attain more detailed debugging information to resolve this.

Thank you in advance.

asked May 22 '12 at 13:48 by Nick V
Try hitting ALT+F2 and ALT+F3 on the server you are kickstarting to see if it has any additional information that might help. – becomingwisest May 22 '12 at 14:03
If you download the package from that URL, do you get a valid package? – larsks May 22 '12 at 14:22
I have tried ALT+F2 and ALT+F3. F2 brings me to the BusyBox prompt, but no error messages. I have looked through the entire filesystem and nothing. The install logs under /mnt/sysimage/root/ do not even contain information. – Nick V May 22 '12 at 14:31
I am able to download a package from the repository without error. The issue only exists when I attempt to use the repository for installation. I even attempted to run a manual installation from the repository. The image/stage2.img file is loaded, but when it comes time to download packages the behavior is the same. At this point, I have loopback-mounted the first DVD but the same issue exists. (using CentOS 5.8) – Nick V May 22 '12 at 14:32
1 Answer (accepted)

Can you post an excerpt of your kickstart? What is the relationship between the repository and the system you're building? Same subnet?

I had a period of kickstart installation problems during the middle of the CentOS 5 series. The best thing to do from your standpoint is to check the other virtual terminals during the installation. Are you running the installation in graphical (X Windows) or text mode?

Here's what the different virtual terminals display (switch with Ctrl+Alt+Fn). You should be able to debug from there.

tty1: The installation dialog when using text or cmdline

tty2: A shell prompt

tty3: The install log displaying messages from the install program

tty4: The system log displaying messages from kernel, etc.

tty5: All other messages

tty7: The installation dialog when using the graphical installer (tty6 on CentOS 6 and later)

At some point, I was unable to resolve a kickstart performance issue. I ended up changing the installation method to NFS and the issues disappeared. See: CentOS 5.5 remote kickstart installation stalls at "Starting install process." How to debug?

answered May 22 '12 by ewwhite; edited Apr 13 by Community
For the same mystical reason, the NFS change resolved the issue. – Nick V Aug 6 '12 at 4:33

[Aug 06, 2017] TipsAndTricks-KickStart - CentOS Wiki

Tips and tricks for anaconda and kickstart

For full documentation, please see the official kickstart references for CentOS 5, CentOS 6, or CentOS 7

Tuning the %packages section

When using %packages to define the set of packages that should be installed, there are a number of more or less documented options that can be set:

--resolvedeps: dependencies between packages will be automatically resolved. This option has been deprecated since CentOS 5; dependencies are resolved automatically every time now.

--excludedocs: skips the installation of files that are marked as documentation (all files that are listed when you do rpm -qld <packagename>)

--nobase: skips installation of @Base. Don't use this unless you know what you're doing, as it might leave out packages required by post-installation scripts.

--ignoremissing: ignores missing packages and groups instead of asking what to do.

Example of minimal package selection for CentOS 4:

%packages --resolvedeps --excludedocs --nobase

Please note that essential software will be missing: there will be no rpm, no yum, no vim, no dhcp-client, and no keyboard layouts. Kudzu is required, because the installer fails if it is missing.

Example of minimal package selection for CentOS 5:

%packages --excludedocs --nobase

Again, this will leave you with a *very* basic system that will be missing almost every feature you might expect.

The --resolvedeps used with CentOS 4 is not required for CentOS 5 and newer releases as the newer installer always resolves dependencies.


If you start out with an unpartitioned disk, or a virtual machine on an unpartitioned image, use the --initlabel parameter to clearpart to make sure that the disk label is initialized; otherwise Anaconda will ask you to confirm creation of a disk label interactively. For instance, to clear all partitions on xvda, and initialize the disk label if it does not exist yet, you could use:

clearpart --all --initlabel --drives=xvda
Running anaconda in real text-mode

You probably already know that you can make anaconda run with an ncurses interface instead of the X11 interface by adding the line "text" to your kickstart file. But there's another option: installing in a real shell-like text mode. Replace the "text" line with a "cmdline" line in your kickstart file and anaconda will do the whole installation in text mode. Especially when you use %packages --nobase or run complex %post scripts, this will probably save hours of debugging, because you can actually see the output of all scripts that run during the installation.
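For example, the command section of such a kickstart file might begin like this sketch (everything besides the cmdline directive is illustrative):

```
# "cmdline" instead of "text": no ncurses UI; output of %pre/%post
# scripts goes straight to the console
cmdline
lang en_US
keyboard us
```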

Enable/disable firstboot

You all know firstboot, the druid that helps you to set up the system after install. It can be enabled and disabled by adding either "firstboot --enable" or "firstboot --disable" to the command section of your kickstart file.

What the different terminals display

tty1: The installation dialog when using text or cmdline

tty2: A shell prompt

tty3: The install log displaying messages from the install program

tty4: The system log displaying messages from kernel, etc.

tty5: All other messages

tty7: The installation dialog when using the graphical installer (tty6 on CentOS 6 and later)
Logging %pre and %post

When using a %pre or %post script you can simply log the output to a file by using --log=/path/to/file

%post --log=/root/my-post-log
echo 'Hello, World!'

Another way of logging and displaying the results on the screen would be the following:

(
exec < /dev/tty3 > /dev/tty3
chvt 3
echo "################################"
echo "# Running Post Configuration   #"
echo "################################"
echo 'Hello, World!'
) 2>&1 | /usr/bin/tee /var/log/post_install.log
chvt 1
Trusted interfaces for firewall configuration

You can pass the --trust option to the firewall command multiple times to trust multiple interfaces:

# Enable firewall, open port for ssh and make eth1 and eth2 trusted
firewall --enable --ssh --trust=eth1 --trust=eth2
Use a specific network interface for kickstart

When your system has more than one network interface, anaconda asks you which one you'd like to use for the kickstart process. This decision can be made at boot time by adding the ksdevice parameter and setting it accordingly. To run kickstart via eth0, simply add ksdevice=eth0 to the kernel command line.

A second method is using ksdevice=link. In this case anaconda will use the first interface it finds that has an active link.

A third method works if you are doing PXE-based installations. Add IPAPPEND 2 to the PXE configuration file and use ksdevice=bootif. In this case anaconda will use the interface that did the PXE boot (which is not necessarily the first one with an active link).
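A matching pxelinux.cfg entry for this third method might look like the following sketch (kernel, initrd, and kickstart URL are hypothetical):

```
# IPAPPEND 2 makes pxelinux append BOOTIF=<MAC address> to the kernel
# command line, which is what ksdevice=bootif reads
LABEL centos-ks
  KERNEL vmlinuz
  APPEND initrd=initrd.img ks=http://ks.example.com/ks.cfg ksdevice=bootif
  IPAPPEND 2
```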

Within the kickstart config itself you need to define the network interfaces using the network statement. If you are using method 2 or 3, you don't know in advance which device will actually be used. If you don't specify a device for the network statement, anaconda will configure the device used for the kickstart process and set it up according to your network statement.
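For example, when booting with ksdevice=link or ksdevice=bootif, a device-less network statement lets anaconda apply the configuration to whichever interface was actually used (the hostname below is a hypothetical placeholder):

```
# No --device given: anaconda configures the interface used for kickstart
network --bootproto=dhcp --hostname=node01.example.com --onboot=on
```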

Forcing kickstart to ask for network configuration

Starting with CentOS 5, there is an undocumented option that enables a prompt asking for network configuration during the installation. In the network statement, use the query keyword as the value of --bootproto, as shown below:

network --device=eth0 --bootproto=query

A dialog box will then appear asking for IP addressing, as well as the hostname configuration.

Useful collection of ready-made kickstart files

There are collections of ready-made kickstart files available online. Their primary goal is to provide functional sample kickstarts and snippets for the various types of deployments used in the community.

[Aug 06, 2017] Unable to get kickstart file from http webserver

I'm trying to get a VM up using a kickstart file. However, whenever the virtual machine initializes, it says it is unable to locate the kickstart file at the location provided.

Code to build vm:

virt-install --name guest --ram 2048 --disk /vm/guest.img --location /CentOS-6.6-x86_64-bin-DVD1.iso -x "ks= ksdevice=eth0 ip= netmask= dns= gateway="
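For comparison, a fully filled-in invocation would look something like the following sketch; every URL and address here is a hypothetical placeholder, not the poster's actual value (the ks= parameter must point at an HTTP URL reachable from the guest):

```
virt-install --name guest --ram 2048 --disk /vm/guest.img \
  --location /CentOS-6.6-x86_64-bin-DVD1.iso \
  -x "ks=http://192.168.122.1/ks.cfg ksdevice=eth0 ip=192.168.122.50 netmask=255.255.255.0 gateway=192.168.122.1 dns=192.168.122.1"
```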

kickstarter file:

#platform=x86, AMD64, or Intel EM64T
# Firewall configuration
firewall --disabled
# Install OS instead of upgrade
install
# Use network installation
url --url=""
# Root password
rootpw --iscrypted $1$AcXRM2i4$9Wzd1rjvrLNREmeIsM9.W1
# System authorization information
auth  --useshadow  --passalgo=sha512
# Use graphical install
graphical
firstboot --disable
# System keyboard
keyboard us
# System language
lang en_US
# SELinux configuration
selinux --enforcing
# Installation logging level
logging --level=info

# System timezone
timezone  Asia/Singapore
# Network information
network  --bootproto=dhcp --device=eth0 --onboot=on
# System bootloader configuration
bootloader --location=mbr
# Clear the Master Boot Record
zerombr
# Partition clearing information
clearpart --all  
# Disk partitioning information
part /boot --fstype="ext4" --size=100
part swap --fstype="swap" --size=512
part / --fstype="ext4" --grow --size=1


The file is located at the /var/www/html directory of the webserver.

Any advice on what I may have missed will be greatly appreciated.

asked Jan 28 '15 by user4985
Make sure that your .cfg file has the right permissions and is readable by other users/systems. You may try to wget it, or simply open it from any other PC in your network, and see if it works.

If you have a permissions problem, try setting the mode with chmod to 666.

chmod 666
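One way to check this mechanically before kicking off the install: a small sketch (the path used in the example comment is hypothetical) that verifies the file is world-readable, which is what the web server needs in order to serve it:

```python
# Sketch: check that a kickstart file is readable by "other" users,
# i.e. that Apache (running as a non-owner account) can serve it.
import os
import stat

def world_readable(path):
    """Return True if the file's mode grants read access to 'other'."""
    mode = os.stat(path).st_mode
    return bool(mode & stat.S_IROTH)

# Example (hypothetical path):
# world_readable("/var/www/html/ks.cfg")
```

Note that chmod 644 is already sufficient for this check to pass; 666 additionally grants world write, which the installer does not need.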

[Aug 02, 2017] EWONTFIX - Systemd has 6 service startup notification types, and they're all wrong

Notable quotes:
"... Socket activation is a misfeature; it doesn't matter if it can be done cross-platform, it's still just dumb. It leads to a system appearing to boot a tiny bit faster, but you can no longer guarantee that everything is actually *up* at that point. I don't care much about speeding up boot times but I would still call that a win - IF it wasn't achieved at the cost of making the entire boot process unreliable - but it's not worth anything like the price you pay here. ..."
"... Anyway, the feature creep of systemd is the scary part to many of us. Why does my init process need to be my automounter, my syslog, my inetd? ..."
"... Personally, I'm just horribly upset about them changing the ordering of the start/stop/restart arguments for no good reason. What advantage does systemctl start apache have over systemctl apache start? ..."
"... You know what makes a laptop really boot faster? Without breaking anything? Suspend to disk. ..."
"... Only idiots who spend all their time rebooting and doing nothing else value boot-up times above all else. REAL people, however, have other concerns, such as system security, system integrity, and system brittleness (systemd's mandate that all daemons become dependent upon systemd makes ALL daemons brittle), which destroys long-term system stability. ..."

lucius-cornelius 3 years ago

I'm just a user...

OpenSource began as a philosophical response to a mindset that was regarded as unhealthy and incompatible with freedom. The software is an expression of that philosophical response. So any new software that disregards, or appears to disregard that philosophy, is immediately suspect. It's no surprise, given the corruption of OpenSource in recent years, that we now have a generation of users and developers who don't care about freedom, or security, or simplicity. But it is rather sad.

The KISS principle is fundamental. Adding a new and very complex gadget in order to replace an older, simpler but perfectly functioning gadget, breaks every principle of good design and engineering.
Take GRUB2 (please, it's hideous); GRUB1 was fine, LILO is awesome. Linux had 2 perfectly working boot loaders, so what happens? Replace one of them with a far more complex system. Its files have different names, the lines of code are much more complex and long-winded. Editing it is not as easy as it was. This is not progress.
This appears to be not about anything other than developer's egos.

I see Linux being dragged into the same muddied waters that Windows and OSX inhabit, in order to placate the same greedy amoral Corporations or the same mindless squawking "journalists", and all to please a mass of people who don't give a damn what OS they run as long as they can gain access to their stuff without having to think/learn/read.
Think of it this way, it's a bit like taking the Mona Lisa down and drawing breasts on it and putting an ipod in one hand so that the masses can "appreciate it", or taking a Mozart symphony and remixing it into a 4 minute drum and bass track but insisting that "it's still a Mozart symphony and classical music lovers can rest assured". It's a tragedy for art lovers and the masses don't care anyway. So you end up trashing your own treasure in the pursuit of the appreciation of those who don't understand what treasure is and would trample it in sheer ignorance if they came near it.

So for me, a mere user, I don't understand the technical arguments, but I understand philosophy and I see the philosophy behind Linux/OpenSource being strangled, slowly and deliberately, by people who either harbour hidden agendas or who don't care about philosophy.

akulkis lucius-cornelius 3 years ago
"This appears to be not about anything other than developer's egos."

Ding! Ding! Ding! We have a winner.

Sievers & Poettering (and i'll throw gregkh in for good measure) are behaving like vandals. This is an indicator of being a psychopath -- they don't care how many people are harmed, and to what extent, as long as they get what they want. I believe we have a pretty good idea from 20th Century European history of where that sort of "will to power (and screw anybody who doesn't have the means to stop me immediately)" mentality leads to. Do I think it will cause 50 million dead like the last incarnation of this that came out of Germany? No. Will it cause a lot of needless misery, and pointless wasting of resources: Yes.

Siosm 3 years ago
Everything you said here is "debunked" by the well written answers in the same thread:

The main method used by systemd to monitor processes is not by monitoring PIDs, it's by using cgroups. You need to put your hatred aside, and really start reading what's actually done by systemd to understand how it works, because those posts clearly show you don't. Yes, it is significantly different from what was here before, and yes, those changes are somewhat disruptive. This is what makes it interesting.

Stonoho Siosm 3 years ago
I do not want my software to be "interesting." I want it to be reliable. Part of reliability is that it does not depend on the specific operating system I run it on. Cgroups are linux-specific and not suitable for portable software.
greyfade Stonoho 3 years ago
I'm genuinely curious: What about init systems makes cross-platform portability desirable? Worded another way: Why would I want to use a Unix init system on Windows?

I sincerely cannot comprehend why using the native features of the kernel an init system is written for is a bad thing, especially when it enables the init system to (potentially) do a better job? (Let's ignore for the moment whether systemd is actually doing a good job. This is a general question that would apply even if we weren't talking about systemd specifically.)

akulkis greyfade 3 years ago

Uh, dude, the problem is that systemd is going to make daemons non-portable between Linux and the rest of the Unix world. That's not a feature, it's a very, very, very, very, very, very serious bug.

greyfade akulkis 3 years ago ...

Why is that an issue? Almost every Linux distro and most of the BSDs have been shipping _their own_ init scripts for years, even when a given daemon includes its own. I don't see how this makes daemons non-portable.

akulkis greyfade 3 years ago
Because every daemon is now going to have to have TWO versions -- the Linux systemd-compatible version, and the sane one used by BSD and the rest of the Unix world.

Poettering summed up his attitude -- Linux isn't Unix, so don't give a shit about breaking everything and anything, as long as it feeds his narcissistic craving to control every damned thing. Basically, he's not improving Linux, he's turning it into Windows. If he wants to work on creating a system that's compatible with the Windows philosophy (it's just fabulous to write huge, unstable programs with murky boundaries that do lots and lots of things poorly and don't play well with others) then he should get out of Unix and write for the ReactOS people.

It's obvious that Sievers and Poettering want to be the big fish in the pond... so we should encourage them to go to ReactOS... ReactOS needs people like Sievers and Poettering -- the Linux world doesn't. All of this crap of moving tons of things out of /bin to /usr/bin, etc. (and then putting in soft links from /bin to /usr/bin, rather than, oh, I don't know, doing something SANE like putting the soft link in /usr/bin pointing at /bin) is an example of the absolute CONTEMPT these two have for the entire Linux community. Things that have worked well for almost half a century should not be thrown out like a baby with the bathwater, for the (dubious) goal of shaving 5 seconds (and usually not even that) off of boot-up time -- because what, we don't use our computers to get actual work done, we just sit around all day rebooting?! Who the hell cares if a computer takes 10 seconds or 2 minutes to reboot -- it's not something you do that often. And the only computers which ARE booted up frequently (laptops) aren't running dozens upon dozens of system services to make for a long boot-up anyway.

Even my laptop, I only reboot once every couple of weeks.

Frankly, I regard Sievers and Poettering as nothing less than vandals, because they are breaking far more than fixing. For a pair who profess to be systems programmers, they seem to be utterly clueless about issues and mechanisms directly linked to system stability vs. instability.

Overly complicated PID 1 is a perfect example.

digi_owl akulkis 3 years ago

Honestly, I don't think Poettering and crew are gunning for Windows. They are trying to create OSX without the Apple hardware dongles. But then they also work for a company that is likely trying to supplant Sun Solaris. Just about everything that comes out of RH these days is gunning for some kind of "secure" workstation for government/corporate work. Maybe with a solid dose of cloud thrown in on top to spite Oracle (who owns Sun these days).

akulkis digi_owl 2 years ago
OS X is BSD underneath. They are certainly NOT trying to create BSD. BSD's init system is even more simplistic than SysV init.
digi_owl akulkis 2 years ago
OSX uses launchd, and some want to see that transferred to the BSDs. But at this point systemd is much more than launchd, never mind Sun's SMF.

At this point it may be as relevant to talk about Systemd/Linux as it is to talk about GNU/Linux.

But look at the projects Poettering has initiated and they are all more or less inspired by OSX. He pretty much admitted as much in an interview some years back.

svartalf digi_owl 2 years ago
The problem is... everything that they're doing ensures that it won't BE secure. That's the most laughable thing about this cruel joke Red Hat's inflicting on us. They used to be friends... but with friends like that, who needs enemies...
svartalf akulkis 2 years ago
I'd opine that neither are really systems programmers. This is evidenced by at least Poettering's partial failures over time, including PulseAudio, which honestly breaks more than it fixes and is really solving problems you and most everyone else (99.9999...%) don't have, nor will ever have.

This is something that is very much *MISSION CRITICAL*, meaning it really needs to be this largely armor-plated, can't really ever fail, piece of software. So far, Lennart has yet to write a piece of software like this. He's got...interesting...notions... But they should've never been given the time of day in most of the cases, mainly because if you'd done the mental exercises most actual systems developers do, you'd realize that they're all solutions looking for a problem to fix.. Remote network audio is, quite frankly, something already solved and his "solution" added issues including needless latency in things.

His logging solution in systemd is execrable - and this massive mission/function creep abortion they call systemd is, quite simply, too complex - it's attempting to build an OS abstraction on top of an... heh... OS abstraction already there. It's almost like the One Ring. That's not a systems programmer there. It's a wannabe.

akulkis svartalf 2 years ago

You have summed it up very well. These guys are applications programmers posing as systems programmers.

svartalf greyfade 2 years ago
It's an issue because it's more akin to the welded shut hood of your vehicle versus what you currently have. If a *BSD variant wants something resembling systemd, the best they can hope for is the winnowed down fork labeled uselessd that does NOTHING but handle the boot dependencies, etc.

With the old way, while it was clunky and "slow" (I'd opine that "fast" is only relevant in the *EMBEDDED* and *CONSUMER DEVICE* spaces, not the server spaces -- you shouldn't NEED fast boot times for a server...) you could "port" it in the large to your system or at least HACK your needed daemon start script into the system. With systemd, you don't get that luxury. It needs to be systemd-ed and you can just pretty much kiss anything other than Linux support goodbye- or support the "old" way and the "new" way at the same time.

It's stupid. It's a damned waste of valuable time. And better yet, this doesn't even get into the lack of actual mission-criticalness that is needed in the server and embedded spaces - which this damn thing DOES NOT AND CANNOT HAVE because of its poor design decisions. I keep questioning why in the hell Red Hat keeps the man on their payroll, this is that bad.

greyfade svartalf 2 years ago
Your argument is one that doesn't seem, to me, to support your case.

Why would BSD want systemd? That's the core of my main objection to the argument that systemd's lack of portability is bad: It's a Linux init system that uses Linux kernel features to do its more interesting jobs, so why would any other platform want it?

Why would you have to hack your daemon? Systemd supports launching daemons via init script, and with an extension, supports sysvinit-style scripts. The systemd-specific code that enables systemd optimization for a daemon amounts to an #ifdef for a few lines of code and a single C file that can be simply excluded from your build to support non-systemd systems. How does this mean that you have to "kiss anything other than Linux support goodbye"?

You are, to be blunt, merely regurgitating the same long-debunked arguments I've seen discussed a hundred times, and still not answering my question.

akulkis greyfade 2 years ago

The reason BSD doesn't want systemd is because systemd is a steaming pile of excrement. NO SANE PERSON wants systemd.

SomeoneWithAClue akulkis 3 years ago

systemd's socket activation support relies on features that have been available on every decent Unix-like system for ages. So does its readiness notification support (as outlined above). Therefore they can both be implemented on *every* Unix with a minimal amount of fuss. How exactly is this supposed to make a daemon unportable?
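For reference, the readiness-notification protocol being debated here really is just a datagram on a Unix socket. A minimal sketch using only the standard library (no systemd headers or libraries involved):

```python
# Sketch of the sd_notify readiness protocol: systemd puts the path of a
# Unix datagram socket in $NOTIFY_SOCKET; the daemon sends "READY=1" to
# that socket once its initialization is complete.
import os
import socket

def notify_ready():
    """Send READY=1 to $NOTIFY_SOCKET; return True if a message was sent."""
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False  # not running under a notify-aware supervisor
    if addr.startswith("@"):
        # A leading "@" denotes a Linux abstract-namespace socket
        addr = "\0" + addr[1:]
    s = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        s.sendto(b"READY=1", addr)
    finally:
        s.close()
    return True
```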

Arker SomeoneWithAClue 3 years ago

Socket activation is a misfeature; it doesn't matter if it can be done cross-platform, it's still just dumb. It leads to a system appearing to boot a tiny bit faster, but you can no longer guarantee that everything is actually *up* at that point. I don't care much about speeding up boot times but I would still call that a win - IF it wasn't achieved at the cost of making the entire boot process unreliable - but it's not worth anything like the price you pay here.

sarlalian greyfade 3 years ago

That's the problem with systemd: it's not a Unix init system, it's a Linux init system. If it were a Unix init system, it wouldn't depend on things like cgroups existing. Cgroups aren't part of the POSIX spec; they don't exist on FreeBSD, OpenBSD, NetBSD, Solaris, or really any others. There are numerous other problems with the nested dependencies of systemd; it's not friendly with running inside containers. I'm sure most people who don't like systemd would have fewer issues with it if the PID 1 portion of systemd were smaller and less complex. The problem is that if PID 1 dies, or needs to be updated, this results in a crash or reboot.

Anyway, the feature creep of systemd is the scary part to many of us. Why does my init process need to be my automounter, my syslog, my inetd?

Personally, I'm just horribly upset about them changing the ordering of the start/stop/restart arguments for no good reason. what advantage does systemctl start apache have over systemctl apache start?

greyfade sarlalian 3 years ago

This does not answer my question. Why is it a bad thing for it to be a Linux init system? Why does conforming to the POSIX spec matter for a Linux init system?

We can talk about dependencies and argue about features, sure. But what I don't understand is this fetishistic obsession with POSIX when we're talking about a freaking init system. I'm not aware of POSIX imposing any requirements on the init system other than a small subset of interactions with it, so why is this such a big deal? Why do you even care?

I understand, and to some degree, agree with the concerns about feature creep. But the rest of your arguments are little more than bikeshedding, and don't really explain why systemd is objectively bad.

sarlalian greyfade 3 years ago

Technically, your question was about running a "unix" init system on Windows. My point, which did answer that question, was that it isn't a "unix" init system; it is a Linux init system. While POSIX isn't some sort of panacea, it does imply some level of portability across Unix and Unix-like systems, and for an init system that is important to a lot of people. Especially when systemd apparently provides non-portable (dbus and sd_notify) ways for daemons to implement some concept of service activation.

From a desktop perspective, systemd looks fantastic: it has a ton of neat features that will make a Linux laptop boot up faster and use fewer resources on startup. These are good things. It is also substantially more usable than the usability nightmare that is launchd, which is where the systemd authors got a lot of their ideas. I'm not really against systemd on the desktop; on the server, however, where security and stability matter, it leaves some things to be desired.

Objective problems with systemd on the server:

1) PID 1 is arguably the most important process; if it halts, your server halts. Ideally PID 1 should be the simplest, most tested program you have running, so that it won't ever halt and never needs to be upgraded.

2) It depends on too many libraries for such a critical program. I understand that many of those libraries are well tested, best of breed, but each library increases the attack surface of critical parts of a server's infrastructure. This increased attack surface makes it more likely that there will be root exploits available for it, and decreases the overall security and stability of a system.

3) AutoFS systems suck in general, and when the automounter doesn't work correctly, services and systems hang, and this is not a good thing.

4) Not that I've spent a lot of time trying to get it to work in a docker container, but evidently it is non-trivial to get systemd to work as PID 1 in a container due to dbus and cgroup dependencies, so I end up with an init process for my overall system, and a separate init process for my containers.

5) Because it is tied so directly to Linux, I end up with one init system for my workstations (Fedora-based due to external vendor requirements), and another for my other servers, which are FreeBSD-based.

Subjective problems with systemd.

1) It violates the unix philosophy "This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface."

2) Hubris, obviously we know better than the syslog/syslog-ng/rsyslog/inetd/xinetd/autofs/automounter/amd people... They are experts in their domain, who cares, we can do better, and include it in the init process.

3) The main developers of systemd appear to be jackasses.

- Lennart Poettering ( "In fact, the way I see things the Linux API has been taking the role of the POSIX API and Linux is the focal point of all Free Software development. Due to that I can only recommend developers to try to hack with only Linux in mind and experience the freedom and the opportunities this offers you. So, get yourself a copy of The Linux Programming Interface, ignore everything it says about POSIX compatibility and hack away your amazing Linux software. It's quite relieving!" (Obviously Linux is the whole of the unix universe).

- Kay Sievers ( https://bugs.freedesktop.or... The whole kernel debug argument being parsed by systemd and screwing things up for the kernel blowup from a couple of months back.

4) The systemd devs seem to have the worst case of NIH (Not Invented Here) syndrome I've ever seen. We are creating an init system... need to make sure it has AutoFS, xinetd, syslog, DHCP, device detection...

5) The whole boot speed at the cost of sanity attitude drives me nuts.

Anyway, to address your bikeshedding accusation: an init system SHOULD resemble a bike shed more than a nuclear power plant, but systemd seems to be aiming for kernel-level complexity vs. bike-shed-level simplicity.

Arker sarlalian 3 years ago

You know what makes a laptop really boot faster? Without breaking anything? Suspend to disk.
akulkis Arker 2 years ago
Only idiots who spend all their time rebooting and doing nothing else value boot-up times above all else. REAL people, however, have other concerns, such as system security, system integrity, and system brittleness (systemd's mandate that all daemons become dependent upon systemd makes ALL daemons brittle), which destroys long-term system stability.
greyfade sarlalian 3 years ago
Again, you have not addressed my question.

I pointed out myself that it's not a Unix init system, and that it has no reason to be one. My point about portability to Windows was to highlight that although it has some POSIX compatibility, there's no rationale for bringing a Unix init system to it. And that highlights my core question: Why does it matter that it be a POSIX init system? It's not meant to be portable, so there is no rationale I can find to justify making it portable. (Why would you even want to use systemd on BSD?) I wasn't asking if it should be portable to Windows, I was asking *WHY* it should ever be portable to anything other than Linux. You have dodged this question four times now.

I have already conceded twice that there are valid objective problems. Your points 1 and 4 (and to a lesser degree 2) are among them. 3 I will not address, because I have not encountered such problems, and have not seen them documented. Your "objective" point 5 is what I accuse of bikeshedding, as you have not presented a case as to why a least common denominator, featureless, aging, limited init system is preferable to having an init system which takes advantage of the features a platform has to offer. I would take the same position if we were talking about some new hypothetical FreeBSD init system: If it takes advantage of FreeBSD-specific features, why should it be portable to, say, OpenBSD?

To your subjective points:

1) I disagree with taking a dogmatic stance on the Unix philosophy. All philosophies have limits to their applicability (there is no universal philosophy, IMO), and there are numerous reasonable rationales for having a more complex communication protocol between subprocesses than a text stream. I do not find this a convincing argument.

2) This appeal to authority is much less convincing. It is hubris, I would claim, to think that someone else *can't* do it better than you.

3) I can make the same accusation of Linus (Why would you use Linux? Linus is an asshole!), Theo de Raadt (Why would you use OpenBSD or OpenSSH? Theo is an asshole!), Dan J. Bernstein (Why would you use djbdns or qmail or...) and on and on. For every amazing piece of software, there's a raging asshole behind it. This argument is unconvincing, and tells me that you're scrambling for justification.

4) On balance, there are problems with the most commonly-used variants of each of those tools. hid-dbus, xinetd, syslog-ng, dhcpcd, and several others have all sent me flying into a rage about some silly thing at one time or another, and I, for one, *welcome* more alternatives. If you think you can do better, by all means, *please do.* I think you're an asshole, but I'll at least give it a shot.

5) The whole POSIX-or-nothing attitude drives me nuts. What's your point?

sarlalian greyfade 3 years ago
Your original statement was "I'm genuinely curious: What about init systems makes cross-platform portability desirable? Worded another way: Why would I want to use a Unix init system on Windows?" Maybe I'm reading that wrong, but it really does seem like you called it a Unix init system there and asked why it should be portable to Windows. If your point about Windows was that it has some POSIX compatibility, then I failed at reading between the lines, because what I read was 1) an assertion that it is a Unix init system, which to me implies some degree of portability to other Unix / Unix-like systems, and 2) a question about why you would want to use it on Windows. Yes, in retrospect it probably should have been obvious that you were making an intentionally absurd statement about using a Unix init system on Windows, but I missed that point, and for that I apologize. My first response was more about correcting the assertion that it was a Unix init system, which is factually incorrect. I don't think anyone really cares whether it's POSIX or not; some people care about the byproduct of being POSIX, and that byproduct is some degree of portability. Personally, whether it's portable or not doesn't matter to me at all. I was wrong to include portability in my list of objective problems with systemd, as it's definitely a subjective problem, and one that I don't have with it, though others do. I evidently got carried away with my second response, sorry. I really haven't been dodging the portability question; I just don't care whether it's portable as far as problems with systemd go. Portability is about as close to the bottom of the list as it gets for me.

That said, I've never asserted that SysV init is better or that I prefer it in any way, shape, or form. Should it be replaced? Absolutely. Is systemd the answer? I hope not, but evidently most people disagree with me on this one. And to be fair, I'm not a developer of any init system, so everything I say should be taken with a grain of salt. Is my point 5 bikeshedding? Probably, because upon further reflection, I don't care if it's portable or not.

For my subjective points, great, you disagree with me on my opinion and think I'm an asshole, good for you. Have a lovely day.

greyfade sarlalian 3 years ago
I'm sorry my original question caused confusion. I was trying to make the point that if portability of systemd to a *BSD is desirable, then by the same logic an init system for a generic Unix should in turn be portable to Windows. I wasn't calling systemd a Unix init system, just pointing out the consequences of the logic as it applies to init systems in general.

And please don't take my tone as confrontational. I'm just trying to get rational explanations for the animosity towards systemd, many of which seem to me to be completely baseless complaints. (Though, as I said, there are valid complaints and I try to acknowledge them when they're mentioned, but they're few and don't really support the case that systemd is somehow bad.)

sarlalian greyfade 3 years ago
Systemd's portability isn't really that important; the only place where it causes issues is when it makes other daemons depend on systemd-specific functions like sd_notify. That said, that's really only an #ifdef away from being a non-issue on a system that doesn't support sd_notify, which to me makes it a pretty trivial problem and relatively easy to port around.
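The "#ifdef away" idea can be sketched like this; HAVE_SYSTEMD is an illustrative build-time flag (something a build system would set when libsystemd is present), not a standard macro:

```c
#include <stdio.h>

#ifdef HAVE_SYSTEMD
#include <systemd/sd-daemon.h>
#endif

/* Report readiness to the service manager.
 * Returns 1 if sd_notify() was used, 0 if we fell back to a plain
 * log line because libsystemd is not available on this platform. */
int notify_ready(void)
{
#ifdef HAVE_SYSTEMD
    sd_notify(0, "READY=1");
    return 1;
#else
    fprintf(stderr, "ready (sd_notify not available)\n");
    return 0;
#endif
}
```

Built without -DHAVE_SYSTEMD, the daemon compiles and runs unchanged on a non-systemd platform, which is the sense in which the dependency is only an #ifdef away.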

As for taking your tone as confrontational, it's really difficult to identify tone in a string of text. That said, you did say you think I'm an asshole, and I'm not sure that being called an asshole can ever be interpreted as non-confrontational between strangers on the internet.

Any personal animosity (probably an overly strong word for a feeling about an init system) I have for it comes down to a simple statement: it trades stability, security, and simplicity for features that I don't care much about in a server operating system. And the absolutely stupid, annoying %$^#&^$^%%$# change in the position of start/stop/restart in the systemctl command as compared to the service command. (Yes, I'm aware that service is still available, but it's less informative than systemctl.)

greyfade sarlalian 3 years ago
The "asshole" bit was an illustrative response. I'm sorry. You depicted these developers as unsavory people, and I was trying to point out that that is a ludicrous point to be making in matters of software - plenty of assholes (many of whom I like) write great software, so demeaning the software based on their personality is simply wrong.

On a lighter note, it may not be a bad idea to add a set of functions to your shell .rc scripts to wrap the functionality of service management in a way that's more sensible to you. I agree the differences are annoying, but short of proposing changes upstream, there's little you can do about it.
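As a concrete sketch of that suggestion, a couple of lines in a shell .rc file can restore the service-style word order. The function name svc and the SVC_CMD override hook are hypothetical, chosen here so the wrapper can be exercised without a live systemctl:

```shell
# svc <name> <start|stop|restart|status> -- service-style argument order
# on top of systemctl. SVC_CMD exists only to make the wrapper testable;
# it defaults to systemctl.
svc() {
    local name="$1" action="$2"
    "${SVC_CMD:-systemctl}" "$action" "$name"
}
```

With that loaded, `svc sshd restart` runs `systemctl restart sshd`.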

greyfade 3 years ago
"(Why would you even want to use systemd on BSD?)"

I would not want to run systemd on my system, regardless of kernel. But the pressure to do so, again regardless of kernel, comes simply from needing to get a program that depends on it to run. To the degree that the systemd cabal gets more and more software to depend on it, more and more people will be inconvenienced by it.

More generally, why would I not want to run a 'Linux-only' init system? The same reason I don't want to run 'Linux-only' anything. I have been using *nix for over 20 years; I have migrated distributions, I have migrated kernels, and having the freedom to do that again whenever necessary is important.

greyfade Arker 3 years ago
You're not the first to make this point, and I'm still frustrated by it, because it makes little sense, and still fails to answer my question.

Swapping out a kernel in an environment is not so trivial a matter. Switching between different versions of Linux is fine, since the ABI hasn't changed all that much over the years, and so there's little worry about a software distribution working across several kernel versions. But swapping out a Linux kernel for something entirely different is another matter. I understand that the BSD emulation of the Linux ABI is incomplete, and so swapping out Linux for a BSD kernel is going to cause a great number of headaches - the init system is the least of your concerns.

But I do not understand why you're so concerned about being able to do such a thing, especially when doing so necessarily means a substantial change in the operating environment, let alone binary loader behavior differences and ABI differences that will inevitably break a number of core features.

It's fine if you want to just use a software distribution where systemd is unsupported, but making such drastic changes to a running system is at best inadvisable, even ignoring the matter of the init system or the difficulty of changing other critical infrastructure.

akulkis greyfade 2 years ago
your points have been addressed repeatedly.

If you don't want to listen, that's your problem. Now, sit the hell down and shut the hell up, and LISTEN, Lennart.

akulkis sarlalian 2 years ago
And it doesn't even deliver the faster bootup time which is supposedly the justification for all of this BS.

akulkis greyfade 2 years ago

BS, Grayfade. We have answered your question repeatedly. Stop being a concern troll.

greyfade akulkis 2 years ago
No, you really haven't. I asked why portability and POSIX-compliance is so important for systemd. I have never gotten an answer. I asked why anyone would want systemd portable to systems like BSD. All I get is complaints that no one would because it sucks. That doesn't answer my questions. If you're going to say systemd sucks and then give me reasons it sucks, I'm going to expect your reasoning to make sense.

But you don't have reasoning that I can see. All you're doing is complaining that it sucks. Well, that's useless.

And then you keep bugging me several months after I've long forgotten about this post and call *me* a troll?

Please, just drop the subject. You're clearly not interested in conversation or reasonable discussion, so please don't reply to me again unless you have actual, reasoned answers to my questions.

pydsigner greyfade 2 years ago
It matters because a dev doesn't want to have to maintain separate daemon codebases to support both Linux and BSD. That might seem like an edge case, but if you write something to use all of the systemd interfaces because you aren't concerned about having to run it on other UNIX systems, the people who want to use your project on BSD will be left to reimplement daemonization themselves.
greyfade pydsigner 2 years ago
They don't have to maintain a separate daemon codebase. They can continue to use their old rc script and the systemd compatibility layer can run that script just fine. Or, a simple unit file can run the daemon directly.
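For illustration, a "simple unit file" of the kind mentioned might look like this (the unit name, path, and options are hypothetical, not taken from any real package):

```
# /etc/systemd/system/mydaemon.service
[Unit]
Description=Example daemon run directly by systemd

[Service]
ExecStart=/usr/local/sbin/mydaemon --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```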

It's only if the daemon needs to directly manage systemd services or take advantage of systemd-specific features that maintaining additional code becomes a concern.

Now, if a daemon uses systemd libraries, I have to question the motivation for porting that daemon to another OS. Clearly, if the daemon performs some task relevant to systemd, what reason is there for it to perform the same task on a system with no systemd?

Conversely, if a daemon does not rely on systemd, but just links the libraries, I have to question the motivation for linking to those libraries. If they're not needed, why link?

I still don't see the argument.

Sebastian Freundt SomeoneWithAClue 2 years ago

Interestingly, that's not true.

akulkis SomeoneWithAClue 2 years ago

And what if it crashes in the process?
What if there's a bug in the serialization?
What if there's a bug in the de-serialization in the upgraded version of systemd that just got exec()ed by the old version of systemd?

Sebastian Freundt akulkis 2 years ago

I can tell you what happens when (not if) it crashes:

akulkis Sebastian Freundt 2 years ago

Yes, I know. It's that Someone With[OUT] a Clue doesn't seem to understand how systemd's PID 1 is overly complex - and by that, I mean anything more complex than a process which spawns off the rest of the init system and then sits around reaping orphans.

OrenWithAnE greyfade 3 years ago
It's fragile. If the cgroup implementation changes (i.e., whenever Linus feels like it), then we all feel the pain. Option #1 above relies only on the guaranteed behavior of compliant POSIX systems. It's anti-fragile.

greyfade OrenWithAnE 3 years ago

That's a completely ridiculous argument. One need only look at the last few months of LKML discussion to see that Linus is absolutely dogmatic about never, under any circumstances, breaking userland. He has gone on dozens of long rants at people for changing something that broke a userland program, shouting obscenities for days about it.

If you think the cgroups ABI changing is at all a problem, then you clearly haven't paid one whit of attention to kernel development. Kernel ABIs that are in use never change without massive cooperative efforts.

sarlalian greyfade 3 years ago

Considering that they were discussing hiding some kernel arguments from systemd because it was screwing up their ability to debug the kernel, I think you may be overestimating Linus's feelings about breaking systemd.

OrenWithAnE greyfade 3 years ago

Is there a normative document spelling out the specifications and contract for cgroups, equivalent to the POSIX spec for socket writes?

greyfade OrenWithAnE 3 years ago

Yes. In the kernel docs, where it's supposed to be. And if the ABI for it changes at any time and breaks anything, Linus will bite everyone's head off and revert the change after dressing everyone down. And if the ABI actually needs to be changed, there will be extended discussion, Linus will complain that everyone is violating his Rule #1, then all of the users of the API will agree on a migration plan to the new ABI, and then it'll get changed. Remember Linux kernel development rule #1: DON'T BREAK USERSPACE.

Again, it's an absurd argument. Linux is not a Unix, and there is absolutely no reason for the init system to be portable beyond Linux. You haven't answered my question. You've only presented an ill-posed and illogical half-argument that seems to be predicated on the fact that it's not POSIX, which, as I've already pointed out, isn't relevant. Systemd is a Linux init system, not a POSIX init system. The question remains: What about init systems makes portability desirable?

[Aug 02, 2017] CentOS - RHEL 7 How to disable NetworkManager

Aug 02, 2017 |
CentOS / RHEL 7 : How to disable NetworkManager

By Sandeep

Disabling NetworkManager

The following steps will disable the NetworkManager service and allow the interface to be managed only by the network service.

1. To check which interfaces are managed by NetworkManager:

# nmcli device status

This displays a table listing all network interfaces along with their STATE. If NetworkManager is not controlling an interface, its STATE will be listed as unmanaged. Any other value indicates the interface is under NetworkManager control.

2. Stop the NetworkManager service:

# systemctl stop NetworkManager

3. Disable the service permanently:

# systemctl disable NetworkManager

4. To confirm the NetworkManager service has been disabled:

# systemctl list-unit-files | grep NetworkManager

5. Add the following parameter in /etc/sysconfig/network-scripts/ifcfg-ethX for each interface managed by NetworkManager to make it unmanaged:

NM_CONTROLLED="no"

Note: Be sure to change NM_CONTROLLED="yes" to "no", or the network service may complain about "Connection activation failed" when it cannot find an interface to start.

Switching to "network" service

When NetworkManager is disabled, the interface can be configured for use with the network service. Follow the steps below to configure an interface using the network service.

1. Set the IP address in the configuration file /etc/sysconfig/network-scripts/ifcfg-eth0: set the NM_CONTROLLED value to no and assign a static IP address in the file.
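A minimal static configuration of the kind described might look like this (the device name and addresses are examples only):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
NM_CONTROLLED=no
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
```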


2. Set the DNS servers to be used by adding them to /etc/resolv.conf:

nameserver [server 1]
nameserver [server 2]

3. Enable the network service:

# systemctl enable network

4. Restart the network service:

# systemctl restart network


[Aug 02, 2017] EWONTFIX - Broken by design systemd

Notable quotes:
"... difficult not to use ..."
Aug 02, 2017 |
Broken by design: systemd 09 Feb 2014 19:56:09 GMT

Recently the topic of systemd has come up quite a bit in various communities in which I'm involved, including the musl IRC channel and the Busybox mailing list.

While the attitude towards systemd in these communities is largely negative, much of what I've seen has been either dismissible by folks in different circles as mere conservatism, or tempered by an idea that despite its flaws, "the design is sound". This latter view comes with the notion that systemd's flaws are fixable without scrapping it or otherwise incurring major costs, and are therefore not a major obstacle to adopting systemd.

My view is that this idea is wrong: systemd is broken by design, and despite offering highly enticing improvements over legacy init systems, it also brings major regressions in many of the areas where Linux is expected to excel: security, stability, and not having to reboot to upgrade your system.

The first big problem: PID 1

On unix systems, PID 1 is special. Orphaned processes (including a special case: daemons which orphan themselves) get reparented to PID 1. There are also some special signal semantics with respect to PID 1, and perhaps most importantly, if PID 1 crashes or exits, the whole system goes down (kernel panic).

Among the reasons systemd wants/needs to run as PID 1 is getting parenthood of badly-behaved daemons that orphan themselves, preventing their immediate parent from knowing their PID to signal or wait on them.

Unfortunately, it also gets the other properties, including bringing down the whole system when it crashes. This matters because systemd is complex. A lot more complex than traditional init systems. When I say complex, I don't mean in a lines-of-code sense. I mean in terms of the possible inputs and code paths that may be activated at runtime. While legacy init systems basically deal with no inputs except SIGCHLD from orphaned processes exiting and manual runlevel changes performed by the administrator, systemd deals with all sorts of inputs, including device insertion and removal, changes to mount points and watched points in the filesystem, and even a public DBus-based API. These in turn entail resource allocation, file parsing, message parsing, string handling, and so on. This brings us to:

The second big problem: Attack Surface

On a hardened system without systemd, you have at most one root-privileged process with any exposed surface: sshd. Everything else is either running as unprivileged users or does not have any channel for providing it input except local input from root. Using systemd then more than doubles the attack surface.

This increased and unreasonable risk is not inherent to systemd's goal of fixing legacy init. However it is inherent to the systemd design philosophy of putting everything into the init process.

The third big problem: Reboot to Upgrade

Windows Update rebooting

Fundamentally, upgrading should never require rebooting unless the component being upgraded is the kernel. Even then, for security updates, it's ideal to have a "hot-patch" that can be applied as a loadable kernel module to mitigate the security issue until rebooting with the new kernel is appropriate.

Unfortunately, by moving large amounts of functionality that's likely to need to be upgraded into PID 1, systemd makes it impossible to upgrade without rebooting. This leads to "Linux" becoming the laughing stock of Windows fans, as happened with Ubuntu a long time ago.

Possible counter-arguments

With regards to security, one could ask why desktop systems can't use systemd, leaving server systems to find something else. But I think this line of reasoning is flawed in at least three ways:

  1. Many of the selling-point features of systemd are server-oriented. State-of-the-art transaction-style handling of daemon starting and stopping is not a feature that's useful on desktop systems. The intended audience for that sort of thing is clearly servers.
  2. The desktop is quickly becoming irrelevant. The future platform is going to be mobile and is going to be dealing with the reality of running untrusted applications. While the desktop made the unix distinction of local user accounts largely irrelevant, the coming of mobile app ecosystems full of potentially-malicious apps makes "local security" more important than ever.
  3. The crowd pushing systemd, possibly including its author, is not content to have systemd be one choice among many. By providing public APIs intended to be used by other applications, systemd has set itself up to be difficult not to use once it achieves a certain adoption threshold.

With regards to upgrades, systemd's systemctl has a daemon-reexec command to make systemd serialize its state, re-exec itself, and continue uninterrupted. This could perhaps be used to switch to a new version without rebooting. Various programs already use this technique, such as the IRC client irssi, which lets you /upgrade without dropping any connections. Unfortunately, this brings us back to the issue of PID 1 being special. For normal applications, if re-execing fails, the worst that happens is the process dies and gets restarted (either manually or by some monitoring process) if necessary. However, for PID 1, if re-execing itself fails, the whole system goes down (kernel panic).

For common reasons it might fail, the execve syscall returns failure in the original process image, allowing the program to handle the error. However, failure of execve is not entirely atomic: it can also fail after the point of no return, once the old process image can no longer be returned to.

In addition, systemd might fail to restore its serialized state due to resource allocation failures, or if the old and new versions have diverged sufficiently that the old state is not usable by the new version.
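The recoverable half of that behavior is easy to sketch: a wrapper attempts the re-exec, and if execve returns at all, the old image is still intact and can report the error (the function name and path below are illustrative, not systemd's actual code). What no wrapper can handle is the non-atomic case, where failure happens after the old image is gone - and in PID 1 that means a kernel panic:

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Try to replace the current process image with `path`.
 * If execve() returns, it failed and the old image survives,
 * so the caller can fall back to the running version. */
int reexec(const char *path)
{
    char *argv[] = { (char *)path, 0 };
    char *envp[] = { 0 };
    execve(path, argv, envp);
    /* Only reached on failure; errno says why. */
    fprintf(stderr, "re-exec of %s failed: %s\n", path, strerror(errno));
    return -1;
}
```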

So if not systemd, what? Debian's discussion of whether to adopt systemd or not basically devolved into a false dichotomy between systemd and upstart. And except among grumpy old luddites, keeping legacy sysvinit is not an attractive option. So despite all its flaws, is systemd still the best option?


None of the things systemd "does right" are at all revolutionary. They've been done many times before. DJB's daemontools, runit, and Supervisor, among others, have solved the "legacy init is broken" problem over and over again (though each with some flaws of its own). Their failure to displace legacy sysvinit in major distributions had nothing to do with whether they solved the problem, and everything to do with marketing. Said differently, there's nothing great and revolutionary about systemd. Its popularity is purely the result of an aggressive, dictatorial marketing strategy.

So how should init be done right?

The Unix way: with simple self-contained programs that do one thing and do it well.

First, get everything out of PID 1:

The systemd way: Take advantage of special properties of pid 1 to the maximum extent possible. This leads to ever-expanding scope creep and exacerbates all of the problems described above (and probably many more yet to be discovered).

The right way: Do away with everything special about pid 1 by making pid 1 do nothing but start the real init script and then just reap zombies:

#define _XOPEN_SOURCE 700
#include <signal.h>
#include <unistd.h>
#include <sys/wait.h>

int main()
{
    sigset_t set;
    int status;

    if (getpid() != 1) return 1;

    /* Block all signals; PID 1 must not die to a stray signal. */
    sigfillset(&set);
    sigprocmask(SIG_BLOCK, &set, 0);

    /* Parent (PID 1) does nothing but reap orphans forever. */
    if (fork()) for (;;) wait(&status);

    /* Child: restore signals and hand off to the real init script. */
    sigprocmask(SIG_UNBLOCK, &set, 0);

    setpgid(0, 0);
    return execve("/etc/rc", (char *[]){ "rc", 0 }, (char *[]){ 0 });
}

Yes, that's really all that belongs in PID 1. Then there's no way it can fail at runtime, and no need to upgrade it once it's successfully running.

Next, from the init script, run a process supervision system outside of PID 1 to manage daemons as immediate child processes (no backgrounding). As mentioned above, there are several existing choices here. It's not clear to me that any of them are sufficiently polished or robust to satisfy major distributions at this time. But neither is systemd; its backers are just better at sweeping that under the rug.

What the existing choices do have, though, is better design, mainly in the way of having clean, well-defined scope rather than Katamari Damacy.

If none of them are ready for prime time, then the folks eager to replace legacy init in their favorite distributions need to step up and either polish one of the existing solutions or write a better implementation based on the same principles. Either of these options would be a lot less work than fixing what's wrong with systemd.

Whatever system is chosen, the most important criterion is that it be transparent to applications. For 30+ years, the choice of init system has been completely irrelevant to everybody but system integrators and administrators. User applications have had no reason to know or care whether you use sysvinit with runlevels, upstart, my minimal init with a hard-coded rc script, a more elaborate process-supervision system, or even /bin/sh. Ironically, this sort of modularity and interchangeability is what made systemd possible; if we were starting from the kind of monolithic, API-lock-in-oriented product systemd aims to be, swapping out the init system for something new and innovative would not even be an option.

Update: license on code

Added December 21, 2014.

There has been some interest in having a proper free software license on the trivial init code included above. I originally considered it too trivial to even care about copyright or need a license on it, but I don't want this to keep anyone from using or reusing it, so I'm explicitly licensing it under the following terms (standard MIT license):

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.


[Jul 15, 2017] Red Hat 6.9 is now available

After Phase 2, RHEL 6 will only receive security updates until 2020 under Phase 3, which commenced on May 10, 2017. Red Hat's current development focus is the RHEL 7 platform, which was updated to RHEL 7.3 last year.
Mar 21, 2017

Release Notes

Red Hat Enterprise Linux 6 is now in Production Phase 2, and Red Hat Enterprise Linux 6.9 therefore provides a stable release focused on bug fixes. Red Hat Enterprise Linux 6 enters Production Phase 3 on May 10, 2017. Subsequent updates will be limited to qualified critical security fixes and business-impacting urgent issues. Please refer to the Red Hat Enterprise Linux Life Cycle for more information.

Migration to RHEL 7 is now supported

In-place Upgrade

As Red Hat Enterprise Linux subscriptions are not tied to a particular release, existing customers can update their Red Hat Enterprise Linux 6 infrastructure to Red Hat Enterprise Linux 7 at any time, free of charge, to take advantage of recent upstream innovations. To simplify the upgrade to Red Hat Enterprise Linux 7, Red Hat provides the Preupgrade Assistant and Red Hat Upgrade Tool. For more information, see Chapter 2, General Updates.

Red Hat Insights
Since Red Hat Enterprise Linux 6.7, the Red Hat Insights service is available. Red Hat Insights is a proactive service designed to enable you to identify, examine, and resolve known technical issues before they affect your deployment. Insights leverages the combined knowledge of Red Hat Support Engineers, documented solutions, and resolved issues to deliver relevant, actionable information to system administrators.

The service is hosted and delivered through the customer portal at or through Red Hat Satellite. To register your systems, follow the Getting Started Guide for Insights. For further information, data security and limits, refer to

Red Hat Customer Portal Labs

Red Hat Customer Portal Labs is a set of tools available in a section of the Customer Portal. The applications in Red Hat Customer Portal Labs can help you improve performance, quickly troubleshoot issues, identify security problems, and quickly deploy and configure complex applications.

NetworkManager now supports manual DNS configuration with dns=none

With this update, the user has the option to prevent NetworkManager from modifying the /etc/resolv.conf file. This is useful for manual management of DNS settings. To protect the file from being modified, add the dns=none option to the /etc/NetworkManager/NetworkManager.conf file. (BZ#1308730)
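For reference, the resulting configuration fragment is just:

```
# /etc/NetworkManager/NetworkManager.conf
[main]
dns=none
```

After a NetworkManager restart, /etc/resolv.conf is left for the administrator to manage.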

Red Hat Software Collections

Red Hat Software Collections is a Red Hat content set that provides a set of dynamic programming languages, database servers, and related packages that you can install and use on all supported releases of Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 on AMD64 and Intel 64 architectures. Red Hat Developer Toolset is included as a separate Software Collection.

Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. It provides current versions of the GNU Compiler Collection, GNU Debugger, and other development, debugging, and performance monitoring tools. Since Red Hat Software Collections 2.3, the Eclipse development platform is provided as a separate Software Collection.

Dynamic languages, database servers, and other tools distributed with Red Hat Software Collections do not replace the default system tools provided with Red Hat Enterprise Linux, nor are they used in preference to these tools. Red Hat Software Collections uses an alternative packaging mechanism based on the scl utility to provide a parallel set of packages. This set enables optional use of alternative package versions on Red Hat Enterprise Linux. By using the scl utility, users can choose which package version they want to run at any time.

See the Red Hat Software Collections documentation for the components included in the set, system requirements, known problems, usage, and specifics of individual Software Collections.

See the Red Hat Developer Toolset documentation for more information about the components included in this Software Collection, installation, usage, known problems, and more.

[Jun 01, 2017] CVE-2017-1000367 Bug in sudos get_process_ttyname. Most linux distributions are affected

Jun 01, 2017 |

There is a serious vulnerability in the sudo command that can be used to gain root access. It affects SELinux-enabled systems such as CentOS/RHEL, among others. A local user with privileges to execute commands via sudo could use this flaw to escalate their privileges to root. Patch your system as soon as possible.

It was discovered that sudo did not properly parse the contents of /proc/[pid]/stat when attempting to determine its controlling tty. A local attacker in some configurations could possibly use this to overwrite any file on the filesystem, bypassing intended permissions, or to gain a root shell.
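The parsing pitfall behind this class of bug is worth illustrating: in /proc/[pid]/stat the second field is the command name in parentheses, and it may itself contain spaces and parentheses, so naively splitting the line on whitespace can shift every later field, including the tty number. A robust parser scans from the last ')'. The sketch below illustrates that technique; it is not sudo's actual code:

```c
#include <stdio.h>
#include <string.h>

/* Return the tty_nr (7th field) of a /proc/[pid]/stat line, or -1 on
 * parse error. Scanning from the LAST ')' keeps a hostile command name
 * such as "evil ) 1 2" from shifting the field positions. */
int stat_tty_nr(const char *line)
{
    const char *p = strrchr(line, ')');
    if (!p) return -1;
    char state;
    int ppid, pgrp, session, tty_nr;
    /* Fields after the comm field: state ppid pgrp session tty_nr ... */
    if (sscanf(p + 1, " %c %d %d %d %d",
               &state, &ppid, &pgrp, &session, &tty_nr) != 5)
        return -1;
    return tty_nr;
}
```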

... ... ...

A list of affected Linux distro
  1. Red Hat Enterprise Linux 6 (sudo)
  2. Red Hat Enterprise Linux 7 (sudo)
  3. Red Hat Enterprise Linux Server (v. 5 ELS) (sudo)
  4. Oracle Enterprise Linux 6
  5. Oracle Enterprise Linux 7
  6. Oracle Enterprise Linux Server 5
  7. CentOS Linux 6 (sudo)
  8. CentOS Linux 7 (sudo)
  9. Debian wheezy
  10. Debian jessie
  11. Debian stretch
  12. Debian sid
  13. Ubuntu 17.04
  14. Ubuntu 16.10
  15. Ubuntu 16.04 LTS
  16. Ubuntu 14.04 LTS
  17. SUSE Linux Enterprise Software Development Kit 12-SP2
  18. SUSE Linux Enterprise Server for Raspberry Pi 12-SP2
  19. SUSE Linux Enterprise Server 12-SP2
  20. SUSE Linux Enterprise Desktop 12-SP2
  21. OpenSuse, Slackware, and Gentoo Linux

[May 19, 2017] Google Found Over 1,000 Bugs In 47 Open Source Projects

May 14, 2017 |

Posted by EditorDavid on Saturday May 13, 2017 @11:34AM

Orome1 writes: In the last five months, Google's OSS-Fuzz program has unearthed over 1,000 bugs in 47 open source software projects ...

So far, OSS-Fuzz has found a total of 264 potential security vulnerabilities: 7 in Wireshark, 33 in LibreOffice, 8 in SQLite 3, 17 in FFmpeg -- and the list goes on...

Google launched the program in December and wants more open source projects to participate, so they're offering cash rewards for including "fuzz" targets for testing in their software.

"Eligible projects will receive $1,000 for initial integration, and up to $20,000 for ideal integration" -- or twice that amount, if the proceeds are donated to a charity.

[Mar 21, 2017] systemd-redux - blog dot lusis

Mar 21, 2017


Nov 20th, 2014 | Comments

I figured it was about time for a followup on my systemd post. I've been meaning to do it for a while but time hasn't allowed.

The end of Linux

Some people wrongly characterized this as some sort of hyperbole. It was not. Systemd IS changing what we know as Linux today. It remains to be seen if this is a good or bad thing but Linux is becoming something different than it was.

Linux is in for a rough few years

I do honestly believe this will end up being the start of a rocky period for Linux.

Additionally, while not systemd-specific, it is all legitimately inter-related: kdbus is coming, and it's already got its fair share of issues in the first implementation, including breaking userspace.

We also have distros like SLES adopting btrfs as the default filesystem.

All of these things combined mean that Linux is pushing the bleeding edge of a lot of unbaked technologies. Time will tell if this turns people off or not. I expect that enterprise shops will probably freeze systems at RHEL6 for a good while to come (and not just the standard "we're enterprise and we don't like to upgrade" time period).

Systemd isn't going away

Systemd is here to stay. The only way you will have a system without it is to roll your own. I don't expect many distros to choose to back out. My best hope is that they'll all freeze at the current version. Maybe a few things will get backported here and there for security fixes.

SystemD components are NOT optional

I know everyone likes to tout this, but no, the various systemd components, while not pid 1, are realistically not optional. Kdbus, a single parent hierarchy for namespaces (systemd is taking this one, of course), udev changes – the kernel and distros are changing and coalescing around whatever systemd ships. Most distros will probably use systemd-networkd, for instance. Look at what happened with Debian just today: the (albeit way late to the game) recommendation to support alternate init systems was rejected. I encourage you STRONGLY to read the systemd-devel mailing list for the kinds of issues you'll possibly have to deal with.


To be clear if you're going to stick with Linux, you will have to deal with systemd. It's up to you to decide if that's something you're comfortable with. Systemd is bringing some good things but, like other discussions I've been involved with, you're going to be stuck with all the other stuff that comes along with it whether you like it or not.

It's worth noting that FreeBSD just got a nice donation from the WhatsApp folks. It also ships with ZFS as part of the kernel and has jails, which are a much more baked technology and implementation than LXC. While you can't use Docker now with jails, my understanding is that there is work being done to support non-LXC operating-system-level virtualization (such as jails and Solaris zones).

Speaking of zones and Solaris, if that's an option for you it's probably the best of breed stack right now. Rich mature OS-level virtualization. SmartOS brings along KVM support for when you HAVE to run Linux but backed by Solaris tech under the hood. There's also OmniOS as a variant as well.

If you absolutely MUST run Linux, my recommendation is to minimize the interaction with the base distro as much as possible. CoreOS (when it's finally baked and production ready) can bring you an LXC based ecosystem. If they were to ever add actual virt support (i.e. KVM), then you could mix and match as needed. If you're working for a startup or a more flexible organization, you can go down this path. If you're working for a more traditional enterprise, your options are pretty limited. At least you'll have the RedHat support contract.

Posted by John E. Vincent Nov 20th, 2014

[Feb 04, 2017] Restoring deleted /tmp folder

Jan 13, 2015

As my journey continues with Linux and the Unix shell, I have made a few mistakes. I accidentally deleted the /tmp folder. To restore it, all you have to do is:

mkdir /tmp
chmod 1777 /tmp
chown root:root /tmp
ls -ld /tmp

[Jan 26, 2017] Penguins force-fed root Cruel security flaw found in systemd v228
Some Linux distros will need to be updated following the discovery of an easily exploitable flaw in a core system management component.

The CVE-2016-10156 security hole in systemd v228 opens the door to privilege escalation attacks, creating a means for hackers to root systems locally if not across the internet. The vulnerability is fixed in systemd v229.

Essentially, it is possible to create world-readable, world-writeable setuid executable files that are root owned by setting all the mode bits in a call to touch(). The systemd changelog for the fix reads:

basic: fix touch() creating files with 07777 mode

mode_t is unsigned, so MODE_INVALID < 0 can never be true.

This fixes a possible [denial of service] where any user could fill /run by writing to a world-writable /run/systemd/show-status.

However, as pointed out by security researcher Sebastian Krahmer, the flaw is worse than a denial-of-service vulnerability – it can be exploited by a malicious program or logged-in user to gain administrator access: "Mode 07777 also contains the suid bit, so files created by touch() are world writable suids, root owned."
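Krahmer's observation boils down to a classic unsigned-comparison bug. A minimal sketch of the bug class (a Python translation; the names here are mine, and systemd's real code is C):

```python
import ctypes

# mode_t is an unsigned type; systemd's MODE_INVALID sentinel is
# (mode_t) -1, which wraps around to a huge positive value.
MODE_INVALID = ctypes.c_uint32(-1).value  # 4294967295, not -1

def mode_rejected_buggy(m):
    # Translation of the buggy C guard `if (m < 0)`: an unsigned
    # value is never negative, so the check never fires and all
    # 07777 mode bits (including setuid) get applied to the file.
    return m < 0

def mode_rejected_fixed(m):
    # The fix: compare against the sentinel value itself.
    return m == MODE_INVALID
```

The buggy guard silently accepts the "invalid" sentinel, which is how touch() ended up creating 07777-mode files.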

The security bug was quietly fixed in January 2016, back when it was thought to pose only a system-crashing risk. The programming blunder's severity rating was upgraded this week following a reevaluation: the bug now weighs in at a CVSS score of 7.2, towards the top end of the 1-10 scale.

It's a local root exploit, so it requires access to the system in question to exploit, but it pretty much boils down to "create a powerful file in a certain way, and gain root on the server." It's trivial to pull off.

"Newer" versions of systemd deployed by Fedora or Ubuntu have been secured, but Debian systems are still running an older version and therefore need updating.

systemd is a suite of building blocks for Linux systems that provides system and service management technology. Security specialists view it with suspicion, and complaints about function creep are not uncommon. ®

[Dec 26, 2016] Devuans Systemd-Free Linux Hits Beta 2

Notable quotes:
"... Devuan came about after some users felt [Debian] had become too desktop-friendly . The change the greybeards objected to most was the decision to replace sysvinit init with systemd, a move felt to betray core Unix principles of user choice and keeping bloat to a bare minimum. ..."
"... now features an "init freedom" logo with the tagline, "watching your first step. ..."
Dec 26, 2016

Posted by EditorDavid on Saturday December 03, 2016 @11:38PM from the forking-the-road dept.

Long-time Slashdot reader Billly Gates writes,

"For all the systemd haters who want a modern distro feel free to rejoice. The Debian fork called Devuan is almost done, completing a daunting task of stripping systemd dependencies from Debian."

From The Register:

Devuan came about after some users felt [Debian] had become too desktop-friendly. The change the greybeards objected to most was the decision to replace sysvinit init with systemd, a move felt to betray core Unix principles of user choice and keeping bloat to a bare minimum.

Supporters of init freedom also dispute assertions that systemd is in all ways superior to sysvinit init, arguing that Debian ignored viable alternatives like sinit, openrc, runit, s6 and shepherd. All are therefore included in Devuan, which now features an "init freedom" logo with the tagline, "watching your first step."

Their home page now links to the download site for Devuan Jessie 1.0 Beta2, promising an OS that "avoids entanglement".

[Dec 26, 2016] The Linux Foundation Offers 50% Discounts On Training

Dec 26, 2016

Posted by EditorDavid on Sunday December 18, 2016 @05:44PM from the tell-em-Linus-sent-you dept.

An anonymous reader writes: The non-profit association that sponsors Linus Torvalds' work on Linux also offers self-paced online training and certification programs. And now through December 22, they're available at a 50% discount. "Make learning Linux and other open source technologies your New Year's Resolution this holiday season," reads a special page at

There's training in Linux security, networking, and system administration, as well as software-defined networking and OpenStack administration. (Plus a course called "Fundamentals Of Professional Open Source Management," and two certification programs that can make you a Linux Foundation-certified engineer or system administrator.)
And if you order right now, they'll also give you a free mug with a penguin on it.

[Nov 06, 2016] ascii files can be recovered b

View /tmp/somefile and see what you want to copy over from /dev/ to the original location.

If you are on an ext2 mount, maybe you can try the recover command.

My two-penny advice for the future: please always read the man page for a command and its arguments before you actually run it.

[Jun 06, 2016] 20 Linux Accounts to Follow on Twitter by Marin Todorov

Published: November 30, 2015

System administrators often need to find new information in their field of work. Reading the latest blog posts from hundreds of different sources is a task that not everyone has the time for. If you are such a busy user, or just like to find new information about Linux, you can use a social media website like Twitter.
20 Linux Twitter Accounts to Follow

Twitter is a website where you can follow users that share information that you are interested in. You can use the power of this website to get news, new ideas to solve problems, commands, links to interesting articles, new releases updates and many others. The possibilities are many, but Twitter is as good as the people you follow on it.

If you don't follow anyone, then your Twitter wall will remain empty. But if you follow the right people, you will be presented with tons of interesting information shared by people you followed.

The fact that you came across TecMint definitely means you are a Linux user thirsty to learn new stuff. We have decided to make your Twitter wall a bit more interesting, by gathering 20 Linux accounts to follow on Twitter.
1. Linus Torvalds – @Linus__Torvalds

Of course, the number one spot is saved for the person who created Linux – Linus Torvalds. His account is not that frequently updated, but it is still good to have it. The account was created in November 2012 and has over 22K followers.
Follow @Linus__Torvalds
2. FSF – @fsf

The Free Software Foundation has been fighting for essential rights for free software since 1985. The FSF joined Twitter in May 2008 and has over 10.6K followers. You can find different information here about new releases of free software as well as other information relevant to free software.
Follow @fsf
3. The Linux Foundation – @linuxfoundation

Next on our list is the Linux Foundation. On that page you will find many interesting news items, the latest updates around Linux and some useful tutorials. The account joined Twitter in May 2008 and has been active ever since. It has over 198K followers.
Follow @linuxfoundation
4. Linux Today – @linuxtoday

LinuxToday is an account that shares different news and tutorials gathered from different sources around the internet. This account joined Twitter in June 2009 and has over 67K followers.
Follow @linuxtoday
5. Distro Watch – @DistroWatch

DistroWatch will keep you updated about the latest Linux distributions available. If you are an OS maniac like us, this account is a must-follow. The account joined Twitter in February 2009 and has over 23K followers.
Follow @DistroWatch
6. Linux – @Linux

The Linux page likes to follow up on the latest Linux OS releases. You can follow this page if you want to know when a new Linux release is available. The account was created in September 2007 and has over 188K followers.
Follow @Linux
7. LinuxDotCom – @LinuxDotCom

LinuxDotCom is a page that covers information about Linux and everything around it – from Linux operating systems to devices in our life that use Linux. The account joined Twitter in January 2009 and has nearly 80K followers.
Follow @LinuxDotCom
8. Linux For You – @LinuxForYou

LinuxForYou is Asia's first English magazine for free and open source software. It joined Twitter in February 2009 and has nearly 21K followers.
Follow @LinuxForYou
9. Linux Journal – @linuxjournal

Another good Twitter account for keeping up with the latest Linux news is LinuxJournal's. Their articles are always informative, and if you like to get notified about new Linux information, I recommend signing up for their newsletter. The account joined in October 2007 and has over 35K followers.

10. Linux Pro – @linux_pro

The Linux_pro page is the page of the famous LinuxPro magazine. Besides Linux news, you will learn about the latest products, tools and strategies for administrators, programming in the Linux environment, and more. The account joined Twitter in September 2008 and has over 35K followers.

11. Tux Radar – @tuxradar

This is another popular account that provides interesting, yet different Linux news. TuxRadar uses different sources, so you will definitely want to have them in your wall stream. The account joined Twitter in February 2009 and has 11K followers.

12. CommandLineFu – @commandlinefu

If you like the Linux command line and want to find more tricks and tips, then commandlinefu is the perfect user to follow. The account posts frequent updates with different useful commands. It joined Twitter in January 2009 and has nearly 18K followers.
Follow @commandlinefu
13. Command Line Magic – @climagic

CommandLineMagic shows some command lines for advanced Linux users as well as some funny nerdy jokes. It's another fun account to follow and learn from. It joined Twitter in November 2009 and has 108K followers.

14. SadServer – @sadserver

SadServer is one of those accounts that just makes you laugh and makes you want to check back over and over again. Fun facts and stories are shared often, so you won't be disappointed. The account joined Twitter in February 2010 and has over 54K followers.
Follow @sadserver
15. Nixcraft – @nixcraft

If you enjoy Linux and DevOps work, then NixCraft is the one you should follow. The account is very popular among Linux users and has over 48K followers. It joined Twitter in November 2008.

16. Unixmen – @unixmen

Unixmen has a blog full of useful tutorials about Linux administration. It's another popular account among Linux users. The account has nearly 10K followers and joined Twitter in April 2009.

17. HowToForge – @howtoforgecom

HowToForge provides user-friendly tutorials and howtos about almost every topic related to Linux. They have over 8K followers on Twitter.
Follow @howtoforgecom
18. Webupd8 – @WebUpd8

Webupd8 describe themselves as an Ubuntu blog, but they cover much more than that. On their website or Twitter account you can find information about newly released Linux operating systems, open source software, and howtos, as well as customization tips. The account has nearly 30K followers and joined Twitter in March 2009.
Follow @WebUpd8
19. The Geek Stuff – @thegeekstuff

TheGeekStuff is another useful account where you can find Linux tutorials on different topics, covering both software and hardware. The account has over 3.5K followers and joined Twitter in December 2008.

20. Tecmint – @tecmint

Last, but definitely not least, let's not forget about TecMint – the very website that you're reading right now. We like to share all types of different stuff about Linux – from tutorials to funny things in the terminal and jokes about Linux. Follow TecMint on Twitter and you will never miss another article from us.
Follow @tecmint

[May 31, 2016] RHEL 6.8 is out

Notable quotes:
"... For customers with ever-increasing volumes of data, the Scalable File System Add-on for Red Hat Enterprise Linux 6.8 now supports xfs filesystem sizes up to 300TB. ..."
"... enables customers to migrate their traditional workloads into container-based applications – suitable for deployment on Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux Atomic Host. ..."

Red Hat Enterprise Linux 6.8 adds improved system archiving, new visibility into storage performance and an updated open standard for secure virtual private networks

Red Hat, Inc. (NYSE: RHT), the world's leading provider of open source solutions, today announced the general availability of Red Hat Enterprise Linux 6.8, the latest version of the Red Hat Enterprise Linux 6 platform. Red Hat Enterprise Linux 6.8 delivers new capabilities and provides a stable and trusted platform for critical IT infrastructure. With nearly six years of field-proven success, Red Hat Enterprise Linux 6 has set the stage for the innovations of today, as Red Hat Enterprise Linux continues to power not only existing workloads, but also the technologies of the future, from cloud-native applications to Linux containers.

With enhancements to security features and management, Red Hat Enterprise Linux 6.8 remains a solid, proven base for modern enterprise IT operations.

– Jim Totton, vice president and general manager, Platforms Business Unit, Red Hat

Red Hat Enterprise Linux 6.8 includes a number of new and updated features to help organizations bolster platform security and enhance systems management/monitoring capabilities, including:

Enhanced Security, Authentication, and Interoperability

To enhance security for virtual private networks (VPNs), Red Hat Enterprise Linux 6.8 includes libreswan, an implementation of one of the most widely supported and standardized VPN protocols, which replaces openswan as the Red Hat Enterprise Linux 6 VPN endpoint solution, giving Red Hat Enterprise Linux 6 customers access to recent advances in VPN security.

Customers running the latest version of Red Hat Enterprise Linux 6 can see increased client-side performance and simpler management through the addition of new capabilities to the Identity Management client code (SSSD). Cached authentication lookup on the client reduces the unnecessary exchange of user credentials with Active Directory servers. Support for adcli simplifies the management of Red Hat Enterprise Linux 6 systems interoperating with an Active Directory domain. In addition, SSSD now supports user authentication via smart cards, for both system login and related functions such as sudo.

Enhanced Management and Monitoring
The inclusion of Relax-and-Recover, a system archiving tool, provides a more streamlined system administration experience, enabling systems administrators to create local backups in an ISO format that can be centrally archived and replicated remotely for simplified disaster recovery operations. An enhanced yum tool simplifies the addition of packages, adding intelligence to the process of locating required packages to add/enable new platform features.

Red Hat Enterprise Linux 6.8 provides increased visibility into storage usage and performance through dmstats, a program that displays and manages I/O statistics for user-defined regions of devices using the device-mapper driver.

Additional Enhancements and Updates

For customers with ever-increasing volumes of data, the Scalable File System Add-on for Red Hat Enterprise Linux 6.8 now supports xfs filesystem sizes up to 300TB.

Additionally, the general availability of Red Hat Enterprise Linux 6.8 includes the launch of an updated Red Hat Enterprise Linux 6.8 base image which enables customers to migrate their traditional workloads into container-based applications – suitable for deployment on Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux Atomic Host.

Today's release also marks the transition of Red Hat Enterprise Linux 6 into Production Phase 2, a phase which prioritizes ongoing stability and security features for critical platform deployments. More information on the Red Hat Enterprise Linux lifecycle can be found at .

[May 31, 2016] Red Hat Enterprise Linux 6.8 Deprecates Btrfs

Notable quotes:
"... Deprecated functionality continues to be supported until the end of life of Red Hat Enterprise Linux 6. Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments. ..."
Buried within the notes for today's Red Hat Enterprise Linux 6.8 release are a few interesting items.

First, RHEL has deprecated support for the Btrfs file-system.

Btrfs file system
Development of B-tree file system (Btrfs) has been discontinued, and Btrfs is considered deprecated. Btrfs was previously provided as a Technology Preview, available on AMD64 and Intel 64 architectures.

Huh? Since when was Btrfs development discontinued? At least in the upstream space it's still ongoing, and Facebook (as well as other companies) continues pouring resources into stabilizing and advancing the capabilities of Btrfs, which is widely sought as a Linux alternative to ZFS. There are no signs of things stalling on the Btrfs mailing list. Especially as Red Hat hasn't been packaging ZFS for RHEL officially (but you can grab packages as an alternative), this move doesn't make a lot of sense. While Btrfs development has dragged on for a while and, aside from openSUSE/SUSE, the filesystem hasn't been deployed by default by other tier-one Linux distributions, it's a bit odd that Red Hat seems to be tossing in the towel on Btrfs.

Red Hat's definition of "deprecated" in their RHEL context means (as shown on the same page), "Deprecated functionality continues to be supported until the end of life of Red Hat Enterprise Linux 6. Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments."

[Apr 25, 2016] What's New in Red Hat Enterprise Linux 7.2

Video presentation.

[Apr 25, 2016] Red Hat Enterprise Linux 7.2 Beta Now Available

Red Hat

With a nod to the importance of continuously maintaining stable and secure enterprise environments, the beta release of Red Hat Enterprise Linux 7.2 includes several new and enhanced security capabilities. The introduction of a new SCAP module in the installer (anaconda) allows enterprise customers to apply SCAP-based security profiles during installation. Another new capability allows for the binding of data to local networks, which lets enterprises encrypt systems at scale with centralized management. In addition, the Red Hat Enterprise Linux 7.2 beta introduces support for DNSSEC for DNS zones managed by Identity Management (IdM), as well as federated identities, a mechanism that allows users to access resources using a single set of digital credentials.

Given the complexity and necessary due diligence required to efficiently and effectively manage the modern datacenter at scale, the beta release of Red Hat Enterprise Linux 7.2 includes new and improved tools to facilitate a more streamlined system administration experience. These new features and enhancements include:

As always, leveraging work in the Fedora community, Red Hat continuously monitors upstream developments and systematically incorporates select enterprise-ready features and technologies into Red Hat Enterprise Linux. The beta release of Red Hat Enterprise Linux 7.2 accomplishes this through the rebasing of the GNOME 3 desktop, the inclusion of GNOME Software, and the addition of new tuned profiles (inclusive of a profile for Red Hat Enterprise Linux for Real Time).

For more information on Red Hat Enterprise Linux 7.2, you can read the full release notes or, as an existing Red Hat customer, take Red Hat Enterprise Linux 7.2 beta for a test drive yourself via the Red Hat Customer Portal.

[Apr 25, 2016] What's Coming in Red Hat Enterprise Linux 7.2

DNSSEC for DNS zones managed by Red Hat Identity Management

RHEL 7.2 will also bring live kernel patching to RHEL, which Dumas sees as a critical security measure. Using elements of the kpatch technology that recently landed in the upstream Linux 4.0 kernel, RHEL users will be able to patch their running kernels dynamically.

...Dumas is particularly excited about the performance gains that RHEL 7.2 introduces. In particular, she noted that core networking packet performance is being accelerated by 35 percent in RHEL 7.2.

...With RHEL 7.2, Red Hat is refreshing the desktop with GNOME 3.14, which includes the GNOME software package manager and improvements to multi-monitor deployment capabilities.

[Apr 25, 2016] Red Hat Enterprise Linux 7 What's New

Jun 10, 2014 | InformationWeek

Red Hat released the 7.0 version of Red Hat Enterprise Linux today, with embedded support for Docker containers and support for direct use of Microsoft's Active Directory. The update uses XFS as its new file system.

"[Use of XFS] opens the door to a new class of data warehouse and big data analytics applications," said Mark Coggin, senior director of product marketing, in an interview before the announcement.

The high-capacity, 64-bit XFS file system, now the default file system in Red Hat Enterprise Linux, originated in the Silicon Graphics Irix operating system. It can scale up to 500 TB of addressable storage. In comparison, previous file systems, such as ext4, typically supported 16 TB.

RHEL 7's support for Linux containers amounts to a Docker container format integrated into the operating system so that users can begin building a "layered" application. Applications in the container can be moved around and will be optimized to run on Red Hat Atomic servers, which are hosts that use the specialized Atomic version of Enterprise Linux to manage containers.

[Want to learn more about Red Hat's commitment to Linux containers? Read Red Hat Containers OS-Centric: No Accident.]

RHEL 7 will also work with Active Directory, using cross-realm trust. Since both Linux and Windows are frequently found in the same enterprise data centers, cross-realm trust lets Linux use Active Directory as either a secondary check on a primary identity management system, or simply as a trusted source to identify users, Coggin says.

RHEL 7 also has more built-in instrumentation and tuning for optimized performance based on a selected system profile. "If you're running a compute-bound workload, you can select a profile that's better geared to it," Coggin notes.

[Dec 12, 2015] How to install and configure ZFS on Linux using Debian Jessie 8.1
ZFS is a combined filesystem and logical volume manager. The features of ZFS include protection against data corruption, support for high storage capacities, efficient data compression, integration of the filesystem and volume management concept, snapshots and copy-on-write clones, continuous integrity checking and automatic repair, RAID-Z and native NFSv4 ACLs.

ZFS was originally implemented as open-source software, licensed under the Common Development and Distribution License (CDDL).

When we talk about the ZFS filesystem, we can highlight the following key concepts:

For a full overview and description of all available features, see this detailed Wikipedia article.

In this tutorial, I will guide you step by step through the installation of the ZFS filesystem on Debian 8.1 (Jessie). I will show you how to create and configure pools using raid0 (stripe), raid1 (mirror) and RAID-Z (raid with parity), and explain how to configure a file system with ZFS.

Based on the information from the website, ZFS is only supported on the AMD64 and Intel 64-bit architecture (amd64). Let's get started with the setup. ... ... ...
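The pool layouts mentioned above can be sketched with a few commands (a hedged illustration using file-backed vdevs so no spare disks are needed; the pool and file names are made up, and the commands assume the ZFS packages are installed and you are root — use real disks such as /dev/sdb in production):

```shell
# Create sparse files to act as stand-in vdevs:
truncate -s 2G /var/tmp/disk1 /var/tmp/disk2 /var/tmp/disk3

# raid0 (stripe): data is spread across the vdevs, no redundancy.
zpool create pool0 /var/tmp/disk1 /var/tmp/disk2
zpool destroy pool0

# raid1 (mirror): each vdev holds a full copy of the data.
zpool create pool1 mirror /var/tmp/disk1 /var/tmp/disk2
zpool destroy pool1

# RAID-Z (single parity, needs at least three vdevs).
zpool create poolz raidz /var/tmp/disk1 /var/tmp/disk2 /var/tmp/disk3

# Create a filesystem on the pool and verify its health.
zfs create poolz/data
zpool status poolz
```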

The ZFS file system is a revolutionary new file system that fundamentally changes the way file systems are administered on Unix-like operating systems. ZFS provides features and benefits not found in any other file system available today. ZFS is robust, scalable, and easy to administer.

[Dec 05, 2015] How to forcefully unmount a Linux disk partition

January 27, 2006

... ... ...

Linux / UNIX will not allow you to unmount a device that is busy. There are many reasons for this (such as a program accessing the partition or an open file), but the most important one is to prevent data loss. Try the following command to find out what processes have activities on the device/partition. If your device name is /dev/sda1, enter the following command as the root user:

# lsof | grep '/dev/sda1'
vi 4453       vivek    3u      BLK        8,1                 8167 /dev/sda1

The above output tells us that user vivek has a vi process running that is using /dev/sda1. All you have to do is stop the vi process and run umount again. As soon as that program terminates its task, the device will no longer be busy and you can unmount it with the following command:

# umount /dev/sda1
How do I list the users on the file-system /nas01/?

Type the following command:

# fuser -u /nas01/
# fuser -u /var/www/
Sample outputs:
/var/www:             3781rc(root)  3782rc(nginx)  3783rc(nginx)  3784rc(nginx)  3785rc(nginx)  3786rc(nginx)  3787rc(nginx)  3788rc(nginx)  3789rc(nginx)  3790rc(nginx)  3791rc(nginx)  3792rc(nginx)  3793rc(nginx)  3794rc(nginx)  3795rc(nginx)  3796rc(nginx)  3797rc(nginx)  3798rc(nginx)  3800rc(nginx)  3801rc(nginx)  3802rc(nginx)  3803rc(nginx)  3804rc(nginx)  3805rc(nginx)  3807rc(nginx)  3808rc(nginx)  3809rc(nginx)  3810rc(nginx)  3811rc(nginx)  3812rc(nginx)  3813rc(nginx)  3815rc(nginx)  3816rc(nginx)  3817rc(nginx)

The following discussion shows how to unmount a device or partition forcefully using the umount or fuser Linux commands.

Linux fuser command to forcefully unmount a disk partition

Suppose you have /dev/sda1 mounted on /mnt directory then you can use fuser command as follows:

WARNING! These examples may result into data loss if not executed properly (see "Understanding device error busy error" for more information).

Type the command to unmount /mnt forcefully:

# fuser -km /mnt

Linux umount command to unmount a disk partition.

You can also try the umount command with the -l option on a Linux-based system:

# umount -l /mnt

If you would like to unmount a NFS mount point then try following command:

# umount -f /mnt

Please note that using these commands or options can cause data loss for open files; programs which access files after the file system has been unmounted will get an error.

See also:

[Nov 08, 2015] Getting Service or Asset Tags on Linux by Nick Geoghegan

Jul 3, 2015

At one point in time, you will need to find out your service or asset tag. Maybe you need to find out when your machine is out of vendor warranty, or what is actually in the machine. Popping the service tag into the Dell support site will tell you this… But what if you don't have it written down?

The Dell "tools", it should be pointed out, require you to restart the machine with a CD in the drive or to use a COM file. There is no way in hell that I'm digging out a DOS disk to try and run a COM file to get the service tag. The CD, as it turns out, is just a rebadged Ubuntu CD… Success!

So I mounted the Dell ISO, which was rather fiddly, and took a look around. A program called serviceTag was the first thing I noticed. Was this a specific Dell tool? What would happen if I ran it?

Being paranoid, I decided to see what was linked to this binary.

$ ldd serviceTag
	 =>  (0xf773f000)
	 => not found
	 => /usr/lib32/ (0xf7635000)
	 => /lib32/ (0xf760b000)
	 => /usr/lib32/ (0xf75ed000)
	 => /lib32/ (0xf7473000)
	/lib/ (0xf7740000)

Hmmmm. Never heard of libsmbios before. A quick Google vision quest led me here.

The SMBIOS Specification addresses how motherboard and system vendors present management information about their products in a standard format by extending the BIOS interface on x86 architecture systems.


Debian (and RHEL) have these tools in their standard repos! For Debian, it's just a matter of

apt-get install libsmbios-bin

You can then simply run:

[root@calculon /home/nick]$ /usr/sbin/getSystemId
Libsmbios version:      2.2.28
Product Name:           Gazelle Professional
Vendor:                 System76, Inc.
BIOS Version:           4.6.5
System ID:              XXXXXXXXXXXXXX
Service Tag:            XXXXXXXXXXXXXX
Express Service Code:   0
Asset Tag:              XXXXXXXXXXXXXX
Property Ownership Tag:
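For use in scripts, the interesting line of that report can be scraped with a small awk filter; on most hardware, dmidecode -s system-serial-number returns the same string. A sketch (the canned printf output below stands in for a real getSystemId run; the tag value is made up):

```shell
#!/bin/sh
# Extract the value of the "Service Tag:" line from getSystemId-style output.
get_tag() { awk -F':[ \t]*' '/^Service Tag:/ {print $2}'; }

# Real use:  /usr/sbin/getSystemId | get_tag
# Demo with canned output:
printf 'Product Name: Gazelle Professional\nService Tag: ABC1234\n' | get_tag
# prints ABC1234
```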

[Sep 02, 2015] Is systemd as bad as boycott systemd is trying to make it

October 26, 2014 |
Win2NIX, on October 26, 2014 at 11:04 pm

I migrated fully to *NIX after 10-15 years as a Win admin and got tired of having control "hidden". Worked with ESX and used the console and loved the freedom. The trend I am noticing with the systemd debate is VERY similar to what has happened with M$. Keep It Simple Stupid is something *nix should be doing; having things modular and not depending on something else makes life easier. If one thing breaks, it's not taking everything else with it. Further, if this is all done in binary and not easily read, THIS IS NOT GOOD. I hated M$ making me download other crap to diagnose their BSODs; if you like having your system flipping out and not saving your data, then I guess systemd would be for you, given its direction. This is also akin to making your browser part of your OS and having it intertwine with it. (Bad Voodoo)

I'm using Mint and looking for a possible way to decouple from systemd. I just don't see this as a good thing and it reminds me too much of M$ tactics. Now is the time to deviate from systemd and keep a more modular approach, then watch and see if systemd starts to be an issue, which at this point, if it keeps taking over more management, is only a matter of time. I also wonder if M$ embracing open source has anything to do with this; it certainly smells of large-corporation thinking, or lack thereof.

I like improving things, but this does not appear to be an improvement, rather a bomb waiting to go off. On these points this is a bad idea: binary, with no easy way to gain insight and correct issues, and multiple processes to control with more being added. I was able to patch heartbleed within 15 minutes after finding out about it. In the M$/corp world, good luck; hope it's this month.

Ummm..., on September 4, 2014 at 7:55 pm

I will admit right off, that I am not a linux designer or maintainer. I got started with linux about 20 years ago. People state that the old init system was fragile. Maybe it was, again…not building linux from scratch I wouldn't know. I don't recall ever having any issues though.

Whether right or wrong, from my (very) limited understanding, the systemd process is driven by binary files, which are not really meant to be edited or looked at by hand. So if something catastrophic happens (which granted hasn't happened yet)…how would I fix it or know what to fix? Go to my distro's forum and hope someone can fix it/release a patch soon?

Anyway, if one of the earlier commenters is correct, and there is no specific plan for systemd (which frankly is a scary thought)…how much more of the system will it continue to take over? And at what point does too much become too much?

I'm all for progress, but I think the Keep It Simple Stupid approach, which may not be "exciting" stuff to develop, is still the best approach.

"why did the people responsible for the development of the major Linux distributions accept it as a replacement for old init system?"

I can't speak for the initial decision, but at this point, I would suspect that inertia is keeping it in place. I highly doubt that any of the major Linux desktop systems that most current users depend on would even function without systemd… at least not without a lot of major programming changes to make it happen. If someone did take that route, then all of those custom changes would need to be maintained.

(Simplistically thinking) Why can't things be more pluggable/portable? Distro X uses a systemd plugin for their init, and distro Y chooses to build against something else? Granted systemd is most likely now too big for that, but one can dream I suppose.

AC, on September 4, 2014 at 2:16 pm

Yes. Systemd is a trojan.

xx, on September 4, 2014 at 1:17 pm

Systemd is a perfect system for rootkits and NSA backdoors.
Once it is complete, it will hide necessary processes even from root, it will filter unnecessary events from the log, and it will do much, much more.

But it seems that only a minority care about that.

Dimitri Minaev, on September 4, 2014 at 11:59 am

IMHO, the downside of systemd as a project is that its parts lack a defined stable interface. This means that you cannot replace one part with a different one, creating your own stack of tools. When you configure your desktop system, you can combine any display manager with any window manager with any panel or file manager. Can you replace networkd with another tool transparently? If yes, can you be sure that your tool will keep working after the next systemd upgrade?

T Davis, on September 4, 2014 at 11:20 am

The reason Debian (and therefore Ubuntu) adopted SystemD is that the appointed Debian tech team is now divided equally between Ubuntu devs (who were Debian devs before Ubuntu came along) and Red Hat employees. Look at the voting emails and 3 months of arguments.

The biggest issue is really not one of SystemD infiltration, but more of Redhat taking over every aspect of the Linux development process. Time and again, I have seen Canonical steer in their own direction, not because they want to go rogue, but because the upstreams for the main projects (Gnome, Wayland, Pulse Audio, now SystemD and possibly OpenStack, and even the kernel to some extent) are almost exclusively owned by Redhat, and only wish to make forward progress at their own pace (Wayland has had almost twice the development time and resources as Mir, for example).

The REAL issue here is: who has the Linux community's best interests at heart? Do some real investigation and write a story on that.

Ericg, on September 3, 2014 at 7:12 pm
Except you, the author, have fallen into the same trap everyone else does… confusing systemd (the project) with systemd (the binary). Systemd, the project, is like Apache: it's an umbrella term for a lot of other things – systemd, logind, networkd, and other utilities.

Systemd, the binary, handles service management in pid 1; that includes socket and explicit activation. Other tasks it passes off to non-pid-1 processes. For example: session management isn't handled through systemd pid 1; it's handled through logind.

Readahead is handled through a service file for systemd, just like other daemons.

syslog functionality isn't handled in pid 1; it's handled in journald, which is a separate process.

hostname, locale, and time registration are all handled through explicit utilities: hostnamectl, localectl, and timedatectl, which run as separate processes.

Network configuration got added in networkd. What is networkd? The most minimal network userland you can have. It's for people who don't want to write by-hand config files, but for whom NetworkManager is way overkill. Is it pid 1? Nope.

Yes, systemd started off as "just an init replacement." It grew into more things. But don't assume that systemd (the binary) is the same as systemd (the project). Most things added to systemd in recent times AREN'T pid 1, as boycottsystemd claims; they're just small utilities that got added under the systemd umbrella project.

Peter, on September 4, 2014 at 4:42 am
Ericg, that's the problem:
systemd has become a whole integrated stack.
init.d, while not easy to use for starters, was at least within the idea of simple units which can be mixed and matched to get the results the user wants – note: user wants, not developer wants.

a Linux user, on September 4, 2014 at 5:23 am

hostname, locale, and time registration are all handled through explicit utilities: hostnamectl, localectl, and timedatectl, which run as separate processes.

Missing the point.

People talk as though prior to systemd such tasks were beyond Linux, didn't work, always crashed, were a nightmare to use or manage and that is not the case.

The only difference I see between my Linux machine now and my Linux machine of a few years ago is that it now boots faster. And that's it. And whilst that's nice, it's so meaningless as to be painful to behold the enthusiasm that some display, as though all they did all day long was sit and reboot their machines with a stop watch in one hand.

The main problem with systemd is this – if there are ulterior motives at work here (and by definition they will be hidden at present) then by the time we find that out it will be too late.
And the other problem is that it takes a special kind of arrogance to sneer at 20+ years of development by some seriously smart people and claim that you, as a mere child, can do better. I do wonder how far systemd would have got had it not had Red Hat's weight behind it. I do realise that improvement sometimes means kicking out old 'tried and trusted' methods. But it's the way it's happening with systemd that rings alarm bells – too many sneering, nasty bullies trashing anyone who disagrees (just like anyone who thinks corporations should pay proper taxes is sneered at, or anyone who thinks Putin is not as bad as he is made out to be gets sneered at – sneering is the new way of silencing genuine debate, so when I come across it in Linuxland, alarm bells begin to ring).

Linux is about granular power and control, not convenience.

J. Orejarena, on September 4, 2014 at 9:38 am
"The main problem with systemd is this – if there are ulterior motives at work here (and by definition they will be hidden at present) then by the time we find that out it will be too late."

Just read (without the blank space before ".net") to find the ulterior motives.

[May 07, 2015] Red Hat Enterprise Linux Life Cycle - Red Hat Customer Portal

* The life cycle dates are subject to adjustment.

In Red Hat Enterprise Linux 4, EUS was available for the following minor releases:

In Red Hat Enterprise Linux 5, EUS is available for the following minor releases:

In Red Hat Enterprise Linux 6, EUS is available for all minor releases released during the Production 1 Phase, but not for the minor release marking transition to Production 2 or any minor releases released during Production Phases 2 or 3. Each Red Hat Enterprise Linux 6 EUS stream is available for 24 months from the availability of the minor release.

In Red Hat Enterprise Linux 6, EUS is available for the following minor releases:

Future Red Hat Enterprise Linux 6 releases for which EUS is available will be added to the above list upon their release.

In Red Hat Enterprise Linux 7, EUS will be available for all minor releases during the Production 1 Phase, but not for 7.0 or the minor release marking the transition to Production 2, or for any minor releases released during Production Phases 2 or 3. Each Red Hat Enterprise Linux 7 EUS stream is available for 24 months from the availability of the minor release.

In Red Hat Enterprise Linux 7, EUS is available for the following releases:

Future Red Hat Enterprise Linux 7 releases for which EUS is available will be added to the above list upon their release.

Please see this Knowledgebase Article for more details on EUS.

[Jun 27, 2014] What's new in Red Hat Enterprise Linux 7

Red Hat


...Red Hat Enterprise Linux 7 delivers dramatic improvements in reliability, performance, and scalability. A wealth of new features provides the architect, system administrator, and developer with the resources necessary to innovate and manage more efficiently.


Linux containers have emerged as a key open source application packaging and delivery technology, combining lightweight application isolation with the flexibility of image-based deployment methods. Developers have rapidly embraced Linux containers because they simplify and accelerate application deployment, and many Platform-as-a-Service (PaaS) platforms are built around Linux container technology, including OpenShift by Red Hat. Red Hat Enterprise Linux 7 implements Linux containers using core technologies such as control groups (cGroups) for resource management, namespaces for process isolation, and SELinux for security, enabling secure multi-tenancy and reducing the potential for security exploits. The Red Hat container certification ensures that application containers built using Red Hat Enterprise Linux will operate seamlessly across certified container hosts.
With more and more systems, even at the low end, presenting non-uniform memory access (NUMA) topologies, Red Hat Enterprise Linux 7 addresses the performance irregularities that such systems present. A new, kernel-based NUMA affinity mechanism automates memory and scheduler optimization. It attempts to match processes that consume significant resources with available memory and CPU resources in order to reduce cross-node traffic. The resulting improved NUMA resource alignment improves performance for applications and virtual machines, especially when running memory-intensive workloads.
Red Hat Enterprise Linux 7 unifies hardware event reporting into a single reporting mechanism. Instead of various tools collecting errors from different sources with different timestamps, a new hardware event reporting mechanism (HERM) will make it easier to correlate events and get an accurate picture of system behavior. HERM reports events in a single location and in a sequential timeline. HERM uses a new userspace daemon, rasdaemon, to catch and log all RAS events coming from the kernel tracing infrastructure.
Red Hat Enterprise Linux 7 advances the level of integration and usability between the Red Hat Enterprise Linux guest and VMware vSphere. Integration now includes: • Open VM Tools - bundled open source virtualization utilities. • 3D graphics drivers for hardware-accelerated OpenGL and X11 rendering. • Fast communication mechanisms between VMware ESX and the virtual machine.
The ability to revert to a known, good system configuration is crucial in a production environment. Using LVM snapshots with ext4 and XFS (or the integrated snapshotting feature in Btrfs described in the "Snapper" section) an administrator can capture the state of a system and preserve it for future use. An example use case would involve an in-place upgrade that does not present a desired outcome and an administrator who wants to restore the original configuration.
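As a sketch of that workflow, assuming a volume group named vg0 with a root logical volume and enough free extents (all names and the 1G size are illustrative, not from the text), the commands involved would look like this; the script only prints them:

```shell
#!/bin/sh
# Build a date-stamped snapshot name, then show the capture/rollback commands.
snap_name() { echo "root_pre_$1"; }

name=$(snap_name "$(date +%Y%m%d)")
echo "lvcreate --size 1G --snapshot --name $name /dev/vg0/root"
# After a failed in-place upgrade, merge the snapshot back into the origin
# (the merge completes on the next activation of the volume):
echo "lvconvert --merge /dev/vg0/$name"
```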
Red Hat Enterprise Linux 7 introduces Live Media Creator for creating customized installation media from a kickstart file for a range of deployment use cases. Media can then be used to deploy standardized images whether on standardized corporate desktops, standardized servers, virtual machines, or hyperscale deployments. Live Media Creator, especially when used with templates, provides a way to control and manage configurations across the enterprise.
Red Hat Enterprise Linux 7 features the ability to use installation templates to create servers for common workloads. These templates can simplify and speed creating and deploying Red Hat Enterprise Linux servers, even for those with little or no experience with Linux.

Red Hat Red Hat Enterprise Linux 7 – Setting World Records At Launch

June 10, 2014

Today's announcement of general availability of Red Hat Enterprise Linux 7 marks a significant milestone for Red Hat. The culmination of a multi-year effort by Red Hat's engineering team and our partners, the latest major release of our flagship platform redefines the enterprise operating system, and is designed to power the spectrum of enterprise IT: applications running on physical servers, containerized applications, and also cloud services.

Since its introduction more than a decade ago, Red Hat Enterprise Linux has become the world's leading enterprise Linux platform, setting industry standards for performance along the way, with Red Hat Enterprise Linux 7 continuing this trend. On its first day of general availability, Red Hat Enterprise Linux 7 already claims multiple world record-breaking benchmark results running on HP ProLiant servers, including:

SPECjbb2013 Multi-JVM Benchmark
• One processor world record for both max-jOPS (16,252) and critical-jOPS (4,721) metrics
• Two processor world record for both max-jOPS (119,517) and critical-jOPS (36,411) metrics
• Four processor world record for both max-jOPS (202,763) and critical-jOPS (65,950) metrics

The SPECjbb2013 benchmark is an industry-standard measurement of Java-based application performance developed by the Standard Performance Evaluation Corporation (SPEC). Application performance remains an important attribute for many customers, and this set of results demonstrates Red Hat Enterprise Linux's continued ability to deliver world-class performance, alongside support from our ecosystem of partners and OEMs. With these impressive results to its name already, we like to think that this is only the tip of the iceberg for Red Hat Enterprise Linux 7's achievements, especially since the platform is designed to power a broad spectrum of enterprise IT workloads.

SPEC and SPECjbb are registered trademarks of the Standard Performance Evaluation Corporation. Results as of June 10, 2014. See for more information.

For further details on SPECjbb2013 benchmark results achieved on HP ProLiant XL220a Gen8 v2 (1P), HP ProLiant DL580 Gen8 (2P), and HP ProLiant DL580 Gen8 (4P) servers, see

[Jun 27, 2014] Red Hat Enterprise Linux 7 in evaluation for Common Criteria certification

June 19, 2014

Security is a crucial component of the technology Red Hat provides for its customers and partners, especially those who operate in sensitive environments, including the military.

[Jun 27, 2014] Oracle Announces OpenStack Support for Oracle Linux and Oracle VM

A technology preview of an OpenStack distribution that allows Oracle Linux and Oracle VM to work with the open source cloud software is now available. Users can install this OpenStack technology preview in their test environments with the latest version of Oracle Linux and the beta release of Oracle VM 3.3.

Read the Press Release
Read More from Oracle Senior Vice President of Linux and Virtualization Wim Coekaerts
Read More from Oracle Product Management Director Ronen Kofman

Oracle Linux Free as in Speech AND Free as in Beer by Monica Kumar

Jan 08, 2014 | Oracle's Linux Blog

One of the biggest benefits of Oracle Linux is that binaries, patches, errata, and source are always free. Even if you don't have a support subscription, you can download and run exactly the same enterprise-grade distribution that is deployed in production by thousands of customers around the world. You can receive binaries and errata reliably and on schedule, and take advantage of the thousands of hours Oracle spends testing Oracle Linux every day. And, of course, Oracle Linux is completely compatible with Red Hat Enterprise Linux, so switching to Oracle Linux is easy.

CentOS is another Linux distribution that offers free binaries with Red Hat compatibility. Traditionally, CentOS has been used for Linux systems which do not require support, in order to reduce or avoid expensive Red Hat Enterprise Linux subscription costs. Recently, Red Hat announced it was "joining forces" with the CentOS project, hiring many of the key CentOS developers, and "building a new CentOS." This is a curious development given that the primary factors that have made CentOS popular are that it is free and Red Hat compatible.

It would be natural for existing CentOS users to wonder what Red Hat actually has in mind for the "new CentOS" when the FAQ accompanying the announcement states that Red Hat does not recommend CentOS for production deployment, is not recommending mixed CentOS and Red Hat Enterprise Linux deployments, will not support JBoss and other products on CentOS, and is not including CentOS in Red Hat's developer offerings designed to create "applications for deployment into production environments."

If Red Hat truly wished to satisfy the key requirements of most CentOS users, they would take a much simpler step: they would make Red Hat Enterprise Linux binaries, patches, and errata available for free download – just like Oracle already does.

Fortunately, no matter what future CentOS faces in Red Hat's hands, Oracle Linux offers all users a single distribution for development, testing, and deployment, for free or with a paid support subscription. Oracle does not require customers to buy a subscription for every server running Oracle Linux (or any server running Oracle Linux). If a customer wants to pay for support for production systems only, that's the customer's choice. The Oracle model is simple, economical, and well suited to environments with rapidly changing needs.

Oracle is focused on providing what we have since day one – a fast, reliable Linux distribution that is completely compatible with Red Hat Enterprise Linux, coupled with enterprise-class support, indemnity, and flexible support policies. If you are a CentOS user, or a Red Hat user, why not download and try Oracle Linux today? You have nothing to lose; after all, it's free.

Al Gillen, program vice president, System Software, IDC
"CentOS is one of the major non-commercial distributions in the industry, and a key adjacent project for many Red Hat Enterprise Linux customers. This relationship helps strengthen the CentOS community, and will ensure that CentOS benefits directly from the community-centric development approach that Red Hat both understands and heavily supports. Given the growing opportunities for Linux in the market today in areas such as OpenStack, cloud and big data, a stronger CentOS technology backed by the CentOS community-including Red Hat-is a positive development that helps the overall industry."

Stephen O'Grady, principal analyst, RedMonk
"Though it will doubtless come as a surprise, this move by Red Hat represents the logical embrace of an adjacent ecosystem. Bringing the CentOS and Red Hat communities closer together should be a win for both parties."

Additional Resources

Connect with Red Hat

Red Hat + CentOS - Red Hat Open Source Community

Red Hat + CentOS

Red Hat and the CentOS Project are building a new CentOS, capable of driving forward development and adoption of next-generation open source projects.

Red Hat will contribute its resources and expertise in building thriving open source communities to help establish more open project governance, broaden opportunities for participation, and provide new ways for CentOS users and contributors to collaborate on next-generation technologies such as cloud, virtualization, and Software-Defined Networking (SDN).

With Red Hat's contributions and investment, the CentOS Project will be better able to serve the needs of open source community members who require different or faster-moving components to be integrated with CentOS, expanding on existing efforts to collaborate with open source projects such as OpenStack, Gluster, OpenShift Origin, and oVirt.

Red Hat has worked with the CentOS Project to establish a merit-based open governance model for the CentOS Project, allowing for greater contribution and participation through increased transparency and access.


Today, the CentOS Project produces CentOS, a popular community Linux distribution built from much of the Red Hat Enterprise Linux codebase and other sources. Over the coming year, the CentOS Project will expand its mission to establish CentOS as a leading community platform for emerging open source technologies coming from other projects such as OpenStack.

How is CentOS different from Red Hat Enterprise Linux?

CentOS is a community project that is developed, maintained, and supported by and for its users and contributors. Red Hat Enterprise Linux is a subscription product that is developed, maintained, and supported by Red Hat for its subscribers.

While CentOS is derived from the Red Hat Enterprise Linux codebase, CentOS and Red Hat Enterprise Linux are distinguished by divergent build environments, QA processes, and, in some editions, different kernels and other open source components. For this reason, the CentOS binaries are not the same as the Red Hat Enterprise Linux binaries.

The two also have very different focuses. While CentOS delivers a distribution with strong community support, Red Hat Enterprise Linux provides a stable enterprise platform with a focus on security, reliability, and performance as well as hardware, software, and government certifications for production deployments. Red Hat also delivers training, and an entire support organization ready to fix problems and deliver future flexibility by getting features worked into new versions.

Once in use, the operating systems often diverge further, as users selectively install patches to address bugs and security vulnerabilities to maintain their respective installs. In addition, the CentOS Project maintains code repositories of software that are not part of the Red Hat Enterprise Linux codebase. This includes feature changes selected by the CentOS Project. These are available as extra/additional packages and environments for CentOS users.

[Oct 26, 2013] RHEL handling of DST change

Most server hardware clocks use UTC. UTC stands for Coordinated Universal Time, also known as Greenwich Mean Time (GMT). Other time zones are determined by adding to or subtracting from UTC. A server typically displays local time, which is subject to DST correction twice a year.

Wikipedia defines DST as follows:

Daylight saving time (DST), also known as summer time in British English, is the convention of advancing clocks so that evenings have more daylight and mornings have less. Typically clocks are adjusted forward one hour in late winter or early spring and are adjusted backward in autumn.

A DST patch is only required in a few countries, such as the USA. Please see this Wikipedia article.

Linux will change to and from DST when the HWCLOCK setting in /etc/sysconfig/clock is set to -u, i.e. when the hardware clock is set to UTC (which is closely related to GMT), regardless of whether Linux was running at the time DST is entered or left.

When the HWCLOCK setting is set to `--localtime', Linux will not adjust the time, operating under the assumption that your system may be a dual-boot system and that the other OS takes care of the DST switch. If that is not the case, the DST change needs to be made manually.


EST is defined as GMT-5 all year round. US/Eastern, on the other hand, means GMT-5 or GMT-4 depending on whether Daylight Saving Time (DST) is in effect or not.

The tzdata package contains data files with rules for the various time zones around the world. When this package is updated, it delivers all accumulated timezone changes and fixes at once.
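The EST vs. US/Eastern distinction is easy to verify from the shell with GNU date, assuming a glibc system with tzdata installed (the 2014 dates are illustrative):

```shell
# A fixed-offset zone never moves; a location zone follows the tzdata DST rules.
TZ=US/Eastern date -d '2014-01-15 12:00' +%z   # -0500 (EST, winter)
TZ=US/Eastern date -d '2014-07-15 12:00' +%z   # -0400 (EDT, summer)
TZ=EST5 date -d '2014-07-15 12:00' +%z         # -0500 all year round
```

The TZ=EST5 form is a POSIX fixed-offset zone string, so it works even without the tzdata zone files.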

[Feb 28, 2012] Red Hat vs. Oracle Linux Support 10 Years Is New Standard

The VAR Guy

The support showdown started a couple of weeks ago, when Red Hat extended the life cycle of Red Hat Enterprise Linux (RHEL) versions 5 and 6 from the norm of seven years to a new standard of 10 years. A few days later, Oracle responded by extending Oracle Linux life cycles to 10 years. Side note: It sounds like SUSE, now owned by Attachmate, also offers extended Linux support of up to 10 years.

[Feb 07, 2012] Virtualization With Xen On CentOS 6.2 (x86_64)

Linux Howtos

This tutorial provides step-by-step instructions on how to install Xen (version 4.1.2) on a CentOS 6.2 (x86_64) system.

Xen lets you create guest operating systems (*nix operating systems like Linux and FreeBSD), so called "virtual machines" or domUs, under a host operating system (dom0). Using Xen you can separate your applications into different virtual machines that are totally independent from each other (e.g. a virtual machine for a mail server, a virtual machine for a high-traffic web site, another virtual machine that serves your customers' web sites, a virtual machine for DNS, etc.), but still use the same hardware. This saves money, and what is even more important, it's more secure. If the virtual machine of your DNS server gets hacked, it has no effect on your other virtual machines. Plus, you can move virtual machines from one Xen server to the next one.

[Jan 11, 2012] Red Hat Enterprise Linux 6.2 Announcement

They continue to push KVM, which is seldom used in enterprise environments. The most important addition is Linux containers.
Dec 06, 2011 [rhelv6-announce]

Hardware support

Linux Containers




Error detection and reporting


The X server has been re-based in this release. Updating the X server increases system stability through the isolation of the system display drivers and provides a better base for new features, along with improved support for newer optional workstation hardware, multiple displays, and new input devices.

[Jul 31, 2011] Scientific Linux pushes RHEL clones forward by Sean Michael Kerner

July 29, 2011 | InternetNews.
From the 'Clone Wars' files:

"Scientific Linux 6.1 is now available, providing users with a stable, reliable, Free (as in Beer) version of Red Hat Enterprise Linux 6.1.

Red Hat released RHEL 6.1 in May, providing improved driver support and hardware enablement and oh yeah security fixes too.

Scientific Linux is a joint effort by Fermilab and CERN and is targeted at the scientific community, but it's a solid RHEL version in its own right. It's also one that could now be attracting some new users, thanks to delays at the 'other' popular RHEL clone -- CentOS.

The CentOS project just released CentOS 6 and is many months behind Scientific Linux and even further behind RHEL. That's a problem for some and could also represent a real security risk for most.

With the more rapid release cycle of Scientific Linux I will not be surprised if some disgruntled CentOS users make the switch and/or if new users just start off with Scientific Linux first.

While Scientific Linux is faster than CentOS at replicating RHEL 6.1, they aren't the fastest clone.

Oracle Linux 6.1 came out in June, barely a month after Red Hat's release.

It's somewhat ironic that Oracle is now the fastest clone tracking RHEL, since Red Hat has made it harder to clone with the way they package releases. As it turns out, it's not slowing Oracle down at all, though it might be impacting the community releases.

[May 31, 2011] RHEL Tuning and Optimization for Oracle V11

The Completely Fair Queuing (CFQ) scheduler is the default algorithm in Red Hat Enterprise Linux 4 which is suitable for a wide variety of applications and provides a good compromise between throughput and latency. In comparison to the CFQ algorithm, the Deadline scheduler caps maximum latency per request and maintains a good disk throughput which is best for disk-intensive database applications.

Hence, the Deadline scheduler is recommended for database systems. Also, at the time of this writing there is a bug in the CFQ scheduler which affects heavy I/O, see Metalink Bug:5041764. Even though this bug report talks about OCFS2 testing, this bug can also happen during heavy IO access to raw or block devices and as a consequence could evict RAC nodes.

To switch to the Deadline scheduler, the boot parameter elevator=deadline must be passed to the kernel that is being used.

Edit the /etc/grub.conf file and add the parameter to the kernel that is being used, in this example 2.6.18-8.el5:

title Red Hat Enterprise Linux Server (2.6.18-8.el5)
    root (hd0,0)
    kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/sda2 elevator=deadline
    initrd /initrd-2.6.18-8.el5.img

This entry tells the 2.6.18-8.el5 kernel to use the Deadline scheduler. Make sure to reboot the system to activate the new scheduler.
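On 2.6 kernels the scheduler can also be inspected, and on recent ones switched, per device through sysfs without touching grub.conf. A minimal sketch; the device name sda and the bracketed-name format of the sysfs file are assumptions, not from the text above:

```shell
# /sys/block/<dev>/queue/scheduler lists the available schedulers with
# the active one in square brackets, e.g. "noop anticipatory deadline [cfq]".
active_scheduler() {
    # print the name inside the square brackets
    printf '%s\n' "$1" | sed 's/.*\[\(.*\)\].*/\1/'
}

# On a live system:
#   cat /sys/block/sda/queue/scheduler
#   echo deadline > /sys/block/sda/queue/scheduler   # switch at runtime
active_scheduler 'noop anticipatory deadline [cfq]'   # prints: cfq
```

The runtime switch is lost at reboot, which is why the elevator=deadline boot parameter above is still the way to make it permanent.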

Changing Network Adapter Settings

To check the speed and settings of network adapters, use the ethtool command, which now works for most network interface cards. To check the adapter settings of eth0, run:

# ethtool eth0

To force a speed change to 1000Mbps, full duplex mode, run:

# ethtool -s eth0 speed 1000 duplex full autoneg off

To make a speed change permanent for eth0, set or add the ETHTOOL_OPTS variable in /etc/sysconfig/network-scripts/ifcfg-eth0:

ETHTOOL_OPTS="speed 1000 duplex full autoneg off"

This variable is sourced in by the network scripts each time the network service is started.

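For context, a minimal ifcfg-eth0 sketch with the option in place; the addressing values are illustrative, not from the original text:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 (illustrative values)
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
ETHTOOL_OPTS="speed 1000 duplex full autoneg off"
```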

Changing Network Kernel Settings

Oracle uses the User Datagram Protocol (UDP) as the default protocol on Linux for interprocess communication, such as cache fusion buffer transfers between RAC instances. However, starting with Oracle 10g, network settings should be adjusted for standalone databases as well.

Oracle recommends setting the default and maximum send buffer size (SO_SNDBUF socket option) and receive buffer size (SO_RCVBUF socket option) to 256 KB. The receive buffers are used by TCP and UDP to hold received data until it is read by the application. With TCP the receive buffer cannot overflow, because the peer is not allowed to send data beyond the advertised window. With UDP, however, datagrams that do not fit in the socket receive buffer are discarded, so a fast sender can overwhelm a slow receiver.

The default and maximum buffer sizes can be changed in the proc file system without a reboot:

The default setting in bytes of the socket receive buffer:
# sysctl -w net.core.rmem_default=262144

The default setting in bytes of the socket send buffer:
# sysctl -w net.core.wmem_default=262144

The maximum socket receive buffer size that may be set using the SO_RCVBUF socket option:
# sysctl -w net.core.rmem_max=262144

The maximum socket send buffer size that may be set using the SO_SNDBUF socket option:
# sysctl -w net.core.wmem_max=262144

To make the change permanent, add the following lines to the /etc/sysctl.conf file, which is read during the boot process:

net.core.rmem_default=262144
net.core.wmem_default=262144
net.core.rmem_max=262144
net.core.wmem_max=262144
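Each sysctl key maps to a file under /proc/sys, with the dots turned into slashes, so the same values can be read back without the sysctl binary. A small helper sketch:

```shell
# Convert a sysctl key such as net.core.rmem_default into its /proc path.
key_to_proc() {
    printf '/proc/sys/%s\n' "$(printf '%s' "$1" | tr . /)"
}

# On a live system:
#   cat "$(key_to_proc net.core.rmem_default)"   # should print 262144
key_to_proc net.core.rmem_default   # prints: /proc/sys/net/core/rmem_default
```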

To improve failover performance in a RAC cluster, consider adjusting the relevant IP kernel parameters as well. The appropriate values are highly dependent on your system, network, and other applications; for suggestions, see Metalink Note:249213.1 and Note:265194.1.

On Red Hat Enterprise Linux systems the default range of local IP port numbers allowed for TCP and UDP traffic is too low for 9i and 10g systems. Oracle recommends the following port range:

# sysctl -w net.ipv4.ip_local_port_range="1024 65000"

To make the change permanent, add the following line to the /etc/sysctl.conf file, which is read during the boot process:

net.ipv4.ip_local_port_range=1024 65000

The first number is the lowest local port allowed for TCP and UDP traffic, and the second number is the highest.
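The active range can be read back from /proc/sys/net/ipv4/ip_local_port_range as two whitespace-separated numbers. A small check sketch, assuming that two-field format:

```shell
# range_covers "low high" wanted_low wanted_high
# succeeds if the configured range includes the wanted range.
range_covers() {
    set -- $1 "$2" "$3"   # word-split "low high" into two fields
    [ "$1" -le "$3" ] && [ "$2" -ge "$4" ]
}

# On a live system:
#   range_covers "$(cat /proc/sys/net/ipv4/ip_local_port_range)" 1024 65000
range_covers '1024 65000' 1024 65000 && echo 'range OK'   # prints: range OK
```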

10.3. Flow Control for e1000 Network Interface Cards

The e1000 family of network interface cards does not have flow control enabled in the 2.6 kernel on Red Hat Enterprise Linux 4 and 5. If you have heavy traffic, the RAC interconnects may lose blocks; see Metalink Bug:5058952. For more information on flow control, see the Wikipedia article on flow control.

To enable receive flow control for e1000 network interface cards, add the following line to the /etc/modprobe.conf file:

options e1000 FlowControl=1

The e1000 module needs to be reloaded for the change to take effect. Once the module is loaded with flow control, you should see e1000 flow control messages in /var/log/messages.
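After the reload (rmmod e1000 && modprobe e1000, best done from the console since it drops the link), the kernel log shows whether flow control is active. A tiny stdin filter for such lines; the exact message wording is an assumption:

```shell
# Print log lines that mention flow control (case-insensitive).
fc_lines() {
    grep -i 'flow control'
}

# On a live system:
#   fc_lines < /var/log/messages
printf 'e1000: eth0: Flow Control: RX\nunrelated line\n' | fc_lines
# prints: e1000: eth0: Flow Control: RX
```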

Verifying Asynchronous I/O Usage

To verify whether $ORACLE_HOME/bin/oracle was linked with asynchronous I/O, you can use the Linux commands ldd and nm.

In the following example, $ORACLE_HOME/bin/oracle was relinked with asynchronous I/O:

$ ldd $ORACLE_HOME/bin/oracle | grep libaio
        libaio.so.1 => /usr/lib/libaio.so.1 (0x0093d000)
$ nm $ORACLE_HOME/bin/oracle | grep io_getevent
w io_getevents@@LIBAIO_0.1

In the following example, $ORACLE_HOME/bin/oracle has NOT been relinked with asynchronous I/O:

$ ldd $ORACLE_HOME/bin/oracle | grep libaio
$ nm $ORACLE_HOME/bin/oracle | grep io_getevent
w io_getevents

Even if $ORACLE_HOME/bin/oracle is relinked with asynchronous I/O, that does not necessarily mean Oracle is actually using it. You also have to ensure that Oracle is configured to make asynchronous I/O calls; see Enabling Asynchronous I/O Support.

To verify whether Oracle is making asynchronous I/O calls, you can look at the /proc/slabinfo file, assuming there are no other applications performing asynchronous I/O calls on the system. This file shows kernel slab cache information in real time.

On a Red Hat Enterprise Linux 3 system where Oracle does not make asynchronous I/O calls, the output looks like this:

$ egrep "kioctx|kiocb" /proc/slabinfo
kioctx 0 0 128 0 0 1 : 1008 252
kiocb 0 0 128 0 0 1 : 1008 252


Once Oracle makes asynchronous I/O calls, the output on a Red Hat Enterprise Linux 3 system will look like this:

$ egrep "kioctx|kiocb" /proc/slabinfo
kioctx 690 690 128 23 23 1 : 1008 252
kiocb 58446 65160 128 1971 2172 1 : 1008 252

Red Hat Enterprise Linux 5.7 Released in Beta
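Reading the first numeric column of the kioctx/kiocb rows as the active-object count (an assumption based on the RHEL 3 slabinfo layout shown above), the check can be scripted:

```shell
# Sum the active-object counts of the kioctx and kiocb rows; a nonzero
# total suggests asynchronous I/O is in use.
aio_active() {
    awk '$1 == "kioctx" || $1 == "kiocb" { total += $2 } END { print total + 0 }'
}

# On a live system:
#   aio_active < /proc/slabinfo
printf 'kioctx 690 690 128 23 23 1\nkiocb 58446 65160 128 1971 2172 1\n' | aio_active
# prints: 59136
```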


[May 21, 2011] 6.1 Technical Notes


[May 21, 2011] Red Hat Delivers Red Hat Enterprise Linux 6.1

RHEL 6.0 was pretty raw; hopefully they fixed the most glaring flaws.
May 19, 2011 | Red Hat

Red Hat, Inc. (NYSE: RHT) today announced the general availability of Red Hat Enterprise Linux 6.1, the first update to the platform since the delivery of Red Hat Enterprise Linux 6 in November 2010.
... ... ... ...

Red Hat Enterprise Linux 6.1 is already established as a performance leader serving both as a virtual machine guest and hypervisor host in SPECvirt benchmarks. Red Hat and HP recently announced that the combination of Red Hat Enterprise Linux with KVM running on an HP ProLiant BL620c G7 20-core Blade server delivered a record-setting SPECvirt_sc2010 benchmark result. Red Hat and IBM also recently announced that the companies submitted a benchmark to SPEC in which a combination of Red Hat Enterprise Linux, Red Hat Enterprise Virtualization and IBM systems delivered 45% better consolidation capability than competitors in performance tests conducted by Red Hat and IBM.

"Building on our decade-long partnership to optimize Red Hat Enterprise Linux for IBM platforms, our companies have collaborated closely on the development of Red Hat Enterprise Linux 6.1," said Jean Staten Healy, director, Cross-IBM Linux and Open Virtualization. "Red Hat Enterprise Linux 6.1 combined with IBM hardware capabilities offers our customers expanded flexibility, performance and scalability across their bare metal, virtualized and cloud environments. Our collaboration continues to drive innovation and leading results in the industry."

In addition to performance improvements, Red Hat Enterprise Linux 6.1 also provides numerous technology updates, including:

[May 19, 2011] CentOS 6? by David Sumsky

Oracle Linux might be an alternative...

dsumsky lines

I'm a big fan of the CentOS project. I use it in production and I recommend it to others as an enterprise-ready Linux distro. I have to admit that I was quite disappointed by the behaviour of the project developers, who weren't able to tell the community why the upcoming releases were and are so overdue. I was used to downloading CentOS images one or two months after the corresponding RHEL release was announced. The situation changed with RHEL 5.6, which has been available since January 2011, while the corresponding CentOS was not released until April 2011 -- about three months instead of the usual one or two.

More details on the news in RHEL 5.6 are officially available here.

A similar or perhaps worse situation surrounded the release date of CentOS 6. As you know, RHEL 6 has been available since November 2010. I considered CentOS 6 almost dead after I read about transitions to Scientific Linux, or about purchasing support from Red Hat and migrating CentOS installations to RHEL. But according to this schedule, people around CentOS seem to be working hard again, and CentOS 6 should be available at the end of May.

I hope the project will continue, as I don't know of a better alternative to RHEL (i.e., a RHEL clone) than CentOS. The question is how this whole, IMO unnecessary, situation will influence the reputation of the project.

[Nov 14, 2010] Red Hat releases RHEL 6

"Red Hat on Wednesday released version 6 of its Red Hat Enterprise Linux (RHEL) distribution. 'RHEL 6 is the culmination of 10 years of learning and partnering,' said Paul Cormier, Red Hat's president of products and technologies, in a webcast announcing the launch. Cormier positioned the OS both as a foundation for cloud deployments and a potential replacement for Windows Server. 'We want to drive Linux deeper into every single IT organization. It is a great product to erode the Microsoft Server ecosystem,' he said. Overall, RHEL 6 has more than 2,000 packages, and an 85 percent increase in the amount of code from the previous version, said Jim Totton, vice president of Red Hat's platform business unit. The company has added 1,800 features to the OS and resolved more than 14,000 bug issues."

5.6 Release Notes

Fourth Extended Filesystem (ext4) Support

The fourth extended filesystem (ext4) is now a fully supported feature in Red Hat Enterprise Linux 5.6. ext4 is based on the third extended filesystem (ext3) and features a number of improvements, including: support for larger file size and offset, faster and more efficient allocation of disk space, no limit on the number of subdirectories within a directory, faster file system checking, and more robust journaling.

To complement the addition of ext4 as a fully supported filesystem in Red Hat Enterprise Linux 5.6, the e4fsprogs package has been updated to the latest upstream version. e4fsprogs contains utilities to create, modify, verify, and correct the ext4 filesystem.

Logical Volume Manager (LVM)

Volume management creates a layer of abstraction over physical storage by creating logical storage volumes. This provides greater flexibility than using physical storage directly. Red Hat Enterprise Linux 5.6 manages logical volumes using the Logical Volume Manager (LVM).

Further reading: the Logical Volume Manager Administration document describes LVM, including information on running LVM in a clustered environment.

[Apr 20, 2009] Sun goes to Oracle for $7.4B

Oracle+Sun has the power to seriously harm IBM. Solaris still has the highest market share among proprietary Unixes. And AIX is only third after HP-UX. Wonder if Solaris will become Oracle's main development platform again. Oracle is a top contributor to Linux and that might help to bridge the gap in shell and packaging. Telecommunications and database administrators always preferred Solaris over Linux.
Yahoo! Finance

Oracle Corp. snapped up computer server and software maker Sun Microsystems Inc. for $7.4 billion Monday, trumping rival IBM Corp.'s attempt to buy one of Silicon Valley's best known -- and most troubled -- companies.

... ... ...

Jonathan Schwartz, Sun's CEO, predicted the combination will create a "systems and software powerhouse" that "redefines the industry, redrawing the boundaries that have frustrated the industry's ability to solve." Among other things, he predicted Oracle will be able to offer its customers simpler computing solutions at less expensive prices by drawing upon Sun's technology.

... ... ...

Yet Oracle says it can run Sun more efficiently. It expects the purchase to add at least 15 cents per share to its adjusted earnings in the first year after the deal closes. The company estimated Santa Clara, Calif.-based Sun will contribute more than $1.5 billion to Oracle's adjusted profit in the first year and more than $2 billion in the second year.

If Oracle can hit those targets, Sun would yield more profit than the combined contributions of three other major acquisitions -- PeopleSoft Inc., Siebel Systems Inc. and BEA Systems -- that cost Oracle a total of more than $25 billion.

A deal with Oracle might not be plagued by the same antitrust issues that could have loomed over IBM and Sun, since there is significantly less overlap between the two companies. Still, Oracle could be able to use Sun's products to enhance its own software.

Oracle's main business is database software. Sun's Solaris operating system is a leading platform for that software. The company also makes "middleware," which allows business computing applications to work together. Oracle's middleware is built on Sun's Java language and software.

Calling Java the "single most important software asset we have ever acquired," Ellison predicted it would eventually help make Oracle's middleware products generate as much revenue as its database line does.

Sun's takeover is a reminder that a few missteps and bad timing can cause a star to come crashing down.

Sun was founded in 1982 by men who would become legendary Silicon Valley figures: Andy Bechtolsheim, a graduate student whose computer "workstation" for the Stanford University Network (SUN) led to the company's first product; Bill Joy, whose work formed the basis for Sun's computer operating system; and Stanford MBAs Vinod Khosla and Scott McNealy.

Sun was a pioneer in the concept of networked computing, the idea that computers could do more when lots of them were linked together. Sun's computers took off at universities and in the government, and became part of the backbone of the early Internet. Then the 1990s boom made Sun a star. It claimed to put "the dot in dot-com," considered buying a struggling Apple Computer Inc. and saw its market value peak around $200 billion.

[Apr 17, 2009] Adobe Reader 9 released - Linux and Solaris x86

Tabbed viewing was added
Ashutosh Sharma

Adobe Reader 9.1 for Linux and Solaris x86 has been released today. Solaris x86 support was one of the features most requested by users. As per the Reader team's announcement, this release includes the following major features:

- Support for Tabbed Viewing (preview)
- Super fast launch, and better performance than previous releases
- Integration with
- IPv6 support
- Enhanced support for PDF portfolios (preview)

The complete list is available here.

Adobe Reader 9.1 is now available for download and works on OpenSolaris, Solaris 10 and most modern Linux distributions such as Ubuntu 8.04, PCLinuxOS, Mandriva 2009, SLED 10, Mint Linux 6 and Fedora 10.

See also Sneak Preview of the Tabbed Viewing interface in Adobe Reader 9.x (on Ubuntu)

[Feb 22, 2009] 10 shortcuts to master bash - Program - Linux - Builder AU By Guest Contributor, TechRepublic | 2007/06/25 18:30:02

If you've ever typed a command at the Linux shell prompt, you've probably already used bash -- after all, it's the default command shell on most modern GNU/Linux distributions.

The bash shell is the primary interface to the Linux operating system -- it accepts, interprets and executes your commands, and provides you with the building blocks for shell scripting and automated task execution.

Bash's unassuming exterior hides some very powerful tools and shortcuts. If you're a heavy user of the command line, these can save you a fair bit of typing. This document outlines 10 of the most useful tools:

  1. Easily recall previous commands

    Bash keeps track of the commands you execute in a history buffer, and allows you to recall previous commands by cycling through them with the Up and Down cursor keys. For even faster recall, "speed search" previously-executed commands by typing the first few letters of the command followed by the key combination Ctrl-R; bash will then scan the command history for matching commands and display them on the console. Type Ctrl-R repeatedly to cycle through the entire list of matching commands.

  2. Use command aliases

    If you always run a command with the same set of options, you can have bash create an alias for it. This alias will incorporate the required options, so that you don't need to remember them or manually type them every time. For example, if you always run ls with the -l option to obtain a detailed directory listing, you can use this command:

    bash> alias ls='ls -l' 

    to create an alias that automatically includes the -l option. Once this alias has been created, typing ls at the bash prompt will invoke the alias and produce the ls -l output.

    You can obtain a list of available aliases by invoking alias without any arguments, and you can delete an alias with unalias.

  3. Use filename auto-completion

    Bash supports filename auto-completion at the command prompt. To use this feature, type the first few letters of the file name, followed by Tab. bash will scan the current directory, as well as all other directories in the search path, for matches to that name. If a single match is found, bash will automatically complete the filename for you. If multiple matches are found, you will be prompted to choose one.

  4. Use key shortcuts to efficiently edit the command line

    Bash supports a number of keyboard shortcuts for command-line navigation and editing. The Ctrl-A key shortcut moves the cursor to the beginning of the command line, while the Ctrl-E shortcut moves the cursor to the end of the command line. The Ctrl-W shortcut deletes the word immediately before the cursor, while the Ctrl-K shortcut deletes everything immediately after the cursor. You can undo a deletion with Ctrl-Y.

  5. Get automatic notification of new mail

    You can configure bash to automatically notify you of new mail, by setting the $MAILPATH variable to point to your local mail spool. For example, the command:

    bash> MAILPATH='/var/spool/mail/john'
    bash> export MAILPATH 

    causes bash to print a notification on john's console every time a new message is appended to john's mail spool.

  6. Run tasks in the background

    Bash lets you run one or more tasks in the background, and selectively suspend or resume any of the current tasks (or "jobs"). To run a task in the background, add an ampersand (&) to the end of its command line. Here's an example:

    bash> tail -f /var/log/messages &
    [1] 614

    Each task backgrounded in this manner is assigned a job ID, which is printed to the console. A task can be brought back to the foreground with the command fg jobnumber, where jobnumber is the job ID of the task you wish to bring to the foreground. Here's an example:

    bash> fg 1

    A list of active jobs can be obtained at any time by typing jobs at the bash prompt.

  7. Quickly jump to frequently-used directories

    You probably already know that the $PATH variable lists bash's "search path" -- the directories it will search when it can't find the requested file in the current directory. However, bash also supports the $CDPATH variable, which lists the directories the cd command will look in when attempting to change directories. To use this feature, assign a directory list to the $CDPATH variable, as shown in the example below:

    bash> CDPATH='.:~:/usr/local/apache/htdocs:/disk1/backups'
    bash> export CDPATH

    Now, whenever you use the cd command, bash will check all the directories in the $CDPATH list for matches to the directory name.

  8. Perform calculations

    Bash can perform simple arithmetic operations at the command prompt. To use this feature, simply type in the arithmetic expression you wish to evaluate at the prompt within double parentheses, as illustrated below. Bash will attempt to perform the calculation and return the answer.

    bash> echo $((16/2))
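    The $(( )) form handles integer arithmetic only; a couple more examples (bc(1) is needed once fractions are involved):

```shell
echo $((16/2))       # integer division, prints: 8
echo $((16%3))       # remainder, prints: 1
echo $((1024*4))     # prints: 4096
# Floating point needs an external tool such as bc:
#   echo "scale=2; 16/3" | bc
```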
  9. Customise the shell prompt

    You can customise the bash shell prompt to display -- among other things -- the current username and host name, the current time, the load average and/or the current working directory. To do this, alter the $PS1 variable, as below:

    bash> PS1='\u@\h:\w \@> '
    bash> export PS1
    root@medusa:/tmp 03:01 PM>

    This will display the name of the currently logged-in user, the host name, the current working directory and the current time at the shell prompt. You can obtain a list of symbols understood by bash from its manual page.

  10. Get context-specific help

    Bash comes with help for all built-in commands. To see a list of all built-in commands, type help. To obtain help on a specific command, type help command, where command is the command you need help on. Here's an example:

    bash> help alias
    ...some help text...

    Obviously, you can obtain detailed help on the bash shell by typing man bash at your command prompt at any time.

[Feb 22, 2009] Installation Guide for RHEL 5

2. Steps to Get You Started
2.1. Upgrade or Install?
2.2. Is Your Hardware Compatible?
2.3. Do You Have Enough Disk Space?
2.4. Can You Install Using the CD-ROM or DVD?
2.4.1. Alternative Boot Methods
2.4.2. Making an Installation Boot CD-ROM
2.5. Preparing for a Network Installation
2.5.1. Preparing for FTP and HTTP installation
2.5.2. Preparing for an NFS install
2.6. Preparing for a Hard Drive Installation
3. System Specifications List
4. Installing on Intel® and AMD Systems
4.1. The Graphical Installation Program User Interface
4.1.1. A Note about Virtual Consoles
4.2. The Text Mode Installation Program User Interface
4.2.1. Using the Keyboard to Navigate
4.3. Starting the Installation Program
4.3.1. Booting the Installation Program on x86, AMD64, and Intel® 64 Systems
4.3.2. Booting the Installation Program on Itanium Systems
4.3.3. Additional Boot Options
4.4. Selecting an Installation Method
4.5. Installing from DVD/CD-ROM
4.5.1. What If the IDE CD-ROM Was Not Found?
4.6. Installing from a Hard Drive
4.7. Performing a Network Installation
4.8. Installing via NFS
4.9. Installing via FTP
4.10. Installing via HTTP
4.11. Welcome to Red Hat Enterprise Linux
4.12. Language Selection
4.13. Keyboard Configuration
4.14. Enter the Installation Number
4.15. Disk Partitioning Setup
4.16. Advanced Storage Options
4.17. Create Default Layout
4.18. Partitioning Your System
4.18.1. Graphical Display of Hard Drive(s)
4.18.2. Disk Druid's Buttons
4.18.3. Partition Fields
4.18.4. Recommended Partitioning Scheme
4.18.5. Adding Partitions
4.18.6. Editing Partitions
4.18.7. Deleting a Partition
4.19. x86, AMD64, and Intel® 64 Boot Loader Configuration
4.19.1. Advanced Boot Loader Configuration
4.19.2. Rescue Mode
4.19.3. Alternative Boot Loaders
4.19.4. SMP Motherboards and GRUB
4.20. Network Configuration
4.21. Time Zone Configuration
4.22. Set Root Password
4.23. Package Group Selection
4.24. Preparing to Install
4.24.1. Prepare to Install
4.25. Installing Packages
4.26. Installation Complete
4.27. Itanium Systems - Booting Your Machine and Post-Installation Setup
4.27.1. Post-Installation Boot Loader Options
4.27.2. Booting Red Hat Enterprise Linux Automatically
5. Removing Red Hat Enterprise Linux
6. Troubleshooting Installation on an Intel® or AMD System
6.1. You are Unable to Boot Red Hat Enterprise Linux
6.1.1. Are You Unable to Boot With Your RAID Card?
6.1.2. Is Your System Displaying Signal 11 Errors?
6.2. Trouble Beginning the Installation
6.2.1. Problems with Booting into the Graphical Installation
6.3. Trouble During the Installation
6.3.1. No devices found to install Red Hat Enterprise Linux Error Message
6.3.2. Saving Traceback Messages Without a Diskette Drive
6.3.3. Trouble with Partition Tables
6.3.4. Using Remaining Space
6.3.5. Other Partitioning Problems
6.3.6. Other Partitioning Problems for Itanium System Users
6.3.7. Are You Seeing Python Errors?
6.4. Problems After Installation
6.4.1. Trouble With the Graphical GRUB Screen on an x86-based System?
6.4.2. Booting into a Graphical Environment
6.4.3. Problems with the X Window System (GUI)
6.4.4. Problems with the X Server Crashing and Non-Root Users
6.4.5. Problems When You Try to Log In
6.4.6. Is Your RAM Not Being Recognized?
6.4.7. Your Printer Does Not Work
6.4.8. Problems with Sound Configuration
6.4.9. Apache-based httpd service/Sendmail Hangs During Startup
7. Driver Media for Intel® and AMD Systems
7.1. Why Do I Need Driver Media?
7.2. So What Is Driver Media Anyway?
7.3. How Do I Obtain Driver Media?
7.3.1. Creating a Driver Diskette from an Image File
7.4. Using a Driver Image During Installation
8. Additional Boot Options for Intel® and AMD Systems
9. The GRUB Boot Loader
9.1. Boot Loaders and System Architecture
9.2. GRUB
9.2.1. GRUB and the x86 Boot Process
9.2.2. Features of GRUB
9.3. Installing GRUB
9.4. GRUB Terminology
9.4.1. Device Names
9.4.2. File Names and Blocklists
9.4.3. The Root File System and GRUB
9.5. GRUB Interfaces
9.5.1. Interfaces Load Order
9.6. GRUB Commands
9.7. GRUB Menu Configuration File
9.7.1. Configuration File Structure
9.7.2. Configuration File Directives
9.8. Changing Runlevels at Boot Time
9.9. Additional Resources
9.9.1. Installed Documentation
9.9.2. Useful Websites
9.9.3. Related Books
10. Additional Resources about Itanium and Linux
IV. Common Tasks
23. Upgrading Your Current System
23.1. Determining Whether to Upgrade or Re-Install
23.2. Upgrading Your System
24. Activate Your Subscription
24.1. RHN Registration
24.1.1. Provide a Red Hat Login
24.1.2. Provide Your Installation Number
24.1.3. Connect Your System
25. An Introduction to Disk Partitions
25.1. Hard Disk Basic Concepts
25.1.1. It is Not What You Write, it is How You Write It
25.1.2. Partitions: Turning One Drive Into Many
25.1.3. Partitions within Partitions - An Overview of Extended Partitions
25.1.4. Making Room For Red Hat Enterprise Linux
25.1.5. Partition Naming Scheme
25.1.6. Disk Partitions and Other Operating Systems
25.1.7. Disk Partitions and Mount Points
25.1.8. How Many Partitions?
V. Basic System Recovery
26. Basic System Recovery
26.1. Common Problems
26.1.1. Unable to Boot into Red Hat Enterprise Linux
26.1.2. Hardware/Software Problems
26.1.3. Root Password
26.2. Booting into Rescue Mode
26.2.1. Reinstalling the Boot Loader
26.3. Booting into Single-User Mode
26.4. Booting into Emergency Mode
27. Rescue Mode on POWER Systems
27.1. Special Considerations for Accessing the SCSI Utilities from Rescue Mode
VI. Advanced Installation and Deployment
28. Kickstart Installations
28.1. What are Kickstart Installations?
28.2. How Do You Perform a Kickstart Installation?
28.3. Creating the Kickstart File
28.4. Kickstart Options
28.4.1. Advanced Partitioning Example
28.5. Package Selection
28.6. Pre-installation Script
28.6.1. Example
28.7. Post-installation Script
28.7.1. Examples
28.8. Making the Kickstart File Available
28.8.1. Creating Kickstart Boot Media
28.8.2. Making the Kickstart File Available on the Network
28.9. Making the Installation Tree Available
28.10. Starting a Kickstart Installation
29. Kickstart Configurator
29.1. Basic Configuration
29.2. Installation Method
29.3. Boot Loader Options
29.4. Partition Information
29.4.1. Creating Partitions
29.5. Network Configuration
29.6. Authentication
29.7. Firewall Configuration
29.7.1. SELinux Configuration
29.8. Display Configuration
29.8.1. General
29.8.2. Video Card
29.8.3. Monitor
29.9. Package Selection
29.10. Pre-Installation Script
29.11. Post-Installation Script
29.11.1. Chroot Environment
29.11.2. Use an Interpreter
29.12. Saving the File
30. Boot Process, Init, and Shutdown
30.1. The Boot Process
30.2. A Detailed Look at the Boot Process
30.2.1. The BIOS
30.2.2. The Boot Loader
30.2.3. The Kernel
30.2.4. The /sbin/init Program
30.3. Running Additional Programs at Boot Time
30.4. SysV Init Runlevels
30.4.1. Runlevels
30.4.2. Runlevel Utilities
30.5. Shutting Down
31. PXE Network Installations
31.1. Setting up the Network Server
31.2. PXE Boot Configuration
31.2.1. Command Line Configuration
31.3. Adding PXE Hosts
31.3.1. Command Line Configuration
31.4. TFTPD
31.4.1. Starting the tftp Server
31.5. Configuring the DHCP Server
31.6. Adding a Custom Boot Message
31.7. Performing the PXE Installation

[Feb 3, 2009] Using The Red Hat Rescue Environment LG #159

There are several different rescue CDs out there, and they all provide slightly different rescue environments. The requirement here at Red Hat Academy is, perhaps unsurprisingly, an intimate knowledge of how to use the Red Hat Enterprise Linux (RHEL) 5 boot CD.

All these procedures should work exactly the same way with Fedora and CentOS. As with any rescue environment, it provides a set of useful tools; it also allows you to configure your network interfaces. This can be helpful if you have an NFS install tree to mount, or if you have an RPM that was corrupted and needs to be replaced. There are LVM tools for manipulating Logical Volumes, "fdisk" for partitioning devices, and a number of other tools making up a small but capable toolkit.

The Red Hat rescue environment provided by the first CD or DVD can really come in handy in many situations. With it you can solve boot problems, bypass forgotten GRUB bootloader passwords, replace corrupted RPMs, and more. I will go over some of the most important and common issues. I also suggest reviewing a password recovery article written by Suramya Tomar that deals with recovering lost root passwords in a variety of ways for different distributions; I will not be covering that here, since his article is a very good resource for those problems.

Start by getting familiar with using GRUB and booting into single user mode. After you learn to overcome and repair a variety of boot problems, what initially appears to be a non-bootable system may be fully recoverable. The best way to get practice recovering non-bootable systems is by using a non-production machine or a virtual machine and trying out various scenarios. I used Michael Jang's book, "Red Hat Certified Engineer Linux Study Guide", to review non-booting scenarios and rehearse how to recover from various situations. I would highly recommend getting comfortable with recovering non-booting systems because dealing with them in real life without any practice beforehand can be very stressful. Many of these problems are really easy to fix but only if you have had previous experience and know the steps to take.

When you are troubleshooting a non-booting system, there are certain things that you should be on the alert for. For example, an error in /boot/grub/grub.conf, /etc/fstab, or /etc/inittab can cause the system to not boot properly; so can an overwritten boot sector. In going through the process of troubleshooting with the RHEL rescue environment, I'll point out some things that may be of help in these situations.
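As a reminder of the single-user route mentioned above, a sketch of the GRUB-legacy procedure on RHEL 5; the kernel line shown is illustrative:

```shell
# At the GRUB menu: highlight the entry, press "a" to append to the
# kernel line, then add "single" (or "1") and press Enter:
#
#   kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/sda2 single
#
# The system boots to a root shell without starting most services;
# typing "exit" continues the boot to the default runlevel.
```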

[Jan 22, 2009] The World's Open Source Leader

The Intel Core i7 (Nehalem) processor is now supported, which increases scalability for database loads. Nehalem is a quad-core, hyperthreaded 45 nm processor. Unaudited results show gains of 1.7x for commercial applications and up to 3.5x for high-performance technical computing applications compared to the previous generation of Intel processors.

The Nehalem architecture has many new features. According to Wikipedia the most significant changes from the Core 2 include:

[Dec 24, 2008] Alan Cox and the End of an Era - Blogs – ComputerworldUK blogs - The latest technology news & analysis on Outsourcing, HMRC data, Apple iPhone, Global warming, MySQL, Open Enterprise

And now, it seems, after ten years at the company, Cox is leaving Red Hat:

I will be departing Red Hat mid January having handed in my notice. I'm not going to be spending more time with the family, gardening or other such wonderous things. I'm leaving on good terms and strongly supporting the work Red Hat is doing.

I've been at Red Hat for ten years as contractor and employee and now have an opportunity to get even closer to the low level stuff that interests me most. Barring last minute glitches I shall be relocating to Intel (logically at least, physically I'm not going anywhere) and still be working on Linux and free software stuff.

I know some people will wonder what it means for Red Hat engineering. Red Hat has a solid, world class, engineering team and my departure will have no effect on their ability to deliver.

[Sep 11, 2008] The LXF Guide: 10 tips for lazy sysadmins (Linux Format)

A lazy sysadmin is a good sysadmin. Time spent in finding more-efficient shortcuts is time saved later on for that ongoing project of "reading the whole of the internet", so try Juliet Kemp's 10 handy tips to make your admin life easier...

  1. Cache your password with ssh-agent
  2. Speed up logins using Kerberos
  3. screen: detach to avoid repeat logins
  4. screen: connect multiple users
  5. Expand Bash's tab completion
  6. Automate your installations
  7. Roll out changes to multiple systems
  8. Automate Debian updates
  9. Sanely reboot a locked-up box
  10. Send commands to several PCs
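The first and third tips reduce to a handful of commands. A minimal sketch, assuming OpenSSH and GNU screen are installed (the key path and session name are illustrative):

```shell
#!/bin/sh
# Tip 1: cache a key passphrase with ssh-agent so repeated logins don't prompt.
eval "$(ssh-agent -s)" > /dev/null   # start the agent, export SSH_AUTH_SOCK / SSH_AGENT_PID
ssh-add -l || true                   # list cached keys; add one with: ssh-add ~/.ssh/id_rsa
ssh-agent -k > /dev/null 2>&1        # kill the agent when the session is over

# Tip 3: screen sessions survive dropped connections (interactive, shown as comments):
#   screen -S upgrade    # start a named session; detach with Ctrl-a d
#   screen -r upgrade    # reattach later from any login
```

Putting the eval line in a login script means one passphrase prompt per day instead of one per host.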

[Sep 9, 2008] The Fedora-Red Hat Crisis by Bruce Byfield

September 9, 2008 |

A few weeks ago, when I wrote that, "forced to choose, the average FOSS-based business is going to choose business interests over FOSS [free and open source software] every time," many people, including Matthew Aslett and Matt Asay, politely accused me of being too cynical. Unhappily, you only have to look at the relations between Red Hat and Fedora, the distribution Red Hat sponsors, during the recent security crisis for evidence that I might be all too accurate.

That this evidence should come from Red Hat and Fedora is particularly dismaying. Until last month, most observers would have described the Red Hat-Fedora relationship as a model of how corporate and community interests could work together for mutual benefit.

Although Fedora was dismissed as Red Hat's beta release when it was founded in 2003, in the last few years it had developed laudatory open processes and become increasingly independent of Red Hat. As Max Spevack, the former chair of the Fedora Board, said in 2006, the Red Hat-Fedora relationship seemed a "good example of how to have a project that serves the interests of a company that also is valuable and gives value to community members."

Yet it seems that, faced with a problem, Red Hat moved to protect its corporate interests at the expense of Fedora's interests and expectations as a community -- and that Fedora leaders were as surprised by the response as the general community.

Outline of a crisis

What happened last month is still unclear. My request a couple of weeks ago to discuss events with Paul W. Frields, the current Fedora Chair, was answered by a Red Hat publicist, who told me that the official statements on the crisis were all that anyone at Red Hat or Fedora was prepared to say in public -- a response so stereotypically corporate in its caution that it only emphasizes the conflict of interests.

However, the Fedora announcements mailing list gave the essentials. On August 14, Frields sent out a notice that Fedora was "currently investigating an issue in the infrastructure systems." He warned that the entire Fedora site might become temporarily unavailable and warned that users should "not download or update any additional packages on your Fedora systems." As might be expected, the cryptic nature of this corporate-sounding announcement caused considerable curiosity, both within and without Fedora, with most people wanting to know more.

A day later, Frields' name was on another notice, saying that the situation was continuing and pleading for Fedora users to be patient. A third notice followed on August 19, announcing that some Fedora services were available again and providing the first real clue to what was happening: a new SSH fingerprint was released.

It was only on August 22 that Frields was permitted to announce that, "Last week we discovered that some Fedora servers were illegally accessed. The intrusion into the servers was quickly discovered, and the servers were taken offline. ... One of the compromised Fedora servers was a system used for signing Fedora packages. However, based on our efforts, we have high confidence that the intruder was not able to capture the passphrase used to secure the Fedora package signing key."

Since then, plans for changing security keys have been announced. However, as of September 8, the crisis continues, with Fedora users still unable to get security updates or bug-fixes. Three weeks without these services might seem trivial to Windows users, but for Fedora users, like those of other GNU/Linux distributions, many of whom are used to daily updates to their systems, the crisis amounts to a major disruption of service.

A conflict of cultures

From a corporate viewpoint, Red Hat's close-lipped reaction to the crisis is understandable. Like any company based on free and open source software, Red Hat derives its income from delivering services to customers, and obviously its ability to deliver services is handicapped (if not completely curtailed) when its servers are compromised. Under these circumstances, the company's wish to proceed cautiously and with as little publicity as possible is perfectly natural.

The problem is that, in moving to defend its own credibility, Red Hat has neglected Fedora's. While secrecy about the crisis may be second nature to Red Hat's legal counsel, the FOSS community expects openness.

In this respect, Red Hat's handling of the crisis could not contrast more strongly with the reaction of the community-based Debian distribution when a major security flaw was discovered in its openssl package last May. In keeping with Debian's policy of openness, the first public announcement followed hard on the discovery, and included an explanation of the scope, what users could do, and the sites where users could find tools and instructions for protecting themselves.

[Aug 23, 2008] OpenSSH blacklist script

That's sad -- Red Hat's servers were compromised and some trojaned OpenSSH packages got signed.
22nd August 2008

Last week Red Hat detected an intrusion on certain of its computer systems and took immediate action. While the investigation into the intrusion is on-going, our initial focus was to review and test the distribution channel we use with our customers, Red Hat Network (RHN) and its associated security measures. Based on these efforts, we remain highly confident that our systems and processes prevented the intrusion from compromising RHN or the content distributed via RHN and accordingly believe that customers who keep their systems updated using Red Hat Network are not at risk. We are issuing this alert primarily for those who may obtain Red Hat binary packages via channels other than those of official Red Hat subscribers.

In connection with the incident, the intruder was able to get a small number of OpenSSH packages relating only to Red Hat Enterprise Linux 4 (i386 and x86_64 architectures only) and Red Hat Enterprise Linux 5 (x86_64 architecture only) signed. As a precautionary measure, we are releasing an updated version of these packages and have published a list of the tampered packages and how to detect them.

To reiterate, our processes and efforts to date indicate that packages obtained by Red Hat Enterprise Linux subscribers via Red Hat Network are not at risk.

We have provided a shell script which lists the affected packages and can verify that none of them are installed on a system:

The script has a detached GPG signature from the Red Hat Security Response Team (key) so you can verify its integrity:

This script can be executed either as a non-root user or as root. To execute the script after downloading it and saving it to your system, run the command:

         bash ./

If the script output includes any lines beginning with "ALERT" then a tampered package has been installed on the system. Otherwise, if no tampered packages were found, the script should produce only a single line of output beginning with the word "PASS", as shown below:

         bash ./
   PASS: no suspect packages were found on this system

The script can also check a set of packages by passing it a list of source or binary RPM filenames. In this mode, a "PASS" or "ALERT" line will be printed for each filename passed; for example:

         bash ./ openssh-4.3p2-16.el5.i386.rpm
   PASS: signature of package "openssh-4.3p2-16.el5.i386.rpm" not on blacklist

Red Hat customers who discover any tampered packages, need help with running this script, or have any questions should log into the Red Hat support website and file a support ticket, call their local support center, or contact their Technical Account Manager.
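A check of this kind boils down to comparing identifiers of the installed packages against a blacklist of the tampered builds. The sketch below illustrates the idea only; it is not Red Hat's actual script, and the blacklist entry, key IDs, and package names are invented. In real use the package/key-ID pairs would come from an rpm query of the installed signatures rather than a canned list:

```shell
#!/bin/sh
# Sketch: flag packages whose signature key ID appears on a blacklist.
# BLACKLIST holds illustrative key IDs, not Red Hat's real blacklist.
BLACKLIST="deadbeefcafef00d"

# Reads "package keyid" pairs on stdin; prints ALERT per match, else one PASS line.
check() {
  alert=0
  while read -r pkg keyid; do
    case " $BLACKLIST " in
      *" $keyid "*) echo "ALERT: package $pkg matches blacklisted signature $keyid"; alert=1 ;;
    esac
  done
  [ "$alert" -eq 0 ] && echo "PASS: no suspect packages were found on this system"
}

# Canned input so the sketch runs anywhere; neither key ID is blacklisted.
printf '%s\n' "openssh-4.3p2-16.el5 37017186" "bash-3.2-21.el5 37017186" | check
```

The real script's PASS/ALERT output convention shown above makes it easy to drive from cron or a mass-execution tool and act on the exit text.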

[Aug 7, 2008] rsyslog 2.0.6 (v2 Stable) by Rainer Gerhards

This is the new syslog daemon used by RHEL.

About: Rsyslog is an enhanced multi-threaded syslogd. Among others, it offers support for on-demand disk buffering, reliable syslog over TCP, SSL, TLS, and RELP, writing to databases (MySQL, PostgreSQL, Oracle, and many more), email alerting, fully configurable output formats (including high-precision timestamps), the ability to filter on any part of the syslog message, on-the-wire message compression, and the ability to convert text files to syslog. It is a drop-in replacement for stock syslogd and able to work with the same configuration file syntax.

Changes: IPv6 addresses could not be specified in forwarding actions, because they contain colons and the colon character was already used for some other purpose. IPv6 addresses can now be specified inside of square brackets. This is a recommended update for all v2-stable branch users.
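One of the headline features, reliable syslog forwarding over TCP, takes a single line. A minimal /etc/rsyslog.conf sketch, with an illustrative log-host name:

```
# keep ordinary local logging
*.info;mail.none;authpriv.none;cron.none    /var/log/messages
# forward everything to a central log host over TCP (@@ = TCP, a single @ = UDP)
*.*    @@loghost.example.com:514
```

Because rsyslog accepts stock syslogd syntax, the first line can be carried over unchanged from an existing syslog.conf.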

[Mar 26, 2008] Oracle Expands Its Linux Base, by Sean Michael Kerner (InternetNews)

Oracle claims that it continues to pick up users for its Linux offering and now is set to add new clustering capabilities to the mix.

So how is Oracle doing with its Oracle Unbreakable Linux? Pretty well. According to Monica Kumar, senior director of Linux and open source product marketing at Oracle, there are now 2,000 customers for Oracle's Linux. Those customers will now be getting a bonus from Oracle: free clustering software.

Oracle's Clusterware software previously had only been available to Oracle's Real Application Clusters (RAC) customers, but now will also be part of the Unbreakable Linux support offering at no additional cost.

Clusterware is the core Oracle (NASDAQ: ORCL) software offering that enables the grouping of individual servers into a cluster system. Kumar explained that the full RAC offering provides additional components beyond Clusterware that are useful for managing and deploying Oracle databases on clusters.

The new offering for Linux users, however, does not necessarily replace the need for RAC.

"We're not saying that this [Clusterware] replaces RAC," Kumar noted. "We are taking it out of RAC for other general purpose uses as well. Clusterware is general purpose software that is part of RAC but that isn't the full solution."

The Clusterware addition to the Oracle Unbreakable Linux support offering is expected by Kumar to add further impetus for users to adopt Oracle's Linux support program.

Oracle Unbreakable Linux was first announced in October 2006 and takes Red Hat's Enterprise Linux as a base. To date, Red Hat has steadfastly denied on its quarterly investor calls that Oracle's Linux offering has had any tangible impact on its customer base.

In 2007, Oracle and Red Hat both publicly traded barbs over Yahoo, which apparently is a customer of both Oracle's Unbreakable Linux as well as Red Hat Enterprise Linux.

"We can't comment on them [Red Hat] and what they're saying," Kumar said. "I can tell you that we're seeing a large number of Oracle customers who were running on Linux before coming to Unbreakable Linux. It's difficult to say if they're moving all of their Linux servers to Oracle or not."

That said, Kumar added that Linux customers are coming to Oracle for more than just running Oracle on Linux; they're also bringing other application loads.

"Since there are no migration issues we do see a lot of RHEL [Red Hat Enterprise Linux] customers because it's easy for them to transition," Kumar claimed.

Ever since Oracle's Linux first appeared, Oracle has claimed that it was fully compatible with RHEL and it's a claim that Kumar reiterated.

"In the beginning, people had questions about how compatibility works, but we have been able to address all those questions," Kumar said. "In the last 15 months, Oracle has proved that we're fully compatible and that we're not here to fork Linux but to make it stronger."

[Feb 26, 2008] Role-based access control in SELinux

Learn how to work with RBAC in SELinux, and see how the SELinux policy, kernel, and userspace work together to enforce the RBAC and tie users to a type enforcement policy.

[Jan 24, 2008] Project details for cgipaf

The package also contains a Solaris binary of a chpasswd clone, which is extremely useful for mass password changes in mixed corporate environments that, along with Linux and AIX (both of which have native chpasswd implementations), include Solaris or other Unixes that do not have a chpasswd utility (HP-UX is another example in this category). Version 1.3.2 includes a Solaris binary of chpasswd that works on Solaris 9 and 10.

cgipaf is a combination of three CGI programs.

All programs use PAM for user authentication. It is possible to run a script to update SAMBA passwords or NIS configuration when a password is changed. mailcfg.cgi creates a .procmailrc in the user's home directory. A user with too many invalid logins can be locked. The minimum and maximum UID can be set in the configuration file, so you can specify a range of UIDs that are allowed to use cgipaf.
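For reference, a chpasswd clone consumes the same input as the native Linux/AIX tool: one "user:password" pair per line on stdin. A sketch of building that input from a whitespace-separated user list; the accounts and passwords here are invented, and the result would be piped to chpasswd as root:

```shell
#!/bin/sh
# chpasswd (and a clone of it) reads "user:password" pairs, one per line, on stdin.
# Build that input from a whitespace-separated list (values illustrative):
while read -r user pass; do
  printf '%s:%s\n' "$user" "$pass"
done <<'EOF'
alice S3cret-1
bob S3cret-2
EOF
# As root, the generated lines would be piped straight in:  ... | chpasswd
```

Generating the pairs in one place and feeding them to each host's chpasswd is what makes the mass-change workflow scriptable.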

[Dec 21, 2007] LXER interview with John Hull - the manager of the Dell Linux engineering team

The original sales estimates for Ubuntu computers were around 1% of total sales, or about 20,000 systems annually. Have the expectations been met so far? Will Dell ever release sales figures for Ubuntu systems?

The program so far is meeting expectations. Customers are certainly showing their interest and buying systems preloaded with Ubuntu, but it certainly won't overtake Microsoft Windows anytime soon. Dell has a policy not to release sales numbers, so I don't expect us to make Ubuntu sales figures available publicly.

[Dec 21, 2007] Red Hat to get new CEO from Delta Air Lines (Underexposed, CNET)


Szulik, who took over as CEO from Bob Young in 1999 just a few months after its initial public offering, said he's stepping down because of family health issues.

"For the last nine months, I've struggled with health issues in my family," and that priority couldn't be balanced with work, Szulik said in an interview. "This job requires a 7x24, 110 percent commitment."

Szulik, who remains chairman of the board, praised Whitehurst in a statement, saying he's a "hands-on guy who will be a strong cultural fit at Red Hat" and "a talented executive who has successfully led a global technology-focused organization at Delta."

On a conference call, Szulik said Whitehurst stood "head and shoulders" above other candidates interviewed in a recruiting process. He was a programmer earlier in his career and runs four versions of Linux at home, he said.

Moreover, Szulik said he wasn't satisfied with more traditional tech executives who were interviewed.

"What we encountered in many cases was a lack of understanding of open-source software development and of our model," he said. During the interview, he added about the tech industry candidates, "When you take them out of the big buildings, without the imprimatur of Hewlett-Packard, IBM and Oracle, or HP around them, they just didn't hold up."

The surprise move was announced as the leading Linux seller announced results for its third quarter of fiscal 2008. Its revenue increased 28 percent to $135.4 million and net income went up 12 percent to $20.3 million, or 10 cents per share. The company also raised estimates for full-year results to revenue of $521 million to $523 million and earnings of about 70 cents per share.

[Oct 29, 2007] Oracle's Linux: Unbreakable, or Just a Necessary Adjustment? (Open Source Blog, InformationWeek)

... In fact, Coekaerts has to say this often because Oracle is widely viewed as an opportunistic supporter of Linux, taking Red Hat's product, stripping out its trademarks, and offering it as its own. Coekaerts says what's more important is that Oracle is a contributor to Linux. It contributed the cluster file system and hasn't really generated a competing distribution.

Yet, in some cases, there is an Oracle distribution. Most customers Coekaerts deals with get their Linux from Red Hat and then ask for Oracle's technical support in connection with the Oracle database. But Oracle has been asked often enough to supply Linux with its applications or database that it makes available a version of Red Hat Enterprise Linux, with the Red Hat logos and labels stripped out. Oracle's version of Linux has a "cute" penguin inserted and is optimized to work with Oracle database applications. It may also have a few Oracle-added "bug fixes," Coekaerts says.

The bug fixes, however, lead to confusion about Coekaerts' relatively simple formulation of Oracle enterprise support, not an Oracle fork. And that confusion stems from Oracle CEO Larry Ellison's attention-getting way of introducing Unbreakable Linux at the October 2006 Oracle OpenWorld.

When enterprise customers call with a problem, Oracle's technical support finds the problem and supplies a fix. If it's a change in the Linux kernel, the customer would normally have to wait for the fix to be submitted to kernel maintainers for review, get merged into the kernel, and then get included in an updated version of an enterprise edition from Red Hat or Novell. Such a process can take up to two years, observers inside and outside the kernel process say.

The pace of bug fixes "is the most serious problem facing the Linux community today," Ellison explained during an Oracle OpenWorld keynote a year ago.

When Oracle's Linux technical support team has a fix, it gives that fix to the customer without waiting for Red Hat's uptake or the kernel process itself, Ellison said.

Red Hat's Berman argues that when it comes to the size of the problem, Oracle makes too much of too little.

When Red Hat learns of bugs, it retrofits the fixes into its current and older versions of Red Hat Enterprise Linux. That's one of Red Hat's main engineering investments in Linux, Berman said in an interview.

Coekaerts responds, "There are disagreements on what is considered critical by the distribution vendors and us or our customers."

Berman acknowledges that several judgment calls are involved. Some bugs affect only a few enterprise customers. They may apply to an old RHEL version. "Three or four times a year" a proposed fix may not be deemed important enough to undergo this retrofit, he says.

But Coekaerts told InformationWeek: "Oracle customers encounter this problem more than three or four times a year. I cannot give a number, it tends to vary. But it does happen rather frequently."

Berman counters that when Oracle changes Red Hat's tested code with its own bug fixes, it breaks the certification that Red Hat offers on its distribution, so it's no longer guaranteed to work with other software. "Oracle claims they will patch things for a customer. That's a fork," he says.

What Red Hat calls a fork is what Oracle calls a "one-off fix to customers at the time of the problem. … If the customer runs version 5 but Red Hat is at version 8, and the customer runs into a bug, does he want to go into [the next release with a fix] version 9? Likely not. He wants to minimize the amount of change. Oracle will fix the customer's problem in version 5…" Coekaerts says.

I think it's fair to characterize what Oracle does as technical support, not a fork. There's no attempt to sustain the aberration through a succession of Linux kernels offered to the general public as an alternative to the mainstream kernel.

But the Oracle/Red Hat debate defines a gray area in a fast-moving kernel development process. Bugs that affect many users get addressed through the kernel process or the Red Hat and Novell (NSDQ: NOVL) retrofits. That still may not always cover a problem for an individual user or a set of users sitting on a particular piece of aging hardware or caught in a specific hardware/software configuration.

If Oracle fixes some of these problems, I say more power to it.

But if they are problems that are isolated in nature or limited in scope, as I suspect they are, that makes them something less than Ellison's "most serious problem facing the Linux community today."

Ellison needed air cover to take Red Hat's product and do what he wanted with it. In the long run, he's probably increasing the use of Linux in the enterprise and keeping Red Hat on its toes as a support organization. That's less benefit than claimed, but still something.

[Oct 23, 2007] Oracle makes Yast (Yet Another Setup Tool) part of its distribution

With this move Oracle Enterprise Linux became more compatible with Suse.

Yast helps make system administration easier by providing a single utility for configuring and maintaining Linux systems. The version of Yast available here is modified to work with all Enterprise Linux distributions, including Enterprise Linux and SuSE.

[Oct 23, 2007] UK Unix group newsletter

Oracle hasn't "talked about how our Linux is better than anyone else's Linux. Oracle has not forked and has no desire to fork Red Hat Enterprise Linux and maintain its own version. We don't differentiate on the distribution because we use source code provided by Red Hat to produce Oracle Enterprise Linux and errata. We don't care whether you run Red Hat Enterprise Linux or Enterprise Linux from Oracle and we'll support you in either case because the two are fully binary- and source-compatible. Instead, we focus on the nature and the quality of our support and the way we test Linux using real-world test cases and workloads."


data=writeback

While the writeback option provides lower data consistency guarantees than the journal or ordered modes, some applications show very significant speed improvements when it is used. For example, speed improvements can be seen when heavy synchronous writes are performed, or when applications create and delete large volumes of small files, such as delivering a large flow of short email messages. The results of the testing effort described in Chapter 3 illustrate this topic.

When the writeback option is used, data consistency is similar to that provided by the ext2 file system. However, file system integrity is maintained continuously during normal operation in the ext3 file system.

In the event of a power failure or system crash, the file system may not be recoverable if a significant portion of data was held only in system memory and not on permanent storage. In that case the file system must be recreated from backups, and changes made since the file system was last backed up are inevitably lost.
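In practice the journaling mode is set per filesystem in /etc/fstab. A sketch for a mail-spool partition, where floods of small synchronous writes make writeback attractive (the device and mount point are illustrative):

```
# /etc/fstab -- trade crash-recovery guarantees for speed on this partition only
/dev/sda7    /var/spool/mail    ext3    defaults,data=writeback    1 2
```

Confining data=writeback to partitions holding easily regenerated data keeps the ordered-mode guarantees where they matter.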

[Aug 7, 2007] Linux Replacing atime

August 7, 2007 | KernelTrap

Submitted by Jeremy on August 7, 2007 - 9:26am.

In a recent lkml thread, Linus Torvalds was involved in a discussion about mounting filesystems with the noatime option for better performance, "'noatime,data=writeback' will quite likely be *quite* noticeable (with different effects for different loads), but almost nobody actually runs that way."

He noted that he set O_NOATIME when writing git, "and it was an absolutely huge time-saver for the case of not having 'noatime' in the mount options. Certainly more than your estimated 10% under some loads."

The discussion then looked at using the relatime mount option to improve the situation, "relative atime only updates the atime if the previous atime is older than the mtime or ctime. Like noatime, but useful for applications like mutt that need to know when a file has been read since it was last modified."

Ingo Molnar stressed the significance of fixing this performance issue, "I cannot over-emphasize how much of a deal it is in practice. Atime updates are by far the biggest IO performance deficiency that Linux has today. Getting rid of atime updates would give us more everyday Linux performance than all the pagecache speedups of the past 10 years, _combined_." He submitted some patches to improve relatime, and noted about atime:

"It's also perhaps the most stupid Unix design idea of all times. Unix is really nice and well done, but think about this a bit: 'For every file that is read from the disk, lets do a ... write to the disk! And, for every file that is already cached and which we read from the cache ... do a write to the disk!'"

[Aug 7, 2007] Expect plays a crucial role in network management by Cameron Laird

Jul 31, 2007

If you manage systems and networks, you need Expect.

More precisely, why would you want to be without Expect? It saves the hours that common tasks otherwise demand. Even if you already depend on Expect, though, you might not be aware of the capabilities described below.

Expect automates command-line interactions

You don't have to understand all of Expect to begin profiting from the tool; let's start with a concrete example of how Expect can simplify your work on AIX® or other operating systems:

Suppose you have logins on several UNIX® or UNIX-like hosts and you need to change the passwords of these accounts, but the accounts are not synchronized by Network Information Service (NIS), Lightweight Directory Access Protocol (LDAP), or some other mechanism that recognizes you're the same person logging in on each machine. Logging in to a specific host and running the appropriate passwd command doesn't take long: probably only a minute, in most cases. And you must log in "by hand," right, because there's no way to script your password?

Wrong. In fact, the standard Expect distribution (full distribution) includes a command-line tool (and a manual page describing its use!) that precisely takes over this chore. passmass (see Resources) is a short script written in Expect that makes it as easy to change passwords on twenty machines as on one. Rather than retyping the same password over and over, you can launch passmass once and let your desktop computer take care of updating each individual host. You save yourself enough time to get a bit of fresh air, and multiple opportunities for the frustration of mistyping something you've already entered.

The limits of Expect

This passmass application is an excellent model: it illustrates many of Expect's general properties:

You probably know enough already to begin to write or modify your own Expect tools. As it turns out, the passmass distribution actually includes code to log in by means of ssh, but omits the command-line parsing to reach that code. Here's one way you might modify the distribution source to put ssh on the same footing as telnet and the other protocols:
Listing 1. Modified passmass fragment that accepts the -ssh argument

} "-rlogin" {
    set login "rlogin"
} "-slogin" {
    set login "slogin"
} "-ssh" {
    set login "ssh"
} "-telnet" {
    set login "telnet"

In my own code, I actually factor out more of this "boilerplate." For now, though, this cascade of tests, in the vicinity of line #100 of passmass, gives a good idea of Expect's readability. There's no deep programming here: no need for object orientation, monadic application, co-routines, or other subtleties. You just ask the computer to take over the typing you usually do for yourself. As it happens, this small step represents many minutes or hours of human effort saved.

[Jul 30, 2007] Due to problems under high load, the Linux kernel process scheduler has been completely ripped out of the 2.6.23 kernel and replaced with a completely new one, the Completely Fair Scheduler (CFS), modeled after the Solaris 10 scheduler.

This will not affect the current Linux distributions (Suse 9 and 10, and RHEL 4.x), as they forked the kernel and essentially develop it as a separate tree.

But it will affect any future Red Hat or Suse distribution (RHEL 6 and Suse 11, respectively).

How it will fare in comparison with the Solaris 10 scheduler remains to be seen:

The main idea of CFS's design can be summed up in a single sentence: CFS basically models an "ideal, precise multi-tasking CPU" on real hardware.

An "ideal multi-tasking CPU" is a (non-existent) CPU that has 100% physical power and which can run each task at precisely equal speed, in parallel, each at 1/n running speed. For example: if there are 2 tasks running, then it runs each at exactly 50% speed.
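The "each task at 1/n speed" idea can be seen in a toy model of CFS's actual mechanism: always run the task whose virtual runtime is smallest, and charge it inversely to its weight. The sketch below is an illustration only, not the kernel algorithm (real CFS uses a red-black tree and nanosecond accounting; the task names, weights, and slice count here are made up):

```shell
#!/bin/sh
# Toy CFS: three tasks, C with double weight; run 12 fixed-length slices.
awk 'BEGIN {
  split("A B C", name); w["A"] = 1; w["B"] = 1; w["C"] = 2; slice = 6
  for (t = 1; t <= 12; t++) {
    # pick the task with the smallest virtual runtime (ties go to the first)
    min = ""
    for (i = 1; i <= 3; i++) { n = name[i]; if (min == "" || vr[n] < vr[min]) min = n }
    vr[min] += slice / w[min]   # heavier tasks accumulate vruntime more slowly
    ran[min]++
  }
  for (i = 1; i <= 3; i++) { n = name[i]; printf "%s ran %d slices\n", n, ran[n] }
}'
```

With equal weights each task would get a third of the slices; doubling C's weight gives it half (A and B run 3 slices each, C runs 6), which is exactly the weighted 1/n ideal the quote describes.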

[Apr 10, 2007] Here come the RHEL 5 clones

Of course, if you go with a cloned RHEL, while you get the code goodies, you don't get Red Hat's support. Various Red Hat clone distributions, such as StartCom AS-5, CentOS, and White Box Enterprise Linux, are built from Red Hat's source code, which is freely available at the Raleigh, NC company's FTP site. The "cloned" versions alter or otherwise remove non-free packages within the RHEL distribution, or non-redistributable bits such as the Red Hat logo.

StartCom Enterprise Linux AS-5 is specifically positioned as a low-cost, server alternative to RHEL 5. This is typical of the RHEL clones.

These distributions, which usually don't offer support options, are meant for expert Linux users who want Red Hat's Linux distribution, but don't feel the need for Red Hat's support.

[Apr 10, 2007] Red Hat Enterprise Linux 5 Some Assembly Required

With RHEL 5, Red Hat has shuffled its SKUs around a bit: what had previously been the entry-level ES server version is now just called Red Hat Enterprise Linux. This version is limited to two CPU sockets and is priced, per year, at $349 for a basic support plan, $799 for a standard support plan, and $1,299 for a premium support plan.

This version comes with an allowance for running up to four guest instances of RHEL. You can run more than that, as well as other operating systems, but only four get updates from, and may be managed through, RHN (Red Hat Network). We thought it was interesting how RHN recognized the difference between guests and hosts on its own and tracked our entitlements accordingly.

What had been the higher-end, AS version of RHEL is now called Red Hat Enterprise Linux Advanced Platform. This version lacks arbitrary hardware limitations and allows for an unlimited number of RHEL guest instances per host. RHEL's Advanced Platform edition is priced, per year, at $1,499 with a standard support plan and $2,499 with a premium plan.

[Mar 23, 2007] Using YUM in RHEL5 for RPM systems

There is more to Red Hat Enterprise Linux 5 (RHEL5) than Xen. I, for one, think people will develop a real taste for YUM (Yellow dog Updater, Modified), an automatic update and package installer/remover for RPM systems.

YUM has already been used in the last few Fedora Core releases, but RHEL4 uses the up2date package manager. RHEL5 will use YUM 3.0. Up2date is used as a wrapper around YUM in RHEL5. Third-party code repositories, prepared directories or websites that contain software packages and index files, will also make use of the Anaconda-YUM combination.

... ... ...

Using YUM makes it much easier to maintain groups of machines without having to manually update each one using RPM. Some of its features include:

RHEL5 moves the entire stack of tools which install and update software to YUM. This includes everything from the initial install (through Anaconda) to host-based software management tools, like system-config-packages, to even the updating of your system via Red Hat Network (RHN). New functionality will include the ability to use a YUM repository to supplement the provided packages with your own in-house software, as well as plugins to provide additional behavior tweaks.

YUM automatically locates and obtains the correct RPM packages from repositories. It frees you from having to manually find and install new applications or updates. You can use one single command to update all system software, or search for new software by specifying criteria.
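The workflow described above can be sketched with a few commands. The package name and the repository in the .repo fragment are hypothetical examples, and exact options vary between YUM versions; this is an illustration, not a copy-paste recipe, since the commands require a RHEL/CentOS host and modify system state.

```shell
# Update every installed package on the system in one command
# (pulls from all enabled repositories, resolving dependencies)
yum update

# Search repository metadata by keyword, then install; the correct
# RPM and its dependencies are located and downloaded automatically
yum search httpd
yum install httpd

# Remove an installed package
yum remove httpd

# A third-party or in-house repository is added by dropping a .repo
# file into /etc/yum.repos.d/, e.g. /etc/yum.repos.d/inhouse.repo
# (repository id and URL below are made up for illustration):
#
#   [inhouse]
#   name=In-house packages
#   baseurl=http://repo.example.com/rhel5/x86_64/
#   enabled=1
#   gpgcheck=1
```

Because dependency resolution happens against repository metadata, the same `yum update` invocation can be scripted across a whole group of machines, which is the maintenance win the article describes.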

[Dec 7, 2006] Survey Finds Red Hat Customers Willing To Stay With Company if it Cuts Prices

(SeekingAlpha) Eric Savitz submits: Red Hat customers are mulling their options. But they can be bought.

That's one of the takeaways from a fascinating report today from Pacific Crest's Brendan Barnicle based on a survey he did of 118 enterprise operating system buyers, including 86 Red Hat support customers. The goal of the survey was to see how Linux users are responding to the new offerings from Oracle (MSFT)/Novell (NOVL) partnership.

Reading the results of the study, you reach several conclusions. One, most customers are seriously considering the new offerings from Oracle and from the Microsoft (MSFT)/Novell (NOVL) partnership. Two, Red Hat can hold on to most of them if it is willing to cut prices far enough. And three, customers seem a little more interested in the Microsoft/Novell offerings than those from Oracle.

Here are a few details:

[Dec 1, 2006] Red Hat: From 'Cuddly Penguin' to Public Enemy No. 1

We have suffered from that image in the past. And some of our competitors have played up the fact that the JBoss guys are behaving like a sect. When, in fact, if you look at the composition of our community, we have an order of magnitude more committers than our direct open-source competitors.

But the perception is still there. Bull even said something about that perception. And we'd been thinking about opening up the governance. So when Bull provided us with a great case study, we decided to put the pedal to the metal. But make no mistake: this is not going to be a free-for-all. We care a lot about the quality of what gets committed. We invest very heavily in all our projects. We're serious about this, so we expect the same level of seriousness from our collaborators.

There is going to be a hybrid model where there is an opening up of the governance. In terms of code contributions it's always been there. But now it's been made explicit instead of implicit and open to attacks of "closedness." JBoss has always been an open community, but we've hired most of our primary committers.

Well, you seem more willing to compromise and evolve your stance on things. Like SCA [Service Component Architecture]: initially you were against it, but it seems like you've changed your mind.

Well, yeah, the specific SCA stance today is there is no reason for us to be for or against it. If it plays out in the market, we'll support it. And I think Mark Little [a JBoss core developer] said it very well that the ESB implementations usually outlive standards.

So what you're seeing from us is mostly due to Mark Little's influence. Mark has been around in the standards arena and has seen all these standards come and go. So it's not about the standards, it's about our implementation in support of all these standards. And it's not our place to be waging a standards war. It's our place to implement and let the market decide and we'll follow the market.

So where I'll agree with you is that it's less of a dogmatic position in terms of perceived competition and more focus on what we do well, which is implementations.

Another thing is JBoss four years ago was very much Marc Fleury and the competitive stance against Sun and things like that. Today I don't do anything. In fact, I actively stay out in terms of not getting in the way of my guys.

So it's both a sign of maturity and of a more diverse organization. I'm representing more than leading the technical direction these days. And that's a very good thing.

You said you approached David Heinemeier Hansson, the creator of Ruby on Rails, to work at JBoss. What other types of developers are you interested in hiring?

Yeah, we did approach him. There is a lot of talent around the Web framework. One of the problems is it's a very fragmented community at a personal level. You have one guy and his framework. Though, this is not the case with Ruby on Rails. But there's a lot of innovation that's going on that would benefit from unification under a bigger distribution umbrella and bigger R&D umbrella. And I think JBoss/Red Hat is in a position to offer that. So we're always talking about new guys.

One of the things I like to do is talk to the core developers and say, "Where are you in terms of recruitment?" And we're talking to scripting guys. I think scripting is the next frontier as [Ruby on Rails] has showed. We have a unique opportunity of bringing under one big branded umbrella a diverse group of folks that today are doing excellent work, be it the scripting crowd, REST, Web framework, or the Faces, or the guys integrating with Seam. All of the work we're doing is going to take more people and we're always on the lookout for the right talent and the right fit.

[Sep 14, 2005] Dr. Dobb's: Red Hat Releases Enterprise Linux 5 Beta September 13

... The Red Hat Enterprise Linux 5 Beta 1 release contains virtualization on the i386 and x86_64 architectures as well as a technology preview for IA64.

... ... ...

Aside from Xen, Red Hat Enterprise Linux 5 Beta 1 features AutoFS and iSCSI network storage support, smart card integration, SELinux security, clustering and a cluster file system, Infiniband and RDMA support, and Kexec and Kdump, which replace the current Diskdump and Netdump. Beta 1 also incorporates improvements to the installation process, analysis and development tools SystemTap and Frysk, a new driver model and enablers for stateless Linux.

Linux Client Migration Cookbook A Practical Planning and Implementation Guide for Migrating to Desktop Linux

IBM Redbooks

The goal of this IBM Redbook is to provide a technical planning reference for IT organizations large or small that are now considering a migration to Linux-based personal computers. For Linux, there is a tremendous amount of "how to" information available online that addresses specific and very technical operating system configuration issues, platform-specific installation methods, user interface customizations, etc. This book includes some technical "how to" as well, but the overall focus of the content in this book is to walk the reader through some of the important considerations and planning issues you could encounter during a migration project. Within the context of a pre-existing Microsoft Windows-based environment, we attempt to present a more holistic, end-to-end view of the technical challenges and methods necessary to complete a successful migration to Linux-based clients.

[Jun 24, 2004] Open Source Blog: Open Sourcery by Blane Warrene

I recently spent some time speaking with a popular Yankee Group analyst who covers the enterprise sector in the US, focusing in on open source and where the movement may go in the next few years.

Just to be clear, I differentiate, as most industry watchers do, between Linux and open source. While Linux is open source, the primary Linux distributors have caught on to how they need to position themselves for success and are starting to run their businesses just as any proprietary software company does.

Red Hat and SUSE are prime examples: realizing that the path to long-term success and revenue streams lay in proving themselves enterprise-worthy to larger businesses and institutions, they have shifted business models or been acquired by organizations with roots in the enterprise.

Her views, while not always popular in the open source community, are right on point if open source seeks widespread adoption and a permanent seat at the table for longer term financial success.

There are a few obstacles open source proponents need to accept and move forward on:

  1. It will be more costly for a company to migrate away from Windows to Linux, even in light of slightly reduced ongoing maintenance and improved security and uptime. While I have not always agreed that the costs are higher, having migrated corporate systems to Linux in the past, their research showed it to be true in many cases -- especially when migrating beyond standard web hosting and email systems. The costs are higher when factoring in re-certifying drivers, application integrity and training.
  2. To truly become entrenched as a viable, financially-rewarding option (meaning open source companies make money and create jobs), a shift toward commercial software models is necessary. This does not mean forgoing open source; what it does mean is developing a structure for development, distribution, patching and support that passes muster with corporate IT managers who could be investing substantial amounts of money in open source.

What it boils down to is that while open source has definitely revolutionized software, and is found internationally in companies large and small, businesses still pick software because it provides a solution, not just because it is open source.

The fact that it is cheaper or free simply means the user will save money, but this does not win the favor of those buyers who could be injecting millions into open source projects rather than proprietary software makers.

I would use Firebird as a model. In an interview with Helen Borrie, forthcoming in my July column on SitePoint, she noted that the fact that many Fortune 500 companies are using an open source database like Firebird speaks volumes about the maturing of their project and of open source at large.

The reason, as I see it, is that Firebird is treated like an enterprise-scale proprietary software project. The team has a well-managed developer community and active support lists, commercial offerings for support through partnerships with several companies, and commercial development projects for corporate clients.

If more open source projects looked at Borrie's team model and discipline in development and support, we just might see more penetration that attracts longer and more profitable contracts and work for those like us in the SitePoint community.

Selected Comments


It will be more costly for a company to migrate away from Windows to Linux, even in light of slightly reduced ongoing maintenance and improved security and uptime. You mean relative to staying with Windows? Does this include recurring costs of Windows licensing / upgrades?

The costs are higher when factoring in re-certifying drivers, application integrity and training.

On the drivers front, that assumes (if we're comparing Linux with Windows) that systems need upgrades just as frequently. There's generally less need to keep upgrading Linux when it is used as a server.

Re application integrity, I think that's very hard to research accurately - kind of a woolly claim that needs qualification.

On the training side, it's an interesting area where it's kind of like comparing Apples with Pears.

Windows generally hides administrators from much of what's really happening, so it's probably easier to train someone to the point where they're feeling confident but given serious problems, who do you turn to?

*Nix effectively exposes administrators to everything, so more time is required to reach the point where sysadmins are confident. Once they reach that point, though, they're typically capable of handling anything. The result is stable systems. I'd also argue that a single *Nix sysadmin is capable of maintaining a greater number of systems (scripts, automation etc.), although I have no figures to back that up.

Firebird is an interesting example. The flip side of Firebird's way of doing things seems to be that the Open Source "community" is largely unaware of it (compared to, say, MySQL).

Posted by: HarryF from Jun 24th, 2004 @ 8:03 AM MDT


Yes - on costs - Linux was actually found to be more expensive in numerous cases compared to staying with Windows. This is unfortunate, as I am a proponent of finding migration paths from Windows to Linux for stability and administration automation. However, the research did show that the total cost of ownership eventually balances out; it is simply much more expensive at the outset than staying on a Windows upgrade path.

This survey (conducted partially on site with staff and partially via questionnaire) covered 1000 companies with 5000 or more employees. It found that they did have to certify drivers at the initial migration, certify all new disk images, provide training or certification to adhere to corporate policy, buy indemnification insurance, perform migrations, test, establish support contracts and, finally, pay about a 15 percent premium when bringing in certified Linux staff.

The benefit, if the company decided to take the financial hit: over an extended period they experienced the benefits of Linux - uptime, experienced admins and flexibility of the platform.

Application integrity was ambiguous in the study. However, managers cited it constantly when trying to retire commercial Unix and move apps to Linux, needing certification that an entire application runs exactly as before.

Perhaps it is time for the open source community to begin establishing central organizational points that act as clearinghouses - like Open Source Development labs does for Linux - to certify open source applications on a major scale.

Posted by: bwarrene from Jun 24th, 2004 @ 1:12 PM MDT


I beg to differ on Harry's view about Firebird. Firebird is not as popular as MySQL because 1) it's a newer project (project, not software) and 2) MySQL support comes built into PHP, with no need for additional software. Firebird requires either recompilation or loading its DLL into the extension space.

Posted by: andrecruz Jun 24th, 2004 @ 9:37 PM MDT


It was nice to read about your chat with L... DiD... (why are we keeping her name secret?).

Second, I don't understand your distinction between Linux and Open Source. Maybe I'm slow or something, but what it seems to boil down to is:

"Open Source = unprofessional
Proprietary = professional (unstated)
Linux = open source, but starting to become professional despite itself by acting like proprietary."

Well, I'll grant you there are a lot of unprofessional Free Software projects out there; but the same is true of proprietary. Bad proprietary programs are slightly less likely to see the light of day, but there's still a bevy of them out there.

Now, on the assertion that Linux companies are succeeding by acting like proprietary companies: there's truth and non-truth to it. On the one hand, Red Hat and SuSE have no doubt learned a lot about management, marketing, and good business practices from established companies. On the other hand, an effective open source player does not act the same as an effective proprietary player: there are all kinds of issues with dealing with the developer community that are not an issue in the proprietary world: they bring plusses and minuses, but have to be dealt with rather than ignored.

And I will note that Red Hat, the most successful Linux distributor, is a pure-play Open Source vendor: they do not ship proprietary code. In fact, they devote a lot of developer time to a community distribution that they make no direct money on (but do get free testing from). Likewise, one of the first things Novell did after its so-far successful acquisition of SuSE was to GPL SuSE's proprietary installer. This suggests that while good management is indispensable in anything, Open Source ventures should not be running off and trying to ape proprietary vendors blindly.

Finally, there's a big difference between the way mass-market shrinkwrapped proprietary software is built and the way big-iron stuff is. With big-iron stuff you often have consultants in the field, lots of direct customer feedback, maybe even code sharing under NDA with the client: in short, it works a lot like an Open Source project. And that's where Open Source has shined: *nix boxes, web servers, network infrastructure, compilers, developer tools, and increasingly RDBMSes. With mass shrinkwrap you have to do much more seeking out of customer needs on your own and also be prepared to tell customers to shove it and wait for the next release. On stuff like this (desktop GUIs and apps) Open Source has been less successful.

At least one high-profile OSS desktop project (Mozilla) was a legendary quagmire for a long time and is only beginning to claw its way back. Many of the mistakes came from not being open to community input ("dammit, we don't need a whole platform, just a good browser") as any good project of any kind should be. Thing is, no one has a clear idea of how to be usefully open to community input on a mass-market OSS project yet: the twin dangers of adding every requested feature or my-way-or-the-highway-ism have been so far hard to avoid.

Personally, I think the question of the Open Source desktop is given too much importance. Windows server shipments still account for 60% of the market, so it's not like that area is all sewn up. A company that wants to avoid vendor lock-in would do best to migrate its server infrastructure first - that's gonna be least painful and probably highest long-term benefit. Then maybe desktop apps, then maybe the desktop operating system.

On MySQL vs. Firebird: yes, MySQL is more widespread, but they're used for entirely different things.

Posted by: jmcginty Jun 25th, 2004 @ 12:34 PM MDT

Dag Wieers

I'm a bit confused as to why you want to differentiate between Linux (e.g. Red Hat) and Open Source.

Red Hat releases source packages and contributes largely to Open Source projects, both in resources and in code. Improvements by Red Hat are included in SuSE and vice versa. Everybody wins.

This ensures that Red Hat will have to be the best on its own merits. Competition will always be lurking around the corner to take over. Despite that, Red Hat is doing a good job.

You cannot compare this to proprietary vendors, where your money goes into the big company bucket to be used for the next version that you have to pay for again.

If I can choose I'd rather pay for services, if it guarantees that the money is used for Open Source development. If my Open Source vendor goes belly-up, its work is still available for anyone to use.

Paying for Open Source just guarantees that you have freedom and are never tied to any vendor. Red Hat is just one example showing that the money is used for the good of the public.

And if you don't have deep pockets, there's still Fedora, CentOS, TaoLinux or Whitebox. Plenty of competition in the same vendor segment. Hard to beat IMO.

Posted by: Dag Wieers from Jun 26th, 2004 @ 3:57 AM MDT

Ron Johnson

One thing I notice that is never mentioned when talking about Windows vs. Linux TCO is virus & worm costs - both the cost of AV s/w and the cost of clean-up after an infection sneaks into the corporate LAN. That *huge* expense will never be borne by a Linux shop.

Posted by: Ron Johnson Jun 26th, 2004 @ 7:56 AM MDT

HP Throws Weight Behind MySQL, JBoss By Clint Boulton

HP stepped up its commitment to open source software Monday by pledging to offer and support the MySQL database server and JBoss application server software on its servers.

The Palo Alto, Calif. systems vendor said it has inked agreements with those open source purveyors to certify and support MySQL and JBoss software on its servers.

Jeffrey Wade, manager of Linux Marketing Communications at HP, said the certifications factor into the company's Linux reference architecture, a software stack that covers everything from the hardware to the operating system, drivers and management agents.

Deployed on HP ProLiant servers, the open source Linux Reference Architectures are based on software from MySQL, JBoss, Apache, and OpenLDAP. The company's commercial Linux Reference Architectures are based on products from Oracle, BEA and SAP.

Both MySQL and JBoss will join the HP Partner Program and receive joint testing and engineering support on HP's hardware systems.

Wade said the added layer of MySQL and JBoss support addresses one of the largest concerns customers have today in opting to pick open source technology over mainstay proprietary products such as Microsoft Windows, Sun Microsystems' Solaris or UNIX.

"We can provide support for that entire solution stack and we're also now giving our customers flexibility in choice and the types of solutions they want to deploy whether that's a commercial or open source application," Wade said.

Bob Bickel, vice president of strategy and corporate development at JBoss, said commercial use remains somewhat constrained because a CIO doesn't know whom they can turn to for support.

"They don't know who they can turn to for indemnification," Bickel said. "Yeah, it works great and it's cheap, but what happens in the middle of their big selling season if something goes down? Who do they turn to and get it from? What HP's doing is taking an all-encompassing view of this with certification and testing."

Testing keeps customers from guessing what version of a Java virtual machine, operating system, MySQL or JBoss product can all work together in a guaranteed way, Bickel explained.

MySQL Vice President of Marketing Zack Urlocker said companies such as Sabre are using an open source stack for business applications. Partnering with HP, then, provides great validation for MySQL and JBoss software.

"A couple of years ago the big knock on open source was that it might be good on the periphery or Web applications, but was not quite ready for business critical applications," Urlocker said. "Now, the No. 1 issue has been support. People who have had a lot of success with Linux are now looking at how to use a whole open source stack."

The deal is truly symbiotic. While MySQL and JBoss get backing from a technology driver such as HP, HP gets the added credibility of being cozy with open source, a label many enterprises and HP rivals, such as IBM and Dell, are working toward.

Linux sales are trending up regardless, according to recent hardware server and database software studies from high-tech research outfit Gartner.

Despite legal threats from SCO Group and competition from Microsoft, Gartner said Linux continued to be the growth powerhouse in the operating systems server market, with a revenue increase of 57.3 percent in the first quarter of 2004.

Gartner also found that Linux siphoned market share from UNIX in the relational database management system (RDBMS) market, a niche that grew 158 percent from $116 million in new license revenue in 2002 to nearly $300 million in 2003.

Recommended Links


Softpanorama Recommended

Top articles

[Sep 04, 2018] Unifying custom scripts system-wide with rpm on Red Hat-CentOS Published on Aug 24, 2018 |

[Sep 23, 2017] CentOS 7 Server Hardening Guide Linux Security Networking Published on Sep 23, 2017 |


Red Hat 5.2 Enterprise Linux Documentation

Document Published PDF Download
Software Package Manifest May 21, 2008 PDF
Deployment Guide May 21, 2008 PDF
Installation Guide May 21, 2008 PDF
Virtualization Guide May 21, 2008 PDF
Cluster Suite Overview May 21, 2008 PDF
Cluster Administration May 21, 2008 PDF
LVM Administrator's Guide May 21, 2008 PDF
Global File System May 21, 2008 PDF
Using GNBD with GFS May 21, 2008 PDF
Linux Virtual Server Administration May 21, 2008 PDF
Using Device-Mapper Multipath May 21, 2008 PDF
Tuning and Optimizing Red Hat Enterprise Linux for Oracle 9i and 10g Databases Nov PDF

Differences with Solaris

Migration toolkits




The Last but not Least

Copyright © 1996-2018 by Dr. Nikolai Bezroukov. was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) in the author free time and without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.


FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...

You can use PayPal to make a contribution, supporting development of this site and speeding up access.


The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author present and former employers, SDNP or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.

The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable Javascript for this site. This site is perfectly usable without Javascript.

Last modified: September 16, 2018