
Disabling Network Manager in RHEL7


When /etc/resolv.conf is no longer a valid DNS configuration file

In RHEL7, Network Manager is the default configuration tool for the entire networking stack, including DNS resolution. One interesting problem with Network Manager is that in the default installation it happily overwrites /etc/resolv.conf. That puts an end to the Unix era in which you could assume that whatever you wrote into a config file would stay intact, and that any script generating such a file would detect manual changes and either re-parse them or produce a warning.

In RHEL6 most sysadmins simply deinstalled Network Manager on servers, and thus did not face its idiosyncrasies. BTW Network Manager is a part of the Gnome project, which is pretty symbolic, taking into account that the word "Gnome" recently became a kind of curse among Linux users (GNOME 3.x has alienated both users and developers ;-).

In RHEL 6 and earlier, certain installation options excluded Network Manager by default (Minimal server in RHEL6 is one). For all others, you could deinstall it after the installation. This is no longer recommended in RHEL7, but you can still disable it; see How to disable NetworkManager on CentOS - RHEL 7 for details. Still, as Network Manager is Red Hat's preferred intermediary for connecting RHEL7 to the network (and is now present even in the minimal installation), many sysadmins prefer to keep it. People who tried removing it ran into various problems if their setup was more or less non-trivial; see for example How to uninstall NetworkManager in RHEL7. Packages like Docker expect it to be present as well.

A solution not documented by Red Hat exists even with Network Manager running (CentOS 7 NetworkManager Keeps Overwriting /etc/resolv.conf):

To prevent Network Manager from overwriting your resolv.conf changes, remove the DNS1, DNS2, ... lines from /etc/sysconfig/network-scripts/ifcfg-*.

and

...tell NetworkManager to not modify the DNS settings:

 /etc/NetworkManager/NetworkManager.conf
 [main]
 dns=none
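
After editing the files you need to restart the NetworkManager service for the change to take effect. A minimal sketch of the whole sequence on RHEL7 (standard systemd unit name; the grep is just a quick way to locate the DNS1/DNS2 lines mentioned above):

 # list any DNS1/DNS2/... keys that should be removed
 grep -E '^DNS[0-9]' /etc/sysconfig/network-scripts/ifcfg-*
 # after removing them and adding dns=none to the [main] section:
 systemctl restart NetworkManager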

Red Hat does not address this problem properly even in training courses:

Alex, Wednesday, November 16, 2016 at 00:38

Ironically, Redhat’s own training manual does not address this problem properly.

I was taking a RHEL 7 Sysadmin course when I ran into this bug. I used nmcli thinking it would save me time in creating a static connection. Well, the connection was able to ping IPs immediately, but was not able to resolve any host addresses. I noticed that /etc/resolv.conf was being overwritten and cleared of its settings.

No matter what we tried, there was nothing the instructor and I could do to fix the issue. We finally used the “dns=none” solution posted here to fix the problem.

Actually the behaviour is more complex and tricky than I described, and hostnamectl produces the same effect of overwriting the file:

Brian Wednesday, March 7, 2018 at 01:08 -

Thank you Ken – yes – that fix finally worked for me on RHEL 7!

Set “dns=none” in the [main] section of /etc/NetworkManager/NetworkManager.conf
Tested: with “$ cat /etc/resolv.conf” both before and after “# service network restart” and got the same output!

Otherwise I could not find out how to reliably set the “search” domains list, as I did not see an option in the /etc/sysconfig/network-scripts/ifcfg-INT files.

Brian Wednesday, March 7, 2018 at 01:41 -

Brian again here… note that I also had “DNS1, DNS2” removed from /etc/sysconfig/network-scripts/ifcfg-INT.

CAUTION: the “hostnamectl”[1] command will also reset /etc/resolv.conf rather bluntly… replacing the default “search” domain and deleting any “nameserver” entries. The file will also include the “# Generated by NetworkManager” header comment.

[1] e.g. # hostnamectl set-hostname newhost.domain --static; hostnamectl status
Then notice how that will overwrite /etc/resolv.conf as well

But this is a troubling sign: now, in addition to knowing which configuration file you need to edit, you need to know which files you can edit and which you can't (or how to disable the default behaviour). That is a completely different environment, just one step away from the Microsoft Registry.

Moreover, even a minimal installation of RHEL7 has over a hundred *.conf files.
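
You can count them yourself (the exact number varies with the installed package set):

 find /etc -name '*.conf' | wc -l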




Old News ;-)

[Jun 08, 2021] Increasing Network Connections in Centos7

Feb 24, 2016 | blog.dougco.com

I had a client who was losing network connectivity intermittently recently and it turns out they needed to increase the high limit for network connections. Centos7 has some variable name changes from previous versions so here are some helpful tips on how to increase the limits.

In older Centos you might have seen these error messages:

ip_conntrack version 2.4 (8192 buckets, 65536 max) - 304 bytes per conntrack

In newer versions, you will see something like:

localhost kernel: nf_conntrack: table full, dropping packet

The below is for Centos versions that have renamed the ip_conntrack to nf_conntrack.

To get a list of network parameters:

sysctl -a | grep netfilter

This shows current value for the key parameter:

/sbin/sysctl net.netfilter.nf_conntrack_max

This shows your system current load:

/sbin/sysctl net.netfilter.nf_conntrack_count

Now, to update the value in the kernel to triple the limit (make sure your RAM has room for the value you choose):

/sbin/sysctl -w net.netfilter.nf_conntrack_max=196608

To make the change permanent across reboots, add the value to /etc/sysctl.conf using the same key:

net.netfilter.nf_conntrack_max=196608
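
To load the values from /etc/sysctl.conf immediately, without waiting for a reboot:

 /sbin/sysctl -p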

Hope this helps!

[Mar 22, 2019] Move your dotfiles to version control Opensource.com

Mar 22, 2019 | opensource.com

Move your dotfiles to version control. Back up or sync your custom configurations across your systems by sharing dotfiles on GitLab or GitHub. 20 Mar 2019, Matthew Broberg (Red Hat)


In What a Shell Dotfile Can Do For You, H. "Waldo" Grunenwald goes into excellent detail about the why and how of setting up your dotfiles. Let's dig into the why and how of sharing them.

What's a dotfile?

"Dotfiles" is a common term for all the configuration files we have floating around our machines. These files usually start with a . at the beginning of the filename, like .gitconfig , and operating systems often hide them by default. For example, when I use ls -a on MacOS, it shows all the lovely dotfiles that would otherwise not be in the output.

dotfiles on master
➜ ls
README.md Rakefile bin misc profiles zsh-custom

dotfiles on master
➜ ls -a
. .gitignore .oh-my-zsh README.md zsh-custom
.. .gitmodules .tmux Rakefile
.gemrc .global_ignore .vimrc bin
.git .gvimrc .zlogin misc
.gitconfig .maid .zshrc profiles

If I take a look at one, .gitconfig , which I use for Git configuration, I see a ton of customization. I have account information, terminal color preferences, and tons of aliases that make my command-line interface feel like mine. Here's a snippet from the [alias] block:

# Show the diff between the latest commit and the current state
d = !"git diff-index --quiet HEAD -- || clear; git --no-pager diff --patch-with-stat"

# `git di $number` shows the diff between the state `$number` revisions ago and the current state
di = !"d() { git diff --patch-with-stat HEAD~$1; }; git diff-index --quiet HEAD -- || clear; d"

# Pull in remote changes for the current repository and all its submodules
p = !"git pull; git submodule foreach git pull origin master"

# Checkout a pull request from origin (of a github repository)
pr = !"pr() { git fetch origin pull/$1/head:pr-$1; git checkout pr-$1; }; pr"

Since my .gitconfig has over 200 lines of customization, I have no interest in rewriting it on every new computer or system I use, and neither does anyone else. This is one reason sharing dotfiles has become more and more popular, especially with the rise of the social coding site GitHub. The canonical article advocating for sharing dotfiles is Zach Holman's Dotfiles Are Meant to Be Forked from 2008. The premise is true to this day: I want to share them, with myself, with those new to dotfiles, and with those who have taught me so much by sharing their customizations.

Sharing dotfiles

Many of us have multiple systems or know hard drives are fickle enough that we want to back up our carefully curated customizations. How do we keep these wonderful files in sync across environments?

My favorite answer is distributed version control, preferably a service that will handle the heavy lifting for me. I regularly use GitHub and continue to enjoy GitLab as I get more experienced with it. Either one is a perfect place to share your information. To set yourself up:

  1. Sign into your preferred Git-based service.
  2. Create a repository called "dotfiles." (Make it public! Sharing is caring.)
  3. Clone it to your local environment. *
  4. Copy your dotfiles into the folder.
  5. Symbolically link (symlink) them back to their target folder (most often $HOME ).
  6. Push them to the remote repository.

* You may need to set up your Git configuration commands to clone the repository. Both GitHub and GitLab will prompt you with the commands to run.


Step 5 above is the crux of this effort and can be a bit tricky. Whether you use a script or do it by hand, the workflow is to symlink from your dotfiles folder to the dotfiles destination so that any updates to your dotfiles are easily pushed to the remote repository. To do this for my .gitconfig file, I would enter:

$ cd dotfiles/
$ ln -nfs "$PWD/.gitconfig" "$HOME/.gitconfig"

The flags added to the symlinking command offer a few additional benefits: -s creates a symbolic link instead of a hard link, -f forces creation of the link even if a file already exists at the destination, and -n treats a destination symlink to a directory as a normal file rather than descending into it.

You can review the IEEE and Open Group specification of ln and the version on MacOS 10.14.3 if you want to dig deeper into the available parameters. I had to look up these flags since I pulled them from someone else's dotfiles.

You can also make updating simpler with a little additional code, like the Rakefile I forked from Brad Parbs. Alternatively, you can keep it incredibly simple, as Jeff Geerling does in his dotfiles; he symlinks files using this Ansible playbook. Keeping everything in sync at this point is easy: you can set up a cron job or occasionally run git push from your dotfiles folder.
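
If you prefer a script over linking by hand, a short shell loop covers every top-level dotfile at once. This is just a sketch: it assumes the repository lives in ~/dotfiles and that everything except the Git metadata should be linked into $HOME.

 for f in ~/dotfiles/.[!.]*; do
     name=$(basename "$f")
     # skip the repository's own metadata
     case "$name" in .git|.gitignore|.gitmodules) continue ;; esac
     ln -nfs "$f" "$HOME/$name"
 done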

Quick aside: What not to share

Before we move on, it is worth noting what you should not add to a shared dotfile repository -- even if it starts with a dot. Anything that is a security risk, like files in your .ssh/ folder, is not a good choice to share using this method. Be sure to double-check your configuration files before publishing them online and triple-check that no API tokens are in your files.

Where should I start?

If Git is new to you, my article about the terminology and a cheat sheet of my most frequently used commands should help you get going.

There are other incredible resources to help you get started with dotfiles. Years ago, I came across dotfiles.github.io and continue to go back to it for a broader look at what people are doing. There is a lot of tribal knowledge hidden in other people's dotfiles. Take the time to scroll through some and don't be shy about adding them to your own.

I hope this will get you started on the joy of having consistent dotfiles across your computers.

What's your favorite dotfile trick? Add a comment or tweet me @mbbroberg.

Matthew Broberg - Matt loves working with technology communities to develop products and content that invite delightful engagement. He's a serial podcaster, best known for the Geek Whisperers podcast, is on the board of the Influence Marketing Council, co-maintains the DevRel Collective, and often shares his thoughts on Twitter and GitHub.


James Phillips on 20 Mar 2019 Permalink

See also the rcm suite for managing dotfiles from a central location. This provides the subdirectory from which you can put your dotfiles into revision control.

Web refs:

https://fedoramagazine.org/managing-dotfiles-rcm/

https://github.com/thoughtbot/rcm

Chris Hermansen on 20 Mar 2019 Permalink

An interesting article, Matt, thanks! I was glad to see "what not to share".

While most of my dot files hold no secrets, as you note some do - .ssh, .gnupg, .local/share among others... could be some others. Thinking about this, my dot files are kind of like my sock drawer - plenty of serviceable socks there, not sure I would want to share them! Anyway a neat idea.

Mark Pitman on 21 Mar 2019 Permalink

Instead of linking your dotfiles, give YADM a try: https://yadm.io

It wraps the git command and keeps the actual git repository in a subdirectory.

Tom Payne on 21 Mar 2019 Permalink

Check out https://github.com/twpayne/chezmoi . It allows you to store secrets securely, too. Disclaimer: I'm the author of chezmoi.

[Mar 22, 2019] Program the real world using Rust on Raspberry Pi Opensource.com

Mar 22, 2019 | opensource.com

If you own a Raspberry Pi, chances are you may already have experimented with physical computing -- writing code to interact with the real, physical world, like blinking some LEDs or controlling a servo motor . You may also have used GPIO Zero , a Python library that provides a simple interface to GPIO devices from Raspberry Pi with a friendly Python API. GPIO Zero is developed by Opensource.com community moderator Ben Nuttall .

I am working on rust_gpiozero , a port of the awesome GPIO Zero library that uses the Rust programming language. It is still a work in progress, but it already includes some useful components.

Rust is a systems programming language developed at Mozilla. It is focused on performance, reliability, and productivity. The Rust website has great resources if you'd like to learn more about it.

Getting started

Before starting with rust_gpiozero, it's smart to have a basic grasp of the Rust programming language. I recommend working through at least the first three chapters in The Rust Programming Language book.

I recommend installing Rust on your Raspberry Pi using rustup . Alternatively, you can set up a cross-compilation environment using cross (which works only on an x86_64 Linux host) or this how-to .

After you've installed Rust, create a new Rust project by entering:
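(A sketch using cargo's standard project scaffolding; the project name below is a placeholder, not from the original article:)

 $ cargo new gpio-demo --bin   # "gpio-demo" is a hypothetical name
 $ cd gpio-demo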

[Mar 13, 2019] Virtual filesystems Why we need them and how they work Opensource.com

Mar 13, 2019 | opensource.com

Virtual filesystems in Linux: Why we need them and how they work. Virtual filesystems are the magic abstraction that makes the "everything is a file" philosophy of Linux possible. 08 Mar 2019, Alison Chaiken


As Robert Love puts it, "A filesystem is a hierarchical storage of data adhering to a specific structure." However, this description applies equally well to VFAT (Virtual File Allocation Table), Git, and Cassandra (a NoSQL database). So what distinguishes a filesystem?

Filesystem basics

The Linux kernel requires that for an entity to be a filesystem, it must also implement the open() , read() , and write() methods on persistent objects that have names associated with them. From the point of view of object-oriented programming , the kernel treats the generic filesystem as an abstract interface, and these big-three functions are "virtual," with no default definition. Accordingly, the kernel's default filesystem implementation is called a virtual filesystem (VFS).

If we can open(), read(), and write(), it is a file as this console session shows.

VFS underlies the famous observation that in Unix-like systems "everything is a file." Consider how weird it is that the tiny demo above featuring the character device /dev/console actually works. The image shows an interactive Bash session on a virtual teletype (tty). Sending a string into the virtual console device makes it appear on the virtual screen. VFS has other, even odder properties. For example, it's possible to seek in them .


The familiar filesystems like ext4, NFS, and /proc all provide definitions of the big-three functions in a C-language data structure called file_operations . In addition, particular filesystems extend and override the VFS functions in the familiar object-oriented way. As Robert Love points out, the abstraction of VFS enables Linux users to blithely copy files to and from foreign operating systems or abstract entities like pipes without worrying about their internal data format. On behalf of userspace, via a system call, a process can copy from a file into the kernel's data structures with the read() method of one filesystem, then use the write() method of another kind of filesystem to output the data.

The function definitions that belong to the VFS base type itself are found in the fs/*.c files in kernel source, while the subdirectories of fs/ contain the specific filesystems. The kernel also contains filesystem-like entities such as cgroups, /dev, and tmpfs, which are needed early in the boot process and are therefore defined in the kernel's init/ subdirectory. Note that cgroups, /dev, and tmpfs do not call the file_operations big-three functions, but directly read from and write to memory instead.

The diagram below roughly illustrates how userspace accesses various types of filesystems commonly mounted on Linux systems. Not shown are constructs like pipes, dmesg, and POSIX clocks that also implement struct file_operations and whose accesses therefore pass through the VFS layer.

VFS is a "shim layer" between system calls and implementors of specific file_operations like ext4 and procfs. The file_operations functions can then communicate either with device-specific drivers or with memory accessors. tmpfs, devtmpfs and cgroups don't make use of file_operations but access memory directly.

VFS's existence promotes code reuse, as the basic methods associated with filesystems need not be re-implemented by every filesystem type. Code reuse is a widely accepted software engineering best practice! Alas, if the reused code introduces serious bugs , then all the implementations that inherit the common methods suffer from them.

/tmp: A simple tip

An easy way to find out what VFSes are present on a system is to type mount | grep -v sd | grep -v :/ , which will list all mounted filesystems that are not resident on a disk and not NFS on most computers. One of the listed VFS mounts will assuredly be /tmp, right?

Everyone knows that keeping /tmp on a physical storage device is crazy! Credit: https://tinyurl.com/ybomxyfo

Why is keeping /tmp on storage inadvisable? Because the files in /tmp are temporary(!), and storage devices are slower than memory, where tmpfs are created. Further, physical devices are more subject to wear from frequent writing than memory is. Last, files in /tmp may contain sensitive information, so having them disappear at every reboot is a feature.

Unfortunately, installation scripts for some Linux distros still create /tmp on a storage device by default. Do not despair should this be the case with your system. Follow simple instructions on the always excellent Arch Wiki to fix the problem, keeping in mind that memory allocated to tmpfs is not available for other purposes. In other words, a system with a gigantic tmpfs with large files in it can run out of memory and crash. Another tip: when editing the /etc/fstab file, be sure to end it with a newline, as your system will not boot otherwise. (Guess how I know.)
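
For reference, the usual fix amounts to a single line in /etc/fstab (a sketch; the size= cap is an assumption to tune against your available RAM):

 tmpfs /tmp tmpfs defaults,noatime,size=2G 0 0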

/proc and /sys

Besides /tmp, the VFSes with which most Linux users are most familiar are /proc and /sys. (/dev relies on shared memory and has no file_operations). Why two flavors? Let's have a look in more detail.

The procfs offers a snapshot into the instantaneous state of the kernel and the processes that it controls for userspace. In /proc, the kernel publishes information about the facilities it provides, like interrupts, virtual memory, and the scheduler. In addition, /proc/sys is where the settings that are configurable via the sysctl command are accessible to userspace. Status and statistics on individual processes are reported in /proc/<PID> directories.

/proc/meminfo is an empty file that nonetheless contains valuable information.

The behavior of /proc files illustrates how unlike on-disk filesystems VFS can be. On the one hand, /proc/meminfo contains the information presented by the command free. On the other hand, it's also empty! How can this be? The situation is reminiscent of a famous article written by Cornell University physicist N. David Mermin in 1985 called "Is the moon there when nobody looks? Reality and the quantum theory." The truth is that the kernel gathers statistics about memory when a process requests them from /proc, and there actually is nothing in the files in /proc when no one is looking. As Mermin said, "It is a fundamental quantum doctrine that a measurement does not, in general, reveal a preexisting value of the measured property." (The answer to the question about the moon is left as an exercise.)

The files in /proc are empty when no process accesses them. ( Source )

The apparent emptiness of procfs makes sense, as the information available there is dynamic. The situation with sysfs is different. Let's compare how many files of at least one byte in size there are in /proc versus /sys.

Comparing the number of files of at least one byte in size in /proc versus /sys.
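
You can reproduce the comparison with find (a sketch; run it as root, and note that walking all of /proc can take a while):

 # count files with at least one byte of content
 sudo find /proc -type f -size +0c 2>/dev/null | wc -l
 sudo find /sys -type f -size +0c 2>/dev/null | wc -l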

Procfs has precisely one, namely the exported kernel configuration, which is an exception since it needs to be generated only once per boot. On the other hand, /sys has lots of larger files, most of which comprise one page of memory. Typically, sysfs files contain exactly one number or string, in contrast to the tables of information produced by reading files like /proc/meminfo.

The purpose of sysfs is to expose the readable and writable properties of what the kernel calls "kobjects" to userspace. The only purpose of kobjects is reference-counting: when the last reference to a kobject is deleted, the system will reclaim the resources associated with it. Yet, /sys constitutes most of the kernel's famous " stable ABI to userspace " which no one may ever, under any circumstances, "break." That doesn't mean the files in sysfs are static, which would be contrary to reference-counting of volatile objects.

The kernel's stable ABI instead constrains what can appear in /sys, not what is actually present at any given instant. Listing the permissions on files in sysfs gives an idea of how the configurable, tunable parameters of devices, modules, filesystems, etc. can be set or read. Logic compels the conclusion that procfs is also part of the kernel's stable ABI, although the kernel's documentation doesn't state so explicitly.

Files in sysfs describe exactly one property each for an entity and may be readable, writable or both. The "0" in the file reveals that the SSD is not removable.
Snooping on VFS with eBPF and bcc tools

The easiest way to learn how the kernel manages sysfs files is to watch it in action, and the simplest way to watch on ARM64 or x86_64 is to use eBPF. eBPF (extended Berkeley Packet Filter) consists of a virtual machine running inside the kernel that privileged users can query from the command line. Kernel source tells the reader what the kernel can do; running eBPF tools on a booted system shows instead what the kernel actually does .

Happily, getting started with eBPF is pretty easy via the bcc tools, which are available as packages from major Linux distros and have been amply documented by Brendan Gregg. The bcc tools are Python scripts with small embedded snippets of C, meaning anyone who is comfortable with either language can readily modify them. At this count, there are 80 Python scripts in bcc/tools , making it highly likely that a system administrator or developer will find an existing one relevant to her/his needs.

To get a very crude idea about what work VFSes are performing on a running system, try the simple vfscount or vfsstat , which show that dozens of calls to vfs_open() and its friends occur every second.

vfsstat.py is a Python script with an embedded C snippet that simply counts VFS function calls.
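
Once a bcc package is installed, trying it is a one-liner. The path below is where Fedora and RHEL put the tools; Debian and Ubuntu ship the same scripts under names with a -bpfcc suffix:

 sudo /usr/share/bcc/tools/vfsstat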

For a less trivial example, let's watch what happens in sysfs when a USB stick is inserted on a running system.

Watch with eBPF what happens in /sys when a USB stick is inserted, with simple and complex examples.

In the first simple example above, the trace.py bcc tools script prints out a message whenever the sysfs_create_files() function runs. We see that sysfs_create_files() was started by a kworker thread in response to the USB stick insertion, but what file was created? The second example illustrates the full power of eBPF. Here, trace.py is printing the kernel backtrace (-K option) plus the name of the file created by sysfs_create_files(). The snippet inside the single quotes is some C source code, including an easily recognizable format string, that the provided Python script induces an LLVM just-in-time compiler to compile and execute inside an in-kernel virtual machine. The full sysfs_create_files() function signature must be reproduced in the second command so that the format string can refer to one of the parameters. Making mistakes in this C snippet results in recognizable C-compiler errors. For example, if the -I parameter is omitted, the result is "Failed to compile BPF text." Developers who are conversant with either C or Python will find the bcc tools easy to extend and modify.

When the USB stick is inserted, the kernel backtrace appears showing that PID 7711 is a kworker thread that created a file called "events" in sysfs. A corresponding invocation with sysfs_remove_files() shows that removal of the USB stick results in removal of the events file, in keeping with the idea of reference counting. Watching sysfs_create_link() with eBPF during USB stick insertion (not shown) reveals that no fewer than 48 symbolic links are created.

What is the purpose of the events file anyway? Using cscope to find the function __device_add_disk() reveals that it calls disk_add_events(), and either "media_change" or "eject_request" may be written to the events file. Here, the kernel's block layer is informing userspace about the appearance and disappearance of the "disk." Consider how quickly informative this method of investigating how USB stick insertion works is compared to trying to figure out the process solely from the source.

Read-only root filesystems make embedded devices possible

Assuredly, no one shuts down a server or desktop system by pulling out the power plug. Why? Because mounted filesystems on the physical storage devices may have pending writes, and the data structures that record their state may become out of sync with what is written on the storage. When that happens, system owners will have to wait at next boot for the fsck filesystem-recovery tool to run and, in the worst case, will actually lose data.

Yet, aficionados will have heard that many IoT and embedded devices like routers, thermostats, and automobiles now run Linux. Many of these devices almost entirely lack a user interface, and there's no way to "unboot" them cleanly. Consider jump-starting a car with a dead battery where the power to the Linux-running head unit goes up and down repeatedly. How is it that the system boots without a long fsck when the engine finally starts running? The answer is that embedded devices rely on a read-only root filesystem (ro-rootfs for short).

ro-rootfs are why embedded systems don't frequently need to fsck. Credit (with permission): https://tinyurl.com/yxoauoub

A ro-rootfs offers many advantages that are less obvious than incorruptibility. One is that malware cannot write to /usr or /lib if no Linux process can write there. Another is that a largely immutable filesystem is critical for field support of remote devices, as support personnel possess local systems that are nominally identical to those in the field. Perhaps the most important (but also most subtle) advantage is that ro-rootfs forces developers to decide during a project's design phase which system objects will be immutable. Dealing with ro-rootfs may often be inconvenient or even painful, as const variables in programming languages often are, but the benefits easily repay the extra overhead.

Creating a read-only rootfs does require some additional amount of effort for embedded developers, and that's where VFS comes in. Linux needs files in /var to be writable, and in addition, many popular applications that embedded systems run will try to create configuration dot-files in $HOME. One solution for configuration files in the home directory is typically to pregenerate them and build them into the rootfs. For /var, one approach is to mount it on a separate writable partition while / itself is mounted as read-only. Using bind or overlay mounts is another popular alternative.

Bind and overlay mounts and their use by containers

Running man mount is the best place to learn about bind and overlay mounts, which give embedded developers and system administrators the power to create a filesystem in one path location and then provide it to applications at a second one. For embedded systems, the implication is that it's possible to store the files in /var on an unwritable flash device but overlay- or bind-mount a path in a tmpfs onto the /var path at boot so that applications can scrawl there to their heart's delight. At next power-on, the changes in /var will be gone. Overlay mounts provide a union between the tmpfs and the underlying filesystem and allow apparent modification to an existing file in a ro-rootfs, while bind mounts can make new empty tmpfs directories show up as writable at ro-rootfs paths. While overlayfs is a proper filesystem type, bind mounts are implemented by the VFS namespace facility .
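
A minimal sketch of both techniques, assuming /rw is a writable scratch filesystem prepared at boot (all paths are illustrative):

 # bind mount: an empty writable directory appears at a read-only path
 mount -t tmpfs tmpfs /rw
 mkdir -p /rw/var-cache
 mount --bind /rw/var-cache /var/cache

 # overlay mount: apparent in-place modification of a read-only /var
 mkdir -p /rw/upper /rw/work
 mount -t overlay overlay -o lowerdir=/var,upperdir=/rw/upper,workdir=/rw/work /var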

Based on the description of overlay and bind mounts, no one will be surprised that Linux containers make heavy use of them. Let's spy on what happens when we employ systemd-nspawn to start up a container by running bcc's mountsnoop tool:

The systemd-nspawn invocation fires up the container while mountsnoop.py runs.

And let's see what happened:

Running mountsnoop during the container "boot" reveals that the container runtime relies heavily on bind mounts. (Only the beginning of the lengthy output is displayed)

Here, systemd-nspawn is providing selected files in the host's procfs and sysfs to the container at paths in its rootfs. Besides the MS_BIND flag that sets bind-mounting, some of the other flags that the "mount" system call invokes determine the relationship between changes in the host namespace and in the container. For example, the bind-mount can either propagate changes in /proc and /sys to the container, or hide them, depending on the invocation.

Summary

Understanding Linux internals can seem an impossible task, as the kernel itself contains a gigantic amount of code, leaving aside Linux userspace applications and the system-call interface in C libraries like glibc. One way to make progress is to read the source code of one kernel subsystem with an emphasis on understanding the userspace-facing system calls and headers plus major kernel internal interfaces, exemplified here by the file_operations table. The file operations are what makes "everything is a file" actually work, so getting a handle on them is particularly satisfying. The kernel C source files in the top-level fs/ directory constitute its implementation of virtual filesystems, which are the shim layer that enables broad and relatively straightforward interoperability of popular filesystems and storage devices. Bind and overlay mounts via Linux namespaces are the VFS magic that makes containers and read-only root filesystems possible. In combination with a study of source code, the eBPF kernel facility and its bcc interface makes probing the kernel simpler than ever before.

Many thanks to Akkana Peck and Michael Eager for comments and corrections.


Alison Chaiken will present Virtual filesystems: why we need them and how they work at the 17th annual Southern California Linux Expo ( SCaLE 17x ) March 7-10 in Pasadena, Calif.


01101001b on 08 Mar 2019 Permalink

Many years ago while I was still a Wind*ws user, I tried the same thing. It was a total failure, so I ditched that idea. Now reading your article, my first thought was "this is not going to..." and then I realized that indeed it would work! I'm such a chump. Thank you for your article!

Alison Chaiken on 13 Mar 2019 Permalink

The idea of virtual filesystems appears to originate in a 1986 USENIX paper called "Vnodes: An Architecture for Multiple File System Types in Sun UNIX." You can find it at

http://www.cs.fsu.edu/~awang/courses/cop5611_s2004/vnode.pdf

Figure 1 looks remarkably like one of the diagrams I made. I thank my co-worker Sam Cramer for pointing this out. You can find the figure in the slides that accompany the talk at http://she-devel.com/ChaikenSCALE2019.pdf

Clark Henry on 11 Mar 2019 Permalink

Alison, this presentation at SCaLE 17x was my favorite over all 4 days and 15 sessions! Awesome lecture and live demos. I learned an incredible amount, even as a Linux kernel noob. Really appreciate the bcc/eBPF demo as well - it was really encouraging to see how easy it is to get started.

So grateful that you also created this blog post... the recording from SCaLE was totally botched! Ugh.

Thank you!

Alison Chaiken on 13 Mar 2019 Permalink

Thanks for your kind words; I'm sorry to learn that the SCALE video is botched. Here at least are the slides:

http://she-devel.com/ChaikenSCALE2019.pdf

I'd be happy to answer any questions.

Pyrax on 13 Mar 2019 Permalink

Hey Clark, could you please post a link to the lectures and the live demos? I would really love to have a look.

Alison Chaiken on 13 Mar 2019 Permalink

Have a look here

https://www.youtube.com/watch?v=hiIkx0doVB0&t=15596s

at about 4:00. You'll want to look at the slides separately. They are at http://she-devel.com/ChaikenSCALE2019.pdf

[Mar 13, 2019] Getting started with the cat command by Alan Formy-Duval

Mar 13, 2019 | opensource.com


Cat can also number a file's lines during output. There are two options for this, as shown in the help documentation:

   -b, --number-nonblank    number nonempty output lines, overrides -n
   -n, --number             number all output lines

If I use the -b option with the hello.world file, the output will be numbered like this:

   $ cat -b hello.world
   1 Hello World !

In the example above, there is an empty line. We can determine why this empty line appears by using the -n argument:

$ cat -n hello.world
   1 Hello World !
   2
   $

Now we see that there is an extra empty line. These two arguments are operating on the final output rather than the file contents, so if we were to use the -n option with both files, numbering will count lines as follows:

   
   $ cat -n hello.world goodbye.world
   1 Hello World !
   2
   3 Good Bye World !
   4
   $

One other option that can be useful is -s for squeeze-blank . This argument tells cat to reduce repeated empty line output down to one line. This is helpful when reviewing files that have a lot of empty lines, because it effectively fits more text on the screen. Suppose I have a file with three lines that are spaced apart by several empty lines, such as in this example, greetings.world :

   $ cat greetings.world
   Greetings World !

   Take me to your Leader !

   We Come in Peace !
   $

Using the -s option saves screen space:

$ cat -s greetings.world

Cat is often used to copy contents of one file to another file. You may be asking, "Why not just use cp ?" Here is how I could create a new file, called both.files , that contains the contents of the hello and goodbye files:

$ cat hello.world goodbye.world > both.files
$ cat both.files
Hello World !
Good Bye World !
$
zcat

There is another variation on the cat command known as zcat . This command is capable of displaying files that have been compressed with Gzip without needing to uncompress the files with the gunzip command. As an aside, this also preserves disk space, which is the entire reason files are compressed!

The zcat command is a bit more exciting because it can be a huge time saver for system administrators who spend a lot of time reviewing system log files. Where can we find compressed log files? Take a look at /var/log on most Linux systems. On my system, /var/log contains several files, such as syslog.2.gz and syslog.3.gz . These files are the result of the log management system, which rotates and compresses log files to save disk space and prevent logs from growing to unmanageable file sizes. Without zcat , I would have to uncompress these files with the gunzip command before viewing them. Thankfully, I can use zcat :

$ cd /var/log
$ ls *.gz
syslog.2.gz syslog.3.gz
$
$ zcat syslog.2.gz | more
Jan 30 00:02:26 workstation systemd[1850]: Starting GNOME Terminal Server...
Jan 30 00:02:26 workstation dbus-daemon[1920]: [session uid=2112 pid=1920] Successful
ly activated service 'org.gnome.Terminal'
Jan 30 00:02:26 workstation systemd[1850]: Started GNOME Terminal Server.
Jan 30 00:02:26 workstation org.gnome.Terminal.desktop[2059]: # watch_fast: "/org/gno
me/terminal/legacy/" (establishing: 0, active: 0)
Jan 30 00:02:26 workstation org.gnome.Terminal.desktop[2059]: # unwatch_fast: "/org/g
nome/terminal/legacy/" (active: 0, establishing: 1)
Jan 30 00:02:26 workstation org.gnome.Terminal.desktop[2059]: # watch_established: "/
org/gnome/terminal/legacy/" (establishing: 0)
--More--

We can also pass both files to zcat if we want to review both of them uninterrupted. Due to how log rotation works, you need to pass the filenames in reverse order to preserve the chronological order of the log contents:

$ ls -l *.gz
-rw-r----- 1 syslog adm 196383 Jan 31 00:00 syslog.2.gz
-rw-r----- 1 syslog adm 1137176 Jan 30 00:00 syslog.3.gz
$ zcat syslog.3.gz syslog.2.gz | more

The cat command seems simple but is very useful. I use it regularly. You also don't need to feed or pet it like a real cat. As always, I suggest you review the man pages (man cat) for the cat and zcat commands to learn more about how they can be used. You can also use the --help argument for a quick synopsis of command line arguments.

Victorhck on 13 Feb 2019 Permalink

and there's also a "tac" command, that is just a "cat" upside down!
Following your example:

~~~~~

tac both.files
Good Bye World!
Hello World!
~~~~
Happy hacking! :)
Johan Godfried on 26 Feb 2019 Permalink

Interesting article but please don't misuse cat to pipe to more......

I am trying to teach people to use less pipes and here you go abusing cat to pipe to other commands. IMHO, 99.9% of the time this is not necessary!

Instead of "cat file | command", most of the time you can use "command file" (yes, I am an old dinosaur from a time when memory was very expensive and forking multiple commands could fill it all up).

Uri Ran on 03 Mar 2019 Permalink

Run cat then press keys to see the codes your shortcut send. (Press Ctrl+C to kill the cat when you're done.)

For example, on my Mac, the key combination option-leftarrow is ^[^[[D and command-downarrow is ^[[B.

I learned it from https://stackoverflow.com/users/787216/lolesque in his answer to https://stackoverflow.com/questions/12382499/looking-for-altleftarrowkey...

Geordie on 04 Mar 2019 Permalink

cat is also useful to make (or append to) text files without an editor:

$ cat >> foo << "EOF"
> Hello World
> Another Line
> EOF
$

Disabling Network Manager

OpenStack networking currently does not work on systems that have the Network Manager (NetworkManager) service enabled. Whether the Network Manager service is enabled by default on a Red Hat Enterprise Linux installation depends on which package groups were selected during installation. Follow the steps listed in this procedure while logged in as the root user on each system in the environment that will handle network traffic. This includes the system that will host the OpenStack Networking service, all network nodes, and all compute nodes.

These steps ensure that the NetworkManager service is disabled and replaced by the standard network service for all interfaces that will be used by OpenStack Networking.

  1. Verify Network Manager is currently enabled using the chkconfig command.
    # chkconfig --list NetworkManager
    The output displayed by the chkconfig command indicates whether or not the Network Manager service is enabled.
    • The system displays an error if the Network Manager service is not currently installed:
      error reading information on service NetworkManager: No such file or directory
      If this error is displayed then no further action is required to disable the Network Manager service.
    • The system displays a list of numerical run levels along with a value of on or off indicating whether the Network Manager service is enabled when the system is operating in the given run level.
      NetworkManager 	0:off	1:off	2:off	3:off	4:off	5:off	6:off
      If the value displayed for all run levels is off then the Network Manager service is disabled and no further action is required. If the value displayed for any of the run levels is on then the Network Manager service is enabled and further action is required.
  2. Ensure that the Network Manager service is stopped using the service command.
    # service NetworkManager stop
  3. Ensure that the Network Manager service is disabled using the chkconfig command.
    # chkconfig NetworkManager off
  4. Open each interface configuration file on the system in a text editor. Interface configuration files are found in the /etc/sysconfig/network-scripts/ directory and have names of the form ifcfg-X where X is replaced by the name of the interface. Valid interface names include eth0, p1p5, and em1.

    In each file ensure that the NM_CONTROLLED configuration key is set to no and the ONBOOT configuration key is set to yes.

    NM_CONTROLLED=no
    ONBOOT=yes
    This action ensures that the standard network service will take control of the interfaces and automatically activate them on boot.
  5. Ensure that the network service is started using the service command.
    # service network start
  6. Ensure that the network service is enabled using the chkconfig command.
    # chkconfig network on
The Network Manager service has been disabled. The standard network service has been enabled and configured to control the required network interfaces.
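
On RHEL 7, where systemd replaces the SysV utilities, the equivalent sequence is the following (a sketch; it assumes the legacy network service from the initscripts package is still installed):

 systemctl stop NetworkManager
 systemctl disable NetworkManager
 systemctl enable network
 systemctl start network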

Recommended Links




Please visit nixCraft site. It has material well worth your visit.


Introducing NetworkManager by Rosanna Yuen January 2005

How to disable Network Manager on Linux - Linux FAQ

nmcli(1) tool for controlling NetworkManager - Linux man page

networkmanager.conf(5) - Linux man page

[ifupdown]

This section contains ifupdown-specific options and thus only has an effect when using the ifupdown plugin.
managed=false | true
Controls whether interfaces listed in the 'interfaces' file are managed by NetworkManager. If set to true, then interfaces listed in /etc/network/interfaces are managed by NetworkManager. If set to false, then any interface listed in /etc/network/interfaces will be ignored by NetworkManager. Remember that NetworkManager controls the default route, so because the interface is ignored, NetworkManager may assign the default route to some other interface. When the option is missing, false value is taken as default.

nm-system-settings.conf(5) - Linux man page

nm-tool(1) - Linux man page

Ubuntu Manpage nmcli - command-line tool for controlling NetworkManager


