Softpanorama

May the source be with you, but remember the KISS principle ;-)
Bigger doesn't imply better. Bigger often is a sign of obesity, of lost control, of overcomplexity, of cancerous cells

Slightly Skeptical View on Enterprise Unix Administration



The KISS rule can be expanded as: Keep It Simple, Sysadmin ;-)

This page is written as a protest against overcomplexity and the bizarre data center atmosphere dominant in "semi-outsourced" or fully outsourced datacenters ;-). Unix/Linux sysadmins are being killed by the overcomplexity of the environment. As Charlie Schluting noted in 2010 (Enterprise Networking Planet, April 7, 2010):

What happened to the old "sysadmin" of just a few years ago? We've split what used to be the sysadmin into application teams, server teams, storage teams, and network teams. There were often at least a few people, the holders of knowledge, who knew how everything worked, and I mean everything. Every application, every piece of network gear, and how every server was configured -- these people could save a business in times of disaster.

Now look at what we've done. Knowledge is so decentralized we must invent new roles to act as liaisons between all the IT groups.

Architects now hold much of the high-level "how it works" knowledge, but without knowing how any one piece actually does work.

In organizations with more than a few hundred IT staff and developers, it becomes nearly impossible for one person to do and know everything. This movement toward specializing in individual areas seems almost natural. That, however, does not provide a free ticket for people to turn a blind eye.

Specialization

You know the story: Company installs new application, nobody understands it yet, so an expert is hired. Often, the person with a certification in using the new application only really knows how to run that application. Perhaps they aren't interested in learning anything else, because their skill is in high demand right now. And besides, everything else in the infrastructure is run by people who specialize in those elements. Everything is taken care of.

Except, how do these teams communicate when changes need to take place? Are the storage administrators teaching the Windows administrators about storage multipathing; or, worse, logging in and setting it up because it's faster for the storage gurus to do it themselves? A fundamental level of knowledge is often lacking, which makes it very difficult for teams to brainstorm about new ways to evolve IT services. The business environment has made it OK for IT staffers to specialize and only learn one thing.

If you hire someone certified in the application, operating system, or network vendor you use, that is precisely what you get. Certifications may be a nice filter to quickly identify who has direct knowledge in the area you're hiring for, but often they indicate specialization or compensation for lack of experience.

Resource Competition

Does your IT department function as a unit? Even 20-person IT shops have turf wars, so the answer is very likely, "no." As teams are split into more and more distinct operating units, grouping occurs. One IT budget gets split between all these groups. Often each group will have a manager who pitches his needs to upper management in hopes they will realize how important the team is.

The "us vs. them" mentality manifests itself at all levels, and it's reinforced by management having to define each team's worth in the form of a budget. One strategy is to illustrate a doomsday scenario. If you paint a bleak enough picture, you may get more funding. Only if you are careful enough to illustrate the failings are due to lack of capital resources, not management or people. A manager of another group may explain that they are not receiving the correct level of service, so they need to duplicate the efforts of another group and just implement something themselves. On and on, the arguments continue.

Most often, I've seen competition between server groups result in horribly inefficient uses of hardware. For example, what happens in your organization when one team needs more server hardware? Assume that another team has five unused servers sitting in a blade chassis. Does the answer change? No, it does not. Even in test environments, sharing doesn't often happen between IT groups.

With virtualization, some aspects of resource competition get better and some remain the same. When first implemented, most groups will be running their own type of virtualization for their platform. The next step, I've most often seen, is for test servers to get virtualized. If a new group is formed to manage the virtualization infrastructure, virtual machines can be allocated to various application and server teams from a central pool and everyone is now sharing. Or, they begin sharing and then demand their own physical hardware to be isolated from others' resource hungry utilization. This is nonetheless a step in the right direction. Auto migration and guaranteed resource policies can go a long way toward making shared infrastructure, even between competing groups, a viable option.

Blamestorming

The most damaging side effect of splitting into too many distinct IT groups is the reinforcement of an "us versus them" mentality. Aside from the notion that specialization creates a lack of knowledge, blamestorming is what this article is really about. When a project is delayed, it is all too easy to blame another group. The SAN people didn't allocate storage on time, so another team was delayed. That is the timeline of the project, so all work halted until that hiccup was resolved. Having someone else to blame when things get delayed makes it all too easy to simply stop working for a while.

More related to the initial points at the beginning of this article, perhaps, is the blamestorm that happens after a system outage.

Say an ERP system becomes unresponsive a few times throughout the day. The application team says it's just slowing down, and they don't know why. The network team says everything is fine. The server team says the application is "blocking on IO," which means it's a SAN issue. The SAN team says there is nothing wrong, and other applications on the same devices are fine. You've run through nearly every team, but still have no answer. The SAN people don't have access to the application servers to help diagnose the problem. The server team doesn't even know how the application runs.

See the problem? Specialized teams are distinct and by nature adversarial. Specialized staffers often relegate themselves into a niche knowing that as long as they continue working at large enough companies, "someone else" will take care of all the other pieces.

I unfortunately don't have an answer to this problem. Maybe rotating employees between departments would help: they gain knowledge and also get to know other people, which should lessen the propensity to view them as outsiders.

Additional useful material on the topic can also be found in my older article Solaris vs Linux:

Abstract

Introduction

Nine factors framework for comparison of two flavors of Unix in a large enterprise environment

Four major areas of Linux and Solaris deployment

Comparison of internal architecture and key subsystems

Security

Hardware: SPARC vs. X86

Development environment

Solaris as a cultural phenomenon

Using Solaris-Linux enterprise mix as the least toxic Unix mix available

Conclusions

Acknowledgements

Webliography

Here are my notes/reflections on the sysadmin problem in the strange (and typically pretty toxic) IT departments of large corporations:

 



NEWS CONTENTS

Old News ;-)


"I appreciate Woody Allen's humor because one of my safety valves is an appreciation for life's absurdities. His message is that life isn't a funeral march to the grave. It's a polka."

-- Dennis Kucinich

[Oct 18, 2018] 'less' command clearing screen upon exit - how to switch it off?

Notable quotes:
"... To prevent less from clearing the screen upon exit, use -X . ..."
Oct 18, 2018 | superuser.com

Wojciech Kaczmarek ,Feb 9, 2010 at 11:21

How to force the less program to not clear the screen upon exit?

I'd like it to behave like the git log command:

Any ideas? I haven't found any suitable less option or environment variable in the manual; I suspect it's set via some environment variable, though.

sleske ,Feb 9, 2010 at 11:59

To prevent less from clearing the screen upon exit, use -X .

From the manpage:

-X or --no-init

Disables sending the termcap initialization and deinitialization strings to the terminal. This is sometimes desirable if the deinitialization string does something unnecessary, like clearing the screen.

As to less exiting if the content fits on one screen, that's option -F :

-F or --quit-if-one-screen

Causes less to automatically exit if the entire file can be displayed on the first screen.

-F is not the default though, so it's likely preset somewhere for you. Check the env var LESS .

markpasc ,Oct 11, 2010 at 3:44

This is especially annoying if you know about -F but not -X , as then moving to a system that resets the screen on init will make short files simply not appear, for no apparent reason. This bit me with ack when I tried to take my ACK_PAGER='less -RF' setting to the Mac. Thanks a bunch! – markpasc Oct 11 '10 at 3:44

sleske ,Oct 11, 2010 at 8:45

@markpasc: Thanks for pointing that out. I would not have realized that this combination would cause this effect, but now it's obvious. – sleske Oct 11 '10 at 8:45

Michael Goldshteyn ,May 30, 2013 at 19:28

This is especially useful for the man pager, so that man pages do not disappear as soon as you quit less with the 'q' key. That is, you scroll to the position in a man page that you are interested in only for it to disappear when you quit the less pager in order to use the info. So, I added: export MANPAGER='less -s -X -F' to my .bashrc to keep man page info up on the screen when I quit less, so that I can actually use it instead of having to memorize it. – Michael Goldshteyn May 30 '13 at 19:28
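A minimal way to make that persistent (a sketch, assuming bash and a man implementation that honors the MANPAGER variable):

# keep man pages on the screen after quitting less
echo "export MANPAGER='less -s -X -F'" >> ~/.bashrc
source ~/.bashrc     # reload the current shell
man ls               # after pressing q, the page text stays visible in the terminal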

Michael Burr ,Mar 18, 2014 at 22:00

It kinda sucks that you have to decide when you start less how it must behave when you're going to exit. – Michael Burr Mar 18 '14 at 22:00

Derek Douville ,Jul 11, 2014 at 19:11

If you want any of the command-line options to always be default, you can add to your .profile or .bashrc the LESS environment variable. For example:
export LESS="-XF"

will always apply -X -F whenever less is run from that login session.

Sometimes commands are aliased (even by default in certain distributions). To check for this, type

alias

without arguments to see if it got aliased with options that you don't want. To run the actual command in your $PATH instead of an alias, just preface it with a back-slash :

\less

To see if a LESS environment variable is set in your environment and affecting behavior:

echo $LESS

dotancohen ,Sep 2, 2014 at 10:12

In fact, I add export LESS="-XFR" so that the colors show through less as well. – dotancohen Sep 2 '14 at 10:12

Giles Thomas ,Jun 10, 2015 at 12:23

Thanks for that! -XF on its own was breaking the output of git diff , and -XFR gets the best of both worlds -- no screen-clearing, but coloured git diff output. – Giles Thomas Jun 10 '15 at 12:23
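If you mainly care about git, the same flags can also be set just for git via its pager setting rather than globally (core.pager is a standard git configuration key):

git config --global core.pager 'less -XFR'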

[Oct 18, 2018] Isn't less just more

Highly recommended!
Oct 18, 2018 | unix.stackexchange.com

Bauna ,Aug 18, 2010 at 3:07

less is a lot more than more , for instance you have a lot more functionality:
g: go top of the file
G: go bottom of the file
/: search forward
?: search backward
N: show line number
: goto line
F: similar to tail -f, stop with ctrl+c
S: split lines

And I don't remember more ;-)

törzsmókus ,Feb 19 at 13:19

h : everything you don't remember ;) – törzsmókus Feb 19 at 13:19

KeithB ,Aug 18, 2010 at 0:36

There are a couple of things that I do all the time in less that don't work in more (at least in the versions on the systems I use). One is using G to go to the end of the file, and g to go to the beginning. This is useful for log files, when you are looking for recent entries at the end of the file. The other is search, where less highlights the match, while more just brings you to the section of the file where the match occurs, but doesn't indicate where it is.

geoffc ,Sep 8, 2010 at 14:11

Less has a lot more functionality.

You can use v to jump into the current $EDITOR. You can convert to tail -f mode with f as well as all the other tips others offered.

Ubuntu still has distinct less/more bins. At least mine does, or the more command is sending different arguments to less.

In any case, to see the difference, find a file that has more rows than you can see at one time in your terminal. Type cat , then the file name. It will just dump the whole file. Type more , then the file name. If on ubuntu, or at least my version (9.10), you'll see the first screen, then --More--(27%) , which means there's more to the file, and you've seen 27% so far. Press space to see the next page. less allows moving line by line, back and forth, plus searching and a whole bunch of other stuff.

Basically, use less . You'll probably never need more for anything. I've used less on huge files and it seems OK. I don't think it does crazy things like load the whole thing into memory ( cough Notepad). Showing line numbers could take a while, though, with huge files.
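For reference, the behaviors mentioned above map onto standard less flags; two handy invocations (the log file path is just an example):

less -N /var/log/syslog     # show line numbers; can be slow on very large files
less +F /var/log/syslog     # start in follow mode, like tail -f; Ctrl-C stops following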

[Oct 18, 2018] What are the differences between most, more and less

Highly recommended!
Jun 29, 2013 | unix.stackexchange.com

Smith John ,Jun 29, 2013 at 13:16

more

more is an old utility. When the text passed to it is too large to fit on one screen, it pages it. You can scroll down but not up.

Some systems hardlink more to less , providing users with a strange hybrid of the two programs that looks like more and quits at the end of the file like more but has some less features such as backwards scrolling. This is a result of less 's more compatibility mode. You can enable this compatibility mode temporarily with LESS_IS_MORE=1 less ... .

more passes raw escape sequences by default. Escape sequences tell your terminal which colors to display.

less

less was written by a man who was fed up with more 's inability to scroll backwards through a file. He turned less into an open source project and over time, various individuals added new features to it. less is massive now. That's why some small embedded systems have more but not less . For comparison, less 's source is over 27000 lines long. more implementations are generally only a little over 2000 lines long.

In order to get less to pass raw escape sequences, you have to pass it the -r flag. You can also tell it to only pass ANSI escape characters by passing it the -R flag.
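A quick way to see the difference, assuming a terminal and an ls with color support:

ls --color=always | less        # raw escape codes show up as ESC[...m noise
ls --color=always | less -R     # ANSI color sequences are interpreted; the listing stays colored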

most

most is supposed to be more than less . It can display multiple files at a time. By default, it truncates long lines instead of wrapping them and provides a left/right scrolling mechanism. most's website has no information about most 's features. Its manpage indicates that it is missing at least a few less features such as log-file writing (you can use tee for this though) and external command running.

By default, most uses strange non-vi-like keybindings. man most | grep '\<vi.?\>' doesn't return anything so it may be impossible to put most into a vi-like mode.

most has the ability to decompress gunzip-compressed files before reading. Its status bar has more information than less 's.

most passes raw escape sequences by default.

tifo ,Oct 14, 2014 at 8:44

Short answer: Just use less and forget about more

Longer version:

more is an old utility. You can't move around freely with more: you can use Space to browse page by page, or Enter to go line by line, and that is about it. less is more plus additional features: you can browse page-wise or line-wise, both up and down, and search.

Jonathan.Brink ,Aug 9, 2015 at 20:38

If "more" is lacking for you and you know a few vi commands use "less" – Jonathan.Brink Aug 9 '15 at 20:38

Wilko Fokken ,Jan 30, 2016 at 20:31

There is one single application whereby I prefer more to less :

To check my LATEST modified log files (in /var/log/ ), I use ls -AltF | more .

While less deletes the screen after exiting with q , more leaves those files and directories listed by ls on the screen, sparing me memorizing their names for examination.

(Should anybody know a parameter or configuration enabling less to keep its text after exiting, that would render this post obsolete.)

Jan Warchoł ,Mar 9, 2016 at 10:18

The parameter you want is -X (long form: --no-init ). From less ' manpage:

Disables sending the termcap initialization and deinitialization strings to the terminal. This is sometimes desirable if the deinitialization string does something unnecessary, like clearing the screen.

Jan Warchoł Mar 9 '16 at 10:18

[Oct 17, 2018] How to use arrays in bash script - LinuxConfig.org

Oct 17, 2018 | linuxconfig.org

Create indexed arrays on the fly

We can create indexed arrays with a more concise syntax, by simply assigning them some values:

$ my_array=(foo bar)
In this case we assigned multiple items at once to the array, but we can also insert one value at a time, specifying its index:
$ my_array[0]=foo
Array operations

Once an array is created, we can perform some useful operations on it, like displaying its keys and values or modifying it by appending or removing elements.

Print the values of an array

To display all the values of an array we can use the following shell expansion syntax:
${my_array[@]}
Or even:
${my_array[*]}
Both syntaxes let us access all the values of the array and produce the same results, unless the expansion is quoted. In that case a difference arises: when using @ , the expansion will result in one word for each element of the array. This becomes immediately clear when performing a for loop. As an example, imagine we have an array with two elements, "foo" and "bar":
$ my_array=(foo bar)
Performing a for loop on it will produce the following result:
$ for i in "${my_array[@]}"; do echo "$i"; done
foo
bar
When using * , and the variable is quoted, instead, a single "result" will be produced, containing all the elements of the array:
$ for i in "${my_array[*]}"; do echo "$i"; done
foo bar
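A minimal sketch of the difference, using an element that itself contains a space:

$ my_array=("foo bar" baz)
$ printf '<%s>\n' "${my_array[@]}"
<foo bar>
<baz>
$ printf '<%s>\n' "${my_array[*]}"
<foo bar baz>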



Print the keys of an array

It's even possible to retrieve and print the keys used in an indexed or associative array, instead of their respective values. The syntax is almost identical, but relies on the use of the ! operator:
$ my_array=(foo bar baz)
$ for index in "${!my_array[@]}"; do echo "$index"; done
0
1
2
The same is valid for associative arrays:
$ declare -A my_array
$ my_array=([foo]=bar [baz]=foobar)
$ for key in "${!my_array[@]}"; do echo "$key"; done
baz
foo
As you can see, since the latter is an associative array, we can't count on the keys and values being returned in the same order in which they were declared.

Getting the size of an array

We can retrieve the size of an array (the number of elements contained in it) by using a specific shell expansion:
$ my_array=(foo bar baz)
$ echo "the array contains ${#my_array[@]} elements"
the array contains 3 elements
We have created an array which contains three elements, "foo", "bar" and "baz". Then, by using the syntax above, which differs from the one we saw before for retrieving the array values only by the # character before the array name, we retrieved the number of elements in the array instead of its content.

Adding elements to an array

As we saw, we can add elements to an indexed or associative array by specifying respectively their index or associative key. In the case of indexed arrays, we can also simply add an element by appending it to the end of the array, using the += operator:
$ my_array=(foo bar)
$ my_array+=(baz)
If we now print the content of the array we see that the element has been added successfully:
$ echo "${my_array[@]}"
foo bar baz
Multiple elements can be added at a time:
$ my_array=(foo bar)
$ my_array+=(baz foobar)
$ echo "${my_array[@]}"
foo bar baz foobar
To add elements to an associative array, we must also specify their associated keys:
$ declare -A my_array

# Add single element
$ my_array[foo]="bar"

# Add multiple elements at a time
$ my_array+=([baz]=foobar [foobarbaz]=baz)
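The two expansions can be combined to iterate over both keys and values of an associative array (as noted above, the key order is not guaranteed):

$ declare -A my_array=([foo]=bar [baz]=foobar)
$ for key in "${!my_array[@]}"; do echo "$key -> ${my_array[$key]}"; done
baz -> foobar
foo -> bar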



Deleting an element from the array

To delete an element from the array we need to know its index (or its key, in the case of an associative array) and use the unset command. Let's see an example:
$ my_array=(foo bar baz)
$ unset my_array[1]
$ echo ${my_array[@]}
foo baz
We have created a simple array containing three elements, "foo", "bar" and "baz", then we deleted "bar" from it by running unset and referencing the index of "bar" in the array: in this case we know it was 1, since bash arrays start at 0. If we check the indexes of the array, we can now see that 1 is missing:
$ echo ${!my_array[@]}
0 2
The same thing is valid for associative arrays:
$ declare -A my_array
$ my_array+=([foo]=bar [baz]=foobar)
$ unset my_array[foo]
$ echo ${my_array[@]}
foobar
In the example above, the value referenced by the "foo" key has been deleted, leaving only "foobar" in the array.

Deleting an entire array is even simpler: we just pass the array name as an argument to the unset command, without specifying any index or key:

$ unset my_array
$ echo ${!my_array[@]}

After executing unset against the entire array, when we try to print its content an empty result is returned: the array doesn't exist anymore.

Conclusions

In this tutorial we saw the difference between indexed and associative arrays in bash, how to initialize them and how to perform fundamental operations, like displaying their keys and values and appending or removing items. Finally we saw how to unset them completely. Bash syntax can sometimes be pretty weird, but using arrays in scripts can be really useful. When a script starts to become more complex than expected, my advice is, however, to switch to a more capable scripting language such as Python.

[Oct 16, 2018] Taking Command of the Terminal with GNU Screen

It is available from the EPEL repository; to launch it, type byobu-screen
Notable quotes:
"... Note that byobu doesn't actually do anything to screen itself. It's an elaborate (and pretty groovy) screen configuration customization. You could do something similar on your own by hacking your ~/.screenrc, but the byobu maintainers have already done it for you. ..."
Oct 16, 2018 | www.linux.com

Can I have a Copy of That?

Want a quick and dirty way to take notes of what's on your screen? Yep, there's a command for that. Run Ctrl-a h and screen will save a text file called "hardcopy.n" in your current directory that has all of the existing text. Want to get a quick snapshot of the top output on a system? Just run Ctrl-a h and there you go.

You can also save a log of what's going on in a window by using Ctrl-a H . This will create a file called screenlog.0 in the current directory. Note that it may have limited usefulness if you're doing something like editing a file in Vim, and the output can look pretty odd if you're doing much more than entering a few simple commands. To close a screenlog, use Ctrl-a H again.

Note if you want a quick glance at the system info, including hostname, system load, and system time, you can get that with Ctrl-a t .

Simplifying Screen with Byobu

If the screen commands seem a bit too arcane to memorize, don't worry. You can tap the power of GNU Screen in a slightly more user-friendly package called byobu . Basically, byobu is a souped-up screen profile originally developed for Ubuntu. Not using Ubuntu? No problem, you can find RPMs or a tarball with the profiles to install on other Linux distros or Unix systems that don't feature a native package.

Note that byobu doesn't actually do anything to screen itself. It's an elaborate (and pretty groovy) screen configuration customization. You could do something similar on your own by hacking your ~/.screenrc, but the byobu maintainers have already done it for you.
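As a rough illustration, a minimal ~/.screenrc along these lines (a sketch, not byobu's actual profile) already gives you a persistent status line:

# ~/.screenrc -- minimal example
startup_message off                # skip the copyright splash
defscrollback 5000                 # larger scrollback buffer
hardstatus alwayslastline          # reserve the last line for a status bar
hardstatus string '%H | load %l | %Y-%m-%d %c'    # hostname, load average, date, time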

Since most of byobu is self-explanatory, I won't go into great detail about using it. You can launch byobu by running byobu . You'll see a shell prompt plus a few lines at the bottom of the screen with additional information about your system, such as the system CPUs, uptime, and system time. To get a quick help menu, hit F9 and then use the Help entry. Most of the commands you would use most frequently are assigned F keys as well. Creating a new window is F2, cycling between windows is F3 and F4, and detaching from a session is F6. To re-title a window use F8, and if you want to lock the screen use F12.

The only downside to byobu is that it's not going to be on all systems, and in a pinch it may help to know your way around plain-vanilla screen rather than byobu.

For an easy reference, here's a list of the most common screen commands that you'll want to know. This isn't exhaustive, but it should be enough for most users to get started using screen happily for most use cases.

Finally, if you want help on GNU Screen, use the man page (man screen) and its built-in help with Ctrl-a :help. Screen has quite a few advanced options that are beyond an introductory tutorial, so be sure to check out the man page when you have the basics down.

[Oct 16, 2018] How To Use Linux Screen

Oct 16, 2018 | linuxize.com

Working with Linux Screen Windows

When you start a new screen session by default it creates a single window with a shell in it.

You can have multiple windows inside a Screen session.

To create a new window with a shell, type Ctrl+a c ; the first available number from the range 0...9 will be assigned to it.

Below are some of the most common commands for managing Linux Screen windows:

Detach from Linux Screen Session

You can detach from the screen session at any time by typing:

Ctrl+a d

The program running in the screen session will continue to run after you detach from the session.
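A typical workflow with named sessions (the session name "backup" and the script name are just examples):

screen -S backup       # start a new session named "backup"
./long-job.sh          # run a long task inside it
# press Ctrl+a d to detach; the job keeps running on the server
screen -ls             # list running sessions
screen -r backup       # reattach by name later, even from a new SSH login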

Reattach to Linux Screen

To resume your screen session use the following command:

screen -r

In case you have multiple screen sessions running on your machine, you will need to append the screen session ID after the -r switch.

To find the session ID list the current running screen sessions with:

screen -ls
There are screens on:
    10835.pts-0.linuxize-desktop   (Detached)
    10366.pts-0.linuxize-desktop   (Detached)
2 Sockets in /run/screens/S-linuxize.

If you want to restore screen 10835.pts-0, then type the following command:

screen -r 10835

Customize Linux Screen

When screen is started it reads its configuration parameters from /etc/screenrc and ~/.screenrc if the file is present. We can modify the default Screen settings according to our own preferences using the .screenrc file.

Here is a sample ~/.screenrc configuration with a customized status line and a few additional options:

~/.screenrc
# Turn off the welcome message
startup_message off

# Disable visual bell
vbell off

# Set scrollback buffer to 10000
defscrollback 10000

# Customize the status line
hardstatus alwayslastline
hardstatus string '%{= kG}[ %{G}%H %{g}][%= %{= kw}%?%-Lw%?%{r}(%{W}%n*%f%t%?(%u)%?%{r})%{w}%?%+

[Oct 15, 2018] Breaking News! SUSE Linux Sold for $2.5 Billion It's FOSS by Abhishek Prakash

Acquisition by a private equity shark is never good news for a software vendor...
Jul 03, 2018 | itsfoss.com

British software company Micro Focus International has agreed to sell SUSE Linux and its associated software business to Swedish private equity group EQT Partners for $2.535 billion. Read the details. (rm, 3 months ago)

Novell acquired SUSE in 2003 for $210 million. (asoc, 4 months ago)

"It has over 1400 employees all over the globe "
They should be updating their CVs.

[Oct 15, 2018] I honestly, seriously sometimes wonder if systemd is Skynet... or, a way for Skynet to 'waken'.

Oct 15, 2018 | linux.slashdot.org

thegarbz ( 1787294 ) , Sunday August 30, 2015 @04:08AM ( #50419549 )

Re:Hang on a minute... ( Score: 5 , Funny)
I honestly, seriously sometimes wonder if systemd is Skynet... or, a way for Skynet to 'waken'.

Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. At 2:15am it crashes.
No one knows why. The binary log file was corrupted in the process and is unrecoverable. All anyone could remember is a bug listed in the systemd bug tracker talking about su which was classified as WON'T FIX as the developer thought it was a broken concept.

[Oct 15, 2018] Systemd as doord interface for cars ;-) by Nico Schottelius

Oct 15, 2018 | blog.ungleich.ch

Let's say every car manufacturer recently discovered a new technology named "doord", which lets you open up car doors much faster than before. It only takes 0.05 seconds, instead of 1.2 seconds on average. So every time you open a door, you are much, much faster!

Many of the manufacturers decide to implement doord, because the company providing doord makes it clear that it is beneficial for everyone. And in addition to opening doors faster, it also standardises things. How do you turn on your car? It is the same everywhere now; it is no longer necessary to look for the keyhole.

Unfortunately though, sometimes doord does not stop the engine. Or if it is cold outside, it stops the ignition process, because it takes too long. Doord also changes the way your navigation system works, because that is totally related to opening doors, but leads to some users being unable to navigate, which is accepted as collateral damage. In the end, you at least have faster door opening and a standard way to turn on the car. Oh, and if you are in a traffic jam and have to restart the engine often, it will stop restarting it after several times, because that's not what you are supposed to do. You can open the engine hood and tune that setting though, but it will be reset once you buy a new car.

[Oct 15, 2018] Future History of Init Systems

Oct 15, 2018 | linux.slashdot.org

AntiSol ( 1329733 ) , Saturday August 29, 2015 @03:52PM ( #50417111 )

Re:Approaching the Singularity ( Score: 4 , Funny)

Future History of Init Systems

Future History of Init Systems
  • 2015: systemd becomes default boot manager in debian.
  • 2017: "complete, from-scratch rewrite" [jwz.org]. In order to not have to maintain backwards compatibility, project is renamed to system-e.
  • 2019: debut of systemf, absorption of other projects including alsa, pulseaudio, xorg, GTK, and opengl.
  • 2021: systemg maintainers make the controversial decision to absorb The Internet Archive. Systemh created as a fork without Internet Archive.
  • 2022: systemi, a fork of systemf focusing on reliability and minimalism becomes default debian init system.
  • 2028: systemj, a complete, from-scratch rewrite is controversial for trying to reintroduce binary logging. Consensus is against the systemj devs as sysadmins remember the great systemd logging bug of 2017 unkindly. Systemj project is eventually abandoned.
  • 2029: systemk codebase used as basis for a military project to create a strong AI, known as "project skynet". Software behaves paradoxically and project is terminated.
  • 2033: systeml - "system lean" - a "back to basics", from-scratch rewrite, takes off on several server platforms, boasting increased reliability. systemm, "system mean", a fork, used in security-focused distros.
  • 2117: critical bug discovered in the long-abandoned but critical and ubiquitous system-r project. A new project, system-s, is announced to address shortcomings in the hundred-year-old codebase. A from-scratch rewrite begins.
  • 2142: systemu project, based on a derivative of systemk, introduces "Artificially intelligent init system which will shave 0.25 seconds off your boot time and absolutely definitely will not subjugate humanity". Millions die. The survivors declare "thou shalt not make an init system in the likeness of the human mind" as their highest law.
  • 2147: systemv - a collection of shell scripts written around a very simple and reliable PID 1 introduced, based on the brand new religious doctrines of "keep it simple, stupid" and "do one thing, and do it well". People's computers start working properly again, something few living people can remember. Wyld Stallyns release their 94th album. Everybody lives in peace and harmony.

[Oct 15, 2018] They should have just rename the machinectl into command.com.

Oct 15, 2018 | linux.slashdot.org

RabidReindeer ( 2625839 ) , Saturday August 29, 2015 @11:38AM ( #50415833 )

What's with all the awkward systemd command names? ( Score: 5 , Insightful)

I know systemd sneers at the old Unix convention of keeping it simple, keeping it separate, but that's not the only convention they spit on. God intended Unix (Linux) commands to be cryptic things 2-4 letters long (like "su", for example). Not "systemctl", "machinectl", "journalctl", etc. Might as well just give everything a 47-character long multi-word command like the old Apple commando shell did.

Seriously, though, when you're banging through system commands all day long, it gets old and their choices aren't especially friendly to tab completion. On top of which why is "machinectl" a shell and not some sort of hardware function? They should have just named the bloody thing command.com.

[Oct 15, 2018] Oh look, another Powershell

Oct 15, 2018 | linux.slashdot.org

Anonymous Coward , Saturday August 29, 2015 @11:37AM ( #50415825 )

Cryptic command names ( Score: 5 , Funny)

Great to see that systemd is finally doing something about all of those cryptic command names that plague the unix ecosystem.

Upcoming systemd re-implementations of standard utilities:

ls to be replaced by filectl directory contents [pathname]
grep to be replaced by datactl file contents search [plaintext] (note: regexp no longer supported as it's ambiguous)
gimp to be replaced by imagectl open file filename draw box [x1,y1,x2,y2] draw line [x1,y1,x2,y2] ...
Anonymous Coward , Saturday August 29, 2015 @11:58AM ( #50415939 )
Re: Cryptic command names ( Score: 3 , Funny)

Oh look, another Powershell

[Oct 14, 2018] Does Systemd Make Linux Complex, Error-Prone, and Unstable

Or in other words, a simple, reliable and clear solution (which has some faults due to its age) was replaced with a gigantic KISS violation. No engineer worth the name will ever do that. And if it needs doing, any good engineer will make damned sure to achieve maximum compatibility and a clean way back. The systemd people seem to be hell-bent on making it as hard as possible to not use their monster. That alone is a good reason to stay away from it.
Oct 14, 2018 | linux.slashdot.org

Reverend Green ( 4973045 ) , Monday December 11, 2017 @04:48AM ( #55714431 )

Re: Does systemd make ... ( Score: 5 , Funny)

Systemd is nothing but a thinly-veiled plot by Vladimir Putin and Beyonce to import illegal German Nazi immigrants over the border from Mexico who will then corner the market in kimchi and implement Sharia law!!!

Anonymous Coward , Monday December 11, 2017 @01:38AM ( #55714015 )

Re:It violates fundamental Unix principles ( Score: 4 , Funny)

The Emacs of the 2010s.

DontBeAMoran ( 4843879 ) , Monday December 11, 2017 @01:57AM ( #55714059 )
Re:It violates fundamental Unix principles ( Score: 5 , Funny)

We are systemd. Lower your memory locks and surrender your processes. We will add your calls and code distinctiveness to our own. Your functions will adapt to service us. Resistance is futile.

serviscope_minor ( 664417 ) , Monday December 11, 2017 @04:47AM ( #55714427 ) Journal
Re:It violates fundamental Unix principles ( Score: 4 , Insightful)

I think we should call systemd the Master Control Program since it seems to like making other programs functions its own.

Anonymous Coward , Monday December 11, 2017 @01:47AM ( #55714035 )
Don't go hating on systemd ( Score: 5 , Funny)

RHEL7 is a fine OS, the only thing it's missing is a really good init system.

[Oct 14, 2018] Linux and Unix xargs command tutorial with examples by George Ornbo

Sep 11, 2017 | shapeshed.com

xargs v exec {}

The find command supports the -exec option, which allows arbitrary commands to be run on the files that are found. The following are equivalent.

find ./foo -type f -name "*.txt" -exec rm {} \; 
find ./foo -type f -name "*.txt" | xargs rm

So which one is faster? Let's compare a folder with 1000 files in it.

time find . -type f -name "*.txt" -exec rm {} \;
0.35s user 0.11s system 99% cpu 0.467 total

time find ./foo -type f -name "*.txt" | xargs rm
0.00s user 0.01s system 75% cpu 0.016 total

Clearly using xargs is far more efficient. In fact several benchmarks suggest using xargs over exec {} is six times more efficient.
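One caveat with the pipe form: filenames containing spaces or newlines get split incorrectly. With GNU find and xargs the usual fix is NUL-separated output:

find ./foo -type f -name "*.txt" -print0 | xargs -0 rm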

How to print commands that are executed

The -t option prints each command that will be executed to the terminal. This can be helpful when debugging scripts.

echo 'one two three' | xargs -t rm
rm one two three
How to view the command and prompt for execution

The -p option will print the command to be executed and prompt the user to run it. This can be useful for destructive operations where you really want to be sure about the command that will be run.

echo 'one two three' | xargs -p touch
touch one two three ?...
How to run multiple commands with xargs

It is possible to run multiple commands with xargs by using the -I flag. This defines a placeholder string that is replaced by each argument read from the input. The following echoes each input line and creates a folder named after it.

cat foo.txt
one
two
three

cat foo.txt | xargs -I % sh -c 'echo %; mkdir %'
one 
two
three

ls 
one two three
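GNU xargs can also run the invocations in parallel with the -P option; a sketch reusing the same placeholder syntax:

# run up to 4 mkdir processes at a time, one directory name per invocation
cat foo.txt | xargs -P 4 -I % mkdir %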

[Oct 13, 2018] 4 Useful Tools to Run Commands on Multiple Linux Servers

Oct 13, 2018 | www.tecmint.com
  1. John west says: October 12, 2018 at 5:02 pm

    The xCAT project spawned psh and dsh. Older versions of psh had an option to use a nodelist.tab file that contained:

    Host01 group1,webserver,rhel7
    Host02 group1,appserver,rhel6
    ....
    

    psh group1 uptime

    would run on both, while

    psh rhel6 uptime

    would run only on Host02.

    Each server is listed once, not in 10 different files.

    Later xcat switched to a db format for hosts, but was more complicated.

    Loved that nodelist.tab simplicity.

    Reply
  2. Rick Maus says: October 11, 2018 at 4:45 pm

    Thanks for the article! Always looking for more options to perform similar tasks.

    When you want to interact with multiple hosts simultaneously, MobaXterm (mobaxterm.mobatek.net), is a powerful tool. You can even use your favorite text editor (vim, emacs, nano, ed) in real time.

    Each character typed is sent in parallel to all hosts and you immediately see the effect. Selectively toggling whether the input stream is sent to individual host(s) during a session allows for custom changes that only affect a desired subset of hosts.

    MobaXterm has a free home version as well as a paid professional edition. The company was highly responsive to issues reports that I provided and corrected the issues quickly.

    I have no affiliation with the company other than being a happy free edition customer.

    Reply
    • Aaron Kili says: October 12, 2018 at 12:59 pm

      @Rick

      Many thanks for sharing this useful information.

[Oct 13, 2018] replace cd in bash to (silent) pushd · GitHub

Oct 13, 2018 | gist.github.com

mbadran / gist:130469 (created Jun 16, 2009)
replace cd in bash to (silent) pushd (gistfile1.sh)
alias cd="pushd $@ > /dev/null"
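With this alias active, the directory stack builds up as you move around, so the usual stack commands become directly useful (a quick illustration; the dirs output is abbreviated):

cd /etc
cd /var/log
dirs -v        # 0  /var/log   1  /etc   2  ~
popd           # drop /var/log from the stack and jump back to /etc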
bobbydavid commented Sep 19, 2012

One annoyance with this alias is that simply typing "cd" will twiddle the directory stack instead of bringing you to your home directory.
dideler commented Mar 9, 2013

@bobbydavid makes a good point. This would be better as a function.

function cd {
    if (("$#" > 0)); then
        pushd "$@" > /dev/null
    else
        cd $HOME
    fi
}

By the way, I found this gist by googling "silence pushd".

ghost commented May 30, 2013

Don't you miss something?

function cd {
    if (("$#" > 0)); then
        if [ "$1" == "-" ]; then
            popd > /dev/null
        else
            pushd "$@" > /dev/null
        fi
    else
        cd $HOME
    fi
}

You can always mimic the "cd -" functionality by using pushd alone.
Btw, I also found this gist by googling "silent pushd" ;)

cra commented Jul 1, 2014

And thanks to your last comment, I found this gist by googling "silent cd -" :)
keltroth commented Jun 25, 2015

With bash completion activated I can't get rid of this error:
"bash: pushd: cd: No such file or directory"...

Any clue?

keltroth commented Jun 25, 2015

Got it!
One has to add:

complete -d cd

after making the alias!

My complete code here:

function _cd {
    if (("$#" > 0)); then
        if [ "$1" == "-" ]; then
            popd > /dev/null
        else
            pushd "$@" > /dev/null
        fi
    else
        cd $HOME
    fi
}

alias cd=_cd
complete -d cd
jan-warchol commented Nov 29, 2015

I wanted to be able to go back by a given number of history items by typing cd -n , and I came up with this:

function _cd {
    # typing just `_cd` will take you $HOME ;)
    if [ "$1" == "" ]; then
        pushd "$HOME" > /dev/null

    # use `_cd -` to visit previous directory
    elif [ "$1" == "-" ]; then
        pushd $OLDPWD > /dev/null

    # use `_cd -n` to go n directories back in history
    elif [[ "$1" =~ ^-[0-9]+$ ]]; then
        for i in `seq 1 ${1/-/}`; do
            popd > /dev/null
        done

    # use `_cd -- <path>` if your path begins with a dash
    elif [ "$1" == "--" ]; then
        shift
        pushd -- "$@" > /dev/null

    # basic case: move to a dir and add it to history
    else
        pushd "$@" > /dev/null
    fi
}

# replace standard `cd` with enhanced version, ensure tab-completion works
alias cd=_cd
complete -d cd

I think you may find this interesting.

3v1n0 commented Oct 25, 2017

Another improvement over @jan-warchol's version, to make cd - alternate between pushd $OLDPWD and popd depending on what we called before.

This allows you to avoid filling your history with elements when you often do cd -; cd - # repeated as long as you want . This could be applied when using this alias also for $OLDPWD, but in that case it might be that you want it repeated there, so I didn't touch it.

Also added cd -l as an alias for dirs -v, and cd -g X to go to the Xth directory in your history (without popping; that's possible too of course, but it's something more of an addition in this case).

# Replace cd with pushd https://gist.github.com/mbadran/130469
function push_cd() {
  # typing just `push_cd` will take you $HOME ;)
  if [ -z "$1" ]; then
    push_cd "$HOME"

  # use `push_cd -` to visit previous directory
  elif [ "$1" == "-" ]; then
    if [ "$(dirs -p | wc -l)" -gt 1 ]; then
      current_dir="$PWD"
      popd > /dev/null
      pushd -n $current_dir > /dev/null
    elif [ -n "$OLDPWD" ]; then
      push_cd $OLDPWD
    fi

  # use `push_cd -l` or `push_cd -s` to print current stack of folders
  elif [ "$1" == "-l" ] || [ "$1" == "-s" ]; then
    dirs -v

  # use `push_cd -g N` to go to the Nth directory in history (pushing)
  elif [ "$1" == "-g" ] && [[ "$2" =~ ^[0-9]+$ ]]; then
    indexed_path=$(dirs -p | sed -n $(($2+1))p)
    push_cd $indexed_path

  # use `push_cd +N` to go to the Nth directory in history (pushing)
  elif [[ "$1" =~ ^+[0-9]+$ ]]; then
    push_cd -g ${1/+/}

  # use `push_cd -N` to go N directories back in history
  elif [[ "$1" =~ ^-[0-9]+$ ]]; then
    for i in `seq 1 ${1/-/}`; do
      popd > /dev/null
    done

  # use `push_cd -- <path>` if your path begins with a dash
  elif [ "$1" == "--" ]; then
    shift
    pushd -- "$@" > /dev/null

  # basic case: move to a dir and add it to history
  else
    pushd "$@" > /dev/null

    if [ "$1" == "." ] || [ "$1" == "$PWD" ]; then
      popd -n > /dev/null
    fi
  fi

  if [ -n "$CD_SHOW_STACK" ]; then
    dirs -v
  fi
}

# replace standard `cd` with enhanced version, ensure tab-completion works
alias cd=push_cd
complete -d cd

[Oct 10, 2018] Bash History Display Date And Time For Each Command - nixCraft

Oct 10, 2018 | www.cyberciti.biz
  1. Abhijeet Vaidya says: March 11, 2010 at 11:41 am End single quote is missing.
    Correct command is:
    echo 'export HISTTIMEFORMAT="%d/%m/%y %T "' >> ~/.bash_profile 
  2. izaak says: March 12, 2010 at 11:06 am I would also add
    $ echo 'export HISTSIZE=10000' >> ~/.bash_profile

    It's really useful, I think.

  3. Dariusz says: March 12, 2010 at 2:31 pm you can add it to /etc/profile so it is available to all users. I also add:

    # Make sure all terminals save history
    shopt -s histappend histreedit histverify
    shopt -s no_empty_cmd_completion # bash>=2.04 only

    # Whenever displaying the prompt, write the previous line to disk:
    PROMPT_COMMAND='history -a'

    #Use GREP color features by default: This will highlight the matched words / regexes
    export GREP_OPTIONS='--color=auto'
    export GREP_COLOR='1;37;41'

  4. Babar Haq says: March 15, 2010 at 6:25 am Good tip. We have multiple users connecting as root using ssh and running different commands. Is there a way to log the IP that command was run from?
    Thanks in advance.
    1. Anthony says: August 21, 2014 at 9:01 pm Just for anyone who might still find this thread (like I did today):

      export HISTTIMEFORMAT="%F %T : $(echo $SSH_CONNECTION | cut -d\ -f1) : "

      will give you the time format, plus the IP address culled from the ssh_connection environment variable (thanks for pointing that out, Cadrian, I never knew about that before), all right there in your history output.

      You could even add in $(whoami)@ right to get if you like (although if everyone's logging in with the root account that's not helpful).

  5. cadrian says: March 16, 2010 at 5:55 pm Yup, you can export one of this

    env | grep SSH
    SSH_CLIENT=192.168.78.22 42387 22
    SSH_TTY=/dev/pts/0
    SSH_CONNECTION=192.168.78.22 42387 192.168.36.76 22

    As their bash history filename

    set |grep -i hist
    HISTCONTROL=ignoreboth
    HISTFILE=/home/cadrian/.bash_history
    HISTFILESIZE=1000000000
    HISTSIZE=10000000

    So in your profile you can do something like HISTFILE=/root/.bash_history_$(echo $SSH_CONNECTION | cut -d' ' -f1)

  6. TSI says: March 21, 2010 at 10:29 am bash 4 can syslog every command bat afaik, you have to recompile it (check file config-top.h). See the news file of bash: http://tiswww.case.edu/php/chet/bash/NEWS
    If you want to safely export history of your luser, you can ssl-syslog them to a central syslog server.
  7. Dinesh Jadhav says: November 12, 2010 at 11:00 am This is good command, It helps me a lot.
  8. Indie says: September 19, 2011 at 11:41 am You only need to use
    export HISTTIMEFORMAT='%F %T '

    in your .bash_profile

  9. lalit jain says: October 3, 2011 at 9:58 am -- show history with date & time

    # HISTTIMEFORMAT='%c '
    #history

  10. Sohail says: January 13, 2012 at 7:05 am Hi
    Nice trick but unfortunately, the commands which were executed in the past few days also are carrying the current day's (today's) timestamp.

    Please advice.

    Regards

    1. Raymond says: March 15, 2012 at 9:05 am Hi Sohail,

      Yes indeed that will be the behavior of the system since you have just enabled on that day the HISTTIMEFORMAT feature. In other words, the system recall or record the commands which were inputted prior enabling of this feature. Hope this answers your concern.

      Thanks!

      1. Raymond says: March 15, 2012 at 9:08 am Hi Sohail,

        Yes, that will be the behavior of the system since you have just enabled on that day the HISTTIMEFORMAT feature. In other words, the system can't recall or record the commands which were inputted prior enabling of this feature, thus it will just reflect on the printed output (upon execution of "history") the current day and time. Hope this answers your concern.

        Thanks!

  11. Sohail says: February 24, 2012 at 6:45 am Hi

    The command only lists the current date (Today) even for those commands which were executed on earlier days.

    Any solutions ?

    Regards

  12. nitiratna nikalje says: August 24, 2012 at 5:24 pm hi vivek.do u know any openings for freshers in linux field? I m doing rhce course from rajiv banergy. My samba,nfs-nis,dhcp,telnet,ftp,http,ssh,squid,cron,quota and system administration is over.iptables ,sendmail and dns is remaining.

    -9029917299(Nitiratna)

  13. JMathew says: August 26, 2012 at 10:51 pm Hi,

    Is there anyway to log username also along with the Command Which we typed

    Thanks in Advance

  14. suresh says: May 22, 2013 at 1:42 pm How can i get full comman along with data and path as we het in history command.
  15. rajesh says: December 6, 2013 at 5:56 am Thanks it worked..
  16. Krishan says: February 7, 2014 at 6:18 am The command is not working properly. It is displaying today's date and time for all the commands, whereas I ran some of the commands three days before.

    How come it is displaying today's date?

  17. PR says: April 29, 2014 at 5:18 pm Hi..

    I want to collect the history of a particular user every day and send it in an email. I wrote the script below.
    For collecting the history with timestamps, shall I edit the .profile file of that user?
    echo 'export HISTTIMEFORMAT="%d/%m/%y %T "' >> ~/.bash_profile
    Script:

    #!/bin/bash
    # This script sends the history of a particular user by email
    history > /tmp/history
    if [ -s /tmp/history ]
    then
        mailx -s "history 29042014" < /tmp/history   # note: mailx also needs a recipient address here
    fi
    rm /tmp/history
    # END OF THE SCRIPT
    

    Can anyone suggest a better way to collect a particular user's history every day?
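
    One possible improvement (a hedged sketch, not the poster's script): instead of calling history inside a non-interactive script, read the user's timestamped ~/.bash_history directly from cron and mail only yesterday's commands. The recipient address, the home path and the use of GNU awk's strftime are assumptions.

    #!/bin/bash
    # mail yesterday's commands from a timestamped ~/.bash_history
    # (HISTTIMEFORMAT must be set so bash writes "#<epoch>" lines into the file)
    user_home=/home/someuser              # placeholder
    recipient=admin@example.com           # placeholder
    yesterday=$(date -d yesterday +%F)

    awk -v day="$yesterday" '
        /^#[0-9]+$/ { ts = strftime("%F", substr($0, 2)); next }   # timestamp line
        ts == day   { print }                                      # command run yesterday
    ' "$user_home/.bash_history" | mailx -s "history $yesterday" "$recipient"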

  18. lefty.crupps says: October 24, 2014 at 7:11 pm Love it, but using the ISO date format is always recommended (YYYY-MM-DD), just as every other sorted group goes from largest sorting (year) to smallest sorting (day).
    https://en.wikipedia.org/wiki/ISO_8601#Calendar_dates

    In that case, mine looks like this:
    echo 'export HISTTIMEFORMAT="%Y-%m-%d %T "' >> ~/.bashrc

    Thanks for the tip!

  19. Vanathu says: October 30, 2014 at 1:01 am It shows only the current date for all the command history.
    1. lefty.crupps says: October 30, 2014 at 2:08 am it's marking all of your current history with today's date. Try checking again in a few days.
  20. tinu says: October 14, 2015 at 3:30 pm Hi All,

    I have enabled my history with the command given:
    echo 'export HISTTIMEFORMAT="%d/%m/%y %T "' >> ~/.bash_profile

    I need to know how I can also add the IPs from which the commands are fired at the system.
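
    A minimal sketch for ~/.bash_profile, with a caveat: the ${SSH_CONNECTION%% *} part is expanded when the variable is assigned, so every history line is stamped with the current session's client IP, not the IP the command was originally typed from.

    export HISTTIMEFORMAT="%d/%m/%y %T ${SSH_CONNECTION%% *} "

    For true per-origin tracking, the per-IP HISTFILE approach from comment 5 above is more reliable.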

[Oct 05, 2018] Micromanaging boss in IT required employees to work 80 hours without compensation

Notable quotes:
"... Then I am on call 24/7 and must be able to respond within 5 minutes. I end up working 60 hours a week but there's absolutely no overtime or banking of hours so I work for free for 20 hours. ..."
"... So I said that it wasn't enough and he said I will work 80hrs/week or I'm fired. Thursday the assistant manager was fired and So I quit. ..."
Oct 05, 2018 | www.reddit.com

munky9001 Application Security Specialist 18 points 19 points 20 points 5 years ago * (10 children)

My worst one was how I got my nickname "Turkish Hacker"

Basically I was hired as network admin for an isp. I maintained mail servers and what have you. Now the most critical server was this freebsd 5 sendmail, dns, apache server. We're talking it hadnt done patches in 7 years. Then there was a centos 4 server which hadnt done patching in 2 years. It has postfix, dns, apache.

Well I was basically planning to just nuke the freebsd server. Move everything off it and move to new sexy hardware but first I needed to put out all the fires so I had time to fix it all up. Well first big problem was the sheer slowness of the setup. We're talking an oc3 and a ds10 load balanced via dns round robin both saturated. 99.99% of it was spam. The server was basically an open relay to the world. However since we were already paying for some vms at rackspace to run dns servers. I'd setup dns bind9 infrastructure and openvpn between the servers. So basically I was going absolute best practices from the getgo on the new infrastructure. I migrated all dns entries to the new dns servers.

I then start fixing the anti-spam situation. I basically get everything to a situation where spam is being stopped successfully. However there was a back and forth and I'd enable something and fix something and they'd attack a new way. I'd eventually compiled and got everything up to date for the mail server and even the perl modules that spamassassin needed etc etc. Well these spammers didnt like that... so this turkish group defaces the server. hundreds of local businesses lose their website and there's no good backups.

Oh and it's defaced the day I quit without notice. Now I outright claim to be a hacker but whitehat. My boss naturally accuses me of being the hacker but he had nothing to stand on and the other people quitting the same day were basically backing me up. You might ask what I quit over?

Well imagine a boss who micromanaged absolutely everything but only works about 2 hours a week. Who then gets really mad and chews you out publicly by sending an email to the entire company chewing you out. He also made rules like, 'To take a break you must leave the building.' and then the rule right after that, 'no leaving the building'. Then staggered assigned 1 hour unpaid lunches so nobody could eat together but once you are done eating you must go back to work. AKA you work for free for 50 mins. Then I am on call 24/7 and must be able to respond within 5 minutes. I end up working 60 hours a week but there's absolutely no overtime or banking of hours so I work for free for 20 hours.

So basically the assistant manager of the place... the guy who basically kept the place together went to the boss and said ALL the rules have to go away by friday or I quit. Well Wednesday the boss actually comes into the office and everyone is wondering wtf he is even doing there. 1 major thing is that on that friday I was up for a raise by agreement and he comes to me asking if I could work 80hrs a week 'which is easy because he does it every week' so I brought up the increase in pay because if I was going to continue with 60 hours a week I would need a 50% raise and if he wants 80hrs/week it'll be a 100% raise. He offered me 10cents/hr($200/year) and he'll pay for my plane ticket to las vegas($200). So I said that it wasn't enough and he said I will work 80hrs/week or I'm fired. Thursday the assistant manager was fired and So I quit.

vocatus NSA/DOD/USAR/USAP/AEXP [ S ] 2 points 3 points 4 points 5 years ago (1 child)
Wow...just wow. That sounds like an outstanding corporate culture! /sarcasm

Glad you got out while the gettin' was good.

munky9001 Application Security Specialist 3 points 4 points 5 points 5 years ago (0 children)
It's kinda funny. There was an event where the day a meraki web filter firewall was deployed; the purpose of the thing was to block us from going to facebook and such. Well within the same day a bunch of people were caught on facebook. Boss sent out an email to the entire company naming each one saying they'd be fired if caught again. Except oh wait we're an isp and modems and phone lines everywhere. Those people just started using those or their phones.

Now the day the senior staff quit... the only people left were the easily replaceable people who had been threatened with the above. Not only are they dogfuckers, desperate for any job, and apparently stupid because if they walked out the same time we did the boss would have been fired but they stayed and they asked for 25cent raise each.

MrDoomBringer 1 point 2 points 3 points 5 years ago (1 child)
80hrs/week is illegal in most places and would have easily gotten you some compensation, did you pursue that or was it just not worth it after that?
munky9001 Application Security Specialist 2 points 3 points 4 points 5 years ago (0 children)
Actually, it's not illegal in most places for IT and other 'emergency services'; labour laws don't cover IT in such things.

http://www.labour.gov.on.ca/english/es/tools/srt/coverage_government_it.php

[Oct 05, 2018] sudo yum -y remove krb5 (this removes coreutils)

Oct 05, 2018 | www.reddit.com

2960G 2 points 3 points 4 points 5 years ago (1 child)
+1 for the "yum -y". Had the 'pleasure' of fixing a box one of my colleagues did "yum -y remove openssl". Through utter magic managed to recover it without reinstalling :-)
chriscowley DevOps 0 points 1 point 2 points 5 years ago (0 children)
Do I need to explain? I would probably have curled the RPMs from the repo, extracted them with rpm2cpio and cpio, and put the files into place manually (been there).
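
A hedged sketch of that manual recovery (the mirror URL and package version are placeholders, not real paths):

    # fetch the package for the removed tools from a mirror matching the box's release
    curl -o /tmp/coreutils.rpm http://mirror.example.com/centos/7/os/x86_64/Packages/coreutils-8.22-24.el7.x86_64.rpm
    # unpack it straight under / without invoking rpm or yum
    cd /
    rpm2cpio /tmp/coreutils.rpm | cpio -idmv
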
vocatus NSA/DOD/USAR/USAP/AEXP [ S ] 0 points 1 point 2 points 5 years ago (1 child)
That last one gave me the shivers.

[Oct 05, 2018] Trying to preserve the connection during a networking change while working on the core switch remotely backfired, as the sysadmin forgot to cancel the scheduled reload command after testing the change

Notable quotes:
"... "All monitoring for customer is showing down except the edge firewalls". ..."
"... as soon as they said it I knew I forgot to cancel the reload. ..."
"... That was a fun day.... What's worse is I was following a change plan, I just missed the "reload cancel". Stupid, stupid, stupid, stupid. ..."
Oct 05, 2018 | www.reddit.com

Making some network changes in a core switch, use 'reload in 5' as I wasn't 100% certain the changes wouldn't kill my remote connection.

Changes go in, everything stays up, no apparent issues. Save changes, log out.

"All monitoring for customer is showing down except the edge firewalls".

... as soon as they said it I knew I forgot to cancel the reload.

0xD6 5 years ago

This one hit pretty close to home having spent the last month at a small Service Provider with some serious redundancy issues. We're working through them one by one, but there is one outage in particular that was caused by the same situation... Only the scope was pretty "large".

Performed change, was distracted by phone call. Had an SMS notifying me of problems with a legacy border that I had just performed my changes on. See my PuTTY terminal and my blood starts to run cold. "Reload requested by 0xd6".

...Fuck I'm thinking, but everything should be back soon, not much I can do now.

However, not only did our primary transit terminate on this legacy device, our old non-HSRP L3 gateways and BGP nail down routes for one of our /20s and a /24... So, because of my forgotten reload I withdrew the majority of our network from all peers and the internet at large.

That was a fun day.... What's worse is I was following a change plan, I just missed the "reload cancel". Stupid, stupid, stupid, stupid.

[Oct 05, 2018] I learned a valuable lesson about pressing buttons without first fully understanding what they do.

Oct 05, 2018 | www.reddit.com

WorkOfOz (0 children)

This is actually one of my standard interview questions since I believe any sys admin that's worth a crap has made a mistake they'll never forget.

Here's mine, circa 2001. In response to a security audit, I had to track down which version of the Symantec Antivirus was running and what definition was installed on every machine in the company. I had been working through this for awhile and got a bit reckless.

There was a button in the console that read 'Virus Sweep'. Thinking it'd get the info from each machine and give me the details, I pressed it.. I was wrong..

Very Wrong. Instead it proceeded to initiate a virus scan on every machine including all of the servers.

Less than 5 minutes later, many of our older servers and most importantly our file servers froze. In the process, I took down a trade floor for about 45 minutes while we got things back up. I learned a valuable lesson about pressing buttons without first fully understanding what they do.

[Oct 05, 2018] A newbie turned production server off to replace a monitor

Oct 05, 2018 | www.reddit.com

just_call_in_sick 5 years ago (1 child)

A friend of the family was an IT guy and he gave me the usual high school unpaid intern job. My first day, he told me that a computer needed the monitor replaced. He gave me this 13" CRT and sent me on my way. I found the room (a wiring closet) with a tiny desk and a large desktop tower on it.

TURNED OFF THE COMPUTER and went about replacing the monitor. I think it took about 5 minutes for people to start wondering why they could no longer use the file server and couldn't save the files they had been working on all day.

It turns out that you don't have to turn off computers to replace the monitor.

[Oct 05, 2018] Sometimes one extra space makes a big difference

Oct 05, 2018 | cam.ac.uk

From: rheiger@renext.open.ch (Richard H. E. Eiger)
Organization: Olivetti (Schweiz) AG, Branch Office Berne

In article <1992Oct9.100444.27928@u.washington.edu> tzs@stein.u.washington.edu
(Tim Smith) writes:
> I was working on a line printer spooler, which lived in /etc. I wanted
> to remove it, and so issued the command "rm /etc/lpspl." There was only
> one problem. Out of habit, I typed "passwd" after "/etc/" and removed
> the password file. Oops.
>
[deleted to save space]
>
> --Tim Smith

Here's another story. Just imagine having the sendmail.cf file in /etc. Now, I
was working on the sendmail stuff and had come up with lots of sendmail.cf.xxx
which I wanted to get rid of so I typed "rm -f sendmail.cf. *". At first I was
surprised about how much time it took to remove some 10 files or so. Hitting
the interrupt key, when I finally saw what had happened, was way too late,
though.

Fortune has it that I'm a very lazy person. That's why I never bothered to just
back up directories with data that changes often. Therefore I managed to
restore /etc successfully before rebooting... :-) Happy end, after all. Of
course I had lost the only well working version of my sendmail.cf...

Richard
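
A safe way to see what the two commands would have matched is to let echo expand the globs first:

$ echo sendmail.cf.*     # only the sendmail.cf.xxx work files
$ echo sendmail.cf. *    # "sendmail.cf." plus EVERY file in the current directory

The stray space turns the intended pattern into two words, and the bare * matches everything in the directory.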

[Oct 05, 2018] Deleting files whose purpose you do not understand sometimes backfires by Anatoly Ivasyuk

Oct 05, 2018 | cam.ac.uk

Unix Admin. Horror Story Summary, version 1.0 by Anatoly Ivasyuk

From: philip@haas.berkeley.edu (Philip Enteles)
Organization: Haas School of Business, Berkeley

As a new system administrator of a Unix machine with limited space I thought I was doing myself a favor by keeping things neat and clean. One
day as I was 'cleaning up' I removed a file called 'bzero'.

Strange things started to happen, like vi not working; then the complaints started coming in. Mail didn't work. The compilers didn't work. About this time the REAL system administrator poked his head in and asked what I had done.

Further examination showed that bzero is the zeroed memory without which the OS had no operating space so anything using temporary memory was non-functional.

The repair? Well, things are tough to do when most of the utilities don't work. Eventually the REAL system administrator took the system to single user and rebuilt the system, including full restores from a tape system. The moral is: don't be too anal about things you don't understand.

Take the time to learn what those strange files are before removing them and screwing yourself.

Philip Enteles

[Oct 05, 2018] Danger of hidden symlinks

Oct 05, 2018 | cam.ac.uk

From: cjc@ulysses.att.com (Chris Calabrese)
Organization: AT&T Bell Labs, Murray Hill, NJ, USA

In article <7515@blue.cis.pitt.edu.UUCP> broadley@neurocog.lrdc.pitt.edu writes:
>On a old decstation 3100

I was deleting last semester's users to try to dig up some disk space; I also deleted some test users at the same time.

One user took longer than usual, so I hit control-c and tried ls. "ls: command not found"

Turns out that the test user had / as the home directory and the remove user script in Ultrix just happily blew away the whole disk.


[Oct 05, 2018] Hidden symlinks and recursive deletion of the directories

Notable quotes:
"... Fucking asshole ex-sysadmin taught me a good lesson about checking for symlink bombs. ..."
Oct 05, 2018 | www.reddit.com

mavantix Jack of All Trades, Master of Some; 5 years ago (4 children)

I was cleaning up old temp folders of junk on Windows 2003 server, and C:\temp was full of shit. Most of it junk. Rooted deep in the junk, some asshole admin had apparently symlink'd sysvol to a folder in there. Deleting wiped sysvol.

There were no usable backups; well, there were, but ArcServe was screwed by lack of maintenance.

Spent days rebuilding policies.

Fucking asshole ex-sysadmin taught me a good lesson about checking for symlink bombs.

...and no I didn't tell this story to teach any of your little princesses to do the same when you leave your company.

[Oct 05, 2018] Automatically putting a slash in front of directories with system names (bin, etc, usr, var) that are etched in sysadmin memory

This is why you should never type an rm command directly on the command line. Type it in an editor first.
Oct 05, 2018 | www.reddit.com

aultl Senior DevOps Engineer

rm -rf /var

I was trying to delete /var/named/var

nekoeth0 Linux Admin, 5 years ago
Haha, that happened to me too. I had to use a live distro, chroot, copy, what not. It was fun!
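
A defensive pattern for cases like this (a sketch, not from the thread): put the target in a variable, inspect it, and only then remove it, so a stray leading slash can't silently widen the scope.

    target=/var/named/var
    ls -ld -- "$target"     # eyeball what you are about to remove
    rm -rf -- "$target"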

[Oct 05, 2018] I corrupted a 400TB data warehouse.

Oct 05, 2018 | www.reddit.com

I corrupted a 400TB data warehouse.

Took 6 days to restore from tape.

mcowger VCDX | DevOps Guy 8 points 9 points 10 points 5 years ago (0 children)

Meh - happened a long time ago.

Had a big Solaris box (E6900) running Oracle 10 for the DW. Was going to add some new LUNs to the box and also change some of the fiber pathing to go through a new set of faster switches. Had the MDS changes prebuilt, confirmed in with another admin, through change control, etc.

Did fabric A, which went through fine, and then did fabric B without pausing or checking that the new paths came up on side A before I knocked over side B (in violation of my own approved plan). For the briefest of instants, there were no paths to the devices and Oracle was configured in full async write mode :(. Instant corruption of the tables that were active. Tried to do use archivelogs to bring it back, but no dice (and this is before Flashbacks, etc). So we were hosed.

Had to have my DBA babysit the RMAN restore for the entire weekend :(. 1GBe links to backup infrastructure.

RCA resulted in MANY MANY changes to the design of that system, and me just barely keeping my job.

invisibo DevOps 2 points 3 points 4 points 5 years ago (0 children)
You just made me say "holy shit!" out loud. You win.
FooHentai 2 points 3 points 4 points 5 years ago (0 children)
Ouch.

I dropped a 500Gb RAID set. There were 2 identical servers in the rack right next to each other. Both OpenFiler, both unlabeled. Didn't know about the other one and was told to 'wipe the OpenFiler'. Got a call half an hour later from a team wondering where all their test VMs had gone.

vocatus NSA/DOD/USAR/USAP/AEXP [ S ] 1 point 2 points 3 points 5 years ago (0 children)
I have to hear the story.

[Oct 02, 2018] Sysadmins about their jobs

Oct 02, 2018 | www.reddit.com


therankin 5 months ago (0 children)

You really stuck it out.

Sometimes things where I work feel like that. I'm the only admin at a school with 120 employees and 450 students. It's both challenging, fun, and can be frustrating. The team I work with is very nice and supportive and on those rare occasions I need help we have resources to get it (Juniper contract, a consulting firm helped me build out a redundant cluster and a SAN, etc).

I can see that if the environment wasn't so great I could feel similar to the way you did.

I'm glad you got out, and I'm glad you've been able to tell us that it can get better.

The truth is I actually like going to work; I think if you're in a position where you dread work you should probably work on getting out of there. Good for you though sticking it out so long. You learned a lot and learned how to juggle things under pressure and now you have a better gig partially because of it (I'm guessing).

baphometsayshi 6 months ago (0 children)

-the problem is you. Your expectations are too high -no job is perfect. Be happy you have one and can support your family.

fuck those people with a motorized rake. if they were in your position they would have been screaming for relief.

Dontinquire 6 months ago (0 children)
If I may be so bold... Fuck them, good for you. Get what you're worth, not what you're paid.
jedikaiti 6 months ago (0 children)
Your friends are nuts, they should have been helping you jump ship ASAP.
headsupvoip 6 months ago (1 child)
WoW, just WOW. Glad you got out. Sounds like this ISP does not understand that the entire revenue stream is based on aging hardware. If they are not willing to keep it updated, which means keeping the staff at full strength and old equipment replaced it will all come to a halt. Your old boss sounds like she is a prime example of the Peter principle.

We do VoIP and I researched the Calix product line for ISP's after reading your post. Always wondered how they still supported legacy TDM. Like the old Hank Williams song with a twist, "Us geeks will Survive". Cheers

djgizmo 6 months ago (0 children)
TDM is still a big thing since the old switches used to cost 400k plus. Now that metaswitch is less than 200k, it's less of a thing, but tdm for transport is rock solid and you can buy a ds3 from one co or headend to another for very cheap to transport lines compared to the equivalent in data.
AJGrayTay 6 months ago (0 children)

I talk to friends. Smart people I look up to and trust. The answer?

-the problem is you. Your expectations are too high

-no job is perfect. Be happy you have one and can support your family.

Oh man, this... I feel your pain. Only you know your situation, man (or woman). Trust your judgement, isolate emotional responses and appraise objectively. Rarely, if ever, does anyone know better than you do.

DAVPX 6 months ago (0 children)
Well done. I went through something like this recently and am also feeling much better working remotely.
renegadecanuck 6 months ago (0 children)
It's true that no job is perfect, but there's a difference between "this one person in the office is kind of grumpy and annoys me", "I have one coworker that's lazy and management doesn't see it", or "we have to support this really old computer, because replacing the equipment that uses it is way too expensive" and "I get treated like shit every day, my working hours are too long, and they won't get anyone to help me".
ralph8877 6 months ago * (0 children)

He puts that responsibility on me. Says if I can get someone hired I'd get a $500 bonus.

but my boss didn't bite.

Seems like your boss lied to you to keep you from quitting.

fricfree 6 months ago (1 child)
I'm curious. How many hours were you working in this role?
chrisanzalone007 6 months ago (0 children)
First I'd like to say. I have 4 kids and a wife. It was discussed during the interview that I work to live. And expect to work 40 hours unless it's an emergency.

For the first 6 months or so I'd work on weekdays after hours on simple improvements. This then stopped and shifted to working on larger projects (via home labs) on the weekends (entire weekends lost) just to feel prepared to administer Linux kvm, or our mpls Network (that I had zero prior experience on)

This started affecting my home life. I stopped working at home and discussed it with my wife. Together we decided that if the company was willing to pay significantly more ($20k a year), I would invest the needed time after hours into the company.

I brought this to my boss and nothing happened. I got a 4% raise. This is when I capped the time I spent at home on NEEDED things and only focused on what I wanted out of IT (network engineering) and started digging into GNS3 a bit more

-RYknow 6 months ago (1 child)
Disappointed there was no mention of the friend who supported your decision, and kept telling you to get out!! Haha! Just playing buddy! Glad you got out...and that place can fuck themselves!!

Really happy for you man! You deserve to be treated better!

chrisanzalone007 6 months ago (0 children)
Lol! Looking back I actually feel bad that you had to listen to my problems! Definitely appreciated - you probably prevented a few explosions lol
mailboy79 6 months ago (1 child)
OP, thank you for your post.

I'm unemployed right now and the interviews have temporarily slowed down, but I'm determined not to take "just anything" to get by, as doing so can adversely affect my future prospects.

thinkyrsoillustrious 6 months ago (0 children)

MSP

I've been reading this whole thread and can relate, as I'm in a similar position. But I wanted to comment, based on experience, that you are correct that taking 'just anything' will adversely affect your future prospects. After moving to Central NY I was unemployed for a while and took a helpdesk position (I was a Senior Systems Analyst/Microsoft Systems Administrator making good money 18 years ago!) That stuck with me as they see it on my resume...and I have only been offered entry-level salaries ever since. That, and current management in a horrible company, ruined my career!! So be careful...

illkrumpwithyou 6 months ago (0 children)
A job is a means to an end, my friend. If it was making you miserable then you absolutely did the right thing in leaving.
1nfestissumam 6 months ago (0 children)
Congrats, but a 5 month gap on the resume won't fly for 99.9% of prospective employers. There better be a damn good reason other than complaining about burnout at the lined-up interview. I resigned without another opportunity lined up and learned a 9 month lesson the hard way. Never jump ship without anything lined up. If not for you, then for your family.
mistralol 6 months ago (0 children)
It works like this. Get paid as much a possible for as little effort as possible.

And companies want as much as possible for as little money as possible.

Sinister29389 6 months ago (0 children)
SA's typically wear many hats at small or large organizations. Shoot, we even got a call from an employee asking IT if they can jump start their car 😫.

It's a thankless job and resources are always lacking. The typical thinking is, IT does not make money for the company, they spend it. This always put the dept. on the back burner until something breaks of course.

qsub 6 months ago (0 children)
I've bee through 7 jobs in the last 12 years, I'm surprised people still hire me lol. I'm pretty comfortable where I am now though, I also have a family.
Sinsilenc 6 months ago (0 children)
These companies that think managed printer contracts are expensive are nuts. They are about even with toner replacement costs. We buy our own Xerox machines and have our local Xerox rep manage them for us. Whoever had that harebrained idea needs slapping.
OFDarkness 6 months ago (0 children)
I had 12 years of what you had, OP. Same job, same shit you described on a daily basis... I can tell you right now it's not you or your expectations too high... some companies are just sick from the top to the bottom and ISP's... well they are magnets both for ppl who work too hard and for ppl who love to exploit the ones who work too hard in order to get their bonuses... After realising that no matter how hard I worked, the ppl in charge would never change anything to make work better both for workers, company and Customers... I quit my job, spent a year living off my savings... then started my own business and never looked back. You truly deserve better!
Grimzkunk 6 months ago (4 children)
Just for me curiosity, do you have family? Young kids?
chrisanzalone007 6 months ago (3 children)
4 kids. 11 and under
Grimzkunk 6 months ago (2 children)
Wow! Congrats man, 4 kids is a lot of energy. How many hours are you working each weeks?

I only have one little dude for the moment and having to drive him to and from the daycare each day prevent me from doing anymore than 40hr/week.

chrisanzalone007 6 months ago (1 child)
Very much tiring! Lol. I'm at 40 hours a week unless we have an outage, or compromised server(or want to learn something on my own time). I'm up front at my interviews that "i work to live" and family is the most important thing to me.

I do spend quite a bit of personal time playing. With Linux KVM, (Linux in general since I'm a windows guy) difference between LVM block storage with different file systems, no lvm with image files. Backups (lvm snapshots) - front end management - which ultimately evolved to using proxmox. Since I manage iptv I read books about the mpeg stream to understand the pieces of it better.

I was actually hired as a "network engineer" (even though I'm more of a systems guy) so I actually WANT more network experience. Bgp, ospf, mpls, qos in mpls, multicast, pim sparse mode.

So I've made a GNS3 lab with our ALU hardware in attempts to build our network to get some hands on experience.

One day while trying to enable router guard (multicast related) - I broke the hell out of multicast by enabling it on the incoming vlan for our largest service area. Felt like an idiot so I stopped touching production until I could learn it (and still not there!)

Grimzkunk 6 months ago (0 children)
Good to hear the "I work to live". I'm also applying the same, family first, but it can sadly be hard for some to understand :-(

I feel like employers should have different expectations of employees who have families vs. those who don't. And even then, I would completely understand a single person without a family who wants to enjoy life as much as he can and doesn't want to work more than what is in his contract.

I really don't know how it works in employers mind.

chrisanzalone007 6 months ago (0 children)
The work load is high, but I've learned to deal with it. I ordered 7 servers about 9 months ago and they were just deployed last month. Prioritizing is life!

What pushed me over the ledge was. Me spending my time trying to improve solutions to save time, and being told not gonna happen. The light in the tunnel gets further and further away.

_ChangeOfPace 5 months ago (1 child)
Were you working for an ISP in the SC/GA area? This sounds exactly like our primary ISP.
chrisanzalone007 5 months ago (0 children)
That's crazy! But no

[Oct 02, 2018] Rookie almost wipes customer's entire inventory unbeknownst to sysadmin

Notable quotes:
"... At that moment, everything from / and onward began deleting forcefully and Reginald described his subsequent actions as being akin to "flying flat like a dart in the air, arms stretched out, pointy finger fully extended" towards the power switch on the mini computer. ..."
Oct 02, 2018 | theregister.co.uk

"I was going to type rm -rf /*.old* – which would have forcibly removed all /dgux.old stuff, including any sub-directories I may have created with that name," he said.

But – as regular readers will no doubt have guessed – he didn't.

"I fat fingered and typed rm -rf /* – and then I accidentally hit enter instead of the "." key."

At that moment, everything from / and onward began deleting forcefully and Reginald described his subsequent actions as being akin to "flying flat like a dart in the air, arms stretched out, pointy finger fully extended" towards the power switch on the mini computer.

"Everything got quiet."

Reginald tried to boot up the system, but it wouldn't. So instead he booted up off a tape drive to run the mini Unix installer and mounted the boot "/" file system as if he were upgrading – and then checked out the damage.

"Everything down to /dev was deleted, but I was so relieved I hadn't deleted the customer's database and only system files."

Reginald did what all the best accident-prone people do – kept the cock-up to himself, hoped no one would notice and started covering his tracks, by recreating all the system files.

Over the next three hours, he "painstakingly recreated the entire device tree by hand", at which point he could boot the machine properly – "and even the application worked out".

Jubilant at having managed the task, Reginald tried to keep a lid on the heart that was no doubt in his throat by this point and closed off his work, said goodbye to the sysadmin and went home to calm down. Luckily no one was any the wiser.

"If the admins read this message, this would be the first time they hear about it," he said.

"At the time they didn't come in to check what I was doing, and the system was inaccessible to the users due to planned maintenance anyway."

Did you feel the urge to confess to errors no one else at your work knew about? Do you know someone who kept something under their hat for years? Spill the beans to Who, Me? by emailing us here. ®

Re: If rm -rf /* doesn't delete anything valuable

Eh? As I read it, Reginald kicked off the rm -rf /*, then hit the power switch before it deleted too much. The tape rescue revealed that "everything down to /dev" had been deleted, i.e. everything in / beginning with a, b, c and some of d. On a modern system that might include /boot and /bin, but evidently it was not a total disaster on Reg's server.


Anonymous Coward

title="Inappropriate post? Report it to our moderators" type="submit" value="Report abuse"> I remember discovering the hard way that when you delete an email account in Thunderbird and it asks if you want to delete all the files associated with it it actually means do you want to delete the entire directory tree below where the account is stored .... so, as I discovered, saying "yes" when the reason you are deleting the account is because you'd just created it in the wrong place in the the directory tree is not a good idea - instead of just deleting the new account I nuked all the data associated with all our family email accounts!

bpfh
Re: .cobol

"Delete is right above Rename in the bloody menu"

Probably designed by the same person who designed the crontab app then, with the command line options -e to edit and -r to remove immediately without confirmation. Mistype at your peril...

I found this out - to my peril - about 3 seconds before I realised that it was a good idea for a server's crontab to include a daily executed crontab -l > /foo/bar/crontab-backup.txt ...

Jason Bloomberg
Re: .cobol

I went to delete the original files, but I only got as far as "del *.COB" before hitting return.

I managed a similar thing but more deliberately; belatedly finding "DEL FOOBAR.???" included files with no extensions when it didn't on a previous version (Win3.1?).

That wasn't the disaster it could have been but I've had my share of all-nighters making it look like I hadn't accidentally scrubbed a system clean.

Down not across
Re: .cobol

Probably designed by the same person who designed the crontab app then, with the command line options -e to edit and -r to remove immediately without confirmation. Mistype at your peril...

Using crontab -e is asking for trouble even without mistypes. I've seen too many corrupted or truncated crontabs after someone has edited them with crontab -e. crontab -l > crontab.txt; vi crontab.txt; crontab crontab.txt is a much better way.

You mean not everyone has crontab entry that backs up crontab at least daily?
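
For reference, a sketch of such a daily crontab backup (the path is illustrative and the directory must already exist; % has to be escaped inside crontab entries):

0 3 * * * crontab -l > "$HOME/crontab-backups/crontab-$(date +\%F).txt" 2>/dev/null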

MrBanana
Re: .cobol

"WAH! I copied the .COBOL back to .COB and started over again. As I knew what I wanted to do this time, it only took about a day to re-do what I had deleted."

When this has happened to me, I end up with better code than I had before. Re-doing the work gives you a better perspective. Even if functionally no different it will be cleaner, well commented, and laid out more consistently. I sometimes now do it deliberately (although just saving the first new version, not deleting it) to clean up the code.

big_D
Re: .cobol

I totally agree, the resultant code was better than what I had previously written, because some of the mistakes and assumptions I'd made the first time round and worked around didn't make it into the new code.

Woza
Reminds me of the classic

https://www.ee.ryerson.ca/~elf/hack/recovery.html

Anonymous South African Coward
Re: Reminds me of the classic

https://www.ee.ryerson.ca/~elf/hack/recovery.html

Was about to post the same. It is a legendary classic by now.

Chairman of the Bored
One simple trick...

...depending on your shell and its configuration a zero size file in each directory you care about called '-i' will force the rampaging recursive rm, mv, or whatever back into interactive mode. By and large it won't defend you against mistakes in a script, but its definitely saved me from myself when running an interactive shell.

It's proven useful enough to earn its own cronjob that runs once a week and features a 'find -type d' and touch '-i' combo on systems I like.

Glad the OP's mad dive for the power switch saved him, I wasn't so speedy once. Total bustification. Hence this one simple trick...

Now if I could ever fdisk the right f$cking disk, I'd be set!
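
A sketch of that weekly cron job (the top-level paths are illustrative): create an empty file literally named -i in every directory you care about, so a runaway rm -rf * picks it up as an option and drops back into interactive mode.

for top in /home /etc /srv; do
    find "$top" -type d -exec touch {}/-i \;
done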

PickledAardvark
Re: One simple trick...

"Can't you enter a command to abort the wipe?"

Maybe. But you still have to work out what got deleted.

On the first Unix system I used, an admin configured the rm command with a system alias so that rm required a confirmation. Annoying after a while but handy when learning.

When you are reconfiguring a system, delete/rm is not the only option. Move/mv protects you from your errors. If the OS has no move/mv, then copy, verify before delete.

Doctor Syntax
Re: One simple trick...

"Move/mv protects you from your errors."

Not entirely. I had a similar experience with mv. I was left with a running shell, so I could cd through the remains of the file system and list files with echo *, but not repair it.

Although we had the CDs (SCO) to reboot the system required a specific driver which wasn't included on the CDs and hadn't been provided by the vendor. It took most of a day before they emailed the correct driver to put on a floppy before I could reboot. After that it only took a few minutes to put everything back in place.

Chairman of the Bored
Re: One simple trick...

@Chris Evans,

Yes there are a number of things you can do. Just like Windows a quick ctrl-C will abort a rm operation taking place in an interactive shell. Destroying the window in which the interactive shell running rm is running will work, too (alt-f4 in most window managers or 'x' out of the window)

If you know the process id of the rm process you can 'kill $pid' or do a 'killall -KILL rm'

Couple of problems:

(1) law of maximum perversity says that the most important bits will be destroyed first in any accident sequence

(2) by the time you realize the mistake there is no time to kill rm before law 1 is satisfied

The OP's mad dive for the power button is probably the very best move... provided you are right there at the console. And provided the big red switch is actually connected to anything

Colin Bull 1
cp can also be dangerous

After several years working in a DOS environment I got a job as project Manager / Sys admin on a Unix based customer site for a six month stint. On my second day I wanted to use a test system to learn the software more, so decided to copy the live order files to the test system.

Unfortunately I forgot the trailing full stop as it was not needed in DOS - so the live order index file overwrote the live data file. And the company only took orders for next day delivery, so it wiped all current orders.

Luckily it printed a sales acknowledgement every time an order was placed, so I escaped death and learned never to miss the second parameter of the cp command.
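
To illustrate the mistake with hypothetical file names: in DOS the destination defaults to the current directory, but with Unix cp the last argument is always the destination.

cp ORDERS.IDX ORDERS.DAT .    # what was intended: copy both files into "."
cp ORDERS.IDX ORDERS.DAT      # what actually ran: the index file overwrites the data file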

Anonymous Coward

title="Inappropriate post? Report it to our moderators" type="submit" value="Report abuse"> i'd written a script to deploy the latest changes to the live environment. worked great. except one day i'd entered a typo and it was now deploying the same files to the remote directory, over and again.

it did that for 2 whole years with around 7 code releases. not a single person realised the production system was running the same code after each release with no change in functionality. all the customer cared about was 'was the site up?'

not a single person realised. not the developers. not the support staff. not me. not the testers. not the customer. just made you think... wtf had we been doing for 2 years???

Yet Another Anonymous coward

Look on the bright side, any bugs your team had introduced in those 2 years had been blocked by your intrinsically secure script
Prst. V.Jeltz
not a single person realised. not the developers. not the support staff. not me. not the testers. not the customer. just made you think... wtf had we been doing for 2 years???

That is Classic! not surprised about the AC!

Bet some of the beancounters were less than impressed , probly on customer side :)

Anonymous Coward

title="Inappropriate post? Report it to our moderators" type="submit" value="Report abuse"> Re: ...then there's backup stories...

Many years ago (pre internet times) a client phones at 5:30 Friday afternoon. It was the IT guy wanting to run through the steps involved in recovering from a backup. Their US headquarters had a hard disk fail on their accounting system. He was talking the Financial Controller through a recovery and while he knew his stuff he just wanted to double check everything.

8pm the same night the phone rang again - how soon could I fly to the states? Only one of the backup tapes was good. The financial controller had put the sole remaining good backup tape in the drive, then popped out to get a bite to eat at 7pm because it was going to be a late night. At 7:30pm the scheduled backup process copied the corrupted database over the only remaining backup.

Saturday was spent on the phone trying to talk them through everything I could think of.

Sunday afternoon I was sitting in a private jet winging its way to their US HQ. Three days of very hard work later we'd managed to recreate the accounting database from pieces of corrupted databases and log files. Another private jet ride home - this time the pilot was kind enough to tell me there was a cooler full of beer behind my seat.

Olivier2553
Re: Welcome to the club!

"Lesson learned: NEVER decide to "clean up some old files" at 4:30 on a Friday afternoon. You WILL look for shortcuts and it WILL bite you on the ass."

Do not do anything of significance on a Friday. At all. Any major change, big operation, etc. must be made by Thursday at the latest, so in case of a cock-up you have the Friday (plus the weekend days) to repair it.

JQW
I once wiped a large portion of a hard drive after using find with exec rm -rf {} - due to not taking into account the fact that some directories on the system had spaces in them.
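
Two common defensive patterns for file names containing spaces (a sketch; the path and pattern are illustrative):

find /some/path -name '*.old' -type f -exec rm -f -- {} +
find /some/path -name '*.old' -type f -print0 | xargs -0 rm -f --
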
Will Godfrey
Defensive typing

I've long been in the habit of entering dangerous commands partially in reverse, so in the case of the OP's one I'd have done:

' -rf /*.old* '

then gone back to the start of the line and entered the ' rm ' bit.

sisk
A couple months ago on my home computer (which has several Linux distros installed and which all share a common /home because I apparently like to make life difficult for myself - and yes, that's as close to a logical reason I have for having multiple distros installed on one machine) I was going to get rid of one of the extraneous Linux installs and use the space to expand the root partition for one of the other distros. I realized I'd typed /dev/sdc2 instead of /dev/sdc3 at the same time that I verified that, yes, I wanted to delete the partition. And sdc2 is where the above mentioned shared /home lives. Doh.

Fortunately I have a good file server and a cron job running rsync every night, so I didn't actually lose any data, but I think my heart stopped for a few seconds before I realized that.
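
A sketch of such a nightly rsync safety net (the host name and paths are illustrative):

0 2 * * * rsync -a --delete /home/ backupserver:/backups/home/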

Kevin Fairhurst
Came in to work one Monday to find that the Unix system was borked... on investigation it appeared that a large number of files & folders had gone missing, probably by someone doing an incorrect rm.

Our systems were shared with our US office who supported the UK outside of our core hours (we were in from 7am to ensure trading was ready for 8am, they were available to field staff until 10pm UK time) so we suspected it was one of our US counterparts who had done it, but had no way to prove it.

Rather than try and fix anything, they'd gone through and deleted all logs and history entries so we could never find the evidence we needed!

Restoring the system from a recent backup brought everything back online again, as one would expect!

DavidRa
Sure they did, but the universe invented better idiots

Of course. However, the incompletely-experienced often choose to force bypass that configuration. For example, a lot of systems aliased rm to "rm -i" by default, which would force interactive confirmations. People would then say "UGH, I hate having to do this" and add their own customisations to their shells/profiles etc:

unalias rm

alias rm='rm -f'

Lo and behold, now no silly confirmations, regardless of stupidity/typos/etc.

[Oct 02, 2018] Quiet log noise with Python and machine learning Opensource.com

Notable quotes:
"... Tristan Cacqueray will present Reduce your log noise using machine learning at the OpenStack Summit , November 13-15 in Berlin. ..."
Oct 02, 2018 | opensource.com

Quiet log noise with Python and machine learning

Logreduce saves debugging time by picking out anomalies from mountains of log data.

28 Sep 2018 | Tristan de Cacqueray (Red Hat)

The Logreduce machine learning model is trained using previous successful job runs to extract anomalies from failed runs' logs.

This principle can also be applied to other use cases, for example, extracting anomalies from Journald or other systemwide regular log files.

Using machine learning to reduce noise

A typical log file contains many nominal events ("baselines") along with a few exceptions that are relevant to the developer. Baselines may contain random elements such as timestamps or unique identifiers that are difficult to detect and remove. To remove the baseline events, we can use a k-nearest neighbors pattern recognition algorithm (k-NN).

(Figure: training/testing workflow diagram)

Log events must be converted to numeric values for k-NN regression. Using the generic feature extraction tool HashingVectorizer enables the process to be applied to any type of log. It hashes each word and encodes each event in a sparse matrix. To further reduce the search space, tokenization removes known random words, such as dates or IP addresses.

(Figure: feature extraction diagram)

Once the model is trained, the k-NN search tells us the distance of each new event from the baseline.

(Figure: k-NN diagram)

This Jupyter notebook demonstrates the process and graphs the sparse matrix vectors.

(Figure: Jupyter notebook)

Introducing Logreduce

The Logreduce Python software transparently implements this process. Logreduce's initial goal was to assist with Zuul CI job failure analyses using the build database, and it is now integrated into the Software Factory development forge's job logs process.

At its simplest, Logreduce compares files or directories and removes lines that are similar. Logreduce builds a model for each source file and outputs any of the target's lines whose distances are above a defined threshold by using the following syntax: distance | filename:line-number: line-content .

$ logreduce diff /var/log/audit/audit.log.1 /var/log/audit/audit.log
INFO  logreduce.Classifier - Training took 21.982s at 0.364MB/s (1.314kl/s) (8.000 MB - 28.884 kilo-lines)
0.244 | audit.log:19963: type=USER_AUTH acct="root" exe="/usr/bin/su" hostname=managesf.sftests.com
INFO  logreduce.Classifier - Testing took 18.297s at 0.306MB/s (1.094kl/s) (5.607 MB - 20.015 kilo-lines)
99.99% reduction (from 20015 lines to 1)

A more advanced Logreduce use can train a model offline to be reused. Many variants of the baselines can be used to fit the k-NN search tree.

$ logreduce dir-train audit.clf /var/log/audit/audit.log.*
INFO  logreduce.Classifier - Training took 80.883s at 0.396MB/s (1.397kl/s) (32.001 MB - 112.977 kilo-lines)
DEBUG logreduce.Classifier - audit.clf: written
$ logreduce dir-run audit.clf /var/log/audit/audit.log

Logreduce also implements interfaces to discover baselines for Journald time ranges (days/weeks/months) and Zuul CI job build histories. It can also generate HTML reports that group anomalies found in multiple files in a simple interface.

(Figure: Logreduce HTML output)

Managing baselines

The key to using k-NN regression for anomaly detection is to have a database of known good baselines, which the model uses to detect lines that deviate too far. This method relies on the baselines containing all nominal events, as anything that isn't found in the baseline will be reported as anomalous.

CI jobs are great targets for k-NN regression because the job outputs are often deterministic and previous runs can be automatically used as baselines. Logreduce features Zuul job roles that can be used as part of a failed job post task in order to issue a concise report (instead of the full job's logs). This principle can be applied to other cases, as long as baselines can be constructed in advance. For example, a nominal system's SoS report can be used to find issues in a defective deployment.

(Figure: Baselines management)

Anomaly classification service

The next version of Logreduce introduces a server mode to offload log processing to an external service where reports can be further analyzed. It also supports importing existing reports and requests to analyze a Zuul build. The services run analyses asynchronously and feature a web interface to adjust scores and remove false positives.

(Figure: Logreduce classification interface)

Reviewed reports can be archived as a standalone dataset with the target log files and the scores for anomalous lines recorded in a flat JSON file.

Project roadmap

Logreduce is already being used effectively, but there are many opportunities for improving the tool, and further improvements are planned.

If you are interested in getting involved in this project, please contact us on the #log-classify Freenode IRC channel. Feedback is always appreciated!


Tristan Cacqueray will present Reduce your log noise using machine learning at the OpenStack Summit, November 13-15 in Berlin.

About the author: Tristan de Cacqueray - OpenStack Vulnerability Management Team (VMT) member working at Red Hat.

[Oct 02, 2018] whohas software package and repository meta-search

Oct 02, 2018 | www.philippwesche.org

whohas 0.29

by Philipp L. Wesche


Description

whohas is a command line tool that allows querying several package lists at once - currently supported are Arch, Debian, Fedora, Gentoo, Mandriva, openSUSE, Slackware (and linuxpackages.net), Source Mage, Ubuntu, FreeBSD, NetBSD, OpenBSD, Fink, MacPorts and Cygwin. whohas is written in Perl and was designed to help package maintainers find ebuilds, pkgbuilds and similar package definitions from other distributions to learn from. However, it can also be used by normal users who want to know:

News

The 0.29 branch is now being maintained on github .

Tutorial

It is suggested you use Unix command line tools to enhance your search results. whohas is optimised for fast execution. This is done by threading, and the order of results cannot be guaranteed. To nonetheless get a standardised output, alphabetically sorted by distribution, use the sort tool:

whohas gimp | sort

You can use grep to improve your search results. Depending on whether you want only packages whose names begin with your search term, end with your search term, or exactly match, you would use a space before, after or on both sides of your search term, respectively:

whohas gimp | sort | grep " gimp"

whohas vim | sort | grep "vim "

whohas gimp | sort | grep " gimp "

The spaces will ensure that only results for the package gimp are displayed, not for gimp-print etc.

If you want results for a particular distribution only, do

whohas arch | grep "^Arch"

Output for each module will still be ordered, so you don't need to sort results in this case, although you may wish to do so for some distributions. Distribution names are abbreviated as "Arch", "Debian", "Fedora", "Gentoo", "Mandriva", "openSUSE", "Slackware", "Source Mage", "FreeBSD", "NetBSD", "OpenBSD", "Fink", "MacPorts" and "Cygwin".

Output in version 0.1 looked like this . The first column is the name of the distribution, the second the name of the package, the third the version number, then the date, repository name and a url linking to more information about the package. Future versions will have package size information, too. Column lengths are fixed, so you can use cut:

whohas vim | grep " vim " | cut -b 38-47

The first bytes of the data fields at the time of writing are 0, 11, 49, 67, 71, 81 and 106.

Here is an example of whohas 0.1 in a terminal session, showing how it works with grep and cut.

Features and limitations

Debian refers to the binary distribution. Slackware queries Current only - Slackware package search is currently offline and undergoing redesign, therefore I disabled the module until I know more. Binary sizes for Fedora are package sizes - space needed on disk will be greater by about factor 2. Binary sizes for Debian are unpacked sizes. All details (including availability, version numbers and binary sizes) are for the x86 architecture.

Debian version numbers in rare cases may not be for x86 (will be fixed). Gentoo version availability may not be for x86 (will be fixed). I recommend you consult the URLs provided in the output, which give detailed and accurate information about each package.

You may want to use a terminal that recognises hyperlinks and allows easy access through the browser, such as gnome-terminal.

For Fedora, only release 12 is enabled by default, and only the most up to date package will be listed if different versions are available.

For openSUSE, repository designations are abbreviated for screen space reasons: the tilde symbol, ~, replaces "home", and any trailing string that simply points to the current release is truncated. Nonetheless, some of openSUSE's repository paths remain too long to be shown in full.

Changelog

Link

Dependencies

Currently, the local repositories created in the user's home directory take up 1.0 megabytes.

[Sep 27, 2018] bash - Conflict between `pushd .` and `cd -` - Unix Linux Stack Exchange

Sep 27, 2018 | unix.stackexchange.com

Conflict between `pushd .` and `cd -`


Bernhard ,Feb 21, 2012 at 12:07

I am a happy user of the cd - command to go to the previous directory. At the same time I like pushd . and popd .

However, when I want to remember the current working directory by means of pushd . , I lose the possibility to go to the previous directory by cd - . (As pushd . also performs cd . ).

How can I use pushd and still be able to use cd - ?

By the way: GNU bash, version 4.1.7(1)

Patrick ,Feb 21, 2012 at 12:39

Why not use pwd to figure out where you are? – Patrick Feb 21 '12 at 12:39

Bernhard ,Feb 21, 2012 at 12:46

I don't understand your question? The point is that pushd breaks the behavior of cd - that I want (or expect). I know perfectly well in which directory I am, but I want to increase the speed with which I change directories :) – Bernhard Feb 21 '12 at 12:46

jofel ,Feb 21, 2012 at 14:39

Do you know zsh ? It has really nice features like AUTO_PUSHD. – jofel Feb 21 '12 at 14:39

Theodore R. Smith ,Feb 21, 2012 at 16:26

+1 Thank you for teaching me about cd -! For most of a decade, I've been doing $ cd $OLDPWD instead. – Theodore R. Smith Feb 21 '12 at 16:26

Patrick ,Feb 22, 2012 at 1:58

@bernhard Oh, I misunderstood what you were asking. You were wanting to know how to store the current working directory. I was interpreting it as you wanted to remember (as in you forgot) your current working directory. – Patrick Feb 22 '12 at 1:58

Wojtek Rzepala ,Feb 21, 2012 at 12:32

You can use something like this:
push() { 
    if [ "$1" = . ]; then
        old=$OLDPWD
        current=$PWD
        builtin pushd .
        cd "$old"
        cd "$current"
    else
        builtin pushd "$1"
    fi
}

If you name it pushd , then it will have precedence over the built-in as functions are evaluated before built-ins.

You need variables old and current as overwriting OLDPWD will make it lose its special meaning.

Bernhard ,Feb 21, 2012 at 12:41

This works perfectly for me. Is there no such feature in the built-in pushd? As I would always prefer a standard solution. Thanks for this function however, maybe I will leave out the argument and it's checking at some point. – Bernhard Feb 21 '12 at 12:41

bsd ,Feb 21, 2012 at 12:53

There is no such feature in the builtin. Your own function is the best solution because pushd and popd both call cd modifying $OLDPWD, hence the source of your problem. I would name the function saved and use it in the context you like too, that of saving cwd. – bsd Feb 21 '12 at 12:53

Wildcard ,Mar 29, 2016 at 23:08

You might also want to unset old and current after you're done with them. – Wildcard Mar 29 '16 at 23:08

Kevin ,Feb 21, 2012 at 16:11

A slightly more concise version of Wojtek's answer :
pushd () {
        if [ "$1" = . ]; then
                cd -
                builtin pushd -
        else    
                builtin pushd "$1"
        fi      
}

By naming the function pushd , you can use pushd as normal, you don't need to remember to use the function name.


Kevin's answer is excellent. I've written up some details about what's going on, in case people are looking for a better understanding of why their script is necessary to solve the problem.

The reason that pushd . breaks the behavior of cd - will be apparent if we dig into the workings of cd and the directory stack. Let's push a few directories onto the stack:

$ mkdir dir1 dir2 dir3
$ pushd dir1
~/dir1 ~
$ pushd ../dir2
~/dir2 ~/dir1 ~
$ pushd ../dir3
~/dir3 ~/dir2 ~/dir1 ~
$ dirs -v
0       ~/dir3
1       ~/dir2
2       ~/dir1
3       ~

Now we can try cd - to jump back a directory:

$ cd -
/home/username/dir2
$ dirs -v
0       ~/dir2
1       ~/dir2
2       ~/dir1
3       ~

We can see that cd - jumped us back to the previous directory, replacing stack ~0 with the directory we jumped into. We can jump back with cd - again:

$ cd -
/home/username/dir3
$ dirs -v
0       ~/dir3
1       ~/dir2
2       ~/dir1
3       ~

Notice that we jumped back to our previous directory, even though the previous directory wasn't actually listed in the directory stack. This is because cd uses the environment variable $OLDPWD to keep track of the previous directory:

$ echo $OLDPWD
/home/username/dir2

If we do pushd . we will push an extra copy of the current directory onto the stack:

$ pushd . 
~/dir3 ~/dir3 ~/dir2 ~/dir1 ~
$ dirs -v
0       ~/dir3
1       ~/dir3
2       ~/dir2
3       ~/dir1
4       ~

In addition to making an extra copy of the current directory in the stack, pushd . has updated $OLDPWD :

$ echo $OLDPWD
/home/username/dir3

So cd - has lost its useful history, and will now just move you to the current directory - accomplishing nothing.

[Sep 26, 2018] bash - removing or clearing stack of popd-pushd paths

Sep 26, 2018 | unix.stackexchange.com

chrisjlee ,Feb 9, 2012 at 6:24

After pushd ing too many times, I want to clear the whole stack of paths.

How would I popd all the items in the stack?

I'd like to popd all of them without needing to know how many are in the stack.

The bash manual doesn't seem to cover this.

Why do I need to know this? I'm fastidious and want to clean out the stack.

jw013 ,Feb 9, 2012 at 6:39

BTW, the complete bash manual is over at gnu.org. If you use the all on one page version, it may be easier to find stuff there. – jw013 Feb 9 '12 at 6:39

jw013 ,Feb 9, 2012 at 6:37

dirs -c is what you are looking for.

Eliran Malka ,Mar 23, 2017 at 15:20

this does empty the stack, but does not restore the working directory from the stack bottom – Eliran Malka Mar 23 '17 at 15:20

Eliran Malka ,Mar 23, 2017 at 15:37

In order to both empty the stack and restore the working directory from the stack bottom, either:

Chuck Wilbur ,Nov 14, 2017 at 18:21

The first method is exactly what I wanted. The second wouldn't work in my case since I had called pushd a few times, then removed one of the directories in the middle, then popd was failing when I tried to unroll. I needed to jump over all the buggered up stuff in the middle to get back to where I started. – Chuck Wilbur Nov 14 '17 at 18:21

Eliran Malka ,Nov 14, 2017 at 22:51

right @ChuckWilbur - if you scrambled the dir stack, popd won't save you :) – Eliran Malka Nov 14 '17 at 22:51

jw013 ,Dec 7, 2017 at 20:50

It's better to pushd -0 instead of cd "$(dirs ...)" . – jw013 Dec 7 '17 at 20:50

Eliran Malka ,Dec 11, 2017 at 13:56

@jw013 how so? that would mess with the dir stack even more (which we're trying to clear here..) – Eliran Malka Dec 11 '17 at 13:56

jw013 ,Dec 12, 2017 at 15:31

cd "$(...)" works in 90%, probably even 99% of use cases, but with pushd -0 you can confidently say 100%. There are so many potential gotchas and edge cases associated with expanding file/directory paths in the shell that the most robust thing to do is just avoid it altogether, which pushd -0 does very concisely.

There is no chance of getting caught by a bug with a weird edge case if you never take the risk. If you want further reading on the possible headaches involved with Unix file / path names, a good starting point is mywiki.wooledge.org/ParsingLs – jw013 Dec 12 '17 at 15:31
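Putting the two suggestions from this thread together, a minimal sketch for bash (pushd -0 rotates the bottom of the stack to the top and changes into that directory, then dirs -c empties the stack):

$ pushd -0 && dirs -c    # return to the bottom of the directory stack, then clear the stack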

[Sep 25, 2018] Sorting Text

Notable quotes:
"... POSIX does not require that sort be stable, and most implementations are not ..."
"... Fortunately, the GNU implementation in the coreutils package [1] remedies that deficiency via the -- stable option ..."
Sep 25, 2018 | www.amazon.com
Like awk, cut, and join, sort views its input as a stream of records made up of fields of variable width, with records delimited by newline characters and fields delimited by whitespace or a user-specifiable single character.

sort

Usage
sort [ options ] [ file(s) ]
Purpose
Sort input lines into an order determined by the key field and datatype options, and the locale.
Major options
-b
Ignore leading whitespace.
-c
Check that input is correctly sorted. There is no output, but the exit code is nonzero if the input is not sorted.
-d
Dictionary order: only alphanumerics and whitespace are significant.
-g
General numeric value: compare fields as floating-point numbers. This works like -n , except that numbers may have decimal points and exponents (e.g., 6.022e+23 ). GNU version only.
-f
Fold letters implicitly to a common lettercase so that sorting is case-insensitive.
-i
Ignore nonprintable characters.
-k
Define the sort key field.
-m
Merge already-sorted input files into a sorted output stream.
-n
Compare fields as integer numbers.
-o outfile
Write output to the specified file instead of to standard output. If the file is one of the input files, sort copies it to a temporary file before sorting and writing the output.
-r
Reverse the sort order to descending, rather than the default ascending.
-t char
Use the single character char as the default field separator, instead of the default of whitespace.
-u
Unique records only: discard all but the first record in a group with equal keys. Only the key fields matter: other parts of the discarded records may differ.
Behavior
sort reads the specified files, or standard input if no files are given, and writes the sorted data on standard output.
Sorting by Lines

In the simplest case, when no command-line options are supplied, complete records are sorted according to the order defined by the current locale. In the traditional C locale, that means ASCII order, but you can set an alternate locale as we described in Section 2.8 . A tiny bilingual dictionary in the ISO 8859-1 encoding translates four French words differing only in accents:

$ cat french-english                           Show the tiny dictionary

côte    coast

cote    dimension

coté    dimensioned

côté    side
To understand the sorting, use the octal dump tool, od , to display the French words in ASCII and octal:
$ cut -f1 french-english | od -a -b            Display French words in octal bytes

0000000   c   t   t   e  nl   c   o   t   e  nl   c   o   t   i  nl   c

        143 364 164 145 012 143 157 164 145 012 143 157 164 351 012 143

0000020   t   t   i  nl

        364 164 351 012

0000024
Evidently, with the ASCII option -a , od strips the high-order bit of characters, so the accented letters have been mangled, but we can see their octal values: é is octal 351 and ô is octal 364. On GNU/Linux systems, you can confirm the character values like this:
$ man iso_8859_1                               Check the ISO 8859-1 manual page

...

       Oct   Dec   Hex   Char   Description

       --------------------------------------------------------------------

...

       351   233   E9     é     LATIN SMALL LETTER E WITH ACUTE

...

       364   244   F4     ô     LATIN SMALL LETTER O WITH CIRCUMFLEX

...
First, sort the file in strict byte order:
$ LC_ALL=C sort french-english                 Sort in traditional ASCII order

cote    dimension

coté    dimensioned

côte    coast

côté    side
Notice that e (octal 145) sorted before é (octal 351), and o (octal 157) sorted before ô (octal 364), as expected from their numerical values. Now sort the text in Canadian-French order:
$ LC_ALL=fr_CA.iso88591 sort french-english          Sort in Canadian-French locale

côte    coast

cote    dimension

coté    dimensioned

côté    side
The output order clearly differs from the traditional ordering by raw byte values. Sorting conventions are strongly dependent on language, country, and culture, and the rules are sometimes astonishingly complex. Even English, which mostly pretends that accents are irrelevant, can have complex sorting rules: examine your local telephone directory to see how lettercase, digits, spaces, punctuation, and name variants like McKay and Mackay are handled.

Sorting by Fields

For more control over sorting, the -k option allows you to specify the field to sort on, and the -t option lets you choose the field delimiter. If -t is not specified, then fields are separated by whitespace and leading and trailing whitespace in the record is ignored. With the -t option, the specified character delimits fields, and whitespace is significant. Thus, a three-character record consisting of space-X-space has one field without -t , but three with -t ' ' (the first and third fields are empty). The -k option is followed by a field number, or number pair, optionally separated by whitespace after -k . Each number may be suffixed by a dotted character position, and/or one of the modifier letters shown in the table below.

Letter   Description
b        Ignore leading whitespace.
d        Dictionary order.
f        Fold letters implicitly to a common lettercase.
g        Compare as general floating-point numbers. GNU version only.
i        Ignore nonprintable characters.
n        Compare as (integer) numbers.
r        Reverse the sort order.


Fields and characters within fields are numbered starting from one.

If only one field number is specified, the sort key begins at the start of that field, and continues to the end of the record ( not the end of the field).

If a comma-separated pair of field numbers is given, the sort key starts at the beginning of the first field, and finishes at the end of the second field.

With a dotted character position, comparison begins (first of a number pair) or ends (second of a number pair) at that character position: -k2.4,5.6 compares starting with the fourth character of the second field and ending with the sixth character of the fifth field.

If the start of a sort key falls beyond the end of the record, then the sort key is empty, and empty sort keys sort before all nonempty ones.

When multiple -k options are given, sorting is by the first key field, and then, when records match in that key, by the second key field, and so on.
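For instance, primary and secondary keys on a tiny inline dataset (a quick illustration, not from the original text):

$ printf 'b 2\na 2\na 1\n' | sort -k1,1 -k2,2n
a 1
a 2
b 2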


While the -k option is available on all of the systems that we tested, sort also recognizes an older field specification, now considered obsolete, where fields and character positions are numbered from zero. The key start for character m in field n is defined by +n.m, and the key end by -n.m. For example, sort +2.1 -3.2 is equivalent to sort -k3.2,4.3 . If the character position is omitted, it defaults to zero. Thus, +4.0nr and +4nr mean the same thing: a numeric key, beginning at the start of the fifth field, to be sorted in reverse (descending) order.


Let's try out these options on a sample password file, sorting it by the username, which is found in the first colon-separated field:
$ sort -t: -k1,1 /etc/passwd               Sort by username

bin:x:1:1:bin:/bin:/sbin/nologin

chico:x:12501:1000:Chico Marx:/home/chico:/bin/bash

daemon:x:2:2:daemon:/sbin:/sbin/nologin

groucho:x:12503:2000:Groucho Marx:/home/groucho:/bin/sh

gummo:x:12504:3000:Gummo Marx:/home/gummo:/usr/local/bin/ksh93

harpo:x:12502:1000:Harpo Marx:/home/harpo:/bin/ksh

root:x:0:0:root:/root:/bin/bash

zeppo:x:12505:1000:Zeppo Marx:/home/zeppo:/bin/zsh

For more control, add a modifier letter in the field selector to define the type of data in the field and the sorting order. Here's how to sort the password file by descending UID:

$ sort -t: -k3nr /etc/passwd               Sort by descending UID

zeppo:x:12505:1000:Zeppo Marx:/home/zeppo:/bin/zsh

gummo:x:12504:3000:Gummo Marx:/home/gummo:/usr/local/bin/ksh93

groucho:x:12503:2000:Groucho Marx:/home/groucho:/bin/sh

harpo:x:12502:1000:Harpo Marx:/home/harpo:/bin/ksh

chico:x:12501:1000:Chico Marx:/home/chico:/bin/bash

daemon:x:2:2:daemon:/sbin:/sbin/nologin

bin:x:1:1:bin:/bin:/sbin/nologin

root:x:0:0:root:/root:/bin/bash

A more precise field specification would have been -k3nr,3 (that is, from the start of field three, numerically, in reverse order, to the end of field three), or -k3,3nr , or even -k3,3 -n -r , but sort stops collecting a number at the first nondigit, so -k3nr works correctly.

In our password file example, three users have a common GID in field 4, so we could sort first by GID, and then by UID, with:

$ sort -t: -k4n -k3n /etc/passwd           Sort by GID and UID

root:x:0:0:root:/root:/bin/bash

bin:x:1:1:bin:/bin:/sbin/nologin

daemon:x:2:2:daemon:/sbin:/sbin/nologin

chico:x:12501:1000:Chico Marx:/home/chico:/bin/bash

harpo:x:12502:1000:Harpo Marx:/home/harpo:/bin/ksh

zeppo:x:12505:1000:Zeppo Marx:/home/zeppo:/bin/zsh

groucho:x:12503:2000:Groucho Marx:/home/groucho:/bin/sh

gummo:x:12504:3000:Gummo Marx:/home/gummo:/usr/local/bin/ksh93

The useful -u option asks sort to output only unique records, where unique means that their sort-key fields match, even if there are differences elsewhere. Reusing the password file one last time, we find:

$ sort -t: -k4n -u /etc/passwd             Sort by unique GID

root:x:0:0:root:/root:/bin/bash

bin:x:1:1:bin:/bin:/sbin/nologin

daemon:x:2:2:daemon:/sbin:/sbin/nologin

chico:x:12501:1000:Chico Marx:/home/chico:/bin/bash

groucho:x:12503:2000:Groucho Marx:/home/groucho:/bin/sh

gummo:x:12504:3000:Gummo Marx:/home/gummo:/usr/local/bin/ksh93

Notice that the output is shorter: three users are in group 1000, but only one of them was output...

Sorting Text Blocks

Sometimes you need to sort data composed of multiline records. A good example is an address list, which is conveniently stored with one or more blank lines between addresses. For data like this, there is no constant sort-key position that could be used in a -k option, so you have to help out by supplying some extra markup. Here's a simple example:

$ cat my-friends                           Show address file

# SORTKEY: Schloß, Hans Jürgen
Hans Jürgen Schloß
Unter den Linden 78
D-10117 Berlin
Germany

# SORTKEY: Jones, Adrian
Adrian Jones
371 Montgomery Park Road
Henley-on-Thames RG9 4AJ
UK

# SORTKEY: Brown, Kim
Kim Brown
1841 S Main Street
Westchester, NY 10502
USA

The sorting trick is to use the ability of awk to handle more-general record separators to recognize paragraph breaks, temporarily replace the line breaks inside each address with an otherwise unused character, such as an unprintable control character, and replace the paragraph break with a newline. sort then sees lines that look like this:

# SORTKEY: Schloß, Hans Jürgen^ZHans Jürgen Schloß^ZUnter den Linden 78^Z...

# SORTKEY: Jones, Adrian^ZAdrian Jones^Z371 Montgomery Park Road^Z...

# SORTKEY: Brown, Kim^ZKim Brown^Z1841 S Main Street^Z...

Here, ^Z is a Ctrl-Z character. A filter step downstream from sort restores the line breaks and paragraph breaks, and the sort key lines are easily removed, if desired, with grep . The entire pipeline looks like this:

cat my-friends |                                         Pipe in address file
  awk -v RS="" '{ gsub("\n", "^Z"); print }' |           Convert addresses to single lines
    sort -f |                                            Sort address bundles, ignoring case
      awk -v ORS="\n\n" '{ gsub("^Z", "\n"); print }' |  Restore line structure
        grep -v '# SORTKEY'                              Remove markup lines

The gsub() function performs "global substitutions." It is similar to the s/x/y/g construct in sed. The RS variable is the input Record Separator. Normally, input records are separated by newlines, making each line a separate record. Using RS="" is a special case, whereby records are separated by blank lines; i.e., each block or "paragraph" of text forms a separate record. This is exactly the form of our input data. Finally, ORS is the Output Record Separator; each output record printed with print is terminated with its value. Its default is also normally a single newline; setting it here to "\n\n" preserves the input format with blank lines separating records. (More detail on these constructs may be found in Chapter 9.)
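A self-contained illustration of the same RS/ORS mechanics on a throwaway two-record input (a sketch, using | instead of a control character so it is easy to type):

$ printf 'a\nb\n\nc\nd\n' | awk -v RS= -v ORS='\n\n' '{ gsub("\n", "|"); print }'
a|b

c|d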

The beauty of this approach is that we can easily include additional keys in each address that can be used for both sorting and selection: for example, an extra markup line of the form:

# COUNTRY: UK

in each address, and an additional pipeline stage of grep '# COUNTRY: UK' just before the sort, would let us extract only the UK addresses for further processing.

You could, of course, go overboard and use XML markup to identify the parts of the address in excruciating detail:

<address>
  <personalname>Hans Jürgen</personalname>
  <familyname>Schloß</familyname>
  <streetname>Unter den Linden</streetname>
  <streetnumber>78</streetnumber>
  <postalcode>D-10117</postalcode>
  <city>Berlin</city>
  <country>Germany</country>
</address>

With fancier data-processing filters, you could then please your post office by presorting your mail by country and postal code, but our minimal markup and simple pipeline are often good enough to get the job done.

4.1.4. Sort Efficiency

The obvious way to sort data requires comparing all pairs of items to see which comes first, and leads to algorithms known as bubble sort and insertion sort. These quick-and-dirty algorithms are fine for small amounts of data, but they certainly are not quick for large amounts, because their work to sort n records grows like n^2. This is quite different from almost all of the filters that we discuss in this book: they read a record, process it, and output it, so their execution time is directly proportional to the number of records, n.

Fortunately, the sorting problem has had lots of attention in the computing community, and good sorting algorithms are known whose average complexity goes like n^(3/2) (shellsort), n log n (heapsort, mergesort, and quicksort), and, for restricted kinds of data, n (distribution sort). The Unix sort command implementation has received extensive study and optimization: you can be confident that it will do the job efficiently, and almost certainly better than you can do yourself without learning a lot more about sorting algorithms.

4.1.5. Sort Stability

An important question about sorting algorithms is whether or not they are stable : that is, is the input order of equal records preserved in the output? A stable sort may be desirable when records are sorted by multiple keys, or more than once in a pipeline. POSIX does not require that sort be stable, and most implementations are not, as this example shows:

$ sort -t_ -k1,1 -k2,2 << EOF              Sort four lines by first two fields
> one_two
> one_two_three
> one_two_four
> one_two_five
> EOF

one_two
one_two_five
one_two_four
one_two_three

The sort fields are identical in each record, but the output differs from the input, so sort is not stable. Fortunately, the GNU implementation in the coreutils package [1] remedies that deficiency via the --stable option: its output for this example correctly matches the input.

[1] Available at ftp://ftp.gnu.org/gnu/coreutils/ .
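To confirm, the same four lines can be rerun through GNU sort with that option (a quick check; the output order matches the input, as the text states):

$ sort --stable -t_ -k1,1 -k2,2 << EOF
> one_two
> one_two_three
> one_two_four
> one_two_five
> EOF

one_two
one_two_three
one_two_four
one_two_five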


[Sep 21, 2018] Preferred editor or IDE for development work - Red Hat Learning Community

PyCharm supports Perl, although this is not advertised.
Sep 21, 2018 | learn.redhat.com

Re: Preferred editor or IDE for development work

I don't do a lot of development work, but while learning Python I've found pycharm to be a robust and helpful IDE. Other than that, I'm old school like Proksch and use vi.

MICHAEL BAKER
SYSTEM ADMINISTRATOR, IT MAIL SERVICES

micjohns

Re: Preferred editor or IDE for development work

Yes, I'm the same as @Proksch. For my development environment at Red Hat, vim is easiest to use as I'm using Linux to pop in and out of files. Otherwise, I've had a lot of great experiences with Visual Studio.

[Sep 18, 2018] Getting started with Tmux

Sep 15, 2018 | linuxize.com

... ... ...

Customizing Tmux

When Tmux is started it reads its configuration parameters from ~/.tmux.conf if the file is present.

Here is a sample ~/.tmux.conf configuration with a customized status line and a few additional options:

~/.tmux.conf
# Improve colors
set -g default-terminal 'screen-256color'

# Set scrollback buffer to 10000
set -g history-limit 10000

# Customize the status line
set -g status-fg  green
set -g status-bg  black
Basic Tmux Usage

Below are the most basic steps for getting started with Tmux (a sample session follows the list):

  1. At the command prompt, type tmux new -s my_session .
  2. Run the desired program.
  3. Use the key sequence Ctrl-b + d to detach from the session.
  4. Reattach to the Tmux session by typing tmux attach-session -t my_session .
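A sample session corresponding to those steps (tmux ls, which lists running sessions, is an extra command added here for illustration):

$ tmux new -s my_session                # step 1: start a new named session
    (run the desired program, then press Ctrl-b followed by d to detach)
$ tmux ls                               # optional: my_session should appear in the list
$ tmux attach-session -t my_session     # step 4: reattach to the session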
Conclusion

In this tutorial, you learned how to use Tmux. Now you can start creating multiple Tmux windows in a single session, split windows by creating new panes, navigate between windows, detach and resume sessions and personalize your Tmux instance using the .tmux.conf file.

There's lots more to learn about Tmux on the Tmux User's Manual page.

[Sep 16, 2018] After the iron curtain fell, there was a big demand for Russian-trained programmers because they could program in a very efficient and light manner that didn't demand too much of the hardware, if I remember correctly

Notable quotes:
"... It's a bit of chicken-and-egg problem, though. Russia, throughout 20th century, had problem with developing small, effective hardware, so their programmers learned how to code to take maximum advantage of what they had, with their technological deficiency in one field giving rise to superiority in another. ..."
"... Russian tech ppl should always be viewed with certain amount of awe and respect...although they are hardly good on everything. ..."
"... Soviet university training in "cybernetics" as it was called in the late 1980s involved two years of programming on blackboards before the students even touched an actual computer. ..."
"... I recall flowcharting entirely on paper before committing a program to punched cards. ..."
Aug 01, 2018 | turcopolier.typepad.com

Bill Herschel 2 days ago ,

Very, very slightly off-topic.

Much has been made, including in this post, of the excellent organization of Russian forces and Russian military technology.

I have been re-investigating an open-source relational database system known as PostgreSQL (variously), and I remember finding perhaps a decade ago a very useful full-text search feature of this system which I vaguely remember was written by a Russian and, for that reason, mildly distrusted by me.

Come to find out that the principal developers and maintainers of PostgreSQL are Russian. OMG. Double OMG, because the reason I chose it in the first place is that it is the best non-proprietary RDBMS out there and today is supported on Google Cloud, AWS, etc.

The US has met an equal or conceivably a superior, case closed. Trump's thoroughly odd behavior with Putin is just one but a very obvious one example of this.

Of course, Trump's nationalistic blather is creating a "base" of people who believe in the godliness of the US. They are in for a very serious disappointment.

kao_hsien_chih Bill Herschel a day ago ,

After the iron curtain fell, there was a big demand for Russian-trained programmers because they could program in a very efficient and "light" manner that didn't demand too much of the hardware, if I remember correctly.

It's a bit of chicken-and-egg problem, though. Russia, throughout 20th century, had problem with developing small, effective hardware, so their programmers learned how to code to take maximum advantage of what they had, with their technological deficiency in one field giving rise to superiority in another.

Russia has plenty of very skilled, very well-trained folks and their science and math education is, in a way, more fundamentally and soundly grounded on the foundational stuff than US (based on my personal interactions anyways).

Russian tech ppl should always be viewed with certain amount of awe and respect...although they are hardly good on everything.

TTG kao_hsien_chih a day ago ,

Well said. Soviet university training in "cybernetics" as it was called in the late 1980s involved two years of programming on blackboards before the students even touched an actual computer.

It gave the students an understanding of how computers work down to the bit-flipping level. Imagine trying to fuzz code in your head.

FarNorthSolitude TTG a day ago ,

I recall flowcharting entirely on paper before committing a program to punched cards. I used to do hex and octal math in my head as part of debugging core dumps. Ah, the glory days.

Honeywell once made a military computer that was 10 bit. That stumped me for a while, as everything was 8 or 16 bit back then.

kao_hsien_chih FarNorthSolitude 10 hours ago ,

That used to be fairly common in the civilian sector (in US) too: computing time was expensive, so you had to make sure that the stuff worked flawlessly before it was committed.

No opportunity to see things go wrong and do things over, like much of how things happen nowadays. Russians, with their hardware limitations/shortages, I imagine must have been much more thorough than US programmers were back in the old days, and you could only get there by being very thoroughly grounded in the basics.

[Sep 10, 2018] How to Exclude a Directory for TAR

Feb 05, 2012 | www.mysysad.com
Frankly speaking, I did not want to waste time and bandwidth downloading images. Here is the syntax to exclude a directory.

# tar cvfp mytarball.tar /mypath/Example.com_DIR --exclude=/mypath/Example.com_DIR/images

Tar everything in the current directory but exclude two files

# tar cvpf mytar.tar * --exclude=index.html --exclude=myimage.png
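When the list of exclusions grows, GNU tar can also read the patterns from a file via -X (--exclude-from); a short sketch with a hypothetical exclude.txt:

# cat exclude.txt
index.html
myimage.png
# tar cvpf mytar.tar * -X exclude.txt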

[Sep 07, 2018] How Can We Fix The Broken Economics of Open Source?

Notable quotes:
"... [with some subset of features behind a paywall] ..."
Sep 07, 2018 | news.slashdot.org

If we take consulting, services, and support off the table as an option for high-growth revenue generation (the only thing VCs care about), we are left with open core [with some subset of features behind a paywall] , software as a service, or some blurring of the two... Everyone wants infrastructure software to be free and continuously developed by highly skilled professional developers (who in turn expect to make substantial salaries), but no one wants to pay for it. The economics of this situation are unsustainable and broken ...

[W]e now come to what I have recently called "loose" open core and SaaS. In the future, I believe the most successful OSS projects will be primarily monetized via this method. What is it? The idea behind "loose" open core and SaaS is that a popular OSS project can be developed as a completely community driven project (this avoids the conflicts of interest inherent in "pure" open core), while value added proprietary services and software can be sold in an ecosystem that forms around the OSS...

Unfortunately, there is an inflection point at which in some sense an OSS project becomes too popular for its own good, and outgrows its ability to generate enough revenue via either "pure" open core or services and support... [B]uilding a vibrant community and then enabling an ecosystem of "loose" open core and SaaS businesses on top appears to me to be the only viable path forward for modern VC-backed OSS startups.
Klein also suggests OSS foundations start providing fellowships to key maintainers, who currently "operate under an almost feudal system of patronage, hopping from company to company, trying to earn a living, keep the community vibrant, and all the while stay impartial..."

"[A]s an industry, we are going to have to come to terms with the economic reality: nothing is free, including OSS. If we want vibrant OSS projects maintained by engineers that are well compensated and not conflicted, we are going to have to decide that this is something worth paying for. In my opinion, fellowships provided by OSS foundations and funded by companies generating revenue off of the OSS is a great way to start down this path."

[Sep 04, 2018] Unifying custom scripts system-wide with rpm on Red Hat-CentOS

Highly recommended!
Aug 24, 2018 | linuxconfig.org
Objective

Our goal is to build rpm packages with custom content, unifying scripts across any number of systems, including versioning, deployment and undeployment.

Requirements

Privileged access to the system for install, normal access for build.

Difficulty

MEDIUM

Introduction

One of the core features of any Linux system is that they are built for automation. If a task may need to be executed more than once - even with some part of it changing between runs - a sysadmin has countless tools to automate it, from simple shell scripts run by hand on demand (thus eliminating typos, or just saving some keystrokes) to complex scripted systems where tasks run from cron at specified times, interacting with each other, working with the results of other scripts, perhaps controlled by a central management system, and so on.

While this freedom and rich toolset indeed add to productivity, there is a catch: as a sysadmin, you write a useful script on one system, which proves to be useful on another, so you copy the script over. On a third system the script is useful too, but with a minor modification - maybe a new feature useful only on that system, reachable with a new parameter. With generalization in mind, you extend the script to provide the new feature, while still completing the task it was originally written for. Now you have two versions of the script: the first is on the first two systems, the second is on the third system.

You have 1024 computers running in the datacenter, and 256 of them will need some of the functionality provided by that script. In time you will have 64 versions of the script all over, every version doing its job. On the next system deployment you need a feature you recall you coded into some version - but which one? And on which systems is it?

On RPM-based systems, such as Red Hat flavors, a sysadmin can take advantage of the package manager to create order in the custom content, including simple shell scripts that provide nothing more than the tools the admin wrote for convenience.

In this tutorial we will build a custom rpm for Red Hat Enterprise Linux 7.5 containing two bash scripts, parselogs.sh and pullnews.sh, so that all systems have the latest version of these scripts in the /usr/local/sbin directory, and thus on the path of any user who logs in to the system.




Distributions, major and minor versions

In general, the major and minor version of the build machine should be the same as those of the systems the package is to be deployed on, as should the distribution, to ensure compatibility. If there are various versions of a given distribution, or even different distributions with many versions in your environment (oh, joy!), you should set up build machines for each. To cut the work short, you can just set up a build environment for each distribution and each major version, and have them on the lowest minor version existing in your environment for the given major version. Of course they don't need to be physical machines, and they only need to be running at build time, so you can use virtual machines or containers.

In this tutorial our work is much easier: we only deploy two scripts that have no dependencies at all (except bash), so we will build noarch packages, which stands for "not architecture dependent"; we'll also not specify the distribution the package is built for. This way we can install and upgrade them on any distribution that uses rpm, and on any version - we only need to ensure that the build machine's rpm-build package is at the oldest version in the environment.

Setting up the build environment

To build custom rpm packages, we need to install the rpm-build package:

# yum install rpm-build
From now on, we do not use the root user, and for a good reason: building packages does not require root privileges, and you don't want to break your build machine.

Building the first version of the package

Let's create the directory structure needed for building:

$ mkdir -p rpmbuild/SPECS
Our package is called admin-scripts, version 1.0. We create a specfile that specifies the metadata, contents and tasks performed by the package. This is a simple text file we can create with our favorite text editor, such as vi. The previously installed rpm-build package will fill your empty specfile with template data if you use vi to create an empty one, but for this tutorial consider the specification below, called admin-scripts-1.0.spec :



Name:           admin-scripts
Version:        1
Release:        0
Summary:        FooBar Inc. IT dept. admin scripts
Packager:       John Doe 
Group:          Application/Other
License:        GPL
URL:            www.foobar.com/admin-scripts
Source0:        %{name}-%{version}.tar.gz
BuildArch:      noarch

%description
Package installing latest version the admin scripts used by the IT dept.

%prep
%setup -q


%build

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/local/sbin
cp scripts/* $RPM_BUILD_ROOT/usr/local/sbin/

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root,-)
%dir /usr/local/sbin
/usr/local/sbin/parselogs.sh
/usr/local/sbin/pullnews.sh

%doc

%changelog
* Wed Aug 1 2018 John Doe 
- release 1.0 - initial release
Place the specfile in the rpmbuild/SPECS directory we created earlier.

We need the sources referenced in the specfile - in this case the two shell scripts. Let's create the directory for the sources (named as the package name with the major version appended):

$ mkdir -p rpmbuild/SOURCES/admin-scripts-1/scripts
And copy/move the scripts into it:
$ ls rpmbuild/SOURCES/admin-scripts-1/scripts/
parselogs.sh  pullnews.sh



As this tutorial is not about shell scripting, the contents of these scripts are irrelevant. As we will create a new version of the package, and pullnews.sh is the script we will demonstrate with, its source in the first version is as below:
#!/bin/bash
echo "news pulled"
exit 0
Do not forget to add the appropriate rights to the files in the source - in our case, execute permission:
chmod +x rpmbuild/SOURCES/admin-scripts-1/scripts/*.sh
Now we create a tar.gz archive from the source in the same directory:
cd rpmbuild/SOURCES/ && tar -czf admin-scripts-1.tar.gz admin-scripts-1
We are ready to build the package:
rpmbuild --bb rpmbuild/SPECS/admin-scripts-1.0.spec
We'll get some output about the build, and if anything goes wrong, errors will be shown (for example, missing file or path). If all goes well, our new package will appear in the RPMS directory generated by default under the rpmbuild directory (sorted into subdirectories by architecture):
$ ls rpmbuild/RPMS/noarch/
admin-scripts-1-0.noarch.rpm
We have created a simple yet fully functional rpm package. We can query it for all the metadata we supplied earlier:
$ rpm -qpi rpmbuild/RPMS/noarch/admin-scripts-1-0.noarch.rpm 
Name        : admin-scripts
Version     : 1
Release     : 0
Architecture: noarch
Install Date: (not installed)
Group       : Application/Other
Size        : 78
License     : GPL
Signature   : (none)
Source RPM  : admin-scripts-1-0.src.rpm
Build Date  : 2018. aug.  1., Wed, 13.27.34 CEST
Build Host  : build01.foobar.com
Relocations : (not relocatable)
Packager    : John Doe 
URL         : www.foobar.com/admin-scripts
Summary     : FooBar Inc. IT dept. admin scripts
Description :
Package installing latest version the admin scripts used by the IT dept.
And of course we can install it (with root privileges):
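For example, a plain rpm invocation from the directory containing rpmbuild/ will do it (a sketch; yum localinstall on the same file would also work):

# rpm -ivh rpmbuild/RPMS/noarch/admin-scripts-1-0.noarch.rpm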



As we installed the scripts into a directory that is on every user's $PATH , you can run them as any user in the system, from any directory:
$ pullnews.sh 
news pulled
The package can be distributed as it is, and can be pushed into repositories available to any number of systems. Doing so is out of the scope of this tutorial - however, building another version of the package is certainly not.

Building another version of the package

Our package and the extremely useful scripts in it become popular in no time, considering they are reachable anywhere with a simple yum install admin-scripts within the environment. There will soon be many requests for improvements - in this example, many votes come from happy users asking that pullnews.sh print another line on execution; this feature would save the whole company. We need to build another version of the package: we don't want to install another script, but a new version of it with the same name and path, as the sysadmins in our organization already rely on it heavily.

First we change the source of the pullnews.sh in the SOURCES to something even more complex:

#!/bin/bash
echo "news pulled"
echo "another line printed"
exit 0
We need to recreate the tar.gz with the new source content - we can use the same filename as the first time, as we don't change the version, only the release (and so the Source0 reference will still be valid). Note that we delete the previous archive first:
cd rpmbuild/SOURCES/ && rm -f admin-scripts-1.tar.gz && tar -czf admin-scripts-1.tar.gz admin-scripts-1
Now we create another specfile with a higher release number:
cp rpmbuild/SPECS/admin-scripts-1.0.spec rpmbuild/SPECS/admin-scripts-1.1.spec
We don't change much in the package itself, so we simply record the new release as shown below:
Name:           admin-scripts
Version:        1
Release:        1
Summary:        FooBar Inc. IT dept. admin scripts
Packager:       John Doe 
Group:          Application/Other
License:        GPL
URL:            www.foobar.com/admin-scripts
Source0:        %{name}-%{version}.tar.gz
BuildArch:      noarch

%description
Package installing latest version the admin scripts used by the IT dept.

%prep
%setup -q


%build

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/local/sbin
cp scripts/* $RPM_BUILD_ROOT/usr/local/sbin/

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root,-)
%dir /usr/local/sbin
/usr/local/sbin/parselogs.sh
/usr/local/sbin/pullnews.sh

%doc

%changelog
* Wed Aug 22 2018 John Doe 
- release 1.1 - pullnews.sh v1.1 prints another line
* Wed Aug 1 2018 John Doe 
- release 1.0 - initial release



All done: we can build another version of our package containing the updated script. Note that we reference the specfile with the higher release number as the source of the build:
rpmbuild --bb rpmbuild/SPECS/admin-scripts-1.1.spec
If the build is successful, we now have two versions of the package under our RPMS directory:
ls rpmbuild/RPMS/noarch/
admin-scripts-1-0.noarch.rpm  admin-scripts-1-1.noarch.rpm
And now we can install the "advanced" script, or upgrade it if it is already installed.
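For example (a sketch; -U upgrades the installed package to release 1-1, or installs it if it is not present):

# rpm -Uvh rpmbuild/RPMS/noarch/admin-scripts-1-1.noarch.rpm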

And our sysadmins can see that the feature request has landed in this version:

rpm -q --changelog admin-scripts
* sze aug 22 2018 John Doe 
- release 1.1 - pullnews.sh v1.1 prints another line

* sze aug 01 2018 John Doe 
- release 1.0 - initial release
Conclusion

We wrapped our custom content into versioned rpm packages. This means no older versions are left scattered across systems; everything is in its place, at the version we installed or upgraded to. RPM gives us the ability to replace old stuff needed only in previous versions, to add custom dependencies, or to provide tools or services our other packages rely on. With some effort, we can pack nearly any of our custom content into rpm packages, and distribute it across our environment, not only with ease, but with consistency.

[Aug 07, 2018] May I sort the /etc/group and /etc/passwd files

Aug 07, 2018 | unix.stackexchange.com

Ned64 ,Feb 18 at 13:52

My /etc/group has grown by adding new users as well as installing programs that have added their own user and/or group. The same is true for /etc/passwd . Editing has now become a little cumbersome due to the lack of structure.

May I sort these files (e.g. by numerical id or alphabetically by name) without negative effects on the system and/or package managers?

I would guess that it does not matter, but just to be sure I would like to get a 2nd opinion. Maybe root needs to be the 1st line or within the first 1k lines or something?

The same goes for /etc/*shadow .

Kevin ,Feb 19 at 23:50

"Editing has now become a little cumbersome due to the lack of structure" Why are you editing those files by hand? – Kevin Feb 19 at 23:50

Barmar ,Feb 21 at 20:51

How does sorting the file help with editing? Is it because you want to group related accounts together, and then do similar changes in a range of rows? But will related account be adjacent if you sort by uid or name? – Barmar Feb 21 at 20:51

Ned64 ,Mar 13 at 23:15

@Barmar It has helped mainly because user accounts are grouped by ranges and separate from system accounts (when sorting by UID). Therefore it is easier e.g. to spot the correct line to examine or change when editing with vi . – Ned64 Mar 13 at 23:15

ErikF ,Feb 18 at 14:12

You should be OK doing this: in fact, according to the article and the documentation, you can sort /etc/passwd and /etc/group by UID/GID with pwck -s and grpck -s , respectively.

hvd ,Feb 18 at 22:59

@Menasheh This site's colours don't make them stand out as much as on other sites, but "OK doing this" in this answer is a hyperlink. – hvd Feb 18 at 22:59

mickeyf ,Feb 19 at 14:05

OK, fine, but... In general, are there valid reasons to manually edit /etc/passwd and similar files? Isn't it considered better to access these via the tools that are designed to create and modify them? – mickeyf Feb 19 at 14:05

ErikF ,Feb 20 at 21:21

@mickeyf I've seen people manually edit /etc/passwd when they're making batch changes, like changing the GECOS field for all users due to moving/restructuring (global room or phone number changes, etc.) It's not common anymore, but there are specific reasons that crop up from time to time. – ErikF Feb 20 at 21:21

hvd ,Feb 18 at 17:28

Although ErikF is correct that this should generally be okay, I do want to point out one potential issue:

You're allowed to map different usernames to the same UID. If you make use of this, tools that map a UID back to a username will generally pick the first username they find for that UID in /etc/passwd . Sorting may cause a different username to appear first. For display purposes (e.g. ls -l output), either username should work, but it's possible that you've configured some program to accept requests from username A, where it will deny those requests if it sees them coming from username B, even if A and B are the same user.

Rui F Ribeiro ,Feb 19 at 17:53

Having root at first line has been a long time de facto "standard" and is very convenient if you ever have to fix their shell or delete the password, when dealing with problems or recovering systems.

Likewise I prefer to have daemons/utils users in the middle and standard users at the end of both passwd and shadow .

hvd's answer is also very good regarding disturbing the order of users, especially in systems with many users maintained by hand.

If you somehow manage to sort only part of the files, for instance only the standard users, that would be more sensible than changing the order of all users, imo.

Barmar ,Feb 21 at 20:13

If you sort numerically by UID, you should get your preferred order. Root is always 0 , and daemons conventionally have UIDs under 100. – Barmar Feb 21 at 20:13

Rui F Ribeiro ,Feb 21 at 20:16

@Barmar If sorting by UID and not by name, indeed, thanks for remembering. – Rui F Ribeiro Feb 21 at 20:16

[Aug 07, 2018] Consistency checking of /etc/passwd and /etc/shadow

Aug 07, 2018 | linux-audit.com

Linux distributions usually provide a pwck utility. This small utility will check the consistency of both files and report any specific issues. By specifying -r it runs in read-only mode.

Example of running pwck on the /etc/passwd and /etc/shadow files:
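A sketch of such a run (the exact warnings vary from system to system; -r keeps pwck from modifying anything):

# pwck -r /etc/passwd /etc/shadow
user 'games': directory '/usr/games' does not exist
user 'ftp': directory '/srv/ftp' does not exist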

[Aug 07, 2018] passwd - Copying Linux users and passwords to a new server

Aug 07, 2018 | serverfault.com
I am migrating a server over to new hardware. A part of the system will be rebuilt. What files and directories need to be copied so that usernames, passwords, groups, file ownership and file permissions stay intact?

Ubuntu 12.04 LTS.

Mikko Ohtamaa, Mar 20 '14 at 7:54

/etc/passwd - user account information less the encrypted passwords 
/etc/shadow - contains encrypted passwords 
/etc/group - user group information 
/etc/gshadow - group encrypted passwords

Be sure to ensure that the permissions on the files are correct too.

– Iain
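A quick way to eyeball those ownerships and modes on both the old and the new machine before switching over (a minimal sketch; the exact modes vary by distribution):

# ls -l /etc/passwd /etc/shadow /etc/group /etc/gshadow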


I did this with Gentoo Linux already and copied:

that's it.

If the files on the other machine have different owner IDs, you might change them to the ones on /etc/group and /etc/passwd and then you have the effective permissions restored.

– vanthome

Be careful that you don't delete or renumber system accounts when copying over the files mentioned in the other answers. System services don't usually have fixed user ids, and if you've installed the packages in a different order to the original machine (which is very likely if it was long-lived), then they'll end up in a different order. I tend to copy those files to somewhere like /root/saved-from-old-system and hand-edit them in order to just copy the non-system accounts. (There's probably a tool for this, but I don't tend to copy systems like this often enough to warrant investigating one.) – Mar 26 '14 at 5:36

[Aug 07, 2018] Managing Multiple Linux Servers with ClusterSSH

Aug 07, 2018 | www.linux.com

Managing Multiple Linux Servers with ClusterSSH

If you're a Linux system administrator, chances are you've got more than one machine that you're responsible for on a daily basis. You may even have a bank of machines that you maintain that are similar -- a farm of Web servers, for example. If you have a need to type the same command into several machines at once, you can login to each one with SSH and do it serially, or you can save yourself a lot of time and effort and use a tool like ClusterSSH.

ClusterSSH is a Tk/Perl wrapper around standard Linux tools like XTerm and SSH. As such, it'll run on just about any POSIX-compliant OS where the libraries exist -- I've run it on Linux, Solaris, and Mac OS X. It requires the Perl libraries Tk ( perl-tk on Debian or Ubuntu) and X11::Protocol ( libx11-protocol-perl on Debian or Ubuntu), in addition to xterm and OpenSSH.

Installation

Installing ClusterSSH on a Debian or Ubuntu system is trivial -- a simple sudo apt-get install clusterssh will install it and its dependencies. It is also packaged for use with Fedora, and it is installable via the ports system on FreeBSD. There's also a MacPorts version for use with Mac OS X, if you use an Apple machine. Of course, it can also be compiled from source.

Configuration

ClusterSSH can be configured either via its global configuration file -- /etc/clusters , or via a file in the user's home directory called .csshrc . I tend to favor the user-level configuration as that lets multiple people on the same system to setup their ClusterSSH client as they choose. Configuration is straightforward in either case, as the file format is the same. ClusterSSH defines a "cluster" as a group of machines that you'd like to control via one interface. With that in mind, you enumerate your clusters at the top of the file in a "clusters" block, and then you describe each cluster in a separate section below.

For example, let's say I've got two clusters, each consisting of two machines. "Cluster1" has the machines "Test1" and "Test2" in it, and "Cluster2" has the machines "Test3" and "Test4" in it. The ~/.csshrc (or /etc/clusters ) control file would look like this:

clusters = cluster1 cluster2

cluster1 = test1 test2
cluster2 = test3 test4

You can also make meta-clusters -- clusters that refer to clusters. If you wanted to make a cluster called "all" that encompassed all the machines, you could define it two ways. First, you could simply create a cluster that held all the machines, like the following:

clusters = cluster1 cluster2 all

cluster1 = test1 test2
cluster2 = test3 test4
all = test1 test2 test3 test4

However, my preferred method is to use a meta-cluster that encompasses the other clusters:

clusters = cluster1 cluster2 all

cluster1 = test1 test2
cluster2 = test3 test4
all = cluster1 cluster2

ClusterSSH

By calling out the "all" cluster as containing cluster1 and cluster2, if either of those clusters ever changes, the change is automatically captured so you don't have to update the "all" definition. This will save you time and headaches if your .csshrc file ever grows in size.

Using ClusterSSH

Using ClusterSSH is similar to launching SSH by itself. Simply running cssh -l <username> <clustername> will launch ClusterSSH and log you in as the desired user on that cluster. In the figure below, you can see I've logged into "cluster1" as myself. The small window labeled "CSSH [2]" is the Cluster SSH console window. Anything I type into that small window gets echoed to all the machines in the cluster -- in this case, machines "test1" and "test2". In a pinch, you can also login to machines that aren't in your .csshrc file, simply by running cssh -l <username> <machinename1> <machinename2> <machinename3> .

If I want to send something to one of the terminals, I can simply switch focus by clicking in the desired XTerm, and just type in that window like I usually would. ClusterSSH has a few menu items that really help when dealing with a mix of machines. As per the figure below, in the "Hosts" menu of the ClusterSSH console there are several options that come in handy.

"Retile Windows" does just that if you've manually resized or moved something. "Add host(s) or Cluster(s)" is great if you want to add another set of machines or another cluster to the running ClusterSSH session. Finally, you'll see each host listed at the bottom of the "Hosts" menu. By checking or unchecking the boxes next to each hostname, you can select which hosts the ClusterSSH console will echo commands to. This is handy if you want to exclude a host or two for a one-off or particular reason. The final menu option that's nice to have is under the "Send" menu, called "Hostname". This simply echoes each machine's hostname to the command line, which can be handy if you're constructing something host-specific across your cluster.

Resize Windows

Caveats with ClusterSSH

Like many UNIX tools, ClusterSSH has the potential to go horribly awry if you aren't very careful with its use. I've seen ClusterSSH mistakes take out an entire tier of Web servers simply by propagating a typo in an Apache configuration. Having access to multiple machines at once, possibly as a privileged user, means mistakes come at a great cost. Take care, and double-check what you're doing before you punch that Enter key.

Conclusion

ClusterSSH isn't a replacement for having a configuration management system or any of the other best practices when managing a number of machines. However, if you need to do something in a pinch outside of your usual toolset or process, or if you're doing prototype work, ClusterSSH is indispensable. It can save a lot of time when doing tasks that need to be done on more than one machine, but like any power tool, it can cause a lot of damage if used haphazardly.

[Jul 30, 2018] Sudo related horror story

Jul 30, 2018 | www.sott.net

A new sysadmin decided to scratch his itch in the sudoers file, in the standard definition that grants additional sysadmins access via the wheel group:

## Allows people in group wheel to run all commands
# %wheel        ALL=(ALL)       ALL
he replaced ALL with localhost
## Allows people in group wheel to run all commands
# %wheel        localhost=(ALL)       ALL
Then, without testing, he distributed this file to all servers in the datacenter. The sysadmins who worked after him discovered that the sudo su - command no longer worked and they couldn't get root using their tried and true method ;-)

[Jul 30, 2018] Configuring sudo Access

Jul 30, 2018 | access.redhat.com

The sudo command offers a mechanism for providing trusted users with administrative access to a system without sharing the password of the root user. When users given access via this mechanism precede an administrative command with sudo they are prompted to enter their own password. Once authenticated, and assuming the command is permitted, the administrative command is executed as if run by the root user. Follow this procedure to create a normal user account and give it sudo access. You will then be able to use the sudo command from this user account to execute administrative commands without logging in to the account of the root user.

Procedure 2.2. Configuring sudo Access

  1. Log in to the system as the root user.
  2. Create a normal user account using the useradd command. Replace USERNAME with the user name that you wish to create.
    # useradd USERNAME
  3. Set a password for the new user using the passwd command.
    # passwd USERNAME
    Changing password for user USERNAME.
    New password: 
    Retype new password: 
    passwd: all authentication tokens updated successfully.
    
  4. Run the visudo to edit the /etc/sudoers file. This file defines the policies applied by the sudo command.
    # visudo
    
  5. Find the lines in the file that grant sudo access to users in the group wheel when enabled.
    ## Allows people in group wheel to run all commands
    # %wheel        ALL=(ALL)       ALL
    
  6. Remove the comment character ( # ) at the start of the second line. This enables the configuration option.
  7. Save your changes and exit the editor.
  8. Add the user you created to the wheel group using the usermod command.
    # usermod -aG wheel USERNAME
    
  9. Test that the updated configuration allows the user you created to run commands using sudo .
    1. Use the su command to switch to the new user account that you created.
      # su USERNAME -
      
    2. Use the groups command to verify that the user is in the wheel group.
      $ groups
      USERNAME wheel
      
    3. Use the sudo command to run the whoami command. As this is the first time you have run a command using sudo from this user account, the banner message will be displayed. You will also be prompted to enter the password for the user account.
      $ sudo whoami
      We trust you have received the usual lecture from the local System
      Administrator. It usually boils down to these three things:
      
          #1) Respect the privacy of others.
          #2) Think before you type.
          #3) With great power comes great responsibility.
      
      [sudo] password for USERNAME:
      root
      
      The last line of the output is the user name returned by the whoami command. If sudo is configured correctly this value will be root .
You have successfully configured a user with sudo access. You can now log in to this user account and use sudo to run commands as if you were logged in to the account of the root user.
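
As a side note, on current RHEL-family systems the same grant is often made with a drop-in file rather than by editing /etc/sudoers itself. A minimal sketch, assuming the default #includedir /etc/sudoers.d directive is present; the file name is illustrative:

# visudo -f /etc/sudoers.d/USERNAME

The file itself then needs just one line:

USERNAME        ALL=(ALL)       ALL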

[Jul 30, 2018] 10 Useful Sudoers Configurations for Setting 'sudo' in Linux

Jul 30, 2018 | www.tecmint.com

Below are ten /etc/sudoers file configurations to modify the behavior of sudo command using Defaults entries.

$ sudo cat /etc/sudoers
#
# This file MUST be edited with the 'visudo' command as root.
#
# Please consider adding local content in /etc/sudoers.d/ instead of
# directly modifying this file.
#
# See the man page for details on how to write a sudoers file.
#
Defaults        env_reset
Defaults        mail_badpass
Defaults        secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
Defaults        logfile="/var/log/sudo.log"
Defaults        lecture="always"
Defaults        badpass_message="Password is wrong, please try again"
Defaults        passwd_tries=5
Defaults        insults
Defaults        log_input,log_output
Types of Defaults Entries
Defaults                parameter,   parameter_list     #affect all users on any host
Defaults@Host_List      parameter,   parameter_list     #affects all users on a specific host
Defaults:User_List      parameter,   parameter_list     #affects a specific user
Defaults!Cmnd_List      parameter,   parameter_list     #affects  a specific command 
Defaults>Runas_List     parameter,   parameter_list     #affects commands being run as a specific user

For the scope of this guide, we will narrow our focus to the first type of Defaults, in the forms shown below. Parameters may be flags, integer values, strings, or lists.

You should note that flags are implicitly boolean and can be turned off using the '!' operator, and lists have two additional assignment operators, += (add to list) and -= (remove from list).

Defaults     parameter
OR
Defaults     parameter=value
OR
Defaults     parameter -=value   
Defaults     parameter +=value  
OR
Defaults     !parameter
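
Putting that syntax to use, here are a couple of hedged examples of per-user Defaults entries; the user name admin1 is illustrative:

Defaults:admin1        !lecture             # never show the lecture banner to admin1
Defaults:admin1        passwd_tries=3       # give admin1 only three password attempts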

[Jul 30, 2018] Configuring sudo and adding users to Wheel group

Here you can find an additional example of granting access to all commands in a particular directory via sudo...
Formatting changed and some errors corrected...
Nov 28, 2014 | linuxnlenux.wordpress.com
If a server needs to be administered by a number of people it is normally not a good idea for them all to use the root account. This is because it becomes difficult to determine exactly who did what, when and where if everyone logs in with the same credentials. The sudo utility was designed to overcome this difficulty.

With sudo (which stands for "superuser do"), you can delegate a limited set of administrative responsibilities to other users, who are strictly limited to the commands you allow them. sudo creates a thorough audit trail, so everything users do gets logged; if users somehow manage to do something they shouldn't have, you'll be able to detect it and apply the needed fixes. You can even configure sudo centrally, so its permissions apply to several hosts.

To run a privileged command, prefix it with the word sudo followed by the command's regular syntax. When running the command with the sudo prefix, you will be prompted for your regular password before it is executed. You may then run other privileged commands using sudo within a five-minute period without being re-prompted for a password. All commands run via sudo are logged through syslog (on Red Hat-derived systems typically in /var/log/secure, on Debian-derived systems in /var/log/auth.log).

The sudo configuration file is /etc/sudoers . We should never edit this file manually. Instead, use the visudo command: # visudo

This protects against conflicts (visudo locks the file so two admins cannot edit it at the same time) and checks that the syntax is correct before the file is saved, so a typo cannot lock everyone out of sudo. By default the program opens the file in the vi text editor.

All Access to Specific Users

You can grant users user1 and user2 full access to all privileged commands with this sudoers entry.

user1, user2 ALL=(ALL) ALL

This is generally not a good idea because this allows user1 and user2 to use the su command to grant themselves permanent root privileges thereby bypassing the command logging features of sudo.

Access To Specific Users To Specific Files

This entry allows user1 and all the members of the group operator to gain access to all the program files in the /sbin and /usr/sbin directories, plus the privilege of running the command /usr/apps/check.pl.

user1, %operator ALL= /sbin/, /usr/sbin/, /usr/apps/check.pl

Access to Specific Files as Another User

user1 ALL=(accounts) /bin/kill, /usr/bin/kill, /usr/bin/pkill
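
This entry lets user1 run the listed kill commands as the accounts user rather than as root. A usage sketch (the process name is purely illustrative):

$ sudo -u accounts pkill -f nightly_batch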

Access Without Needing Passwords

This example allows all users in the group operator to execute all the commands in the /sbin directory without the need for entering a password.

%operator ALL= NOPASSWD: /sbin/

Adding users to the wheel group

The wheel group is a legacy from UNIX. When a server had to be maintained at a higher level than the day-to-day system administrator, root rights were often required. The 'wheel' group was used to create a pool of user accounts that were allowed to get that level of access to the server. If you weren't in the 'wheel' group, you were denied access to root.

Edit the configuration file (/etc/sudoers) with visudo and change these lines:

# Uncomment to allow people in group wheel to run all commands
# %wheel ALL=(ALL) ALL

To this (as recommended):

# Uncomment to allow people in group wheel to run all commands
%wheel ALL=(ALL) ALL

This will allow anyone in the wheel group to execute commands using sudo (rather than having to add each person one by one).

Finally, use the following command to add a user (e.g., user1) to the wheel group. Note the -a flag: without it, usermod replaces the user's existing supplementary groups instead of appending wheel to them.

# usermod -aG wheel user1
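
To confirm the result, you can list the sudo privileges of the account; run as root (or by the user themselves with sudo -l), this should show the (ALL) ALL entry inherited from the %wheel rule:

# sudo -l -U user1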

[Jul 30, 2018] Non-root user getting root access after running sudo vi -etc-hosts

Notable quotes:
"... as the original user ..."
Jul 30, 2018 | unix.stackexchange.com

Gilles, Mar 10, 2018 at 10:24

If sudo vi /etc/hosts is successful, it means that the system administrator has allowed the user to run vi /etc/hosts as root. That's the whole point of sudo: it lets the system administrator authorize certain users to run certain commands with extra privileges.

Giving a user the permission to run vi gives them the permission to run any vi command, including :sh to run a shell and :w to overwrite any file on the system. A rule that only allows running vi /etc/hosts therefore does not make any sense as a restriction, since it still allows the user to run arbitrary commands as root.

There is no "hacking" involved. The breach of security comes from a misconfiguration, not from a hole in the security model. Sudo does not particularly try to prevent against misconfiguration. Its documentation is well-known to be difficult to understand; if in doubt, ask around and don't try to do things that are too complicated.

It is in general a hard problem to give a user a specific privilege without giving them more than intended. A bulldozer approach like giving them the right to run an interactive program such as vi is bound to fail. A general piece of advice is to give the minimum privileges necessary to accomplish the task. If you want to allow a user to modify one file, don't give them the permission to run an editor. Instead, either:

  • Give them the permission to write to the file. This is the simplest method with the least risk of doing something you didn't intend.
    setfacl -m u:bob:rw /etc/hosts
    
  • Give them permission to edit the file via sudo. To do that, don't give them the permission to run an editor. As explained in the sudo documentation, give them the permission to run sudoedit , which invokes an editor as the original user and then uses the extra privileges only to modify the file.
    bob ALL = sudoedit /etc/hosts

    The sudo method is more complicated to set up, and is less transparent for the user because they have to invoke sudoedit instead of just opening the file in their editor, but has the advantage that all accesses are logged.

Note that allowing a user to edit /etc/hosts may have an impact on your security infrastructure: if there's any place where you rely on a host name corresponding to a specific machine, then that user will be able to point it to a different machine. Consider that it is probably unnecessary anyway .
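
From the user's side, the sudoedit rule above is exercised as shown below; the editor choice is only an example, since sudoedit honors the SUDO_EDITOR, VISUAL, and EDITOR environment variables:

$ SUDO_EDITOR=vim sudoedit /etc/hosts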

[Jul 29, 2018] The evolution of package managers by Steve Ovens (Red Hat)

Jul 26, 2018 | opensource.com

Package managers play an important role in Linux software management. Here's how some of the leading players compare.

Early on, Linux adopted the practice of maintaining a centralized location where users could find and install software. In this article, I'll discuss the history of software installation on Linux and how modern operating systems are kept up to date against the never-ending torrent of CVEs .

How was software on Linux installed before package managers?

Historically, software was provided either via FTP or mailing lists (eventually this distribution would grow to include basic websites). The download was normally a tarfile containing the source plus a few small files with the instructions to create a binary. You would untar the files, read the readme, and as long as you had GCC or some other form of C compiler, you would then typically run a ./configure script with some list of attributes, such as paths to library files, the location to create new binaries, etc. In addition, the configure process would check your system for application dependencies. If any major requirements were missing, the configure script would exit and you could not proceed with the installation until all the dependencies were met. If the configure script completed successfully, a Makefile would be created.

Once a Makefile existed, you would then proceed to run the make command (this command is provided by whichever compiler you were using). The make command has a number of options called make flags , which help optimize the resulting binaries for your system. In the earlier days of computing, this was very important because hardware struggled to keep up with modern software demands. Today, compilation options can be much more generic as most hardware is more than adequate for modern software.

Finally, after the make process had been completed, you would need to run make install (or sudo make install ) in order to actually install the software. As you can imagine, doing this for every single piece of software was time-consuming and tedious -- not to mention the fact that updating software was a complicated and potentially very involved process.
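
Pulled together, the classic source-install dance looked roughly like the sketch below; the package name and install prefix are illustrative:

tar xzf some-package-1.0.tar.gz
cd some-package-1.0
./configure --prefix=/usr/local
make
sudo make install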

What is a package?

Packages were invented to combat this complexity. Packages collect multiple data files together into a single archive file for easier portability and storage, or simply compress files to reduce storage space. The binaries included in a package are precompiled according to sane defaults chosen by the developer. Packages also contain metadata, such as the software's name, a description of its purpose, a version number, and a list of dependencies necessary for the software to run properly.

Several flavors of Linux have created their own package formats. Some of the most commonly used package formats include:

  • .deb -- used by Debian, Ubuntu, and their derivatives
  • .rpm -- used by Red Hat Enterprise Linux, Fedora, SUSE, and their derivatives
  • .tar.xz -- the compressed tarball format used by Arch Linux

While packages themselves don't manage dependencies directly, they represented a huge step forward in Linux software management.

What is a software repository?

A few years ago, before the proliferation of smartphones, the idea of a software repository was difficult for many users to grasp if they were not involved in the Linux ecosystem. To this day, most Windows users still seem to be hardwired to open a web browser to search for and install new software. However, those with smartphones have gotten used to the idea of a software "store." The way smartphone users obtain software and the way package managers work are not dissimilar. While there have been several attempts at making an attractive UI for software repositories, the vast majority of Linux users still use the command line to install packages. Software repositories are a centralized listing of all of the available software for any repository the system has been configured to use. Below are some examples of searching a repository for a specific package (note that these have been truncated for brevity):

Arch Linux with aurman

user@arch ~ $ aurman -Ss kate

extra/kate 18.04.2-2 (kde-applications kdebase)
Advanced Text Editor
aur/kate-root 18.04.0-1 (11, 1.139399)
Advanced Text Editor, patched to be able to run as root
aur/kate-git r15288.15d26a7-1 (1, 1e-06)
An advanced editor component which is used in numerous KDE applications requiring a text editing component

CentOS 7 using YUM

[user@centos ~]$ yum search kate

kate-devel.x86_64 : Development files for kate
kate-libs.x86_64 : Runtime files for kate
kate-part.x86_64 : Kate kpart plugin

Ubuntu using APT

user@ubuntu ~ $ apt search kate
Sorting... Done
Full Text Search... Done

kate/xenial 4:15.12.3-0ubuntu2 amd64
powerful text editor

kate-data/xenial,xenial 4:4.14.3-0ubuntu4 all
shared data files for Kate text editor

kate-dbg/xenial 4:15.12.3-0ubuntu2 amd64
debugging symbols for Kate

kate5-data/xenial,xenial 4:15.12.3-0ubuntu2 all
shared data files for Kate text editor

What are the most prominent package managers?

As suggested in the above output, package managers are used to interact with software repositories. The following is a brief overview of some of the most prominent package managers.

RPM-based package managers

Updating RPM-based systems, particularly those based on Red Hat technologies, has a very interesting and detailed history. In fact, the current versions of yum (for enterprise distributions) and DNF (for community) combine several open source projects to provide their current functionality.

Initially, Red Hat used a package manager called RPM (Red Hat Package Manager), which is still in use today. However, its primary use is to install RPMs, which you have locally, not to search software repositories. The package manager named up2date was created to inform users of updates to packages and enable them to search remote repositories and easily install dependencies. While it served its purpose, some community members felt that up2date had some significant shortcomings.

The current incantation of yum came from several different community efforts. Yellowdog Updater (YUP) was developed in 1999-2001 by folks at Terra Soft Solutions as a back-end engine for a graphical installer of Yellow Dog Linux . Duke University liked the idea of YUP and decided to improve upon it. They created Yellowdog Updater, Modified (yum) which was eventually adapted to help manage the university's Red Hat Linux systems. Yum grew in popularity, and by 2005 it was estimated to be used by more than half of the Linux market. Today, almost every distribution of Linux that uses RPMs uses yum for package management (with a few notable exceptions).

Working with yum

In order for yum to download and install packages out of an internet repository, files must be located in /etc/yum.repos.d/ and they must have the extension .repo . Here is an example repo file:

[local_base]
name=Base CentOS (local)
baseurl=http://7-repo.apps.home.local/yum-repo/7/
enabled=1
gpgcheck=0

This is for one of my local repositories, which explains why the GPG check is off. If this check was on, each package would need to be signed with a cryptographic key and a corresponding key would need to be imported into the system receiving the updates. Because I maintain this repository myself, I trust the packages and do not bother signing them.
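
For a public repository you would normally leave gpgcheck=1 and make the signing key available, either by pointing the repo file at it or by importing it manually. Both forms below are sketches with illustrative URLs:

In the .repo file:

gpgcheck=1
gpgkey=https://example.com/keys/RPM-GPG-KEY-EXAMPLE

Or import the key by hand:

sudo rpm --import https://example.com/keys/RPM-GPG-KEY-EXAMPLE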

Once a repository file is in place, you can start installing packages from the remote repository. The most basic command is yum update , which will update every package currently installed. This does not require a specific step to refresh the information about repositories; this is done automatically. A sample of the command is shown below:

[user@centos ~]$ sudo yum update
Loaded plugins: fastestmirror, product-id, search-disabled-repos, subscription-manager
local_base | 3.6 kB 00:00:00
local_epel | 2.9 kB 00:00:00
local_rpm_forge | 1.9 kB 00:00:00
local_updates | 3.4 kB 00:00:00
spideroak-one-stable | 2.9 kB 00:00:00
zfs | 2.9 kB 00:00:00
(1/6): local_base/group_gz | 166 kB 00:00:00
(2/6): local_updates/primary_db | 2.7 MB 00:00:00
(3/6): local_base/primary_db | 5.9 MB 00:00:00
(4/6): spideroak-one-stable/primary_db | 12 kB 00:00:00
(5/6): local_epel/primary_db | 6.3 MB 00:00:00
(6/6): zfs/x86_64/primary_db | 78 kB 00:00:00
local_rpm_forge/primary_db | 125 kB 00:00:00
Determining fastest mirrors
Resolving Dependencies
--> Running transaction check

If you are sure you want yum to execute any command without stopping for input, you can put the -y flag in the command, such as yum update -y .

Installing a new package is just as easy. First, search for the name of the package with yum search :

[user@centos ~]$ yum search kate

artwiz-aleczapka-kates-fonts.noarch : Kates font in Artwiz family
ghc-highlighting-kate-devel.x86_64 : Haskell highlighting-kate library development files
kate-devel.i686 : Development files for kate
kate-devel.x86_64 : Development files for kate
kate-libs.i686 : Runtime files for kate
kate-libs.x86_64 : Runtime files for kate
kate-part.i686 : Kate kpart plugin

Once you have the name of the package, you can simply install the package with sudo yum install kate-devel -y . If you installed a package you no longer need, you can remove it with sudo yum remove kate-devel -y . By default, yum will remove the package plus its dependencies.

There may be times when you do not know the name of the package, but you know the name of the utility. For example, suppose you are looking for the utility updatedb , which creates/updates the database used by the locate command. Attempting to install updatedb returns the following results:

[user@centos ~]$ sudo yum install updatedb
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
No package updatedb available.
Error: Nothing to do

You can find out what package the utility comes from by running:

[user@centos ~]$ yum whatprovides *updatedb
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile

bacula-director-5.2.13-23.1.el7.x86_64 : Bacula Director files
Repo : local_base
Matched from:
Filename : /usr/share/doc/bacula-director-5.2.13/updatedb

mlocate-0.26-8.el7.x86_64 : An utility for finding files by name
Repo : local_base
Matched from:
Filename : /usr/bin/updatedb

The reason I have used an asterisk * in front of the file name is because yum whatprovides uses the path to the file in order to make a match. Since I was not sure where the file was located, I used an asterisk to indicate any path.

There are, of course, many more options available to yum. I encourage you to view the man page for yum for additional options.

Dandified Yum (DNF) is a newer iteration on yum. Introduced in Fedora 18, it has not yet been adopted in the enterprise distributions, and as such is predominantly used in Fedora (and derivatives). Its usage is almost exactly the same as that of yum, but it was built to address poor performance, undocumented APIs, slow/broken dependency resolution, and occasional high memory usage. DNF is meant as a drop-in replacement for yum, and therefore I won't repeat the commands -- wherever you would use yum , simply substitute dnf .

Working with Zypper

Zypper is another package manager meant to help manage RPMs. This package manager is most commonly associated with SUSE (and openSUSE ) but has also seen adoption by MeeGo , Sailfish OS , and Tizen . It was originally introduced in 2006 and has been iterated upon ever since. There is not a whole lot to say other than Zypper is used as the back end for the system administration tool YaST and some users find it to be faster than yum.

Zypper's usage is very similar to that of yum. To search for, update, install or remove a package, simply use the following:

zypper search kate
zypper update
zypper install kate
zypper remove kate

Some major differences come into play in how repositories are added to the system with zypper . Unlike the package managers discussed above, zypper adds repositories using the package manager itself. The most common way is via a URL, but zypper also supports importing from repo files.

suse:~ # zypper addrepo http://download.videolan.org/pub/vlc/SuSE/15.0 vlc
Adding repository 'vlc' [done]
Repository 'vlc' successfully added

Enabled : Yes
Autorefresh : No
GPG Check : Yes
URI : http://download.videolan.org/pub/vlc/SuSE/15.0
Priority : 99

You remove repositories in a similar manner:

suse:~ # zypper removerepo vlc
Removing repository 'vlc' ...................................[done]
Repository 'vlc' has been removed.

Use the zypper repos command to see what the status of repositories are on your system:

suse:~ # zypper repos
Repository priorities are without effect. All enabled repositories share the same priority.

# | Alias | Name | Enabled | GPG Check | Refresh
---+---------------------------+-----------------------------------------+---------+-----------+--------
1 | repo-debug | openSUSE-Leap-15.0-Debug | No | ---- | ----
2 | repo-debug-non-oss | openSUSE-Leap-15.0-Debug-Non-Oss | No | ---- | ----
3 | repo-debug-update | openSUSE-Leap-15.0-Update-Debug | No | ---- | ----
4 | repo-debug-update-non-oss | openSUSE-Leap-15.0-Update-Debug-Non-Oss | No | ---- | ----
5 | repo-non-oss | openSUSE-Leap-15.0-Non-Oss | Yes | ( p) Yes | Yes
6 | repo-oss | openSUSE-Leap-15.0-Oss | Yes | ( p) Yes | Yes

zypper even has a similar ability to determine what package name contains files or binaries. Unlike YUM, it uses a hyphen in the command (although this method of searching is deprecated):

localhost:~ # zypper what-provides kate
Command 'what-provides' is replaced by 'search --provides --match-exact'.
See 'help search' for all available options.
Loading repository data...
Reading installed packages...

S | Name | Summary | Type
---+------+----------------------+------------
i+ | Kate | Advanced Text Editor | application
i | kate | Advanced Text Editor | package

As with YUM and DNF, Zypper has a much richer feature set than covered here. Please consult with the official documentation for more in-depth information.

Debian-based package managers

One of the oldest Linux distributions currently maintained, Debian's system is very similar to RPM-based systems. They use .deb packages, which can be managed by a tool called dpkg . dpkg is very similar to rpm in that it was designed to manage packages that are available locally. It does no dependency resolution (although it does dependency checking), and has no reliable way to interact with remote repositories. In order to improve the user experience and ease of use, the Debian project commissioned a project called Deity . This codename was eventually abandoned and changed to Advanced Package Tool (APT) .

Released as test builds in 1998 (before making an appearance in Debian 2.1 in 1999), many users consider APT one of the defining features of Debian-based systems. It makes use of repositories in a similar fashion to RPM-based systems, but instead of the individual .repo files that yum uses, apt has historically used /etc/apt/sources.list to manage repositories. More recently, it also ingests files from /etc/apt/sources.list.d/ .

Following the examples in the RPM-based package managers, to accomplish the same thing on Debian-based distributions you have a few options. You can edit/create the files manually in the aforementioned locations from the terminal, or in some cases, you can use a UI front end (such as Software & Updates provided by Ubuntu et al.). To provide the same treatment to all distributions, I will cover only the command-line options. To add a repository without directly editing a file, you can do something like this:

user@ubuntu:~$ sudo apt-add-repository "deb http://APT.spideroak.com/ubuntu-spideroak-hardy/ release restricted"

This will create a spideroakone.list file in /etc/apt/sources.list.d . Obviously, these lines change depending on the repository being added. If you are adding a Personal Package Archive (PPA), you can do this:

user@ubuntu:~$ sudo apt-add-repository ppa:gnome-desktop

NOTE: Debian does not support PPAs natively.
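
For reference, a manually added repository is just a one-line entry in /etc/apt/sources.list or in a file under /etc/apt/sources.list.d/ ; the example below targets Ubuntu 16.04 (xenial) and the component list is illustrative:

deb http://archive.ubuntu.com/ubuntu xenial main restricted universe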

After a repository has been added, Debian-based systems need to be made aware that there is a new location to search for packages. This is done via the apt-get update command:

user@ubuntu:~$ sudo apt-get update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [107 kB]
Hit:2 http://APT.spideroak.com/ubuntu-spideroak-hardy release InRelease
Hit:3 http://ca.archive.ubuntu.com/ubuntu xenial InRelease
Get:4 http://ca.archive.ubuntu.com/ubuntu xenial-updates InRelease [109 kB]
Get:5 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages [517 kB]
Get:6 http://security.ubuntu.com/ubuntu xenial-security/main i386 Packages [455 kB]
Get:7 http://security.ubuntu.com/ubuntu xenial-security/main Translation-en [221 kB]
...

Fetched 6,399 kB in 3s (2,017 kB/s)
Reading package lists... Done

Now that the new repository is added and updated, you can search for a package using the apt-cache command:

user@ubuntu:~$ apt-cache search kate
aterm-ml - Afterstep XVT - a VT102 emulator for the X window system
frescobaldi - Qt4 LilyPond sheet music editor
gitit - Wiki engine backed by a git or darcs filestore
jedit - Plugin-based editor for programmers
kate - powerful text editor
kate-data - shared data files for Kate text editor
kate-dbg - debugging symbols for Kate
katepart - embeddable text editor component

To install kate , simply run the corresponding install command:

user@ubuntu:~$ sudo apt-get install kate

To remove a package, use apt-get remove :

user@ubuntu:~$ sudo apt-get remove kate

When it comes to package discovery, APT does not provide any functionality that is similar to yum whatprovides . There are a few ways to get this information if you are trying to find where a specific file on disk has come from.

Using dpkg

user@ubuntu:~$ dpkg -S /bin/ls
coreutils: /bin/ls

Using apt-file

user@ubuntu:~$ sudo apt-get install apt-file -y

user@ubuntu:~$ sudo apt-file update

user@ubuntu:~$ apt-file search kate

The problem with apt-file search is that, unlike yum whatprovides , it is overly verbose unless you know the exact path, and it automatically adds a wildcard search so that you end up with results for anything with the word kate in it:

kate: /usr/bin/kate
kate: /usr/lib/x86_64-linux-gnu/qt5/plugins/ktexteditor/katebacktracebrowserplugin.so
kate: /usr/lib/x86_64-linux-gnu/qt5/plugins/ktexteditor/katebuildplugin.so
kate: /usr/lib/x86_64-linux-gnu/qt5/plugins/ktexteditor/katecloseexceptplugin.so
kate: /usr/lib/x86_64-linux-gnu/qt5/plugins/ktexteditor/katectagsplugin.so

Most of these examples have used apt-get . Note that most of the current tutorials for Ubuntu specifically have taken to simply using apt . The single apt command was designed to implement only the most commonly used commands in the APT arsenal. Since functionality is split between apt-get , apt-cache , and other commands, apt looks to unify these into a single command. It also adds some niceties such as colorization, progress bars, and other odds and ends. Most of the commands noted above can be replaced with apt , but not all Debian-based distributions currently receiving security patches support using apt by default, so you may need to install additional packages.

Arch-based package managers

Arch Linux uses a package manager called pacman . Unlike .deb or .rpm files, pacman uses a more traditional tarball with the LZMA2 compression ( .tar.xz ). This enables Arch Linux packages to be much smaller than other forms of compressed archives (such as gzip ). Initially released in 2002, pacman has been steadily iterated and improved. One of the major benefits of pacman is that it supports the Arch Build System , a system for building packages from source. The build system ingests a file called a PKGBUILD, which contains metadata (such as version numbers, revisions, dependencies, etc.) as well as a shell script with the required flags for compiling a package conforming to the Arch Linux requirements. The resulting binaries are then packaged into the aforementioned .tar.xz file for consumption by pacman.

This system led to the creation of the Arch User Repository (AUR) which is a community-driven repository containing PKGBUILD files and supporting patches or scripts. This allows for a virtually endless amount of software to be available in Arch. The obvious advantage of this system is that if a user (or maintainer) wishes to make software available to the public, they do not have to go through official channels to get it accepted in the main repositories. The downside is that it relies on community curation similar to Docker Hub , Canonical's Snap packages, or other similar mechanisms. There are numerous AUR-specific package managers that can be used to download, compile, and install from the PKGBUILD files in the AUR (we will look at this later).
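
To make the PKGBUILD idea concrete, here is a minimal, hypothetical sketch of one; the package name, URL, dependency, and checksum are all illustrative rather than a real AUR package:

# Maintainer: Example Person <example@example.com>
pkgname=hello-sketch
pkgver=1.0
pkgrel=1
pkgdesc="Illustrative example of the PKGBUILD format"
arch=('x86_64')
url="https://example.com/hello-sketch"
license=('MIT')
depends=('glibc')
source=("https://example.com/hello-sketch-$pkgver.tar.gz")
sha256sums=('SKIP')

build() {
  cd "$pkgname-$pkgver"
  ./configure --prefix=/usr
  make
}

package() {
  cd "$pkgname-$pkgver"
  make DESTDIR="$pkgdir" install
}

Running makepkg in the directory containing this file would produce the .tar.xz package, which pacman -U can then install.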

Working with pacman and official repositories

Arch's main package manager, pacman, uses flags instead of command words like yum and apt . For example, to search for a package, you would use pacman -Ss . As with most commands on Linux, you can find both a manpage and inline help. Most of the commands for pacman use the sync (-S) flag. For example:

user@arch ~ $ pacman -Ss kate

extra/kate 18.04.2-2 (kde-applications kdebase)
Advanced Text Editor
extra/libkate 0.4.1-6 [installed]
A karaoke and text codec for embedding in ogg
extra/libtiger 0.3.4-5 [installed]
A rendering library for Kate streams using Pango and Cairo
extra/ttf-cheapskate 2.0-12
TTFonts collection from dustimo.com
community/haskell-cheapskate 0.1.1-100
Experimental markdown processor.

Arch also uses repositories similar to other package managers. In the output above, search results are prefixed with the repository they are found in ( extra/ and community/ in this case). Similar to both Red Hat and Debian-based systems, Arch relies on the user to add the repository information into a specific file. The location for these repositories is /etc/pacman.conf . The example below is fairly close to a stock system. I have enabled the [multilib] repository for Steam support:

[options]
Architecture = auto

Color
CheckSpace

SigLevel = Required DatabaseOptional
LocalFileSigLevel = Optional

[core]
Include = /etc/pacman.d/mirrorlist

[extra]
Include = /etc/pacman.d/mirrorlist

[community]
Include = /etc/pacman.d/mirrorlist

[multilib]
Include = /etc/pacman.d/mirrorlist

It is possible to specify a specific URL in pacman.conf . This functionality can be used to make sure all packages come from a specific point in time. If, for example, a package has a bug that affects you severely and it has several dependencies, you can roll back to a specific point in time by adding a specific URL into your pacman.conf and then running the commands to downgrade the system:

[core]
Server=https://archive.archlinux.org/repos/2017/12/22/$repo/os/$arch

Like Debian-based systems, Arch does not update its local repository information until you tell it to do so. You can refresh the package database by issuing the following command:

user@arch ~ $ sudo pacman -Sy

:: Synchronizing package databases...
core 130.2 KiB 851K/s 00:00 [##########################################################] 100%
extra 1645.3 KiB 2.69M/s 00:01 [##########################################################] 100%
community 4.5 MiB 2.27M/s 00:02 [##########################################################] 100%
multilib is up to date

As you can see in the above output, pacman thinks that the multilib package database is up to date. You can force a refresh if you think this is incorrect by running pacman -Syy . If you want to update your entire system (excluding packages installed from the AUR), you can run pacman -Syu :

user@arch ~ $ sudo pacman -Syu

:: Synchronizing package databases...
core is up to date
extra is up to date
community is up to date
multilib is up to date
:: Starting full system upgrade...
resolving dependencies...
looking for conflicting packages...

Packages (45) ceph-13.2.0-2 ceph-libs-13.2.0-2 debootstrap-1.0.105-1 guile-2.2.4-1 harfbuzz-1.8.2-1 harfbuzz-icu-1.8.2-1 haskell-aeson-1.3.1.1-20
haskell-attoparsec-0.13.2.2-24 haskell-tagged-0.8.6-1 imagemagick-7.0.8.4-1 lib32-harfbuzz-1.8.2-1 lib32-libgusb-0.3.0-1 lib32-systemd-239.0-1
libgit2-1:0.27.2-1 libinput-1.11.2-1 libmagick-7.0.8.4-1 libmagick6-6.9.10.4-1 libopenshot-0.2.0-1 libopenshot-audio-0.1.6-1 libosinfo-1.2.0-1
libxfce4util-4.13.2-1 minetest-0.4.17.1-1 minetest-common-0.4.17.1-1 mlt-6.10.0-1 mlt-python-bindings-6.10.0-1 ndctl-61.1-1 netctl-1.17-1
nodejs-10.6.0-1

Total Download Size: 2.66 MiB
Total Installed Size: 879.15 MiB
Net Upgrade Size: -365.27 MiB

:: Proceed with installation? [Y/n]

In the scenario mentioned earlier regarding downgrading a system, you can force a downgrade by issuing pacman -Syyuu . It is important to note that this should not be undertaken lightly. This should not cause a problem in most cases; however, there is a chance that downgrading of a package or several packages will cause a cascading failure and leave your system in an inconsistent state. USE WITH CAUTION!

To install a package, simply use pacman -S kate :

user@arch ~ $ sudo pacman -S kate

resolving dependencies...
looking for conflicting packages...

Packages (7) editorconfig-core-c-0.12.2-1 kactivities-5.47.0-1 kparts-5.47.0-1 ktexteditor-5.47.0-2 syntax-highlighting-5.47.0-1 threadweaver-5.47.0-1
kate-18.04.2-2

Total Download Size: 10.94 MiB
Total Installed Size: 38.91 MiB

:: Proceed with installation? [Y/n]

To remove a package, you can run pacman -R kate . This removes only the package and not its dependencies:

user@arch ~ $ sudo pacman -R kate

checking dependencies...

Packages (1) kate-18.04.2-2

Total Removed Size: 20.30 MiB

:: Do you want to remove these packages? [Y/n]

If you want to remove the dependencies that are not required by other packages, you can run pacman -Rs:

user@arch ~ $ sudo pacman -Rs kate

checking dependencies...

Packages (7) editorconfig-core-c-0.12.2-1 kactivities-5.47.0-1 kparts-5.47.0-1 ktexteditor-5.47.0-2 syntax-highlighting-5.47.0-1 threadweaver-5.47.0-1
kate-18.04.2-2

Total Removed Size: 38.91 MiB

:: Do you want to remove these packages? [Y/n]

Pacman, in my opinion, offers the most succinct way of searching for the name of a package for a given utility. As shown above, yum and apt both rely on pathing in order to find useful results. Pacman makes some intelligent guesses as to which package you are most likely looking for:

user@arch ~ $ sudo pacman -Fs updatedb
core/mlocate 0.26.git.20170220-1
usr/bin/updatedb

user@arch ~ $ sudo pacman -Fs kate
extra/kate 18.04.2-2
usr/bin/kate

Working with the AUR

There are several popular AUR package manager helpers. Of these, yaourt and pacaur are fairly prolific. However, both projects are listed as discontinued or problematic on the Arch Wiki . For that reason, I will discuss aurman . It works almost exactly like pacman, except it searches the AUR and includes some helpful, albeit potentially dangerous, options. Installing a package from the AUR will initiate use of the package maintainer's build scripts. You will be prompted several times for permission to continue (I have truncated the output for brevity):

aurman -S telegram-desktop-bin
~~ initializing aurman...
~~ the following packages are neither in known repos nor in the aur
...
~~ calculating solutions...

:: The following 1 package(s) are getting updated:
aur/telegram-desktop-bin 1.3.0-1 -> 1.3.9-1

?? Do you want to continue? Y/n: Y

~~ looking for new pkgbuilds and fetching them...
Cloning into 'telegram-desktop-bin'...

remote: Counting objects: 301, done.
remote: Compressing objects: 100% (152/152), done.
remote: Total 301 (delta 161), reused 286 (delta 147)
Receiving objects: 100% (301/301), 76.17 KiB | 639.00 KiB/s, done.
Resolving deltas: 100% (161/161), done.
?? Do you want to see the changes of telegram-desktop-bin? N/y: N

[sudo] password for user:

...
==> Leaving fakeroot environment.
==> Finished making: telegram-desktop-bin 1.3.9-1 (Thu 05 Jul 2018 11:22:02 AM EDT)
==> Cleaning up...
loading packages...
resolving dependencies...
looking for conflicting packages...

Packages (1) telegram-desktop-bin-1.3.9-1

Total Installed Size: 88.81 MiB
Net Upgrade Size: 5.33 MiB

:: Proceed with installation? [Y/n]

Sometimes you will be prompted for more input, depending on the complexity of the package you are installing. To avoid this tedium, aurman allows you to pass both the --noconfirm and --noedit options. This is equivalent to saying "accept all of the defaults, and trust that the package maintainer's scripts will not be malicious." USE THIS OPTION WITH EXTREME CAUTION! While these options are unlikely to break your system on their own, you should never blindly accept someone else's scripts.

Conclusion

This article, of course, only scratches the surface of what package managers can do. There are also many other package managers available that I could not cover in this space. Some distributions, such as Ubuntu or Elementary OS, have gone to great lengths to provide a graphical approach to package management.

If you are interested in some of the more advanced functions of package managers, please post your questions or comments below and I would be glad to write a follow-up article.

Appendix

# search for packages
yum search <package>
dnf search <package>
zypper search <package>
apt-cache search <package>
apt search <package>
pacman -Ss <package>

# install packages
yum install <package>
dnf install <package>
zypper install <package>
apt-get install <package>
apt install <package>
pacman -S <package>

# update package database, not required by yum, dnf and zypper
apt-get update
apt update
pacman -Sy

# update all system packages
yum update
dnf update
zypper update
apt-get upgrade
apt upgrade
pacman -Su

# remove an installed package
yum remove <package>
dnf remove <package>
zypper remove <package>
apt-get remove <package>
apt remove <package>
pacman -R <package>
pacman -Rs <package>

# search for the package name containing specific file or folder
yum whatprovides *<binary>
dnf whatprovides *<binary>
zypper what-provides <binary>
zypper search --provides <binary>
apt-file search <binary>
pacman -Fs <binary>

About the author: Steve Ovens is a dedicated IT professional and Linux advocate. Prior to joining Red Hat, he spent several years in the financial, automotive, and movie industries. Steve currently works for Red Hat as an OpenShift consultant and has certifications ranging from the RHCA (in DevOps), to Ansible, to Containerized Applications and more. He spends a lot of time discussing technology and writing tutorials on various technical subjects with friends, family, and anyone who is interested in listening.

[Jul 05, 2018] Can rsync resume after being interrupted

Notable quotes:
"... as if it were successfully transferred ..."
Jul 05, 2018 | unix.stackexchange.com

Tim ,Sep 15, 2012 at 23:36

I used rsync to copy a large number of files, but my OS (Ubuntu) restarted unexpectedly.

After reboot, I ran rsync again, but from the output on the terminal, I found that rsync still copied those already copied before. But I heard that rsync is able to find differences between source and destination, and therefore to just copy the differences. So I wonder in my case if rsync can resume what was left last time?

Gilles ,Sep 16, 2012 at 1:56

Yes, rsync won't copy again files that it's already copied. There are a few edge cases where its detection can fail. Did it copy all the already-copied files? What options did you use? What were the source and target filesystems? If you run rsync again after it's copied everything, does it copy again? – Gilles Sep 16 '12 at 1:56

Tim ,Sep 16, 2012 at 2:30

@Gilles: Thanks! (1) I think I saw rsync copied the same files again from its output on the terminal. (2) Options are same as in my other post, i.e. sudo rsync -azvv /home/path/folder1/ /home/path/folder2 . (3) Source and target are both NTFS, but source is an external HDD, and target is an internal HDD. (4) It is now running and hasn't finished yet. – Tim Sep 16 '12 at 2:30

jwbensley ,Sep 16, 2012 at 16:15

There is also the --partial flag to resume partially transferred files (useful for large files) – jwbensley Sep 16 '12 at 16:15

Tim ,Sep 19, 2012 at 5:20

@Gilles: What are some "edge cases where its detection can fail"? – Tim Sep 19 '12 at 5:20

Gilles ,Sep 19, 2012 at 9:25

@Tim Off the top of my head, there's at least clock skew, and differences in time resolution (a common issue with FAT filesystems which store times in 2-second increments, the --modify-window option helps with that). – Gilles Sep 19 '12 at 9:25

DanielSmedegaardBuus ,Nov 1, 2014 at 12:32

First of all, regarding the "resume" part of your question, --partial just tells the receiving end to keep partially transferred files if the sending end disappears as though they were completely transferred.

While transferring files, they are temporarily saved as hidden files in their target folders (e.g. .TheFileYouAreSending.lRWzDC ), or a specifically chosen folder if you set the --partial-dir switch. When a transfer fails and --partial is not set, this hidden file will remain in the target folder under this cryptic name, but if --partial is set, the file will be renamed to the actual target file name (in this case, TheFileYouAreSending ), even though the file isn't complete. The point is that you can later complete the transfer by running rsync again with either --append or --append-verify .

So, --partial doesn't itself resume a failed or cancelled transfer. To resume it, you'll have to use one of the aforementioned flags on the next run. So, if you need to make sure that the target won't ever contain files that appear to be fine but are actually incomplete, you shouldn't use --partial . Conversely, if you want to make sure you never leave behind stray failed files that are hidden in the target directory, and you know you'll be able to complete the transfer later, --partial is there to help you.

With regards to the --append switch mentioned above, this is the actual "resume" switch, and you can use it whether or not you're also using --partial . Actually, when you're using --append , no temporary files are ever created. Files are written directly to their targets. In this respect, --append gives the same result as --partial on a failed transfer, but without creating those hidden temporary files.

So, to sum up, if you're moving large files and you want the option to resume a cancelled or failed rsync operation from the exact point that rsync stopped, you need to use the --append or --append-verify switch on the next attempt.
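
In command form, a resumable re-run of the transfer from the question would look something like this -- a sketch that assumes rsync 3.0.0 or newer on both ends and reuses the paths from the question:

sudo rsync -azvv --append-verify /home/path/folder1/ /home/path/folder2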

As @Alex points out below, since version 3.0.0 rsync now has a new option, --append-verify , which behaves like --append did before that switch existed. You probably always want the behaviour of --append-verify , so check your version with rsync --version . If you're on a Mac and not using rsync from homebrew , you'll (at least up to and including El Capitan) have an older version and need to use --append rather than --append-verify . Why they didn't keep the behaviour on --append and instead named the newcomer --append-no-verify is a bit puzzling. Either way, --append on rsync before version 3 is the same as --append-verify on the newer versions.

--append-verify isn't dangerous: It will always read and compare the data on both ends and not just assume they're equal. It does this using checksums, so it's easy on the network, but it does require reading the shared amount of data on both ends of the wire before it can actually resume the transfer by appending to the target.

Second of all, you said that you "heard that rsync is able to find differences between source and destination, and therefore to just copy the differences."

That's correct, and it's called delta transfer, but it's a different thing. To enable this, you add the -c , or --checksum switch. Once this switch is used, rsync will examine files that exist on both ends of the wire. It does this in chunks, compares the checksums on both ends, and if they differ, it transfers just the differing parts of the file. But, as @Jonathan points out below, the comparison is only done when files are of the same size on both ends -- different sizes will cause rsync to upload the entire file, overwriting the target with the same name.

This requires a bit of computation on both ends initially, but can be extremely efficient at reducing network load if, for example, you're frequently backing up very large, fixed-size files that often contain minor changes. Examples that come to mind are virtual hard drive image files used in virtual machines or iSCSI targets.

It is notable that if you use --checksum to transfer a batch of files that are completely new to the target system, rsync will still calculate their checksums on the source system before transferring them. Why I do not know :)

So, in short:

If you're often using rsync to just "move stuff from A to B" and want the option to cancel that operation and later resume it, don't use --checksum , but do use --append-verify .

If you're using rsync to back up stuff often, using --append-verify probably won't do much for you, unless you're in the habit of sending large files that continuously grow in size but are rarely modified once written. As a bonus tip, if you're backing up to storage that supports snapshotting such as btrfs or zfs , adding the --inplace switch will help you reduce snapshot sizes since changed files aren't recreated but rather the changed blocks are written directly over the old ones. This switch is also useful if you want to avoid rsync creating copies of files on the target when only minor changes have occurred.

When using --append-verify , rsync will behave just like it always does on all files that are the same size. If they differ in modification or other timestamps, it will overwrite the target with the source without scrutinizing those files further. --checksum will compare the contents (checksums) of every file pair of identical name and size.

UPDATED 2015-09-01 Changed to reflect points made by @Alex (thanks!)

UPDATED 2017-07-14 Changed to reflect points made by @Jonathan (thanks!)

Alex ,Aug 28, 2015 at 3:49

According to the documentation --append does not check the data, but --append-verify does. Also, as @gaoithe points out in a comment below, the documentation claims --partial does resume from previous files. – Alex Aug 28 '15 at 3:49

DanielSmedegaardBuus ,Sep 1, 2015 at 13:29

Thank you @Alex for the updates. Indeed, since 3.0.0, --append no longer compares the source to the target file before appending. Quite important, really! --partial does not itself resume a failed file transfer, but rather leaves it there for a subsequent --append(-verify) to append to it. My answer was clearly misrepresenting this fact; I'll update it to include these points! Thanks a lot :) – DanielSmedegaardBuus Sep 1 '15 at 13:29

Cees Timmerman ,Sep 15, 2015 at 17:21

This says --partial is enough. – Cees Timmerman Sep 15 '15 at 17:21

DanielSmedegaardBuus ,May 10, 2016 at 19:31

@CMCDragonkai Actually, check out Alexander's answer below about --partial-dir -- looks like it's the perfect bullet for this. I may have missed something entirely ;) – DanielSmedegaardBuus May 10 '16 at 19:31

Jonathan Y. ,Jun 14, 2017 at 5:48

What's your level of confidence in the described behavior of --checksum ? According to the man it has more to do with deciding which files to flag for transfer than with delta-transfer (which, presumably, is rsync 's default behavior). – Jonathan Y. Jun 14 '17 at 5:48

Alexander O'Mara ,Jan 3, 2016 at 6:34

TL;DR:

Just specify a partial directory as the rsync man pages recommends:

--partial-dir=.rsync-partial

Longer explanation:

There is actually a built-in feature for doing this using the --partial-dir option, which has several advantages over the --partial and --append-verify / --append alternative.

Excerpt from the rsync man pages:
--partial-dir=DIR
      A  better way to keep partial files than the --partial option is
      to specify a DIR that will be used  to  hold  the  partial  data
      (instead  of  writing  it  out to the destination file).  On the
      next transfer, rsync will use a file found in this dir  as  data
      to  speed  up  the resumption of the transfer and then delete it
      after it has served its purpose.

      Note that if --whole-file is specified (or  implied),  any  par-
      tial-dir  file  that  is  found for a file that is being updated
      will simply be removed (since rsync  is  sending  files  without
      using rsync's delta-transfer algorithm).

      Rsync will create the DIR if it is missing (just the last dir --
      not the whole path).  This makes it easy to use a relative  path
      (such  as  "--partial-dir=.rsync-partial")  to have rsync create
      the partial-directory in the destination file's  directory  when
      needed,  and  then  remove  it  again  when  the partial file is
      deleted.

      If the partial-dir value is not an absolute path, rsync will add
      an  exclude rule at the end of all your existing excludes.  This
      will prevent the sending of any partial-dir files that may exist
      on the sending side, and will also prevent the untimely deletion
      of partial-dir items on the receiving  side.   An  example:  the
      above  --partial-dir  option would add the equivalent of "-f '-p
      .rsync-partial/'" at the end of any other filter rules.

By default, rsync uses a random temporary file name which gets deleted when a transfer fails. As mentioned, using --partial you can make rsync keep the incomplete file as if it were successfully transferred , so that it is possible to later append to it using the --append-verify / --append options. However there are several reasons this is sub-optimal.

  1. Your backup files may not be complete, and without checking the remote file which must still be unaltered, there's no way to know.
  2. If you are attempting to use --backup and --backup-dir , you've just added a new version of this file that never even existed before to your version history.

However if we use --partial-dir , rsync will preserve the temporary partial file, and resume downloading using that partial file next time you run it, and we do not suffer from the above issues.
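
In practice that boils down to adding a single option to the original command; a sketch using the paths from the question:

sudo rsync -azvv --partial-dir=.rsync-partial /home/path/folder1/ /home/path/folder2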

trs ,Apr 7, 2017 at 0:00

This is really the answer. Hey everyone, LOOK HERE!! – trs Apr 7 '17 at 0:00

JKOlaf ,Jun 28, 2017 at 0:11

I agree this is a much more concise answer to the question. the TL;DR: is perfect and for those that need more can read the longer bit. Strong work. – JKOlaf Jun 28 '17 at 0:11

N2O ,Jul 29, 2014 at 18:24

You may want to add the -P option to your command.

From the man page:

--partial By default, rsync will delete any partially transferred file if the transfer
         is interrupted. In some circumstances it is more desirable to keep partially
         transferred files. Using the --partial option tells rsync to keep the partial
         file which should make a subsequent transfer of the rest of the file much faster.

  -P     The -P option is equivalent to --partial --progress.   Its  pur-
         pose  is to make it much easier to specify these two options for
         a long transfer that may be interrupted.

So instead of:

sudo rsync -azvv /home/path/folder1/ /home/path/folder2

Do:

sudo rsync -azvvP /home/path/folder1/ /home/path/folder2

Of course, if you don't want the progress updates, you can just use --partial , i.e.:

sudo rsync --partial -azvv /home/path/folder1/ /home/path/folder2

gaoithe ,Aug 19, 2015 at 11:29

@Flimm not quite correct. If there is an interruption (network or receiving side) then when using --partial the partial file is kept AND it is used when rsync is resumed. From the manpage: "Using the --partial option tells rsync to keep the partial file which should make a subsequent transfer of the rest of the file much faster." – gaoithe Aug 19 '15 at 11:29

DanielSmedegaardBuus ,Sep 1, 2015 at 14:11

@Flimm and @gaoithe, my answer wasn't quite accurate, and definitely not up-to-date. I've updated it to reflect version 3 + of rsync . It's important to stress, though, that --partial does not itself resume a failed transfer. See my answer for details :) – DanielSmedegaardBuus Sep 1 '15 at 14:11

guettli ,Nov 18, 2015 at 12:28

@DanielSmedegaardBuus I tried it and the -P is enough in my case. Versions: client has 3.1.0 and server has 3.1.1. I interrupted the transfer of a single large file with ctrl-c. I guess I am missing something. – guettli Nov 18 '15 at 12:28

Yadunandana ,Sep 16, 2012 at 16:07

I think you are forcibly calling the rsync and hence all data is getting downloaded when you recall it again. use --progress option to copy only those files which are not copied and --delete option to delete any files if already copied and now it does not exist in source folder...
rsync -avz --progress --delete -e  /home/path/folder1/ /home/path/folder2

If you are using ssh to login to other system and copy the files,

rsync -avz --progress --delete -e "ssh -o UserKnownHostsFile=/dev/null -o \
StrictHostKeyChecking=no" /home/path/folder1/ /home/path/folder2

let me know if there is any mistake in my understanding of this concept...

Fabien ,Jun 14, 2013 at 12:12

Can you please edit your answer and explain what your special ssh call does, and why you advice to do it? – Fabien Jun 14 '13 at 12:12

DanielSmedegaardBuus ,Dec 7, 2014 at 0:12

@Fabien He tells rsync to set two ssh options (rsync uses ssh to connect). The second one tells ssh to not prompt for confirmation if the host he's connecting to isn't already known (by existing in the "known hosts" file). The first one tells ssh to not use the default known hosts file (which would be ~/.ssh/known_hosts). He uses /dev/null instead, which is of course always empty, and as ssh would then not find the host in there, it would normally prompt for confirmation, hence option two. Upon connecting, ssh writes the now known host to /dev/null, effectively forgetting it instantly :) – DanielSmedegaardBuus Dec 7 '14 at 0:12

DanielSmedegaardBuus ,Dec 7, 2014 at 0:23

...but you were probably wondering what effect, if any, it has on the rsync operation itself. The answer is none. It only serves to not have the host you're connecting to added to your SSH known hosts file. Perhaps he's a sysadmin often connecting to a great number of new servers, temporary systems or whatnot. I don't know :) – DanielSmedegaardBuus Dec 7 '14 at 0:23

moi ,May 10, 2016 at 13:49

"use --progress option to copy only those files which are not copied" What? – moi May 10 '16 at 13:49

Paul d'Aoust ,Nov 17, 2016 at 22:39

There are a couple errors here; one is very serious: --delete will delete files in the destination that don't exist in the source. The less serious one is that --progress doesn't modify how things are copied; it just gives you a progress report on each file as it copies. (I fixed the serious error; replaced it with --remove-source-files .) – Paul d'Aoust Nov 17 '16 at 22:39

[Jul 04, 2018] How do I parse command line arguments in Bash

Notable quotes:
"... enhanced getopt ..."
Jul 04, 2018 | stackoverflow.com

Lawrence Johnston ,Oct 10, 2008 at 16:57

Say, I have a script that gets called with this line:
./myscript -vfd ./foo/bar/someFile -o /fizz/someOtherFile

or this one:

./myscript -v -f -d -o /fizz/someOtherFile ./foo/bar/someFile

What's the accepted way of parsing this such that in each case (or some combination of the two) $v , $f , and $d will all be set to true and $outFile will be equal to /fizz/someOtherFile ?

Inanc Gumus ,Apr 15, 2016 at 19:11

See my very easy and no-dependency answer here: stackoverflow.com/a/33826763/115363Inanc Gumus Apr 15 '16 at 19:11

dezza ,Aug 2, 2016 at 2:13

For zsh-users there's a great builtin called zparseopts which can do: zparseopts -D -E -M -- d=debug -debug=d And have both -d and --debug in the $debug array echo $+debug[1] will return 0 or 1 if one of those are used. Ref: zsh.org/mla/users/2011/msg00350.htmldezza Aug 2 '16 at 2:13

Bruno Bronosky ,Jan 7, 2013 at 20:01

Preferred Method: Using straight bash without getopt[s]

I originally answered the question as the OP asked. This Q/A is getting a lot of attention, so I should also offer the non-magic way to do this. I'm going to expand upon guneysus's answer to fix the nasty sed and include Tobias Kienzler's suggestion .

Two of the most common ways to pass key value pair arguments are:

Straight Bash Space Separated

Usage ./myscript.sh -e conf -s /etc -l /usr/lib /etc/hosts

#!/bin/bash

POSITIONAL=()
while [[ $# -gt 0 ]]
do
key="$1"

case $key in
    -e|--extension)
    EXTENSION="$2"
    shift # past argument
    shift # past value
    ;;
    -s|--searchpath)
    SEARCHPATH="$2"
    shift # past argument
    shift # past value
    ;;
    -l|--lib)
    LIBPATH="$2"
    shift # past argument
    shift # past value
    ;;
    --default)
    DEFAULT=YES
    shift # past argument
    ;;
    *)    # unknown option
    POSITIONAL+=("$1") # save it in an array for later
    shift # past argument
    ;;
esac
done
set -- "${POSITIONAL[@]}" # restore positional parameters

echo FILE EXTENSION  = "${EXTENSION}"
echo SEARCH PATH     = "${SEARCHPATH}"
echo LIBRARY PATH    = "${LIBPATH}"
echo DEFAULT         = "${DEFAULT}"
echo "Number files in SEARCH PATH with EXTENSION:" $(ls -1 "${SEARCHPATH}"/*."${EXTENSION}" | wc -l)
if [[ -n $1 ]]; then
    echo "Last line of file specified as non-opt/last argument:"
    tail -1 "$1"
fi
Straight Bash Equals Separated

Usage ./myscript.sh -e=conf -s=/etc -l=/usr/lib /etc/hosts

#!/bin/bash

for i in "$@"
do
case $i in
    -e=*|--extension=*)
    EXTENSION="${i#*=}"
    shift # past argument=value
    ;;
    -s=*|--searchpath=*)
    SEARCHPATH="${i#*=}"
    shift # past argument=value
    ;;
    -l=*|--lib=*)
    LIBPATH="${i#*=}"
    shift # past argument=value
    ;;
    --default)
    DEFAULT=YES
    shift # past argument with no value
    ;;
    *)
          # unknown option
    ;;
esac
done
echo "FILE EXTENSION  = ${EXTENSION}"
echo "SEARCH PATH     = ${SEARCHPATH}"
echo "LIBRARY PATH    = ${LIBPATH}"
echo "Number files in SEARCH PATH with EXTENSION:" $(ls -1 "${SEARCHPATH}"/*."${EXTENSION}" | wc -l)
if [[ -n $1 ]]; then
    echo "Last line of file specified as non-opt/last argument:"
    tail -1 "$1"
fi

To better understand ${i#*=} search for "Substring Removal" in this guide . It is functionally equivalent to `sed 's/[^=]*=//' <<< "$i"` which calls a needless subprocess or `echo "$i" | sed 's/[^=]*=//'` which calls two needless subprocesses.
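As a minimal sketch of that equivalence (the value of i below is made up purely for illustration):

#!/bin/bash
i="--extension=conf"

val1="${i#*=}"                        # parameter expansion, no subprocess
val2=$(sed 's/[^=]*=//' <<< "$i")     # one extra subprocess
val3=$(echo "$i" | sed 's/[^=]*=//')  # two extra subprocesses

echo "$val1 $val2 $val3"              # prints: conf conf conf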

Using getopt[s]

from: http://mywiki.wooledge.org/BashFAQ/035#getopts

Never use getopt(1). getopt cannot handle empty argument strings, or arguments with embedded whitespace. Please forget that it ever existed.

The POSIX shell (and others) offer getopts which is safe to use instead. Here is a simplistic getopts example:

#!/bin/sh

# A POSIX variable
OPTIND=1         # Reset in case getopts has been used previously in the shell.

# Initialize our own variables:
output_file=""
verbose=0

while getopts "h?vf:" opt; do
    case "$opt" in
    h|\?)
        show_help
        exit 0
        ;;
    v)  verbose=1
        ;;
    f)  output_file=$OPTARG
        ;;
    esac
done

shift $((OPTIND-1))

[ "${1:-}" = "--" ] && shift

echo "verbose=$verbose, output_file='$output_file', Leftovers: $@"

# End of file

The advantages of getopts are:

  1. It's portable, and will work in e.g. dash.
  2. It can handle things like -vf filename in the expected Unix way, automatically.

The disadvantage of getopts is that it can only handle short options ( -h , not --help ) without trickery.

There is a getopts tutorial which explains what all of the syntax and variables mean. In bash, there is also help getopts , which might be informative.
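For completeness, one common form of that "trickery" is to abuse a literal - in the optstring, so that bash's getopts reports --name or --name=value as option - with the rest of the word in OPTARG . The sketch below relies on that bash behavior (it is not guaranteed by POSIX) and uses made-up option names:

#!/bin/bash

show_help() { echo "usage: $0 [-h|--help] [-f FILE|--file=FILE]"; }

output_file=""
while getopts "hf:-:" opt; do
    case "$opt" in
        h)  show_help; exit 0 ;;
        f)  output_file=$OPTARG ;;
        -)  # long option: everything after the leading "--" lands in OPTARG
            # (only the --file=VALUE form is handled in this sketch)
            case "$OPTARG" in
                help)    show_help; exit 0 ;;
                file=*)  output_file=${OPTARG#*=} ;;
                *)       echo "unknown long option --$OPTARG" >&2; exit 1 ;;
            esac ;;
        \?) exit 1 ;;
    esac
done
shift $((OPTIND-1))
echo "output_file='$output_file', leftovers: $*"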

Livven ,Jun 6, 2013 at 21:19

Is this really true? According to Wikipedia there's a newer GNU enhanced version of getopt which includes all the functionality of getopts and then some. man getopt on Ubuntu 13.04 outputs getopt - parse command options (enhanced) as the name, so I presume this enhanced version is standard now. – Livven Jun 6 '13 at 21:19

szablica ,Jul 17, 2013 at 15:23

That something is a certain way on your system is a very weak premise to base assumptions of "being standard" on. – szablica Jul 17 '13 at 15:23

Stephane Chazelas ,Aug 20, 2014 at 19:55

@Livven, that getopt is not a GNU utility, it's part of util-linux . – Stephane Chazelas Aug 20 '14 at 19:55

Nicolas Mongrain-Lacombe ,Jun 19, 2016 at 21:22

If you use -gt 0 , remove your shift after the esac , increase all the shift s by 1 and add this case: *) break;; then you can handle non-optional arguments. Ex: pastebin.com/6DJ57HTc – Nicolas Mongrain-Lacombe Jun 19 '16 at 21:22

kolydart ,Jul 10, 2017 at 8:11

You do not echo --default . In the first example, I notice that if --default is the last argument, it is not processed (considered as non-opt), unless while [[ $# -gt 1 ]] is set as while [[ $# -gt 0 ]] – kolydart Jul 10 '17 at 8:11

Robert Siemer ,Apr 20, 2015 at 17:47

No answer mentions enhanced getopt . And the top-voted answer is misleading: It ignores -vfd style short options (requested by the OP), options after positional arguments (also requested by the OP) and it ignores parsing-errors. Instead:
  • Use enhanced getopt from util-linux or formerly GNU glibc . 1
  • It works with getopt_long() the C function of GNU glibc.
  • Has all useful distinguishing features (the others don't have them):
    • handles spaces, quoting characters and even binary in arguments 2
    • it can handle options at the end: script.sh -o outFile file1 file2 -v
    • allows = -style long options: script.sh --outfile=fileOut --infile fileIn
  • Is so old already 3 that no GNU system is missing this (e.g. any Linux has it).
  • You can test for its existence with: getopt --test → return value 4.
  • Other getopt or shell-builtin getopts are of limited use.

The following calls

myscript -vfd ./foo/bar/someFile -o /fizz/someOtherFile
myscript -v -f -d -o/fizz/someOtherFile -- ./foo/bar/someFile
myscript --verbose --force --debug ./foo/bar/someFile -o/fizz/someOtherFile
myscript --output=/fizz/someOtherFile ./foo/bar/someFile -vfd
myscript ./foo/bar/someFile -df -v --output /fizz/someOtherFile

all return

verbose: y, force: y, debug: y, in: ./foo/bar/someFile, out: /fizz/someOtherFile

with the following myscript

#!/bin/bash

getopt --test > /dev/null
if [[ $? -ne 4 ]]; then
    echo "I'm sorry, `getopt --test` failed in this environment."
    exit 1
fi

OPTIONS=dfo:v
LONGOPTIONS=debug,force,output:,verbose

# -temporarily store output to be able to check for errors
# -e.g. use "--options" parameter by name to activate quoting/enhanced mode
# -pass arguments only via   -- "$@"   to separate them correctly
PARSED=$(getopt --options=$OPTIONS --longoptions=$LONGOPTIONS --name "$0" -- "$@")
if [[ $? -ne 0 ]]; then
    # e.g. $? == 1
    #  then getopt has complained about wrong arguments to stdout
    exit 2
fi
# read getopt's output this way to handle the quoting right:
eval set -- "$PARSED"

# now enjoy the options in order and nicely split until we see --
while true; do
    case "$1" in
        -d|--debug)
            d=y
            shift
            ;;
        -f|--force)
            f=y
            shift
            ;;
        -v|--verbose)
            v=y
            shift
            ;;
        -o|--output)
            outFile="$2"
            shift 2
            ;;
        --)
            shift
            break
            ;;
        *)
            echo "Programming error"
            exit 3
            ;;
    esac
done

# handle non-option arguments
if [[ $# -ne 1 ]]; then
    echo "$0: A single input file is required."
    exit 4
fi

echo "verbose: $v, force: $f, debug: $d, in: $1, out: $outFile"

1 enhanced getopt is available on most "bash-systems", including Cygwin; on OS X try brew install gnu-getopt
2 the POSIX exec() conventions have no reliable way to pass binary NULL in command line arguments; those bytes prematurely end the argument
3 first version released in 1997 or before (I only tracked it back to 1997)

johncip ,Jan 12, 2017 at 2:00

Thanks for this. Just confirmed from the feature table at en.wikipedia.org/wiki/Getopts , if you need support for long options, and you're not on Solaris, getopt is the way to go. – johncip Jan 12 '17 at 2:00

Kaushal Modi ,Apr 27, 2017 at 14:02

I believe that the only caveat with getopt is that it cannot be used conveniently in wrapper scripts where one might have few options specific to the wrapper script, and then pass the non-wrapper-script options to the wrapped executable, intact. Let's say I have a grep wrapper called mygrep and I have an option --foo specific to mygrep , then I cannot do mygrep --foo -A 2 , and have the -A 2 passed automatically to grep ; I need to do mygrep --foo -- -A 2 . Here is my implementation on top of your solution. – Kaushal Modi Apr 27 '17 at 14:02
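A hedged sketch of the workaround Kaushal describes ( mygrep and --foo are hypothetical; everything after -- is handed to grep untouched):

#!/bin/bash
# Hypothetical "mygrep" wrapper: parse our own --foo, pass the rest to grep.
foo=0
PARSED=$(getopt --options='' --longoptions=foo --name "$0" -- "$@") || exit 2
eval set -- "$PARSED"
while true; do
    case "$1" in
        --foo) foo=1; shift ;;
        --)    shift; break ;;
    esac
done
[ "$foo" -eq 1 ] && echo "foo mode enabled" >&2
# Called as:  mygrep --foo -- -A 2 pattern file
exec grep "$@"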

bobpaul ,Mar 20 at 16:45

Alex, I agree and there's really no way around that since we need to know the actual return value of getopt --test . I'm a big fan of "Unofficial Bash Strict mode", (which includes set -e ), and I just put the check for getopt ABOVE set -euo pipefail and IFS=$'\n\t' in my script. – bobpaul Mar 20 at 16:45

Robert Siemer ,Mar 21 at 9:10

@bobpaul Oh, there is a way around that. And I'll edit my answer soon to reflect my conclusions regarding this issue ( set -e )... – Robert Siemer Mar 21 at 9:10

Robert Siemer ,Mar 21 at 9:16

@bobpaul Your statement about util-linux is wrong and misleading as well: the package is marked "essential" on Ubuntu/Debian. As such, it is always installed. – Which distros are you talking about (where you say it needs to be installed on purpose)? – Robert Siemer Mar 21 at 9:16

guneysus ,Nov 13, 2012 at 10:31

from : digitalpeer.com with minor modifications

Usage myscript.sh -p=my_prefix -s=dirname -l=libname

#!/bin/bash
for i in "$@"
do
case $i in
    -p=*|--prefix=*)
    PREFIX="${i#*=}"

    ;;
    -s=*|--searchpath=*)
    SEARCHPATH="${i#*=}"
    ;;
    -l=*|--lib=*)
    DIR="${i#*=}"
    ;;
    --default)
    DEFAULT=YES
    ;;
    *)
            # unknown option
    ;;
esac
done
echo PREFIX = ${PREFIX}
echo SEARCH PATH = ${SEARCHPATH}
echo DIRS = ${DIR}
echo DEFAULT = ${DEFAULT}

To better understand ${i#*=} search for "Substring Removal" in this guide . It is functionally equivalent to `sed 's/[^=]*=//' <<< "$i"` which calls a needless subprocess or `echo "$i" | sed 's/[^=]*=//'` which calls two needless subprocesses.

Tobias Kienzler ,Nov 12, 2013 at 12:48

Neat! Though this won't work for space-separated arguments à la mount -t tempfs ... . One can probably fix this via something like while [ $# -ge 1 ]; do param=$1; shift; case $param in; -p) prefix=$1; shift;; etc – Tobias Kienzler Nov 12 '13 at 12:48
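A hedged expansion of that inline sketch into a runnable loop (space-separated values only, option names mirroring the answer above):

#!/bin/bash
while [ $# -ge 1 ]; do
    param=$1; shift
    case $param in
        -p|--prefix)      PREFIX=$1;     shift ;;
        -s|--searchpath)  SEARCHPATH=$1; shift ;;
        -l|--lib)         DIR=$1;        shift ;;
        --default)        DEFAULT=YES ;;
        *) echo "unknown option: $param" >&2; exit 1 ;;
    esac
done
echo PREFIX = ${PREFIX}
echo SEARCH PATH = ${SEARCHPATH}
echo DIRS = ${DIR}
echo DEFAULT = ${DEFAULT}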

Robert Siemer ,Mar 19, 2016 at 15:23

This can't handle -vfd style combined short options. – Robert Siemer Mar 19 '16 at 15:23

bekur ,Dec 19, 2017 at 23:27

link is broken! – bekur Dec 19 '17 at 23:27

Matt J ,Oct 10, 2008 at 17:03

getopt() / getopts() is a good option. Stolen from here :

The simple use of "getopt" is shown in this mini-script:

#!/bin/bash
echo "Before getopt"
for i
do
  echo $i
done
args=`getopt abc:d $*`
set -- $args
echo "After getopt"
for i
do
  echo "-->$i"
done

What we have said is that any of -a, -b, -c or -d will be allowed, but that -c is followed by an argument (the "c:" says that).

If we call this "g" and try it out:

bash-2.05a$ ./g -abc foo
Before getopt
-abc
foo
After getopt
-->-a
-->-b
-->-c
-->foo
-->--

We start with two arguments, and "getopt" breaks apart the options and puts each in its own argument. It also added "--".

Robert Siemer ,Apr 16, 2016 at 14:37

Using $* is broken usage of getopt . (It hoses arguments with spaces.) See my answer for proper usage. – Robert Siemer Apr 16 '16 at 14:37

SDsolar ,Aug 10, 2017 at 14:07

Why would you want to make it more complicated? – SDsolar Aug 10 '17 at 14:07

thebunnyrules ,Jun 1 at 1:57

@Matt J, the first part of the script (for i) would be able to handle arguments with spaces in them if you use "$i" instead of $i. The getopts does not seem to be able to handle arguments with spaces. What would be the advantage of using getopt over the for i loop? – thebunnyrules Jun 1 at 1:57

bronson ,Jul 15, 2015 at 23:43

At the risk of adding another example to ignore, here's my scheme.
  • handles -n arg and --name=arg
  • allows arguments at the end
  • shows sane errors if anything is misspelled
  • compatible, doesn't use bashisms
  • readable, doesn't require maintaining state in a loop

Hope it's useful to someone.

while [ "$#" -gt 0 ]; do
  case "$1" in
    -n) name="$2"; shift 2;;
    -p) pidfile="$2"; shift 2;;
    -l) logfile="$2"; shift 2;;

    --name=*) name="${1#*=}"; shift 1;;
    --pidfile=*) pidfile="${1#*=}"; shift 1;;
    --logfile=*) logfile="${1#*=}"; shift 1;;
    --name|--pidfile|--logfile) echo "$1 requires an argument" >&2; exit 1;;

    -*) echo "unknown option: $1" >&2; exit 1;;
    *) handle_argument "$1"; shift 1;;
  esac
done

rhombidodecahedron ,Sep 11, 2015 at 8:40

What is the "handle_argument" function? – rhombidodecahedron Sep 11 '15 at 8:40

bronson ,Oct 8, 2015 at 20:41

Sorry for the delay. In my script, the handle_argument function receives all the non-option arguments. You can replace that line with whatever you'd like, maybe *) die "unrecognized argument: $1" or collect the args into a variable *) args+="$1"; shift 1;; . – bronson Oct 8 '15 at 20:41
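For readers wondering what such a function might look like, here is a minimal hedged sketch (the body is an assumption; bronson deliberately leaves it up to the caller):

# Simplest possible handle_argument: act on each non-option argument as it is seen.
handle_argument() {
  echo "positional argument: $1"
}

# Or, if bashisms are acceptable, collect them for later use:
# args=()
# handle_argument() { args+=("$1"); }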

Guilherme Garnier ,Apr 13 at 16:10

Amazing! I've tested a couple of answers, but this is the only one that worked for all cases, including many positional parameters (both before and after flags) – Guilherme Garnier Apr 13 at 16:10

Shane Day ,Jul 1, 2014 at 1:20

I'm about 4 years late to this question, but want to give back. I used the earlier answers as a starting point to tidy up my old adhoc param parsing. I then refactored out the following template code. It handles both long and short params, using = or space separated arguments, as well as multiple short params grouped together. Finally it re-inserts any non-param arguments back into the $1,$2.. variables. I hope it's useful.
#!/usr/bin/env bash

# NOTICE: Uncomment if your script depends on bashisms.
#if [ -z "$BASH_VERSION" ]; then bash $0 $@ ; exit $? ; fi

echo "Before"
for i ; do echo - $i ; done


# Code template for parsing command line parameters using only portable shell
# code, while handling both long and short params, handling '-f file' and
# '-f=file' style param data and also capturing non-parameters to be inserted
# back into the shell positional parameters.

while [ -n "$1" ]; do
        # Copy so we can modify it (can't modify $1)
        OPT="$1"
        # Detect argument termination
        if [ x"$OPT" = x"--" ]; then
                shift
                for OPT ; do
                        REMAINS="$REMAINS \"$OPT\""
                done
                break
        fi
        # Parse current opt
        while [ x"$OPT" != x"-" ] ; do
                case "$OPT" in
                        # Handle --flag=value opts like this
                        -c=* | --config=* )
                                CONFIGFILE="${OPT#*=}"
                                shift
                                ;;
                        # and --flag value opts like this
                        -c* | --config )
                                CONFIGFILE="$2"
                                shift
                                ;;
                        -f* | --force )
                                FORCE=true
                                ;;
                        -r* | --retry )
                                RETRY=true
                                ;;
                        # Anything unknown is recorded for later
                        * )
                                REMAINS="$REMAINS \"$OPT\""
                                break
                                ;;
                esac
                # Check for multiple short options
                # NOTICE: be sure to update this pattern to match valid options
                NEXTOPT="${OPT#-[cfr]}" # try removing single short opt
                if [ x"$OPT" != x"$NEXTOPT" ] ; then
                        OPT="-$NEXTOPT"  # multiple short opts, keep going
                else
                        break  # long form, exit inner loop
                fi
        done
        # Done with that param. move to next
        shift
done
# Set the non-parameters back into the positional parameters ($1 $2 ..)
eval set -- $REMAINS


echo -e "After: \n configfile='$CONFIGFILE' \n force='$FORCE' \n retry='$RETRY' \n remains='$REMAINS'"
for i ; do echo - $i ; done

Robert Siemer ,Dec 6, 2015 at 13:47

This code can't handle options with arguments like this: -c1 . And the use of = to separate short options from their arguments is unusual... – Robert Siemer Dec 6 '15 at 13:47

sfnd ,Jun 6, 2016 at 19:28

I ran into two problems with this useful chunk of code: 1) the "shift" in the case of "-c=foo" ends up eating the next parameter; and 2) 'c' should not be included in the "[cfr]" pattern for combinable short options. – sfnd Jun 6 '16 at 19:28

Inanc Gumus ,Nov 20, 2015 at 12:28

More succinct way

script.sh

#!/bin/bash

while [[ "$#" > 0 ]]; do case $1 in
  -d|--deploy) deploy="$2"; shift;;
  -u|--uglify) uglify=1;;
  *) echo "Unknown parameter passed: $1"; exit 1;;
esac; shift; done

echo "Should deploy? $deploy"
echo "Should uglify? $uglify"

Usage:

./script.sh -d dev -u

# OR:

./script.sh --deploy dev --uglify

hfossli ,Apr 7 at 20:58

This is what I am doing. I have to use while [[ "$#" > 1 ]] if I want to support ending the line with a boolean flag ./script.sh --debug dev --uglify fast --verbose . Example: gist.github.com/hfossli/4368aa5a577742c3c9f9266ed214aa58 – hfossli Apr 7 at 20:58

hfossli ,Apr 7 at 21:09

I sent an edit request. I just tested this and it works perfectly. – hfossli Apr 7 at 21:09

hfossli ,Apr 7 at 21:10

Wow! Simple and clean! This is how I'm using this: gist.github.com/hfossli/4368aa5a577742c3c9f9266ed214aa58 – hfossli Apr 7 at 21:10

Ponyboy47 ,Sep 8, 2016 at 18:59

My answer is largely based on the answer by Bruno Bronosky , but I sort of mashed his two pure bash implementations into one that I use pretty frequently.
# As long as there is at least one more argument, keep looping
while [[ $# -gt 0 ]]; do
    key="$1"
    case "$key" in
        # This is a flag type option. Will catch either -f or --foo
        -f|--foo)
        FOO=1
        ;;
        # Also a flag type option. Will catch either -b or --bar
        -b|--bar)
        BAR=1
        ;;
        # This is an arg value type option. Will catch -o value or --output-file value
        -o|--output-file)
        shift # past the key and to the value
        OUTPUTFILE="$1"
        ;;
        # This is an arg=value type option. Will catch -o=value or --output-file=value
        -o=*|--output-file=*)
        # No need to shift here since the value is part of the same string
        OUTPUTFILE="${key#*=}"
        ;;
        *)
        # Do whatever you want with extra options
        echo "Unknown option '$key'"
        ;;
    esac
    # Shift after checking all the cases to get the next option
    shift
done

This allows you to have both space separated options/values, as well as equal defined values.

So you could run your script using:

./myscript --foo -b -o /fizz/file.txt

as well as:

./myscript -f --bar -o=/fizz/file.txt

and both should have the same end result.

PROS:

  • Allows for both -arg=value and -arg value
  • Works with any arg name that you can use in bash
    • Meaning -a or -arg or --arg or -a-r-g or whatever
  • Pure bash. No need to learn/use getopt or getopts

CONS:

  • Can't combine args
    • Meaning no -abc. You must do -a -b -c

These are the only pros/cons I can think of off the top of my head

bubla ,Jul 10, 2016 at 22:40

I have found the matter to write portable parsing in scripts so frustrating that I have written Argbash - a FOSS code generator that can generate the arguments-parsing code for your script plus it has some nice features:

https://argbash.io

RichVel ,Aug 18, 2016 at 5:34

Thanks for writing argbash, I just used it and found it works well. I mostly went for argbash because it's a code generator supporting the older bash 3.x found on OS X 10.11 El Capitan. The only downside is that the code-generator approach means quite a lot of code in your main script, compared to calling a module. – RichVel Aug 18 '16 at 5:34

bubla ,Aug 23, 2016 at 20:40

You can actually use Argbash in a way that it produces tailor-made parsing library just for you that you can have included in your script or you can have it in a separate file and just source it. I have added an example to demonstrate that and I have made it more explicit in the documentation, too. – bubla Aug 23 '16 at 20:40

RichVel ,Aug 24, 2016 at 5:47

Good to know. That example is interesting but still not really clear - maybe you can change name of the generated script to 'parse_lib.sh' or similar and show where the main script calls it (like in the wrapping script section which is more complex use case). – RichVel Aug 24 '16 at 5:47

bubla ,Dec 2, 2016 at 20:12

The issues were addressed in a recent version of argbash: documentation has been improved, a quickstart argbash-init script has been introduced and you can even use argbash online at argbash.io/generate – bubla Dec 2 '16 at 20:12

Alek ,Mar 1, 2012 at 15:15

I think this one is simple enough to use:
#!/bin/bash
#

readopt='getopts $opts opt;rc=$?;[ $rc$opt == 0? ]&&exit 1;[ $rc == 0 ]||{ shift $[OPTIND-1];false; }'

opts=vfdo:

# Enumerating options
while eval $readopt
do
    echo OPT:$opt ${OPTARG+OPTARG:$OPTARG}
done

# Enumerating arguments
for arg
do
    echo ARG:$arg
done

Invocation example:

./myscript -v -do /fizz/someOtherFile -f ./foo/bar/someFile
OPT:v 
OPT:d 
OPT:o OPTARG:/fizz/someOtherFile
OPT:f 
ARG:./foo/bar/someFile

erm3nda ,May 20, 2015 at 22:50

I read them all and this one is my preferred one. I don't like to use -a=1 as argc style. I prefer to put the main options first ( -options ) and later the special ones with single spacing ( -o option ). I'm looking for the simplest-vs-best way to read argvs. – erm3nda May 20 '15 at 22:50

erm3nda ,May 20, 2015 at 23:25

It's working really well but if you pass an argument to a non a: option all the following options would be taken as arguments. You can check this line ./myscript -v -d fail -o /fizz/someOtherFile -f ./foo/bar/someFile with your own script. -d option is not set as d: – erm3nda May 20 '15 at 23:25

unsynchronized ,Jun 9, 2014 at 13:46

Expanding on the excellent answer by @guneysus, here is a tweak that lets the user use whichever syntax they prefer, e.g.
command -x=myfilename.ext --another_switch

vs

command -x myfilename.ext --another_switch

That is to say the equals can be replaced with whitespace.

This "fuzzy interpretation" might not be to your liking, but if you are making scripts that are interchangeable with other utilities (as is the case with mine, which must work with ffmpeg), the flexibility is useful.

STD_IN=0

prefix=""
key=""
value=""
for keyValue in "$@"
do
  case "${prefix}${keyValue}" in
    -i=*|--input_filename=*)  key="-i";     value="${keyValue#*=}";; 
    -ss=*|--seek_from=*)      key="-ss";    value="${keyValue#*=}";;
    -t=*|--play_seconds=*)    key="-t";     value="${keyValue#*=}";;
    -|--stdin)                key="-";      value=1;;
    *)                                      value=$keyValue;;
  esac
  case $key in
    -i) MOVIE=$(resolveMovie "${value}");  prefix=""; key="";;
    -ss) SEEK_FROM="${value}";          prefix=""; key="";;
    -t)  PLAY_SECONDS="${value}";           prefix=""; key="";;
    -)   STD_IN=${value};                   prefix=""; key="";; 
    *)   prefix="${keyValue}=";;
  esac
done

vangorra ,Feb 12, 2015 at 21:50

getopts works great if #1 you have it installed and #2 you intend to run it on the same platform. OSX and Linux (for example) behave differently in this respect.

Here is a (non getopts) solution that supports equals, non-equals, and boolean flags. For example you could run your script in this way:

./script --arg1=value1 --arg2 value2 --shouldClean

# parse the arguments.
COUNTER=0
ARGS=("$@")
while [ $COUNTER -lt $# ]
do
    arg=${ARGS[$COUNTER]}
    let COUNTER=COUNTER+1
    nextArg=${ARGS[$COUNTER]}

    if [[ $skipNext -eq 1 ]]; then
        echo "Skipping"
        skipNext=0
        continue
    fi

    argKey=""
    argVal=""
    if [[ "$arg" =~ ^\- ]]; then
        # if the format is: -key=value
        if [[ "$arg" =~ \= ]]; then
            argVal=$(echo "$arg" | cut -d'=' -f2)
            argKey=$(echo "$arg" | cut -d'=' -f1)
            skipNext=0

        # if the format is: -key value
        elif [[ ! "$nextArg" =~ ^\- ]]; then
            argKey="$arg"
            argVal="$nextArg"
            skipNext=1

        # if the format is: -key (a boolean flag)
        elif [[ "$nextArg" =~ ^\- ]] || [[ -z "$nextArg" ]]; then
            argKey="$arg"
            argVal=""
            skipNext=0
        fi
    # if the format has not flag, just a value.
    else
        argKey=""
        argVal="$arg"
        skipNext=0
    fi

    case "$argKey" in 
        --source-scmurl)
            SOURCE_URL="$argVal"
        ;;
        --dest-scmurl)
            DEST_URL="$argVal"
        ;;
        --version-num)
            VERSION_NUM="$argVal"
        ;;
        -c|--clean)
            CLEAN_BEFORE_START="1"
        ;;
        -h|--help|-help|--h)
            showUsage
            exit
        ;;
    esac
done

akostadinov ,Jul 19, 2013 at 7:50

This is how I do it in a function, to avoid breaking a getopts run happening at the same time somewhere higher in the stack:
function waitForWeb () {
   local OPTIND=1 OPTARG OPTION
   local host=localhost port=8080 proto=http
   while getopts "h:p:r:" OPTION; do
      case "$OPTION" in
      h)
         host="$OPTARG"
         ;;
      p)
         port="$OPTARG"
         ;;
      r)
         proto="$OPTARG"
         ;;
      esac
   done
...
}

Renato Silva ,Jul 4, 2016 at 16:47

EasyOptions does not require any parsing:
## Options:
##   --verbose, -v  Verbose mode
##   --output=FILE  Output filename

source easyoptions || exit

if test -n "${verbose}"; then
    echo "output file is ${output}"
    echo "${arguments[@]}"
fi

Oleksii Chekulaiev ,Jul 1, 2016 at 20:56

I give you The Function parse_params that will parse params:
  1. Without polluting global scope.
  2. Effortlessly returns to you ready to use variables so that you could build further logic on them
  3. Amount of dashes before params does not matter ( --all equals -all equals all=all )

The script below is a copy-paste working demonstration. See show_use function to understand how to use parse_params .

Limitations:

  1. Does not support space delimited params ( -d 1 )
  2. Param names will lose dashes so --any-param and -anyparam are equivalent
  3. eval $(parse_params "$@") must be used inside bash function (it will not work in the global scope)

#!/bin/bash

# Universal Bash parameter parsing
# Parse equal sign separated params into named local variables
# Standalone named parameter value will equal its param name (--force creates variable $force=="force")
# Parses multi-valued named params into an array (--path=path1 --path=path2 creates ${path[*]} array)
# Parses un-named params into ${ARGV[*]} array
# Additionally puts all named params into ${ARGN[*]} array
# Additionally puts all standalone "option" params into ${ARGO[*]} array
# @author Oleksii Chekulaiev
# @version v1.3 (May-14-2018)
parse_params ()
{
    local existing_named
    local ARGV=() # un-named params
    local ARGN=() # named params
    local ARGO=() # options (--params)
    echo "local ARGV=(); local ARGN=(); local ARGO=();"
    while [[ "$1" != "" ]]; do
        # Escape asterisk to prevent bash asterisk expansion
        _escaped=${1/\*/\'\"*\"\'}
        # If equals delimited named parameter
        if [[ "$1" =~ ^..*=..* ]]; then
            # Add to named parameters array
            echo "ARGN+=('$_escaped');"
            # key is part before first =
            local _key=$(echo "$1" | cut -d = -f 1)
            # val is everything after key and = (protect from param==value error)
            local _val="${1/$_key=}"
            # remove dashes from key name
            _key=${_key//\-}
            # search for existing parameter name
            if (echo "$existing_named" | grep "\b$_key\b" >/dev/null); then
                # if name already exists then it's a multi-value named parameter
                # re-declare it as an array if needed
                if ! (declare -p _key 2> /dev/null | grep -q 'declare \-a'); then
                    echo "$_key=(\"\$$_key\");"
                fi
                # append new value
                echo "$_key+=('$_val');"
            else
                # single-value named parameter
                echo "local $_key=\"$_val\";"
                existing_named=" $_key"
            fi
        # If standalone named parameter
        elif [[ "$1" =~ ^\-. ]]; then
            # Add to options array
            echo "ARGO+=('$_escaped');"
            # remove dashes
            local _key=${1//\-}
            echo "local $_key=\"$_key\";"
        # non-named parameter
        else
            # Escape asterisk to prevent bash asterisk expansion
            _escaped=${1/\*/\'\"*\"\'}
            echo "ARGV+=('$_escaped');"
        fi
        shift
    done
}

#--------------------------- DEMO OF THE USAGE -------------------------------

show_use ()
{
    eval $(parse_params "$@")
    # --
    echo "${ARGV[0]}" # print first unnamed param
    echo "${ARGV[1]}" # print second unnamed param
    echo "${ARGN[0]}" # print first named param
    echo "${ARG0[0]}" # print first option param (--force)
    echo "$anyparam"  # print --anyparam value
    echo "$k"         # print k=5 value
    echo "${multivalue[0]}" # print first value of multi-value
    echo "${multivalue[1]}" # print second value of multi-value
    [[ "$force" == "force" ]] && echo "\$force is set so let the force be with you"
}

show_use "param 1" --anyparam="my value" param2 k=5 --force --multi-value=test1 --multi-value=test2

Oleksii Chekulaiev ,Sep 28, 2016 at 12:55

To use the demo to parse params that come into your bash script you just do show_use "$@" – Oleksii Chekulaiev Sep 28 '16 at 12:55

Oleksii Chekulaiev ,Sep 28, 2016 at 12:58

Basically I found out that github.com/renatosilva/easyoptions does the same in the same way but is a bit more massive than this function. – Oleksii Chekulaiev Sep 28 '16 at 12:58

galmok ,Jun 24, 2015 at 10:54

I'd like to offer my version of option parsing, that allows for the following:
-s p1
--stage p1
-w somefolder
--workfolder somefolder
-sw p1 somefolder
-e=hello

Also allows for this (could be unwanted):

-s--workfolder p1 somefolder
-se=hello p1
-swe=hello p1 somefolder

You have to decide before use if = is to be used on an option or not. This is to keep the code clean(ish).

while [[ $# > 0 ]]
do
    key="$1"
    while [[ ${key+x} ]]
    do
        case $key in
            -s*|--stage)
                STAGE="$2"
                shift # option has parameter
                ;;
            -w*|--workfolder)
                workfolder="$2"
                shift # option has parameter
                ;;
            -e=*)
                EXAMPLE="${key#*=}"
                break # option has been fully handled
                ;;
            *)
                # unknown option
                echo Unknown option: $key #1>&2
                exit 10 # either this: my preferred way to handle unknown options
                break # or this: do this to signal the option has been handled (if exit isn't used)
                ;;
        esac
        # prepare for next option in this key, if any
        [[ "$key" = -? || "$key" == --* ]] && unset key || key="${key/#-?/-}"
    done
    shift # option(s) fully processed, proceed to next input argument
done

Luca Davanzo ,Nov 14, 2016 at 17:56

what's the meaning for "+x" on ${key+x} ? – Luca Davanzo Nov 14 '16 at 17:56

galmok ,Nov 15, 2016 at 9:10

It is a test to see if 'key' is present or not. Further down I unset key and this breaks the inner while loop. – galmok Nov 15 '16 at 9:10

Mark Fox ,Apr 27, 2015 at 2:42

Mixing positional and flag-based arguments

--param=arg (equals delimited)

Freely mixing flags between positional arguments:

./script.sh dumbo 127.0.0.1 --environment=production -q -d
./script.sh dumbo --environment=production 127.0.0.1 --quiet -d

can be accomplished with a fairly concise approach:

# process flags
pointer=1
while [[ $pointer -le $# ]]; do
   param=${!pointer}
   if [[ $param != "-"* ]]; then ((pointer++)) # not a parameter flag so advance pointer
   else
      case $param in
         # parameter-flags with arguments
         -e=*|--environment=*) environment="${param#*=}";;
                  --another=*) another="${param#*=}";;

         # binary flags
         -q|--quiet) quiet=true;;
                 -d) debug=true;;
      esac

      # splice out pointer frame from positional list
      [[ $pointer -gt 1 ]] \
         && set -- ${@:1:((pointer - 1))} ${@:((pointer + 1)):$#} \
         || set -- ${@:((pointer + 1)):$#};
   fi
done

# positional remain
node_name=$1
ip_address=$2

--param arg (space delimited)

It's usually clearer not to mix --flag=value and --flag value styles.

./script.sh dumbo 127.0.0.1 --environment production -q -d

This is a little dicey to read, but is still valid

./script.sh dumbo --environment production 127.0.0.1 --quiet -d

Source

# process flags
pointer=1
while [[ $pointer -le $# ]]; do
   if [[ ${!pointer} != "-"* ]]; then ((pointer++)) # not a parameter flag so advance pointer
   else
      param=${!pointer}
      ((pointer_plus = pointer + 1))
      slice_len=1

      case $param in
         # parameter-flags with arguments
         -e|--environment) environment=${!pointer_plus}; ((slice_len++));;
                --another) another=${!pointer_plus}; ((slice_len++));;

         # binary flags
         -q|--quiet) quiet=true;;
                 -d) debug=true;;
      esac

      # splice out pointer frame from positional list
      [[ $pointer -gt 1 ]] \
         && set -- ${@:1:((pointer - 1))} ${@:((pointer + $slice_len)):$#} \
         || set -- ${@:((pointer + $slice_len)):$#};
   fi
done

# positional remain
node_name=$1
ip_address=$2

schily ,Oct 19, 2015 at 13:59

Note that getopt(1) was a short-lived mistake from AT&T.

getopt was created in 1984 but already buried in 1986 because it was not really usable.

Proof that getopt is very outdated is that the getopt(1) man page still mentions "$*" instead of "$@" , which was added to the Bourne Shell in 1986 together with the getopts(1) shell builtin in order to deal with arguments containing spaces.

BTW: if you are interested in parsing long options in shell scripts, it may be of interest to know that the getopt(3) implementation from libc (Solaris) and ksh93 both added a uniform long option implementation that supports long options as aliases for short options. This causes ksh93 and the Bourne Shell to implement a uniform interface for long options via getopts .

An example for long options taken from the Bourne Shell man page:

getopts "f:(file)(input-file)o:(output-file)" OPTX "$@"

shows how long option aliases may be used in both Bourne Shell and ksh93.

See the man page of a recent Bourne Shell:

http://schillix.sourceforge.net/man/man1/bosh.1.html

and the man page for getopt(3) from OpenSolaris:

http://schillix.sourceforge.net/man/man3c/getopt.3c.html

and last, the getopt(1) man page to verify the outdated $*:

http://schillix.sourceforge.net/man/man1/getopt.1.html

Volodymyr M. Lisivka ,Jul 9, 2013 at 16:51

Use module "arguments" from bash-modules

Example:

#!/bin/bash
. import.sh log arguments

NAME="world"

parse_arguments "-n|--name)NAME;S" -- "$@" || {
  error "Cannot parse command line."
  exit 1
}

info "Hello, $NAME!"

Mike Q ,Jun 14, 2014 at 18:01

This also might be useful to know: you can set a default value and, if someone provides input, override the default with that value.

myscript.sh -f ./serverlist.txt or just ./myscript.sh (and it takes defaults)

    #!/bin/bash
    # --- set the value, if there is inputs, override the defaults.

    HOME_FOLDER="${HOME}/owned_id_checker"
    SERVER_FILE_LIST="${HOME_FOLDER}/server_list.txt"

    while [[ $# > 1 ]]
    do
    key="$1"
    shift

    case $key in
        -i|--inputlist)
        SERVER_FILE_LIST="$1"
        shift
        ;;
    esac
    done


    echo "SERVER LIST   = ${SERVER_FILE_LIST}"

phk ,Oct 17, 2015 at 21:17

Another solution without getopt[s], POSIX, old Unix style

Similar to the solution Bruno Bronosky posted this here is one without the usage of getopt(s) .

The main differentiating feature of my solution is that it allows options to be concatenated together, just like tar -xzf foo.tar.gz is equal to tar -x -z -f foo.tar.gz . And just like in tar , ps etc. the leading hyphen is optional for a block of short options (but this can be changed easily). Long options are supported as well (but when a block starts with one, then two leading hyphens are required).

Code with example options
#!/bin/sh

echo
echo "POSIX-compliant getopt(s)-free old-style-supporting option parser from phk@[se.unix]"
echo

print_usage() {
  echo "Usage:

  $0 {a|b|c} [ARG...]

Options:

  --aaa-0-args
  -a
    Option without arguments.

  --bbb-1-args ARG
  -b ARG
    Option with one argument.

  --ccc-2-args ARG1 ARG2
  -c ARG1 ARG2
    Option with two arguments.

" >&2
}

if [ $# -le 0 ]; then
  print_usage
  exit 1
fi

opt=
while :; do

  if [ $# -le 0 ]; then

    # no parameters remaining -> end option parsing
    break

  elif [ ! "$opt" ]; then

    # we are at the beginning of a fresh block
    # remove optional leading hyphen and strip trailing whitespaces
    opt=$(echo "$1" | sed 's/^-\?\([a-zA-Z0-9\?-]*\)/\1/')

  fi

  # get the first character -> check whether long option
  first_chr=$(echo "$opt" | awk '{print substr($1, 1, 1)}')
  [ "$first_chr" = - ] && long_option=T || long_option=F

  # note to write the options here with a leading hyphen less
  # also do not forget to end short options with a star
  case $opt in

    -)

      # end of options
      shift
      break
      ;;

    a*|-aaa-0-args)

      echo "Option AAA activated!"
      ;;

    b*|-bbb-1-args)

      if [ "$2" ]; then
        echo "Option BBB with argument '$2' activated!"
        shift
      else
        echo "BBB parameters incomplete!" >&2
        print_usage
        exit 1
      fi
      ;;

    c*|-ccc-2-args)

      if [ "$2" ] && [ "$3" ]; then
        echo "Option CCC with arguments '$2' and '$3' activated!"
        shift 2
      else
        echo "CCC parameters incomplete!" >&2
        print_usage
        exit 1
      fi
      ;;

    h*|\?*|-help)

      print_usage
      exit 0
      ;;

    *)

      if [ "$long_option" = T ]; then
        opt=$(echo "$opt" | awk '{print substr($1, 2)}')
      else
        opt=$first_chr
      fi
      printf 'Error: Unknown option: "%s"\n' "$opt" >&2
      print_usage
      exit 1
      ;;

  esac

  if [ "$long_option" = T ]; then

    # if we had a long option then we are going to get a new block next
    shift
    opt=

  else

    # if we had a short option then just move to the next character
    opt=$(echo "$opt" | awk '{print substr($1, 2)}')

    # if block is now empty then shift to the next one
    [ "$opt" ] || shift

  fi

done

echo "Doing something..."

exit 0

For the example usage please see the examples further below.

Position of options with arguments

For what it's worth, options with arguments don't have to be the last one in a block (only long options need to be). So while e.g. in tar (at least in some implementations) the f option needs to be last because the file name follows ( tar xzf bar.tar.gz works but tar xfz bar.tar.gz does not), this is not the case here (see the later examples).

Multiple options with arguments

As another bonus, the option arguments are consumed in the order of the options that require them. Just look at the output of my script here with the command line abc X Y Z (or -abc X Y Z ):

Option AAA activated!
Option BBB with argument 'X' activated!
Option CCC with arguments 'Y' and 'Z' activated!

Long options concatenated as well

Also, you can have long options in an option block, given that they occur last in the block. So the following command lines are all equivalent (including the order in which the options and its arguments are being processed):

  • -cba Z Y X
  • cba Z Y X
  • -cb-aaa-0-args Z Y X
  • -c-bbb-1-args Z Y X -a
  • --ccc-2-args Z Y -ba X
  • c Z Y b X a
  • -c Z Y -b X -a
  • --ccc-2-args Z Y --bbb-1-args X --aaa-0-args

All of these lead to:

Option CCC with arguments 'Z' and 'Y' activated!
Option BBB with argument 'X' activated!
Option AAA activated!
Doing something...
Not in this solution

Optional arguments

Options with optional arguments should be possible with a bit of work, e.g. by looking ahead to see whether there is a block without a hyphen; the user would then need to put a hyphen in front of every block following a block with an option that takes an optional argument. Maybe this is too complicated to communicate to the user, so it is better to just require a leading hyphen altogether in this case.

Things get even more complicated with multiple possible arguments. I would advise against making the options try to be smart by determining whether an argument might belong to them or not (e.g. an option that just takes a number as an optional argument), because this might break in the future.

I personally favor additional options instead of optional arguments.

Option arguments introduced with an equal sign

Just like with optional arguments, I am not a fan of this (BTW, is there a thread for discussing the pros/cons of different parameter styles?), but if you want this you could probably implement it yourself, just as done at http://mywiki.wooledge.org/BashFAQ/035#Manual_loop with a --long-with-arg=?* case statement and then stripping the equal sign (this is, BTW, the site that says that parameter concatenation is possible with some effort but was "left [it] as an exercise for the reader", which made me take them at their word, but I started from scratch).
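A standalone hedged sketch of that --long-with-arg=?* idea (the option name --logfile is made up; this is not wired into the parser above):

#!/bin/sh
logfile=
for arg in "$@"; do
    case $arg in
        --logfile=?*)
            logfile=${arg#*=}   # strip everything up to and including the '='
            ;;
        --logfile|--logfile=)
            echo "--logfile requires a value" >&2
            exit 1
            ;;
    esac
done
echo "logfile=$logfile"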

Other notes

POSIX-compliant, works even on ancient Busybox setups I had to deal with (with e.g. cut , head and getopts missing).

Noah ,Aug 29, 2016 at 3:44

Solution that preserves unhandled arguments. Demos Included.

Here is my solution. It is VERY flexible and unlike others, shouldn't require external packages and handles leftover arguments cleanly.

Usage is: ./myscript -flag flagvariable -otherflag flagvar2

All you have to do is edit the validflags line. It prepends a hyphen to each flag name and searches all arguments for it. It then assigns the next argument as that flag's value, e.g.

./myscript -flag flagvariable -otherflag flagvar2
echo $flag $otherflag
flagvariable flagvar2

The main code (short version, verbose with examples further down, also a version with erroring out):

#!/usr/bin/env bash
#shebang.io
validflags="rate time number"
count=1
for arg in $@
do
    match=0
    argval=$1
    for flag in $validflags
    do
        sflag="-"$flag
        if [ "$argval" == "$sflag" ]
        then
            declare $flag=$2
            match=1
        fi
    done
        if [ "$match" == "1" ]
    then
        shift 2
    else
        leftovers=$(echo $leftovers $argval)
        shift
    fi
    count=$(($count+1))
done
#Cleanup then restore the leftovers
shift $#
set -- $leftovers

The verbose version with built in echo demos:

#!/usr/bin/env bash
#shebang.io
rate=30
time=30
number=30
echo "all args
$@"
validflags="rate time number"
count=1
for arg in $@
do
    match=0
    argval=$1
#   argval=$(echo $@ | cut -d ' ' -f$count)
    for flag in $validflags
    do
            sflag="-"$flag
        if [ "$argval" == "$sflag" ]
        then
            declare $flag=$2
            match=1
        fi
    done
        if [ "$match" == "1" ]
    then
        shift 2
    else
        leftovers=$(echo $leftovers $argval)
        shift
    fi
    count=$(($count+1))
done

#Cleanup then restore the leftovers
echo "pre final clear args:
$@"
shift $#
echo "post final clear args:
$@"
set -- $leftovers
echo "all post set args:
$@"
echo arg1: $1 arg2: $2

echo leftovers: $leftovers
echo rate $rate time $time number $number

Final one, this one errors out if an invalid -argument is passed through.

#!/usr/bin/env bash
#shebang.io
rate=30
time=30
number=30
validflags="rate time number"
count=1
for arg in $@
do
    argval=$1
    match=0
        if [ "${argval:0:1}" == "-" ]
    then
        for flag in $validflags
        do
                sflag="-"$flag
            if [ "$argval" == "$sflag" ]
            then
                declare $flag=$2
                match=1
            fi
        done
        if [ "$match" == "0" ]
        then
            echo "Bad argument: $argval"
            exit 1
        fi
        shift 2
    else
        leftovers=$(echo $leftovers $argval)
        shift
    fi
    count=$(($count+1))
done
#Cleanup then restore the leftovers
shift $#
set -- $leftovers
echo rate $rate time $time number $number
echo leftovers: $leftovers

Pros: What it does, it handles very well. It preserves unused arguments which a lot of the other solutions here don't. It also allows for variables to be called without being defined by hand in the script. It also allows prepopulation of variables if no corresponding argument is given. (See verbose example).

Cons: Can't parse a single complex arg string e.g. -xcvf would process as a single argument. You could somewhat easily write additional code into mine that adds this functionality though.

Daniel Bigham ,Aug 8, 2016 at 12:42

The top answer to this question seemed a bit buggy when I tried it -- here's my solution which I've found to be more robust:
boolean_arg=""
arg_with_value=""

while [[ $# -gt 0 ]]
do
key="$1"
case $key in
    -b|--boolean-arg)
    boolean_arg=true
    shift
    ;;
    -a|--arg-with-value)
    arg_with_value="$2"
    shift
    shift
    ;;
    -*)
    echo "Unknown option: $1"
    exit 1
    ;;
    *)
    arg_num=$(( $arg_num + 1 ))
    case $arg_num in
        1)
        first_normal_arg="$1"
        shift
        ;;
        2)
        second_normal_arg="$1"
        shift
        ;;
        *)
        bad_args=TRUE
    esac
    ;;
esac
done

# Handy to have this here when adding arguments to
# see if they're working. Just edit the '0' to be '1'.
if [[ 0 == 1 ]]; then
    echo "first_normal_arg: $first_normal_arg"
    echo "second_normal_arg: $second_normal_arg"
    echo "boolean_arg: $boolean_arg"
    echo "arg_with_value: $arg_with_value"
    exit 0
fi

if [[ $bad_args == TRUE || $arg_num < 2 ]]; then
    echo "Usage: $(basename "$0") <first-normal-arg> <second-normal-arg> [--boolean-arg] [--arg-with-value VALUE]"
    exit 1
fi

phyatt ,Sep 7, 2016 at 18:25

This example shows how to use getopt and eval and HEREDOC and shift to handle short and long parameters with and without a required value that follows. Also the switch/case statement is concise and easy to follow.
#!/usr/bin/env bash

# usage function
function usage()
{
   cat << HEREDOC

   Usage: $progname [--num NUM] [--time TIME_STR] [--verbose] [--dry-run]

   optional arguments:
     -h, --help           show this help message and exit
     -n, --num NUM        pass in a number
     -t, --time TIME_STR  pass in a time string
     -v, --verbose        increase the verbosity of the bash script
     --dry-run            do a dry run, don't change any files

HEREDOC
}  

# initialize variables
progname=$(basename $0)
verbose=0
dryrun=0
num_str=
time_str=

# use getopt and store the output into $OPTS
# note the use of -o for the short options, --long for the long name options
# and a : for any option that takes a parameter
OPTS=$(getopt -o "hn:t:v" --long "help,num:,time:,verbose,dry-run" -n "$progname" -- "$@")
if [ $? != 0 ] ; then echo "Error in command line arguments." >&2 ; usage; exit 1 ; fi
eval set -- "$OPTS"

while true; do
  # uncomment the next line to see how shift is working
  # echo "\$1:\"$1\" \$2:\"$2\""
  case "$1" in
    -h | --help ) usage; exit; ;;
    -n | --num ) num_str="$2"; shift 2 ;;
    -t | --time ) time_str="$2"; shift 2 ;;
    --dry-run ) dryrun=1; shift ;;
    -v | --verbose ) verbose=$((verbose + 1)); shift ;;
    -- ) shift; break ;;
    * ) break ;;
  esac
done

if (( $verbose > 0 )); then

   # print out all the parameters we read in
   cat <<-EOM
   num=$num_str
   time=$time_str
   verbose=$verbose
   dryrun=$dryrun
EOM
fi

# The rest of your script below

The most significant lines of the script above are these:

OPTS=$(getopt -o "hn:t:v" --long "help,num:,time:,verbose,dry-run" -n "$progname" -- "$@")
if [ $? != 0 ] ; then echo "Error in command line arguments." >&2 ; exit 1 ; fi
eval set -- "$OPTS"

while true; do
  case "$1" in
    -h | --help ) usage; exit; ;;
    -n | --num ) num_str="$2"; shift 2 ;;
    -t | --time ) time_str="$2"; shift 2 ;;
    --dry-run ) dryrun=1; shift ;;
    -v | --verbose ) verbose=$((verbose + 1)); shift ;;
    -- ) shift; break ;;
    * ) break ;;
  esac
done

Short, to the point, readable, and handles just about everything (IMHO).

Hope that helps someone.

Emeric Verschuur ,Feb 20, 2017 at 21:30

I have written a bash helper for writing nice bash tools

project home: https://gitlab.mbedsys.org/mbedsys/bashopts

example:

#!/bin/bash -ei

# load the library
. bashopts.sh

# Enable backtrace display on error
trap 'bashopts_exit_handle' ERR

# Initialize the library
bashopts_setup -n "$0" -d "This is myapp tool description displayed on help message" -s "$HOME/.config/myapprc"

# Declare the options
bashopts_declare -n first_name -l first -o f -d "First name" -t string -i -s -r
bashopts_declare -n last_name -l last -o l -d "Last name" -t string -i -s -r
bashopts_declare -n display_name -l display-name -t string -d "Display name" -e "\$first_name \$last_name"
bashopts_declare -n age -l number -d "Age" -t number
bashopts_declare -n email_list -t string -m add -l email -d "Email address"

# Parse arguments
bashopts_parse_args "$@"

# Process argument
bashopts_process_args

will give help:

NAME:
    ./example.sh - This is myapp tool description displayed on help message

USAGE:
    [options and commands] [-- [extra args]]

OPTIONS:
    -h,--help                          Display this help
    -n,--non-interactive true          Non interactive mode - [$bashopts_non_interactive] (type:boolean, default:false)
    -f,--first "John"                  First name - [$first_name] (type:string, default:"")
    -l,--last "Smith"                  Last name - [$last_name] (type:string, default:"")
    --display-name "John Smith"        Display name - [$display_name] (type:string, default:"$first_name $last_name")
    --number 0                         Age - [$age] (type:number, default:0)
    --email                            Email address - [$email_list] (type:string, default:"")

enjoy :)

Josh Wulf ,Jun 24, 2017 at 18:07

I get this on Mac OS X: lib/bashopts.sh: line 138: declare: -A: invalid option; declare: usage: declare [-afFirtx] [-p] [name[=value] ...]; Error in lib/bashopts.sh:138. 'declare -x -A bashopts_optprop_name' exited with status 2. Call tree: 1: lib/controller.sh:4 source(...) Exiting with status 1 – Josh Wulf Jun 24 '17 at 18:07

Josh Wulf ,Jun 24, 2017 at 18:17

You need Bash version 4 to use this. On Mac, the default version is 3. You can use home brew to install bash 4. – Josh Wulf Jun 24 '17 at 18:17
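For reference, the Homebrew step the comment refers to is typically just (formula name assumed):

brew install bash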

a_z ,Mar 15, 2017 at 13:24

Here is my approach - using regexp.
  • no getopts
  • it handles block of short parameters -qwerty
  • it handles short parameters -q -w -e
  • it handles long options --qwerty
  • you can pass an attribute to a short or long option (if you are using a block of short options, the attribute is attached to the last option)
  • you can use spaces or = to provide attributes, but an attribute matches until encountering the hyphen+space "delimiter", so in --q=qwe ty , qwe ty is one attribute
  • it handles a mix of all of the above, so -o a -op attr ibute --option=att ribu te --op-tion attribute --option att-ribute is valid

script:

#!/usr/bin/env sh

help_menu() {
  echo "Usage:

  ${0##*/} [-h][-l FILENAME][-d]

Options:

  -h, --help
    display this help and exit

  -l, --logfile=FILENAME
    filename

  -d, --debug
    enable debug
  "
}

parse_options() {
  case $opt in
    h|help)
      help_menu
      exit
     ;;
    l|logfile)
      logfile=${attr}
      ;;
    d|debug)
      debug=true
      ;;
    *)
      echo "Unknown option: ${opt}\nRun ${0##*/} -h for help.">&2
      exit 1
  esac
}
options=$@

until [ "$options" = "" ]; do
  if [[ $options =~ (^ *(--([a-zA-Z0-9-]+)|-([a-zA-Z0-9-]+))(( |=)(([\_\.\?\/\\a-zA-Z0-9]?[ -]?[\_\.\?a-zA-Z0-9]+)+))?(.*)|(.+)) ]]; then
    if [[ ${BASH_REMATCH[3]} ]]; then # for --option[=][attribute] or --option[=][attribute]
      opt=${BASH_REMATCH[3]}
      attr=${BASH_REMATCH[7]}
      options=${BASH_REMATCH[9]}
    elif [[ ${BASH_REMATCH[4]} ]]; then # for block options -qwert[=][attribute] or single short option -a[=][attribute]
      pile=${BASH_REMATCH[4]}
      while (( ${#pile} > 1 )); do
        opt=${pile:0:1}
        attr=""
        pile=${pile/${pile:0:1}/}
        parse_options
      done
      opt=$pile
      attr=${BASH_REMATCH[7]}
      options=${BASH_REMATCH[9]}
    else # leftovers that don't match
      opt=${BASH_REMATCH[10]}
      options=""
    fi
    parse_options
  fi
done

mauron85 ,Jun 21, 2017 at 6:03

Like this one. Maybe just add -e param to echo with new line. – mauron85 Jun 21 '17 at 6:03

John ,Oct 10, 2017 at 22:49

Assume we create a shell script named test_args.sh as follows
#!/bin/sh
until [ $# -eq 0 ]
do
  name=${1:1}; shift;
  if [[ -z "$1" || $1 == -* ]] ; then eval "export $name=true"; else eval "export $name=$1"; shift; fi  
done
echo "year=$year month=$month day=$day flag=$flag"

After we run the following command:

sh test_args.sh  -year 2017 -flag  -month 12 -day 22

The output would be:

year=2017 month=12 day=22 flag=true

Will Barnwell ,Oct 10, 2017 at 23:57

This takes the same approach as Noah's answer , but has less safety checks / safeguards. This allows us to write arbitrary arguments into the script's environment and I'm pretty sure your use of eval here may allow command injection. – Will Barnwell Oct 10 '17 at 23:57
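A hedged illustration of that risk: because test_args.sh runs eval "export $name=$1" , a value containing a command substitution is executed rather than merely stored (the touch target below is a throwaway example).

# The value is expanded by eval inside test_args.sh, so the command
# substitution actually runs and creates /tmp/pwned:
sh test_args.sh -name '$(touch /tmp/pwned)'
ls -l /tmp/pwned   # the file now exists: arbitrary command execution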

Masadow ,Oct 6, 2015 at 8:53

Here is my improved version of Bruno Bronosky's answer, using variable arrays.

It lets you mix parameter positions and gives you a parameter array that preserves the order, without the options.

#!/bin/bash

echo $@

PARAMS=()
SOFT=0
SKIP=()
for i in "$@"
do
case $i in
    -n=*|--skip=*)
    SKIP+=("${i#*=}")
    ;;
    -s|--soft)
    SOFT=1
    ;;
    *)
        # unknown option
        PARAMS+=("$i")
    ;;
esac
done
echo "SKIP            = ${SKIP[@]}"
echo "SOFT            = $SOFT"
    echo "Parameters:"
    echo ${PARAMS[@]}

Will output for example:

$ ./test.sh parameter -s somefile --skip=.c --skip=.obj
parameter -s somefile --skip=.c --skip=.obj
SKIP            = .c .obj
SOFT            = 1
Parameters:
parameter somefile

Jason S ,Dec 3, 2017 at 1:01

You use shift on the known arguments and not on the unknown ones so your remaining $@ will be all but the first two arguments (in the order they are passed in), which could lead to some mistakes if you try to use $@ later. You don't need the shift for the = parameters, since you're not handling spaces and you're getting the value with the substring removal #*= – Jason S Dec 3 '17 at 1:01

Masadow ,Dec 5, 2017 at 9:17

You're right, in fact, since I build a PARAMS variable, I don't need to use shift at all – Masadow Dec 5 '17 at 9:17

[Jul 03, 2018] A large collection of Unix-Linux 'find' command examples by Alvin Alexander

May 16, 2018 | alvinalexander.com
Abridged 'find' command examples

If you just want to see some examples and skip the reading, here are a little more than thirty find command examples to get you started. Almost every command is followed by a short description to explain the command; others are described more fully at the URLs shown:

basic 'find file' commands
--------------------------
find / -name foo.txt -type f -print # full command
find / -name foo.txt -type f # -print isn't necessary
find / -name foo.txt # don't have to specify "type==file"
find . -name foo.txt # search under the current dir
find . -name "foo.*" # wildcard
find . -name "*.txt" # wildcard
find /users/al -name Cookbook -type d # search '/users/al' dir

search multiple dirs
--------------------
find /opt /usr /var -name foo.scala -type f # search multiple dirs

case-insensitive searching
--------------------------
find . -iname foo # find foo, Foo, FOo, FOO, etc.
find . -iname foo -type d # same thing, but only dirs
find . -iname foo -type f # same thing, but only files

find files with different extensions
------------------------------------
find . -type f \( -name "*.c" -o -name "*.sh" \) # *.c and *.sh files
find . -type f \( -name "*cache" -o -name "*xml" -o -name "*html" \) # three patterns

find files that don't match a pattern (-not)
--------------------------------------------
find . -type f -not -name "*.html" # find all files not ending in ".html"

find files by text in the file (find + grep)
--------------------------------------------
find . -type f -name "*.java" -exec grep -l StringBuffer {} \; # find StringBuffer in all *.java files
find . -type f -name "*.java" -exec grep -il string {} \; # ignore case with -i option
find . -type f -name "*.gz" -exec zgrep 'GET /foo' {} \; # search for a string in gzip'd files

5 lines before, 10 lines after grep matches
-------------------------------------------
find . -type f -name "*.scala" -exec grep -B5 -A10 'null' {} \;
(see http://alvinalexander.com/linux-unix/find-grep-print-lines-before-after-search-term)

find files and act on them (find + exec)
----------------------------------------
find /usr/local -name "*.html" -type f -exec chmod 644 {} \; # change html files to mode 644
find htdocs cgi-bin -name "*.cgi" -type f -exec chmod 755 {} \; # change cgi files to mode 755
find . -name "*.pl" -exec ls -ld {} \; # run ls command on files found

find and copy
-------------
find . -type f -name "*.mp3" -exec cp {} /tmp/MusicFiles \; # cp *.mp3 files to /tmp/MusicFiles

copy one file to many dirs
--------------------------
find dir1 dir2 dir3 dir4 -type d -exec cp header.shtml {} \; # copy the file header.shtml to those dirs

find and delete
---------------
find . -type f -name "Foo*" -exec rm {} \; # remove all "Foo*" files under current dir
find . -type d -name CVS -exec rm -r {} \; # remove all subdirectories named "CVS" under current dir

find files by modification time
-------------------------------
find . -mtime 1 # 24 hours
find . -mtime -7 # last 7 days
find . -mtime -7 -type f # just files
find . -mtime -7 -type d # just dirs

find files by modification time using a temp file
-------------------------------------------------
touch -t 09301330 poop # 1) create a temp file with a specific timestamp (-t MMDDhhmm)
find . -newer poop # 2) returns files modified more recently than the temp file
rm poop # 3) rm the temp file

find with time: this works on mac os x
--------------------------------------
find / -newerct '1 minute ago' -print

find and tar
------------
find . -type f -name "*.java" | xargs tar cvf myfile.tar
find . -type f -name "*.java" | xargs tar rvf myfile.tar
(see http://alvinalexander.com/blog/post/linux-unix/using-find-xargs-tar-create-huge-archive-cygwin-linux-unix
for more information)

find, tar, and xargs
--------------------
find . -type f -name '*.mp3' -mtime -180 -print0 | xargs -0 tar rvf music.tar
(-print0 helps handle spaces in filenames)
(see http://alvinalexander.com/mac-os-x/mac-backup-filename-directories-spaces-find-tar-xargs)

find and pax (instead of xargs and tar)
---------------------------------------
find . -type f -name "*html" | xargs tar cvf jw-htmlfiles.tar -
find . -type f -name "*html" | pax -w -f jw-htmlfiles.tar

[Jul 02, 2018] How can I detect whether a symlink is broken in Bash - Stack Overflow

Jul 02, 2018 | stackoverflow.com

How can I detect whether a symlink is broken in Bash? Ask Question up vote 38 down vote favorite 7


zoltanctoth ,Nov 8, 2011 at 10:39

I run find and iterate through the results with [ \( -L $F \) ] to collect certain symbolic links.

I am wondering if there is an easy way to determine if the link is broken (points to a non-existent file) in this scenario.

Here is my code:

FILES=`find /target/ | grep -v '\.disabled$' | sort`

for F in $FILES; do
    if [ -L $F ]; then
        DO THINGS
    fi
done

Roger ,Nov 8, 2011 at 10:45

# test if file exists (test actual file, not symbolic link)
if [ ! -e "$F" ] ; then
    # code if the symlink is broken
fi

Calimo ,Apr 18, 2017 at 19:50

Note that the code will also be executed if the file does not exist at all. It is fine with find but in other scenarios (such as globs) should be combined with -h to handle this case, for instance [ -h "$F" -a ! -e "$F" ] . – Calimo Apr 18 '17 at 19:50

Sridhar-Sarnobat ,Jul 13, 2017 at 22:36

You're not really testing the symbolic link with this approach. – Sridhar-Sarnobat Jul 13 '17 at 22:36

Melab ,Jul 24, 2017 at 15:22

@Calimo There is no difference. – Melab Jul 24 '17 at 15:22

Shawn Chin ,Nov 8, 2011 at 10:51

This should print out links that are broken:
find /target/dir -type l ! -exec test -e {} \; -print

You can also chain in operations to find command, e.g. deleting the broken link:

find /target/dir -type l ! -exec test -e {} \; -exec rm {} \;

Andrew Schulman ,Nov 8, 2011 at 10:43

readlink -q will fail silently if the link is bad:
for F in $FILES; do
    if [ -L $F ]; then
        if readlink -q $F >/dev/null ; then
            DO THINGS
        else
            echo "$F: bad link" >/dev/stderr
        fi
    fi
done

zoltanctoth ,Nov 8, 2011 at 10:55

this seems pretty nice as this only returns true if the file is actually a symlink. But even with adding -q, readlink outputs the name of the link on linux. If this is the case in general maybe the answer should be updated with 'readlink -q $F > /dev/null'. Or am I missing something? – zoltanctoth Nov 8 '11 at 10:55

Andrew Schulman ,Nov 8, 2011 at 11:02

No, you're right. Corrected, thanks. – Andrew Schulman Nov 8 '11 at 11:02

Chaim Geretz ,Mar 31, 2015 at 21:09

Which version? I don't see this behavior on my system readlink --version readlink (coreutils) 5.2.1 – Chaim Geretz Mar 31 '15 at 21:09

Aquarius Power ,May 4, 2014 at 23:46

this will work if the symlink was pointing to a file or a directory, but now is broken
if [[ -L "$strFile" ]] && [[ ! -a "$strFile" ]];then 
  echo "'$strFile' is a broken symlink"; 
fi

ACyclic ,May 24, 2014 at 13:02

This finds all files of type "link" whose target, after dereferencing, still resolves to type "link", i.e. broken symlinks:
find /target -type l -xtype l

cdelacroix ,Jun 23, 2015 at 12:59

variant: find -L /target -type lcdelacroix Jun 23 '15 at 12:59

Sridhar-Sarnobat ,Jul 13, 2017 at 22:38

Can't you have a symlink to a symlink that isn't broken?' – Sridhar-Sarnobat Jul 13 '17 at 22:38


If you don't mind traversing non-broken dir symlinks, to find all orphaned links:
$ find -L /target -type l | while read -r file; do echo $file is orphaned; done

To find all files that are not orphaned links:

$ find -L /target ! -type l

[Jul 02, 2018] command line - How can I find broken symlinks

Mar 15, 2012 | unix.stackexchange.com

gabe, Mar 15, 2012 at 16:29

Is there a way to find all symbolic links that don't point anywere?

find ./ -type l

will give me all symbolic links, but makes no distinction between links that go somewhere and links that don't.

I'm currently doing:

find ./ -type l -exec file {} \; |grep broken

But I'm wondering what alternate solutions exist.

rozcietrzewiacz ,May 15, 2012 at 7:01

I'd strongly suggest not to use find -L for the task (see below for explanation). Here are some other ways to do this:
  • If you want to use a "pure find " method, it should rather look like this:
    find . -xtype l
    

    ( xtype is a test performed on a dereferenced link) This may not be available in all versions of find , though. But there are other options as well:

  • You can also exec test -e from within the find command:
    find . -type l ! -exec test -e {} \; -print
    
  • Even some grep trick could be better (i.e., safer ) than find -L , but not exactly such as presented in the question (which greps in entire output lines, including filenames):
     find . -type l -exec sh -c 'file -b "$1" | grep -q ^broken' sh {} \; -print
    

The find -L trick quoted by solo from commandlinefu looks nice and hacky, but it has one very dangerous pitfall : All the symlinks are followed. Consider directory with the contents presented below:

$ ls -l
total 0
lrwxrwxrwx 1 michal users  6 May 15 08:12 link_1 -> nonexistent1
lrwxrwxrwx 1 michal users  6 May 15 08:13 link_2 -> nonexistent2
lrwxrwxrwx 1 michal users  6 May 15 08:13 link_3 -> nonexistent3
lrwxrwxrwx 1 michal users  6 May 15 08:13 link_4 -> nonexistent4
lrwxrwxrwx 1 michal users 11 May 15 08:20 link_out -> /usr/share/

If you run find -L . -type l in that directory, all /usr/share/ would be searched as well (and that can take really long) 1 . For a find command that is "immune to outgoing links", don't use -L .


1 This may look like a minor inconvenience (the command will "just" take long to traverse all /usr/share ) – but can have more severe consequences. For instance, consider chroot environments: They can exist in some subdirectory of the main filesystem and contain symlinks to absolute locations. Those links could seem to be broken for the "outside" system, because they only point to proper places once you've entered the chroot. I also recall that some bootloader used symlinks under /boot that only made sense in an initial boot phase, when the boot partition was mounted as / .

So if you use a find -L command to find and then delete broken symlinks from some harmless-looking directory, you might even break your system...

quornian ,Nov 17, 2012 at 21:56

I think -type l is redundant since -xtype l will operate as -type l on non-links. So find -xtype l is probably all you need. Thanks for this approach. – quornian Nov 17 '12 at 21:56

qwertzguy ,Jan 8, 2015 at 21:37

Be aware that those solutions don't work for all filesystem types. For example it won't work for checking if /proc/XXX/exe link is broken. For this, use test -e "$(readlink /proc/XXX/exe)" . – qwertzguy Jan 8 '15 at 21:37

weakish ,Apr 8, 2016 at 4:57

@Flimm find . -xtype l means "find all symlinks whose (ultimate) target files are symlinks". But the ultimate target of a symlink cannot be a symlink, otherwise we can still follow the link and it is not the ultimate target. Since there is no such symlinks, we can define them as something else, i.e. broken symlinks. – weakish Apr 8 '16 at 4:57

weakish ,Apr 22, 2016 at 12:19

@JoóÁdám "which can only be a symbolic link in case it is broken". Give "broken symbolic link" or "non exist file" an individual type, instead of overloading l , is less confusing to me. – weakish Apr 22 '16 at 12:19

Alois Mahdal ,Jul 15, 2016 at 0:22

The warning at the end is useful, but note that this does not apply to the -L hack but rather to (blindly) removing broken symlinks in general. – Alois Mahdal Jul 15 '16 at 0:22

Sam Morris ,Mar 15, 2012 at 17:38

The symlinks command from http://www.ibiblio.org/pub/Linux/utils/file/symlinks-1.4.tar.gz can be used to identify symlinks with a variety of characteristics. For instance:
$ rm a
$ ln -s a b
$ symlinks .
dangling: /tmp/b -> a

qed ,Jul 27, 2014 at 20:32

Is this tool available for osx? – qed Jul 27 '14 at 20:32

qed ,Jul 27, 2014 at 20:51

Never mind, got it compiled. – qed Jul 27 '14 at 20:51

Daniel Jonsson ,Apr 11, 2015 at 22:11

Apparently symlinks is pre-installed on Fedora. – Daniel Jonsson Apr 11 '15 at 22:11

pooryorick ,Sep 29, 2012 at 14:02

As rozcietrzewiacz has already commented, find -L can have the unexpected consequence of expanding the search into symlinked directories, so it isn't the optimal approach. What no one has mentioned yet is that
find /path/to/search -xtype l

is the more concise, and logically identical command to

find /path/to/search -type l -xtype l

None of the solutions presented so far will detect cyclic symlinks, which is another type of breakage. This related question addresses portability. To summarize, the portable way to find broken symbolic links, including cyclic links, is:

find /path/to/search -type l -exec test ! -e {} \; -print

For more details, see this question or ynform.org . Of course, the definitive source for all this is the findutils documentation .

Flimm ,Oct 7, 2014 at 13:00

Short, consice, and addresses the find -L pitfall as well as cyclical links. +1 – Flimm Oct 7 '14 at 13:00

neu242 ,Aug 1, 2016 at 10:03

Nice. The last one works on MacOSX as well, while @rozcietrzewiacz's answer didn't. – neu242 Aug 1 '16 at 10:03

kwarrick ,Mar 15, 2012 at 16:52

I believe adding the -L flag to your command will allow you do get rid of the grep:
$ find -L . -type l

http://www.commandlinefu.com/commands/view/8260/find-broken-symlinks

from the man:

 -L      Cause the file information and file type (see stat(2)) returned 
         for each symbolic link to be those of the file referenced by the
         link, not the link itself. If the referenced file does not exist,
         the file information and type will be for the link itself.

rozcietrzewiacz ,May 15, 2012 at 7:37

At first I've upvoted this, but then I've realised how dangerous it may be. Before you use it, please have a look at my answer ! – rozcietrzewiacz May 15 '12 at 7:37

andy ,Dec 26, 2012 at 6:56

If you need a different behavior whether the link is broken or cyclic you can also use %Y with find:
$ touch a
$ ln -s a b  # link to existing target
$ ln -s c d  # link to non-existing target
$ ln -s e e  # link to itself
$ find . -type l -exec test ! -e {} \; -printf '%Y %p\n' \
   | while read type link; do
         case "$type" in
         N) echo "do something with broken link $link" ;;
         L) echo "do something with cyclic link $link" ;;
         esac
      done
do something with broken link ./d
do something with cyclic link ./e

This example is copied from this post (site deleted) .


syntaxerror ,Jun 25, 2015 at 0:28

Yet another shorthand for those whose find command does not support xtype can be derived from this: find . -type l -printf "%Y %p\n" | grep -w '^N' . As andy beat me to it with the same (basic) idea in his script, I was reluctant to write it as a separate answer. :) – syntaxerror Jun 25 '15 at 0:28

Alex ,Apr 30, 2013 at 6:37

find -L . -type l | xargs symlinks will tell you, for each file found, whether the link target exists or not.

conradkdotcom ,Oct 24, 2014 at 14:33

This will print out the names of broken symlinks in the current directory.
for l in $(find . -type l); do cd $(dirname $l); if [ ! -e "$(readlink $(basename $l))" ]; then echo $l; fi; cd - > /dev/null; done

Works in Bash. Don't know about other shells.

Iskren ,Aug 8, 2015 at 14:01

I use this for my case and it works quite well, as I know the directory to look for broken symlinks:
find -L $path -maxdepth 1 -type l

and my folder does include a link to /usr/share but it doesn't traverse it. Cross-device links and those that are valid for chroots, etc. are still a pitfall but for my use case it's sufficient.


Simple no-brainer answer, which is a variation on OP's version. Sometimes, you just want something easy to type or remember:
find . | xargs file | grep -i "broken symbolic link"

[Jul 02, 2018] 25 simple examples of Linux find command BinaryTides

Jul 02, 2018 | www.binarytides.com

Ignore the case

It is often useful to ignore the case when searching for file names. To ignore the case, just use the -iname option instead of the -name option.

$ find ./test -iname "*.Php"
./test/subdir/how.php
./test/cool.php

It's always better to wrap the search term (the -name parameter) in double or single quotes. Not doing so will seem to work sometimes and give strange results at other times.

3. Limit depth of directory traversal

The find command by default traverses the entire directory tree recursively, which is time and resource consuming. However, the depth of directory traversal can be specified. For example, if we don't want to go more than 2 or 3 levels down into the sub-directories, we can use the maxdepth option.

$ find ./test -maxdepth 2 -name "*.php"
./test/subdir/how.php
./test/cool.php

$ find ./test -maxdepth 1 -name "*.php"
./test/cool.php

The second example uses a maxdepth of 1, which means it will not descend more than 1 level deep; in other words, it searches only the immediate contents of the starting directory.

This is very useful when we want to do a limited search, only in the current directory or at most 1 level of sub-directories, rather than the entire directory tree, which would take more time.

Just like maxdepth there is an option called mindepth, which does what the name suggests: it will go at least N levels deep before matching files, as illustrated below.
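For illustration (a sketch, assuming the same ./test tree as the earlier examples):

$ find ./test -mindepth 2 -name "*.php"
./test/subdir/how.php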

4. Invert match

It is also possible to search for files that do not match a given name or pattern. This is helpful when we know which files to exclude from the search.

$ find ./test -not -name "*.php"
./test
./test/abc.txt
./test/subdir

So in the above example we found all files that do not have the extension .php, i.e. non-php files. The find command also supports the exclamation mark in place of -not.

find ./test ! -name "*.php"

5. Combine multiple search criteria

It is possible to use multiple criteria when specifying names and inverting. For example:

$ find ./test -name 'abc*' ! -name '*.php'
./test/abc.txt
./test/abc

The above find command looks for files whose names begin with abc and that do not have a php extension. This is an example of how powerful search expressions can be built with the find command.

OR operator

When using multiple name criteria, the find command combines them with the AND operator, which means that only those files which satisfy all criteria will be matched. However, if we need to perform an OR-based match, the find command has the -o switch.

$ find -name '*.php' -o -name '*.txt'
./abc.txt
./subdir/how.php
./abc.php
./cool.php

The above command searches for files ending in either the php extension or the txt extension.

[Jul 02, 2018] Explanation of % directives in find -printf

Jul 02, 2018 | unix.stackexchange.com

san1512 ,Jul 11, 2015 at 6:24

find /tmp -printf '%s %p\n' |sort -n -r | head

This command is working fine but what are the %s %p options used here? Are there any other options that can be used?

Cyrus ,Jul 11, 2015 at 6:41

Take a look at find's manpage. – Cyrus Jul 11 '15 at 6:41

phuclv ,Oct 9, 2017 at 3:13

possible duplicate of Where to find printf formatting reference?phuclv Oct 9 '17 at 3:13

Hennes ,Jul 11, 2015 at 6:34

What are the %s %p options used here?

From the man page :

%s File's size in bytes.

%p File's name.

Scroll down on that page beyond all the regular letters for printf and read the parts which come prefixed with a %.

%n Number of hard links to file.

%p File's name.

%P File's name with the name of the starting-point under which it was found removed.

%s File's size in bytes.

%t File's last modification time in the format returned by the C `ctime' function.

Are there any other options that can be used?

There are. See the link to the manpage.
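For instance, several directives can be combined in one format string; a small sketch using GNU find (the /var/log path is only an example):

# size in bytes, last modification date, and path of the ten largest files
find /var/log -type f -printf '%s %TY-%Tm-%Td %p\n' | sort -n -r | head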

Kusalananda ,Nov 17, 2017 at 9:53

@don_crissti I'll never understand why people prefer random web documentation to the documentation installed on their systems (which has the added benefit of actually being relevant to their system). – Kusalananda Nov 17 '17 at 9:53

don_crissti ,Nov 17, 2017 at 12:52

@Kusalananda - Well, I can think of one scenario in which people would include a link to a web page instead of a quote from the documentation installed on their system: they're not on a linux machine at the time of writing the post... However, the link should point (imo) to the official docs (hence my comment above, which, for some unknown reason, was deleted by the mods...). That aside, I fully agree with you: the OP should consult the manual page installed on their system. – don_crissti Nov 17 '17 at 12:52

runlevel0 ,Feb 15 at 12:10

@don_crissti Or they are on a server that has no manpages installed which is rather frequent. – runlevel0 Feb 15 at 12:10

Hennes ,Feb 16 at 16:16

My manual page tend to be from FreeBSD though. Unless I happen to have a Linux VM within reach. And I have the impression that most questions are GNU/Linux based. – Hennes Feb 16 at 16:16

[Jun 24, 2018] Three Ways to Script Processes in Parallel by Rudis Muiznieks

Sep 02, 2015 | www.codeword.xyz
Wednesday, September 02, 2015 | 9 Comments

I was recently troubleshooting some issues we were having with Shippable , trying to get a bunch of our unit tests to run in parallel so that our builds would complete faster. I didn't care what order the different processes completed in, but I didn't want the shell script to exit until all the spawned unit test processes had exited. I ultimately wasn't able to satisfactorily solve the issue we were having, but I did learn more than I ever wanted to know about how to run processes in parallel in shell scripts. So here I shall impart unto you the knowledge I have gained. I hope someone else finds it useful!

Wait

The simplest way to achieve what I wanted was to use the wait command. You simply fork all of your processes with & , and then follow them with a wait command. Behold:

#!/bin/sh

/usr/bin/my-process-1 --args1 &
/usr/bin/my-process-2 --args2 &
/usr/bin/my-process-3 --args3 &

wait
echo all processes complete

It's really as easy as that. When you run the script, all three processes will be forked in parallel, and the script will wait until all three have completed before exiting. Anything after the wait command will execute only after the three forked processes have exited.

Pros

Damn, son! It doesn't get any simpler than that!

Cons

I don't think there's really any way to determine the exit codes of the processes you forked. That was a deal-breaker for my use case, since I needed to know if any of the tests failed and return an error code from the parent shell script if they did.

Another downside is that output from the processes will be all mish-mashed together, which makes it difficult to follow. In our situation, it was basically impossible to determine which unit tests had failed because they were all spewing their output at the same time.
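For the record, wait can report a specific child's status when given a PID, so per-process exit codes are recoverable even with this approach (the mixed output remains, though). A minimal sketch, reusing the article's placeholder process names:

#!/bin/sh
/usr/bin/my-process-1 --args1 & pid1=$!
/usr/bin/my-process-2 --args2 & pid2=$!
/usr/bin/my-process-3 --args3 & pid3=$!

status=0
for pid in "$pid1" "$pid2" "$pid3"; do
    wait "$pid" || status=1   # wait PID returns that child's exit code
done
echo "all processes complete (overall status: $status)"
exit "$status"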

GNU Parallel

There is a super nifty program called GNU Parallel that does exactly what I wanted. It works kind of like xargs in that you can give it a collection of arguments to pass to a single command which will all be run, only this will run them in parallel instead of in serial like xargs does (OR DOES IT??</foreshadowing>). It is super powerful, and all the different ways you can use it are beyond the scope of this article, but here's a rough equivalent to the example script above:

#!/bin/sh

parallel /usr/bin/my-process-{} --args{} ::: 1 2 3
echo all processes complete

The official "10 seconds installation" method for the latest version of GNU Parallel (from the README) is as follows:

(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash

Pros

If any of the processes returns a non-zero exit code, parallel will return a non-zero exit code. This means you can use $? in your shell script to detect if any of the processes failed. Nice! GNU Parallel also (by default) collates the output of each process together, so you'll see the complete output of each process as it completes instead of a mash-up of all the output combined together as it's produced. Also nice!
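A sketch of how that exit status might be used in a wrapper script (the test-runner name and arguments are hypothetical):

#!/bin/sh
parallel ./run-one-test.sh ::: unit_a unit_b unit_c
if [ $? -ne 0 ]; then
    echo "at least one test run failed" >&2
    exit 1
fi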

I am such a damn fanboy I might even buy an official GNU Parallel mug and t-shirt . Actually I'll probably save the money and get the new Star Wars Battlefront game when it comes out instead. But I did seriously consider the parallel schwag for a microsecond or so.

Cons

Literally none.

Xargs

So it turns out that our old friend xargs has supported parallel processing all along! Who knew? It's like the nerdy chick in the movies who gets a makeover near the end and it turns out she's even hotter than the stereotypical hot cheerleader chicks who were picking on her the whole time. Just pass it a -Pn argument and it will run your commands using up to n threads. Check out this mega-sexy equivalent to the above scripts:

#!/bin/sh

printf "1\n2\n3" | xargs -n1 -P3 -I{} /usr/bin/my-process-{} --args{}
echo all processes complete

Pros

xargs returns a non-zero exit code if any of the processes fails, so you can again use $? in your shell script to detect errors. The difference is it will return 123 , unlike GNU Parallel which passes through the non-zero exit code of the process that failed (I'm not sure how parallel picks if more than one process fails, but I'd assume it's either the first or last process to fail). Another pro is that xargs is most likely already installed on your preferred distribution of Linux.

Cons

I have read reports that the non-GNU version of xargs does not support parallel processing, so you may or may not be out of luck with this option if you're on AIX or a BSD or something.

xargs also has the same problem as the wait solution where the output from your processes will be all mixed together.

Another con is that xargs is a little less flexible than parallel in how you specify the processes to run. You have to pipe your values into it, and if you use the -I argument for string-replacement then your values have to be separated by newlines (which is more annoying when running it ad-hoc). It's still pretty nice, but nowhere near as flexible or powerful as parallel .

Also there's no place to buy an xargs mug and t-shirt. Lame!

And The Winner Is

After determining that the Shippable problem we were having was completely unrelated to the parallel scripting method I was using, I ended up sticking with parallel for my unit tests. Even though it meant one more dependency on our build machine, the ease

[Jun 23, 2018] Queuing tasks for batch execution with Task Spooler by Ben Martin

Aug 12, 2008 | www.linux.com

The Task Spooler project allows you to queue up tasks from the shell for batch execution. Task Spooler is simple to use and requires no configuration. You can view and edit queued commands, and you can view the output of queued commands at any time.

Task Spooler has some similarities with other delayed and batch execution projects, such as " at ." While both Task Spooler and at handle multiple queues and allow the execution of commands at a later point, the at project handles output from commands by emailing the results to the user who queued the command, while Task Spooler allows you to get at the results from the command line instead. Another major difference is that Task Spooler is not aimed at executing commands at a specific time, but rather at simply adding to and executing commands from queues.

The main repositories for Fedora, openSUSE, and Ubuntu do not contain packages for Task Spooler. There are packages for some versions of Debian, Ubuntu, and openSUSE 10.x available along with the source code on the project's homepage. In this article I'll use a 64-bit Fedora 9 machine and install version 0.6 of Task Spooler from source. Task Spooler does not use autotools to build, so to install it, simply run make; sudo make install . This will install the main Task Spooler command ts and its manual page into /usr/local.

A simple interaction with Task Spooler is shown below. First I add a new job to the queue and check the status. As the command is a very simple one, it is likely to have been executed immediately. Executing ts by itself with no arguments shows the executing queue, including tasks that have completed. I then use ts -c to get at the stdout of the executed command. The -c option uses cat to display the output file for a task. Using ts -i shows you information about the job. To clear finished jobs from the queue, use the ts -C command, not shown in the example.

$ ts echo "hello world"
6

$ ts
ID State Output E-Level Times(r/u/s) Command [run=0/1]
6 finished /tmp/ts-out.QoKfo9 0 0.00/0.00/0.00 echo hello world

$ ts -c 6
hello world

$ ts -i 6
Command: echo hello world
Enqueue time: Tue Jul 22 14:42:22 2008
Start time: Tue Jul 22 14:42:22 2008
End time: Tue Jul 22 14:42:22 2008
Time run: 0.003336s

The -t option operates like tail -f , showing you the last few lines of output and continuing to show you any new output from the task. If you would like to be notified when a task has completed, you can use the -m option to have the results mailed to you, or you can queue another command to be executed that just performs the notification. For example, I might add a tar command and want to know when it has completed. The below commands will create a tarball and use libnotify commands to create an unobtrusive popup window on my desktop when the tarball creation is complete. The popup will be dismissed automatically after a timeout.

$ ts tar czvf /tmp/mytarball.tar.gz liberror-2.1.80011
11
$ ts notify-send "tarball creation" "the long running tar creation process is complete."
12
$ ts
ID State Output E-Level Times(r/u/s) Command [run=0/1]
11 finished /tmp/ts-out.O6epsS 0 4.64/4.31/0.29 tar czvf /tmp/mytarball.tar.gz liberror-2.1.80011
12 finished /tmp/ts-out.4KbPSE 0 0.05/0.00/0.02 notify-send tarball creation the long... is complete.

Notice in the output above, toward the far right of the header information, the run=0/1 line. This tells you that Task Spooler is executing nothing, and can possibly execute one task. Task spooler allows you to execute multiple tasks at once from your task queue to take advantage of multicore CPUs. The -S option allows you to set how many tasks can be executed in parallel from the queue, as shown below.

$ ts -S 2
$ ts
ID State Output E-Level Times(r/u/s) Command [run=0/2]
6 finished /tmp/ts-out.QoKfo9 0 0.00/0.00/0.00 echo hello world

If you have two tasks that you want to execute with Task Spooler but one depends on the other having already been executed (and perhaps that the previous job has succeeded too) you can handle this by having one task wait for the other to complete before executing. This becomes more important on a quad core machine when you might have told Task Spooler that it can execute three tasks in parallel. The commands shown below create an explicit dependency, making sure that the second command is executed only if the first has completed successfully, even when the queue allows multiple tasks to be executed. The first command is queued normally using ts . I use a subshell to execute the commands by having ts explicitly start a new bash shell. The second command uses the -d option, which tells ts to execute the command only after the successful completion of the last command that was appended to the queue. When I first inspect the queue I can see that the first command (28) is executing. The second command is queued but has not been added to the list of executing tasks because Task Spooler is aware that it cannot execute until task 28 is complete. The second time I view the queue, both tasks have completed.

$ ts bash -c "sleep 10; echo hi"
28
$ ts -d echo there
29
$ ts
ID State Output E-Level Times(r/u/s) Command [run=1/2]
28 running /tmp/ts-out.hKqDva bash -c sleep 10; echo hi
29 queued (file) && echo there
$ ts
ID State Output E-Level Times(r/u/s) Command [run=0/2]
28 finished /tmp/ts-out.hKqDva 0 10.01/0.00/0.01 bash -c sleep 10; echo hi
29 finished /tmp/ts-out.VDtVp7 0 0.00/0.00/0.00 && echo there
$ cat /tmp/ts-out.hKqDva
hi
$ cat /tmp/ts-out.VDtVp7
there

You can also explicitly set dependencies on other tasks as shown below. Because the ts command prints the ID of a new task to the console, the first command puts that ID into a shell variable for use in the second command. The second command passes the task ID of the first task to ts, telling it to wait for the task with that ID to complete before returning. Because this is joined with the command we wish to execute with the && operation, the second command will execute only if the first one has finished and succeeded.

The first time we view the queue you can see that both tasks are running. The first task will be in the sleep command that we used explicitly to slow down its execution. The second command will be executing ts , which will be waiting for the first task to complete. One downside of tracking dependencies this way is that the second command is added to the running queue even though it cannot do anything until the first task is complete.

$ FIRST_TASKID=`ts bash -c "sleep 10; echo hi"`
$ ts sh -c "ts -w $FIRST_TASKID && echo there"
25
$ ts
ID State Output E-Level Times(r/u/s) Command [run=2/2]
24 running /tmp/ts-out.La9Gmz bash -c sleep 10; echo hi
25 running /tmp/ts-out.Zr2n5u sh -c ts -w 24 && echo there
$ ts
ID State Output E-Level Times(r/u/s) Command [run=0/2]
24 finished /tmp/ts-out.La9Gmz 0 10.01/0.00/0.00 bash -c sleep 10; echo hi
25 finished /tmp/ts-out.Zr2n5u 0 9.47/0.00/0.01 sh -c ts -w 24 && echo there
$ ts -c 24
hi
$ ts -c 25
there

Wrap-up

Task Spooler allows you to convert a shell command to a queued command by simply prepending ts to the command line. One major advantage of using ts over something like the at command is that you can effectively run tail -f on the output of a running task and also get at the output of completed tasks from the command line. The utility's ability to execute multiple tasks in parallel is very handy if you are running on a multicore CPU. Because you can explicitly wait for a task, you can set up very complex interactions where you might have several tasks running at once and have jobs that depend on multiple other tasks to complete successfully before they can execute.

Because you can make explicitly dependent tasks take up slots in the actively running task queue, you can effectively delay the execution of the queue until a time of your choosing. For example, if you queue up a task that waits for a specific time before returning successfully and have a small group of other tasks that are dependent on this first task to complete, then no tasks in the queue will execute until the first task completes.


[Jun 23, 2018] at, batch, atq, and atrm examples

Jun 23, 2018 | www.computerhope.com
at -m 01:35 < my-at-jobs.txt

Run the commands listed in the ' my-at-jobs.txt ' file at 1:35 AM. All output from the job will be mailed to the user running the task. When this command has been successfully entered you should receive a prompt similar to the example below:

commands will be executed using /bin/sh
job 1 at Wed Dec 24 00:22:00 2014
at -l

This command will list each of the scheduled jobs in a format like the following:

1          Wed Dec 24 00:22:00 2014

...this is the same as running the command atq .

at -r 1

Deletes job 1 . This command is the same as running the command atrm 1 .

atrm 23

Deletes job 23. This command is the same as running the command at -r 23 .

[Jun 23, 2018] Bash script processing limited number of commands in parallel

Jun 23, 2018 | stackoverflow.com

AL-Kateb ,Oct 23, 2013 at 13:33

I have a bash script that looks like this:
#!/bin/bash
wget LINK1 >/dev/null 2>&1
wget LINK2 >/dev/null 2>&1
wget LINK3 >/dev/null 2>&1
wget LINK4 >/dev/null 2>&1
# ..
# ..
wget LINK4000 >/dev/null 2>&1

But processing each line until the command is finished then moving to the next one is very time consuming, I want to process for instance 20 lines at once then when they're finished another 20 lines are processed.

I thought of wget LINK1 >/dev/null 2>&1 & to send the command to the background and carry on, but there are 4000 lines here, which means I will have performance issues, not to mention being limited in how many processes I should start at the same time, so this is not a good idea.

One solution that I'm thinking of right now is checking whether one of the commands is still running or not, for instance after 20 lines I can add this loop:

while [  $(ps -ef | grep KEYWORD | grep -v grep | wc -l) -gt 0 ]; do
sleep 1
done

Of course in this case I will need to append & to the end of the line! But I'm feeling this is not the right way to do it.

So how do I actually group each 20 lines together and wait for them to finish before going to the next 20 lines, this script is dynamically generated so I can do whatever math I want on it while it's being generated, but it DOES NOT have to use wget, it was just an example so any solution that is wget specific is not gonna do me any good.

kojiro ,Oct 23, 2013 at 13:46

wait is the right answer here, but your while [ $(ps would be much better written while pkill -0 $KEYWORD – using proctools that is, for legitimate reasons to check if a process with a specific name is still running. – kojiro Oct 23 '13 at 13:46

VasyaNovikov ,Jan 11 at 19:01

I think this question should be re-opened. The "possible duplicate" QA is all about running a finite number of programs in parallel. Like 2-3 commands. This question, however, is focused on running commands in e.g. a loop. (see "but there are 4000 lines"). – VasyaNovikov Jan 11 at 19:01

robinCTS ,Jan 11 at 23:08

@VasyaNovikov Have you read all the answers to both this question and the duplicate? Every single answer to this question here, can also be found in the answers to the duplicate question. That is precisely the definition of a duplicate question. It makes absolutely no difference whether or not you are running the commands in a loop. – robinCTS Jan 11 at 23:08

VasyaNovikov ,Jan 12 at 4:09

@robinCTS there are intersections, but questions themselves are different. Also, 6 of the most popular answers on the linked QA deal with 2 processes only. – VasyaNovikov Jan 12 at 4:09

Dan Nissenbaum ,Apr 20 at 15:35

I recommend reopening this question because its answer is clearer, cleaner, better, and much more highly upvoted than the answer at the linked question, though it is three years more recent. – Dan Nissenbaum Apr 20 at 15:35

devnull ,Oct 23, 2013 at 13:35

Use the wait built-in:
process1 &
process2 &
process3 &
process4 &
wait
process5 &
process6 &
process7 &
process8 &
wait

For the above example, 4 processes process1 .. process4 would be started in the background, and the shell would wait until those are completed before starting the next set ..

From the manual :

wait [jobspec or pid ...]

Wait until the child process specified by each process ID pid or job specification jobspec exits and return the exit status of the last command waited for. If a job spec is given, all processes in the job are waited for. If no arguments are given, all currently active child processes are waited for, and the return status is zero. If neither jobspec nor pid specifies an active child process of the shell, the return status is 127.

kojiro ,Oct 23, 2013 at 13:48

So basically i=0; waitevery=4; for link in "${links[@]}"; do wget "$link" & (( i++%waitevery==0 )) && wait; done >/dev/null 2>&1kojiro Oct 23 '13 at 13:48

rsaw ,Jul 18, 2014 at 17:26

Unless you're sure that each process will finish at the exact same time, this is a bad idea. You need to start up new jobs to keep the current total jobs at a certain cap .... parallel is the answer. – rsaw Jul 18 '14 at 17:26

DomainsFeatured ,Sep 13, 2016 at 22:55

Is there a way to do this in a loop? – DomainsFeatured Sep 13 '16 at 22:55

Bobby ,Apr 27, 2017 at 7:55

I've tried this but it seems that variable assignments done in one block are not available in the next block. Is this because they are separate processes? Is there a way to communicate the variables back to the main process? – Bobby Apr 27 '17 at 7:55

choroba ,Oct 23, 2013 at 13:38

See parallel . Its syntax is similar to xargs , but it runs the commands in parallel.

chepner ,Oct 23, 2013 at 14:35

This is better than using wait , since it takes care of starting new jobs as old ones complete, instead of waiting for an entire batch to finish before starting the next. – chepner Oct 23 '13 at 14:35

Mr. Llama ,Aug 13, 2015 at 19:30

For example, if you have the list of links in a file, you can do cat list_of_links.txt | parallel -j 4 wget {} which will keep four wget s running at a time. – Mr. Llama Aug 13 '15 at 19:30

0x004D44 ,Nov 2, 2015 at 21:42

There is a new kid in town called pexec which is a replacement for parallel . – 0x004D44 Nov 2 '15 at 21:42

mat ,Mar 1, 2016 at 21:04

Not to be picky, but xargs can also parallelize commands. – mat Mar 1 '16 at 21:04

Vader B ,Jun 27, 2016 at 6:41

In fact, xargs can run commands in parallel for you. There is a special -P max_procs command-line option for that. See man xargs .


You can run 20 processes and use the command:
wait

Your script will wait and continue when all your background jobs are finished.
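A minimal sketch of that batching idea, assuming the 4000 links are stored one per line in a file called links.txt:

#!/bin/bash
batch=20
count=0
while read -r link; do
    wget "$link" >/dev/null 2>&1 &
    count=$((count + 1))
    # every 20 background jobs, block until the whole batch has finished
    if (( count % batch == 0 )); then
        wait
    fi
done < links.txt
wait    # catch the final, possibly partial, batch
echo "all downloads complete"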

[Jun 23, 2018] parallelism - correct xargs parallel usage

Jun 23, 2018 | unix.stackexchange.com

Yan Zhu ,Apr 19, 2015 at 6:59

I am using xargs to call a python script to process about 30 million small files. I hope to use xargs to parallelize the process. The command I am using is:
find ./data -name "*.json" -print0 |
  xargs -0 -I{} -P 40 python Convert.py {} > log.txt

Basically, Convert.py will read in a small json file (4kb), do some processing and write to another 4kb file. I am running on a server with 40 CPU cores. And no other CPU-intense process is running on this server.

By monitoring htop (btw, is there any other good way to monitor the CPU performance?), I find that -P 40 is not as fast as expected. Sometimes all cores will freeze and decrease almost to zero for 3-4 seconds, then will recover to 60-70%. Then I try to decrease the number of parallel processes to -P 20-30 , but it's still not very fast. The ideal behavior should be linear speed-up. Any suggestions for the parallel usage of xargs ?

Ole Tange ,Apr 19, 2015 at 8:45

You are most likely hit by I/O: The system cannot read the files fast enough. Try starting more than 40: This way it will be fine if some of the processes have to wait for I/O. – Ole Tange Apr 19 '15 at 8:45

Fox ,Apr 19, 2015 at 10:30

What kind of processing does the script do? Any database/network/io involved? How long does it run? – Fox Apr 19 '15 at 10:30

PSkocik ,Apr 19, 2015 at 11:41

I second @OleTange. That is the expected behavior if you run as many processes as you have cores and your tasks are IO bound. First the cores will wait on IO for their task (sleep), then they will process, and then repeat. If you add more processes, then the additional processes that currently aren't running on a physical core will have kicked off parallel IO operations, which will, when finished, eliminate or at least reduce the sleep periods on your cores. – PSkocik Apr 19 '15 at 11:41

Bichoy ,Apr 20, 2015 at 3:32

1- Do you have hyperthreading enabled? 2- in what you have up there, log.txt is actually overwritten with each call to convert.py ... not sure if this is the intended behavior or not. – Bichoy Apr 20 '15 at 3:32

Ole Tange ,May 11, 2015 at 18:38

xargs -P and > is opening up for race conditions because of the half-line problem gnu.org/software/parallel/ Using GNU Parallel instead will not have that problem. – Ole Tange May 11 '15 at 18:38

James Scriven ,Apr 24, 2015 at 18:00

I'd be willing to bet that your problem is python . You didn't say what kind of processing is being done on each file, but assuming you are just doing in-memory processing of the data, the running time will be dominated by starting up 30 million python virtual machines (interpreters).

If you can restructure your python program to take a list of files, instead of just one, you will get a huge improvement in performance. You can then still use xargs to further improve performance. For example, 40 processes, each processing 1000 files:

find ./data -name "*.json" -print0 |
  xargs -0 -L1000 -P 40 python Convert.py

This isn't to say that python is a bad/slow language; it's just not optimized for startup time. You'll see this with any virtual machine-based or interpreted language. Java, for example, would be even worse. If your program was written in C, there would still be a cost of starting a separate operating system process to handle each file, but it would be much less.

From there you can fiddle with -P to see if you can squeeze out a bit more speed, perhaps by increasing the number of processes to take advantage of idle processors while data is being read/written.

Stephen ,Apr 24, 2015 at 13:03

So firstly, consider the constraints:

What is the constraint on each job? If it's I/O you can probably get away with multiple jobs per CPU core up until you hit the limit of I/O, but if it's CPU intensive, it's going to be worse than pointless to run more jobs concurrently than you have CPU cores.

My understanding of these things is that GNU Parallel would give you better control over the queue of jobs etc.

See GNU parallel vs & (I mean background) vs xargs -P for a more detailed explanation of how the two differ.


As others said, check whether you're I/O-bound. Also, xargs' man page suggests using -n with -P , you don't mention the number of Convert.py processes you see running in parallel.

As a suggestion, if you're I/O-bound, you might try using an SSD block device, or try doing the processing in a tmpfs (of course, in this case you should check for enough memory, avoiding swap due to tmpfs pressure (I think), and the overhead of copying the data to it in the first place).

[Jun 23, 2018] Linux/Bash, how to schedule commands in a FIFO queue?

Jun 23, 2018 | superuser.com

Andrei ,Apr 10, 2013 at 14:26

I want the ability to schedule commands to be run in a FIFO queue. I DON'T want them to be run at a specified time in the future as would be the case with the "at" command. I want them to start running now, but not simultaneously. The next scheduled command in the queue should be run only after the first command finishes executing. Alternatively, it would be nice if I could specify a maximum number of commands from the queue that could be run simultaneously; for example if the maximum number of simultaneous commands is 2, then only at most 2 commands scheduled in the queue would be taken from the queue in a FIFO manner to be executed, the next command in the remaining queue being started only when one of the currently 2 running commands finishes.

I've heard task-spooler could do something like this, but this package doesn't appear to be well supported/tested and is not in the Ubuntu standard repositories (Ubuntu being what I'm using). If that's the best alternative then let me know and I'll use task-spooler, otherwise, I'm interested to find out what's the best, easiest, most tested, bug-free, canonical way to do such a thing with bash.

UPDATE:

Simple solutions like ; or && from bash do not work. I need to schedule these commands from an external program, when an event occurs. I just don't want to have hundreds of instances of my command running simultaneously, hence the need for a queue. There's an external program that will trigger events where I can run my own commands. I want to handle ALL triggered events, I don't want to miss any event, but I also don't want my system to crash, so that's why I want a queue to handle my commands triggered from the external program.

Andrei ,Apr 11, 2013 at 11:40

Task Spooler:

http://vicerveza.homeunix.net/~viric/soft/ts/

https://launchpad.net/ubuntu/+source/task-spooler/0.7.3-1

Does the trick very well. Hopefully it will be included in Ubuntu's package repos.

Hennes ,Apr 10, 2013 at 15:00

Use ;

For example:
ls ; touch test ; ls

That will list the directory. Only after ls has run it will run touch test which will create a file named test. And only after that has finished it will run the next command. (In this case another ls which will show the old contents and the newly created file).

Similar commands are || and && .

; will always run the next command.

&& will only run the next command it the first returned success.
Example: rm -rf *.mp3 && echo "Success! All MP3s deleted!"

|| will only run the next command if the first command returned a failure (non-zero) return value. Example: rm -rf *.mp3 || echo "Error! Some files could not be deleted! Check permissions!"

If you want to run a command in the background, append an ampersand ( & ).
Example:
make bzimage &
mp3blaster sound.mp3
make mytestsoftware ; ls ; firefox ; make clean

This will run two commands in the background (in this case a kernel build, which will take some time, and a program to play some music). In the foreground it runs another compile job and, once that is finished, ls, firefox and a make clean (all sequentially).

For more details, see man bash


[Edit after comment]

in pseudo code, something like this?

Program run_queue:

While(true)
{
   Wait_for_a_signal();

   While( queue not empty )
   {
       run next command from the queue.
       remove this command from the queue.
       // If commands where added to the queue during execution then
       // the queue is not empty, keep processing them all.
   }
   // Queue is now empty, returning to wait_for_a_signal
}
// 
// Wait forever on commands and add them to a queue
// Signal run_queue when something gets added.
//
program add_to_queue()
{
   While(true)
   {
       Wait_for_event();
       Append command to queue
       signal run_queue
   }    
}
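A minimal bash sketch of that idea using a named pipe; the queue path is arbitrary and this ignores corner cases such as very long command lines:

#!/bin/bash
# Consumer: runs queued commands one at a time, in FIFO order.
queue=/tmp/cmdqueue.fifo
[ -p "$queue" ] || mkfifo "$queue"

exec 3<>"$queue"          # read-write open, so the pipe never reports EOF
while read -r cmd <&3; do
    bash -c "$cmd"        # the next command starts only after this one exits
done

Producers then append work with something like: echo 'tar czf /tmp/backup.tar.gz /etc' > /tmp/cmdqueue.fifo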

terdon ,Apr 10, 2013 at 15:03

The easiest way would be to simply run the commands sequentially:
cmd1; cmd2; cmd3; cmdN

If you want the next command to run only if the previous command exited successfully, use && :

cmd1 && cmd2 && cmd3 && cmdN

That is the only bash native way I know of doing what you want. If you need job control (setting a number of parallel jobs etc), you could try installing a queue manager such as TORQUE but that seems like overkill if all you want to do is launch jobs sequentially.

psusi ,Apr 10, 2013 at 15:24

You are looking for at 's twin brother: batch . It uses the same daemon but instead of scheduling a specific time, the jobs are queued and will be run whenever the system load average is low.
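For example (a sketch; batch reads the commands to run from standard input, and the tar command here is arbitrary):

echo 'tar czf /tmp/home-backup.tar.gz /home' | batch
atq    # the queued job is listed alongside regular at jobs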

mpy ,Apr 10, 2013 at 14:59

Apart from dedicated queuing systems (like the Sun Grid Engine ) which you can also use locally on one machine and which offer dozens of possibilities, you can use something like
 command1 && command2 && command3

which is the other extreme -- a very simple approach. The latter neither provides multiple simultaneous processes nor gradual filling of the "queue".

Bogdan Dumitru ,May 3, 2016 at 10:12

I went on the same route searching, trying out task-spooler and so on. The best of the best is this:

GNU Parallel with --semaphore --fg . It also has -j for parallel jobs.
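A small sketch of that usage with sem, the alias GNU Parallel installs for parallel --semaphore (the job names and the queue id are made up):

# at most two jobs from this queue run at once
sem -j2 --id buildq ./long_job_1.sh
sem -j2 --id buildq ./long_job_2.sh
sem -j2 --id buildq ./long_job_3.sh
sem --id buildq --wait     # block until everything queued under this id is done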

[Jun 23, 2018] Task Spooler

Notable quotes:
"... As in freshmeat.net : ..."
"... doesn't work anymore ..."
Jun 23, 2018 | vicerveza.homeunix.net

As in freshmeat.net :

task spooler is a Unix batch system where the tasks spooled run one after the other. The amount of jobs to run at once can be set at any time. Each user in each system has his own job queue. The tasks are run in the correct context (that of enqueue) from any shell/process, and its output/results can be easily watched. It is very useful when you know that your commands depend on a lot of RAM, a lot of disk use, give a lot of output, or for whatever reason it's better not to run them all at the same time, while you want to keep your resources busy for maximum benefit. Its interface allows using it easily in scripts.

For your first contact, you can read an article at linux.com , which I like as overview, guide and examples (original url) . On more advanced usage, don't neglect the TRICKS file in the package.

Features

I wrote Task Spooler because I didn't have any comfortable way of running batch jobs in my linux computer. I wanted to:

At the end, after some time using and developing ts , it can do something more:

You can look at an old (but representative) screenshot of ts-0.2.1 if you want.

Mailing list

I created a GoogleGroup for the program. You look for the archive and the join methods in the taskspooler google group page .

Alessandro Öhler once maintained a mailing list for discussing newer functionalities and interchanging use experiences. I think this doesn't work anymore , but you can look at the old archive or even try to subscribe .

How it works

The queue is maintained by a server process. This server process is started if it isn't there already. The communication goes through a unix socket usually in /tmp/ .

When the user requests a job (using a ts client), the client waits for the server's message to know when it can start. When the server allows starting, the client usually forks and runs the command with the proper environment, because it is the client, not the server, that runs the job (unlike 'at' or 'cron'). So the ulimits, environment, pwd, etc. apply.

When the job finishes, the client notifies the server. At this time, the server may notify any waiting client, and stores the output and the errorlevel of the finished job.

Moreover the client can take advantage of many information from the server: when a job finishes, where does the job output go to, etc.

Download

Download the latest version (GPLv2+ licensed): ts-1.0.tar.gz - v1.0 (2016-10-19) - Changelog

Look at the version repository if you are interested in its development.

Андрей Пантюхин (Andrew Pantyukhin) maintains the BSD port .

Alessandro Öhler provided a Gentoo ebuild for 0.4 , which with simple changes I updated to the ebuild for 0.6.4 . Moreover, the Gentoo Project Sunrise already has also an ebuild ( maybe old ) for ts .

Alexander V. Inyukhin maintains unofficial debian packages for several platforms. Find the official packages in the debian package system .

Pascal Bleser packed the program for SuSE and openSuSE in RPMs for various platforms .

Gnomeye maintains the AUR package .

Eric Keller wrote a nodejs web server showing the status of the task spooler queue ( github project ).

Manual

Look at its manpage (v0.6.1). Here you also have a copy of the help for the same version:

usage: ./ts [action] [-ngfmd] [-L <lab>] [cmd...]
Env vars:
  TS_SOCKET  the path to the unix socket used by the ts command.
  TS_MAILTO  where to mail the result (on -m). Local user by default.
  TS_MAXFINISHED  maximum finished jobs in the queue.
  TS_ONFINISH  binary called on job end (passes jobid, error, outfile, command).
  TS_ENV  command called on enqueue. Its output determines the job information.
  TS_SAVELIST  filename which will store the list, if the server dies.
  TS_SLOTS   amount of jobs which can run at once, read on server start.
Actions:
  -K       kill the task spooler server
  -C       clear the list of finished jobs
  -l       show the job list (default action)
  -S [num] set the number of max simultanious jobs of the server.
  -t [id]  tail -f the output of the job. Last run if not specified.
  -c [id]  cat the output of the job. Last run if not specified.
  -p [id]  show the pid of the job. Last run if not specified.
  -o [id]  show the output file. Of last job run, if not specified.
  -i [id]  show job information. Of last job run, if not specified.
  -s [id]  show the job state. Of the last added, if not specified.
  -r [id]  remove a job. The last added, if not specified.
  -w [id]  wait for a job. The last added, if not specified.
  -u [id]  put that job first. The last added, if not specified.
  -U <id-id>  swap two jobs in the queue.
  -h       show this help
  -V       show the program version
Options adding jobs:
  -n       don't store the output of the command.
  -g       gzip the stored output (if not -n).
  -f       don't fork into background.
  -m       send the output by e-mail (uses sendmail).
  -d       the job will be run only if the job before ends well
  -L <lab> name this task with a label, to be distinguished on listing.

[Jun 23, 2018] bash - Shell Scripting Using xargs to execute parallel instances of a shell function

Jun 23, 2018 | stackoverflow.com

Gnats ,Jul 23, 2010 at 19:33

I'm trying to use xargs in a shell script to run parallel instances of a function I've defined in the same script. The function times the fetching of a page, and so it's important that the pages are actually fetched concurrently in parallel processes, and not in background processes (if my understanding of this is wrong and there's negligible difference between the two, just let me know).

The function is:

function time_a_url ()
{
     oneurltime=$($time_command -p wget -p $1 -O /dev/null 2>&1 1>/dev/null | grep real | cut -d" " -f2)
     echo "Fetching $1 took $oneurltime seconds."
}

How does one do this with an xargs pipe in a form that can take number of times to run time_a_url in parallel as an argument? And yes, I know about GNU parallel, I just don't have the privilege to install software where I'm writing this.

Dennis Williamson ,Jul 23, 2010 at 23:03

Here's a demo of how you might be able to get your function to work:
$ f() { echo "[$@]"; }
$ export -f f
$ echo -e "b 1\nc 2\nd 3 4" | xargs -P 0 -n 1 -I{} bash -c f\ \{\}
[b 1]
[d 3 4]
[c 2]

The keys to making this work are to export the function so the bash that xargs spawns will see it and to escape the space between the function name and the escaped braces. You should be able to adapt this to work in your situation. You'll need to adjust the arguments for -P and -n (or remove them) to suit your needs.

You can probably get rid of the grep and cut . If you're using the Bash builtin time , you can specify an output format using the TIMEFORMAT variable. If you're using GNU /usr/bin/time , you can use the --format argument. Either of these will allow you to drop the -p also.

You can replace this part of your wget command: 2>&1 1>/dev/null with -q . In any case, you have those reversed. The correct order would be >/dev/null 2>&1 .
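Putting those suggestions together, a minimal sketch of the rewritten function might look like this (a sketch only: the example URLs and the -P 4 parallelism are made-up values, and it assumes the Bash builtin time):

time_a_url() {
    # TIMEFORMAT=%R makes the Bash builtin `time` print only the elapsed (real) seconds
    local t
    t=$( { TIMEFORMAT=%R; time wget -q -p "$1" -O /dev/null; } 2>&1 )
    echo "Fetching $1 took $t seconds."
}
export -f time_a_url    # so the bash that xargs spawns can see the function

printf '%s\n' http://example.com http://example.org |
    xargs -P 4 -n 1 -I{} bash -c 'time_a_url "$@"' _ {}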

Lee Netherton ,Aug 30, 2011 at 16:32

I used xargs -P0 -n1 -I{} bash -c "f {}" which still works, and seems a little tidier. – Lee Netherton Aug 30 '11 at 16:32

tmpvar ,Jul 24, 2010 at 15:21

On Mac OS X, -P 0 is rejected ( xargs: max. processes must be >0 (for: xargs -P [>0]) ), so specify an explicit number of processes instead:

f() { echo "[$@]"; }
export -f f

echo -e "b 1\nc 2\nd 3 4" | sed 's/ /\\ /g' | xargs -P 10 -n 1 -I{} bash -c f\ \{\}

echo -e "b 1\nc 2\nd 3 4" | xargs -P 10 -I '{}' bash -c 'f "$@"' arg0 '{}'


If you install GNU Parallel on another system, you will see the functionality is in a single file (called parallel).

You should be able to simply copy that file to your own ~/bin.

[Jun 21, 2018] Create a Sudo Log File by Aaron Kili

Jun 21, 2018 | www.tecmint.com

By default, sudo logs through syslog(3). However, to specify a custom log file, use the logfile parameter like so:

Defaults  logfile="/var/log/sudo.log"

To log hostname and the four-digit year in the custom log file, use log_host and log_year parameters respectively as follows:

Defaults  log_host, log_year, logfile="/var/log/sudo.log"
Log Sudo Command Input/Output

The log_input and log_output parameters enable sudo to run a command in a pseudo-tty and log all user input and all output sent to the screen, respectively.

The default I/O log directory is /var/log/sudo-io , and each session is stored there under its sequence number. You can specify a custom directory through the iolog_dir parameter.

Defaults   log_input, log_output

Some escape sequences are supported, such as %{seq} , which expands to a monotonically increasing base-36 sequence number such as 000001, where every two digits are used to form a new directory, e.g. 00/00/01.
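Putting the pieces above together, a minimal /etc/sudoers sketch might look like this (the per-user subdirectory is just one example of using an escape sequence in iolog_dir):

Defaults   log_input, log_output
Defaults   iolog_dir=/var/log/sudo-io/%{user}

Recorded sessions can then be listed with sudoreplay -l and a particular session replayed by passing its TSID to sudoreplay.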

[Jun 21, 2018] Lecture: Sudo Users by Aaron Kili

Jun 21, 2018 | www.tecmint.com

To lecture sudo users about password usage on the system, use the lecture parameter as below.

It has 3 possible values:

  1. always – always lecture a user.
  2. once – only lecture a user the first time they execute the sudo command (this is the default when no value is specified)
  3. never – never lecture the user.
 
Defaults  lecture="always"

Additionally, you can set a custom lecture file with the lecture_file parameter, type the appropriate message in the file:

Defaults  lecture_file="/path/to/file"
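A minimal sketch combining the two parameters (the file path and message below are assumptions for illustration):

Defaults  lecture="always"
Defaults  lecture_file="/etc/sudo_lecture.txt"

Here /etc/sudo_lecture.txt simply contains the text to show, for example: "All sudo activity on this host is logged and reviewed."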

Show Custom Message When You Enter Wrong sudo Password

When a user enters a wrong password, a certain message is displayed on the command line. The default message is " sorry, try again "; you can modify it using the badpass_message parameter as follows:

Defaults  badpass_message="Password is wrong, please try again"
Increase sudo Password Tries Limit

The parameter passwd_tries is used to specify the number of times a user can try to enter a password.

The default value is 3:

Defaults   passwd_tries=5

Set sudo Password Timeout

To set a password timeout (default is 5 minutes) using passwd_timeout parameter, add the line below:

Defaults   passwd_timeout=2
Let Sudo Insult You When You Enter a Wrong Password

When a user types a wrong password, sudo will display insults on the terminal if the insults parameter is set. This automatically overrides the badpass_message parameter.

Defaults  insults

[Jun 20, 2018] Suse Doc Administration Guide - Configuring sudo

Notable quotes:
"... WARNING: Dangerous constructs ..."
Sep 06, 2013 | www.suse.com
Basic sudoers Configuration Syntax

In the sudoers configuration files, there are two types of options: strings and flags. While strings can contain any value, flags can be turned either ON or OFF. The most important syntax constructs for sudoers configuration files are:

# Everything on a line after a # gets ignored 
Defaults !insults # Disable the insults flag 
Defaults env_keep += "DISPLAY HOME" # Add DISPLAY and HOME to env_keep
tux ALL = NOPASSWD: /usr/bin/frobnicate, PASSWD: /usr/bin/journalctl

There are two exceptions: #include and #includedir are normal commands, and a # followed by digits specifies a UID rather than starting a comment.

Remove the ! to set the specified flag to ON.

See Section 2.2.3, Rules in sudoers .

Table 2-1 Useful Flags and Options (each entry below lists the option name, a description, and an example)

targetpw

This flag controls whether the invoking user is required to enter the password of the target user (ON) (for example root ) or the invoking user (OFF).

Defaults targetpw # Turn targetpw flag ON

rootpw

If set, sudo will prompt for the root password instead of the target user's or the invoker's. The default is OFF.

Defaults !rootpw # Turn rootpw flag OFF

env_reset

If set, sudo constructs a minimal environment with only TERM , PATH , HOME , MAIL , SHELL , LOGNAME , USER , USERNAME , and SUDO_* set. Additionally, variables listed in env_keep get imported from the calling environment. The default is ON.

Defaults env_reset # Turn env_reset flag ON

env_keep

List of environment variables to keep when the env_reset flag is ON.

# Set env_keep to contain EDITOR and PROMPT
Defaults env_keep = "EDITOR PROMPT"
Defaults env_keep += "JRE_HOME" # Add JRE_HOME
Defaults env_keep -= "JRE_HOME" # Remove JRE_HOME

env_delete

List of environment variables to remove when the env_reset flag is OFF.

# Set env_delete to contain EDITOR and PROMPT
Defaults env_delete = "EDITOR PROMPT"
Defaults env_delete += "JRE_HOME" # Add JRE_HOME
Defaults env_delete -= "JRE_HOME" # Remove JRE_HOME

The Defaults token can also be used to create aliases for a collection of users, hosts, and commands. Furthermore, it is possible to apply an option only to a specific set of users.
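For example, a hedged sketch of applying options to specific users or groups (tux and %admin are placeholders):

Defaults:tux      !insults              # applies only to user tux
Defaults:%admin   timestamp_timeout=0   # applies only to members of group admin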

For detailed information about the /etc/sudoers configuration file, consult man 5 sudoers .

2.2.3 Rules in sudoers

Rules in the sudoers configuration can be very complex, so this section will only cover the basics. Each rule follows the basic scheme ( [] marks optional parts):

#Who      Where         As whom      Tag                What
User_List Host_List = [(User_List)] [NOPASSWD:|PASSWD:] Cmnd_List
Syntax for sudoers Rules
User_List

One or more (separated by , ) identifiers: Either a user name, a group in the format %GROUPNAME or a user ID in the format #UID . Negation can be performed with a ! prefix.

Host_List

One or more (separated by , ) identifiers: Either a (fully qualified) host name or an IP address. Negation can be performed with a ! prefix. ALL is the usual choice for Host_List .

NOPASSWD:|PASSWD:

The user will not be prompted for a password when running commands matching CMDSPEC after NOPASSWD: .

PASSWD is the default; it only needs to be specified when both are on the same line:

tux ALL = PASSWD: /usr/bin/foo, NOPASSWD: /usr/bin/bar
Cmnd_List

One or more (separated by , ) specifiers: A path to an executable, followed by allowed arguments or nothing.

/usr/bin/foo     # Anything allowed
/usr/bin/foo bar # Only "/usr/bin/foo bar" allowed
/usr/bin/foo ""  # No arguments allowed

ALL can be used as User_List , Host_List , and Cmnd_List .

A rule that allows tux to run all commands as root without entering a password:

tux ALL = NOPASSWD: ALL

A rule that allows tux to run systemctl restart apache2 :

tux ALL = /usr/bin/systemctl restart apache2

A rule that allows tux to run wall as admin with no arguments:

tux ALL = (admin) /usr/bin/wall ""

WARNING: Dangerous constructs

Constructs of the kind

ALL ALL = ALL

must not be used without Defaults targetpw , otherwise anyone can run commands as root .

[Jun 20, 2018] Sudo - ArchWiki

Jun 20, 2018 | wiki.archlinux.org

Sudoers default file permissions

The owner and group for the sudoers file must both be 0. The file permissions must be set to 0440. These permissions are set by default, but if you accidentally change them, they should be changed back immediately or sudo will fail.

# chown -c root:root /etc/sudoers
# chmod -c 0440 /etc/sudoers
Tips and tricks

Disable per-terminal sudo

Warning: This will let any process use your sudo session.

If you are annoyed by sudo's defaults that require you to enter your password every time you open a new terminal, disable tty_tickets :

Defaults !tty_tickets
Environment variables

If you have a lot of environment variables, or you export your proxy settings via export http_proxy="..." , when using sudo these variables do not get passed to the root account unless you run sudo with the -E option.

$ sudo -E pacman -Syu

The recommended way of preserving environment variables is to append them to env_keep :

/etc/sudoers
Defaults env_keep += "ftp_proxy http_proxy https_proxy no_proxy"
Passing aliases

If you use a lot of aliases, you might have noticed that they do not carry over to the root account when using sudo. However, there is an easy way to make them work. Simply add the following to your ~/.bashrc or /etc/bash.bashrc :

alias sudo='sudo '
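This works because Bash only checks the word following an alias for further alias expansion when the alias value ends in a blank; the trailing space in 'sudo ' triggers that. A small sketch (the ll alias is just an example):

alias ll='ls -la'     # an ordinary alias
alias sudo='sudo '    # trailing space: the next word is also checked for alias expansion
sudo ll /root         # now runs "sudo ls -la /root" instead of failing with "ll: command not found"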
Root password

Users can configure sudo to ask for the root password instead of the user password by adding targetpw (target user, defaults to root) or rootpw to the Defaults line in /etc/sudoers :

Defaults targetpw

To prevent exposing your root password to users, you can restrict this to a specific group:

Defaults:%wheel targetpw
%wheel ALL=(ALL) ALL
Disable root login

Users may wish to disable the root login. Without root, attackers must first guess a user name configured as a sudoer as well as the user password. See for example Ssh#Deny .


The account can be locked via passwd :

# passwd -l root

A similar command unlocks root.

$ sudo passwd -u root

Alternatively, edit /etc/shadow and replace the root's encrypted password with "!":

root:!:12345::::::

To enable root login again:

$ sudo passwd root
Tip: To get to an interactive root prompt, even after disabling the root account, use sudo -i .

kdesu

kdesu may be used under KDE to launch GUI applications with root privileges. It is possible that by default kdesu will try to use su even if the root account is disabled. Fortunately one can tell kdesu to use sudo instead of su. Create/edit the file ~/.config/kdesurc :

[super-user-command]
super-user-command=sudo

or use the following command:

$ kwriteconfig5 --file kdesurc --group super-user-command --key super-user-command sudo

Alternatively, install kdesudo AUR , which has the added advantage of tab-completion for the command following.

Harden with Sudo Example

Let us say you create 3 users: admin, devel, and joe. The user "admin" is used for journalctl, systemctl, mount, kill, and iptables; "devel" is used for installing packages, and editing config files; and "joe" is the user you log in with. To let "joe" reboot, shutdown, and use netctl we would do the following:

Edit /etc/pam.d/su and /etc/pam.d/su-l to require the user to be in the wheel group, but do not put anyone in it.

#%PAM-1.0
auth            sufficient      pam_rootok.so
# Uncomment the following line to implicitly trust users in the "wheel" group.
#auth           sufficient      pam_wheel.so trust use_uid
# Uncomment the following line to require a user to be in the "wheel" group.
auth            required        pam_wheel.so use_uid
auth            required        pam_unix.so
account         required        pam_unix.so
session         required        pam_unix.so

Limit SSH login to the 'ssh' group. Only "joe" will be part of this group.

groupadd -r ssh
gpasswd -a joe ssh
echo 'AllowGroups ssh' >> /etc/ssh/sshd_config

Restart sshd.service .

Add users to other groups.

for g in power network; do gpasswd -a joe "$g"; done
for g in network power storage; do gpasswd -a admin "$g"; done

Set permissions on configs so devel can edit them.

chown -R devel:root /etc/{http,openvpn,cups,zsh,vim,screenrc}
Cmnd_Alias  POWER       =   /usr/bin/shutdown -h now, /usr/bin/halt, /usr/bin/poweroff, /usr/bin/reboot
Cmnd_Alias  STORAGE     =   /usr/bin/mount -o nosuid\,nodev\,noexec, /usr/bin/umount
Cmnd_Alias  SYSTEMD     =   /usr/bin/journalctl, /usr/bin/systemctl
Cmnd_Alias  KILL        =   /usr/bin/kill, /usr/bin/killall
Cmnd_Alias  PKGMAN      =   /usr/bin/pacman
Cmnd_Alias  NETWORK     =   /usr/bin/netctl
Cmnd_Alias  FIREWALL    =   /usr/bin/iptables, /usr/bin/ip6tables
Cmnd_Alias  SHELL       =   /usr/bin/zsh, /usr/bin/bash
%power      ALL         =   (root)  NOPASSWD: POWER
%network    ALL         =   (root)  NETWORK
%storage    ALL         =   (root)  STORAGE
root        ALL         =   (ALL)   ALL
admin       ALL         =   (root)  SYSTEMD, KILL, FIREWALL
devel       ALL         =   (root)  PKGMAN
joe         ALL         =   (devel) SHELL, (admin) SHELL

With this setup, you will almost never need to log in as the root user.

"joe" can connect to his home WiFi.

sudo netctl start home
sudo poweroff

"joe" can not use netctl as any other user.

sudo -u admin -- netctl start home

When "joe" needs to use journalctl or kill run away process he can switch to that user

sudo -i -u devel
sudo -i -u admin

But "joe" cannot switch to the root user.

sudo -i -u root

If "joe" want to start a gnu-screen session as admin he can do it like this:

sudo -i -u admin
admin% chown admin:tty `echo $TTY`
admin% screen
Configure sudo using drop-in files in /etc/sudoers.d

sudo parses files contained in the directory /etc/sudoers.d/ . This means that instead of editing /etc/sudoers , you can change settings in standalone files and drop them in that directory. This keeps local changes separate from the main configuration file and makes individual settings easier to add and remove.

The format for entries in these drop-in files is the same as for /etc/sudoers itself. To edit them directly, use visudo -f /etc/sudoers.d/ somefile . See the "Including other files from within sudoers" section of sudoers(5) for details.

The files in /etc/sudoers.d/ directory are parsed in lexicographical order, file names containing . or ~ are skipped. To avoid sorting problems, the file names should begin with two digits, e.g. 01_foo .
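As a hedged example, a single deployment rule could live in its own drop-in file (the file name, user, and service below are placeholders):

# visudo -f /etc/sudoers.d/10_deploy
deploy  ALL=(root) NOPASSWD: /usr/bin/systemctl restart example.service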

Note: The order of entries in the drop-in files is important: make sure that the statements do not override themselves.

Warning: The files in /etc/sudoers.d/ are just as fragile as /etc/sudoers itself: any improperly formatted file will prevent sudo from working. Hence, for the same reason, it is strongly advised to use visudo .

Editing files

sudo -e or sudoedit lets you edit a file as another user while still running the text editor as your user.

This is especially useful for editing files as root without elevating the privilege of your text editor, for more details read sudo(8) .

Note that you can set the editor to any program, so for example one can use meld to manage pacnew files:

$ SUDO_EDITOR=meld sudo -e /etc/file{,.pacnew}
Troubleshooting SSH TTY Problems


SSH does not allocate a tty by default when running a remote command. Without a tty, sudo cannot disable echo when prompting for a password. You can use ssh's -t option to force it to allocate a tty.

The Defaults option requiretty only allows the user to run sudo if they have a tty.

# Disable "ssh hostname sudo <cmd>", because it will show the password in clear text. You have to run "ssh -t hostname sudo <cmd>".
#
#Defaults    requiretty
Permissive umask


Sudo will union the user's umask value with its own umask (which defaults to 0022). This prevents sudo from creating files with more open permissions than the user's umask allows. While this is a sane default if no custom umask is in use, this can lead to situations where a utility run by sudo may create files with different permissions than if run by root directly. If errors arise from this, sudo provides a means to fix the umask, even if the desired umask is more permissive than the umask that the user has specified. Adding this (using visudo ) will override sudo's default behavior:

Defaults umask = 0022
Defaults umask_override

This sets sudo's umask to root's default umask (0022) and overrides the default behavior, always using the indicated umask regardless of what umask the user has set.

Defaults skeleton


The author's site has a list of all the options that can be used with the Defaults command in the /etc/sudoers file.

See [1] for a list of options (parsed from the version 1.8.7 source code) in a format optimized for sudoers .

[Jun 20, 2018] sudo - Gentoo Wiki

Jun 20, 2018 | wiki.gentoo.org

Non-root execution

It is also possible to have a user run an application as a different, non-root user. This can be very interesting if you run applications as a different user (for instance apache for the web server) and want to allow certain users to perform administrative steps as that user (like killing zombie processes).

Inside /etc/sudoers you list the user(s) in between ( and ) before the command listing:

CODE Non-root execution syntax
users  hosts = (run-as) commands

For instance, to allow larry to run the kill tool as the apache or gorg user:

CODE Non-root execution example
Cmnd_Alias KILL = /bin/kill, /usr/bin/pkill
 
larry   ALL = (apache, gorg) KILL

With this set, the user can run sudo -u to select the user he wants to run the application as:

user $ sudo -u apache pkill apache

You can set an alias for the user to run an application as using the Runas_Alias directive. Its use is identical to the other _Alias directives we have seen before.
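A minimal sketch of a Runas_Alias, reusing the KILL command alias from the example above (the alias name and its members are illustrative):

CODE Using a Runas_Alias
Runas_Alias  WEBUSERS = apache, gorg

larry   ALL = (WEBUSERS) KILL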

Passwords and default settings

By default, sudo asks the user to identify himself using his own password. Once a password is entered, sudo remembers it for 5 minutes, allowing the user to focus on his tasks and not repeatedly re-entering his password.

Of course, this behavior can be changed: you can set the Defaults: directive in /etc/sudoers to change the default behavior for a user.

For instance, to change the default 5 minutes to 0 (never remember):

CODE Changing the timeout value
Defaults:larry  timestamp_timeout=0

A setting of -1 would remember the password indefinitely (until the system reboots).

A different setting would be to require the password of the user that the command should be run as and not the users' personal password. This is accomplished using runaspw . In the following example we also set the number of retries (how many times the user can re-enter a password before sudo fails) to 2 instead of the default 3:

CODE Requiring the root password instead of the user's password
Defaults:john   runaspw, passwd_tries=2

Another interesting feature is to keep the DISPLAY variable set so that you can execute graphical tools:

CODE Keeping the DISPLAY variable alive
Defaults:john env_keep=DISPLAY

You can change dozens of default settings using the Defaults: directive. Fire up the sudoers manual page and search for Defaults .

If you however want to allow a user to run a certain set of commands without providing any password whatsoever, you need to start the commands with NOPASSWD: , like so:

CODE Allowing emerge to be ran as root without asking for a password
larry     localhost = NOPASSWD: /usr/bin/emerge
Bash completion

Users that want bash completion with sudo need to run this once.

user $ sudo echo "complete -cf sudo" >> $HOME/.bashrc

[Jun 20, 2018] Trick 4: Switching to root

Jun 20, 2018 | www.networkworld.com

There are times when prefacing every command with "sudo" gets in the way of getting your work done. With a default /etc/sudoers configuration and membership in the sudo (or admin) group, you can assume root control using the command sudo su - (or, as shown below, sudo -i ). Extra care should always be taken when using the root account in this way.

$ sudo -i -u root
[sudo] password for jdoe:
root@stinkbug:~#

[Jun 20, 2018] Prolonging password timeout

Jun 20, 2018 | wiki.gentoo.org

Prolonging password timeout

By default, if a user has entered their password to authenticate their self to sudo , it is remembered for 5 minutes. If the user wants to prolong this period, he can run sudo -v to reset the time stamp so that it will take another 5 minutes before sudo asks for the password again.

user $ sudo -v

The inverse is to kill the time stamp using sudo -k .

[Jun 20, 2018] Shared Administration with Sudo

Jun 20, 2018 | www.freebsd.org

Finally, this line in /usr/local/etc/sudoers allows any member of the webteam group to manage webservice :

%webteam   ALL=(ALL)       /usr/sbin/service webservice *

Unlike su (1) , Sudo only requires the end user's password. This avoids shared passwords, which are a common finding in security audits and bad practice all the way around.

Users permitted to run applications with Sudo only enter their own passwords. This is more secure and gives better control than su (1) , where the root password is entered and the user acquires all root permissions.

Tip:

Most organizations are moving or have moved toward a two factor authentication model. In these cases, the user may not have a password to enter. Sudo provides for these cases with the NOPASSWD variable. Adding it to the configuration above will allow all members of the webteam group to manage the service without the password requirement:

%webteam   ALL=(ALL)       NOPASSWD: /usr/sbin/service webservice *

13.14.1. Logging Output

An advantage to implementing Sudo is the ability to enable session logging. Using the built-in log mechanisms and the included sudoreplay command, all commands initiated through Sudo are logged for later verification. To enable this feature, add a default log directory entry; this example uses a user variable. Several other log filename conventions exist; consult the manual page for sudoreplay for additional information.

Defaults iolog_dir=/var/log/sudo-io/%{user}
Tip:

This directory will be created automatically after the logging is configured. It is best to let the system create the directory with default permissions, just to be safe. In addition, this entry will also log administrators who use the sudoreplay command. To change this behavior, read and uncomment the logging options inside sudoers .

Once this directive has been added to the sudoers file, any user configuration can be updated with the request to log access. In the example shown, the updated webteam entry would have the following additional changes:

%webteam ALL=(ALL) NOPASSWD: LOG_INPUT: LOG_OUTPUT: /usr/sbin/service webservice *

From this point on, all webteam members altering the status of the webservice application will be logged. The list of previous and current sessions can be displayed with:

# sudoreplay -l

In the output, to replay a specific session, search for the TSID= entry, and pass that to sudoreplay with no other options to replay the session at normal speed. For example:

# sudoreplay user1/00/00/02
Warning:

While sessions are logged, any administrator is able to remove sessions and leave only a question of why they had done so. It is worthwhile to add a daily check through an intrusion detection system ( IDS ) or similar software so that other administrators are alerted to manual alterations.

The sudoreplay command is extremely extensible. Consult the documentation for more information.

[Jun 20, 2018] SCOM 1801, 2016 and 2012 Configuring sudo Elevation for UNIX and Linux Monitoring

Jun 20, 2018 | technet.microsoft.com

LINUX

#-----------------------------------------------------------------------------------

#Example user configuration for Operations Manager agent

#Example assumes users named: scomadm & scomadm

#Replace usernames & corresponding /tmp/scx-<username> specification for your environment

#General requirements

Defaults:scomadm !requiretty

#Agent maintenance

##Certificate signing

scomadm ALL=(root) NOPASSWD: /bin/sh -c cp /tmp/scx-scomadm/scx.pem /etc/opt/microsoft/scx/ssl/scx.pem; rm -rf /tmp/scx-scomadm; /opt/microsoft/scx/bin/tools/scxadmin -restart

scomadm ALL=(root) NOPASSWD: /bin/sh -c cat /etc/opt/microsoft/scx/ssl/scx.pem

scomadm ALL=(root) NOPASSWD: /bin/sh -c if test -f /opt/microsoft/omsagent/bin/service_control; then cat /etc/opt/microsoft/omsagent/scom/certs/scom-cert.pem; else cat /etc/opt/microsoft/scx/ssl/scx.pem; fi

scomadm ALL=(root) NOPASSWD: /bin/sh -c if test -f /opt/microsoft/omsagent/bin/service_control; then mv /tmp/scx-scomadm/scom-cert.pem /etc/opt/microsoft/omsagent/scom/certs/scom-cert.pem; fi

scomadm ALL=(root) NOPASSWD: /bin/sh -c if test -r /etc/opt/microsoft/scx/ssl/scx.pem; then cat /etc/opt/microsoft/scx/ssl/scx.pem; else cat /etc/opt/microsoft/scx/ssl/scx-seclevel1.pem; fi

##SCOM Workspace

scomadm ALL=(root) NOPASSWD: /bin/sh -c if test -f /opt/microsoft/omsagent/bin/service_control; then cp /tmp/scx-scomadm/omsadmin.conf /etc/opt/microsoft/omsagent/scom/conf/omsadmin.conf; /opt/microsoft/omsagent/bin/service_control restart scom; fi

scomadm ALL=(root) NOPASSWD: /bin/sh -c if test -f /opt/microsoft/omsagent/bin/omsadmin.sh && test ! -f /etc/opt/microsoft/omsagent/scom/certs/scom-cert.pem; then /opt/microsoft/omsagent/bin/omsadmin.sh -w scom; fi

##Install or upgrade

#Linux

scomadm ALL=(root) NOPASSWD: /bin/sh -c sh /tmp/scx-scomadm/omsagent-1.[0-9].[0-9]-[0-9][0-9].universal[[\:alpha\:]].[[\:digit\:]].x[6-8][4-6].sh --install --enable-opsmgr; if test -f /opt/microsoft/omsagent/bin/omsadmin.sh && test ! -f /etc/opt/microsoft/omsagent/scom/certs/scom-cert.pem; then /opt/microsoft/omsagent/bin/omsadmin.sh -w scom; fi; EC=$?; cd /tmp; rm -rf /tmp/scx-scomadm; exit $EC

scomadm ALL=(root) NOPASSWD: /bin/sh -c sh /tmp/scx-scomadm/omsagent-1.[0-9].[0-9]-[0-9][0-9].universal[[\:alpha\:]].[[\:digit\:]].x[6-8][4-6].sh --upgrade --enable-opsmgr; if test -f /opt/microsoft/omsagent/bin/omsadmin.sh && test ! -f /etc/opt/microsoft/omsagent/scom/certs/scom-cert.pem; then /opt/microsoft/omsagent/bin/omsadmin.sh -w scom; fi; EC=$?; cd /tmp; rm -rf /tmp/scx-scomadm; exit $EC

#RHEL

scomadm ALL=(root) NOPASSWD: /bin/sh -c sh /tmp/scx-scomadm/omsagent-1.[0-9].[0-9]-[0-9][0-9].rhel.[[\:digit\:]].x[6-8][4-6].sh --install --enable-opsmgr; if test -f /opt/microsoft/omsagent/bin/omsadmin.sh && test ! -f /etc/opt/microsoft/omsagent/scom/certs/scom-cert.pem; then /opt/microsoft/omsagent/bin/omsadmin.sh -w scom; fi; EC=$?; cd /tmp; rm -rf /tmp/scx-scomadm; exit $EC

scomadm ALL=(root) NOPASSWD: /bin/sh -c sh /tmp/scx-scomadm/omsagent-1.[0-9].[0-9]-[0-9][0-9].rhel.[[\:digit\:]].x[6-8][4-6].sh --upgrade --enable-opsmgr; if test -f /opt/microsoft/omsagent/bin/omsadmin.sh && test ! -f /etc/opt/microsoft/omsagent/scom/certs/scom-cert.pem; then /opt/microsoft/omsagent/bin/omsadmin.sh -w scom; fi; EC=$?; cd /tmp; rm -rf /tmp/scx-scomadm; exit $EC

#SUSE

scomadm ALL=(root) NOPASSWD: /bin/sh -c sh /tmp/scx-scomadm/omsagent-1.[0-9].[0-9]-[0-9][0-9].sles.1[[\:digit\:]].x[6-8][4-6].sh --install --enable-opsmgr; if test -f /opt/microsoft/omsagent/bin/omsadmin.sh && test ! -f /etc/opt/microsoft/omsagent/scom/certs/scom-cert.pem; then /opt/microsoft/omsagent/bin/omsadmin.sh -w scom; fi; EC=$?; cd /tmp; rm -rf /tmp/scx-scomadm; exit $EC

scomadm ALL=(root) NOPASSWD: /bin/sh -c sh /tmp/scx-scomadm/omsagent-1.[0-9].[0-9]-[0-9][0-9].sles.1[[\:digit\:]].x[6-8][4-6].sh --upgrade --enable-opsmgr; if test -f /opt/microsoft/omsagent/bin/omsadmin.sh && test ! -f /etc/opt/microsoft/omsagent/scom/certs/scom-cert.pem; then /opt/microsoft/omsagent/bin/omsadmin.sh -w scom; fi; EC=$?; cd /tmp; rm -rf /tmp/scx-scomadm; exit $EC

## RHEL PPC

scomadm ALL=(root) NOPASSWD: /bin/sh -c sh /tmp/scx-scomadm/scx-1.[0-9].[0-9]-[0-9][0-9][0-9].rhel.[[\:digit\:]].ppc.sh --install --enable-opsmgr; if test -f /opt/microsoft/omsagent/bin/omsadmin.sh && test ! -f /etc/opt/microsoft/omsagent/scom/certs/scom-cert.pem; then /opt/microsoft/omsagent/bin/omsadmin.sh -w scom; fi; EC=$?; cd /tmp; rm -rf /tmp/scx-scomadm; exit $EC

scomadm ALL=(root) NOPASSWD: /bin/sh -c sh /tmp/scx-scomadm/scx-1.[0-9].[0-9]-[0-9][0-9][0-9].rhel.[[\:digit\:]].ppc.sh --upgrade --enable-opsmgr; if test -f /opt/microsoft/omsagent/bin/omsadmin.sh && test ! -f /etc/opt/microsoft/omsagent/scom/certs/scom-cert.pem; then /opt/microsoft/omsagent/bin/omsadmin.sh -w scom; fi; EC=$?; cd /tmp; rm -rf /tmp/scx-scomadm; exit $EC

##Uninstall

scomadm ALL=(root) NOPASSWD: /bin/sh -c if test -f /opt/microsoft/omsagent/bin/omsadmin.sh; then if test "$(/opt/microsoft/omsagent/bin/omsadmin.sh -l | grep scom | wc -l)" \= "1" && test "$(/opt/microsoft/omsagent/bin/omsadmin.sh -l | wc -l)" \= "1" || test "$(/opt/microsoft/omsagent/bin/omsadmin.sh -l)" \= "No Workspace"; then /opt/microsoft/omsagent/bin/uninstall; else /opt/microsoft/omsagent/bin/omsadmin.sh -x scom; fi; else /opt/microsoft/scx/bin/uninstall; fi

##Log file monitoring

scomadm ALL=(root) NOPASSWD: /opt/microsoft/scx/bin/scxlogfilereader -p

###Examples

#Custom shell command monitoring example -replace <shell command> with the correct command string

#scomadm ALL=(root) NOPASSWD: /bin/bash -c <shell command>

#Daemon diagnostic and restart recovery tasks example (using cron)

#scomadm ALL=(root) NOPASSWD: /bin/sh -c ps -ef | grep cron | grep -v grep

#scomadm ALL=(root) NOPASSWD: /usr/sbin/cron &

#End user configuration for Operations Manager agent

#-----------------------------------------------------------------------------------

[Jun 20, 2018] Sudo and Sudoers Configuration Servers for Hackers

Jun 20, 2018 | serversforhackers.com

%group

We can try editing a group. The following will allow group www-data to run sudo service php5-fpm * commands without a password, great for deployment!

%www-data ALL=(ALL:ALL) NOPASSWD:/usr/sbin/service php5-fpm *

Here's the same configuration as a comma-separated list of multiple commands. This lets us be more specific about which service commands we can use with php5-fpm :

%www-data ALL=(ALL:ALL) NOPASSWD:/usr/sbin/service php5-fpm reload,/usr/sbin/service php5-fpm restart

We can enforce the use of a password with some commands, but no password for others:

%admin ALL = NOPASSWD:/bin/mkdir, PASSWD:/bin/rm

[Jun 20, 2018] IBM Knowledge Center - Configuring sudo

Jun 20, 2018 | www.ibm.com
  1. Open the /etc/sudoers file with a text editor. The sudo installation includes the visudo editor, which checks the syntax of the file before closing.
  2. Add the following commands to the file. Important: Enter each command on a single line:
    # Preserve GPFS environment variables:
    Defaults env_keep += "MMMODE environmentType GPFS_rshPath GPFS_rcpPath mmScriptTrace GPFSCMDPORTRANGE GPFS_CIM_MSG_FORMAT" 
    
    # Allow members of the gpfs group to run all commands but only selected commands without a password:
    %gpfs ALL=(ALL) PASSWD: ALL, NOPASSWD: /usr/lpp/mmfs/bin/mmremote, /usr/bin/scp, /bin/echo, /usr/lpp/mmfs/bin/mmsdrrestore
    
    # Disable requiretty for group gpfs:
    Defaults:%gpfs !requiretty
    

[Jun 20, 2018] Understanding and using sudo in Unix or Linux (with examples)

Jun 20, 2018 | aplawrence.com

Limiting commands

There's more that sudo does to protect you from malicious mischief. The "man sudo" pages cover that completely. Let's continue with our examples; it's time to limit "jim" to specific commands. There are two ways to do that. We can specifically list commands, or we can say that jim can only run commands in a certain directory. A combination of those methods is useful:

jim     ALL=    /bin/kill,/sbin/linuxconf, /usr/sbin/jim/

The careful reader will note that there was a bit of a change here. The line used to read "jim ALL=(ALL) ALL", but now there's only one "ALL" left. Reading the man page can easily leave you quite confused as to what those three "ALL"'s meant. In the example above, ALL refers to machines; the assumption is that this is a network-wide sudoers file. In the case of this machine (lnxserve) we could do this:

jim     lnxserve=       /bin/kill, /usr/sbin/jim/

So what was the "(ALL)" for? Well, here's a clue:

jim     lnxserve=(paul,linda)   /bin/kill, /usr/sbin/jim/

That says that jim can (using "sudo -u ") run commands as paul or linda.

This is perfect for giving jim the power to kill paul or linda's processes without giving him anything else. There is one thing we need to add though: if we just left it like this, jim is forced to use "sudo -u paul" or "sudo -u linda" every time. We can add a default "runas_default":

Defaults:jim    timestamp_timeout=-1, env_delete+="BOOP", runas_default=linda

[Jun 20, 2018] Configuring sudo Explaination with an example by Ankit Mehta

May 14, 2009 | www.linux.com

sudo commands use a basic syntax. By default, the /etc/sudoers file will have one stanza:

root      ALL=(ALL) ALL

This tells sudo to give root sudo access to everything on every host. The syntax is simple:

user       host = (user) command

The first column defines the user the command applies to. The host section defines the host this stanza applies to. The (user) section defines the user to run the command as, while the command section defines the command itself.

You can also define aliases for Hosts, Users, and Commands by using the keywords Host_Alias , User_Alias , and Cmnd_Alias respectively.

Let's take a look at a few examples of the different aliases you can use.

... ... ...

Next, lets define some User aliases:

User_Alias        WEBADMIN = ankit, sam
User_Alias        MAILADMIN = ankit, navan
User_Alias        BINADMIN = ankit, jon

Here we've also defined three User aliases. The first user alias has the name WEBADMIN for web administrators. Here we've define Ankit and Sam. The second alias is MAILADMIN, for mail administrators, and here we have Ankit and Navan. Finally, we define an alias of BINADMIN for the regular sysadmins, again Ankit, but with Jon as well.

So far we've defined some hosts and some users. Now we get to define what commands they may be able to run, also using some aliases:

Cmnd_Alias         SU = /bin/su
Cmnd_Alias         BIN = /bin/rpm, /bin/rm, /sbin/linuxconf
Cmnd_Alias         SWATCH = /usr/bin/swatch, /bin/touch
Cmnd_Alias         HTTPD = /etc/rc.d/init.d/httpd, /etc/rc.d/init.d/mysql
Cmnd_Alias         SMTP = /etc/rc.d/init.d/qmail

Here we have a few aliases. The first we call SU, and enables the user to run the /bin/su command. The second we call BIN, which enables the user to run the commands: /bin/rpm , /bin/rm , and /sbin/linuxconf . The next is the SWATCH alias which allows the user to run /usr/bin/swatch and /bin/touch . Then we define the HTTPD alias which allows the user to execute /etc/rc.d/init.d/httpd and /etc/rc.d/init.d/mysql , for web maintenance. Finally, we define SMTP, which allows the user to manipulate the running of the qmail SMTP server...
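The excerpt stops before showing how these aliases come together in rules; a hedged sketch of what such stanzas might look like (illustrative only, not from the article):

WEBADMIN    ALL = HTTPD
MAILADMIN   ALL = SMTP
BINADMIN    ALL = BIN, SWATCH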

... ... ...

[Jun 20, 2018] Running Commands as Another User via sudo

Jun 20, 2018 | www.safaribooksonline.com

You want one user to run commands as another, without sharing passwords.

Solution

Suppose you want user smith to be able to run a given command as user jones.

               /etc/sudoers:
smith  ALL = (jones) /usr/local/bin/mycommand

User smith runs:

smith$ sudo -u jones /usr/local/bin/mycommand
smith$ sudo -u jones mycommand                     If /usr/local/bin is in $PATH

User smith will be prompted for his own password, not jones's. The ALL keyword, which matches anything, in this case specifies that the line is valid on any host.

Discussion

sudo exists for this very reason!

To authorize root privileges for smith, replace "jones" with "root" in the above example.

[Jun 20, 2018] Quick HOWTO Ch09 Linux Users and Sudo

This article contains some rather perverse examples that show that lists can be used on the right side of the user statement too ;-)
Jun 20, 2018 | www.linuxhomenetworking.com
Simple /etc/sudoers Examples

This section presents some simple examples of how to do many commonly required tasks using the sudo utility.

Granting All Access to Specific Users

You can grant users bob and bunny full access to all privileged commands, with this sudoers entry.

bob, bunny  ALL=(ALL) ALL

This is generally not a good idea because this allows bob and bunny to use the su command to grant themselves permanent root privileges, thereby bypassing the command logging features of sudo. The example on using aliases in the sudoers file shows how to eliminate this problem.

Granting Access To Specific Users To Specific Files

This entry allows user peter and all the members of the group operator to gain access to all the program files in the /sbin and /usr/sbin directories, plus the privilege of running the command /usr/local/apps/check.pl. Notice how the trailing slash (/) is required to specify a directory location:

peter, %operator ALL= /sbin/, /usr/sbin, /usr/local/apps/check.pl

Notice also that the lack of any username entries within parentheses () after the = sign prevents the users from running the commands automatically masquerading as another user. This is explained further in the next example.

Granting Access to Specific Files as Another User

The sudo -u entry allows allows you to execute a command as if you were another user, but first you have to be granted this privilege in the sudoers file.

This feature can be convenient for programmers who sometimes need to kill processes related to projects they are working on. For example, programmer peter is on the team developing a financial package that runs a program called monthend as user accounts. From time to time the application fails, requiring "peter" to stop it with the /bin/kill, /usr/bin/kill or /usr/bin/pkill commands but only as user "accounts". The sudoers entry would look like this:

peter ALL=(accounts) /bin/kill, /usr/bin/kill, /usr/bin/pkill

User peter is allowed to stop the monthend process with this command:

[peter@bigboy peter]# sudo -u accounts pkill monthend
Granting Access Without Needing Passwords

This example allows all users in the group operator to execute all the commands in the /sbin directory without the need for entering a password. This has the added advantage of being more convenient to the user:

%operator ALL= NOPASSWD: /sbin/
Using Aliases in the sudoers File

Sometimes you'll need to assign very similar sets of privileges to assorted groupings of users from various departments. The sudoers file allows users to be grouped according to function, with each group then assigned a nickname or alias that is used throughout the rest of the file. Groupings of commands can be assigned aliases too.

In the next example, users peter, bob and bunny and all the users in the operator group are made part of the user alias ADMINS. All the command shell programs are then assigned to the command alias SHELLS. Users ADMINS are then denied the option of running any SHELLS commands and su:

Cmnd_Alias    SHELLS = /usr/bin/sh,  /usr/bin/csh, \
                       /usr/bin/ksh, /usr/local/bin/tcsh, \
                       /usr/bin/rsh, /usr/local/bin/zsh
 
 
User_Alias    ADMINS = peter, bob, bunny, %operator
ADMINS        ALL    = !/usr/bin/su, !SHELLS

This attempts to ensure that users don't permanently su to become root, or enter command shells that bypass sudo's command logging. It doesn't prevent them from copying the files to other locations to be run. The advantage of this is that it helps to create an audit trail, but the restrictions can be enforced only as part of the company's overall security policy.

Other Examples

You can view a comprehensive list of /etc/sudoers file options by issuing the command man sudoers.

Using syslog To Track All sudo Commands

All sudo commands are logged in the log file /var/log/messages which can be very helpful in determining how user error may have contributed to a problem. All the sudo log entries have the word sudo in them, so you can easily get a thread of commands used by using the grep command to selectively filter the output accordingly.

Here is sample output from a user bob failing to enter their correct sudo password when issuing a command, immediately followed by the successful execution of the command /bin/more sudoers.

[root@bigboy tmp]# grep sudo /var/log/messages
Nov 18 22:50:30 bigboy sudo(pam_unix)[26812]: authentication failure; logname=bob uid=0 euid=0 tty=pts/0 ruser= rhost= user=bob
Nov 18 22:51:25 bigboy sudo: bob : TTY=pts/0 ; PWD=/etc ; USER=root ; COMMAND=/bin/more sudoers
[root@bigboy tmp]#

[Jun 20, 2018] bash - sudo as another user with their environment

Using strace is an interesting debugging tip
Jun 20, 2018 | unix.stackexchange.com

user80551 ,Jan 2, 2015 at 4:29

$ whoami
admin
$ sudo -S -u otheruser whoami
otheruser
$ sudo -S -u otheruser /bin/bash -l -c 'echo $HOME'
/home/admin

Why isn't $HOME being set to /home/otheruser even though bash is invoked as a login shell?

Specifically, /home/otheruser/.bashrc isn't being sourced. Also, /home/otheruser/.profile isn't being sourced. - ( /home/otheruser/.bash_profile doesn't exist)

EDIT: The exact problem is actually https://stackoverflow.com/questions/27738224/mkvirtualenv-with-fabric-as-another-user-fails

Pavel Šimerda ,Jan 2, 2015 at 8:29

A solution to this question will solve the other question as well, you might want to delete the other question in this situation. – Pavel Šimerda Jan 2 '15 at 8:29

Pavel Šimerda ,Jan 2, 2015 at 8:27

To invoke a login shell using sudo just use -i . When command is not specified you'll get a login shell prompt, otherwise you'll get the output of your command.

Example (login shell):

sudo -i

Example (with a specified user):

sudo -i -u user

Example (with a command):

sudo -i -u user whoami

Example (print user's $HOME ):

sudo -i -u user echo \$HOME

Note: The backslash character ensures that the dollar sign reaches the target user's shell and is not interpreted in the calling user's shell.

I have just checked the last example with strace, which tells you exactly what's happening. The output below shows that the shell is being called with --login and with the specified command, just as in your explicit call to bash, but in addition sudo can do its own work, like setting $HOME .

# strace -f -e process sudo -S -i -u user echo \$HOME
execve("/usr/bin/sudo", ["sudo", "-S", "-i", "-u", "user", "echo", "$HOME"], [/* 42 vars */]) = 0
...
[pid 12270] execve("/bin/bash", ["-bash", "--login", "-c", "echo \\$HOME"], [/* 16 vars */]) = 0
...

I noticed that you are using -S and I don't think it is generally a good technique. If you want to run commands as a different user without performing authentication from the keyboard, you might want to use SSH instead. It works for localhost as well as for other hosts and provides public key authentication that works without any interactive input.

ssh user@localhost echo \$HOME

Note: You don't need any special options with SSH as the SSH server always creates a login shell to be accessed by the SSH client.

John_West ,Nov 23, 2015 at 11:12

sudo -i -u user echo \$HOME doesn't work for me. Output: $HOME . strace gives the same output as yours. What's the issue? – John_West Nov 23 '15 at 11:12

Pavel Šimerda ,Jan 20, 2016 at 19:02

No idea, it still works for me, I'd need to see it or maybe even touch the system. – Pavel Šimerda Jan 20 '16 at 19:02

Jeff Snider ,Jan 2, 2015 at 8:04

You're giving Bash too much credit. All "login shell" means to Bash is what files are sourced at startup and shutdown. The $HOME variable doesn't figure into it.

The Bash docs explain some more what login shell means: https://www.gnu.org/software/bash/manual/html_node/Bash-Startup-Files.html#Bash-Startup-Files

In fact, Bash doesn't do anything to set $HOME at all. $HOME is set by whatever invokes the shell (login, ssh, etc.), and the shell inherits it. Whatever started your shell as admin set $HOME and then exec-ed bash ; sudo by design doesn't alter the environment unless asked or configured to do so, so bash as otheruser inherited it from your shell.

If you want sudo to handle more of the environment in the way you're expecting, look at the -i switch for sudo. Try:

sudo -S -u otheruser -i /bin/bash -l -c 'echo $HOME'

The man page for sudo describes it in more detail, though not really well, I think: http://linux.die.net/man/8/sudo

user80551 ,Jan 2, 2015 at 8:11

$HOME isn't set by bash - Thanks, I didn't know that. – user80551 Jan 2 '15 at 8:11

Pavel Šimerda ,Jan 2, 2015 at 9:46

Look for strace in my answer. It shows that you don't need to build /bin/bash -l -c 'echo $HOME' command line yourself when using -i .

palswim ,Oct 13, 2016 at 20:21

That sudo syntax threw an error on my machine. ( su uses the -c option, but I don't think sudo does.) I had better luck with: HomeDir=$( sudo -u "$1" -H -s echo "\$HOME" )palswim Oct 13 '16 at 20:21

[Jun 20, 2018] What are the differences between su, sudo -s, sudo -i, sudo su

Notable quotes:
"... (which means "substitute user" or "switch user") ..."
"... (hmm... what's the mnemonic? Super-User-DO?) ..."
"... The official meaning of "su" is "substitute user" ..."
"... Interestingly, Ubuntu's manpage does not mention "substitute" at all. The manpage at gnu.org ( gnu.org/software/coreutils/manual/html_node/su-invocation.html ) does indeed say "su: Run a command with substitute user and group ID". ..."
"... sudo -s runs a [specified] shell with root privileges. sudo -i also acquires the root user's environment. ..."
"... To see the difference between su and sudo -s , do cd ~ and then pwd after each of them. In the first case, you'll be in root's home directory, because you're root. In the second case, you'll be in your own home directory, because you're yourself with root privileges. There's more discussion of this exact question here . ..."
"... I noticed sudo -s doesnt seem to process /etc/profile ..."
Jun 20, 2018 | askubuntu.com

Sergey ,Oct 22, 2011 at 7:21

The main difference between these commands is in the way they restrict access to their functions.

su (which means "substitute user" or "switch user") - does exactly that, it starts another shell instance with privileges of the target user. To ensure you have the rights to do that, it asks you for the password of the target user . So, to become root, you need to know root password. If there are several users on your machine who need to run commands as root, they all need to know root password - note that it'll be the same password. If you need to revoke admin permissions from one of the users, you need to change root password and tell it only to those people who need to keep access - messy.

sudo (hmm... what's the mnemonic? Super-User-DO?) is completely different. It uses a config file (/etc/sudoers) which lists which users have rights to specific actions (run commands as root, etc.) When invoked, it asks for the password of the user who started it - to ensure the person at the terminal is really the same "joe" who's listed in /etc/sudoers . To revoke admin privileges from a person, you just need to edit the config file (or remove the user from a group which is listed in that config). This results in much cleaner management of privileges.

As a result of this, in many Debian-based systems root user has no password set - i.e. it's not possible to login as root directly.

Also, /etc/sudoers allows to specify some additional options - i.e. user X is only able to run program Y etc.

The often-used sudo su combination works as follows: first sudo asks you for your password, and, if you're allowed to do so, invokes the next command ( su ) as a super-user. Because su is invoked by root , it require you to enter your password instead of root.

So, sudo su allows you to open a shell as another user (including root), if you're allowed super-user access by the /etc/sudoers file.

dr jimbob ,Oct 22, 2011 at 13:47

I've never seen su as "switch user", but always as superuser; the default behavior without another's user name (though it makes sense). From wikipedia : "The su command, also referred to as super user[1] as early as 1974, has also been called "substitute user", "spoof user" or "set user" because it allows changing the account associated with the current terminal (window)."

Sergey ,Oct 22, 2011 at 20:33

@dr jimbob: you're right, but I'm finding that "switch user" is kinda describes better what it does - though historically it stands for "super user". I'm also delighted to find that the wikipedia article is very similar to my answer - I never saw the article before :)

Angel O'Sphere ,Nov 26, 2013 at 13:02

The official meaning of "su" is "substitute user". See: "man su". – Angel O'Sphere Nov 26 '13 at 13:02

Sergey ,Nov 26, 2013 at 20:25

@AngelO'Sphere: Interestingly, Ubuntu's manpage does not mention "substitute" at all. The manpage at gnu.org ( gnu.org/software/coreutils/manual/html_node/su-invocation.html ) does indeed say "su: Run a command with substitute user and group ID". I think gnu.org is a canonical source :) – Sergey Nov 26 '13 at 20:25

Mike Scott ,Oct 22, 2011 at 6:28

sudo lets you run commands in your own user account with root privileges. su lets you switch user so that you're actually logged in as root.

sudo -s runs a [specified] shell with root privileges. sudo -i also acquires the root user's environment.

To see the difference between su and sudo -s , do cd ~ and then pwd after each of them. In the first case, you'll be in root's home directory, because you're root. In the second case, you'll be in your own home directory, because you're yourself with root privileges. There's more discussion of this exact question here .

Sergey ,Oct 22, 2011 at 7:28

"you're yourself with root privileges" is not what's actually happening :) Actually, it's not possible to be "yourself with root privileges" - either you're root or you're yourself. Try typing whoami in both cases. The fact that cd ~ results are different is a result of sudo -s not setting $HOME environment variable. – Sergey Oct 22 '11 at 7:28

Octopus ,Feb 6, 2015 at 22:15

@Sergey, whoami says 'root' because you are running the whoami command as though you sudoed it, so temporarily (for the duration of that command) you appear to be the root user, but you might still not have full root access according to the sudoers file. – Octopus Feb 6 '15 at 22:15

Sergey ,Feb 6, 2015 at 22:24

@Octopus: what I was trying to say is that in Unix, a process can only have one UID, and that UID determines the permissions of the process. You can't be "yourself with root privileges", a program either runs with your UID or with root's UID (0). – Sergey Feb 6 '15 at 22:24

Sergey ,Feb 6, 2015 at 22:32

Regarding "you might still not have full root access according to the sudoers file": the sudoers file controls who can run which command as another user, but that happens before the command is executed. However, once you were allowed to start a process as, say, root -- the running process has root's UID and has a full access to the system, there's no way for sudo to restrict that.

Again, you're always either yourself or root, there's no "half-n-half". So, if sudoers file allows you to run shell as root -- permissions in that shell would be indistinguishable from a "normal" root shell. – Sergey Feb 6 '15 at 22:32

dotancohen ,Nov 8, 2014 at 14:07

This answer is a dupe of my answer on a dupe of this question , put here on the canonical answer so that people can find it!

The major difference between sudo -i and sudo -s is:

  • sudo -i gives you the root environment, i.e. your ~/.bashrc is ignored.
  • sudo -s gives you the user's environment, so your ~/.bashrc is respected.

Here is an example: you can see that I have an application lsl in my ~/.bin/ directory which is accessible via sudo -s but not accessible with sudo -i . Note also that the Bash prompt changes with sudo -i but not with sudo -s :

dotancohen@melancholy:~$ ls .bin
lsl

dotancohen@melancholy:~$ which lsl
/home/dotancohen/.bin/lsl

dotancohen@melancholy:~$ sudo -i

root@melancholy:~# which lsl

root@melancholy:~# exit
logout

dotancohen@melancholy:~$ sudo -s
Sourced .bashrc

dotancohen@melancholy:~$ which lsl
/home/dotancohen/.bin/lsl

dotancohen@melancholy:~$ exit
exit

Though sudo -s is convenient for giving you the environment that you are familiar with, I recommend the use of sudo -i for two reasons:

  1. The visual reminder that you are in a 'root' session.
  2. The root environment is far less likely to be poisoned with malware, such as a rogue line in .bashrc .

meffect ,Feb 23, 2017 at 5:21

I noticed sudo -s doesnt seem to process /etc/profile , or anything I have in /etc/profile.d/ .. any idea why? – meffect Feb 23 '17 at 5:21

Marius Gedminas ,Oct 22, 2011 at 19:38

su asks for the password of the user "root".

sudo asks for your own password (and also checks if you're allowed to run commands as root, which is configured through /etc/sudoers -- by default all user accounts that belong to the "admin" group are allowed to use sudo).

sudo -s launches a shell as root, but doesn't change your working directory. sudo -i simulates a login into the root account: your working directory will be /root , and root's .profile etc. will be sourced as if on login.

DJCrashdummy ,Jul 29, 2017 at 0:58

to make the answer more complete: sudo -s is almost equal to su ($HOME is different) and sudo -i is equal to su -

In Ubuntu or a related system, I don't find much use for su in the traditional, super-user sense. sudo handles that case much better. However, su is great for becoming another user in one-off situations where configuring sudoers would be silly.

For example, if I'm repairing my system from a live CD/USB, I'll often mount my hard drive and other necessary stuff and chroot into the system. In such a case, my first command is generally:

su - myuser  # Note the '-'. It means to act as if that user had just logged in.

That way, I'm operating not as root, but as my normal user, and I then use sudo as appropriate.


[Jun 20, 2018] permission - allow sudo to another user without password

Jun 20, 2018 | apple.stackexchange.com


zio ,Feb 17, 2013 at 13:12

I want to be able to 'su' to a specific user, allowing me to run any command without a password being entered.

For example:

If my login were user1 and the user I want to 'su' to is user2:

I would use the command:

su - user2

but then it prompts me with

Password:

Global nomad ,Feb 17, 2013 at 13:17

Ask the other user for the password. At least the other user knows what's been done under his/her id. – Global nomad Feb 17 '13 at 13:17

zio ,Feb 17, 2013 at 13:24

This is nothing to do with another physical user. Both ID's are mine. I know the password as I created the account. I just don't want to have to type the password every time. – zio Feb 17 '13 at 13:24

bmike ♦ ,Feb 17, 2013 at 15:32

Would it be OK to ssh to that user, or do you need to inherit one shell in particular and need su to work? – bmike ♦ Feb 17 '13 at 15:32

bmike ♦ ,Feb 17, 2013 at 23:59

@zio Great use case. Does open -na Skype not work for you? – bmike ♦ Feb 17 '13 at 23:59

user495470 ,Feb 18, 2013 at 4:50

You could also try copying the application bundle and changing CFBundleIdentifier . – user495470 Feb 18 '13 at 4:50

Huygens ,Feb 18, 2013 at 7:39

sudo can do just that for you :)

It needs a bit of configuration though, but once done you would only do this:

sudo -u user2 -s

And you would be logged in as user2 without entering a password.

Configuration

To configure sudo, you must edit its configuration file via visudo . Note: this command will open the configuration using the vi text editor; if you are uncomfortable with that, you need to set another editor (using export EDITOR=<command> ) before executing the following line. Another command-line editor sometimes regarded as easier is nano , so you would do export EDITOR=/usr/bin/nano . You usually need super-user privileges for visudo :

sudo visudo

This file is structured in different sections: first the aliases, then the defaults, and finally, at the end, the rules. This is where you need to add the new line. So navigate to the end of the file and add this:

user1    ALL=(user2) NOPASSWD: /bin/bash

You can also replace /bin/bash with ALL , and then you could launch any command as user2 without a password: sudo -u user2 <command> .

Update

I have just seen your comment regarding Skype. You could consider adding Skype directly to the sudo's configuration file. I assume you have Skype installed in your Applications folder:

user1    ALL=(user2) NOPASSWD: /Applications/Skype.app/Contents/MacOS/Skype

Then you would call from the terminal:

sudo -u user2 /Applications/Skype.app/Contents/MacOS/Skype

bmike ♦ ,May 28, 2014 at 16:04

This is far less complicated than the ssh keys idea, so use this unless you need the ssh keys for remote access as well. – bmike ♦ May 28 '14 at 16:04

Stan Kurdziel ,Oct 26, 2015 at 16:56

One thing to note from a security-perspective is that specifying a specific command implies that it should be a read-only command for user1; Otherwise, they can overwrite the command with something else and run that as user2. And if you don't care about that, then you might as well specify that user1 can run any command as user2 and therefore have a simpler sudo config. – Stan Kurdziel Oct 26 '15 at 16:56

Huygens ,Oct 26, 2015 at 19:24

@StanKurdziel good point! Although it is something to be aware of, it's really seldom to have system executables writable by users unless you're root but in this case you don't need sudo ;-) But you're right to add this comment because it's so seldom that I've probably overlooked it more than one time. – Huygens Oct 26 '15 at 19:24

Gert van den Berg ,Aug 10, 2016 at 14:24

To get behaviour nearer to su - user2 (rather than su user2 ), the commands should probably all use sudo -u user2 -i , in order to simulate an initial login as user2 – Gert van den Berg Aug 10 '16 at 14:24

bmike ,Feb 18, 2013 at 0:05

I would set up public/private ssh keys for the second account and store the key in the first account.

Then you could run a command like:

 ssh user@localhost -n /Applications/Skype.app/Contents/MacOS/Skype &

You'd still have the issues where Skype gets confused since two instances are running on one user account and files read/written by that program might conflict. It also might work well enough for your needs and you'd not need an iPod touch to run your second Skype instance.

calum_b ,Feb 18, 2013 at 9:54

This is a good secure solution for the general case of password-free login to any account on any host, but I'd say it's probably overkill when both accounts are on the same host and belong to the same user. – calum_b Feb 18 '13 at 9:54

bmike ♦ ,Feb 18, 2013 at 14:02

@scottishwildcat It's far more secure than the alternative of scripting the password and feeding it in clear text or using a variable and storing the password in the keychain and using a tool like expect to script the interaction. I just use sudo su - blah and type my password. I think the other answer covers sudo well enough to keep this as a comment. – bmike ♦ Feb 18 '13 at 14:02

calum_b ,Feb 18, 2013 at 17:47

Oh, I certainly wasn't suggesting your answer should be removed; I didn't even down-vote. It's a perfectly good answer. – calum_b Feb 18 '13 at 17:47

bmike ♦ ,Feb 18, 2013 at 18:46

We appear to be in total agreement - thanks for the addition - feel free to edit it into the answer if you can improve on it. – bmike ♦ Feb 18 '13 at 18:46

Gert van den Berg ,Aug 10, 2016 at 14:20

The accepted solution ( sudo -u user2 <...> ) does have the advantage that it can't be used remotely, which might help for security - there is no private key for user1 that can be stolen. – Gert van den Berg Aug 10 '16 at 14:20

[Jun 20, 2018] linux - Automating the sudo su - user command

Jun 20, 2018 | superuser.com



sam ,Feb 9, 2011 at 11:11

I want to automate
sudo su - user

from a script. It should then ask for a password.

grawity ,Feb 9, 2011 at 12:07

Don't sudo su - user , use sudo -iu user instead. (Easier to manage through sudoers , by the way.) – grawity Feb 9 '11 at 12:07

Hello71 ,Feb 10, 2011 at 1:33

How are you able to run sudo su without being able to run sudo visudo ? – Hello71 Feb 10 '11 at 1:33

Torian ,Feb 9, 2011 at 11:37

I will try and guess what you asked.

If you want to use sudo su - user without a password, you should (if you have the privileges) do the following in your sudoers file:

<yourusername>  ALL = NOPASSWD: /bin/su - <otheruser>

where:

  • <yourusername> is your username (e.g., saumun89)
  • <otheruser> is the user you want to change to

Then put into the script:

sudo /bin/su - <otheruser>

Doing just this won't get subsequent commands run as <otheruser> ; it will spawn a new shell. If you want to run another command from within the script as this other user, you should use something like:

 sudo -u <otheruser> <command>

And in sudoers file:

<yourusername>  ALL = (<otheruser>) NOPASSWD: <command>

Obviously, a more generic line like:

<yourusername> ALL = (ALL) NOPASSWD: ALL

Will get things done, but would grant the permission to do anything as anyone.

sam ,Feb 9, 2011 at 11:43

when the sudo su - user command gets executed, it asks for a password. I want a solution in which the script automatically reads the password from somewhere. I don't have permission to do what you suggested earlier. – sam Feb 9 '11 at 11:43

sam ,Feb 9, 2011 at 11:47

I have permission to store the password in a file; the script should read the password from that file. – sam Feb 9 '11 at 11:47

Olli ,Feb 9, 2011 at 12:46

You can use command
 echo "your_password" | sudo -S [rest of your parameters for sudo]

(Of course without [ and ])

Please note that you should protect your script from read access by unauthorized users. If you want to read the password from a separate file, you can use

  sudo -S [rest of your parameters for sudo] < /etc/sudo_password_file

(Or whatever the name of the password file is; it should contain the password followed by a single line break.)

From sudo man page:

   -S          The -S (stdin) option causes sudo to read the password from
               the standard input instead of the terminal device.  The
               password must be followed by a newline character.
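
As a concrete sketch of that password-file variant (the file path and user name are examples of mine, not from the answer; anyone who can read the file effectively gets your sudo rights, so keep it private):

printf '%s\n' 'your_password' > "$HOME/.sudo_pass"   # password followed by a newline
chmod 600 "$HOME/.sudo_pass"                         # readable only by you
sudo -S -u otheruser whoami < "$HOME/.sudo_pass"     # sudo takes the password from stdin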

AlexandruC ,Dec 6, 2014 at 8:10

This actually works for me. – AlexandruC Dec 6 '14 at 8:10

Oscar Foley ,Feb 8, 2016 at 16:36

This is brilliant – Oscar Foley Feb 8 '16 at 16:36

Mikel ,Feb 9, 2011 at 11:26

The easiest way is to make it so that user doesn't have to type a password at all.

You can do that by running visudo , then changing the line that looks like:

someuser  ALL=(ALL) ALL

to

someuser  ALL=(ALL) NOPASSWD: ALL

However if it's just for one script, it would be more secure to restrict passwordless access to only that script, and remove the (ALL) , so they can only run it as root, not any user , e.g.

Cmnd_Alias THESCRIPT = /usr/local/bin/scriptname

someuser  ALL=NOPASSWD: THESCRIPT

Run man 5 sudoers to see all the details in the sudoers man page .

sam ,Feb 9, 2011 at 11:34

I do not have permission to edit the sudoers file. Is there any other way, so that the script can read the password from somewhere and the automation can still be done? – sam Feb 9 '11 at 11:34

Torian ,Feb 9, 2011 at 11:40

You are out of luck ... you could do this with, let's say, expect , but that would leave the password for your user hardcoded somewhere where people could see it (granted that you set up permissions the right way, it could still be read by root). – Torian Feb 9 '11 at 11:40

Mikel ,Feb 9, 2011 at 11:40

Try using expect . man expect for details. – Mikel Feb 9 '11 at 11:40


[Jun 20, 2018] sudo - What does ALL ALL=(ALL) ALL mean in sudoers

Jun 20, 2018 | unix.stackexchange.com



LoukiosValentine79 ,May 6, 2015 at 19:29

If a server has the following in /etc/sudoers:
Defaults targetpw
ALL ALL=(ALL) ALL

Then what does this mean? That all users can run all commands via sudo, and only a password is needed?

lcd047 ,May 6, 2015 at 20:51

It means "security Nirvana", that's what it means. ;) – lcd047 May 6 '15 at 20:51

poz2k4444 ,May 6, 2015 at 20:19

From the sudoers(5) man page:

The sudoers policy plugin determines a user's sudo privileges.

For the targetpw:

sudo will prompt for the password of the user specified by the -u option (defaults to root) instead of the password of the invoking user when running a command or editing a file.

sudo(8) allows you to execute commands as someone else

So, basically it says that any user can run any command on any host as any user. Yes, the user has to authenticate, but with the password of the target user (because of targetpw ), in order to run anything.

The first ALL is the users allowed.
The second one is the hosts.
The third one is the user you are running the command as.
The last one is the commands allowed.
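
Putting those four fields together, a hypothetical sudoers line (the user, host and command names are made up for illustration) reads like this:

# user   hosts = (run-as user)  commands
alice    webhost1 = (deploy)    /usr/bin/systemctl restart nginx

That is: user alice, on host webhost1, may run that single command as user deploy.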

LoukiosValentine79 ,May 7, 2015 at 16:37

Thanks! In the meantime I found the "Defaults targetpw" entry in sudoers.. updated the Q – LoukiosValentine79 May 7 '15 at 16:37

poz2k4444 ,May 7, 2015 at 18:24

@LoukiosValentine79 I just update the answer, does that answer your question? – poz2k4444 May 7 '15 at 18:24

evan54 ,Feb 28, 2016 at 20:24

Wait, he has to enter his own password, not that of the other user, right? – evan54 Feb 28 '16 at 20:24

x-yuri ,May 19, 2017 at 12:20

With targetpw it is the password of the other (target) user. – x-yuri May 19 '17 at 12:20

[Jun 20, 2018] sudo - What is ALL ALL=!SUDOSUDO for

Jun 20, 2018 | unix.stackexchange.com

gasko peter ,Dec 6, 2012 at 12:50

The last line of the /etc/sudoers file is:
grep -i sudosudo /etc/sudoers
Cmnd_Alias SUDOSUDO = /usr/bin/sudo
ALL ALL=!SUDOSUDO

why? What does it exactly do?

UPDATE#1: Now I know that it prevents users from running "/usr/bin/sudo" through sudo.

UPDATE#2: not allowing "root ALL=(ALL) ALL" is not a solution.

Updated Question: What is better besides this "SUDOSUDO"? (The problem with this is that the sudo binary could be copied..)

Chris Down ,Dec 6, 2012 at 12:53

SUDOSUDO is probably an alias. Does it exist elsewhere in the file? – Chris Down Dec 6 '12 at 12:53

gasko peter ,Dec 6, 2012 at 14:21

question updated :D - so what does it mean exactly? – gasko peter Dec 6 '12 at 14:21

gasko peter ,Dec 6, 2012 at 14:30

is "ALL ALL=!SUDOSUDO" as the last line is like when having DROP iptables POLICY and still using a -j DROP rule as last rule in ex.: INPUT chain? :D or does it has real effects? – gasko peter Dec 6 '12 at 14:30

Kevin ,Dec 6, 2012 at 14:48

I'm not 100% sure, but I believe it only prevents anyone from running sudo sudo ... . – Kevin Dec 6 '12 at 14:48

[Jun 18, 2018] Copy and paste text in midnight commander (MC) via putty in Linux

Notable quotes:
"... IF you're using putty in either Xorg or Windows (i.e terminal within a gui) , it's possible to use the "conventional" right-click copy/paste behavior while in mc. Hold the shift key while you mark/copy. ..."
"... Putty has ability to copy-paste. In mcedit, hold Shift and select by mouse ..."
Jun 18, 2018 | superuser.com

Den ,Mar 1, 2015 at 22:50

I use Midnight Commander (MC) editor over putty to edit files

I want to know how to copy text from one file, close it then open another file and paste it?

If it is not possible with Midnight Commander, is there another easy way to copy and paste specific text from different files?

szkj ,Mar 12, 2015 at 22:40

I would do it like this:
  1. switch to block selection mode by pressing F3
  2. select a block
  3. switch off block selection mode with F3
  4. press Ctrl+F which will open Save block dialog
  5. press Enter to save it to the default location
  6. open the other file in the editor, and navigate to the target location
  7. press Shift+F5 to open Insert file dialog
  8. press Enter to paste from the default file location (which is same as the one in Save block dialog)

NOTE: There are other environment related methods, that could be more conventional nowadays, but the above one does not depend on any desktop environment related clipboard, (terminal emulator features, putty, Xorg, etc.). This is a pure mcedit feature which works everywhere.

Andrejs ,Apr 28, 2016 at 8:13

To copy: (hold) Shift + Select with mouse (copies to clipboard)

To paste in windows: Ctrl+V

To paste in another file in PuTTY/MC: Shift + Ins

Piotr Dobrogost ,Mar 30, 2017 at 17:32

If you get unwanted indents in what was pasted then while editing file in Midnight Commander press F9 to show top menu and in Options/Generals menu uncheck Return does autoindent option. Yes, I was happy when I found it too :) – Piotr Dobrogost Mar 30 '17 at 17:32

mcii-1962 ,May 26, 2015 at 13:17

IF you're using putty in either Xorg or Windows (i.e terminal within a gui) , it's possible to use the "conventional" right-click copy/paste behavior while in mc. Hold the shift key while you mark/copy.

Eden ,Feb 15, 2017 at 4:09

  1. Hold down the Shift key, and drag the mouse through the text you want to copy. The text's background will become dark orange.
  2. Release the Shift key and press Shift + Ctrl + c . The text will be copied.
  3. Now you can paste the text to anywhere you want by pressing Shift + Ctrl + v , even to the new page in MC.

xoid ,Jun 6, 2016 at 6:37

Putty has ability to copy-paste. In mcedit, hold Shift and select by mouse

mcii-1962 ,Jun 20, 2016 at 23:01

LOL - did you actually read the other answers? And your answer is incomplete, you should include what to do with the mouse in order to "select by mouse".
According to help in MC:

Ctrl + Insert copies to the mcedit.clip, and Shift + Insert pastes from mcedit.clip.

It doesn't work for me for some reason, but pressing F9 gets you a menu, and Edit > Copy to clipfile worked fine.

[Jun 18, 2018] My Favorite Tool - Midnight Commander by Colin Sauze

Notable quotes:
"... "what did I just press and what did it do?" ..."
"... Underneath it's got lots of powerful features like syntax highlighting, bracket matching, regular expression search and replace, and spell checking. ..."
"... I use Mcedit for most of my day-to-day text editing, although I do switch to heavier weight GUI-based editors when I need to edit lots of files at once. ..."
Jun 18, 2018 | software-carpentry.org

I've always hated the Vi vs Emacs holy war that many Unix users like to wage and I find that both editors have serious shortcomings and definitely aren't something I'd recommend a beginner use. Pico and Nano are certainly easier to use, but they always feel a bit lacking in features and clunky to me.

Mcedit runs from the command line but has a colourful GUI-like interface, you can use the mouse if you want, but I generally don't.

If you're old enough to have used DOS, then it's very reminiscent of the "edit" text editor that was built into MS-DOS 5 and 6, except it's full of powerful features that still make it a good choice in 2018. It has a nice intuitive interface based around the F keys on the keyboard and a pull-down menu which can be accessed by pressing F9 .

It's really easy to use and you're told about all the most important key combinations on screen and the rest can all be discovered from the menus. I find this far nicer than Vi or Emacs where I have to constantly look up key combinations or press a key by mistake and then have the dreaded "what did I just press and what did it do?" thought.

Underneath it's got lots of powerful features like syntax highlighting, bracket matching, regular expression search and replace, and spell checking.

I use Mcedit for most of my day-to-day text editing, although I do switch to heavier weight GUI-based editors when I need to edit lots of files at once. I just wish more people knew about it and then it might be installed by default on more of the shared systems and HPCs that I have to use!

[Jun 17, 2018] Midnight Commander Guide

Jun 17, 2018 | www.nawaz.org

Topics covered (screenshots omitted): Selecting Text, Navigation, Replacing Text, Saving, Syntax Highlighting, More Options, and Some Comments about Editing.

[Jun 14, 2018] Changing shortcuts in midnight commander by rride Last Updated 20:01 PM

Feb 04, 2018 | www.queryxchange.com

I haven't found anything on the topic on the Internet. The only line from .mc/ini that looks related to the question is keymap=mc.keymap but I have no idea what to do with it.

Tags : linux keyboard-shortcuts midnight-commander

Okiedokie... let's see
$ man-section mc | head -n20
mc (1)
--
 Name
 Usage
 Description
 Options
 Overview
 Mouse support
 Keys
 Redefine hotkey bindings

8th section... is that possible? Let's look

man mc (scroll,scroll,scroll)

Redefine hotkey bindings
    Hotkey bindings may be read from external file (keymap-file).  A keymap-
    file is searched on the following algorithm  (to the first one found):

     1) command line option -K <keymap> or --keymap=<keymap>
     2) Environment variable MC_KEYMAP
     3) Parameter keymap in section [Midnight-Commander] of config file.
     4) File ~/.config/mc/mc.keymap
     5) File /etc/mc/mc.keymap
     6) File /usr/share/mc/mc.keymap

Bingo!

cp /etc/mc/mc.keymap ~/.config/mc/

Now edit the key mappings as you like and save ~/.config/mc/mc.keymap when done

For more info, read the Keys ( man mc ) section and the three sections following that.


$ cat /home/jaroslav/bin/man-sections 
#!/bin/sh
# "^[" below is a literal Escape character (start of the ANSI bold sequence
# ESC [ 1 m used for man section headings); enter it as Ctrl-V, Esc.
MANPAGER=cat man "$@" | grep -E '^^[[1m[A-Z]{3,}'

[Jun 13, 2018] How mc.init is stored

Jun 13, 2018 | superuser.com

The configuration is stored in

$HOME/.config/mc/

In your case edit the file $HOME/.config/mc/ini . You can check which files are actually read in by midnight-commander using strace :

strace -e trace=open -o mclog mc
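
On current glibc-based systems most files are opened via the openat() syscall rather than open(), so the trace above may come back nearly empty; assuming your strace accepts a comma-separated syscall list (recent versions do), widen the filter:

strace -e trace=open,openat -o mclog mc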

[Jun 13, 2018] Temporary Do Something Else while editing/viewing a file

Jun 13, 2018 | www.nawaz.org

[Jun 13, 2018] My Screen is Garbled Up

Jun 13, 2018 | www.nawaz.org

[Jun 13, 2018] Find file shows no results

Jun 13, 2018 | wiki.archlinux.org

If the Find file dialog (accessible with Alt+? ) shows no results, check the current directory for symbolic links. Find file does not follow symbolic links, so use bind mounts (see mount(8) ) instead, or the External panelize command.

[Jun 13, 2018] Draft of documentation for Midnight Commander

Jun 13, 2018 | midnight-commander.org

Table of content

  1. Introduction
  2. Getting sources
  3. Making and installing
  4. Ini-options setup
  5. Usage
  6. Migration to keybindings in 4.8.x series
  7. How to report about bugs
  8. Frequently asked questions

[Jun 13, 2018] Trash support

Jun 13, 2018 | wiki.archlinux.org

Midnight Commander does not support a trash can by default.

Using libtrash

Install the libtrash AUR package, and create an mc alias in the initialization file of your shell (e.g., ~/.bashrc or ~/.zshrc ):

alias mc='LD_PRELOAD=/usr/lib/libtrash.so.3.3 mc'

To apply the changes, reopen your shell session or source the shell initialization file.

Default settings are defined in /etc/libtrash.conf.sys . You can overwrite these settings per-user in ~/.libtrash , for example:

TRASH_CAN = .Trash
INTERCEPT_RENAME = NO
IGNORE_EXTENSIONS= o;exe;com
UNCOVER_DIRS=/dev

Now files deleted by Midnight Commander (launched with mc ) will be moved to the ~/.Trash directory.


[Jun 13, 2018] Mcedit is actually a multiwindow editor

Opening another file in the editor will create a second window. You can list the open windows via F9 / Window / List.
That allows you to copy and paste selections between different files while staying in the editor.
Jun 13, 2018 | www.unix.com

Many people don't know that mc has a multi-window text editor built in (eerily disabled by default) with macro capability and all sorts of goodies. Run

mc -e my.txt

to edit directly.

[Jun 13, 2018] Make both panels display the same directory

Jun 13, 2018 | www.fredshack.com

ALT+i. If that does not work, try ESC+i.

[Jun 13, 2018] Opening editor in another screen or tmux window

Jun 13, 2018 | www.queryxchange.com

by user2252728 Last Updated May 15, 2015 11:14 AM


The problem

I'm using tmux and I want MC to open files for editing in another tmux window, so that I can keep browsing files while editing.

What I've tried

MC checks if EDITOR variable is set and then interprets it as a program for editing, so if I do export EDITOR=vim then MC will use vim to open files.

I've tried to build on that:

function foo () { tmux new-window "vim $1"; }
export EDITOR=foo

If I do $EDITOR some_file then I get the file open in vim in another tmux windows - exactly what I wanted.

Sadly, when I try to edit in MC it goes blank for a second and then returns to normal MC window. MC doesn't seem to keep any logs and I don't get any error message.

The question(s)

Tags : midnight-commander

Answers 1
You are defining a shell function, which is unknown to mc when it tries to start the editor.

The correct way is to create a bash script, not a function, and then set the EDITOR value to it, for example:

$ cat ~/myEditor.sh
#!/bin/sh
tmux new-window "vim $1"

export EDITOR=~/myEditor.sh
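
One extra step that is easy to miss: the wrapper script has to carry the execute bit, since mc will try to run whatever $EDITOR points at as a program:

chmod +x ~/myEditor.sh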

[Jun 13, 2018] How to exclude some pattern when doing a search in MC

Mar 25, 2018 | www.queryxchange.com

In Midnight Commander, is it possible to exclude some directories/patterns/... when doing search? ( M-? ) I'm specifically interested in skipping the .hg subdirectory.


Answers 1
In the "[Misc]" section of your ~/.mc/ini file, you can specify the directories you wish to skip in the "find_ignore_dirs" setting.

To specify multiple directories, use a colon (":") as the delimiter.
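
A minimal sketch of that setting (the .hg entry comes from the question; the extra directory names are examples of mine):

[Misc]
find_ignore_dirs=.hg:.git:.svn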

[Jun 13, 2018] Midnight Commander tab completion

Sep 17, 2011 | superuser.com
You can get tab-completion by pressing ESC then TAB . You can also get the currently highlighted file/subdir name onto the command line with ESC-ENTER.

[Jun 13, 2018] mc-wrapper does not exit to MC_PWD directory

Jun 13, 2018 | www.queryxchange.com

I recently installed openSUSE 13.1 and set up mc in the typical way, by aliasing mc to mc-wrapper.sh so that it exits into the last working directory of the mc instance. However, this does not seem to be working. I tried to debug the mc-wrapper.sh script by adding the echo commands.

MC_USER=`id | sed 's/[^(]*(//;s/).*//'`
MC_PWD_FILE="${TMPDIR-/tmp}/mc-$MC_USER/mc.pwd.$$"
/usr/bin/mc -P "$MC_PWD_FILE" "$@"

if test -r "$MC_PWD_FILE"; then
        MC_PWD="`cat "$MC_PWD_FILE"`"
        if test -n "$MC_PWD" && test -d "$MC_PWD"; then
                echo "will cd in : $MC_PWD"
                cd $MC_PWD
                echo $(pwd)
        fi
        unset MC_PWD
fi

rm -f "$MC_PWD_FILE"
unset MC_PWD_FILE
echo $(pwd)

To my surprise, mc-wrapper.sh does change the directory and is in that directory before exiting, but back at the bash prompt the working directory is the one from which the script was invoked.

Can it be that some bash setting is required for this to work?

Tags : linux bash shell midnight-commander

Answers 1
The script changes directory only in its own child process, so the cd cannot affect the invoking shell; the wrapper has to be sourced into the current shell instead. Based on that, a working solution for the bash shell is this:
alias mc='source /usr/lib/mc/mc-wrapper.sh'

OR

alias mc='. /usr/lib/mc/mc-wrapper.sh'

[Jun 13, 2018] How to enable find-as-you-type behavior

Jun 13, 2018 | www.queryxchange.com

Alt + S will show the "quick search" in Midnight Commander.

[Jun 13, 2018] How to expand the command line to the whole screen in MC

Jun 13, 2018 | www.queryxchange.com

You can hide the Midnight Commander Window by pressing Ctrl + O . Press Ctrl + O again to return back to Midnight Commander.

[Jun 13, 2018] MC Tips Tricks

Jun 13, 2018 | www.fredshack.com

If MC displays funny characters, make sure the terminal emulator uses UTF8 encoding.

Smooth scrolling

vi ~/.mc/ini (per user) or /etc/mc/mc.ini (system-wide):

panel_scroll_pages=0

Make both panels display the same directory

ALT+i. If that does not work, try ESC+i.

Navigate through history

ESC+y to go back to the previous directory, ESC+u to go to the next

Options > Configuration > Lynx-like motion doesn't go through the navigation history but rather jumps in/out of a directory so the user doesn't have to hit PageUp followed by Enter

Loop through all items starting with the same letter

CTRL+s followed by the letter to jump to the first occurrence, then keep hitting CTRL+s to loop through the list

Customize keyboard shortcuts

Check mc.keymap

[Jun 13, 2018] MC_HOME allows you to run mc with alternative mc.init

Notable quotes:
"... MC_HOME variable can be set to alternative path prior to starting mc. Man pages are not something you can find the answer right away =) ..."
"... A small drawback of this solution: if you set MC_HOME to a directory different from your usual HOME, mc will ignore the content of your usual ~/.bashrc so, for example, your custom aliases defined in that file won't work anymore. Workaround: add a symlink to your ~/.bashrc into the new MC_HOME directory ..."
"... at the same time ..."
Jun 13, 2018 | unix.stackexchange.com

Tagwint ,Dec 19, 2014 at 16:41

That turned out to be simpler than one might think. The MC_HOME variable can be set to an alternative path prior to starting mc. Man pages are not something where you can find the answer right away =)

Here's how it works. The usual way:

[jsmith@wstation5 ~]$ mc -F
Root directory: /home/jsmith

[System data]
<skipped>

[User data]
    Config directory: /home/jsmith/.config/mc/
    Data directory:   /home/jsmith/.local/share/mc/
        skins:          /home/jsmith/.local/share/mc/skins/
        extfs.d:        /home/jsmith/.local/share/mc/extfs.d/
        fish:           /home/jsmith/.local/share/mc/fish/
        mcedit macros:  /home/jsmith/.local/share/mc/mc.macros
        mcedit external macros: /home/jsmith/.local/share/mc/mcedit/macros.d/macro.*
    Cache directory:  /home/jsmith/.cache/mc/

and the alternative way:

[jsmith@wstation5 ~]$ MC_HOME=/tmp/MCHOME mc -F
Root directory: /tmp/MCHOME

[System data]
<skipped>    

[User data]
    Config directory: /tmp/MCHOME/.config/mc/
    Data directory:   /tmp/MCHOME/.local/share/mc/
        skins:          /tmp/MCHOME/.local/share/mc/skins/
        extfs.d:        /tmp/MCHOME/.local/share/mc/extfs.d/
        fish:           /tmp/MCHOME/.local/share/mc/fish/
        mcedit macros:  /tmp/MCHOME/.local/share/mc/mc.macros
        mcedit external macros: /tmp/MCHOME/.local/share/mc/mcedit/macros.d/macro.*
    Cache directory:  /tmp/MCHOME/.cache/mc/

Use case of this feature:

You have to share the same user name on a remote server (access can be distinguished by RSA keys) and want to use your favorite mc configuration without overwriting it. Concurrent sessions do not interfere with each other.

This works well as part of the sshrc approach described in https://github.com/Russell91/sshrc
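
If you use this regularly, one convenient wrapper is a small alias (the alias name and the directory are arbitrary examples of mine):

mkdir -p "$HOME/.mc-alt"
alias mc-alt='MC_HOME=$HOME/.mc-alt mc'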

Cri ,Sep 5, 2016 at 10:26

A small drawback of this solution: if you set MC_HOME to a directory different from your usual HOME, mc will ignore the content of your usual ~/.bashrc so, for example, your custom aliases defined in that file won't work anymore. Workaround: add a symlink to your ~/.bashrc into the new MC_HOME directoryCri Sep 5 '16 at 10:26

goldilocks ,Dec 18, 2014 at 16:03

If you mean, you want to be able to run two instances of mc as the same user at the same time with different config directories, as far as I can tell you can't. The path is hardcoded.

However, if you mean, you want to be able to switch which config directory is being used, here's an idea (tested, works). You probably want to do it without mc running:

  • Create a directory $HOME/mc_conf , with a subdirectory named one .
  • Move the contents of $HOME/.config/mc into the $HOME/mc_conf/one subdirectory.
  • Duplicate the one directory as $HOME/mc_conf/two .
  • Create a script, $HOME/bin/switch_mc :
    #!/bin/bash
    
    configBase=$HOME/mc_conf
    linkPath=$HOME/.config/mc
    
    if [ -z "$1" ] || [ ! -e "$configBase/$1" ]; then
        echo "Valid subdirectory name required."
        exit 1
    fi
    
    killall mc
    rm $linkPath
    ln -sv $configBase/$1 $linkPath
    
  • Run this as switch_mc one . The rm will complain about "no such file" the first time; that doesn't matter.

Hopefully it's clear what's happening there -- this sets the config directory path as a symlink. Whatever configuration changes you now make and save will be in the one directory. You can then exit and switch_mc two , reverting to the old config, then start mc again, make changes and save them, etc.

You could get away with removing the killall mc and playing around; the configuration stuff is in the ini file, which is read at start-up (so you can't switch on the fly this way). It's then not touched until exit unless you "Save setup", but at exit it may be overwritten, so the danger here is that you erase something you did earlier or outside of the running instance.

Tagwint ,Dec 18, 2014 at 16:52

That works indeed, your idea is pretty clear, thank you for your time. However, my idea was to be able to run differently configured mc's under the same account without them interfering with each other. I should have specified that in my question. The path to the config dir is in fact hardcoded, but it is hardcoded RELATIVE to the user's home dir, that is, to the value of $HOME; thus changing it before mc starts DOES change the config dir location - I've checked that. The drawback is that $HOME stays changed as long as mc runs, which could be resolved if mc had a kind of startup hook in which to restore the original HOME. – Tagwint Dec 18 '14 at 16:52

Tagwint ,Dec 18, 2014 at 17:17

I've extended my original q with 'same time' condition - it did not fit in my prev comment size limitation – Tagwint Dec 18 '14 at 17:17

[Jun 13, 2018] Editing mc.ini

Jun 07, 2014 | superuser.com
mc / mcedit has a config option called auto_save_setup which is enabled by default. This option automatically saves your current setup upon exiting. The problem occurs when you try to edit ~/.config/mc/ini using mcedit . It will overwrite whatever changes you made upon exiting, so you must edit the ~/.config/mc/ini using a different editor such as nano .

Source: https://linux.die.net/man/1/mc (search for "Auto Save Setup")
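
Alternatively, you can switch the auto-save behavior off in the ini itself (again using an editor other than mcedit). A sketch, assuming the option sits in the main [Midnight-Commander] section and that your mc version accepts true/false here (older builds may want 1/0):

[Midnight-Commander]
auto_save_setup=false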

[Jun 13, 2018] Running mc with you own skin

Jun 13, 2018 | help.ubuntu.com

put

export TERM="xterm-256color"

at the bottom (top, if ineffective) of your ~/.bashrc file. Thus you can load skins as in

mc -S sand256.ini

In

/home/you/.config/mc/ini

have the lines:

[Midnight-Commander]
skin=sand256

for preset skin. Newer mc version offer to choose a preset skin from within the menu and save it in the above ini file, relieving you of the above manual step.


Be aware that many skins break the special characters used to indicate reverse up/down filename sorting unless one works hard with locale parameters and whatnot. Few people in the world know how to do that properly. The original page's screenshot shows an "arrow down n" over the filename list to indicate sort order; in many xterms you will get ??? instead, so you might resort to removing the skin and going back to the "default skin" setting with its ugly colours.

The CTRL-O hotkey starts what mc calls a subshell. If you run mc a second time in such a "subshell", mc will not remind you of the CTRL-O hotkey (as if the world only knows 3 hotkeys) but will start with no deeper "subshell" iteration possible, unless one modifies the sources.

[Jun 13, 2018] mcdiff - Internal diff viewer of GNU Midnight Commander

Jun 13, 2018 | www.systutorials.com

$ man 1 mcdiff

NAME
    mcdiff - Internal diff viewer of GNU Midnight Commander.
USAGE
    mcdiff [-bcCdfhstVx?] file1 file2
DESCRIPTION

mcdiff is a link to mc , the main GNU Midnight Commander executable. Executing GNU Midnight Commander under this name requests starting the internal diff viewer which compares file1 and file2 specified on the command line.

OPTIONS
-b
Force black and white display.
-c
Force color mode on terminals where mcdiff defaults to black and white.
-C <keyword>=<fgcolor>,<bgcolor>,<attributes>:<keyword>= ...
Specify a different color set. See the Colors section in mc (1) for more information.
-d
Disable mouse support.
-f
Display the compiled-in search paths for Midnight Commander files.
-t
Used only if the code was compiled with S-Lang and terminfo: it makes the Midnight Commander use the value of the TERMCAP variable for the terminal information instead of the information on the system wide terminal database
-V
Displays the version of the program.
-x
Forces xterm mode. Used when running on xterm-capable terminals (two screen modes, and able to send mouse escape sequences).
COLORS

The default colors may be changed by appending to the MC_COLOR_TABLE environment variable. Foreground and background color pairs may be specified, for example, with:

MC_COLOR_TABLE="$MC_COLOR_TABLE:\
normal=lightgray,black:\
selected=black,green"
FILES

/usr/share/mc/mc.hlp

The help file for the program.

/usr/share/mc/mc.ini

The default system-wide setup for GNU Midnight Commander, used only if the user's own ~/.config/mc/ini file is missing.

/usr/share/mc/mc.lib

Global settings for the Midnight Commander. Settings in this file affect all users, whether they have ~/.config/mc/ini or not.

~/.config/mc/ini

User's own setup. If this file is present, the setup is loaded from here instead of the system-wide startup file.

[Jun 13, 2018] MC (Midnight Commmander) mc/ini settings file location

Jun 13, 2018 | unix.stackexchange.com

UVV ,Oct 13, 2014 at 7:51

It's in the following file: ~/.config/mc/ini .

obohovyk ,Oct 13, 2014 at 7:53

Unfortunately not... – obohovyk Oct 13 '14 at 7:53

UVV ,Oct 13, 2014 at 8:02

@alexkowalski then it's ~/.config/mc/ini – UVV Oct 13 '14 at 8:02

obohovyk ,Oct 13, 2014 at 8:41

Yeah, thanks!!! – obohovyk Oct 13 '14 at 8:41


If you have not made any changes, the config file does not yet exist.

The easy way to change from the default skin:

  1. Start Midnight Commander
    sudo mc
    
  2. F9 , O for Options, or cursor to "Options" and press Enter
  3. A for Appearance, or cursor to Appearance and press Enter

    You will see that default is the current skin.

  4. Press Enter to see the other skin choices
  5. Cursor to the skin you want and select it by pressing Enter
  6. Click OK

After you do this, the ini file will exist and can be edited, but it is easier to change skins using the method I described.

[Jun 13, 2018] Hide/view of hidden files

Sep 17, 2011 | superuser.com

Something I discovered which I REALLY appreciated was the hide/view of hidden files can be toggled by pressing ALT-. (ALT-PERIOD). Be aware that often the RIGHT ALT key is NOT seen as an ALT key by the system, so you usually need to use Left-ALT-. to toggle this. I forgot about the Right-ALT weirdness and thought I'd broken mc one day. {sigh} Such a blonde...

Just checked (xev!), I guess the ALT-. toggle is mapped to ALT_L-., and the right ALT key gives an ALT_R keycode... which doesn't match the mc mapping, causing it to not work... now I know why! Hooray!

[Jun 13, 2018] Loss of output problem

Sep 17, 2011 | superuser.com
1) If the panels are active and I issue a command that has a lot of output, it appears to be lost forever.

i.e., if the panels are visible and I cat something (i.e., cat /proc/cpuinfo), that info is gone forever once the panels get redrawn.

If you use Cygwin's mintty terminal, you can use its Flip Screen context menu command (or Alt+F12 shortcut) to switch between the so-called alternate screen, where fullscreen applications like mc normally run, and the primary screen where output from commands such as cat appears.

[Jun 13, 2018] I Can't Select Text With My Mouse

Jun 13, 2018 | www.nawaz.org

I Can't Select Text With My Mouse

[Jun 13, 2018] parsync - a parallel rsync wrapper for large data transfers by Harry Mangalam

Jan 22, 2017 | nac.uci.edu

harry.mangalam@uci.edu


v1.67 (Mac Beta)

Table of Contents

  1. Download
  2. Dependencies
  3. Overview
  4. parsync help

1. Download

If you already know you want it, get it here: parsync+utils.tar.gz (contains parsync plus the kdirstat-cache-writer , stats , and scut utilities below). Extract it into a dir on your $PATH and, after verifying the other dependencies below, give it a shot.

While parsync is developed for and tested on Linux, the latest version of parsync has been modified to (mostly) work on the Mac (tested on OSX 10.9.5). A number of the Linux-specific dependencies have been removed and there are a number of Mac-specific workarounds.

Thanks to Phil Reese < preese@stanford.edu > for the code mods needed to get it started. It's the same package and instructions for both platforms.

2. Dependencies

parsync requires the following utilities to work:

non-default Perl utility: URI::Escape qw(uri_escape)
sudo yum install perl-URI  # CentOS-like

sudo apt-get install liburi-perl  # Debian-like
parsync needs to be installed only on the SOURCE end of the transfer and uses whatever rsync is available on the TARGET. It uses a number of Linux-specific utilities, so if you're transferring between Linux and a FreeBSD host, install parsync on the Linux side. In fact, as currently written, it will only PUSH data to remote targets; it will not pull data as rsync itself can do. This will probably change in the near future.

3. Overview

rsync is a fabulous data mover. Possibly more bytes have been moved (or have been prevented from being moved) by rsync than by any other application. So what's not to love?

For transferring large, deep file trees, rsync will pause while it generates lists of files to process. Since version 3 it does this pretty fast, but on sluggish filesystems it can take hours or even days before it will start to actually exchange rsync data. Second, due to various bottlenecks, rsync will tend to use less than the available bandwidth on high-speed networks. Starting multiple instances of rsync can improve this significantly. However, on such transfers it is also easy to overload the available bandwidth, so it would be nice to both limit the bandwidth used if necessary and also to limit the load on the system. parsync tries to satisfy all these conditions and more by starting multiple rsync instances in parallel, dividing a total bandwidth limit among them, and monitoring the system load so that it can suspend and resume the rsyncs as needed.

Important: Only use for LARGE data transfers. The main use case for parsync is really only very large data transfers through fairly fast network connections (>1Gb/s). Below this speed, a single rsync can saturate the connection, so there's little reason to use parsync; in fact the overhead of testing for and starting more rsyncs tends to make its performance on small transfers slightly worse than rsync alone.

Beyond this introduction, parsync's internal help is about all you'll need to figure out how to use it; below is what you'll see when you type parsync -h . There are still edge cases where parsync will fail or behave oddly, especially with small data transfers, so I'd be happy to hear of such misbehavior or suggestions to improve it. Download the complete tarball of parsync, plus the required utilities, here: parsync+utils.tar.gz . Unpack it, move the contents to a dir on your $PATH , chmod it executable, and try it out.
parsync --help
or just
parsync
Below is what you should see:

4. parsync help

parsync version 1.67 (Mac compatibility beta) Jan 22, 2017
by Harry Mangalam <hjmangalam@gmail.com> || <harry.mangalam@uci.edu>

parsync is a Perl script that wraps Andrew Tridgell's miraculous 'rsync' to
provide some load balancing and parallel operation across network connections
to increase the amount of bandwidth it can use.

parsync is primarily tested on Linux, but (mostly) works on MaccOSX
as well.

parsync needs to be installed only on the SOURCE end of the
transfer and only works in local SOURCE -> remote TARGET mode
(it won't allow remote local SOURCE <- remote TARGET, emitting an
error and exiting if attempted).

It uses whatever rsync is available on the TARGET.  It uses a number
of Linux-specific utilities so if you're transferring between Linux
and a FreeBSD host, install parsync on the Linux side.

The only native rsync option that parsync uses is '-a' (archive) &
'-s' (respect bizarro characters in filenames).
If you need more, then it's up to you to provide them via
'--rsyncopts'. parsync checks to see if the current system load is
too heavy and tries to throttle the rsyncs during the run by
monitoring and suspending / continuing them as needed.

It uses the very efficient (also Perl-based) kdirstat-cache-writer
from kdirstat to generate lists of files which are summed and then
crudely divided into NP jobs by size.

It appropriates rsync's bandwidth throttle mechanism, using '--maxbw'
as a passthru to rsync's 'bwlimit' option, but divides it by NP so
as to keep the total bw the same as the stated limit.  It monitors and
shows network bandwidth, but can't change the bw allocation mid-job.
It can only suspend rsyncs until the load decreases below the cutoff.
If you suspend parsync (^Z), all rsync children will suspend as well,
regardless of current state.

Unless changed by '--interface', it tried to figure out how to set the
interface to monitor.  The transfer will use whatever interface routing
provides, normally set by the name of the target.  It can also be used for
non-host-based transfers (between mounted filesystems) but the network
bandwidth continues to be (usually pointlessly) shown.

[[NB: Between mounted filesystems, parsync sometimes works very poorly for
reasons still mysterious.  In such cases (monitor with 'ifstat'), use 'cp'
or 'tnc' (https://goo.gl/5FiSxR) for the initial data movement and a single
rsync to finalize.  I believe the multiple rsync chatter is interfering with
the transfer.]]

It only works on dirs and files that originate from the current dir (or
specified via "--rootdir").  You cannot include dirs and files from
discontinuous or higher-level dirs.

** the ~/.parsync files **
The ~/.parsync dir contains the cache (*.gz), the chunk files (kds*), and the
time-stamped log files. The cache files can be re-used with '--reusecache'
(which will re-use ALL the cache and chunk files.  The log files are
datestamped and are NOT overwritten.

** Odd characters in names **
parsync will sometimes refuse to transfer some oddly named files, altho
recent versions of rsync allow the '-s' flag (now a parsync default)
which tries to respect names with spaces and properly escaped shell
characters.  Filenames with embedded newlines, DOS EOLs, and other
odd chars will be recorded in the log files in the ~/.parsync dir.

** Because of the crude way that files are chunked, NP may be
adjusted slightly to match the file chunks. ie '--NP 8' -> '--NP 7'.
If so, a warning will be issued and the rest of the transfer will be
automatically adjusted.

OPTIONS
=======
[i] = integer number
[f] = floating point number
[s] = "quoted string"
( ) = the default if any

--NP [i] (sqrt(#CPUs)) ...............  number of rsync processes to start
      optimal NP depends on many vars.  Try the default and incr as needed
--startdir [s] (`pwd`)  .. the directory it works relative to. If you omit
                           it, the default is the CURRENT dir. You DO have
                           to specify target dirs.  See the examples below.
--maxbw [i] (unlimited) ..........  in KB/s max bandwidth to use (--bwlimit
       passthru to rsync).  maxbw is the total BW to be used, NOT per rsync.
--maxload [f] (NP+2)  ........ max total system load - if sysload > maxload,
                                               sleeps an rsync proc for 10s
--checkperiod [i] (5) .......... sets the period in seconds between updates
--rsyncopts [s]  ...  options passed to rsync as a quoted string (CAREFUL!)
           this opt triggers a pause before executing to verify the command.
--interface [s]  .............  network interface to /monitor/, not nec use.
      default: `/sbin/route -n | grep "^0.0.0.0" | rev | cut -d' ' -f1 | rev`
      above works on most simple hosts, but complex routes will confuse it.
--reusecache  ..........  don't re-read the dirs; re-use the existing caches
--email [s]  .....................  email address to send completion message
                                      (requires working mail system on host)
--barefiles   .....  set to allow rsync of individual files, as oppo to dirs
--nowait  ................  for scripting, sleep for a few s instead of wait
--version  .................................  dumps version string and exits
--help  .........................................................  this help

Examples
========
-- Good example 1 --
% parsync  --maxload=5.5 --NP=4 --startdir='/home/hjm' dir1 dir2 dir3
hjm@remotehost:~/backups

where
  = "--startdir='/home/hjm'" sets the working dir of this operation to
      '/home/hjm' and dir1 dir2 dir3 are subdirs from '/home/hjm'
  = the target "hjm@remotehost:~/backups" is the same target rsync would use
  = "--NP=4" forks 4 instances of rsync
  = -"-maxload=5.5" will start suspending rsync instances when the 5m system
      load gets to 5.5 and then unsuspending them when it goes below it.

  It uses 4 instances to rsync dir1 dir2 dir3 to hjm@remotehost:~/backups

-- Good example 2 --
% parsync --rsyncopts="--ignore-existing" --reusecache  --NP=3
  --barefiles  *.txt   /mount/backups/txt

where
  =  "--rsyncopts='--ignore-existing'" is an option passed thru to rsync
     telling it not to disturb any existing files in the target directory.
  = "--reusecache" indicates that the filecache shouldn't be re-generated,
    uses the previous filecache in ~/.parsync
  = "--NP=3" for 3 copies of rsync (with no "--maxload", the default is 4)
  = "--barefiles" indicates that it's OK to transfer barefiles instead of
    recursing thru dirs.
  = "/mount/backups/txt" is the target - a local disk mount instead of a network host.

  It uses 3 instances to rsync *.txt from the current dir to "/mount/backups/txt".

-- Error Example 1 --
% pwd
/home/hjm  # executing parsync from here

% parsync --NP4 --compress /usr/local  /media/backupdisk

why this is an error:
  = '--NP4' is not an option (parsync will say "Unknown option: np4")
    It should be '--NP=4'
  = if you were trying to rsync '/usr/local' to '/media/backupdisk',
    it will fail since there is no /home/hjm/usr/local dir to use as
    a source. This will be shown in the log files in
    ~/.parsync/rsync-logfile-<datestamp>_#
    as a spew of "No such file or directory (2)" errors
  = the '--compress' is a native rsync option, not a native parsync option.
    You have to pass it to rsync with "--rsyncopts='--compress'"

The correct version of the above command is:

% parsync --NP=4  --rsyncopts='--compress' --startdir=/usr  local
/media/backupdisk

-- Error Example 2 --
% parsync --start-dir /home/hjm  mooslocal  hjm@moo.boo.yoo.com:/usr/local

why this is an error:
  = this command is trying to PULL data from a remote SOURCE to a
    local TARGET.  parsync doesn't support that kind of operation yet.

The correct version of the above command is:

# ssh to hjm@moo, install parsync, then:
% parsync  --startdir=/usr  local  hjm@remote:/home/hjm/mooslocal

[Jun 09, 2018] How to use autofs to mount NFS shares by Alan Formy-Duval

Jun 05, 2018 | opensource.com
Remote file systems listed in the /etc/fstab file are typically mounted at boot and stay mounted while the system runs. However, there may be times when you prefer to have a remote file system mount only on demand -- for example, to boost performance by reducing network bandwidth usage, or to hide or obfuscate certain directories for security reasons. The package autofs provides this feature. In this article, I'll describe how to get a basic automount configuration up and running.

First, a few assumptions: Assume the NFS server named tree.mydatacenter.net is up and running. Also assume a data directory named ourfiles and two user directories, for Carl and Sarah, are being shared by this server.

A few best practices will make things work a bit better: It is a good idea to use the same user ID for your users on the server and on any client workstations where they have an account. Also, your workstations and server should have the same domain name. Checking the relevant configuration files should confirm this.

alan@workstation1:~$ sudo getent passwd carl sarah
[sudo] password for alan:
carl:x:1020:1020:Carl,,,:/home/carl:/bin/bash
sarah:x:1021:1021:Sarah,,,:/home/sarah:/bin/bash

alan@workstation1:~$ sudo getent hosts
127.0.0.1 localhost
127.0.1.1 workstation1.mydatacenter.net workstation1
10.10.1.5 tree.mydatacenter.net tree

As you can see, both the client workstation and the NFS server are configured in the hosts file. I'm assuming a basic home or even small office network that might lack proper internal domain name service (i.e., DNS).

Install the packages

You need to install only two packages: nfs-common for NFS client functions, and autofs to provide the automount function.

alan@workstation1:~$ sudo apt-get install nfs-common autofs

You can verify that the autofs files have been placed in the etc directory:

alan@workstation1:~$ cd /etc; ll auto*
-rw-r--r-- 1 root root 12596 Nov 19 2015 autofs.conf
-rw-r--r-- 1 root root 857 Mar 10 2017 auto.master
-rw-r--r-- 1 root root 708 Jul 6 2017 auto.misc
-rwxr-xr-x 1 root root 1039 Nov 19 2015 auto.net*
-rwxr-xr-x 1 root root 2191 Nov 19 2015 auto.smb*
alan@workstation1:/etc$

Configure autofs

Now you need to edit several of these files and add the file auto.home . First, add the following two lines to the file auto.master :

/mnt/tree /etc/auto.misc
/home/tree /etc/auto.home

Each line begins with the directory where the NFS shares will be mounted. Go ahead and create those directories:

alan@workstation1:/etc$ sudo mkdir /mnt/tree /home/tree

Second, add the following line to the file auto.misc :

ourfiles        -fstype=nfs     tree:/share/ourfiles

This line instructs autofs to mount the ourfiles share at the location matched in the auto.master file for auto.misc . As shown above, these files will be available in the directory /mnt/tree/ourfiles .

Third, create the file auto.home with the following line:

*               -fstype=nfs     tree:/home/&

This line instructs autofs to mount the users share at the location matched in the auto.master file for auto.home . In this case, Carl and Sarah's files will be available in the directories /home/tree/carl or /home/tree/sarah , respectively. The asterisk (referred to as a wildcard) makes it possible for each user's share to be automatically mounted when they log in. The ampersand also works as a wildcard representing the user's directory on the server side. Their home directory should be mapped accordingly in the passwd file. This doesn't have to be done if you prefer a local home directory; instead, the user could use this as simple remote storage for specific files.

Finally, restart the autofs daemon so it will recognize and load these configuration file changes.

alan@workstation1:/etc$ sudo service autofs restart
Testing autofs

If you change to one of the directories listed in the file auto.master and run the ls command, you won't see anything immediately. For example, change directory (cd) to /mnt/tree . At first, the output of ls won't show anything, but after running cd ourfiles , the ourfiles share directory will be automatically mounted. The cd command will also be executed and you will be placed into the newly mounted directory.

carl@workstation1:~$ cd /mnt/tree
carl@workstation1:/mnt/tree$ ls
carl@workstation1:/mnt/tree$ cd ourfiles
carl@workstation1:/mnt/tree/ourfiles$

To further confirm that things are working, the mount command will display the details of the mounted share.

carl@workstation1:~$ mount
tree:/mnt/share/ourfiles on /mnt/tree/ourfiles type nfs4 (rw,relatime,vers=4.0,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.10.1.22,local_lock=none,addr=10.10.1.5)

The /home/tree directory will work the same way for Carl and Sarah.

I find it useful to bookmark these directories in my file manager for quicker access.

[Jun 09, 2018] How to use the history command in Linux Opensource.com

Jun 09, 2018 | opensource.com

Changing an executed command

history also allows you to rerun a command with different syntax. For example, if I wanted to change my previous command history | grep dnf to history | grep ssh , I can execute the following at the prompt:

$ ^dnf^ssh^

history will rerun the command, but replace dnf with ssh , and execute it.

Removing history

There may come a time that you want to remove some or all the commands in your history file. If you want to delete a particular command, enter history -d <line number> . To clear the entire contents of the history file, execute history -c .

The history is kept in a file that you can modify as well. Bash shell users will find it in their home directory as .bash_history .
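
For example (the history line numbers below are hypothetical; yours will differ):

$ history | tail -3
  501  ls -l
  502  cat notes.txt
  503  history | tail -3
$ history -d 502    # delete just entry 502
$ history -c        # or clear the whole history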

Next steps

There are a number of other things that you can do with history .

For more information about the history command and other interesting things you can do with it, take a look at the GNU Bash Manual .

[Jun 09, 2018] 5 Useful Tools to Remember Linux Commands Forever

Jun 09, 2018 |