Softpanorama

May the source be with you, but remember the KISS principle ;-)
Skepticism and critical thinking are not a panacea, but they can help us understand the world better

Slightly Skeptical View on Enterprise Unix Administration

News Webliography of problems with "pure" cloud environment Recommended Books Recommended Links Shadow IT Project Management Programmable Keyboards
Sysadmin Horror Stories Missing backup horror stories Creative uses of rm Recovery of LVM partitions Notes on hard drives partitioning for Linux Top Vulnerabilities in Linux Environment Root filesystem is mounted read only on boot
The tar pit of Red Hat overcomplexity Systemd invasion into Linux Server space Registering a server using Red Hat Subscription Manager (RHSM) Nagios in Large Enterprise Environment Sudoer File Examples Dealing with multiple flavors of Unix SSH Configuration
Unix Configuration Management Tools Job schedulers Red Hat Certification Program Red Hat Enterprise Linux Life Cycle Registering a server using Red Hat Subscription Manager (RHSM) Unix System Monitoring Recommended Tools to Enhance Command Line Usage in Windows
Is DevOps a yet another "for profit" technocult Using HP ILO virtual CDROM iDRAC7 goes unresponsive - can't connect to iDRAC7 Resetting frozen iDRAC without unplugging the server Troubleshooting HPOM agents Saferm -- wrapper for rm command ILO command line interface
Bare metal recovery of Linux systems Relax-and-Recover on RHEL HP Operations Manager Troubleshooting HPOM agents Number of Servers per Sysadmin Open source politics: IBM acquires Red Hat Tivoli Workload Scheduler
Over 50 and unemployed Surviving a Bad Performance Review Understanding Micromanagers and Control Freaks Bosos or Empty Suits (Aggressive Incompetent Managers) Narcissists Female Sociopaths Bully Managers
Slackerism Information Overload Workaholism and Burnout Unix Sysadmin Tips Orthodox Editors Admin Humor Sysadmin Health Issues


The KISS rule can be expanded as: Keep It Simple, Sysadmin ;-)

This page is written as a protest against the overcomplexity and bizarre data center atmosphere typical of "semi-outsourced" or fully outsourced datacenters ;-). Unix/Linux sysadmins are being killed by the overcomplexity of the environment. Large swaths of accumulated Linux knowledge (and many excellent books) were rendered obsolete by the introduction of systemd. This hit especially hard the older, most experienced members of the team, who hold a unique body of organizational knowledge and whose careers allowed them to watch the development of Linux almost from version 0.92.

System administration is still a unique area where people with the ability to program can exercise their creativity with relative ease and can still enjoy the "old style" atmosphere of software development, in which you write the specification yourself, implement it, test the program, and then use it in your daily work. This is a very exciting, unique opportunity that no DevOps role can ever provide. Why, then, are an increasing number of sysadmins far from excited about working in those positions, or outright wanting to quit the field (or, at least, to work four days a week)? And that includes sysadmins who have tremendous speed and capacity to process and learn new information; even for them, "enough is enough." The answer is different for each individual sysadmin, but it is usually some variation of the following themes:

  1.  Too rapid a pace of change, with a lot of "change for the sake of change" that often serves as a smokescreen for outsourcing efforts (VMware yesterday, Azure today, Amazon cloud tomorrow, etc.)
  2. Job insecurity due to outsourcing/offshoring -- constant pressure to cut headcount in the name of "efficiency," which in reality is more connected with the size of top-brass bonuses than with anything related to the functioning of the IT datacenter. Sysadmins over 50 are an especially vulnerable category here; if they are laid off, they have almost no chance of getting back into the IT workforce at their previous level of salary and benefits. Often the only job they can find is at Home Depot or a similar retail outlet.
  3. A back-breaking level of overcomplexity and bizarre tech decisions crippling the data center (aka crapification). A Potemkin-style culture often prevails in the evaluation of software in large US corporations: the surface sheen is more important than the substance. The marketing brochures and manuals are no different from mainstream news media in the level of BS they spew. IBM is especially guilty (look at how they marketed IBM Watson; as Oren Etzioni, CEO of the Allen Institute for AI, noted, "the only intelligent thing about Watson was IBM PR department [push]").
  4. Bureaucratization/fossilization of large companies' IT environments. That includes using "Performance Reviews" (the variant of waterboarding prevalent in IT ;-) to enforce management policies, priorities, whims, etc. That creates alienation from the company (as it should). One can think of the modern corporate data center as an organization in which the administration holds ever more power in the decision-making process and eats up an ever larger share of the corporate budget, while the people who do the actual work are increasingly ignored and their share of the budget shrinks.
  5. "Neoliberal austerity" (which is essentially another name for the "war on labor") -- Drastic cost cutting measures at the expense of workforce such as elimination of external vendor training, crapification of benefits, limitation of business trips and enforcing useless or outright harmful for business "new" products instead of "tried and true" old with  the same function.    They are accompanied by the new cultural obsession with ‘character’ (as in "he/she has a right character" -- which in "Neoliberal speak" means he/she is a toothless conformist ;-), glorification of groupthink,   and the intensification of surveillance.

As Charlie Schluting noted in 2010 (Enterprise Networking Planet, April 7, 2010):

What happened to the old "sysadmin" of just a few years ago? We've split what used to be the sysadmin into application teams, server teams, storage teams, and network teams. There were often at least a few people, the holders of knowledge, who knew how everything worked, and I mean everything. Every application, every piece of network gear, and how every server was configured -- these people could save a business in times of disaster.

Now look at what we've done. Knowledge is so decentralized we must invent new roles to act as liaisons between all the IT groups.

Architects now hold much of the high-level "how it works" knowledge, but without knowing how any one piece actually does work.

In organizations with more than a few hundred IT staff and developers, it becomes nearly impossible for one person to do and know everything. This movement toward specializing in individual areas seems almost natural. That, however, does not provide a free ticket for people to turn a blind eye.

Specialization

You know the story: Company installs new application, nobody understands it yet, so an expert is hired. Often, the person with a certification in using the new application only really knows how to run that application. Perhaps they aren't interested in learning anything else, because their skill is in high demand right now. And besides, everything else in the infrastructure is run by people who specialize in those elements. Everything is taken care of.

Except, how do these teams communicate when changes need to take place? Are the storage administrators teaching the Windows administrators about storage multipathing; or worse, logging in and setting it up because it's faster for the storage gurus to do it themselves? A fundamental level of knowledge is often lacking, which makes it very difficult for teams to brainstorm about new ways to evolve IT services. The business environment has made it OK for IT staffers to specialize and only learn one thing.

If you hire someone certified in the application, operating system, or network vendor you use, that is precisely what you get. Certifications may be a nice filter to quickly identify who has direct knowledge in the area you're hiring for, but often they indicate specialization or compensation for lack of experience.

Resource Competition

Does your IT department function as a unit? Even 20-person IT shops have turf wars, so the answer is very likely, "no." As teams are split into more and more distinct operating units, grouping occurs. One IT budget gets split between all these groups. Often each group will have a manager who pitches his needs to upper management in hopes they will realize how important the team is.

The "us vs. them" mentality manifests itself at all levels, and it's reinforced by management having to define each team's worth in the form of a budget. One strategy is to illustrate a doomsday scenario. If you paint a bleak enough picture, you may get more funding. Only if you are careful enough to illustrate the failings are due to lack of capital resources, not management or people. A manager of another group may explain that they are not receiving the correct level of service, so they need to duplicate the efforts of another group and just implement something themselves. On and on, the arguments continue.

Most often, I've seen competition between server groups result in horribly inefficient uses of hardware. For example, what happens in your organization when one team needs more server hardware? Assume that another team has five unused servers sitting in a blade chassis. Does the answer change? No, it does not. Even in test environments, sharing doesn't often happen between IT groups.

With virtualization, some aspects of resource competition get better and some remain the same. When first implemented, most groups will be running their own type of virtualization for their platform. The next step, I've most often seen, is for test servers to get virtualized. If a new group is formed to manage the virtualization infrastructure, virtual machines can be allocated to various application and server teams from a central pool and everyone is now sharing. Or, they begin sharing and then demand their own physical hardware to be isolated from others' resource hungry utilization. This is nonetheless a step in the right direction. Auto migration and guaranteed resource policies can go a long way toward making shared infrastructure, even between competing groups, a viable option.

Blamestorming

The most damaging side effect of splitting into too many distinct IT groups is the reinforcement of an "us versus them" mentality. Aside from the notion that specialization creates a lack of knowledge, blamestorming is what this article is really about. When a project is delayed, it is all too easy to blame another group. The SAN people didn't allocate storage on time, so another team was delayed. That is the timeline of the project, so all work halted until that hiccup was restored. Having someone else to blame when things get delayed makes it all too easy to simply stop working for a while.

More related to the initial points at the beginning of this article, perhaps, is the blamestorm that happens after a system outage.

Say an ERP system becomes unresponsive a few times throughout the day. The application team says it's just slowing down, and they don't know why. The network team says everything is fine. The server team says the application is "blocking on IO," which means it's a SAN issue. The SAN team says there is nothing wrong, and other applications on the same devices are fine. You've run through nearly every team, but still have no answer. The SAN people don't have access to the application servers to help diagnose the problem. The server team doesn't even know how the application runs.

See the problem? Specialized teams are distinct and by nature adversarial. Specialized staffers often relegate themselves into a niche knowing that as long as they continue working at large enough companies, "someone else" will take care of all the other pieces.

I unfortunately don't have an answer to this problem. Maybe rotating employees between departments will help. They gain knowledge and also get to know other people, which should lessen the propensity to view them as outsiders.

The tragic part of the current environment is that it is like shifting sands. And it is not only due to the "natural process of crapification of operating systems," in which the OS gradually loses its architectural integrity. The pace of change is simply too fast for mere humans to adapt to, and most of it represents "change for the sake of change" rather than any valuable improvement or extension of capabilities.

If you are a sysadmin who writes his own scripts, you write on sand: you spend a lot of time thinking through and debugging your scripts, which raises your productivity and diminishes the number of possible errors, but the next OS version wipes out a considerable part of your work and you need to revise your scripts again. The tale of Sisyphus can now be re-interpreted as a prescient warning about the thankless task of the sysadmin, forever learning new stuff and maintaining his own script library ;-). Sometimes a lot of work is wiped out because the corporate brass decides to switch to a different flavor of Linux, or "yet another flavor" is added due to a large acquisition. Add to this inevitable technological change, and the question arises: couldn't you pick a more respectable profession, one in which 66% of your knowledge is not replaced within the next ten years?

The Balkanization of Linux is also demonstrated in the Tower of Babel of system programming languages (C, C++, Perl, Python, Ruby, Go, and Java, to name a few) and of systems that are supposed to help you but mostly do quite the opposite (Puppet, Ansible, Chef, etc.). Add to this the monitoring infrastructure (say, Nagios) and you definitely have information overload.

Inadequate training just adds to the stress. First of all, corporations no longer want to pay for it, so you are on your own and have to do it mostly in your free time, as the workload is substantial in most organizations: taking free or low-cost courses if they are available, or buying your own books and trying to learn new stuff from them (which of course is the mark of any good sysadmin, but should not be the only source of new knowledge). The days when you could travel to a vendor training center for a week and communicate with admins from other organizations (which was probably the most valuable part of the whole exercise) are long in the past. I can tell you that training by Sun (Solaris) and IBM (AIX) in the late 1990s was of really high quality, delivered by highly qualified instructors from whom you could learn a lot outside the main topic of the course. Unlike "Trump University," Sun courses could probably have been called "Sun University." Most training now is delivered via the Web, and the chance for face-to-face communication has disappeared. Also, the stress is now on learning "how" rather than learning "why"; the "why" topics are typically reserved for "advanced" courses.

There is also the necessity to relearn stuff again and again, even though the new technologies, daemons, or OS versions are often either the same as before, inferior to their predecessors, or an open scam in which training is simply a way to extract money from lemmings (Agile, most of the DevOps hoopla, etc.). This is the typical neoliberal mentality ("greed is good") implemented in education. There is also a tendency to treat virtual machines and cloud infrastructure as separate technologies requiring separate training and a separate set of certifications (AWS, Azure). This is a kind of infantilization of the profession, in which a person who learned a lot of stuff over the previous 10 years needs to forget it and relearn most of it again and again.

Of course, sysadmins are not the only ones who suffer. Computer scientists also now struggle with the excessive level of complexity and the too-quickly shifting sands. Look at the tragedy of Donald Knuth and his lifelong idea of creating a comprehensive monograph for system programmers (The Art of Computer Programming). He was flattened by the shifting sands and will probably not be able to finish even volume 4 (out of the seven that were planned) in his lifetime.

Of course, much depends on the evolution of hardware and the changes it causes, such as the mass introduction of large SSDs, multi-core CPUs, and large amounts of RAM.

Nobody is now surprised to see a server with 128 GB of RAM, a laptop with 16 GB of RAM, or a cellphone with 4 GB of RAM and a 1 GHz CPU. (Note that the original IBM PC had a 1 MB address space, of which only 640 KB was available for programs, and a 4.77 MHz -- not GHz -- single-core CPU without a floating-point unit.) Such changes, while painful, are inevitable, and hardware progress has slowed recently as it approaches the physical limits of the technology (we will probably not see 2-nanometer lithography CPUs or 8 GHz clock speeds in our lifetimes).

Changes caused by fashion and by the desire of the dominant player to entrench its position are more difficult to accept. It is difficult or even impossible to predict which technology will become fashionable tomorrow or how long DevOps will remain in fashion. Typically such things last around ten years; after that, everything fades into oblivion or is even crossed out, and the former idols are shattered. This strange period of re-invention of the "glass-walls datacenter" under the banner of DevOps (old-timers still remember that IBM datacenters were hated with a passion, and this hate created an additional, non-technological incentive for minicomputers and later for the IBM PC) is characterized by a level of hype usually reserved for women's fashion. Sometimes it now looks to me as if the movie The Devil Wears Prada is a subtle parable about sysadmin work.

Add to this the horrible job market, especially for university graduates and older sysadmins (see Over 50 and unemployed), and one starts to suspect that the life of the modern sysadmin is far from paradise. When you read job descriptions on sites like Monster, Dice, or Indeed, you ask yourself whether those people really want to hire anybody or whether this is just a smokescreen for H-1B labor certification. The level of detail is often so precise that it is almost impossible to change your current specialization. They do not care about the level of talent, and they do not want to train a suitable candidate; they want a person who fits 100% from day one. Also, in places like NYC or SF, rents and property prices keep growing while income growth has been stagnant.

The vandalism of Unix performed by Red Hat with RHEL 7 makes the current environment somewhat unhealthy. It is clear that this was done at the whim of the Red Hat brass, not in the interest of the community. It is a typical Microsoft-style trick which instantly made dozens of high-quality books written by very talented authors semi-obsolete, and the question arises whether it makes sense to write any book about RHEL at all, other than for a solid advance. It generated some backlash, but Red Hat's position as the Microsoft of Linux allowed it to shove its inferior technical decisions down everyone's throat. In a way it reminds me of how Microsoft dealt with Windows 7, replacing it with Windows 10 and essentially destroying the previous Windows interface ecosystem (while preserving binary compatibility).

See also

Here are my notes/reflections on sysadmin problems that often arise in the rather strange (and sometimes pretty toxic) IT departments of large corporations:



NEWS CONTENTS

Old News ;-)

Home 2019 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2004 2003 2002 2001 2000 1999

For the list of top articles see Recommended Links section


"I appreciate Woody Allen's humor because one of my safety valves is an appreciation for life's absurdities. His message is that life isn't a funeral march to the grave. It's a polka."

-- Dennis Kucinich

[Oct 09, 2019] The gzip Recovery Toolkit

Oct 09, 2019 | www.aaronrenn.com

So you thought you had your files backed up - until it came time to restore. Then you found out that you had bad sectors and you've lost almost everything because gzip craps out 10% of the way through your archive. The gzip Recovery Toolkit has a program - gzrecover - that attempts to skip over bad data in a gzip archive. This saved me from exactly the above situation. Hopefully it will help you as well.

I'm very eager for feedback on this program. If you download and try it, I'd appreciate an email letting me know what your results were. My email is arenn@urbanophile.com . Thanks.

ATTENTION

99% of "corrupted" gzip archives are caused by transferring the file via FTP in ASCII mode instead of binary mode. Please re-transfer the file in the correct mode first before attempting to recover from a file you believe is corrupted.

Disclaimer and Warning

This program is provided AS IS with absolutely NO WARRANTY. It is not guaranteed to recover anything from your file, nor is what it does recover guaranteed to be good data. The bigger your file, the more likely that something will be extracted from it. Also keep in mind that this program gets faked out and is likely to "recover" some bad data. Everything should be manually verified.

Downloading and Installing

Note that version 0.8 contains major bug fixes and improvements. See the ChangeLog for details. Upgrading is recommended. The old version is provided in the event you run into troubles with the new release.

You need the following packages: the gzrt sources and the zlib library (including its development headers).

First, build and install zlib if necessary. Next, unpack the gzrt sources. Then cd to the gzrt directory and build the gzrecover program by typing make . Install manually by copying to the directory of your choice.
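For reference, the whole build boils down to a handful of commands; a minimal sketch, assuming the source tarball is named gzrt-0.8.tar.gz and that you install into /usr/local/bin:

$ tar -xzf gzrt-0.8.tar.gz            # unpack the gzrt sources (filename assumed)
$ cd gzrt-0.8
$ make                                # builds the gzrecover program
$ sudo cp gzrecover /usr/local/bin/   # "install manually by copying"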

Usage

Run gzrecover on a corrupted .gz file. If you leave the filename blank, gzrecover will read from the standard input. Anything that can be read from the file will be written to a file with the same name, but with a .recovered appended (any .gz is stripped). You can override this with the -o option. The default filename when reading from the standard input is "stdin.recovered". To write recovered data to the standard output, use the -p option. (Note that -p and -o cannot be used together).

To get a verbose readout of exactly where gzrecover is finding bad bytes, use the -v option to enable verbose mode. This will probably overflow your screen with text so best to redirect the stderr stream to a file. Once gzrecover has finished, you will need to manually verify any data recovered as it is quite likely that our output file is corrupt and has some garbage data in it. Note that gzrecover will take longer than regular gunzip. The more corrupt your data the longer it takes. If your archive is a tarball, read on.
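For example, a verbose run with stderr captured to a log file might look like this (the filenames are just placeholders):

$ gzrecover -v my-corrupted-backup.tar.gz 2> gzrecover.log
$ less gzrecover.log                  # review where the bad bytes were found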

For tarballs, the tar program will choke because GNU tar cannot handle errors in the file format. Fortunately, GNU cpio (tested at version 2.6 or higher) handles corrupted files out of the box.

Here's an example:

$ ls *.gz
my-corrupted-backup.tar.gz
$ gzrecover my-corrupted-backup.tar.gz
$ ls *.recovered
my-corrupted-backup.tar.recovered
$ cpio -F my-corrupted-backup.tar.recovered -i -v

Note that newer versions of cpio can spew voluminous error messages to your terminal. You may want to redirect the stderr stream to /dev/null. Also, cpio might take quite a long while to run.
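In other words, the extraction step from the example above can be quieted down like this (same hypothetical filename):

$ cpio -F my-corrupted-backup.tar.recovered -i -v 2> /dev/null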

Copyright

The gzip Recovery Toolkit v0.8
Copyright (c) 2002-2013 Aaron M. Renn ( arenn@urbanophile.com )

The gzrecover program is licensed under the GNU General Public License .

[Oct 09, 2019] gzip - How can I recover files from a corrupted .tar.gz archive - Stack Overflow

Oct 09, 2019 | stackoverflow.com



George ,Jun 24, 2016 at 2:49

Are you sure that it is a gzip file? I would first run 'file SMS.tar.gz' to validate that.

Then I would read The gzip Recovery Toolkit page.

JohnEye ,Oct 4, 2016 at 11:27

Recovery is possible but it depends on what caused the corruption.

If the file is just truncated, getting some partial result out is not too hard; just run

gunzip < SMS.tar.gz > SMS.tar.partial

which will give some output despite the error at the end.
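As a follow-up sketch (assuming the recovered stream is a tar archive, as in the question), the partial tar can then be unpacked; tar will extract what it can and complain when it reaches the truncated end:

$ tar -xvf SMS.tar.partial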

If the compressed file has large missing blocks, it's basically hopeless after the bad block.

If the compressed file is systematically corrupted in small ways (e.g. transferring the binary file in ASCII mode, which smashes carriage returns and newlines throughout the file), it is possible to recover but requires quite a bit of custom programming, it's really only worth it if you have absolutely no other recourse (no backups) and the data is worth a lot of effort. (I have done it successfully.) I mentioned this scenario in a previous question .

The answers for .zip files differ somewhat, since zip archives have multiple separately-compressed members, so there's more hope (though most commercial tools are rather bogus, they eliminate warnings by patching CRCs, not by recovering good data). But your question was about a .tar.gz file, which is an archive with one big member.


Here is one possible scenario that we encountered. We had a tar.gz file that would not decompress; trying to unzip it gave the error:
gzip -d A.tar.gz
gzip: A.tar.gz: invalid compressed data--format violated

I figured out that the file may have been originally uploaded over a non-binary FTP connection (we don't know for sure).

The solution was relatively simple using the unix dos2unix utility

dos2unix A.tar.gz
dos2unix: converting file A.tar.gz to UNIX format ...
tar -xvf A.tar
file1.txt
file2.txt 
....etc.

It worked! This is one slim possibility, and maybe worth a try - it may help somebody out there.

[Oct 08, 2019] Forward root email on Linux server

Oct 08, 2019 | www.reddit.com

Hi, generally I configure /etc/aliases to forward root messages to my work email address. I found this useful, because sometimes I become aware of something wrong...
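For reference, a minimal sketch of that setup (the address is just a placeholder); on most distributions you add a root alias to /etc/aliases and then rebuild the alias database:

$ sudo sh -c 'echo "root: admin@example.com" >> /etc/aliases'
$ sudo newaliases                     # rebuild the alias database so the change takes effect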

I create specific email filter on my MUA to put everything with "fail" in subject in my ALERT subfolder, "update" or "upgrade" in my UPGRADE subfolder, and so on.

It is a bit annoying, because with > 50 servers there is a lot of "noise" anyway.

How do you manage that?

Thank you!

[Oct 08, 2019] I swear to god we spend 60% of our time planning our sprints, and 40% of the time doing the work, and management wonders why our true productivity has fallen through the floor...

Notable quotes:
"... Scrum is dead, long live Screm! We need to implement it immediately. We must innovate and stay ahead of the curve! level 7 ..."
"... First you scream, then you ahh. Now you can screm ..."
"... Are you saying quantum synergy coupled with block chain neutral intelligence can not be used to expedite artificial intelligence amalgamation into that will metaphor into cucumber obsession? ..."
Oct 08, 2019 | www.reddit.com

MadManMorbo 58 points · 6 days ago

We recently implemented DevOps practices, Scrum, and sprints have become the norm... I swear to god we spend 60% of our time planning our sprints, and 40% of the time doing the work, and management wonders why our true productivity has fallen through the floor... level 5

Angdrambor 26 points · 6 days ago

Let me guess - they left out the retrospectives because somebody brought up how bad they were fucking it all up? level 6

lurker_lurks 15 points · 6 days ago

Scrum is dead, long live Screm! We need to implement it immediately. We must innovate and stay ahead of the curve! level 7

JustCallMeFrij 1 point · 6 days ago

First you scream, then you ahh. Now you can screm

StormlitRadiance 1 point · 5 days ago

It consists of three managers for every engineer and they all screm all day at a different quartet of three managers and an engineer. level 6

water_mizu 7 points · 6 days ago

Are you saying quantum synergy coupled with block chain neutral intelligence can not be used to expedite artificial intelligence amalgamation into that will metaphor into cucumber obsession?

malikto44 9 points · 6 days ago

I worked at a place where the standup meetings went at least 4-6 hours each day. It was amazing how little got done there. Glad I bailed.

Posted by u/bpitts2 3 days ago: If I go outside of process to help you for your "urgent" issue, be cool and don't abuse the relationship.

What is it with these people? Someone brought me an "urgent" request (of course there wasn't a ticket), so I said no worries, I'll help you out. Just open a ticket for me so we can track the work and document the conversation. We got that all knocked out and everyone was happy.

So a day or two later, I suddenly get an instant message for yet another "urgent" issue. ... Ok ... Open a ticket, and I'll get it assigned to one of my team members to take a look.

And a couple days later ... he's back and I'm being asked for help troubleshooting an application that we don't own. At least there's a ticket and an email thread... but wtf man.

What the heck man?

This is like when you get a free drink or dessert from your waiter. Don't keep coming back and asking for more free pie. You know damn well you're supposed to pay for pie. Be cool. I'll help you out when you're really in a tight spot, but the more you cry "urgent", the less I care about your issues.

IT folks are constantly looked at as being dicks because we force people to follow the support process, but this is exactly why we have to make them follow the process.

Posted by u/SpicyTunaNinja 4 days ago: Let's talk about mental health and stress

Hey r/Sysadmin , please don't suffer in silence. I know the job can be very difficult at times, especially with competing objectives, tight (or impossible) deadlines, bad bosses and needy end users, but please - always remember that there are ways to manage that stress. Speaking to friends and family regularly to vent, getting a therapist, or taking time off.

Yes, you do have the ability to take personal leave/medical leave if its that bad. No, it doesn't matter what your colleagues or boss will think..and no, you are not a quitter, weak, or a loser if you take time for yourself - to heal mentally, physically or emotionally.

Don't let yourself get to the point that this one IT employee did at the Paris Police headquarters. Ended up taking the lives of multiple others, and ultimately losing his life. https://www.nbcnews.com/news/world/paris-policeman-kills-2-officers-injures-3-others-knife-attack-n1061861

EDIT: Holy Cow! Thanks for the silver and platinum kind strangers. All i wanted to do was to get some more awareness on this subject, and create a reminder that we all deserve happiness and peace of mind. A reminder that hopefully sticks with you for the days and weeks to come.

Work is just one component of life; don't get so wrapped up in it that you dedicate yourself to it to the detriment of your health.

Posted by u/fresh1003 2 days ago: By 2025 80% of enterprises will shut down their data centers and move to the cloud... do you guys believe this?

By 2025 80% of enterprises will shut down their data centers and move to the cloud... do you guys believe this?

Posted by u/eternalterra 3 days ago: The more tasks I have, the slower I become

Good morning,

We sysadmins have times when we have nothing to do but maintenance. BUT there are times when it seems like chaos comes out of nowhere. When I have a lot of tasks to do, I tend to get slower. The more tasks I have pending, the slower I become. I cannot avoid starting to think about 3 or 4 different problems at the same time, and I can't focus! I only have 2 years of experience as a sysadmin.

Do you guys experience the same?

Cheers,

Posted by u/proudcanadianeh 6 days ago: Cloudflare, Google and Firefox to add support for HTTP/3, shifting away from TCP

Per this article: https://www.techspot.com/news/82111-cloudflare-google-firefox-add-support-http3-shifting-away.html

Not going to lie, this is the first I have heard of HTTP/3. Anyone have any insight into what this shift is going to mean on the systems end? Is this a new protocol entirely?

Posted by u/_sadme_ 8 hours ago: Leaving the IT world...

Hello everyone,

Have you ever wondered if your whole career will be related to IT stuff? I have, since my early childhood. It was more than 30 years ago - in the marvelous world of an 8-bit era. After writing my first code (10 PRINT " my_name " : 20 GOTO 10) I exactly knew what I wanted to do in the future. Now, after spending 18 years in this industry, which is half of my age, I'm not so sure about it.

I had plenty of time to do almost everything. I was writing software for over 100K users and I was covered in dust while drilling holes for ethernet cables in houses of our customers. I was a main network administrator for a small ISP and systems administrator for a large telecom operator. I made few websites and I was managing a team of technical support specialists. I was teaching people - on individual courses on how to use Linux and made some trainings for admins on how to troubleshoot multicast transmissions in their own networks. I was active in some Open Source communities, including running forums about one of Linux distributions (the forum was quite popular in my country) and I was punching endless Ctrl+C/Ctrl+V combos from Stack Overflow. I even fixed my aunt's computer!

And suddenly I realised that I don't want to do this any more. I've completely burnt out. It was like a snap of a finger.

During many years I've collected a wide range of skills that are (or will be) obsolete. I don't want to spend rest of my life maintaining a legacy code written in C or PHP or learning a new language which is currently on top and forcing myself to write in a coding style I don't really like. That's not all... If you think you'll enjoy setting up vlans on countless switches, you're probably wrong. If you think that managing clusters of virtual machines is an endless fun, you'll probably be disappointed. If you love the smell of a brand new blade server and the "click" sound it makes when you mount it into the rack, you'll probably get fed up with it. Sooner or later.

But there's a good side of having those skills. With skills come experience, knowledge and good premonition. And these features don't get old. Remember that!

My employer offered me a position as a project manager and I eagerly agreed to it. It means that I'm leaving the world of "hardcore IT"; I'll be doing some other, less crazy stuff. I'm logging out of my console and I'll run Excel. But I'll keep all the good memories from all those years. I'd like to thank all of you for doing what you're doing, because it's really amazing. Good luck! The world lies in your hands!

Posted by u/remrinds 1 day ago: UPDATE: So our cloud Exchange server was down for 17 hours on Friday

my original post got deleted because i behaved wrongly and posted some slurs. I apologise for that.


anyway, so, my company is using Office 365 ProPlus and we migrated our on-premise Exchange server to the cloud a while ago. On Friday last week, all of our users (1000 or so) could not access their Exchange mailboxes. We are a TV broadcasting station, so you can only imagine the damage when we could not use our mailing system.


initially, we opened a ticket with Microsoft and they just kept us on hold for 12 hours (we are in Japan, so they had to communicate with the US, etc., which took time), and then they told us it was our network infrastructure that was wrong when we kept telling them it wasn't. We asked them to check their environment at least once, which they did not do until 12 hours later.


in the end, it was their exchange server that was the problem, i will copy and paste the whole incident report below


Title: Can't access Exchange

User Impact: Users are unable to access the Exchange Online service.

Current status: We've determined that a recent sync between Exchange Online and Azure Active Directory (AAD) inadvertently resulted in access issues with the Exchange Online service. We've restored the affected environment and updated the Global Location Service (GLS) records, which we believe has resolved the issue. We're awaiting confirmation from your representatives that this issue is resolved.

Scope of impact: Your organization is affected by this event, and this issue impacts all users.

Start time: Friday, October 4, 2019, 4:51 AM
Root cause: A recent Service Intelligence (SI) move inadvertently resulted in access issues with the Exchange Online service.


they wont explain further than what they posted on the incident page but if anyone here is good with microsofts cloud envrionment, can anyone tell me what was the root cause of this? from what i can gather, the AAD and exchange server couldnt sync but they wont tell us what the actual problem is, what the hell is Service intelligence and how does it fix our exchange server when they updated the global location service?


any insight on these report would be more than appreciated


thanks!

Posted by u/Rocco_Saint 13 hours ago: KB4524148 Kills Print Spooler? Thought it was supposed to fix that issue?

I rolled out this patch this weekend to my test group and it appears that some of the workstations this was applied to are having print spooler issues.

Here's the details for the patch.

I'm in the middle of troubleshooting it now, but wanted to reach out and see if anyone else was having issues.

Posted by u/GrizzlyWhosSteve 1 day ago: Finally Learned Docker

I hadn't found a use case for containers in my environment so I had put off learning Docker for a while. But, I'm writing a rails app to simplify/automate some of our administrative tasks. Setting up my different dev and test environments was definitely non trivial, and I plan on onboarding another person or 2 so they can help me out and add it to their resume.

I installed Docker desktop on my Mac, wrote 2 files essentially copied from Docker's website, built it, then ran it. It took a total of 10 minutes to go from zero Docker to fully configured and running. It's really that easy to start using it. So, now I've decided to set up Kubernetes at work this week and see what I can find to do with it.
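For what it's worth, the "built it, then ran it" part really is just two commands once the Dockerfile is in place; a minimal sketch with placeholder names:

$ docker build -t rails-admin-app .           # build an image from the Dockerfile in the current directory
$ docker run -p 3000:3000 rails-admin-app     # run it, publishing the default Rails port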

Edit: leaning towards OKD. Has anyone used it/OpenShift that wants to talk me out of it?

Posted by u/Reverent 4 days ago: How to trigger a sysadmin in two words

Vendor Requirements.

Posted by u/stewardson 6 days ago: Monday From Hell

Let me tell you about the Monday from hell I encountered yesterday.

I work for xyz corp which is an MSP for IT services. One of the companies we support, we'll call them abc corp .

I come in to work Monday morning and look at my alerts report from the previous day and find that all of the servers (about 12+) at abc corp are showing offline. Manager asks me to go on site to investigate, it's around the corner so nbd .

I get to the client, head over to the server room and open up the KVM console. I switch inputs and see no issues with the ESX hosts. I then switch over to the (for some reason, physical) vCenter server and find this lovely little message:

HELLO, this full-disk encryption.

E-mail: email@protonmail.com

rsrv: email@scryptmail.com

Now, I've never seen this before and it looks sus but just in case, i reboot the machine - same message . Do a quick google search and found that the server was hit with an MBR level ransomware encryption. I then quickly switched over to the server that manages the backups and found that it's also encrypted - f*ck.

At this point, I call Mr. CEO and account manager to come on site. While waiting for them, I found that the SANs had also been logged in to and had all data deleted and snapshots deleted off the datastores and the EQL volume was also encrypted - PERFECT!

At this point, I'm basically freaking out. ABC Corp is owned by a parent company who apparently also got hit however we don't manage them *phew*.

Our only saving grace at this point is the offsite backups. I log in to the server and wouldn't ya know it, I see this lovely message:

Last replication time: 6/20/2019 13:00:01

BackupsTech had a script that ran to report on replication status daily and the reports were showing that they were up to date. Obviously, they weren't so at this point we're basically f*cked.

We did eventually find out this originated from parentcompany and that the accounts used were from the old IT Manager that recently left a few weeks ago. Unfortunately, they never disabled the accounts in either domain and the account used was a domain admin account.

We're currently going through and attempting to undelete the VMFS data to regain access to the VM files. If anyone has any suggestions on this, feel free to let me know.

TL;DR - ransomware, accounts not disabled, backups deleted, f*cked.

Posted by u/rdns98 6 days ago: They say, No more IT or system or server admins needed very soon...

Sick and tired of listening to these so-called architects and full stack developers who watch a bunch of videos on YouTube and Pluralsight and find articles online. They go around the workplace throwing around words like containers, devops, NoOps, Azure, infrastructure as code, serverless, etc., and they don't understand half of the stuff. I do some of the devops tasks in our company; I understand what it takes to implement and manage these technologies. Every meeting is infested with these A-holes.

ntengineer 619 points · 6 days ago

Your best defense against these is to come up with non-sarcastic and quality questions to ask these people during the meeting, and watch them not have a clue how to answer them.

For example, a friend of mine worked at a smallish company, some manager really wanted to move more of their stuff into Azure including AD and Exchange environment. But they had common problems with their internet connection due to limited bandwidth and them not wanting to spend more. So during a meeting my friend asked a question something like this:

"You said on this slide that moving the AD environment and Exchange environment to Azure will save us money. Did you take into account that we will need to increase our internet speed by a factor of at least 4 in order to accommodate the increase in traffic going out to the Azure cloud? "

Of course, they hadn't. So the CEO asked my friend if he had the numbers, which he had already done his homework, and it was a significant increase in cost every month and taking into account the cost for Azure and the increase in bandwidth wiped away the manager's savings.

I know this won't work for everyone. Sometimes there is real savings in moving things to the cloud. But often times there really isn't. Calling the uneducated people out on what they see as facts can be rewarding. level 2

PuzzledSwitch 99 points · 6 days ago

my previous boss was that kind of a guy. he waited till other people were done throwing their weight around in a meeting and then calmly and politely dismantled them with facts.

no amount of corporate pressuring or bitching could ever stand up to that. level 3

themastermatt 43 points · 6 days ago

Ive been trying to do this. Problem is that everyone keeps talking all the way to the end of the meeting leaving no room for rational facts. level 4

PuzzledSwitch 33 points · 6 days ago

make a follow-up in email, then.

or, you might have to interject for a moment.

5 more replies level 3

williamfny Jack of All Trades 25 points · 6 days ago

This is my approach. I don't yell or raise my voice, I just wait. Then I start asking questions that they generally cannot answer and slowly take them apart. I don't have to be loud to get my point across. level 4

MaxHedrome 5 points · 6 days ago

Listen to this guy OP

This tactic is called "the box game". Just continuously ask them logical questions that can't be answered with their stupidity. (Box them in), let them be their own argument against themselves.

2 more replies level 2

notechno 34 points · 6 days ago

Not to mention downtime. We have two ISPs in our area. Most of our clients have both in order to keep a fail-over. However, depending on where the client is located, one ISP is fast but goes down every time it rains and the other is solid but slow. Now our only AzureAD customers are those who have so many remote workers that they throw their hands up and deal with the outages as they come. Maybe this works in Europe or South Korea, but this here 'Murica got too many internet holes. level 3

katarh 12 points · 6 days ago

Yup. If you're in a city with fiber it can probably work. If you have even one remote site, and all they have is DSL (or worse, satellite, as a few offices I once supported were literally in the woods when I worked for a timber company) then even Citrix becomes out of the question.

4 more replies

1 more reply level 2

elasticinterests 202 points · 6 days ago

Definitely this, if you know your stuff and can wrap some numbers around it you can regain control of the conversation.

I use my dad as a prime example, he was an electrical engineer for ~40 years, ended up just below board level by the time he retired. He sat in on a product demo once, the kit they were showing off would speed up jointing cable in the road by 30 minutes per joint. My dad asked 3 questions and shut them down:

"how much will it cost to supply all our jointing teams?" £14million

"how many joints do our teams complete each day?" (this they couldn't answer so my dad helped them out) 3

"So are we going to tell the jointers that they get an extra hour and a half hour lunch break or a pay cut?"

Room full of executives that had been getting quite excited at this awesome new investment were suddenly much more interested in showing these guys the door and getting to their next meeting. level 3

Cutriss '); DROP TABLE memes;-- 61 points · 6 days ago

I'm confused a bit by your story. Let's assume they work 8-hour days and so the jointing takes 2.66 hours per operation.

This enhancement will cut that down to 2.16 hours. That's awfully close to enabling a team to increase jointing-per-day from 3 to 4.

That's nearly a 33% increase in productivity. Factoring in overhead it probably is slightly higher.

Is there some reason the workers can't do more than 3 in a day? level 4

slickeddie Sysadmin 87 points · 6 days ago

I think they did 3 as that was the workload so being able to do 4 isn't relevant if there isn't a fourth to do.

That's what I get out of his story anyway.

And also if it was going to be 14 million in costs to equip everyone, the savings have to be there. If adding 1 unit of productivity per day didn't save 14 million in a year or two, it's not really worth it. level 5

Cutriss '); DROP TABLE memes;-- 40 points · 6 days ago

That was basically what I figured was the missing piece - the logistical inability to process 4 units.

As far as the RoI, I had to assume that the number of teams involved and their operational costs had already factored into whether or not 14m was even a price anyone could consider paying. In other words, by virtue of the meeting even happening I figured that the initial costing had not already been laughed out of the room, but perhaps that's a bit too much of an assumption to make. level 6

beer_kimono 14 points · 6 days ago

In my limited experience they slow roll actually pricing anything. Of course physical equipment pricing might be more straightforward, which would explain why his dad got a number instead of a discussion of licensing options. level 5

Lagkiller 7 points · 6 days ago

And also if it was going to be 14 million in costs to equip everyone, the savings have to be there. If adding 1 unit of productivity per day didn't save 14 million in a year or two, it's not really worth it.

That entirely depends. If you have 10 people producing 3 joints a day, with this new kit you could reduce your headcount by 2 and still produce the same, or take on additional workload and increase your production. Not to mention that you don't need to equip everyone with these kits either, you could save them for the projects which needed more daily production thus saving money on the kits and increasing production on an as needed basis.

They story is missing a lot of specifics and while it sounds great, I'm quite certain there was likely a business case to be made.

5 more replies level 4

Standardly 14 points · 6 days ago

He's saying they get 3 done in a day, and the product would save them 30 minutes per joint. That's an hour and a half saved per day, not even enough time to finish a fourth joint, hence the "so do i just give my workers an extra 30 min on lunch"? He just worded it all really weird. level 4

elasticinterests 11 points · 6 days ago

Your ignoring travel time in there, there are also factors to do with outside contractors carrying out the digging and reinstatement works and getting sign off to actually dig the hole in the road in the first place.

There is also the possibility I'm remembering the time wrong... it's been a while! level 5

Cutriss '); DROP TABLE memes;-- 10 points · 6 days ago

Travel time actually works in my favour. If it takes more time to go from job to job, then the impact of the enhancement is magnified because the total time per job shrinks. level 4

wildcarde815 Jack of All Trades 8 points · 6 days ago

Id bet because nobody wants to pay overtime.

1 more reply level 3

say592 4 points · 6 days ago

That seems like a poor example. Why would you ignore efficiency improvements just to give your workers something to do? Why not find another task for them or figure out a way to consolidate the teams some? We fight this same mentality on our manufacturing floors, the idea that if we automate a process someone will lose their job or we wont have enough work for everyone. Its never the case. However, because of the automation improvements we have done in the last 10 years, we are doing 2.5x the output with only a 10% increase in the total number of laborers.

So maybe in your example for a time they would have nothing for these people to do. Thats a management problem. Have them come back and sweep the floor or wash their trucks. Eventually you will be at a point where there is more work, and that added efficiency will save you from needing to put more crews on the road.

2 more replies level 2

SithLordAJ 75 points · 6 days ago

Calling the uneducated people out on what they see as facts can be rewarding.

I wouldnt call them all uneducated. I think what they are is basically brainwashed. They constantly hear from the sales teams of vendors like Microsoft pitching them the idea of moving everything to Azure.

They do not hear the cons. They do not hear from the folks who know their environment and would know if something is a good fit. At least, they dont hear it enough, and they never see it first hand.

Now, I do think this is their fault... they need to seek out that info more, weigh things critically, and listen to what's going on with their teams more. Isolation from the team is their own doing.

After long enough standing on the edge and only hearing "jump!!", something stupid happens. level 3

AquaeyesTardis 18 points · 6 days ago

Apart from performance, what would be some of the downsides of containers? level 4

ztherion Programmer/Infrastructure/Linux 51 points · 6 days ago

There's little downside to containers by themselves. They're just a method of sandboxing processes and packaging a filesystem as a distributable image. From a performance perspective the impact is near negligible (unless you're doing some truly intensive disk I/O).

What can be problematic is taking a process that was designed to run on exactly n dedicated servers and converting it to a modern 2 to n autoscaling deployment that shares hosting with other apps on n a platform like Kubernetes. It's a significant challenge that requires a lot of expertise and maintenance, so there needs to be a clear business advantage to justify hiring at least one additional full time engineer to deal with it. level 5

AirFell85 11 points · 6 days ago

ELI5:

More logistical layers require more engineers to support.

1 more reply

3 more replies level 4

justabofh 33 points · 6 days ago

Posted by u/bpitts2 · 3 days ago (Rant)

If I go outside of process to help you for your "urgent" issue, be cool and don't abuse the relationship.

What is it with these people? Someone brought me an "urgent" request (of course there wasn't a ticket), so I said no worries, I'll help you out. Just open a ticket for me so we can track the work and document the conversation. We got that all knocked out and everyone was happy.

So a day or two later, I suddenly get an instant message for yet another "urgent" issue. ... Ok ... Open a ticket, and I'll get it assigned to one of my team members to take a look.

And a couple days later ... he's back and I'm being asked for help troubleshooting an application that we don't own. At least there's a ticket and an email thread... but wtf man.

What the heck man?

This is like when you get a free drink or dessert from your waiter. Don't keep coming back and asking for more free pie. You know damn well you're supposed to pay for pie. Be cool. I'll help you out when you're really in a tight spot, but the more you cry "urgent", the less I care about your issues.

IT folks are constantly looked at as being dicks because we force people to follow the support process, but this is exactly why we have to make them follow the process.

Posted by u/SpicyTunaNinja · 4 days ago

Let's talk about mental health and stress

Hey r/Sysadmin , please don't suffer in silence. I know the job can be very difficult at times, especially with competing objectives, tight (or impossible) deadlines, bad bosses and needy end users, but please - always remember that there are ways to manage that stress. Speaking to friends and family regularly to vent, getting a therapist, or taking time off.

Yes, you do have the ability to take personal leave/medical leave if it's that bad. No, it doesn't matter what your colleagues or boss will think, and no, you are not a quitter, weak, or a loser if you take time for yourself - to heal mentally, physically or emotionally.

Don't let yourself get to the point that this one IT employee did at the Paris Police headquarters. Ended up taking the lives of multiple others, and ultimately losing his life. https://www.nbcnews.com/news/world/paris-policeman-kills-2-officers-injures-3-others-knife-attack-n1061861

EDIT: Holy Cow! Thanks for the silver and platinum kind strangers. All i wanted to do was to get some more awareness on this subject, and create a reminder that we all deserve happiness and peace of mind. A reminder that hopefully sticks with you for the days and weeks to come.

Work is just one component of life; don't get so wrapped up in it and dedicate yourself to it to the detriment of your health.

Posted by u/fresh1003 · 2 days ago

By 2025 80% of enterprises will shut down their data centers and move to cloud... do you guys believe this?

Posted by u/eternalterra · 3 days ago (Career / Job Related)

The more tasks I have, the slower I become

Good morning,

We, sysadmins, have times when we don't really have anything to do but maintenance. BUT there are times when chaos comes out of nowhere. When I have a lot of tasks to do, I tend to get slower. The more tasks I have pending, the slower I become. I can't avoid thinking about 3 or 4 different problems at the same time, and I can't focus! I only have 2 years of experience as a sysadmin.

Do you guys experience the same?

Cheers,

Posted by u/proudcanadianeh · 6 days ago (General Discussion)

Cloudflare, Google and Firefox to add support for HTTP/3, shifting away from TCP

Per this article: https://www.techspot.com/news/82111-cloudflare-google-firefox-add-support-http3-shifting-away.html

Not going to lie, this is the first I have heard of HTTP/3. Anyone have any insight into what this shift is going to mean on a systems end? Is this a new protocol entirely?
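In practice the "shift away from TCP" shows up as servers advertising HTTP/3 (HTTP over QUIC, which runs on UDP) alongside their existing HTTP/1.1 and HTTP/2 endpoints. A minimal sketch of how you might check that from the systems side, assuming only the third-party requests package and using example URLs, looks like this:

    # Hedged sketch, not from the original post: look for the Alt-Svc response
    # header, which is how servers currently advertise HTTP/3 support to clients.
    import requests

    def advertises_http3(url: str) -> bool:
        resp = requests.get(url, timeout=10)
        alt_svc = resp.headers.get("Alt-Svc", "")
        return "h3" in alt_svc   # e.g. 'h3=":443"; ma=86400' or draft tokens like h3-23

    if __name__ == "__main__":
        for site in ("https://www.cloudflare.com", "https://www.google.com"):
            print(site, "advertises HTTP/3:", advertises_http3(site))

The TCP stack on your servers does not go away; HTTP/3 is negotiated on top of the existing HTTPS listener, so the main operational change is letting UDP/443 through firewalls and load balancers.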

Posted by u/_sadme_ · 8 hours ago (Career / Job Related)

Leaving the IT world...

Hello everyone,

Have you ever wondered if your whole career will be related to IT stuff? I have, since my early childhood. It was more than 30 years ago - in the marvelous world of an 8-bit era. After writing my first code (10 PRINT " my_name " : 20 GOTO 10) I exactly knew what I wanted to do in the future. Now, after spending 18 years in this industry, which is half of my age, I'm not so sure about it.

I had plenty of time to do almost everything. I was writing software for over 100K users and I was covered in dust while drilling holes for ethernet cables in our customers' houses. I was the main network administrator for a small ISP and a systems administrator for a large telecom operator. I made a few websites and I managed a team of technical support specialists. I taught people - individual courses on how to use Linux, and some training for admins on how to troubleshoot multicast transmissions in their own networks. I was active in some Open Source communities, including running forums about one of the Linux distributions (the forum was quite popular in my country), and I punched endless Ctrl+C/Ctrl+V combos from Stack Overflow. I even fixed my aunt's computer!

And suddenly I realised that I don't want to do this any more. I've completely burnt out. It was like a snap of a finger.

Over many years I've collected a wide range of skills that are (or will be) obsolete. I don't want to spend the rest of my life maintaining legacy code written in C or PHP, or learning whatever new language is currently on top and forcing myself to write in a coding style I don't really like. That's not all... If you think you'll enjoy setting up VLANs on countless switches, you're probably wrong. If you think that managing clusters of virtual machines is endless fun, you'll probably be disappointed. If you love the smell of a brand new blade server and the "click" sound it makes when you mount it into the rack, you'll probably get fed up with it. Sooner or later.

But there's a good side of having those skills. With skills come experience, knowledge and good premonition. And these features don't get old. Remember that!

My employer offered me a position as a project manager and I eagerly accepted. It means I'm leaving the world of "hardcore IT"; I'll be doing other, less crazy stuff. I'm logging out of my console and firing up Excel. But I'll keep all the good memories from all those years. I'd like to thank all of you for doing what you're doing, because it's really amazing. Good luck! The world lies in your hands!

Posted by u/remrinds · 1 day ago (General Discussion)

UPDATE: So our cloud Exchange server was down for 17 hours on Friday

My original post got deleted because I behaved wrongly and posted some slurs. I apologise for that.

Anyway, my company is using Office 365 ProPlus and we migrated our on-premise Exchange server to the cloud a while ago. On Friday last week, all of our users (1000 or so) could not access their Exchange mailboxes. We are a TV broadcasting station, so you can only imagine the damage when we could not use our mailing system.

Initially, we opened a ticket with Microsoft and they just kept us on hold for 12 hours (we are in Japan, so they had to communicate with the US etc., which took time), and then they told us it was our network infrastructure that was wrong when we kept telling them it wasn't. We asked them to check their environment at least once, which they did not do until 12 hours later.


In the end, it was their Exchange server that was the problem. I will copy and paste the whole incident report below.


Title: Can't access Exchange

User Impact: Users are unable to access the Exchange Online service.

Current status: We've determined that a recent sync between Exchange Online and Azure Active Directory (AAD) inadvertently resulted in access issues with the Exchange Online service. We've restored the affected environment and updated the Global Location Service (GLS) records, which we believe has resolved the issue. We're awaiting confirmation from your representatives that this issue is resolved.

Scope of impact: Your organization is affected by this event, and this issue impacts all users.

Start time: Friday, October 4, 2019, 4:51 AM
Root cause: A recent Service Intelligence (SI) move inadvertently resulted in access issues with the Exchange Online service.


They won't explain anything beyond what they posted on the incident page, but if anyone here is good with Microsoft's cloud environment, can you tell me what the root cause of this was? From what I can gather, AAD and the Exchange server couldn't sync, but they won't tell us what the actual problem was. What is Service Intelligence, and how does updating the Global Location Service fix our Exchange server?


Any insight on this report would be much appreciated.

Thanks!

Posted by u/Rocco_Saint · 13 hours ago

KB4524148 Kills Print Spooler? Thought it was supposed to fix that issue?

I rolled out this patch this weekend to my test group and it appears that some of the workstations this was applied to are having print spooler issues.

Here's the details for the patch.

I'm in the middle of troubleshooting it now, but wanted to reach out and see if anyone else was having issues.

Posted by u/GrizzlyWhosSteve · 1 day ago

Finally Learned Docker

I hadn't found a use case for containers in my environment so I had put off learning Docker for a while. But, I'm writing a rails app to simplify/automate some of our administrative tasks. Setting up my different dev and test environments was definitely non trivial, and I plan on onboarding another person or 2 so they can help me out and add it to their resume.

I installed Docker desktop on my Mac, wrote 2 files essentially copied from Docker's website, built it, then ran it. It took a total of 10 minutes to go from zero Docker to fully configured and running. It's really that easy to start using it. So, now I've decided to set up Kubernetes at work this week and see what I can find to do with it.

Edit: leaning towards OKD. Has anyone used it/OpenShift who wants to talk me out of it?
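For anyone curious what that zero-to-running experience looks like without the two files at hand, here is a minimal sketch using the Docker SDK for Python (the docker package). The image tag, port and paths are placeholders, not the poster's actual setup.

    # Hedged sketch: build an image from a Dockerfile in the current directory
    # and run it detached, publishing the app port. Requires the "docker" Python
    # package and a running Docker daemon; all names and ports are made up.
    import docker

    client = docker.from_env()

    image, _build_logs = client.images.build(path=".", tag="rails-admin-tools:dev")
    container = client.containers.run(
        image.id,
        detach=True,                  # run in the background
        ports={"3000/tcp": 3000},     # container port -> host port
        name="rails-admin-tools",
    )
    print("started container", container.short_id)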

Posted by u/Reverent · 4 days ago (Off Topic)

How to trigger a sysadmin in two words

Vendor Requirements.

Posted by u/stewardson · 6 days ago (General Discussion)

Monday From Hell

Let me tell you about the Monday from hell I encountered yesterday.

I work for xyz corp which is an MSP for IT services. One of the companies we support, we'll call them abc corp .

I come in to work Monday morning and look at my alerts report from the previous day and find that all of the servers (about 12+) at abc corp are showing offline. Manager asks me to go on site to investigate, it's around the corner so nbd .

I get to the client, head over to the server room and open up the KVM console. I switch inputs and see no issues with the ESX hosts. I then switch over to the (for some reason, physical) vCenter server and find this lovely little message:

HELLO, this full-disk encryption.

E-mail: email@protonmail.com

rsrv: email@scryptmail.com

Now, I've never seen this before and it looks sus, but just in case, I reboot the machine - same message. A quick Google search found that the server was hit with MBR-level ransomware encryption. I then quickly switched over to the server that manages the backups and found that it's also encrypted - f*ck.

At this point, I call Mr. CEO and account manager to come on site. While waiting for them, I found that the SANs had also been logged in to and had all data deleted and snapshots deleted off the datastores and the EQL volume was also encrypted - PERFECT!

At this point, I'm basically freaking out. ABC Corp is owned by a parent company who apparently also got hit however we don't manage them *phew*.

Our only saving grace at this point is the offsite backups. I log in to the server and wouldn't ya know it, I see this lovely message:

Last replication time: 6/20/2019 13:00:01

BackupsTech had a script that ran daily to report on replication status, and the reports were showing that backups were up to date. Obviously, they weren't, so at this point we're basically f*cked.
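That replication report is the part worth generalising: a check that only proves "the job ran" stays green while replication quietly dies. A hedged sketch of a freshness check, with an invented log path and timestamp format, might look like this:

    # Hedged sketch, not the MSP's actual script: alert when the last recorded
    # replication is older than a threshold instead of reporting "job ran OK".
    # The log path and its timestamp format are hypothetical.
    from datetime import datetime, timedelta
    import sys

    LOG_PATH = "/var/log/backup/last_replication.txt"   # e.g. "2019-06-20 13:00:01"
    MAX_AGE = timedelta(hours=26)                        # one daily cycle plus slack

    def main() -> int:
        with open(LOG_PATH) as fh:
            last = datetime.strptime(fh.read().strip(), "%Y-%m-%d %H:%M:%S")
        age = datetime.now() - last
        if age > MAX_AGE:
            print(f"CRITICAL: last replication {last} is {age} old")
            return 2     # Nagios-style critical exit code
        print(f"OK: last replication {last}")
        return 0

    if __name__ == "__main__":
        sys.exit(main())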

We did eventually find out this originated from parentcompany and that the accounts used were from the old IT Manager that recently left a few weeks ago. Unfortunately, they never disabled the accounts in either domain and the account used was a domain admin account.

We're currently going through and attempting to undelete the VMFS data to regain access to the VM files. If anyone has any suggestions on this, feel free to let me know.

TL;DR - ransomware, accounts not disabled, backups deleted, f*cked.

Posted by u/rdns98 · 6 days ago

They say, No more IT or system or server admins needed very soon...

Sick and tired of listening to these so-called architects and full stack developers who watch a bunch of videos on YouTube and Pluralsight and find articles online. They go around the workplace throwing around words like containers, devops, NoOps, azure, infrastructure as code, serverless, etc., but they don't understand half of the stuff. I do some of the devops tasks in our company; I understand what it takes to implement and manage these technologies. Every meeting is infested with these A-holes.

ntengineer 619 points · 6 days ago

Your best defense against these is to come up with non-sarcastic and quality questions to ask these people during the meeting, and watch them not have a clue how to answer them.

For example, a friend of mine worked at a smallish company, some manager really wanted to move more of their stuff into Azure including AD and Exchange environment. But they had common problems with their internet connection due to limited bandwidth and them not wanting to spend more. So during a meeting my friend asked a question something like this:

"You said on this slide that moving the AD environment and Exchange environment to Azure will save us money. Did you take into account that we will need to increase our internet speed by a factor of at least 4 in order to accommodate the increase in traffic going out to the Azure cloud? "

Of course, they hadn't. So the CEO asked my friend if he had the numbers; he had already done his homework. The bandwidth upgrade was a significant extra cost every month, and taking into account the cost of Azure plus the increased bandwidth wiped away the manager's savings.

I know this won't work for everyone. Sometimes there are real savings in moving things to the cloud. But often there really aren't. Calling the uneducated people out on what they see as facts can be rewarding. level 2
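The friend's homework boils down to a comparison anyone can redo; the sketch below only restates that logic with made-up figures, so swap in real quotes before reading anything into the result.

    # Back-of-the-envelope sketch with hypothetical numbers: does the claimed
    # Azure saving survive the bandwidth upgrade it forces?
    current_onprem_cost = 4000   # $/month to run AD + Exchange in-house (made up)
    azure_service_cost = 3200    # $/month quoted for the Azure equivalents (made up)
    current_circuit = 500        # $/month for the existing internet link (made up)
    upgraded_circuit = 2300      # $/month for a ~4x faster link (made up)

    claimed_saving = current_onprem_cost - azure_service_cost
    extra_bandwidth = upgraded_circuit - current_circuit
    net_saving = claimed_saving - extra_bandwidth

    print(f"claimed saving:       ${claimed_saving}/month")
    print(f"extra bandwidth cost: ${extra_bandwidth}/month")
    print(f"net change:           ${net_saving}/month")  # negative means the move costs more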

PuzzledSwitch 99 points · 6 days ago

my previous boss was that kind of a guy. he waited till other people were done throwing their weight around in a meeting and then calmly and politely dismantled them with facts.

no amount of corporate pressuring or bitching could ever stand up to that. level 3

themastermatt 43 points · 6 days ago

Ive been trying to do this. Problem is that everyone keeps talking all the way to the end of the meeting leaving no room for rational facts. level 4

PuzzledSwitch 33 points · 6 days ago

make a follow-up in email, then.

or, you might have to interject for a moment.

5 more replies level 3

williamfny Jack of All Trades 25 points · 6 days ago

This is my approach. I don't yell or raise my voice, I just wait. Then I start asking questions that they generally cannot answer and slowly take them apart. I don't have to be loud to get my point across. level 4

MaxHedrome 5 points · 6 days ago

Listen to this guy OP

This tactic is called "the box game". Just continuously ask them logical questions that can't be answered with their stupidity. (Box them in), let them be their own argument against themselves.

2 more replies level 2

notechno 34 points · 6 days ago

Not to mention downtime. We have two ISPs in our area. Most of our clients have both in order to keep a fail-over. However, depending on where the client is located, one ISP is fast but goes down every time it rains and the other is solid but slow. Now our only AzureAD customers are those who have so many remote workers that they throw their hands up and deal with the outages as they come. Maybe this works in Europe or South Korea, but this here 'Murica got too many internet holes. level 3

katarh 12 points · 6 days ago

Yup. If you're in a city with fiber it can probably work. If you have even one remote site, and all they have is DSL (or worse, satellite, as a few offices I once supported were literally in the woods when I worked for a timber company) then even Citrix becomes out of the question.

4 more replies

1 more reply level 2

elasticinterests 202 points · 6 days ago

Definitely this, if you know your stuff and can wrap some numbers around it you can regain control of the conversation.

I use my dad as a prime example, he was an electrical engineer for ~40 years, ended up just below board level by the time he retired. He sat in on a product demo once, the kit they were showing off would speed up jointing cable in the road by 30 minutes per joint. My dad asked 3 questions and shut them down:

"how much will it cost to supply all our jointing teams?" £14million

"how many joints do our teams complete each day?" (this they couldn't answer so my dad helped them out) 3

"So are we going to tell the jointers that they get an extra hour and a half hour lunch break or a pay cut?"

Room full of executives that had been getting quite excited at this awesome new investment were suddenly much more interested in showing these guys the door and getting to their next meeting. level 3

Cutriss '); DROP TABLE memes;-- 61 points · 6 days ago

I'm confused a bit by your story. Let's assume they work 8-hour days and so the jointing takes 2.66 hours per operation.

This enhancement will cut that down to 2.16 hours. That's awfully close to enabling a team to increase jointing-per-day from 3 to 4.

That's nearly a 33% increase in productivity. Factoring in overhead it probably is slightly higher.

Is there some reason the workers can't do more than 3 in a day? level 4

slickeddie Sysadmin 87 points · 6 days ago

I think they did 3 as that was the workload so being able to do 4 isn't relevant if there isn't a fourth to do.

That's what I get out of his story anyway.

And also if it was going to be 14 million in costs to equip everyone, the savings have to be there. If adding 1 unit of productivity per day didn't save 14 million in a year or two, it's not really worth it. level 5

Cutriss '); DROP TABLE memes;-- 40 points · 6 days ago

That was basically what I figured was the missing piece - the logistical inability to process 4 units.

As far as the RoI, I had to assume that the number of teams involved and their operational costs had already factored into whether or not 14m was even a price anyone could consider paying. In other words, by virtue of the meeting even happening I figured that the initial costing had not already been laughed out of the room, but perhaps that's a bit too much of an assumption to make. level 6

beer_kimono 14 points · 6 days ago

In my limited experience they slow roll actually pricing anything. Of course physical equipment pricing might be more straightforward, which would explain why his dad got a number instead of a discussion of licensing options. level 5

Lagkiller 7 points · 6 days ago

And also if it was going to be 14 million in costs to equip everyone, the savings have to be there. If adding 1 unit of productivity per day didn't save 14 million in a year or two, it's not really worth it.

That entirely depends. If you have 10 people producing 3 joints a day, with this new kit you could reduce your headcount by 2 and still produce the same, or take on additional workload and increase your production. Not to mention that you don't need to equip everyone with these kits either, you could save them for the projects which needed more daily production thus saving money on the kits and increasing production on an as needed basis.

The story is missing a lot of specifics, and while it sounds great, I'm quite certain there was likely a business case to be made.

5 more replies level 4

Standardly 14 points · 6 days ago

He's saying they get 3 done in a day, and the product would save them 30 minutes per joint. That's an hour and a half saved per day, not even enough time to finish a fourth joint, hence the "so do i just give my workers an extra 30 min on lunch"? He just worded it all really weird. level 4
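Running the numbers as stated in the thread (8-hour day, 3 joints, 30 minutes saved per joint) makes the objection concrete:

    # Restating the thread's arithmetic: does saving 30 minutes per joint
    # actually buy a fourth joint in an 8-hour day?
    workday_h = 8.0
    joints_per_day = 3
    time_per_joint_h = workday_h / joints_per_day      # ~2.67 h per joint today
    improved_time_h = time_per_joint_h - 0.5           # ~2.17 h with the new kit

    time_used_h = joints_per_day * improved_time_h     # ~6.5 h for the same 3 joints
    slack_h = workday_h - time_used_h                  # ~1.5 h freed up
    print(f"freed up {slack_h:.1f} h; a 4th joint needs {improved_time_h:.2f} h")
    # 1.5 h < 2.17 h: the kit buys idle time, not extra joints, even before
    # counting the travel, permits and reinstatement work raised further down.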

elasticinterests 11 points · 6 days ago

You're ignoring travel time there; there are also factors to do with outside contractors carrying out the digging and reinstatement works, and getting sign-off to actually dig the hole in the road in the first place.

There is also the possibility I'm remembering the time wrong... it's been a while! level 5

Cutriss '); DROP TABLE memes;-- 10 points · 6 days ago

Travel time actually works in my favour. If it takes more time to go from job to job, then the impact of the enhancement is magnified because the total time per job shrinks. level 4

wildcarde815 Jack of All Trades 8 points · 6 days ago

Id bet because nobody wants to pay overtime.

1 more reply level 3

say592 4 points · 6 days ago

That seems like a poor example. Why would you ignore efficiency improvements just to give your workers something to do? Why not find another task for them or figure out a way to consolidate the teams some? We fight this same mentality on our manufacturing floors, the idea that if we automate a process someone will lose their job or we wont have enough work for everyone. Its never the case. However, because of the automation improvements we have done in the last 10 years, we are doing 2.5x the output with only a 10% increase in the total number of laborers.

So maybe in your example for a time they would have nothing for these people to do. Thats a management problem. Have them come back and sweep the floor or wash their trucks. Eventually you will be at a point where there is more work, and that added efficiency will save you from needing to put more crews on the road.

2 more replies level 2

SithLordAJ 75 points · 6 days ago

Calling the uneducated people out on what they see as facts can be rewarding.

I wouldnt call them all uneducated. I think what they are is basically brainwashed. They constantly hear from the sales teams of vendors like Microsoft pitching them the idea of moving everything to Azure.

They do not hear the cons. They do not hear from the folks who know their environment and would know if something is a good fit. At least, they dont hear it enough, and they never see it first hand.

Now, I do think this is their fault... they need to seek out that info more, weigh things critically, and listen to what's going on with their teams more. Isolation from the team is their own doing.

After long enough standing on the edge and only hearing "jump!!", something stupid happens. level 3

AquaeyesTardis 18 points · 6 days ago

Apart from performance, what would be some of the downsides of containers? level 4

ztherion Programmer/Infrastructure/Linux 51 points · 6 days ago

There's little downside to containers by themselves. They're just a method of sandboxing processes and packaging a filesystem as a distributable image. From a performance perspective the impact is near negligible (unless you're doing some truly intensive disk I/O).

What can be problematic is taking a process that was designed to run on exactly n dedicated servers and converting it to a modern 2-to-n autoscaling deployment that shares hosting with other apps on a platform like Kubernetes. It's a significant challenge that requires a lot of expertise and maintenance, so there needs to be a clear business advantage to justify hiring at least one additional full time engineer to deal with it. level 5

AirFell85 11 points · 6 days ago

ELI5:

More logistical layers require more engineers to support.

1 more reply

3 more replies level 4

justabofh 33 points · 6 days ago

Containers are great for stateless stuff. So your webservers/application servers can be shoved into containers. Think of containers as being the modern version of statically linked binaries or fat applications. Static binaries have the problem that any security vulnerability requires a full rebuild of the application, and that problem is escalated in containers (where you might not even know that a broken library exists)

If you are using the typical business application, you need one or more storage components for data which needs to be available, possibly changed and access controlled.

Containers are a bad fit for stateful databases, or any stateful component, really.

Containers also enable microservices, which are great ideas at a certain organisation size (if you aren't sure you need microservices, just use a simple monolithic architecture). The problem with microservices is that you replace complexity in your code with complexity in the communications between the various components, and that is harder to see and debug. level 5

Untgradd 6 points · 6 days ago

Containers are fine for stateful services -- you can manage persistence at the storage layer the same way you would have to manage it if you were running the process directly on the host.
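To illustrate what "manage persistence at the storage layer" can look like in the simplest case, here is a hedged sketch using the Docker SDK for Python: the database files live in a named volume, so the container itself stays disposable. The image, volume name and password are placeholders, not anything from the thread.

    # Minimal sketch: run PostgreSQL in a container but keep its data in a named
    # volume, so the container can be destroyed and recreated without losing state.
    # Requires the "docker" Python package and a local Docker daemon.
    import docker

    client = docker.from_env()

    client.volumes.create(name="pgdata")   # the stateful part lives here, not in the container
    db = client.containers.run(
        "postgres:13",
        detach=True,
        name="app-db",
        environment={"POSTGRES_PASSWORD": "example-only"},   # placeholder credential
        volumes={"pgdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
    )
    print("database container:", db.short_id)

The same idea scales up to bind mounts on SAN/NFS paths or Kubernetes PersistentVolumeClaims; the caveat raised just below still applies, since backups have to target that storage rather than the container.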

6 more replies level 5

malikto44 5 points · 6 days ago

Backing up containers can be a pain, so you don't want to use them for valuable data unless the data is stored elsewhere, like a database or even a file server.

For spinning up stateless applications to take workload behind a load balancer, containers are excellent.

9 more replies

33 more replies level 3

malikto44 3 points · 6 days ago

The problem is that there is an overwhelming din from vendors. Everybody and their brother, sister, mother, uncle, cousin, dog, cat, and gerbil is trying to sell you some pay-by-the-month cloud "solution".

The reason is that the cloud forces people into monthly payments, which is a guaranteed income for companies, but costs a lot more in the long run, and if something happens and one can't make the payments, business is halted, ensuring that bankruptcies hit hard and fast. Even with the mainframe, a company could limp along without support for a few quarters until they could get enough cash flow.

If we have a serious economic downturn, the fact that businesses will be completely shuttered if they can't afford their AWS bill just means fewer companies can limp along when the economy is bad, which will intensify a downturn.

1 more reply

3 more replies level 2

wildcarde815 Jack of All Trades 12 points · 6 days ago

Also if you can't work without cloud access you better have a second link. level 2

pottertown 10 points · 6 days ago

Our company viewed the move to Azure less as a cost savings measure and more of a move towards agility and "right now" sizing of our infrastructure.

Your point is very accurate; as an example, our location is wholly incapable of moving much to the cloud due to half of us being connected via a satellite network and the other half being bent over a barrel by the only ISP in town. level 2

_The_Judge 27 points · 6 days ago

I'm sorry, but I find management around tech these days wholly inadequate. The idea that you can get an MBA and manage shit you have no idea about is absurd, and it just wastes everyone else's time to constantly ELI5 so the manager can do their job effectively. level 2

laserdicks 57 points · 6 days ago

Calling the uneducated people out on what they see as facts can be rewarding

Aaand political suicide in a corporate environment. Instead I use the following:

"I love this idea! We've actually been looking into a similar solution however we weren't able to overcome some insurmountable cost sinkholes (remember: nothing is impossible; just expensive). Will this idea require an increase in internet speed to account for the traffic going to the azure cloud?" level 3

lokko12 71 points · 6 days ago

Will this idea require an increase in internet speed to account for the traffic going to the azure cloud?

No.

...then people rant on /r/sysadmin about stupid investments and say "but I told them". level 4

HORACE-ENGDAHL Jack of All Trades 61 points · 6 days ago

This exactly. You can't compromise your own argument, to the extent that it's easy to shoot down, in the name of giving your managers a way of saving face; and if you deliver the facts after that sugar-coating it will look even worse, as it will be interpreted as you setting them up to look like idiots. Being frank, objective and non-blaming is always the best route. level 4

linuxdragons 13 points · 6 days ago

Yeah, this is a terrible example. If I were his manager I would be starting the paperwork trail after that meeting.

6 more replies level 3

messburg 61 points · 6 days ago

I think it's quite an american thing to do it so enthusiastically; to hurt no one, but the result is so condescending. It must be annoying to walk on egg shells to survive a day in the office.


And this is not a rant against soft skills in IT, at all. level 4

vagrantprodigy07 13 points · 6 days ago

It is definitely annoying. level 4

widowhanzo 27 points · 6 days ago
· edited 6 days ago

We work with Americans and they're always so positive, it's kinda annoying. They enthusiastically say "This is very interesting" when in reality it sucks and they know it.

Another less professional example, one of my (non-american) co-workers always wants to go out for coffee (while we have free and better coffee in the office), and the American coworker is always nice like "I'll go with you but I'm not having any" and I just straight up reply "No. I'm not paying for shitty coffee, I'll make a brew in the office". And that's that. Sometimes telling it as it is makes the whole conversation much shorter :D level 5

superkp 42 points · 6 days ago

Maybe the american would appreciate the break and a chance to spend some time with a coworker away from screens, but also doesn't want the shit coffee?

Sounds quite pleasant, honestly. level 6

egamma Sysadmin 39 points · 6 days ago

Yes. "Go out for coffee" is code for "leave the office so we can complain about management/company/customer/Karen". Some conversations shouldn't happen inside the building. level 7

auru21 5 points · 6 days ago

And complain about that jerk who never joins them

1 more reply level 6

Adobe_Flesh 6 points · 6 days ago

Inferior American - please compute this - the sole intention was consumption of coffee. Therefore due to existence of coffee in office, trip to coffee shop is not sane. Resistance to my reasoning is futile. level 7

superkp 1 point · 6 days ago

Coffee is not the sole intention. If that was the case, then there wouldn't have been an invitation.

Another intention would be the social aspect - which americans are known to love, especially favoring it over doing any actual work in the context of a typical 9-5 corporate job.

I've effectively resisted your reasoning, therefore [insert ad hominem insult about inferior logic].

5 more replies level 5

ITaggie Tier II Support/Linux Admin 10 points · 6 days ago

I mean, I've taken breaks just to get away from the screens for a little while. They might just like being around you.

1 more reply

11 more replies level 3

tastyratz 7 points · 6 days ago

There is still value to tact in your delivery, but, don't slit your own throat. Remind them they hired you for a reason.

"I appreciate the ideas being raised here by management for cost savings and it certainly merits a discussion point. As a business, we have many challenges and one of them includes controlling costs. Bridging these conversations with subject matter experts and management this early can really help fill out the picture. I'd like to try to provide some of that value to the conversation here" level 3

renegadecanuck 2 points · 6 days ago

If it's political suicide to point out potential downsides, then you need to work somewhere else.

Especially since your response, in my experience, won't get anything done (people will either just say "no, that's not an issue", or they'll find your tone really condescending) and will just piss people off.

I worked with someone like that who would always be super passive-aggressive in how she brought things up, and it pissed me off to no end, because it felt less like bringing up potential issues and more like belittling. level 4

laserdicks 1 point · 4 days ago

Agreed, but I'm too early on in my career to make that jump. level 3

A_A_A_U_U_U 1 point · 6 days ago

Feels good to get to a point in my career where I can call people out whenever the hell I feel like it. I've got recruiters banging down my door; I couldn't swing a stick without hitting a job offer or three.

Of course I'm not suggesting you be combative for no reason, and you should be tactful about it, but if you don't call out fools like that then you're being negligent in your duties. level 4

adisor19 3 points · 6 days ago

This. The current IT market is to our advantage. Say it like it is and if they don't like it, GTFO.

14 more replies level 1

DragonDrew Jack of All Trades 777 points · 6 days ago

"I am resolute in my ability to elevate this collaborative, forward-thinking team into the revenue powerhouse that I believe it can be. We will transition into a DevOps team specialising in migrating our existing infrastructure entirely to code and go completely serverless!" - CFO that outsources IT level 2

OpenScore Sysadmin 529 points · 6 days ago

"We will utilize Artificial Intelligence, machine learning, Cloud technologies, python, data science and blockchain to achieve business value" level 3

omfgitzfear 472 points · 6 days ago

We're gonna be AGILE level 4

whetu 113 points · 6 days ago

Synergy. level 5

Erok2112 92 points · 6 days ago
Gold

Weird Al even wrote a song about it!

https://www.youtube.com/watch?v=GyV_UG60dD4 level 6

uptimefordays Netadmin 32 points · 6 days ago

It's so good, I hate it. level 7

Michelanvalo 31 points · 6 days ago

I love Al, I've seen him in concert a number of times, Alapalooza was the first CD I ever opened, I own hundreds of dollars of merchandise.

I cannot stand this song because it drives me insane to hear all this corporate shit in one 4:30 space.

4 more replies

8 more replies level 5

geoff1210 9 points · 6 days ago

I can't attend keynotes without this playing in the back of my head

17 more replies level 4

MadManMorbo 58 points · 6 days ago

We recently implemented DevOps practices, Scrum, and sprints have become the norm... I swear to god we spend 60% of our time planning our sprints, and 40% of the time doing the work, and management wonders why our true productivity has fallen through the floor... level 5

Angdrambor 26 points · 6 days ago

Let me guess - they left out the retrospectives because somebody brought up how bad they were fucking it all up? level 6

ValensEtVolens 1 point · 6 days ago

Those should be fairly short too. But how do you improve if you don't apply lessons learned?

Glad I work for a top IT company.
level 5
StormlitRadiance 23 points · 6 days ago

If you spend three whole days out of every five in planning meetings, this is a problem with your meeting planners, not with screm. If these people stay in charge, you'll be stuck in planning hell no matter what framework or buzzwords they try to fling around. level 6

lurker_lurks 15 points · 6 days ago

Scrum is dead, long live Screm! We need to implement it immediately. We must innovate and stay ahead of the curve! level 7

Solaris17 Sysadmin 3 points · 6 days ago

My last company used screm. No 3 day meeting events, 20min of loose direction and we were off to the races. We all came back with parts to different projects. SYNERGY. level 7

JustCallMeFrij 1 point · 6 days ago

First you scream, then you ahh. Now you can screm level 8

lurker_lurks 4 points · 6 days ago

You screm, I screm, we all screm for ice crem. level 7

StormlitRadiance 1 point · 5 days ago

It consists of three managers for every engineer and they all screm all day at a different quartet of three managers and an engineer. level 6

water_mizu 7 points · 6 days ago

Are you saying quantum synergy coupled with block chain neutral intelligence can not be used to expedite artificial intelligence amalgamation into that will metaphor into cucumber obsession?

3 more replies level 5

malikto44 9 points · 6 days ago

I worked at a place where the standup meetings went at least 4-6 hours each day. It was amazing how little got done there. Glad I bailed.

7 more replies level 4

opmrcrab 23 points · 6 days ago

fr agile, FTFY :P level 4

ChristopherBurr 20 points · 6 days ago

Haha, we just fired our Agile scrum masters. Turns out they couldn't make development any faster or more streamlined.

I was so tired of seeing all the colored post it notes and white boards set up everywhere. level 5

JasonHenley 8 points · 6 days ago

We prefer to call them Scrum Lords here.

1 more reply level 4

Mr-Shank 85 points · 6 days ago

Agile is cancer... level 5

Skrp 66 points · 6 days ago

It doesn't have to be. But oftentimes it is, yes. level 6

Farren246 74 points · 6 days ago

Agile is good. "Agile" is very very bad. level 7

nineteen999 55 points · 6 days ago
· edited 6 days ago

Everyone says this, meaning "the way I do Agile is good, the way everyone else does it sucks. Buy my Agile book! Or my Agile training course! Only $199.99". level 8

fariak 54 points · 6 days ago

There are different ways to do Agile? From the past couple of places I worked at I thought Agile was just standing in a corner for 5 minutes each morning. Do some people sit? level 9

nineteen999 45 points · 6 days ago

Wait until they have you doing "retrospectives" on a Friday afternoon with a bunch of alcohol involved. By Monday morning nobody remembers what the fuck they retrospected about on Friday. level 10

fariak 51 points · 6 days ago

Now that's a scrum

6 more replies level 9

Ryuujinx DevOps Engineer 25 points · 6 days ago

No, that's what it's supposed to look like. A quick 'Is anyone blocked? Does anyone need anything/can anyone chip in with X? Ok get back to it'

What it usually looks like is a round table 'Tell us what you're working on' that takes at least 30, and depending on team size, closer to an hour. level 10

become_taintless 13 points · 6 days ago

our weekly 'stand-up' is often 60-90 minutes long, because they treat it like not only a roundtable discussion about what you're working on, but an opportunity to hash out every discussion to death, in front of C-levels.

also, the C-levels are at our 'stand-ups', because of course

3 more replies

8 more replies level 8

togetherwem0m0 12 points · 6 days ago

To me agile is an unfortunate framework to confront and dismantle a lot of hampering, low-value business processes. I call it a "get-er-done" framework. But yes, it's not all roses and sunshine in agile. Still, it's important to destroy processes that make delivering value impossible.

1 more reply level 7

PublicyPolicy 9 points · 6 days ago

Haha. all the places I worked with agile.

We gotta do agile.

But we set how much work gets done and when. Oh you are behind schedule. No problem. No unit test and no testing for you. Can't fall behind.

Then the CIO: guess what, we moved the December deadline up to September. Be agile! It's already been promised. We just have to pivot, fuckers!

11 more replies level 5

Thameus We are Pakleds make it go 8 points · 6 days ago

"That's not real Agile" level 6

pioto 36 points · 6 days ago

No true Scotsman Scrum Master level 5

1 more reply level 5

StormlitRadiance 3 points · 6 days ago

Psychotic middle managers will always have their little spastic word salad, no matter what those words are. level 6

make_havoc 2 points · 6 days ago

Why? Why? Why is it that I can only give you one upvote? You need a thousand for this truth bomb! level 5

sobrique 2 points · 6 days ago

Like all such things - it's a useful technique, that turns into a colossal pile of wank if it's misused. This is true of practically every buzzword laden methodology I've seen introduced in the last 20 years. level 5

Angdrambor 2 points · 6 days ago

For me, the fact that my team is moderately scrummy is a decent treatment for my ADHD. The patterns are right up there with Ritalin in terms of making me less neurologically crippled. level 5

corsicanguppy DevOps Zealot 1 point · 6 days ago

The 'fr' on the front isn't usually pronounced level 4

Thangleby_Slapdiback 3 points · 6 days ago

Christ I hate that word. level 4

NHarvey3DK 2 points · 6 days ago

I think we've moved on to AI level 4

blaze13541 1 point · 6 days ago

I think I'm going to snap if I have one more meeting that discusses seamless migrations and seamless movement across a complex, multi-forest, non-standardized network.

pooley92 1 point · 6 days ago

Try the business bullshit generator https://www.atrixnet.com/bs-generator.html level 4

pooley92 1 point · 6 days ago

Or try the tech bullshit generator https://www.makebullshit.com/

unixwasright 49 points · 6 days ago

Do we not still need to get the word "paradigm" in there somewhere? level 4

wallybeavis 36 points · 6 days ago

Last time I tried shifting some paradigms, I threw out my back. level 5

jackology 19 points · 6 days ago

Pivot yourself. level 6

EViLTeW 23 points · 6 days ago

If this doesn't work, circle back around and do the needful.

[Oct 06, 2019] Weird Al Yankovic - Mission Statement

Highly recommended!
This song seriously streamlined my workflow.
Oct 06, 2019 | www.youtube.com

FanmaR , 4 years ago

Props to the artist who actually found a way to visualize most of this meaningless corporate lingo. I'm sure it wasn't easy to come up with everything.

Maxwelhse , 3 years ago

He missed "sea change" and "vertical integration". Otherwise, that was pretty much all of the useless corporate meetings I've ever attended distilled down to 4.5 minutes. Oh, and you're getting laid off and/or no raises this year.

VenetianTemper , 4 years ago

From my experiences as an engineer, never trust a company that describes their product with the word "synergy".

Swag Mcfresh , 5 years ago

For those too young to get the joke, this is a style parody of Crosby, Stills & Nash, a folk-pop super-group from the 60's. They were hippies who spoke out against corporate interests, war, and politics. Al took their sound (flawlessly), and wrote a song in corporate jargon (the exact opposite of everything CSN was about). It's really brilliant, to those who get the joke.

112steinway , 4 years ago

Only in corporate speak can you use a whole lot of words while saying nothing at all.

Jonathan Ingersoll , 3 years ago

As a business major this is basically every essay I wrote.

A.J. Collins , 3 years ago

"The company has undergone organization optimization due to our strategy modification, which includes empowering the support to the operation in various global markets" - Red 5 on why they laid off 40 people suddenly. Weird Al would be proud.

meanmanturbo , 3 years ago

So this is basically a Dilbert strip turned into a song. I approve.

zyxwut321 , 4 years ago

In his big long career this has to be one of the best songs Weird Al's ever done. Very ambitious rendering of one of the most ambitious songs in pop music history.

teenygozer , 3 years ago

This should be played before corporate meetings to shame anyone who's about to get up and do the usual corporate presentation. Genius as usual, Mr. Yankovic!

Dunoid , 4 years ago

Maybe I'm too far gone to the world of computer nerds, but "Cloud Computing" seems like it should have been in the song somewhere.

Snoo Lee , 4 years ago

The "paradigm shift" at the end of the video / song is when the corporation screws everybody at the end. Brilliantly done, Al.

A Piece Of Bread , 3 years ago

Don't forget to triangulate the automatonic business monetizer to create exceptional synergy.

GeoffryHawk , 3 years ago

There's a quote that goes something like: a politician is someone who speaks for hours while saying nothing at all. And this is exactly it, and it's brilliant.

Sefie Ezephiel , 4 months ago

From the current Gamestop earnings call "address the challenges that have impacted our results, and execute both deliberately and with urgency. We believe we will transform the business and shape the strategy for the GameStop of the future. This will be driven by our go-forward leadership team that is now in place, a multi-year transformation effort underway, a commitment to focusing on the core elements of our business that are meaningful to our future, and a disciplined approach to capital allocation."" yeah Weird Al totally nailed it

Phil H , 6 months ago

"People who enjoy meetings should not be put in charge of anything." -Thomas Sowell

Laff , 3 years ago

I heard "monetize our asses" for some reason...

Brett Naylor , 4 years ago

Excuse me, but "proactive" and "paradigm"? Aren't these just buzzwords that dumb people use to sound important? Not that I'm accusing you of anything like that. [pause] I'm fired, aren't I?~George Meyer

Mark Kahn , 4 years ago

Brilliant social commentary on how the height of 60's optimism was bastardized into corporate enthusiasm. I hope Steve Jobs got to see this.

Mark , 4 years ago

That's the strangest "Draw My Life" I've ever seen.

Δ , 17 hours ago

I watch this at least once a day to take the edge off my job search, whenever I have to decipher fifteen daily want-ads claiming to seek "Hospitality Ambassadors", "Customer Satisfaction Specialists", "Brand Representatives" and "Team Commitment Associates", only to eventually discover they want someone to run a cash register and sweep up.

Mike The SandbridgeKid , 5 years ago

The irony is a song about Corporate Speak in the style of tie-dyed, hippie-dippy CSN (+/- Y) four-part harmony. Suite: Judy Blue Eyes via Almost Cut My Hair filtered through Carry On. A "fantastic" middle finger to Wall Street, The City, and the monstrous excesses of unbridled capitalism.

Geetar Bear , 4 years ago (edited)

This reminds me of George Carlin so much.

Vaugn Ripen , 2 years ago

If you understand who and what he's taking a jab at, this is one of the greatest songs and videos of all time. So spot on. This and Frank's 2000 inch tv are my favorite songs of yours. Thanks Al!

Joolz Godfree , 4 years ago

hahaha, "Client-Centric Solutions...!" (or in my case at the time, 'Customer-Centric' solutions) now THAT's a term i haven't heard/read/seen in years, since last being an office drone. =D

Miles Lacey , 4 years ago

When I interact with this musical visual medium I am motivated to conceptualize how the English language can be better compartmentalized to synergize with the client-centric requirements of the microcosmic community focussed social entities that I administrate on social media while interfacing energetically about the inherent shortcomings of the current socio-economic and geo-political order in which we co-habitate. Now does this tedium flow in an effortless stream of coherent verbalisations capable of comprehension?

Soufriere , 5 years ago

When I bought "Mandatory Fun", put it in my car, and first heard this song, I busted a gut, laughing so hard I nearly crashed. All the corporate buzzwords! (except "pivot", apparently).

[Oct 06, 2019] DevOps created huge opportunities for a new generation of snake oil salesmen

Highly recommended!
Oct 06, 2019 | www.reddit.com

DragonDrew Jack of All Trades 772 points · 4 days ago

"I am resolute in my ability to elevate this collaborative, forward-thinking team into the revenue powerhouse that I believe it can be. We will transition into a DevOps team specialising in migrating our existing infrastructure entirely to code and go completely serverless!" - CFO that outsources IT level 2 OpenScore Sysadmin 527 points · 4 days ago

"We will utilize Artificial Intelligence, machine learning, Cloud technologies, python, data science and blockchain to achieve business value"

[Oct 06, 2019] This talk of going serverless or getting rid of traditional IT admins has gotten very old. In some ways it is true, but in many ways it is greatly exaggerated. There will always be a need for onsite technical support

Oct 06, 2019 | www.reddit.com

remi_in_2016_LUL NOC/SOC Analyst 109 points · 4 days ago

I agree with the sentiment. This talk of going serverless or getting rid of traditional IT admins has gotten very old. In some ways it is true, but in many ways it is greatly exaggerated. There will always be a need for onsite technical support. There are still users today that cannot plug in a mouse or keyboard into a USB port. Not to mention layer 1 issues; good luck getting your cloud provider to run a cable drop for you. Besides, who is going to manage your cloud instances? They don't just operate and manage themselves.

TLDR; most of us aren't going anywhere.

[Oct 05, 2019] Sick and tired of listening to these so-called architects and full stack developers who watch a bunch of videos on YouTube and Pluralsight and find articles online. They go around the workplace throwing around words like containers, devops, NoOps, azure, infrastructure as code, serverless, etc., but they don't understand half of the stuff

DevOps created a new generation of bullsheeters
Oct 05, 2019 | www.reddit.com

They say, No more IT or system or server admins needed very soon...

Sick and tired of listening to these so-called architects and full stack developers who watch a bunch of videos on YouTube and Pluralsight and find articles online. They go around the workplace throwing around words like containers, devops, NoOps, azure, infrastructure as code, serverless, etc., but they don't understand half of the stuff. I do some of the devops tasks in our company, so I understand what it takes to implement and manage these technologies. Every meeting is infested with these A-holes.

ntengineer 613 points · 4 days ago

Your best defense against these is to come up with non-sarcastic and quality questions to ask these people during the meeting, and watch them not have a clue how to answer them.

For example, a friend of mine worked at a smallish company, some manager really wanted to move more of their stuff into Azure including AD and Exchange environment. But they had common problems with their internet connection due to limited bandwidth and them not wanting to spend more. So during a meeting my friend asked a question something like this:

"You said on this slide that moving the AD environment and Exchange environment to Azure will save us money. Did you take into account that we will need to increase our internet speed by a factor of at least 4 in order to accommodate the increase in traffic going out to the Azure cloud? "

Of course, they hadn't. So the CEO asked my friend if he had the numbers. He had already done his homework: it was a significant increase in cost every month, and taking into account the cost of Azure plus the increase in bandwidth wiped away the manager's savings.

I know this won't work for everyone. Sometimes there are real savings in moving things to the cloud. But oftentimes there really aren't. Calling the uneducated people out on what they see as facts can be rewarding.

PuzzledSwitch 101 points · 4 days ago

my previous boss was that kind of a guy. he waited till other people were done throwing their weight around in a meeting and then calmly and politely dismantled them with facts.

no amount of corporate pressuring or bitching could ever stand up to that.

themastermatt 42 points · 4 days ago

I've been trying to do this. The problem is that everyone keeps talking all the way to the end of the meeting, leaving no room for rational facts.

PuzzledSwitch 35 points · 4 days ago

make a follow-up in email, then.

or, you might have to interject for a moment.

williamfny Jack of All Trades 26 points · 4 days ago

This is my approach. I don't yell or raise my voice, I just wait. Then I start asking questions that they generally cannot answer and slowly take them apart. I don't have to be loud to get my point across.

MaxHedrome 6 points · 4 days ago

Listen to this guy OP

This tactic is called "the box game". Just continuously ask them logical questions that can't be answered with their stupidity. (Box them in), let them be their own argument against themselves.

CrazyTachikoma 4 days ago

Most DevOps I've met are devs trying to bypass the sysadmins. This, and the Cloud fad, are burning serious amounts of money from companies managed by stupid people who get easily impressed by PR stunts and shiny conferences. Then when everything goes to shit, they call the infrastructure team to fix it...

[Oct 05, 2019] Summary of Eric Hoffer's, The True Believer Reason and Meaning

Oct 05, 2019 | reasonandmeaning.com

Summary of Eric Hoffer's, The True Believer September 4, 2017 Book Reviews - Politics , Politics - Tyranny John Messerly

Eric Hoffer in 1967, in the Oval Office, visiting President Lyndon Baines Johnson

" Hatred is the most accessible and comprehensive of all the unifying agents Mass movements can rise and spread without belief in a god, but never without a belief in a devil. " ~ Eric Hoffer, The True Believer: Thoughts on the Nature of Mass Movements

(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, October 19, 2017.)

Eric Hoffer (1898 – 1983) was an American moral and social philosopher who worked for more than twenty years as a longshoreman in San Francisco. The author of ten books, he was awarded the Presidential Medal of Freedom in 1983. His first book, The True Believer: Thoughts on the Nature of Mass Movements (1951), is a work in social psychology which discusses the psychological causes of fanaticism. It is widely considered a classic.

Overview

The first lines of Hoffer's book clearly state its purpose:

This book deals with some peculiarities common to all mass movements, be they religious movements, social revolutions or nationalist movements. It does not maintain that all movements are identical, but that they share certain essential characteristics which give them a family likeness.

All mass movements generate in their adherents a readiness to die and a proclivity for united action; all of them, irrespective of the doctrine they preach and the program they project, breed fanaticism, enthusiasm, fervent hope, hatred and intolerance; all of them are capable of releasing a powerful flow of activity in certain departments of life; all of them demand blind faith and single-hearted allegiance

The assumption that mass movements have many traits in common does not imply that all movements are equally beneficent or poisonous. The book passes no judgments, and expresses no preferences. It merely tries to explain (pp. xi-xiii)

Part 1 – The Appeal of Mass Movements

Hoffer says that mass movements begin when discontented, frustrated, powerless people lose faith in existing institutions and demand change. Feeling hopeless, such people participate in movements that allow them to become part of a larger collective. They become true believers in a mass movement that "appeals not to those intent on bolstering and advancing a cherished self, but to those who crave to be rid of an unwanted self because it can satisfy the passion for self-renunciation." (p. 12)

Put another way, Hoffer says: "Faith in a holy cause is to a considerable extent a substitute for the loss of faith in ourselves." (p. 14) Leaders inspire these movements, but the seeds of mass movements must already exist for the leaders to be successful. And while mass movements typically blend nationalist, political and religious ideas, they all compete for angry and/or marginalized people.

Part 2 – The Potential Converts

The destitute are not usually converts to mass movements; they are too busy trying to survive to become engaged. But what Hoffer calls the "new poor," those who previously had wealth or status but who believe they have now lost it, are potential converts. Such people are resentful and blame others for their problems.

Mass movements also attract the partially assimilated -- those who feel alienated from mainstream culture. Others include misfits, outcasts, adolescents, and sinners, as well as the ambitious, selfish, impotent and bored. What all converts share is the feeling that their lives are meaningless and worthless.

A rising mass movement attracts and holds a following not by its doctrine and promises but by the refuge it offers from the anxieties, barrenness, and meaninglessness of an individual existence. It cures the poignantly frustrated not by conferring on them an absolute truth or remedying the difficulties and abuses which made their lives miserable, but by freeing them from their ineffectual selves -- and it does this by enfolding and absorbing them into a closely knit and exultant corporate whole. (p. 41)

Hoffer emphasizes that creative people -- those who experience creative flow -- aren't usually attracted to mass movements. Creativity provides inner joy which acts as an antidote to the frustrations of external hardships. Creativity also relieves boredom, a major cause of mass movements:

There is perhaps no more reliable indicator of a society's ripeness for a mass movement than the prevalence of unrelieved boredom. In almost all the descriptions of the periods preceding the rise of mass movements there is reference to vast ennui; and in their earliest stages mass movements are more likely to find sympathizers and support among the bored than among the exploited and oppressed. To a deliberate fomenter of mass upheavals, the report that people are bored stiff should be at least as encouraging as that they are suffering from intolerable economic or political abuses. (pp. 51-52)

Part 3 – United Action and Self-Sacrifice

Mass movements demand of their followers a "total surrender of a distinct self." (p. 117) Thus a follower identifies as "a member of a certain tribe or family." (p. 62) Furthermore, mass movements denigrate and "loathe the present." (p. 74) By regarding the modern world as worthless, the movement inspires a battle against it.

What surprises one, when listening to the frustrated as they decry the present and all its works, is the enormous joy they derive from doing so. Such delight cannot come from the mere venting of a grievance. There must be something more -- and there is. By expatiating upon the incurable baseness and vileness of the times, the frustrated soften their feeling of failure and isolation... (p. 75)

Mass movements also promote faith over reason and serve as "fact-proof screens between the faithful and the realities of the world." (p. 79)

The effectiveness of a doctrine does not come from its meaning but from its certitude presented as the embodiment of the one and only truth. If a doctrine is not unintelligible, it has to be vague; and if neither unintelligible nor vague, it has to be unverifiable. One has to get to heaven or the distant future to determine the truth of an effective doctrine... simple words are made pregnant with meaning and made to look like symbols in a secret message. There is thus an illiterate air about the most literate true believer. (pp. 80-81)

So believers ignore truths that contradict their fervent beliefs, but this hides the fact that,

The fanatic is perpetually incomplete and insecure. He cannot generate self-assurance out of his individual sources but finds it only by clinging passionately to whatever support he happens to embrace. The passionate attachment is the essence of his blind devotion and religiosity, and he sees in it the sources of all virtue and strength... He sacrifices his life to prove his worth... The fanatic cannot be weaned away from his cause by an appeal to reason or his moral sense. He fears compromise and cannot be persuaded to qualify the certitude and righteousness of his holy cause. (p. 85)

Thus the doctrines of the mass movement must not be questioned -- they are regarded with certitude -- and they are spread through "persuasion, coercion, and proselytization." Persuasion works best on those already sympathetic to the doctrines, but it must be vague enough to allow "the frustrated to hear the echo of their own musings in impassioned double talk." (p. 106) Hoffer quotes Nazi propagandist Joseph Goebbels : "a sharp sword must always stand behind propaganda if it is to be really effective." (p. 106) The urge to proselytize comes not from a deeply held belief in the truth of doctrine but from an urge of the fanatic to "strengthen his own faith by converting others." (p. 110)

Moreover, mass movements need an object of hate which unifies believers, and "the ideal devil is a foreigner." (p. 93) Mass movements need a devil. But in reality, the "hatred of a true believer is actually a disguised self-loathing " and "the fanatic is perpetually incomplete and insecure." (p. 85) Through their fanatical action and personal sacrifice, the fanatic tries to give their life meaning.

Part 4 – Beginning and End

Hoffer states that three personality types typically lead mass movements: "men of words", "fanatics", and "practical men of action." Men of words try to "discredit the prevailing creeds" and create a "hunger for faith" which is then fed by "doctrines and slogans of the new faith." (p. 140) (In the USA think of the late William F. Buckley.) Slowly, followers emerge.

Then fanatics take over. (In the USA think of the Koch brothers, Murdoch, Limbaugh, O'Reilly, Hannity, Alex Jones, etc.) Fanatics don't find solace in literature, philosophy or art. Instead, they are characterized by viciousness, the urge to destroy, and the perpetual struggle for power. But after mass movements transform the social order, the insecurity of their followers is not ameliorated. At this point, the "practical men of action" take over and try to lead the new order by further controlling their followers. (Think Steve Bannon, Mitch McConnell, Steve Miller, etc.)

In the end mass movements that succeed often bring about a social order worse than the previous one. (This was one of Will Durant's findings in The Lessons of History . ) As Hoffer puts it near the end of his work: "All mass movements irrespective of the doctrine they preach and the program they project, breed fanaticism, enthusiasm, fervent hope, hatred, and intolerance." (p. 141)

__________________________________________________________________________

Quotes from Hoffer, Eric (2002). The True Believer: Thoughts on the Nature of Mass Movements . Harper Perennial Modern Classics. ISBN 978-0-060-50591-2 .

[Oct 02, 2019] raid5 - Can I recover a RAID 5 array if two drives have failed - Server Fault

Oct 02, 2019 | serverfault.com

Can I recover a RAID 5 array if two drives have failed?

I have a Dell 2600 with 6 drives configured in a RAID 5 on a PERC 4 controller. 2 drives failed at the same time, and according to what I know a RAID 5 is recoverable if 1 drive fails. I'm not sure if the fact I had six drives in the array might save my skin.

I bought 2 new drives and plugged them in but no rebuild happened as I expected. Can anyone shed some light?


Regardless of how many drives are in use, a RAID 5 array only allows for recovery in the event that just one disk at a time fails.

What 3molo says is a fair point but even so, not quite correct I think - if two disks in a RAID5 array fail at the exact same time then a hot spare won't help, because a hot spare replaces one of the failed disks and rebuilds the array without any intervention, and a rebuild isn't possible if more than one disk fails.

For now, I am sorry to say that your options for recovering this data are going to involve restoring a backup.

For the future you may want to consider one of the more robust forms of RAID (not sure what options a PERC4 supports) such as RAID 6 or a nested RAID array. Once you get above a certain number of disks in an array, you reach the point where the chance that more than one of them can fail before a replacement is installed and rebuilt becomes unacceptably high. -- Rob Moir

You can try to force one or both of the failed disks to be online from the BIOS interface of the controller. Then check that the data and the file system are consistent. -- Mircea Vutcovici

Direct answer is "No". Indirect -- "It depends". Mainly it depends on whether the disks are partially out of order, or completely. In case they're partially broken, you can give it a try -- I would copy (using a tool like ddrescue) both failed disks. Then I'd try to run the bunch of disks using Linux SoftRAID -- re-trying with the proper order of disks and stripe size in read-only mode and counting CRC mismatches. It's quite doable, I should say -- this text in Russian mentions a 12-disk RAID50's recovery using LSR, for example. -- poige

It is possible if the RAID had one spare drive, and one of your failed disks died before the second one. So, you just need to try to reconstruct the array virtually with third-party software. Found a small article about this process on this page: http://www.angeldatarecovery.com/raid5-data-recovery/

And, if you really need one of the dead drives, you can send it to a recovery shop. With those images you can reconstruct the RAID properly with good chances.
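
A rough sketch of the imaging step poige describes, assuming the two failed members show up as /dev/sdc and /dev/sdd and that /mnt/rescue sits on separate, healthy storage (all names here are placeholders):

# Image each failed member onto known-good storage first; ddrescue keeps a map
# file so an interrupted copy can be resumed and bad sectors retried later.
sudo ddrescue -d -r3 /dev/sdc /mnt/rescue/sdc.img /mnt/rescue/sdc.map
sudo ddrescue -d -r3 /dev/sdd /mnt/rescue/sdd.img /mnt/rescue/sdd.map

# Expose the images as loop devices and experiment only against the copies,
# e.g. trying different disk orders and stripe sizes with Linux software RAID.
sudo losetup -fP --show /mnt/rescue/sdc.img
sudo losetup -fP --show /mnt/rescue/sdd.img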

[Sep 29, 2019] IPTABLES makes corporate security scans go away

Sep 29, 2019 | www.reddit.com

r/ShittySysadmin • Posted by u/TBoneJeeper 1 month ago

IPTABLES makes corporate security scans go away

In a remote office location, corporate's network security scans can cause many false alarms and even take down services if they are tickled the wrong way. Dropping all traffic from the scanner's IP is a great time/resource-saver. No vulnerability reports, no follow-ups with corporate. No time for that.
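
The rule being described boils down to a one-liner along these lines (the scanner address below is a placeholder; given the subreddit, treat this as a cautionary example rather than advice):

# silently drop everything coming from the corporate scanner's IP
sudo iptables -I INPUT -s 192.0.2.10 -j DROP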

name_censored_ 9 points · 1 month ago

Seems a bit like a bandaid to me.

TBoneJeeper 3 points · 1 month ago

Good ideas, but sounds like a lot of work. Just dropping their packets had the desired effect and took 30 seconds.

name_censored_ 5 points · 1 month ago

No-one ever said being lazy was supposed to be easy.

spyingwind 2 points · 1 month ago

To be serious, closing of unused ports is good practice. Even better if used services can only be accessed from known sources, such as the DB only allowing access from the App server. A jump box, like a guacd server for remote access for things like RDP and SSH, would help reduce the threat surface. Or go further and set up Ansible/Chef/etc to allow only authorized changes.

gortonsfiJr 2 points · 1 month ago

Except, seriously, in my experience the security teams demand that you make big security holes for them in your boxes, so that they can hammer away at them looking for security holes.

asmiggs 1 point · 1 month ago

Security teams will always invoke the worst case scenario: 'what if your firewall is borked?', 'what if your jumpbox is hacked?', etc. You can usually give their scanner exclusive access to get past these things, but surprise surprise, the only worst case scenario I've faced is 'what if your security scanner goes rogue?'.

gortonsfiJr 1 point · 1 month ago

What if you lose control of your AD domain and some rogue agent gets domain admin rights? Also, we're going to need domain admin rights.

...Is this a test?

spyingwind 1 point · 1 month ago

What if an attacker was pretending to be a security company? No DA access! You can plug in anywhere, but if port security blocks your scanner, then I can't help. Also only 80 and 443 are allowed into our network.

TBoneJeeper 1 point · 1 month ago

Agree. But in rare cases, the ports/services are still used (maybe rarely), yet have "vulnerabilities" that are difficult to address. Some of these scanners hammer services so hard, trying every CGI/PHP/Java exploit known to man in rapid succession, and older hardware/services cannot keep up and get wedged. I remember every Tuesday night I would have to go restart services because this is when they were scanned. Either vendor support for this software version was no longer available, or it would simply require too much time to open vendor support cases to report the issues, argue with 1st level support, escalate, work with engineering, test fixes, etc.

rumplestripeskin 1 point · 1 month ago

Yes... and use Ansible to update iptables on each of your Linux VMs.

rumplestripeskin 1 point · 1 month ago

I know somebody who actually did this.

TBoneJeeper 2 points · 1 month ago

Maybe we worked together :-)

[Sep 23, 2019] How to recover deleted files with foremost on Linux - LinuxConfig.org

Sep 23, 2019 | linuxconfig.org
15 September 2019

In this article we will talk about foremost, a very useful open source forensic utility which is able to recover deleted files using the technique called data carving. The utility was originally developed by the United States Air Force Office of Special Investigations, and is able to recover several file types (support for specific file types can be added by the user, via the configuration file). The program can also work on partition images produced by dd or similar tools.

In this tutorial you will learn how to install foremost, how to use it to recover deleted files, and how to add support for an additional file type.

Foremost is a forensic data recovery program for Linux used to recover files using their headers, footers, and data structures through a process known as file carving.

Software Requirements and Conventions Used
Software Requirements and Linux Command Line Conventions

Category        Requirements, Conventions or Software Version Used
System          Distribution-independent
Software        The "foremost" program
Other           Familiarity with the command line interface
Conventions     # - requires given linux commands to be executed with root privileges, either directly as a root user or by use of the sudo command
                $ - requires given linux commands to be executed as a regular non-privileged user
Installation

Since foremost is already present in the repositories of all the major Linux distributions, installing it is a very easy task. All we have to do is to use our favorite distribution's package manager. On Debian and Ubuntu, we can use apt :

$ sudo apt install foremost

In recent versions of Fedora, we use the dnf package manager to install packages; dnf is the successor of yum. The name of the package is the same:

$ sudo dnf install foremost

If we are using ArchLinux, we can use pacman to install foremost . The program can be found in the distribution "community" repository:

$ sudo pacman -S foremost



Basic usage
WARNING
No matter which file recovery tool or process you are going to use to recover your files, before you begin it is recommended to perform a low level hard drive or partition backup, thus avoiding an accidental data overwrite! That way you can retry the recovery even after an unsuccessful attempt. Check the following dd command guide on how to perform a hard drive or partition low level backup.
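
As a sketch, assuming the partition to be scanned is /dev/sdb1 (as in the examples below) and that /mnt/backup lives on a different disk, such a low level backup could look like:

# block-level copy of the whole partition; never write the image back onto the same disk
$ sudo dd if=/dev/sdb1 of=/mnt/backup/sdb1.img bs=4M conv=sync,noerror status=progress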

The foremost utility tries to recover and reconstruct files on the basis of their headers, footers and data structures, without relying on filesystem metadata. This forensic technique is known as file carving. The program supports many file types out of the box, for example jpg, png, gif, bmp, avi, mpg, mov, wav, pdf, doc, xls, ppt, zip and rar.

The most basic way to use foremost is by providing a source to scan for deleted files (it can be either a partition or an image file, such as those generated with dd ). Let's see an example. Imagine we want to scan the /dev/sdb1 partition: before we begin, a very important thing to remember is to never store retrieved data on the same partition we are retrieving the data from, to avoid overwriting deleted files still present on the block device. The command we would run is:

$ sudo foremost -i /dev/sdb1

By default, the program creates a directory called output inside the directory we launched it from and uses it as destination. Inside this directory, a subdirectory for each supported file type we are attempting to retrieve is created. Each directory will hold the corresponding file type obtained from the data carving process:

output
├── audit.txt
├── avi
├── bmp
├── dll
├── doc
├── docx
├── exe
├── gif
├── htm
├── jar
├── jpg
├── mbd
├── mov
├── mp4
├── mpg
├── ole
├── pdf
├── png  
├── ppt
├── pptx
├── rar
├── rif
├── sdw
├── sx
├── sxc
├── sxi
├── sxw
├── vis
├── wav
├── wmv
├── xls
├── xlsx
└── zip

When foremost completes its job, empty directories are removed. Only the ones containing files are left on the filesystem: this lets us immediately know what types of files were successfully retrieved. By default the program tries to retrieve all the supported file types; to restrict our search, we can, however, use the -t option and provide a list of the file types we want to retrieve, separated by a comma. In the example below, we restrict the search only to gif and pdf files:

$ sudo foremost -t gif,pdf -i /dev/sdb1

https://www.youtube.com/embed/58S2wlsJNvo

In this video we will test the forensic data recovery program Foremost to recover a single png file from /dev/sdb1 partition formatted with the EXT4 filesystem.



Specifying an alternative destination

As we already said, if a destination is not explicitly declared, foremost creates an output directory inside our cwd . What if we want to specify an alternative path? All we have to do is to use the -o option and provide said path as argument. If the specified directory doesn't exist, it is created; if it exists but is not empty, the program complains:

ERROR: /home/egdoc/data is not empty
        Please specify another directory or run with -T.

To solve the problem, as suggested by the program itself, we can either use another directory or re-launch the command with the -T option. If we use the -T option, the output directory specified with the -o option is timestamped. This makes it possible to run the program multiple times with the same destination. In our case the directory that would be used to store the retrieved files would be:

/home/egdoc/data_Thu_Sep_12_16_32_38_2019
The configuration file

The foremost configuration file can be used to specify file formats not natively supported by the program. Inside the file we can find several commented examples showing the syntax that should be used to accomplish the task. Here is an example involving the png type (the lines are commented since the file type is supported by default):

# PNG   (used in web pages)
#       (NOTE THIS FORMAT HAS A BUILTIN EXTRACTION FUNCTION)
#       png     y       200000  \x50\x4e\x47?   \xff\xfc\xfd\xfe

The information to provide in order to add support for a file type is, from left to right, separated by a tab character: the file extension ( png in this case), whether the header and footer are case sensitive ( y ), the maximum file size in Bytes ( 200000 ), the header ( \x50\x4e\x47? ) and the footer ( \xff\xfc\xfd\xfe ). Only the latter is optional and can be omitted.

If the path of the configuration file is not explicitly provided with the -c option, a file named foremost.conf is searched for and used, if present, in the current working directory. If it is not found, the default configuration file /etc/foremost.conf is used instead.
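
So, to point the program at a custom configuration file explicitly, an invocation could look like this (the paths are just placeholders):

$ sudo foremost -c /home/egdoc/myformats.conf -i /dev/sdb1 -o /home/egdoc/Documents/output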

Adding the support for a file type

By reading the examples provided in the configuration file, we can easily add support for a new file type. In this example we will add support for flac audio files. FLAC (Free Lossless Audio Codec) is a non-proprietary lossless audio format which is able to provide compressed audio without quality loss. First of all, we know that the header of this file type in hexadecimal form is 66 4C 61 43 00 00 00 22 ( fLaC in ASCII), and we can verify it by using a program like hexdump on a flac file:

$ hexdump -C blind_guardian_war_of_wrath.flac | head
00000000  66 4c 61 43 00 00 00 22  12 00 12 00 00 00 0e 00  |fLaC..."........|
00000010  36 f2 0a c4 42 f0 00 4d  04 60 6d 0b 64 36 d7 bd  |6...B..M.`m.d6..|
00000020  3e 4c 0d 8b c1 46 b6 fe  cd 42 04 00 03 db 20 00  |>L...F...B.... .|
00000030  00 00 72 65 66 65 72 65  6e 63 65 20 6c 69 62 46  |..reference libF|
00000040  4c 41 43 20 31 2e 33 2e  31 20 32 30 31 34 31 31  |LAC 1.3.1 201411|
00000050  32 35 21 00 00 00 12 00  00 00 54 49 54 4c 45 3d  |25!.......TITLE=|
00000060  57 61 72 20 6f 66 20 57  72 61 74 68 11 00 00 00  |War of Wrath....|
00000070  52 45 4c 45 41 53 45 43  4f 55 4e 54 52 59 3d 44  |RELEASECOUNTRY=D|
00000080  45 0c 00 00 00 54 4f 54  41 4c 44 49 53 43 53 3d  |E....TOTALDISCS=|
00000090  32 0c 00 00 00 4c 41 42  45 4c 3d 56 69 72 67 69  |2....LABEL=Virgi|

As you can see the file signature is indeed what we expected. Here we will assume a maximum file size of 30 MB, or 30000000 Bytes. Let's add the entry to the file:

flac    y       30000000    \x66\x4c\x61\x43\x00\x00\x00\x22

The footer signature is optional so here we didn't provide it. The program should now be able to recover deleted flac files. Let's verify it. To test that everything works as expected I previously placed, and then removed, a flac file from the /dev/sdb1 partition, and then proceeded to run the command:

$ sudo foremost -i /dev/sdb1 -o $HOME/Documents/output

As expected, the program was able to retrieve the deleted flac file (it was the only file on the device, on purpose), although it renamed it with a random string. The original filename cannot be retrieved because, as we know, file metadata is contained in the filesystem, and not in the file itself:

/home/egdoc/Documents
└── output
    ├── audit.txt
    └── flac
        └── 00020482.flac



The audit.txt file contains information about the actions performed by the program, in this case:

Foremost version 1.5.7 by Jesse Kornblum, Kris
Kendall, and Nick Mikus
Audit File

Foremost started at Thu Sep 12 23:47:04 2019
Invocation: foremost -i /dev/sdb1 -o /home/egdoc/Documents/output
Output directory: /home/egdoc/Documents/output
Configuration file: /etc/foremost.conf
------------------------------------------------------------------
File: /dev/sdb1
Start: Thu Sep 12 23:47:04 2019
Length: 200 MB (209715200 bytes)

Num      Name (bs=512)         Size      File Offset     Comment

0:      00020482.flac         28 MB        10486784
Finish: Thu Sep 12 23:47:04 2019

1 FILES EXTRACTED

flac:= 1
------------------------------------------------------------------

Foremost finished at Thu Sep 12 23:47:04 2019
Conclusion

In this article we learned how to use foremost, a forensic program able to retrieve deleted files of various types. We learned that the program works by using a technique called data carving, and relies on file signatures to achieve its goal. We saw an example of the program's usage and we also learned how to add support for a specific file type using the syntax illustrated in the configuration file. For more information about the program usage, please consult its manual page.

[Sep 22, 2019] Easing into automation with Ansible Enable Sysadmin

Sep 19, 2019 | www.redhat.com
It's easier than you think to get started automating your tasks with Ansible. This gentle introduction gives you the basics you need to begin streamlining your administrative life.

Posted by Jörg Kastning (Red Hat Accelerator)

"DippingToes 02.jpg" by Zhengan is licensed under CC BY-SA 4.0

At the end of 2015 and the beginning of 2016, we decided to use Red Hat Enterprise Linux (RHEL) as our third operating system, next to Solaris and Microsoft Windows. I was part of the team that tested RHEL, among other distributions, and would be involved in the upcoming operation of the new OS. Thinking about a fast-growing number of Red Hat Enterprise Linux systems, it came to my mind that I needed a tool to automate things, because without automation the number of hosts I can manage is limited.

I had experience with Puppet back in the day but did not like that tool because of its complexity. We had more modules and classes than hosts to manage back then. So, I took a look at Ansible version 2.1.1.0 in July 2016.

What I liked about Ansible and still do is that it is push-based. On a target node, only Python and SSH access are needed to control the node and push configuration settings to it. No agent needs to be removed if you decide that Ansible isn't the right tool for you. The YAML syntax is easy to read and write, and the option to use playbooks as well as ad hoc commands makes Ansible a flexible solution that helps save time in our day-to-day business. So, it was at the end of 2016 when we decided to evaluate Ansible in our environment.

First steps

As a rule of thumb, you should begin automating things that you have to do on a daily or at least a regular basis. That way, automation saves time for more interesting or more important things. I followed this rule by using Ansible for the following tasks:

  1. Set a baseline configuration for newly provisioned hosts (set DNS, time, network, sshd, etc.)
  2. Set up patch management to install Red Hat Security Advisories (RHSAs) .
  3. Test how useful the ad hoc commands are, and where we could benefit from them.
Baseline Ansible configuration

For us, baseline configuration is the configuration every newly provisioned host gets. This practice makes sure the host fits into our environment and is able to communicate on the network. Because the same configuration steps have to be made for each new host, this is an awesome step to get started with automation.

The following are the tasks I started with:

(Some of these steps are already published here on Enable Sysadmin, as you can see, and others might follow soon.)

All of these tasks have in common that they are small and easy to start with, letting you gather experience with using different kinds of Ansible modules, roles, variables, and so on. You can run each of these roles and tasks standalone, or tie them all together in one playbook that sets the baseline for your newly provisioned system.
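
As an illustration only (the host group, time zone, and package names below are assumptions, not the author's actual roles), a stripped-down baseline playbook could look something like this:

---
- name: Baseline configuration for newly provisioned hosts
  hosts: new_hosts
  become: true
  tasks:
    - name: Set the time zone
      timezone:
        name: Europe/Berlin

    - name: Install chrony and the OpenSSH server
      yum:
        name:
          - chrony
          - openssh-server
        state: present

    - name: Enable and start chronyd and sshd
      service:
        name: "{{ item }}"
        state: started
        enabled: true
      loop:
        - chronyd
        - sshd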

Red Hat Enterprise Linux Server patch management with Ansible

As I explained on my GitHub page for ansible-role-rhel-patchmanagement , in our environment, we deploy Red Hat Enterprise Linux Servers for our operating departments to run their applications.


This role was written to provide a mechanism to install Red Hat Security Advisories on target nodes once a month. In our special use case, only RHSAs are installed to ensure a minimum security limit. The installation is enforced once a month. The advisories are summarized in "Patch-Sets." This way, it is ensured that the same advisories are used for all stages during a patch cycle.

The Ansible Inventory nodes are summarized in one of the following groups, each of which defines when a node is scheduled for patch installation:
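
A hedged sketch of what that grouping might look like in an INI inventory (the group names other than phase3 and phase4, and all host names, are assumptions):

# /etc/ansible/hosts (excerpt)

# test systems, patched first in the cycle
[phase1]
test01.example.com

# QA / acceptance systems
[phase2]
qa01.example.com

# production, first half
[phase3]
prod01.example.com

# production, second half
[phase4]
prod02.example.com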

In case packages were updated on target nodes, the hosts will reboot afterward.

Because the production systems are most important, they are divided into two separate groups (phase3 and phase4) to decrease the risk of failure and service downtime due to advisory installation.

You can find more about this role in my GitHub repo: https://github.com/Tronde/ansible-role-rhel-patchmanagement .

Updating and patch management are tasks every sysadmin has to deal with. With these roles, Ansible helped me get this task done every month, and I don't have to care about it anymore. Only when a system is not reachable, or yum has a problem, do I get an email report telling me to take a look. But, I got lucky, and have not yet received any mail report for the last couple of months, now. (Yes, of course, the system is able to send mail.)

Ad hoc commands

The possibility to run ad hoc commands for quick (and dirty) tasks was one of the reasons I chose Ansible. You can use these commands to gather information when you need them or to get things done without the need to write a playbook first.

I used ad hoc commands in cron jobs until I found the time to write playbooks for them. But, with time comes practice, and today I try to use playbooks and roles for every task that has to run more than once.

Here are small examples of ad hoc commands that provide quick information about your nodes.

Query package version
ansible all -m command -a'/usr/bin/rpm -qi <PACKAGE NAME>' | grep 'SUCCESS\|Version'
Query OS-Release
ansible all -m command -a'/usr/bin/cat /etc/os-release'
Query running kernel version
ansible all -m command -a'/usr/bin/uname -r'
Query DNS servers in use by nodes
ansible all -m command -a'/usr/bin/cat /etc/resolv.conf' | grep 'SUCCESS\|nameserver'

Hopefully, these samples give you an idea of what ad hoc commands can be used for.

Summary

It's not hard to start with automation. Just look for small and easy tasks you do every single day, or even more than once a day, and let Ansible do these tasks for you.

Eventually, you will be able to solve more complex tasks as your automation skills grow. But keep things as simple as possible. You gain nothing when you have to troubleshoot a playbook for three days when it solves a task you could have done in an hour.

[Want to learn more about Ansible? Check out these free e-books .]

[Sep 18, 2019] the myopic drive to profitability and naivety to unintended consequences are pushing these tech out into the world before they are ready.

Sep 18, 2019 | www.moonofalabama.org

A.L. , Sep 18 2019 19:56 utc | 31

@30 David G

perhaps, just like proponents of AI and self-driving cars. They just love the technology, and are so financially and emotionally invested in it that they can't see the forest for the trees.

I like technology, I studied engineering. But the myopic drive to profitability and naivety about unintended consequences are pushing this tech out into the world before it is ready.

engineering used to be a discipline with ethics and responsibilities... But now anybody who can write two lines of code can call themselves a software engineer....

[Sep 16, 2019] 10 Ansible modules you need to know Opensource.com

Sep 16, 2019 | opensource.com

10 Ansible modules you need to know

See examples and learn the most important modules for automating everyday tasks with Ansible. 11 Sep 2019, DirectedSoul (Red Hat)

Ansible is an open source IT configuration management and automation platform. It uses human-readable YAML templates so users can program repetitive tasks to happen automatically without having to learn an advanced programming language.

Ansible is agentless, which means the nodes it manages do not require any software to be installed on them. This eliminates potential security vulnerabilities and makes overall management smoother.

Ansible modules are standalone scripts that can be used inside an Ansible playbook. A playbook consists of a play, and a play consists of tasks. These concepts may seem confusing if you're new to Ansible, but as you begin writing and working more with playbooks, they will become familiar.


There are some modules that are frequently used in automating everyday tasks; those are the ones that we will cover in this article.

Ansible has three main files that you need to consider:

Module 1: Package management

There is a module for most popular package managers, such as DNF and APT, to enable you to install any package on a system. Functionality depends entirely on the package manager, but usually these modules can install, upgrade, downgrade, remove, and list packages. The names of relevant modules are easy to guess. For example, the DNF module is dnf_module , the old YUM module (required for Python 2 compatibility) is yum_module , while the APT module is apt_module , the Slackpkg module is slackpkg_module , and so on.

Example 1:

- name: install the latest version of Apache and MariaDB
  dnf:
    name:
      - httpd
      - mariadb-server
    state: latest

This installs the Apache web server and the MariaDB SQL database.

Example 2:

- name: Install a list of packages
  yum:
    name:
      - nginx
      - postgresql
      - postgresql-server
    state: present

This installs the list of packages and helps download multiple packages.

Module 2: Service

After installing a package, you need a module to start it. The service module enables you to start, stop, and reload installed packages; this comes in pretty handy.

Example 1:

- name: Start service foo, based on running process /usr/bin/foo
  service:
    name: foo
    pattern: /usr/bin/foo
    state: started

This starts the service foo .

Example 2:

- name: Restart network service for interface eth0
  service:
    name: network
    state: restarted
    args: eth0

This restarts the network service of the interface eth0 .

Module 3: Copy

The copy module copies a file from the local or remote machine to a location on the remote machine.

Example 1:

- name: Copy a new "ntp.conf" file into place, backing up the original if it differs from the copied version
  copy:
    src: /mine/ntp.conf
    dest: /etc/ntp.conf
    owner: root
    group: root
    mode: '0644'
    backup: yes

Example 2:

- name: Copy file with owner and permission, using symbolic representation
  copy:
    src: /srv/myfiles/foo.conf
    dest: /etc/foo.conf
    owner: foo
    group: foo
    mode: u=rw,g=r,o=r

Module 4: Debug

The debug module prints statements during execution and can be useful for debugging variables or expressions without having to halt the playbook.

Example 1:

- name: Display all variables/facts known for a host
  debug:
    var: hostvars[inventory_hostname]
    verbosity: 4

This displays all the variable information for a host that is defined in the inventory file.

Example 2:

- name: Write some content in a file /tmp/foo.txt
  copy:
    dest: /tmp/foo.txt
    content: |
      Good Morning!
      Awesome sunshine today.
  register: display_file_content

- name: Debug display_file_content
  debug:
    var: display_file_content
    verbosity: 2

This registers the content of the copy module output and displays it only when you specify verbosity as 2. For example:

ansible-playbook demo.yaml -vv
Module 5: File

The file module manages the file and its properties.

Example 1:

- name: Change file ownership, group and permissions
  file:
    path: /etc/foo.conf
    owner: foo
    group: foo
    mode: '0644'

This creates a file named foo.conf and sets the permission to 0644 .

Example 2:

- name: Create a directory if it does not exist
  file:
    path: /etc/some_directory
    state: directory
    mode: '0755'

This creates a directory named some_directory and sets the permission to 0755 .

Module 6: Lineinfile

The lineinfile module manages lines in a text file.

Example 1:

- name: Ensure SELinux is set to enforcing mode
  lineinfile:
    path: /etc/selinux/config
    regexp: '^SELINUX='
    line: SELINUX=enforcing

This sets the value of SELINUX=enforcing .

Example 2:

- name: Add a line to a file if the file does not exist, without passing regexp
  lineinfile:
    path: /etc/resolv.conf
    line: 192.168.1.99 foo.lab.net foo
    create: yes

This adds an entry for the IP and hostname in the resolv.conf file.

Module 7: Git

The git module manages git checkouts of repositories to deploy files or software.

Example 1:

# Example: create a git archive from a repo
- git:
    repo: https://github.com/ansible/ansible-examples.git
    dest: /src/ansible-examples
    archive: /tmp/ansible-examples.zip

Example 2:

- git:
    repo: https://github.com/ansible/ansible-examples.git
    dest: /src/ansible-examples
    separate_git_dir: /src/ansible-examples.git

This clones a repo with a separate Git directory.

Module 8: Cli_command

The cli_command module , first available in Ansible 2.7, provides a platform-agnostic way of pushing text-based configurations to network devices over the network_cli connection plugin.

Example 1:

- name: commit with comment
  cli_config:
    config: set system host-name foo
    commit_comment: this is a test

This sets the hostname for a switch and exits with a commit message.

Example 2:

- name: configurable backup path
  cli_config:
    config: "{{ lookup('template', 'basic/config.j2') }}"
    backup: yes
    backup_options:
      filename: backup.cfg
      dir_path: /home/user

This backs up a config to a different destination file.

Module 9: Archive

The archive module creates a compressed archive of one or more files. By default, it assumes the compression source exists on the target.

Example 1:

- name: Compress directory /path/to/foo/ into /path/to/foo.tgz
  archive:
    path: /path/to/foo
    dest: /path/to/foo.tgz

Example 2:

- name: Create a bz2 archive of multiple files, rooted at /path
  archive:
    path:
      - /path/to/foo
      - /path/wong/foo
    dest: /path/file.tar.bz2
    format: bz2

Module 10: Command

One of the most basic but useful modules, the command module takes the command name followed by a list of space-delimited arguments.

Example 1:

- name: return motd to registered var
  command: cat /etc/motd
  register: mymotd

Example 2:

- name: Change the working directory to somedir/ and run the command as db_owner if /path/to/database does not exist.
  command: /usr/bin/make_database.sh db_user db_name
  become: yes
  become_user: db_owner
  args:
    chdir: somedir/
    creates: /path/to/database

Conclusion

There are tons of modules available in Ansible, but these ten are the most basic and powerful ones you can use for an automation job. As your requirements change, you can learn about other useful modules by entering ansible-doc <module-name> on the command line or refer to the official documentation .

[Sep 16, 2019] Artistic Style - Index

Sep 16, 2019 | astyle.sourceforge.net

Artistic Style 3.1 A Free, Fast, and Small Automatic Formatter
for C, C++, C++/CLI, Objective‑C, C#, and Java Source Code

Project Page: http://astyle.sourceforge.net/
SourceForge: http://sourceforge.net/projects/astyle/

Artistic Style is a source code indenter, formatter, and beautifier for the C, C++, C++/CLI, Objective‑C, C# and Java programming languages.

When indenting source code, we as programmers have a tendency to use both spaces and tab characters to create the wanted indentation. Moreover, some editors by default insert spaces instead of tabs when pressing the tab key. Other editors (Emacs for example) have the ability to "pretty up" lines by automatically setting up the white space before the code on the line, possibly inserting spaces in code that up to now used only tabs for indentation.

The NUMBER of spaces for each tab character in the source code can change between editors (unless the user sets up the number to his liking...). One of the standard problems programmers face when moving from one editor to another is that code containing both spaces and tabs, which was perfectly indented, suddenly becomes a mess to look at. Even if you as a programmer take care to ONLY use spaces or tabs, looking at other people's source code can still be problematic.

To address this problem, Artistic Style was created – a filter written in C++ that automatically re-indents and re-formats C / C++ / Objective‑C / C++/CLI / C# / Java source files. It can be used from a command line, or it can be incorporated as a library in another program.
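
For a quick taste of the command line, an invocation along these lines (the brace style and file pattern are just examples) re-indents a tree of C++ sources in place:

# 4-space indentation, Allman braces, no *.orig backup copies, recurse into src/
astyle --style=allman --indent=spaces=4 --suffix=none --recursive "src/*.cpp"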

[Sep 16, 2019] Usage -- PrettyPrinter 0.18.0 documentation

Sep 16, 2019 | prettyprinter.readthedocs.io

Usage

Install the package with pip :

pip install prettyprinter

Then, instead of

from pprint import pprint

do

from prettyprinter import cpprint

for colored output. For colorless output, remove the c prefix from the function name:

from prettyprinter import pprint
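
A minimal usage sketch, with a throwaway data structure:

from prettyprinter import cpprint

data = {"name": "example", "tags": ["colored", "indented"], "nested": {"answer": 42}}
cpprint(data)  # pretty-prints the structure with ANSI colors in the terminal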

[Sep 16, 2019] JavaScript code prettifier

Sep 16, 2019 | github.com

Announcement: Action required rawgit.com is going away .

An embeddable script that makes source-code snippets in HTML prettier.

[Sep 16, 2019] Pretty-print for shell script

Sep 16, 2019 | stackoverflow.com

Benoit ,Oct 21, 2010 at 13:19

I'm looking for something similiar to indent but for (bash) scripts. Console only, no colorizing, etc.

Do you know of one ?

Jamie ,Sep 11, 2012 at 3:00

Vim can indent bash scripts. But not reformat them before indenting.
Backup your bash script, open it with vim, type gg=GZZ and indent will be corrected. (Note for the impatient: this overwrites the file, so be sure to do that backup!)

Though, some bugs with << (expecting EOF as first character on a line) e.g.

EDIT: ZZ not ZQ

Daniel Martí ,Apr 8, 2018 at 13:52

A bit late to the party, but it looks like shfmt could do the trick for you.
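
shfmt is a standalone formatter for POSIX/bash scripts; a typical in-place run (the flag values are just an example) looks like:

# rewrite script.sh in place, 4-space indentation, indented case branches
shfmt -i 4 -ci -w script.sh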

Brian Chrisman ,Sep 9 at 7:47

In bash I do this:
reindent() {
    source <(echo "Zibri () {"; cat "$1"; echo "}")
    declare -f Zibri | head --lines=-1 | tail --lines=+3 | sed -e "s/^\s\s\s\s//"
}

this eliminates comments and reindents the script "bash way".

If you have HEREDOCS in your script, they got ruined by the sed in the previous function.

So use:

reindent() {
    source <(echo "Zibri () {"; cat "$1"; echo "}")
    declare -f Zibri | head --lines=-1 | tail --lines=+3
}

But then your whole script will have a 4-space indentation.

Or you can do:

reindent () 
{ 
    rstr=$(mktemp -u "XXXXXXXXXX");
    source <(echo "Zibri () {";cat "$1"|sed -e "s/^\s\s\s\s/$rstr/"; echo "}");
    echo '#!/bin/bash';
    declare -f Zibri | head --lines=-1 | tail --lines=+3 | sed -e "s/^\s\s\s\s//;s/$rstr/    /"
}

which takes care also of heredocs.


Found this http://www.linux-kheops.com/doc/perl/perl-aubert/fmt.script .

Very nice, only one thing i took out is the [...]->test substitution.

[Sep 16, 2019] A command-line HTML pretty-printer Making messy HTML readable - Stack Overflow

Notable quotes:
"... Have a look at the HTML Tidy Project: http://www.html-tidy.org/ ..."
Sep 16, 2019 | stackoverflow.com

nisetama ,Aug 12 at 10:33

I'm looking for recommendations for HTML pretty printers which fulfill the following requirements:
  • Takes HTML as input, and then output a nicely formatted/correctly indented but "graphically equivalent" version of the given input HTML.
  • Must support command-line operation.
  • Must be open-source and run under Linux.


Have a look at the HTML Tidy Project: http://www.html-tidy.org/

The granddaddy of HTML tools, with support for modern standards.

There used to be a fork called tidy-html5 which since became the official thing. Here is its GitHub repository .

Tidy is a console application for Mac OS X, Linux, Windows, UNIX, and more. It corrects and cleans up HTML and XML documents by fixing markup errors and upgrading legacy code to modern standards.

For your needs, here is the command line to call Tidy:

[Sep 14, 2019] The Man Who Could Speak Japanese

This impostor definitely demonstrated programming abilities, although at the time there was no such term :-)
Notable quotes:
"... "We wrote it down. ..."
"... The next phrase was: ..."
"... " ' Booki fai kiz soy ?' " said Whitey. "It means 'Do you surrender?' " ..."
"... " ' Mizi pok loi ooni rak tong zin ?' 'Where are your comrades?' " ..."
"... "Tong what ?" rasped the colonel. ..."
"... "Tong zin , sir," our instructor replied, rolling chalk between his palms. He arched his eyebrows, as though inviting another question. There was one. The adjutant asked, "What's that gizmo on the end?" ..."
"... Of course, it might have been a Japanese newspaper. Whitey's claim to be a linguist was the last of his status symbols, and he clung to it desperately. Looking back, I think his improvisations on the Morton fantail must have been one of the most heroic achievements in the history of confidence men -- which, as you may have gathered by now, was Whitey's true profession. Toward the end of our tour of duty on the 'Canal he was totally discredited with us and transferred at his own request to the 81-millimeter platoon, where our disregard for him was no stigma, since the 81 millimeter musclemen regarded us as a bunch of eight balls anyway. Yet even then, even after we had become completely disillusioned with him, he remained a figure of wonder among us. We could scarcely believe that an impostor could be clever enough actually to invent a language -- phonics, calligraphy, and all. It had looked like Japanese and sounded like Japanese, and during his seventeen days of lecturing on that ship Whitey had carried it all in his head, remembering every variation, every subtlety, every syntactic construction. ..."
"... https://www.americanheritage.com/man-who-could-speak-japanese ..."
Sep 14, 2019 | www.nakedcapitalism.com

Wukchumni , September 13, 2019 at 4:29 pm

Re: Fake list of grunge slang:

a fabulous tale of the South Pacific by William Manchester

The Man Who Could Speak Japanese

"We wrote it down.

The next phrase was:

" ' Booki fai kiz soy ?' " said Whitey. "It means 'Do you surrender?' "

Then:

" ' Mizi pok loi ooni rak tong zin ?' 'Where are your comrades?' "

"Tong what ?" rasped the colonel.

"Tong zin , sir," our instructor replied, rolling chalk between his palms. He arched his eyebrows, as though inviting another question. There was one. The adjutant asked, "What's that gizmo on the end?"

Of course, it might have been a Japanese newspaper. Whitey's claim to be a linguist was the last of his status symbols, and he clung to it desperately. Looking back, I think his improvisations on the Morton fantail must have been one of the most heroic achievements in the history of confidence men -- which, as you may have gathered by now, was Whitey's true profession. Toward the end of our tour of duty on the 'Canal he was totally discredited with us and transferred at his own request to the 81-millimeter platoon, where our disregard for him was no stigma, since the 81 millimeter musclemen regarded us as a bunch of eight balls anyway. Yet even then, even after we had become completely disillusioned with him, he remained a figure of wonder among us. We could scarcely believe that an impostor could be clever enough actually to invent a language -- phonics, calligraphy, and all. It had looked like Japanese and sounded like Japanese, and during his seventeen days of lecturing on that ship Whitey had carried it all in his head, remembering every variation, every subtlety, every syntactic construction.

https://www.americanheritage.com/man-who-could-speak-japanese

[Sep 13, 2019] How to use Ansible Galaxy Enable Sysadmin

Sep 13, 2019 | www.redhat.com

Ansible is a multiplier, a tool that automates and scales infrastructure of every size. It is considered to be a configuration management, orchestration, and deployment tool. It is easy to get up and running with Ansible. Even a new sysadmin could start automating with Ansible in a matter of a few hours.

Ansible automates using the SSH protocol. The control machine uses an SSH connection to communicate with its target hosts, which are typically Linux hosts. If you're a Windows sysadmin, you can still use Ansible to automate your Windows environments using WinRM as opposed to SSH. Presently, though, the control machine still needs to run Linux.
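
As a minimal sketch (the group name and hostnames below are made up for illustration), a tiny inventory plus an ad-hoc command is enough to confirm that the control machine can reach its targets over SSH:

# /etc/ansible/hosts -- a minimal inventory; the hosts here are hypothetical
[webservers]
web01.example.com
web02.example.com

# verify SSH connectivity and Python availability on every host in the group
$ ansible webservers -m ping

# run an arbitrary ad-hoc command on the same hosts
$ ansible webservers -m command -a "uptime"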

More about automation

As a new sysadmin, you might start with just a few playbooks. But as your automation skills continue to grow, and you become more familiar with Ansible, you will learn best practices and further realize that as your playbooks increase, using Ansible Galaxy becomes invaluable.

In this article, you will learn a bit about Ansible Galaxy, its structure, and how and when you can put it to use.

What Ansible does

Common sysadmin tasks that can be performed with Ansible include patching, updating systems, user and group management, and provisioning. Ansible presently has a huge footprint in IT automation -- if not the largest -- and is considered to be the most popular and widely used configuration management, orchestration, and deployment tool available today.

One of the main reasons for its popularity is its simplicity. It's simple, powerful, and agentless, which means a new or entry-level sysadmin can hit the ground automating in a matter of hours. Ansible allows you to scale quickly, efficiently, and cross-functionally.

Create roles with Ansible Galaxy

Ansible Galaxy is essentially a large public repository of Ansible roles. Roles ship with READMEs detailing the role's use and available variables. Galaxy contains a large number of roles that are constantly evolving and increasing.

Galaxy can use git to add other role sources, such as GitHub. You can initialize a new galaxy role using ansible-galaxy init , or you can install a role directly from the Ansible Galaxy role store by executing the command ansible-galaxy install <name of role> .

Here are some helpful ansible-galaxy commands you might use from time to time:
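
For instance, a sketch of a few common invocations (my_custom_role is a placeholder name; geerlingguy.nginx is a well-known public role used here as an example):

# search Galaxy for roles matching a keyword
$ ansible-galaxy search nginx

# show details about a specific role
$ ansible-galaxy info geerlingguy.nginx

# install a role from Galaxy, then list and remove installed roles
$ ansible-galaxy install geerlingguy.nginx
$ ansible-galaxy list
$ ansible-galaxy remove geerlingguy.nginx

# create a skeleton for a brand new role
$ ansible-galaxy init my_custom_role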

To create an Ansible role using Ansible Galaxy, we need to use the ansible-galaxy command and its templates. Roles must be downloaded before they can be used in playbooks, and they are placed into the default directory /etc/ansible/roles . You can find role examples at https://galaxy.ansible.com/geerlingguy .

Create collections

While Ansible Galaxy has been the go-to tool for constructing and managing roles, with new iterations of Ansible you are bound to see changes or additions. In Ansible version 2.8 you get the new feature of collections .

What are collections and why are they worth mentioning? As the Ansible documentation states:

Collections are a distribution format for Ansible content. They can be used to package and distribute playbooks, roles, modules, and plugins.

Collections follow a simple structure:

collection/
├── docs/
├── galaxy.yml
├── plugins/
│ ├──  modules/
│ │ └──  module1.py
│ ├──  inventory/
│ └──  .../
├── README.md
├── roles/
│ ├──  role1/
│ ├──  role2/
│ └──  .../
├── playbooks/
│ ├──  files/
│ ├──  vars/
│ ├──  templates/
│ └──  tasks/
└──  tests/
Creating a collection skeleton.

The ansible-galaxy collection command implements the following subcommands. Notably, a few of them are the same as those used with ansible-galaxy :
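
A rough sketch of those subcommands (the namespace and collection name below are placeholders, and the exact subcommand set depends on your Ansible version):

# create a new collection skeleton
$ ansible-galaxy collection init my_namespace.my_collection

# build a distributable tarball from the collection directory
$ ansible-galaxy collection build

# install a collection (and its dependencies) from Galaxy
$ ansible-galaxy collection install my_namespace.my_collection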

In order to determine what can go into a collection, a great resource can be found here .

Conclusion

Establish yourself as a stellar sysadmin with an automation solution that is simple, powerful, agentless, and scales your infrastructure quickly and efficiently. Using Ansible Galaxy to create roles is superb thinking, and an ideal way to be organized and thoughtful in managing your ever-growing playbooks.

The only way to improve your automation skills is to work with a dedicated tool and prove the value and positive impact of automation on your infrastructure.

[Sep 13, 2019] Dell EMC OpenManage Ansible Modules for iDRAC

Sep 13, 2019 | github.com

1. Introduction

Dell EMC OpenManage Ansible Modules provide customers the ability to automate the out-of-band configuration management, deployment and updates of Dell EMC PowerEdge Servers using Ansible, by leveraging the management automation built into the iDRAC with Lifecycle Controller. iDRAC provides both REST APIs based on the DMTF Redfish industry standard and WS-Management (WS-MAN) for management automation of PowerEdge Servers.

With OpenManage Ansible modules, you can do:

1.1 How OpenManage Ansible Modules work?

OpenManage Ansible modules extensively use the Server Configuration Profile (SCP) for most of the configuration management, deployment and update of PowerEdge Servers. Lifecycle Controller 2 version 1.4 and later adds support for SCP. A SCP contains all BIOS, iDRAC, Lifecycle Controller, Network and Storage settings of a PowerEdge server and can be applied to multiple servers, enabling rapid, reliable and reproducible configuration.

A SCP operation can be performed using any of the following methods:

NOTE : This BETA release of OpenManage Ansible Module supports only the first option listed above for SCP operations i.e. export/import to/from a remote network share via CIFS or NFS. Future releases will support all the options for SCP operations.

Setting up a local mount point for a remote network share

Since OpenManage Ansible modules extensively use SCP to automate and orchestrate configuration, deployment and update on PowerEdge servers, you must locally mount the remote network share (CIFS or NFS) on the Ansible server where you will be executing the playbook or modules. The local mount point should also have read-write privileges so that the OpenManage Ansible modules can write an SCP file to the remote network share, which will then be imported by iDRAC.

You can use either of the following ways to set up a local mount point:
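
For example (the server address, share names and mount point below are hypothetical), a remote share can be mounted with the standard mount command:

# NFS share exported by the storage server, mounted read-write
$ sudo mount -t nfs 192.168.10.10:/exports/scp_share /mnt/scp_share

# or a CIFS share on the same server, with credentials passed as mount options
$ sudo mount -t cifs //192.168.10.10/scp_share /mnt/scp_share -o username=ansibleuser,password=secret

# confirm the mount point is writable, since the modules stage SCP files here
$ touch /mnt/scp_share/.write_test && rm /mnt/scp_share/.write_test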


1.2 What is included in this BETA release?

Use Cases Included in this BETA release
Protocol Support
  • WS-Management
Server Administration Power and Thermal
  • Power Control
iDRAC Reset
  • Yes
iDRAC Configuration User and Password Management
  • Local user and password management
    • Create User
    • Change Password
    • Change User Privileges
    • Remove an user
iDRAC Network Configuration
  • NIC Selection
  • Zero-Touch Auto-Config settings
  • IPv4 Address settings:
    • Enable/Disable IPv4
    • Static IPv4 Address settings (IPv4 address, gateway and netmask)
    • Enable/Disable DNS from DHCP
    • Preferred/Alternate DNS Server
  • VLAN Configuration
SNMP and SNMP Alert Configuration
  • SNMP Agent configuration
  • SNMP Alert Destination Configuration
    • Add, Modify and Delete an alert destination
Server Configuration Profile (SCP)
  • Export SCP to remote network share (CIFS, NFS)
  • Import SCP from a remote network share (CIFS, NFS)
iDRAC Services
  • iDRAC Web Server configuration
    • Enable/Disable Web server
    • TLS protocol version
    • SSL Encryption Bits
    • HTTP/HTTPS port
    • Time out period
Lifecycle Controller (LC) attributes
  • Enable/Disable CSIOR (Collect System Inventory on Restart)
BIOS Configuration Boot Order Settings
  • Change Boot Mode (Bios, Uefi)
  • Change Bios/Uefi Boot Sequence
  • One-Time Bios/Uefi Boot Configuration settings
Deployment OS Deployment
  • OS Deployment from:
    • Remote Network Share (CIFS, NFS)
Storage Virtual Drive
  • Create and Delete virtual drives
Update Firmware Update
  • Firmware update from:
    • Remote network share (CIFS, NFS)
Monitor Logs
  • Export Lifecycle Controller (LC) Logs to:
    • Remote network share (CIFS, NFS)
  • Export Tech Support Report (TSR) to:
    • Remote network share (CIFS, NFS)

2. Requirements

[Sep 13, 2019] How to setup nrpe for client side monitoring - LinuxConfig.org

Sep 13, 2019 | linuxconfig.org

... ... ...

We can also include our own custom configuration file(s) in our custom packages, thus allowing updating client monitoring configuration in a centralized and automated way. Keeping that in mind, we'll configure the client in /etc/nrpe.d/custom.cfg on all distributions in the following examples.

By default NRPE does not accept commands from any host other than localhost. This is for security reasons. To allow command execution from a server, we need to set the server's IP address as an allowed address. In our case the server is a Nagios server, with IP address 10.101.20.34 . We add the following to our client configuration:

allowed_hosts=10.101.20.34



Multiple addresses or hostnames can be added, separated by commas. Note that the above logic requires a static address for the monitoring server: using DHCP on the monitoring server will surely break your configuration if you use an IP address here. The same applies to the scenario where you use hostnames and the client can't resolve the server's hostname.
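
Once allowed_hosts is set and the nrpe service restarted, it is worth a quick test from the monitoring server itself; calling the client with check_nrpe and no command argument should simply print the remote NRPE version if the connection is accepted (the hostname below is the test client used later in this article):

# run on the Nagios server; prints the remote NRPE version on success
$ /usr/lib64/nagios/plugins/check_nrpe -H mail-test-client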

Configuring a custom check on the server and client side

To demonstrate our monitoring setup's capabilities, let's say we would like to know if the local postfix system delivers mail to user root on a client. The mail could contain a cronjob output, some report, or something that is written to STDERR and is delivered as mail by default. For instance, abrt sends a crash report to root by default on a process crash. We did not set up a mail relay, but we still would like to know if a mail arrives. Let's write a custom check to monitor that.

  1. Our first piece of the puzzle is the check itself. Consider the following simple bash script called check_unread_mail :
    #!/bin/bash
    
    USER=root
    
    if [ "$(command -v finger >> /dev/null; echo $?)" -gt 0 ]; then
            echo "UNKNOWN: utility finger not found"
            exit 3
    fi
    if [ "$(id "$USER" >> /dev/null ; echo $?)" -gt 0 ]; then
            echo "UNKNOWN: user $USER does not exist"
            exit 3
    fi
    ## check for mail
    if [ "$(finger -pm "$USER" | tail -n 1 | grep -ic "No mail.")" -gt 0 ]; then
            echo "OK: no unread mail for user $USER"
            exit 0
    else
            echo "WARNING: unread mail for user $USER"
            exit 1
    fi
    

    This simple check uses the finger utility to check for unread mail for user root . Output of the finger -pm may vary by version and thus distribution, so some adjustments may be needed.

    For example on Fedora 30, the last line of the output of finger -pm <username> is "No mail.", but on openSUSE Leap 15.1 it would be "No Mail." (notice the upper case Mail). In this case the grep -i handles this difference, but it illustrates that when working with different distributions and versions, some additional work may be needed.

  2. We'll need finger to make this check work. The package's name is the same on all distributions, so we can install it with apt , zypper , dnf or yum .
  3. We need to make the check executable:
    # chmod +x check_unread_mail
    
  4. We'll place the check into the /usr/lib64/nagios/plugins directory, the common place for nrpe checks. We'll reference it later.
  5. We'll call our command check_mail_root . Let's place another line into our custom client configuration, where we tell nrpe what commands we accept, and what needs to be done when a given command arrives:
    command[check_mail_root]=/usr/lib64/nagios/plugins/check_unread_mail
    
  6. With this our client configuration is complete. We can start the service on the client with systemd . The service name is nagios-nrpe-server on Debian derivatives, and simply nrpe on other distributions.
    # systemctl start nagios-nrpe-server
    # systemctl status nagios-nrpe-server
    ● nagios-nrpe-server.service - Nagios Remote Plugin Executor
       Loaded: loaded (/lib/systemd/system/nagios-nrpe-server.service; enabled; vendor preset: enabled)
       Active: active (running) since Tue 2019-09-10 13:03:10 CEST; 1min 51s ago
         Docs: http://www.nagios.org/documentation
     Main PID: 3782 (nrpe)
        Tasks: 1 (limit: 3549)
       CGroup: /system.slice/nagios-nrpe-server.service
               └─3782 /usr/sbin/nrpe -c /etc/nagios/nrpe.cfg -f
    
    szept 10 13:03:10 mail-test-client systemd[1]: Started Nagios Remote Plugin Executor.
    szept 10 13:03:10 mail-test-client nrpe[3782]: Starting up daemon
    szept 10 13:03:10 mail-test-client nrpe[3782]: Server listening on 0.0.0.0 port 5666.
    szept 10 13:03:10 mail-test-client nrpe[3782]: Server listening on :: port 5666.
    szept 10 13:03:10 mail-test-client nrpe[3782]: Listening for connections on port 5666
    



  7. Now we can configure the server side. If we don't have one already, we can define a command that calls a remote nrpe instance with a command as its sole argument:
    # this command runs a program $ARG1$ with no arguments
    define command {
            command_name    check_nrpe_1arg
            command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -t 60 -c $ARG1$ 2>/dev/null
    }
    
  8. We also define the client as a host:
    define host {
            use                     linux-server
            host_name               mail-test-client
            alias                   mail-test-client
            address                 mail-test-client
    }
    
    The address can be an IP address or hostname. In the latter case we need to ensure it can be resolved by the monitoring server.
  9. We can define a service on the above host using the Nagios side command and the client side command:
    define service {
            use                        generic-service
            host_name                  mail-test-client
            service_description        OS:unread mail for root
            check_command              check_nrpe_1arg!check_mail_root
    }
    
    These adjustments can be placed in any configuration file the Nagios server reads on startup, but it is a good practice to keep configuration files tidy.
  10. We verify our new Nagios configuration:
    # nagios -v /etc/nagios/nagios.cfg
    
    If "Things look okay", we can apply the configuration with a server reload:

[Sep 12, 2019] 9 Best File Comparison and Difference (Diff) Tools for Linux

Sep 12, 2019 | www.tecmint.com

3. Kompare

Kompare is a diff GUI wrapper that allows users to view differences between files and also merge them.

Some of its features include:

  1. Supports multiple diff formats
  2. Supports comparison of directories
  3. Supports reading diff files
  4. Customizable interface
  5. Creating and applying patches to source files

Kompare Tool – Compare Two Files in Linux

Visit Homepage : https://www.kde.org/applications/development/kompare/

4. DiffMerge

DiffMerge is a cross-platform GUI application for comparing and merging files. It has two functionality engines, the Diff engine which shows the difference between two files, which supports intra-line highlighting and editing and a Merge engine which outputs the changed lines between three files.

It has got the following features:

  1. Supports directory comparison
  2. File browser integration
  3. Highly configurable

DiffMerge – Compare Files in Linux

Visit Homepage : https://sourcegear.com/diffmerge/

5. Meld – Diff Tool

Meld is a lightweight GUI diff and merge tool. It enables users to compare files, directories plus version controlled programs. Built specifically for developers, it comes with the following features:

  1. Two-way and three-way comparison of files and directories
  2. Update of file comparison as a user types more words
  3. Makes merges easier using auto-merge mode and actions on changed blocks
  4. Easy comparisons using visualizations
  5. Supports Git, Mercurial, Subversion, Bazaar plus many more

Meld – A Diff Tool to Compare File in Linux

Visit Homepage : http://meldmerge.org/

6. Diffuse – GUI Diff Tool

Diffuse is another popular, free, small and simple GUI diff and merge tool that you can use on Linux. Written in Python, it offers two major functionalities, file comparison and version control, allowing file editing, merging of files and also output of the difference between files.

You can view a comparison summary, select lines of text in files using a mouse pointer, match lines in adjacent files and edit different files. Other features include:

  1. Syntax highlighting
  2. Keyboard shortcuts for easy navigation
  3. Supports unlimited undo
  4. Unicode support
  5. Supports Git, CVS, Darcs, Mercurial, RCS, Subversion, SVK and Monotone

DiffUse – A Tool to Compare Text Files in Linux

Visit Homepage : http://diffuse.sourceforge.net/

7. XXdiff – Diff and Merge Tool

XXdiff is a free, powerful file and directory comparator and merge tool that runs on Unix-like operating systems such as Linux, Solaris, HP/UX, IRIX, and DEC Tru64. One limitation of XXdiff is its lack of support for Unicode files and inline editing of diff files.

It has the following list of features:

  1. Shallow and recursive comparison of two or three files, or two directories
  2. Horizontal difference highlighting
  3. Interactive merging of files and saving of resulting output
  4. Supports merge reviews/policing
  5. Supports external diff tools such as GNU diff, SIG diff, Cleareddiff and many more
  6. Extensible using scripts
  7. Fully customizable using resource file plus many other minor features

xxdiff Tool

Visit Homepage : http://furius.ca/xxdiff/

8. KDiff3 – Diff and Merge Tool

KDiff3 is yet another cool, cross-platform diff and merge tool made from KDevelop . It works on all Unix-like platforms including Linux, Mac OS X, and Windows.

It can compare or merge two to three files or directories and has the following notable features:

  1. Indicates differences line by line and character by character
  2. Supports auto-merge
  3. In-built editor to deal with merge-conflicts
  4. Supports Unicode, UTF-8 and many other codecs
  5. Allows printing of differences
  6. Windows explorer integration support
  7. Also supports auto-detection via byte-order-mark "BOM"
  8. Supports manual alignment of lines
  9. Intuitive GUI and many more

KDiff3 Tool for Linux

Visit Homepage : http://kdiff3.sourceforge.net/

9. TkDiff

TkDiff is also a cross-platform, easy-to-use GUI wrapper for the Unix diff tool. It provides a side-by-side view of the differences between two input files. It can run on Linux, Windows and Mac OS X.

Additionally, it has some other exciting features including diff bookmarks, a graphical map of differences for easy and quick navigation plus many more.

Visit Homepage : https://sourceforge.net/projects/tkdiff/

Having read this review of some of the best file and directory comparator and merge tools, you probably want to try out some of them. These may not be the only diff tools available on Linux, but they are known to offer some of the best features; you may also want to let us know of any other diff tools out there that you have tested and think deserve to be mentioned among the best.

[Sep 11, 2019] string - Extract substring in Bash - Stack Overflow

Sep 11, 2019 | stackoverflow.com

Jeff ,May 8 at 18:30

Given a filename in the form someletters_12345_moreleters.ext , I want to extract the 5 digits and put them into a variable.

So to emphasize the point, I have a filename with x number of characters then a five digit sequence surrounded by a single underscore on either side then another set of x number of characters. I want to take the 5 digit number and put that into a variable.

I am very interested in the number of different ways that this can be accomplished.

Berek Bryan ,Jan 24, 2017 at 9:30

Use cut :
echo 'someletters_12345_moreleters.ext' | cut -d'_' -f 2

More generic:

INPUT='someletters_12345_moreleters.ext'
SUBSTRING=$(echo $INPUT| cut -d'_' -f 2)
echo $SUBSTRING

JB. ,Jan 6, 2015 at 10:13

If x is constant, the following parameter expansion performs substring extraction:
b=${a:12:5}

where 12 is the offset (zero-based) and 5 is the length
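
Applied to the filename from the question, that looks like this:

a=someletters_12345_moreleters.ext
b=${a:12:5}    # skip the 12 characters of "someletters_", take the next 5
echo "$b"      # prints 12345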

If the underscores around the digits are the only ones in the input, you can strip off the prefix and suffix (respectively) in two steps:

tmp=${a#*_}   # remove prefix ending in "_"
b=${tmp%_*}   # remove suffix starting with "_"

If there are other underscores, it's probably feasible anyway, albeit more tricky. If anyone knows how to perform both expansions in a single expression, I'd like to know too.

Both solutions presented are pure bash, with no process spawning involved, hence very fast.

A Sahra ,Mar 16, 2017 at 6:27

Generic solution where the number can be anywhere in the filename, using the first of such sequences:
number=$(echo $filename | egrep -o '[[:digit:]]{5}' | head -n1)

Another solution to extract exactly a part of a variable:

number=${filename:offset:length}

If your filename always have the format stuff_digits_... you can use awk:

number=$(echo $filename | awk -F _ '{ print $2 }')

Yet another solution to remove everything except digits, use

number=$(echo $filename | tr -cd '[[:digit:]]')

sshow ,Jul 27, 2017 at 17:22

In case someone wants more rigorous information, you can also search it in man bash like this
$ man bash [press return key]
/substring  [press return key]
[press "n" key]
[press "n" key]
[press "n" key]
[press "n" key]

Result:

${parameter:offset}
       ${parameter:offset:length}
              Substring Expansion.  Expands to  up  to  length  characters  of
              parameter  starting  at  the  character specified by offset.  If
              length is omitted, expands to the substring of parameter  start‐
              ing at the character specified by offset.  length and offset are
              arithmetic expressions (see ARITHMETIC  EVALUATION  below).   If
              offset  evaluates  to a number less than zero, the value is used
              as an offset from the end of the value of parameter.  Arithmetic
              expressions  starting  with  a - must be separated by whitespace
              from the preceding : to be distinguished from  the  Use  Default
              Values  expansion.   If  length  evaluates to a number less than
              zero, and parameter is not @ and not an indexed  or  associative
              array,  it is interpreted as an offset from the end of the value
              of parameter rather than a number of characters, and the  expan‐
              sion is the characters between the two offsets.  If parameter is
              @, the result is length positional parameters beginning at  off‐
              set.   If parameter is an indexed array name subscripted by @ or
              *, the result is the length members of the array beginning  with
              ${parameter[offset]}.   A  negative  offset is taken relative to
              one greater than the maximum index of the specified array.  Sub‐
              string  expansion applied to an associative array produces unde‐
              fined results.  Note that a negative offset  must  be  separated
              from  the  colon  by  at least one space to avoid being confused
              with the :- expansion.  Substring indexing is zero-based  unless
              the  positional  parameters are used, in which case the indexing
              starts at 1 by default.  If offset  is  0,  and  the  positional
              parameters are used, $0 is prefixed to the list.

Aleksandr Levchuk ,Aug 29, 2011 at 5:51

Building on jor's answer (which doesn't work for me):
substring=$(expr "$filename" : '.*_\([^_]*\)_.*')

kayn ,Oct 5, 2015 at 8:48

I'm surprised this pure bash solution didn't come up:
a="someletters_12345_moreleters.ext"
IFS="_"
set $a
echo $2
# prints 12345

You probably want to reset IFS to what value it was before, or unset IFS afterwards!

zebediah49 ,Jun 4 at 17:31

Here's how i'd do it:
FN=someletters_12345_moreleters.ext
[[ ${FN} =~ _([[:digit:]]{5})_ ]] && NUM=${BASH_REMATCH[1]}

Note: the above is a regular expression and is restricted to your specific scenario of five digits surrounded by underscores. Change the regular expression if you need different matching.

TranslucentCloud ,Jun 16, 2014 at 13:27

Following the requirements

I have a filename with x number of characters then a five digit sequence surrounded by a single underscore on either side then another set of x number of characters. I want to take the 5 digit number and put that into a variable.

I found some grep ways that may be useful:

$ echo "someletters_12345_moreleters.ext" | grep -Eo "[[:digit:]]+" 
12345

or better

$ echo "someletters_12345_moreleters.ext" | grep -Eo "[[:digit:]]{5}" 
12345

And then with -Po syntax:

$ echo "someletters_12345_moreleters.ext" | grep -Po '(?<=_)\d+' 
12345

Or if you want to make it fit exactly 5 characters:

$ echo "someletters_12345_moreleters.ext" | grep -Po '(?<=_)\d{5}' 
12345

Finally, to store it in a variable you just need to use the var=$(command) syntax.

Darron ,Jan 9, 2009 at 16:13

Without any sub-processes you can:
shopt -s extglob
front=${input%%_+([a-zA-Z]).*}
digits=${front##+([a-zA-Z])_}

A very small variant of this will also work in ksh93.

user2350426 ,Aug 5, 2014 at 8:11

If we focus on the concept of:
"A run of (one or several) digits"

We could use several external tools to extract the numbers.
We could quite easily erase all other characters, using either sed or tr:

name='someletters_12345_moreleters.ext'

echo $name | sed 's/[^0-9]*//g'    # 12345
echo $name | tr -c -d 0-9          # 12345

But if $name contains several runs of numbers, the above will fail:

If "name=someletters_12345_moreleters_323_end.ext", then:

echo $name | sed 's/[^0-9]*//g'    # 12345323
echo $name | tr -c -d 0-9          # 12345323

We need to use regular expresions (regex).
To select only the first run (12345 not 323) in sed and perl:

echo $name | sed 's/[^0-9]*\([0-9]\{1,\}\).*$/\1/'
perl -e 'my $name='$name';my ($num)=$name=~/(\d+)/;print "$num\n";'

But we could as well do it directly in bash (1) :

regex='[^0-9]*([0-9]{1,}).*$'
[[ $name =~ $regex ]] && echo "${BASH_REMATCH[1]}"

This allows us to extract the FIRST run of digits of any length
surrounded by any other text/characters.

Note : regex='[^0-9]*([0-9]{5,5}).*$' will match only runs of exactly 5 digits. :-)

(1) : faster than calling an external tool for each short text. Not faster than doing all processing inside sed or awk for large files.

codist ,May 6, 2011 at 12:50

Here's a prefix-suffix solution (similar to the solutions given by JB and Darron) that matches the first block of digits and does not depend on the surrounding underscores:
str='someletters_12345_morele34ters.ext'
s1="${str#"${str%%[[:digit:]]*}"}"   # strip off non-digit prefix from str
s2="${s1%%[^[:digit:]]*}"            # strip off non-digit suffix from s1
echo "$s2"                           # 12345

Campa ,Oct 21, 2016 at 8:12

I love sed 's capability to deal with regex groups:
> var="someletters_12345_moreletters.ext"
> digits=$( echo $var | sed "s/.*_\([0-9]\+\).*/\1/p" -n )
> echo $digits
12345

A slightly more general option would be not to assume that you have an underscore _ marking the start of your digits sequence, hence for instance stripping off all non-numbers you get before your sequence: s/[^0-9]\+\([0-9]\+\).*/\1/p .


> man sed | grep s/regexp/replacement -A 2
s/regexp/replacement/
    Attempt to match regexp against the pattern space.  If successful, replace that portion matched with replacement.  The replacement may contain the special  character  &  to
    refer to that portion of the pattern space which matched, and the special escapes \1 through \9 to refer to the corresponding matching sub-expressions in the regexp.

More on this, in case you're not too confident with regexps:

  • s is for _s_ubstitute
  • [0-9]+ matches 1+ digits
  • \1 links to the group n.1 of the regex output (group 0 is the whole match, group 1 is the match within parentheses in this case)
  • p flag is for _p_rinting

All escapes \ are there to make sed 's regexp processing work.

Dan Dascalescu ,May 8 at 18:28

Given test.txt is a file containing "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
cut -b19-20 test.txt > test1.txt   # This will extract chars 19 & 20 "ST"
while read -r; do
    x=$REPLY
done < test1.txt
echo $x
ST

Alex Raj Kaliamoorthy ,Jul 29, 2016 at 7:41

My answer will have more control on what you want out of your string. Here is the code on how you can extract 12345 out of your string
str="someletters_12345_moreleters.ext"
str=${str#*_}
str=${str%_more*}
echo $str

This will be more efficient if you want to extract something that has any chars like abc or any special characters like _ or - . For example: If your string is like this and you want everything that is after someletters_ and before _moreleters.ext :

str="someletters_123-45-24a&13b-1_moreleters.ext"

With my code you can mention what exactly you want. Explanation:

#*      It will remove the preceding string including the matching key. Here the key we mentioned is _
%       It will remove the following string including the matching key. Here the key we mentioned is '_more*'

Do some experiments yourself and you would find this interesting.

Dan Dascalescu ,May 8 at 18:27

similar to substr('abcdefg', 2-1, 3) in php:
echo 'abcdefg'|tail -c +2|head -c 3

olibre ,Nov 25, 2015 at 14:50

Ok, here goes pure Parameter Substitution with an empty string. The caveat is that I have defined someletters and moreletters as consisting only of letters. If they are alphanumeric, this will not work as it is. Note that the extended patterns used here require the extglob shell option.
shopt -s extglob    # needed for the @( ) and +( ) patterns below
filename=someletters_12345_moreletters.ext
substring=${filename//@(+([a-z])_|_+([a-z]).*)}
echo $substring
12345

gniourf_gniourf ,Jun 4 at 17:33

There's also the expr command (an external utility, not actually a bash builtin):
INPUT="someletters_12345_moreleters.ext"  
SUBSTRING=`expr match "$INPUT" '.*_\([[:digit:]]*\)_.*' `  
echo $SUBSTRING

russell ,Aug 1, 2013 at 8:12

A little late, but I just ran across this problem and found the following:
host:/tmp$ asd=someletters_12345_moreleters.ext 
host:/tmp$ echo `expr $asd : '.*_\(.*\)_'`
12345
host:/tmp$

I used it to get millisecond resolution on an embedded system that does not have %N for date:

set `grep "now at" /proc/timer_list`
nano=$3
fraction=`expr $nano : '.*\(...\)......'`
$debug nano is $nano, fraction is $fraction

> ,Aug 5, 2018 at 17:13

A bash solution:
IFS="_" read -r x digs x <<<'someletters_12345_moreleters.ext'

This will clobber a variable called x . The var x could be changed to the var _ .

input='someletters_12345_moreleters.ext'
IFS="_" read -r _ digs _ <<<"$input"

[Sep 08, 2019] How to replace spaces in file names using a bash script

Sep 08, 2019 | stackoverflow.com



Mark Byers ,Apr 25, 2010 at 19:20

Can anyone recommend a safe solution to recursively replace spaces with underscores in file and directory names starting from a given root directory? For example:
$ tree
.
|-- a dir
|   `-- file with spaces.txt
`-- b dir
    |-- another file with spaces.txt
    `-- yet another file with spaces.pdf

becomes:

$ tree
.
|-- a_dir
|   `-- file_with_spaces.txt
`-- b_dir
    |-- another_file_with_spaces.txt
    `-- yet_another_file_with_spaces.pdf

Jürgen Hötzel ,Nov 4, 2015 at 3:03

Use rename (aka prename ) which is a Perl script which may be on your system already. Do it in two steps:
find -name "* *" -type d | rename 's/ /_/g'    # do the directories first
find -name "* *" -type f | rename 's/ /_/g'

Based on Jürgen's answer and able to handle multiple layers of files and directories in a single bound using the "Revision 1.5 1998/12/18 16:16:31 rmb1" version of /usr/bin/rename (a Perl script):

find /tmp/ -depth -name "* *" -execdir rename 's/ /_/g' "{}" \;

oevna ,Jan 1, 2016 at 8:25

I use:
for f in *\ *; do mv "$f" "${f// /_}"; done

Though it's not recursive, it's quite fast and simple. I'm sure someone here could update it to be recursive.

The ${f// /_} part utilizes bash's parameter expansion mechanism to replace a pattern within a parameter with supplied string. The relevant syntax is ${parameter/pattern/string} . See: https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html or http://wiki.bash-hackers.org/syntax/pe .
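
A quick illustration of that expansion:

f="file with spaces.txt"
echo "${f// /_}"    # prints file_with_spaces.txt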

armandino ,Dec 3, 2013 at 20:51

find . -depth -name '* *' \
| while IFS= read -r f ; do mv -i "$f" "$(dirname "$f")/$(basename "$f"|tr ' ' _)" ; done

I failed to get it right at first, because I didn't think of directories.

Edmund Elmer ,Jul 3 at 7:12

you can use detox by Doug Harple
detox -r <folder>

Dennis Williamson ,Mar 22, 2012 at 20:33

A find/rename solution. rename is part of util-linux.

You need to descend depth first, because a whitespace filename can be part of a whitespace directory:

find /tmp/ -depth -name "* *" -execdir rename " " "_" "{}" ";"

armandino ,Apr 26, 2010 at 11:49

bash 4.0
#!/bin/bash
shopt -s globstar
for file in **/*\ *
do 
    mv "$file" "${file// /_}"       
done

Itamar ,Jan 31, 2013 at 21:27

you can use this:
find . -name '* *' | while read fname
do
        new_fname=`echo $fname | tr " " "_"`

        if [ -e $new_fname ]
        then
                echo "File $new_fname already exists. Not replacing $fname"
        else
                echo "Creating new file $new_fname to replace $fname"
                mv "$fname" $new_fname
        fi
done

yabt ,Apr 26, 2010 at 14:54

Here's a (quite verbose) find -exec solution which writes "file already exists" warnings to stderr:
function trspace() {
   declare dir name bname dname newname replace_char
   [ $# -lt 1 -o $# -gt 2 ] && { echo "usage: trspace dir char"; return 1; }
   dir="${1}"
   replace_char="${2:-_}"
   find "${dir}" -xdev -depth -name $'*[ \t\r\n\v\f]*' -exec bash -c '
      for ((i=1; i<=$#; i++)); do
         name="${@:i:1}"
         dname="${name%/*}"
         bname="${name##*/}"
         newname="${dname}/${bname//[[:space:]]/${0}}"
         if [[ -e "${newname}" ]]; then
            echo "Warning: file already exists: ${newname}" 1>&2
         else
            mv "${name}" "${newname}"
         fi
      done
  ' "${replace_char}" '{}' +
}

trspace rootdir _

degi ,Aug 8, 2011 at 9:10

This one does a little bit more. I use it to rename my downloaded torrents (no special characters (non-ASCII), spaces, multiple dots, etc.).
#!/usr/bin/perl

&rena(`find . -type d`);
&rena(`find . -type f`);

sub rena
{
    ($elems)=@_;
    @t=split /\n/,$elems;

    for $e (@t)
    {
    $_=$e;
    # remove ./ of find
    s/^\.\///;
    # non ascii transliterate
    tr [\200-\377][_];
    tr [\000-\40][_];
    # special characters we do not want in paths
    s/[ \-\,\;\?\+\'\"\!\[\]\(\)\@\#]/_/g;
    # multiple dots except for extension
    while (/\..*\./)
    {
        s/\./_/;
    }
    # only one _ consecutive
    s/_+/_/g;
    next if ($_ eq $e ) or ("./$_" eq $e);
    print "$e -> $_\n";
    rename ($e,$_);
    }
}

Junyeop Lee ,Apr 10, 2018 at 9:44

Recursive version of Naidim's Answers.
find . -name "* *" | awk '{ print length, $0 }' | sort -nr -s | cut -d" " -f2- | while read f; do base=$(basename "$f"); newbase="${base// /_}"; mv "$(dirname "$f")/$(basename "$f")" "$(dirname "$f")/$newbase"; done

ghoti ,Dec 5, 2016 at 21:16

I found this script around; it may be interesting :)
 IFS=$'\n';for f in `find .`; do file=$(echo $f | tr [:blank:] '_'); [ -e $f ] && [ ! -e $file ] && mv "$f" $file;done;unset IFS

ghoti ,Dec 5, 2016 at 21:17

Here's a reasonably sized bash script solution
#!/bin/bash
(
IFS=$'\n'
    for y in $(ls $1)
      do
         mv $1/`echo $y | sed 's/ /\\ /g'` $1/`echo "$y" | sed 's/ /_/g'`
      done
)

user1060059 ,Nov 22, 2011 at 15:15

This only finds files inside the current directory and renames them . I have this aliased.

find ./ -name "* *" -type f -d 1 | perl -ple '$file = $_; $file =~ s/\s+/_/g; rename($_, $file);'

Hongtao ,Sep 26, 2014 at 19:30

I just made one for my own purpose. You may use it as a reference.
#!/bin/bash
cd /vzwhome/c0cheh1/dev_source/UB_14_8
for file in *
do
    echo $file
    cd "/vzwhome/c0cheh1/dev_source/UB_14_8/$file/Configuration/$file"
    echo "==> `pwd`"
    for subfile in *\ *; do [ -d "$subfile" ] && ( mv "$subfile" "$(echo $subfile | sed -e 's/ /_/g')" ); done
    ls
    cd /vzwhome/c0cheh1/dev_source/UB_14_8
done

Marcos Jean Sampaio ,Dec 5, 2016 at 20:56

For files in folder named /files
for i in `IFS="";find /files -name *\ *`
do
   echo $i
done > /tmp/list


while read line
do
   mv "$line" `echo $line | sed 's/ /_/g'`
done < /tmp/list

rm /tmp/list

Muhammad Annaqeeb ,Sep 4, 2017 at 11:03

For those struggling through this using macOS, first install all the tools:
 brew install tree findutils rename

Then, when you need to rename, make an alias for GNU find (gfind) as find. Then run the code of @Michel Krelin:

alias find=gfind 
find . -depth -name '* *' \
| while IFS= read -r f ; do mv -i "$f" "$(dirname "$f")/$(basename "$f"|tr ' ' _)" ; done

[Sep 07, 2019] As soon as you stop writing code on a regular basis you stop being a programmer. You lose your qualification very quickly. That's a typical tragedy of talented programmers who became mediocre managers or, worse, theoretical computer scientists

Programming skills are somewhat similar to the skills of people who play violin or piano. As soon as you stop playing, the skills start to evaporate: first slowly, then quicker. In two years you will probably lose 80%.
Notable quotes:
"... I happened to look the other day. I wrote 35 programs in January, and 28 or 29 programs in February. These are small programs, but I have a compulsion. I love to write programs and put things into it. ..."
Sep 07, 2019 | archive.computerhistory.org

Dijkstra said he was proud to be a programmer. Unfortunately he changed his attitude completely, and I think he wrote his last computer program in the 1980s. At this conference I went to in 1967 about simulation language, Chris Strachey was going around asking everybody at the conference what was the last computer program you wrote. This was 1967. Some of the people said, "I've never written a computer program." Others would say, "Oh yeah, here's what I did last week." I asked Edsger this question when I visited him in Texas in the 90s and he said, "Don, I write programs now with pencil and paper, and I execute them in my head." He finds that a good enough discipline.

I think he was mistaken on that. He taught me a lot of things, but I really think that if he had continued... One of Dijkstra's greatest strengths was that he felt a strong sense of aesthetics, and he didn't want to compromise his notions of beauty. They were so intense that when he visited me in the 1960s, I had just come to Stanford. I remember the conversation we had. It was in the first apartment, our little rented house, before we had electricity in the house.

We were sitting there in the dark, and he was telling me how he had just learned about the specifications of the IBM System/360, and it made him so ill that his heart was actually starting to flutter.

He intensely disliked things that he didn't consider clean to work with. So I can see that he would have distaste for the languages that he had to work with on real computers. My reaction to that was to design my own language, and then make Pascal so that it would work well for me in those days. But his response was to do everything only intellectually.

So, programming.

I happened to look the other day. I wrote 35 programs in January, and 28 or 29 programs in February. These are small programs, but I have a compulsion. I love to write programs and put things into it. I think of a question that I want to answer, or I have part of my book where I want to present something. But I can't just present it by reading about it in a book. As I code it, it all becomes clear in my head. It's just the discipline. The fact that I have to translate my knowledge of this method into something that the machine is going to understand just forces me to make that crystal-clear in my head. Then I can explain it to somebody else infinitely better. The exposition is always better if I've implemented it, even though it's going to take me more time.

[Sep 07, 2019] Knuth about computer science and money: At that point I made the decision in my life that I wasn't going to optimize my income;

Sep 07, 2019 | archive.computerhistory.org

So I had a programming hat when I was outside of Cal Tech, and at Cal Tech I am a mathematician taking my grad studies. A startup company, called Green Tree Corporation because green is the color of money, came to me and said, "Don, name your price. Write compilers for us and we will take care of finding computers for you to debug them on, and assistance for you to do your work. Name your price." I said, "Oh, okay. $100,000," assuming that this was an impossible number. In that era this was not quite at Bill Gates's level today, but it was sort of out there.

The guy didn't blink. He said, "Okay." I didn't really blink either. I said, "Well, I'm not going to do it. I just thought this was an impossible number."

At that point I made the decision in my life that I wasn't going to optimize my income; I was really going to do what I thought I could do for well, I don't know. If you ask me what makes me most happy, number one would be somebody saying "I learned something from you". Number two would be somebody saying "I used your software". But number infinity would be Well, no. Number infinity minus one would be "I bought your book". It's not as good as "I read your book", you know. Then there is "I bought your software"; that was not in my own personal value. So that decision came up. I kept up with the literature about compilers. The Communications of the ACM was where the action was. I also worked with people on trying to debug the ALGOL language, which had problems with it. I published a few papers, like "The Remaining Trouble Spots in ALGOL 60" was one of the papers that I worked on. I chaired a committee called "Smallgol" which was to find a subset of ALGOL that would work on small computers. I was active in programming languages.

[Sep 07, 2019] Knuth: maybe 1 in 50 people have the "computer scientist's" type of intellect

Sep 07, 2019 | conservancy.umn.edu

Frana: You have made the comment several times that maybe 1 in 50 people have the "computer scientist's mind." Knuth: Yes. Frana: I am wondering if a large number of those people are trained professional librarians? [laughter] There is some strangeness there. But can you pinpoint what it is about the mind of the computer scientist that is....

Knuth: That is different?

Frana: What are the characteristics?

Knuth: Two things: one is the ability to deal with non-uniform structure, where you have case one, case two, case three, case four. Or that you have a model of something where the first component is integer, the next component is a Boolean, and the next component is a real number, or something like that, you know, non-uniform structure. To deal fluently with those kinds of entities, which is not typical in other branches of mathematics, is critical. And the other characteristic ability is to shift levels quickly, from looking at something in the large to looking at something in the small, and many levels in between, jumping from one level of abstraction to another. You know that, when you are adding one to some number, that you are actually getting closer to some overarching goal. These skills, being able to deal with nonuniform objects and to see through things from the top level to the bottom level, these are very essential to computer programming, it seems to me. But maybe I am fooling myself because I am too close to it.

Frana: It is the hardest thing to really understand that which you are existing within.

Knuth: Yes.

[Sep 07, 2019] Knuth: I can be a writer, who tries to organize other people's ideas into some kind of a more coherent structure so that it is easier to put things together

Sep 07, 2019 | conservancy.umn.edu

Knuth: I can be a writer, who tries to organize other people's ideas into some kind of a more coherent structure so that it is easier to put things together. I can see that I could be viewed as a scholar that does his best to check out sources of material, so that people get credit where it is due. And to check facts over, not just to look at the abstract of something, but to see what the methods were that did it and to fill in holes if necessary. I look at my role as being able to understand the motivations and terminology of one group of specialists and boil it down to a certain extent so that people in other parts of the field can use it. I try to listen to the theoreticians and select what they have done that is important to the programmer on the street; to remove technical jargon when possible.

But I have never been good at any kind of a role that would be making policy, or advising people on strategies, or what to do. I have always been best at refining things that are there and bringing order out of chaos. I sometimes raise new ideas that might stimulate people, but not really in a way that would be in any way controlling the flow. The only time I have ever advocated something strongly was with literate programming; but I do this always with the caveat that it works for me, not knowing if it would work for anybody else.

When I work with a system that I have created myself, I can always change it if I don't like it. But everybody who works with my system has to work with what I give them. So I am not able to judge my own stuff impartially. So anyway, I have always felt bad about if anyone says, 'Don, please forecast the future,'...

[Sep 07, 2019] How to Debug Bash Scripts by Mike Ward

Sep 07, 2019 | linuxconfig.org

05 September 2019

... ... ... How to use other Bash options

The Bash options for debugging are turned off by default, but once they are turned on by using the set command, they stay on until explicitly turned off. If you are not sure which options are enabled, you can examine the $- variable to see the current state of all of the options.

$ echo $-
himBHs
$ set -xv && echo $-
himvxBHs

There is another useful switch we can use to help us find variables referenced without having any value set. This is the -u switch, and just like -x and -v it can also be used on the command line, as we see in the following example:

Setting the -u option at the command line

We mistakenly assigned a value of 7 to the variable called "level" then tried to echo a variable named "score" that simply resulted in printing nothing at all to the screen. Absolutely no debug information was given. Setting our -u switch allows us to see a specific error message, "score: unbound variable" that indicates exactly what went wrong.
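
At the prompt, that sequence looks roughly like this:

$ level=7
$ echo $score      # prints an empty line and gives no hint that anything is wrong
$ set -u
$ echo $score
bash: score: unbound variable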

We can use those options in short Bash scripts to give us debug information to identify problems that do not otherwise trigger feedback from the Bash interpreter. Let's walk through a couple of examples.

#!/bin/bash

read -p "Path to be added: " $path

if [ "$path" = "/home/mike/bin" ]; then
        echo $path >> $PATH
        echo "new path: $PATH"
else
        echo "did not modify PATH"
fi
Using the -x option when running your Bash script

In the example above we run the addpath script normally and it simply does not modify our PATH . It does not give us any indication of why or clues to mistakes made. Running it again using the -x option clearly shows us that the left side of our comparison is an empty string. $path is an empty string because we accidentally put a dollar sign in front of "path" in our read statement. Sometimes we look right at a mistake like this and it doesn't look wrong until we get a clue and think, "Why is $path evaluated to an empty string?"

Looking at this next example, we also get no indication of an error from the interpreter. We only get one value printed per line instead of two. This is not an error that will halt execution of the script, so we're left to simply wonder without being given any clues. Using the -u switch, we immediately get a notification that our variable j is not bound to a value. So these are real time savers when we make mistakes that do not result in actual errors from the Bash interpreter's point of view.

#!/bin/bash

for i in 1 2 3
do
        echo $i $j
done
Using the -u option when running your script from the command line

Now surely you are thinking that sounds fine, but we seldom need help debugging mistakes made in one-liners at the command line or in short scripts like these. We typically struggle with debugging when we deal with longer and more complicated scripts, and we rarely need to set these options and leave them set while we run multiple scripts. Setting -xv options and then running a more complex script will often add confusion by doubling or tripling the amount of output generated.

Fortunately we can use these options in a more precise way by placing them inside our scripts. Instead of explicitly invoking a Bash shell with an option from the command line, we can set an option by adding it to the shebang line instead.

#!/bin/bash -x

This will set the -x option for the entire file or until it is unset during the script execution, allowing you to simply run the script by typing the filename instead of passing it to Bash as a parameter. A long script or one that has a lot of output will still become unwieldy using this technique however, so let's look at a more specific way to use options.




For a more targeted approach, surround only the suspicious blocks of code with the options you want. This approach is great for scripts that generate menus or detailed output, and it is accomplished by using the set keyword with plus or minus once again.

#!/bin/bash

read -p "Path to be added: " $path

set -xv
if [ "$path" = "/home/mike/bin" ]; then
        echo $path >> $PATH
        echo "new path: $PATH"
else
        echo "did not modify PATH"
fi
set +xv
Wrapping options around a block of code in your script

We surrounded only the blocks of code we suspect in order to reduce the output, making our task easier in the process. Notice we turn on our options only for the code block containing our if-then-else statement, then turn off the option(s) at the end of the suspect block. We can turn these options on and off multiple times in a single script if we can't narrow down the suspicious areas, or if we want to evaluate the state of variables at various points as we progress through the script. There is no need to turn off an option if we want it to continue for the remainder of the script execution.

For completeness' sake we should mention also that there are debuggers written by third parties that will allow us to step through the code execution line by line. You might want to investigate these tools, but most people find that they are not actually needed.

As seasoned programmers will suggest, if your code is too complex to isolate suspicious blocks with these options then the real problem is that the code should be refactored. Overly complex code means bugs can be difficult to detect and maintenance can be time consuming and costly.

One final thing to mention regarding Bash debugging options is that a file globbing option also exists and is set with -f . Setting this option will turn off globbing (expansion of wildcards to generate file names) while it is enabled. This -f option can be a switch used at the command line with bash, after the shebang in a file or, as in this example, to surround a block of code.

#!/bin/bash

echo "ignore fileglobbing option turned off"
ls *

echo "ignore file globbing option set"
set -f
ls *
set +f
Using the -f option to turn off file globbing

How to use trap to help debug

There are more involved techniques worth considering if your scripts are complicated, including using an assert function as mentioned earlier. One such method to keep in mind is the use of trap. Shell scripts allow us to trap signals and do something at that point.

A simple but useful example you can use in your Bash scripts is to trap on EXIT .

#!/bin/bash

trap 'echo score is $score, status is $status' EXIT

if [ -z "$1" ]; then
        status="default"
else
        status="$1"
fi

score=0
if [ ${USER} = 'superman' ]; then
        score=99
elif [ $# -gt 1 ]; then
        score=$2
fi
Using trap EXIT to help debug your script



As you can see just dumping the current values of variables to the screen can be useful to show where your logic is failing. The EXIT signal obviously does not need an explicit exit statement to be generated; in this case the echo statement is executed when the end of the script is reached.

Another useful trap to use with Bash scripts is DEBUG . This happens after every statement, so it can be used as a brute force way to show the values of variables at each step in the script execution.

#!/bin/bash

trap 'echo "line ${LINENO}: score is $score"' DEBUG

score=0

if [ "${USER}" = "mike" ]; then
        let "score += 1"
fi

let "score += 1"

if [ "" = "7" ]; then
        score=7
fi
exit 0
[Image: using trap DEBUG to help debug your script]
Conclusion

When you notice your Bash script not behaving as expected and the reason is not clear to you, consider what information would be useful to help you identify the cause, then use the most comfortable tools available to help you pinpoint the issue. The xtrace option -x is easy to use and probably the most useful of the options presented here, so consider trying it out next time you're faced with a script that's not doing what you thought it would.
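For a quick start you do not even have to edit the script: bash can be invoked with xtrace already turned on. A minimal sketch (myscript.sh is just a placeholder name; the PS4 prefix adds the file name and line number to every traced line):

PS4='+ ${BASH_SOURCE}:${LINENO}: ' bash -x ./myscript.sh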

[Sep 06, 2019] Knuth: Programming and architecture are interrelated and it is impossible to create good architecture without actually programming at least a prototype

Notable quotes:
"... When you're writing a document for a human being to understand, the human being will look at it and nod his head and say, "Yeah, this makes sense." But then there's all kinds of ambiguities and vagueness that you don't realize until you try to put it into a computer. Then all of a sudden, almost every five minutes as you're writing the code, a question comes up that wasn't addressed in the specification. "What if this combination occurs?" ..."
"... When you're faced with implementation, a person who has been delegated this job of working from a design would have to say, "Well hmm, I don't know what the designer meant by this." ..."
Sep 06, 2019 | archive.computerhistory.org

...I showed the second version of this design to two of my graduate students, and I said, "Okay, implement this, please, this summer. That's your summer job." I thought I had specified a language. I had to go away. I spent several weeks in China during the summer of 1977, and I had various other obligations. I assumed that when I got back from my summer trips, I would be able to play around with TeX and refine it a little bit. To my amazement, the students, who were outstanding students, had not completed [it]. They had a system that was able to do about three lines of TeX. I thought, "My goodness, what's going on? I thought these were good students." Well afterwards I changed my attitude to saying, "Boy, they accomplished a miracle."

Because going from my specification, which I thought was complete, they really had an impossible task, and they had succeeded wonderfully with it. These students, by the way, [were] Michael Plass, who has gone on to be the brains behind almost all of Xerox's Docutech software and all kind of things that are inside of typesetting devices now, and Frank Liang, one of the key people for Microsoft Word.

He did important mathematical things as well as his hyphenation methods which are quite used in all languages now. These guys were actually doing great work, but I was amazed that they couldn't do what I thought was just sort of a routine task. Then I became a programmer in earnest, where I had to do it. The reason is when you're doing programming, you have to explain something to a computer, which is dumb.

When you're writing a document for a human being to understand, the human being will look at it and nod his head and say, "Yeah, this makes sense." But then there's all kinds of ambiguities and vagueness that you don't realize until you try to put it into a computer. Then all of a sudden, almost every five minutes as you're writing the code, a question comes up that wasn't addressed in the specification. "What if this combination occurs?"

It just didn't occur to the person writing the design specification. When you're faced with implementation, a person who has been delegated this job of working from a design would have to say, "Well hmm, I don't know what the designer meant by this."

If I hadn't been in China they would've scheduled an appointment with me and stopped their programming for a day. Then they would come in at the designated hour and we would talk. They would take 15 minutes to present to me what the problem was, and then I would think about it for a while, and then I'd say, "Oh yeah, do this. " Then they would go home and they would write code for another five minutes and they'd have to schedule another appointment.

I'm probably exaggerating, but this is why I think Bob Floyd's Chiron compiler never got going. Bob worked many years on a beautiful idea for a programming language, where he designed a language called Chiron, but he never touched the programming himself. I think this was actually the reason that he had trouble with that project, because it's so hard to do the design unless you're faced with the low-level aspects of it, explaining it to a machine instead of to another person.

It was Forsythe, I think, who said, "People have said traditionally that you don't understand something until you've taught it in a class. The truth is you don't really understand something until you've taught it to a computer, until you've been able to program it." At this level, programming was absolutely important.

[Sep 06, 2019] Knuth: No, I stopped going to conferences. It was too discouraging. Computer programming keeps getting harder because more stuff is discovered

Sep 06, 2019 | conservancy.umn.edu

Knuth: No, I stopped going to conferences. It was too discouraging. Computer programming keeps getting harder because more stuff is discovered. I can cope with learning about one new technique per day, but I can't take ten in a day all at once. So conferences are depressing; it means I have so much more work to do. If I hide myself from the truth I am much happier.

[Sep 06, 2019] How TAOCP was hatched

Notable quotes:
"... Also, Addison-Wesley was the people who were asking me to do this book; my favorite textbooks had been published by Addison Wesley. They had done the books that I loved the most as a student. For them to come to me and say, "Would you write a book for us?", and here I am just a secondyear gradate student -- this was a thrill. ..."
"... But in those days, The Art of Computer Programming was very important because I'm thinking of the aesthetical: the whole question of writing programs as something that has artistic aspects in all senses of the word. The one idea is "art" which means artificial, and the other "art" means fine art. All these are long stories, but I've got to cover it fairly quickly. ..."
Sep 06, 2019 | archive.computerhistory.org

Knuth: This is, of course, really the story of my life, because I hope to live long enough to finish it. But I may not, because it's turned out to be such a huge project. I got married in the summer of 1961, after my first year of graduate school. My wife finished college, and I could use the money I had made -- the $5000 on the compiler -- to finance a trip to Europe for our honeymoon.

We had four months of wedded bliss in Southern California, and then a man from Addison-Wesley came to visit me and said "Don, we would like you to write a book about how to write compilers."

The more I thought about it, I decided "Oh yes, I've got this book inside of me."

I sketched out that day -- I still have the sheet of tablet paper on which I wrote -- I sketched out 12 chapters that I thought ought to be in such a book. I told Jill, my wife, "I think I'm going to write a book."

As I say, we had four months of bliss, because the rest of our marriage has all been devoted to this book. Well, we still have had happiness. But really, I wake up every morning and I still haven't finished the book. So I try to -- I have to -- organize the rest of my life around this, as one main unifying theme. The book was supposed to be about how to write a compiler. They had heard about me from one of their editorial advisors, that I knew something about how to do this. The idea appealed to me for two main reasons. One is that I did enjoy writing. In high school I had been editor of the weekly paper. In college I was editor of the science magazine, and I worked on the campus paper as copy editor. And, as I told you, I wrote the manual for that compiler that we wrote. I enjoyed writing, number one.

Also, Addison-Wesley was the people who were asking me to do this book; my favorite textbooks had been published by Addison Wesley. They had done the books that I loved the most as a student. For them to come to me and say, "Would you write a book for us?", and here I am just a second-year graduate student -- this was a thrill.

Another very important reason at the time was that I knew that there was a great need for a book about compilers, because there were a lot of people who even in 1962 -- this was January of 1962 -- were starting to rediscover the wheel. The knowledge was out there, but it hadn't been explained. The people who had discovered it, though, were scattered all over the world and they didn't know of each other's work either, very much. I had been following it. Everybody I could think of who could write a book about compilers, as far as I could see, they would only give a piece of the fabric. They would slant it to their own view of it. There might be four people who could write about it, but they would write four different books. I could present all four of their viewpoints in what I would think was a balanced way, without any axe to grind, without slanting it towards something that I thought would be misleading to the compiler writer for the future. I considered myself as a journalist, essentially. I could be the expositor, the tech writer, that could do the job that was needed in order to take the work of these brilliant people and make it accessible to the world. That was my motivation. Now, I didn't have much time to spend on it then, I just had this page of paper with 12 chapter headings on it. That's all I could do while I'm a consultant at Burroughs and doing my graduate work. I signed a contract, but they said "We know it'll take you a while." I didn't really begin to have much time to work on it until 1963, my third year of graduate school, as I'm already finishing up on my thesis. In the summer of '62, I guess I should mention, I wrote another compiler. This was for Univac; it was a FORTRAN compiler. I spent the summer, I sold my soul to the devil, I guess you say, for three months in the summer of 1962 to write a FORTRAN compiler. I believe that the salary for that was $15,000, which was much more than an assistant professor. I think assistant professors were getting eight or nine thousand in those days.

Feigenbaum: Well, when I started in 1960 at [University of California] Berkeley, I was getting $7,600 for the nine-month year.

Knuth: Yeah, so you see it. I got $15,000 for a summer job in 1962 writing a FORTRAN compiler. One day during that summer I was writing the part of the compiler that looks up identifiers in a hash table. The method that we used is called linear probing. Basically you take the variable name that you want to look up, you scramble it, like you square it or something like this, and that gives you a number between one and, well in those days it would have been between 1 and 1000, and then you look there. If you find it, good; if you don't find it, go to the next place and keep on going until you either get to an empty place, or you find the number you're looking for. It's called linear probing.

There was a rumor that one of Professor Feller's students at Princeton had tried to figure out how fast linear probing works and was unable to succeed. This was a new thing for me. It was a case where I was doing programming, but I also had a mathematical problem that would go into my other [job]. My winter job was being a math student, my summer job was writing compilers. There was no mix. These worlds did not intersect at all in my life at that point.

So I spent one day during the summer while writing the compiler looking at the mathematics of how fast does linear probing work. I got lucky, and I solved the problem. I figured out some math, and I kept two or three sheets of paper with me and I typed it up. ["Notes on 'Open' Addressing", 7/22/63] I guess that's on the internet now, because this became really the genesis of my main research work, which developed not to be working on compilers, but to be working on what they call analysis of algorithms, which is, have a computer method and find out how good is it quantitatively. I can say, if I got so many things to look up in the table, how long is linear probing going to take. It dawned on me that this was just one of many algorithms that would be important, and each one would lead to a fascinating mathematical problem. This was easily a good lifetime source of rich problems to work on.

Here I am then, in the middle of 1962, writing this FORTRAN compiler, and I had one day to do the research and mathematics that changed my life for my future research trends. But now I've gotten off the topic of what your original question was.

Feigenbaum: We were talking about sort of the.. You talked about the embryo of The Art of Computing. The compiler book morphed into The Art of Computer Programming, which became a seven-volume plan.

Knuth: Exactly. Anyway, I'm working on a compiler and I'm thinking about this. But now I'm starting, after I finish this summer job, then I began to do things that were going to be relating to the book. One of the things I knew I had to have in the book was an artificial machine, because I'm writing a compiler book but machines are changing faster than I can write books. I have to have a machine that I'm totally in control of. I invented this machine called MIX, which was typical of the computers of 1962.

In 1963 I wrote a simulator for MIX so that I could write sample programs for it, and I taught a class at Caltech on how to write programs in assembly language for this hypothetical computer. Then I started writing the parts that dealt with sorting problems and searching problems, like the linear probing idea. I began to write those parts, which are part of a compiler, of the book. I had several hundred pages of notes gathering for those chapters for The Art of Computer Programming. Before I graduated, I've already done quite a bit of writing on The Art of Computer Programming.

I met George Forsythe about this time. George was the man who inspired both of us [Knuth and Feigenbaum] to come to Stanford during the '60s. George came down to Southern California for a talk, and he said, "Come up to Stanford. How about joining our faculty?" I said "Oh no, I can't do that. I just got married, and I've got to finish this book first." I said, "I think I'll finish the book next year, and then I can come up [and] start thinking about the rest of my life, but I want to get my book done before my son is born." Well, John is now 40-some years old and I'm not done with the book. Part of my lack of expertise is any good estimation procedure as to how long projects are going to take. I way underestimated how much needed to be written about in this book. Anyway, I started writing the manuscript, and I went merrily along writing pages of things that I thought really needed to be said. Of course, it didn't take long before I had started to discover a few things of my own that weren't in any of the existing literature. I did have an axe to grind. The message that I was presenting was in fact not going to be unbiased at all. It was going to be based on my own particular slant on stuff, and that original reason for why I should write the book became impossible to sustain. But the fact that I had worked on linear probing and solved the problem gave me a new unifying theme for the book. I was going to base it around this idea of analyzing algorithms, and have some quantitative ideas about how good methods were. Not just that they worked, but that they worked well: this method worked 3 times better than this method, or 3.1 times better than this method. Also, at this time I was learning mathematical techniques that I had never been taught in school. I found they were out there, but they just hadn't been emphasized openly, about how to solve problems of this kind.

So my book would also present a different kind of mathematics than was common in the curriculum at the time, that was very relevant to analysis of algorithm. I went to the publishers, I went to Addison Wesley, and said "How about changing the title of the book from 'The Art of Computer Programming' to 'The Analysis of Algorithms'." They said that will never sell; their focus group couldn't buy that one. I'm glad they stuck to the original title, although I'm also glad to see that several books have now come out called "The Analysis of Algorithms", 20 years down the line.

But in those days, The Art of Computer Programming was very important because I'm thinking of the aesthetical: the whole question of writing programs as something that has artistic aspects in all senses of the word. The one idea is "art" which means artificial, and the other "art" means fine art. All these are long stories, but I've got to cover it fairly quickly.

I've got The Art of Computer Programming started out, and I'm working on my 12 chapters. I finish a rough draft of all 12 chapters by, I think it was like 1965. I've got 3,000 pages of notes, including a very good example of what you mentioned about seeing holes in the fabric. One of the most important chapters in the book is parsing: going from somebody's algebraic formula and figuring out the structure of the formula. Just the way I had done in seventh grade finding the structure of English sentences, I had to do this with mathematical sentences.

Chapter ten is all about parsing of context-free language, [which] is what we called it at the time. I covered what people had published about context-free languages and parsing. I got to the end of the chapter and I said, well, you can combine these ideas and these ideas, and all of a sudden you get a unifying thing which goes all the way to the limit. These other ideas had sort of gone partway there. They would say "Oh, if a grammar satisfies this condition, I can do it efficiently." "If a grammar satisfies this condition, I can do it efficiently." But now, all of a sudden, I saw there was a way to say I can find the most general condition that can be done efficiently without looking ahead to the end of the sentence. That you could make a decision on the fly, reading from left to right, about the structure of the thing. That was just a natural outgrowth of seeing the different pieces of the fabric that other people had put together, and writing it into a chapter for the first time. But I felt that this general concept, well, I didn't feel that I had surrounded the concept. I knew that I had it, and I could prove it, and I could check it, but I couldn't really intuit it all in my head. I knew it was right, but it was too hard for me, really, to explain it well.

So I didn't put it in The Art of Computer Programming. I thought it was beyond the scope of my book. Textbooks don't have to cover everything when you get to the harder things; then you have to go to the literature. My idea at that time [is] I'm writing this book and I'm thinking it's going to be published very soon, so any little things I discover and put in the book I didn't bother to write a paper and publish in the journal because I figure it'll be in my book pretty soon anyway. Computer science is changing so fast, my book is bound to be obsolete.

It takes a year for it to go through editing, and people drawing the illustrations, and then they have to print it and bind it and so on. I have to be a little bit ahead of the state-of-the-art if my book isn't going to be obsolete when it comes out. So I kept most of the stuff to myself that I had, these little ideas I had been coming up with. But when I got to this idea of left-to-right parsing, I said "Well here's something I don't really understand very well. I'll publish this, let other people figure out what it is, and then they can tell me what I should have said." I published that paper I believe in 1965, at the end of finishing my draft of the chapter, which didn't get as far as that story, LR(k). Well now, textbooks of computer science start with LR(k) and take off from there. But I want to give you an idea of

[Sep 05, 2019] linux - Directory bookmarking for bash - Stack Overflow

Notable quotes:
"... May you wan't to change this alias to something which fits your needs ..."
Jul 29, 2017 | stackoverflow.com

getmizanur , asked Sep 10 '11 at 20:35

Is there any directory bookmarking utility for bash to allow moving around faster on the command line?

UPDATE

Thanks, guys, for the feedback. However, I created my own simple shell script (feel free to modify/expand it).

function cdb() {
    USAGE="Usage: cdb [-c|-g|-d|-l] [bookmark]" ;
    if  [ ! -e ~/.cd_bookmarks ] ; then
        mkdir ~/.cd_bookmarks
    fi

    case $1 in
        # create bookmark
        -c) shift
            if [ ! -f ~/.cd_bookmarks/$1 ] ; then
                echo "cd `pwd`" > ~/.cd_bookmarks/"$1" ;
            else
                echo "Try again! Looks like there is already a bookmark '$1'"
            fi
            ;;
        # goto bookmark
        -g) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then 
                source ~/.cd_bookmarks/"$1"
            else
                echo "Mmm...looks like your bookmark has spontaneously combusted. What I mean to say is that your bookmark does not exist." ;
            fi
            ;;
        # delete bookmark
        -d) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then 
                rm ~/.cd_bookmarks/"$1" ;
            else
                echo "Oops, forgot to specify the bookmark" ;
            fi    
            ;;
        # list bookmarks
        -l) shift
            ls -l ~/.cd_bookmarks/ ;
            ;;
         *) echo "$USAGE" ;
            ;;
    esac
}

INSTALL

1./ create a file ~/.cdb and copy the above script into it.

2./ in your ~/.bashrc add the following

if [ -f ~/.cdb ]; then
    source ~/.cdb
fi

3./ restart your bash session

USAGE

1./ to create a bookmark

$cd my_project
$cdb -c project1

2./ to goto a bookmark

$cdb -g project1

3./ to list bookmarks

$cdb -l

4./ to delete a bookmark

$cdb -d project1

5./ where are all my bookmarks stored?

$cd ~/.cd_bookmarks

Fredrik Pihl , answered Sep 10 '11 at 20:47

Also, have a look at CDPATH

A colon-separated list of search paths available to the cd command, similar in function to the $PATH variable for binaries. The $CDPATH variable may be set in the local ~/.bashrc file.

bash$ cd bash-doc
bash: cd: bash-doc: No such file or directory

bash$ CDPATH=/usr/share/doc
bash$ cd bash-doc
/usr/share/doc/bash-doc

bash$ echo $PWD
/usr/share/doc/bash-doc

and

cd -

It's the command-line equivalent of the back button (takes you to the previous directory you were in).
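A minimal illustration:

cd /tmp
cd /etc
cd -          # prints /tmp and takes you back there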

ajreal , answered Sep 10 '11 at 20:41

In bash script/command,
you can use pushd and popd

pushd

Save and then change the current directory. With no arguments, pushd exchanges the top two directories.

Usage

cd /abc
pushd /xxx    <-- save /abc on the directory stack and cd to /xxx
pushd /zzz    <-- save /xxx on the stack and cd to /zzz
pushd +1      <-- rotate the stack: /xxx comes back to the top and becomes the current directory

popd removes the top entry from the directory stack and changes to the directory that is then on top (the reverse of pushd).
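A minimal sketch of the stack behaviour (directories are illustrative):

cd /abc
pushd /xxx     # stack: /xxx /abc  -- now in /xxx
pushd /zzz     # stack: /zzz /xxx /abc  -- now in /zzz
popd           # drops /zzz, back in /xxx
popd           # drops /xxx, back in /abc
dirs -v        # show what is left on the directory stack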

fgm , answered Sep 11 '11 at 8:28

bookmarks.sh provides a bookmark management system for Bash version 4.0+. It can also use a Midnight Commander hotlist.

Dmitry Frank , answered Jun 16 '15 at 10:22

Thanks for sharing your solution; I'd like to share mine as well, which I find more useful than anything else I've come across.

The engine is a great, universal tool: fzf, the command-line fuzzy finder by Junegunn.

It primarily allows you to "fuzzy-find" files in a number of ways, but it also allows you to feed arbitrary text data to it and filter this data. So, the shortcuts idea is simple: all we need is to maintain a file with paths (which are the shortcuts), and fuzzy-filter this file. Here's how it looks: we type the cdg command (from "cd global", if you like), get a list of our bookmarks, pick the needed one in just a few keystrokes, and press Enter. The working directory is changed to the picked item.

It is extremely fast and convenient: usually I just type 3-4 letters of the needed item, and all others are already filtered out. Additionally, of course, we can move through the list with arrow keys or with vim-like keybindings Ctrl+j / Ctrl+k.

Article with details: Fuzzy shortcuts for your shell .
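The function itself is not reproduced here, but the idea can be sketched in a few lines, assuming fzf is installed and the bookmark file is ~/.cdg_paths with one directory per line (both names are illustrative, not taken from the article):

cdg() {
    local dest
    dest=$(fzf < ~/.cdg_paths) && cd "$dest"
}

# add the current directory as a bookmark
pwd >> ~/.cdg_paths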

It is possible to use it for GUI applications as well (via xterm): I use that for my GUI file manager Double Commander . I have plans to write an article about this use case, too.

return42 , answered Feb 6 '15 at 11:56

Inspired by the question and answers here, I added the lines below to my ~/.bashrc file.

With this you have a favdir command (function) to manage your favorites and an autocompletion function to select an item from these favorites.

# ---------
# Favorites
# ---------

__favdirs_storage=~/.favdirs
__favdirs=( "$HOME" )

containsElement () {
    local e
    for e in "${@:2}"; do [[ "$e" == "$1" ]] && return 0; done
    return 1
}

function favdirs() {

    local cur
    local IFS
    local GLOBIGNORE

    case $1 in
        list)
            echo "favorite folders ..."
            printf -- ' - %s\n' "${__favdirs[@]}"
            ;;
        load)
            if [[ ! -e $__favdirs_storage ]] ; then
                favdirs save
            fi
            # mapfile requires bash 4 / my OS-X bash vers. is 3.2.53 (from 2007 !!?!).
            # mapfile -t __favdirs < $__favdirs_storage
            IFS=$'\r\n' GLOBIGNORE='*' __favdirs=($(< $__favdirs_storage))
            ;;
        save)
            printf -- '%s\n' "${__favdirs[@]}" > $__favdirs_storage
            ;;
        add)
            cur=${2-$(pwd)}
            favdirs load
            if containsElement "$cur" "${__favdirs[@]}" ; then
                echo "'$cur' allready exists in favorites"
            else
                __favdirs+=( "$cur" )
                favdirs save
                echo "'$cur' added to favorites"
            fi
            ;;
        del)
            cur=${2-$(pwd)}
            favdirs load
            local i=0
            for fav in ${__favdirs[@]}; do
                if [ "$fav" = "$cur" ]; then
                    echo "delete '$cur' from favorites"
                    unset __favdirs[$i]
                    favdirs save
                    break
                fi
                let i++
            done
            ;;
        *)
            echo "Manage favorite folders."
            echo ""
            echo "usage: favdirs [ list | load | save | add | del ]"
            echo ""
            echo "  list : list favorite folders"
            echo "  load : load favorite folders from $__favdirs_storage"
            echo "  save : save favorite directories to $__favdirs_storage"
            echo "  add  : add directory to favorites [default pwd $(pwd)]."
            echo "  del  : delete directory from favorites [default pwd $(pwd)]."
    esac
} && favdirs load

function __favdirs_compl_command() {
    COMPREPLY=( $( compgen -W "list load save add del" -- ${COMP_WORDS[COMP_CWORD]}))
} && complete -o default -F __favdirs_compl_command favdirs

function __favdirs_compl() {
    local IFS=$'\n'
    COMPREPLY=( $( compgen -W "${__favdirs[*]}" -- ${COMP_WORDS[COMP_CWORD]}))
}

alias _cd='cd'
complete -F __favdirs_compl _cd

Within the last two lines, an alias to change the current directory (with autocompletion) is created. With this alias ( _cd ) you are able to change to one of your favorite directories. Maybe you want to change this alias to something that fits your needs.

With the function favdirs you can manage your favorites (see usage).

$ favdirs 
Manage favorite folders.

usage: favdirs [ list | load | save | add | del ]

  list : list favorite folders
  load : load favorite folders from ~/.favdirs
  save : save favorite directories to ~/.favdirs
  add  : add directory to favorites [default pwd /tmp ].
  del  : delete directory from favorites [default pwd /tmp ].

Zied , answered Mar 12 '14 at 9:53

Yes, there is DirB: Directory Bookmarks for Bash, well explained in this Linux Journal article.

An example from the article:

% cd ~/Desktop
% s d       # save(bookmark) ~/Desktop as d
% cd /tmp   # go somewhere
% pwd
/tmp
% g d       # go to the desktop
% pwd
/home/Desktop

Al Conrad , answered Sep 4 '15 at 16:10

@getmizanur I used your cdb script. I enhanced it slightly by adding bookmarks tab completion. Here's my version of your cdb script.
_cdb()
{
    local _script_commands=$(ls -1 ~/.cd_bookmarks/)
    local cur=${COMP_WORDS[COMP_CWORD]}

    COMPREPLY=( $(compgen -W "${_script_commands}" -- $cur) )
}
complete -F _cdb cdb


function cdb() {

    local USAGE="Usage: cdb [-h|-c|-d|-g|-l|-s] [bookmark]\n
    \t[-h or no args] - prints usage help\n
    \t[-c bookmark] - create bookmark\n
    \t[-d bookmark] - delete bookmark\n
    \t[-g bookmark] - goto bookmark\n
    \t[-l] - list bookmarks\n
    \t[-s bookmark] - show bookmark location\n
    \t[bookmark] - same as [-g bookmark]\n
    Press tab for bookmark completion.\n"        

    if  [ ! -e ~/.cd_bookmarks ] ; then
        mkdir ~/.cd_bookmarks
    fi

    case $1 in
        # create bookmark
        -c) shift
            if [ ! -f ~/.cd_bookmarks/$1 ] ; then
                echo "cd `pwd`" > ~/.cd_bookmarks/"$1"
                complete -F _cdb cdb
            else
                echo "Try again! Looks like there is already a bookmark '$1'"
            fi
            ;;
        # goto bookmark
        -g) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then
                source ~/.cd_bookmarks/"$1"
            else
                echo "Mmm...looks like your bookmark has spontaneously combusted. What I mean to say is that your bookmark does not exist." ;
            fi
            ;;
        # show bookmark
        -s) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then
                cat ~/.cd_bookmarks/"$1"
            else
                echo "Mmm...looks like your bookmark has spontaneously combusted. What I mean to say is that your bookmark does not exist." ;
            fi
            ;;
        # delete bookmark
        -d) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then
                rm ~/.cd_bookmarks/"$1" ;
            else
                echo "Oops, forgot to specify the bookmark" ;
            fi
            ;;
        # list bookmarks
        -l) shift
            ls -1 ~/.cd_bookmarks/ ;
            ;;
        -h) echo -e $USAGE ;
            ;;
        # goto bookmark by default
        *)
            if [ -z "$1" ] ; then
                echo -e $USAGE
            elif [ -f ~/.cd_bookmarks/$1 ] ; then
                source ~/.cd_bookmarks/"$1"
            else
                echo "Mmm...looks like your bookmark has spontaneously combusted. What I mean to say is that your bookmark does not exist." ;
            fi
            ;;
    esac
}

tobimensch , answered Jun 5 '16 at 21:31

Yes, one that I have written, that is called anc.

https://github.com/tobimensch/anc

Anc stands for anchor, but anc's anchors are really just bookmarks.

It's designed for ease of use and there are multiple ways of navigating, either by giving a text pattern, using numbers, interactively, by going back, or using [TAB] completion.

I'm actively working on it and open to input on how to make it better.

Allow me to paste the examples from anc's github page here:

# make the current directory the default anchor:
$ anc s

# go to /etc, then /, then /usr/local and then back to the default anchor:
$ cd /etc; cd ..; cd usr/local; anc

# go back to /usr/local :
$ anc b

# add another anchor:
$ anc a $HOME/test

# view the list of anchors (the default one has the asterisk):
$ anc l
(0) /path/to/first/anchor *
(1) /home/usr/test

# jump to the anchor we just added:
# by using its anchor number
$ anc 1
# or by jumping to the last anchor in the list
$ anc -1

# add multiple anchors:
$ anc a $HOME/projects/first $HOME/projects/second $HOME/documents/first

# use text matching to jump to $HOME/projects/first
$ anc pro fir

# use text matching to jump to $HOME/documents/first
$ anc doc fir

# add anchor and jump to it using an absolute path
$ anc /etc
# is the same as
$ anc a /etc; anc -1

# add anchor and jump to it using a relative path
$ anc ./X11 #note that "./" is required for relative paths
# is the same as
$ anc a X11; anc -1

# using wildcards you can add many anchors at once
$ anc a $HOME/projects/*

# use shell completion to see a list of matching anchors
# and select the one you want to jump to directly
$ anc pro[TAB]

Cảnh Toàn Nguyễn , answered Feb 20 at 5:41

Bashmarks is an amazingly simple and intuitive utility. In short, after installation, the usage is:
s <bookmark_name> - Saves the current directory as "bookmark_name"
g <bookmark_name> - Goes (cd) to the directory associated with "bookmark_name"
p <bookmark_name> - Prints the directory associated with "bookmark_name"
d <bookmark_name> - Deletes the bookmark
l                 - Lists all available bookmarks


For short-term shortcuts, I have the following in my respective init script (sorry, I can't find the source right now and didn't bother then):
function b() {
    alias $1="cd `pwd -P`"
}

Usage:

In any directory that you want to bookmark type

b THEDIR # <THEDIR> being the name of your 'bookmark'

It will create an alias to cd (back) to here.

To return to a 'bookmarked' dir type

THEDIR

It will run the stored alias and cd back there.

Caution: Use only if you understand that this might override existing shell aliases and what that means.

[Sep 04, 2019] Basic Trap for File Cleanup

Sep 04, 2019 | www.putorius.net

Basic Trap for File Cleanup

Using a trap to clean up is simple enough. Here is an example of using trap to clean up a temporary file on exit of the script.

#!/bin/bash
trap "rm -f /tmp/output.txt" EXIT
yum -y update > /tmp/output.txt
if grep -qi "kernel" /tmp/output.txt; then
     mail -s "KERNEL UPDATED" user@example.com < /tmp/output.txt
fi

NOTE: It is important that the trap statement be placed at the beginning of the script to function properly. Any commands above the trap can exit and not be caught in the trap.

Now if the script exits for any reason, it will still run the rm command to delete the file. Here is an example of me sending SIGINT (CTRL+C) while the script was running.

# ./test.sh
 ^Cremoved '/tmp/output.txt'

NOTE: I added verbose ( -v ) output to the rm command so it prints "removed". The ^C signifies where I hit CTRL+C to send SIGINT.

This is a much cleaner and safer way to ensure the cleanup occurs when the script exits. Using EXIT ( 0 ) instead of a single defined signal (e.g. SIGINT, 2) ensures the cleanup happens on any exit, even successful completion of the script.
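A minimal sketch of the difference (the file name is illustrative): trapping EXIT covers Ctrl+C, errors and normal completion alike, while trapping only a specific signal misses the other exit paths.

tmpfile=$(mktemp)
cleanup() { rm -f "$tmpfile"; }
trap cleanup EXIT        # runs on any exit
# trap cleanup INT       # would fire only on SIGINT (Ctrl+C)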

[Sep 04, 2019] Exec - Process Replacement Redirection in Bash by Steven Vona

Sep 02, 2019 | www.putorius.net

The Linux exec command is a bash builtin and a very interesting utility. It is not something most people who are new to Linux know. Most seasoned admins understand it but only use it occasionally. If you are a developer, programmer or DevOps engineer, it is probably something you use more often. Let's take a deep dive into the builtin exec command, what it does and how to use it.


Basics of the Sub-Shell

In order to understand the exec command, you need a fundamental understanding of how sub-shells work.

... ... ...

What the Exec Command Does

In its most basic function, the exec command changes the default behavior of creating a sub-shell to run a command. If you run exec followed by a command, that command will REPLACE the original process; it will NOT create a sub-shell.

An additional feature of the exec command is redirection and manipulation of file descriptors. Explaining redirection and file descriptors is outside the scope of this tutorial. If these are new to you, please read " Linux IO, Standard Streams and Redirection " to get acquainted with these terms and functions.

In the following sections we will expand on both of these functions and try to demonstrate how to use them.

How to Use the Exec Command with Examples

Let's look at some examples of how to use the exec command and its options.

Basic Exec Command Usage – Replacement of Process

If you call exec and supply a command without any options, it simply replaces the shell with that command.

Let's run an experiment. First, I ran the ps command to find the process id of my second terminal window. In this case it was 17524. I then ran "exec tail" in that second terminal and checked the ps command again. If you look at the screenshot below, you will see the tail process replaced the bash process (same process ID).

[Screenshot: the exec command replacing the parent process instead of creating a sub-shell]

Since the tail command replaced the bash shell process, the shell will close when the tail command terminates.
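Since the original screenshot is not reproduced here, a minimal sketch of the same experiment (run it in a throwaway terminal, because the shell is replaced and the window closes when tail exits):

echo $$                            # note the PID of the current bash process
exec tail -f /etc/redhat-release   # tail now runs under that same PID
# from another terminal: ps -p <PID> -o pid,comm  shows tail, not bash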

Exec Command Options

If the -l option is supplied, exec adds a dash at the beginning of the first (zeroth) argument given. So if we ran the following command:

exec -l tail -f /etc/redhat-release

In the process list, the command would then show up with a leading dash at the start of the CMD column, as sketched below.
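A minimal sketch of how to verify this yourself (again in a spare terminal, since the shell is replaced):

exec -l tail -f /etc/redhat-release
# from another terminal: ps -eo pid,args | grep [t]ail
# the args column starts with "-tail", i.e. the dash prepended by -l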

The -c option causes the supplied command to run with an empty environment. Environment variables like PATH are cleared before the command is run. Let's try an experiment. We know that the printenv command prints all the settings for a user's environment. So here we will open a new bash process and run the printenv command to show we have some variables set. We will then run printenv again, but this time with the exec -c option.

[Animation: exec command output with the -c option supplied]

In the example above you can see that an empty environment is used when using exec with the -c option. This is why there was no output from the printenv command when run with exec.
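A minimal sketch of the same experiment in one line (a throwaway bash process is replaced by printenv running in an empty environment, so nothing is printed):

bash -c 'exec -c printenv'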

The last option, -a [name], will pass name as the zeroth argument (the process name) to command . The command will still run as expected, but the name of the process will change. In this next example we opened a second terminal and ran the following command:

exec -a PUTORIUS tail -f /etc/redhat-release

Here is the process list showing the results of the above command:

[Screenshot: the exec command using the -a option to replace the name shown for the process]

As you can see, exec passed PUTORIUS as the zeroth argument (process name) to command , therefore it shows in the process list with that name.

Using the Exec Command for Redirection & File Descriptor Manipulation

The exec command is often used for redirection. When a file descriptor is redirected with exec it affects the current shell. It will exist for the life of the shell or until it is explicitly stopped.

If no command is specified, redirections may be used to affect the current shell environment.

– Bash Manual

Here are some examples of how to use exec for redirection and manipulating file descriptors. As we stated above, a deep dive into redirection and file descriptors is outside the scope of this tutorial. Please read " Linux IO, Standard Streams and Redirection " for a good primer and see the resources section for more information.

Redirect all standard output (STDOUT) to a file:
exec >file

In the example below, we use exec to redirect all standard output to a file. We then enter some commands that should generate some output. We then use exec to redirect STDOUT to /dev/tty to restore standard output to the terminal. This effectively stops the redirection. Using the cat command we can see that the file contains all the redirected output.

[Screenshot: using exec to redirect all standard output to a file]
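A minimal sketch of that sequence (the file name is illustrative):

exec > /tmp/capture.txt    # from here on, STDOUT of the current shell goes to the file
date
uname -r
exec > /dev/tty            # restore STDOUT to the terminal
cat /tmp/capture.txt       # shows the captured output of date and uname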
Open a file as file descriptor 6 for writing:
exec 6> file2write
Open file as file descriptor 8 for reading:
exec 8< file2read
Copy file descriptor 5 to file descriptor 7:
exec 7<&5
Close file descriptor 8:
exec 8<&-
Conclusion

In this article we covered the basics of the exec command. We discussed how to use it for process replacement, redirection and file descriptor manipulation.

In the past I have seen exec used in some interesting ways. It is often used as a wrapper script for starting other binaries. Using process replacement you can call a binary and when it takes over there is no trace of the original wrapper script in the process table or memory. I have also seen many System Administrators use exec when transferring work from one script to another. If you call a script inside of another script the original process stays open as a parent. You can use exec to replace that original script.
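A minimal wrapper sketch of that pattern (all paths are illustrative): the wrapper prepares the environment and then replaces itself with the real binary, so no parent script lingers in the process table.

#!/bin/bash
export APP_HOME=/opt/myapp
cd "$APP_HOME" || exit 1
exec "$APP_HOME/bin/myapp" "$@"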

I am sure there are people out there using exec in some interesting ways. I would love to hear your experiences with exec. Please feel free to leave a comment below with anything on your mind.

Resources

[Sep 03, 2019] bash - How to convert strings like 19-FEB-12 to epoch date in UNIX - Stack Overflow

Feb 11, 2013 | stackoverflow.com


hellish ,Feb 11, 2013 at 3:45

In UNIX, how do I convert date strings like the following to epoch milliseconds:
19-FEB-12
16-FEB-12
05-AUG-09

I need this to compare these dates with the current time on the server.


To convert a date to seconds since the epoch:
date --date="19-FEB-12" +%s

Current epoch:

date +%s

So, since your dates are in the past:

NOW=`date +%s`
THEN=`date --date="19-FEB-12" +%s`

let DIFF=$NOW-$THEN
echo "The difference is: $DIFF"

Using BSD's date command, you would need

$ date -j -f "%d-%B-%y" 19-FEB-12 +%s

Differences from GNU date :

  1. -j prevents date from trying to set the clock
  2. The input format must be explicitly set with -f
  3. The input date is a regular argument, not an option (viz. -d )
  4. When no time is specified with the date, use the current time instead of midnight.

[Sep 03, 2019] Linux - UNIX Convert Epoch Seconds To the Current Time - nixCraft

Sep 03, 2019 | www.cyberciti.biz

Print Current UNIX Time

Type the following command to display the seconds since the epoch:

date +%s


Sample outputs:
1268727836

Convert Epoch To Current Time

Type the command:

date -d @Epoch
date -d @1268727836
date -d "1970-01-01 1268727836 sec GMT"


Sample outputs:

Tue Mar 16 13:53:56 IST 2010

Please note that the @ feature only works with recent versions of date (GNU coreutils v5.3.0+). To convert the number of seconds back to a more readable form, use a command like this:

date -d @1268727836 +"%d-%m-%Y %T %z"


Sample outputs:

16-03-2010 13:53:56 +0530

[Sep 03, 2019] command line - How do I convert an epoch timestamp to a human readable format on the cli - Unix Linux Stack Exchange

Sep 03, 2019 | unix.stackexchange.com

Gilles ,Oct 11, 2010 at 18:14

date -d @1190000000
Replace 1190000000 with your epoch.

Stefan Lasiewski ,Oct 11, 2010 at 18:04

$ echo 1190000000 | perl -pe 's/(\d+)/localtime($1)/e' 
Sun Sep 16 20:33:20 2007

This can come in handy for those applications which use epoch time in the logfiles:

$ tail -f /var/log/nagios/nagios.log | perl -pe 's/(\d+)/localtime($1)/e'
[Thu May 13 10:15:46 2010] EXTERNAL COMMAND: PROCESS_SERVICE_CHECK_RESULT;HOSTA;check_raid;0;check_raid.pl: OK (Unit 0 on Controller 0 is OK)

Stéphane Chazelas ,Jul 31, 2015 at 20:24

With bash-4.2 or above:
printf '%(%F %T)T\n' 1234567890

(where %F %T is the strftime() -type format)

That syntax is inspired from ksh93 .

In ksh93 however, the argument is taken as a date expression where various and hardly documented formats are supported.

For a Unix epoch time, the syntax in ksh93 is:

printf '%(%F %T)T\n' '#1234567890'

ksh93 however seems to use its own algorithm for the timezone and can get it wrong. For instance, in Britain, it was summer time all year in 1970, but:

$ TZ=Europe/London bash -c 'printf "%(%c)T\n" 0'
Thu 01 Jan 1970 01:00:00 BST
$ TZ=Europe/London ksh93 -c 'printf "%(%c)T\n" "#0"'
Thu Jan  1 00:00:00 1970

DarkHeart ,Jul 28, 2014 at 3:56

Custom format with GNU date :
date -d @1234567890 +'%Y-%m-%d %H:%M:%S'

Or with GNU awk :

awk 'BEGIN { print strftime("%Y-%m-%d %H:%M:%S", 1234567890); }'

Linked SO question: https://stackoverflow.com/questions/3249827/convert-from-unixtime-at-command-line


The two I frequently use are:
$ perl -leprint\ scalar\ localtime\ 1234567890
Sat Feb 14 00:31:30 2009

[Sep 03, 2019] Time conversion using Bash Vanstechelman.eu

Sep 03, 2019 | www.vanstechelman.eu

Time conversion using Bash

This article shows how you can obtain the UNIX epoch time (number of seconds since 1970-01-01 00:00:00 UTC) using the Linux bash "date" command. It also shows how you can convert a UNIX epoch time to a human readable time.

Obtain UNIX epoch time using bash
Obtaining the UNIX epoch time using bash is easy. Use the built-in date command and instruct it to output the number of seconds since 1970-01-01 00:00:00 UTC. You can do this by passing a format string as parameter to the date command. The format string for UNIX epoch time is '%s'.

lode@srv-debian6:~$ date "+%s"
1234567890

To convert a specific date and time into UNIX epoch time, use the -d parameter. The next example shows how to convert the timestamp "February 20th, 2013 at 08:41:15" into UNIX epoch time.

lode@srv-debian6:~$ date "+%s" -d "02/20/2013 08:41:15"
1361346075

Converting UNIX epoch time to human readable time
Even though I didn't find it in the date manual, it is possible to use the date command to reformat a UNIX epoch time into a human readable time. The syntax is the following:

lode@srv-debian6:~$ date -d @1234567890
Sat Feb 14 00:31:30 CET 2009

The same thing can also be achieved using a bit of perl programming:

lode@srv-debian6:~$ perl -e 'print scalar(localtime(1234567890)), "\n"'
Sat Feb 14 00:31:30 2009

Please note that the printed time is formatted in the timezone for which your Linux system is configured. My system is configured for UTC+2, so you may get different output for the same command.

[Sep 03, 2019] Run PerlTidy to beautify the code

Notable quotes:
"... Once I installed Code::TidyAll and placed those files in the root directory of the project, I could run tidyall -a . ..."
Sep 03, 2019 | perlmaven.com

The Code-TidyAll distribution provides a command line script called tidyall that will use Perl::Tidy to change the layout of the code.

This tandem needs two configuration files.

The .perltidyrc file contains the instructions to Perl::Tidy that describes the layout of a Perl-file. We used the following file copied from the source code of the Perl Maven project.

-pbp
-nst
-et=4
--maximum-line-length=120

# Break a line after opening/before closing token.
-vt=0
-vtc=0

The tidyall command uses a separate file called .tidyallrc that describes which files need to be beautified.

[PerlTidy]
select = {lib,t}/**/*.{pl,pm,t}
select = Makefile.PL
select = {mod2html,podtree2html,pods2html,perl2html}
argv = --profile=$ROOT/.perltidyrc

[SortLines]
select = .gitignore
Once I installed Code::TidyAll and placed those files in the root directory of the project, I could run tidyall -a .

That created a directory called .tidyall.d/ where it stores cached versions of the files, and changed all the files that were matches by the select statements in the .tidyallrc file.

Then, I added .tidyall.d/ to the .gitignore file to avoid adding that subdirectory to the repository and ran tidyall -a again to make sure the .gitignore file is sorted.
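For reference, the whole round trip can be sketched as follows, assuming the cpanm client is available (any CPAN client will do) and the two dotfiles above already sit in the project root:

cpanm Code::TidyAll Perl::Tidy
cd /path/to/project        # directory holding .perltidyrc and .tidyallrc
tidyall -a                 # beautify everything matched by the select rules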

[Sep 02, 2019] bash - Pretty-print for shell script

Oct 21, 2010 | stackoverflow.com



Benoit ,Oct 21, 2010 at 13:19

I'm looking for something similar to indent but for (bash) scripts. Console only, no colorizing, etc.

Do you know of one ?

Jamie ,Sep 11, 2012 at 3:00

Vim can indent bash scripts. But not reformat them before indenting.
Back up your bash script, open it with vim, type gg=GZZ and the indentation will be corrected. (Note for the impatient: this overwrites the file, so be sure to make that backup!)

There are some bugs with << heredocs, though (vim expects the EOF delimiter to be the first character on a line).

EDIT: ZZ not ZQ

Daniel Martí ,Apr 8, 2018 at 13:52

A bit late to the party, but it looks like shfmt could do the trick for you.

Brian Chrisman ,Aug 11 at 4:08

In bash I do this:
reindent() {
source <(echo "Zibri () {";cat "$1"; echo "}")
declare -f Zibri|head --lines=-1|tail --lines=+3 | sed -e "s/^\s\s\s\s//"
}

this eliminates comments and reindents the script "bash way".

If you have HEREDOCS in your script, they got ruined by the sed in the previous function.

So use:

reindent() {
source <(echo "Zibri () {";cat "$1"; echo "}")
declare -f Zibri|head --lines=-1|tail --lines=+3
}

But all your script will have a 4 spaces indentation.

Or you can do:

reindent () 
{ 
    rstr=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 16 | head -n 1);
    source <(echo "Zibri () {";cat "$1"|sed -e "s/^\s\s\s\s/$rstr/"; echo "}");
    echo '#!/bin/bash';
    declare -f Zibri | head --lines=-1 | tail --lines=+3 | sed -e "s/^\s\s\s\s//;s/$rstr/    /"
}

which takes care also of heredocs.

Pius Raeder ,Jan 10, 2017 at 8:35

Found this http://www.linux-kheops.com/doc/perl/perl-aubert/fmt.script .

Very nice; the only thing I took out is the [...]->test substitution.

[Sep 02, 2019] mvdan-sh A shell parser, formatter, and interpreter (POSIX-Bash-mksh)

Written in Go language
Sep 02, 2019 | github.com

sh

A shell parser, formatter and interpreter. Supports POSIX Shell , Bash and mksh . Requires Go 1.11 or later.

Quick start

To parse shell scripts, inspect them, and print them out, see the syntax examples .

For high-level operations like performing shell expansions on strings, see the shell examples .

shfmt

Go 1.11 and later can download the latest v2 stable release:

cd $(mktemp -d); go mod init tmp; go get mvdan.cc/sh/cmd/shfmt

The latest v3 pre-release can be downloaded in a similar manner, using the /v3 module:

cd $(mktemp -d); go mod init tmp; go get mvdan.cc/sh/v3/cmd/shfmt

Finally, any older release can be built with their respective older Go versions by manually cloning, checking out a tag, and running go build ./cmd/shfmt .

shfmt formats shell programs. It can use tabs or any number of spaces to indent. See canonical.sh for a quick look at its default style.

You can feed it standard input, any number of files or any number of directories to recurse into. When recursing, it will operate on .sh and .bash files and ignore files starting with a period. It will also operate on files with no extension and a shell shebang.

shfmt -l -w script.sh

Typically, CI builds should use the command below, to error if any shell scripts in a project don't adhere to the format:

shfmt -d .

Use -i N to indent with a number of spaces instead of tabs. There are other formatting options - see shfmt -h . For example, to get the formatting appropriate for Google's Style guide, use shfmt -i 2 -ci .

Packages are available on Arch , CRUX , Docker , FreeBSD , Homebrew , NixOS , Scoop , Snapcraft , and Void .

Replacing bash -n

bash -n can be useful to check for syntax errors in shell scripts. However, shfmt >/dev/null can do a better job as it checks for invalid UTF-8 and does all parsing statically, including checking POSIX Shell validity:

$ echo '${foo:1 2}' | bash -n
$ echo '${foo:1 2}' | shfmt
1:9: not a valid arithmetic operator: 2
$ echo 'foo=(1 2)' | bash --posix -n
$ echo 'foo=(1 2)' | shfmt -p
1:5: arrays are a bash feature

gosh

cd $(mktemp -d); go mod init tmp; go get mvdan.cc/sh/v3/cmd/gosh

Experimental shell that uses interp . Work in progress, so don't expect stability just yet.

Fuzzing

This project makes use of go-fuzz to find crashes and hangs in both the parser and the printer. To get started, run:

git checkout fuzz
./fuzz

Caveats

  • When indexing Bash associative arrays, always use quotes. The static parser will otherwise have to assume that the index is an arithmetic expression.
$ echo '${array[spaced string]}' | shfmt
1:16: not a valid arithmetic operator: string
$ echo '${array[dash-string]}' | shfmt
${array[dash - string]}
  • $(( and (( ambiguity is not supported. Backtracking would complicate the parser and make streaming support via io.Reader impossible. The POSIX spec recommends to space the operands if $( ( is meant.
$ echo '$((foo); (bar))' | shfmt
1:1: reached ) without matching $(( with ))
  • Some builtins like export and let are parsed as keywords. This is to allow statically parsing them and building their syntax tree, as opposed to just keeping the arguments as a slice of arguments.

JavaScript

A subset of the Go packages are available as an npm package called mvdan-sh . See the _js directory for more information.

Docker

To build a Docker image, checkout a specific version of the repository and run:

docker build -t my:tag -f cmd/shfmt/Dockerfile .

Related projects

[Sep 01, 2019] Three Ways to Exclude Specific-Certain Packages from Yum Update by Magesh Maruthamuthu

Sep 01, 2019 | www.2daygeek.com

Three Ways to Exclude Specific Packages from Yum Update

· Published : August 28, 2019 || Last Updated: August 31, 2019

Method 1 : Exclude Packages with yum Command Manually or Temporarily

We can use the --exclude or -x switch with the yum command to exclude specific packages from getting updated through yum.

This is a temporary, on-demand method. If you want to exclude a specific package only once, use this method.

To exclude a single package, the command below updates all packages except the kernel:

# yum update --exclude=kernel

or

# yum update -x 'kernel'

To exclude multiple packages, the command below updates all packages except kernel and php:

# yum update --exclude=kernel* --exclude=php*

or

# yum update --exclude httpd,php
Method-2: Exclude Packages with yum Command Permanently

If you frequently perform patch updates, you can use this permanent method.

To do so, add the required packages to /etc/yum.conf to disable their updates permanently.

Once you add an entry, you don't need to specify these packages each time you run the yum update command. This also prevents the packages from any accidental update.

# vi /etc/yum.conf

[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=3
exclude=kernel* php*
Method-3: Exclude Packages Using Yum versionlock plugin

This is also a permanent method, similar to the one above. The yum versionlock plugin allows users to lock specified packages from being updated through the yum command.

To do so, run the following command, which will exclude the freetype package from yum update.

You can also add the package entry directly to the "/etc/yum/pluginconf.d/versionlock.list" file.

# yum versionlock add freetype

Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock
Adding versionlock on: 0:freetype-2.8-12.el7
versionlock added: 1

Use the below command to check the list of packages locked by versionlock plugin.

# yum versionlock list

Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock
0:freetype-2.8-12.el7.*
versionlock list done

Run the following command to discard the list.

# yum versionlock clear

[Aug 31, 2019] Linux on your laptop A closer look at EFI boot options

Aug 31, 2019 | www.zdnet.com
Before EFI, the standard boot process for virtually all PC systems was called "MBR", for Master Boot Record; today you are likely to hear it referred to as "Legacy Boot". This process depended on using the first physical block on a disk to hold some information needed to boot the computer (thus the name Master Boot Record); specifically, it held the disk address at which the actual bootloader could be found, and the partition table that defined the layout of the disk. Using this information, the PC firmware could find and execute the bootloader, which would then bring up the computer and run the operating system.

This system had a number of rather obvious weaknesses and shortcomings. One of the biggest was that you could only have one bootable object on each physical disk drive (at least as far as the firmware boot was concerned). Another was that if that first sector on the disk became corrupted somehow, you were in deep trouble.

Over time, as part of the Extensible Firmware Interface, a new approach to boot configuration was developed. Rather than storing critical boot configuration information in a single "magic" location, EFI uses a dedicated "EFI boot partition" on the disk. This is a completely normal, standard disk partition, the same kind as may be used to hold the operating system or system recovery data.

The only requirement is that it be FAT formatted, and it should have the boot and esp partition flags set (esp stands for EFI System Partition). The specific data and programs necessary for booting are then kept in directories on this partition, typically in directories named to indicate what they are for. So if you have a Windows system, you would typically find directories called 'Boot' and 'Microsoft', and perhaps one named for the manufacturer of the hardware, such as HP. If you have a Linux system, you would find directories called opensuse, debian, ubuntu, or any number of others depending on what particular Linux distribution you are using.

It should be obvious from the description so far that it is perfectly possible with the EFI boot configuration to have multiple boot objects on a single disk drive.

Before going any further, I should make it clear that if you install Linux as the only operating system on a PC, it is not necessary to know all of this configuration information in detail. The installer should take care of setting all of this up, including creating the EFI boot partition (or using an existing EFI boot partition), and further configuring the system boot list so that whatever system you install becomes the default boot target.

If you were to take a brand new computer with UEFI firmware, and load it from scratch with any of the current major Linux distributions, it would all be set up, configured, and working just as it is when you purchase a new computer preloaded with Windows (or when you load a computer from scratch with Windows). It is only when you want to have more than one bootable operating system – especially when you want to have both Linux and Windows on the same computer – that things may become more complicated.

The problems that arise with such "multiboot" systems are generally related to getting the boot priority list defined correctly.

When you buy a new computer with Windows, this list typically includes the Windows bootloader on the primary disk, and then perhaps some other peripheral devices such as USB, network interfaces and such. When you install Linux alongside Windows on such a computer, the installer will add the necessary information to the EFI boot partition, but if the boot priority list is not changed, then when the system is rebooted after installation it will simply boot Windows again, and you are likely to think that the installation didn't work.

There are several ways to modify this boot priority list, but exactly which ones are available and whether or how they work depends on the firmware of the system you are using, and this is where things can get really messy. There are just about as many different UEFI firmware implementations as there are PC manufacturers, and the manufacturers have shown a great deal of creativity in the details of this firmware.

First, in the simplest case, there is a software utility included with Linux called efibootmgr that can be used to modify, add or delete the boot priority list. If this utility works properly, and the changes it makes are permanent on the system, then you would have no other problems to deal with, and after installing it would boot Linux and you would be happy. Unfortunately, while this is sometimes the case it is frequently not. The most common reason for this is that changes made by software utilities are not actually permanently stored by the system BIOS, so when the computer is rebooted the boot priority list is restored to whatever it was before, which generally means that Windows gets booted again.
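
For illustration, a minimal efibootmgr session might look like the following (the entry numbers are hypothetical and will differ on your system; -v lists the current entries and the BootOrder, -o rewrites the order):

# efibootmgr -v
# efibootmgr -o 0003,0000,2001

Whether the new order survives a reboot is exactly the firmware-dependent behaviour described above.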

The other common way of modifying the boot priority list is via the computer BIOS configuration program. The details of how to do this are different for every manufacturer, but the general procedure is approximately the same. First you have to press the BIOS configuration key (usually F2, but not always, unfortunately) during system power-on (POST). Then choose the Boot item from the BIOS configuration menu, which should get you to a list of boot targets presented in priority order. Then you need to modify that list; sometimes this can be done directly in that screen, via the usual F5/F6 up/down key process, and sometimes you need to proceed one level deeper to be able to do that. I wish I could give more specific and detailed information about this, but it really is different on every system (sometimes even on different systems produced by the same manufacturer), so you just need to proceed carefully and figure out the steps as you go.

I have seen a few rare cases of systems where neither of these methods works, or at least they don't seem to be permanent, and the system keeps reverting to booting Windows. Again, there are two ways to proceed in this case. The first is by simply pressing the "boot selection" key during POST (power-on). Exactly which key this is varies, I have seen it be F12, F9, Esc, and probably one or two others. Whichever key it turns out to be, when you hit it during POST you should get a list of bootable objects defined in the EFI boot priority list, so assuming your Linux installation worked you should see it listed there. I have known of people who were satisfied with this solution, and would just use the computer this way and have to press boot select each time they wanted to boot Linux.

The alternative is to actually modify the files in the EFI boot partition, so that the (unchangeable) Windows boot procedure would actually boot Linux. This involves overwriting the Windows file bootmgfw.efi with the Linux file grubx64.efi. I have done this, especially in the early days of EFI boot, and it works, but I strongly advise you to be extremely careful if you try it, and make sure that you keep a copy of the original bootmgfw.efi file. Finally, just as a final (depressing) warning, I have also seen systems where this seemed to work, at least for a while, but then at some unpredictable point the boot process seemed to notice that something had changed and it restored bootmgfw.efi to its original state – thus losing the Linux boot configuration again. Sigh.
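
If you do attempt this, the usual sequence is roughly the following (a sketch only; it assumes the EFI partition is mounted at /boot/efi and a GRUB-based distribution such as Ubuntu, so adjust the grubx64.efi path to your distribution, and keep the backup somewhere safe):

# cp /boot/efi/EFI/Microsoft/Boot/bootmgfw.efi /boot/efi/EFI/Microsoft/Boot/bootmgfw.efi.orig
# cp /boot/efi/EFI/ubuntu/grubx64.efi /boot/efi/EFI/Microsoft/Boot/bootmgfw.efi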

So, that's the basics of EFI boot, and how it can be configured. But there are some important variations possible, and some caveats to be aware of.

[Aug 31, 2019] Programming is about Effective Communication

Aug 31, 2019 | developers.slashdot.org

Anonymous Coward , Friday February 22, 2019 @02:42PM ( #58165060 )

Algorithms, not code ( Score: 4 , Insightful)

Sad to see these are all books about coding and coding style. Nothing at all here about algorithms, or data structures.

My vote goes for Algorithms by Sedgewick

Seven Spirals ( 4924941 ) , Friday February 22, 2019 @02:57PM ( #58165150 )
MOTIF Programming by Marshall Brain ( Score: 3 )

Amazing how little memory and CPU MOTIF applications take. Once you get over the callbacks, it's actually not bad!

Seven Spirals ( 4924941 ) writes:
Re: ( Score: 2 )

Interesting. Sorry you had that experience. I'm not sure what you mean by a "multi-line text widget". I can tell you that early versions of OpenMOTIF were very very buggy in my experience. You probably know this, but after OpenMOTIF was completed and revved a few times the original MOTIF code was released as open-source. Many of the bugs I'd been seeing (and some just strange visual artifacts) disappeared. I know a lot of people love QT and it's produced real apps and real results - I won't poo-poo it. How

SuperKendall ( 25149 ) writes:
Design and Evolution of C++ ( Score: 2 )

Even if you don't like C++ much, The Design and Evolution of C++ [amazon.com] is a great book for understanding why pretty much any language ends up the way it does, seeing the tradeoffs and how a language comes to grow and expand from simple roots. It's way more interesting to read than you might expect (not very dry, and more about human interaction than you would expect).

Other than that reading through back posts in a lot of coding blogs that have been around a long time is probably a really good idea.

Also a side re

shanen ( 462549 ) writes:
What about books that hadn't been written yet? ( Score: 2 )

You young whippersnappers don't 'preciate how good you have it!

Back in my day, the only book about programming was the 1401 assembly language manual!

But seriously, folks, it's pretty clear we still don't know shite about how to program properly. We have some fairly clear success criteria for improving the hardware, but the criteria for good software are clear as mud, and the criteria for ways to produce good software are much muddier than that.

Having said that, I will now peruse the thread rather carefully

shanen ( 462549 ) writes:
TMI, especially PII ( Score: 2 )

Couldn't find any mention of Guy Steele, so I'll throw in The New Hacker's Dictionary , which I once owned in dead tree form. Not sure if Version 4.4.7 http://catb.org/jargon/html/ [catb.org] is the latest online... Also remember a couple of his language manuals. Probably used the Common Lisp one the most...

Didn't find any mention of a lot of books that I consider highly relevant, but that may reflect my personal bias towards history. Not really relevant for most programmers.

TMI, but if I open up my database on all t

UnknownSoldier ( 67820 ) , Friday February 22, 2019 @03:52PM ( #58165532 )
Programming is about **Effective Communication** ( Score: 5 , Insightful)

I've been programming for the past ~40 years and I'll try to summarize what I believe are the most important bits about programming (pardon the pun.) Think of this as a META: " HOWTO: Be A Great Programmer " summary. (I'll get to the books section in a bit.)

1. All code can be summarized as a trinity of 3 fundamental concepts:

* Linear ; that is, sequence: A, B, C
* Cyclic ; that is, unconditional jumps: A-B-C-goto B
* Choice ; that is, conditional jumps: if A then B

2. ~80% of programming is NOT about code; it is about Effective Communication. Whether that be:

* with your compiler / interpreter / REPL
* with other code (levels of abstraction, level of coupling, separation of concerns, etc.)
* with your boss(es) / manager(s)
* with your colleagues
* with your legal team
* with your QA dept
* with your customer(s)
* with the general public

The other ~20% is effective time management and design. A good programmer knows how to budget their time. Programming is about balancing the three conflicting goals of the Project Management Triangle [wikipedia.org]: You can have it on time, on budget, on quality. Pick two.

3. Stages of a Programmer

There are two old jokes:

In Lisp all code is data. In Haskell all data is code.

And:

Progression of a (Lisp) Programmer:

* The newbie realizes that the difference between code and data is trivial.
* The expert realizes that all code is data.
* The true master realizes that all data is code.

(Attributed to Aristotle Pagaltzis)

The point of these jokes is that as you work with systems you start to realize that a data-driven process can often greatly simplify things.

4. Know Thy Data

Fred Brooks once wrote

"Show me your flowcharts (source code), and conceal your tables (domain model), and I shall continue to be mystified; show me your tables (domain model) and I won't usually need your flowcharts (source code): they'll be obvious."

A more modern version would read like this:

Show me your code and I'll have to see your data,
Show me your data and I won't have to see your code.

The importance of data can't be overstated:

* Optimization STARTS with understanding HOW the data is being generated and used, NOT the code as has been traditionally taught.
* Post 2000 "Big Data" has been called the new oil. We are generating upwards to millions of GB of data every second. Analyzing that data is import to spot trends and potential problems.

5. There are three levels of optimizations. From slowest to fastest run-time:

a) Bit-twiddling hacks [stanford.edu]
b) Algorithmic -- Algorithmic complexity or Analysis of algorithms [wikipedia.org] (such as Big-O notation)
c) Data-Orientated Design [dataorienteddesign.com] -- Understanding how hardware caches such as instruction and data caches matter. Optimize for the common case, NOT the single case that OOP tends to favor.

Optimizing is understanding bang-for-the-buck: 80% of execution time is spent in 20% of the code. Speeding up hot-spots with bit twiddling won't be as effective as using a more efficient algorithm which, in turn, won't be as effective as understanding HOW the data is manipulated in the first place.

6. Fundamental Reading

Since the OP specifically asked about books -- there are lots of great ones. The ones that have impressed me that I would mark as "required" reading:

* The Mythical Man-Month
* Godel, Escher, Bach
* Knuth: The Art of Computer Programming
* The Pragmatic Programmer
* Zero Bugs and Program Faster
* Writing Solid Code / Code Complete by Steve McConnell
* Game Programming Patterns [gameprogra...tterns.com] (*)
* Game Engine Design
* Thinking in Java by Bruce Eckel
* Puzzles for Hackers by Ivan Sklyarov

(*) I did NOT list Design Patterns: Elements of Reusable Object-Oriented Software as that leads to typical, bloated, over-engineered crap. The main problem with "Design Patterns" is that a programmer will often get locked into a mindset of seeing everything as a pattern -- even when a simple few lines of code would solve the problem. For example here is 1,100+ lines of Crap++ code such as Boost's over-engineered CRC code [boost.org] when a mere ~25 lines of SIMPLE C code would have done the trick. When was the last time you ACTUALLY needed to _modify_ a CRC function? The BIG picture is that you are probably looking for a BETTER HASHING function with fewer collisions. You probably would be better off using a DIFFERENT algorithm such as SHA-2, etc.

7. Do NOT copy-pasta

Roughly 80% of bugs creep in because someone blindly copied-pasted without thinking. Type out ALL code so you actually THINK about what you are writing.

8. K.I.S.S.

Over-engineering, aka technical debt, will be your Achilles' heel. Keep It Simple, Silly.

9. Use DESCRIPTIVE variable names

You spend ~80% of your time READING code, and only ~20% writing it. Use good, descriptive variable names. Far too many programmers write useless comments and don't understand the difference between code and comments:

Code says HOW, Comments say WHY

A crap comment will say something like: // increment i

No, Shit Sherlock! Don't comment the obvious!

A good comment will say something like: // BUGFIX: 1234: Work-around issues caused by A, B, and C.

10. Ignoring Memory Management doesn't make it go away -- now you have two problems. (With apologies to JWZ)

TINSTAAFL.

11. Learn Multi-Paradigm programming [wikipedia.org].

If you don't understand both the pros and cons of these programming paradigms ...

* Procedural
* Object-Orientated
* Functional, and
* Data-Orientated Design

... then you will never really understand programming, nor abstraction, at a deep level, along with how and when it should and shouldn't be used.

12. Multi-disciplinary POV

ALL non-trivial code has bugs. If you aren't using static code analysis [wikipedia.org] then you are not catching as many bugs as the people who are.

Also, a good programmer looks at his code from many different angles. As a programmer you must put on many different hats to find them:

* Architect -- design the code
* Engineer / Construction Worker -- implement the code
* Tester -- test the code
* Consumer -- doesn't see the code, only sees the results. Does it even work?? Did you VERIFY it did BEFORE you checked your code into version control?

13. Learn multiple Programming Languages

Each language was designed to solve certain problems. Learning different languages, even ones you hate, will expose you to different concepts. e.g., if you don't know how to read assembly language AND your high-level language then you will never be as good as the programmer who knows both.

14. Respect your Colleagues' and Consumers' Time, Space, and Money.

Mobile games are the WORST at respecting people's time, space and money, turning "players into payers." They treat customers as whales. Don't do this. A practical example: if you are in a Slack channel with 50+ people, do NOT use @here. YOUR fire is not their emergency!

15. Be Passionate

If you aren't passionate about programming, that is, you are only doing it for the money, it will show. Take some pride in doing a GOOD job.

16. Perfect Practice Makes Perfect.

If you aren't programming every day you will never be as good as someone who is. Programming is about solving interesting problems. Practice solving puzzles to develop your intuition and lateral thinking. The more you practice the better you get.

"Sorry" for the book but I felt it was important to summarize the "essentials" of programming.

--
Hey Slashdot. Fix your shitty filter so long lists can be posted.: "Your comment has too few characters per line (currently 37.0)."

raymorris ( 2726007 ) , Friday February 22, 2019 @05:39PM ( #58166230 ) Journal
Shared this with my team ( Score: 4 , Insightful)

You crammed a lot of good ideas into a short post.
I'm sending my team at work a link to your post.

You mentioned that code can be data. Linus Torvalds had this to say:

"I'm a huge proponent of designing your code around the data, rather than the other way around, and I think it's one of the reasons git has been fairly successful [â¦] I will, in fact, claim that the difference between a bad programmer and a good one is whether he considers his code or his data structures more important."

"Bad programmers worry about the code. Good programmers worry about data structures and their relationships."

I'm inclined to agree. Once the data structure is right, the code often almost writes itself. It'll be easy to write and easy to read because it's obvious how one would handle data structured in that elegant way.

Writing the code necessary to transform the data from the input format into the right structure can be non-obvious, but it's normally worth it.

[Aug 31, 2019] Slashdot Asks How Did You Learn How To Code - Slashdot

Aug 31, 2019 | ask.slashdot.org

GreatDrok ( 684119 ) , Saturday June 04, 2016 @10:03PM ( #52250917 ) Journal

Programming, not coding ( Score: 5 , Interesting)

I learnt to program at school from a Ph.D. computer scientist. We never even had computers in the class. We learnt to break the problem down into sections using flowcharts or pseudo-code, and then we would translate that into whatever coding language we were using. I still do this, usually in my notebook, where I figure out all the things I need to do, write the skeleton of the code as a series of comments describing what each section of my program does, and then fill in the code for each section. It is a combination of top-down and bottom-up programming, writing routines that can be independently tested and validated.

[Aug 29, 2019] Parsing bash script options with getopts by Kevin Sookocheff

Mar 30, 2018 | sookocheff.com

Posted on January 4, 2015 | 5 minutes | Kevin Sookocheff

A common task in shell scripting is to parse command line arguments to your script. Bash provides the getopts built-in function to do just that. This tutorial explains how to use the getopts built-in function to parse arguments and options to a bash script.

The getopts function takes three parameters. The first is a specification of which options are valid, listed as a sequence of letters. For example, the string 'ht' signifies that the options -h and -t are valid.

The second argument to getopts is a variable that will be populated with the option or argument to be processed next. In the following loop, opt will hold the value of the current option that has been parsed by getopts .

while getopts ":ht" opt; do
  case ${opt} in
    h ) # process option h
      ;;
    t ) # process option t
      ;;
    \? ) echo "Usage: cmd [-h] [-t]"
      ;;
  esac
done

This example shows a few additional features of getopts . First, if an invalid option is provided, the option variable is assigned the value ? . You can catch this case and provide an appropriate usage message to the user. Second, this behaviour is only true when you prepend the list of valid options with : to disable the default error handling of invalid options. It is recommended to always disable the default error handling in your scripts.

The third argument to getopts is the list of arguments and options to be processed. When not provided, this defaults to the arguments and options provided to the application ( $@ ). You can provide this third argument to use getopts to parse any list of arguments and options you provide.

Shifting processed options

The variable OPTIND holds the number of options parsed by the last call to getopts . It is common practice to call the shift command at the end of your processing loop to remove options that have already been handled from $@ .

shift $((OPTIND -1))
Parsing options with arguments

Options that themselves have arguments are signified with a : . The argument to an option is placed in the variable OPTARG . In the following example, the option t takes an argument. When the argument is provided, we copy its value to the variable target . If no argument is provided getopts will set opt to : . We can recognize this error condition by catching the : case and printing an appropriate error message.

while getopts ":t:" opt; do
  case ${opt} in
    t )
      target=$OPTARG
      ;;
    \? )
      echo "Invalid option: $OPTARG" 1>&2
      ;;
    : )
      echo "Invalid option: $OPTARG requires an argument" 1>&2
      ;;
  esac
done
shift $((OPTIND -1))
An extended example – parsing nested arguments and options

Let's walk through an extended example of processing a command that takes options, has a sub-command, and whose sub-command takes an additional option that has an argument. This is a mouthful so let's break it down using an example. Let's say we are writing our own version of the pip command . In this version you can call pip with the -h option to display a help message.

> pip -h
Usage:
    pip -h                      Display this help message.
    pip install                 Install a Python package.

We can use getopts to parse the -h option with the following while loop. In it we catch invalid options with \? and shift all arguments that have been processed with shift $((OPTIND -1)) .

while getopts ":h" opt; do
  case ${opt} in
    h )
      echo "Usage:"
      echo "    pip -h                      Display this help message."
      echo "    pip install                 Install a Python package."
      exit 0
      ;;
    \? )
      echo "Invalid Option: -$OPTARG" 1>&2
      exit 1
      ;;
  esac
done
shift $((OPTIND -1))

Now let's add the sub-command install to our script. install takes as an argument the Python package to install.

> pip install urllib3

install also takes an option, -t . -t takes as an argument the location to install the package to relative to the current directory.

> pip install urllib3 -t ./src/lib

To process this line we must find the sub-command to execute. This value is the first argument to our script.

subcommand=$1
shift # Remove the sub-command from the argument list

Now we can process the sub-command install . In our example, the option -t follows the package argument, so inside the install branch we capture the package name, remove it from the argument list, and then process the remainder of the line.

case "$subcommand" in
  install)
    package=$1
    shift # Remove the package name from the argument list
    ;;
esac

After these shifts the remaining arguments are of the form -t src/lib . The -t option takes an argument itself. This argument will be stored in the variable OPTARG and we save it to the variable target for further work.

case "$subcommand" in
  install)
    package=$1
    shift # Remove the package name from the argument list

  while getopts ":t:" opt; do
    case ${opt} in
      t )
        target=$OPTARG
        ;;
      \? )
        echo "Invalid Option: -$OPTARG" 1>&2
        exit 1
        ;;
      : )
        echo "Invalid Option: -$OPTARG requires an argument" 1>&2
        exit 1
        ;;
    esac
  done
  shift $((OPTIND -1))
  ;;
esac

Putting this all together, we end up with the following script that parses arguments to our version of pip and its sub-command install .

package=""  # Default to empty package
target=""  # Default to empty target

# Parse options to the `pip` command
while getopts ":h" opt; do
  case ${opt} in
    h )
      echo "Usage:"
      echo "    pip -h                      Display this help message."
      echo "    pip install <package>       Install <package>."
      exit 0
      ;;
   \? )
     echo "Invalid Option: -$OPTARG" 1>&2
     exit 1
     ;;
  esac
done
shift $((OPTIND -1))

subcommand=$1; shift  # Remove the sub-command from the argument list
case "$subcommand" in
  # Parse options to the install sub command
  install)
    package=$1; shift  # Remove the package name from the argument list

    # Process package options
    while getopts ":t:" opt; do
      case ${opt} in
        t )
          target=$OPTARG
          ;;
        \? )
          echo "Invalid Option: -$OPTARG" 1>&2
          exit 1
          ;;
        : )
          echo "Invalid Option: -$OPTARG requires an argument" 1>&2
          exit 1
          ;;
      esac
    done
    shift $((OPTIND -1))
    ;;
esac

After processing the above sequence of commands, the variable package will hold the package to install and the variable target will hold the target to install the package to. You can use this as a template for processing any set of arguments and options to your scripts.
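
As a quick illustration (the file name pip.sh is hypothetical, and a line such as echo "package=$package target=$target" is assumed to be appended so the result is visible), an invocation might look like this:

$ bash pip.sh install urllib3 -t ./src/lib
package=urllib3 target=./src/lib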


[Aug 29, 2019] How do I parse command line arguments in Bash - Stack Overflow

Jul 10, 2017 | stackoverflow.com

Livven, Jul 10, 2017 at 8:11

Update: It's been more than 5 years since I started this answer. Thank you for LOTS of great edits/comments/suggestions. In order to save maintenance time, I've modified the code block to be 100% copy-paste ready. Please do not post comments like "What if you changed X to Y". Instead, copy-paste the code block, see the output, make the change, rerun the script, and comment "I changed X to Y and ...". I don't have time to test your ideas and tell you if they work.
Method #1: Using bash without getopt[s]

Two common ways to pass key-value-pair arguments are:

Bash Space-Separated (e.g., --option argument ) (without getopt[s])

Usage demo-space-separated.sh -e conf -s /etc -l /usr/lib /etc/hosts

cat >/tmp/demo-space-separated.sh <<'EOF'
#!/bin/bash

POSITIONAL=()
while [[ $# -gt 0 ]]
do
key="$1"

case $key in
    -e|--extension)
    EXTENSION="$2"
    shift # past argument
    shift # past value
    ;;
    -s|--searchpath)
    SEARCHPATH="$2"
    shift # past argument
    shift # past value
    ;;
    -l|--lib)
    LIBPATH="$2"
    shift # past argument
    shift # past value
    ;;
    --default)
    DEFAULT=YES
    shift # past argument
    ;;
    *)    # unknown option
    POSITIONAL+=("$1") # save it in an array for later
    shift # past argument
    ;;
esac
done
set -- "${POSITIONAL[@]}" # restore positional parameters

echo "FILE EXTENSION  = ${EXTENSION}"
echo "SEARCH PATH     = ${SEARCHPATH}"
echo "LIBRARY PATH    = ${LIBPATH}"
echo "DEFAULT         = ${DEFAULT}"
echo "Number files in SEARCH PATH with EXTENSION:" $(ls -1 "${SEARCHPATH}"/*."${EXTENSION}" | wc -l)
if [[ -n $1 ]]; then
    echo "Last line of file specified as non-opt/last argument:"
    tail -1 "$1"
fi
EOF

chmod +x /tmp/demo-space-separated.sh

/tmp/demo-space-separated.sh -e conf -s /etc -l /usr/lib /etc/hosts

output from copy-pasting the block above:

FILE EXTENSION  = conf
SEARCH PATH     = /etc
LIBRARY PATH    = /usr/lib
DEFAULT         =
Number files in SEARCH PATH with EXTENSION: 14
Last line of file specified as non-opt/last argument:
#93.184.216.34    example.com
Bash Equals-Separated (e.g., --option=argument ) (without getopt[s])

Usage demo-equals-separated.sh -e=conf -s=/etc -l=/usr/lib /etc/hosts

cat >/tmp/demo-equals-separated.sh <<'EOF'
#!/bin/bash

for i in "$@"
do
case $i in
    -e=*|--extension=*)
    EXTENSION="${i#*=}"
    shift # past argument=value
    ;;
    -s=*|--searchpath=*)
    SEARCHPATH="${i#*=}"
    shift # past argument=value
    ;;
    -l=*|--lib=*)
    LIBPATH="${i#*=}"
    shift # past argument=value
    ;;
    --default)
    DEFAULT=YES
    shift # past argument with no value
    ;;
    *)
          # unknown option
    ;;
esac
done
echo "FILE EXTENSION  = ${EXTENSION}"
echo "SEARCH PATH     = ${SEARCHPATH}"
echo "LIBRARY PATH    = ${LIBPATH}"
echo "DEFAULT         = ${DEFAULT}"
echo "Number files in SEARCH PATH with EXTENSION:" $(ls -1 "${SEARCHPATH}"/*."${EXTENSION}" | wc -l)
if [[ -n $1 ]]; then
    echo "Last line of file specified as non-opt/last argument:"
    tail -1 $1
fi
EOF

chmod +x /tmp/demo-equals-separated.sh

/tmp/demo-equals-separated.sh -e=conf -s=/etc -l=/usr/lib /etc/hosts

output from copy-pasting the block above:

FILE EXTENSION  = conf
SEARCH PATH     = /etc
LIBRARY PATH    = /usr/lib
DEFAULT         =
Number files in SEARCH PATH with EXTENSION: 14
Last line of file specified as non-opt/last argument:
#93.184.216.34    example.com

To better understand ${i#*=} search for "Substring Removal" in this guide . It is functionally equivalent to `sed 's/[^=]*=//' <<< "$i"` which calls a needless subprocess or `echo "$i" | sed 's/[^=]*=//'` which calls two needless subprocesses.

Method #2: Using bash with getopt[s]

from: http://mywiki.wooledge.org/BashFAQ/035#getopts

getopt(1) limitations (older, relatively-recent getopt versions):

  • can't handle arguments that are empty strings
  • can't handle arguments with embedded whitespace

More recent getopt versions don't have these limitations.

Additionally, the POSIX shell (and others) offer getopts which doesn't have these limitations. I've included a simplistic getopts example.

Usage demo-getopts.sh -vf /etc/hosts foo bar

cat >/tmp/demo-getopts.sh <<'EOF'
#!/bin/sh

# A POSIX variable
OPTIND=1         # Reset in case getopts has been used previously in the shell.

# Initialize our own variables:
output_file=""
verbose=0

while getopts "h?vf:" opt; do
    case "$opt" in
    h|\?)
        show_help
        exit 0
        ;;
    v)  verbose=1
        ;;
    f)  output_file=$OPTARG
        ;;
    esac
done

shift $((OPTIND-1))

[ "${1:-}" = "--" ] && shift

echo "verbose=$verbose, output_file='$output_file', Leftovers: $@"
EOF

chmod +x /tmp/demo-getopts.sh

/tmp/demo-getopts.sh -vf /etc/hosts foo bar

output from copy-pasting the block above:

verbose=1, output_file='/etc/hosts', Leftovers: foo bar

The advantages of getopts are:

  1. It's more portable, and will work in other shells like dash .
  2. It can handle multiple single options like -vf filename in the typical Unix way, automatically.

The disadvantage of getopts is that it can only handle short options ( -h , not --help ) without additional code.

There is a getopts tutorial which explains what all of the syntax and variables mean. In bash, there is also help getopts , which might be informative.

johncip ,Jul 23, 2018 at 15:15

No answer mentions enhanced getopt . And the top-voted answer is misleading: It either ignores -vfd style short options (requested by the OP) or options after positional arguments (also requested by the OP); and it ignores parsing-errors. Instead:
  • Use enhanced getopt from util-linux or formerly GNU glibc . 1
  • It works with getopt_long() the C function of GNU glibc.
  • Has all useful distinguishing features (the others don't have them):
    • handles spaces, quoting characters and even binary in arguments 2 (non-enhanced getopt can't do this)
    • it can handle options at the end: script.sh -o outFile file1 file2 -v ( getopts doesn't do this)
    • allows = -style long options: script.sh --outfile=fileOut --infile fileIn (allowing both is lengthy if self parsing)
    • allows combined short options, e.g. -vfd (real work if self parsing)
    • allows touching option-arguments, e.g. -oOutfile or -vfdoOutfile
  • Is so old already 3 that no GNU system is missing this (e.g. any Linux has it).
  • You can test for its existence with: getopt --test → return value 4.
  • Other getopt or shell-builtin getopts are of limited use.

The following calls

myscript -vfd ./foo/bar/someFile -o /fizz/someOtherFile
myscript -v -f -d -o/fizz/someOtherFile -- ./foo/bar/someFile
myscript --verbose --force --debug ./foo/bar/someFile -o/fizz/someOtherFile
myscript --output=/fizz/someOtherFile ./foo/bar/someFile -vfd
myscript ./foo/bar/someFile -df -v --output /fizz/someOtherFile

all return

verbose: y, force: y, debug: y, in: ./foo/bar/someFile, out: /fizz/someOtherFile

with the following myscript

#!/bin/bash
# saner programming env: these switches turn some bugs into errors
set -o errexit -o pipefail -o noclobber -o nounset

# -allow a command to fail with !'s side effect on errexit
# -use return value from ${PIPESTATUS[0]}, because ! hosed $?
! getopt --test > /dev/null 
if [[ ${PIPESTATUS[0]} -ne 4 ]]; then
    echo 'Sorry, "getopt --test" failed in this environment.'
    exit 1
fi

OPTIONS=dfo:v
LONGOPTS=debug,force,output:,verbose

# -regarding ! and PIPESTATUS see above
# -temporarily store output to be able to check for errors
# -activate quoting/enhanced mode (e.g. by writing out "--options")
# -pass arguments only via   -- "$@"   to separate them correctly
! PARSED=$(getopt --options=$OPTIONS --longoptions=$LONGOPTS --name "$0" -- "$@")
if [[ ${PIPESTATUS[0]} -ne 0 ]]; then
    # e.g. return value is 1
    #  then getopt has complained about wrong arguments to stdout
    exit 2
fi
# read getopt's output this way to handle the quoting right:
eval set -- "$PARSED"

d=n f=n v=n outFile=-
# now enjoy the options in order and nicely split until we see --
while true; do
    case "$1" in
        -d|--debug)
            d=y
            shift
            ;;
        -f|--force)
            f=y
            shift
            ;;
        -v|--verbose)
            v=y
            shift
            ;;
        -o|--output)
            outFile="$2"
            shift 2
            ;;
        --)
            shift
            break
            ;;
        *)
            echo "Programming error"
            exit 3
            ;;
    esac
done

# handle non-option arguments
if [[ $# -ne 1 ]]; then
    echo "$0: A single input file is required."
    exit 4
fi

echo "verbose: $v, force: $f, debug: $d, in: $1, out: $outFile"

1 enhanced getopt is available on most "bash-systems", including Cygwin; on OS X try brew install gnu-getopt or sudo port install getopt
2 the POSIX exec() conventions have no reliable way to pass binary NULL in command line arguments; those bytes prematurely end the argument
3 first version released in 1997 or before (I only tracked it back to 1997)

Tobias Kienzler ,Mar 19, 2016 at 15:23

from : digitalpeer.com with minor modifications

Usage myscript.sh -p=my_prefix -s=dirname -l=libname

#!/bin/bash
for i in "$@"
do
case $i in
    -p=*|--prefix=*)
    PREFIX="${i#*=}"

    ;;
    -s=*|--searchpath=*)
    SEARCHPATH="${i#*=}"
    ;;
    -l=*|--lib=*)
    DIR="${i#*=}"
    ;;
    --default)
    DEFAULT=YES
    ;;
    *)
            # unknown option
    ;;
esac
done
echo PREFIX = ${PREFIX}
echo SEARCH PATH = ${SEARCHPATH}
echo DIRS = ${DIR}
echo DEFAULT = ${DEFAULT}

To better understand ${i#*=} search for "Substring Removal" in this guide . It is functionally equivalent to `sed 's/[^=]*=//' <<< "$i"` which calls a needless subprocess or `echo "$i" | sed 's/[^=]*=//'` which calls two needless subprocesses.

Robert Siemer ,Jun 1, 2018 at 1:57

getopt() / getopts() is a good option. Stolen from here :

The simple use of "getopt" is shown in this mini-script:

#!/bin/bash
echo "Before getopt"
for i
do
  echo $i
done
args=`getopt abc:d $*`
set -- $args
echo "After getopt"
for i
do
  echo "-->$i"
done

What we have said is that any of -a, -b, -c or -d will be allowed, but that -c is followed by an argument (the "c:" says that).

If we call this "g" and try it out:

bash-2.05a$ ./g -abc foo
Before getopt
-abc
foo
After getopt
-->-a
-->-b
-->-c
-->foo
-->--

We start with two arguments, and "getopt" breaks apart the options and puts each in its own argument. It also added "--".

hfossli ,Jan 31 at 20:05

More succinct way

script.sh

#!/bin/bash

while [[ "$#" -gt 0 ]]; do case $1 in
  -d|--deploy) deploy="$2"; shift;;
  -u|--uglify) uglify=1;;
  *) echo "Unknown parameter passed: $1"; exit 1;;
esac; shift; done

echo "Should deploy? $deploy"
echo "Should uglify? $uglify"

Usage:

./script.sh -d dev -u

# OR:

./script.sh --deploy dev --uglify

bronson ,Apr 27 at 23:22

At the risk of adding another example to ignore, here's my scheme.
  • handles -n arg and --name=arg
  • allows arguments at the end
  • shows sane errors if anything is misspelled
  • compatible, doesn't use bashisms
  • readable, doesn't require maintaining state in a loop

Hope it's useful to someone.

while [ "$#" -gt 0 ]; do
  case "$1" in
    -n) name="$2"; shift 2;;
    -p) pidfile="$2"; shift 2;;
    -l) logfile="$2"; shift 2;;

    --name=*) name="${1#*=}"; shift 1;;
    --pidfile=*) pidfile="${1#*=}"; shift 1;;
    --logfile=*) logfile="${1#*=}"; shift 1;;
    --name|--pidfile|--logfile) echo "$1 requires an argument" >&2; exit 1;;

    -*) echo "unknown option: $1" >&2; exit 1;;
    *) handle_argument "$1"; shift 1;;
  esac
done

Robert Siemer ,Jun 6, 2016 at 19:28

I'm about 4 years late to this question, but want to give back. I used the earlier answers as a starting point to tidy up my old adhoc param parsing. I then refactored out the following template code. It handles both long and short params, using = or space separated arguments, as well as multiple short params grouped together. Finally it re-inserts any non-param arguments back into the $1,$2.. variables. I hope it's useful.
#!/usr/bin/env bash

# NOTICE: Uncomment if your script depends on bashisms.
#if [ -z "$BASH_VERSION" ]; then bash $0 $@ ; exit $? ; fi

echo "Before"
for i ; do echo - $i ; done


# Code template for parsing command line parameters using only portable shell
# code, while handling both long and short params, handling '-f file' and
# '-f=file' style param data and also capturing non-parameters to be inserted
# back into the shell positional parameters.

while [ -n "$1" ]; do
        # Copy so we can modify it (can't modify $1)
        OPT="$1"
        # Detect argument termination
        if [ x"$OPT" = x"--" ]; then
                shift
                for OPT ; do
                        REMAINS="$REMAINS \"$OPT\""
                done
                break
        fi
        # Parse current opt
        while [ x"$OPT" != x"-" ] ; do
                case "$OPT" in
                        # Handle --flag=value opts like this
                        -c=* | --config=* )
                                CONFIGFILE="${OPT#*=}"
                                shift
                                ;;
                        # and --flag value opts like this
                        -c* | --config )
                                CONFIGFILE="$2"
                                shift
                                ;;
                        -f* | --force )
                                FORCE=true
                                ;;
                        -r* | --retry )
                                RETRY=true
                                ;;
                        # Anything unknown is recorded for later
                        * )
                                REMAINS="$REMAINS \"$OPT\""
                                break
                                ;;
                esac
                # Check for multiple short options
                # NOTICE: be sure to update this pattern to match valid options
                NEXTOPT="${OPT#-[cfr]}" # try removing single short opt
                if [ x"$OPT" != x"$NEXTOPT" ] ; then
                        OPT="-$NEXTOPT"  # multiple short opts, keep going
                else
                        break  # long form, exit inner loop
                fi
        done
        # Done with that param. move to next
        shift
done
# Set the non-parameters back into the positional parameters ($1 $2 ..)
eval set -- $REMAINS


echo -e "After: \n configfile='$CONFIGFILE' \n force='$FORCE' \n retry='$RETRY' \n remains='$REMAINS'"
for i ; do echo - $i ; done


I have found writing portable argument parsing in scripts so frustrating that I have written Argbash - a FOSS code generator that can generate the argument-parsing code for your script, plus it has some nice features:

https://argbash.io

[Aug 29, 2019] shell - An example of how to use getopts in bash - Stack Overflow

The key thing to understand is that getopts just parses options. You need to shift them off as a separate operation:
shift $((OPTIND-1))
May 10, 2013 | stackoverflow.com


chepner ,May 10, 2013 at 13:42

I want to call myscript file in this way:
$ ./myscript -s 45 -p any_string

or

$ ./myscript -h >>> should display help
$ ./myscript    >>> should display help

My requirements are:

  • getopt here to get the input arguments
  • check that -s exists, if not return error
  • check that the value after the -s is 45 or 90
  • check that the -p exists and there is an input string after
  • if the user enters ./myscript -h or just ./myscript then display help

I tried so far this code:

#!/bin/bash
while getopts "h:s:" arg; do
  case $arg in
    h)
      echo "usage" 
      ;;
    s)
      strength=$OPTARG
      echo $strength
      ;;
  esac
done

But with that code I get errors. How to do it with Bash and getopt ?


#!/bin/bash

usage() { echo "Usage: $0 [-s <45|90>] [-p <string>]" 1>&2; exit 1; }

while getopts ":s:p:" o; do
    case "${o}" in
        s)
            s=${OPTARG}
            ((s == 45 || s == 90)) || usage
            ;;
        p)
            p=${OPTARG}
            ;;
        *)
            usage
            ;;
    esac
done
shift $((OPTIND-1))

if [ -z "${s}" ] || [ -z "${p}" ]; then
    usage
fi

echo "s = ${s}"
echo "p = ${p}"

Example runs:

$ ./myscript.sh
Usage: ./myscript.sh [-s <45|90>] [-p <string>]

$ ./myscript.sh -h
Usage: ./myscript.sh [-s <45|90>] [-p <string>]

$ ./myscript.sh -s "" -p ""
Usage: ./myscript.sh [-s <45|90>] [-p <string>]

$ ./myscript.sh -s 10 -p foo
Usage: ./myscript.sh [-s <45|90>] [-p <string>]

$ ./myscript.sh -s 45 -p foo
s = 45
p = foo

$ ./myscript.sh -s 90 -p bar
s = 90
p = bar

[Aug 28, 2019] How to Replace Spaces in Filenames with Underscores on the Linux Shell

You would probably be better off with the -nv options for mv.
Aug 28, 2019 | vitux.com
$ for file in *; do mv "$file" `echo $file | tr ' ' '_'` ; done
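
A slightly safer variant in the spirit of the -nv note above (a sketch assuming bash): the glob only matches names that actually contain a space, parameter expansion replaces the spaces without spawning a subshell, -n refuses to overwrite an existing target, and -v reports each rename.

for file in *' '*; do
    [ -e "$file" ] || continue    # the glob matched nothing
    mv -nv -- "$file" "${file// /_}"
done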

[Aug 28, 2019] 9 Quick 'mv' Command Practical Examples in Linux

Aug 28, 2019 | www.linuxbuzz.com

Example:5) Do not overwrite existing file at destination (mv -n)

Use the '-n' option with the mv command if you don't want to overwrite an existing file at the destination:

[linuxbuzz@web ~]$ ls -l tools.txt /tmp/sysadmin/tools.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 0 Aug 24 09:59 /tmp/sysadmin/tools.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 0 Aug 24 10:10 tools.txt
[linuxbuzz@web ~]$

As we can see, tools.txt is present both in our current working directory and in /tmp/sysadmin. Use the mv command below to avoid overwriting it at the destination:

[linuxbuzz@web ~]$ mv -n tools.txt /tmp/sysadmin/tools.txt
[linuxbuzz@web ~]$
Example:6) Forcefully overwrite write protected file at destination (mv -f)

Use the '-f' option with the mv command to forcefully overwrite a write-protected file at the destination. Let's assume we have a file named "bands.txt" in our present working directory and in /tmp/sysadmin.

[linuxbuzz@web ~]$ ls -l bands.txt /tmp/sysadmin/bands.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 0 Aug 25 00:24 bands.txt
-r--r--r--. 1 linuxbuzz linuxbuzz 0 Aug 25 00:24 /tmp/sysadmin/bands.txt
[linuxbuzz@web ~]$

As we can see, under /tmp/sysadmin, bands.txt is a write-protected file.

Without -f option

[linuxbuzz@web ~]$ mv bands.txt /tmp/sysadmin/bands.txt

mv: try to overwrite '/tmp/sysadmin/bands.txt', overriding mode 0444 (r--r--r--)?

To forcefully overwrite, use below mv command,

[linuxbuzz@web ~]$ mv -f bands.txt /tmp/sysadmin/bands.txt
[linuxbuzz@web ~]$
Example:7) Verbose output of mv command (mv -v)

Use the '-v' option with the mv command to print verbose output; an example is shown below:

[linuxbuzz@web ~]$ mv -v  buzz51.txt buzz52.txt buzz53.txt buzz54.txt /tmp/sysadmin/
'buzz51.txt' -> '/tmp/sysadmin/buzz51.txt'
'buzz52.txt' -> '/tmp/sysadmin/buzz52.txt'
'buzz53.txt' -> '/tmp/sysadmin/buzz53.txt'
'buzz54.txt' -> '/tmp/sysadmin/buzz54.txt'
[linuxbuzz@web ~]$
Example:8) Create backup at destination while using mv command (mv -b)

Use the '-b' option to take a backup of a file at the destination while performing the mv command. At the destination, a backup file will be created with a tilde character appended to its name; an example is shown below:

[linuxbuzz@web ~]$ mv -b buzz55.txt /tmp/sysadmin/buzz55.txt
[linuxbuzz@web ~]$ ls -l /tmp/sysadmin/buzz55.txt*
-rw-rw-r--. 1 linuxbuzz linuxbuzz 0 Aug 25 00:47 /tmp/sysadmin/buzz55.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 0 Aug 25 00:37 /tmp/sysadmin/buzz55.txt~
[linuxbuzz@web ~]$
Example:9) Move file only when its newer than destination (mv -u)

There are scenarios where the same file exists at both the source and the destination, and we want to move the file only when the source file is newer than the destination. To accomplish this, use the -u option with the mv command. An example is shown below:

[linuxbuzz@web ~]$ ls -l tools.txt /tmp/sysadmin/tools.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 55 Aug 25 00:55 /tmp/sysadmin/tools.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 87 Aug 25 00:57 tools.txt
[linuxbuzz@web ~]$

Execute the mv command below to move the file only when it is newer than the destination:

[linuxbuzz@web ~]$ mv -u tools.txt /tmp/sysadmin/tools.txt
[linuxbuzz@web ~]$

That's all from this article; we have covered the important and basic examples of the mv command.

Hopefully the above examples will help you learn more about the mv command. Write your feedback and suggestions to us.

[Aug 28, 2019] Echo Command in Linux with Examples

Notable quotes:
"... The -e parameter is used for the interpretation of backslashes ..."
"... The -n option is used for omitting trailing newline. ..."
Aug 28, 2019 | linoxide.com

The -e parameter is used for the interpretation of backslashes

... ... ...

To create a new line after each word in a string use the -e operator with the \n option as shown
$ echo -e "Linux \nis \nan \nopensource \noperating \nsystem"

... ... ...

Omit echoing trailing newline

The -n option is used for omitting trailing newline. This is shown in the example below

$ echo -n "Linux is an opensource operating system"

Sample Output

Linux is an opensource operating systemjames@buster:/$

[Aug 28, 2019] How to navigate Ansible documentation Enable Sysadmin

Aug 28, 2019 | www.redhat.com

We take our first glimpse at the Ansible documentation on the official website. While Ansible can be overwhelming with so many immediate options, let's break down what is presented to us here. Putting our attention on the page's main pane, we are given five offerings from Ansible. This pane is a central location, or one-stop shop, to maneuver through the documentation for products like Ansible Tower, Ansible Galaxy, and Ansible Lint.

We can even dive into Ansible Network for specific module documentation that extends the power and ease of Ansible automation to network administrators. The focal point of the rest of this article will be around Ansible Project, to give us a great starting point into our automation journey.

Once we click the Ansible Documentation tile under the Ansible Project section, the first action we should take is to ensure we are viewing the correct version of the documentation. We can get our current version of Ansible from our control node's command line by running ansible --version . Armed with the version information provided by the output, we can select the matching version in the site's upper-left-hand corner using the drop-down menu, which by default says latest.
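
For example, on the control node (the version number shown here is only illustrative; the first line of the output carries the installed version):

$ ansible --version
ansible 2.8.4

That "2.8.4" is the value to match against the documentation drop-down.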


[Aug 27, 2019] Bash Variables - Bash Reference Manual

Aug 27, 2019 | bash.cyberciti.biz
BASH_LINENO
An array variable whose members are the line numbers in source files corresponding to each member of FUNCNAME . ${BASH_LINENO[$i]} is the line number in the source file where ${FUNCNAME[$i]} was called. The corresponding source file name is ${BASH_SOURCE[$i]} . Use LINENO to obtain the current line number.

[Aug 27, 2019] linux - How to show line number when executing bash script - Stack Overflow

Aug 27, 2019 | stackoverflow.com



dspjm ,Jul 23, 2013 at 7:31

I have a test script which has a lot of commands and generates lots of output. I use set -x or set -v and set -e , so the script stops when an error occurs. However, it's still rather difficult for me to locate which line the execution stopped at in order to find the problem. Is there a method which can output the line number of the script before each line is executed? Or output the line number before the command trace generated by set -x ? Any method which can deal with my script line location problem would be a great help. Thanks.

Suvarna Pattayil ,Jul 28, 2017 at 17:25

You mention that you're already using -x . The variable PS4 holds the prompt printed before the command line is echoed when the -x option is set; it defaults to + followed by a space.

You can change PS4 to emit the LINENO (The line number in the script or shell function currently executing).

For example, if your script reads:

$ cat script
foo=10
echo ${foo}
echo $((2 + 2))

Executing it thus would print line numbers:

$ PS4='Line ${LINENO}: ' bash -x script
Line 1: foo=10
Line 2: echo 10
10
Line 3: echo 4
4

http://wiki.bash-hackers.org/scripting/debuggingtips gives the ultimate PS4 that would output everything you will possibly need for tracing:

export PS4='+(${BASH_SOURCE}:${LINENO}): ${FUNCNAME[0]:+${FUNCNAME[0]}(): }'

Deqing ,Jul 23, 2013 at 8:16

In Bash, $LINENO contains the line number currently being executed in the script.

If you need to know the line number where the function was called, try $BASH_LINENO . Note that this variable is an array.

For example:

#!/bin/bash       

function log() {
    echo "LINENO: ${LINENO}"
    echo "BASH_LINENO: ${BASH_LINENO[*]}"
}

function foo() {
    log "$@"
}

foo "$@"

See here for details of Bash variables.

Eliran Malka ,Apr 25, 2017 at 10:14

Simple (but powerful) solution: place echo statements around the code you think causes the problem and move them line by line until the message no longer appears on screen, because the script has stopped due to an error before reaching it.

Even more powerful solution: Install bashdb the bash debugger and debug the script line by line

kklepper ,Apr 2, 2018 at 22:44

Workaround for shells without LINENO

In a fairly sophisticated script I wouldn't like to see all line numbers; rather I would like to be in control of the output.

Define a function

echo_line_no () {
    grep -n "$1" $0 |  sed "s/echo_line_no//" 
    # grep the line(s) containing input $1 with line numbers
    # replace the function name with nothing 
} # echo_line_no

Use it with quotes like

echo_line_no "this is a simple comment with a line number"

Output is

16   "this is a simple comment with a line number"

if the number of this line in the source file is 16.

This basically answers the question How to show line number when executing bash script for users of ash or other shells without LINENO .

Anything more to add?

Sure. Why do you need this? How do you work with this? What can you do with this? Is this simple approach really sufficient or useful? Why do you want to tinker with this at all?

Want to know more? Read reflections on debugging
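
A related trick not mentioned in the answers above (a minimal, bash-specific sketch): a trap on DEBUG fires before every simple command, so it can announce each line as it is about to run without touching PS4 or set -x.

#!/bin/bash
# print the line number and the command about to be executed
trap 'echo "line $LINENO: $BASH_COMMAND"' DEBUG

foo=10
echo "$foo"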

[Aug 27, 2019] Gogo - Create Shortcuts to Long and Complicated Paths in Linux

Looks like a second-rate utility. No new worthwhile ideas. Not recommended.
Aug 27, 2019 | www.tecmint.com
Gogo keeps its aliases in the ~/.config/gogo/gogo.conf file (which should be auto-created if it doesn't exist) and uses the following syntax:
# Comments are lines that start from '#' character.
default = ~/something
alias = /desired/path
alias2 = /desired/path with space
alias3 = "/this/also/works"
zażółć = "unicode/is/also/supported/zażółć gęślą jaźń"

If you run gogo without any arguments, it will go to the directory specified in default; this alias is always available, even if it's not in the configuration file, and points to the $HOME directory.

To display the current aliases, use the -l switch. From the following screenshot, you can see that default points to /home/tecmint, which is user tecmint's home directory on the system.

$ gogo -l
List Gogo Aliases

Below is an example of running gogo without any arguments.

$ cd Documents/Phone-Backup/Linux-Docs/
$ gogo
$ pwd
Running Gogo Without Options

To create a shortcut to a long path, move into the directory you want and use the -a flag to add an alias for that directory in gogo , as shown.

$ cd Documents/Phone-Backup/Linux-Docs/Ubuntu/
$ gogo -a Ubuntu
$ gogo
$ gogo -l
$ gogo -a Ubuntu
$ pwd
Create Long Directory Shortcut

You can also create aliases for connecting directly to directories on remote Linux servers. To do this, simply add the following lines to the gogo configuration file, which can be opened using the -e flag; this will use the editor specified in the $EDITOR environment variable.

$ gogo -e

Once the configuration file opens, add the following lines to it.

sshroot = ssh://root@192.168.56.5:/bin/bash  /root/
sshtdocs = ssh://tecmint@server3  ~/tecmint/docs/
  1. sitaram says: August 25, 2019 at 7:46 am

    The bulk of what this tool does can be replaced with a shell function that does ` cd $(grep -w ^$1 ~/.config/gogo.conf | cut -f2 -d' ') `, where `$1` is the argument supplied to the function.

    If you've already installed fzf (and you really should), then you can get a far better experience than even zsh's excellent "completion" facilities. I use something like ` cd $(fzf -1 +m -q "$1" < ~/.cache/to) ` (My equivalent of gogo.conf is ` ~/.cache/to `).
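
A rough sketch of the shell-function replacement described in the first comment above (the function name "to" is arbitrary; it assumes the "name = /path" format shown earlier and strips optional surrounding quotes):

to () {
    local dest
    dest=$(grep -w "^$1" ~/.config/gogo/gogo.conf | cut -d'=' -f2- | sed 's/^ *//; s/"//g')
    if [ -n "$dest" ]; then
        cd "$dest"
    else
        echo "no such alias: $1" >&2
        return 1
    fi
}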

[Aug 26, 2019] Linux and Unix exit code tutorial with examples by George Ornbo

Aug 07, 2016 | shapeshed.com
Tutorial on using exit codes from Linux or UNIX commands. Examples of how to get the exit code of a command, how to set the exit code and how to suppress exit codes.

Estimated reading time: 3 minutes

UNIX exit code

What is an exit code in the UNIX or Linux shell?

An exit code, sometimes known as a return code, is the code returned to a parent process by an executable. On POSIX systems the standard exit code is 0 for success and any number from 1 to 255 for anything else.

Exit codes can be interpreted by scripts to adapt in the event of success or failure. If an exit code is not set explicitly, the exit code will be that of the last command run.

How to get the exit code of a command

To get the exit code of a command type echo $? at the command prompt. In the following example a file is printed to the terminal using the cat command.

cat file.txt
hello world
echo $?
0

The command was successful. The file exists and there are no errors in reading the file or writing it to the terminal. The exit code is therefore 0 .

In the following example the file does not exist.

cat doesnotexist.txt
cat: doesnotexist.txt: No such file or directory
echo $?
1

The exit code is 1 as the operation was not successful.

How to use exit codes in scripts

To use exit codes in scripts an if statement can be used to see if an operation was successful.

#!/bin/bash

cat file.txt 

if [ $? -eq 0 ]
then
  echo "The script ran ok"
  exit 0
else
  echo "The script failed" >&2
  exit 1
fi

If the command was successful, the exit code will be 0 and 'The script ran ok' will be printed to the terminal; otherwise 'The script failed' is printed to standard error and the script exits with code 1.
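
A common variation (a sketch, not part of the original article) is to test the command directly in the if statement, which removes the risk of something else resetting $? before it is checked:

#!/bin/bash

if cat file.txt
then
  echo "The script ran ok"
  exit 0
else
  echo "The script failed" >&2
  exit 1
fi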

How to set an exit code

To set an exit code in a script use exit 0 where 0 is the number you want to return. In the following example a shell script exits with a 1 . This file is saved as exit.sh .

#!/bin/bash

exit 1

Executing this script shows that the exit code is correctly set.

bash exit.sh
echo $?
1
What exit code should I use?

The Linux Documentation Project has a list of reserved codes that also offers advice on what code to use for specific scenarios. These are the standard error codes in Linux or UNIX.
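
For illustration only (not part of the article), a few of those reserved values can be reproduced at the prompt; the exact numbers may vary between utilities and shells:

ls /nonexistent;  echo $?                  # 2   - "serious trouble" reported by GNU ls
/etc/passwd;      echo $?                  # 126 - file found but not executable
nosuchcommand;    echo $?                  # 127 - command not found
sleep 30 & kill -9 $!; wait $!; echo $?    # 137 - 128 + signal number (SIGKILL is 9)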

How to suppress exit statuses

Sometimes there may be a requirement to suppress an exit status. It may be that a command is being run within another script and that anything other than a 0 status is undesirable.

In the following example a file is printed to the terminal using cat . This file does not exist so will cause an exit status of 1 .

To suppress the error message any output to standard error is sent to /dev/null using 2>/dev/null .

If the cat command fails an OR operation can be used to provide a fallback - cat file.txt || exit 0 . In this case an exit code of 0 is returned even if there is an error.

Combining both the suppression of error output and the OR operation the following script returns a status code of 0 with no output even though the file does not exist.

#!/bin/bash

cat 'doesnotexist.txt' 2>/dev/null || exit 0

[Aug 26, 2019] Exit Codes - Shell Scripting Tutorial

Aug 26, 2019 | www.shellscript.sh

Exit codes are a number between 0 and 255, which is returned by any Unix command when it returns control to its parent process.
Other numbers can be used, but these are treated modulo 256, so exit -10 is equivalent to exit 246 , and exit 257 is equivalent to exit 1 .

These can be used within a shell script to change the flow of execution depending on the success or failure of commands executed. This was briefly introduced in Variables - Part II. Here we shall look in more detail at the available interpretations of exit codes.

Success is traditionally represented with exit 0 ; failure is normally indicated with a non-zero exit-code. This value can indicate different reasons for failure.
For example, GNU grep returns 0 on success, 1 if no matches were found, and 2 for other errors (syntax errors, non-existent input files, etc).
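
For example (a quick illustration, not part of the original tutorial):

$ grep root /etc/passwd > /dev/null; echo $?
0
$ grep nosuchuser /etc/passwd > /dev/null; echo $?
1
$ grep root /no/such/file > /dev/null 2>&1; echo $?
2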

We shall look at three different methods for checking error status, and discuss the pros and cons of each approach.

Firstly, the simple approach:


#!/bin/sh
# First attempt at checking return codes
USERNAME=`grep "^${1}:" /etc/passwd|cut -d":" -f1`
if [ "$?" -ne "0" ]; then
  echo "Sorry, cannot find user ${1} in /etc/passwd"
  exit 1
fi
NAME=`grep "^${1}:" /etc/passwd|cut -d":" -f5`
HOMEDIR=`grep "^${1}:" /etc/passwd|cut -d":" -f6`

echo "USERNAME: $USERNAME"
echo "NAME: $NAME"
echo "HOMEDIR: $HOMEDIR"

This script works fine if you supply a valid username in /etc/passwd. However, if you enter an invalid username, it does not do what you might at first expect - it keeps running, and just shows:
USERNAME: 
NAME: 
HOMEDIR:
Why is this? As mentioned, the $? variable is set to the return code of the last executed command . In this case, that is cut . cut had no problems which it feels like reporting - as far as I can tell from testing it, and reading the documentation, cut returns zero whatever happens! It was fed an empty string, and did its job - returned the first field of its input, which just happened to be the empty string.

So what do we do? If we have an error here, grep will report it, not cut . Therefore, we have to test grep 's return code, not cut 's.


#!/bin/sh
# Second attempt at checking return codes
grep "^${1}:" /etc/passwd > /dev/null 2>&1
if [ "$?" -ne "0" ]; then
  echo "Sorry, cannot find user ${1} in /etc/passwd"
  exit 1
fi
USERNAME=`grep "^${1}:" /etc/passwd|cut -d":" -f1`
NAME=`grep "^${1}:" /etc/passwd|cut -d":" -f5`
HOMEDIR=`grep "^${1}:" /etc/passwd|cut -d":" -f6`

echo "USERNAME: $USERNAME"
echo "NAME: $NAME"
echo "HOMEDIR: $HOMEDIR"

This fixes the problem for us, though at the expense of slightly longer code.
That is the basic way which textbooks might show you, but it is far from being all there is to know about error-checking in shell scripts. This method may not be the most suitable to your particular command-sequence, or may be unmaintainable. Below, we shall investigate two alternative approaches.

As a second approach, we can tidy this somewhat by putting the test into a separate function, instead of littering the code with lots of 4-line tests:


#!/bin/sh
# A Tidier approach

check_errs()
{
  # Function. Parameter 1 is the return code
  # Para. 2 is text to display on failure.
  if [ "${1}" -ne "0" ]; then
    echo "ERROR # ${1} : ${2}"
    # as a bonus, make our script exit with the right error code.
    exit ${1}
  fi
}

### main script starts here ###

grep "^${1}:" /etc/passwd > /dev/null 2>&1
check_errs $? "User ${1} not found in /etc/passwd"
USERNAME=`grep "^${1}:" /etc/passwd|cut -d":" -f1`
check_errs $? "Cut returned an error"
echo "USERNAME: $USERNAME"
check_errs $? "echo returned an error - very strange!"

This allows us to test for errors 3 times, with customised error messages, without having to write 3 individual tests. By writing the test routine once, we can call it as many times as we wish, creating a more intelligent script, at very little expense to the programmer. Perl programmers will recognise this as being similar to the die command in Perl.

As a third approach, we shall look at a simpler and cruder method. I tend to use this for building Linux kernels - simple automations which, if they go well, should just get on with it, but when things go wrong, tend to require the operator to do something intelligent (ie, that which a script cannot do!):


#!/bin/sh
cd /usr/src/linux && \
make dep && make bzImage && make modules && make modules_install && \
cp arch/i386/boot/bzImage /boot/my-new-kernel && cp System.map /boot && \
echo "Your new kernel awaits, m'lord."
This script runs through the various tasks involved in building a Linux kernel (which can take quite a while), and uses the && operator to check for success. To do this with if would involve:
#!/bin/sh
cd /usr/src/linux
if [ "$?" -eq "0" ]; then
  make dep 
    if [ "$?" -eq "0" ]; then
      make bzImage 
      if [ "$?" -eq "0" ]; then
        make modules 
        if [ "$?" -eq "0" ]; then
          make modules_install
          if [ "$?" -eq "0" ]; then
            cp arch/i386/boot/bzImage /boot/my-new-kernel
            if [ "$?" -eq "0" ]; then
              cp System.map /boot/
              if [ "$?" -eq "0" ]; then
                echo "Your new kernel awaits, m'lord."
              fi
            fi
          fi
        fi
      fi
    fi
  fi
fi

... which I, personally, find pretty difficult to follow.


The && and || operators are the shell's equivalent of AND and OR tests. These can be thrown together as above, or:


#!/bin/sh
cp /foo /bar && echo Success || echo Failed

This code will either echo

Success

or

Failed

depending on whether or not the cp command was successful. Look carefully at this; the construct is

command && command-to-execute-on-success || command-to-execute-on-failure

Only one command can be in each part. This method is handy for simple success / fail scenarios, but if you want to check on the status of the echo commands themselves, it is easy to quickly become confused about which && and || applies to which command. It is also very difficult to maintain. Therefore this construct is only recommended for simple sequencing of commands.

In earlier versions, I had suggested that you can use a subshell to execute multiple commands depending on whether the cp command succeeded or failed:

cp /foo /bar && ( echo Success ; echo Success part II; ) || ( echo Failed ; echo Failed part II )

But in fact, Marcel found that this does not work properly. The syntax for a subshell is:

( command1 ; command2; command3 )

The return code of the subshell is the return code of the final command ( command3 in this example). That return code will affect the overall command. So the output of this script:

cp /foo /bar && ( echo Success ; echo Success part II; /bin/false ) || ( echo Failed ; echo Failed part II )

is that it runs the Success part (because cp succeeded) and then, because /bin/false returns failure, it also executes the Failure part:

Success
Success part II
Failed
Failed part II

So if you need to execute multiple commands as a result of the status of some other condition, it is better (and much clearer) to use the standard if , then , else syntax.
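
For the cp example above, such a rewrite could look like this (a sketch); each branch can safely contain several commands, and a failure inside one branch cannot accidentally trigger the other:

#!/bin/sh
if cp /foo /bar; then
  echo Success
  echo "Success part II"
else
  echo Failed
  echo "Failed part II"
fi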

[Aug 26, 2019] linux - Avoiding accidental 'rm' disasters - Super User

Aug 26, 2019 | superuser.com


Mr_Spock ,May 26, 2013 at 11:30

Today, using sudo -s, I wanted to rm -R ./lib/, but I actually ran rm -R /lib/.

I had to reinstall my OS (Mint 15) and re-download and re-configure all my packages. Not fun.

How can I avoid similar mistakes in the future?

Vittorio Romeo ,May 26, 2013 at 11:55

First of all, stop executing everything as root . You never really need to do this. Only run individual commands with sudo if you need to. If a normal command doesn't work without sudo, just call sudo !! to execute it again.

If you're paranoid about rm , mv and other operations while running as root, you can add the following aliases to your shell's configuration file:

[ $UID = 0 ] && \
  alias rm='rm -i' && \
  alias mv='mv -i' && \
  alias cp='cp -i'

These will all prompt you for confirmation ( -i ) before removing a file or overwriting an existing file, respectively, but only if you're root (the user with ID 0).

Don't get too used to that though. If you ever find yourself working on a system that doesn't prompt you for everything, you might end up deleting stuff without noticing it. The best way to avoid mistakes is to never run as root and think about what exactly you're doing when you use sudo .

[Aug 26, 2019] bash - How to prevent rm from reporting that a file was not found

Aug 26, 2019 | stackoverflow.com



pizza ,Apr 20, 2012 at 21:29

I am using rm within a BASH script to delete many files. Sometimes the files are not present, so it reports many errors. I do not need this message. I have searched the man page for a command to make rm quiet, but the only option I found is -f , which from the description, "ignore nonexistent files, never prompt", seems to be the right choice, but the name does not seem to fit, so I am concerned it might have unintended consequences.
  • Is the -f option the correct way to silence rm ? Why isn't it called -q ?
  • Does this option do anything else?

Keith Thompson ,Dec 19, 2018 at 13:05

The main use of -f is to force the removal of files that would not be removed using rm by itself (as a special case, it "removes" non-existent files, thus suppressing the error message).

You can also just redirect the error message using

$ rm file.txt 2> /dev/null

(or your operating system's equivalent). You can check the value of $? immediately after calling rm to see if a file was actually removed or not.
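
For example (a sketch along the lines suggested above):

rm file.txt 2> /dev/null
if [ $? -eq 0 ]; then
    echo "file.txt was removed"
else
    echo "file.txt was not removed (missing, or permission denied)" >&2
fi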

vimdude ,May 28, 2014 at 18:10

Yes, -f is the most suitable option for this.

tripleee ,Jan 11 at 4:50

-f is the correct flag, but for the test operator, not rm
[ -f "$THEFILE" ] && rm "$THEFILE"

this ensures that the file exists and is a regular file (not a directory, device node etc...)

mahemoff ,Jan 11 at 4:41

\rm -f file will never report not found.

Idelic ,Apr 20, 2012 at 16:51

As far as rm -f doing "anything else", it does force ( -f is shorthand for --force ) silent removal in situations where rm would otherwise ask you for confirmation. For example, when trying to remove a file not writable by you from a directory that is writable by you.

Keith Thompson ,May 28, 2014 at 18:09

I had the same issue with csh. The only solution I had was to create a dummy file that matched the pattern before running "rm" in my script.

[Aug 26, 2019] shell - rm -rf return codes

Aug 26, 2019 | superuser.com



SheetJS ,Aug 15, 2013 at 2:50

Can anyone let me know the possible return codes for the command rm -rf other than zero, i.e. the possible return codes for failure cases? I want to know a more detailed reason for the failure of the command, not just that the command failed (returned something other than 0).

Adrian Frühwirth ,Aug 14, 2013 at 7:00

To see the return code, you can use echo $? in bash.

To see the actual meaning, some platforms (like Debian Linux) have the perror binary available, which can be used as follows:

$ rm -rf something/; perror $?
rm: cannot remove `something/': Permission denied
OS error code   1:  Operation not permitted

rm -rf automatically suppresses most errors; that is what -f is for. The most likely error you will see is 1 (Operation not permitted), which will happen if you don't have permission to remove the file.

Adrian Frühwirth ,Aug 14, 2013 at 7:21

grabbed coreutils from git....

looking at exit we see...

openfly@linux-host:~/coreutils/src $ cat rm.c | grep -i exit
  if (status != EXIT_SUCCESS)
  exit (status);
  /* Since this program exits immediately after calling 'rm', rm need not
  atexit (close_stdin);
          usage (EXIT_FAILURE);
        exit (EXIT_SUCCESS);
          usage (EXIT_FAILURE);
        error (EXIT_FAILURE, errno, _("failed to get attributes of %s"),
        exit (EXIT_SUCCESS);
  exit (status == RM_ERROR ? EXIT_FAILURE : EXIT_SUCCESS);

Now looking at the status variable....

openfly@linux-host:~/coreutils/src $ cat rm.c | grep -i status
usage (int status)
  if (status != EXIT_SUCCESS)
  exit (status);
  enum RM_status status = rm (file, &x);
  assert (VALID_STATUS (status));
  exit (status == RM_ERROR ? EXIT_FAILURE : EXIT_SUCCESS);

looks like there isn't much going on there with the exit status.

I see EXIT_FAILURE and EXIT_SUCCESS and not anything else.

so basically 0 and 1 / -1

To see specific exit() syscalls and how they occur in a process flow try this

openfly@linux-host:~/ $ strace rm -rf $whatever

fairly simple.

ref:

http://www.unix.com/man-page/Linux/EXIT_FAILURE/exit/

[Aug 22, 2019] How To Display Bash History Without Line Numbers - OSTechNix

Aug 22, 2019 | www.ostechnix.com

Method 2 – Using history command

We can use the history command's write option to print the history without numbers like below.

$ history -w /dev/stdout

Method 3 – Using history and cut commands

One such way is to use history and cut commands like below.

$ history | cut -c 8-

[Aug 22, 2019] Why Micro Data Centers Deliver Good Things in Small Packages by Calvin Hennick

Aug 22, 2019 | solutions.cdw.com

Enterprises are deploying self-contained micro data centers to power computing at the network edge.

Calvin Hennick is a freelance journalist who specializes in business and technology writing. He is a contributor to the CDW family of technology magazines.

The location for data processing has changed significantly throughout the history of computing. During the mainframe era, data was processed centrally, but client/server architectures later decentralized computing. In recent years, cloud computing centralized many processing workloads, but digital transformation and the Internet of Things are poised to move computing to new places, such as the network edge .

"There's a big transformation happening," says Thomas Humphrey, segment director for edge computing at APC . "Technologies like IoT have started to require that some local computing and storage happen out in that distributed IT architecture."

For example, some IoT systems require processing of data at remote locations rather than a centralized data center , such as at a retail store instead of a corporate headquarters.

To meet regulatory requirements and business needs, IoT solutions often need low latency, high bandwidth, robust security and superior reliability . To meet these demands, many organizations are deploying micro data centers: self-contained solutions that provide not only essential infrastructure, but also physical security, power and cooling and remote management capabilities.

"Digital transformation happens at the network edge, and edge computing will happen inside micro data centers ," says Bruce A. Taylor, executive vice president at Datacenter Dynamics . "This will probably be one of the fastest growing segments -- if not the fastest growing segment -- in data centers for the foreseeable future."

What Is a Micro Data Center?

Delivering the IT capabilities needed for edge computing represents a significant challenge for many organizations, which need manageable and secure solutions that can be deployed easily, consistently and close to the source of computing . Vendors such as APC have begun to create comprehensive solutions that provide these necessary capabilities in a single, standardized package.

"From our perspective at APC, the micro data center was a response to what was happening in the market," says Humphrey. "We were seeing that enterprises needed more robust solutions at the edge."

Most micro data center solutions rely on hyperconverged infrastructure to integrate computing, networking and storage technologies within a compact footprint . A typical micro data center also incorporates physical infrastructure (including racks), fire suppression, power, cooling and remote management capabilities. In effect, the micro data center represents a sweet spot between traditional IT closets and larger modular data centers -- giving organizations the ability to deploy professional, powerful IT resources practically anywhere .

Standardized Deployments Across the Country

Having robust IT resources at the network edge helps to improve reliability and reduce latency, both of which are becoming more and more important as analytics programs require that data from IoT deployments be processed in real time .

"There's always been edge computing," says Taylor. "What's new is the need to process hundreds of thousands of data points for analytics at once."

Standardization, redundant deployment and remote management are also attractive features, especially for large organizations that may need to deploy tens, hundreds or even thousands of micro data centers. "We spoke to customers who said, 'I've got to roll out and install 3,500 of these around the country,'" says Humphrey. "And many of these companies don't have IT staff at all of these sites." To address this scenario, APC designed standardized, plug-and-play micro data centers that can be rolled out seamlessly. Additionally, remote management capabilities allow central IT departments to monitor and troubleshoot the edge infrastructure without costly and time-intensive site visits.

In part because micro data centers operate in far-flung environments, security is of paramount concern. The self-contained nature of micro data centers ensures that only authorized personnel will have access to infrastructure equipment , and security tools such as video surveillance provide organizations with forensic evidence in the event that someone attempts to infiltrate the infrastructure.

How Micro Data Centers Can Help in Retail, Healthcare

Micro data centers make business sense for any organization that needs secure IT infrastructure at the network edge. But the solution is particularly appealing to organizations in fields such as retail, healthcare and finance , where IT environments are widely distributed and processing speeds are often a priority.

In retail, for example, edge computing will become more important as stores find success with IoT technologies such as mobile beacons, interactive mirrors and real-time tools for customer experience, behavior monitoring and marketing .

"It will be leading-edge companies driving micro data center adoption, but that doesn't necessarily mean they'll be technology companies," says Taylor. "A micro data center can power real-time analytics for inventory control and dynamic pricing in a supermarket."

In healthcare, digital transformation is beginning to touch processes and systems ranging from medication carts to patient records, and data often needs to be available locally; for example, in case of a data center outage during surgery. In finance, the real-time transmission of data can have immediate and significant financial consequences. And in both of these fields, regulations governing data privacy make the monitoring and security features of micro data centers even more important.

Micro data centers also have enormous potential to power smart city initiatives and to give energy companies a cost-effective way of deploying resources in remote locations , among other use cases.

"The proliferation of edge computing will be greater than anything we've seen in the past," Taylor says. "I almost can't think of a field where this won't matter."

Learn more about how solutions and services from CDW and APC can help your organization overcome its data center challenges.

Micro Data Centers Versus IT Closets

Think the micro data center is just a glorified update on the traditional IT closet? Think again.

"There are demonstrable differences," says Bruce A. Taylor, executive vice president at Datacenter Dynamics. "With micro data centers, there's a tremendous amount of computing capacity in a very small, contained space, and we just didn't have that capability previously ."

APC identifies three key differences between IT closets and micro data centers:

Difference #1: Uptime Expectations. APC notes that, of the nearly 3 million IT closets in the U.S., over 70 percent report outages directly related to human error. In an unprotected IT closet, problems can result from something as preventable as cleaning staff unwittingly disconnecting a cable. Micro data centers, by contrast, utilize remote monitoring, video surveillance and sensors to reduce downtime related to human error.

Difference #2: Cooling Configurations. The cooling of IT wiring closets is often approached both reactively and haphazardly, resulting in premature equipment failure. Micro data centers are specifically designed to assure cooling compatibility with anticipated loads.

Difference #3: Power Infrastructure. Unlike many IT closets, micro data centers incorporate uninterruptible power supplies, ensuring that infrastructure equipment has the power it needs to help avoid downtime.

[Aug 20, 2019] Is it possible to insert separator in midnight commander menu?

Jun 07, 2010 | superuser.com


okutane ,Jun 7, 2010 at 3:36

I want to insert some items into mc menu (which is opened by F2) grouped together. Is it possible to insert some sort of separator before them or put them into some submenu?
Probably not.
The format of the menu file is very simple. Lines that start with anything but
space or tab are considered entries for the menu (in order to be able to use
it like a hot key, the first character should be a letter). All the lines that
start with a space or a tab are the commands that will be executed when the
entry is selected.

But MC allows you to make multiple menu entries with the same shortcut and title, so you can make a menu entry that looks like a separator and does nothing, like:

a hello
  echo world
- --------
b world
  echo hello
- --------
c superuser
  ls /

In the F2 menu, these dashed entries render as separator-like lines between the real items.

[Aug 20, 2019] Midnight Commander, using date in User menu

Dec 31, 2013 | unix.stackexchange.com

user2013619 ,Dec 31, 2013 at 0:43

I would like to use MC (midnight commander) to compress the selected dir with date in its name, e.g: dirname_20131231.tar.gz

The command in the User menu is :

tar -czf dirname_`date '+%Y%m%d'`.tar.gz %d

The archive name is missing the date because %m and %d have another meaning in MC. I made an alias for the date, but it also doesn't work.

Does anybody solved this problem ever?

John1024 ,Dec 31, 2013 at 1:06

To escape the percent signs, double them:
tar -czf dirname_$(date '+%%Y%%m%%d').tar.gz %d

The above would compress the current directory (%d) to a file also in the current directory. If you want to compress the directory pointed to by the cursor rather than the current directory, use %f instead:

tar -czf %f_$(date '+%%Y%%m%%d').tar.gz %f

mc handles escaping of special characters so there is no need to put %f in quotes.

By the way, midnight commander's special treatment of percent signs occurs not just in the user menu file but also at the command line. This is an issue when using shell commands with constructs like ${var%.c} . At the command line, the same as in the user menu file, percent signs can be escaped by doubling them.
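
Putting it together, a user menu entry built around the escaped date might look like the sketch below (the hotkey, the title and the "+ t d" directory-only condition are arbitrary choices, not part of the original answer):

+ t d
a       Archive the directory under the cursor with a date suffix
        tar -czf %f_$(date '+%%Y%%m%%d').tar.gz %f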

[Aug 20, 2019] How to exclude file when using scp command recursively

Aug 12, 2019 | www.cyberciti.biz

I need to copy all the *.c files from a local laptop named hostA to hostB, including all directories. I am using the following scp command but do not know how to exclude specific files (such as *.out):

$ scp -r ~/projects/ user@hostB:/home/delta/projects/

How do I tell the scp command to exclude a particular file or directory at the Linux/Unix command line? One can use the scp command to securely copy files between hosts on a network. It uses ssh for data transfer and authentication. Typical scp command syntax is as follows:

scp file1 user@host:/path/to/dest/
scp -r /path/to/source/ user@host:/path/to/dest/
scp [options] /dir/to/source/ user@host:/dir/to/dest/

Scp exclude files

I don't think you can filter or exclude files when using the scp command. However, there is a great workaround to exclude files and copy them securely using ssh. This page explains how to filter or exclude files when using scp to copy a directory recursively.

How to use rsync command to exclude files

The syntax is:

rsync -av -e ssh --exclude='*.out' /path/to/source/ user@hostB:/path/to/dest/

Where,

  1. -a : Recurse into directories i.e. copy all files and subdirectories. Also, turn on archive mode and all other options (-rlptgoD)
  2. -v : Verbose output
  3. -e ssh : Use ssh for remote shell so everything gets encrypted
  4. --exclude='*.out' : exclude files matching PATTERN e.g. *.out or *.c and so on.
Example of rsync command

In this example, copy all files recursively from the ~/virt/ directory but exclude all *.new files:
$ rsync -av -e ssh --exclude='*.new' ~/virt/ root@centos7:/tmp
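
The same result can also be had without rsync by streaming a tar archive over ssh and letting tar do the excluding, since scp itself cannot filter. A sketch, reusing the host and paths from the question above:

tar -czf - --exclude='*.out' -C ~/projects . | \
  ssh user@hostB 'mkdir -p /home/delta/projects && tar -xzf - -C /home/delta/projects'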

[Aug 19, 2019] Moreutils - A Collection Of More Useful Unix Utilities - OSTechNix

Parallel is a really useful utility. RPM is installable from EPEL.
Aug 19, 2019 | www.ostechnix.com

... ... ...

On RHEL , CentOS , Scientific Linux :
$ sudo yum install epel-release
$ sudo yum install moreutils
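
Two quick examples of what moreutils gives you (a sketch; note that this is moreutils' parallel, not GNU parallel):

# sponge soaks up all of stdin before writing, so a file can be filtered "in place":
sort -u access.log | sponge access.log
# moreutils' parallel runs one command invocation per argument listed after "--":
parallel -j 4 gzip -- *.log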

[Aug 19, 2019] mc - Is there any documentation about the user-defined menu in midnight-commander - Unix Linux Stack Exchange

Aug 19, 2019 | unix.stackexchange.com



login ,Jun 11, 2014 at 13:13

I'd like to create my own user-defined menu for mc ( menu file). I see some lines like
+ t r & ! t t

or

+ t t

What does it mean?

goldilocks ,Jun 11, 2014 at 13:35

It is documented in the help, the node is "Edit Menu File" under "Command Menu"; if you scroll down you should find "Addition Conditions":

If the condition begins with '+' (or '+?') instead of '=' (or '=?') it is an addition condition. If the condition is true the menu entry will be included in the menu. If the condition is false the menu entry will not be included in the menu.

This is preceded by "Default conditions" (the = condition), which determine which entry will be highlighted as the default choice when the menu appears. Anyway, by way of example:

+ t r & ! t t

t r means if this is a regular file ("t(ype) r"), and ! t t means if the file has not been tagged in the interface.
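
So an entry guarded by that condition (a sketch; the hotkey and commands are arbitrary) would only appear in the F2 menu when the cursor is on a regular, untagged file:

+ t r & ! t t
v       Show size and type of the current file
        ls -l %f
        file %f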

Jarek

  • Midnight Commander man

On top of what has been written above, the man page can also be browsed on the Internet, e.g.: https://www.systutorials.com/docs/linux/man/1-mc/

Search for "Menu File Edit" .

Best regards, Jarek

[Aug 14, 2019] bash - PID background process - Unix Linux Stack Exchange

Aug 14, 2019 | unix.stackexchange.com



Raul ,Nov 27, 2016 at 18:21

As I understand pipes and commands, bash takes each command, spawns a process for each one and connects stdout of the previous one with the stdin of the next one.

For example, in "ls -lsa | grep feb", bash will create two processes, and connect the output of "ls -lsa" to the input of "grep feb".

When you execute a background command like "sleep 30 &" in bash, you get the pid of the background process running your command. Surprisingly for me, when I wrote "ls -lsa | grep feb &" bash returned only one PID.

How should this be interpreted? A process runs both "ls -lsa" and "grep feb"? Several process are created but I only get the pid of one of them?

Raul ,Nov 27, 2016 at 19:21

Spawns 2 processes. The & displays the PID of the second process. Example below.
$ echo $$
13358
$ sleep 100 | sleep 200 &
[1] 13405
$ ps -ef|grep 13358
ec2-user 13358 13357  0 19:02 pts/0    00:00:00 -bash
ec2-user 13404 13358  0 19:04 pts/0    00:00:00 sleep 100
ec2-user 13405 13358  0 19:04 pts/0    00:00:00 sleep 200
ec2-user 13406 13358  0 19:04 pts/0    00:00:00 ps -ef
ec2-user 13407 13358  0 19:04 pts/0    00:00:00 grep --color=auto 13358
$


When you run a job in the background, bash prints the process ID of its subprocess, the one that runs the command in that job. If that job happens to create more subprocesses, that's none of the parent shell's business.

When the background job is a pipeline (i.e. the command is of the form something1 | something2 & , and not e.g. { something1 | something2; } & ), there's an optimization which is strongly suggested by POSIX and performed by most shells including bash: each element of the pipeline is executed directly as a subprocess of the original shell. What POSIX mandates is that the variable $! is set to the PID of the last command in the pipeline in this case. In most shells, that last command is a subprocess of the original process, and so are the other commands in the pipeline.

When you run ls -lsa | grep feb , there are three processes involved: the one that runs the left-hand side of the pipe (a subshell that finishes setting up the pipe then executes ls ), the one that runs the right-hand side of the pipe (a subshell that finishes setting up the pipe then executes grep ), and the original process that waits for the pipe to finish.

You can watch what happens by tracing the processes:

$ strace -f -e clone,wait4,pipe,execve,setpgid bash --norc
execve("/usr/local/bin/bash", ["bash", "--norc"], [/* 82 vars */]) = 0
setpgid(0, 24084)                       = 0
bash-4.3$ sleep 10 | sleep 20 &

Note how the second sleep is reported and stored as $! , but the process group ID is the first sleep . Dash has the same oddity, ksh and mksh don't.
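
A quick way to see both values for a background pipeline (the PIDs below are made up):

$ sleep 100 | sleep 200 &
[1] 24105
$ echo $!          # PID of the last command in the pipeline (the second sleep)
24105
$ jobs -p          # PID of the process group leader (the first sleep)
24104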

[Aug 14, 2019] unix - How to get PID of process by specifying process name and store it in a variable to use further - Stack Overflow

Aug 14, 2019 | stackoverflow.com

Nidhi ,Nov 28, 2014 at 0:54

pids=$(pgrep <name>)

will get you the pids of all processes with the given name. To kill them all, use

kill -9 $pids

To refrain from using a variable and directly kill all processes with a given name issue

pkill -9 <name>

panticz.de ,Nov 11, 2016 at 10:11

On a single line...
pgrep -f process_name | xargs kill -9

flazzarini ,Jun 13, 2014 at 9:54

Another possibility would be to use pidof; it usually comes with most distributions. It will return the PID of a given process by using its name.
pidof process_name

This way you could store that information in a variable and execute kill -9 on it.

#!/bin/bash
pid=`pidof process_name`
kill -9 $pid

Pawel K ,Dec 20, 2017 at 10:27

Use grep [n]ame instead, so you don't need the extra grep -v grep step. Second, piping into xargs the way shown above can be problematic for running whatever is piped to it; you may have to use the -I (replace-string) option, otherwise you may have issues with the command.

ps axf | grep name | grep -v grep | awk '{print "kill -9 " $1}' or ps aux | grep [n]ame | awk '{print "kill -9 " $2}' - isn't that better?

[Aug 14, 2019] linux - How to get PID of background process - Stack Overflow

Highly recommended!
Aug 14, 2019 | stackoverflow.com



pixelbeat ,Mar 20, 2013 at 9:11

I start a background process from my shell script, and I would like to kill this process when my script finishes.

How to get the PID of this process from my shell script? As far as I can see variable $! contains the PID of the current script, not the background process.

WiSaGaN ,Jun 2, 2015 at 14:40

You need to save the PID of the background process at the time you start it:
foo &
FOO_PID=$!
# do other stuff
kill $FOO_PID

You cannot use job control, since that is an interactive feature and tied to a controlling terminal. A script will not necessarily have a terminal attached at all so job control will not necessarily be available.

Phil ,Dec 2, 2017 at 8:01

You can use the jobs -l command to get a particular job:
^Z
[1]+  Stopped                 guard

my_mac:workspace r$ jobs -l
[1]+ 46841 Suspended: 18           guard

In this case, 46841 is the PID.

From help jobs :

-l Report the process group ID and working directory of the jobs.

jobs -p is another option which shows just the PIDs.

Timo ,Dec 2, 2017 at 8:03

  • $$ is the current script's pid
  • $! is the pid of the last background process

Here's a sample transcript from a bash session ( %1 refers to the ordinal number of background process as seen from jobs ):

$ echo $$
3748

$ sleep 100 &
[1] 192

$ echo $!
192

$ kill %1

[1]+  Terminated              sleep 100

lepe ,Dec 2, 2017 at 8:29

An even simpler way to kill all child processes of a bash script:
pkill -P $$

The -P flag works the same way with pkill and pgrep - it gets child processes, only with pkill the child processes get killed and with pgrep child PIDs are printed to stdout.

Luis Ramirez ,Feb 20, 2013 at 23:11

this is what I have done. Check it out, hope it can help.
#!/bin/bash
#
# So something to show.
echo "UNO" >  UNO.txt
echo "DOS" >  DOS.txt
#
# Initialize Pid List
dPidLst=""
#
# Generate background processes
tail -f UNO.txt&
dPidLst="$dPidLst $!"
tail -f DOS.txt&
dPidLst="$dPidLst $!"
#
# Report process IDs
echo PID=$$
echo dPidLst=$dPidLst
#
# Show process on current shell
ps -f
#
# Start killing background processes from list
for dPid in $dPidLst
do
        echo killing $dPid. Process is still there.
        ps | grep $dPid
        kill $dPid
        ps | grep $dPid
        echo Just ran "'"ps"'" command, $dPid must not show again.
done

Then just run it as: ./bgkill.sh with proper permissions of course

root@umsstd22 [P]:~# ./bgkill.sh
PID=23757
dPidLst= 23758 23759
UNO
DOS
UID        PID  PPID  C STIME TTY          TIME CMD
root      3937  3935  0 11:07 pts/5    00:00:00 -bash
root     23757  3937  0 11:55 pts/5    00:00:00 /bin/bash ./bgkill.sh
root     23758 23757  0 11:55 pts/5    00:00:00 tail -f UNO.txt
root     23759 23757  0 11:55 pts/5    00:00:00 tail -f DOS.txt
root     23760 23757  0 11:55 pts/5    00:00:00 ps -f
killing 23758. Process is still there.
23758 pts/5    00:00:00 tail
./bgkill.sh: line 24: 23758 Terminated              tail -f UNO.txt
Just ran 'ps' command, 23758 must not show again.
killing 23759. Process is still there.
23759 pts/5    00:00:00 tail
./bgkill.sh: line 24: 23759 Terminated              tail -f DOS.txt
Just ran 'ps' command, 23759 must not show again.
root@umsstd22 [P]:~# ps -f
UID        PID  PPID  C STIME TTY          TIME CMD
root      3937  3935  0 11:07 pts/5    00:00:00 -bash
root     24200  3937  0 11:56 pts/5    00:00:00 ps -f

Phil ,Oct 15, 2013 at 18:22

You might also be able to use pstree:
pstree -p user

This typically gives a text representation of all the processes for the "user" and the -p option gives the process-id. It does not depend, as far as I understand, on having the processes be owned by the current shell. It also shows forks.

Phil ,Dec 4, 2018 at 9:46

pgrep can get you all of the child PIDs of a parent process. As mentioned earlier, $$ is the current script's PID. So, if you want a script that cleans up after itself, this should do the trick:
trap 'kill $( pgrep -P $$ | tr "\n" " " )' SIGINT SIGTERM EXIT

[Aug 10, 2019] Midnight Commander (mc) convenient hard links creation from user menu

Notable quotes:
"... You can create hard links and symbolic links using C-x l and C-x s keyboard shortcuts. However, these two shortcuts invoke two completely different dialogs. ..."
"... he had also uploaded a sample mc user menu script ( local copy ), which works wonderfully! ..."
Dec 03, 2015 | bogdan.org.ua

Midnight Commander (mc): convenient hard links creation from user menu

3rd December 2015

Midnight Commander is a convenient two-panel file manager with tons of features.

You can create hard links and symbolic links using C-x l and C-x s keyboard shortcuts. However, these two shortcuts invoke two completely different dialogs.

While for C-x s you get 2 pre-populated fields (path to the existing file, and path to the link – which is pre-populated with your opposite file panel path plus the name of the file under cursor; simply try it to see what I mean), for C-x l you only get 1 empty field: path of the hard link to create for a file under cursor. Symlink's behaviour would be much more convenient

Fortunately, a good man called Wiseman1024 created a feature request in the MC's bug tracker 6 years ago. Not only had he done so, but he had also uploaded a sample mc user menu script ( local copy ), which works wonderfully! You can select multiple files, then F2 l (lower-case L), and hard-links to your selected files (or a file under cursor) will be created in the opposite file panel. Great, thank you Wiseman1024 !
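
For orientation, a bare-bones entry in the same spirit (a sketch, not Wiseman1024's script, which also handles multiple tagged files) would hard-link the file under the cursor into the opposite panel's directory (%D):

+ t r
l       Hard-link the current file into the other panel
        ln %f %D/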

Word of warning: you must know what hard links are and what their limitations are before using this menu script. You also must check and understand the user menu code before adding it to your mc (by F9 C m u , and then pasting the script from the file).

Word of hope: 4 years ago Wiseman's feature request was assigned to the Future Releases version, so a more convenient C-x l will (sooner or later) become part of mc. Hopefully.

[Aug 10, 2019] How to check the file size in Linux-Unix bash shell scripting by Vivek Gite

Aug 10, 2019 | www.cyberciti.biz

The stat command shows information about a file. The syntax to get the file size with GNU/Linux stat is as follows:

stat -c %s "/etc/passwd"

OR

stat --format=%s "/etc/passwd"
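
And a small usage sketch, feeding the size into a comparison:

size=$(stat -c %s "/etc/passwd")
if [ "$size" -gt 1024 ]; then
    echo "/etc/passwd is larger than 1 KiB ($size bytes)"
else
    echo "/etc/passwd is $size bytes"
fi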

[Aug 10, 2019] bash - How to check size of a file - Stack Overflow

Aug 10, 2019 | stackoverflow.com

[ -n file.txt ] doesn't check its size; it checks that the string file.txt has non-zero length, so it will always succeed.

If you want to say " size is non-zero", you need [ -s file.txt ] .

To get a file's size , you can use wc -c to get the size ( file length) in bytes:

file=file.txt
minimumsize=90000
actualsize=$(wc -c <"$file")
if [ $actualsize -ge $minimumsize ]; then
    echo size is over $minimumsize bytes
else
    echo size is under $minimumsize bytes
fi

In this case, it sounds like that's what you want.

But FYI, if you want to know how much disk space the file is using, you could use du -k to get the size (disk space used) in kilobytes:

file=file.txt
minimumsize=90
actualsize=$(du -k "$file" | cut -f 1)
if [ $actualsize -ge $minimumsize ]; then
    echo size is over $minimumsize kilobytes
else
    echo size is under $minimumsize kilobytes
fi

If you need more control over the output format, you can also look at stat . On Linux, you'd start with something like stat -c '%s' file.txt , and on BSD/Mac OS X, something like stat -f '%z' file.txt .

--Mikel


Oz Solomon ,Jun 13, 2014 at 21:44

It surprises me that no one mentioned stat to check file size. Some methods are definitely better: using -s to find out whether the file is empty or not is easier than anything else if that's all you want. And if you want to find files of a certain size, then find is certainly the way to go.

I also like du a lot to get file size in kb, but, for bytes, I'd use stat :

size=$(stat -f%z $filename) # BSD stat

size=$(stat -c%s $filename) # GNU stat?

An alternative solution with awk and double parentheses:
FILENAME=file.txt
SIZE=$(du -sb $FILENAME | awk '{ print $1 }')

if ((SIZE<90000)) ; then 
    echo "less"; 
else 
    echo "not less"; 
fi

[Aug 07, 2019] Find files and tar them (with spaces)

Aug 07, 2019 | stackoverflow.com



porges ,Sep 6, 2012 at 17:43

Alright, so simple problem here. I'm working on a simple back up code. It works fine except if the files have spaces in them. This is how I'm finding files and adding them to a tar archive:
find . -type f | xargs tar -czvf backup.tar.gz

The problem is when the file has a space in the name because tar thinks that it's a folder. Basically is there a way I can add quotes around the results from find? Or a different way to fix this?

Brad Parks ,Mar 2, 2017 at 18:35

Use this:
find . -type f -print0 | tar -czvf backup.tar.gz --null -T -

It will:

  • deal with files with spaces, newlines, leading dashes, and other funniness
  • handle an unlimited number of files
  • won't repeatedly overwrite your backup.tar.gz like using tar -c with xargs will do when you have a large number of files


czubehead ,Mar 19, 2018 at 11:51

There could be another way to achieve what you want. Basically,
  1. Use the find command to output path to whatever files you're looking for. Redirect stdout to a filename of your choosing.
  2. Then tar with the -T option which allows it to take a list of file locations (the one you just created with find!)
    find . -name "*.whatever" > yourListOfFiles
    tar -cvf yourfile.tar -T yourListOfFiles
    

gsteff ,May 5, 2011 at 2:05

Try running:
    find . -type f | xargs -d "\n" tar -czvf backup.tar.gz

Caleb Kester ,Oct 12, 2013 at 20:41

Why not:
tar czvf backup.tar.gz *

Sure it's clever to use find and then xargs, but you're doing it the hard way.

Update: Porges has commented with a find-option that I think is a better answer than my answer, or the other one: find -print0 ... | xargs -0 ....

Kalibur x ,May 19, 2016 at 13:54

If you have multiple files or directories and you want to zip them into independent *.gz files, you can do this (the -type f and -mtime tests are optional):
find -name "httpd-log*.txt" -type f -mtime +1 -exec tar -vzcf {}.gz {} \;

This will compress

httpd-log01.txt
httpd-log02.txt

to

httpd-log01.txt.gz
httpd-log02.txt.gz

Frank Eggink ,Apr 26, 2017 at 8:28

Why not give something like this a try: tar cvf scala.tar `find src -name *.scala`

tommy.carstensen ,Dec 10, 2017 at 14:55

Another solution as seen here :
find var/log/ -iname "anaconda.*" -exec tar -cvzf file.tar.gz {} +

Robino ,Sep 22, 2016 at 14:26

The best solution seems to be to create a file list and then archive the files, because you can use other sources and do something else with the list.

For example this allows using the list to calculate size of the files being archived:

#!/bin/sh

backupFileName="backup-big-$(date +"%Y%m%d-%H%M")"
backupRoot="/var/www"
backupOutPath=""

archivePath=$backupOutPath$backupFileName.tar.gz
listOfFilesPath=$backupOutPath$backupFileName.filelist

#
# Make a list of files/directories to archive
#
echo "" > $listOfFilesPath
echo "${backupRoot}/uploads" >> $listOfFilesPath
echo "${backupRoot}/extra/user/data" >> $listOfFilesPath
find "${backupRoot}/drupal_root/sites/" -name "files" -type d >> $listOfFilesPath

#
# Size calculation
#
sizeForProgress=`
cat $listOfFilesPath | while read nextFile;do
    if [ ! -z "$nextFile" ]; then
        du -sb "$nextFile"
    fi
done | awk '{size+=$1} END {print size}'
`

#
# Archive with progress
#
## simple with dump of all files currently archived
#tar -czvf $archivePath -T $listOfFilesPath
## progress bar
sizeForShow=$(($sizeForProgress/1024/1024))
echo -e "\nRunning backup [source files are $sizeForShow MiB]\n"
tar -cPp -T $listOfFilesPath | pv -s $sizeForProgress | gzip > $archivePath

user3472383 ,Jun 27 at 1:11

Would add a comment to @Steve Kehlet post but need 50 rep (RIP).

For anyone that has found this post through numerous googling, I found a way to not only find specific files given a time range, but also NOT include the relative paths OR whitespaces that would cause tarring errors. (THANK YOU SO MUCH STEVE.)

find . -name "*.pdf" -type f -mtime 0 -printf "%f\0" | tar -czvf /dir/zip.tar.gz --null -T -
  1. . relative directory
  2. -name "*.pdf" look for pdfs (or any file type)
  3. -type f type to look for is a file
  4. -mtime 0 look for files created in last 24 hours
  5. -printf "%f\0" Regular -print0 OR -printf "%f" did NOT work for me. From man pages:

This quoting is performed in the same way as for GNU ls. This is not the same quoting mechanism as the one used for -ls and -fls. If you are able to decide what format to use for the output of find then it is normally better to use '\0' as a terminator than to use newline, as file names can contain white space and newline characters.

  1. -czvf create archive, filter the archive through gzip , verbosely list files processed, archive name

[Aug 06, 2019] Tar archiving that takes input from a list of files

Aug 06, 2019 | stackoverflow.com



Kurt McKee ,Apr 29 at 10:22

I have a file that contain list of files I want to archive with tar. Let's call it mylist.txt

It contains:

file1.txt
file2.txt
...
file10.txt

Is there a way I can issue TAR command that takes mylist.txt as input? Something like

tar -cvf allfiles.tar -[someoption?] mylist.txt

So that it is similar as if I issue this command:

tar -cvf allfiles.tar file1.txt file2.txt file10.txt

Stphane ,May 25 at 0:11

Yes:
tar -cvf allfiles.tar -T mylist.txt

drue ,Jun 23, 2014 at 14:56

Assuming GNU tar (as this is Linux), the -T or --files-from option is what you want.

Stphane ,Mar 1, 2016 at 20:28

You can also pipe in the file names which might be useful:
find /path/to/files -name \*.txt | tar -cvf allfiles.tar -T -

David C. Rankin ,May 31, 2018 at 18:27

Some versions of tar, for example, the default versions on HP-UX (I tested 11.11 and 11.31), do not include a command line option to specify a file list, so a decent work-around is to do this:
tar cvf allfiles.tar $(cat mylist.txt)

Jan ,Sep 25, 2015 at 20:18

On Solaris, you can use the option -I to read the filenames that you would normally state on the command line from a file. In contrast to the command line, this can create tar archives with hundreds of thousands of files (just did that).

So the example would read

tar -cvf allfiles.tar -I mylist.txt


For me on AIX, it worked as follows:
tar -L List.txt -cvf BKP.tar

[Aug 06, 2019] Shell command to tar directory excluding certain files-folders

Aug 06, 2019 | stackoverflow.com



Rekhyt ,Jun 24, 2014 at 16:06

Is there a simple shell command/script that supports excluding certain files/folders from being archived?

I have a directory that needs to be archived, with a subdirectory that has a number of very large files I do not need to back up.

Not quite solutions:

The tar --exclude=PATTERN command matches the given pattern and excludes those files, but I need specific files & folders to be ignored (full file path), otherwise valid files might be excluded.

I could also use the find command to create a list of files, exclude the ones I don't want to archive, and pass the list to tar, but that only works for a small number of files. I have tens of thousands.

I'm beginning to think the only solution is to create a file with a list of files/folders to be excluded, then use rsync with --exclude-from=file to copy all the files to a tmp directory, and then use tar to archive that directory.

Can anybody think of a better/more efficient solution?

EDIT: Charles Ma 's solution works well. The big gotcha is that the --exclude='./folder' MUST be at the beginning of the tar command. Full command (cd first, so backup is relative to that directory):

cd /folder_to_backup
tar --exclude='./folder' --exclude='./upload/folder2' -zcvf /backup/filename.tgz .

James O'Brien ,Nov 24, 2016 at 9:55

You can have multiple exclude options for tar so
$ tar --exclude='./folder' --exclude='./upload/folder2' -zcvf /backup/filename.tgz .

etc will work. Make sure to put --exclude before the source and destination items.

Johan Soderberg ,Jun 11, 2009 at 23:10

You can exclude directories with --exclude for tar.

If you want to archive everything except /usr you can use:

tar -zcvf /all.tgz / --exclude=/usr

In your case perhaps something like

tar -zcvf archive.tgz arc_dir --exclude=dir/ignore_this_dir

cstamas ,Oct 8, 2018 at 18:02

Possible options to exclude files/directories from backup using tar:

Exclude files using multiple patterns

tar -czf backup.tar.gz --exclude=PATTERN1 --exclude=PATTERN2 ... /path/to/backup

Exclude files using an exclude file filled with a list of patterns

tar -czf backup.tar.gz -X /path/to/exclude.txt /path/to/backup

Exclude files using tags by placing a tag file in any directory that should be skipped

tar -czf backup.tar.gz --exclude-tag-all=exclude.tag /path/to/backup

Anish Ramaswamy ,Apr 1 at 16:18

old question with many answers, but I found that none were quite clear enough for me, so I would like to add my try.

if you have the following structure

/home/ftp/mysite/

with following file/folders

/home/ftp/mysite/file1
/home/ftp/mysite/file2
/home/ftp/mysite/file3
/home/ftp/mysite/folder1
/home/ftp/mysite/folder2
/home/ftp/mysite/folder3

So, you want to make a tar file that contains everything inside /home/ftp/mysite (to move the site to a new server), but file3 is just junk, and everything in folder3 is also not needed, so we will skip those two.

we use the format

tar -czvf <name of tar file> <what to tar> <any excludes>

where c = create, z = zip, and v = verbose (you can see the files as they are entered, useful to make sure none of the files you exclude are being added), and f = file.

so, my command would look like this

cd /home/ftp/
tar -czvf mysite.tar.gz mysite --exclude='file3' --exclude='folder3'

Note that the files/folders excluded are relative to the root of your tar (I have tried a full path here relative to / but I cannot make that work).

hope this will help someone (and me next time I google it)

not2qubit ,Apr 4, 2018 at 3:24

You can use standard "ant notation" to exclude directories relative.
This works for me and excludes any .git or node_module directories.
tar -cvf myFile.tar --exclude=**/.git/* --exclude=**/node_modules/*  -T /data/txt/myInputFile.txt 2> /data/txt/myTarLogFile.txt

myInputFile.txt Contains:

/dev2/java
/dev2/javascript

GeertVc ,Feb 9, 2015 at 13:37

I've experienced that, at least with the Cygwin version of tar I'm using ("CYGWIN_NT-5.1 1.7.17(0.262/5/3) 2012-10-19 14:39 i686 Cygwin" on a Windows XP Home Edition SP3 machine), the order of options is important.

While this construction worked for me:

tar cfvz target.tgz --exclude='<dir1>' --exclude='<dir2>' target_dir

that one didn't work:

tar cfvz --exclude='<dir1>' --exclude='<dir2>' target.tgz target_dir

This, while tar --help reveals the following:

tar [OPTION...] [FILE]

So, the second command should also work, but apparently it doesn't seem to be the case...

Best rgds,

Scott Stensland ,Feb 12, 2015 at 20:55

This exclude pattern handles filename suffix like png or mp3 as well as directory names like .git and node_modules
tar --exclude={*.png,*.mp3,*.wav,.git,node_modules} -Jcf ${target_tarball}  ${source_dirname}

Michael ,May 18 at 23:29

I found this somewhere else so I won't take credit, but it worked better than any of the solutions above for my mac specific issues (even though this is closed):
tar zc --exclude __MACOSX --exclude .DS_Store -f <archive> <source(s)>

J. Lawson ,Apr 17, 2018 at 23:28

For those who have issues with it, some versions of tar would only work properly without the './' in the exclude value.
Tar --version

tar (GNU tar) 1.27.1

Command syntax that work:

tar -czvf ../allfiles-butsome.tar.gz * --exclude=acme/foo

These will not work:

$ tar -czvf ../allfiles-butsome.tar.gz * --exclude=./acme/foo
$ tar -czvf ../allfiles-butsome.tar.gz * --exclude='./acme/foo'
$ tar --exclude=./acme/foo -czvf ../allfiles-butsome.tar.gz *
$ tar --exclude='./acme/foo' -czvf ../allfiles-butsome.tar.gz *
$ tar -czvf ../allfiles-butsome.tar.gz * --exclude=/full/path/acme/foo
$ tar -czvf ../allfiles-butsome.tar.gz * --exclude='/full/path/acme/foo'
$ tar --exclude=/full/path/acme/foo -czvf ../allfiles-butsome.tar.gz *
$ tar --exclude='/full/path/acme/foo' -czvf ../allfiles-butsome.tar.gz *

Jerinaw ,May 6, 2017 at 20:07

For Mac OSX I had to do

tar -zcv --exclude='folder' -f theOutputTarFile.tar folderToTar

Note the -f after the --exclude=

Aaron Votre ,Jul 15, 2016 at 15:56

I agree the --exclude flag is the right approach.
$ tar --exclude='./folder_or_file' --exclude='file_pattern' --exclude='fileA'

A word of warning for a side effect that I did not find immediately obvious: The exclusion of 'fileA' in this example will search for 'fileA' RECURSIVELY!

Example:A directory with a single subdirectory containing a file of the same name (data.txt)

data.txt
config.txt
--+dirA
  |  data.txt
  |  config.docx
  • If using --exclude='data.txt' the archive will not contain EITHER data.txt file. This can cause unexpected results if archiving third party libraries, such as a node_modules directory.
  • To avoid this issue make sure to give the entire path, like --exclude='./dirA/data.txt'

Znik ,Nov 15, 2014 at 5:12

To avoid possible 'xargs: Argument list too long' errors due to the use of find ... | xargs ... when processing tens of thousands of files, you can pipe the output of find directly to tar using find ... -print0 | tar --null ... .
# archive a given directory, but exclude various files & directories 
# specified by their full file paths
find "$(pwd -P)" -type d \( -path '/path/to/dir1' -or -path '/path/to/dir2' \) -prune \
   -or -not \( -path '/path/to/file1' -or -path '/path/to/file2' \) -print0 | 
   gnutar --null --no-recursion -czf archive.tar.gz --files-from -
   #bsdtar --null -n -czf archive.tar.gz -T -

Mike ,May 9, 2014 at 21:29

After reading this thread, I did a little testing on RHEL 5 and here are my results for tarring up the abc directory:

This will exclude the directories error and logs and all files under the directories:

tar cvpzf abc.tgz abc/ --exclude='abc/error' --exclude='abc/logs'

Adding a wildcard after the excluded directory will exclude the files but preserve the directories:

tar cvpzf abc.tgz abc/ --exclude='abc/error/*' --exclude='abc/logs/*'

Alex B ,Jun 11, 2009 at 23:03

Use the find command in conjunction with the tar append (-r) option. This way you can add files to an existing tar in a single step, instead of a two pass solution (create list of files, create tar).
find /dir/dir -prune ... -o etc etc.... -exec tar rvf ~/tarfile.tar {} \;
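Note that -exec ... {} \; runs tar once per matched file. If your find supports the POSIX {} + terminator, batching many files into each tar invocation is usually much faster; a rough sketch (the pruned directory name and paths are placeholders):

find /dir/dir -type d -name .git -prune -o -type f -exec tar rvf ~/tarfile.tar {} +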

frommelmak ,Sep 10, 2012 at 14:08

You can also use one of the "--exclude-tag" options depending on your needs:
  • --exclude-tag=FILE
  • --exclude-tag-all=FILE
  • --exclude-tag-under=FILE

A directory containing the specified FILE is excluded: --exclude-tag still archives the directory entry and the tag FILE itself, --exclude-tag-under archives only the directory entry, and --exclude-tag-all omits the directory entirely.
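As a rough sketch (the tag-file name .no-backup and the layout are made up for illustration), dropping an empty marker file into a directory lets tar skip it wholesale:

touch project/node_modules/.no-backup
tar --exclude-tag-all=.no-backup -czf project.tar.gz project/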

camh ,Jun 12, 2009 at 5:53

You can use cpio(1) to create tar files. cpio takes the files to archive on stdin, so if you've already figured out the find command you want to use to select the files to archive, pipe it into cpio to create the tar file:
find ... | cpio -o -H ustar | gzip -c > archive.tar.gz

PicoutputCls ,Aug 21, 2018 at 14:13

With GNU tar 1.26, the --exclude needs to come after the archive file and backup directory arguments, should have no leading or trailing slashes, and prefers no quotes (single or double). So, relative to the PARENT directory to be backed up, it's:

tar cvfz /path_to/mytar.tgz ./dir_to_backup --exclude=some_path/to_exclude

user2553863 ,May 28 at 21:41

After reading all these good answers for different versions, and having solved the problem for myself, I think there are small details that are very important, unusual in general GNU/Linux use, that aren't stressed enough and deserve more than comments.

So I'm not going to try to answer the question for every case, but instead try to note where to look when things don't work.

IT IS VERY IMPORTANT TO NOTICE:

  1. THE ORDER OF THE OPTIONS MATTERS: it is not the same to put the --exclude before or after the archive file and the directories to back up. This is unexpected, at least to me, because in my experience with GNU/Linux commands the order of the options usually doesn't matter.
  2. Different tar versions expect these options in a different order: for instance, @Andrew's answer indicates that in GNU tar 1.26 and 1.28 the excludes come last, whereas in my case, with GNU tar 1.29, it's the other way around.
  3. THE TRAILING SLASHES MATTER: at least in GNU tar 1.29, there shouldn't be any.

In my case, for GNU tar 1.29 on Debian stretch, the command that worked was

tar --exclude="/home/user/.config/chromium" --exclude="/home/user/.cache" -cf file.tar  /dir1/ /home/ /dir3/

The quotes didn't matter; it worked with or without them.

I hope this will be useful to someone.

jørgensen ,Dec 19, 2015 at 11:10

Your best bet is to use find with tar, via xargs (to handle the large number of arguments). For example:
find / -print0 | xargs -0 tar cjf tarfile.tar.bz2

Ashwini Gupta ,Jan 12, 2018 at 10:30

tar -cvzf destination_folder source_folder -X /home/folder/excludes.txt

-X indicates a file which contains a list of filename patterns to be excluded from the backup. For instance, you can put *~ in this file to exclude any filenames ending with ~ from the backup.
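For example (a sketch; the patterns and file locations are arbitrary):

printf '%s\n' '*~' '*.o' 'core' > /home/folder/excludes.txt
tar -czvf backup.tar.gz -X /home/folder/excludes.txt source_folder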

George ,Sep 4, 2013 at 22:35

Possible redundant answer but since I found it useful, here it is:

While root on FreeBSD (i.e., using csh), I wanted to copy my whole root filesystem to /mnt, but without /usr and (obviously) /mnt. This is what worked (I am at /):

tar --exclude ./usr --exclude ./mnt --create --file - . | (cd /mnt && tar xvf -)

My whole point is that it was necessary (by putting the ./ ) to specify to tar that the excluded directories were part of the greater directory being copied.

My €0.02

t0r0X ,Sep 29, 2014 at 20:25

I had no luck getting tar to exclude a 5 GB subdirectory a few levels deep. In the end, I just used the Unix zip command. It was a lot easier for me.

So for this particular example from the original post
(tar --exclude='./folder' --exclude='./upload/folder2' -zcvf /backup/filename.tgz . )

The equivalent would be:

zip -r /backup/filename.zip . -x upload/folder/**\* upload/folder2/**\*

(NOTE: Here is the post I originally used that helped me https://superuser.com/questions/312301/unix-zip-directory-but-excluded-specific-subdirectories-and-everything-within-t )

RohitPorwal ,Jul 21, 2016 at 9:56

Check it out
tar cvpzf zip_folder.tgz . --exclude=./public --exclude=./tmp --exclude=./log --exclude=fileName

tripleee ,Sep 14, 2017 at 4:38

The following bash script should do the trick. It uses the answer given here by Marcus Sundman.
#!/bin/bash

echo -n "Please enter the name of the tar file you wish to create with out extension "
read nam

echo -n "Please enter the path to the directories to tar "
read pathin

echo tar -czvf $nam.tar.gz
excludes=`find $pathin -iname "*.CC" -exec echo "--exclude \'{}\'" \;|xargs`
echo $pathin

echo tar -czvf $nam.tar.gz $excludes $pathin

This will print out the command you need and you can just copy and paste it back in. There is probably a more elegant way to provide it directly to the command line.

Just change *.CC to any other common extension, file name, or glob pattern you want to exclude and this should still work.

EDIT

Just to add a little explanation: find generates a list of files matching the chosen pattern (in this case *.CC). This list is passed via xargs to the echo command. This prints --exclude 'one entry from the list'. The backslashes (\) are escape characters for the ' marks.

[Aug 06, 2019] bash - More efficient way to find tar millions of files - Stack Overflow

Aug 06, 2019 | stackoverflow.com

More efficient way to find & tar millions of files


theomega ,Apr 29, 2010 at 13:51

I've got a job running on my server at the command line prompt for a two days now:
find data/ -name filepattern-*2009* -exec tar uf 2009.tar {} \;

It is taking forever, and then some. Yes, there are millions of files in the target directory. (Each file is a measly 8 bytes in a well-hashed directory structure.) But just running...

find data/ -name filepattern-*2009* -print > filesOfInterest.txt

...takes only two hours or so. At the rate my job is running, it won't be finished for a couple of weeks. That seems unreasonable. Is there a more efficient way to do this? Maybe with a more complicated bash script?

A secondary question is: "why is my current approach so slow?"

Stu Thompson ,May 6, 2013 at 1:11

If you already did the second command that created the file list, just use the -T option to tell tar to read the file names from that saved file list. Running 1 tar command vs N tar commands will be a lot better.
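In other words, something along these lines (a sketch reusing the file list from the question; if the file names can contain newlines, use the -print0/--null variant shown in another answer below):

find data/ -name 'filepattern-*2009*' -print > filesOfInterest.txt
tar -uf 2009.tar -T filesOfInterest.txt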

Matthew Mott ,Jul 3, 2014 at 19:21

One option is to use cpio to generate a tar-format archive:
$ find data/ -name "filepattern-*2009*" | cpio -ov --format=ustar > 2009.tar

cpio works natively with a list of filenames from stdin, rather than a top-level directory, which makes it an ideal tool for this situation.

bashfu ,Apr 23, 2010 at 10:05

Here's a find-tar combination that can do what you want without the use of xargs or exec (which should result in a noticeable speed-up):
tar --version    # tar (GNU tar) 1.14 

# FreeBSD find (on Mac OS X)
find -x data -name "filepattern-*2009*" -print0 | tar --null --no-recursion -uf 2009.tar --files-from -

# for GNU find use -xdev instead of -x
gfind data -xdev -name "filepattern-*2009*" -print0 | tar --null --no-recursion -uf 2009.tar --files-from -

# added: set permissions via tar
find -x data -name "filepattern-*2009*" -print0 | \
    tar --null --no-recursion --owner=... --group=... --mode=... -uf 2009.tar --files-from -

Stu Thompson ,Apr 28, 2010 at 12:50

There is xargs for this:
find data/ -name filepattern-*2009* -print0 | xargs -0 tar uf 2009.tar

Guessing why it is slow is hard, as there is not much information: what is the structure of the directory, which filesystem do you use, how was it configured at creation time? Having millions of files in a single directory is quite a hard situation for most filesystems.

bashfu ,May 1, 2010 at 14:18

To correctly handle file names with weird (but legal) characters (such as newlines, ...) you should write your file list to filesOfInterest.txt using find's -print0:
find -x data -name "filepattern-*2009*" -print0 > filesOfInterest.txt
tar --null --no-recursion -uf 2009.tar --files-from filesOfInterest.txt

Michael Aaron Safyan ,Apr 23, 2010 at 8:47

The way you currently have things, you are invoking the tar command every single time it finds a file, which is unsurprisingly slow. Instead of paying the two hours of printing plus the time it takes to open the tar archive, check whether the files are out of date, and add them to the archive, you are actually multiplying those times together. You might have better success invoking the tar command once, after you have batched together all the names, possibly using xargs to achieve the invocation. By the way, I hope you are using 'filepattern-*2009*' and not filepattern-*2009*, as the stars will be expanded by the shell without quotes.

ruffrey ,Nov 20, 2018 at 17:13

There is a utility for this called tarsplitter .
tarsplitter -m archive -i folder/*.json -o archive.tar -p 8

will use 8 threads to archive the files matching "folder/*.json" into an output archive of "archive.tar"

https://github.com/AQUAOSOTech/tarsplitter

syneticon-dj ,Jul 22, 2013 at 8:47

Simplest (also removes the files after archive creation):
find *.1  -exec tar czf '{}.tgz' '{}' --remove-files \;

[Aug 06, 2019] backup - Fastest way combine many files into one (tar czf is too slow) - Unix Linux Stack Exchange

Aug 06, 2019 | unix.stackexchange.com

Fastest way combine many files into one (tar czf is too slow)


Gilles ,Nov 5, 2013 at 0:05

Currently I'm running tar czf to combine backup files. The files are in a specific directory.

But the number of files is growing. Using tar czf takes too much time (more than 20 minutes and counting).

I need to combine the files more quickly and in a scalable fashion.

I've found genisoimage , readom and mkisofs . But I don't know which is fastest and what the limitations are for each of them.

Rufo El Magufo ,Aug 24, 2017 at 7:56

You should check whether most of your time is being spent on CPU or on I/O. Either way, there are ways to improve it:

A: don't compress

You didn't mention "compression" in your list of requirements, so try dropping the "z" from your argument list: tar cf. This might speed things up a bit.

There are other techniques to speed up the process, like using "-N" to skip files you have already backed up before.
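For instance, GNU tar's -N (--newer) takes a date, so a follow-up run can skip anything older (a sketch; the date and path are placeholders):

tar -cf incremental.tar -N '2019-08-01' /path/to/backupdir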

B: backup the whole partition with dd

Alternatively, if you're backing up an entire partition, take a copy of the whole disk image instead. This would save processing and a lot of disk head seek time. tar and any other program working at a higher level have the overhead of reading and processing directory entries and inodes to find where the file content is, and of doing more disk head seeks, reading each file from a different place on the disk.

To backup the underlying data much faster, use:

dd bs=16M if=/dev/sda1 of=/another/filesystem

(This assumes you're not using RAID, which may change things a bit)


To repeat what others have said: we need to know more about the files that are being backed up. I'll go with some assumptions here.

Append to the tar file

If files are only being added to the directories (that is, no file is being deleted), make sure you are appending to the existing tar file rather than re-creating it every time. You can do this by specifying the existing archive filename in your tar command instead of a new one (or deleting the old one).
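A sketch of that approach (archive name and paths are placeholders); note that -u only appends files newer than their archived copy, and that appending only works on uncompressed archives:

tar -cf /backups/files.tar /data/files    # first run: create the archive
tar -uf /backups/files.tar /data/files    # later runs: add only new or changed files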

Write to a different disk

Reading from the same disk you are writing to may be killing performance. Try writing to a different disk to spread the I/O load. If the archive file needs to be on the same disk as the original files, move it afterwards.

Don't compress

Just repeating what @Yves said. If your backup files are already compressed, there's not much need to compress again. You'll just be wasting CPU cycles.

[Aug 04, 2019] 10 YAML tips for people who hate YAML Enable SysAdmin

Aug 04, 2019 | www.redhat.com

10 YAML tips for people who hate YAML

Do you hate YAML? These tips might ease your pain.

Posted June 10, 2019 | by Seth Kenlon (Red Hat)

There are lots of formats for configuration files: a list of values, key and value pairs, INI files, YAML, JSON, XML, and many more. Of these, YAML sometimes gets cited as a particularly difficult one to handle for a few different reasons. While its ability to reflect hierarchical values is significant and its minimalism can be refreshing to some, its Python-like reliance upon syntactic whitespace can be frustrating.

However, the open source world is diverse and flexible enough that no one has to suffer through abrasive technology, so if you hate YAML, here are 10 things you can (and should!) do to make it tolerable. Starting with zero, as any sensible index should.

0. Make your editor do the work

Whatever text editor you use probably has plugins to make dealing with syntax easier. If you're not using a YAML plugin for your editor, find one and install it. The effort you spend on finding a plugin and configuring it as needed will pay off tenfold the very next time you edit YAML.

For example, the Atom editor comes with a YAML mode by default, and while GNU Emacs ships with minimal support, you can add additional packages like yaml-mode to help.

Emacs in YAML and whitespace mode.

If your favorite text editor lacks a YAML mode, you can address some of your grievances with small configuration changes. For instance, the default text editor for the GNOME desktop, Gedit, doesn't have a YAML mode available, but it does provide YAML syntax highlighting by default and features configurable tab width:

Configuring tab width and type in Gedit.

With the drawspaces Gedit plugin package, you can make white space visible in the form of leading dots, removing any question about levels of indentation.

Take some time to research your favorite text editor. Find out what the editor, or its community, does to make YAML easier, and leverage those features in your work. You won't be sorry.

1. Use a linter

Ideally, programming languages and markup languages use predictable syntax. Computers tend to do well with predictability, so the concept of a linter was invented in 1978. If you're not using a linter for YAML, then it's time to adopt this 40-year-old tradition and use yamllint .

You can install yamllint on Linux using your distribution's package manager. For instance, on Red Hat Enterprise Linux 8 or Fedora :

$ sudo dnf install yamllint

Invoking yamllint is as simple as telling it to check a file. Here's an example of yamllint 's response to a YAML file containing an error:

$ yamllint errorprone.yaml
errorprone.yaml
23:10     error    syntax error: mapping values are not allowed here
23:11     error    trailing spaces  (trailing-spaces)

That's not a time stamp on the left. It's the error's line and column number. You may or may not understand what error it's talking about, but now you know the error's location. Taking a second look at the location often makes the error's nature obvious. Success is eerily silent, so if you want feedback based on the lint's success, you can add a conditional second command with a double-ampersand ( && ). In a POSIX shell, && fails if a command returns anything but 0, so upon success, your echo command makes that clear. This tactic is somewhat superficial, but some users prefer the assurance that the command did run correctly, rather than failing silently. Here's an example:

$ yamllint perfect.yaml && echo "OK"
OK

The reason yamllint is so silent when it succeeds is that it returns 0 errors when there are no errors.

2. Write in Python, not YAML

If you really hate YAML, stop writing in YAML, at least in the literal sense. You might be stuck with YAML because that's the only format an application accepts, but if the only requirement is to end up in YAML, then work in something else and then convert. Python, along with the excellent pyyaml library, makes this easy, and you have two methods to choose from: self-conversion or scripted.

Self-conversion

In the self-conversion method, your data files are also Python scripts that produce YAML. This works best for small data sets. Just write your JSON data into a Python variable, prepend an import statement, and end the file with a simple three-line output statement.

#!/usr/bin/python3	
import yaml 

d={
"glossary": {
  "title": "example glossary",
  "GlossDiv": {
	"title": "S",
	"GlossList": {
	  "GlossEntry": {
		"ID": "SGML",
		"SortAs": "SGML",
		"GlossTerm": "Standard Generalized Markup Language",
		"Acronym": "SGML",
		"Abbrev": "ISO 8879:1986",
		"GlossDef": {
		  "para": "A meta-markup language, used to create markup languages such as DocBook.",
		  "GlossSeeAlso": ["GML", "XML"]
		  },
		"GlossSee": "markup"
		}
	  }
	}
  }
}

f=open('output.yaml','w')
f.write(yaml.dump(d))
f.close()

Run the file with Python to produce a file called output.yaml:

$ python3 ./example.json
$ cat output.yaml
glossary:
  GlossDiv:
	GlossList:
	  GlossEntry:
		Abbrev: ISO 8879:1986
		Acronym: SGML
		GlossDef:
		  GlossSeeAlso: [GML, XML]
		  para: A meta-markup language, used to create markup languages such as DocBook.
		GlossSee: markup
		GlossTerm: Standard Generalized Markup Language
		ID: SGML
		SortAs: SGML
	title: S
  title: example glossary

This output is perfectly valid YAML, although yamllint does issue a warning that the file is not prefaced with --- , which is something you can adjust either in the Python script or manually.
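For what it's worth, pyyaml can emit the leading --- for you: in the script above, the dump call could become (one possible tweak):

f.write(yaml.dump(d, explicit_start=True))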

Scripted conversion

In this method, you write in JSON and then run a Python conversion script to produce YAML. This scales better than self-conversion, because it keeps the converter separate from the data.

Create a JSON file and save it as example.json . Here is an example from json.org :

{
	"glossary": {
	  "title": "example glossary",
	  "GlossDiv": {
		"title": "S",
		"GlossList": {
		  "GlossEntry": {
			"ID": "SGML",
			"SortAs": "SGML",
			"GlossTerm": "Standard Generalized Markup Language",
			"Acronym": "SGML",
			"Abbrev": "ISO 8879:1986",
			"GlossDef": {
			  "para": "A meta-markup language, used to create markup languages such as DocBook.",
			  "GlossSeeAlso": ["GML", "XML"]
			  },
			"GlossSee": "markup"
			}
		  }
		}
	  }
	}

Create a simple converter and save it as json2yaml.py . This script imports both the YAML and JSON Python modules, loads a JSON file defined by the user, performs the conversion, and then writes the data to output.yaml .

#!/usr/bin/python3
import yaml
import sys
import json

OUT=open('output.yaml','w')
IN=open(sys.argv[1], 'r')

JSON = json.load(IN)
IN.close()
yaml.dump(JSON, OUT)
OUT.close()

Save this script in your system path, and execute as needed:

$ ~/bin/json2yaml.py example.json
3. Parse early, parse often

Sometimes it helps to look at a problem from a different angle. If your problem is YAML, and you're having a difficult time visualizing the data's relationships, you might find it useful to restructure that data, temporarily, into something you're more familiar with.

If you're more comfortable with dictionary-style lists or JSON, for instance, you can convert YAML into a native Python data structure in a few commands using an interactive Python shell. Assume your YAML file is called mydata.yaml.

$ python3
>>> import yaml
>>> f=open('mydata.yaml','r')
>>> yaml.load(f)
{'document': 34843, 'date': datetime.date(2019, 5, 23), 'bill-to': {'given': 'Seth', 'family': 'Kenlon', 'address': {'street': '51b Mornington Road\n', 'city': 'Brooklyn', 'state': 'Wellington', 'postal': 6021, 'country': 'NZ'}}, 'words': 938, 'comments': 'Good article. Could be better.'}
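If you want actual JSON text rather than a Python dictionary, one more step does it (a sketch; default=str is there because YAML dates load as datetime objects, which the json module refuses to serialize):

>>> import json
>>> print(json.dumps(yaml.load(open('mydata.yaml')), indent=2, default=str))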

There are many other examples, and there are plenty of online converters and local parsers, so don't hesitate to reformat data when it starts to look more like a laundry list than markup.

4. Read the spec

After I've been away from YAML for a while and find myself using it again, I go straight back to yaml.org to re-read the spec. If you've never read the specification for YAML and you find YAML confusing, a glance at the spec may provide the clarification you never knew you needed. The specification is surprisingly easy to read, with the requirements for valid YAML spelled out with lots of examples in chapter 6 .

5. Pseudo-config

Before I started writing my book, Developing Games on the Raspberry Pi , Apress, 2019, the publisher asked me for an outline. You'd think an outline would be easy. By definition, it's just the titles of chapters and sections, with no real content. And yet, out of the 300 pages published, the hardest part to write was that initial outline.

YAML can be the same way. You may have a notion of the data you need to record, but that doesn't mean you fully understand how it's all related. So before you sit down to write YAML, try doing a pseudo-config instead.

A pseudo-config is like pseudo-code. You don't have to worry about structure or indentation, parent-child relationships, inheritance, or nesting. You just create iterations of data in the way you currently understand it inside your head.

A pseudo-config.

Once you've got your pseudo-config down on paper, study it, and transform your results into valid YAML.

6. Resolve the spaces vs. tabs debate

OK, maybe you won't definitively resolve the spaces-vs-tabs debate , but you should at least resolve the debate within your project or organization. Whether you resolve this question with a post-process sed script, text editor configuration, or a blood-oath to respect your linter's results, anyone in your team who touches a YAML project must agree to use spaces (in accordance with the YAML spec).

Any good text editor allows you to define a number of spaces instead of a tab character, so the choice shouldn't negatively affect fans of the Tab key.

Tabs and spaces are, as you probably know all too well, essentially invisible. And when something is out of sight, it rarely comes to mind until the bitter end, when you've tested and eliminated all of the "obvious" problems. An hour wasted to an errant tab or group of spaces is your signal to create a policy to use one or the other, and then to develop a fail-safe check for compliance (such as a Git hook to enforce linting).
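A minimal pre-commit hook along those lines might look like the sketch below (assuming yamllint is installed, the script is saved as .git/hooks/pre-commit, and it is made executable; file names without spaces are assumed):

#!/bin/sh
# refuse the commit if any staged YAML file fails yamllint
files=$(git diff --cached --name-only --diff-filter=ACM | grep -E '\.ya?ml$')
[ -z "$files" ] && exit 0
if ! yamllint $files; then
    echo "yamllint reported problems; fix them before committing" >&2
    exit 1
fi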

7. Less is more (or more is less)

Some people like to write YAML to emphasize its structure. They indent vigorously to help themselves visualize chunks of data. It's a sort of cheat to mimic markup languages that have explicit delimiters.

Here's a good example from Ansible's documentation :

# Employee records
-  martin:
        name: Martin D'vloper
        job: Developer
        skills:
            - python
            - perl
            - pascal
-  tabitha:
        name: Tabitha Bitumen
        job: Developer
        skills:
            - lisp
            - fortran
            - erlang

For some users, this approach is a helpful way to lay out a YAML document, while other users miss the structure for the void of seemingly gratuitous white space.

If you own and maintain a YAML document, then you get to define what "indentation" means. If blocks of horizontal white space distract you, then use the minimal amount of white space required by the YAML spec. For example, the same YAML from the Ansible documentation can be represented with fewer indents without losing any of its validity or meaning:

---
- martin:
   name: Martin D'vloper
   job: Developer
   skills:
   - python
   - perl
   - pascal
- tabitha:
   name: Tabitha Bitumen
   job: Developer
   skills:
   - lisp
   - fortran
   - erlang
8. Make a recipe

I'm a big fan of repetition breeding familiarity, but sometimes repetition just breeds repeated stupid mistakes. Luckily, a clever peasant woman experienced this very phenomenon back in 396 AD (don't fact-check me), and invented the concept of the recipe .

If you find yourself making YAML document mistakes over and over, you can embed a recipe or template in the YAML file as a commented section. When you're adding a section, copy the commented recipe and overwrite the dummy data with your new real data. For example:

---
# - <common name>:
#    name: Given Surname
#    job: JOB
#    skills:
#    - LANG
- martin:
   name: Martin D'vloper
   job: Developer
   skills:
   - python
   - perl
   - pascal
- tabitha:
   name: Tabitha Bitumen
   job: Developer
   skills:
   - lisp
   - fortran
   - erlang
9. Use something else

I'm a fan of YAML, generally, but sometimes YAML isn't the answer. If you're not locked into YAML by the application you're using, then you might be better served by some other configuration format. Sometimes config files outgrow themselves and are better refactored into simple Lua or Python scripts.

YAML is a great tool and is popular among users for its minimalism and simplicity, but it's not the only tool in your kit. Sometimes it's best to part ways. One of the benefits of YAML is that parsing libraries are common, so as long as you provide migration options, your users should be able to adapt painlessly.

If YAML is a requirement, though, keep these tips in mind and conquer your YAML hatred once and for all! What to read next

[Aug 04, 2019] Ansible IT automation for everybody Enable SysAdmin

Aug 04, 2019 | www.redhat.com


Ansible: IT automation for everybody

Kick the tires with Ansible and start automating with these simple tasks.

Posted July 31, 2019 | by Jörg Kastning


Ansible is an open source tool for software provisioning, application deployment, orchestration, configuration, and administration. Its purpose is to help you automate your configuration processes and simplify the administration of multiple systems. Thus, Ansible essentially pursues the same goals as Puppet, Chef, or Saltstack.

What I like about Ansible is that it's flexible, lean, and easy to start with. In most use cases, it keeps the job simple.

I chose to use Ansible back in 2016 because no agent has to be installed on the managed nodes -- a node is what Ansible calls a managed remote system. All you need to start managing a remote system with Ansible is SSH access to the system, and Python installed on it. Python is preinstalled on most Linux systems, and I was already used to managing my hosts via SSH, so I was ready to start right away. And if the day comes where I decide not to use Ansible anymore, I just have to delete my Ansible controller machine (control node) and I'm good to go. There are no agents left on the managed nodes that have to be removed.

Ansible offers two ways to control your nodes. The first one uses playbooks. These are simple ASCII files written in YAML ("YAML Ain't Markup Language," originally "Yet Another Markup Language"), which is easy to read and write. And second, there are the ad-hoc commands, which allow you to run a command or module without having to create a playbook first.

You organize the hosts you would like to manage and control in an inventory file, which offers flexible format options. For example, this could be an INI-like file that looks like:

mail.example.com

[webservers]
foo.example.com
bar.example.com

[dbservers]
one.example.com
two.example.com
three.example.com

[site1:children]
webservers
dbservers
Examples

I would like to give you two small examples of how to use Ansible. I started with these really simple tasks before I used Ansible to take control of more complex tasks in my infrastructure.

Ad-hoc: Check if Ansible can remote manage a system

As you might recall from the beginning of this article, all you need to manage a remote host is SSH access to it, and a working Python interpreter on it. To check if these requirements are fulfilled, run the following ad-hoc command against a host from your inventory:

[jkastning@ansible]$ ansible mail.example.com -m ping
mail.example.com | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
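Ad-hoc commands are not limited to the ping module. For example, you could run an arbitrary command against the whole webservers group from the inventory above (an illustrative sketch, not from the original article):

[jkastning@ansible]$ ansible webservers -m command -a "uptime"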
Playbook: Keep installed packages up to date

This example shows how to use a playbook to keep installed packages up to date. The playbook is an ASCII text file which looks like this:

---
# Make sure all packages are up to date
- name: Update your system
  hosts: mail.example.com
  tasks:
  - name: Make sure all packages are up to date
    yum:
      name: "*"
      state: latest

Now, we are ready to run the playbook:

[jkastning@ansible]$ ansible-playbook yum_update.yml 

PLAY [Update your system] **************************************************************************

TASK [Gathering Facts] *****************************************************************************
ok: [mail.example.com]

TASK [Make sure all packages are up to date] *******************************************************
ok: [mail.example.com]

PLAY RECAP *****************************************************************************************
mail.example.com : ok=2    changed=0    unreachable=0    failed=0

Here everything is OK and there is nothing else to do; all installed packages are already at the latest version.

It's simple: Try and use it

The examples above are quite simple and should only give you a first impression. But, from the start, it did not take me long to use Ansible for more complex tasks like the Poor Man's RHEL Mirror or the Ansible Role for RHEL Patchmanagment .

Today, Ansible saves me a lot of time and supports my day-to-day work tasks quite well. So what are you waiting for? Try it, use it, and feel a bit more comfortable at work.


[Aug 03, 2019] Creating Bootable Linux USB Drive with Etcher

Aug 03, 2019 | linuxize.com

There are several different applications available for free use which will allow you to flash ISO images to USB drives. In this example, we will use Etcher. It is a free and open-source utility for flashing images to SD cards & USB drives and supports Windows, macOS, and Linux.

Head over to the Etcher downloads page , and download the most recent Etcher version for your operating system. Once the file is downloaded, double-click on it and follow the installation wizard.

Creating Bootable Linux USB Drive using Etcher is a relatively straightforward process, just follow the steps outlined below:

  1. Connect the USB flash drive to your system and Launch Etcher.
  2. Click on the Select image button and locate the distribution .iso file.
  3. If only one USB drive is attached to your machine, Etcher will automatically select it. Otherwise, if more than one SD card or USB drive is connected, make sure you have selected the correct USB drive before flashing the image.

[Aug 02, 2019] linux - How to tar directory and then remove originals including the directory - Super User

Aug 02, 2019 | superuser.com

How to tar directory and then remove originals including the directory?


mit ,Dec 7, 2016 at 1:22

I'm trying to tar a collection of files in a directory called 'my_directory' and remove the originals by using the command:
tar -cvf files.tar my_directory --remove-files

However it is only removing the individual files inside the directory and not the directory itself (which is what I specified in the command). What am I missing here?

EDIT:

Yes, I suppose the 'remove-files' option is fairly literal, although I too found the man page unclear on that point. (In Linux I tend not to really distinguish much between directories and files, and sometimes forget that they are not the same thing.) It looks like the consensus is that it doesn't remove directories.

However, my major prompting point for asking this question stems from tar's handling of absolute paths. Because you must specify a relative path to the file(s) to be compressed, you must change to the parent directory to tar it properly. As I see it, using any kind of follow-on 'rm' command is potentially dangerous in that situation. Thus I was hoping to simplify things by making tar itself do the removal.

For example, imagine a backup script where the directory to backup (ie. tar) is included as a shell variable. If that shell variable value was badly entered, it is possible that the result could be deleted files from whatever directory you happened to be in last.

Arjan ,Feb 13, 2016 at 13:08

You are missing the part which says the --remove-files option removes files after adding them to the archive.

You could follow the archive and file-removal operation with a command like,

find /path/to/be/archived/ -depth -type d -empty -exec rmdir {} \;
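Put together, a sketch of the whole sequence (using the names from the question) might be:

tar --remove-files -cvf files.tar my_directory \
  && find my_directory -depth -type d -empty -exec rmdir {} \;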


Update: You may be interested in reading this short Debian discussion on,
Bug 424692: --remove-files complains that directories "changed as we read it" .

Kim ,Feb 13, 2016 at 13:08

Since the --remove-files option only removes files , you could try
tar -cvf files.tar my_directory && rm -R my_directory

so that the directory is removed only if the tar returns an exit status of 0

redburn ,Feb 13, 2016 at 13:08

Have you tried to put --remove-files directive after archive name? It works for me.
tar -cvf files.tar --remove-files my_directory

shellking ,Oct 4, 2010 at 19:58

source={directory argument}, e.g. source={FULL ABSOLUTE PATH}/my_directory
parent={parent directory of the argument}, e.g. parent={ABSOLUTE PATH of the directory containing 'my_directory'}
logFile={path to a run log that captures status messages}

Then you could execute something along the lines of:

cd ${parent}

tar cvf Tar_File.`date +%Y%m%d_%H%M%S` ${source}

if [ $? != 0 ]
then
  echo "Backup FAILED for ${source} at `date`" >> ${logFile}
else
  echo "Backup SUCCESS for ${source} at `date`" >> ${logFile}
  rm -rf ${source}
fi

mit ,Nov 14, 2011 at 13:21

This was probably a bug.

Also, the word "file" is ambiguous in this case. But because this is a command-line switch, I would expect it to cover directories too, because in Unix/Linux everything is a file, including a directory. (The other interpretation is of course also valid, but it makes no sense to keep the directories in such a case. I would consider it unexpected and confusing behavior.)

But I have found that on some distributions GNU tar actually removes the directory tree. Another indication that keeping the tree was a bug, or at least some workaround until they fixed it.

This is what I tried out on an ubuntu 10.04 console:

mit:/var/tmp$ mkdir tree1                                                                                               
mit:/var/tmp$ mkdir tree1/sub1                                                                                          
mit:/var/tmp$ > tree1/sub1/file1                                                                                        

mit:/var/tmp$ ls -la                                                                                                    
drwxrwxrwt  4 root root 4096 2011-11-14 15:40 .                                                                              
drwxr-xr-x 16 root root 4096 2011-02-25 03:15 ..
drwxr-xr-x  3 mit  mit  4096 2011-11-14 15:40 tree1

mit:/var/tmp$ tar -czf tree1.tar.gz tree1/ --remove-files

# AS YOU CAN SEE THE TREE IS GONE NOW:

mit:/var/tmp$ ls -la
drwxrwxrwt  3 root root 4096 2011-11-14 15:41 .
drwxr-xr-x 16 root root 4096 2011-02-25 03:15 ..
-rw-r--r--  1 mit   mit    159 2011-11-14 15:41 tree1.tar.gz                                                                   


mit:/var/tmp$ tar --version                                                                                             
tar (GNU tar) 1.22                                                                                                           
Copyright © 2009 Free Software Foundation, Inc.

If you want to see it on your machine, paste this into a console at your own risk:

tar --version                                                                                             
cd /var/tmp
mkdir -p tree1/sub1                                                                                          
> tree1/sub1/file1                                                                                        
tar -czf tree1.tar.gz tree1/ --remove-files
ls -la

[Jul 31, 2019] Mounting archives with FUSE and archivemount Linux.com The source for Linux information

Jul 31, 2019 | www.linux.com

Mounting archives with FUSE and archivemount

Author: Ben Martin

The archivemount FUSE filesystem lets you mount a possibly compressed tarball as a filesystem. Because FUSE exposes its filesystems through the Linux kernel, you can use any application to load and save files directly into such mounted archives. This lets you use your favourite text editor, image viewer, or music player on files that are still inside an archive file. Going one step further, because archivemount also supports write access for some archive formats, you can edit a text file directly from inside an archive too.

I couldn't find any packages that let you easily install archivemount for mainstream distributions. Its distribution includes a single source file and a Makefile.

archivemount depends on libarchive for the heavy lifting. Packages of libarchive exist for Ubuntu Gutsy and openSUSE, but not for Fedora. To compile libarchive you need to have uudecode installed; my version came with the sharutils package on Fedora 8. Once you have uudecode, you can build libarchive using the standard ./configure; make; sudo make install process.

With libarchive installed, either from source or from packages, simply invoke make to build archivemount itself. To install archivemount, copy its binary into /usr/local/bin and set permissions appropriately. A common setup on Linux distributions is to have a fuse group that a user must be a member of in order to mount a FUSE filesystem. It makes sense to have the archivemount command owned by this group as a reminder to users that they require that permission in order to use the tool. Setup is shown below:

# cp -av archivemount /usr/local/bin/
# chown root:fuse /usr/local/bin/archivemount
# chmod 550 /usr/local/bin/archivemount

To show how you can use archivemount I'll first create a trivial compressed tarball, then mount it with archivemount. You can then explore the directory structure of the contents of the tarball with the ls command, and access a file from the archive directly with cat.

$ mkdir -p /tmp/archivetest
$ cd /tmp/archivetest
$ date >datefile1
$ date >datefile2
$ mkdir subA
$ date >subA/foobar
$ cd /tmp
$ tar czvf archivetest.tar.gz archivetest
$ mkdir testing
$ archivemount archivetest.tar.gz testing
$ ls -l testing/archivetest/
-rw-r--r-- 0 root root 29 2008-04-02 21:04 datefile1
-rw-r--r-- 0 root root 29 2008-04-02 21:04 datefile2
drwxr-xr-x 0 root root 0 2008-04-02 21:04 subA
$ cat testing/archivetest/datefile2
Wed Apr 2 21:04:08 EST 2008

Next, I'll create a new file in the archive and read its contents back again. Notice that the first use of the tar command directly on the tarball does not show that the newly created file is in the archive. This is because archivemount delays all write operations until the archive is unmounted. After issuing the fusermount -u command, the new file is added to the archive itself.

$ date > testing/archivetest/new-file1
$ cat testing/archivetest/new-file1
Wed Apr 2 21:12:07 EST 2008
$ t