How to have safer rm command and avoid some blunders


Introduction

The command rm is both very important and very primitive. Like a blade, it can cause catastrophic damage to server filesystems in shaky hands.

Many Unix sysadmin horror stories are related to unintended consequences and side effects of classic Unix commands. One of the most prominent of such commands is rm, which has several idiosyncrasies that are either easily forgotten or simply unknown to novice sysadmins. For example, the behavior of rm -r .* can easily be forgotten from one encounter to another, especially if a sysadmin works with both Unix and Windows servers. The consequences, if it is executed as root, are tragic...

While most operations with rm, probably 99.9%, are successful, it is the remaining 0.1% that cause a lot of damage and are the subject of sysadmin folklore.

The command rm has relatively few options. Two of them come up repeatedly below and deserve special attention: -r (recursive removal) and the relatively new -I (prompt once before mass or recursive deletions).

Linux uses the GNU implementation of rm; an excerpt from its man page, listing the options available as of April 2018, is included in the Reference section at the end of this page.

Wiping out useful data by running rm with the wrong parameters or options is probably the most common blunder committed by both novice and seasoned sysadmins. It happens rarely, but the results are often devastating. This danger cannot be completely avoided, but in addition to having an up-to-date backup, there are several steps you can take to reduce the chance of major damage to your data:

  1. Do a backup before any dangerous operation. That is Rule No. 1.
  2. You can make the /usr directory write-protected by using a separate filesystem for it (/usr/local should be symlinked to /opt, where it belongs). Other filesystems can be mounted write-protected as appropriate.
  3. Try to use option -I (instead of -i). The -i option (which, alas, is aliased by default in Red Hat and other distributions) is so annoying that most users disable it, but there is a more modern and more useful option -I which is highly recommended; see the sketch right after this list. Unfortunately very few sysadmins know about its existence. It prompts once before removing more than three files, or when removing recursively. It is less intrusive than -i, while still giving protection against most mistakes (from the rm(1) man page):

    If the -I or --interactive=once option is given, and there are more than three files or the -r, -R, or --recursive are given, then rm prompts the user for whether to proceed with the entire operation. If the response is not affirmative, the entire command is aborted.

  4. Block the operation for system level 2 directories such as /etc or /usr (the GNU rm used in Linux now implements this measure only for the root directory /) using a wrapper such as safe-rm. Aliasing rm to the wrapper is safe because aliases work only in interactive sessions. In addition, you can block the deletion of directories that are on your favorite-directories list, if you have one. For example, if you use a level 2 directory /Fav for links to often-used directories, the wrapper can check whether the directory you are trying to delete is among those links.
  5. Create a list of files and directories that should never be erased and use it in the wrapper. Instead of creating the list you can mark them in some way that distinguishes them from other files. Linux does not have a "system" attribute for files, but the sticky bit on files can be used instead, or along with ownership by sys instead of root. You can also use extended attributes, or file timestamps with a special meaning for the seconds field, as in DOS: any file whose creation date has 00 in the seconds field should never be deleted. If you operate a server with SELinux in enforcing mode, you can use SELinux for file protection.
  6. Rename the directory to a special mnemonic name such as DELETEIT before applying the rm -r command, and then delete it using a special alias or function such as dirdel (this way you will never mistype the name of the directory); see the sketch right after this list. Many devastating blunders with rm occur when you think that you are operating on a backup copy of a system or an important user/application directory, but in reality operate on the "real thing", or when you accidentally type "/" in front of the name of a common system directory such as /etc or /var.
  7. Write the command in an editor first. Do not improvise with such commands on the command line. A weaker version of this rule is to write the options and arguments first and, after verification, add rm in front. Sysadmins automatically type the names of the most common system directories with a slash in front instead of the proper relative name (rm -r /etc instead of rm -r etc) because they are conditioned to type them this way by long practice; this sequence of keystrokes is "etched" into your brain and you produce it almost semi-consciously. If you first rename the directory to something else, for example DELETEIT, such an error cannot occur. A script can also check whether another directory with this name exists and warn you. Alternatively, simply move the directory to a /Trash folder which has, say, a 90-day expiration period for files in it.
  8. Use a protective script as an envelope for rm that takes into account the most dangerous situations typical of your environment. You can write a script such as rmm or saferm which, among other useful preventive checks, introduces a delay between hitting Enter and the start of the operation, and lists the affected files (or at least their number and the first five of them); a minimal sketch is shown in the wrapper section below.
  9. Verify the list of files you are deleting by first running find or ls on the wildcard you created. When deleting a large number of files, first generate the list of files and only then apply the rm command to this list.
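
As a small illustration of items 3 and 6 above, here is a minimal sketch for root's .bashrc (the names dirdel and DELETEIT are just mnemonic conventions used in this text, not standard tools, and the alias assumes GNU rm, which supports -I):

# Prompt once before mass or recursive deletions instead of per file (GNU rm only)
alias rm='rm -I'

# Delete a directory only after it has been explicitly renamed to DELETEIT,
# so a mistyped path cannot hit a live directory (illustrative convention only)
dirdel () {
    if [ "$(basename -- "$1")" != "DELETEIT" ]; then
        echo "dirdel: rename the directory to DELETEIT first" >&2
        return 1
    fi
    /bin/rm -r -- "$1"
}

Typical use is mv Soft DELETEIT && dirdel DELETEIT: the deletion step can no longer be pointed at the wrong directory by a slip of the fingers.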

Classic blunders with the rm command

There are several classic blunders committed using the rm command:

  1. Running an rm -R command without testing it with ls first. You should always test complex rm -R commands using ls -Rl (-R, --recursive means process subdirectories recursively).
  2. Mistyping a complex path or file name in an rm command. It is always better to use copy and paste for the directories and files used in an rm command, as this helps to avoid various typos. If you are doing a recursive deletion, it is useful to do a "dry run" with find or ls first to see the files. Here are two examples of such typos:

    rm -rf /tmp/foo/bar/ *

    instead of

    rm -rf /tmp/foo/bar/*

    ===

    rm -r /etc

    instead of

    rm -r etc

    This is actually an interesting type of error: /etc is typed daily, so often that it is kind of ingrained in a sysadmin's head and can be typed subconsciously instead of etc.

  3. Using .* in a pattern. This essentially makes rm climb into the parent directory, because ".*" also matches ".." (the parent directory).
  4. Using directory/* without realizing that directory is actually a symbolic link.
  5. Accidentally hitting Enter before finishing typing the line containing the rm command.

These cases can be viewed as shortcomings of the rm implementation: it should automatically refuse to delete system directories such as /etc (and any system file) unless the -f flag is specified. Also, Unix does not have a "system" attribute for files, although the sticky bit on files can be used instead, or along with ownership by sys instead of root.


Writing a wrapper like "saferm"

Unix lacks a "system" attribute for files, although the sticky bit on files or the immutable attribute can be used instead. It is wise to use a wrapper for rm. There are several more or less workable approaches to writing such a wrapper.

In view of the danger of rm -r unleashed as root, it is wise to use a wrapper for rm and alias rm to this wrapper in root's .bash_profile or whatever profile file you use for root. Aliases are not used in non-interactive sessions, so this will not affect any scripts; and in command-line sessions rm is usually typed without a path (which creates another set of dangers).

There are several more or less useful features you might wish to experiment with when writing such a wrapper: a blacklist of protected directories, a check that only a single argument is given when -r is used, a preview of the files that will be affected, and a delay before the actual deletion starts.

Of course, each environment is unique, and such a wrapper should take into account the idiosyncrasies of that environment.
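
For illustration, here is a minimal sketch of such a wrapper (the name saferm, the blacklist, the 5-second delay and the five-entry preview are assumptions made for this example, not a standard tool). It combines several of the checks discussed on this page: a blacklist of protected directories, a single-argument rule for recursive deletion, a preview of what will be affected, and a delay before the real /bin/rm is invoked.

#!/bin/bash
# saferm -- illustrative protective wrapper around /bin/rm
PROTECTED="/ /etc /usr /usr/lib /var /boot /home"   # assumed blacklist; adjust for your environment

recursive=no
nonopt=0
for a in "$@"; do
    case "$a" in
        --recursive) recursive=yes ;;
        -*r*|-*R*)   case "$a" in --*) ;; *) recursive=yes ;; esac ;;   # catches -r, -R, -rf, -fr ...
    esac
    case "$a" in -*) ;; *) nonopt=$((nonopt+1)) ;; esac
done

# Refuse to touch blacklisted directories, whether given as absolute or relative paths
for a in "$@"; do
    t=$(readlink -f -- "$a" 2>/dev/null)
    for p in $PROTECTED; do
        if [ "$t" = "$p" ]; then
            echo "saferm: refusing to remove protected directory '$a'" >&2
            exit 1
        fi
    done
done

# With -r/-R allow exactly one non-option argument (see the note on multiple arguments below)
if [ "$recursive" = yes ] && [ "$nonopt" -ne 1 ]; then
    echo "saferm: recursive removal of $nonopt arguments blocked; pass exactly one" >&2
    exit 1
fi

# Show up to five of the affected entries and give the operator a chance to abort
ls -ld -- "$@" 2>/dev/null | head -5
echo "saferm: invoking /bin/rm in 5 seconds, press Ctrl-C to abort"
sleep 5
exec /bin/rm "$@"

Alias rm to such a script in root's profile; batch jobs are unaffected because aliases are not expanded in non-interactive shells.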

If you know Perl, you can use Safe-rm as the initial implementation and enhance it. It is licensed under the GPL and, I think, is packaged in Ubuntu. Here is an excerpt from its README:

How to use
-----------

Once you have installed safe-rm on your system (see INSTALL), you will need to
fill the system-wide or user-specific blacklists with the paths that you'd like
to protect against accidental deletion.

The system-wide blacklist lives in /etc/safe-rm.conf and you should probably add
paths like these:

  /
  /etc
  /usr
  /usr/lib
  /var

The user-specific blacklist lives in ~/.config/safe-rm and could include things like:

  /home/username/documents
  /home/username/documents/*
  /home/username/.mozilla


Other approaches
-----------------

If you want more protection than what safe-rm can offer, here are a few suggestions.

You could of course request confirmation every time you delete a file by putting this in
your /etc/bash.bashrc:

  alias rm='rm -i'

But this won't protect you from getting used to always saying yes, or from accidentally
using 'rm -rf'.

Or you could make use of the Linux filesystem "immutable" attribute by marking (as root)
each file you want to protect:

  chattr +i file

Of course this is only usable on filesystems which support this feature.

Here are two projects which allow you to recover recently deleted files by trapping
all unlink(), rename() and open() system calls through the LD_PRELOAD facility:

  delsafe
  http://homepage.esoterica.pt/~nx0yew/delsafe/

  libtrashcan
  http://hpux.connect.org.uk/hppd/hpux/Development/Libraries/libtrash-0.2/readme.html

There are also projects which implement the FreeDesktop.org trashcan spec. For example:

  trash-cli
  http://code.google.com/p/trash-cli/

Finally, this project is a fork of GNU coreutils and adds features similar to safe-rm
to the rm command directly:

  http://wiki.github.com/d5h/rmfd/

Back up files before executing the rm command, if you are deleting a directory and it is not that large

Backups before execution of an rm command are important. For example, making a backup of the /etc directory on a modern server takes a couple of seconds, but can save a lot of nerves in situations that otherwise can be devastating. In the example above you would erase all files and subdirectories in the /etc directory. Modern flavors of Unix usually prevent erasing / but not /etc; the GNU version of rm used by Linux refuses to remove the root directory, but offers no such protection for level 2 system directories.

You can also move a directory to a /Trash folder instead of deleting it, if it is relatively small, and purge it later: a cron script can delete files in this folder that are older than, say, 7 days, or trim it when its total size exceeds a certain threshold.
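
A minimal sketch of such a cleanup job (the /Trash location and the 7-day window are just the conventions used in this text; assumes GNU find):

# /etc/cron.daily/trash-expire -- purge top-level entries in /Trash untouched for more than 7 days
find /Trash -mindepth 1 -maxdepth 1 -mtime +7 -exec rm -rf -- {} +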

More on blunders when deleting a directory from a backup

I once automatically typed /etc instead of etc while trying to delete a directory to free space in a backup area on a production server (/etc is probably engraved in a sysadmin's head, as it is typed so often, and can be substituted for etc subconsciously). I realized the mistake and cancelled the command, but it was a fast server and one third of /etc was gone. The rest of the day was spoiled... Actually not completely: I learned quite a bit about the behavior of AIX in this situation and about the structure of the AIX /etc directory that day, so each such disaster is actually a great learning experience, almost like a one-day training course ;-). But it is much less nerve-wracking to get this knowledge from a course... Another interesting thing is that having a backup was not enough in this case: the enterprise backup software stopped working on the damaged server. The same was true for telnet and ssh. And this was a remote server in a datacenter across the country. I restored the directory on another, non-production server (overwriting the /etc directory of that second box with the help of operations; tell me about cascading errors and Murphy's law :-). Then netcat helped to transfer the tar file.

If you are working as root and performing dangerous operations, never type the path in the command; copy it from the screen. If you can copy the command from history instead of typing it, do it! Always check whether the directory you are trying to delete is a symbolic link; such symbolic links are often used in home directories to simplify navigation...

Such blunders are really persistent, as often-used directories are typed in a "semiconscious", "autopilot" mode, and you realize the blunder only after you hit Enter. For example, many years after the first blunder, I similarly typed rm -r /Soft instead of rm -r Soft in a backup directory. I was in a hurry, so I did not even bother to load my profile and worked from a "bare root" account which did not have the saferm feature I was talking about earlier. Unfortunately for me, /Soft was a huge directory with all kinds of software in it, so a backup was not practical (I needed to free space), and the server was very fast: in the couple of seconds that elapsed before I realized the blunder, approximately half a gigabyte of files and directories was wiped out. Luckily I had the previous day's backup.

In such cases network services with authentication stop working, and the only way to transfer files is a CD/DVD, a USB drive, or netcat. That's why it is useful to have netcat on servers: netcat is the last-resort file transfer program when services with authentication, such as ftp or scp, stop working. It is especially useful if the datacenter is remote.


The saying is that experience keeps the most expensive school, but fools are unable to learn in any other ;-). Please read the classic sysadmin horror stories.

A simple extra space can produce horrible consequences:

cd /usr/lib
ls /tmp/foo/bar

I typed
rm -rf /tmp/foo/bar/ *

instead of

rm -rf /tmp/foo/bar/*

The system doesn't run very well without all of its libraries...

You can block such behavior by requiring that, if the -r option is given, rm gets one and only one argument. That should be part of the functionality of your saferm script (the sketch in the wrapper section above includes such a check).

Important class of subtle Unix errors: dot-star errors

Another popular class of recursive rm errors are the so-called dot-star errors, which often happen when rm is used with find. Novice sysadmins usually do not realize that '.*' also matches '..', often with disastrous consequences. If you are in any doubt about how a wildcard will expand, use the echo command to see the result.
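
For example, in a typical bash session the expansion can be previewed safely (the dotfiles shown are just an example; note the leading . and .. entries):

$ echo .*
. .. .bash_history .bashrc .ssh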

Using "convenience" symlinks to other (typically deeply nested) directories inside a directory and forgetting about them

The rm command does not follow symlinks, and neither does find by default. But if you invoke rm via find with the -L (follow symlinks) option and -exec, find will descend into symlinked directories, so if the directory you are cleaning contains a symlink to some important system or data directory, all files under the link target will be deleted.
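
A hypothetical illustration of the difference (the backup/ directory and the *.log pattern are made up for this example):

find backup/ -name '*.log' -exec rm {} +       # symlinked directories inside backup/ are not entered
find -L backup/ -name '*.log' -exec rm {} +    # -L follows symlinks: files under the link targets are deleted too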

That is the danger of "convenience symlinks", which are used to simplify access to deeply nested directories from home directories. Using aliases or shell functions might be a better approach. If you do use such symlinks, create them only in a level 2 directory set up specially for this purpose, such as /Fav, and protect this directory in your saferm script.

Disastrous typos in the name of the directory or wildcard: an unintended, automatically entered space

There are some exotic typos that can lead you into trouble, especially with the -r option. One of them is an unintended space:

 rm /home/joeuser/old *

instead of

rm /home/joeuser/old*

Or, similarly:

 rm * .bak

instead of

rm *.bak
In all cases when you are deleting multiple files, it makes sense to get a listing of them first via the ls command and then replace the command name with rm. This approach is safer than typing the pattern yourself, especially when you are working on a remote server.
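
For example (the *.bak pattern is hypothetical):

ls -l *.bak     # inspect what the wildcard really matches
rm *.bak        # then recall the same line from history and replace ls -l with rm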

Such mistakes are even more damaging if you use the -r option. For example:

rm -rf /tmp/foo/bar/ *

instead of

rm -rf /tmp/foo/bar/*

NOTE: it is prudent for your saferm script to block execution of rm commands that have multiple arguments when the -r option is used.

Root deletion protection

To remove a file you must have write permission on the folder where it is stored. Sun introduced "rm -rf /" protection in Solaris 10, first released in 2005. Upon executing the command, the system now reports that the removal of / is not allowed.

Shortly after, the same functionality was introduced into the FreeBSD version of the rm utility.

GNU rm  refuses to execute rm -rf /  if the --preserve-root  option is given, which has been the default since version 6.4 of GNU Core Utilities was released in 2006.

No such protection exists for critical system directories like /etc, but you can imitate it by putting a file named "-i" into such directories, or by using a wrapper for interactive use of the command. The trick is based on the fact that the file -i will appear among the arguments produced by shell expansion (typically first, since the shell sorts the expansion) and will be treated by rm as an option, triggering confirmation prompts. You can also consider any directory owned by a system account to be a system directory and refuse to delete it in your safe-rm script.
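
For example, to plant such a stopper file in a directory (a well-known trick rather than real protection; it only helps against an interactive rm * typed in that directory):

cd /etc
touch ./-i      # creates a file literally named "-i"; a careless "rm *" later picks it up as the -i option and starts prompting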

There are also other approaches, such as moving files to a /Trash directory instead of deleting them.

Using the echo command to see the expansion of your wildcards, if any

To understand what will actually be executed after shell expansion, preface your rm command with echo. For example, if there is a filename starting with a dash, you will receive a rather confusing message from rm:

$ rm *
rm: unknown option -- -
usage: rm [-f|-i] [-dPRrvW] file ...

To find out what caused it, prefix the command with echo:

echo rm *

You can generate the list of files for rm dynamically, piping it via xargs if it is too long. For example:

ls | grep copy                         # check which files will be affected

ls | grep copy | xargs -d '\n' rm -f   # remove all the files (GNU xargs; -d '\n' copes with spaces in names)
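
A variant that is also safe for file names containing newlines, assuming GNU find and xargs (the *copy* pattern is just an example):

find . -maxdepth 1 -name '*copy*' -print0 | xargs -0 -r rm -f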

The simplest attempt to defend yourself against accidentally deleting files is to create an alias or function for rm along the lines of:

alias rm="rm -i"

or use the function

function rm { /bin/rm -i "$@" ; }

Unfortunately, this encourages a tendency to mindlessly pound y and the Return key to confirm removals, until you realize that you have just blown past the file you needed to keep. Therefore it is better to create a function which first displays the files and then deletes them:

function rm
{
   if [ -z "$PS1" ] ; then      # non-interactive shell: behave like the real rm
     /bin/rm "$@"
     return $?
   fi
   ls -l "$@"
   printf 'remove [ny]? '
   read                          # the reply lands in $REPLY
   if [ "_$REPLY" = "_y" ]; then
      /bin/rm "$@"
      return $?
   fi
   echo '(rm command cancelled)'
   return 1
}

If this function is included in your startup dot files, it will be found ahead of the system binary /bin/rm in the search order and can break sloppily written batch jobs. That is why it checks that PS1 is non-empty, but this is not a 100% reliable test.

Using files starting with "-" as stoppers against unintended use of rm; undeletable files

The rm command accepts the -- option which will cause it to stop processing flag options from that point forward. This allows the removal of file names that begin with a dash (-).

rm -- -filename

Accidental creation of files starting with a dash is actually a common problem for novice sysadmins. The problem is that if you type rm -foo, the rm command treats the filename as an option.

There are two simple ways to delete such a file. One was shown above and is based on the "--" end-of-options marker. The second is to use a relative or absolute pathname:

rm ./-badfile
rm /home/user/-badfile

Here the debugging trick we mentioned above, prefacing the rm command with echo, really helps to understand what is happening.

To delete a file with non-printable characters in its name, you can also use the shell wildcard "?" for each bad character:

rm bad?file?name

Another way to delete files that have control characters in their names is to use the -i option and an asterisk, which gives you the option of removing each file in the directory — even the ones whose names you can't type.

% rm -i *
rm: remove faq.html (y/n)? n
rm: remove foo (y/n)? y
%

The -i option may also be helpful when you are dealing with files with Unicode characters that appear to be regular letters in your locale, but that don't match patterns or names you use otherwise.

A great way to discover files with control characters in them is to use the -q option to the Unix ls command (some systems also support a useful -b option). You can, for example, alias the ls command:

alias lsq='ls -q '

Files that have control characters in their filenames will then appear with question marks:

ls -q f*
faq.html                fo?o

Blunders in using rm with find

This is a class of blunders that deserves a page of its own. See Typical Errors In Using Find.

You need to be extra careful running rm in the -exec option of the find command. There are many horror stories in which people deleted large parts of their filesystem due to typos or sloppy find parameters. Again, it is prudent to test the command first using ls to see the actual list of files to be deleted. Sometimes it is quite different from what you intended :-).

I often first generate the list of files (for example into rm.lst), inspect it, and in a separate step pipe it into rm:

cat rm.lst | xargs -t -n1 rm

The -t flag causes the xargs command to display each command before running it, so you can see what is happening.

Old News ;-)

[Oct 22, 2018] linux - If I rm -rf a symlink will the data the link points to get erased, too?

Notable quotes:
"... Put it in another words, those symlink-files will be deleted. The files they "point"/"link" to will not be touch. ..."
Oct 22, 2018 | unix.stackexchange.com

user4951 ,Jan 25, 2013 at 2:40

This is the contents of the /home3 directory on my system:
./   backup/    hearsttr@  lost+found/  randomvi@  sexsmovi@
../  freemark@  investgr@  nudenude@    romanced@  wallpape@

I want to clean this up but I am worried because of the symlinks, which point to another drive.

If I say rm -rf /home3 will it delete the other drive?

John Sui

rm -rf /home3 will delete all files and directory within home3 and home3 itself, which include symlink files, but will not "follow"(de-reference) those symlink.

Put it in another words, those symlink-files will be deleted. The files they "point"/"link" to will not be touch.

[Oct 22, 2018] Does rm -rf follow symbolic links?

Jan 25, 2012 | superuser.com
I have a directory like this:
$ ls -l
total 899166
drwxr-xr-x 12 me scicomp       324 Jan 24 13:47 data
-rw-r--r--  1 me scicomp     84188 Jan 24 13:47 lod-thin-1.000000-0.010000-0.030000.rda
drwxr-xr-x  2 me scicomp       808 Jan 24 13:47 log
lrwxrwxrwx  1 me scicomp        17 Jan 25 09:41 msg -> /home/me/msg

And I want to remove it using rm -r .

However I'm scared rm -r will follow the symlink and delete everything in that directory (which is very bad).

I can't find anything about this in the man pages. What would be the exact behavior of running rm -rf from a directory above this one?

LordDoskias, Jan 25, 2012 at 16:43

How hard it is to create a dummy dir with a symlink pointing to a dummy file and execute the scenario? Then you will know for sure how it works! –

hakre ,Feb 4, 2015 at 13:09

X-Ref: If I rm -rf a symlink will the data the link points to get erased, too? ; Deleting a folder that contains symlinks – hakre, Feb 4, 2015 at 13:09

Susam Pal ,Jan 25, 2012 at 16:47

Example 1: Deleting a directory containing a soft link to another directory.
susam@nifty:~/so$ mkdir foo bar
susam@nifty:~/so$ touch bar/a.txt
susam@nifty:~/so$ ln -s /home/susam/so/bar/ foo/baz
susam@nifty:~/so$ tree
.
├── bar
│   └── a.txt
└── foo
    └── baz -> /home/susam/so/bar/

3 directories, 1 file
susam@nifty:~/so$ rm -r foo
susam@nifty:~/so$ tree
.
└── bar
    └── a.txt

1 directory, 1 file
susam@nifty:~/so$

So, we see that the target of the soft-link survives.

Example 2: Deleting a soft link to a directory

susam@nifty:~/so$ ln -s /home/susam/so/bar baz
susam@nifty:~/so$ tree
.
├── bar
│   └── a.txt
└── baz -> /home/susam/so/bar

2 directories, 1 file
susam@nifty:~/so$ rm -r baz
susam@nifty:~/so$ tree
.
└── bar
    └── a.txt

1 directory, 1 file
susam@nifty:~/so$

Only, the soft link is deleted. The target of the soft-link survives.

Example 3: Attempting to delete the target of a soft-link

susam@nifty:~/so$ ln -s /home/susam/so/bar baz
susam@nifty:~/so$ tree
.
├── bar
│   └── a.txt
└── baz -> /home/susam/so/bar

2 directories, 1 file
susam@nifty:~/so$ rm -r baz/
rm: cannot remove 'baz/': Not a directory
susam@nifty:~/so$ tree
.
├── bar
└── baz -> /home/susam/so/bar

2 directories, 0 files

The file in the target of the symbolic link does not survive.

The above experiments were done on a Debian GNU/Linux 9.0 (stretch) system.

Wyrmwood ,Oct 30, 2014 at 20:36

rm -rf baz/* will remove the contents – Wyrmwood Oct 30 '14 at 20:36

Buttle Butkus ,Jan 12, 2016 at 0:35

Yes, if you do rm -rf [symlink], then the contents of the original directory will be obliterated! Be very careful. – Buttle Butkus Jan 12 '16 at 0:35

frnknstn ,Sep 11, 2017 at 10:22

Your example 3 is incorrect! On each system I have tried, the file a.txt will be removed in that scenario. – frnknstn Sep 11 '17 at 10:22

Susam Pal ,Sep 11, 2017 at 15:20

@frnknstn You are right. I see the same behaviour you mention on my latest Debian system. I don't remember on which version of Debian I performed the earlier experiments. In my earlier experiments on an older version of Debian, either a.txt must have survived in the third example or I must have made an error in my experiment. I have updated the answer with the current behaviour I observe on Debian 9 and this behaviour is consistent with what you mention. – Susam Pal Sep 11 '17 at 15:20

Ken Simon ,Jan 25, 2012 at 16:43

Your /home/me/msg directory will be safe if you rm -rf the directory from which you ran ls. Only the symlink itself will be removed, not the directory it points to.

The only thing I would be cautious of, would be if you called something like "rm -rf msg/" (with the trailing slash.) Do not do that because it will remove the directory that msg points to, rather than the msg symlink itself.


"The only thing I would be cautious of, would be if you called something like "rm -rf msg/" (with the trailing slash.) Do not do that because it will remove the directory that msg points to, rather than the msg symlink itself." - I don't find this to be true. See the third example in my response below. – Susam Pal Jan 25 '12 at 16:54

Andrew Crabb ,Nov 26, 2013 at 21:52

I get the same result as @Susam ('rm -r symlink/' does not delete the target of symlink), which I am pleased about as it would be a very easy mistake to make. – Andrew Crabb Nov 26 '13 at 21:52


rm should remove files and directories. If the file is a symbolic link, the link is removed, not the target. rm will not interpret a symbolic link. Consider, for example, what the behavior should be when deleting 'broken links': rm exits with 0, not with a non-zero status to indicate failure.

[Jun 13, 2010] Unix Blog -- rm command - Argument List too long

Some days back I had problems deleting a large number of files in a folder.

While using the rm command it would say:

$rm -f /home/sriram/tmp/*.txt
-bash: /bin/rm: Argument list too long

This is not a limitation of the rm command, but a kernel limitation on the size of the parameters of the command. Since I was performing shell globbing (selecting all the files with extension .txt), this meant that the size of the command line arguments became bigger with the number of the files.

One solution is to either run the rm command inside a loop and delete each individual result, or to use find with xargs to pass the file list to the delete command. I prefer the find solution, so I changed the rm line inside the script to:

find /home/$u/tmp/ -name '*.txt' -print0 | xargs -0 rm

this does the trick and solves the problem. One final touch was to not receive warnings if there were no actual files to delete, like:

rm: too few arguments

Try `rm --help' for more information.

For this I added the -f parameter to rm (-f, --force = ignore nonexistent files, never prompt). Since this was running in a shell script from cron, the prompt was not needed anyway, so no problem here. The final line I used to replace the rm one was:

find /home/$u/tmp/ -name '*.txt' -print0 | xargs -0 rm -f

You can also do this in the directory:
ls | xargs rm

Alternatively you could have used a one line find:

find /home/$u/tmp/ -name '*.txt' -exec rm {} \; -print

TidBITS A Mac User's Guide to the Unix Command Line, Part 1

Warning! The command line is not without certain risks. Unlike when you work in the Finder, some tasks you carry out are absolute and cannot be undone. The command I am about to present, rm, is very powerful. It removes files permanently and completely instead of just putting them in the Trash. You can't recover files after deleting them with rm, so use it with great care, and always use the -i option, as explained below, so Terminal asks you to confirm deleting each file.

Your prompt should look something like this, showing that you are still inside the Test directory you created earlier:

   [Walden:~/Test] kirk%

Type the following:

   % rm -i testfile

The rm command removes files and directories, in this case the file testfile. The -i option tells Terminal to run the rm command in interactive mode, asking you to make sure you want to delete the file. Terminal asks:

   remove testfile?

Type y for yes, then press Return or Enter and the file is removed. If you wanted to leave it there, you could just type n for no, or press Return.

We should check to make sure the file is gone:

   % ls

After typing ls, you should just see a prompt. Terminal doesn't tell you that the directory is empty, but it shows what's in the directory: nothing.

Now, move up into your Home folder. Type:

   % cd ..

This is the same cd command that we used earlier to change directories. Here, the command tells the Terminal to go up in the directory hierarchy to the next directory (the .. is a shortcut for the parent directory); in this case, that is your Home directory.

Type ls again to see what's in this directory:

   % ls

You should see something like this:

   Desktop    Library  Music     Public  Test
   Documents  Movies   Pictures  Sites

The Test directory is still there, but using rm, it's easy to delete it by typing:

   % rm -d -i Test

The -d option tells rm to remove directories. When Terminal displays:

   remove Test?

Type y, then press Return or Enter. (If you didn't remove testfile, as explained above, the rm command won't delete the directory because it won't, by default, delete directories that are not empty.)

Make one final check to see if the directory has been deleted.

   % ls
 
   Desktop    Library  Music     Public
   Documents  Movies   Pictures  Sites

The Answer Gang 62 about Unix command rm

I have a question about rm command. Would you please tell me how to remove all the files excepts certain files like anything ended with .c?

[Mike] The easiest way (meaning it will work on any Unix systems anywhere), is to move those files to a temporary directory, then delete "everything", then move those files back.


mkdir /tmp/tdir
mv *.c /tmp/tdir
rm *
mv /tmp/tdir/* .
rmdir /tmp/tdir

[Ben] The above would work, but seems rather clunky, as well as needing a lot of typing.

[Mike] Yes, it's not something you'd want to do frequently. However, if you don't know a lot about Unix commands, and are hesitant to write a shell script which deletes a lot of files, it's a good trick to remember.

[Ben] It's true that it is completely portable; the only questionable part of my suggestion immediately below might be the "-1" in the "ls", but all the versions of "ls" with which I'm familiar support the "single column display" function. It would be very easy to adapt.

My preference would be to use something like

rm $(ls -1|grep -v "\.c$")

because the argument given to "grep" can be a regular expression. Given that, you can say things like "delete all files except those that end in 'htm' or 'html'", "delete all except '*.c', '*.h', and '*.asm'", as well as a broad range of other things. If you want to eliminate the error messages given by the directories (rm can't delete them without other switches), as well as making "rm" ask you for confirmation on each file, you could use a "fancier" version -

rm -i $(ls -AF1|grep -v "/$"|grep -v "\.c$")

Note that in the second argument - the only one that should be changed - the "\" in front of the ".c" is essential: it makes the "." a literal period rather than a single-character match. As an example, lets try the above with different options.

In a directory that contains


testc
test-c
testcx
test.cdx
test.c

".c" means "'c' preceded by any character" - NO files would be deleted.

"\.c" means "'c' preceded by a period" - deletes the first 3 files.

"\.c$" means "'c' preceded by a period and followed by the end of the line" - all the files except the last one would be gone.

Here's a script that would do it all in one shot, including showing a list of files to be deleted:

See attached misc/tag/rmx.bash.txt

[Dan] Which works pretty well up to some limit, at which things break down and exit due to $skip being too long.

For a less interactive script which can remove inordinate numbers of files, something containing:

ls -AF1 | grep -v /$ | grep -v $1 | xargs rm

allows "xargs" to collect as many files as it can on a command line, and invoke "rm" repeatedly.

It would be prudent to try the thing out in a directory containing only expendable files with names similar to the intended victims/saved.

[Ben] Possibly a good idea for some systems. I've just tried it on a directory with 1,000 files in it (created just for the purpose) and deleted 990 of them in one shot, then recreated them and deleted only 9 of them. Everything worked fine, but testing is indeed a prudent thing to do.

[Dan] Or with some typists. I've more than once had to resort to backups due to a slip of the fingers (the brain?) with an "rm" expression.

[Ben] <*snort*> Never happened to me. No sir. Uh-uh. <Anxious glance to make sure the weekly backup disk is where it should be>

I just put in that "to be deleted" display for, umm, practice. Yeah.

<LOL> Good point, Dan.

Got that sinking feeling that often follows an overzealous rm? Our system doctor has a prescription.

by Mark Komarinski

There was recently a bit of traffic on the Usenet newsgroups about the need for (or lack of) an undelete command for Linux. If you were to type rm * tmp instead of rm *tmp and such a command were available, you could quickly recover your files.

The main problem with this idea from a filesystem standpoint involves the differences between the way DOS handles its filesystems and the way Linux handles its filesystems.

Let's look at how DOS handles its filesystems. When DOS writes a file to a hard drive (or a floppy drive) it begins by finding the first block that is marked "free" in the File Allocation Table (FAT). Data is written to that block, the next free block is searched for and written to, and so on until the file has been completely written. The problem with this approach is that the file can be in blocks that are scattered all over the drive. This scattering is known as fragmentation and can seriously degrade your filesystem's performance, because now the hard drive has to look all over the place for file fragments. When files are deleted, the space is marked "free" in the FAT and the blocks can be used by another file.

The good thing about this is that, if you delete a file that is out near the end of your drive, the data in those blocks may not be overwritten for months. In this case, it is likely that you will be able to get your data back for a reasonable amount of time afterwards.

Linux (actually, the second extended filesystem that is almost universally used under Linux) is slightly smarter in its approach to fragmentation. It uses several techniques to reduce fragmentation, involving segmenting the filesystem into independently-managed groups, temporarily reserving large chunks of contiguous space for files, and starting the search for new blocks to be added to a file from the current end of the file, rather than from the start of the filesystem. This greatly decreases fragmentation and makes file access much faster. The only case in which significant fragmentation occurs is when large files are written to an almost-full filesystem, because the filesystem is probably left with lots of free spaces too small to tuck files into nicely.

Because of this policy for finding empty blocks for files, when a file is deleted, the (probably large) contiguous space it occupied becomes a likely place for new files to be written. Also, because Linux is a multi-user, multitasking operating system, there is often more file-creating activity going on than under DOS, which means that those empty spaces where files used to be are more likely to be used for new files. "Undeleteability" has been traded off for a very fast filesystem that normally never needs to be defragmented.

The easiest answer to the problem is to put something in the filesystem that says a file was just deleted, but there are four problems with this approach:

  1. You would need to write a new filesystem or modify a current one (i.e. hack the kernel).
  2. How long should a file be marked "deleted"?
  3. What happens when a hard drive is filled with files that are "deleted"?
  4. What kind of performance loss and fragmentation will occur when files have to be written around "deleted" space?

Each of these questions can be answered and worked around. If you want to do it, go right ahead and try--the ext2 filesystem has space reserved to help you. But I have some solutions that require zero lines of C source code.

I have two similar solutions, and your job as a system administrator is to determine which method is best for you. The first method is a user-by-user no-root-needed approach, and the other is a system-wide approach implemented by root for all (or almost all) users.

The user-by-user approach can be done by anyone with shell access and it doesn't require root privileges, only a few changes to your .profile and .login or .bashrc files and a bit of drive space. The idea is that you alias the rm command to move the files to another directory. Then, when you log in the next time, those files that were moved are purged from the filesystem using the real /bin/rm command. Because the files are not actually deleted by the user, they are accessible until the next login. If you're using the bash shell, add this to your .bashrc file:

alias waste='/bin/rm'
alias rm='mv -t ~/.rm'    # GNU mv: the file names given to the alias are appended after -t
and in your
.profile:
if [ -x ~/.rm ];
 then
   /bin/rm -r ~/.rm
   mkdir ~/.rm
   chmod og-r ~/.rm
 else
   mkdir ~/.rm
   chmod og-r ~/.rm
 fi


System-Wide

The second method is similar to the user-by-user method, but everything is done in /etc/profile and cron entries. The /etc/profile entries do almost the same job as above, and the cron entry removes all the old files every night. The other big change is that deleted files are stored in /tmp before they are removed, so this will not create a problem for users with quotas on their home directories.

The cron daemon (or crond) is a program set up to execute commands at a specified time. These are usually frequently-repeated tasks, such as doing nightly backups or dialing into a SLIP server to get mail every half-hour. Adding an entry requires a bit of work. This is because the user has a crontab file associated with him which lists tasks that the crond program has to perform. To get a list of what crond already knows about, use the crontab -l command, for "list the current cron tasks". To set new cron tasks, you have to use the crontab <file command for "read in cron assignments from this file". As you can see, the best way to add a new cron task is to take the list from crontab -l, edit it to suit your needs, and use crontab <file to submit the modified list. It will look something like this:

~# crontab -l > cron.fil
~# vi cron.fil
To add the necessary cron entry, just type the commands above as root and go to the end of the cron.fil file. Add the following lines:
# Automatically remove files from the
# /tmp/.rm directory that haven't been
# accessed in the last week.
0 0 * * * find /tmp/.rm -atime +7 -exec /bin/rm {} \;
Then type:
~# crontab cron.fil

Of course, you can change -atime +7 to -atime +1 if you want to delete files every day; it depends on how much space you have and how much room you want to give your users.

Now, in your /etc/profile (as root):

if [ -n "$BASH" ] ;
then # we must be running bash
   alias waste='/bin/rm'
   alias rm='mv -t /tmp/.rm/"$LOGIN"'   # GNU mv: file names are appended after -t
   undelete () {
     if [ -e /tmp/.rm/"$LOGIN"/$1 ] ; then
       cp /tmp/.rm/"$LOGIN"/$1 .
     else
       echo "$1 not available"
     fi
   }
   if [ ! -e /tmp/.rm/"$LOGIN" ] ;
   then
     mkdir -p /tmp/.rm/"$LOGIN"
     chmod og-rwx /tmp/.rm/"$LOGIN"
   fi
fi

Once you restart cron and your users log in, your new `undelete' is ready to go for all users running bash. You can construct a similar mechanism for users using csh, tcsh, ksh, zsh, pdksh, or whatever other shells you use. Alternately, if all your users have /usr/bin in their paths ahead of /bin, you can make a shell script called /usr/bin/rm which does essentially the same thing as the alias above, and create an undelete shell script as well. The advantage of doing this is that it is easier to do complete error checking, which is not done here.


These solutions will work for simple use. More demanding users may want a more complete solution, and there are many ways to implement these. If you implement a very elegant solution, consider packaging it for general use, and send me an e-mail message about it so that I can tell everyone about it here.

Unix is a Four Letter Word --- Strange names

There may come a time that you will discover that you have somehow created a file with a strange name that cannot be removed through conventional means. This section contains some unconventional approaches that may aid in removing such files.

Files that begin with a dash can be removed by typing

 rm ./-filename
A couple other ways that may work are
 rm -- -filename
and
 rm - -filename
Now let's suppose that we have an even nastier filename. One that I ran across this summer was a file with no filename. The solution I used to remove it was to type
 rm -i *
This executes the rm command in interactive mode. I then answered "yes" to the query to remove the nameless file and "no" to all the other queries about the rest of the files.

Another method I could have used would be to obtain the inode number of the nameless file with

ls -i

and then type
 find . -inum number -ok rm '{}' \;
where number is the inode number.

The -ok flag causes a confirmation prompt to be displayed. If you would rather live on the edge and not be bothered with the prompting, you can use -exec in place of -ok.

Suppose you didn't want to remove the file with the funny name, but wanted to rename it so that you could access it more readily. This can be accomplished by following the previous procedure with the following modification to the find command:

 find . -inum number -ok mv '{}' new_filename \;

[Oct 01 2004] Meddling in the Affairs of Wizards

Most people who have spent any time on any version of Unix know that "rm -rf /" is about the worst mistake you can make on any given machine. (For novices, "/" is the root directory, and -r means recursive, so rm keeps deleting files until the entire file system is gone, or at least until something like libc is gone after which the system becomes, as we often joke, a warm brick.)

Well a couple of years ago one Friday afternoon a bunch of us were exchanging horror stories on this subject, when Bryan asked "why don't we fix rm?" So I did.

The code changes were, no surprise, trivial. The hardest part of the whole thing was that one reviewer wanted /usr/xpg4/bin/rm to be changed as well, and that required a visit to our standards guru. He thought the change made sense, but might technically violate the spec, which only allowed rm to treat "." and ".." as special cases for which it could immediately exit with an error. So I submitted a defect report to the appropriate standards committee, thinking it would be a slam dunk.

Well, some of these standards committee members either like making convoluted arguments or just don't see the world the same way I do, as more than one person suggested that the spec was just fine and that "/" was not worthy of special consideration. We tried all sorts of common sense arguments, to no avail. In the end, we had to beat them at their own game, by pointing out that if one attempts to remove "/" recursively, one will ultimately attempt to remove ".." and ".", and that all we are doing is allowing rm to pre-determine this heuristically. Amazingly, they bought that!

Anyway, in the end, we got the spec modified, and Solaris 10 has (since build 36) a version of /usr/bin/rm (/bin is a sym-link to /usr/bin on Solaris) and /usr/xpg4/bin/rm which behaves thus:

[28] /bin/rm -rf /
rm of / is not allowed
[29] 

Recommended Links


Sites

rm (Unix) - Wikipedia, the free encyclopedia

Reference

UNIX man page rm(1) (excerpt)

Remove (unlink) the FILE(s).

-d, --directory
unlink FILE, even if it is a non-empty directory (super-user
only; this works only if your system

supports `unlink' for nonempty directories)

-f, --force
ignore nonexistent files, never prompt

-i, --interactive
prompt before any removal

--no-preserve-root do not treat `/' specially

--preserve-root
do not operate recursively on `/' (the default since coreutils 6.4)

-r, -R, --recursive
remove directories and their contents recursively

-v, --verbose
explain what is being done

--help display this help and exit

--version
output version information and exit

By default, rm does not remove directories. Use the --recursive (-r or
-R) option to remove each listed directory, too, along with all of its
contents.

To remove a file whose name starts with a `-', for example `-foo', use
one of these commands:

rm -- -foo

rm ./-foo

Note that if you use rm to remove a file, it is usually possible to
recover the contents of that file. If you want more assurance that the
contents are truly unrecoverable, consider using shred.


