# Saferm -- a wrapper for the rm command to prevent accidental deletion of important files

### By Nikolai Bezroukov


### Abstract

Saferm -- a wrapper for rm which prevents accidental deletions using a set of regular expressions

The utility performs several checks on the supplied arguments and executes the rm command only if none of the checks fails. It gives the user a chance to cancel the command if more than three files are involved.

It does not modify rm functionality in any way, or introduce a trash can. It is essentially an "argument sanity" checker, a kind of lint for rm, nothing more, nothing less. I tried to keep it very simple and limited to diagnostics.

It is mainly oriented toward sysadmins and users who work with huge data stores (often hundreds of TB, often on GPFS in computational clusters), with databases, in web hosting companies, etc. The only prerequisite is some minimal knowledge of regular expressions, but useful functionality can also be achieved using simple prefix strings instead of regex.

The utility contains a default set of "protection regex" which is installed on the first run as root and is suitable for RHEL 5/6/7 and similar systems.

The main application of the saferm utility is to prevent errors which can happen in situations where you need to delete a large number of files in a deep tree of directories with thousands of files. The script also allows you to replace a very dangerous alias for rm in the default RHEL installation (where rm is aliased to rm -i for the root user) with something more reasonable. The rm='rm -i' alias is a horror because after you get used to it, you automatically expect rm to prompt you by default before removing files. Of course, one day you'll run it on an account that doesn't have that alias set, and before you understand what's going on, it is too late.

So instead you are better off using an alias like:

alias rm='/usr/bin/saferm'

If you need to bypass this alias, just prefix rm with a backslash:

\rm /etc/resolv.conf

### Introduction

The key idea is to use a set of Perl regular expressions to detect "creative uses of rm" -- situations in which system files or important user files are accidentally deleted.

A default set of protection regex (Perl regular expressions) is provided within the utility. If saferm is run as root, it is written to /etc/saferm.conf on the first invocation, if no such file exists.

The default /etc/saferm.conf (created on the first run as root) is written for RHEL 5/6/7, and /etc/saferm.conf needs to be manually adapted to other flavors of Linux. Please note that rm in RHEL 5 supports neither the -I option nor the --one-file-system option, so a 10-second delay before execution of a command that deletes more than three files was introduced instead.
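The capability difference can be probed from the shell. The snippet below is a hypothetical sketch (not saferm's actual code) of how one might decide between passing -I and falling back to a delay:

```shell
# Hypothetical sketch (not saferm's actual code): GNU rm gained -I and
# --one-file-system in coreutils 6.x, so probe the installed rm's help
# text to decide between passing those options and using a fixed delay.
if rm --help 2>/dev/null | grep -q -- '--one-file-system'; then
    echo "modern rm: use -I and --one-file-system"
else
    echo "old rm (e.g. RHEL 5): use a 10-second delay instead"
fi
```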

The utility performs additional "sanity" checks if the -r (or -R) option is specified.

Normally the utility should be aliased to the rm command, so that it is used only in interactive sessions.
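For example, the alias can be scoped to interactive shells in ~/.bashrc (a sketch; the /usr/bin/saferm path is an assumption about where the script was installed):

```shell
# Sketch for ~/.bashrc: alias rm to the wrapper only in interactive
# shells, so scripts and cron jobs keep the real rm. The /usr/bin/saferm
# path is an assumption; adjust it to the actual install location.
case $- in
    *i*) alias rm='/usr/bin/saferm' ;;
esac
```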

Here is an example of a saferm run:

[0]d620@ROOT:/etc/profile.d # rm *.csh

SAFERM -- rm wrapper  (Ver 2.31) Log is at /root/Saferm/Logs/d620_saferm_190220_1642.log. Type --help for help.
================================================================================
We will be deleting 8 files and directories
[0] /etc/profile.d/256term.csh
[1] /etc/profile.d/colorgrep.csh
[2] /etc/profile.d/colorls.csh
[3] /etc/profile.d/lang.csh
[4] /etc/profile.d/less.csh
[5] /etc/profile.d/mc.csh
[6] /etc/profile.d/vim.csh
... ... ...
GENERATED COMMAND:
/usr/bin/rm  -v -I --one-file-system  256term.csh colorgrep.csh colorls.csh lang.csh less.csh mc.csh vim.csh which2.csh
/usr/bin/rm: remove 8 arguments? y
removed ‘256term.csh’
removed ‘colorgrep.csh’
removed ‘colorls.csh’
removed ‘lang.csh’
removed ‘less.csh’
removed ‘mc.csh’
removed ‘vim.csh’
removed ‘which2.csh’
Here is a protocol of an attempt to remove a protected directory:
SAFERM -- rm wrapper  (Ver 2.31) Log is at /root/Saferm/Logs/d620_saferm_190220_1915.log. Type --help for help.
================================================================================
[W221] Number or files/directories to be deleted is 2428, while the limit was set to 100. Please specify upper limit as option -2428
[0] /etc
[2] /etc/aliases
[3] /etc/aliases.db
[4] /etc/alternatives
[5] /etc/alternatives/ld -> /usr/bin/ld.bfd
[6] /etc/alternatives/libnssckbi.so.x86_64 -> /usr/lib64/pkcs11/p11-kit-trust.so
... ... ...
[2425] /etc/yum.repos.d/CentOS-Media.repo
[2426] /etc/yum.repos.d/CentOS-Sources.repo
[2427] /etc/yum.repos.d/CentOS-Vault.repo
[W258] /etc/httpd/logs  is a symbolic link to a directory: lrwxrwxrwx. 1 root root 19 Dec 13 13:45 /etc/httpd/logs -> ../../var/log/httpd
[W258] /etc/httpd/modules  is a symbolic link to a directory: lrwxrwxrwx. 1 root root 29 Dec 13 13:45 /etc/httpd/modules -> ../../usr/lib64/httpd/modules
[W258] /etc/httpd/run  is a symbolic link to a directory: lrwxrwxrwx. 1 root root 10 Dec 13 13:45 /etc/httpd/run -> /run/httpd
[W258] /etc/init.d  is a symbolic link to a directory: lrwxrwxrwx. 1 root root 11 Nov  2 18:22 /etc/init.d -> rc.d/init.d
[W258] /etc/rc0.d  is a symbolic link to a directory: lrwxrwxrwx. 1 root root 10 Dec 13 13:20 /etc/rc0.d -> rc.d/rc0.d
[W258] /etc/rc1.d  is a symbolic link to a directory: lrwxrwxrwx. 1 root root 10 Dec 13 13:20 /etc/rc1.d -> rc.d/rc1.d
[W258] /etc/rc2.d  is a symbolic link to a directory: lrwxrwxrwx. 1 root root 10 Dec 13 13:20 /etc/rc2.d -> rc.d/rc2.d
[W258] /etc/rc3.d  is a symbolic link to a directory: lrwxrwxrwx. 1 root root 10 Dec 13 13:20 /etc/rc3.d -> rc.d/rc3.d
[W258] /etc/rc4.d  is a symbolic link to a directory: lrwxrwxrwx. 1 root root 10 Dec 13 13:20 /etc/rc4.d -> rc.d/rc4.d
[W258] /etc/rc5.d  is a symbolic link to a directory: lrwxrwxrwx. 1 root root 10 Dec 13 13:20 /etc/rc5.d -> rc.d/rc5.d
[W258] /etc/rc6.d  is a symbolic link to a directory: lrwxrwxrwx. 1 root root 10 Dec 13 13:20 /etc/rc6.d -> rc.d/rc6.d
[W258] /etc/ssl/certs  is a symbolic link to a directory: lrwxrwxrwx. 1 root root 16 Dec 13 13:19 /etc/ssl/certs -> ../pki/tls/certs
[W258] /etc/xdg/systemd/user  is a symbolic link to a directory: lrwxrwxrwx. 1 root root 18 Dec 13 13:19 /etc/xdg/systemd/user -> ../../systemd/user

=== ATTENTION ======================================
[S325] Detected 1 attempt(s) to remove protected object(s) of type d defined by regex No. 0 -- '^/\w+$'
The list of affected files/directories: /etc

=== ATTENTION ======================================
[S325] Detected 1706 attempt(s) to remove protected object(s) of type f defined by regex No. 5 -- '^/etc/(\w+.*($|\.conf))'
Truncated to 256 bytes list of affected files/directories: /etc/adjtime, /etc/aliases, /etc/aliases.db, /etc/anacrontab, /etc/asound.conf, /etc/audisp/audispd.conf, /etc/audisp/plugins.d/af_unix.conf, /etc/audisp/plugins.d/syslog.conf, /etc/audit/auditd.conf, /etc/audit/audit.rules, /etc/audit/audit-stop.rules, /

=== ATTENTION ======================================
[S325] Detected 69 attempt(s) to remove protected object(s) of type d defined by regex No. 6 -- '^/etc($|/w+|/\w+.d)'
Truncated to 256 bytes list of affected files/directories: /etc, /etc/audisp, /etc/audisp/plugins.d, /etc/audit, /etc/audit/rules.d, /etc/bash_completion.d, /etc/binfmt.d, /etc/chkconfig.d, /etc/cron.d, /etc/cron.daily, /etc/depmod.d, /etc/dhcp/dhclient.d, /etc/dhcp/dhclient-exit-hooks.d, /etc/firewalld, /etc/fir

=== ATTENTION ======================================
[S325] Detected 1 attempt(s) to remove protected object(s) of type d defined by regex No. 17 -- '^.*/\.ssh$'
The list of affected files/directories: /etc/skel/.ssh

================================================================================
MESSAGES SUMMARY:

Number of generated diagnostic messages of severity S: 4
Number of generated diagnostic messages of severity W: 14
The most severe error: [S325] Detected 1 attempt(s) to remove protected object(s) of type d defined by regex No. 0 -- '^/\w+$'

Cowardly refusing to delete 2428 files. The full list of files that would be deleted by this rm command was written to /root/Saferm/Logs/rm_filelist_190220_1915.log. It can be edited and run via xargs.

### Options

All rm options are passed to the generated rm command. The utility intercepts and executes only two options of its own (both are removed from the set of options passed to rm):

• --help -- prints the help screen and exits. If -v is also specified, it also shows the default set of regular expressions. For example: saferm -v --help
• -<number> -- specifies the upper limit on the number of files to be deleted (the default is 100). For example: saferm -1000 /Data/Old_backup/*htm

Again, all other options are passed to the rm command unchanged. Option -0, which sets the allowed limit of files to be deleted to zero, is very convenient for testing your regular expressions and prefix strings (to perform dry runs). For example, if we need to check whether the current set of regular expressions protects the directory /etc/sysconfig, we can run the following command:

[1] d620@ROOT:/tmp/etc # rm -r -0 /etc/sysconfig
SAFERM -- rm wrapper (Ver 2.31) Log is at /root/Saferm/Logs/d620_saferm_190220_1918.log. Type --help for help.
================================================================================
We will be deleting 73 files and directories
[0] /etc/sysconfig
[1] /etc/sysconfig/anaconda
[2] /etc/sysconfig/authconfig
[3] /etc/sysconfig/cbq
[4] /etc/sysconfig/cbq/avpkt
[5] /etc/sysconfig/cbq/cbq-0000.example
[6] /etc/sysconfig/chronyd
... ... ...
[70] /etc/sysconfig/selinux -> ../selinux/config
[71] /etc/sysconfig/sshd
[72] /etc/sysconfig/wpa_supplicant

=== ATTENTION ======================================
[S325] Detected 62 attempt(s) to remove protected object(s) of type f defined by regex No. 5 -- '^/etc/(\w+.*($|\.conf))'
Truncated to 256 bytes list of affected files/directories: /etc/sysconfig/anaconda, /etc/sysconfig/authconfig, /etc/sysconfig/cbq/avpkt, /etc/sysconfig/cbq/cbq-0000.example, /etc/sysconfig/chronyd, /etc/sysconfig/cpupower, /etc/sysconfig/crond, /etc/sysconfig/docker, /etc/sysconfig/docker-network, /etc/sysconfig/

================================================================================
MESSAGES SUMMARY:

Number of generated diagnostic messages of severity S: 1
The most severe error: [S325] Detected 62 attempt(s) to remove protected object(s) of type f defined by regex No. 5 -- '^/etc/(\w+.*($|\.conf))'

Cowardly refusing to delete 73 files. The full list of files that would be deleted by this rm command was written to /root/Saferm/Logs/rm_filelist_190220_1918.log. It can be edited and run via xargs.

### Configuration

The utility uses two configuration files:

• /etc/saferm.conf -- the system configuration file; it usually contains protection for system directories. If it is absent, the utility writes it on the first invocation, if run as root.
• ~/Saferm/saferm.conf -- the private configuration file (it usually contains regex to protect vital user directories). Duplicate entries in this file overwrite entries in the system configuration file.

You can have multiple configuration files for different operations and symlink one of them to ~/Saferm/saferm.conf.

### Operation

The utility reads the set of "typed" regular expressions and then creates the list of files to be deleted. Each regular expression is applied only to files of the specified type. Five types are currently supported:

1. a -- protected if the regex matches; the regex is applied to objects of any type.
2. p -- similar to "a" (applied to objects of any type), but the matching compares a prefix string against the path, up to the length of the supplied string. A fixed string is used for matching, not a regex, so special characters like "-", which are escaped in a Perl regex, should not be escaped.
3. l -- protected only if the match is a link.
4. f -- protected only if the match is a file.
5. d -- protected only if the match is a directory.

Each file is analyzed against the set of regular expressions compatible with the type of this file. If no rules are violated, the utility generates and executes the rm command, adding three options: --one-file-system, -I and -v (only -v for versions of rm below 6). If a recursive option is given, only one argument is accepted, for safety reasons.
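The "p" (prefix string) check described above amounts to a plain leading-string comparison, which can be sketched in shell as follows (the paths are hypothetical examples):

```shell
# Sketch of the "p" (prefix string) check: a path is protected when it
# begins with the supplied fixed string; regex metacharacters play no
# role. The /Data/Old_backup paths are hypothetical examples.
prefix='/Data/Old_backup'
path='/Data/Old_backup/2018/index.htm'
case $path in
    "$prefix"*) echo "protected by prefix: $path" ;;
    *)          echo "not protected: $path" ;;
esac
```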
When saferm refuses an operation, the rm command is still generated and shown, but not executed.

### Installation

This utility depends on Perl and tree. Please note that in a RHEL 7 minimal install the utility tree is not included and needs to be installed separately via yum install tree.

Installation can be performed either manually or using the provided install script.

1. Using the install script (it should be run from the directory into which the saferm script was downloaded):
• Local install: ./saferm_install.sh ~/bin/ ~/.bashrc
• System install (the default for the script is a system install, which should be run as root): ./saferm_install.sh
2. Manual install. Currently installation consists of copying the script into one of the directories on your PATH and creating an alias pointing to this location. If tree is not installed on the system, it needs to be installed first. For example:

alias rm='/usr/bin/saferm'

The program uses two blacklists (system-wide and user-specific), each of which consists of a set of "typed" (see the acceptable types above) Perl regular expressions. The defaults are /etc/saferm.conf and ~/Saferm/saferm.conf. They can be overridden via the environment variables saferm_global_conf and saferm_private_conf, correspondingly. For computational clusters those files are typically stored on NFS or GPFS.

### Dependencies

This utility depends on Perl and tree. Please note that in a RHEL 7 minimal install the utility tree is not included and needs to be installed separately via the command:

yum install tree

### System configuration file

The first blacklist is the system-wide one, located in /etc/saferm.conf. If it does not exist on the first invocation as the root user, it will be created from the default blacklist embedded in the script.
Please note that the default set of protection regex changes from one version to another, so the regex below should be viewed as just an example, not the actual set of default expressions:

#
# ====================== TYPES OF CHECKS ================================
#
# a -- absolute protection using supplied prefix; the type of object does not matter (for example, it can be either a link or a directory)
# d -- protected only if the match is a directory
# f -- protected only if the match is a file
# l -- protected only if the match is a link

# 1: All level 2 directories

^/\w+$ d

# 2: All files in /boot are protected

^/boot($|/) a

# 3: All files in /dev are protected

^/dev($|/) a

# 4: /root/bin and /root/.ssh directories

^/root($|/bin$|/\.ssh$) d

# 5: Dot files in /root

^/root/\.bash f

# 6: Files directly in /etc, not in subdirectories

^/etc/([-\w]+($|\.conf)) f

... ... ...

After this file is created, you can edit it to adapt it to your system (the default system blacklist is Red Hat oriented).

### User (or private) configuration file

For users without root privileges this is the only configuration file that can be edited. Entries in it are added to the entries in system configuration file.

There can be multiple such files, tuned to different tasks with different sets of protection regex. The one in use can be symlinked to ~/Saferm/saferm.conf.

This user-specific blacklist, located in ~/Saferm/saferm.conf, lets you add to the system blacklist the directories and files that are important to you.

The location of both files can be changed via the environment variables:

saferm_global_conf

saferm_local_conf
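For example, on a cluster both blacklists can be redirected to shared copies (the paths below are hypothetical; the variable names are the ones documented above):

```shell
# Redirect saferm to alternative blacklists; the /gpfs path is a
# hypothetical shared location on a cluster filesystem.
export saferm_global_conf=/gpfs/admin/saferm/saferm.conf
export saferm_local_conf="$HOME/Saferm/cleanup_task.conf"
```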

### Diagnostics and logging

The program produces diagnostic messages in three categories:

1. W -- warnings
2. E -- correctable errors
3. S -- un-correctable errors (the run is unlikely to succeed and rm execution is blocked)

By default messages are written to the console and to a log file. The latter is written to the directory ~/Saferm/Logs. Only the last 24 logs are preserved, to avoid consuming unnecessary space.
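The retention scheme is equivalent to keeping the 24 newest files in the log directory; a hypothetical shell equivalent (not the utility's actual code):

```shell
# Hypothetical equivalent of the log retention (not saferm's actual
# code): list logs newest-first and remove everything past the 24 most
# recent ones.
ls -1t ~/Saferm/Logs/*.log 2>/dev/null | tail -n +25 | xargs -r rm --
```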


## Old News ;-)

#### [Nov 08, 2019] How to prevent and recover from accidental file deletion in Linux Enable Sysadmin

##### trashy - Trashy · GitLab might make sense in simple cases. But often massive file deletions are about attempts to get free space.
###### Nov 08, 2019 | www.redhat.com
Back up

You knew this would come first. Data recovery is a time-intensive process and rarely produces 100% correct results. If you don't have a backup plan in place, start one now.

Better yet, implement two. First, provide users with local backups with a tool like rsnapshot . This utility creates snapshots of each user's data in a ~/.snapshots directory, making it trivial for them to recover their own data quickly.

There are a great many other open source backup applications that permit your users to manage their own backup schedules.

Second, while these local backups are convenient, also set up a remote backup plan for your organization. Tools like AMANDA or BackupPC are solid choices for this task. You can run them as a daemon so that backups happen automatically.

Backup planning and preparation pay for themselves in both time, and peace of mind. There's nothing like not needing emergency response procedures in the first place.

Ban rm

On modern operating systems, there is a Trash or Bin folder where users drag the files they don't want out of sight without deleting them just yet. Traditionally, the Linux terminal has no such holding area, so many terminal power users have the bad habit of permanently deleting data they believe they no longer need. Since there is no "undelete" command, this habit can be quite problematic should a power user (or administrator) accidentally delete a directory full of important data.

Many users say they favor the absolute deletion of files, claiming that they prefer their computers to do exactly what they tell them to do. Few of those users, though, forego their rm command for the more complete shred, which really removes their data. In other words, most terminal users invoke the rm command because it removes data, but take comfort in knowing that file recovery tools exist as a hacker's un-rm. Still, using those tools takes up their administrator's precious time. Don't let your users -- or yourself -- fall prey to this breach of logic.

If you really want to remove data, then rm is not sufficient. Use the shred -u command instead, which overwrites, and then thoroughly deletes, the specified data.

However, if you don't want to actually remove data, don't use rm . This command is not feature-complete, in that it has no undo feature, but has the capacity to be undone. Instead, use trashy or trash-cli to "delete" files into a trash bin while using your terminal, like so:

$ trash ~/example.txt
$ trash --list
example.txt


One advantage of these commands is that the trash bin they use is the same as your desktop's trash bin. With them, you can recover your trashed files by opening either your desktop Trash folder, or through the terminal.

If you've already developed a bad rm habit and find the trash command difficult to remember, create an alias for yourself:

$ echo "alias rm='trash'"

Even better, create this alias for everyone. Your time as a system administrator is too valuable to spend hours struggling with file recovery tools just because someone mis-typed an rm command.

Respond efficiently

Unfortunately, it can't be helped. At some point, you'll have to recover lost files, or worse. Let's take a look at emergency response best practices to make the job easier. Before you even start, understanding what caused the data to be lost in the first place can save you a lot of time:

• If someone was careless with their trash bin habits or messed up dangerous remove or shred commands, then you need to recover a deleted file.
• If someone accidentally overwrote a partition table, then the files aren't really lost. The drive layout is.
• In the case of a dying hard drive, recovering data is secondary to the race against decay to recover the bits themselves (you can worry about carving those bits into intelligible files later).

No matter how the problem began, start your rescue mission with a few best practices:

• Stop using the drive that contains the lost data, no matter what the reason. The more you do on this drive, the more you risk overwriting the data you're trying to rescue. Halt and power down the victim computer, and then either reboot using a thumb drive, or extract the damaged hard drive and attach it to your rescue machine.
• Do not use the victim hard drive as the recovery location. Place rescued data on a spare volume that you're sure is working. Don't copy it back to the victim drive until it's been confirmed that the data has been sufficiently recovered.
• If you think the drive is dying, your first priority after powering it down is to obtain a duplicate image, using a tool like ddrescue or Clonezilla.

Once you have a sense of what went wrong, it's time to choose the right tool to fix the problem.
Two such tools are Scalpel and TestDisk, both of which operate just as well on a disk image as on a physical drive.

Practice (or, go break stuff)

At some point in your career, you'll have to recover data. The smart practices discussed above can minimize how often this happens, but there's no avoiding this problem. Don't wait until disaster strikes to get familiar with data recovery tools. After you set up your local and remote backups, implement command-line trash bins, and limit the rm command, it's time to practice your data recovery techniques.

Download and practice using Scalpel, TestDisk, or whatever other tools you feel might be useful. Be sure to practice data recovery safely, though. Find an old computer, install Linux onto it, and then generate, destroy, and recover. If nothing else, doing so teaches you to respect data structures, filesystems, and a good backup plan. And when the time comes and you have to put those skills to real use, you'll appreciate knowing what to do.

#### [Nov 08, 2019] My first sysadmin mistake by Jim Hall

##### Wiping out the /etc directory is one thing sysadmins accidentally do. This often happens when another directory is named etc, for example /Backup/etc. In such cases you put a slash in front of etc subconsciously, because it is ingrained in your mind, not realizing what you are doing. And then you face the consequences. If you do not use saferm, the results are pretty devastating. In most cases the server does not die, but new logins become impossible; existing SSH sessions survive. That's why it is important to back up /etc at the first login to a server. On modern servers it takes a couple of seconds.

##### If subdirectories are intact, you can still copy their content from another server. But the content of the sysconfig subdirectory in Linux is unique to the server, and you need a backup to restore it.

##### Notable quotes:

##### "... As root.
I thought I was deleting some stale cache files for one of our programs. Instead, I wiped out all files in the /etc directory by mistake. Ouch. ..."

##### "... I put together a simple strategy: Don't reboot the server. Use an identical system as a template, and re-create the ..."

###### Nov 08, 2019 | opensource.com

... rm command in the wrong directory. As root.

I thought I was deleting some stale cache files for one of our programs. Instead, I wiped out all files in the /etc directory by mistake. Ouch.

My clue that I'd done something wrong was an error message that rm couldn't delete certain subdirectories. But the cache directory should contain only files! I immediately stopped the rm command and looked at what I'd done. And then I panicked. All at once, a million thoughts ran through my head. Did I just destroy an important server? What was going to happen to the system? Would I get fired?

Fortunately, I'd run rm * and not rm -rf * so I'd deleted only files. The subdirectories were still there. But that didn't make me feel any better.

Immediately, I went to my supervisor and told her what I'd done. She saw that I felt really dumb about my mistake, but I owned it. Despite the urgency, she took a few minutes to do some coaching with me. "You're not the first person to do this," she said. "What would someone else do in your situation?" That helped me calm down and focus. I started to think less about the stupid thing I had just done, and more about what I was going to do next.

I put together a simple strategy: Don't reboot the server. Use an identical system as a template, and re-create the /etc directory.

Once I had my plan of action, the rest was easy. It was just a matter of running the right commands to copy the /etc files from another server and edit the configuration so it matched the system. Thanks to my practice of documenting everything, I used my existing documentation to make any final adjustments.
I avoided having to completely restore the server, which would have meant a huge disruption. To be sure, I learned from that mistake. For the rest of my years as a systems administrator, I always confirmed what directory I was in before running any command.

I also learned the value of building a "mistake strategy." When things go wrong, it's natural to panic and think about all the bad things that might happen next. That's human nature. But creating a "mistake strategy" helps me stop worrying about what just went wrong and focus on making things better. I may still think about it, but knowing my next steps allows me to "get over it."

#### [Nov 08, 2019] How to use Sanoid to recover from data disasters Opensource.com

###### Nov 08, 2019 | opensource.com

... filesystem-level snapshot replication to move data from one machine to another, fast. For enormous blobs like virtual machine images, we're talking several orders of magnitude faster than rsync.

If that isn't cool enough already, you don't even necessarily need to restore from backup if you lost the production hardware; you can just boot up the VM directly on the local hotspare hardware, or the remote disaster recovery hardware, as appropriate. So even in case of catastrophic hardware failure, you're still looking at that 59m RPO, <1m RTO.

https://www.youtube.com/embed/5hEixXutaPo

Backups -- and recoveries -- don't get much easier than this. The syntax is dead simple:

root@box1:~# syncoid pool/images/vmname root@box2:poolname/images/vmname

Or if you have lots of VMs, like I usually do... recursion!

root@box1:~# syncoid -r pool/images/vmname root@box2:poolname/images/vmname

This makes it not only possible, but easy to replicate multiple-terabyte VM images hourly over a local network, and daily over a VPN. We're not talking enterprise 100mbps symmetrical fiber, either.
Most of my clients have 5mbps or less available for upload, which doesn't keep them from automated, nightly over-the-air backups, usually to a machine sitting quietly in an owner's house.

Preventing your own Humpty Level Events

Sanoid is open source software, and so are all its dependencies. You can run Sanoid and Syncoid themselves on pretty much anything with ZFS. I developed it and use it on Linux myself, but people are using it (and I support it) on OpenIndiana, FreeBSD, and FreeNAS too.

You can find the GPLv3 licensed code on the website (which actually just redirects to Sanoid's GitHub project page), and there's also a Chef Cookbook and an Arch AUR repo available from third parties.
Ban rm On modern operating systems, there is a Trash or Bin folder where users drag the files they don't want out of sight without deleting them just yet. Traditionally, the Linux terminal has no such holding area, so many terminal power users have the bad habit of permanently deleting data they believe they no longer need. Since there is no "undelete" command, this habit can be quite problematic should a power user (or administrator) accidentally delete a directory full of important data. Many users say they favor the absolute deletion of files, claiming that they prefer their computers to do exactly what they tell them to do. Few of those users, though, forego their rm command for the more complete shred , which really removes their data. In other words, most terminal users invoke the rm command because it removes data, but take comfort in knowing that file recovery tools exist as a hacker's un- rm . Still, using those tools take up their administrator's precious time. Don't let your users -- or yourself -- fall prey to this breach of logic. If you really want to remove data, then rm is not sufficient. Use the shred -u command instead, which overwrites, and then thoroughly deletes the specified data However, if you don't want to actually remove data, don't use rm . This command is not feature-complete, in that it has no undo feature, but has the capacity to be undone. Instead, use trashy or trash-cli to "delete" files into a trash bin while using your terminal, like so: $ trash ~/example.txt
$trash --list example.txt  One advantage of these commands is that the trash bin they use is the same your desktop's trash bin. With them, you can recover your trashed files by opening either your desktop Trash folder, or through the terminal. If you've already developed a bad rm habit and find the trash command difficult to remember, create an alias for yourself: $ echo "alias rm='trash'"


Even better, create this alias for everyone. Your time as a system administrator is too valuable to spend hours struggling with file recovery tools just because someone mis-typed an rm command.

Respond efficiently

Unfortunately, it can't be helped. At some point, you'll have to recover lost files, or worse. Let's take a look at emergency response best practices to make the job easier. Before you even start, understanding what caused the data to be lost in the first place can save you a lot of time:

• If someone was careless with their trash bin habits or messed up dangerous remove or shred commands, then you need to recover a deleted file.
• If someone accidentally overwrote a partition table, then the files aren't really lost. The drive layout is.
• In the case of a dying hard drive, recovering data is secondary to the race against decay to recover the bits themselves (you can worry about carving those bits into intelligible files later).

No matter how the problem began, start your rescue mission with a few best practices:

• Stop using the drive that contains the lost data, no matter what the reason. The more you do on this drive, the more you risk overwriting the data you're trying to rescue. Halt and power down the victim computer, and then either reboot using a thumb drive, or extract the damaged hard drive and attach it to your rescue machine.
• Do not use the victim hard drive as the recovery location. Place rescued data on a spare volume that you're sure is working. Don't copy it back to the victim drive until it's been confirmed that the data has been sufficiently recovered.
• If you think the drive is dying, your first priority after powering it down is to obtain a duplicate image, using a tool like ddrescue or Clonezilla .

Once you have a sense of what went wrong, it's time to choose the right tool to fix the problem. Two such tools are Scalpel and TestDisk , both of which operate just as well on a disk image as on a physical drive.

Practice (or, go break stuff)

At some point in your career, you'll have to recover data. The smart practices discussed above can minimize how often this happens, but there's no avoiding this problem. Don't wait until disaster strikes to get familiar with data recovery tools. After you set up your local and remote backups, implement command-line trash bins, and limit the rm command, it's time to practice your data recovery techniques.

Download and practice using Scalpel, TestDisk, or whatever other tools you feel might be useful. Be sure to practice data recovery safely, though. Find an old computer, install Linux onto it, and then generate, destroy, and recover. If nothing else, doing so teaches you to respect data structures, filesystems, and a good backup plan. And when the time comes and you have to put those skills to real use, you'll appreciate knowing what to do.

#### [Aug 26, 2019] linux - Avoiding accidental 'rm' disasters - Super User

###### Aug 26, 2019 | superuser.com


Mr_Spock ,May 26, 2013 at 11:30

Today, using sudo -s , I wanted to rm -R ./lib/ , but I actually ran rm -R /lib/ .

I had to reinstall my OS (Mint 15) and re-download and re-configure all my packages. Not fun.

How can I avoid similar mistakes in the future?

Vittorio Romeo ,May 26, 2013 at 11:55

First of all, stop executing everything as root . You never really need to do this. Only run individual commands with sudo if you need to. If a normal command doesn't work without sudo, just call sudo !! to execute it again.

If you're paranoid about rm , mv and other operations while running as root, you can add the following aliases to your shell's configuration file:

[ $UID = 0 ] && \
  alias rm='rm -i' && \
  alias mv='mv -i' && \
  alias cp='cp -i'

These will all prompt you for confirmation ( -i ) before removing or overwriting a file, but only if you're root (the user with ID 0).

Don't get too used to that, though. If you ever find yourself working on a system that doesn't prompt you for everything, you might end up deleting stuff without noticing it. The best way to avoid mistakes is to never run as root and to think about what exactly you're doing when you use sudo .

#### [Feb 21, 2019] https://github.com/MikeDacre/careful_rm

###### Feb 21, 2019 | github.com

rm is a powerful *nix tool that simply drops a file from the drive index. It doesn't delete it or put it in a Trash can; it just de-indexes it, which makes the file hard to recover unless you want to put in the work, and pretty easy to recover if you are willing to spend a few hours trying (use shred to actually secure-erase files).

careful_rm.py is inspired by the -I interactive mode of rm and by safe-rm . safe-rm adds a recycle-bin mode to rm, and the -I interactive mode adds a prompt if you delete more than a handful of files or recursively delete a directory. ZSH also has an option to warn you if you recursively rm a directory. These are all great, but I found them unsatisfying. What I want is for rm to be quick and not bother me for single file deletions (so rm -i is out), but to let me know when I am deleting a lot of files, and to actually print a list of files that are about to be deleted . I also want it to have the option to trash/recycle my files instead of just straight deleting them... like safe-rm , but not so intrusive (safe-rm defaults to recycle, and doesn't warn).

careful_rm.py is fundamentally a simple rm wrapper that accepts all of the same commands as rm , but with a few additional options and features. In the source code CUTOFF is set to 3 , so deleting more files than that will prompt the user.
Also, deleting a directory will prompt the user separately with a count of all files and subdirectories within the folders to be deleted. Furthermore, careful_rm.py implements a fully integrated trash mode that can be toggled on with -c . It can also be forced on by adding a file at ~/.rm_recycle , or toggled on only for $HOME (the best idea), by ~/.rm_recycle_home . The mode can be disabled on the fly by passing --direct , which forces off recycle mode.

The recycle mode tries to find the best location to recycle to on MacOS or Linux; on MacOS it also tries to use AppleScript to trash files, which means the original location is preserved (note that AppleScript can be slow; you can disable it by adding a ~/.no_apple_rm file, but Put Back won't work). The best trash location is chosen in this order:

1. $HOME/.Trash on Mac or $HOME/.local/share/Trash on Linux
2. <mountpoint>/.Trashes on Mac or <mountpoint>/.Trash-$UID on Linux
3. /tmp/$USER_trash

The best trash can that avoids volume hopping is always favored, as moving across file systems is slow. If the trash does not exist, the user is prompted to create it; they then also have the option to fall back to the root trash ( /tmp/$USER_trash ) or just rm the files. /tmp/$USER_trash is almost always used for deleting system/root files, but note that you most likely do not want to save those files, and straight rm is generally better.
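Picking a trash can on the same volume starts with finding the mount point of the file being deleted. A sketch of that first step using GNU stat (the `%m` format specifier is GNU-specific; the `.Trash-$UID` naming follows the Linux convention listed above):

```shell
# Find the mount point of the victim file's filesystem, then name the
# same-volume trash candidate per the Linux convention.
mount_point=$(stat -c %m "$HOME")
echo "same-volume trash candidate: $mount_point/.Trash-$(id -u)"
```

This avoids the slow cross-filesystem copy that a move into $HOME's trash would otherwise trigger.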

#### [Feb 21, 2019] https://github.com/lagerspetz/linux-stuff/blob/master/scripts/saferm.sh by Eemil Lagerspetz

##### Shell script that tries to implement the trash can idea
###### Feb 21, 2019 | github.com
 #!/bin/bash
 ##
 ## saferm.sh
 ## Safely remove files, moving them to GNOME/KDE trash instead of deleting.
 ##
 ## Started on Mon Aug 11 22:00:58 2008 Eemil Lagerspetz
 ## Last update Sat Aug 16 23:49:18 2008 Eemil Lagerspetz
 ##
 version="1.16";

... ... ...

#### [Feb 21, 2019] The rm='rm -i' alias is an horror

###### Feb 21, 2019 | superuser.com

The rm='rm -i' alias is an horror because after a while of using it, you will expect rm to prompt you by default before removing files. Of course, one day you'll run it with an account that doesn't have that alias set, and before you understand what's going on, it is too late.

... ... ...

If you want safe aliases, but don't want to risk getting used to the commands working differently on your system than on others, you can disable rm like this:

alias rm='echo "rm is disabled, use remove or trash or /bin/rm instead."'


Then you can create your own safe alias, e.g.

alias remove='/bin/rm -irv'


or use trash instead.

#### [Feb 21, 2019] Ubuntu Manpage trash - Command line trash utility.

###### Feb 21, 2019 | manpages.ubuntu.com
Provided by: trash-cli_0.12.9.14-2_all

NAME

       trash - Command line trash utility.
SYNOPSIS 
       trash [arguments] ...
DESCRIPTION 
       Trash-cli  package  provides  a command line interface trashcan utility compliant with the
FreeDesktop.org Trash Specification.  It remembers the name, original path, deletion date,
and permissions of each trashed file.

ARGUMENTS 
       Names of files or directory to move in the trashcan.
EXAMPLES
       $ cd /home/andrea/
       $ touch foo bar
       $ trash foo bar

BUGS
       Report bugs to http://code.google.com/p/trash-cli/issues

AUTHORS
       Trash was written by Andrea Francia <andreafrancia@users.sourceforge.net> and Einar Orn
       Olason <eoo@hi.is>. This manual page was written by Steve Stalcup <vorian@ubuntu.com>.
       Changes made by Massimo Cavalleri <submax@tiscalinet.it>.

SEE ALSO
       trash-list(1), trash-restore(1), trash-empty(1), and the FreeDesktop.org Trash
       Specification at http://www.ramendik.ru/docs/trashspec.html.

       Both are released under the GNU General Public License, version 2 or later.

#### [Feb 20, 2019] Version 2.31 released and published on GitHub

Type p was added, which allows matching a prefix of the string using simple string comparison. This is useful for files and directories that contain the symbol '-' (minus) and other special characters used in regex, since a prefix does not need to be escaped the way a regular expression does. Also, in many cases matching the prefix of the string is sufficient to prevent the damage.

#### [Jan 28, 2019] "Right," I said. "Time to get the backup." I knew I had to leave when I saw his face start twitching and he whispered: "Backup ...?"

###### Jan 28, 2019 | opensource.com

SemperOSS on 13 Sep 2016

This one seems to be a classic too: Working for a large UK-based international IT company, I had a call from the newest guy in the internal IT department:

"The main server, you know ..."
"Yes?"
"I was cleaning out somebody's homedir ..."
"Yes?"
"Well, the server stopped running properly ..."
"Yes?"
"... and I can't seem to get it to boot now ..."
"Oh-kayyyy. I'll just totter down to you and give it an eye."

I went down to the basement where the IT department was located and had a look at the terminal screen on his workstation. Going back through the terminal history, just before a hefty amount of error messages, I found his last command: 'rm -rf /home/johndoe /*'.
And I probably do not have to say that he was root at the time (it was in the days before sudo, not that that would have helped in his situation).

"Right," I said. "Time to get the backup." I knew I had to leave when I saw his face start twitching and he whispered: "Backup ...?"

==========

Bonus entry from the same company: It was the days of the 5.25" floppy disks (Wikipedia is your friend, if you belong to the younger generation). I sometimes had to ask people to send a copy of a floppy to check why things weren't working properly. Once I got a nice photocopy, and another time the disk came with a polite note attached ... stapled through the disk, to be more precise!

#### [Jan 28, 2019] regex - Safe rm -rf function in shell script

###### Jan 28, 2019 | stackoverflow.com

community wiki, 5 revs, May 23, 2017 at 12:26

This question is similar to What is the safest way to empty a directory in *nix?

I'm writing a bash script which defines several path constants and will use them for file and directory manipulation (copying, renaming and deleting). Often it will be necessary to do something like:

rm -rf "/${PATH1}"
rm -rf "${PATH2}/"*

While developing this script I'd want to protect myself from mistyping names like PATH1 and PATH2 and avoid situations where they are expanded to an empty string, thus resulting in wiping the whole disk. I decided to create a special wrapper:

rmrf() {
    if [[ $1 =~ "regex" ]]; then
        echo "Ignoring possibly unsafe path ${1}"
        exit 1
    fi
    shopt -s dotglob
    rm -rf -- $1
    shopt -u dotglob
}


Which will be called as:

rmrf "/${PATH1}"
rmrf "${PATH2}/"*


Regex (or sed expression) should catch paths like "*", "/*", "/**/", "///*" etc. but allow paths like "dir", "/dir", "/dir1/dir2/", "/dir1/dir2/*". Also I don't know how to enable shell globbing in case like "/dir with space/*". Any ideas?

EDIT: this is what I came up with so far:

rmrf() {
    local RES
    local RMPATH="${1}"
    local SAFE=$(echo "${RMPATH}" | sed -r 's:^((\.?\*+/+)+.*|(/+\.?\*+)+.*|[\.\*/]+|.*/\.\*+)$::g')
    if [ -z "${SAFE}" ]; then
        echo "ERROR! Unsafe deletion of ${RMPATH}"
        return 1
    fi

    shopt -s dotglob
    if [ '*' == "${RMPATH: -1}" ]; then
        echo rm -rf -- "${RMPATH/%\*/}"*
        RES=$?
    else
        echo rm -rf -- "${RMPATH}"
        RES=$?
    fi
    shopt -u dotglob
    return $RES
}


Intended use is (note an asterisk inside quotes):

rmrf "${SOMEPATH}"
rmrf "${SOMEPATH}/*"


where $SOMEPATH is not a system or /home directory (in my case all such operations are performed on a filesystem mounted under the /scratch directory).

CAVEATS:

• not tested very well
• not intended for use with paths possibly containing '..' or '.'
• should not be used with user-supplied paths
• rm -rf with an asterisk can fail if there are too many files or directories inside $SOMEPATH (because of limited command line length) - this can be fixed with a 'for' loop or the 'find' command
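The sed filter at the heart of the wrapper above can be exercised on its own. A small harness (the `is_safe` function name is introduced here for illustration): a path counts as safe when the substitution leaves something behind.

```shell
# Harness around the sed expression from the rmrf wrapper above: the
# substitution empties out dangerous patterns like "*", "/*", "///*",
# so a non-empty result means the path is considered safe.
is_safe() {
    [ -n "$(printf '%s' "$1" \
        | sed -r 's:^((\.?\*+/+)+.*|(/+\.?\*+)+.*|[\.\*/]+|.*/\.\*+)$::g')" ]
}

is_safe '/dir1/dir2/*' && echo "accepted"
is_safe '/*'           || echo "rejected"
```

This matches the question's stated goal: "*", "/*" and "///*" are rejected, while "dir", "/dir" and "/dir1/dir2/*" pass.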

SpliFF ,Jun 14, 2009 at 13:45

I've found a big danger with rm in bash: bash usually doesn't stop for errors. That means that:

cd $SOMEPATH
rm -rf *

is a very dangerous combination if the change of directory fails. A safer way would be:

cd $SOMEPATH && rm -rf *


This will ensure the rm -rf won't run unless you are really in $SOMEPATH. It doesn't protect you from a bad $SOMEPATH, but it can be combined with the advice given by others to help make your script safer.

EDIT: @placeybordeaux makes a good point that if $SOMEPATH is undefined or empty, cd doesn't treat it as an error and returns 0. In light of that, this answer should be considered unsafe unless $SOMEPATH is validated as existing and non-empty first. I believe cd with no args should be an illegal command, since at best it performs a no-op and at worst it can lead to unexpected behaviour, but it is what it is.
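The guard is easy to see in action with a directory that does not exist; nothing after the && runs when cd fails (and, per the edit above, this protects only against a *failing* cd, not against an empty variable):

```shell
# With &&, a failed cd short-circuits the deletion; the echo here
# stands in for the rm -rf that never gets a chance to run.
( cd /no/such/dir 2>/dev/null && echo "rm would run here" ) \
    || echo "cd failed; rm was skipped"
```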

Sazzad Hissain Khan ,Jul 6, 2017 at 11:45

nice trick, I am one stupid victim. – Sazzad Hissain Khan Jul 6 '17 at 11:45

placeybordeaux ,Jun 21, 2018 at 22:59

If $SOMEPATH is empty won't this rm -rf the user's home directory? – placeybordeaux Jun 21 '18 at 22:59

SpliFF, Jun 27, 2018 at 4:10

@placeybordeaux The && only runs the second command if the first succeeds - so if cd fails rm never runs – SpliFF Jun 27 '18 at 4:10

placeybordeaux, Jul 3, 2018 at 18:46

@SpliFF at least in ZSH the return value of cd $NONEXISTANTVAR is 0 – placeybordeaux Jul 3 '18 at 18:46

ruakh ,Jul 13, 2018 at 6:46

Instead of cd $SOMEPATH , you should write cd "${SOMEPATH?}" . The ${varname?} notation ensures that the expansion fails with a warning-message if the variable is unset or empty (such that the && ... part is never run); the double-quotes ensure that special characters in $SOMEPATH , such as whitespace, don't have undesired effects. – ruakh Jul 13 '18 at 6:46
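A short demonstration of the expansion guard suggested above. Note that `${SOMEPATH?}` rejects only an *unset* variable; the colon form `${SOMEPATH:?}` also rejects an empty one. The subshell keeps the expansion error from killing the enclosing script:

```shell
# ${SOMEPATH?} makes the shell refuse to expand an unset variable:
# the cd never starts, so an "&& rm -rf *" could never run either.
unset SOMEPATH
if ( cd "${SOMEPATH?}" ) 2>/dev/null; then
    echo "cd ran"                               # not reached
else
    echo "expansion failed; nothing was deleted"
fi
```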

community wiki
2 revs
,Jul 24, 2009 at 22:36

There is a set -u bash directive that will cause exit, when uninitialized variable is used. I read about it here , with rm -rf as an example. I think that's what you're looking for. And here is set's manual .
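A minimal demonstration of `set -u` (the variable name BUILD_DIR is hypothetical, chosen only for illustration):

```shell
# Under set -u an unset variable is a fatal error instead of an empty
# string, so the path below can never collapse to "/" before an rm sees it.
bash -c 'set -u; echo "deleting /${BUILD_DIR}/tmp"' 2>/dev/null \
    || echo "aborted: BUILD_DIR is unset, nothing expanded"
```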

,Jun 14, 2009 at 12:38

I think the "rm" command has a parameter to avoid deleting "/". Check it out.

Max ,Jun 14, 2009 at 12:56

Thanks! I didn't know about such option. Actually it is named --preserve-root and is not mentioned in the manpage. – Max Jun 14 '09 at 12:56

Max ,Jun 14, 2009 at 13:18

On my system this option is on by default, but it can't help in cases like rm -ri /* – Max Jun 14 '09 at 13:18

ynimous ,Jun 14, 2009 at 12:42

I would recommend using realpath(1) on the argument rather than the command argument directly, so that you can avoid things like /A/B/../ or symbolic links.

Max ,Jun 14, 2009 at 13:30

Useful but non-standard command. I've found possible bash replacement: archlinux.org/pipermail/pacman-dev/2009-February/008130.htmlMax Jun 14 '09 at 13:30

Jonathan Leffler ,Jun 14, 2009 at 12:47

Generally, when I'm developing a command with operations such as ' rm -fr ' in it, I will neutralize the remove during development. One way of doing that is:
RMRF="echo rm -rf"
...
$RMRF "/${PATH1}"


This shows me what should be deleted - but does not delete it. I will do a manual clean up while things are under development - it is a small price to pay for not running the risk of screwing up everything.
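The neutralized-remove pattern above is a one-line toggle; the path used here is purely illustrative:

```shell
# During development $RMRF only echoes the command line, so you can
# inspect what would have been deleted before arming the real thing.
RMRF="echo rm -rf"        # development setting: show, don't delete
# RMRF="rm -rf"           # flip to this once the script is trusted
$RMRF "/tmp/some/path"    # prints: rm -rf /tmp/some/path
```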

The notation ' "/${PATH1}" ' is a little unusual; normally, you would ensure that PATH1 simply contains an absolute pathname. Using the metacharacter with ' "${PATH2}/"* ' is unwise and unnecessary. The only difference between using that and using just ' "${PATH2}" ' is that if the directory specified by PATH2 contains any files or directories with names starting with dot, then those files or directories will not be removed. Such a design is unlikely and is rather fragile. It would be much simpler just to pass PATH2 and let the recursive remove do its job. Adding the trailing slash is not necessarily a bad idea; the system would have to ensure that $PATH2 contains a directory name, not just a file name, but the extra protection is rather minimal.

Using globbing with ' rm -fr ' is usually a bad idea. You want to be precise and restrictive and limiting in what it does - to prevent accidents. Of course, you'd never run the command (shell script you are developing) as root while it is under development - that would be suicidal. Or, if root privileges are absolutely necessary, you neutralize the remove operation until you are confident it is bullet-proof.

Max ,Jun 14, 2009 at 13:09

To delete subdirectories and files starting with dot I use "shopt -s dotglob". Using rm -rf "${PATH2}" is not appropriate because in my case PATH2 can only be removed by the superuser, and this results in an error status for the "rm" command (and I verify it to track other errors). – Max Jun 14 '09 at 13:09

Jonathan Leffler, Jun 14, 2009 at 13:37

Then, with due respect, you should use a private sub-directory under $PATH2 that you can remove. Avoid glob expansion with commands like 'rm -rf' like you would avoid the plague (or should that be A/H1N1?). – Jonathan Leffler Jun 14 '09 at 13:37

Max ,Jun 14, 2009 at 14:10

Meanwhile I've found this perl project: http://code.google.com/p/safe-rm/

community wiki
too much php
,Jun 15, 2009 at 1:55

If it is possible, you should try and put everything into a folder with a hard-coded name which is unlikely to be found anywhere else on the filesystem, such as ' foofolder '. Then you can write your rmrf() function as:
rmrf() {
rm -rf "foofolder/$PATH1" # or rm -rf "$PATH1/foofolder"
}


There is no way that function can delete anything but the files you want it to.

vadipp ,Jan 13, 2017 at 11:37

Actually there is a way: if PATH1 is something like ../../someotherdirvadipp Jan 13 '17 at 11:37

community wiki
btop
,Jun 15, 2009 at 6:34

You may use
set -f    # cf. help set


to disable filename generation (*).
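A two-line demonstration of what `set -f` changes:

```shell
# With set -f a "*" is passed through literally instead of expanding
# to the contents of the current directory.
set -f
echo *          # prints a literal *
set +f          # re-enable pathname expansion
```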

community wiki
Howard Hong
,Oct 28, 2009 at 19:56

You don't need to use regular expressions.
Just assign the directories you want to protect to a variable and then iterate over the variable. eg:
protected_dirs="/ /bin /usr/bin /home $HOME"
for d in $protected_dirs; do
    if [ "$1" = "$d" ]; then
        rm=0
        break
    fi
done
if [ ${rm:-1} -eq 1 ]; then
    rm -rf $1
fi
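The same whitelist check is easier to exercise when wrapped in a function (the `is_protected` name is introduced here for illustration):

```shell
# Whitelist check from the answer above, as a testable function: returns
# success when the argument exactly matches a protected directory.
is_protected() {
    protected_dirs="/ /bin /usr/bin /home $HOME"
    for d in $protected_dirs; do
        if [ "$1" = "$d" ]; then
            return 0      # found in the list: refuse to delete
        fi
    done
    return 1              # not protected
}

is_protected /usr/bin && echo "refusing to delete /usr/bin"
```

Note that, like the original, this compares whole strings, so it does not protect subpaths such as /usr/bin/perl.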



Add the following code to your ~/.bashrc:
# safe delete
move_to_trash () { now="$(date +%Y%m%d_%H%M%S)"; mv "$@" ~/.local/share/Trash/files/"$@_$now"; }
alias del='move_to_trash'

# safe rm
alias rmi='rm -i'


Every time you need to rm something, first consider del ; you can change the trash folder. If you do need to rm something, you can go to the trash folder and use rmi .

One small bug with del is that when you del a folder, for example my_folder , it should be del my_folder and not del my_folder/ , since in order to allow a possible later restore, I attach the time information at the end ( "$@_$now" ). For files, it works fine.
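The trailing-slash bug is easy to work around by stripping the slash before the timestamp is appended. A sketch (the `move_to_trash2` name is hypothetical, and each argument is handled separately rather than passing "$@" to a single mv):

```shell
# Trailing-slash-tolerant variant of the move_to_trash idea above:
# strips one trailing slash per argument so the timestamp always lands
# on the directory name itself.
move_to_trash2() {
    now=$(date +%Y%m%d_%H%M%S)
    mkdir -p "$HOME/.local/share/Trash/files"
    for f in "$@"; do
        f=${f%/}      # drop a trailing slash: "my_folder/" -> "my_folder"
        mv "$f" "$HOME/.local/share/Trash/files/$(basename "$f")_$now"
    done
}
```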

#### [Jan 25, 2012] Does rm -r follow symbolic links? - Super User

###### Jan 25, 2012 | superuser.com
I have a directory like this:
I have a directory like this:

$ ls -l
total 899166
drwxr-xr-x 12 me scicomp    324 Jan 24 13:47 data
-rw-r--r--  1 me scicomp  84188 Jan 24 13:47 lod-thin-1.000000-0.010000-0.030000.rda
drwxr-xr-x  2 me scicomp    808 Jan 24 13:47 log
lrwxrwxrwx  1 me scicomp     17 Jan 25 09:41 msg -> /home/me/msg

And I want to remove it using rm -r . However I'm scared rm -r will follow the symlink and delete everything in that directory (which is very bad). I can't find anything about this in the man pages. What would be the exact behavior of running rm -rf from a directory above this one? – LordDoskias, Jan 25, 2012 at 16:43

hakre, Feb 4, 2015 at 13:09

How hard is it to create a dummy dir with a symlink pointing to a dummy file and execute the scenario? Then you will know for sure how it works! X-Ref: If I rm -rf a symlink will the data the link points to get erased, too? ; Deleting a folder that contains symlinks – hakre Feb 4 '15 at 13:09

Susam Pal, Jan 25, 2012 at 16:47

Example 1: Deleting a directory containing a soft link to another directory.

susam@nifty:~/so$ mkdir foo bar
susam@nifty:~/so$touch bar/a.txt susam@nifty:~/so$ ln -s /home/susam/so/bar/ foo/baz
susam@nifty:~/so$tree . ├── bar │ └── a.txt └── foo └── baz -> /home/susam/so/bar/ 3 directories, 1 file susam@nifty:~/so$ rm -r foo
susam@nifty:~/so$tree . └── bar └── a.txt 1 directory, 1 file susam@nifty:~/so$


So, we see that the target of the soft-link survives.

Example 2: Deleting a soft link to a directory

susam@nifty:~/so$ln -s /home/susam/so/bar baz susam@nifty:~/so$ tree
.
├── bar
│   └── a.txt
└── baz -> /home/susam/so/bar

2 directories, 1 file
susam@nifty:~/so$rm -r baz susam@nifty:~/so$ tree
.
└── bar
└── a.txt

1 directory, 1 file
susam@nifty:~/so$ Only, the soft link is deleted. The target of the soft-link survives. Example 3: Attempting to delete the target of a soft-link susam@nifty:~/so$ ln -s /home/susam/so/bar baz
susam@nifty:~/so$tree . ├── bar │ └── a.txt └── baz -> /home/susam/so/bar 2 directories, 1 file susam@nifty:~/so$ rm -r baz/
rm: cannot remove 'baz/': Not a directory
susam@nifty:~/so\$ tree
.
├── bar
└── baz -> /home/susam/so/bar

2 directories, 0 files


The file in the target of the symbolic link does not survive.

The above experiments were done on a Debian GNU/Linux 9.0 (stretch) system.
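The first experiment is easy to re-run as a self-contained script in a throwaway directory:

```shell
# Re-running example 1: removing a directory that merely contains a
# symlink removes the link itself, not the link's target.
work=$(mktemp -d)
mkdir "$work/bar" "$work/foo"
touch "$work/bar/a.txt"
ln -s "$work/bar" "$work/foo/baz"
rm -r "$work/foo"            # removes foo and the baz link, nothing else
ls "$work"                   # prints: bar
```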

Wyrmwood ,Oct 30, 2014 at 20:36

rm -rf baz/* will remove the contents – Wyrmwood Oct 30 '14 at 20:36

Buttle Butkus ,Jan 12, 2016 at 0:35

Yes, if you do rm -rf [symlink], then the contents of the original directory will be obliterated! Be very careful. – Buttle Butkus Jan 12 '16 at 0:35

frnknstn ,Sep 11, 2017 at 10:22

Your example 3 is incorrect! On each system I have tried, the file a.txt will be removed in that scenario. – frnknstn Sep 11 '17 at 10:22

Susam Pal ,Sep 11, 2017 at 15:20

@frnknstn You are right. I see the same behaviour you mention on my latest Debian system. I don't remember on which version of Debian I performed the earlier experiments. In my earlier experiments on an older version of Debian, either a.txt must have survived in the third example or I must have made an error in my experiment. I have updated the answer with the current behaviour I observe on Debian 9 and this behaviour is consistent with what you mention. – Susam Pal Sep 11 '17 at 15:20

Ken Simon ,Jan 25, 2012 at 16:43

Your /home/me/msg directory will be safe if you rm -rf the directory from which you ran ls. Only the symlink itself will be removed, not the directory it points to.

The only thing I would be cautious of, would be if you called something like "rm -rf msg/" (with the trailing slash.) Do not do that because it will remove the directory that msg points to, rather than the msg symlink itself.

> ,Jan 25, 2012 at 16:54

"The only thing I would be cautious of, would be if you called something like "rm -rf msg/" (with the trailing slash.) Do not do that because it will remove the directory that msg points to, rather than the msg symlink itself." - I don't find this to be true. See the third example in my response below. – Susam Pal Jan 25 '12 at 16:54

Andrew Crabb ,Nov 26, 2013 at 21:52

I get the same result as @Susam ('rm -r symlink/' does not delete the target of symlink), which I am pleased about as it would be a very easy mistake to make. – Andrew Crabb Nov 26 '13 at 21:52


rm removes files and directories. If the file is a symbolic link, the link is removed, not the target; rm does not follow symbolic links. Consider, for example, the behavior when deleting 'broken' links: rm exits with 0, not with a non-zero status, because removing the link itself succeeds.
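The broken-link case can be verified in a couple of lines:

```shell
# rm unlinks the symlink itself, even a dangling one, and reports success.
d=$(mktemp -d)
ln -s "$d/never-existed" "$d/dangling"
rm "$d/dangling" && echo "exit status 0"
```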

#### [Oct 05, 2018] Sometimes one extra space makes a big difference

###### Oct 05, 2018 | cam.ac.uk

From: rheiger@renext.open.ch (Richard H. E. Eiger)
Organization: Olivetti (Schweiz) AG, Branch Office Berne

In article <1992Oct9.100444.27928@u.washington.edu> tzs@stein.u.washington.edu
(Tim Smith) writes:
> I was working on a line printer spooler, which lived in /etc. I wanted
> to remove it, and so issued the command "rm /etc/lpspl." There was only
> one problem. Out of habit, I typed "passwd" after "/etc/" and removed
>
[deleted to save space]
>
> --Tim Smith

Here's another story. Just imagine having the sendmail.cf file in /etc. Now, I was working on the sendmail stuff and had come up with lots of sendmail.cf.xxx files which I wanted to get rid of, so I typed "rm -f sendmail.cf. *". At first I was surprised about how much time it took to remove some 10 files or so. Hitting the interrupt key, when I finally saw what had happened, was way too late, though.

Fortune has it that I'm a very lazy person. That's why I never bothered to just back up directories with data that changes often. Therefore I managed to restore /etc successfully before rebooting... :-) Happy end, after all. Of course I had lost the only well working version of my sendmail.cf...

Richard

### Sites

Creative uses of rm

safe-rm -- another, simpler wrapper, also written in Perl, that uses text strings instead of regular expressions to define the set of protected directories.

linux - How do I prevent accidental rm -rf - - Server Fault

bash - Best practices to alias the rm command and make it safer - Super User