undo rm -rf * (Patrick O'Callaghan)
walker at omnisterra.com
Sat Mar 30 04:32:25 UTC 2013
> > > pseudo-rm is going to fail, either because the implementer didn't
> > > handle some corner case correctly, or because you typed rm instead
> > > of nrm, or because a file was removed by a program without anyone
> > > typing anything, and the only solution is to have a backup. So if
> > > you're going to have a backup anyway, what's the point?
Sometimes multiple safety nets are better than just one.
I haven't found any failures in more than 20 years of daily
linux/hpux nrm usage. Of course, YMMV.
I alias rm to nrm in both root's and my .bashrc.
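A minimal sketch of how such a setup might look (the real nrm is richer; the wrapper below is only an assumption that moves its arguments into a ./.gone directory instead of unlinking them):

```shell
# Hypothetical nrm-style wrapper: "delete" by moving into .gone
# so files can be inspected and recovered later.
nrm() {
    mkdir -p .gone
    for f in "$@"; do
        mv -- "$f" .gone/ || return 1
    done
}

# Route everyday rm through the wrapper; \rm still bypasses the alias.
alias rm='nrm'
```

With this in place, `rm foo.txt` parks the file in `.gone/foo.txt`, and `\rm` invokes the real command.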
I have to type "\rm" when I want to really delete things (like the .gone
directory itself). This is handy because I can use a regular expression
to delete files, and then take a look at the .gone directory to make
sure I really deleted just the files I wanted to. Then if I need
the disk space I can delete the directory with "\rm -r .gone".
Most of the time, I let the files get deleted by the normal
cleanup cron job.
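The cleanup job itself isn't shown in the post; a sketch of what such a job might run (the 30-day retention and the path pattern are assumptions, not nrm's actual policy):

```shell
# Hypothetical cleanup pass: purge anything in a .gone directory
# older than 30 days. In cron this might be scheduled as, e.g.:
#   0 3 * * * sh /usr/local/bin/gone-cleanup
GONE_ROOT="${GONE_ROOT:-$HOME}"   # tree to sweep; override for testing
find "$GONE_ROOT" -path '*/.gone/*' -type f -mtime +30 -delete
```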
I've used nrm (http://omnisterra.com/walker/linux/nrm.html) since around
1987 and have found the current version to handle all my corner cases
properly (even return codes). The initial versions did have some funny
corner cases with deleting targets of symlinks, and some speed problems
in deleting huge numbers of files, but those problems have been fixed a
long time ago.
Heck, it's open source. Any corner case
that didn't work got fixed by looking at the equivalent code in gnu-rm
or hpux-rm to see how it should be done.
> > Lets say we did a daily backup at 9:00am and deleted some files by error at
> > 5:00 pm. Then what???
Just do "ls .gone" and "cp .gone/critical_file ."
> If you're worried about that, then backup on an hourly basis, or even
> more frequently. Modern backup solutions such as rsnapshot or obnam do
> this incrementally and at very low cost. Then you can only lose a
> maximum of an hour's work.
"nrm -s" keeps a fine-grained horde of every deleted file - even
multiple deletes of the same file! The strategy works so well, that I'd
seriously propose the .gone strategy as a kernel option. Then all
"unlink" calls would use the same sequencing system even if the file
was removed by a program without "anyone typing anything".
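The sequencing behavior described above can be sketched roughly as follows (the numeric-suffix naming scheme is a guess for illustration, not nrm's actual on-disk format):

```shell
# Rough sketch of "-s"-style versioned deletes: if .gone already holds
# a file of the same name, append an increasing sequence number rather
# than overwriting, so repeated deletes of one name are all kept.
nrm_s() {
    mkdir -p .gone
    for f in "$@"; do
        base=$(basename -- "$f")
        dest=".gone/$base"
        n=1
        while [ -e "$dest" ]; do
            dest=".gone/$base.$n"
            n=$((n + 1))
        done
        mv -- "$f" "$dest" || return 1
    done
}
```

Deleting `x` three times would then leave `.gone/x`, `.gone/x.1`, and `.gone/x.2`, oldest first.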
Backups are a highly recommended and mostly orthogonal safety net.
The nrm approach also diverts lots of support requests from the
department backup guru, and lets most users fix their own problems
without any need for help.