grep searches through a file and prints out all the lines that match some pattern. Here, the pattern is some string that is known to be in the deleted file. The more specific this string can be, the better. The file being searched by grep (/dev/sda1) is the partition of the hard drive the deleted file used to reside in. The '-a' flag tells grep to treat the hard drive partition, which is actually a binary file, as text. Since it would be nice to recover the entire file rather than just the lines that are already known, context control is used. The flags '-B 25 -A 100' tell grep to print out 25 lines before a match and 100 lines after a match. Be conservative with estimates on these numbers to ensure the entire file is included (when in doubt, guess bigger numbers). Excess data is easy to trim out of the results, but if you find yourself with a truncated or incomplete file, you need to do this all over again. Finally, the '> results.txt' instructs the shell to store the output of grep in a file called results.txt. Source: http://spin.atomicobject.com/2010/08/18/undelete?utm_source=y-combinator&utm_medium=social-media&utm_campaign=technical
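Putting the described pieces together, the invocation would look roughly like this (the search string is a placeholder; use a string you know was in your file):
grep -a -B 25 -A 100 'some string known to be in the deleted file' /dev/sda1 > results.txt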
This command is useful if you accidentally untar or unzip an archive in a directory and you want to automatically remove the files. Just untar the files again in a subdirectory and then run a command like the following:
for file in ~/Desktop/temp/*; do rm ~/Desktop/`basename $file`; done
This command works by rsyncing the target directory (containing the files you want to delete) with an empty directory. The '--delete' switch instructs rsync to remove files that are not present in the source directory. Since there are no files there, all the files will be deleted. I'm not clear on why it's faster than 'find -delete', but it is. Benchmarks here: https://web.archive.org/web/20130929001850/http://linuxnote.net/jianingy/en/linux/a-fast-way-to-remove-huge-number-of-files.html
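The command itself is not quoted above; a sketch of the technique, assuming an empty directory at /tmp/empty and the directory to purge at /path/to/dir, would be:
mkdir -p /tmp/empty && rsync -a --delete /tmp/empty/ /path/to/dir/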
Instead, install the secure-delete package (apt-get install secure-delete) and you can use: srm to securely delete a file or directory on the hard disk, smem to wipe data from RAM, sfill to wipe the free space on the hard disk, and sswap to wipe all data from swap.
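Typical invocations (the paths here are examples only):
srm -r /path/to/secret_dir    # securely erase a directory tree on disk
smem                          # wipe unused memory
sfill /home                   # wipe the free space of the filesystem holding /home
sswap /dev/sda2               # wipe a swap partition (run swapoff on it first)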
This command will delete files in a given path (/dir_name) which are older than a given time in days (-mtime +5 will delete files older than five days).
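The original command is not shown; it was presumably along these lines:
find /dir_name -type f -mtime +5 -exec rm -f {} \;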
List all the file names from the zip archive and pass them to the xargs utility to delete each one of them.
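The exact command is not shown above; with GNU xargs and an archive called archive.zip (both assumptions), one way is:
unzip -Z1 archive.zip | xargs -d '\n' rm -f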
This checks whether each branch has been merged into master and then deletes the ones that have been, keeping your local git repo nice and clean of stale branches.
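A common form of this cleanup, assuming the integration branch is named master:
git branch --merged master | grep -v -e '^\*' -e 'master' | xargs -r git branch -d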
This will find all files under the path "." which are older than 10 days, and delete them. If you wish to use the "rm" command instead, replace "-delete" with "-exec rm [options] {} \;"
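For reference, the command being described would be something like:
find . -mtime +10 -delete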
This will search all directories and ignore the CVS ones. Then it will search all files in the resulting directories and act on them.
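The command isn't shown above; the prune-then-act pattern it describes looks roughly like this, with grep standing in as a hypothetical action:
find . -type d -name CVS -prune -o -type f -exec grep -l 'pattern' {} +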
Maybe you want to first check which files will be deleted:
find $HOME -name '*.sol' -exec echo rm {} \;
ls -Q will show the filenames in quotes. xargs -p rm will print the filenames piped from ls -Q and ask for confirmation before deleting the files. Without the -Q switch, files with spaces in their names won't be deleted.
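The pipeline being described is simply:
ls -Q | xargs -p rm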
Recursively delete empty directories. Use with care.
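The command isn't shown; one common way to do this is:
find . -type d -empty -delete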
It does not work without the verbose mode (-v is important)
Useful when you want to cron a daily deletion task in order to keep only files that are less than one year old. The command excludes the .snapshot directory to prevent backup deletion.
One can append -delete to this command to delete the files:
find /path/to/directory -not \( -name .snapshot -prune \) -type f -mtime +365 -delete
This command will find all occurrences of one or more patterns in a collection of files and will delete every line matching the patterns in every file.
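The command isn't shown above; one way to do this with GNU sed, treating the patterns and the path as placeholders, is:
find /path/to/files -type f -exec sed -i '/pattern1/d; /pattern2/d' {} +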
As a user, deletes all your posts from a MyBB board (provided you have the search page listings of all your posts saved into the same directory this command is run from). Full command: for i in *; do cat $i | grep pid | sed -e 's/;/\ /g' -e 's/#/\ /g' -e 's/pid=/\ /g' | awk -F ' ' '{print $2}' >> posts.txt; done; for c in `cat posts.txt`; do curl --cookie name= --data-urlencode name=my_post_key=\&delete=1\&submit=Delete+Now\&action=deletepost\&pid=$c --user-agent Firefox\ 3.5 --url http://url/editpost.php?my_post_key=\&delete=1\&submit=Delete+Now\&action=deletepost\&pid=$c; sleep 2s; done; echo
This command is recursive and will delete files in all directories under ".". It will find and delete all files except those excluded with ! -name "pattern"; in this case the exclusions are file extensions. -type f means it will only match files, not directories. Finally, the -delete flag asks find to delete what it matches. You can test the command by first running it without -delete; it will then list the files it would delete.
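A sketch of this pattern, keeping only .jpg and .png files (the extensions here are just examples):
find . -type f ! -name "*.jpg" ! -name "*.png" -delete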
This can be used to delete or archive old mails. In fact, for archiving it's a bit different: you need to archive the mails first with some tool (e.g. archivemail), and then delete them (if you want!). Here we use -path ".*/cur/*" to avoid the argument-list limit you would hit with bash globbing and to search in any inbox (e.g. .mymail, .spam, .whatever). ! -newermt "1 week ago" can be read as: all files which are older than "1 week ago"; adapt it accordingly.
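The command itself isn't shown; run from the top of the mail directory, a sketch would be:
find . -type f -path ".*/cur/*" ! -newermt "1 week ago" -delete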
While `echo rm * | batch` might seem to work, it could still raise the load of the system, since `rm` will be _started_ when the load is low but may run for a long time. My proposed command starts a new `rm` invocation once every minute, but only when the load is small. Obviously, the load could also be lowered using `ionice`, but I still think this is a useful example of a sequential batch job.
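The proposed command isn't quoted above; a rough sketch of the idea (queue one rm per file through batch, one per minute, so each deletion only starts when the load average is low) could look like:
for f in "$PWD"/*; do printf 'rm -- %q\n' "$f" | batch; sleep 60; done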
This command removes all files in the current directory recursively and then does a 'cvs remove' on them.
If you want to do this in just one specific db, use:
redis-cli -n5 KEYS "user*" | xargs redis-cli -n5 DEL
Find and delete files older than 15 days.