commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
You can sign-in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).
Moves every file in the current folder older than 30 days into the "old" folder.
Replace "mv $i old/" with any other command, such as rm or echo, to do something different.
This command uses -newerXY to show you the files that have been modified since a specific date. I recommend looking up "-newerXY" in the find manpage to get the specifics.
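For example, with the modification-time/date combination (a sketch; -newermt takes a date string in GNU find):
find . -type f -newermt "2013-01-01"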
Finds and lists all core files from the current directory down. You can pipe the output into xargs rm -i to be prompted for removal if you'd like to double-check before deleting.
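A sketch, assuming the files are literally named "core"; -exec is used for the prompted variant so that rm -i can read confirmations from the terminal (which it cannot reliably do at the end of a plain xargs pipe):
find . -type f -name core -exec rm -i {} \;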
Checks all bash scripts in the current dir for syntax errors WITHOUT running them.
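A minimal sketch using bash's -n flag, which parses a script without executing it (assuming the scripts end in .sh):
find . -maxdepth 1 -name '*.sh' -exec bash -n {} \;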
Use find's internal stat to get the file size, then let the shell add up the numbers.
Using find's internal stat to get the file size is about 50 times faster than using -exec stat.
Find files and calculate their total size using find's internal stat and shell arithmetic
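A sketch of the idea (GNU find's -printf '%s+' emits each size in bytes followed by a plus sign; the trailing 0 completes the expression for the shell's arithmetic expansion):
echo $(( $(find . -type f -printf '%s+')0 ))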
This lists the number of ogg/mp3/wav/flac files in each subdirectory of the current directory. The output can be sorted by piping it into "sort -n".
Add `-maxdepth 1` to find to exclude subfolders.
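One way to write it (a sketch; counts recursively within each first-level directory, then sorts as suggested):
for d in */; do echo "$(find "$d" -type f \( -iname '*.ogg' -o -iname '*.mp3' -o -iname '*.wav' -o -iname '*.flac' \) | wc -l) $d"; done | sort -n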
This is just another example of what the nocache package is useful for, which I described in http://www.commandlinefu.com/commands/view/12357/ and which provides the commands:
nocache <command to run with page cache disabled>
cachedel <single file to remove from page cache>
cachestats <single file> # to get the current cache state
Often we do not want to disable caching entirely, because a command involves several reads of the same files, and operations would be slowed down a lot by massive disk seeks. But after our operations, the files sit in the cache needlessly if we know we're very likely never touching them again.
cachedel helps to reduce cache pollution, i.e. it keeps one-off data from evicting frequently required files relevant for desktop interaction (libs/configs/etc.) from RAM.
So we can run cachedel after each data-intensive job. Today I run commands like these:
<compile job> && find . -type f -exec cachedel '{}' \; &> /dev/null # no need to keep all source code and tmp files in memory
sudo apt-get dist-upgrade && find /var/cache/apt/archives/ -type f -exec cachedel '{}' \; # Debian/*buntu system upgrade
dropbox status | grep -Fi idle && find ~/Dropbox -type f -exec cachedel '{}' \; &> /dev/null # if Dropbox is idle, remove sync'ed files from cache
https://github.com/Feh/nocache
http://packages.debian.org/search?keywords=nocache
Useful when you want to cron a daily deletion task so as to keep only files younger than one year. The command excludes the .snapshot directory to prevent backups from being deleted.
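The listing form is the same find without -delete:
find /path/to/directory -not \( -name .snapshot -prune \) -type f -mtime +365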
One can append -delete to this command to delete the files:
find /path/to/directory -not \( -name .snapshot -prune \) -type f -mtime +365 -delete
Creates an incremental snapshot of individual folders.
Problem: I wanted to back up user data individually, using an incremental method. In this example, all user data is located in "/mnt/storage/profiles", with about 25 folders inside, each named after a user (/mnt/storage/profiles/mike; /mnt/storage/profiles/lucy ...).
I need each individual folder backed up, not the whole "/mnt/storage/profiles". So, using find to list each folder at the right directory depth and creating two variables (tarfile=username & desdir=destination), tar creates a .tgz file for each folder, resulting in "mike_2013-12-05.tgz" and "lucy_2013-12-05.tgz". A sketch follows below.
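A sketch of the loop under those assumptions (GNU tar's --listed-incremental provides the incremental snapshots; the destination /mnt/backup and the .snar snapshot files are hypothetical placeholders):
desdir=/mnt/backup # hypothetical destination directory
find /mnt/storage/profiles -mindepth 1 -maxdepth 1 -type d | while read -r dir; do
  tarfile="$(basename "$dir")_$(date +%F).tgz"
  tar --listed-incremental="$desdir/$(basename "$dir").snar" -czf "$desdir/$tarfile" "$dir"
done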
Problem: I wanted to back up user data individually. In this example, all user data is located in "/mnt/storage/profiles", with about 25 folders inside, each named after a user (/mnt/storage/profiles/mike; /mnt/storage/profiles/lucy ...).
I need each individual folder backed up, not the whole "/mnt/storage/profiles". So, using find to list each folder at the right directory depth and creating two variables (tarfile=username & desdir=destination), tar creates a .tgz file for each folder, resulting in "mike_full.tgz" and "lucy_full.tgz".
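The full-backup variant is the same loop without the snapshot file, naming each archive _full (same hypothetical destination):
desdir=/mnt/backup # hypothetical destination directory
find /mnt/storage/profiles -mindepth 1 -maxdepth 1 -type d | while read -r dir; do
  tar -czf "$desdir/$(basename "$dir")_full.tgz" "$dir"
done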
May be useful for getting a user's IP address over the phone, as users struggle to read through long ipconfig output.
Note that sed -i is non-standard (although both GNU and current BSD systems support it).
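The sed -i form presumably looks something like this (a sketch; GNU sed syntax, BSD sed would need -i ''):
find . -name "*.txt" -exec sed -i 's/old/new/g' {} +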
Can also be accomplished with
find . -name "*.txt" | xargs perl -pi -e 's/old/new/g'
as shown here: http://www.commandlinefu.com/commands/view/223/a-find-and-replace-within-text-based-files-to-locate-and-rewrite-text-en-mass
Find all files larger than 500MB in the home directory and print them ordered by size, with full info about each file.
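A sketch of one way to do it (ls -ls prefixes each line with the allocated block count, which sort -n then uses as the key):
find ~ -type f -size +500M -exec ls -ls {} \; | sort -n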