commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/
You can sign in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions, ...).
Problem: I wanted to back up user data individually, using an incremental method. In this example, all user data is located in "/mnt/storage/profiles", with about 25 folders inside, each named after a user ( /mnt/storage/profiles/mike; /mnt/storage/profiles/lucy ...)
I need each individual folder backed up, not the whole "/mnt/storage/profiles". So, using find while limiting directory depth and creating two variables (tarfile=username & desdir=destination), tar creates a .tgz file for each folder, resulting in "mike_2013-12-05.tgz" and "lucy_2013-12-05.tgz".
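A minimal sketch of that loop, assuming a hypothetical destination /mnt/backup and GNU tar's --listed-incremental for the incremental part (the .snar snapshot files are also an assumption):

find /mnt/storage/profiles -mindepth 1 -maxdepth 1 -type d | while read -r dir; do
  tarfile=$(basename "$dir")
  desdir=/mnt/backup
  tar czf "$desdir/${tarfile}_$(date +%F).tgz" \
      --listed-incremental="$desdir/${tarfile}.snar" \
      -C /mnt/storage/profiles "$tarfile"
done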
Problem: I wanted to back up user data individually. In this example, all user data is located in "/mnt/storage/profiles", with about 25 folders inside, each named after a user ( /mnt/storage/profiles/mike; /mnt/storage/profiles/lucy ...)
I need each individual folder backed up, not the whole "/mnt/storage/profiles". So, using find while limiting directory depth and creating two variables (tarfile=username & desdir=destination), tar creates a .tgz file for each folder, resulting in "mike_full.tgz" and "lucy_full.tgz".
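The full-backup variant is the same loop without the snapshot file, again assuming /mnt/backup as the destination:

find /mnt/storage/profiles -mindepth 1 -maxdepth 1 -type d | while read -r dir; do
  tarfile=$(basename "$dir")
  tar czf "/mnt/backup/${tarfile}_full.tgz" -C /mnt/storage/profiles "$tarfile"
done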
Alternative to mnikhil's ls/awk solution
Usage example:
echo 'Sure to continue?'; read -n1 choi; if [ "$choi" = 'y' ] || [ "$choi" = 'Y' ]; then echo -e '\nExecuting..'; else echo -e '\nAborted'; fi
Finds all PNGs in the current folder and all of its children, then pngcrushes every result.
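A minimal sketch; it writes to a temporary file and moves it back, so no version-specific pngcrush overwrite flag is needed:

find . -type f -iname '*.png' -exec sh -c 'for f; do pngcrush "$f" "$f.tmp" && mv "$f.tmp" "$f"; done' _ {} +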
This command verifies a sha256sum-formatted file hash list on IBM AIX, or any other UNIX-like OS that has openssl but lacks sha256sum by default. Steps (a sketch of the script follows the list):
1: Save to the filesystem a script that:
A: Receives as arguments the two parts of one line of a sha256sum listing
B: Feeds the file to openssl in SHA-256 digest mode on standard input, and saves the resulting hash
C: Compares the calculated hash against the one received as argument
D: Outputs the result in a sha256sum-like format
2: Make the script runnable
3: Feed the sha256sum listing to xargs, running the aforementioned script and passing 2 arguments at a time
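A hedged sketch of such a script (check_sha256.sh and files.sha256 are hypothetical names):

#!/bin/sh
# check_sha256.sh - verify one file against its expected SHA-256 hash
# $1 = expected hash, $2 = filename (the two fields of one sha256sum line)
expected=$1
file=$2
actual=$(openssl dgst -sha256 < "$file" | awk '{print $NF}')
[ "$actual" = "$expected" ] && echo "$file: OK" || echo "$file: FAILED"

Then make it runnable and feed it the listing, two fields at a time:

chmod +x check_sha256.sh
xargs -n 2 ./check_sha256.sh < files.sha256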
This strips out the relevant disk information from KVM. I'm using it to find disks on a SAN which are no longer in use.
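The exact command isn't shown here; a hedged way to pull the same disk information with libvirt's virsh would be:

for dom in $(virsh list --all --name); do echo "== $dom"; virsh domblklist "$dom" --details; done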
A basic find implementation for systems that don't actually have find, like an Android console without BusyBox installed.
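A minimal sketch of such a replacement, using only shell globbing and recursion (runs in Android's mksh; /sdcard is just an example starting point):

walk() {
  for p in "$1"/* "$1"/.[!.]*; do
    [ -e "$p" ] || continue
    echo "$p"
    [ -d "$p" ] && walk "$p"
  done
}
walk /sdcard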
git gc should be run on all git repositories every 100 commits. This will help you do so if you have many git repositories ;-)
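A hedged sketch, assuming the repositories all live somewhere under ~/src:

find ~/src -type d -name .git -exec git --git-dir={} gc \;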
How much memory is chrome sucking?
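One hedged way to answer that on Linux, summing the resident set sizes of every chrome process (shared pages are double-counted, so treat the total as an upper bound):

ps -C chrome -o rss= | awk '{sum += $1} END {printf "%.1f MB\n", sum/1024}'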
If you want to copy all files listed (with full path) in a text file (e.g. a cmus playlist.pl) to a certain directory, use this nice one-liner...
Credit goes to RiffRaff: http://www.programmingforums.org/post242527-2.html
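A sketch with GNU xargs and cp; playlist.pl comes from the description above, /target/dir is an assumption:

xargs -d '\n' -a playlist.pl cp -t /target/dir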
Some computers these days don't have an HDD activity light, but they still have a useless caps-lock, so why not re-purpose that light to show HDD activity?
Requires setleds and dstat and probably needs to run as root.
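A rough sketch of the idea, polling /proc/diskstats instead of dstat so the example is self-contained; the disk name sda and console /dev/tty1 are assumptions, and setleds needs root:

prev=$(awk '$3 == "sda" {print $4 + $8}' /proc/diskstats)
while sleep 0.2; do
  cur=$(awk '$3 == "sda" {print $4 + $8}' /proc/diskstats)
  if [ "$cur" != "$prev" ]; then setleds +caps < /dev/tty1; else setleds -caps < /dev/tty1; fi
  prev=$cur
done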
Find all .gz files and recompress them to bz2 on the fly. No temp files.
edit: forgot the double quotes! jeez!
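A sketch of that approach: stream each archive through gzip -dc into bzip2, and remove the .gz only if the conversion succeeded:

find . -type f -name '*.gz' -exec sh -c 'for f; do gzip -dc "$f" | bzip2 > "${f%.gz}.bz2" && rm "$f"; done' _ {} +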
This command is more robust because it handles spaces, newlines and control characters in filenames. It uses find's -printf, not ls, to determine file size.
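A sketch of that technique, listing the 10 largest files with NUL separators so odd filenames survive (GNU find and coreutils assumed; head -z needs a newer coreutils):

find . -type f -printf '%s\t%p\0' | sort -z -n -r | head -z -n 10 | tr '\0' '\n'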
This exports every line of the input file as an environment variable, assuming each line is of the form NAME=value.
(Please see sample output for usage)
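A minimal sketch of one way to do it, assuming a hypothetical vars.env of such lines (set -a auto-exports every assignment made while sourcing the file):

set -a; . ./vars.env; set +a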
Use any script name (the read command gets it) and it will be encrypted with the extension .crypt, i.e.:
myscript --> myscript.crypt
You can execute myscript.crypt only if you know the password. If you die, your script dies with you.
If you modify the startup line, be careful with the offset calculation of the encrypted block (the XX string).
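For illustration, a hedged sketch of what such a startup line can look like (XX again stands for the byte offset of the encrypted block):

dd if="$0" bs=1 skip=XX 2>/dev/null | gpg -d | sh -s "$@"; exit $?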
It wouldn't be difficult to make the script editable (dd with an offset piped to gpg -d, piped to vim -, piped to gpg -c, directed to script.new), but there isn't enough space to do it in a one-liner.
Sorry for the chmod on parentheses; I don't like "-" at the end.
Thanks flatcap for the subshell abbreviation to /dev/null
Make a bunch of files with the same permissions, owner, group, and content as a template file.
(handy if you have a lot to do with .php, .html or similar files)
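A sketch, with template.php and the target names as hypothetical examples (cp -p preserves mode, ownership and timestamps; copying the owner needs root):

for f in index.php about.php contact.php; do cp -p template.php "$f"; done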
Simply add this to whatever Apache startup script you have, or, if you are on a Mac, create a new Automator application. This will show a pretty Growl notification whenever there's a new Apache error log entry. Useful for local development.
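A hedged sketch of the watching loop, assuming Growl's growlnotify CLI and a typical macOS log path:

tail -F /var/log/apache2/error_log | while read -r line; do growlnotify -t 'Apache error' -m "$line"; done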
# find assumes email files start with a number 1-9
# sed joins the lines starting with " " to the previous line
# gawk prints the received and from lines
# sort according to the second field (received+from)
# uniq prints the duplicated filenames
# a message is viewed as a duplicate if it was received at the same time as another message, and from the same person.
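A hedged reconstruction of that pipeline in gawk alone; the sed join step is simplified away by taking only the first Received: and From: headers, and the sort/uniq pair is replaced by an awk seen[] filter:

find "$folder" -type f -name '[1-9]*' -exec gawk '
  FNR == 1     { rcv = frm = "" }
  /^Received:/ { if (rcv == "") rcv = $0 }
  /^From:/     { if (frm == "") frm = $0 }
  rcv != "" && frm != "" { print rcv "|" frm "\t" FILENAME; nextfile }
' {} + | awk -F '\t' 'seen[$1]++ { print $2 }'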
The command was intended to be run under cron. If run in a terminal, mutt can be used:
mutt -e "push otD~=xq" -f $folder