commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/
You can sign in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that reach a minimum of 3 and 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…).
Sort .avi movies by time length, print the longest first, and so on...
Note that sort -h has been available since coreutils 7.6.
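A rough sketch of one way to do this, assuming ffprobe (from FFmpeg) is available to read the durations; the original command may use a different tool:

for f in *.avi; do
  printf '%s\t%s\n' "$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 "$f")" "$f"
done | sort -rn | cut -f2-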
You'll run into trouble if you have files with missing newlines at the end. I tried to use
PAGER='sed \$q' git blame
PAGER='sed \$q' git -p blame
to force a newline at the end, but as soon as the output is redirected, git seems to ignore the pager.
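If the pager is ignored when output is redirected, one workaround (just a sketch, with <file> as a placeholder) is to skip the pager entirely and pipe through sed yourself; GNU sed's '$a\' appends a final newline when it is missing:

git --no-pager blame <file> | sed -e '$a\'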
Figures out total line contribution per author for an entire Git repo. Includes binary files, which kind of mess up the true count.
If it crashes or takes too long, adjust the ls-files options at the start:
git ls-files -x "*pdf" -x "*psd" -x "*tif" to remove really random binary files
git ls-files "*.py" "*.html" "*.css" to only include specific file types
Based off my original SVN version: http://www.commandlinefu.com/commands/view/2787/prints-total-line-count-contribution-per-user-for-an-svn-repository
Note the xargs at the end.
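A sketch of the general shape of the pipeline (assumes git blame's --line-porcelain output is available; the actual command on the site may differ):

git ls-files | xargs -n1 git blame --line-porcelain | sed -n 's/^author //p' | sort -f | uniq -ic | sort -rn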
Use the hold space to preserve lines until data is needed.
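As a minimal illustration of the hold space (the file name and the pattern 'ERROR' are made up here), this prints the line immediately preceding each match by holding every line and swapping it in when the pattern hits:

sed -n '/ERROR/{x;p;x};h' file.log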
List packages and their disk usage in decreasing order. This uses the "Installed-Size" from the package metadata. It may differ from the actual used space, because e.g. data files (think of databases) or log files may take additional space.
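On Debian-style systems this can be done roughly like so (a sketch; the Installed-Size field is in KiB):

dpkg-query -W --showformat='${Installed-Size}\t${Package}\n' | sort -rn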
Counts the total (recursive) number of files in the immediate (depth-1) subdirectories as well as in the current directory, and displays the counts sorted.
Fixed, as per ashawley's comment
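One possible sketch of the idea (breaks on directory names containing newlines):

find . -maxdepth 1 -type d | while read -r dir; do
  printf '%s\t%s\n' "$(find "$dir" -type f | wc -l)" "$dir"
done | sort -rn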
This dup finder saves time by comparing size first, then md5sum. It doesn't delete anything, it just lists the duplicates.
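A sketch of the size-first, checksum-second approach (GNU find/uniq options assumed):

find . -not -empty -type f -printf '%s\n' | sort -rn | uniq -d |
  xargs -I{} find . -type f -size {}c -print0 |
  xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate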
Show apps that use internet connection at the moment.
Can be used to discover which programs create internet traffic. Skip the part after awk to get more details, though then it will no longer show only unique processes.
This version will work with other languages such as Spanish and Portuguese, as long as the word for "ESTABLISHED" still contains the fragment "STAB" (e.g. "ESTABELECIDO").
This corrects duplicate output from the previous command.
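One common way to get such a list (a sketch; column numbers depend on the netstat variant, and lsof is an alternative):

netstat -tupan 2>/dev/null | grep STAB | awk '{print $7}' | sort -u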
This is just for fun.
A little bit smaller and faster, and it should handle files with special characters in the name.
This can be much faster than downloading one or both trees to a common server and comparing the files there. Afterwards, only the differing files need to be copied down for deeper comparison if needed.
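A sketch of the checksum-over-ssh idea (user, host and paths are placeholders):

ssh user@remote 'cd /remote/dir && find . -type f -exec md5sum {} +' | sort -k2 > remote.md5
(cd /local/dir && find . -type f -exec md5sum {} + | sort -k2) > local.md5
diff local.md5 remote.md5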
Searches for its argument in PATH.
Accepts grep expressions.
Without arguments, lists all binaries found in PATH.
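A minimal sketch of such a function (the name findpath is invented here; the real command likely differs):

findpath() {
  # list everything directly under each PATH entry, then filter with grep
  echo "$PATH" | tr ':' '\n' | xargs -I{} find {} -mindepth 1 -maxdepth 1 2>/dev/null | grep -E "${1:-.}"
}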
From there, just pkg install the package you need.
Lists all classes used in all *.html files in the current directory. Useful for checking whether you have left out any style definitions, or accidentally used a different name than you intended. (I have an ugly habit of accidentally substituting camelCase instead of using under_scores: I would name something counterBox instead of counter_box.)
WARNING: assumes you give class names in double quotes, and that you apply only one class per element.
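One sketch of the extraction, subject to the same double-quote caveat:

grep -oh 'class="[^"]*"' *.html | sed 's/^class="//;s/"$//' | sort -u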
Based on the MrMerry one, just adds some visuals to differentiate files and directories.
Based on the MrMerry one, just adds some visuals and sorts directories and files.
Biggest-to-smallest directories, then biggest-to-smallest files.
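A rough sketch of the idea with du and find (GNU tools; sort -h needs coreutils 7.6+, as noted above, and the output format will differ from the original):

du -sh -- */ 2>/dev/null | sort -rh
find . -maxdepth 1 -type f -printf '%s\t%p\n' | sort -rn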