commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/
You can sign-in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…).
I added -S to du so that you don't include /foo/bar/baz.iso in /foo, and changed sort's -n to -h so that it properly sorts the human-readable sizes.
Shows the 10 biggest files/dirs
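One possible form of that variant (the starting directory and the count of 10 are illustrative):

# -S (--separate-dirs): a directory's size excludes its subdirectories,
#    so /foo no longer swallows /foo/bar/baz.iso
# sort -rh understands the human-readable suffixes produced by du -h
du -Sh . | sort -rh | head -n 10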
Find all files larger than 500M in the home directory and print them ordered by size, with full information about each file.
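The exact one-liner isn't reproduced in this excerpt; with GNU find it can be written along these lines:

# -size +500M: regular files larger than 500 MiB under the home directory
# ls -lh prints the full details; sort -k5,5 -rh orders by the size column
find ~ -type f -size +500M -exec ls -lh {} + | sort -k5,5 -rh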
* Find all file sizes and file names from the current directory down (replace "." with a target directory as needed).
* Sort the file sizes in numeric order
* List only the duplicated file sizes
* Drop the file sizes so that only a list of files remains (retaining order)
* Calculate md5sums on all of the files
* Replace the first instance of two spaces (md5sum output) with a \0
* Drop the unique md5sums so only duplicate files remain listed
* Use AWK to aggregate identical files on one line.
* Remove the blank line from the beginning (this was done more efficiently by putting another "if" into the AWK command, but then the whole line exceeded the 255-character limit).
Each output line contains the md5sum followed by all of the files that share that md5sum. All fields are \0-delimited; all records are \n-delimited.
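The original one-liner isn't reproduced here, but the steps can be stitched together with GNU find, uniq and xargs roughly as below. This sketch uses tabs rather than \0 in the final output and assumes file names contain no newlines:

find . -type f -printf '%12s %p\n' |   # 1. size (fixed width) and path
  sort -n |                            # 2. sort numerically by size
  uniq -D -w 12 |                      # 3. keep only lines whose size repeats
  sed 's/^ *[0-9]* //' |               # 4. drop the size, keeping the path
  xargs -d '\n' md5sum |               # 5. md5sum the remaining candidates
  sort |                               # 6. group identical checksums together
  uniq -D -w 32 |                      # 7. keep only repeated checksums
  awk '{                               # 8. one line per checksum, files tab-separated
        hash = substr($0, 1, 32); file = substr($0, 35)
        if (hash != prev) { printf "%s%s", sep, hash; sep = "\n"; prev = hash }
        printf "\t%s", file
      } END { if (NR) print "" }'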
Replace \-dev with whatever you want to search for.
Enhanced version: fixes sorting by human-readable numbers, and filters out non-MB/GB entries that merely have a G or an M in their name.
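One way to express that filter, assuming GNU du/grep/sort (note that du -sh * skips dot files):

# anchoring on the size field means a G or M inside a file name can't match
du -sh * | grep -E '^[0-9.,]+[MG][[:blank:]]' | sort -rh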
ls -al lists all files; sort +4n sorts by the 5th field numerically.
This doesn't require any non-standard programs.
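+4n is the historical sort key syntax and is obsolete in current POSIX; on modern coreutils the equivalent key specification is:

ls -al | sort -k5,5n   # sort by the 5th field (the size) numerically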
For those without the tree command.
tree -ifsF --noreport .|sort -n -k2|grep -v '/$'
(rows showing directory names are hidden)
As per eightmillion's comment.
Simply economical :)
Shows the size of the directory the command is run in.
The size is shown in MB or GB.
There is no need to type the path; it's the current working directory.
Use this to identify whether directories mostly contain large or small files.
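The command itself isn't shown in this excerpt; the behaviour described matches a plain summarising du run in the current directory, for example:

# no path argument needed: du defaults to the current working directory;
# -s gives a single total, -h prints it with a K/M/G suffix as appropriate
du -sh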
This downloads the entire file, but HTTP servers don't always provide the optional 'Content-Length:' header, and ftp/gopher/dict/etc. servers don't provide a file-size header at all.
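When no usable size header is available, the brute-force fallback is to fetch the whole file and count the bytes; a sketch, with $URL as a placeholder:

# downloads the entire file just to measure it
curl -s "$URL" | wc -c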
Specify the size in bytes using the 'c' suffix on the -size test. The + sign reads as "bigger than". Then execute du on the matches, sort in reverse order, and show the first 10 entries.
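For example (the 10000000-byte threshold and the starting directory are illustrative):

# +10000000c = larger than 10,000,000 bytes; du -h on each match,
# reverse human-readable sort, first 10 results
find . -type f -size +10000000c -exec du -h {} + | sort -rh | head -n 10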
This command lists all the directories in SEARCHPATH by size, displaying their size in a human-readable format.
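A plausible form of such a command, with SEARCHPATH as the placeholder used above:

# one line per immediate subdirectory of SEARCHPATH, largest first
du -sh SEARCHPATH/*/ | sort -rh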
- Where $URL is the URL of the file.
- Replace $2 with $3 at the end to get a human-readable size.
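The one-liner being discussed isn't reproduced here; judging by the $2/$3 note it parses wget's "Length:" line, which shows the byte count followed by a human-readable size in parentheses. A sketch, with $URL as a placeholder:

# --spider asks wget not to download; the Length: line goes to stderr
wget --spider "$URL" 2>&1 | awk '/^Length:/ {print $2}'   # size in bytes
wget --spider "$URL" 2>&1 | awk '/^Length:/ {print $3}'   # human-readable, e.g. (1.2M)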
Credits to svanberg @ ArchLinux forums for the original idea.
Edit: Replaced command with better version by FRUiT. (removed unnecessary grep)