commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/
You can sign in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…).
Sort a netflow packet capture by unique connections, excluding the source port.
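The original command isn't shown here; a minimal sketch of the idea, assuming the capture is a pcap file that tcpdump can read (netflow.pcap is an illustrative name) and that the traffic is IP:

tcpdump -nnr netflow.pcap | awk '{ sub(/\.[0-9]+$/, "", $3); sub(/:$/, "", $5); print $3, "->", $5 }' | sort | uniq -c | sort -n

Stripping the source port collapses repeated connections from ephemeral ports into a single counted entry.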
Very quick! Based only on the content sizes and the character counts of the filenames. If both numbers are equal, the two (or more) directories are most likely identical.
If in doubt, apply:
diff -rq path_to_dir1 path_to_dir2
AWK function taken from here:
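The linked AWK function isn't reproduced here, but the idea can be sketched roughly like this (GNU find assumed; the directory names are placeholders):

for d in path_to_dir1 path_to_dir2; do
  printf '%s: %s content bytes, %s filename characters\n' "$d" \
    "$(find "$d" -type f -printf '%s\n' | awk '{ s += $1 } END { print s + 0 }')" \
    "$(find "$d" -printf '%f\n' | wc -c)"
done

If both numbers agree for both directories, they are most likely identical; the diff above settles any doubt.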
Use the AWS CLI tools to generate a list of instances, then pipe them to jq to show only their launch time and instance ID. Finally, use sort to bring them out in launch-time order. Find all those instances you launched months ago and have forgotten about.
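A sketch of such a pipeline (the exact jq filter is an assumption rather than the original command):

aws ec2 describe-instances | jq -r '.Reservations[].Instances[] | .LaunchTime + " " + .InstanceId' | sort

LaunchTime is an ISO 8601 string, so a plain lexicographic sort is also chronological, oldest instances first.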
Displays a list of all file extensions in the current directory and how many files there are of each type of extension, in ascending order (case-insensitive).
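One way to build such a listing (a sketch; files without an extension are ignored):

find . -maxdepth 1 -type f -name '*.*' | sed 's/.*\.//' | tr '[:upper:]' '[:lower:]' | sort | uniq -c | sort -n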
This will give you the details in MB, from high to low.
I added -S to du so that you don't count /foo/bar/baz.iso inside /foo, and changed sort's -n to -h so that it can properly sort the human-readable sizes.
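With those changes the command might look like this (GNU du and sort assumed, since -h is a GNU extension):

du -Sh | sort -rh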
Shows the 10 biggest files/dirs, sorted in human-readable format.
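For example, assuming GNU coreutils:

du -ah . | sort -rh | head -n 10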
Find all files larger than 500M in home directory and print them ordered by size with full info about each file.
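One way to express that (GNU find and sort assumed; ls -lh supplies the full per-file info and sort -k5 -rh orders by the human-readable size column):

find ~ -type f -size +500M -exec ls -lh {} + | sort -k5 -rh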
* Find all file sizes and file names from the current directory down (replace "." with a target directory as needed).
* Sort the file sizes in numeric order.
* List only the duplicated file sizes.
* Drop the file sizes so that only a list of files remains (retaining order).
* Calculate md5sums on all of the files.
* Replace the first instance of two spaces (md5sum output) with a \0.
* Drop the unique md5sums so only duplicate files remain listed.
* Use AWK to aggregate identical files on one line.
* Remove the blank line from the beginning (this was done more efficiently by putting another "IF" into the AWK command, but then the whole line exceeded the 255-character limit).
Each output line contains the md5sum followed by all of the files that share that md5sum. All fields are \0-delimited; all records are \n-delimited.
"find ./ ..." could be replaced with "find $PWD ..." to display absolute path instead of relative path.
When trying to track down an error in a hosted project, it's useful to find out how the source is organized: are there .inc files? Only .php files? Or .xml files that probably contain translated texts?
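A quick way to take that inventory across the whole tree (a sketch; extensionless files are skipped):

find . -type f -name '*.*' | sed 's/.*\.//' | sort | uniq -c | sort -rn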
Analyze an Apache access log for the time period with the most activity and display the hit count, the requesting IP and the timestamp. May help detect a brute-force DoS attack.
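A sketch for a common/combined-format log; the log path and the per-minute granularity (characters 2-18 of the bracketed timestamp) are assumptions:

awk '{ print $1, substr($4, 2, 17) }' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head

Each output line shows the hit count, the requesting IP and the minute in which the hits occurred, busiest first.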
cut can handle files as well. No need for a cat.
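For example, these two are equivalent (access.log is illustrative):

cat access.log | cut -d' ' -f1
cut -d' ' -f1 access.log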