commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
You can sign-in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that receive a minimum of 3 votes and a minimum of 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for:
Wow, didn't really expect you to read this far down. The latest iteration of the site is in open beta. It's a gentle open beta: not in prime time just yet. It's being hosted over at UpGuard (link) and you are more than welcome to give it a shot. A couple of things:
List all open files of all processes.
Look through the /proc file descriptors
list only symlinks to regular files
print the symlink target
grep -P '^/(?!dev|proc|sys)'
ignore files from /dev, /proc or /sys
sort | uniq -c | sort -n
count the results
Many processes will create and immediately delete temporary files.
These can then be filtered out by adding:
... | grep -v " (deleted)$" | ...
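Put together, the steps above form a pipeline along these lines (a sketch, assuming Linux's /proc, GNU find and GNU grep's -P support):

```shell
# List every regular file currently held open by any process:
# follow the /proc fd symlinks, drop pseudo-filesystems and deleted
# temp files, then count and rank the results.
find /proc/[0-9]*/fd -xtype f -printf '%l\n' 2>/dev/null \
  | grep -P '^/(?!dev|proc|sys)' \
  | grep -v ' (deleted)$' \
  | sort | uniq -c | sort -n
```

`-xtype f` keeps only symlinks whose target is a regular file, and `-printf '%l\n'` prints the symlink target rather than the link itself.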
Sort Apache access logs by date and time using sort key field feature
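Assuming the default common/combined log format, where field 4 holds the bracketed timestamp (`[day/month/year:time`), the key offsets might look like this (a sketch; the character positions are assumptions about your log format):

```shell
# Two out-of-order sample lines in Apache common log format:
printf '%s\n' \
  '10.0.0.1 - - [01/Feb/2021:10:00:00 +0000] "GET /b HTTP/1.1" 200 42' \
  '10.0.0.2 - - [15/Jan/2021:09:30:00 +0000] "GET /a HTTP/1.1" 200 17' \
  > access.log

# Sort inside field 4: year (chars 9-12, numeric), month name (5-7, -M),
# day (2-3, numeric), then the time (14-21).
sort -t ' ' -k 4.9,4.12n -k 4.5,4.7M -k 4.2,4.3n -k 4.14,4.21 access.log
```

The `-M` modifier makes sort compare three-letter month names (Jan < Feb < …) instead of sorting them alphabetically.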
Find biggest files in a directory
This will list all the files that are a gigabyte or larger in the current working directory. Change the G in the regex to an M and you'll find all files from a megabyte up to, but not including, a gigabyte.
I'm sure there's a more elegant sed version for the tr + grep section.
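One way the idea might be expressed (a sketch; the `ls -lh` column layout and the sparse demo files are assumptions):

```shell
mkdir -p bigdemo && cd bigdemo
truncate -s 2G huge.bin      # sparse demo files: no real disk space used
truncate -s 5M medium.bin

# Human-readable sizes, then keep only lines whose size column ends in G.
ls -lh | grep -E ' [0-9.]+G '
# Swap the G for an M to match megabyte-sized files instead.
```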
capture 2000 packets and print the top 10 talkers
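A sketch of how such a capture might look (the tcpdump flags are assumptions, and the parsing assumes `-tnn` output where field 2 is `source-ip.port`):

```shell
# Capture 2000 packets (no DNS lookups, no timestamps), strip the source
# port with awk, then count and rank the top 10 source addresses.
tcpdump -tnn -c 2000 2>/dev/null \
  | awk '{sub(/\.[0-9]+$/, "", $2); print $2}' \
  | sort | uniq -c | sort -rn | head -10
```

You will typically need root (or CAP_NET_RAW) for the capture itself; the counting stages are plain text processing.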
Credit for the non-gzipped version goes to this:
Caution: destructive overwrite of filenames
Useful for concatenating pdfs in date order using pdftk
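The idea might be sketched like this (`pdftk` is assumed to be installed, and the unquoted word-splitting trick breaks on filenames containing whitespace):

```shell
# Oldest-first by modification time, then concatenate into one PDF.
ordered=$(ls -tr ./*.pdf 2>/dev/null)
if [ -n "$ordered" ]; then
  pdftk $ordered cat output combined.pdf
fi
```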
Remove duplicate lines in a text file.
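One common way to do this while preserving line order (unlike `sort -u`) is awk's seen-array idiom:

```shell
printf 'apple\nbanana\napple\ncherry\nbanana\n' > fruits.txt

# Print a line only the first time it appears; later duplicates are skipped.
awk '!seen[$0]++' fruits.txt
# -> apple, banana, cherry (one per line)
```

`seen[$0]++` is 0 (false) the first time a line is met, so `!` makes the default print action fire exactly once per distinct line.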
Same as the rest, but handles IPv6 short IPs. It also sorts in the order you're probably looking for.
A bit of a contrived example, and playing to my OCD, but nice for quick scripted output of listening ports, sorted by port, IP address and protocol.
Goes through all files in the specified directory, uses `stat` to print the last modification time, sorts numerically in reverse, uses cut to remove the epoch timestamp, and finally uses head to output only the 10 most recently modified files.
Note that on a Mac `stat` won't work like this; you'll need to use either:
find . -type f -print0 | xargs -0 stat -f '%m%t%Sm %12z %N' | sort -nr | cut -f2- | head
or alternatively do a `brew install coreutils` and then replace `stat` with `gstat` in the original command.
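For reference, the GNU/Linux form described above might look like this (a sketch; the `--format` string is a GNU `stat` assumption, and the demo files are mine):

```shell
mkdir -p recentdemo && cd recentdemo
touch -d '2 days ago' older.txt
touch newer.txt

# Epoch mtime (%Y) first so sort -nr works, then cut strips it back off,
# leaving the human-readable time (%y) and the filename (%n).
find . -type f -print0 \
  | xargs -0 stat --format '%Y %y %n' \
  | sort -nr | cut -d' ' -f2- | head
```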
The other commands were good, but they included packages that were installed and then removed.
This command only shows packages that are currently installed, sorts smallest to largest, and formats the sizes to be human readable.
This uses the ability of find (at least the one from GNU findutils that is shipped with most Linux distros) to display change time as part of its output. No xargs needed.
Finds files modified today since 00:00, removes ugly dotslash characters in front of every filename, and sorts them.
*EDITED* with the advice from flatcap (thanks!)
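A sketch with GNU find (the `-daystart` and `-printf` flags are GNU findutils assumptions; `%P` prints the path without the starting point, so the leading `./` never appears and no sed cleanup is needed):

```shell
mkdir -p todaydemo && cd todaydemo
touch -d '2 days ago' yesterdays.txt
touch todays.txt

# Files modified since 00:00 today, without the "./" prefix, sorted.
find . -type f -daystart -mtime 0 -printf '%P\n' | sort
```

`-daystart` anchors the `-mtime 0` window at midnight rather than at "24 hours ago".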
This command is more robust because it handles spaces, newlines and control characters in filenames. It uses printf, not ls, to determine file size.
Find top 5 big files
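Such a command might be sketched as follows (GNU find's `-printf '%s %p\n'` prints the size in bytes directly, avoiding fragile parsing of `ls` output; the demo files are assumptions):

```shell
mkdir -p sizedemo && cd sizedemo
truncate -s 300K big.bin      # sparse demo files of known apparent size
truncate -s 10K small.bin
printf 'hello\n' > tiny.txt

# Size in bytes and path, biggest first, top 5.
find . -type f -printf '%s %p\n' | sort -nr | head -5
```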
Get the longest match of the file extension (e.g. for 'foo.tar.gz' you get '.tar.gz' instead of '.gz').
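In POSIX shell this can be done with parameter expansion: `#*.` strips the shortest leading match up to the first dot, leaving everything after it as the full extension.

```shell
filename='foo.tar.gz'
ext=".${filename#*.}"   # strip up to the FIRST dot, keep the rest
echo "$ext"             # prints .tar.gz

# For comparison, ## strips the LONGEST leading match, leaving only .gz:
echo ".${filename##*.}" # prints .gz
```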