commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/
You can sign-in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that get a minimum of 3 and of 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions, …).
Same thing as the other.
Grabs your local IP address.
Tested on Solaris.
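A minimal sketch of one common approach; the interface names and field positions are assumptions and vary by OS:

    # hypothetical sketch: print the inet address of each configured interface
    ifconfig -a | awk '/inet /{print $2}'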
I wasn't sure how to display the image, so I thought I'd try XML for a different twist.
Same thing, just a different way to get there. You will need lynx.
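For example, lynx can dump a page as plain text, which is handy in pipelines; one common use is grabbing your external IP (the service URL here is an assumption, not the original command):

    # hypothetical example: fetch your external IP as plain text via lynx
    lynx -dump http://ifconfig.me/ip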
For disk space constraint testing. Leaves a little space available for creating temp files, etc. Easily free up the used disk space again by deleting the dummy00 file. You can tailor the test by building smaller 'blocks' to suit your needs.
WARNING: do not do this to the '/' (root) filesystem unless you know what you are doing... on some systems it could crash the OS.
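A sketch of the idea, assuming a throwaway file named dummy00 as mentioned above; the block size and count are examples to tailor:

    # fill most of the free space with a dummy file; adjust bs/count to taste
    dd if=/dev/zero of=dummy00 bs=1M count=1024
    # reclaim the space when the testing is done
    rm dummy00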
You can just use one awk script to parse the RSS feed. No need to pipe so many awks and seds; it's ugly and inefficient.
Quick and kludgy RSS parser for the recent tracks RSS feed from last.fm. Extracts artist and track link.
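A minimal sketch of such a single-awk parser, assuming the classic 1.0 API feed URL (USER is a placeholder) and splitting on angle brackets; the exact field positions depend on the feed layout:

    # kludgy parse: remember each <title>, print it with the following <link>
    curl -s "http://ws.audioscrobbler.com/1.0/user/USER/recenttracks.rss" \
      | awk -F'[<>]' '/<title>/{t=$3} /<link>/{print t " - " $3}'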
Doesn't have to be that complicated.
No need for grep; it's redundant when awk is present.
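For example, a grep piped into awk can be collapsed into awk alone; the pattern and file here are hypothetical:

    # redundant: grep does what awk's pattern can already do
    grep error /var/log/syslog | awk '{print $5}'
    # equivalent, one process fewer
    awk '/error/ {print $5}' /var/log/syslog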
Sometimes I need a quick visual way to determine if there is a particular server that is opening too many connections to the database machine.
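A sketch of one way to do that, assuming a MySQL port of 3306 and Linux-style netstat output (both are assumptions; field numbers and address separators vary by platform):

    # count established connections to port 3306, grouped by remote host
    netstat -an | awk '/:3306 .*ESTABLISHED/ {split($5, a, ":"); print a[1]}' \
      | sort | uniq -c | sort -rn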
The output of ls -l may vary depending on the operating system, so "print $8" may have to be changed.
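For instance, on most Linux systems the filename is field 9 of the long listing, while some other systems put it in field 8:

    # GNU ls: the filename is the 9th field; NR > 1 skips the "total" line
    ls -l | awk 'NR > 1 {print $9}'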
As unixmonkey7109 pointed out, the first awk parse replaces three steps.
It's not a big line, and it *may not* work for everybody; I guess it depends on the details of the access_log configuration in your httpd.conf. I use it as a prerotate command for logrotate in the httpd section so that it executes before access_log rotation, every day at midnight.
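A sketch of that setup, with hypothetical paths; whatever sits inside prerotate runs just before the log is rotated:

    /var/log/httpd/access_log {
        daily
        prerotate
            /usr/local/bin/report-from-access-log.sh   # hypothetical command
        endscript
    }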
Remove files with an access time older than a given date.
If you want to remove files with a given modification time, replace %A@ with %T@. Use %C@ for the status change time.
The time is expressed as an epoch timestamp, but it is easy to use any other ordered format.
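A sketch of the idea, assuming GNU find and GNU date; CUTOFF is an epoch timestamp of your choosing:

    # list files whose last access is older than the cutoff
    CUTOFF=$(date -d '2009-01-01' +%s)
    find . -type f -printf '%A@ %p\n' \
      | awk -v c="$CUTOFF" '$1 < c {sub(/^[^ ]+ /, ""); print}'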
Uses the dumb terminal option in gnuplot to plot a graph of frequencies. In this case, we are looking at a frequency analysis of words in all of the .c files.
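A sketch along those lines; the word-splitting rule and the top-25 cut are assumptions:

    # count word frequencies in the .c files and plot them as ASCII art
    cat *.c | tr -cs 'A-Za-z' '\n' | sort | uniq -c | sort -rn | head -25 \
      | awk '{print NR, $1}' \
      | gnuplot -e "set terminal dumb; plot '-' with impulses title 'word frequency'"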
I know this has been beaten to death, but finding video files using MIME types and printing the "hours of video" for each directory is (IMHO) easier to parse than just a single total. Output is in minutes.
Among the other niceties, it omits printing non-video files/folders.
PS: Barely managed to fit it within the 255-character limit :D
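An expanded, multi-line sketch of the same idea (well past 255 characters), assuming file(1) for MIME detection and ffprobe from FFmpeg for durations; neither tool is confirmed as what the original command used:

    # sum video durations per directory, in minutes; non-video files are skipped
    find . -type f -print0 | while IFS= read -r -d '' f; do
      case $(file -b --mime-type "$f") in
        video/*)
          d=$(ffprobe -v error -show_entries format=duration -of csv=p=0 "$f")
          printf '%s\t%s\n' "$(dirname "$f")" "$d" ;;
      esac
    done | awk -F'\t' '{m[$1] += $2/60} END {for (d in m) printf "%s: %.1f min\n", d, m[d]}'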