commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/
You can sign in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for:
This particular combination of flags mimics Try CoffeeScript (on http://coffeescript.org/#try:) as closely as possible. And the `tail` call removes the comment `// Generated by CoffeeScript 1.6.3`.
See `coffee -h` for explanation of `coffee`'s flags.
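The snippet itself isn't reproduced here; based on the description it presumably looks something like this (the input file name is a placeholder):

# compile bare (-b) from stdin (-s) to stdout (-p), then drop the generated-by header comment
coffee -bsp < script.coffee | tail -n +2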
This is a handy way to find out which modules are loaded in the Apache web server.
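The command isn't shown here, but the standard way to list Apache's loaded modules (the binary name varies by distro: apachectl, apache2ctl, httpd) is:

# dump both static and shared modules
apachectl -M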
`tar xfzO` extracts to STDOUT, which is redirected directly to mysql. Really helpful when your hard drive can't fit two copies of an uncompressed database :)
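For example (archive, user and database names are placeholders):

# stream the dump straight out of the archive into mysql, no temporary file needed
tar xfzO dump.tar.gz | mysql -u user -p mydatabase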
Today many hosts block traditional ICMP echo requests for "security" reasons, so nmap's fast ARP scan is more useful for viewing all live IPv4 devices around you. You must be root for ARP scanning.
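A typical invocation (the subnet is a placeholder for your own) would be:

# -sn skips port scanning, -PR forces an ARP ping of the local subnet
sudo nmap -sn -PR 192.168.1.0/24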
Calculates the folder size for each website in an ISPConfig environment. It doesn't add the jail size, just the "public_html".
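A rough sketch of the idea, assuming each site keeps its files under /var/www/&lt;site&gt;/public_html (the actual ISPConfig layout may differ):

# per-site disk usage, sorted by size
du -sh /var/www/*/public_html | sort -h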
# Limited and very hacky wildcard rename
# works for rename *.ext *.other
# and for rename file.* other.*
# but fails for rename file*ext other*other and many more
# Might be good to merge this technique with the mmv command...
# wrapped in a function so the alias below can call it
mv-helper() {
    # pull the arguments of the last command (the alias call) back out of history
    argv="`history 1 | perl -pe 's/^ *[0-9]+ +[^ ]+ //'`"
    # first word: the source pattern, wildcard still intact, so it globs in the loop below
    files="`echo \"$argv\"|sed -e \"s/ .*//\"`"
    # both patterns with the wildcards stripped, e.g. ".ext .other"
    str="`history 1 | perl -pe 's/^ *[0-9]+ +[^ ]+ //' | tr -d \*`"
    set -- $str
    for file in $files
    do
        echo mv $file `echo $file|sed -e "s/$1/$2/"`
        mv $file `echo $file|sed -e "s/$1/$2/"`
    done
}
alias rename='mv-helper #'
Puts information about the Apache build and its compile-time configuration at your fingertips.
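The command isn't shown here; the usual candidates (an assumption) are:

# compile-time settings and build parameters
apachectl -V
# modules compiled into the binary
apachectl -l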
A simple compressed backup of /etc.
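A minimal sketch of such a backup (the target path is a placeholder; run as root so everything is readable):

tar czf /backup/etc-$(date +%F).tar.gz /etc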
Defunct processes (zombies) usually have to be cleaned up by killing their parent processes. This command retrieves such zombies and their immediate parents and kills all of the matching processes.
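The exact command isn't shown here; a sketch of the technique is:

# find the parents of zombie processes and send them TERM
ps -eo stat,ppid | awk '$1 ~ /^Z/ {print $2}' | sort -u | xargs -r kill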
Exactly the same effect with 3 fewer characters ;-) (Removes all files/filesystems of a hard disk. It removes EVERYTHING from your hard disk, so be careful when selecting a device.)
You can press Ctrl + C after a few seconds.
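The shorter command being compared isn't shown here; a typical disk-wiping one-liner of this kind looks like the following (sdX is a placeholder, and this irreversibly destroys data):

dd if=/dev/zero of=/dev/sdX

Even stopped with Ctrl + C after a few seconds, it has already zeroed the start of the disk, wiping the partition table.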
If you have a folder with thousands of files and want many folders with only 100 files per folder, run this.
It will create 0/, 1/, etc. and put 100 files inside each one.
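The command isn't reproduced here; a sketch of the technique, assuming the files sit in the current directory:

i=0
for f in *; do
  d=$((i/100))          # 100 files per bucket: 0, 1, 2, ...
  mkdir -p "$d"
  mv -- "$f" "$d"/
  i=$((i+1))
done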
But find will return true (exit status 0) even if it doesn't find anything ...
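For example:

find . -name '*.nomatch'; echo $?   # prints 0 even though nothing was found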
This option selects the listing of all Internet and x.25 (HP-UX) network files.
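The wording matches lsof's -i option (an assumption, since the command isn't shown):

lsof -i          # all network files
lsof -i :80      # only those on port 80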
If you work in an environment where some ssh hosts change regularly, this might be handy...
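Presumably this concerns stale known_hosts entries; the stock tool for removing one (host name is a placeholder) is:

ssh-keygen -R changing-host.example.com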
Colors the current date in cal's output.
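The command isn't shown; one way to do it, assuming GNU date and grep, is:

# highlight today's day number while still printing every line (the trailing |$)
cal | grep --color=always -E "\b$(date +%-d)\b|$"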
I love CiteULike. It makes keeping a BibTeX library easy and keeps all my papers in one place. However, it can be a pain when I add new entries and have to go through the procedure of downloading the new version in my browser, so I made this to grab it for me! I actually pipe it directly into a couple of sed one-liners to tidy it up a bit too. Extremely useful, especially if you make a custom BibTeX script that does this first. That way you can sort out a fresh BibTeX file for each new paper with no faff.
To use it, just substitute your own CiteULike user name. It doesn't download entries that you've hidden, but I don't use that feature anyway.
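A sketch of the fetch, assuming CiteULike's bibtex-per-user URL scheme (the user name is a placeholder):

curl -o library.bib "http://www.citeulike.org/bibtex/user/yourusername"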
urls.txt should have a fully qualified URL on each line.
To clear the log, empty log.txt.
To just get the HTTP status, change the curl command to:
curl --head $file | head -1 >> log.txt
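The surrounding loop isn't shown here; it presumably looks something like:

for file in $(cat urls.txt); do
  curl --head "$file" | head -1 >> log.txt
done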
Without sed, and with no problems with file names that contain spaces or other critical characters.
Simple Google Chrome profile manager using zenity for profile name input. Place this in a shell script and then use the path to it as the command field in a gnome/kde shortcut. When you start it, you will be prompted for a profile to use; if you leave it blank, you should get the default profile.
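A minimal sketch of such a script (the profile storage path is an assumption):

#!/bin/sh
# ask for a profile name; blank means the default profile
profile=$(zenity --entry --title="Chrome profile" --text="Profile name (leave blank for default):")
if [ -z "$profile" ]; then
  exec google-chrome
else
  exec google-chrome --user-data-dir="$HOME/.config/chrome-profiles/$profile"
fi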