commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/
You can sign in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …):
Subscribe to the feed for:
On a machine behind a firewall, it's possible to pass the proxy server address in as a prefix to wget to avoid having to set it as an environment variable first.
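For example, a minimal sketch of that prefix form (the proxy host, port and URL below are placeholders):
http_proxy=http://proxy.example.com:3128 wget -q "http://www.example.com/archive.tar.gz"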
Check if a site is down with downforeveryoneorjustme.com
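A rough sketch of how that check could be scripted (the exact wording of the result page is an assumption and may have changed since):
wget -qO - "http://www.downforeveryoneorjustme.com/example.com" | grep -o "It's[^<.]*"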
wget -qO - "http://www.google.com/dictionary/json?callback=dict_api.callbacks.id100&q=steering+wheel&sl=en&tl=en&restrict=pr,de&client=te"
This does the actual Google dictionary query; it returns a JSON string encapsulated in a fancy callback tag.
Here we remove the beginning of that tag,
and here the end of it.
There are also some special characters which could cause problems with some JSON parsers, so if you get some errors, this is probably the case (sed is your friend).
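Put together, the cleanup might look roughly like this (the exact trailing arguments inside the callback are an assumption):
wget -qO - "http://www.google.com/dictionary/json?callback=dict_api.callbacks.id100&q=steering+wheel&sl=en&tl=en&restrict=pr,de&client=te" | sed 's/^dict_api\.callbacks\.id100(//; s/,[0-9]*,null)$//'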
I also like to trim the "webDefinitions" part, because it (sometimes) contains misleading information
(but remember to append a "}" at the end, because otherwise the JSON string will be invalid).
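One way to do that trim, assuming "webDefinitions" is the last key and the response sits on a single line, is to append something like this to the pipeline above; the } in the replacement re-closes the object:
sed 's/,"webDefinitions".*$/}/'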
The output also contains links to mp3 files with the pronunciation.
As of now, this is only usable for English. If you choose a language other than English, you will only get webDefinitions (which are crap).
EDIT: command updated to support accented characters!
Works in any of the 58 Google-supported languages (some sound like crap; English is the best IMO).
You get an mp3 file containing your query in spoken language. There is a limit of 100 characters for the "q" parameter, so be careful. The "tl" parameter sets the target language.
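A sketch of such a request against the translate_tts endpoint as it behaved at the time (the endpoint URL and the browser-like User-Agent are assumptions; the UA is there because the service tended to reject wget's default one):
wget -q -U Mozilla -O hello.mp3 "http://translate.google.com/translate_tts?tl=en&q=hello+world"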
The FLAC audio must be encoded at 16000Hz sampling rate (SoX is your friend).
Outputs a short JSON string; the recognized speech is in hypotheses->utterance, and the accuracy is stored in hypotheses->confidence (ranging from 0 to 1).
Google also accepts audio in a special Speex format (audio/x-speex-with-header-byte), which is much smaller than lossless FLAC, but I haven't been able to encode such a sample.
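A rough sketch of the whole round trip, assuming the v1 recognize endpoint that was in use at the time (treat the URL and its parameters as assumptions):
# convert a recording to mono 16000 Hz FLAC, as required
sox input.wav -r 16000 -c 1 query.flac
# post it and read back the JSON with hypotheses->utterance and hypotheses->confidence
wget -qO - --post-file=query.flac \
  --header="Content-Type: audio/x-flac; rate=16000" \
  "https://www.google.com/speech-api/v1/recognize?lang=en-US&client=chromium"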
substitute "example" with desired string;
tl = target language (en, fr, de, hu, ...);
you can leave sl parameter as-is (autodetection works fine)
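A sketch of how that might be called; the translate_a/t endpoint and its client parameter are simply how the unofficial API was commonly invoked back then, so treat them as assumptions:
wget -qO - -U Mozilla "http://translate.google.com/translate_a/t?client=t&sl=auto&tl=fr&text=example"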
Substitute that 724349691704 with the UPC of a CD you have at hand, and (hopefully) this one-liner should return $Artist - $Title, querying discogs.com.
Yes, I know, all that head/tail/grep crap can be improved with a single sed command, feel free to send "patches" :D
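To illustrate the single-sed idea generically (the search URL and the <h3> pattern below are made up purely for illustration, not Discogs' actual markup), one sed can pick out the first match and quit, replacing a grep/head/tail chain:
wget -qO - "http://www.discogs.com/search?q=724349691704" | sed -n '/<h3>/{s/.*<h3>\([^<]*\)<\/h3>.*/\1/p;q}'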
One cannot call the high-quality livestream directly, but this command gives you a session ID and the high-quality stream. #egypt #jan25
Nothing special required, just wget, sed & tr!
This will uncompress the file while it's being downloaded, which makes it much faster.
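For example (the URLs are placeholders), either of these streams the decompression instead of saving the compressed file first:
wget -qO - "http://www.example.com/backup.sql.gz" | gunzip > backup.sql
wget -qO - "http://www.example.com/source.tar.gz" | tar xzf -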
A slightly faster alternative.
This is a convenient way to do it in scripts. You'll also want to rm the ip.php file afterwards.
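A sketch of that pattern, with a placeholder host serving a plain-text ip.php:
wget -q "http://www.example.com/ip.php"   # saves ip.php in the current directory
IP=$(cat ip.php)
rm -f ip.php
echo "External IP: $IP"
With wget -qO - instead, the temporary file (and the rm) can be skipped entirely.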
Download the Gsplitter extension and execute it with Chrome!
Or download it here:
Useful for ripping wallpaper from 4chan.org/wg
Seeing that we get back plain text anyway, we don't need lynx. Also, the sed part removes the credit line.
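A rough sketch of the scrape-and-fetch pattern (the board URL comes from the description, but the image-link pattern is a guess and 4chan's image host has changed over the years):
wget -qO - "http://boards.4chan.org/wg/" | grep -Eo 'http://[^"]+\.(jpg|png)' | sort -u | xargs -n 1 wget -nc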
Download Google video with wget. Or, if you wish, pass the video URL to e.g. mplayer to view it as a stream.
1. VURL: replace with the video URL, e.g. http://video.google.com/videoplay?docid=12312312312312313#
2. OUTPUT_FILE: optionally change to a more suitable name for the downloaded file, e.g. foo.flv
# Improvements greatly appreciated. (close to my first linux command after ls -A :) )
Breakdown, pipe by pipe:
1. wget: fetch the HTML from Google and pass it to stdout
2. grep: grab the video URL up to thumbnailUrl (which is not needed)
3. grep: Strip off everything before http://
4. sed: urldecode
5. echo: expand the hex escapes
6. sed: strip off the trailing thumbnailUrl part
7. wget: download. Here one could instead use e.g. mplayer or another player...
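Putting the breakdown back together, a rough reconstruction might look like this (VURL and OUTPUT_FILE are the placeholders from above; the videoUrl/thumbnailUrl markers come from the breakdown, and the exact patterns are assumptions about Google's old page markup):
# steps 1-4: fetch the page, cut out the chunk between videoUrl and thumbnailUrl, turn %XX into \xXX
URL=$(wget -qO - "VURL" | grep -o 'videoUrl.*thumbnailUrl' | grep -o 'http.*' | sed 's/%\([0-9A-Fa-f][0-9A-Fa-f]\)/\\x\1/g')
# steps 5-6: expand the hex escapes and strip the trailing thumbnailUrl marker
URL=$(echo -e "$URL" | sed 's/thumbnailUrl.*$//')
# step 7: download (or hand $URL to mplayer instead)
wget -O "OUTPUT_FILE" "$URL"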