commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs. All commands can be commented on, discussed and voted up or down.
If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/
You can sign in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for:
Use -q as the first argument (as described in `man curl`) to make curl ignore your curlrc, ensuring the output is always the same regardless of the user's configuration.
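For instance (the URL here is just a placeholder):

curl -q -s http://example.com/

Note that -q only disables curlrc processing when it is the very first argument; anywhere later on the command line it is too late.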
This tells you whether the LHC has destroyed the world. Run it in a loop to monitor the state of Earth. Might not work reliably if the world has actually been destroyed.
The only prerequisite is jq (and curl, obviously).
The other version used grep, but jq is far better suited to parsing JSON than grep is.
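The actual endpoint and JSON layout aren't shown here, so as a rough monitoring-loop sketch with a placeholder URL and field name:

while true; do
  # jq understands the JSON structure; grep would only pattern-match the raw text
  curl -s "$LHC_STATUS_URL" | jq -r '.status'
  sleep 60
done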
Bash process substitution that curls the website 'hashbang.sh' and executes the shell script embedded in the page.
This is obviously not the most secure way to run something like this, and we will scold you if you try.
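The one-liner in question is presumably along these lines:

bash <(curl -s https://hashbang.sh)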
The smarter way would be:
Download locally over SSL
> curl https://hashbang.sh > hashbang.sh
Verify integrity with GPG (if available)
> gpg --recv-keys 0xD2C4C74D8FAA96F5
> gpg --verify hashbang.sh
Inspect source code
> less hashbang.sh
Then make it executable and run it
> chmod +x hashbang.sh
> ./hashbang.sh
Replace localhost:9200 with your server location and port. This is Elasticsearch's default setup for local instances.
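The underlying request isn't shown here, but a typical health check against that default local endpoint looks like:

curl -s 'http://localhost:9200/_cluster/health?pretty'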
Sure, it's dirty, but it's quick, it only displays the excuse, and it works.
Similar to the following:
curl -I <URL>
but curl -I performs a HEAD request, which can yield different results.
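To see the headers of the actual GET response instead, curl can dump them while discarding the body (-D - writes received headers to stdout, -o /dev/null drops the body; the URL is a placeholder):

curl -s -D - -o /dev/null http://example.com/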
Send a text message to a Kodi (XBMC) device. Uses curl to post a JSON request to the Kodi JSON-RPC API.
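A minimal sketch, assuming Kodi's web server listens on kodi-host:8080 (host, port and any credentials are yours to fill in); GUI.ShowNotification is the JSON-RPC method that pops up an on-screen message:

curl -s -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"GUI.ShowNotification","params":{"title":"Hello","message":"Sent from curl"},"id":1}' \
  http://kodi-host:8080/jsonrpc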
This encodes it in Ogg format.
Does on-the-fly encoding of the incoming stream.
Great for radio streams, as they're often in FLV format.
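The exact command isn't shown here; one way to do this kind of on-the-fly transcoding is to pipe curl into ffmpeg (the stream URL and output filename are placeholders):

# -i - reads the stream from stdin, -vn drops any video track from the FLV container
curl -s "$STREAM_URL" | ffmpeg -i - -vn -c:a libvorbis -f ogg radio.ogg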
Set BLOCK to "title" or any other HTML / RSS / XML tag and curl the URL to get everything in between, e.g. <title>some text</title> yields "some text".
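A sketch of the idea using sed (handles single-line tags only; the URL is a placeholder):

BLOCK=title
curl -s http://example.com/ | sed -n "s/.*<$BLOCK>\(.*\)<\/$BLOCK>.*/\1/p"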
Sets the @ A record for your domain hosted by Namecheap to your current internet-facing IP address, logs success or failure with syslog, and logs the data returned to /root/dnsupdate.
Change the XXX's as appropriate.
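Pieced together from the description above (the domain and password are the XXX's; dynamicdns.park-your-domain.com is Namecheap's dynamic-DNS endpoint):

# Note: curl's exit status reflects transport success, not whether Namecheap accepted the update
if curl -s "https://dynamicdns.park-your-domain.com/update?host=@&domain=XXXXXX.com&password=XXXXXXXX" >> /root/dnsupdate; then
  logger -t dnsupdate "Namecheap @ A record update succeeded"
else
  logger -t dnsupdate "Namecheap @ A record update failed"
fi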
Fetches the world population JSON data from the US Census and parses it using jshon.
They are using JSON now.
First grep all the image hrefs, then sed out the URL part, then wget the results.
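As a rough reconstruction of that pipeline (the URL and the .jpg extension are assumptions):

curl -s http://example.com/ \
  | grep -o 'href="[^"]*\.jpg"' \
  | sed 's/^href="//;s/"$//' \
  | xargs wget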
Deprecated due to a change in the site design: see alternatives.
Use the command line to log into Dropbox. You have to replace firstname.lastname@example.org with your Dropbox email (note the URL-encoding of "@" as %40). Also replace my_passwd with your Dropbox password. (Note: special characters in your password, such as #, must be URL-encoded as well.) You will get a cookie (stored in file "cookie") that you can use for subsequent curl operations to Dropbox, for example curl -b cookie https://www.dropbox.com/home. Debug note: if you want to see what data curl posts, use curl's --trace-ascii flag.
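A sketch of the shape of the request; the form field names here (login_email, login_password) are guesses rather than confirmed, and the login flow may well have changed since:

curl -s -c cookie \
  -d 'login_email=firstname.lastname%40example.org' \
  -d 'login_password=my_passwd' \
  https://www.dropbox.com/login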
Plain-text IP output, independent of layout changes.
Gets your IP address and has a shorter URL.
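The service itself isn't named here; icanhazip.com is one well-known endpoint that returns nothing but the address:

curl -s icanhazip.com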
Replace $USER with the username of the Reddit user in question. To get comment karma instead, run:
curl -s http://www.reddit.com/user/$USER/about.json | tr "," "\n" | grep "comment_karma" | tr ": " "\n" | grep -E "[0-9]+" | sed s/"^"/"Comment Karma: "/
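Given the jq-over-grep point made earlier, the same thing is cleaner with jq (assuming the usual about.json layout, with the karma under .data):

curl -s http://www.reddit.com/user/$USER/about.json | jq -r '"Comment Karma: \(.data.comment_karma)"'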
This command queries the Delicious API, runs the XML through xml2, grabs the URLs, cuts out the first two columns, passes the result through uniq to remove any duplicates, and then hands the links to linkchecker, which checks them. The links go into the blacklist in ~/.linkchecker/blacklist. Please see the manual pages for further info, peeps. It took me a few days to figure this one out; I hope you enjoy it. Also, don't hit the API more than once every few seconds or you can get banned by Delicious; see their site for info. ~updated to be non-recursive