What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.

Get involved!

You can sign-in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that receive a minimum of 3 or 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).

May 19, 2015 - A Look At The New Commandlinefu
I've put together a short writeup on what kind of newness you can expect from the next iteration of clfu. Check it out here.
March 2, 2015 - New Management
I'm Jon, and I'll be maintaining and improving clfu. Thanks to David for building such a great resource!

Terminal - Commands using whois - 12 results
ASN=32934; for s in $(whois -H -h riswhois.ripe.net -- -F -K -i $ASN | grep -v "^$" | grep -v "^%" | awk '{ print $2 }' ); do echo " blocking $s"; sudo iptables -A INPUT -s $s -j REJECT &> /dev/null || sudo ip6tables -A INPUT -s $s -j REJECT; done
for i in `wget -q -O - url|grep '<a rel="nofollow"'|grep http|sed 's|.*<a rel="nofollow" class="[^"]\+" href="[^"]*https\?://\([^/]\+\)[^"]*">[^<]\+</a>.*|\1|'`;do if test -n "$(whois $i|grep -i godaddy)";then echo $i uses GoDaddy;fi;sleep 20;done
domain=google.com; for ns in $(whois $domain | awk -F: '/Name Server/{print $2}'); do echo ">>> Nameservers for $domain from $ns <<<"; dig @$ns $domain ns +short; echo; done;
2011-05-08 04:46:34
User: laebshade
Functions: awk dig echo whois

Change the $domain variable to whichever domain you wish to query.

Works with the majority of whois output; for registries where it doesn't, you may have to compromise:

domain=google.com; for a in $(whois $domain | grep "Domain servers in listed order:" --after 3 | grep -v "Domain servers in listed order:"); do echo ">>> Nameservers for $domain from $a <<<"; dig @$a $domain ns +short; echo; done

Note that this doesn't work as well as the first one; if the domain has more than 3 nameservers, it won't hit them all.

As the summary states, this can be useful for making sure the whois nameservers for a domain match the nameserver records (NS records) from the nameservers themselves.
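
As a rough sketch of that comparison (google.com is just an example, and the "Name Server" field name varies by registry), you could diff the whois list against the NS records DNS itself returns:

domain=google.com
# nameservers according to whois (field name varies by registry)
whois_ns=$(whois $domain | awk -F: '/Name Server/{print tolower($2)}' | tr -d ' \r' | sort -u)
# nameservers according to DNS
dns_ns=$(dig +short NS $domain | sed 's/\.$//' | sort -u)
diff <(echo "$whois_ns") <(echo "$dns_ns") && echo "whois and DNS agree" || echo "mismatch"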

x=; whois $x > $x.txt
2011-01-17 03:33:49
User: sxiii
Functions: whois

This can be used in scripts, for example to find out the origin of a target IP.
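
For example, a minimal sketch of using it in a script (ips.txt is a hypothetical file with one IP per line; the origin/netname field names are RIPE-style and differ between registries):

while read -r ip; do
  whois "$ip" > "$ip.txt"
  # pull the announcing AS and the netname out of the saved output
  grep -Ei '^(origin|netname):' "$ip.txt"
done < ips.txt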

whois -H $(cat ./list_of_domains) | awk 'BEGIN{RS=""}/Registrant/,/Registration Service Provider:/ {print} END{print "----------------\n"}'
2011-01-11 12:55:34
User: djsmiley2k
Functions: awk cat whois

Nice, neat output showing contact information for as many domains as you wish to feed it. I used a list of domains, one per line as supplied by our registrar, since we needed to check they were all up to date and back them up while updating them.
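
A variant of the same idea, as a sketch, looping per domain so each contact block is labelled (./list_of_domains and the Registrant section markers are the same ones assumed above):

while read -r d; do
  echo "==== $d ===="
  whois -H "$d" | awk '/Registrant/,/Registration Service Provider:/'
done < ./list_of_domains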

while read line; do pais=$(whois "$line" | grep -E '[Cc]ountry'); echo "IP=$line Pais=$pais"; done < listaip
cat domainlist.txt | while read line; do echo -ne $line; whois $line | grep Expiration ; done | sed 's:Expiration Date::'
2010-05-02 06:49:09
User: netsaint
Functions: cat echo grep read sed whois

Create a text file called domainlist.txt with one domain per line, then run the command above. All registries are a little different, so play around with the command. It should produce a list of domains and their expiration dates. I am responsible for my company's domains and have a dozen or so myself, so this is a quick way to check whether I overlooked any.
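
The same check as a sketch without the extra cat (domainlist.txt as described above; the case-insensitive 'expir' match is a guess at covering the different registry field names):

while read -r domain; do
  printf '%s\t' "$domain"
  whois "$domain" | grep -i -m1 'expir' || echo 'no expiration field found'
done < domainlist.txt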

for domain in `cat list_of_domains.txt`; do echo $domain; whois $domain >> output.txt; done
2010-02-15 17:13:45
User: pathcl
Functions: echo whois

Runs whois for each domain in a plain text file and appends the results to output.txt.

whois domainnametocheck.com | grep match
2009-08-11 13:33:25
User: Timothee
Functions: grep whois
Tags: whois

Returns nothing if the domain is registered, and 'No match for domain.com' if it is available.
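
To check several candidate names in one go, a sketch along the same lines (the domains are placeholders, and the exact 'No match' wording varies between registries):

for d in example.com example.net example.org; do
  if whois "$d" | grep -qi 'no match'; then echo "$d looks available"; else echo "$d is taken"; fi
  sleep 2   # be gentle with the whois servers
done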

net=DTAG-DIAL ; for (( i=1; i<30; i++ )); do whois -h whois.ripe.net $net$i | grep '^inetnum:' | sed "s;^.*:;$net$i;" ; done
2009-08-01 05:28:19
User: drizzt
Functions: grep sed whois

Useful if, for instance, you want to block or allow all connections from a provider that uses successive netnames for its IP blocks. In this example I used the German provider Deutsche Telekom, whose dial-in pools use the netname DTAG-DIAL followed by a number.

There are - as always ;) - different ways to do this. If you have seq available you can use

net=DTAG-DIAL ; for i in `seq 1 30`; do whois -h whois.ripe.net $net$i | grep '^inetnum:' | sed "s;^.*:;$net$i;" ; done

or without seq you can use bash brace expansion

net=DTAG-DIAL ; for i in {1..30}; do whois -h whois.ripe.net $net$i | grep '^inetnum:' | sed "s;^.*:;$net$i;" ; done

or, if you prefer while over for, use something like

net=DTAG-DIAL ; i=1 ; while true ; do whois -h whois.ripe.net $net$i | grep '^inetnum:' | sed "s;^.*:;$net$i;" ; test $i = 30 && break ; i=$(expr $i + 1) ; done

and so on.
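
If the goal is actually blocking those ranges, the inetnum lines can be fed straight into iptables. A sketch, assuming RIPE prints inetnum as a 'first - last' address pair and that the iprange match module is available:

net=DTAG-DIAL ; for i in {1..30}; do whois -h whois.ripe.net $net$i | awk '/^inetnum:/ {print $2 "-" $4}'; done | while read -r range; do sudo iptables -A INPUT -m iprange --src-range "$range" -j REJECT; done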

whois cmd.fu;whois cmdfu.com|grep -i cmdfu
2009-02-19 08:57:50
User: axelabs
Functions: whois

It would be nice if commandlinefu.com had a better domain name. Will they pick one of the above? We'll see.