What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.

If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign-in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.




Terminal - Commands using tail - 225 results
curl http://www.discogs.com/search?q=724349691704 2> /dev/null | grep \/release\/ | head -2 | tail -1 | sed -e 's/^<div>.*>\(.*\)<\/a><\/div>/\1/'
wget http://www.discogs.com/search?q=724349691704 -O foobar &> /dev/null ; grep \/release\/ foobar | head -2 | tail -1 | sed -e 's/^<div>.*>\(.*\)<\/a><\/div>/\1/' ; rm foobar
2011-01-30 23:34:54
User: TetsuyO
Functions: grep head rm sed tail wget

Substitute 724349691704 with the UPC of a CD you have at hand, and (hopefully) this one-liner should return the $Artist - $Title, querying discogs.com.

Yes, I know, all that head/tail/grep crap can be improved with a single sed command, feel free to send "patches" :D
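Until a proper patch arrives, here is the extraction sed in isolation, run against a made-up line in the shape the pipeline expects (the sample markup is hypothetical, not real discogs output):

```shell
# Hypothetical sample of the markup the pipeline matches on
line='<div>junk<a href="/release/123">Some Artist - Some Title</a></div>'
echo "$line" | sed -e 's/^<div>.*>\(.*\)<\/a><\/div>/\1/'
# prints: Some Artist - Some Title
```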


curl -s http://www.last.fm/user/$LASTFMUSER | grep -A 1 subjectCell | sed -e 's#<[^>]*>##g' | head -n2 | tail -n1 | sed 's/^[[:space:]]*//g'
/usr/bin/lynx -dump http://www.netins.net/dialup/tools/my_ip.shtml | grep -A2 "Your current IP Address is:" | tail -n1 | tr -d ' '|sed '/^$/d'| sed 's/^ *//g'
ping -q -c 1 www.google.com|tail -1|cut -d/ -f5
alias -g R=' &; jobs | tail -1 | read A0 A1 A2 cmd; echo "running $cmd"; fg "$cmd"; zenity --info --text "$cmd done"; unset A0 A1 A2 cmd'
2010-12-13 17:44:36
User: pipeliner
Functions: alias echo fg jobs read tail unset

make, find and a lot of other programs can take a lot of time, or none at all. Suppose you write a long, complicated command and wonder whether it will be done in 3 seconds or in 20 minutes. Just add the "R" suffix (without quotes) and you can do other things: zsh will inform you when the results are ready.

You can replace zenity with any other X Window dialog program.

du . -xak | sort -n | tail -10
2010-12-03 19:28:55
User: georgesdev
Functions: du sort tail

du -x option to not cross mount points (you usually want to run this command to find what to delete within that partition)

-a option to also list individual files, not just directories

-k to display sizes in kilobytes

sort -n to sort in numerical order, biggest entries last

tail -10 to display only the biggest 10
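The sort-and-tail idiom doing the heavy lifting here is easy to try on canned data; the sizes and paths below are made up:

```shell
# Made-up "size<TAB>path" lines standing in for du output
printf '12\t./a\n3\t./b\n250\t./c\n47\t./d\n' | sort -n | tail -2
# the two biggest entries come out last, biggest at the bottom
```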

ls -1t --group-directories-first /path/to/dir/ | tail -n 1
2010-12-02 12:25:16
User: fpunktk
Functions: ls tail

reverse the sorting of ls to get the newest file:

ls -1tr --group-directories-first /path/to/dir/ | tail -n 1


If there are no files in the directory, you will get a directory name or nothing.
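The same result without the reversal: keep ls's default newest-first order and take the first line instead. A self-contained sketch against a throwaway directory:

```shell
# Create two files with distinct mtimes, then pick the newest
dir=$(mktemp -d)
touch -t 202001010000 "$dir/old"
touch -t 202212312359 "$dir/new"
ls -1t "$dir" | head -n 1   # newest first, so this prints: new
rm -r "$dir"
```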

tail -f /var/www/logs/domain.com.log | grep "POST /scripts/blog-post.php" | grep -v 192.168. | awk '{print $1}' | xargs -I{} iptables -I DDOS -s {} -j DROP
2010-11-30 06:22:18
User: tehusr
Functions: awk grep iptables tail xargs

Takes IPs from the web logs and pipes them to iptables; use grep -v to whitelist addresses. Use this if a particular file is being requested by many different addresses.

Sure, the request is already down the pipe and your bandwidth may suffer, but that isn't the concern. This one-liner saved me from all that traffic hitting the server a second time; reconfigure your system (e.g. rename the script to blog-post-1.php or similar) so legitimate users can continue working while the botnet kills itself.
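Before pointing this at a live firewall, a dry run that only prints the iptables commands is a safer first step; the access-log line below is hypothetical:

```shell
# Hypothetical access-log line; print the iptables command instead of running it
echo '203.0.113.9 - - [30/Nov/2010] "POST /scripts/blog-post.php HTTP/1.1" 200' \
  | grep "POST /scripts/blog-post.php" \
  | grep -v 192.168. \
  | awk '{print "iptables -I DDOS -s " $1 " -j DROP"}'
# prints: iptables -I DDOS -s 203.0.113.9 -j DROP
```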

tail -f file |xargs -IX printf "$(date -u)\t%s\n" X
tail -f file | awk '{now=strftime("%F %T%z\t");sub(/^/, now);print}'
tail -f file | while read line; do printf "$(date -u '+%F %T%z')\t$line\n"; done
2010-11-24 05:50:12
User: derekschrock
Functions: file printf read tail
Tags: tail date

These should be a bit more portable, since echo -e/-n and date's -Ins are not.
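On canned input the portable while-read variant behaves the same as on a live tail; the exact timestamp varies with the clock, so this sketch only fixes the format:

```shell
# Prefix each line with a UTC timestamp (portable: no echo -e, no date -Ins)
printf 'first\nsecond\n' | while read line; do
    printf '%s\t%s\n' "$(date -u '+%F %T%z')" "$line"
done
```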

tail -f file | while read line; do echo -n $(date -u -Ins); echo -e "\t$line"; done
2010-11-19 10:01:57
User: hfs
Functions: date echo file read tail
Tags: tail date

This is useful when watching a log file that does not contain timestamps itself.

If the file already has content when starting the command, the first lines will have the "wrong" timestamp when the command was started and not when the lines were originally written.

history | awk '{print $2,$3}' | sed s/sudo// | awk '{print $1}' | awk 'BEGIN {FS="|"}{print $1}' | sort | uniq -c | sort -n | tail | sort -nr
2010-11-17 12:15:04
User: b_t
Functions: awk sed sort tail uniq

Your version works fine, except for someone who's interested in the commands 'sudo' was prefixed to:

i.e. in your command, every use of sudo just counts as a use of sudo.

A slight variation in my command peeks into what commands sudo was used for and counts those commands (ignoring 'sudo' itself).

history | awk '{print $2}' | awk 'BEGIN {FS="|"}{print $1}' | sort | uniq -c | sort -n | tail | sort -nr
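The counting core shared by both versions (sort | uniq -c | sort -n) can be exercised on a made-up history:

```shell
# Made-up history of first words; count occurrences and rank descending
printf 'ls\ngit\nls\ncd\nls\ngit\n' \
  | sort | uniq -c | sort -nr \
  | awk '{print $2, $1}'
# prints:
# ls 3
# git 2
# cd 1
```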
grep 'model\|MHz' /proc/cpuinfo |tail -n 2
find /home/ -type f -exec du {} \; 2>/dev/null | sort -n | tail -n 10 | xargs -n 1 du -h 2>/dev/null
2010-11-10 07:24:17
User: mxc
Functions: du find sort tail xargs
Tags: disk usage

This combines the above two commands into one. Note that you can leave off the last two stages and simply run the command as

"find /home/ -type f -exec du {} \; 2>/dev/null | sort -n | tail -n 10"

The last two commands above just convert the output into human readable format.
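A self-contained sketch of the same find/du/sort/tail chain against a throwaway directory (file names and sizes are made up):

```shell
# Two files of known size; the chain should single out the big one
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/big"   bs=1024 count=64 2>/dev/null
dd if=/dev/zero of="$dir/small" bs=1024 count=4  2>/dev/null
find "$dir" -type f -exec du -k {} \; | sort -n | tail -n 1 | cut -f 2
# prints the path of "big"
rm -r "$dir"
```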

find / -type f -size +100M -exec du {} \; | sort -n | tail -10 | cut -f 2
find / -type f 2>/dev/null | xargs du 2>/dev/null | sort -n | tail -n 10 | cut -f 2 | xargs -n 1 du -h
2010-11-09 13:45:11
User: mxc
Functions: cut du find sort tail xargs
Tags: disk usage

Often you need to find the files that are taking up the most disk space in order to free up space asap. This command can be run on the entire filesystem as root, or on a home directory, to find the largest files.

IP=$(nslookup `hostname` | grep -i address | awk -F" " '{print $2}' | awk -F# '{print $1}' | tail -n 1 ); R=3$((RANDOM%6 + 1)); PS1="\n\[\033[1;37m\]\u@\[\033[1;$R""m\]\h^$IP:\[\033[1;37m\]\w\$\[\033[0m\] "
2010-10-20 07:29:14
User: rubo77
Functions: awk grep nslookup tail

This adds a random color and the external IP to your prompt.

Useful if you are using multiple machines with the same hostname.
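The color trick in isolation: $RANDOM (bash/zsh specific) picks a digit from 1 to 6, which is glued onto "3" to form an ANSI foreground color code between 31 and 36:

```shell
# Pick a random ANSI foreground color code between 31 and 36
R=3$((RANDOM % 6 + 1))
printf '\033[1;%sm%s\033[0m\n' "$R" "colored text"
```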

s=`head -$i fileName | tail -1`
tail -f /var/log/messages | while read line; do accu="$line"; while read -t 1 more; do accu=`echo -e "$accu\n$more"`; done; notify-send "Syslog" "$accu"; done
2010-10-10 16:28:08
User: hfs
Functions: read tail

The given example collects output of the tail command: Whenever a line is emitted, further lines are collected, until no more output comes for one second. This group of lines is then sent as notification to the user.

You can test the example with

logger "First group"; sleep 1; logger "Second"; logger "group"
curl --silent http://www.dudalibre.com/gnulinuxcounter?lang=en | grep users | head -2 | tail -1 | sed 's/.*<strong>//g' | sed 's/<\/strong>.*//g'
tail -f `ls -alst /var/log/maillog* | awk '{print $10} NR>0{exit};0'` | grep "criteria"
endnl () { [[ -f "$1" && -s "$1" && -z $(tail -c 1 "$1") ]]; }
2010-08-25 12:06:10
User: quintic
Functions: tail
Tags: tail

tail -c 1 "$1" returns the last byte in the file.

Command substitution deletes any trailing newlines, so if the file ended in a newline $(tail -c 1 "$1") is now empty, and the -z test succeeds.

However, the substitution will also be empty for an empty file, so we add -s "$1" to check that the file has a size greater than zero.

Finally, -f "$1" checks that the file is a regular file -- not a directory or a socket, etc.