What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.

If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.

Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions, …).

May 19, 2015 - A Look At The New Commandlinefu
I've put together a short writeup on what kind of newness you can expect from the next iteration of clfu. Check it out here.
March 2, 2015 - New Management
I'm Jon, I'll be maintaining and improving clfu. Thanks to David for building such a great resource!

Terminal - Commands tagged awk - 302 results
fbemailscraper YourFBEmail Password
2010-01-31 00:44:35
User: dabom

(Apparently it is too long, so I put it in the sample output. I hope that is OK.)

Run the long command from the sample output (or put it in your .bashrc), then run:

fbemailscraper YourFBEmail Password

Voila! Your contacts' emails will appear.

Facebook seems to have gotten rid of the picture encoding of emails and replaced it with a text-based version, making it easy to scrape!

Needs curl to run, and it was made pretty quickly, so there might be bugs.

renice +5 -p $(pidof <process name>)
file='path to file'; tar -cf - "$file" | pv -s $(du -sb "$file" | awk '{print $1}') | gzip -c | ssh -c blowfish user@host tar -zxf - -C /opt/games
2010-01-19 16:02:45
User: starchox
Functions: awk du file gzip ssh tar

You set the file/dirname to transfer in the variable and the destination path at the end point. This command uses pipe viewer (pv) to show progress, compresses the output on the fly, and switches the ssh cipher to blowfish. It supports dirnames with spaces.

Merges ideas and comments from http://www.commandlinefu.com/commands/view/4379/copy-working-directory-and-compress-it-on-the-fly-while-showing-progress and http://www.commandlinefu.com/commands/view/3177/move-a-lot-of-files-over-ssh
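A usage sketch (the file name, user and host here are made up; on newer OpenSSH releases the blowfish cipher may be unavailable, in which case drop the -c option as shown):

file='my saves'; tar -cf - "$file" | pv -s $(du -sb "$file" | awk '{print $1}') | gzip -c | ssh user@gamebox tar -zxf - -C /opt/games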

lsof -p $(pidof firefox) | awk '/.mozilla/ { s = int($7/(2^20)); if(s>0) print (s)" MB -- "$9 | "sort -rn" }'
2010-01-13 22:45:53
User: tzk
Functions: awk pidof

Just refining the last proposal for this check, showing awk's power to do more complex math (2^20 instead of /1024/1024). We don't need to declare a variable before running lsof, because $(command) substitutes its output. Also, awk can filter by regexp instead of calling grep. I replaced the messy 0.0000xxxx output with a more readable form, dropping fractional parts and files smaller than 1 MB.
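The same awk-only filtering and unit conversion works for any process; a minimal sketch listing open files larger than 1 MB for a placeholder $PID:

lsof -p $PID | awk 'NR>1 && $7 > 2^20 { print int($7/2^20)" MB -- "$9 | "sort -rn" }'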

exipick -zi | xargs exim -Mrm
for i in `mailq | awk '$6 ~ /^frozen$/ {print $3}'`; do exim -Mrm $i; done
2010-01-13 21:28:45
User: rjamestaylor
Functions: awk
Tags: bash awk exim

Although Exim will purge frozen (undeliverable) messages over time, the command "exim -Mrm #id#" where #id# is a particular message ID will purge a message immediately. Being lazy, I don't want to type the command for each frozen message, so I wrote the one-liner to do it for me.
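The same purge without the shell loop, piping the frozen message IDs straight to xargs (a sketch that relies on the same mailq column layout as above):

mailq | awk '$6 ~ /^frozen$/ {print $3}' | xargs exim -Mrm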

class=ExampleClass; path=src; for constant in `grep ' const ' $class.php | awk '{print $2;}'`; do grep -r "$class::$constant" $path; done
curl -s search.twitter.com | awk -F'</?[^>]+>' '/\/intra\/trend\//{print $2}'
tar -cf - . | pv -s $(du -sb . | awk '{print $1}') | gzip > out.tgz
2009-12-18 17:09:08
User: opertinicy
Functions: awk du gzip tar

What happens here is we tell tar to create ("-c") an archive of all files in the current dir "." (recursively) and output the data to stdout ("-f -"). Next we give pv the total size ("-s") of all files in the current dir: du -sb . | awk '{print $1}' returns the number of bytes in the current dir, and that gets fed as the "-s" parameter to pv. Finally we gzip the whole content and output the result to the out.tgz file. This way pv knows how much data is still left to be processed and can show us, for example, that it will take another 4 mins 49 secs to finish.

Credit: Peteris Krumins http://www.catonmat.net/blog/unix-utilities-pipe-viewer/
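pv can show the same kind of progress when unpacking the result; a minimal sketch, assuming the out.tgz created above:

pv out.tgz | tar -xzf -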

curl -sL 'dynamic.xkcd.com/comic/random/' | awk -F\" '/^<img/{printf("<?xml version=\"1.0\"?>\n<xkcd>\n<item>\n <title>%s</title>\n <comment>%s</comment>\n <image>%s</image>\n</item>\n</xkcd>\n", $6, $4, $2)}'
wget -q http://dynamic.xkcd.com/comic/random/ -O-| sed -n '/<img src="http:\/\/imgs.xkcd.com\/comics/{s/.*\(http:.*\)" t.*/\1/;p}' | awk '{system ("wget -q " $1 " -O- | display -title $(basename " $1") -write /tmp/$(basename " $1")");}'
2009-12-09 13:41:25
User: laugg
Functions: awk sed wget
Tags: sed awk wget comic

Only the ImageMagick package needs to be installed (it provides the display command).

Displays an xkcd comic with its title and saves it in the /tmp directory.

If you prefer to view the newest xkcd, use this command:

wget -q http://xkcd.com/ -O-| sed -n '/<img src="http:\/\/imgs.xkcd.com\/comics/{s/.*\(http:.*\)" t.*/\1/;p}' | awk '{system ("wget -q " $1 " -O- | display -title $(basename " $1") -write /tmp/$(basename " $1")");}'
curl -s 'xkcd.com' | awk -F\" '/^<img/{printf("<?xml version=\"1.0\"?>\n<xkcd>\n<item>\n <title>%s</title>\n <comment>%s</comment>\n <image>%s</image>\n</item>\n</xkcd>\n", $6, $4, $2)}'
2009-12-03 19:49:37
User: putnamhill
Functions: awk

I wasn't sure how to display the image, so I thought I'd try xml for a different twist.

awk '/q=/{print $11}' /var/log/httpd/access_log.4 | awk -F 'q=' '{print $2}' | sed 's/+/ /g;s/%22/"/g;s/q=//' | cut -d "&" -f 1
cat /var/log/httpd/access_log | grep q= | awk '{print $11}' | awk -F 'q=' '{print $2}' | sed 's/+/ /g;s/%22/"/g;s/q=//' | cut -d "&" -f 1 | mail youremail@isp.com -s "[your-site] search strings for `date`"
2009-11-22 03:03:06
User: isma
Functions: awk cat grep sed strings

It's not a big line, and it *may not* work for everybody; it depends on the details of the access_log configuration in your httpd.conf. I use it as a prerotate command for logrotate in the httpd section, so it executes before access_log rotation, every day at midnight.
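A sketch of how that might sit in the logrotate configuration (the log path and schedule are assumptions; prerotate/endscript are standard logrotate directives):

/var/log/httpd/access_log {
    daily
    prerotate
        cat /var/log/httpd/access_log | grep q= | awk '{print $11}' | awk -F 'q=' '{print $2}' | sed 's/+/ /g;s/%22/"/g;s/q=//' | cut -d "&" -f 1 | mail youremail@isp.com -s "[your-site] search strings for `date`"
    endscript
}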

albumart(){ local y="$@";awk '/View larger image/{gsub(/^.*largeImagePopup\(.|., .*$/,"");print;exit}' <(curl -s 'http://www.albumart.org/index.php?srchkey='${y// /+}'&itempage=1&newsearch=1&searchindex=Music');}
2009-11-15 19:54:16
User: eightmillion

This bash function uses albumart.org to find the cover for an album. It returns an amazon.com URL to the image.

Usage: albumart [artist] [album]

These arguments can be reversed, and if the album name is distinct enough, it may be possible to omit the artist.

The command can be extended with wget to automatically download the matching image like this:

albumart(){ local x y="$@";x=$(awk '/View larger image/{gsub(/^.*largeImagePopup\(.|., .*$/,"");print;exit}' <(curl -s 'http://www.albumart.org/index.php?srchkey='${y// /+}'&itempage=1&newsearch=1&searchindex=Music'));[ -z "$x" ]&&echo "Not found."||wget "$x" -O "${y}.${x##*.}";}
ompload() { curl -# -F file1=@"$1" http://ompldr.org/upload|awk '/Info:|File:|Thumbnail:|BBCode:/{gsub(/<[^<]*?\/?>/,"");$1=$1;print}';}
2009-11-07 20:56:52
User: eightmillion
Functions: awk

This function uploads images to http://omploader.org and then prints out the links to the file.

Some coloring can also be added to the command with:

ompload() { curl -F file1=@"$1" http://omploader.org/upload|awk '/Info:|File:|Thumbnail:|BBCode:/{gsub(/<[^<]*?\/?>/,"");$1=$1;sub(/^/,"\033[0;34m");sub(/:/,"\033[0m:");print}';}
ifconfig | awk '/HW/ {print $5}'
2009-11-05 18:00:50
User: Cont3mpo
Functions: awk ifconfig

Simple MAC address, thanks to ifconfig.
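On systems that ship iproute2 instead of net-tools, a similar one-liner works (a sketch, assuming the interface is named eth0):

ip link show eth0 | awk '/ether/ {print $2}'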

svn ci `svn stat |awk '/^A/{printf $2" "}'`
ifconfig eth1 | grep inet\ addr | awk '{print $2}' | cut -d: -f2 | sed s/^/eth1:\ /g
2009-11-03 19:26:40
User: TuxOtaku
Functions: awk cut grep ifconfig sed

Sometimes, you don't really care about all the other information that ifconfig spits at you (however useful it may otherwise be). You just want an IP. This strips out all the crap and gives you exactly what you want.
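An iproute2 equivalent for systems without ifconfig (a sketch; the interface name is an assumption):

ip -4 addr show eth1 | awk '/inet /{sub(/\/.*/, "", $2); print "eth1: "$2}'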

awk 'BEGIN {for(i=1;i<=100;i++)sum+=i}; END {print sum}' /dev/null
2009-10-26 18:24:57
User: dennisw
Functions: awk
Tags: awk

Calculating a series with awk only, no need for seq: add the numbers from 1 to 100 (the result is 5050).


1+3+...+(2n-1) = n^2

awk 'BEGIN {for(i=1;i<=19;i+=2)sum+=i}; END {print sum}' /dev/null # displays 100

1/2 + 1/4 + ... = 1

awk 'BEGIN {for(i=1;i<=10;i++)sum+=1/(2**i)}; END {print sum}' /dev/null # displays 0.999023
awk 'BEGIN{dir=DIR?DIR:ENVIRON["PWD"];l=split(dir,parts,"/");last="";for(i=1;i<l+1;i++){d=last"/"parts[i];gsub("//","/",d);system("ls -ld \""d"\"");last=d}}'
2009-10-22 16:28:07
User: arcege
Functions: awk

All handled within awk. It takes the value of $PWD, builds each directory path in turn, and runs ls -ld against it. The gsub() call is not strictly necessary, but is added for better visibility.

If a variable DIR is given on the awk command-line, then that directory is used instead:

awk -vDIR=$HOME/.ssh 'BEGIN{dir=DIR?...}'
(echo "set terminal png;plot '-' u 1:2 t 'cpu' w linespoints;"; sudo vmstat 2 10 | awk 'NR > 2 {print NR, $13}') | gnuplot > plot.png
vmstat 2 10 | awk 'NR > 2 {print NR, $13}' | gnuplot -e "set terminal png;set output 'v.png';plot '-' u 1:2 t 'cpu' w linespoints;"
find $HOME -type f -print0 | perl -0 -wn -e '@f=<>; foreach $file (@f){ (@el)=(stat($file)); push @el, $file; push @files,[ @el ];} @o=sort{$a->[9]<=>$b->[9]} @files; for $i (0..$#o){print scalar localtime($o[$i][9]), "\t$o[$i][-1]\n";}'|tail
2009-09-21 22:11:16
User: drewk
Functions: find perl

This pipeline will find, sort, and display all files based on mtime. This could be done with find | xargs, but that pipeline will not produce correct results if the output of find is larger than the xargs command-line buffer. If the xargs buffer fills, xargs processes the find results in more than one batch, which is not compatible with sorting.

Note the "-print0" on find and the "-0" switch for perl. This is the equivalent of using xargs. Don't you love perl?

Note that this pipeline can easily be modified to sort on any field produced by perl's stat operator, e.g. size, hard links, creation time, etc. Look at stat and just change the '9' to what you want. Changing the '9' to a '7', for example, will sort by file size. A '3' sorts by number of links.

Use head and tail at the end of the pipeline to get oldest files or most recent. Use awk or perl -wnla for further processing. Since there is a tab between the two fields, it is very easy to process.
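For instance, swapping the stat index 9 (mtime) for 7 (size) sorts by file size instead; a sketch of that variant, printing size and path:

find $HOME -type f -print0 | perl -0 -wn -e '@f=<>; foreach $file (@f){ (@el)=(stat($file)); push @el, $file; push @files,[ @el ];} @o=sort{$a->[7]<=>$b->[7]} @files; for $i (0..$#o){print "$o[$i][7]\t$o[$i][-1]\n";}'|tail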

ls -t | awk 'NR>5 {system("rm \"" $0 "\"")}'
2009-09-16 04:58:08
User: haivu
Functions: awk ls
Tags: awk ls

I have a directory containing log files. This command deletes all but the 5 latest logs. Here is how it works:

* The ls -t command lists all files with the latest ones at the top

* The awk expression means: for the lines after the fifth, remove the file
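To preview which files would be removed before actually deleting anything, a minimal dry-run sketch:

ls -t | awk 'NR>5'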