
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions, …).


News

May 19, 2015 - A Look At The New Commandlinefu
I've put together a short writeup on what kind of newness you can expect from the next iteration of clfu. Check it out here.
March 2, 2015 - New Management
I'm Jon, I'll be maintaining and improving clfu. Thanks to David for building such a great resource!

Psst. Open beta.

Wow, didn't really expect you to read this far down. The latest iteration of the site is in open beta. It's a gentle open beta, not in prime time just yet. It's being hosted over at UpGuard (link) and you are more than welcome to give it a shot. A couple of things:

» The open beta is running a copy of the database that will not carry over to the final version. Don't post anything you don't mind losing.
» If you wish to use your user account, you will probably need to reset your password.

Your feedback is appreciated via the form on the beta page. Thanks! -Jon & CLFU Team

Commands tagged awk
Terminal - Commands tagged awk - 303 results
ls -t | awk 'NR>5 {system("rm \"" $0 "\"")}'
2009-09-16 04:58:08
User: haivu
Functions: awk ls
Tags: awk ls
-2

I have a directory containing log files. This command deletes all but the 5 most recent logs. Here is how it works:

* The ls -t command lists all files, newest first

* The awk expression runs rm on every line (file name) after the fifth
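
Worth noting: this breaks on file names containing double quotes. A safer variant for the same job, assuming GNU xargs and file names without embedded newlines:

ls -t | tail -n +6 | xargs -d '\n' rm --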

awk '{delta = $1 - avg; avg += delta / NR; mean2 += delta * ($1 - avg); } END { print sqrt(mean2 / NR); }'
2009-09-11 04:46:01
User: ashawley
Functions: awk delta
Tags: awk
4

This will calculate a running standard deviation in one pass and should never have the possibility for overflow that can happen with other implementations. I suppose there is a potential for underflow in the corner case where the deltas are small or the values themselves are small.
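
This is Welford's one-pass algorithm: NR is the count, avg the running mean, and mean2 the running sum of squared deviations. As a quick sanity check, the values 1..10 have a population standard deviation of sqrt(8.25), roughly 2.87:

seq 1 10 | awk '{delta = $1 - avg; avg += delta / NR; mean2 += delta * ($1 - avg); } END { print sqrt(mean2 / NR); }'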

awk 'length>72' file
2009-09-10 05:54:41
User: haivu
Functions: awk
Tags: awk
16

This command displays all lines that are longer than 72 characters. I use it to identify those lines in my scripts and trim them to my liking.
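
If you also want to know where the long lines are, a small variant prints each offender with its file name and line number (standard awk):

awk 'length>72 {print FILENAME ":" FNR ": " $0}' file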

echo src::${PATH} | awk 'BEGIN{pwd=ENVIRON["PWD"];RS=":";FS="\n"}!$1{$1=pwd}$1!~/^\//{$1=pwd"/"$1}{print $1}'
2009-09-09 04:03:46
User: arcege
Functions: awk echo
Tags: awk echo PATH
-2

Removes the trailing newline; the colon becomes the record separator and the newline the field separator, so only the first field is ever printed. Empty entries are replaced with $PWD, and relative entries (like ".") are prefixed with the current directory ($PWD). You can change PWD with env(1) to get tricky in (non-Bourne) scripts.
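
As an illustration, if PWD is /tmp and PATH is /usr/bin::bin, the records "src", "", "/usr/bin", "" and "bin" come out as:

/tmp/src
/tmp
/usr/bin
/tmp
/tmp/bin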

find . -name \*.c | xargs wc -l | tail -1 | awk '{print $1}'
2009-09-08 08:25:45
User: karpoke
Functions: awk find tail wc xargs
Tags: awk find wc
0

This is really fast :)

time find . -name \*.c | xargs wc -l | tail -1 | awk '{print $1}'

204753
real 0m0.191s
user 0m0.068s
sys 0m0.116s
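
One caveat: if there are enough files, xargs splits them across several wc invocations and tail -1 only sees the total of the last batch. A variant that sidesteps this (POSIX find):

find . -name \*.c -exec cat {} + | wc -l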

curl -u username:password --silent "https://mail.google.com/mail/feed/atom" | tr -d '\n' | awk -F '<entry>' '{for (i=2; i<=NF; i++) {print $i}}' | sed -n "s/<title>\(.*\)<\/title.*name>\(.*\)<\/name>.*/\2 - \1/p"
2009-09-07 21:56:40
User: postrational
Functions: awk sed tr
44

Checks the Gmail ATOM feed for your account, parses it and outputs a list of unread messages.

For some reason sed gets stuck on OS X, so here's a Perl version for the Mac:

curl -u username:password --silent "https://mail.google.com/mail/feed/atom" | tr -d '\n' | awk -F '<entry>' '{for (i=2; i<=NF; i++) {print $i}}' | perl -pe 's/^<title>(.*)<\/title>.*<name>(.*)<\/name>.*$/$2 - $1/'

If you want to see the name of the last person who added a message to the conversation, change the greediness of the operators like this:

curl -u username:password --silent "https://mail.google.com/mail/feed/atom" | tr -d '\n' | awk -F '<entry>' '{for (i=2; i<=NF; i++) {print $i}}' | perl -pe 's/^<title>(.*)<\/title>.*?<name>(.*?)<\/name>.*$/$2 - $1/'

awk 'BEGIN {a=1;b=1;for(i=0;i<'${NUM}';i++){print a;c=a+b;a=b;b=c}}'
2009-09-06 03:05:55
User: arcege
Functions: awk
Tags: awk
0

Does not require any input and terminates on its own; the number of iterations is controlled by the shell variable $NUM (expanded by the shell before awk runs).
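
For example, printing the first five Fibonacci numbers:

NUM=5; awk 'BEGIN {a=1;b=1;for(i=0;i<'${NUM}';i++){print a;c=a+b;a=b;b=c}}'
1
1
2
3
5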

find . -type f -name '*.c' -exec wc -l {} \; | awk '{sum+=$1} END {print sum}'
2009-09-04 15:51:30
User: arcege
Functions: awk find wc
Tags: awk find wc
-1

Have wc work on each file, then add up the totals with awk; this gets a 43% speed increase on RHEL over using "-exec cat|wc -l" and a 67% increase on my Ubuntu laptop (with 10MB of data in 767 files).
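
A possibly faster variant, since POSIX find can batch files into as few wc invocations as possible with "-exec ... {} +"; the per-batch "total" lines then have to be filtered out before summing (assumes no file is literally named "total"):

find . -type f -name '*.c' -exec wc -l {} + | awk '$2 != "total" {sum+=$1} END {print sum}'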

find . -exec grep foobar /dev/null {} \; | awk -F: '{print $1}' | xargs vi
grep -ir 'foo' * | awk -F: '{print $1}' | xargs vim
grep -Hrli 'foo' * | xargs vim
2009-09-03 15:44:05
User: dere22
Functions: grep xargs
Tags: vim sed awk grep
3

The grep switches eliminate the need for awk and sed. Adding -p to vim will open all files in separate tabs, -o in separate vim windows. Just wish it didn't hose my terminal once I exit vim!!
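
The hosed terminal is likely because vim's standard input is the pipe rather than the terminal. If your xargs is recent GNU findutils, the -o flag reopens the tty for the spawned command:

grep -Hrli 'foo' * | xargs -o vim -p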

grep -ir 'foo' * | awk '{print $1}' | sed -e 's/://' | xargs vim
2009-09-03 15:12:27
User: elubow
Functions: awk grep sed xargs
Tags: vim sed awk grep
0

This will drop you into vim to edit all files that contain your grep string.

FFPID=$(pidof firefox-bin) && lsof -p $FFPID | awk '{ if($7>0) print ($7/1024/1024)" MB -- "$9; }' | grep ".mozilla" | sort -rn
2009-08-16 08:58:22
User: josue
Functions: awk grep pidof sort
6

Check which files are opened by Firefox, then sort by largest size (in MB). You can see all open files by replacing the grep pattern with "/". Useful if you'd like to debug which extensions or files are taking too much memory in Firefox.

not necessarily better, but many...!
2009-08-12 11:03:26
Tags: bash awk
-17

( IFS=:; for i in $PATH; do echo $i; done; )

echo $PATH|sed -e 's/:/\n/g' # but the tr one is even better of course

echo $PATH|xargs -d: -i echo {} # but this comes up with an extra blank line; can't figure out why and don't have the time :(

echo $PATH|cut -d: --output-delimiter='

' -f1-99 # note -- you have to hit ENTER after the first QUOTE, then type the second one. Sneaky, huh?

echo $PATH | perl -l -0x3a -pe 1 # same darn extra new line; again no time to investigate

echo $PATH|perl -pe 's/:/\n/g' # too obvious; clearly I'm running out of ideas :-)

echo $PATH|awk -F: ' { for (i=1; i <= NF; i++) print $i }'
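
The extra blank line in the xargs and perl -0x3a versions most likely comes from echo's trailing newline: with ":" as the only delimiter, the newline stays attached to the final field, and printing that field adds a second newline. Suppressing it at the source should fix both:

echo -n $PATH | xargs -d: -i echo {}
echo -n $PATH | perl -l -0x3a -pe 1
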
perl -F',' -ane '$a += $F[3]; END { print $a }' test.csv
2009-08-11 15:08:58
Functions: perl
Tags: awk column CSV sum
1

More of the same but with more elaborate perl-fu :-)

awk -F ',' '{ x = x + $4 } END { print x }' test.csv
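
For example, with a hypothetical test.csv, summing the fourth column:

printf '1,2,3,4\n5,6,7,8\n' > test.csv
perl -F',' -ane '$a += $F[3]; END { print $a }' test.csv
12
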
awk /./ filename
2009-08-09 02:04:46
Functions: awk
Tags: awk
1

"Cat and grep"? You can use grep alone ("grep \. filename"). An even better option is awk.
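
In the same spirit, awk NF goes one step further: it skips not only empty lines but also lines containing nothing but whitespace, since those have no fields:

awk NF filename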

awk '{print NR": "$0; for(i=1;i<=NF;++i)print "\t"i": "$i}'
2009-07-23 06:25:31
User: recursiverse
Functions: awk
Tags: awk
16

Breaks down and numbers each line and its fields. This is really useful when you are going to parse something with awk but aren't sure exactly where to start.
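
For example:

echo 'foo bar' | awk '{print NR": "$0; for(i=1;i<=NF;++i)print "\t"i": "$i}'
1: foo bar
        1: foo
        2: bar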

awk '{ split(sprintf("%1.3e", $1), b, "e"); p = substr("yzafpnum_kMGTPEZY", (b[2]/3)+9, 1); o = sprintf("%f", b[1] * (10 ^ (b[2]%3))); gsub(/\./, p, o); print substr( gensub(/_[[:digit:]]*/, "", "g", o), 1, 4); }' < test.dat
2009-07-22 16:54:14
User: mungewell
Functions: awk
Tags: awk
2

Converts any number on stdin to SI notation. My version limits to 3 digits of precision (working with 10% resistors). Note that gensub() makes this gawk-specific.
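
For example, 4700 should come out in resistor-style notation:

echo 4700 | awk '{ split(sprintf("%1.3e", $1), b, "e"); p = substr("yzafpnum_kMGTPEZY", (b[2]/3)+9, 1); o = sprintf("%f", b[1] * (10 ^ (b[2]%3))); gsub(/\./, p, o); print substr( gensub(/_[[:digit:]]*/, "", "g", o), 1, 4); }'
4k70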

(cd /source/dir ; tar cv .)|(cd /dest/dir ; tar xv)
2009-07-19 10:31:13
User: marssi
Functions: cd tar
-11

The f is for file and "-" for stdout; leaving them off makes this a little shorter.

I like the copy-directory function below. It does the job but looks like SH**; it doesn't understand folders with whitespace and can only handle full paths, but is otherwise fine:

function copy-directory () { FrDir="$(echo $1 | sed 's:/: :g' | awk '/ / {print $NF}')" ; SiZe="$(du -sb $1 | awk '{print $1}')" ; (cd $1 ; cd .. ; tar c $FrDir/ )|pv -s $SiZe|(cd $2 ; tar x ) ; }
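
A sketch of the same idea that tolerates whitespace and relative paths (assumes GNU du for -sb and that pv is installed; the name copy_directory is illustrative):

copy_directory () { local size; size=$(du -sb -- "$1" | awk '{print $1}') ; (cd "$(dirname -- "$1")" && tar c "$(basename -- "$1")") | pv -s "$size" | (cd -- "$2" && tar x) ; }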

grep -or string path/ | wc -l
grep -rc logged_in app/ | cut -d : -f 2 | awk '{sum+=$1} END {print sum}'
2009-07-15 14:16:44
User: terceiro
Functions: awk cut grep
-2

grep's -c outputs how many matching lines there are for each file as "file:N"; cut takes the N's and awk does the sum. (Note that -c counts matching lines, not individual matches; the -o version above counts every match.)

cat /dev/urandom|awk 'BEGIN{"tput cuu1" | getline CursorUp; "tput clear" | getline Clear; printf Clear}{num+=1;printf CursorUp; print num}'
2009-07-13 07:30:51
User: axelabs
Functions: awk cat printf
Tags: nawk awk clear tput
0

awk can clear the screen while displaying output. This is a handy way of seeing how many lines a tail -f has hit, or how many files find has found. On Solaris you may have to use nawk, and your machine needs tput.
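
For instance, the tail -f case mentioned above might look like this (the log path is just an example):

tail -f /var/log/syslog | awk 'BEGIN{"tput cuu1" | getline up}{n++; printf "%s%d lines\n", up, n}'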

awk '{c=split($0, s); for(n=1; n<=c; ++n) print s[n] }' INPUT_FILE > OUTPUT_FILE
2009-07-06 06:10:21
User: agony
Functions: awk
Tags: awk
1

Basically it creates a typical word list file from any normal text.
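
For example:

echo 'the quick brown fox' | awk '{c=split($0, s); for(n=1; n<=c; ++n) print s[n] }'
the
quick
brown
fox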