
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign-in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions,…).



News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try to put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.
Terminal - Commands tagged awk - 293 results
awk -F'\t' '{print $0 >> ($5".tsv")}'
2012-05-16 18:18:16
User: pykler
Functions: awk
Tags: awk split tsv
0

Splits the standard-input lines into files named after the content of the fifth column.
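A self-contained sketch of the split-by-column technique (the sample rows and group names below are invented). Note that the filename expression needs quotes and parentheses, as in ($5".tsv"), or awk will not parse it:

```shell
cd "$(mktemp -d)"   # work in a scratch directory
# Three tab-separated rows; the fifth column names the output file
printf 'a\tb\tc\td\tgroup1\ne\tf\tg\th\tgroup2\ni\tj\tk\tl\tgroup1\n' |
awk -F'\t' '{print $0 >> ($5".tsv")}'
g1=$(wc -l < group1.tsv)   # rows that landed in group1.tsv
g2=$(wc -l < group2.tsv)   # rows that landed in group2.tsv
```

Two of the three rows share the value group1, so group1.tsv ends up with two lines and group2.tsv with one.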

cal 04 2012 | awk '{ if ($7) X=$7 } END { print X }'
2012-05-06 23:43:21
User: flatcap
Functions: awk cal
2

If your locale has Monday as the first day of the week, like mine in the UK, change the two instances of $7 to $6.
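The same keep-the-last-non-empty-column trick, run against a hard-coded copy of the `cal 04 2012` output (so the result does not depend on your locale or installed cal):

```shell
# Hard-coded `cal 04 2012` output, Sunday-first locale
cal_out='     April 2012
Su Mo Tu We Th Fr Sa
 1  2  3  4  5  6  7
 8  9 10 11 12 13 14
15 16 17 18 19 20 21
22 23 24 25 26 27 28
29 30'
# Remember the last non-empty 7th field: the date of the last Saturday
last_sat=$(printf '%s\n' "$cal_out" | awk '{ if ($7) X=$7 } END { print X }')
```

The header row's "Sa" is overwritten by each full week's Saturday, leaving 28, the last Saturday of April 2012.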

echo `disklabel mfid1s4 | sed -n '$p' | awk '{print $2}'` / 1024 / 1024 | bc -l
cal 04 2012 | awk 'NF <= 7 { print $7 }' | grep -v "^$" | tail -1
2012-05-03 16:57:45
User: javidjamae
Functions: awk cal grep tail
-2

This is a little trickier than finding the last Sunday, because the last Sunday is always in the first position of the last line. The trick is to use NF <= 7 so it picks up all the lines, then grep out any empty lines and take the last remaining value.

lynx -dump http://www.domain.com | awk '/http/{print $2}' | egrep "^https{0,1}"
for k in $(git branch | sed /\*/d); do echo "$(git log -1 --pretty=format:"%ct" $k) $k"; done | sort -r | awk '{print $2}'
2012-04-07 11:19:00
User: dahuie
Functions: awk echo sed sort
Tags: bash git sed awk
0

Simpler and without all of the coloring gimmicks. This just returns a list of branches with the most recent first. This should be useful for cleaning your remotes.
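The final `sort -r | awk '{print $2}'` stage can be sketched in isolation with made-up timestamp/branch pairs, standing in for the `git log` output of the loop (no repository needed):

```shell
# Hypothetical "commit-timestamp branch" pairs, newest commit first after sort
branches=$(printf '%s\n' \
  '1335787140 master' \
  '1326300000 old-feature' \
  '1333500000 bugfix' |
  sort -r | awk '{print $2}')
```

Because the epoch timestamps all have the same width, the lexicographic reverse sort orders them newest-first, and awk then strips the timestamps, leaving master, bugfix, old-feature.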

sudo apt-get remove $(dpkg -l|awk '/^ii linux-image-/{print $2}'|sed 's/linux-image-//'|awk -v v=`uname -r` 'v>$0'|sed 's/-generic//'|awk '{printf("linux-headers-%s\nlinux-headers-%s-generic\nlinux-image-%s-generic\n",$0,$0,$0)}')
2012-04-02 10:53:40
User: mtron
Functions: awk sed sudo
-1

A small update to make this command work with Linux 3.x kernels.

ps h --ppid $(cat /var/run/apache2.pid) | awk '{print"-p " $1}' | xargs sudo strace
2012-03-21 01:59:41
Functions: awk cat ps sudo xargs
2

Like the original version, except it does not include the parent apache process or the grep process, and adds "sudo" so it can be run by a regular user.

curl -s mobile.twitter.com/search | sed -n '/trend_footer_list/,/\ul>/p' | awk -F\> '{print $3}' | awk -F\< '{print $1}' | sed '/^$/d'
2012-03-15 17:17:06
User: articmonkey
Functions: awk sed
Tags: twitter awk curl
0

Prints the top 5 Twitter trending topics. Not very well written at all, but none of the others worked.

find /path/to/dir -iname "*.ext" -print0 | xargs -0 mplayer -really-quiet -cache 64 -vo dummy -ao dummy -identify 2>/dev/null | awk '/ID_LENGTH/{gsub(/ID_LENGTH=/,"")}{SUM += $1}END{ printf "%02d:%02d:%02d\n",SUM/3600,SUM%3600/60,SUM%60}'
2012-03-11 12:28:48
User: DarkSniper
Functions: awk find printf xargs
0

Improvement on Coderjoe's Solution. Gets rid of grep and cut (and implements them in awk) and specifies some different mplayer options that speed things up a bit.
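The summing-and-formatting awk stage can be exercised on its own with synthetic ID_LENGTH lines (the three durations below are invented; no mplayer needed):

```shell
# Synthetic `mplayer -identify` output: clips of 60 s, 90 s and 3600 s
total=$(printf 'ID_LENGTH=60\nID_LENGTH=90\nID_LENGTH=3600\n' |
  awk '/ID_LENGTH/{gsub(/ID_LENGTH=/,"")}{SUM += $1}END{ printf "%02d:%02d:%02d\n",SUM/3600,SUM%3600/60,SUM%60}')
```

The 3750-second total is rendered as hours, minutes and seconds, relying on awk's printf %d truncating the divisions.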

awk 'FNR==100 {print;exit}' file
2012-03-04 20:25:57
User: Testuser_01
Functions: awk
Tags: awk time LINES
0

This will save parsing time for operations on very big files.
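A quick sketch against a generated sample file (the file and its length are invented for the demo): the exit stops awk at line 100 instead of reading the rest of the file.

```shell
# Build a hypothetical 200-line file and pull out line 100 only
f=$(mktemp)
seq 200 > "$f"
line=$(awk 'FNR==100 {print; exit}' "$f")
rm -f "$f"
```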

awk '{cmd="date --date=\""$1"\" +\"%Y/%m/%d %H:%M:%S\" "; cmd | getline convdate; print cmd";"convdate }' file.txt
2012-02-28 14:08:52
User: EBAH
Functions: awk
0

Converts each line's date/time into a readable format by shelling out to the `date` command from within awk.
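A minimal sketch of the cmd | getline pattern, assuming GNU coreutils date (the --date option is GNU-specific); the input line is made up, and close(cmd) is added so each line's pipe is released:

```shell
# Reformat the first field of a line via `date`, keeping the rest
out=$(echo '2012-02-28 some event' |
  awk '{cmd="date --date=\""$1"\" +%Y/%m/%d"; cmd | getline convdate; close(cmd); print convdate, $2, $3}')
```

awk builds the string date --date="2012-02-28" +%Y/%m/%d, runs it, and reads the reformatted date back through getline.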

print "$(lsvg -Lo |xargs lsvg -L|grep "TOTAL PPs"|awk -F"(" '{print$2}'|sed -e "s/)//g" -e "s/megabytes/+/g"|xargs|sed -e "s/^/(/g" -e "s/+$/)\/1000/g"|bc ) GB"
2012-02-03 13:58:41
0

Not figured out by me, but by a colleague of mine.

See the total amount of data on an AIX machine.

sed -r 's/(\[|])//g' | awk ' { $1=strftime("%D %T",$1); print }'
2012-02-03 13:07:37
User: Zulu
Functions: awk sed
Tags: sed awk timestamp
0

It removes the square brackets and converts the UNIX timestamp to a human-readable time on every line of a stream (or file).

ps -fea | grep PATTERN | awk {'print $2'} | xargs kill -9
awk -F":" '!list[$3]++{print $3}' /etc/passwd
find ./ -type f -size +100000k -exec ls -lh {} \; 2>/dev/null| awk '{ print $8 " : " $5}'
2012-01-21 04:19:35
User: Goez
Functions: awk find ls
0

This command does a basic find with a size filter. It also makes the printout clearer than the default.

Adjusting the ./ will alter the path.

Adjusting the "-size +100000k" will specify the size to search for.

grep "cpu " /proc/stat | awk -F ' ' '{total = $2 + $3 + $4 + $5} END {print "idle \t used\n" $5*100/total "% " $2*100/total "%"}'
2012-01-21 04:12:50
User: Goez
Functions: awk grep
0

This command displays the CPU idle + used time using stats from /proc/stat.
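A deterministic sketch with a hard-coded /proc/stat cpu line (the counter values are invented). Note the original prints only the user column as "used"; this version sums user+nice+system, and stores the fields in variables since field values in an END block are not reliable across awk implementations:

```shell
# Hard-coded /proc/stat cpu line: user=100 nice=0 system=100 idle=800
stats='cpu  100 0 100 800 0 0 0'
cpu=$(printf '%s\n' "$stats" |
  awk '{ u=$2; n=$3; s=$4; i=$5; total=u+n+s+i }
       END { printf "idle %.0f%% used %.0f%%", i*100/total, (u+n+s)*100/total }')
```

Out of 1000 total jiffies, 800 are idle and 200 busy, so it reports 80% idle and 20% used.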

diff -U99999 original.css modified.css | awk '/^-/{next} {f=f"\n"$0} /^\+.*[^ ]/{yes=1} /}/ {if(yes){print f} f="";yes=0}'
2012-01-12 07:57:22
User: unhammer
Functions: awk diff
0

This will extract the differing CSS entries of two files. I've left the initial character (plus or space) in the output to show the real differing lines; remove the initial character to get a working CSS file. The output CSS is usable either by including it after the link to original.css, or by using only the output and adding @import url("original.css"); at the beginning.

This is very useful for converting Wordpress theme copies into real Wordpress child themes.

Could exclude common lines within entries too, I guess, but that might not be worth the complexity.

ps -ef | grep [j]ava | awk -F ' ' ' { print $1," ",$2,"\t",$(NF-2),"\t",$(NF-1),"\t",$NF } ' | sort -k4
2012-01-05 16:05:48
User: drockney
Functions: awk grep ps sort
Tags: sort awk grep ps
0

Tested in bash on AIX & Linux, used for WAS versions 6.0 & up. Sorts by node name.

Useful when you have vertically-stacked instances of WAS/Portal. Cuts out all the classpath/optional parameter clutter that makes a simple "ps -ef | grep java" so difficult to sort through.

awk -F: '$3 > 999 { print $1 }' /etc/passwd
2011-12-30 14:47:10
User: rockenrola
Functions: awk
Tags: awk uid passwd
0

To distinguish normal users from system users: specify a UID to list all users in /etc/passwd whose UID is greater than it.
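The same filter run against a hypothetical /etc/passwd excerpt (the accounts below are invented), so the result does not depend on the local system:

```shell
# Hypothetical /etc/passwd excerpt; only alice and bob have UIDs above 999
users=$(printf '%s\n' \
  'root:x:0:0:root:/root:/bin/bash' \
  'daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin' \
  'alice:x:1000:1000::/home/alice:/bin/bash' \
  'bob:x:1001:1001::/home/bob:/bin/bash' |
  awk -F: '$3 > 999 { print $1 }')
```

With -F: the third colon-separated field is the UID, so only the two regular accounts are printed.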

awk -F ':' '{print $1 | "sort";}' /etc/passwd
ls -1 | awk ' { print "zip "$1".zip " $1 } ' | sh
2011-12-14 20:30:56
User: kaywhydub
Functions: awk ls
Tags: awk zip sh
1

This will list the files in a directory, then zip each one individually, keeping the original filename in the archive name.

video1.wmv -> video1.wmv.zip

video2.wmv -> video2.wmv.zip

This was for zipping up large numbers of video files for upload on a Windows machine.
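The generated commands can be inspected before execution by dropping the final | sh. A sketch with two invented filenames (no zip installation needed, since nothing is executed):

```shell
# Show the zip commands that would run for two hypothetical files
cmds=$(printf 'video1.wmv\nvideo2.wmv\n' |
  awk ' { print "zip "$1".zip " $1 } ')
```

Each line becomes "zip <name>.zip <name>"; once the output looks right, append | sh to actually run it.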

awk 'NR >= 3 && NR <= 6' /path/to/file
2011-12-14 14:28:56
User: atoponce
Functions: awk
Tags: awk
7

This command uses awk(1) to print all lines between two known line numbers in a file. Useful for seeing output in a log file, where the line numbers are known. The above command will print all lines between, and including, lines 3 and 6.
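A quick demonstration on a generated ten-line sample (the line numbers 3 and 6 match the command above):

```shell
# Print lines 3 through 6 of a ten-line input
range=$(seq 10 | awk 'NR >= 3 && NR <= 6')
```

Because the condition has no action block, awk applies the default action (print) to every line whose record number falls in the range.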