What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions, …).


News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.
Commands tagged awk - 280 results
mco ping | head -n -4 | awk '{print $1}' | sort
while true; do clear;awk '{a[$3]+=1};END{for(x in a){print x,a[x]}}' /proc/[0-9]*/stat; sleep 1; done
awk '$1=="Host"{$1="";H=substr($0,2)};$1=="HostName"{print H,"$",$2}' ~/.ssh/config | column -s '$' -t
2014-05-24 20:51:47
User: wejn
Functions: awk column
Tags: awk column
3

Spits out a table showing your Host->HostName aliases from ~/.ssh/config.
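
For example, given a hypothetical ~/.ssh/config containing:

Host web
HostName web01.example.com
Host db
HostName db.internal.example.com

the command prints something like:

web  web01.example.com
db   db.internal.example.com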

grep URL ~/annex/.git/annex/webapp.html | tr -d '">' | awk -F= '{print $4 "=" $5}'
grep -Rl "pattern" files_or_dir
2014-04-06 18:18:07
User: N1nsun
Functions: grep
Tags: awk find grep
0

Grep can search files and directories recursively. Using the -Z option, grep terminates each filename with a NUL byte, which xargs -0 reads safely even when filenames contain spaces, making the results suitable for other commands like rm.
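
For instance, to delete every matching file in one go (a sketch; the pattern and path are placeholders):

grep -RlZ "pattern" files_or_dir | xargs -0 rm -f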

find . | xargs grep -l "FOOBAR" | awk '{print "rm -f "$1}' > doit.sh
2014-04-06 15:48:41
User: sergeylukin
Functions: awk find grep xargs
Tags: awk find grep
-2

After running this command you can review the doit.sh file before executing it.

If it looks good, execute: `. doit.sh`
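
If filenames may contain spaces, a NUL-safe sketch of the same idea (deleting directly rather than generating doit.sh):

find . -type f -print0 | xargs -0 grep -lZ "FOOBAR" | xargs -0 rm -f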

!a[$0]++
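
This is the classic awk idiom for removing duplicate lines while preserving their order; used as, for example:

awk '!a[$0]++' input.txt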
lsof|gawk '$4~/txt/{next};/REG.*\(deleted\)$/{printf ">/proc/%s/fd/%d\n", $2,$4}'
2014-03-11 10:40:32
User: wejn
Functions: gawk
Tags: awk lsof gawk
1

While the posted solution works, I'm a bit uneasy about the "%d" part. This would be the hyper-correct approach:

lsof|gawk '$4~/txt/{next};/REG.*\(deleted\)$/{sub(/.$/,"",$4);printf ">/proc/%s/fd/%s\n", $2,$4}'

Oh, and you gotta pipe the result to sh if you want it to actually trim the files. ;)

Btw, this approach also removes false negatives (OP's command skips any deleted files with "txt" in their name).
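
In other words, to actually truncate the deleted-but-open files:

lsof|gawk '$4~/txt/{next};/REG.*\(deleted\)$/{sub(/.$/,"",$4);printf ">/proc/%s/fd/%s\n", $2,$4}' | sh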

df -h --total | awk 'NR==1; END{print}'
cat /proc/cpuinfo | grep BogoMIPS | uniq | sed 's/^.*://g' | awk '{print($1 / 4) }'
find . -type d| while read i; do echo $(ls -1 "$i"|wc -m) $(du -s "$i"); done|sort -s -n -k1,1 -k2,2 |awk -F'[ \t]+' '{ idx=$1$2; if (array[idx] == 1) {print} else if (array[idx]) {print array[idx]; print; array[idx]=1} else {array[idx]=$0}}'
2014-02-25 22:50:09
User: knoppix5
Functions: awk du echo find ls read sort wc
1

Very quick! Based only on the content sizes and the character counts of filenames. If both numbers are equal, two (or more) directories are most likely identical.

If in doubt, verify with:

diff -rq path_to_dir1 path_to_dir2

AWK function taken from here:

http://stackoverflow.com/questions/2912224/find-duplicates-lines-based-on-some-delimited-fileds-on-line

alias ...="awk '{fflush(); printf \".\"}' && echo \"\""
2014-02-22 22:20:22
User: lgarron
Functions: alias
7

If you're running a command with a lot of output, this serves as a simple progress indicator.

This avoids the need to use `/dev/null` for silencing. It works for any command that outputs lines, updates live (`fflush` avoids buffering), and is simple to understand.
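
For example, with any long-running command that prints many lines (the command here is arbitrary):

find / 2>/dev/null | ...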

while [ 1 ] ;do ps aux|awk '{if ($8 ~ "D") print }'; sleep 1 ;done
rpm -qa --queryformat '%{SIZE}\n' | awk '{sum += $1} END {printf("Total size in packages = %4.1f GB\n", sum/1024**3)}'
2013-12-14 20:22:41
User: skytux
Functions: awk rpm
0

Note that this is the size of the RPM packages, not the installed size of the files.

xrandr | grep \* | awk '{print $1}'
sudo yum remove streams-$(uname -r)
wget "us.download.nvidia.com$(wget -qO- "$(wget -qO- "nvidia.com/Download/processFind.aspx?psid=95&pfid=695&osid=19&lid=1&lang=en-us"|awk '/driverResults.aspx/ {print $4}'|cut -d "'" -f2|head -n 1)"|awk '/url=/ {print $2}'|cut -d '=' -f3|cut -d '&' -f1)"
2013-11-21 03:04:59
User: lowjax
Functions: awk cut head wget
1

Download the latest NVIDIA GeForce x64 Windows 7-8 driver from Nvidia's website. Pulls the latest download version (which includes betas). This is the "English" version. The following command includes a 'sed' line to replace "english" with "international" if needed. You can also replace the starting subdomain with "eu.", "uk.", and others. Enjoy this one-liner! 1 character under the max :)

wget "us.download.nvidia.com$(wget -qO- "$(wget -qO- "nvidia.com/Download/processFind.aspx?psid=95&pfid=695&osid=19&lid=1&lang=en-us" | awk '/driverResults.aspx/ {print $4}' | cut -d "'" -f2 | head -n 1)" | awk '/url=/ {print $2}' | sed -e "s/english/international/" | cut -d '=' -f3 | cut -d '&' -f1)"
awk '{pattern=$1$2; seen[pattern]++; if (seen[pattern] == 1) print NR}' inputFile
awk 'BEGIN {count=0;prev=-1} {if(count>0) { if(int($1/100) > int(prev/100)) {print $1} } ; prev=$1; count++}' inputFile > rounded
echo 0$(awk '/Pss/ {printf "+"$2}' /proc/$PID/smaps)|bc
2013-09-26 18:20:22
User: atoponce
Functions: awk echo
Tags: Linux awk echo bc proc
5

The "proportional set size" is probably the closest representation of how much active memory a process is using in the Linux virtual memory stack. This number should also closely represent the %mem found in ps(1), htop(1), and other utilities.

dig +short <domain>
netstat -ntu | awk ' $5 ~ /^(::ffff:|[0-9|])/ { gsub("::ffff:","",$5); print $5}' | cut -d: -f1 | sort | uniq -c | sort -nr
2013-09-10 19:28:06
User: mrwulf
Functions: awk cut netstat sort uniq
1

Same as the rest, but handles IPv4-mapped IPv6 addresses. Also sorts in the order you're probably looking for.

function garg () { tail -n 1 "${HISTFILE}" | awk "{ print \$$1 }"; }
2013-09-10 04:07:46
User: plasticphyte
Functions: awk tail
0

This gets the Nth argument from the last line of your history file. This is useful where history is written after each command, and you want to reuse arguments from the previous command in the current one, such as when copying files or moving directories.

I wrote this after getting irritated with having to continually type in long paths/arguments.

You could also use $_ if all you want is the last argument.
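
For example, assuming history is written after each command (the paths here are hypothetical):

mv /some/long/path/report.txt /backups/2013/
cd $(garg 3)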

host example.com | head -1 | awk '{print $4}'
nslookup www.example.com | tail -2 | head -1 | awk '{print $2}'
2013-09-05 20:26:45
User: wsams
Functions: awk head nslookup tail
1

I'm not sure how reliable this command is, but it works for my needs. Here's also a variant using grep.

nslookup www.example.com | grep "^Address: " | awk '{print $2}'
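
A simpler alternative that avoids parsing the output layout altogether:

dig +short www.example.com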