
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that receive a minimum of 3 or 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).

News

May 19, 2015 - A Look At The New Commandlinefu
I've put together a short writeup on what kind of newness you can expect from the next iteration of clfu. Check it out here.
March 2, 2015 - New Management
I'm Jon, and I'll be maintaining and improving clfu. Thanks to David for building such a great resource!

Terminal - Commands using awk - 1,186 results
awk '{for(i=2;i<=NF;i++) printf("%s%s",$i,(i!=NF)?OFS:ORS)}'
awk '{ $1="";print}'
for p in $(pgrep -t $(cat /sys/class/tty/tty0/active)); do d=$(awk -v RS='\0' -F= '$1=="DISPLAY" {print $2}' /proc/$p/environ 2>/dev/null); [[ -n $d ]] && break; done; echo $d
2015-05-18 20:01:20
User: geyslan
Functions: awk cat echo
Tags: display xorg
1

It's useful when you cannot access your environment (e.g. under systemd) or when the DISPLAY variable is not set for the process. It can also help with multi-head or multi-user configurations.
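
For example (a minimal sketch, assuming an X session is actually running and xrandr is installed), you can point an X-aware command at the recovered display:

DISPLAY=$d xrandr --query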

showip() { nmcli connection show $1|grep ipv4.addresses|awk '{print $2}' ; }
2015-05-13 16:24:28
User: nnsense
Functions: awk grep
1

Sometimes it's useful to output just the IP address, or some other piece of information, by changing "ipv4.addresses" in the command. The power of awk! You can list all the fields available to grep for with:

nmcli connection show [yourInterfaceNameHere]
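
For example, assuming a connection named "eth0" exists (the name here is hypothetical), a call would look like:

showip eth0

and would print something like 192.168.1.10/24, i.e. the value of ipv4.addresses for that connection.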
grep page.php /var/log/httpd/access_log|awk '{print $1}'|sort|uniq|perl -e 'while (<STDIN>){chomp; $cmd=`ipset add banned -! -q $_`; }'
mosth() { history | awk '{CMD[$2]++;count++;}END { for (a in CMD)print CMD[a] " " CMD[a]/count*100 "% " a;}' | grep -v "./" | column -c3 -s " " -t | sort -nr | nl | head -n10; }
2015-05-11 17:41:55
User: nnsense
Functions: awk column grep head nl sort
0

I copied this (let's be honest) from somewhere on the internet and simply wrapped it in a function so it's ready to be used as an alias. It shows the 10 most-used commands from your history. It may seem like just another "most used commands from history" one-liner, but hey.. this one is a function!!! :D
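
A sketch of a typical run (the commands and counts below are hypothetical) - the columns are rank, count, percentage of total history, and the command itself:

mosth
     1  223  22.3%  ls
     2  187  18.7%  cd
     3   95  9.5%   ssh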

awk '{out="";for(i=2;i<=NF;i++){out=out" "$i};sub(/ /, "", out);print out}'
2015-05-06 22:26:28
User: endix
Functions: awk
Tags: awk
0

Increase "2" in "i=2" to drop more columns.

wget -q -O- https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/ | grep Linux/7/pdf | cut -d \" -f 2 | awk '{print "https://access.redhat.com"$1}' | xargs wget
function summaryIP() { < $1 awk '{print $1}' | while read ip ; do verifyIP ${ip} && echo ${ip}; done | awk '{ip_array[$1]++} END { for (ip in ip_array) printf("%5d\t%s\n", ip_array[ip], ip)}' | sort -rn; }
2015-05-01 16:45:05
User: mpb
Functions: awk echo read sort
1

When working with lists of IP addresses, it is sometimes useful to summarize how many times each IP address appears in a file.

This example, summaryIP, uses another function, "verifyIP", previously defined on commandlinefu.com, to ensure only valid IP addresses get counted. The summary list is presented in descending count order, starting with the highest count.
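
verifyIP itself is not reproduced here; as a rough stand-in (an assumption, not the original function), a minimal bash sketch that only checks the dotted-quad format could be:

verifyIP() { [[ $1 =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}$ ]]; }

Note this sketch does not reject octets above 255, so the real verifyIP is likely stricter.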

awk -F"|" 'BEGIN {OFS="|"} NR==1 {for (b=1;b<=NF;b++) {hdr[b]=$b} } NR > 1 {for (i=1;i<=NF;i++) {if(length($i) > max[i]) max[i] = length($i)} } END {for (i=1;i <= NF;i++) print hdr[i],max[i]+0}' pipe_delimited_file.psv
sudo lsof -i -n | grep sshd | grep sshuser | grep :[PORT-RANGE] | grep -v IPv6 | awk -F\: '{print $2}' | grep -v http | awk -F" " '{print $1}'
2015-04-09 15:41:11
User: das_shark
Functions: awk grep sshd sudo
-2

Gets network ports:

only those belonging to the sshd service

only those for a specific logged-in user (changed for public posting)

only those in a specific localhost:port range

excluding IPv6

only the part of the response after the ":" character

only the part of the response before the 1st space

The output is just the rssh port (see the filled-in example below).
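
As a concrete example, with a hypothetical user "rsshuser" and a hypothetical port range of 2200-2299, the [PORT-RANGE] placeholder could be filled in like this:

sudo lsof -i -n | grep sshd | grep rsshuser | grep ':22[0-9][0-9]' | grep -v IPv6 | awk -F\: '{print $2}' | grep -v http | awk -F" " '{print $1}'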

awk '{print $0+0}' <(echo -2; echo +3;)
2015-04-08 09:19:24
Functions: awk echo
0

The leading plus sign is removed; the minus sign is left intact.
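
In other words, with the two sample lines from the process substitution, the run looks like this (output shown below the command):

awk '{print $0+0}' <(echo -2; echo +3)
-2
3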

ps -ef | grep PROCESS | grep -v grep | awk '{system("kill -9 " $2)}'
pgrep -lf processname | cut -d' ' -f1 | awk '{print "cat /proc/" $1 "/net/sockstat | head -n1"}' | sh | cut -d' ' -f3 | paste -sd+ | bc
sudo apt-get purge $(dpkg -l linux-{image,headers}-"[0-9]*" | awk '/ii/{print $2}' | grep -ve "$(uname -r | sed -r 's/-[a-z]+//')")
wmctrl -m | grep Name: | awk '{print $2}'
crontest () { date +'%M %k %d %m *' |awk 'BEGIN {ORS="\t"} {print $1+2,$2,$3,$4,$5,$6}'; echo $1;}
2015-03-12 19:56:56
User: CoolHand
Functions: awk date echo
0

usage: crontest "/path/to/bin"

This version of the function will echo back the entire command so it can be copied and pasted into crontab. It should be possible to append it to crontab automagically with a bit more work. Tested with bash and zsh on Linux, FreeBSD and AIX.
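
A hypothetical run (the timestamp obviously depends on when you invoke it) would print something like:

crontest "/path/to/bin"
58 14 19 05 *   /path/to/bin

which can then be pasted into crontab -e as-is.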

awk '!NF || !seen[$0]++'
2015-02-25 17:03:13
User: Soubsoub
Functions: awk
1

Remove duplicate lines whilst keeping order and empty lines
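
For example (sample input built with printf):

printf 'foo\nbar\n\nfoo\nbaz\n' | awk '!NF || !seen[$0]++'

prints foo, bar, an empty line and baz: the second foo is dropped, while blank lines always pass through.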

sc query state= all | awk '/SERVICE_NAME/{printf"%s:",$2;getline;gsub(/DISP.*:\ /,"");printf"%s\n",$0}' | column -ts\:
2015-02-15 22:35:10
User: lowjax
Functions: awk column
1

Outputs the service name and display name of Windows services using "sc query", pipes the output to "awk" for processing, then to "column" for formatting.

List All Services:

sc query state= all | awk '/SERVICE_NAME/{printf"%s:",$2;getline;gsub(/DISP.*:\ /,"");printf"%s\n",$0}' | column -ts\:

List Started Services:

sc query | awk '/SERVICE_NAME/{printf"%s:",$2;getline;gsub(/DISP.*:\ /,"");printf"%s\n",$0}' | column -ts\:

List Stopped Services:

sc query state= inactive| awk '/SERVICE_NAME/{printf"%s:",$2;getline;gsub(/DISP.*:\ /,"");printf"%s\n",$0}' | column -ts\:
ip -o -4 a s | awk -F'[ /]+' '$2!~/lo/{print $4}'
2015-02-13 11:19:31
User: paulera
Functions: awk
2

To show IPv6 addresses instead, use -6 instead of -4:

ip -o -6 a s | awk -F'[ /]+' '$2!~/lo/{print $4}'

To show only the IP of a specific interface, in case you get more than one result:

ip -o -4 a s eth0 | awk -F'[ /]+' '$2!~/lo/{print $4}'

ip -o -4 a s wlan0 | awk -F'[ /]+' '$2!~/lo/{print $4}'
ls -l /dev/disk/by-id |grep -v "wwn-" |egrep "[a-zA-Z]{3}$" |sed 's/\.\.\/\.\.\///' |sed -E 's/.*[0-9]{2}:[0-9]{2}\s//' |sed -E 's/->\ //' |sort -k2 |awk '{print $2,$1}' |sed 's/\s/\t/'
2015-01-25 19:29:40
User: lig0n
Functions: awk egrep grep ls sed sort
Tags: zfs disk info
0

This is much easier to parse and do something else with (eg: automagically create ZFS vols) than anything else I've found. It also helps me keep track of which disks are which, for example when I want to replace a disk or image headers in different scenarios. Being able to match a disk's serial number to the kernel's mapping of that drive is very helpful.

ls -l /dev/disk/by-id

Normal `ls` command to list contents of /dev/disk/by-id

grep -v "wwn-"

Perform an inverse search - that is, only output non-matches to the pattern 'wwn-'

egrep "[a-zA-Z]{3}$"

A regex grep, looking for three letters at the end of a line (to filter out fluff)

sed 's/\.\.\/\.\.\///'

Utilize sed (stream editor) to remove all occurrences of "../../"

sed -E 's/.*[0-9]{2}:[0-9]{2}\s//'

Strip out all user and permission fluff. The -E option lets us use extended (modern) regex notation (larger control set)

sed -E 's/->\ //'

Strip out the ASCII arrows "-> "

sort -k2

Sort the resulting information alphabetically, on column 2 (the disk letters)

awk '{print $2,$1}'

Swap the order of the columns so it's easier to read/utilize output from

sed 's/\s/\t/'

Replace the space between the two columns with a tab character, making the output more friendly

For large ZFS pools, this made creating my vdevs immeasurably easier. By keeping track of which disks were in which slot (in a spreadsheet) via their serial numbers, I was able to create my vols simply by copying the full ID of each disk (not the letter) and pasting it into my command, thereby knowing exactly which disk, in which slot, was going into the vdev. Example command below.

zpool create tank raidz2 -o ashift=12 ata-... ata-... ata-... ata-... ata-... ata-...
lsof -ns | grep REG | grep deleted | awk '{a[$1]+=$7;}END{for(i in a){printf("%s %.2f MB\n", i, a[i]/1048576);}}'
tail -f access_log | awk '{print $1 , $12}'
2014-12-24 14:15:52
User: tyzbit
Functions: awk tail
0

Use this command to watch Apache access logs in real time to see what pages are getting hit.

FILE=somefile.js; LOG=~/changes.diff; truncate -s0 ${LOG}; for change in $(svn log ${FILE} | awk -F' | ' '/^r[0-9]+/{print $1}'); do svn log -c ${change} >> ${LOG}; printf "\n" >> ${LOG}; svn diff -c ${change} >> ${LOG}; printf "\n\n\n" >> ${LOG}; done
2014-12-23 20:00:54
User: hochmeister
Functions: awk diff printf
Tags: svn diff log
0

From an svn repo, print a log, with diff, of each commit touching a given file.

awk '{ total += gsub(/yourstring/,"") } END { print total }' yourfile
2014-12-16 21:00:45
User: bugmenot
Functions: awk
0

Count how many times a pattern is present in a file. The file can contain one or more lines. Matches do not overlap: searching for "aa" in "aaa" outputs 1, not 2.
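
A small worked example (sample file built inline with printf):

printf 'aaa\nbaab\n' | awk '{ total += gsub(/aa/,"") } END { print total }'

This prints 2: one non-overlapping match of "aa" in "aaa" and one in "baab".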