Commands using awk (1,403)


  • 120
    history | awk '{a[$2]++}END{for(i in a){print a[i] " " i}}' | sort -rn | head
    unixmonkey611 · 2009-02-11 13:12:29 5
  • I find this terribly useful for grepping through a file when I only want one block of text. There's "grep -A # pattern file.txt" to see a specific number of lines after your pattern, but what if you want the whole block? Take the output of "dmidecode" (as root): dmidecode | awk '/Battery/,/^$/' will show everything from the line matching "Battery" up to the next blank line, i.e. the whole battery block. I find this extremely useful when I want to see whole blocks of text based on a pattern and don't care about the rest of the output. It could be used against the '/etc/securetty/user' file on Unix to find the block for a specific user, or against VirtualHosts or Directories in an Apache config to find specific definitions (see the sketch after this entry). The scenarios go on for any text formatted in blocks. Very handy.


    90
    awk '/start_pattern/,/stop_pattern/' file.txt
    atoponce · 2009-03-28 14:28:59 8
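
    The same range-pattern idiom works for any block-structured text. A minimal sketch for pulling a single VirtualHost block out of an Apache config (the file path here is only an example):
    awk '/<VirtualHost/,/<\/VirtualHost>/' /etc/apache2/sites-available/example.conf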
  • Using awk, remove duplicate lines from a file without sorting it (sort | uniq would reorder the contents). awk keeps the lines in their original order, drops the repeats, and the result can be redirected into another file (how the idiom works is explained after this entry).


    83
    awk '!x[$0]++' <file>
    din7 · 2009-12-20 02:33:21 9
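
    The idiom works because x[$0]++ evaluates to 0 (false) the first time a line is seen and to a positive number afterwards, so !x[$0]++ is true only on a line's first occurrence and awk's default action prints it. A typical use, with hypothetical file names:
    awk '!x[$0]++' access.log > access_dedup.log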
  • Written for Linux. The real point of the example is how to produce ASCII text graphs from a numeric value (anything where uniq -c is useful is a good candidate); a reusable version of the graphing stage is sketched after this entry. Show Sample Output


    52
    netstat -an | grep ESTABLISHED | awk '{print $5}' | awk -F: '{print $1}' | sort | uniq -c | awk '{ printf("%s\t%s\t",$2,$1) ; for (i = 0; i < $1; i++) {printf("*")}; print "" }'
    knassery · 2009-04-27 22:02:19 7
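
    The final awk stage is reusable on its own: anything that produces uniq -c style output (count followed by value) can be turned into a star graph. A minimal sketch with a hypothetical input file:
    sort values.txt | uniq -c | awk '{printf("%s\t%s\t", $2, $1); for (i = 0; i < $1; i++) printf("*"); print ""}'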
  • Checks the Gmail ATOM feed for your account, parses it and outputs a list of unread messages. For some reason sed gets stuck on OS X, so here's a Perl version for the Mac: curl -u username:password --silent "https://mail.google.com/mail/feed/atom" | tr -d '\n' | awk -F '<entry>' '{for (i=2; i<=NF; i++) {print $i}}' | perl -pe 's/^<title>(.*)<\/title>.*<name>(.*)<\/name>.*$/$2 - $1/' If you want to see the name of the last person who added a message to the conversation, change the greediness of the operators like this: curl -u username:password --silent "https://mail.google.com/mail/feed/atom" | tr -d '\n' | awk -F '<entry>' '{for (i=2; i<=NF; i++) {print $i}}' | perl -pe 's/^<title>(.*)<\/title>.*?<name>(.*?)<\/name>.*$/$2 - $1/' Show Sample Output


    47
    curl -u username:password --silent "https://mail.google.com/mail/feed/atom" | tr -d '\n' | awk -F '<entry>' '{for (i=2; i<=NF; i++) {print $i}}' | sed -n "s/<title>\(.*\)<\/title.*name>\(.*\)<\/name>.*/\2 - \1/p"
    postrational · 2009-09-07 21:56:40 13
  • Print all columns except the 1st and 3rd.


    31
    awk '{$1=$3=""}1' file
    zlemini · 2011-10-25 22:15:06 3
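
    Blanking fields this way leaves the surrounding separators behind, so the output has extra spaces where $1 and $3 used to be. A sketch that also squeezes the whitespace, assuming whitespace-separated input:
    awk '{$1=$3=""; gsub(/^ +/,""); gsub(/ +/," ")}1' file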
  • You have an external USB drive or key. Run this command (using the path of any file on the device) and it will simulate unplugging that device. If you only want the port, type: echo $(sudo lshw -businfo | grep -B 1 -m 1 $(df "/path/to/file" | tail -1 | awk '{print $1}' | cut -c 6-8) | head -n 1 | awk '{print $1}' | cut -c 5- | tr ":" "-") Show Sample Output


    30
    echo $(sudo lshw -businfo | grep -B 1 -m 1 $(df "/path/to/file" | tail -1 | awk '{print $1}' | cut -c 6-8) | head -n 1 | awk '{print $1}' | cut -c 5- | tr ":" "-") | sudo tee /sys/bus/usb/drivers/usb/unbind
    tweet78 · 2014-04-06 12:06:29 9
  • What happens here is we tell tar to create "-c" an archive of all files in the current dir "." (recursively) and write the data to stdout "-f -". Next we tell pv the total size via "-s": the "du -sb . | awk '{print $1}'" returns the number of bytes in the current dir, and that gets fed as the "-s" parameter to pv. Finally we gzip the whole stream and write the result to out.tgz. This way pv knows how much data is still left to be processed and can show, for example, that it will take another 4 mins 49 secs to finish. Credit: Peteris Krumins http://www.catonmat.net/blog/unix-utilities-pipe-viewer/ Show Sample Output


    26
    tar -cf - . | pv -s $(du -sb . | awk '{print $1}') | gzip > out.tgz
    opertinicy · 2009-12-18 17:09:08 3
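
    The same trick works in reverse when unpacking: let pv read the archive (so it knows the total size) and feed tar from stdin. A sketch using the archive name from the command above:
    pv out.tgz | tar -xzf -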

  • 25
    netstat -ant | awk '{print $NF}' | grep -v '[a-z]' | sort | uniq -c
    himynameisthor · 2009-02-05 18:02:59 6
  • Show the number of failed login attempts per account. Accounts that do not exist are marked with *. A companion that tallies the attacking IP addresses is sketched after this entry. Show Sample Output


    24
    sudo zcat /var/log/auth.log.*.gz | awk '/Failed password/&&!/for invalid user/{a[$9]++}/Failed password for invalid user/{a["*" $11]++}END{for (i in a) printf "%6s\t%s\n", a[i], i|"sort -n"}'
    point_to_null · 2009-03-21 06:41:59 2
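
    A simpler companion that tallies the attacking IP addresses instead of the accounts, assuming the standard OpenSSH "Failed password ... from <ip> port <port> ssh2" log layout:
    sudo zcat /var/log/auth.log.*.gz | awk '/Failed password/ {print $(NF-3)}' | sort | uniq -c | sort -rn | head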
  • This uses awk to grab the IP address from each request and then sorts and summarises the top 10.


    23
    tail -10000 access_log | awk '{print $1}' | sort | uniq -c | sort -n | tail
    root · 2009-01-25 21:01:52 9
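
    To scan the whole log instead of only the last 10,000 requests, awk can read the file directly; this assumes the common/combined log format where the client address is the first field:
    awk '{print $1}' access_log | sort | uniq -c | sort -n | tail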
  • This makes an alias for a command named 'busy'. The 'busy' command opens a random file in /usr/include to a random line with vim. Drop this in your .bash_aliases and make sure that file is initialized in your .bashrc.


    23
    alias busy='my_file=$(find /usr/include -type f | sort -R | head -n 1); my_len=$(wc -l $my_file | awk "{print $1}"); let "r = $RANDOM % $my_len" 2>/dev/null; vim +$r $my_file'
    busybee · 2010-03-09 21:48:41 9

  • 22
    history | awk '{print $2}' | sort | uniq -c | sort -rn | head
    mikeda · 2009-02-17 14:25:49 3

  • 20
    sudo tcpdump -i wlan0 -n ip | awk '{ print gensub(/(.*)\..*/,"\\1","g",$3), $4, gensub(/(.*)\..*/,"\\1","g",$5) }' | awk -F " > " '{print $1"\n"$2}'
    tweet78 · 2014-04-11 22:41:32 0
  • Takes an input file (count.txt) with one number per line (e.g. 1, 2, 3, 4, 5) and adds up (sums) the first column of numbers. A small extension to averages follows this entry.


    19
    cat count.txt | awk '{ sum+=$1} END {print sum}'
    duxklr · 2009-03-16 00:22:13 8
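
    awk can read the file itself (no cat needed), and the same pattern extends to an average; a minimal sketch:
    awk '{sum+=$1} END {if (NR) print sum, sum/NR}' count.txt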
  • When you fill in a form with Firefox, it suggests values you entered in previous forms with the same field names. This command lists everything Firefox has recorded. Using a "delete from", you can remove annoying Google queries, for example ;-) (a sketch of the delete follows this entry)


    19
    cd ~/.mozilla/firefox/ && sqlite3 `cat profiles.ini | grep Path | awk -F= '{print $2}'`/formhistory.sqlite "select * from moz_formhistory" && cd - > /dev/null
    klipz · 2009-04-13 20:23:37 3
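
    A sketch of the clean-up mentioned above. Google's search box is normally the field named "q", but check the listing first and adjust the field name to whatever you want to purge:
    cd ~/.mozilla/firefox/ && sqlite3 `cat profiles.ini | grep Path | awk -F= '{print $2}'`/formhistory.sqlite "delete from moz_formhistory where fieldname = 'q'" && cd - > /dev/null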
  • Parses /etc/group into "dot" format and pipes it to "display" (ImageMagick) to show a useful diagram of users and groups (empty groups are not shown).


    19
    awk 'BEGIN{FS=":"; print "digraph{"}{split($4, a, ","); for (i in a) printf "\"%s\" [shape=box]\n\"%s\" -> \"%s\"\n", $1, a[i], $1}END{print "}"}' /etc/group|display
    point_to_null · 2011-12-04 01:56:44 1
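
    If you'd rather keep the diagram than view it, the dot output can go through Graphviz instead of display; a sketch with a hypothetical output name, assuming graphviz is installed:
    awk 'BEGIN{FS=":"; print "digraph{"}{split($4, a, ","); for (i in a) printf "\"%s\" [shape=box]\n\"%s\" -> \"%s\"\n", $1, a[i], $1}END{print "}"}' /etc/group | dot -Tpng -o groups.png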
  • This loops through all tables and changes their collations to UTF-8. You should back up beforehand, though, in case some data is lost in the process. A follow-up for the database default is sketched after this entry.


    18
    mysql --database=dbname -B -N -e "SHOW TABLES" | awk '{print "ALTER TABLE", $1, "CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;"}' | mysql --database=dbname &
    root · 2009-03-21 18:45:15 5
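
    Converting the tables does not change the database default, so newly created tables keep the old collation; a hedged follow-up, using the same placeholder dbname as above:
    mysql -e "ALTER DATABASE dbname CHARACTER SET utf8 COLLATE utf8_general_ci;"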
  • Busiest seconds: cat /var/log/secure.log | awk '{print substr($0,0,15)}' | uniq -c | sort -nr | awk '{printf("\n%s ",$0) ; for (i = 0; i<$1 ; i++) {printf("*")};}' Show Sample Output


    17
    cat /var/log/secure.log | awk '{print substr($0,0,12)}' | uniq -c | sort -nr | awk '{printf("\n%s ",$0) ; for (i = 0; i<$1 ; i++) {printf("*")};}'
    knassery · 2009-07-24 07:20:06 4

  • 17
    man -t awk | ps2pdf - awk.pdf
    kev · 2011-11-23 01:40:23 2
  • I use this on Debian testing. It works like the other sorted du variants, but I like small numbers and suffixes :) Show Sample Output


    16
    du --max-depth=1 | sort -r -n | awk '{split("k m g",v); s=1; while($1>1024){$1/=1024; s++} print int($1)" "v[s]"\t"$2}'
    hans · 2009-02-24 11:03:08 4
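
    On systems with reasonably recent GNU coreutils, du and sort can do the formatting and human-numeric sorting themselves; a shorter, awk-free alternative:
    du -h --max-depth=1 | sort -rh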
  • Breaks down and numbers each line and its fields. This is really useful when you are going to parse something with awk but aren't sure exactly where to start. Show Sample Output


    16
    awk '{print NR": "$0; for(i=1;i<=NF;++i)print "\t"i": "$i}'
    recursiverse · 2009-07-23 06:25:31 16
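
    A quick usage example with a custom field separator, e.g. to see how /etc/passwd splits up:
    head -3 /etc/passwd | awk -F: '{print NR": "$0; for(i=1;i<=NF;++i)print "\t"i": "$i}'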
  • I'm working on a group project at the moment and am annoyed at the lack of output by my teammates. Wanting hard metrics of how awesome I am and how awesome they aren't, I wrote this command up. It will print a full repository listing of all files, remove the directories which confuse blame, run svn blame on each individual file, and tally the resulting line counts. It seems quite slow, depending on your repository location, because blame must hit the server for each individual file. You can remove the -R on the first part to print out the tallies for just the current directory. Show Sample Output


    16
    svn ls -R | egrep -v -e "\/$" | xargs svn blame | awk '{print $2}' | sort | uniq -c | sort -r
    askedrelic · 2009-07-29 02:10:45 6
  • This command displays a list of lines that are longer than 72 characters. I use this command to identify those lines in my scripts and cut them short the way I like it.


    16
    awk 'length>72' file
    haivu · 2009-09-10 05:54:41 3
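
    A small extension that also prints where the long lines are, which helps when the file is a script you are about to edit:
    awk 'length>72 {print FNR": "$0}' file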
  • This is a very simple and lightweight way to play DI.FM stations. For a more complete version of the command with proper strings in the menu, try (it couldn't fit in the command field above): zenity --list --width 500 --height 500 --title 'DI.FM' --text 'Pick a Radio' --column 'radio' --column 'url' --print-column 2 $(curl -s http://www.di.fm/ | awk -F '"' '/href="http:.*\.pls.*96k/ {print $2}' | sort | awk -F '/|\.' '{print $(NF-1) " " $0}') | xargs mplayer This command line parses the HTML returned from http://di.fm and displays all radio stations in a nice graphical menu. After a station is chosen, its URL is passed to mplayer so the music can start. Dependencies: x11 with a gtk environment; zenity, a simple app for displaying gtk menus (sudo apt-get install zenity on Ubuntu); mplayer, a simple audio player (sudo apt-get install mplayer on Ubuntu). Show Sample Output


    16
    zenity --list --width 500 --height 500 --column 'radio' --column 'url' --print-column 2 $(curl -s http://www.di.fm/ | awk -F '"' '/href="http:.*\.pls.*96k/ {print $2}' | sort | awk -F '/|\.' '{print $(NF-1) " " $0}') | xargs mplayer
    polaco · 2010-04-28 23:45:35 7