Commands using awk (1,418)


  • 121
    history | awk '{a[$2]++}END{for(i in a){print a[i] " " i}}' | sort -rn | head
    unixmonkey611 · 2009-02-11 13:12:29 28
  • I find this terribly useful for grepping through a file when I only want one block of text. There's "grep -A N pattern file.txt" to see a fixed number of lines following your pattern, but what if you want to see the whole block? Say, the output of "dmidecode" (as root): dmidecode | awk '/Battery/,/^$/' prints everything from the line matching "Battery" up to the next blank line, i.e. the whole battery block. I find this extremely useful when I want to see whole blocks of text based on a pattern and don't care about the rest of the output. It could be used against the '/etc/securetty/user' file on Unix to find the block for a specific user, or against VirtualHost or Directory definitions in Apache configs; an Apache example is sketched after the command below. The scenarios go on for any text formatted in blocks. Very handy.


    94
    awk '/start_pattern/,/stop_pattern/' file.txt
    atoponce · 2009-03-28 14:28:59 25
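
    As a sketch of the same block-extraction idea against an Apache config (the path and site name here are only illustrative, not from the original entry):
    awk '/<VirtualHost/,/<\/VirtualHost>/' /etc/apache2/sites-enabled/example.conf
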
  • Using awk, remove duplicate lines from a file without sorting it (sorting would reorder the contents). awk keeps the original order while dropping the duplicates, and you can redirect the result into another file; a minimal usage sketch follows below.


    86
    awk '!x[$0]++' <file>
    din7 · 2009-12-20 02:33:21 23
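
    A minimal usage sketch with hypothetical file names; the first occurrence of each line is kept, in the original order:
    awk '!x[$0]++' messy.log > deduped.log
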
  • Written for Linux. The real point is how to produce ASCII text graphs from a numeric value (anything where uniq -c is useful is a good candidate); another sketch of the same trick follows below.


    53
    netstat -an | grep ESTABLISHED | awk '{print $5}' | awk -F: '{print $1}' | sort | uniq -c | awk '{ printf("%s\t%s\t",$2,$1) ; for (i = 0; i < $1; i++) {printf("*")}; print "" }'
    knassery · 2009-04-27 22:02:19 24
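
    The same star-graph idea applied to another uniq -c candidate, a rough count of login shells in /etc/passwd (just an illustrative sketch):
    awk -F: '{print $7}' /etc/passwd | sort | uniq -c | awk '{ printf("%s\t%s\t",$2,$1) ; for (i = 0; i < $1; i++) {printf("*")}; print "" }'
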
  • Checks the Gmail ATOM feed for your account, parses it and outputs a list of unread messages. For some reason sed gets stuck on OS X, so here's a Perl version for the Mac: curl -u username:password --silent "https://mail.google.com/mail/feed/atom" | tr -d '\n' | awk -F '<entry>' '{for (i=2; i<=NF; i++) {print $i}}' | perl -pe 's/^<title>(.*)<\/title>.*<name>(.*)<\/name>.*$/$2 - $1/' If you want to see the name of the last person who added a message to the conversation, change the greediness of the operators like this: curl -u username:password --silent "https://mail.google.com/mail/feed/atom" | tr -d '\n' | awk -F '<entry>' '{for (i=2; i<=NF; i++) {print $i}}' | perl -pe 's/^<title>(.*)<\/title>.*?<name>(.*?)<\/name>.*$/$2 - $1/'


    47
    curl -u username:password --silent "https://mail.google.com/mail/feed/atom" | tr -d '\n' | awk -F '<entry>' '{for (i=2; i<=NF; i++) {print $i}}' | sed -n "s/<title>\(.*\)<\/title.*name>\(.*\)<\/name>.*/\2 - \1/p"
    postrational · 2009-09-07 21:56:40 43
  • Print all columns except the 1st and 3rd. Blanking a field leaves its separator behind, so the output keeps some extra spaces; a demonstration follows below.


    31
    awk '{$1=$3=""}1' file
    zlemini · 2011-10-25 22:15:06 16
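
    A quick demonstration with throwaway input; tr -s ' ' is one way to squeeze the doubled separators the blanked fields leave behind:
    echo "a b c d e" | awk '{$1=$3=""}1'
    echo "a b c d e" | awk '{$1=$3=""}1' | tr -s ' '
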
  • You have an external USB drive or key. Run this command (using the file path of anything on your device) and it will simulate unplugging the device. If you just want the port identifier, type: echo $(sudo lshw -businfo | grep -B 1 -m 1 $(df "/path/to/file" | tail -1 | awk '{print $1}' | cut -c 6-8) | head -n 1 | awk '{print $1}' | cut -c 5- | tr ":" "-")


    30
    echo $(sudo lshw -businfo | grep -B 1 -m 1 $(df "/path/to/file" | tail -1 | awk '{print $1}' | cut -c 6-8) | head -n 1 | awk '{print $1}' | cut -c 5- | tr ":" "-") | sudo tee /sys/bus/usb/drivers/usb/unbind
    tweet78 · 2014-04-06 12:06:29 19
  • What happens here is we tell tar to create ("-c") an archive of all files in the current dir "." (recursively) and write the data to stdout ("-f -"). Next we give pv the total size ("-s") of all files in the current dir: du -sb . | awk '{print $1}' returns the number of bytes in the current dir, and that gets fed as the -s parameter to pv. Then we gzip the whole stream and write the result to out.tgz. This way pv knows how much data is still left to be processed and shows, for example, that it will take another 4 mins 49 secs to finish. A sketch for the reverse direction follows below. Credit: Peteris Krumins http://www.catonmat.net/blog/unix-utilities-pipe-viewer/


    27
    tar -cf - . | pv -s $(du -sb . | awk '{print $1}') | gzip > out.tgz
    opertinicy · 2009-12-18 17:09:08 13
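
    Unpacking can borrow the same progress trick; a sketch in which pv reads the archive (so it knows the total size) and tar extracts from stdin:
    pv out.tgz | tar -xzf -
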

  • 26
    netstat -ant | awk '{print $NF}' | grep -v '[a-z]' | sort | uniq -c
    himynameisthor · 2009-02-05 18:02:59 34
  • Shows the number of failed login attempts per account. If the user does not exist, the account is marked with *.


    24
    sudo zcat /var/log/auth.log.*.gz | awk '/Failed password/&&!/for invalid user/{a[$9]++}/Failed password for invalid user/{a["*" $11]++}END{for (i in a) printf "%6s\t%s\n", a[i], i|"sort -n"}'
    point_to_null · 2009-03-21 06:41:59 14
  • This makes an alias for a command named 'busy'. The 'busy' command opens a random file in /usr/include at a random line with vim. Drop this in your .bash_aliases and make sure that file is sourced from your .bashrc.


    23
    alias busy='my_file=$(find /usr/include -type f | sort -R | head -n 1); my_len=$(wc -l $my_file | awk "{print \$1}"); let "r = $RANDOM % $my_len" 2>/dev/null; vim +$r $my_file'
    busybee · 2010-03-09 21:48:41 19
  • This uses awk to grab the IP address from each request, then sorts and summarises the top 10. A variation for the most-requested paths follows below.


    22
    tail -10000 access_log | awk '{print $1}' | sort | uniq -c | sort -n | tail
    root · 2009-01-25 21:01:52 47
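
    Assuming the common/combined Apache log format, swapping $1 for $7 reports the most-requested paths instead of the busiest clients (a sketch only):
    tail -10000 access_log | awk '{print $7}' | sort | uniq -c | sort -n | tail
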

  • 22
    history | awk '{print $2}' | sort | uniq -c | sort -rn | head
    mikeda · 2009-02-17 14:25:49 17

  • 21
    sudo tcpdump -i wlan0 -n ip | awk '{ print gensub(/(.*)\..*/,"\\1","g",$3), $4, gensub(/(.*)\..*/,"\\1","g",$5) }' | awk -F " > " '{print $1"\n"$2}'
    tweet78 · 2014-04-11 22:41:32 9
  • Takes an input file (count.txt) containing one number per line, e.g. 1 2 3 4 5, and adds up the first column of numbers. Two small variations are sketched below.


    19
    cat count.txt | awk '{ sum+=$1} END {print sum}'
    duxklr · 2009-03-16 00:22:13 28
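
    The same summing idiom without the extra cat, plus an illustrative variant that sums a different column (the size column of ls -l):
    awk '{ sum+=$1} END {print sum}' count.txt
    ls -l | awk '{ sum+=$5} END {print sum}'
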
  • When you fill in a form with Firefox, you see things you entered in previous forms with the same field names. This command lists everything Firefox has recorded. Using a "delete from", you can remove annoying Google queries, for example ;-) A sketch of that clean-up follows below.


    19
    cd ~/.mozilla/firefox/ && sqlite3 `cat profiles.ini | grep Path | awk -F= '{print $2}'`/formhistory.sqlite "select * from moz_formhistory" && cd - > /dev/null
    klipz · 2009-04-13 20:23:37 15
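
    A sketch of the clean-up mentioned above, assuming the standard moz_formhistory schema (fieldname/value columns) and using 'q' as a hypothetical field name; close Firefox and back up the profile first:
    cd ~/.mozilla/firefox/ && sqlite3 `cat profiles.ini | grep Path | awk -F= '{print $2}'`/formhistory.sqlite "delete from moz_formhistory where fieldname='q'" && cd - > /dev/null
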
  • Parses /etc/group into "dot" format and passes it to "display" (ImageMagick) to show a useful diagram of users and groups (empty groups are not shown). A file-based variant follows below.


    19
    awk 'BEGIN{FS=":"; print "digraph{"}{split($4, a, ","); for (i in a) printf "\"%s\" [shape=box]\n\"%s\" -> \"%s\"\n", $1, a[i], $1}END{print "}"}' /etc/group|display
    point_to_null · 2011-12-04 01:56:44 14
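
    Without an X display, the same dot output can be rendered to a file instead, assuming graphviz is installed (the output name is arbitrary):
    awk 'BEGIN{FS=":"; print "digraph{"}{split($4, a, ","); for (i in a) printf "\"%s\" [shape=box]\n\"%s\" -> \"%s\"\n", $1, a[i], $1}END{print "}"}' /etc/group | dot -Tpng -o groups.png
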
  • This loops through all tables and changes their collations to UTF-8. Back up the database beforehand, though, in case some data is lost in the process; a more cautious two-step variant is sketched below.


    18
    mysql --database=dbname -B -N -e "SHOW TABLES" | awk '{print "ALTER TABLE", $1, "CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;"}' | mysql --database=dbname &
    root · 2009-03-21 18:45:15 15
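
    A more cautious sketch: write the generated statements to a file (the name is arbitrary), review them, and only then feed them back to mysql:
    mysql --database=dbname -B -N -e "SHOW TABLES" | awk '{print "ALTER TABLE", $1, "CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;"}' > convert_utf8.sql
    mysql --database=dbname < convert_utf8.sql
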
  • Busiest seconds: cat /var/log/secure.log | awk '{print substr($0,0,15)}' | uniq -c | sort -nr | awk '{printf("\n%s ",$0) ; for (i = 0; i<$1 ; i++) {printf("*")};}'


    17
    cat /var/log/secure.log | awk '{print substr($0,0,12)}' | uniq -c | sort -nr | awk '{printf("\n%s ",$0) ; for (i = 0; i<$1 ; i++) {printf("*")};}'
    knassery · 2009-07-24 07:20:06 12

  • 17
    man -t awk | ps2pdf - awk.pdf
    kev · 2011-11-23 01:40:23 8
  • I use this on Debian testing. It works like the other sorted du variants, but I like small numbers and suffixes :) A simpler GNU-only alternative is sketched below.


    16
    du --max-depth=1 | sort -r -n | awk '{split("k m g",v); s=1; while($1>1024){$1/=1024; s++} print int($1)" "v[s]"\t"$2}'
    hans · 2009-02-24 11:03:08 15
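
    On systems whose GNU coreutils support human-readable sorting (sort -h), much the same result is available more simply; a sketch:
    du -h --max-depth=1 | sort -rh
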
  • Breaks down and numbers each line and its fields. This is really useful when you are going to parse something with awk but aren't sure exactly where to start; a quick demonstration follows below.


    16
    awk '{print NR": "$0; for(i=1;i<=NF;++i)print "\t"i": "$i}'
    recursiverse · 2009-07-23 06:25:31 56
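
    A quick way to see what it does before pointing it at real data (the sample text is arbitrary):
    echo "alpha beta gamma" | awk '{print NR": "$0; for(i=1;i<=NF;++i)print "\t"i": "$i}'
    This prints the line number, the whole line, and then each numbered field on its own indented line.
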
  • I'm working on a group project at the moment and am annoyed at the lack of output by my teammates. Wanting hard metrics of how awesome I am and how awesome they aren't, I wrote this up. It prints a full repository listing of all files, removes the directories (which confuse blame), runs svn blame on each individual file, and tallies the resulting line counts per author. It can be quite slow, depending on where your repository is, because blame must hit the server for each individual file. Remove the -R on the first part to print the tallies for just the current directory.


    16
    svn ls -R | egrep -v -e "\/$" | xargs svn blame | awk '{print $2}' | sort | uniq -c | sort -r
    askedrelic · 2009-07-29 02:10:45 15
  • This command displays a list of lines that are longer than 72 characters. I use it to identify those lines in my scripts and cut them short the way I like it; a variant that also reports the line numbers follows below.


    16
    awk 'length>72' file
    haivu · 2009-09-10 05:54:41 12
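
    A variation that also shows where the long lines are, which helps when the target is a script (the 72-character limit is just this example's choice):
    awk 'length>72 {print FILENAME": "FNR": "$0}' file
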
  • This is a very simple and lightweight way to play DI.FM stations. For a more complete version of the command with proper strings in the menu (it couldn't fit in the command field above), try: zenity --list --width 500 --height 500 --title 'DI.FM' --text 'Pick a Radio' --column 'radio' --column 'url' --print-column 2 $(curl -s http://www.di.fm/ | awk -F '"' '/href="http:.*\.pls.*96k/ {print $2}' | sort | awk -F '/|\.' '{print $(NF-1) " " $0}') | xargs mplayer  The command parses the HTML returned from http://di.fm and displays all radio stations in a nice graphical menu; after a station is chosen, its URL is passed to mplayer so the music can start. Dependencies: X11 with a GTK environment; zenity, a simple app for displaying GTK menus (sudo apt-get install zenity on Ubuntu); mplayer, a simple audio player (sudo apt-get install mplayer on Ubuntu).


    16
    zenity --list --width 500 --height 500 --column 'radio' --column 'url' --print-column 2 $(curl -s http://www.di.fm/ | awk -F '"' '/href="http:.*\.pls.*96k/ {print $2}' | sort | awk -F '/|\.' '{print $(NF-1) " " $0}') | xargs mplayer
    polaco · 2010-04-28 23:45:35 14