Commands tagged awk (348)


  • 0
    sudo yum remove streams-$(uname -r)
    totalnut151 · 2013-11-21 17:41:19 6
  • It sums the sizes recorded in the RPM packages, not the installed size of the files on disk. A per-package variant is sketched after this entry.


    0
    rpm -qa --queryformat '%{SIZE}\n' | awk '{sum += $1} END {printf("Total size in packages = %4.1f GB\n", sum/1024**3)}'
    skytux · 2013-12-14 20:22:41 10
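
    To see which packages dominate the total, a hedged variant listing the ten largest by recorded size (rpm's --queryformat supports %{NAME} alongside %{SIZE}):

    rpm -qa --queryformat '%{SIZE} %{NAME}\n' | sort -rn | head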

  • 0
    while [ 1 ]; do ps aux | awk '{if ($8 ~ "D") print}'; sleep 1; done
    paulp · 2014-01-21 08:20:04 6
  • Grep can search files and directories recursively. With the -Z option and xargs -0, grep emits NUL-terminated filenames, so names containing spaces pass safely to other commands such as rm (see the sketch after this entry).


    0
    grep -Rl "pattern" files_or_dir
    N1nsun · 2014-04-06 18:18:07 7
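
    The rm variant the description alludes to, as a minimal sketch (-Z emits NUL-terminated names for xargs -0; "pattern" and files_or_dir are placeholders):

    grep -RlZ "pattern" files_or_dir | xargs -0 rm
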

  • 0
    grep URL ~/annex/.git/annex/webapp.html | tr -d '">' | awk -F= '{print $4 "=" $5}'
    kseistrup · 2014-04-20 08:46:37 8

  • 0
    ip route list 0/0
    thrix · 2014-06-09 16:07:38 7

  • 0
    mco ping | head -n -4 | awk '{print $1}' | sort
    mrwulf · 2014-06-24 18:20:16 7
  • Original command: cat "log" | grep "text to grep" | awk '{print $1}' | sort -n | uniq -c | sort -rn | head -n 100. The cat and grep are wasted processes, especially when awk is already in the pipeline and can match the pattern itself.


    0
    awk '/text to grep/{print $1}' "log" | sort -n | uniq -c | sort -rn | head -n 100
    kln0thing · 2014-07-09 08:48:06 9
  • The awk part of the code collates the fields from the 2nd to the Nth field, to handle any svn directories that have spaces in them (typical when working with code used interchangeably with a Windows environment, for example by documentation teams). The output is passed to "ls -ld"; the -d option tells ls to list the directory itself rather than its contents. Adding -p would mark directories, links and executables for added readability. Finally, the entire constructed command is passed to sh for execution.


    0
    svn status | awk '{ path=$2; for (i=3; i<=NF; i++) path=path" "$i; print "ls -ld \""path"\"" }' | sh
    kln0thing · 2014-07-09 09:41:24 14
  • Gets the Hardware UUID of the current machine using system_profiler.


    0
    system_profiler SPHardwareDataType | awk '/UUID/ { print $3; }'
    thealanberman · 2014-07-25 06:54:40 8
  • This command draws a small text histogram of size buckets (5 MB in this example), counting entries per bucket rather than listing individual files. To change the bucket size, replace 4+5*int($1/5) with the general form jump-1+jump*int($1/jump). Adjust the hist=hist-5 step to make the bars longer or shorter. A 10 MB variant is sketched after this entry.


    0
    du -sm *| sort -nr | awk '{ size=4+5*int($1/5); a[size]++ }; END { print "size(from->to) number graph"; for(i in a){ printf("%d %d ",i,a[i]) ; hist=a[i]; while(hist>0){printf("#") ; hist=hist-5} ; printf("\n")}}'
    higuita · 2014-08-19 14:43:20 8
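
    Applying the jump-1+jump*int($1/jump) rule with a hypothetical 10 MB bucket, the binning part becomes:

    du -sm * | sort -nr | awk '{ size=9+10*int($1/10); a[size]++ }; END { for (i in a) printf("%d %d\n", i, a[i]) }'
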
  • Caution: destructively overwrites filenames. Useful for concatenating PDFs in date order using pdftk (see the sketch after this entry).


    0
    find . -name "*.pdf" -print0 | xargs -r0 stat -c %y\ %n | sort | awk '{print $4}' | gawk 'BEGIN{ a=1 }{ printf "mv %s %04d.pdf\n", $0, a++ }' | bash
    Randy_Legault · 2014-09-23 06:40:45 9
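
    The concatenation step the description alludes to, as a hedged sketch (the output filename is hypothetical):

    pdftk 0*.pdf cat output combined.pdf
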
  • It's possible to use a simple regex to extract the user's full name from the finger command. The final echo is optional; it just strips the leading space. A sed-only alternative is sketched after this entry.


    0
    finger $(whoami) | egrep -o 'Name: [a-zA-Z0-9 ]{1,}' | cut -d ':' -f 2 | xargs echo
    swebber · 2014-09-24 01:22:07 9
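
    A single-process alternative, as a hedged sketch (assumes the Name field appears in finger's header line):

    finger $(whoami) | sed -n 's/.*Name: \([a-zA-Z0-9 ]*\).*/\1/p'
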
  • Given a list of hosts, ssh into each one and echo its name only if 'processname' is not running.


    0
    for i in `cat hosts_list`; do RES=`ssh myusername@${i} "ps -ef " |awk '/[p]rocessname/ {print $2}'`; test "x${RES}" = "x" && echo $i; done
    arlequin · 2014-10-03 14:57:54 9
  • This is useful as a git hook to print out the directories that had files changed in a commit. Each directory is its own package.


    0
    git log -n 1 --name-only --pretty=oneline | awk -F/ 'NR>=2 {seen[$1]}; END {for (d in seen) print d}'
    Romster · 2014-12-13 10:21:46 9
  • The sample output shows each record/row with the last field zero-padded to 26 digits. For testing, I used (L)ine and field/column numbers: line 4, field 2 = L42, etc., up to the last field, where I just used line numbers x 4. I had some whitespace-delimited files with variable-length records/rows (having 4-5 fields/columns) which required reformatting by zero-padding the last field to 26 digits. This reads NF (not $NF), awk's built-in (N)umber of (F)ields count, with a simple conditional that assumes any line whose field count is not 4 has 5 fields. If needed, more conditional checks can be added, and the padded field changed to any field ($1, $5, etc.). A field-count-agnostic variant is sketched after this entry.


    0
    awk '{if (NF == 4) printf "%s %s %s %026d\n", $1,$2,$3,$4; else printf "%s %s %s %s %026d\n", $1,$2,$3,$4,$5}' yourfilegoes.here >> yournewfilegoes.here
    genatomics · 2014-12-20 02:53:35 8
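
    A field-count-agnostic sketch: assigning to $NF makes awk rebuild the record, so this pads whichever field is last, whatever the count:

    awk '{ $NF = sprintf("%026d", $NF); print }' yourfilegoes.here >> yournewfilegoes.here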
  • Use this command to watch Apache access logs in real time and see which pages are getting hit; $1 is the client address, and which field $12 holds depends on your log format. A non-streaming aggregate is sketched after this entry.


    0
    tail -f access_log | awk '{print $1 , $12}'
    tyzbit · 2014-12-24 14:15:52 11
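
    To aggregate instead of streaming, a minimal sketch counting hits per client address over the whole log:

    awk '{print $1}' access_log | sort | uniq -c | sort -rn | head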

  • 0
    sudo du -kx / | sort -n | awk '{print $1/(1000*1000) " G", $2}'
    umiyosh · 2015-01-05 04:49:24 8
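
    Note the division by 1000*1000 gives decimal gigabytes from du's kilobyte blocks; if binary units are preferred, a sketch of the same pipeline:

    sudo du -kx / | sort -n | awk '{print $1/(1024*1024) " GiB", $2}'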
  • OSX users, as well as Linux users with clipboard commands, can remove duplicate lines from their copy buffer with this command. I use this often when I have to copy a long list of items that I didn't generate but need to paste elsewhere as a unique list. If retaining the original order of lines isn't important to you, the following is easier to remember: pbpaste | sort | uniq | pbcopy. A Linux equivalent is sketched after this entry.


    0
    pbpaste | awk ' !x[$0]++' | pbcopy
    dmengelt · 2015-02-05 19:38:38 12
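
    On Linux, a hedged equivalent using xclip (assumes xclip is installed and you want the CLIPBOARD selection):

    xclip -o -selection clipboard | awk '!x[$0]++' | xclip -i -selection clipboard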
  • Use lsof, grepping for any PID that matches a given process name such as "node". A shorter pgrep-based variant is sketched after this entry.


    0
    lsof -i -n -P | grep -e "$(ps aux | grep node | grep -v grep | awk -F' ' '{print $2}' | xargs | awk -F' ' '{str = $1; for(i = 2; i <= NF; i++) {str = str "\\|" $i} print str}')"
    hochmeister · 2015-02-14 23:24:00 10
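
    A shorter sketch with pgrep; its -d, flag joins the PIDs with commas, which lsof -p accepts directly:

    lsof -i -n -P -p "$(pgrep -d, node)"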
  • Replaces a grep | sed pipeline with a single awk script.


    0
    watch -n10 -d sh -c 'sensors | awk '\''/:.*RPM/ { sub("[^:]*:","") ; print $1 }'\'
    my_username · 2015-04-29 16:50:28 10

  • 0
    pgrep -f /usr/sbin/httpd | awk '{print"-p " $1}' | xargs strace
    savagemike · 2015-06-10 22:55:35 12
  • Removes directories whose total size is less than 1028 KB. This works for systems where an empty directory occupies 4 KB, so a directory holding 1 MB (1024 KB) of content or less is removed; the path used is relative to the directory where the command was initially executed (safer than some other options I found). Adjust the 1028 value for your needs. It is wise to test the results before proceeding with the removal: simply run all but the last two stages to see a list of what will be removed: du | awk '{if($1<1028)print;}' | cut -d $'\t' -f 2- . If you're unsure what size an empty directory is on your system, test it like this: mkdir test; du test; rmdir test


    0
    du | awk '{if($1<1028)print;}' | cut -d $'\t' -f 2- | tr "\n" "\0" | xargs -0 rm -rf
    i814u2 · 2015-06-25 16:00:48 11
  • Don't want to open an editor just to view a bunch of XML files in an easy-to-read format? Now you can do it from the comfort of your own command line! :-) This creates a new function, xmlpager, which shows an XML file in its entirety but with the actual content (non-tag text) highlighted. It does this by setting the foreground to color #4 (red in setf's ordering) after every tag and resetting it before the next tag. (Hint: try `tput bold` as an alternative.) I use xmlindent to neatly reflow and indent the text, but that's optional; if you don't have xmlindent, just replace it with cat. The example also pipes into the optional less pager; note the -r option, which allows raw escape codes to reach the terminal. A usage example follows this entry.


    0
    xmlpager() { xmlindent "$@" | awk '{gsub(">",">'`tput setf 4`'"); gsub("<","'`tput sgr0`'<"); print;} END {print "'`tput sgr0`'"}' | less -r; }
    hackerb9 · 2015-07-12 09:22:10 11
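
    Once the function is defined, usage is just (the filename is hypothetical):

    xmlpager somefile.xml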

  • 0
    eval `cli53 list |grep Name | sed "s/\.$//g" | awk '{printf("echo %s; cli53 export %s > %s;\n", $2, $2, $2);}'`
    cfb · 2015-07-21 14:16:30 10