Commands using watch (155)

  • This may help you monitor your load balancers or reverse proxies, if you use them. It is useful for discovering timeouts, and it lets you know when one or more of your application servers is not connected.


    2
    watch -n 1 "/usr/sbin/lsof -p PID |awk '/TCP/{split(\$8,A,\":\"); split(A[2],B,\">\") ; split(B[1],C,\"-\"); print A[1],C[1],B[2], \$9}' | sort | uniq -c"
    ideivid · 2011-08-12 19:16:38 3
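
    A related sketch that avoids hard-coding the PID, assuming the proxy is an nginx process (hypothetical name, substitute your own) and that pgrep is available:

    # pgrep -d, prints matching PIDs comma-separated, which is what lsof -p expects;
    # -n/-P skip DNS and port-name lookups, -i TCP -a restricts to this process's TCP sockets
    watch -n 1 'lsof -nP -iTCP -a -p "$(pgrep -d, nginx)"'
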
  • In certain cases you might need to monitor the server load caused by a certain process, for example HTTP. While stress testing Apache with ab (Apache Benchmark) you may want to monitor the server status and load, the number of spawned HTTP processes, the number of established connections, the number of connections in CLOSE_WAIT state, Apache's memory footprint, and so on.


    2
    watch -n1 "uptime && ps auxw|grep http|grep -v grep | grep -v watch|wc -l && netstat -ntup|grep :80 |grep ESTABLISHED|wc -l && netstat -ntup|grep :80|grep WAIT|wc -l && free -mo && ps -ylC httpd --sort:rss|tail -3|awk '{print \$8}'"
    rockon · 2012-06-06 12:12:10 5
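
    To generate the load this line monitors, an ab run in a second terminal might look like this (URL and request counts are hypothetical):

    # 10000 requests, 50 concurrent, against a local Apache vhost
    ab -n 10000 -c 50 http://localhost/
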
  • This handles the case where you have a single call or channel. Other commands drop that result when only one call or channel is active, because the output switches the noun from plural to singular.


    2
    watch "asterisk -vvvvvrx 'core show channels' | egrep \"(call|channel)\""
    rowshi · 2012-08-29 13:40:45 5
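
    A variant sketch that adds a one-second refresh and change highlighting (assumes Asterisk is running and you can open its remote console):

    watch -n 1 -d "asterisk -rx 'core show channels' | egrep '(call|channel)'"
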
  • Sends the USR1 signal every second (-n 1) to any process named exactly "dd", which makes GNU dd print its transfer statistics. On some systems the signal is INFO or SIGINFO instead; see the signal list in: man kill


    2
    watch -n 1 pkill -USR1 "^dd$"
    ivanalejandro0 · 2012-08-31 05:15:45 4
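
    A fuller sketch of the flow, with a hypothetical copy in progress; dd writes its statistics to its own stderr each time it receives USR1:

    # In one terminal, a hypothetical transfer:
    dd if=/dev/zero of=/tmp/testfile bs=1M count=1024
    # In another terminal, poke it once a second:
    watch -n 1 pkill -USR1 '^dd$'

    On recent GNU coreutils, dd's own status=progress option gives the same information without signals.
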
  • Sometimes top/htop don't give the fine-grained detail on memory usage you might need. Sum up exactly the memory types you want.


    2
    watch "awk '/Rss/{sum += \$2; } END{print sum, \"kB\"}' < /proc/$(pidof firefox)/smaps"
    gumnos · 2015-09-19 00:36:34 18
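
    Note that $(pidof firefox) is expanded once by the invoking shell, not on every refresh, and it assumes a single firefox process. A sketch that sums Pss instead, which apportions shared pages across the processes using them (process name still hypothetical):

    watch "awk '/^Pss:/{sum += \$2} END{print sum, \"kB\"}' < /proc/$(pidof firefox)/smaps"
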

  • 2
    watch grep \"cpu MHz\" /proc/cpuinfo
    wuseman1 · 2018-11-11 00:45:28 474

  • 2
    watch -c "netstat -natp 2>/dev/null | tail -n +3 | awk '{print \$6}' | sort | uniq -c"
    emanuele · 2018-11-22 10:37:48 333

  • 2
    watch ss -stplu
    wuseman1 · 2019-07-16 20:41:36 43
  • Use watch, which is really hard to pass nested quotes to, and insert the newlines where they belong in the HTTP request: after HTTP/1.1, after the Host header, and two newlines at the end before the EOF. I use this all day. (Shown on one line because commandlinefu's interface has no heredoc support.)


    1
    watch -n 1 nc localhost 80 '<<EOF GET / HTTP/1.1 Host: tux-ninja Connection: Close EOF'
    JustinHop · 2009-08-06 23:20:31 4
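
    A sketch that sidesteps the heredoc and the quoting problem by building the request with printf (host and path are hypothetical):

    watch -n 1 "printf 'GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n' | nc localhost 80"
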
  • If you rely on ndiswrapper and cannot use kismet, this command may be of service. watch -n .5 "iwlist wlan0 scan | egrep 'ESSID|Encryption'" Or: watch -n .5 "iwlist wlan0 scan | egrep 'ESSID|Encryption' | egrep 'linksys'" Hopefully you'll find some dd-wrt compatible routers.


    1
    watch -n .5 "iwlist wlan0 scan"
    Abiden · 2009-08-20 23:05:04 3
  • If you need to keep an eye on a command whose output changes, use the watch command. For example, to keep an eye on your load average:


    1
    watch 'cat /proc/loadavg'
    0disse0 · 2009-09-03 20:10:46 4
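
    A slight variation, with a 5-second interval and change highlighting:

    watch -n 5 -d 'cat /proc/loadavg'

    Running uptime under watch shows the same averages with a little more context.
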
  • This time I added a time-stamped print of the remaining energy, once per minute. The full sample shows large discrepancies as time passes, converging to accuracy near the end.


    1
    echo start > battery.txt; watch -n 60 'date >> battery.txt ; acpi -b >> battery.txt'
    m33600 · 2009-10-19 05:28:15 4
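
    To follow the log as it grows, a second terminal can simply tail it (file name as above):

    tail -f battery.txt
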

  • 1
    watch -n 60 du /var/log/messages
    rbossy · 2009-10-27 14:53:41 3
  • If you just executed some long command, like "ps -aefww | grep -i [m]yProcess", and you don't want to retype it or cycle back through history and fiddle with quoting, you can use history substitution.


    1
    watch -n1 -d !!
    TeacherTiger · 2009-11-24 21:01:14 3
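
    A sketch of the expansion: the interactive shell substitutes the previous command for !! before watch ever runs, so it works best when that command has no pipes or tricky quoting (example command hypothetical):

    df -h /var          # the command you just ran
    watch -n1 -d !!     # the shell expands this to: watch -n1 -d df -h /var
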

  • 1
    watch -n 1 -d "finger"
    tsiqueira · 2009-12-08 14:53:18 3
  • To monitor .vmdk files during snapshot deletion (commit) on ESX only (ESXi doesn't have the watch command):
    1. Navigate to the VM directory containing the .vmdk files.
    2. # watch "ls -tough --full-time *.vmdk"
    where:
    -t sorts by modification time
    -o omits group information (to narrow the output)
    -u sorts by access time
    -g is only here to complete the mnemonic word 'tough'
    -h prints sizes in human-readable format (e.g., 1K 234M 2G)
    --full-time sets the time style to full-iso and omits user information (to narrow the output)
    Optionally useful parameters to the watch command:
    -d highlights changes between updates
    -n sets the seconds to wait between updates (default is 2)
    -t turns off printing the header


    1
    watch 'ls -tough --full-time *.vmdk'
    vRobM · 2010-08-20 17:28:28 6
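
    A sketch combining the optional watch flags mentioned above:

    watch -d -n 5 'ls -tough --full-time *.vmdk'
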
  • Notes: 1) the -n option of watch takes seconds. 2) the -t option of notify-send takes milliseconds. 3) All the quotes in the example are required if the notification message is more than one word. 4) I couldn't get this to run in the background (appending & fails). Any suggestions/improvements welcome.


    1
    watch -n 900 "notify-send -t 10000 'Look away. Rest your eyes'"
    b_t · 2010-10-05 09:39:31 6
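
    On point 4: watch wants a terminal, so a plain loop is easier to background. A sketch of that alternative (interval and message as above):

    while true; do notify-send -t 10000 'Look away. Rest your eyes'; sleep 900; done &
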
  • Great for watching things like Maildirs or any other queue directory.


    1
    watch "cat `ls -rcA1 | tail -n1`"
    donnoman · 2011-03-25 01:22:05 8
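
    Note that the backquoted ls is expanded once, when watch starts, so the watched file never changes. A sketch that re-selects the newest file on every refresh:

    watch 'cat "$(ls -rcA1 | tail -n 1)"'
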

  • 1
    watch !!
    wincus · 2011-07-05 12:50:56 7
  • If you add the -d flag, each difference in the command's output is highlighted. I also monitor individual drives by adding them to df. It makes for a nice thin status line that I can shove to the bottom of the monitor.


    1
    watch -d -n 5 df
    pcphillips · 2011-08-24 19:45:36 6
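
    A sketch with hypothetical mount points passed to df, plus human-readable sizes:

    watch -d -n 5 'df -h / /home /var'
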

  • 1
    watch -n 1 "netstat -ntu | sed '1,2d' | awk '{ print \$6 }' | sort | uniq -c | sort -k 2"
    facecool · 2011-09-30 09:04:14 3

  • 1
    watch -t -c -n30 'wget -q -O- "http://wwwapps.ups.com/WebTracking/processInputRequest?TypeOfInquiryNumber=T&InquiryNumber1=1Z4WYXXXXXXXXXX" | html2text | sed -n "/Shipment Progress/,/Shipping Information/p" | grep -v "*" | ccze -A'
    mfr · 2013-06-20 06:01:25 15
  • Starts and displays a timer. The banner command is part of the sysvbanner package; echo or figlet could be used instead. Stop the timer with Ctrl-C and the elapsed time is printed.


    1
    alias timer='export ts=$(date +%s);p='\''$(date -u -d @"$(($(date +%s)-$ts))" +"%H.%M.%S")'\'';watch -n 1 -t banner $p;eval "echo $p"'
    ichbins · 2013-08-24 16:18:45 13
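
    A usage sketch, after defining the alias above:

    timer          # shows the running clock; press Ctrl-C to stop
    # the elapsed time is then echoed in HH.MM.SS form
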
  • Like top, but for files


    1
    watch -d -n 2 'df; ls -FlAt;'
    G2G · 2013-09-17 05:44:47 6
  • Watch a dig query in progress.


    1
    watch -n1 dig google.com
    ene2002 · 2013-12-26 19:23:27 13