Commands tagged tail (64)

  • Run the alias command, then issue ps aux | head and resize your terminal window (putty/console/hyperterm/xterm/etc), then issue the same command and you'll understand. ${LINES:-`tput lines 2>/dev/null||echo -n 12`} instructs the shell that if LINES is not set or null, it should use the output of `tput lines` (ncurses-based terminal access) to get the number of lines in your terminal. If that doesn't work either, it falls back to the default of 12 (12 - 2 = 10). The default for head is to output the first 10 lines; this alias changes that to output the first x lines instead, where x is the number of lines currently displayed on your terminal minus 2. The -2 is there so that the top line displayed is the command you ran that used head, i.e. the prompt. Depending on whether your PS1 and/or PROMPT_COMMAND output more than 1 line (mine is 3), you will want to increase the offset beyond -2. So with my prompt being the following, I need -7, or -5 if I only want to display the command line at the top. ( https://www.askapache.com/linux/bash-power-prompt/ ) 275MB/748MB [7995:7993 - 0:186] 06:26:49 Thu Apr 08 [askapache@n1-backbone5:/dev/pts/0 +1] ~ In most shells the LINES variable is created automatically at login and updated when the terminal is resized (signal 28 on Linux, 23/20 on others for SIGWINCH) to contain the number of vertical lines that fit in your terminal window. Because the alias doesn't hard-code the current value but relies on the $LINES variable, it is dynamic and will always work on a tty device.


    27
    alias head='head -n $((${LINES:-`tput lines 2>/dev/null||echo -n 12`} - 2))'
    AskApache · 2010-04-08 22:37:06 14
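    A quick way to see the effect (a hypothetical check, not part of the original entry): compare plain head with what the alias expands to, before and after resizing the terminal.

    # default head: always the first 10 lines
    ps aux | head -n 10
    # what the alias expands to: the first ($LINES - 2) lines, tracking the terminal height
    echo $LINES
    ps aux | head -n $((${LINES:-$(tput lines 2>/dev/null||echo 12)} - 2))
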
  • If you use 'tail -f foo.txt' and the file is temporarily moved/deleted (i.e. the log rolls over), tail will not pick up on the new foo.txt and simply waits with no output. 'tail -F' lets you follow the file by its name rather than by descriptor: if foo.txt disappears, tail waits until the filename appears again and then continues tailing.


    15
    tail -F file
    recursiverse · 2009-07-23 07:37:11 10
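    A minimal demo of the difference (hypothetical file names; assumes a throwaway file you can write to):

    # tail -F re-attaches after the file is rotated away and recreated
    tail -F /tmp/foo.txt &
    echo one >> /tmp/foo.txt        # shown by tail
    mv /tmp/foo.txt /tmp/foo.txt.1  # simulate log rotation
    echo two >> /tmp/foo.txt        # recreates the file; tail -F follows it, plain tail -f would stay silent
    kill %1
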
  • This way you have multitail, with all its options, running on your own machine, with the tails of the two remote machines inside. :)


    12
    multitail -l 'ssh machine1 "tail -f /var/log/apache2/error.log"' -l 'ssh machine2 "tail -f /var/log/apache2/error.log"'
    mischamolhoek · 2011-10-12 10:05:18 5
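    A hedged variant, assuming your multitail build supports -L (like -l, but merging the command's output into the previous window instead of opening a new one):

    multitail -l 'ssh machine1 "tail -f /var/log/apache2/error.log"' -L 'ssh machine2 "tail -f /var/log/apache2/error.log"'
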
  • Displays the real-time line output rate of a logfile. -l tells pv to count lines; -i10 makes it refresh every 10 seconds. The -l option is not available in old versions of pv. If the remote system has an old pv version: ssh tail -f /var/log/apache2/access.log | pv -l -i10 -r >/dev/null


    10
    tail -f access.log | pv -l -i10 -r >/dev/null
    dooblem · 2010-04-29 21:02:01 7

  • 9
    tail -f file | awk '{now=strftime("%F %T%z\t");sub(/^/, now);print}'
    arcege · 2010-11-25 04:11:52 6
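    strftime() here is a gawk extension; where awk is mawk or BSD awk, a rough (hypothetical) equivalent shells out to date for each line:

    tail -f file | awk '{cmd="date +\"%F %T%z\""; cmd | getline now; close(cmd); print now "\t" $0}'
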
  • Works in Ubuntu; I hope it will work on all Linux machines. On other Unixes, tail should be capable of handling more than one file with the '-f' option. This command line simply takes the log files that are text files and whose names do not end with a number, and continuously monitors them. Putting this in .profile will be more useful (see the sketch below).


    6
    find /var/log -type f -exec file {} \; | grep 'text' | cut -d' ' -f1 | sed -e's/:$//g' | grep -v '[0-9]$' | xargs tail -f
    mohan43u · 2009-06-03 09:47:08 11
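    Following the note about .profile, a hypothetical helper wrapping the same pipeline, written as a shell function rather than an alias to avoid nested-quoting trouble:

    taillogs() { find /var/log -type f -exec file {} \; | grep 'text' | cut -d' ' -f1 | sed -e's/:$//g' | grep -v '[0-9]$' | xargs tail -f; }
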
  • This is useful when watching a log file that does not contain timestamps itself. Note that if the file already has content when the command is started, the first lines will carry the "wrong" timestamp: the time the command was started, not when the lines were originally written.


    6
    tail -f file | while read line; do echo -n $(date -u -Ins); echo -e "\t$line"; done
    hfs · 2010-11-19 10:01:57 8
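    If moreutils is installed, ts(1) does the same job; a hedged alternative, assuming your ts accepts strftime-style formats:

    tail -f file | ts '%F %T%z'
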
  • With negated wildcards ([!...]) in bash you can tail only the newer log files, skipping older rotated/compressed copies, to see what is happening: errors, info, warnings...


    5
    tail -f *[!.1][!.gz]
    piscue · 2009-03-06 16:24:44 8
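    A hypothetical equivalent using bash's extglob, which states the exclusion (rotated .1 and compressed .gz files) more directly:

    shopt -s extglob
    tail -f !(*.gz|*.1)
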
  • This command finds the 5 (-n5) most recently updated logs in /var/log, and then does a multi-file tail follow of those log files. Alternately, you can follow a specific list of log files: sudo tail -n0 -f /var/log/{messages,secure,cron,cups/error_log}


    5
    ls -drt /var/log/* | tail -n5 | xargs sudo tail -n0 -f
    kanaka · 2009-07-22 14:44:41 22
  • Uses tail to follow the logfile and plain perl to count and print the lines per second as lines are written to it.


    5
    tail -f /var/log/logfile|perl -e 'while (<>) {$l++;if (time > $e) {$e=time;print "$l\n";$l=0}}'
    madsen · 2011-06-21 10:28:26 7
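    A rough gawk rendering of the same counter (hypothetical; systime() is gawk-specific):

    tail -f /var/log/logfile | awk '{l++} systime()>e {print l; l=0; e=systime()}'
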
  • Counts hits per second from the access log's timestamp. Change the cut range for hits per 10 seconds, per minute and so on (see the sketch below); grep can be added to filter on URL or source IP.


    4
    tail -f access_log | cut -c2-21 | uniq -c
    buzzy · 2010-04-29 11:16:54 6
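    For example, a hypothetical per-minute count, assuming the log lines start with the bracketed timestamp so that characters 2-21 cover the date and time down to the second:

    # drop the :SS field to bucket hits per minute
    tail -f access_log | cut -c2-18 | uniq -c
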

  • 4
    tail -n2000 /var/www/domains/*/*/logs/access_log | awk '{print $1}' | sort | uniq -c | sort -n | awk '{ if ($1 > 20)print $1,$2}'
    allrightname · 2010-05-10 19:08:37 3
  • Should be a bit more portable, since echo -e/-n and date's -Ins are not.


    4
    tail -f file | while read line; do printf "$(date -u '+%F %T%z')\t$line\n"; done
    derekschrock · 2010-11-24 05:50:12 4
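    One hedged refinement: log lines can contain % characters that printf would try to interpret, so keeping the data out of the format string is safer:

    tail -f file | while IFS= read -r line; do printf '%s\t%s\n' "$(date -u '+%F %T%z')" "$line"; done
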
  • This one is tried and tested on Ubuntu 12.04. Works great for tailing any file over HTTP.


    4
    (echo -e "HTTP/1.1 200 Ok\n\r"; tail -f /var/log/syslog) | nc -l 1234
    adimania · 2013-02-09 06:15:42 5
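    A hedged portability note: the command above matches OpenBSD netcat; "traditional" netcat wants -p before the listen port. Any HTTP client can then read the stream (hypothetical host name below):

    (echo -e "HTTP/1.1 200 Ok\n\r"; tail -f /var/log/syslog) | nc -l -p 1234
    curl -N http://server:1234/
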
  • This truncates any lines longer than 80 characters. Also useful for looking at different parts of the line, e.g. cut -b 50-100 shows columns 50 through 100.


    3
    tail -f logfile.log | cut -b 1-80
    plasticboy · 2009-03-26 18:41:57 11
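    A hypothetical tweak that follows the current terminal width instead of a fixed 80 (interactive shells keep COLUMNS updated on resize):

    tail -f logfile.log | cut -b 1-${COLUMNS:-80}
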
  • This pipeline will find, sort and display all files based on mtime. This could be done with find | xargs, but the find | xargs pipeline will not produce correct results if the output of find is larger than the xargs command-line buffer. If the xargs buffer fills, xargs processes the find results in more than one batch, which is not compatible with sorting. Note the "-print0" on find and the "-0" switch for perl; this is the equivalent of using xargs. Don't you love perl? This pipeline can easily be modified to sort on any field produced by perl's stat operator, e.g. size, hard links, creation time, etc. Look at stat and just change the '9' to what you want: changing the '9' to a '7' sorts by file size, a '3' sorts by number of links... Use head and tail at the end of the pipeline to get the oldest or most recent files. Use awk or perl -wnla for further processing; since there is a tab between the two fields, it is very easy to process.


    3
    find $HOME -type f -print0 | perl -0 -wn -e '@f=<>; foreach $file (@f){ (@el)=(stat($file)); push @el, $file; push @files,[ @el ];} @o=sort{$a->[9]<=>$b->[9]} @files; for $i (0..$#o){print scalar localtime($o[$i][9]), "\t$o[$i][-1]\n";}'|tail
    drewk · 2009-09-21 22:11:16 11
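    On GNU findutils, a hedged shortcut gets a similar newest-files-by-mtime listing without perl (output format differs slightly):

    find $HOME -type f -printf '%T@\t%TY-%Tm-%Td %TH:%TM:%TS\t%p\n' | sort -n | cut -f2- | tail
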

  • 3
    tail -f file |xargs -IX printf "$(date -u)\t%s\n" X
    unefunge · 2010-11-25 11:23:13 5
  • This will quickly display files last changed in a directory, with the newest on top.


    2
    ls -t | head
    scottlinux · 2012-01-17 16:28:32 4
  • Run the alias command, then issue ps aux | tail and resize your terminal window (putty/console/hyperterm/xterm/etc), then issue the same command and you'll understand. ${LINES:-`tput lines 2>/dev/null||echo -n 12`} instructs the shell that if LINES is not set or null, it should use the output of `tput lines` (ncurses-based terminal access) to get the number of lines in your terminal. If that doesn't work either, it falls back to the default of 80. The default for tail is to output the last 10 lines; this alias changes that to output the last x lines instead, where x is the number of lines currently displayed on your terminal minus 7. The -7 is there so that the top line displayed is the command you ran that used tail, i.e. the prompt. Depending on whether your PS1 and/or PROMPT_COMMAND output more than 1 line (mine is 3), you will want to adjust the offset. With my prompt being the following, I need -7, or -5 if I only want to display the command line at the top. ( http://www.askapache.com/linux/bash-power-prompt.html ) 275MB/748MB [7995:7993 - 0:186] 06:26:49 Thu Apr 08 [askapache@n1-backbone5:/dev/pts/0 +1] ~ In most shells the LINES variable is created automatically at login and updated when the terminal is resized (signal 28 on Linux, 23/20 on others for SIGWINCH) to contain the number of vertical lines that fit in your terminal window. Because the alias doesn't hard-code the current value but relies on the $LINES variable, it is dynamic and will always work on a tty device.


    2
    alias tail='tail -n $((${LINES:-`tput lines 2>/dev/null||echo -n 80`} - 7))'
    AskApache · 2012-03-22 02:44:11 7

  • 2
    tail -F some.log | perl -ne 'print time(), "\n";' | uniq -c
    grana · 2013-05-11 04:51:22 6
  • The OP's solution will work; however, on some systems (BSD), grep will not pass the data through promptly unless the --line-buffered option is enabled.


    2
    tail -f $FILENAME | grep --line-buffered $PATTERN
    FluffyBrick · 2019-05-30 21:01:59 50
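    A concrete (hypothetical) instance of the same pipeline:

    tail -f /var/log/syslog | grep --line-buffered -i error
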
  • Searches for the newest *.jpg in the directory and makes a copy to newest.jpg. Just change the extension to search for other files. This is useful e.g. if your webcam saves all pictures to a folder and you'd like to put the latest one on your homepage. This works even in a directory with 10000 pictures.


    1
    cp `ls -x1tr *.jpg | tail -n 1` newest.jpg
    Psychodad · 2009-06-17 20:32:04 9
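    A hedged variant that survives spaces in filenames by quoting the command substitution:

    cp "$(ls -1tr *.jpg | tail -n 1)" newest.jpg
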
  • Another way of counting tail's line output over 10 seconds, without requiring pv. Add cut to get the average per-second rate: tail -n0 -f access.log>/tmp/tmp.log & sleep 10; kill $! ; wc -l /tmp/tmp.log | cut -c-2 You can also enclose it in a loop and send stderr to /dev/null: while true; do tail -n0 -f access.log>/tmp/tmp.log & sleep 2; kill $! ; wc -l /tmp/tmp.log | cut -c-2; done 2>/dev/null


    1
    tail -n0 -f access.log>/tmp/tmp.log & sleep 10; kill $! ; wc -l /tmp/tmp.log
    dooblem · 2010-04-29 21:23:46 16
  • tail -c 1 "$1" returns the last byte in the file. Command substitution deletes any trailing newlines, so if the file ended in a newline $(tail -c 1 "$1") is now empty, and the -z test succeeds. However, that substitution will also be empty for an empty file, so we add -s "$1" to check that the file has a size greater than zero. Finally, -f "$1" checks that the file is a regular file -- not a directory or a socket, etc.


    1
    endnl () { [[ -f "$1" && -s "$1" && -z $(tail -c 1 "$1") ]]; }
    quintic · 2010-08-25 12:06:10 83
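    A hypothetical usage sketch, reporting which files lack a trailing newline:

    for f in *.txt; do endnl "$f" || echo "$f: missing trailing newline (or empty / not a regular file)"; done
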
  • Uses history to get the last n+1 commands (since this command itself will appear as the most recent), uses head to drop this command, strips out the history line numbers with sed, and appends the remaining commands to a file (see the concrete example below).


    1
    history | tail -(n+1) | head -(n) | sed 's/^[0-9 ]\{7\}//' >> ~/script.sh
    unixmonkey15842 · 2011-06-08 13:40:58 3
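    A concrete (hypothetical) run with n=5, appending the five previous commands to the script:

    history | tail -6 | head -5 | sed 's/^[0-9 ]\{7\}//' >> ~/script.sh
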