Commands using tail (292)


  • 3
    history | perl -lane '$lsize{$_} = scalar(@F); if($longest<$lsize{$_}) { $longest = $lsize{$_}; print "$_"; };' | tail -n1
    salparadise · 2009-03-19 02:52:30 6
  • This truncates any lines longer than 80 characters. Also useful for looking at different parts of the line, e.g. cut -b 50-100 shows columns 50 through 100.


    3
    tail -f logfile.log | cut -b 1-80
    plasticboy · 2009-03-26 18:41:57 11
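    As the note above suggests, the same pipeline can follow a different slice of each line; a sketch assuming the same logfile.log:
    tail -f logfile.log | cut -b 50-100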
  • See man vmstat for information about the statistics. This does the same thing without the timestamp: vmstat 5


    3
    while [ 1 ]; do echo -n "`date +%F_%T`" ; vmstat 1 2 | tail -1 ; sleep 4; done
    plasticboy · 2009-03-26 19:16:55 14
  • This command will show the 20 processes using the most CPU time (hungriest at the bottom). You can see the 20 most memory intensive processes (hungriest at the bottom) by running: ps aux | sort +3n | tail -20 Or, run both: echo "CPU:" && ps aux | sort +2n | tail -20 && echo "Memory:" && ps aux | sort +3n | tail -20


    3
    ps aux | sort +2n | tail -20
    dopeman · 2009-03-31 12:03:34 10
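    A sketch of the same sorts with modern key syntax, since newer GNU sort releases reject the obsolete +2n/+3n forms (column numbers assume the default ps aux layout):
    ps aux | sort -k3,3n | tail -20    # by %CPU
    ps aux | sort -k4,4n | tail -20    # by %MEM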
  • Removes all but the 5 most recently modified files: ls -t lists newest first, and tail +6 passes everything from the 6th entry onward to rm.


    3
    ls -t | tail +6 | xargs rm
    negyvenot · 2009-09-16 06:33:07 3
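    A sketch using modern GNU syntax, since current tail wants -n +6 and plain xargs splits on whitespace; it still assumes no newlines in filenames:
    ls -t | tail -n +6 | xargs -d '\n' rm --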
  • Figures out what has changed in the last 12 hours. Change the author to yourself and the time window to whatever you want.


    3
    git diff --stat `git log --author="XXXXX" --since="12 hours ago" --pretty=oneline | tail -n1 | cut -c1-40` HEAD
    askedrelic · 2009-11-04 01:41:33 3
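    Roughly the same thing can be sketched without the cut by asking git log for full hashes (XXXXX remains a placeholder author):
    git diff --stat $(git log --author="XXXXX" --since="12 hours ago" --format=%H | tail -n1) HEAD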
  • Prints the top 10 memory consuming processes (with children and instances aggregated) sorted by total RSS and calculates the percentage of total RAM each uses. Please note that since RSS can include shared libraries it is possible for the percentages to add up to more than the total amount of RAM, but this still gives you a pretty good idea. Also note that this does not work with the mawk version of awk, but it works fine with GNU Awk, which is on most Linux systems. It also does not work on OS X.


    3
    TR=`free|grep Mem:|awk '{print $2}'`;ps axo rss,comm,pid|awk -v tr=$TR '{proc_list[$2]+=$1;} END {for (proc in proc_list) {proc_pct=(proc_list[proc]/tr)*100; printf("%d\t%-16s\t%0.2f%\n",proc_list[proc],proc,proc_pct);}}'|sort -n |tail -n 10
    d34dh0r53 · 2010-03-27 01:34:50 7
  • Several people have submitted commands to do this, but I think this is the simplest solution. It also happens to be the most portable one: it should work with any sh or csh derived shell under any UNIX-like OS. Oh, by the way, with my German locale ($LC_TIME set appropriately) it prints "g" most of the time, and sometimes (on Wednesdays) it prints "h". It never prints "y".


    3
    date +%A | tail -2c
    inof · 2010-04-08 15:14:06 6

  • 3
    tail -f file |xargs -IX printf "$(date -u)\t%s\n" X
    unefunge · 2010-11-25 11:23:13 5

  • 3
    history | awk '{print $2}' | awk 'BEGIN {FS="|"}{print $1}' | sort | uniq -c | sort -n | tail | sort -nr
    secretgspot · 2011-03-12 05:58:21 3

  • 3
    find /var/log -iregex '.*[^\.][^0-9]+$' -not -iregex '.*gz$' 2> /dev/null | xargs tail -n0 -f | ccze -A
    brejktru · 2011-04-17 18:09:25 3

  • 3
    tail -f /var/log/squid/access.log | perl -p -e 's/^([0-9]*)/"[".localtime($1)."]"/e'
    godzillante · 2011-07-06 08:55:27 7
  • script -f /tmp/foo will place all output of the terminal, including carriage returns, into a file. That file can then be followed with tail -f from one or more other terminals to mirror what is happening in the main terminal. Good way to share one's screen on short notice. Note: this produces very accurate output, but it depends on the viewers' terminals being the same size as yours. You can clear screens or even resize the terminal for others using this function; I use it in conjunction with the "mid" command in my list.


    3
    script -f /tmp/foo; tail -f /tmp/foo
    robinsonaarond · 2011-11-22 15:16:08 23
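    Since the two commands are meant for different terminals (run as written, the tail would only start after the script session exits), a sketch of the intended usage:
    script -f /tmp/foo    # terminal A: the session being shared
    tail -f /tmp/foo      # terminal B: the viewer(s)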
  • Output lines starting at line 2.


    3
    tail -n +2 foo.txt
    kaushalmehra · 2012-09-13 20:54:36 4
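    A typical use, assuming a hypothetical data.csv with a header row, is working with only the data lines:
    tail -n +2 data.csv | wc -l    # count rows, excluding the header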
  • The Technicolor TC7200 router has an exploit where the file http://192.168.0.1/goform/system/GatewaySettings.bin is open to unauthenticated access. Even though it is binary, the last two strings in it are the username and password for the router's management pages. It can be read using the 'strings' command, 'hexdump -C' or a hexadecimal editor. (Default user/password: admin/admin.) The file reveals more configuration, including the SSID name and key for the wifi network: wget -q -O - http://192.168.0.1/goform/system/GatewaySettings.bin Hexadecimal dump of the file: wget -q -O - http://192.168.0.1/goform/system/GatewaySettings.bin | hexdump -C


    3
    wget -q -O - http://192.168.0.1/goform/system/GatewaySettings.bin | strings | tail -n 2
    paulera · 2016-05-03 23:03:55 13
  • This is a standard procedure for me whenever I set up a new Raspberry Pi system. Because the default user is "pi", I quickly replace it with my own (e.g. "kostis"), but I have to add that user to all of pi's groups first, before deleting the default account. xargs helps a lot with that in a single line, while avoiding boring "for" loops. For everything trickier, there's always "parallel" :)


    3
    groups pi | xargs -n 1 | tail -n +4 | xargs -n 1 sudo adduser kostis
    kostis · 2022-01-25 07:20:09 449
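    An alternative sketch with usermod, assuming the usual "user : primary supplementary..." output format of groups:
    sudo usermod -aG "$(groups pi | cut -d' ' -f4- | tr ' ' ',')" kostis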
  • Tuned for a short command line; you can set the path to sessionstore.js explicitly instead of using asterisks, which is more reliable. Useful when you are not at home and really need to get at the tabs currently open on your home computer (via SSH). I use it from work when I forgot to bookmark some new interesting webpage I visited at home. It is also another way to list tabs after Firefox has crashed (restoring tabs doesn't always work). The script also includes tabs that were closed a short time before.


    2
    F="$HOME/.moz*/fire*/*/session*.js" ; grep -Go 'entries:\[[^]]*' $F | cut -d[ -f2 | while read A ; do echo $A | sed s/url:/\n/g | tail -1 | cut -d\" -f2; done
    b2e · 2009-05-21 21:58:42 4

  • 2
    mysql -u<user> -p<password> -s -e 'DESCRIBE <table>' <database> | tail -n +1 | awk '{ printf($1",")}' | head -c -1
    Cowboy · 2009-08-17 12:54:44 3
  • Gets the disk usage of files (in this case log files in /var/log) modified during the last n days: sudo find /var/log/ -mtime -n -type f | xargs du -ch. Here -mtime -n means last modified less than n*24 hours ago; numeric arguments can be specified as +n for greater than n, -n for less than n, and n for exactly n. So for the last 7*24 hours (about 7 days) it is -7: sudo find /var/log/ -mtime -7 -type f | xargs du -ch | tail -n1


    2
    sudo find /var/log/ -mtime -7 -type f | xargs du -ch | tail -n1
    alvinx · 2009-08-27 14:18:47 5
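    The same idea for the last 30 days, sketched with -print0/-0 so filenames containing spaces survive:
    sudo find /var/log/ -mtime -30 -type f -print0 | xargs -0 du -ch | tail -n1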

  • 2
    find /path/to/dir -type f -printf "%T@|%p\n" 2>/dev/null | sort -n | tail -n 1| awk -F\| '{print $2}'
    glennie · 2010-02-04 15:13:27 4
  • Finds all C++, Python, SWIG files in your present directory (uses "*" rather than "." to exclude invisibles) and counts how many lines are in them. Returns only the last line (the total).


    2
    find * \( -name "*.[hc]pp" -or -name "*.py" -or -name "*.i" \) -print0 | xargs -0 wc -l | tail -n 1
    neologism · 2010-03-25 18:58:29 4
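    If there are enough files that xargs runs wc more than once, several "total" lines appear; a sketch that always yields a single grand total:
    find * \( -name "*.[hc]pp" -or -name "*.py" -or -name "*.i" \) -print0 | xargs -0 cat | wc -l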
  • Just a quick hack to give reasonable filenames to TrueType and OpenType fonts. I'd accumulated a big bunch of bizarrely and inconsistently named font files in my ~/.fonts directory. I wanted to copy some, but not all, of them over to my new machine, but I had no idea what many of them were. This script renames .ttf files based on the name embedded inside the font. It will also work for .otf files, but make sure you change the mv part so it gives them the proper extension. REQUIREMENTS: Bash (for extended pattern globbing), showttf (Debian has it in the fontforge-extras package), GNU grep (for context), and rev (because it's hilarious). BUGS: Well, like I said, this is a quick hack. It grew piece by piece on the command line. I only needed to do this once and spent hardly any time on it, so it's a bit goofy. For example, I find 'rev | cut -f1 | rev' pleasantly amusing --- it seems so clearly wrong, and yet it works to print the last argument. I think flexibility in expressiveness like this is part of the beauty of Unix shell scripting. One-off tasks can be written quickly, built up as a person is "thinking aloud" at the command line. That's why Unix is such a huge boost to productivity: it allows each person to think their own way instead of enforcing some "right way". On a tangent: One of the things I wish commandlinefu would show is the command line HISTORY of the person as they developed the script. I think it's that conversation between programmer and computer, as the pipeline is built piece-by-piece, that is the more valuable lesson than any canned script.


    2
    shopt -s extglob; for f in *.ttf *.TTF; do g=$(showttf "$f" 2>/dev/null | grep -A1 "language=0.*FullName" | tail -1 | rev | cut -f1 | rev); g=${g##+( )}; mv -i "$f" "$g".ttf; done
    hackerb9 · 2010-04-30 09:46:45 6
  • 'data' is the directory to back up, 'backup' is the directory where snapshots are stored. Backs up files on a regular basis using hard links. Very efficient and quick, and the backup data is directly available. Same as explained here: http://blog.interlinked.org/tutorials/rsync_time_machine.html, in one line. When using du to check the size of your backups, the first backup accounts for all the space, and the other backups only for files that have changed.


    2
    rsync -av --link-dest=$(ls -1d /backup/*/ | tail -1) /data/ /backup/$(date +%Y%m%d%H%M)/
    dooblem · 2010-08-05 19:36:24 6
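    To see the effect described above, du can be pointed at all snapshots at once; hard-linked files are counted only once, against the first (oldest) snapshot listed:
    du -sh /backup/*/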
  • You can use this one-liner for a quick and dirty (more customizable) alternative to the watch command. The keys to making this work: everything exists in an infinite loop; the loop starts with a clear; the loop ends with a sleep. Enter whatever you'd like to keep an eye on in the middle.


    2
    while (true); do clear; uname -n; echo ""; df -h /; echo ""; tail -5 /var/log/auth.log; echo ""; vmstat 1 5; sleep 15; done
    roknir · 2010-08-23 04:37:58 7
  • Returns the maximum directory depth.


    2
    find . -printf '%d\n' | sort -n | tail -1
    dustindorroh · 2011-04-25 18:38:12 7

