Commands tagged tail
Terminal - Commands tagged tail - 53 results
tail -f LOGFILE | perl -ne '`say "$_"`;'
2011-09-16 05:33:22
User: tamouse
Functions: perl tail
Tags: perl tail say
0

say only processes a complete file, at EOF, so following a file isn't possible. This is a quick and dirty perl one-liner to feed each line from the tail -f to say. Yes, it is expensive to launch a new process for each line.

This little ditty was prompted by a discussion on how horrible it is to use VoiceOver on ncurses programs such as irssi.
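A plain shell loop avoids perl (though it still launches one say process per line); a sketch, assuming say here is the macOS text-to-speech command:

tail -f LOGFILE | while read -r line; do say "$line"; done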

tail -f /var/log/logfile|perl -e 'while (<>) {$l++;if (time > $e) {$e=time;print "$l\n";$l=0}}'
2011-06-21 10:28:26
User: madsen
Functions: perl tail time
Tags: perl tail
2

Uses tail to follow the file and plain perl to count and print the lines per second (lps) as lines are written to the logfile.
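A roughly equivalent sketch using awk instead of perl, assuming GNU awk for systime():

tail -f /var/log/logfile | awk '{l++; if (systime() > e) {e=systime(); print l; l=0}}'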

head -n1 sample.txt | tail -n1
2011-06-14 17:45:04
User: gtcom
Functions: head tail
Tags: tail HEAD
-1

You can actually do the same thing with a combination of head and tail. For example, in a file of four lines, if you just want the middle two lines:

head -n3 sample.txt | tail -n2

Line 1 --\
Line 2    } These three lines are selected by head -n3,
Line 3 --/   this feeds the following filtered list to tail:
Line 4

Line 1
Line 2 \___ These two lines are filtered by tail -n2,
Line 3    / This results in:

Line 2
Line 3

being printed to the screen (or wherever you redirect it).
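The same idea generalises to any line range. A hypothetical helper (the name mid is illustrative):

mid() { head -n "$2" "$3" | tail -n "$(( $2 - $1 + 1 ))"; }

mid 2 3 sample.txt # prints lines 2 and 3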

history | tail -(n+1) | head -(n) | sed 's/^[0-9 ]\{7\}//' >> ~/script.sh
2011-06-08 13:40:58
Functions: head sed tail
1

Uses history to get the last n+1 commands (since this command itself will appear as the most recent), then drops this command with head, strips the line numbers with sed, and appends the remaining commands to a file (see the concrete version below).
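A concrete, runnable form with n filled in (5 is just an illustrative value):

n=5; history | tail -n $((n+1)) | head -n $n | sed 's/^[0-9 ]\{7\}//' >> ~/script.sh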

curl -s http://www.last.fm/user/$LASTFMUSER | grep -A 1 subjectCell | sed -e 's#<[^>]*>##g' | head -n2 | tail -n1 | sed 's/^[[:space:]]*//g'
The command is too big to fit here. :( Look at the description for the command, in readable form! :)
2011-01-05 02:45:28
User: hunterm
Functions: at command
-6

Yep, now you can finally google from the command line!

Here's a readable version "for your pleasure"(c):

google() {
    # search the web using google from the commandline
    # syntax: google google
    query=$(echo "$*" | sed "s:%:%25:g;s:&:%26:g;s:+:%2b:g;s:;:%3b:g;s: :+:g")
    data=$(wget -qO - "https://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=$query")
    title=$(echo "$data" | tr '}' '\n' | sed "s/.*,\"titleNoFormatting//;s/\":\"//;s/\",.*//;s/\\u0026/'/g;s/\\\//g;s/#39\;//g;s/'amp;/\&/g" | head -1)
    url="$(echo "$data" | tr '}' '\n' | sed 's/.*"url":"//;s/".*//' | head -1)"
    echo "${title}: ${url} | http://www.google.com/search?q=${query}"
}

Enjoy :)

tail -f file |xargs -IX printf "$(date -u)\t%s\n" X
tail -f file | awk '{now=strftime("%F %T%z\t");sub(/^/, now);print}'
tail -f file | while read line; do printf "$(date -u '+%F %T%z')\t$line\n"; done
2010-11-24 05:50:12
User: derekschrock
Functions: file printf read tail
Tags: tail date
4

This should be a bit more portable, since echo -e/-n and date's -Ins are not.
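If moreutils is installed, its ts utility does the same job; a sketch assuming moreutils' ts (which takes a strftime-style format) is available:

tail -f file | ts '%F %T%z'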

tail -f file | while read line; do echo -n $(date -u -Ins); echo -e "\t$line"; done
2010-11-19 10:01:57
User: hfs
Functions: date echo file read tail
Tags: tail date
6

This is useful when watching a log file that does not contain timestamps itself.

If the file already has content when the command is started, the first lines will carry the "wrong" timestamp: the time the command was started, not the time the lines were originally written.

endnl () { [[ -f "$1" && -s "$1" && -z $(tail -c 1 "$1") ]]; }
2010-08-25 12:06:10
User: quintic
Functions: tail
Tags: tail
1

tail -c 1 "$1" returns the last byte in the file.

Command substitution deletes any trailing newlines, so if the file ended in a newline $(tail -c 1 "$1") is now empty, and the -z test succeeds.

However, the substitution will also be empty for an empty file, so we add -s "$1" to check that the file has a size greater than zero.

Finally, -f "$1" checks that the file is a regular file -- not a directory or a socket, etc.

sudo ls -l $(eval echo "/proc/{$(echo $(pgrep java)|sed 's/ /,/')}/fd/")|grep log|sed 's/[^/]* //g'|xargs -r tail -f
2010-07-30 18:20:00
User: vutcovici
Functions: echo eval grep ls sed sudo tail xargs
-1

Tail all logs that are opened by all java processes. This is helpful when you are in a new environment and you do not know where the logs are located. Instead of java you can put any process name. This command works only on Linux.

The list of all log files opened by java processes:

sudo ls -l $(eval echo "/proc/{$(echo $(pgrep java)|sed 's/ /,/')}/fd/")|grep log|sed 's/[^/]* //g'
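An alternative sketch using lsof instead of /proc, assuming lsof is installed and at least one java process is running:

sudo lsof -p "$(pgrep -d, java)" | awk '/log/ {print $NF}' | sort -u | xargs -r sudo tail -f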
tail -n2000 /var/www/domains/*/*/logs/access_log | awk '{print $1}' | sort | uniq -c | sort -n | awk '{ if ($1 > 20)print $1,$2}'
tail -n0 -f access.log>/tmp/tmp.log & sleep 10; kill $! ; wc -l /tmp/tmp.log
2010-04-29 21:23:46
User: dooblem
Functions: kill sleep tail wc
Tags: tail kill wc sleep
1

Another way of counting tail's line output over 10 seconds, without requiring pv.

Pipe through cut to get the rough average per-second rate (dropping the last digit divides the 10-second count by ten):

tail -n0 -f access.log>/tmp/tmp.log & sleep 10; kill $! ; wc -l /tmp/tmp.log | cut -c-2

You can also enclose it in a loop and send stderr to /dev/null:

while true; do tail -n0 -f access.log>/tmp/tmp.log & sleep 2; kill $! ; wc -l /tmp/tmp.log | cut -c-2; done 2>/dev/null

tail -f access.log | pv -l -i10 -r >/dev/null
2010-04-29 21:02:01
User: dooblem
Functions: tail
Tags: tail pv
6

Displays the realtime line output rate of a logfile.

-l tells pv to count lines

-i10 to refresh every 10 seconds

The -l option is not in old versions of pv. If the remote system only has an old pv, run tail remotely over ssh and pv locally:

ssh remotehost tail -f /var/log/apache2/access.log | pv -l -i10 -r >/dev/null

tail -f access_log | cut -c2-21 | uniq -c
2010-04-29 11:16:54
User: buzzy
Functions: cut tail uniq
Tags: uniq tail cut
4

Change the cut range for hits per 10 seconds, per minute and so on... grep can be used to filter on URL or source IP, as in the example below.
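For example, keeping the same cut range but filtering on a status code first (--line-buffered stops grep from buffering the stream; the pattern is illustrative):

tail -f access_log | grep --line-buffered ' 404 ' | cut -c2-21 | uniq -c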

alias head='head -n $((${LINES:-`tput lines 2>/dev/null||echo -n 12`} - 2))'
26

Run the alias command, then issue

ps aux | head

and resize your terminal window (putty/console/hyperterm/xterm/etc), then issue the same command again and you'll understand.

${LINES:-`tput lines 2>/dev/null||echo -n 12`}

Instructs the shell that if LINES is not set or null, it should use the output of `tput lines` (ncurses-based terminal access) to get the number of lines in your terminal. Furthermore, in case that doesn't work either, it falls back to a default of 12 (-2 = 10).

The default for head is to output the first 10 lines; this alias changes that to output the first x lines instead, where x is the number of lines currently displayed on your terminal minus 2. The -2 is there so that the top line displayed is the command you ran that used head, i.e. the prompt.

Depending on whether your PS1 and/or PROMPT_COMMAND output more than one line (mine is 3), you will want to increase from -2. With my prompt being the following, I need -7, or -5 if I only want to display the command line at the top. ( http://www.askapache.com/linux-unix/bash-power-prompt.html )

275MB/748MB

[7995:7993 - 0:186] 06:26:49 Thu Apr 08 [askapache@n1-backbone5:/dev/pts/0 +1] ~

In most shells the LINES variable is created automatically at login and updated when the terminal is resized (via SIGWINCH, signal 28 on Linux, 23/20 on others) to contain the number of vertical lines that fit in your terminal window. Because the alias doesn't hard-code the current LINES but relies on the $LINES variable, this is a dynamic alias that will always work on a tty device.
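If your prompt spans more than one line, the offset can be parameterised; a sketch where HEADOFF is a hypothetical variable you set to match your prompt height:

HEADOFF=2; alias head='head -n $((${LINES:-`tput lines 2>/dev/null||echo -n 12`} - HEADOFF))'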

find $HOME -type f -print0 | perl -0 -wn -e '@f=<>; foreach $file (@f){ (@el)=(stat($file)); push @el, $file; push @files,[ @el ];} @o=sort{$a->[9]<=>$b->[9]} @files; for $i (0..$#o){print scalar localtime($o[$i][9]), "\t$o[$i][-1]\n";}'|tail
2009-09-21 22:11:16
User: drewk
Functions: find perl
3

This pipeline will find, sort and display all files based on mtime. This could be done with find | xargs, but the find | xargs pipeline will not produce correct results if the output of find is larger than xargs' command-line buffer. If the xargs buffer fills, xargs processes the find results in more than one batch, which is not compatible with sorting.

Note the "-print0" on find and "-0" switch for perl. This is the equivalent of using xargs. Don't you love perl?

Note that this pipeline can easily be modified to sort on any data produced by perl's stat operator, e.g. you could sort on size, hard links, creation time, etc. Look at stat and just change the '9' to what you want. Changing the '9' to a '7', for example, will sort by file size. A '3' sorts by number of links...

Use head and tail at the end of the pipeline to get oldest files or most recent. Use awk or perl -wnla for further processing. Since there is a tab between the two fields, it is very easy to process.
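If GNU find is available, a much shorter (if less flexible) sketch of the same newest-files-by-mtime idea:

find $HOME -type f -printf '%T@ %T+ %p\n' | sort -n | tail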

seq 1 12 | sed 1,5d ; seq 1 12 | head --lines=-5
2009-08-01 00:41:52
User: flux
Functions: head sed seq
Tags: sed tail HEAD fun
0

Strangely enough, there is no --lines=[negative] option for tail like head's, so we have to use sed, which is very short and clear, you see.

Stranger still, skipping lines at the bottom with sed is neither short nor clear. From the sed one-liners:

# delete the last 10 lines of a file

$ sed -e :a -e '$d;N;2,10ba' -e 'P;D' # method 1

$ sed -n -e :a -e '1,10!{P;N;D;};N;ba' # method 2
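For skipping lines at the top, tail itself can also be used, since POSIX tail accepts a +N offset:

seq 1 12 | tail -n +6 # like sed 1,5d: output starts at line 6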

echo -e "HEAD / HTTP/1.1\nHost: slashdot.org\n\n" | nc slashdot.org 80 | head -n5 | tail -1 | cut -f2 -d-
ls -t1 | head -n1 | xargs tail -f
tail -F file
2009-07-23 07:37:11
User: recursiverse
Functions: tail
Tags: tail logs
14

If you use 'tail -f foo.txt' and it is temporarily moved/deleted (i.e. the log rolls over), then tail will not pick up on the new foo.txt and will simply wait with no output.

'tail -F' allows you to follow the file by its name, rather than by a file descriptor. If foo.txt disappears, tail will wait until the filename appears again and then continue tailing.

ls -drt /var/log/* | tail -n5 | xargs sudo tail -n0 -f
2009-07-22 14:44:41
User: kanaka
Functions: ls sudo tail xargs
Tags: bash tail log watch
5

This command finds the 5 (-n5) most recently updated logs in /var/log and then does a multi-file tail follow of those log files.

Alternatively, you can do this to follow a specific list of log files:

sudo tail -n0 -f /var/log/{messages,secure,cron,cups/error_log}

cp `ls -x1tr *.jpg | tail -n 1` newest.jpg
2009-06-17 20:32:04
User: Psychodad
Functions: cp tail
1

Searches for the newest *.jpg in the directory and makes a copy of it as newest.jpg. Just change the extension to search for other files. This is useful, e.g., if your webcam saves all pictures in a folder and you'd like to put the latest one on your homepage. This works even in a directory with 10000 pictures.

find /var/log -type f -exec file {} \; | grep 'text' | cut -d' ' -f1 | sed -e's/:$//g' | grep -v '[0-9]$' | xargs tail -f
2009-06-03 09:47:08
User: mohan43u
Functions: cut file find grep sed tail xargs
Tags: tail
5

Works in Ubuntu; I hope it will work on all Linux machines. On other Unixes, tail should be capable of handling more than one file with the '-f' option.

This command line simply takes log files that are text files and do not end with a number, and continuously monitors them.

Putting it into an alias or function in .profile makes it more useful (see the sketch below).
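A sketch of wrapping it as a function for .profile (the name taillogs is hypothetical; a function avoids the quoting an alias would need):

taillogs() { find /var/log -type f -exec file {} \; | grep 'text' | cut -d' ' -f1 | sed -e 's/:$//g' | grep -v '[0-9]$' | xargs tail -f ; }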