What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.

If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions, …).

May 19, 2015 - A Look At The New Commandlinefu
I've put together a short writeup on what kind of newness you can expect from the next iteration of clfu. Check it out here.
March 2, 2015 - New Management
I'm Jon, I'll be maintaining and improving clfu. Thanks to David for building such a great resource!

Terminal - Commands using head - 248 results
genRandomText() { cat /dev/urandom | tr -dc 'a-zA-Z' | head -c "$1"; }
2012-01-21 00:51:34
User: thomasba
Functions: cat head tr
Tags: random urandom

Uses urandom to get random data, deletes non-letters with tr, and prints the first $1 bytes.

cat /dev/urandom | tr -dc A-Za-z0-9 | head -c 32
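As a quick sanity check, the function can be exercised like this (shown with a ';' before the closing brace so the one-line definition also parses in bash, not just zsh):

```shell
# One-line function: random letters only, first $1 bytes
genRandomText() { cat /dev/urandom | tr -dc 'a-zA-Z' | head -c "$1"; }

# Generate a 16-letter random string (different on every run)
genRandomText 16
echo
```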
memnum=$(awk '{ print $2 }' /proc/meminfo |head -n1); echo "$memnum / 1024 / 1024" | bc -l
2011-11-08 16:28:25
User: wekoch
Functions: awk bc echo head

Probably more trouble than it's worth, but it worked for an obscure need.
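The same arithmetic (MemTotal is reported in kB, so dividing by 1024 twice gives GB) can be done in a single awk call. The sample line below is made-up data standing in for the first line of /proc/meminfo:

```shell
# Made-up first line of /proc/meminfo (values are in kB)
meminfo_line="MemTotal:       16331452 kB"

# kB -> GB in one awk invocation, no bc or echo needed
echo "$meminfo_line" | awk '{ printf "%.2f\n", $2 / 1024 / 1024 }'
# -> 15.57
```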

ls -ltp | sed '1 d' | head -n1
2011-10-17 16:21:15
Functions: head ls sed

wrap it in a function if you like...

lastfile () { ls -ltp | sed '1d' | head -n1; }
alias busy='rnd_file=$(find /usr/include -type f -size +5k | sort -R | head -n 1) && vim +$((RANDOM%$(wc -l $rnd_file | cut -f1 -d" "))) $rnd_file'
2011-10-16 00:05:59
User: frntn
Functions: alias cut find head sort vim wc

An enhancement of the 'busy' command originally posted by busybee: fewer characters, no escaping issues, and most importantly it excludes small files (opening a 5-line file isn't very persuasive, I think ;) ).

This makes an alias for a command named 'busy'. The 'busy' command opens a random file in /usr/include to a random line with vim.

tail -n +<N> <file> | head -n 1
2011-09-30 08:30:30
User: qweqq
Functions: head tail

tail is much faster than sed or awk here because it doesn't have to check for regular expressions.
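Concretely, with a made-up ten-line file, printing line N means skipping the first N-1 lines with tail and keeping one line with head:

```shell
# Sample file: lines containing 1..10
seq 10 > /tmp/sample.txt

# Print line 5: tail -n +5 starts output at line 5, head keeps one line
tail -n +5 /tmp/sample.txt | head -n 1
# -> 5
```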

head -n 13 /etc/services | tail -n 1
2011-09-15 19:39:49
User: muonIT
Functions: head tail
Tags: goto

Silly approach, but easy to remember...

sudo netstat|head -n2|tail -n1 && sudo netstat -a|grep udp && echo && sudo netstat|head -n2|tail -n1 && sudo netstat -a|grep tcp
less file.lst | head -n 50000 > output.txt
2011-09-05 05:26:04
User: Richie086
Functions: head less

Useful for situations where you have word lists or dictionaries that range from hundreds of megabytes to several gigabytes in size. Replace file.lst with your word list and 50000 with however many lines you want the resulting list to contain. The result is redirected to output.txt in the current working directory. It may be helpful to run wc -l file.lst first to find out how many lines the word list has, then divide that in half to get the value for the head -n part of the command.

search="whatyouwant";data=$(grep "$search" * -R --exclude-dir=.svn -B2 -A2);for((i=$(echo "$data" | wc -l);$i>0;i=$(($i-6)) )); do clear;echo "$data"| tail -n $i | head -n 5; read;done
2011-08-29 18:14:16
User: Juluan
Functions: echo grep head tail wc

Not perfect, but working (at least on the project I wrote it for ;) ).

Specify what you want to search for in the variable search; the command then greps the folder and shows one result at a time.

Press enter to show the next result.

It can behave badly on results in the first few lines, and it could be improved to allow going back.

But in my case (a large project where I was checking that a value wasn't used without its corresponding const; the value was "1000", so there were a lot of results...) it was perfect ;)

shuf /usr/share/dict/words |grep "^[^']\{3,5\}$" |head -n4
2011-08-24 03:43:55
User: menachem
Functions: grep head
Tags: awk xkcd

This does the same thing as the command 'j_melis' submitted, but does it a lot quicker.

That command takes 43 seconds to complete on my system, while the command I submitted takes 6 seconds.
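The grep pattern keeps only words of 3-5 characters that contain no apostrophe. A small sketch using a made-up word list in place of /usr/share/dict/words:

```shell
# Toy dictionary standing in for /usr/share/dict/words
printf '%s\n' cat it horse correct apple "battery's" staple mouse > words.txt

# Shuffle, keep 3-5 letter apostrophe-free words, take four of them
shuf words.txt | grep "^[^']\{3,5\}$" | head -n4
```

With this input only cat, horse, apple and mouse pass the filter; "it" is too short, "staple" and longer words are too long, and "battery's" contains an apostrophe.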

cd $(ls -ltr|grep ^d|head -1|sed 's:.*\ ::g'|tail -1)
2011-08-10 03:39:35
Functions: cd grep head ls sed tail

Replace the 1 in head -1 with n to go to the n-th directory listed.

Replace the head with tail to go to the last directory listed.

You can also change the parameters of ls.

alias cd1='cd $( ls -1t | grep ^d | head -1)'
head /dev/urandom | md5sum | base64
find /myfs -size +209715200c -exec du -m {} \; |sort -nr |head -10
2011-07-07 21:12:46
User: arlequin
Functions: du find head sort

Specify the size in bytes using the 'c' suffix on the -size test. The + sign reads as "bigger than". Then execute du on each match, sort in reverse numeric order, and show the first 10 entries.

cd $(ls -1t --color=never | head -1)
alias cd1='cd $( ls -lt | grep ^d | head -1 | cut -b 51- )'
head -n1 sample.txt | tail -n1
2011-06-14 17:45:04
User: gtcom
Functions: head tail
Tags: tail HEAD

You can actually do the same thing with a combination of head and tail. For example, in a file of four lines, if you just want the middle two lines:

head -n3 sample.txt | tail -n2

Line 1 --\
Line 2    } These three lines are selected by head -n3,
Line 3 --/ which feeds the following filtered list to tail:
Line 4

Line 1
Line 2 \___ These two lines are filtered by tail -n2,
Line 3 /    which results in:

Line 2
Line 3

being printed to screen (or wherever you redirect it).
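The diagram above can be reproduced with a made-up four-line file:

```shell
# Sample file containing "Line 1" .. "Line 4"
seq -f 'Line %g' 4 > sample.txt

# head keeps lines 1-3; tail then keeps the last 2 of those
head -n3 sample.txt | tail -n2
# -> Line 2
# -> Line 3
```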

history | tail -(n+1) | head -(n) | sed 's/^[0-9 ]\{7\}//' >> ~/script.sh
2011-06-08 13:40:58
Functions: head sed tail

Uses history to get the last n+1 commands (since this command itself will appear as the most recent), then strips out the line numbers and this command using sed, and appends the remaining commands to a file.
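Since history is a shell builtin and unavailable in a non-interactive sketch, the sed part (stripping the 7-character line-number prefix) can be demonstrated on fabricated history-style lines:

```shell
# Two made-up lines in history's output format:
# the line number sits in a 7-character left-padded field
printf '  501  ls -la\n  502  git status\n' |
  sed 's/^[0-9 ]\{7\}//'
# -> ls -la
# -> git status
```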

find . -maxdepth 1 -printf '%A@\t%p\n' | sort -r | cut -f 2,2 | head -1
ls -t1 | head -n1
od /dev/urandom -w6 -tx1 -An|sed -e 's/ //' -e 's/ /:/g'|head -n 1
2011-05-16 15:05:34
User: karel1980
Functions: head od sed

Just increase the 1 at the end if you want to generate more than one.

(As an alternative to "| head -n N", you could use od's -N flag to limit how many bytes are read: -N $[6*N].)

echo $(( $( date +%s ) - $( stat -c %Y * | sort -nr | head -n 1 ) ))
alias screenr='screen -r $(screen -ls | egrep -o -e '[0-9]+' | head -n 1)'
cut -f1 -d" " ~/.bash_history | sort | uniq -c | sort -nr | head -n 30
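The last command ranks your most-used commands by taking the first word of each history line. A sketch on a made-up history file standing in for ~/.bash_history:

```shell
# Fabricated stand-in for ~/.bash_history
printf 'ls -la\ngit status\nls\ngit push\nls foo\n' > hist.txt

# First word of each line, counted, ranked, top entries shown
cut -f1 -d" " hist.txt | sort | uniq -c | sort -nr | head -n 3
```

Here ls appears three times and git twice, so ls tops the list. Note that cut passes lines without the delimiter through unchanged, which is why the bare "ls" line still counts.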