
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.

Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).

News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands require moderation before they will appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - Commands using head - 238 results
find . -type f -print0 | xargs -0 du -h | sort -hr | head -10
curl -s kernel.org | grep '<strong>' | head -3 | tail -1 | cut -d'>' -f3 | cut -d'<' -f1
ps -e h -o pid --sort -pcpu | head -10 | vzpid -
2012-05-24 14:16:40
User: mrkmg
Functions: head ps
Tags: openvz
0

This command will list the PID, VEID, and name of the 10 highest CPU-using processes on an OpenVZ host. You must have vzpid installed.
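
If vzpid is not available, the ps half of the pipeline can still be run on its own to list the top CPU consumers (a minimal sketch without the VEID lookup):

ps -e h -o pid,comm --sort -pcpu | head -10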

wget -O - http://www.reddit.com/r/wallpapers.rss | grep -Eo 'http://i.imgur.com[^&]+jpg' | head -1 | xargs wget -O background.jpg
2012-04-25 11:15:26
User: untitaker
Functions: grep head wget xargs
0

Doesn't depend on curl and doesn't use thumbnails as wallpaper (which has the unfortunate effect of only allowing imgur links)

(set -o noglob;while sleep 0.05;do for r in `grep -ao '[[:print:]]' /dev/urandom|head -$((COLUMNS/3))`;do [ $((RANDOM%6)) -le 1 ] && r=\ ;echo -ne "\e[$((RANDOM%7/-6+2));32m $r ";done;echo;done)
2012-04-13 02:09:10
User: khopesh
Functions: echo head set sleep
0

A tweak using Patola's code as a base, this full-width green matrix display has all the frills (and all the printable characters).

You don't need the surrounding parens if you don't care about losing globbing capabilities. Z-shell (/bin/zsh) needs neither the parens nor the `set -o noglob`.

Screen shot (animated): http://desmond.imageshack.us/Himg32/scaled.php?server=32&filename=matrixh.gif&res=landing

If it's too slow, try lowering the `sleep 0.05` or even replacing it with `true` (which is faster than `sleep 0`).
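
For example, the throttle-free variant suggested above would look something like this (the same command, with the sleep swapped for true):

(set -o noglob;while true;do for r in `grep -ao '[[:print:]]' /dev/urandom|head -$((COLUMNS/3))`;do [ $((RANDOM%6)) -le 1 ] && r=\ ;echo -ne "\e[$((RANDOM%7/-6+2));32m $r ";done;echo;done)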

I squashed it as narrow as I could to conserve space, though somebody could probably squeeze a char or two out.

Enjoy!

find . -type f -print0 | xargs -0 du -h | sort -hr | head -20
2012-03-30 10:21:12
User: flatcap
Functions: du find head sort xargs
7

Search for files and list the 20 largest.

find . -type f

gives us a list of files, recursively, starting from here (.)

-print0 | xargs -0 du -h

separates the names of files with NULL characters, so we're not confused by spaces

then xargs runs the du command to find their size (in human-readable form -- 64M not 64123456)

| sort -hr

uses sort to arrange the list in size order. sort -h knows that 1M is bigger than 9K

| head -20

finally, selects only the top twenty from the list
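
As a quick illustration of the "sort -h knows that 1M is bigger than 9K" point above, this should print 9K before 1M:

printf '1M\n9K\n' | sort -h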

head -n1 nation.tbl | sed 's/\(.\)/\1\n/g' | sort | uniq -c | grep \| | awk '{ print $1 }'
sed -e 's/[;|][[:space:]]*/\n/g' .bash_history | cut --delimiter=' ' --fields=1 | sort | uniq --count | sort --numeric-sort --reverse | head --lines=20
for i in $(ps -eo pid,pmem,pcpu| sort -k 3 -r|grep -v PID|head -10|awk '{print $1}');do diff -yw <(pidstat -p $i|grep -v Linux) <(ps -o euser,pri,psr,pmem,stat -p $i|tail);done
2012-02-16 20:54:32
Functions: awk diff grep head ps sort
0

It grabs the PIDs of the top resource users with $(ps -eo pid,pmem,pcpu| sort -k 3 -r|grep -v PID|head -10)

The sort -k 3 sorts by the third field, which is CPU. Change this to 2 and it will sort by memory (pmem) instead.
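
For instance, the memory-sorted version of the process-selection part would look something like this (a sketch of the change described above):

ps -eo pid,pmem,pcpu | sort -k 2 -r | grep -v PID | head -10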

The rest of the command just uses diff to display the output of 2 commands side-by-side (the -y flag). I chose some good output fields for ps.

pidstat comes with the sysstat package (sar, mpstat, iostat, pidstat), so if you don't have it, you should install it.

I should probably take off the timestamp... :|

export GREP_COLOR='1;32';while [ true ]; do head -n 100 /dev/urandom; sleep .1; done | hexdump -C | grep --color=auto "ca fe"
ls -t1 $* | head -1 ;
2012-02-10 22:13:24
Functions: head ls
0

Returns the most recently modified file in the current (or specified) directory. You can also get the oldest file, via:

ls -t1 $* | tail -1 ;

TOTAL_RAM=`free | head -n 2 | tail -n 1 | awk '{ print $2 }'`; PROC_RSS=`ps axo rss,comm | grep [h]ttpd | awk '{ TOTAL += $1 } END { print TOTAL }'`; PROC_PCT=`echo "scale=4; ( $PROC_RSS/$TOTAL_RAM ) * 100" | bc`; echo "RAM Used by HTTP: $PROC_PCT%"
while true; do curl -s http://sensiblepassword.com/?harder=1 | tail -n 15 | head -n 1 | sed 's;<br/>;;' | cut -c 5- | cb; sleep 1; done
2012-01-30 20:52:14
User: supervacuo
Functions: cut head sed sleep tail
1

Use the excellent sensiblepasswords.com to generate a random (yet easy-to-remember) password every second, and copy it to the clipboard. Useful for generating a list of passwords and pasting them into a spreadsheet.

This script uses "madebynathan"'s "cb" function (http://madebynathan.com/2011/10/04/a-nicer-way-to-use-xclip/); you could also replace "cb" with

xclip -selection c

Remove "while true; do" and "; done" to generate and copy only 1 password.

PID=`ps | grep process_name | grep -v grep | head -n 1 | awk '{print $1}'`; cat /proc/$PID/smaps | grep heap -A 2
grep . "$f" | head -n1
2012-01-27 02:58:07
User: captaincomic
Functions: grep head
Tags: grep
0

Use this command if your file may contain empty lines and you need to obtain the first non-empty line.
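
A quick way to see the effect, feeding example input through a pipe instead of "$f":

printf '\n\nfirst non-empty line\nsecond line\n' | grep . | head -n1   # prints "first non-empty line"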

genRandomText() { cat /dev/urandom|tr -dc 'a-zA-Z'|head -c $1; }
2012-01-21 00:51:34
User: thomasba
Functions: cat head tr
Tags: random urandom
0

Using urandom to get random data, deleting non-letters with tr, and printing the first $1 bytes.
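
Example usage, assuming the function above has been defined in the current shell:

genRandomText 16   # prints 16 random letters, with no trailing newline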

cat /dev/urandom | tr -dc A-Za-z0-9 | head -c 32
memnum=$(awk '{ print $2 }' /proc/meminfo |head -n1); echo "$memnum / 1024 / 1024" | bc -l
2011-11-08 16:28:25
User: wekoch
Functions: awk bc echo head
-2

Probably more trouble than it's worth, but it worked for an obscure need.

ls -ltp | sed '1 d' | head -n1
2011-10-17 16:21:15
Functions: head ls sed
-2

wrap it in a function if you like...

lastfile () { ls -ltp | sed '1 d' | head -n1; }
alias busy='rnd_file=$(find /usr/include -type f -size +5k | sort -R | head -n 1) && vim +$((RANDOM%$(wc -l $rnd_file | cut -f1 -d" "))) $rnd_file'
2011-10-16 00:05:59
User: frntn
Functions: alias cut find head sort vim wc
0

Enhancement for the 'busy' command originally posted by busybee: fewer chars, no escape issues, and most importantly it excludes small files (opening a 5-line file isn't that persuasive, I think ;) )

This makes an alias for a command named 'busy'. The 'busy' command opens a random file in /usr/include to a random line with vim.

tail -n +<N> <file> | head -n 1
2011-09-30 08:30:30
User: qweqq
Functions: head tail
-5

Tail is much faster than sed or awk because it doesn't check for regular expressions.
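
For example, to print line 13 of /etc/services (the same line the head/tail version in the next entry extracts):

tail -n +13 /etc/services | head -n 1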

head -n 13 /etc/services | tail -n 1
2011-09-15 19:39:49
User: muonIT
Functions: head tail
Tags: goto
-5

Silly approach, but easy to remember...

sudo netstat|head -n2|tail -n1 && sudo netstat -a|grep udp && echo && sudo netstat|head -n2|tail -n1 && sudo netstat -a|grep tcp
less file.lst | head -n 50000 > output.txt
2011-09-05 05:26:04
User: Richie086
Functions: head less
-3

Useful for situations where you have word lists or dictionaries that range from hundreds of megabytes to several gigabytes in size. Replace file.lst with your wordlist, and replace 50000 with however many lines you want the resulting list to contain in total. The result is redirected to output.txt in the current working directory. It may be helpful to run wc -l file.lst first to find out how many lines the word list has, then divide that in half to figure out what value to use for the head -n part of the command.
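
That calculation can also be done inline; a sketch (using the same file.lst name) that keeps the first half of the word list:

head -n $(( $(wc -l < file.lst) / 2 )) file.lst > output.txt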

search="whatyouwant";data=$(grep "$search" * -R --exclude-dir=.svn -B2 -A2);for((i=$(echo "$data" | wc -l);$i>0;i=$(($i-6)) )); do clear;echo "$data"| tail -n $i | head -n 5; read;done
2011-08-29 18:14:16
User: Juluan
Functions: echo grep head tail wc
-2

Not perfect, but working (at least on the project I wrote it for ;) )

Specify what you want to search for in the variable search; it then greps the folder and shows one result at a time.

Press enter and it will show the next result.

It can behave badly on results in the first few lines, and it could be improved to allow going back.

But in my case (a large project where I was checking that a value wasn't used without its corresponding const, and the value was "1000", so there were a lot of results ...) it was perfect ;)