What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way, others can gain from your CLI wisdom and you can gain from theirs. All commands can be commented on, discussed and voted up or down.

If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/


Commands using sed - 1,140 results
seq 1 12 | sed 1,5d ; seq 1 12 | head --lines=-5
2009-08-01 00:41:52
User: flux
Functions: head sed seq
Tags: sed tail HEAD fun

Strangely enough, tail has no --lines=[negative] option like head's, so we have to use sed, which is short and clear.

Stranger still, skipping lines at the bottom with sed is neither short nor clear. From the sed one-liners:

# delete the last 10 lines of a file

$ sed -e :a -e '$d;N;2,10ba' -e 'P;D' # method 1

$ sed -n -e :a -e '1,10!{P;N;D;};N;ba' # method 2
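
For dropping lines at the top, GNU tail also accepts a +K offset; a small illustration (assuming GNU coreutils):

$ seq 1 12 | tail -n +6 # prints lines 6-12, same output as sed 1,5d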

echo $( du -sm /var/log/* | cut -f 1 ) | sed 's/ /+/g'
2009-07-31 21:42:53
User: flux
Functions: cut du echo sed
Tags: echo bc

When you've got a list of numbers, one per line, echo puts them on a single line separated by spaces. You can then substitute the spaces with an operator and, finally, pipe the result to bc.
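
Putting the description together, the full pipeline with the final bc step appended looks like this (a sketch; the | bc is implied by the description rather than shown in the command above):

$ echo $( du -sm /var/log/* | cut -f 1 ) | sed 's/ /+/g' | bc # total size of /var/log/* in MB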

curl -sI http://slashdot.org/ | sed -nr 's/X-(Bender|Fry)(.*)/\1\2/p'
2009-07-31 19:55:17
Functions: sed

I'm pretty sure everyone has curl and sed, but not everyone has lynx.

ps -o rss -C httpd | tail -n +2 | (sed 's/^/x+=/'; echo x) | bc
2009-07-31 15:15:08
Functions: echo ps sed tail

Displays the amount of memory (RSS) used by all the httpd processes. Great in case you are being Slashdotted!
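
The same total can be computed without the sed/echo trick; a brief awk alternative (not the original approach; ps -o rss= suppresses the header on procps-based systems):

$ ps -o rss= -C httpd | awk '{ sum += $1 } END { print sum }' # total RSS in kB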

lynx -head -dump http://slashdot.org|egrep 'Bender|Fry'|sed 's/X-//'
echo -e "HEAD / HTTP/1.1\nHost: slashdot.org\n\n" | nc slashdot.org 80 | egrep "Bender|Fry" | sed "s/X-//"
2009-07-30 19:15:07
Functions: echo egrep sed

The slashdot.org web server adds an X-Bender or X-Fry HTTP header to every response!

ls foo*.jpg | awk '{print("mv "$1" "$1)}' | sed 's/foo/bar/2' | /bin/sh
md5sum * | sed 's/^\(\w*\)\s*\(.*\)/\2 \1/' | while read LINE; do mv $LINE; done
sed '/^#.*DEBUG.*/ s/^#//' $FILE
sed -i 's/^.*DEBUG.*/#&/' $file
equery s | sed 's/(\|)/ /g' | sort -n -k 9 | gawk '{print $1" "$9/1048576"m"}'
2009-07-30 01:12:10
User: Alanceil
Functions: gawk sed sort

On a Gentoo system, this command will tell you which packages you have installed and sort them by how much space they consume. Good for finding space hogs when tidying up disk space.

ls -pt1 | sed '/.*\//d' | sed 1d | xargs rm
2009-07-29 13:59:58
User: patko
Functions: ls sed xargs

Deletes everything in the current directory except the most recently modified file (directories are skipped). Useful for deleting old, unused log files.
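
Before running it for real, you could preview what would be removed by swapping rm for echo rm (a cautious dry-run sketch):

$ ls -pt1 | sed '/.*\//d' | sed 1d | xargs echo rm # prints the rm command instead of executing it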

ipconfig getpacket en0 | grep yi | sed s."yiaddr = "."en0: ". ; ipconfig getpacket en1 | grep yi | sed s."yiaddr = "."en1: ".
DD=`cat /etc/my.cnf | sed "s/#.*//g;" | grep datadir | tr '=' ' ' | gawk '{print $2;}'` && ( cd $DD ; find . -mindepth 2 | grep -v db\.opt | sed 's/\.\///g; s/\....$//g; s/\//./;' | sort | uniq | tr '/' '.' | gawk '{print "CHECK TABLE","`"$1"`",";";}' )
2009-07-25 03:42:31
User: atcroft
Functions: cd find gawk grep sed sort tr uniq

This command will generate "CHECK TABLE `db_name.table_name` ;" statements for all tables present in databases on a MySQL server, which can be piped into the mysql command. (Can also be altered to perform OPTIMIZE and REPAIR functions.)

Tested on MySQL 4.x and 5.x systems in a Linux environment under bash.
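
One possible way to run the output: redirect the generated statements to a file and feed them to the mysql client, or simply append | mysql -u root -p to the command itself (check_tables.sql and the credentials here are placeholders, not part of the original):

$ mysql -u root -p < check_tables.sql # runs the generated CHECK TABLE statements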

sed 's/,/\t/g' report.csv > report.tsv
2009-07-23 15:39:03
User: viner
Functions: sed

Converts comma-separated files to tab-separated files.

(MySQL eats tab-separated files with much less instruction than comma-separated ones.)
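
To confirm that real tabs end up in the output, cat -A shows them as ^I (note that \t in the replacement is a GNU sed extension):

$ printf 'a,b,c\n' | sed 's/,/\t/g' | cat -A # prints a^Ib^Ic$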

sed -n 's/.*<foo>\([^<]*\)<\/foo>.*/\1/p'
2009-07-23 07:59:30
User: recursiverse
Functions: sed

A limited but useful construct for extracting text embedded in XML tags. This will only work if the whole <foo>...</foo> element is on one line.

If nobody posts an alternative for the multiline sed version, I'll figure it out later...
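
A quick sanity check with made-up input shows what gets extracted:

$ echo 'prefix <foo>bar</foo> suffix' | sed -n 's/.*<foo>\([^<]*\)<\/foo>.*/\1/p' # prints: bar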

lynx -dump http://www.ip-adress.com/ip_tracer/?QRY=$1|sed -nr s/'^.*My IP address city: (.+)$/\1/p'
curl -u 'username' https://api.del.icio.us/v1/posts/all | sed 's/^.*href=//g;s/>.*$//g;s/"//g' | awk '{print $1}' | grep 'http'
2009-07-22 07:32:59
User: bubo
Functions: awk grep sed

A variation of avi4now's command - thanks, by the way!

/sbin/ifconfig eth0 | grep "inet addr" | sed -e 's/.*inet addr:\(.*\) B.*/\1/g'
sed -n '/<Tag>/,/<\/Tag>/p' logfile.log
2009-07-20 13:24:56
User: sanmiguel
Functions: sed

If your XML follows a timestamp or other leading text irrelevant to the XML on the same line, you can append an s/// substitution to strip the prefix, like this:

sed -n '/<Tag>/,/<\/Tag>/p; s/.*\(<Tag.*\)/\1/' logfile.log
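
Note that with -n the substitution in that form runs after the p, so the printed line still carries its leading text; moving the substitution inside the range ahead of the p is probably closer to the intent (a sketch, not the original author's exact command):

$ sed -n '/<Tag>/,/<\/Tag>/{ s/.*\(<Tag.*\)/\1/; p }' logfile.log
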
for i in *.xml; do sed -i 's/foo/bar/g' "$i"; done
find . -name '*.html' -print0| xargs -0 -L1 cat |sed "s/[\"\<\>' \t\(\);]/\n/g" |grep "http://" |sort -u
2009-07-14 07:00:15
User: jamespitt
Functions: cat find grep sed sort xargs

Just a handy way to get all the unique links from all the HTML files under a directory. Can be useful in scripts, etc.

infile=$1
for i in $(cat $infile)
do
  echo $i | tr "," "\n" | sort -n | tr "\n" "," | sed "s/,$//"
  echo
done
2009-07-12 21:23:37
User: iframe
Functions: cat echo sed sort tr
Tags: cat bash sort sed tr

Save the script as: sort_file

Usage: sort_file < sort_me.csv > out_file.csv

This script was originally posted by Admiral Beotch in LinuxQuestions.org on the Linux-Software forum.

I modified this script to make it more portable.
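
For example, with a small made-up input file where each row is an unsorted comma list:

$ printf '3,1,2\n9,7,8\n' > sort_me.csv
$ sort_file < sort_me.csv # prints 1,2,3 then 7,8,9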

sed -r 's/[ \t\r\n\v\f]+/\^J/g' INFILE > OUTFILE
2009-07-08 19:59:33
User: qazwart
Functions: sed

What happens if there is more than a single space between words, or spaces and tabs? This command will remove duplicate spaces and tabs.

The "-r" switch allows for extended regular expressions. No additional piping necessary.
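
A quick way to see the squeezing effect described above; this variant uses a single space as the replacement, which is an assumption about the intent rather than a copy of the command:

$ echo -e 'one  \t two   three' | sed -r 's/[ \t]+/ /g' # prints: one two three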

system_profiler SPPowerDataType | egrep -e "Connected|Charge remaining|Full charge capacity|Condition" | sed -e 's/^[ \t]*//'