
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):


News

May 19, 2015 - A Look At The New Commandlinefu
I've put together a short writeup on what kind of newness you can expect from the next iteration of clfu. Check it out here.
March 2, 2015 - New Management
I'm Jon, I'll be maintaining and improving clfu. Thanks to David for building such a great resource!

Psst. Open beta.

Wow, didn't really expect you to read this far down. The latest iteration of the site is in open beta. It's a gentle open beta, not in prime time just yet. It's being hosted over at UpGuard and you are more than welcome to give it a shot. A couple of things:

  • » The open beta is running a copy of the database that will not carry over to the final version. Don't post anything you don't mind losing.
  • » If you wish to use your user account, you will probably need to reset your password.
Your feedback is appreciated via the form on the beta page. Thanks! -Jon & CLFU Team

Terminal - All commands - 12,387 results
python -c 'import socket; s = socket.socket(socket.AF_INET, socket.SOCK_STREAM); s.connect(("<hostname>", <port>)); print s.getsockname()[0] ; s.close() ;' 2> /dev/null
2009-10-13 16:21:15
User: angleto
Functions: python
2

On multihomed hosts connected to several networks, it can be useful to know the source address (local IP address) used to reach the target host. This command does not require root privileges.

The command uses a TCP socket; if there is any error it returns an empty string, otherwise it returns a valid IP address.
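The snippet above is Python 2 (note the print statement). A Python 3 sketch of the same trick, using a UDP socket so no packets are actually exchanged; 192.0.2.1:80 is a stand-in for the original's <hostname>/<port> placeholders:

```shell
# Prints the local source IP that would be used to reach the target,
# or an empty string on error (same contract as the original).
# 192.0.2.1 is a documentation address, used here only as an example.
python3 -c 'import socket
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    s.connect(("192.0.2.1", 80))
    print(s.getsockname()[0])
except OSError:
    print("")
finally:
    s.close()'
```

Because connect() on a SOCK_DGRAM socket only records the peer address, this works without root privileges and without completing any handshake.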

PATH=$(cd ${0%/*}; pwd)
ping -q -c1 -w3 brandx.jp.sme >/dev/null 2>&1 || echo brandx.jp.sme ping failed | mail -ne -s 'Server unavailable' [email protected]
2009-10-13 14:13:04
User: mccalni
Functions: echo mail ping
Tags: bash ping mail
7

Joker wants an email if the Brand X server is down. Set a cron job for every 5 mins with this line and he gets an email when/if a ping takes longer than 3 seconds.

( cd /my/directory; xterm& )
2009-10-13 13:07:21
User: ashawley
Functions: cd
Tags: subshells
-4

Perfect time for the rarely used subshell.

xterm -e "cd /my/directory; bash"
2009-10-13 12:06:14
User: kekschaot
-4

Useful e.g. for Krusader's "open terminal" function.

sed -e "$ ! s/$/,/"
2009-10-13 10:13:52
User: jgc
Functions: sed
4

In this simple example the command will add a comma to the end of every line except the last. I found this really useful when programmatically constructing SQL scripts.
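A quick illustration (GNU or BSD sed; the `$ !` address means "every line except the last"):

```shell
# Append a comma to all but the last line -- e.g. when building a
# SQL VALUES list programmatically.
printf '%s\n' "(1)" "(2)" "(3)" | sed -e "$ ! s/$/,/"
# prints:
# (1),
# (2),
# (3)
```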

echo $RANDOM$RANDOM$RANDOM |cut -c3-12
URL=[target.URL]; curl -q -d "url=$URL" http://untr.im/api/ajax/api | awk -F 'href="' '{print $3}' | awk -F '" rel="' '{print $1}'
a=($(ls *html)) && a=${a[$(expr ${#a[@]} - 1)]} && rm $a
2009-10-12 16:40:06
Functions: expr ls rm
-3

Plays with bash arrays. Instead of storing the list of files in a temp file, this stores the list in RAM, retrieves the last element of the array (the last html file), then removes it.
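On bash 4.3 and later, a negative subscript reaches the last element directly; a sketch in a throwaway directory (the glob also avoids parsing ls output):

```shell
cd "$(mktemp -d)"          # scratch directory for the demo
touch a.html b.html c.html
a=(*.html)                 # glob expands in sorted order, no ls parsing
rm -- "${a[-1]}"           # ${a[-1]} is the last element: c.html here
ls                         # c.html is gone
```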

VBoxManage internalcommands converttoraw winxp.vdi winxp.raw && qemu-img convert -O vmdk winxp.raw winxp.vmdk && rm winxp.raw
2009-10-12 16:23:37
Functions: rm
9

Converts a .vdi file to a .vmdk file for use in a vmware virtual machine. The benefit: using this method actually works. There are others out there that claim to give you a working .vmdk by simply using the qemu-img command alone. Doing that only results in pain for you because the .vmdk file will be created with no errors, but it won't boot either.

Be advised that these conversions are very disk-intensive by nature; you are probably dealing with disk images several gigabytes in size.

Once finished, the process of using the new .vmdk file is left as an exercise to the reader.

dpkg --get-selections | cut -f1 | while read pkg; do dpkg -L $pkg | xargs -I'{}' bash -c 'if [ ! -d "{}" ]; then echo "{}"; fi' | tr '\n' '\000' | du -c --files0-from - | tail -1 | sed "s/total/$pkg/"; done
2009-10-12 14:57:54
User: pykler
Functions: bash cut du echo read sed tail tr xargs
Tags: Debian wajig
4

Calculates the size on disk for each package installed on the filesystem (or removed but not purged). This is missing the

| sort -rn

which would put the biggest packages on top. That was purposely left out, as the command is slightly on the slow side.

Also, you may need to run this as root, as some files can only be checked by du if you can read them ;)

for f in *.html; do sed '$d' -i "$f"; done
2009-10-12 14:46:43
User: alperyilmaz
Functions: sed
1

sed can be used to delete the last line, and with the -i option there's no need for temp files; the change is made in the actual file.
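For example (GNU sed; on BSD/macOS sed the in-place option needs an argument, `-i ''`):

```shell
f=$(mktemp)
printf 'keep 1\nkeep 2\ndrop me\n' > "$f"
sed -i '$d' "$f"     # delete the last line, editing the file in place
cat "$f"             # prints: keep 1
                     #         keep 2
rm -f "$f"
```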

for f in *.html; do head -n -1 "$f" > temp; cat temp > "$f"; rm temp; done
2009-10-12 12:49:18
User: Sunng
Functions: cat head rm
-1

Some malicious programs append an iframe or script tag to your web pages on some servers; use this command to clean them in batch.

make [target] VAR=foobar
2009-10-12 09:42:30
User: cifr
Functions: make
Tags: make
1

This would allow reference of $(VAR) (if defined) with the value 'foobar' within the Makefile.
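A minimal sketch (hypothetical one-target Makefile; printf is used so the recipe line gets its required literal tab):

```shell
cd "$(mktemp -d)"
# VAR defaults to "default" unless overridden on the command line.
printf 'VAR = default\nshow:\n\t@echo "VAR is $(VAR)"\n' > Makefile
make show              # prints: VAR is default
make show VAR=foobar   # prints: VAR is foobar
```

Command-line assignments take precedence over assignments made inside the Makefile (unless the Makefile uses `override`).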

shopt -s checkwinsize
2009-10-12 07:08:55
User: settermjd
3

checkwinsize makes bash re-check the terminal size after each command and update LINES and COLUMNS accordingly. Add the command to either /etc/profile or ~/.bash_profile so that it is available to your shell.

tree -dL 1
2009-10-11 23:20:17
User: Escher
3

to include hidden dirs use:

tree -adL 1

(with ls, requires 'ls -ad */ .*/')

eject /dev/sdb; sleep 1; eject -t /dev/sdb
2009-10-11 23:16:49
User: Escher
Functions: eject sleep
7

Remounts a USB disk /dev/sdb without having to physically remove and reinsert it. (GNOME desktop)

b="http://2010.utosc.com"; for p in $( curl -s $b/presentation/schedule/ | grep /presentation/[0-9]*/ | cut -d"\"" -f2 ); do f=$(curl -s $b$p | grep "/static/slides/" | cut -d"\"" -f4); if [ -n "$f" ]; then echo $b$f; curl -O $b$f; fi done
2009-10-11 17:28:46
User: danlangford
Functions: cut echo grep
Tags: curl cut for UTOSC
2

Miss a class at UTOSC2010? Need a refresher? Use this to curl down all the presentations from the UTOSC website (http://2010.utosc.com). NOTE/WARNING: this will dump them in the current directory, and there are around 37, some of them big. Tested on OS X 10.6.1.

rename 'y/A-Z/a-z/' *
ssh -R 2001:localhost:22 [username]@[remote server ip]
2009-10-11 09:51:04
User: felix001
Functions: ssh
7

Allows you to establish a tunnel (encapsulate packets) to your remote server (Server B) from your local host (Server A).

On Server B you can then connect to port 2001, which will forward all packets (encapsulated) to port 22 on Server A.

-- www.fir3net.com --

find my_root_dir -depth -exec rename 's/(.*)\/([^\/]*)/$1\/\L$2/' {} \;
AUTOSSH_POLL=1 autossh -M 21010 hostname -t 'screen -Dr'
2009-10-11 06:04:29
Functions: hostname
11

Only useful for really flaky connections (but I'm stuck with one for now). If you're in this situation, I've found this to be a good way to run autossh; it does a pretty good job of detecting when the session is down and restarting. Combined with the -t option and the screen command, this pops you back into your working session lickety-split, with as few headaches as possible.

And if autossh is a bit slow at detecting the downed SSH connection, just run this in another tab/terminal window to notify autossh that it should drop it and start over. Basically for when polling is too slow.

kill -SIGUSR1 `pgrep autossh`

ps aux --sort=%mem,%cpu
2009-10-10 22:48:51
User: mrwill
Functions: ps
13

You can also pipe it to the "tail" command to show the 10 most memory-hungry processes.
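For example, on Linux procps (the sort is ascending, so tail shows the largest):

```shell
# Ten most memory-hungry processes, biggest last
ps aux --sort=%mem | tail -n 10
```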

ifs () { echo -n "${IFS}"|hexdump -e '"" 10/1 "'\''%_c'\''\t" "\n"' -e '"" 10/1 "0x%02x\t" "\n\n"'|sed "s/''\|\t0x[^0-9]//g; $,/^$/d" ; }
2009-10-10 22:41:35
User: dennisw
Functions: echo hexdump sed
2

You can display, save and restore the value of $IFS using conventional Bash commands, but these functions, which you can add to your ~/.bashrc file make it really easy.

To display $IFS use the function ifs shown above. In the sample output, you can see that it displays the characters and their hexadecimal equivalent.

This function saves it in a variable called $saveIFS:

sifs () { saveIFS=$IFS; }

Use this function to restore it

rifs () { IFS=$saveIFS; }

Add this line in your ~/.bashrc file to save a readonly copy of $IFS:

declare -r roIFS=$IFS

Use this function to restore that one to $IFS

rrifs () { IFS=$roIFS; }
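A quick end-to-end check of the save/restore pair (a sketch; saveIFS is the variable the functions above share):

```shell
sifs () { saveIFS=$IFS; }
rifs () { IFS=$saveIFS; }

sifs              # save the current (default) IFS
IFS=:             # now word-splitting happens on colons
v="a:b:c"
set -- $v         # unquoted expansion splits on IFS
echo $#           # prints: 3
rifs              # restore the saved IFS
set -- $v
echo $#           # prints: 1
```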
tree -d
2009-10-10 21:40:56
User: bsussman
2

tree has lots of params; man is your friend.