
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions, …).

News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - All commands - 12,018 results
ls | tr '[[:punct:][:space:]]' '\n' | grep -v "^\s*$" | sort | uniq -c | sort -bn
2014-10-14 09:52:28
User: qdrizh
Functions: grep ls sort tr uniq
Tags: sort uniq ls grep tr
3

I'm sure there's a more elegant sed version for the tr + grep section.
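
One possible simplification (an untested sketch, GNU grep assumed): grep -o prints each match on its own line, which drops the need for both the tr and the empty-line filter:

ls | grep -o '[^[:punct:][:space:]]\+' | sort | uniq -c | sort -bn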

uname -p
youtube-dl -tci --write-info-json "https://www.youtube.com/watch?v=dQw4w9WgXcQ"
2014-10-13 21:18:34
User: wires
1

Download video files from a bunch of sites (here is a list https://rg3.github.io/youtube-dl/supportedsites.html).

The options: base the filename on the title, ignore errors, continue partial downloads, and also store some metadata in a .json file.

Paste youtube users and playlists for extra fun.

Protip: git-annex loves these files
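
The same options work against a playlist or user URL; for example (PLAYLIST_ID below is just a placeholder):

youtube-dl -tci --write-info-json "https://www.youtube.com/playlist?list=PLAYLIST_ID"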

gcloud components list | grep "^| Not" | sed "s/|\(.*\)|\(.*\)|\(.*\)|/\2/" | xargs echo gcloud components update
2014-10-13 20:52:25
User: wires
Functions: echo grep sed xargs
0

Google Cloud SDK comes with a package manager `gcloud components` but it needs a bit of `sed` to work. Modify the "^| Not" bit to change the package selection. (The gcloud --format option is currently broken)
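
As a sketch of changing the selection: grepping for the update status instead should queue only outdated components, assuming your SDK version labels them "Update Available" in the status column:

gcloud components list | grep "^| Update Available" | sed "s/|\(.*\)|\(.*\)|\(.*\)|/\2/" | xargs echo gcloud components update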

ip a s eth0 | awk -F'[/ ]+' '/inet[^6]/{print $3}'
dd if=/dev/hda | ssh root@4.2.2.2 'dd of=/root/server.img'
2014-10-13 13:43:47
User: suyashjain
Functions: dd ssh
0

With this command you can take a full snapshot of your hard disk and create an image of it, stored directly on a remote server over SSH. Here it images /dev/hda and saves it on 4.2.2.2 as /root/server.img.
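
A variant of the same idea (a sketch, not from the original post) compresses the stream in transit, which usually helps over slow links; the resulting image then needs gunzip before use:

dd if=/dev/hda | gzip -c | ssh root@4.2.2.2 'dd of=/root/server.img.gz'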

cat /etc/httpd/logs/access.log | awk '{ print $6}' | sed -e 's/\[//' | awk -F'/' '{print $1}' | sort | uniq -c
2014-10-13 13:39:53
User: suyashjain
Functions: awk cat sed sort uniq
0

The command reads the Apache access log and reports each requested virtual host along with its number of requests.
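
To list the busiest virtual hosts first, the same pipeline can be finished with a reverse numeric sort (a small variation, not in the original):

cat /etc/httpd/logs/access.log | awk '{ print $6}' | sed -e 's/\[//' | awk -F'/' '{print $1}' | sort | uniq -c | sort -rn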

sed -e '/4.2.2.2/ s/^;//' -i test.txt
2014-10-13 13:37:53
User: suyashjain
Functions: sed
Tags: sed
0

This sed command searches every line of test.txt for 4.2.2.2 and removes the leading comment symbol ";" from matching lines, editing the file in place. You can adapt it for other purposes too.
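
The reverse operation, putting the ";" back at the start of matching lines, would look like this (untested sketch):

sed -e '/4.2.2.2/ s/^/;/' -i test.txt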

psql -U quassel quassel -c "SELECT message FROM backlog ORDER BY time DESC LIMIT 1000;" | grep my-query
2014-10-12 19:53:06
User: Tatsh
Functions: grep
0

Replace the psql credentials if necessary, and the my-query part with your query.
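
For example, with hypothetical credentials and a search term of your own it might look like:

psql -U myuser mydb -c "SELECT message FROM backlog ORDER BY time DESC LIMIT 1000;" | grep -i 'search term'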

curl -s http://pages.cs.wisc.edu/~ballard/bofh/bofhserver.pl |grep 'is:' |awk 'BEGIN { FS=">"; } { print $10; }'
2014-10-10 21:17:33
User: toj
Functions: awk grep
Tags: curl BOFH
0

Sure, it's dirty, but it's quick, it only displays the excuse, and it works.

ip addr show enp3s0 | awk '/inet[^6]/{print $2}' | awk -F'/' '{print $1}'
<ctrl+u>
for f in */*.ape; do avconv -i "$f" "${f%.ape}.flac"; done
2014-10-10 12:33:00
User: qdrizh
0

Converts all Monkey's Audio files one directory below the current directory to FLAC.

For only current directory, use `for f in *.ape; do avconv -i "$f" "${f%.ape}.flac"; done`

To remove APE files afterward, use `rm */*.ape`
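
A slightly safer sketch (not from the original) removes each APE file only if its conversion succeeded: `for f in */*.ape; do avconv -i "$f" "${f%.ape}.flac" && rm "$f"; done`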

mtr www.google.com
firefox $(grep -i ^url=* file.url | cut -b 5-)
2014-10-08 05:56:27
User: nachos117
Functions: cut grep
0

This command uses grep to read the shortcut file (file.url in the example above) and filter out all but the one important line, which contains the website URL plus some extra characters that need to be removed (for example, URL=http://example.com). The cut command then strips the leading URL=. The output is passed to Firefox, which should interpret it as a web URL to open. Of course, you can replace Firefox with any other browser. Tested in bash and sh.
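
On most Linux desktops you can also hand the URL to the default browser instead of naming one; a sketch using xdg-open, with the same file.url assumption:

xdg-open "$(grep -i '^url=' file.url | cut -b 5-)"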

egrep -wi --color 'warning|error|critical'
alias lp="echo -n \"some text to copy\" | pbcopy; sleep 120 && echo -n \"done\" | pbcopy &"
2014-10-05 19:43:49
User: wsams
Functions: alias
Tags: alias pbcopy
0

This alias is useful if you need to use some text often. Executing the alias copies the text into your clipboard and then clears it (by overwriting it with "done") after 120 seconds.
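
A hypothetical function version of the same idea (macOS pbcopy assumed, as in the original) takes the text and an optional timeout in seconds as arguments:

lp() { printf '%s' "$1" | pbcopy; ( sleep "${2:-120}" && printf '%s' done | pbcopy ) & }

Usage: lp "some text to copy" 60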

eog someimg.jpg
url=`curl http://proxybay.info/ | awk -F'href="|" |">|</' '{for(i=2;i<=NF;i=i+4) print $i,$(i+2)}' | grep follow|sed 's/^.\{19\}//'|shuf -n 1` && firefox $url
2014-10-04 19:08:13
User: dunryc
Functions: awk grep sed
-1

Polls the Pirate Bay mirror list, picks a random site, and opens it for you in Firefox.

git reflog --date=local | grep "Oct 2 .* checkout: moving from .* to" | grep -o "[a-zA-Z0-9\-]*$" | sort | uniq
2014-10-03 15:12:22
User: Trindaz
Functions: grep sort
0

Replace "Oct 2" in the first grep pattern to be the date to view branch work from

for i in `cat hosts_list`; do RES=`ssh myusername@${i} "ps -ef " |awk '/[p]rocessname/ {print $2}'`; test "x${RES}" = "x" && echo $i; done
2014-10-03 14:57:54
User: arlequin
Functions: awk echo test
Tags: ssh awk test ps
0

Given a list of hosts, ssh into each one and echo its name only if 'processname' is not running.

FILE=file_name; CHUNK=$((64*1024*1024)); SIZE=$(stat -c "%s" $FILE); for ((i=0; i < $SIZE; i+=$CHUNK)); do losetup --find --show --offset=$i --sizelimit=$CHUNK $FILE; done
2014-10-03 13:18:19
User: flatcap
Functions: losetup stat
5

It's common to want to split up large files and the usual method is to use split(1). If you have a 10GiB file, you'll need 10GiB of free space. Then the OS has to read 10GiB and write 10GiB (usually on the same filesystem). This takes AGES.

The command uses a set of loop block devices to create fake chunks, but without making any changes to the file. This means the file splitting is nearly instantaneous. The example creates a 1GiB file, then splits it into 16 x 64MiB chunks (/dev/loop0 .. loop15).

Note: This isn't a drop-in replacement for using split. The results are block devices, and tar and zip won't do what you expect when given block devices. These commands will work:

hexdump /dev/loop4

gzip -9 < /dev/loop6 > part6.gz

cat /dev/loop10 > /media/usb/part10.bin
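
When you're finished with the chunks, the loop devices can be detached again (a sketch; util-linux losetup assumed):

losetup -d /dev/loop0    # detach a specific device, repeat per device
losetup -D               # or detach them all
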
sudo tee /path/to/file < /dev/null
tee < file.org file.copy1 file.copy2 [file.copyn] > /dev/null
2014-10-02 16:41:36
User: dacoman
Functions: tee
1

Copies file.org to file.copy1 ... file.copyn

sudo bash -c "> /var/log/httpd/access_log"