
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that receive a minimum of 3 or 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the three Twitter streams, as well as one for virtually every other subset (users, tags, functions, …).



News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - All commands - 11,857 results
smartctl -a /dev/sda |grep Writ |awk '{print $NF/2/1024/1024/1024 " TeraBytes Written"}'
2014-10-21 03:40:32
User: khyron320
Functions: awk grep
2

You must have smartmontools installed for this to work. It also assumes 512-byte sectors, which is pretty standard.
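
As a sanity check of the arithmetic: SMART reports total LBAs (512-byte sectors) written, so dividing by 2 gives KiB, and three further divisions by 1024 give TiB. A quick sketch with a hypothetical LBA count (2^32 sectors = exactly 2 TiB):

```shell
# hypothetical Total_LBAs_Written value; 2^32 sectors x 512 B = 2 TiB
awk 'BEGIN { lba = 4294967296; print lba/2/1024/1024/1024 " TeraBytes Written" }'
```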

ls | tr '[[:punct:][:space:]]' '\n' | grep -v "^\s*$" | sort | uniq -c | sort -bn
2014-10-14 09:52:28
User: qdrizh
Functions: grep ls sort tr uniq
Tags: sort uniq ls grep tr
1

I'm sure there's a more elegant sed version for the tr + grep section.
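
For comparison, `grep -o` can do the tokenising in one step (a sketch on made-up filenames; note `[:alnum:]` also splits on underscores, which `[:punct:]` treats as punctuation):

```shell
# split names on anything non-alphanumeric, then count token frequency
printf 'foo-bar\nfoo_baz\n' | grep -oE '[[:alnum:]]+' | sort | uniq -c | sort -bn
```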

youtube-dl -tci --write-info-json "https://www.youtube.com/watch?v=dQw4w9WgXcQ"
2014-10-13 21:18:34
User: wires
1

Download video files from a bunch of sites (see the list of supported sites: https://rg3.github.io/youtube-dl/supportedsites.html).

The options: base the filename on the title (-t), continue partial downloads (-c), ignore errors (-i), and store some metadata in a .json file (--write-info-json).

Paste youtube users and playlists for extra fun.

Protip: git-annex loves these files

gcloud components list | grep "^| Not" | sed "s/|\(.*\)|\(.*\)|\(.*\)|/\2/" | xargs echo gcloud components update
2014-10-13 20:52:25
User: wires
Functions: echo grep sed xargs
0

Google Cloud SDK comes with a package manager `gcloud components` but it needs a bit of `sed` to work. Modify the "^| Not" bit to change the package selection. (The gcloud --format option is currently broken)
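
The `sed` picks out the middle column of the ASCII table that `gcloud components list` prints, and `xargs echo` trims the surrounding whitespace. Here it is on a single made-up table row (the component name is hypothetical):

```shell
# greedy backreferences split the row on its four pipes; \2 is the middle column
echo '| Not Installed | kubectl | 1.2 |' \
  | sed 's/|\(.*\)|\(.*\)|\(.*\)|/\2/' \
  | xargs echo
```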

ip a s eth0 | awk -F'[/ ]+' '/inet[^6]/{print $3}'
ip addr show enp3s0 | awk '/inet[^6]/{print $2}' | awk -F'/' '{print $1}'
for f in */*.ape; do avconv -i "$f" "${f%.ape}.flac"; done
2014-10-10 12:33:00
User: qdrizh
0

Converts all Monkey's Audio files below the current directory to FLAC.

For only current directory, use `for f in *.ape; do avconv -i "$f" "${f%.ape}.flac"; done`

To remove APE files afterward, use `rm */*.ape`

egrep -wi --color 'warning|error|critical'
alias lp="echo -n \"some text to copy\" | pbcopy; sleep 120 && echo -n \"done\" | pbcopy &"
2014-10-05 19:43:49
User: wsams
Functions: alias
Tags: alias pbcopy
0

This alias is useful if you need to paste some text often. Executing the alias copies the text into your clipboard and then replaces it with "done" after 120 seconds.

url=`curl http://proxybay.info/ | awk -F'href="|" |">|</' '{for(i=2;i<=NF;i=i+4) print $i,$(i+2)}' | grep follow|sed 's/^.\{19\}//'|shuf -n 1` && firefox $url
2014-10-04 19:08:13
User: dunryc
Functions: awk grep sed
-1

Polls the Pirate Bay mirror list, chooses a random site and opens it for you in Firefox.

git reflog --date=local | grep "Oct 2 .* checkout: moving from .* to" | grep -o "[a-zA-Z0-9\-]*$" | sort | uniq
2014-10-03 15:12:22
User: Trindaz
Functions: grep sort
0

Replace "Oct 2" in the first grep pattern with the date whose branch work you want to view.
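
The final `grep -o` keeps only the branch name at the end of each "checkout" reflog line. On a canned line (branch name made up):

```shell
# match the trailing run of branch-name characters anchored at end of line
echo 'HEAD@{0}: checkout: moving from master to feature-x' | grep -o '[a-zA-Z0-9-]*$'
```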

for i in `cat hosts_list`; do RES=`ssh myusername@${i} "ps -ef " |awk '/[p]rocessname/ {print $2}'`; test "x${RES}" = "x" && echo $i; done
2014-10-03 14:57:54
User: arlequin
Functions: awk echo test
Tags: ssh awk test ps
0

Given a hosts list, ssh one by one and echo its name only if 'processname' is not running.
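
The `[p]rocessname` brackets are a classic trick: the regex still matches the plain process name, but the pattern's own text (as it appears in the ps output) no longer matches itself. A minimal illustration with grep:

```shell
# the bracketed pattern matches the plain process name...
echo 'init' | grep -c '[i]nit'
# ...but not a command line containing the pattern itself
# (grep exits non-zero when the count is 0)
echo 'awk /[i]nit/' | grep -c '[i]nit' || true
```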

FILE=file_name; CHUNK=$((64*1024*1024)); SIZE=$(stat -c "%s" $FILE); for ((i=0; i < $SIZE; i+=$CHUNK)); do losetup --find --show --offset=$i --sizelimit=$CHUNK $FILE; done
2014-10-03 13:18:19
User: flatcap
Functions: losetup stat
4

It's common to want to split up large files, and the usual method is to use split(1).

If you have a 10GiB file, you'll need 10GiB of free space, and the OS then has to read 10GiB and write 10GiB (usually on the same filesystem). This takes AGES.

This command instead uses a set of loop block devices to create fake chunks, without making any changes to the file, so the splitting is nearly instantaneous. The example assumes a 1GiB file and splits it into 16 x 64MiB chunks (/dev/loop0 .. loop15).

Note: this isn't a drop-in replacement for split. The results are block devices, and tar and zip won't do what you expect when given block devices. These commands will work:

hexdump /dev/loop4

gzip -9 < /dev/loop6 > part6.gz

cat /dev/loop10 > /media/usb/part10.bin
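
A quick check of the chunk arithmetic for the 1GiB example (no root needed; this just counts the loop iterations instead of calling losetup):

```shell
SIZE=$((1024*1024*1024))      # 1GiB file
CHUNK=$((64*1024*1024))       # 64MiB chunks
n=0
for ((i=0; i<SIZE; i+=CHUNK)); do n=$((n+1)); done
echo "$n chunks"              # 16 chunks -> /dev/loop0 .. /dev/loop15
```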
sudo bash -c "> /var/log/httpd/access_log"
pdftk fill_me_in.pdf output no_thanks.pdf flatten
2014-09-30 09:59:46
User: qdrizh
0

Some PDF viewers don't manage form fields correctly when printing. Instead of treating them as transparent, they print as black shapes.

/bin/ls -lF "$@" | sed -r ': top; s/. ([0-9]+)([0-9]{3}[,0-9]* \w{3} )/ \1,\2/ ; t top'
2014-09-29 14:33:23
User: hackerb9
Functions: sed
2

This modifies the output of ls so that the file size has commas every three digits. It makes room for the commas by destructively eating any characters to the left of the size, which is probably okay since that's just the "group".

Note that I did not write this, I merely cleaned it up and shortened it with extended regular expressions. The original shell script, entitled "sl", came with this description:

 : '

 : For tired eyes (sigh), do an ls -lF plus whatever other flags you give

 : but expand the file size with commas every 3 digits. Really helps me

 : distinguish megabytes from hundreds of kbytes...

 :

 : Corey Satten, corey@cac.washington.edu, 11/8/89

 : '

Of course, some may suggest that fancy new "human friendly" options, like "ls -Shrl", have made Corey's script obsolete. They are probably right. Yet, at times, still I find it handy. The new-fangled "human-readable" numbers can be annoying when I have to glance at the letter at the end to figure out what order of magnitude is even being talked about. (There's a big difference between 386M and 386P!). But with this nifty script, the number itself acts like a histogram, a quick visual indicator of "bigness" for tired eyes. :-)

cat /usr/share/dict/words | egrep '^\w{13,}$' | egrep -iv '(\w).*\1'
2014-09-29 12:52:09
User: hackerb9
Functions: cat egrep
5

This is the most straightforward approach: first regexp limits dictionary file to words with thirteen or more characters, second regexp discards any words that have a letter repeated. (Bonus challenge: Try doing it in a single regexp!)
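
For instance, "ambidextrously" (14 letters, none repeated) survives both filters, while a word like "characteristic" (also 14 letters, but with repeats) is dropped by the backreference:

```shell
# first filter: 13+ word characters; second: reject any repeated letter
printf 'ambidextrously\ncharacteristic\n' | egrep '^\w{13,}$' | egrep -iv '(\w).*\1'
```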

rsync -arvz -e 'ssh -p 2233' --progress --delete remote-user@remote-server.org:/path/to/folder /path/to/local/folder
2014-09-26 10:42:26
User: nadavkav
Functions: rsync
1

Useful when you need to backup/copy/sync a folder over ssh with a non-standard port number.

tcpdump -tnn -c 2000 -i eth0 | awk -F "." '{print $1"."$2"."$3"."$4}' | sort | uniq -c | sort -nr | awk ' $1 > 10 '
2014-09-26 01:15:23
User: hochmeister
Functions: awk sort tcpdump uniq
1

Capture 2000 packets and print the "top talkers": the source addresses that account for more than 10 of them.
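
The first awk stitches the source IP back together from tcpdump's `addr.port` notation by splitting the line on dots. On a canned tcpdump line (addresses made up):

```shell
# fields 1-4, joined by dots, reconstruct "IP <source address>"
echo 'IP 192.168.1.5.443 > 10.0.0.2.51000: Flags [S]' \
  | awk -F '.' '{print $1"."$2"."$3"."$4}'
```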

kpartx -av <image-flat.vmdk>; mount /dev/mapper/loop0p1 /mnt/vmdk
2014-09-25 23:05:09
User: rldleblanc
Functions: mount
4

This does not require you to know the partition offset; kpartx will find all partitions in the image and create loopback devices for them automatically. This works for all types of images (dd of hard drives, img, etc.), not just vmdk. You can also activate LVM volumes in the image by running

vgchange -a y

and then you can mount the LV inside the image.

To unmount the image, umount the partition/LV, deactivate the VG for the image

vgchange -a n <volume_group>

then run

kpartx -dv <image-flat.vmdk>

to remove the partition mappings.

sed -e "s/^127.0.1.1 $(hostname).novalocal/127.0.1.1/g" /etc/hosts
2014-09-25 15:38:43
User: renoirb
Functions: sed
0

When booting a VM through OpenStack and managed through cloud-init, the hosts file gets a line written to it similar to

127.0.1.1 ns0.novalocal ns0

This command proved useful while installing a configuration manager such as Salt Stack (or Puppet, or Ansible) that derives the node name from the hostname. As written it only prints the result; add `-i` to sed to edit /etc/hosts in place.
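
With a hypothetical hostname of `ns0`, the cloud-init line collapses to just the short name:

```shell
# the fully-qualified .novalocal entry is stripped, leaving the short hostname
echo '127.0.1.1 ns0.novalocal ns0' | sed -e 's/^127.0.1.1 ns0.novalocal/127.0.1.1/'
```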

perl-rename -v 's/720p.+mkv/720p\.mkv/' *.mkv
2014-09-25 14:07:47
User: benkaiser
Functions: perl
0

I used this (along with a variant replacing `mkv` with `srt`) to strip the release-provider differences from the video and matching subtitle filenames (they have the same contents and the subs match anyway).

So now VLC (and other video players) can easily guess the subtitle file.

sleep 10 & wait $!
2014-09-25 13:33:51
User: yorkou
Functions: sleep wait
1

A nice way to interrupt a sleep with a signal.
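
A sketch of the idea: the sleep runs in the background, so a signal sent to it makes `wait` return immediately instead of blocking for the full duration (the SIGTERM here stands in for whatever signal you'd actually send):

```shell
sleep 30 &            # long-running job in the background
pid=$!
kill "$pid"           # send SIGTERM
wait "$pid"           # returns as soon as the signal lands; status is 128+15=143
echo "interrupted with status $?"
```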

find . -iname "*.mp4" -print0 | xargs -0 mv --verbose -t /media/backup/
cd tmp ; find . |cpio -o -H newc| gzip > ../initrd.gz
2014-09-24 14:07:54
User: akiuni
Functions: cd cpio find gzip
0

This command packs the "tmp" directory into a gzip-compressed newc-format cpio archive, the format the Linux kernel expects for an initrd/initramfs.