What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.

If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try to put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - All commands - 12,066 results
function every() { sed -n -e "${2}q" -e "0~${1}p" ${3:-/dev/stdin}; }
2015-04-03 01:30:36
User: flatcap
Functions: sed

Thanks to knoppix5 for the idea :-)

Print selected lines from a file or the output of a command.


every NTH MAX [FILE]

Print every NTH line (from the first MAX lines) of FILE.

If FILE is omitted, stdin is used.

The command simply passes the input to a sed script:

sed -n -e "${2}q" -e "0~${1}p" ${3:-/dev/stdin}

suppress automatic printing of input lines

sed -n

quit after this many lines (controlled by the second parameter)

-e "${2}q"

print every NTH line (controlled by the first parameter)

-e "0~${1}p"

take input from $3 (if it exists) otherwise use /dev/stdin
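
A quick sanity check of the function, using seq as stand-in input (the 0~NTH step address is a GNU sed extension, so GNU sed is assumed):

```shell
# The function as posted; requires GNU sed for the 0~NTH address form.
every() { sed -n -e "${2}q" -e "0~${1}p" ${3:-/dev/stdin}; }

# Print every 10th of the first 45 lines: 10, 20, 30, 40.
seq 100 | every 10 45
```

Note that the ${2}q command comes first in the script, so when MAX is itself a multiple of NTH, sed quits on that line before printing it: every 10 50 also stops at 40.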

du -hsx * | sort -rh
cp -Rs dir1 dir2
2015-04-01 22:51:16
User: knoppix5
Functions: cp

dir1 and all its subdirs and subdirs of subdirs ... but *no files*

will be copied to dir2 (not even symbolic links of files will be made).

To preserve ownerships & permissions:

cp -Rps dir1 dir2

Yes, you can do it with

rsync -a --include '*/' --exclude '*' /path/to/source /path/to/dest

too, but I didn't test whether it handles attributes correctly

(experiment with the rsync command yourself using the --dry-run switch

to avoid harming your file system).

You must be in the parent directory of dir1 while executing

this command (place dir2 where you will), otherwise soft links of

files in dir2 will be made. I couldn't find a way to avoid this

"limitation" (yet), short of playing with a recursive unlink loop.


PS. Bash will complain, but the job will be done.
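
The directory-only behaviour is easy to check in a scratch directory (a sketch with made-up names; GNU cp assumed). The errors cp prints for the files are the complaints mentioned above:

```shell
# Scratch demo; all names are arbitrary.
cd "$(mktemp -d)"
mkdir -p dir1/sub/subsub
touch dir1/file1 dir1/sub/file2

# cp refuses to create the relative file symlinks outside the current
# directory (those are the complaints), but creates every directory.
cp -Rs dir1 dir2 2>/dev/null

find dir2 | sort    # dir2, dir2/sub, dir2/sub/subsub -- no files or links
```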

ssh user@server sudo date -s @`( date -u +"%s" )`
pdftk input.pdf output output.pdf user_pw YOURPASSWORD-HERE
ls | while read line; do ln -s "$(pwd)/$line" "/usr/bin/$line"; done
function every() { N=$1; S=1; [ "${N:0:1}" = '-' ] && N="${N:1}" || S=0; sed -n "$S~${N}p"; }
2015-03-21 23:44:59
User: flatcap
Functions: sed

Sometimes commands give you too much feedback.

Perhaps 1/100th might be enough. If so, every() is for you.

my_verbose_command | every 100

will print every 100th line of output.

Specifically, it will print lines 100, 200, 300, etc

If you use a negative argument it will print the *first* of a block,

my_verbose_command | every -100

It will print lines 1, 101, 201, 301, etc

The function wraps up this useful sed snippet:

... | sed -n '0~100p'

don't print anything by default

sed -n

starting at line 0, print every hundredth line ( 0~100 is GNU sed's first~step address form).


There's also some bash magic to test if the number is negative:

${N:0:1} takes 1 character of variable N, starting at position 0.

If it *is* negative (that first character is '-'), the minus sign is stripped off:

${N:1} is everything from position 1 (the second character) onwards.
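
Both behaviours can be confirmed with seq (GNU sed assumed, since first~step addressing is a GNU extension):

```shell
# The function as posted.
every() { N=$1; S=1; [ "${N:0:1}" = '-' ] && N="${N:1}" || S=0; sed -n "$S~${N}p"; }

seq 300 | every 100     # prints 100, 200, 300 (one per line)
seq 300 | every -100    # prints 1, 101, 201 (one per line)
```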

ps -ef | grep PROCESS | grep -v grep | awk '{system("kill -9 " $2)}'
pgrep -lf processname | cut -d' ' -f1 | awk '{print "cat /proc/" $1 "/net/sockstat | head -n1"}' | sh | cut -d' ' -f3 | paste -sd+ | bc
nik=clf$RANDOM;sr=irc.efnet.org;expect -c "set timeout -1;spawn nc $sr 6666;set send_human {.1 .2 1 .2 1};expect AUTH*\n ;send -h \"user $nik * * :$nik commandlinefu\nnick $nik\n\"; interact -o -re (PING.:)(.*\$) {send \"PONG :\$interact_out(2,string)\"}"
2015-03-18 09:10:28
User: omap7777

Uses the extremely cool utilities netcat and expect.

"expect" logs in & monitors for server PING checks.

When a PING is received it sends the PONG needed to stay connected.

IRC commands to try: HELP, TIME, MOTD, JOIN and PRIVMSG

The "/" in front of IRC commands is not needed, e.g. type JOIN #mygroup

Learn about expect: http://tldp.org/LDP/LGNET/issue48/fisher.html

The sample output shows snippets from an actual IRC session.

Please click UP button if you like it!

sh <(curl hashbang.sh)
2015-03-15 21:02:01
User: lrvick
Functions: sh

Bash process substitution which curls the website 'hashbang.sh' and executes the shell script embedded in the page.

This is obviously not the most secure way to run something like this, and we will scold you if you try.

The smarter way would be:

Download locally over SSL

> curl https://hashbang.sh >> hashbang.sh

Verify integrity with GPG (if available)

> gpg --recv-keys 0xD2C4C74D8FAA96F5

> gpg --verify hashbang.sh

Inspect source code

> less hashbang.sh

Make it executable and run it:

> chmod +x hashbang.sh

> ./hashbang.sh

sudo apt-get purge $(dpkg -l linux-{image,headers}-"[0-9]*" | awk '/ii/{print $2}' | grep -ve "$(uname -r | sed -r 's/-[a-z]+//')")
npm list -g --depth 0
syt() { pipe=`mktemp -u`; mkfifo -m 600 "$pipe" && for i in "$@"; do youtube-dl -qo "$pipe" "$i" & mplayer "$pipe" || break; done; rm -f "$pipe"; }
2015-03-14 01:48:20
User: snipertyler
Functions: mkfifo rm

Streams youtube-dl video to mplayer.


syt 'youtube.com/link' 'anotherlinkto.video'

Uses mplayer controls

sudo sh -c 'echo 1 > /proc/sys/kernel/dmesg_restrict'
2015-03-13 20:54:45
User: Blacksimon
Functions: sh sudo

Linux offers an interesting option to restrict the use of dmesg. It is available via /proc/sys/kernel/dmesg_restrict.

You can check the status with:

cat /proc/sys/kernel/dmesg_restrict

Alternatively you can use sysctl:

sudo sysctl -w kernel.dmesg_restrict=1

To make your change persistent across reboots, add it to a file in /etc/sysctl.d/.
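
For example (the file name below is arbitrary; any *.conf file under /etc/sysctl.d/ is read at boot):

```shell
# Arbitrary file name; anything ending in .conf under /etc/sysctl.d/ works.
echo 'kernel.dmesg_restrict = 1' | sudo tee /etc/sysctl.d/50-dmesg-restrict.conf

# Apply all configured sysctl settings now, without rebooting.
sudo sysctl --system
```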

wmctrl -m | grep Name: | awk '{print $2}'
crontest () { date +'%M %k %d %m *' |awk 'BEGIN {ORS="\t"} {print $1+2,$2,$3,$4,$5,$6}'; echo $1;}
2015-03-12 19:56:56
User: CoolHand
Functions: awk date echo

Usage: crontest "/path/to/bin"

This version of the function echoes back the entire crontab line (scheduled two minutes from now) so it can be copied and pasted into crontab. With a bit more work it could be appended to the crontab automatically. Tested with bash and zsh on Linux, FreeBSD and AIX.

crontab -l -u USER | grep -v 'YOUR JOB COMMAND or PATTERN' | crontab -u USER -
2015-03-11 13:10:47
User: Koobiac
Functions: crontab grep

The "-u USER" is optional when running as root.

sudo iptables -A INPUT -m limit --limit 2000/sec -j ACCEPT && sudo iptables -A INPUT -j DROP
2015-03-09 20:16:17
User: qdrizh
Functions: iptables sudo
Tags: iptables

VPS hosting providers may suspect a DoS attack if the packet rate (PPS) is too high. This rule limits packets at the interface level. Run "sudo apt-get install iptables-persistent" to make the rules persistent, or, if it is already installed, reconfigure with "sudo dpkg-reconfigure iptables-persistent".

echo 'export PROMPT_COMMAND="history -a; history -c; history -r; $PROMPT_COMMAND"' >> .bashrc
sqlite3 ~/.mozilla/firefox/*.[dD]efault/places.sqlite "SELECT strftime('%d.%m.%Y %H:%M:%S', dateAdded/1000000, 'unixepoch', 'localtime'),url FROM moz_places, moz_bookmarks WHERE moz_places.id = moz_bookmarks.fk ORDER BY dateAdded;"
2015-03-08 19:26:16
User: return13

Extracts your bookmarks out of SQLite with the format:


groups user1 user2|cut -d: -f2|xargs -n1|sort|uniq -d
2015-03-04 19:12:27
User: swemarx
Functions: cut groups uniq xargs

Updated according to flatcap's suggestion, thanks!

grep -xFf <(groups user1|cut -f3- -d\ |sed 's/ /\n/g') <(groups user2|cut -f3- -d\ |sed 's/ /\n/g')
install -m 0400 foo bar/
2015-03-02 13:20:38
User: op4
Functions: install
Tags: backup mv cp

Prior to working on/modifying a file, use the 'install -m' command, which can copy files, create directories, and set their permissions in one step. Useful when you are working in the public_html folder and need to keep the copied file hidden.
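
A minimal sketch in a scratch directory (GNU coreutils assumed; file names are made up):

```shell
cd "$(mktemp -d)"
echo 'secret' > foo
mkdir bar

# Copy foo into bar/ and set its mode to 0400 in one step.
install -m 0400 foo bar/

# -d creates a directory (with the given mode) instead of copying.
install -d -m 0700 baz

stat -c '%a' bar/foo    # 400
stat -c '%a' baz        # 700
```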

for f in input/*; do BN=$(basename "$f"); ffmpeg -i "$f" -vn "temp/$BN.flac"...
2015-03-01 02:48:19
Functions: basename

Full command:

for f in input/*; do BN=$(basename "$f"); ffmpeg -i "$f" -vn "temp/$BN.flac"; sox "temp/$BN.flac" "temp/$BN-cleaned.flac" noisered profile 0.3; ffmpeg -i "$f" -vcodec copy -an "temp/$BN-na.mp4"; ffmpeg -i "temp/$BN-na.mp4" -i "temp/$BN-cleaned.flac" "output/$BN"; done

This was over the 255 character limit and I didn't feel like deliberately obfuscating it.

1. Create 'input', 'output' and 'temp' directories.

2. Place the files that you want to remove the hiss/static/general noise from in the input directory.

3. Generate a noise reduction profile with sox using 'sox an_input_file.mp4 -n trim x y noiseprof profile', where x and y indicate a range in seconds during which only the sound you want to eliminate is present.

4. Run the command.