
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).


News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - All commands - 12,034 results
pdftk input.pdf output output.pdf user_pw YOURPASSWORD-HERE
ls | while read line; do ln -s "$(pwd)/$line" "/usr/bin/$line"; done
function every() { N=$1; S=1; [ "${N:0:1}" = '-' ] && N="${N:1}" || S=0; sed -n "$S~${N}p"; }
2015-03-21 23:44:59
User: flatcap
Functions: sed
1

Sometimes commands give you too much feedback.

Perhaps 1/100th might be enough. If so, every() is for you.

my_verbose_command | every 100

will print every 100th line of output.

Specifically, it will print lines 100, 200, 300, etc

If you use a negative argument it will print the *first* of a block,

my_verbose_command | every -100

It will print lines 1, 101, 201, 301, etc

The function wraps up this useful sed snippet:

... | sed -n '0~100p'

"sed -n" - don't print anything by default

'0~100p' - starting at line 0, print every hundredth line ( ~100 ).

There's also some bash magic to test if the number is negative:

${N:0:1} means we want character 0, length 1, of variable N.

If it *is* negative, strip off the first character: ${N:1} is character 1 onwards (i.e. from the second character).
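
To see it in action (note that the first~step address is a GNU sed extension):

seq 1 1000 | every 100

prints lines 100, 200, ... 1000, while

seq 1 1000 | every -100

prints lines 1, 101, 201, ... 901.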

ps -ef | grep PROCESS | grep -v grep | awk '{system("kill -9 " $2)}'
nik=clf$RANDOM;sr=irc.efnet.org;expect -c "set timeout -1;spawn nc $sr 6666;set send_human {.1 .2 1 .2 1};expect AUTH*\n ;send -h \"user $nik * * :$nik commandlinefu\nnick $nik\n\"; interact -o -re (PING.:)(.*\$) {send \"PONG :\$interact_out(2,string)\"}"
2015-03-18 09:10:28
User: omap7777
0

Uses the extremely cool utilities netcat and expect.

"expect" logs in & monitors for server PING checks.

When a PING is received it sends the PONG needed to stay connected.

IRC commands to try: HELP, TIME, MOTD, JOIN and PRIVMSG

The "/" in front of IRC commands are not needed, e.g. type JOIN #mygroup

Learn about expect: http://tldp.org/LDP/LGNET/issue48/fisher.html

The sample output shows snippets from an actual IRC session.

Please click the UP button if you like it!
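
A stripped-down sketch of the same expect pattern (the nick is a placeholder, and this bare version leaves answering PINGs to you):

expect -c 'spawn nc irc.efnet.org 6666; expect "AUTH"; send "USER mynick * * :mynick\nNICK mynick\n"; interact'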

sh <(curl hashbang.sh)
2015-03-15 21:02:01
User: lrvick
Functions: sh
3

Bash process substitution which curls the website 'hashbang.sh' and executes the shell script embedded in the page.

This is obviously not the most secure way to run something like this, and we will scold you if you try.

The smarter way would be:

Download locally over SSL

> curl https://hashbang.sh >> hashbang.sh

Verify integrity with GPG (if available)

> gpg --recv-keys 0xD2C4C74D8FAA96F5

> gpg --verify hashbang.sh

Inspect source code

> less hashbang.sh

Run

> chmod +x hashbang.sh

> ./hashbang.sh

sudo apt-get purge $(dpkg -l linux-{image,headers}-"[0-9]*" | awk '/ii/{print $2}' | grep -ve "$(uname -r | sed -r 's/-[a-z]+//')")
npm list -g --depth 0
syt() { pipe=`mktemp -u`; mkfifo -m 600 "$pipe" && for i in "$@"; do youtube-dl -qo "$pipe" "$i" & mplayer "$pipe" || break; done; rm -f "$pipe"; }
2015-03-14 01:48:20
User: snipertyler
Functions: mkfifo rm
2

Streams youtube-dl video to mplayer.

Usage:

syt 'youtube.com/link' 'anotherlinkto.video'

Uses mplayer controls
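
For a single video, a plain pipe also works, assuming your mplayer can read a stream from stdin:

youtube-dl -qo - 'youtube.com/link' | mplayer -

The catch is that mplayer's keyboard controls are then lost, because stdin carries the stream; the fifo in the function above avoids that.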

wmctrl -m | grep Name: | awk '{print $2}'
crontab -l -u USER | grep -v 'YOUR JOB COMMAND or PATTERN' | crontab -u USER -
2015-03-11 13:10:47
User: Koobiac
Functions: crontab grep
1

The "-u USER" is optional if root user is used

sudo iptables -A INPUT -m limit --limit 2000/sec -j ACCEPT && sudo iptables -A INPUT -j DROP
2015-03-09 20:16:17
User: qdrizh
Functions: iptables sudo
Tags: iptables
1

VPS hosting providers may suspect a DoS attack if your packets-per-second rate (PPS) is too high. This limits packets at the interface level. Note that the order matters: the rate-limiting ACCEPT rule must be appended before the blanket DROP. Do "sudo apt-get install iptables-persistent" to make the rules persistent, or, if you already have it, reconfigure with "sudo dpkg-reconfigure iptables-persistent".
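
To check the resulting rules and their order:

sudo iptables -L INPUT -v -n --line-numbers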

echo 'export PROMPT_COMMAND="history -a; history -c; history -r; $PROMPT_COMMAND"' >> .bashrc
sqlite3 ~/.mozilla/firefox/*.[dD]efault/places.sqlite "SELECT strftime('%d.%m.%Y %H:%M:%S', dateAdded/1000000, 'unixepoch', 'localtime'),url FROM moz_places, moz_bookmarks WHERE moz_places.id = moz_bookmarks.fk ORDER BY dateAdded;"
2015-03-08 19:26:16
User: return13
2

Extracts your bookmarks out of the sqlite database in the format:

dateAdded|url
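
e.g. a line of output might look like this (illustrative only):

14.02.2015 18:30:05|http://www.commandlinefu.com/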

groups user1 user2|cut -d: -f2|xargs -n1|sort|uniq -d
2015-03-04 19:12:27
User: swemarx
Functions: cut groups uniq xargs
2

Updated according to flatcap's suggestion, thanks!
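
To see why it works, suppose (hypothetically) that both users share the group staff. groups prints one line per user:

user1 : user1 staff
user2 : user2 staff

cut keeps everything after the colon, xargs -n1 puts each group on its own line, and after sort, uniq -d prints only the names that occur in both lists - here, staff.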

grep -xFf <(groups user1|cut -f3- -d\ |sed 's/ /\n/g') <(groups user2|cut -f3- -d\ |sed 's/ /\n/g')
install -m 0400 foo bar/
2015-03-02 13:20:38
User: op4
Functions: install
Tags: backup mv cp
3

Prior to working on/modifying a file, use the 'install -m' command, which can copy files, create directories, and set their permissions at the same time. Useful when you are working in the public_html folder and need to keep the cp'd file hidden.
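
For example, to stash a read-only copy of a (hypothetical) index.php before editing it:

install -d -m 0700 .backups
install -m 0400 index.php .backups/

The first call creates the backup directory with safe permissions; the second copies the file into it and makes the copy read-only in one step.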

for f in input/*; do BN=$(basename "$f"); ffmpeg -i "$f" -vn "temp/$BN.flac"...
2015-03-01 02:48:19
Functions: basename
0

Full command:

for f in input/*; do BN=$(basename "$f"); ffmpeg -i "$f" -vn "temp/$BN.flac"; sox "temp/$BN.flac" "temp/$BN-cleaned.flac" noisered profile 0.3; ffmpeg -i "$f" -vcodec copy -an "temp/$BN-na.mp4"; ffmpeg -i "temp/$BN-na.mp4" -i "temp/$BN-cleaned.flac" "output/$BN"; done

This was over the 255 character limit and I didn't feel like deliberately obfuscating it.

1. Create 'input', 'output' and 'temp' directories.

2. Place the files that you want to remove the hiss/static/general noise from in the input directory.

3. Generate a noise reduction profile with sox using 'sox an_input_file.mp4 -n trim x y noiseprof profile', where x and y indicate a range in seconds in which only the sound you want to eliminate is present (see the example after these steps).

4. Run the command.
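
For example, if the first two seconds of a (hypothetical) input/clip.mp4 contain only the background noise:

mkdir -p input temp output
sox input/clip.mp4 -n trim 0 2 noiseprof profile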

for i in /usr/share/cowsay/cows/*.cow; do cowsay -f $i "$i"; done
2015-02-26 20:56:45
User: wincus
2

There are lots of different cow files to choose from; this loop will show them all.

truncate --size 1G bigfile.txt
2015-02-26 11:56:27
User: ynedelchev
2

If you want to quickly create a very big file for testing purposes and you do not care about its content, then you can use this command to create a file of arbitrary size in less than a second. The content of the file will read as all zero bytes.

The trick is that the content is not actually written to disk: the file is created as a sparse file, so the space is only allocated by the file system when it is first written. That is what makes the file creation super fast.

Instead of '1G' as in the example, you can use other modifiers like 200K for kilobytes (1024 bytes), 500M for megabytes (1024 * 1024 bytes), 20G for gigabytes (1024^3 bytes) or 30T for terabytes (1024^4 bytes). Also P for peta, etc.

Command tested under Linux.
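
You can confirm that no blocks are actually allocated by comparing the apparent size with the disk usage:

truncate --size 1G bigfile.txt
ls -lh bigfile.txt
du -h bigfile.txt

ls reports the apparent size (1.0G), while du reports the space actually used (0).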

xsel -bc
2015-02-26 01:11:03
User: benjabean1
1

Clears your clipboard if xsel is installed on your machine.

If your xsel is dumb, you can also use

xsel --clear --clipboard
awk '!NF || !seen[$0]++'
2015-02-25 17:03:13
User: Soubsoub
Functions: awk
1

Remove duplicate lines whilst keeping order and empty lines
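
For example:

printf 'a\nb\na\n\nb\nc\n' | awk '!NF || !seen[$0]++'

prints a, b, an empty line and c: empty lines (!NF is true) always pass, while any other line is printed only the first time it is seen.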

sqlite3 ~/.mozilla/firefox/*.[dD]efault/places.sqlite "SELECT strftime('%d.%m.%Y %H:%M:%S', visit_date/1000000, 'unixepoch', 'localtime'),url FROM moz_places, moz_historyvisits WHERE moz_places.id = moz_historyvisits.place_id ORDER BY visit_date;"
2015-02-24 21:51:14
User: return13
6

This is the way to get access to your Firefox history...

lame -v 2 -b 192 --ti /path/to/file.jpg audio.mp3 new-audio.mp3
sed -n '/url/s#^.*url=\(.*://.*\)#\1#p' ~/.mozilla/firefox/*.[dD]efault/SDBackups/*.speeddial | sort | uniq
2015-02-17 20:56:28
User: return13
Functions: sed sort
0

For all users of https://addons.mozilla.org/de/firefox/addon/speed-dial/