
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):



News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try to put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.
Terminal - All commands - 11,859 results
tail -f FILE | ccze
YEAR=2015; echo Jul $(ncal 7 $YEAR | awk '/^Fr/{print $NF}')
2014-08-17 11:12:09
User: andreasS
Functions: awk echo
Tags: awk date
0

Calculate the date of Sysadmin day (last Friday of July) of any given year

YEAR=2015; date -d${YEAR}0801-$(date -d${YEAR}0801+2days +%u)days +%b\ %e
2014-08-17 11:06:25
User: andreasS
Functions: date
Tags: date
0

Calculate the date of Sysadmin day (last Friday of July) of any given year using two `date` invocations. Code based on http://stackoverflow.com/a/5656859/196133
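
Why the arithmetic works (a sketch, assuming GNU date): the weekday number %u (Mon=1 … Sun=7) of August 3rd is exactly how many days August 1st lies after the last Friday of July, so subtracting that many days from August 1st lands on that Friday:

```shell
# Last Friday of July, per year (assumes GNU date).
# %u of August 3rd (Mon=1 .. Sun=7) equals the distance in days
# from August 1st back to the previous Friday.
for YEAR in 2014 2015; do
  date -d ${YEAR}0801-$(date -d ${YEAR}0801+2days +%u)days '+%Y %b %e'
done
```

For 2015 this prints `2015 Jul 31`.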

YEAR=2015; ncal 7 $YEAR | sed -n 's/^Fr.* \([^ ]\+\) *$/Jul \1/p'
2014-08-17 11:04:02
User: andreasS
Functions: sed
Tags: sed date
0

Calculate the date of Sysadmin day (last Friday of July) of any given year

awp () { awk '{print $'$1'}'; }
if [ "`curl -s --head domain.tld | grep HTTP | cut -d" " -f2`" != "200" ];then echo "error"; echo "doing else" ;fi
grep Failed auth.log | rev | cut -d\ -f4 | rev | sort -u
2014-08-14 14:57:41
User: supradave
Functions: cut grep rev sort
0

Find the "Failed" lines, reverse each line (because only three fields follow the IP address in my file: "port", the port number, and "ssh2"), cut the 4th field (yes, you could use awk '{print $4}'), reverse the output back to normal, and then sort -u (equivalent to sort | uniq).
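
The rev/cut/rev trick can be verified on a canned line (the IP and message format below are made up):

```shell
# Hypothetical sshd failure line; counting fields from the end of the
# line, the IP sits 4th-from-last (before 'port', the number, 'ssh2').
printf '%s\n' \
  'sshd[123]: Failed password for root from 192.0.2.10 port 4242 ssh2' \
  | grep Failed | rev | cut -d' ' -f4 | rev | sort -u
# -> 192.0.2.10
```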

echo 1 > /proc/sys/sunrpc/nfs_debug
2014-08-12 14:40:55
User: harpo
Functions: echo
0

echo 1 > /proc/sys/sunrpc/nfs_debug && tail -f /var/log/messages

to debug NFS issues.

e() { echo $(curl -o /dev/null --silent --head --write-out '%{http_code}\n' $1); }
2014-08-11 20:51:45
Functions: echo
1

This function prints the HTTP status code of a web page using curl.

mysqldump -pyourpass --single-transaction --master-data=2 -q --flush-logs --databases db_for_doslave |tee /home/db_bak.sql |ssh mysqladmin@slave.db.com "mysql"
2014-08-11 05:57:21
User: dragonwei
Functions: ssh tee
0

Get the master info:

head -n 40 /home/db_bak.sql | awk '$0~/MASTER_LOG_FILE/'

Then, on the slave server:

run CHANGE MASTER TO with those coordinates, and

START SLAVE
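
The slave-side step might look like the following (a sketch only; the host, credentials, and the log file/position are placeholders — read the real coordinates from the `CHANGE MASTER TO` comment near the top of db_bak.sql):

```sql
-- Hypothetical values; substitute the coordinates from the dump header.
CHANGE MASTER TO
  MASTER_HOST='master.db.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000042',
  MASTER_LOG_POS=120;
START SLAVE;
```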

ls -la | grep ^l
find /target_directory -mindepth 2 -type f -mmin -60
2014-08-09 06:59:34
User: vikranth
Functions: find
0

To search for files in /target_directory and all its sub-directories, that have been modified in the last 60 minutes:

find /target_directory -type f -mmin -60

To search for files in /target_directory and all its sub-directories, that have been modified in the last 2 days:

find /target_directory -type f -mtime -2

To search for files in /target_directory and all its sub-directories no more than 3 levels deep, that have been modified in the last 2 days:

find /target_directory -maxdepth 3 -type f -mtime -2
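
On GNU findutils, depth limiting uses -maxdepth/-mindepth (note the single dash). A small sketch over a throwaway directory tree:

```shell
# Sketch of find's depth controls (GNU findutils): -maxdepth N stops
# N levels below the starting point, -mindepth N skips shallower entries.
demo=$(mktemp -d)
mkdir -p "$demo/a/b"
touch "$demo/top.txt" "$demo/a/mid.txt" "$demo/a/b/deep.txt"

find "$demo" -maxdepth 1 -type f              # only top.txt
find "$demo" -mindepth 2 -type f -mtime -2    # mid.txt and deep.txt
```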
eval echo $(echoprint-codegen "/path/to/file.mp3"| jq ' .[0].metadata | "mp3info -a \"" + .artist + "\" -t \"" + .title + "\" -l \"" + .release + "\" \"" + .filename + "\"" ' ) | bash
2014-08-08 21:14:53
User: glaudiston
Functions: echo eval
0

echoprint identifies the song, then returns the artist, song name and album name (release) as JSON. jq parses it and mp3info writes the data into your mp3 file.

of course it depends on:

mp3info

jq

echoprint

You need to set the environment variable

export CODEGEN_NEST_API_KEY='YOUR_ECHONEST_KEY_HERE'

You can use it with find, but you will probably exceed the 120 requests/minute limit of a developer account key, so add a sleep between files.

Something like:

find -name \*.mp3 | while read -r f; do eval echo $(echoprint-codegen "$f" | jq ' .[0].metadata | "mp3info -a \"" + .artist + "\" -t \"" + .title + "\" -l \"" + .release + "\" \"" + .filename + "\"" ' ) | bash; sleep 1; done
dig +short -x 127.0.0.1
[ `curl 'http://crl.godaddy.com/gds5-16.crl' 2>/dev/null | openssl crl -inform DER -noout -nextupdate | awk -F= '{print $2}' | xargs -I{} date -d {} +%s` -gt `date -d '8 hours' +%s` ] && echo "OK" || echo "Expires soon"
2014-08-07 17:18:38
User: hufman
Functions: awk date echo xargs
Tags: openssl
0

Downloads a CRL file, determines its expiration time, and checks whether it expires within the next 8 hours.
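
The comparison at the heart of the check is plain epoch arithmetic with GNU date; a minimal sketch with a hypothetical nextUpdate value:

```shell
# Hypothetical expiry string (the real one comes from
# 'openssl crl -noout -nextupdate').
expiry="Jan  1 00:00:00 2099 GMT"
# "OK" if the expiry is more than 8 hours away (assumes GNU date -d).
[ "$(date -d "$expiry" +%s)" -gt "$(date -d '8 hours' +%s)" ] \
  && echo "OK" || echo "Expires soon"
```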

tail -f LOG_FILE | grep --line-buffered SEARCH_STR | cut -d " " -f 7-
2014-08-07 10:40:45
User: pjsb
Functions: cut grep tail
Tags: grep cut tail -f
0

Outputs / monitors the content of the LOG_FILE that matches the SEARCH_STR. The output is cut on spaces (as the delimiter), keeping field 7 through the end of each line.
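
The cut stage can be exercised on a canned line (the format and content below are made up):

```shell
# Space-delimited fields; -f 7- keeps field 7 through end of line.
line='Aug 7 10:40:45 host app[123]: INFO request served in 12ms'
echo "$line" | cut -d ' ' -f 7-
# -> request served in 12ms
```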

journalctl --unit=named --follow
2014-08-07 04:02:58
User: anomalyst
0

Prints and follows the systemd journal entries for the BIND named.service unit (on Arch Linux; your distro's bind service may have a different name).

yum list installed| awk '{print $1}'| grep -e "x86" -e "noarch" | grep -v -e '^@'| sort
2014-08-06 23:13:24
Functions: awk grep
0

Great for moves, re-installs etc., since it is not version-specific yet is architecture-specific.

CentOS's yum list is well known for wrapping long lines.
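
The filtering pipeline can be tried on canned output (the package lines below are made up):

```shell
# Hypothetical 'yum list installed' output; the pipeline keeps the
# package column, restricts it to x86/noarch entries, and drops any
# wrapped repo fragments that start with '@'.
printf '%s\n' \
  'bash.x86_64          4.2.46-34.el7   @base' \
  'tzdata.noarch        2019c-1.el7     @updates' \
  'Loaded plugins: fastestmirror' \
  | awk '{print $1}' | grep -e x86 -e noarch | grep -v -e '^@' | sort
# -> bash.x86_64
#    tzdata.noarch
```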

sudo dpkg -P $(dpkg -l yourPkgName* | awk '$2 ~ /yourPkgName.*/ && $1 ~ /.i/ {print $2}')
2014-08-06 22:40:32
User: wejn
Functions: awk sudo
Tags: dpkg purge
0

Recently in Debian Wheezy the dpkg command refuses to work with wildcards, so this is the one-liner alternative. (alternative to #13614)
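
The awk filter can be checked against canned `dpkg -l`-style lines (the package names are made up; column 1 is the status, column 2 the name):

```shell
# Hypothetical 'dpkg -l' lines: 'ii' = installed, 'rc' = removed but
# configured. The awk pattern keeps installed entries ($1 ~ /.i/)
# whose name matches the prefix ($2 ~ /yourPkgName.*/).
printf '%s\n' \
  'ii  yourPkgName-core  1.0-1  amd64  core package' \
  'rc  yourPkgName-doc   1.0-1  all    docs' \
  'ii  unrelated-pkg     2.0-1  amd64  other' \
  | awk '$2 ~ /yourPkgName.*/ && $1 ~ /.i/ {print $2}'
# -> yourPkgName-core
```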

sudo restart lightdm
svn merge -r 854:853 l3toks.dtx
hl() { while read -r; do printf '%s\n' "$(perl -p -e 's/('"$1"')/\a\e[7m$1\e[0m/g' <<< "$REPLY")"; done; }
(echo -e '\x06\x00\x00\x00\x00\x00\x01\x01\x00'; sleep 1)|nc -c $host 25565
w !sudo cat >%
sudo dpkg -P $(sudo dpkg -l yourPkgName* | awk '$2 ~ /yourPkgName.*/' | awk '$1 ~ /.i/' | awk '{print $2}')
2014-08-02 18:14:02
User: woohoo
Functions: awk sudo
Tags: dpkg purge
0

Recently in Debian Wheezy the dpkg command refuses to work with wildcards, so this is the one-liner alternative.