What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.

Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):


News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands require moderation before they will appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - Commands tagged unix - 60 results
find /var/log -type f -iregex '.*[^\.][^0-9]+$' -not -iregex '.*gz$' 2> /dev/null | xargs tail -n0 -f | ccze -A
2014-07-29 17:11:17
User: rubo77
Functions: find tail xargs
Tags: unix ccze logging
4

This will show all changes to any log file under /var/log/ that is a regular file and whose name doesn't end in `gz` or in a digit.
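The name-filtering logic can be tried on its own with grep over sample file names (a rough stand-in for find's -iregex tests; ccze is just a log colorizer):

```shell
# Keep names whose final characters are non-digits (preceded by a
# non-dot), then drop anything ending in gz -- the same idea as the
# two -iregex expressions above.
printf '/var/log/syslog\n/var/log/syslog.1\n/var/log/dmesg.gz\n' \
  | grep -iE '[^.][^0-9]+$' | grep -ivE 'gz$'
```

Only /var/log/syslog survives: syslog.1 ends in a digit and dmesg.gz is compressed.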

sudo /sbin/route add -host 192.168.12.50 -interface ppp0
2014-04-13 00:17:53
User: jifilis
Functions: sudo
Tags: unix VPN route
0

In this example, 192.168.12.50 is the host that should be routed via the VPN, and "ppp0" is the name of the VPN network interface (ifconfig shows the list of network interfaces). The host can be an IP address or a domain name.

du -g | perl -ne 'print if (tr#/#/# == <maximum depth>)'
2014-02-15 07:33:36
User: RAKK
Functions: du perl
Tags: perl du unix aix
0

Lists directory sizes up to a maximum traversal depth on systems like IBM AIX, where the du command doesn't have Linux's --max-depth option. AIX's du uses -g to display directory size in gigabytes, -m for megabytes, and -k for kilobytes. tr### is a Perl operator that replaces characters and returns the number of characters changed, so in this case it returns how many slashes there are in the full path name.

dd bs=1M if=/dev/scd0 of=./filename.iso OR readom -v dev='D:' f='./filename.iso' speed=2 retries=8
2013-10-23 15:53:27
User: scotharkins
Functions: dd
-1

This example is taken from Cygwin running on Win7Ent-64. Device names will vary by platform.

Both commands resulted in identical files per the output of md5sum, and ran in the same time down to the second (2m45s), less than 100ms apart. I timed the commands with 'time', which, added before 'dd' or 'readom', gives execution times after the command completes. See 'man time' for more info...it can be found on any Unix or Linux newer than 1973. Yeah, that means everywhere.

readom is supposed to guarantee good reads, and does support flags for bypassing bad blocks where dd will either fail or hang.

readom's verbosity gave more interesting output than dd.

On Cygwin, my attempt with 'readom' from the first answer actually ended up reading my hard drive. Both attempts got to 5GB before I killed them, seeing as that is past any CD or standard DVD.

dd:

'bs=1M' says "read 1MB into RAM from the source, then write that 1MB to the output." I also tested 10MB, which shaved the time down to 2m42s.

'if=/dev/scd0' selects Cygwin's representation of the first CD-ROM drive.

'of=./filename.iso' simply means "create filename.iso in the current directory."

readom:

'-v' says "be a little noisy (verbose)." The man page implies more verbosity with more 'v's, e.g. -vvv.

dev='D:' in Cygwin explicitly specifies the D-drive. I tried other entries, like '/dev/scd0' and '2,0', but both read from my hard drive instead of the CD-ROM. I imagine my LUN-foo (2,0) was off for my system, but on Cygwin 'D:' sort of "cut to the chase" and did the job.

f='./filename.iso' specifies the output file.

speed=2 simply sets the speed at which the CD is read. I also tried 4, which ran in the exact same 2m45s.

retries=8 simply means try reading a block up to 8 times before giving up. This is useful for damaged media (scratches, glue lines, etc.), allowing you to automatically "get everything that can be copied" so you at least have most of the data.
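The bs/if/of/count mechanics can be exercised harmlessly against /dev/zero instead of a CD device (file name invented for the demo; stat -c is GNU coreutils):

```shell
# Copy two 1 MB blocks, mirroring the bs=1M semantics of the rip.
dd bs=1M if=/dev/zero of=/tmp/dd-demo.img count=2 2>/dev/null
stat -c %s /tmp/dd-demo.img   # 2 * 1048576 = 2097152 bytes
```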

echo '#! /usr/bin/ksh\ncat $2 | openssl dgst -sha256 | read hashish; if [[ $hashish = $1 ]]; then echo $2: OK; else echo $2: FAILED; fi;' > shacheck; chmod +x shacheck; cat hashishes.sha256 | xargs -n 2 ./shacheck;
2013-09-18 21:51:20
User: RAKK
Functions: cat chmod echo read xargs
0

This command is used to verify a sha256sum-formatted file hash list on IBM AIX or any other UNIX-like OS that has openssl but doesn't have sha256sum by default. Steps:

1: Save to the filesystem a script that:

A: Receives as arguments the two parts of one line of a sha256sum listing

B: Feeds a file into openssl on SHA256 standard input hash calculation mode, and saves the result

C: Compares the calculated hash against the one received as argument

D: Outputs the result in a sha256sum-like format

2: Make the script runnable

3: Feed the sha256sum listing to xargs, running the aforementioned script and passing 2 arguments at a time
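Assuming openssl is available, the hash-compare step the script performs can be sketched as a plain shell function (the function name mirrors the script above; the digest used is the well-known SHA-256 of an empty file):

```shell
# Compare an expected SHA-256 hash ($1) against a file ($2),
# printing sha256sum-style OK/FAILED output.
shacheck() {
  h=$(openssl dgst -sha256 < "$2" | awk '{print $NF}')
  if [ "$h" = "$1" ]; then echo "$2: OK"; else echo "$2: FAILED"; fi
}

: > /tmp/empty.dat
shacheck e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 /tmp/empty.dat
```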

echo '#! /usr/bin/ksh\necho `cat $1 | openssl dgst -sha256` $1' > sslsha256; chmod +x sslsha256; find directory -type f -exec ./sslsha256 \{\} \;
2013-09-18 17:37:50
User: RAKK
Functions: chmod echo find
0

This command is for producing GNU sha256sum-compatible hashes on UNIX systems that don't have sha256sum but do have OpenSSL, such as stock IBM AIX.

1.- Saves a wrapper script for UNIX find that does the following:

A.- Feeds a file to openssl on SHA256 hash calculation mode

B.- Echoes the output followed by the filename

2.- Makes the file executable

3.- Runs find on a directory, only processing files, and running on each one the wrapper script that calculates SHA256 hashes

Still pending: figuring out how to verify a sha256sum file in a similar environment.

for i in '/tmp/file 1.txt' '/tmp/file 2.jpg'; do ln -s "$i" "$i LINK"; done
2013-08-02 08:30:50
User: qwertyroot
Functions: ln
0

Replace

'/tmp/file 1.txt' '/tmp/file 2.jpg'

with

"$NAUTILUS_SCRIPT_SELECTED_FILE_PATHS"

for Nautilus script

Or with

%F

for Thunar action

If you are linking symlinks themselves, but want the new links to point at the source files instead of at the symlinks, use

"`readlink -m "$i"`"

instead of

"$i"

like this:

for i in '/tmp/file 1.txt' '/tmp/file 2.jpg'; do ln -s "`readlink -m "$i"`" "$i LINK"; done
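A quick check of the readlink -m variant with throwaway files (GNU readlink; the temp directory path comes from mktemp):

```shell
d=$(mktemp -d) && cd "$d"
echo hi > 'file 1.txt'
ln -s 'file 1.txt' intermediate                  # symlink to the real file
ln -s "$(readlink -m intermediate)" 'file 1.txt LINK'
readlink 'file 1.txt LINK'                       # ...points at the real file
```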


i=0; while [ $i -lt 100 ]; do echo "test, ttest, tttest-${i}" >> kk.file; i=`expr $i + 1`; done
2012-09-13 21:46:18
User: kaushalmehra
Functions: echo
-3

The general form is:

while test-command
do
command
command
...
done

(test-command is executed and its exit status tested on each iteration.)

A for loop entered interactively, with the shell's secondary prompt, looks like:

for i in 1 2 3

> do

> echo $i

> done
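The same counting loop with shell arithmetic in place of expr (POSIX sh; trimmed to 3 iterations for the sketch):

```shell
i=0
while [ "$i" -lt 3 ]; do
  echo "test-${i}"
  i=$((i + 1))   # modern replacement for i=`expr $i + 1`
done
```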

cp foo.txt foo.txt.tmp; sed '$ d' foo.txt.tmp > foo.txt; rm -f foo.txt.tmp
2012-09-13 20:57:40
User: kaushalmehra
Functions: cp rm sed
Tags: sed unix
-2

sed '$ d' foo.txt.tmp

...deletes the last line of the file.

tail -n +2 foo.txt

...prints the file starting from the second line, i.e. it deletes the first line instead.
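Both behaviours side by side on a throwaway three-line file:

```shell
printf 'first\nmiddle\nlast\n' > /tmp/foo-demo.txt
sed '$ d' /tmp/foo-demo.txt    # drops the last line  -> first, middle
tail -n +2 /tmp/foo-demo.txt   # drops the first line -> middle, last
```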
lspv hdisk1
2012-09-13 15:40:58
User: kaushalmehra
Tags: hard disk unix
0

This physical volume - hdisk1 - has TOTAL PPs: 11999 (1535872 megabytes) -> 1.5 TB

find /backup/directory -name "FILENAME_*" -mtime +15 -exec rm -vf {} +
for foo in <list of directories>; do while find $foo -type d -empty 2>/dev/null| grep -q "."; do find $foo -type d -empty -print0 | xargs -0 rmdir; done; done
2012-05-23 08:09:16
Functions: find grep xargs
0

This will check if there are any empty directories, or newly emptied directories, in a list of directories, and then delete all of them. It works with GNU find and OS X/BSD find.
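A self-contained run against a freshly made nest of empty directories (temp paths come from mktemp):

```shell
d=$(mktemp -d)
mkdir -p "$d/a/b/c"   # three levels of nested, empty directories
while find "$d" -type d -empty 2>/dev/null | grep -q "."; do
  find "$d" -type d -empty -print0 | xargs -0 rmdir
done
# each pass removes the currently empty leaves; the loop stops once
# nothing empty remains (here even "$d" itself ends up removed)
```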

find . -iname '*.java' -type f -exec bash -c "iconv -f WINDOWS-1252 -t UTF-8 {} > {}.tmp " \; -exec mv {}.tmp {} \;
find . -type f -exec awk '/linux/ { printf "%s %s: %s\n",FILENAME,NR,$0; }' {} \;
rsync --daemon --port 9999 --no-detach -v --config .rsyncd.conf
2011-09-22 20:48:31
User: pykler
Functions: rsync
-3

An example config file is placed in the sample output along with the command line call to use it.

The rsync daemon here is set up on the destination, thus requiring the "read only = false" flag. It also uses the uid and gid of root; change as required.

find /myfs -size +209715200c -exec du -m {} \; |sort -nr |head -10
2011-07-07 21:12:46
User: arlequin
Functions: du find head sort
2

Specify the size in bytes using the 'c' option for the -size flag. The + sign reads as "bigger than". Then execute du on the list; sort in reverse mode and show the first 10 occurrences.
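The same pipeline against a scratch directory with known sizes (threshold lowered to 1 MB and file names invented for the demo):

```shell
d=$(mktemp -d)
dd if=/dev/zero of="$d/big.bin" bs=1024 count=2048 2>/dev/null   # 2 MB
dd if=/dev/zero of="$d/small.bin" bs=1024 count=16 2>/dev/null   # 16 KB
find "$d" -size +1048576c -exec du -m {} \; | sort -nr | head -10
# only big.bin exceeds 1048576 bytes, so it is the single line shown
```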

function rot13 { if [ -r $1 ]; then cat $1 | tr '[N-ZA-Mn-za-m5-90-4]' '[A-Za-z0-9]'; else echo $* | tr '[N-ZA-Mn-za-m5-90-4]' '[A-Za-z0-9]'; fi }
2011-03-18 09:59:41
User: twjolson
Functions: cat echo tr
Tags: Linux unix tr
-5

Will rot13 whatever parameter follows 'rot13', whether it is a string or a file. Additionally, it will rot5 each digit in a number.
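Feeding the tr mapping a sample string shows both effects at once:

```shell
# 'Hello' -> 'Uryyb' (rot13); digits shift by 5, so '123' -> '678'
echo 'Hello 123' | tr '[N-ZA-Mn-za-m5-90-4]' '[A-Za-z0-9]'
```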

alias rot13="tr a-zA-Z n-za-mN-ZA-M"
svn log -q | grep '^r[0-9]' | cut -f2 -d "|" | sort | uniq -c | sort -nr
2011-01-03 15:23:08
User: kkapron
Functions: cut grep sort uniq
2

Lists the top committers (and the number of their commits) of an svn repository.

In this example it counts revisions of the repository in the current directory.
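The counting tail of the pipeline can be checked against fake svn log -q output (usernames invented):

```shell
# bob committed twice, alice once; sort -nr puts bob first.
printf 'r3 | bob | 2011-01-01\nr2 | alice | 2011-01-01\nr1 | bob | 2011-01-01\n' \
  | grep '^r[0-9]' | cut -f2 -d "|" | sort | uniq -c | sort -nr
```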

alias a=" killall rapidly_spawning_process"; a; a; a;
2010-05-20 02:33:28
User: raj77_in
Functions: alias
Tags: Linux unix kill
3

If you don't want to create an alias, you can instead do

killall rapidly_spawning_process ; !! ; !! ; !!

killall rapidly_spawning_process ; killall rapidly_spawning_process ; killall rapidly_spawning_process
2010-05-20 00:26:10
Functions: killall
Tags: Linux unix kill
-2

Use this if you can't type repeated killall commands fast enough to kill rapidly spawning processes.

If a process keeps spawning copies of itself too rapidly, it can do so faster than a single killall can catch them and kill them. Retyping the command at the prompt can be too slow too, even with command history retrieval.

Chaining a few killalls on a single command line starts the next killall more quickly. The first killall will get most of the processes, except for some that were starting up in the meanwhile; the second will get most of the rest, and the third mops up.

utime(){ python -c "import time; print(time.strftime('%a %b %d %H:%M:%S %Y', time.localtime($1)))"; }
utime(){ awk -v d=$1 'BEGIN{print strftime("%a %b %d %H:%M:%S %Y", d)}'; }