
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign-in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 or 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):



News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try to put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - All commands - 11,926 results
tar cvzf - dir | ssh my_server 'tar xzf -'
watch -n5 ss \| grep -c WAIT
2014-07-30 17:09:08
User: djpohly
Functions: grep watch
Tags: tcp ss iproute2
0

Uses the suggestions by jld on #12421, as well as the new iproute2 tools instead of the old net-tools.

find /var/log -type f -iregex '.*[^\.][^0-9]+$' -not -iregex '.*gz$' 2> /dev/null | xargs tail -n0 -f | ccze -A
2014-07-29 17:11:17
User: rubo77
Functions: find tail xargs
Tags: unix ccze logging
4

This will show all changes in all log files under /var/log that are regular files and whose names don't end with `gz` or with a number.
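As a quick sanity check, here is roughly what that filter keeps and drops, translated to grep -E syntax (find uses emacs-style regexes by default; the filenames below are made-up examples):

```shell
# Sample log names: only the first should survive the two filters
printf '%s\n' /var/log/syslog /var/log/syslog.1 /var/log/syslog.2.gz |
  grep -iE '[^.][^0-9]+$' |   # name must not end in a number
  grep -ivE 'gz$'             # and must not end in gz
# prints only /var/log/syslog
```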

changeFolder() { if [ $# -ne 2 ]; then echo "Usage: changeFolder old new"; return; fi; old=$(pwd); folder=$(echo "$old" | sed -e "s/$1/$2/g"); if [ ! -d "$folder" ]; then echo "Folder '$folder' not found."; return; fi; echo "$old -> $folder"; cd "$folder"; }
2014-07-29 15:36:32
User: Dracks
Functions: cd echo sed
0

Changes the current directory by substituting one path component for another. Useful when you keep parallel directory trees for classification, e.g. devel/dist or android/ios:

~user/Documents/devel/project

~user/Documents/dist/project

You can switch from the devel/project folder to dist/project without leaving the project.

Suggestions for a better way to do this than a string replacement are welcome.
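For reference, a variant that quotes its arguments and uses bash's built-in substitution instead of a sed subshell; the demo directories below are throwaway examples, not part of the original command:

```shell
#!/bin/bash
# changeFolder: swap one path component for another and cd there
changeFolder() {
  if [ $# -ne 2 ]; then echo "Usage: changeFolder old new"; return 1; fi
  local old new
  old=$PWD
  new=${old/$1/$2}            # bash substitution, avoids the sed subshell
  if [ ! -d "$new" ]; then echo "Folder '$new' not found."; return 1; fi
  echo "$old -> $new"
  cd "$new"
}

# Demo with throwaway directories
mkdir -p /tmp/cf-demo/devel/project /tmp/cf-demo/dist/project
cd /tmp/cf-demo/devel/project
changeFolder devel dist
pwd   # /tmp/cf-demo/dist/project
```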

awk -F: '{print $2}' access_log | sort | uniq -c
emerge -a `qcheck -aCB`
sudo bash -c "cd /PATH/TO/THE/DIRECTORY;bash"
2014-07-28 20:20:04
User: Zath
Functions: bash sudo
0

Change current working directory with root permissions.

Place this snippet in your .bashrc to add a new "sudocd" command:

function sudocd { sudo bash -c "cd \"$1\"; bash"; }

Usage: sudocd DIRECTORY

Please note that if you use this command to cd into a directory whose permissions allow only root inside, you will have to prefix every command that changes or does something in that directory with sudo (yes, even ls).

for a in $(seq 15); do (xset led 3);(xset -led 3);sleep .9;done
docker ps -q | xargs -n 1 docker inspect | jq '.[0].NetworkSettings.Ports +{} | map(select(. != null)[0].HostPort) | map("-L \(.):localhost:\(.)") ' | sed -n 's/.*"\(.*\)".*/\1/p' |xargs boot2docker ssh -N
docker inspect -f "{{ .NetworkSettings.IPAddress }}" $CONTAINERID
docker kill $(docker ps -q)
mount_smbfs '//user:p%40ss@server/share' /Volumes/share
2014-07-27 00:52:19
User: bupsy
0

If the password for the share you're trying to mount contains special characters, you can use URL escape characters.

The above command uses an example as follows:

username: user

password: p@ss

URL Encoded password: p%40ss

All credit goes to Richard York:

http://www.smilingsouls.net/Blog/20110526100731.html

Also check out this URL Decoder/Encoder to convert your passwords.

http://meyerweb.com/eric/tools/dencoder/
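If you'd rather not paste a password into a web page, the encoding can be done locally; a minimal sketch, assuming Python 3 is installed:

```shell
# URL-encode a password for use in a mount URL (example password: p@ss)
python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1], safe=""))' 'p@ss'
# prints p%40ss
```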

apt-get update && apt-get dist-upgrade -y --show-progress && apt-get autoremove -y && apt-get check && apt-get autoclean -y
0

# All-in-one: update the list of available packages, upgrade to new versions, remove packages that are no longer needed (some are replaced by the ones pulled in by the upgrade), check for broken dependencies, and clean out obsolete package files from the local cache (unlike apt-get clean, autoclean only removes cached packages that can no longer be downloaded).

# aliases (copy into ~/.bashrc file):

alias a='alias'

a ap='apt-get'

a r='ap autoremove -y'

a up='ap update'

a u='up && ap upgrade -y --show-progress && r && ap check && ap autoclean'

# && means "and run if the previous succeeded", you can change it to ; to "run even if previous failed".

I'm not sure whether ap check should run before or after ap upgrade -y. You can also change the alias names.

# To expand aliases in bash use ctrl alt e or see this ow.ly/zBKHs

# For more useful aliases go to ow.ly/zBMOx
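The difference between && and ; can be seen with one pair of lines:

```shell
false && echo "ran"   # prints nothing: && skips the right side when the left fails
false ;  echo "ran"   # prints "ran": ; runs the right side regardless
```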

aria2c -x 4 http://my/url
2014-07-26 03:06:33
User: lx45803
1

jrk's aria2 example is incorrect. -s sets how many connections a single download is split across; the per-host connection limit is specified with -x.

for i in {1..100}; do convert -background lightblue -fill blue -size 100x100 -pointsize 24 -gravity center label:$i $i.jpg; done
system_profiler SPHardwareDataType | awk '/UUID/ { print $3; }'
2014-07-25 06:54:40
Functions: awk
0

Gets the Hardware UUID of the current machine using system_profiler.

stat -f%Su /dev/console
ifconfig eth0 | grep inet | awk '{ print $2 }'
2014-07-23 20:43:15
User: smorg
Functions: awk grep ifconfig
Tags: centos
0

I just use this to see my IP on the server I'm working on.

[ "$TERM" != "dumb" ] && [ -z "$STY" ] && screen -dR
2014-07-23 13:33:26
User: GlenSearle
Functions: screen
0

I changed my shell to screen by editing .bashrc, which stopped scp from connecting.

Adding two tests before screen fixed the problem.

echo -e "import uuid\nimport sys\nsys.stdout.write(str(uuid.uuid4()))" | python
2014-07-23 07:43:01
User: tippy
Functions: echo
Tags: python uuid
0

Piped to pbcopy (OS X only), you get a UUID in the pasteboard.
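A shorter equivalent that avoids the echo escape issue altogether is python's -c flag (python3 shown here; the original pipes into python):

```shell
# Print a random (version 4) UUID
python3 -c 'import uuid; print(uuid.uuid4())'
```

On OS X you can still append `| pbcopy` to put the result in the pasteboard.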

grep -r "<script" | grep -v src | awk -F: '{print $1}' | uniq
2014-07-23 06:24:31
User: sucotronic
Functions: awk grep
Tags: PHP javascript
2

Useful for finding where JavaScript is declared inline so you can extract it to a common file. You can redirect the output to a file to review the results item by item.

find -L -type l
2014-07-22 19:52:18
Functions: find
6

-L tells find to follow symbolic links, so -type l will only return links it can't follow (i.e., those that are broken).
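A quick way to convince yourself, using throwaway files in a temporary directory:

```shell
# Create one broken and one valid symlink, then look for broken ones
dir=$(mktemp -d)
ln -s /nonexistent-target "$dir/broken"
ln -s /tmp "$dir/ok"
find -L "$dir" -type l    # lists only "$dir/broken"
rm -r "$dir"
```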

find -type l -xtype l
echo {-1..-5}days | xargs -n1 date +"%Y-%m-%d" -d
bkname="test"; tobk="*" ; totalsize=$(du -csb $tobk | tail -1 | cut -f1) ; tar cvf - $tobk | tee >(sha512sum > $bkname.sha512) >(tar -tv > $bkname.lst) | mbuffer -m 4G -P 100% | pv -s $totalsize -w 100 | dd of=/dev/nst0 bs=256k
2014-07-22 15:47:50
User: johnr
Functions: cut dd du tail tar tee
0

This will write a backup of files/folders to TAPE (LTO3-4 in my case). Could be changed to write to DVD/Blu-ray.

Go to the directory where you want the output files written: cd /bklogs

Set a name in bkname="Backup1" and the folders/files to back up in tobk="/home /var/www".

It will create a tar and write it to the tape drive on /dev/nst0.

In the process, it will

1) generate a sha512 sum of the tar to $bkname.sha512; so you can validate that your data is intact

2) generate a filelist of the content of the tar with filesize to $bkname.lst

3) buffer the tar stream to prevent shoe-shining the tape (I use 4GB for LTO3 (80MB/s) and 8GB for LTO4 (120MB/s); 3TB USB3 disks support those speeds, otherwise I use 3x2TB raidz)

4) show buffer in/out speed and used space in the buffer

5) show progress bar with time approximation using pv

ADD :

To eject the tape :

; sleep 75; mt-st -f /dev/nst0 rewoffl

TODO:

1) When using old tapes, if the buffer fills up and the drive slows down, the tape is worn and should be replaced instead of being wiped and recycled for another backup. Logging where and when it slows down would give good information on the wear of the tape. I don't know how to extract that from the mbuffer output in order to report something like: "This tape slowed down X times at Y1gb, Y2gb, Y3gb, down to Z MB/s for a total of 30 sec. It would be wise to replace this tape before writing to it again."

2) Fix filesize approximation

3) Save all the output to $bkname.log with progress updates on new lines. (Anyone have an idea?)

4) Support spanning over multiple tapes.

5) Replace the tar format with something else (dar?); looking at xar right now (https://code.google.com/p/xar/), whose XML metadata could carry per-file checksums, compression algorithm (bzip2, xz, gzip), gnupg encryption, thumbnails, video previews, image EXIF... But that's another project.

TIP:

1) You can specify the width of the progress bar of pv. If it's longer than the terminal, line refreshes will be written to new lines. That way you can see whether there were speed slowdowns during writing.

2) Remove the v in tar argument cvf to prevent listing all files added to the archive.

3) You can get tarsum (http://www.guyrutenberg.com/2009/04/29/tarsum-02-a-read-only-version-of-tarsum/) and add >(tarsum --checksum sha256 > ${bkname}_list.sha256) after the tee to generate checksums of individual files!
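The tee-into-process-substitution trick at the heart of this command can be tried safely against a plain file instead of a tape drive. A minimal sketch (the paths are made-up demo files; bash is required for the >() syntax):

```shell
#!/bin/bash
# Stream a tar archive to a file while checksumming the same bytes in flight
cd "$(mktemp -d)"
echo hello > data.txt
tar cf - data.txt | tee >(sha512sum > backup.sha512) > backup.tar
sleep 1   # give the process substitution time to flush its output

# The checksum recorded from the stream matches the file tee wrote
recorded=$(cut -d' ' -f1 backup.sha512)
actual=$(sha512sum backup.tar | cut -d' ' -f1)
[ "$recorded" = "$actual" ] && echo "checksum OK"
```

Because tee duplicates the exact byte stream, the checksum computed on the fly is identical to one computed afterwards on the written archive, without a second pass over the data.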