
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):


News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try to put an end to the spam, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.
Terminal - All commands - 12,060 results
openssl req -nodes -newkey rsa:2048 -keyout server.key -out server.csr -subj "/C=BR/ST=State/L=City/O=Company Inc./OU=IT/CN=domain.com"
fold -sw 20 <(echo "Long Text to be wrapped with \"\n\"") |sed ':a;N;$!ba;s/ *\n/\\n/g'
2015-04-16 21:06:53
User: alecthegeek
Functions: echo fold sed
0

I used this fragment with Imagemagick convert so that I can place long text strings in pictures. The "\n" gets converted to a true newline in the image.

So this fragment uses the fold command to wrap the line, and then sed to convert the newlines (and any trailing spaces on each line) into the literal text "\n"
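The same pipeline can be checked on a shorter string (the sample text here is made up):

```shell
# Wrap at 10 columns on word boundaries, then join the wrapped
# lines back together with a literal "\n" between them.
fold -sw 10 <<< "one two three four" | sed ':a;N;$!ba;s/ *\n/\\n/g'
# → one two\nthree four
```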

find . -name pom.xml | while read -r f; do cd "$(dirname "$f")"; mvn clean; cd -; done
2015-04-15 21:24:49
User: glaudiston
Functions: cd dirname find read
-2

This command locates all pom.xml files, changes into each directory, and runs mvn clean. I do recommend disabling your network interfaces first so Maven doesn't download dependency packages; it's faster that way.
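A quoting-safe sketch of the same idea using GNU find's -execdir, which runs the command from each matching file's directory (the throwaway tree and the use of pwd in place of mvn are purely illustrative):

```shell
# Build a throwaway tree with two fake Maven modules.
demo=$(mktemp -d)
mkdir -p "$demo/app" "$demo/lib"
touch "$demo/app/pom.xml" "$demo/lib/pom.xml"

# -execdir changes into each pom.xml's directory before running the
# command, so no cd/dirname quoting is needed. For the real task,
# replace 'pwd' with: mvn -o clean   (-o = work offline)
find "$demo" -name pom.xml -execdir pwd \; | sort
```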

awk -F"|" 'BEGIN {OFS="|"} NR==1 {for (b=1;b<=NF;b++) {hdr[b]=$b} } NR > 1 {for (i=1;i<=NF;i++) {if(length($i) > max[i]) max[i] = length($i)} } END {for (i=1;i <= NF;i++) print hdr[i],max[i]+0}' pipe_delimited_file.psv
find /PATHNAME -type l | while read nullsymlink ; do wrongpath=$(readlink "$nullsymlink") ; right=$(echo "$wrongpath" | sed 's|OLD_STRING|NEW_STRING|') ; ln -fs "$right" "$nullsymlink" ; done
2015-04-14 14:58:41
User: iDudo
Functions: echo find ln read readlink sed
0

After you run this script, you can check status for broken symlink with this command:

find -L . -type l
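The rewrite logic can be exercised end-to-end in a scratch directory (all paths and names here are invented for the demo):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/new"
touch "$tmp/new/data.txt"
# A symlink that still points at an old, nonexistent location.
ln -s "$tmp/old/data.txt" "$tmp/link"

# Rewrite old -> new in the link target, exactly as the loop does.
wrongpath=$(readlink "$tmp/link")
right=$(echo "$wrongpath" | sed 's|/old/|/new/|')
ln -fs "$right" "$tmp/link"

# The link now resolves; 'find -L . -type l' would no longer list it.
readlink "$tmp/link"
```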

docker stop $(docker ps -a -q); docker rm $(docker ps -a -q)
2015-04-14 13:34:15
User: das_shark
Functions: ps rm
2

Will stop all running containers, then remove all containers

**This isn't for selectively handling containers, it removes everything**

env DISPLAY=:0 /usr/bin/gedit ~/df.txt && wmctrl -a gedit
2015-04-12 13:48:31
User: knoppix5
Functions: env
0

Usage example: display output of a command running in the background at desired time

The example in detail: report disk usage, and signal that the backup process will start soon

In my /etc/crontab file I added following four lines for weekly automatic incremental backup:

.

52 13 * * 7 root mount /dev/sda3 /media/da2dc69c-92cc-4249-b2c3-9b00847e7106

.

53 13 * * 7 knoppix5 df -h >~/df.txt

.

54 13 * * 7 knoppix5 env DISPLAY=:0 /usr/bin/gedit ~/df.txt && wmctrl -a gedit

.

55 13 * * 7 root /home/knoppix5/rdiff-backup.sh

.

line one: as root, mount the backup media on Sunday at 13:52

line two: as user knoppix5, write the free space of all mounted disks to a text file in the home directory on Sunday at 13:53

line three: open a very simple text editor in front of you (I prefer gedit), displaying the previously reported disk usage, on Sunday at 13:54

wmctrl -a gedit means (from the manual):

-a Switch to the desktop containing the window, raise the window, and give it focus.

line four: as root, run the incremental backup script rdiff-backup.sh on Sunday at 13:55

.

my rdiff-backup.sh, which with root permissions backs up the entire Linux system in a short time (it writes only the changes since the last backup), except for excluded paths (i.e. you don't want to recursively back up your backup disk), looks like this:

netstat -anp | grep :80 | grep ESTABLISHED | wc -l
2015-04-10 19:32:31
User: krizzo
Functions: grep netstat wc
Tags: session
-1

This counts all established sessions on port 80. You can change :80 to any port number you want to check.
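The filter itself can be sanity-checked against canned netstat-style output (the sample lines below are invented):

```shell
# Two established sessions on :80, one on :443, one still in TIME_WAIT.
sample='tcp 0 0 10.0.0.5:80 192.0.2.1:50000 ESTABLISHED 123/nginx
tcp 0 0 10.0.0.5:80 192.0.2.2:50001 ESTABLISHED 123/nginx
tcp 0 0 10.0.0.5:443 192.0.2.3:50002 ESTABLISHED 123/nginx
tcp 0 0 10.0.0.5:80 192.0.2.4:50003 TIME_WAIT -'
echo "$sample" | grep :80 | grep ESTABLISHED | wc -l
# → 2
```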

sudo lsof -i -n | grep sshd | grep sshuser | grep :[PORT-RANGE] | grep -v IPv6 | awk -F\: '{print $2}' | grep -v http | awk -F" " '{print $1}'
2015-04-09 15:41:11
User: das_shark
Functions: awk grep sshd sudo
-1

gets network ports

only ones for the sshd service

only logged in a specific user (changed for public posting)

only in a specific localhost:port range

not IPv6

Only the part of the response after the ":" character

Only the part of the response before the 1st space

Output is just the rssh port

debugfs -R "stat <$(stat --printf=%i filename)>" /dev/sdaX | grep crtime
2015-04-09 01:23:56
User: pggx999
Functions: debugfs grep
3

Returns the creation date of a file on ext2/3/4 filesystems, which the stat command won't show.

Useful on Ubuntu, Debian, and others
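On newer kernels and coreutils, stat can often report the creation (birth) time directly via statx, with no debugfs or root needed; note that %w prints '-' when the filesystem doesn't record it:

```shell
touch /tmp/birth-demo.txt
# %w = human-readable birth time, %W = birth time as epoch seconds
# (prints '-' / '0' on filesystems that don't store it).
stat --format='%w' /tmp/birth-demo.txt
```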

runonchange () { local cmd=( "$@" ) ; while inotifywait --exclude '.*\.swp' -qqre close_write,move,create,delete $1 ; do "${cmd[@]:1}" ; done ; }
2015-04-08 17:42:03
User: funollet
1

Example:

runonchange /etc/nginx nginx -t

Ignores vim temp files. Depends on 'inotify-tools' for monitoring of file changes. Alternative to tools like 'entr', 'watchr'.

awk '{print $0+0}' <(echo -2; echo +3;)
2015-04-08 09:19:24
Functions: awk echo
0

The leading plus sign is removed; the minus sign is left intact
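Adding zero forces awk to parse the line as a number, so the same trick normalises other numeric spellings too:

```shell
# +3 -> 3, -2 stays -2, leading zeros are dropped, floats survive.
printf '%s\n' +3 -2 007 1.50 | awk '{print $0+0}'
# → 3, -2, 7, 1.5 (one per line)
```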

a=$(b=$(($LINES/2));f() { for c in $(seq $b); do for i in $(seq $c);do echo x;done|xargs echo;done };paste <(f) <(f|tac|tr 'x' '-') <(f|tac|tr 'x' '-') <(f)|tr '\t' ' ');(cat <<<"$a"|tac;cat <<<"$a")|tr '-' ' '
watch -n 10 -d eval "sensors | grep RPM | sed -e 's/.*: *//;s/ RPM.*//'"
2015-04-07 14:28:32
User: omap7777
Functions: eval watch
1

Uses the lm-sensors package in Linux to display fan speed. Grep RPM is used to discover lines containing the text RPM, and sed is used to edit out everything but the RPM number. The watch utility is used to update the display every 10 seconds and -d highlights any changes from the previous value. The eval function of Bash is used to execute the command enclosed in the ".." string.

xset -display :0 q | grep ' Monitor is On' > /dev/null && xset -display :0 dpms force off || xset -display :0 dpms force on
2015-04-06 19:04:04
User: electrotux
Functions: grep
0

Queries whether the monitor is on according to DPMS. If true, it turns the monitor off; if false, it turns it on. The -display option on xset means the command will work from sessions other than the console, such as ssh or a cron'd script. The command displays errors if there are any problems (e.g. no X available); otherwise there is no output on success.

mail [email protected]
2015-04-06 13:43:04
User: flatcap
Functions: mail
4

Welcome to Jon H. (@fart), the new maintainer of CommandLineFu.

.

In the absence of a forum, I encourage people to welcome him here, in the comments.

.

Also... What would you like to improve/change about the site?

pandoc --from=markdown --to=rst --output=README.rst README.md
curl -s http://host.net/url.html | grep magnet | sed -r 's/.*(magnet:[^"]*).*/\1/g'
function every() { sed -n -e "${2}q" -e "0~${1}p" ${3:-/dev/stdin}; }
2015-04-03 01:30:36
User: flatcap
Functions: sed
1

Thanks to knoppix5 for the idea :-)

Print selected lines from a file or the output of a command.

Usage:

every NTH MAX [FILE]

Print every NTH line (from the first MAX lines) of FILE.

If FILE is omitted, stdin is used.

The command simply passes the input to a sed script:

sed -n -e "${2}q" -e "0~${1}p" ${3:-/dev/stdin}

print no output

sed -n

quit after this many lines (controlled by the second parameter)

-e "${2}q"

print every NTH line (controlled by the first parameter)

-e "0~${1}p"

take input from $3 (if it exists) otherwise use /dev/stdin

${3:-/dev/stdin}
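A quick check of the function (GNU sed is assumed, since the first~step address form is a GNU extension):

```shell
every() { sed -n -e "${2}q" -e "0~${1}p" ${3:-/dev/stdin}; }

# Every 7th of the first 50 lines of 1..100:
seq 100 | every 7 50
# → 7 14 21 28 35 42 49 (one per line)
```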
du -hsx * | sort -rh
cp -Rs dir1 dir2
2015-04-01 22:51:16
User: knoppix5
Functions: cp
1

dir1 and all its subdirectories, recursively, but *no files*,

will be copied to dir2 (not even symbolic links to files will be made).

To preserve ownerships & permissions:

cp -Rps dir1 dir2

Yes, you can do it with

rsync -a --include '*/' --exclude '*' /path/to/source /path/to/dest

too, but I didn't test whether it handles attributes correctly

(experiment with the rsync command yourself, using the --dry-run switch, to avoid harming your file system).

You must be in the parent directory of dir1 while executing this command (place dir2 wherever you like), otherwise soft links to files will be made in dir2. I couldn't find a way to avoid this "limitation" (yet). Maybe a recursive unlink loop could help?

PS. cp will complain, but the job will be done.
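A third route that sidesteps the parent-directory caveat entirely is to replay the directory list with mkdir -p (a sketch, assuming GNU find, which substitutes {} even when embedded in an argument; the demo tree is made up):

```shell
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/a/b" "$src/c"
touch "$src/a/file1" "$src/a/b/file2"

# Recreate only the directory skeleton of $src under $dst:
(cd "$src" && find . -type d -exec mkdir -p "$dst"/{} \;)

find "$dst" | sort    # dirs a, a/b, c exist; no files, no symlinks
```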

ssh user@server sudo date -s @`( date -u +"%s" )`
pdftk input.pdf output output.pdf user_pw YOURPASSWORD-HERE
ls | while read line; do ln -s "$(pwd)/$line" "/usr/bin/$line"; done
function every() { N=$1; S=1; [ "${N:0:1}" = '-' ] && N="${N:1}" || S=0; sed -n "$S~${N}p"; }
2015-03-21 23:44:59
User: flatcap
Functions: sed
1

Sometimes commands give you too much feedback.

Perhaps 1/100th might be enough. If so, every() is for you.

my_verbose_command | every 100

will print every 100th line of output.

Specifically, it will print lines 100, 200, 300, etc

If you use a negative argument it will print the *first* of a block,

my_verbose_command | every -100

It will print lines 1, 101, 201, 301, etc

The function wraps up this useful sed snippet:

... | sed -n '0~100p'

don't print anything by default

sed -n

starting at line 0, print every hundredth line ( ~100 ).

'0~100p'

There's also some bash magic to test if the number is negative:

we want character 0, length 1, of variable N.

${N:0:1}

If it *is* negative, strip off the first character: ${N:1} is character 1 onwards (i.e. from the second actual character).
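Both behaviours in action (again assuming GNU sed, for the first~step address form):

```shell
every() { N=$1; S=1; [ "${N:0:1}" = '-' ] && N="${N:1}" || S=0; sed -n "$S~${N}p"; }

seq 10 | every 3    # → 3 6 9      (every 3rd line)
seq 10 | every -3   # → 1 4 7 10   (first line of each block of 3)
```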