
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that get a minimum of 3 and 10 votes respectively - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…).



News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try to put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.


Terminal - Commands tagged regex - 50 results
find . -type f -print0 | xargs -0 perl -pi.save -e 'tr/A-Z/a-z/'
2010-11-25 13:55:34
User: depesz
Functions: find perl xargs
Tags: perl find regex
1

Because find's output is NUL-delimited and fed to xargs -0, this version doesn't have problems with filenames containing spaces, unlike the variant below:

perl -e "tr/[A-Z]/[a-z]/;" -pi.save $(find . -type f)
pcregrep --color -M -N CRLF "owa_pattern\.\w+\W*\([^\)]*\)" source.sql
:g/\n"/jo
2010-09-11 18:51:41
User: mensa13
-2

In case the line you want to join starts with a character other than ", you may use \n.*"\n as the regex.

:%s/\([^\"]\)\(\n\)/\1 /g
2010-09-03 11:03:49
User: godzillante
1

----

this line ends here

but must be concatenated with this one

"this line ends here"

and should NOT be concatenated with this one

rp() { local p; eval p=":\$$1:"; export $1=${p//:$2:/:}; }; ap() { rp "$1" "$2"; eval export $1=\$$1$2; }; pp() { rp "$1" "$2"; eval export $1=$2:\$$1; }
2010-07-15 18:52:01
User: cout
Functions: eval export
0

I used to do a lot of path manipulation to set up my development environment (PATH, LD_LIBRARY_PATH, etc), and one part of my environment wasn't always aware of what the rest of the environment needed in the path. Thus resetting the entire PATH variable wasn't an option; modifying it made sense.

The original version of the functions used sed, which turned out to be really slow when called many times from my bashrc, and it could take up to 10 seconds to login. Switching to parameter substitution sped things up significantly.

The commands here don't clean up the path when they are done (so e.g. the path gets cluttered with colons). But the code is easy to read for a one-liner.

The full function looks like this:

remove_path() {
    eval PATHVAL=":\$$1:"
    PATHVAL=${PATHVAL//:$2:/:}    # remove $2 from $PATHVAL
    PATHVAL=${PATHVAL//::/:}      # remove any double colons left over
    PATHVAL=${PATHVAL#:}          # remove colons from the beginning of $PATHVAL
    PATHVAL=${PATHVAL%:}          # remove colons from the end of $PATHVAL
    export $1="$PATHVAL"
}

append_path() {
    remove_path "$1" "$2"
    eval PATHVAL="\$$1"
    export $1="${PATHVAL}:$2"
}

prepend_path() {
    remove_path "$1" "$2"
    eval PATHVAL="\$$1"
    export $1="$2:${PATHVAL}"
}

I tried using regexes to make this into a cleaner one-liner, but remove_path ended up being cryptic and not working as well:

rp() { eval "[[ ::\$$1:: =~ ^:+($2:)?((.*):$2:)?(.*):+$ ]]"; export $1=${BASH_REMATCH[3]}:${BASH_REMATCH[4]}; };
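
As a quick illustration (my own sketch, not part of the original entry), the short helpers could be used like this:

pp PATH /opt/tools/bin      # prepend /opt/tools/bin to PATH
ap PATH /opt/tools/sbin     # append /opt/tools/sbin to PATH
rp PATH /opt/tools/bin      # remove /opt/tools/bin from PATH again
echo "$PATH"
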
ack -a -G '^(?!.*bar/data.*).*$' pattern
2010-05-10 00:13:11
User: rkulla
0

Say you have a directory structure like "foo/, foo/data/, bar/, bar/data/". If you just want to ignore 'bar/data', then "ack --ignore-dir=data pattern" won't do: it ignores both foo/data and bar/data, and '--ignore-dir=bar/data' etc. won't work either.
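
To see the same negative lookahead at work outside ack, here is a tiny sketch of mine using grep -P (assuming GNU grep built with PCRE support):

printf 'foo/data/x.txt\nbar/data/y.txt\n' | grep -P '^(?!.*bar/data).*$'    # prints only foo/data/x.txt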

perl -lne 'print for /url":"\K[^"]+/g' $(ls -t ~/.mozilla/firefox/*/sessionstore.js | sed q)
2009-12-14 00:51:54
User: sputnick
Functions: ls perl sed
0

If you want all the URLs from all the sessions, you can use :

perl -lne 'print for /url":"\K[^"]+/g' ~/.mozilla/firefox/*/sessionstore.js

Thanks to tybalt89 ( idea of the "for" statement ).

For Perl purists, there are the JSON and File::Slurp modules, but those aren't installed by default.

egrep 'https?://([[:alpha:]]([-[:alnum:]]+[[:alnum:]])*\.)+[[:alpha:]]{2,3}(:\d+)?(/([-\w/_\.]*(\?\S+)?)?)?'
2009-11-28 15:41:42
User: putnamhill
Functions: egrep
5

For the record: I didn't build this. Just shared what I found that worked. Apologies to the original author!

For the next time I need this, I decided to fix the case where http://example.com is not matched. So I read RFC 1035 and formalized the host name regex.

If anyone finds any more holes, please comment.

cho "(Something like http://foo.com/blah_blah)" | awk '{for(i=1;i<=NF;i++){if($i~/^(http|ftp):\/\//)print $i}}'
2009-11-28 03:31:41
Functions: awk
-1

It doesn't have to be that complicated:

echo "(Something like http://foo.com/blah_blah)" | grep -oP "\b(([\w-]+://?|www[.])[^\s()<>]+(?:\([\w\d]+\)|([^[:punct:]\s]|/)))"
sed -i '/myexpression/d' /path/to/file.txt
2009-11-09 11:40:45
User: jgc
Functions: sed
Tags: sed regex
9

The -i option in sed allows in-place editing of the input file.

Replace myexpression with any regular expression.

/expr/d syntax means if the expression matches then delete the line.

You can reverse the functionality to keep matching lines only by using:

sed -i -n '/myexpression/p' /path/to/file.txt
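
A throwaway demonstration of the delete form, using a scratch file and pattern of my own:

printf 'keep this\nERROR: drop this\nkeep this too\n' > /tmp/demo.txt
sed -i '/ERROR/d' /tmp/demo.txt    # /tmp/demo.txt now holds only the two "keep" lines
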
ifconfig eth1 | grep inet\ addr | awk '{print $2}' | cut -d: -f2 | sed s/^/eth1:\ /g
2009-11-03 19:26:40
User: TuxOtaku
Functions: awk cut grep ifconfig sed
2

Sometimes, you don't really care about all the other information that ifconfig spits at you (however useful it may otherwise be). You just want an IP. This strips out all the crap and gives you exactly what you want.
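
Assuming eth1 holds, say, 192.168.1.10, the output is simply:

eth1: 192.168.1.10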

perl -we 'my $regex = eval {qr/.*/}; die "$@" if $@;'
2009-10-13 21:50:47
User: tlacuache
Functions: eval perl
4

Place the regular expression you want to validate between the forward slashes in the eval block.
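
For example (test patterns of my own), a valid expression exits silently while a broken one reports the parse error:

perl -we 'my $regex = eval {qr/[0-9]+/}; die "$@" if $@;'    # valid regex: no output
perl -we 'my $regex = eval {qr/[0-9+/}; die "$@" if $@;'     # invalid regex: dies with an "Unmatched [" error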

sed -i '19375 s/^/#/' file
2009-10-07 17:50:40
User: TuxOtaku
Functions: sed
5

This will comment out a line, specified by line number, in a given file.
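
The same address syntax also takes ranges, so commenting out a whole block is just as easy (line numbers chosen arbitrarily):

sed -i '10,20 s/^/#/' file    # comment out lines 10 through 20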

echo 127.0.0.1 | egrep -e '^(([01]?[0-9]{1,2}|2[0-4][0-9]|25[0-4])\.){3}([01]?[0-9]{1,2}|2[0-4][0-9]|25[0-4])$'
2009-09-17 17:40:48
User: arcege
Functions: echo egrep
-1

Handles everything except octets containing 255. Tested by running it against an IP generator with variable octet lengths.
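
If octets containing 255 are also needed, changing each 25[0-4] to 25[0-5] should close that gap (my untested tweak of the same pattern):

echo 255.255.255.255 | egrep -e '^(([01]?[0-9]{1,2}|2[0-4][0-9]|25[0-5])\.){3}([01]?[0-9]{1,2}|2[0-4][0-9]|25[0-5])$'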

perl -wlne 'print $1 if /(([01]?\d\d?|2[0-4]\d|25[0-5])\.([01]?\d\d?|2[0-4]\d|25[0-5])\.([01]?\d\d?|2[0-4]\d|25[0-5])\.([01]?\d\d?|2[0-4]\d|25[0-5]))/' iplist
2009-09-17 16:14:52
User: salparadise
Functions: perl
-1

Use this if you only want to print the IP address from a file.

In this case the file is called "iplist" and contains lines like "ip address 1.1.1.1"; only the "1.1.1.1" portion will be printed.

echo 254.003.032.3 | grep -P '^((25[0-4]|2[0-4]\d|[01]?[\d]?[1-9])\.){3}(25[0-4]|2[0-4]\d|[01]?[\d]?[1-9])$'
2009-09-17 12:59:44
User: foob4r
Functions: echo grep
0

This ensures you don't match any broadcast or network addresses, and stays within 1.1.1.1 - 254.254.254.254.

echo "123.32.12.134" | grep -P '([01]?\d\d?|2[0-4]\d|25[0-5])\.([01]?\d\d?|2[0-4]\d|25[0-5])\.([01]?\d\d?|2[0-4]\d|25[0-5])\.([01]?\d\d?|2[0-4]\d|25[0-5])'
mate - `find * -type f -regex 'REGEX_A' | grep -v -E 'REGEX_B'`
2009-08-12 22:24:08
User: irae
Functions: grep
1

This does the following:

1 - Search recursively for files whose names match REGEX_A

2 - From this list exclude files whose names match REGEX_B

3 - Open this as a group in textmate (in the sidebar)

And now you can use Command+Shift+F to use TextMate's own find and replace on this particular group of files.

For advanced regex in the first expression you can use -regextype posix-egrep like this:

mate - `find * -type f -regextype posix-egrep -regex 'REGEX_A' | grep -v -E 'REGEX_B'`

Warning: this is not meant to open files or folders with spaces or special characters in the filename. If anyone knows a solution to that, tell me so I can fix the line.
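
One possible workaround for names with spaces (an untested sketch of mine, assuming GNU find and grep, and that mate accepts multiple file arguments) is to keep the list NUL-delimited end to end:

find . -type f -regextype posix-egrep -regex 'REGEX_A' -print0 | grep -zEv 'REGEX_B' | xargs -0 mate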

fetch -q -o - http://ipchicken.com | egrep -o '([[:digit:]]{1,3}\.){3}[[:digit:]]{1,3}'
2009-08-06 11:57:44
User: spackle
Functions: egrep
-1

Same thing as above, just uses fetch and ipchicken.com

perl -e '$p=qr!(?:0|1\d{0,2}|2(?:[0-4]\d?|5[0-5]?|[6-9])?|[3-9]\d?)!;print((shift=~m/^$p\.$p\.$p\.$p$/)?1:0);' 123.123.123.123
2009-07-12 00:24:29
User: speaker
Functions: perl
0

This command will output 1 if the given argument is a valid ip address and 0 if it is not.
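
Example runs (addresses chosen by me):

perl -e '$p=qr!(?:0|1\d{0,2}|2(?:[0-4]\d?|5[0-5]?|[6-9])?|[3-9]\d?)!;print((shift=~m/^$p\.$p\.$p\.$p$/)?1:0);' 10.0.0.1    # prints 1
perl -e '$p=qr!(?:0|1\d{0,2}|2(?:[0-4]\d?|5[0-5]?|[6-9])?|[3-9]\d?)!;print((shift=~m/^$p\.$p\.$p\.$p$/)?1:0);' 999.0.0.1   # prints 0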

echo 2006-10-10 | grep -c '^[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]$'
2009-05-11 22:18:43
User: rez0r
Functions: echo grep
-1

Quick and easy way of validating a yyyy-mm-dd date format and returning a boolean. The regex can easily be upgraded to handle ranges for mm and dd, or to validate other types of strings, e.g. an IP address.

Boolean output could easily be piped into a condition for a more complete one-liner.
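
One such upgrade, of my own making, bounds month and day as well (still not calendar-aware, e.g. it accepts 2006-02-31):

echo 2006-10-10 | grep -cE '^[0-9]{4}-(0[1-9]|1[0-2])-(0[1-9]|[12][0-9]|3[01])$'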

txt2regex
2009-04-29 04:00:22
User: bwoodacre
Tags: regex
8

txt2regex can be interactive or noninteractive and generates regular expressions for a variety of dialects based on user input. In interactive mode, the regex string builds as you select menu options. The sample output here is from noninteractive mode; try running it standalone and see for yourself. It's written in bash and is available as the 'txt2regex' package, at least on Debian/Ubuntu.

\bTERM\b
2009-04-11 22:05:12
User: kFiddle
Tags: less regex
5

Although less behaves more or less like vim in certain aspects, the vim regexes for word boundaries (\< and \>) do not work in less. Instead, use \b to denote a word boundary. Therefore, if you want to search for, say, the word "exit", but do not want to match exiting, exits, etc., then surround "exit" with \b. This is useful if you need to search for specific occurrences of a keyword or command. \b can also be used at just the beginning or the end, if needed.
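
For example, inside less, this search matches the whole word "exit" only:

/\bexit\b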