
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):



News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands require moderation before they will appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - Commands using perl - 334 results
find . -type l | perl -lne 'print if ! -e'
perl -pe 's/%([0-9a-f]{2})/sprintf("%s", pack("H2",$1))/eig'
perl -p -i -e 's/Old/New/g' *.html
2009-11-16 13:40:13
User: chappado
Functions: perl
-4

-p -> loop over the input lines and print each one after the code runs (like sed's default behaviour; perl's -n loops without printing, like sed -n)

-i -> edit files in place

-e -> execute the given code

Replaces Old with New in all *.html files.

utime(){ perl -e "print localtime($1).\"\n\"";}
2009-11-06 12:58:10
User: MoHaG
Functions: perl
1

A shell function using perl to easily convert Unix time to text.

Put it in your ~/.bashrc or equivalent.

Tested on Linux and Solaris with Bourne shell, bash and zsh, using perl 5.6 and higher.

(Does not require GNU date like some other commands.)

git ls-files | xargs -n1 -d'\n' -i git-blame {} | perl -n -e '/\s\((.*?)\s[0-9]{4}/ && print "$1\n"' | sort -f | uniq -c -w3 | sort -r
2009-10-25 01:44:03
User: askedrelic
Functions: perl sort uniq xargs
Tags: statistics git
3

Figures out the total line contribution per author for an entire Git repo. Includes binary files, which somewhat distort the true count.

If it crashes or takes too long, adjust the ls-files invocation at the start:

git ls-files -x "*pdf" -x "*psd" -x "*tif" to remove really random binary files

git ls-files "*.py" "*.html" "*.css" to only include specific file types

Based off my original SVN version: http://www.commandlinefu.com/commands/view/2787/prints-total-line-count-contribution-per-user-for-an-svn-repository

perl -ne '$pkg=$1 if m/^Package: (.*)/; print "$1\t$pkg\n" if m/^Installed-Size: (.*)/;' < /var/lib/dpkg/status | sort -rn | less
2009-10-19 12:55:59
User: hfs
Functions: perl sort
0

List packages and their disk usage in decreasing order. This uses the "Installed-Size" from the package metadata. It may differ from the actual used space, because e.g. data files (think of databases) or log files may take additional space.

perl -e '$i=0;while($i<10){open(WGET,qq/|xargs lynx -dump/);printf WGET qq{http://www.google.com/search?q=site:g33kinfo.com&hl=en&start=$i&sa=N},$i+=10}'|grep '\/\/g33kinfo.com\/'
2009-10-16 12:20:17
User: op4
Functions: grep perl xargs
Tags: web browser
0

not my cmd... found on the web

[ $(df / | perl -nle '/([0-9]+)%/ && print $1') -gt 90 ] && df -hP | mutt -s "Disk Space Alert -- $(hostname)" admin@example.com
2009-10-15 21:11:54
User: syssyphus
Functions: df perl
3

Put it in crontab to get an alert when / is more than 90% full.

perl -i -ne 'print uc $_' $1
perl -we 'my $regex = eval {qr/.*/}; die "$@" if $@;'
2009-10-13 21:50:47
User: tlacuache
Functions: eval perl
4

Place the regular expression you want to validate between the forward slashes in the eval block.
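Two example runs, one with a well-formed pattern and one with a dangling quantifier (the patterns are arbitrary examples):

```shell
# A valid pattern compiles silently, so the exit status is 0:
perl -we 'my $regex = eval {qr/^ab+c$/}; die "$@" if $@;' && echo valid

# An invalid pattern is trapped by eval; die reports it and exits non-zero:
perl -we 'my $regex = eval {qr/*oops/}; die "$@" if $@;' || echo invalid
```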

perl -pi -e 's/([[:lower:]]+)/uc $1/gsex' file
2009-10-08 14:18:50
Functions: perl
Tags: perl
-2

Same, except it works on any OS with Perl installed: DOS, Windows, whatever.

perl -e '$x = []; push @$x, eval { $x = 1; return $x = 1; }'
2009-10-07 22:42:18
User: dstahlke
Functions: eval perl return
-2

It is not easy to make perl give a segfault, but this does it. This is a known issue but apparently not easy to fix. This is completely useless except for showing people that perl is not bullet-proof.

wget 'link of a Picasa WebAlbum' -O - |perl -e'while(<>){while(s/"media":{"content":\[{"url":"(.+?\.JPG)//){print "$1\n"}}' |wget -w1 -i -
curl "http://www.house.gov/house/MemberWWW.shtml" 2>/dev/null | sed -e :a -e 's/<[^>]*>//g;/</N;//ba' | perl -nle 's/^\t\t(.*$)/ $1/ and print;'
2009-09-24 23:37:36
User: drewk
Functions: perl sed
Tags: perl sed curl
-1

Uses curl to download the page listing the membership of the US House of Representatives, sed to strip the HTML, then perl to print only the lines starting with two tabs (the lines containing a representative).

find -type f -name "*.avi" -print0 | xargs -0 mplayer -vo dummy -ao dummy -identify 2>/dev/null | perl -nle '/ID_LENGTH=([0-9\.]+)/ && ($t +=$1) && printf "%02d:%02d:%02d\n",$t/3600,$t/60%60,$t%60' | tail -n 1
2009-09-24 15:50:39
User: syssyphus
Functions: find perl printf tail xargs
8

Change the *.avi to whatever you want to match; you can remove it altogether to check all files.

cat ~/SortedFile.txt | perl -wl -e '@f=<>; foreach $i (reverse 0 .. $#f) { $r=int rand ($i+1); @f[$i, $r]=@f[$r,$i] unless ($i==$r); } chomp @f; foreach $line (@f){ print $line; }'
2009-09-24 15:42:43
User: drewk
Functions: cat perl
0

The sort utility is well used, but sometimes you want a little chaos. This will randomize the lines of a text file.

BTW, on OS X there is no

| sort -R

option! There is also no

| shuf

These are only in the newer GNU coreutils.

This is also faster than the alternate of:

| awk 'BEGIN { srand() } { print rand() "\t" $0 }' | sort -n | cut -f2-
%! perl -MO=Deparse | perltidy
2009-09-24 03:32:04
User: syssyphus
Functions: perl
3

The command shown can be run from within vim; here is the same thing on the command line:

cat script.pl | perl -MO=Deparse | perltidy
perl -MCPAN -e 'CPAN::Shell->install(CPAN::Shell->r)'
perl -e "alarm 10; exec @ARGV" "somecommand"
2009-09-23 12:03:55
User: jgc
Functions: perl
4

In this example the command "somecommand" will be executed and sent a SIGALRM signal if it runs for more than 10 seconds. It uses the perl alarm function. It's not 100% accurate on timing, but close enough. I found this really useful when executing scripts and commands that I knew might hang, e.g. ones that connect to services that might not be running. Importantly, this can be used within a sequential script: the command will not release control until either the command completes or the timeout is hit.

perl -le 'print join $/, @INC'
2009-09-22 22:18:35
User: chuckr
Functions: join perl
0

This will show where your Perl installation is looking for modules.

perl -le 'use Config; foreach $i (keys %Config) {print "$i : @Config{$i}"}'
2009-09-22 22:14:21
User: chuckr
Functions: perl
Tags: perl
0

This dumps all of your installed perl's config information.

find $HOME -type f -print0 | perl -0 -wn -e '@f=<>; foreach $file (@f){ (@el)=(stat($file)); push @el, $file; push @files,[ @el ];} @o=sort{$a->[9]<=>$b->[9]} @files; for $i (0..$#o){print scalar localtime($o[$i][9]), "\t$o[$i][-1]\n";}'|tail
2009-09-21 22:11:16
User: drewk
Functions: find perl
3

This pipeline will find, sort and display all files based on mtime. This could be done with find | xargs, but the find | xargs pipeline will not produce correct results if the output of find is larger than xargs' command-line buffer: when the buffer fills, xargs processes the find results in more than one batch, which is not compatible with sorting.

Note the "-print0" on find and "-0" switch for perl. This is the equivalent of using xargs. Don't you love perl?

Note that this pipeline can be easily modified to sort on any field produced by perl's stat operator. E.g., you could sort on size, hard links, inode change time, etc. Look at stat and just change the '9' to what you want. Changing the '9' to a '7', for example, sorts by file size. A '3' sorts by number of links...

Use head and tail at the end of the pipeline to get oldest files or most recent. Use awk or perl -wnla for further processing. Since there is a tab between the two fields, it is very easy to process.

perl -wlne 'print $1 if /(([01]?\d\d?|2[0-4]\d|25[0-5])\.([01]?\d\d?|2[0-4]\d|25[0-5])\.([01]?\d\d?|2[0-4]\d|25[0-5])\.([01]?\d\d?|2[0-4]\d|25[0-5]))/' iplist
2009-09-17 16:14:52
User: salparadise
Functions: perl
-1

Prints only the IP addresses from a file.

In this case the file is called "iplist" and contains lines like "ip address 1.1.1.1"; only the "1.1.1.1" portion of each line is printed.

perl -i -pe 's/\xef\xbb\xbf//g' <file>
perl -e "print scalar(gmtime(1247848584))"
2009-09-11 06:19:09
User: opexxx
Functions: perl
Tags: perl
0

print scalar gmtime