Commands using perl (369)

  • Lists which of the byte values 0-127 never appear in dummy.txt, working in decimal rather than hex. od dumps the byte values, cut removes the offsets, and sort -u gives the set of values used in the file; seq 0 127 gives the comparison list, which needs the same (alphabetical) sorting for comm, which does the filtering. I drop to perl to convert the remaining codes back to characters (is there a better way? - a pure-perl sketch follows below) and then use od to dump them in a print-safe format.


    0
    comm -13 <(od -vw1 -tu1 dummy.txt|cut -c9-|sort -u) <(seq 0 127|sort)|perl -pe '$_=chr($_)'|od -c
    bazzargh · 2012-01-09 01:32:20 7
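
    In answer to the "better way" question above, an untested sketch that stays in perl for the whole job; it prints the unused byte values 0-127 as decimal numbers rather than piping characters through od -c:

      perl -ne '$seen{$_}++ for unpack "C*"; END { print join(" ", grep { !$seen{$_} } 0..127), "\n" }' dummy.txt
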

  • 0
    perl -e 'for (<*.mp3>) { $old = $_; s/ /-/g; rename $old, $_ }'
    ironcamel · 2012-02-16 15:53:41 3
  • Command-line perl filter that, given the production.log from a Rails app, displays in real time the count of requests grouped by "seconds to complete" (a gross rounding, but fair enough for a one-liner) :)


    0
    tail -f production.log | perl -ne 'if (/^Completed.in.(\d+)/){$d = int($1/1000);print "\n";$f{$d}++;for $t (sort(keys(%f))){print $t."s: ".$f{$t}."\n"}}'
    theist · 2012-02-23 14:37:33 3
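
    The same filter written out long-hand, in case the one-liner above is hard to follow (a sketch; the behaviour is meant to be identical):

      tail -f production.log | perl -ne '
        next unless /^Completed.in.(\d+)/;   # grab the completion time in milliseconds
        $sec = int($1 / 1000);               # bucket by whole seconds
        $count{$sec}++;
        print "\n";                          # blank line separates each refresh
        printf "%ss: %d\n", $_, $count{$_} for sort keys %count;
      '
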
  • Handles @, ?, and whitespace in names. Replace "?" and "add" with "!" and "rm" for an svn mass remove. I'm still looking for a nicer way to write it, perhaps with something using perl -ne '`blahblah` if /\?(.*)/' - a sketch of that approach follows below.


    0
    svn st | awk ' {if ( $1 == "?" ){print $1="",$0}} ' | sed -e 's/^[ \t]*//' | sed 's/ /\\ /g' | perl -ne '`svn add ${1}@` if /(.*)(@*)(.*)/'
    JulieCaroline · 2012-02-23 18:42:02 3
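
    A possible version of the perl -ne approach the author is after (untested): the list form of system avoids the whitespace-escaping seds, and the trailing @ is kept from the original command to stop svn treating an embedded @ as a peg revision.

      svn st | perl -ne 'if (/^\?\s+(.*)/) { system "svn", "add", "--", "$1@" }'
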
  • People are *going* to post the wrong ways to do this. It's one of the most common form-validation tasks, and also one of the most commonly messed up. Using a third-party tool or library like exim means that you are future-proofing yourself against changes to the email standard, and protecting yourself against the fact that actually checking whether an email address is valid is *not possible*.

    Still, perhaps your boss is insisting you really do need to check them internally. OK. Read the RFCs. The part before the @ is specified by RFC 2821 and RFC 2822. The domain-name part is specified by RFC 1035, RFC 1101, RFC 1123 and RFC 2181. Generally, when people say "email address", they mean the part of the address that the RFCs term the "addr-spec": the "blah@domain.tld" address, with no display names, comments, quotes, etc. Also "root@localhost" and "root" should be invalid, as should arbitrary addressing schemes specified by a protocol indicator, like "jimbo@myprotocol:foo^bar^baz".

    So, with the smallest poetic licence for readability (allowing underscores in domain names so we can use "\w" instead of "[a-z0-9]"), the RFCs give us:

        ^(?:"(?:[^"\\]|\\.)+"|[-^!#\$%&'*+\/=?`{|}~.\w]+)@(?=.{3,255}$)(?:[\w][\w-]{0,62}\.){1,128}[\w][\w-]{0,62}$

    Not perfect, but it's the best I can come up with, and the most compliant I've found. I'd be interested to see other people's ideas, though. It's still not going to verify an address as for-sure, properly, 100% guaranteed legit. What else can you do? Well, you could also:

    * verify that the address is either a correct dotted-decimal IP, or contains letters;
    * remove reserved domains (.localhost, .example, .test, .invalid), reserved IP ranges, and so forth from the address;
    * check for banned domains (whitehouse.gov, example.com...);
    * check for known TLDs, including alt TLDs;
    * see if the domain has an MX record set up: if so, connect to that host, else connect to the domain;
    * see if the given address is accepted by the server as a recipient or sender (this fails for yahoo.*, which blocks after a few attempts on the assumption that you are a spammer, and for other domains like rediffmail.com and home.com).

    But these are moving well out of the realm of generic regex checks and into the realm of application-specific stuff that should be done in code instead - especially the last two. Hopefully, this is all you need to point out to your boss: "hey, email validation is a dark pit with no bottom; we really just want to do a basic check, then send them an email with a link in it - it's the industry-standard solution."

    Of course, if you want to go nuts, here's an idea you could try. I wouldn't like to do it myself, though: I'd rather just trust the address until its mail bounces too many times. But if you want it, this (untested) code checks whether the mail domain works. It's based on a script by John Coggeshall and Jesse Houwing that also asked the server whether the specific email address existed, but I disliked that idea for several reasons. I suspect it would get you blocked as a spambot address harvester pretty quickly; a lot of servers would lie to you; it would take too much time; and it would add little to the reliability of the test - whereas this way you can cache domains marked as "OK".

        // Based on work by John Coggeshall and Jesse Houwing.
        // http://www.zend.com/zend/spotlight/ev12apr.php
        $mailRegex  = '/^(?:"(?:[^"\\\\]|\\\\.)+"|[-^!#\$%&\'*+\/=?`{|}~.\w]+)';
        $mailRegex .= '@(?=.{3,255}$)(?:[\w][\w-]{0,62}\.){1,128}[\w][\w-]{0,62}$/';

        function ValidateMail($address) {
          global $mailRegex; // Yes, globals are evil. Put it inline if you want.
          if (!preg_match($mailRegex, $address)) {
            return false;
          }
          list($localPart, $Domain) = split("@", $address);
          // Connect to the first available MX record, or to the domain if there is no MX record.
          $ConnectAddress = array();
          if (getmxrr($Domain, $MXHost)) {
            $ConnectAddress = $MXHost;
          } else {
            $ConnectAddress[0] = $Domain;
          }
          // Check all MX records in case the main server is down - may take time!
          $Connect = false;
          for ($i = 0; $i < count($ConnectAddress); $i++) {
            $Connect = fsockopen($ConnectAddress[$i], 25);
            if ($Connect) {
              break;
            }
          }
          if ($Connect) {
            socket_set_blocking($Connect, 0);
            // Only works if socket blocking is off.
            if (ereg("^220", $Out = fgets($Connect, 1024))) {
              fclose($Connect); // Unneeded, but let's help the gc.
              return true;
            }
            fclose($Connect); // Help the gc.
          }
          return false;
        }


    0
    perl -e "print 'yes' if `exim -bt $s_email_here | grep -c malformed`;"
    DewiMorgan · 2012-02-28 04:42:41 3
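
    If you do want to try the addr-spec regex from the description on real data, here is a small, hedged perl sketch (the file handling and verdict wording are placeholders, not part of the original command):

      #!/usr/bin/perl
      # Reads one candidate address per line and reports whether it matches the addr-spec regex above.
      use strict;
      use warnings;
      my $mail_re = qr/^(?:"(?:[^"\\]|\\.)+"|[-^!#\$%&'*+\/=?`{|}~.\w]+)\@(?=.{3,255}$)(?:[\w][\w-]{0,62}\.){1,128}[\w][\w-]{0,62}$/;
      while (<>) {
          chomp;
          printf "%s: %s\n", $_, (/$mail_re/ ? "matches addr-spec" : "rejected");
      }

    As the description says, treat a match as "plausible", not "deliverable".
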
  • Loop through all files with a .php extension, make a .bak copy of each, find the injected eval(base64_decode("...")) call and remove it. (A whitespace-safe variant is sketched below.)


    0
    for f in `find . -name "*.php"`; do perl -p -i.bak -e 's/<\?php \/\*\*\/ eval\(base64_decode\(\"[^\"]+"\)\);\?>//' $f; done
    lizuka · 2012-03-12 10:44:33 3
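
    A whitespace-safe sketch of the same clean-up, with the regex copied verbatim from the command above (untested; -exec ... {} + hands perl batches of file names, so spaces in paths don't break the loop):

      find . -name "*.php" -type f -exec perl -p -i.bak -e 's/<\?php \/\*\*\/ eval\(base64_decode\(\"[^\"]+"\)\);\?>//' {} +
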
  • Provides a much cleaner, easier-to-read output than the closest alternative, ls -1R. This variant makes it easier to differentiate directories from files:

    find . -printf '%y %p\n' | perl -ne 'if( m/(\w) (.*)\/(.*)/ ) { $t = $1; $p = $2; $f = $3; $t =~ s/[^d]/ /; $p =~ s/[^\/]/ /g; $p =~ s/\//|/g; print "$t $p/$f\n"; } elsif( m/(\w) (.*)/ ) { print "$1 $2\n"; } else { print "error interpreting: \"$_\"\n"; }'


    0
    find . -printf '%p\n' | perl -ne 'if( m/(.*)\/(.*)/ ) { $p = $1; $f = $2; $p =~ s/[^\/]/ /g; $p =~ s/\//|/g; print "$p/$f\n"; } elsif( m/(.*)/ ) { print "$1\n"; } else { print "error interpreting: \"$_\"\n"; }'
    cbetti · 2012-04-24 19:51:00 3
  • Turns literal \n sequences into real newlines. This also works with -i, just like one might do with sed.


    0
    perl -p -e 's/\\n/\n/g'
    icorbett · 2012-06-06 15:37:34 4
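
    For example, to rewrite a file in place while keeping a backup (the file name is a placeholder):

      perl -p -i.bak -e 's/\\n/\n/g' escaped.txt
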

  • 0
    cat ~/.bash_history | perl -lane 'if($F[0] eq "sudo"){$hash{$F[1]}++}else{$hash{$F[0]}++};$all++;END {@top = map {[$_, $hash{$_}]} sort {$hash{$b}<=>$hash{$a}} keys %hash;printf("%10s%10d%10.2f%%\n", $_->[0],$_->[1],$_->[1]/$all*100) for @top[0..9]}'
    vinian · 2012-07-02 09:38:06 7

  • 0
    perl -e '$s="$s\xFF" while length($s)<512; print $s while 1' | dd of=/dev/sdX
    stonefoz · 2012-08-20 05:33:28 4
  • Converts epoch timestamps in a log to human-readable dates. Works wherever perl works, because the awk version is GNU awk only.


    0
    cat log | perl -ne 'use POSIX; s/([\d.]+)/strftime "%y-%m-%d %H:%M:%S", localtime $1/e,print if /./'
    bs · 2012-09-19 06:38:31 4
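
    An assumed demonstration on a single line of input (the timestamp and request text are made up; the substitution rewrites the leading epoch time in your local timezone):

      echo "1348036711 GET /index.html" | perl -ne 'use POSIX; s/([\d.]+)/strftime "%y-%m-%d %H:%M:%S", localtime $1/e,print if /./'
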
  • Encode HTML entities supporting UTF-8 input and output


    0
    perl -MHTML::Entities -ne 'print encode_entities($_)' /tmp/subor.txt
    brx75x · 2012-10-11 08:31:14 28

  • 0
    # strace ... | perl -lne '@F=split(/\\/, $_);for(@F){push @ddd, sprintf("%x", oct("0" . $_))}END{shift @ddd; print pack("H*", join("", @ddd));}'
    zwxajh · 2012-10-16 14:24:13 4

  • 0
    perl -pi -e 's/:([\w\d_]+)(\s*)=>/\1:/g' **/*.rb
    asux · 2012-10-16 16:54:40 4

  • 0
    history | perl -pe "~s/ *[0-9]+ *//"
    tomrag · 2012-10-23 22:14:00 5
  • Fetches lorem ipsum text from lipsum.com. A little bit of grep fixes the extra lines you get when you request only one paragraph. Just set p to the number of paragraphs you want.


    0
    p=1 ; lynx -source http://www.lipsum.com/feed/xml?amount=${p} | grep '<lipsum>' -A$(((p-1))) | perl -p -i -e 's/\n/\n\n/g' | sed -n '/<lipsum>/,/<\/lipsum>/p' | sed -e 's/<[^>]*>//g'
    nublaii · 2012-10-24 14:11:52 4
  • find . sets up your recursive search. You can narrow the search to certain files by adding -name "*.ext", or exclude parts of the tree with -prune. xargs builds a command line from each file that find finds and invokes the next command, which is perl. perl: -p sets up a while loop over the input lines; -i edits in place, and the .bak suffix creates a backup like filename.ext.bak; -e executes the following expression; 's/oldString/newString/;' is your basic substitute-and-replace. (A narrowed, whitespace-safe form is sketched below.)


    0
    find . | xargs perl -p -i.bak -e 's/oldString/newString/;'
    RedFox · 2012-11-28 17:11:18 4
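
    The narrowed, whitespace-safe form described above might look like this (the extension and search strings are placeholders):

      find . -name "*.ext" -type f -print0 | xargs -0 perl -p -i.bak -e 's/oldString/newString/;'
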
  • Since none of the systems I work on have readlink, this works cross-platform (everywhere has perl, right?). It puts the absolute path of the current script ($0) into FULLPATH. Note: this will resolve symlinks.


    0
    FULLPATH=$(perl -e "use Cwd 'abs_path';print abs_path('$0');")
    follier · 2013-02-01 20:09:34 5
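
    Outside of a script you can feed abs_path any path instead of $0; for example, to resolve the parent directory of wherever you are:

      perl -e "use Cwd 'abs_path'; print abs_path('..');"
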
  • Builds a quick reference of HTTP status codes and their names, for when you are troubleshooting Apache or NGINX error/access logs.


    0
    curl http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html 2>/dev/null | grep '^<h3' | grep -v '^\d' | perl -pe 's/^.*(?<=(\d\d\d)) (.*)<\/h3>$/$1 : $2/' | grep -v h3
    klg · 2013-03-06 01:49:08 10
  • The original command doesn't work for me - it does something weird with sed (-r) and xargs (-i), leaving underscores all over. This one works on OS X Lion. I haven't tested it anywhere else, but if you have bash, gpg and perl, it should work.


    0
    for i in `gpg --list-sigs | perl -ne 'if(/User ID not found/){s/^.+([a-fA-F0-9]{8}).*/\1/; print}' | sort | uniq`; do gpg --keyserver-options no-auto-key-retrieve --recv-keys $i; done
    hank · 2013-03-10 09:15:15 5
  • `pwd` returns the current path; `grep -o` prints each slash on its own line; perl generates the path sequence './.', './../.', ...; `readlink` canonicalizes the paths (which makes things more transparent); `xargs -tn1` applies chmod to each of them, printing each command to STDERR as it runs. (An equivalent plain shell loop is sketched below.)


    0
    pwd|grep -o '/'|perl -ne '$x.="./.";print`readlink -f $x`'|xargs -tn1 chmod 755
    luke_skywalker · 2013-03-14 12:03:44 7
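
    For comparison, the same effect written as a plain shell loop (a sketch: it walks up from the current directory instead of generating the ./../. sequence, and leaves / itself untouched):

      d=$PWD; while [ "$d" != / ]; do chmod 755 "$d"; d=$(dirname "$d"); done
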
  • Sends log lines from murmur's (the mumble server's) logfile to syslog.


    0
    nohup tail /var/log/murmur.log | perl -ne '/^<.>[0-9:. -]{24}(\d+ => )?(.*)/; $pid=`pgrep -u murmur murmurd | head`; chomp $pid; `logger -p info -t "murmurd[$pid]" \\"$2\\"`;' &
    MagisterQuis · 2013-05-25 01:12:52 6

  • 0
    perl -e 'printf "%o\n", (stat shift)[2]&07777' file
    smallduck · 2013-06-15 19:26:23 6
  • Xmas lights for your terminal - switching the $l value to something like 1200 and zooming out in your terminal gives a great view...


    0
    perl -le '$l=80;$l2="!" x $l;substr+($l2^=$l2),$l/2,1,"\xFF";{local $_=$l2;y/\0\xFF/ ^/;print;($lf,$rt)=map{substr $l2 x 2,$_%$l,$l;}1,-1;$l2=$lf^$rt;select undef,undef,undef,.1;redo}'
    l3v3l · 2013-06-21 06:00:24 7

  • 0
    perl -lne 'print unless $seen{$_}++'
    kotokz · 2013-07-01 03:47:48 7