Commands tagged google (68)

  • translate <phrase> <source-language> <output-language> works from the command line (a usage sketch follows this entry)


    2
    cmd=$( wget -qO- "http://ajax.googleapis.com/ajax/services/language/translate?v=1.0&q=$1&langpair=$2|${3:-en}" | sed 's/.*"translatedText":"\([^"]*\)".*}/\1\n/'; ); echo "$cmd"
    dtolj · 2010-03-13 01:09:00 50
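    A minimal usage sketch (not part of the original post): the one-liner wrapped in a .bashrc function; the function name translate and the example phrase are assumptions.
    translate() { wget -qO- "http://ajax.googleapis.com/ajax/services/language/translate?v=1.0&q=$1&langpair=$2|${3:-en}" | sed 's/.*"translatedText":"\([^"]*\)".*}/\1\n/'; }
    translate "hello world" en fr   # phrase, source language, output language (output defaults to en)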
  • Just add this to your .bashrc file. Use quotes when the query contains multiple words (see the usage sketch after this entry).


    2
    findlocation() { place=`echo $1 | sed 's/ /%20/g'` ; curl -s "http://maps.google.com/maps/geo?output=json&oe=utf-8&q=$place" | grep -e "address" -e "coordinates" | sed -e 's/^ *//' -e 's/"//g' -e 's/address/Full Address/';}
    shadyabhi · 2010-10-18 21:11:42 15
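    A usage sketch, assuming the function has been added to ~/.bashrc and sourced; the address is only an example:
    source ~/.bashrc
    findlocation 10 Downing Street London     # only "10" is passed, because this version uses $1
    findlocation "10 Downing Street London"   # quoted, as the note above recommends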

  • 2
    say() { curl -sA Mozilla -d q=`python3 -c 'from urllib.parse import quote_plus; from sys import stdin; print(quote_plus(stdin.read()[:100]))' <<<"$@"` 'http://translate.google.com/translate_tts' | mpg123 -q -; }
    kev · 2011-11-26 09:18:16 20
  • (1) Required: python-googl (install with: pip install python-googl). (2) Get your API key from the Google API console: https://code.google.com/apis/console/ (a wrapper sketch follows this entry).


    2
    python -c 'import googl; print googl.Googl("<your_google_api_key>").shorten("'$someurl'")[u"id"]'
    shr386 · 2012-05-31 17:14:17 3
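    A hedged sketch wrapping the same call in a shell function so the URL is passed as an argument; shorten() and GOOGLE_API_KEY are my names, not part of the post, and the Python 2 print statement is kept from the original:
    shorten() { python -c 'import googl, sys; print googl.Googl("'"$GOOGLE_API_KEY"'").shorten(sys.argv[1])[u"id"]' "$1"; }
    shorten "http://www.example.com/a/very/long/path"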

  • 2
    say() { wget -q -U Mozilla -O output.mp3 "http://translate.google.com/translate_tts?ie=UTF-8&tl=en&q=$1"; open output.mp3 &>/dev/null || xdg-open output.mp3 &>/dev/null; }
    runvnc · 2014-04-17 07:35:49 8
  • Searches Google without needing quotes, and searches all terms given on the command line, e.g. google foo bar opens the search URL for both terms. You could also use awk to replace all spaces with a +, which is how Google search handles spaces, but that makes it more than one line (a one-line variant using parameter expansion is sketched after this entry).


    2
    function google () { st="$@"; open "http://www.google.com/search?q=${st}"; }
    plasticphyte · 2014-05-07 03:14:05 23
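    A sketch of the space handling the description mentions; this variant keeps it one line by using bash parameter expansion instead of awk (the behaviour of the +-joined query is otherwise the same):
    function google () { st="$@"; open "http://www.google.com/search?q=${st// /+}"; }
    google foo bar   # opens http://www.google.com/search?q=foo+bar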
  • Improved Google text-to-speech function. Lets you specify the language and plays the sound in the terminal, automatically removing the downloaded file after successful playback. Usage: say LANGUAGE TEXT. Examples: say en "This is a test." ; say pl "To jest test" (Polish for "This is a test").


    2
    function say { wget -q -U Mozilla -O google-tts.mp3 "http://translate.google.com/translate_tts?ie=UTF-8&tl=$1&q=$2"; open google-tts.mp3 &>/dev/null || mplayer google-tts.mp3 &>/dev/null; rm google-tts.mp3; }
    Zath · 2014-08-01 23:43:16 16
  • Access a random news web page on the internet. The Links browser can of course be replaced by Firefox or any modern graphical web browser (an unrolled version of the selection logic is sketched after this entry).


    2
    links $( a=( $( lynx -dump -listonly "http://news.google.com" | grep -Eo "(http|https)://[a-zA-Z0-9./?=_-]*" | grep -v "google.com" | sort | uniq ) ) ; amax=${#a[@]} ; n=$(( `date '+%s'` % $amax )) ; echo ${a[n]} )
    pascalvaucheret · 2016-07-26 11:52:12 54
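    The same selection logic, unrolled purely for readability (an illustrative sketch, not a replacement posted by the author):
    urls=( $( lynx -dump -listonly "http://news.google.com" | grep -Eo "(http|https)://[a-zA-Z0-9./?=_-]*" | grep -v "google.com" | sort | uniq ) )
    count=${#urls[@]}                    # number of candidate links
    pick=$(( $(date '+%s') % count ))    # epoch seconds modulo the count gives a pseudo-random index
    links "${urls[pick]}"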

  • 2
    nslookup -q=TXT _netblocks.google.com | grep -Po '\b([0-1]?\d{1,2}|2[0-4]\d|25[0-5])(\.([0-1]?\d{1,2}|2[0-4]\d|25[0-5])){3}(/\d{1,2})\b'
    emphazer · 2018-10-05 12:50:48 380

  • 1
    spellcheck(){ curl -sd "<spellrequest><text>$1</text></spellrequest>" https://www.google.com/tbproxy/spell | sed 's/.*<spellresult [^>]*>\(.*\)<\/spellresult>/\1/;s/<c \([^>]*\)>\([^<]*\)<\/c>/\1;\2\n/g' | grep 's="1"' | sed 's/^.*;\([^\t]*\).*$/\1/'; }
    matthewbauer · 2010-02-17 01:55:28 14
  • Full command: google contacts list name,name,email|perl -pne 's%^((?!N\/A)(.+?)),((?!N\/A)(.+?)),([a-z0-9\._-]+\@([a-z0-9][a-z0-9-]*[a-z0-9]\.)+([a-z]+\.)?([a-z]+))%${1}:${3} <${5}>%imx'|grep -oP '^((?!N\/A)(.+?)) <[a-z0-9\._-]+\@([a-z0-9][a-z0-9-]*[a-z0-9]\.)+([a-z]+\.)?([a-z]+)>' | sort . You'll need googlecl and python-gdata. First set up googlecl by running: google . Then grant it access to your contacts: google contacts list name,email . Then run the command, or use this variant to dump the result into ~/cone-adress.txt in your home dir: google contacts list name,name,email | perl -p -n -e 's%^((?!N\/A)(.+?)),((?!N\/A)(.+?)),([a-z0-9\._-]+\@([a-z0-9][a-z0-9-]*[a-z0-9]\.)+([a-z]+\.)?([a-z]+))%${1}:${3} <${5}>%imx' | grep -o -P '^((?!N\/A)(.+?)) <[a-z0-9\._-]+\@([a-z0-9][a-z0-9-]*[a-z0-9]\.)+([a-z]+\.)?([a-z]+)>' | sort > ~/cone-adress.txt . Then import it into cone. It filters out duplicate emails and contacts with no email address, which show up as N/A (Picasa photo persons without email, for example).


    1
    google contacts list name,name,email|perl -pne 's%^((?!N\/A)(.+?)),((?!N\/A)(.+?)),([a-z0-9\._-]+\@([a-z0-9][a-z0-9-]*[a-z0-9]\.)+([a-z]+\.)?([a-z]+))%${1}:${3} <${5}>%imx' #see below for full command
    Raymii · 2010-07-12 16:50:44 148
  • Opens the Google "I'm Feeling Lucky" result in lynx, the command-line browser.


    1
    lucky(){ url=$(echo "http://www.google.com/search?hl=en&q=$@&btnI=I%27m+Feeling+Lucky&aq=f&oq=" | sed 's/ /+/g'); lynx $url; }; lucky "Emperor Norton"
    smop · 2010-08-13 00:23:25 6
  • Alternative to http://commandlinefu.com/commands/view/6831/find-co-ordinates-of-a-location using $* instead of $1, so there is no need to quote multi-word locations.


    1
    findlocation() { place=`echo $* | sed 's/ /%20/g'` ; curl -s "http://maps.google.com/maps/geo?output=json&oe=utf-8&q=$place" | grep -e "address" -e "coordinates" | sed -e 's/^ *//' -e 's/"//g' -e 's/address/Full Address/';}
    nimmylebby · 2010-10-18 21:38:20 5
  • A simple edit to make this work on OS X. Just add it to your ~/.profile and run `source ~/.profile`.


    1
    rtfm() { help $@ || man $@ || open "http://www.google.com/search?q=$@"; }
    vaporub · 2011-01-26 06:23:42 5
  • wget -qO - "http://www.google.com/dictionary/json?callback=dict_api.callbacks.id100&q=steering+wheel&sl=en&tl=en&restrict=pr,de&client=te" does the actual Google Dictionary query; it returns a JSON string encapsulated in a fancy callback tag. sed 's/dict_api\.callbacks.id100.//' removes the beginning of the tag, and sed 's/,200,null)//' removes its end. There are also some special characters that could cause problems with some JSON parsers, so if you get errors, this is probably the cause (sed is your friend). I also like to trim the "webDefinitions" part, because it (sometimes) contains misleading information: sed 's/\,\"webDefinitions.*//' (but remember to append a "}" at the end, because the JSON string will otherwise be invalid; see the sketch after this entry). The output also contains links to mp3 files with pronunciation. As of now, this is only usable for English. If you choose a language other than English, you will only get webDefinitions (which are crap).


    1
    wget -qO - "http://www.google.com/dictionary/json?callback=dict_api.callbacks.id100&q=steering+wheel&sl=en&tl=en&restrict=pr,de&client=te" | sed 's/dict_api\.callbacks.id100.//' | sed 's/,200,null)//'
    sairon · 2011-03-08 15:00:39 16
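    A sketch of the full pipeline with the optional webDefinitions trim described above; the closing } is re-appended with a final sed so the JSON stays valid (the query word is just the example from the post):
    wget -qO - "http://www.google.com/dictionary/json?callback=dict_api.callbacks.id100&q=steering+wheel&sl=en&tl=en&restrict=pr,de&client=te" | sed 's/dict_api\.callbacks.id100.//' | sed 's/,200,null)//' | sed 's/\,\"webDefinitions.*//' | sed 's/$/}/'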
  • Usage: say hello world how are you today


    1
    say() { local IFS=+;mplayer "http://translate.google.com/translate_tts?q=$*"; }
    RanyAlbeg · 2011-09-08 13:02:46 3
  • This command places symbolic links to the files listed in an m3u playlist into a specified folder. Useful for uploading playlists to Google Music. prefix = the full path prefix to the file entries in your .m3u file, if the paths are relative; for example, if you have "Music/folder/song.mp3" in your list.m3u, you might want to specify "/home/username" as your prefix. list.m3u = path to the playlist. target_folder = path to the folder in which you would like to create the symlinks (a worked example follows this entry).


    1
    (IFS=$'\n'; ln -sf $(awk '((NR % 2) != 0 && NR > 1) {print "prefix" $0}' list.m3u) target_folder)
    lxe · 2011-09-25 16:45:28 6
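    A worked example following the parameter description above; /home/username/, ~/list.m3u and ~/GoogleMusic are hypothetical paths (note the trailing slash on the prefix, since awk concatenates it directly with each playlist entry):
    (IFS=$'\n'; ln -sf $(awk '((NR % 2) != 0 && NR > 1) {print "/home/username/" $0}' ~/list.m3u) ~/GoogleMusic)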
  • Gets the first 10 Google results for a query, showing only the URLs from the results. Use + to join search terms, e.g.: gg commandlinefu+google


    1
    gg(){ lynx -dump http://www.google.com/search?q=$@ | sed '/[0-9]*\..http:\/\/www.google.com\/search?q=related:/!d;s/...[0-9]*\..http:\/\/www.google.com\/search?q=related://;s/&hl=//';}
    chon8a · 2012-04-21 03:31:26 6

  • 1
    Q="Hello world"; GOOG_URL="http://www.google.com/search?q="; AGENT="Mozilla/4.0"; stream=$(curl -A "$AGENT" -skLm 10 "${GOOG_URL}\"${Q/\ /+}\"" | grep -oP '\/url\?q=.+?&amp' | sed 's/\/url?q=//;s/&amp//'); echo -e "${stream//\%/\x}"
    westeros91 · 2012-08-26 20:13:21 8
  • Same as the other rtfm functions, but uses the more correct xdg-open instead of $BROWSER. I can't find a way to open info only if the term exists, so it stays out of my version.


    1
    rtfm() { help $@ || man $@ || xdg-open "http://www.google.com/search?q=$@"; }
    KlfJoat · 2014-04-25 04:17:03 96
  • translate <some phrase> [output-language] [source-language]. 1) "some phrase" should be in quotes; 2) [output-language] is optional (default: English); 3) [source-language] is optional (default: auto). Examples: translate "bonjour petit lapin" prints "hello little rabbit"; translate "bonjour petit lapin" en prints "hello little rabbit"; translate "bonjour petit lapin" en fr prints "hello little rabbit".


    1
    translate(){ wget -U "Mozilla/5.0" -qO - "https://translate.google.com/translate_a/single?client=t&sl=${3:-auto}&tl=${2:-en}&dt=t&q=$1" | cut -d'"' -f2; }
    klisanor · 2014-06-10 12:08:51 11
  • sort -R randomizes the list; head -n1 takes the first entry.


    1
    links `lynx -dump -listonly "http://news.google.com" | grep -Eo "(http|https)://[a-zA-Z0-9./?=_-]*" | grep -v "google.com" | sort -R | uniq | head -n1`
    mogoh · 2016-07-26 12:54:53 15
  • A bit shorter; the parentheses are not needed but are added for clarity.


    1
    nslookup -q=TXT _netblocks.google.com | grep -Eo 'ip4:([0-9\.\/]+)' | cut -d: -f2
    jseppe · 2018-10-05 18:19:15 383
  • Substitute URL with your private/public XML URL from the calendar sharing settings, substitute the YYYY-mm-dd dates, and adjust the perl parsing part to your needs (a sketch of these substitutions follows this entry).


    0
    wget -q -O - 'URL/full?orderby=starttime&singleevents=true&start-min=2009-06-01&start-max=2009-07-31' | perl -lane '@m=$_=~m/<title type=.text.>(.+?)</g;@a=$_=~m/startTime=.(2009.+?)T/g;shift @m;for ($i=0;$i<@m;$i++){ print $m[$i].",".$a[$i];}';
    unixmonkey4704 · 2009-07-23 14:48:54 4
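    A sketch of the substitutions described above; CAL_URL is deliberately left as a placeholder for the XML address from your calendar sharing settings, and the dates are examples (note that the hard-coded year inside the perl regex has to match the requested range):
    CAL_URL='PASTE_YOUR_PRIVATE_OR_PUBLIC_XML_URL_HERE'
    wget -q -O - "${CAL_URL}/full?orderby=starttime&singleevents=true&start-min=2014-01-01&start-max=2014-01-31" | perl -lane '@m=$_=~m/<title type=.text.>(.+?)</g;@a=$_=~m/startTime=.(2014.+?)T/g;shift @m;for ($i=0;$i<@m;$i++){ print $m[$i].",".$a[$i];}'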
  • This is a minimalistic version of the ubiquitous Google definition screen scraper. This version was designed not only to run fast, but to work using BusyBox. BusyBox is a collection of basic Unix tools that have been compiled into a single binary to save space on tiny installations of Unix. For example, although my phone doesn't have perl or the GNU utilities, it does have BusyBox's stripped-down versions of wget, tr, and sed. It turns out that those tools suffice for many tasks. Known bugs: this script does not handle HTML entities at all. I don't think there's an easy way to do that within BusyBox, but I'd love to see it if someone could do it (a partial sed-based sketch follows this entry). Also, this script can only define a single word, not phrases. (Well, you could if you typed in %20, but that'd be gross.) Lastly, this script does not show the URL where definitions were found. Given the randomness of the Net, that last bit of information is often key.


    0
    wget -q -U busybox -O- "http://www.google.com/search?ie=UTF8&q=define%3A$1" | tr '<' '\n' | sed -n 's/^li>\(.*\)/\1\n/p'
    hackerb9 · 2010-02-01 13:01:47 9
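    On the HTML-entities limitation noted above: a hedged sketch that decodes a handful of common entities with one more sed stage, staying within plain BusyBox sed (only these few entities are handled, so it is not a general decoder; &amp; is decoded last to avoid double-decoding):
    wget -q -U busybox -O- "http://www.google.com/search?ie=UTF8&q=define%3A$1" | tr '<' '\n' | sed -n 's/^li>\(.*\)/\1\n/p' | sed -e 's/&lt;/</g' -e 's/&gt;/>/g' -e 's/&quot;/"/g' -e "s/&#39;/'/g" -e 's/&amp;/\&/g'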