Commands tagged dictionary (7)

  • This one uses dictionary.com (dictionary.reference.com) to fetch the word's pronunciation and play it with mpg123


    13
    pronounce(){ wget -qO- $(wget -qO- "http://dictionary.reference.com/browse/$@" | grep 'soundUrl' | head -n 1 | sed 's|.*soundUrl=\([^&]*\)&.*|\1|' | sed 's/%3A/:/g;s/%2F/\//g') | mpg123 -; }
    matthewbauer · 2010-03-13 04:23:56 12
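
    A hedged sketch of the same pipeline as above, split across lines and with a guard in case no soundUrl is found on the page (the guard and its error message are mine, not from the original post; it assumes wget and mpg123 are installed):

        # hedged variant of the one-liner above: look up the word, pull the first
        # soundUrl from the page, decode the percent-escapes, stream the MP3
        pronounce(){
            local url
            url=$(wget -qO- "http://dictionary.reference.com/browse/$1" \
                  | grep 'soundUrl' | head -n 1 \
                  | sed 's|.*soundUrl=\([^&]*\)&.*|\1|; s/%3A/:/g; s/%2F/\//g')
            [ -n "$url" ] || { echo "no pronunciation found for '$1'" >&2; return 1; }
            wget -qO- "$url" | mpg123 -
        }
        pronounce hello
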
  • Note: 1) Replace 'wonder' with whichever word you want to look up. 2) You need to install the wordnet and wordnet-base packages (the latter should be pulled in automatically as a dependency). 3) The combined size of the packages is about 30 MB on my old Ubuntu system; I find it worth it.


    3
    wn wonder -over
    b_t · 2010-10-05 13:56:06 34
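
    A hedged sketch of a tiny wrapper around the command above, assuming the wordnet package is installed; the function name 'define' is hypothetical:

        # hypothetical helper: print the WordNet overview (all senses) for a word
        define(){ wn "$1" -over; }
        define wonder
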
  • wget -qO - "http://www.google.com/dictionary/json?callback=dict_api.callbacks.id100&q=steering+wheel&sl=en&tl=en&restrict=pr,de&client=te" does the actual Google Dictionary query and returns a JSON string wrapped in a JSONP callback. sed 's/dict_api\.callbacks.id100.//' removes the beginning of that wrapper, and sed 's/,200,null)//' removes its end. There are also some special characters that could cause problems with some JSON parsers, so if you get errors, this is probably the cause (sed is your friend). I also like to trim the "webDefinitions" part, because it (sometimes) contains misleading information: sed 's/\,\"webDefinitions.*//' (but remember to append a "}" at the end, because otherwise the JSON string will be invalid). The output also contains links to MP3 files with the pronunciation. As of now, this is only usable for English; if you choose a language other than English, you will only get webDefinitions (which are crap).


    1
    wget -qO - "http://www.google.com/dictionary/json?callback=dict_api.callbacks.id100&q=steering+wheel&sl=en&tl=en&restrict=pr,de&client=te" | sed 's/dict_api\.callbacks.id100.//' | sed 's/,200,null)//'
    sairon · 2011-03-08 15:00:39 16
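
    Putting the steps from the description into one function is a hedged sketch: the function name gdict is hypothetical, the URL, callback id, and sed expressions are the ones used above, and the webDefinitions trim plus the closing '}' follow the note about keeping the JSON valid:

        # hypothetical wrapper: query the (English-only) Google Dictionary JSON endpoint,
        # strip the JSONP wrapper, drop webDefinitions and re-close the JSON object
        gdict(){
            wget -qO- "http://www.google.com/dictionary/json?callback=dict_api.callbacks.id100&q=$1&sl=en&tl=en&restrict=pr,de&client=te" \
            | sed 's/dict_api\.callbacks.id100.//; s/,200,null)//; s/,"webDefinitions.*/}/'
        }
        gdict "steering+wheel"
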
  • Some of the snippets posted here are slow on big dictionaries; this one is fast.


    1
    echo $(shuf -n4 /usr/share/dict/words)
    bohwaz · 2011-08-30 03:10:06 8
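
    A hedged generalization of the one-liner above (the function name and word-count argument are hypothetical); it assumes GNU shuf and the usual /usr/share/dict/words path:

        # hypothetical helper: print N random dictionary words on one line (default: 4)
        randwords(){ shuf -n "${1:-4}" /usr/share/dict/words | paste -sd' ' -; }
        randwords 6
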
  • This restricts things in three ways: 1. No capitalized words, hence no proper names. 2. No apostrophes. 3. Word length is restricted to the range 3–7.


    1
    echo $(grep "^[^'A-Z]\{3,7\}$" /usr/share/dict/words|shuf -n4)
    cbbrowne · 2011-09-07 22:03:45 3
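
    A hedged sketch parameterizing the length range used in the command above; the function name and the MIN/MAX arguments (defaulting to 3 and 7) are hypothetical:

        # hypothetical helper: four random lowercase, apostrophe-free words of length MIN..MAX
        pickwords(){ grep "^[^'A-Z]\{${1:-3},${2:-7}\}$" /usr/share/dict/words | shuf -n4 | xargs; }
        pickwords 5 9
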
  • Updated for the new version of the Merriam-Webster webpage (it seems MW does not use cougar anymore, so the other commands no longer work), using Xidel to parse the page with an HTML parser instead of regexes. Example usage: pronounce onomatopoetic. I'm not sure how well Xidel handles binary streams (although it seems to work great in tests), so using wget to download the actual wav file might be safer, i.e.: pronounce(){ wget -qO- $(xidel "http://www.m-w.com/dictionary/$*" -f "replace(css('.au')[1]/@onclick,\".*'([^']+)', *'([^']+)'.*\", '/audio.php?file=\$1&word=\$2')" -e 'css("embed")[1]/@src') | aplay -q;} Xidel is not a standard CLI tool and has to be downloaded from xidel.sourceforge.net.


    0
    pronounce(){ xidel "http://www.m-w.com/dictionary/$*" -f "replace(css('.au')[1]/@onclick,\".*'([^']+)', *'([^']+)'.*\", '/audio.php?file=\$1&word=\$2')" -f 'css("embed")[1]/@src' --download - | aplay -q;}
    BeniBela · 2013-04-18 13:03:16 4
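
    A hedged rewrap of the function above that only adds a check for the non-standard xidel binary (the guard and its message are mine); the Xidel expressions and the aplay pipeline are unchanged:

        # hedged sketch: same Xidel/aplay pipeline as above, with a guard for xidel
        pronounce(){
            command -v xidel >/dev/null 2>&1 || { echo "xidel not installed (see xidel.sourceforge.net)" >&2; return 1; }
            xidel "http://www.m-w.com/dictionary/$*" \
                  -f "replace(css('.au')[1]/@onclick,\".*'([^']+)', *'([^']+)'.*\", '/audio.php?file=\$1&word=\$2')" \
                  -f 'css("embed")[1]/@src' --download - | aplay -q
        }
        pronounce onomatopoetic
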
  • Runs on at least macOS Sierra (in Bash).


    0
    egrep "^compat.bility$" /usr/share/dict/words
    demonzrulaz · 2017-02-24 05:20:40 17
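
    A hedged generalization of the pattern above (the function name is hypothetical): each '.' in the pattern stands for one unknown letter, which makes this a handy crossword helper:

        # hypothetical crossword helper: dots in the pattern match single unknown letters
        crossword(){ grep -E "^$1$" /usr/share/dict/words; }
        crossword 'compat.bility'
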
