Commands by hunterm (8)

  • Generates a password using symbols, letters, and digits, with no repeating characters.


    -4
    for i in {21..79};do echo -e "\x$i";done | tr " " "\n" | shuf | tr -d "\n"
    hunterm · 2011-04-14 01:43:40 3
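    Note that the brace expansion {21..79} yields decimal numbers only, so hex codes containing a-f (e.g. \x2a or \x4b) never appear and the character pool is smaller than the full printable range. If that matters, a common alternative (not the original approach) draws printable characters straight from /dev/urandom; unlike the command above it allows repeats, and the length 12 is an arbitrary choice:

    # tr -dc '!-~' keeps printable ASCII minus the space character;
    # head -c 12 takes twelve of them (repeated characters possible).
    tr -dc '!-~' < /dev/urandom | head -c 12; echo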

  • 0
    find $(echo "$PATH" | tr ':' ' ') -name "*program*"
    hunterm · 2011-01-07 22:27:44 2
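    The word-splitting in $(echo "$PATH" | tr ':' ' ') breaks if a PATH entry contains spaces. A sketch of a space-safe variant ("*program*" is just the placeholder pattern from the command above):

    # Split PATH on ':' into an array, then search only those directories.
    # -maxdepth 1 is an addition here so subdirectories aren't descended into.
    IFS=: read -ra dirs <<< "$PATH"
    find "${dirs[@]}" -maxdepth 1 -name "*program*"
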
  • RTFMFTW.


    39
    rtfm() { help $@ || man $@ || $BROWSER "http://www.google.com/search?q=$@"; }
    hunterm · 2011-01-05 02:53:38 2
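    A slightly hardened sketch of the same function, with the expansions quoted so multi-word queries survive (still assumes $BROWSER is set, and still does no URL-encoding, just like the original):

    rtfm() { help "$@" || man "$@" || "$BROWSER" "http://www.google.com/search?q=$*"; }
    # Example: rtfm tar  ->  "help tar" fails (not a builtin), so "man tar" is shown.
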
  • Yep, now you can finally google from the command line! Here's a readable version "for your pleasure"(c):

    google() {
        # search the web using google from the commandline
        # syntax: google google
        query=$(echo "$*" | sed "s:%:%25:g;s:&:%26:g;s:+:%2b:g;s:;:%3b:g;s: :+:g")
        data=$(wget -qO - "https://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=$query")
        title=$(echo "$data" | tr '}' '\n' | sed "s/.*,\"titleNoFormatting//;s/\":\"//;s/\",.*//;s/\\u0026/'/g;s/\\\//g;s/#39\;//g;s/'amp;/\&/g" | head -1)
        url="$(echo "$data" | tr '}' '\n' | sed 's/.*"url":"//;s/".*//' | head -1)"
        echo "${title}: ${url} | http://www.google.com/search?q=${query}"
    }

    Enjoy :)


    -7
    The command is too big to fit here. :( Look at the description for the command, in readable form! :)
    hunterm · 2011-01-05 02:45:28 1
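    Once the function above is defined in the shell, usage is simply the following (output depends on whatever the API returns, and the Google AJAX search endpoint it calls has long since been shut down, so treat this as historical):

    google commandlinefu
    # prints something like: <title>: <url> | http://www.google.com/search?q=commandlinefu
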
  • The command was too long for the command box, so here it is:

    echo $(( `wget -qO - http://i18n.counter.li.org/ | grep 'users registered' | sed 's/.*\<font size=7\>//g' | tr '\>' ' ' | sed 's/<br.*//g' | tr ' ' '\0'` + `curl --silent http://www.dudalibre.com/gnulinuxcounter?lang=en | grep users | head -2 | tail -1 | sed 's/.*<strong>//g' | sed 's/<\/strong>.*//g'` ))

    This took me about an hour to do. It uses both wget and curl because dudalibre.com blocks wget, while wget worked nicely for me on counter.li.org.


    -1
    Check the Description below.
    hunterm · 2010-10-07 04:22:32 0
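    The same arithmetic, sketched with the two pipelines assigned to intermediate variables first (the variable names are mine, and both counter sites' markup has almost certainly changed since 2010, so this is illustrative only):

    # Linux Counter total, extracted as in the original command.
    li_users=$(wget -qO - http://i18n.counter.li.org/ | grep 'users registered' | sed 's/.*\<font size=7\>//g' | tr '\>' ' ' | sed 's/<br.*//g' | tr ' ' '\0')
    # dudalibre.com total, extracted as in the original command.
    duda_users=$(curl --silent 'http://www.dudalibre.com/gnulinuxcounter?lang=en' | grep users | head -2 | tail -1 | sed 's/.*<strong>//g' | sed 's/<\/strong>.*//g')
    echo $(( li_users + duda_users ))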

  • 0
    curl --silent http://www.dudalibre.com/gnulinuxcounter?lang=en | grep users | head -2 | tail -1 | sed 's/.*<strong>//g' | sed 's/<\/strong>.*//g'
    hunterm · 2010-10-07 04:12:45 0

  • -1
    wget -qO - http://i18n.counter.li.org/ | grep 'users registered' | sed 's/.*\<font size=7\>//g' | tr '\>' ' ' | sed 's/<br.*//g' | tr ' ' '\0'
    hunterm · 2010-10-07 03:19:17 0
  • Outputs the HTML from xkcd's front page, filters out the HTML tags to leave the comic's image URL, and then views it in gwenview.


    -1
    gwenview `wget -O - http://xkcd.com/ | grep 'png' | grep '<img src="http://imgs.xkcd.com/comics/' | sed s/title=\".*//g | sed 's/.png\"/.png/g' | sed 's/<img src=\"//g'`
    hunterm · 2010-08-24 22:21:51 1
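    A shorter sketch of the same extraction using a single grep -o; it assumes xkcd's markup still embeds the comic as an absolute http://imgs.xkcd.com/comics/ URL, which may no longer be the case:

    gwenview "$(wget -qO - http://xkcd.com/ | grep -o 'http://imgs\.xkcd\.com/comics/[^"]*\.png' | head -1)"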


Check These Out

How to export a table to a .csv file
Exports the result of a query to a CSV file.

clear the cache from memory

defragment files
Thanks to flatcap for optimizing this command. It takes advantage of the ext4 filesystem's resistance to fragmentation: files that were previously fragmented are copied / deleted / pasted, essentially giving the filesystem another chance at saving the file contiguously. (Unlike FAT / NTFS, the *nix filesystems always try to save a file without fragmenting it.) My command only affects the home directory, and only those files with your R/W (read/write) permissions.

There are two issues with this command:
1. It really won't help much. It works, but Linux doesn't suffer much (if any) fragmentation, and even fragmented files have fast I/O.
2. It doesn't discriminate between fragmented and non-fragmented files, so a large ~/ directory with no fragments will take almost as long as an equally sized fragmented ~/ directory.

The benefits I managed to work into the command:
1. It only defragments files under 16 MB, because a large file with fragments isn't as noticeable as a small file that's fragmented, and copy/delete/paste of large files would take too long.
2. It gives a nice countdown in the terminal so you know how much progress is being made, and just like other defragmenters you can stop at any time (use Ctrl+C).
3. Fast! I can defrag my ~/ directory in 11 seconds thanks to the ramdrive powering the command's temporary storage.

Bottom line:
1. It's only an experiment, safe (I've used it several times for testing), but probably not very effective (unless you somehow have a fragmentation problem on Linux). It might be a placebo for recent Windows converts looking for a defrag utility on Linux who won't accept no for an answer.
2. It's my first commandlinefu command.
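The actual command isn't reproduced in this listing, but the copy / delete / paste idea it describes boils down to rewriting each small file in place, roughly like this (purely illustrative, not the author's command; example.txt is a hypothetical file):

    f=./example.txt                              # hypothetical file under ~/
    cp "$f" "$f.defrag" && mv "$f.defrag" "$f"   # the freshly written copy replaces the original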

Print text string vertically, one character per line.

Copy a file from a remote server to your local box using on-the-fly compression
-P displays a progress meter; -z tells rsync to use compression.
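A sketch of the described copy; the user, host, and paths are placeholders:

    $ rsync -zP user@remote.example.com:/path/to/file ./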

static compilation

check open ports without netstat or lsof

List all information about all files (in current dir)
This is a funny usage of the traditional command ls. It could basically be simplified as:
    $ ls -a -l
Duplicating arguments is permitted:
    $ ls -a -l -l
And this can be shortened to:
    $ ls -al
Extra note: to view file sizes like a pro, pray to your God:
    $ ls -allah

Generate a graph of package dependencies
Requires imagemagick and graphviz. On Debian systems, displays a graph of package dependencies. Also works with other image formats, like SVG:
    $ apt-cache dotty bash | dot -T svg | display

list files recursively by size

