curl doesn't provide URL-encoding for 'GET' data; it has an option '--data-urlencode', but that's only for 'POST' data. That's why I needed to write this command line. With 'perl', 'php', or 'python' this is a one-liner, but I wrote it just for fun. Works on Ubuntu and should work on all Linux variants (I hope it works on Unix variants too).
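For reference, here is a minimal pure-bash percent-encoder along the same lines (a sketch, not the exact command from the entry; the function name is made up):
urlencode() {
  local s="$1" c out="" i
  for (( i=0; i<${#s}; i++ )); do
    c="${s:i:1}"
    case "$c" in
      [a-zA-Z0-9.~_-]) out+="$c" ;;                # RFC 3986 unreserved characters pass through
      *) printf -v c '%%%02X' "'$c"; out+="$c" ;;  # everything else becomes %XX
    esac
  done
  printf '%s\n' "$out"
}
urlencode "foo bar&baz"   # -> foo%20bar%26baz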
Use the hold space to preserve lines until data is needed.
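The classic illustration of the technique is the one-liner that prints a file in reverse (a tac emulation):
sed -n '1!G;h;$p' file.txt   # 1!G appends the hold space to each new line, h saves the result back, $p prints it all at the end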
Strips comments from at least bash and PHP scripts: normal # and // comments as well as PHP /* ... */ block comments. It removes all of the following:
empty/blank lines
lines beginning with #
lines beginning with //
lines beginning with /*
lines beginning with a space and then *
lines beginning with */
It also deletes the lines if there's whitespace before any of the above.
Add an alias to use in .bashrc like this:
alias stripcomments="sed -e '/^[[:blank:]]*#/d; s/[[:blank:]][[:blank:]]*#.*//' -e '/^[[:blank:]]*$/d' -e '/^[[:blank:]]*\/\//d' -e '/^[[:blank:]]*\/\*/d;/^[[:blank:]]*\* /d;/^[[:blank:]]*\*\//d'"
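For example (the file name here is hypothetical):
stripcomments myscript.sh > myscript-stripped.sh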
For those days when you need to know if something is happening because the day ends in "y".
Search all HTML files and delete every line in which 'String' is found.
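One way to do that, assuming GNU sed's -i for in-place editing ('String' is a placeholder):
find . -name '*.html' -exec sed -i '/String/d' {} +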
We can put this inside a function:
fxray() { curl -s "http://urlxray.com/display.php?url=$1" | grep -o '<title>.*</title>' | sed 's/<title>.*--> \(.*\)<\/title>/\1/g'; }
fxray http://tinyurl.com/demo-xray
Are the two strings anagrams of one another? sed splits the strings into one character per line, the result is sorted, and cmp compares the results. Note: This is not pretty; I just wanted to see if I could do it in bash. Note: It uses fewer characters than the perl version :-)
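A sketch of that approach (the function name is made up; the \n in the sed script needs GNU sed):
anagram() {
  cmp -s <(echo "$1" | sed 's/./&\n/g' | sort) <(echo "$2" | sed 's/./&\n/g' | sort) &&
    echo "anagram" || echo "not an anagram"
}
anagram listen silent   # -> anagram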
A much shorter version of this command.
This command might not be useful for most of us; I just wanted to share it to show the power of the command line. Download the plain-text version of the novel David Copperfield from Project Gutenberg and then generate a single column of words, after which the occurrences of each word are counted by the sort | uniq -c combination. The command removes numbers and single characters from the count. I'm sure you can write a shorter version.
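A rough reconstruction of such a pipeline (the Gutenberg URL is an assumption, and the details may differ from the original entry):
wget -qO- 'https://www.gutenberg.org/files/766/766-0.txt' |
  tr -cs '[:alpha:]' '\n' |       # one word per line; digits and punctuation become separators
  tr '[:upper:]' '[:lower:]' |
  grep -vE '^.?$' |               # drop empty lines and single characters
  sort | uniq -c | sort -rn | head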
Most systems (at least my MacBook) have system users defined, such as _www, and using 'users', for example, will not list them. This command allows you to see who the 'virtual' users are on your system.
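One way to see them on macOS is to ask Directory Services directly; by convention the system accounts start with an underscore:
dscl . -list /Users | grep '^_'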
macchanger will allow you to change 1) the mfg code, 2) the host id, or 3) all of the above. Use this at wifi hotspots to help reduce profiling.
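For example (a sketch; wlan0 is a placeholder, and the interface generally has to be down while you change it):
sudo ip link set wlan0 down
sudo macchanger -e wlan0    # new random host id, vendor prefix kept
sudo macchanger -r wlan0    # or: fully random MAC
sudo ip link set wlan0 up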
This command does the following:
- converts any sequence of multiple spaces/tabs to one space only
- completely removes any space(s)/tab(s) at the end of each line
(If spaces and tabs are mixed in a sequence, i.e. [tab][tab][space][tab], you have to execute this command twice!)
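A sed rendering of those two steps might look like this (a sketch, not necessarily the entry's exact command; since [[:blank:]] matches spaces and tabs together, the two-pass caveat doesn't apply here, and -i needs GNU sed):
sed -i 's/[[:blank:]]\{2,\}/ /g; s/[[:blank:]]*$//' file.txt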
Show a directory tree.
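If the tree utility isn't installed, a rough approximation (a sketch) is to indent find's output:
find . -type d | sed 's|[^/]*/|  |g'   # each path component becomes one level of indentation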
If you need to print some portion of a huge file, let's say you want to print lines 200 to 300, you can use this command to print the lines from LINE1 to LINE2 of file FILE.
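One common way is sed's address range (FILE and the line numbers are placeholders):
sed -n '200,300p' FILE        # print lines 200 through 300
sed -n '200,300p;300q' FILE   # same, but quit at line 300 instead of reading to the end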
OK, so it's a really useless line and I'm sorry for that; furthermore, it's not optimized at all. At first I didn't manage to print out which process was handling the open port 4444 using netstat -p; in the end I realized I wasn't root and security restrictions applied ;p It's nevertheless a (good?) way to see how ps(tree) works, as it acts exactly the same way by reading /proc. So, for a specific port, this line returns the calling command line of every thread that handles the associated socket.
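A rough sketch of that /proc technique (an illustration, not the author's exact line):
# socket inode for local port 4444 (0x115C in /proc's hex notation); assumes one matching socket
inode=$(awk '$2 ~ /:115C$/ {print $10}' /proc/net/tcp)
# walk every process's fd table looking for that socket (needs root for other users' processes)
for fd in /proc/[0-9]*/fd/*; do
  if [ "$(readlink "$fd" 2>/dev/null)" = "socket:[$inode]" ]; then
    tr '\0' ' ' < "${fd%%/fd/*}/cmdline"; echo
  fi
done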
Looks like you're stuck with sed if your ls doesn't have a -Q option.
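The sed fallback is simple enough: wrap every name in double quotes (ls prints one name per line when writing to a pipe):
ls | sed 's/.*/"&"/'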
Select a file/folder at random.
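For example, with shuf from GNU coreutils (a sketch):
shuf -n1 -e *      # one random file or folder from the current directory
ls -a | shuf -n1   # include hidden entries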
tail is much faster than sed or awk because it doesn't check for regular expressions.
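For example, printing lines 200 through 300 of FILE with tail and head alone (a sketch):
tail -n +200 FILE | head -n 101   # 101 lines starting at line 200, no regex work at all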
Use an optimized sed on big files/streams to reduce execution time. Use
sed '/foo/ s/foo/foobar/g' <filename>
instead of
sed 's/foo/foobar/g' <filename>
The /foo/ address restricts the substitution to lines that actually contain the pattern, so sed can skip the substitution machinery on every other line.
Yep, now you can finally google from the command line!
Here's a readable version "for your pleasure"(c):
google() { # search the web using google from the commandline
    # syntax: google <search terms>
    query=$(echo "$*" | sed "s:%:%25:g;s:&:%26:g;s:+:%2b:g;s:;:%3b:g;s: :+:g")
    data=$(wget -qO - "https://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=$query")
    title=$(echo "$data" | tr '}' '\n' | sed "s/.*,\"titleNoFormatting//;s/\":\"//;s/\",.*//;s/\\u0026/'/g;s/\\\//g;s/#39\;//g;s/'amp;/\&/g" | head -1)
    url="$(echo "$data" | tr '}' '\n' | sed 's/.*"url":"//;s/".*//' | head -1)"
    echo "${title}: ${url} | http://www.google.com/search?q=${query}"
}
Enjoy :)
Use the -a flag to display all files, including hidden files. If you just want to display regular files, use -1 (yes, that is the number one). Got this by RTFM and adding some sed magic.
[bbbco@bbbco-dt ~]$ ls -a | sed "s#^#${PWD}/#"
/home/bbbco/.
/home/bbbco/..
/home/bbbco/2011-09-01-00-33-02.073-VirtualBox-2934.log
/home/bbbco/2011-09-10-09-49-57.004-VirtualBox-2716.log
/home/bbbco/.adobe
/home/bbbco/.bash_history
/home/bbbco/.bash_logout
/home/bbbco/.bash_profile
/home/bbbco/.bashrc
...
[bbbco@bbbco-dt ~]$ ls -1 | sed "s#^#${PWD}/#"
/home/bbbco/2011-09-01-00-33-02.073-VirtualBox-2934.log
/home/bbbco/2011-09-10-09-49-57.004-VirtualBox-2716.log
/home/bbbco/cookies.txt
/home/bbbco/Desktop
/home/bbbco/Documents
/home/bbbco/Downloads
...
Normally, if you just want to see directories you'd use brianmuckian's command 'ls -d */', but I ran into problems trying to use that command in my script because there are often multiple directories per line. If you need to script something with directories and want to guarantee that there is only one entry per line, this is the fastest way I know.
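If you'd rather not parse ls at all, printf gives the same one-entry-per-line guarantee (the trailing slash restricts the glob to directories):
printf '%s\n' */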
The f is for file and - for stdout; this way it's a little shorter. I like the copy-directory function. It does the job but looks like SH**, and it doesn't understand folders with whitespace and can only handle full paths, but otherwise it's fine:
function copy-directory () {
  FrDir="$(echo $1 | sed 's:/: :g' | awk '/ / {print $NF}')"
  SiZe="$(du -sb $1 | awk '{print $1}')"
  (cd $1; cd ..; tar c $FrDir/) | pv -s $SiZe | (cd $2; tar x)
}
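A whitespace-safe take on the same idea (a sketch; the name copydir is made up, and it still assumes GNU du for -b):
copydir() {
  local src="${1%/}" dest="$2" name size
  name="$(basename "$src")"
  size="$(du -sb "$src" | awk '{print $1}')"
  (cd "$(dirname "$src")" && tar c "$name") | pv -s "$size" | (cd "$dest" && tar x)
}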