commandlinefu.com is the place to record those command-line gems that you return to again and again.
It only encodes non-basic-ASCII characters, as those are the only ones not read consistently between UTF-8 and ISO-8859-1 (Latin-1).
It converts all UTF-8 double-byte symbols to the escaped form that every parser understands in URLs:
* C3 xx (Latin symbols such as the ASCII-extended ones)
* C2 xx (punctuation symbols such as the inverted exclamation mark)
I didn't encode spaces or the rest of basic punctuation, but space and similar characters are encoded the same way (\x20 for space, for example) in UTF-8, Latin-1, and Windows-1252, so they are read correctly everywhere.
Please feel free to correct this; the application for which I designed the function works as expected under these assumptions.
Note: I specify w=999 because I couldn't find a flag for an unlimited width. I consider it very improbable that a URL surpasses the de-facto limit of 255 characters (at 3 bytes max each) = 765 bytes.
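As a concrete illustration of the approach described above, here is a sketch (not the original command; the function name and the od-based hex dump are my assumptions, and -w999 requires GNU od) that escapes only the high bytes while leaving basic ASCII alone:

```shell
# Sketch (bash + GNU od): percent-encode only bytes >= 0x80,
# leaving basic ASCII (including spaces and punctuation) untouched.
urlencode_nonascii() {
  printf '%s' "$1" | od -An -tx1 -w999 -v |  # one long line of hex bytes; -w999 avoids wrapping
  tr ' ' '\n' | while read -r byte; do
    [ -z "$byte" ] && continue
    if [ "$((0x$byte))" -ge 128 ]; then
      printf '%%%s' "$byte"                  # high byte: emit %XX
    else
      printf "\\$(printf '%03o' "0x$byte")"  # ASCII byte: emit as-is (via octal escape)
    fi
  done
}

urlencode_nonascii 'café'   # -> caf%c3%a9
```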
Converts reserved characters in a URI to their percent-encoded counterparts.
Alternate Python version (Python 2):
echo "$url" | python -c 'import sys,urllib;print urllib.quote(sys.stdin.read().strip())'
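That one-liner targets Python 2. On Python 3, `urllib.quote` moved to `urllib.parse.quote` and `print` became a function, so a modern equivalent would be (the example URL is my own, not from the original):

```shell
url='http://example.com/some path/'
echo "$url" | python3 -c 'import sys,urllib.parse;print(urllib.parse.quote(sys.stdin.read().strip()))'
# -> http%3A//example.com/some%20path/
```

Note that `quote()` leaves `/` unescaped by default (its `safe` parameter is `'/'`), which is why the colon is escaped here but the path separators are not.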
This one uses hex conversion to do the encoding and is in shell/sed only (you should probably still use the Python or Perl version).
Returns the URL-encoded string from the input ($1).
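A minimal sketch of such a function in pure bash (the name `urlencode` and the character-by-character loop are my own, not the source's sed-based command), escaping everything except the RFC 3986 unreserved set:

```shell
urlencode() {
  local LC_ALL=C            # iterate over bytes, not multibyte characters
  local s="$1" c i
  for (( i = 0; i < ${#s}; i++ )); do
    c=${s:i:1}
    case $c in
      [a-zA-Z0-9._~-]) printf '%s' "$c" ;;   # RFC 3986 unreserved: pass through
      *) printf '%%%02X' "'$c" ;;            # "'c" yields the byte's numeric value
    esac
  done
}

urlencode 'a b/c'   # -> a%20b%2Fc
```

With LC_ALL=C the loop walks bytes rather than code points, so multibyte UTF-8 characters come out as one %XX per byte, matching the double-byte encoding discussed above.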