What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.

If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions, …).

May 19, 2015 - A Look At The New Commandlinefu
I've put together a short writeup on what kind of newness you can expect from the next iteration of clfu. Check it out here.
March 2, 2015 - New Management
I'm Jon, I'll be maintaining and improving clfu. Thanks to David for building such a great resource!

Terminal - Commands tagged html - 24 results
xmlpager() { xmlindent "$@" | awk '{gsub(">",">'`tput setf 4`'"); gsub("<","'`tput sgr0`'<"); print;} END {print "'`tput sgr0`'"}' | less -r; }
2015-07-12 09:22:10
User: hackerb9
Functions: awk less

Don't want to open up an editor just to view a bunch of XML files in an easy-to-read format? Now you can do it from the comfort of your own command line! :-) This creates a new function, xmlpager, which shows an XML file in its entirety, but with the actual content (non-tag text) highlighted. It does this by setting the foreground to color #4 (red) after every tag and resetting it before the next tag. (Hint: try `tput bold` as an alternative.) I use 'xmlindent' to neatly reflow and indent the text, but, of course, that's optional. If you don't have xmlindent, just replace it with 'cat'. Additionally, this example shows piping into the optional 'less' pager; note the -r option, which allows raw escape codes to be passed to the terminal.
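
A quick usage sketch (the file name and URL are only placeholders; piping in relies on xmlindent also accepting input on stdin):

xmlpager config.xml

curl -s http://example.com/feed.xml | xmlpager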

curl ${URL} 2>/dev/null|grep "<${BLOCK}>"|sed -e "s/.*\<${BLOCK}\>\(.*\)\<\/${BLOCK}\>.*/\1/g"
2013-08-31 14:53:54
User: c3w
Functions: grep sed

Set BLOCK to "title" or any other HTML / RSS / XML tag and curl the URL to get everything in between the opening and closing tag (e.g. the text inside <title>…</title>).
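
A usage sketch: set the two variables (the values here are only examples), then run the one-liner above unchanged.

BLOCK="title"; URL="http://www.commandlinefu.com"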

mogrify -format gif -auto-orient -thumbnail 250x90 '*.JPG'&&(echo "<ul>";for i in *.gif;do basename=$(echo $i|rev|cut -d. -f2-|rev);echo "<li style='display:inline-block'><a href='$basename.JPG'><img src='$basename.gif'></a>";done;echo "</ul>")>list.html
2013-08-25 20:45:49
User: ysangkok
Functions: cut echo

The input images are assumed to have the "JPG" extension. Mogrify will overwrite any gif images with the same name! Will not work with names containing spaces.
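
A hedged variant of the loop that quotes "$i" and uses "${i%.*}" to strip the extension, so file names containing spaces survive (same output format assumed; URL-encoding of the space is still left to the browser):

mogrify -format gif -auto-orient -thumbnail 250x90 '*.JPG' && (echo "<ul>"; for i in *.gif; do b="${i%.*}"; echo "<li style='display:inline-block'><a href='$b.JPG'><img src='$b.gif'></a>"; done; echo "</ul>") > list.html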

alias html2ascii='lynx -force_html -stdin -dump -nolist'
sed 's!<[Aa] *href*=*"\([^"]*\)"*>\([^<>]*\)</[Aa]>!\1,\2!g' links.html
sed "s/\([a-zA-Z]*\:\/\/[^,]*\),\(.*\)/\<a href=\"\1\"\>\2\<\/a\>/"
2012-01-06 13:55:05
User: chrismccoy
Functions: sed
Tags: sed html link

An extension of command 9986 by c3w; allows for link text.

http://google.com,search engine

will produce a hyperlink whose link text is the text after the URL, instead of using the URL itself as the link text.
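
Given the input line above, the output would be roughly:

<a href="http://google.com">search engine</a>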

find . -iname "*.jpg" -printf '<img src="%f" title="%f">\n' > gallery.html
mech-dump --links --absolute http://www.commandlinefu.com
2011-11-19 03:40:52
User: sputnick
Tags: perl html parsing

You need to install WWW::Mechanize Perl module with

# cpan -i WWW::Mechanize

or by searching for 'mechanize' and grepping for 'perl' in your package manager's search output.

With this command, you can get forms, images, and headers too.
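
mech-dump has matching switches for those other items; a couple of sketches (flags as documented for WWW::Mechanize's mech-dump, URL as above):

mech-dump --forms http://www.commandlinefu.com

mech-dump --images --absolute http://www.commandlinefu.com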

curl -s http://example.com | grep -o -P "<a.*href.*>" | grep -o "http.*.pdf" | xargs -d"\n" -n1 wget -c
2011-06-09 14:42:46
User: b_t
Functions: grep wget xargs

This example command fetches 'example.com' webpage and then fetches+saves all PDF files listed (linked to) on that webpage.

[Note: of course there are no PDFs on example.com. This is just an example.]

xml2asc < inputfile > outputfile
2011-02-23 12:22:18
User: forcefsck

For the reverse, there's asc2xml:

asc2xml < entitiesfile > utf8file

They come as part of the html-xml-utils Debian package.

PS. I tried to submit sample data, but the site auto-converted the non-ASCII characters to HTML entities, so a bit of imagination is needed.
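
A rough sketch of the round trip (the exact entity form produced is an assumption and may vary by version):

echo 'café' | xml2asc        # something like: caf&#233;

echo 'caf&#233;' | asc2xml   # back to: café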

sqlite3 -line database.db
2010-10-09 16:10:19
User: pykler
Tags: CSV html sql sqlite

Similar output to using MySQL with \G at the end of a query: one column value per line. Other output modes include:


-column Query results will be displayed in a table-like form, using whitespace characters to separate the columns and align the output.

-html Query results will be output as simple HTML tables.

-line Query results will be displayed with one value per line, rows separated by a blank line. Designed to be easily parsed by scripts or other programs

-list Query results will be displayed with the separator (|, by default) character between each field value. The default.

From inside the sqlite3 command line this can also be changed using the .mode command:

.mode MODE ?TABLE? Set output mode where MODE is one of:

csv Comma-separated values

column Left-aligned columns. (See .width)

html HTML code

insert SQL insert statements for TABLE

line One value per line

list Values delimited by .separator string

tabs Tab-separated values

tcl TCL list elements
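
For instance, to dump a query straight to an HTML fragment non-interactively (the database file and table name are only placeholders):

sqlite3 -html database.db "SELECT * FROM mytable" > mytable.html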

HTMLTEXT=$( curl -s http://www.page.de/test.html > /tmp/new.html ; diff /tmp/new.html /tmp/old.html ); if [ "x$HTMLTEXT" != x ] ; then echo $HTMLTEXT | mail -s "Page has changed." mail@mail.de ; fi ; mv /tmp/new.html /tmp/old.html
2010-07-04 21:45:37
User: Emzy
Functions: diff echo mail mv

Checks if a web page has changed. Put it into cron to check periodically.

Change http://www.page.de/test.html and mail@mail.de to suit your needs.
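
For example, saved as a script (the path below is just an assumption), it could run hourly from a crontab entry:

0 * * * * /usr/local/bin/check-page.sh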

find . | perl -wne 'chomp; print qq|<img src="$_" title="$_" /><br />| if /\.(jpg|gif|png)$/;'> gallery.html
2010-07-04 01:43:50
User: spotrick
Functions: find perl

This includes a title attribute so you can see the file name by hovering over an image. It will also hoover up any of the common image formats - jpg, gif and png.

find . -iname '*.jpg' | sed 's/.*/<img src="&">/' > gallery.html
2010-07-04 00:50:32
User: kniht
Functions: find sed

My take on the original: even though I like the other's use of -exec echo, sed just feels more natural. This should also be slightly easier to improve.

I expanded this into a script as an exercise, which took about 35 minutes (had to look up some docs): http://bitbucket.org/kniht/nonsense/src/7c1b46488dfc/commandlinefu/quick_image_gallery.py

find . -iname '*.jpg' -exec echo '<img src="{}">' \; > gallery.html
2010-07-03 16:36:15
Functions: echo find

Setting: You have a lot of jpg files in a directory.

Maybe your public_html folder, which is readable on the net because of Apache's mod_userdir. All those files from the current folder will be dropped into a file called gallery.html as image tags that can be viewed within a web browser, locally or over the Internet.


find . -iname "*.jpg" -exec echo "<img src=\"{}\">" >> gallery.html \;
grep -ioE "(url\(|src=)['\"]?[^)'\"]*" a.html | grep -ioE "[^\"'(]*.(jpg|png|gif)" | while read l ; do sed -i "s>$l>data:image/${l/[^.]*./};base64,`openssl enc -base64 -in $l| tr -d '\n'`>" a.html ; done;
2010-05-05 14:07:51
User: zhangweiwu
Functions: grep read sed
Tags: html

in "a.html", find all images referred as relative URI in an HTML file by "src" attribute of "img" element, replace them with "data:" URI. This useful to create single HTML file holding all images in it, as a replacement of the IE-created .mht file format. The generated HTML works fine on every other browser except IE, as well as many HTML editors like kompozer, while the .mht format only works for IE, but not for every other browser. Compare to the KDE's own single-file-web-page format "war" format, which only opens correctly on KDE, the HTML file with "data:" URI is more universally supported.

The above command has many bugs. My commandline-fu is too limited to fix them:

1. It assumes all URLs are relative URIs, so it works in this case:

<img src="images/logo.png"/>

but does not work in this case:

<img src="http://www.my_web_site.com/images/logo.png" />

This may not be a bug, as full URIs perhaps should be ignored in many use cases.

2. It only works for images whose file name suffix is one of .jpg, .gif, .png, although images with a .jpeg suffix, or with no extension at all, are legal in HTML.

3. The image file name is not allowed to contain "(", even though it is frequently used, as in "(copy of) my car.jpg". Besides, neither single nor double quotes are allowed.

4. There is in fact a big flaw in this: file names are actually used as regular expressions to be replaced with base64-encoded content. This causes the script to fail in many other cases, for example 'D:\images\logo.png', where the backslash has a different meaning in a regular expression. I don't know how to fix this; I don't know any command that can do full-text (no regular expression) replacement the way basic editors like gedit do.

5. The original a.html is not preserved, so a user should make a copy first in case things go wrong.
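
For the literal (non-regex) replacement wished for in point 4, one hedged possibility is to let Perl neutralise the metacharacters with \Q...\E instead of using sed; a sketch only, where "$l" is the file name from the loop above and "$datauri" is a hypothetical variable holding the generated "data:" URI:

FIND="$l" REPL="$datauri" perl -i -pe 's/\Q$ENV{FIND}\E/$ENV{REPL}/g' a.html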

url="[Youtube URL]"; echo $(curl ${url%&*} 2>&1 | grep -iA2 '<title>' | grep '-') | sed 's/^- //'
2010-04-29 02:03:36
User: rkulla
Functions: echo grep sed

There's another version on here that uses GET, but some people don't have lwp-request, so here's an alternative. It's also a little shorter and should work with most YouTube URLs since it truncates at the first &.

tr -d "\n\r" | grep -ioEm1 "<title[^>]*>[^<]*</title" | cut -f2 -d\> | cut -f1 -d\<
awk 'BEGIN{IGNORECASE=1;FS="<title>|</title>";RS=EOF} {print $2}' file.html | sed '/^$/d'
2010-04-20 13:27:47
User: tamouse
Functions: awk sed
Tags: Linux awk html

The previous version leaves lots of blank lines.

awk 'BEGIN{IGNORECASE=1;FS="<title>|</title>";RS=EOF} {print $2}' file.html
2010-04-20 10:54:03
User: sata
Functions: awk
Tags: Linux awk html

Case insensitive, and works even if the "<title>...</title>" spans multiple lines.

Simple! :-)

sed -n 's/.*<title>\(.*\)<\/title>.*/\1/ip;T;q' file.html
2010-04-19 07:41:10
User: octopus
Functions: sed

This command can be used to extract the title defined in an HTML page.
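
It works on a pipe just as well as on a saved file, e.g. (the URL is only an example):

curl -s http://www.commandlinefu.com | sed -n 's/.*<title>\(.*\)<\/title>.*/\1/ip;T;q'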

find . -name '*.html' -exec 'sed' 's/.*class="\([^"]*\?\)".*/\1/ip;d' '{}' ';' |sort -su
2009-09-06 18:43:18
User: kamathln
Functions: find sort

Lists all classes used in all *.html files in the current directory. Useful for checking if you have left out any style definitions, or accidentally given a different name than you intended. (I have an ugly habit of accidentally substituting camelCase instead of using under_scores: I would name something counterBox instead of counter_box.)

WARNING: assumes you put class names between double quotes, and that you apply only one class per element.

wget -q -O - "$URL" | htmldoc --webpage -f "$URL".pdf - ; xpdf "$URL".pdf &
mailx bar@foo.com -s "HTML Hello" -a "Content-Type: text/html" < body.htm
2009-05-19 04:49:26
User: ethanmiller
Functions: mailx
Tags: mail smtp html

Note: this works because an SMTP service is running on the host.