commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
You can sign in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Wow, didn't really expect you to read this far down. The latest iteration of the site is in open beta. It's a gentle open beta; it's not in prime time just yet. It's being hosted over at UpGuard (link) and you are more than welcome to give it a shot. A couple of things:
This command line removes the password from all PDF files in the current folder. It uses qpdf.
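The original one-liner isn't reproduced above, but a loop to that effect might look like the following sketch (the password argument and the `decrypted-` output prefix are assumptions):

```shell
# Decrypt every PDF in the current directory with qpdf.
# Usage: decrypt_pdfs PASSWORD
decrypt_pdfs() {
  local password="$1"
  for f in *.pdf; do
    # qpdf writes a password-free copy alongside the original.
    qpdf --password="$password" --decrypt "$f" "decrypted-$f"
  done
}
```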
In this example we extract pages 14-17
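The entry's command isn't shown; assuming it uses pdftk (the tool is a guess on my part), extracting pages 14-17 could be sketched like this:

```shell
# Extract pages 14-17 from a PDF into pages-14-17.pdf (pdftk 'cat' syntax).
# Usage: extract_pages input.pdf
extract_pages() {
  pdftk "$1" cat 14-17 output pages-14-17.pdf
}
```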
This is an expansion on a previous entry, which I've wrapped in a function and placed in my profile. The "$@" is a special parameter, much like "$*", except that each argument is passed on intact, without word splitting or further expansion; so you can simply call the function like this:
This will output a merged PDF of all PDFs in the current directory. Alternatively, you can simply list them like so:
mergepdf 00.pdf 01.pdf 02.pdf ...
N.B. Passing a wildcard will merge all PDFs in the current directory in name order, e.g. 00.pdf 01.pdf aa.pdf ab.pdf
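The function's actual body isn't shown above; a plausible sketch using Ghostscript's pdfwrite device (the output filename merged.pdf is an assumption) would be:

```shell
# Merge all PDFs given as arguments into merged.pdf, in argument order.
# Usage: mergepdf *.pdf   or   mergepdf 00.pdf 01.pdf 02.pdf
mergepdf() {
  # "$@" passes each filename through intact, so paths with spaces survive.
  gs -dBATCH -dNOPAUSE -q -sDEVICE=pdfwrite -sOutputFile=merged.pdf "$@"
}
```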
This will extract all DCT format images from foo.pdf and save them in JPEG format (option -j) to bar-000.jpg, bar-001.jpg, bar-002.jpg, etc.
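The command described is pdfimages (from poppler-utils/xpdf); wrapped in a function for reuse:

```shell
# Extract DCT (JPEG) images from a PDF: -j saves DCT streams directly
# as .jpg files, named PREFIX-000.jpg, PREFIX-001.jpg, ...
# Usage: extract_jpegs foo.pdf bar
extract_jpegs() {
  pdfimages -j "$1" "$2"
}
```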
See man wget if you want linked files and not only those hosted on the website.
This example command fetches 'example.com' webpage and then fetches+saves all PDF files listed (linked to) on that webpage.
[*Note: of course there are no PDFs on example.com. This is just an example]
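The exact flags used by the entry aren't shown; a common wget incantation for this (the flag set is my assumption) is:

```shell
# Fetch a page and download every PDF linked from it.
# -r -l1: recurse one level; -A.pdf: accept only PDFs;
# -nd: no directory tree; --no-parent: don't ascend above the start URL.
# Usage: fetch_pdfs http://example.com/
fetch_pdfs() {
  wget -r -l1 -nd -A.pdf --no-parent "$1"
}
```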
#4345 also works under Windows.
Use ImageMagick's convert.
Joins two PDF documents coming from a simplex document-feed scanner. Needs pdftk >= 1.44 with the shuffle operation.
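A sketch of the shuffle (the input filenames and collated.pdf output name are assumptions; the key idea is that the even-page file comes off a simplex feeder in reverse order, which `Bend-1` undoes):

```shell
# Interleave odd pages (scanned front-to-back) with even pages
# (scanned back-to-front). Requires pdftk >= 1.44 for 'shuffle'.
# Usage: joinduplex odd.pdf even.pdf
joinduplex() {
  pdftk A="$1" B="$2" shuffle A Bend-1 output collated.pdf
}
```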
If you skip this part, you'll get a very low-resolution image.
This is an example of the usage of pdfnup (you can find it in the 'pdfjam' package). With this command you can save ink/toner and paper (and thus trees!) when you print a PDF.
These tools are very configurable: you can also make 2x2, 3x2, or 2x3 layouts, and more (the only limits are your imagination and the resolution of your printer :-)
You must have pdfjam, pdflatex, and the LaTeX pdfpages package installed on your box.
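A minimal pdfnup invocation might look like this (the 2x1 layout is just one example; pdfjam's pdfnup writes the result next to the input with a `-nup` suffix by default):

```shell
# Print-friendly 2-up layout: two PDF pages per sheet.
# Usage: nup2 input.pdf
nup2() {
  pdfnup --nup 2x1 "$1"
}
```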
Quick and dirty version. I made a version that checks whether a manpage exists first (but it's not a one-liner). You must have ps2pdf and, of course, Ghostscript installed on your box.
Enhancements appreciated :-)
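The quick-and-dirty version presumably pipes man's troff output through ps2pdf; a sketch along those lines:

```shell
# Render a man page as a PDF: man -t emits PostScript,
# which ps2pdf converts, reading from stdin ('-').
# Usage: man2pdf ls   ->  writes ls.pdf
man2pdf() {
  man -t "$1" | ps2pdf - "$1.pdf"
}
```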
Turns a PDF into HTML (without images) and prints it to standard output, where it is picked up and interpreted by w3m.
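With pdftohtml (from poppler-utils) this could be sketched as follows; the function wrapper is my addition:

```shell
# View a PDF as text in the terminal: -i ignores images,
# -stdout dumps the HTML to standard output for w3m to render.
# Usage: pdfview file.pdf
pdfview() {
  pdftohtml -i -stdout "$1" | w3m -T text/html
}
```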
Given some images (JPEG or other supported formats) as input, you obtain a single PDF file with one image per page.
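Assuming ImageMagick's convert is the tool in question (the entry's command isn't shown), the idea is:

```shell
# Bundle all JPEGs in the current directory into one PDF,
# one image per page, in filename order.
# Usage: img2pdf_all output.pdf
img2pdf_all() {
  convert *.jpg "$1"
}
```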
The PDF is first converted to a bitmap, so change "-density" to match your printer's resolution. Also be careful about the amount of RAM required.
In this example rgb(0,0,0) is replaced by rgb(255,255,255); change the colours to suit your needs.
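A sketch with ImageMagick's convert (the 300 dpi density is a placeholder; `-fill` must precede `-opaque`, which replaces the given colour with the fill colour):

```shell
# Rasterize a PDF at 300 dpi and replace black with white.
# Usage: recolor_pdf input.pdf output.pdf
recolor_pdf() {
  convert -density 300 "$1" \
    -fill 'rgb(255,255,255)' -opaque 'rgb(0,0,0)' "$2"
}
```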
This assumes there is only one result. Either narrow your search to a single result or add | head -n 1 before the closing bracket. You can also use locate instead of find, if you have locate installed and its database updated.
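The shape of the command is a viewer wrapped around a command substitution; a sketch (the xpdf viewer and $HOME search root are my assumptions):

```shell
# Open the first matching PDF found under $HOME.
# head -n 1 guards against find returning more than one result.
# Usage: open_first_pdf document.pdf
open_first_pdf() {
  xpdf "$(find "$HOME" -name "$1" | head -n 1)"
}
```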
You will need qpdf installed. The result is a copy of the PDF with no password required.