All commands (14,187)

What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


Check These Out

Take screenshots with imagemagick
Now try this: once you see the small cross-arrow cursor, double-click on any window you like to take a screenshot of it "selectively".
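Assuming the command in question is ImageMagick's import tool, a minimal sketch might look like this; the commands are only printed here rather than run, since capturing requires a live X display:

```shell
# Sketch, assuming ImageMagick's import tool (needs an X display, so the
# commands are printed rather than executed).
win='import window.png'                 # cursor becomes a cross-arrow; click a window to capture it
full='import -window root screen.png'   # capture the entire screen instead
printf '%s\n%s\n' "$win" "$full"
```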

Send email with one or more binary attachments
This command uses mutt to send the mail. You must pipe in a body; otherwise mutt will prompt you for some stuff. If you don't have mutt, it should be dead easy to install.
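A sketch of such an invocation, using mutt's -a attachment flag (the recipient and filenames are placeholders, not from the original); the command is printed rather than run, since actually sending requires a working mail setup:

```shell
# Sketch: the body is piped in so mutt does not prompt interactively;
# each -a adds an attachment, and -- separates attachments from
# recipients. Recipient and filenames are placeholders.
cmd='echo "Report attached." | mutt -s "Monthly report" -a report.pdf -a data.zip -- user@example.com'
printf '%s\n' "$cmd"
```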

Log your internet download speed
This will log your internet download speed. You can then plot the resulting log by running $ gnuplot -persist with a suitable plot command.
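One way to sketch such a logger (the timing method and log format are assumptions, not the original command) is to factor the speed arithmetic into a small function so it can be shown without network access:

```shell
# Sketch of a download-speed logger. The measurement would come from
# timing a real download; the arithmetic is isolated here.
log_speed() {   # usage: log_speed BYTES SECONDS LOGFILE
    bytes=$1; secs=$2; log=$3
    [ "$secs" -gt 0 ] || secs=1              # avoid division by zero
    kbps=$(( bytes / 1024 / secs ))          # integer KB/s
    echo "$(date '+%Y-%m-%d %H:%M:%S') ${kbps} KB/s" >> "$log"
}

# In real use you would time a download first, e.g. (not run here):
#   start=$(date +%s)
#   wget -q -O /tmp/dl "$url"
#   log_speed "$(stat -c%s /tmp/dl)" "$(( $(date +%s) - start ))" speed.log
log_speed 5242880 4 /tmp/speed.log           # 5 MiB in 4 s -> 1280 KB/s
```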

ls -qahlSr # list all files in size order - largest last
I find it useful, when deleting unwanted files to free up space, to list them in size order so I can delete the largest first. Note that the "q" flag shows files with non-printing characters in their names. In this sample output (above), I found two copies of the same iso file, both of which are immediate "delete candidates" for me.
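To see the ordering in action, here is a quick demonstration in a scratch directory (the file names are arbitrary):

```shell
# Create two files of different sizes and list them smallest-first,
# so the largest lands at the bottom of the output.
dir=$(mktemp -d)
head -c 10   /dev/zero > "$dir/small"
head -c 5000 /dev/zero > "$dir/large"
ls -qahlSr "$dir"
```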

View Processes like a fu, fu
I don't truly enjoy many commands more than this one, which I alias to ps1. It's cool to be able to see the hierarchy, and it makes it clearer what needs to be killed and what's really going on.
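The exact flags aren't shown above; a plausible reading is ps in BSD syntax with the f (forest) option, aliased as described (the flag combination is an assumption):

```shell
# Sketch: ASCII-art process hierarchy via ps's forest output
# (ps1 is the alias name from the description above).
alias ps1='ps auxwwf'
ps auxwwf | head -n 5
```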

get all pdf and zips from a website using wget
If the site uses https, use: $ wget --reject html,htm --accept pdf,zip -rl1 --no-check-certificate https-url

Parse compressed apache error log file and show top errors
Credit for the non-gzipped version goes to: https://gist.github.com/marcanuy/a08d5f2d9c19ba621399
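A sketch of a gzip-aware version of such a pipeline (the sample log lines and the field position fed to cut are assumptions about a typical Apache error-log layout):

```shell
# Build a small gzipped sample log, then count the most frequent error
# messages: strip the bracketed prefixes, sort, count, rank.
printf '%s\n' \
  '[Mon Jan 01 00:00:00 2024] [error] [client 1.2.3.4] File does not exist: /favicon.ico' \
  '[Mon Jan 01 00:00:01 2024] [error] [client 1.2.3.4] File does not exist: /favicon.ico' \
  '[Mon Jan 01 00:00:02 2024] [error] [client 1.2.3.4] Permission denied: /secret' \
  > /tmp/error.log
gzip -f /tmp/error.log
zcat /tmp/error.log.gz | cut -d']' -f4- | sort | uniq -c | sort -rn | head
```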

Display two calendar months side by side
Displays last month, current month, and next month side by side.
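The behaviour described matches the -3 flag of cal; that is an assumption about the hidden command, and it is guarded here since cal is not installed everywhere:

```shell
# Show previous, current, and next month side by side.
if command -v cal >/dev/null 2>&1; then
    months=$(cal -3)
else
    months='(cal not installed)'
fi
printf '%s\n' "$months"
```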

Get an authorization code from Google
This is a basis for other Google API commands.
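As a rough sketch, the first step of such a flow is building the authorization URL to open in a browser. The client_id, scope, and out-of-band redirect below are placeholders and assumptions (Google has since deprecated the oob flow):

```shell
# Construct an OAuth 2.0 authorization URL for an installed app.
# client_id and scope are placeholders, not working credentials.
client_id='YOUR_CLIENT_ID'
scope='https://www.googleapis.com/auth/userinfo.email'
auth_url="https://accounts.google.com/o/oauth2/auth?client_id=${client_id}&redirect_uri=urn:ietf:wg:oauth:2.0:oob&scope=${scope}&response_type=code"
printf '%s\n' "$auth_url"
```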

Remove duplicate rows of an un-sorted file based on a single column
The command (above) will remove any duplicate rows based on the FIRST column of data in an un-sorted file. In awk, '$1' refers to the first field (column) of each line. You can change both instances of '$1' in the command to remove duplicates based on a different column, for instance the third:

$ awk '{ if ($3 in stored_lines) x=1; else print; stored_lines[$3]=1 }' infile.txt > outfile.txt

Or you can change it to '$0' to base the removal on the whole row:

$ awk '{ if ($0 in stored_lines) x=1; else print; stored_lines[$0]=1 }' infile.txt > outfile.txt

** Note: I wouldn't use this on a MASSIVE file, unless you're RAM-rich ;) **
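Here is the first-column variant run on a tiny sample file, so the effect is visible (the file names and contents are made up for the demonstration):

```shell
# Keep only the first occurrence of each first-column value.
cat > /tmp/infile.txt <<'EOF'
apple 1
banana 2
apple 3
cherry 4
EOF
awk '{ if ($1 in stored_lines) x=1; else print; stored_lines[$1]=1 }' /tmp/infile.txt > /tmp/outfile.txt
cat /tmp/outfile.txt   # "apple 3" is dropped as a first-column duplicate
```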


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that receive a minimum of 3 and 10 votes respectively - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
