commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/
You can sign in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that get at least 3 and at least 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for:
file(1) can print details about certain devices in the /dev/ directory (block devices in this example). This helped me to see at a glance the location and revision of my bootloader, UUIDs, filesystem status, which partitions were primary / logical, etc., without running several commands.
file -s /dev/dm-*
file -s /dev/cciss/*
This command deletes the newline characters, so its output may be unusable :)
It works only if you replace '\n' with ONE character.
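The comment above presumably refers to a tr-style translation; as a sketch (tr is an assumption here, not necessarily the original command), this is the difference between deleting newlines and replacing each one with a single character:

```shell
# Deleting newlines joins all lines together with nothing in between.
printf 'line1\nline2\nline3\n' | tr -d '\n'

# Replacing each newline with exactly one character keeps lines separable;
# tr maps one character to one character, so '\n' -> ' ' works, but a
# multi-character replacement does not.
printf 'line1\nline2\nline3\n' | tr '\n' ' '
```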
This should be a bit more portable, since echo's -e/-n and date's -Ins options are not.
This is useful when watching a log file that does not contain timestamps itself.
If the file already has content when starting the command, the first lines will have the "wrong" timestamp when the command was started and not when the lines were originally written.
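The original command isn't shown here, but the idea can be sketched with only POSIX printf and date (the log path is a placeholder):

```shell
# Prepend the current time to each line of a log as it arrives.
# Avoids echo -e/-n and date -Ins for portability.
tail -f /var/log/example.log | while IFS= read -r line; do
  printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$line"
done
```

As noted above, any lines already present when the pipeline starts get the start-up time, not the time they were originally written.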
So your boss wants to know how much memory has been assigned to each virtual machine running on your server... here's how to nab that information from the command line while logged in to that server
urls.txt should have a fully qualified URL on each line.
to clear the log
To get just the HTTP status, change the curl command to:
curl --head $file | head -1 >> log.txt
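Putting the pieces together, one way the full loop over urls.txt might look (a sketch based on the comments above, not necessarily the original command):

```shell
# Append the HTTP status line of each URL in urls.txt to log.txt.
# --head (-I) fetches headers only; -s silences the progress meter.
while IFS= read -r file; do
  curl -s --head "$file" | head -1 >> log.txt
done < urls.txt
```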
1) -n-1 means the sort key is the last field
2) -l is important if each separate record is on its own line (usually the case for text files)
3) -j tells msort not to create a log file (msort.log) in the working directory
4) You may need to install the msort package.
5) msort does a lot more; check man msort
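If msort isn't available, a similar last-field sort can be sketched with awk and sort (an alternative to, not a reproduction of, the msort invocation; data.txt is a hypothetical input file with whitespace-separated fields):

```shell
# Prefix each line with its last field, sort numerically on that key,
# then strip the key again (cut's default field delimiter is tab).
awk '{ print $NF "\t" $0 }' data.txt | sort -n | cut -f 2-
```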
ls largedir |rd
lynx -dump largewebsite.com |rd
rd < largelogfile
Splits a PostScript file into multiple PostScript files, generating one output file per page of the input. The output files are numbered, for example 1_orig.ps, 2_orig.ps ...
The psselect command is part of the psutils package.
Convert all JPEGs in the current directory into JPEGs of roughly 1024x768 pixels and roughly 150 KB each.
Renames files in a directory to incremental numbers, following alphabetic order. The command does not maintain extensions.
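The same idea can be sketched with a plain shell loop (glob expansion is already alphabetical; extensions are dropped, as noted above):

```shell
# Rename every regular file in the current directory to 1, 2, 3, ...
# in glob (alphabetic) order.
i=1
for f in *; do
  [ -f "$f" ] || continue   # skip directories and other non-files
  mv -- "$f" "$i"
  i=$((i + 1))
done
```

Beware of collisions if some files already have purely numeric names.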
This heavy one-liner gets all the files in the "/music/dir/" directory, filters for MP3 files that are not 44.1 kHz, and then passes the names to sox in order to resample those files. The original files are left in place just in case.
This command will download $file via server. I've used this when FTP was broken at the office and I needed to download some software packages.
This takes quite a while on my system. You may want to test it out with /bin first, or background it and keep working.
If you want to get rid of the "No manual entry for [whatever]" and just have the [whatever], use the following sed command after this one finishes.
sed -n 's/^No manual entry for \(.*\)/\1/p' nomanlist.txt
A shortcut to generate documentation with phpdoc. Defaults to HTML; optionally produces PDF if a third argument is given. Stores documentation in the cwd under ./docs/. I forget the syntax for the output (-o) option, so this is easier.