commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
You can sign-in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for:
Wow, didn't really expect you to read this far down. The latest iteration of the site is in open beta. It's a gentle open beta, not in prime time just yet. It's being hosted over at UpGuard (link) and you are more than welcome to give it a shot. A couple of things:
Get the line containing "inet addr:" together with the line before it, trim down to just that first line, and then take the first word on it, which should be the interface name.
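The command itself isn't shown above; one plausible pipeline matching that description, assuming the classic Linux `ifconfig` output format where "inet addr:" appears on the line after the interface name:

```shell
# Hypothetical sample of classic `ifconfig` output (assumed format)
sample='eth0      Link encap:Ethernet  HWaddr 00:11:22:33:44:55
          inet addr:192.168.1.10  Bcast:192.168.1.255  Mask:255.255.255.0'

# grep -B1 prints the matching line plus the one before it;
# head -n1 keeps only the interface line; awk grabs its first word.
echo "$sample" | grep -B1 'inet addr:' | head -n1 | awk '{print $1}'
```

In practice you would pipe `ifconfig` itself into the `grep` instead of the sample text.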
Use this command to gzip a file, write the compressed data to stdout, and redirect stdout into another file.
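A minimal sketch of that pattern, assuming the usual `gzip -c` flag (the demo file path is arbitrary):

```shell
printf 'hello\n' > /tmp/demo.txt
# -c writes the compressed stream to stdout; the redirect captures it,
# leaving the original file untouched.
gzip -c /tmp/demo.txt > /tmp/demo.txt.gz
gunzip -c /tmp/demo.txt.gz   # round-trip to confirm the contents
```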
Checks files in the current directory and subdirectories, finds the files containing "sampleString", and removes the matching lines from those files.
* Beware: the command updates the original files in place [no backup].
The command can be extended by combining it with 'find',
e.g. to run on only certain file types: *.xml, *.txt... (find -name "*.xml" | grep....)
If anybody knows a better solution, please drop a comment. Thx.
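One way to sketch the find-based extension, assuming GNU `sed -i` for the in-place delete (demo paths are arbitrary; note the in-place edit makes no backup, as warned above):

```shell
mkdir -p /tmp/demo
printf 'keep\nsampleString here\nkeep too\n' > /tmp/demo/a.txt
# sed '/pattern/d' deletes matching lines; -i edits each file in place
find /tmp/demo -name '*.txt' -exec sed -i '/sampleString/d' {} +
cat /tmp/demo/a.txt
```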
Allows for quick mass renaming, assuming the user has some familiarity with regular expressions. Basically, it replaces each original_file_name in the output of ls with
"mv -v original_file_name new_file_name"
and passes the output to sh.
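A runnable sketch of that idea, assuming a simple .txt-to-.bak rename (file names are arbitrary; parsing `ls` like this breaks on names with spaces, a tradeoff the one-liner accepts):

```shell
mkdir -p /tmp/ren && cd /tmp/ren && touch one.txt two.txt
# sed rewrites each listed name into an "mv -v old new" command,
# and sh executes the generated commands.
ls *.txt | sed 's/\(.*\)\.txt$/mv -v \1.txt \1.bak/' | sh
ls
```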
Three lines in a shell script to copy the database daily.
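The script itself isn't shown; a minimal three-line sketch under the assumption of a file-based database (all paths are hypothetical placeholders):

```shell
# Assumed file-based DB; a dump tool like mysqldump would replace cp for a server DB
db=/tmp/mydb.sqlite; dest=/tmp/backups
mkdir -p "$dest" && printf 'data\n' > "$db"   # setup for the demo
cp "$db" "$dest/mydb-$(date +%F).sqlite"      # date-stamped daily copy
```

For the "daily" part, a line like this would go in crontab so the copy runs unattended.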
Find all files in the current directory, prepend a URL to each name, and append the result to a file.
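A sketch of that description using GNU find's `-printf` (the base URL and file names are illustrative assumptions, not from the original command):

```shell
mkdir -p /tmp/urls && cd /tmp/urls && touch a.jpg b.jpg
# %f is the basename of each found file; one URL per line, appended to urls.txt
find . -maxdepth 1 -type f -name '*.jpg' -printf 'http://example.com/%f\n' >> urls.txt
cat urls.txt
```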
No junk, no pipe, one command, no subcommand - KISS
The following command creates a pool with a single raidz root vdev that consists of six disks.
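In zpool syntax that looks like the following; the disk names (c0t0d0 …) are illustrative Solaris-style identifiers, and the command is echoed rather than executed since it needs real disks and root on a ZFS system:

```shell
# Single raidz root vdev built from six disks (names are placeholders)
echo zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
```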
Tries to reattach to screen; if no session is available, creates one.
Created an alias "irc" for it, since sometimes I forget whether there is already a screen session running with irssi; this way I avoid creating a new one by mistake.
Substitute nano with your favorite editor, of course.
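The original alias isn't shown; a common idiom with that behavior (session name "irc" is an assumption) first tries to reattach and falls back to creating a named session running irssi:

```shell
# screen -rd irc: reattach to a session matching "irc", detaching it elsewhere;
# if that fails (no such session), screen -S irc irssi creates one running irssi.
alias irc='screen -rd irc || screen -S irc irssi'
alias irc   # print the definition to confirm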
The following command creates a pool with two mirrors, where each mirror contains two disks.
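As zpool syntax that is the following shape; the disk names are illustrative placeholders, and the command is echoed rather than executed since it needs real disks:

```shell
# Two mirror vdevs, two disks each (names are placeholders)
echo zpool create tank mirror c1d0 c2d0 mirror c3d0 c4d0
```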
Instead of tedious manual mv commands and tabbing, this routine creates a file listing every filename in the PWD twice. Edit the second instance on each line to the new name, then save the file; the routine does the rest. Feel free to replace nano with your holy-war editor of choice.
You will get a lot of "mv: 'x' and 'x' are the same file" warnings for unedited lines; these could be cleaned up, but the routine works.
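A runnable sketch of the routine, with the interactive nano step replaced by a scripted sed edit so it runs unattended (paths and the old→new rename rule are arbitrary assumptions):

```shell
mkdir -p /tmp/bulk && cd /tmp/bulk && touch old1 old2
ls > /tmp/src; ls > /tmp/dst          # the name list, twice
sed -i 's/^old/new/' /tmp/dst         # stands in for the manual edit in nano
# Pair old and new names line by line and mv each pair
# (read splits on whitespace, so this assumes space-free names).
paste /tmp/src /tmp/dst | while read -r a b; do mv -v -- "$a" "$b"; done
ls
```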
Seeing that _sort_ is already being used, why not just _use_ it. ;)