commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/
You can sign in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for:
Using /dev/urandom to get random data, deleting non-letters with tr, and printing the first $1 bytes.
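A minimal sketch of that approach (assuming GNU tr and head; the value 16 here is only a fallback standing in for the script's $1 argument):

    tr -dc 'A-Za-z' < /dev/urandom | head -c "${1:-16}"; echo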
ctrl+v to see the result.
Using a semicolon-delimited text-file map, apply multiple replacements to a single file (a sketch follows the variants below).
Same as the previous command, but compatible with BSD/IPSO.
Same as previous but without fugly sed =x
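A possible sketch of the map-driven approach described above, assuming a file named replacements.txt (an illustrative name) with one old;new pair per line, applied to target.txt; patterns containing / or sed metacharacters would need extra escaping:

    # replacements.txt holds lines like: old;new
    # GNU sed shown; BSD sed needs -i '' instead of -i
    while IFS=';' read -r old new; do
        sed -i "s/$old/$new/g" target.txt
    done < replacements.txt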
Capitalize first letter of each word in a string.
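For illustration, one common way to do this with GNU sed (the \u escape uppercases the next character; the sample string is arbitrary):

    echo 'hello world from the cli' | sed 's/\b\(.\)/\u\1/g'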
This works more reliably for me ("cut -c 8-" had one more space, so it did not work).
Using large wordlists is cumbersome. Using password-cracking programs with rules, such as Hashcat or John the Ripper, is much more effective. To do this, we often need to "clean" a wordlist, removing all numbers, special characters, spaces, whitespace and other garbage. This command will convert an entire wordlist to all lowercase with no garbage.
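A sketch of that cleanup, assuming the list lives in wordlist.txt (an illustrative name): lowercase everything, strip anything that is not a letter, and drop the blank lines left behind:

    tr 'A-Z' 'a-z' < wordlist.txt | tr -cd 'a-z\n' | grep -v '^$' > wordlist-clean.txt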
Expand a URL, i.e. do a HEAD request and get the final URL, then copy this value to the clipboard.
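A sketch of that with curl, which follows redirects and prints the effective URL; pbcopy is the macOS clipboard tool and is an assumption here (Linux users might pipe to xclip instead):

    # $1 is the short URL passed to a wrapper script
    curl -sIL -o /dev/null -w '%{url_effective}' "$1" | pbcopy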
Doesn't use shuf; it's much faster with "shuf -n4" instead of sort -R.
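For reference, the faster version mentioned here (assuming GNU coreutils' shuf is available):

    shuf -n4 /usr/share/dict/words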
So I use OSX and don't have the shuf command. This is what I could come up with.
This command assumes /usr/share/dict/words does not surpass 137,817,948 lines and line selection is NOT uniformly random.
The first grep rejects capitalised words since the dict has proper nouns in it that you mightn't want to use. The second grep rejects words ending in apostrophe-s, and the third forces the words to be at least 15 characters long.
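Putting those three filters together as a sketch (the trailing shuf -n4, to pick four words, is an assumption about how the entry selects them):

    grep -v '^[A-Z]' /usr/share/dict/words | grep -v "'s$" | grep -E '^.{15,}$' | shuf -n4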
Pi also says hello world!
This is a minor variation on cowboy's submission - his script worked great on Ubuntu, but the sed gave issues on OS X (which uses BSD sed). Minor tweaks (sed -E instead of sed -r, and \'$'\n to handle the newline) made it work.
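To illustrate the portability tweak: BSD/macOS sed takes -E rather than GNU's -r, and a literal newline can be spliced into the replacement with bash's $'\n' quoting, for example (the comma-splitting pattern is only a demonstration):

    # turn a comma-separated line into one item per line, BSD-sed friendly
    sed -E 's/, */\'$'\n''/g' file.txt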
Chrome only lets you export bookmarks in HTML format, with a lot of table junk; this command will export just the titles of the links and the links themselves, without all that extra junk.
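One way to sketch that extraction from the exported file (bookmarks.html is an assumed filename, and the pattern assumes Chrome's one-bookmark-per-line layout):

    # print "Title URL" for each bookmark anchor
    sed -nE 's/.*<A [^>]*HREF="([^"]*)"[^>]*>([^<]*)<\/A>.*/\2 \1/p' bookmarks.html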
Generates a password using symbols, alpha, and digits. No repeating chars.
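A sketch of one way to get the no-repeats property: list the candidate characters, shuffle them, and keep the first N, so no character can appear twice (the length of 12 and the symbol set are arbitrary choices here; requires bash brace expansion and GNU shuf):

    printf '%s\n' {a..z} {A..Z} {0..9} '!' '@' '#' '%' '+' | shuf | head -n 12 | tr -d '\n'; echo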