commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
You can sign-in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for:
Use the command line to log into Dropbox. You have to replace [email protected] with your Dropbox email (note that the "@" must be URL-encoded as %40). Also replace my_passwd with your Dropbox password. (Note: special characters in your password (such as #) must also be URL-encoded.) You will get a cookie (stored in the file "cookie") that you can use for subsequent curl operations on Dropbox, for example curl -b cookie https://www.dropbox.com/home. Debug note: if you want to see what data curl posts, use curl's --trace-ascii flag.
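As a side note, one way to URL-encode a password before pasting it into the curl data is a small helper like this (not part of the original tip; it assumes python3 is installed):

```shell
# Hypothetical helper (assumes python3): URL-encode a string for use in curl -d
urlencode() {
  python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$1"
}
urlencode 'p#ss@word'   # prints p%23ss%40word
```

curl also has a built-in --data-urlencode option that encodes a field's value for you.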
Good because it doesn't use sed.
This is a slightly modified version of http://www.commandlinefu.com/commands/view/4283/recursive-search-and-replace-old-with-new-string-inside-files (which did not work due to incorrect syntax), with the added option of running sed only on files named filename.ext.
Updated for the new version of the MW webpage (it seems MW does not use cougar anymore, so the other commands no longer work), and using Xidel to parse the page with an HTML parser instead of regexes.
Example usage:
pronounce onomatopoetic
I'm not sure how well Xidel works with binary streams (although it seems to work great in tests), so using wget to download the actual wav file might be safer, i.e.:
pronounce(){ wget -qO- $(xidel "http://www.m-w.com/dictionary/$*" -f "replace(css('.au')[1]/@onclick,\".*'([^']+)', *'([^']+)'.*\", '/audio.php?file=\$1&word=\$2')" -e 'css("embed")[1]/@src') | aplay -q;}
Xidel is not a standard cli tool and has to be downloaded from xidel.sourceforge.net
The original command doesn't work for me - it does something weird with sed (-r) and xargs (-i), with underscores all over...
This one works in OSX Lion. I haven't tested it anywhere else, but if you have bash, gpg and perl, it should work.
Assuming a conventionally formatted group file, this command will eject oldspiderman from the leagueofsuperfriends group and add newspiderman:
oldspiderman:x:551:
aquaman:x:552:
superman:x:553:
newspiderman:x:554:
leagueofsuperfriends:x:1000:superman,oldspiderman,superman,aquaman
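The command itself is not shown in this snippet; one hedged way to perform the described swap with sed (GNU sed word boundaries, \b, are used so names merely containing "oldspiderman" are left alone) is:

```shell
# Sketch of the described member swap (the tip's actual command is not shown).
# The address limits the substitution to the leagueofsuperfriends line.
line='leagueofsuperfriends:x:1000:superman,oldspiderman,superman,aquaman'
echo "$line" | sed '/^leagueofsuperfriends:/ s/\boldspiderman\b/newspiderman/g'
# leagueofsuperfriends:x:1000:superman,newspiderman,superman,aquaman
```

With GNU sed, add -i and pass the group file instead of piping a line, to edit in place.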
I look at xkcd in my news reader, but it displays the image's title attribute only for a few seconds which makes reading the longer ones more challenging. So I use this to display it in my console.
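The command itself is not included in this snippet, but the tag-scraping part can be sketched like this on sample HTML (xkcd's real markup may differ):

```shell
# Extract the title attribute from an img tag (sample HTML, structure assumed)
html='<img src="comic.png" title="The hover text goes here" alt="Comic">'
echo "$html" | sed -n 's/.*title="\([^"]*\)".*/\1/p'
# The hover text goes here
```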
It is often recommended to enclose capital letters in a BibTeX file in braces, so the letters will not be transformed to lower case when the entry is processed by LaTeX/BibTeX. This is an attempt to apply this rule to a BibTeX database file.
DO NOT USE sed '...' input.bib > input.bib as it will empty the file!
How it works:
/^\s*[^@%]/
Apply the search-and-replace rule to lines that start (^) with zero or more white spaces (\s*), followed by any character ([...]) that is *NOT* a "@" or a "%" ([^@%]).
s=<some stuff>=<other stuff>=g
Search (s) for some stuff and replace by other stuff. Do that globally (g) for all matches in each processed line.
\([A-Z][A-Z]*\)\([^}A-Z]\|},$\)
Matches at least one uppercase letter ([A-Z][A-Z]*) followed by a character that is EITHER not "}" and not a capital letter ([^}A-Z]) OR (|) it actually IS a "}", which is followed by "," at the end of the line ($).
Putting regular expressions in escaped parentheses (\( and \), respectively) makes it possible to back-reference the matched substrings later.
{\1}\2
Replace the matched string by "{", followed by part 1 of the matched string (\1), followed by "}", followed by the second part of the matched string (\2).
I tried this with GNU sed, only, version 4.2.1.
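Putting the pieces above together, the whole command might look like this (GNU sed assumed; heed the warning above and never redirect back to input.bib itself):

```shell
# Assembled from the rule pieces described above (GNU sed).
# With a real file: sed '...' input.bib > output.bib  (never > input.bib!)
echo '  title = {The DNA Story},' \
  | sed '/^\s*[^@%]/ s=\([A-Z][A-Z]*\)\([^}A-Z]\|},$\)={\1}\2=g'
# output:   title = {{T}he {DNA} {S}tory},
```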
Use an address-restricted sed on big files/streams to reduce execution time.
Use
sed '/foo/ s/foo/foobar/g' <filename>
instead of
sed 's/foo/foobar/g' <filename>
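A quick demonstration of the behavior (the output is identical either way; the /foo/ address just lets sed pass non-matching lines through without attempting the substitution):

```shell
# The /foo/ address restricts s/// to matching lines; "baz" is passed
# through untouched instead of being scanned for replacements
printf 'foo bar\nbaz\n' | sed '/foo/ s/foo/foobar/g'
# foobar bar
# baz
```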
This magic line will extract almost all possible archives in the current folder, each into its own folder. Don't forget to change the USER name in the sudo command. sed is used to derive the folder names from the archive names without the extension. You can test the sed expression used in this command:
arg='war.lan.net' ; x=$(echo $arg|sed 's/\(.*\)\..*/\1/') ; echo $x
If some archives can't be extracted, install packages:
apt-get install p7zip-full p7zip-rar
Hope this saves you a lot of time. Enjoy.
Quietly get a webpage from wikipedia: curl -s
By default, don't output anything: sed -n
Search for interesting lines: /<tr valign="top">/
With the matching lines: {}
Search and replace any html tags: s/<[^>]*>//g
Finally print the result: p
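Assembled, the pipeline would look like curl -s <wikipedia-url> | sed -n '/<tr valign="top">/{s/<[^>]*>//g;p}'. The sed stage can be tried offline on sample HTML (markup assumed):

```shell
# The sed stage of the pipeline above, demonstrated on sample HTML
printf '%s\n' '<tr valign="top"><td>Interesting</td></tr>' '<tr><td>skipped</td></tr>' \
  | sed -n '/<tr valign="top">/{s/<[^>]*>//g;p}'
# Interesting
```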
Search and replace recursively. :-) Shorter and simpler than the others. And allows more terms:
replace old new [old new ...] -- `find -type f`
Catches some background colors missed by the highest rated alternative.
Given a dump.sql file, extract table1 creation and data commands. table2 is the one following table1 in the dump file. You can also use the same idea to extract several consecutive tables.
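One hedged way to implement this is with sed's range addressing. The marker comments below follow mysqldump's usual format, which may differ in your dump, and note that the closing marker line itself is included in the output:

```shell
# Create a tiny stand-in dump, then print everything from table1's marker
# up to (and including) table2's marker
printf '%s\n' \
  '-- Table structure for table `table1`' \
  'CREATE TABLE `table1` (id INT);' \
  '-- Table structure for table `table2`' \
  'CREATE TABLE `table2` (id INT);' > dump.sql
sed -n '/-- Table structure for table `table1`/,/-- Table structure for table `table2`/p' dump.sql
```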
Use meaningful exit codes
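The tip's own command is not shown here; as a hypothetical sketch, distinct exit codes let callers distinguish failure modes instead of getting a generic 1:

```shell
# Hypothetical helper (not from the tip): 0 = readable, 2 = missing, 3 = unreadable
check_file() {
  [ -e "$1" ] || return 2   # 2 = file missing
  [ -r "$1" ] || return 3   # 3 = file unreadable
}
check_file /no/such/file || echo "exit code: $?"
# exit code: 2
```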
Change "source" to "cat" to view the output instead of assigning it.
sed '$ d' foo.txt.tmp
...deletes the last line of the file.
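For example:

```shell
# `$` addresses the last line, `d` deletes it
printf 'one\ntwo\nthree\n' | sed '$ d'
# one
# two
```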