commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
You can sign-in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for:
Wow, didn't really expect you to read this far down. The latest iteration of the site is in open beta. It's a gentle open beta, not in prime time just yet. It's being hosted over at UpGuard (link) and you are more than welcome to give it a shot. A couple of things:
The first five columns of the TSV file exported from Google AdWords are text. They should really collapse into a single multi-line text cell, but there is no reliable way to represent a line break within a cell in the .tsv format, so Google splits the text across five columns.
The problem is that with five columns of text there is hardly any space left for additional fields while keeping the output printable.
This script collapses the first five columns of each row into a single multi-line text cell, suitable for console output or for sending directly to a printer.
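A minimal sketch of that collapsing step as an awk one-liner (the sample row and column layout here are hypothetical, not the actual AdWords export):

```shell
# Hypothetical AdWords-style row: five text columns plus two data columns
printf 'line one\tline two\tline three\tline four\tline five\tClicks\tCost\n' |
awk -F'\t' -v OFS='\t' '{
    # join the first five columns into one newline-separated cell
    cell = $1
    for (i = 2; i <= 5; i++) if ($i != "") cell = cell "\n" $i
    # append any remaining columns, still tab-separated
    for (i = 6; i <= NF; i++) cell = cell OFS $i
    print cell
}'
```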
unzip /source/file.zip -d /dest/
From Hong Kong Observatory wap site ;)
I _think_ you were trying to delete files whether or not they had spaces. This would do that. You should probably be more specific though.
"get Hong Kong weather infomation from HK Observatory
From Hong Kong Observatory wap site ;)"
The other one showed a lot of blank lines for me.
Use find's built-in ability to call programs.
find -maxdepth 1 -type f -name "*.7z" -print0 | xargs -0 -n 1 7zr e
would work, too.
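For comparison, a sketch of the -exec form the first comment alludes to (assuming 7zr is the extractor in use):

```shell
# Let find itself invoke the extractor on each matching archive
find . -maxdepth 1 -type f -name "*.7z" -exec 7zr e {} \;
```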
Can be used to test error handling
There is no longer a need to manually add PGP keys for Ubuntu Launchpad PPAs.
The add-apt-repository command creates a new file for the PPA in /etc/apt/sources.list.d/, then adds the PPA's signing key to the apt keyring automatically. No muss, no fuss.
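A sketch of the usual invocation (the PPA name here is hypothetical; both steps need root, so this is not something to run blindly):

```shell
# Adds the sources.list.d entry and imports the signing key in one step
sudo add-apt-repository ppa:someuser/some-ppa
sudo apt-get update
```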
View files in ZIP archive
unzip -l files.zip
Replace DOS character ^M with newline using perl inline replace.
Shorter version with curl and awk
This will visit recursively all linked urls starting from the specified URL. It won't save anything locally and it will produce a detailed log.
Useful to find broken links in your site. It ignores robots.txt, so just use it on a site you own!
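A sketch of a wget invocation matching that description (the URL and log name are placeholders; -e robots=off is what makes it ignore robots.txt, and --spider keeps it from saving anything locally):

```shell
# Recursively follow links without saving pages, writing a detailed log
wget --spider -r -e robots=off -o spider.log http://example.com/
```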
There's nothing particularly novel about this combination of find, grep, and wc, I'm just putting it here in case I want it again.
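A sketch of the kind of combination the comment means (pattern and file glob are assumptions): count how many matching lines exist across a tree of files.

```shell
# Make a tiny sample tree, then count lines containing the pattern
mkdir -p demo && printf 'foo\nbar\nfoo\n' > demo/a.log
find demo -type f -name '*.log' -exec grep -h 'foo' {} + | wc -l
```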
Shows useful information about file descriptors in the Squid web proxy.
Used by VirtualBox and others to create a '.run' file.
This command will add the following two lines to ~/.bash_aliases:
alias exit='pwd > ~/.lastdir;exit'
[ -n "$(cat ~/.lastdir 2>/dev/null)" ] && cd "$(cat ~/.lastdir)"
Or redirect it to ~/.bashrc if you prefer.
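One way to write that command, as a sketch (a heredoc appended to ~/.bash_aliases):

```shell
# Append the exit alias and the restore check to ~/.bash_aliases
cat >> ~/.bash_aliases <<'EOF'
alias exit='pwd > ~/.lastdir;exit'
[ -n "$(cat ~/.lastdir 2>/dev/null)" ] && cd "$(cat ~/.lastdir)"
EOF
```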
Dunno, I find it useful. You may also define an alias for 'cd ~', like: alias cdh='cd ~'
Sometimes you get conflicts using SSH (a host changing its IP, or an IP now belonging to a different machine) and you need to edit known_hosts and remove the offending line. This does it much more easily.
That's the key part.
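The usual non-editing alternative is ssh-keygen -R (the hostname here is hypothetical):

```shell
# Prepare a known_hosts file, then remove all entries for one host
mkdir -p ~/.ssh && touch ~/.ssh/known_hosts
ssh-keygen -R example.com -f ~/.ssh/known_hosts
```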
I got this from http://www.macosxhints.com/article.php?story=20070715091413640. See that article for other, more basic, tcsh-specific history-related settings.
Replace sed regular expressions with perl patterns on the command line.
The sed equivalent is: echo "sed -e"|sed -e 's/sed -e/perl -pe/'
This command deletes the "newline" chars, so its output may be unusable :)
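Presumably something along these lines (an assumption about the command being discussed, using tr -d to show the effect):

```shell
# Deleting the newline characters glues all lines together
printf 'one\ntwo\nthree\n' | tr -d '\n'
```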
If you grep for em:name rather than name, you will get a much better result.