commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/
You can sign in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for:
Normally when a site is blocked through /etc/hosts, traffic is just being redirected to a non-existent server that isn't going to respond. This helps get your point across a little more clearly than a browser timeout.
Of course you could use any number of codes: http://en.wikipedia.org/wiki/List_of_HTTP_status_codes
Obviously, this command can be added to init-rc.d, and more sophisticated responses can be given. It seems worth noting that the information sent from the browser can be parsed using the bash 'read' builtin (such as 'while read -t 1 statement; do parsing'), and that the connection stays open until the script exits. Take care to use EXEC:'bash -c foo.sh', since 'execvp' (socat's method for executing scripts) invokes 'sh', not 'bash'.
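As a sketch of the idea (the script name, the 410 status code, and the response text are just examples), a minimal responder script served via socat's EXEC could look like this:

```shell
#!/bin/bash
# respond.sh — hypothetical responder script; serve it (as root, port 80) with:
#   socat TCP4-LISTEN:80,reuseaddr,fork EXEC:'bash -c ./respond.sh'
respond() {
  # Drain the request headers with the bash 'read' builtin, as noted above:
  while read -t 1 line; do :; done
  # Answer with a status code of your choosing (410 Gone here):
  printf 'HTTP/1.1 410 Gone\r\n'
  printf 'Content-Type: text/plain\r\n\r\n'
  printf 'This site is blocked on this machine.\r\n'
}
respond </dev/null
```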
The benefit is that it doesn't require you to keep the terminal open.
I'll let Slayer handle that. Raining Blood for your pleasure.
Same as 7272, but that one was too dangerous, so I added -P to prompt users to continue or cancel.
Note the double space: "...^ii␣␣linux-image-2..."
Like 5813, but fixes two bugs: This leaves the meta-packages 'linux-headers-generic' and 'linux-image-generic' alone so that automatic upgrades work correctly in the future. Kernels newer than the currently running one are left alone (this can happen if you didn't reboot after installing a new kernel).
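For illustration, the '^ii' match (note the double space) can be seen on a fabricated line of dpkg -l output; the real listing would come from dpkg -l itself:

```shell
# Fabricated dpkg -l line (a live listing comes from: dpkg -l 'linux-image-*'):
sample='ii  linux-image-3.2.0-23-generic  3.2.0-23.36  Linux kernel image'
# Match installed packages ('^ii' plus the double space) and print the name:
echo "$sample" | grep '^ii  linux-image' | awk '{print $2}'
# Live sketch: exclude the running kernel and the meta-package before purging:
#   dpkg -l 'linux-image-*' | awk '/^ii/{print $2}' \
#     | grep -v "$(uname -r)" | grep -v 'linux-image-generic'
```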
You can use these mirror servers to fetch GPG keys if the official one ever goes offline:
wwwkeys.en.pgp.net # (replace 'en' with your country code: fr, de, etc.)
Removes old kernels piling up in /boot, except the currently running one.
This command DOES NOT remove the 'linux-image-generic' package, so you'll continue getting kernel updates
Removes the package ('packagename' in the example) from your system. '-R' is the actual removal option, 'n' removes backup configuration files saved by pacman, and 's' removes the dependencies of the given package that are not required by other packages. Note that pacman does not remove configuration files, etc. created by the package itself.
Clears the package cache of all packages, both installed and uninstalled. Do NOT use this if you might want to downgrade a package later.
Clears the package cache of all uninstalled packages. Does not remove package configuration files in the user's home directory.
Requires the 'reflector' package from the official repository. '5' in the example is the number of mirrors you want in the mirrorlist; it can of course be any other number.
This is the first version of the Sublime Text 2 packaging so there might be bugs.
View all memcache traffic
You have OpenJDK and Sun Java installed, so many of your Java plugins will not work properly, and something called IcedTea (the Java web plugin) keeps crashing. Run this and select Sun Java to use it.
A much quicker (and no dirtier) option. See the man page for help. On Linux/Ubuntu you will need to `sudo apt-get -y install arp-scan`.
On Debian-based systems, apt-get can be limited to a specified bandwidth in kilobytes using the apt configuration options (man 5 apt.conf, man apt-get). To quote man 5 apt.conf:
"The used bandwidth can be limited with Acquire::http::Dl-Limit which accepts integer values in kilobytes. The default value is 0 which deactivates the limit and tries to use as much of the bandwidth as possible..."
"HTTPS URIs. Cache-control, Timeout, AllowRedirect, Dl-Limit and proxy options are the same as for http..."
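Putting the quoted options together, a fragment like the following (the filename is arbitrary) would cap both HTTP and HTTPS downloads, here at 100 kilobytes per second:

```
// /etc/apt/apt.conf.d/75download-limit (any name in apt.conf.d works)
Acquire::http::Dl-Limit "100";   // kilobytes per second; 0 disables the limit
Acquire::https::Dl-Limit "100";  // per the man page, the same option applies to https
```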
-T = traceroute
-V = verbose
--tr-stop = exit when target is reached
-n = don't do reverse lookups (faster)
-2 = udp
-p 53 = destination port 53 (dns), change to your needs
Useful when trying to debug a network with complex routing rules and/or multiple gateways.
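The flags above combine into a single hping3 invocation. This is a sketch only: the target is a placeholder, and hping3 needs root, so the command is printed rather than executed here:

```shell
# Assembled from the flags listed above (target and port are placeholders):
cmd='hping3 -T -V --tr-stop -n -2 -p 53 example.com'
echo "run as root: sudo $cmd"
```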
Find the files locked by the RCS utility.
If you spot a dubious looking cp command running you can use this command to view what is being copied and to where.
1234 is the PID of the cp command being passed to the lsof utility.
3r.*REG will display the file/directory that is being read/copied.
4w.*REG will display the destination it is being written to.
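For illustration, here is what the matching lines could look like on fabricated lsof output (real usage would pipe 'lsof -p 1234'); the 3r descriptor is the source being read and 4w the destination being written:

```shell
# Fabricated 'lsof -p 1234' output for a running cp (paths are examples):
sample='cp  1234 user  3r  REG  8,1  1048576  131  /home/user/big.iso
cp  1234 user  4w  REG  8,1   524288  132  /mnt/backup/big.iso'
# File being read (the source):
echo "$sample" | grep -E '3r.*REG'
# File being written (the destination):
echo "$sample" | grep -E '4w.*REG'
```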
An alternative that does not require root privileges.