commandlinefu.com is the place to record those command-line gems that you return to again and again.
No final count, but clean and simple output.
Much better alternatives: grep-alikes using Perl regexps, with more options and nicer output.
Grabs the Apache config file (as reported by httpd) and prints the path specified as DocumentRoot.
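The command itself isn't reproduced here; a minimal sketch of the idea, assuming httpd -V reports an absolute SERVER_CONFIG_FILE path:

conf=$(httpd -V 2>/dev/null | awk -F'"' '/SERVER_CONFIG_FILE/ {print $2}')
# Print the DocumentRoot path, stripping any surrounding quotes
awk '/^[[:space:]]*DocumentRoot/ {gsub(/"/, "", $2); print $2}' "$conf"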
If you've ever tried "grep -P" you know how terrible it is. Even the man page describes it as "highly experimental". This function will let you 'grep' pipes and files using Perl syntax for regular expressions.
The first argument is the pattern, e.g. '/foo/'. The second argument is a filename (optional).
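A minimal sketch of such a function (the name perlgrep is illustrative, not necessarily the submitter's):

perlgrep () {
  local pattern=$1; shift
  # perl reads the named files, or stdin when no files are given
  perl -ne "print if $pattern;" "$@"
}
# Usage: perlgrep '/foo\d+/' file.txt    or    some_command | perlgrep '/foo\d+/'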
ls -F | grep /\$
but it will break on directories whose names contain newlines. Alternatively, the safe, POSIX sh way (though it misses dotfiles):
for i in *; do test -d "./$i" && printf "%s\n" "$i"; done
Normally, if you just want to see directories you'd use brianmuckian's command 'ls -d */', but I ran into problems trying to use that command in my script because there are often multiple directories per line. If you need to script something with directories and want to guarantee that there is only one entry per line, this is the fastest way I know.
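The command itself isn't reproduced here; one sketch that guarantees a single directory per line (GNU find, names with embedded newlines aside):

find . -maxdepth 1 -type d ! -name . -printf '%f\n'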
This will affect all invocations of grep, even when it is called from inside a script.
If your version of curl does not support the --compressed option, use
curl -s http://funnyjunk.com | gunzip
instead of
curl -s --compressed http://funnyjunk.com
There's nothing particularly novel about this combination of find, grep, and wc, I'm just putting it here in case I want it again.
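The combination in question isn't shown; a typical shape of it, with placeholder pattern and glob:

find . -name '*.log' -exec grep -h 'ERROR' {} + | wc -l   # count matching lines across a tree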
ack searches recursively by default.
How to force a userid to log out of a Linux host, by killing all processes owned by the user, including login shells:
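The exact command isn't reproduced here; one common way, where $user is a placeholder for the userid:

pkill -KILL -u "$user"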
This example command fetches the 'example.com' webpage and then fetches and saves all PDF files listed (linked to) on that webpage.
[Note: of course there are no PDFs on example.com; this is just an example.]
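The submitter's command isn't reproduced here; a classic way to do the same thing with wget, as a sketch:

wget -r -l1 -nd -A.pdf http://example.com/   # -r/-l1: follow links one level deep; -nd: no directory tree; -A.pdf: keep only PDFs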
The original submitter's command spawns a "grep" process for every file found. Mine spawns one grep with a long list of all matching files to search. Learn xargs, everyone! It's a very powerful and always-available tool.
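The two shapes, sketched with a placeholder pattern and glob:

find . -name '*.c' -exec grep 'pattern' {} \;          # one grep process per file
find . -name '*.c' -print0 | xargs -0 grep 'pattern'   # one grep for a long list of files

(For what it's worth, modern find can batch arguments itself with -exec grep 'pattern' {} +.)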
Why use grep and awk?
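The point being that awk can filter on its own; an illustrative pair (pattern and field are placeholders):

grep 'foo' file | awk '{print $2}'
awk '/foo/ {print $2}' file

The second form does the filtering and the printing in a single process.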
Chrome only lets you export bookmarks in HTML format, with a lot of table junk; this command exports just the link titles and the links themselves, without all that extra junk.
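The command isn't shown here; a rough sketch of pulling titles and URLs out of a Chrome bookmarks export (bookmarks.html is a placeholder path):

grep -o '<A HREF="[^"]*"[^>]*>[^<]*' bookmarks.html | sed 's/<A HREF="\([^"]*\)"[^>]*>\(.*\)/\2 \1/'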
Easier to remember.
Change the $domain variable to whichever domain you wish to query.
Works with the majority of whois info; for some that won't, you may have to compromise:
domain=google.com; for a in $(whois $domain | grep "Domain servers in listed order:" --after 3 | grep -v "Domain servers in listed order:"); do echo ">>> Nameservers for $domain from $a <<<"; dig @$a $domain ns +short; done
Note that this doesn't work as well as the first one: if a domain has more than 3 nameservers, it won't hit them all.
As the summary states, this can be useful for making sure the whois nameservers for a domain match the nameserver records (NS records) from the nameservers themselves.
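A hedged sketch of that comparison for a .com domain (bash process substitution; whois field layout varies by registry, so the awk pattern is an assumption):

domain=google.com
diff <(whois "$domain" | awk -F': *' '/Name Server/ {print tolower($2)}' | sort) \
     <(dig +short NS "$domain" | sed 's/\.$//' | sort)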
Same as 7272, but that one was too dangerous, so I added -P to prompt users to continue or cancel.
Note the double space: "...^ii␣␣linux-image-2..."
Like 5813, but fixes two bugs: (1) the meta-packages 'linux-headers-generic' and 'linux-image-generic' are left alone so that automatic upgrades work correctly in the future; (2) kernels newer than the currently running one are left alone (this can happen if you didn't reboot after installing a new kernel).
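Not the submitter's command, but a sketch of the selection logic: list installed kernel image packages strictly older than the running kernel (meta-packages are skipped because they carry no version digits, and note the double space after "ii" in dpkg output):

cur=$(uname -r)
dpkg -l | awk '/^ii  linux-image-[0-9]/ {print $2}' | while read -r pkg; do
  ver=${pkg#linux-image-}
  # sort -V puts the older version first; print only packages older than the running kernel
  [ "$ver" != "$cur" ] && [ "$(printf '%s\n' "$ver" "$cur" | sort -V | head -n1)" = "$ver" ] && echo "$pkg"
done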
Deletes Capistrano-style release directories (except that here there are dashes within the YYYY-MM-DD timestamp).
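The command isn't shown; a sketch that keeps the five newest dash-timestamped release directories and deletes the rest (the glob and the "keep 5" policy are assumptions; head -n -5 and xargs -r are GNU):

ls -d 20[0-9][0-9]-[0-9][0-9]-[0-9][0-9]* 2>/dev/null | sort | head -n -5 | xargs -r rm -rf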