commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
You can sign-in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that receive a minimum of 3 and of 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for:
Wow, didn't really expect you to read this far down. The latest iteration of the site is in open beta. It's a gentle open beta, not in prime time just yet. It's being hosted over at UpGuard (link) and you are more than welcome to give it a shot. A couple of things:
Outputs the number of different pixels.
Two parameters to increase the tolerance:
* thumbnail size
* fuzz, the color distance tolerance
See http://en.positon.org/post/Compare-/-diff-between-two-images for more details.
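A minimal sketch of the technique, assuming ImageMagick's compare is installed (file names are examples):

```shell
# Count differing pixels between two images (AE = absolute error metric).
# The count is written to stderr; -fuzz tolerates small color differences.
compare -metric AE -fuzz '10%' a.png b.png /dev/null 2>&1
```

compare exits non-zero when the images differ, so the command doubles as a test in scripts.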
This is what we use.
You can grep -v 127.0.0.1 if you wish.
With zcat's force option (-f), it's even simpler.
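For example (the log names are illustrative):

```shell
# zcat -f decompresses gzipped files and passes plain files through
# unchanged, so current and rotated logs go through one pipeline.
zcat -f access.log access.log.1.gz | wc -l
```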
Listens for events in the directory. Each created file is printed on stdout. The loop then reads each filename and runs a command on it.
This can be used to force permissions in a directory, as an alternative for umask.
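A minimal sketch, assuming inotify-tools is installed (the directory and mode are examples):

```shell
# Watch a directory and force permissions on every newly created file.
# -m keeps watching forever; --format prints the full path of each file.
inotifywait -m -e create --format '%w%f' /srv/incoming |
while read -r file; do
    chmod 644 "$file"
done
```

Unlike umask, this also catches files created by programs that set their own modes.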
shuf is in the coreutils package
Maybe not the quickest, because of the sort command, but it will also look in other man sections.
updated with goodevilgenius 'shuf' idea
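One way to sketch the idea (apropos coverage varies by system; the head is just to keep output short):

```shell
# Pick a random man page name from all sections and show its first lines.
page=$(apropos . 2>/dev/null | shuf -n 1 | awk '{print $1}')
man "$page" 2>/dev/null | head -n 5
```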
The pee command is in the moreutils package.
Does the same as pssh, just in shell syntax.
Put your hosts in hostlist, one per line.
Command outputs are gathered in output and error directories.
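A sketch of the idea (the remote command and ssh options are examples):

```shell
# Run a command on every host in hostlist, in parallel,
# collecting stdout and stderr per host.
mkdir -p output error
while read -r host; do
    ssh -o BatchMode=yes "$host" uptime \
        >"output/$host" 2>"error/$host" &
done < hostlist
wait
```

BatchMode stops ssh from prompting for passwords, which would otherwise hang the loop.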
Just a little simplification.
I just wanted a simple DNS request.
Because the host and nslookup commands are not available on all systems, we use getent instead.
Thanks aulem for that tip.
This is /bin/sh compatible.
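For example:

```shell
# getent queries through the system resolver (glibc NSS),
# so it works even where host/nslookup are missing.
getent hosts localhost
```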
Using just globbing.
'data' is the directory to backup, 'backup' is directory to store snapshots.
Back up files on a regular basis using hard links. Very efficient and quick, and the backup data is directly available.
Same as explained here:
in one line.
When you check the size of your backups with du, the first backup accounts for all the space, and subsequent backups only for the files that have changed.
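A sketch using rsync's --link-dest (the paths and date format are examples):

```shell
# Each snapshot hard-links unchanged files against the previous one.
# --link-dest paths are resolved relative to the destination directory,
# so ../latest points at backup/latest.
today=$(date +%Y-%m-%d)
rsync -a --link-dest=../latest data/ "backup/$today/"
ln -nsf "$today" backup/latest
```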
I use zgrep because it also handles non-gzip files.
With ls -tr, we parse logs in time order.
Grepping for the empty string just concatenates all the logs, but you can also grep for an IP, a URL...
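For example (the log names are illustrative):

```shell
# Oldest-first concatenation of plain and rotated logs;
# replace '' with an IP or URL to filter instead.
# -h suppresses the filename prefix grep adds for multiple files.
zgrep -h '' $(ls -tr access.log*)
```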
Works with files containing spaces and for very large directories.
Open a file directly with execution permission.
Put the function in your .bashrc
You can also put this in your vimrc:
command XX w | set ar | silent exe "!chmod +x %" | redraw!
and open a new file like this:
vi +XX /tmp/script.sh
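The shell-function half might look like this (the function name is an assumption, not from the original):

```shell
# Create the file if needed, make it executable, then open it in $EDITOR.
vix() {
    touch "$1" && chmod +x "$1" && "${EDITOR:-vi}" "$1"
}
```

Usage: vix /tmp/script.sh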
Another way of counting the line output of tail over 10 seconds, without requiring pv.
cut keeps the first two characters of the count; for a three-digit total that drops the last digit, giving the approximate per-second rate:
tail -n0 -f access.log>/tmp/tmp.log & sleep 10; kill $! ; wc -l /tmp/tmp.log | cut -c-2
You can also enclose it in a loop and send stderr to /dev/null :
while true; do tail -n0 -f access.log>/tmp/tmp.log & sleep 2; kill $! ; wc -l /tmp/tmp.log | cut -c-2; done 2>/dev/null
Displays the realtime line output rate of a logfile.
-l tells pv to count lines
-i 10 to refresh every 10 seconds
-l option is not in old versions of pv. If the remote system has an old pv version:
ssh user@host tail -f /var/log/apache2/access.log | pv -l -i10 -r >/dev/null
or "Execute a command with a timeout"
Run a command in background, sleep 10 seconds, kill it.
$! is the process id of the most recently executed background command.
You can test it with:
find / & sleep 10; kill $!
Reports all local partitions having more than 90% usage.
Just add it in a crontab and you'll get a mail when a disk is full.
(sending mail to the root user must work for that)
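A minimal sketch of such a check (the threshold is adjustable; the mail step is left to cron):

```shell
# Print mount point and usage for local filesystems above 90% full.
# $5+0 strips the trailing '%' so awk compares numbers.
df -Pl | awk 'NR > 1 && $5+0 > 90 {print $6, $5}'
```

cron mails any output to the crontab owner, so silence means all disks are fine.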
Very handy if you have made a package-selection mistake in aptitude.
Note that it's better to do a Ctrl+U (undo) in aptitude if possible, because the keep-all will clear some package states (like the 'hold' state).
Instead of opening your browser, googling "whatismyip"...
Also useful for scripts.
dig can be found in the dnsutils package.
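One common form of this lookup (requires network access; OpenDNS is one of several resolvers offering it):

```shell
# Ask OpenDNS's resolver to echo back the address it sees you querying from.
dig +short myip.opendns.com @resolver1.opendns.com
```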