commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
You can sign-in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for:
make, find and a lot of other programs can take a lot of time, and sometimes they don't. Suppose you write a long, complicated command and wonder whether it will be done in 3 seconds or in 20 minutes. Just add an "R" suffix (without quotes) to it and you can go do other things: zsh will inform you when you can see the results.
You can replace zenity with any other X Window dialog program.
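A minimal sketch of one way to get this behaviour (an assumption, not necessarily the original command): a zsh global alias named R that backgrounds the whole command and pops up a zenity dialog when it finishes.

# zsh global alias: appending R to a command backgrounds it and notifies on completion
alias -g R='&& zenity --info --text "Command finished" &'
# usage: make R    (expands to: make && zenity --info --text "Command finished" &)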
du -x option to not go across mounts (you usually run this to find what to delete on a full partition); the combined command is sketched after this list
-a option to also list individual files, not just directories
-k to display sizes in kilobytes
sort -n to sort in numerical order, biggest files last
tail -10 to only display the biggest 10
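Putting those pieces together, a sketch of the kind of command this describes (the starting path is an assumption):

# 10 biggest files on the root filesystem, sizes in kilobytes, biggest last
du -axk / 2>/dev/null | sort -n | tail -10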
reverse the sorting of ls to get the newest file:
ls -1tr --group-directories-first /path/to/dir/ | tail -n 1
Problems:
If there are no files in the directory you will get a directory name (if subdirectories exist) or nothing at all; see the sketch below for one workaround.
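One possible workaround (an assumption, not part of the original tip): have ls mark directories with a trailing slash and filter them out, so an empty directory yields nothing rather than a directory name.

ls -1trp --group-directories-first /path/to/dir/ | grep -v '/$' | tail -n 1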
Takes IPs from web logs and pipes them to iptables; use grep to whitelist IPs. Use this if a particular file is being requested by many different addresses.
Sure, the requests have already come down the pipe and your bandwidth may suffer, but that isn't the concern. This one-liner saved me from all that traffic hitting the server a second time; rename or reconfigure the target to something like blog-post-1.php (or similar) so legitimate users can continue working while the botnet kills itself.
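A rough sketch of the idea (the log path, the attacked URL and the whitelisted address are all assumptions, not the author's exact one-liner):

# block every address that requested the attacked URL, except a whitelisted IP
grep 'blog-post.php' /var/log/apache2/access.log | awk '{print $1}' | sort -u | grep -v '^198.51.100.7$' | while read ip; do iptables -A INPUT -s "$ip" -j DROP; done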
This should be a bit more portable, since echo -e/-n and date's -Ins are not.
This is useful when watching a log file that does not contain timestamps itself.
If the file already has content when the command is started, the first lines will carry the "wrong" timestamp: the time the command was started, not the time the lines were originally written.
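A portable sketch of the technique being described (the log file name is an assumption):

# prefix each new line with the time it was read; avoids echo -e and date -Ins
tail -f /var/log/app.log | while read -r line; do printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$line"; done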
Your version works fine, except for someone who's interested in the commands that 'sudo' was prefixed to;
i.e. in your command, every use of sudo just shows up as the number of times sudo was used.
A slight variation in my command peeks into what commands sudo was used for and counts those commands instead
(ignoring 'sudo' itself).
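A sketch of that variation (the exact pipeline is an assumption; it counts the command following 'sudo' rather than 'sudo' itself):

# most-used commands from the shell history, counting 'sudo foo' as a use of 'foo' (run interactively)
history | awk '{ if ($2 == "sudo") print $3; else print $2 }' | sort | uniq -c | sort -rn | head -10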
This combines the above two commands into one. Note that you can leave off the last two commands in the pipeline and simply run it as
"find /home/ -type f -exec du {} \; 2>/dev/null | sort -n | tail -n 10"
The last two commands above just convert the output into human readable format.
Often you need to find the files that are taking up the most disk space in order to free up space ASAP. This script can be run on the entire filesystem as root, or on a home directory, to find the largest files.
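One possible way to make the sizes human readable (an assumption — not necessarily the two commands the author left off):

# assuming GNU du's default 1K blocks, divide by 1024 to print megabytes
find /home/ -type f -exec du {} \; 2>/dev/null | sort -n | tail -n 10 | awk '{ printf "%.1f MB\t%s\n", $1/1024, $2 }'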
This adds a random color to your prompt and shows your external IP.
Useful if you are using multiple machines with the same hostname.
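A bash sketch of the idea (the IP-lookup service and the exact prompt layout are assumptions):

# fetch the external IP once, then pick a random ANSI colour (31-37) for the prompt
EXTIP=$(curl -s https://ifconfig.me)
PS1="\[\e[$((31 + RANDOM % 7))m\]\u@\h ($EXTIP) \w $ \[\e[0m\]"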
The given example collects the output of the tail command: whenever a line is emitted, further lines are collected until no more output arrives for one second. This group of lines is then sent as a notification to the user.
You can test the example with
logger "First group"; sleep 1; logger "Second"; logger "group"
tail -c 1 "$1" returns the last byte in the file.
Command substitution deletes any trailing newlines, so if the file ended in a newline $(tail -c 1 "$1") is now empty, and the -z test succeeds.
However, $a will also be empty for an empty file, so we add -s "$1" to check that the file has a size greater than zero.
Finally, -f "$1" checks that the file is a regular file -- not a directory or a socket, etc.
You can use this one-liner for a quick and dirty (more customizable) alternative to the watch command. The keys to making this work: everything exists in an infinite loop; the loop starts with a clear; the loop ends with a sleep. Enter whatever you'd like to keep an eye on in the middle.
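For example (the commands in the middle are just placeholders):

# quick-and-dirty watch: clear, run whatever you want to monitor, sleep, repeat
while true; do clear; date; df -h; sleep 2; done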
Download colorizer by @raszi @ http://github.com/raszi/colorize