Check These Out
Extracts only line number 12 from a file. It's meant for text files. Replace 12 with the line number you want.
Line numbering starts at 1, not 0.
We use q on the next line (13) so sed quits and doesn't process any more lines.
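A sketch of the command being described (file.txt is a placeholder name):
$ sed -n '12p;13q' file.txt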
Allows you to preserve your files when using cp, mv, ln, install or patch. When the target file exists, it will generate a file named XXX.~N~ (N is an auto-incrementing number) instead of overwriting the target file.
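The option being described is GNU --backup=numbered; a minimal sketch (file names are placeholders):
$ cp --backup=numbered notes.txt /backup/notes.txt
# if /backup/notes.txt already exists, it is first renamed to /backup/notes.txt.~1~
Equivalently, you can set VERSION_CONTROL=numbered in the environment and use the short -b flag.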
Runs an RSS feed through sed, replacing the closing tags with newlines and the opening tags with whitespace, making it readable.
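A sketch of such a pipeline, assuming GNU sed (the feed URL is a placeholder, and the original's exact expressions may differ):
$ curl -s http://example.com/feed.rss | sed -e 's|</[^>]*>|\n|g' -e 's|<[^>]*>| |g'
Closing tags are replaced first; otherwise the opening-tag pattern would swallow them too.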
gg puts the cursor at the beginning of the file.
g? applies ROT13 to the text covered by the next movement.
G moves to the end of the file.
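Put together, the sequence is ggg?G, which ROT13-encodes the whole buffer (running it again decodes it, since ROT13 is its own inverse). A non-interactive sketch from the shell (the file name is a placeholder):
$ vim -c 'normal! ggg?G' -c 'wq' file.txt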
Just another curl command to get your public-facing IP.
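One common form (ifconfig.me is just one of several services that do this):
$ curl ifconfig.me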
Useful when you're trying to unmount a volume, and in other sticky situations where a rogue process is annoying the hell out of you.
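The command being described is presumably fuser or similar; a sketch, assuming the stuck volume is mounted at /mnt/usb (a placeholder path):
$ fuser -km /mnt/usb
# -m: select processes using that mount point, -k: kill them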
A common mistake in Bash is to write a command line where a command reads a file and its output is redirected to that same file.
It can easily be avoided thanks to:
1) the warning "-bash: file.txt: cannot overwrite existing file" (shown when the noclobber option is set)
2) options (often "-i") that let the command modify the file in place
But I like to have a small function that does the trick by waiting for the first command to finish before writing into the file.
Lots of things could probably be done in a better way, if you know one...
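A minimal sketch of such a function (the name "owrite" and the grep usage are my own): it buffers the command's complete output, then overwrites the file only once the command has exited successfully.
owrite() { local f=$1; shift; local out; out=$("$@") && printf '%s\n' "$out" > "$f"; }
$ owrite file.txt grep -v error file.txt
Note it holds everything in memory, so it's only suitable for reasonably sized text files.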
There is a limit to how many processes you can run at the same time for each user, especially with web hosts. If the maximum number of processes for your user is 200, the following sets OPTIMUM_P to roughly 100 (half of whatever headroom is left after subtracting your currently running processes).
$ OPTIMUM_P=$(( (`ulimit -u` - `find /proc -maxdepth 1 \( -user $USER -o -group $GROUPNAME \) -type d|wc -l`) / 2 ))
This is very useful in scripts because it is such a fast, low-resource way (compared to ps, who, lsof, etc.) to determine how many processes are currently running for a given user. The number of currently running processes is subtracted from the hard limit set up for the account (see limits.conf, pam, initscript).
An easy-to-understand example: this searches the current directory for shell scripts, running up to 100 'file' commands at the same time, which greatly speeds up the command.
$ find . -type f | xargs -P $OPTIMUM_P -iFNAME file FNAME | sed -n '/shell script text/p'
I use it in my http://www.askapache.com/linux-unix/bash_profile-functions-advanced-shell.html, especially for the xargs command. xargs has a -P option that lets you specify how many processes to run at the same time. For instance, if you had 1000 URLs in a text file and wanted to download all of them fast with curl, you could download 100 at a time (check ps output on a separate [pt]ty for proof) like this:
$ cat url-list.txt | xargs -I '{}' -P $OPTIMUM_P curl -O '{}'
I like to do things as fast as possible on my servers. I have several types of servers and hosting environments, some with very restrictive jail shells with a 20-process limit, some with 200, some with 8000, so on the jailed shells my xargs -P10 would kill my shell or dump core. Using the above I can set the -P value dynamically, so the same xargs command as above always works.
If you were building a process-killer (very common on cheap hosting), this would also be handy.
Note that if you are only allowed 20 or so processes, you should just use -P1 with xargs.