Check These Out
-r for recursive (if you want to copy entire directories)
src for the source file (or wildcards)
dst for the destination
--progress to show a progress bar
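Putting those options together (a sketch, assuming the command being described is rsync; user@remotehost and the paths are placeholders):
$ rsync -r --progress src/ user@remotehost:dst/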
This will send the ASCII sequence for DC3 (XOFF) to the currently running tty, which pauses terminal output; this is flow control in the tty driver, not SIGSTOP (19).
You can resume with the ASCII sequence for DC1 (XON) by pressing CTRL+q; again, this is flow control rather than SIGCONT (18).
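To see the effect (yes(1) is just a convenient source of endless output):
$ yes hello
Press CTRL+s and the scrolling freezes; press CTRL+q and it resumes (CTRL+c quits).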
Shows all block devices in a tree with descriptions of what they are.
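That description matches lsblk (an assumption, since the command itself isn't quoted here); a typical invocation selecting a few useful columns:
$ lsblk -o NAME,TYPE,SIZE,MOUNTPOINT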
From http://lists.debian.org/debian-devel/2001/01/msg00971.html .
get the SHA-256 checksum for a file
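For example (file.iso is a placeholder filename):
$ sha256sum file.iso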
swap out "80" for your port of interest. You can use a port number or a named port from /etc/services, e.g. "http"
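Assuming the command in question is lsof (a guess based on the named-port behavior; lsof resolves port names via /etc/services):
$ lsof -i :80
$ lsof -i :http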
grab the weather, with a little expletive fun. Replace the 48104 with a US zip code, or the name of your city (such as ZIP="oslo"), unless you want to know what the weather is like for me (and that's fine too)
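The original one-liner isn't reproduced here, but the same ZIP-variable pattern works against the wttr.in service (a stand-in of mine, not necessarily what the author used):
$ ZIP="48104"; curl -s "wttr.in/$ZIP"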
Helps when I'm editing a script and want to double-check some commands without exiting vi multiple times or opening another terminal session.
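From within vi you can run a single command with :! or drop to a subshell with :sh (type exit to return to the editor); myscript.sh below is a placeholder:
:!bash -n myscript.sh
:sh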
Print a row of characters across the terminal. Uses tput to find the current terminal width, then generates a line of characters just long enough to cross it. In the example '#' is used.
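The base command isn't shown here; judging from the variants below, it was presumably:
$ seq -s'#' 0 $(tput cols) | tr -d '[:digit:]'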
It's possible to use a repeating sequence by dividing the columns by the number of characters in the sequence like this:
$ seq -s'~-' 0 $(( $(tput cols) /2 )) | tr -d '[:digit:]'
or
$ seq -s'-~?' 0 $(( $(tput cols) /3 )) | tr -d '[:digit:]'
You will lose characters at the end if the terminal width isn't evenly divisible by the length of the sequence.
There is a limit to how many processes each user can run at the same time, especially on shared web hosts. If the maximum number of processes for your user is 200, the following sets OPTIMUM_P to roughly 100 (half of the headroom left after subtracting the processes already running):
$ OPTIMUM_P=$(( ($(ulimit -u) - $(find /proc -maxdepth 1 \( -user $USER -o -group $GROUPNAME \) -type d | wc -l)) / 2 ))
This is very useful in scripts because it is a fast, low-overhead way (compared to ps, who, lsof, etc.) to determine how many processes a given user is currently running. The number of currently running processes is subtracted from the upper limit set for the account (see limits.conf, pam, your init scripts).
An easy-to-understand example: this searches the current directory for shell scripts, running up to 100 'file' commands at the same time, which greatly speeds up the command.
$ find . -type f | xargs -P $OPTIMUM_P -iFNAME file FNAME | sed -n '/shell script text/p'
I use it in my http://www.askapache.com/linux-unix/bash_profile-functions-advanced-shell.html, especially for the xargs command. xargs has a -P option that lets you specify how many processes to run at the same time. For instance, if you have 1000 URLs in a text file and want to download all of them fast with curl, you can download 100 at a time (check ps output on a separate [pt]ty for proof) like this:
$ cat url-list.txt | xargs -I '{}' -P $OPTIMUM_P curl -O '{}'
I like to do things as fast as possible on my servers. I have several types of servers and hosting environments, some with very restrictive jailed shells with a 20-process limit, some with 200, some with 8000; on the jailed shells a hard-coded xargs -P10 would kill my shell or dump core. Using the above I can set the -P value dynamically, so the same xargs invocation shown above always works.
If you were building a process-killer (very common for cheap hosting) this would also be handy.
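A sketch of that idea (the 10-process safety margin and the expensive_job process name are made up for illustration):
$ [ $(find /proc -maxdepth 1 -user "$USER" -type d | wc -l) -gt $(( $(ulimit -u) - 10 )) ] && pkill -u "$USER" expensive_job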
Note that if you are only allowed 20 or so processes, you should just use -P1 with xargs.