commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
You can sign in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).
Bash function copies a file prefixed with a version number to a subdirectory
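The function itself isn't reproduced here; a minimal sketch of the idea (the ver/ subdirectory name and the numbering scheme are assumptions):
bkv() { mkdir -p ver; n=$(ls ver | wc -l); cp -- "$1" "ver/$((n+1)).$1"; }
Running bkv notes.txt copies notes.txt to ver/1.notes.txt, then ver/2.notes.txt on the next call, and so on.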
Counts the number of new emails in the post office (or wherever mail is set up to check).
The same thing using only Bash built-ins.
For readability I've kept the variables out, but it could be made far more compact (and totally unreadable!) by stuffing everything inside the single echo command.
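Neither one-liner is reproduced above; here is a sketch of both ideas, assuming a standard mbox where each message starts with a "From " line:
grep -c '^From ' "${MAIL:-/var/mail/$USER}"
And using only Bash built-ins:
n=0; while IFS= read -r l; do [[ $l == "From "* ]] && n=$((n+1)); done < "${MAIL:-/var/mail/$USER}"; echo "$n"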
This gives a very rough estimate of how many pages your text files will print on. Assumes 60 lines per page, and does not take long lines into account.
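For example, assuming 60 lines per page (file.txt is a placeholder; the +59 rounds up to whole pages):
awk 'END { printf "%d page(s)\n", (NR + 59) / 60 }' file.txt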
Download a bunch of random animated gifs from http://gifbin.com/
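The command itself isn't shown; one plausible approach, assuming the front page embeds direct .gif URLs (the URL pattern is a guess):
curl -s http://gifbin.com/ | grep -oE 'http[^" ]+\.gif' | sort -u | xargs -n 1 wget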
Center each line of piped output within the length of its longest line; the output has to be buffered so the maximum width is known before printing.
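A sketch of one way to do it with awk, buffering every line, finding the longest, then padding each line so it is centered within that width (a sketch of the technique, not the original command):
awk '{ a[NR]=$0; if (length($0) > w) w = length($0) } END { for (i=1; i<=NR; i++) printf("%" int((w+length(a[i]))/2) "s\n", a[i]) }'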
If you have a folder with thousands of files and want many folders with only 100 files each, run this.
It will create 0/, 1/, etc. and put 100 files inside each one.
But find will return true even if it doesn't find anything ...
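The original command isn't reproduced above; a sketch of the same idea in plain Bash:
i=0; for f in *; do [ -f "$f" ] || continue; d=$((i/100)); mkdir -p "$d"; mv -- "$f" "$d/"; i=$((i+1)); done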
A simple command to count the subdirectories of the current directory whose names start with a specific prefix.
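For example, counting only immediate subdirectories whose names start with the hypothetical prefix backup:
find . -maxdepth 1 -type d -name 'backup*' | wc -l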
usage:
where COMMIT
for instance:
where 1178c5950d321a8c5cd8294cd67535157e296554
where HEAD~5
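The where function itself isn't shown above; presumably it lists the branches that contain the given commit, something along these lines (an assumption, not the original definition):
where() { git branch -a --contains "$1"; }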
Kills all processes that belong to the user running it, excluding bash and sshd (so the PuTTY/SSH session is spared). The grep -vE "..." part can be extended with ps line patterns for anything else you want to spare.
If no process is found on the hitlist, it prints # NOTHING TO KILL. Otherwise, it prints # KILL EM ALL, followed by the cull list.
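A rough sketch of the approach described, with the spared patterns and output format taken from the description above:
hitlist=$(ps -u "$USER" -o pid=,comm= | grep -vE "bash|sshd|ps|grep"); if [ -n "$hitlist" ]; then echo "# KILL EM ALL"; echo "$hitlist"; echo "$hitlist" | awk '{ print $1 }' | xargs kill; else echo "# NOTHING TO KILL"; fi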
Use bc for decimals...
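For example, where shell arithmetic would truncate to an integer:
echo "scale=3; 10/3" | bc
prints 3.333.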
This Anti-TarBomb function makes it easy to unpack a .tar.gz without worrying that it will "explode" in your current directory. I used to create a temporary folder, extract the tarball there, and reorganize the files afterwards, which got tiresome. Just add this function to your .zshrc / .bashrc and use it like this:
atb arch1.tar.gz
and it will create a folder for the extracted files, if they aren't already in a single folder.
This only works for .tar.gz, but it's very easy to edit the function to suit your needs, if you want to extract .tgz, .tar.bz2 or just .tar.
More info about tarbombs at http://www.linfo.org/tarbomb.html
Tested in zsh and bash.
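The function referred to above isn't reproduced; based on the description and the updates below, it was presumably the .tar.gz-only variant, roughly (a reconstruction, not the author's exact code):
atb() { l=$(tar tf "$1"); if [ $(echo "$l" | wc -l) -eq $(echo "$l" | grep -c "^$(echo "$l" | head -n1)") ]; then tar xf "$1"; else mkdir "${1%.tar.gz}" && tar xf "$1" -C "${1%.tar.gz}"; fi; }
It extracts in place when every entry already shares the tarball's first path component, and otherwise creates a folder named after the archive.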
UPDATE: This function works for .tar.gz, .tar.bz2, .tgz, .tbz and .tar in zsh (it does not work in bash):
atb() { l=$(tar tf $1); if [ $(echo "$l" | wc -l) -eq $(echo "$l" | grep $(echo "$l" | head -n1) | wc -l) ]; then tar xf $1; else mkdir ${1%.t(ar.gz||ar.bz2||gz||bz||ar)} && tar xf $1 -C ${1%.t(ar.gz||ar.bz2||gz||bz||ar)}; fi ;}
UPDATE2: From the comments: bepaald came up with a variant that works for .tar.gz, .tar.bz2, .tgz, .tbz and .tar in bash:
atb() { shopt -s extglob; l=$(tar tf "$1"); if [ $(echo "$l" | wc -l) -eq $(echo "$l" | grep "$(echo "$l" | head -n1)" | wc -l) ]; then tar xf "$1"; else mkdir "${1%.t@(ar.gz|ar.bz2|gz|bz|ar)}" && tar xf "$1" -C "${1%.t@(ar.gz|ar.bz2|gz|bz|ar)}"; fi; shopt -u extglob; }
The command was too long for the site, so this is the complete version:
fname=$1; f=$( ls -la "$fname" ); if [ -n "$f" ]; then fsz=$( echo $f | awk '{ print $5 }' ); if [ "$fsz" -ne "0" ]; then nrrec=$( wc -l "$fname" | awk '{ print $1 }' ); recsz=$( expr $fsz / $nrrec ); echo "$recsz"; else echo "0"; fi; else echo "file $fname does not exist" >&2; fi
First the input is stored in var $fname
The file is checked for existence using "ls -la".
If the output of "ls -la" is empty, an error message is printed on stderr.
Otherwise the file length is taken from the output of "ls -la" (5th field).
With "wc -l" the number of records (or lines) is taken.
The record size is the file length divided by the number of records.
Please note: this method does not take into account any headers or variable-length records, and it only works on ASCII files where the records are separated by 0x0A (or 0x0D/0x0A on MS-DOS/Windows).
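For comparison, a shorter variant using stat instead of parsing ls (GNU stat; an alternative sketch, not the original command):
recsz() { if [ ! -f "$1" ]; then echo "file $1 does not exist" >&2; return 1; fi; sz=$(stat -c%s "$1"); n=$(wc -l < "$1"); if [ "$n" -gt 0 ]; then echo $((sz / n)); else echo 0; fi; }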
Suppose you have 11 marbles, 4 of which are red, the rest being blue. The marbles are indistinguishable, apart from colour. How many different ways are there to arrange the marbles in a line? And how many ways are there to arrange them so that no two red marbles are adjacent?
There are simple mathematical solutions to these questions, but it's also possible to generate and count all possibilities directly on the command line, using little more than brace expansion, grep and wc!
The answer to the question posed above is that there are 330 ways of arranging the marbles in a line, 70 of which have no two red marbles adjacent. See the sample output.
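Those mathematical solutions, for reference: C(11,4) = 330 arrangements in total, and C(8,4) = 70 for the non-adjacent case, since the 7 blue marbles create 8 gaps from which 4 are chosen for the reds. A small helper to verify the binomial coefficients with bc (a hypothetical helper, not part of the original post):
choose() { echo "($(seq $(($1-$2+1)) $1 | paste -sd'*' -))/($(seq 1 $2 | paste -sd'*' -))" | bc; }
choose 11 4 gives 330 and choose 8 4 gives 70.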
To follow the call to marbles 11 4: after c=''; for i in $(seq $1); do c+='{b,r}'; done;, $c equals {b,r}{b,r}{b,r}{b,r}{b,r}{b,r}{b,r}{b,r}{b,r}{b,r}{b,r}
After x=$(eval echo $c), and brace expansion, $x equals bbbbbbbbbbb bbbbbbbbbbr ... rrrrrrrrrrb rrrrrrrrrrr, which is all 2^11 = 2048 strings of 11 b's and r's.
After p=''; for i in $(seq $2); do p+='b*r'; done;, $p equals b*rb*rb*rb*r
Next, after y=$(grep -wo "${p}b*" <<< "$x"), $y holds the 330 arrangements containing exactly four r's, one per line, since -w makes grep match whole words only.
Finally, grep -vc 'rr' <<< "$y" counts the arrangements in which no two r's are adjacent, giving 70.
Great idea camocrazed. Another twist would be to display a different man page based on the day of the year. The following cycles through a man page per day for everything in /bin (the 10# prefix stops day numbers like 008 being read as invalid octal, and the +1 keeps sed from being asked for line 0):
man $(ls /bin | sed -n "$((10#$(date +%j) % $(ls /bin | wc -l) + 1))p")
Of course, httpd can be replaced with any other process name.