Commands by bwoodacre (6)

  • If the PDF/DVI/etc. documentation for a LaTeX package is already part of your local texmf tree, texdoc will find and display it for you. If the documentation is not available on your system, it brings up the package's page on CTAN so you can investigate further.

    texdoc packagename
    bwoodacre · 2010-05-23 20:02:32 · 2 votes
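    A sketch of typical usage ('geometry' is just an example package; the -l flag is documented in current texdoc releases, but check your version):

        # open the documentation for the 'geometry' package
        texdoc geometry

        # list all matching documentation files rather than opening the first
        texdoc -l geometry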
  • ncdu is a text-mode ncurses-based disk usage analyzer, useful when you want to see where all your space is going. For a single flat directory it is no more elaborate than a du | sort pipeline, but it analyzes every directory below the one you specify, so space consumed by files inside subdirectories is taken into account and you get the full picture. Features: file deletion, file size or size on disk, and refresh as contents change. Homepage: http://dev.yorhel.nl/ncdu

    ncdu directory_name
    bwoodacre · 2009-06-09 00:02:48 · 5 votes
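    For comparison, the flat one-level alternative alluded to above looks something like this (sort -h is a GNU coreutils option):

        # rough one-level equivalent: sizes of everything in /var, largest last
        du -sh /var/* | sort -h

        # ncdu does the same recursively, in an interactive browser
        ncdu /var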
  • txt2regex can be interactive or noninteractive and generates regular expressions for a variety of dialects based on user input. In interactive mode, the regex string builds up as you select menu options. The sample output here is from noninteractive mode; try running it standalone and see for yourself. It is written in bash and is available as the 'txt2regex' package, at least under Debian/Ubuntu.

    txt2regex
    bwoodacre · 2009-04-29 04:00:22 · 8 votes
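    A sketch of the noninteractive mode mentioned above (the --make and --prog options appear in the txt2regex manual, but verify against your version):

        # print a ready-made date regex, translated for the sed and egrep dialects
        txt2regex --make date --prog sed,egrep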
  • 'dpkg -S' simply matches the string you supply, so passing a bare 'ls' matches any file from any package that has 'ls' anywhere in its filename. It is usually a good idea to use an absolute path instead: run against the bare string, dpkg reports some 12,000 known files matching 'ls'.

    dpkg -S /usr/bin/ls
    bwoodacre · 2009-04-18 18:18:23 · 45 votes
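    Both forms side by side (the wc -l is just to count the noise):

        # precise: which package owns this exact file?
        dpkg -S /usr/bin/ls

        # noisy: substring match against every path dpkg knows about
        dpkg -S ls | wc -l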
  • This command sequence allows simple setup of (gasp!) password-less SSH logins. Be careful: if you already have an SSH keypair in your ~/.ssh directory on the local machine, ssh-keygen may overwrite it. ssh-copy-id copies the public key to the remote host and appends it to the remote account's ~/.ssh/authorized_keys file. When trying ssh, if you used no passphrase for your key, the remote shell appears soon after invoking ssh user@host.

    ssh-keygen; ssh-copy-id user@host; ssh user@host
    bwoodacre · 2009-03-18 07:59:33 · 20 votes
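    A slightly more careful version of the same sequence (the ed25519 key type is an assumption; any type ssh-keygen supports will do):

        # look before you leap: any existing keys that ssh-keygen might clobber?
        ls ~/.ssh/id_*

        # generate a keypair, then install the public half on the remote host
        ssh-keygen -t ed25519
        ssh-copy-id -i ~/.ssh/id_ed25519.pub user@host

        # with no passphrase on the key, this should log in without a password
        ssh user@host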
  • This invokes tar on the remote machine, and the resulting tarfile is piped over the network via ssh and saved on the local machine. It is useful for making a one-off backup of a directory tree with zero storage overhead on the source. Variations include compressing on the source by adding the z flag ('tar czfp') or compressing at the destination via ssh user@host "cd dir; tar cfp - *" | gzip - > file.tar.gz

    ssh user@host "cd targetdir; tar cfp - *" | dd of=file.tar
    bwoodacre · 2009-03-18 07:43:22 · 6 votes
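    The variations spelled out (compressing at the source trades remote CPU for bandwidth):

        # plain copy, saved locally as a tar file
        ssh user@host "cd targetdir; tar cfp - *" | dd of=file.tar

        # compress on the source host to save bandwidth
        ssh user@host "cd targetdir; tar czfp - *" > file.tar.gz

        # or ship uncompressed and compress locally on arrival
        ssh user@host "cd targetdir; tar cfp - *" | gzip > file.tar.gz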

What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.



Check These Out

set a reminder for 5 days in the future
Sets up a reminder 5 days in the future; note the date arithmetic. To run a job at 4pm three days from now, use at 4pm + 3 days; to run a job at 10:00am on July 31, use at 10am Jul 31; and to run a job at 1am tomorrow, use at 1am tomorrow.
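A sketch of the idea (at(1) normally mails the job's output back to you; the job body here is an illustrative placeholder, not the original command):

    # remind yourself in 5 days
    echo 'echo "follow up on the ticket"' | at now + 5 days

    # other time specs from the description:
    #   at 4pm + 3 days
    #   at 10am Jul 31
    #   at 1am tomorrow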

See how many more processes are allowed, awesome!
There is a limit to how many processes each user can run at the same time, especially with web hosts. If the maximum number of processes for your user is 200, the following sets OPTIMUM_P to 100:

    OPTIMUM_P=$(( (`ulimit -u` - `find /proc -maxdepth 1 \( -user $USER -o -group $GROUPNAME \) -type d | wc -l`) / 2 ))

This is very useful in scripts because it is such a fast, low-resource way (compared to ps, who, lsof, etc.) to determine how many processes a user is currently running. The count of running processes is subtracted from the high limit set up for the account (see limits.conf, pam, initscripts). An easy-to-understand example: this searches the current directory for shell scripts, running up to $OPTIMUM_P 'file' commands at the same time, which greatly speeds up the search:

    find . -type f | xargs -P $OPTIMUM_P -iFNAME file FNAME | sed -n '/shell script text/p'

xargs has a -P option that lets you specify how many processes to run in parallel. For instance, if you have 1000 URLs in a text file and want to download them all quickly with curl, you can fetch 100 at a time (check ps output on a separate tty for proof):

    cat url-list.txt | xargs -I '{}' -P $OPTIMUM_P curl -O '{}'

I use this in my http://www.askapache.com/linux-unix/bash_profile-functions-advanced-shell.html, especially for xargs. I like to do things as fast as possible on my servers, which span several hosting environments: some with very restrictive jailed shells limited to 20 processes, some with 200, some with 8000. In the jailed shells a fixed xargs -P10 would kill my shell or dump core, but with the above I can set the -P value dynamically, so xargs always works. If you were building a process killer (very common on cheap hosting), this would also be handy. Note that if you are only allowed 20 or so processes, you should just use -P1 with xargs.

Efficiently count files in a directory (no recursion)

    $ time perl -e 'if(opendir D,"."){@a=readdir D;print $#a - 1,"\n"}'
    205413
    real 0m0.497s   user 0m0.220s   sys 0m0.268s

    $ time { ls | wc -l; }
    205413
    real 0m3.776s   user 0m3.340s   sys 0m0.424s

EDIT: turns out this perl one-liner is mostly masturbation. This is slightly faster:

    $ time { find . -maxdepth 1 | wc -l; }
    205414
    real 0m0.456s   user 0m0.116s   sys 0m0.328s

EDIT: now a slightly faster perl version:

    $ time perl -e 'if(opendir D,"."){++$c foreach readdir D}print $c-1,"\n"'
    205414
    real 0m0.415s   user 0m0.176s   sys 0m0.232s

Which processes are listening on a specific port (e.g. port 80)
swap out "80" for your port of interest. Can use port number or named ports e.g. "http"
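The command itself is not quoted in this teaser; two common ways to answer the question (lsof and ss are assumptions, not necessarily what the original used):

    # with lsof: processes with port 80 open
    lsof -i :80

    # with ss on modern Linux: listening TCP sockets on port 80, with PIDs
    ss -ltnp '( sport = :80 )'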

Every Nth line position # (AWK)
A better way to show lines 3n + 1 of a file.
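The command is not shown here, but the classic awk idiom for every line at position 3n + 1 is:

    # print lines 1, 4, 7, ... (NR is awk's 1-based record counter)
    awk 'NR % 3 == 1' file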

Get AWS temporary credentials ready to export based on a MFA virtual appliance
You might want to secure your AWS operations by requiring an MFA token. But then, to use the API or tools, you need to pass credentials generated with that MFA token. This command asks you for the MFA code and retrieves the credentials using the AWS CLI. To print the exports, you can use: `awk '{ print "export AWS_ACCESS_KEY_ID=\"" $1 "\"\n" "export AWS_SECRET_ACCESS_KEY=\"" $2 "\"\n" "export AWS_SESSION_TOKEN=\"" $3 "\"" }'` You must adapt the command line to include: $MFA_ID, the ARN of the virtual MFA device (or the serial number of a physical one), and a TTL for the credentials.
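The retrieval step is presumably aws sts get-session-token; a sketch of the full pipeline (the variable names and the 3600-second TTL are illustrative):

    # prompt for the 6-digit code from the MFA device
    read -p "MFA code: " MFA_CODE

    # fetch temporary credentials and turn them into export statements
    aws sts get-session-token --serial-number "$MFA_ID" --token-code "$MFA_CODE" --duration-seconds 3600 --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' --output text | awk '{ print "export AWS_ACCESS_KEY_ID=\"" $1 "\"\n" "export AWS_SECRET_ACCESS_KEY=\"" $2 "\"\n" "export AWS_SESSION_TOKEN=\"" $3 "\"" }'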

Multi-line grep
Using perl you can search for patterns spanning several lines, a thing that grep can't do. Append the list of files to the above command or pipe a file through it, just as with regular grep. If you add the 's' modifier to the regex, the dot '.' also matches line endings, which is useful if you don't know how many lines lie between the parts of your pattern. Change '*' to '*?' to make it non-greedy, that is, match as few characters as possible. See also http://www.commandlinefu.com/commands/view/1764/display-a-block-of-text-with-awk for a similar thing with awk. Edit: the undef has to be put in a BEGIN block, or a match in the first line would not be found.
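The command is not quoted in this teaser; a sketch of the technique it describes (the start/end patterns and file name are placeholders):

    # undef $/ in a BEGIN block slurps each file whole, so the regex can span lines
    perl -ne 'BEGIN{undef $/} print "$&\n" while /start.*?end/sg' file.txt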

Encrypted archive with openssl and tar
The lifehacker way: http://lifehacker.com/software/top/geek-to-live--encrypt-your-data-178005.php#Alternate%20Method:%20OpenSSL "That command will encrypt the unencrypted-data.tar file with the password you choose and output the result to encrypted-data.tar.des3. To unlock the encrypted file, use the following command:"

    $ openssl des3 -d -salt -in encrypted-data.tar.des3 -out unencrypted-data.tar
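The encryption half that "that command" refers to is not shown above; presumably it is the mirror image of the decrypt call (note that des3 is a legacy cipher; aes-256-cbc is the usual modern choice):

    # pack and encrypt (openssl prompts for a password)
    tar cf unencrypted-data.tar /path/to/dir
    openssl des3 -salt -in unencrypted-data.tar -out encrypted-data.tar.des3

    # decrypt, as quoted above
    openssl des3 -d -salt -in encrypted-data.tar.des3 -out unencrypted-data.tar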

pretend to be busy in office to enjoy a cup of coffee
Uses seq in a command substitution instead of a bash brace sequence to create the increments.
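The command itself is not shown; one plausible rendering of the gag, assuming the dialog utility is installed (its --gauge widget reads percentages from stdin):

    # a fake compile that inches along for about eight minutes
    for i in $(seq 1 100); do sleep 5; echo $i; done | dialog --gauge "Compiling kernel..." 10 70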

Propagate a directory to another and create symlink to content
lndir creates, under a destination directory, a tree of symlinks pointing at every file in a source directory; really useful for propagating changes from one tree to another.
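Typical usage (lndir ships with the X11 developer utilities, e.g. the xutils-dev package on Debian):

    # mirror ~/src/project into ./build as a tree of symlinks
    lndir ~/src/project ./build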


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
