Commands by jennings6k (1)

  • Tested with bash v4.1.5 on Ubuntu 10.10. Limitations: as written above, this only works for programs with no file extension (i.e. 'proggy', but not 'proggy.sh'), because \eb maps to the readline function backward-word rather than shell-backward-word (which is unbound by default on Ubuntu), and correspondingly for \ef. If you're willing to have Ctrl-f and Ctrl-g taken up too, you can insert the following lines into ~/.inputrc, in which case invoking Ctrl-e will do the right thing both for "proggy" and "proggy.sh":
    -- cut here --
    \C-f: shell-backward-word
    \C-g: shell-forward-word
    "\C-e": "\C-f`which \C-g`\e\C-e"
    -- cut here --


    bind '"\C-e":"\eb `which \ef`\e\C-e"'
    jennings6k · 2011-01-26 16:11:52
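To illustrate how the binding behaves (the path shown is hypothetical): type a command name, press Ctrl-e, and readline jumps back a word, wraps it in `which ...`, and shell-expand-line (\e\C-e) replaces it in place with the full path.

    $ proggy                    # press Ctrl-e here
    $ /usr/local/bin/proggy     # the line is rewritten in place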

What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.



Check These Out

A function to output a man page as a pdf file
Tested on Fedora 12. This function takes a man page and converts it to PDF, saving the output to the current working directory. In GNOME, you can then view the output with "gnome-open file.pdf" or your favorite PDF viewer.
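A minimal sketch of such a function (the name man2pdf and the use of ps2pdf from ghostscript are assumptions; the original function isn't reproduced above):

    # Render the man page to PostScript with man -t, then convert to PDF
    # in the current directory. Usage: man2pdf ls
    man2pdf() { man -t "$1" | ps2pdf - "${1}.pdf"; }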

Which processes are listening on a specific port (e.g. port 80)
swap out "80" for your port of interest. Can use port number or named ports e.g. "http"

Double your disk read performance in a single command
(WARN) This will absolutely not work on all systems, unless you're running large, high-speed hardware RAID arrays, for example systems using Dell PERC 5/i SAS/SATA arrays. If you have a hardware RAID array, try it; it certainly won't hurt. You can benchmark disk read speed with a large file on your system, before and after, using: $ time dd if=/tmp/disk.iso of=/dev/null bs=256k To read the current value of the block-device parameter known as readahead: $ blockdev --getra /dev/sdb Then set a value of 1024, 2048, 4096, 8192, or maybe 16384; the right value really depends on the number of hard disks, their speed, your RAID controller, etc. (see sample)
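Based on that description, the one-liner itself is presumably a readahead adjustment of the form (the value and device are examples; requires root):

    $ blockdev --setra 4096 /dev/sdb    # readahead is measured in 512-byte sectors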

Sort installed rpms in alphabetic order with their size.
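No description accompanies this entry; one plausible form of the command (the query format is an assumption) is:

    $ rpm -qa --queryformat '%{NAME} %{SIZE}\n' | sort    # name first, so the sort is alphabetic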

See how many more processes are allowed, awesome!
There is a limit to how many processes you can run at the same time for each user, especially with web hosts. If the maximum # of processes for your user is 200, then the following sets OPTIMUM_P to 100:

$ OPTIMUM_P=$(( (`ulimit -u` - `find /proc -maxdepth 1 \( -user $USER -o -group $GROUPNAME \) -type d|wc -l`) / 2 ))

This is very useful in scripts because it is such a fast, low-resource way (compared to ps, who, lsof, etc.) to determine how many processes are currently running for whichever user. The number of currently running processes is subtracted from the high limit set up for the account (see limits.conf, pam, initscript).

An easy-to-understand example: this searches the current directory for shell scripts, and runs up to 100 'file' commands at the same time, greatly speeding up the command:

$ find . -type f | xargs -P $OPTIMUM_P -iFNAME file FNAME | sed -n '/shell script text/p'

I am using it in my http://www.askapache.com/linux-unix/bash_profile-functions-advanced-shell.html especially for the xargs command. xargs has a -P option that lets you specify how many processes to run at the same time. For instance, if you have 1000 URLs in a text file and want to download all of them fast with curl, you could download 100 at a time (check ps output on a separate [pt]ty for proof) like this:

$ cat url-list.txt | xargs -I '{}' -P $OPTIMUM_P curl -O '{}'

I like to do things as fast as possible on my servers. I have several types of servers and hosting environments, some with very restrictive jail shells with a 20-process limit, some with 200, some with 8000, so for the jailed shells my xargs -P10 would kill my shell or dump core. Using the above I can set the -P value dynamically, so xargs always works.

If you were building a process-killer (very common for cheap hosting) this would also be handy. Note that if you are only allowed 20 or so processes, you should just use -P1 with xargs.

Skip over .svn directories when using the "find" command.
The "find" command can be annoying when used inside of a Subversion (or CVS) working directory. Obviously, you can combine this with other predicates and commands to create a more elaborate pipeline: $ find /var/svn -type f -not \( -name .svn -prune \) -print0 | xargs -0 md5sum Note: You can use my "dont-go-there.sh" script to wrap the "find" command and do this automatically at http://forwardlateral.com/blog/2006/02/27/dont-go-there/

Wait for file to stop changing
Here's a way to wait for a file (a download, a logfile, etc) to stop changing, then do something. As written it will just return to the prompt, but you could add a "; echo DONE" or whatever at the end.

This just compares the full output of "ls" every 10 seconds, and keeps going as long as that output has changed since the last interval. If the file is being appended to, the size will change, and if it's being modified without growing, the timestamp from the "--full-time" option will have changed. The output of just "ls -l" isn't sufficient since by default it doesn't show seconds, just minutes.

Waiting for a file to stop changing is not a very elegant or reliable way to measure that some process is finished - if you know the process ID there are much better ways. This method will also give a false positive if the changes to the target file are delayed longer than the sleep interval for any reason (network timeouts, etc). But sometimes the process that is writing the file doesn't exit, rather it continues on doing something else, so this approach can be useful if you understand its limitations.
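A minimal sketch of the idea (the original one-liner isn't reproduced above; the filename and the 10-second interval are examples):

    $ f=download.iso                          # hypothetical target file
    $ prev=; cur=$(ls -l --full-time "$f")
    $ while [ "$cur" != "$prev" ]; do prev=$cur; sleep 10; cur=$(ls -l --full-time "$f"); done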

Decrypt passwords from Google Chrome and Chromium.
Read this before down-voting and commenting that it is not working: this won't work on recent versions (~75 and later), since the database file is locked and has to be decrypted. It is useful if you have an old HDD with a Chrome installation and want to decrypt your old passwords fast.

Ping a MAC address
First install arp-scan if you don't have it. arp-scan 10.1.1.0/24 shows the IP and MAC address of each host on the local network; awk '/00:1b:11:dc:a9:65/ {print $1}' extracts the IP associated with the given MAC; and the backticks perform command substitution, passing that IP to ping.
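Putting those pieces together, the full one-liner is presumably (the MAC and subnet are the examples from the description; arp-scan usually needs root):

    $ ping `arp-scan 10.1.1.0/24 | awk '/00:1b:11:dc:a9:65/ {print $1}'`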

Encrypt a file on the command line
This will encrypt your single file and create a filename.gpg file. Option: -c : encrypt with a symmetric cipher. To decrypt: $ gpg -d sample.rb.gpg
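The encrypting command itself isn't reproduced above; based on the description it is presumably:

    $ gpg -c sample.rb      # prompts for a passphrase, writes sample.rb.gpg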


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
