What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.

If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that receive at least 3 votes and at least 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):

May 19, 2015 - A Look At The New Commandlinefu
I've put together a short writeup on what kind of newness you can expect from the next iteration of clfu. Check it out here.
March 2, 2015 - New Management
I'm Jon, I'll be maintaining and improving clfu. Thanks to David for building such a great resource!

Terminal - Commands by dmmst19 - 11 results
while [ "$(ls -l --full-time TargetFile)" != "$a" ] ; do a=$(ls -l --full-time TargetFile); sleep 10; done
2015-05-09 03:19:49
User: dmmst19
Functions: ls sleep

Here's a way to wait for a file (a download, a logfile, etc.) to stop changing, then do something. As written it will just return to the prompt, but you could add a "; echo DONE" or whatever at the end.

This just compares the full output of "ls" every 10 seconds, and keeps looping as long as that output has changed since the last interval. If the file is being appended to, the size will change; if it's being modified without growing, the timestamp from the --full-time option will change. The output of plain "ls -l" isn't sufficient, since by default it shows only minutes, not seconds.

Waiting for a file to stop changing is not a very elegant or reliable way to determine that some process is finished - if you know the process ID there are much better ways. This method will also give a false positive if writes to the target file are delayed longer than the sleep interval for any reason (network timeouts, etc.). But sometimes the process writing the file doesn't exit - it continues on doing something else - so this approach can be useful if you understand its limitations.
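A variant of the same idea uses stat instead of parsing ls output, wrapped in a function. This is a sketch: the name wait_quiet is my own, and the -c format syntax assumes GNU stat (BSD stat would need -f '%m %z' instead).

```shell
# Wait until FILE's mtime (%Y) and size (%s) stop changing between checks.
# Returns immediately if the file doesn't exist (stat fails).
wait_quiet() {
  file=$1 interval=${2:-10} prev=""
  while cur=$(stat -c '%Y %s' "$file" 2>/dev/null) && [ "$cur" != "$prev" ]; do
    prev=$cur
    sleep "$interval"
  done
}

wait_quiet TargetFile 10 && echo DONE
```

Checking mtime and size directly avoids depending on the exact layout of ls output, which varies between systems and locales.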

for i in *; do identify "$i" | awk '{split($3,a,"x"); if (a[2]>a[1]) print $1;}'; done
2014-05-27 23:41:24
User: dmmst19
Functions: awk

Most people take photos in landscape orientation (wider than it is tall). Sometimes though you turn the camera sideways to capture a narrow/tall subject. Assuming you then manually rotate those picture files 90 degrees for proper viewing on screen or photo frame, you now have a mix of orientations in your photos directory.

This command will print out the name of each photo in the current directory whose vertical resolution is larger than its horizontal resolution (i.e., portrait orientation). You can then take that list of files and deal with them however you need to, such as re-rotating back to landscape for consistent printing with all the others.

This command requires the "identify" command from the ImageMagick command-line image manipulation suite. Sample output from identify:

identify PICT2821.JPG

PICT2821.JPG JPEG 1536x2048 1536x2048+0+0 8-bit DirectClass 688KB 0.016u 0:00.006
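To see the awk logic in isolation, you can feed it the sample line above by hand - no ImageMagick needed for this check:

```shell
# Field 3 of identify's output is WIDTHxHEIGHT; split on "x" and
# print the filename when the height (a[2]) exceeds the width (a[1]).
printf 'PICT2821.JPG JPEG 1536x2048 1536x2048+0+0 8-bit DirectClass 688KB\n' |
  awk '{ split($3, a, "x"); if (a[2] > a[1]) print $1 }'
# → PICT2821.JPG
```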

if [[ ":$PATH:" != *":$dir:"* ]]; then PATH=${PATH}:$dir; fi
2013-08-11 01:19:13
User: dmmst19
Tags: bash PATH $PATH

Sometimes in a script you want to make sure that a directory is in the path, and add it in if it's not already there. In this example, $dir contains the new directory you want to add to the path if it's not already present.

There are multiple ways to do this, but this one is a nice clean shell-internal approach. I based it on http://stackoverflow.com/a/1397020.

You can also do it using tr to separate the path into lines and grep -x to look for exact matches, like this:

if ! echo "$PATH" | tr ":" "\n" | grep -qx "$dir" ; then PATH=$PATH:$dir ; fi

which I got from http://stackoverflow.com/a/5048977.

Or replace the "echo | tr" part with a shell parameter expansion, like

if ! echo "${PATH//:/$'\n'}" | grep -qx "$dir" ; then PATH=$PATH:$dir ; fi

which I got from http://www.commandlinefu.com/commands/view/3209/.

There are also other more regex-y ways to do it, but I find the ones listed here easiest to follow.

Note some of this is specific to the bash shell.
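The bash-only [[ ... ]] pattern match can be made portable with a case statement; here it is wrapped as a small function (the name pathadd is my own, but the body is plain POSIX sh):

```shell
# Append $1 to PATH only if it is not already one of its components.
pathadd() {
  case ":$PATH:" in
    *":$1:"*) ;;          # already present, nothing to do
    *) PATH=$PATH:$1 ;;   # append
  esac
}
```

Running pathadd /opt/tools twice leaves PATH with a single /opt/tools entry, so it's safe to call from scripts that may be sourced repeatedly.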

find . -printf "touch -m -d \"%t\" '%p'\n" | tee /tmp/retime.sh
2012-11-05 20:32:05
User: dmmst19
Functions: find tee

Sometimes when copying files from one place to another, the timestamps get lost. Maybe you forgot to add a flag to preserve timestamps in your copy command. You're sure the files are exactly the same in both locations, but the timestamps of the files in the new home are wrong and you need them to match the source.

Using this command, you will get a shell script (/tmp/retime.sh) that you can move to the new location and just execute - it will change the timestamps on all the files and directories back to their previous values. Make sure you're in the right directory when you launch it; otherwise all the touch commands will create new zero-length files with those names. Since find's output includes ".", it will also change the timestamp of the current directory.

Ideally rsync would be the way to handle this - since it only sends changes by default, there would be relatively little network traffic resulting. But rsync has to read the entire file contents on both sides to be sure no bytes have changed, potentially causing a huge amount of local disk I/O on each side. This could be a problem if your files are large. My approach avoids all the comparison I/O. I've seen comments that rsync with the "--size-only" and "--times" options should do this also, but it didn't seem to do what I wanted in my test. With my approach you can review/edit the output commands before running them, so you can tell exactly what will happen.

The "tee" command both displays the output on the screen for your review, AND saves it to the file /tmp/retime.sh.

Credit: got this idea from Stone's answer at http://serverfault.com/questions/344731/rsync-copying-over-timestamps-only?rq=1, and combined it into one line.
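A quick way to preview the generated script before trusting it is to run the find in a scratch directory first. This demo assumes GNU find (for -printf) and uses %t, the modification time, to match touch -m; the file name is just an example.

```shell
# Create a scratch directory with one file and show what find emits:
# each line is itself a touch command quoting the timestamp and path.
d=$(mktemp -d)
touch "$d/example.txt"
( cd "$d" && find . -printf "touch -m -d \"%t\" '%p'\n" )
rm -rf "$d"
```

Because '%p' is single-quoted in the output, paths with spaces survive, though a path containing a single quote would still break the generated script.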

ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no username@host
2012-04-20 01:54:04
User: dmmst19
Functions: ssh

This command will bypass checking the host key of the target server against the local known_hosts file.

When you SSH to a server whose host key does not match the one stored in your local machine's known_hosts file, you'll get an error like "WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!" that indicates a key mismatch. If you know the key has legitimately changed (like the server was reinstalled), a permanent solution is to remove the stored key for that server in known_hosts.

However, there are some occasions where you may not want to make the permanent change. For example, you've done some port-forwarding trickery with ssh -R or ssh -L, and are doing ssh user@localhost to connect over the port-forwarding to some other machine (not actually your localhost). Since this is usually temporary, you probably don't want to change the known_hosts file. This command is useful for those situations.

Credit: Command found at http://linuxcommando.blogspot.com/2008/10/how-to-disable-ssh-host-key-checking.html. Further discussion of how it works is there also.

Note this is a bit different than command #5307 - with that one you will still be prompted to store the unrecognized key, whereas this one won't prompt you for the key at all.
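If you hit this situation repeatedly with the same forwarded port, the two options can also live in ~/.ssh/config under a dedicated alias, so the relaxed checking applies only to that alias. The host name and port below are made up for illustration:

```
Host throwaway-tunnel
    HostName localhost
    Port 2222
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking no
```

Then "ssh throwaway-tunnel" skips known_hosts handling for the tunnel, while ordinary connections to localhost or anywhere else keep full host-key verification.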

wget -S --spider http://osswin.sourceforge.net/ 2>&1 | grep Mod
2012-04-18 03:43:33
User: dmmst19
Functions: grep wget

I used to use the Firefox "View page info" feature a lot to determine how stale the web page I was looking at was. Now that I use mostly Chrome I miss that feature, so here is a command line alternative using wget. The -S option displays the server response headers, and --spider says not to download any files/pages, just fetch the headers. The output goes to stderr, so to grep it you use 2>&1 to combine the stderr stream with stdout, then pipe that to grep for Last-Modified.

You can use curl instead if you have it installed, like this:

curl --head -s http://osswin.sourceforge.net | grep Mod

cat /proc/PID/limits
2011-12-14 16:49:06
User: dmmst19
Functions: cat

When dealing with system resource limits like max number of processes and open files per user, it can be hard to tell exactly what's happening. The /etc/security/limits.conf file defines the ceiling for the values, but not what they currently are, while

ulimit -a

will show you the current values for your shell, and you can set them for new logins in /etc/profile and/or ~/.bashrc with a command like:

ulimit -S -n 100000 >/dev/null 2>&1

But with the variability in when those files get read (login vs any shell startup, interactive vs non-interactive) it can be difficult to know for sure what values apply to processes that are currently running, like database or app servers. Just find the PID via "ps aux | grep programname", then look at that PID's "limits" file in /proc. Then you'll know for sure what actually applies to that process.
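For example, to check just the open-file ceiling of your current shell, grep its own limits file ($$ expands to the shell's PID; the field name is as it appears in /proc on Linux):

```shell
# Soft and hard "Max open files" limits for the current shell process
grep 'Max open files' "/proc/$$/limits"
```

Substitute any other PID to inspect a running daemon the same way.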

# su - <user> ; script /dev/null ; screen -r
2011-07-04 16:26:10
User: dmmst19
Functions: screen script su
Tags: screen su pty

Normally, if you su to another user from root and try to resume that other user's screen session, you will get an error like "Cannot open your terminal '/dev/pts/0' - please check." This is because the other user doesn't have permission for root's pty. You can get around this by running a "script" session as the new user, before trying to resume the screen session. Note you will have to execute each of the three commands separately, not all on the same line as shown here.

Credit: I found this at http://www.hjackson.org/blog/archives/2008/11/29/cannot-open-your-terminal-dev-pts-please-check.

dumpe2fs -h /dev/sdX
2011-01-22 23:50:03
User: dmmst19
Functions: dumpe2fs

You are probably aware that a percentage of disk space on an ext2/ext3 file system is reserved for root (typically 5%). As documented elsewhere, this can be reduced to 1% with

tune2fs -m 1 /dev/sdX (where X = drive/partition, like /dev/sda1)

but how do you check to see what the existing reserved block percentage actually is before making the change? You can find that with

dumpe2fs -h /dev/sdX

You get a raw block count and a reserved block count, from which you can calculate the percentage. If it already works out to 1%, you won't gain any more available space by setting it to 1% again.

FYI: if your disks are IDE instead of SCSI, your filesystems will be /dev/hdX instead of /dev/sdX.
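The percentage calculation can be automated by piping the dumpe2fs output into awk. The sketch below feeds in sample counts (made up to illustrate a 1% reservation) so the arithmetic is visible; on a real system you would replace the printf with "dumpe2fs -h /dev/sdX 2>/dev/null", which needs root.

```shell
# "Block count" and "Reserved block count" lines as dumpe2fs -h prints them;
# split on ":" so $2 is the numeric value, then compute the ratio.
printf 'Block count:              5242880\nReserved block count:     52428\n' |
  awk -F: '/^Block count/          { t = $2 }
           /^Reserved block count/ { r = $2 }
           END { printf "%.1f%% reserved\n", 100 * r / t }'
# → 1.0% reserved
```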

perl -MMIME::Base64 -ne 'print decode_base64($_)' < file.txt > out
2010-12-13 23:35:20
User: dmmst19
Functions: perl

If you are in an environment where you don't have the base64 executable or MIME tools available, this can be very handy for salvaging email attachments when the headers are mangled but the encoded document itself is intact.

mail -s "subject" user@todomain.com <emailbody.txt -- -f customfrom@fromdomain.com -F 'From Display Name'
2010-01-18 19:55:27
User: dmmst19
Functions: mail
Tags: cronjob mail

It's very common to have cron jobs that send emails as their output, but the From: address is whatever account the cron job is running under, which is often not the address you want replies to go to. Here's a way to change the From: address right on the command line.

What's happening here is that the "--" separates the options to the mail client from options for the sendmail backend, so the -f and -F get passed through to sendmail and interpreted there. This works even on a system where postfix is the active mailer - it appears postfix supports the same options.

I think it's possible to customize the From: address using mutt as a command line mailer also, but most servers don't have mutt preinstalled.