Terminal - Commands tagged progress - 23 results
F=bigdata.xz; lsof -o0 -o -Fo $F | awk -Ft -v s=$(stat -c %s $F) '/^o/{printf("%d%%\n", 100*$2/s)}'
2015-09-19 22:22:43
User: flatcap
Functions: awk stat

Imagine you've started a long-running process that involves piping data, but you forgot to add the progress-bar option to a command.


xz -dc bigdata.xz | complicated-processing-program > summary


This command uses lsof to see how much data xz has read from the file.

lsof -o0 -o -Fo FILENAME

Display offsets (-o), in decimal (-o0), in parseable form (-Fo)

This will output something like (values are illustrative):

p12345
f3
o0t9441280

Process id (p), File Descriptor (f), Offset (o)


We stat the file to get its size

stat -c %s FILENAME


Then we plug the values into awk.

Split the line at the letter t: -Ft

Define a variable holding the file's size: -v s=$(stat...)

Only work on the offset line: /^o/


Note this command was tested using the Linux version of lsof.

Because it uses lsof's machine-parseable field output (-F), it should be reasonably portable.


Thanks to @unhammer for the brilliant idea.
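The awk stage above can be pulled out into a small helper and checked against canned lsof output. The function name and the sample p/f/o values below are illustrative, not from the original post:

```shell
# offset_pct: turn "lsof -o0 -o -Fo" field output into a percentage.
# $1 is the file's size in bytes (as from "stat -c %s FILE").
offset_pct() {
    awk -Ft -v s="$1" '/^o/ { printf("%d%%\n", 100 * $2 / s) }'
}

# With -o0, lsof prints decimal offsets as "0t<bytes>"; splitting on
# the letter "t" (-Ft) makes $2 the byte offset:
printf 'p12345\nf3\no0t2500000000\n' | offset_pct 5000000000
# → 50%
```

In the real pipeline, lsof -o0 -o -Fo "$F" replaces the canned printf.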

rsync --recursive --info=progress2 <src> <dst>
2014-10-21 22:19:44
User: koter84
Functions: rsync
Tags: rsync progress

Updates rsync's output after completing each file without creating newlines: the last line is simply overwritten. This looks a lot better in scripts where you do want to see a progress indicator, but not the lengthy logs.

This option is available since rsync 3.1.0.

alias ...='while read line; do echo -n "."; done && echo ""'
alias ...="awk '{fflush(); printf \".\"}' && echo \"\""
2014-02-22 22:20:22
User: lgarron
Functions: alias

If you're running a command with a lot of output, this serves as a simple progress indicator.

This avoids the need to use `/dev/null` for silencing. It works for any command that outputs lines, updates live (`fflush` avoids buffering), and is simple to understand.
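A quick way to see the awk variant in action, with seq standing in for a chatty command (purely illustrative):

```shell
# Prints one dot per input line, flushing immediately so the dots
# appear live rather than after the pipe's buffer fills.
seq 1 5 | awk '{ fflush(); printf "." }'
echo ""
# → .....
```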

while pgrep ^dd; do pkill -INFO dd; sleep 10; done
2014-01-28 03:09:25
User: sternebrau
Functions: sleep
Tags: dd progress

While a dd is running in one terminal, open another and enter the while loop. The sample output will be displayed in the window running the dd and the while loop will exit when the dd is complete. It's possible that a "sudo" will need to be inserted before "pkill", depending on your setup, for example:

while pgrep ^dd; do sudo pkill -INFO dd; sleep 10; done
pv -tpreb /dev/sda | dd of=/dev/sdb bs=1M
2013-08-19 23:04:15
User: sc0ttyd
Functions: dd
Tags: dd progress pv

Your platform may not have pv by default. If you are using Homebrew on OS X, simply 'brew install pv'.

gcp [source] [destination]
rsync --progress user@host:/path/to/source /path/to/target/ | stdbuf -oL tr '\r' '\n' >> rsyncprogress.txt
2013-03-26 11:06:45
User: MessedUpHare
Functions: rsync tr

This line unbuffers the interactive output of rsync's --progress flag, creating a new line for every update.

This output can now be used within a script to trigger actions (or possibly be piped into a GUI generator for a progress bar).
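Once each update is on its own line, a script can pick fields out of it. The sample line below mimics rsync --progress output; its values are illustrative:

```shell
# Pull the percent-done field out of a progress line.
printf '     1,234,567  42%%    1.10MB/s    0:00:05\n' |
    awk '{ for (i = 1; i <= NF; i++) if ($i ~ /%$/) print $i }'
# → 42%
```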

2012-12-19 02:21:41
Tags: dd progress

Sends SIGINFO to the process. This is a BSD feature OS X inherited. You must have the terminal window executing dd selected when entering CTRL + T for this to work.

watch -n 1 pkill -USR1 "^dd$"
2012-08-31 05:15:45
Functions: watch
Tags: dd progress speed

Sends the "USR1" signal every 1 second (-n 1) to a process called exactly "dd".

On some systems the signal is INFO or SIGINFO instead; see the signal list in man kill.

dd if=/dev/urandom of=file.img bs=4KB& sleep 1 && pid=`pidof dd`; while [[ -d /proc/$pid ]]; do kill -USR1 $pid && sleep 10 && clear; done
2012-02-23 01:45:53
Functions: dd kill sleep

The previously-posted one-liner didn't work for me for whatever reason, so I ended up doing this instead.

pv -t -p /path/to/sqlfile.sql | mysql -uUSERNAME -pPASSWORD -D DATABASE_NAME
dd if=/dev/urandom of=file.img bs=4KB& pid=$!; while [[ -d /proc/$pid ]]; do kill -USR1 $pid && sleep 1 && clear; done
2011-06-24 21:49:10
Functions: dd kill sleep
Tags: dd progress

Only slightly different than previous commands. The benefit is that your "watch" should die when the dd command has completed. (Of course this would depend on /proc being available)
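The /proc test that makes the loop self-terminating can be seen in isolation (Linux-specific; the sleep is just a stand-in for dd):

```shell
# While a background process runs, /proc/<pid> exists; once it has
# been reaped, the directory disappears and the test fails.
sleep 5 & pid=$!
[ -d /proc/$pid ] && echo running
kill $pid
wait $pid 2>/dev/null || true
[ -d /proc/$pid ] || echo gone
```

The same check is what ends the while loop above once dd's pid is gone.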

dd if=/path/inputfile | pv | dd of=/path/outpufile
dd if=/path/to/inputfile of=/path/to/outputfile & pid=$! && sleep X && while kill -USR1 $pid; do sleep X; done
2010-12-02 15:07:18
User: cyrusza
Functions: dd kill sleep
Tags: dd copy progress

Adjust "sleep X" to your needs.

*NOTE: The first sleep is required because bash doesn't have a post-test loop syntax (do ... while).

killall -INFO dd
2010-04-22 18:38:37
User: jearsh
Functions: killall
Tags: dd progress

"killall -USR1 dd" does not work in OS X for me. However, sending INFO instead of USR1 works.

pv -cN orig < foo.tar.bz2 | bzcat | pv -cN bzcat | gzip -9 | pv -cN gzip > foo.tar.gz
2010-04-16 05:21:10
User: rkulla
Functions: gzip

In this example we convert a .tar.bz2 file to a .tar.gz file.

If you don't have Pipe Viewer, you'll have to download it via apt-get install pv, etc.

while :;do killall -USR1 dd;sleep 1;done
2010-04-07 09:23:31
User: oernii2
Functions: killall
Tags: dd progress speed

Every second, sends dd the USR1 signal, which causes dd to print its progress.

rsync --progress file1 file2
pv file1 > file2
2010-02-25 19:18:32
User: ppaschka

Only works on single files, doesn't preserve permissions/timestamps/ownership.

dc3dd progress=on bs=512 count=2048 if=/dev/zero of=/dev/null
tar -cf - . | pv -s $(du -sb . | awk '{print $1}') | gzip > out.tgz
2009-12-18 17:09:08
User: opertinicy
Functions: awk du gzip tar

What happens here is we tell tar to create ("-c") an archive of all files in the current dir "." (recursively) and output the data to stdout ("-f -"). Next we specify the size ("-s") to pv of all files in the current dir. The "du -sb . | awk '{print $1}'" returns the number of bytes in the current dir, and it gets fed as the "-s" parameter to pv. Next we gzip the whole content and output the result to the out.tgz file. This way pv knows how much data is still left to be processed and can show a progress bar with an estimate of the time remaining.

Credit: Peteris Krumins http://www.catonmat.net/blog/unix-utilities-pipe-viewer/

copy(){ cp -v "$1" "$2" & watch -n 1 "du -h \"$1\" \"$2\"; printf '%s%%\n' \$(( 100 * \$(du -k \"$2\" | cut -f1) / \$(du -k \"$1\" | cut -f1) ))"; }