Commands matching progress (155)

  • pv lets a user see the progress of data through a pipeline by showing time elapsed, percentage completed (with a progress bar), current throughput rate, total data transferred, and ETA. (man pv) A variant with explicit display switches follows below.


    59
    pv sourcefile > destfile
    edo · 2010-03-20 20:55:18 25
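
  If the defaults don't suit, pv's display can be assembled from its single-letter switches (-p progress bar, -t timer, -e ETA, -r rate, -b byte count); the man page notes the default behaviour is as if all five were given:

    pv -p -t -e -r -b sourcefile > destfile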
  • Pipe Viewer is a terminal-based tool for monitoring the progress of data through a pipeline. It can be inserted into any normal pipeline between two processes to give a visual indication of how quickly data is passing through, how long it has taken, how near to completion it is, and an estimate of how long it will be until completion. Source: http://www.catonmat.net/blog/unix-utilities-pipe-viewer/ (A named, two-stage variant follows below.)


    51
    pv access.log | gzip > access.log.gz
    p3k · 2009-02-06 08:50:40 222
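
  pv can also appear more than once in the same pipeline: with -c (cursor positioning) and -N (name), each instance gets its own labelled line, so you can watch raw and compressed throughput at once. A sketch reusing the access.log example above ("raw" and "gzipped" are just illustrative labels):

    pv -cN raw access.log | gzip | pv -cN gzipped > access.log.gz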
  • What happens here: we tell tar to create ("-c") an archive of all files in the current dir (".", recursively) and write the data to stdout ("-f -"). Next we pass pv the total size ("-s") of all files in the current dir: du -sb . | awk '{print $1}' returns the number of bytes, which is fed to pv as the "-s" parameter. Finally we gzip the whole stream and write the result to out.tgz. This way pv knows how much data is left to process and can report, say, that another 4 mins 49 secs remain. Credit: Peteris Krumins http://www.catonmat.net/blog/unix-utilities-pipe-viewer/ (The reverse, extraction with a progress bar, is sketched below.)


    27
    tar -cf - . | pv -s $(du -sb . | awk '{print $1}') | gzip > out.tgz
    opertinicy · 2009-12-18 17:09:08 13
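
  The same trick works in reverse: when extracting, pv can read the archive directly and take the size from the file itself, so no du is needed. A sketch, assuming the out.tgz created above:

    pv out.tgz | tar -xzf -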
  • Halt script progress until a key has been pressed. Source: http://bash-hackers.org/wiki/doku.php/mirroring/bashfaq/065 (A scripted usage sketch follows below.)


    25
    read -sn 1 -p "Press any key to continue..."
    kalaxy · 2009-11-05 21:53:23 11
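
  A sketch of typical scripted use - pause before a destructive step; the trailing echo restores the newline that -s suppresses, and -r (not in the original) keeps backslashes literal:

    echo "About to overwrite the target disk."
    read -rsn 1 -p "Press any key to continue..."; echo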
  • Sends SIGINFO to the foreground process. This is a BSD feature that OS X inherited. The terminal window running dd must be focused when you press CTRL + T for this to work. (To signal dd from another terminal instead, see the sketch below.)


    25
    CTRL + T
    unixmonkey44467 · 2012-12-19 02:21:41 20
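
  If you'd rather not switch focus to dd's terminal, the same signal can be sent from anywhere; a sketch assuming a single dd process is running (pgrep -x matches the exact process name):

    kill -INFO $(pgrep -x dd)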
  • Resumes a failed secure copy using rsync (useful when you transfer big files, like DB dumps, through a VPN). It requires rsync installed on both hosts.
    local -> remote: rsync --partial --progress --rsh=ssh $file_source $user@$host:$destination_file
    remote -> local: rsync --partial --progress --rsh=ssh $user@$host:$remote_file $destination_file


    23
    rsync --partial --progress --rsh=ssh $file_source $user@$host:$destination_file
    dr_gogeta86 · 2009-04-01 13:13:14 10
  • This command uses pv to show dd's progress. Notes on use with dd: dd's block size (bs=...) is a widely debated switch and is usually set between 1024 and 4096; you won't see much performance improvement beyond 4096, and regardless of the block size dd transfers every bit of data. pv's '-s' switch should be as close to the actual size of the data source as possible. dd's output file ('of=...') can be named anything, since the data within it is the same regardless of the filename / extension. (A sketch that computes the exact device size instead of guessing "2G" follows below.)


    20
    sudo dd if=/dev/sdc bs=4096 | pv -s 2G | sudo dd bs=4096 of=~/USB_BLACK_BACKUP.IMG
    BruceLEET · 2010-07-28 22:39:46 19
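
  Rather than guessing "2G", on Linux you can ask the kernel for the device's exact size; a sketch assuming the same source device (blockdev --getsize64 prints the size in bytes):

    sudo dd if=/dev/sdc bs=4096 | pv -s $(sudo blockdev --getsize64 /dev/sdc) | dd bs=4096 of=~/USB_BLACK_BACKUP.IMG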
  • The command copies a file from a remote SSH host on port 8322 with a bandwidth limit of 100KB/sec:
    --progress shows a progress bar
    --partial turns partial download on, so you can resume the process if something goes wrong
    --bwlimit limits bandwidth to the specified KB/sec
    --ipv4 selects IPv4 as preferred
    I find it useful to create the following alias in ~/.bash_aliases, ~/.bash_profile, ~/.bash_login or ~/.bashrc, where appropriate:
    alias myscp='rsync --progress --partial --rsh="ssh -p 8322" --bwlimit=100 --ipv4'


    17
    rsync --progress --partial --rsh="ssh -p 8322" --bwlimit=100 --ipv4 user@domain.com:~/file.tgz .
    ruslan · 2011-02-10 14:25:22 9
  • "killall -USR1 dd" does not work in OS X for me. However, sending INFO instead of USR1 works. Show Sample Output


    16
    killall -INFO dd
    jearsh · 2010-04-22 18:38:37 5
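
  For a running readout rather than a one-shot report, send the signal in a loop; a sketch (stop it with Ctrl-C once dd exits):

    while :; do killall -INFO dd; sleep 5; done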

  • 15
    rsync --progress file1 file2
    fletch · 2010-02-25 19:57:01 15
  • Searches backwards through your command history for the typed text. Repeatedly hitting Ctrl-R searches progressively further back; Return invokes the matched command. (A forward-search companion is noted below.)


    15
    Ctrl-R <search-text>
    tarkasteve · 2009-09-20 05:07:31 13
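
  Its companion, Ctrl-S, searches forward through the same history, but most terminals grab Ctrl-S for flow control first; a sketch of freeing it, assuming bash with readline:

    stty -ixon
    Ctrl-S <search-text>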
  • Put it into your sh startup script (I use alias scpresume='rsync --partial --progress --rsh=ssh' in bash). When a file transfer via scp has aborted, just use scpresume instead of scp and rsync will copy only the parts of the file that haven't yet been transmitted.


    14
    rsync --partial --progress --rsh=ssh SOURCE DESTINATION
    episodeiv · 2009-02-16 16:22:10 694
  • Very useful in shell scripts, because you can run a task in the background using job control and print progress until it completes. Here's an example of how I use it in backup scripts to run gpg in the background while encrypting an archive file (which I create the same way). $! is the process ID of the last command run; it is saved here in the variable PI, then sleeper is called with the process ID of the gpg task (PI) and told to output ":" every 3 seconds instead of the defaults "." and 1 second. A shorter version would be sleeper $!. The wait is also used here, though it may not be needed on your system:
    echo ">>> ENCRYPTING SQL BACKUP"
    gpg --output archive.tgz.asc --encrypt archive.tgz 1>/dev/null &
    PI=$!; sleeper $PI ":" 3; wait $PI && rm archive.tgz &>/dev/null
    Previously, to get around $! not always being available, I would instead check for the existence of the process ID by testing whether the directory /proc/$PID existed, but not everyone uses proc anymore. That version is currently the one at http://www.askapache.com/linux-unix/bash_profile-functions-advanced-shell.html but I plan on upgrading to this new version soon.


    14
    sleeper(){ while `ps -p $1 &>/dev/null`; do echo -n "${2:-.}"; sleep ${3:-1}; done; }; export -f sleeper
    AskApache · 2009-09-21 07:36:25 8
  • This will back up the _contents_ of /media/SOURCE to /media/TARGET, where TARGET is formatted with NTFS. The --modify-window lets rsync ignore the less accurate timestamps of NTFS.


    13
    rsync -rtvu --modify-window=1 --progress /media/SOURCE/ /media/TARGET/
    0x2142 · 2009-07-05 07:40:10 11
  • Dialog's gauge widget accepts progress updates on stdin. This version runs dialog once and updates it every second; there's no need to use timeout, which causes screen flicker since it restarts dialog for each update. (A percentage-driven sketch follows below.)


    13
    for i in {0..600}; do echo $i; sleep 1; done | dialog --gauge "Install..." 6 40
    dennisw · 2010-10-05 02:29:23 4
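
  The gauge reads each number on stdin as a percentage (0-100), so a real task only needs its progress mapped onto that range; a sketch with made-up step percentages, passing an initial percentage of 0 as the final argument:

    (for p in 10 30 60 100; do echo $p; sleep 1; done) | dialog --gauge "Install..." 6 40 0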

  • 12
    dd if=/path/inputfile | pv | dd of=/path/outpufile
    lucafaus · 2010-12-02 18:11:42 5
  • This creates an archive that does the following:
    rsync (everyone seems to like -z, but it is much slower for me):
    -a: archive mode - recursive, preserves owner, permissions, modification times and group, copies symlinks as symlinks, preserves device files
    -H: preserves hard links
    -A: preserves ACLs
    -X: preserves extended attributes
    -x: don't cross file-system boundaries
    -v: increase verbosity
    --numeric-ids: don't map uid/gid values by user/group name
    --delete: delete extraneous files from dest dirs (differential clean-up during sync)
    --progress: show progress during transfer
    ssh:
    -T: turn off pseudo-tty to decrease CPU load on the destination
    -c arcfour: use the weakest but fastest SSH encryption; must specify "Ciphers arcfour" in sshd_config on the destination
    -o Compression=no: turn off SSH compression
    -x: turn off X forwarding if it is on by default
    Flip: rsync -aHAXxv --numeric-ids --delete --progress -e "ssh -T -c arcfour -o Compression=no -x" [source_dir] [dest_host:/dest_dir]
    (See the note below on arcfour and modern OpenSSH.)


    12
    rsync -aHAXxv --numeric-ids --delete --progress -e "ssh -T -c arcfour -o Compression=no -x" user@<source>:<source_dir> <dest_dir>
    somaddict · 2012-12-26 13:46:23 7
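
  One caveat: arcfour has been removed from modern OpenSSH, so -c arcfour will be rejected there. A hedged substitute, swapping in a fast AEAD cipher that current OpenSSH does ship:

    rsync -aHAXxv --numeric-ids --delete --progress -e "ssh -T -c aes128-gcm@openssh.com -o Compression=no -x" user@<source>:<source_dir> <dest_dir>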
  • Nothing fancy, just a regular filesystem scan that calls the badblocks program and shows some progress info. The options used are:
    -c: check for bad sectors with the badblocks program
    -D: optimize directories if possible
    -f: force check, even if the filesystem seems clean
    -t: print timing stats (use -tt for more)
    -y: assume the answer "yes" to all questions
    -C 0: print progress info to stdout
    /dev/sdxx: the partition to check (e.g. /dev/sda1 for the first partition on the first hard disk)
    NOTE: Never run fsck on a mounted partition! (A preparation sketch follows below.)


    11
    fsck.ext4 -cDfty -C 0 /dev/sdxx
    mtron · 2011-05-18 13:13:29 5
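
  A sketch of the preparation, assuming the target turns out to be /dev/sdb1: list partitions with their filesystems and mountpoints, then make sure the one to check is unmounted:

    lsblk -f
    sudo umount /dev/sdb1
    sudo fsck.ext4 -cDfty -C 0 /dev/sdb1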
  • [Re]verify those burned CDs early and often - better safe than sorry. At a bare minimum you need the good old dd and md5sum commands, but why not throw in a super user-friendly progress gauge with the pv command. Adjust the -s "size" argument to your needs (700 MB in this case), and capture the checksum in a "test.md5" file with tee, just in case for near-future reference. Uber-bonus ability: positively identify those unlabeled mystery discs. (A sketch for checking a disc against its original ISO follows below.)


    10
    dd if=/dev/cdrom | pv -s 700m | md5sum | tee test.md5
    asmoore82 · 2009-03-09 00:11:42 13
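
  To identify a mystery disc against a known image, compare the stream hash with the image's hash; a sketch assuming you still have a candidate original.iso (the hashes only match if the burn was byte-identical and the read stops at the same length, so treat a mismatch with padding in mind):

    dd if=/dev/cdrom | pv -s 700m | md5sum
    md5sum original.iso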
  • Every second this sends dd the USR1 signal, which causes dd to print its progress. (On modern GNU coreutils there is a built-in alternative, sketched below.)


    10
    while :;do killall -USR1 dd;sleep 1;done
    oernii2 · 2010-04-07 09:23:31 9
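
  On GNU coreutils 8.24 and later the loop is unnecessary, since dd can print periodic transfer statistics itself; a sketch assuming a recent coreutils:

    dd if=/dev/sda of=disk.img bs=4M status=progress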
  • -r for recursive (if you want to copy entire directories)
    src for the source file (or wildcards)
    dst for the destination
    --progress to show a progress bar


    10
    rsync -rv <src> <dst> --progress
    fecub · 2011-08-05 09:29:12 14
  • This version was mentioned in the comments. Credits go to flatcap.


    10
    pv -tpreb /dev/urandom | dd of=file.img
    marrowsuck · 2012-04-11 22:32:52 16
  • Running this code will execute dd in the background; you grab the process ID with '$!' and assign it to the 'pid' variable. Now you can watch the progress with:
    while true; do kill -USR1 $pid && sleep 1 && clear; done
    The important thing to grasp here isn't the filename or location of your input or output, or even the block size for that matter, but the fact that you can keep an eye on dd as it's running to see where you are during its execution.


    9
    dd if=/dev/urandom of=file.img bs=4KB& pid=$!
    atoponce · 2009-04-08 05:56:47 21
  • Piping through pv shows a simple progress/speed bar for dd. This is a replacement for my otherwise favorite 'while :;do killall -USR1 dd;sleep 1;done'.


    9
    dd if=/dev/nst0 |pv|dd of=restored_file.tar
    oernii2 · 2010-04-07 09:21:18 22

  • 9
    dd if=/dev/zero | pv | dd of=/dev/null
    richard · 2010-05-14 16:58:42 6
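
  With /dev/zero as the source and /dev/null as the sink, this measures raw pipe throughput. A related sketch, assuming you want to throttle rather than measure - pv's -L flag rate-limits the stream, here to 10 MB/s:

    dd if=/dev/zero | pv -L 10m | dd of=/dev/null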

