Commands tagged pv (38)

  • Run this entry's command at the destination, aka the server. Then run the following at the source, aka the client: tar -cf - /srcfolder | pv | nc www.home.com 50002 If you want an ETA and a percentage, give pv the total size: tar -cf - /srcfolder | pv -s `du -sb /srcfolder | awk '{print $1}'` | nc www.home.com 50002 I have this in a lot better detail, where there is more room to talk about it, on my site: http://www.kossboss.com/linuxtarpvncssh (both ends are sketched together after this entry) Show Sample Output


    0
    while true; do nc -l -p 50002 | pv | tar -xf -; done
    bhbmaster · 2013-05-30 07:17:23 10
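
    For reference, a hedged sketch of both ends of this transfer together, using the placeholders from the entry above (www.home.com, /srcfolder, port 50002); the -l -p listen syntax assumes a traditional/GNU-style netcat, and du -sb assumes GNU du:
    # destination/server: listen and unpack whatever arrives
    while true; do nc -l -p 50002 | pv | tar -xf -; done
    # source/client: stream the folder, giving pv the total size so it can show an ETA
    tar -cf - /srcfolder | pv -s "$(du -sb /srcfolder | awk '{print $1}')" | nc www.home.com 50002
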
  • NOTE: While these commands run, pv's progress output may scroll over any prompts; just keep typing as if it were not there (close your eyes if it helps). If there is a yes/no question, type "yes" and press ENTER, and when asked for a password, type your password and press ENTER. I talk a lot more about this, and a lot of other variations of this command, on my site: http://www.kossboss.com/linuxtarpvncssh (a variant for newer OpenSSH is sketched after this entry) Show Sample Output


    0
    cd /srcfolder; tar -czf - . | pv -s `du -sb . | awk '{print $1}'` | ssh -c arcfour,blowfish-cbc -p 50005 root@destination.com "tar -xzvf - -C /dstfolder"
    bhbmaster · 2013-05-30 07:21:06 7
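
    Note that arcfour and blowfish-cbc have been removed from recent OpenSSH releases, so the -c option above may be rejected; a sketch of the same pipeline that simply falls back to the default cipher (host, port, and paths are the original placeholders):
    cd /srcfolder; tar -czf - . | pv -s "$(du -sb . | awk '{print $1}')" | ssh -p 50005 root@destination.com "tar -xzvf - -C /dstfolder"
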
  • Forgot to use pv or rsync and want to know how much has been copied so far? Watch the destination folder grow (a du-based variant is sketched after this entry). Show Sample Output


    0
    watch ls -lh /path/to/folder
    vonElfensenf · 2014-03-27 10:51:36 8
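
    If the destination contains many files, watching the aggregate size may be easier to read than a directory listing; a minimal sketch, assuming GNU du (the 5-second interval is arbitrary):
    watch -n 5 'du -sh /path/to/folder'
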
  • This will write a backup of files/folders to TAPE (LTO-3/4 in my case). It could be changed to write to DVD/Blu-ray instead. Go to the directory where you want the output files written: cd /bklogs. Enter a name in bkname="Backup1" and the folders/files to back up in tobk="/home /var/www". It will create a tar stream and write it to the tape drive on /dev/nst0. In the process it will:
    1) generate a SHA-512 sum of the tar to $bkname.sha512, so you can validate that your data is intact;
    2) generate a file list of the contents of the tar, with file sizes, to $bkname.lst;
    3) buffer the tar stream to prevent shoe-shining the tape (I use 4 GB for LTO-3 at 80 MB/s and 8 GB for LTO-4 at 120 MB/s; 3 TB USB3 disks support those speeds, otherwise I use 3x2 TB raidz);
    4) show buffer in/out speed and the used space in the buffer;
    5) show a progress bar with a time approximation using pv.
    ADD: to eject the tape when done, append: ; sleep 75; mt-st -f /dev/nst0 rewoffl (a sketch applying this and the quiet-tar tip follows this entry)
    TODO:
    1) When using old tapes, if the buffer fills up and the drive slows down, it means the tape is old and should be replaced instead of being wiped and recycled for another backup. Logging where and when it slows down could provide good information on the wear of the tape. I don't know how to get that information from the mbuffer output and trigger a message like "This tape slowed down X times at Y1 GB, Y2 GB, Y3 GB, down to Z MB/s for a total of 30 sec. It would be wise to replace this tape next time you want to write to it."
    2) Fix the file-size approximation.
    3) Save all the output to $bkname.log, with each progress update on a new line (anyone have an idea?).
    4) Support spanning over multiple tapes.
    5) Replace the tar format with something else (dar?); I'm looking at xar right now (https://code.google.com/p/xar/): its XML metadata could contain per-file checksums, the compression algorithm (bzip2, xz, gzip), gnupg encryption, thumbnails, video previews, image EXIF data... But that's another project.
    TIP:
    1) You can specify the width of pv's progress bar. If it is longer than the terminal, each refresh is written to a new line; that way you can see whether the speed dropped at any point during the write.
    2) Remove the v from tar's cvf argument to avoid listing every file added to the archive.
    3) You can get tarsum (http://www.guyrutenberg.com/2009/04/29/tarsum-02-a-read-only-version-of-tarsum/) and add >(tarsum --checksum sha256 > $bkname_list.sha256) after the tee to generate checksums of individual files!


    0
    bkname="test"; tobk="*" ; totalsize=$(du -csb $tobk | tail -1 | cut -f1) ; tar cvf - $tobk | tee >(sha512sum > $bkname.sha512) >(tar -tv > $bkname.lst) | mbuffer -m 4G -P 100% | pv -s $totalsize -w 100 | dd of=/dev/nst0 bs=256k
    johnr · 2014-07-22 15:47:50 8
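
    Putting the ADD and TIP notes above into practice (quiet tar, eject when done), a sketch of the combined invocation; everything else is kept verbatim from the entry, and the mt binary name may differ by distribution:
    bkname="test"; tobk="*" ; totalsize=$(du -csb $tobk | tail -1 | cut -f1) ; tar cf - $tobk | tee >(sha512sum > $bkname.sha512) >(tar -tv > $bkname.lst) | mbuffer -m 4G -P 100% | pv -s $totalsize -w 100 | dd of=/dev/nst0 bs=256k ; sleep 75; mt-st -f /dev/nst0 rewoffl
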

  • 0
    (pv -n centos-7.0-1406-x86_64-DVD.img | dd of=/dev/disk4 bs=1m conv=notrunc,noerror) 2>&1 | dialog --gauge "Copying CentOS to USB Stick in /dev/disk4" 10 70 0
    BoxingOctopus · 2015-01-19 19:36:15 12
  • Create a file with random binary content. Requires the pv and units packages. It uses openssl to encrypt zeros with AES-256, using the current timestamp as the password, to generate a pseudo-random file (a units-free variant is sketched after this entry). Show Sample Output


    0
    s=1G bs=16K; count=`units ${s}iB ${bs}iB -1 -t --out="%.f"`; openssl enc -aes-256-ctr -pass pass:`date +%s%N` -nosalt < /dev/zero 2>/dev/null | dd iflag=fullblock bs=$bs count=$count | tee $s | pv -s $s | md5sum | sed -e "s/-/$s/" > ${s}.md5
    jcppkkk · 2015-09-30 06:27:39 10
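
    If the units package is not available, the block count for a 1 GiB file can also be computed with shell arithmetic; a minimal sketch (random.bin is a hypothetical output name, and the md5 bookkeeping from the entry is dropped):
    bs=$((16*1024)); count=$(( (1024*1024*1024) / bs ))   # 1 GiB in 16 KiB blocks
    openssl enc -aes-256-ctr -pass pass:"$(date +%s%N)" -nosalt < /dev/zero 2>/dev/null | dd iflag=fullblock bs=$bs count=$count | pv -s 1G > random.bin
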
  • Due to @tremby, here: http://unix.stackexchange.com/a/172088/58343. I'm looking for a way to parallelize openssl and feed that to dd, since openssl is the bottleneck on my machine: http://unix.stackexchange.com/questions/253466/parallelize-openssl-as-input-to-dd (a variant that derives the device size instead of hard-coding it is sketched after this entry)


    0
    openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt </dev/zero | pv --progress --eta --rate --bytes --size 8000632782848 | dd of=/dev/md0 bs=2M
    diagon · 2016-01-05 19:36:03 20
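
    The byte count above is hard-coded for one particular array; a sketch that derives it from the device instead, reusing blockdev --getsize64 (seen in a later entry on this page) and assuming the command is run as root:
    size=$(blockdev --getsize64 /dev/md0)   # total size of the target device in bytes
    openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt </dev/zero | pv --progress --eta --rate --bytes --size "$size" | dd of=/dev/md0 bs=2M
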
  • The first pv readout shows the speed at which the uncompressed data is read; the second shows the compressed data being sent over ssh. Change sdb to the target drive/partition to be backed up, change pbzip2 -c1 to suit your compression, and point ssh at your target host and file. Don't forget to run zerofree/fstrim first! (The matching restore is sketched after this entry.) Show Sample Output


    0
    dd if=/dev/sdb | pv -rabc | pbzip2 -c1 | pv -rabc | ssh user@192.168.0.1 'cat > /dump.bz2'
    sexyrms · 2016-06-19 23:27:03 11
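
    To restore such a dump later, the pipeline can be reversed; a hedged sketch using the same host, dump path, and disk as above (double-check the of= target before running):
    ssh user@192.168.0.1 'cat /dump.bz2' | pbzip2 -dc | pv -rabc | dd of=/dev/sdb
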
  • Sets $DISKSIZE to the size of the disk so that pv's percentage readout is correct. Set /dev/sdb to whatever your disk is (/dev/sdX). dd is then piped to pv, and pv to gzip, so you end up with a gzipped image file (the matching restore is sketched after this entry). Show Sample Output


    0
    DISKSIZE=`sudo blockdev --getsize64 /dev/sdb` && sudo dd bs=4096 if=/dev/sdb | pv -s $DISKSIZE | sudo gzip -9 > ~/USBDRIVEBACKUP.img.gz
    frame45 · 2016-08-31 00:03:56 14
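
    The matching restore is the same idea in reverse; a minimal sketch (pv shows throughput only here, unless you also pass the original $DISKSIZE with -s):
    gunzip -c ~/USBDRIVEBACKUP.img.gz | pv | sudo dd of=/dev/sdb bs=4096
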
  • Uses the wonderful pv command to give a progress bar when copying one partition to another. Amazing for long-running dd commands (a variant with an explicit size is sketched after this entry). Show Sample Output


    0
    pv -tpreb /dev/sdc2 | dd of=/dev/sdb2 bs=64K conv=noerror,sync
    4fthawaiian · 2016-12-22 03:18:09 14
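
    If your pv build cannot size a block device on its own and the percentage stays at 0%, the size can be supplied explicitly; a sketch reusing blockdev --getsize64 from the entry above:
    pv -tpreb -s "$(blockdev --getsize64 /dev/sdc2)" /dev/sdc2 | dd of=/dev/sdb2 bs=64K conv=noerror,sync
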
  • Change your drive letter as you wish. pv is used here to report the read speed; you must install pv first. http://www.bayner.com/ kerim@bayner.com (a cat-free variant is sketched after this entry) Show Sample Output


    -1
    cat /dev/sda | pv -r > /dev/null
    kerim · 2011-01-23 22:58:56 5
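
    The cat is not strictly needed, since pv can read the device directly; a minimal sketch of the same read-speed test:
    pv -r /dev/sda > /dev/null
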
  • Only works on single files; doesn't preserve permissions, timestamps, or ownership (a sketch for copying those afterwards follows this entry). Show Sample Output


    -6
    pv file1 > file2
    ppaschka · 2010-02-25 19:18:32 5
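
    If you still want the progress bar but need the metadata too, mode and timestamps can be copied over afterwards; a sketch assuming GNU coreutils (ownership would additionally need chown --reference as root):
    pv file1 > file2 && chmod --reference=file1 file2 && touch -r file1 file2
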
  • The f is for file and the - is stdout; this way it's a little shorter. I like the copy-directory function: it does the job but looks like SH**, doesn't understand folders with whitespace, and can only handle full paths, but is otherwise fine (a more robust variant is sketched after this entry): function copy-directory () { FrDir="$(echo $1 | sed 's:/: :g' | awk '/ / {print $NF}')" ; SiZe="$(du -sb $1 | awk '{print $1}')" ; (cd $1 ; cd .. ; tar c $FrDir/ )|pv -s $SiZe|(cd $2 ; tar x ) ; } Show Sample Output


    -11
    (cd /source/dir ; tar cv .)|(cd /dest/dir ; tar xv)
    marssi · 2009-07-19 10:31:13 12
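
    A hedged sketch of a variant of that function which quotes its arguments, so paths containing spaces and relative paths both work (assumes GNU coreutils realpath and du -sb):
    copy-directory () {
        local src size
        src="$(realpath "$1")" || return 1                        # resolve relative paths
        size="$(du -sb "$src" | awk '{print $1}')"                # total bytes, for pv's ETA
        (cd "$(dirname "$src")" && tar -cf - "$(basename "$src")") | pv -s "$size" | (cd "$2" && tar -xf -)
    }
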
