Commands tagged pv (38)

  • Run this entry's command (the nc listener loop) at the destination, aka the server. Then run the following at the source, aka the client: tar -cf - /srcfolder | pv | nc www.home.com 50002 . If you want an ETA and a percentage bar at the client, give pv the total size up front: tar -cf - /srcfolder | pv -s `du -sb /srcfolder | awk '{print $1}'` | nc www.home.com 50002 . Use the plain pv form if you don't care about a progress bar, and the pv -s form if you do. I cover this in a lot more detail, with more room to explain it, on my site: http://www.kossboss.com/linuxtarpvncssh (both ends are recapped in the sketch after this entry).


    0
    while true; do nc -l -p 50002 | pv | tar -xf -; done
    bhbmaster · 2013-05-30 07:17:23 4
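    A hedged recap putting both ends side by side, using the example host and port from this entry:
    # destination (server): listen, show throughput, unpack
    while true; do nc -l -p 50002 | pv | tar -xf -; done
    # source (client): pack, show a sized progress bar with ETA, send
    tar -cf - /srcfolder | pv -s "$(du -sb /srcfolder | awk '{print $1}')" | nc www.home.com 50002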
  • NOTE: While these commands run, pv's progress output may scroll over any prompts from ssh. Just keep typing as if it were not there (close your eyes if it helps): answer the yes/no host-key question with "yes" and ENTER, and when it asks for a password, type your password and press ENTER. I talk a lot more about this, and about other variations of this command, on my site: http://www.kossboss.com/linuxtarpvncssh (a key-based way to avoid the password prompt is sketched after this entry).


    0
    cd /srcfolder; tar -czf - . | pv -s `du -sb . | awk '{print $1}'` | ssh -c arcfour,blowfish-cbc -p 50005 root@destination.com "tar -xzvf - -C /dstfolder"
    bhbmaster · 2013-05-30 07:21:06 2
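    One way to keep the password prompt from fighting pv's output is to set up key-based ssh beforehand; a hedged sketch assuming the same host, port and paths as above (the legacy -c arcfour,blowfish-cbc ciphers are left out, since they may not be available in current OpenSSH builds):
    ssh-keygen -t ed25519                      # skip if you already have a key
    ssh-copy-id -p 50005 root@destination.com  # install the key so no password prompt appears
    cd /srcfolder; tar -czf - . | pv -s "$(du -sb . | awk '{print $1}')" | ssh -p 50005 root@destination.com "tar -xzvf - -C /dstfolder"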
  • Forgot to run the copy under pv or rsync and want to know how much has been copied? Watch the destination folder instead (a directory-size variant is sketched after this entry).


    0
    watch ls -lh /path/to/folder
    vonElfensenf · 2014-03-27 10:51:36 4
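    If the copy involves a whole directory tree rather than one file, a hedged variant that watches the total size grow instead (the path is a placeholder):
    watch -n 5 du -sh /path/to/folder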
  • This writes a backup of files/folders to TAPE (LTO3/LTO4 in my case); it could be changed to write to DVD/Blu-ray instead. Go to the directory where you want the output files (cd /bklogs), put a name in bkname="Backup1", and list the folders/files to back up in tobk="/home /var/www". It creates a tar stream and writes it to the tape drive on /dev/nst0. In the process it will:
    1) generate a SHA-512 sum of the tar into $bkname.sha512, so you can validate that your data is intact;
    2) generate a file list of the tar's contents, with file sizes, into $bkname.lst;
    3) buffer the tar stream to prevent shoe-shining the tape (I use 4 GB for LTO3 at ~80 MB/s and 8 GB for LTO4 at ~120 MB/s; 3 TB USB3 disks sustain those speeds, otherwise I use a 3x2 TB raidz);
    4) show the buffer's in/out speed and how much of the buffer is in use;
    5) show a progress bar with a time estimate, using pv.
    ADD: to eject the tape afterwards, append: ; sleep 75; mt-st -f /dev/nst0 rewoffl
    TODO:
    1) When using old tapes, if the buffer fills and the drive slows down, the tape is worn and should be replaced instead of being wiped and recycled for another backup. Logging where and when it slows down could give good information on the wear of the tape. I don't know how to get that from the mbuffer output and turn it into a message like "This tape slowed down X times at Y1 GB, Y2 GB, Y3 GB, down to Z MB/s for a total of 30 s. It would be wise to replace this tape next time you want to write to it."
    2) Fix the file-size approximation.
    3) Save all the output to $bkname.log, with each progress update on a new line (anyone have an idea?).
    4) Support spanning over multiple tapes.
    5) Replace the tar format with something else (dar?); I'm looking at xar right now (https://code.google.com/p/xar/): its XML metadata could carry per-file checksums, the compression algorithm (bzip2, xz, gzip), GnuPG encryption, thumbnails, video previews, image EXIF data... but that's another project.
    TIP:
    1) You can set the width of pv's progress bar; if it is wider than the terminal, each refresh is written on a new line, so you can see whether the speed dropped at some point during the write.
    2) Remove the v from the tar argument cvf to stop it listing every file added to the archive.
    3) You can get tarsum (http://www.guyrutenberg.com/2009/04/29/tarsum-02-a-read-only-version-of-tarsum/) and add >(tarsum --checksum sha256 > $bkname_list.sha256) after the tee to generate checksums of the individual files.
    (A read-back verification sketch follows this entry.)


    0
    bkname="test"; tobk="*" ; totalsize=$(du -csb $tobk | tail -1 | cut -f1) ; tar cvf - $tobk | tee >(sha512sum > $bkname.sha512) >(tar -tv > $bkname.lst) | mbuffer -m 4G -P 100% | pv -s $totalsize -w 100 | dd of=/dev/nst0 bs=256k
    johnr · 2014-07-22 15:47:50 5
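    To verify the tape later against the stored checksum, a hedged read-back sketch (same device, block size and bkname as above; untested, and assumes mt-st accepts rewind the same way the entry's eject command uses rewoffl):
    mt-st -f /dev/nst0 rewind                 # wind the tape back to the start
    dd if=/dev/nst0 bs=256k | pv | sha512sum  # digest of what is actually on tape
    cat test.sha512                           # should show the same digest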

  • 0
    (pv -n centos-7.0-1406-x86_64-DVD.img | dd of=/dev/disk4 bs=1m conv=notrunc,noerror) 2>&1 | dialog --gauge "Copying CentOS to USB Stick in /dev/disk4" 10 70 0
    BoxingOctopus · 2015-01-19 19:36:15 9
  • Create a file with pseudo-random binary content. Requires the pv and units packages. It uses openssl to encrypt zeros with AES-256, using the current timestamp as the password, to produce a pseudo-random stream (a checksum-verification example follows this entry).


    0
    s=1G bs=16K; count=`units ${s}iB ${bs}iB -1 -t --out="%.f"`; openssl enc -aes-256-ctr -pass pass:`date +%s%N` -nosalt < /dev/zero 2>/dev/null | dd iflag=fullblock bs=$bs count=$count | tee $s | pv -s $s | md5sum | sed -e "s/-/$s/" > ${s}.md5
    jcppkkk · 2015-09-30 06:27:39 6
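    The sed at the end rewrites md5sum's "-" into the file name, so the stored checksum can be verified directly; a small usage example with the s=1G value from above:
    md5sum -c 1G.md5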
  • Due to @tremby, here: http://unix.stackexchange.com/a/172088/58343. I'm looking for a way to parallelize openssl and feed that to dd, since openssl is the bottleneck on my machine: http://unix.stackexchange.com/questions/253466/parallelize-openssl-as-input-to-dd (a variant that derives the device size automatically is sketched after this entry).


    0
    openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt </dev/zero | pv --progress --eta --rate --bytes --size 8000632782848 | dd of=/dev/md0 bs=2M
    diagon · 2016-01-05 19:36:03 15
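    The 8000632782848 is the byte size of that particular array; a hedged variant that asks the kernel for the size instead, so pv's percentage and ETA stay correct for any target (the device name is just the example from the entry):
    SIZE=$(blockdev --getsize64 /dev/md0)
    openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt </dev/zero | pv --progress --eta --rate --bytes --size "$SIZE" | dd of=/dev/md0 bs=2M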
  • The first pv readout shows how fast the uncompressed data is being read, the second how fast the compressed data is being sent over ssh. Change sdb to the target drive/partition to be backed up, change pbzip2 -c1 to suit your compression, and point the ssh part at your target host and file. Don't forget to run zerofree/fstrim first! (A restore sketch follows this entry.)


    0
    dd if=/dev/sdb | pv -rabc | pbzip2 -c1 | pv -rabc | ssh user@192.168.0.1 'cat > /dump.bz2'
    sexyrms · 2016-06-19 23:27:03 7
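    To restore the image later, a hedged sketch reversing the pipeline (host, file and device names are the examples from the entry; double-check the target device before running):
    ssh user@192.168.0.1 'cat /dump.bz2' | pbzip2 -dc | pv | dd of=/dev/sdb bs=1M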
  • Stores the size of the disk in $DISKSIZE so that pv's percentage readout is correct. Set /dev/sdb to whatever your disk is (/dev/sdX). dd is then piped to pv, and pv to gzip, so you end up with a gzipped image file (a restore sketch follows this entry).


    0
    DISKSIZE=`sudo blockdev --getsize64 /dev/sdb` && sudo dd bs=4096 if=/dev/sdb | pv -s $DISKSIZE | sudo gzip -9 > ~/USBDRIVEBACKUP.img.gz
    frame45 · 2016-08-31 00:03:56 10
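    To write the image back onto a disk, a hedged sketch of the reverse direction (same file and device names; make very sure /dev/sdb is the disk you mean):
    gunzip -c ~/USBDRIVEBACKUP.img.gz | pv | sudo dd of=/dev/sdb bs=4096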
  • Uses the wonderful 'pv' command to give a progress bar when copying one partition to another. Great for long-running dd jobs (a coreutils-only alternative is noted after this entry).


    0
    pv -tpreb /dev/sdc2 | dd of=/dev/sdb2 bs=64K conv=noerror,sync
    4fthawaiian · 2016-12-22 03:18:09 9
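    If pv is not available, recent GNU dd can report progress by itself; a hedged alternative using the same example devices (requires coreutils new enough for status=progress):
    dd if=/dev/sdc2 of=/dev/sdb2 bs=64K conv=noerror,sync status=progress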
  • Change the drive to whichever one you want to test. pv is used here to show the read speed, so you must install pv first. http://www.bayner.com/ kerim@bayner.com (A variant without the extra cat is shown after this entry.)


    -1
    cat /dev/sda | pv -r > /dev/null
    kerim · 2011-01-23 22:58:56 4
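    The same measurement without the extra cat, letting pv read the device directly (a sketch; sda is just the example device, and reading it still needs root):
    pv -r /dev/sda > /dev/null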
  • Only works on single files and doesn't preserve permissions/timestamps/ownership (an attribute-preserving alternative is noted after this entry).


    -6
    pv file1 > file2
    ppaschka · 2010-02-25 19:18:32 2
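    If you do need ownership, permissions and timestamps preserved while still seeing progress, a hedged alternative using rsync (needs rsync 3.1 or later for --info=progress2; file names are placeholders):
    rsync -a --info=progress2 file1 file2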
  • The f is for file and - is stdout, so this way is a little shorter. I like the copy-directory function: it does the job but looks like SH**, doesn't handle folders with whitespace in their names, and only accepts full paths, but is otherwise fine: function copy-directory () { ; FrDir="$(echo $1 | sed 's:/: :g' | awk '/ / {print $NF}')" ; SiZe="$(du -sb $1 | awk '{print $1}')" ; (cd $1 ; cd .. ; tar c $FrDir/ )|pv -s $SiZe|(cd $2 ; tar x ) ; } (A whitespace-safe rework is sketched after this entry.)


    -11
    (cd /source/dir ; tar cv .)|(cd /dest/dir ; tar xv)
    marssi · 2009-07-19 10:31:13 11
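    A hedged, whitespace-safe rework of that copy-directory idea, keeping the pv progress bar (a sketch assuming GNU du and tar; not the original author's version):
    copy_directory () {
        # copy directory $1 into directory $2, showing progress with pv
        local src=$1 dst=$2 size
        size=$(du -sb -- "$src" | awk '{print $1}')
        (cd "$(dirname -- "$src")" && tar cf - -- "$(basename -- "$src")") \
            | pv -s "$size" \
            | (cd "$dst" && tar xf -)
    }
    # usage: copy_directory "/path/with spaces/source dir" /dest/dir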