Copy files over network using compression

on the listening side: sudo nc -lp 2022 | sudo tar -xvf -
on the sending side: tar -cvzf - ./* | nc -w 3 name_of_listening_host 2022
This is useful for sending data between two computers that you have shell access to. The files are compressed with gzip (tar's -z option) during the transfer and unpacked automatically on arrival. Note the trailing dash in each tar command: it is the "filename" argument to -f, and it means stdin when extracting and stdout when creating, so tar reads from or writes to the pipe instead of a file.

on the listening side: sudo nc -lp 2022 | sudo tar -xvf -
explanation: open netcat to -l listen on -p port 2022, and pipe the incoming data stream to tar to -x extract, -v verbosely, -f using the given file, where the filename - means "stdin". GNU tar detects the gzip compression automatically when extracting.

on the sending side: tar -cvzf - ./* | nc -w 3 name_of_listening_host 2022
explanation: archive all files in the current dir with tar -c create, -v verbose, -z compress with gzip, -f using the given file, where - here means "stdout" because we are running tar -c instead of tar -x. netcat then sends the stream to the listening host name_of_listening_host on port 2022, and -w 3 waits 3 seconds after the stream terminates before ending the connection.

-2
2009-03-27 09:59:33
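
If you want a progress meter on the transfer, here is a minimal sketch using the pv utility (an assumption: pv must be installed on the listening side; sudo is omitted since, as mpb notes in the comments, port 2022 is not privileged):

    # listening side: receive the stream, show throughput, then unpack
    nc -lp 2022 | pv | tar -xzvf -
    # sending side: pack and compress the current dir, stream it to the listener
    tar -czf - ./* | nc -w 3 name_of_listening_host 2022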

These Might Interest You

  • This invokes tar on the remote machine and pipes the resulting tar file over the network using ssh, saving it on the local machine. This is useful for making a one-off backup of a directory tree with zero storage overhead on the source. Variations on this include compressing on the source by adding z to the tar flags ('tar czfp - *'), or compressing at the destination via ssh user@host "cd dir; tar cfp - *" | gzip - > file.tar.gz (see the compression sketch after this list).


    6
    ssh user@host "cd targetdir; tar cfp - *" | dd of=file.tar
    bwoodacre · 2009-03-18 07:43:22
  • Mirror a remote directory using some tricks to maximize network speed (see the alias sketch after this list).
    lftp: coolest file transfer tool ever
    -u: username and password (pwd is merely a placeholder if you have ~/.ssh/id_rsa)
    -e: execute internal lftp commands
    set sftp:connect-program: use a specific command instead of plain ssh
    ssh -a -x -T: disable useless features; -c arcfour: use the most efficient cipher specification; -o Compression=no: disable ssh compression to save CPU
    mirror: copy the remote dir subtree to the local dir
    -v: be verbose (cool progress bar and speed meter, one for each file in parallel)
    -c: continue interrupted file transfers if possible
    --loop: repeat mirror until no differences are found
    --use-pget-n=3: transfer each file with 3 independent parallel TCP connections
    -P 2: transfer 2 files in parallel (totalling 6 TCP connections)
    sftp://remotehost:22: use the sftp protocol on port 22 (you can give any other port if appropriate)
    You can play with the values for --use-pget-n and/or -P to achieve maximum speed on a particular network. If the files are compressible, removing "-o Compression=no" can be beneficial. Better to create an alias for the command.


    1
    lftp -u user,pwd -e "set sftp:connect-program 'ssh -a -x -T -c arcfour -o Compression=no'; mirror -v -c --loop --use-pget-n=3 -P 2 /remote/dir/ /local/dir/; quit" sftp://remotehost:22
    colemar · 2014-10-17 00:29:34
  • a: archive; -m5: compression level (0 = lowest compression ... 5 = max compression); -v5M: split the output into 5-megabyte volumes (change to 700M for a CD, or 4200M for a DVD); -R: recurse into directories (do not use it for single files). It's better to have the output of a compression already split than to use the 'split' command after compressing, which would consume double the disk space. Found at http://www.ubuntu-unleashed.com/2008/05/howto-create-split-rar-files-in-ubuntu.html (see the extraction sketch after this list).


    0
    rar a -m5 -v5M -R myarchive.rar /home/
    piovisqui · 2009-05-27 15:53:18
  • Adds high-performance, lightweight lz4 compression to speed up the transfer of files over a trusted network link. Using (insecure) netcat results in a much faster transfer than using an ssh tunnel because of the lack of overhead. Also, LZ4 is as fast or faster than LZO and much faster than gzip or LZMA, and in a worst-case scenario incompressible data grows by only 0.4% in size. Using an LZMA or gzip compressor makes more sense in cases where the network link is the bottleneck, whereas LZ4 makes more sense if CPU time is more of a bottleneck.


    0
    On target: nc -l 4000 | lz4c -d - | tar xvf -
    On source: tar -cf - . | lz4c | nc target_ip 4000
    baitisj · 2014-08-02 05:09:30
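
A minimal sketch of the compression variations mentioned in the first item above, assuming GNU tar on the remote host and treating user@host, /remote/dir and the output filenames as placeholders:

    # compress on the source (z = gzip) and save the stream locally
    ssh user@host "cd /remote/dir; tar czfp - *" | dd of=backup.tar.gz
    # or compress at the destination instead, sparing the remote CPU
    ssh user@host "cd /remote/dir; tar cfp - *" | gzip - > backup.tar.gz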
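
The lftp item above suggests creating an alias; here is a sketch of that, assuming a bash shell and the hypothetical name fastmirror (note the '\'' escapes needed to nest single quotes inside the alias body):

    # ~/.bashrc - wrap the long lftp invocation in a single command
    alias fastmirror='lftp -u user,pwd -e "set sftp:connect-program '\''ssh -a -x -T -c arcfour -o Compression=no'\''; mirror -v -c --loop --use-pget-n=3 -P 2 /remote/dir/ /local/dir/; quit" sftp://remotehost:22'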
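
And to reassemble and extract a split archive like the rar example above, a sketch assuming the unrar utility and modern .partN.rar volume naming (older rar versions name volumes .rar, .r00, .r01, ...):

    # point unrar at the first volume; it finds the remaining volumes automatically
    unrar x myarchive.part1.rar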

What Others Think

If you have scp on the client and the corresponding daemon on the server side, you can just use scp filename user@host:/target/directory
DNSpyder · 482 weeks ago
Curious why a sudo is needed in "sudo nc -lp 2022" (port 2022 is not a privileged port)?
mpb · 482 weeks ago
But where is the compression? You should pipe your data through bzip2, gzip, whatever.
OJM · 482 weeks ago
@OJM Good catch - I left out the "-z" option that filters the archive through gzip. Post modified.
@mpb I should have pointed out that port 2022 is arbitrary and was open for me when I needed this command. It could be replaced with whatever works for you though.
@DNSpyder scp works, but the combination of netcat & tar is faster, even with compression turned on in scp (-C).
smcpherson · 482 weeks ago
If you need encryption then use cryptcat: http://sourceforge.net/projects/cryptcat/
OJM · 482 weeks ago
rsync
asmoore82 · 481 weeks and 2 days ago
This works in reverse, too, i.e. the listening side can be the "sender":
    # on the listening/sending side
    tar -cvzf - ./* | nc -v -l 2022
    # on the receiving side
    nc -v -w3 name_of_listening_host 2022 | tar -xvf -
The listening netcat will wait for the incoming connection before it starts sending the input piped from tar.
jordan · 283 weeks and 6 days ago
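
Building on OJM's cryptcat suggestion above, a sketch of the same transfer with the stream encrypted, assuming cryptcat is installed on both ends and that it accepts netcat's flags plus -k to set the shared secret:

    # listening side: decrypt the incoming stream, then unpack
    cryptcat -k mysecret -lp 2022 | tar -xzvf -
    # sending side: pack, compress, encrypt and send
    tar -czf - ./* | cryptcat -k mysecret -w 3 name_of_listening_host 2022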
