dd if=/dev/random of=bigfile bs=1024 count=102400

Create a 100MB file for testing transfer speed


amiga500 · 2009-02-17 05:41:20 · 3 votes · tagged: dd

These Might Interest You

  • This example will close the pipe after transferring 100MB at a speed of 3MB per second.

    cat /dev/zero | pv -L 3m -Ss 100m > /dev/null
    bugmenot · 2012-12-15 10:17:52 · 2 votes
  • This example will close the pipe after transferring 100MB at a speed of 3MB per second.

    cat /dev/urandom | pv -L 3m | dd bs=1M count=100 iflag=fullblock > /dev/null
    bugmenot · 2012-07-29 00:42:16 · 6 votes
  • If you have servers on a Wide Area Network (WAN), transfers can be very slow due to limited bandwidth and latency. To speed them up, compress the data so there is less to transfer: run a compression tool such as gzip, bzip2, or compress before the data leaves one end and decompress it at the other. (ssh's "-C" option does this too, but it is not supported by every ssh version, ssh2 for instance.) A push-direction variant of this command is sketched after this list.

    ssh 10.0.0.4 "gzip -c /tmp/backup.sql" | gunzip > backup.sql
    ultips · 2012-01-06 17:44:06 · 0 votes
  • Mirror a remote directory using some tricks to maximize network speed.
    lftp: coolest file transfer tool ever
      -u: username and password (pwd is merely a placeholder if you have ~/.ssh/id_rsa)
      -e: execute internal lftp commands
      set sftp:connect-program: use a specific command instead of plain ssh
    ssh:
      -a -x -T: disable useless features
      -c arcfour: use the most efficient cipher specification
      -o Compression=no: disable compression to save CPU
    mirror: copy the remote dir subtree to a local dir
      -v: be verbose (cool progress bar and speed meter, one for each file in parallel)
      -c: continue interrupted file transfers if possible
      --loop: repeat mirror until no differences are found
      --use-pget-n=3: transfer each file with 3 independent parallel TCP connections
      -P 2: transfer 2 files in parallel (totalling 6 TCP connections)
    sftp://remotehost:22: use the sftp protocol on port 22 (you can give any other port if appropriate)
    You can play with the values of --use-pget-n and/or -P to achieve maximum speed on your particular network. If the files are compressible, removing "-o Compression=no" can be beneficial. Better yet, create an alias for the command.

    lftp -u user,pwd -e "set sftp:connect-program 'ssh -a -x -T -c arcfour -o Compression=no'; mirror -v -c --loop --use-pget-n=3 -P 2 /remote/dir/ /local/dir/; quit" sftp://remotehost:22
    colemar · 2014-10-17 00:29:34 · 1 vote
  • Here filein is the source file, destination.com is the ssh server the file is copied to, -c arcfour,blowfish-cbc selects the fastest ciphers, -C enables compression on the wire and decompression as the data comes off the line (which can speed up the transfer in some cases), and /tmp/fileout is where the file is saved. I talk more about it on my site, where there is more room for detail: http://www.kossboss.com/linuxtarpvncssh and http://www.kossboss.com/linux---transfer-1-file-with-ssh

    cat filein | ssh destination.com -c arcfour,blowfish-cbc -C -p 50005 "cat - > /tmp/fileout"
    bhbmaster · 2013-05-30 07:18:46 · 0 votes
  • Requires: curl, xsel, and access to the internet (http://transfer.sh). This is a shell function that uses the transfer.sh service to make sharing files easier from the command line. I have modified the function provided by transfer.sh to use xsel to copy the resulting URL to the clipboard. Since commandlinefu only allows 255 characters, the full modified function is:

    transfer() {
      if [ $# -eq 0 ]; then
        echo "No arguments specified. Usage:\necho transfer /tmp/test.md\ncat /tmp/test.md | transfer test.md"
        return 1
      fi
      if tty -s; then
        basefile=$(basename "$1" | sed -e 's/[^a-zA-Z0-9._-]/-/g')
        curl --progress-bar --upload-file "$1" "https://transfer.sh/$basefile" | xsel --clipboard
      else
        curl --progress-bar --upload-file "-" "https://transfer.sh/$1" | xsel --clipboard
      fi
      xsel --clipboard
    }

    transfer() { basefile=$(basename "$1" | sed -e 's/[^a-zA-Z0-9._-]/-/g');curl --progress-bar --upload-file "$1" "https://transfer.sh/$basefile"|xsel --clipboard;xsel --clipboard ; }
    leftyfb · 2016-03-20 19:38:48 · 2 votes
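
As promised in the WAN compression entry above, the same idea works in the push direction. A minimal sketch, reusing the host and path from that example as placeholders:

# Compress locally, send the stream over ssh, and decompress on the remote side.
# 10.0.0.4 and the paths are placeholders taken from the example above.
gzip -c /tmp/backup.sql | ssh 10.0.0.4 "gunzip > /tmp/backup.sql"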

What Others Think

/dev/random is a blocking device that will block reads when the entropy pool is empty. You should use /dev/urandom instead. But even /dev/urandom is CPU intensive, which is not a good idea if you are testing data transfer speed. A better alternative would be: dd if=/dev/zero of=bigfile bs=1024 count=102400
berta · 482 weeks and 6 days ago
This is only CPU intensive when you generate the file; after that there is no more overhead except for the bandwidth consumed. I have a small shell script that compares the downloaded checksum with one done on the generated file, then prints the results (either the checksums match or they don't) - something like the sketch below.
meganerd · 330 weeks ago
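
The script itself was not posted; a minimal sketch of the idea (remotehost and the paths are placeholders):

# Compare the checksum of the original file with the downloaded copy on the remote side.
# remotehost and /tmp/bigfile are placeholders; the original script was not posted.
orig=$(md5sum bigfile | awk '{print $1}')
copy=$(ssh remotehost "md5sum /tmp/bigfile" | awk '{print $1}')
if [ "$orig" = "$copy" ]; then echo "checksums match"; else echo "checksums do not match"; fi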
