live ssh network throughput test

yes | pv | ssh $host "cat > /dev/null"
Connects to $host via ssh and displays the live transfer speed, directing all transferred data to /dev/null. Requires pv to be installed. Debian: 'apt-get install pv'; Fedora: 'yum install pv' (may need the 'extras' repository enabled).
Sample Output
88.2MB 0:00:15 [6.83MB/s] [    <=>            ]
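The one-liner runs until you interrupt it; if you'd rather have it stop on its own, a rough variant (a sketch, not part of the original command) caps the amount of data sent and lets pv report progress against that total:

# send a fixed 100 MiB and then exit, rather than streaming until Ctrl-C
head -c 100M /dev/zero | pv -s 100M | ssh $host "cat > /dev/null"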


What Others Think

NO! Sorry, but this will not give you the true network speed of your network or any network. Using SSH adds a lot of extra data on the network in addition to what you are transferring (about 200-300% more) that would of course not be seen by pv. Probably a better way to do this is using nc or something. Try running this on the server side:
nc -l -p 44444 > /dev/null
And then this on the client side:
yes | pv | nc serverhostname.example.com 44444
Nice try though.
deltaray · 446 weeks and 4 days ago
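The -l -p combination above is the traditional netcat syntax; with the OpenBSD netcat shipped on many current distributions (an assumption about which nc variant is installed, so check nc -h), the listening side would look more like:

# OpenBSD-style netcat listener on port 44444, discarding everything it receives
nc -l 44444 > /dev/null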
Of course it's going to have the overhead added by ssh; that is implied by the fact that it is going over ssh (and as indicated by the summary).
opertinicy · 446 weeks and 4 days ago
Also, 200-300% is a gross overestimation and is highly dependent on the CPUs doing the encryption/decryption; the actual amount of data transferred on the network is not increased by much. As I never transfer files between hosts unencrypted, this command is highly useful for tweaking/measuring my SSL/SSH settings when using ssh tunneling/scp.
opertinicy · 446 weeks and 4 days ago
Nice, I hadn't known about the 'pv' command. Just out of curiosity, why did you use 'yes' instead of /dev/zero? (BTW, I agree that it's actually more useful to know the bandwidth including encryption overhead than the raw network bandwidth.)
hackerb9 · 445 weeks and 6 days ago
Sure, but I just want to make sure people realize, when they read this, that all your example measures is what the bandwidth would be for SSH between the two machines it was tested on, not for other protocols on the network in general. You started to get at that in your comments, but you didn't put that in your initial description.
deltaray · 445 weeks and 6 days ago
No specific reason for using yes [other than it's shorter/quicker to type :)]. You could use /dev/zero as well:
cat /dev/zero | pv | ssh $host 'cat > /dev/null'
or even better:
pv /dev/zero -W | ssh $host 'cat > /dev/null'
opertinicy · 445 weeks and 4 days ago
I noticed using the -C option for ssh makes a huge difference... probably mostly due to /dev/zero being easy to compress :)
dd bs=1024 count=1M if=/dev/zero of=s1
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 8.26632 seconds, 130 MB/s
AskApache · 432 weeks ago
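For comparison, the compressed run described above would look something like this (a sketch; the inflated figure mostly shows how well a stream of zeros compresses, not the real link speed):

# same throughput test with ssh compression enabled (-C);
# zeros compress almost perfectly, so expect an unrealistically high rate
cat /dev/zero | pv | ssh -C $host 'cat > /dev/null'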
Using /dev/zero as the input in this example provides a pretty good demonstration of how compression works (specifically zlib compression, in this instance). However, it is relatively useless for any kind of bandwidth benchmark =p
opertinicy · 431 weeks and 6 days ago
How about /dev/random?
jrdld · 152 weeks and 1 day ago
Using /dev/random (combined with compression) would give a good indication of the throughput.
opertinicy · 152 weeks and 1 day ago
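A sketch of that idea: on many systems /dev/random blocks once the entropy pool is drained, so /dev/urandom is usually the practical source of a fast, incompressible stream (and note that generating random data can itself become the bottleneck on a slow CPU):

# incompressible input, so ssh -C cannot inflate the numbers;
# /dev/urandom is used instead of /dev/random, which may block
cat /dev/urandom | pv | ssh -C $host 'cat > /dev/null'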
