Dang! No HPN-SSH!
Posted Mar 8, 2010 16:57 UTC (Mon) by aaron (guest, #282) Parent article: OpenSSH 5.4 released
(See http://www.psc.edu/networking/projects/hpn-ssh )
It's truly amazing how much they help transfers over higher-latency links (i.e., any distance over a mile). They also help local transfers a fair bit.
Unfortunately, for now, if you can't patch your own SSHd, you have to use commercial file-transfer software, a Riverbed, or RSA-SSHd.
Please, O pufferfish, strap on a jetpack!
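The latency effect described here is just windowing arithmetic: with a fixed channel window, at most one window of data can be in flight per latency period, so throughput is capped at roughly window/latency. A minimal sketch of that calculation — the 2MB window and the division by one-way latency mirror the figures djm uses elsewhere in this thread; function and variable names are mine:

```python
def max_throughput_mbps(window_bytes: float, latency_s: float) -> float:
    """Ceiling on a windowed transfer: at most one window of data can be
    in flight per latency period, so throughput <= window / latency."""
    return window_bytes * 8 / latency_s / 1e6

WINDOW = 2e6  # ~2MB channel window

# 100 ms path: the window only starts to bite around 160 Mbit/s
print(max_throughput_mbps(WINDOW, 0.100))  # 160.0
# even a 5-second path still sustains DSL-class rates
print(max_throughput_mbps(WINDOW, 5.0))    # 3.2
```

If your link's bandwidth-delay product stays under the window, the window is not your bottleneck.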
Dang! No HPN-SSH!
Posted Mar 8, 2010 17:04 UTC (Mon) by alex (subscriber, #1355) [Link] (4 responses)

Dang! No HPN-SSH!
Posted Mar 9, 2010 1:51 UTC (Tue) by BrucePerens (guest, #2510) [Link] (3 responses)

Dang! No HPN-SSH!
Posted Mar 9, 2010 4:20 UTC (Tue) by djm (subscriber, #11651) [Link] (2 responses)

[OpenSSH raised the default channel window to 2MB some releases ago,] which should have obviated the need for the HPN patches for all but the highest BDP links. You should benchmark your connection to see if it actually benefits from the HPN patches, which are quite intrusive.

Put differently, if your one-way path latency is 100ms then unmodified OpenSSH's window size should only start restricting performance if your transfer rate is ~160Mbit/s.

Dang! No HPN-SSH!
Posted Mar 9, 2010 4:51 UTC (Tue) by BrucePerens (guest, #2510) [Link] (1 response)

[Is the intrusive part] auditing code that goes from statically allocated to dynamically allocated buffers? So, is the alternative to set the static buffers to 2MB?

Dang! No HPN-SSH!
Posted Mar 9, 2010 11:04 UTC (Tue) by djm (subscriber, #11651) [Link]

This limit is enough for a path with 5 seconds latency at your DSL speed. If you are on Internet2 and want to move files between continental USA and Europe at gigabit speeds, then you might still want the HPN patches.

Dang! No HPN-SSH!
Posted Mar 9, 2010 4:24 UTC (Tue) by djm (subscriber, #11651) [Link] (2 responses)

The other component of the HPN patches that people sometimes ask for is the ability to select a null cipher/MAC. We are not planning on implementing that ever.

Dang! No HPN-SSH!
Posted Mar 9, 2010 11:51 UTC (Tue) by tialaramex (subscriber, #21167) [Link]

No-one is asking for this to be the default of course, but why can't we have it as an option without hacking the code?

Dang! No HPN-SSH!
Posted Mar 9, 2010 14:45 UTC (Tue) by andikleen (guest, #39006) [Link]

And yes, multi-threading your application is typically intrusive, but it's also very much needed if it is CPU-time intensive and you want to keep up with modern systems.

Dang! No HPN-SSH!
Posted Mar 9, 2010 13:49 UTC (Tue) by andikleen (guest, #39006) [Link] (4 responses)

[The HPN patches also include a multi-threaded cipher,] which is needed for fast enough links because a single core cannot do full performance otherwise.

I wonder if it's related to being developed on OpenBSD, which is not exactly known for SMP scalability? Perhaps the distros will use it some day at least, even if the maintainers can't get out of the '70s.

Dang! No HPN-SSH!
Posted Mar 9, 2010 21:12 UTC (Tue) by djm (subscriber, #11651) [Link] (3 responses)

[Because we'd prefer to deal with the risks of deadlock and race] conditions in security software. AES on my 3 year old desktop is >800Mbit/s on a single core and RC4 is 2x faster again, so I don't think crypto performance is a massive problem.

Dang! No HPN-SSH!
Posted Mar 9, 2010 22:25 UTC (Tue) by djm (subscriber, #11651) [Link]

err, s/prefer/prefer not/

Dang! No HPN-SSH!
Posted Mar 11, 2010 2:48 UTC (Thu) by martinfick (subscriber, #4455) [Link] (1 response)

Dang! No HPN-SSH!
Posted Mar 11, 2010 5:26 UTC (Thu) by dlang (guest, #313) [Link]

On a very modern system I can transfer data twice as fast over local gigE networks with HTTP or FTP than I can via SSH.
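dlang's gigE observation and djm's single-core cipher figures can be sanity-checked from userspace. A rough, stdlib-only sketch that measures single-core HMAC-SHA1 throughput (one of SSH's MACs); the function name and parameters are mine, and absolute numbers are entirely machine-dependent:

```python
import hashlib
import hmac
import time

def hmac_sha1_mbps(total_mib: int = 64, chunk: int = 32 * 1024) -> float:
    """Stream total_mib MiB through HMAC-SHA1 in chunk-byte pieces and
    return approximate single-core throughput in Mbit/s."""
    mac = hmac.new(b"\x0b" * 20, digestmod=hashlib.sha1)
    data = b"\x00" * chunk
    iterations = total_mib * 1024 * 1024 // chunk
    start = time.perf_counter()
    for _ in range(iterations):
        mac.update(data)  # the SHA-1 compression function dominates the cost
    elapsed = time.perf_counter() - start
    return total_mib * 1024 * 1024 * 8 / elapsed / 1e6

if __name__ == "__main__":
    print(f"HMAC-SHA1: ~{hmac_sha1_mbps():.0f} Mbit/s on one core")
```

If cipher-plus-MAC throughput on one core is close to your LAN's line rate, an SSH transfer becomes CPU-bound, which is consistent with SSH trailing HTTP or FTP on gigE.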