Ivan,
you can easily check whether Linux has a suboptimal TCP configuration by opening multiple TCP connections at once and observing whether this increases the total throughput. If it does, the TCP windows are not growing to the required size. Norman already mentioned it, but a simple way to fix this is to give Linux plenty of RAM (1 GB does not hurt). I would expect your (native) setup to always max out the 1 Gb link bandwidth. You should always plot CPU utilization as well.
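The diagnostic above can be sketched as a small loopback benchmark: measure throughput with one stream, then with several, and compare. This is illustrative only; the helper names, the 0.5 s measurement window, and the use of localhost are my own choices, not from the thread. On real hardware you would run the sink and the senders on the two Linux instances instead of loopback:

```python
import socket
import threading
import time

CHUNK = 64 * 1024
DURATION = 0.5  # seconds per measurement; short, illustrative only

def sink(server_sock):
    # Accept connections and discard all incoming data.
    while True:
        try:
            conn, _ = server_sock.accept()
        except OSError:
            return
        def drain(c):
            with c:
                while c.recv(CHUNK):
                    pass
        threading.Thread(target=drain, args=(conn,), daemon=True).start()

def send_for(addr, deadline, counter, lock):
    # Push data over one TCP connection until the deadline passes.
    data = b"x" * CHUNK
    sent = 0
    with socket.create_connection(addr) as s:
        while time.time() < deadline:
            s.sendall(data)
            sent += len(data)
    with lock:
        counter[0] += sent

def measure(addr, streams):
    # Run the given number of parallel senders, return total MB/s.
    deadline = time.time() + DURATION
    counter, lock = [0], threading.Lock()
    threads = [threading.Thread(target=send_for,
                                args=(addr, deadline, counter, lock))
               for _ in range(streams)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter[0] / DURATION / 1e6

if __name__ == "__main__":
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen()
    addr = srv.getsockname()
    threading.Thread(target=sink, args=(srv,), daemon=True).start()
    for n in (1, 4):
        print("%d stream(s): %.1f MB/s" % (n, measure(addr, n)))
```

If four streams yield noticeably more aggregate throughput than one, the per-connection window (not the link or the driver) is the limiting factor.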
Julian
Norman Feske <norman.feske@...1...> wrote:
Hi Ivan,
I've done a test for nic_bridge and updated the page with the results:
http://ksyslabs.org/doku.php?id=genode_network_perfomance#nic_bridge_test
thanks for posting the results. One thing left me wondering: In the last scenario, the roles of both L4Linux instances look entirely symmetric. How can it be that the benchmark produces different results when swapping the roles of both instances? Are both instances configured identically? In particular, do they have the same amount of RAM configured? I'm asking because we observed that the TCP parameters of the Linux TCP/IP stack depend on the memory available to Linux.
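One way to verify this would be to compare the kernel's TCP buffer autotuning limits in both L4Linux instances, since Linux sizes these from the memory available at boot. A minimal sketch, assuming a standard Linux /proc filesystem is available in the guest (the paths are standard, the values vary per system):

```python
# Print the Linux TCP buffer autotuning limits ("min default max",
# in bytes). These are derived from available RAM at boot, so a
# memory-starved instance ends up with smaller maximum windows.
for param in ("tcp_rmem", "tcp_wmem"):
    with open("/proc/sys/net/ipv4/" + param) as f:
        print(param + ":", f.read().strip())
```

If the two instances print different maxima, that alone could explain the asymmetric benchmark results.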
Apart from that, your measurement seems to support my presumption that the driver is the bottleneck. Even with two Linux instances running, and despite the overhead introduced by the indirection via nic_bridge, the throughput stays in the same order of magnitude as with native Linux.
Do you share my interpretation? If so, it would be worthwhile to focus the investigation on the driver.
Cheers
Norman
--
Dr.-Ing. Norman Feske
Genode Labs

http://www.genode-labs.com · http://genode.org

Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth
_______________________________________________
Genode-main mailing list
Genode-main@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/genode-main