Hello Anna,
welcome to the list ;-)
On Tue, Jan 28, 2014 at 02:58:35PM +0400, Анна Будкина wrote:
I'm measuring throughput between two hosts. I use Genode running on Fiasco.OC on one machine and monolithic Linux on the other. There is an 82579LM NIC on each host, and I'm running the netperf_lxip.run script. As the ACPI driver doesn't work on my machine, I'm using the pci_drv driver. Another problem is that level-triggered interrupts are not received, so I'm polling for interrupts in an internal loop in /os/src/lib/dde_kit/interrupt.cc:
[...]
I was slightly astonished by the poor benchmark results. So I tried today's Genode master with the following scenario:
* Genode on Lenovo T61 (82566mm, PCIe 8086:1049)
* Linux on T410 (82577LM, PCIe 8086:10ea)
With your patch I got
! PERF: TCP_STREAM 2.02 MBit/s
! PERF: TCP_MAERTS 8.00 MBit/s
This substantiates my assumption that the "polling" you implemented degrades performance significantly. The original code without polling produces
! PERF: TCP_STREAM 65.59 MBit/s
! PERF: TCP_MAERTS 543.35 MBit/s
which is no top-notch result but looks far more promising. We have not investigated the TCP_STREAM performance drop so far, but we suspect the NIC driver or its integration to be the cause.
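
To illustrate why such a polling workaround is so costly, here is a minimal sketch in plain C++. It is not the actual dde_kit code and all names in it are made up; it merely contrasts the two waiting strategies. A blocking wait wakes up the moment the interrupt fires, whereas a periodic poll notices it only at the next tick, so every interrupt can incur up to one polling interval of extra latency:

  #include <chrono>
  #include <condition_variable>
  #include <mutex>
  #include <thread>

  /*
   * Hypothetical stand-in for an interrupt source. 'fire()' plays the
   * role of the hardware raising the IRQ.
   */
  struct Irq_source
  {
      std::mutex              _mutex;
      std::condition_variable _cond;
      bool                    _pending = false;

      void fire()
      {
          std::lock_guard<std::mutex> guard(_mutex);
          _pending = true;
          _cond.notify_one();
      }

      /* blocking wait: returns as soon as the interrupt occurs */
      void wait_blocking()
      {
          std::unique_lock<std::mutex> lock(_mutex);
          _cond.wait(lock, [this] { return _pending; });
          _pending = false;
      }

      /*
       * Polling workaround: check the pending flag at a fixed interval.
       * Each interrupt is noticed up to 'interval' too late, which caps
       * the number of interrupt batches serviced per second.
       */
      void wait_polling(std::chrono::milliseconds interval)
      {
          for (;;) {
              {
                  std::lock_guard<std::mutex> guard(_mutex);
                  if (_pending) { _pending = false; return; }
              }
              std::this_thread::sleep_for(interval);
          }
      }
  };

With, say, a 10 ms interval, at most roughly 100 interrupt batches per second get serviced no matter how fast the NIC is, which would be consistent with the order-of-magnitude throughput drop measured above.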
Best regards