Hello, Genode Hackers!
I'm currently implementing an X86 PCI bus driver for L4Linux as a temporary solution for getting USB support in L4Linux.
Current genode-side code is presented below. The l4linux patch is not ready, but I've submitted the relevant part of the diff to pastebin (it's only makefile and kconfig anyway)
https://github.com/Ksys-labs/genode/blob/pci_wip/ports-foc/src/drivers/genod... https://github.com/Ksys-labs/genode/blob/pci_wip/ports-foc/src/lib/l4lx/geno... https://github.com/Ksys-labs/genode/blob/pci_wip/ports-foc/src/lib/l4lx/l4_i... http://pastebin.com/9c37tuJR
I have implemented a Linux driver which calls the Genode PCI service routines. I have also extended the l4lx I/O library to allow mapping PCI memory space. With these patches, the EHCI and OHCI drivers probe in Linux but fail to work because interrupts are not triggered. I have implemented a fake IRQ thread which kicks the IRQ handler every millisecond, and with that USB works. This indicates that the PCI subsystem and memory mapping are working correctly.
I've got some questions regarding interrupt handling in L4Linux and Genode in general.
1. Why are NR_IRQS_HW interrupts reserved on x86 in L4Linux? The EHCI controller wants interrupt number 11, which is lower than NR_IRQS_HW. OK, I worked around that by editing the constant and removing the "BUG_ON(NR_REQUESTABLE < 1)" in the init_array function of arch/l4/kernel/irq.c.

2. How are capabilities actually assigned? It would seem to me that l4x_register_irq allocates the capabilities in the first available slot, but l4x_have_irqcap looks them up by index. I worked around it by explicitly allocating 12 capabilities via l4x_cap_alloc, but still got error -2004 on IRQ attach. Now, I have had success in attaching the IRQ after I allocated the capability via the following code in l4lx:

  Genode::Foc_cpu_session_client cpu(Genode::env()->cpu_session_cap());
  p->irq_cap = cpu.alloc_irq().dst();

3. What is the purpose of multiplying the IRQ number by four in

  l4_msgtag_t ret = l4_irq_attach(p->irq_cap, data->irq << 2, l4x_cpu_thread_get_cap(p->cpu));

   As far as I can see, the IRQ number is never multiplied in the base-foc IRQ implementation.

4. Probably not a Genode question: do EHCI/PCI interrupts need a special kick to work? I have tried using Genode::Irq_session for IRQ 11, but wait_for_irq never returns (see the sketch below). On the other hand, I have tried a regular Linux ISO on the same machine (which is actually QEMU); the EHCI interrupt is indeed 11 there and it works fine.
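For reference, the Irq_session variant from point 4 boils down to roughly the following. This is an untested sketch: the function name is a placeholder, and the part that would forward the interrupt to the Linux IRQ handler is only indicated by a comment.

  /*
   * Untested sketch: wait for a legacy PCI interrupt through Genode's
   * Irq_session instead of attaching to a raw Fiasco.OC IRQ object.
   */
  #include <base/printf.h>
  #include <irq_session/connection.h>

  static void ehci_irq_loop(unsigned irq_number)
  {
      /* open an IRQ session at core for the given interrupt line */
      Genode::Irq_connection irq(irq_number);

      for (;;) {
          /* blocks until the interrupt fires - this is the call that
             currently never returns for IRQ 11 */
          irq.wait_for_irq();

          PDBG("IRQ %u occurred", irq_number);
          /* ... kick the corresponding Linux IRQ handler here ... */
      }
  }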
So, the question is: why is interrupt handling broken, such that the only working interrupt in L4Linux is the timer (which actually uses the L4 timer IPC instead of an IRQ), and how should this be fixed? Also, what do you think of making a "genode" arch for Linux and getting rid of L4Linux, which uses the Fiasco.OC API and bypasses the Genode interfaces?
Hi Alexander,
Since you are on x86 and it's interrupt eleven you are concerned about (is this Qemu?), I have one suggestion: Fiasco.OC on x86 uses the APIC instead of the aged PIC as interrupt controller. This means that the EHCI controller might not have IRQ 11 (on real hardware) or might have another interrupt mode on Qemu (lately IRQ 11 uses level/high instead of edge/high as trigger). To solve issues like that there is an ACPI driver in Genode. Please try adding 'drivers/acpi' to your scenario, add
<start name="acpi"> <resource name="RAM" quantum="2M"/> <binary name="acpi_drv"/> <provides> <service name="PCI"/> <service name="IRQ" /> </provides> <route> <service name="PCI"> <any-child /> </service> <any-service> <parent/> <any-child /> </any-service> </route> </start>
to your configuration file/run script, and remove the 'pci_drv' from your configuration, since the driver will be started by the 'acpi_drv'.
Hope this helps,
Sebastian
2013/3/18 Sebastian Sumpf <Sebastian.Sumpf@...1...>:
Since you are on x86 and it's interrupt eleven you are concerned about (is this Qemu?), I have one suggestion: Fiasco.OC on x86 uses the APIC instead of the aged PIC as interrupt controller. This means that the EHCI controller might not have IRQ 11 (on real hardware) or might have another interrupt mode on Qemu (lately IRQ 11 uses level/high instead of edge/high as trigger). To solve issues like that there is an ACPI driver in Genode. Please try adding 'drivers/acpi' to your scenario, add
<start name="acpi"> <resource name="RAM" quantum="2M"/> <binary name="acpi_drv"/> <provides> <service name="PCI"/> <service name="IRQ" /> </provides> <route> <service name="PCI"> <any-child /> </service> <any-service> <parent/> <any-child /> </any-service> </route> </start>
Hi! I tried doing that (already after writing the email), but for some reason it failed to claim the config register region (0xcf8). Digging through the source code, I found out that it is due to a region conflict. I was running a minimal configuration: timer, pci_drv, and acpi utilizing pci_drv. Our tree is a bit out of sync; I will try on Genode master later. I think it showed some errors. Does it handle boards with broken ACPI well? (I guess you are aware that the DSDT is almost always not standard-compliant due to the Microsoft compiler.)
So, the capability problem is related to the limitation on using <32 interrupts on x86, right? I guess I won't have to worry about it.
Ok, I will try getting genode irq_session to work with it first. Thanks for your suggestion.
Hi Alexander,
I'm currently implementing an X86 PCI bus driver for L4Linux as a temporary solution for getting USB support in L4Linux.
you are taking our port of L4Linux into an interesting direction. So far, we have deliberately kept our version of L4Linux void of any device-driver functionality and used it as a mere runtime environment for Linux programs. Can you give me a bit of background information behind your work? Are you going to use L4Linux as a device-driver OS, providing Genode service interfaces? Or do you just want to grant Linux access to (a subset of) real devices to avoid the need for native Genode device drivers and session interfaces for those devices?
Current genode-side code is presented below. The l4linux patch is not ready, but I've submitted the relevant part of the diff to pastebin (it's only makefile and kconfig anyway)
https://github.com/Ksys-labs/genode/blob/pci_wip/ports-foc/src/drivers/genod... https://github.com/Ksys-labs/genode/blob/pci_wip/ports-foc/src/lib/l4lx/geno... https://github.com/Ksys-labs/genode/blob/pci_wip/ports-foc/src/lib/l4lx/l4_i... http://pastebin.com/9c37tuJR
That looks good to me.
So, the question is: why is interrupt handling broken, such that the only working interrupt in L4Linux is the timer (which actually uses the L4 timer IPC instead of an IRQ), and how should this be fixed?
The reason is plain and simple: our version of L4Linux was never meant to access devices directly. I am surprised that you got that far. :-)
Also, what do you think of making a "genode" arch for Linux and getting rid of L4Linux, which uses the Fiasco.OC API and bypasses the Genode interfaces?
As you described in your article, the current situation is not exactly beautiful. We have a patched version of Linux (L4Linux) with the original git history gone, due to L4Linux' way of releasing snapshots via SVN. Then we put some patches on top, along with an L4Re emulation library. Finally, we garnish the whole thing with some Genode-specific stub drivers.
So why have we chosen this path? We thought that this way minimizes our ongoing investment in maintaining the Genode version of L4Linux. TUD is taking care of adapting L4Linux to changes of the Fiasco.OC kernel interface, so we don't have to do that. But TUD targets L4Re only and does not care about Genode. So, to leverage their work, we need our custom L4Re emulation library, which hopefully is pretty simple (that is actually true) and easy to maintain. This approach worked well as long as we did not make any invasive changes to L4Linux and used it just as is. Now that you are pushing the boundaries a bit more and making more intensive use of L4Linux, I can clearly see how the construct becomes a burden for you.
Regarding your last question, sure, I would welcome removing several levels of indirection that we currently rely on. The cleanest solution would certainly be to fork the vanilla Linux kernel and implement paravirtualization using Genode primitives only, thereby taking L4Linux and the L4Re emulation library out of the loop. On the other hand, I cannot really justify putting much effort into that because I regard virtualized Linux just as a stop-gap solution for running existing software until Genode supports it natively. Naturally, I would prefer to dedicate my energy to developing the native Genode environment instead.
Also, for the x86 architecture, there exists the Vancouver VMM, which is not only actively developed in the open, but whose core developers also cooperate closely with us (see https://github.com/genodelabs/genode/issues/666). So for x86 with hardware virtualization, I see no advantage of a paravirtualized Linux over the Vancouver VMM. We can just use Linux unmodified. The only difference is that Vancouver works on NOVA only, not Fiasco.OC ATM.
This leaves only ARM as the target platform where L4Linux seems to be most suitable. However, since Cortex-A15 CPUs come with hardware virtualization support, I find it more tempting to explore this route rather than investing time in pursuing the paravirtualization of the Linux kernel.
That is of course just my line of thinking. If you would like to go forward with developing a replacement for L4Linux that removes the superficial indirections that plague you today, that would be very welcome, and I'd certainly provide assistance if needed.
Cheers Norman
Hi there.
you are taking our port of L4Linux into an interesting direction. So far, we have deliberately kept our version of L4Linux void of any device-driver functionality and used it as a mere runtime environment for Linux programs. Can you give me a bit of background information behind your work? Are you going to use L4Linux as a device-driver OS, providing Genode service interfaces? Or do you just want to grant Linux access to (a subset of) real devices to avoid the need for native Genode device drivers and session interfaces for those devices?
The starting point of this story is the need to use a smart-card reader on top of USB. We had two possibilities:

1. Move the crypto applications out of L4Linux into a dedicated server and connect it to a dedicated USB host driver (the right approach).
2. Grant L4Linux the ability to communicate with the real hardware (the other approach).

For simplicity, let us say that we need to use proprietary Linux binaries, without sources, which communicate with the smart-card reader directly. In that case, the first approach is not an option for us.
Regarding your last question, sure, I would welcome removing several levels of indirection that we currently rely on. The cleanest solution would certainly be to fork the vanilla Linux kernel and implement paravirtualization using Genode primitives only, thereby taking L4Linux and the L4Re emulation library out of the loop. On the other hand, I cannot really justify putting much effort into that because I regard virtualized Linux just as a stop-gap solution for running existing software until Genode supports it natively. Naturally, I would prefer to dedicate my energy to developing the native Genode environment instead.
Agreed.
Also, for the x86 architecture, there exists the Vancouver VMM, which is not only actively developed in the open, but whose core developers also cooperate closely with us (see https://github.com/genodelabs/genode/issues/666). So for x86 with hardware virtualization, I see no advantage of a paravirtualized Linux over the Vancouver VMM. We can just use Linux unmodified. The only difference is that Vancouver works on NOVA only, not Fiasco.OC ATM.
As I said at FOSDEM'12, the popularity of a platform depends on the ability to re-use old code. If porting takes considerable time, it is, first, expensive, and second, it may mean that such a device or hardware-software complex never sees a release due to obsolescence. Re-use is the most important thing for me when I start to develop a new device.
I agree that it would be worthwhile to see L4Linux only as a sandbox for executing Linux binaries. Again, it is ideologically correct to exclude drivers from L4Linux. But there are some problems. In addition to the one noted above, there is another: performance. We are currently running experiments with L4Linux as a gateway with two dedicated NIC drivers. We see a performance degradation of around 85% compared to native Linux. That is too much, and if it turns out that moving the NIC driver back into L4Linux (i.e., giving the L4Linux network driver access to the hardware) decreases the degradation (the security architecture allows us to do it), we will use this approach.
And oddly enough, applications for ARM are easier to develop from scratch than to run as binaries under L4Linux, while software for network equipment based on x86 increasingly requires re-use. We have not looked at NOVA and Vancouver yet. If we cannot solve the performance problems with L4Linux, we will "taste" them.
Hi Vasily,
thanks for this nuanced discussion. It helps me a lot to understand your situation. For example, I didn't have the binary-blob problem on my radar at all.
I agree that it would be worthwhile to see L4Linux only as a sandbox for executing Linux binaries. Again, it is ideologically correct to exclude drivers from L4Linux. But there are some problems. In addition to the one noted above, there is another: performance. We are currently running experiments with L4Linux as a gateway with two dedicated NIC drivers. We see a performance degradation of around 85% compared to native Linux. That is too much, and if it turns out that moving the NIC driver back into L4Linux (i.e., giving the L4Linux network driver access to the hardware) decreases the degradation (the security architecture allows us to do it), we will use this approach.
I suspect that the poor networking performance stems from the dde_ipxe network driver, not from L4Linux. I would definitely recommend benchmarking the dde_ipxe driver individually, without L4Linux, e.g., by adding some code to the driver that transmits a predefined network packet and measures the achieved throughput.
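If it helps, here is a rough, untested sketch of such a measurement as a small native Genode component that pushes raw frames through the Nic session, so the driver is exercised without L4Linux (albeit through the session interface rather than by instrumenting the driver internally). The buffer sizes, packet count, and in-flight limit are arbitrary, and the packet-stream calls should be double-checked against the Nic session headers of the release you are using.

  /* nic_tx_bench.cc - rough transmit-throughput test for a native NIC driver */
  #include <base/env.h>
  #include <base/printf.h>
  #include <base/allocator_avl.h>
  #include <util/string.h>
  #include <nic_session/connection.h>
  #include <timer_session/connection.h>

  int main()
  {
      using namespace Genode;

      enum { PACKET_SIZE   = 1514,     /* max. Ethernet frame */
             NUM_PACKETS   = 100000,
             MAX_IN_FLIGHT = 32,       /* keep well below the tx-buffer capacity */
             BUF_SIZE      = 128*1024 };

      static Allocator_avl     tx_block_alloc(env()->heap());
      static Nic::Connection   nic(&tx_block_alloc, BUF_SIZE, BUF_SIZE);
      static Timer::Connection timer;

      unsigned            in_flight = 0;
      unsigned long const start_ms  = timer.elapsed_ms();

      for (unsigned i = 0; i < NUM_PACKETS; i++) {

          /* once enough packets are in flight, block for an acknowledgement */
          if (in_flight == MAX_IN_FLIGHT) {
              nic.tx()->release_packet(nic.tx()->get_acked_packet());
              in_flight--;
          }

          /* allocate a frame in the tx buffer, fill it, and hand it to the driver
             (a real test would craft a proper Ethernet header, e.g., broadcast) */
          Packet_descriptor p = nic.tx()->alloc_packet(PACKET_SIZE);
          memset(nic.tx()->packet_content(p), 0, PACKET_SIZE);
          nic.tx()->submit_packet(p);
          in_flight++;
      }

      /* wait for the remaining packets to be acknowledged by the driver */
      while (in_flight) {
          nic.tx()->release_packet(nic.tx()->get_acked_packet());
          in_flight--;
      }

      unsigned long ms = timer.elapsed_ms() - start_ms;
      if (!ms) ms = 1;

      /* bytes * 8 / (ms * 1000) = Mbit/s */
      unsigned long const mbit_per_s =
          ((unsigned long long)NUM_PACKETS * PACKET_SIZE * 8) / (ms * 1000);

      PDBG("sent %d packets of %d bytes in %lu ms (~%lu Mbit/s)",
           (int)NUM_PACKETS, (int)PACKET_SIZE, ms, mbit_per_s);

      return 0;
  }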
Another test would be to connect two L4Linux instances via nic_bridge and perform the benchmark between both instances. If the communication over the virtual network performs well, we can rule out L4Linux and the 'Nic_session' interface from being the problem.
And oddly enough, applications for ARM are easier to develop from scratch than to run as binaries under L4Linux, while software for network equipment based on x86 increasingly requires re-use. We have not looked at NOVA and Vancouver yet. If we cannot solve the performance problems with L4Linux, we will "taste" them.
Please let us know how you like the taste once you try it. ;-)
Norman
Hi Alexander,
Hi! I tried doing that (already after writing the email), but for some reason it failed to claim the config register region (0xcf8). Digging through the source code, I found out that it is due to a region conflict. I was running a minimal configuration: timer, pci_drv, and acpi utilizing pci_drv.
you must not start the PCI driver and the ACPI driver at the same time. Just start the ACPI driver. It will spawn the PCI driver as a child process. This would explain the conflicting access to the PCI ports.
Cheers Norman
I suspect that the poor networking performance stems from the dde_ipxe network driver, not from L4Linux. I would definitely recommend benchmarking the dde_ipxe driver individually, without L4Linux, e.g., by adding some code to the driver that transmits a predefined network packet and measures the achieved throughput.
Yes, thanks, we will. Anyway, I guess that in the near future a high-performance network subsystem (stack, drivers) will be one of our priority tasks.
Another test would be to connect two L4Linux instances via nic_bridge and perform the benchmark between both instances. If the communication over the virtual network performs well, we can rule out L4Linux and the 'Nic_session' interface from being the problem.
We use our own implementation for the communication between two L4Linux instances. We will compare the performance of nic_bridge with our implementation. Currently, our throughput between the two L4Linux instances is around 184 Mbit/s. The throughput of PC->NIC->L4Linux is 94 Mbit/s. I think the throughput of L4Linux->NIC->L4Linux will be the same.
Hello Norman,
I've done the test for nic_bridge, and I've updated the page with the results: http://ksyslabs.org/doku.php?id=genode_network_perfomance#nic_bridge_test
-- Ivan Loskutov
Hi Ivan,
I've done the test for nic_bridge, and I've updated the page with the results: http://ksyslabs.org/doku.php?id=genode_network_perfomance#nic_bridge_test
thanks for posting the results. One thing left me wondering: In the last scenario, the roles of both L4Linux instances look entirely symmetric. How can it be that the benchmark produces different results when swapping the roles of both instances? Are both instances configured identically? In particular, do they have the same amount of RAM configured? I'm asking because we observed that the TCP parameters of the Linux TCP/IP stack depend on the memory available to Linux.
Apart from that, your measurement seems to support my presumption that the driver is the bottleneck. Even with two Linux instances running and the overhead introduced by the indirection via nic_bridge, the throughput stays in the same order as native Linux.
Do you share my interpretation? If so, it would be worthwhile to focus the investigation on the driver.
Cheers Norman
Ivan,
you can easily see whether Linux has a suboptimal TCP configuration by opening multiple TCP connections at once and checking whether this increases total throughput. If it does, the TCP windows don't grow to the required size. Norman already mentioned it, but a simple way to fix this is to give Linux lots of RAM (1 GB does not hurt). I would expect your (native) setup to always max out the 1 Gbit link bandwidth. You should always plot CPU utilization as well.
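To get a feeling for the numbers: assuming, for example, a round-trip time of 1 ms on the link, the bandwidth-delay product at 1 Gbit/s is 1 Gbit/s * 1 ms = 1 Mbit, roughly 125 KB, so a single connection whose TCP window stays well below that cannot saturate the link no matter how fast the driver is.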
Julian
Hi Norman,
2013/3/22 Norman Feske <norman.feske@...1...>
How can it be that the benchmark produces different results when swapping the roles of both instances? Are both instances configured identically? In particular, do they have the same amount of RAM configured?
Yes, both L4Linux instances have the same configuration; I just started the iperf server first on one L4Linux and then on the other. You can see my run script for this case: https://github.com/Ksys-labs/genode/blob/staging/iloskutov/run/fedora_nicbri...
I'm asking because we observed that the TCP parameters of the Linux TCP/IP stack depend on the memory available to Linux.
Ok, I'll try to test again with more memory for L4Linux.
-- Ivan Loskutov
Hello Norman,
I have continued investigating this problem. I tried disabling SMP in Fiasco.OC and removing the VCPU affinity from the L4Linux command line. In this case, the benchmark with nic_bridge shows symmetric results, and I obtained a bandwidth of about 880 Mbit/s.
-- Ivan Loskutov