Hello 1 2,
On 20.01.2016 21:00, 1 2 wrote:
Yes, the problem was in usb_drv. Namely, vbox initially issues a pile of single-block reads. At first, rump_fs coalesces these reads into requests of 8 blocks each (i.e., a single request to part_blk and then to usb_drv -> a single answer), but at some point it changes strategy and sends several 64-block requests at once without awaiting the answers, which the old storage.cc code could not handle. Sincere thanks for your help!
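For illustration, here is a simplified and purely hypothetical sketch (not the actual storage.cc code, all names made up) of why such pipelined requests break a backend that keeps only a single request in flight:

  /* purely illustrative -- hypothetical names, not the real storage.cc code */

  #include <queue>
  #include <stdexcept>

  struct Request { unsigned long block_number; unsigned block_count; };

  /* old assumption: only one request is ever in flight at a time */
  struct Single_slot_backend
  {
      bool    pending = false;
      Request current { };

      void submit(Request const &r)
      {
          if (pending)   /* a second request arrives before the first answer */
              throw std::runtime_error("request lost - one-at-a-time assumption violated");
          current = r;
          pending = true;
      }

      void ack() { pending = false; }   /* answer from the device arrived */
  };

  /* fixed behaviour: keep all outstanding requests and answer them in order */
  struct Queued_backend
  {
      std::queue<Request> outstanding;

      void submit(Request const &r) { outstanding.push(r); }
      void ack()                    { if (!outstanding.empty()) outstanding.pop(); }
  };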
Nice!
For me, this was only the first step. What I want next: is it possible to pass through at least the GPU into a VM? Do you (Genode Labs) have any plans for that task? Or at least some guidance? AFAIK VirtualBox itself has such a capability: https://www.virtualbox.org/manual/ch09.html#pcipassthrough
We have not enabled this feature in the VBox VMM port to Genode. Currently, there are no plans on Genode Labs' side to do it.
But, of course, we can assist you if you're willing to invest the effort.
In principle, the assignment of devices to VMs (with IOMMUs) already works. It was done for NUL and the Vancouver VMM [0] at TU Dresden in the past (~2009/2010++). It was probably also done for Genode and the Seoul VMM (a kind of successor to the Vancouver VMM) by Intel Labs Braunschweig, around Udo Steinberg's group. Unfortunately, Intel decided to close the Braunschweig office. We at Genode Labs, however, have not experimented with PCI passthrough and the Seoul VMM so far.
I would enable the PCI-passthrough part of the VBox VMM/Genode port and see what must be done there. Once you manage to get it to compile, the missing backend functions (probably regarding PCI discovery, I/O memory, and I/O ports) must be adapted in the Genode/VBox component to the Genode platform driver interface. Here we/I are willing to assist you.
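To give a rough idea of what adapting those backend functions to the Genode platform driver interface could look like, here is a sketch of enumerating PCI devices and their I/O-memory / I/O-port resources via the platform session. The header paths, class names, and signatures below follow my recollection of the legacy platform_session/platform_device interface and may not match your Genode release exactly, so please treat it as a starting point only:

  /* sketch only -- based on the legacy platform_session/platform_device
   * interface; names and signatures may differ between Genode releases */

  #include <base/printf.h>
  #include <platform_session/connection.h>
  #include <platform_device/client.h>

  static void probe_pci_devices()
  {
      Platform::Connection pci;   /* session to the platform (PCI) driver */

      for (Platform::Device_capability cap = pci.first_device();
           cap.valid(); cap = pci.next_device(cap)) {

          Platform::Device_client device(cap);

          /* PCI discovery: bus/device/function and IDs */
          unsigned char bus = 0, dev = 0, fn = 0;
          device.bus_address(&bus, &dev, &fn);
          PINF("PCI %02x:%02x.%x vendor=%04x device=%04x",
               bus, dev, fn, device.vendor_id(), device.device_id());

          /* resources (BARs): I/O memory vs. I/O ports */
          for (int i = 0; i < 6; i++) {
              Platform::Device::Resource res = device.resource(i);
              if (res.type() == Platform::Device::Resource::MEMORY)
                  PINF("  BAR%d: I/O memory base=%08x size=%x", i, res.base(), res.size());
              if (res.type() == Platform::Device::Resource::IO)
                  PINF("  BAR%d: I/O ports  base=%04x size=%x", i, res.base(), res.size());
          }

          /* the actual I/O-memory, I/O-port and IRQ sessions would be
           * obtained via device.io_mem(..), device.io_port(..) and
           * device.irq(..) and wired to the VBox backend callbacks */
      }
  }

Config-space accesses of the passed-through device would then presumably go through config_read()/config_write() of the same interface.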
Cheers,
Alexander Boettcher.