Hi Udo,
thanks for your very helpful explanation!
I am just wondering: If DMA addresses are host-virtual addresses and there are multiple processes running, how does the IOMMU know which virt-to-phys translation to use? I vaguely remember that you once told me that devices must be associated with PDs. Is this correct? If so, how does a driver PD express to the hypervisor that it deals with a certain device?
With the IOMMU active, a user-level device driver must specify which memory regions of its PD are DMA-able. This is done by setting the D-bit in the delegate transfer item. All memory mappings where the D-bit is not set will not be DMA-able (unless the IOMMU is inactive). My recommendation is that device drivers map their own code and private data non-DMA-able and only allow DMA transfers to I/O buffer regions.
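To illustrate the idea, here is a minimal sketch of a transfer item carrying such a D-bit. The field layout, bit position, and names below are illustrative assumptions for this discussion, not NOVA's actual encoding:

```cpp
#include <cassert>
#include <cstdint>

/*
 * Hypothetical delegate transfer item: a page-aligned base, a size
 * order, and a D-bit marking the range as DMA-able. Bit positions
 * are assumptions for illustration only.
 */
struct Transfer_item {
    uint64_t raw;

    enum : uint64_t {
        ORDER_MASK = 0x1full,       /* assumed: order in bits 0..4  */
        D_BIT      = 1ull << 9,     /* assumed: D-bit at bit 9      */
    };

    static Transfer_item mem(uint64_t base, unsigned order, bool dmaable)
    {
        uint64_t v = (base & ~0xfffull) | (order & ORDER_MASK);
        if (dmaable)
            v |= D_BIT;             /* region may be a DMA target */
        return { v };
    }

    bool dmaable() const { return raw & D_BIT; }
};
```

A driver following the recommendation above would delegate its code and private data without the D-bit and set it only for its I/O buffer regions.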
This seems to fit quite nicely with the recent addition of a facility to explicitly allocate DMA buffers via core's RAM session interface:
https://github.com/genodelabs/genode/commit/288fd4e56e636a0b3eb193353cf80286...
Currently, this function is meaningful only on ARM on Fiasco.OC. But it looks like a good way to handle the D-bit on NOVA without the need of any special precautions at the driver side. Because core is the pager of the driver, it could always map DMA buffers (and only those) with the D-bit set.
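The pager-side decision could then stay trivial. The following sketch only models the policy described above; the types and names are hypothetical, not core's actual interfaces:

```cpp
#include <cassert>

/*
 * Sketch: core, acting as the driver's pager, sets the D-bit only
 * for dataspaces that were explicitly allocated as DMA buffers.
 * 'Dataspace' and 'map_with_d_bit' are hypothetical names.
 */
struct Dataspace { bool dma_buffer; };

bool map_with_d_bit(Dataspace const &ds, bool iommu_active)
{
    /* without an IOMMU, the D-bit carries no meaning */
    return iommu_active && ds.dma_buffer;
}
```

This way, the driver never handles the D-bit itself; the distinction is made once, at allocation time.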
Also, when the IOMMU is active, the addresses programmed into DMA transfers must be host-virtual addresses. This alleviates device drivers from having to know physical memory addresses. They can DMA into their virtual address space. In contrast, when the IOMMU is inactive, the addresses in DMA transfers must be host-physical addresses.
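So the device-visible address depends on whether the IOMMU is active, which can be condensed to a one-line rule (a sketch; the function name is hypothetical):

```cpp
#include <cassert>
#include <cstdint>

/*
 * Address to program into a DMA descriptor: with an active IOMMU it
 * is the driver's host-virtual address, otherwise host-physical.
 */
uint64_t dma_address(bool iommu_active, uint64_t virt, uint64_t phys)
{
    return iommu_active ? virt : phys;
}
```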
Is the use of virtual addresses mandatory? If so, the driver must be aware of the presence of the IOMMU, mustn't it? It would be nice to find a way of using the IOMMU that is transparent to the driver.
Cheers Norman