Hi Stefan,
On 05/06/2015 09:40 AM, Stefan Brenner wrote:
On 05.05.2015 11:29, Stefan Kalkowski wrote:
Hi,
On 04/28/2015 02:26 PM, Stefan Brenner wrote:
Hi,
just a short question: I am using the TZ VMM example on i.MX53. How can I synchronize accesses to a shared memory range between the secure world and the normal world? I would be interested in a mutex that can be acquired both from Genode in the secure world and from Linux in the normal world.
From the secure world's side, the synchronization is implicitly
available. Whenever the VMM receives a VM exception signal, it knows that the VM is in the pause state and cannot interrupt it. Moreover, if the VMM receives other asynchronous events (e.g., events from some backend devices), it can call pause() on the VM object. Once the pause() call returns, the VMM can be sure that the VM is not scheduled anymore. By calling run() after the critical section, the VMM makes the VM executable again.
From the normal world's perspective, you might explicitly ask the secure world via a hypercall when entering or leaving a critical section.
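Just as an illustration, such a hypercall on the Linux side could be as small as the following untested sketch. The function name, the IDs, and the register convention are made up here; the real protocol is whatever the secure-world VMM defines.

  /* Hypothetical hypercall wrapper for the normal world. The function IDs
   * (VMM_LOCK/VMM_UNLOCK) and the single-register convention are made up;
   * the actual protocol is defined by the secure-world VMM. Assembling
   * "smc" needs the security extension, hence the .arch_extension line. */
  static inline unsigned long smc_call(unsigned long function_id)
  {
          register unsigned long r0 asm("r0") = function_id;

          asm volatile(".arch_extension sec\n"
                       "smc #0"
                       : "+r" (r0)
                       :
                       : "memory");
          return r0;
  }

  /* usage: smc_call(VMM_LOCK); ...critical section...; smc_call(VMM_UNLOCK); */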
Of course, you can also implement an inter-world capable mutex lying within a shared memory region that is mapped uncached in both worlds. So far we have not needed such a mutex for our experimental VMM, therefore no implementation exists.
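If you implement such a mutex, here is an untested sketch of what a two-party lock in the uncached region could look like (all names are made up). It uses Peterson's algorithm, which needs only plain loads and stores plus barriers, so it does not depend on how exclusive accesses (ldrex/strex) behave on uncached memory.

  /* Illustrative sketch only. The structure is assumed to live in the
   * shared region that both worlds map uncached. */
  struct world_lock {
          volatile unsigned want[2]; /* index 0: normal world, 1: secure world */
          volatile unsigned turn;
  };

  static void world_lock_acquire(struct world_lock *l, unsigned self)
  {
          unsigned other = 1 - self;

          l->want[self] = 1;
          l->turn       = other;
          __sync_synchronize();                      /* full memory barrier (dmb) */
          while (l->want[other] && l->turn == other)
                  ;                                  /* spin until the other world yields */
  }

  static void world_lock_release(struct world_lock *l, unsigned self)
  {
          __sync_synchronize();
          l->want[self] = 0;
  }

Keep in mind that on a single-core i.MX53 a spinning waiter only makes progress if the other world can still be scheduled, so critical sections should be short, and the secure side should avoid spinning while the VM is not running.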
Thanks a lot for your answer! This sounds good to me, at least from the secure world's perspective. However, the other way round it would be far too much overhead for my normal world application to switch back and forth to the secure world twice (once for lock and once for unlock) for each critical section.
It would be more suitable for my app not to do the critical action in the normal world, but to ask the secure world to do it. Then it would be synchronized as well, with only one world switch back and forth necessary.
I think I have to bite the bullet and implement what you mention, a mutex in a shared memory section. Do you have any further input for me regarding this? Especially about the "mapped uncached" property?
as Martin and I explained yesterday to Ofer Hasson on the mailing list, the guest OS RAM is already mapped uncached within the VMM. Please have a look at the Ram class abstraction used by the VMM to get access to it.
In the Linux kernel you can allocate some DMA-capable memory (keyword: dma_alloc_coherent), retrieve its physical address, and propagate that address to the VMM. The VMM can then use its Ram object to translate the physical address back to the virtual address of the related memory region.
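For illustration, an untested kernel-side sketch of that allocation could look like this. The function names are made up, a real driver would pass its own struct device, and how the physical address gets handed over to the VMM (e.g., in a register of an SMC call) is not shown.

  #include <linux/device.h>
  #include <linux/dma-mapping.h>
  #include <linux/gfp.h>
  #include <linux/mm.h>

  /* Illustrative only: allocate one page of coherent (uncached on ARM)
   * memory and report its bus/physical address. */
  static void      *shared_virt;
  static dma_addr_t shared_phys;

  static int alloc_shared_page(struct device *dev)
  {
          shared_virt = dma_alloc_coherent(dev, PAGE_SIZE, &shared_phys, GFP_KERNEL);
          if (!shared_virt)
                  return -ENOMEM;

          dev_info(dev, "shared page: virt=%p phys=%pad\n", shared_virt, &shared_phys);
          /* ...propagate shared_phys to the secure-world VMM here... */
          return 0;
  }

  static void free_shared_page(struct device *dev)
  {
          dma_free_coherent(dev, PAGE_SIZE, shared_virt, shared_phys);
  }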
Here is a reference that helps to understand allocation of uncached memory in the Linux kernel:
http://linuxkernelhacker.blogspot.de/2014/07/arm-dma-mapping-explained.html
Best Regards
Stefan
Regards
Stefan
Ciao