Asynchronous Nested Page Fault Handling
Norman Feske
norman.feske at ...1...
Mon Sep 12 22:39:36 CEST 2011
Hi Daniel,
> I am wondering why Genode uses asynchronous signals to call custom
> nested page fault-handlers. Can someone explain why? It would seem
> more sensible to use synchronous IPC for this purpose.
the answer to this question is not just a matter of the communication
mechanism used but a matter of trust relationships. If core employed a
synchronous interface for reflecting page faults to user land, it
would make itself dependent on the proper operation of each dataspace
manager involved. I.e., if core called a dataspace manager via
synchronous IPC (say, by invoking an RPC function 'resolve_fault'), it
could not be sure that the call would ever return.
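To make that concrete, such a hypothetical synchronous fault-reflection
interface might look roughly like the sketch below. The struct and the
'resolve_fault' function are made up for illustration and are not part
of Genode's API:

  /*
   * Hypothetical synchronous fault-reflection interface (made-up names,
   * not part of Genode). If core invoked 'resolve_fault' via synchronous
   * IPC, it would block in this call until the dataspace manager
   * replies - with no guarantee that a reply ever arrives.
   */
  struct Dataspace_manager_interface
  {
      virtual void resolve_fault(unsigned long fault_addr, bool write_fault) = 0;

      virtual ~Dataspace_manager_interface() { }
  };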
In contrast, by using asynchronous notifications, core hands out the
information that something interesting happened as a fire-and-forget
message to the dataspace manager. This way, core does not make itself
dependent on any higher-level user-space component. The dataspace
manager can respond to this signal by querying page fault information
from core. This query can be done via synchronous IPC because the
dataspace manager trusts core anyway.
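As a rough sketch of this pattern: 'Fault_info' and 'Managed_region'
below are made-up placeholders rather than Genode's actual RM-session
interface; only the signal-receiver usage refers to Genode's
'base/signal.h' API.

  /*
   * Sketch of a signal-driven fault handler within a dataspace manager.
   * 'Fault_info' and 'Managed_region' are illustrative placeholders,
   * not Genode's actual RM-session interface.
   */
  #include <base/signal.h>

  struct Fault_info { unsigned long addr; bool write; };

  struct Managed_region
  {
      /* synchronous RPC to core - fine, because we trust core anyway */
      Fault_info query_fault_info();

      /* attach the appropriate backing store at the faulting address */
      void resolve(Fault_info const &info);
  };

  void handle_faults(Genode::Signal_receiver &sig_rec, Managed_region &region)
  {
      for (;;) {

          /* block until core delivers a fault notification (fire and forget) */
          sig_rec.wait_for_signal();

          /* query the fault details from core and populate the dataspace */
          region.resolve(region.query_fault_info());
      }
  }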
I should mention that there exists an alternative design for
implementing nested dataspaces using synchronous IPC. This concept is
commonly referred to as the "local region mapper". In this approach, the
pager of a process (called region mapper) resides in the same address
space as the process (the pager thread itself is paged by someone else).
If any thread of the process (other than the pager) produces a page
fault, a page-fault message is delivered to the local region mapper. The
region mapper can then request flexpage mappings directly from a
dataspace manager and receive map items in response via synchronous IPC.
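For illustration, the pager loop of such a local region mapper could
look schematically as follows. All names here ('wait_for_page_fault',
'request_flexpage', 'reply_with_mapping') are made up and do not
correspond to any particular L4 kernel or Genode API:

  /*
   * Schematic pager loop of a local region mapper (made-up names,
   * not a real L4 or Genode interface).
   */
  struct Page_fault { unsigned long addr; bool write; int faulter; };
  struct Flexpage   { unsigned long base; unsigned long log2_size; };

  /* receive a page-fault IPC from a faulting thread of this process */
  Page_fault wait_for_page_fault();

  /* synchronous IPC to the dataspace manager, answered with a map item */
  Flexpage request_flexpage(unsigned long addr, bool write);

  /* reply to the faulter, thereby installing the mapping locally */
  void reply_with_mapping(int faulter, Flexpage const &fpage);

  void local_region_mapper_loop()
  {
      for (;;) {
          Page_fault fault = wait_for_page_fault();
          Flexpage   fpage = request_flexpage(fault.addr, fault.write);
          reply_with_mapping(fault.faulter, fpage);
      }
  }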
Even though the "local region mapper" concept can be implemented on
Genode (we did some prototyping in the past), we discarded the concept
for the following reasons:
* The region mapper must possess a capability to communicate directly
with the dataspace manager. On Genode, in contrast, managed dataspaces
are entirely transparent to the process using them.
* The dataspace manager must possess a direct communication right to
the user of its dataspaces (to send mappings via IPC). In contrast,
on Genode, a dataspace manager does not need direct communication
rights to anyone using its dataspaces. It interacts with core only.
* The local region mapper must be paged - so a special case for handling
this thread is always needed.
* By sending flexpage mappings via synchronous IPC, memory mappings
would get established without core knowing about them. As an ultimate
consequence, the system would depend on an in-kernel mapping database
for revoking these mappings later on (e.g., for regaining physical
resources during the destruction of a process). I regard the in-kernel
mapping database as the most unfortunate part of most L4 kernel
designs. Genode does not depend on such a kernel feature.
* (Somewhat related to the previous point) The local region mapper
concept requires an IPC mechanism that supports the communication
of memory mappings in addition to normal message payloads.
That said, the current state of Genode's managed dataspace concept is
not carved in stone. It is primarily designed for use cases that
require a few, fairly large managed dataspaces. Keep in mind that each
managed dataspace is actually an RM session that must be paid for. If
we see the concept of managed dataspaces picked up for implementing
many small dataspaces, we should look for a more lightweight mechanism.
Coming back to your original question: What is your actual concern about
using asynchronous notifications for implementing managed dataspaces?
Have you come up with a clever idea to implement a synchronous protocol
instead? I would love to explore this.
Cheers
Norman
--
Dr.-Ing. Norman Feske
Genode Labs
http://www.genode-labs.com · http://genode.org
Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth