On 02/20/2013 11:53 AM, Norman Feske wrote:
Hi Daniel,
OK, I think this clears things up for me - the hazards of Genode
hacking! ;)
I am not quite sure what you mean by "hazard". The mechanism Stefan
described is actually a safety net that relieves the users of the
framework from the burden of managing the lifetime of capabilities
manually. I'd rather say that doing the lifetime management of capabilities
manually would be hazardous. In contrast, the Genode API provides a
coherent and safe way that avoids leaking capabilities (and the
associated kernel resources).
The problem you are facing right now is that you are deliberately
breaking through the abstraction of the API and thereby (unknowingly)
violating an invariant that is normally guaranteed by the Genode API
implementation. In particular, you create capabilities out of thin air,
which is not possible via the legitimate use of the API. Because this
invariant is not satisfied anymore, another part of the API (RPC
marshalling of capabilities) that relies on it does not work as expected.
So I second Stefan's first suggested solution (letting core create
capabilities and export them via a core service), as this solution does
not work against the design of Genode.
That said, there might be a third solution, which is to create a valid
ID manually without involving core's CAP service. This is done for
constructing the parent capability at process startup:
https://github.com/genodelabs/genode/blob/master/base-foc/src/platform/_main_parent_cap.h
Following this procedure, a valid Genode capability gets created, which
can then, in principle, be delegated via RPC. By using
'cap_map()->insert()', the code satisfies the invariant needed by the
RPC mechanism to marshal the capability.
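For illustration, here is a minimal sketch of that procedure, loosely
following the pattern of '_main_parent_cap.h'. The header names, the
'Native_capability(Cap_index*)' constructor, and the exact signature of
'cap_map()->insert()' are assumptions that may differ between Genode
releases, and obtaining a system-wide unique 'unique_id' is left to the
caller:

  /* sketch only - API details assumed from base-foc of that era */

  #include <base/cap_map.h>        /* cap_map(), Cap_index (header name assumed) */
  #include <base/native_types.h>   /* Native_capability, Fiasco namespace (assumed) */

  namespace Genode {

      /*
       * Wrap a raw Fiasco.OC capability selector into a Genode capability.
       *
       * 'unique_id' must be unique within the whole system - normally, such
       * IDs are handed out by core's CAP service, which is exactly the part
       * bypassed here.
       */
      static inline Native_capability
      wrap_foc_selector(Fiasco::l4_cap_idx_t sel, unsigned long unique_id)
      {
          /*
           * Registering the (ID, selector) tuple in the cap map satisfies
           * the invariant that the RPC marshalling code relies on.
           */
          Cap_index *idx = cap_map()->insert(unique_id, sel);
          return Native_capability(idx);
      }
  }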
This way, you could wrap a Fiasco.OC capability selector (e.g., a
scheduler cap selector) into a Genode capability in order to delegate it
to another process. I guess this is what you'd like to do?
@Stefan: Would that be a feasible approach?
Well, not really. The parent capability is a corner case. It's the only
capability that is inserted manually without using the IPC framework,
because we need it to perform the first IPC in the first place.
To enable the use of the parent capability, the parent stores its
capability ID at a specific place (&_parent_cap) when setting up the new
child's address space.
For all capabilities "created out of thin air", the problem remains of
how to obtain a valid capability ID.
A viable third way, without using core's CAP service, would be to shrink
the ID range used by core and use the IDs that become free. Of course,
the problem remains how to divide up the IDs between potentially
different tasks.
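As a purely hypothetical illustration of that idea: assume core's ID
range were shrunk so that IDs from some made-up constant 'MANUAL_ID_BASE'
upwards are never handed out by core. A component could then mint IDs
from this reserved range itself; how several tasks would share such a
range is left open, as noted:

  /* hypothetical sketch - the reserved range and its layout are made up */

  enum { MANUAL_ID_BASE = 0x80000 };   /* assumed to lie above core's shrunken range */

  static unsigned long alloc_manual_id()
  {
      /* naive per-task counter - dividing the range among tasks is the open issue */
      static unsigned long next_id = MANUAL_ID_BASE;
      return next_id++;
  }

  /*
   * An ID minted this way would then be registered together with the raw
   * Fiasco.OC selector via 'cap_map()->insert(id, sel)', as sketched above.
   */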
@Daniel: The burden of having global capability IDs, a capability
registry, retrieval etc. wouldn't exist if the kernel API allowed
identifying capability duplicates when receiving one. Currently, the
only way to find out whether a received capability already exists in the
protection domain is either to compare it against all capabilities one
possesses, or to use an additional identifier. The first approach is
obviously not feasible, because every comparison between two
capabilities costs one kernel syscall. That means if you own 100
capabilities, you have to do 100 syscalls whenever you receive a new
capability. Therefore, we've chosen the second approach of using a
globally unique ID that is sent in addition to the capability.
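To make the cost concrete, here is a hedged sketch of what such a
brute-force duplicate check would look like, using Fiasco.OC's
'l4_task_cap_equal()'; the 'owned_caps' bookkeeping is hypothetical:

  /* sketch only - illustrates the cost of the brute-force approach */

  #include <l4/sys/task.h>     /* l4_task_cap_equal() */
  #include <l4/sys/consts.h>   /* L4_BASE_TASK_CAP */

  /* hypothetical bookkeeping of all selectors this protection domain owns */
  extern l4_cap_idx_t owned_caps[];
  extern unsigned     owned_cap_cnt;

  /*
   * Check whether 'received' refers to a kernel object we already own a
   * capability for. Each 'l4_task_cap_equal()' call is one syscall, so a
   * task owning N capabilities pays N syscalls per received capability -
   * hence the globally unique ID sent along with the capability instead.
   */
  static bool is_duplicate(l4_cap_idx_t received)
  {
      for (unsigned i = 0; i < owned_cap_cnt; i++) {
          l4_msgtag_t tag = l4_task_cap_equal(L4_BASE_TASK_CAP,
                                              owned_caps[i], received);
          if (l4_msgtag_label(tag) == 1)
              return true;
      }
      return false;
  }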
NOVA is an example of a capability-based kernel where this additional ID
isn't needed anymore.
Best regards
Stefan
Cheers
Norman