Hi Daniel,
OK, I think this clears things up for me - the hazards of Genode hacking! ;)
I am not quite sure what you mean by "hazard". The mechanism Stefan described is actually a safety net that relieves the users of the framework from the burden of managing the lifetime of capabilities manually. I'd say that doing the lifetime management of capabilities manually would be hazardous. In contrast, the Genode API provides a coherent and safe way that avoids leaking capabilities (and the associated kernel resources).
The problem you are facing right now is that you are deliberately breaking through the abstraction of the API and thereby (unknowingly) violating an invariant that is normally guaranteed by the Genode API implementation. In particular, you create capabilities out of thin air, which is not possible via the legitimate use of the API. Because this invariant is not satisfied anymore, another part of the API (the RPC marshalling of capabilities) that relies on it does not work as expected.
So I second Stefan's first suggestion (letting core create capabilities and export them via a core service) because this solution does not work against the design of Genode.
That said, there might be a third solution, which is the creation of a valid ID manually without involving core's CAP service. This is done for constructing the parent capability at process startup:
https://github.com/genodelabs/genode/blob/master/base-foc/src/platform/_main...
Following this procedure, a valid Genode capability gets created, which can then, in principle, be delegated via RPC. By using 'cap_map()->insert()', the code satisfies the invariant needed by the RPC mechanism to marshal the capability.
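To illustrate the idea, here is a rough sketch of such a wrapper. Please take it with a grain of salt: the function name 'wrap_foc_selector' is made up, and the exact signatures of 'cap_map()->insert()', 'Cap_index', and 'Native_capability' in base-foc may differ from what I write from memory here, so treat it as pseudocode rather than compilable code:

```cpp
/*
 * Sketch only - identifiers and signatures are assumptions based on
 * the base-foc internals, not verified against the current sources.
 */

/* hypothetical helper, not part of the Genode API */
Genode::Native_capability wrap_foc_selector(Fiasco::l4_cap_idx_t kcap)
{
	using namespace Genode;

	/*
	 * Register the raw Fiasco.OC capability selector in the
	 * process-local capability map. This establishes the invariant
	 * (a valid ID known to the cap map) that the RPC marshalling
	 * code relies on.
	 */
	Cap_index *idx = cap_map()->insert(/* unique ID */, kcap);

	/* the resulting Genode capability can now be delegated via RPC */
	return Native_capability(idx);
}
```

The crucial point is merely that the selector must be made known to the capability map before the capability is handed to the RPC layer; how the unique ID is obtained is left open here.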
This way, you could wrap a Fiasco.OC capability selector (e.g., a scheduler cap selector) into a Genode capability in order to delegate it to another process. I guess this is what you'd like to do?
@Stefan: Would that be a feasible approach?
Cheers
Norman