Checkpoint/restore of capabilities

Denis Huber huber.denis at ...435...
Mon Oct 10 17:27:12 CEST 2016


Hello Norman,

thanks again for your explanation.

It is good to hear that I do not have to checkpoint the 
component-internal session capabilities if they are not used by the 
component itself. But what about the capabilities that are created 
locally during Entrypoint creation?

In particular, when the target component creates an Entrypoint object, 
it creates a Native_capability (as Ipc_server) from a capability found 
in the UTCB's thread control registers:

	repos/base-foc/src/lib/base/ipc.cc:377

This Ipc_server capability is used in two calls to 
Pd_session::alloc_rpc_cap during Entrypoint construction. Both calls 
originate from Entrypoint::manage, once for the exit handler of the 
Rpc_entrypoint and once for the Signal_proxy_component of the signal 
API. To recreate those Native_capabilities at restore time, I have to 
use the same Ipc_server capability. How can this be done?
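
To make the question more concrete, here is a rough sketch of what I 
imagine (the names checkpointed_kcap and restore_ipc_server are made up 
by me and do not exist in Genode):

	/* recorded by the monitor at checkpoint time: the kcap selector
	 * behind the child's Ipc_server Native_capability */
	Genode::addr_t checkpointed_kcap;

	/* needed at restore time: construct a Native_capability that
	 * refers to the same kernel object again, so that the restored
	 * Entrypoint performs its two Pd_session::alloc_rpc_cap calls
	 * with the same Ipc_server capability as before */
	Genode::Native_capability restore_ipc_server(Genode::addr_t kcap);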


I also have some general questions about Genode capabilities in Fiasco.OC:
In the Genode Foundations book, on page 37, there is a figure (figure 2) 
showing an RPC object and its object identity. What is an object identity 
in Fiasco.OC?
  * What is it called there?
  * Where can I find it in the source files?
  * Does it comprise information about...
    * ...the owner of the RPC object?
    * ...which component has the data in memory?
    * ...where it can be found in the address space?


Kind regards,
Denis


On 07.10.2016 11:34, Norman Feske wrote:
> Hi Denis,
>
>> The target application could create its own service with a root and
>> session Rpc_object and manage requests through an Entrypoint. The
>> Entrypoint creates new capabilities through the PD session, which the
>> Checkpoint/Restore component intercepts (PD::alloc_rpc_cap). However,
>> the Checkpoint/Restore application cannot associate a created
>> capability with a concrete Rpc_object that is created by the target
>> application itself.
>
> that is true. The monitoring component has no idea about the meaning of
> RPC objects created internally within the child.
>
> But the child never uses such capabilities to talk to the outside world.
> If such a capability is created to provide a service to the outside
> world (e.g., a session capability), your monitoring component will
> actually get hold of it along with the information of its type. I.e.,
> the child passes a root capability via the 'Parent::announce' RPC
> function to the monitoring component, or the monitoring component
> receives a session capability as a response of a 'Root::session' RPC
> call (which specifies the name of the session type as argument).
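>
> Just to illustrate this with a rough sketch (the 'Monitored_service'
> structure below is made up for this mail), the monitoring component
> can simply record what it observes at those two interception points:
>
>   struct Monitored_service
>   {
>     Genode::String<64>          name;    /* session type announced by the child */
>     Genode::Root_capability     root;    /* passed via 'Parent::announce' */
>     Genode::Session_capability  session; /* returned by 'Root::session' */
>   };
>
> For each service provided by the child, the monitor thereby knows the
> type name and the corresponding capabilities without having to
> understand what is behind them.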
>
> Those capabilities are - strictly speaking - not needed to make the
> child happy, but merely to enable someone else to use the child's
> service. However, there is also the case where the child uses RPCs in a
> component-local way. Even though the monitoring component does not need
> to know the meaning behind those capabilities, it needs to replicate the
> association of the component's internal RPC objects with the
> corresponding kernel capabilities.
>
>> To solve this problem, I did not find any solution that is transparent
>> to the target application or that works without modifying the kernel.
>> A non-transparent, but user-level solution would be to let the
>> Checkpoint/Restore application implement the service of the target
>> application. But this would require rewriting existing Genode
>> components, which I would like to avoid.
>>
>> Perhaps someone in the Genode community has an idea, how I can get
>> access to the target application's Rpc_objects created by its own service.
>
> This is indeed a tricky problem. I see two possible approaches:
>
> 1. Because the monitoring component is in control of the child's PD
>    session (and thereby the region map of the child's address space), it
>    may peek and poke in the virtual memory of the child (e.g., it may
>    attach a portion of the child's address space as a managed
>    dataspace to its own region map). In particular, it could inspect
>    and manipulate the child-local meta data for the child's capability
>    space where it keeps the association between RPC object identities
>    and kcap selectors. This approach would require the monitor to
>    interpret the child's internal data structures, similar to what a
>    debugger does.
>
> 2. We may let the child pro-actively propagate information about its
>    capability space to the outside so that the monitoring component can
>    conveniently intercept this information. E.g., as a rough idea, we
>    could add a 'Pd_session::cap_space_dataspace' RPC function through
>    which a component can request a dataspace capability for a memory
>    buffer in which it reports the layout information of its capability
>    space (see the sketch after this list). This could happen internally
>    in the base library, so it would be transparent to the application
>    code.
>
>    I think however that merely propagating information from the child
>    may not be enough. You also may need a way to re-assign new RPC
>    object identities to the capability space of the restored child.
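>
> To make the rough idea from the second point a bit more tangible, here
> is a sketch of how the pieces could fit together. Nothing of this
> exists today - the 'Cap_space_info' layout and the 'inspect' function
> are invented for this mail:
>
>   /* hypothetical addition to the PD-session interface */
>   Genode::Dataspace_capability cap_space_dataspace();
>
>   /* hypothetical layout of the information that the child's base
>    * library would write into the dataspace */
>   struct Cap_space_info
>   {
>     struct Entry { unsigned long badge; unsigned long kcap; };
>
>     unsigned long num_entries;
>     Entry         entries[];  /* 'num_entries' records follow */
>   };
>
>   /* monitor side: map the child's report into the own address space
>    * and read it - mechanically, this is also how the first approach
>    * (peeking into the child's memory) would work */
>   void inspect(Genode::Region_map &rm, Genode::Dataspace_capability ds)
>   {
>     Cap_space_info *info = rm.attach(ds);
>
>     for (unsigned long i = 0; i < info->num_entries; i++) {
>       /* remember info->entries[i].badge <-> info->entries[i].kcap,
>        * e.g., for re-establishing the association at restore time */
>     }
>
>     rm.detach(info);
>   }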
>
> Noux employs a mix of both approaches when forking a process. The parent
> capability is poked directly into the address space of the new process
> whereas all other capabilities are re-initialized locally in the child.
> Maybe you could find a middle ground where the child component reports
> just enough internal information (e.g., the pointer to its 'cap_map') to
> let the monitor effectively apply the first approach (peeking and poking)?
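>
> For instance (again just a sketch, the 'report_cap_map_location' RPC
> function is made up), the child's base library could hand out nothing
> more than the address of its 'cap_map':
>
>   /* hypothetically performed once at child startup, with 'pd' being
>    * the component's PD-session client and 'cap_map()' the accessor
>    * for the child-internal cap map mentioned above */
>   pd.report_cap_map_location((Genode::addr_t)cap_map());
>
> The monitor would then peek and poke the cap-map meta data at that
> address directly, as outlined in the first approach.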
>
> Btw, just as a side remark, this problem does not exist on the base-hw
> kernel where the RPC object identities are equal to the capability
> selectors.
>
> Cheers
> Norman
>