Hello Stefan,
thank you for the descriptive explanation :) I found out that it does not suffice to map the (kernel) Capability from the target application to the Checkpoint/Restore application, because the Checkpoint/Restore application only knows the already existing (Genode) Capabilities (kcap and key value) through the interception of the Rpc_objects (e.g. own dataspace, rm_session, etc.) the target application uses.
Mapping a Capability gives me a new (kernel) Capability which points to the same object identity, but has a new kcap (= Capability space slot) value.
By intercepting all services the target application uses, the Checkpoint/Restore application (probably) knows all necessary Capabilities which are created by issuing requests to the parent. But what about Capabilities which are created through a local service of the target application?
The target application could create its own service with a root and session Rpc_object and manage requests through an Entrypoint. Although the Entrypoint creates new Capabilities through the PD session, which the Checkpoint/Restore application intercepts (PD::alloc_rpc_cap), the Checkpoint/Restore application cannot associate a created Capability with the concrete Rpc_object which is created by the target application itself.
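To make this concrete, here is a minimal sketch of such a locally managed Rpc_object (the interface 'Local_interface' and its method are invented for illustration; any component-local service follows this pattern):

    #include <base/component.h>
    #include <base/rpc_server.h>

    /* hypothetical RPC interface, for illustration only */
    struct Local_interface
    {
        virtual void ping() = 0;

        GENODE_RPC(Rpc_ping, void, ping);
        GENODE_RPC_INTERFACE(Rpc_ping);
    };

    struct Local_object : Genode::Rpc_object<Local_interface>
    {
        void ping() override { }
    };

    Genode::size_t Component::stack_size() { return 64*1024; }

    void Component::construct(Genode::Env &env)
    {
        static Local_object obj;

        /* manage() lets the entrypoint allocate a fresh RPC capability
         * via the PD session (PD::alloc_rpc_cap) and associates it with
         * 'obj' - an association that exists only inside this component */
        Genode::Capability<Local_interface> cap = env.ep().manage(obj);
        (void)cap; /* e.g., handed out later via an announced service */
    }

The Checkpoint/Restore application sees the PD::alloc_rpc_cap call, but the binding between the returned Capability and 'obj' never crosses the component boundary.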
I did not find a solution to this problem that is transparent to the target application and does not require modifying the kernel. A non-transparent, but user-level solution would be to let the Checkpoint/Restore application implement the service of the target application itself. But this would impose rewriting existing Genode components, which I would like to avoid.
Perhaps someone in the Genode community has an idea how I can get access to the target application's Rpc_objects created by its own services.
Kind regards, Denis
On 22.09.2016 10:16, Stefan Kalkowski wrote:
Hello Denis,
On 09/21/2016 05:42 PM, Denis Huber wrote:
Hello again,
I have two small problems where I need some guidance from you :)
- I am trying to understand the mechanism of l4_task_map [1]. Are the
following thoughts correct?
- The destination and source task cap (first 2 args of l4_task_map) can
be retrieved through Pd_session::native_pd() and Foc_native_pd::task_cap().
- The send flexpage (arg #3) describes an area of the source task's capability space, i.e. it contains the selector number (= address) of the source task's capability.
- The send base (arg #4) is an integer which contains the address (selector number) of the capability in the destination task and also an operation code, e.g. for mapping or granting the capability.
[1] https://l4re.org/doc/group__l4__task__api.html#ga0a883fb598c3320922f0560263d...
That is correct.
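Just as an illustration (a sketch, not verified code), a single mapping along these lines could look like the following. 'dst_task' and 'src_task' are the task capabilities obtained via Foc_native_pd::task_cap(), and 'src_kcap'/'dst_kcap' are capability-space addresses (slot number shifted by L4_CAP_SHIFT):

    #include <l4/sys/task.h>
    #include <l4/sys/consts.h>
    #include <l4/sys/types.h>

    /* map the object behind 'src_kcap' of 'src_task' to slot 'dst_kcap'
     * of 'dst_task'; add L4_MAP_ITEM_GRANT to the send base to grant
     * instead of map */
    static bool map_cap(l4_cap_idx_t dst_task, l4_cap_idx_t src_task,
                        l4_cap_idx_t src_kcap, l4_cap_idx_t dst_kcap)
    {
        l4_fpage_t  snd_fpage = l4_obj_fpage(src_kcap, 0, L4_FPAGE_RWX);
        l4_umword_t snd_base  = dst_kcap | L4_ITEM_MAP;

        l4_msgtag_t tag = l4_task_map(dst_task, src_task, snd_fpage, snd_base);
        return !l4_msgtag_has_error(tag);
    }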
To iterate through all possible capabilities I need to know where the capability space starts (first valid selector number) and where it ends. Where can I find this information, i.e. which source files are relevant?
The capability space of each component is split between an area controlled by core, and one controlled by the component itself. Everything underneath Fiasco::USER_BASE_CAP (in file: repos/base-foc/include/foc/native_capability.h:63) is used by core, and has the following layout: the first nine slots are reserved so as not to interfere with the fixed capabilities of Fiasco.OC/L4Re. The only capabilities of this fixed area that we use are the task capability (slot 1) and the parent capability (slot 8). The rest of the core area is divided into thread-local capabilities. Every thread has three dedicated capabilities: a capability to its own IPC gate (so to say its identity), a capability to its pager object, and a capability to an IRQ object (a kind of kernel semaphore) that is used for blocking in the case of lock contention. You can find the layout information again in the file repos/base-foc/include/foc/native_capability.h.
Everything starting from slot 200 is controlled by the component itself. Each component has a capability allocator and a kind of registry, called the "cap map", that contains all currently allocated capabilities:
repos/base-foc/src/include/base/internal/cap_*
repos/base-foc/src/lib/base/cap_*
Currently, the per-component capability allocator is restricted (at compile time) to at most 4K capabilities. The special component core can allocate more capabilities, because it always owns every capability in the system.
The capability space controlled by the component thus ranges from slot 200 to 4296, but it is sparsely populated. If you do not know the "cap map" of a component, you can nevertheless check the validity of a single capability slot with `l4_task_cap_valid`, have a look here:
https://l4re.org/doc/group__l4__task__api.html#ga829a1b5cb4d5dba33ffee57534a...
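A brute-force scan over the component-controlled part of the capability space could then look roughly like this (the slot numbers are the ones from the description above; please double-check them against Fiasco::USER_BASE_CAP in native_capability.h):

    #include <l4/sys/task.h>
    #include <l4/sys/consts.h>

    /* probe which slots of 'target_task' currently hold a valid capability */
    static void scan_cap_space(l4_cap_idx_t target_task)
    {
        enum { FIRST_SLOT = 200, NUM_SLOTS = 4096 };

        for (unsigned slot = FIRST_SLOT; slot < FIRST_SLOT + NUM_SLOTS; slot++) {

            /* a capability index is the slot number shifted by L4_CAP_SHIFT */
            l4_cap_idx_t idx = (l4_cap_idx_t)slot << L4_CAP_SHIFT;

            if (l4_msgtag_label(l4_task_cap_valid(target_task, idx)) > 0) {
                /* slot is occupied by a valid kernel object capability */
            }
        }
    }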
- I also wanted to look up the mechanism of Noux where it
re-initializes the parent cap, the noux session cap, and the caps of a child's environment after a fork. But I cannot find the corresponding files.
AFAIK, in Noux the parent capability in the .data section of the program gets overwritten:
repos/ports/src/noux/child.h:458
repos/ports/src/noux/ram_session_component.h:80
After that, parts of the main-thread initialization of the target need to be re-done; otherwise, e.g., the serialized form of the parent capability in the data section would have no effect. But I'm not very familiar with the Noux initialization. After some grep, I found this to be the first routine executed by the forked process:
repos/ports/src/lib/libc_noux/plugin.cc:526
It shows how the parent capability gets set and how the environment gets re-loaded.
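Just to illustrate the principle (this is not the actual Noux code, and the offset/value handling is purely hypothetical): after fork, the stale serialized parent capability inside the copied data segment gets replaced by the value that is valid in the new task, roughly like so:

    #include <string.h>

    /* overwrite the serialized parent capability at a known offset within
     * a writable copy of the child's data segment */
    static void patch_parent_cap(void *data_segment, unsigned long cap_offset,
                                 unsigned long new_kcap)
    {
        memcpy((char *)data_segment + cap_offset, &new_kcap, sizeof(new_kcap));
    }

How the offset is determined and how the value is serialized is what the files mentioned above deal with.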
Best regards Stefan