Questions about Address Space, Scheduling and CSpace

Norman Feske norman.feske at genode-labs.com
Mon Jan 10 15:14:55 CET 2022


Hello Sid,

thanks for your interest in Genode and welcome to the mailing list!

> 1. Referring to section 3.4.4 from the Genode foundation's book. I am not
> sure I follow how using a different RM for the stack segment helps us
> provide buffers in between stacks of different threads in a PD.

When attaching a dataspace to a region map without telling a specific
address, core picks a suitable free address range using a best-fit
allocation. When attaching dataspaces to a PD's regular region map, the
region may be placed adjacent to an existing region, which is normally
fine. However, in the case of stacks, we want to enforce two things:

- A certain alignment (1 MiB) of the stack's region so that the
  corresponding thread can infer its TLS pointer from its stack address
  using a mere bit-logical operation (alleviating the need for a special
  register holding a TLS pointer).

- Leaving guard pages around the stacks.

Both constraints can be satisfied by managing the region allocation
within a dedicated stack-region map manually.
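
To illustrate the first point, here is a sketch of the bit-logical
operation with made-up names (the actual code lives in the
base-internal stack allocator):

  typedef unsigned long addr_t;

  enum { STACK_REGION_ALIGN = 1024*1024 };  /* 1 MiB per stack region */

  /*
   * Given any address within a thread's stack (e.g., the current
   * stack pointer), clearing the lower bits yields the base of the
   * 1-MiB-aligned stack region, where per-thread meta data such as
   * the TLS pointer can be kept.
   */
  static inline addr_t stack_region_base(addr_t sp)
  {
    return sp & ~(addr_t)(STACK_REGION_ALIGN - 1);
  }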

> 2. A PD is defined as Address Space and CSpace. I wondered if it is
> possible to have multiple address spaces inside a PD. The RM for each of
> these different Address spaces would still be separate capabilities in the
> singular CSpace for that PD. This way, a thread can potentially jump across
> different address spaces like [1, 2]. Or multiple threads can have slightly
> different address spaces.

Early versions of Genode indeed decoupled address spaces from PDs.
However, as we never found reasonable use cases for this presumed
flexibility, we abandoned this artificial separation of mechanisms. As a
result, the implementation became less complex and thereby more robust.

[1]
https://genode.org/documentation/release-notes/16.05#Consolidation_of_core_s_SIGNAL__CAP__RM__and_PD_services

BTW, it is still possible to share a portion of the virtual address
space between multiple PDs by using core's RM service. E.g., the
cached_fs_rom component uses this mechanism to make files available as
read-only dataspaces shared by multiple clients.

Regarding the idea of a thread "jumping across address spaces", that is
really what's happening when issuing an RPC. Genode's RPC mechanism
mimics plain synchronous function calls.
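
For illustration, a minimal RPC interface and its client-side stub
could look as follows, loosely following the pattern of Genode's
hello_tutorial (the 'Hello' session, its 'add' function, and the
CAP_QUOTA value are made up for this sketch):

  #include <session/session.h>
  #include <base/rpc.h>
  #include <base/rpc_client.h>

  namespace Hello { struct Session; struct Session_client; }

  struct Hello::Session : Genode::Session
  {
    static const char *service_name() { return "Hello"; }

    enum { CAP_QUOTA = 4 };

    virtual int add(int a, int b) = 0;

    GENODE_RPC(Rpc_add, int, add, int, int);
    GENODE_RPC_INTERFACE(Rpc_add);
  };

  /* client-side stub */
  struct Hello::Session_client : Genode::Rpc_client<Session>
  {
    Session_client(Genode::Capability<Session> cap)
    : Genode::Rpc_client<Session>(cap) { }

    int add(int a, int b) override { return call<Rpc_add>(a, b); }
  };

At the call site, 'add(2, 3)' reads like a plain function call while
the calling thread actually enters the server's PD and returns with
the result.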

> 3. Where am I going with this? I am trying to implement a new type of
> thread. In this thread(let's call it thread-stack), each thread's stack
> will be completely isolated from the other threads. That is, threads will
> not have access to each other's stacks. They will still share the
> code-section, library, and heap sections. This is a toy idea, so I can get
> my hand dirty with the Genode basics. I think everything I need to
> implement is already there in Genode but wanted to get your opinion.

Genode's mechanisms allow you to build that scenario. Just a rough sketch:

A custom parent component (runtime) would create a new PD for each
"thread". But instead of loading the real program into the PD, it would
load a simple bootstrapping program. Once executed, this bootstrapping
program would request ROM and RAM dataspaces containing the text and
data segments of the designated program from the parent. The parent is
free to hand out the same dataspaces for each child.

What I just described is actually very similar to the regular function
of our dynamic linker.
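
A hypothetical bootstrapping child could obtain the shared text
segment via a ROM session like this (the ROM label "program.text" and
the remaining steps are made up for the sketch):

  #include <base/component.h>
  #include <base/attached_rom_dataspace.h>
  #include <base/log.h>

  void Component::construct(Genode::Env &env)
  {
    /*
     * Request the text segment of the designated program as a ROM
     * module from the parent. The parent may answer this request
     * with the very same dataspace for all of its children, so the
     * text is shared while each child keeps its private stack.
     */
    Genode::Attached_rom_dataspace text(env, "program.text");

    Genode::log("obtained text segment of ", text.size(), " bytes");

    /* ...obtain data segment, set up the stack, jump to the entry... */
  }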

> 4. Are there any helper functions to clone an entire address space?
> Replicate the page table hierarchy with the same underlying page frames.

Not an entire address space. But a region map can be used as a
dataspace. On kernels other than Linux, this so-called "managed
dataspace" can be shared between PDs. It can also be used to implement
on-demand page-fault handling. Think of automatically growing thread
stacks or page swapping. You can find an example at [2]. However, in
practice, the mechanism is not used much.

[2]
https://github.com/genodelabs/genode/blob/master/repos/base/src/test/rm_nested/main.cc
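
A condensed sketch in the spirit of the referenced test could look
like this (error handling omitted; the attach calls follow the
region-map API as I recall it, so better double-check against
base/include/region_map/region_map.h):

  #include <base/component.h>
  #include <rm_session/connection.h>
  #include <region_map/client.h>

  void Component::construct(Genode::Env &env)
  {
    enum { MANAGED_SIZE = 16*1024*1024 };

    /* create a 16 MiB region map that is not yet backed by memory */
    Genode::Rm_connection     rm(env);
    Genode::Region_map_client managed(rm.create(MANAGED_SIZE));

    /* back one page at offset 0x1000 with freshly allocated RAM */
    managed.attach_at(env.ram().alloc(4096), 0x1000);

    /*
     * Use the region map as a dataspace (not possible on base-linux,
     * as noted above). The same managed dataspace could be attached
     * locally, handed out to another PD, or equipped with a fault
     * handler for on-demand paging.
     */
    char *ptr = env.rm().attach(managed.dataspace());

    ptr[0x1000] = 1;  /* backed by the RAM attached above, accesses
                         to unpopulated offsets would raise faults */
  }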

> 1. Using the notion of Affinity, I can restrict a thread to run on a
> particular CPU, but is there a way to restrict other threads from running
> on a given CPU? I am exploring isolating two threads by ensuring that they
> run on separate cores.

It's best to think in terms of PDs when partitioning resources. At the
granularity of PDs, you can express the assignment of two PDs to a
mutually exclusive set of CPU cores using init's configuration: defining
an affinity space of 2x1, assigning the "left" part to one PD, and the
"right" part to the other. Note however, that the boot CPU is
effectively shared with other components and the execution of core's
services. So to minimize the chance for cross talk, one would need to
use a CPU with more than two cores (one for the boot CPU and one for
each resource partition).
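
Complementary to init's configuration, the affinity space assigned to
a component is also visible at the C++ API level, so a component can
pin its threads to locations within that space. A rough sketch (the
'Thread' constructor arguments are written down from memory of
base/include/base/thread.h, please double-check there):

  #include <base/component.h>
  #include <base/thread.h>
  #include <base/log.h>

  /* worker thread pinned to one location of the affinity space */
  struct Worker : Genode::Thread
  {
    Worker(Genode::Env &env, Genode::Affinity::Location location)
    :
      Genode::Thread(env, "worker", 16*1024 /* stack size */,
                     location, Genode::Thread::Weight(), env.cpu())
    { }

    void entry() override { Genode::log("running on my assigned CPU"); }
  };

  void Component::construct(Genode::Env &env)
  {
    /* the affinity space as assigned to this component by init */
    Genode::Affinity::Space space = env.cpu().affinity_space();

    /* pin a thread to the last CPU of that space */
    static Worker worker(env, space.location_of_index(space.total() - 1));
    worker.start();
  }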

BTW, you can readily experiment with the mechanism using the
Genode-based Sculpt OS as described at [3].

[3] http://genodians.org/nfeske/2021-03-24-sculpt-os

> 2. Is there a way to restrict the time slices given to a particular thread?
> I am thinking of something like cgroups. I see that seL4 has the MCS
> kernel, which shares some of the motivations. But I think Genode does not
> support the MCS sel4 kernel.

We haven't used seL4's MCS kernel yet.

That said, our custom base-hw microkernel supports a similar scheduling
feature since version 14.11 [4].

[4]
https://genode.org/documentation/release-notes/14.11#Trading_CPU_time_between_components_using_the_HW_kernel

> 1. Is the CSpace for a PD the same as CSpace in seL4, or is there a
> different notion of Cspace in Genode?

It is the same notion. It is only used in the seL4-specific part of
Genode (in base-sel4/).

> 2. Is there an easy way to print the entire CSpace for debugging purposes?

Unfortunately not. However, while working with the code, you may choose
to make any class you like printable (a valid argument to 'log',
'warning', 'error') by implementing a const 'print' method in the class.
You can find simple examples at [5].

[5]
https://github.com/genodelabs/genode/blob/master/repos/os/include/util/geometry.h
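
For example, a minimal printable type (made up for illustration) could
look like this:

  #include <base/output.h>
  #include <base/log.h>

  struct Point
  {
    int x, y;

    /* called by 'log', 'warning', and 'error' when a Point is passed */
    void print(Genode::Output &out) const
    {
      Genode::print(out, "(", x, ", ", y, ")");
    }
  };

  /* usage */
  void report(Point p) { Genode::log("position is ", p); }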

Have fun with the further exploration!

Norman


-- 
Dr.-Ing. Norman Feske
Genode Labs

https://www.genode-labs.com · https://genode.org

Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth


