Using dataspaces and allocators
Norman Feske
norman.feske at ...1...
Sat Jul 9 00:55:36 CEST 2016
Hello Denis,
> But, (1) how is a dataspace typically populated with data and (2) how
> can a component typically read the data?
attaching a dataspace to a component's region map is similar to using
'mmap' for a file on Linux. The underlying backing store becomes visible
in the component's virtual address space. The dataspace content can be
read and written via ordinary memory accesses. If two components attach
the same dataspace to their respective region maps, both can access
(read, write) the dataspace's content (shared memory).
Of course, to perform memory accesses, one needs a pointer to the
virtual address where the dataspace is visible. This pointer is the
return value of the 'Region_map::attach' operation.
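
For illustration, directly attaching a dataspace could look roughly like
this (just a sketch; the exact signature of 'attach' differs between
Genode versions):

  #include <base/component.h>
  #include <dataspace/capability.h>

  static void use_dataspace(Genode::Env &env,
                            Genode::Dataspace_capability ds_cap)
  {
      /* make the dataspace content visible in the local address space */
      char *ptr = env.rm().attach(ds_cap);

      /* access the content via ordinary memory accesses */
      ptr[0] = 'x';

      /* remove the dataspace from the local address space again */
      env.rm().detach(ptr);
  }
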
In practice, we use the 'base/attached_*dataspace.h' utilities, which
simplify working with dataspaces. The dataspace is automatically
attached when an 'Attached_dataspace' is constructed, and you can access
its content via the pointer returned by the 'local_addr' method.
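
For example (a minimal sketch, assuming the constructor variant that
takes the target region map and the dataspace capability; the exact
arguments may vary between Genode versions):

  #include <base/attached_dataspace.h>
  #include <base/log.h>

  static void read_content(Genode::Env &env,
                           Genode::Dataspace_capability ds_cap)
  {
      /* attached on construction, automatically detached on destruction */
      Genode::Attached_dataspace ds(env.rm(), ds_cap);

      /* read the first byte of the dataspace content */
      char const first = ds.local_addr<char const>()[0];

      Genode::log("dataspace of ", ds.size(), " bytes, "
                  "first byte has value ", (int)first);
  }
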
For certain types of dataspaces (ROM dataspaces, IO-MEM dataspaces, and
RAM dataspaces), there are dedicated attached-dataspace variants, which
implicitly perform the corresponding ROM/IO-MEM session request, or the
RAM allocation.
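
For instance (again just a sketch, assuming the constructors that take
the 'Env' respectively the RAM session and region map; the arguments may
differ between Genode versions):

  #include <base/attached_rom_dataspace.h>
  #include <base/attached_ram_dataspace.h>

  static void rom_and_ram_example(Genode::Env &env)
  {
      /* request a ROM session for the module "config" and attach it */
      Genode::Attached_rom_dataspace config_rom(env, "config");
      char const *config = config_rom.local_addr<char const>();

      /* allocate and attach a 4-KiB RAM dataspace */
      Genode::Attached_ram_dataspace buffer(env.ram(), env.rm(), 4096);
      char *buf = buffer.local_addr<char>();

      /* copy the first byte of the ROM content into the buffer */
      buf[0] = config[0];
  }
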
For examples, I recommend studying the code within repos/os/src that
uses those utilities, e.g., via

  grep -r Attached_dataspace repos/os/src
> A further, related question is, (3) what is an allocator, and (4) what
> is the difference between an allocator and a dataspace? (5) How is an
> allocator (e.g. in the form of a sliced_heap or heap) typically used in C++?
In principle, an allocator is a component-local data structure that
manages one or several ranges of the component's virtual memory. The
memory ranges are backed by RAM dataspaces (or by other allocators). The
'alloc(size)' method hands out an unused chunk of virtual memory of the
specified size (similar to 'malloc' in C). Whereas the granularity of
dataspaces is bounded by the smallest physical page size supported by
the MMU (typically 4 KiB), an allocator usually works at a much finer
granularity.
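
In code, using the plain 'Allocator' interface could look like this
(a sketch; the exact return type and error handling of 'alloc' have
changed over the Genode versions):

  #include <base/allocator.h>

  static void use_allocator(Genode::Allocator &alloc)
  {
      /* allocate a 32-byte chunk of memory from the allocator */
      void *ptr = alloc.alloc(32);

      /* ... use the chunk ... */

      /* release the chunk, the size must match the allocation */
      alloc.free(ptr, 32);
  }
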
An allocator can be passed as an argument to the 'new' operator so that
the object is created with the specified allocator as its backing
store. This enables Genode components to tightly control and account
for the memory consumed by dynamically allocated objects.
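
For example, creating and destroying an object via an allocator could
look like this ('Session_state' is just a made-up example type, and
'Genode::destroy' serves as the counterpart of the placement 'new'):

  #include <base/allocator.h>

  struct Session_state
  {
      unsigned id;
      Session_state(unsigned id) : id(id) { }
  };

  static void create_and_destroy(Genode::Allocator &alloc)
  {
      /* construct the object within memory obtained from the allocator */
      Session_state *session = new (alloc) Session_state(1);

      /* destruct the object and return its memory to the allocator */
      Genode::destroy(alloc, session);
  }
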
There are different kinds of allocators designed for different use
cases (a short construction sketch follows the list):
* The 'Heap' uses several dataspaces as backing store and allocates
memory chunks using a best-fit allocation strategy. Each of those
dataspaces may be used for holding many allocations. Under the hood,
the heap requests RAM dataspaces on demand, depending on how much
memory is allocated via the 'alloc' method. A single allocation is
expected to be relatively quick (as it merely interacts with a
component-local data structure) and to incur only little bookkeeping
overhead.
* The 'Sliced_heap' uses a distinct dataspace for each allocation, which
makes each allocation quite heavyweight: it involves inter-component
communication and MMU page-table manipulation, and it is always at
least 4 KiB in size. Hence, a sliced heap should only be used as a
backing store for other allocators, or in situations where the backing
store of each allocation must be accounted for independently and rigidly.
* 'Slab' allocators are used in situations where many objects of the
same size are allocated. Because a slab allocator is simpler and less
flexible than a heap, it requires less memory for bookkeeping and its
allocations are quicker.
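
To give a rough impression, constructing a heap and a sliced heap looks
like this (a sketch; constructor arguments may differ between Genode
versions):

  #include <base/heap.h>

  static void create_allocators(Genode::Env &env)
  {
      /* general-purpose heap, requests RAM dataspaces on demand */
      Genode::Heap heap(env.ram(), env.rm());

      /* one dataspace per allocation, rigid per-allocation accounting */
      Genode::Sliced_heap sliced_heap(env.ram(), env.rm());
  }

A slab allocator (see 'base/slab.h') would typically take such a heap as
the backing store for its slab blocks.
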
I hope this overview is a good starting point to investigate. If you
have further questions, please don't hesitate to post them to the list.
Cheers
Norman
--
Dr.-Ing. Norman Feske
Genode Labs
http://www.genode-labs.com · http://genode.org
Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth