Hi, I'm now trying to port my L4Env-based program to Genode. It is a runtime environment for running OS/2 programs on an L4-family microkernel (inspired by OS/2 Warp, PowerPC Edition / IBM Workplace OS). Currently, the working version (actually a proof of concept) runs under L4Env, but I plan to port it to Genode and L4Re (and maybe to native WinNT/ReactOS in the future).
For porting purposes, I need some region-mapper functionality, namely: 1) reserve a virtual memory region in the region map, so that dataspaces will not be attached over this region unless an explicit attach address is specified; 2) look up an address in the region map, i.e., determine whether the address is free, reserved, or has a dataspace attached to it, and if so, at which address/offset and of which size.
For implementing the memory-management functions, this is needed to reserve a virtual memory block and then commit pages/attach dataspaces in that region step by step.
Looking at the Genode::Region_map interface, I don't see such functions in that class. I did find some memory-range allocation routines in the Genode::Range_allocator class, but I'm not sure if this is what I need. Is functionality for reserving regions/looking up addresses implemented elsewhere, or do I need to implement it myself?
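To make the requirement more concrete, this is roughly the kind of interface I have in mind (just a sketch with hypothetical names, not an existing Genode API):

/* hypothetical sketch of the desired reserve/lookup interface */
#include <base/stdint.h>
#include <dataspace/capability.h>

struct Region_info
{
	enum State { FREE, RESERVED, ATTACHED } state;

	Genode::addr_t               base;    /* start of the found region   */
	Genode::size_t               size;    /* size of the found region    */
	Genode::off_t                offset;  /* offset within the dataspace */
	Genode::Dataspace_capability ds;      /* valid only if ATTACHED      */
};

struct Region_registry
{
	/* keep [base, base+size) free of dataspace attachments unless
	   an explicit attach address inside it is specified */
	virtual void reserve(Genode::addr_t base, Genode::size_t size) = 0;

	/* return information about the region containing 'addr' */
	virtual Region_info lookup(Genode::addr_t addr) = 0;
};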
Another required feature is support for multiple memory areas in the region mapper. I need at least two memory areas: a private one and a shared one. The private area starts at the beginning of the address space and is private to each user-mode program. The shared area starts at some higher address and is common to all user-mode programs (using my runtime environment). So it is a global shared memory that starts at the same address everywhere, and each shared memory region is mapped at the same address in all client address spaces. It is used for shared-memory IPC, as well as for loading DLLs: DLLs are shared between different user-mode programs, unlike Windows DLLs or Unix SOs.
So, I need to allocate memory ranges in specific (private or shared) areas. This could be implemented via my own memory manager with its own accounting, but maybe such functionality already exists? The L4Env region mapper supports multiple memory areas, as well as looking up an address and attaching a pointer to a user-defined structure to an address, so I can associate my own data with an address/region. Is there functionality like this in Genode?
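If I have to implement the accounting myself, I imagine something like the following per-region bookkeeping node (again only a sketch; the names are mine, not an existing Genode API):

/* per-region bookkeeping node with user-defined data attached */
#include <util/avl_tree.h>
#include <dataspace/capability.h>

struct Region : Genode::Avl_node<Region>
{
	Genode::addr_t               base;       /* region start address       */
	Genode::size_t               size;       /* region size                */
	Genode::Dataspace_capability ds;         /* attached dataspace, if any */
	void                        *user_data;  /* my own per-region data     */

	Region(Genode::addr_t base, Genode::size_t size,
	       Genode::Dataspace_capability ds, void *user_data)
	: base(base), size(size), ds(ds), user_data(user_data) { }

	/* Avl_node interface */
	bool higher(Region *r) { return r->base > base; }

	/* find the region that contains 'addr' */
	Region *lookup(Genode::addr_t addr)
	{
		if (addr >= base && addr < base + size)
			return this;

		Region *r = Avl_node<Region>::child(addr > base);
		return r ? r->lookup(addr) : nullptr;
	}
};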
I also know that there is support for managed dataspaces, or nested region maps, which could be useful here (at least for implementing the shared memory area), but I also know that nested region maps are not supported on all platforms (e.g., Genode/Linux). So it would possibly be better to avoid them unless really needed.
WBR,
valery
Hi Valery,
> I also know that there is support for managed dataspaces, or nested region maps, which could be useful here (at least for implementing the shared memory area), but I also know that nested region maps are not supported on all platforms (e.g., Genode/Linux). So it would possibly be better to avoid them unless really needed.
managed dataspaces are the only mechanism that allows the manual organization of virtual address-space ranges on Genode. From your description, I think that they suffice for your needs, as long as each address space takes the initialization steps of obtaining the managed dataspaces from somewhere (a central service?) and attaching them at the desired local address. A managed dataspace can be of any size. It contains no mappings unless a real dataspace is attached within it. Once the managed dataspace is attached to the local address space, the virtual-address area spanned by it is not used for any other mappings.
You are right in that managed dataspaces are somewhat limited on Linux, where it is not (easily) possible to remotely manipulate address spaces. But the feature set you need - preserving a virtual address-space area to be manually managed - should be covered. For reference, you may take a look at the handling of the stack area, which is a 256-MiB virtual address-space window that is sparsely populated with the stacks of the component-local threads.
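In code, these steps look roughly as follows (a minimal sketch against the Rm_connection/Region_map_client API; the area size and attach offset are arbitrary, and error handling is omitted):

#include <base/component.h>
#include <rm_session/connection.h>
#include <region_map/client.h>

void Component::construct(Genode::Env &env)
{
	using namespace Genode;

	enum { AREA_SIZE = 16*1024*1024 };

	/* create a region map of AREA_SIZE bytes - the managed dataspace */
	Rm_connection rm(env);
	Region_map_client area(rm.create(AREA_SIZE));

	/* attach the managed dataspace to the local address space; the
	   spanned virtual range is now kept free of other mappings */
	void *area_base = env.rm().attach(area.dataspace());

	/* populate the area step by step with real dataspaces */
	Ram_dataspace_capability ds = env.ram().alloc(4096);
	area.attach_at(ds, 0x10000 /* offset within the area */);

	(void)area_base;
}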
Cheers Norman
On 19.01.2018 23:33, Norman Feske wrote:
> Hi Valery,
>> I also know that there is support for managed dataspaces, or nested region maps, which could be useful here (at least for implementing the shared memory area), but I also know that nested region maps are not supported on all platforms (e.g., Genode/Linux). So it would possibly be better to avoid them unless really needed.
> managed dataspaces are the only mechanism that allows the manual organization of virtual address-space ranges on Genode. From your description, I think that they suffice for your needs, as long as each address space takes the initialization steps of obtaining the managed dataspaces from somewhere (a central service?) and attaching them at the desired local address.
Yes, I have an "os2exec" server, which loads binaries of different executable formats into memory (support for each format is implemented as a separate shared library). It parses the executable's sections into dataspaces and then passes them to client programs on request. The client gets the section dataspaces one by one, together with their intended attach addresses, and attaches them to its own memory.
The os2exec server also maintains the shared memory area in its own memory. Each process runs inside a special l4env binary, which has all system structures moved above the address 0xa0000000, so the addresses below are freed up. The private and shared areas are reserved at startup. This l4env binary (aka "application container") then gets the sections of the specified OS/2 binary, and the sections of all required DLLs, from the os2exec server, attaches them to its address space, and jumps to the entry point.
So yes, I should be able to create the shared memory area as a single managed dataspace in os2exec and pass it to each userland process at its initialization. Reserved sub-regions could also be implemented as managed dataspaces, which would go into the respective (private or shared) areas.
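Roughly, the client side of this protocol looks like this (a sketch; the Os2exec session functions are placeholders for my own interface, not an existing Genode API):

/* sketch of the client-side loading loop; num_sections(), section_ds()
   and section_addr() are placeholders for my own os2exec interface */
static void load_sections(Genode::Env &env, Os2exec::Session_client &os2exec)
{
	for (unsigned i = 0; i < os2exec.num_sections(); i++) {

		Genode::Dataspace_capability ds = os2exec.section_ds(i);
		Genode::addr_t               at = os2exec.section_addr(i);

		/* attach the section at the address intended by os2exec */
		env.rm().attach_at(ds, at);
	}
}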
> A managed dataspace can be of any size. It contains no mappings unless a real dataspace is attached within it. Once the managed dataspace is attached to the local address space, the virtual-address area spanned by it is not used for any other mappings.
> You are right in that managed dataspaces are somewhat limited on Linux, where it is not (easily) possible to remotely manipulate address spaces. But the feature set you need - preserving a virtual address-space area to be manually managed - should be covered.
Also, I'm wondering: are the stack area and the linker area still implemented as managed dataspaces on base-linux? So they are the same as on other platforms, except that they cannot be nested? (As I understand, dataspaces on Linux are not really RPC objects; they are just mmap'ed as files into the same address space. Is that the cause?)
> For reference, you may take a look at the handling of the stack area, which is a 256-MiB virtual address-space window that is sparsely populated with the stacks of the component-local threads.
> Cheers Norman
Thanks, will look at it!
I'd like everything to work on Linux as well, though. Still, having Genode region-map API wrappers that implement their own accounting for all VM regions looks feasible. If I use these wrappers exclusively, avoiding direct use of the Genode RM APIs, I'd be able to maintain the required address-space layout. But I need to think about it, of course.
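The wrapper I have in mind would look roughly like this, reusing the 'Region' bookkeeping node I sketched in my first mail (again, the names are mine, and error handling is omitted):

/* sketch of a region-map wrapper with its own accounting */
#include <base/allocator.h>
#include <base/env.h>
#include <util/avl_tree.h>

class Rm_wrapper
{
	private:

		Genode::Region_map       &_rm;      /* e.g., env.rm()               */
		Genode::Allocator        &_alloc;   /* for the bookkeeping objects  */
		Genode::Avl_tree<Region>  _regions; /* own accounting of VM regions */

	public:

		Rm_wrapper(Genode::Region_map &rm, Genode::Allocator &alloc)
		: _rm(rm), _alloc(alloc) { }

		void *attach_at(Genode::Dataspace_capability ds,
		                Genode::addr_t at, Genode::size_t size)
		{
			void *ptr = _rm.attach_at(ds, at, size);
			_regions.insert(new (_alloc) Region(at, size, ds, nullptr));
			return ptr;
		}

		/* the lookup that is missing from the native region-map API */
		Region *lookup(Genode::addr_t addr)
		{
			return _regions.first() ? _regions.first()->lookup(addr) : nullptr;
		}
};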
WBR,
valery
On 20.01.2018 02:05, Valery V. Sedletski via genode-main wrote:
> On 19.01.2018 23:33, Norman Feske wrote:
>> For reference, you may take a look at the handling of the stack area, which is a 256-MiB virtual address-space window that is sparsely populated with the stacks of the component-local threads.
>> Cheers Norman
> Thanks, will look at it!
> I'd like everything to work on Linux as well, though. Still, having Genode region-map API wrappers that implement their own accounting for all VM regions looks feasible. If I use these wrappers exclusively, avoiding direct use of the Genode RM APIs, I'd be able to maintain the required address-space layout. But I need to think about it, of course.
Also, I have another important question that I'm in doubt about. It seems there is no means to look up the existing mappings at a given address on Genode (am I right here?). I need that, at least, for implementing an API for releasing memory, which only knows the address of the memory region being released. Maybe it's not required with managed dataspaces, but I'm not sure.

What if I allocate a managed dataspace, attach it to my address space, and map some backing-store dataspaces into it? When I release this managed dataspace, will the backing-store dataspaces be released automatically as well? I need to release those dataspaces afterwards. Usually, I use a lookup function to get these dataspaces and then release them one by one. Currently, I work around this by maintaining a list of structures, one per allocated memory region; when I need to release a region, I search the list for the corresponding structure, get the dataspace capability from it, and then detach and release it. With a lookup function, maintaining such a list manually would not be required.

So, do I need to care about releasing these backing-store dataspaces manually in the case of a managed dataspace, or are they released automatically when the managed dataspace is destroyed? Would the need for a lookup function be avoided with managed dataspaces?
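For illustration, the current workaround looks roughly like this (a sketch with my own names; error handling and freeing of the bookkeeping object itself are omitted):

/* release a region by address using a manually maintained list */
#include <base/env.h>
#include <util/list.h>

struct Mem_region : Genode::List<Mem_region>::Element
{
	Genode::addr_t                   base;  /* attach address          */
	Genode::size_t                   size;  /* region size             */
	Genode::Ram_dataspace_capability ds;    /* backing-store dataspace */
};

static void release(Genode::Env &env, Genode::List<Mem_region> &regions,
                    Genode::addr_t addr)
{
	for (Mem_region *r = regions.first(); r; r = r->next()) {

		if (addr < r->base || addr >= r->base + r->size)
			continue;

		/* found the bookkeeping entry for this address */
		env.rm().detach(r->base);   /* remove the mapping            */
		env.ram().free(r->ds);      /* release the backing dataspace */
		regions.remove(r);
		break;
	}
}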
WBR,
valery
Hi Valery,
> With a lookup function, maintaining such a list manually would not be required. So, do I need to care about releasing these backing-store dataspaces manually in the case of a managed dataspace, or are they released automatically when the managed dataspace is destroyed? Would the need for a lookup function be avoided with managed dataspaces?
you'll have to maintain the list manually.
There is no lookup function in the region-map interface.
Cheers Norman
On 20.01.2018 02:05, Valery V. Sedletski via genode-main wrote:
> Also, I'm wondering: are the stack area and the linker area still implemented as managed dataspaces on base-linux? So they are the same as on other platforms, except that they cannot be nested? (As I understand, dataspaces on Linux are not really RPC objects; they are just mmap'ed as files into the same address space. Is that the cause?)
>> For reference, you may take a look at the handling of the stack area, which is a 256-MiB virtual address-space window that is sparsely populated with the stacks of the component-local threads.
>> Cheers Norman
I looked at genode/repos/base-linux/src/include/base/internal/region_map_mmap.h; the comments there make this clear. So yes, the stack area is implemented as a managed dataspace. The limitations are that such a dataspace cannot be attached twice and cannot be attached to another managed dataspace, but it can still be attached to the root region map. So I'll be able to reserve the shared area as a managed dataspace, but I won't be able to reserve sub-regions within it for subsequent mappings/unmappings.
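So, if I understand correctly, the Linux-compatible part is limited to something like this (a sketch; the base address and size of the shared area are placeholders, and 'env' is the component's Genode::Env):

/* what remains possible on base-linux; SHARED_AREA_BASE and
   SHARED_AREA_SIZE are placeholder values */
enum { SHARED_AREA_BASE = 0x60000000UL, SHARED_AREA_SIZE = 256*1024*1024 };

Genode::Rm_connection rm(env);
Genode::Region_map_client shared_area(rm.create(SHARED_AREA_SIZE));

/* allowed on Linux: attach the managed dataspace to the root region map */
env.rm().attach_at(shared_area.dataspace(), SHARED_AREA_BASE);

/* not possible on Linux: attaching shared_area.dataspace() a second time,
   or attaching it inside another managed dataspace */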
WBR,
valery
Hi Valery,
> So yes, I should be able to create the shared memory area as a single managed dataspace in os2exec and pass it to each userland process at its initialization. Reserved sub-regions could also be implemented as managed dataspaces, which would go into the respective (private or shared) areas.
thanks for the detailed background information. This makes the picture much clearer.
To work around the limitation of base-linux, I recommend letting each process create its shared and private areas as managed dataspaces and actively pull their respective content from the os2exec server. The os2exec server would provide a session interface that allows each client to obtain the individual regions of those areas. It is then the job of each process to attach the individual dataspaces to its locally created managed dataspaces. The managed dataspaces are never shared across component boundaries.
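Such a session interface could look something like the following (just a sketch; the names and RPC functions are placeholders, not an existing interface):

/* sketch of a possible os2exec session interface (placeholder names) */
#include <session/session.h>
#include <dataspace/capability.h>
#include <base/stdint.h>
#include <base/rpc.h>

namespace Os2exec { struct Session; }

struct Os2exec::Session : Genode::Session
{
	static const char *service_name() { return "Os2exec"; }

	enum { CAP_QUOTA = 4 };

	/* number of regions the client has to place into its areas */
	virtual unsigned num_regions() = 0;

	/* dataspace backing region 'i' */
	virtual Genode::Dataspace_capability region_ds(unsigned i) = 0;

	/* address within the private or shared area where region 'i' belongs */
	virtual Genode::addr_t region_addr(unsigned i) = 0;

	GENODE_RPC(Rpc_num_regions, unsigned, num_regions);
	GENODE_RPC(Rpc_region_ds, Genode::Dataspace_capability, region_ds, unsigned);
	GENODE_RPC(Rpc_region_addr, Genode::addr_t, region_addr, unsigned);
	GENODE_RPC_INTERFACE(Rpc_num_regions, Rpc_region_ds, Rpc_region_addr);
};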
Of course, you cannot enforce the consistency of the 'shared' area across all processes this way. Each process must adhere to the protocol. But I guess it would be in the best interest of each process to do so, wouldn't it?
Happy hacking! Norman