Hi Alexander,
As I noted in my other posting, I'm afraid that supporting mmap mappings at fixed addresses can never be robust across kernels because one cannot make assumptions about the virtual address space layout.
To create such a mechanism in a reasonably robust way, I'd go for the use of a managed dataspace. In the libc configuration, one could define the virtual address range where mmap mappings with fixed addresses are supposed to go. Upon start of the program, the libc would try to preserve this area by creating an appropriately sized Region_map (via the RM service) and attaching it at the base of the configured range. If this step fails (which can happen depending on the kernel or architecture - think of 32 bit), the program fails right at this early point.
Once the managed dataspace is attached to the component's virtual address space, the libc can manually manage its layout via 'attach_at' without fearing conflicts. For an example, please have a look at base/src/test/sub_rm/.
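For illustration, here is a minimal sketch of this reservation step, loosely modeled on the sub_rm test and the Genode 20.05 API. The base address, the window size, and all names are placeholders I made up; in the envisioned design, the range would come from the libc configuration.

  /*
   * Minimal sketch (not actual libc code): reserve a virtual-address window
   * for fixed-address mmap via a managed dataspace, loosely following
   * base/src/test/sub_rm/ and the Genode 20.05 API.
   */

  #include <base/component.h>
  #include <rm_session/connection.h>
  #include <region_map/client.h>

  void Component::construct(Genode::Env &env)
  {
      using namespace Genode;

      /* placeholder values - in the envisioned design, the range would be
         read from the libc configuration */
      addr_t const mmap_base = 0x40000000;
      size_t const mmap_size = 0x10000000;   /* 256 MiB window */

      /* create an appropriately sized region map via the RM service */
      static Rm_connection rm(env);
      static Region_map_client sub_rm(rm.create(mmap_size));

      /* reserve the window by attaching the managed dataspace at the base of
         the configured range; if the range is unavailable on this kernel or
         architecture, this throws Region_map::Region_conflict and thereby
         fails right at this early point */
      env.rm().attach_at(sub_rm.dataspace(), mmap_base);

      /* from now on, fixed-address mappings can be placed inside the window
         via 'attach_at' on the sub region map, with offsets relative to
         'mmap_base', without fearing conflicts */
      Ram_dataspace_capability ds = env.ram().alloc(0x10000);
      sub_rm.attach_at(ds, 0x100000);
  }

The important point is that a failure to reserve the range surfaces immediately at startup instead of as a sporadic conflict later.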
This feature would work independently from 'Libc::Mem_alloc'. I think that the use of 'Libc::Mem_alloc' by 'mmap' is not the right way to go when taking mmap beyond its current scope. In particular, 'Libc::Mem_alloc' will eventually be replaced by a better allocator (such as jemalloc). Extending its interface (as suggested by your patch) would make this step more complicated.
So, I split the allocation into two parts - one for this gap via the standard alloc() with random placement, and a second one via my own alloc_at() with a size that exactly corresponds to the requested one.
That's good. The meta data should be separate from the actual payload.
The implementation is available as a patch against 20.05 at
https://github.com/tor-m6/genode/commit/78fc751bea8acc2b515e04bec9f3a8834615...
The problem I see here is the free operation: it just frees the block at the requested address and does not touch the other, gap-related blocks. To me it is not obvious whether they need to be freed because, in the code (e.g., in unmap()), I do not see an explicit free of the other allocated structures, in particular the dataspace extension or the related block metadata.
That's another reason why the mmap support needs additional infrastructure. I'm thinking of a registry of mmap mappings. The registry entries store all the required meta data. They can be allocated on the 'Kernel::_heap'. The actual memory is allocated in the form of RAM dataspaces. 'Libc::Mem_alloc' is not required.
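To make the idea more concrete, here is a rough sketch of what such a registry could look like, based on Genode's 'Registry'/'Registered' utilities. 'Mmap_entry' and all other names are invented for illustration, they are not part of the actual libc.

  #include <base/env.h>
  #include <base/heap.h>
  #include <base/registry.h>

  struct Mmap_entry
  {
      Genode::Env        &env;
      Genode::Region_map &rm;      /* e.g., the sub region map reserved for mmap */
      Genode::addr_t const start;  /* local address of the mapping within 'rm' */
      Genode::size_t const size;

      /* the payload is a plain RAM dataspace, no 'Libc::Mem_alloc' involved */
      Genode::Ram_dataspace_capability const ds { env.ram().alloc(size) };

      Mmap_entry(Genode::Env &env, Genode::Region_map &rm,
                 Genode::addr_t start, Genode::size_t size)
      : env(env), rm(rm), start(start), size(size)
      {
          rm.attach_at(ds, start);
      }

      ~Mmap_entry()
      {
          rm.detach(start);    /* remove the mapping */
          env.ram().free(ds);  /* release the backing store */
      }
  };

  using Mmap_registry = Genode::Registry<Genode::Registered<Mmap_entry> >;

  /* a fixed-address mmap then boils down to creating one entry, with the
     meta data allocated on the heap (in the libc, 'Kernel::_heap') */
  static void add_mapping(Genode::Allocator &heap, Mmap_registry &registry,
                          Genode::Env &env, Genode::Region_map &sub_rm,
                          Genode::addr_t start, Genode::size_t size)
  {
      new (heap) Genode::Registered<Mmap_entry>(registry, env, sub_rm, start, size);
  }

Each entry owns both its mapping and its backing RAM dataspace, so its destructor is the single place where they get released.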
Question: should I make additional efforts to track and delete anything else besides what the standard free does here: /* forward request to our local allocator */ _alloc.free(addr);
For example, the dataspace structure or the address ranges of the second «auxiliary» allocation?
The current implementation of 'munmap' certainly does not suffice. In particular, it would leak the backing store of the meta data and the dataspaces added via 'expand_at'.
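With a registry as sketched above, the cleanup becomes straightforward because destroying an entry releases everything the mapping owns. Again just a rough illustration, not the actual libc interface:

  static int remove_mapping(Genode::Allocator &heap, Mmap_registry &registry,
                            Genode::addr_t addr)
  {
      Genode::Registered<Mmap_entry> *match = nullptr;

      /* look up the registry entry that starts at the given address */
      registry.for_each([&] (Genode::Registered<Mmap_entry> &entry) {
          if (entry.start == addr)
              match = &entry; });

      if (!match)
          return -1;  /* no mapping known at 'addr' */

      /* the destructor detaches the mapping and frees the RAM dataspace,
         'destroy' additionally returns the entry's meta data to the heap */
      Genode::destroy(heap, match);
      return 0;
  }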
PS: Another potential problem is the chance that the first «random address» allocation overlaps with the desired address. This happened with my first attempt to implement the allocator - it allocated an address from the desired range, immediately put 0x18 bytes of the Dataspace structure there, and then failed because the requested address was already occupied by this structure! Anyway, I consider this probability relatively small with the proposed approach and ignore it (maybe I am wrong).
I think this is a big problem because it can strike at any time during runtime. It is best addressed with the managed-dataspace approach outlined above. But this is more invasive than your current patch.
Cheers Norman