Data space creation and physical memory allocation strategy.

Chen Tian chen.tian at ...60...
Mon Jun 13 21:05:11 CEST 2011


Thanks Norman. I think I have a better understanding now. :)

-----Original Message-----
From: Norman Feske [mailto:norman.feske at ...1...] 
Sent: Monday, June 13, 2011 10:51 AM
To: genode-main at lists.sourceforge.net
Subject: Re: Data space creation and physical memory allocation strategy.

Hello Chen,

> resources for each client. To do that, basically we need to allocate a
> "real" dataspace (i.e., get physical memory from the backing store) and
> do paging upon a page fault. When the application runs out of its RAM,
> we can do a swap (a file system is probably needed for swapping), which
> is what Linux is doing.

exactly. But the possibilities go even further. For example, on a NUMA
system, a special memory manager could provide a RAM service that
migrates dataspaces transparently between local and non-local memory.
Another use case would be a memory manager with support for large
non-contiguous memory areas.

That said, the concept of managed dataspaces is not yet time-tested. We
still need to develop a profound understanding of its use cases and may
have to refine the concept. Right now, there is one important thing to
keep in mind: each managed dataspace is a separate RM session. Therefore,
a managed dataspace is not exactly cheap. Its creation traverses the
process tree (in contrast to the allocation of RAM dataspaces), and each
RM session must be paid for with a quota donation. Hence, a managed
dataspace makes sense for large memory objects but not as a means to
manage containers of just a few memory pages.

As another limitation, this concept is not functional on 'base-linux'
because Linux does not allow the manipulation of remote address spaces
(at least, we do not know how to do it efficiently).
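To make the relationship to RM sessions concrete, here is a minimal
sketch of how a memory manager could set up a managed dataspace with the
base API. The sizes and the function name are made up for illustration,
and the exact signatures may differ between Genode versions:

  /*
   * Sketch only: a managed dataspace is obtained from a dedicated RM
   * session and populated with ordinary RAM dataspaces on demand.
   */
  #include <base/env.h>
  #include <rm_session/connection.h>
  #include <ram_session/ram_session.h>

  using namespace Genode;

  enum { MANAGED_SIZE = 16*1024*1024, CHUNK_SIZE = 4096 };

  static void managed_dataspace_example()
  {
      /* each managed dataspace is backed by its own RM session */
      Rm_connection rm(0, MANAGED_SIZE);

      /* the RM session's dataspace can be handed out like any other */
      Dataspace_capability managed_ds = rm.dataspace();

      /* populate part of the managed dataspace with real RAM */
      Ram_dataspace_capability chunk =
          env()->ram_session()->alloc(CHUNK_SIZE);
      rm.attach(chunk);
  }

A client that receives 'managed_ds' attaches it like an ordinary
dataspace. Faults within the unpopulated parts are reported to the
manager, which can respond by attaching further RAM chunks (or swapping
some out).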

> BTW, it looks like the current heap implementation does not use managed
> dataspaces and, therefore, every malloc still leads to a physical memory
> allocation, right?

A process can never know whether its 'env()->ram_session()' (which is
used as the backing store by 'env()->heap()') refers to core's RAM
service or not. The RAM session of the process could have been routed to
another implementation of the RAM-session interface, for example a
swapping memory manager. For all current Genode scenarios, your
observation is correct - each process indeed uses core's RAM service and
thereby physical memory. But if we had an alternative RAM-service
implementation, we could make a process use it by simply routing the
process' RAM session to that service rather than to the parent - no code
modifications required.
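For illustration, this is roughly the path the heap takes for its backing
store (a simplified sketch; the function name is made up, the point is
that the calls go through the session interfaces, not to core directly):

  #include <base/env.h>
  #include <ram_session/ram_session.h>

  static void *alloc_backed_block(Genode::size_t size)
  {
      using namespace Genode;

      /* request a RAM dataspace from whoever provides our RAM session */
      Ram_dataspace_capability ds = env()->ram_session()->alloc(size);

      /* map it into the local address space */
      void *ptr = env()->rm_session()->attach(ds);
      return ptr;
  }

Nothing in this path names core - the 'alloc' call goes to whatever RAM
session the process was given, so substituting a different RAM-service
implementation is purely a matter of routing.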

Best regards
Norman

-- 
Dr.-Ing. Norman Feske
Genode Labs

http://www.genode-labs.com · http://genode.org

Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth
