Hi, we're having bad juju with dataspaces. As we try larger dataspaces, things seem to go wrong (hangs, bad mappings, and exceptions).
Can someone check out my test program (https://github.com/dwaddington/genode/blob/master/testing/src/core-api-1/mai...)
and see what is going on? Basically, if you change the total memory use in the program (line 26) to something greater than roughly 512MB, region conflicts happen.
I tried this test on both Fiasco.OC and NOVA, with QEMU and on real PCs. Same result. The NOVA run reports an "unresolvable exception" by pager:core-api-1.
I could of course be doing something wrong with the APIs.
Daniel
Hi Daniel,
On 07/09/2013 09:36 PM, Daniel Waddington wrote:
When using 'Rm_session::attach()', the Rm_session_component tries to place the dataspace at an address which is aligned to the dataspace size. So, when setting NUM_REGIONS to 8 or 16, your test works, but with 10 regions there will be small holes in the address space and the last dataspace does not fit anywhere, causing a 'Region_conflict' exception.
Christian
On 10.07.2013 14:10, Christian Prochaska wrote:
at an address which is aligned to the dataspace size
Actually, it's not aligned exactly to the dataspace size but to the next power of two above it; I don't know the correct wording. What I mean is that a dataspace of size 102M, for example, will be placed at address 128M (or 256M, ...).
In Daniel's test case, attaching a single dataspace of size 107374182 to an Rm_session of the same size fails, because in 'Rm_session::attach()' the size to be attached gets rounded up to page granularity, while the size of the Rm_session does not have page granularity. I wonder if an exception should be thrown if one tries to create an Rm_session with a 'vm_size' that does not have page granularity?
On 07/10/2013 03:27 PM, Christian Prochaska wrote:
attaching a single dataspace of size 107374182
correction:
"trying to attach 107374182 bytes of a (bigger) dataspace to an Rm_session of the same size"
Since an Rm_session can be used as a dataspace, and dataspace sizes in Genode always have page granularity, it should not be possible to have an Rm_session of a size that does not have page granularity. Perhaps the 'vm_size' should just get rounded up automatically to page granularity, as is done in 'Ram_session_component::alloc()'?
@Daniel: so the problem with your test case is not necessarily the aligned placement of dataspaces in the Rm_session, but first of all that the Rm_session sizes and offsets given to 'Rm_session::attach()' do not have page granularity when using 10 regions.
Hi Alex, I see, so there is some 2^N round-up. Why is this? I thought it used an AVL tree to manage memory.
Also, can you run the test with..
#define NUM_REGIONS 15
#define TOTAL_MEMORY_TO_USE GB(2)
#define REGION_SIZE MB(96)
I then get unhandled page faults. Can you see why?
Thanks Daniel
_______________________________________________
Genode-main mailing list
Genode-main@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/genode-main
Sorry Christian, I have no clue why I just called you Alex! Apologies. It must be catching.
Daniel
Hi Daniel,
On 10.07.2013 16:48, Daniel Waddington wrote:
Hi Alex, I see, so there is some 2^N round up. Why is this, I thought it used an AVL tree to manage memory?
From my understanding it's a mapping optimization for some kernels. Something like: mapping 64M to another task can be done with one syscall if the address has 64M alignment. But I'm not familiar with the details; perhaps somebody else can help out here?
Also, can you run the test with..
#define NUM_REGIONS 15
#define TOTAL_MEMORY_TO_USE GB(2)
#define REGION_SIZE MB(96)
I then get unhandled page faults. Can you see why?
Thanks Daniel
When only changing these values, the RAM dataspace (phys_slab_size = GB(1)) is too small for 15 regions of 96M. Usually, it would be an error if 'Rm_session::attach()' got called with an 'offset' that exceeds the size of the dataspace (which happens in this case), but if I remember correctly, there was a use case where this was valid: something with ELF images and the dynamic linker. @ssumpf, do you remember? We should probably document in 'Rm_session_component::attach()' why no exception gets thrown if 'size' is given and (offset >= dsc()->size()).
Christian
Hi Christian, but this is 2GB, and even with each region rounded up to 128M, 15 regions come to only 1920MB. The RAM dataspace should be plenty big enough.
Daniel
Hi Daniel,
TOTAL_MEMORY_TO_USE was only used to calculate REGION_SIZE, but the RAM dataspace size is still set to GB(1).
When increasing the RAM dataspace size to 2G, I'm getting the error
We ran out of physical memory while allocating - bytes
[init -> core-api-1] Assertion failed: ram.alloc() failed
but not the page faults.
Christian
Hello,
From my understanding it's a mapping optimization for some kernels. Something like: mapping 64M to another task can be done with one syscall if the address has 64M alignment. But I'm not familiar with the details; perhaps somebody else can help out here?
We use natural alignments for allocations of both physical memory and virtual memory in order to increase the likelihood of large-page mappings. Using large mappings has several benefits:
* On most L4-like kernels such as NOVA, a page fault can be answered with a so-called flexpage that can have an arbitrary power-of-two size. Because each mapping creates a node in the in-kernel mapping database, the use of a few large-page mappings consumes less kernel memory than the use of many small-page mappings.
* On CPU architectures with support for different page sizes, the use of large pages reduces the TLB footprint and the number of page-fault exceptions. But large page sizes can be used only if large mappings are used. Therefore, large mappings should be preferred over small mappings to enable the kernel to actually make use of large pages.
* Even if a CPU architecture supports only a few page sizes (e.g., x86), using large mappings has the benefit that the kernel can populate page tables for a large range of virtual memory (covered by a single mapping) without invoking the user-level page-fault protocol. This can happen eagerly or on demand, whatever the kernel developers prefer. In contrast, if each page fault were resolved via a measly 4K mapping, the kernel would have no room for such optimizations.
However, for large-page mappings to work, some conditions regarding the alignment of the mapping source and destination must be met. For example, in order to use superpages on x86, both the physical address of the backing store and the virtual address must be aligned to a 4 MiB boundary. For this reason, Genode's core tries to use natural alignments for both the allocation of physical memory ('Ram_session::alloc') and virtual memory ('Rm_session::attach'). If no free address range that meets those conditions exists, core successively weakens the condition (see the implementation of 'Rm_session::attach' in 'base/src/core/rm_session_component.cc').
Best regards Norman
Sorry. My mistake. Daniel
So, Christian, just to offer some more information: we had an issue whereby memory was being corrupted, I think by overlapping memory mappings. I cannot give you a simple test since it only seems to happen on a big machine with lots of memory. Anyway, the fix was to align the sub_rm allocations to 2^N.
So I think there is a bug somewhere, either in our code, the Genode code, or Fiasco.OC. For the moment, rounding up works for us.
Best Daniel