I've had problems with init failing when a component was configured to use all remaining available RAM. I've had to alter some of the standard run scripts to make them work. I don't remember which ones right now, but I think arora was one of them. Maybe init should have an XML option to set its reserved RAM quota, or it could instead automatically calculate the required quota based on the number of components it starts. Also, it would be nice to have both reserved and maximum quotas for components in init's configuration.
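As a rough illustration of the auto-calculation idea, init could derive its reserved quota from the number of components it starts. The per-child and base costs below are invented placeholders, not measured Genode values:

  #include <cstddef>

  /* assumed metadata cost per child and fixed base reservation */
  constexpr std::size_t per_child_metadata = 64 * 1024;
  constexpr std::size_t fixed_base         = 256 * 1024;

  /* hypothetical sizing rule: reserved quota grows with the child count */
  constexpr std::size_t reserved_quota(unsigned num_children)
  {
      return fixed_base + num_children * per_child_metadata;
  }

  /* example: a scenario with 10 children would reserve 896 KiB */
  static_assert(reserved_quota(10) == 896 * 1024, "example sizing");

  int main() { return 0; }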
On Wed, Oct 5, 2016 at 8:48 AM, Roman Iten <roman.iten@...453...> wrote:
Hi Norman
- How big is the initial quota of init?
Init receives its quota from core, covering all of the available physical memory. The amount therefore depends on the physical memory of the machine and is printed by core when init is started.
That's what I thought. But I couldn't imagine how init's quota could possibly be exceeded on an x86-64 machine with several GiB of physical memory :)
For each child started by init, init needs to create several capabilities (e.g., the parent capability presented to the child, or the capability for the local ROM session that provides the child's binary as the "binary" ROM module). The allocation of those capabilities is performed via the 'Nova_native_pd::alloc_rpc_cap' RPC function. This function naturally consumes session quota of the corresponding PD session (init's PD session). At some point, the initial session quota (that was passed to core when init's PD session was created) is depleted. In this case, core prints the diagnostic message and returns an error to the client (init). Init responds to this error by upgrading the session quota of its PD session using the preserved slack memory. The session upgrading is handled in 'base-nova/src/lib/base/rpc_cap_alloc.cc'.
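The handling described above is essentially a catch-upgrade-retry pattern. The following minimal plain-C++ sketch illustrates it; 'alloc_rpc_cap' and 'upgrade_pd_session' are hypothetical stand-ins for the real RPC and upgrade calls, and the quota numbers are made up:

  #include <stdexcept>

  /* hypothetical stand-in for the depleted-session-quota condition */
  struct Out_of_metadata : std::exception { };

  static unsigned session_quota = 1;  /* pretend quota for one more cap */

  /* models 'Nova_native_pd::alloc_rpc_cap', failing once quota is gone */
  unsigned alloc_rpc_cap()
  {
      if (session_quota == 0)
          throw Out_of_metadata();  /* core prints its diagnostic here */
      session_quota--;
      return 42;                    /* dummy capability selector */
  }

  /* models the PD-session upgrade paid from init's preserved slack RAM */
  void upgrade_pd_session() { session_quota += 4; }

  /* roughly the pattern handled in base-nova/src/lib/base/rpc_cap_alloc.cc */
  unsigned alloc_rpc_cap_retrying()
  {
      for (;;) {
          try { return alloc_rpc_cap(); }
          catch (Out_of_metadata const &) { upgrade_pd_session(); }
      }
  }

  int main() { return alloc_rpc_cap_retrying() == 42 ? 0 : 1; }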
In your case, the message you see is merely a diagnostic message; the condition is handled properly. In cases where proper error handling of the 'Out_of_metadata' condition is missing, the message has proven quite valuable for spotting the problem. So we decided to keep it.
How can I distinguish whether the condition is handled properly or not? Are there any preceding or following log messages in either case?
- Does the size of the metadata allocation for a child depend on whether I'm using a 32-bit or 64-bit system?
Yes. That is, your scenario produces the message only on 64 bit, not on 32 bit.
Is it worth thinking about calculating the slack-memory size based on 'Genode::addr_t'? Or even making the value configurable?
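A minimal sketch of that scaling idea, assuming a 32-bit base value of 128 KiB (an invented number, not Genode's actual default) and using 'sizeof(void *)' as a stand-in for 'sizeof(Genode::addr_t)':

  #include <cstddef>
  #include <cstdio>

  /* assumed slack for a 32-bit system, scaled up with the word size */
  constexpr std::size_t slack_32bit = 128 * 1024;
  constexpr std::size_t slack = slack_32bit * (sizeof(void *) / 4);

  int main()
  {
      std::printf("preserved slack: %zu bytes\n", slack);
      return 0;
  }

On a 64-bit build this doubles the preserved slack relative to 32 bit, consistent with the observation that the metadata demand is higher there.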
That said, the limit turned out not to be a problem in your case; the message is a false-positive warning. The default limit does not become a problem before the PD-session upgrade fails. I can trigger the problem with your run script when configuring Qemu with 64 MiB of memory and starting 76 children. Sorry that my previous email pointed you in the wrong direction.
It didn't. I'm trying to improve my understanding of memory configuration and allocation in Genode. So every hint helps ;)
Thanks, Roman