Hello Genodians
I'm running a scenario on base-linux on a system that has virtual-memory overcommit disabled.
While debugging startup problems of this scenario on the target system, I realized that "random" components didn't start, without any hints in the log.
`strace` showed that the component that wasn't running (often timer) was terminated very early with SIGSEGV. A look at htop showed that each Genode component uses about 420 MB of virtual memory. lx_hybrid components, on the other hand, use "only" about 170 MB.
Can anybody give me some hints on where I should start my quest to reduce the virtual-memory footprint of Genode components on base-linux?
Regards, Pirmin
Hi Pirmin,
> `strace` showed that the component that wasn't running (often timer) was terminated very early with SIGSEGV. A look at htop showed that each Genode component uses about 420 MB of virtual memory. lx_hybrid components, on the other hand, use "only" about 170 MB.
> Can anybody give me some hints on where I should start my quest to reduce the virtual-memory footprint of Genode components on base-linux?
The major part of the virtual-memory need stems from the reserved address-space ranges for the stack area and the linker area. Strictly speaking, those virtual-memory ranges are not actually "used", but they are marked as reserved to prevent the Linux kernel from installing mappings within these ranges, e.g., when the application attaches a dataspace to its local address space.
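For illustration, here is a minimal sketch (not the actual base-linux code) of how such a reservation is commonly expressed on Linux: the range is mapped with PROT_NONE, so no other mapping can land there while no RAM is ever touched.

  /* sketch of a virtual-address-range reservation on Linux,
   * not the actual base-linux implementation */
  #include <sys/mman.h>
  #include <cstdio>

  int main()
  {
      unsigned long const reservation_size = 256UL << 20; /* e.g., 256 MiB stack area */

      /* PROT_NONE: the range is inaccessible, so no physical memory gets
       * committed; MAP_NORESERVE additionally asks the kernel to skip
       * swap-space reservation (not honored when overcommit is disabled) */
      void *reserved = mmap(nullptr, reservation_size, PROT_NONE,
                            MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
      if (reserved == MAP_FAILED) {
          perror("mmap");
          return 1;
      }

      printf("reserved [%p,%p) - appears in /proc/self/maps with '---p'\n",
             reserved, (void *)((char *)reserved + reservation_size));
      return 0;
  }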
You can find the relevant definitions for the linker area at [1] and for the stack area at [2]. Note that the latter definition is Linux-specific.
[1] https://github.com/genodelabs/genode/blob/master/repos/base/include/pd_sessi...
[2] https://github.com/genodelabs/genode/blob/master/repos/base/src/include/base...
According to these definitions, the linker area covers a virtual address range of 160 MiB whereas the stack area covers 256 MiB. The sum is 416 MiB. The remaining 4 MiB is used by the ELF segments of the dynamic linker and dataspaces attached by the actual application.
For hybrid components, the linker area remains unused, which explains the lower footprint.
The easiest way to reduce the footprint would be to lower the values of these definitions. However, I'd instead recommend investigating a way to exclude merely reserved areas from the accounting. If there were a way for the policy to clearly distinguish the actual "use" of virtual memory from mere "reservation", the policy could be much more rigid.
To investigate the virtual address-space layout of a given Genode component running on Linux, I recommend looking at the corresponding /proc/<PID>/maps pseudo file. There you can quite clearly see the distinction between the reserved areas and areas backed by RAM, i.e., the entries that point to the "ds" (dataspace) files.
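To quickly get an overview, one can also tally the two classes of entries. A small sketch, assuming the usual maps format where a permission string starting with "---" denotes a PROT_NONE reservation (pass a PID as argument, or omit it to inspect the program itself):

  /* sketch: classify a process address space into merely reserved
   * (PROT_NONE, "---p") and actually accessible ranges by parsing
   * /proc/<PID>/maps */
  #include <cstdio>
  #include <fstream>
  #include <iostream>
  #include <sstream>
  #include <string>

  int main(int argc, char **argv)
  {
      std::string const pid = (argc > 1) ? argv[1] : "self";
      std::ifstream maps("/proc/" + pid + "/maps");

      unsigned long long reserved = 0, accessible = 0;
      std::string line;
      while (std::getline(maps, line)) {
          std::istringstream fields(line);
          std::string range, perms;
          fields >> range >> perms;

          unsigned long long start = 0, end = 0;
          if (std::sscanf(range.c_str(), "%llx-%llx", &start, &end) != 2)
              continue;

          /* "---" means no access at all, i.e., a mere reservation */
          (perms.compare(0, 3, "---") == 0 ? reserved : accessible) += end - start;
      }
      std::cout << "reserved:   " << (reserved   >> 20) << " MiB\n"
                << "accessible: " << (accessible >> 20) << " MiB\n";
      return 0;
  }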
Cheers, Norman
Hi Norman
Many thanks.
On 18.02.20 09:42, Norman Feske wrote:
> Hi Pirmin,
> You can find the relevant definitions for the linker area at [1] and for the stack area at [2]. Note that the latter definition is Linux-specific.
> [1] https://github.com/genodelabs/genode/blob/master/repos/base/include/pd_sessi...
> [2] https://github.com/genodelabs/genode/blob/master/repos/base/src/include/base...
> According to these definitions, the linker area covers a virtual address range of 160 MiB whereas the stack area covers 256 MiB. The sum is 416 MiB. The remaining 4 MiB is used by the ELF segments of the dynamic linker and dataspaces attached by the actual application.
> For hybrid components, the linker area remains unused, which explains the lower footprint.
By lowering these definitions I was able to reduce the virtual-memory size of the components quite a bit, but unfortunately not enough to run all components of the scenario.
> The easiest way to reduce the footprint would be to lower the values of these definitions. However, I'd instead recommend investigating a way to exclude merely reserved areas from the accounting. If there were a way for the policy to clearly distinguish the actual "use" of virtual memory from mere "reservation", the policy could be much more rigid.
Unfortunately, the Linux kernel doesn't provide a way to account for only the actually used memory.
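As far as I know, what the strict overcommit mode actually enforces is the system-wide commit accounting, which is at least observable via the Committed_AS and CommitLimit fields of /proc/meminfo. A tiny sketch to watch these two values while the scenario starts:

  /* sketch: print the system-wide commit accounting that the strict
   * overcommit mode (vm.overcommit_memory=2) enforces */
  #include <fstream>
  #include <iostream>
  #include <string>

  int main()
  {
      std::ifstream meminfo("/proc/meminfo");
      std::string line;
      while (std::getline(meminfo, line))
          if (line.rfind("CommitLimit", 0) == 0 || line.rfind("Committed_AS", 0) == 0)
              std::cout << line << '\n';
      return 0;
  }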
While discussing possible solutions, we had a cool idea which I think will be my Hack and Hike project. The idea is to add a command-line parameter to core of base-linux with which the user may specify the maximum amount of memory core should provide to the initial init. This way, a Genode scenario could never use more than the specified amount of memory. Currently this can happen if one uses memory saturation and the component with the saturation has a memory leak. Not that this would solve our problem, but it would restrain Genode components on base-linux even further.
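To make the idea a bit more concrete, here is a purely hypothetical sketch; the "--max-ram" parameter, its format, and everything around it are made up for illustration and do not exist in base-linux today:

  /* purely hypothetical sketch - illustrates the proposed clamping of
   * the RAM quota that core would hand to the initial init */
  #include <cstdint>
  #include <cstdio>
  #include <cstdlib>
  #include <cstring>

  /* parse sizes like "64M" or "1G" (hypothetical argument format) */
  static uint64_t parse_size(char const *s)
  {
      char *end = nullptr;
      uint64_t value = std::strtoull(s, &end, 0);
      switch (end ? *end : '\0') {
      case 'G': return value << 30;
      case 'M': return value << 20;
      case 'K': return value << 10;
      default:  return value;
      }
  }

  int main(int argc, char **argv)
  {
      uint64_t avail = 512ULL << 20; /* stand-in for the detected RAM */

      for (int i = 1; i < argc; i++)
          if (std::strncmp(argv[i], "--max-ram=", 10) == 0) {
              uint64_t const limit = parse_size(argv[i] + 10);
              if (limit < avail)
                  avail = limit; /* clamp what init may ever get */
          }

      std::printf("RAM quota offered to init: %llu MiB\n",
                  (unsigned long long)(avail >> 20));
      return 0;
  }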
Regards, Pirmin