Hi,
I'm playing around with Vancouver and Genode a little and stumbled across the problem that there is only 32 MiB of hard-wired backing memory. Trying to increase that memory led to weird errors. Are there any special things to take care of when increasing that memory amount?
Regards
Markus
Hi Markus,
great that you are brave enough to dive right into the most adventuresome chambers of Genode. :-)
Are you using the run script at 'ports/run/vancouver.run' as starting point? I have just conducted the little experiment of increasing the memory to 64M (in vancouver.cc at line 841). In addition to this change, you have to slightly adjust the configuration of the Vancouver process in the run script:
- Increase the RAM quota of the vancouver process (look for the '<resource>' node). In my case, 70M are fine.
- Adjust the 'end' value of the memory model from 0x2000000 to 0x4000000.
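For orientation, the two adjustments would look roughly like this in the run script. The node and attribute names below are recalled from memory and possibly simplified, so please double-check against the actual 'vancouver.run':

```xml
<start name="vancouver">
	<!-- RAM quota: guest memory (64M) plus headroom for the VMM itself -->
	<resource name="RAM" quantum="70M"/>
	<config>
		<!-- memory model of the guest: 'end' grown from 0x2000000 (32M)
		     to 0x4000000 (64M); surrounding nodes omitted -->
		<mem start="0x0" end="0x4000000"/>
	</config>
</start>
```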
Btw, while doing this little experiment, I noticed a problem with a recent commit at genodelabs/master. I hope this has not caused you any trouble so far:
https://github.com/genodelabs/genode/issues/283
Cheers Norman
Hi Norman,
what's life without adventures? ;)
I used run/vancouver as the starting script, yes. And I noticed that increasing to something around 90 MiB works without problems. Going beyond that (e.g., 100 MiB), strange things happen, like page faults where there should not be any, or an INT 3 debug instruction that seems to "forget" the parameter it was passed.
But with my current configuration I am able to proceed, although it would be interesting to know what happens with more memory.
I did not run into that other issue you mentioned, not that I remember.
Cheers
Markus
On 18 July 2012 21:02, Norman Feske <norman.feske@...1...> wrote:
--
Dr.-Ing. Norman Feske
Genode Labs

http://www.genode-labs.com · http://genode.org

Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth
Live Security Virtual Conference Exclusive live event will cover all the ways today's security and threat landscape has changed and how IT managers can respond. Discussions will include endpoint security, mobile security and the latest in malware threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/ _______________________________________________ Genode-main mailing list Genode-main@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/genode-main
Hello,
Vancouver on NUL has an IMHO non-obvious way of managing virtual memory. How does Vancouver on Genode allocate VM memory?
Julian
Hi Julian,
Vancouver on NUL has an IMHO non-obvious way of managing virtual memory. How does Vancouver on Genode allocate VM memory?
can you elaborate a bit more? Which part do you consider non-obvious in particular?
In general, Vancouver on Genode tries to use plain Genode mechanisms to manage its address space wherever possible. For example, for managing the guest-physical memory, Vancouver needs to manually populate the lower portion of its virtual address space (which is a shadow of the guest-physical memory). Consequently, we have to make sure that no other memory object ends up being attached to this virtual memory area. We do this by creating a managed dataspace (in fact, this is an RM session) and attaching it at the lower part of Vancouver's address space. This way, the area will never be used for anything else.
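The reservation idea can be illustrated with plain POSIX: an inaccessible anonymous mapping occupies a virtual range so that no later allocation can land there. This is only an analogy to the managed-dataspace trick, not Genode code:

```cpp
#include <sys/mman.h>
#include <cstddef>

/* Reserve 'size' bytes of virtual address space without backing memory.
 * PROT_NONE means any access faults, yet the kernel considers the range
 * occupied, so subsequent mmap() calls will not overlap it -- the same
 * effect Vancouver achieves by attaching a managed dataspace over the
 * guest-physical shadow area. */
static void *reserve_region(std::size_t size)
{
    void *p = mmap(nullptr, size, PROT_NONE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    return (p == MAP_FAILED) ? nullptr : p;
}
```

Individual pages inside such a reservation can later be made accessible with mprotect or replaced via mmap with MAP_FIXED, which corresponds to Vancouver populating the shadow area on demand.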
If you are interested in the implementation, please take a look at the 'Guest_memory' class and the accompanying comments:
https://github.com/genodelabs/genode/blob/master/ports/src/vancouver/main.cc
That said, even though I think that the memory management is implemented in a clean way, it is certainly not obvious either. :-)
Cheers Norman
Hi Markus,
I used run/vancouver as the starting script, yes. And I noticed that increasing to something around 90 MiB works without problems. Going beyond that (e.g., 100 MiB), strange things happen, like page faults where there should not be any, or an INT 3 debug instruction that seems to "forget" the parameter it was passed.
I forgot to mention another point that needs to be considered: The link address of the Vancouver program.
The lower portion of Vancouver's address space corresponds to the guest-physical memory. This one-to-one relationship is imposed by the NOVA hypervisor. For this reason, this particular virtual address range must be kept free from ordinary memory objects (as I outlined in my reply to Julian's posting). This includes dataspaces attached to Vancouver's address space but also Vancouver's text, data, and bss segments. If Vancouver used the default link address used by normal Genode programs, this invariant would be violated. Therefore, the link address is explicitly specified in Vancouver's target.mk file. Currently, it is set to 0x50000000, which might explain your problems. Can you try to increase this value?
Cheers Norman
On Thu, 19 Jul 2012 11:19:11 +0200 Norman Feske (NF) wrote:
The lower portion of Vancouver's address space corresponds to the guest-physical memory. This one-to-one relationship is imposed by the NOVA hypervisor. For this reason, this particular virtual address range must be kept free from ordinary memory objects (as I outlined in my reply to Julian's posting).
Just to clarify this point:
The hypervisor neither forces you to put Vancouver and its associated VM in the same PD, nor does it force you to have one instance of Vancouver per VM. You can create a PD, remotely create a vCPU in it and establish the VMX/SVM portals to point to some other PD. Then that other PD can manage its virtual address space any way it wants.
That said, we have found that putting both VMM and VM in the same PD has a number of advantages. First, a VMM needs to frequently access the memory of its VM, e.g., to look at the guest page tables. Having a 1:1 relationship between virtual memory in the VMM and guest-physical memory of the VM greatly simplifies that task. Second, if the VMM and VM were in different PDs, you'd pay for two additional address-space switches on each VM exit.
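The first advantage is easy to see in code: with the 1:1 layout, a guest-physical address doubles as an offset into the VMM's own view of guest memory, so a guest page-table walk needs no extra translation step. A simplified 32-bit, non-PAE walk might look like this (illustrative only; 'guest_ram' stands for the base of the guest-physical shadow):

```cpp
#include <cstdint>
#include <cstring>

/* Walk a 32-bit two-level guest page table (no PAE, no 4M pages).
 * Because guest-physical memory is mapped one-to-one into the VMM,
 * 'guest_ram + guest-physical address' is directly dereferenceable. */
static uint32_t guest_virt_to_phys(const uint8_t *guest_ram,
                                   uint32_t cr3, uint32_t va)
{
    auto entry = [&](uint32_t table, uint32_t idx) {
        uint32_t e;
        std::memcpy(&e, guest_ram + (table & ~0xfffu) + idx * 4, 4);
        return e;
    };
    uint32_t pde = entry(cr3, (va >> 22) & 0x3ff);  /* page-directory entry */
    uint32_t pte = entry(pde, (va >> 12) & 0x3ff);  /* page-table entry     */
    return (pte & ~0xfffu) | (va & 0xfffu);         /* frame | page offset  */
}
```

In a split VMM/VM design, each 'entry' lookup would instead require mapping or copying the guest frame first, which is exactly the overhead the 1:1 layout avoids.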
Cheers, Udo
Hi Udo,
Just to clarify this point:
The hypervisor neither forces you to put Vancouver and its associated VM in the same PD, nor does it force you to have one instance of Vancouver per VM. You can create a PD, remotely create a vCPU in it and establish the VMX/SVM portals to point to some other PD. Then that other PD can manage its virtual address space any way it wants.
thanks. Let's keep that for the records. ;-)
That said, we have found that putting both VMM and VM in the same PD has a number of advantages. First, a VMM needs to frequently access the memory of its VM, e.g., to look at the guest page tables. Having a 1:1 relationship between virtual memory in the VMM and guest-physical memory of the VM greatly simplifies that task. Second, if the VMM and VM were in different PDs, you'd pay for two additional address-space switches on each VM exit.
These are damn good arguments for the current design - I don't dare to question them. So the 1:1 relationship between guest memory and Vancouver's address space is actually not "imposed" by the kernel but seems to be the most sensible design. Sorry that I mixed that up.
Cheers Norman
Hi Norman,
experimenting with the link address of Vancouver, I encountered very mixed results. Using 0x6000000 works fine, whereas addresses like 0x47000000, 0x80000000, 0xa0000000 or 0xb0000000 cause page faults very early in the startup of Genode. Init complains about addresses having changed after attach.
This leads me to my real question: How is the link address restricted, or how does it affect the memory situation? Maybe you could just elaborate a bit more on how the entire memory handling works when using vancouver on Genode.
Maybe that could help me find out what is causing my current problems, because I think they could be memory-related.
Cheers
Markus
Hi Markus,
experimenting with the link address of Vancouver, I encountered very mixed results. Using 0x6000000 works fine, whereas addresses like 0x47000000, 0x80000000, 0xa0000000 or 0xb0000000 cause page faults very early in the startup of Genode. Init complains about addresses having changed after attach.
does changing the link address of vancouver influence the behavior of init? This is unexpected and should not happen. No matter what strange things vancouver is doing, it should not be able to have an effect on init. If Vancouver is able to mess up init, you have likely hit a bug. I would appreciate a way to reproduce it.
Of the addresses you mentioned above, only 0x47000000 looks suspicious. The virtual address range from 0x40000000 to 0x4fffffff is used to keep thread-context information (such as stacks and the UTCB). If you picked this link address, the text segment would conflict with the context area.
Off the top of my head, I do not know why you run into problems with the other addresses. I will need to investigate.
This leads me to my real question: How is the link address restricted, or how does it affect the memory situation? Maybe you could just elaborate a bit more on how the entire memory handling works when using vancouver on Genode.
The important invariant Vancouver needs to uphold is that the lower part of its address space corresponds one-to-one to the guest-physical memory. Let's call this low virtual area within Vancouver the "guest-physical shadow". If your virtual machine is supposed to have 256 MB of physical memory, you will have to keep the lowest 256 MB from being used by any ordinary memory object. That includes the link address of the Vancouver binary.
Unfortunately, reality is just a bit more twisted than that. By taking the paragraph above verbatim, you might expect that a Genode dataspace attached within the guest-physical shadow area will automatically appear in the guest-physical memory. However, this is not true. It's important that the guest-physical shadow area is populated using a mapping with the 'update_guest_pt' bit set. Otherwise, NOVA won't update the guest's physical memory. For ordinary page-fault resolutions performed by core's pager, the 'update_guest_pt' bit is not set. For this reason, the guest-physical shadow area is paged locally within Vancouver. The physical backing store is acquired using core's RAM service (at construction time of the 'Guest_memory' object) and mapped within Vancouver at a free virtual address range. Each time an NPT fault occurs (when the guest OS accesses guest-physical memory that is not mapped), the '_handle_map_memory' function is called. It remaps a flexpage from the backing store to the guest-physical shadow area using the 'update_guest_pt' bit.
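To give an idea of what such a fault handler computes, here is a sketch (a hypothetical helper, not taken from 'vancouver.cc') of picking the largest naturally aligned flexpage that covers the faulting guest-physical address while staying inside guest memory:

```cpp
#include <cstdint>

/* Return log2 of the byte size of the largest naturally aligned region
 * that contains 'fault' and fits into [0, mem_size).  The base of
 * guest-physical memory is taken as 0, matching the shadow-area layout.
 * A real handler would additionally respect the alignment of the backing
 * store and the permissions of the mapping. */
static unsigned flexpage_order(uint64_t fault, uint64_t mem_size)
{
    const unsigned page_order = 12;          /* 4 KiB pages */
    unsigned order = page_order;
    for (;;) {
        uint64_t size = 1ULL << (order + 1); /* candidate: twice as large */
        uint64_t base = fault & ~(size - 1); /* align down to that size   */
        if (base + size > mem_size)
            break;                           /* would exceed guest memory */
        order++;
    }
    return order;
}
```

Mapping large flexpages eagerly like this reduces the number of NPT faults the guest triggers while touching its memory for the first time.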
Maybe that could help me find out what is causing my current problems, because I think they could be memory-related.
That is quite possible. When I ported the code, getting the NPT mappings right was the most difficult part. The code may still behave wrongly in some corner cases that I just haven't hit so far. Have you tried to cross-check my version of 'vancouver.cc' with Bernhard's original implementation? Bernhard's code is known to work for running Linux as guest OS. I would recommend cross-correlating both versions so you may spot semantic gaps in my version.
Please excuse this overly general answer. The broad nature of your question makes it hard to give more specific advice. ;-) Maybe you could share more details about what you are specifically trying to do, or even post a link to the git branch you are working on?
Cheers Norman
Hi Norman,
thanks for your explanations. Now I know that I should not use 0x47000000 as link address ;)
My experiments are not yet made publicly available. Right now I am trying to set up an environment to run Fiasco/Fiasco.OC in a VM, investigating the problems to be solved. But mainly I am working on understanding the architecture of Vancouver/Genode/NOVA, running into several issues, which are in the process of being explained/resolved. My question is of this broad nature because of my not yet very targeted work ;)
But for now, thanks again for the insights, and be aware, I'm afraid I will come back to you with more (specific) questions ;)
Regards
Markus
Hello,
On 30.07.2012 19:56, Norman Feske wrote:
Using 0x6000000 works fine, whereas addresses like 0x47000000, 0x80000000, 0xa0000000 or 0xb0000000 cause page faults very early in the startup of Genode. Init complains about addresses having changed after attach.
does changing the link address of vancouver influence the behavior of init? This is unexpected and should not happen. No matter what strange things vancouver is doing, it should not be able to have an effect on init. If Vancouver is able to mess up init, you have likely hit a bug. I would appreciate a way to reproduce it.
Init doesn't fail, it's alive. It just refuses to attach at addresses > 0x80000000.
For some reason, in base-nova/src/core/platform.cc around line 355 the available virtual address space is configured to use only the first 2G.
After adjusting the size to 3G, I was able to use link addresses for vancouver above 2G.
@Norman: Is there a reason to have this 2G boundary instead of 3G for 32-bit NOVA?
Regarding another question of Markus:
I used run/vancouver as the starting script, yes. And I noticed that increasing to something around 90 MiB works without problems. Going beyond that (e.g., 100 MiB), strange things happen, like page faults where there should not be any, or an INT 3 debug instruction that seems to "forget" the parameter it was passed.
In base-nova/src/core/platform_thread.cc, around line 51, the UTCB address of the first thread of a new address space is hard-coded to 0x6000000.
That explains why you get into trouble if you use more than 96 MiB. We should use some other well-chosen address here ...
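Putting the constraints mentioned in this thread together, a quick sanity check for a candidate link address might look like this. The concrete ranges are specific to 32-bit Genode/NOVA as discussed here and are merely illustrative:

```cpp
#include <cstdint>

/* Check a candidate Vancouver link address against the constraints
 * discussed in this thread:
 *  - it must lie above the guest-physical shadow (lowest guest_mem bytes),
 *  - it must not fall into the thread-context area [0x40000000, 0x50000000),
 *  - it must stay below core's virtual-address-space limit (2G before the
 *    platform.cc fix, 3G afterwards). */
static bool link_addr_ok(uint64_t addr, uint64_t guest_mem, uint64_t vm_limit)
{
    const uint64_t ctx_start = 0x40000000, ctx_end = 0x50000000;
    if (addr < guest_mem)                    return false;
    if (addr >= ctx_start && addr < ctx_end) return false;
    if (addr >= vm_limit)                    return false;
    return true;
}
```

The 0x6000000 address that works in Markus' experiments passes all checks (for a 32 MiB guest), 0x47000000 collides with the context area, and addresses at or above 2G only become usable once core's virtual address space is extended to 3G.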
Cheers,
Alex.
Hello Markus,
On 30.07.2012 13:19, Markus Partheymueller wrote:
experimenting with the link address of Vancouver, I encountered very mixed results. Using 0x6000000 works fine, whereas addresses like 0x47000000, 0x80000000, 0xa0000000 or 0xb0000000 cause page faults very early in the startup of Genode. Init complains about addresses having changed after attach.
can you please give [0] a try? Your observed problems should be gone now.
Cheers,
Alex.
[0] https://github.com/alex-ab/genode/commits/nova_vmm