ARM, VMM and the run example

Stefan Kalkowski stefan.kalkowski at genode-labs.com
Fri Jan 29 19:16:15 CET 2021


Hello Peter,

On Fri, Jan 29, 2021 at 01:41:02PM +0000, fixed-term.peter.jacobi at sojus-software.de wrote:
> Dear Stefan,
> thank you for your very detailed explanation, it helped me a lot.
> Actually, in my earlier tests I had started not only the VM twice but also the VMM, just not the terminal_crosslink, and that made the difference.
> After that, we succeeded in running the two virtual machines and ran various tests.
> The structure of the system is the same as the one you saw before. We redirect both VMM outputs through two terminal_crosslink components to one log_terminal and thereby to one screen.
> The effect we have observed in different test scenarios is that the boot processes of both machines start directly one after the other, but the moment Linux starts, the first loaded VM alone produces output for a long time. After an indeterminate time, the second VM eventually begins to produce output, and from then on both interleave their output messages.

I am not surprised by the observations you've made. A lot of the boot
messages are "early kernel messages" that are produced before the
console is initialized. Depending on the kernel command line, if no
early printk is enabled, those messages are sent out to the console all
at once, as soon as it gets initialized. In our case the console is a
virtio console, which handles more than one character at a time (in
contrast to a native serial line). Also, the Terminal API, which uses
shared memory between client and server, can handle a lot of characters
at once. Therefore, given that you run both VMMs and VMs on the same
CPU, it can happen that one side prints most or all of its early boot
messages before the other side does the same.
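
For illustration, a guest kernel command line along these lines directs
console output to the first virtio console (hvc0 is the name Linux
typically assigns to it; whether early messages appear incrementally
additionally depends on the guest kernel's early-printk configuration):

  console=hvc0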

> This brings me to ask you the following questions:
> 1.- How does Genode execute the VMs, kernel-wise?

Kernel-wise, a VM is the same kind of scheduling subject as a thread.
Apart from the fact that a context switch to a VM is more costly, it is
almost the same as scheduling a thread.

> 2.- Is it possible to define a different core for each VMM?

Of course, it is. The VMM can start a VM only on the CPUs it is
running on itself. That means you can define which VM runs on which
CPU by stating on which CPU the VMM is running.
To achieve this, you can use the affinity settings for components
inside the configuration of init. For more information, please
refer to the Genode Foundations book:

  https://genode.org/documentation/genode-foundations/20.05/system_configuration/The_init_component.html#Assigning_subsystems_to_CPUs

Be aware that currently the number of virtual CPUs used by the VM is
hardcoded inside the VMM. If you restrict the number of physical CPUs
used by the VMM, it is advisable not to use more virtual CPUs than
physical ones.
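
For illustration, a sketch of how the two VMMs could be pinned to
different CPUs in init's configuration (the component names are
placeholders, and unrelated declarations are elided):

  <config>
    <affinity-space width="2" height="1"/>
    ...
    <start name="vmm1">
      <!-- restrict vmm1 to the first CPU -->
      <affinity xpos="0" width="1" height="1"/>
      ...
    </start>
    <start name="vmm2">
      <!-- restrict vmm2 to the second CPU -->
      <affinity xpos="1" width="1" height="1"/>
      ...
    </start>
  </config>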


> 3.- Is it possible to watch the behaviour of the VMMs on one core, i.e., how they time-slice, and their performance in the Genode ARM system?

In general, you can look at the workload of the system and the
utilization of the different CPU cores by using the top component.
Please have a look at `repos/os/src/app/top` in the Genode source
code. It uses the TRACE session to collect information about all
components. If you want more detailed information about information
flow, sequences, and timing, the TRACE session is the right mechanism
for you, and the top component is a good starting point. I think, for
a start, you can simply use top - as it is - to gain more knowledge
about what is going on.
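
For illustration, a sketch of a start node for top in init's
configuration (the RAM quantum and the period_ms attribute are
assumptions; the TRACE service is provided by core):

  <start name="top">
    <resource name="RAM" quantum="2M"/>
    <config period_ms="5000"/>
    <route>
      <service name="TRACE"> <parent/> </service>
      <any-service> <parent/> <any-child/> </any-service>
    </route>
  </start>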

> 4.- If they run on one core, is there a possibility to define a priority for each component chain? I mean, for example, the chain VMM -> terminal_crosslink -> log_terminal.

You can define priorities and CPU quota per component in init's
configuration. The base-hw kernel does not interpret priorities as hard
real-time priorities, and they are only effective in combination with
an assigned CPU quota. Please have a look at:

  https://genode.org/documentation/genode-foundations/20.05/under_the_hood/Execution_on_bare_hardware_(base-hw).html#Scheduler_of_the_base-hw_kernel

for more details. Actually, I doubt the usefulness of priorities
in this context. I do not think that you'll observe serious system
load caused by the VMs' message passing anyway.
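
For illustration, a sketch of how a priority and a CPU quota could be
assigned in init's configuration (the number of priority levels and the
quota values are placeholders; priority 0 is the highest):

  <config prio_levels="2">
    ...
    <start name="vmm1" priority="-1">
      <resource name="RAM" quantum="256M"/>
      <resource name="CPU" quantum="20"/>
      ...
    </start>
  </config>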

> I wish you a relaxed weekend and I hope to hear from you soon.

I wish you a nice weekend too, and good results in the upcoming week.
Best regards
Stefan

> Best wishes,
> Peter Jacobi
> On Mon, Jan 18, 2021, 01:03 PM, Stefan Kalkowski wrote:
> Hello Peter,
> 
> On Fri, Jan 15, 2021 at 12:31:10PM +0000, fixed-term.peter.jacobi at sojus-software.de wrote:
> Hi,
> after trying to understand the "vmm_arm.run" file a bit more deeply in an effort to expand it to run two VMMs in parallel, I faced some issues I do not understand well. So I hope to get your expert help here to clarify what I misunderstand.
> 1.- Run file: I modified vmm_arm.run, adding a 2nd component vm2, as you can see in the attached file.
> The result of this you can see in the attached result file.
> So no entry point is found for this 2nd component.
> 
> When looking into your example, I can see that you duplicated only
> the terminal_expect_send test component, but not the VMM (and thereby
> the VM), nor the terminal_crosslink component. Instead, you connected
> the additional terminal_expect_send component to the existing
> terminal_crosslink.
> 
> Before explaining what is going wrong, I'll explain the original
> vmm_arm.run script in more detail, as well as the involved components.
> 
> The vmm_arm.run script, like most run scripts listed in
> `tool/autopilot.list`, is used to automatically test a certain feature
> every night - in this case, the ARM virtualization extensions, or
> rather our hypervisor and VMM. It is therefore not meant to be used
> interactively. On the other hand, we want to test a bit of the
> interactive features of the VMM, namely its virtio console model.
> The virtio console model in the VMM maps to Genode's Terminal
> interface. That means the Terminal session route of the VMM
> component can be used to interact with the guest VM's (Linux) console.
> It is connected to the terminal_crosslink component. The
> terminal_crosslink component is a simple server that connects the RX
> line of one Terminal client with the TX line of another one and vice
> versa. The other Terminal client, which is cross-linked with the VMM,
> is in this case the terminal_expect_send component. This is a very
> simple `expect`-like component, which looks for a certain string in
> its Terminal input and, once it has received that input, sends another
> string. In our example, it is configured like the following:
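> 
>   <config expect="/ #" send="ls" verbose="yes"/>
> 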
> That means it waits for the terminal prompt "/ #" and then sends the
> command "ls". Moreover, it prints all received characters to the LOG
> output. Therefore, we can see the interaction of terminal_expect_send
> and the VMM. That is also the reason why the terminal_expect_send
> component's name is "vm": although it isn't actually the VM, it
> prints all console output of the VM to the LOG output, so an
> observer reads its output as the output of the VM. The VM itself is
> not described discretely in the init configuration of the run script.
> It is always managed by the VMM.
> 
> To sum it up: if you want more than one VM, you need to start another
> VMM. You should not connect more clients to the same
> terminal_crosslink component than the two that are meant to interact
> with each other.
> 
> If you want to play around with the VMM in more detail and learn more
> about its interaction, I would recommend replacing the
> terminal_expect_send and terminal_crosslink components with a terminal
> you can interact with directly. Either you use networking and the
> tcp_terminal, or you take a graphical environment as a starting point
> and start a terminal that uses the GUI server.
> If you simply want to duplicate the VM in the vmm_arm example, you
> have to duplicate the VMM, terminal_crosslink, and
> terminal_expect_send components as well, as sketched below.
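> 
> For illustration, a sketch of the duplicated part of init's
> configuration (the names and quanta are chosen freely, and unrelated
> declarations are elided):
> 
>   <start name="terminal_crosslink2">
>     <binary name="terminal_crosslink"/>
>     <resource name="RAM" quantum="1M"/>
>     <provides> <service name="Terminal"/> </provides>
>   </start>
>   <start name="vmm2">
>     <binary name="vmm"/>
>     ...
>     <route>
>       <service name="Terminal"> <child name="terminal_crosslink2"/> </service>
>       <any-service> <parent/> <any-child/> </any-service>
>     </route>
>   </start>
>   <start name="vm2">
>     <binary name="test-terminal_expect_send"/>
>     <resource name="RAM" quantum="1M"/>
>     <config expect="/ #" send="ls" verbose="yes"/>
>     <route>
>       <service name="Terminal"> <child name="terminal_crosslink2"/> </service>
>       <any-service> <parent/> <any-child/> </any-service>
>     </route>
>   </start>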
> 
> 2.- Source: wandering through the source structure repos/os/src/, I found "terminal_expect_send" in the test directory. What should I expect to find in the os test-directory structure?
> Greetings,
> Peter
> 
> In `src/test` in the sub-repositories in general, or in `os/src/test`
> in particular, you'll find small components that, e.g., showcase the
> usage of an API or test-control a certain component. Those
> components are typically not meant to be used as general-purpose,
> production components.
> 
> Best regards
> Stefan

-- 
Stefan Kalkowski
Genode Labs

https://github.com/skalk | https://genode.org


