Feedback on 23.05

ttcoder at netcourrier.com
Tue Jul 25 21:08:34 CEST 2023


Fellow Genodians,
what follows is a minor observation and two (more serious) questions.

1) Suspend/resume

The introduction of that feature in 23.02 triggered some "geek lust"
in this aging geek, and now with 23.05 I got around to attempting
system integration of suspend/resume in my stack.

In Qemu it all seems to work fine (after I realized the vesa and input drivers need to be
restarted, from reading acpi_suspend.run more carefully). However, on bare metal (admittedly
years-old hardware), S3 suspend works well, but at resume time the screen remains black.
I thought that if the video driver is the culprit, I could try the "back door" (VNC) to validate
that everything except vesa works, and confirm I need to revisit my integration of vesa_drv.
So I tried to connect via VNC after resuming, but no luck. Could be that the NIC driver needs to
be restarted as well (I didn't handle that; I currently only restart vesa and ps2_drv).

I could persist, but this is not a feature I really need (I just wanted it for "bragging rights" :^) so I'll
probably wait for newer Genode releases, in case it's a matter of waiting for Genode upgrades.
Very cool to have that feature almost within reach, though.


2) Launching apps
A newbie question! (Yes, even after 5 years I still have those :-)
How does one launch components (applications)?
Grepping through the Genode repo, I came up with:

- use a sub-init with a generous RAM allotment (2 GB), and drive its config ROM through a report_rom:
a list of launched apps has to be maintained by a "third actor" (not the sub-init, and not any of
the client apps that request launching a process, but rather some sort of stand-by "registrar"),
whose job would be to regenerate the sub-init's config ROM whenever that list grows as a new app
is launched.
Problem is, what happens when the user *quits* an application? When that occurs, the app is still
part of the sub-init's config, so the next time the sub-init reads its config, it's going to re-launch it, right?
This would otherwise fit the bill well, if not for that problem of undesired app relaunch.
Maybe I can solve it by sending a notification every time an app quits (at least when it quits cleanly
rather than crashing), and the notification recipient (probably the "registrar"?)
would remove the relevant snippet from the sub-init's config.
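For concreteness, the wiring I have in mind looks roughly like this (a sketch only; the
component names "runtime" and "registrar", the labels, and the quotas are made-up placeholders):

```xml
<!-- report_rom hands the registrar's generated report to the
     sub-init as its config ROM -->
<start name="report_rom">
  <resource name="RAM" quantum="2M"/>
  <provides> <service name="Report"/> <service name="ROM"/> </provides>
  <config verbose="no">
    <policy label="runtime -> config" report="registrar -> init.config"/>
  </config>
</start>

<!-- the sub-init with the generous allotment; it re-applies its config
     whenever the registrar reports a new version -->
<start name="runtime" caps="2000">
  <binary name="init"/>
  <resource name="RAM" quantum="2G"/>
  <route>
    <service name="ROM" label="config"> <child name="report_rom"/> </service>
    <any-service> <parent/> <any-child/> </any-service>
  </route>
</start>
```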

- loader_session: seems close enough too, and simpler to use, but the README says the RAM/caps of the created child
are subtracted from the caller's own quota rather than from a separate budget (i.e. the opposite of a sub-init),
whereas I want the reverse.

- fork/exec from libc: same quota-wise as the loader session, but with the additional awkwardness of old-style UNIX,
where fork() creates a full duplicate of the caller, at least until exec() is called
(so the calling app would need twice as much RAM/caps as it actually uses, even if it just spawns
something tiny like /bin/ls!).

- sandbox.h: couldn't find a "tutorial"-style usage of it via grep -r sandbox repos/, but maybe I should
dive right in and experiment until I grasp the gist of it, using the more complex use cases in the repos?

- anything I missed ?


3) Capability leak in pthread_create() (regression in 23.05?)

Seems my stack triggers some sort of corner case that it didn't before.
This is a little more mission-critical, though it's a small leak and I haven't seen it really impact
my apps so far.
The observed behavior: in most cases, pthread_create() behaves as it always has,
allocating a new capability for the created thread, which is returned when the thread ends.
But in some scenarios, the cap is not returned, so I'm leaking one cap per created thread;
or pthread_create() allocates _two_ caps instead of one. This seems to depend on how soon
the spawned thread is scheduled (i.e. before or after the libc returns from pthread_create()).
Could be that the corner case is in my stack, not in Genode, of course.
What angle should I take to debug this?
I should probably create a ticket to discuss the issue.

Cédric








