Dear Genode community,
it is the time of the year again to reflect and make plans for the foreseeable future. Hereby, I'd like to kick off our traditional brainstorming about Genode's road map for the year ahead of us.
What happened in the previous episode...
----------------------------------------
I have two perspectives to share. One is the look at Genode as our project, and the other one is my personal view.
Let's start with the first one. When reviewing the past twelve months, I am immensely proud of the accomplishments of our team. Together, we conquered the territory of GPU support that was ridden with uncertainties and seemed almost impenetrable when we started. Now, our Intel GPU multiplexer has landed in Sculpt like it always belonged there.

The next highlight was witnessing the puzzle pieces of our new Linux device-driver environment coming together, replacing former confusion and chaos with knowledge and order, ultimately uncovering the treasure of Linux drivers for Genode with very little friction.

The third highlight was witnessing the growing sophistication of Genode-native workloads, with the media features of the Chromium-based browser on 64-bit ARM being the most impressive example. Apart from the apparent functional benefits for Genode and Sculpt OS, this is the long-outstanding validation of some arguably risky design decisions we took many years ago, in particular the role and architecture of the VFS and its interplay with the libc.
When reviewing the road map for 2021 [1], some items remained uncovered. In particular, the seL4-related topics became stale. One year ago when I assembled the road map, there was a tangible prospect of a paying customer funding this work. However, those plans were repeatedly deferred and we don't know whether or when they will come to fruition. I was too optimistic. Also, there are some items that have seen healthy doses of progress - like the topics related to Ada/SPARK or Goa - but received less attention than anticipated. On the other hand, the four releases [2,3,4,5] of this year covered quite a few topics not advertised on the road map, e.g., webcam support, Xilinx Zynq, or RISC-V. Priorities shift. That's fine.
[1] https://genode.org/about/road-map
[2] https://genode.org/documentation/release-notes/21.02
[3] https://genode.org/documentation/release-notes/21.05
[4] https://genode.org/documentation/release-notes/21.08
[5] https://genode.org/documentation/release-notes/21.11
From my personal perspective, I've wholly enjoyed the work on the Pinephone as documented in my "Pine fun" article series at Genodians.org [6]. Much of the enjoyment came from the _process_, in particular the co-development of the new DDE-Linux together with Stefan, the mutual cross-validation of ideas and code, and our joint sense of great care. Granted, feature-wise, I missed my original goal of being able to issue phone calls with Genode on the Pinephone by now. But the collateral effects of the work in terms of tooling (dts_extract), interfaces (Genode-C-API), and documentation ("Genode Platforms") deserved the attention they got.
What's up for next year?
------------------------
I'm in full swing with the Pinephone. So I will keep moving full steam ahead. With the touchscreen and display tamed now, the next topics are telephony, mobile-data connectivity, Sculpt, browser, and a simple user interface. Over the year, I will increasingly focus on non-functional aspects as well, in particular power management (battery life) and quality of service (UI latency, audio latency). By the end of the year, I want to be able to casually use a video-chat solution like Jitsi on the phone.
Besides the Pinephone, I am planning to simplify and solidify Genode's base framework by gradually removing complexity (like C++ exceptions [7]), increasing the strictness of the coding style (like the aftermath of [8]), and attending the most-neglected corners of our issue tracker [9].
[7] https://genodians.org/nfeske/2021-11-26-attempt-no-exceptions
[8] https://genodians.org/nfeske/2021-12-07-wconversion
[9] https://github.com/genodelabs/genode/issues
What about you?
---------------
My point of view outlined above is only one way to look at the picture. Now I would be interested in your perspective!
What's your reflection of Genode's past year?
What are the topics you deem as most interesting to work on?
Do you already have tangible plans you can share with us?
Are there road blocks that stand in the way of your plans?
What is your vision of using Genode at the end of 2022?
I hope that this posting spawns a fruitful discussion of potential topics for the next episode. Please be considerate to avoid dropping mere proposals or wish lists. It's best to present suggestions together with actionable steps that you are willing to take.
In mid-January, I am going to update the official road map.
Cheers, Norman
My thoughts, in no particular order:
After several years, I still can't get over how modular Genode is and how take-no-prisoners no-compromise its policy is re. clean design and best practices. Seeing that kind of dedication keeps me motivated when I feel tired IRL.
If I get a chance before the end of 2022, I'll port (or help with porting) some packages, including easy ones like 1) fossil or 2) jam, but especially the fledgling-but-already-awesome 3) V-lang language. Could help with day to day developer life, who knows.
As to my pet project, in the next couple weeks I'm about to wrap up the one-before-last 'ticket' that's a pre-requisite before my software runs on Genode, so it looks like 2022 (can't believe it's 2022 already... time flies) will be the year I can resume selling my software and can scale back the "odd jobs" (freelance website programming etc), which will be nice for sure. Not to mention, it'll open the perspective of an 'alternative' desktop for Genode -- though I dare not say I'll get to that before 2023, given my velocity history <g>.
Once the dust settles, I'll be able to look at the Genode/SculptOS eco-system and packages more in depth, even sit back and test the Quake and DOSbox ports, clean up my tech debt (including upgrade to the latest tool-chain, at long last), which will also feel good! And I'll be in a better place to find out if things are lacking or in need of improvement somewhere. From where I stand currently (deep in VFS-related code) it's all super modular and well made, nothing to complain about, nothing to do but finish coding :-)
Cedric
Hi Cedric,
thank you for chiming in with your perspective and plans.
As to my pet project, in the next couple weeks I'm about to wrap up the one-before-last 'ticket' that's a pre-requisite before my software runs on Genode, so it looks like 2022 (can't believe it's 2022 already... time flies) will be the year I can resume selling my software and can scale back the "odd jobs" (freelance website programming etc), which will be nice for sure. Not to mention, it'll open the perspective of an 'alternative' desktop for Genode -- though I dare not say I'll get to that before 2023, given my velocity history <g>.
Picking up the alternative desktop topic would be really nice. It was fun to try out an earlier version of your HoG project some months back. I feel a bit remorseful for not having followed up with the idea of integrating it with Sculpt. All the more I appreciated our recent exchange of ideas around the VFS on the issue tracker. :-)
From where I stand currently (deep in VFS-related code) it's all super modular and well made, nothing to complain about, nothing to do but finish coding :-)
Thanks for the nice feedback and all the best for 2022!
Cheers Norman
On 12/24/21 07:37, ttcoder@netcourrier.com wrote:
My thoughts in no particular order :
After several years, I still can't get over how modular Genode is and how take-no-prisoners no-compromise its policy is re. clean design and best practices. Seeing that kind of dedication keeps me motivated when I feel tired IRL.
I feel exactly the same way.
If I get a chance before the end of 2022, I'll port (or help with porting) some packages, including easy ones like 1) fossil or 2) jam, but especially the fledgling-but-already-awesome 3) V-lang language. Could help with day to day developer life, who knows.
As to my pet project, in the next couple weeks I'm about to wrap up the one-before-last 'ticket' that's a pre-requisite before my software runs on Genode, so it looks like 2022 (can't believe it's 2022 already... time flies) will be the year I can resume selling my software and can scale back the "odd jobs" (freelance website programming etc), which will be nice for sure.
That's very exciting news - please keep us informed!
Not to mention, it'll open the perspective of an 'alternative' desktop for Genode -- though I dare not say I'll get to that before 2023, given my velocity history <g>.
This is very intriguing also. If you need an experimental/alpha tester for this, you know who to ask. :^)
Thanks!
John J. Karcher devuser@alternateapproach.com
Sent: Thursday, 23 December 2021, 19:05 From: "Norman Feske" norman.feske@genode-labs.com To: users@lists.genode.org Subject: Roadmap 2022
Besides the Pinephone, I am planning to simplify and solidify Genode's base framework by gradually removing complexity (like C++ exceptions [7]), increasing the strictness of the coding style (like the aftermath of [8]), and attending the most-neglected corners of our issue tracker [9].
[7] https://genodians.org/nfeske/2021-11-26-attempt-no-exceptions
Please do not remove exceptions. https://www.stroustrup.com/P0976-the-evils-of-paradigms.pdf At least leave in enough to test their handling regularly by using them. The reason for this request is that exceptions are the only open types in C++. That means I could add to them as an afterthought. That may be needed in multiplexers that could be configured with additional cases.
[8] https://genodians.org/nfeske/2021-12-07-wconversion
[9] https://github.com/genodelabs/genode/issues
In mid-January, I am going to update the official road map.
Cheers, Norman
-- Dr.-Ing. Norman Feske Genode Labs
https://www.genode-labs.com · https://genode.org
Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · Sitz Dresden Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth
Genode users mailing list users@lists.genode.org https://lists.genode.org/listinfo/users
Sent: Friday, 24 December 2021, 14:47 From: "Uwe" geno.de@public-files.de To: users@lists.genode.org Subject: Re: Roadmap 2022
Sent: Thursday, 23 December 2021, 19:05 From: "Norman Feske" norman.feske@genode-labs.com To: users@lists.genode.org Subject: Roadmap 2022
Besides the Pinephone, I am planning to simplify and solidify Genode's base framework by gradually removing complexity (like C++ exceptions [7]), increasing the strictness of the coding style (like the aftermath of [8]), and attending the most-neglected corners of our issue tracker [9].
[7] https://genodians.org/nfeske/2021-11-26-attempt-no-exceptions
Please do not remove exceptions. https://www.stroustrup.com/P0976-the-evils-of-paradigms.pdf At least leave in enough to test their handling regularly by using them. The reason for this request is that exceptions are the only open types in C++. That means I could add to them as an afterthought. That may be needed in multiplexers that could be configured with additional cases.
If you need to switch between error returns and exceptions, consider the Lippincott function: https://ideone.com/m2ZfHN
[8] https://genodians.org/nfeske/2021-12-07-wconversion
[9] https://github.com/genodelabs/genode/issues
In mid-January, I am going to update the official road map.
Cheers, Norman
In general, my interest is in estimating the feasibility of integrating features like containers (light-weight virtualisation), checkpointing and persistency, and application migration with modern microkernels. Maybe with verification, although that is a different story.
To do so, I made a preliminary port of golang and related stuff on top of Genode, to better understand the problems involved and as a base for container support (while I am not a Go programmer myself - I never wrote any programs in it, just fix others' ;).
At this moment it is fairly obvious to me that to support the things above I need support in Genode for some parts of generic virtualisation, like namespace-based isolation (read: the ability to have the same names/ids/etc. in different domains for objects and anything provided by Genode to user apps, together with an additional related API). At least for app snapshotting, migration, and persistency this is «a must». It is not so necessary for containers themselves - some platforms are supported without it, as well as without a dedicated layered FS (unions and similar, like aufs/btrfs/zfs/etc. - although it is good to have one).
Note: I suspect that having namespace virtualisation at the kernel level would give Genode some additional advantages, even in terms of security - just as happened in Linux with our proposals related to OpenVZ/namespaces/user beancounters (after some time it became clear that they are necessary for a modern OS). This is relatively cheap from the implementation and overhead points of view. Did you consider this option as a part of Genode's future?
Alexander
On 23 Dec 2021, at 21:05, Norman Feske norman.feske@genode-labs.com wrote:
Dear Genode community,
it is the time of the year again to reflect and make plans for the foreseeable future. Hereby, I'd like to kick off our traditional brainstorming about Genode's road map for the year ahead of us.
Hi Alexander,
it is interesting to learn more about the context of your work with Go.
You said that you are not a Go programmer yourself. But do you happen to have users of your Go runtime from whom you could get feedback?
Like namespace-based isolation (read: the ability to have the same names/ids/etc. in different domains for objects and anything provided by Genode to user apps, together with an additional related API). At least for app snapshotting, migration, and persistency this is «a must». It is not so necessary for containers themselves - some platforms are supported without it, as well as without a dedicated layered FS (unions and similar, like aufs/btrfs/zfs/etc. - although it is good to have one).
I think the two aspects OS-level virtualization and snapshotting/persistency should best be looked at separately.
Regarding OS-level virtualization, Genode's protection domains already provide the benefit of being light-weight - like namespaces when compared to virtual machines - while providing much stronger isolation. Each Genode component has its private capability space after all with no sharing by default. Hence, OS-level virtualization on Genode comes down to hosting two regular Genode sub systems side by side.
The snapshotting/persistency topic is not yet covered. But I see a rather clear path towards it, at least for applications based on Genode's libc. In fact, the libc already has the ability to replicate the state of its application as part of the fork mechanism. Right now, this mechanism is only used internally. But it could be taken as the basis for, e.g., serializing the application state into a snapshot file. Vice versa, similar to how a forked process obtains its state from the forking process, the libc could support the ability to import a snapshot file at startup. All this can be implemented in the libc without changing Genode's base framework.
That being said, there is an elephant in the room, namely how POSIX threads fit into the picture. How can the state of a multi-threaded application be serialized in a consistent way? That would be an interesting topic to research.
These are just my thoughts off the top of my head. I'm looking forward to seeing your steps in this direction.
Cheers Norman
Sent: Tuesday, 4 January 2022, 15:07 From: "Norman Feske" norman.feske@genode-labs.com To: users@lists.genode.org Subject: Re: Roadmap 2022
That being said, there is an elephant in the room, namely how POSIX threads fit into the picture. How can the state of a multi-threaded application be serialized in a consistent way? That would be an interesting topic to research.
I would think that it should be relatively simple to mark multi-threaded applications as such (at creation of the first thread) and to create an additional thread for snapshot purposes. That thread would wait after creation for a signal to start a snapshot. When it gets that signal, it does the opposite of yield(), monopolizing the CPU (for instance by inserting the equivalent of a directed yield() into all other threads), and with this monopolized CPU it performs the snapshot (using part of the fork() mechanism). After the snapshot is done, the thread goes back to waiting for the next snapshot signal, thereby ceasing to monopolize the CPU. This has the additional advantage that a reloaded snapshot will start in exactly the state that is needed to continue seamlessly, if the snapshot is resumed from this thread.
These are just my thoughts off the top of my head. I'm looking forward to seeing your steps in this direction.
Cheers Norman
Hi Norman, thanks for your answer. Some thoughts below.
it is interesting to learn more about the context of your work with Go.
A couple of years ago I had a project to implement docker support on some new microkernel OS. As a starting point I needed to try this on top of Genode (because the project owners did not have the OS sources available for me at that time). Initially I thought that I needed to implement integration of partitioning with containers, like having a single container per OS partition. In the end I needed to support only a set of containers inside a single OS partition, with a Linux emulation layer provided by the OS. Later I found that the main problem was not in fixing the kernel, drivers, etc. - the problem was that all docker support is implemented in golang. So, I needed to port a couple of million LOC written in Go (AKA docker support), and to start from porting the golang runtime itself (another 1.2M LOC of golang and C, which touches ALL available syscalls/services/etc. of the underlying OS and requires a good understanding of the small differences between different OS APIs). I had this work half-done for Genode and then switched back to the main OS (where I later finished everything - the port of the runtime and the port of docker inside a single partition).
At this moment I have returned from the old project and want to finish the undone work for Genode, as a testbed for the initial idea of integrating docker and OS partitions 1 <-> 1, probably using the libc port. I do not have formal customers for it, just my curiosity.
You said that you are not a Go programmer yourself. But do you happen to have users of your Go runtime from whom you could get feedback?
About users - not sure; I recently published my patches and do not have any feedback yet. Go is actively used by developers, so I hope that it will be easy to bring some application software to Genode (e.g., various HTTP-handling stuff). Anyway, the current lack of customers will not stop me from the second part of my research.
I have to compile and run inside Genode the docker support code - a couple of million golang lines which heavily use the system OS API, including POSIX and dialects.
So, I am considering having "go build" run natively inside Genode inside qemu. The first step was to have TCP support integrated with golang - done. Next will be native (non-cross) gccgo support.
Like namespace-based isolation (read: the ability to have the same names/ids/etc. in different domains for objects and anything provided by Genode to user apps, together with an additional related API). At least for app snapshotting, migration, and persistency this is «a must». It is not so necessary for containers themselves - some platforms are supported without it, as well as without a dedicated layered FS (unions and similar, like aufs/btrfs/zfs/etc. - although it is good to have one).
I think the two aspects OS-level virtualization and snapshotting/persistency should best be looked at separately.
Regarding OS-level virtualization, Genode's protection domains already provide the benefit of being light-weight - like namespaces when compared to virtual machines - while providing much stronger isolation. Each Genode component has its private capability space after all with no sharing by default. Hence, OS-level virtualization on Genode comes down to hosting two regular Genode sub systems side by side.
General note. Initially, when at SWsoft/Virtuozzo/Parallels we created container-based OS virtualisation (we called it a "virtual environment" in 2000), we assumed 3 main pillars (having in mind that we wanted to use it as a base for hosting in a hostile environment, with open, unlimited access from the Internet to the containers):
1. namespace virtualisation, not only to isolate resources but to be able to have the same pid and related resources in different containers (for Unix we thought about emulating an init process with pre-defined pid 1, at least)
2. file-system virtualisation to allow COW and transparent sharing of the same files (e.g. the apache executable between 100s of container instances) to preserve kernel memory and object space (as opposed to VMs, where you cannot efficiently share files and data structures between different VM instances) - the key to containers' high scalability and performance, and for docker also the key to the "encapsulation of changes" paradigm. Sharing a single kernel instance is a broad paradigm - it allows optimising kernel structure allocation, resource sharing, a single instance of the memory allocator, etc.
3. ALL resource limitations on a per-container basis (we called this user beancounters), which prevent any attempt to mount a DoS attack from one container against another or against the host.
Every container initially should be like a remotely-accessible complete instance of Linux with root access and an init process, but without the ability to load its own device drivers. We implemented this first for Linux, later for FreeBSD/Solaris (partially) and Windows (based on a hot-patching technique and their Terminal Server), and considered Mach/macOS Darwin (just experiments). For Linux and Windows it was a commercial-grade implementation and is still used by millions of customers.
Now all these features (maybe except the file systems, although zfs/btrfs/overlayfs/etc. have something similar) have become part of most commercial OSes available on the mass market. IMHO they can be cheaply implemented from the very beginning of an OS kernel's development - everything was in place, except the understanding of why this is necessary outside of simple testing environments for developers. Maybe it is also time for Genode to think in this direction?
Returning to Genode.
The reason for the existence of namespaces (ns) is not only isolation; it is a bit wider. One thing is the ability to disallow «manual construction» of object ids/handles/capabilities/references/etc. to access something which should not be visible at all.
For example, in ns-isolated containers I should not be able to send a signal to an arbitrary process by name (pid in our case) even if it exists in the kernel. Or vice versa - to use some pre-defined process ids to do something (e.g. Unix likes to attach orphans to pid 1 and later tries to enumerate them; this has to be emulated somehow when porting user-level software, e.g. for Linux docker this is important).
In the case of Genode, I can probably create and keep a capability (with a data pointer inside) and perform some operations with it, if I store it somewhere. If this capability were virtualised, then we would have an additional level of control over it (by creating pre-defined caps and an explicit level of limitation, even if it is intentionally shared during an initialisation procedure which could be part of legacy software being ported to Genode).
For better understanding, a use case: imagine that you want to port an application which uses a third-party library that initialises some exotic file descriptors. A good example is docker itself - when you exec a process inside a docker container, you typically don't want it to inherit your main process's open descriptors, including stdin/out (typically this is achieved via the CLOEXEC flag - but let's consider its absence; you may simply not know that the descriptors exist). Technically it is code in your application, but it was initialised by a third-party library linked to it, and you do not have an easy way to control it.
An ns implementation has simple rules to maintain group isolation, and it is not considered unnecessary even in the Linux kernel with its own capability set. I think that namespaces are a convenient way to handle such legacy-related questions, and they are worth having at the Genode level, where you already have wrappers around native low-level kernel calls.
And, for snapshotting (see the comment below), this is a must - I need to re-create all objects with the same id even if they already exist in other threads/sessions/processes, because the ids could be stored in «numerical form» inside user thread memory.
As for a file system like overlayfs - not sure; I assume that it is possible to port some known FS to Genode, but it is not a first-priority task (Windows docker does not have it).
For resource accounting and limitation - I have not tackled this topic at all for Genode.
The snapshotting/persistency topic is not yet covered. But I see a rather clear path towards it, at least for applications based on Genode's libc. In fact, the libc already has the ability to replicate the state of its application as part of the fork mechanism. Right now, this mechanism is only used internally. But it could be taken as the basis for, e.g., serializing the application state into a snapshot file. Vice versa, similar to how a forked process obtains its state from the forking process, the libc could support the ability to import a snapshot file at startup. All this can be implemented in the libc without changing Genode's base framework.
That being said, there is an elephant in the room, namely how POSIX threads fit into the picture. How can the state of a multi-threaded application be serialized in a consistent way? That would be an interesting topic to research.
I think we can follow the ideas developed in the CRIU patch for Linux [1]; no need to invent something too complex: "It can freeze a running container (or an individual application) and checkpoint its state to disk. The data saved can be used to restore the application and run it exactly as it was during the time of the freeze. Using this functionality, application or container live migration, snapshots, remote debugging, and many other things are now possible."
In short, they utilise existing Linux kernel syscalls like ptrace and add a very small subset of absent ones to enumerate process-related objects [2]. This does not mean that you need to have ptrace - it is just used as a kind of auxiliary interface to obtain info about processes; it can be implemented in different ways.
To stop (freeze) a set of related processes (a tree), even with POSIX, they use a feature (which can be considered part of ns virtualisation) known as cgroups [3]: "The freezer allows the checkpoint code to obtain a consistent image of the tasks by attempting to force the tasks in a cgroup into a quiescent state. Once the tasks are quiescent another task can walk /proc or invoke a kernel interface to gather information about the quiesced tasks. Checkpointed tasks can be restarted later should a recoverable error occur. This also allows the checkpointed tasks to be migrated between nodes in a cluster by copying the gathered information to another node and restarting the tasks there."
It seems that features similar to ns and cgroups should be the first part/base of a checkpoint/restore implementation. Of course, the part of serialisation related to fork/libc, as you mention, could also be another pillar.
In general, I think that to implement snapshotting we need to:
1. freeze the set of threads (or make them COW, e.g. for memory changes)
2. enumerate the threads
3. enumerate related objects/states (e.g. file descriptors/pipes/etc.)
4. enumerate virtual memory areas and related «shared resources» between threads
5. enumerate network stack/socket states (a bit different beast)
6. dump everything
For restore we need not only to create some objects with the same numerical ids (even the same memory layout); we also need an API to force every object into the same content/state and related security/ns, and to restore the «sharing» and parent/child relations, if any, of objects between different threads/processes/sessions/etc.
Related to this topic: we also need to be able to bring some drivers into the same state, because during checkpoint/restore we assume external connections are in a known state (e.g. imagine a video driver and an application which draws on the screen: the content of the video memory is part of the application's state, while it is not stored in the application). This is probably related to the restartable-drivers feature (and related fault-tolerance questions).
Note: by the way, one of the key problems of the CRIU patch at this moment is the inability to restore a graphical screen for X/etc. We can restore the related sockets, while the parts of the protocol exchange which need to be replayed are not known when you order a checkpoint. I think there are no people available now who know the real X protocol details necessary for such operations... but this is a different story, not directly related to Genode questions.
[1] https://criu.org/Main_Page
[2] https://criu.org/Checkpoint/Restore
[3] https://www.kernel.org/doc/Documentation/cgroup-v1/freezer-subsystem.txt
Sincerely, Alexander
Hi Alexander,
thanks for bringing up the discussion. I'm thrilled by the idea of hosting containerised apps on Genode. Apparently, I haven't dug into this topic as deeply as you already have, hence excuse my naive view on it. Maybe you can clarify where I'm missing some details.
I tried to read up on the container topic and had my own thoughts and ideas on how containers could end up on Genode. As far as I understood, a container is basically a filesystem image with some configuration of how to set things up. The container runtime will read the configuration and prepare everything depending on the target system before it launches the process defined by the container. After that, the started container is merely a standard process that has been encapsulated with namespaces, cgroups and other isolation mechanisms. The process performs syscalls just like a non-containerised process would do.
By the way, I found [1] particularly helpful for reading up on the topic and recommend this to anyone who is keen on following this discussion.
[1] https://developers.redhat.com/blog/2018/02/22/container-terminology-practica...
Thus, when thinking about running a container on Genode, I noticed we have most ingredients already in stock since a Genode component is a sandboxed process with its resource quota and local namespace.
Regarding the file system virtualisation, we have the VFS and can even host a shared VFS in a dedicated server component. I'm not sure about a copy-on-write feature, though.
In my (current) point of view, enabling containerised workloads on Genode probably requires three ingredients:
1. Implementing additional VFS plugins for mounting container images, overlays, and COW functionality.
2. Adding missing plugins for special file nodes in devfs, sysfs or procfs. This highly depends on what the particular container process expects, though.
3. Implementing a container runtime for Genode that sets up a sub-init to launch the container process with the appropriate VFS and helper components according to the container configuration.
Re. 3., I'm uncertain whether this is best approached from scratch or by porting existing runtimes such as runc or crun. The downside of the latter approach is that it requires us to provide all the Linux management interfaces such as cgroup, namespaces, etc. and map these to Genode sub-init configuration. Parsing the container configuration and applying the appropriate actions directly seems more natural to me at the moment.
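To make ingredients 1 and 2 a bit more concrete, here is a rough sketch of how such a VFS could be stacked. Note that this is only a thought experiment: the tar, ram, log and null plugins exist in today's VFS, but a true copy-on-write overlay plugin would still need to be written - the ram node below merely shadows newly created files, it does not copy modified ones.

```xml
<!-- hypothetical VFS stacking: read-only image plus writable layer -->
<vfs>
  <tar name="container_image.tar"/>      <!-- ingredient 1: the container image -->
  <ram/>                                 <!-- writable layer for new files (no real COW yet) -->
  <dir name="dev"> <log/> <null/> </dir> <!-- ingredient 2: special file nodes -->
</vfs>
```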
@Alexander: What do you think are the major roadblocks for running a first container image on Genode?
Again, please excuse my naive view on the matters. I feel I merely climbed Mount Stupid when it comes to containers.
Cheers Johannes
On Wed, 5 Jan 2022 19:23:19 +0000 Alexander Tormasov via users users@lists.genode.org wrote:
Hi Norman, thanks for the answer. Some thoughts below.
it is interesting to learn more about the context of your work with Go.
A couple of years ago I had a project to implement Docker support on a new microkernel OS. As a starting point I needed to try this on top of Genode (because the project owners did not have the OS sources available for me at that time). Initially I thought I would need to implement the integration of partitioning with containers, i.e., a single container per OS partition. In the end I only needed to support a set of containers inside a single OS partition, with a Linux emulation layer provided by the OS. Later I found that the main problem was not fixing the kernel, drivers, etc. - the problem was that all Docker support is implemented in Go. So I needed to port a couple of million LOC written in Go (AKA the Docker support), starting with the Go runtime itself (another 1.2M LOC of Go and C, which touches ALL available syscalls/services/etc. of the underlying OS and requires a good understanding of the small differences between OS APIs). I had this work half-done for Genode and then switched back to the main OS (where I later finished everything - the port of the runtime and the port of Docker inside a single partition).
Now I have returned from the old project and want to finish the remaining work for Genode as a testbed for the initial idea of a 1:1 integration of Docker and an OS partition, probably using the libc port. I do not have formal customers for it, just my curiosity.
You said that you are not a Go programmer yourself. But do you happen to have users of your Go runtime to get their feedback?
About users - not sure. I published my patches only recently and do not have any feedback yet. Go is actively used by developers, so I hope it will be easy to bring some application software to Genode (e.g., various HTTP-related things). Anyway, the current lack of customers will not stop me from the second part of my research.
I have to compile and run the Docker support code inside Genode - a couple of million lines of Go that heavily use the system OS API, including POSIX and its dialects.
So I am considering having "go build" run natively inside Genode inside QEMU. The first step was to have TCP support integrated with the Go runtime - done. Next will be native (non-cross) gccgo support.
Like namespace-based isolation (read: the ability to have the same names/IDs/etc. in different domains for objects and anything provided by Genode to user apps, together with additional related APIs). At least for app snapshotting, migration and persistency this is a must. Namespaces are not strictly necessary for containers themselves - some platforms support containers without them, and also without a dedicated layered FS (unions and the like, e.g., AuFS/btrfs/ZFS) - although it is good to have one.
I think the two aspects OS-level virtualization and snapshotting/persistency should best be looked at separately.
Regarding OS-level virtualization, Genode's protection domains already provide the benefit of being light-weight - like namespaces when compared to virtual machines - while providing much stronger isolation. Each Genode component has its private capability space, after all, with no sharing by default. Hence, OS-level virtualization on Genode comes down to hosting two regular Genode subsystems side by side.
General note. When we initially created container-based OS virtualisation at SWsoft/Virtuozzo/Parallels (we called it a "virtual environment" in 2000), we assumed three main pillars (having in mind that we wanted to use it as a base for hosting in a hostile environment with open, unlimited access from the Internet to the containers):
- namespace virtualisation, not only to isolate resources but to be able to have the same PID and related resources in different containers (for Unix, think of emulating an init process with the pre-defined PID 1, at least)
- file-system virtualisation to allow COW and transparent sharing of the same files (e.g., the Apache executable among hundreds of container instances) to preserve kernel memory and object space (as opposed to VMs, where you cannot efficiently share files and data structures between different VM instances) - the key to the high scalability and performance of containers, and for Docker also the key to its "encapsulation of changes" paradigm. Sharing through a single kernel instance is a broad paradigm - it allows optimising kernel structure allocation, resource sharing, a single instance of the memory allocator, etc.
- ALL resource limitations on a per-container basis (we called this "user beancounters"), which prevent any attempt to mount a DoS attack from one container against another or against the host.
Every container should initially look like a remotely accessible, complete instance of Linux with root access and an init process, but without the ability to load its own device drivers. We implemented this first for Linux, later for FreeBSD/Solaris (partially) and Windows (based on a hot-patching technique and their Terminal Server), and considered Mach/macOS Darwin (just experiments). For Linux and Windows it was a commercial-grade implementation and is still used by millions of customers.
Now all these features (maybe except the file system, although ZFS/btrfs/overlayfs/etc. have something similar) have become part of most commercial OSes on the mass market. IMHO they can be implemented cheaply from the very beginning of an OS kernel's development - everything was in place except an understanding of why this is necessary outside of simple testing environments for developers. Maybe it is also time for Genode to think in this direction?
Returning to Genode.
The reason for the existence of namespaces (ns) is not only isolation; it is a bit wider. One aspect is the ability to disallow the "manual construction" of object IDs/handles/capabilities/references/etc. to access something that should not be visible at all.
For example, in ns-isolated containers I should not be able to send a signal to an arbitrary process by name (PID, in our case) even if it exists in the kernel. Or vice versa - use some pre-defined process IDs to do something (e.g., Unix likes to attach orphans to PID 1 and later tries to enumerate them; this has to be emulated somehow when porting user-level software - for Linux Docker this is important).
In the case of Genode, I can probably create and keep a capability (with a data pointer inside) and perform some operations with it if I store it somewhere. If this capability were virtualised, we would have an additional level of control over it (by creating pre-defined caps and an explicit level of limitation, even if it is intentionally shared during an initialisation procedure that could be part of legacy software being ported to Genode).
For better understanding, a use case: imagine you want to port an application that uses a third-party library that initialises some exotic file descriptors. A good example is Docker itself - when you exec a process inside a Docker container, you typically don't want it to inherit your main process's open descriptors, including stdin/stdout (typically this is achieved via the CLOEXEC flag - but let's consider its absence; you may simply not know the descriptors exist). Technically it is code in your application, but it was initialised by a third-party library linked to it, and you have no easy way to control it.
A ns implementation has simple rules to maintain group isolation, and it is not considered unnecessary even in the Linux kernel with its own capability set. I think that namespaces are a convenient way to handle legacy-related questions, and worth having at the Genode level, where you already have wrappers around native low-level kernel calls.
And for snapshotting (see the comment below) this is a must - I need to re-create all objects with the same IDs even if they already exist in other threads/sessions/processes, because the IDs could be stored in "numerical form" inside user thread memory.
As for a file system like overlayfs - not sure. I assume it is possible to port some known FS to Genode, but it is not a first-priority task (Windows Docker does not have one).
For resource accounting and limitation - I have not tackled this topic at all for Genode.
The snapshotting/persistency topic is not yet covered. But I see a rather clear path towards it, at least for applications based on Genode's libc. In fact, the libc already has the ability to replicate the state of its application as part of the fork mechanism. Right now, this mechanism is only used internally. But it could be taken as the basis for, e.g., serializing the application state into a snapshot file. Vice versa, similar to how a forked process obtains its state from the forking process, the libc could support the ability to import a snapshot file at startup. All this can be implemented in the libc without changing Genode's base framework.
That being said, there is an elephant in the room, namely how POSIX threads fit into the picture. How can the state of a multi-threaded application be serialized in a consistent way? That would be an interesting topic to research.
I think we can follow the ideas developed for the CRIU patch for Linux [1]; no need to invent something too complex: It can freeze a running container (or an individual application) and checkpoint its state to disk. The saved data can be used to restore the application and run it exactly as it was at the time of the freeze. Using this functionality, application or container live migration, snapshots, remote debugging, and many other things become possible.
In short, they utilise existing Linux kernel syscalls like ptrace and add a very small set of missing ones to enumerate process-related objects [2]. This does not mean that you need ptrace - it is just used as a kind of auxiliary interface to obtain info about processes and could be implemented in different ways.
To stop (freeze) a set of related processes (a tree), even with POSIX threads, they use a feature (which can be considered part of ns virtualisation) known as cgroups [3]: The freezer allows the checkpoint code to obtain a consistent image of the tasks by attempting to force the tasks in a cgroup into a quiescent state. Once the tasks are quiescent, another task can walk /proc or invoke a kernel interface to gather information about the quiesced tasks. Checkpointed tasks can be restarted later should a recoverable error occur. This also allows the checkpointed tasks to be migrated between nodes in a cluster by copying the gathered information to another node and restarting the tasks there.
It seems that ns- and cgroup-like features should be the first part/base of a checkpoint/restore implementation. Of course, the serialisation part related to fork/libc, as you mention, could also be another pillar.
In general, I think that to implement snapshotting we need:
- freeze the set of threads (or make them COW, e.g., for memory changes)
- enumerate threads
- enumerate related objects/states (e.g., file descriptors/pipes/etc.)
- enumerate virtual memory areas and the related "shared resources" between threads
- enumerate network stack/socket states (a bit of a different beast)
- dump everything
For restore, we not only need to create objects with the same numerical ID values (even the same memory layout); we need an API to force every object to have the same content/state and related security/ns attributes, and to force (restore) the "sharing" and parent/child relations of objects, if any, between different threads/processes/sessions/etc.
Related to this topic: we also need to be able to bring some drivers into the same state, because during checkpoint/restore we assume the external connections are in a known state (e.g., imagine a video driver and an application that draws on the screen - the content of the video memory is part of the application's state even though it is not stored in the application). This is probably related to the restartable-drivers feature (and related fault-tolerance questions).
Note: by the way, one of the key problems of the CRIU patch at the moment is the inability to restore the graphical screen for X/etc. We can restore the related sockets, but the protocol exchanges that would need to be replayed are not known when you request a checkpoint. I think there are no people available now who know the real X protocol details necessary for these operations... but this is a different story, not directly related to Genode questions.
[1] https://criu.org/Main_Page
[2] https://criu.org/Checkpoint/Restore
[3] https://www.kernel.org/doc/Documentation/cgroup-v1/freezer-subsystem.txt
Sincerely, Alexander
Genode users mailing list users@lists.genode.org https://lists.genode.org/listinfo/users
Hi Johannes,
Sent: Thursday, 6 January 2022, 13:51 From: "Johannes Schlatow" johannes.schlatow@genode-labs.com To: users@lists.genode.org Subject: Re: Roadmap 2022
Hi Alexander, Thus, when thinking about running a container on Genode, I noticed we have most ingredients already in stock since a Genode component is a sandboxed process with its resource quota and local namespace.
Regarding the file system virtualisation, we have the VFS and can even host a shared VFS in a dedicated server component. I'm not sure about a copy-on-write feature, though.
In my (current) point of view, enabling containerised workloads on Genode probably requires three ingredients:
- Implementing additional VFS plugins for mounting container images, overlays, and cow functionality.
- Adding missing plugins for special file nodes in devfs, sysfs or procfs. This highly depends on what the particular container process expects, though.
- Implementing a container runtime for Genode that sets up a sub-init to launch the container process with the appropriate VFS and helper components according to the container configuration.
I think noux already implements all that is needed to run containers. Only the file format is different. So you need a program to translate between these. It could even be integrated into the download tool. So the server sends the canonical format and the tool stores it in translated form.
Cheers Johannes
Hello Uwe,
On 06.01.22 14:15, Uwe wrote:
I think noux already implements all that is needed to run containers. Only the file format is different. So you need a program to translate between these. It could even be integrated into the download tool. So the Server sends the canonical format an the tool stores in translated form.
I perceive your stream of postings as disrespectful and disoriented.
Please stop pestering this mailing list with noise. The postings sent in the name of your faceless pseudonym are no positive contribution by any stretch. They are hardly distinguishable from gibberish generated by a chat bot.
This mailing list is not the place to run social experiments. It is a place for positively-spirited conversation.
You chose to largely ignore my advice about the mailing-list etiquette in my email of September 30. If you don't stop spamming, we will have to resort to banning your email address.
Norman
Hi Johannes,
As far as I understood, a container is basically a filesystem image with some configuration of how to set things up. The container runtime will read the configuration and prepare everything depending on the target system before it launches the process defined by the container.
Yes, except that if we want all the infrastructure to be applicable (all these container management tools like Kubernetes etc.), then we need to support the appropriate API at the host level, which in turn requires 1. Go support, 2. some containerisation features like controllable communication between containers, 3. COW at the file-system level (optional).
After that, the started container is merely a standard process that has been encapsulated with namespaces, cgroup and other isolation mechanisms. The process performs syscalls just like a non-containerised process would do.
Definitely so, although there is something "around" the process - e.g., a way to execute a process inside an existing container, error handling, etc.
Thus, when thinking about running a container on Genode, I noticed we have most ingredients already in stock since a Genode component is a sandboxed process with its resource quota and local namespace.
Partially (mostly) yes, although some surrounding infrastructure is missing.
Regarding the file system virtualisation, we have the VFS and can even host a shared VFS in a dedicated server component. I'm not sure about a copy-on-write feature, though.
The main idea of sharing was scalability. What happens if we try to run the same executable (e.g., Apache with 10 MB of code) in two different containers? With VM-like containers, we will most probably not share the code pages (e.g., on Windows some DLL code pages contain variables, so even for the same executable the page contents could differ). Imagine you have 100 containers with Apache: they will eat 10 MB x 100 = 1 GB of RAM just for code pages, while it could be only 10 MB.
Another problem is memory distribution. Imagine that you have, for example, a kernel object descriptor of 25 bytes, and a lot of them. If you have a single OS image, then you have a single memory allocator (on Linux: slab/slub/etc.) and can store object instances belonging to different containers on the same memory page.
If you have your own copy of everything, then again you will not only inefficiently waste kernel memory space on unused tails but will also spend memory bandwidth, etc.
If we want to share files effectively, they should be visible with the same "inode" (or similar, depending on the file system), so the file system should be visible from every container via a single FS instance. It should handle COW as well.
But again, the FS is not the first priority - e.g., the Windows version of native (non-Linux) containers does not have any dedicated FS (it just uses slow layers on top of NTFS).
In my (current) point of view, enabling containerised workloads on Genode probably requires three ingredients:
- Implementing additional VFS plugins for mounting container images,
overlays, and cow functionality.
Not 100% necessary - you can just use (slowly!) tar, which is supported by Docker.
- Adding missing plugins for special file nodes in devfs, sysfs or
procfs. This highly depends on what the particular container process expects, though.
This requires an answer to the question: which model do you want to utilise? If the Linux one, then yes: /proc, /sys, /dev and so on. But my porting experience shows that this is a tricky path - if you try to pretend to be Linux, you need a full-size emulation of Linux facilities, APIs, etc. Every time you try to do something, it forces you down the tricky Linux way. For the Go runtime I chose NetBSD as a base, as it is less advanced, adding some features from other systems. E.g., a container could use some exotic TCP or fcntl options to do something.
IMHO there is no simple answer - always a pro/con trade-off.
- Implementing a container runtime for Genode that sets up a sub-init
to launch the container process with the appropriate VFS and helper components according to the container configuration.
Again, the same question as above. Typically you could use something like tinit (tiny init) for such purposes, although it is not mandatory and many apps will work without it. But you need to understand what happens to the child processes inside the container - who will own them after the death of the parent (or this should not happen, and you can use the app itself as a pseudo-init).
Re. 3., I'm uncertain whether this is best approached from scratch or by porting existing runtimes such as runc or crun. The downside of the
It is too big to be rewritten. I considered both ways and found that for me it is easier to port; I have really extensive porting experience.
latter approach is that it requires us to provide all the Linux management interfaces such as cgroup, namespaces, etc. and map these to Genode sub-init configuration. Parsing the container configuration and applying the appropriate actions directly seems more natural to me at the moment.
IMHO it is easier to tailor (cut down) the appropriate code from the existing Go sources for different platforms than to write it from scratch.
@Alexander: What do you think are the major road blocks for running a first container image on Genode?
It depends on the definition of "run". For me, to run a Docker container in my first project on a new OS, I ported three coupled apps: runc, containerd, and the Docker CLI. They are connected via Unix sockets. Then you have an API to create/delete/manipulate containers.
I don't see any principal roadblocks, but I think that having a Linux-compatible API for ns and cgroups would help a lot - although again, it is not mandatory.
We need to invent a way to manipulate sets of processes (Unix-style signals are virtually absent on Genode), to connect and reconnect pipes (Unix sockets), and to run something chroot()-like (is anything like this available on Genode?).
I assume there is a lot of work related to tuning/utilisation of some features used for container manipulation (e.g., file-descriptor flags, options related to exec/fork, and similar), although Windows containers definitely exist without them (they use the native Windows API).
But all problems are probably solvable, IMHO. Maybe we need to implement something inside the libc port as well.
Alexander
Hi Alexander,
thanks for taking the time to reply.
After that, the started container is merely a standard process that has been encapsulated with namespaces, cgroup and other isolation mechanisms. The process performs syscalls just like a non-containerised process would do.
definitely so. while there are something «around» process - e.g. a way to execute process inside existing container, error handling/etc
In my view, a container maps in Genode to a subsystem (comprising a VFS server and other infrastructure). Hence, starting a process inside an existing container should be pretty straightforward.
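As a thought experiment, such a container-as-subsystem could be expressed as an init configuration along these lines. This is only a sketch: the component names, cap/RAM quotas, and routes are purely illustrative, although the vfs server, its tar/ram plugins, and the File_system routing shown here follow existing Genode conventions.

```xml
<!-- illustrative sub-init snippet: one container = one subsystem,
     consisting of a private VFS server and the containerised process -->
<start name="container_vfs" caps="120">
  <binary name="vfs"/>
  <resource name="RAM" quantum="16M"/>
  <provides> <service name="File_system"/> </provides>
  <config>
    <vfs> <tar name="container_image.tar"/> <ram/> </vfs>
    <default-policy root="/" writeable="yes"/>
  </config>
</start>

<start name="container_app" caps="300">
  <resource name="RAM" quantum="64M"/>
  <config>
    <vfs> <fs/> <dir name="dev"> <log/> </dir> </vfs>
    <libc stdout="/dev/log" stderr="/dev/log"/>
  </config>
  <route>
    <!-- the app sees only its private "root" file system -->
    <service name="File_system"> <child name="container_vfs"/> </service>
    <any-service> <parent/> </any-service>
  </route>
</start>
```

Starting another process "inside" the container would then amount to adding a further start node routed to the same container_vfs.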
Regarding the file system virtualisation, we have the VFS and can even host a shared VFS in a dedicated server component. I'm not sure about a copy-on-write feature, though.
main idea of sharing was scalability. What happens if we will try to run the same executable (e.g. Apache with 10M code) in 2 different containers ? if this is VM like containers then with big probability we will not share the code pages (e.g. for Windows some of the DLL code pages do contain variables, so, even for the same executable pages content could be different). imagine you have 100 containers with apache. they will eat 10m x 100 = 1GB of ram just for code pages. while could only 10m
I believe sharing code pages in Genode would be a matter of the parent component who sets up the children's address spaces. Currently, we use the sandbox library for this.
Another problem is a memory distribution. imagine that you have, for example, kernel object descriptor of 25 bytes, and a lot of them. if you have single os image then you have single memory allocator (if this is linux- slab/slub/etc) when you could store object instances related to different containers on the same memory page.
if you have own copy of everything - again, you will not just only inefficiently waste the kernel memory space for unused tails, but also will spend memory bandwidth/etc.
if we want to share effectively files they should be visible with the same «inode» (or similar, depending upon a file system) then instance of file system should be visible from every container via single FS instance. it should handle COW as well.
I think this is exactly what a VFS server component does. It provides a File_system service to which multiple components can connect.
- Implementing a container runtime for Genode that sets up a
sub-init to launch the container process with the appropriate VFS and helper components according to the container configuration.
again, same question like above. typically you could use something like tinit (tiny init) for such purposes, while it is not mandatory and for many apps it will work without. but you need to understand what will be with child processes inside container, who will own them after death of parent (or this should not happens and you can use app itself as pseudo init).
Sorry, I was not crystal clear in my terminology. By "sub-init", I meant Genode's init component that we use for spawning subsystems. Honestly, I haven't spent any thought on multi-process containers. I had the impression that most commonly a container merely runs a single process, i.e. does not spawn new processes on its own.
Best regards Johannes
Hi Johannes, thanks for sharing your ideas.
definitely so. while there are something «around» process - e.g. a way to execute process inside existing container, error handling/etc
In my view, a container maps in Genode to a subsystem (comprising a VFS server and other infrastructure). Hence, starting a process inside an existing container should be pretty straightforward.
…
I believe sharing code pages in Genode would be a matter of the parent component who sets up the children's address spaces. Currently, we use the sandbox library for this.
I assume that this sharing is implemented at the boundary between the file system and the page cache (at least this is true for Linux/Unix and Windows).
In that case we would need a single VFS server with its own cache/page mapping for files shared between different container instances (subsystems), not only between children of one parent? Is this true for the current implementation of [single VFS+FS server] <=> [multiple subsystems]?
if we want to share effectively files they should be visible with the same «inode» (or similar, depending upon a file system) then instance of file system should be visible from every container via single FS instance. it should handle COW as well.
I think this is exactly what a VFS server component does. It provides a File_system service to which multiple components can connect.
Do you have an example implementation combining a VFS+FS server and a set of subsystems (at least two) connected to the single server instance?
- Implementing a container runtime for Genode that sets up a
sub-init to launch the container process with the appropriate VFS and helper components according to the container configuration.
again, same question like above. typically you could use something like tinit (tiny init) for such purposes, while it is not mandatory and for many apps it will work without. but you need to understand what will be with child processes inside container, who will own them after death of parent (or this should not happens and you can use app itself as pseudo init).
Sorry, I was not crystal clear in my terminology. By "sub-init", I meant Genode's init component that we use for spawning subsystems. Honestly, I haven't spent any thought on multi-process containers. I had the impression that most commonly a container merely runs a single process, i.e. does not spawn new processes on its own.
This is not exactly true. While containers were initially developed with that idea, they later became more complex.
Imagine a build container - it runs make inside (which forks gcc, which in turn forks cpp, then cc1, then as, then ld, and maybe ar/ranlib/objcopy/etc.), and if you have make -j4, then make will run 4 compilations in parallel (if the Makefile allows). They must use the same file-system instance (volume) to process intermediate files like .c -> .i -> .s -> .o -> .out...
Returning to Genode and subsystems: how is this implemented at the moment, e.g., how can a (native) make run inside Genode's noux? Probably it uses the libc fork()/exec()/etc. together with pthreads? Do the processes (threads, in Genode terminology) share something by default after start? Can I run a bunch of "processes" inside a single Genode subsystem that share some services from outside (like VFS+FS)?
A more interesting question - do they share a single swap-to-disk service if needed? Or does every subsystem have its own pager with its own page file?
I think that if I have example implementations of these features in a way that suits Genode's subsystem-per-container model, then we can have Docker on Genode relatively fast.
Sincerely, Alexander
Hi Alexander,
I believe sharing code pages in Genode would be a matter of the parent component who sets up the children's address spaces. Currently, we use the sandbox library for this.
I assume that this sharing implemented on the edge between file system and page cache (at least this is true for Linux/unix and Windows)
I'm not sure whether I can follow your thoughts here exactly. In Genode, the existence of a component (i.e., process) is not necessarily tied to a file system. Typically, the binary of a component is either loaded from a boot image or indirectly loaded from a persistent or volatile file system. By indirectly I mean that the binary is accessed as a ROM module (i.e., via a Rom_session) rather than as a file. The init component that is normally used for instantiating subsystems, and which uses the sandbox library, is therefore not aware of a file system; it must only be provided a means to access ROM modules. In principle, I believe it should be possible to share code pages of the same ROM module between multiple children, but this is currently not implemented in the sandbox library. My previous reply was probably a bit unclear in this matter.
in that case we need to have single VFS server with own cache/page mapping for files being shared between different instance of containers (subsystems), not only for children’s? is it true for current implementation of [single VFS+FS server] <=> [[multiple subsystems]]?
I'm afraid you lost me. In Genode, a file system is accessed via a File_system session. This session provides an API for typical file/directory operations (open/create, symlink, watch, move). File content is transferred via a packet stream (cf. the Genode Foundations book). A VFS server would access, e.g., a persistent file system and deliver its contents to its own clients, which could be separate subsystems. I see two places for caching here: First, the VFS server could cache some file content so that it can be delivered to multiple clients without transferring it from the block device multiple times. Second, the clients can perform their own (local) caching. Since I'm not familiar with the implementation internals though, I don't know to what extent such mechanisms are already implemented.
if we want to share effectively files they should be visible with the same «inode» (or similar, depending upon a file system) then instance of file system should be visible from every container via single FS instance. it should handle COW as well.
I think this is exactly what a VFS server component does. It provides a File_system service to which multiple components can connect.
do you have an example of implementation of combination of VFS+FS server and a set of subsystems (at least 2) connected to the single server instance?
Maybe a good start would be to have a look at `repos/ports/run/bash.run`. It comprises a VFS server that is accessed by a bash component via the file-system session and via a ROM session (with vfs_rom in between) for accessing binaries. It is pretty straightforward to extend this example to contain two bash components.
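Such an extension could look roughly like the following untested sketch, loosely modelled on bash.run: two bash instances as clients of one VFS server, i.e., sharing a single file-system instance. Names, quotas, and the omitted ROM-session plumbing (vfs_rom) are illustrative.

```xml
<!-- one VFS server shared by two subsystems via File_system sessions -->
<start name="vfs" caps="120">
  <resource name="RAM" quantum="12M"/>
  <provides> <service name="File_system"/> </provides>
  <config>
    <vfs> <tar name="bash.tar"/> <ram/> </vfs>
    <default-policy root="/" writeable="yes"/>
  </config>
</start>

<start name="bash_one" caps="400">
  <binary name="/bin/bash"/>
  <resource name="RAM" quantum="28M"/>
  <route>
    <service name="File_system"> <child name="vfs"/> </service>
    <any-service> <parent/> </any-service>
  </route>
</start>

<start name="bash_two" caps="400">
  <binary name="/bin/bash"/>
  <resource name="RAM" quantum="28M"/>
  <route>
    <service name="File_system"> <child name="vfs"/> </service>
    <any-service> <parent/> </any-service>
  </route>
</start>
```

Both clients then see the same directory tree served by the vfs component, which answers Alexander's question about a single FS instance visible from multiple subsystems.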
- Implementing a container runtime for Genode that sets up a
sub-init to launch the container process with the appropriate VFS and helper components according to the container configuration.
Again, the same question as above. Typically you could use something like tinit (tiny init) for such purposes, though it is not mandatory and many apps will work without it. But you need to understand what happens to child processes inside the container - who will own them after the death of their parent (or whether this should not happen at all, so that the app itself can be used as a pseudo init).
Sorry, I was not crystal clear in my terminology. By "sub-init", I meant Genode's init component that we use for spawning subsystems. Honestly, I haven't given any thought to multi-process containers. I had the impression that most commonly a container merely runs a single process, i.e. does not spawn new processes on its own.
This is not exactly true. While containers were initially developed with that idea, they later became more complex.
Imagine a build container: it runs make inside (which forks gcc, which in turn forks cpp, then cc1, then as, then ld, and maybe ar/ranlib/objcopy/etc.), and with make -j4, make will run 4 parallel compilations (if the Makefile allows). They must use the same file-system instance (volume) to process intermediate files like .c -> .i -> .s -> .o -> .out...
Returning to Genode and subsystems: how is this implemented at the moment, e.g., how can a native make run inside Genode's noux? Probably it uses libc fork()/exec()/etc. together with pthreads? Do the processes (threads in Genode terminology) share something by default after start?
I'm not familiar with the implementation of the pthreads library or noux. The latter is basically retired (see release notes 20.05) and superseded by Genode's C runtime and the VFS server. Yet, the C runtime transparently implements fork/execve. Following the recursive system structure, the child processes would have a similar environment as the parent process. Yet, I'm not familiar with the defaults.
Can I run a bunch of «processes» inside Genode in a single subsystem which share some services from outside (like VFS+FS)?
That's basically the default due to the recursive system structure.
A more interesting question: do they share a single swap-to-disk service if needed, or does every subsystem have its own pager with its own page file?
I'm not aware of any swapping to disk feature in Genode.
I think that if I have example implementations of these features in a way that suits Genode's subsystem-per-container model, then we can have Docker on Genode relatively fast.
That sounds superb.
Cheers Johannes
Hi Johannes,
In that case we need to have a single VFS server with its own cache/page mapping for files being shared between different container instances (subsystems), not only for its children? Is this true for the current implementation of [single VFS+FS server] <=> [multiple subsystems]?
I'm afraid you lost me. In Genode, a file system is accessed via a File_system session. This session provides an API for typical file/directory operations (open/create, symlink, watch, move). File content is transferred via a packet stream (cf. Genode Foundations Book). A VFS server would access e.g. a persistent file system and deliver its contents to its own clients, which could be separate subsystems. I see two places for caching here: First, the VFS server could cache some file content so that it can be delivered to multiple clients without transferring it from the block device multiple times. Second, the clients can perform their own (local) caching. Since I'm not familiar with the internal implementation though, I don't know to what extent such mechanisms are already implemented.
Another question here is how to provide access rights (different for different clients) that use the same FS server. Do you have something like ACLs applied to the file system?
Or is it just borrowed from, e.g., the ext2 implementation (one needs to provide /etc/passwd and /etc/group together with chmod/chown as separate files and utilities)? I see some inline implementation of similar files in ssh_server.run...
Do you have your own or an external auth mechanism, like an LDAP server/RADIUS/etc.?
Sincerely, Alexander
Hi Alexander,
In that case we need to have a single VFS server with its own cache/page mapping for files being shared between different container instances (subsystems), not only for its children? Is this true for the current implementation of [single VFS+FS server] <=> [multiple subsystems]?
I'm afraid you lost me. In Genode, a file system is accessed via a File_system session. This session provides an API for typical file/directory operations (open/create, symlink, watch, move). File content is transferred via a packet stream (cf. Genode Foundations Book). A VFS server would access e.g. a persistent file system and deliver its contents to its own clients, which could be separate subsystems. I see two places for caching here: First, the VFS server could cache some file content so that it can be delivered to multiple clients without transferring it from the block device multiple times. Second, the clients can perform their own (local) caching. Since I'm not familiar with the internal implementation though, I don't know to what extent such mechanisms are already implemented.
Another question here is how to provide access rights (different for different clients) that use the same FS server. Do you have something like ACLs applied to the file system?
The VFS server has no notion of users or ACL, yet it is possible to provide different parts of the VFS to different clients/sessions. This is achieved by specifying a <policy>, which sets the root directory for the session and whether write operations are permitted. This basically provides per-directory access control.
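For illustration, a hypothetical VFS-server `<config>` along these lines (directory names and session labels are invented) would give each client a different root directory and different write permissions:

```xml
<config>
	<vfs>
		<dir name="alice">  <ram/> </dir>
		<dir name="shared"> <ram/> </dir>
	</vfs>
	<!-- sessions are matched against their label -->
	<policy label_prefix="container_a" root="/alice"  writeable="yes"/>
	<policy label_prefix="container_b" root="/shared" writeable="no"/>
</config>
```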
Or is it just borrowed from, e.g., the ext2 implementation (one needs to provide /etc/passwd and /etc/group together with chmod/chown as separate files and utilities)? I see some inline implementation of similar files in ssh_server.run...
Do you have your own or an external auth mechanism, like an LDAP server/RADIUS/etc.?
Natively, there is no notion of users in Genode. Instead, access control is conducted on a per-session basis. On the one hand, the init component takes care of routing a particular session request to a certain child component (or parent). On the other hand, the child providing the service may further allow the specification of session policies (as mentioned above) so that different clients receive different permissions.
Best Johannes
Hi Johannes, thanks for the clarification.
Do you have something like ACLs applied to the file system?
The VFS server has no notion of users or ACL, yet it is possible to provide different parts of the VFS to different clients/sessions. This is achieved by specifying a <policy>, which sets the root directory for the session and whether write operations are permitted. This basically provides per-directory access control.
Do you have your own or an external auth mechanism, like an LDAP server/RADIUS/etc.?
Natively, there is no notion of users in Genode. Instead, access control is conducted on a per-session basis. On the one hand, the init component takes care of routing a particular session request to a certain child component (or parent). On the other hand, the child providing the service may further allow the specification of session policies (as mentioned above) so that different clients receive different permissions.
I want Genode + a low-level OS (e.g. NOVA or seL4) to be integrated with existing container-related environments. Most of these things assume some kind of per-user control.
Maybe I can ask my question in a different format: what is the best way to do such an integration?
A simplistic approach is what I saw in the implementation of ssh_server.run - it just creates an «inline» plain-text fake user+password (with a non-fake crypto key).
In standard Unix/Linux/etc., during container creation I use some credentials for Docker and for file access simultaneously. Docker suggests keeping them outside (while they can be held inside) [1]: ... Credentials store: The Docker Engine can keep user credentials in an external credentials store, such as the native keychain of the operating system. Using an external store is more secure than storing credentials in the Docker configuration file.
To use a credentials store, you need an external helper program to interact with a specific keychain or external store. Docker requires the helper program to be in the client’s host $PATH.
This is the list of currently available credentials helpers and where you can download them from:
• D-Bus Secret Service: https://github.com/docker/docker-credential-helpers/releases
• Apple macOS keychain: https://github.com/docker/docker-credential-helpers/releases
• Microsoft Windows Credential Manager: https://github.com/docker/docker-credential-helpers/releases
• pass: https://github.com/docker/docker-credential-helpers/releases
…
To implement a Docker container I need to answer both questions: - what access control and credentials I will use for the underlying file system (it can generate endless problems if treated the wrong way - like failed script execution/etc.), and - how access-control info should be provided to Docker itself (at least in the form of root/non-root users, or keychains/etc.), see [1].
Note: In theory there is a third question to be answered - "how will the container store secure third-party data" - but it is different from the first two above and can be answered by applications later. As I understand it, the encrypted block storage (CBE) is a movement in this direction?
So, my question is: I do not like the idea of keeping - even for tests - «a+rwx» mode for files and plain-text users/passwords stored in run files. Is there anything better than that available for a prototype, or do I need to keep this insecure approach for now, for both file-system and container permissions?
Note: I understand that internally a system based on Genode will be significantly more secure by itself. Anyway, we need to consider the whole system, including the external clients used by users to access and manage the system.
[1] https://docs.docker.com/engine/reference/commandline/login/
Sincerely, Alexander
Hi Alexander,
I want Genode + a low-level OS (e.g. NOVA or seL4) to be integrated with existing container-related environments. Most of these things assume some kind of per-user control.
Maybe I can ask my question in a different format: what is the best way to do such an integration?
That is a question I don't have a satisfying answer for at the moment. What I understand from your explanations is that you want to have some sort of user authentication by which the Docker engine decides what permissions the user gets for starting containers. In other words, the user's permissions determine the view the user gets on a shared file system.
I believe I would approach it in a way that maps users to File_system sessions. This will not be a direct translation of file-based ACLs, but it will allow multiple users to share a certain directory. A container may also open multiple File_system sessions for different users, by which you should be able to control access permissions on the file system. Yet, I have no particular idea at the moment of how a chmod/chown done by a container could be emulated with this approach, since it would need to modify the session policies of the VFS server.
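As a sketch of how this could look from the container's side, a component's VFS configuration can mount several File_system sessions under distinct labels, so that each session is matched by a different server-side policy (labels and mount points below are invented for illustration):

```xml
<vfs>
	<!-- each <fs> node opens a separate File_system session;
	     the label selects the matching <policy> at the server -->
	<dir name="home">   <fs label="user_alice"/> </dir>
	<dir name="shared"> <fs label="shared_rw"/>  </dir>
</vfs>
```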
By the way, I recommend you have a look at Martin's article series about the VFS: http://genodians.org/m-stein/2021-06-21-vfs-1
Cheers Johannes
Hello Alexander,
Sent: Sunday, 02 January 2022 at 15:12 From: "Alexander Tormasov via users" users@lists.genode.org To: "Genode users mailing list" users@lists.genode.org Cc: "Alexander Tormasov" a.tormasov@innopolis.ru Subject: Re: Roadmap 2022
At this moment it is somehow obvious to me that, to support the things above, I need support in Genode for some parts of generic virtualisation, like namespace-based isolation (read: the ability to have the same names/IDs/etc. in different domains for objects and anything provided by Genode to user apps, together with additional related APIs). At least for app snapshotting, migration, and persistency this is «a must». They are not strictly necessary for containers themselves - some platforms are supported without them, as well as without a dedicated layered FS (unions and the like, e.g. aufs/btrfs/zfs/etc. - though it is good to have one).
Note: I suspect that having namespace virtualisation at the kernel level will give some additional advantages for Genode even in terms of security - the same as happened in Linux with our proposals related to OpenVZ/namespaces/user beancounters (after some time it became clear that they are necessary for a modern OS). This is relatively cheap from the implementation and overhead points of view. Did you consider this option as a part of Genode's future?
I think your mental image of Genode is at odds with reality. Genode is not another Linux (or comparable OS). There are no processes. The most process-like entity in Genode is the protection domain (PD). But the PD is already virtualized. That makes it like a container with only one process. To share a namespace between two PDs, you have to explicitly configure that sharing. There are libraries that help you with that and even contain some default naming of capabilities. But mostly you will have to set up the sharing of a namespace by copying capabilities between PDs.
Alexander
On 23 Dec 2021, at 21:05, Norman Feske norman.feske@genode-labs.com wrote:
Dear Genode community,
it is the time of the year again to reflect and make plans for the foreseeable future. Hereby, I'd like to kick off our traditional brainstorming about Genode's road map for the year ahead of us.
Genode users mailing list users@lists.genode.org https://lists.genode.org/listinfo/users
Hello,
my main work topics of 2021 have been the enabling of multimedia support in the Falkon web browser, the tool chain update and, most recently, the ongoing optimization of audio support for VirtualBox 6.
I'm pretty impressed by Falkon's multimedia abilities on Genode already, even though some optimization is still needed and something I intend to continue working on in 2022, especially with the planned execution on the PinePhone in mind. More PinePhone-related personal work topics could involve porting of the 'Morph' mobile web browser to Genode and exploring the possibilities of porting QML-based Sailfish OS or Ubuntu Touch applications and making it possible to build those with Goa.
Christian
Hi Christian,
my main work topics of 2021 have been the enabling of multimedia support in the Falkon web browser, the tool chain update and, most recently, the ongoing optimization of audio support for VirtualBox 6.
your contributions in 2021 have been nothing short of amazing. I still remember my eyes popping out watching you debug a race condition on Genode/ARM64 in JIT-compiled code generated by Chromium from a JavaScript blob delivered by youtube.com. What a super-human level of patience! :-)
I'm pretty impressed by Falkon's multimedia abilities on Genode already, even though some optimization is still needed and something I intend to continue working on in 2022, especially with the planned execution on the PinePhone in mind. More PinePhone-related personal work topics could involve porting of the 'Morph' mobile web browser to Genode and exploring the possibilities of porting QML-based Sailfish OS or Ubuntu Touch applications and making it possible to build those with Goa.
There is so much in your plans to look forward to. I think the latter point has a lot of potential from a community perspective because - if it works out as anticipated - it would nicely bridge the gap between app developers and the Genode-based phone. So our platform won't need to stay insular.
Cheers Norman
A happy and healthy new year to all of you fellow genodians!
Here are my thoughts regarding the past year and upcoming roadmap:
What's your reflection of Genode's past year?
I share the enthusiasm Norman described regarding our joint efforts to facilitate the porting of existing device drivers - especially Linux device drivers - to Genode. Sometimes painful but nonetheless motivating work. Also, I did not expect that so many drivers would be ported using the new approach in such a short period of time.
Nevertheless, this line of work prevented me from achieving the unification of the platform-driver API across different architectures in time. Although it is almost accomplished, the original goal was to finish it in 21.05. Inwardly, I dreamed of having a working WiFi card in Sculpt on the MNT Reform 2 by this time, which is not the case.
Anyway, looking at the great steps regarding GPU support and the heavy workloads that, e.g., the Falkon web browser is capable of managing on top of Genode - that is great!
I am deeply impressed by the contributions from our community, made purely out of enthusiasm, like the extended VirtIO support from Piotr Tworek, or the tireless Tomasz Gajewski working on all kinds of Raspberry Pi support.
What are the topics you deem as most interesting to work on?
Well, obviously I want to finish the unification and renewal of the platform-driver and PCI landscape, deprecate all older drivers during this year, and move them to their corresponding SoC/board repositories. I would like to enable WiFi and NVMe on the MNT Reform 2, and to consolidate the current driver landscape to minimize performance overhead, e.g., with respect to timer-service usage when running a significant number of drivers concurrently.
Currently, I cannot use the MNT Reform 2 with Sculpt OS as a daily driver, because for various use cases I still need a Linux VM. Therefore, I would like to extend the ARM VMM to provide VirtIO GPU/framebuffer and input devices.
Do you already have tangible plans you can share with us?
Are there road blocks that stand in the way of your plans?
I would not call them road blocks, but while analyzing the requirements of a renewed platform-driver implementation, I was wondering whether we still need to support ancient kernels like Pistachio or the old Fiasco. Apart from these kernels' dependencies on quite old hardware, a lot of workloads cannot run on them. We have to ignore the nightly kernel faults etc., especially of Pistachio. Therefore, I would suggest retiring these kernels within 2022.
What is your vision of using Genode at the end of 2022?
Receiving a second-factor authentication code via SMS on the Genode/PinePhone while logging in to some web service using Sculpt on my MNT Reform 2 - that would be nice ;-).
Best regards Stefan
Hi Stefan,
Nevertheless, this line of work prevented me from achieving the unification of the platform-driver API across different architectures in time. Although it is almost accomplished, the original goal was to finish it in 21.05. Inwardly, I dreamed of having a working WiFi card in Sculpt on the MNT Reform 2 by this time, which is not the case.
there should be no reason for any regrets. Plans change, in your case for very good reasons. Your work on DDE-Linux evolved into a much more profound project than I imagined, and I'm super happy and proud about it. Also, I think that our vision of the role and design of the platform driver was too uncertain to make definite a-priori estimations. The design that emerged over the past year vastly exceeds the old - in many places ad-hoc - nature of the traditional platform driver.
I encourage you to keep up following the goal to make Genode on the MNT-Reform fully capable, including Wifi. This is of course out of selfishness. I love the MNT-Reform laptop. I'd love it even more with wireless connectivity on Sculpt :-)
I would not call them road blocks, but while analyzing the requirements of a renewed platform-driver implementation, I was wondering whether we still need to support ancient kernels like Pistachio or the old Fiasco. Apart from these kernels' dependencies on quite old hardware, a lot of workloads cannot run on them. We have to ignore the nightly kernel faults etc., especially of Pistachio. Therefore, I would suggest retiring these kernels within 2022.
This line of reasoning is certainly valid because those old kernels won't play any practical role for real-world Genode scenarios.
However, I don't share the sentiment to retire them because I still get a lot of value from the diversity of kernels, when compared to the costs of maintenance. Let me share what those values are:
- When working on the base-framework infrastructure, I'm forced to apply all changes in a way that is workable with all the different kernels. This (arguably artificial) problem challenges me to re-evaluate changes repeatedly and from different perspectives. The repetition has become some kind of Zen when working on the base framework. Sometimes, I stop applying a change mid-way for the better because applying one idea 8 times makes one critically evaluate the idea 8 times in a row. ;-)
In your case - looking at IRQ support on PIC vs. IOAPIC - the (useless at surface level) support for plain PIC-based systems is additional work. But if we manage to cover it in an elegant way, the result will be more general and sustainable. So I invite you to accept this challenge.
- The diversity of kernels fuzzes our system and uncovers issues - often related to generic code - that would remain unknown to us without the broad spectrum of hardware/kernel combinations regularly exercised. Yes, we see kernel assertions triggered by a few tests that won't ever be fixed. But on the other hand, when assessing a regression, the cross-kernel test results often allow me to cross-correlate the behavior, and form a mental model of what could be wrong. E.g., if I see a test failing on all kernels that use a mapping database, I can form a hypothesis.
- During the development of low-level features - the fork mechanism of the libc comes to mind - it is best to start with the simplest kernel possible. E.g., on OKL4 I can readily use the Genode::raw mechanism, and can easily track capabilities across PDs because they are just global numbers. Such developments would be far more complicated on a real capability-based kernel.
- There are still some areas where we haven't yet learned all the lessons the kernels can teach us. Specifically, there still exist unexplained performance anomalies between kernels, e.g., when looking at network throughput. The performance tends to be influenced by details of the kernel's scheduling. We should strive to refine Genode such that those anomalies eventually disappear, making the framework as deterministic as possible against the whims of the various kernel schedulers. Until that point is reached, it is good to keep the anomalies clearly visible and reproducible.
Other (sometimes surprising) effects occur because of different approaches when it comes to cache attributes, or the choice of time source (PIT as scheduling timer).
- I actually used the L4/Fiasco kernel debugger in 2021. Not a strong argument, but still worth noting.
- Being the one with the most time invested in refining the base framework over the past year, I found the maintenance costs of the legacy kernels pretty much negligible compared to the grand scope of changes. I'd be fine with kicking this can further down the road.
So my position to keep the kernels supported is not merely some weird nostalgic attachment that I feel for those relics - I see actual technical value in keeping them around.
Cheers Norman
A happy new year to all of you!
What's your reflection of Genode's past year?
Personally, I really enjoyed resuming the work with Genode and switching to Sculpt as a daily driver. After a couple years of deep sleep on my side, I started highly motivated into this endeavour. Converting to Sculpt with a new laptop was a little painful at times, yet also gave me the opportunity to dig into a variety of topics and familiarise myself with quite a bit of the current code base. Figuratively speaking, it was a fun experience surfing the learning curve most of the time. However, enabling all peripherals (e.g. modem, webcam, wifi) that I was used to on Sculpt took quite a bit longer than I had expected. There are still a few uncompleted low-priority jobs left.
My personal highlights have been that I got the opportunity to continue two lines of work that I was already focused on when I first got in touch with Genode. First, I revived the Zynq-7000 support in Genode and started enabling the use of its FPGA. Secondly, I started exploiting Genode's tracing capabilities to record and analyse component interactions and component state. While test driving the prototypical implementation for narrowing down some network performance anomalies, I already enjoyed the ease with which one can acquire trace data from a running Sculpt system.
What are the topics you deem as most interesting to work on?
Making good use of the Zynq's FPGA in Genode and continuing on the tracing capabilities are my top priorities (more details down below). I'd also love to see my mailserver running on Genode at some point in the future. A first step into this direction is to look into how container images could be hosted on Genode. I doubt that I'll be able to spend much time on this but would love to touch this topic if time permits.
Do you already have tangible plans you can share with us?
Regarding the FPGA topic, I am going to familiarise myself with the Xilinx tools in order to build custom bitstreams. In particular, I'd like to investigate solutions for guarding DMA via custom FPGA logic. The main idea is to emulate the register interface of DMA-capable devices in the FPGA and having a SystemMMU-like access control mechanism controlled by the platform driver. As a side effect of working with Zynq-based boards, I'll incrementally implement/port additional device drivers (SD card, pin, some standard IP cores, ...).
When it comes to tracing, I am going to write a trace recorder component with which one can extract different trace outputs from multiple Genode components at the same time. Major output formats are pcap files (capturing network traffic to be analysed with Wireshark), CTF traces (capturing component interactions and arbitrary checkpoints to be visualised and analysed with TraceCompass), and log output. I think this can realistically be finished by the end of May so that it can be put to good use for the rest of the year.
Another idea that I'd like to pursue is to leverage the tracing capabilities for systematically identifying performance bottlenecks. Having a tool for Genode similar to the coz profiler [1] could turn out to be very helpful in narrowing down where performance optimisation has the biggest impact.
[1] https://github.com/plasma-umass/coz
Are there road blocks that stand in the way of your plans?
What is your vision of using Genode at the end of 2022?
By the end of the year, I envision having sophisticated and easy-to-use tracing tools for Genode that we are able to routinely use for debugging and performance analysis.
On the FPGA topic, custom programmable logic will integrate nicely into Genode and there will be corresponding documentation for how to augment a resource-limited embedded Genode system with custom hardware accelerators.
Best, Johannes
Hello Johannes,
thank you for sharing your experience and plans. Both facets, the FPGA line of work and the tracing topic, are exciting. Especially the latter fits perfectly into the recurring theme of optimization and deep performance/latency analysis. It could play a vital role in getting the browser running at acceptable performance on the PinePhone, for example. I also have QoS challenges - in particular low-latency audio processing - in the back of my head for 2022. Holistic and precise event tracing would be a godsend.
Regarding the FPGA topic, I am going to familiarise myself with the Xilinx tools in order to build custom bitstreams. In particular, I'd like to investigate solutions for guarding DMA via custom FPGA logic. The main idea is to emulate the register interface of DMA-capable devices in the FPGA and having a SystemMMU-like access control mechanism controlled by the platform driver.
That's a cool idea, especially as IOMMUs don't seem to be commonplace in the ARM world yet.
By the end of the year, I envision having sophisticated and easy-to-use tracing tools for Genode that we are able to routinely use for debugging and performance analysis.
On the FPGA topic, custom programmable logic will integrate nicely into Genode and there will be corresponding documentation for how to augment a resource-limited embedded Genode system with custom hardware accelerators.
What a beautiful outlook! :-)
Cheers Norman
Hello Genodians, I will again offer my thoughts as a casual Genode user and tinkerer, although this year I didn't get to have as much fun with it as I wanted to, for various reasons.
What is your vision of using Genode at the end of 2022?
I hope that this posting spawns a fruitful discussion of potential topics for the next episode. Please be considerate to avoid dropping mere proposals or wish lists. It's best to present suggestions together with actionable steps that you are willing to take.
In mid-January, I am going to update the official road map.
Cheers, Norman
My vision for Genode is to be able to run it on my (non-standard, Apple) x86_64 hardware as a main system, virtualizing the existing Linux Mint partition. I think some progress has been made on this front - others have virtualized an existing partition, and my system can boot Sculpt from EFI with tweaks and use the framebuffer - but the main barrier for me is that virtualizing the USB wifi dongle I use for network access (ath9k USB) is a bit slow. So for a long time now I've been playing with porting drivers. I think I said in last year's summary that I was able to port the HelenOS driver for it, which ended up working partially, but the driver randomly stalls the hardware and I can't tell how deep the issue goes, since I can't easily simulate high network activity when running it natively under HelenOS. There is also a lot of feature limitation in the 802.11 implementation.
So, encouraged by one of this year's themes, I started trying to port the Linux ath9k driver. Unfortunately, I did not experience the same success as others. For example, the new lx_emul/lx_kit is conceptually very nice, and I was excited about it, but I'm running into snags because I don't think it's completely implemented yet for x86. It's also a little sad to me that there is no way to take what is, under Linux, a platform-independent USB driver and make a platform-independent Genode component, because the porting process seems to depend on the architecture. I guess the actionable step here is for me to finish porting the driver, but it would be great to know what the plans for Intel are, whether dde_linux is under development now, or whether I need to hack it myself to make it work under x86.
Best wishes to all for the new year, CP
Hello Colin,
thank you for sharing your experiences with us.
On Tue, Jan 04, 2022 at 07:12:36PM -0500, Colin Parker wrote:
So, encouraged by one of this year's themes, I started trying to port the Linux ath9k driver. Unfortunately, I did not experience the same success as others. For example, the new lx_emul/lx_kit is conceptually very nice, and I was excited about it, but I'm running into snags because I don't think it's completely implemented yet for x86. It's also a little sad to me that there is no way to take what is, under Linux, a platform-independent USB driver and make a platform-independent Genode component, because the porting process seems to depend on the architecture. I guess the actionable step here is for me to finish porting the driver, but it would be great to know what the plans for Intel are, whether dde_linux is under development now, or whether I need to hack it myself to make it work under x86.
You're right, the new dde_linux/lx_kit has so far been developed for ARM64 only. Nevertheless, beginning today we will start to update our x86 driver base - including the framebuffer, USB host, and WiFi drivers - to use the new approach, and concurrently update the Linux kernel sources to a recent version. We are confident we will have first working results by the end of March. Then, when an updated version of the Intel WiFi driver using the new dde_linux/lx_kit is running successfully, it might be a good time to step in and port the Atheros card.
Best regards Stefan
Genode users mailing list users@lists.genode.org https://lists.genode.org/listinfo/users
Thanks to the list for all the interesting insights about your work and plans!
I'll try to keep my contribution short.
2021
At the beginning of 2021, I introduced the Uplink session to Genode as a counterpart to the NIC session with the roles of server and client swapped. This allowed me to adapt all network drivers in the next step to no longer act as NIC servers but as Uplink clients of a network-multiplexer component. This way, network drivers can be restarted without rendering the network sessions of other components dead.
Throughout spring and summer, I enhanced the NIC router in several use-case-specific directions (DNS-config forwarding via DHCP, consideration of fragmented IP, ARP-less mode).
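[Editorial sketch] The driver-as-Uplink-client model and the DHCP-based DNS forwarding mentioned above come together in the NIC router's configuration. The following snippet is an illustrative sketch from memory of the component's documented format, not a verbatim configuration; labels and address ranges are assumptions:

```xml
<config>
	<!-- the network driver connects here as an Uplink client; the
	     router obtains its own IP configuration via DHCP -->
	<policy label_prefix="nic_drv" domain="uplink"/>
	<domain name="uplink">
		<nat domain="downlink" tcp-ports="1000" udp-ports="1000"/>
	</domain>

	<!-- local clients; DNS servers learned via DHCP on the uplink
	     are forwarded to this domain's DHCP clients -->
	<policy label_prefix="browser" domain="downlink"/>
	<domain name="downlink" interface="10.0.1.1/24">
		<dhcp-server ip_first="10.0.1.2" ip_last="10.0.1.200"
		             dns_config_from="uplink"/>
	</domain>
</config>
```

Because the driver is a client of the router rather than a server to the applications, restarting the driver merely re-establishes the Uplink session while the application-facing NIC sessions stay alive.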
Furthermore, I created the File Vault, a graphical user interface for the CBE (driver for encrypted and managed block devices) and integrated it into Sculpt. This work also made me consolidate and enhance several features of the CBE ecosystem itself.
Another project I quite enjoyed was the introduction of a Main class in the base-hw kernel, which let me clean up some aspects of base-hw that had been on my TODO list for a long time. This work was motivated by, but also delayed, the one project of mine that again came up a bit short in 2021: the Spunky kernel. However, Spunky made some steps forward. It now has its own drivers for the interrupt controller and timer as well as an Ada implementation of the "inter-processor tasks" of base-hw.
At the end of 2021, I started porting the Linux WireGuard implementation to Genode. This work was the most challenging for me this year, as I seldom do porting. The benefit, however, was that I already learned a lot about the new DDE Linux approach.
2022
I will start the year by continuing my efforts on the WireGuard port. In a next step, I'll work on support for hardware-based trust anchors, which can then be combined with both WireGuard and the CBE project.
Speaking of the CBE: The CBE and File Vault project could also use some further care. The CBE could become much smarter when it comes to resource management and error handling. And there are also plans for letting CBE images grow automatically according to their needs. The File Vault, on the other hand, should reflect all features of the CBE. E.g., snapshot handling is still missing.
Another thing I'm looking forward to is a thorough review and possibly a re-implementation of the VFS socket FS. And, last but not least, I will proceed with the completion of the Spunky kernel.
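[Editorial sketch] For readers unfamiliar with the socket FS mentioned above: Genode's libc can route BSD socket calls to a socket file system mounted in a component's VFS, so the TCP/IP stack becomes a VFS plugin rather than part of the libc. A typical component configuration looks roughly like the following; the plugin name and paths are illustrative and should be checked against the current Genode documentation:

```xml
<config>
	<vfs>
		<dir name="dev"> <log/> </dir>
		<!-- socket file system backed by an in-VFS TCP/IP stack -->
		<dir name="socket"> <lwip dhcp="yes"/> </dir>
	</vfs>
	<libc stdout="/dev/log" stderr="/dev/log" socket="/socket"/>
</config>
```

This indirection is what makes a review of the socket-FS implementation attractive: the libc-facing contract stays stable while the stack behind the /socket directory can be swapped or reworked.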
Cheers, Martin
Hi all,
On 12/23/21 19:05, Norman Feske wrote:
Dear Genode community,
it is the time of the year again to reflect and make plans for the foreseeable future. Hereby, I'd like to kick off our traditional brainstorming about Genode's road map for the year ahead of us.
For me the most important topic of this year is to incorporate our experiences from last year's Intel/Etnaviv GPU projects into Genode. We have learned many lessons and by now know what is needed and what is not, and where there is room for optimization. For example, in a prototype I got Doom3 from 4 FPS to ~40 FPS, close to native performance. And of course, I want to see the same performance on vanilla Sculpt as soon as possible. Vulkan also comes to mind, because it is already in Mesa and should not be hard to enable. So all in all, OpenGL/Vulkan applications should just work on Genode!
That would be great,
Sebastian
On 12/23/21 13:05, Norman Feske wrote:
Dear Genode community,
it is the time of the year again to reflect and make plans for the foreseeable future. Hereby, I'd like to kick off our traditional brainstorming about Genode's road map for the year ahead of us.
Congratulations to the Genode team and community for another year of impressive (and sometimes surprising) progress on so many fronts at the same time!
This thread has also been fascinating and enlightening. I don't have anything worthwhile to add to the Roadmap discussion, but I am very excited about the plans, especially related to the mobile UI.
My personal goals for the year are:
1. Get Sculpt running as a daily driver (preferably on Spunky ;^) ). The VBox 6 update should make this transition easier for me. I will be asking for advice on this topic as I go along.
2. Play with the Sculpt mobile UI as it progresses. On this topic, if there is any need for it, I will be happy to serve as an experimental/alpha tester for the mobile UI, first on the convertible laptop, and later I will probably get a PinePhone.
I also have some development and Genodians ideas, but I will keep those to myself until I have something to show for it. :^)
Happy Sculpting!
John J. Karcher devuser@alternateapproach.com
Hi John,
On 13.01.22 05:18, John J. Karcher wrote:
On 12/23/21 13:05, Norman Feske wrote:
- Get Sculpt running as a daily driver (preferably on Spunky ;^) ). The VBox 6 update should make this transition easier for me. I will be asking for advice on this topic as I go along.
It's cool to read that you're interested in using Spunky! But just as a side note: Spunky is still limited to the feature set of base-hw, and that means no hardware virtualization on x86 so far. This is especially important when it comes to your plan of using VBox 6.
Cheers, Martin
On 1/26/22 05:00, Martin Stein wrote:
Hi John,
On 13.01.22 05:18, John J. Karcher wrote:
On 12/23/21 13:05, Norman Feske wrote:
- Get Sculpt running as a daily driver (preferably on Spunky ;^) ). The VBox 6 update should make this transition easier for me. I will be asking for advice on this topic as I go along.
It's cool to read that you're interested in using Spunky! But just as a side note: Spunky is still limited to the feature set of base-hw, and that means no hardware virtualization on x86 so far. This is especially important when it comes to your plan of using VBox 6.
Thanks for the tip - I either didn't know that, or knew it and forgot.
Just curious, are there any plans to add virtualization support to base-hw (and Spunky) in the foreseeable future?
In either case, that means I now have two separate goals: using Sculpt as a daily driver, and running Sculpt/Spunky . . . just because I like it. ;^)
Thanks!
John J. Karcher devuser@alternateapproach.com
Hi John
There is a clear motivation to add virtualization support to base-hw because there are, AFAIK, several people at Genode Labs who would like to use Sculpt on base-hw as their daily desktop system.
However, so far, I think none of us has started implementing it because other projects were more pressing. Maybe this will change in 2022 - at least it was mentioned during the roadmap discussions - but it's not set in stone.
Cheers, Martin
Hello,
thanks to everyone who contributed to the road-map discussion! It was a delight to see your individual reflections and plans.
Today, I have finalized Genode's official road map for 2022.
https://genode.org/news/road-map-for-2022
The updated road map can be found at:
https://genode.org/about/road-map
Cheers Norman