Hi Jamey!
Welcome to Genode. I'm also interested in 3D and GPU architecture, though I haven't been doing much Genode hacking recently. I suggest that you use the NOVA or Fiasco.OC kernels because they're the primary platforms.
Could you elaborate on what you mean by a multi-process 3D infrastructure?
From the architectural point of view, the Linux GPU stack is itself very modular and designed similarly to a microkernel OS: the clients share nothing. Each client allocates buffers for GPU data and command buffers via libdrm and submits them to the kernel-side driver, which performs additional verification, sets up the GPU's MMU, and takes the other steps needed to prepare the GPU for executing the commands.
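To make that concrete, here is a minimal sketch of the client-side allocation step, using GBM (one of the buffer-allocation paths built on top of libdrm); the render-node path and the buffer dimensions are just placeholder assumptions:

    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <gbm.h>

    int main()
    {
        /* open the unprivileged render node of the GPU */
        int fd = open("/dev/dri/renderD128", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        /* GBM wraps the driver-specific buffer-allocation ioctls */
        struct gbm_device *gbm = gbm_create_device(fd);
        struct gbm_bo *bo = gbm_bo_create(gbm, 1024, 768,
                                          GBM_FORMAT_XRGB8888,
                                          GBM_BO_USE_RENDERING);

        /* the kernel only ever hands out an opaque per-process handle */
        printf("GEM handle: %u\n", gbm_bo_get_handle(bo).u32);

        gbm_bo_destroy(bo);
        gbm_device_destroy(gbm);
        close(fd);
        return 0;
    }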
There are some use-cases which require transferring a GPU memory buffer from one client to another. The most notable examples involve the hardware video encoders and decoders. If you are interested in that, take a look at libva (VA-API) and weston's vaapi-recorder; there's an example of how to use the Intel GPU to H.264-encode what was rendered by OpenGL. (read the sources: https://github.com/hardening/weston/blob/master/src/vaapi-recorder.c#L1024
and also my blog post: http://allsoftwaresucks.blogspot.ru/2014/10/abusing-mesa-by-hooking-elfs-and...)
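For orientation, here is a hedged sketch (not taken from weston's sources) of how such an encoder gets at the GPU: it opens a DRM render node and initializes VA-API on top of it. The device path is again an assumption:

    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <va/va.h>
    #include <va/va_drm.h>

    int main()
    {
        int fd = open("/dev/dri/renderD128", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        /* VA-API sits directly on top of the DRM render node */
        VADisplay dpy = vaGetDisplayDRM(fd);

        int major = 0, minor = 0;
        if (vaInitialize(dpy, &major, &minor) != VA_STATUS_SUCCESS) {
            fprintf(stderr, "vaInitialize failed\n");
            return 1;
        }

        /* check whether the driver knows the H.264 profiles at all
           (whether they are usable for encode or decode is a separate
           question of the exposed entrypoints) */
        int num = vaMaxNumProfiles(dpy);
        VAProfile *profiles = new VAProfile[num];
        vaQueryConfigProfiles(dpy, profiles, &num);
        for (int i = 0; i < num; i++)
            if (profiles[i] == VAProfileH264Main ||
                profiles[i] == VAProfileH264High)
                printf("H.264 profile available\n");

        delete[] profiles;
        vaTerminate(dpy);
        close(fd);
        return 0;
    }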
So I think an interesting use-case would be replicating or porting weston - creating a GPU-backed compositing window manager.
In the Linux world, memory sharing in the DRM subsystem crosses process boundaries in two ways. The legacy way is the ioctl called "flink", which turns a per-process GEM buffer handle into a global integer name that any other process with access to the device node can open. The modern way is PRIME: a buffer allocated via libdrm or libGBM can be exported as a dma-buf file descriptor, and since it's an fd, you can then pass it to another process via a Unix domain socket.
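A rough sketch of the PRIME export side, assuming 'sock' is an already-connected Unix domain socket and 'handle' is a GEM handle such as the one returned by gbm_bo_get_handle() above:

    #include <string.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <xf86drm.h>

    /* export a GEM handle as a dma-buf fd and pass it over a Unix socket */
    int share_buffer(int drm_fd, uint32_t handle, int sock)
    {
        int prime_fd = -1;
        if (drmPrimeHandleToFD(drm_fd, handle, DRM_CLOEXEC, &prime_fd) != 0)
            return -1;

        /* SCM_RIGHTS is the standard way to move an fd between processes */
        char dummy = 'x';
        struct iovec iov; iov.iov_base = &dummy; iov.iov_len = 1;

        char cmsg_buf[CMSG_SPACE(sizeof(int))];
        struct msghdr msg; memset(&msg, 0, sizeof(msg));
        msg.msg_iov = &iov; msg.msg_iovlen = 1;
        msg.msg_control = cmsg_buf; msg.msg_controllen = sizeof(cmsg_buf);

        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type  = SCM_RIGHTS;
        cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &prime_fd, sizeof(int));

        /* the receiver calls recvmsg() and then drmPrimeFDToHandle() to get
           its own handle to the very same GPU buffer */
        return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
    }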
If you are going to design a "secure" system for sharing GPU resources on top of Genode, I suggest considering the following things:

* There should probably be an intermediate resource-management server between the kernel/libdrm container and the app.
* You should think about whether you want to allow multiple clients to access the same buffer simultaneously or make the access exclusive.
* In the latter case you need to figure out how to guarantee exclusivity. Since buffers are basically chunks of memory, you will probably have to write a custom pager (memory manager) that handles the page fault when a client is prohibited from accessing memory and returns the error to the client somehow.
* An interesting problem is to prove exclusive access to resources when they are not mapped into the client's address space but are already uploaded to the GPU and therefore controlled by some handle (basically an unsigned integer indexing some array in GPU memory). See the rough interface sketch below.
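To illustrate the last two points, here is a purely hypothetical sketch of what the client-facing interface of such an intermediate resource-management server could look like. None of these names exist in Genode; they only show the idea of clients referring to GPU buffers through opaque ids that the server can revoke:

    /* hypothetical design sketch only -- these names do not exist in Genode */
    #include <cstdint>
    #include <cstddef>

    typedef std::uint32_t Gpu_buffer_id;   /* opaque handle, not a pointer */

    struct Gpu_resource_session
    {
        /* allocate a buffer of 'size' bytes of GPU-accessible memory */
        virtual Gpu_buffer_id alloc(std::size_t size) = 0;

        /* map the buffer into the calling client's address space; the
           server (via a custom pager) can refuse or revoke the mapping
           once exclusive access has been handed to someone else */
        virtual void *map(Gpu_buffer_id id) = 0;
        virtual void  unmap(Gpu_buffer_id id) = 0;

        /* pass exclusive access to another client; afterwards the buffer
           is reachable only through its id, never through raw memory */
        virtual void transfer(Gpu_buffer_id id, unsigned target_client) = 0;

        /* submit a command buffer that references other buffers by id */
        virtual void submit(Gpu_buffer_id command_buffer) = 0;

        virtual ~Gpu_resource_session() { }
    };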
On Fri, Jul 31, 2015 at 2:03 AM, Jamey Sharp <jamey@...343...> wrote:
Good day all! By way of introduction: I learned about Genode because of the recent release's experimental support for seL4. Then I noticed that one of the open challenges is to implement a better architecture for direct-rendered 3D. I've been hacking on X for years and wanting to play with 3D on a microkernel, so now I'm trying to figure out what it'd take to tackle the Mesa rework challenge.
As a first step, I had trouble following the "getting started" directions. I've filed/commented on GitHub issues for code-related things. In the website documentation, you might mention that you're relying on SDL version 1.2, not the current version 2.0 (in Debian, libsdl1.2-dev). And to get the isohybrid tool I needed the Debian package named syslinux-utils.
http://genode.org/documentation/developer-resources/getting_started
Now that I can run the linux_x86 and okl4_x86 demos, what steps would you recommend for trying to prototype a multi-process 3D infrastructure?
Jamey