exploring Mesa in Genode

Jamey Sharp jamey at ...343...
Mon Aug 10 00:15:56 CEST 2015


On Sun, Aug 02, 2015 at 07:33:07PM +0300, Alexander Tarasikov wrote:
> Hi Jamey!
> 
> Welcome to Genode. I'm also interested in 3D and GPU architecture, though
> not doing much Genode hacking recently.

Thank you for the warm welcome! I'd have replied sooner if this week
hadn't gotten so busy.

> I suggest that you use the NOVA or Fiasco.OC kernels because they're
> the primary platforms.

That was one thing I was wondering about, thanks!

> Could you elaborate on what you mean by a multi-process 3D?

I was referring to the Genode "challenges" list, which mentions that
"Genode 10.08 introduced Gallium3D including the GPU driver for Intel
GMA CPUs." (I'm guessing this has bit-rotted somewhat since then? I
haven't found where that code might live yet.)

It goes on to say that "the current approach executes the GPU driver
alongside the complete Gallium3D software stack and the application code
in one address space," which is of course undesirable for security, and
also limits users to a single 3D client at a time.

I think what I want to do is:

- define an analogue of the Linux DRM API using Genode IPC (roughly
  sketched below),
- port the Linux kernel generic DRM layer and the driver for Intel
  integrated graphics to this IPC interface (as part of dde_linux I
  guess?),
- and port libdrm to the IPC interface.
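
By "roughly sketched below" I mean something like the following plain
C++ abstract class. This is not real Genode RPC syntax, and all of the
names and the choice of operations are just guesses on my part, loosely
mirroring the GEM ioctls:

    /* gpu_session.h -- hypothetical sketch, not existing Genode code */

    #include <cstddef>
    #include <cstdint>

    namespace Gpu {

        typedef std::uint32_t Buffer_handle;   /* analogous to a GEM handle */

        struct Session
        {
            /* allocate a buffer object of 'size' bytes and return a handle */
            virtual Buffer_handle alloc_buffer(std::size_t size) = 0;

            /* release a previously allocated buffer */
            virtual void free_buffer(Buffer_handle handle) = 0;

            /* make the buffer contents visible to the client (in Genode
               this would presumably hand out a dataspace capability) */
            virtual void *map_buffer(Buffer_handle handle) = 0;

            /* submit a batch buffer that references other buffers,
               roughly what DRM_IOCTL_I915_GEM_EXECBUFFER2 does */
            virtual void exec_buffer(Buffer_handle batch,
                                     Buffer_handle const *referenced,
                                     unsigned num_referenced) = 0;

            virtual ~Session() { }
        };
    }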

I'm hoping that the libdrm abstraction layer is comprehensive enough
that Mesa would not need much, if any, patching.
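
The way I picture the libdrm port is an ioctl-level shim: libdrm keeps
issuing its usual DRM ioctls, and a small translation layer underneath
turns them into calls on the IPC interface. Something like this (again
just a sketch; the Gpu::Session names above are invented, only one
ioctl is handled, and whether interception happens at ioctl() or by
replacing libdrm's drmIoctl() internally is an open question):

    /* hypothetical ioctl() shim underneath the libdrm port */

    #include <cstdarg>
    #include <cerrno>
    #include <i915_drm.h>     /* drm_i915_gem_create, DRM_IOCTL_I915_GEM_CREATE */

    #include "gpu_session.h"  /* the Gpu::Session sketch from above */

    Gpu::Session *gpu_session();  /* obtained from the GPU server somehow */

    extern "C" int ioctl(int, unsigned long request, ...)
    {
        va_list args;
        va_start(args, request);
        void *argp = va_arg(args, void *);
        va_end(args);

        switch (request) {

        case DRM_IOCTL_I915_GEM_CREATE: {
            /* translate GEM object creation into an IPC call */
            drm_i915_gem_create *create =
                static_cast<drm_i915_gem_create *>(argp);
            create->handle = gpu_session()->alloc_buffer(create->size);
            return 0;
        }

        /* ... GEM_CLOSE, GEM_MMAP, EXECBUFFER2, etc. would follow ... */

        default:
            errno = ENOTTY;   /* not translated yet */
            return -1;
        }
    }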

For testing, I imagine primarily using some EGL/libgbm/modesetting
render-only demo, because I don't want to have to think about input APIs
at the same time.
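
To be concrete about what such a demo could look like (skipping even the
modesetting part at first): open the render device, bring up EGL with no
window system at all, clear a framebuffer, and read a pixel back to
verify that buffer allocation and command submission work. An untested
sketch, assuming Mesa's surfaceless-context path works once everything
underneath is in place, and with the device path being whatever the
libdrm port ends up exposing:

    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    #include <gbm.h>
    #include <EGL/egl.h>
    #include <GLES2/gl2.h>

    int main()
    {
        /* open the device node the (hypothetical) libdrm port exposes */
        int fd = open("/dev/dri/card0", O_RDWR);
        struct gbm_device *gbm = gbm_create_device(fd);

        EGLDisplay dpy = eglGetDisplay((EGLNativeDisplayType)gbm);
        eglInitialize(dpy, NULL, NULL);
        eglBindAPI(EGL_OPENGL_ES_API);

        static const EGLint cfg_attribs[] = {
            EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, EGL_NONE };
        EGLConfig cfg;
        EGLint n;
        eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &n);

        static const EGLint ctx_attribs[] = {
            EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
        EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctx_attribs);

        /* no window surface at all -- EGL_KHR_surfaceless_context */
        eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, ctx);

        /* render into an FBO, since there is no default framebuffer */
        GLuint rb, fbo;
        glGenRenderbuffers(1, &rb);
        glBindRenderbuffer(GL_RENDERBUFFER, rb);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA4, 256, 256);
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  GL_RENDERBUFFER, rb);

        glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        glFinish();

        unsigned char px[4] = { 0 };
        glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, px);
        printf("read back: %d %d %d %d\n", px[0], px[1], px[2], px[3]);

        eglTerminate(dpy);
        gbm_device_destroy(gbm);
        close(fd);
        return 0;
    }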

As you pointed out, I'd really like to wind up with a Wayland interface
replacing Genode's Nitpicker (which, I noticed, is another wishlist item
on the "challenges" page).

> * There should probably be an intermediate resource management server
> between the kernel/libdrm container and the app.

Agreed! In a complete implementation, something should keep track of how
much video memory is available and share it fairly between clients.
Bonus points if it can also provide a generic implementation of command
scheduling, to keep any one client from starving other clients' access
to the GPU.
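
To illustrate the kind of bookkeeping I have in mind for that server,
nothing fancier than per-client accounting, at least initially (all
names invented):

    /* per-client VRAM accounting inside a hypothetical GPU resource server */

    #include <cstddef>
    #include <map>

    class Vram_accountant
    {
        private:

            std::size_t const _total;                /* VRAM managed by the server */
            std::size_t       _used = 0;             /* sum of all granted buffers */
            std::map<int, std::size_t> _per_client;  /* client id -> bytes in use  */

        public:

            explicit Vram_accountant(std::size_t total) : _total(total) { }

            /* called before forwarding an allocation to the driver;
               a real policy would also enforce a per-client share */
            bool try_alloc(int client, std::size_t size)
            {
                if (_used + size > _total)
                    return false;          /* deny, or evict something first */

                _used               += size;
                _per_client[client] += size;
                return true;
            }

            void release(int client, std::size_t size)
            {
                _used               -= size;
                _per_client[client] -= size;
            }
    };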

That said, I'm hoping to get a single-client demo working without any
resource management first. :-)

> * You should think of whether you want to allow multiple clients to
> access the same buffer simultaneously or make the access exclusive.

I think, to support the Wayland model, multiple clients need to be
allowed to access the same buffer. But they shouldn't usually be trying
to map the raw buffer contents into their local address space, right?
That is a recipe for a performance disaster, especially on graphics
cards with dedicated VRAM.
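
What I'd hope the sharing looks like in practice is the dma-buf/EGLImage
idiom from Linux: the client hands the compositor some kind of handle (a
capability, in Genode terms), and the compositor turns it into a GPU
texture without the CPU ever touching the pixels. Roughly like this,
borrowing the EGL_EXT_image_dma_buf_import names; whatever Genode grows
would presumably pass a capability where Linux passes a file descriptor:

    /* sketch: compositor-side import of a shared buffer, never mapping it */

    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    /* 'fd' stands in for whatever handle the GPU server hands out */
    GLuint import_shared_buffer(EGLDisplay dpy, int fd,
                                int width, int height, int stride)
    {
        PFNEGLCREATEIMAGEKHRPROC create_image =
            (PFNEGLCREATEIMAGEKHRPROC) eglGetProcAddress("eglCreateImageKHR");
        PFNGLEGLIMAGETARGETTEXTURE2DOESPROC image_target_texture =
            (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
                eglGetProcAddress("glEGLImageTargetTexture2DOES");

        const EGLint attribs[] = {
            EGL_WIDTH,                     width,
            EGL_HEIGHT,                    height,
            EGL_LINUX_DRM_FOURCC_EXT,      0x34325258 /* DRM_FORMAT_XRGB8888 */,
            EGL_DMA_BUF_PLANE0_FD_EXT,     fd,
            EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
            EGL_DMA_BUF_PLANE0_PITCH_EXT,  stride,
            EGL_NONE
        };

        /* the buffer never enters this address space; the GPU samples
           from it directly */
        EGLImageKHR image = create_image(dpy, EGL_NO_CONTEXT,
                                         EGL_LINUX_DMA_BUF_EXT, NULL, attribs);

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        image_target_texture(GL_TEXTURE_2D, (GLeglImageOES)image);
        return tex;
    }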

Jamey



