exploring Mesa in Genode

Nobody III hungryninja101 at ...9...
Thu Sep 3 05:43:58 CEST 2015


I can't give a definite answer on the Thinkpad compatibility, but it sounds
like the i915 driver should be compatible with the Intel 945GM, and the CPU
shouldn't be the problem. I might be wrong, but I don't think your problem
lies there.
As for the console output, there are instructions in the 13.05 release
notes:
http://genode.org/documentation/release-notes/13.05#Output_and_reset_with_Intel_s_AMT
Also, if you are able to get a serial port connection, be sure to use a
null modem (crossover) cable. Other serial cables will not work.

On Thu, Sep 3, 2015 at 3:31 AM, Jamey Sharp <jamey at ...343...> wrote:

> Thanks to your pointers, Norman, I've gotten as far as building Genode
> 14.11 for nova_x86_32 with the eglgears run script, with i915 added to
> SPECS.
>
> I've booted the resulting .iso on a couple of Thinkpads from various
> eras, as well as in qemu of course. I don't actually get any gears
> rendering on any of them.
>
> I don't expect it to work right under qemu since I don't think Intel
> integrated graphics is emulated there, but it's the only way I know to
> get debugging output so far. Under qemu, I see the following output:
>
> [init -> launchpad -> init -> eglgears] native_probe*
> native_create_probe(EGLNativeDisplayType): not yet implemented dpy=0
> [init -> launchpad -> init -> eglgears] native_probe_result
> native_get_probe_result(native_probe*): not yet implemented
> [init -> launchpad -> init -> eglgears] falling back to softpipe driver
> [init -> launchpad -> init -> eglgears] returned from init display->screen
> [init -> launchpad -> init -> eglgears] no plugin found for fcntl(2)
> [init -> launchpad -> init -> eglgears] no plugin found for write(2)
> [init -> launchpad -> init -> eglgears] called, return 1 connector
> no RM attachment (READ pf_addr=9 pf_ip=481c4 from a2bfefc6 eglgears)
> virtual void
> Genode::Signal_session_component::submit(Genode::Signal_context_capability,
> unsigned int): invalid signal-context capability
> static void Genode::Pager_object::_page_fault_handler(): unhandled
> page fault, 'pager:eglgears' address=0x9 ip=0x481c4
>
> So under qemu I guess eglgears crashes by dereferencing a bogus
> pointer. How can I get this console output on real hardware, to see if
> it's crashing the same way?
>
> Also, Norman, do you remember exactly which hardware you tested this
> code on in 2010? I grabbed an old Thinkpad to try to match your setup
> more closely, so my test box has Intel 945GM graphics (PCI ID
> 8086:27a2) and a 32-bit Core Duo CPU. I may have gone a little too far
> back, though, as it's a 2006 model.
>
> Jamey
>
> On Mon, Aug 10, 2015 at 6:44 AM, Norman Feske
> <norman.feske at ...1...> wrote:
> > Hello Jamey,
> >
> > welcome to the list! Great that you are interested in picking up the
> > GPU-related line of work.
> >
> > I'd like to chime in because I conceived the original i915 GPU work 5
> > years ago.
> >
> > On 10.08.2015 00:15, Jamey Sharp wrote:
> >> I was referring to the Genode "challenges" list, which mentions that
> >> "Genode 10.08 introduced Gallium3D including the GPU driver for Intel
> >> GMA CPUs." (I'm guessing this has bit-rotted somewhat since then? I
> >> haven't found where that code might live yet.)
> >
> > The state of my original port is roughly explained in the release notes
> > of Genode 10.08:
> >
> >
> > http://genode.org/documentation/release-notes/10.08#Gallium3D_and_Intel%27s_Graphics_Execution_Manager
> >
> > We maintained this state until spring this year, when we decided to
> > abandon it until somebody becomes interested again. Now, just shortly
> > after, you are showing up. ;-)
> >
> > The code is still there but not regularly tested or maintained. The
> > important pieces are:
> >
> > * The port of the i915 GPU driver / the GEM subsystem of the Linux
> >   kernel. I ported the code via our DDE approach. But unlike all
> >   recent DDE-Linux-based drivers, the code resides in a separate
> >   repository:
> >
> >   https://github.com/genodelabs/linux_drivers/tree/master/src/drivers/gpu
> >
> >   We planned to add the revived version of this code to our new
> >   'repos/dde_linux' repository within the Genode tree but haven't
> >   done so yet.
> >
> > * The port of libdrm:
> >
> >   https://github.com/genodelabs/genode/blob/master/repos/libports/ports/libdrm.port
> >   https://github.com/genodelabs/genode/blob/master/repos/libports/lib/mk/libdrm.mk
> >   https://github.com/genodelabs/genode/tree/master/repos/libports/src/lib/libdrm
> >
> >   As you can see in ioctl.cc, the code implements ioctl by simply
> >   calling the corresponding function of the GPU driver. Normally,
> >   we'd need to redirect those calls via RPC. But in my setup, I
> >   just co-located the GPU driver + libdrm + gallium3d + application
> >   within a single component (see the sketch after this list).
> >
> > * Mesa / Gallium3d, which is part of 'repos/libports/'.
> >
> > * A custom EGL driver to interface Mesa with Genode:
> >
> >   https://github.com/genodelabs/genode/tree/master/repos/libports/src/lib/egl
> >
> > * An example application and a corresponding run script:
> >
> >   https://github.com/genodelabs/genode/tree/master/repos/libports/src/app/eglgears
> >   https://github.com/genodelabs/genode/blob/master/repos/libports/run/eglgears.run
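> >
> > Regarding the ioctl remark above: a minimal sketch of such a co-located
> > shim could look as follows. The gem_* driver entry points are made-up
> > names for illustration; only the direct-dispatch structure matters:
> >
> >   #include <drm.h>        /* generic DRM definitions (libdrm) */
> >   #include <i915_drm.h>   /* i915 request codes and argument structs */
> >
> >   /* functions of the co-located, DDE-ported driver (hypothetical) */
> >   int gem_create(drm_i915_gem_create *args);
> >   int gem_mmap  (drm_i915_gem_mmap   *args);
> >
> >   int drm_ioctl(int /* fd */, unsigned long request, void *arg)
> >   {
> >           /* no kernel entry, no RPC, just plain function calls */
> >           switch (request) {
> >           case DRM_IOCTL_I915_GEM_CREATE:
> >                   return gem_create((drm_i915_gem_create *)arg);
> >           case DRM_IOCTL_I915_GEM_MMAP:
> >                   return gem_mmap((drm_i915_gem_mmap *)arg);
> >           default:
> >                   return -1; /* not handled in this sketch */
> >           }
> >   }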
> >
> >> It goes on to say that "the current approach executes the GPU driver
> >> alongside the complete Gallium3D software stack and the application code
> >> in one address space," which of course is undesirable for security, but
> >> also because it limits users to a single 3D client at a time.
> >>
> >> I think what I want to do is:
> >>
> >> - define an analogue of the Linux DRM API using Genode IPC (a rough
> >>   interface sketch follows after this list),
> >> - port the Linux kernel generic DRM layer and the driver for Intel
> >>   integrated graphics to this IPC interface (as part of dde_linux I
> >>   guess?),
> >> - and port libdrm to the IPC interface.
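> >>
> >> As a rough sketch (with hypothetical names, not an existing Genode
> >> API), such a DRM-analogue session interface might look like this:
> >>
> >>   #include <session/session.h>
> >>   #include <base/rpc.h>
> >>   #include <base/stdint.h>
> >>   #include <dataspace/capability.h>
> >>
> >>   namespace Gpu { struct Session; }
> >>
> >>   struct Gpu::Session : Genode::Session
> >>   {
> >>           static const char *service_name() { return "Gpu"; }
> >>
> >>           /* allocate a buffer object, handed out as a dataspace */
> >>           virtual Genode::Dataspace_capability
> >>           alloc_buffer(Genode::size_t size) = 0;
> >>
> >>           /* submit a command buffer referencing buffer objects */
> >>           virtual void exec_buffer(unsigned handle) = 0;
> >>
> >>           GENODE_RPC(Rpc_alloc_buffer, Genode::Dataspace_capability,
> >>                      alloc_buffer, Genode::size_t);
> >>           GENODE_RPC(Rpc_exec_buffer, void, exec_buffer, unsigned);
> >>           GENODE_RPC_INTERFACE(Rpc_alloc_buffer, Rpc_exec_buffer);
> >>   };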
> >
> > I hope that the pointers above will serve you well as a suitable
> > starting point.
> >
> >> I'm hoping that the libdrm abstraction layer is comprehensive enough
> >> that Mesa would not need much, if any, patching.
> >
> > That is consistent with my experience. As far as I remember, I have not
> > modified Mesa at all.
> >
> >> As you pointed out, I'd really like to wind up with a Wayland interface
> >> replacing Genode's Nitpicker. (Which is another wishlist item on the
> >> "challenges" page, I noticed.)
> >
> > I do not think that the replacement of Nitpicker by something else is
> > strictly necessary, as Nitpicker and Wayland share the same
> > architectural principles.
> >
> >>> * There should probably be an intermediate resource management server
> >>> between the kernel/libdrm container and the app.
> >>
> >> Agreed! In a complete implementation, something should keep track of how
> >> much video memory is available and share it fairly between clients.
> >> Bonus points if it also can provide a generic implementation of command
> >> scheduling, to keep any one client from starving other clients' access
> >> to the GPU.
> >
> > I would refer to this component simply as "GPU driver". It would contain
> > both the actual driver code that talks to the GPU and the code for
> > multiplexing the GPU. I think that, given the re-use of the Linux kernel
> > code, it would be quite difficult to separate those two concerns into
> > two distinct components.
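> >
> > To make this component structure concrete, here is a hedged server-side
> > sketch building on the hypothetical Gpu::Session above; the driver_*
> > functions stand in for the ported Linux driver code that would live in
> > the same component:
> >
> >   #include <base/rpc_server.h>
> >
> >   /* entry points of the co-located, ported driver (hypothetical) */
> >   Genode::Dataspace_capability driver_alloc_buffer(Genode::size_t size);
> >   void                         driver_exec_buffer(unsigned handle);
> >
> >   struct Session_component : Genode::Rpc_object<Gpu::Session>
> >   {
> >           /* per-client accounting of video memory would live here */
> >
> >           Genode::Dataspace_capability alloc_buffer(Genode::size_t size)
> >           {
> >                   return driver_alloc_buffer(size);
> >           }
> >
> >           void exec_buffer(unsigned handle)
> >           {
> >                   /* a command scheduler could arbitrate here before
> >                      the buffer reaches the GPU, addressing the
> >                      starvation concern raised above */
> >                   driver_exec_buffer(handle);
> >           }
> >   };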
> >
> >> That said, I'm hoping to get a single-client demo working without any
> >> resource management first. :-)
> >>
> >>> * You should think of whether you want to allow multiple clients to
> >>> access the same buffer simultaneously or make the access exclusive.
> >>
> >> I think, to support the Wayland model, multiple clients need to be
> >> allowed to access the same buffer. But they shouldn't usually be trying
> >> to map the raw buffer contents into their local address space, right?
> >> That is a recipe for a performance disaster, especially on graphics
> >> cards with dedicated VRAM.
> >
> > Buffer objects are mapped directly into the application's address
> > spaces. This is also the case on Linux where a custom page-fault handler
> > manages the part of the address space where the /dev/drm device node is
> > mapped via mmap. The code (and the overloading of the mmap arguments
> > with different semantics by the i915 driver) is quite frightening. But
> > in principle, the construct could work very similarly on Genode, where
> > we have a proper interface for managing (parts of) virtual address
> > spaces from a remote component. On Genode, each buffer object would be
> > represented as a dataspace. But let us keep this topic for later.
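> >
> > Still, as a small illustration (reusing the hypothetical Gpu session
> > sketched earlier), a client on a 14.11-era API would map such a
> > buffer-object dataspace roughly like this:
> >
> >   #include <base/env.h>
> >   #include <dataspace/capability.h>
> >
> >   /* obtain a buffer object from the (hypothetical) GPU session */
> >   Genode::Dataspace_capability bo = gpu.alloc_buffer(4096);
> >
> >   /* attach it to the local address space, akin to mmap on /dev/drm */
> >   void *ptr = Genode::env()->rm_session()->attach(bo);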
> >
> > Have fun with your exploration! For giving the existing code a try, I
> > would recommend testing a slightly older Genode version (like 14.11)
> > where the i915 GPU driver was still known to work.
> >
> > Cheers
> > Norman
> >
> > --
> > Dr.-Ing. Norman Feske
> > Genode Labs
> >
> > http://www.genode-labs.com · http://genode.org
> >
> > Genode Labs GmbH · Amtsgericht Dresden · HRB 28424 · Sitz Dresden
> > Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth